Lee, A.S. “A Scientific Methodology for MIS Case Studies,” MIS Quarterly (13:1), March 1989, pp. 33-50.

In message 207 on Saturday, October 27, 2007 5:17pm, Charles Myers (cemyers) writes:
>Lee presents an argument against the use of a single case study for a research topic.

Actually, Lee is defending single-case-study research.

Key rhetorical strategy: Lee adopts the natural science model as the ideal for social science research, in part because critics of single-case studies often ground their criticism in natural science methodology; Lee thus intends to answer the critics in their own language and show that single-case studies can satisfy natural science criteria [34].

The four problems of single-case studies:

>–Making controlled observations
>–Making controlled deductions
>–Allowing for replicability
>–Allowing for generalizability

To defend single-case studies against the charge that they can’t satisfy the above four criteria, Lee appeals to four standards laid out by Popper, who said you have a good theory when it is…

1. falsifiable
2. logically consistent
3. competitively predictive (it predicts stuff at least as well as competing theories)
4. durable (scientists test it, and it survives!)

Markus tests her three theories, falsifies two of them, and finds the third (the interaction theory) satisfies Popper’s conditions. Someone else could still construct a test to falsify it; it’s consistent with what Markus observed; it makes better predictions than two competing theories; and it survived this case study. That’s science!
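If it helps to see the shape of that argument laid bare, here’s a toy sketch of the falsification loop in Python. The three theory names follow Markus, but the prediction and observation strings are my own hypothetical stand-ins, not her actual case data:

```python
# Toy sketch of Popper-style falsification, loosely shaped like Markus's
# case study. The theory names are hers; the prediction and observation
# strings are hypothetical placeholders, not her actual findings.

theories = {
    "system-determined": "swap the people, resistance stays the same",
    "people-determined": "swap the system, resistance stays the same",
    "interaction": "resistance follows the system-context interaction",
}

# What the (hypothetical) case observations showed.
observation = "resistance follows the system-context interaction"

for name, prediction in theories.items():
    if prediction == observation:
        print(f"{name} theory: consistent with the case -- survives, for now")
    else:
        print(f"{name} theory: prediction contradicted -- falsified")
```

The real test is in the case evidence, of course, not string matching, but the logic is exactly this loop: each theory sticks its neck out with a prediction, and the data gets one chance per case to chop it off.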

>Seemingly, Allen both agreed with Markus and disagreed with Markus.
>In the end I believe he states that the multiple-case-study approach is better
>for research because of the rigor and potential for quantity and quality
>of data.

I don’t want to offer excuses for sloppy science — I’ll take multiple examples over one anecdote any day — but I might propose the 9/11 Theory of IS Research in support of single-case studies. The United States restructured the entire federal security apparatus, engaged in two significant military adventures, and radically changed the nature of defendant rights with the PATRIOT Act, all on the basis of a single event, one coordinated terrorist attack. Without commenting on the wisdom of any particular policy adopted, I think it is safe to say that scorn and derision would have met anyone who stood up on September 12 and said, “Hang on, before we form any theories or design any solutions, we’d better try replicating the experiment and controlling the variables. We can’t derive any useful knowledge from just one event.”

In practice, single cases form the basis of important decisions and life-lessons all the time.  Granted, single events also often form the basis of really bad decisions and prejudices. Scientists have an obligation to be circumspect, to look around for more than one case when possible to back up what they wish to argue. But sometimes a single case can provide such powerful, useful information that we’d be fools to ignore it.

Given that IS interacts so closely with the world of practice (we really are flying the plane, studying it, and swapping out bolts and rotors all at the same time!), maybe we IS researchers get special dispensation to do more single-case studies. Astrophysics and molecular biology don’t move fast like IS; we’ve got to provide folks some useful results before the technology we’re studying mutates into something completely unrecognizable. Maybe we need to make the most we can out of single-case studies, give business all the advice and best-guesses we can derive, and hope for the best as we scramble to study the next new thing.

Lee, A.S., and Baskerville, R.L. “Generalizing Generalizability in Information Systems Research,” Information Systems Research (14:3), September 2003, pp. 221-243.

Key passage #1:

An increase in sample size is beneficial, but the benefits take the form of improved reliability of the sampling procedure, rather than improved generalizability of a sample to its population. [226]

Bigger sample size doesn’t mean better generalizability to the entire population. A sample is a sample, no matter how big (until N equals the population). A larger sample just means your sampling procedure is more likely to produce repeatable results. A researcher can take as much of a stab at generalization from one case study as from a quantitative analysis of a big-N sample. Both methods could make the right generalization; both could guess wrong. It’s up to later researchers to disprove that generalization.

Think of it as a language clarification: Saying you have a large sample doesn’t prove that your results will more likely apply to the entire population than the results of a smaller-sample study. It just says that if someone else applies your methods, it’s more likely they will get similar results. Nothing inductive can tell us about the items we haven’t surveyed yet. “Therefore, a larger sample size does increase generalizability, but it is the generalizability of a sample to other samples, not to the population” [227].
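To see the distinction in action, here’s a minimal simulation sketch in plain Python (standard library only; the population parameters and sample sizes are arbitrary placeholders I picked for illustration):

```python
import random
import statistics

random.seed(1)

# A hypothetical finite population we would like to generalize about
# (arbitrary placeholder parameters).
population = [random.gauss(100, 15) for _ in range(100_000)]

def spread_of_sample_means(n, trials=200):
    """Std. dev. of the means of `trials` independent samples of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(trials)]
    return statistics.stdev(means)

for n in (30, 3000):
    # Bigger n -> repeated samples agree with each other more closely.
    # That is improved reliability of the sampling procedure, not proof
    # that any one sample generalizes to the population.
    print(f"n={n:5d}: spread across repeated samples = {spread_of_sample_means(n):.2f}")
```

The spread shrinks roughly like 1/√n (about 2.7 for n = 30 versus about 0.27 for n = 3000 here), which is exactly Lee and Baskerville’s point: the big sample is more repeatable, not more licensed to speak for the whole population.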

Key passage #2:

Geertz states the following about both theory and generalizability in anthropological studies about culture (Geertz 1973, pp. 25-26): “The essential task of theory building here is not to codify abstract regularities but to make thick description possible; not to generalize across cases but to generalize within them.”  [231]

The positivist wants to extrapolate to the universe; the interpretivist deals with a different, sometimes non-extrapolable beast: meanings that exist only in the context of their culture. The interpretivist can find just as much value in learning the general principles of meaning within a group, even if those principles don’t generalize to the meanings other groups construct.

Overall, Lee and Baskerville offer a liberating message: a proper understanding of the limits of the term “generalizability” gives researchers more freedom to pursue interesting and useful ideas without worrying quite so much about proving that their results will apply everywhere to everyone.

Lee, A.S. “Rigor and Relevance in MIS Research: Beyond the Approach of Positivism Alone,” MIS Quarterly (23:1), March 1999, pp. 29-33.

Gold nugget from Lee: “I believe that there are often circumstances in which one of our responsibilities as academicians is to be the conscience for our practitioner colleagues and, indeed, for society in general” [31].

It is easy to imagine academics as the folks thinking about IS and practitioners as the folks doing things with IS. If that dichotomy is valid in any way, it makes sense that the role of conscience would fall to the academics. Assigning that role to academics doesn’t excuse practitioners to act with wanton disregard for the general welfare. All citizens have an obligation to act thoughtfully. But IS academics can position themselves uniquely to see beyond the confines of a single firm or industry (the normal realm of the practitioner) and recognize the broader social and moral ramifications of where practitioners are taking IS.

Of course, the role of conscience only increases the burden on IS academics to keep their perspectives and their entire field diverse. To answer not only “Can we do X?” but also “Should we do X?” we must be able in our worldviews to encompass algorithms and allegories, management theories and moral precepts.

Lee, A.S., Zmud, R.W., Robey, D., Watson, R., Zigurs, I., Wei, K.-K., et al. “Editor’s Comments: Research in Information Systems: What We Haven’t Learned,” MIS Quarterly (25:4), December 2001.

Zmud notes that lots of research has looked at whether firms are getting a good return on investment in IT. That’s encouraging for my dissertation purposes — lots of literature to review. He also notes that much of the IT investment has been in small-return areas. Zmud directs us toward study of on-going IT portfolio management with an eye toward high-return areas.

Robey seems to embrace diversity in IS. He encourages us not to focus so exclusively on publication in MISQ and instead fire our missives out to other journals. Such encouragement fits with the outreach push discussed in an earlier article, the idea that we should evangelize our findings and our field by seeking publication in journals of related fields. Robey also asks for more “interesting” work — i.e., research with “unconventional departures from accepted wisdom” that opens “new and interesting avenues for inquiry.” That sort of “interesting” work will only come from diverse viewpoints, not a rigid delineation of what IS is and isn’t.

Lewis offers my kind of bullet list: six key principles, all tied to one core goal: improving organizational performance. But do these principles constitute a “good grand theory,” or are we just restating the same sensible goals that every organization has: do the job faster, better, cheaper? A grand theory for the field needs to be something more than a goal or value statement. I would assume we are pursuing something like the grand unified theory of physics, a general theory that explains how things work, not just how we want them to work. Then again, how profound a result is it to say, “These systems work, these don’t”? Is that enough to make us more than a glorified vo-tech? Is our field so weird and diverse that its grand theory needs to be an odd merging of physics and philosophy? (Maybe I’m just grappling with the issue Webster refers to in this article as “meta-theory,” which she says IS hasn’t sufficiently tackled.)

Zigurs also beats the diversity drum. We have to make the expansion of knowledge and understanding our main goal, not our methodology or personal definition of the field. We do not have to put our own approach to IS on a podium to advance the field. We need to keep the doors open to all comers. As long as researchers are interested in the core goal Lewis identified, improving organizational performance with IS, and pursue that goal with scientific rigor, we should communicate and work with them, keep our doors and journals open to them, regardless of theory, methodology, or specialization. We stand to learn (and serve!) from every approach.

Wei and I will be good friends, it appears, at least with respect to increasing the connections between IS and economics. I’m also intrigued by Wei’s suggestion that system development needs more research. We need to look not only at the impact of IS on everything else, but also at the impact of IS on IS. How do programmers, analysts, and other participants in the software development process get from “Gee, I wish we could…” to the shelves at Best Buy? That sounds like some navel-gazing that would be very interesting!

Myers — another vote for diversity, but with a caveat: we can have lots of researchers looking into lots of different problems, but we still need to be able to talk to, work with, and learn from each other. Again, if there is a Grand Unified Theory of IS, we will find it by comparing notes from all the different disciplines that IS overlaps and finding out what the IS-econ, IS-health-tech, IS-marketing, IS-behavioral, IS-etc. research has in common.

Sambamurthy: another new friend for my research area!

What business and IT capabilities, structure, and processes are associated with continued success in leveraging information technologies for superior performance through innovation, globalization, speed-to-market, operational excellence, cost leadership, and customer intimacy?

Add “in South Dakota” to the end of that question, and you’ve got my research area. I would spin a different alternative from Sambamurthy’s recommendations: where Sambamurthy sees potential for studying specific IT capabilities globally, I would add the possibility of studying those capabilities in specific regions and cultures. Friedman tells us the world is flat, but differences in wealth, education, infrastructure, and who knows what else may produce real differences in how those various IT capabilities, structures, and processes play out from region to region.

Sambamurthy will also be all over my intended research sources:

“…[R]elevant ideas about business or IT capabilities are not likely to emerge simply from a literature review. Researchers must develop their insights by examining the trade press, working with a few companies, talking to senior executives, and then blending these emerging insights with theory and prior literature.”

Expect to see Darin Namken in my bibliography….

Agarwal mentions “IT human capital,” a concept perhaps particularly relevant in South Dakota. As I research the impact of investment in IT, I will certainly want to consider measures of productivity and profit. But I may also want to look for a way to quantify the creation of IT human capital: students and workers who, because of the incorporation of IT in their schools and businesses, become experts (of varying degrees) in IT and can apply that knowledge not only within the innovating institution, but in other pursuits. IT implementation and integration improves the performance of the specific institution being studied (we hope), but it also creates more IT human capital and thus a more marketable, valuable workforce for attracting other businesses. Then we can talk about balancing costs: turnover costs for specific firms versus overall benefits for the state economy.
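To make that balancing act concrete, here’s the kind of back-of-envelope arithmetic I have in mind. Every number below is a made-up placeholder, not data from any study (and certainly not from South Dakota):

```python
# Back-of-envelope sketch of the firm-versus-state balancing act.
# All figures are hypothetical placeholders, not data from any study.

trained_per_year = 50            # IT-skilled workers a firm's adoption produces
share_leaving = 0.2              # fraction who take those skills elsewhere
replacement_cost = 15_000        # firm's cost to replace one departing worker
state_value_per_worker = 40_000  # assumed annual value of one skilled worker
                                 # to the broader state economy

firm_turnover_cost = trained_per_year * share_leaving * replacement_cost
state_benefit = trained_per_year * share_leaving * state_value_per_worker

print(f"Firm's annual turnover cost: ${firm_turnover_cost:,.0f}")
print(f"State economy's annual gain: ${state_benefit:,.0f}")
```

The interesting research question, of course, is whether anything like those last two inputs can be measured honestly.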