Applied Univariate, Bivariate, and Multivariate Statistics. Daniel J. Denis

but whether her empirical project is advancing our state of knowledge more than the experimental design of the student using a t‐test cannot even begin to be evaluated on the basis of the statistical methodology used. It must instead be evaluated on scientific merit and the overall strength of the scientific claim. Which scientific contribution is more noteworthy? That is the essential question, not which statistical technique was used. The statistics employed rarely have anything to do with whether good or bad science was performed. Good science is good science, which at times may require statistical analysis as a tool for communicating its findings.

      In fact, much of the most rigorous science requires only the simplest and most elementary of statistical tools. Students of research can become dismayed and temporarily disillusioned when they learn that complex statistical methodology, aesthetically pleasing and enjoyable in its own right though it may be (e.g., SEM models can be fun to work with), still does not solve their problems. Research-wise, their problems are usually those of design, controls, and coming up with good experiments, arguments, and ingenious studies. Their problems are usually not statistical at all, and in this sense an overemphasis on statistical complexity could actually delay their progress toward conjuring up innovative, ground‐breaking scientific ideas.

      The cold, hard fact, then, is that if you have a poor design, weak research ideas, and messy measurement of questionable phenomena, your statistical model will provide you with anticlimactic findings and will be nothing more than an exercise in the old adage of garbage in, garbage out. Quantitative modeling, sophisticated as it has become, has not replaced the need for strict, rigorous experimental controls and good experimental design. Nor has it made correlational research somehow more “on par” with the gold standard of experimental studies. Even with the advent of latent variable strategies and methodologies such as confirmatory factor analysis and structural equation modeling, statistics does not purport to genuinely “discover” hidden variables. Modeling is concerned simply with the partitioning of variability and the estimation of parameters. Beyond that, the remainder of the scientist’s job is to know his or her craft and to design experiments and studies that enlighten and advance our knowledge of a given field. When applied to sound design and thoughtful investigatory practices, statistical modeling does partake in this enlightenment, but it does nothing to save the scientist from a poorly planned or executed research design. Statistical modeling, complex and enjoyable as it may be in its own right, guarantees nothing.

      One might say that the ultimate goal of any science is still to establish causal relations, even if classical “Laplacian” determinism has been somewhat jettisoned by theoretical physicists, which would imply that there may actually be no “true causes” of events (despite our continued attempts to assign them); our search for them may be entirely misguided. Still, and a bit more down to earth, nothing suggests a stronger understanding of a scientific field than being able to speak of causation regarding the phenomena it studies. More difficult than establishing causation within a given research paradigm, however, is understanding what causation means in the first place. Several definitions of causality exist. Most have at their core the idea that causation is a relation between two events in which the second event is assumed to be, in some sense, a consequence of the first.

      For example, if I slip on a banana peel and fall, we might hypothesize that the banana peel caused my fall. But was it the banana peel that caused my fall, or was it the worn‐out soles of the shoes I was wearing that day? Had I been wearing mountain‐climbing boots instead of worn‐out running shoes, I might not have fallen. Who am I to say the innocent banana peel caused my fall? Causality is hard. Even if it seems that A caused B, there are usually many variables associated with the problem that, if adjusted or tweaked, could threaten the causal claim. Some would say this is simply a trivial philosophical problem of specifying causality, and that it is “obvious” from the situation that the banana peel caused the fall. Nonetheless, it is clear even from such a simple example that causation is in no way an easy conclusion to draw. Perhaps this is also why it is extremely difficult to pinpoint the true causes of virtually any behavior, natural or social. Hindsight is 20/20, but attributing causes with any kind of methodological certainty in violent crimes, for instance, usually turns out to be speculative at best. True, we may accumulate evidence for prediction, but equating that with causation is, under most circumstances, the wish, not the reality, of a social theory.

      In our brief discussion here we will not attempt to define causality. Books, dissertations, and treatises have been written exclusively on the topic. At most, what we can do in the space we have is offer the reader the following advice: if you are going to speak of causation with regard to your research, be prepared to defend your theory of causation to your audience. It is simply not enough to say that A causes B without subjecting yourself to at least some of the philosophical issues that accompany such a statement. Otherwise, you are strongly advised to avoid words such as “cause” in hypothesizing or explaining results and findings. “Relations” and “predictions” are epistemologically much “safer” words to use, less prone to critiques that end in quicksand. For a brief but enlightening discussion of causality in the social sciences, see Fox (1997, pp. 3–14). For a more thorough treatment of the subject as it relates to structural equation models, see Mulaik (2009, pp. 63–117). Even a brief study of the philosophy of science goes a long way toward understanding the complexities involved in using “causal” statements in research. These issues are nowhere near as simple as they may at first appear.

      Ian Stewart (1995) said it best when he wrote that the mathematician is not a juggler of numbers but a juggler of concepts. The greatest source of ambivalence toward learning statistical modeling among students outside (and even inside, I suppose) the mathematical sciences is the presumed mathematical complexity involved in such pursuits. Who wants to learn a mathematically based subject such as statistics when one has “never been good at math”?

      More than likely, the “dislike” of these subjects has more to do with the perceptions one has learned to associate with them than with any inherent ontological disdain for them. Human beings are creatures of psychological association. Disliking something without knowing what that thing is in the first place is almost akin to disliking a restaurant dish you have never tried. You cannot dislike something until you at least know something about it and open your mind to what it might be that you are forming opinions about. Not to sound overly “Jamesian” (the analogy is not perfect, but it is close), but perhaps you are afraid of mathematics because of your fear of it rather than because of the mathematics itself. That is, you run not because of the mathematics, but because of the fear. If you accept that you are as yet unsure of what mathematics is, and resolve not to judge it until you know more about it, you may delay forming a derogatory opinion of it. It is only when we assume we know something (to some extent, at least) that we usually
