Social Work Research Methods. Reginald O. York

The fact that I had eggs for breakfast this morning does not necessarily mean that I prefer eggs over cereal for breakfast in general. It could be that I have eggs half the time and cereal half the time and just happened to have eggs this morning. If you observed me at breakfast several times and noted that I had eggs every time, you would have more reliable evidence that I prefer eggs for breakfast. The more observations you make, the more confident you would be in your conclusion that I prefer eggs for breakfast.

      We are referring to a concept called “probability.” Let’s discuss this concept in a general way. Logic would suggest that there is a 50% chance of getting heads on a given flip of a coin because there are only two possibilities—heads and tails. But suppose that someone said that one coin in a set of coins was rigged to land on heads more often than tails because of the distribution of its weight. You pick out one coin, and you want to know if it is the rigged one. Suppose that your first flip was heads and the second was also heads. Are you convinced you have the rigged coin? Probably not, because you have only flipped it twice, and two heads in a row can easily happen just by chance. What if you flipped this coin 10 times and it came up heads every time? Now you have more reason to believe that you have the rigged coin. A similar result after 20 flips would be even better. If you do not have the rigged coin, you are unlikely to get a long run of identical flips. The more similar flips you see in a row, the better your chances that you have found the rigged coin. Determining how many flips you need to be confident is a matter for statistics. If you knew how to use a statistical test known as the binomial test, you could see that 5 flips in a row with only heads appearing would be so unusual that you would be safe to bet that you have found the rigged coin.
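      If you happen to have Python at hand, you can check this arithmetic yourself. The short sketch below is our own illustration (the function name binom_prob is not from the book); it applies the binomial formula to a fair coin:

```python
from math import comb

def binom_prob(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a coin
    that lands heads with probability p (the binomial formula)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# A fair coin gives 5 heads in 5 flips only about 3% of the time ...
print(binom_prob(5, 5))    # 0.03125
# ... and 10 heads in 10 flips about 1 time in 1,000.
print(binom_prob(10, 10))  # 0.0009765625
```

      So a run of 5 heads, while possible by chance, is unusual enough that betting on the rigged coin becomes the safer wager.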

      Now let us put the same lesson to use with a more practical example. Suppose that you wanted to know whether males and females differ in their satisfaction with instruction in research courses. Are females higher or lower than males in their level of satisfaction? You could ask a given group of students if they are generally satisfied with their research instruction, with the options of YES or NO. You could then compare the proportion of females who answered YES with the proportion of males who answered YES. What if you found that 63% of females were satisfied and that 65% of males were satisfied? Does that mean you can conclude that there is truly a difference between males and females? If so, would you be prepared to bet a large sum of money that a new study of this subject would also find males with a higher level of satisfaction? I doubt that you would, because you would realize that such a small difference could easily be explained by chance. If you had found that 60% of females were satisfied as compared with only 40% of males, you would be more likely to see this difference as noteworthy. However, such a difference in a sample of only 10 students would likely make you wonder if you should take these results seriously. Results with a sample of 100 students would be much more impressive.
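      The effect of sample size on chance differences can be seen in a short simulation. This Python sketch is our own illustration, with an invented scenario in which both groups have the exact same true satisfaction rate of 50%; it counts how often chance alone still produces a 20-point gap:

```python
import random

def chance_of_big_gap(n_per_group, trials=10_000, rate=0.5, seed=1):
    """Simulate two groups with the SAME true satisfaction rate and
    return the share of trials in which chance alone produces a gap
    of 20 percentage points or more between the two groups."""
    rng = random.Random(seed)
    big_gaps = 0
    for _ in range(trials):
        group_a = sum(rng.random() < rate for _ in range(n_per_group))
        group_b = sum(rng.random() < rate for _ in range(n_per_group))
        if abs(group_a - group_b) / n_per_group >= 0.20:
            big_gaps += 1
    return big_gaps / trials

print(chance_of_big_gap(10))   # roughly half the time with 10 per group
print(chance_of_big_gap(100))  # well under 1% with 100 per group
```

      With only 10 students per group, a 60%-versus-40% split arises by pure chance about half the time; with 100 per group, it almost never does. That is why the larger sample is so much more impressive.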

      In scientific research, you examine the theme of probability with the use of statistics. A statistical test applied to your data will tell you the likelihood that these data could have occurred by chance. If you fail to achieve statistical significance with your data, you cannot rule out chance as a likely explanation of them, and thus you cannot take them seriously in your conclusions. Suppose you found that students had a slightly higher score on knowledge of scientific research at the end of a lesson than before the lesson began, but your data failed to reach statistical significance. Under these circumstances, you should conclude that you failed to find that your students improved in research knowledge. You should not conclude that they had a slight improvement. Why? Because your data can be explained by chance, and you should not take them seriously. If you had found your data to be statistically significant, then you could conclude that your students had achieved a slight gain in knowledge.
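      As a toy illustration of this reasoning, suppose 6 of 10 students scored higher after the lesson (the numbers here are invented). A one-sided sign test, sketched below in Python, asks how often chance alone would produce 6 or more gains:

```python
from math import comb

def sign_test_p(improved, n):
    """One-sided sign test: the probability of seeing `improved` or
    more students gain, out of n, if gains and losses were equally
    likely -- that is, if the lesson had no real effect."""
    return sum(comb(n, k) for k in range(improved, n + 1)) / 2**n

print(round(sign_test_p(6, 10), 3))  # 0.377 -- far above 0.05
```

      A result this likely under pure chance is not statistically significant, so the “slight improvement” should not be taken seriously as a finding.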

      Limitations of Common Sense

      There is much wisdom in common sense, but there are pitfalls as well. Common sense is not a form of knowledge based on scientific inquiry. It is used here to show the connections between ideas we may embrace and the nature of science. Some commonsense phrases from past times have been refuted by science, and those we no longer embrace.

      Pseudoscience as an Alternative to Science

      Pseudoscience presents the appearance of science but lacks a scientific basis (Thyer & Pignotti, 2015). A claim based on pseudoscience may be supported with tables and charts, but those tables and charts have not been validated by scientific studies. Another characteristic of pseudoscience is reliance on anecdotal evidence alone to support the idea or theory. Anecdotal evidence is the use of single examples that fit one’s theory. But anecdotal evidence is quite weak and is not considered a legitimate basis for scientific inquiry. You can find an example to prove just about any point you wish to make. Science is based on the systematic review of many facts, not just a few examples.

      Another characteristic of claims based on pseudoscience is a tendency to cherry-pick facts to fit the theory rather than make an objective examination of all facts relevant to the theory. One of the red flags of pseudoscience is an extravagant claim of effectiveness. You have heard the saying “If something seems too good to be true, it probably is.” Solutions based on pseudoscience often claim greatness in the absence of scientific evidence of any effectiveness at all.

      Advocates of approaches that are in the category of pseudoscience usually are not inclined to engage in serious scientific work to test the approach, and these people will work hard to make excuses when evidence is produced that refutes the theory. The approach of science is to put the burden of proof on the researcher, to prove that an assertion is correct. The approach of the advocate for pseudoscience is to reverse the burden of proof and claim that the new approach should be considered correct until science clearly proves that it is not.

      A good source on this topic is the book Science and Pseudoscience by Thyer and Pignotti (2015). In it you will find a discussion of many treatment approaches that fall into the category of pseudoscience. Examples include Reiki assessment, thought field therapy, neuro-linguistic programming, holding therapy for children, and militaristic boot camps for youth. These are just a few examples; there are many more.

      If you see a model of practice that meets the criteria for pseudoscience, you do not necessarily have evidence that the practice is effective or that it is ineffective. Instead, you have information suggesting that there is a lack of evidence of its effectiveness, and that the basis for the claim of success is not consistent with a scientific basis for decision making. It may be effective but without evidence to prove it. It may be ineffective. In fact, it may even be harmful. We will not know until we have full evidence.

      There have been treatments that have been found, through scientific evidence, to be harmful. An example is the Scared Straight approach to the prevention of delinquency. This program exposes at-risk youth to the perils of prison life by taking them to prison for the day and having them listen to the messages of the prisoners about how bad prison life is. The assumption of this program is that this exposure will scare these youth sufficiently to cause them to avoid a life of crime. The results, however, have shown that it makes things worse. Here is the plain language summary of a review of many studies of this program:

      Programs such as “Scared Straight” involve organized visits to prison facilities by juvenile delinquents or children at risk for becoming delinquent. The programs are designed to deter participants from future offending by providing firsthand observations of prison life and interaction with adult inmates. This review, which is an update of one published in 2002, includes nine studies that involved 946 teenagers, almost all males. The studies were conducted in different parts of the USA and involved young people of different races whose average age ranged from 15 to 17 years. Results indicate that not only do these programs fail to
