Data analysis can be very difficult! If the results in a research article are not statistically significant, any conclusions drawn from them are unreliable. The following three basic principles can help you briefly critique the effectiveness of quantitative research:
Level of significance
- The level of statistical significance indicates how strong the evidence is against the null hypothesis (the assumption that there is no real difference between the specified populations). It is reported as a “p value”.
- For a result to be considered statistically significant, it conventionally must have a p value of less than 0.05, which indicates that a difference as large as the one observed between the control and experimental groups would be unlikely to occur by chance alone.
- To understand the magnitude of the effect of a study, look at the confidence interval, which gives a range of plausible values for the true effect. The confidence level (e.g., 95%) reflects the degree of risk researchers are willing to take of being wrong; the narrower the interval, the more precise the estimate.
- Example: with a 95% confidence level, intervals constructed this way would be expected to miss the true value about 5 times out of 100.
- Confidence intervals are calculated from the sample mean, the standard deviation, and the sample size. The standard deviation measures how much the individual values vary around the mean (the average of all the numbers).
- A standard deviation is always zero or positive, never negative; the smaller it is, the more closely the data cluster around the mean, and the narrower (more precise) the resulting confidence interval will be.
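The concepts above can be sketched in a short script. This is a minimal illustration, not a full statistical analysis: the data are made up, the confidence interval uses a normal approximation (z = 1.96) rather than the t distribution, and the p value is a normal approximation of a two-sample Welch t test (a real analysis would use statistical software such as SPSS, R, or Python's scipy).

```python
import math
import statistics

# Hypothetical outcome scores for two groups (invented data for illustration).
control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7, 4.3, 4.1]
experimental = [4.8, 5.1, 4.6, 5.0, 4.9, 5.2, 4.7, 5.3, 4.8, 5.0]

def describe(sample):
    """Return the mean, standard deviation, and an approximate 95%
    confidence interval for the mean (normal approximation, z = 1.96)."""
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)  # sample standard deviation: always >= 0
    margin = 1.96 * sd / math.sqrt(len(sample))
    return mean, sd, (mean - margin, mean + margin)

def welch_t_p(a, b):
    """Two-sample Welch t statistic with a normal-approximation,
    two-sided p value."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    # Two-sided tail probability from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

for name, sample in (("control", control), ("experimental", experimental)):
    mean, sd, ci = describe(sample)
    print(f"{name}: mean={mean:.2f} sd={sd:.2f} "
          f"95% CI=({ci[0]:.2f}, {ci[1]:.2f})")

t, p = welch_t_p(control, experimental)
verdict = "significant at 0.05" if p < 0.05 else "not significant at 0.05"
print(f"t={t:.2f}, p={p:.4f} -> {verdict}")
```

Note how the margin of error shrinks as the standard deviation gets smaller or the sample gets larger, which is why a tight spread of data yields a narrower confidence interval.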
Source: Godshall, M. (2010). Fast facts for evidence-based practice. New York, NY: Springer.