Journal of Clinical and Diagnostic Research, ISSN - 0973 - 709X
Education Section DOI : 10.7860/JCDR/2017/26047.9942
Year : 2017 | Month : May | Volume : 11 | Issue : 5 | Page : JE01 - JE05

Critical Appraisal of Clinical Research

Azzam Al-Jundi1, Salah Sakka2

1 Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.
2 Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, Kingdom of Saudi Arabia.


NAME, ADDRESS, E-MAIL ID OF THE CORRESPONDING AUTHOR: Dr. Salah Sakka, Associate Professor, Al-Farabi Dental College, Riyadh-11691, Kingdom of Saudi Arabia.
E-mail: salah.sakka@hotmail.com
Abstract

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, together with the patient's values and expectations, into the decision-making process for patient care. The ability to identify and appraise the best available evidence, and to integrate it with one's own clinical experience and patients' values, is a fundamental skill. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to clinical practice.

Introduction

Decisions related to patient care must be made carefully, through an essential process that integrates the best existing evidence, clinical experience and patient preference. Critical appraisal is the process of carefully and systematically examining research to assess its reliability, value and relevance, in order to guide professionals in their vital clinical decision making [1].

Critical appraisal is essential to:

Combat information overload;

Identify papers that are clinically relevant;

Support Continuing Professional Development (CPD).

Carrying out Critical Appraisal:

Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.

Standard Common Questions:

What is the research question?

What is the study type (design)?

Selection issues.

What are the outcome factors and how are they measured?

What are the study factors and how are they measured?

What important potential confounders are considered?

What is the statistical method used in the study?

Statistical results.

What conclusions did the authors reach about the research question?

Are ethical issues considered?

The critical appraisal starts by checking the following main sections:

I. Overview of the paper:

The publishing journal and the year

The article title: Does it state key trial objectives?

The author(s) and their institution(s)

A peer review process in the journal's acceptance protocol adds robustness to the assessment of research papers and hence indicates a reduced likelihood that poor quality research will be published. Other areas to consider include the authors' declarations of interest and potential market bias. Attention should be paid to any declared funding or research grant, in order to check for a conflict of interest [2].

II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.

Aim of the study: It should be clearly and explicitly stated.

Materials and Methods: The study design, the composition of the groups, the type of randomization process, the sample size, gender, age, the procedure rendered to each group and the measuring tool(s) should be clearly stated.

Results: The measured variables with their statistical analysis and significance.

Conclusion: It must clearly answer the question of interest.

III. Introduction/Background section:

An excellent introduction thoroughly references earlier work in the area under discussion and explains the importance and limitations of what is already known [2].

-Why is this study considered necessary? What is its purpose? Was the purpose identified before the study, or was a chance result revealed as part of 'data searching'?

-What has already been achieved, and how does this study differ?

-Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section: Full details of how the study was actually carried out should be provided. Precise information should be given on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [3].

V. Results section: This section should clearly reveal what actually happened to the subjects. The results may contain raw data and should explain the statistical analysis. These can be shown in related tables, diagrams and graphs.

VI. Discussion section: This section should include a thorough comparison between what is already known on the topic of interest and the clinical relevance of what has been newly established. Possible limitations and the need for further studies should also be indicated.

Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?

Does it address any source of potential bias?

Are interpretations consistent with the results?

How are null findings interpreted?

Does it mention how the findings of this study relate to previous work in the area?

Can they be generalized (external validity)?

Does it mention their clinical implications/applicability?

To what are the results/outcomes/findings applicable, and will they affect clinical practice?

Does the conclusion answer the study question?

-Is the conclusion convincing?

-Does the paper indicate ethics approval?

-Can you identify potential ethical issues?

-Do the results apply to the population in which you are interested?

-Will you use the results of the study?

Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1-What is the research question?

For a study to be of value, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in an article is the Problem/Intervention/Comparison/Outcome (PICO) method [3].

P = Patient/Problem/Population: It involves identifying whether the research has a focused question. What is the chief complaint? e.g., disease status, previous ailments, current medications, etc.

I = Intervention: An appropriately and clearly stated management strategy, e.g., a new diagnostic test, treatment, adjunctive therapy, etc.

C = Comparison: A suitable control or alternative, e.g., specific and limited to one alternative choice.

O = Outcomes: The desired results or patient-related consequences have to be identified, e.g., eliminating symptoms, improving function, esthetics, etc.
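As a purely illustrative sketch, a focused clinical question can be captured in a simple structure before searching the literature. The Python below is only an example: the class name, fields and the dental question are hypothetical, not drawn from this article.

from dataclasses import dataclass

@dataclass
class PICOQuestion:
    patient: str       # P: patient, problem or population
    intervention: str  # I: the management strategy under study
    comparison: str    # C: the control or alternative
    outcome: str       # O: the patient-related consequence of interest

# Hypothetical example of a focused question for a dental trial
question = PICOQuestion(
    patient="Adults with peri-implant marginal bone loss",
    intervention="Adjunctive antimicrobial therapy",
    comparison="Mechanical debridement alone",
    outcome="Change in marginal bone level at 12 months",
)
print(question)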

The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [Table/Fig-1].

Categories of clinical questions and the related study designs.

Clinical Questions | Clinical Relevance and Suggested Best Method of Investigation
Aetiology/Causation | What caused the disorder and how is this related to the development of illness? Example: randomized controlled trial, case-control study, cohort study.
Therapy | Which treatments do more good than harm compared with an alternative treatment? Example: randomized controlled trial, systematic review, meta-analysis.
Prognosis | What is the likely course of a patient's illness? What is the balance of the risks and benefits of a treatment? Example: cohort study, longitudinal survey.
Diagnosis | How valid and reliable is a diagnostic test? What does the test tell the doctor? Example: cohort study, case-control study.
Cost-effectiveness | Which intervention is worth prescribing? Is a newer treatment X worth prescribing compared with older treatment Y? Example: economic analysis.

2- What is the study type (design)?

The study design of the research is fundamental to the usefulness of the study.

In a clinical paper, the methodology employed to generate the results should be fully explained. In general, all questions about the related clinical query, the study design, the subjects and the measures taken to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample Population:

Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.

The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [4].

Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [5].
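As a sketch of how such a calculation works, the textbook formula for comparing two means with equal groups is n per group = 2(z1-α/2 + zβ)²σ²/δ², where δ is the smallest difference worth detecting and σ is the common standard deviation. The Python below implements this standard formula; the example values (a 0.5 mm difference, SD of 1.0 mm, 5% significance, 80% power) are illustrative assumptions, not taken from any particular study.

import math
from scipy.stats import norm

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    # Approximate n per group to detect a true mean difference `delta`
    # with a two-sided test, given a common standard deviation `sigma`.
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)                # round up to whole participants

print(sample_size_two_means(delta=0.5, sigma=1.0))  # about 63 per group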

Is the sample defined? Human, Animals (type); what population does it represent?

Does it mention eligibility criteria with reasons?

Does it mention where and how the sample was recruited, selected and assessed?

Does it mention where the study was carried out?

Is the sample size justified? Correctly calculated? Is it adequate to detect statistically and clinically significant results?

Does it mention a suitable study design/type?

Is the study type appropriate to the research question?

Is the study adequately controlled? Does it mention the type of randomization process? Does it mention the presence of a control group, or explain the lack of one?

Are the samples similar at baseline? Is sample attrition mentioned?

All studies should report the number of participants/specimens at the start of the study, together with details of how many of them completed the study and reasons for incomplete follow-up, if any.

Does it mention who was blinded? Are the assessors and participants blind to the interventions received?

Is it mentioned how the data were analysed?

Are any measurements taken likely to be valid?

Researchers use measuring techniques and instruments that have been shown to be valid and reliable.

Validity refers to the extent to which a test measures what it is supposed to measure (i.e., the extent to which the value obtained represents the object of interest).

-Soundness, effectiveness of the measuring instrument;

-What does the test measure?

-Does it measure what it is supposed to measure?

-How well and how accurately does it measure?

Reliability: In research, the term reliability means "repeatability" or "consistency".

Reliability refers to how consistent a test is on repeated measurements. This is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method used to assess the reliability of any measurements taken and what the intra-examiner reliability was [6].
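As one concrete illustration, agreement between two sets of categorical ratings can be quantified with Cohen's kappa, which corrects observed agreement for chance. The sketch below uses scikit-learn's cohen_kappa_score on invented repeat ratings by a single examiner; the data and scenario are hypothetical.

from sklearn.metrics import cohen_kappa_score

# Hypothetical repeat ratings of ten radiographs by the same examiner
# (0 = no bone loss, 1 = bone loss), scored two weeks apart.
first_pass  = [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
second_pass = [0, 1, 0, 0, 0, 1, 0, 1, 1, 1]

print(cohen_kappa_score(first_pass, second_pass))  # 0.6: moderate agreement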

3-Selection issues:

The following questions should be raised:

- How were subjects chosen or recruited? If not random, are they representative of the population?

- Types of blinding (masking): single, double, triple?

- Is there a control group? How was it chosen?

- How are patients followed up? Who are the dropouts? Why and how many are there?

- Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?

- Is there a statement about sample size issues or statistical power (especially important in negative studies)?

- If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?

- Are there selection biases?

• In a case-control study, if exercise habits are to be compared:

- Are the controls appropriate?

- Were records of cases and controls reviewed blindly?

- How were possible selection biases controlled (Prevalence bias, Admission Rate bias, Volunteer bias, Recall bias, Lead Time bias, Detection bias, etc.,)?

• Cross Sectional Studies:

- Was the sample selected in an appropriate manner (random, convenience, etc.,)?

- Were efforts made to ensure a good response rate or to minimize the occurrence of missing data?

- Were reliability (reproducibility) and validity reported?

• In an intervention study, how were subjects recruited and assigned to groups?

• In a cohort study, how many reached final follow-up?

- Are the subject’s representatives of the population to which the findings are applied?

- Is there evidence of volunteer bias? Was there adequate follow-up time?

- What was the drop-out rate?

- Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of these results, patients could be harmed.

Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [7].

Bias is the term used to describe an error, at any stage of the study, that was not due to chance. Bias leads to results that deviate systematically from the truth. As bias cannot be measured, researchers need to rely on good research design to minimize it [8]. To minimize bias, the sample should be representative of the target population. It is also imperative to consider the sample size and to identify whether the study is adequately powered, i.e., large enough for a clinically important effect to emerge as statistically significant (p<0.05) [9].

4-What are the outcome factors and how are they measured?

-Are all relevant outcomes assessed?

-Is measurement error an important source of bias?

5-What are the study factors and how are they measured?

-Are all the relevant study factors included in the study?

-Have the factors been measured using appropriate tools?

Data Analysis and Results:

The statistical significance should be assessed:

- Were the tests appropriate for the data?

- Are confidence intervals or p-values given?

How strong is the association between intervention and outcome?

How precise is the estimate of the risk?

Does it clearly mention the main finding(s) and does the data support them?

Does it mention the clinical significance of the result?

Are adverse events, or the lack of them, mentioned?

Are all relevant outcomes assessed?

Was the sample size adequate to detect a clinically/socially significant result?

Are the results presented in a way to help in health policy decisions?

Is there measurement error?

Is measurement error an important source of bias?

Confounding Factors:

A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [9].

6- What important potential confounders are considered?

-Are potential confounders examined and controlled for?

-Is confounding an important source of bias?
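One common way to examine a suspected confounder is to compare the crude exposure effect with the effect after adjusting for the confounder in a regression model; a large shift suggests confounding. A minimal sketch with statsmodels on invented data (the variable names and values are hypothetical):

import pandas as pd
import statsmodels.formula.api as smf

# Invented data: is the exposure still associated with the outcome
# once age (a suspected confounder) is adjusted for?
df = pd.DataFrame({
    "outcome":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "exposure": [1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0],
    "age":      [65, 40, 70, 68, 35, 60, 72, 66, 45, 42, 58, 67],
})

crude    = smf.logit("outcome ~ exposure", data=df).fit(disp=0)
adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=0)

# A marked change in the exposure coefficient after adjustment
# suggests that age confounds the exposure-outcome association.
print(crude.params["exposure"], adjusted.params["exposure"])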

7- What is the statistical method in the study?

-Are the statistical methods described appropriate to compare participants for primary and secondary outcomes?

-Are the statistical methods specified in sufficient detail (if I had access to the raw data, could I reproduce the analysis)?

-Were the tests appropriate for the data?

-Are confidence intervals or p-values given?

-Are results presented as absolute risk reduction as well as relative risk reduction?
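To make the last point concrete, here is a small worked example computing the absolute risk reduction (ARR), relative risk reduction (RRR) and number needed to treat (NNT) from invented event counts:

# Invented trial results: 10/100 events on treatment, 20/100 on control
risk_treatment = 10 / 100   # 0.10
risk_control   = 20 / 100   # 0.20

arr = risk_control - risk_treatment  # absolute risk reduction: 0.10
rrr = arr / risk_control             # relative risk reduction: 0.50
nnt = 1 / arr                        # number needed to treat: 10

print(f"ARR = {arr:.2f}, RRR = {rrr:.2f}, NNT = {nnt:.0f}")
# A "50% relative reduction" sounds large; in absolute terms it is 10
# percentage points, i.e., ten patients treated to prevent one event.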

Interpretation of p-value:

The p-value is the probability of obtaining a result at least as extreme as the one observed if chance alone were operating (i.e., if the null hypothesis were true). By convention, a p-value of less than 1 in 20 (p<0.05) is regarded as statistically significant.

When the p-value is less than the significance level, usually 0.05, we reject the null hypothesis and the result is considered statistically significant. Conversely, when the p-value is greater than 0.05, we conclude that the result is not statistically significant and we fail to reject the null hypothesis.
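As a minimal sketch of how a p-value arises in practice, the following compares two groups with an independent-samples t-test (scipy.stats.ttest_ind); the measurements are invented for illustration.

from scipy.stats import ttest_ind

# Invented outcome measurements for two groups
control      = [2.1, 2.5, 1.9, 2.3, 2.8, 2.2, 2.4, 2.0]
intervention = [2.9, 3.1, 2.6, 3.3, 2.8, 3.0, 3.4, 2.7]

t_stat, p_value = ttest_ind(intervention, control)
print(p_value < 0.05)  # True here, so the null hypothesis is rejected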

Confidence interval:

Multiple repetitions of the same trial would not yield exactly the same results every time; however, on average the results would lie within a certain range. A 95% confidence interval means that if the trial were repeated many times, 95% of the intervals so constructed would contain the true size of effect.
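As a sketch, a 95% confidence interval for a sample mean can be computed with the normal approximation, mean ± 1.96 × standard error; the measurements below are invented for illustration.

import math
import statistics

# Invented measurements (e.g., marginal bone loss in mm)
data = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4, 1.0, 1.2, 1.1]

mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))  # standard error
lower, upper = mean - 1.96 * se, mean + 1.96 * se   # normal approximation

print(f"mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")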

8- Statistical results:

-Do statistical tests answer the research question?

Are statistical tests performed and comparisons made (data searching)?

Correct statistical analysis of the results is crucial to the reliability of the conclusions drawn from a research paper. Depending on the study design and the sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.

It is important to identify whether this is appropriate for the study [9].

-Was the sample size adequate to detect a clinically/socially significant result?

-Are the results presented in a way to help in health policy decisions?

Clinical significance:

Statistical significance, as shown by a p-value, is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful clinical improvement. The following questions should always be borne in mind:

-If the results are statistically significant, do they also have clinical significance?

-If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
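The distinction can be demonstrated numerically: with a large enough sample, even a clinically trivial effect produces a "significant" p-value. A simulation sketch (numpy and scipy; all values invented):

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Simulated change in blood pressure: a clinically trivial 0.5 mmHg
# mean difference between arms, with 5000 patients per arm.
control      = rng.normal(loc=0.0, scale=10.0, size=5000)
intervention = rng.normal(loc=0.5, scale=10.0, size=5000)

t_stat, p_value = ttest_ind(intervention, control)
print(p_value)  # typically < 0.05: statistically, but not clinically, significant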

9- What conclusions did the authors reach about the study question?

The conclusions should ensure that the recommendations stated are suitable for the results attained within the scope of the study. The authors should also acknowledge the limitations of the study and their effects on the outcomes, and propose suggestions for future studies [10].

-Are the questions posed in the study adequately addressed?

-Are the conclusions justified by the data?

-Do the authors extrapolate beyond the data?

-Are shortcomings of the study addressed and constructive suggestions given for future research?

-Is the conclusion convincing?

-Bibliography/References:

Do the citations follow one of the Council of Biology Editors' (CBE) standard formats?

10- Are ethical issues considered?

If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [10,11].

-Does the paper indicate ethics approval?

-Can you identify potential ethical issues?

Critical appraisal of RCTs: Factors to look for:

Allocation (randomization, stratification, confounders; a randomization sketch follows this list).

Blinding.

Follow up of participants (intention to treat).

Data collection (bias).

Sample size (power calculation).

Presentation of results (clear, precise).

Applicability to local population.
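For the allocation item above: a random allocation sequence of the kind CONSORT asks authors to describe can be generated with block randomization. A sketch in Python follows; the block size and arm labels are arbitrary assumptions for illustration.

import random

def block_randomize(n_blocks, block_size=4):
    # Each block contains equal numbers of A and B assignments,
    # shuffled, so the arms stay balanced after every complete block.
    sequence = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        random.shuffle(block)
        sequence.extend(block)
    return sequence

random.seed(1)  # reproducible example only
print(block_randomize(n_blocks=3))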

[Table/Fig-2] summarizes the Consolidated Standards of Reporting Trials (CONSORT) guidelines [12].

Summary of the CONSORT guidelines.

Title and abstract | Identification as an RCT in the title; structured summary (trial design, methods, results, and conclusions).
Introduction | Scientific background; objectives.
Methods | Description of trial design and important changes to methods; eligibility criteria for participants; the interventions for each group; completely defined and assessed primary and secondary outcome measures; how sample size was determined; method used to generate the random allocation sequence; mechanism used to implement the random allocation sequence; blinding details; statistical methods used.
Results | Numbers of participants, losses and exclusions after randomization; results for each group and the estimated effect size and its precision (such as 95% confidence interval); results of any other subgroup analyses performed.
Discussion | Trial limitations; generalisability.
Other information | Registration number.

Critical appraisal of systematic reviews: Systematic reviews provide an overview of all primary studies on a topic and try to obtain an overall picture of the results.

In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from the selected studies may be included. Factors to look for:

Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).

Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).

Homogeneity of studies.

Presentation of results (clear, precise).

Applicability to local population.

[Table/Fig-3] summarizes the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines [13].

Summary of PRISMA guidelines.

Title | Identification of the report as a systematic review, meta-analysis, or both.
Abstract | Structured summary: background; objectives; eligibility criteria; results; limitations; conclusions; systematic review registration number.
Introduction | Description of the rationale for the review; provision of a defined statement of the questions being addressed with regard to participants, interventions, comparisons, outcomes, and study design (PICOS).
Methods | Specification of study eligibility criteria; description of all information sources; presentation of the full electronic search strategy; the process for selecting studies; description of the method of data extraction from reports and the methods used for assessing risk of bias of individual studies, in addition to the methods of handling data and combining results of studies.
Results | Provision of full details of: study selection; study characteristics (e.g., study size, PICOS, follow-up period); risk of bias within studies; results of each meta-analysis done, including confidence intervals and measures of consistency; methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression).
Discussion | Summary of the main findings, including the strength of evidence for each main outcome; discussion of limitations at study and outcome level; provision of a general concluding interpretation of the results in the context of other evidence.
Funding | Source and role of funders.

Conclusion

Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and providing an indication of its relevance to the profession. It is a skill set, developed throughout a professional career, that, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.

References

[1] Burls A. What is critical appraisal? London: Hayward Medical Communications; 2016. Available from: http://www.whatisseries.co.uk/what-is-critical-appraisal/

[2] MacInnes A, Lamont T. Critical appraisal of a research paper. Scott Uni Med J. 2014;3(1):10-17.

[3] Richards D, Lawrence A. Evidence-based dentistry. Br Dent J. 1995;179(7):270-73.

[4] Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71-72.

[5] Greenhalgh T. How to Read a Paper: The Basics of Evidence Based Medicine. 5th ed. New York: John Wiley & Sons; 2014.

[6] Sakka S, Al-Ani Z, Kasioumis T, Worthington H, Coulthard P. Inter-examiner and intra-examiner reliability of the measurement of marginal bone loss around oral implants. Implant Dent. 2005;14(4):386-88.

[7] Rosenberg W, Donald A. Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995;310(6987):1122-26.

[8] Stewart LA, Parmar MK. Bias in the analysis and reporting of randomized controlled trials. Int J Technol Assess Health Care. 1996;12(2):264-75.

[9] Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316(7124):61-66.

[10] Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the "5S" evolution of information services for evidence-based healthcare decisions. Evid Based Med. 2006;11(6):162-64.

[11] Al-Jundi A, Sakka S. Protocol writing in clinical research. J Clin Diagn Res. 2016;10(11):ZE10-ZE13.

[12] Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.

[13] Moher D, Liberati A, Tetzlaff J, Altman DG; the PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA Statement. PLoS Med. 2009;6(7):e1000097.