
Different methods for evaluating evidence

(1) Two different methods for evaluating evidence are the quantitative method, which consists of the systematic review (SR) and the meta-analysis, and the qualitative method, which consists of data analysis. According to Sriganesh, Shanthanna, and Busse (2016):

An SR can be either qualitative, in which eligible studies are summarized, or quantitative (a meta-analysis), in which data from individual studies are statistically combined. Not all SRs result in meta-analyses. Similarly, not all meta-analyses are preceded by an SR, though this element is essential to ensure that findings are not affected by selection bias.

Quantitative and qualitative researchers use different methods and have different goals. At the level of methods, quantitative researchers criticize qualitative researchers for not performing null hypothesis significance tests. However, the reviewed literature suggests that such tests are invalid, so it is not particularly meaningful to criticize researchers for omitting something that should not be performed anyway. More generally, it has been “suggested that there are strengths and limitations to the quantitative and qualitative methods” (Trafimow, 2014). The more interesting question pertains to goals: quantitative and qualitative researchers differ there as well, and the usual quantitative goal of finding causal mechanisms has its limitations. Likewise, the typical qualitative goal of describing personal or subjective experience also has limitations. Finally, comparing both quantitative and qualitative social science research to physics shows that each has similarities to and differences from it. There is much for quantitative and qualitative social science researchers to gain, not only by considering each other’s methods and goals carefully but also by going outside social science and considering the accomplishments of the nonsocial sciences.

(2) There are different methods of evaluating evidence. Two common methods in the field of nursing are the systematic review and the meta-analysis. These two methods help determine the relevance and validity of evidence, and they are similar in some respects and different in others.

Similarities and differences between the two methods:

Both systematic reviews and meta-analyses are considered the highest quality of evidence for clinical decision-making and can be used above all other methods of evaluating evidence. The two methods are similar in that both involve collecting data from different sources and summarizing all the evidence and results of the included studies.

While a systematic review collects and summarizes all the empirical evidence, a meta-analysis uses statistical methods to summarize the results of the studies. A meta-analysis is a statistical technique used to combine the numerical results from such studies, when it is possible to do so. A systematic review, on the other hand, is a formal, systematic, and structured approach to reviewing all the relevant literature on a topic. Another difference lies in the rationale: for a meta-analysis, combining samples from different studies increases the overall sample size, while for a systematic review, combining data from different sources yields greater reliability.
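To illustrate the statistical combination step that distinguishes a meta-analysis, the sketch below shows fixed-effect (inverse-variance) pooling, one common way numerical results from separate studies are combined. The study names, effect sizes, and standard errors are hypothetical, included only for illustration.

```python
# Minimal sketch of fixed-effect (inverse-variance) meta-analytic pooling.
# All study names and numbers below are hypothetical, for illustration only.
import math

# Each study contributes an effect estimate and its standard error.
studies = [
    {"name": "Study A", "effect": 0.40, "se": 0.15},
    {"name": "Study B", "effect": 0.25, "se": 0.10},
    {"name": "Study C", "effect": 0.35, "se": 0.20},
]

# Weight each study by the inverse of its variance, so larger, more precise
# studies contribute more to the pooled estimate.
weights = [1 / (s["se"] ** 2) for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval around the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```

Pooling in this way is what increases the effective sample size; a systematic review without a meta-analysis would instead summarize each study’s findings narratively.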

When performing a systematic literature review or meta-analysis, if the quality of studies is not properly evaluated or if proper methodology is not strictly applied, the results can be biased and the outcomes can be incorrect. However, when systematic reviews and meta-analyses are properly implemented, they can yield powerful results.

(3) In some journals, you will see a ‘level of evidence’ assigned to a research article. Levels of evidence are assigned to studies based on the methodological quality of their design, validity, and applicability to patient care. What would be the benefit of having research with different levels of evidence when doing scholarly writing?

(4) Article one by Aldridge, Linford, and Bray (2017) reviewed previous research on Screening, Brief Intervention, and Referral to Treatment (SBIRT), which showed that patients decreased their use of substances after SBIRT was performed (strength). Because the study looked at prior research, the researchers were unable to verify whether patients changed their behavior because of the interventions or referrals for treatment, and they did not observe patients receiving services, so there may have been other reasons for patients to decrease substance use (weakness). Because the researchers reviewed prior studies and the results were consistent, this study should provide evidence that SBIRT does result in decreased substance use among substance abusers. “To feel most confident in the use of a particular intervention, a practitioner would want to be sure that study findings supporting this evidence were replicated or repeated in numerous studies by similar and different groups of researchers” (Reinhardt, 2010, p. 41). This study may help support practice changes since the studies were repeated and had significant results.

Article two by Babor, Del Boca, and Bray (2017) looked at two different groups of SAMHSA’s grant recipients, which had screened over a million patients over their funding period, and many patients were referred for intervention or treatment. The article states that “…SAMHSA programs were implemented with sufficient adherence to evidence-based practice to serve as a viable test of SBIRT effectiveness” (Babor et al., 2017, p. 113). This is a strength of the article. “Greater intervention intensity was associated with larger decreases in substance use” (Babor et al., 2017, p. 110). Not all patients received the same type of intervention, so the researchers may not be able to compare patient outcomes in the same way (weakness). The study may help support practice changes since there was adherence to evidence-based practice.

Article three by Glass, Hamilton, Powell, Perron, Brown, and Ilgen (2015) discussed how their research used a systematic review of randomized controlled trials (RCTs) to see how brief interventions affected alcohol use. “The RCT is considered a true experiment and one of the most powerful tools in clinical research because it provides the potential to show a causal relationship between the treatment variable and the outcome…” (Reinhardt, 2010, p. 38). This article did have weaknesses: the researchers were not sure whether they had missed unpublished studies that could have affected the outcome of the study. In addition, the results lacked information on whether “…referral to and receipt of specialty alcohol treatment improved clinical outcomes among brief alcohol intervention recipients” (Glass et al., 2015, p. 1412). This study may not support practice changes because it lacked this information.

Article four by Hargraves, White, Frederick, Cinibulk, Peters, Young, and Elder (2017) looked at a large number of patients who were screened using SBIRT, and used quantitative and qualitative data to examine barriers and facilitators to using SBIRT in practice (this could be a strength or a weakness). “Quantitative research often utilizes widely accepted measures with established reliability and validity, and data are subject to more rigorous statistical analyses compared to qualitative data” (Reinhardt, 2010, p. 40). Qualitative data are used to show what works or does not work, so a facility can use them to make practice changes (strength) (Reinhardt, 2010). A weakness of this study is that it looked at all different types of practices using SBIRT and at a variety of conditions instead of just drugs and alcohol. This study could support practice change because it looks at best practices for using SBIRT and includes primary care facilities and federally qualified health centers, which are similar to this nurse’s practice.

Article five by Hodgson, Stanton, Borst, Moran, Atherton, Toriello, and Winter (2016) used qualitative data to study barriers and facilitators for SBIRT. The results demonstrated themes that were compared to past research. One strength of this study “…is that it sought input from all levels of providers” (Hodgson et al., 2016, p. 56). The study used focus groups, so there could be bias, which is considered a weakness. At Indian Health, all levels of providers perform screenings, so this study could support this nurse’s practice change because of the similar practice settings.

Article six, “The impact of screening, brief intervention and referral for treatment in emergency department patients’ alcohol use: a 3-, 6- and 12-month follow-up” (2010), used a control group and an intervention group to see whether SBIRT made an impact on substance use after three, six, and twelve months. The study was strong because the control and intervention groups were similar. Its weakness was that patients were lost to follow-up after one year. This study may not support practice changes unless patients who receive SBIRT are followed up at the three-, six-, and twelve-month periods; with time, patients may return to using substances.

Minimum of 60 words per response, with proper citations and references.

