Evidence‐informed practice versus evidence‐based practice educational interventions for improving knowledge, attitudes, understanding, and behavior toward the application of evidence into practice: A comprehensive systematic review of undergraduate students

Abstract

Background: To produce graduates with strong knowledge and skills in the application of evidence into healthcare practice, it is imperative that all undergraduate health and social care students are taught, in an efficient manner, the processes involved in applying evidence into practice. The two main concepts linked to the application of evidence into practice are "evidence‐based practice" and "evidence‐informed practice." Globally, evidence‐based practice is regarded as the gold standard for the provision of safe and effective healthcare. Despite the extensive awareness of evidence‐based practice, healthcare practitioners continue to encounter difficulties in its implementation. This has generated an ongoing international debate as to whether evidence‐based practice should be replaced with evidence‐informed practice, and which of the two concepts better facilitates the effective and consistent application of evidence into healthcare practice.

Objectives: The primary objective of this systematic review was to evaluate and synthesize literature on the effectiveness of evidence‐informed practice versus evidence‐based practice educational interventions for improving the knowledge, attitudes, understanding, and behavior of undergraduate health and social care students toward the application of evidence into practice. Specifically, we planned to answer the following research questions: (1) Is there a difference (i.e., a difference in content or outcome) between evidence‐informed practice and evidence‐based practice educational interventions? (2) Does participating in evidence‐informed practice educational interventions, relative to evidence‐based practice educational interventions, facilitate the application of evidence into practice (as measured by, e.g., self‐reports on effective application of evidence into practice)?
(3) Do both evidence‐informed practice and evidence‐based practice educational interventions targeted at undergraduate health and social care students influence patient outcomes (as measured by, e.g., reduced morbidity and mortality, absence of nosocomial infections)? (4) What factors affect the impact of evidence‐informed practice and evidence‐based practice educational interventions (as measured by, e.g., course content, mode of delivery, multifaceted versus standalone interventions)?

Search Methods: We utilized a number of search strategies to identify published and unpublished studies: (1) electronic databases: we searched Academic Search Complete, Academic Search Premier, AMED, Australian Education Index, British Education Index, Campbell Systematic Reviews, Canada Bibliographic Database (CBCA Education), CINAHL, Cochrane Library, Database of Abstracts of Reviews of Effectiveness, Dissertation Abstracts International, Education Abstracts, Education Complete, Education Full Text (Wilson), ERIC, Evidence-Based Program Database, JBI Database of Systematic Reviews, Medline, PsycInfo, PubMed, SciELO (Scientific Electronic Library Online), and Scopus; (2) a web search using search engines such as Google and Google Scholar; (3) a grey literature search: we searched OpenGrey (System for Information on Grey Literature in Europe), the Society for Research on Educational Effectiveness, and the Virginia Henderson Global Nursing e‐Repository; (4) hand searching of journal articles; and (5) tracking bibliographies of previously retrieved studies. The searches were conducted in June 2019.
Selection Criteria: We planned to include both quantitative primary studies (including randomized controlled trials, non‐randomized controlled trials, quasi‐experimental studies, before‐and‐after studies, and prospective and retrospective cohort studies) and qualitative primary studies (including case series, individual case reports, descriptive cross‐sectional studies, focus groups, interviews, ethnography, phenomenology, and grounded theory) that evaluated and compared the effectiveness of any formal evidence‐informed practice educational intervention with an evidence‐based practice educational intervention. The primary outcomes were evidence‐informed practice and evidence‐based practice knowledge, attitudes, understanding, and behavior. We planned to include, as participants, undergraduate pre‐registration health and social care students from any geographical area.

Data Collection and Analysis: Two authors independently screened the search results to assess articles for their eligibility for inclusion. The screening involved an initial screening of titles and abstracts and, subsequently, the full text of selected articles. Discrepancies were resolved through discussion or consultation with a third author. We found no article eligible for inclusion in this review.

Main Results: No studies were found that were eligible for inclusion in this review. We evaluated and excluded 46 full‐text articles because none of them had evaluated and compared the effectiveness of evidence‐informed practice educational interventions with evidence‐based practice educational interventions. Of the 46 articles, 45 had evaluated solely the effectiveness of evidence‐based practice educational interventions, and one was on an evidence‐informed practice educational intervention. Hence, these articles were excluded as they did not meet the inclusion criteria.
Authors' Conclusions: There is an urgent need for primary studies evaluating the relative effectiveness of evidence‐informed practice and evidence‐based practice educational interventions targeted at improving undergraduate healthcare students' competencies regarding the application of evidence into practice. Such studies should be informed by current literature on the two concepts (i.e., evidence‐informed practice and evidence‐based practice) to identify their differences and similarities, as well as the appropriate content of the educational interventions. In this way, the actual effect of each concept could be determined and their effectiveness compared.

1 | PLAIN LANGUAGE SUMMARY

1.1 | Evidence-informed versus evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior toward the application of evidence into practice: A comprehensive systematic review of undergraduate students

We found no studies that compared the effectiveness of evidence-informed practice educational interventions to evidence-based practice educational interventions in targeting undergraduate health and social care students' knowledge, attitudes, understanding, and behavior.

1.2 | The review in brief
This review aimed to compare the relative effectiveness of evidence-informed practice versus evidence-based practice educational interventions on the knowledge, attitudes, understanding, and behavior of undergraduate health and social care students. We did not find any studies that met our inclusion criteria; therefore, we cannot draw any conclusions regarding the relative effectiveness of the two approaches. The evidence is current to June 17, 2019.

1.3 | What is this review about?
The effective application of the best evidence into healthcare practice is strongly endorsed, alongside a growing need for healthcare organizations to ensure the delivery of services in an equitable and efficient manner. Existing evidence shows that guiding healthcare practice with the best available evidence enhances healthcare delivery, improves efficiency, and ultimately improves patient outcomes. Nevertheless, evidence is often applied ineffectively and inconsistently in healthcare practice.
The two main concepts that have been associated with the application of evidence into healthcare practice are "evidence-based practice" and "evidence-informed practice." This review assesses the relative effectiveness of these two approaches, specifically in relation to improving the knowledge, attitudes, understanding, and behavior of undergraduate health and social care students. In addition, we aimed to assess the impact of evidence-informed practice and/or evidence-based practice educational programmes on patient outcomes.
Examples of patient outcome indicators that we would have assessed had eligible studies been found are: user experience, length of hospital stay, nosocomial infections, patient and health practitioner satisfaction, mortality, and morbidity rates.

1.4 | What is the aim of this review?
This Campbell systematic review examines the effectiveness of evidence-informed practice and evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior of undergraduate health and social care students toward the application of evidence into practice.
1.5 | What are the main findings of this review?
A total of 45 full-text articles on evidence-based practice educational interventions and one full-text article on an evidence-informed practice educational intervention were screened for eligibility for inclusion. However, we identified no studies examining the relative effectiveness of evidence-informed practice versus evidence-based practice educational interventions. As a result, we are unable to answer the question of which of the two concepts better facilitates the application of evidence into healthcare practice.
1.6 | What do the findings of this review mean?
Whilst evidence suggests that evidence-informed practice can be effective (compared to a no-intervention control) in improving student outcomes, we are unable to conclude which approach better facilitates the application of evidence into practice.

1.7 | How up-to-date is this review?
The review authors searched for studies published up to June 2019.
2 | BACKGROUND

2.1 | Description of the condition

Over the past three decades, there has been increasing attention on improving healthcare quality, reliability, and ultimately, patient outcomes, through the provision of healthcare that is influenced by the best available evidence and devoid of rituals and tradition (Andre et al., 2016; Sackett et al., 1996). Professional regulators such as the Nursing and Midwifery Council, United Kingdom (Nursing and Midwifery Council, 2015) and the Health and Care Professions Council (Health and Care Professions Council, 2012) expect the professional, as part of their accountability, to apply the best available evidence to inform their clinical decision-making, roles, and responsibilities. This is imperative for several reasons. First, it enhances the delivery of healthcare and improves efficiency. Second, it produces better intervention outcomes and promotes transparency. Third, it enhances co-operation and knowledge sharing among professionals and service users. Ultimately, the effective application of evidence into practice improves patient outcomes and enhances job satisfaction. Indeed, the need to guide healthcare practice with evidence has been emphasized by several authors, including Kelly et al. (2015), Nevo and Slonim-Nevo (2011), Scott and Mcsherry (2009), Shlonsky and Stern (2007), Smith and Rennie (2014), Straus et al. (2011), Tickle-Degnen and Bedell (2003), and Sackett et al. (1996). According to these authors, the effective and consistent application of evidence into practice helps practitioners to deliver the best care to their patients and patients' relatives.
Two main concepts have been associated with the application of evidence into healthcare practice: "evidence-based practice" and "evidence-informed practice." Evidence-based practice is an offshoot of evidence-based medicine; hence, the universally accepted definition of evidence-based practice is adapted from the definition of evidence-based medicine, which is "the conscientious, explicit and judicious use of the best evidence in making decisions about the care of the individual patient" (Sackett et al., 1996, p. 71). Evidence-informed practice, on the other hand, is defined as the assimilation of professional judgment and research evidence regarding the efficiency of interventions (McSherry et al., 2002). This definition was further elaborated by Nevo and Slonim-Nevo (2011) as an approach to patient care where: "Practitioners are encouraged to be knowledgeable about findings coming from all types of studies and to use them in an integrative manner, taking into consideration clinical experience and judgment, clients' preferences and values, and context of the interventions" (p. 18).
The primary aim of both evidence-informed practice and evidence-based practice is to facilitate the application of evidence into healthcare practice. However, there are significant differences between the two concepts. These differences are discussed in detail in the ensuing sections. Nonetheless, it is important to note here that a characteristic difference between evidence-informed practice and evidence-based practice is the process involved in applying each concept. Evidence-based practice provides a stepwise approach to the application of evidence into practice, where practitioners are required to follow a series of steps to implement evidence-based practice. According to Sackett (2000), the core steps of evidence-based practice are: (1) formulating a clinical question; (2) searching the literature for the best research evidence to answer the question; (3) critically appraising the research evidence; (4) integrating the appraised evidence with one's own clinical expertise, patient preferences, and values; and (5) evaluating the outcomes of decision-making.
Evidence-informed practice, on the other hand, offers an integrated, all-inclusive approach to the application of evidence into practice (Nevo & Slonim-Nevo, 2011). As illustrated by McSherry (2007), evidence-informed practice provides a systems-based approach (made up of input, throughput, and output) to applying evidence into practice, which contains, as part of its elements, the steps of evidence-based practice. Moreover, unlike evidence-based practice, the main process involved in the implementation of evidence-informed practice is cyclical and interdependent (McSherry et al., 2002).
Evidence-based practice is a well-established concept in health and social care (Titler, 2008) and is regarded as the norm for the delivery of efficient healthcare services. In recent times, however, the concept of evidence-informed practice is often used instead of evidence-based practice. For example, in countries such as Canada, the term has been widely adopted and is used more often in the health and social care fields. This was reflected in position statements by the Canadian Nurses Association (CNA, 2008) and the Canadian Physiotherapy Association (Canadian Physiotherapy Association, 2017), in which healthcare practitioners, including nurses, clinicians, researchers, educators, administrators, and policy-makers, were encouraged to collaborate with other stakeholders to enhance evidence-informed practice and to ensure integration of the healthcare system.
In the United Kingdom, the term evidence-informed practice has been extensively adopted in the field of education, with substantial resources invested to assess progress toward evidence-informed teaching (Coldwell et al., 2017). In addition, an evidence-informed chartered college of teaching has been launched (Bevins et al., 2011) to ensure evidence-informed teaching and learning.
This has generated an ongoing international debate as to whether the term "evidence-based practice" should be replaced by "evidence-informed practice," and which of the two concepts best facilitates the effective and consistent application of evidence into practice.
Researchers such as Melnyk (2017), Melnyk and Newhouse (2014), and Gambrill (2010) believe that knowledge and skills in evidence-based practice help the healthcare professional to effectively apply evidence into practice. Conversely, Epstein (2009), Nevo and Slonim-Nevo (2011), and McSherry (2007) have argued the need to equip healthcare professionals with the necessary knowledge and skills of evidence-informed practice to facilitate the effective and consistent application of evidence into practice. According to Nevo and Slonim-Nevo (2011), the application of evidence into practice should, in principle, be "informed by" evidence and not necessarily "based on" evidence. This suggests that decision-making in healthcare practice "might be enriched by prior research but not limited to it" (Epstein, 2009, p. 9).
It is imperative that healthcare training institutions produce graduates who are equipped with the knowledge and skills necessary for the effective and consistent application of evidence into practice (Dawes et al., 2005; Frenk et al., 2010; Melnyk, 2017). Hence, healthcare training institutions are required to integrate the principles and processes involved in the application of evidence into undergraduate health and social care curricula. However, the question that often arises is: which of the two concepts (i.e., evidence-informed practice and evidence-based practice) best facilitates the application of evidence into practice? While Melnyk et al. (2010) have suggested a seven-step approach to the application of evidence into practice (termed the "evidence-based practice model"), as stated earlier, McSherry (2007) has argued that the principle involved in the application of evidence into practice is a systems-based approach, with an input, throughput, and an output (named the "evidence-informed practice model").
The main purpose of this systematic review was to determine the differences and similarities, if any, between evidence-informed practice and evidence-based practice educational interventions, as well as to explore the role each concept plays in the application of evidence into practice. In addition, the present review aimed to determine whether the two concepts act together, or individually, to facilitate the effective application of evidence into practice. We hoped to achieve these aims by reviewing published and unpublished primary papers that have evaluated and compared the effectiveness of evidence-informed practice educational interventions with evidence-based practice educational interventions targeted at improving undergraduate pre-registration health and social care students' knowledge, attitudes, understanding, and behavior regarding the application of evidence into practice.

2.2 | Description of the intervention
The gap between evidence and healthcare practice is well acknowledged (Lau et al., 2014; Melnyk, 2017; Straus et al., 2009). Difficulties in using evidence to make decisions in healthcare practice are evident across all groups of decision-makers, including healthcare providers, policymakers, managers, informal caregivers, patients, and patients' relatives (Straus et al., 2009). Consequently, several interventions have been developed to improve the implementation of evidence into healthcare practice and policy. Specifically, evidence-based practice educational interventions are widely used and have been extensively evaluated (e.g., Callister et al., 2005; Dawley et al., 2011; Schoonees et al., 2017; Goodfellow, 2004). Evidence-informed practice educational interventions have also been used (e.g., Almost et al., 2013), although to a much smaller extent. Conducting a systematic review of currently available research offers a rigorous process for evaluating the comparative effectiveness of both evidence-informed practice and evidence-based practice educational interventions. Consensus statements on evidence-based practice education were published in 2005 (Dawes et al., 2005) and 2009 (Tilson et al., 2011). The statements provide suggestions for evidence-based practice competencies, curricula, and evaluation tools for educational interventions. All health and social care students and professionals are required to understand the principles of evidence-based practice, to have a desirable attitude toward evidence-based practice, and to effectively implement evidence-based practice (Dawes et al., 2005). To incorporate a culture of evidence-based practice among health and social care students, Melnyk (2017) believes undergraduate health and social care research modules need to be based on the seven-step model of evidence-based practice developed by Melnyk et al. (2010).
In addition, the curricula should include learning across the four components of evidence-based practice, namely knowledge, attitudes, behavior, and practice (Haggman-Laitila et al., 2016). Tilson et al. (2011) identified major principles for the design of evidence-based practice evaluation tools for learners. The identified categories for evaluating evidence-based practice educational interventions include the learner's knowledge of, and attitudes regarding, evidence-based practice; the learner's reaction to the educational experience; behavior congruent with evidence-based practice as part of patient care; and skills in implementing evidence-based practice. According to Tilson et al. (2011), frameworks used in assessing the effectiveness of evidence-based practice interventions need to reflect the aims of the research module, and the aims must also correspond to the needs and characteristics of learners. For example, students may be expected to perform the seven steps of evidence-based practice, whilst health practitioners may be required to acquire skills in applying evidence into practice. Tilson et al. (2011) also stated that the setting where learning, teaching, and the implementation of evidence-based practice occur needs to be considered.
Evidence-informed practice, on the other hand, extends beyond the initial definitions of evidence-based practice (LoBiondo-Wood et al., 2013) and is more inclusive than evidence-based practice (Epstein, 2009). This is for the following reasons. First, evidence-informed practice recognizes practitioners as critical thinkers and encourages them to be knowledgeable about findings from all types of research (including systematic reviews, randomized controlled trials (RCTs), qualitative research, quantitative research, and mixed methods), and to utilize them in an integrative manner. Second, evidence-informed practice considers the best available research evidence, practitioner knowledge and experience, client preferences and values, and the clinical state and circumstances (Nevo & Slonim-Nevo, 2011). However, Melnyk and Newhouse (2014, p. 347) disagreed with this assertion as a difference between the two concepts. According to the authors, like evidence-informed practice, evidence-based practice has broadened to "integrate the best evidence for well-designed studies and evidence-based theories (i.e., external evidence) with a clinician's expertise, which includes internal evidence gathered from a thorough patient assessment and patient data, and a patient's preferences and values." Although this statement may be true, the existing evidence-based practice models (e.g., DiCenso et al., 2005; Dufault, 2004; Greenhalgh et al., 2005; Melnyk et al., 2010; Titler et al., 2001) place too much emphasis on the scientific evidence in clinical decision-making, and give little or no attention to other forms of evidence such as the clinical context, patient values and preferences, and the practitioner's knowledge and experiences (McTavish, 2017; Miles & Loughlin, 2011).
Inasmuch as scientific evidence plays a major role in clinical decision-making, the decision-making process must be productive and adaptable enough to meet the ongoing changing condition and needs of the patient, as well as the knowledge and experiences of the health practitioner (LoBiondo-Wood et al., 2013; Nevo & Slonim-Nevo, 2011). Hence, researchers including Nevo and Slonim-Nevo (2011) and McSherry (2007) have advocated for a creative and flexible model of applying evidence into practice, where healthcare practitioners are not limited to following a series of steps (as advocated in evidence-based practice) to apply evidence into practice. Third, unlike evidence-informed practice, evidence-based practice uses a formal hierarchy of research evidence, which ranks certain forms of evidence (e.g., systematic reviews and RCTs) higher than others (such as qualitative research and observational studies). Instead of the hierarchy of research evidence, proponents of evidence-informed practice support an integrative model of practice that considers all forms of studies and prefers the evidence that provides the best answer to the clinical question (Epstein, 2009). Therefore, in place of the hierarchy of research evidence, Epstein (2011) suggested a "wheel of evidence," where "all forms of research, information gathering, and interpretations would be critically assessed but equally valued" (p. 225). This would ensure that all forms of evidence are considered during decision-making in healthcare practice.
Evidence-informed practice does not follow a stepwise approach to applying evidence into practice. According to McSherry (2007), the actual process involved in applying evidence into practice occurs in a cyclical manner, termed the evidence-informed cycle. Similarly, Epstein (2009) suggested that evidence-informed practice is an integration of three components, namely evidence-based programs, evidence-based processes, and client and professional values. According to Moore (2016), these sources of evidence need to be blended in practice to achieve optimal person-centered care.
Thus, an evidence-informed practice educational intervention needs to recognize the learner as a critical thinker who is expected to consider various types of evidence in clinical decision-making (Almost et al., 2013;McSherry et al., 2002). One is not expected to be a researcher to effectively implement evidence-informed practice.
Rather, McSherry et al. (2002) argue that the healthcare professional must be aware of the various types of evidence (such as the context of care, patient preferences and experience, as well as the professional's own skills and expertise), not just research evidence, to deliver person-centered care. Table 1 presents a summary of the differences and similarities between evidence-informed practice and evidence-based practice. The evidence-informed practice model (McSherry, 2007) is a systems-based model comprising input (e.g., roles and responsibilities of the health practitioner), throughput (i.e., research awareness, application of knowledge, informed decision-making, evaluation), and output, which is an empowered professional who is a critical thinker and doer (McSherry, 2007).
Evidence-based practice educational interventions referred to any formal educational program designed to enhance the application of evidence-based practice.

Table 1: Summary of the differences and similarities between evidence-informed practice and evidence-based practice

Differences
- Evidence-based practice adopts a "cook-book" approach to applying evidence into practice, and so leaves no room for flexibility (Nevo & Slonim-Nevo, 2011). Evidence-informed practice, in contrast, recognizes practitioners as critical thinkers (McSherry, 2007; Nevo & Slonim-Nevo, 2011), and encourages them to be creative and to consider the clinical state and circumstances when making patient care decisions.
- The existing evidence-based practice models (e.g., DiCenso et al., 2005; Dufault, 2004; Greenhalgh et al., 2005; Melnyk et al., 2010; Titler et al., 2001) emphasize scientific evidence in clinical decision-making. The existing evidence-informed practice models (e.g., McSherry, 2007; Nevo & Slonim-Nevo, 2011) are innovative and flexible: the client is at the centre, not the evidence (McTavish, 2017). One is not expected to be a researcher in order to effectively implement evidence-informed practice; the healthcare professional must be aware of the various types of evidence, such as the context of care, patient preferences and experience, as well as the clinician's skills and expertise, not just research evidence, in order to deliver effective person-centred care.
- Evidence-based practice uses a formal hierarchy of research evidence, which ranks certain forms of research evidence (e.g., systematic reviews and randomized controlled trials) higher than others (such as qualitative research and observational studies). Instead of the hierarchy of research evidence, evidence-informed practice supports an integrative model of practice that considers all forms of research evidence (including systematic reviews, randomized controlled trials, qualitative research, quantitative research, and mixed methods), and prefers the evidence that provides the best answer to the clinical question (Epstein, 2009).
- The existing models of evidence-based practice adopt a stepwise, linear approach to applying evidence into healthcare practice, which does not allow health practitioners to be creative enough to meet the ongoing changing needs and conditions of the patient and the healthcare setting. Evidence-informed practice, by contrast, is an integrative (McTavish, 2017) and systems-based approach to applying evidence into practice, comprising an input, throughput, and an output (McSherry, 2007); it is adaptable and considers the complexities of health and healthcare delivery (LoBiondo-Wood et al., 2013; Nevo & Slonim-Nevo, 2011).

Similarities
- Both evidence-informed practice and evidence-based practice are approaches for making informed clinical decisions (Woodbury & Kuhnke, 2014).
- Both evidence-informed practice and evidence-based practice integrate research with patient values and preferences and clinical knowledge and expertise (Melnyk & Newhouse, 2014).

In addition, definitions for "knowledge," "attitudes," "understanding," and "behavior" were based on the Classification Rubric for Evidence-based practice Assessment Tools in Education (CREATE) created by Tilson et al. (2011). These are provided below.
Knowledge referred to learners' retention of facts and concepts about evidence-informed practice and evidence-based practice.
Hence, assessments of evidence-informed practice and evidence-based practice knowledge might assess a learner's ability to define evidence-based practice and evidence-informed practice concepts, list their basic principles, or describe levels of evidence.
Attitudes referred to the values ascribed by the learner to the importance and usefulness of evidence-informed practice and evidence-based practice to inform clinical decision-making.
Understanding referred to learners' comprehension of facts and concepts about evidence-based practice and evidence-informed practice.
Behavior referred to what learners actually do in practice. It is inclusive of all the processes that a learner would use in the implementation of evidence-informed practice and evidence-based practice, such as assessing patient circumstances, values, preferences, and goals along with identifying the learners' own competence relative to the patient's needs to determine the focus of an answerable clinical question.
We planned that the mode of delivery of the educational program could be in the form of workshops, seminars, conferences, journal clubs, and lectures (both face-to-face and online). It was anticipated that the content, manner of delivery, and length of the educational program might differ in each of the included studies, as there is no standard evidence-informed practice/evidence-based practice educational program.
Evidence-informed practice and evidence-based practice educational interventions that are targeted toward health and social care postgraduate students or registered health and social care practitioners were excluded.
FIGURE 1 Evidence-informed practice model

| How the intervention might work
Most efforts to apply evidence into healthcare practice have been either unsuccessful or only partially successful (Eccles et al., 2005; Grimshaw et al., 2004; Lechasseur et al., 2011; McTavish, 2017). The resultant effects include poorer patient outcomes, reduced patient safety, reduced job satisfaction, and increased rates of staff turnover (Adams, 2009; Fielding & Briss, 2006; Huston, 2010; Knops et al., 2009; Melnyk & Fineout-Overholt, 2005; Schmidt & Brown, 2007). Consequently, considerable emphasis has been placed on integrating evidence-based practice (Masters, 2009; Melnyk, 2017; Scherer & Smith, 2002; Straus et al., 2005) and/or evidence-informed practice competencies (Epstein, 2009; McSherry, 2007; McSherry et al., 2002; Nevo & Slonim-Nevo, 2011) into undergraduate health and social care curricula. Yet the exact components of an evidence-based practice/evidence-informed practice educational intervention remain unclear. Healthcare educators continue to encounter challenges in finding the most efficient approach to preparing health and social care students for the application of evidence into practice (Almost et al., 2013; Flores-Mateo & Argimon, 2007; Oh et al., 2010; Straus et al., 2005). This has resulted in an increase in the number of studies investigating the effectiveness of educational interventions for enhancing knowledge, attitudes, and skills regarding, especially, evidence-based practice (Phillips et al., 2013). There is also empirical evidence (from primary studies) to support a direct link between evidence-based practice/evidence-informed practice educational interventions and knowledge, attitudes, understanding, and behavior, which in turn may have a positive impact on the application of evidence into practice.
However, participants in most of the studies reviewed were nursing students. Some examples are given below. Ashtorab et al. (2014) developed an evidence-based practice educational intervention for nursing students and assessed its effectiveness based on Rogers' diffusion of innovation model (Rogers, 2003). The authors concluded that evidence-based practice education grounded in Rogers' model leads to improved attitudes, knowledge, and adoption of evidence-based practice. According to the authors, Rogers' diffusion of innovation model contains all the important steps that need to be applied in the teaching of evidence-based practice.
Heye and Stevens (2009) developed an evidence-based practice educational intervention and assessed its effectiveness on 74 undergraduate nursing students, using the Academic Center for Evidence-Based Practice (ACE) Star Model of knowledge transformation (Stevens, 2004). The ACE Star Model describes how evidence is progressively applied to healthcare practice by transforming the evidence through various stages (discovery, evidence summary, translation, integration, and evaluation). Heye and Stevens (2009) indicated that the students who participated in the educational program gained research appraisal skills and knowledge in evidence-based practice. Furthermore, the authors reported that the students acquired evidence-based practice competencies and skills that are required for the work environment.
Several other studies have reported on the effectiveness of evidence-based practice educational interventions and their underpinning theoretical foundations: self-directed learning strategies (Fernandez et al., 2014; Kruszewski et al., 2009; Zhang et al., 2012), the constructivist model of learning (Fernandez et al., 2014), Bandura's self-efficacy theory, as well as the Iowa model of evidence-based practice (Kruszewski et al., 2009). Nonetheless, research in the area of evidence-informed practice educational interventions has been limited. Almost et al. (2013) developed an educational intervention aimed at supporting nurses in the application of evidence-informed practice. Before developing the intervention, the authors conducted interviews to examine the scope of practice, contextual setting, and learning needs of participants. A Delphi survey was then conducted to rank the learning needs identified by the interview participants and to select the key priorities for the intervention. The authors then conducted pre- and post-surveys, before the intervention and six months after it, to assess the intervention's impact. Thus, the development of the intervention was learner-directed, which reaffirms McSherry's (2007) description of the evidence-informed practitioner as a critical thinker and doer. Unlike evidence-based practice, practice knowledge and intervention decisions in evidence-informed practice are enriched by previous research but not limited to it. In this way, evidence-informed practice is more inclusive than evidence-based practice (Epstein, 2009, p. 9).
Nevo and Slonim-Nevo (2011) argue that rather than focusing educational interventions on the research-evidence-dominated steps of evidence-based practice, research findings should be included in the intervention process, but the process itself must be creative and flexible enough to meet the continually changing needs, conditions, experiences, and preferences of patients and health professionals.
A logic model has been presented in Figure 2 to indicate the connection between evidence-based practice/evidence-informed practice educational intervention and outcomes.

| Why it is important to do this review
Despite the seeming confusion surrounding the terms "evidence-informed practice" and "evidence-based practice," together with the ongoing debate in the literature as to which concept leads to better patient outcomes, no study, to the best of the researchers' knowledge, has compared through a systematic review the effects of the two concepts on the effective implementation of evidence into practice. A review of the literature reveals several systematic reviews conducted on evidence-based practice educational interventions and their effects. For example, Young et al. (2014) conducted an overview of systematic reviews that evaluated interventions for teaching evidence-based practice to healthcare professionals (including undergraduate students).

By conducting a comprehensive systematic review of the literature that specifically compares the effectiveness of evidence-informed practice to evidence-based practice educational interventions on undergraduate health and social care students, we hoped to review and analyze current evidence-informed practice and evidence-based practice approaches in higher education settings. In addition, we hoped that the results of this systematic review would help to determine the relative effectiveness of evidence-informed practice and evidence-based practice educational interventions, as well as identify gaps in the current literature. We hoped to be able to offer direction for practice, policy, and future inquiry in this growing area of research and practice.

| OBJECTIVES
The primary objective of this systematic review is as follows.
• To evaluate and synthesize literature on the relative effectiveness of evidence-informed practice and evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior of undergraduate preregistration health and social care students regarding the application of evidence into practice.
Specifically, the review aimed to address the following research questions: (1) Is there a difference (i.e., difference in content, outcome) between evidence-informed practice and evidence-based practice educational interventions? (2) Does participating in evidence-informed practice educational interventions relative to evidence-based practice educational interventions facilitate the application of evidence into practice?

For the primary analysis, the intention was to follow the steps recommended by Sandelowski et al. (2012): first, to conduct two separate syntheses for the included quantitative and qualitative primary studies. We planned to synthesize qualitative studies by way of meta-aggregation and quantitative studies by way of meta-analysis (Lockwood et al., 2015). We then planned to integrate the results of the two separate syntheses by means of an aggregative mixed-methods synthesis, translating findings from the quantitative synthesis into qualitative statements through Bayesian conversion (Joanna Briggs Institute, 2014). Figure 3 presents the mixed-methods approach we intended to employ in this systematic review.

| Types of participants
We intended to include undergraduate pre-registration health and social care students in higher education (university) from any geographical area. We planned to include undergraduate pre-registration students studying health and social care programs such as nursing, midwifery, dental hygiene and dental therapy, dental nurse practice, diagnostic radiography, occupational therapy, operating department practice studies, paramedic practice, social work, and physiotherapy.
We planned to exclude studies whose participants were registered health and social care practitioners and postgraduate students.

| Types of interventions
The intention was to include primary studies that evaluate and compare any formal evidence-based practice educational intervention with any formal evidence-informed practice educational intervention. Since there is no uniform tool for evaluating the effectiveness of evidence-based practice and evidence-informed practice educational interventions, we planned that measurement of the above outcomes could be conducted using standardized or unstandardized instruments, for example:
• standardized questionnaires evaluating knowledge, attitudes, understanding, and behavior toward the application of evidence into practice;
• unstandardized instruments, such as self-reports from study participants and researcher-administered measures.

Secondary outcomes
The intention was to include studies that measure the impact of evidence-informed practice and/or evidence-based practice educational programs on patient outcomes. We planned to assess patient outcome indicators such as user experience, length of hospital stay, absence of nosocomial infections, patient and health practitioner satisfaction, and mortality and morbidity rates.

| Duration of follow-up
No limit was placed on the duration of follow-up. The rationale was to allow studies with either short- or long-term follow-up to be eligible for inclusion.

| Types of settings
We intended to include primary studies from any geographical area. However, due to language translation issues, we planned to include only studies written in English. We also planned that studies whose titles and abstracts are in English and meet the inclusion criteria, but whose full article is reported in another language, would be included, subject to the availability of translation services.

| Time
To qualify for inclusion in this systematic review, studies must have been published from 1996 (the date when evidence-based practice first emerged in the literature) onward (Closs & Cheater, 1999; Sackett et al., 1996).
• Targeted population: nurs* OR physio* OR "occupa* therap*" OR "dental hygiene" OR "undergraduate healthcare student*" OR "undergraduate social care student*" OR baccalaureat* OR "social work" OR dent* OR BSc OR student* OR "higher education" OR "undergrad* nurs* student*"
• Intervention: evidence-informed* OR evidence-based* OR "evidence-informed practice" OR "evidence-based practice" OR EBP OR EIP OR "evidence-informed practice education" OR "evidence-based practice education" OR "evidence into practice" OR (evidence-informed NEAR practice teaching learning) OR (evidence-based NEAR practice teaching learning)
• Outcomes: "knowledge, attitudes, understanding and behavio* regarding EBP" OR ("knowledge NEAR attitudes understanding behavio* regarding EIP") OR "knowledge of evidence-informed*" OR "knowledge of evidence-based*" OR "patient outcome*" OR outcome*
• Study design/type: trial* OR "randomi?ed control trial" OR "quasi-experiment*" OR random OR experiment OR "control* group*" OR program OR intervention OR evaluat* OR qualitative OR quantitative OR ethnography OR "control* study" OR "control* studies" OR "control* design*" OR "control* trial*" OR "control group design" OR RCT OR "trial registration"

| Management of references
We exported the full set of search results directly into an EndNote X9 library. Where this was not possible, search results were manually entered into the EndNote library. The EndNote library made it easier to identify duplicates and manage references.

| Search strategy
The search to identify eligible studies was initially carried out in June 2018, and a repeat search was conducted in June 2019. We utilized a number of strategies to identify published and unpublished studies that meet the inclusion criteria described above. These strategies are outlined below.

| Electronic searches
The following electronic searches were conducted to identify eligible studies.
1. An electronic database search was conducted using the following databases:
• Academic Search Complete
2. A web search was conducted using the following search engines:
• Google
• Google Scholar
3. A gray literature search was conducted using the following databases:
• OpenGrey (System for Information on Grey Literature in Europe)
• System for Information on Grey Literature
• The Society for Research on Educational Effectiveness
• Virginia Henderson Global Nursing e-Repository

| Searching other resources
The following strategies were also used to identify eligible studies.

| Incomplete outcome data
We planned to assess studies to determine if there were any missing outcome data. We would have examined the differences between intervention and control groups in relation to measurement attrition and the reasons for missing data. Studies with low attrition (<20%), no attrition, or no evidence of differential attrition would have been considered as having a low risk of bias. We planned to record the use of intention-to-treat (ITT) analysis and the methods used to account for missing data (e.g., multiple imputation).

| Selective outcome reporting
We intended to assess studies for reporting bias to determine whether there were inconsistencies between the outcomes measured and the outcomes reported.

Dichotomous data
For dichotomous data, we planned to calculate the risk ratio (and its 95% confidence interval) for the occurrence of an event. For meta-analysis, we planned to convert risk ratios to the standardized mean difference, using David Wilson's practical effect size calculator. We intended to use meta-regression to assess the impact of moderator variables on the effect size of interventions.
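As a minimal sketch of the planned conversion (assuming the widely used logit transformation of Chinn (2000), d = ln(OR) × √3/π, rather than any particular feature of Wilson's calculator; all function names here are illustrative), the risk ratio and a corresponding standardized mean difference could be computed from 2×2 counts as follows:

```python
import math


def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio with a 95% CI (log-normal approximation)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log_rr = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi


def odds_ratio_to_smd(or_value):
    """Logit transformation (Chinn, 2000): d = ln(OR) * sqrt(3) / pi."""
    return math.log(or_value) * math.sqrt(3) / math.pi


def counts_to_smd(events_t, n_t, events_c, n_c):
    """Convert 2x2 counts to a standardized mean difference via the OR."""
    odds_t = events_t / (n_t - events_t)
    odds_c = events_c / (n_c - events_c)
    return odds_ratio_to_smd(odds_t / odds_c)
```

For example, 20/100 events in the intervention group versus 10/100 in the control group gives a risk ratio of 2.0 and an approximate standardized mean difference of 0.45.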
We planned to conduct moderator analysis if a reasonable number of eligible research articles were identified and if the required data were presented in the reports.

Studies with multiple groups
For studies with one control group versus two or more intervention groups, where all the interventions were regarded as relevant to the study, we planned to use the following options: (1) if the intervention groups were not similar, we would have divided the sample size of the control group into two (or more, based on the number of intervention groups) and compared each portion with one intervention group; (2) if the intervention groups were similar, we would have treated the two groups as a single group.
In the first case, we would therefore have provided two effect size estimates. This was to ensure that participants in the control group were not "double-counted" (Higgins & Green, 2011). We planned to employ a similar approach, but in reverse, in the event that an included study had one intervention group but two control groups. We also planned that if an included study contained both a relevant and an irrelevant intervention group, we would have included only data from the relevant intervention group in the analysis.
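The planned splitting of a shared control group can be illustrated with a small sketch (the function name and the handling of an odd remainder are our own assumptions; the substantive point is simply that each control participant is counted once across comparisons):

```python
def split_shared_control(n_control, n_arms):
    """Divide a shared control group's sample size across comparisons
    so that control participants are not double-counted
    (per Higgins & Green, 2011)."""
    base = n_control // n_arms
    sizes = [base] * n_arms
    # Distribute any remainder one participant at a time.
    for i in range(n_control - base * n_arms):
        sizes[i] += 1
    return sizes
```

For example, a control group of 101 participants shared by two intervention comparisons would be split into groups of 51 and 50.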

| Unit of analysis issues
In this systematic review, it was anticipated that included studies may have involved either individual participants or clusters of participants.

| Dealing with missing data
We planned to contact the first author of studies with incomplete data reports to request relevant information missing from the report.
We planned that if the requested data were not provided, our options for dealing with missing data would be based on whether data were "missing at random" or "missing not at random." If data were missing at random (i.e., if the fact that they are missing is unrelated to the actual values of the missing data), data analysis would have been conducted based on the available data.
However, if data were missing not at random (i.e., if the fact that they are missing is related to the actual missing data), we planned to impute the missing data with replacement values and treat these values as if they were observed (e.g., last observation carried forward, imputing an assumed outcome such as assuming all were poor outcomes, imputing the mean, or imputing based on predicted values from a regression analysis).
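As a minimal sketch of two of the single-imputation options mentioned above (last observation carried forward and mean imputation), assuming a simple list representation of repeated measurements with `None` marking missing values:

```python
def locf(series):
    """Last observation carried forward; leading gaps remain missing."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled


def mean_impute(series):
    """Replace missing values with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]
```

For example, `locf([1, None, 3, None])` yields `[1, 1, 3, 3]`, and `mean_impute([1, None, 3])` yields `[1, 2.0, 3]`.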

| Description of studies
The findings of the current systematic review are presented below.

| Included studies
We did not identify any qualitative or quantitative study that was eligible for inclusion in this review.

| Risk of bias in included studies
Because no studies were eligible for inclusion, none were evaluated for methodological quality.

| Allocation (selection bias)
We found no study eligible for inclusion in this review.

| Blinding (performance bias and detection bias)
We found no study eligible for inclusion in this review.

| Incomplete outcome data (attrition bias)
We found no study eligible for inclusion in this review.

| Selective reporting (reporting bias)
We found no study eligible for inclusion in this review.

| Other potential sources of bias
We found no study eligible for inclusion in this review.

| Effects of interventions
We found no study eligible for inclusion in this review.

| Summary of main results
We did not identify any evidence on the relative effectiveness of evidence-informed practice and evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior of undergraduate pre-registration health and social care students toward the application of evidence into practice.

| Potential biases in the review process
We made every effort to minimize bias in this review. A comprehensive search of multiple health and social care databases was conducted, and no limit was applied to publication status or language. The search strategy was developed by an academic librarian with expertise in information retrieval. The main potential bias of this review would be the unlikely event that we missed a study that evaluated and compared the effectiveness of evidence-informed practice and evidence-based practice educational interventions on the knowledge, attitudes, understanding, and behavior of undergraduate health and social care students; however, given our extensive search of the current literature, this is improbable. A limitation of our review is that we were unable to answer our research questions, since we did not identify any eligible studies.

| Agreements and disagreements with other studies or reviews
Currently, there is no primary study that has evaluated and compared the effectiveness of evidence-informed practice to evidence-based practice educational interventions. In addition, no previous systematic review has compared the effectiveness of these two concepts. We are, therefore, unable to compare our results with other studies. The purpose of evidence-based practice is to provide appropriate healthcare to the patient in a timely and effective manner (WHO, 2017). Evidence-based practice is expected to improve patient outcomes, increase job satisfaction, and provide cost-effective care (Melnyk et al., 2010).
Nevertheless, healthcare practitioners continue to struggle to implement the concept in clinical practice. Thus, there is an urgent need for a change in the way in which evidence is applied to healthcare practice. This change could be realized if other methods of applying evidence into practice, such as evidence-informed practice, are considered and researched.

| Implications for research
There is a need for primary studies evaluating the effectiveness of evidence-informed practice relative to evidence-based practice educational interventions, particularly their impact on patient outcomes.
• Content: The lead reviewer believes that improved patient outcomes could be achieved through the effective and consistent implementation of evidence-informed practice. She will also contribute to the methodological aspects of the systematic review.
• Content and systematic review methods: Professor Robert McSherry will bring both methodological and content expertise relating to evidence-informed practice and the development of teaching programmes to the team. His area of expertise is evidence-informed practice, patient safety, quality, and clinical governance using practice development. Practice development is about promoting person-centered care and approaches, which Rob has integrated effectively within both educational and research programs. He is the co-author of a book on systematic reviews and has over 30 years of experience as a registered nurse. Rob's educational and professional expertise has been recognized and rewarded nationally and internationally. He was awarded the highly prestigious National Teaching Fellow award in the UK in 2011.
• Content and systematic review methods: Dr. Josette Bettany-Saltikov will bring significant expertise of Systematic review methods and content to this systematic review, both in terms of knowledge about evidence-based practice and knowledge about developing educational programs.
She has taught systematic review methods to university students at all levels for over 15 years. She has also published a book on how to conduct a systematic review and has been involved in three Cochrane reviews, one of which she led. She has authored a number of systematic reviews on diverse topics published in other journals and has significant experience in developing educational programs from her teaching experience as a university Senior lecturer for 23 years.
• Content and systematic review methods: Professor Sharon Hamilton will bring expertise in systematic reviewing. She is the director of the Teesside Centre for Evidence-Informed Practice: A Joanna Briggs Institute Centre of Excellence, and has conducted a number of qualitative and quantitative reviews.
Sharon is a registered nurse and has research expertise in the evaluation of clinical interventions.
• Information retrieval: Mrs. Julie Hogg brings Information retrieval expertise to the team. Julie is an Academic Librarian at Teesside University and will carry out a thorough and systematic search of the literature.
• Statistical analysis: Mrs. Vicki Whittaker is a very experienced statistician with over 18 years of experience in teaching and advising students and academics on their research projects and clinical trials. She has been involved in data analysis and meta-analysis of numerous research projects and systematic reviews.

SUMMARY OF FINDINGS TABLES
Characteristics of excluded studies

DECLARATIONS OF INTEREST
The review team declares no potential conflicts of interest.

DIFFERENCES BETWEEN PROTOCOL AND REVIEW
In this review, we made every effort to identify eligible studies by following the methods outlined in the protocol.
In the protocol, we planned to assess whether evidence-informed practice compared to evidence-based practice educational interventions improves knowledge, attitudes, understanding, and behavior of undergraduate health and social care students toward the application of evidence into practice. In addition, we aimed to assess the impact of evidence-informed practice and/or evidence-based practice educational programs on patient outcomes. Examples of patient outcome indicators that we planned to assess include user experience, length of hospital stay, nosocomial infections, patient and health practitioner satisfaction, and mortality and morbidity rates. However, we could not explore these objectives in the final review because we did not identify any eligible studies for inclusion.

PUBLISHED NOTES
Characteristics of excluded studies

Baarends et al., 2017

Reason for exclusion
The study does not compare evidence-based practice educational interventions to evidence-informed practice educational interventions.

Reason for exclusion
The study does not compare evidence-informed practice educational interventions to evidence-based practice educational interventions.

Jang et al., 2015
Reason for exclusion Full text of the article is not in English. However, from the title of the study, it does not compare evidence-informed practice educational interventions to evidence-based practice educational interventions.

SOURCES OF SUPPORT
Internal sources
• Teesside University, UK. This review forms part of a Ph.D. programme, which is supported and funded by Teesside University, Middlesbrough.

External sources
• Not applicable. This systematic review did not receive any form of external support.