
Strengthening implementation guidelines for HIV service delivery: Considerations for future evidence generation and synthesis

  • Ingrid Eshun-Wilson,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    i.eshun-wilsonova@wustl.edu

    Affiliations Division of Infectious Diseases, School of Medicine, Washington University in Saint Louis, Saint Louis, Missouri, United States of America, Department of Global Health, Stellenbosch University, Cape Town, South Africa

  • Nathan Ford,

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Department of HIV, Viral Hepatitis and Sexually Transmitted Infections, World Health Organization, Geneva, Switzerland

  • Aaloke Mody,

    Roles Conceptualization, Writing – original draft, Writing – review & editing

    Affiliation Division of Infectious Diseases, School of Medicine, Washington University in Saint Louis, Saint Louis, Missouri, United States of America

  • Laura Beres,

    Roles Writing – original draft, Writing – review & editing

    Affiliation Department of International Health, Johns Hopkins School of Public Health, Baltimore, Maryland, United States of America

  • Sheree Schwartz,

    Roles Writing – original draft, Writing – review & editing

    Affiliation Department of Epidemiology, Johns Hopkins School of Public Health, Baltimore, Maryland, United States of America

  • Stefan Baral,

    Roles Writing – original draft, Writing – review & editing

    Affiliation Department of Epidemiology, Johns Hopkins School of Public Health, Baltimore, Maryland, United States of America

  • Elvin H. Geng

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Division of Infectious Diseases, School of Medicine, Washington University in Saint Louis, Saint Louis, Missouri, United States of America

Summary points

  • With highly effective diagnostic, prevention, and treatment innovations available in HIV programs globally, the HIV field is increasingly turning to implementation and service delivery questions.
  • Developing guidelines for implementation of interventions is markedly challenged by limitations in primary implementation research design and reporting, as well as difficulties in application of evidence synthesis and guideline development tools originally developed to appraise evidence for efficacy.
  • Drawing on the processes of developing the WHO HIV service delivery guidelines for testing and treatment between 2018 and 2021, we present challenges and identify areas for future methodological development to improve the incorporation of implementation research across the full spectrum of the evidence generation continuum.
  • We highlight gaps in design, measurement, and reporting of primary implementation research, as well as underreporting of relevant program data.
  • We describe how routine application of current evidence synthesis tools may not sufficiently answer implementation questions and propose that methodological tools be optimized to identify high-quality non-randomized evidence and reduce penalization for heterogeneity in meta-analysis of implementation research.
  • These findings serve as a blueprint for further methodological work to strengthen existing evidence synthesis and guideline development tools for HIV service delivery guidelines and for implementation research more broadly.

Introduction

Over the last 10 years, global technical guidance for the public health response to HIV—including but not limited to the World Health Organization (WHO)—has sought to make recommendations not only for clinical practice, but also for how services should be implemented [1,2]. While the field of HIV has leveraged robust clinical interventions for prevention, diagnosis, and treatment for decades [3,4], much empirical HIV research today is directed at how to alter health systems, optimize performance, and extend the reach of services—that is, implementation research.

The increased focus on implementation questions has highlighted complexities in synthesizing and appraising research for the development of scientific guidance on implementation practice [5]. Many of the hallmarks of rigor, such as randomization or masking in individual studies or consistency of effects across studies, were developed to assess clinical efficacy data. When applied to implementation questions, such standards may penalize unavoidable procedures in implementation research (e.g., unmasked treatment assignment) and fail to assess features of implementation evidence quality such as relevance for implementation in real-world health systems [6]. We need evidence synthesis tools that assess both the settings and populations to which scientific inferences apply (external validity) and threats to the validity of those inferences (internal validity), as appropriate for implementation research questions [7]. This will ensure that guidance resulting from such syntheses can address the "target validity" of the strategies under consideration.

Between 2018 and 2021, the LIVE project (a living database of HIV implementation science research) supported WHO guideline development for HIV through the conduct of systematic reviews and meta-analyses for the 2019 HIV self-testing guidelines and the 2021 HIV treatment service delivery guidelines [8–10]. Review questions and critical outcomes of interest were based on emerging priorities identified during stakeholder-driven processes convened by WHO [11]. The resulting evidence syntheses were presented to guideline development panels comprising implementing partners from various HIV programs, as well as academics, people living with HIV, and representatives from funder organizations. In this paper, we summarize gaps in primary HIV implementation research methods and reporting identified through this work and propose areas for future methodological development.

Challenges for evidence synthesis and guideline development for HIV implementation research

To supplement recognized evidence synthesis methods during the development of WHO guidelines for HIV service delivery, we applied implementation science tools to systematically characterize research findings. These included the Proctor and colleagues frameworks for characterizing implementation strategies and implementation outcomes and the PRECIS-2 framework for assessing real-world relevance in trials [12–14]. We also re-examined well-recognized systematic review tools, including the Cochrane Risk of Bias tools and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework, regarding their application to HIV implementation science questions [15,16]. Through discussions with systematic review teams and WHO guideline developers, a workshop with key HIV implementation scientists, guideline developers, and funders, and a review of relevant literature, we identified several challenges for implementation evidence synthesis and guideline development for HIV service delivery (Table 1).

Table 1. Methodological gaps in evidence synthesis and guideline development for HIV implementation research.

https://doi.org/10.1371/journal.pmed.1004168.t001

Suboptimal characterization of strategies, adaptations, mechanisms of action, context, and implementation outcomes in primary HIV implementation research studies

Inconsistent or incomplete specification of strategies’ components and co-interventions

Detailed specification and characterization of primary implementation strategies and co-strategies was not robust across studies included in reviews. As an example, in a review of re-engagement in care strategies, actors (those who conducted tracing efforts) and actions (the tracing activities performed) were generally well specified; however, other implementation characteristics, such as the dose (frequency and intensity of contact attempts), temporality (when tracing was initiated in relation to the last visit or appointment), and action target (how a "lost" patient was defined and how the tracing sought to re-engage them), were less clear [12,17]. In addition, descriptions of adaptations made during implementation were infrequent. The lack of standardized descriptions of implementation characteristics across studies prevented assessment of effect modification by re-engagement strategy components, even qualitatively, and as a result limited the ability to develop recommendations for optimal timing or frequency of re-engagement efforts. A structured record of these components, as sketched below, is one way to make such gaps explicit.
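The sketch below is our own illustration, with hypothetical field values not drawn from any reviewed study, of how the specification domains recommended by Proctor and colleagues [12] could be captured as structured data so that unreported components surface as explicit gaps rather than silent omissions:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical schema following the Proctor et al. reporting domains [12];
# the field names and example values are illustrative assumptions.
@dataclass
class StrategySpecification:
    name: str                          # label for the implementation strategy
    actor: str                         # who delivers it (e.g., lay health worker)
    action: str                        # what is done
    action_target: str                 # who/what it acts on and to what end
    temporality: Optional[str] = None  # when, relative to the triggering event
    dose: Optional[str] = None         # frequency and intensity
    adaptations: list[str] = field(default_factory=list)  # changes made during implementation

tracing = StrategySpecification(
    name="Phone-based tracing for re-engagement",
    actor="Lay health worker",
    action="Call patients who missed a scheduled ART visit",
    action_target="Patients >28 days late; goal is return to care",
    dose="Up to 3 call attempts over 2 weeks",
)

# Unspecified domains are flagged rather than silently absent.
missing = [name for name, value in vars(tracing).items() if value in (None, [])]
print(missing)  # -> ['temporality', 'adaptations']
```

In a synthesis, records like this would let reviewers tabulate which studies report dose or temporality at all, a precondition for examining effect modification by strategy component.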

Infrequent reporting of implementation outcomes

Studies rarely presented implementation outcomes in primary or ancillary publications. Among 8 studies included in a review of community ART initiation, 3 reported on the cost of implementation and none reported on other implementation outcomes (i.e., acceptability and appropriateness of implementation strategies, fidelity to the delivery method, adoption by health workers and systems, or sustainability) [18]. For guidelines to broadly endorse practice innovations, their effectiveness must be accompanied by such measures—implementation outcomes complement data on effectiveness by contextualizing meta-analytic findings.

Mechanisms of action rarely considered or described

Hypothesized mechanisms by which implementation strategies were expected to exert effects were also rarely characterized. It was rare for protocols, primary publications, or ancillary publications to include any explicit theory of program effectiveness and its assumptions—or for studies to share revised mechanistic theories ex post given new information gained during study implementation [19–21]. Community adherence groups (CAGs), for example, have been conceptualized as improving HIV treatment outcomes by reducing structural barriers to care through changes in service location and reduced health facility visit frequency, as well as by reducing psychosocial barriers through peer support [22–24]. Studies included in a review of reduced visit frequency, however, showed little difference in outcomes between those in CAGs receiving 3-monthly multi-month scripting (MMS) and those receiving 3-monthly MMS at a health facility [25,26], which raises questions about the additional effect of peer support in CAGs (assuming populations and contexts were comparable). Proposing, refining, and synthesizing evidence on mechanisms can help delineate which components are fundamental to the success of a strategy and which are more contextually specific in their effects.

Critical contextual factors frequently not specified or sufficiently characterized

A further challenge for synthesis was the limited reporting of contextual factors that influence the effectiveness of an implementation strategy—such as the historical and sociopolitical climate, social structures, organizational hierarchies, financing, governance, leadership, human resource capacity, and stigma [27,28]. Understanding how context interacts with intervention exposure and modulates implementation success can facilitate inference in highly varied implementation settings and help identify necessary adaptations for success under new contextual conditions [27,29]. While standardized methods to measure context are still a work in progress conceptually, the eventual development of tools to specify the right kind of context to enable generalization (and knowledge of when generalization is bounded) will aid research utilization and detailed specification of service delivery guidelines [29–31].

Informative program data missing from the evidence base

Implementation evidence generation and synthesis to date have infrequently incorporated extensive program implementation data [32,33]. Routine assessments of publication bias focus on the absence of negative and small trials from the synthesis; for implementation research, however, extensive programmatic data are frequently missing from the literature because programs lack the time, resources, or skill sets to publish reports or scientific manuscripts. Such evidence reflecting real-world implementation is nevertheless important for strengthening the evidence base for future HIV service delivery guidelines.
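For context on what conventional publication bias assessments do and do not detect, the sketch below runs Egger's regression test for funnel plot asymmetry on hypothetical study effects; unpublished program data are simply absent from such an analysis, so an unremarkable test result cannot rule out the missingness described above:

```python
import numpy as np
from scipy import stats

# Hypothetical study-level effects (log risk ratios) and standard errors.
rng = np.random.default_rng(7)
log_rr = rng.normal(0.2, 0.15, size=12)
se = rng.uniform(0.05, 0.30, size=12)

# Egger's test: regress standardized effects on precision; an intercept far
# from zero suggests funnel plot asymmetry (small-study effects).
res = stats.linregress(1 / se, log_rr / se)
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_value:.3f}")
```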

Imperfect fit of evidence synthesis methods and HIV implementation research

Few tools to assess formal applicability to varied real-world (non-trial) settings

The feasibility of implementing a given intervention within health system constraints is a core component of meaningful guidance in evidence-to-decision processes, yet there are no commonly used tools for assessing whether synthesized study findings are sufficiently pragmatic and relevant to real-world conditions. Understanding the degree to which trial context and conditions mimic real-world conditions therefore has implications for establishing guidance for scale-up [34].

To address this gap during guideline review processes, we applied the PRECIS-2 tool to evaluate the extent to which research findings could be applied to routine care [14] (Table 2). The PRECIS-2 tool was originally developed to characterize trial designs as explanatory or pragmatic; further refinement and application of such tools for evidence synthesis could inform feasibility determinations in future guideline development processes. The PRECIS-2 framework was not formally used to inform recommendations but provided structure to guideline development discussions of feasibility (an illustrative scoring record follows Table 2).

Table 2. PRECIS-2 tool to assess real-world relevance and feasibility of research [14].

https://doi.org/10.1371/journal.pmed.1004168.t002
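As an illustration of how such ratings could be recorded during a review, the sketch below encodes the nine PRECIS-2 domains, each scored from 1 (very explanatory) to 5 (very pragmatic) [14]; the scores shown and the summary conventions are hypothetical, not drawn from the guideline reviews:

```python
# The nine PRECIS-2 domains [14]; names abbreviated here for illustration.
PRECIS2_DOMAINS = [
    "eligibility", "recruitment", "setting", "organisation",
    "flexibility_delivery", "flexibility_adherence",
    "follow_up", "primary_outcome", "primary_analysis",
]

# Hypothetical ratings for a single trial
# (1 = very explanatory, 5 = very pragmatic).
trial_scores = {
    "eligibility": 4, "recruitment": 5, "setting": 4, "organisation": 2,
    "flexibility_delivery": 3, "flexibility_adherence": 4,
    "follow_up": 2, "primary_outcome": 5, "primary_analysis": 4,
}
assert set(trial_scores) == set(PRECIS2_DOMAINS)  # every domain must be rated

mean_score = sum(trial_scores.values()) / len(trial_scores)
# Domains closest to ideal-world (explanatory) conditions, which may limit
# how directly the trial's findings transfer to routine care.
explanatory = [d for d, s in trial_scores.items() if s <= 2]
print(f"Mean pragmatism {mean_score:.1f}; most explanatory domains: {explanatory}")
```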

Extensive unexplained effect heterogeneity complicates interpretation of pooled effect estimates

Synthesized effect estimates for implementation research frequently do not represent one true underlying effect (or a single distribution of effects). This was demonstrated by high levels of unexplained statistical heterogeneity in our guideline work: the I2 statistic, a measure of statistical heterogeneity, was greater than 90% in several meta-analyses, prompting downgrading of evidence even in instances where the majority of effect estimates showed benefit (Fig 1). This heterogeneity reflects a well-recognized reality of implementation: strategies inevitably operate differently across contexts. In such instances, where no one true underlying effect measure exists (relevant to all populations and settings), pooled estimates reflect a broader concept of overall benefit or harm [35]. Thus, when considering pooling of estimates from HIV implementation research meta-analyses with high levels of statistical heterogeneity, it is important to assess whether the pooled estimate will be representative or misleading. Future methodological work should formalize additional considerations for implementation research meta-analysis regarding consistency in the direction of effects from individual studies, instances where pooling may or may not be valid, and the appropriate interpretation of pooled estimates in light of statistical heterogeneity (see the worked sketch after Fig 1).

Fig 1. Examples of 2 hypothetical meta-analyses with high statistical heterogeneity but with differing interpretations of overall direction of effect estimates.

(a) High statistical heterogeneity with inconsistency in direction of effects (uncertain benefit or harm). (b) High statistical heterogeneity with consistency in direction of effects (benefit). Blue boxes represent effect estimates from individual studies; red diamonds represent pooled effect estimates. Risk ratio greater than 1 represents benefit, less than 1 represents harm [37–40].

https://doi.org/10.1371/journal.pmed.1004168.g001
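The distinction drawn in Fig 1 can be made concrete with a small computation. The sketch below applies standard DerSimonian-Laird random-effects pooling to two invented sets of study effects with similarly high I2 but differing consistency in the direction of effects; all numbers are hypothetical:

```python
import numpy as np

def pool(log_rr, se):
    """DerSimonian-Laird random-effects pooling on the log risk ratio."""
    w = 1 / se**2
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)  # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100      # unexplained heterogeneity
    tau2 = max(0.0, (q - df) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1 / (se**2 + tau2)
    pooled_rr = np.exp(np.sum(w_re * log_rr) / np.sum(w_re))
    share_benefit = np.mean(log_rr > 0)    # RR > 1 denotes benefit, as in Fig 1
    return pooled_rr, i2, share_benefit

se = np.full(6, 0.08)
consistent = np.log([1.3, 1.6, 2.4, 1.2, 1.9, 1.4])    # all estimates favor the strategy
inconsistent = np.log([0.6, 1.6, 2.4, 0.7, 1.9, 0.8])  # direction disagrees across studies

for label, y in [("consistent", consistent), ("inconsistent", inconsistent)]:
    rr, i2, share = pool(y, se)
    print(f"{label}: pooled RR={rr:.2f}, I2={i2:.0f}%, studies showing benefit={share:.0%}")
```

Both analyses would be downgraded for inconsistency under routine application of current tools, yet only the second leaves the direction of effect genuinely uncertain.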

Difficulties establishing transitivity in network meta-analysis to obtain indirect estimates

Unexplained differences between otherwise similarly named strategies, another form of heterogeneity, also complicate the use of network meta-analyses (NMAs). NMAs offer a complementary methodology for comparing several implementation strategies through the use of common comparators. In theory, this approach is highly attractive for implementation research, where the large number of potential strategies precludes head-to-head comparison of all possibilities. However, the application of this method to implementation research can be challenged by the need to meet the assumption of "transitivity." In a systematic review of the effects of HIV self-testing (HIVST) delivery strategies, marked differences in settings and population groups posed a threat to transitivity [36]. Transitivity indicates that indirect effects compared through a common group are valid; for implementation research, however, differences in, for example, "standard of care" raise questions about indirect comparisons obtained in such analyses [37]. To address this, networks were separated, and sensitivity and meta-regression analyses were used to explore heterogeneity. It is likely, however, that substantial heterogeneity remained between common comparators [38]. Guidelines and new methods for applying network meta-analysis to implementation research could help in understanding the effects of interventions where a substantial number of unique implementation strategies have been explored. The arithmetic of the simplest indirect comparison, sketched below, shows why the assumption matters.
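In this hypothetical sketch, two strategies A and B are each compared against a "standard of care" arm C, and the indirect A-versus-B contrast is obtained by subtraction on the log scale; the step is valid only if the two C arms are exchangeable, which is exactly what differing standards of care undermine:

```python
import numpy as np

# Hypothetical direct comparisons against a shared comparator C.
log_rr_ac, se_ac = np.log(1.50), 0.10  # strategy A vs standard of care
log_rr_bc, se_bc = np.log(1.20), 0.12  # strategy B vs standard of care

# Indirect contrast through C; rests on transitivity (the two "C" arms
# must be comparable). Variances add, so indirect estimates are less precise.
log_rr_ab = log_rr_ac - log_rr_bc
se_ab = (se_ac**2 + se_bc**2) ** 0.5
low, high = np.exp(log_rr_ab - 1.96 * se_ab), np.exp(log_rr_ab + 1.96 * se_ab)
print(f"Indirect RR, A vs B: {np.exp(log_rr_ab):.2f} (95% CI {low:.2f}-{high:.2f})")
```

If the first trial's "standard of care" included tracing support while the second trial's did not, the subtraction compares unlike quantities and the indirect estimate is biased, however precise it appears.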

Inherent nature of HIV implementation research frequently results in low overall evidence certainty ratings

During the HIV service delivery guideline evidence synthesis process, discussion arose regarding what may be considered good evidence within the GRADE approach versus what might be considered good evidence for implementation. GRADE offers a rigorous and standardized approach for assessing overall evidence certainty for a systematic review outcome (e.g., retention in care) and determining the strength of a guideline recommendation. Applying rating systems that value high internal validity (e.g., RCTs) and consistency across effect estimates to implementation questions, however, means that implementation research can rarely attain a status of "high certainty" evidence, often leading to recommendations that are not strong and thus may not be adopted. While this may reflect an overall weak evidence base, another possibility worth interrogating is that guidance for implementation demands data with different priorities (even if principles are shared).

Observational research and natural experiments provide critical insights for implementation evidence but are penalized in meta-analyses

Non-randomized studies—a critical component of HIV implementation research—are initially assigned a low certainty rating within GRADE, and unless effect estimates are large or show a dose-response gradient, the inclusion of observational research results in immediate downgrading of evidence certainty [41]. For questions of implementation, however, it is important to consider that findings from observational research may generate estimates that reflect effectiveness under real-world implementation conditions, and their inclusion in evidence synthesis is therefore crucial [42]. The quality of observational research can be highly variable, and further consideration should be given to strengthening tools for distinguishing the methodological quality of non-randomized evidence and adapting guideline development tools to broaden the criteria for assigning high-quality evidence ratings. The sketch after this paragraph makes the default rating arithmetic explicit.
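The sketch below caricatures the default GRADE arithmetic described above: randomized designs start at "high", non-randomized designs at "low", and downgrades (e.g., for inconsistency) and upgrades (e.g., large effects, dose-response gradients) move the rating between levels [15,41]. Real GRADE assessments are structured judgments rather than a formula; this is purely illustrative:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(randomized: bool, downgrades: int = 0, upgrades: int = 0) -> str:
    """Simplified starting-level-plus-adjustments view of GRADE [15,41]."""
    start = 3 if randomized else 1  # RCTs start "high"; observational starts "low"
    level = max(0, min(3, start - downgrades + upgrades))
    return LEVELS[level]

# An observational implementation study downgraded once (e.g., for
# inconsistency) bottoms out unless a large effect or dose-response
# gradient lifts it back up.
print(grade_certainty(randomized=False, downgrades=1))              # very low
print(grade_certainty(randomized=False, downgrades=1, upgrades=1))  # low
print(grade_certainty(randomized=True, downgrades=1))               # moderate
```

Under this default arithmetic, even well-conducted non-randomized implementation studies rarely reach the ratings that support strong recommendations, which is the penalization the heading refers to.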

Unexplained heterogeneity, an inherent feature of implementation research, reduces evidence certainty ratings

Methodological, clinical, and statistical heterogeneity are frequently present in syntheses of implementation research. This often results in low evidence certainty ratings for implementation research, which limits the ability to make strong implementation guideline recommendations to inform policy. Implementation questions must, however, be tested across heterogeneous settings—the adaptability of a strategy to different settings is what tests its robustness for scalability—and heterogeneity is therefore to be expected [27,43]. Guidelines are needed that outline instances where high-certainty evidence ratings may still be warranted despite heterogeneity.

Limitations of tools and methods for incorporation of implementation research into guideline development frameworks

Limited tools and guidance for evaluating the "indirectness" evidence appraisal domain

Strengthening the guidance for the GRADE evidence certainty domain of "indirectness" (a measure of the external validity of the synthesized evidence) could improve the application of this domain to implementation questions [7,44]. Guidelines and tools for establishing internal validity are extensive, but methods for external validity are comparatively less well developed, resulting in more subjective assessments. Providing greater detail on the requirements within each category of indirectness (i.e., population, intervention, outcome, and indirect comparison) as they relate to questions beyond efficacy can help review teams identify whether sufficient information is available to establish directness for implementation in the first place and provide more nuanced interpretations. Given the frequent desire for guidelines to translate to diverse and varied global settings, expanded descriptions of what data are available from the synthesized evidence to enable inferences in these settings could improve the utility of this domain. Understanding whether the mechanism by which a strategy is thought to exert its effects translates wholly or partially to other settings or populations could aid in determining where and for whom the evidence is direct or indirect [42]. Incorporation of tools such as PRECIS-2 could strengthen assessments of the directness or indirectness of evidence [14]. A more in-depth characterization of this GRADE domain for implementation research could help specify the conditions under which guideline recommendations are most applicable, as well as inform reporting requirements for primary research to inform future guidelines.

Few guidelines for incorporation of quasi-experimental designs and natural experiments into evidence synthesis and guideline development processes

Quasi-experimental designs are increasingly being used to explore HIV implementation questions. However, they have little recognition in evidence hierarchies, and there are few tools to guide their incorporation into evidence synthesis [45–49]. For HIV implementation research, quasi-experimental analyses of population-level effects and programmatic data, which can establish causality if assumptions are met, can contribute real-world program evidence. Current thinking has argued that natural experiments may be as rigorous as randomized trials because they occur without anyone knowing about the outcomes (they are blinded by nature), and they avoid various artefacts introduced by randomized trials (e.g., the so-called Hawthorne and John Henry effects). These designs are, however, limited by the inability to fully test the assumptions required for causal inference and by possible bias amplification [50]. Guidelines are emerging for the inclusion of natural experiments in evidence synthesis; clarifying what constitutes high-quality evidence from natural experiments and how such evidence may be incorporated could broaden the evidence base for future HIV service delivery guideline development processes [51]. A simulated example of one such design follows.
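As one example of this quasi-experimental logic, the sketch below simulates a regression discontinuity of the kind used in studies such as [46–48]: patients just below a CD4 eligibility threshold (350 cells/µL here, an assumption chosen for illustration) become treatment-eligible while those just above do not, and a local linear fit on either side of the cutoff recovers the effect on retention. All data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cd4 = rng.uniform(150, 550, n)  # simulated CD4 counts at presentation
eligible = cd4 < 350            # eligibility rule creates the discontinuity
# Retention improves smoothly with CD4, plus a 15-point jump from eligibility.
retained = rng.random(n) < (0.45 + 0.0004 * cd4 + 0.15 * eligible)

h = 50                          # bandwidth: keep only patients near the cutoff
window = np.abs(cd4 - 350) < h
x = cd4[window] - 350           # centered running variable
d = eligible[window].astype(float)
y = retained[window].astype(float)

# Local linear regression with separate slopes on each side of the cutoff;
# the coefficient on d estimates the discontinuity in retention.
X = np.column_stack([np.ones_like(x), d, x, d * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated jump in retention at the threshold: {beta[1]:.3f}")  # ~0.15
```

The validity of the estimate rests on assumptions about behavior near the cutoff that can be probed but not fully verified, which is the limitation noted above.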

Limited standardized methods for incorporating implementation outcomes (e.g., acceptability, feasibility) into final guideline development decision-making

There are few detailed methodological guidelines for incorporating the implementation outcomes of acceptability, appropriateness, and adoption into guideline development processes. The incorporation of values and preferences is a cornerstone of the WHO evidence-to-decision framework; however, specific guidelines for assessing the acceptability, appropriateness, or adoption of strategies among those expected to enact or receive an intervention are less clearly defined [52,53]. The Confidence in the Evidence from Reviews of Qualitative research (GRADE-CERQual) tool can aid determination of the certainty of evidence from qualitative research, but with extensive mixed-methods and preference research emerging for HIV service delivery, detailed guidance on what evidence is needed, and what questions must be answered, to establish "acceptability," for example, remains unclear [54].

Recommendations for future evidence synthesis for HIV implementation research

This reflection identified several critical areas for improvement in implementation research evidence generation, synthesis, and guideline development (Table 3). First, implementation considerations—including characterization of implementation strategies, implementation outcomes, hypothesized mechanisms of action, and context—should be incorporated more broadly in HIV implementation research study methods and dissemination efforts [12,13,55,56]. Second, adapted or new approaches to evidence synthesis are needed that address the limitations as well as the benefits of heterogeneity—in context, implementation strategy, or study design—in addition to the relevance of primary research to real-world settings [14]. Lastly, evidence rating tools that inform guidelines (e.g., GRADE) should specify how ratings may differ for questions of implementation as compared to efficacy, particularly regarding the role of high-quality non-randomized research, determinations of indirectness, and the robust assessment and incorporation of implementation outcomes (e.g., acceptability, adoption, feasibility, sustainability).

Table 3. Recommendations and areas for future methodological development to address evidence generation, synthesis, and guideline development challenges relevant to HIV implementation research.

https://doi.org/10.1371/journal.pmed.1004168.t003

The alignment of methods for the production, synthesis, and dissemination of evidence is critical to improving the utility and impact of implementation research for the HIV response. International guideline recommendations are routinely based on combinations of expert opinion and research evidence (guided by relevance and quality). If synthesized evidence does not adequately reflect the breadth and depth of implementation efforts, the capacity for guidelines to inform translation and real-world implementation is limited. Adapting current evidence generation and synthesis methods to answer efficacy, effectiveness, and implementation questions can support the production of guidelines that provide more comprehensive recommendations on what to implement, how to do it, and under what conditions.

References

  1. WHO. World Health Organization: Implementation tool for pre-exposure prophylaxis (PrEP). 2022 [cited 2022]. Available from: http://apps.who.int/iris/bitstream/handle/10665/255890/WHO-HIV-2017.19-eng.pdf;jsessionid=ACE3151798ED2A3C720A20476D8DCE69?sequence=1.
  2. WHO. World Health Organization: Service Delivery for the treatment and care of people living with HIV. 2021 [cited 2022 Aug 27]. Available from: https://www.who.int/publications/i/item/9789240023581.
  3. Hargreaves J, Dalal S, Rice B, Anderegg N, Bhattacharjee P, Gafos M, et al. Repositioning Implementation Science in the HIV Response: Looking Ahead From AIDS 2018. J Acquir Immune Defic Syndr. 2019;82.
  4. Schmidt HA, Rodolph M, Schaefer R, Baggaley R, Doherty M. Long-acting injectable cabotegravir: implementation science needed to advance this additional HIV prevention choice. J Int AIDS Soc. 2022;25(7):e25963. pmid:35903882; PubMed Central PMCID: PMC9334859.
  5. Hilton Boon M, Thomson H, Shaw B, Akl EA, Lhachimi SK, López-Alcalde J, et al. Challenges in applying the GRADE approach in public health guidelines and systematic reviews: a concept article from the GRADE Public Health Group. J Clin Epidemiol. 2021;135:42–53. Epub 2021/01/22. pmid:33476768; PubMed Central PMCID: PMC8352629.
  6. Geng EH, Peiris D, Kruk ME. Implementation science: Relevance in the real world without sacrificing rigor. PLoS Med. 2017;14(4):e1002288. pmid:28441435
  7. Westreich D, Edwards JK, Lesko CR, Cole SR, Stuart EA. Target Validity and the Hierarchy of Study Designs. Am J Epidemiol. 2019;188(2):438–443. pmid:30299451
  8. WHO. World Health Organization: Consolidated guidelines on HIV prevention, testing, treatment, service delivery and monitoring: recommendations for a public health approach. 2021 [cited 2022]. Available from: https://www.who.int/publications/i/item/9789240031593.
  9. WHO. World Health Organization: Guidelines on HIV self-testing and partner notification. 2019 [cited 2022]. Available from: https://www.who.int/hiv/pub/self-testing/hiv-self-testing-guidelines/en/.
  10. Eshun-Wilson I, Ford N, Le Tourneau N, Baral S, Schwartz S, Kemp C, et al. A Living Database of HIV Implementation Research (LIVE Project): Protocol for Rapid Living Reviews. JMIR Res Protoc. 2022;11(10):e37070. pmid:36197704
  11. Ford N, Geng E, Ellman T, Orrell C, Ehrenkranz P, Sikazwe I, et al. Emerging priorities for HIV service delivery. PLoS Med. 2020;17(2):e1003028. pmid:32059023
  12. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139. Epub 2013/12/03. pmid:24289295; PubMed Central PMCID: PMC3882890.
  13. Proctor EK, Silmere H, Raghavan R, Hovmand P, Aarons GA, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38. pmid:20957426
  14. Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015;350:h2147. pmid:25956159
  15. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94. Epub 2011/01/05. pmid:21195583.
  16. Higgins JPT, Thomas J. Cochrane Handbook for Systematic Reviews of Interventions. Version 6.0. Cochrane; 2019.
  17. Mirzazadeh A, Eshun-Wilson I, Thompson RR, Bonyani A, Kahn JG, Baral SD, et al. Interventions to reengage people living with HIV who are lost to follow-up from HIV treatment programs: A systematic review and meta-analysis. PLoS Med. 2022;19(3):e1003940. pmid:35290369
  18. Eshun-Wilson I, Awotiwon AA, Germann A, Amankwaa SA, Ford N, Schwartz S, et al. Effects of community-based antiretroviral therapy initiation models on HIV treatment outcomes: A systematic review and meta-analysis. PLoS Med. 2021;18(5):e1003646. pmid:34048443
  19. Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15(1):84. pmid:32988389
  20. Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q. 2011;89(2):167–205. pmid:21676020.
  21. Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19(2):e1003918. pmid:35134069
  22. Luque-Fernandez MA, Van Cutsem G, Goemaere E, Hilderbrand K, Schomaker M, Mantangana N, et al. Effectiveness of patient adherence groups as a model of care for stable patients on antiretroviral therapy in Khayelitsha, Cape Town, South Africa. PLoS ONE. 2013;8(2):e56088. Epub 2013/02/19. pmid:23418518; PubMed Central PMCID: PMC3571960.
  23. Pellecchia U, Nundwe S, Bwanali A, Zamadenga B, Metcalf CA, Bygrave H, et al. "We are part of a family". Benefits and limitations of community ART groups (CAGs) in Thyolo, Malawi: a qualitative study. J Int AIDS Soc. 2017;20:21374. pmid:28406273
  24. Decroo T, Koole O, Remartinez D, dos Santos N, Dezembro S, Jofrisse M, et al. Four-year retention and risk factors for attrition among members of community ART groups in Tete, Mozambique. Trop Med Int Health. 2014;19(5):514–21. Epub 2014/06/06. pmid:24898272.
  25. Fatti G, Ngorima-Mabhena N, Mothibi E, Muzenda T, Choto R, Kasu T, et al. Outcomes of Three- Versus Six-Monthly Dispensing of Antiretroviral Treatment (ART) for Stable HIV Patients in Community ART Refill Groups: A Cluster-Randomized Trial in Zimbabwe. J Acquir Immune Defic Syndr. 2020;84(2):162–72. Epub 2020/02/26. pmid:32097252; PubMed Central PMCID: PMC7172979.
  26. Tukei BB, Fatti G, Tiam A, Ngorima-Mabhena N, Tukei VJ, Tshabalala I, et al. Twelve-Month Outcomes of Community-Based Differentiated Models of Multimonth Dispensing of ART Among Stable HIV-Infected Adults in Lesotho: A Cluster-Randomized Noninferiority Trial. J Acquir Immune Defic Syndr. 2020;85(3):280–91. Epub 2020/07/16. pmid:32665460.
  27. Edwards N, Barker PM. The importance of context in implementation research. J Acquir Immune Defic Syndr. 2014;67 Suppl 2:S157–S162. pmid:25310123.
  28. Hontelez JAC, Bulstra CA, Yakusik A, Lamontagne E, Bärnighausen TW, Atun R. Evidence-based policymaking when evidence is incomplete: The case of HIV programme integration. PLoS Med. 2021;18(11):e1003835. pmid:34752470
  29. Mehrotra ML, Petersen ML, Geng EH. Understanding HIV Program Effects: A Structural Approach to Context Using the Transportability Framework. J Acquir Immune Defic Syndr. 2019;82(Suppl 3):S199–S205. pmid:31764255.
  30. Pfadenhauer LM, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, et al. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implement Sci. 2017;12(1):21. pmid:28202031
  31. May CR, Johnson M, Finch T. Implementation, context and complexity. Implement Sci. 2016;11(1):141. pmid:27756414
  32. Rucinski K, Masankha Banda L, Olawore O, Akolo C, Zakaliya A, Chilongozi D, et al. HIV Testing Approaches to Optimize Prevention and Treatment for Key and Priority Populations in Malawi. Open Forum Infect Dis. 2022;9(4):ofac038. Epub 20220307. pmid:35265725; PubMed Central PMCID: PMC8900928.
  33. Olawore O, Astatke H, Lillie T, Persaud N, Lyons C, Kamali D, et al. Peer Recruitment Strategies for Female Sex Workers Not Engaged in HIV Prevention and Treatment Services in Côte d’Ivoire: Program Data Analysis. JMIR Public Health Surveill. 2020;6(4):e18000. Epub 20201001. pmid:33001039; PubMed Central PMCID: PMC7563635.
  34. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50. pmid:22310560
  35. Deeks JJ, Higgins J, Altman DG. Chapter 10: Analysing data and undertaking meta-analyses. In: Cochrane Handbook for Systematic Reviews of Interventions. Version 6.2. Cochrane; 2021.
  36. Eshun-Wilson I, Jamil MS, Witzel TC, Glidden DV, Johnson C, Le Tourneau N, et al. A Systematic Review and Network Meta-analyses to Assess the Effectiveness of Human Immunodeficiency Virus (HIV) Self-testing Distribution Strategies. Clin Infect Dis. 2021;73(4):e1018–e1028. pmid:34398952
  37. Chaimani A, Caldwell DM, Li T, Higgins JPT, Salanti G. Chapter 11: Undertaking network meta-analyses. In: Cochrane Handbook for Systematic Reviews of Interventions. Version 6.2. Cochrane; 2019.
  38. Geng EH, Petersen ML. Network meta-analyses: powerful but not without perils. Lancet HIV. 2014;1(3):e95–6. Epub 2015/10/02. pmid:26424122.
  39. Jamil MS, Eshun-Wilson I, Witzel TC, Siegfried N, Figueroa C, Chitembo L, et al. Examining the effects of HIV self-testing compared to standard HIV testing services in the general population: A systematic review and meta-analysis. EClinicalMedicine. 2021;38:100991. pmid:34278282
  40. Witzel TC, Eshun-Wilson I, Jamil MS, Tilouche N, Figueroa C, Johnson CC, et al. Comparing the effects of HIV self-testing to standard HIV testing for key populations: a systematic review and meta-analysis. BMC Med. 2020;18(1):381. pmid:33267890
  41. Guyatt GH, Oxman AD, Sultan S, Glasziou P, Akl EA, Alonso-Coello P, et al. GRADE guidelines: 9. Rating up the quality of evidence. J Clin Epidemiol. 2011;64(12):1311–1316. pmid:21802902
  42. Rothwell PM. External validity of randomised controlled trials: “to whom do the results of this trial apply?”. Lancet (London, England). 2005;365(9453):82–93. pmid:15639683.
  43. Bulstra CA, Hontelez JAC, Otto M, Stepanova A, Lamontagne E, Yakusik A, et al. Integrating HIV services and other health services: A systematic review and meta-analysis. PLoS Med. 2021;18(11):e1003836. Epub 20211109. pmid:34752477; PubMed Central PMCID: PMC8577772.
  44. Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, et al. GRADE guidelines: 8. Rating the quality of evidence—indirectness. J Clin Epidemiol. 2011;64(12):1303–1310. pmid:21802903
  45. Oladele EA, Badejo OA, Obanubi C, Okechukwu EF, James E, Owhonda G, et al. Bridging the HIV treatment gap in Nigeria: examining community antiretroviral treatment models. J Int AIDS Soc. 2018;21(4):e25108. Epub 2018/04/21. pmid:29675995; PubMed Central PMCID: PMC5909112.
  46. Bor J, Fox MP, Rosen S, Venkataramani A, Tanser F, Pillay D, et al. Treatment eligibility and retention in clinical HIV care: A regression discontinuity study in South Africa. PLoS Med. 2017;14(11):e1002463. pmid:29182641
  47. Mody A, Sikazwe I, Czaicki NL, Wa Mwanza M, Savory T, Sikombe K, et al. Estimating the real-world effects of expanding antiretroviral treatment eligibility: Evidence from a regression discontinuity analysis in Zambia. PLoS Med. 2018;15(6):e1002574. pmid:29870531
  48. Mody A, Sikazwe I, Namwase AS, Wa Mwanza M, Savory T, Mwila A, et al. Effects of implementing universal and rapid HIV treatment on initiation of antiretroviral therapy and retention in care in Zambia: a natural experiment using regression discontinuity. Lancet HIV. 2021. Epub 2021/10/18. pmid:34656208.
  49. Reeves BC, Deeks JJ, Higgins JPT, Shea B, Tugwell P, Wells GA. Chapter 24: Including non-randomized studies on intervention effects. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 6.2 (updated February 2021). Cochrane; 2021.
  50. Craig P, Katikireddi SV, Leyland A, Popham F. Natural Experiments: An Overview of Methods, Approaches, and Contributions to Public Health Intervention Research. Annu Rev Public Health. 2017;38(1):39–56. pmid:28125392
  51. Bärnighausen T, Tugwell P, Røttingen JA, Shemilt I, Rockers P, Geldsetzer P, et al. Quasi-experimental study designs series-paper 4: uses and value. J Clin Epidemiol. 2017;89:21–9. Epub 20170330. pmid:28365303.
  52. Luoto J, Maglione MA, Johnsen B, Chang C. A Comparison of Frameworks Evaluating Evidence for Global Health Interventions. PLoS Med. 2013;10(7):e1001469. pmid:23874159
  53. Moberg J, Oxman AD, Rosenbaum S, Schünemann HJ, Guyatt G, Flottorp S, et al. The GRADE Evidence to Decision (EtD) framework for health system and public health decisions. Health Res Policy Syst. 2018;16(1):45. Epub 2018/05/31. pmid:29843743; PubMed Central PMCID: PMC5975536.
  54. Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gulmezoglu M, et al. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895. pmid:26506244; PubMed Central PMCID: PMC4624425.
  55. Nilsen P, Bernhardsson S. Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes. BMC Health Serv Res. 2019;19(1):189. pmid:30909897
  56. Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, et al. From Classification to Causality: Advancing Understanding of Mechanisms of Change in Implementation Science. Front Public Health. 2018;6:136. Epub 2018/06/06. pmid:29868544; PubMed Central PMCID: PMC5949843.
  57. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24. Epub 2006/03/25. pmid:16557505.