Journal: PLoS Med

Abstract

Diagnostic performance of congestion score index evaluated from chest radiography for acute heart failure in the emergency department: A retrospective analysis from the PARADISE cohort.

Kobayashi M, Douair A, Duarte K, Jaeger D, ... Chouihed T, Girerd N
Background
Congestion score index (CSI), a semiquantitative evaluation of congestion on chest radiography (CXR), is associated with outcome in patients with heart failure (HF). However, its diagnostic value in patients admitted for acute dyspnea has yet to be evaluated.
Methods and findings
The diagnostic value of CSI for acute HF (AHF; adjudicated from patients' discharge files) was studied in the Pathway of dyspneic patients in Emergency (PARADISE) cohort, including patients aged 18 years or older admitted for acute dyspnea in the emergency department (ED) of the Nancy University Hospital (France) between January 1, 2015 and December 31, 2015. CSI (ranging from 0 to 3) was evaluated using a semiquantitative method on CXR in consecutive patients admitted for acute dyspnea in the ED. Results were validated in independent cohorts (N = 224). Of the 1,333 patients, mean (standard deviation [SD]) age was 72.0 (18.5) years, 686 (51.5%) were men, and mean (SD) CSI was 1.42 (0.79). Patients with higher CSI had more cardiovascular comorbidities, more severe congestion, higher B-type natriuretic peptide (BNP), poorer renal function, and more respiratory acidosis. AHF was diagnosed in 289 (21.7%) patients. CSI was significantly associated with AHF diagnosis (adjusted odds ratio [OR] per 0.1-unit CSI increase 1.19, 95% CI 1.16-1.22, p < 0.001) after adjustment for a clinical diagnostic score including age, comorbidity burden, dyspnea, and clinical congestion. The diagnostic accuracy of CSI for AHF was >0.80, whether alone (area under the receiver operating characteristic curve [AUROC] 0.84, 95% CI 0.82-0.86) or in addition to the clinical model (AUROC 0.87, 95% CI 0.85-0.90). CSI improved diagnostic accuracy on top of clinical variables (net reclassification improvement [NRI] = 94.9%) and clinical variables plus BNP (NRI = 55.0%). Similar diagnostic accuracy was observed in the validation cohorts (AUROC 0.75, 95% CI 0.68-0.82). The key limitation of our derivation cohort was its single-center and retrospective design, which was counterbalanced by validation in the independent cohorts.
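As a rough aid to interpreting the reported effect size (this arithmetic is not part of the study's analysis): an adjusted OR of 1.19 per 0.1-unit CSI increase compounds multiplicatively under the logistic model, so a full 1-unit CSI increase implies roughly 5.7-fold odds of AHF. A minimal sketch, using only the OR quoted above:

```python
# Illustrative arithmetic only, not the authors' code: rescaling an odds
# ratio reported per 0.1-unit increase to a different increment. Under a
# logistic model the OR compounds multiplicatively with the increment.

def scaled_odds_ratio(or_per_step: float, step: float, delta: float) -> float:
    """OR implied for a change of `delta` units, given an OR per `step` units."""
    return or_per_step ** (delta / step)

or_per_01 = 1.19  # reported adjusted OR per 0.1-unit CSI increase
or_full_unit = scaled_odds_ratio(or_per_01, step=0.1, delta=1.0)
print(round(or_full_unit, 1))  # about 5.7-fold odds per 1-unit CSI increase
```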
Conclusions
In this study, we observed that a systematic semiquantitative assessment of radiographic pulmonary congestion showed high diagnostic value for AHF in dyspneic patients. Better use of CXR may provide an inexpensive, widely and readily available method for AHF triage in the ED.



PLoS Med: 30 Oct 2020; 17:e1003419 | PMID: 33175832
Abstract

Association between country preparedness indicators and quality clinical care for cardiovascular disease risk factors in 44 lower- and middle-income countries: A multicountry analysis of survey data.

Davies JI, Reddiar SK, Hirschhorn LR, Ebert C, ... Manne-Goehler J, Jaacks LM
Background
Cardiovascular diseases are leading causes of death globally, and health systems that deliver quality clinical care are needed to manage an increasing number of people with risk factors for these diseases. Indicators of preparedness of countries to manage cardiovascular disease risk factors (CVDRFs) are regularly collected by ministries of health and global health agencies. We aimed to assess whether these indicators are associated with patient receipt of quality clinical care.
Methods and findings
We did a secondary analysis of cross-sectional, nationally representative, individual-patient data from 187,552 people with hypertension (mean age 48.1 years, 53.5% female) living in 43 low- and middle-income countries (LMICs) and 40,795 people with diabetes (mean age 52.2 years, 57.7% female) living in 28 LMICs on progress through cascades of care (condition diagnosed, treated, or controlled) for diabetes or hypertension, to indicate outcomes of provision of quality clinical care. Data were extracted from national-level World Health Organization (WHO) Stepwise Approach to Surveillance (STEPS), or other similar household surveys, conducted between July 2005 and November 2016. We used mixed-effects logistic regression to estimate associations between each quality clinical care outcome and indicators of country development (gross domestic product [GDP] per capita or Human Development Index [HDI]); national capacity for the prevention and control of noncommunicable diseases ('NCD readiness indicators' from surveys done by WHO); health system finance (domestic government expenditure on health [as percentage of GDP], private, and out-of-pocket expenditure on health [both as percentage of current health expenditure]); and health service readiness (number of physicians, nurses, or hospital beds per 1,000 people) and performance (neonatal mortality rate). All models were adjusted for individual-level predictors including age, sex, and education. In an exploratory analysis, we tested whether national-level data on facility preparedness for diabetes were positively associated with outcomes. Associations were inconsistent between indicators and quality clinical care outcomes. For hypertension, GDP and HDI were both positively associated with each outcome.
Of the 33 relationships tested between NCD readiness indicators and outcomes, only two showed a significant positive association: presence of guidelines with being diagnosed (odds ratio [OR], 1.86 [95% CI 1.08-3.21], p = 0.03) and availability of funding with being controlled (OR, 2.26 [95% CI 1.09-4.69], p = 0.03). Hospital beds (OR, 1.14 [95% CI 1.02-1.27], p = 0.02), nurses/midwives (OR, 1.24 [95% CI 1.06-1.44], p = 0.006), and physicians (OR, 1.21 [95% CI 1.11-1.32], p < 0.001) per 1,000 people were positively associated with being diagnosed and, similarly, with being treated; and the number of physicians was additionally associated with being controlled (OR, 1.12 [95% CI 1.01-1.23], p = 0.03). For diabetes, no positive associations were seen between NCD readiness indicators and outcomes. There was no association between country development, health service finance, or health service performance and readiness indicators and any outcome, apart from GDP (OR, 1.70 [95% CI 1.12-2.59], p = 0.01), HDI (OR, 1.21 [95% CI 1.01-1.44], p = 0.04), and number of physicians per 1,000 people (OR, 1.28 [95% CI 1.09-1.51], p = 0.003), which were associated with being diagnosed. Six countries had data on cascades of care and nationwide-level data on facility preparedness. Of the 27 associations tested between facility preparedness indicators and outcomes, the only association that was significant was having metformin available, which was positively associated with treatment (OR, 1.35 [95% CI 1.01-1.81], p = 0.04). The main limitation was use of blood pressure measurement on a single occasion to diagnose hypertension and a single blood glucose measurement to diagnose diabetes.
Conclusion
In this study, we observed that indicators of country preparedness to deal with CVDRFs are poor proxies for quality clinical care received by patients for hypertension and diabetes. The major implication is that assessments of countries' preparedness to manage CVDRFs should not rely on proxies; rather, they should involve direct assessment of quality clinical care.



PLoS Med: 30 Oct 2020; 17:e1003268 | PMID: 33170842
Abstract

The risk of Plasmodium vivax parasitaemia after P. falciparum malaria: An individual patient data meta-analysis from the WorldWide Antimalarial Resistance Network.

Hossain MS, Commons RJ, Douglas NM, Thriemer K, ... Simpson JA, Price RN
Background
There is a high risk of Plasmodium vivax parasitaemia following treatment of falciparum malaria. Our study aimed to quantify this risk and the associated determinants using an individual patient data meta-analysis in order to identify populations in which a policy of universal radical cure, combining artemisinin-based combination therapy (ACT) with a hypnozoitocidal antimalarial drug, would be beneficial.
Methods and findings
A systematic review of Medline, Embase, Web of Science, and the Cochrane Database of Systematic Reviews identified efficacy studies of uncomplicated falciparum malaria treated with ACT that were undertaken in regions coendemic for P. vivax between 1 January 1960 and 5 January 2018. Data from eligible studies were pooled using standardised methodology. The risk of P. vivax parasitaemia at days 42 and 63 and associated risk factors were investigated by multivariable Cox regression analyses. Study quality was assessed using a tool developed by the Joanna Briggs Institute. The study was registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42018097400). In total, 42 studies enrolling 15,341 patients were included in the analysis, including 30 randomised controlled trials and 12 cohort studies. Overall, 14,146 (92.2%) patients had P. falciparum monoinfection and 1,195 (7.8%) mixed infection with P. falciparum and P. vivax. The median age was 17.0 years (interquartile range [IQR] = 9.0-29.0 years; range = 0-80 years), with 1,584 (10.3%) patients younger than 5 years. In total, 2,711 (17.7%) patients were treated with artemether-lumefantrine (AL, 13 studies), 651 (4.2%) with artesunate-amodiaquine (AA, 6 studies), 7,340 (47.8%) with artesunate-mefloquine (AM, 25 studies), and 4,639 (30.2%) with dihydroartemisinin-piperaquine (DP, 16 studies). A total of 14,537 patients (94.8%) were enrolled from the Asia-Pacific region, 684 (4.5%) from the Americas, and 120 (0.8%) from Africa. At day 42, the cumulative risk of vivax parasitaemia following treatment of P. falciparum was 31.1% (95% CI 28.9-33.4) after AL, 14.1% (95% CI 10.8-18.3) after AA, 7.4% (95% CI 6.7-8.1) after AM, and 4.5% (95% CI 3.9-5.3) after DP. By day 63, the risks had risen to 39.9% (95% CI 36.6-43.3), 42.4% (95% CI 34.7-51.2), 22.8% (95% CI 21.2-24.4), and 12.8% (95% CI 11.4-14.5), respectively. In multivariable analyses, the highest rate of P. vivax parasitaemia over 42 days of follow-up was in patients residing in areas of short relapse periodicity (adjusted hazard ratio [AHR] = 6.2, 95% CI 2.0-19.5; p = 0.002); patients treated with AL (AHR = 6.2, 95% CI 4.6-8.5; p < 0.001), AA (AHR = 2.3, 95% CI 1.4-3.7; p = 0.001), or AM (AHR = 1.4, 95% CI 1.0-1.9; p = 0.028) compared with DP; and patients who did not clear their initial parasitaemia within 2 days (AHR = 1.8, 95% CI 1.4-2.3; p < 0.001). The analysis was limited by heterogeneity between study populations and lack of data from very low transmission settings. Study quality was high.
Conclusions
In this meta-analysis, we found a high risk of P. vivax parasitaemia after treatment of P. falciparum malaria that varied significantly between studies. These P. vivax infections are likely attributable to relapses that could be prevented with radical cure including a hypnozoitocidal agent; however, the benefits of such a novel strategy will vary considerably between geographical areas.



PLoS Med: 30 Oct 2020; 17:e1003393 | PMID: 33211712
Abstract

Circulating tumour DNA in metastatic breast cancer to guide clinical trial enrolment and precision oncology: A cohort study.

Zivanovic Bujak A, Weng CF, Silva MJ, Yeung M, ... Wong SQ, Dawson SJ
Background
Metastatic breast cancer (mBC) is a heterogeneous disease with increasing availability of targeted therapies as well as emerging genomic markers of therapeutic resistance, necessitating timely and accurate molecular characterization of disease. As a minimally invasive test, analysis of circulating tumour DNA (ctDNA) is well positioned for real-time genomic profiling to guide treatment decisions. Here, we report the results of a prospective testing program established to assess the feasibility of ctDNA analysis to guide clinical management of mBC patients.
Methods and findings
Two hundred thirty-four mBC patients (median age 54 years) were enrolled between June 2015 and October 2018 at the Peter MacCallum Cancer Centre, Melbourne, Australia. Median follow-up was 15 months (range 1-46). All patient samples at the time of enrolment were analysed in real time for the presence of somatic mutations. Longitudinal plasma testing during the course of patient management was also undertaken in a subset of patients (n = 67, 28.6%), according to clinician preference, for repeated molecular profiling or disease monitoring. Detection of somatic mutations from patient plasma was performed using a multiplexed droplet digital PCR (ddPCR) approach to identify hotspot mutations in PIK3CA, ESR1, ERBB2, and AKT1. In parallel, subsets of samples were also analysed via next-generation sequencing (targeted panel sequencing and low-coverage whole-genome sequencing [LC-WGS]). The sensitivity of ddPCR and targeted panel sequencing to identify actionable mutations was compared. Results were discussed at a multidisciplinary breast cancer meeting prior to treatment decisions. ddPCR and targeted panel sequencing identified at least 1 actionable mutation at baseline in 80/234 (34.2%) and 62/159 (39.0%) of patients tested, respectively. Combined, both methods detected an actionable alteration in 104/234 patients (44.4%) through baseline or serial ctDNA testing. LC-WGS was performed on 27 patients from the cohort, uncovering several recurrently amplified regions including 11q13.3 encompassing CCND1. Increasing ctDNA levels were associated with inferior overall survival, whether assessed by ddPCR, targeted sequencing, or LC-WGS. Overall, the ctDNA results changed clinical management in 40 patients including the direct recruitment of 20 patients to clinical trials. Limitations of the study were that it was conducted at a single site and that 31.3% of participants were lost to follow-up.
Conclusion
In this study, we found prospective ctDNA testing to be a practical and feasible approach that can guide clinical trial enrolment and patient management in mBC.



PLoS Med: 29 Sep 2020; 17:e1003363 | PMID: 33001984
Abstract

Trends in prevalence of acute stroke impairments: A population-based cohort study using the South London Stroke Register.

Clery A, Bhalla A, Rudd AG, Wolfe CDA, Wang Y
Background
Acute stroke impairments often result in poor long-term outcome for stroke survivors. The aim of this study was to estimate the trends over time in the prevalence of these acute stroke impairments.
Methods and findings
All first-ever stroke patients recorded in the South London Stroke Register (SLSR) between 2001 and 2018 were included in this cohort study. Multivariable Poisson regression models with robust error variance were used to estimate the adjusted prevalence of 8 acute impairments, across six 3-year time cohorts. Prevalence ratios comparing impairments over time were also calculated, stratified by age, sex, ethnicity, and aetiological classification (Trial of Org 10172 in Acute Stroke Treatment [TOAST]). A total of 4,683 patients had a stroke between 2001 and 2018. Mean age was 68.9 years, 48% were female, and 64% were White. After adjustment for demographic factors, pre-stroke risk factors, and stroke subtype, the prevalence of 3 out of the 8 acute impairments declined during the 18-year period, including limb motor deficit (from 77% [95% CI 74%-81%] to 62% [56%-68%], p < 0.001), dysphagia (37% [33%-41%] to 15% [12%-20%], p < 0.001), and urinary incontinence (43% [39%-47%] to 29% [24%-35%], p < 0.001). Declines in limb impairment over time were 2 times greater in men than women (prevalence ratio 0.73 [95% CI 0.64-0.84] and 0.87 [95% CI 0.77-0.98], respectively). Declines also tended to be greater in younger patients. Stratified by TOAST classification, the prevalence of all impairments was high for large artery atherosclerosis (LAA), cardioembolism (CE), and stroke of undetermined aetiology. Conversely, small vessel occlusions (SVOs) had low levels of all impairments except for limb motor impairment and dysarthria. While we have assessed 8 key acute stroke impairments, this study is limited by a focus on physical impairments, although cognitive impairments are equally important to understand. In addition, this is an inner-city cohort, which has unique characteristics compared to other populations.
Conclusions
In this study, we found that stroke patients in the SLSR presented with a complex range of acute impairments, of which limb motor deficit, dysphagia, and incontinence declined between 2001 and 2018. These reductions have not been uniform across all patient groups, with women and the older population, in particular, seeing fewer reductions.



PLoS Med: 29 Sep 2020; 17:e1003366 | PMID: 33035232
Abstract

Association of technologically assisted integrated care with clinical outcomes in type 2 diabetes in Hong Kong using the prospective JADE Program: A retrospective cohort analysis.

Lim LL, Lau ESH, Ozaki R, Chung H, ... Luk AOY, Chan JCN
Background
Diabetes outcomes are influenced by host factors, settings, and care processes. We examined the association of data-driven integrated care assisted by information and communications technology (ICT) with clinical outcomes in type 2 diabetes in public and private healthcare settings.
Methods and findings
The web-based Joint Asia Diabetes Evaluation (JADE) platform provides a protocol to guide data collection for issuing a personalized JADE report including risk categories (1-4, low-high), 5-year probabilities of cardiovascular-renal events, and trends and targets of 4 risk factors with tailored decision support. The JADE program is a prospective cohort study implemented in a naturalistic environment where patients underwent nurse-led structured evaluation (blood/urine/eye/feet) in public and private outpatient clinics and diabetes centers in Hong Kong. We retrospectively analyzed the data of 16,624 Han Chinese patients with type 2 diabetes who were enrolled in 2007-2015. In the public setting, the non-JADE group (n = 3,587) underwent structured evaluation for risk factors and complications only, while the JADE (n = 9,601) group received a JADE report with group empowerment by nurses. In a community-based, nurse-led, university-affiliated diabetes center (UDC), the JADE-Personalized (JADE-P) group (n = 3,436) received a JADE report, personalized empowerment, and annual telephone reminder for reevaluation and engagement. The primary composite outcome was time to the first occurrence of cardiovascular-renal diseases, all-site cancer, and/or death, based on hospitalization data censored on 30 June 2017. During 94,311 person-years of follow-up in 2007-2017, 7,779 primary events occurred. Compared with the JADE group (136.22 cases per 1,000 patient-years [95% CI 132.35-140.18]), the non-JADE group had higher (145.32 [95% CI 138.68-152.20]; P = 0.020) while the JADE-P group had lower event rates (70.94 [95% CI 67.12-74.91]; P < 0.001). The adjusted hazard ratios (aHRs) for the primary composite outcome were 1.22 (95% CI 1.15-1.30) and 0.70 (95% CI 0.66-0.75), respectively, independent of risk profiles, education levels, drug usage, self-care, and comorbidities at baseline. 
We reported consistent results in propensity-score-matched analyses and after accounting for loss to follow-up. Potential limitations include its nonrandomized design that precludes causal inference, residual confounding, and participation bias.
Conclusions
ICT-assisted integrated care was associated with a reduction in clinical events, including death, in patients with type 2 diabetes in public and private healthcare settings.



PLoS Med: 29 Sep 2020; 17:e1003367 | PMID: 33007052
Abstract

Association between prehospital time and outcome of trauma patients in 4 Asian countries: A cross-national, multicenter cohort study.

Chen CH, Shin SD, Sun JT, Jamaluddin SF, ... Ma MH, Chiang WC
Background
Whether rapid transportation can benefit patients with trauma remains controversial. We determined the association between prehospital time and outcome to explore the concept of the "golden hour" for injured patients.
Methods and findings
We conducted a retrospective cohort study of trauma patients transported from the scene to hospitals by emergency medical service (EMS) from January 1, 2016, to November 30, 2018, using data from the Pan-Asia Trauma Outcomes Study (PATOS) database. Prehospital time intervals were categorized into response time (RT), scene-to-hospital time (SH), and total prehospital time (TPT). The outcomes were 30-day mortality and functional status at hospital discharge. Multivariable logistic regression was used to investigate the association of prehospital time and outcomes, adjusting for factors including age, sex, mechanism and type of injury, Injury Severity Score (ISS), Revised Trauma Score (RTS), and prehospital interventions. Overall, 24,365 patients from 4 countries (645 patients from Japan, 16,476 patients from Korea, 5,358 patients from Malaysia, and 1,886 patients from Taiwan) were included in the analysis. Among included patients, the median age was 45 years (lower quartile [Q1]-upper quartile [Q3]: 25-62), and 15,498 (63.6%) patients were male. Median RT, SH, and TPT were 20 (Q1-Q3: 12-39), 21 (Q1-Q3: 16-29), and 47 (Q1-Q3: 32-60) minutes, respectively. In all, 280 patients (1.1%) died within 30 days after injury. Prehospital time intervals were not associated with 30-day mortality. The adjusted odds ratios (aORs) per 10 minutes of RT, SH, and TPT were 0.99 (95% CI 0.92-1.06, p = 0.740), 1.08 (95% CI 1.00-1.17, p = 0.065), and 1.03 (95% CI 0.98-1.09, p = 0.236), respectively. However, longer prehospital time was detrimental to functional survival. The aORs of RT, SH, and TPT per 10-minute delay were 1.06 (95% CI 1.04-1.08, p < 0.001), 1.05 (95% CI 1.01-1.08, p = 0.007), and 1.06 (95% CI 1.04-1.08, p < 0.001), respectively. The key limitation of our study is the missing data inherent to the retrospective design. Another major limitation is the aggregate nature of the data from different countries and unaccounted confounders such as in-hospital management.
Conclusions
Longer prehospital time was not associated with an increased risk of 30-day mortality, but it may be associated with increased risk of poor functional outcomes in injured patients. This finding supports the concept of the "golden hour" for trauma patients during prehospital care in the countries studied.



PLoS Med: 29 Sep 2020; 17:e1003360 | PMID: 33022018
Abstract

Midwifery continuity of care versus standard maternity care for women at increased risk of preterm birth: A hybrid implementation-effectiveness, randomised controlled pilot trial in the UK.

Fernandez Turienzo C, Bick D, Briley AL, Bollard M, ... Sandall J,
Background
Midwifery continuity of care is the only health system intervention shown to reduce preterm birth (PTB) and improve perinatal survival, but no trial evidence exists for women with identified risk factors for PTB. We aimed to assess feasibility, fidelity, and clinical outcomes of a model of midwifery continuity of care linked with a specialist obstetric clinic for women considered at increased risk for PTB.
Methods and findings
We conducted a hybrid implementation-effectiveness, randomised, controlled, unblinded, parallel-group pilot trial at an inner-city maternity service in London (UK), in which pregnant women identified at increased risk of PTB were randomly assigned (1:1) to either midwifery continuity of antenatal, intrapartum, and postnatal care (Pilot study Of midwifery Practice in Preterm birth Including women's Experiences [POPPIE] group) or a standard care group (maternity care by different midwives working in designated clinical areas). Pregnant women attending for antenatal care at less than 24 weeks' gestation were eligible if they fulfilled one or more of the following criteria: previous cervical surgery, cerclage, premature rupture of membranes, PTB, or late miscarriage; previous short cervix or short cervix this pregnancy; or uterine abnormality and/or current smoker of tobacco. Feasibility outcomes included eligibility, recruitment and attrition rates, and fidelity of the model. The primary outcome was a composite of appropriate and timely interventions for the prevention and/or management of preterm labour and birth. We analysed by intention to treat. Between 9 May 2017 and 30 September 2018, 334 women were recruited; 169 women were allocated to the POPPIE group and 165 to the standard group. Mean maternal age was 31 years; 32% of the women were from Black, Asian, and ethnic minority groups; 70% were in employment; and 46% had a university degree. Nearly 70% of women lived in areas of social deprivation. More than a quarter of women had at least one pre-existing medical condition and multiple risk factors for PTB. More than 75% of antenatal and postnatal visits were provided by a named/partner midwife, and a midwife from the POPPIE team was present at 80% of births.
The incidence of the primary composite outcome showed no statistically significant difference between groups (POPPIE group 83.3% versus standard group 84.7%; risk ratio 0.98 [95% confidence interval (CI) 0.90 to 1.08]; p = 0.742). Infants in the POPPIE group were significantly more likely to have skin-to-skin contact after birth, to have it for a longer time, and to breastfeed immediately after birth and at hospital discharge. There were no differences in other secondary outcomes. The number of serious adverse events was similar in both groups and unrelated to the intervention (POPPIE group 6 versus standard group 5). Limitations of this study included the limited power and the nonmasking of group allocation; however, study assignment was masked to the statistician and researchers who analysed the data.
Conclusions
In this study, we found that it is feasible to set up, and achieve fidelity of, a model of midwifery continuity of care linked with specialist obstetric care for women at increased risk of PTB in an inner-city maternity service in London (UK), but we observed no impact on most outcomes for this population group. Larger, appropriately powered trials are needed, including in other settings, to evaluate the impact of relational continuity on disadvantaged communities, including women with complex social factors and social vulnerability, and to test its hypothesised mechanisms of effect: increased trust and engagement, improved care coordination, and earlier referral.
Trial registration
We prospectively registered the pilot trial on the UK Clinical Research Network Portfolio Database (ID number: 31951, 24 April 2017). We additionally registered the trial on the International Standard Randomised Controlled Trial Number registry (ISRCTN 37733900, 21 August 2017), before trial recruitment was completed (30 September 2018), when informed that a pilot trial also required prospective registration in a primary clinical trial registry recognised by WHO and the International Committee of Medical Journal Editors (ICMJE). The protocol as registered and published has remained unchanged, and the analysis conforms to the original plan.



PLoS Med: 29 Sep 2020; 17:e1003350 | PMID: 33022010
Abstract

Socioeconomic level and associations between heat exposure and all-cause and cause-specific hospitalization in 1,814 Brazilian cities: A nationwide case-crossover study.

Xu R, Zhao Q, Coelho MSZS, Saldiva PHN, ... Li S, Guo Y
Background
Heat exposure, which will increase with global warming, has been linked to increased risk of a range of types of cause-specific hospitalizations. However, little is known about socioeconomic disparities in vulnerability to heat. We aimed to evaluate whether there were socioeconomic disparities in vulnerability to heat-related all-cause and cause-specific hospitalization among Brazilian cities.
Methods and findings
We collected daily hospitalization and weather data in the hot season (the city-specific 4 consecutive hottest months of each year) during 2000-2015 from 1,814 Brazilian cities covering 78.4% of the Brazilian population. A time-stratified case-crossover design modeled by quasi-Poisson regression and a distributed lag model was used to estimate city-specific heat-hospitalization associations. Meta-analysis was then used to synthesize city-specific estimates according to different socioeconomic quartiles or levels. We included 49 million hospitalizations (58.5% female; median [interquartile range] age: 33.3 [19.8-55.7] years). For cities of lower middle income (LMI), upper middle income (UMI), and high income (HI) according to the World Bank's classification, every 5°C increase in daily mean temperature during the hot season was associated with a 5.1% (95% CI 4.4%-5.7%, P < 0.001), 3.7% (3.3%-4.0%, P < 0.001), and 2.6% (1.7%-3.4%, P < 0.001) increase in all-cause hospitalization, respectively. The inter-city socioeconomic disparities in the association were strongest for children and adolescents (0-19 years) (increased all-cause hospitalization risk with every 5°C increase [95% CI]: 9.9% [8.7%-11.1%], P < 0.001, in LMI cities versus 5.2% [4.1%-6.3%], P < 0.001, in HI cities).
The disparities were particularly evident for hospitalization due to certain diseases, including ischemic heart disease (increase in cause-specific hospitalization risk with every 5°C increase [95% CI]: 5.6% [-0.2% to 11.8%], P = 0.060, in LMI cities versus 0.5% [-2.1% to 3.1%], P = 0.717, in HI cities), asthma (3.7% [0.3%-7.1%], P = 0.031, versus -6.4% [-12.1% to -0.3%], P = 0.041), pneumonia (8.0% [5.6%-10.4%], P < 0.001, versus 3.8% [1.1%-6.5%], P = 0.005), renal diseases (9.6% [6.2%-13.1%], P < 0.001, versus 4.9% [1.8%-8.0%], P = 0.002), mental health conditions (17.2% [8.4%-26.8%], P < 0.001, versus 5.5% [-1.4% to 13.0%], P = 0.121), and neoplasms (3.1% [0.7%-5.5%], P = 0.011, versus -0.1% [-2.1% to 2.0%], P = 0.939). The disparities were similar when stratifying the cities by other socioeconomic indicators (urbanization rate, literacy rate, and household income). The main limitations were lack of data on personal exposure to temperature, and that our city-level analysis did not assess intra-city or individual-level socioeconomic disparities and could not exclude confounding effects of some unmeasured variables.
Conclusions
Less developed cities displayed stronger associations between heat exposure and all-cause hospitalizations and certain types of cause-specific hospitalizations in Brazil. This may exacerbate the existing geographical health and socioeconomic inequalities under a changing climate.



PLoS Med: 29 Sep 2020; 17:e1003369 | PMID: 33031393
Abstract

Genetics of height and risk of atrial fibrillation: A Mendelian randomization study.

Levin MG, Judy R, Gill D, Vujkovic M, ... Voight BF, Damrauer SM
Background
Observational studies have identified height as a strong risk factor for atrial fibrillation, but this finding may be limited by residual confounding. We aimed to examine genetic variation in height within the Mendelian randomization (MR) framework to determine whether height has a causal effect on risk of atrial fibrillation.
Methods and findings
In summary-level analyses, MR was performed using summary statistics from genome-wide association studies of height (GIANT/UK Biobank; 693,529 individuals) and atrial fibrillation (AFGen; 65,446 cases and 522,744 controls), finding that each 1-SD increase in genetically predicted height increased the odds of atrial fibrillation (odds ratio [OR] 1.34; 95% CI 1.29 to 1.40; p = 5 × 10⁻⁴²). This result remained consistent in sensitivity analyses with MR methods that make different assumptions about the presence of pleiotropy, and when accounting for the effects of traditional cardiovascular risk factors on atrial fibrillation. Individual-level phenome-wide association studies of height and a height genetic risk score were performed among 6,567 European-ancestry participants of the Penn Medicine Biobank (median age at enrollment 63 years, interquartile range 55-72; 38% female; recruitment 2008-2015), confirming prior observational associations between height and atrial fibrillation. Individual-level MR confirmed that each 1-SD increase in height increased the odds of atrial fibrillation, including adjustment for clinical and echocardiographic confounders (OR 1.89; 95% CI 1.50 to 2.40; p = 0.007). The main limitations of this study include potential bias from pleiotropic effects of genetic variants, and lack of generalizability of individual-level findings to non-European populations.
Conclusions
In this study, we observed evidence that height is likely a positive causal risk factor for atrial fibrillation. Further study is needed to determine whether risk prediction tools including height or anthropometric risk factors can be used to improve screening and primary prevention of atrial fibrillation, and whether biological pathways involved in height may offer new targets for treatment of atrial fibrillation.
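For readers unfamiliar with the summary-level approach, the workhorse of multi-variant MR is the inverse-variance-weighted (IVW) estimator: a weighted regression of variant-outcome effects on variant-exposure effects through the origin. A minimal sketch with hypothetical per-variant summary statistics (the numbers below are illustrative assumptions, not the study's data):

```python
import math

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted MR estimate: regress variant-outcome
    effects on variant-exposure effects through the origin, weighting
    each variant by 1/se^2 of its outcome effect."""
    num = sum(bx * by / se**2
              for bx, by, se in zip(beta_exposure, beta_outcome, se_outcome))
    den = sum(bx**2 / se**2
              for bx, se in zip(beta_exposure, se_outcome))
    beta = num / den            # causal effect on the log-odds scale
    se = (1.0 / den) ** 0.5     # standard error under fixed effects
    return beta, se

# Hypothetical variants: effect on height (SD/allele) and on AF (log-odds/allele)
bx = [0.10, 0.08, 0.12]
by = [0.030, 0.023, 0.037]
se = [0.010, 0.012, 0.011]
beta, se_b = ivw_mr(bx, by, se)
or_per_sd = math.exp(beta)  # odds ratio per 1-SD increase in height
```

In practice, sensitivity analyses (MR-Egger, weighted median, and similar) relax the IVW assumption that no variant acts on the outcome except through the exposure.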



PLoS Med: 29 Sep 2020; 17:e1003288 | PMID: 33031386
Abstract

Impact of providing free HIV self-testing kits on frequency of testing among men who have sex with men and their sexual partners in China: A randomized controlled trial.

Zhang C, Koniak-Griffin D, Qian HZ, Goldsamt LA, ... Brecht ML, Li X
Background
The HIV epidemic is rapidly growing among men who have sex with men (MSM) in China, yet HIV testing remains suboptimal. We aimed to determine the impact of HIV self-testing (HIVST) interventions on frequency of HIV testing among Chinese MSM and their sexual partners.
Methods and findings
This randomized controlled trial was conducted in 4 cities in Hunan Province, China. Sexually active and HIV-negative MSM were recruited from communities and randomly assigned (1:1) to intervention or control arms. Participants in the control arm had access to site-based HIV testing (SBHT); those in the intervention arm were provided with 2 free finger-prick-based HIVST kits at enrollment and could receive 2 to 4 kits delivered through express mail every 3 months for 1 year in addition to SBHT. They were encouraged to distribute HIVST kits to their sexual partners. The primary outcome was the number of HIV tests taken by MSM participants, and the secondary outcome was the number of HIV tests taken by their sexual partners during 12 months of follow-up. The effect size for the primary and secondary outcomes was evaluated as the standardized mean difference (SMD) in testing frequency between intervention and control arms. Between April 14, 2018, and June 30, 2018, 230 MSM were recruited. Mean age was 29 years; 77% attended college; 75% were single. The analysis population, comprising those who completed at least one follow-up questionnaire, included 110 participants (93%, 110/118) in the intervention arm and 106 (95%, 106/112) in the control arm. The average frequency of HIV tests per participant in the intervention arm (3.75) was higher than that in the control arm (1.80; SMD 1.26; 95% CI 0.97-1.55; P < 0.001). This difference was mainly due to the difference in HIVST between the 2 arms (intervention 2.18 versus control 0.41; SMD 1.30; 95% CI 1.01-1.59; P < 0.001), whereas the average frequency of SBHT was comparable (1.57 versus 1.40; SMD 0.14; 95% CI -0.13 to 0.40; P = 0.519). 
The average frequency of HIV tests among sexual partners of each participant was higher in the intervention than the control arm (2.65 versus 1.31; SMD 0.64; 95% CI 0.36-0.92; P < 0.001), and this difference was also due to the difference in HIVST between the 2 arms (intervention 1.41 versus control 0.36; SMD 0.75; 95% CI 0.47-1.04; P < 0.001) but not SBHT (1.24 versus 0.96; SMD 0.23; 95% CI -0.05 to 0.50; P = 0.055). Zero-inflated Poisson regression analyses showed that the likelihood of taking an HIV test among intervention participants was 2.1 times greater than that of control participants (adjusted rate ratio [RR] 2.10; 95% CI 1.75-2.53, P < 0.001), and their sexual partners were 1.55 times more likely to take HIV tests than those of control participants (RR 1.55; 95% CI 1.23-1.95, P < 0.001). During the study period, 3 participants in the intervention arm and none in the control arm tested HIV positive, and 8 sexual partners of intervention arm participants also tested positive. No other adverse events were reported. Limitations of this study included that the number of SBHT tests was based solely on participant self-report (although the self-reported number of HIVST tests in the intervention arm was validated) and that partner HIV testing was reported indirectly by participants because of difficulties in accessing each of their partners.
Conclusions
In this study, we found that providing free HIVST kits significantly increased testing frequency among Chinese MSM and effectively enlarged HIV testing coverage by enhancing partner HIV testing through distribution of kits within their sexual networks.
Trial registration
Chinese Clinical Trial Registry ChiCTR1800015584.
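The SMD effect size used throughout this trial is Cohen's d: the difference in mean testing frequency between arms divided by the pooled standard deviation. A minimal sketch with made-up per-participant test counts (illustrative only, not trial data):

```python
import statistics

def cohens_d(x, y):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    nx, ny = len(x), len(y)
    sx2 = statistics.variance(x)  # sample variance (n-1 denominator)
    sy2 = statistics.variance(y)
    pooled_sd = (((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / pooled_sd

# Hypothetical number of HIV tests per participant over follow-up
intervention = [4, 5, 3, 4, 3]
control = [2, 1, 2, 2, 2]
d = cohens_d(intervention, control)
```

A positive d indicates more frequent testing in the intervention arm; the trial's SMD of 1.26 for the primary outcome is a large effect by conventional benchmarks.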



PLoS Med: 29 Sep 2020; 17:e1003365 | PMID: 33035206
Abstract

Neurodevelopmental multimorbidity and educational outcomes of Scottish schoolchildren: A population-based record linkage cohort study.

Fleming M, Salim EE, Mackay DF, Henderson A, ... Cooper SA, Pell JP
Background
Neurodevelopmental conditions commonly coexist in children, but compared to adults, childhood multimorbidity attracts less attention in research and clinical practice. We previously reported that children treated for attention deficit hyperactivity disorder (ADHD) and depression have more school absences and exclusions, additional support needs, poorer attainment, and increased unemployment. They are also more likely to have coexisting conditions, including autism and intellectual disability. We investigated prevalence of neurodevelopmental multimorbidity (≥2 conditions) among Scottish schoolchildren and their educational outcomes compared to peers.
Methods and findings
We retrospectively linked 6 Scotland-wide databases to analyse 766,244 children (390,290 [50.9%] boys; 375,954 [49.1%] girls) aged 4 to 19 years (mean = 10.9) attending Scottish schools between 2009 and 2013. Children were distributed across all deprivation quintiles (most to least deprived: 22.7%, 20.1%, 19.3%, 19.5%, 18.4%). The majority (96.2%) were white ethnicity. We ascertained autism spectrum disorder (ASD) and intellectual disabilities from records of additional support needs and ADHD and depression through relevant encashed prescriptions. We identified neurodevelopmental multimorbidity (≥2 of these conditions) in 4,789 (0.6%) children, with ASD and intellectual disability the most common combination. On adjusting for sociodemographic (sex, age, ethnicity, deprivation) and maternity (maternal age, maternal smoking, sex-gestation-specific birth weight centile, gestational age, 5-minute Apgar score, mode of delivery, parity) factors, multimorbidity was associated with increased school absenteeism and exclusion, unemployment, and poorer exam attainment. Significant dose relationships were evident between number of conditions (0, 1, ≥2) and the last 3 outcomes. Compared to children with no conditions, children with 1 condition and children with 2 or more conditions had more absenteeism (1 condition adjusted incidence rate ratio [IRR] 1.28, 95% CI 1.27-1.30, p < 0.001 and 2 or more conditions adjusted IRR 1.23, 95% CI 1.20-1.28, p < 0.001), greater exclusion (adjusted IRR 2.37, 95% CI 2.25-2.48, p < 0.001 and adjusted IRR 3.04, 95% CI 2.74-3.38, p < 0.001), poorer attainment (adjusted odds ratio [OR] 3.92, 95% CI 3.63-4.23, p < 0.001 and adjusted OR 12.07, 95% CI 9.15-15.94, p < 0.001), and increased unemployment (adjusted OR 1.57, 95% CI 1.49-1.66, p < 0.001 and adjusted OR 2.11, 95% CI 1.83-2.45, p < 0.001). Associations remained after further adjustment for comorbid physical conditions and additional support needs. 
Coexisting depression was the strongest driver of absenteeism and coexisting ADHD the strongest driver of exclusion. Absence of formal primary care diagnoses was a limitation since ascertaining depression and ADHD from prescriptions omitted affected children receiving alternative or no treatment and some antidepressants can be prescribed for other indications.
Conclusions
Structuring clinical practice and training around single conditions may disadvantage children with neurodevelopmental multimorbidity, who we observed had significantly poorer educational outcomes compared to both children with 1 condition and children with no conditions.



PLoS Med: 29 Sep 2020; 17:e1003290 | PMID: 33048945
Abstract

Universal third-trimester ultrasonic screening using fetal macrosomia in the prediction of adverse perinatal outcome: A systematic review and meta-analysis of diagnostic test accuracy.

Moraitis AA, Shreeve N, Sovio U, Brocklehurst P, ... Papageorghiou A, Smith GC
Background
The effectiveness of screening for macrosomia is not well established. One of the critical elements of an effective screening program is the diagnostic accuracy of a test at predicting the condition. The objective of this study is to investigate the diagnostic effectiveness of universal ultrasonic fetal biometry in predicting the delivery of a macrosomic infant, shoulder dystocia, and associated neonatal morbidity in low- and mixed-risk populations.
Methods and findings
We conducted a predefined literature search in Medline, Excerpta Medica database (EMBASE), the Cochrane Library, and ClinicalTrials.gov from inception to May 2020. No language restrictions were applied. We included studies where the ultrasound was performed as part of universal screening and those that included low- and mixed-risk pregnancies, and excluded studies confined to high-risk pregnancies. We used the estimated fetal weight (EFW) (multiple formulas and thresholds) and the abdominal circumference (AC) to define suspected large for gestational age (LGA). Adverse perinatal outcomes included macrosomia (multiple thresholds), shoulder dystocia, and other markers of neonatal morbidity. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Meta-analysis was carried out using the hierarchical summary receiver operating characteristic (ROC) and the bivariate logit-normal (Reitsma) models. We identified 41 studies that met our inclusion criteria involving 112,034 patients in total. These included 11 prospective cohort studies (N = 9,986), one randomized controlled trial (RCT) (N = 367), and 29 retrospective cohort studies (N = 101,681). The quality of the studies was variable, and only three studies blinded the ultrasound findings to the clinicians. Both EFW >4,000 g (or 90th centile for the gestational age) and AC >36 cm (or 90th centile) had >50% sensitivity for predicting macrosomia (birthweight above 4,000 g or 90th centile) at birth, with positive likelihood ratios (LRs) of 8.74 (95% confidence interval [CI] 6.84-11.17) and 7.56 (95% CI 5.85-9.77), respectively. There was significant heterogeneity in the prediction of macrosomia, which could reflect the different study designs, the characteristics of the included populations, and differences in the formulas used. An EFW >4,000 g (or 90th centile) had 22% sensitivity at predicting shoulder dystocia, with a positive likelihood ratio of 2.12 (95% CI 1.34-3.35). 
There were insufficient data to analyze other markers of neonatal morbidity.
Conclusions
In this study, we found that suspected LGA is strongly predictive of the risk of delivering a large infant in low- and mixed-risk populations. However, it is only weakly (albeit statistically significantly) predictive of the risk of shoulder dystocia. There were insufficient data to analyze other markers of neonatal morbidity.
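The positive likelihood ratios reported here combine sensitivity and specificity: LR+ = sensitivity / (1 - specificity), i.e., how much a positive scan multiplies the pre-test odds of the outcome. A minimal sketch; the sensitivity and specificity values below are illustrative assumptions, not figures extracted from the review:

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio of a binary diagnostic test.
    LR+ = sensitivity / (1 - specificity); values >10 are conventionally
    considered strong evidence for the condition when the test is positive."""
    return sensitivity / (1.0 - specificity)

# Illustrative: ~56% sensitivity with ~93.6% specificity yields an LR+
# of about 8.75, the same order as the pooled EFW >4,000 g estimate.
lr = positive_lr(0.56, 0.936)
```

By contrast, the shoulder dystocia LR+ of roughly 2 means a positive scan barely doubles the pre-test odds, which is why the authors describe it as only weakly predictive.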



PLoS Med: 29 Sep 2020; 17:e1003190 | PMID: 33048935
Abstract

Health outcomes and cost-effectiveness of diversion programs for low-level drug offenders: A model-based analysis.

Bernard CL, Rao IJ, Robison KK, Brandeau ML
Background
Cycles of incarceration, drug abuse, and poverty undermine ongoing public health efforts to reduce overdose deaths and the spread of infectious disease in vulnerable populations. Jail diversion programs aim to divert low-level drug offenders toward community care resources, avoiding criminal justice costs and disruptions in treatment for HIV, hepatitis C virus (HCV), and drug abuse. We sought to assess the health benefits and cost-effectiveness of a jail diversion program for low-level drug offenders.
Methods and findings
We developed a microsimulation model, calibrated to King County, Washington, that captured the spread of HIV and HCV infections and incarceration and treatment systems as well as preexisting interventions such as needle and syringe programs and opiate agonist therapy. We considered an adult population of people who inject drugs (PWID), people who use drugs but do not inject (PWUD), men who have sex with men, and lower-risk heterosexuals. We projected discounted lifetime costs and quality-adjusted life years (QALYs) over a 10-year time horizon with and without a jail diversion program and calculated resulting incremental cost-effectiveness ratios (ICERs) from the health system and societal perspectives. We also tracked HIV and HCV infections, overdose deaths, and jail population size. Over 10 years, the program was estimated to reduce HIV and HCV incidence by 3.4% (95% CI 2.7%-4.0%) and 3.3% (95% CI 3.1%-3.4%), respectively, overdose deaths among PWID by 10.0% (95% CI 9.8%-10.8%), and jail population size by 6.3% (95% CI 5.9%-6.7%). When considering healthcare costs only, the program cost $25,500/QALY gained (95% CI $12,600-$48,600). Including savings from reduced incarceration (societal perspective) improved the ICER to $6,200/QALY gained (95% CI cost-saving to $24,300). Sensitivity analysis indicated that cost-effectiveness depends on diversion program participants accessing community programs such as needle and syringe programs, treatment for substance use disorder, and HIV and HCV treatment, as well as diversion program cost. A limitation of the analysis is data availability, as fewer data are available for diversion programs than for more established interventions aimed at people with substance use disorder. 
Additionally, like any model of a complex system, our model relies on simplifying assumptions: For example, we simplified pathways in the healthcare and criminal justice systems, modeled an average efficacy for substance use disorder treatment, and did not include costs associated with homelessness, unemployment, and breakdown in family structure.
Conclusions
We found that diversion programs for low-level drug offenders are likely to be cost-effective, generating savings in the criminal justice system while only moderately increasing healthcare costs. Such programs can reduce incarceration and its associated costs, and also avert overdose deaths and improve quality of life for PWID, PWUD, and the broader population (through reduced HIV and HCV transmission).



PLoS Med: 29 Sep 2020; 17:e1003239 | PMID: 33048929
Abstract

Evaluation of a pharmacist-led actionable audit and feedback intervention for improving medication safety in UK primary care: An interrupted time series analysis.

Peek N, Gude WT, Keers RN, Williams R, ... Avery AJ, Ashcroft DM
Background
We evaluated the impact of the pharmacist-led Safety Medication dASHboard (SMASH) intervention on medication safety in primary care.
Methods and findings
SMASH comprised (1) training of clinical pharmacists to deliver the intervention; (2) a web-based dashboard providing actionable, patient-level feedback; and (3) pharmacists reviewing individual at-risk patients, and initiating remedial actions or advising general practitioners on doing so. It was implemented in 43 general practices covering a population of 235,595 people in Salford (Greater Manchester), UK. All practices started receiving the intervention between 18 April 2016 and 26 September 2017. We used an interrupted time series analysis of rates (prevalence) of potentially hazardous prescribing and inadequate blood-test monitoring, comparing observed rates post-intervention to extrapolations from a 24-month pre-intervention trend. The number of people registered to participating practices and having 1 or more risk factors for being exposed to hazardous prescribing or inadequate blood-test monitoring at the start of the intervention was 47,413 (males: 23,073 [48.7%]; mean age: 60 years [standard deviation: 21]). At baseline, 95% of practices had rates of potentially hazardous prescribing (composite of 10 indicators) between 0.88% and 6.19%. The prevalence of potentially hazardous prescribing reduced by 27.9% (95% CI 20.3% to 36.8%, p < 0.001) at 24 weeks and by 40.7% (95% CI 29.1% to 54.2%, p < 0.001) at 12 months after introduction of SMASH. The rate of inadequate blood-test monitoring (composite of 2 indicators) reduced by 22.0% (95% CI 0.2% to 50.7%, p = 0.046) at 24 weeks; the change at 12 months (23.5%) was no longer significant (95% CI -4.5% to 61.6%, p = 0.127). After 12 months, 95% of practices had rates of potentially hazardous prescribing between 0.74% and 3.02%. Study limitations include the fact that practices were not randomised, and therefore unmeasured confounding may have influenced our findings.
Conclusions
The SMASH intervention was associated with reduced rates of potentially hazardous prescribing and inadequate blood-test monitoring in general practices. This reduction was sustained over 12 months after the start of the intervention for prescribing but not for monitoring of medication. There was a marked reduction in the variation in rates of hazardous prescribing between practices.



PLoS Med: 29 Sep 2020; 17:e1003286 | PMID: 33048923
Abstract

The potential health impact of restricting less-healthy food and beverage advertising on UK television between 05.30 and 21.00 hours: A modelling study.

Mytton OT, Boyland E, Adams J, Collins B, ... Viner RM, Cobiac LJ
Background
Restrictions on the advertising of less-healthy foods and beverages are seen as one measure to tackle childhood obesity and are under active consideration by the UK government. Whilst evidence increasingly links this advertising to excess calorie intake, understanding of the potential impact of advertising restrictions on population health is limited.
Methods and findings
We used a proportional multi-state life table model to estimate the health impact of prohibiting the advertising of food and beverages high in fat, sugar, and salt (HFSS) from 05.30 hours to 21.00 hours (5:30 AM to 9:00 PM) on television in the UK. We used the following data to parameterise the model: children's exposure to HFSS advertising from AC Nielsen and Broadcasters' Audience Research Board (2015); effect of less-healthy food advertising on acute caloric intake in children from a published meta-analysis; population numbers and all-cause mortality rates from the Human Mortality Database for the UK (2015); body mass index distribution from the Health Survey for England (2016); disability weights for estimating disability-adjusted life years (DALYs) from the Global Burden of Disease Study; and healthcare costs from NHS England programme budgeting data. The main outcome measures were change in the percentage of children (aged 5-17 years) with obesity defined using the International Obesity Task Force cut-points, and change in health status (DALYs). Monte Carlo analysis was used to estimate 95% uncertainty intervals (UIs). We estimate that if all HFSS advertising between 05.30 hours and 21.00 hours was withdrawn, UK children (n = 13,729,000) would see on average 1.5 fewer HFSS adverts per day and decrease caloric intake by 9.1 kcal (95% UI 0.5-17.7 kcal), which would reduce the number of children (aged 5-17 years) with obesity by 4.6% (95% UI 1.4%-9.5%) and with overweight (including obesity) by 3.6% (95% UI 1.1%-7.4%). This is equivalent to 40,000 (95% UI 12,000-81,000) fewer UK children with obesity, and 120,000 (95% UI 34,000-240,000) fewer with overweight. For children alive in 2015 (n = 13,729,000), this would avert 240,000 (95% UI 65,000-530,000) DALYs across their lifetime (i.e., followed from 2015 through to death), and result in a health-related net monetary benefit of £7.4 billion (95% UI £2.0 billion-£16 billion) to society. 
Under a scenario where all HFSS advertising is displaced to after 21.00 hours, rather than withdrawn, we estimate that the benefits would be reduced by around two-thirds. This is a modelling study and subject to uncertainty; we cannot fully and accurately account for all of the factors that would affect the impact of this policy if implemented. Whilst randomised trials show that children exposed to less-healthy food advertising consume more calories, there is uncertainty about the nature of the dose-response relationship between HFSS advertising and calorie intake.
Conclusions
Our results show that HFSS television advertising restrictions between 05.30 hours and 21.00 hours in the UK could make a meaningful contribution to reducing childhood obesity. We estimate that the impact on childhood obesity of this policy may be reduced by around two-thirds if adverts are displaced to after 21.00 hours rather than being withdrawn.



PLoS Med: 29 Sep 2020; 17:e1003212 | PMID: 33048922
Abstract

The association between circulating 25-hydroxyvitamin D metabolites and type 2 diabetes in European populations: A meta-analysis and Mendelian randomisation analysis.

Zheng JS, Luan J, Sofianopoulou E, Sharp SJ, ... Forouhi NG, Wareham NJ
Background
Prior research suggested a differential association of 25-hydroxyvitamin D (25(OH)D) metabolites with type 2 diabetes (T2D), with total 25(OH)D and 25(OH)D3 inversely associated with T2D, but the epimeric form (C3-epi-25(OH)D3) positively associated with T2D. Whether or not these observational associations are causal remains uncertain. We aimed to examine the potential causality of these associations using Mendelian randomisation (MR) analysis.
Methods and findings
We performed a meta-analysis of genome-wide association studies for total 25(OH)D (N = 120,618), 25(OH)D3 (N = 40,562), and C3-epi-25(OH)D3 (N = 40,562) in participants of European descent (European Prospective Investigation into Cancer and Nutrition [EPIC]-InterAct study, EPIC-Norfolk study, EPIC-CVD study, Ely study, and the SUNLIGHT consortium). We identified genetic variants for MR analysis to investigate the causal association of the 25(OH)D metabolites with T2D (including 80,983 T2D cases and 842,909 non-cases). We also estimated the observational association of 25(OH)D metabolites with T2D by performing random effects meta-analysis of results from previous studies and results from the EPIC-InterAct study. We identified 10 genetic loci associated with total 25(OH)D, 7 loci associated with 25(OH)D3, and 3 loci associated with C3-epi-25(OH)D3. Based on the meta-analysis of observational studies, each 1-standard deviation (SD) higher level of 25(OH)D was associated with a 20% lower risk of T2D (relative risk [RR]: 0.80; 95% CI 0.77, 0.84; p < 0.001), but a genetically predicted 1-SD increase in 25(OH)D was not significantly associated with T2D (odds ratio [OR]: 0.96; 95% CI 0.89, 1.03; p = 0.23); this result was consistent across sensitivity analyses. In EPIC-InterAct, 25(OH)D3 (per 1-SD) was associated with a lower risk of T2D (RR: 0.81; 95% CI 0.77, 0.86; p < 0.001), while C3-epi-25(OH)D3 (above versus below lower limit of quantification) was positively associated with T2D (RR: 1.12; 95% CI 1.03, 1.22; p = 0.006), but neither 25(OH)D3 (OR: 0.97; 95% CI 0.93, 1.01; p = 0.14) nor C3-epi-25(OH)D3 (OR: 0.98; 95% CI 0.93, 1.04; p = 0.53) was causally associated with T2D risk in the MR analysis. Main limitations include the lack of a non-linear MR analysis and the uncertain generalisability of the current findings from European populations to populations of other ethnicities.
Conclusions
Our study found discordant associations of biochemically measured and genetically predicted differences in blood 25(OH)D with T2D risk. The findings based on MR analysis in a large sample of European ancestry do not support a causal association of total 25(OH)D or 25(OH)D metabolites with T2D and argue against the use of vitamin D supplementation for the prevention of T2D.



PLoS Med: 29 Sep 2020; 17:e1003394 | PMID: 33064751
Abstract

Pulmonary vascular dysfunction among people aged over 65 years in the community in the Atherosclerosis Risk In Communities (ARIC) Study: A cross-sectional analysis.

Teramoto K, Santos M, Claggett B, John JE, ... Skali H, Shah AM
Background
Heart failure (HF) risk is highest in late life, and impaired pulmonary vascular function is a risk factor for HF development. However, data regarding the contributors to and prognostic importance of pulmonary vascular dysfunction among HF-free elders in the community are limited and largely restricted to pulmonary hypertension. Our objective was to define the prevalence and correlates of abnormal pulmonary pressure, resistance, and compliance and their association with incident HF and HF phenotype (left ventricular [LV] ejection fraction [LVEF] ≥ or < 50%) independent of LV structure and function.
Methods and findings
We performed cross-sectional and time-to-event analyses in a prospective epidemiologic cohort study, the Atherosclerosis Risk in Communities study. This is an ongoing, observational study that recruited 15,792 persons aged 45-64 years between 1987 and 1989 (visit 1) from four representative communities in the United States: Minneapolis, Minnesota; Jackson, Mississippi; Hagerstown, Maryland; and Forsyth County, North Carolina. The current analysis included 2,810 individuals aged 66-90 years, free of HF, who underwent echocardiography at the fifth study visit (June 8, 2011, to August 28, 2013) and had measurable tricuspid regurgitation by spectral Doppler. Echocardiography-derived pulmonary artery systolic pressure (PASP), pulmonary vascular resistance (PVR), and pulmonary arterial compliance (PAC) were measured. The main outcome was incident HF after visit 5, and key secondary end points were incident HF with preserved LVEF (HFpEF) and incident HF with reduced LVEF (HFrEF). The mean ± SD age was 76 ± 5 years, 66% were female, and 21% were black. Mean values of PASP, PVR, and PAC were 28 ± 5 mm Hg, 1.7 ± 0.4 Wood units, and 3.4 ± 1.0 mL/mm Hg, respectively, and were abnormal in 18%, 12%, and 14%, respectively, using limits defined as the 10th and 90th percentiles in 253 low-risk participants free of cardiovascular disease or risk factors. Left heart dysfunction was associated with abnormal PASP and PAC, whereas a restrictive ventilatory deficit was associated with abnormalities of PASP, PVR, and PAC. PASP, PVR, and PAC were each predictive of incident HF or death (hazard ratio per SD 1.3 [95% CI 1.1-1.4], p < 0.001; 1.1 [1.0-1.2], p = 0.04; 1.2 [1.1-1.4], p = 0.001, respectively) independent of LV measures. Elevated pulmonary pressure was predictive of incident HFpEF (HFpEF: 2.4 [1.4-4.0, p = 0.001]) but not HFrEF (1.4 [0.8-2.5, p = 0.31]). 
Abnormal PAC predicted HFrEF (HFpEF: 2.0 [1.0-4.0, p = 0.05], HFrEF: 2.8 [1.4-5.5, p = 0.003]), whereas abnormal PVR was not predictive of either (HFpEF: 0.9 [0.4-2.0, p = 0.85], HFrEF: 0.7 [0.3-1.4, p = 0.30]). A greater number of abnormal pulmonary vascular measures was associated with greater risk of incident HF. Major limitations include the use of echo Doppler to estimate pulmonary hemodynamic measures, which may lead to misclassification; inclusion bias related to detectable tricuspid regurgitation, which may limit generalizability of our findings; and survivor bias related to the cohort age, which may result in underestimation of the described associations.
Conclusions
In this study, we observed abnormalities of PASP, PVR, and PAC in 12%-18% of elders in the community. Higher PASP and lower PAC were independently predictive of incident HF. Abnormally high PASP predicted incident HFpEF but not HFrEF. These findings suggest that impairments in pulmonary vascular function may precede clinical HF and that a comprehensive pulmonary hemodynamic evaluation may identify pulmonary vascular phenotypes that differentially predict HF phenotypes.



PLoS Med: 29 Sep 2020; 17:e1003361 | PMID: 33057391
Abstract

Risk of disease and willingness to vaccinate in the United States: A population-based survey.

Baumgaertner B, Ridenhour BJ, Justwan F, Carlisle JE, Miller CR
Background
Vaccination complacency occurs when perceived risks of vaccine-preventable diseases are sufficiently low that vaccination is no longer perceived as a necessary precaution. Disease outbreaks can once again increase perceptions of risk, thereby decreasing vaccine complacency and, in turn, vaccine hesitancy. It is not well understood, however, how change in perceived risk translates into change in vaccine hesitancy. We advance the concept of vaccine propensity, which relates a change in willingness to vaccinate to a change in perceived risk of infection, holding fixed other considerations such as vaccine confidence and convenience.
Methods and findings
We used an original survey instrument that presents 7 vaccine-preventable "new" diseases to gather demographically diverse sample data from the United States in 2018 (N = 2,411). Our survey was conducted online between January 25, 2018, and February 2, 2018, and was structured in 3 parts. First, we collected information concerning the places participants live and visit in a typical week. Second, participants were presented with one of 7 hypothetical disease outbreaks and asked how they would respond. Third, we collected sociodemographic information. The survey was designed to match population parameters in the US on 5 major dimensions: age, sex, income, race, and census region. We also were able to closely match education. The aggregate demographic details for study participants were a mean age of 43.80 years, 47% male and 53% female, 38.5% with a college degree, and 24% nonwhite. We found an overall change of at least 30% in the proportion willing to vaccinate as risk of infection increases. When considering morbidity information, the proportion willing to vaccinate went from 0.476 (0.449-0.503) at 0 local cases of disease to 0.871 (0.852-0.888) at 100 local cases (upper and lower 95% confidence intervals). When considering mortality information, the proportion went from 0.526 (0.494-0.557) at 0 local cases of disease to 0.916 (0.897-0.931) at 100 local cases. In addition, we found that the risk of mortality invokes a larger proportion willing to vaccinate than mere morbidity (P = 0.0002), that older populations are more willing than younger ones (P < 0.0001), that the highest income bracket (>$90,000) is more willing than all others (P = 0.0001), that men are more willing than women (P = 0.0011), and that the proportion willing to vaccinate is related to both ideology and the level of risk (P = 0.004).
Limitations of this study include that it does not consider how other factors (such as social influence) interact with local case counts in people's vaccine decision-making; that it cannot determine whether different degrees of severity in morbidity or mortality failed to reach statistical significance because of the survey design or because participants use heuristically driven decision-making that glosses over such degrees; and that it does not capture the part of the US population that is not online.
Conclusions
In this study, we found that different degrees of risk (in terms of local cases of disease) correspond with different proportions of populations willing to vaccinate. We also identified several sociodemographic aspects of vaccine propensity. Understanding how vaccine propensity is affected by sociodemographic factors is invaluable for predicting where outbreaks are more likely to occur and their expected size, even accounting for the resulting cascade of changing vaccination rates and its feedback on potential outbreaks.



PLoS Med: 29 Sep 2020; 17:e1003354 | PMID: 33057373
Abstract

Time trends and prescribing patterns of opioid drugs in UK primary care patients with non-cancer pain: A retrospective cohort study.

Jani M, Birlie Yimer B, Sheppard T, Lunt M, Dixon WG
Background
The US opioid epidemic has led to similar concerns about prescribed opioids in the UK. In new users, initiation of or escalation to more potent and high dose opioids may contribute to long-term use. Additionally, physician prescribing behaviour has been described as a key driver of rising opioid prescriptions and long-term opioid use. No studies to our knowledge have investigated the extent to which regions, practices, and prescribers vary in opioid prescribing whilst accounting for case mix. This study sought to (i) describe prescribing trends between 2006 and 2017, (ii) evaluate the transition of opioid dose and potency in the first 2 years from initial prescription, (iii) quantify and identify risk factors for long-term opioid use, and (iv) quantify the variation of long-term use attributed to region, practice, and prescriber, accounting for case mix and chance variation.
Methods and findings
A retrospective cohort study using UK primary care electronic health records from the Clinical Practice Research Datalink was performed. Adult patients without cancer with a new prescription of an opioid were included; 1,968,742 new users of opioids were identified. Mean age was 51 ± 19 years, and 57% were female. Codeine was the most commonly prescribed opioid, with use increasing 5-fold from 2006 to 2017, reaching 2,456 prescriptions/10,000 people/year. Morphine, buprenorphine, and oxycodone prescribing rates continued to rise steadily throughout the study period. Of those who started on high dose (120-199 morphine milligram equivalents [MME]/day) or very high dose opioids (≥200 MME/day), 10.3% and 18.7% remained in the same MME/day category or higher at 2 years, respectively. Following opioid initiation, 14.6% became long-term opioid users in the first year. In the fully adjusted model, the following were associated with the highest adjusted odds ratios (aORs) for long-term use: older age (≥75 years, aOR 4.59, 95% CI 4.48-4.70, p < 0.001; 65-74 years, aOR 3.77, 95% CI 3.68-3.85, p < 0.001, compared to <35 years), social deprivation (Townsend score quintile 5/most deprived, aOR 1.56, 95% CI 1.52-1.59, p < 0.001, compared to quintile 1/least deprived), fibromyalgia (aOR 1.81, 95% CI 1.49-2.19, p < 0.001), substance abuse (aOR 1.72, 95% CI 1.65-1.79, p < 0.001), suicide/self-harm (aOR 1.56, 95% CI 1.52-1.61, p < 0.001), rheumatological conditions (aOR 1.53, 95% CI 1.48-1.58, p < 0.001), gabapentinoid use (aOR 2.52, 95% CI 2.43-2.61, p < 0.001), and MME/day at initiation (aOR 1.08, 95% CI 1.07-1.08, p < 0.001). After adjustment for case mix, 3 of the 10 UK regions (North West [16%], Yorkshire and the Humber [15%], and South West [15%]), 103 practices (25.6%), and 540 prescribers (3.5%) had a higher proportion of patients with long-term use compared to the population average. 
This study was limited to patients prescribed opioids in primary care and does not include opioids available over the counter or prescribed in hospitals or drug treatment centres.
Conclusions
Of patients commencing opioids at very high MME/day (≥200), a high proportion remained in the same category over the subsequent 2 years. Age, deprivation, prescribing factors, comorbidities such as fibromyalgia, rheumatological conditions, recent major surgery, and history of substance abuse, alcohol abuse, and self-harm/suicide were associated with long-term opioid use. Despite adjustment for case mix, variation in high-risk prescribing was observed across regions, and especially across practices and prescribers. Our findings support calls for action to reduce practice and prescriber variation by promoting safe practice in opioid prescribing.



PLoS Med: 29 Sep 2020; 17:e1003270 | PMID: 33057368
Abstract

Developing and validating subjective and objective risk-assessment measures for predicting mortality after major surgery: An international prospective cohort study.

Wong DJN, Harris S, Sahni A, Bedford JR, ... , Moonesinghe SR
Background
Preoperative risk prediction is important for guiding clinical decision-making and resource allocation. Clinicians frequently rely solely on their own clinical judgement for risk prediction rather than objective measures. We aimed to compare the accuracy of freely available objective surgical risk tools with subjective clinical assessment in predicting 30-day mortality.
Methods and findings
We conducted a prospective observational study in 274 hospitals in the United Kingdom (UK), Australia, and New Zealand. For 1 week in 2017, prospective risk, surgical, and outcome data were collected on all adults aged 18 years and over undergoing surgery requiring at least a 1-night stay in hospital. Recruitment bias was avoided through an ethical waiver to patient consent; a mixture of rural, urban, district, and university hospitals participated. We compared subjective assessment with 3 previously published, open-access objective risk tools for predicting 30-day mortality: the Portsmouth-Physiology and Operative Severity Score for the enUmeration of Mortality (P-POSSUM), Surgical Risk Scale (SRS), and Surgical Outcome Risk Tool (SORT). We then developed a logistic regression model combining subjective assessment and the best objective tool and compared its performance to each constituent method alone. We included 22,631 patients in the study: 52.8% were female, median age was 62 years (interquartile range [IQR] 46 to 73 years), median postoperative length of stay was 3 days (IQR 1 to 6), and inpatient 30-day mortality was 1.4%. Clinicians used subjective assessment alone in 88.7% of cases. All methods overpredicted risk, but visual inspection of plots showed the SORT to have the best calibration. The SORT demonstrated the best discrimination of the objective tools (SORT Area Under Receiver Operating Characteristic curve [AUROC] = 0.90, 95% confidence interval [CI]: 0.88-0.92; P-POSSUM = 0.89, 95% CI 0.88-0.91; SRS = 0.85, 95% CI 0.82-0.87). Subjective assessment demonstrated good discrimination (AUROC = 0.89, 95% CI: 0.86-0.91) that was not different from the SORT (p = 0.309). Combining subjective assessment and the SORT improved discrimination (bootstrap optimism-corrected AUROC = 0.92, 95% CI: 0.90-0.94) and demonstrated continuous Net Reclassification Improvement (NRI = 0.13, 95% CI: 0.06-0.20, p < 0.001) compared with subjective assessment alone. 
Decision-curve analysis (DCA) confirmed the superiority of the SORT over other previously published models, and the SORT-clinical judgement model again performed best overall. Our study is limited by the low mortality rate, by the lack of blinding in the 'subjective' risk assessments, and by the fact that we compared only clinical risk scores, as opposed to other prediction tools such as exercise testing or frailty assessment.
Conclusions
In this study, we observed that the combination of subjective assessment with a parsimonious risk model improved perioperative risk estimation. This may be of value in helping clinicians allocate finite resources such as critical care and to support patient involvement in clinical decision-making.



PLoS Med: 29 Sep 2020; 17:e1003253 | PMID: 33057333
Abstract

Serially assessed bisphenol A and phthalate exposure and association with kidney function in children with chronic kidney disease in the US and Canada: A longitudinal cohort study.

Jacobson MH, Wu Y, Liu M, Attina TM, ... Trachtman H, Trasande L
Background
Exposure to environmental chemicals may be a modifiable risk factor for progression of chronic kidney disease (CKD). The purpose of this study was to examine the impact of serially assessed exposure to bisphenol A (BPA) and phthalates on measures of kidney function, tubular injury, and oxidative stress over time in a cohort of children with CKD.
Methods and findings
Samples were collected between 2005 and 2015 from 618 children and adolescents enrolled in the Chronic Kidney Disease in Children study, an observational cohort study of pediatric CKD patients from the US and Canada. Most study participants were male (63.8%) and white (58.3%), and participants had a median age of 11.0 years (interquartile range 7.6 to 14.6) at the baseline visit. In urine samples collected serially over an average of 3.0 years (standard deviation [SD] 1.6), concentrations of BPA, phthalic acid (PA), and phthalate metabolites were measured, as well as biomarkers of tubular injury (kidney injury molecule-1 [KIM-1] and neutrophil gelatinase-associated lipocalin [NGAL]) and oxidative stress (8-hydroxy-2'-deoxyguanosine [8-OHdG] and F2-isoprostane). Clinical renal function measures included estimated glomerular filtration rate (eGFR), proteinuria, and blood pressure. Linear mixed models were fit to estimate the associations between urinary concentrations of 6 chemical exposure measures (i.e., BPA, PA, and 4 phthalate metabolite groups) and clinical renal outcomes and urinary concentrations of KIM-1, NGAL, 8-OHdG, and F2-isoprostane, controlling for sex, age, race/ethnicity, glomerular status, birth weight, premature birth, angiotensin-converting enzyme inhibitor use, angiotensin receptor blocker use, BMI z-score for age and sex, and urinary creatinine. Urinary concentrations of BPA, PA, and phthalate metabolites were positively associated with urinary KIM-1, NGAL, 8-OHdG, and F2-isoprostane levels over time. For example, a 1-SD increase in ∑di-n-octyl phthalate metabolites was associated with increases in NGAL (β = 0.13 [95% CI: 0.05, 0.21], p = 0.001), KIM-1 (β = 0.30 [95% CI: 0.21, 0.40], p < 0.001), 8-OHdG (β = 0.10 [95% CI: 0.06, 0.13], p < 0.001), and F2-isoprostane (β = 0.13 [95% CI: 0.01, 0.25], p = 0.04) over time.
BPA and phthalate metabolites were not associated with eGFR, proteinuria, or blood pressure, but PA was associated with lower eGFR over time. For a 1-SD increase in ln-transformed PA, there was an average decrease in eGFR of 0.38 ml/min/1.73 m2 (95% CI: -0.75, -0.01; p = 0.04). Limitations of this study included utilization of spot urine samples for exposure assessment of non-persistent compounds and lack of specific information on potential sources of exposure.
Conclusions
Although BPA and phthalate metabolites were not associated with clinical renal endpoints such as eGFR or proteinuria, there was a consistent pattern of increased tubular injury and oxidative stress over time, which have been shown to affect renal function in the long term. This raises concerns about the potential for clinically significant changes in renal function in relation to exposure to common environmental toxicants at current levels.



PLoS Med: 29 Sep 2020; 17:e1003384 | PMID: 33052911
Abstract

Medicalization of female genital cutting in Malaysia: A mixed methods study.

Rashid A, Iguchi Y, Afiqah SN
Background
Despite the clear stand taken by the United Nations (UN) and other international bodies in ensuring that female genital cutting (FGC) is not performed by health professionals, the rate of medicalization has not decreased. The current study aimed to determine the extent of medicalization of FGC among doctors in Malaysia: who the doctors practising it were, how and what was practised, and the motivations for the practice.
Methods and findings
This mixed methods (qualitative and quantitative) study was conducted from 2018 to 2019 using a self-administered questionnaire among Muslim medical doctors from 2 main medical associations with a large number of Muslim members from all over Malaysia who attended their annual conference. For those doctors who did not attend the conference, the questionnaire was posted to them. Association A had 510 members, 64 male Muslim doctors and 333 female Muslim doctors. Association B only had Muslim doctors; 3,088 were female, and 1,323 were male. In total, 894 questionnaires were distributed either by hand or by post, and 366 completed questionnaires were received back. For the qualitative part of the study, a snowball sampling method was used, and 24 in-depth interviews were conducted using a semi-structured questionnaire until data reached saturation. Quantitative data were analysed using SPSS version 18 (IBM, Armonk, NY); a chi-squared test and binary logistic regression were performed. The qualitative data were transcribed manually, organized, coded, and recoded using NVivo version 12, and the clustered codes were elicited as common themes. Most of the respondents were women, had medical degrees from Malaysia, and had a postgraduate degree in Family Medicine. The median age was 42 years. Most were working with the Ministry of Health (MoH) Malaysia and in a clinic located in an urban location. The prevalence of Muslim doctors practising FGC was 20.5% (95% CI 16.6-24.9). The main reason cited for practising FGC was religious obligation. Qualitative findings also showed that religion was a strong motivating factor for the practice and its continuation, besides culture and harm reduction. Although most Muslim doctors performed type IV FGC, a substantial number performed type I. Respondents who were women (adjusted odds ratio [aOR] 4.4, 95% CI 1.9-10.0, P ≤ 0.001), who owned a clinic (aOR 30.7, 95% CI 12.0-78.4, P ≤ 0.001) or jointly owned a clinic (aOR 7.61, 95% CI 3.2-18.1, P ≤ 0.001), who thought that FGC was legal in Malaysia (aOR 2.09, 95% CI 1.02-4.3, P = 0.04), who were encouraged by religion (aOR 2.25, 95% CI 3.2-18.1, P = 0.036), and who thought that FGC should continue (aOR 3.54, 95% CI 1.25-10.04, P = 0.017) were more likely to practise FGC. The main limitations of the study were the small sample size and low response rate.
Conclusions
In this study, we found that many of the Muslim doctors were unaware of the legal and international stand against FGC, and many wanted the practice to continue. It is a concern that type IV FGC carried out by traditional midwives may be supplanted and exacerbated by type I FGC performed by doctors, calling for strong and urgent action by the Malaysian medical authorities.



PLoS Med: 29 Sep 2020; 17:e1003303 | PMID: 33108371
Abstract

Effects of health literacy, screening, and participant choice on action plans for reducing unhealthy snacking in Australia: A randomised controlled trial.

Ayre J, Cvejic E, Bonner C, Turner RM, Walter SD, McCaffery KJ
Background
Low health literacy is associated with poorer health outcomes. A key strategy to address health literacy is a universal precautions approach, which recommends using health-literate design for all health interventions, not just those targeting people with low health literacy. This approach has advantages: Health literacy assessment and tailoring are not required. However, action plans may be more effective when tailored by health literacy. This study evaluated the impact of health literacy and action plan type on unhealthy snacking for people who have high BMI or type 2 diabetes (Aim 1) and the most effective method of action plan allocation (Aim 2).
Methods and findings
We performed a 2-stage randomised controlled trial in Australia between 14 February and 6 June 2019. In total, 1,769 participants (mean age: 49.8 years [SD = 11.7]; 56.1% female [n = 992]; mean BMI: 32.9 kg/m2 [SD = 8.7]; 29.6% self-reported type 2 diabetes [n = 523]) were randomised to 1 of 3 allocation methods (random, health literacy screening, or participant selection) and 1 of 2 action plans to reduce unhealthy snacking (standard versus literacy-sensitive). Regression analysis evaluated the impact of health literacy (Newest Vital Sign [NVS]), allocation method, and action plan on reduction in self-reported serves of unhealthy snacks (primary outcome) at 4-week follow-up. Secondary outcomes were perceived extent of unhealthy snacking, difficulty using the plans, habit strength, and action control. Analyses controlled for age, level of education, language spoken at home, diabetes status, baseline habit strength, and baseline self-reported serves of unhealthy snacks. Average NVS score was 3.6 out of 6 (SD = 2.0). Participants reported consuming 25.0 serves of snacks on average per week at baseline (SD = 28.0). Regarding Aim 1, 398 participants in the random allocation arm completed follow-up (67.7%). On average, people scoring 1 SD below the mean for health literacy consumed 10.0 fewer serves per week using the literacy-sensitive action plan compared to the standard action plan (95% CI: 0.05 to 19.5; p = 0.039), whereas those scoring 1 SD above the mean consumed 3.0 fewer serves using the standard action plan compared to the literacy-sensitive action plan (95% CI: -6.3 to 12.2; p = 0.529), although this difference did not reach statistical significance. In addition, we observed a non-significant action plan × health literacy (NVS) interaction (b = -3.25; 95% CI: -6.55 to 0.05; p = 0.054). Regarding Aim 2, 1,177 participants across the 3 allocation method arms completed follow-up (66.5%). 
There was no effect of allocation method on reduction of unhealthy snacking, including no effect of health literacy screening compared to participant selection (b = 1.79; 95% CI: -0.16 to 3.73; p = 0.067). Key limitations include low-moderate retention, use of a single-occasion self-reported primary outcome, and reporting of a number of extreme, yet plausible, snacking scores, which rendered interpretation more challenging. Adverse events were not assessed.
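The two subgroup effects reported above are internally consistent with the interaction coefficient: participants 1 SD below and 1 SD above the mean NVS score differ by 2 SDs = 4 NVS points, and 4 × 3.25 = 13.0 serves, which matches the gap between the two plan-effect estimates (10.0 − (−3.0) = 13.0). A quick arithmetic check, using only the numbers quoted in the abstract:

```python
# Consistency check (sketch): the action plan x health literacy (NVS)
# interaction is b = -3.25 serves/week per NVS unit, and the NVS SD is 2.0.
b_interaction = -3.25  # serves/week per NVS unit (literacy-sensitive vs standard)
nvs_sd = 2.0

# Difference in plan effect between readers at -1 SD and +1 SD of NVS:
# they are 2 SDs (4 NVS units) apart, so the effect gap is 4 * |b|.
spread = 2 * nvs_sd * abs(b_interaction)
print(spread)  # 13.0, matching 10.0 - (-3.0) from the reported subgroups
```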
Conclusions
In our study we observed nominal improvements in effectiveness of action plans tailored to health literacy; however, these improvements did not reach statistical significance, and the costs associated with such strategies compared with universal precautions need further investigation. This study highlights the importance of considering differential effects of health literacy on intervention effectiveness.
Trial registration
Australia and New Zealand Clinical Trial Registry ACTRN12618001409268.



PLoS Med: 30 Oct 2020; 17:e1003409 | PMID: 33141834
Abstract

Differential association of air pollution exposure with neonatal and postneonatal mortality in England and Wales: A cohort study.

Kotecha SJ, Watkins WJ, Lowe J, Grigg J, Kotecha S
Background
Many but not all studies suggest an association between air pollution exposure and infant mortality. We sought to investigate whether pollution exposure is differentially associated with all-cause neonatal or postneonatal mortality, or specific causes of infant mortality.
Methods and findings
We separately investigated the associations of exposure to particulate matter with aerodynamic diameter ≤ 10 μm (PM10), nitrogen dioxide (NO2), and sulphur dioxide (SO2) with all-cause infant, neonatal, and postneonatal mortality, and with specific causes of infant deaths in 7,984,366 live births between 2001 and 2012 in England and Wales. Overall, 51.3% of the live births were male, and there were 36,485 infant deaths (25,110 neonatal deaths and 11,375 postneonatal deaths). We adjusted for the following major confounders: deprivation, birthweight, maternal age, sex, and multiple birth. Adjusted odds ratios (95% CI; p-value) for infant deaths were significantly increased for NO2, PM10, and SO2 (1.066 [1.027, 1.107; p = 0.001], 1.044 [1.007, 1.082; p = 0.017], and 1.190 [1.146, 1.235; p < 0.001], respectively) when highest and lowest pollutant quintiles were compared; however, neonatal mortality was significantly associated with SO2 (1.207 [1.154, 1.262; p < 0.001]) but not significantly associated with NO2 and PM10 (1.044 [0.998, 1.092; p = 0.059] and 1.008 [0.966, 1.052; p = 0.702], respectively). Postneonatal mortality was significantly associated with all pollutants: NO2, 1.108 (1.038, 1.182; p < 0.001); PM10, 1.117 (1.050, 1.188; p < 0.001); and SO2, 1.147 (1.076, 1.224; p < 0.001). Whilst all were similarly associated with endocrine causes of infant deaths (NO2, 2.167 [1.539, 3.052; p < 0.001]; PM10, 1.433 [1.066, 1.926; p = 0.017]; and SO2, 1.558 [1.147, 2.116; p = 0.005]), they were differentially associated with other specific causes: NO2 and PM10 were associated with an increase in infant deaths from congenital malformations of the nervous (NO2, 1.525 [1.179, 1.974; p = 0.001]; PM10, 1.457 [1.150, 1.846; p = 0.002]) and gastrointestinal systems (NO2, 1.214 [1.006, 1.466; p = 0.043]; PM10, 1.312 [1.096, 1.571; p = 0.003]), and NO2 was also associated with deaths from malformations of the respiratory system (1.306 [1.019, 1.675; p = 0.035]). 
In contrast, SO2 was associated with an increase in infant deaths from perinatal causes (1.214 [1.156, 1.275; p < 0.001]) and from malformations of the circulatory system (1.172 [1.011, 1.358; p = 0.035]). A limitation of this study was that we were not able to study associations of air pollution exposure and infant mortality during the different trimesters of pregnancy. In addition, we were not able to control for all confounding factors such as maternal smoking.
Conclusions
In this study, we found that NO2, PM10, and SO2 were differentially associated with all-cause mortality and with specific causes of infant, neonatal, and postneonatal mortality.



PLoS Med: 29 Sep 2020; 17:e1003400 | PMID: 33079932
Abstract

The impact of delayed treatment of uncomplicated P. falciparum malaria on progression to severe malaria: A systematic review and a pooled multicentre individual-patient meta-analysis.

Mousa A, Al-Taiar A, Anstey NM, Badaut C, ... William T, Okell LC
Background
Delay in receiving treatment for uncomplicated malaria (UM) is often reported to increase the risk of developing severe malaria (SM), but access to treatment remains low in most high-burden areas. Understanding the contribution of treatment delay to progression to severe disease is critical to determine how quickly patients need to receive treatment and to quantify the impact of widely implemented treatment interventions, such as 'test-and-treat' policies administered by community health workers (CHWs). We conducted a pooled individual-participant meta-analysis to estimate the association between treatment delay and presenting with SM.
Methods and findings
A search using Ovid MEDLINE and Embase was initially conducted to identify studies on severe Plasmodium falciparum malaria that included information on treatment delay, such as fever duration (inception to 22nd September 2017). Studies identified included 5 case-control and 8 other observational clinical studies of SM and UM cases. Risk of bias was assessed using the Newcastle-Ottawa scale, and all studies were ranked as 'Good', scoring ≥7/10. Individual-patient data (IPD) were pooled from 13 studies of 3,989 (94.1% aged <15 years) SM patients and 5,780 (79.6% aged <15 years) UM cases in Benin, Malaysia, Mozambique, Tanzania, The Gambia, Uganda, Yemen, and Zambia. Definitions of SM were standardised across studies to compare treatment delay in patients with UM and different SM phenotypes using age-adjusted mixed-effects regression. The odds of any SM phenotype were significantly higher in children with longer delays between initial symptoms and arrival at the health facility (odds ratio [OR] = 1.33, 95% CI: 1.07-1.64 for a delay of >24 hours versus ≤24 hours; p = 0.009). Reported illness duration was a strong predictor of presenting with severe malarial anaemia (SMA) in children, with an OR of 2.79 (95% CI: 1.92-4.06; p < 0.001) for a delay of 2-3 days and 5.46 (95% CI: 3.49-8.53; p < 0.001) for a delay of >7 days, compared with receiving treatment within 24 hours of symptom onset. We estimate that 42.8% of childhood SMA cases and 48.5% of adult SMA cases in the study areas would have been averted if all individuals had been able to access treatment within the first day of symptom onset, assuming the association is fully causal. In studies specifically recording onset of nonsevere symptoms, long treatment delay was moderately associated with other SM phenotypes (OR [95% CI] for >3 to ≤4 days versus ≤24 hours: cerebral malaria [CM] = 2.42 [1.24-4.72], p = 0.01; respiratory distress syndrome [RDS] = 4.09 [1.70-9.82], p = 0.002).
In addition to unmeasured confounding, which is commonly present in observational studies, a key limitation is that many severe cases and deaths occur outside healthcare facilities in endemic countries, where the effect of delayed or no treatment is difficult to quantify.
Conclusions
Our results quantify the relationship between rapid access to treatment and reduced risk of severe disease, which was particularly strong for SMA. There was some evidence to suggest that progression to other severe phenotypes may also be prevented by prompt treatment, though the association was not as strong, which may be explained by potential selection bias, sample size issues, or a difference in underlying pathology. These findings may help assess the impact of interventions that improve access to treatment.
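The "cases averted" estimates above are population attributable fractions. As a rough illustration of the arithmetic only (not the paper's actual IPD-based computation), Levin's formula combines an exposure prevalence with a risk ratio; in the sketch below, the OR of 1.33 for a delay of >24 hours is taken from the abstract and treated as an approximate risk ratio, while the 60% exposure prevalence is a hypothetical value, not a figure from the paper.

```python
# Levin's population attributable fraction (PAF):
#   PAF = p_e * (RR - 1) / (1 + p_e * (RR - 1))
# Sketch only: OR 1.33 (delay >24 h, from the abstract) is treated as an
# approximate risk ratio; the 60% exposure prevalence is hypothetical.
def attributable_fraction(p_exposed: float, risk_ratio: float) -> float:
    excess = p_exposed * (risk_ratio - 1)
    return excess / (1 + excess)

paf = attributable_fraction(p_exposed=0.60, risk_ratio=1.33)
print(f"PAF = {paf:.1%}")  # share of SM cases attributable to delay >24 h
```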



PLoS Med: 29 Sep 2020; 17:e1003359 | PMID: 33075101
Abstract

Predictive value of pulse oximetry for mortality in infants and children presenting to primary care with clinical pneumonia in rural Malawi: A data linkage study.

Colbourn T, King C, Beard J, Phiri T, ... Bin Nisar Y, McCollum ED
Background
The mortality impact of pulse oximetry use during infant and childhood pneumonia management at the primary healthcare level in low-income countries is unknown. We sought to determine mortality outcomes of infants and children diagnosed and referred using clinical guidelines with or without pulse oximetry in Malawi.
Methods and findings
We conducted a data linkage study of prospective health facility and community case and mortality data. We matched prospectively collected community health worker (CHW) and health centre (HC) outpatient data to prospectively collected hospital and community-based mortality surveillance outcome data, including episodes followed up to and deaths within 30 days of pneumonia diagnosis amongst children 0-59 months old. All data were collected in Lilongwe and Mchinji districts, Malawi, from January 2012 to June 2014. We determined differences in mortality rates using <90% and <93% oxygen saturation (SpO2) thresholds and World Health Organization (WHO) and Malawi clinical guidelines for referral. We used unadjusted and adjusted (for age, sex, respiratory rate, and, in analyses of HC data only, Weight for Age Z-score [WAZ]) regression to account for interaction between SpO2 threshold (pulse oximetry) and clinical guidelines, clustering by child, and CHW or HC catchment area. We matched CHW and HC outpatient data to hospital inpatient records to explore roles of pulse oximetry and clinical guidelines on hospital attendance after referral. From 7,358 CHW and 6,546 HC pneumonia episodes, we linked 417 CHW and 695 HC pneumonia episodes to 30-day mortality outcomes: 16 (3.8%) CHW and 13 (1.9%) HC patients died. SpO2 thresholds of <90% and <93% identified 1 (6%) of the 16 CHW deaths that were unidentified by integrated community case management (iCCM) WHO referral protocol and 3 (23%) and 4 (31%) of the 13 HC deaths, respectively, that were unidentified by the integrated management of childhood illness (IMCI) WHO protocol. Malawi IMCI referral protocol, which differs from WHO protocol at the HC level and includes chest indrawing, identified all but one of these deaths. SpO2 < 90% predicted death independently of WHO danger signs compared with SpO2 ≥ 90%: HC Risk Ratio (RR), 9.37 (95% CI: 2.17-40.4, p = 0.003); CHW RR, 6.85 (1.15-40.9, p = 0.035). 
SpO2 < 93% was also predictive versus SpO2 ≥ 93% at the HC level: RR, 6.68 (1.52-29.4, p = 0.012). Hospital referrals and outpatient episodes with referral decision indications were associated with mortality. A substantial proportion of those referred were not found in inpatient records within 7 days of referral advice. All 12 deaths among 73 hospitalised children occurred within 24 hours of arrival at the hospital, highlighting delays in appropriate care seeking. The main limitation of our study was that we were able to match only 6% of CHW episodes and 11% of HC episodes to mortality outcome data.
Conclusions
Pulse oximetry identified fatal pneumonia episodes at HCs in Malawi that would otherwise have been missed by WHO referral guidelines alone. Our findings suggest that pulse oximetry could usefully supplement clinical signs in identifying children with pneumonia at high risk of mortality in the health centre outpatient setting, for referral to a hospital for appropriate management.



Colbourn T, King C, Beard J, Phiri T, ... Bin Nisar Y, McCollum ED
PLoS Med: 29 Sep 2020; 17:e1003300 | PMID: 33095763
Abstract

Effectiveness of the 23-valent pneumococcal polysaccharide vaccine against vaccine serotype pneumococcal pneumonia in adults: A case-control test-negative design study.

Lawrence H, Pick H, Baskaran V, Daniel P, ... McKeever TM, Lim WS
Background
Vaccination with the 23-valent pneumococcal polysaccharide vaccine (PPV23) is available in the United Kingdom to adults aged 65 years or older and those in defined clinical risk groups. We evaluated the vaccine effectiveness (VE) of PPV23 against vaccine-type pneumococcal pneumonia in a cohort of adults hospitalised with community-acquired pneumonia (CAP).
Methods and findings
Using a case-control test-negative design, a secondary analysis of data was conducted from a prospective cohort study of adults (aged ≥16 years) with CAP hospitalised at 2 university teaching hospitals in Nottingham, England, from September 2013 to August 2018. The exposure of interest was PPV23 vaccination at any time point prior to the index admission. A case was defined as PPV23 serotype-specific pneumococcal pneumonia and a control as non-PPV23 serotype pneumococcal pneumonia or nonpneumococcal pneumonia. Pneumococcal serotypes were identified from urine samples using a multiplex immunoassay or from positive blood cultures. Multivariable logistic regression was used to derive adjusted odds of case status between vaccinated and unvaccinated individuals; VE estimates were calculated as (1 - odds ratio) × 100%. Of 2,357 patients, there were 717 PPV23 cases (48% vaccinated) and 1,640 controls (54.5% vaccinated). The adjusted VE (aVE) estimate against PPV23 serotype disease was 24% (95% CI 5%-40%, p = 0.02). Estimates were similar in analyses restricted to vaccine-eligible patients (n = 1,768, aVE 23%, 95% CI 1%-40%) and patients aged ≥65 years (n = 1,407, aVE 20%, 95% CI -5% to 40%), but not in patients aged ≥75 years (n = 905, aVE 5%, 95% CI -37% to 35%). The aVE estimate in relation to PPV23/non-13-valent pneumococcal conjugate vaccine (PCV13) serotype pneumonia (n = 417 cases, 43.7% vaccinated) was 29% (95% CI 6%-46%). Key limitations of this study are that, due to high vaccination rates, there was a lack of power to reject the null hypothesis of no vaccine effect, and that the study was not large enough to allow robust subgroup analysis in the older age groups.
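The VE estimates above follow directly from the adjusted odds ratios via the stated formula VE = (1 − OR) × 100%. A minimal sketch (the OR of 0.76 is simply back-calculated from the reported aVE of 24%):

```python
def vaccine_effectiveness(odds_ratio):
    """Test-negative design: VE (%) = (1 - OR) x 100, where the OR compares
    the odds of prior vaccination between cases and controls."""
    return (1.0 - odds_ratio) * 100.0

# An adjusted OR of 0.76 corresponds to the paper's reported aVE of 24%.
ve = vaccine_effectiveness(0.76)
```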
Conclusions
In the setting of an established national childhood PCV13 vaccination programme, PPV23 vaccination of clinical at-risk patient groups and adults aged ≥65 years provided moderate long-term protection against hospitalisation with PPV23 serotype pneumonia. These findings suggest that PPV23 vaccination may continue to have an important role in adult pneumococcal vaccine policy, including the possibility of revaccination of older adults.



PLoS Med: 29 Sep 2020; 17:e1003326 | PMID: 33095759
Abstract

The diagnostic performance of CA125 for the detection of ovarian and non-ovarian cancer in primary care: A population-based cohort study.

Funston G, Hamilton W, Abel G, Crosbie EJ, Rous B, Walter FM
Background
The serum biomarker cancer antigen 125 (CA125) is widely used as an investigation for possible ovarian cancer in symptomatic women presenting to primary care. However, its diagnostic performance in this setting is unknown. We evaluated the performance of CA125 in primary care for the detection of ovarian and non-ovarian cancers.
Methods and findings
We studied women in the United Kingdom Clinical Practice Research Datalink with a CA125 test performed between 1 May 2011 and 31 December 2014. Ovarian and non-ovarian cancers diagnosed in the year following CA125 testing were identified from the cancer registry. Women were categorized by age: <50 years and ≥50 years. Conventional measures of test diagnostic accuracy, including sensitivity, specificity, and positive predictive value, were calculated for the standard CA125 cut-off (≥35 U/ml). The probability of a woman having cancer at each CA125 level between 1 and 1,000 U/ml was estimated using logistic regression. Cancer probability was also estimated on the basis of CA125 level and age in years using logistic regression. We identified CA125 levels equating to a 3% estimated cancer probability: the "risk threshold" at which the UK National Institute for Health and Care Excellence advocates urgent specialist cancer investigation. A total of 50,780 women underwent CA125 testing; 456 (0.9%) were diagnosed with ovarian cancer and 1,321 (2.6%) with non-ovarian cancer. Of women with a CA125 level ≥35 U/ml, 3.4% aged <50 years and 15.2% aged ≥50 years had ovarian cancer. Of women with a CA125 level ≥35 U/ml who were aged ≥50 years and who did not have ovarian cancer, 20.4% were diagnosed with a non-ovarian cancer. A CA125 value of 53 U/ml equated to a 3% probability of ovarian cancer overall. This varied by age, with a value of 104 U/ml in 40-year-old women and 32 U/ml in 70-year-old women equating to a 3% probability. The main limitations of our study were that we were unable to determine why CA125 tests were performed and that our findings are based solely on UK primary care data, so caution is needed in extrapolating them to other healthcare settings.
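The "3% risk threshold" calculation can be sketched with a univariable logistic model: estimate P(cancer) at each CA125 level, then invert the model to find the level at which the estimated probability equals 3%. The intercept and slope below are invented for illustration; the study estimates its coefficients from CPRD data:

```python
import math

def cancer_probability(ca125, intercept=-5.2, slope=0.035):
    """Estimated probability of cancer at a given CA125 level (U/ml) from a
    univariable logistic model. Coefficients are hypothetical placeholders."""
    logit = intercept + slope * ca125
    return 1.0 / (1.0 + math.exp(-logit))

def threshold_for_probability(p, intercept=-5.2, slope=0.035):
    """Invert the logistic model: the CA125 level at which the estimated
    probability equals p (e.g. p = 0.03 for the urgent-referral threshold)."""
    return (math.log(p / (1.0 - p)) - intercept) / slope

referral_level = threshold_for_probability(0.03)
```

The age-specific thresholds reported in the paper (104 U/ml at age 40, 32 U/ml at age 70) arise from adding age as a covariate to the same kind of model.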
Conclusions
CA125 is a useful test for ovarian cancer detection in primary care, particularly in women ≥50 years old. Clinicians should also consider non-ovarian cancers in women with high CA125 levels, especially if ovarian cancer has been excluded, in order to prevent diagnostic delay. Our results enable clinicians and patients to determine the estimated probability of ovarian cancer and all cancers at any CA125 level and age, which can be used to guide individual decisions on the need for further investigation or referral.



PLoS Med: 29 Sep 2020; 17:e1003295 | PMID: 33112854
Abstract

Defining remission of type 2 diabetes in research studies: A systematic scoping review.

Captieux M, Prigge R, Wild S, Guthrie B
Background
Remission has been identified as a top priority by people with type 2 diabetes and is commonly used as an outcome in research studies; however, a widely accepted definition of remission of type 2 diabetes is lacking. A report on defining remission was published in 2009 (but not formally endorsed) in Diabetes Care, an American Diabetes Association (ADA) journal. This Diabetes Care report remains widely used and was the first to suggest 3 components necessary to define the presence of remission: (1) absence of glucose-lowering therapy (GLT); (2) normoglycaemia; and (3) a duration of ≥1 year. Our aim was to systematically review how remission of type 2 diabetes has been defined by observational and interventional studies since publication of the 2009 report.
Methods and findings
Four databases (MEDLINE, EMBASE, Cochrane Library, and CINAHL) were searched for studies published from 1 September 2009 to 18 July 2020 involving at least 100 participants with type 2 diabetes in their remission analysis, which examined an outcome of type 2 diabetes remission in adults ≥18 years and which had been published in English since 2009. Remission definitions were extracted and categorised by glucose-lowering therapy, glycaemic thresholds, and duration. A total of 8,966 titles/abstracts were screened, and 178 studies (165 observational and 13 interventional) from 33 countries were included. These contributed 266 definitions, of which 96 were unique. The 2009 report was referenced in 121 (45%) definitions. In total, 247 (93%) definitions required the absence of GLT, and 232 (87%) definitions specified numeric glycaemic thresholds. The most frequently used threshold was HbA1c<42 mmol/mol (6.0%) in 47 (20%) definitions. Time was frequently omitted. In this study, a total of 104 (39%) definitions defined time as a duration. The main limitations of this systematic review lie in the restriction to published studies written in English with sample sizes of over 100. Grey literature was not included in the search.
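To make the ambiguity concrete, here is one possible operationalisation of the 2009 report's 3 components, using the review's most frequently observed glycaemic threshold (HbA1c < 42 mmol/mol). The specific threshold and duration choices are exactly what varied across the 96 unique definitions:

```python
def in_remission(on_glucose_lowering_therapy, hba1c_mmol_mol, months_maintained):
    """One hypothetical operationalisation of the 2009 report's components:
    (1) no glucose-lowering therapy (GLT);
    (2) normoglycaemia, here HbA1c < 42 mmol/mol (6.0%), the threshold most
        often used in the reviewed studies;
    (3) maintained for a duration of >= 1 year."""
    return (not on_glucose_lowering_therapy
            and hba1c_mmol_mol < 42
            and months_maintained >= 12)

status = in_remission(False, 40, 18)
```

Swapping the threshold to 48 mmol/mol, or dropping the duration requirement, yields a different definition; the review found both variations in the literature.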
Conclusions
We found that there is substantial heterogeneity in the definition of type 2 diabetes remission in research studies published since 2009, at least partly reflecting ambiguity in the 2009 report. This complicates interpretation of previous research on remission of type 2 diabetes and the implications for people with type 2 diabetes. Any new consensus definition of remission should include unambiguous glycaemic thresholds and emphasise duration. Until an international consensus is reached, studies describing remission should clearly define all 3 components of remission.
Systematic review registration
PROSPERO CRD42019144619.



PLoS Med: 29 Sep 2020; 17:e1003396 | PMID: 33112845
Abstract

Quantifying Plasmodium falciparum infections clustering within households to inform household-based intervention strategies for malaria control programs: An observational study and meta-analysis from 41 malaria-endemic countries.

Stresman G, Whittaker C, Slater HC, Bousema T, Cook J
Background
Reactive malaria strategies are predicated on the assumption that individuals infected with malaria are clustered within households or neighbourhoods. Despite the widespread programmatic implementation of reactive strategies, little empirical evidence exists as to whether such strategies are appropriate and, if so, how they should be most effectively implemented.
Methods and findings
We collated 2 different datasets to assess clustering of malaria infections within households: (i) demographic health survey (DHS) data, integrating household information and patent malaria infection, recent fever, and recent treatment status in children; and (ii) data from cross-sectional and reactive detection studies containing information on the household and malaria infection status (patent and subpatent) of all-aged individuals. Both datasets were used to assess the odds of infections clustering within index households, where index households were defined based on whether they contained infections detectable through one of 3 programmatic strategies: (a) Reactive Case Detection (RACD), classified by confirmed clinical cases; (b) Mass Screen and Treat (MSAT), classified by febrile, symptomatic infections; and (c) Mass Test and Treat (MTAT), classified by infections detectable using routine diagnostics. Data included 59,050 infections in 208,140 children under 7 years old (median age = 2 years, minimum = 2, maximum = 7) by microscopy/rapid diagnostic test (RDT) from 57 DHSs conducted between November 2006 and December 2018 from 23 African countries. Data representing 11,349 infections across all ages (median age = 22 years, minimum = 0.5, maximum = 100) detected by molecular tools in 132,590 individuals in 43 studies published between April 2006 and May 2019 in 20 African, American, Asian, and Middle Eastern countries were obtained from the published literature. Extensive clustering was observed: overall, there were 20.40 greater odds (95% credible interval [CrI] 0.35-20.45; P < 0.001) of patent infections (according to the DHS data) and 5.13 greater odds (95% CI 3.85-6.84; P < 0.001) of molecularly detected infections (from the published literature) detected within households in which a programmatically detectable infection resides.
The strongest degree of clustering identified by polymerase chain reaction (PCR)/loop-mediated isothermal amplification (LAMP) was observed using the MTAT strategy (odds ratio [OR] = 6.79, 95% CI 4.42-10.43), but this was not significantly different from MSAT (OR = 5.2, 95% CI 3.22-8.37; P-difference = 0.883) or RACD (OR = 4.08, 95% CI 2.55-6.53; P-difference = 0.29). Across both datasets, clustering became more prominent when transmission was low. However, limitations of our analysis include not accounting for any malaria control interventions in place, malaria seasonality, or the likely heterogeneity of transmission within study sites. Clustering may thus have been underestimated.
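At their core, the household-clustering odds ratios above compare infection odds between members of index and non-index households. A minimal sketch with invented counts and a Woolf-type CI (the study itself uses Bayesian models, hence its credible intervals):

```python
import math

def odds_ratio(a, b, c, d):
    """OR for infection vs no infection (a, b = index-household members;
    c, d = other-household members), with a 95% CI from the log-OR
    (Woolf) normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Illustrative counts only (not study data)
or_, lo, hi = odds_ratio(60, 140, 45, 755)
```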
Conclusions
In areas where malaria transmission is peri-domestic, there are programmatic options for identifying households where residual infections are likely to be found. Combining these detection strategies with presumptively treating residents of index households over a sustained time period could contribute to malaria elimination efforts.



PLoS Med: 29 Sep 2020; 17:e1003370 | PMID: 33119589
Abstract

Clinicogenomic factors of biotherapy immunogenicity in autoimmune disease: A prospective multicohort study of the ABIRISK consortium.

Hässler S, Bachelet D, Duhaze J, Szely N, ... Broët P,
Background
Biopharmaceutical products (BPs) are widely used to treat autoimmune diseases, but immunogenicity limits their efficacy for a substantial proportion of patients. Our knowledge of patient-related factors influencing the occurrence of antidrug antibodies (ADAs) is still limited.
Methods and findings
The European consortium ABIRISK (Anti-Biopharmaceutical Immunization: prediction and analysis of clinical relevance to minimize the RISK) conducted a clinical and genomic multicohort prospective study of 560 patients with multiple sclerosis (MS, n = 147), rheumatoid arthritis (RA, n = 229), Crohn's disease (n = 148), or ulcerative colitis (n = 36) treated with 8 different biopharmaceuticals (etanercept, n = 84; infliximab, n = 101; adalimumab, n = 153; interferon [IFN]-beta-1a intramuscularly [IM], n = 38; IFN-beta-1a subcutaneously [SC], n = 68; IFN-beta-1b SC, n = 41; rituximab, n = 31; tocilizumab, n = 44) and followed during the first 12 months of therapy for time to ADA development. From the bioclinical data collected, we explored the relationships between patient-related factors and the occurrence of ADAs. Both baseline and time-dependent factors such as concomitant medications were analyzed using Cox proportional hazards regression models. Mean age and disease duration were 35.1 and 0.85 years, respectively, for MS; 54.2 and 3.17 years for RA; and 36.9 and 3.69 years for inflammatory bowel diseases (IBDs). In a multivariate Cox regression model including each of the clinical and genetic factors mentioned hereafter, among the clinical factors, immunosuppressants (adjusted hazard ratio [aHR] = 0.408 [95% confidence interval (CI) 0.253-0.657], p < 0.001) and antibiotics (aHR = 0.121 [0.0437-0.333], p < 0.0001) were independently negatively associated with time to ADA development, whereas infections during the study (aHR = 2.757 [1.616-4.704], p < 0.001) and tobacco smoking (aHR = 2.150 [1.319-3.503], p < 0.01) were positively associated. A total of 351,824 single-nucleotide polymorphisms (SNPs) and 38 imputed human leukocyte antigen (HLA) alleles were analyzed through a genome-wide association study. We found that the HLA-DQA1*05 allele significantly increased the rate of immunogenicity (aHR = 3.9 [1.923-5.976], p < 0.0001 for homozygotes).
Among the 6 genetic variants selected at a 20% false discovery rate (FDR) threshold, the minor allele of rs10508884, situated in an intron of the CXCL12 gene, increased the rate of immunogenicity (aHR = 3.804 [2.139-6.764], p < 1 × 10⁻⁵ for patients homozygous for the minor allele) and was chosen for validation through a CXCL12 protein enzyme-linked immunosorbent assay (ELISA) on patient serum at baseline, before the start of therapy. CXCL12 protein levels were higher for patients homozygous for the minor allele carrying higher ADA risk (mean: 2,693 pg/ml) than for the other genotypes (mean: 2,317 pg/ml; p = 0.014), and patients with CXCL12 serum levels above the median were more prone to develop ADAs (aHR = 2.329 [1.106-4.90], p = 0.026). A limitation of the study is the lack of replication; therefore, other studies are required to confirm our findings.
Conclusion
In our study, we found that immunosuppressants and antibiotics were associated with decreased risk of ADA development, whereas tobacco smoking and infections during the study were associated with increased risk. We found that the HLA-DQA1*05 allele was associated with an increased rate of immunogenicity. Moreover, our results suggest a relationship between CXCL12 production and ADA development independent of the disease, which is consistent with its known function in affinity maturation of antibodies and plasma cell survival. Our findings may help physicians in the management of patients receiving biotherapies.



PLoS Med: 29 Sep 2020; 17:e1003348 | PMID: 33125391
Abstract

Primary healthcare expansion and mortality in Brazil's urban poor: A cohort analysis of 1.2 million adults.

Hone T, Saraceni V, Medina Coeli C, Trajman A, ... Millett C, Durovni B
Background
Expanding delivery of primary healthcare to urban poor populations is a priority in many low- and middle-income countries. This remains a key challenge in Brazil despite expansion of the country's internationally recognized Family Health Strategy (FHS) over the past two decades. This study evaluates the impact of an ambitious program to rapidly expand FHS coverage in the city of Rio de Janeiro, Brazil, since 2008.
Methods and findings
A cohort of 1,241,351 low-income adults (observed January 2010-December 2016; total person-years 6,498,607) with linked FHS utilization and mortality records was analyzed using flexible parametric survival models. Time to death from all causes and from selected causes was estimated for FHS users and nonusers. Models employed inverse probability of treatment weighting with regression adjustment (IPTW-RA). The cohort was 61% female (751,895) and had a mean age of 36 years (standard deviation 16.4). Only 18,721 individuals (1.5%) had higher education, whereas 102,899 (8%) had no formal education. Two thirds of individuals (827,250; 67%) were in receipt of conditional cash transfers (Bolsa Família). A total of 34,091 deaths were analyzed, of which 8,765 (26%) were due to cardiovascular disease; 5,777 (17%) to neoplasms; 5,683 (17%) to external causes; 3,152 (9%) to respiratory diseases; and 3,115 (9%) to infectious and parasitic diseases. Over one third of the cohort (467,155; 37.6%) used FHS services. In IPTW-RA survival analysis, an average FHS user had a 44% lower hazard of all-cause mortality (HR 0.56, 95% CI 0.54-0.59, p < 0.001) and a 5-year risk reduction of 8.3 per 1,000 (95% CI 7.8-8.9, p < 0.001) compared with a non-FHS user. Reductions in the risk of death were greater for FHS users who were black (HR 0.50, 95% CI 0.46-0.54, p < 0.001) or pardo (HR 0.57, 95% CI 0.54-0.60, p < 0.001) than for those who were white (HR 0.59, 95% CI 0.56-0.63, p < 0.001); for those with no education (HR 0.50, 95% CI 0.46-0.55, p < 0.001), with no significant association for those with higher education (p = 0.758); and for recipients of conditional cash transfers (Bolsa Família) (HR 0.51, 95% CI 0.49-0.54, p < 0.001) compared with nonrecipients (HR 0.63, 95% CI 0.60-0.67, p < 0.001).
Key limitations in this study are potential unobserved confounding through selection into the program and linkage errors, although analytical approaches have minimized the potential for bias.
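The IPTW step of the IPTW-RA approach above can be sketched in its simplest form: model the propensity of FHS use, then weight each person by the inverse probability of the exposure they actually received (stabilized here by the marginal treatment probability). The propensity values below are invented for illustration:

```python
import numpy as np

def iptw_weights(treated, propensity):
    """Stabilized inverse-probability-of-treatment weights: treated units get
    P(T=1)/e(x); untreated units get P(T=0)/(1 - e(x)), where e(x) is the
    estimated propensity score."""
    treated = np.asarray(treated, dtype=float)
    e = np.asarray(propensity, dtype=float)
    p_treat = treated.mean()  # marginal probability of treatment
    return np.where(treated == 1, p_treat / e, (1 - p_treat) / (1 - e))

# Illustrative propensity scores only (in practice, from a fitted model)
w = iptw_weights([1, 1, 0, 0], [0.8, 0.4, 0.3, 0.1])
```

Weighted outcome regression on top of these weights gives the "RA" half of IPTW-RA, a doubly robust combination.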
Conclusions
FHS utilization in urban poor populations in Brazil was associated with a lower risk of death, with greater reductions among more deprived race/ethnic and socioeconomic groups. Increased investment in primary healthcare is likely to improve health and reduce health inequalities in urban poor populations globally.



PLoS Med: 29 Sep 2020; 17:e1003357 | PMID: 33125387
Abstract

Metabolically healthy obesity, transition to unhealthy metabolic status, and vascular disease in Chinese adults: A cohort study.

Gao M, Lv J, Yu C, Guo Y, ... Li L,
Background
Metabolically healthy obesity (MHO) and its transition to unhealthy metabolic status have been associated with risk of cardiovascular disease (CVD) in Western populations. However, it is unclear to what extent metabolic health changes over time and whether such transition affects risks of subtypes of CVD in Chinese adults. We aimed to examine the association of metabolic health status and its transition with risks of subtypes of vascular disease across body mass index (BMI) categories.
Methods and findings
The China Kadoorie Biobank recruited participants between 25 June 2004 and 15 July 2008 in 5 urban (Harbin, Qingdao, Suzhou, Liuzhou, and Haikou) and 5 rural (Henan, Gansu, Sichuan, Zhejiang, and Hunan) regions across China. BMI and metabolic health information were collected. We classified participants into BMI categories: normal weight (BMI 18.5-23.9 kg/m²), overweight (BMI 24.0-27.9 kg/m²), and obese (BMI ≥ 28 kg/m²). Metabolic health was defined as meeting fewer than 2 of the following 4 criteria: elevated waist circumference, hypertension, elevated plasma glucose level, and dyslipidemia. Changes in obesity and metabolic health status were assessed from baseline to the second resurvey, with overweight and obesity combined. Among the 458,246 participants with complete information and no history of CVD or cancer, the mean age at baseline was 50.9 (SD 10.4) years, 40.8% were men, and 29.0% were current smokers. During a median 10.0 years of follow-up, 52,251 major vascular events (MVEs) were recorded, including 7,326 major coronary events (MCEs), 37,992 ischemic heart disease (IHD) events, and 42,951 strokes. Compared with metabolically healthy normal weight (MHN), baseline MHO was associated with higher hazard ratios (HRs) for all types of CVD; however, almost 40% of these participants transitioned to metabolically unhealthy status. Stable metabolically unhealthy overweight or obesity (MUOO) (HR 2.22, 95% confidence interval [CI] 2.00-2.47, p < 0.001) and transition from metabolically healthy to unhealthy status (HR 1.53, 1.34-1.75, p < 0.001) were associated with higher risk of MVE, compared with stable healthy normal weight. Similar patterns were observed for MCE, IHD, and stroke. Limitations of the analysis include the lack of measurement of lipid components, fasting plasma glucose, and visceral fat, and possible misclassification.
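The exposure classification can be sketched directly from the definitions above (Chinese BMI cut-offs; metabolic health = fewer than 2 of 4 abnormalities). The "underweight" label for BMI < 18.5 is our addition for completeness, as the study's categories start at 18.5:

```python
def bmi_category(bmi):
    """Chinese BMI cut-offs (kg/m^2) as used in the study; the
    'underweight' label for BMI < 18.5 is an assumption for completeness."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 24.0:
        return "normal weight"
    if bmi < 28.0:
        return "overweight"
    return "obese"

def metabolically_healthy(elevated_waist, hypertension,
                          elevated_glucose, dyslipidemia):
    """Healthy = meeting fewer than 2 of the 4 criteria, per the study."""
    return sum([elevated_waist, hypertension,
                elevated_glucose, dyslipidemia]) < 2
```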
Conclusions
Among Chinese adults, MHO individuals have increased risks of MVE. Obesity remains a risk factor for CVD independent of major metabolic factors. Our data further suggest that metabolic health is a transient state for a large proportion of Chinese adults, with the highest vascular risk among those who remained MUOO.



PLoS Med: 29 Sep 2020; 17:e1003351 | PMID: 33125374
Abstract

Qualitative process evaluation from a complex systems perspective: A systematic review and framework for public health evaluators.

McGill E, Marks D, Er V, Penney T, Petticrew M, Egan M
Background
Public health evaluation methods have been criticized for being overly reductionist and failing to generate suitable evidence for public health decision-making. A "complex systems approach" has been advocated to account for real-world complexity. Qualitative methods may be well suited to understanding change in complex social environments, but guidance on applying a complex systems approach to qualitative research remains limited and underdeveloped. This systematic review aims to analyze published examples of process evaluations that apply qualitative methods from a complex systems perspective, and to propose a framework for qualitative complex systems process evaluations.
Methods and findings
We conducted a systematic search to identify complex systems process evaluations involving qualitative methods, searching electronic databases (Scopus, MEDLINE, Web of Science) from January 1, 2014 to September 30, 2019, alongside citation searching and expert consultations. Process evaluations were included if they self-identified as taking a systems- or complexity-oriented approach, integrated qualitative methods, reported empirical findings, and evaluated public health interventions. Two reviewers independently assessed each study to identify concepts associated with the systems thinking and complexity science traditions. Twenty-one unique studies were identified evaluating a wide range of public health interventions in, for example, urban planning, sexual health, violence prevention, substance use, and community transformation. Evaluations were conducted in settings such as schools, workplaces, and neighborhoods in 13 different countries (9 high-income and 4 middle-income). All reported some use of complex systems concepts in the analysis of qualitative data. In 14 evaluations, the consideration of complex systems influenced intervention design, evaluation planning, or fieldwork. The identified studies generally used systems concepts to depict and describe a system at one point in time. Only 4 evaluations explicitly utilized a range of complexity concepts to assess changes within the system resulting from, or co-occurring with, intervention implementation over time. Limitations of our approach include the restriction to English-language papers, reliance on study authors' reporting of their use of complex systems concepts, and the reviewers' subjective judgment as to which concepts featured in each study.
Conclusion
We found no consensus on what bringing a complex systems perspective to qualitative public health process evaluations looks like in practice; many studies of this nature describe static systems at a single time point. We suggest that future studies use a 2-phase framework for qualitative process evaluations that seek to assess changes over time from a complex systems perspective. The first phase involves producing a description of the system and identifying hypotheses about how the system may change in response to the intervention. The second phase involves following the pathway of emergent findings in an adaptive evaluation approach.



PLoS Med: 30 Oct 2020; 17:e1003368 | PMID: 33137099
Abstract

Estimated health benefits, costs, and cost-effectiveness of eliminating industrial trans-fatty acids in Australia: A modelling study.

Marklund M, Zheng M, Veerman JL, Wu JHY
Background
trans-Fatty acids (TFAs) are a well-known risk factor for ischemic heart disease (IHD). In Australia, the highest TFA intakes are concentrated in the most socioeconomically disadvantaged groups. Elimination of industrial TFA (iTFA) from the Australian food supply could reduce IHD mortality and morbidity while improving health equity. However, such legislation could lead to additional costs for both government and the food industry. Thus, we assessed the potential cost-effectiveness, health gains, and effects on health equality of banning iTFA from the Australian food supply.
Methods and findings
Markov cohort models were used to estimate the impact on IHD burden and health equity, as well as the cost-effectiveness, of a national iTFA ban in Australia. Intake of TFA was assessed using the 2011-2012 Australian National Nutrition and Physical Activity Survey. The IHD burden attributable to TFA was calculated by comparing the current level of TFA intake to a counterfactual setting in which consumption was lowered to a theoretical minimum distribution with a mean of 0.5% of energy per day (corresponding to TFA intake only from nonindustrial sources, e.g., dairy foods). Policy costs, avoided IHD events and deaths, health-adjusted life years (HALYs) gained, and changes in IHD-related healthcare costs were estimated over 10 years and over the lifetime of the adult Australian population. Cost-effectiveness was assessed by calculating incremental cost-effectiveness ratios (ICERs) from net policy cost and HALYs gained. Health benefits and healthcare cost changes were also assessed in subgroups based on socioeconomic status, defined by Socio-Economic Indexes for Areas (SEIFA) quintile, and remoteness. Compared to a base case of no ban and current TFA intakes, elimination of iTFA was estimated to prevent 2,294 (95% uncertainty interval [UI]: 1,765; 2,851) IHD deaths and 9,931 (95% UI: 8,429; 11,532) IHD events over the first 10 years. The greatest health benefits accrued to the most socioeconomically disadvantaged quintiles and to Australians living outside of major cities. The intervention was estimated to be cost saving (net cost <0 AUD) or cost-effective (i.e., ICER < AUD 169,361/HALY) regardless of the time horizon, with ICERs of 1,073 (95% UI: dominant; 3,503) and 1,956 (95% UI: 1,010; 2,750) AUD/HALY over 10 years and over the lifetime, respectively. Findings were robust across several sensitivity analyses. Key limitations of the study include the lack of recent data on TFA intake and the small sample sizes used to estimate intakes in subgroups.
As with all simulation models, our study does not prove that a ban on iTFA would prevent IHD; rather, it provides the best quantitative estimates, with corresponding uncertainty, of a potential effect in the absence of stronger direct evidence.
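The reported ICERs reduce to a simple ratio of net policy cost to HALYs gained, with a negative net cost conventionally reported as "dominant" (the policy both saves money and improves health). A minimal sketch with invented numbers, not the modelled Australian estimates:

```python
def icer(net_policy_cost_aud, halys_gained):
    """Incremental cost-effectiveness ratio versus the no-ban base case:
    net policy cost (AUD) divided by HALYs gained. A negative net cost
    means the intervention is cost-saving (dominant)."""
    if net_policy_cost_aud < 0:
        return "dominant (cost-saving)"
    return net_policy_cost_aud / halys_gained
```

The resulting AUD/HALY figure is then compared against a willingness-to-pay threshold (the paper uses AUD 169,361/HALY) to judge cost-effectiveness.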
Conclusions
Our model estimates that a ban on iTFAs could avert substantial numbers of IHD events and deaths in Australia and would likely be a highly cost-effective strategy for reducing socioeconomic and urban-rural inequalities in health. These findings suggest that eliminating iTFA can cost-effectively improve health and health equality even in countries with low iTFA intake.



PLoS Med: 30 Oct 2020; 17:e1003407 | PMID: 33137090
Abstract

Bone marrow microenvironments that contribute to patient outcomes in newly diagnosed multiple myeloma: A cohort study of patients in the Total Therapy clinical trials.

Danziger SA, McConnell M, Gockley J, Young MH, ... Ratushny AV, Morgan GJ
Background
The tumor microenvironment (TME) is increasingly appreciated as an important determinant of cancer outcome, including in multiple myeloma (MM). However, most myeloma microenvironment studies have been based on bone marrow (BM) aspirates, which often do not fully reflect the cellular content of BM tissue itself. To address this limitation in myeloma research, we systematically characterized the whole bone marrow (WBM) microenvironment during premalignant, baseline, on treatment, and post-treatment phases.
Methods and findings
Between 2004 and 2019, 998 BM samples were taken from 436 patients with newly diagnosed MM (NDMM) at the University of Arkansas for Medical Sciences in Little Rock, Arkansas, United States of America. These patients were 61% male and 39% female, 89% White, 8% Black, and 3% other/refused, with a mean age of 58 years. Using WBM and matched cluster of differentiation (CD)138-selected tumor gene expression to control for tumor burden, we identified a subgroup of patients with an adverse TME associated with 17 fewer months of progression-free survival (PFS) (95% confidence interval [CI] 5-29, 49-69 versus 70-82 months, χ2 p = 0.001) and 15 fewer months of overall survival (OS; 95% CI -1 to 31, 92-120 versus 113-129 months, χ2 p = 0.036). Using immunohistochemistry-validated computational tools that identify distinct cell types from bulk gene expression, we showed that the adverse outcome was correlated with elevated CD8+ T cell and reduced granulocytic cell proportions. This microenvironment develops during the progression from premalignant to malignant disease and becomes less prevalent after therapy, in which setting it is associated with improved outcomes. In patients with quantified International Staging System (ISS) stage and 70-gene Prognostic Risk Score (GEP-70) scores, taking the microenvironment into consideration would have identified an additional 40 out of 290 patients (14%, permutation p = 0.001) with significantly worse outcomes (PFS, 95% CI 6-36, 49-73 versus 74-90 months) who were not identified as high risk by existing clinical (ISS stage III) and tumor (GEP-70) criteria. The main limitations of this study are that it relies on computationally identified cell types and that patients were treated with thalidomide rather than current therapies.
Conclusions
In this study, we observe that granulocyte signatures in the MM TME contribute to a more accurate prognosis. This implies that future researchers and clinicians treating patients should quantify TME components, in particular monocytes and granulocytes, which are often ignored in microenvironment studies.



Danziger SA, McConnell M, Gockley J, Young MH, ... Ratushny AV, Morgan GJ
PLoS Med: 30 Oct 2020; 17:e1003323 | PMID: 33147277
Abstract

Paternal country of origin and adverse neonatal outcomes in births to foreign-born women in Norway: A population-based cohort study.

Vik ES, Aasheim V, Nilsen RM, Small R, Moster D, Schytt E
Background
Migration is a risk factor for adverse neonatal outcomes. The various impacts of maternal origin have been reported previously. The aim of this study was to investigate associations between paternal origin and adverse neonatal outcomes in births to migrant and Norwegian-born women in Norway.
Methods and findings
This nationwide population-based study included births to migrant (n = 240,759, mean age 29.6 years [±5.3 SD]) and Norwegian-born women (n = 1,232,327, mean age 29.0 years [±5.1 SD]) giving birth in Norway in 1990-2016. The main exposure was paternal origin (Norwegian-born, foreign-born, or unregistered). Neonatal outcomes were very preterm birth (22+0-31+6 gestational weeks), moderately preterm birth (32+0-36+6 gestational weeks), small for gestational age (SGA), low Apgar score (<7 at 5 minutes), and stillbirth. Associations were investigated in migrant and Norwegian-born women separately using multiple logistic regression and reported as adjusted odds ratios (aORs) with 95% confidence intervals (CIs), adjusted for year of birth, parity, maternal and paternal age, marital status, maternal education, and mother's gross income. In births to migrant women, a foreign-born father was associated with increased odds of very preterm birth (1.1% versus 0.9%, aOR 1.20; CI 1.08-1.33, p = 0.001), SGA (13.4% versus 9.5%, aOR 1.48; CI 1.43-1.53, p < 0.001), low Apgar score (1.7% versus 1.5%, aOR 1.14; CI 1.05-1.23, p = 0.001), and stillbirth (0.5% versus 0.3%, aOR 1.26; CI 1.08-1.48, p = 0.004) compared with a Norwegian-born father. In Norwegian-born women, a foreign-born father was associated with increased odds of SGA (9.3% versus 8.1%, aOR 1.13; CI 1.09-1.16, p < 0.001) and decreased odds of moderately preterm birth (4.3% versus 4.4%, aOR 0.95; CI 0.91-0.99, p = 0.015) when compared with a Norwegian-born father.
In migrant women, unregistered paternal origin was associated with increased odds of very preterm birth (2.2% versus 0.9%, aOR 2.29; CI 1.97-2.66, p < 0.001), moderately preterm birth (5.6% versus 4.7%, aOR 1.15; CI 1.06-1.25, p = 0.001), SGA (13.0% versus 9.5%, aOR 1.50; CI 1.42-1.58, p < 0.001), low Apgar score (3.4% versus 1.5%, aOR 2.23; CI 1.99-2.50, p < 0.001), and stillbirth (1.5% versus 0.3%, aOR 4.87; CI 3.98-5.96, p < 0.001) compared with a Norwegian-born father. In Norwegian-born women, unregistered paternal origin was associated with increased odds of very preterm birth (4.6% versus 1.0%, aOR 4.39; CI 4.05-4.76, p < 0.001), moderately preterm birth (7.8% versus 4.4%, aOR 1.62; CI 1.53-1.71, p < 0.001), SGA (11.4% versus 8.1%, aOR 1.30; CI 1.24-1.36, p < 0.001), low Apgar score (4.6% versus 1.3%, aOR 3.51; CI 3.26-3.78, p < 0.001), and stillbirth (3.2% versus 0.4%, aOR 9.00; CI 8.15-9.93, p < 0.001) compared with births with a Norwegian-born father. The main limitations of this study were the restricted access to paternal demographics and inability to account for all lifestyle factors.
Conclusion
We found that a foreign-born father was associated with adverse neonatal outcomes among births to migrant women, but to a lesser degree among births to nonmigrant women, when compared with a Norwegian-born father. Unregistered paternal origin was associated with higher odds of adverse neonatal outcomes in births to both migrant and nonmigrant women when compared with Norwegian-born fathers. Increased attention to paternal origin may help identify women in maternity care at risk for adverse neonatal outcomes.



PLoS Med: 30 Oct 2020; 17:e1003395 | PMID: 33147226
Abstract

Maternal dysglycaemia, changes in the infant's epigenome modified with a diet and physical activity intervention in pregnancy: Secondary analysis of a randomised controlled trial.

Antoun E, Kitaba NT, Titcombe P, Dalrymple KV, ... Lillycrop KA,
Background
Higher maternal plasma glucose (PG) concentrations, even below gestational diabetes mellitus (GDM) thresholds, are associated with adverse offspring outcomes, with DNA methylation proposed as a mediating mechanism. Here, we examined the relationships between maternal dysglycaemia at 24 to 28 weeks' gestation and DNA methylation in neonates and whether a dietary and physical activity intervention in pregnant women with obesity modified the methylation signatures associated with maternal dysglycaemia.
Methods and findings
We investigated 557 women, recruited between 2009 and 2014 from the UK Pregnancies Better Eating and Activity Trial (UPBEAT), a randomised controlled trial (RCT) of a lifestyle intervention (low glycaemic index (GI) diet plus physical activity) in pregnant women with obesity (294 control, 263 intervention). Between 27 and 28 weeks of pregnancy, participants had an oral glucose (75 g) tolerance test (OGTT), and GDM diagnosis was based on diagnostic criteria recommended by the International Association of Diabetes and Pregnancy Study Groups (IADPSG), with 159 women having a diagnosis of GDM. Cord blood DNA samples from the infants were interrogated for genome-wide DNA methylation levels using the Infinium Human MethylationEPIC BeadChip array. Robust regression was carried out, adjusting for maternal age, smoking, parity, ethnicity, neonate sex, and predicted cell-type composition. Maternal GDM, fasting glucose, 1-h, and 2-h glucose concentrations following an OGTT were associated with 242, 1, 592, and 17 differentially methylated cytosine-phosphate-guanine (dmCpG) sites (false discovery rate (FDR) ≤ 0.05), respectively, in the infant's cord blood DNA. The most significantly GDM-associated CpG was cg03566881, located within the leucine-rich repeat-containing G-protein coupled receptor 6 (LGR6) gene (FDR = 0.0002). Moreover, we show that the GDM and 1-h glucose-associated methylation signatures in the cord blood of the infant appeared to be attenuated by the dietary and physical activity intervention during pregnancy; in the intervention arm, there were no GDM and two 1-h glucose-associated dmCpGs, whereas in the standard care arm, there were 41 GDM and 160 1-h glucose-associated dmCpGs.
A total of 87% of the GDM and 77% of the 1-h glucose-associated dmCpGs had smaller effect sizes in the intervention compared to the standard care arm; the adjusted r2 for the association of LGR6 cg03566881 with GDM was 0.317 (95% confidence interval (CI) 0.012, 0.022) in the standard care arm and 0.240 (95% CI 0.001, 0.015) in the intervention arm. Limitations included the measurement of DNA methylation in cord blood, where the functional significance of such changes is unclear, and, because of the strong collinearity between treatment modality and severity of hyperglycaemia, we cannot exclude that treatment-related differences are potential confounders.
Conclusions
Maternal dysglycaemia was associated with significant changes in the epigenome of the infants. Moreover, we found that the epigenetic impact of a dysglycaemic prenatal maternal environment appeared to be modified by a lifestyle intervention in pregnancy. Further research will be needed to investigate possible medical implications of the findings.
Trial registration
ISRCTN89971375.



PLoS Med: 30 Oct 2020; 17:e1003229 | PMID: 33151971
Abstract

Evaluation of an intervention to provide brief support and personalized feedback on food shopping to reduce saturated fat intake (PC-SHOP): A randomized controlled trial.

Piernas C, Aveyard P, Lee C, Tsiountsioura M, ... Madigan C, Jebb SA
Background
Guidelines recommend reducing saturated fat (SFA) intake to decrease cardiovascular disease (CVD) risk, but there is limited evidence on scalable and effective approaches to change dietary intake, given the large proportion of the population exceeding SFA recommendations. We aimed to develop a system to provide monthly personalized feedback and healthier swaps based on nutritional analysis of loyalty card data from the largest United Kingdom grocery store together with brief advice and support from a healthcare professional (HCP) in the primary care practice. Following a hybrid effectiveness-feasibility design, we tested the effects of the intervention on SFA intake and low-density lipoprotein (LDL) cholesterol as well as the feasibility and acceptability of providing nutritional advice using loyalty card data.
Methods and findings
The Primary Care Shopping Intervention for Cardiovascular Disease Prevention (PC-SHOP) study is a parallel randomized controlled trial with a 3-month follow-up conducted between 21 March 2018 and 16 January 2019. Adults ≥18 years with LDL cholesterol >3 mmol/L (n = 113) were recruited from general practitioner (GP) practices in Oxfordshire and randomly allocated to "Brief Support" (BS, n = 48), "Brief Support + Shopping Feedback" (SF, n = 48), or "Control" (n = 17). BS consisted of a 10-minute consultation with an HCP to motivate participants to reduce their SFA intake. Shopping feedback comprised a personalized report on the SFA content of grocery purchases and suggestions for lower SFA swaps. The primary outcome was the between-group difference in change in SFA intake (% total energy intake) at 3 months, adjusted for baseline SFA and GP practice, using intention-to-treat analysis. Secondary outcomes included %SFA in purchases, LDL cholesterol, and feasibility outcomes. The trial was powered to detect an absolute reduction in SFA of 3% (SD 3). Neither participants nor the study team were blinded to group allocation. A total of 106 (94%) participants completed the study: 68% women, 95% white ethnicity, average age 62.4 years (SD 10.8), body mass index (BMI) 27.1 kg/m2 (SD 4.7). There were small decreases in SFA intake at 3 months: control = -0.1% (95% CI -1.8 to 1.7), BS = -0.7% (95% CI -1.8 to 0.3), SF = -0.9% (95% CI -2.0 to 0.2); but no evidence of a significant effect of either intervention compared with control (difference adjusted for GP practice and baseline: BS versus control = -0.33% [95% CI -2.11 to 1.44], p = 0.709; SF versus control = -0.11% [95% CI -1.92 to 1.69], p = 0.901).
There were similar trends in %SFA based on supermarket purchases: control = -0.5% (95% CI -2.3 to 1.2), BS = -1.3% (95% CI -2.3 to -0.3), SF = -1.5% (95% CI -2.5 to -0.5) from baseline to follow-up, but these were not significantly different: BS versus control p = 0.379; SF versus control p = 0.411. There were small reductions in LDL from baseline to follow-up (control = -0.14 mmol/L [95% CI -0.48 to 0.19], BS = -0.39 mmol/L [95% CI -0.59 to -0.19], SF = -0.14 mmol/L [95% CI -0.34 to 0.07]), but these were not significantly different: BS versus control p = 0.338; SF versus control p = 0.790. Limitations of this study include the small sample of participants recruited, which limits the power to detect smaller differences, and the low response rate (3%), which may limit the generalisability of these findings.
Conclusions
In this study, we have shown it is feasible to deliver brief advice in primary care to encourage reductions in SFA intake and to provide personalized advice to encourage healthier choices using supermarket loyalty card data. There was no evidence of large reductions in SFA, but we are unable to exclude more modest benefits. The feasibility, acceptability, and scalability of these interventions suggest they have potential to encourage small changes in diet, which could be beneficial at the population level.
Trial registration
ISRCTN14279335.



PLoS Med: 30 Oct 2020; 17:e1003385 | PMID: 33151934
Abstract

Predicting suicide attempt or suicide death following a visit to psychiatric specialty care: A machine learning study using Swedish national registry data.

Chen Q, Zhang-James Y, Barnett EJ, Lichtenstein P, ... Larsson H, Fazel S
Background
Suicide is a major public health concern globally. Accurately predicting suicidal behavior remains challenging. This study aimed to use machine learning approaches to examine the potential of the Swedish national registry data for prediction of suicidal behavior.
Methods and findings
The study sample consisted of 541,300 inpatient and outpatient visits by 126,205 Sweden-born patients (54% female and 46% male) aged 18 to 39 (mean age at the visit: 27.3) years to psychiatric specialty care in Sweden between January 1, 2011 and December 31, 2012. The most common psychiatric diagnoses at the visit were anxiety disorders (20.0%), major depressive disorder (16.9%), and substance use disorders (13.6%). A total of 425 candidate predictors covering demographic characteristics, socioeconomic status (SES), electronic medical records, criminality, as well as family history of disease and crime were extracted from the Swedish registry data. The sample was randomly split into an 80% training set containing 433,024 visits and a 20% test set containing 108,276 visits. Models were trained separately for suicide attempt/death within 90 and 30 days following a visit using multiple machine learning algorithms. Model discrimination and calibration were both evaluated. Among all eligible visits, 3.5% (18,682) were followed by a suicide attempt/death within 90 days and 1.7% (9,099) within 30 days. The final models were based on ensemble learning that combined predictions from elastic net penalized logistic regression, random forest, gradient boosting, and a neural network. The area under the receiver operating characteristic (ROC) curves (AUCs) on the test set were 0.88 (95% confidence interval [CI] = 0.87-0.89) and 0.89 (95% CI = 0.88-0.90) for the outcome within 90 days and 30 days, respectively, both being significantly better than chance (i.e., AUC = 0.50) (p < 0.01). Sensitivity, specificity, and predictive values were reported at different risk thresholds. A limitation of our study is that our models have not yet been externally validated, and thus, the generalizability of the models to other populations remains unknown.
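The AUCs reported above have a direct probabilistic reading: the probability that a randomly chosen visit followed by a suicide attempt/death receives a higher predicted risk than a randomly chosen visit that is not. A minimal sketch of that rank-based definition (all risk scores below are hypothetical, not the study's data):

```python
def auroc(pos_scores, neg_scores):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted risks from an ensemble model
pos = [0.9, 0.8, 0.7]   # visits followed by suicide attempt/death
neg = [0.6, 0.4, 0.2]   # visits not followed by the outcome
print(auroc(pos, neg))  # 1.0: every positive outranks every negative
```

An AUC of 0.50 corresponds to chance-level ranking, which is the baseline the reported 0.88-0.89 values are tested against.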
Conclusions
By combining the ensemble method of multiple machine learning algorithms and high-quality data solely from the Swedish registers, we developed prognostic models to predict short-term suicide attempt/death with good discrimination and calibration. Whether novel predictors can improve predictive performance requires further investigation.



PLoS Med: 30 Oct 2020; 17:e1003416 | PMID: 33156863
Abstract

Evaluation of an unconditional cash transfer program targeting children's first-1,000-days linear growth in rural Togo: A cluster-randomized controlled trial.

Briaux J, Martin-Prevel Y, Carles S, Fortin S, ... Becquet R, Savy M
Background
In 2014, the government of Togo implemented a pilot unconditional cash transfer (UCT) program in rural villages that aimed at improving children's nutrition, health, and protection. It combined monthly UCTs (approximately US$8.40/month) with a package of community activities (including behavior change communication [BCC] sessions, home visits, and integrated community case management of childhood illnesses and acute malnutrition [ICCM-Nut]) delivered to mother-child pairs during the first "1,000 days" of life. We primarily investigated program impact at the population level on children's height-for-age z-scores (HAZs) and secondarily on stunting (HAZ < -2) and intermediary outcomes including household food insecurity, mother-child pairs' diet and health, delivery in a health facility and low birth weight (LBW), women's knowledge, and physical intimate partner violence (IPV).
Methods and findings
We implemented a parallel-cluster-randomized controlled trial, in which 162 villages were randomized into either an intervention arm (UCTs + package of community activities, n = 82) or a control arm (package of community activities only, n = 80). Two different representative samples of children aged 6-29 months and their mothers were surveyed in each arm, one before the intervention in 2014 (control: n = 1,301, intervention: n = 1,357), the other 2 years afterwards in 2016 (control: n = 996, intervention: n = 1,035). Difference-in-differences (DD) estimates of impact were calculated, adjusting for clustering. Children's average age was 17.4 (± 0.24 SE) months in the control arm and 17.6 (± 0.19 SE) months in the intervention arm at baseline. UCTs had a protective effect on HAZ (DD = +0.25 z-scores, 95% confidence interval [CI]: 0.01-0.50, p = 0.039), which deteriorated in the control arm while remaining stable in the intervention arm, but had no impact on stunting (DD = -6.2 percentage points [pp], relative odds ratio [ROR]: 0.74, 95% CI: 0.51-1.06, p = 0.097). UCTs increased both mothers' and children's (18-23 months) consumption of animal source foods (ASFs) (respectively, DD = +4.5 pp, ROR: 2.24, 95% CI: 1.09-4.61, p = 0.029 and DD = +9.1 pp, ROR: 2.65, 95% CI: 1.01-6.98, p = 0.048) and reduced household food insecurity (DD = -10.7 pp, ROR: 0.63, 95% CI: 0.43-0.91, p = 0.016). UCTs did not impact reported child morbidity in the 2 weeks prior to the survey (DD = -3.5 pp, ROR: 0.80, 95% CI: 0.56-1.14, p = 0.214) but reduced the financial barrier to seeking healthcare for sick children (DD = -26.4 pp, ROR: 0.23, 95% CI: 0.08-0.66, p = 0.006). Women who received cash had higher odds of delivering in a health facility (DD = +10.6 pp, ROR: 1.53, 95% CI: 1.10-2.13, p = 0.012) and lower odds of giving birth to babies with birth weights (BWs) <2,500 g (DD = -11.8 pp, ROR: 0.29, 95% CI: 0.10-0.82, p = 0.020).
Beneficial effects were also found on women's knowledge (DD = +14.8 pp, ROR: 1.86, 95% CI: 1.32-2.62, p < 0.001) and physical IPV (DD = -7.9 pp, ROR: 0.60, 95% CI: 0.36-0.99, p = 0.048). Study limitations included the short evaluation period (24 months) and the low coverage of UCTs, which might have reduced the program's impact.
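The difference-in-differences estimates above compare the change over time in the intervention arm with the change in the control arm, so that shared time trends cancel out. A minimal sketch of the arithmetic, using hypothetical mean HAZ values chosen to reproduce the reported +0.25 DD (not the study's raw data):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DD = (treated change) - (control change); removes trends common to both arms."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean HAZ: stable under UCTs, deteriorating in the control arm
dd = diff_in_diff(treat_pre=-1.50, treat_post=-1.50,
                  ctrl_pre=-1.50, ctrl_post=-1.75)
print(dd)  # 0.25: a protective effect of the same size as the reported DD
```

The published analysis additionally adjusts for clustering of children within villages, which this sketch omits.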
Conclusions
UCTs targeting the first "1,000 days" had a protective effect on children's linear growth in rural areas of Togo. Their simultaneous positive effects on various immediate, underlying, and basic causes of malnutrition certainly contributed to this ultimate impact. The positive impacts observed on pregnancy- and birth-related outcomes call for further attention to the conception period in nutrition-sensitive programs.
Trial registration
ISRCTN Registry ISRCTN83330970.



PLoS Med: 30 Oct 2020; 17:e1003388 | PMID: 33201927
Abstract

Efficacy and safety of dolutegravir plus emtricitabine versus standard ART for the maintenance of HIV-1 suppression: 48-week results of the factorial, randomized, non-inferiority SIMPL'HIV trial.

Sculier D, Wandeler G, Yerly S, Marinosci A, ... Calmy A,
Background
Dolutegravir (DTG)-based dual therapy is becoming a new paradigm for both the initiation and maintenance of HIV treatment. The SIMPL'HIV study investigated the outcomes of virologically suppressed patients on standard combination antiretroviral therapy (cART) switching to DTG + emtricitabine (FTC). We present the 48-week efficacy and safety data on DTG + FTC versus cART.
Methods and findings
SIMPL'HIV was a multicenter, open-label, non-inferiority randomized trial with a factorial design among treatment-experienced people with HIV in Switzerland. Participants were enrolled between 12 May 2017 and 30 May 2018. Patients virologically suppressed for at least 24 weeks on standard cART were randomized 1:1 to switching to DTG + FTC or to continuing cART, and 1:1 to simplified patient-centered monitoring versus standard monitoring. The primary endpoint was the proportion of patients virologically suppressed with <100 copies/ml through 48 weeks. The secondary endpoints included virological suppression at 48 weeks according to the US Food and Drug Administration (FDA) snapshot analysis. Non-inferiority of DTG + FTC versus cART for viral suppression was assessed using a stratified Mantel-Haenszel risk difference, with non-inferiority declared if the lower bound of the 95% confidence interval was greater than -12%. Adverse events were monitored to assess safety. Quality of life was evaluated using the PROQOL-HIV questionnaire. Ninety-three participants were randomized to DTG + FTC, and 94 to cART. Median nadir CD4 count was 246 cells/mm3; median age was 48 years; 17% of participants were female. DTG + FTC was non-inferior to cART. The proportion of patients with viral suppression (<100 copies/ml) through 48 weeks was 93.5% in the DTG + FTC arm and 94.7% in the cART arm in the intention-to-treat population (risk difference -1.2%; 95% CI -7.8% to 5.6%). Per-protocol analysis showed similar results, with viral suppression in 96.5% of patients in both arms (risk difference 0.0%; 95% CI -5.6% to 5.5%). There was no relevant interaction between the type of treatment and monitoring (interaction ratio 0.98; 95% CI 0.85 to 1.13; p = 0.81).
Using the FDA snapshot algorithm, 84/93 (90.3%) participants in the DTG + FTC arm had an HIV-1 RNA viral load of <50 copies/ml compared to 86/94 (91.5%) participants on standard cART (risk difference -1.1%; 95% CI -9.3% to 7.1%; p = 0.791). The overall proportion of patients with adverse events and discontinuations did not differ by randomization arm. The proportion of patients with serious adverse events was higher in the cART arm (16%) than in the DTG + FTC arm (6.5%) (p = 0.041), but none was considered to be related to the study medication. Quality of life improved more between baseline and week 48 in the DTG + FTC arm than in the cART arm (adjusted difference +2.6; 95% CI +0.4 to +4.7). The study's main limitations included the small proportion of women enrolled, the open-label design, and the short duration.
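The non-inferiority test described above reduces to checking whether the lower bound of the confidence interval for the risk difference clears the prespecified margin. A sketch of that decision rule using the intention-to-treat figures quoted above (margin -12%; this illustrates the rule only, not the stratified Mantel-Haenszel estimation itself):

```python
def non_inferior(lower_ci_bound, margin=-0.12):
    """Non-inferiority is declared if the lower 95% CI bound exceeds the margin."""
    return lower_ci_bound > margin

# Intention-to-treat: risk difference -1.2%, 95% CI -7.8% to 5.6%
print(non_inferior(-0.078))  # True: -7.8% is above the -12% margin
```

Note that the point estimate alone is not enough: a trial with the same -1.2% difference but a wider interval dipping below -12% would fail this criterion.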
Conclusions
In this study, DTG + FTC as maintenance therapy was non-inferior to cART in terms of efficacy, with a similar safety profile and a greater improvement in quality of life, thus expanding the offer of 2-drug simplification options among virologically suppressed individuals.
Trial registration
ClinicalTrials.gov NCT03160105.



PLoS Med: 30 Oct 2020; 17:e1003421 | PMID: 33170863
Abstract

A clinical score for identifying active tuberculosis while awaiting microbiological results: Development and validation of a multivariable prediction model in sub-Saharan Africa.

Baik Y, Rickman HM, Hanrahan CF, Mmolawa L, ... Katamba A, Dowdy DW
Background
In highly resource-limited settings, many clinics lack same-day microbiological testing for active tuberculosis (TB). In these contexts, risk of pretreatment loss to follow-up is high, and a simple, easy-to-use clinical risk score could be useful.
Methods and findings
We analyzed data from adults tested for TB with Xpert MTB/RIF across 28 primary health clinics in rural South Africa (between July 2016 and January 2018). We used least absolute shrinkage and selection operator regression to identify characteristics associated with Xpert-confirmed TB and converted coefficients into a simple score. We assessed discrimination using receiver operating characteristic (ROC) curves, calibration using Cox linear logistic regression, and clinical utility using decision curves. We validated the score externally in a population of adults tested for TB across 4 primary health clinics in urban Uganda (between May 2018 and December 2019). Model development was repeated de novo with the Ugandan population to compare clinical scores. The South African and Ugandan cohorts included 701 and 106 individuals who tested positive for TB, respectively, and 686 and 281 randomly selected individuals who tested negative. Compared to the Ugandan cohort, the South African cohort was older (41% versus 19% aged 45 years or older), had similar breakdown of biological sex (48% versus 50% female), and had higher HIV prevalence (45% versus 34%). The final prediction model, scored from 0 to 10, included 6 characteristics: age, sex, HIV (2 points), diabetes, number of classical TB symptoms (cough, fever, weight loss, and night sweats; 1 point each), and >14-day symptom duration. Discrimination was moderate in the derivation (c-statistic = 0.82, 95% CI = 0.81 to 0.82) and validation (c-statistic = 0.75, 95% CI = 0.69 to 0.80) populations. A patient with 10% pretest probability of TB would have a posttest probability of 4% with a score of 3/10 versus 43% with a score of 7/10. The de novo Ugandan model contained similar characteristics and performed equally well. Our study may be subject to spectrum bias as we only included a random sample of people without TB from each cohort. 
This score is only meant to guide management while awaiting microbiological results; it is not intended as a community-based triage test (i.e., to identify individuals who should receive further testing).
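The pretest-to-posttest conversion quoted above works on the odds scale: posttest odds equal pretest odds multiplied by the likelihood ratio associated with the observed score. A sketch of that arithmetic (the likelihood ratios here are back-calculated from the abstract's 10% pretest example, not values reported by the paper):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' rule on the odds scale: posttest odds = pretest odds * LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# LRs implied by the abstract's example at 10% pretest probability:
# score 3/10 -> ~4% posttest (LR ~ 0.375); score 7/10 -> ~43% (LR ~ 6.8)
print(round(posttest_probability(0.10, 0.375), 2))  # 0.04
print(round(posttest_probability(0.10, 6.8), 2))    # 0.43
```

This is why the same score shifts the probability by different absolute amounts in settings with different background TB prevalence.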
Conclusions
In this study, we observed that a simple clinical risk score reasonably distinguished individuals with and without TB among those submitting sputum for diagnosis. Subject to prospective validation, this score might be useful in settings with constrained diagnostic resources where concern for pretreatment loss to follow-up is high.



PLoS Med: 30 Oct 2020; 17:e1003420 | PMID: 33170838
Abstract

Southeast Asian health system challenges and responses to the 'Andaman Sea refugee crisis': A qualitative study of health-sector perspectives from Indonesia, Malaysia, Myanmar, and Thailand.

Legido-Quigley H, Leh Hoon Chuah F, Howard N
Background
Southeast Asian countries host significant numbers of forcibly displaced people. This study was conducted to examine how health systems in Southeast Asia have responded to the challenges of forced migration and refugee-related health, including the health needs of populations affected by forced displacement; the health systems-level barriers and facilitators in addressing these needs; and the implications of existing health policies relating to forcibly displaced and refugee populations. In doing so, it addresses a gap in knowledge about how health systems in the region are organised to meet the health needs of forcibly displaced people.
Methods and findings
We conducted 30 semistructured interviews with health policy-makers, health service providers, and other experts working in the United Nations (n = 6), ministries and public health (n = 5), international (n = 9) and national civil society (n = 7), and academia (n = 3) based in Indonesia (n = 6), Malaysia (n = 10), Myanmar (n = 6), and Thailand (n = 8). Data were analysed thematically using deductive and inductive coding. Interviewees described the cumulative nature of health risks at each migratory phase. Perceived barriers to addressing migrants\' cumulative health needs were primarily financial, juridico-political, and sociocultural, whereas key facilitators were many health workers\' humanitarian stance and positive national commitment to pursuing universal health coverage (UHC). Across all countries, financial constraints were identified as the main challenges in addressing the comprehensive health needs of refugees and asylum seekers. Participants recommended regional and multisectoral approaches led by national governments, recognising refugee and asylum-seeker contributions, and promoting inclusion and livelihoods. Main study limitations included that we were not able to include migrant voices or those professionals not already interested in migrants.
Conclusions
To our knowledge, this is one of the first qualitative studies to investigate the health concerns and barriers to access among migrants experiencing forced displacement, particularly refugees and asylum seekers, in Southeast Asia. Findings provide practical new insights with implications for informing policy and practice. Overall, sociopolitical inclusion of forcibly displaced populations remains difficult in these four countries despite their significant contributions to host-country economies.



PLoS Med: 30 Oct 2020; 17:e1003143 | PMID: 33170834
Abstract

Violence prevention accelerators for children and adolescents in South Africa: A path analysis using two pooled cohorts.

Cluver LD, Rudgard WE, Toska E, Zhou S, ... Meinck F, Sherr L
Background
The INSPIRE framework was developed by 10 global agencies as the first global package for preventing and responding to violence against children. The framework includes seven complementary strategies. Delivering all seven strategies is a challenge in resource-limited contexts. Consequently, governments are requesting additional evidence to inform which 'accelerator' provisions can simultaneously reduce multiple types of violence against children.
Methods and findings
We pooled data from two prospective South African adolescent cohorts including Young Carers (2010-2012) and Mzantsi Wakho (2014-2017). The combined sample size was 5,034 adolescents. Each cohort measured six self-reported violence outcomes (sexual abuse, transactional sexual exploitation, physical abuse, emotional abuse, community violence victimisation, and youth lawbreaking) and seven self-reported INSPIRE-aligned protective factors (positive parenting, parental monitoring and supervision, food security at home, basic economic security at home, free schooling, free school meals, and abuse response services). Associations between hypothesised protective factors and violence outcomes were estimated jointly in a sex-stratified multivariate path model, controlling for baseline outcomes and socio-demographics and correcting for multiple-hypothesis testing using the Benjamini-Hochberg procedure. We calculated adjusted probability estimates conditional on the presence of no, one, or all protective factors significantly associated with reduced odds of at least three forms of violence in the path model. Adjusted risk differences (ARDs) and adjusted risk ratios (ARRs) with 95% confidence intervals (CIs) were also calculated. The sample mean age was 13.54 years, and 56.62% were female. There was 4% loss to follow-up. Positive parenting, parental monitoring and supervision, and food security at home were each associated with lower odds of three or more violence outcomes (p < 0.05). 
For girls, the adjusted probability of violence outcomes was estimated to be lower if all three of these factors were present, as compared to none of them: sexual abuse, 5.38% and 1.64% (ARD: -3.74% points, 95% CI -5.31 to -2.16, p < 0.001); transactional sexual exploitation, 10.07% and 4.84% (ARD: -5.23% points, 95% CI -7.26 to -3.20, p < 0.001); physical abuse, 38.58% and 23.85% (ARD: -14.72% points, 95% CI -19.11 to -10.33, p < 0.001); emotional abuse, 25.39% and 12.98% (ARD: -12.41% points, 95% CI -16.00 to -8.83, p < 0.001); community violence victimisation, 36.25% and 28.37% (ARD: -7.87% points, 95% CI -11.98 to -3.76, p < 0.001); and youth lawbreaking, 18.90% and 11.61% (ARD: -7.30% points, 95% CI -10.50 to -4.09, p < 0.001). For boys, the adjusted probability of violence outcomes was also estimated to be lower if all three factors were present, as compared to none of them: sexual abuse, 2.39% and 1.80% (ARD: -0.59% points, 95% CI -2.24 to 1.05, p = 0.482); transactional sexual exploitation, 6.97% and 4.55% (ARD: -2.42% points, 95% CI -4.77 to -0.08, p = 0.043); physical abuse, 37.19% and 25.44% (ARD: -11.74% points, 95% CI -16.91 to -6.58, p < 0.001); emotional abuse, 23.72% and 10.72% (ARD: -13.00% points, 95% CI -17.04 to -8.95, p < 0.001); community violence victimisation, 41.28% and 35.41% (ARD: -5.87% points, 95% CI -10.98 to -0.75, p = 0.025); and youth lawbreaking, 22.44% and 14.98% (ARD: -7.46% points, 95% CI -11.57 to -3.35, p < 0.001). Key limitations were risk of residual confounding and not having information on protective factors related to all seven INSPIRE strategies.
Conclusion
In this cohort study, we found that positive and supervisory caregiving and food security at home are associated with reduced risk of multiple forms of violence against children. The presence of all three of these factors may be linked to greater risk reduction as compared to the presence of one or none of these factors. Policies promoting action on positive and supervisory caregiving and food security at home are likely to support further efficiencies in the delivery of INSPIRE.



Cluver LD, Rudgard WE, Toska E, Zhou S, ... Meinck F, Sherr L
PLoS Med: 30 Oct 2020; 17:e1003383 | PMID: 33166288
Abstract

Anticipatory changes in British household purchases of soft drinks associated with the announcement of the Soft Drinks Industry Levy: A controlled interrupted time series analysis.

Pell D, Penney TL, Mytton O, Briggs A, ... White M, Adams J
Background
Sugar-sweetened beverage (SSB) consumption is positively associated with obesity, type 2 diabetes, and cardiovascular disease. The World Health Organization recommends that member states implement effective taxes on SSBs to reduce consumption. The United Kingdom Soft Drinks Industry Levy (SDIL) is a two-tiered tax, announced in March 2016 and implemented in April 2018. Drinks with ≥8 g of sugar per 100 ml (higher levy tier) are taxed at £0.24 per litre, drinks with ≥5 to <8 g of sugar per 100 ml (lower levy tier) are taxed at £0.18 per litre, and drinks with <5 g sugar per 100 ml (no levy) are not taxed. Milk-based drinks, pure fruit juices, drinks sold as powder, and drinks with >1.2% alcohol by volume are exempt. We aimed to determine if the announcement of the SDIL was associated with anticipatory changes in purchases of soft drinks prior to implementation of the SDIL in April 2018. We explored differences in the volume of and amount of sugar in household purchases of drinks in each levy tier at 2 years post announcement.
Methods and findings
We used controlled interrupted time series to compare observed changes associated with the announcement of the SDIL to the counterfactual scenario of no announcement. We used data from Kantar Worldpanel, a commercial household purchasing panel with approximately 30,000 British members that includes linked nutritional data on purchases. We conducted separate analyses for drinks liable for the SDIL in the higher, lower, and no-levy tiers, using household purchase volumes of toiletries as the control series. At 2 years post announcement, there was no difference in volume of or sugar from purchases of higher-levy-tier drinks compared to the counterfactual of no announcement. In contrast, a reversal of the existing upward trend in volume (ml) of and amount of sugar (g) in purchases of lower-levy-tier drinks was seen. These changes led to a -96.1 ml (95% confidence interval [CI] -144.2 to -48.0) reduction in volume and -6.4 g (95% CI -9.8 to -3.1) reduction in sugar purchased in these drinks per household per week. There was a reversal of the existing downward trend in the amount of sugar in household purchases of the no-levy drinks but no change in volume purchased. At 2 years post announcement, these changes led to a 6.1 g (95% CI 3.9-8.2) increase in sugar purchased in these drinks per household per week. There was no evidence that volume of or amount of sugar in purchases of all drinks combined was different from the counterfactual. This is an observational study, and changes other than the SDIL may have been responsible for the results reported. Purchases consumed outside of the home were not accounted for.
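An interrupted time series reduces to a segmented regression with a level-change and a trend-change term at the interruption; the "controlled" variant nets out a control series. A minimal sketch with simulated data (all values hypothetical; the study's actual model and data differ):

```python
import numpy as np

def its_segmented_fit(y, t0):
    """Fit y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t, where
    post_t = 1 from the interruption t0 onward. Returns the coefficient
    vector; b2 is the level change and b3 the trend change at t0."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Toy weekly purchase volumes: an upward trend that breaks at week 10.
weeks = np.arange(20)
treated = 100 + 2.0 * weeks - 15.0 * (weeks >= 10) - 1.5 * (weeks - 10) * (weeks >= 10)
control = 50 + 0.5 * weeks  # control series (e.g., toiletries), no break
# Crude "controlled" analysis: difference out the control series first.
b0, b1, level_change, trend_change = its_segmented_fit(treated - control, 10)
print(round(level_change, 1), round(trend_change, 1))  # → -15.0 -1.5
```

In the study's setup, the announcement date defines t0; confidence intervals would come from the regression's standard errors rather than this point-estimate sketch.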
Conclusions
The announcement of the UK SDIL was associated with reductions in volume and sugar purchased in lower-levy-tier drinks before implementation. These were offset by increases in sugar purchased from no-levy drinks. These findings may reflect reformulation of drinks from the lower levy to no-levy tier with removal of some but not all sugar, alongside changes in consumer attitudes and beliefs.
Trial registration
ISRCTN Registry ISRCTN18042742.



PLoS Med: 30 Oct 2020; 17:e1003269 | PMID: 33180869
Abstract

Health system interventions for adults with type 2 diabetes in low- and middle-income countries: A systematic review and meta-analysis.

Flood D, Hane J, Dunn M, Brown SJ, ... Rohloff P, Chopra V
Background
Effective health system interventions may help address the disproportionate burden of diabetes in low- and middle-income countries (LMICs). We assessed the impact of health system interventions to improve outcomes for adults with type 2 diabetes in LMICs.
Methods and findings
We searched Ovid MEDLINE, Cochrane Library, EMBASE, African Index Medicus, LILACS, and Global Index Medicus from inception of each database through February 24, 2020. We included randomized controlled trials (RCTs) of health system interventions targeting adults with type 2 diabetes in LMICs. Eligible studies reported at least 1 of the following outcomes: glycemic change, mortality, quality of life, or cost-effectiveness. We conducted a meta-analysis for the glycemic outcome of hemoglobin A1c (HbA1c). GRADE and Cochrane Effective Practice and Organisation of Care methods were used to assess risk of bias for the glycemic outcome and to prepare a summary of findings table. Of the 12,921 references identified in searches, we included 39 studies in the narrative review of which 19 were cluster RCTs and 20 were individual RCTs. The greatest number of studies were conducted in the East Asia and Pacific region (n = 20) followed by South Asia (n = 7). There were 21,080 total participants enrolled across included studies and 10,060 total participants in the meta-analysis of HbA1c when accounting for the design effect of cluster RCTs. Non-glycemic outcomes of mortality, health-related quality of life, and cost-effectiveness had sparse data availability that precluded quantitative pooling. In the meta-analysis of HbA1c from 35 of the included studies, the mean difference was -0.46% (95% CI -0.60% to -0.31%, I2 87.8%, p < 0.001) overall, -0.37% (95% CI -0.64% to -0.10%, I2 60.0%, n = 7, p = 0.020) in multicomponent clinic-based interventions, -0.87% (-1.20% to -0.53%, I2 91.0%, n = 13, p < 0.001) in pharmacist task-sharing studies, and -0.27% (-0.50% to -0.04%, I2 64.1%, n = 7, p = 0.010) in trials of diabetes education or support alone. Other types of interventions had few included studies. Eight studies were at low risk of bias for the summary assessment of glycemic control, 15 studies were at unclear risk, and 16 studies were at high risk. 
The certainty of evidence for glycemic control by subgroup was moderate for multicomponent clinic-based interventions but was low or very low for other intervention types. Limitations include the lack of consensus definitions for health system interventions, differences in the quality of underlying studies, and sparse data availability for non-glycemic outcomes.
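The pooled HbA1c differences and I² values above come from a random-effects meta-analysis; a common estimator is DerSimonian-Laird (the abstract does not state which estimator was used). A sketch with hypothetical trial results:

```python
import math

def dersimonian_laird(effects, ses):
    """Random-effects meta-analysis (DerSimonian-Laird).
    effects: per-study estimates (e.g., HbA1c mean differences, in %),
    ses: their standard errors. Returns (pooled, se, tau2, I2)."""
    w = [1.0 / s**2 for s in ses]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # heterogeneity, %
    w_re = [1.0 / (s**2 + tau2) for s in ses]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2, i2

# Hypothetical HbA1c mean differences (%) and SEs from five trials.
effects = [-0.3, -0.5, -0.9, -0.2, -0.6]
ses = [0.10, 0.15, 0.20, 0.12, 0.18]
pooled, se, tau2, i2 = dersimonian_laird(effects, ses)
print(f"pooled = {pooled:.2f}% (95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}), I2 = {i2:.0f}%")
```

Nonzero tau2 widens the pooled confidence interval relative to a fixed-effect analysis, which matters here given the high reported I² of 87.8%.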
Conclusions
In this meta-analysis, we found that health system interventions for type 2 diabetes may be effective in improving glycemic control in LMICs, but few studies are available from rural areas or low- or lower-middle-income countries. Multicomponent clinic-based interventions had the strongest evidence for glycemic benefit among intervention types. Further research is needed to assess non-glycemic outcomes and to study implementation in rural and low-income settings.



PLoS Med: 30 Oct 2020; 17:e1003434 | PMID: 33180775
Abstract

The impact of voluntary front-of-pack nutrition labelling on packaged food reformulation: A difference-in-differences analysis of the Australasian Health Star Rating scheme.

Bablani L, Ni Mhurchu C, Neal B, Skeels CL, Staub KE, Blakely T
Background
Front-of-pack nutrition labelling (FoPL) of packaged foods can promote healthier diets. Australia and New Zealand (NZ) adopted the voluntary Health Star Rating (HSR) scheme in 2014. We studied the impact of voluntary adoption of HSR on food reformulation relative to unlabelled foods and examined differential impacts for more-versus-less healthy foods.
Methods and findings
Annual nutrition information panel data were collected for nonseasonal packaged foods sold in major supermarkets in Auckland from 2013 to 2019 and in Sydney from 2014 to 2018. The analysis sample covered 58,905 unique products over 14 major food groups. We used a difference-in-differences design to estimate reformulation associated with HSR adoption. Healthier products adopted HSR more than unhealthy products: >35% of products that achieved 4 or more stars displayed the label compared to <15% of products that achieved 2 stars or less. Products that adopted HSR were 6.5% and 10.7% more likely to increase their rating by ≥0.5 stars in Australia and NZ, respectively. Labelled products showed a -4.0% [95% confidence interval (CI): -6.4% to -1.7%, p = 0.001] relative decline in sodium content in NZ, and there was a -1.4% [95% CI: -2.7% to -0.0%, p = 0.045] sodium change in Australia. HSR adoption was associated with a -2.3% [-3.7% to -0.9%, p = 0.001] change in sugar content in NZ and a statistically insignificant -1.1% [-2.3% to 0.1%, p = 0.061] difference in Australia. Initially unhealthy products showed larger reformulation effects when adopting HSR than healthier products. No evidence of a change in protein or saturated fat content was observed. A limitation of our study is that results are not sales weighted, so it cannot assess changes in overall nutrient consumption that occur because of HSR-caused reformulation. Also, participation in labelling and reformulation is jointly determined by producers in this observational study, limiting its generalisability to settings with mandatory labelling.
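The difference-in-differences logic — the change among HSR-adopting products minus the change among unlabelled products over the same period — reduces to simple arithmetic in the two-group, two-period case. A sketch with hypothetical numbers (the study's actual estimator used regression with covariates):

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Two-by-two difference-in-differences: the change in the treated
    (label-adopting) group minus the change in the control (unlabelled)
    group, which nets out shared time trends."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))

# Hypothetical per-100g sodium (mg) for labelled vs unlabelled products.
labelled_pre, labelled_post = [400, 420, 380], [380, 395, 365]
unlabelled_pre, unlabelled_post = [410, 430, 390], [405, 428, 385]
print(did_estimate(labelled_pre, labelled_post, unlabelled_pre, unlabelled_post))
# → -16.0
```

The identifying assumption is parallel trends: absent labelling, sodium in labelled products would have moved like the unlabelled group's.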
Conclusions
In this study, we observed that reformulation changes following voluntary HSR labelling are small, but greater for initially unhealthy products. Initially unhealthy foods were, however, less likely to adopt HSR. Our results, therefore, suggest that mandatory labelling has the greatest potential for improving the healthiness of packaged foods.



PLoS Med: 30 Oct 2020; 17:e1003427 | PMID: 33216747
Abstract

Advance care planning in patients with advanced cancer: A 6-country, cluster-randomised clinical trial.

Korfage IJ, Carreras G, Arnfeldt Christensen CM, Billekens P, ... van der Heide A, Rietjens JAC
Background
Advance care planning (ACP) supports individuals to define, discuss, and record goals and preferences for future medical treatment and care. Despite being internationally recommended, randomised clinical trials of ACP in patients with advanced cancer are scarce.
Methods and findings
To test the implementation of ACP in patients with advanced cancer, we conducted a cluster-randomised trial in 23 hospitals across Belgium, Denmark, Italy, the Netherlands, Slovenia, and the United Kingdom in 2015-2018. Patients with advanced lung (stage III/IV) or colorectal (stage IV) cancer, WHO performance status 0-3, and at least 3 months' life expectancy were eligible. The ACTION Respecting Choices ACP intervention, as offered to patients in the intervention arm, included scripted ACP conversations between patients, family members, and certified facilitators; standardised leaflets; and standardised advance directives. Control patients received care as usual. Main outcome measures were quality of life (operationalised as European Organisation for Research and Treatment of Cancer [EORTC] emotional functioning) and symptoms. Secondary outcomes were coping, patient satisfaction, shared decision-making, patient involvement in decision-making, inclusion of advance directives (ADs) in hospital files, and use of hospital care. In all, 1,117 patients were included (442 intervention; 675 control), and 809 (72%) completed the 12-week questionnaire. Patients' age ranged from 18 to 91 years, with a mean of 66; 39% were female. The mean number of ACP conversations per patient was 1.3. Fidelity was 86%. Sixteen percent of patients found ACP conversations distressing. Mean change in patients' quality of life did not differ between intervention and control groups (T-score -1.8 versus -0.8, p = 0.59), nor did changes in symptoms, coping, patient satisfaction, and shared decision-making. Specialist palliative care (37% versus 27%, p = 0.002) and AD inclusion in hospital files (10% versus 3%, p < 0.001) were more likely in the intervention group. A key limitation of the study is that recruitment rates were lower in intervention than in control hospitals.
Conclusions
Our results show that quality of life effects were not different between patients who had ACP conversations and those who received usual care. The increased use of specialist palliative care and AD inclusion in hospital files of intervention patients is meaningful and requires further study. Our findings suggest that alternative approaches to support patient-centred end-of-life care in this population are needed.
Trial registration
ISRCTN registry ISRCTN63110516.



PLoS Med: 30 Oct 2020; 17:e1003422 | PMID: 33186365
Abstract

Risk of colorectal cancer in patients with diabetes mellitus: A Swedish nationwide cohort study.

Ali Khan U, Fallah M, Sundquist K, Sundquist J, Brenner H, Kharazmi E
Background
Colorectal cancer (CRC) incidence is increasing among young adults below screening age, despite the effectiveness of screening in older populations. Individuals with diabetes mellitus are at increased risk of early-onset CRC. We aimed to determine how many years earlier than the general population patients with diabetes, with or without a family history of CRC, reach the threshold risk at which CRC screening is recommended for the general population.
Methods and findings
A nationwide cohort study (follow-up: 1964-2015) involving all Swedish residents born after 1931 and their parents was carried out using record linkage of the Swedish Population Register, Cancer Registry, National Patient Register, and Multi-Generation Register. Of 12,614,256 individuals who were followed between 1964 and 2015 (51% men; age range at baseline 0-107 years), 162,226 developed CRC, and 559,375 developed diabetes. Age-specific 10-year cumulative risk curves were used to draw conclusions about how many years earlier patients with diabetes reach the 10-year cumulative risks of CRC in 50-year-old men and women (most common age of first screening), which were 0.44% and 0.41%, respectively. Patients with diabetes attained the screening level of CRC risk earlier than the general Swedish population. Men with diabetes reached 0.44% risk at age 45 (5 years earlier than the recommended age of screening). In women with diabetes, the risk advancement was 4 years. Risk advancement was more pronounced for those with an additional family history of CRC (12-21 years earlier depending on sex and benchmark starting age of screening). The study limitations include lack of detailed information on diabetes type, lifestyle factors, and colonoscopy data.
Conclusions
Using high-quality registers, this study is, to our knowledge, the first to provide evidence-based information on risk-adapted starting ages of CRC screening for patients with diabetes, who are at higher risk of early-onset CRC than the general population.



PLoS Med: 30 Oct 2020; 17:e1003431 | PMID: 33186354
Abstract

Body mass index and risk of dying from a bloodstream infection: A Mendelian randomization study.

Rogne T, Solligård E, Burgess S, Brumpton BM, ... DeWan AT, Damås JK
Background
In observational studies of the general population, higher body mass index (BMI) has been associated with increased incidence of and mortality from bloodstream infection (BSI) and sepsis. On the other hand, higher BMI has been observed to be apparently protective among patients with infection and sepsis. We aimed to evaluate the causal association of BMI with risk of and mortality from BSI.
Methods and findings
We used a population-based cohort in Norway followed from 1995 to 2017 (the Trøndelag Health Study [HUNT]), and carried out linear and nonlinear Mendelian randomization analyses. Among 55,908 participants, the mean age at enrollment was 48.3 years, 26,324 (47.1%) were men, and mean BMI was 26.3 kg/m2. During a median 21 years of follow-up, 2,547 (4.6%) participants experienced a BSI, and 451 (0.8%) died from BSI. Compared with a genetically predicted BMI of 25 kg/m2, a genetically predicted BMI of 30 kg/m2 was associated with a hazard ratio for BSI incidence of 1.78 (95% CI: 1.40 to 2.27; p < 0.001) and for BSI mortality of 2.56 (95% CI: 1.31 to 4.99; p = 0.006) in the general population, and a hazard ratio for BSI mortality of 2.34 (95% CI: 1.11 to 4.94; p = 0.025) in an inverse-probability-weighted analysis of patients with BSI. Limitations of this study include a risk of pleiotropic effects that may affect causal inference, and that only participants of European ancestry were considered.
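Mendelian randomization with multiple variants is often summarized by the inverse-variance-weighted (IVW) estimator over per-SNP Wald ratios. The sketch below uses hypothetical summary statistics; the study itself used individual-level HUNT data and both linear and nonlinear methods, so this is only the textbook building block:

```python
import math

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted Mendelian randomization estimate.
    Each SNP contributes a Wald ratio beta_outcome / beta_exposure;
    ratios are pooled with weights from the outcome-association SEs."""
    ratios = [bo / bx for bo, bx in zip(beta_outcome, beta_exposure)]
    # First-order (delta-method) SE of each Wald ratio.
    ses = [so / abs(bx) for so, bx in zip(se_outcome, beta_exposure)]
    w = [1.0 / s**2 for s in ses]
    est = sum(wi * r for wi, r in zip(w, ratios)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se

# Hypothetical summary statistics for three BMI-associated variants:
# per-allele effects on BMI (SD units) and on log hazard of BSI.
bx = [0.08, 0.05, 0.10]      # SNP-exposure effects
by = [0.040, 0.028, 0.055]   # SNP-outcome effects
so = [0.010, 0.012, 0.015]   # SEs of SNP-outcome effects
est, se = ivw_mr(bx, by, so)
print(f"causal log-HR per SD of BMI: {est:.2f} (SE {se:.2f})")
```

The IVW estimate is unbiased only if the variants affect the outcome solely through the exposure — hence the pleiotropy caveat the authors note.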
Conclusions
Supportive of a causal relationship, genetically predicted BMI was positively associated with BSI incidence and mortality in this cohort. Our findings contradict the "obesity paradox", whereby previous traditional epidemiological studies have found increased BMI to be apparently protective in terms of mortality for patients with BSI or sepsis.



PLoS Med: 30 Oct 2020; 17:e1003413 | PMID: 33196656
Abstract

Mode of birth and risk of infection-related hospitalisation in childhood: A population cohort study of 7.17 million births from 4 high-income countries.

Miller JE, Goldacre R, Moore HC, Zeltzer J, ... Pedersen LH, Burgner DP
Background
The proportion of births via cesarean section (CS) varies worldwide and in many countries exceeds WHO-recommended rates. Long-term health outcomes for children born by CS are poorly understood, but limited data suggest that CS is associated with increased infection-related hospitalisation. We investigated the relationship between mode of birth and childhood infection-related hospitalisation in high-income countries with varying CS rates.
Methods and findings
We conducted a multicountry population-based cohort study of all recorded singleton live births from January 1, 1996 to December 31, 2015 using record-linked birth and hospitalisation data from Denmark, Scotland, England, and Australia (New South Wales and Western Australia). Birth years within the date range varied by site, but data were available from at least 2001 to 2010 for each site. Mode of birth was categorised as vaginal or CS (emergency/elective). Infection-related hospitalisations (overall and by clinical type) occurring after the birth-related discharge date were identified in children until 5 years of age by primary/secondary International Classification of Diseases, 10th Revision (ICD-10) diagnosis codes. Analysis used Cox regression models, adjusting for maternal factors, birth parameters, and socioeconomic status, with results pooled using meta-analysis. In total, 7,174,787 live recorded births were included. Of these, 1,681,966 (23%, range by jurisdiction 17%-29%) were by CS, of which 727,755 (43%, range 38%-57%) were elective. A total of 1,502,537 offspring (21%) had at least 1 infection-related hospitalisation. Compared to vaginally born children, risk of infection was greater among CS-born children (hazard ratio [HR] from random-effects model 1.10, 95% confidence interval [CI] 1.09-1.12, p < 0.001). The risk was higher following both elective (HR 1.13, 95% CI 1.12-1.13, p < 0.001) and emergency CS (HR 1.09, 95% CI 1.06-1.12, p < 0.001). Increased risks persisted to 5 years and were highest for respiratory, gastrointestinal, and viral infections. Findings were comparable in prespecified subanalyses of children born to mothers at low obstetric risk and unchanged in sensitivity analyses. Limitations include site-specific and longitudinal variations in clinical practice and in the definition and availability of some data. Data on postnatal factors were not available.
Conclusions
In this study, we observed a consistent association between birth by CS and infection-related hospitalisation in early childhood. Notwithstanding the limitations of observational data, the associations may reflect differences in early microbial exposure by mode of birth, which should be investigated by mechanistic studies. If our findings are confirmed, they could inform efforts to reduce elective CS rates that are not clinically indicated.



PLoS Med: 30 Oct 2020; 17:e1003429 | PMID: 33211696
Abstract

Evaluating smartphone strategies for reliability, reproducibility, and quality of VIA for cervical cancer screening in the Shiselweni region of Eswatini: A cohort study.

Asgary R, Staderini N, Mthethwa-Hleta S, Lopez Saavedra PA, ... Beideck E, Kerschberger B
Background
Cervical cancer is among the most common preventable cancers with the highest morbidity and mortality. The World Health Organization (WHO) recommends visual inspection of the cervix with acetic acid (VIA) as a cervical cancer screening strategy in resource-poor settings. However, there are barriers to the sustainability of VIA programs, including declining provider VIA competence in the absence of mentorship and quality assurance, and challenges of integration into primary healthcare. This study seeks to evaluate the impact of smartphone-based strategies in improving reliability, reproducibility, and quality of VIA in humanitarian settings.
Methods and findings
We implemented smartphone-based VIA that included standard VIA training, adapted refresher, and 6-month mHealth mentorship, sequentially, in the rural Shiselweni region of Eswatini. A remote expert reviewer provided diagnostic and management feedback on patients' cervical images, which were reviewed weekly by nurses. The program's outcomes, VIA image agreement rates, and Kappa statistic were compared before, during, and after training. From September 1, 2016 to December 31, 2018, 4,247 patients underwent screening; 247 were reviewed weekly by a VIA diagnostic expert. Of the 247, 128 (49%) were HIV-positive; mean age was 30.80 years (standard deviation [SD]: 7.74 years). Initial VIA positivity of 16% (436/2,637) after standard training gradually increased to 25.1% (293/1,168), dropped to an average of 9.7% (143/1,469) with a low of 7% (20/284) after refresher in 2017 (p = 0.001), increased again to an average of 9.6% (240/2,488) with a high of 17% (17/100) before the start of mentorship, and dropped to an average of 8.3% (134/1,610) in 2018 with an average of 6.3% (37/591) after the start of mentorship (p = 0.019). Overall, 88% were eligible for and 68% received cryotherapy the same day; 10 cases were clinically suspicious for cancer, but only 5 of those cases were confirmed using punch biopsy. Agreement rates with the expert reviewer for positive and negative cases were 100% (95% confidence interval [CI]: 79.4% to 100%) and 95.7% (95% CI: 92.2% to 97.9%), respectively, with a negative predictive value (NPV) of 100%, a positive predictive value (PPV) of 63.5%, and an area under the receiver operating characteristic curve (AUC ROC) of 0.978. The Kappa statistic was 0.74 (95% CI: 0.58 to 0.89) overall, and 0.64 and 0.79 at 3 and 6 months, respectively. In logistic regression, HIV status and age were associated with VIA positivity (adjusted odds ratio [aOR]: 3.53, 95% CI: 1.10 to 11.29; p = 0.033 and aOR: 1.06, 95% CI: 1.0004 to 1.13; p = 0.048, respectively).
We were unable to incorporate a control arm due to logistical constraints in routine humanitarian settings.
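The Kappa statistic reported above measures agreement beyond chance between the nurses and the expert reviewer. Cohen's kappa for two raters can be computed as follows (hypothetical reads, not study data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequencies.
    p_chance = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical VIA reads: nurse vs remote expert on 10 images (P = positive).
nurse  = ["P", "P", "N", "N", "N", "P", "N", "N", "N", "N"]
expert = ["P", "N", "N", "N", "N", "P", "N", "N", "N", "P"]
print(round(cohens_kappa(nurse, expert), 2))  # → 0.52
```

Raw agreement here is 80%, but kappa discounts the agreement expected by chance on mostly negative screens, which is why it is preferred for rater-reliability reporting.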
Conclusions
Our findings suggest that smartphone mentorship provided experiential learning to improve nurses' competencies and VIA reliability and reproducibility, reduced false positives, and introduced peer-to-peer education and quality control services. Local collaboration; extending services to remote populations; decreasing unnecessary burden to screened women, providers, and tertiary centers; and capacity building through low-tech, high-yield screening are promising strategies for scale-up of VIA programs.



PLoS Med: 30 Oct 2020; 17:e1003378 | PMID: 33211691
Abstract

Plant-based diets and incident metabolic syndrome: Results from a South Korean prospective cohort study.

Kim H, Lee K, Rebholz CM, Kim J
Background
Prior studies have shown that plant-based diets are associated with lower risk of cardiovascular risk factors and incident cardiovascular disease, but risks differed by quality of plant-based diets. No prospective studies have evaluated the associations between different types of plant-based diets and incident metabolic syndrome (MetS) and components of MetS. Furthermore, limited evidence exists in Asian populations who have habitually consumed a diet rich in plant foods for a long period of time.
Methods and findings
Analyses were based on a community-based cohort of 5,646 men and women (40-69 years of age at baseline) living in Ansan and Ansung, South Korea (2001-2016) without MetS and related chronic diseases at baseline. Dietary intake was assessed using a validated food frequency questionnaire. Using the responses in the questionnaire, we calculated 4 plant-based diet indices (overall plant-based diet index [PDI], healthful plant-based diet index [hPDI], unhealthful plant-based diet index [uPDI], and pro-vegetarian diet index). Higher PDI score represented greater consumption of all types of plant foods regardless of healthiness. Higher hPDI score represented greater consumption of healthy plant foods (whole grains, fruits, vegetables, nuts, legumes, tea and coffee) and lower consumption of less-healthy plant foods (refined grains, potatoes, sugar-sweetened beverages, sweets, salty foods). Higher uPDI represented lower consumption of healthy plant foods and greater consumption of less-healthy plant foods. Similar to PDI, higher pro-vegetarian diet score represented greater consumption of plant foods but included only selected plant foods (grains, fruits, vegetables, nuts, legumes, potatoes). Higher scores in all plant-based diet indices represented lower consumption of animal foods (animal fat, dairy, eggs, fish/seafood, meat). Over a median follow-up of 8 years, 2,583 participants developed incident MetS. Individuals in the highest versus lowest quintile of uPDI had 50% higher risk of developing incident MetS, adjusting for demographic characteristics and lifestyle factors (hazard ratio [HR]: 1.50, 95% CI 1.31-1.71, P-trend < 0.001). When we further adjusted for body mass index (BMI), those in the highest quintile of uPDI had 24%-46% higher risk of 4 out of 5 individual components of MetS (abdominal obesity, hypertriglyceridemia, low high-density lipoprotein [HDL], and elevated blood pressure) (P-trend for all tests ≤ 0.001). 
Greater adherence to PDI was associated with lower risk of elevated fasting glucose (HR: 0.80, 95% CI 0.70-0.92, P-trend = 0.003). No consistent associations were observed for other plant-based diet indices and MetS. Limitations of the study may include potential measurement error in self-reported dietary intake, inability to classify a few plant foods as healthy and less-healthy, lack of data on vegetable oil intake, and possibility of residual confounding.
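The scoring logic the indices describe — healthy plant foods scored positively, less-healthy plant and animal foods reverse-scored — can be sketched as below. This is a simplified, hypothetical illustration (group lists abridged; the study scored intake quintiles per food group within its own cohort):

```python
# Abridged food-group sets, following the hPDI description above.
HEALTHY_PLANT = {"whole_grains", "fruits", "vegetables", "nuts", "legumes", "tea_coffee"}

def hpdi(quintiles):
    """quintiles: dict mapping food group -> intake quintile (1..5).
    Healthy plant groups score their quintile directly; less-healthy
    plant and animal groups are reverse-scored (6 - quintile), so a
    higher total indicates a healthier plant-based pattern."""
    total = 0
    for group, q in quintiles.items():
        total += q if group in HEALTHY_PLANT else 6 - q
    return total

# Hypothetical participant: high healthy-plant intake, low animal intake.
example = {"whole_grains": 5, "fruits": 4, "vegetables": 5, "nuts": 3,
           "legumes": 4, "tea_coffee": 2, "refined_grains": 1, "ssb": 1,
           "meat": 2, "dairy": 3}
print(hpdi(example))  # → 40
```

The uPDI simply flips which plant groups score positively, which is why the two indices can point in opposite directions for the same diet.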
Conclusions
In this study, we observed that greater adherence to diets consisting of a high intake of refined carbohydrates, sugars, and salty foods in the framework of plant-based diets was associated with an elevated risk of MetS. These results suggest that considering the quality of plant foods is important for prevention of MetS in a population that habitually consumes plant foods.



PLoS Med: 30 Oct 2020; 17:e1003371 | PMID: 33206633
Abstract

Risk factors during first 1,000 days of life for carotid intima-media thickness in infants, children, and adolescents: A systematic review with meta-analyses.

Epure AM, Rios-Leyvraz M, Anker D, Di Bernardo S, ... Chiolero A, Sekarski N
Background
The first 1,000 days of life, i.e., from conception to age 2 years, could be a critical period for cardiovascular health. Increased carotid intima-media thickness (CIMT) is a surrogate marker of atherosclerosis. We performed a systematic review with meta-analyses to assess (1) the relationship between exposures or interventions in the first 1,000 days of life and CIMT in infants, children, and adolescents; and (2) the CIMT measurement methods.
Methods and findings
Systematic searches of Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica database (EMBASE), and Cochrane Central Register of Controlled Trials (CENTRAL) were performed from inception to March 2019. Observational and interventional studies evaluating factors at the individual, familial, or environmental levels, for instance, size at birth, gestational age, breastfeeding, mode of conception, gestational diabetes, or smoking, were included. Quality was evaluated based on study methodological validity (adjusted Newcastle-Ottawa Scale if observational; Cochrane collaboration risk of bias tool if interventional) and CIMT measurement reliability. Estimates from bivariate or partial associations that were least adjusted for sex were used for pooling data across studies, when appropriate, using random-effects meta-analyses. The research protocol was published and registered on the International Prospective Register of Systematic Reviews (PROSPERO; CRD42017075169). Of 6,221 reports screened, 50 full-text articles from 36 studies (34 observational, 2 interventional) totaling 7,977 participants (0 to 18 years at CIMT assessment) were retained. Children born small for gestational age had increased CIMT (16 studies, 2,570 participants, pooled standardized mean difference (SMD): 0.40 (95% confidence interval (CI): 0.15 to 0.64, p: 0.001), I2: 83%). When restricted to studies of higher quality of CIMT measurement, this relationship was stronger (3 studies, 461 participants, pooled SMD: 0.64 (95% CI: 0.09 to 1.19, p: 0.024), I2: 86%). Only 1 study evaluating small size for gestational age was rated as high quality for all methodological domains. 
Children conceived through assisted reproductive technologies (ART) (3 studies, 323 participants, pooled SMD: 0.78 (95% CI: -0.20 to 1.75, p: 0.120), I2: 94%) or exposed to maternal smoking during pregnancy (3 studies, 909 participants, pooled SMD: 0.12 (95% CI: -0.06 to 0.30, p: 0.205), I2: 0%) showed point estimates of increased CIMT, but the imprecision around the estimates was high. None of the studies evaluating these 2 factors was rated as high quality for all methodological domains. Two studies evaluating the effect of nutritional interventions starting at birth did not show an effect on CIMT. Only 12 (33%) studies were at higher quality across all domains of CIMT reliability. The degree of confidence in results is limited by the low number of high-quality studies, the relatively small sample sizes, and the high between-study heterogeneity.
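The standardized mean differences pooled above are typically computed as Hedges' g: the between-group CIMT difference divided by the pooled SD, with a small-sample bias correction (the review does not state its exact SMD variant, so this is the conventional formula). A sketch with hypothetical group summaries:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges' g): the group difference
    divided by the pooled SD, times a small-sample correction factor."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sd_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # bias-correction factor
    return d * j

# Hypothetical CIMT (mm): small-for-gestational-age vs control children.
print(round(hedges_g(0.48, 0.05, 40, 0.46, 0.05, 40), 2))  # → 0.4
```

Expressing effects in pooled-SD units is what allows CIMT studies with different measurement protocols to be combined on one scale.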
Conclusions
In our meta-analyses, we found several risk factors in the first 1,000 days of life that may be associated with increased CIMT during childhood. Small size for gestational age had the most consistent relationship with increased CIMT. The associations with conception through ART or with smoking during pregnancy were not statistically significant, with a high imprecision around the estimates. Due to the large uncertainty in effect sizes and the limited quality of CIMT measurements, further high-quality studies are needed to justify intervention for primordial prevention of cardiovascular disease (CVD).



Epure AM, Rios-Leyvraz M, Anker D, Di Bernardo S, ... Chiolero A, Sekarski N
PLoS Med: 30 Oct 2020; 17:e1003414 | PMID: 33226997
Abstract

Genetic and pharmacological relationship between P-glycoprotein and increased cardiovascular risk associated with clarithromycin prescription: An epidemiological and genomic population-based cohort study in Scotland, UK.

Mordi IR, Chan BK, Yanez ND, Palmer CNA, Lang CC, Chalmers JD
Background
There are conflicting reports regarding the association of the macrolide antibiotic clarithromycin with cardiovascular (CV) events. A possible explanation may be that this risk is partly mediated through drug-drug interactions and only evident in at-risk populations. To the best of our knowledge, no studies have examined whether this association might be mediated via P-glycoprotein (P-gp), a major pathway for clarithromycin metabolism. The aim of this study was to examine CV risk following prescription of clarithromycin versus amoxicillin and, in particular, the association with P-gp.
Methods and findings
We conducted an observational cohort study of patients prescribed clarithromycin or amoxicillin in the community in Tayside, Scotland (population approximately 400,000) between 1 January 2004 and 31 December 2014 and a genomic observational cohort study evaluating genotyped patients from the Genetics of Diabetes Audit and Research Tayside Scotland (GoDARTS) study, a longitudinal cohort study of 18,306 individuals with and without type 2 diabetes recruited between 1 December 1988 and 31 December 2015. Two single-nucleotide polymorphisms associated with P-gp activity were evaluated (rs1045642 and rs1128503; the AA genotype is associated with the lowest P-gp activity). The primary outcome for both analyses was CV hospitalization following prescription of clarithromycin versus amoxicillin at 0-14 days, 15-30 days, and 30 days to 1 year. In the observational cohort study, we calculated hazard ratios (HRs) adjusted for likelihood of receiving clarithromycin using inverse probability of treatment weighting as a covariate, whereas in the pharmacogenomic study, HRs were adjusted for age, sex, history of myocardial infarction, and history of chronic obstructive pulmonary disease. The observational cohort study included 48,026 individuals with 205,227 discrete antibiotic prescribing episodes (34,074 clarithromycin, mean age 73 years, 42% male; 171,153 amoxicillin, mean age 74 years, 45% male). Clarithromycin use was significantly associated with increased risk of CV hospitalization compared with amoxicillin at both 0-14 days (HR 1.31; 95% CI 1.17-1.46, p < 0.001) and 30 days to 1 year (HR 1.13; 95% CI 1.06-1.19, p < 0.001), with the association at 0-14 days modified by use of P-gp inhibitors or substrates (interaction p-value: 0.029).
In the pharmacogenomic study (13,544 individuals with 44,618 discrete prescribing episodes [37,497 amoxicillin, mean age 63 years, 56% male; 7,121 clarithromycin, mean age 66 years, 47% male]), when prescribed clarithromycin, individuals with genetically determined lower P-gp activity had a significantly increased risk of CV hospitalization at 30 days to 1 year compared with heterozygotes or those homozygous for the non-P-gp-lowering allele (rs1045642 AA: HR 1.39, 95% CI 1.20-1.60, p < 0.001; GG/GA: HR 0.99, 95% CI 0.89-1.10, p = 0.85; interaction p-value < 0.001; and rs1128503 AA: HR 1.41, 95% CI 1.18-1.70, p < 0.001; GG/GA: HR 1.04, 95% CI 0.95-1.14, p = 0.43; interaction p-value < 0.001). The main limitation of our study is its observational nature, meaning that we are unable to definitively determine causality.
Conclusions
In this study, we observed that the increased risk of CV events with clarithromycin compared with amoxicillin was associated with an interaction with P-glycoprotein.



PLoS Med: 30 Oct 2020; 17:e1003372 | PMID: 33226983
Abstract

Assessment of deep neural networks for the diagnosis of benign and malignant skin neoplasms in comparison with dermatologists: A retrospective validation study.

Han SS, Moon IJ, Kim SH, Na JI, ... Lee JH, Chang SE
Background
The diagnostic performance of convolutional neural networks (CNNs) for diagnosing several types of skin neoplasms has been demonstrated as comparable with that of dermatologists using clinical photography. However, the generalizability should be demonstrated using a large-scale external dataset that includes most types of skin neoplasms. In this study, the performance of a neural network algorithm was compared with that of dermatologists in both real-world practice and experimental settings.
Methods and findings
To demonstrate generalizability, the skin cancer detection algorithm (https://rcnn.modelderm.com) developed in our previous study was used without modification. We conducted a retrospective study with all single-lesion biopsied cases (43 disorders; 40,331 clinical images from 10,426 cases: 1,222 malignant cases and 9,204 benign cases; mean age [standard deviation, SD] 52.1 [18.3] years; 4,701 men [45.1%]) obtained from the Department of Dermatology, Severance Hospital in Seoul, Korea, between January 1, 2008 and March 31, 2019. Using the external validation dataset, the predictions of the algorithm were compared with the clinical diagnoses of 65 attending physicians who had recorded the clinical diagnoses with thorough examinations in real-world practice. In addition, the results obtained by the algorithm for the data of randomly selected batches of 30 patients were compared with those obtained by 44 dermatologists in experimental settings; the dermatologists were only provided with multiple images of each lesion, without clinical information. With regard to the determination of malignancy, the area under the curve (AUC) achieved by the algorithm was 0.863 (95% confidence interval [CI] 0.852-0.875), when unprocessed clinical photographs were used. The sensitivity and specificity of the algorithm at the predefined high-specificity threshold were 62.7% (95% CI 59.9-65.1) and 90.0% (95% CI 89.4-90.6), respectively. Furthermore, the sensitivity and specificity of the first clinical impression of 65 attending physicians were 70.2% and 95.6%, respectively, which were superior to those of the algorithm (McNemar test; p < 0.0001). The positive and negative predictive values of the algorithm were 45.4% (CI 43.7-47.3) and 94.8% (CI 94.4-95.2), respectively, whereas those of the first clinical impression were 68.1% and 96.0%, respectively.
In the reader test conducted using images corresponding to batches of 30 patients, the sensitivity and specificity of the algorithm at the predefined threshold were 66.9% (95% CI 57.7-76.0) and 87.4% (95% CI 82.5-92.2), respectively. Furthermore, the sensitivity and specificity derived from the first impression of 44 of the participants were 65.8% (95% CI 55.7-75.9) and 85.7% (95% CI 82.4-88.9), respectively, which are values comparable with those of the algorithm (Wilcoxon signed-rank test; p = 0.607 and 0.097). Limitations of this study include the exclusive use of high-quality clinical photographs taken in hospitals and the lack of ethnic diversity in the study population.
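The four operating characteristics reported above all derive from a single 2×2 confusion matrix. A minimal sketch (with illustrative counts, not reconstructed from the study) makes the relationships explicit:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion
    matrix of true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # detected among diseased
        "specificity": tn / (tn + fp),   # cleared among healthy
        "ppv": tp / (tp + fp),           # positive calls that are right
        "npv": tn / (tn + fn),           # negative calls that are right
    }

# Illustrative counts only: 100 malignant and 900 benign lesions
m = diagnostic_metrics(tp=63, fp=90, fn=37, tn=810)
```

Note how the PPV is far below the specificity here: with benign lesions outnumbering malignant ones roughly 9 to 1 (as in the study's 9,204 versus 1,222 cases), even a 90% specific test generates many false positives per true positive.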
Conclusions
Our algorithm could diagnose skin tumors with nearly the same accuracy as a dermatologist when the diagnosis was performed solely with photographs. However, as a result of limited data relevancy, the performance was inferior to that of actual medical examination. To achieve more accurate predictive diagnoses, clinical information should be integrated with imaging information.



PLoS Med: 30 Oct 2020; 17:e1003381 | PMID: 33237903
Abstract

Mortality and major disease risk among migrants of the 1991-2001 Balkan wars to Sweden: A register-based cohort study.

Thordardottir EB, Yin L, Hauksdottir A, Mittendorfer-Rutz E, ... Holmes EA, Valdimarsdottir UA
Background
In recent decades, millions of refugees and migrants have fled wars and sought asylum in Europe. The aim of this study was to quantify the risk of mortality and major diseases among migrants during the 1991-2001 Balkan wars to Sweden in comparison to other European migrants to Sweden during the same period.
Methods and findings
We conducted a register-based cohort study of 104,770 migrants to Sweden from the former Yugoslavia during the Balkan wars and 147,430 migrants to Sweden from 24 other European countries during the same period (1991-2001). Inpatient and specialized outpatient diagnoses of cardiovascular disease (CVD), cancer, and psychiatric disorders were obtained from the Swedish National Patient Register and the Swedish Cancer Register, and mortality data from the Swedish Cause of Death Register. Adjusting for individual-level data on sociodemographic characteristics and emigration country smoking prevalence, we used Cox regressions to contrast risks of health outcomes for migrants of the Balkan wars and other European migrants. During an average of 12.26 years of follow-up, being a migrant of the Balkan wars was associated with an elevated risk of being diagnosed with CVD (HR 1.39, 95% CI 1.34-1.43, p < 0.001) and dying from CVD (HR 1.45, 95% CI 1.29-1.62, p < 0.001), as well as being diagnosed with cancer (HR 1.16, 95% CI 1.08-1.24, p < 0.001) and dying from cancer (HR 1.27, 95% CI 1.15-1.41, p < 0.001), compared to other European migrants. Being a migrant of the Balkan wars was also associated with a greater overall risk of being diagnosed with a psychiatric disorder (HR 1.19, 95% CI 1.14-1.23, p < 0.001), particularly post-traumatic stress disorder (HR 9.33, 95% CI 7.96-10.94, p < 0.001), while being associated with a reduced risk of suicide (HR 0.68, 95% CI 0.48-0.96, p = 0.030) and suicide attempt (HR 0.57, 95% CI 0.51-0.65, p < 0.001). Later time period of migration and not having any first-degree relatives in Sweden at the time of immigration were associated with greater increases in risk of CVD and psychiatric disorders. Limitations of the study included lack of individual-level information on health status and behaviors of migrants at the time of immigration.
Conclusions
Our findings indicate that migrants of the Balkan wars faced considerably elevated risks of major diseases and mortality in their first decade in Sweden compared to other European migrants. War migrants without family members in Sweden or with more recent immigration may be particularly vulnerable to adverse health outcomes. Results underscore that persons displaced by war are a vulnerable group in need of long-term health surveillance for psychiatric disorders and somatic disease.



PLoS Med: 29 Nov 2020; 17:e1003392 | PMID: 33259494
Abstract

Effects of vitamin B12 supplementation on neurodevelopment and growth in Nepalese Infants: A randomized controlled trial.

Strand TA, Ulak M, Hysing M, Ranjitkar S, ... Shrestha LS, Chandyo RK
Background
Vitamin B12 deficiency is common and affects cell division and differentiation, erythropoiesis, and the central nervous system. Several observational studies have demonstrated associations between biomarkers of vitamin B12 status with growth, neurodevelopment, and anemia. The objective of this study was to measure the effects of daily supplementation of vitamin B12 for 1 year on neurodevelopment, growth, and hemoglobin concentration in infants at risk of deficiency.
Methods and findings
This is a community-based, individually randomized, double-blind placebo-controlled trial conducted in low- to middle-income neighborhoods in Bhaktapur, Nepal. We enrolled 600 marginally stunted, 6- to 11-month-old infants between April 2015 and February 2017. Children were randomized in a 1:1 ratio to 2 μg of vitamin B12 (corresponding to approximately 2 to 3 recommended daily allowances [RDAs]) or a placebo daily for 12 months. Both groups were also given 15 other vitamins and minerals at around 1 RDA. The primary outcomes were neurodevelopment measured by the Bayley Scales of Infant and Toddler Development 3rd ed. (Bayley-III), attained growth, and hemoglobin concentration. Secondary outcomes included the metabolic response measured by plasma total homocysteine (tHcy) and methylmalonic acid (MMA). A total of 16 children (2.7%) in the vitamin B12 group and 10 children (1.7%) in the placebo group were lost to follow-up. Of note, 94% of the scheduled daily doses of vitamin B12 or placebo were reported to have been consumed (in part or completely). In this study, we observed that there were no effects of the intervention on the Bayley-III scores, growth, or hemoglobin concentration. Children in both groups grew on average 12.5 cm (SD: 1.8), and the mean difference was 0.20 cm (95% confidence interval (CI): -0.23 to 0.63, P = 0.354). Furthermore, at the end of the study, the mean difference in hemoglobin concentration was 0.02 g/dL (95% CI: -1.33 to 1.37, P = 0.978), and the difference in the cognitive scaled scores was 0.16 (95% CI: -0.54 to 0.87, P = 0.648). The tHcy and MMA concentrations were 23% (95% CI: 17 to 30, P < 0.001) and 30% (95% CI: 15 to 46, P < 0.001) higher in the placebo group than in the vitamin B12 group, respectively. We observed 43 adverse events in 36 children, and these events were not associated with the intervention. In addition, 20 children in the vitamin B12 group and 16 in the placebo group were hospitalized during the supplementation period.
Important limitations of the study are that the strict inclusion criteria could limit the external validity and that the period of vitamin B12 supplementation might not have covered a critical window for infant growth or brain development.
Conclusions
In this study, we observed that vitamin B12 supplementation in young children at risk of vitamin B12 deficiency resulted in an improved metabolic response but did not affect neurodevelopment, growth, or hemoglobin concentration. Our results do not support widespread vitamin B12 supplementation in marginalized infants from low-income countries.
Trial registration
ClinicalTrials.gov NCT02272842 Universal Trial Number: U1111-1161-5187 (September 8, 2014) Trial Protocol: Original trial protocol: PMID: 28431557 (reference [18]; study protocols and plan of analysis included as Supporting information).



PLoS Med: 29 Nov 2020; 17:e1003430 | PMID: 33259482
Abstract

Estimated impact of RTS,S/AS01 malaria vaccine allocation strategies in sub-Saharan Africa: A modelling study.

Hogan AB, Winskill P, Ghani AC
Background
The RTS,S/AS01 vaccine against Plasmodium falciparum malaria infection completed phase III trials in 2014 and demonstrated efficacy against clinical malaria of approximately 36% over 4 years for a 4-dose schedule in children aged 5-17 months. Pilot vaccine implementation has recently begun in 3 African countries. If the pilots demonstrate both a positive health impact and resolve remaining safety concerns, wider roll-out could be recommended from 2021 onwards. Vaccine demand may, however, outstrip initial supply. We sought to identify where vaccine introduction should be prioritised to maximise public health impact under a range of supply constraints using mathematical modelling.
Methods and findings
Using a mathematical model of P. falciparum malaria transmission and RTS,S vaccine impact, we estimated the clinical cases and deaths averted in children aged 0-5 years in sub-Saharan Africa under 2 scenarios for vaccine coverage (100% and realistic) and 2 scenarios for other interventions (current coverage and World Health Organization [WHO] Global Technical Strategy targets). We used a prioritisation algorithm to identify potential allocative efficiency gains from prioritising vaccine allocation among countries or administrative units to maximise cases or deaths averted. If malaria burden at introduction is similar to current levels (assuming realistic vaccine coverage and country-level prioritisation in areas with parasite prevalence >10%), we estimate that 4.3 million malaria cases (95% credible interval [CrI] 2.8-6.8 million) and 22,000 deaths (95% CrI 11,000-35,000) in children younger than 5 years could be averted annually at a dose constraint of 30 million. This decreases to 3.0 million cases (95% CrI 2.0-4.7 million) and 14,000 deaths (95% CrI 7,000-23,000) at a dose constraint of 20 million, and increases to 6.6 million cases (95% CrI 4.2-10.8 million) and 38,000 deaths (95% CrI 18,000-61,000) at a dose constraint of 60 million. At 100% vaccine coverage, these impact estimates increase to 5.2 million cases (95% CrI 3.5-8.2 million) and 27,000 deaths (95% CrI 14,000-43,000), 3.9 million cases (95% CrI 2.7-6.0 million) and 19,000 deaths (95% CrI 10,000-30,000), and 10.0 million cases (95% CrI 6.7-15.7 million) and 51,000 deaths (95% CrI 25,000-82,000), respectively. Under realistic vaccine coverage, if the vaccine is prioritised sub-nationally, 5.3 million cases (95% CrI 3.5-8.2 million) and 24,000 deaths (95% CrI 12,000-38,000) could be averted at a dose constraint of 30 million. Furthermore, sub-national prioritisation would allow introduction in almost double the number of countries compared to national prioritisation (21 versus 11).
If vaccine introduction is prioritised in the 3 pilot countries (Ghana, Kenya, and Malawi), health impact would be reduced, but this effect becomes less substantial (change of <5%) if 50 million or more doses are available. We did not account for within-country variation in vaccine coverage, and the optimisation was based on a single outcome measure, therefore this study should be used to understand overall trends rather than guide country-specific allocation.
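The prioritisation step described above can be thought of as a constrained allocation problem. A minimal greedy sketch (not the authors' algorithm, and with hypothetical unit names and figures) ranks units by cases averted per dose and funds them fully until the dose budget is exhausted:

```python
def allocate_doses(units, dose_budget):
    """Greedy prioritisation: fund each unit's full dose need in
    descending order of cases averted per dose until the budget runs
    out. `units` maps name -> (doses_needed, cases_averted_if_covered)."""
    ranked = sorted(units.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    plan, remaining = {}, dose_budget
    for name, (need, averted) in ranked:
        if need <= remaining:        # only fund fully coverable units
            plan[name] = need
            remaining -= need
    return plan

# Hypothetical administrative units: (doses needed, cases averted)
plan = allocate_doses(
    {"A": (10_000_000, 2_000_000),
     "B": (5_000_000, 1_500_000),
     "C": (20_000_000, 3_000_000)},
    dose_budget=30_000_000)
```

Allocating at a finer (sub-national) granularity gives the greedy step more, smaller units to choose from, which is one intuition for why the study finds sub-national prioritisation averts more cases under the same dose constraint.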
Conclusions
These results suggest that the impact of constraints in vaccine supply on the public health impact of the RTS,S malaria vaccine could be reduced by introducing the vaccine at the sub-national level and prioritising countries with the highest malaria incidence.



PLoS Med: 30 Oct 2020; 17:e1003377 | PMID: 33253211
Abstract

Trade-offs between cost and accuracy in active case finding for tuberculosis: A dynamic modelling analysis.

Cilloni L, Kranzer K, Stagg HR, Arinaminpathy N
Background
Active case finding (ACF) may be valuable in tuberculosis (TB) control, but questions remain about its optimum implementation in different settings. For example, smear microscopy misses up to half of TB cases, yet is cheap and detects the most infectious TB cases. What, then, is the incremental value of using more sensitive and specific, yet more costly, tests such as Xpert MTB/RIF in ACF in a high-burden setting?
Methods and findings
We constructed a dynamic transmission model of TB, calibrated to be consistent with an urban slum population in India. We applied this model to compare the potential cost and impact of 2 hypothetical approaches following initial symptom screening: (i) 'moderate accuracy' testing employing a microscopy-like test (i.e., lower cost but also lower accuracy) for bacteriological confirmation and (ii) 'high accuracy' testing employing an Xpert-like test (higher cost but also higher accuracy, while also detecting rifampicin resistance). Results suggest that ACF using a moderate-accuracy test could in fact cost more overall than using a high-accuracy test. Under an illustrative budget of US$20 million in a slum population of 2 million, high-accuracy testing would avert 1.14 (95% credible interval 0.75-1.99, with p = 0.28) cases relative to each case averted by moderate-accuracy testing. Test specificity is a key driver: High-accuracy testing would be significantly more impactful at the 5% significance level, as long as the high-accuracy test has specificity at least 3 percentage points greater than the moderate-accuracy test. Additional factors promoting the impact of high-accuracy testing are that (i) its ability to detect rifampicin resistance can lead to long-term cost savings in second-line treatment and (ii) its higher sensitivity contributes to the overall cases averted by ACF. Amongst the limitations of this study, our cost model has a narrow focus on the commodity costs of testing and treatment; our estimates should not be taken as indicative of the overall cost of ACF. There remains uncertainty about the true specificity of tests such as smear and Xpert-like tests in ACF, relating to the accuracy of the reference standard under such conditions.
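The core trade-off (a cheaper test spends part of the budget treating false positives) can be illustrated with a static back-of-envelope calculation. This is a deliberately simplified sketch, not the paper's dynamic transmission model, and every parameter value below is hypothetical:

```python
def acf_yield(budget, test_cost, treat_cost, sens, spec, prevalence):
    """True TB cases started on treatment under a fixed ACF budget,
    charging the budget for testing plus treatment of both true and
    false positives (static approximation; no transmission dynamics)."""
    cost_per_person = (test_cost
                       + prevalence * sens * treat_cost          # true positives
                       + (1 - prevalence) * (1 - spec) * treat_cost)  # false positives
    screened = budget / cost_per_person
    return screened * prevalence * sens

# Hypothetical parameters: cheap low-accuracy test vs costlier
# high-accuracy test, 1% prevalence among those screened
cheap = acf_yield(1e6, test_cost=2, treat_cost=500,
                  sens=0.6, spec=0.95, prevalence=0.01)
accurate = acf_yield(1e6, test_cost=15, treat_cost=500,
                     sens=0.9, spec=0.99, prevalence=0.01)
```

With these illustrative numbers the high-accuracy test treats more true cases per dollar despite costing 7x more per test, because false-positive treatment dominates the cheap test's budget at low prevalence; the qualitative point mirrors the paper's finding, though the real analysis also accounts for transmission and rifampicin resistance.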
Conclusions
Our results suggest that cheaper diagnostics do not necessarily translate to less costly ACF, as any savings from the test cost can be strongly outweighed by factors including false-positive TB treatment, reduced sensitivity, and foregone savings in second-line treatment. In resource-limited settings, it is therefore important to take all of these factors into account when designing cost-effective strategies for ACF.



PLoS Med: 29 Nov 2020; 17:e1003456 | PMID: 33264288
Abstract

Intake of dietary fats and fatty acids and the incidence of type 2 diabetes: A systematic review and dose-response meta-analysis of prospective observational studies.

Neuenschwander M, Barbaresko J, Pischke CR, Iser N, ... Schwingshackl L, Schlesinger S
Background
The role of fat quantity and quality in type 2 diabetes (T2D) prevention is controversial. Thus, this systematic review and meta-analysis aimed to investigate the associations between intake of dietary fat and fatty acids and T2D, and to evaluate the certainty of evidence.
Methods and findings
We systematically searched PubMed and Web of Science through 28 October 2019 for prospective observational studies in adults on the associations between intake of dietary fat and fatty acids and T2D incidence. The systematic literature search and data extraction were conducted independently by 2 researchers. We conducted linear and nonlinear random effects dose-response meta-analyses, calculated summary relative risks (SRRs) with their corresponding 95% confidence intervals (95% CIs), and assessed the certainty of evidence. In total, 15,070 publications were identified in the literature search after the removal of duplicates. Out of the 180 articles screened in full text, 23 studies (19 cohorts) met our inclusion criteria, with 11 studies (6 cohorts) conducted in the US, 7 studies (7 cohorts) in Europe, 4 studies (5 cohorts) in Asia, and 1 study (1 cohort) in Australia. We mainly observed no or weak linear associations between dietary fats and fatty acids and T2D incidence. In nonlinear dose-response meta-analyses, the protective association for vegetable fat and T2D was steeper at lower levels up to 13 g/d (SRR [95% CI]: 0.81 [0.76; 0.88], p for nonlinearity = 0.012, n = 5 studies) than at higher levels. Saturated fatty acids showed an apparent protective association above intakes around 17 g/d with T2D (SRR [95% CI]: 0.95 [0.90; 1.00], p for nonlinearity = 0.028, n = 11). There was a nonsignificant association of a decrease in T2D incidence for polyunsaturated fatty acid intakes up to 5 g/d (SRR [95% CI]: 0.96 [0.91; 1.01], p for nonlinearity = 0.023, n = 8), and for alpha-linolenic acid consumption up to 560 mg/d (SRR [95% CI]: 0.95 [0.90; 1.00], p for nonlinearity = 0.014, n = 11), after which the curve rose slightly, remaining close to no association. The association for long-chain omega-3 fatty acids and T2D was approximately linear for intakes up to 270 mg/d (SRR [95% CI]: 1.10 [1.06; 1.15], p for nonlinearity < 0.001, n = 16), with a flattening curve thereafter.
Certainty of evidence was very low to moderate. Limitations of the study are the high unexplained inconsistency between studies, the measurement of intake of dietary fats and fatty acids via self-report on a food group level, which is likely to lead to measurement errors, and the possible influence of unmeasured confounders on the findings.
Conclusions
There was no association between total fat intake and the incidence of T2D. However, for specific fats and fatty acids, dose-response curves provided insights for significant associations with T2D. In particular, a high intake of vegetable fat was inversely associated with T2D incidence. Thus, a diet including vegetable fat rather than animal fat might be beneficial regarding T2D prevention.



PLoS Med: 29 Nov 2020; 17:e1003347 | PMID: 33264277
Abstract

Dose-dependent oral glucocorticoid cardiovascular risks in people with immune-mediated inflammatory diseases: A population-based cohort study.

Pujades-Rodriguez M, Morgan AW, Cubbon RM, Wu J
Background
Glucocorticoids are widely used to reduce disease activity and inflammation in patients with a range of immune-mediated inflammatory diseases. It is uncertain whether or not low to moderate glucocorticoid dose increases cardiovascular risk. We aimed to quantify glucocorticoid dose-dependent cardiovascular risk in people with 6 immune-mediated inflammatory diseases.
Methods and findings
We conducted a population-based cohort analysis of medical records from 389 primary care practices contributing data to the United Kingdom Clinical Practice Research Datalink (CPRD), linked to hospital admissions and deaths in 1998-2017. We estimated time-variant daily and cumulative glucocorticoid prednisolone-equivalent dose-related risks and hazard ratios (HRs) of first all-cause and type-specific cardiovascular diseases (CVDs). There were 87,794 patients with giant cell arteritis and/or polymyalgia rheumatica (n = 25,581), inflammatory bowel disease (n = 27,739), rheumatoid arthritis (n = 25,324), systemic lupus erythematosus (n = 3,951), and/or vasculitis (n = 5,199), and no prior CVD. Mean age was 56 years and 34.1% were men. The median follow-up time was 5.0 years, and the proportions of person-years spent at each level of glucocorticoid daily exposure were 80% for non-use, 6.0% for <5 mg, 11.2% for 5.0-14.9 mg, 1.6% for 15.0-24.9 mg, and 1.2% for ≥25.0 mg. Incident CVD occurred in 13,426 (15.3%) people, including 6,013 atrial fibrillation, 7,727 heart failure, and 2,809 acute myocardial infarction events. One-year cumulative risks of all-cause CVD increased from 1.4% in periods of non-use to 8.9% for a daily prednisolone-equivalent dose of ≥25.0 mg. Five-year cumulative risks increased from 7.1% to 28.0%, respectively. Compared to periods of non-glucocorticoid use, those with <5.0 mg daily prednisolone-equivalent dose had increased all-cause CVD risk (HR = 1.74; 95% confidence interval [CI] 1.64-1.84; range 1.52 for polymyalgia rheumatica and/or giant cell arteritis to 2.82 for systemic lupus erythematosus). Increased dose-dependent risk ratios were found regardless of disease activity level and for all type-specific CVDs. 
HRs for type-specific CVDs and <5.0-mg daily dose use were: 1.69 (95% CI 1.54-1.85) for atrial fibrillation, 1.75 (95% CI 1.56-1.97) for heart failure, 1.76 (95% CI 1.51-2.05) for acute myocardial infarction, 1.78 (95% CI 1.53-2.07) for peripheral arterial disease, 1.32 (95% CI 1.15-1.50) for cerebrovascular disease, and 1.93 (95% CI 1.47-2.53) for abdominal aortic aneurysm. The lack of hospital medication records and drug adherence data might have led to underestimation of the dose prescribed when specialists provided care and overestimation of the dose taken during periods of low disease activity. The resulting dose misclassification in some patients is likely to have reduced the size of dose-response estimates.
Conclusions
In this study, we observed an increased risk of CVDs associated with glucocorticoid dose intake even at lower doses (<5 mg) in 6 immune-mediated diseases. These results highlight the importance of prompt and regular monitoring of cardiovascular risk and use of primary prevention treatment at all glucocorticoid doses.



PLoS Med: 29 Nov 2020; 17:e1003432 | PMID: 33270649
