Using evidence-based interventions to improve cancer screening in the National Breast and Cervical Cancer Early Detection Program
DeGroff A , Carter A , Kenney K , Myles Z , Melillo S , Royalty J , Rice K , Gressard L , Miller JW . J Public Health Manag Pract 2016 22 (5) 442-9 CONTEXT: The National Breast and Cervical Cancer Early Detection Program (NBCCEDP) provides cancer screening to low-income, un-, and underinsured women through more than 11 000 primary care clinics. The program is well-positioned to work with health systems to implement evidence-based interventions (EBIs) to increase screening among all women. OBJECTIVE: To collect baseline data on EBI use, evaluation of EBIs, and related training needs among NBCCEDP grantees. DESIGN: The Centers for Disease Control and Prevention conducted a Web-based survey in late 2013 among NBCCEDP grantees for the period July 2012 to June 2013. This was the first systematic assessment of EBIs among NBCCEDP grantees. SETTING: The Centers for Disease Control and Prevention's NBCCEDP. PARTICIPANTS: Primarily program directors/coordinators for all 67 NBCCEDP grantees. MAIN OUTCOME MEASURES: Data captured were used to assess implementation of 5 EBIs, their evaluation, and related training needs. Frequencies and proportions were determined. Cluster analysis identified grantees with similar patterns of EBI use for NBCCEDP clients and providers. RESULTS: On average, 4.1 of 5 EBIs were implemented per grantee for NBCCEDP clients and providers. Four clusters were identified including "high overall EBI users," "high provider EBI users," "high EBI users with no provider assessment and feedback," and "high client EBI users." Only 1.8 EBIs were implemented, on average, with non-NBCCEDP clients and providers. Fewer than half (n = 32, 47.8%) of grantees conducted process or outcome evaluation of 1 or more EBIs. Overall, 47.6% of grantees reported high or medium training needs for client-oriented EBIs and 54.3% for provider-oriented EBIs. CONCLUSIONS: The NBCCEDP grantees are implementing EBIs extensively with clients and providers. 
Increased EBI use among non-NBCCEDP clients/providers is needed to extend the NBCCEDP's reach and impact. Grantee training and technical assistance are necessary across EBIs. In addition, grantees' use of process and outcome evaluation of EBI implementation must be increased to inform effective program implementation. |
Trends in allergy prevalence among children aged 0-17 years by asthma status, United States, 2001-2013
Akinbami LJ , Simon AE , Schoendorf KC . J Asthma 2015 53 (4) 1-21 OBJECTIVES: Children with asthma and allergies, particularly food and/or multiple allergies, are at risk for adverse asthma outcomes. This analysis describes allergy prevalence trends among US children by asthma status. METHODS: We analyzed 2001-2013 National Health Interview Survey data for children aged 0-17 years. We estimated trends for reported respiratory, food, and skin allergy and the percentage of children with one, two, or all three allergy types by asthma status. We estimated unadjusted trends and, among children with asthma, adjusted associations between demographic characteristics and allergy. RESULTS: Prevalence of any allergy increased by 0.3 percentage points annually among children without asthma but not among children with asthma. However, underlying patterns changed among children with asthma: food and skin allergy prevalence increased, as did the percentage with all three allergy types. Among children with asthma, risk was higher among younger and non-Hispanic black children for reported skin allergy, among non-Hispanic white children for reported respiratory allergy, and among nonpoor children for food and respiratory allergies. Prevalence of having one allergy type decreased by 0.50 percentage points annually, while the percentage with all three types increased by 0.2 percentage points annually. Nonpoor and non-Hispanic white children with asthma were more likely to have multiple allergy types. CONCLUSIONS: While overall allergy prevalence among children with asthma remained stable, patterns in reported allergy type and number suggested a greater proportion may be at risk of adverse asthma outcomes associated with allergy: food allergy increased, as did the percentage with all three allergy types. |
Population-based geographic access to endocrinologists in the United States, 2012
Lu H , Holt JB , Cheng YJ , Zhang X , Onufrak S , Croft JB . BMC Health Serv Res 2015 15 (1) 541 BACKGROUND: Increases in the population and life expectancy of Americans may result in shortages of endocrinologists by 2020. This study aims to assess variations in geographic accessibility to endocrinologists in the US by age group, at state and county levels, by urban/rural status, and by distance. METHODS: We used the 2012 National Provider Identifier Registry to obtain office locations of all adult and pediatric endocrinologists in the US. The population with geographic access to an endocrinologist within a series of 6 distance radii, centered on endocrinologist practice locations, was estimated using the US Census 2010 block-level population. We assumed that persons living within the same circular buffer zone of an endocrinologist location have the same geographic accessibility to that endocrinologist. The geographic accessibility (the percentage of the population with geographic access to at least one endocrinologist) and the population-to-endocrinologist ratio for each geographic area were estimated. RESULTS: Using 20 miles as the distance radius, geographic accessibility to at least one pediatric/adult endocrinologist for age groups 0-17, 18-64, and ≥65 years was 64.1 %, 85.4 %, and 82.1 %, respectively. The overall population-to-endocrinologist ratio within 20 miles was 39,492:1 for children, 29,887:1 for adults aged 18-64 years, and 6,194:1 for adults aged ≥65 years. These ratios varied considerably by state, county, urban/rural status, and distance. CONCLUSIONS: This study demonstrates that there are geographic variations in accessibility to endocrinologists in the US. The areas with poorer geographic accessibility warrant further study of the effect of these variations on the prevention, detection, and management of endocrine diseases in the US population. 
Our findings on geographic access to endocrinologists may also provide valuable information for medical education and health resource allocation. |
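As an editorial illustration of the buffer-based accessibility measure described in this study (a sketch only, not the authors' code; the provider locations, block coordinates, and populations below are hypothetical), the percentage of the population living within a given radius of at least one endocrinologist, and the corresponding population-to-endocrinologist ratio, can be computed from point data:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

def geographic_access(blocks, providers, radius_miles=20.0):
    """Percentage of block-level population within `radius_miles` of at
    least one provider, plus the population-to-provider ratio for the
    covered population. `blocks` holds (lat, lon, population) tuples;
    `providers` holds (lat, lon) tuples."""
    total = sum(pop for _, _, pop in blocks)
    covered = sum(
        pop for lat, lon, pop in blocks
        if any(haversine_miles(lat, lon, plat, plon) <= radius_miles
               for plat, plon in providers)
    )
    pct = 100.0 * covered / total if total else 0.0
    ratio = covered / len(providers) if providers else float("inf")
    return pct, ratio

# Hypothetical example: one provider, one nearby block, one distant block
providers = [(33.45, -112.07)]
blocks = [(33.45, -112.07, 800), (34.05, -118.24, 200)]
pct, ratio = geographic_access(blocks, providers, radius_miles=20.0)
```

With real data, block centroids would come from Census 2010 block files and provider locations from the National Provider Identifier Registry, and a spatial index would replace the brute-force distance scan.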
Factors affecting bleeding and stent thrombosis in clinical trials comparing bivalirudin with heparin during percutaneous coronary intervention
Bittl JA , He Y , Lang CD , Dangas GD . Circ Cardiovasc Interv 2015 8 (12) e002789 BACKGROUND: Patients treated with bivalirudin in randomized clinical trials of percutaneous coronary intervention generally have less bleeding but more acute stent thrombosis (ST) than do patients treated with heparin, but differences have varied among trials. METHODS AND RESULTS: We modeled the risk of major hemorrhage and ischemic outcomes 30 days after percutaneous coronary intervention by treatment assignment and the use of adjunctive therapies in 18 randomized clinical trials enrolling 41 871 patients. Overall, compared with heparin, bivalirudin was associated with less major bleeding (odds ratio [OR], 0.64; 95% confidence interval [CI], 0.53-0.76), more ST (OR, 1.58; 95% CI, 1.19-2.09), and no difference in mortality (OR, 0.93; 95% CI, 0.77-1.14). A risk-benefit analysis identified 19 fewer bleeds and 5 more STs for every 1000 patients treated with bivalirudin in place of heparin. No significant bleeding advantage of bivalirudin over heparin could be identified in randomized clinical trials when transradial access (OR, 0.89; 95% CI, 0.57-1.41) and planned glycoprotein IIb/IIIa inhibitors were used with bivalirudin in the majority of patients (OR, 1.07; 95% CI, 0.87-1.31). The use of prasugrel or ticagrelor eliminated bleeding differences (OR, 0.80; 95% CI, 0.63-1.03) but did not reduce the risk of ST after bivalirudin (OR, 2.20; 95% CI, 1.48-3.27). CONCLUSIONS: Substituting bivalirudin for heparin conferred a tradeoff between bleeding and ST. Transradial access, adjunctive glycoprotein IIb/IIIa inhibitors, and potent P2Y12 inhibitors attenuated the bleeding advantage of bivalirudin over heparin but had no apparent effect on early ST. New approaches to reduce bleeding and ischemic complications during percutaneous coronary intervention warrant further investigation. |
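The trial-level odds ratios above translate into an absolute tradeoff (such as the reported 19 fewer bleeds and 5 more STs per 1000 patients) only relative to a baseline event risk. A minimal sketch of that conversion, assuming a hypothetical control-arm risk (the 5% used below is illustrative, not a figure from the paper):

```python
def or_to_risk_diff_per_1000(odds_ratio, control_risk):
    """Absolute risk difference per 1000 patients implied by an odds
    ratio, given an assumed control-arm event risk (a proportion)."""
    odds0 = control_risk / (1.0 - control_risk)   # control-arm odds
    odds1 = odds_ratio * odds0                    # treated-arm odds
    treated_risk = odds1 / (1.0 + odds1)          # back to a proportion
    return 1000.0 * (treated_risk - control_risk)

# Hypothetical 5% major-bleeding risk on heparin, bleeding OR 0.64:
bleed_diff = or_to_risk_diff_per_1000(0.64, 0.05)  # about -17 per 1000
```

At low event rates the odds ratio approximates the risk ratio, so the result is close to 1000 × control_risk × (OR − 1); the paper's 19-fewer-bleeds figure implies baseline risks somewhat different from this illustration.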
Improving actions to control high blood pressure in Hispanic communities - Racial and Ethnic Approaches to Community Health Across the U.S. Project, 2009-2012
Liao Y , Siegel PZ , White S , Dulin R , Taylor A . Prev Med 2015 83 11-5 BACKGROUND: Compared with the general population in the United States (U.S.), Hispanics with hypertension are less likely to be aware of their condition, to take antihypertensive medication, and to adopt healthy lifestyles to control high blood pressure. We examined whether a multi-community intervention successfully increased the prevalence of actions to control hypertension among Hispanics. METHODS: Annual surveys were conducted from 2009 to 2012 in six Hispanic communities in the Racial and Ethnic Approaches to Community Health (REACH) Across the U.S. Project. The surveys used an address-based sampling design that matched the geographies of the intervention programs. RESULTS: Age- and sex-standardized prevalences of taking antihypertensive medication, changing eating habits, cutting down on salt, and reducing alcohol use significantly increased among Hispanics with self-reported hypertension in REACH communities. The 3-year relative percent increases were 5.8, 6.8, 7.9, and 35.2% for the four indicators, respectively. These favorable (healthier) trends occurred in both foreign-born and U.S.-born Hispanics. CONCLUSION: This large community-based participatory intervention resulted in more Hispanic residents in the communities taking actions to control high blood pressure. |
Assessing cervical cancer screening coverage using a population-based behavioral risk factor survey - Thailand, 2010
Joseph R , Manosoontorn S , Petcharoen N , Sangrajrang S , Senkomago V , Saraiya M . J Womens Health (Larchmt) 2015 24 (12) 966-8 Cervical cancer is the second most common cancer and fourth leading cause of cancer-related deaths among women in Thailand. In 2005, the Ministry of Public Health (MoPH) in Thailand initiated a phased national cervical cancer screening program. To monitor progress toward national screening targets (80% of women 30-60 years of age screened for cervical cancer once in the previous 5 years by 2013), the MoPH used the 2010 Thai Behavioral Risk Factor Surveillance System (BRFSS) to assess cervical cancer screening coverage. Results from the survey showed that 67.4% of women aged 30-60 years had been screened for cervical cancer in the past 5 years, with varying screening coverage by region, residence, education, and marital status. Although the national cervical cancer screening program in Thailand appears to be close to reaching its national targets, the causes of lower coverage in some subpopulations need to be identified so that targeted interventions can be developed to increase coverage in these groups. |
Association of socioeconomic status with eye health among women with and without diabetes
Norris KL , Beckles GL , Chou CF , Zhang X , Saaddine J . J Womens Health (Larchmt) 2015 25 (3) 321-6 OBJECTIVE: To investigate the association between socioeconomic position (SEP) and poor eye health among women. MATERIALS AND METHODS: We included the 7,708 women aged ≥40 years who participated in the 2008 National Health Interview Survey. We defined poor eye health as self-reported age-related eye diseases (AREDs; cataract, glaucoma, macular degeneration, or diabetic retinopathy) or visual impairment (VI). We identified diagnosed diabetes by self-report. We measured SEP by education attained and annual household income. We conducted logistic regression analyses while controlling for demographic, clinical, behavioral, and healthcare access variables. RESULTS: The age-standardized prevalence of VI and ARED was significantly higher among women with diagnosed diabetes than among those without diagnosed diabetes, 29.8% versus 14.4% and 34.1% versus 20.8%, respectively (p < 0.05 for both). The prevalence of VI and ARED increased with decreasing SEP, but the trends were only significant among women without diabetes. After multivariable adjustment, education and income were significantly associated with VI but not with ARED. We found no interaction with diagnosed diabetes. CONCLUSIONS: SEP was inversely associated with VI but not with ARED. We found no interaction with diagnosed diabetes. |
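The age-standardized prevalences reported above come from direct standardization: stratum-specific prevalences are weighted by a standard population's age distribution so that groups with different age structures can be compared. A minimal sketch with hypothetical age strata and weights (not the study's actual standard population):

```python
def age_standardized_prevalence(stratum_prevalence, standard_population):
    """Direct age standardization: weight each age stratum's observed
    prevalence (a proportion) by the standard population's share of
    that stratum. Weights may be counts; they are normalized here."""
    total = sum(standard_population.values())
    return sum(
        prev * standard_population[group] / total
        for group, prev in stratum_prevalence.items()
    )

# Hypothetical strata: visual impairment prevalence by age, weighted to
# a standard population that is 70% aged 40-59 and 30% aged 60+
observed = {"40-59": 0.10, "60+": 0.30}
standard = {"40-59": 700, "60+": 300}
standardized = age_standardized_prevalence(observed, standard)  # 0.16
```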
Disparities in temporal and geographic patterns of declining heart disease mortality by race and sex in the United States, 1973-2010
Vaughan AS , Quick H , Pathak EB , Kramer MR , Casper M . J Am Heart Assoc 2015 4 (12) BACKGROUND: Examining small-area differences in the strength of declining heart disease mortality by race and sex provides important context for current racial and geographic disparities and identifies localities that could benefit from targeted interventions. We identified and described temporal trends in declining county-level heart disease mortality by race, sex, and geography between 1973 and 2010. METHODS AND RESULTS: Using a Bayesian hierarchical model, we estimated age-adjusted mortality with diseases of the heart listed as the underlying cause for 3099 counties. County-level percentage declines were calculated by race and sex for 3 time periods (1973-1985, 1986-1997, 1998-2010). Strong declines were those that were statistically faster than, or no different from, the total national decline in that time period. We observed county-level race-sex disparities in heart disease mortality trends. Continual (from 1973 to 2010) strong declines occurred in 73.2%, 44.6%, 15.5%, and 17.3% of counties for white men, white women, black men, and black women, respectively. Delayed (1998-2010) strong declines occurred in 15.4%, 42.0%, 75.5%, and 76.6% of counties for white men, white women, black men, and black women, respectively. Counties with the weakest patterns of decline were concentrated in the South. CONCLUSIONS: Since 1973, heart disease mortality has declined substantially for these race-sex groups. Patterns of decline differed by race and geography, reflecting potential disparities in national and local drivers of these declines. A better understanding of racial and geographic disparities in the diffusion of heart disease prevention and treatment may provide clues for progress toward racial and geographic equity in heart disease mortality. |
Strengthening sexually transmitted disease services in Detroit, Michigan: A call to action
Ham DC , Lentine D , Hoover KW , Boazman-Holmes V , Whiting D , Sobel J , Miller C , Cohn J , Krzanowski K . Sex Transm Dis 2016 43 (1) 65-66 Sexually transmitted diseases (STDs) remain a significant cause of morbidity in the United States. In 2013, 1.4 million cases of chlamydia were reported to the Centers for Disease Control and Prevention (CDC), making it the most commonly reported notifiable disease in the United States.1 With such high case numbers, it is unreasonable to expect state and locally funded STD clinics to care for all patients with STDs. However, dedicated STD clinics often serve as a safety net for uninsured or underinsured individuals and provide higher-quality STD services than general medical/primary care clinics.2 Sexually transmitted disease clinics often provide additional services for free or with sliding-scale fees, such as walk-in or express visits, onsite diagnostics, and partner services, where clinic staff offer testing and treatment to the partner(s) of the patient.3 Sexually transmitted disease clinics are seen as an important place to receive confidential services.3 Recently, this service model has faced numerous challenges, with local STD clinics experiencing budget cutbacks or closing.4 Furthermore, the landscape of healthcare provision in the United States is changing as a result of legislation, causing a shift in where individuals seek care and who pays for it. Large municipalities with significant disease burden have been challenged to find the right balance between state and locally funded STD clinics and other models of STD service provision. Because of budget constraints, high disease burden, and a syphilis outbreak, perhaps nowhere has this struggle been more pronounced than in Detroit, Michigan. |
Syphilis time to treatment at publicly funded sexually transmitted disease clinics versus non-sexually transmitted disease clinics - Maricopa and Pima Counties, Arizona, 2009-2012
Robinson CL , Young L , Bisgard K , Mickey T , Taylor MM . Sex Transm Dis 2016 43 (1) 30-33 Delays in syphilis treatment may contribute to transmission. We evaluated time to treatment for symptomatic patients with syphilis by clinical testing site in 2 Arizona counties. Fewer patients were tested and treated at publicly funded sexually transmitted disease clinics, but received the timeliest treatment; these clinics remain crucial to syphilis disease control. |
Quality of HIV testing data before and after the implementation of a national data quality assessment and feedback system
Beltrami J , Wang G , Usman HR , Lin L . J Public Health Manag Pract 2016 23 (3) 269-275 CONTEXT: In 2010, the Centers for Disease Control and Prevention (CDC) implemented a national data quality assessment and feedback system for CDC-funded HIV testing program data. OBJECTIVE: Our objective was to analyze data quality before and after feedback. DESIGN: Coinciding with required quarterly data submissions to CDC, each health department received data quality feedback reports and a call with CDC to discuss the reports. Data from 2008 to 2011 were analyzed. SETTING: Fifty-nine state and local health departments that were funded for comprehensive HIV prevention services. PARTICIPANTS: Data collected by a service provider in conjunction with a client receiving HIV testing. INTERVENTION: National data quality assessment and feedback system. MAIN OUTCOME MEASURES: Before and after intervention implementation, quality was assessed through the number of new test records reported and the percentage of data values that were neither missing nor invalid. Generalized estimating equations were used to assess the effect of feedback in improving the completeness of variables. RESULTS: Data were included from 44 health departments. The average number of new records per submission period increased from 197 907 before feedback implementation to 497 753 afterward. Completeness was high before and after feedback for race/ethnicity (99.3% vs 99.3%), current test results (99.1% vs 99.7%), prior testing and results (97.4% vs 97.7%), and receipt of results (91.4% vs 91.2%). Completeness improved for HIV risk (83.6% vs 89.5%), linkage to HIV care (56.0% vs 64.0%), referral to HIV partner services (58.9% vs 62.8%), and referral to HIV prevention services (55.3% vs 63.9%). 
Calls as part of feedback were associated with improved completeness for HIV risk (adjusted odds ratio [AOR] = 2.28; 95% confidence interval [CI], 1.75-2.96), linkage to HIV care (AOR = 1.60; 95% CI, 1.31-1.96), referral to HIV partner services (AOR = 1.73; 95% CI, 1.43-2.09), and referral to HIV prevention services (AOR = 1.74; 95% CI, 1.43-2.10). CONCLUSIONS: Feedback contributed to increased data quality. CDC and health departments should continue monitoring the data and implement measures to improve variables of low completeness. |
Quality of sexually transmitted infection case management services in Gauteng Province, South Africa: An evaluation of health providers' knowledge, attitudes, and practices
Ham DC , Hariri S , Kamb M , Mark J , Ilunga R , Forhan S , Likibi M , Lewis DA . Sex Transm Dis 2016 43 (1) 23-29 BACKGROUND: The sexually transmitted infection (STI) clinical encounter is an opportunity to identify current and prevent new HIV and STI infections. We examined knowledge, attitudes, and practices regarding STIs and HIV among public and private providers in a large province in South Africa with a high disease burden. METHODS: From November 2008 to March 2009, 611 doctors and nurses from 120 public and 52 private clinics serving patients with STIs in Gauteng Province completed an anonymous, self-administered survey. Responses were compared by clinic location, provider type, and level of training. RESULTS: Most respondents were nurses (91%) and female (89%), were from public clinics (91%), and had received formal STI training (67%). Most (88%) correctly identified all of the common STI syndromes (i.e., genital ulcer syndrome, urethral discharge syndrome, and vaginal discharge syndrome). However, almost none correctly identified the most common etiologies for all 3 of these syndromes (0.8%) or the recommended first or alternative treatment regimens for all syndromes (0.8%). Very few (6%) providers correctly answered all 14 basic STI knowledge questions. Providers reporting formal STI training were more likely to correctly identify all 3 STI syndromes (P = 0.034) and to correctly answer all 14 general STI knowledge questions (P = 0.016) compared with those not reporting STI training. In addition, several providers reported negative attitudes about patients with STIs that may have affected their ability to practice optimal STI management. CONCLUSIONS: General STI knowledge was suboptimal, particularly among providers without STI training. Provider training and brief refresher courses on specific aspects of diagnosis and management may benefit HIV/STI clinical care and prevention in Gauteng Province. |
Chlamydia screening in juvenile corrections: Even females considered to be at low risk are at high risk
Torrone E , Beeston T , Ochoa R , Richardson M , Gray T , Peterman T , Katz KA . J Correct Health Care 2016 22 (1) 21-7 The Centers for Disease Control and Prevention recommends chlamydia screening at intake for all females in juvenile detention facilities. Identifying factors predictive of chlamydia could enable targeted screening, reducing costs while still identifying most infections. This study used demographic, arrest, and health data to identify factors associated with chlamydia among females aged 12 to 18 years entering a juvenile detention facility in San Diego during January 2009 to June 2010. The study created different screening criteria based on combinations of factors associated with infection and calculated sensitivity and proportion screened for each criterion. Overall chlamydia prevalence was 10.3% and was 4.2% among females reporting no sexual risk factors. No acceptable targeted screening approach was identified. High prevalence, even among females without risk factors, supports universal screening at intake. |
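The targeted-screening evaluation described above weighs two quantities for each candidate criterion: sensitivity (the share of infections the rule would catch) and the proportion of entrants it would screen. A minimal sketch with hypothetical intake records (illustrative counts, not the San Diego data):

```python
def evaluate_screening_criterion(records, criterion):
    """Evaluate a targeted-screening rule against intake records.
    `records` is a list of (has_risk_factor: bool, has_chlamydia: bool)
    tuples; `criterion` is a predicate on the risk-factor flag.
    Returns (sensitivity, proportion_screened)."""
    screened = [r for r in records if criterion(r[0])]
    infected = [r for r in records if r[1]]
    caught = [r for r in screened if r[1]]
    sensitivity = len(caught) / len(infected) if infected else 0.0
    proportion = len(screened) / len(records) if records else 0.0
    return sensitivity, proportion

# Hypothetical cohort of 100 entrants: screening only those reporting a
# risk factor halves the tests but misses infections among those
# reporting no risk factors
records = (
    [(True, True)] * 8 + [(True, False)] * 42 +
    [(False, True)] * 2 + [(False, False)] * 48
)
sens, prop = evaluate_screening_criterion(records, lambda rf: rf)
```

This is the study's dilemma in miniature: any criterion that reduces the proportion screened also forfeits some infections, which is why universal screening at intake was supported.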
Rapid antiretroviral therapy initiation for women in an HIV-1 prevention clinical trial experiencing primary HIV-1 infection during pregnancy or breastfeeding
Morrison S , John-Stewart G , Egessa JJ , Mubezi S , Kusemererwa S , Bii DK , Bulya N , Mugume F , Campbell JD , Wangisi J , Bukusi EA , Celum C , Baeten JM . PLoS One 2015 10 (10) e0140773 During an HIV-1 prevention clinical trial in East Africa, we observed 16 cases of primary HIV-1 infection in women coincident with pregnancy or breastfeeding. Nine of eleven pregnant women initiated rapid combination antiretroviral therapy (ART), despite having CD4 counts exceeding national criteria for ART initiation; breastfeeding women initiated ART or replacement feeding. Rapid ART initiation during primary HIV-1 infection during pregnancy and breastfeeding is feasible in this setting. |
Sexual abstinence and other behaviours immediately following a new STI diagnosis among STI clinic patients: Findings from the Safe in the City trial
Gallo MF , Margolis AD , Malotte CK , Rietmeijer CA , Klausner JD , O'Donnell L , Warner L . Sex Transm Infect 2015 92 (3) 206-10 BACKGROUND: Few studies have assessed patients' sexual behaviours during the period immediately following a new diagnosis of a curable sexually transmitted infection (STI). METHODS: Data were analysed from a behavioural study nested within the Safe in the City trial, which evaluated a video-based STI/HIV prevention intervention in three urban STI clinics. We studied 450 patients who reported having received a new STI diagnosis, or STI treatment, 3 months earlier. Participants reported on whether they seriously considered, attempted and succeeded in adopting seven sex-related behaviours in the interval following the diagnostic visit. We used multivariable logistic regression to identify, among men, correlates of two behaviours related to immediately reducing reinfection risk and preventing further STI transmission: sexual abstinence until participants were adequately treated and abstinence until their partners were tested for STIs. RESULTS: Most participants reported successfully abstaining from sex until they were adequately treated for their baseline infection (89%-90%) and from sex with potentially exposed partners until their partners were tested for HIV and other STIs (66%-70%). Among men who intended to be abstinent until they were adequately treated, those who did not discuss the risks with a partner who was possibly exposed were more likely not to be abstinent (OR, 3.7; 95% CI 1.5 to 9.0) than those who had this discussion. Similarly, among men who intended to abstain from sex with any potentially exposed partner until the partner was tested for HIV and other STIs, those who reported not discussing the risks of infecting each other with HIV/STIs were more likely to be sexually active during this period (OR, 3.5; 95% CI 1.6 to 8.1) than were those who reported this communication. 
CONCLUSIONS: Improved partner communication could play an important role in the adoption of protective behaviours in the interval immediately after receiving a new STI diagnosis. TRIAL REGISTRATION NUMBER: NCT00137670. |
Syringe service programs for persons who inject drugs in urban, suburban, and rural areas - United States, 2013
Des Jarlais DC , Nugent A , Solberg A , Feelemyer J , Mermin J , Holtzman D . MMWR Morb Mortal Wkly Rep 2015 64 (48) 1337-41 Reducing human immunodeficiency virus (HIV) infection rates in persons who inject drugs (PWID) has been one of the major successes in HIV prevention in the United States. Estimated HIV incidence among PWID declined by approximately 80% during 1990-2006. More recent data indicate that further reductions in HIV incidence are occurring in multiple areas. Research results for the effectiveness of risk reduction programs in preventing hepatitis C virus (HCV) infection among PWID have not been as consistent as they have been for HIV; however, a marked decline in the incidence of HCV infection occurred during 1992-2005 in selected U.S. locations when targeted risk reduction efforts for the prevention of HIV were implemented. Because syringe service programs (SSPs) have been one effective component of these risk reduction efforts for PWID, and because at least half of PWID are estimated to live outside major urban areas, a study was undertaken to characterize the current status of SSPs in the United States and determine whether urban, suburban, and rural SSPs differed. Data from a recent survey of SSPs were analyzed to describe program characteristics (e.g., size, clients, and services), which were then compared by urban, suburban, and rural location. Substantially fewer SSPs were located in rural and suburban than in urban areas, and harm reduction services were less available to PWID outside urban settings. Because increases in substance abuse treatment admissions for drug injection have been observed concurrently with increases in reported cases of acute HCV infection in rural and suburban areas, state and local jurisdictions could consider extending effective prevention programs, including SSPs, to populations of PWID in rural and suburban areas. |
Update: Influenza activity - United States
Smith S , Blanton L , Kniss K , Mustaquim D , Steffens C , Reed C , Bramley A , Flannery B , Fry AM , Grohskopf LA , Bresee J , Wallis T , Garten R , Xu X , Elal AI , Gubareva L , Barnes J , Wentworth DE , Burns E , Katz J , Jernigan D , Brammer L . MMWR Morb Mortal Wkly Rep 2015 64 (48) 1342-8 CDC collects, compiles, and analyzes data on influenza activity year-round in the United States. The influenza season generally begins in the fall and continues through the winter and spring months; however, the timing and severity of circulating influenza viruses can vary by geographic location and season. Influenza activity in the United States remained low through October and November in 2015. Influenza A viruses have been most frequently identified, with influenza A (H3) viruses predominating. This report summarizes U.S. influenza activity for the period October 4-November 28, 2015. |
Younger age predicts failure to achieve viral suppression and virologic rebound among HIV-1 infected persons in serodiscordant partnerships
Mujugira A , Celum C , Tappero J , Ronald A , Mugo N , Baeten J . AIDS Res Hum Retroviruses 2015 32 (2) 148-54 BACKGROUND: Antiretroviral therapy (ART) markedly reduces the risk of HIV-1 transmission in serodiscordant partnerships. We previously found that younger age and higher CD4 counts were associated with delayed initiation of ART by HIV-1 infected partners in serodiscordant partnerships. Among those initiating ART, we sought to explore whether those same factors were associated with failure to achieve viral suppression. METHODS: In a prospective study of HIV-1 infected persons who had a known heterosexual HIV-1 uninfected partner in Kenya and Uganda (Partners PrEP Study), we used Cox proportional hazards regression to evaluate correlates of viral non-suppression (HIV-1 RNA >80 c/mL). RESULTS: Of 1035 HIV-1 infected participants initiating ART, 867 (84%) achieved viral suppression: 77% by 6 months and 86% by 12 months. Younger age (adjusted hazard ratio [aHR] 1.05 for every 5 years younger; p=0.006), lower pre-treatment CD4 count (aHR 1.26; p=0.009 for ≤250 compared with >250 cells/microL) and higher pre-treatment HIV-1 RNA quantity (aHR 1.21 per log10; p<0.001) independently predicted failure to achieve viral suppression. Following initial viral suppression, 8.8% (76/867) experienced virologic rebound (HIV-1 RNA >200 c/mL): 6.3% and 11.5% by 6 and 12 months after initial suppression, respectively. Age was the only factor associated with increased risk of virologic rebound (aHR 1.33 for every 5 years younger; p=0.005). CONCLUSIONS: For HIV-1 infected persons in serodiscordant couples, younger age was associated with delayed ART initiation, failure to achieve viral suppression, and increased risk of virologic rebound. Motivating ART initiation and early adherence is key to achieving and sustaining viral suppression. |
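Because hazard ratios from a Cox model compose multiplicatively on the log scale, the per-5-year aHRs reported above can be scaled to larger age gaps. A small illustrative sketch (the extrapolation assumes log-linearity in age, which is an assumption of the model form, not a separate result from the paper):

```python
def scaled_hazard_ratio(hr_per_increment, n_increments):
    """Scale a Cox hazard ratio expressed per one covariate increment
    to n increments: hazard ratios multiply, so HR(n) = HR(1) ** n."""
    return hr_per_increment ** n_increments

# An aHR of 1.05 per 5 years younger for viral non-suppression implies,
# for a person 15 years younger (3 increments), roughly 1.05**3 ≈ 1.16;
# for virologic rebound (aHR 1.33 per 5 years), 1.33**3 ≈ 2.35.
younger_15y_nonsuppression = scaled_hazard_ratio(1.05, 3)
younger_15y_rebound = scaled_hazard_ratio(1.33, 3)
```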
Live neonates born to mothers with Ebola virus disease: a review of the literature
Nelson JM , Griese SE , Goodman AB , Peacock G . J Perinatol 2015 36 (6) 411-4 Ebola virus disease (EVD) is associated with high mortality, especially among neonates. There is a paucity of literature on live neonates born to pregnant women with EVD, and therefore, our understanding of their clinical illness and outcomes is extremely limited. A literature search was conducted to identify descriptions of live neonates born to pregnant women with EVD. To date, five known reports have provided limited information about 15 live neonates born to pregnant women with EVD. All 15 neonates died, and among those with information, death occurred within 19 days of birth. Of the 12 neonates with information on signs and symptoms, 8 (67%) were reported to have fever; no other signs or symptoms were reported. There are no published data describing the clinical course or treatments provided for these neonates. Potential modes of Ebola virus transmission from mother to neonate include in utero transmission, transmission during delivery, direct contact, and transmission through breast milk. There is an urgent need for more information about neonates with EVD, including clinical course (for example, onset and presentation of illness, symptomatology, and course of illness) and treatments provided, as well as information on Ebola viral load in breast milk from Ebola-positive and convalescing mothers. |
Long-term immunologic and virologic responses on raltegravir-containing regimens among ART-experienced participants in the HIV Outpatient Study
Buchacz K , Wiegand R , Armon C , Chmiel JS , Wood K , Brooks JT , Palella FJ Jr . HIV Clin Trials 2015 16 (4) 139-46 OBJECTIVES: Raltegravir (RAL)-containing antiretroviral therapy (ART) produced better immunologic and virologic responses than optimized background ART in clinical trials of heavily ART-experienced patients, but few data exist on long-term outcomes in routine HIV care. METHODS: We studied ART-experienced HIV Outpatient Study (HOPS) participants seen at 10 US HIV-specialty clinics during 2007-2011. We identified patients who started (baseline date) ≥30 continuous days of either RAL-containing or RAL-sparing ART, and used propensity score (PS) matching methods to account for baseline clinical and demographic differences. We used Kaplan-Meier methods and log-rank tests for the matched subsets to evaluate probability of death, achieving HIV RNA <50 copies/mL, and CD4 cell count (CD4) increase of ≥50 cells/mm3 during follow-up. RESULTS: Among 784 RAL-exposed and 1062 RAL-unexposed patients, 472 from each group were matched by PS. At baseline, the 472 RAL-exposed patients (mean nadir CD4, 205 cells/mm3; mean baseline CD4, 460 cells/mm3; HIV RNA <50 copies/mL in 61%; mean years on prescribed ART, 7.5) were similar to the RAL-unexposed. During a mean follow-up of over 3 years, mortality rates and immunologic and virologic trajectories did not differ between the two groups. Among patients with detectable baseline HIV RNA levels, 76% of RAL-exposed and 63% of RAL-unexposed patients achieved HIV RNA <50 copies/mL (P = 0.51); 69% and 58%, respectively, achieved a CD4 increase ≥50 cells/mm3 (P = 0.70). DISCUSSION: In our large cohort of US ART-experienced patients with a wide spectrum of clinical history, similar outcomes were observed among patients prescribed RAL-containing versus other contemporary ART. |
Outbreaks of acute gastroenteritis transmitted by person-to-person contact, environmental contamination, and unknown modes of transmission - United States, 2009-2013
Wikswo ME , Kambhampati A , Shioda K , Walsh KA , Bowen A , Hall AJ . MMWR Surveill Summ 2015 64 (12) 1-16 PROBLEM/CONDITION: Acute gastroenteritis (AGE) is a major cause of illness in the United States, with an estimated 179 million episodes annually. AGE outbreaks propagated through direct person-to-person contact, contaminated environmental surfaces, and unknown modes of transmission were not systematically captured at the national level before 2009 and thus were not well characterized. REPORTING PERIOD: 2009-2013. DESCRIPTION OF SYSTEM: The National Outbreak Reporting System (NORS) is a voluntary national reporting system that supports reporting of all waterborne and foodborne disease outbreaks and all AGE outbreaks resulting from transmission by contact with contaminated environmental sources, infected persons or animals, or unknown modes. Local, state, and territorial public health agencies within the 50 U.S. states, the District of Columbia (DC), five U.S. territories, and three Freely Associated States report outbreaks to CDC via NORS using a standard online data entry system. RESULTS: A total of 10,756 AGE outbreaks occurred during 2009-2013, for which the primary mode of transmission occurred through person-to-person contact, environmental contamination, and unknown modes of transmission. NORS received reports from public health agencies in 50 U.S. states, DC, and Puerto Rico. These outbreaks resulted in 356,532 reported illnesses, 5,394 hospitalizations, and 459 deaths. The median outbreak reporting rate for all sites in a given year increased from 2.7 outbreaks per million population in 2009 to 11.8 outbreaks in 2013. The etiology was unknown in 31% (N = 3,326) of outbreaks. Of the 7,430 outbreaks with a suspected or confirmed etiology reported, norovirus was the most common, reported in 6,223 (84%) of these outbreaks. Other reported suspected or confirmed etiologies included Shigella (n = 332) and Salmonella (n = 320). 
Outbreaks were more frequent during the winter, with 5,716 (53%) outbreaks occurring during December-February, and 70% of the 7,001 outbreaks with a reported setting of exposure occurred in long-term-care facilities (n = 4,894). In contrast, 59% (n = 143) of shigellosis outbreaks, 36% (n = 30) of salmonellosis outbreaks, and 32% (n = 84) of other or multiple etiology outbreaks were identified in child care facilities. INTERPRETATION: NORS is the first U.S. surveillance system that provides national data on AGE outbreaks spread through person-to-person contact, environmental contamination, and unknown modes of transmission. The increase in reporting rates during 2009-2013 indicates that reporting to NORS improved notably in the 5 years since its inception. Norovirus is the most commonly reported cause of these outbreaks and, on the basis of epidemiologic data, might account for a substantial proportion of outbreaks without a reported etiology. During 2009-2013, norovirus accounted for most deaths and health care visits in AGE outbreaks spread through person-to-person contact, environmental contamination, and unknown modes of transmission. PUBLIC HEALTH ACTION: Recommendations for prevention and control of AGE outbreaks transmitted through person-to-person contact, environmental contamination, and unknown modes of transmission depend primarily on appropriate hand hygiene, environmental disinfection, and isolation of ill persons. NORS surveillance data can help identify priority targets for the development of future control strategies, including hygiene interventions and vaccines, and help monitor the frequency and severity of AGE outbreaks in the United States. Ongoing study of these AGE outbreaks can provide a better understanding of certain pathogens and their modes of transmission. For example, certain reported outbreak etiologies (e.g., Salmonella) are considered primarily foodborne pathogens but can be transmitted through multiple routes. 
Similarly, further examination of outbreaks of unknown etiology could help identify barriers to making an etiologic determination, reveal clinical and epidemiologic clues suggestive of a probable etiology, and uncover new and emerging etiologic agents. Outbreak reporting to NORS has improved substantially since its inception, and further outreach efforts and system improvements might facilitate additional increases in the number and completeness of reports to NORS. |
Pharmacokinetics and dosing of levofloxacin in children treated for active or latent multidrug-resistant tuberculosis, Federated States of Micronesia and Republic of the Marshall Islands
Mase SR , Jereb JA , Gonzalez D , Martin F , Daley CL , Fred D , Loeffler A , Menon L , Morris SB , Brostrom R , Chorba T , Peloquin CA . Pediatr Infect Dis J 2015 35 (4) 414-21 BACKGROUND: In the Federated States of Micronesia (FSM) and then the Republic of the Marshall Islands (RMI), levofloxacin pharmacokinetics (PK) were studied in children receiving directly observed once-daily regimens (10 mg/kg, age >5 years; 15-20 mg/kg, age ≤5 years) for either multidrug-resistant tuberculosis (MDR TB) disease or latent infection after MDR TB exposure, to inform future dosing strategies. METHODS: Blood samples were collected at 0 (RMI only), 1, 2, and 6 hours (50 children, aged 6 months to 15 years) after oral levofloxacin at >6 weeks of treatment. Clinical characteristics and levofloxacin Cmax, elimination half-life (t1/2), and area under the curve from 0 to 24 hours (AUC0-24, h * µg/mL) were correlated to determine optimal dosage and to examine associations. Population PK and target attainment were modeled. With results from FSM, dosages were increased in RMI toward the target maximal drug concentration (Cmax) for Mycobacterium tuberculosis, 8-12 µg/mL. RESULTS: Cmax correlated linearly with per-weight dosage. Neither Cmax nor t1/2 was associated with gender, age, body mass index, concurrent medications, or pre-dose meals. At a levofloxacin dosage of 15-20 mg/kg, Cmax ≥8 µg/mL was observed, and modeling corroborated a high target attainment across the ratio of the area under the free-concentration-versus-time curve to minimum inhibitory concentration (fAUCss,0-24/MIC) values. CONCLUSIONS: Levofloxacin dosage should be 15-20 mg/kg to achieve Cmax ≥8 µg/mL and a high target attainment across fAUCss,0-24/MIC values in children ≥2 years of age. |
Post-Ebola signs and symptoms in U.S. survivors
Epstein L , Wong KK , Kallen AJ , Uyeki TM . N Engl J Med 2015 373 (25) 2484-6 By mid-November 2015, the Ebola virus disease (EVD) epidemic in West Africa had resulted in 28,598 suspected, probable, or confirmed cases with 11,299 deaths since the earliest cases were identified in late 2013.1 Although most patients have been treated in West Africa, a small number have received care in hospitals in the United States.2 Although little has been known regarding clinical sequelae during recovery from EVD, current reports suggest that EVD survivors may have arthralgia, hearing loss, ocular disease, and extreme fatigue.3-5 However, the duration, severity, and pathogenesis of sequelae among EVD survivors are unknown. Understanding the experiences of EVD survivors in the United States may help inform the health needs of other survivors. | From August 2, 2014, to December 31, 2014, a total of 10 adult patients with EVD were treated in U.S. hospitals; of these patients, 8 survived. During March 2015, we administered a semistructured questionnaire by telephone or in person to all survivors about symptoms, diagnostic testing, and treatment occurring any time during the recovery period. (A copy of the questionnaire is provided in the Supplementary Appendix, available with the full text of this letter at NEJM.org.) Medical records were not reviewed. At the Centers for Disease Control and Prevention (CDC), it was determined that this inquiry did not meet the definition of research under federal guidelines, so review by institutional review boards was not required. |
Epidemiology and risk factors for echinocandin nonsusceptible Candida glabrata bloodstream infections: Data from a large multisite population-based candidemia surveillance program, 2008-2014
Vallabhaneni S , Cleveland AA , Farley MM , Harrison LH , Schaffner W , Beldavs ZG , Derado G , Pham CD , Lockhart SR , Smith RM . Open Forum Infect Dis 2015 2 (4) ofv163 Background. Echinocandins are first-line treatment for Candida glabrata candidemia. Echinocandin resistance is concerning due to limited remaining treatment options. We used data from a multisite, population-based surveillance program to describe the epidemiology and risk factors for echinocandin-nonsusceptible (NS) C glabrata candidemia. Methods. The Centers for Disease Control and Prevention's Emerging Infections Program conducts population-based laboratory surveillance for candidemia in 4 metropolitan areas (7.9 million persons; 80 hospitals). We identified C glabrata cases occurring during 2008-2014; medical records of cases were reviewed, and C glabrata isolates underwent broth microdilution antifungal susceptibility testing. We defined echinocandin-NS C glabrata (intermediate or resistant) based on 2012 Clinical and Laboratory Standards Institute minimum inhibitory concentration breakpoints. Independent risk factors for NS C glabrata were determined by stepwise logistic regression. Results. Of 1385 C glabrata cases, 83 (6.0%) had NS isolates (19 intermediate and 64 resistant); the proportion of NS isolates rose from 4.2% in 2008 to 7.8% in 2014 (P < .001). The proportion of NS isolates at each hospital ranged from 0% to 25.8%; 3 large, academic hospitals accounted for almost half of all NS isolates. In multivariate analysis, prior echinocandin exposure (adjusted odds ratio [aOR], 5.3; 95% CI, 2.6-1.2), previous candidemia episode (aOR, 2.5; 95% CI, 1.2-5.1), hospitalization in the last 90 days (aOR, 1.9; 95% CI, 1.0-3.5), and fluconazole resistance (aOR, 3.6; 95% CI, 2.0-6.4) were significantly associated with NS C glabrata. Fifty-nine percent of NS C glabrata cases had no known prior echinocandin exposure. Conclusion.
The proportion of NS C glabrata isolates rose significantly during 2008-2014, and NS C glabrata frequency differed across hospitals. In addition to acquired resistance resulting from prior drug exposure, occurrence of NS C glabrata without prior echinocandin exposure suggests possible transmission of resistant organisms. |
Exploring pharmacy and home-based sexually transmissible infection testing
Habel MA , Scheinmann R , Verdesoto E , Gaydos C , Bertisch M , Chiasson MA . Sex Health 2015 12 (6) 472-9 BACKGROUND: This study assessed the feasibility and acceptability of pharmacy and home-based sexually transmissible infection (STI) screening as alternate testing venues among emergency contraception (EC) users. METHODS: The study included two phases between February 2011 and July 2012. In Phase I, customers purchasing EC from eight pharmacies in Manhattan received vouchers for free STI testing at onsite medical clinics. In Phase II, three Facebook ads targeted EC users to connect them with free home-based STI test kits ordered online. Participants completed a self-administered survey. RESULTS: Only 38 participants enrolled in Phase I: 90% were female, 74% were aged ≤29 years, 45% were White non-Hispanic, and 75% were college graduates; 71% had not been tested for STIs in the past year, and 68% reported a new partner in the past 3 months. None tested positive for STIs. In Phase II, ads led to >45,000 click-throughs; 382 participants completed the survey and 290 requested kits, of which 28% were returned. Phase II participants were younger and less educated than Phase I participants; six tested positive for STIs. Challenges included recruitment, pharmacy staff participation, advertising with discretion, and cost. CONCLUSIONS: This study found low uptake of pharmacy and home-based testing among EC users; however, STI testing in these settings is feasible, and the acceptability findings indicate an appeal among younger women for testing in non-traditional settings. Collaborating with and training pharmacy and medical staff are key elements of service provision. Future research should explore how different permutations of expanded screening in non-traditional settings could improve testing uptake and detect additional STI cases. |
Hepatitis C virus testing perspectives among primary care physicians in four large primary care settings
Jewett A , Garg A , Meyer K , Wagner LD , Krauskopf K , Brown KA , Pan JJ , Massoud O , Smith BD , Rein DB . Health Promot Pract 2015 16 (2) 256-63 BACKGROUND: In 1998, the Centers for Disease Control and Prevention (CDC) published Recommendations for Prevention and Control of Hepatitis C Virus (HCV) Infection and HCV-Related Chronic Disease, recommending HCV testing for populations most likely to be infected with HCV. However, the implementation of risk-based screening has not been widely adopted in health care settings, and 45% to 85% of infected U.S. adults remain unidentified. OBJECTIVES: To develop a better understanding of why CDC's 1998 recommendations have had limited success in identifying persons with HCV infection and provide information about how CDC's 2012 Recommendations for the Identification of Chronic Hepatitis C Virus Infection Among Persons Born During 1945-1965 may be implemented more effectively. DESIGN: Qualitative data were collected and analyzed from a multidisciplinary team as part of the Birth Cohort Evaluation to Advance Screening and Testing for Hepatitis C project. RESPONDENTS: Nineteen providers were asked open-ended questions to identify current perspectives, practices, facilitators, and barriers to HCV screening and testing. Providers were affiliated with Henry Ford Hospital, Mount Sinai Hospital, the University of Alabama, and the University of Texas Health Science Center. RESULTS: Respondents reported that the complexity of the 1998 recommendations and their numerous indicated risk factors were major barriers to effective implementation. Other hindrances to hepatitis C testing included physician discomfort in asking questions about socially undesirable behaviors and physician uncertainty about patient insurance coverage.
CONCLUSION: Implementation of the CDC's 2012 recommendations could be more successful than the 1998 recommendations due to their relative simplicity; however, effective strategies need to be used for dissemination and implementation for full success. |
HIV testing among outpatients with Medicaid and commercial insurance
Dietz PM , Van Handel M , Wang H , Peters PJ , Zhang J , Viall A , Branson BM . PLoS One 2015 10 (12) e0144965 OBJECTIVE: To assess HIV testing and factors associated with receipt of testing among persons with Medicaid and commercial insurance during 2012. METHODS: Outpatient and laboratory claims were analyzed from two databases: all Medicaid claims from six states and all claims from Medicaid health plans from four other states, and a large national convenience sample of patients with commercial insurance in the United States. We excluded patients aged <13 years or >64 years, those enrolled <9 of the 12 months, pregnant females, and those previously diagnosed with HIV. We identified patients with new HIV diagnoses that followed (did not precede) the HIV test, using HIV ICD-9 codes. HIV testing percentages were assessed by patient demographics and other tests or diagnoses that occurred during the same visit. RESULTS: During 2012, 89,242 of 2,069,536 patients (4.3%) with Medicaid had at least one HIV test, and 850 (1.0%) of those tested received a new HIV diagnosis. Among 27,206,804 patients with commercial insurance, 757,646 (2.8%) had at least one HIV test, and 5,884 (0.8%) of those tested received a new HIV diagnosis. During visits that included an HIV test, 80.2% of Medicaid and 83.0% of commercial insurance claims also included a test or diagnosis for a sexually transmitted infection (STI) and/or hepatitis B or C virus at the same visit. CONCLUSIONS: HIV testing primarily took place concurrently with screening or diagnoses for STIs or hepatitis B or C. We found little evidence to suggest routine screening for HIV infection was widespread. |
How do changes in the population tested for chlamydia over time affect observed trends in chlamydia positivity? Analysis of routinely collected data from young women tested for chlamydia in family planning clinics in the Pacific Northwest (USA), between 2003 and 2010
Woodhall SC , Torrone L , Fine D , Salomon SG , Nakatsukasa-Ono W , Soldan K , Weinstock H . Sex Health 2015 12 (6) 512-9 BACKGROUND: The proportion of chlamydia tests that are positive (positivity) is dependent on the population tested and the test technology used. The way in which changes in these variables might affect trends in positivity over time is investigated. METHODS: Data from 15- to 24-year-old women tested for chlamydia in family planning clinics participating in the Infertility Prevention Project in the Pacific Northwest, United States (USA Public Health Service Region X) during 2003-2010 (n = 590,557) were analysed. Trends in positivity and in test, demographic and sexual behaviour variables were identified. Unadjusted and adjusted trends in chlamydia positivity were calculated using logistic regression. RESULTS: The proportion of tests carried out using nucleic acid amplification tests (NAATs) increased dramatically during the analysis period in two states. Smaller changes in demographic and behavioural characteristics were seen. Controlling for test technology used had the largest effect on the trend in testing positive per year, leading to a fall in the calculated odds ratio of testing positive from 1.06 to 1.02 in Oregon, and from 1.07 to 1.02 in Idaho. Controlling for other variables had minimal effect on chlamydia positivity trends. CONCLUSIONS: Changes in NAAT use had a large effect on observed trends in chlamydia positivity over time in the two states where NAATs were introduced during the analysis period. While trends in chlamydia positivity may be a useful metric for monitoring chlamydia burden, it is important to consider changes in test type when interpreting these data. |
Applying public health principles to the HIV epidemic - how are we doing?
Frieden TR , Foti KE , Mermin J . N Engl J Med 2015 373 (23) 2281-7 A decade ago, we called for applying public health principles to the human immunodeficiency virus (HIV) epidemic in the United States.1 Over the past decade, U.S. health departments, community organizations, and health care providers have expanded HIV screening and targeted testing, and as a result a greater proportion of HIV-infected people are now aware of their infection2,3; the number of reported new diagnoses of HIV infection has decreased4,5; and people with HIV infection are living longer.6 We have more sensitive diagnostic tests; more effective medications and medications with better side-effect profiles; rigorous confirmation that treatment prevents the spread of HIV and improves the health of infected people; and documentation of the potential benefit of preexposure prophylaxis for some high-risk people.7-12 | Despite this progress, most people living with HIV infection in the United States are not receiving antiretroviral treatment (ART)3; notification of partners of infected people remains the exception rather than the norm; and several high-risk behaviors have become more common. Anal sex without a condom has become more common among gay and bisexual men,13 and there appears to be an increased number of people sharing needles and other injection paraphernalia.14,15 The number of new infections has increased among younger gay and bisexual men, particularly black men. Although surveillance has improved, data-driven targeted interventions are not being rapidly and effectively implemented in most geographic areas. Much more progress is possible through further application of public health principles by public health departments and the health care system and, most important, through closer integration of health care and public health action. |
Care and viral suppression during the last year of life among persons with HIV who died in 2012, 18 US jurisdictions
Hall HI , Espinoza L , Harris S , Shi J . AIDS Care 2015 28 (5) 1-5 HIV remains a leading cause of death among some US populations, yet little is known about HIV care before death. We used data from the National HIV Surveillance System to determine disease stage and care within 12 months prior to death among persons infected with HIV who died in 2012. Persons were considered to be in care within 12 months before death if they had ≥1 CD4 or viral load test result, and in continuous care if they had ≥2 CD4 or viral load test results at least 3 months apart. Viral suppression (viral load <200 copies/mL) was based on the most recent viral load test result in the 12 months before death. Among 7348 persons infected with HIV who died in 2012, 47.1% had late stage disease (AIDS) within 12 months before death. Overall, 85.7% had ≥1 test result, 64.3% had ≥2 tests at least 3 months apart, and 41.6% had a suppressed viral load. While blacks and Hispanics/Latinos had higher percentages of continuous care compared with whites, they had lower percentages of viral suppression and higher percentages with late stage disease. Viral suppression was higher among older persons. The majority had been diagnosed with HIV more than 5 years before death (86.3%). Although the majority of persons infected with HIV who died in 2012 had been diagnosed many years before death, almost half had late stage disease, and there were disparities in late stage disease and viral suppression by race/ethnicity and age. |
Clinical and pathological evaluation of Mycobacterium marinum group skin infections associated with fish markets in New York City
Sia TY , Taimur S , Blau DM , Lambe J , Ackelsberg J , Yacisin K , Bhatnagar J , Ritter J , Shieh WJ , Muehlenbachs A , Shulman K , Fong D , Kung E , Zaki SR . Clin Infect Dis 2015 62 (5) 590-5 BACKGROUND: From December 2013 through May 2014, physicians, dermatopathologists, and public health authorities collaborated to characterize an outbreak of Mycobacterium marinum and other nontuberculous mycobacterial skin and soft tissue infections (SSTIs) associated with handling fish in New York City's Chinatown. Clinicopathologic and laboratory investigations were performed on a series of patients. METHODS: Medical records were reviewed for 29 patients. Culture results were available for 27 patients and 24 biopsy specimens were evaluated by histopathology, immunohistochemistry (IHC) staining for acid-fast bacilli (AFB), and mycobacterial polymerase chain reaction (PCR) assays. RESULTS: All patients received antibiotics. The most commonly prescribed antibiotic regimen was clarithromycin and ethambutol. Of the 29 patients in this case series, 16 (55%) received surgical treatment involving incision and drainage, mass excision, and synovectomy. Of these, 7 (44%) had deep tissue involvement. All patients showed improvement. For those with culture results, 11 of 27 (41%) were positive for M. marinum; the remainder showed no growth. Poorly formed granulomas (96%), neutrophils (75%), and necrosis (79%) were found in 24 biopsies. Of 15 cases that were culture-negative and analyzed by other methods, 9 were PCR positive for M. marinum group species, 8 were IHC positive, and 3 were positive by AFB stains. CONCLUSIONS: A multidisciplinary approach was used to identify cases in an outbreak of M. marinum infections. The use of histopathology, culture, and IHC plus PCR from full thickness skin biopsy can lead to improved diagnosis of M. marinum SSTIs compared to relying solely on mycobacterial culture, the current gold standard. |
Comparing observed with predicted weekly influenza-like illness rates during the winter holiday break, United States, 2004-2013
Gao H , Wong KK , Zheteyeva Y , Shi J , Uzicanin A , Rainey JJ . PLoS One 2015 10 (12) e0143791 In the United States, influenza season typically begins in October or November, peaks in February, and tapers off in April. During the winter holiday break, from the end of December to the beginning of January, changes in social mixing patterns, healthcare-seeking behaviors, and surveillance reporting could affect influenza-like illness (ILI) rates. We compared predicted with observed weekly ILI to examine trends around the winter break period. We examined weekly rates of ILI by region in the United States from influenza season 2003-2004 to 2012-2013. We compared observed and predicted ILI rates from week 44 to week 8 of each influenza season using the auto-regressive integrated moving average (ARIMA) method. Of 1,530 region, week, and year combinations, 64 observed ILI rates were significantly higher than predicted by the model. Of these, 21 occurred during the typical winter holiday break period (weeks 51-52); 12 occurred during influenza season 2012-2013. There were 46 observed ILI rates that were significantly lower than predicted. Of these, 16 occurred after the typical holiday break during week 1, eight of which occurred during season 2012-2013. Of 90 (10 HHS regions x 9 seasons) predictions during the peak week, 78 predicted ILI rates were lower than observed. Out of 73 predictions for the post-peak week, 62 ILI rates were higher than observed. There were 53 out of 73 models that had lower peak and higher post-peak predicted ILI rates than were actually observed. While most regions had ILI rates higher than predicted during winter holiday break and lower than predicted after the break during the 2012-2013 season, overall there was not a consistent relationship between observed and predicted ILI around the winter holiday break during the other influenza seasons. |
A comparison of postelimination measles epidemiology in the United States, 2009-2014 versus 2001-2008
Fiebelkorn AP , Redd SB , Gastanaduy PA , Clemmons N , Rota PA , Rota JS , Bellini WJ , Wallace GS . J Pediatric Infect Dis Soc 2015 6 (1) 40-48 BACKGROUND: Measles, a vaccine-preventable disease that can cause severe complications, was declared eliminated from the United States in 2000. The last published summary of US measles epidemiology covered 2001-2008. We summarized US measles epidemiology during 2009-2014. METHODS: We compared demographic, vaccination, and virologic data on confirmed measles cases reported to the Centers for Disease Control and Prevention during January 1, 2009-December 31, 2014 and January 1, 2001-December 31, 2008. RESULTS: During 2009-2014, 1264 confirmed measles cases were reported in the United States, including 275 importations from 58 countries and 66 outbreaks. The annual median numbers of cases and outbreaks during this period were 130 (range, 55-667 cases) and 10 (range, 4-23 outbreaks), respectively, compared with annual medians of 56 cases (P = .08) and 4 outbreaks (P = .04) during 2001-2008. Among US-resident case-patients during 2009-2014, children aged 12-15 months had the highest measles incidence (65 cases; 8.3 cases/million person-years), and infants aged 6-11 months had the second highest incidence (86 cases; 7.3 cases/million person-years). During 2009-2014, 865 (74%) of 1173 US-resident case-patients were unvaccinated and 188 (16%) had unknown vaccination status; of 917 vaccine-eligible US-resident case-patients, 600 (65%) were reported as having philosophical or religious objections to vaccination. CONCLUSIONS: Although the United States has maintained measles elimination since 2000, measles outbreaks continue to occur globally, resulting in imported cases and potential spread. The annual median number of cases and outbreaks more than doubled during 2009-2014 compared with the earlier postelimination years.
To maintain elimination, it will be necessary to maintain high 2-dose vaccination coverage, continue case-based surveillance, and monitor the patterns and rates of vaccine exemption. |
Metals exposures of residents living near the Akaki river in Addis Ababa, Ethiopia: A cross-sectional study
Yard E , Bayleyegn T , Abebe A , Mekonnen A , Murphy M , Caldwell KL , Luce R , Hunt DR , Tesfaye K , Abate M , Assefa T , Abera F , Habte K , Chala F , Lewis L , Kebede A . J Environ Public Health 2015 2015 935297 BACKGROUND: The Akaki River in Ethiopia has been found to contain elevated levels of several metals. Our objectives were to characterize metals exposures of residents living near the Akaki River and to assess metal levels in their drinking water. METHODS: In 2011, we conducted a cross-sectional study of 101 households in Akaki-Kality subcity (near the Akaki River) and 50 households in Yeka subcity (distant from the Akaki River). One willing adult in each household provided urine, blood, and drinking water samples. RESULTS: Urinary molybdenum (p < 0.001), tungsten (p < 0.001), lead (p < 0.001), uranium (p < 0.001), and mercury (p = 0.049) were higher in Akaki-Kality participants compared with Yeka participants. Participants in both subcities had low urinary iodine; 45% met the World Health Organization (WHO) classification for being at risk of moderate iodine deficiency. In Yeka, 47% of households exceeded the WHO aesthetic-based reference value for manganese; in Akaki-Kality, only 2% of households exceeded this value (p < 0.001). There was no correlation between metal levels in water samples and those in clinical specimens. CONCLUSIONS: Most of the exposures found during this investigation seem unlikely to cause acute health effects based on known toxic thresholds. However, toxicity data for many of these metals are very limited. |
Prenatal exposure to perfluoroalkyl acids and serum testosterone concentrations at 15 years of age in female ALSPAC study participants
Maisonet M , Calafat AM , Marcus M , Jaakkola JJ , Lashen H . Environ Health Perspect 2015 123 (12) 1325-30 BACKGROUND: Exposure to perfluorooctane sulfonic acid (PFOS) or to perfluorooctanoic acid (PFOA) increases mouse and human peroxisome proliferator-activated receptor alpha (PPARalpha) subtype activity, which influences lipid metabolism. Because cholesterol is the substrate from which testosterone is synthesized, exposure to these substances has the potential to alter testosterone concentrations. OBJECTIVES: We explored associations of total testosterone and sex hormone-binding globulin (SHBG) concentrations at age 15 years with prenatal exposures to PFOS, PFOA, perfluorohexane sulfonic acid (PFHxS), and perfluorononanoic acid (PFNA) in females. METHODS: Prenatal concentrations of the perfluoroalkyl acids (PFAAs) were measured in serum collected from pregnant mothers at enrollment (1991-1992) in the Avon Longitudinal Study of Parents and Children (ALSPAC). The median gestational age when the maternal blood sample was obtained was 16 weeks (interquartile range, 11-28 weeks). Total testosterone and SHBG concentrations were measured in serum obtained from their daughters at 15 years of age. Associations between prenatal PFAA concentrations and reproductive outcomes were estimated using linear regression models (n = 72). RESULTS: Adjusted total testosterone concentrations were on average 0.18 nmol/L (95% CI: 0.01, 0.35) higher in daughters with prenatal PFOS in the upper concentration tertile compared with daughters with prenatal PFOS in the lower tertile. Adjusted total testosterone concentrations were also higher in daughters with prenatal concentrations of PFOA (beta = 0.24; 95% CI: 0.05, 0.43) and PFHxS (beta = 0.18; 95% CI: 0.00, 0.35) in the upper tertile compared with daughters with concentrations in the lower tertile. We did not find evidence of associations between PFNA and total testosterone or between any of the PFAAs and SHBG.
CONCLUSIONS: Our findings were based on a small study sample and should be interpreted with caution. However, they suggest that prenatal exposure to some PFAAs may alter testosterone concentrations in females. |
Assessing arsenic exposure in households using bottled water or point-of-use treatment systems to mitigate well water contamination
Smith AE , Lincoln RA , Paulu C , Simones TL , Caldwell KL , Jones RL , Backer LC . Sci Total Environ 2015 544 701-710 There is little published literature on the efficacy of strategies to reduce exposure to residential well water arsenic. The objectives of our study were to: 1) determine if water arsenic remained a significant exposure source in households using bottled water or point-of-use treatment systems; and 2) evaluate the major sources and routes of any remaining arsenic exposure. We conducted a cross-sectional study of 167 households in Maine using one of these two strategies to prevent exposure to arsenic. Most households included one adult and at least one child. Untreated well water arsenic concentrations ranged from <10 μg/L to 640 μg/L. Urine samples, water samples, daily diet and bathing diaries, and household dietary and water use habit surveys were collected. Generalized estimating equations were used to model the relationship between urinary arsenic and untreated well water arsenic concentration, while accounting for documented consumption of untreated water and dietary sources. If mitigation strategies were fully effective, there should be no relationship between urinary arsenic and well water arsenic. On the contrary, we found that untreated well water arsenic concentration remained a significant (p≤0.001) predictor of urinary arsenic levels. When untreated water arsenic concentrations were <40 μg/L, untreated water arsenic was no longer a significant predictor of urinary arsenic. Time spent bathing (alone or in combination with water arsenic concentration) was not associated with urinary arsenic. A predictive analysis of the average study participant suggested that when untreated water arsenic ranged from 100 to 500 μg/L, elimination of any untreated water use would result in an 8%-32% reduction in urinary arsenic for young children, and a 14%-59% reduction for adults. 
These results demonstrate the importance of complying with a point-of-use or bottled water exposure reduction strategy. However, there remained unexplained, water-related routes of exposure. |
Carcinogenic air toxics exposure and their cancer-related health impacts in the United States
Zhou Y , Li C , Huijbregts MA , Mumtaz MM . PLoS One 2015 10 (10) e0140013 Public health protection from air pollution can be achieved more effectively by shifting from a single-pollutant approach to a multi-pollutant approach. To develop such multi-pollutant approaches, identifying which air pollutants are present most frequently is essential. This study aims to determine the most frequently found combinations of carcinogenic air toxics, or hazardous air pollutants (HAPs), across the United States, as well as to analyze the health impacts of developing cancer due to exposure to these HAPs. To identify the most commonly found carcinogenic air toxics combinations, we first identified HAPs with cancer risk greater than one in a million in more than 5% of the census tracts across the United States, based on the National-Scale Air Toxics Assessment (NATA) by the U.S. EPA for year 2005. We then calculated the frequencies of their two-component (binary) and three-component (ternary) combinations. To quantify the cancer-related health impacts, we focused on the 10 most frequently found HAPs with national average cancer risk greater than one in a million. Their cancer-related health impacts were calculated by converting lifetime cancer risk reported in NATA 2005 to years of healthy life lost, or Disability-Adjusted Life Years (DALYs). We found that the most frequently found air toxics with cancer risk greater than one in a million are formaldehyde, carbon tetrachloride, acetaldehyde, and benzene. The most frequently occurring binary pairs and ternary mixtures are the various combinations of these four air toxics. Analysis of urban and rural HAPs did not reveal significant differences in the top combinations of these chemicals. The cumulative annual cancer-related health impacts of inhaling the top 10 carcinogenic air toxics were about 1,600 DALYs in the United States, or 0.6 DALYs per 100,000 people. 
Formaldehyde and benzene together contribute nearly 60 percent of the total cancer-related health impacts. Our study shows that although there are many carcinogenic air toxics, only a few of them affect public health significantly at the national level in the United States, based on the frequency of occurrence of air toxics mixtures and cancer-related public health impacts. Future research is needed on their joint toxicity and cumulative health impacts. |
Description of calls from private well owners to a national well water hotline, 2013
Ridpath A , Taylor E , Greenstreet C , Martens M , Wicke H , Martin C . Sci Total Environ 2015 544 601-605 Water Systems Council (WSC) is a national, non-profit organization providing education and resources to private household well owners. Since 2003, WSC has provided wellcare®, a toll-free telephone hotline to answer questions from the public regarding well stewardship. In order to identify knowledge gaps regarding well stewardship among private well owners, we obtained data from WSC and reviewed calls made during 2013 to wellcare®. WSC records data from each wellcare® call, including caller information, primary reason for call, main use of well water, and whether the caller was asking about a cistern, private well, shared well, or spring. We searched for calls with key words indicating specific contaminants of interest and reviewed primary reasons for calls. Calls classified as primarily testing-related were further categorized depending on whether the caller asked how to test well water or how to interpret testing results. During 2013, wellcare® received 1100 calls from private well owners who were residents of 48 states. Among these calls, key words were mentioned for radon in 87 (8%), coliforms in 83 (8%), fracking-related chemicals in 51 (5%), arsenic in 34 (3%), and nitrates in 32 (3%). Only 38% of private well owners reported conducting any well maintenance activities, such as inspecting, cleaning, or repairing the well, or testing well water, during the previous 12 months. The primary reasons for calls were well water testing (n=403), general information relating to wells (n=249), contaminants (n=229), and well water treatment (n=97). Among calls related to testing, 319 had questions about how to test their well water, and 33 had questions about how to interpret testing results. 
Calls from private well owners to the wellcare® Hotline during 2013 identified key knowledge gaps regarding well stewardship: well owners are generally not testing or maintaining their wells, have questions about well water testing and treatment, and have concerns about well water contaminants. |
De novo transcriptome reconstruction and annotation of the Egyptian rousette bat.
Lee AK , Kulcsar KA , Elliott O , Khiabanian H , Nagle ER , Jones ME , Amman BR , Sanchez-Lockhart M , Towner JS , Palacios G , Rabadan R . BMC Genomics 2015 16 1033 BACKGROUND: The Egyptian Rousette bat (Rousettus aegyptiacus), a common fruit bat species found throughout Africa and the Middle East, was recently identified as a natural reservoir host of Marburg virus. With Ebola virus, Marburg virus is a member of the family Filoviridae that causes severe hemorrhagic fever disease in humans and nonhuman primates, but results in little to no pathological consequences in bats. Understanding host-pathogen interactions within reservoir host species and how it differs from hosts that experience severe disease is an important aspect of evaluating viral pathogenesis and developing novel therapeutics and methods of prevention. RESULTS: Progress in studying bat reservoir host responses to virus infection is hampered by the lack of host-specific reagents required for immunological studies. In order to establish a basis for the design of reagents, we sequenced, assembled, and annotated the R. aegyptiacus transcriptome. We performed de novo transcriptome assembly using deep RNA sequencing data from 11 distinct tissues from one male and one female bat. We observed high similarity between this transcriptome and those available from other bat species. Gene expression analysis demonstrated clustering of expression profiles by tissue, where we also identified enrichment of tissue-specific gene ontology terms. In addition, we identified and experimentally validated the expression of novel coding transcripts that may be specific to this species. CONCLUSION: We comprehensively characterized the R. aegyptiacus transcriptome de novo. This transcriptome will be an important resource for understanding bat immunology, physiology, disease pathogenesis, and virus transmission. |
Complete Genome Sequences for Two Strains of a Novel Fastidious, Partially Acid-Fast, Gram-Positive Corynebacterineae Bacterium, Derived from Human Clinical Samples.
Nicholson AC , Bell M , Humrighouse BW , McQuiston JR . Genome Announc 2015 3 (6) Here we report the complete genome sequences of two strains of the novel fastidious, partially acid-fast, Gram-positive bacillus "Lawsonella clevelandensis" (proposed). Their clinical relevance and unusual growth characteristics make them intriguing candidates for whole-genome sequencing. |
Clinical utility of genetic and genomic services: context matters.
Dotson WD , Bowen MS , Kolor K , Khoury MJ . Genet Med 2015 18 (7) 672-4 Should diagnoses, and corresponding changes in disease management, be sufficient demonstration of clinical utility, even in the absence of evidence for improved clinical outcomes? This question is posed to the health-care payer community in a recent American College of Medical Genetics and Genomics (ACMG) position statement on the clinical utility of genetic and genomic services.1 Affirmative arguments could be drawn from examples of individually rare, highly penetrant, single-gene disorders. We fully support the ACMG’s call for inclusion of individual, familial, and societal levels of impact in the evaluation of testing. Nevertheless, broadening the definition of clinical utility for all cases may be less helpful in the evaluation of genetic tests than promoting more context-dependent and transparent decision-making, with less rigidity and dogmatic adherence to artificial logic models. |
The program cost of a brief video intervention shown in sexually transmitted disease clinic waiting rooms
Gift TL , O'Donnell LN , Rietmeijer CA , Malotte KC , Klausner JD , Margolis AD , Borkowf CB , Kent CK , Warner L . Sex Transm Dis 2016 43 (1) 61-64 BACKGROUND: Patients in sexually transmitted disease (STD) clinic waiting rooms represent a potential audience for delivering health messages via video-based interventions. A controlled trial at 3 sites found that patients exposed to one intervention, Safe in the City, had a significantly lower incidence of STDs compared with patients in the control condition. An evaluation of the intervention's cost could help determine whether such interventions are programmatically viable. MATERIALS AND METHODS: The cost of producing the Safe in the City intervention was estimated using study records, including logs, calendars, and contract invoices. Production costs were divided by the 1650 digital video kits initially fabricated to estimate the cost per digital video disc. Clinic costs for showing the video in waiting rooms included staff time costs for equipment operation and hardware depreciation and were estimated retrospectively for the 21-month study observation period. RESULTS: The intervention cost an estimated $416,966 to develop, equaling $253 per digital video disc produced. Per-site costs to show the video intervention were estimated to be $2699 during the randomized trial. CONCLUSIONS: The cost of producing and implementing the Safe in the City intervention suggests that similar interventions could potentially be produced and made available to end users at a price that would both cover production costs and be low enough that end users could afford them. |
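The reported per-disc figure follows directly from dividing the total production cost by the number of kits fabricated. A minimal sketch of that arithmetic, using only the two figures stated in the abstract above:

```python
# Per-disc production cost for the Safe in the City intervention,
# computed from the figures reported in the abstract.
total_production_cost = 416_966  # USD, estimated total development cost
discs_fabricated = 1_650         # digital video kits initially fabricated

cost_per_disc = total_production_cost / discs_fabricated
print(round(cost_per_disc))  # 253, matching the reported $253 per disc
```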
The economic burden of incident venous thromboembolism in the United States: A review of estimated attributable healthcare costs
Grosse SD , Nelson RE , Nyarko KA , Richardson LC , Raskob GE . Thromb Res 2015 137 3-10 Venous thromboembolism (VTE), which includes deep vein thrombosis and pulmonary embolism, is an important cause of preventable mortality and morbidity. In this study, we summarize estimates of per-patient and aggregate medical costs or expenditures attributable to incident VTE in the United States. Per-patient estimates of incremental costs can be calculated as the difference in costs between patients with and without an event after controlling for differences in underlying health status. We identified estimates of the incremental per-patient costs of acute VTEs and VTE-related complications, including recurrent VTE, post-thrombotic syndrome, chronic thromboembolic pulmonary hypertension, and anticoagulation-related adverse drug events. Based on the studies identified, treatment of an acute VTE on average appears to be associated with incremental direct medical costs of $12,000 to $15,000 (2014 US dollars) among first-year survivors, controlling for risk factors. Subsequent complications are conservatively estimated to increase cumulative costs to $18,000-23,000 per incident case. Annual incident VTE events conservatively cost the US healthcare system $7-10 billion each year for 375,000 to 425,000 newly diagnosed, medically treated incident VTE cases. Future studies should track long-term costs for cohorts of people with incident VTE, control for comorbid conditions that have been shown to be associated with VTE, and estimate incremental medical costs for people with VTE who do not survive. The costs associated with treating VTE can be used to assess the potential economic benefit and cost-savings from prevention efforts, although costs will vary among different patient groups. |
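The aggregate burden estimate above can be reproduced by applying the per-case cumulative cost range to the annual incident case range. A sketch of that calculation, using only the bounds given in the abstract (illustrative arithmetic, not a re-analysis):

```python
# Consistency check of the aggregate annual VTE cost estimate:
# per-case cumulative costs of $18,000-$23,000 applied to
# 375,000-425,000 medically treated incident cases per year.
per_case_low, per_case_high = 18_000, 23_000  # USD per incident case
cases_low, cases_high = 375_000, 425_000      # annual incident VTE cases

aggregate_low = per_case_low * cases_low      # 6,750,000,000 USD
aggregate_high = per_case_high * cases_high   # 9,775,000,000 USD

# Both bounds fall roughly within the reported $7-10 billion range.
print(aggregate_low / 1e9, aggregate_high / 1e9)
```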
Financial hardship associated with cancer in the United States: Findings from a population-based sample of adult cancer survivors
Yabroff KR , Dowling EC , Guy GP Jr , Banegas MP , Davidoff A , Han X , Virgo KS , McNeel TS , Chawla N , Blanch-Hartigan D , Kent EE , Li C , Rodriguez JL , de Moor JS , Zheng Z , Jemal A , Ekwueme DU . J Clin Oncol 2015 34 (3) 259-67 PURPOSE: To estimate the prevalence of financial hardship associated with cancer in the United States and identify characteristics of cancer survivors associated with financial hardship. METHODS: We identified 1,202 adult cancer survivors diagnosed or treated at ≥ 18 years of age from the 2011 Medical Expenditure Panel Survey Experiences With Cancer questionnaire. Material financial hardship was measured by ever (1) borrowing money or going into debt, (2) filing for bankruptcy, (3) being unable to cover one's share of medical care costs, or (4) making other financial sacrifices because of cancer, its treatment, and lasting effects of treatment. Psychological financial hardship was measured as ever worrying about paying large medical bills. We examined factors associated with any material or psychological financial hardship using separate multivariable logistic regression models stratified by age group (18 to 64 and ≥ 65 years). RESULTS: Material financial hardship was more common in cancer survivors age 18 to 64 years than in those ≥ 65 years of age (28.4% v 13.8%; P < .001), as was psychological financial hardship (31.9% v 14.7%, P < .001). In adjusted analyses, cancer survivors age 18 to 64 years who were younger, female, nonwhite, and treated more recently and who had changed employment because of cancer were significantly more likely to report any material financial hardship. Cancer survivors who were uninsured, had lower family income, and were treated more recently were more likely to report psychological financial hardship. Among cancer survivors ≥ 65 years of age, those who were younger were more likely to report any financial hardship. 
CONCLUSION: Cancer survivors, especially the working-age population, commonly experience material and psychological financial hardship. |
Molecular characterisation and epidemiological investigation of an outbreak of blaOXA-181 carbapenemase-producing isolates of Klebsiella pneumoniae in South Africa
Jacobson RK , Manesen MR , Moodley C , Smith M , Williams S , Nicol M , Bamford C . S Afr Med J 2015 105 (12) 1030-1035 BACKGROUND: Klebsiella pneumoniae is an opportunistic pathogen often associated with nosocomial infections. A suspected outbreak of K. pneumoniae isolates, exhibiting reduced susceptibility to carbapenem antibiotics, was detected during the month of May 2012 among patients admitted to a haematology unit of a tertiary academic hospital in Cape Town, South Africa (SA). OBJECTIVES: An investigation was done to determine possible epidemiological links between the case patients and to describe the mechanisms of carbapenem resistance of these bacterial isolates. METHODS: Relevant demographic, clinical and laboratory information was extracted from hospital records and an observational review of infection prevention and control practices in the affected unit was performed. Antimicrobial susceptibility testing including phenotypic testing and genotypic detection of the most commonly described carbapenemase genes was done. The phylogenetic relationship of all isolates containing the blaOXA-181 carbapenemase gene was determined by pulsed-field gel electrophoresis (PFGE) and multilocus sequence typing. RESULTS: Polymerase chain reaction analysis identified a total of seven blaOXA-181-positive, carbapenem-resistant K. pneumoniae isolates obtained from seven patients, all from a single unit. These isolates were indistinguishable using PFGE analysis and belonged to sequence type ST-14. No other carbapenemase enzymes were detected. CONCLUSION: This is the first documented laboratory-confirmed outbreak of OXA-181-producing K. pneumoniae in SA, and highlights the importance of enforcing strict adherence to infection control procedures and the need for ongoing surveillance of antibiotic-resistant pathogens in local hospitals. |
Probabilistic measurement of central line-associated bloodstream infections
Hota B , Malpiedi P , Fridkin SK , Martin J , Trick W . Infect Control Hosp Epidemiol 2015 37 (2) 1-7 OBJECTIVE: To develop a probabilistic method for measuring central line-associated bloodstream infection (CLABSI) rates that reduces the variability associated with traditional, manual methods of applying CLABSI surveillance definitions. DESIGN: Multicenter retrospective cohort study of bacteremia episodes among patients hospitalized in adult patient-care units; the study evaluated the presence of CLABSI. SETTING: Hospitals that used the SafetySurveillor software system (Premier) and also reported to the Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN). PATIENTS: Patients were identified from a stratified sample of all eligible blood culture isolates from all eligible hospital units to generate a final set with an equal distribution (ie, 20%) from each unit type. Units were divided a priori into 5 major groups: medical intensive care unit, surgical intensive care unit, medical-surgical intensive care unit, hematology unit, or general medical wards. INTERVENTIONS: Episodes were reviewed by 2 experts, and a selection of discordant reviews was re-reviewed. Data were joined with NHSN data for hospitals' in-plan months. A predictive model was created; model performance was assessed using the c statistic in a validation set and by comparison with NHSN-reported rates for in-plan months. RESULTS: A final model was created with predictors of CLABSI. The c statistic for the final model was 0.75 (0.68-0.80). Rates from regression modeling correlated better with expert review than NHSN-reported rates did. CONCLUSIONS: The use of a regression model based on the clinical characteristics of the bacteremia outperformed traditional infection preventionist surveillance compared with an expert-derived reference standard. |
Fifteen years after To Err is Human: a success story to learn from
Pronovost PJ , Cleeman JI , Wright D , Srinivasan A . BMJ Qual Saf 2015 25 (6) 396-9 Preventable harm is a major cause of preventable death worldwide. In late 1999, the Institute of Medicine (IOM) released To Err is Human,1 a report that riveted the world’s attention to between 44 000 and 98 000 patient deaths annually in the USA from medical errors. Progress towards reducing these harms has proven difficult because healthcare lacks robust mechanisms to routinely measure the problem and estimates of the magnitude vary widely. It is hard to gauge safety when healthcare uses multiple different measures for the same harm and provides limited investment in measurement, implementation and applied sciences. | Central line-associated bloodstream infection (CLABSI) provides a notable exception and case study for learning. Over the past 15 years, through the combined and coordinated efforts of many, these infections have been reduced over 80% in intensive care units (ICU), decreasing patient mortality.2 In this essay, we reflect on the journey in preventing these infections and explore how this success can inform and accelerate efforts to reduce other types of preventable harm. | While this paper is a synthesis of past work, what is novel is providing the historical profile of CLABSI, comparing infection rates before and 15 years after the IOM report and offering new insights into what led to the substantial reductions in infections. The journey began in the 1970s when the Centers for Disease Control and Prevention (CDC) began collecting data on ICU CLABSIs. Over the next several decades, the CDC published data on national benchmarks for CLABSI, investigated bloodstream infection outbreaks and published the first Guideline for the Prevention of Intravascular Catheter-Related Infections in 1991. These early efforts brought little change in ICU CLABSI rates throughout the 1990s, with clinicians and policy makers believing infections were inevitable. |
Seasonal influenza vaccination rates in the HIV Outpatient Study - United States, 1999-2013
Durham MD , Buchacz K , Armon C , Patel P , Wood K , Brooks JT . Clin Infect Dis 2015 60 (6) 976-7 Due to the high burden of estimated annual deaths and hospitalizations associated with influenza epidemics in the United States [1, 2], annual influenza vaccination is recommended for persons aged ≥6 months, and for those who are at increased risk of influenza-related complications, including persons with human immunodeficiency virus (HIV) infections [3]. In 2011, we published data from the HIV Outpatient Study (HOPS), an open prospective HIV cohort study of HIV-infected outpatients seen in 9 well-established community-based private practices, public health clinics, and university-based clinics, describing annual rates of influenza vaccination among HIV-infected persons in care during influenza seasons from 1999 to 2008 [4]. We found that an average of 35% of HOPS participants received an influenza vaccination while under observation during the time period under investigation. This letter serves as an update to the previous analysis by including 5 years of additional data describing influenza vaccination rates among HOPS participants through 30 June 2013. | Among 6548 active patients (patients with at least 1 clinical encounter during the time period under investigation), 4788 were vaccinated at any time between 1 July 1999 and 30 June 2013. The annual vaccination rates ranged from a low of 26.4% to a high of 50.9% (average, 38.7%; linear regression trend P = .043; Figure 1) during the influenza seasons studied. The HOPS recorded the highest rate of vaccination during the 2009–2010 H1N1 influenza season, but that level was not sustained in subsequent seasons. 
Although we detected an overall temporal increase in influenza vaccination rates over the 14-year period, the observed rates continued to be consistently lower than published recommendations and below the goal of 70% set for Healthy People 2020 [3, 5], underscoring the need for improving adherence to guidelines for annual influenza vaccination for HIV-infected persons. |
Uptake and effectiveness of a trivalent inactivated influenza vaccine in children in urban and rural Kenya, 2010-2012
Katz MA , Lebo E , Emukule GO , Otieno N , Caselton DL , Bigogo G , Njuguna H , Muthoka PM , Waiboci LW , Widdowson MA , Xu X , Njenga MK , Mott JA , Breiman RF . Pediatr Infect Dis J 2015 35 (3) 322-9 BACKGROUND: In Africa, recent surveillance has demonstrated a high burden of influenza, but influenza vaccine is rarely used. In Kenya, a country with a tropical climate, influenza has been shown to circulate year-round, as in other tropical countries. METHODS: During three months in 2010 and 2011, and two months in 2012, the Kenya Medical Research Institute/CDC-Kenya offered free injectable trivalent inactivated influenza vaccine to children 6 months-10 years old in two resource-poor communities in Kenya, Kibera and Lwak (total population ~50,000). We conducted a case-control study to evaluate vaccine effectiveness (VE) in preventing laboratory-confirmed influenza associated with influenza-like illness and acute lower respiratory illness. RESULTS: Of 52,000 eligible children, 41%, 48%, and 51% received at least one vaccine in 2010, 2011, and 2012, respectively; 30%, 36%, and 38% were fully vaccinated. VE among fully vaccinated children was 57% (95% CI = 29-74%) during a 6-month follow-up period, 39% (95% CI = 17-56%) during a 9-month follow-up period, and 48% (95% CI = 32-61%) during a 12-month follow-up period. For the 12-month follow-up period, VE was statistically significant in children <5 years old and children 5 to <10 years old (50% and 46%, respectively). CONCLUSIONS: In Kenya, parents of nearly half of eligible children under 10 years old chose to get their children vaccinated with a free influenza vaccine. During a 12-month follow-up period the vaccine was moderately effective in preventing medically attended influenza-associated respiratory illness. |
Vaccine prioritization for effective pandemic response
Lee EK , Yuan F , Pietz FH , Benecke BA , Burel G . Interfaces (Providence) 2015 45 (5) 425-443 Public health experts agree that the best strategy to contain a pandemic, where vaccination is the prophylactic treatment but vaccine supply is limited, is to give higher priority to higher-risk populations. We derive a mathematical decision framework to track the effectiveness of prioritized vaccination through the course of a pandemic. Our approach couples a disease-propagation model with a vaccine queueing model and an optimization engine to determine optimal prioritized coverage in a mixed-vaccination strategy. This demonstrably minimizes infection and mortality. Given estimated outbreak characteristics, vaccine inventory levels, and individual risk factors, the study reveals an optimal coverage for the high-risk group that results in the lowest overall attack and mortality rates. This knowledge is critical to public health policy makers for determining the best strategies for population protection. This becomes particularly important in determining when to switch from a prioritized strategy emphasizing high-risk groups to a nonprioritized strategy in which the vaccine becomes available publicly. Our analysis highlights the importance of uninterrupted vaccine supply. Although the 2009 H1N1 supply, received in interrupted batches, eventually covered over 30 percent of the population, the resulting attack and mortality rates are significantly inferior to those in a scenario where only 20 percent of the population is covered with an uninterrupted supply. We also learned that early vaccination is important. Contrasting the 2009 H1N1 supply to a 10 percent uninterrupted supply, a three-week delay in vaccination reduces the 9.9 percent infection reduction of the former to a mere 0.9 percent. The optimal trigger for switching from prioritized to nonprioritized vaccination is sensitive to infectivity and vulnerability of the high-risk groups. 
Our study further underscores the importance of throughput efficiency in dispensing and its effects on the overall attack and mortality rates. The more transmissible the virus is, the lower the threshold for switching to nonprioritized vaccination. Our model, which can be generalized, allows the incorporation of seasonality and virus mutation of the biological agents. The system empowers policy makers to make the right decisions at the appropriate time to save more lives, better utilize limited resources, and reduce the health-service burden during a pandemic event. |
Effect of substituting IPV for tOPV on immunity to poliovirus in Bangladeshi infants: An open-label randomized controlled trial
Mychaleckyj JC , Haque R , Carmolli M , Zhang D , Colgate ER , Nayak U , Taniuchi M , Dickson D , Weldon WC , Oberste MS , Zaman K , Houpt ER , Alam M , Kirkpatrick BD , Petri WA Jr . Vaccine 2015 34 (3) 358-66 BACKGROUND: The Polio Endgame strategy includes phased withdrawal of oral poliovirus vaccines (OPV) coordinated with introduction of inactivated poliovirus vaccine (IPV) to ensure population immunity. The impact of IPV introduction into a primary OPV series of immunizations in a developing country is uncertain. METHODS: Between May 2011 and November 2012, we enrolled 700 Bangladeshi infant-mother dyads from Dhaka slums into an open-label randomized controlled trial to test whether substituting an injected IPV dose for the standard Expanded Program on Immunization (EPI) fourth tOPV dose at infant age 39 weeks would reduce fecal shedding and enhance systemic immunity. The primary endpoint was mucosal immunity to poliovirus at age one year, measured by fecal excretion of any Sabin virus at five time points up to 25 days after tOPV challenge at 52 weeks, analyzed by the intention-to-treat principle. FINDINGS: We randomized 350 families each to the tOPV and IPV vaccination arms. Neither study arm resulted in superior intestinal protection at 52 weeks as measured by the prevalence of infants shedding any of three poliovirus serotypes, but the IPV dose induced significantly higher seroprevalence and seroconversion rates. This result was identical for poliovirus detection by cell culture or RT-qPCR. The non-significant estimated culture-based shedding risk difference was -3% favoring IPV, and the two vaccination schedules were inferred to be equivalent within a 95% confidence margin of -10% to +4%. Results for shedding analyses stratified by poliovirus type were similar. 
CONCLUSIONS: Neither vaccination regimen is superior to the other in enhancing intestinal immunity as measured by poliovirus shedding at 52 weeks of age, and the IPV regimen provides intestinal immunity similar to that of the four-dose tOPV series, although it strongly enhances humoral immunity. The IPV-modified regimen may be considered for vaccination programs without loss of intestinal protection. |
Epidemiology of invasive pneumococcal disease in Bangladeshi children before introduction of pneumococcal conjugate vaccine
Saha SK , Hossain B , Islam M , Hasanuzzaman M , Saha S , Hasan M , Darmstadt GL , Chowdury M , El Arifeen S , Baqui AH , Breiman RF , Santosham M , Luby SP , Whitney CG . Pediatr Infect Dis J 2015 35 (6) 655-61 BACKGROUND: Because Bangladesh intended to introduce PCV-10 in 2015, we examined the baseline burden of invasive pneumococcal disease (IPD) in order to measure the impact of PCV. METHODS: During 2007-2013, we performed blood and cerebrospinal fluid (CSF) cultures in children <5 years of age with suspected IPD identified through active surveillance at 4 hospitals. Isolates were serotyped by the Quellung reaction and tested for antibiotic susceptibility by disc diffusion and E-test. Serotyping of culture-negative cases, detected by Binax or polymerase chain reaction (PCR), was done by sequential multiplex PCR. Trends in IPD case numbers were analyzed by serotype and clinical syndrome. RESULTS: The study identified 752 IPD cases; 78% occurred in children <12 months of age. Serotype information was available for 78% (442/568), including 197 of 323 culture-negative cases available for serotyping. We identified 50 serotypes; the most common serotypes were 2 (16%), 1 (10%), 6B (7%), 14 (7%), and 5 (7%). PCV-10 and PCV-13 serotypes accounted for 46% (range 29-57% by year) and 50% (range 37-64% by year) of cases, respectively. Potential serotype coverage for meningitis and non-meningitis cases was 45% and 49% for PCV-10, and 48% and 57% for PCV-13, respectively. Eighty-two percent of strains were susceptible to all antibiotics except cotrimoxazole. CONCLUSION: The distribution of serotypes causing IPD in Bangladeshi children is diverse, limiting the proportion of IPD cases PCV can prevent. However, PCV introduction is expected to have major benefits, as the country has a high burden of IPD-related mortality, morbidity and disability. |
Evaluation of the first year of national reporting on a new healthcare personnel influenza vaccination performance measure by US hospitals
Dolan SB , Kalayil EJ , Lindley MC , Ahmed F . Infect Control Hosp Epidemiol 2015 37 (2) 1-4 One thousand hospitals were surveyed on a new measure of healthcare personnel influenza vaccination for the 2012-2013 influenza season. Facilities found it easier to collect data on employees than nonemployees; larger facilities reported more challenges than smaller facilities. Barriers may decrease over time as facilities become accustomed to the measure. |
Influenza vaccine effectiveness for fully and partially vaccinated children 6 months to 8 years old during 2011-2012 and 2012-2013: The importance of two priming doses
Thompson MG , Clippard J , Petrie JG , Jackson ML , McLean HQ , Gaglani M , Reis EC , Flannery B , Monto AS , Jackson L , Belongia EA , Murthy K , Zimmerman RK , Thaker S , Fry AM . Pediatr Infect Dis J 2015 35 (3) 299-308 BACKGROUND: Few studies have examined the effectiveness of full vs. partial vaccination with inactivated trivalent influenza vaccines (IIV3) as defined by the U.S. CDC Advisory Committee on Immunization Practices (ACIP). METHODS: Respiratory swabs were collected from outpatients aged 6 months to 8 years with acute cough for ≤7 days in clinics in 5 states during the 2011-2012 and 2012-2013 influenza seasons. Influenza was confirmed by real-time reverse transcription polymerase chain reaction assay. Receipt of current season IIV3 and up to 4 prior vaccinations was documented from medical records and immunization registries. Using a test-negative design, vaccine effectiveness (VE) was estimated adjusting for age, race/ethnicity, medical conditions, study site, and month of enrollment. RESULTS: We did not observe higher VE for children fully vs. partially vaccinated with IIV3, as defined by U.S. ACIP, though our sample of partially vaccinated children was relatively small. However, among children aged 2-8 years in both seasons and against A(H3N2) and B influenza illness separately, VE point estimates were consistently higher for children who had received 2 doses in the same prior season compared to those without (VE range of 58-80% vs. 33-44%, respectively). Across seasons, the odds of A(H3N2) illness despite IIV3 vaccination were 2.4-fold (95% CI = 1.4-4.3) higher among children who had not received 2 doses in the same prior season. We also noted residual protection among unvaccinated children who were vaccinated the previous season (VE range = 36-40% across outcomes). CONCLUSION: Vaccination with IIV3 may provide preventive benefit in subsequent seasons, including possible residual protection if vaccination is missed. 
Two vaccine doses in the same season may be more effective than alternative priming strategies. |
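In a test-negative design like the one above, VE is derived from the odds of vaccination among test-positive cases versus test-negative controls. A minimal sketch of that calculation, unadjusted and with invented counts (the study itself adjusted for age, race/ethnicity, medical conditions, site, and month of enrollment):

```python
# Hypothetical counts, not the study's data: vaccination status among
# influenza test-positive (cases) and test-negative (controls) outpatients.
def vaccine_effectiveness(vacc_pos, unvacc_pos, vacc_neg, unvacc_neg):
    """Unadjusted VE from a test-negative design: VE = (1 - OR) * 100,
    where OR is the odds ratio of vaccination in cases vs. controls."""
    odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
    return (1.0 - odds_ratio) * 100.0

# Example: 40/160 cases vaccinated vs. 120/180 controls vaccinated
ve = vaccine_effectiveness(40, 160, 120, 180)
print(round(ve, 1))  # 62.5
```

In practice the adjustment is done with logistic regression, with VE taken as one minus the adjusted odds ratio on the vaccination term.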
Anticipating rotavirus vaccines - a pre-vaccine assessment of incidence and economic burden of rotavirus hospitalizations among children <5 years of age in Libya, 2012-13
Alkoshi S , Leshem E , Parashar UD , Dahlui M . BMC Public Health 2015 15 26 BACKGROUND: Libya introduced rotavirus vaccine in October 2013. We examined pre-vaccine incidence of rotavirus hospitalizations and associated economic burden among children <5 years in Libya to provide baseline data for future vaccine impact evaluations. METHODS: Prospective, hospital-based active surveillance for rotavirus was conducted at three public hospitals in two cities during August 2012 - April 2013. Clinical, demographic and estimated cost data were collected from children <5 years hospitalized for diarrhea; stool specimens were tested for rotavirus with a commercial enzyme immunoassay. Annual rotavirus hospitalization incidence rate estimates included a conservative estimate based on the number of cases recorded during the nine months and an extrapolation to a 12-month incidence rate. National rotavirus disease and economic burden were estimated by extrapolating incidence and cost data to the national population of children aged <5 years. RESULTS: A total of 410 children <5 years of age with diarrhea were enrolled, of whom 239 (58%) tested positive for rotavirus, yielding an incidence range of 418-557 rotavirus hospitalizations per 100,000 children <5 years of age. Most (86%) rotavirus cases occurred in children below two years of age, with a distinct seasonal peak in winter (December-March) months. The total cost of treatment for each rotavirus patient was estimated at US$ 679 (range: 200-5,423). By extrapolation, we estimated 2,948 rotavirus hospitalizations occur each year in Libyan children <5 years of age, incurring total costs of US$ 2,001,662 (range: 1,931,726-2,094,005). CONCLUSIONS: Rotavirus incurs substantial morbidity and economic burden in Libya, highlighting the potential value of vaccination of Libyan children against rotavirus. |
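The conservative and extrapolated incidence estimates above follow from simple scaling of the 9-month case count. A sketch of that arithmetic, assuming a hypothetical surveillance catchment of 57,000 children <5 years (the actual catchment denominators are not given in the abstract):

```python
# Illustrative sketch of the incidence extrapolation; the catchment
# population below is hypothetical, not from the study.
def rotavirus_incidence(cases, months_observed, catchment_under5):
    """Annualized hospitalizations per 100,000 children <5 years.

    The conservative estimate uses the observed case count as-is; the
    extrapolated estimate scales the partial-year count to 12 months."""
    conservative = cases / catchment_under5 * 100_000
    extrapolated = cases * (12 / months_observed) / catchment_under5 * 100_000
    return conservative, extrapolated

low, high = rotavirus_incidence(cases=239, months_observed=9,
                                catchment_under5=57_000)
print(round(low), round(high))
```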
Injury prevention as social change
McClure RJ , Mack K , Wilkins N , Davey TM . Inj Prev 2015 22 (3) 226-9 We will not solve the public health problem of injury simply by educating individuals about the nature of injury risk, improving their risk assessment and providing these individuals with information to enable them to reduce the level of risk to which they are exposed. Substantial improvement in the societal injury burden will occur only when changes are made at the societal level that focus on reducing the population-level indicators of injury.1,2 The shift from an individual to a population perspective has substantial implications for the way we perceive, direct, undertake, and evaluate injury prevention research and practice. The analogy of ‘the population as patient’ provides a clear illustration of the foundational truths that underpin the preferred public health approach to the prevention of injury. | Society is the system within which populations exist. Sustained change made at the societal level to reduce population-level indicators of injury morbidity and mortality involves systemic change. In this paper, we consider a shift from the contemporary systematic approach to unintentional injury and violence prevention,3 to a systemic approach4 more consistent with the principles of ecological public health.5 We consider the extent to which the logic of the systematic model, and the related misconceptions about the role of uncertainty in science, limit local, national and global efforts to minimise injury-related harm. We explore the implications of a systemic perspective for the field of injury prevention and conclude by delineating a new programme of work that could be of considerable benefit to the injury-related health of populations. |
Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study
Berger RP , Parks S , Fromkin J , Rubin P , Pecora PJ . Inj Prev 2015 21 e133-7 OBJECTIVE: To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). METHODS: Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. RESULTS: There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. CONCLUSIONS: In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. |
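The sensitivity and specificity above are proportions against the CPT reference standard. A minimal sketch using hypothetical 2x2 counts consistent with the reported 117 AHT and 106 non-AHT cases; the paper's exact CI method is not stated, so a simple Wald normal approximation is used for illustration:

```python
import math

def sens_spec_with_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with Wald 95% CIs from 2x2 counts.

    Assumption: normal-approximation intervals, shown only to illustrate
    the calculation; the paper does not state its CI method."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# Hypothetical counts: e.g., 108 true positives among 117 AHT cases,
# 102 true negatives among 106 non-AHT TBI cases.
result = sens_spec_with_ci(tp=108, fn=9, tn=102, fp=4)
print(result["sensitivity"][0])  # ~0.923
```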
College sports-related injuries - United States, 2009-10 through 2013-14 academic years
Kerr ZY , Marshall SW , Dompier TP , Corlette J , Klossner DA , Gilchrist J . MMWR Morb Mortal Wkly Rep 2015 64 (48) 1330-6 Sports-related injuries can have a substantial impact on the long-term health of student-athletes. The National Collegiate Athletic Association (NCAA) monitors injuries among college student-athletes at member schools. In academic year 2013-14, a total of 1,113 member schools fielded 19,334 teams with 478,869 participating student-athletes in NCAA championship sports (i.e., sports with NCAA championship competition) (1). External researchers and CDC used information reported to the NCAA Injury Surveillance Program (NCAA-ISP) by a sample of championship sports programs to summarize the estimated national cumulative and annual average numbers of injuries during the 5 academic years from 2009-10 through 2013-14. Analyses were restricted to injuries reported among student-athletes in 25 NCAA championship sports. During this period, 1,053,370 injuries were estimated to have occurred during an estimated 176.7 million athlete-exposures to potential injury (i.e., one athlete's participation in one competition or one practice). Injury incidence varied widely by sport. Among all sports, men's football accounted for the largest average annual estimated number of injuries (47,199) and the highest competition injury rate (39.9 per 1,000 athlete-exposures). Men's wrestling experienced the highest overall injury rate (13.1 per 1,000) and practice injury rate (10.2 per 1,000). Among women's sports, gymnastics had the highest overall injury rate (10.4 per 1,000) and practice injury rate (10.0 per 1,000), although soccer had the highest competition injury rate (17.2 per 1,000). More injuries were estimated to have occurred from practice than from competition for all sports, with the exception of men's ice hockey and baseball. 
However, injuries incurred during competition were somewhat more severe (e.g., requiring ≥7 days to return to full participation) than those acquired during practice. Multiple strategies are employed by NCAA and others to reduce the number of injuries in organized sports. These strategies include committees that recommend rule and policy changes based on surveillance data and education and awareness campaigns that target both athletes and coaches. Continued analysis of surveillance data will help to understand whether these strategies result in changes in the incidence and severity of college sports injuries. |
Enhanced Diagnosis of Pneumococcal Bacteremia Using Antigen- and Molecular-Based Tools on Blood Specimens in Mali and Thailand: A Prospective Surveillance Study.
Moisi JC , Moore M , Carvalho MD , Sow SO , Siludjai D , Knoll MD , Tapia M , Baggett HC . Am J Trop Med Hyg 2015 94 (2) 267-275 Prior antibiotic use, contamination, limited blood volume, and processing delays reduce yield of blood cultures for detection of Streptococcus pneumoniae. We performed immunochromatographic testing (ICT) on broth from incubated blood culture bottles and real-time lytA polymerase chain reaction (PCR) on broth and whole blood and compared findings to blood culture in patients with suspected bacteremia. We selected 383 patients in Mali and 586 patients in Thailand based on their blood culture results: 75 and 31 were positive for pneumococcus, 100 and 162 were positive for other pathogens, and 208 and 403 were blood culture negative, respectively. ICT and PCR of blood culture broth were at least 87% sensitive and 97% specific compared with blood culture; whole blood PCR was 75-88% sensitive and 96-100% specific. Pneumococcal yields in children < 5 years of age increased from 2.9% to 10.7% in Mali with > 99% of additional cases detected by whole blood PCR, and from 0.07% to 5.1% in Thailand with two-thirds of additional cases identified by ICT. Compared with blood culture, ICT and lytA PCR on cultured broth were highly sensitive and specific but their ability to improve pneumococcal identification varied by site. Further studies of these tools are needed before widespread implementation. |
Precision Diagnosis Is a Team Sport.
Zehnbauer BA , Buchman TG . Crit Care Med 2016 44 (1) 229-30 On September 22, 2015, the National Academies of Sciences, Engineering, and Medicine (formerly the Institute of Medicine [IOM]) presented the third report (1) in the IOM Health Care Quality Initiative. Following on the two prior transformational reports (2, 3), the new publication focuses on diagnostic error. The report defines diagnostic error as the consequence of one of two failures: the failure to establish an accurate and timely explanation of the patient's health problem(s) and the failure to communicate that explanation to the patient. The former speaks to what clinicians and laboratorians do as professionals. The latter speaks to patient (and, we would add, family) engagement. Both matter. |
Hypercholesterolemia with consumption of PFOA-laced Western diets is dependent on strain and sex of mice
Rebholz SL , Jones T , Herrick RL , Xie C , Calafat AM , Pinney SM , Woollett LA . Toxicol Rep 2016 3 46-54 Perfluorooctanoic acid (PFOA) is a man-made surfactant with a number of industrial applications. It has a long half-life environmentally and biologically. Past studies suggest a direct relationship between plasma cholesterol and PFOA serum concentrations in humans and an inverse one in rodents fed standard rodent chow, making it difficult to examine mechanisms responsible for the potential PFOA-induced hypercholesterolemia and altered sterol metabolism. To examine dietary modification of PFOA-induced effects, C57BL/6 and BALB/c mice were fed PFOA in a fat- and cholesterol-containing diet. When fed these high fat diets, PFOA ingestion resulted in marked hypercholesterolemia in male and female C57BL/6 mice and less robust hypercholesterolemia in male BALB/c mice. The PFOA-induced hypercholesterolemia appeared to be the result of increased liver masses and altered expression of genes associated with hepatic sterol output, specifically bile acid production. mRNA levels of genes associated with sterol input were reduced only in C57BL/6 females, the mice with the greatest increase in plasma cholesterol levels. Strain-specific PFOA-induced changes in cholesterol concentrations in mammary tissues and ovaries paralleled changes in plasma cholesterol levels. mRNA levels of sterol-related genes were reduced in ovaries of C57BL/6 but not in BALB/c mice and not in mammary tissues. Our data suggest that PFOA ingestion leads to hypercholesterolemia in mice fed fat and cholesterol and effects are dependent upon the genetic background and gender of the mice with C57BL/6 female mice being most responsive to PFOA. |
Use of laboratory test results in patient management by clinicians in Malawi
Moyo K , Porter C , Chilima B , Mwenda R , Kabue M , Zungu L , Sarr A . Afr J Lab Med 2015 4 (1) 277 Background: Malawi has a high burden of infectious disease. The expansion of programmes targeting these diseases requires a strong laboratory infrastructure to support both diagnosis and treatment. Objectives: To assess the use of laboratory test results in patient management and to determine the requirements for improving laboratory services. Methods: A cross-sectional study was conducted in 2012 to survey practising clinicians. Two hospitals were purposively selected for observations of clinicians ordering laboratory tests. Twelve management-level key informants were interviewed. Descriptive statistics were conducted. Results: A total of 242 clinicians were identified and 216 (89%) were interviewed. Of these, 189 (87%) reported doubting laboratory test results at some point. Clinicians most often doubted the quality of haematology (67%), followed by malaria (53%) and CD4 (22%) test results. A total of 151 (70%) clinicians reported using laboratory tests results in patient management. Use of laboratory test results at all times in patient management varied by the type of health facility (P < 0.001). Ninety-one percent of clinicians reported that laboratories required infrastructure improvement. During 97 observations of clinicians' use of laboratory test results, 80 tests were ordered, and 73 (91%) of these were used in patient management. Key informants reported that the quality of laboratory services was good and useful, but that services were often unavailable. Conclusion: Gaps in the public laboratory system were evident. Key recommendations to enhance the use of laboratory test results in patient management were to strengthen the supply chain, reduce turn-around times, improve the test menu and improve the laboratory infrastructure. |
Laboratory validation of the sand fly fever virus antigen assay
Reeves WK , Szymczak MS , Burkhalter KL , Miller MM . J Am Mosq Control Assoc 2015 31 (4) 380-3 Sandfly fever group viruses in the genus Phlebovirus (family Bunyaviridae) are widely distributed across the globe and are a cause of disease in military troops and indigenous peoples. We assessed the laboratory sensitivity and specificity of the Sand Fly Fever Virus Antigen Assay, a rapid dipstick assay designed to detect sandfly fever Naples virus (SFNV) and Toscana virus (TOSV) against a panel of phleboviruses. The assay detected SFNV and TOSV, as well as other phleboviruses including Aguacate, Anahanga, Arumowot, Chagres, and Punta Toro viruses. It did not detect sandfly fever Sicilian, Heartland, Rio Grande, or Rift Valley fever viruses. It did not produce false positive results in the presence of uninfected sand flies (Lutzomyia longipalpis) or Cache Valley virus, a distantly related bunyavirus. Results from this laboratory evaluation suggest that this assay may be used as a rapid field-deployable assay to detect sand flies infected with TOSV and SFNV, as well as an assortment of other phleboviruses. |
Pharmacokinetic and pharmacodynamic evaluation following vaginal application of IQB3002, a dual chamber microbicide gel containing the NNRTI IQP-0528 in rhesus macaques
Pereira LE , Mesquita PM , Ham A , Singletary T , Deyounks F , Martin A , McNicholl J , Buckheit KW , Buckheit RW Jr , Smith JM . Antimicrob Agents Chemother 2015 60 (3) 1393-400 We evaluated in vivo pharmacokinetics and used a complementary ex vivo co-culture assay to determine pharmacodynamics of IQB3002 gel containing 1% IQP-0528, a non-nucleoside reverse transcriptase inhibitor, in rhesus macaques (RM). Gel (1.5 ml) was applied vaginally to 6 SHIV+ female RM. Blood, vaginal and rectal fluids were collected at 0, 1, 2, and 4 hours. RM were euthanized at 4 hours, and vaginal, cervical, rectal, and regional lymph node tissues were harvested. Anti-HIV activity was evaluated ex vivo by co-culturing fresh or frozen vaginal tissue with activated human peripheral blood mononuclear cells (PBMCs), and measuring p24 levels for 10 days after HIV-1 BaL challenge. Median levels of IQP-0528, determined using LC-MS methods, were between 10^4-10^5 ng/g in vaginal and cervical tissue, 10^3-10^4 ng/g in rectal tissues, and 10^5-10^7 ng/ml in vaginal fluids over the 4-hour period. Vaginal tissues protected co-cultured PBMCs from HIV-1 infection ex vivo, with a viral inhibition range of 81-100% in fresh and frozen tissues that were proximal, medial, and distal relative to the cervix. No viral inhibition was detected in untreated baseline tissues. Collectively, the observed median drug levels were 5-7 logs higher than the in vitro EC50 range (0.21 ng/ml - 1.29 ng/ml), suggesting that 1.5 ml of gel delivers IQP-0528 throughout the RM vaginal compartment at levels that are highly inhibitory to HIV-1. Importantly, antiviral activity was observed in both fresh and frozen vaginal tissues, broadening the scope of the ex vivo co-culture model for future NNRTI efficacy studies. |
Establishment of an algorithm using prM/E- and NS1-specific IgM antibody-capture ELISAs in diagnosis of Japanese Encephalitis virus and West Nile virus infections in humans
Galula JU , Chang GJ , Chuang ST , Chao DY . J Clin Microbiol 2015 54 (2) 412-22 The front-line assay for presumptive serodiagnosis of acute Japanese encephalitis virus (JEV) and West Nile virus (WNV) infections is the premembrane/envelope (prM/E)-specific IgM antibody-capture ELISA (MAC-ELISA). Due to antibody cross-reactivity, MAC-ELISA positive samples may be confirmed with a time-consuming plaque reduction neutralization test (PRNT). In the present study, we applied a previously developed anti-nonstructural protein 1 (NS1)-specific MAC-ELISA (NS1-MAC-ELISA) to archived acute-phase serum specimens from patients with confirmed JEV and WNV infections and compared the results with the prM/E-containing virus-like particle-specific MAC-ELISA (VLP-MAC-ELISA). Paired-ROC curve analyses revealed no statistical differences in the overall assay performances of the VLP- and NS1-MAC-ELISAs. The two methods had high sensitivities of 100% but slightly lower specificities that ranged between 80% and 100%. When the NS1-MAC-ELISA was used to confirm positive results in the VLP-MAC-ELISA, the specificity of serodiagnosis, especially for JEV infection, was increased to 90% when applied in areas where JEV co-circulates with WNV, or to 100% when applied in JEV-endemic areas. Results also showed that using multiple antigens could resolve cross-reactivity in the assays. Significantly higher P/N values were consistently obtained with the homologous antigens than with the heterologous antigens. JEV or WNV was reliably identified as the currently infecting flavivirus by a higher ratio of JEV-to-WNV P/N values or vice versa. In summary, a diagnostic algorithm combining multiantigen VLP- and NS1-MAC-ELISAs was developed and can be practically applied to obtain more specific and reliable results in the serodiagnosis of JEV and WNV infections without the need for PRNT. 
The developed algorithm should provide great utility in diagnostic and surveillance activities wherein test accuracy is of utmost importance for an effective disease intervention. |
Evaluating the use of commercial West Nile Virus antigens as positive controls in the Rapid Analyte Measurement Platform West Nile virus assay
Burkhalter KL , Savage HM . J Am Mosq Control Assoc 2015 31 (4) 371-4 We evaluated the utility of 2 types of commercially available antigens as positive controls in the Rapid Analyte Measurement Platform (RAMP®) West Nile virus (WNV) assay. Purified recombinant WNV envelope antigens and whole killed virus antigens produced positive RAMP results and either type would be useful as a positive control. Killed virus antigens provide operational and economic advantages and we recommend their use over purified recombinant antigens. We also offer practical applications for RAMP positive controls and recommendations for preparing them. |
Evaluation of an immunochromatographic assay for rapid detection of penicillin-binding protein 2a in human and animal Staphylococcus intermedius group, Staphylococcus lugdunensis, and Staphylococcus schleiferi clinical isolates
Arnold AR , Burham CD , Ford BA , Lawhon SD , McAllister SK , Lonsway D , Albrecht V , Jerris RC , Rasheed JK , Limbago B , Burd EM , Westblade LF . J Clin Microbiol 2015 54 (3) 745-8 The performance of a rapid penicillin-binding protein 2a (PBP2a) detection assay, the Alere PBP2a Culture Colony Test, was evaluated for identification of PBP2a-mediated beta-lactam resistance in human and animal clinical isolates of Staphylococcus intermedius group, Staphylococcus lugdunensis, and Staphylococcus schleiferi. The assay was sensitive and specific, with all PBP2a-negative and -positive strains testing negative and positive, respectively. |
Air sampling filtration media: Collection efficiency for respirable size-selective sampling
Soo J-C , Monaghan K , Lee T , Kashon M , Harper M . Aerosol Sci Technol 2015 50 (1) 76-87 The collection efficiencies of commonly used membrane air sampling filters in the ultrafine particle size range were investigated. Mixed cellulose ester (MCE; 0.45, 0.8, 1.2 and 5 μm pore sizes), polycarbonate (0.4, 0.8, 2 and 5 μm pore sizes), polytetrafluoroethylene (PTFE; 0.45, 1, 2 and 5 μm pore sizes), polyvinyl chloride (PVC; 0.8 and 5 μm pore sizes) and silver membrane (0.45, 0.8, 1.2 and 5 μm pore sizes) filters were exposed to polydisperse sodium chloride (NaCl) particles in the size range of 10-400 nm. Test aerosols were nebulized and introduced into a calm air chamber through a diffusion dryer and aerosol neutralizer. The test filters (37 mm diameter) were mounted in a conductive polypropylene filter holder (cassette) within a metal testing tube. The experiments were conducted at flow rates between 1.7 and 11.2 L/min. The particle size distributions of the NaCl challenge aerosol were measured upstream and downstream of the test filters by a Scanning Mobility Particle Sizer (SMPS). Three different filters of each type with at least three repetitions for each pore size were tested. In general, the collection efficiency varied with airflow, pore size, and sampling duration. In addition, both collection efficiency and pressure drop increased with decreased pore size and increased sampling flow rate, but they differed among filter types and manufacturers. The present study confirmed that the MCE, PTFE and PVC filters have a relatively high collection efficiency for challenge particles much smaller than their nominal pore size and are considerably more efficient than polycarbonate and silver membrane filters, especially at larger nominal pore sizes. |
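Collection efficiency in studies of this kind is typically derived from paired upstream/downstream concentration measurements in each particle-size bin. A minimal sketch of that ratio, with invented SMPS concentrations rather than the study's data:

```python
# Minimal sketch of filter collection efficiency from paired SMPS
# measurements; the concentration values below are made up.
def collection_efficiency(upstream, downstream):
    """Fractional collection efficiency per particle-size bin:
    E = 1 - (downstream concentration / upstream concentration)."""
    return [1.0 - d / u for u, d in zip(upstream, downstream)]

up = [1200.0, 950.0, 700.0]    # particles/cm^3 upstream, by size bin
down = [60.0, 95.0, 140.0]     # particles/cm^3 downstream
eff = collection_efficiency(up, down)
print([round(e, 2) for e in eff])  # [0.95, 0.9, 0.8]
```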
Screening and treatment of maternal genitourinary tract infections in early pregnancy to prevent preterm birth in rural Sylhet, Bangladesh: a cluster randomized trial
Lee AC , Quaiyum MA , Mullany LC , Mitra DK , Labrique A , Ahmed P , Uddin J , Rafiqullah I , DasGupta S , Mahmud A , Koumans EH , Christian P , Saha S , Baqui AH . BMC Pregnancy Childbirth 2015 15 326 BACKGROUND: Approximately half of preterm births are attributable to maternal infections, which are commonly undetected and untreated in low-income settings. Our primary aim is to determine the impact of early pregnancy screening and treatment of maternal genitourinary tract infections on the incidence of preterm live birth in Sylhet, Bangladesh. We will also assess the effect on other adverse pregnancy outcomes, including preterm birth (stillbirth and live birth), late miscarriage, maternal morbidity, and early onset neonatal sepsis. METHODS/DESIGN: We are conducting a cluster randomized controlled trial that will enroll 10,000 pregnant women in Sylhet district in rural northeastern Bangladesh. Twenty-four clusters, each with ~4000 population (120 pregnant women/year) and served by a community health worker (CHW), are randomized to: 1) the control arm, which provides routine antenatal and postnatal home-based care, or 2) the intervention arm, which includes routine antenatal and postnatal home-based care plus screening and treatment of pregnant women between 13 and 19 weeks of gestation for abnormal vaginal flora (AVF) and urinary tract infection (UTI). CHWs conduct monthly pregnancy surveillance, make 2 antenatal and 4 postnatal home visits for all enrolled pregnant women and newborns, and refer mothers or newborns with symptoms of serious illness to the government sub-district hospital. In the intervention clusters, CHWs perform home-based screening of AVF and UTI. Self-collected vaginal swabs are plated on slides, which are Gram stained and Nugent scored. Women with AVF (Nugent score ≥4) are treated with oral clindamycin, rescreened and retreated, if needed, after 3 weeks. Urine culture is performed and UTI treated with nitrofurantoin. 
Repeat urine culture is performed after 1 week as a test of cure. Gestational age is determined by maternal report of last menstrual period at study enrollment using prospectively completed study calendars, and in a subset by early (<20 weeks) ultrasound. CHWs prospectively collect data on all pregnancy outcomes, maternal and neonatal morbidity and mortality. IMPLICATIONS/DISCUSSION: Findings will enhance our understanding of the burden of AVF and UTI in rural Bangladesh, the impact of a maternal screening-treatment program for genitourinary tract infections on perinatal health, and help formulate public health recommendations for infection screening in pregnancy in low-resource settings. TRIAL REGISTRATION: The study was registered on ClinicalTrials.gov: NCT01572532 on December 15, 2011. The study was funded by NICHD: R01HD066156. |
Sepsis and the global burden of disease in children
Kissoon N , Uyeki TM . JAMA Pediatr 2015 170 (2) 1-2 In 2010, an estimated 25% of disability-adjusted life-years—a metric that incorporates premature death by years of life lost and years lived with disability—and 13% of all deaths worldwide were in children younger than 5 years.1,2 While reductions in mortality in children younger than 5 years have occurred in many countries since 1990, mortality increased in young children in some parts of sub-Saharan Africa, with severe infections leading to sepsis being a major contributor.1 For instance, in the neonatal period, diarrhea, lower respiratory tract infections, and meningitis were important contributors to mortality in 2010, while in the postneonatal period, nearly 1 million estimated deaths (half of all deaths) were due to lower respiratory tract infections (respiratory syncytial virus, Haemophilus influenzae type B, Streptococcus pneumoniae), diarrheal diseases (rotavirus, Cryptosporidium), and malaria.2 Other infectious causes of death in children younger than 5 years were measles, pertussis, and human immunodeficiency virus/AIDS. We suggest that sepsis-related pediatric deaths are substantially underestimated and that efforts are needed to better assess the impact of sepsis on childhood mortality worldwide. | The Surviving Sepsis Campaign defines sepsis as the presence of infection together with systemic manifestations of infection; sepsis-induced organ dysfunction or tissue hypoperfusion is referred to as severe sepsis.3 The common pathway to death for most children with systemic signs and symptoms associated with bacterial or viral infections is sepsis, whether death occurs at home or in a health care facility. 
These infections can result in a systemic inflammatory response and progress to severe sepsis and septic shock, often leading to multiorgan failure and death; survivors may have significant disabilities.4,5 However, for the Global Burden of Disease Study 2010 estimates, sepsis as a cause of death is considered only for neonatal deaths.2,6 Although this is likely due to a lack of data, such estimates underestimate the contribution of sepsis to mortality in the postneonatal period and in children aged 1 to 4 years. It is important to classify deaths according to specific causes as these data are needed for crafting and deploying preventive measures such as bed nets for malaria-endemic areas and vaccines such as for measles and pertussis. However, when faced with a child with severe multiorgan illness and systemic signs and symptoms of an infection, it is important to emphasize that the unifying feature of nearly all of these deaths is that they are due to sepsis. |
Survival of children with trisomy 13 and trisomy 18: A multi-state population-based study
Meyer RE , Liu G , Gilboa SM , Ethen MK , Aylsworth AS , Powell CM , Flood TJ , Mai CT , Wang Y , Canfield MA . Am J Med Genet A 2015 170A (4) 825-37 Trisomy 13 (T13) and trisomy 18 (T18) are among the most prevalent autosomal trisomies. Both are associated with a very high risk of mortality. Numerous instances, however, of long-term survival of children with T13 or T18 have prompted some clinicians to pursue aggressive treatment instead of the traditional approach of palliative care. The purpose of this study is to assess current mortality data for these conditions. This multi-state, population-based study examined data obtained from birth defect surveillance programs in nine states on live-born infants delivered during 1999-2007 with T13 or T18. Information on children's vital status and selected maternal and infant risk factors were obtained using matched birth and death certificates and other data sources. The Kaplan-Meier method and Cox proportional hazards models were used to estimate age-specific survival probabilities and predictors of survival up to age five. There were 693 children with T13 and 1,113 children with T18 identified from the participating states. Among children with T13, 5-year survival was 9.7%; among children with T18, it was 12.3%. For both trisomies, gestational age was the strongest predictor of mortality. Females and children of non-Hispanic black mothers had the lowest mortality. Omphalocele and congenital heart defects were associated with an increased risk of death for children with T18 but not T13. This study found survival among children with T13 and T18 to be somewhat higher than those previously reported in the literature, consistent with recent studies reporting improved survival following more aggressive medical intervention for these children. |
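The Kaplan-Meier method named above builds the survival curve as a running product of conditional survival probabilities at each observed death time. A compact illustrative estimator (event times and censoring flags invented, not the study's data):

```python
# A minimal Kaplan-Meier estimator, shown only to illustrate the method
# named in the abstract; the observations below are invented.
def kaplan_meier(times, events):
    """Survival probability after each distinct event time.

    times:  observation time for each subject
    events: 1 if death observed at that time, 0 if censored
    """
    at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        if deaths:
            surv *= 1.0 - deaths / at_risk   # conditional survival at t
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # remove deaths + censored
    return curve

curve = kaplan_meier([1, 2, 2, 3, 5, 5], [1, 1, 0, 1, 0, 0])
print(curve)  # survival drops at times 1, 2, and 3
```

In practice a library such as lifelines would be used, and the Cox models mentioned in the abstract extend this to covariate-adjusted hazards.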
Prevalence and characteristics of autism spectrum disorder among 4-year-old children in the Autism and Developmental Disabilities Monitoring Network
Christensen DL , Bilder DA , Zahorodny W , Pettygrove S , Durkin MS , Fitzgerald RT , Rice C , Kurzius-Spencer M , Baio J , Yeargin-Allsopp M . J Dev Behav Pediatr 2015 37 (1) 1-8 OBJECTIVE: Early identification of children with autism spectrum disorder (ASD) facilitates timely access to intervention services. Yet, few population-based data exist on ASD identification among preschool-aged children. The authors aimed to describe ASD prevalence and characteristics among 4-year-old children in 5 of 11 sites participating in the 2010 Autism and Developmental Disabilities Monitoring Network. METHOD: Children with ASD were identified through screening of health and education records for ASD indicators, data abstraction and compilation for each child, and clinician review of records. ASD prevalence estimates, ages at first evaluation and ASD diagnosis, cognitive test scores, and demographics were compared for 4-year-old children and 8-year-old children living in the same areas. RESULTS: Among 58,467 children in these 5 sites, 4-year-old ASD prevalence was 13.4 per 1000, which was 30% lower than 8-year-old ASD prevalence. Prevalence of ASD without cognitive impairment was 40% lower among 4-year-olds compared with 8-year-olds, but prevalence of ASD with cognitive impairment was 20% higher among 4-year-olds compared with 8-year-olds. Among 4-year-olds with ASD, female and non-Hispanic white children were more likely to receive their first comprehensive evaluation by age 36 months compared with male and non-Hispanic black children, respectively. Among children diagnosed with ASD by age 48 months, median age at first comprehensive evaluation was 27 months for 4-year-olds compared with 32 months for 8-year-olds. 
CONCLUSION: Population-based ASD surveillance among 4-year-old children provides valuable information about the early identification of children with ASD and suggests progression toward lowering the age of first ASD evaluation within participating Autism and Developmental Disabilities Monitoring communities. |
Prevalence of adverse pregnancy outcomes, by maternal diabetes status at first and second deliveries, Massachusetts, 1998-2007
Kim SY , Kotelchuck M , Wilson HG , Diop H , Shapiro-Mendoza CK , England LJ . Prev Chronic Dis 2015 12 E218 INTRODUCTION: Understanding patterns of diabetes prevalence and diabetes-related complications across pregnancies could inform chronic disease prevention efforts. We examined adverse birth outcomes by diabetes status among women with sequential, live singleton deliveries. METHODS: We used data from the 1998-2007 Massachusetts Pregnancy to Early Life Longitudinal Data System, a population-based cohort of deliveries. We restricted the sample to sets of parity 1 and 2 deliveries. We created 8 diabetes categories using gestational diabetes mellitus (GDM) and chronic diabetes mellitus (CDM) status for the 2 deliveries. Adverse outcomes included large for gestational age (LGA), macrosomia, preterm birth, and cesarean delivery. We computed prevalence estimates for each outcome by diabetes status. RESULTS: We identified 133,633 women with both parity 1 and 2 deliveries. Compared with women who had no diabetes in either pregnancy, women with GDM or CDM during any pregnancy had increased risk for adverse birth outcomes; the prevalence of adverse outcomes was higher in parity 1 deliveries among women with no diabetes in parity 1 and GDM in parity 2 (for LGA [8.5% vs 15.1%], macrosomia [9.7% vs 14.9%], cesarean delivery [24.7% vs 31.3%], and preterm birth [7.7% vs 12.9%]); and higher in parity 2 deliveries among those with GDM in parity 1 and no diabetes in parity 2 (for LGA [12.3% vs 18.2%], macrosomia [12.3% vs 17.2%], and cesarean delivery [27.0% vs 37.9%]). CONCLUSIONS: Women with GDM during one of 2 sequential pregnancies had elevated risk for adverse outcomes in the unaffected pregnancy, whether the diabetes-affected pregnancy preceded or followed it. |
Elevated body mass index and decreased diet quality among women and risk of birth defects in their offspring
Carmichael SL , Yang W , Gilboa S , Ailes E , Correa A , Botto LD , Feldkamp ML , Shaw GM . Birth Defects Res A Clin Mol Teratol 2015 106 (3) 164-71 BACKGROUND: We examined whether risks of 32 birth defects were higher than expected in the presence of overweight or obese body mass index (BMI) and low diet quality, based on estimating individual and joint effects of these factors and calculating relative excess risk due to interaction. METHODS: Analyses included mothers of 20,250 cases with birth defects and 8617 population-based controls without birth defects born from 1997 to 2009 and interviewed for the National Birth Defects Prevention Study. We used logistic regression to generate adjusted odds ratios (AORs) reflecting the combined effects of BMI and diet quality. We focused analyses on 16 birth defects (n = 11,868 cases, 8617 controls) for which initial results suggested an association with BMI or diet quality. RESULTS: Relative to the reference group (normal-weight women without low diet quality, i.e., above the lowest quartile), AORs for low diet quality among normal-weight women tended to be >1, and AORs for overweight and obese women tended to be stronger among women with low diet quality than among those without. For 9 of 16 birth defects, AORs for obese women with low diet quality, the group we hypothesized to have the highest risk, were higher than the other stratum-specific AORs. Most relative excess risks due to interaction were positive but small (<0.5), with confidence intervals that included zero. CONCLUSION: These findings provide evidence for the hypothesis of highest birth defect risks among offspring of women who are obese and have low diet quality but insufficient evidence for an interaction of these factors in their contribution to risk. |
Evaluation of case definitions for estimation of respiratory syncytial virus associated hospitalizations among children in a rural community of northern India
Saha S , Pandey BG , Choudekar A , Krishnan A , Gerber SI , Rai SK , Singh P , Chadha M , Lal RB , Broor S . J Glob Health 2015 5 (2) 010419 BACKGROUND: The burden estimation studies for respiratory syncytial virus (RSV) have been based on varied case definitions, including case definitions designed for influenza surveillance systems. We used all medical admissions among children aged 0-59 months to study the effect of case definitions on estimation of RSV-associated hospitalization rates. METHODS: The hospital-based daily surveillance enrolled children aged 0-59 months admitted with acute medical conditions from July 2009 to December 2012, from a well-defined rural population in Ballabgarh in northern India. All study participants were examined, and nasal and throat swabs were taken for testing by real-time polymerase chain reaction (RT-PCR) for RSV and influenza virus. Clinical data were used to retrospectively evaluate World Health Organization (WHO) case definitions (2011) commonly used for surveillance of respiratory pathogens, i.e., acute respiratory illness (WHO-ARI), severe ARI (SARI), and influenza-like illness (ILI), for determination of RSV-associated hospitalization. RSV-associated hospitalization rates adjusted for admissions at non-study hospitals were calculated. FINDINGS: Of 505 children enrolled, 82 (16.2%) tested positive for RSV. Annual incidence rates of RSV-associated hospitalization per 1000 children were highest among infants aged 0-5 months (15.2; 95% confidence interval (CI) 8.3-26.8), followed by ages 6-23 months (5.3, 95% CI 3.2-8.7), and lowest among children aged 24-59 months (0.5, 95% CI 0.1-1.5). Based on bivariate comparisons, RSV-positive children were more likely than RSV-negative children to have signs of respiratory distress such as wheeze, chest in-drawing, tachypnea, and crepitation.
Other, less commonly seen signs of respiratory distress (i.e., nasal flaring, grunting, and accessory muscle use) were also significantly associated with being RSV positive. Compared with the estimated RSV hospitalization rate based on all medical hospitalizations, the WHO-ARI case definition captured 86% of the total incidence, while case definitions requiring fever, such as ILI and SARI, underestimated the incidence by 50-80%. CONCLUSIONS: Our study suggests that RSV is a substantial cause of hospitalization among children aged <24 months, especially those aged <6 months. The WHO-ARI case definition appeared to be the most suitable screening definition for RSV surveillance because of its high sensitivity. |
Birth prevalence of cerebral palsy: A population-based study
Van Naarden Braun K , Doernberg N , Schieve L , Christensen D , Goodman A , Yeargin-Allsopp M . Pediatrics 2015 137 (1) 1-9 OBJECTIVE: Population-based data in the United States on trends in cerebral palsy (CP) birth prevalence are limited. The objective of this study was to examine trends in the birth prevalence of congenital spastic CP by birth weight, gestational age, and race/ethnicity in a heterogeneous US metropolitan area. METHODS: Children with CP were identified by a population-based surveillance system for developmental disabilities (DDs). Children with CP were included if they were born in metropolitan Atlanta, Georgia, from 1985 to 2002, resided there at age 8 years, and did not have a postneonatal etiology (n = 766). Birth weight, gestational age, and race/ethnicity subanalyses were restricted to children with spastic CP (n = 640). Trends were examined by CP subtype, gender, race/ethnicity, co-occurring DDs, birth weight, and gestational age. RESULTS: Birth prevalence of spastic CP per 1000 1-year survivors was stable from 1985 to 2002 (1.9 in 1985 to 1.8 in 2002; average annual prevalence change 0.3%; 95% confidence interval [CI] -1.1 to 1.8). Whereas no significant trends were observed by gender, subtype, birth weight, or gestational age overall, prevalence of CP with co-occurring moderate to severe intellectual disability significantly decreased (-2.6% [95% CI -4.3 to -0.8]). Racial disparities persisted over time between non-Hispanic black and non-Hispanic white children (prevalence ratio 1.8 [95% CI 1.5 to 2.1]). Different patterns emerged for non-Hispanic white and non-Hispanic black children by birth weight and gestational age. CONCLUSIONS: Given improvements in neonatal survival, evidence of stability of CP prevalence is encouraging. Yet the lack of an overall decrease supports continued monitoring of trends and increased research and prevention efforts. Racial/ethnic disparities, in particular, warrant further study. |
A cluster randomized controlled evaluation of the health impact of a novel antimicrobial hand towel on the health of children under 2 years old in rural communities in Nyanza Province, Kenya
Slayton RB , Murphy JL , Morris J , Faith SH , Oremo J , Odhiambo A , Ayers T , Feinman SJ , Brown AC , Quick RE . Am J Trop Med Hyg 2015 94 (2) 437-44 To assess the health impact of reusable, antimicrobial hand towels, we conducted a cluster randomized, yearlong field trial. At baseline, we surveyed mothers, and gave four towels plus hygiene education to intervention households and education alone to controls. At biweekly home visits, we asked about infections in children <2 years old and tested post-handwashing hand rinse samples of 20% of mothers for Escherichia coli. At the study's conclusion, we tested 50% of towels for E. coli. Baseline characteristics between 188 intervention and 181 control households were similar. Intervention and control children had similar rates of diarrhea (1.47 versus 1.48, P = 0.99), respiratory infections (1.38 versus 1.48, P = 0.92), skin infections (1.76 versus 1.79, P = 0.81), and subjective fever (2.62 versus 3.40, P = 0.04) per 100 person-visits. Post-handwashing hand contamination was similar; 67% of towels exhibited E. coli contamination. Antimicrobial hand towels became contaminated over time and did not improve hand hygiene or prevent diarrhea, respiratory infections, or skin infections. |
Variable lifting index (VLI): A new method for evaluating variable lifting tasks
Waters T , Occhipinti E , Colombini D , Alvarez-Casado E , Fox R . Hum Factors 2015 58 (5) 695-711 OBJECTIVE: We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). BACKGROUND: Many jobs contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form for analyzing variable lifting tasks. METHOD: In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves sampling the lifting tasks performed by a worker over a shift, calculating the Frequency Independent Lift Index (FILI) for each sampled lift, and aggregating the FILI values into six categories. The Composite Lift Index (CLI) equation is then used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed, systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, again using the CLI equation to calculate the VLI. RESULTS: The two procedures allow practitioners to systematically apply the VLI method to a variety of work situations in which highly variable lifting tasks are performed. CONCLUSIONS: The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. APPLICATION: The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. |
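The CLI aggregation step mentioned above can be sketched numerically. The arithmetic follows the NIOSH composite-index idea, in which each added frequency category contributes an increment through the frequency multiplier (FM), but the FILI values and cumulative multipliers below are hypothetical placeholders, not values from the RNLE tables:

```python
def composite_lift_index(fili, fm_cumulative):
    """Composite Lift Index over FILI categories sorted by decreasing stress.

    CLI = FILI_1 / FM_1 + sum_{j>=2} FILI_j * (1/FM_{1..j} - 1/FM_{1..j-1})

    fili:          frequency-independent lift index per category, descending
    fm_cumulative: fm_cumulative[j] is the frequency multiplier looked up at
                   the combined lifting frequency of categories 1..j+1
                   (decreases as frequency accumulates)
    """
    cli = 0.0
    prev_inv_fm = 0.0  # so the first term reduces to FILI_1 / FM_1
    for fili_j, fm_j in zip(fili, fm_cumulative):
        inv_fm = 1.0 / fm_j
        cli += fili_j * (inv_fm - prev_inv_fm)  # positive increment per category
        prev_inv_fm = inv_fm
    return cli

# hypothetical example: three FILI categories, illustrative multipliers
cli = composite_lift_index([1.2, 0.9, 0.6], [0.94, 0.88, 0.75])
```

Because every additional category adds a positive increment, the CLI (about 1.46 in this made-up example) always exceeds the largest single-category index.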
Occupational risk factors for endometriosis in a cohort of flight attendants
Johnson CY , Grajewski B , Lawson CC , Whelan EA , Bertke SJ , Tseng CY . Scand J Work Environ Health 2015 42 (1) 52-60 OBJECTIVES: This study aimed to (i) compare odds of endometriosis in a cohort of flight attendants against a comparison group of teachers and (ii) investigate occupational risk factors for endometriosis among flight attendants. METHODS: We included 1945 flight attendants and 236 teachers aged 18-45 years. Laparoscopically confirmed endometriosis was self-reported via telephone interview, and flight records were retrieved from airlines to obtain work schedules and assess exposures for flight attendants. We used proportional odds regression to estimate adjusted odds ratios (ORadj) and 95% confidence intervals (95% CI) for associations between exposures and endometriosis, adjusting for potential confounders. RESULTS: Flight attendants and teachers were equally likely to report endometriosis (ORadj 1.0, 95% CI 0.5-2.2). Among flight attendants, there were no clear trends between estimated cosmic radiation, circadian disruption, or ergonomic exposures and endometriosis. A greater number of flight segments (non-stop flights between two cities) per year was associated with endometriosis (ORadj 2.2, 95% CI 1.1-4.2 for highest versus lowest quartile, Ptrend = 0.02), but block hours (taxi plus flight time) per year were not (ORadj 1.2, 95% CI 0.6-2.2 for highest versus lowest quartile, Ptrend = 0.38). CONCLUSION: Flight attendants were no more likely than teachers to report endometriosis. Odds of endometriosis increased with the number of flight segments flown per year. This suggests that some aspect of work scheduling is associated with increased risk of endometriosis, or that endometriosis symptoms might affect how flight attendants schedule their flights. |
Progress on the US National Institute of Occupational Safety and Health hearing loss prevention strategic plan
Murphy WJ , Thompson JK . Noise News International 2015 23 (3) 99-108 In 2006, the National Institute for Occupational Safety and Health (NIOSH) entered the second decade of the National Occupational Research Agenda (NORA). NORA is a partnership program to stimulate innovative research and improved workplace practices. NORA has served as an organizing framework to plan and conduct critical occupational research and to promote expanded partnerships between the stakeholders such as universities, large and small businesses, professional societies, other government agencies (federal, state, and local), and worker organizations. Following a review by the National Academies Institute of Medicine of the NIOSH Hearing Loss Research program, a comprehensive strategic plan was developed for the Hearing Loss Prevention cross-sector. Six strategic goals were identified: 1) improved surveillance of occupational hearing loss data; 2) reduced noise emission levels from equipment focused on mining, construction, and manufacturing; 3) development of hearing protector technology; 4) development of best practices for hearing loss prevention programs; 5) identification of hearing loss risk factors; and 6) development of updated hearing damage risk criteria that consider exposures incurred during youth, adolescence, and adulthood. This presentation will review progress towards meeting these goals and propose a research agenda for the third decade of NORA research in hearing loss prevention. |
State laws governing HIV testing in correctional settings
Tarver BA , Sewell J , Oussayef N . J Correct Health Care 2016 22 (1) 28-40 At the end of 2010, 1.5% of inmates in state prisons were known to be HIV positive, a prevalence rate approximately 3 times that of the general population of the United States. Increased HIV testing in correctional settings has the potential to identify previously undiagnosed infections. This article offers a systematic review and analysis of state laws governing HIV testing in correctional settings, including HIV testing upon admission or prior to release, HIV testing for individuals charged with or convicted of specific crimes, and HIV testing of inmates in situations where contact between the inmate and law enforcement or corrections personnel may have led to an exposure. The implications of these laws for facilitating access to HIV testing within correctional settings are discussed. |
State law and standing orders for immunization services
Stewart AM , Lindley MC , Cox MA . Am J Prev Med 2015 50 (5) e133-e142 INTRODUCTION: This study determined whether state laws permit the implementation of standing orders programs (SOPs) for immunization practice. SOPs are an effective strategy to increase uptake of vaccines. Successful SOPs require a legal foundation authorizing delegation of immunization services performed by a wide range of providers, administered to broad patient populations, in several settings. Without legal permission to administer vaccines, non-physician health professionals (NPHPs) are unable to provide preventive services. METHODS: From 2012 through 2013, researchers analyzed the legal environment in 50 states and the District of Columbia to determine whether NPHPs are authorized to (1) assess patient immunization status; (2) prescribe vaccines; and (3) administer vaccines under their own practice license or delegated authority. Laws governing the following NPHPs were included: (1) medical assistants; (2) midwives; (3) nurses in advanced practice; (4) registered, practical, and vocational nurses; (5) physician assistants; and (6) pharmacists. Additionally, the review determined which vaccines may be administered, permissible patient populations, and allowable practice settings for each category of NPHP. RESULTS: The laws are highly variable, and no state authorizes all NPHPs to conduct all elements of immunization practice for all patients. The laws frequently indicate where NPHPs may or may not administer vaccines and outline permissible vaccines, eligible patients, and required level of supervision. CONCLUSIONS: The variation in the laws could potentially present a challenge to successful implementation of public health goals to improve immunization rates. Expanded authorization of SOPs in all states could increase health practitioners' ability to deliver recommended vaccines. |
A transdisciplinary approach to public health law: The emerging practice of legal epidemiology
Burris S , Ashe M , Levin D , Penn M , Larkin M . Annu Rev Public Health 2015 37 135-48 Public health law has roots in both law and science. For more than a century, lawyers have helped develop and implement health laws; over the past 50 years, scientific evaluation of the health effects of laws and legal practices has achieved high levels of rigor and influence. We describe an emerging model of public health law that unites these two traditions. This transdisciplinary model adds scientific practices to the lawyerly functions of normative and doctrinal research, counseling, and representation. These practices include policy surveillance and empirical public health law research on the efficacy of legal interventions and the impact of laws and legal practices on health and health system operation. A transdisciplinary model of public health law, melding its legal and scientific facets, can help break down enduring cultural, disciplinary, and resource barriers that have prevented the full recognition and optimal role of law in public health. |
On management matters: Why we must improve public health management through action, Comment on "Management matters: A leverage point for health systems strengthening in global health"
Willacy E , Bratton S . Int J Health Policy Manag 2015 5 (1) 63-5 Public health management is a pillar of public health practice. Only through effective management can research, theory, and scientific innovation be translated into successful public health action. With this in mind, the U.S. Centers for Disease Control and Prevention (CDC) has developed an innovative program called Improving Public Health Management for Action (IMPACT) which aims to address this critical need by building an effective cadre of public health managers to work alongside scientists to prepare for and respond to disease threats and to effectively implement public health programs. IMPACT is a 2-year, experiential learning program that provides fellows with the management tools and opportunities to apply their new knowledge in the field, all while continuing to serve the Ministry of Health (MoH). IMPACT will launch in 2016 in 2 countries with the intent of expanding to additional countries in future years resulting in a well-trained cadre of public health managers around the world. |
Trends in severe maternal morbidity after assisted reproductive technology in the United States, 2008-2012
Martin AS , Monsour M , Kissin DM , Jamieson DJ , Callaghan WM , Boulet SL . Obstet Gynecol 2016 127 (1) 59-66 OBJECTIVE: To examine trends in severe maternal morbidity from 2008 to 2012 in delivery and postpartum hospitalizations among pregnancies conceived with or without assisted reproductive technology (ART). METHODS: In this retrospective cohort study, deliveries were identified in the 2008-2012 Truven Health MarketScan Commercial Claims and Encounters Databases. Severe maternal morbidity was identified using International Classification of Diseases, 9th Revision, Clinical Modification diagnosis codes and Current Procedural Terminology codes. Rate of severe maternal morbidity was calculated for ART and non-ART pregnancies. We performed multivariable logistic regression, controlling for maternal characteristics, and calculated adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for severe morbidity. Additionally, a propensity score analysis was performed between ART and non-ART deliveries. RESULTS: Of 1,016,618 deliveries, 14,761 (1.5%) were identified as pregnancies conceived with ART. Blood transfusion was the most common severe morbidity indicator for ART and non-ART pregnancies. For every 10,000 singleton deliveries, there were 273 ART deliveries or postpartum hospitalizations with severe maternal morbidity compared with 126 for non-ART (P<.001). For ART singleton deliveries, the rate of severe morbidity decreased from 369 per 10,000 deliveries in 2008 to 219 per 10,000 deliveries in 2012 (P=.025). Odds of severe morbidity were increased for ART compared with non-ART singletons (adjusted OR 1.84, 95% CI 1.63-2.08). Among multiple gestations, there was no significant difference between ART and non-ART pregnancies (rate of severe morbidity for ART 604/10,000 and non-ART 539/10,000 deliveries, P=.089; adjusted OR 1.04, 95% CI 0.91-1.20). The propensity score analysis yielded consistent results. 
CONCLUSION: Singleton pregnancies conceived with ART are at increased risk for severe maternal morbidity; however, the rate has been decreasing since 2008. Multiple gestations have increased risk regardless of ART status. |
Urinary paraben concentrations and in vitro fertilization outcomes among women from a fertility clinic
Minguez-Alarcon L , Chiu YH , Messerlian C , Williams PL , Sabatini ME , Toth TL , Ford JB , Calafat AM , Hauser R . Fertil Steril 2015 105 (3) 714-721 OBJECTIVE: To explore the relationship between urinary paraben concentrations and IVF outcomes among women attending an academic fertility center. DESIGN: Prospective cohort study. SETTING: Fertility clinic in a hospital setting. PATIENT(S): A total of 245 women contributing 356 IVF cycles. INTERVENTION(S): None. Quantification of urinary concentrations of parabens by isotope-dilution tandem mass spectrometry, and assessment of clinical endpoints of IVF treatments abstracted from electronic medical records at the academic fertility center. MAIN OUTCOME MEASURE(S): Total and mature oocyte counts, proportion of high-quality embryos, fertilization rates, and rates of implantation, clinical pregnancy, and live births. RESULT(S): The geometric means of the urinary concentrations of methylparaben, propylparaben, and butylparaben in our study population were 133, 24, and 1.5 μg/L, respectively. In models adjusted for age, body mass index, race/ethnicity, smoking status, and primary infertility diagnosis, urinary methylparaben, propylparaben, and butylparaben concentrations were not associated with IVF outcomes, specifically total and mature oocyte counts, proportion of high-quality embryos, and fertilization rates. Moreover, no significant associations were found between urinary paraben concentrations and rates of implantation, clinical pregnancy, and live births. CONCLUSION(S): Urinary paraben concentrations were not associated with IVF outcomes among women undergoing infertility treatments. |
Power and sample size calculations for interval-censored survival analysis
Kim HY , Williamson JM , Lin HM . Stat Med 2015 35 (8) 1390-400 We propose a method for calculating power and sample size for studies involving interval-censored failure time data that only involves standard software required for fitting the appropriate parametric survival model. We use the framework of a longitudinal study where patients are assessed periodically for a response and the only resultant information available to the investigators is the failure window: the time between the last negative and first positive test results. The survival model is fit to an expanded data set using easily computed weights. We illustrate with a Weibull survival model and a two-group comparison. The investigator can specify a group difference in terms of a hazards ratio. Our simulation results demonstrate the merits of these proposed power calculations. We also explore how the number of assessments (visits), and thus the corresponding lengths of the failure intervals, affect study power. The proposed method can be easily extended to more complex study designs and a variety of survival and censoring distributions. |
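The authors' weighted expanded-data device is not reproduced here, but the design question it answers can be sketched with a brute-force alternative: simulate interval-censored Weibull data for two groups at a specified hazard ratio, fit the model by direct maximum likelihood, and count likelihood-ratio rejections. The visit schedule, shape, scale, and sample sizes below are all illustrative assumptions:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)

VISITS = np.arange(0.25, 2.01, 0.25)  # assumed periodic assessment schedule

def censor(t):
    """Map exact failure times to (last negative, first positive) windows."""
    idx = np.searchsorted(VISITS, t)
    left = np.where(idx == 0, 0.0, VISITS[idx - 1])
    right = np.where(idx == len(VISITS), np.inf,
                     VISITS[np.minimum(idx, len(VISITS) - 1)])
    return left, right

def nll(params, left, right, x):
    """Negative log-likelihood of a Weibull proportional-hazards model under
    interval censoring: each subject contributes S(left) - S(right)."""
    log_shape, log_scale, beta = params
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    def surv(t):
        return np.exp(-((t / scale) ** shape) * np.exp(beta * x))
    s_left = surv(left)  # S(0) = 1 handled naturally
    s_right = np.where(np.isinf(right), 0.0,
                       surv(np.where(np.isinf(right), 1.0, right)))
    return -np.sum(np.log(np.clip(s_left - s_right, 1e-300, None)))

def simulate_power(n_per_group=80, hazard_ratio=2.0, shape=1.5, scale=2.0,
                   n_sim=100, alpha=0.05):
    """Fraction of simulated trials in which the likelihood-ratio test
    detects the group effect (illustrative parameter values)."""
    x = np.repeat([0.0, 1.0], n_per_group)
    beta_true = np.log(hazard_ratio)
    rejections = 0
    for _ in range(n_sim):
        u = rng.uniform(size=x.size)
        t = scale * (-np.log(u) / np.exp(beta_true * x)) ** (1.0 / shape)
        left, right = censor(t)
        full = optimize.minimize(nll, [0.0, 0.0, 0.0],
                                 args=(left, right, x), method="Nelder-Mead")
        null = optimize.minimize(lambda p: nll([p[0], p[1], 0.0], left, right, x),
                                 [0.0, 0.0], method="Nelder-Mead")
        lrt = 2.0 * (null.fun - full.fun)
        if lrt > stats.chi2.ppf(1 - alpha, df=1):
            rejections += 1
    return rejections / n_sim
```

Re-running with a sparser visit schedule (wider failure windows) illustrates the power loss the authors explore as the number of assessments changes.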
Bayesian marked point process modeling for generating fully synthetic public use data with point-referenced geography
Quick H , Holan SH , Wikle CK , Reiter JP . Spat Stat 2015 14 439-451 Many data stewards collect confidential data that include fine geography. When sharing these data with others, data stewards strive to disseminate data that are informative for a wide range of spatial and non-spatial analyses while simultaneously protecting the confidentiality of data subjects' identities and attributes. Typically, data stewards meet this challenge by coarsening the resolution of the released geography and, as needed, perturbing the confidential attributes. When done with high intensity, these redaction strategies can result in released data with poor analytic quality. We propose an alternative dissemination approach based on fully synthetic data. We generate data using marked point process models that can maintain both the statistical properties and the spatial dependence structure of the confidential data. We illustrate the approach using data consisting of mortality records from Durham, North Carolina. |
Doubly robust multiple imputation using kernel-based techniques
Hsu CH , He Y , Li Y , Long Q , Friese R . Biom J 2015 58 (3) 588-606 We consider the problem of estimating the marginal mean of an incompletely observed variable and develop a multiple imputation approach. Using fully observed predictors, we first establish two working models: one predicts the missing outcome variable, and the other predicts the probability of missingness. The predictive scores from the two models are used to measure the similarity between the incomplete and observed cases. Based on the predictive scores, we construct a set of kernel weights for the observed cases, with higher weights indicating more similarity. Missing data are imputed by sampling from the observed cases with probability proportional to their kernel weights. The proposed approach can produce reasonable estimates for the marginal mean and has a double robustness property, provided that one of the two working models is correctly specified. It also shows some robustness against misspecification of both models. We demonstrate these patterns in a simulation study. In a real-data example, we analyze the total helicopter response time from injury in the Arizona emergency medical service data. |
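The two-model, kernel-weighted donor-sampling scheme the abstract outlines can be sketched in a few dozen lines of numpy. This is a simplified illustration under stated assumptions (a linear outcome working model, a logistic missingness model, a Gaussian kernel with a fixed bandwidth on standardized predictive scores, and a simple average over imputations rather than Rubin's combining rules), not the authors' implementation:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(1)

def fit_logistic(X, r):
    """Maximum-likelihood logistic regression of r (0/1) on X via BFGS."""
    def nll(b):
        z = X @ b
        return np.sum(np.logaddexp(0.0, z)) - z @ r
    return optimize.minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

def kernel_mi_mean(X, y, observed, m=20, bandwidth=0.5):
    """Multiple imputation of a marginal mean via kernel-weighted donors.

    Two working models give each case a pair of predictive scores: a linear
    prediction of y (fit on complete cases) and a logistic prediction of
    being observed. Incomplete cases then draw donor values from observed
    cases with Gaussian-kernel weights on the score distance."""
    Xd = np.column_stack([np.ones(len(X)), X])
    # working model 1: outcome regression on complete cases
    beta, *_ = np.linalg.lstsq(Xd[observed], y[observed], rcond=None)
    score_y = Xd @ beta
    # working model 2: missingness (response propensity) model
    gamma = fit_logistic(Xd, observed.astype(float))
    score_r = Xd @ gamma
    scores = np.column_stack([score_y, score_r])
    scores = (scores - scores.mean(0)) / scores.std(0)  # comparable scales
    obs_idx = np.flatnonzero(observed)
    mis_idx = np.flatnonzero(~observed)
    means = []
    for _ in range(m):
        y_imp = y.copy()
        for i in mis_idx:
            d2 = np.sum((scores[obs_idx] - scores[i]) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # illustrative bandwidth
            w /= w.sum()
            y_imp[i] = y[rng.choice(obs_idx, p=w)]  # sample a donor
        means.append(y_imp.mean())
    return float(np.mean(means))
```

When the outcome model is right, donors share similar predicted outcomes, so the imputed mean is approximately unbiased even if the propensity model is off, and vice versa; that is the double robustness the abstract describes.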
Waterpipe tobacco smoking in Turkey: Policy implications and trends from the Global Adult Tobacco Survey (GATS)
Erdol C , Erguder T , Morton J , Palipudi K , Gupta P , Asma S . Int J Environ Res Public Health 2015 12 (12) 15559-66 Waterpipe tobacco smoking (WTS) is an emerging tobacco product globally, especially among adolescents and young adults who may perceive WTS as a safe alternative to smoking cigarettes. Monitoring the use of WTS in Turkey in relation to the tobacco control policy context is important to ensure that WTS does not become a major public health issue in Turkey. The Global Adult Tobacco Survey (GATS) was conducted in Turkey in 2008 and was repeated in 2012. GATS provided prevalence estimates on current WTS and change over time. Other indicators of WTS were also obtained, such as age of initiation and location of use. Among persons aged 15 and older in Turkey, the current prevalence of WTS decreased from 2.3% in 2008 to 0.8% in 2012, representing a 65% relative decline. Among males, WTS decreased from 4.0% to 1.1% (72% relative decline). While the overall smoking prevalence decreased among females, there was no change in the rate of WTS (0.7% in 2008 vs. 0.5% in 2012), though the WTS prevalence rate was already low in 2008. Comprehensive tobacco control efforts have been successful in reducing the overall smoking prevalence in Turkey, which includes the reduction of cigarette smoking and WTS. However, it is important to continue monitoring the use of waterpipes in Turkey and targeting tobacco control efforts to certain groups that may be vulnerable to future WTS marketing (e.g., youth, women). |
A multifaceted strategy to implement brief smoking cessation counselling during antenatal care in Argentina and Uruguay: a cluster randomized trial
Althabe F , Aleman A , Berrueta M , Morello P , Gibbons L , Colomar M , Tong VT , Dietz PM , Farr SL , Ciganda A , Mazzoni A , Llambi L , Becu A , Smith RA , Johnson C , Belizan JM , Buekens PM . Nicotine Tob Res 2015 18 (5) 1083-1092 INTRODUCTION: We evaluated a multifaceted intervention to increase the frequency of pregnant women who received brief smoking cessation counselling based on the 5As. METHODS: We randomly assigned (1:1) 20 antenatal care clusters in Buenos Aires, Argentina, and Montevideo, Uruguay, to receive a multifaceted intervention to implement brief smoking cessation counselling in routine antenatal care, or no intervention. Outcomes included receipt of the 5As, smoking until the end of pregnancy, and providers' attitudes toward and readiness for providing counselling. Women's outcomes were surveyed during the postpartum hospital stay, at baseline and at the end of the 14- to 18-month intervention. Cessation was verified with saliva cotinine. The trial took place between October 3, 2011, and November 29, 2013. RESULTS: The rate of women who recalled receiving the 5As increased from 14.0% to 33.6% in the intervention group (median rate change, 22.1%) and from 10.8% to 17.0% in the control group (median rate change, 4.6%) (P=0.001 for the difference in change between groups). The proportion of women who continued smoking during pregnancy was unchanged at follow-up in both groups; the relative difference between groups was not significant (ratio of odds ratios 1.16; 95% CI, 0.98-1.37; P=0.086). No effect was observed on providers' attitudes and readiness. CONCLUSION: The intervention showed a moderate effect in increasing the proportion of women who recalled receiving the 5As, with a third receiving counselling in more than one visit. The frequency of women who smoked until the end of pregnancy was not significantly reduced by the intervention. 
IMPLICATION: No implementation trials of smoking cessation interventions for pregnant women have been carried out in Latin America or in other middle-income countries, where health care systems or capacities may differ. We evaluated a multifaceted strategy designed to increase the frequency of pregnant women who receive brief smoking cessation counselling based on the 5As in Argentina and Uruguay. We found that the intervention showed a moderate effect in increasing the proportion of women receiving the 5As, with a third of women receiving counselling in more than one visit. However, the frequency of women who smoked until the end of pregnancy was not significantly reduced by the intervention. |
Nonmedical use of prescription drugs and sexual risk behaviors
Clayton HB , Lowry R , August E , Everett Jones S . Pediatrics 2015 137 (1) BACKGROUND: Substance use is associated with sexual risk behaviors among youth, but little is known about whether nonmedical prescription drug use, an increasingly common behavior, is associated with sexual risk behaviors. METHODS: Data from the 2011 and 2013 national Youth Risk Behavior Surveys, cross-sectional surveys conducted among nationally representative samples of students in grades 9 to 12, were combined (n = 29 008) to examine the association between ever taking prescription drugs without a doctor's prescription and sexual risk behaviors (ever having sexual intercourse, current sexual activity, lifetime number of sexual partners, condom use, and alcohol or drug use before last sexual intercourse). Using logistic regression models (adjusted for sex, race/ethnicity, grade, ever injection drug use, and use of alcohol, marijuana, heroin, cocaine, methamphetamines, ecstasy, and inhalants), we estimated adjusted prevalence ratios (aPRs) and 95% confidence intervals (CIs). RESULTS: Nonmedical use of prescription drugs (NMUPD) was associated with ever having sexual intercourse (aPR 1.16 [95% CI 1.11-1.22]), being currently sexually active (1.26 [1.20-1.33]), having ≥4 lifetime sexual partners (1.45 [1.34-1.57]), drinking alcohol or using drugs before last sexual intercourse (1.32 [1.17-1.48]), and not using a condom at last sexual intercourse (1.14 [1.05-1.23]). As the frequency of NMUPD increased, the association between NMUPD and each of the sexual risk behaviors increased in strength, suggesting a dose-response relationship. CONCLUSIONS: NMUPD is associated with sexual behaviors that put high school students at risk for sexually transmitted infections. These findings can be used to inform clinical and school-based interventions developed to reduce drug use and sexually transmitted infections. |
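For readers unfamiliar with prevalence ratios, the quantity reported above can be illustrated with a crude (unadjusted) calculation. This is only a sketch with hypothetical counts: the study itself estimated *adjusted* prevalence ratios from weighted survey data using logistic regression, not the simple 2x2-table formula below:

```python
# Crude prevalence ratio with a log-transform Wald 95% CI.
# Counts below are hypothetical, chosen only to illustrate the formula.
import math

def prevalence_ratio(a, n1, c, n0, z=1.96):
    """Prevalence ratio of exposed (a events of n1) vs unexposed
    (c events of n0), with a log-transform Wald confidence interval."""
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)  # SE of ln(PR)
    lower = math.exp(math.log(pr) - z * se)
    upper = math.exp(math.log(pr) + z * se)
    return pr, lower, upper

# Hypothetical: 900 of 1,500 students reporting NMUPD vs 10,000 of 20,000
# other students reporting the risk behavior.
pr, lower, upper = prevalence_ratio(900, 1500, 10000, 20000)
print(f"PR = {pr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

A PR above 1 with a CI excluding 1 indicates the behavior is more prevalent in the exposed group, which is how the intervals in the abstract (e.g., aPR 1.16 [95% CI 1.11-1.22]) are read.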
Content Index (Archived Edition)
- Chronic Diseases and Conditions
- Communicable Diseases
- Environmental Health
- Genetics and Genomics
- Health Economics
- Healthcare Associated Infections
- Immunity and Immunization
- Injury and Violence
- Laboratory Sciences
- Maternal and Child Health
- Occupational Safety and Health
- Public Health Law
- Public Health Leadership and Management
- Reproductive Health
- Statistics as Topic
- Substance Use and Abuse
About
CDC Science Clips is an online, continuously updated, searchable database of scientific literature published by CDC authors. Each article features an Altmetric Attention Score to track social and mainstream media mentions. If you are aware of a CDC-authored publication that does not appear in this database, please let us know.
- Page last reviewed: Feb 1, 2024
- Page last updated: Apr 29, 2024
- Powered by CDC PHGKB Infrastructure