Last data update: May 12, 2025. (Total: 49,248 publications since 2009)
Records 1-11 (of 11 Records)
Query Trace: Pulliam J [original query]
Detection of real-time changes in direction of COVID-19 transmission using national- and state-level epidemic trends based on R(t) estimates - United States Overall and New Mexico, April-October 2024
Richard DM , Susswein Z , Connolly S , Myers y Gutiérrez A , Thalathara R , Carey K , Koumans EH , Khan D , Masters NB , McIntosh N , Corbett P , Ghinai I , Kahn R , Keen A , Pulliam J , Sosin D , Gostic K . MMWR Morb Mortal Wkly Rep 2024 73 (46) 1058-1063 Public health practitioners rely on timely surveillance data for planning and decision-making; however, surveillance data are often subject to delays. Epidemic trend categories, based on time-varying effective reproduction number (R(t)) estimates that use nowcasting methods, can mitigate reporting lags in surveillance data and detect changes in community transmission before reporting is completed. CDC analyzed the performance of epidemic trend categories for COVID-19 during summer 2024 in the United States and at the state level in New Mexico. COVID-19 epidemic trend categories were estimated and released in real time based on preliminary data, then retrospectively compared with final emergency department (ED) visit data to determine their ability to detect or confirm real-time changes in subsequent ED visits. Across the United States and in New Mexico, epidemic trend categories were an early indicator of increases in COVID-19 community transmission, signaling increases in May, and a confirmatory indicator in September that decreasing COVID-19 ED visits reflected actual decreases in community transmission rather than incomplete reporting. Public health decision-makers can use epidemic trend categories, in combination with other surveillance indicators, to understand whether COVID-19 community transmission and subsequent ED visits are increasing, decreasing, or not changing; this information can guide communications decisions. |
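The epidemic trend categories above rest on time-varying R(t) estimates. CDC's production pipeline pairs nowcasting with Bayesian R(t) models and is not reproduced here; the sketch below only illustrates the underlying renewal-equation idea, in the style of the Cori et al. sliding-window estimator, on hypothetical daily counts with an assumed generation-interval PMF. The function name and all numbers are illustrative, not the paper's.

```python
import numpy as np

def rt_sliding_window(incidence, gen_pmf, window=7):
    """Crude sliding-window R(t) in the style of Cori et al. (2013):
    R(t) ~ cases in window / infectious pressure in window, where the
    pressure at day t is sum over lags s of gen_pmf[s] * incidence[t-1-s]."""
    I = np.asarray(incidence, dtype=float)
    w = np.asarray(gen_pmf, dtype=float)
    w = w / w.sum()                          # normalize the generation-interval PMF
    T, S = len(I), len(w)
    pressure = np.zeros(T)
    for t in range(1, T):
        s_max = min(t, S)
        # dot the PMF with the most recent s_max days of incidence, newest first
        pressure[t] = np.dot(w[:s_max], I[t - 1 :: -1][:s_max])
    R = np.full(T, np.nan)
    for t in range(window, T):
        num = I[t - window + 1 : t + 1].sum()
        den = pressure[t - window + 1 : t + 1].sum()
        if den > 0:
            R[t] = num / den
    return R

# hypothetical daily counts: exponential growth, then decline
rng = np.random.default_rng(0)
days = np.arange(120)
expected = 40 * np.exp(np.where(days < 60, 0.04 * days, 2.4 - 0.05 * (days - 60)))
cases = rng.poisson(expected)
gen_pmf = np.array([0.1, 0.3, 0.3, 0.2, 0.1])  # assumed generation interval
R = rt_sliding_window(cases, gen_pmf)
print(np.round(R[10::20], 2))                  # >1 while growing, <1 while declining
```

A sustained estimate above 1 corresponds to a "growing" trend category and below 1 to "declining"; the real categories also account for estimation uncertainty and reporting delays via nowcasting.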
Best practices for estimating and reporting epidemiological delay distributions of infectious diseases
Charniga K , Park SW , Akhmetzhanov AR , Cori A , Dushoff J , Funk S , Gostic KM , Linton NM , Lison A , Overton CE , Pulliam JRC , Ward T , Cauchemez S , Abbott S . PLoS Comput Biol 2024 20 (10) e1012520 Epidemiological delays are key quantities that inform public health policy and clinical practice. They are used as inputs for mathematical and statistical models, which in turn can guide control strategies. In recent work, we found that censoring, right truncation, and dynamical bias were rarely addressed correctly when estimating delays and that these biases were large enough to have knock-on impacts across a large number of use cases. Here, we formulate a checklist of best practices for estimating and reporting epidemiological delays. We also provide a flowchart to guide practitioners based on their data. Our examples are focused on the incubation period and serial interval due to their importance in outbreak response and modeling, but our recommendations are applicable to other delays. The recommendations, which are based on the literature and our experience estimating epidemiological delay distributions during outbreak responses, can help improve the robustness and utility of reported estimates and provide guidance for the evaluation of estimates for downstream use in transmission models or other analyses. |
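As a concrete example of one item such a checklist covers: delays reported in whole days are interval-censored, so a recorded delay of "4 days" really means the delay fell somewhere in [4, 5). The sketch below fits a lognormal to integer-day delays with that censoring handled in the likelihood. It is a minimal illustration under simplified assumptions, and it deliberately ignores right truncation and dynamical bias, which the checklist also addresses; the data and starting values are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

def fit_lognormal_daily_censored(delays_days):
    """Fit a lognormal delay distribution to integer-day delays, treating each
    observation as interval-censored on [d, d + 1) rather than as exact."""
    d = np.asarray(delays_days, dtype=float)

    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)            # keep sigma positive
        dist = stats.lognorm(s=sigma, scale=np.exp(mu))
        p = dist.cdf(d + 1.0) - dist.cdf(d)  # P(delay falls within the observed day)
        return -np.sum(np.log(np.clip(p, 1e-12, None)))

    res = optimize.minimize(nll, x0=[1.5, np.log(0.5)], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])        # (meanlog, sdlog)

# hypothetical incubation periods, reported as whole days
rng = np.random.default_rng(1)
true = stats.lognorm(s=0.5, scale=np.exp(1.6))     # mean ~ 5.6 days
obs = np.floor(true.rvs(300, random_state=rng))
mu_hat, sigma_hat = fit_lognormal_daily_censored(obs)
print(f"meanlog={mu_hat:.2f}, sdlog={sigma_hat:.2f}")  # close to (1.6, 0.5)
```

A naive fit that treats the floored values as exact delays would be biased downward; the interval-censored likelihood recovers the generating parameters.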
Syphilis treatment among people who are pregnant in six U.S. states, 2018-2021
Tannis A , Miele K , Carlson JM , O'Callaghan KP , Woodworth KR , Anderson B , Praag A , Pulliam K , Coppola N , Willabus T , Mbotha D , Abetew D , Currenti S , Longcore ND , Akosa A , Meaney-Delman D , Tong VT , Gilboa SM , Olsen EO . Obstet Gynecol 2024 OBJECTIVE: To describe syphilis treatment status and prenatal care among people with syphilis during pregnancy to identify missed opportunities for preventing congenital syphilis. METHODS: Six jurisdictions that participated in SET-NET (Surveillance for Emerging Threats to Pregnant People and Infants Network) conducted enhanced surveillance among people with syphilis during pregnancy based on case investigations, medical records, and linkage of laboratory data with vital records. Unadjusted risk ratios (RRs) were used to compare demographic and clinical characteristics by syphilis stage (primary, secondary, or early latent vs late latent or unknown) and treatment status during pregnancy (adequate per the Centers for Disease Control and Prevention's "Sexually Transmitted Infections Treatment Guidelines, 2021" vs inadequate or not treated) and by prenatal care (timely: at least 30 days before pregnancy outcome; nontimely: less than 30 days before pregnancy outcome; and no prenatal care). RESULTS: As of September 15, 2023, of 1,476 people with syphilis during pregnancy, 855 (57.9%) were adequately treated and 621 (42.1%) were inadequately treated or not treated. Eighty-two percent of the cohort received timely prenatal care. Although those with nontimely or no prenatal care were more likely to receive inadequate or no treatment (RR 2.50, 95% CI, 2.17-2.88 and RR 2.73, 95% CI, 2.47-3.02, respectively), 32.1% of those with timely prenatal care were inadequately or not treated. Those with reported substance use or a history of homelessness were nearly twice as likely to receive inadequate or no treatment (RR 2.04, 95% CI, 1.82-2.28 and RR 1.83, 95% CI, 1.58-2.13, respectively). CONCLUSION: In this surveillance cohort, people without timely prenatal care had the highest risk for syphilis treatment inadequacy; however, almost a third of people who received timely prenatal care were not adequately treated. These findings underscore gaps in syphilis screening and treatment for pregnant people, especially those experiencing substance use and homelessness, and the need for systems-based interventions, such as treatment outside of traditional prenatal care settings. |
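For readers reproducing numbers like those above, an unadjusted risk ratio is the ratio of risks in the two groups, with the confidence interval conventionally computed on the log scale. A minimal sketch follows; the counts are hypothetical, not the study's data.

```python
import math

def risk_ratio(a, n1, c, n0):
    """Unadjusted risk ratio with a 95% Wald (log-scale) CI.
    a / n1: events / total in the exposed group; c / n0: in the reference group."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# hypothetical 2x2 counts: inadequate/no treatment by prenatal care timeliness
print(risk_ratio(120, 200, 380, 1276))  # (RR, lower 95% CI, upper 95% CI)
```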
Human exposure to bats, rodents and monkeys in Bangladesh
Shanta IS , Luby SP , Hossain K , Heffelfinger JD , Kilpatrick AM , Haider N , Rahman T , Chakma S , Ahmed SSU , Sharker Y , Pulliam JRC , Kennedy ED , Gurley ES . Ecohealth 2023 1-12 Bats, rodents and monkeys are reservoirs for emerging zoonotic infections. We sought to describe the frequency of human exposure to these animals and the seasonal and geographic variation of these exposures in Bangladesh. During 2013-2016, we conducted a cross-sectional survey in a nationally representative sample of 10,002 households from 1001 randomly selected communities. We interviewed household members about exposures to bats, rodents and monkeys, including a key human-bat interface: raw date palm sap consumption. Respondents reported observing rodents (90%), bats (52%) and monkeys (2%) in or around their households, although fewer reported direct contact. The presence of monkeys around the household was reported more often in Sylhet division (7%) than in other divisions. Households in Khulna (17%) and Rajshahi (13%) were more likely to report drinking date palm sap than those in other divisions (1.5-5.6%). Date palm sap was mostly consumed during winter, with higher frequencies in January (16%) and February (12%) than in other months (0-5.6%). There was a decreasing trend in drinking sap over the three years. Overall, we observed substantial geographic and seasonal patterns in human exposure to animals that could be sources of zoonotic disease. These findings could help target surveillance, research, and prevention efforts for emerging zoonoses to the areas and seasons with the highest levels of exposure. |
COVID-19 Contact Tracing in Two Counties - North Carolina, June-July 2020
Lash RR , Donovan CV , Fleischauer AT , Moore ZS , Harris G , Hayes S , Sullivan M , Wilburn A , Ong J , Wright D , Washington R , Pulliam A , Byers B , McLaughlin HP , Dirlikov E , Rose DA , Walke HT , Honein MA , Moonan PK , Oeltmann JE . MMWR Morb Mortal Wkly Rep 2020 69 (38) 1360-1363 Contact tracing is a strategy implemented to minimize the spread of communicable diseases (1,2). Prompt contact tracing, testing, and self-quarantine can reduce the transmission of SARS-CoV-2, the virus that causes coronavirus disease 2019 (COVID-19) (3,4). Community engagement is important to encourage participation in and cooperation with SARS-CoV-2 contact tracing (5). Substantial investments have been made to scale up contact tracing for COVID-19 in the United States. During June 1-July 12, 2020, the incidence of COVID-19 cases in North Carolina increased 183%, from seven to 19 per 100,000 persons per day (6). To assess local COVID-19 contact tracing implementation, data from two counties in North Carolina were analyzed during a period of high incidence. Health department staff members investigated 5,514 (77%) persons with COVID-19 in Mecklenburg County and 584 (99%) in Randolph County. No contacts were reported for 48% of cases in Mecklenburg and for 35% in Randolph. Among contacts provided, 25% in Mecklenburg and 48% in Randolph could not be reached by telephone and were classified as nonresponsive after at least one failed attempt on each of 3 consecutive days. The median interval from specimen collection from the index patient to notification of identified contacts was 6 days in both counties. Despite aggressive efforts by health department staff members to perform case investigations and contact tracing, many persons with COVID-19 did not report contacts, and many contacts were not reached. These findings indicate that improved timeliness of contact tracing, community engagement, and increased use of community-wide mitigation are needed to interrupt SARS-CoV-2 transmission. |
Changing contact patterns over disease progression: Nipah virus as a case study
Lee KH , Nikolay B , Sazzad HMS , Hossain MJ , Khan AKMD , Rahman M , Satter SM , Nichol ST , Klena JD , Pulliam JRC , Kilpatrick AM , Sultana S , Afroj S , Daszak P , Luby S , Cauchemez S , Salje H , Gurley E . J Infect Dis 2020 222 (3) 438-442 Contact patterns play a key role in disease transmission, and variation in contacts during the course of illness can influence transmission, particularly when accompanied by changes in host infectiousness. We used surveys among 1,642 contacts of 94 Nipah case-patients in Bangladesh to determine how contact patterns (physical and with bodily fluids) changed as disease progressed in severity. The number of contacts increased with severity and, for case-patients who died, peaked on the day of death. Given that transmission has been observed only from fatal Nipah cases, our findings suggest that changes in contact patterns during illness contribute to the risk of infection. |
Transmission of Nipah virus - 14 years of investigations in Bangladesh
Nikolay B , Salje H , Hossain MJ , Khan AKMD , Sazzad HMS , Rahman M , Daszak P , Stroher U , Pulliam JRC , Kilpatrick AM , Nichol ST , Klena JD , Sultana S , Afroj S , Luby SP , Cauchemez S , Gurley ES . N Engl J Med 2019 380 (19) 1804-1814 BACKGROUND: Nipah virus is a highly virulent zoonotic pathogen that can be transmitted between humans. Understanding the dynamics of person-to-person transmission is key to designing effective interventions. METHODS: We used data from all Nipah virus cases identified during outbreak investigations in Bangladesh from April 2001 through April 2014 to investigate case-patient characteristics associated with onward transmission and factors associated with the risk of infection among patient contacts. RESULTS: Of 248 Nipah virus cases identified, 82 were caused by person-to-person transmission, corresponding to a reproduction number (i.e., the average number of secondary cases per case patient) of 0.33 (95% confidence interval [CI], 0.19 to 0.59). The predicted reproduction number increased with the case patient's age and was highest among patients 45 years of age or older who had difficulty breathing (1.1; 95% CI, 0.4 to 3.2). Case patients who did not have difficulty breathing infected 0.05 times as many contacts (95% CI, 0.01 to 0.3) as other case patients did. Serologic testing of 1863 asymptomatic contacts revealed no infections. Spouses of case patients were more often infected (8 of 56 [14%]) than other close family members (7 of 547 [1.3%]) or other contacts (18 of 1996 [0.9%]). The risk of infection increased with increased duration of exposure of the contacts (adjusted odds ratio for exposure of >48 hours vs. ≤1 hour, 13; 95% CI, 2.6 to 62) and with exposure to body fluids (adjusted odds ratio, 4.3; 95% CI, 1.6 to 11). CONCLUSIONS: Increasing age and respiratory symptoms were indicators of infectivity of Nipah virus. Interventions to control person-to-person transmission should aim to reduce exposure to body fluids. (Funded by the National Institutes of Health and others.). |
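The reproduction number's point estimate can be checked directly from the counts in the abstract, since it is defined there as the average number of secondary cases per case patient; the confidence interval, by contrast, comes from the authors' fitted transmission model and cannot be recovered this way:

```latex
R = \frac{\text{cases caused by person-to-person transmission}}{\text{total cases}}
  = \frac{82}{248} \approx 0.33
```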
Evaluating Ebola vaccine trials: insights from simulation.
Pulliam JRC , Bellan SE , Gambhir M , Meyers LA , Dushoff J . Lancet Infect Dis 2015 15 (10) 1134 Piszczek and Parlow (1) outlined expected benefits of a stepped-wedge cluster trial (SWCT) design, with specific reference to the Sierra Leone Trial to Introduce a Vaccine against Ebola (STRIVE). STRIVE, however, is not an SWCT, but a phased-rollout trial in which randomization to immediate or delayed vaccination arms occurs at the individual level (RCT) within trial clusters (2). While the SWCT design is advantageous in certain circumstances, many of the benefits described by Piszczek and Parlow would not apply to evaluation of Ebola vaccine candidates in Sierra Leone. | In a recently published study, we used simulations to compare statistical validity and power for an SWCT and a STRIVE-like RCT in the same trial population (3). Piszczek and Parlow contend that an SWCT can achieve greater statistical power than an RCT via multiple before-and-after and between-group comparisons; however, we found that the declining and heterogeneous epidemic incidence across Sierra Leone undermines such cluster-level comparisons and, consequently, the power of an SWCT. Specifically, we estimated that the SWCT design would be 3–10 times less likely than an individually randomized, phased-rollout RCT to definitively identify an efficacious vaccine. For example, an SWCT starting in April 2015 was expected to have a less than 10% chance of detecting the effect of a 90% efficacious vaccine. |
Statistical power and validity of Ebola vaccine trials in Sierra Leone: a simulation study of trial design and analysis
Bellan SE , Pulliam JR , Pearson CA , Champredon D , Fox SJ , Skrip L , Galvani AP , Gambhir M , Lopman BA , Porco TC , Meyers LA , Dushoff J . Lancet Infect Dis 2015 15 (6) 703-10 BACKGROUND: Safe and effective vaccines could help to end the ongoing Ebola virus disease epidemic in parts of west Africa, and mitigate future outbreaks of the virus. We assess the statistical validity and power of randomised controlled trial (RCT) and stepped-wedge cluster trial (SWCT) designs in Sierra Leone, where the incidence of Ebola virus disease is spatiotemporally heterogeneous, and is decreasing rapidly. METHODS: We projected district-level Ebola virus disease incidence for the next 6 months, using a stochastic model fitted to data from Sierra Leone. We then simulated RCT and SWCT designs in trial populations comprising geographically distinct clusters at high risk, taking into account realistic logistical constraints, and both individual-level and cluster-level variations in risk. We assessed false-positive rates and power for parametric and non-parametric analyses of simulated trial data, across a range of vaccine efficacies and trial start dates. FINDINGS: For an SWCT, regional variation in Ebola virus disease incidence trends produced increased false-positive rates (up to 0.15 at alpha=0.05) under standard statistical models, but not when analysed by a permutation test, whereas analyses of RCTs remained statistically valid under all models. With the assumption of a 6-month trial starting on Feb 18, 2015, we estimate the power to detect a 90% effective vaccine to be between 49% and 89% for an RCT, and between 6% and 26% for an SWCT, depending on the Ebola virus disease incidence within the trial population. We estimate that a 1-month delay in trial initiation will reduce the power of the RCT by 20% and that of the SWCT by 49%. INTERPRETATION: Spatiotemporal variation in infection risk undermines the statistical power of the SWCT. This variation also undercuts the SWCT's expected ethical advantages over the RCT, because an RCT, but not an SWCT, can prioritise vaccination of high-risk clusters. FUNDING: US National Institutes of Health, US National Science Foundation, and Canadian Institutes of Health Research. |
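The study's power estimates come from simulating entire trials under a stochastic, district-level epidemic model fitted to Sierra Leone data, which is well beyond a few lines of code. The toy Monte Carlo sketch below only illustrates the general simulate-then-test approach to estimating design power for a two-arm, individually randomized trial with a constant attack rate; it deliberately omits the paper's key ingredients (clusters, declining incidence, spatiotemporal heterogeneity), and every parameter is hypothetical.

```python
import numpy as np
from scipy import stats

def rct_power(n_per_arm, attack_rate, efficacy, n_sims=2000, alpha=0.05, seed=0):
    """Crude Monte Carlo power estimate for a two-arm, individually randomized
    trial: simulate case counts in each arm, test each simulated trial with
    Fisher's exact test, and report the fraction reaching significance."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.binomial(n_per_arm, attack_rate)
        vaccine = rng.binomial(n_per_arm, attack_rate * (1 - efficacy))
        table = [[vaccine, n_per_arm - vaccine],
                 [control, n_per_arm - control]]
        _, p = stats.fisher_exact(table)
        hits += p < alpha
    return hits / n_sims

# hypothetical: 1,500 per arm, 1% attack rate over the trial, 90% efficacy
print(rct_power(1500, 0.01, 0.90))
```

In the paper's setting, declining incidence shrinks the attack rate as the trial proceeds, which is one reason a delayed start (or the staggered exposure inherent to an SWCT) costs so much power.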
A comparison of bats and rodents as reservoirs of zoonotic viruses: are bats special?
Luis AD , Hayman DT , O'Shea TJ , Cryan PM , Gilbert AT , Pulliam JR , Mills JN , Timonin ME , Willis CK , Cunningham AA , Fooks AR , Rupprecht CE , Wood JL , Webb CT . Proc Biol Sci 2013 280 (1756) 20122753 Bats are the natural reservoirs of a number of high-impact viral zoonoses. We present a quantitative analysis to address the hypothesis that bats are unique in their propensity to host zoonotic viruses based on a comparison with rodents, another important host order. We found that bats indeed host more zoonotic viruses per species than rodents, and we identified life-history and ecological factors that promote zoonotic viral richness. More zoonotic viruses are hosted by species whose distributions overlap with a greater number of other species in the same taxonomic order (sympatry). Specifically in bats, there was evidence for increased zoonotic viral richness in species with smaller litters (one young), greater longevity and more litters per year. Furthermore, our results point to a new hypothesis to explain in part why bats host more zoonotic viruses per species: the stronger effect of sympatry in bats and more viruses shared between bat species suggests that interspecific transmission is more prevalent among bats than among rodents. Although bats host more zoonotic viruses per species, the total number of zoonotic viruses identified in bats (61) was lower than in rodents (68), a result of there being approximately twice the number of rodent species as bat species. Therefore, rodents should still be a serious concern as reservoirs of emerging viruses. These findings shed light on disease emergence and perpetuation mechanisms and may help lead to a predictive framework for identifying future emerging infectious virus reservoirs. |
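A quick back-of-the-envelope check of the per-species claim, using only figures stated in the abstract: with B bat species and approximately 2B rodent species,

```latex
\frac{\text{zoonotic viruses per bat species}}{\text{zoonotic viruses per rodent species}}
  \approx \frac{61 / B}{68 / 2B} = \frac{2 \times 61}{68} \approx 1.8
```

That is, bats host roughly 1.8 times as many zoonotic viruses per species as rodents, independent of the exact value of B.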
Evaluating and regulating lead in synthetic turf
Van Ulirsch G , Gleason K , Gerstenberger S , Moffett DB , Pulliam G , Ahmed T , Fagliano J . Environ Health Perspect 2010 118 (10) 1345-9 BACKGROUND: In 2007, a synthetic turf recreational field in Newark, New Jersey, was closed because lead was found in synthetic turf fibers and in surface dust at concentrations exceeding hazard criteria. Consequently, public health professionals across the country began testing synthetic turf to determine whether it represented a lead hazard. Currently, no standardized methods exist to test for lead in synthetic turf or to assess lead hazards. OBJECTIVES: Our objectives were to increase awareness of potential lead exposure from synthetic turf by presenting data showing elevated lead in fibers and turf-derived dust; identify risk assessment uncertainties; recommend that federal and/or state agencies determine appropriate methodologies for assessing lead in synthetic turf; and recommend an interim standardized approach for sampling, interpreting results, and taking health-protective actions. DISCUSSION: Data collected from recreational fields and child care centers indicate lead in synthetic turf fibers and dust at concentrations exceeding the Consumer Product Safety Improvement Act of 2008 statutory lead limit of 300 mg/kg for consumer products intended for use by children, and the U.S. Environmental Protection Agency's lead-dust hazard standard of 40 µg/ft² for floors. CONCLUSIONS: Synthetic turf can deteriorate to form dust containing lead at levels that may pose a risk to children. Given elevated lead levels in turf and dust on recreational fields and in child care settings, it is imperative that a consistent, nationwide approach for sampling, assessment, and action be developed. In the absence of a standardized approach, we offer an interim approach to assess potential lead hazards when evaluating synthetic turf. |