Last data update: Jan 27, 2025. (Total: 48650 publications since 2009)
Records 1-9 (of 9 Records)
Query Trace: Lopez VK
Longitudinal analysis of SARS-CoV-2 IgG antibody durability in Puerto Rico
Madewell ZJ , Graff NE , Lopez VK , Rodriguez DM , Wong JM , Maniatis P , Medina FA , Muñoz JL , Briggs-Hagen M , Adams LE , Rivera-Amill V , Paz-Bailey G , Major CG . Sci Rep 2024 14 (1) 30743 Understanding the dynamics of antibody responses following vaccination and SARS-CoV-2 infection is important for informing effective vaccination strategies and other public health interventions. This study investigates SARS-CoV-2 antibody dynamics in a Puerto Rican cohort, analyzing how IgG levels vary by vaccination status and previous infection. We assess waning immunity and the distribution of hybrid immunity with the aim of informing public health strategies and vaccination programs in Puerto Rico and similar settings. We conducted a prospective, longitudinal cohort study to identify SARS-CoV-2 infections and related outcomes in Ponce, Puerto Rico, from June 2020 to August 2022. Participants provided self-collected nasal swabs every week and serum every six months for RT-PCR and IgG testing, respectively. IgG reactivity against nucleocapsid (N) antigens, which generally indicate previous infection, and spike (S1) and receptor-binding domain (RBD) antigens, which indicate a history of either infection or vaccination, was assessed using the Luminex Corporation xMAP® SARS-CoV-2 Multi-Antigen IgG Assay. Prior infection was defined by positive RT-PCRs, categorized by the predominant circulating SARS-CoV-2 variant at the event time. Demographic information, medical history, and COVID-19 vaccination history were collected through standardized questionnaires. Of 882 participants included in our analysis, 34.0% experienced at least one SARS-CoV-2 infection, with most (78.7%) occurring during the Omicron wave (December 2021 onwards). SARS-CoV-2 antibody prevalence increased over time, reaching 98.4% by the final serum collection, with 67.0% attributable to vaccination alone, 1.6% to infection alone, and 31.4% to both.
Regardless of prior infection status, RBD and S1 IgG levels gradually declined following two vaccine doses. A third dose boosted these antibody levels and showed a slower decline over time. N-antibody levels peaked during the Omicron surge and waned over time. Vaccination in individuals with prior SARS-CoV-2 infection elicited the highest and most durable antibody responses. N or S1 seropositivity was associated with lower odds of a subsequent positive PCR test during the Omicron period, with N antibodies showing a stronger association. By elucidating the differential decay of RBD and S1 antibodies following vaccination and the complexities of N-antibody response following infection, this study in a Puerto Rican cohort strengthens the foundation for developing targeted interventions and public health strategies.
Challenges of COVID-19 case forecasting in the US, 2020-2021
Lopez VK , Cramer EY , Pagano R , Drake JM , O'Dea EB , Adee M , Ayer T , Chhatwal J , Dalgic OO , Ladd MA , Linas BP , Mueller PP , Xiao J , Bracher J , Castro Rivadeneira AJ , Gerding A , Gneiting T , Huang Y , Jayawardena D , Kanji AH , Le K , Mühlemann A , Niemi J , Ray EL , Stark A , Wang Y , Wattanachit N , Zorn MW , Pei S , Shaman J , Yamana TK , Tarasewicz SR , Wilson DJ , Baccam S , Gurung H , Stage S , Suchoski B , Gao L , Gu Z , Kim M , Li X , Wang G , Wang L , Wang Y , Yu S , Gardner L , Jindal S , Marshall M , Nixon K , Dent J , Hill AL , Kaminsky J , Lee EC , Lemaitre JC , Lessler J , Smith CP , Truelove S , Kinsey M , Mullany LC , Rainwater-Lovett K , Shin L , Tallaksen K , Wilson S , Karlen D , Castro L , Fairchild G , Michaud I , Osthus D , Bian J , Cao W , Gao Z , Lavista Ferres J , Li C , Liu TY , Xie X , Zhang S , Zheng S , Chinazzi M , Davis JT , Mu K , Pastore YPiontti A , Vespignani A , Xiong X , Walraven R , Chen J , Gu Q , Wang L , Xu P , Zhang W , Zou D , Gibson GC , Sheldon D , Srivastava A , Adiga A , Hurt B , Kaur G , Lewis B , Marathe M , Peddireddy AS , Porebski P , Venkatramanan S , Wang L , Prasad PV , Walker JW , Webber AE , Slayton RB , Biggerstaff M , Reich NG , Johansson MA . PLoS Comput Biol 2024 20 (5) e1011200 During the COVID-19 pandemic, forecasting COVID-19 trends to support planning and response was a priority for scientists and decision makers alike. In the United States, COVID-19 forecasting was coordinated by a large group of universities, companies, and government entities led by the Centers for Disease Control and Prevention and the US COVID-19 Forecast Hub (https://covid19forecasthub.org). We evaluated approximately 9.7 million forecasts of weekly state-level COVID-19 cases for predictions 1-4 weeks into the future submitted by 24 teams from August 2020 to December 2021. 
We assessed coverage of central prediction intervals and weighted interval scores (WIS), adjusting for missing forecasts relative to a baseline forecast, and used a Gaussian generalized estimating equation (GEE) model to evaluate differences in skill across epidemic phases that were defined by the effective reproduction number. Overall, we found high variation in skill across individual models, with ensemble-based forecasts outperforming other approaches. Forecast skill relative to the baseline was generally higher for larger jurisdictions (e.g., states compared to counties). Over time, forecasts generally performed worst in periods of rapid changes in reported cases (in either increasing or decreasing epidemic phases), with 95% prediction interval coverage dropping below 50% during the growth phases of the winter 2020, Delta, and Omicron waves. Ideally, case forecasts could serve as a leading indicator of changes in transmission dynamics. However, while most COVID-19 case forecasts outperformed a naïve baseline model, even the most accurate case forecasts were unreliable in key phases. Further research could improve forecasts of leading indicators, like COVID-19 cases, by leveraging additional real-time data, addressing performance across phases, improving the characterization of forecast confidence, and ensuring that forecasts are coherent across spatial scales. In the meantime, it is critical for forecast users to appreciate current limitations and use a broad set of indicators to inform pandemic-related decision making.
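The weighted interval score used in this evaluation is a standard proper scoring rule for interval-format forecasts. As a rough illustration (a minimal sketch of the Bracher et al. formulation commonly used for Forecast Hub evaluations; function and variable names here are my own), it penalizes interval width plus scaled distances when the observation falls outside an interval:

```python
def interval_score(lower, upper, alpha, y):
    """Interval score for a central (1 - alpha) prediction interval [lower, upper]
    against an observed value y: width plus penalties for missing the observation."""
    score = upper - lower
    if y < lower:
        score += (2 / alpha) * (lower - y)
    elif y > upper:
        score += (2 / alpha) * (y - upper)
    return score

def weighted_interval_score(median, intervals, y):
    """WIS: a weighted sum of the absolute error of the median and the
    interval scores of K central intervals, each weighted by alpha_k / 2.
    intervals: list of (alpha, lower, upper) tuples."""
    k = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, lower, upper in intervals:
        total += (alpha / 2) * interval_score(lower, upper, alpha, y)
    return total / (k + 0.5)
```

With no intervals, the score reduces to the absolute error of the point (median) forecast, which is why WIS is often described as a quantile-based generalization of absolute error.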
The United States COVID-19 Forecast Hub dataset (preprint)
Cramer EY , Huang Y , Wang Y , Ray EL , Cornell M , Bracher J , Brennen A , Rivadeneira AJC , Gerding A , House K , Jayawardena D , Kanji AH , Khandelwal A , Le K , Mody V , Mody V , Niemi J , Stark A , Shah A , Wattanchit N , Zorn MW , Reich NG , US COVID-19 Forecast Hub Consortium , Lopez VK , Walker JW , Slayton RB , Johansson MA , Biggerstaff M . medRxiv 2021 2021.11.04.21265886 Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident hospitalizations, incident cases, incident deaths, and cumulative deaths due to COVID-19 at national, state, and county levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages. Competing Interest Statement: AV, MC, and APP report grants from Metabiota Inc outside the submitted work.
Funding Statement: For teams that reported receiving funding for their work, we report the sources and disclosures below: AIpert-pwllnod: Natural Sciences and Engineering Research Council of Canada; Caltech-CS156: Gary Clinard Innovation Fund; CEID-Walk: University of Georgia; CMU-TimeSeries: CDC Center of Excellence, gifts from Google and Facebook; COVIDhub: This work has been supported by the US Centers for Disease Control and Prevention (1U01IP001122) and the National Institutes of General Medical Sciences (R35GM119582). The content is solely the responsibility of the authors and does not necessarily represent the official views of CDC, NIGMS or the National Institutes of Health; Johannes Bracher was supported by the Helmholtz Foundation via the SIMCARD Information & Data Science Pilot Project; Tilmann Gneiting gratefully acknowledges support by the Klaus Tschira Foundation; CU-select: NSF DMS-2027369 and a gift from the Morris-Singer Foundation; DDS-NBDS: NSF III-1812699; epiforecasts-ensemble1: Wellcome Trust (210758/Z/18/Z); FDANIHASU: supported by the Intramural Research Program of the NIH/NIDDK; GT_CHHS-COVID19: William W. George Endowment, Virginia C. and Joseph C. Mello Endowment, NSF DGE-1650044, NSF MRI 1828187, research cyberinfrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at Georgia Tech, and the following benefactors at Georgia Tech: Andrea Laliberte, Joseph C. Mello, Richard Rick E. & Charlene Zalesky, and Claudia & Paul Raines, CDC MInD-Healthcare U01CK000531-Supplement; IHME: This work was supported by the Bill & Melinda Gates Foundation, as well as funding from the state of Washington and the National Science Foundation (award no. FAIN: 2031096); Imperial-ensemble1: SB acknowledges funding from the Wellcome Trust (219415); Institute of Business Forecasting: IBF; IowaStateLW-STEM: NSF DMS-1916204, Iowa State University Plant Sciences Institute Scholars Program, NSF DMS-1934884, Laurence H.
Baker Center for Bioinformatics and Biological Statistics; IUPUI CIS: NSF; JHU_CSSE-DECOM: JHU CSSE: National Science Foundation (NSF) RAPID Real-time Forecasting of COVID-19 risk in the USA. 2021-2022. Award ID: 2108526. National Science Foundation (NSF) RAPID Development of an interactive web-based dashboard to track COVID-19 in real-time. 2020. Award ID: 2028604; JHU_IDD-CovidSP: State of California, US Dept of Health and Human Services, US Dept of Homeland Security, Johns Hopkins Health System, Office of the Dean at Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University Modeling and Policy Hub, Centers for Disease Control and Prevention (5U01CK000538-03), University of Utah Immunology, Inflammation, & Infectious Disease Initiative (26798 Seed Grant); JHU_UNC_GAS-StatMechPol: NIH NIGMS: R01GM140564; JHUAPL-Bucky: US Dept of Health and Human Services; KITmetricslab-select_ensemble: Daniel Wolffram gratefully acknowledges support by the Klaus Tschira Foundation; LANL-GrowthRate: LANL LDRD 20200700ER; MIT-Cassandra: MIT Quest for Intelligence; MOBS-GLEAM_COVID: COVID Supplement CDC-HHS-6U01IP001137-01; CA NU38OT000297 from the Council of State and Territorial Epidemiologists (CSTE); NotreDame-FRED: NSF RAPID DEB 2027718; NotreDame-mobility: NSF RAPID DEB 2027718; PSI-DRAFT: NSF RAPID Grant # 2031536; QJHong-Encounter: NSF DMR-2001411 and DMR-1835939; SDSC_ISG-TrendModel: The development of the dashboard was partly funded by the Fondation Privee des Hopitaux Universitaires de Geneve; UA-EpiCovDA: NSF RAPID Grant # 2028401; UChicagoCHATTOPADHYAY-UnIT: Defense Advanced Research Projects Agency (DARPA) #HR00111890043/P00004 (I.
Chattopadhyay, University of Chicago); UCSB-ACTS: NSF RAPID IIS 2029626; UCSD_NEU-DeepGLEAM: Google Faculty Award, W31P4Q-21-C-0014; UMass-MechBayes: NIGMS #R35GM119582, NSF #1749854, NIGMS #R35GM119582; UMich-RidgeTfReg: This project is funded by the University of Michigan Physics Department and the University of Michigan Office of Research; UVA-Ensemble: National Institutes of Health (NIH) Grant 1R01GM109718, NSF BIG DATA Grant IIS-1633028, NSF Grant No.: OAC-1916805, NSF Expeditions in Computing Grant CCF-1918656, CCF-1917819, NSF RAPID CNS-2028004, NSF RAPID OAC-2027541, US Centers for Disease Control and Prevention 75D30119C05935, a grant from Google, University of Virginia Strategic Investment Fund award number SIF160, Defense Threat Reduction Agency (DTRA) under Contract No. HDTRA1-19-D-0007, and Virginia Dept of Health Grant VDH-21-501-0141; Wadhwani_AI-BayesOpt: This study is made possible by the generous support of the American People through the United States Agency for International Development (USAID). The work described in this article was implemented under the TRACETB Project, managed by WIAI under the terms of Cooperative Agreement Number 72038620CA00006.
The contents of this manuscript are the sole responsibility of the authors and do not necessarily reflect the views of USAID or the United States Government; WalmartLabsML-LogForecasting: the team acknowledges Walmart's support of this study. All data produced are available online at https://github.com/reichlab/covid19-forecast-hub
Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the US (preprint)
Cramer EY , Ray EL , Lopez VK , Bracher J , Brennen A , Castro Rivadeneira AJ , Gerding A , Gneiting T , House KH , Huang Y , Jayawardena D , Kanji AH , Khandelwal A , Le K , Mühlemann A , Niemi J , Shah A , Stark A , Wang Y , Wattanachit N , Zorn MW , Gu Y , Jain S , Bannur N , Deva A , Kulkarni M , Merugu S , Raval A , Shingi S , Tiwari A , White J , Abernethy NF , Woody S , Dahan M , Fox S , Gaither K , Lachmann M , Meyers LA , Scott JG , Tec M , Srivastava A , George GE , Cegan JC , Dettwiller ID , England WP , Farthing MW , Hunter RH , Lafferty B , Linkov I , Mayo ML , Parno MD , Rowland MA , Trump BD , Zhang-James Y , Chen S , Faraone SV , Hess J , Morley CP , Salekin A , Wang D , Corsetti SM , Baer TM , Eisenberg MC , Falb K , Huang Y , Martin ET , McCauley E , Myers RL , Schwarz T , Sheldon D , Gibson GC , Yu R , Gao L , Ma Y , Wu D , Yan X , Jin X , Wang YX , Chen Y , Guo L , Zhao Y , Gu Q , Chen J , Wang L , Xu P , Zhang W , Zou D , Biegel H , Lega J , McConnell S , Nagraj VP , Guertin SL , Hulme-Lowe C , Turner SD , Shi Y , Ban X , Walraven R , Hong QJ , Kong S , van de Walle A , Turtle JA , Ben-Nun M , Riley S , Riley P , Koyluoglu U , DesRoches D , Forli P , Hamory B , Kyriakides C , Leis H , Milliken J , Moloney M , Morgan J , Nirgudkar N , Ozcan G , Piwonka N , Ravi M , Schrader C , Shakhnovich E , Siegel D , Spatz R , Stiefeling C , Wilkinson B , Wong A , Cavany S , España G , Moore S , Oidtman R , Perkins A , Kraus D , Kraus A , Gao Z , Bian J , Cao W , Lavista Ferres J , Li C , Liu TY , Xie X , Zhang S , Zheng S , Vespignani A , Chinazzi M , Davis JT , Mu K , Pastore YPiontti A , Xiong X , Zheng A , Baek J , Farias V , Georgescu A , Levi R , Sinha D , Wilde J , Perakis G , Bennouna MA , Nze-Ndong D , Singhvi D , Spantidakis I , Thayaparan L , Tsiourvas A , Sarker A , Jadbabaie A , Shah D , Della Penna N , Celi LA , Sundar S , Wolfinger R , Osthus D , Castro L , Fairchild G , Michaud I , Karlen D , Kinsey M , Mullany LC , Rainwater-Lovett K , Shin 
L , Tallaksen K , Wilson S , Lee EC , Dent J , Grantz KH , Hill AL , Kaminsky J , Kaminsky K , Keegan LT , Lauer SA , Lemaitre JC , Lessler J , Meredith HR , Perez-Saez J , Shah S , Smith CP , Truelove SA , Wills J , Marshall M , Gardner L , Nixon K , Burant JC , Wang L , Gao L , Gu Z , Kim M , Li X , Wang G , Wang Y , Yu S , Reiner RC , Barber R , Gakidou E , Hay SI , Lim S , Murray C , Pigott D , Gurung HL , Baccam P , Stage SA , Suchoski BT , Prakash BA , Adhikari B , Cui J , Rodríguez A , Tabassum A , Xie J , Keskinocak P , Asplund J , Baxter A , Oruc BE , Serban N , Arik SO , Dusenberry M , Epshteyn A , Kanal E , Le LT , Li CL , Pfister T , Sava D , Sinha R , Tsai T , Yoder N , Yoon J , Zhang L , Abbott S , Bosse NI , Funk S , Hellewell J , Meakin SR , Sherratt K , Zhou M , Kalantari R , Yamana TK , Pei S , Shaman J , Li ML , Bertsimas D , Skali Lami O , Soni S , Tazi Bouardi H , Ayer T , Adee M , Chhatwal J , Dalgic OO , Ladd MA , Linas BP , Mueller P , Xiao J , Wang Y , Wang Q , Xie S , Zeng D , Green A , Bien J , Brooks L , Hu AJ , Jahja M , McDonald D , Narasimhan B , Politsch C , Rajanala S , Rumack A , Simon N , Tibshirani RJ , Tibshirani R , Ventura V , Wasserman L , O'Dea EB , Drake JM , Pagano R , Tran QT , Ho LST , Huynh H , Walker JW , Slayton RB , Johansson MA , Biggerstaff M , Reich NG . medRxiv 2021 2021.02.03.21250974 Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies.
In 2020, the COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized hundreds of thousands of specific predictions from more than 50 different academic, industry, and independent research groups. This manuscript systematically evaluates 23 models that regularly submitted forecasts of reported weekly incident COVID-19 mortality counts in the US at the state and national level. One of these models was a multi-model ensemble that combined all available forecasts each week. The performance of individual models showed high variability across time, geospatial units, and forecast horizons. Half of the models evaluated showed better accuracy than a naïve baseline model. In combining the forecasts from all teams, the ensemble showed the best overall probabilistic accuracy of any model. Forecast accuracy degraded as models made predictions farther into the future, with probabilistic accuracy at a 20-week horizon more than 5 times worse than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks. Competing Interest Statement: AV, MC, and APP report grants from Metabiota Inc outside the submitted work. Funding Statement: For teams that reported receiving funding for their work, we report the sources and disclosures below. CMU-TimeSeries: CDC Center of Excellence, gifts from Google and Facebook. CU-select: NSF DMS-2027369 and a gift from the Morris-Singer Foundation. COVIDhub: This work has been supported by the US Centers for Disease Control and Prevention (1U01IP001122) and the National Institutes of General Medical Sciences (R35GM119582).
The content is solely the responsibility of the authors and does not necessarily represent the official views of CDC, NIGMS or the National Institutes of Health. Johannes Bracher was supported by the Helmholtz Foundation via the SIMCARD Information & Data Science Pilot Project. Tilmann Gneiting gratefully acknowledges support by the Klaus Tschira Foundation. DDS-NBDS: NSF III-1812699. EPIFORECASTS-ENSEMBLE1: Wellcome Trust (210758/Z/18/Z) GT_CHHS-COVID19: William W. George Endowment, Virginia C. and Joseph C. Mello Endowments, NSF DGE-1650044, NSF MRI 1828187, research cyberinfrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at Georgia Tech, and the following benefactors at Georgia Tech: Andrea Laliberte, Joseph C. Mello, Richard Rick E. & Charlene Zalesky, and Claudia & Paul Raines GT-DeepCOVID: CDC MInD-Healthcare U01CK000531-Supplement. NSF (Expeditions CCF-1918770, CAREER IIS-2028586, RAPID IIS-2027862, Medium IIS-1955883, NRT DGE-1545362), CDC MInD program, ORNL and funds/computing resources from Georgia Tech and GTRI. IHME: This work was supported by the Bill & Melinda Gates Foundation, as well as funding from the state of Washington and the National Science Foundation (award no. FAIN: 2031096). IowaStateLW-STEM: Iowa State University Plant Sciences Institute Scholars Program, NSF DMS-1916204, NSF CCF-1934884, Laurence H. Baker Center for Bioinformatics and Biological Statistics. JHU_IDD-CovidSP: State of California, US Dept of Health and Human Services, US Dept of Homeland Security, US Office of Foreign Disaster Assistance, Johns Hopkins Health System, Office of the Dean at Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University Modeling and Policy Hub, Centers for Disease Control and Prevention (5U01CK000538-03), University of Utah Immunology, Inflammation, & Infectious Disease Initiative (26798 Seed Grant). LANL-GrowthRate: LANL LDRD 20200700ER.
MOBS-GLEAM_COVID: COVID Supplement CDC-HHS-6U01IP001137-01. NotreDame-mobility and NotreDame-FRED: NSF RAPID DEB 2027718. UA-EpiCovDA: NSF RAPID Grant # 2028401. UCSB-ACTS: NSF RAPID IIS 2029626. UCSD-NEU: Google Faculty Award, DARPA W31P4Q-21-C-0014, COVID Supplement CDC-HHS-6U01IP001137-01. UMass-MechBayes: NIGMS R35GM119582, NSF 1749854. UMich-RidgeTfReg: The University of Michigan Physics Department and the University of Michigan Office of Research. The research described was approved or exempted by the UMass-Amherst IRB. All data and code referred to in the manuscript are publicly available: https://github.com/reichlab/covid19-forecast-hub/ https://github.com/reichlab/covidEnsembles https://zoltardata.com/project/44
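The multi-model ensemble evaluated above combines member forecasts quantile by quantile. As a minimal sketch of one common combination scheme (a median of submitted quantiles, similar in spirit to the Hub ensemble; the production ensemble's exact rules varied over time, and the function name and data layout here are illustrative assumptions):

```python
from statistics import median

def quantile_median_ensemble(member_forecasts):
    """Combine probabilistic forecasts by taking, at each quantile level,
    the median of the values predicted by the member models.
    member_forecasts: list of dicts mapping quantile level -> predicted value,
    all sharing the same set of levels."""
    levels = member_forecasts[0].keys()
    return {q: median(f[q] for f in member_forecasts) for q in levels}

# Illustrative weekly-deaths forecasts from three hypothetical models:
members = [
    {0.25: 90, 0.5: 100, 0.75: 110},
    {0.25: 80, 0.5: 105, 0.75: 130},
    {0.25: 85, 0.5: 120, 0.75: 140},
]
ensemble = quantile_median_ensemble(members)  # {0.25: 85, 0.5: 105, 0.75: 130}
```

Taking medians across models at each level tends to damp the influence of any single poorly calibrated member, which is consistent with the abstract's finding that the ensemble was more reliable than the individual models it combined.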
Can Severity of a Humanitarian Crisis be Quantified? Assessment of the INFORM Severity Index (preprint)
Lopez VK , Nika A , Blanton C , Talley L , Garfield R . medRxiv 2020 11 Background: Those responding to humanitarian crises have an ethical imperative to respond most where the need is greatest. Metrics are used to estimate the severity of a given crisis. The INFORM Severity Index, one such metric, has become widely used to guide policy makers in humanitarian response decision making. The index, however, has not undergone critical statistical review. If imprecise or incorrect, the quality of decision making for humanitarian response will be affected. This analysis asks how precisely and how well this index reflects the severity of conditions for people affected by disaster or war. Results: The INFORM Severity Index is calculated from 35 publicly available indicators, which conceptually reflect the severity of each crisis. We used 172 unique global crises from the INFORM Severity Index database that occurred January 1 to November 30, 2019 or were ongoing by this date. We applied exploratory factor analysis (EFA) to determine common factors within the dataset. We then applied a second-order confirmatory factor analysis (CFA) to predict crisis severity as a latent construct. Model fit was assessed via chi-square goodness-of-fit statistic, Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and Root Mean Square Error of Approximation (RMSEA). The EFA models suggested a 3- or 4-factor solution, with 46% and 53% variance explained in each model, respectively. The final CFA was parsimonious, containing three factors comprising 11 indicators, with reasonable model fit (Chi-squared=107, with 40 degrees of freedom, CFI=0.94, TLI=0.92, RMSEA=0.10). In the second-order CFA, the magnitude of standardized factor-loading on the 'societal governance' latent construct had the strongest association with the latent construct of 'crisis severity' (0.73), followed by the 'humanitarian access/safety' construct (0.56).
Conclusion(s): A metric of crisis severity is a critical step towards improving humanitarian response, but only when it reflects real-life conditions. Our work is a first step in refining an existing framework to better quantify crisis severity.
The United States COVID-19 Forecast Hub dataset.
Cramer EY , Huang Y , Wang Y , Ray EL , Cornell M , Bracher J , Brennen A , Rivadeneira AJC , Gerding A , House K , Jayawardena D , Kanji AH , Khandelwal A , Le K , Mody V , Mody V , Niemi J , Stark A , Shah A , Wattanchit N , Zorn MW , Reich NG , US COVID-19 Forecast Hub Consortium , Lopez VK , Walker JW , Slayton RB , Johansson MA , Biggerstaff M . Sci Data 2022 9 (1) 462 Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
Can severity of a humanitarian crisis be quantified? Assessment of the INFORM severity index
Lopez VK , Nika A , Blanton C , Talley L , Garfield R . Global Health 2023 19 (1) 7 BACKGROUND: Those responding to humanitarian crises have an ethical imperative to respond most where the need is greatest. Metrics are used to estimate the severity of a given crisis. The INFORM Severity Index, one such metric, has become widely used to guide policy makers in humanitarian response decision making. The index, however, has not undergone critical statistical review. If imprecise or incorrect, the quality of decision making for humanitarian response will be affected. This analysis asks how precisely and how well this index reflects the severity of conditions for people affected by disaster or war. RESULTS: The INFORM Severity Index is calculated from 35 publicly available indicators, which conceptually reflect the severity of each crisis. We used 172 unique global crises from the INFORM Severity Index database that occurred January 1 to November 30, 2019 or were ongoing by this date. We applied exploratory factor analysis (EFA) to determine common factors within the dataset. We then applied a second-order confirmatory factor analysis (CFA) to predict crisis severity as a latent construct. Model fit was assessed via chi-square goodness-of-fit statistic, Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), and Root Mean Square Error of Approximation (RMSEA). The EFA models suggested a 3- or 4-factor solution, with 46% and 53% variance explained in each model, respectively. The final CFA was parsimonious, containing three factors comprising 11 indicators, with reasonable model fit (Chi-squared = 107, with 40 degrees of freedom, CFI = 0.94, TLI = 0.92, RMSEA = 0.10). In the second-order CFA, the magnitude of standardized factor-loading on the 'societal governance' latent construct had the strongest association with the latent construct of 'crisis severity' (0.73), followed by the 'humanitarian access/safety' construct (0.56).
CONCLUSIONS: A metric of crisis severity is a critical step towards improving humanitarian response, but only when it reflects real-life conditions. Our work is a first step in refining an existing framework to better quantify crisis severity.
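The reported fit statistics are internally consistent: RMSEA can be recomputed from the chi-square statistic, its degrees of freedom, and the sample size using the standard formula RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))). A quick pure-Python check with the values given in the abstract (χ² = 107, df = 40, N = 172 crises; the function name is my own):

```python
from math import sqrt

def rmsea(chi_sq, df, n):
    """Root Mean Square Error of Approximation from a chi-square
    goodness-of-fit statistic, its degrees of freedom, and sample size n."""
    return sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Values reported in the abstract: chi-squared = 107, df = 40, N = 172
print(round(rmsea(107, 40, 172), 2))  # 0.1
```

The result, about 0.099, rounds to the reported RMSEA of 0.10; values near or above 0.10 are conventionally read as borderline fit, which matches the abstract's description of "reasonable" rather than excellent fit.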
Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States.
Cramer EY , Ray EL , Lopez VK , Bracher J , Brennen A , Castro Rivadeneira AJ , Gerding A , Gneiting T , House KH , Huang Y , Jayawardena D , Kanji AH , Khandelwal A , Le K , Mühlemann A , Niemi J , Shah A , Stark A , Wang Y , Wattanachit N , Zorn MW , Gu Y , Jain S , Bannur N , Deva A , Kulkarni M , Merugu S , Raval A , Shingi S , Tiwari A , White J , Abernethy NF , Woody S , Dahan M , Fox S , Gaither K , Lachmann M , Meyers LA , Scott JG , Tec M , Srivastava A , George GE , Cegan JC , Dettwiller ID , England WP , Farthing MW , Hunter RH , Lafferty B , Linkov I , Mayo ML , Parno MD , Rowland MA , Trump BD , Zhang-James Y , Chen S , Faraone SV , Hess J , Morley CP , Salekin A , Wang D , Corsetti SM , Baer TM , Eisenberg MC , Falb K , Huang Y , Martin ET , McCauley E , Myers RL , Schwarz T , Sheldon D , Gibson GC , Yu R , Gao L , Ma Y , Wu D , Yan X , Jin X , Wang YX , Chen Y , Guo L , Zhao Y , Gu Q , Chen J , Wang L , Xu P , Zhang W , Zou D , Biegel H , Lega J , McConnell S , Nagraj VP , Guertin SL , Hulme-Lowe C , Turner SD , Shi Y , Ban X , Walraven R , Hong QJ , Kong S , van de Walle A , Turtle JA , Ben-Nun M , Riley S , Riley P , Koyluoglu U , DesRoches D , Forli P , Hamory B , Kyriakides C , Leis H , Milliken J , Moloney M , Morgan J , Nirgudkar N , Ozcan G , Piwonka N , Ravi M , Schrader C , Shakhnovich E , Siegel D , Spatz R , Stiefeling C , Wilkinson B , Wong A , Cavany S , España G , Moore S , Oidtman R , Perkins A , Kraus D , Kraus A , Gao Z , Bian J , Cao W , Lavista Ferres J , Li C , Liu TY , Xie X , Zhang S , Zheng S , Vespignani A , Chinazzi M , Davis JT , Mu K , Pastore YPiontti A , Xiong X , Zheng A , Baek J , Farias V , Georgescu A , Levi R , Sinha D , Wilde J , Perakis G , Bennouna MA , Nze-Ndong D , Singhvi D , Spantidakis I , Thayaparan L , Tsiourvas A , Sarker A , Jadbabaie A , Shah D , Della Penna N , Celi LA , Sundar S , Wolfinger R , Osthus D , Castro L , Fairchild G , Michaud I , Karlen D , Kinsey M , Mullany LC , Rainwater-Lovett K , Shin 
L , Tallaksen K , Wilson S , Lee EC , Dent J , Grantz KH , Hill AL , Kaminsky J , Kaminsky K , Keegan LT , Lauer SA , Lemaitre JC , Lessler J , Meredith HR , Perez-Saez J , Shah S , Smith CP , Truelove SA , Wills J , Marshall M , Gardner L , Nixon K , Burant JC , Wang L , Gao L , Gu Z , Kim M , Li X , Wang G , Wang Y , Yu S , Reiner RC , Barber R , Gakidou E , Hay SI , Lim S , Murray C , Pigott D , Gurung HL , Baccam P , Stage SA , Suchoski BT , Prakash BA , Adhikari B , Cui J , Rodríguez A , Tabassum A , Xie J , Keskinocak P , Asplund J , Baxter A , Oruc BE , Serban N , Arik SO , Dusenberry M , Epshteyn A , Kanal E , Le LT , Li CL , Pfister T , Sava D , Sinha R , Tsai T , Yoder N , Yoon J , Zhang L , Abbott S , Bosse NI , Funk S , Hellewell J , Meakin SR , Sherratt K , Zhou M , Kalantari R , Yamana TK , Pei S , Shaman J , Li ML , Bertsimas D , Skali Lami O , Soni S , Tazi Bouardi H , Ayer T , Adee M , Chhatwal J , Dalgic OO , Ladd MA , Linas BP , Mueller P , Xiao J , Wang Y , Wang Q , Xie S , Zeng D , Green A , Bien J , Brooks L , Hu AJ , Jahja M , McDonald D , Narasimhan B , Politsch C , Rajanala S , Rumack A , Simon N , Tibshirani RJ , Tibshirani R , Ventura V , Wasserman L , O'Dea EB , Drake JM , Pagano R , Tran QT , Ho LST , Huynh H , Walker JW , Slayton RB , Johansson MA , Biggerstaff M , Reich NG . Proc Natl Acad Sci U S A 2022 119 (15) e2113561119 Significance: This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the United States. Results show high variation in accuracy between and within stand-alone models and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that the ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. 
This work strengthens the evidence base for synthesizing multiple models to support public-health action. |
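The record above describes combining forecasts from many models into a single ensemble forecast. A minimal sketch of one common combination approach, a quantile-median ensemble, is shown below with entirely hypothetical model names and numbers (the model identifiers, quantile levels, and values are illustrative assumptions, not data from the paper):

```python
# Hypothetical sketch of a quantile-median ensemble: for each target
# quantile, take the median of the member models' predicted values.
from statistics import median

# Each (made-up) model reports predicted deaths at three quantiles.
model_forecasts = {
    "model_a": {0.25: 900, 0.5: 1000, 0.75: 1150},
    "model_b": {0.25: 800, 0.5: 950, 0.75: 1100},
    "model_c": {0.25: 1000, 0.5: 1200, 0.75: 1400},
}

def quantile_median_ensemble(forecasts):
    """Combine per-model quantile forecasts into one ensemble forecast."""
    quantiles = next(iter(forecasts.values())).keys()
    return {q: median(m[q] for m in forecasts.values()) for q in quantiles}

print(quantile_median_ensemble(model_forecasts))
# {0.25: 900, 0.5: 1000, 0.75: 1150}
```

Taking the median rather than the mean at each quantile makes the combined forecast robust to a single badly miscalibrated member model.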
Lessons learned from implementation of a national hotline for Ebola virus disease emergency preparedness in South Sudan
Lopez VK , Shetty S , Kouch AT , Khol MT , Lako R , Bili A , Ayuen AD , Jukudu A , Kug AA , Mayen AD , Nyawel E , Berta K , Olu O , Clarke K , Bunga S . Confl Health 2021 15 (1) 27 BACKGROUND: The world's second-largest Ebola outbreak occurred in the Democratic Republic of Congo from 2018 to 2020. At the time, risk of cross-border spread into South Sudan was very high. Thus, the South Sudan Ministry of Health scaled up Ebola preparedness activities in August 2018, including implementation of a 24-h, toll-free Ebola virus disease (EVD) hotline. The primary purpose of the hotline was to receive EVD alerts, and the secondary goal was to provide evidence-based EVD messages to the public. METHODS: To assess whether the hotline augmented Ebola preparedness activities in a protracted humanitarian emergency context, we reviewed 22 weeks of call logs from January to June 2019. Counts and percentages were calculated for all available data. RESULTS: The hotline received 2114 calls during the analysis period, and an additional 1835 missed calls were documented. Callers used the hotline at all hours of the day and were most often men and individuals living in Jubek state, where the national capital is located. The leading reasons for calling were to learn more about EVD (68%) or to report clinical signs or symptoms (16%). Common EVD-related questions included EVD signs and symptoms, transmission, and prevention. Only one call was documented as an EVD alert, and there was no documentation of reported symptoms or whether the person met the EVD case definition. CONCLUSIONS: Basic surveillance information was not collected from callers. To trigger effective outbreak investigation from hotline calls, the hotline should capture who is reporting and from where, symptoms and travel history, and whether the report warrants further investigation. Electronic data capture will enhance data quality and availability of information for review. 
Additionally, the magnitude of missed calls presents a major challenge. Answered calls also offer an opportunity for health communication, so risk communication needs should be considered in hotline design. However, prior to hotline implementation, governments should critically assess whether their hotline would yield actionable data and whether other data sources for surveillance or community concerns are available. |
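The conclusions above recommend electronic capture of specific fields (who is reporting and from where, symptoms, travel history, and whether follow-up is needed). A minimal sketch of such a structured call record is shown below; the class name, field names, and example values are all hypothetical illustrations, not the actual hotline's data model:

```python
# Hypothetical structured call record covering the fields the authors
# recommend capturing so a hotline call can trigger investigation.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HotlineCallRecord:
    timestamp: datetime
    caller_location: str                 # where the caller is reporting from
    reason: str                          # e.g. "information", "symptom_report", "alert"
    symptoms: list = field(default_factory=list)
    travel_history: str = ""
    meets_case_definition: bool = False  # assessed against the EVD case definition
    needs_followup: bool = False         # flag for outbreak investigation teams

record = HotlineCallRecord(
    timestamp=datetime(2019, 3, 14, 22, 5),
    caller_location="Jubek",
    reason="symptom_report",
    symptoms=["fever", "vomiting"],
    travel_history="recent travel from an affected border area",
    needs_followup=True,
)
print(record.needs_followup)  # True
```

Recording these fields at call time, rather than as free-text notes, is what makes it possible to review later whether a reported case met the case definition.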
- Page last reviewed:Feb 1, 2024
- Page last updated:Jan 27, 2025