Results 1 - 20 of 137
1.
Trends Ecol Evol ; 39(6): 515-523, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38508923

ABSTRACT

Measuring and tracking biodiversity from local to global scales is challenging due to its multifaceted nature and the range of metrics used to describe spatial and temporal patterns. Abundance can be used to describe how a population changes across space and time, but it can be measured in different ways, with consequences for the interpretation and communication of spatiotemporal patterns. We differentiate between relative and absolute abundance, and discuss the advantages and disadvantages of each for biodiversity monitoring, conservation, and ecological research. We highlight when absolute abundance can be advantageous and should be prioritized in biodiversity monitoring and research, and conclude by outlining future research directions to better assess when absolute abundance is necessary in biodiversity monitoring.
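
As a toy illustration of the distinction discussed above (not taken from the article), the sketch below converts a relative abundance index from repeated counts into an absolute estimate, assuming a known, constant detection probability; the counts and the detection probability are invented.

```python
# Toy sketch (not from the paper): converting a relative abundance index
# (counts from repeated survey visits) into an absolute abundance estimate,
# assuming a known, constant detection probability. All numbers are invented.
import numpy as np

counts = np.array([12, 9, 15, 11])   # individuals detected on four survey visits
detection_prob = 0.6                  # assumed probability of detecting an individual

relative_index = counts.mean()                        # usable for trends, not true size
absolute_estimate = relative_index / detection_prob   # corrects for imperfect detection

print(f"Relative index: {relative_index:.1f} individuals/visit")
print(f"Absolute abundance estimate: {absolute_estimate:.1f} individuals")
```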


Subject(s)
Biodiversity , Conservation of Natural Resources , Population Density , Population Dynamics , Animals
2.
Heart Rhythm ; 21(4): 410-418, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38246594

ABSTRACT

BACKGROUND: Outcome comparisons among subcutaneous implantable cardioverter-defibrillator (S-ICD) recipients with nonischemic cardiomyopathies are scarce. OBJECTIVE: The aim of this study was to evaluate differences in device-related outcomes among S-ICD recipients with different structural substrates. METHODS: Patients enrolled in the i-SUSI (International SUbcutaneouS Implantable cardioverter defibrillator registry) project were grouped according to the underlying substrate (ischemic vs nonischemic) and subgrouped into dilated cardiomyopathy, hypertrophic cardiomyopathy, Brugada syndrome (BrS), and arrhythmogenic right ventricular cardiomyopathy (ARVC). The main outcomes were the rates of appropriate and inappropriate shocks and device-related complications. RESULTS: Among 1698 patients, the most common underlying substrate was ischemic (31.7%), followed by dilated cardiomyopathy (20.5%), BrS (10.8%), hypertrophic cardiomyopathy (8.5%), and ARVC (4.4%). S-ICD implantation for primary prevention was more common in the nonischemic cohort (70.9% vs 65.4%; P = .037). Over a median (interquartile range) follow-up of 26.5 (12.6-42.8) months, no differences were observed in appropriate shocks between ischemic and nonischemic patients (4.8%/y vs 3.9%/y; log-rank, P = .282). ARVC (9.0%/y; hazard ratio [HR] 2.492; P = .001) and BrS (1.8%/y; HR 0.396; P = .008) constituted the groups with the highest and lowest rates of appropriate shocks, respectively. Device-related complications did not differ between groups (ischemic: 6.4%/y vs nonischemic: 6.1%/y; log-rank, P = .666), nor among underlying substrates (log-rank, P = .089). Nonischemic patients experienced higher rates of inappropriate shocks than did ischemic S-ICD recipients (4.4%/y vs 3.0%/y; log-rank, P = .043), with patients with ARVC (9.9%/y; P = .001) having the highest risk, even after controlling for confounders (adjusted HR 2.243; confidence interval 1.338-4.267; P = .002). CONCLUSION: Most S-ICD recipients were primary prevention nonischemic cardiomyopathy patients. Among these, patients with ARVC received the most frequent appropriate and inappropriate shocks, and patients with BrS the least frequent appropriate shocks.


Subject(s)
Arrhythmogenic Right Ventricular Dysplasia , Cardiomyopathies , Cardiomyopathy, Dilated , Defibrillators, Implantable , Humans , Defibrillators, Implantable/adverse effects , Death, Sudden, Cardiac/epidemiology , Death, Sudden, Cardiac/etiology , Death, Sudden, Cardiac/prevention & control , Electric Countershock/adverse effects , Arrhythmogenic Right Ventricular Dysplasia/complications , Cardiomyopathy, Dilated/complications , Cardiomyopathy, Dilated/therapy , Registries , Treatment Outcome
4.
Glob Chang Biol ; 30(1): e17119, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38273572

ABSTRACT

Comparative extinction risk analysis, which predicts species extinction risk from correlations with traits or geographical characteristics, has gained research attention as a promising tool to support extinction risk assessment in the IUCN Red List of Threatened Species. However, its uptake has been very limited so far, possibly because existing models only predict a species' Red List category, without indicating which Red List criteria may be triggered. This prevents such approaches from being integrated into Red List assessments. We overcome this implementation gap by developing models that predict the probability of species meeting individual Red List criteria. Using data on the world's birds, we evaluated the predictive performance of our criterion-specific models and compared it with the typical criterion-blind modelling approach. We compiled data on biological traits (e.g. range size, clutch size) and external drivers (e.g. change in canopy cover) often associated with extinction risk. For each specific criterion, we modelled the relationship between extinction risk predictors and species' Red List category under that criterion using ordinal regression models. We found that criterion-specific models were better at identifying threatened species than a criterion-blind model (higher sensitivity), but poorer at identifying non-threatened species (lower specificity). As expected, different covariates were important for predicting extinction risk under different criteria. Change in annual temperature was important for criteria related to population trends, while high forest dependency was important for criteria related to restricted area of occupancy or small population size. Our criterion-specific method can support Red List assessors by producing outputs that identify species likely to meet specific criteria, along with the most important predictors. These species can then be prioritised for re-evaluation. We expect this new approach to increase the uptake of extinction risk models in Red List assessments, bridging a long-standing research-implementation gap.
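
A minimal sketch of the criterion-specific modelling step described above, assuming a species-by-predictor table and a per-criterion Red List category coded as an ordered factor; the file and column names are hypothetical and this is not the authors' code.

```python
# Sketch of a criterion-specific ordinal regression, assuming one row per
# species with predictor columns and the species' Red List category under a
# single criterion (e.g. criterion A). Column names are hypothetical.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("bird_predictors.csv")  # hypothetical input file
categories = ["LC", "NT", "VU", "EN", "CR"]
df["category_A"] = pd.Categorical(df["category_A"], categories=categories, ordered=True)

predictors = df[["range_size", "clutch_size", "canopy_cover_change", "temp_change"]]
model = OrderedModel(df["category_A"], predictors, distr="logit")
result = model.fit(method="bfgs", disp=False)

# Predicted probability of each category under criterion A for every species
probs = pd.DataFrame(result.predict(predictors), columns=categories)
print(probs.head())
```

In practice one such model would be fitted separately for each criterion, and the per-criterion probabilities compared against a single criterion-blind model.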


Subject(s)
Conservation of Natural Resources , Endangered Species , Animals , Conservation of Natural Resources/methods , Extinction, Biological , Forests , Risk Assessment , Biodiversity
6.
Conserv Biol ; 38(3): e14227, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38111977

ABSTRACT

The International Union for Conservation of Nature (IUCN) Red List is a central tool for extinction risk monitoring and influences global biodiversity policy and action. To be effective, however, it must consistently account for each driver of extinction. Climate change is rapidly becoming a key extinction driver, but consideration of climate change information remains challenging for the IUCN. Several methods can be used to predict species' future decline, but they often fail to provide estimates of the symptoms of endangerment used by the IUCN. We devised a standardized method to measure climate change impact in terms of change in habitat quality to inform criterion A3 on future population reduction. Using terrestrial nonvolant tetrapods as a case study, we measured this impact as the difference between the current and future climatic niche of each species, defined on the basis of current and future bioclimatic variables under alternative model algorithms, dispersal scenarios, emission scenarios, and climate models. Our models identified 171 species (13% of those analyzed) whose current red-list category could worsen under criterion A3 if they cannot disperse beyond their current range in the future. Categories for 14 species (1.5%) could worsen if maximum dispersal is possible. Although ours is a simulation exercise and not a formal red-list assessment, our results suggest that considering climate change impacts may reduce misclassification and strengthen the consistency and comprehensiveness of IUCN Red List assessments.
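
As an illustrative sketch (not the authors' pipeline), the snippet below estimates the fraction of a species' currently suitable climatic niche that is lost under a future projection in the no-dispersal scenario discussed above; the suitability grids are randomly generated placeholders rather than real model output.

```python
# Illustrative sketch: fraction of the current climatic niche projected to be
# lost in the future, used as a proxy for habitat-quality decline under
# criterion A3. Suitability grids below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
current_suitable = rng.random((100, 100)) > 0.6                       # placeholder current suitability
future_suitable = current_suitable & (rng.random((100, 100)) > 0.3)   # placeholder future suitability

# No-dispersal scenario: only cells that are suitable today can be retained
retained = np.logical_and(current_suitable, future_suitable).sum()
projected_reduction = 1 - retained / current_suitable.sum()
print(f"Projected niche reduction (no dispersal): {projected_reduction:.1%}")
```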


Subject(s)
Biodiversity , Climate Change , Conservation of Natural Resources , Endangered Species , Conservation of Natural Resources/methods , Animals , Ecosystem , Extinction, Biological
7.
Proc Natl Acad Sci U S A ; 120(46): e2308273120, 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-37931098

ABSTRACT

Elevational gradients are characterized by strong environmental changes within small geographical distances, providing important insights into the response of biological communities to climate change. Mountain biodiversity is particularly sensitive to climate change, given the limited capacity to colonize new areas and the competition from upshifting lowland species. Knowledge of the impact of climate change on mountain insect communities is patchy, but elevation is known to influence the parasitic interactions that control insect communities and functions within ecosystems. We analyzed a European dataset of bristle flies, a parasitoid group which regulates insect herbivory in both managed and natural ecosystems. Our dataset spans six decades and multiple elevational bands, and we found marked elevational homogenization in the host specialization of bristle fly species through time. The proportion of specialized parasitoids has increased by ca. 70% at low elevations, from 17 to 29%, and has decreased by ca. 20% at high elevations, from 48 to 37%. As a result, the strong elevational gradient in bristle fly specialization observed in the 1960s has become much flatter over time. As climate warming is predicted to accelerate, the disappearance of specialized parasitoids from high elevations might become even faster. This parasitoid homogenization can reshape the ecological function of mountain insect communities, increasing the risk of herbivory outbreaks at high elevations. Our results add to the mounting evidence that symbiotic species might be especially at risk from climate change: monitoring the effects of these changes is urgently needed to define effective conservation strategies for mountain biodiversity.
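
A minimal sketch of the summary behind the reported trend, assuming a record-level table with a decade column, an elevation-band column, and a flag marking host specialists; the file and column names are hypothetical.

```python
# Sketch: proportion of host-specialist bristle-fly species per elevation band
# and decade, to visualize flattening of the elevational specialization
# gradient over time. Input file and column names are hypothetical.
import pandas as pd

records = pd.read_csv("bristle_fly_records.csv")  # hypothetical record-level data

specialization = (
    records.groupby(["decade", "elevation_band"])["is_specialist"]
    .mean()                      # proportion of specialist records per band and decade
    .unstack("elevation_band")
)
print(specialization.round(2))   # rows = decades, columns = elevation bands
```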


Subject(s)
Altitude , Ecosystem , Animals , Biodiversity , Insecta , Geography
8.
Elife ; 12, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37846960

ABSTRACT

Knowledge of biodiversity is unevenly distributed across the Tree of Life. In the long run, such disparity in awareness unbalances our understanding of life on Earth, influencing policy decisions and the allocation of research and conservation funding. We investigated how humans accumulate knowledge of biodiversity by searching for consistent relationships between scientific (number of publications) and societal (number of views in Wikipedia) interest, and species-level morphological, ecological, and sociocultural factors. Across a random selection of 3019 species spanning 29 Phyla/Divisions, we show that sociocultural factors are the most important correlates of scientific and societal interest in biodiversity, including the fact that a species is useful or harmful to humans, has a common name, and is listed in the International Union for Conservation of Nature Red List. Furthermore, large-bodied, broadly distributed, and taxonomically unique species receive more scientific and societal attention, whereas colorfulness and phylogenetic proximity to humans correlate exclusively with societal attention. These results highlight a favoritism toward limited branches of the Tree of Life and show that scientific and societal priorities in biodiversity research broadly align. This suggests that we may be missing out on key species in our research and conservation agenda simply because they are not on our cultural radar.


Subject(s)
Biodiversity , Conservation of Natural Resources , Humans , Conservation of Natural Resources/methods , Phylogeny
9.
Europace ; 25(9), 2023 08 02.
Article in English | MEDLINE | ID: mdl-37656991

ABSTRACT

AIMS: The HeartLogic Index combines data from multiple implantable cardioverter defibrillators (ICDs) sensors and has been shown to accurately stratify patients at risk of heart failure (HF) events. We evaluated and compared the performance of this algorithm during sinus rhythm and during long-lasting atrial fibrillation (AF). METHODS AND RESULTS: HeartLogic was activated in 568 ICD patients from 26 centres. We found periods of ≥30 consecutive days with an atrial high-rate episode (AHRE) burden <1 h/day and periods with an AHRE burden ≥20 h/day. We then identified patients who met both criteria during the follow-up (AHRE group, n = 53), to allow pairwise comparison of periods. For control purposes, we identified patients with an AHRE burden <1 h throughout their follow-up and implemented 2:1 propensity score matching vs. the AHRE group (matched non-AHRE group, n = 106). In the AHRE group, the rate of alerts was 1.2 [95% confidence interval (CI): 1.0-1.5]/patient-year during periods with an AHRE burden <1 h/day and 2.0 (95% CI: 1.5-2.6)/patient-year during periods with an AHRE-burden ≥20 h/day (P = 0.004). The rate of HF hospitalizations was 0.34 (95% CI: 0.15-0.69)/patient-year during IN-alert periods and 0.06 (95% CI: 0.02-0.14)/patient-year during OUT-of-alert periods (P < 0.001). The IN/OUT-of-alert state incidence rate ratio of HF hospitalizations was 8.59 (95% CI: 1.67-55.31) during periods with an AHRE burden <1 h/day and 2.70 (95% CI: 1.01-28.33) during periods with an AHRE burden ≥20 h/day. In the matched non-AHRE group, the rate of HF hospitalizations was 0.29 (95% CI: 0.12-0.60)/patient-year during IN-alert periods and 0.04 (95% CI: 0.02-0.08)/patient-year during OUT-of-alert periods (P < 0.001). The incidence rate ratio was 7.11 (95% CI: 2.19-22.44). CONCLUSION: Patients received more alerts during periods of AF. The ability of the algorithm to identify increased risk of HF events was confirmed during AF, despite a lower IN/OUT-of-alert incidence rate ratio in comparison with non-AF periods and non-AF patients. CLINICAL TRIAL REGISTRATION: http://clinicaltrials.gov/Identifier: NCT02275637.
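
For readers unfamiliar with the incidence rate ratios reported above, the sketch below computes an IRR with a Wald confidence interval on the log scale; the event counts and patient-years are invented and not taken from the study.

```python
# Sketch: incidence rate ratio (IRR) for HF hospitalizations during IN-alert
# vs OUT-of-alert periods, with a Wald CI on the log scale. All numbers are
# invented for illustration only.
import math

events_in, years_in = 12, 35.0      # hypothetical events and patient-years IN alert
events_out, years_out = 15, 250.0   # hypothetical events and patient-years OUT of alert

rate_in = events_in / years_in
rate_out = events_out / years_out
irr = rate_in / rate_out
se_log_irr = math.sqrt(1 / events_in + 1 / events_out)
ci_low = math.exp(math.log(irr) - 1.96 * se_log_irr)
ci_high = math.exp(math.log(irr) + 1.96 * se_log_irr)
print(f"IRR = {irr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```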


Subject(s)
Atrial Fibrillation , Defibrillators, Implantable , Heart Failure , Humans , Algorithms , Atrial Fibrillation/diagnosis , Atrial Fibrillation/epidemiology , Atrial Fibrillation/therapy , Heart Atria , Heart Failure/diagnosis , Heart Failure/epidemiology , Heart Failure/therapy
11.
Conserv Biol ; 37(6): e14139, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37394972

ABSTRACT

Although the International Union for Conservation of Nature (IUCN) Red List of Threatened Species is central to the implementation of conservation policies, its usefulness is hampered by the 14% of species classified as data-deficient (DD) because information to evaluate these species' extinction risk was lacking when they were last assessed or because assessors did not appropriately account for uncertainty. Robust methods are needed to identify which DD species are more likely to be reclassified in one of the data-sufficient IUCN Red List categories. We devised a reproducible method to help red-list assessors prioritize reassessment of DD species and tested it with 6887 DD species of mammals, reptiles, amphibians, fishes, and Odonata (dragonflies and damselflies). For each DD species in these groups, we calculated its probability of being classified in a data-sufficient category if reassessed today from covariates measuring available knowledge (e.g., number of occurrence records or published articles available), knowledge proxies (e.g., remoteness of the range), and species characteristics (e.g., nocturnality); calculated the change in this probability since the last assessment from the increase in available knowledge (e.g., new occurrence records); and determined whether the species might qualify as threatened based on the recent rate of habitat loss derived from global land-cover maps. We identified 1907 species with a >0.5 probability of being reassessed in a data-sufficient category; 624 species for which this probability had increased by >0.25 since the last assessment; and 77 species that could be reassessed as near threatened or threatened based on habitat loss. Combining these 3 elements, our results provided a list of species likely to be data-sufficient, such that the comprehensiveness and representativeness of the IUCN Red List can be improved.
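
A minimal sketch of the reclassification-probability step, assuming a training set of previously reassessed species and a table of DD species carrying the knowledge covariates mentioned above; file and column names are hypothetical, and the plain logistic regression used here is a simplification rather than the authors' exact model.

```python
# Sketch: score DD species by their probability of being reassessed in a
# data-sufficient category, learned from species that were reassessed before.
# File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("reassessed_species.csv")   # hypothetical training data
dd = pd.read_csv("dd_species.csv")              # hypothetical DD species to score
features = ["n_occurrence_records", "n_publications", "range_remoteness", "nocturnal"]

model = LogisticRegression(max_iter=1000)
model.fit(train[features], train["now_data_sufficient"])

dd["p_data_sufficient"] = model.predict_proba(dd[features])[:, 1]
priority = dd.sort_values("p_data_sufficient", ascending=False)
print(priority[["species", "p_data_sufficient"]].head(10))  # top reassessment candidates
```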


Subject(s)
Conservation of Natural Resources , Odonata , Animals , Endangered Species , Extinction, Biological , Ecosystem , Mammals , Fishes , Biodiversity
12.
ESC Heart Fail ; 10(4): 2469-2478, 2023 08.
Article in English | MEDLINE | ID: mdl-37278122

ABSTRACT

AIMS: The HeartLogic algorithm combines multiple implantable defibrillator (ICD) sensor data and has proved to be a sensitive and timely predictor of impending heart failure (HF) decompensation in cardiac resynchronization therapy (CRT-D) patients. We evaluated the performance of this algorithm in non-CRT ICD patients and in the presence of co-morbidities. METHODS AND RESULTS: The HeartLogic feature was activated in 568 ICD patients (410 with CRT-D) from 26 centres. The median follow-up was 26 months [25th-75th percentile: 16-37]. During follow-up, 97 hospitalizations were reported (53 cardiovascular) and 55 patients died. We recorded 1200 HeartLogic alerts in 370 patients. Overall, the time IN the alert state was 13% of the total observation period. The rate of cardiovascular hospitalizations or death was 0.48/patient-year (95% CI: 0.37-0.60) with the HeartLogic IN the alert state and 0.04/patient-year (95% CI: 0.03-0.05) OUT of the alert state, with an incidence rate ratio of 13.35 (95% CI: 8.83-20.51, P < 0.001). Among patient characteristics, atrial fibrillation (AF) on implantation (HR: 1.62, 95% CI: 1.27-2.07, P < 0.001) and chronic kidney disease (CKD) (HR: 1.53, 95% CI: 1.21-1.93, P < 0.001) independently predicted alerts. HeartLogic alerts were not associated with CRT-D versus ICD implantation (HR: 1.03, 95% CI: 0.82-1.30, P = 0.775). Comparisons of the clinical event rates in the IN alert state with those in the OUT of alert state yielded incidence rate ratios ranging from 9.72 to 14.54 (all P < 0.001) in all groups of patients stratified by: CRT-D/ICD, AF/non-AF, and CKD/non-CKD. After multivariate correction, the occurrence of alerts was associated with cardiovascular hospitalization or death (HR: 1.92, 95% CI: 1.05-3.51, P = 0.036). CONCLUSIONS: The burden of HeartLogic alerts was similar between CRT-D and ICD patients, while patients with AF and CKD seemed more exposed to alerts. Nonetheless, the ability of the HeartLogic algorithm to identify periods of significantly increased risk of clinical events was confirmed, regardless of the type of device and the presence of AF or CKD.
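
A minimal sketch of a time-dependent Cox model with the alert state as a time-varying covariate, using the lifelines library; the long-format interval table and its column names are assumptions for illustration, not the study's actual analysis code.

```python
# Sketch: time-dependent Cox regression with the HeartLogic alert state as a
# time-varying covariate, adjusted for AF and CKD. The long-format table
# (one row per patient interval) and its column names are assumptions.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

intervals = pd.read_csv("alert_intervals.csv")
# expected columns: patient_id, start, stop, in_alert (0/1), af (0/1), ckd (0/1), event (0/1)

ctv = CoxTimeVaryingFitter()
ctv.fit(
    intervals,
    id_col="patient_id",
    start_col="start",
    stop_col="stop",
    event_col="event",
)
ctv.print_summary()  # hazard ratio for in_alert adjusted for AF and CKD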


Subject(s)
Atrial Fibrillation , Cardiac Resynchronization Therapy , Defibrillators, Implantable , Heart Failure , Humans , Cardiac Resynchronization Therapy/methods , Heart Failure/epidemiology , Heart Failure/therapy , Atrial Fibrillation/etiology , Algorithms , Morbidity
14.
Eur Heart J Suppl ; 25(Suppl C): C331-C336, 2023 May.
Article in English | MEDLINE | ID: mdl-37125308

ABSTRACT

Heart failure (HF) is a major and still growing medical problem, characterized by episodes of acute decompensation that are associated with a negative prognosis and a significant burden on patients, physicians, and healthcare resources. Early detection of incipient HF may allow outpatient treatment before patients severely decompensate, thus reducing HF hospitalizations and related costs. The HeartLogic™ algorithm is an automatic, remotely managed system combining data directly related to HF pathophysiology into a single score, the HeartLogic™ index. This index has proved effective in predicting the risk of incipient HF decompensation, allowing resources to be redistributed from low-risk to high-risk patients in a timely and cost-saving manner. The alert-based remote management system appears more efficient than one based on scheduled remote transmissions in terms of caregivers' workload and alert detection timing. The widespread application of the HeartLogic™ algorithm requires the resolution of logistical and financial issues and the adoption of a pre-defined, functional workflow. In this paper, we review general aspects of remote monitoring in HF patients, the functioning and pathophysiological basis of the HeartLogic™ index, its efficiency in the management of HF patients, and the economic effects and organizational changes associated with its use.

15.
Ecol Evol ; 13(5): e9961, 2023 May.
Article in English | MEDLINE | ID: mdl-37181203

ABSTRACT

We call for journals to commit to requiring that open data be archived in a format that is simple and clear for readers to understand and use. If applied consistently, these requirements will allow contributors to be acknowledged for their work through citation of open data, and will facilitate scientific progress.

16.
Sci Data ; 10(1): 253, 2023 05 03.
Article in English | MEDLINE | ID: mdl-37137926

ABSTRACT

Knowledge of species' functional traits is essential for understanding biodiversity patterns, predicting the impacts of global environmental changes, and assessing the efficiency of conservation measures. Bats are major components of mammalian diversity and occupy a variety of ecological niches and geographic distributions. However, an extensive compilation of their functional traits and ecological attributes is still missing. Here we present EuroBaTrait 1.0, the most comprehensive and up-to-date trait dataset covering 47 European bat species. The dataset includes data on 118 traits including genetic composition, physiology, morphology, acoustic signature, climatic associations, foraging habitat, roost type, diet, spatial behaviour, life history, pathogens, phenology, and distribution. We compiled the bat trait data obtained from three main sources: (i) a systematic literature and dataset search, (ii) unpublished data from European bat experts, and (iii) observations from large-scale monitoring programs. EuroBaTrait is designed to provide an important data source for comparative and trait-based analyses at the species or community level. The dataset also exposes knowledge gaps in species, geographic and trait coverage, highlighting priorities for future data collection.
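
A small sketch of how the coverage gaps mentioned above could be summarized from a species-by-trait table such as EuroBaTrait, assuming missing entries are coded as NaN; the file name is hypothetical.

```python
# Sketch: summarize trait and species coverage in a species-by-trait table,
# assuming missing values are NaN. The file name is hypothetical.
import pandas as pd

traits = pd.read_csv("eurobatrait_1_0.csv", index_col="species")

trait_coverage = traits.notna().mean().sort_values()          # share of species scored per trait
species_coverage = traits.notna().mean(axis=1).sort_values()  # share of traits scored per species

print("Least-covered traits:\n", trait_coverage.head(10))
print("Least-covered species:\n", species_coverage.head(10))
```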


Subject(s)
Chiroptera , Animals , Biodiversity , Chiroptera/physiology , Ecosystem , Europe , Mammals
17.
Europace ; 25(4): 1467-1474, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36881780

ABSTRACT

AIMS: Patients with atrial fibrillation frequently experience sleep-disordered breathing, and both conditions are highly prevalent in the presence of heart failure (HF). We explored the association between the combination of an HF and a sleep apnoea (SA) index and the incidence of atrial high-rate events (AHRE) in patients with implantable defibrillators (ICDs). METHODS AND RESULTS: Data were prospectively collected from 411 consecutive HF patients with ICDs. The IN-alert HF state was measured by the multi-sensor HeartLogic Index (>16), and the ICD-measured Respiratory Disturbance Index (RDI) was computed to identify severe SA. The endpoints were as follows: daily AHRE burden of ≥5 min, ≥6 h, and ≥23 h. During a median follow-up of 26 months, the time in the IN-alert HF state was 13% of the total observation period. The RDI value was ≥30 episodes/h (severe SA) during 58% of the observation period. An AHRE burden of ≥5 min/day was documented in 139 (34%) patients, ≥6 h/day in 89 (22%) patients, and ≥23 h/day in 68 (17%) patients. The IN-alert HF state was independently associated with AHRE regardless of the daily burden threshold: hazard ratios from 2.17 for ≥5 min/day to 3.43 for ≥23 h/day (P < 0.01). An RDI ≥ 30 episodes/h was associated only with an AHRE burden ≥5 min/day [hazard ratio 1.55 (95% confidence interval: 1.11-2.16), P = 0.001]. The combination of the IN-alert HF state and RDI ≥ 30 episodes/h accounted for only 6% of the follow-up period and was associated with high rates of AHRE occurrence (from 28 events/100 patient-years for AHRE burden ≥5 min/day to 22 events/100 patient-years for AHRE burden ≥23 h/day). CONCLUSIONS: In HF patients, the occurrence of AHRE is independently associated with the ICD-measured IN-alert HF state and RDI ≥ 30 episodes/h. The coexistence of these two conditions occurs rarely but is associated with a very high rate of AHRE occurrence. CLINICAL TRIAL REGISTRATION: URL: http://clinicaltrials.gov/Identifier: NCT02275637.
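
A minimal sketch of the daily exposure classification combining the HF alert state and the device-measured respiratory disturbance index, using the thresholds given above (HeartLogic > 16, RDI ≥ 30 episodes/h); the per-day table and its column names are assumptions for illustration.

```python
# Sketch: classify each patient-day by HF alert state and severe sleep apnoea,
# and report the share of follow-up spent in each exposure state. The input
# table and its column names are hypothetical.
import pandas as pd

daily = pd.read_csv("daily_device_data.csv")  # hypothetical: one row per patient-day
daily["in_alert"] = daily["heartlogic_index"] > 16
daily["severe_sa"] = daily["rdi_episodes_per_hour"] >= 30
daily["combined_risk"] = daily["in_alert"] & daily["severe_sa"]

# Share of follow-up spent in each exposure state (the combined state was rare)
print(daily[["in_alert", "severe_sa", "combined_risk"]].mean())
```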


Subject(s)
Atrial Fibrillation , Defibrillators, Implantable , Heart Failure , Sleep Apnea Syndromes , Humans , Defibrillators, Implantable/adverse effects , Atrial Fibrillation/diagnosis , Atrial Fibrillation/epidemiology , Atrial Fibrillation/therapy , Risk Assessment , Heart Failure/diagnosis , Heart Failure/epidemiology , Heart Failure/therapy
18.
J Cardiovasc Electrophysiol ; 34(5): 1257-1267, 2023 05.
Article in English | MEDLINE | ID: mdl-36994907

ABSTRACT

INTRODUCTION: The prediction of ventricular tachyarrhythmias among patients with implantable cardioverter defibrillators is difficult with available clinical tools. We sought to assess whether, in patients with heart failure (HF) and reduced ejection fraction with defibrillators, physiological sensor-based HF status, as summarized by the HeartLogic index, could predict appropriate device therapies. METHODS: Five hundred and sixty-eight consecutive HF patients with defibrillators (n = 158, 28%) or cardiac resynchronization therapy-defibrillators (n = 410, 72%) were included in this prospective observational multicenter analysis. The association of both the HeartLogic index and its physiological components with defibrillator shocks and overall appropriate therapies was assessed in regression and time-dependent Cox models. RESULTS: Over a follow-up of 25 (15-35) months, 122 (21%) patients received an appropriate device therapy (shock, n = 74, 13%), while the HeartLogic index crossed the threshold value (alert, HeartLogic ≥ 16) 1200 times (0.71 alerts/patient-year) in 370 (65%) subjects. The occurrence of ≥1 HeartLogic alert was significantly associated with both appropriate shocks (hazard ratio [HR]: 2.44, 95% confidence interval [CI]: 1.49-3.97, p = .003) and any appropriate defibrillator therapies. In multivariable time-dependent Cox models, the weekly IN-alert state was the strongest predictor of appropriate defibrillator shocks (HR: 2.94, 95% CI: 1.73-5.01, p < .001) and overall therapies. Compared with stable patients, patients with appropriate shocks had significantly higher values of the HeartLogic index, third heart sound amplitude, and resting heart rate 30-60 days before device therapy. CONCLUSION: The HeartLogic index is an independent dynamic predictor of appropriate defibrillator therapies. The combined index and its individual physiological components change before the arrhythmic event occurs.


Subject(s)
Cardiac Resynchronization Therapy , Defibrillators, Implantable , Heart Failure , Tachycardia, Ventricular , Ventricular Dysfunction, Left , Humans , Tachycardia, Ventricular/diagnosis , Tachycardia, Ventricular/therapy , Tachycardia, Ventricular/complications , Heart Failure/diagnosis , Heart Failure/therapy , Heart Failure/complications , Cardiac Resynchronization Therapy/adverse effects , Ventricular Dysfunction, Left/therapy
19.
Heart Rhythm ; 20(7): 992-997, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36966948

ABSTRACT

BACKGROUND: The HeartLogic algorithm (Boston Scientific) has proved to be a sensitive and timely predictor of impending heart failure (HF) decompensation. OBJECTIVE: The purpose of this study was to determine whether remotely monitored data from this algorithm could be used to identify patients at high risk for mortality. METHODS: The algorithm combines implantable cardioverter-defibrillator (ICD)-measured accelerometer-based heart sounds, intrathoracic impedance, respiration rate, ratio of respiration rate to tidal volume, night heart rate, and patient activity into a single index. An alert is issued when the index crosses a programmable threshold. The feature was activated in 568 ICD patients from 26 centers. RESULTS: During a median follow-up of 26 months [25th-75th percentile 16-37], 1200 alerts were recorded in 370 patients (65%). Overall, the time in the IN-alert state was 13% of the total observation period (151/1159 years) and 20% of the follow-up period of the 370 patients with alerts. During follow-up, 55 patients died (46 in the group with alerts). The rate of death was 0.25 per patient-year (95% confidence interval [CI] 0.17-0.34) in the IN-alert state and 0.02 per patient-year (95% CI 0.01-0.03) OUT of the alert state, with an incidence rate ratio of 13.72 (95% CI 7.62-25.60; P <.001). After multivariate correction for baseline confounders (age, ischemic cardiomyopathy, kidney disease, atrial fibrillation), the IN-alert state remained significantly associated with the occurrence of death (hazard ratio 9.18; 95% CI 5.27-15.99; P <.001). CONCLUSION: The HeartLogic algorithm provides an index that can be used to identify patients at higher risk for all-cause mortality. The index state identifies periods of significantly increased risk of death.
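
A schematic sketch of the alert logic described above: several daily sensor trends are combined into a single index and an alert is issued when it crosses a programmable threshold. The weights, column names, and threshold handling are placeholders, not the proprietary HeartLogic implementation.

```python
# Schematic sketch of composite-index alerting: weighted combination of daily
# sensor deviations, with an alert on upward threshold crossings. Weights,
# columns, and data are placeholders for illustration only.
import pandas as pd

sensors = pd.read_csv("daily_sensor_trends.csv")  # hypothetical per-day sensor deviations
weights = {
    "heart_sounds": 0.3, "thoracic_impedance": 0.2, "respiration_rate": 0.15,
    "rate_volume_ratio": 0.15, "night_heart_rate": 0.1, "activity": 0.1,
}
threshold = 16  # programmable alert threshold

sensors["hf_index"] = sum(w * sensors[col] for col, w in weights.items())
sensors["alert"] = (sensors["hf_index"] >= threshold) & (sensors["hf_index"].shift(fill_value=0) < threshold)
print(sensors.loc[sensors["alert"], ["date", "hf_index"]])  # days on which an alert is issued
```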


Subject(s)
Atrial Fibrillation , Cardiac Resynchronization Therapy , Defibrillators, Implantable , Heart Failure , Humans , Cardiac Resynchronization Therapy/adverse effects , Heart Failure/diagnosis , Heart Failure/therapy , Heart Failure/etiology , Atrial Fibrillation/therapy , Algorithms
20.
Ecol Evol ; 13(1): e9752, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36713492

ABSTRACT

The viability of populations can be quantified with several measures, such as the probability of extinction, the mean time to extinction, or the population size. While conservation management decisions can be based on these measures, it has not yet been systematically explored whether different viability measures rank species and scenarios similarly and whether one viability measure can be converted into another to compare studies. To address this challenge, we conducted a quantitative comparison of eight viability measures based on the simulated population dynamics of more than 4500 virtual species. We (a) compared the ranking of scenarios based on different viability measures, (b) assessed direct correlations between the measures, and (c) explored whether parameters in the simulation models can alter the relationship between pairs of viability measures. We found that viability measures ranked species similarly. Despite this, direct correlations between the different measures were often weak and could not be generalized. This can be explained by the loss of information due to the aggregation of raw data into a single number, the effect of model parameters on the relationship between viability measures, and the fact that distributions, such as the probability of extinction over time, cannot be ranked objectively. Similar scenario rankings by different viability measures show that the choice of viability metric does not, in many cases, alter which population is regarded as more viable or which management option is best. However, the more two scenarios or populations differ, the more likely it becomes that different measures produce different rankings. We thus recommend that PVA studies publish raw simulation data, which not only describes all risks and opportunities to the reader but also facilitates meta-analyses of PVA studies.
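
As a small illustration of the ranking comparison described above (with invented numbers), the sketch below computes Spearman rank correlations between three viability measures across the same set of scenarios, after aligning their direction so that larger values always mean greater viability.

```python
# Sketch: how similarly do different viability measures rank the same
# scenarios? Values below are invented; risk-type measures are sign-flipped so
# that larger always means "more viable" before ranking.
import pandas as pd
from scipy.stats import spearmanr

measures = pd.DataFrame({
    "extinction_prob":      [0.05, 0.20, 0.60, 0.10, 0.35],
    "mean_time_to_extinct": [480, 150, 40, 300, 90],      # years; higher = more viable
    "final_pop_size":       [900, 300, 20, 650, 120],
})
measures["extinction_prob"] = -measures["extinction_prob"]  # flip so higher = more viable

rho, _ = spearmanr(measures)   # pairwise rank correlations between measures
print(pd.DataFrame(rho, index=measures.columns, columns=measures.columns).round(2))
```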
