Results 1 - 20 of 5,314
1.
Clin Psychopharmacol Neurosci ; 22(3): 442-450, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39069683

ABSTRACT

Objective: This pharmacovigilance study evaluated the profile of clozapine-related adverse events by region using the Food and Drug Administration Adverse Event Reporting System (FAERS). Methods: We categorized each case into five regions (America, Europe/West Asia, Oceania, Asia, and Africa) based on the reporting country information in the FAERS database. The number of clozapine-related adverse events reported in each region was aggregated according to the preferred term (PT) and the Standardized Medical Dictionary for Regulatory Activities (MedDRA) Query (SMQ). Results: A total of 101,872 clozapine-related adverse events were registered in the FAERS database. In America and Europe, leukocyte or neutrophil count abnormalities accounted for half of the top 10 PTs by relative reporting rate. However, Asia had higher relative reporting rates of pyrexia and salivary hypersecretion (13.91% and 10.85%, respectively). Regarding the SMQ, the relative reporting rates of infective pneumonia, convulsions, extrapyramidal syndrome, gastrointestinal obstruction, and hyperglycaemia/new onset diabetes mellitus were higher in Asia than in other regions (5.26%, 9.72%, 12.65%, 5.13%, and 8.26%, respectively), with significant differences even after adjusting for confounding factors using multivariate logistic regression analysis. Conclusion: Spontaneous reports of adverse events associated with clozapine show regional disparities, particularly in Asia, where concentration-dependent adverse events are more frequently reported. However, the spontaneous reporting system has several limitations, requiring further research for validation.
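The relative reporting rate compared across regions above is simply a within-region share of reports listing a given preferred term. A minimal sketch, with invented counts rather than actual FAERS data:

```python
# Hypothetical FAERS-style records as (region, preferred term) pairs; the
# values are invented for illustration, not taken from the FAERS database.
reports = [
    ("Asia", "Pyrexia"), ("Asia", "Pyrexia"),
    ("Asia", "Salivary hypersecretion"), ("Asia", "Neutropenia"),
    ("America", "Neutropenia"), ("America", "Neutropenia"),
    ("America", "Pyrexia"), ("Europe", "Neutropenia"),
]

def relative_reporting_rate(reports, region, term):
    """Share (%) of a region's reports that list the given preferred term."""
    regional = [t for r, t in reports if r == region]
    return 100.0 * regional.count(term) / len(regional)

print(relative_reporting_rate(reports, "Asia", "Pyrexia"))  # 50.0
```

Comparing these rates across regions (rather than raw counts) is what allows profiles from regions with very different reporting volumes to be contrasted.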

2.
Animals (Basel) ; 14(14)2024 Jul 09.
Article in English | MEDLINE | ID: mdl-39061485

ABSTRACT

Mastitis, an important disease in dairy cows, causes significant losses in herd profitability. Accurate diagnosis is crucial for adequate control. Studies using artificial intelligence (AI) models to classify, identify, predict, and diagnose mastitis show promise in improving mastitis control. This bibliometric review aimed to evaluate AI and bovine mastitis terms in the most relevant Scopus-indexed papers from 2011 to 2021. Sixty-two documents were analyzed, revealing key terms, prominent researchers, relevant publications, main themes, and keyword clusters. "Mastitis" and "machine learning" were the most cited terms, with an increasing trend from 2018 to 2021. Other terms, such as "sensors" and "mastitis detection", also emerged. The United States was the most cited country and presented the largest collaboration network. Publications on mastitis and AI models notably increased from 2016 to 2021, indicating growing interest. However, few studies utilized AI for bovine mastitis detection, primarily employing artificial neural network models. This suggests a clear potential for further research in this area.

3.
Curr Protoc ; 4(7): e1066, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39073034

ABSTRACT

Image data from a single animal in neuroscientific experiments can comprise terabytes of information. Full studies can thus be challenging to analyze, store, view, and manage. What follows is an updated guide for preparing and sharing big neuroanatomical image data. © 2024 Wiley Periodicals LLC. Basic Protocol 1: Naming and organizing images and metadata. Basic Protocol 2: Preparing and annotating images for presentations and figures. Basic Protocol 3: Assessing the internet environment and optimizing images.


Subject(s)
Image Processing, Computer-Assisted , Neuroanatomy , Neuroanatomy/methods , Image Processing, Computer-Assisted/methods , Animals , Internet , Humans , Metadata
4.
Pharmaceuticals (Basel) ; 17(7)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39065726

ABSTRACT

The unintended modulation of nuclear receptor (NR) activity by drugs can lead to toxicities among the endocrine, gastrointestinal, hepatic, cardiovascular, and central nervous systems. While secondary pharmacology screening assays include NRs, safety risks due to unintended interactions of small molecule drugs with NRs remain poorly understood. To identify potential nonclinical and clinical safety effects resulting from functional interactions with 44 of the 48 human-expressed NRs, we conducted a systematic narrative review of the scientific literature and tissue expression data, and used curated databases (OFF-X™, Clarivate) to organize reported toxicities linked to the functional modulation of NRs in a tabular and machine-readable format. The top five NRs associated with the highest number of safety alerts from peer-reviewed journals, regulatory agency communications, congresses/conferences, clinical trial registries, and company communications were the Glucocorticoid Receptor (GR, 18,328), Androgen Receptor (AR, 18,219), Estrogen Receptor (ER, 12,028), Retinoic acid receptors (RAR, 10,450), and Pregnane X receptor (PXR, 8044). Toxicities associated with NR modulation include hepatotoxicity, cardiotoxicity, endocrine disruption, carcinogenicity, metabolic disorders, and neurotoxicity. These toxicities often arise from the dysregulation of receptors like Peroxisome proliferator-activated receptors (PPARα, PPARγ), the ER, PXR, AR, and GR. This dysregulation leads to various health issues, including liver enlargement, hepatocellular carcinoma, heart-related problems, hormonal imbalances, tumor growth, metabolic syndromes, and brain function impairment. Gene expression analysis using heatmaps for human and rat tissues complemented the functional modulation of NRs associated with the reported toxicities.
Interestingly, certain NRs showed ubiquitous expression in tissues not previously linked to toxicities, suggesting the potential utilization of organ-specific NR interactions for therapeutic purposes.

5.
Sensors (Basel) ; 24(14)2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39065952

ABSTRACT

The acquisition, processing, mining, and visualization of sensory data for knowledge discovery and decision support has recently been a popular area of research. It is particularly useful because of its role in the continuous improvement of healthcare and related disciplines. As a result, a huge amount of data has been collected and analyzed. These data are made available to the research community in various shapes and formats; their representation and study in the form of graphs or networks is also an area of research on which many scholars are focused. However, the large size of such graph datasets poses challenges in data mining and visualization. For example, knowledge discovery from the Bio-Mouse-Gene dataset, which has over 43 thousand nodes and 14.5 million edges, is a non-trivial job. In this regard, summarizing such large graphs is a useful alternative. Graph summarization enables the efficient analysis of such complex and large-sized data. During summarization, all nodes that have similar structural properties are merged together. In doing so, traditional methods often overlook the importance of personalizing the summary, which would be helpful in highlighting certain targeted nodes. Personalized or context-specific scenarios require a more tailored approach to accurately capture distinct patterns and trends. Hence, personalized graph summarization aims to acquire a concise depiction of the graph, emphasizing connections that are closer in proximity to a specific set of given target nodes. In this paper, we present a faster algorithm for the personalized graph summarization (PGS) problem, named IPGS; it has been designed to facilitate enhanced and effective data mining and visualization of datasets from various domains, including biosensors.
Our objective is to obtain a compression ratio similar to that of the state-of-the-art PGS algorithm, but in less time. To achieve this, we improve on the execution time of the current state-of-the-art approach by using weighted locality-sensitive hashing, and we evaluate the result through experiments on eight large publicly available datasets. The experiments demonstrate the effectiveness and scalability of IPGS while providing a compression ratio similar to the state-of-the-art approach. In this way, our research contributes to the study and analysis of sensory datasets through the perspective of graph summarization. We also present a detailed study on the Bio-Mouse-Gene dataset, conducted to investigate the effectiveness of graph summarization in the domain of biosensors.
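The core merging step of graph summarization, collapsing structurally similar nodes into supernodes, can be sketched as follows. Grouping nodes by exact neighbour sets is a deliberate simplification of the weighted locality-sensitive hashing described in the abstract, and the toy graph is invented:

```python
from collections import defaultdict

# Toy adjacency list; grouping nodes with identical neighbour sets is a much
# simplified stand-in for the LSH-based grouping used by IPGS.
graph = {
    "a": {"x", "y"}, "b": {"x", "y"},   # a and b share all neighbours
    "c": {"y"}, "x": {"a", "b"}, "y": {"a", "b", "c"},
}

def summarize(graph):
    groups = defaultdict(list)
    for node, nbrs in graph.items():
        groups[frozenset(nbrs)].append(node)  # identical neighbour sets merge
    return [sorted(members) for members in groups.values()]

supernodes = summarize(graph)
print(sorted(supernodes))  # [['a', 'b'], ['c'], ['x'], ['y']]
```

A personalized variant would additionally weight this grouping so that nodes near the given target set are kept in finer-grained supernodes than distant ones.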

6.
J Funct Morphol Kinesiol ; 9(3)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39051275

ABSTRACT

The aim of this study was to test a machine learning (ML) model to predict high-intensity actions and body impacts during youth football training. Sixty under-15, -17, and -19 sub-elite Portuguese football players were monitored over a 6-week period. External training load data were collected for the target variables of accelerations (ACCs), decelerations (DECs), and dynamic stress load (DSL) using an 18 Hz global positioning system (GPS). Additionally, we monitored perceived exertion and biological characteristics using total quality recovery (TQR), rating of perceived exertion (RPE), session RPE (sRPE), chronological age, maturation offset (MO), and age at peak height velocity (APHV). The ML model was computed by a feature selection process with a linear regression forecast and bootstrap method. The predictive analysis revealed that the players' MO demonstrated varying degrees of effectiveness in predicting their DEC and ACC across different interquartile ranges (IQRs). After predictive analysis, the following performance values were observed: DEC (mean predicted = 41, ß = 3.24, intercept = 37.0), lower IQR (IQR predicted = 36.6, ß = 3.24, intercept = 37.0), and upper IQR (IQR predicted = 46 decelerations, ß = 3.24, intercept = 37.0). The players' MO also demonstrated the ability to predict their upper IQR (IQR predicted = 51, ß = 3.8, intercept = 40.62), lower IQR (IQR predicted = 40, ß = 3.8, intercept = 40.62), and ACC (mean predicted = 46 accelerations, ß = 3.8, intercept = 40.62). The ML model showed poor performance in predicting the players' ACC and DEC using MO (MSE = 2.47 to 4.76; RMSE = 1.57 to 2.18; R² = -0.78 to 0.02). Maturational concerns are prevalent in football performance and should be regularly checked, as the current ML model treated MO as the sole predictor of ACC, DEC, and DSL.
Applying ML models to assess automated tracking data can be an effective strategy, particularly in the context of forecasting peak ACC, DEC, and body impacts in sub-elite youth football training.
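The linear-regression-plus-bootstrap forecast described above can be sketched roughly as follows; the maturation offset and deceleration values are invented for illustration and are not the study's data:

```python
import random
import statistics

random.seed(1)
# Hypothetical (maturation offset, decelerations-per-session) pairs;
# the values are illustrative, not taken from the study.
mo = [-1.2, -0.5, 0.1, 0.8, 1.5, 2.0]
dec = [33, 36, 37, 40, 42, 44]

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Bootstrap the slope by resampling (x, y) pairs with replacement.
boots = []
for _ in range(500):
    idx = [random.randrange(len(mo)) for _ in range(len(mo))]
    xs = [mo[i] for i in idx]
    if len(set(xs)) > 1:  # skip degenerate resamples with zero x-variance
        boots.append(slope(xs, [dec[i] for i in idx]))
boots.sort()
lo, hi = boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))]
print(round(slope(mo, dec), 2))  # 3.35
```

The percentile interval `(lo, hi)` gives an uncertainty range for the slope, which is the role the bootstrap plays in the study's forecast.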

7.
Int J Soc Psychiatry ; : 207640241264674, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39049604

ABSTRACT

AIMS: In this study, we examined the relationship between 131 suicide-related Google search terms, grouped into nine categories, and the number of suicide cases per month in Ecuador from January 2011 to December 2021. METHODS: First, we applied time-series analysis to eliminate autocorrelation and seasonal patterns to prevent spurious correlations. Second, we used Pearson's correlation to assess the relationship between Google search terms and suicide rates. Third, cross-correlation analysis was used to explore the potential delayed effects between these variables. Fourth, we extended the correlation and cross-correlation analyses by three demographic characteristics: gender, age, and region. RESULTS: Significant correlations were found in all categories between Google search trends and suicide rates in Ecuador, with predominantly positive and moderate correlations. The terms 'stress' (.548), 'prevention' (.438), and 'disorders' (.435) showed the strongest associations. While global trends indicated moderate correlations, sensitivity analysis revealed higher coefficients in men, young adults, and the Highlands region. Specific patterns emerged in subgroups, such as 'digital violence' showing significant correlations in certain demographics, and 'trauma' presenting a unique temporal pattern in women. In general, cross-correlation analysis showed an average negative correlation of -.191 at lag 3. CONCLUSION: Google search data do not provide further information about users, such as demographics or mental health records. Hence, our results are simply correlations and should not be interpreted as causal effects. Our findings highlight a need for tailored suicide prevention strategies that recognize the complex dynamics of suicide risk across demographics and time periods.
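The prewhiten-then-correlate workflow (methods steps one to three) can be sketched with synthetic series. `lagged_corr` is a hypothetical helper; first-differencing here stands in for the fuller treatment of autocorrelation and seasonality described in the methods:

```python
import numpy as np

rng = np.random.default_rng(0)
months = 132  # Jan 2011 - Dec 2021, as in the study period
searches = np.cumsum(rng.normal(size=months))        # synthetic search index
suicides = 0.5 * searches + rng.normal(size=months)  # synthetic outcome series

# First-difference both series to remove trend before correlating; a full
# analysis would also remove seasonal components to avoid spurious results.
ds, dy = np.diff(searches), np.diff(suicides)
pearson = np.corrcoef(ds, dy)[0, 1]

def lagged_corr(x, y, lag):
    """Correlation of x at time t with y at time t + lag."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

print(round(pearson, 2), round(lagged_corr(ds, dy, 3), 2))
```

Scanning `lagged_corr` over several lags is the cross-correlation analysis used to probe delayed effects, such as the lag-3 association reported above.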

8.
Methods Mol Biol ; 2814: 223-245, 2024.
Article in English | MEDLINE | ID: mdl-38954209

ABSTRACT

Dictyostelium represents a stripped-down model for understanding how cells make decisions during development. The complete life cycle takes around a day and the fully differentiated structure is composed of only two major cell types. With this apparent reduction in "complexity," single cell transcriptomics has proven to be a valuable tool in defining the features of developmental transitions and cell fate separation events, even providing causal information on how mechanisms of gene expression can feed into cell decision-making. These scientific outputs have been strongly facilitated by the ease of non-disruptive single cell isolation-allowing access to more physiological measures of transcript levels. In addition, the limited number of cell states during development allows the use of more straightforward analysis tools for handling the ensuing large datasets, which provides enhanced confidence in inferences made from the data. In this chapter, we will outline the approaches we have used for handling Dictyostelium single cell transcriptomic data, illustrating how these approaches have contributed to our understanding of cell decision-making during development.


Subject(s)
Dictyostelium , Gene Expression Profiling , Single-Cell Analysis , Transcriptome , Dictyostelium/genetics , Dictyostelium/growth & development , Single-Cell Analysis/methods , Gene Expression Profiling/methods , Gene Expression Regulation, Developmental , Single-Cell Gene Expression Analysis
9.
Ann Lab Med ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38953115

ABSTRACT

Background: Healthcare 4.0 refers to the integration of advanced technologies, such as artificial intelligence (AI) and big data analysis, into the healthcare sector. Recognizing the impact of Healthcare 4.0 technologies in laboratory medicine (LM), we sought to assess the overall awareness and implementation of Healthcare 4.0 among members of the Korean Society for Laboratory Medicine (KSLM). Methods: A web-based survey was conducted using an anonymous questionnaire. The survey comprised 36 questions covering demographic information (seven questions), big data (10 questions), and AI (19 questions). Results: In total, 182 (17.9%) of 1,017 KSLM members participated in the survey. Thirty-two percent of respondents considered AI to be the most important technology in LM in the era of Healthcare 4.0, closely followed by 31% who favored big data. Approximately 80% of respondents were familiar with big data but had not conducted research using it, and 71% were willing to participate in future big data research conducted by the KSLM. Respondents viewed AI as the most valuable tool in molecular genetics within various divisions. More than half of the respondents were open to the notion of using AI as assistance rather than a complete replacement for their roles. Conclusions: This survey highlighted KSLM members' awareness of the potential applications and implications of big data and AI. We emphasize the complexity of AI integration in healthcare, citing technical and ethical challenges leading to diverse opinions on its impact on employment and training. This highlights the need for a holistic approach to adopting new technologies.

10.
Expert Rev Med Devices ; : 1-13, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39044340

ABSTRACT

INTRODUCTION: For over 60 years, spinal cord stimulation has endured as a therapy through innovation and novel developments. Current practice of neuromodulation requires proper patient selection, risk mitigation and use of innovation. However, there are tangible and intangible challenges in physiology, clinical science and within society. AREAS COVERED: We provide a narrative discussion regarding novel topics in the field, especially over the last decade. We highlight the challenges in the patient care setting, including selection, as well as economic and socioeconomic challenges. Physician training challenges in neuromodulation are explored, as are other factors related to the use of neuromodulation, such as novel indications and economics. We also discuss the concepts of technology and healthcare data. EXPERT OPINION: Patient safety and durable outcomes are the mainstay goal for neuromodulation. Substantial work is needed to assimilate data for larger and more relevant studies reflecting a population. Big data and global interconnectivity efforts provide substantial opportunity to reinvent our scientific approach, data analysis and its management to maximize outcomes and minimize risk. As improvements in data analysis become the standard of innovation and physician training meets demand, we expect to see an expansion of novel indications and its use in broader cohorts.

11.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000931

ABSTRACT

Internet of Things (IoT) applications and resources are highly vulnerable to flood attacks, including Distributed Denial of Service (DDoS) attacks. These attacks overwhelm the targeted device with numerous network packets, making its resources inaccessible to authorized users. Such attacks may comprise attack references, attack types, sub-categories, host information, malicious scripts, etc. These details assist security professionals in identifying weaknesses, tailoring defense measures, and responding rapidly to possible threats, thereby improving the overall security posture of IoT devices. Developing an intelligent Intrusion Detection System (IDS) is highly complex due to the numerous network features involved. This study presents an improved IDS for IoT security that employs multimodal big data representation and transfer learning. First, the Packet Capture (PCAP) files are crawled to retrieve the necessary attacks and bytes. Second, Spark-based big data optimization algorithms handle huge volumes of data. Third, a transfer learning approach such as word2vec retrieves semantically informed features. Fourth, an algorithm is developed to convert network bytes into images, and texture features are extracted by configuring an attention-based Residual Network (ResNet). Finally, the trained text and texture features are combined and used as multimodal features to classify various attacks. The proposed method is thoroughly evaluated on three widely used IoT-based datasets: CIC-IoT 2022, CIC-IoT 2023, and Edge-IIoT. The proposed method achieves excellent classification performance, with an accuracy of 98.2%. In addition, we present a game theory-based process to formally validate the proposed approach.
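The byte-to-image conversion step can be sketched as follows; the payload and the 8x8 image size are illustrative assumptions, not details from the paper:

```python
import numpy as np

def bytes_to_image(payload: bytes, side: int = 8) -> np.ndarray:
    """Pad or truncate raw network bytes into a side x side grayscale image,
    the kind of representation a CNN such as ResNet can then consume."""
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    buf = np.pad(buf, (0, side * side - buf.size))  # zero-pad short packets
    return buf.reshape(side, side)

# Hypothetical packet payload; real input would come from parsed PCAP files.
img = bytes_to_image(b"GET /index.html HTTP/1.1\r\n")
print(img.shape)  # (8, 8)
```

Treating each byte as a pixel intensity lets texture-oriented vision models pick up structural patterns in packet contents without hand-crafted protocol features.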

12.
Expert Opin Drug Discov ; : 1-27, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39004919

ABSTRACT

INTRODUCTION: Small molecules often bind to multiple targets, a behavior termed polypharmacology. Anticipating polypharmacology is essential for drug discovery since unknown off-targets can modulate safety and efficacy, profoundly affecting drug discovery success. Unfortunately, experimental methods to assess selectivity present significant limitations, and drugs still fail in the clinic due to unanticipated off-targets. Computational methods are a cost-effective, complementary approach to predict polypharmacology. AREAS COVERED: This review aims to provide a comprehensive overview of the state of polypharmacology prediction and discuss its strengths and limitations, covering both classical cheminformatics methods and bioinformatic approaches. The authors review available data sources, paying close attention to their different coverage. The authors then discuss major algorithms grouped by the types of data that they exploit, using selected examples. EXPERT OPINION: Polypharmacology prediction has made impressive progress over the last decades and has contributed to identifying many off-targets. However, data incompleteness currently limits most approaches to comprehensively predict selectivity. Moreover, our limited agreement on model assessment challenges the identification of the best algorithms, which at present show modest performance in prospective real-world applications. Despite these limitations, the exponential increase of multidisciplinary Big Data and AI holds much potential to improve polypharmacology prediction and de-risk drug discovery.

13.
J Anesth Analg Crit Care ; 4(1): 44, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38992794

ABSTRACT

We are in the era of Health 4.0 when novel technologies are providing tools capable of improving the quality and safety of the services provided. Our project involves the integration of different technologies (AI, big data, robotics, and telemedicine) to create a unique system for patients admitted to intensive care units suffering from infectious diseases capable of both increasing the personalization of care and ensuring a safer environment for caregivers.

14.
Int J Cardiol ; 411: 132329, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-38964554

ABSTRACT

BACKGROUND: Left ventricular (LV) thrombus is not common but poses significant risks of embolic stroke or systemic embolism. However, the distinction in embolic risk between nonischemic cardiomyopathy (NICM) and ischemic cardiomyopathy (ICM) remains unclear. METHODS AND RESULTS: In total, 2738 LV thrombus patients from the JROAD-DPC (Japanese Registry of All Cardiac and Vascular Diseases Diagnosis Procedure Combination) database were included. Among these patients, 1037 patients were analyzed, with 826 (79.7%) having ICM and 211 (20.3%) having NICM. Within the NICM group, the distribution was as follows: dilated cardiomyopathy (DCM; 41.2%), takotsubo cardiomyopathy (27.0%), hypertrophic cardiomyopathy (18.0%), and other causes (13.8%). The primary outcome was a composite of embolic stroke or systemic embolism (SSE) during hospitalization. The ICM and NICM groups showed no significant difference in the primary outcome (5.8% vs. 7.6%, p = 0.34). Among NICM, SSE occurred in 12.6% of patients with DCM, 7.0% with takotsubo cardiomyopathy, and 2.6% with hypertrophic cardiomyopathy. Multivariate logistic regression analysis for SSE revealed an odds ratio of 1.4 (95% confidence interval [CI], 0.7-2.7, p = 0.37) for NICM compared to ICM. However, DCM exhibited a higher adjusted odds ratio for SSE compared to ICM (2.6, 95% CI 1.2-6.0, p = 0.022). CONCLUSIONS: This nationwide study shows comparable rates of embolic events between ICM and NICM in LV thrombus patients, with DCM posing a greater risk of SSE than ICM. The findings emphasize the importance of assessing the specific cause of heart disease in NICM within LV thrombus management strategies.


Subject(s)
Databases, Factual , Myocardial Ischemia , Registries , Thrombosis , Humans , Female , Male , Aged , Middle Aged , Thrombosis/epidemiology , Myocardial Ischemia/epidemiology , Myocardial Ischemia/diagnosis , Japan/epidemiology , Risk Factors , Embolism/epidemiology , Embolism/complications , Heart Ventricles/diagnostic imaging , Cardiomyopathies/epidemiology , Aged, 80 and over
15.
Brain Inform ; 11(1): 19, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987395

ABSTRACT

Bipolar psychometric scales data are widely used in psychological healthcare. Adequate psychological profiling benefits patients and saves time and costs. Grant funding depends on the quality of psychotherapeutic measures. Bipolar Likert scales yield compositional data because any order of magnitude of agreement towards an item assertion implies a complementary order of magnitude of disagreement. Using an isometric log-ratio (ilr) transformation, the bivariate information can be mapped onto the real-valued interval scale, yielding unbiased statistical results and increasing the statistical power of the Pearson correlation significance test if the Central Limit Theorem (CLT) of statistics is satisfied. In practice, however, the applicability of the CLT depends on the number of summands (i.e., the number of items) and the variance of the data generating process (DGP) of the ilr-transformed data. Via simulation we provide evidence that the ilr approach also works satisfactorily if the CLT is violated. That is, the ilr approach is robust to extremely large or infinite variances of the underlying DGP, increasing the statistical power of the correlation test. The study generalizes former results, pointing out the universality and reliability of the ilr approach in psychometric big data analysis affecting psychometric health economics, patient welfare, grant funding, economic decision making and profits.
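For a two-part composition (agreement, disagreement), the ilr transformation reduces to a scaled log-odds, which is why it maps the bounded Likert proportion onto the whole real line. A minimal sketch:

```python
import math

def ilr_bipolar(agree: float) -> float:
    """Isometric log-ratio transform of the 2-part composition
    (agree, 1 - agree): z = (1 / sqrt(2)) * ln(agree / disagree)."""
    disagree = 1.0 - agree
    return (1.0 / math.sqrt(2.0)) * math.log(agree / disagree)

# E.g. an item answered 75% towards the 'agree' pole:
print(round(ilr_bipolar(0.75), 3))  # 0.777
```

Note the symmetry `ilr_bipolar(p) == -ilr_bipolar(1 - p)`: balanced agreement (p = 0.5) maps to zero, and the transform diverges as p approaches either pole, removing the boundary effects that bias statistics computed on the raw proportions.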

16.
medRxiv ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38946964

ABSTRACT

Background: The use of big data and large language models in healthcare can play a key role in improving patient treatment and healthcare management, especially when applied to large-scale administrative data. A major challenge to achieving this is ensuring that patient confidentiality and personal information are protected. One way to overcome this is by augmenting clinical data with administrative laboratory dataset linkages in order to avoid the use of demographic information. Methods: We explored an alternative method to examine patient files from a large administrative dataset in South Africa (the National Health Laboratory Services, or NHLS), by linking external data to the NHLS database using specimen barcodes associated with laboratory tests. This offers a deterministic way of performing data linkages without accessing demographic information. In this paper, we quantify the performance metrics of this approach. Results: The linkage of the large NHLS data to external hospital data using specimen barcodes achieved a 95% success rate. Out of the 1200 records in the validation sample, 87% were exact matches and 9% were matches with typographic correction. The remaining 5% were either complete mismatches or were due to duplicates in the administrative data. Conclusions: The high success rate indicates the reliability of using barcodes for linking data without demographic identifiers. Specimen barcodes are an effective tool for deterministic linking in health data, and may provide a method of creating large, linked data sets without compromising patient confidentiality.
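The deterministic, demographics-free linkage described here amounts to an exact join on the barcode key. A minimal sketch with invented field names and barcodes (the real NHLS schema is not shown in the abstract):

```python
# Hypothetical lab results keyed by specimen barcode; no demographic fields
# are needed anywhere in the join.
lab = {"BC001": {"test": "HbA1c"}, "BC002": {"test": "CD4"}}
hospital = [
    {"barcode": "BC001", "ward": "A"},
    {"barcode": "BC002", "ward": "B"},
    {"barcode": "BC999", "ward": "C"},  # no matching lab record
]

linked, unmatched = [], []
for rec in hospital:
    match = lab.get(rec["barcode"])  # exact, deterministic barcode match
    (linked if match else unmatched).append({**rec, **(match or {})})

print(len(linked), len(unmatched))  # 2 1
```

The study's typographic-correction step would sit between the exact match and the unmatched bucket, attempting a fuzzy barcode repair before declaring a mismatch.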

17.
Sci Rep ; 14(1): 15584, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971827

ABSTRACT

To address the shortcomings of traditional reliability theory in characterizing the stability of deep underground structures, the advanced first-order second-moment (AFOSM) reliability method was improved to obtain a fuzzy random reliability that is more consistent with actual working conditions. The traditional sensitivity analysis model was optimized using fuzzy random optimization, and an analytical calculation model of the mean and standard deviation of the fuzzy random reliability sensitivity was established. A big data hidden Markov model and expectation-maximization algorithm were used to refine the digital characteristics of fuzzy random variables. The fuzzy random sensitivity optimization model was used to confirm the effect of concrete compressive strength, thickness-diameter ratio, reinforcement ratio, uncertainty coefficient of the calculation model, and soil depth on the overall structural reliability of a reinforced concrete double-layer wellbore in deep alluvial soil. Through numerical calculations, these characteristics were observed to be the main influencing factors. Furthermore, while soil depth was negatively correlated with the overall reliability, the other influencing factors were all positively correlated. This study provides an effective reference for the safe construction of deep underground structures in the future.
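For context, the classical first-order second-moment reliability index that the paper's fuzzy-random approach extends can be computed as follows; the moments used are illustrative, not from the study:

```python
import math

def fosm_beta(mu_r, sigma_r, mu_s, sigma_s):
    """First-order second-moment reliability index for independent, normally
    distributed resistance R and load S: beta = (mu_R - mu_S) / sqrt(var_R + var_S).
    The paper's fuzzy-random extension replaces these crisp moments with
    characteristics estimated from fuzzy random variables."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

# Illustrative moments (not from the study):
beta = fosm_beta(mu_r=100.0, sigma_r=10.0, mu_s=60.0, sigma_s=5.0)
print(round(beta, 2))  # 3.58
```

A larger beta corresponds to a smaller failure probability, so sensitivity analysis asks how beta responds to each design variable (strength, thickness-diameter ratio, depth, and so on).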

18.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-39007597

ABSTRACT

Thyroid cancer incidence continues to increase even though a large number of screening tools have been developed recently. Since there is no standard, definitive procedure for thyroid cancer diagnosis, clinicians must conduct various tests. This diagnostic process yields multi-dimensional big data, and the lack of a common approach leads to randomly distributed missing (sparse) data, both of which are formidable challenges for machine learning algorithms. This paper aims to develop an accurate and computationally efficient deep learning algorithm to diagnose thyroid cancer. To this end, the singularities that randomly distributed missing data introduce into learning problems are treated, and dimensionality reduction with inner and target similarity approaches is developed to select the most informative input datasets. In addition, size reduction with a hierarchical clustering algorithm is performed to eliminate considerably similar data samples. Four machine learning algorithms are trained and then tested on unseen data to validate their generalization and robustness. The results yield 100% training and 83% testing accuracy on the unseen data. The computational time efficiency of the algorithms is also examined under equal conditions.
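The size-reduction idea of eliminating considerably similar samples can be sketched as follows; greedy distance-threshold deduplication is a simplified stand-in for the hierarchical clustering the paper uses, and the feature vectors are invented:

```python
import math

def dedup(samples, tol=0.5):
    """Greedily drop samples within `tol` of an already-kept sample - a much
    simplified stand-in for hierarchical-clustering-based size reduction."""
    kept = []
    for s in samples:
        if all(math.dist(s, k) >= tol for k in kept):
            kept.append(s)
    return kept

# Hypothetical 2-D feature vectors (real inputs would be patient test results):
data = [(1.0, 1.0), (1.1, 1.05), (5.0, 5.0), (5.2, 5.1), (9.0, 1.0)]
print(len(dedup(data)))  # 3
```

Shrinking the training set this way removes near-duplicate patients before training, which is what keeps the downstream deep learning step computationally efficient.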


Subject(s)
Algorithms , Deep Learning , Thyroid Neoplasms , Thyroid Neoplasms/diagnosis , Humans , Machine Learning , Cluster Analysis
19.
Front Oncol ; 14: 1444543, 2024.
Article in English | MEDLINE | ID: mdl-39015491
20.
Ophthalmol Glaucoma ; 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39038740

ABSTRACT

PURPOSE: Loss to follow-up (LTFU) in primary open-angle glaucoma (POAG) can lead to undertreatment, disease progression, and irreversible vision loss. Patients who become LTFU either eventually re-establish glaucoma care after a lapse or never return to the clinic. The purpose of this study is to examine a large population of POAG patients who became LTFU to determine the proportion that return to care and to identify demographic and clinical factors associated with non-return after LTFU. DESIGN: Retrospective longitudinal cohort study. PARTICIPANTS: Patients with a diagnosis of POAG with a clinical encounter in 2014 in the IRIS® Registry (Intelligent Research in Sight). METHODS: We examined follow-up patterns for 553,663 patients with POAG who had an encounter in the IRIS Registry in 2014 by following their documented clinic visits through 2019. LTFU was defined as exceeding one calendar year without an encounter. Within the LTFU group, patients were classified as returning after a lapse in care (return after LTFU) or not (non-return after LTFU). MAIN OUTCOME MEASURES: Proportion of patients with non-return after LTFU, and baseline demographic and clinical characteristics associated with non-return among LTFU POAG patients. RESULTS: Among 553,663 POAG patients, 277,019 (50%) had at least one episode of LTFU over the 6-year study period. Within the LTFU group, 33% (92,471) returned to care and 67% (184,548) did not. Compared to those who returned to care, LTFU patients with non-return were more likely to be older (age >80 years; RR=1.48; 95% CI: 1.47-1.50), to have unknown/missing insurance (RR=1.31; 95% CI: 1.30-1.33), and to have severe-stage POAG (RR=1.13; 95% CI: 1.11-1.15). Greater POAG severity and visual impairment were associated with non-return in a dose-dependent relationship in the adjusted model that accounted for demographic characteristics.
Among those with return after LTFU, almost all returned within 2 years of last appointment (82,201; 89%) rather than 2 or more years later. CONCLUSION: Half of POAG patients in the IRIS Registry had at least one period of LTFU, and two-thirds of LTFU POAG patients did not return to care. More effort is warranted to re-engage the vulnerable POAG patients who become LTFU.
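The LTFU definition (a gap exceeding one calendar year between encounters) and the return versus non-return classification can be sketched as follows; the 365-day cutoff, study-end date handling, and encounter dates are illustrative assumptions, not the registry's exact implementation:

```python
from datetime import date

def ltfu_episodes(visits, study_end=date(2019, 12, 31)):
    """Flag gaps exceeding one year between encounters; a gap followed by a
    later visit counts as a return, while a final gap running past the end
    of observation counts as non-return."""
    visits = sorted(visits)
    episodes = []
    for prev, nxt in zip(visits, visits[1:]):
        if (nxt - prev).days > 365:
            episodes.append(("return", prev, nxt))
    if (study_end - visits[-1]).days > 365:
        episodes.append(("non_return", visits[-1], None))
    return episodes

# Hypothetical encounter dates for one patient:
eps = ltfu_episodes([date(2014, 3, 1), date(2016, 6, 1), date(2017, 2, 1)])
print([kind for kind, *_ in eps])  # ['return', 'non_return']
```

Note that a patient can contribute both kinds of episode, which is consistent with the study counting "at least one episode of LTFU" per patient.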
