Results 1 - 20 of 1,452
1.
Am J Emerg Med ; 83: 40-46, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38954885

ABSTRACT

BACKGROUND: Academic productivity is bolstered by collaboration, which is in turn related to connectivity between individuals. Gender disparities have been identified in academics in terms of both academic promotion and output. Using gender propensity and network analysis, we aimed to describe patterns of collaboration on publications in emergency medicine (EM), focusing on two Midwest academic departments. METHODS: We identified faculty at two EM departments, their academic rank, and their publications from 2020 to 2022, and gathered information on their co-authors. Using network analysis, gender propensity, and standard statistical analyses, we assessed the collaboration network for differences between men and women. RESULTS: Social network analysis of collaboration in academic emergency medicine showed no difference in the ways that men and women publish together. However, individuals with higher academic rank, regardless of gender, had more importance to the network. Men had a propensity to collaborate with men, and women with women. The gender-propensity rates for men and women (59.6% and 44%, respectively) fell between the gender ratio of emergency medicine (65%/35%) and that of the general population (50%/50%), suggesting a tendency toward homophily among men. CONCLUSION: Our study used network analysis and gender propensity to identify patterns of collaboration. We found that further work applying network analysis to academic productivity may be of value, with a particular focus on the role of academic rank. Our methodology may aid department leaders by using the information from local analyses to identify opportunities to support faculty members in broadening and diversifying their networks.
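To illustrate the kind of gender-propensity measure this abstract describes, a minimal Python sketch on an invented co-authorship edge list (this is not the study's code or data; propensity here is simply each gender's fraction of same-gender co-author ties):

```python
from collections import defaultdict

def gender_propensity(edges, gender):
    """For each gender, the fraction of co-author ties to the same gender."""
    same = defaultdict(int)
    total = defaultdict(int)
    for a, b in edges:
        # count each undirected tie once from each endpoint's perspective
        for person, other in ((a, b), (b, a)):
            total[gender[person]] += 1
            if gender[person] == gender[other]:
                same[gender[person]] += 1
    return {g: same[g] / total[g] for g in total}

# hypothetical co-authorship network
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
gender = {"A": "M", "B": "M", "C": "F", "D": "F", "E": "M"}
prop = gender_propensity(edges, gender)
```

A value above a group's share of the population would suggest homophily, which is the comparison the study makes against the 65%/35% and 50%/50% baselines.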

2.
Eur Radiol Exp ; 8(1): 79, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38965128

ABSTRACT

Sample size, namely the number of subjects that should be included in a study to reach the desired endpoint and statistical power, is a fundamental concept of scientific research. Indeed, sample size must be planned a priori, and tailored to the main endpoint of the study, to avoid including too many subjects, thus possibly exposing them to additional risks while also wasting time and resources, or too few subjects, failing to reach the desired purpose. We offer a simple, go-to review of methods for sample size calculation for studies concerning data reliability (repeatability/reproducibility) and diagnostic performance. For studies concerning data reliability, we considered Cohen's κ or intraclass correlation coefficient (ICC) for hypothesis testing, estimation of Cohen's κ or ICC, and Bland-Altman analyses. With regards to diagnostic performance, we considered accuracy or sensitivity/specificity versus reference standards, the comparison of diagnostic performances, and the comparison of areas under the receiver operating characteristics curve. Finally, we considered the special cases of dropouts or retrospective case exclusions, multiple endpoints, lack of prior data estimates, and the selection of unusual thresholds for α and β errors. For the most frequent cases, we provide examples of software freely available on the Internet.
Relevance statement: Sample size calculation is a fundamental factor influencing the quality of studies on repeatability/reproducibility and diagnostic performance in radiology.
Key points:
• Sample size is a concept related to precision and statistical power.
• It has ethical implications, especially when patients are exposed to risks.
• Sample size should always be calculated before starting a study.
• This review offers simple, go-to methods for sample size calculations.
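As one concrete instance of the a-priori calculations the review covers, here is a sketch of a standard sample-size formula for estimating sensitivity to a given confidence-interval half-width, adjusted for disease prevalence (Buderer-style; the specific numbers are illustrative and not taken from the article):

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(se, half_width, prevalence, alpha=0.05):
    """Total subjects needed to estimate sensitivity `se` within ±half_width."""
    z = NormalDist().inv_cdf(1 - alpha / 2)          # e.g. 1.96 for alpha = 0.05
    n_diseased = (z ** 2 * se * (1 - se)) / half_width ** 2
    return ceil(n_diseased / prevalence)             # inflate for non-diseased subjects

# expected sensitivity 90%, CI half-width 5%, prevalence 20%
n = n_for_sensitivity(se=0.90, half_width=0.05, prevalence=0.20)
```

The same pattern (variance term over precision, divided by the fraction of informative subjects) recurs across many of the review's scenarios.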


Subject(s)
Research Design , Sample Size , Humans , Reproducibility of Results
3.
JAMIA Open ; 7(3): ooae059, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39006216

ABSTRACT

Objectives: Missed appointments can lead to treatment delays and adverse outcomes. Telemedicine may improve appointment completion because it addresses barriers to in-person visits, such as childcare and transportation. This study compared appointment completion for appointments using telemedicine versus in-person care in a large cohort of patients at an urban academic health sciences center. Materials and Methods: We conducted a retrospective cohort study of electronic health record data to determine whether telemedicine appointments have higher odds of completion compared to in-person care appointments between January 1, 2021, and April 30, 2023. The data were obtained from the University of South Florida (USF), a large academic health sciences center serving Tampa, FL, and surrounding communities. We implemented 1:1 propensity score matching based on age, gender, race, visit type, and Charlson Comorbidity Index (CCI). Results: The matched cohort included 87 376 appointments, with diverse patient demographics. The percentage of completed telemedicine appointments exceeded that of completed in-person care appointments by 9.2 points (73.4% vs 64.2%, P < .001). The adjusted odds ratio for telemedicine versus in-person care in relation to appointment completion was 1.64 (95% CI, 1.59-1.69, P < .001), indicating that telemedicine appointments are associated with 64% higher odds of completion than in-person care appointments when controlling for other factors. Discussion: This cohort study indicated that telemedicine appointments are more likely to be completed than in-person care appointments, regardless of demographics, comorbidity, payment type, or distance. Conclusion: Telemedicine appointments are more likely to be completed than in-person healthcare appointments.
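A minimal sketch of the two techniques this abstract leans on, 1:1 propensity-score matching and an odds ratio, using invented propensity scores and the completion rates reported above (this is not the study's code; real analyses typically use dedicated matching software and adjust for covariates):

```python
def greedy_match(treated, control, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on propensity score, without replacement."""
    pairs, available = [], dict(control)
    for tid, ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - ps))
        if abs(available[cid] - ps) <= caliper:   # only accept matches within the caliper
            pairs.append((tid, cid))
            del available[cid]
    return pairs

treated = {"t1": 0.30, "t2": 0.55, "t3": 0.80}            # hypothetical scores
control = {"c1": 0.32, "c2": 0.58, "c3": 0.90, "c4": 0.50}
pairs = greedy_match(treated, control)                     # t3 finds no control in caliper

# Crude odds ratio from the reported completion rates (73.4% vs 64.2%).
# The paper's adjusted OR of 1.64 additionally controls for matched covariates.
crude_or = (0.734 / (1 - 0.734)) / (0.642 / (1 - 0.642))
```

The gap between the crude and adjusted odds ratios is exactly what the matching step is meant to account for.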

4.
Synth Biol (Oxf) ; 9(1): ysae010, 2024.
Article in English | MEDLINE | ID: mdl-38973982

ABSTRACT

Data science is playing an increasingly important role in the design and analysis of engineered biology. This has been fueled by the development of high-throughput methods like massively parallel reporter assays, data-rich microscopy techniques, computational protein structure prediction and design, and the development of whole-cell models able to generate huge volumes of data. Although the ability to apply data-centric analyses in these contexts is appealing and increasingly simple to do, it comes with potential risks. For example, how might biases in the underlying data affect the validity of a result and what might the environmental impact of large-scale data analyses be? Here, we present a community-developed framework for assessing data hazards to help address these concerns and demonstrate its application to two synthetic biology case studies. We show the diversity of considerations that arise in common types of bioengineering projects and provide some guidelines and mitigating steps. Understanding potential issues and dangers when working with data and proactively addressing them will be essential for ensuring the appropriate use of emerging data-intensive AI methods and help increase the trustworthiness of their applications in synthetic biology.

5.
Transl Clin Pharmacol ; 32(2): 73-82, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38974344

ABSTRACT

Large language models (LLMs) have emerged as a powerful tool for biomedical researchers, demonstrating remarkable capabilities in understanding and generating human-like text. ChatGPT with its Code Interpreter functionality, an LLM coupled with the ability to write and execute code, streamlines data analysis workflows by enabling natural language interactions. Using materials from a previously published tutorial, similar analyses can be performed through conversational interactions with the chatbot, covering data loading and exploration, model development and comparison, permutation feature importance, partial dependence plots, and additional analyses and recommendations. The findings highlight the significant potential of LLMs in assisting researchers with data analysis tasks, allowing them to focus on higher-level aspects of their work. However, there are limitations and potential concerns associated with the use of LLMs, such as the importance of critical thinking, privacy, security, and equitable access to these tools. As LLMs continue to improve and integrate with available tools, data science may experience a transformation similar to the shift from manual to automatic transmission in driving. The advancements in LLMs call for considering the future directions of data science and its education, ensuring that the benefits of these powerful tools are utilized with proper human supervision and responsibility.

6.
PeerJ Comput Sci ; 10: e2092, 2024.
Article in English | MEDLINE | ID: mdl-38983225

ABSTRACT

More sophisticated data access is possible with artificial intelligence (AI) techniques such as question answering (QA), but regulations and privacy concerns have limited their use. Federated learning (FL) addresses these problems and makes QA a viable AI application in such settings. This research examines the utilization of hierarchical FL systems, along with an ideal method for developing client-specific adapters. The User Modified Hierarchical Federated Learning Model (UMHFLM) selects local models for users' tasks. The article suggests employing a recurrent neural network (RNN) as a neural network (NN) technique for automatically learning and categorizing natural-language questions into the appropriate templates. Local and global models are developed together, with the global model influencing local models, which are, in turn, combined for personalization. The method is applied in natural language processing pipelines for phrase matching employing template exact match, segmentation, and answer-type detection. SQuAD 2.0, a QA dataset used here for learning complicated SPARQL test questions and their accompanying SPARQL queries over the DBpedia dataset, was used to train and assess the model, which identifies 38 distinct templates. Considering the top two most likely templates, the RNN model achieves template classification accuracies of 92.8% and 61.8% on the SQuAD 2.0 and QALD-7 datasets, respectively. A study on data scarcity among participants found that FL Match significantly outperformed BERT, with a MAP margin of 2.60% between BERT and FL Match at a 100% data ratio and an MRR margin of 7.23% at a 20% data ratio.
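The "top two most likely templates" accuracy reported above is a top-k metric. A small illustrative computation (labels and ranked predictions below are invented, not from the paper):

```python
def top_k_accuracy(true_labels, ranked_preds, k=2):
    """Fraction of cases whose true template appears among the top-k predictions."""
    hits = sum(t in preds[:k] for t, preds in zip(true_labels, ranked_preds))
    return hits / len(true_labels)

true_templates = [3, 7, 1, 4]
ranked = [[3, 5], [2, 7], [1, 9], [8, 2]]  # model's template guesses, best first
acc = top_k_accuracy(true_templates, ranked, k=2)
```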

7.
Soc Sci Med ; 354: 117056, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-39029140

ABSTRACT

OBJECTIVES: Contemporary research on the exposome, i.e., the sum of all the exposures an individual encounters throughout life that may influence human health, bears the promise of integrative and policy-relevant research on the effects of environment on health. Critical analyses of the first generation of exposome projects have voiced concerns over their actual breadth of inclusion of environmental factors and a related risk of molecularization of public health issues. The emergence of the European Human Exposome Network (EHEN) provides an opportunity to better situate the ambitions and priorities of the exposome approach on the basis of new and ongoing research. METHODS: We assess the promises, methods, and limitations of the EHEN as a case study of the second generation of exposome research. A critical textual analysis of profile articles from each of the projects involved in EHEN, published in Environmental Epidemiology, was carried out to derive, and discuss, the common priorities, innovations, and methodological and conceptual choices across EHEN. RESULTS: EHEN consolidates its integrative outlook by reinforcing the volume and variety of its data and its data-analysis infrastructure, and by diversifying its strategies to deliver actionable knowledge. Yet data-driven limitations severely restrict the geographical and political scope of this knowledge to health issues primarily related to urban settings, which may aggravate some socio-spatial inequalities in health in Europe. CONCLUSIONS: The second generation of exposome research doubles down on the initial ambition of an integrative study of the effects of environment on health to fuel better public health interventions. This intensification is, however, accompanied by significant epistemological challenges and does not overcome severe restrictions in the geographical and political scope of this knowledge.
We thus advocate for increased reflexivity over the limitations of this conceptually and methodologically integrative approach to public and environmental health.

8.
Brief Bioinform ; 25(Supplement_1)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041911

ABSTRACT

This manuscript describes the development of a resource module that is part of a learning platform named 'NIGMS Sandbox for Cloud-based Learning', https://github.com/NIGMS/NIGMS-Sandbox. The overall genesis of the Sandbox is described in the editorial authored by the National Institute of General Medical Sciences, NIGMS Sandbox: A Learning Platform toward Democratizing Cloud Computing for Biomedical Research, at the beginning of this supplement. This module delivers learning materials introducing the utility of the BASH (Bourne Again Shell) programming language for genomic data analysis in an interactive format that uses appropriate cloud resources for data access and analyses. The next-generation sequencing revolution has generated massive amounts of novel biological data from a multitude of platforms that survey an ever-growing list of genomic modalities. These data require significant downstream computational and statistical analyses to glean meaningful biological insights. However, the skill sets required to generate these data are vastly different from the skills required to analyze these data. Bench scientists who generate next-generation data often lack the training required to perform analysis of these datasets and require support from bioinformatics specialists. Dedicated computational training is required to empower biologists in the area of genomic data analysis; however, learning to efficiently leverage a command line interface is a significant barrier to learning how to leverage common analytical tools. Cloud platforms have the potential to democratize access to the technical tools and computational resources necessary to work with modern sequencing data, providing an effective framework for bioinformatics education. This module aims to provide an interactive platform that slowly builds technical skills and knowledge needed to interact with genomics data on the command line in the Cloud.
The sandbox format of this module enables users to move through the material at their own pace and test their grasp of the material with knowledge self-checks before building on that material in the next sub-module.


Subject(s)
Cloud Computing , Computational Biology , Software , Computational Biology/methods , Programming Languages , High-Throughput Nucleotide Sequencing/methods , Genomics/methods , Humans
9.
Front Netw Physiol ; 4: 1211413, 2024.
Article in English | MEDLINE | ID: mdl-38948084

ABSTRACT

Algorithms for the detection of COVID-19 illness from wearable sensor devices tend to implicitly treat the disease as causing a stereotyped (and therefore recognizable) deviation from healthy physiology. In contrast, a substantial diversity of bodily responses to SARS-CoV-2 infection has been reported in the clinical milieu. This raises the question of how to characterize the diversity of illness manifestations, and whether such characterization could reveal meaningful relationships across different illness manifestations. Here, we present a framework motivated by information theory to generate quantified maps of illness presentation, which we term "manifestations," as resolved by continuous physiological data from a wearable device (Oura Ring). We test this framework on five physiological data streams (heart rate, heart rate variability, respiratory rate, metabolic activity, and sleep temperature) assessed at the time of reported illness onset in a previously reported COVID-19-positive cohort (N = 73). We find that the number of distinct manifestations in this cohort is small compared to the space of all possible manifestations. In addition, manifestation frequency correlates with the approximate number of symptoms reported by a given individual over a several-day period prior to their imputed onset of illness. These findings suggest that information-theoretic approaches can be used to sort COVID-19 illness manifestations into types with real-world value. This proof of concept supports the use of information-theoretic approaches to map illness manifestations from continuous physiological data. Such approaches could likely inform algorithm design and real-time treatment decisions if developed on large, diverse samples.
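One simple way to picture the manifestation idea, purely as an illustrative sketch and not the authors' actual framework, is to code each individual's illness onset as the pattern of signed deviations across the five streams, then count distinct codes and measure their Shannon entropy:

```python
from collections import Counter
from math import log2

def manifestation_codes(deviations):
    """Each individual's manifestation = tuple of deviation signs per stream."""
    return [tuple(1 if d > 0 else 0 for d in person) for person in deviations]

# rows: individuals; columns: hypothetical deviations of the 5 streams
# (HR, HRV, respiratory rate, metabolic activity, sleep temperature) at onset
deviations = [
    ( 1.2, -0.8,  0.5,  0.3,  0.9),
    ( 0.7, -1.1,  0.2, -0.1,  1.4),
    ( 1.2, -0.8,  0.5,  0.3,  0.9),
]
codes = manifestation_codes(deviations)
counts = Counter(codes)
entropy = -sum((c / len(codes)) * log2(c / len(codes)) for c in counts.values())
```

An entropy far below the maximum (5 bits for 32 possible sign patterns) would correspond to the paper's observation that few distinct manifestations occur relative to the space of possibilities.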

10.
Surg Endosc ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958719

ABSTRACT

BACKGROUND: Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI) automated surgical phase recognition in intraoperative videos has many potential applications in surgical education, helping shorten the learning curve, but no study has made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phase in LPD and explore the performance characteristics of AI models. METHODS: Among 69 LPD videos from a single surgical team, we used 42 in the building group to establish the models and used the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, including 4 key phases and 9 necessary phases. Two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models for key phase and necessary phase recognition, based on convolutional neural networks. The overall performance of the AI models was determined mainly by mean average precision (mAP). RESULTS: Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. We found commonalities between the errors of model recognition and the differences in surgeon annotation, and the AI model exhibited poor performance in cases with anatomic variation or lesion involvement with adjacent organs. CONCLUSIONS: AI automated surgical phase recognition can be achieved in LPD, with outstanding performance in selective cases. This breakthrough may be the first step toward AI- and video-based surgical education in more complex surgeries.
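For readers unfamiliar with the mAP metric used above, here is a minimal sketch of average precision for one class; mAP is its mean over classes. The ranked scores and labels are invented for illustration:

```python
def average_precision(scores, labels):
    """AP: mean of the precision values at each true-positive rank."""
    ranked = sorted(zip(scores, labels), key=lambda x: -x[0])  # best score first
    hits, precisions = 0, []
    for rank, (_, is_positive) in enumerate(ranked, start=1):
        if is_positive:
            hits += 1
            precisions.append(hits / rank)   # precision at this recall point
    return sum(precisions) / max(hits, 1)

# hypothetical detections for one surgical phase: confidence scores + ground truth
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1])
# mAP for the model = mean of AP over the 4 key (or 9 necessary) phase classes
```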

11.
J Med Internet Res ; 26: e54263, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968598

ABSTRACT

BACKGROUND: The medical knowledge graph provides explainable decision support, helping clinicians with prompt diagnosis and treatment suggestions. However, in real-world clinical practice, patients visit different hospitals seeking various medical services, resulting in fragmented patient data across hospitals. With data security issues, data fragmentation limits the application of knowledge graphs because single-hospital data cannot provide complete evidence for generating precise decision support and comprehensive explanations. It is important to study new methods for knowledge graph systems to integrate into multicenter, information-sensitive medical environments, using fragmented patient records for decision support while maintaining data privacy and security. OBJECTIVE: This study aims to propose an electronic health record (EHR)-oriented knowledge graph system for collaborative reasoning with multicenter fragmented patient medical data, all the while preserving data privacy. METHODS: The study introduced an EHR knowledge graph framework and a novel collaborative reasoning process for utilizing multicenter fragmented information. The system was deployed in each hospital and used a unified semantic structure and Observational Medical Outcomes Partnership (OMOP) vocabulary to standardize the local EHR data set. The system transforms local EHR data into semantic formats and performs semantic reasoning to generate intermediate reasoning findings. The generated intermediate findings used hypernym concepts to isolate original medical data. The intermediate findings and hash-encrypted patient identities were synchronized through a blockchain network. The multicenter intermediate findings were collaborated for final reasoning and clinical decision support without gathering original EHR data. 
RESULTS: The system underwent evaluation through an application study involving the utilization of multicenter fragmented EHR data to alert non-nephrology clinicians about overlooked patients with chronic kidney disease (CKD). The study covered 1185 patients in non-nephrology departments from 3 hospitals; each patient had visited at least two of the hospitals. Of these, 124 patients were identified as meeting CKD diagnosis criteria through collaborative reasoning using multicenter EHR data, whereas the data from individual hospitals alone could not facilitate the identification of CKD in these patients. The assessment by clinicians indicated that 78/91 (86%) patients were CKD positive. CONCLUSIONS: The proposed system was able to effectively utilize multicenter fragmented EHR data for clinical application. The application study showed the clinical benefits of the system with prompt and comprehensive decision support.
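A sketch of the hash-encrypted patient identity idea from the methods: each hospital derives the same pseudonym for a patient from a shared secret, so findings can be linked across sites without exchanging the raw identifier. The keyed-hash scheme and key name below are assumptions for illustration, not the paper's actual implementation:

```python
import hashlib
import hmac

def pseudonym(patient_id: str, shared_key: bytes) -> str:
    """Deterministic keyed hash (HMAC-SHA256) of a patient identifier."""
    return hmac.new(shared_key, patient_id.encode(), hashlib.sha256).hexdigest()

key = b"network-shared-secret"            # hypothetical key shared by the hospitals
a = pseudonym("MRN-0042", key)            # hospital 1's record
b = pseudonym("MRN-0042", key)            # same patient seen at hospital 2
c = pseudonym("MRN-0043", key)            # a different patient
```

A keyed hash (rather than a bare SHA-256 of the ID) matters because medical record numbers have low entropy and would otherwise be easy to brute-force.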


Subject(s)
Decision Support Systems, Clinical , Electronic Health Records , Humans
12.
Anal Bioanal Chem ; 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39031228

ABSTRACT

This study developed an innovative biosensor strategy for the sensitive and selective detection of canine mammary tumor biomarkers, cancer antigen 15-3 (CA 15-3) and mucin 1 (MUC-1), integrating green silver nanoparticles (GAgNPs) with machine learning (ML) algorithms to achieve high diagnostic accuracy and potential for noninvasive early detection. The GAgNPs-enhanced electrochemical biosensor demonstrated selective detection of CA 15-3 in serum and MUC-1 in tissue homogenates, with limits of detection (LODs) of 0.07 and 0.11 U mL-1, respectively. The nanoscale dimensions of the GAgNPs endowed them with electrochemically active surface areas, facilitating sensitive biomarker detection. Experimental studies targeted CA 15-3 and MUC-1 biomarkers in clinical samples, and the biosensor exhibited ease of use and good selectivity. Furthermore, ML algorithms were employed to analyze the electrochemical data and predict biomarker concentrations, enhancing the diagnostic accuracy. The Random Forest algorithm achieved 98% accuracy in tumor presence prediction, while an Artificial Neural Network attained 76% accuracy in CA 15-3-based tumor grade classification. The integration of ML techniques with the GAgNPs-based biosensor offers a promising approach for noninvasive, accurate, and early detection of canine mammary tumors, potentially revolutionizing veterinary diagnostics. This multilayered strategy, combining eco-friendly nanomaterials, electrochemical sensing, and ML algorithms, holds significant potential for advancing both biomedical research and clinical practice in the field of canine mammary tumor diagnostics.

13.
Article in English | MEDLINE | ID: mdl-38985412

ABSTRACT

PURPOSE: Decision support systems and context-aware assistance in the operating room have emerged as the key clinical applications supporting surgeons in their daily work and are generally based on single modalities. The model- and knowledge-based integration of multimodal data as a basis for decision support systems that can dynamically adapt to the surgical workflow has not yet been established. Therefore, we propose a knowledge-enhanced method for fusing multimodal data for anticipation tasks. METHODS: We developed a holistic, multimodal graph-based approach combining imaging and non-imaging information in a knowledge graph representing the intraoperative scene of a surgery. Node and edge features of the knowledge graph are extracted from suitable data sources in the operating room using machine learning. A spatiotemporal graph neural network architecture subsequently allows for interpretation of relational and temporal patterns within the knowledge graph. We apply our approach to the downstream task of instrument anticipation while presenting a suitable modeling and evaluation strategy for this task. RESULTS: Our approach achieves an F1 score of 66.86% in terms of instrument anticipation, allowing for a seamless surgical workflow and adding a valuable impact for surgical decision support systems. A resting recall of 63.33% indicates the non-prematurity of the anticipations. CONCLUSION: This work shows how multimodal data can be combined with the topological properties of an operating room in a graph-based approach. Our multimodal graph architecture serves as a basis for context-sensitive decision support systems in laparoscopic surgery considering a comprehensive intraoperative operating scene.
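The abstract reports an F1 score of 66.86% for instrument anticipation. As a minimal reminder of how F1 combines precision and recall (the counts below are invented for illustration, not the paper's confusion matrix):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

score = f1_score(tp=70, fp=30, fn=40)
```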

14.
J Med Internet Res ; 26: e51397, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963923

ABSTRACT

BACKGROUND: Machine learning (ML) models can yield faster and more accurate medical diagnoses; however, developing ML models is limited by a lack of high-quality labeled training data. Crowdsourced labeling is a potential solution but can be constrained by concerns about label quality. OBJECTIVE: This study aims to examine whether a gamified crowdsourcing platform with continuous performance assessment, user feedback, and performance-based incentives could produce expert-quality labels on medical imaging data. METHODS: In this diagnostic comparison study, 2384 lung ultrasound clips were retrospectively collected from 203 emergency department patients. A total of 6 lung ultrasound experts classified 393 of these clips as having no B-lines, one or more discrete B-lines, or confluent B-lines to create two reference standard data sets (195 training clips and 198 test clips). The sets were used, respectively, to (1) train users on a gamified crowdsourcing platform and (2) compare the concordance of the resulting crowd labels to the concordance of individual experts to reference standards. Crowd opinions were sourced from DiagnosUs (Centaur Labs) iOS app users over 8 days, filtered based on past performance, aggregated using majority rule, and analyzed for label concordance compared with a hold-out test set of expert-labeled clips. The primary outcome was comparing the labeling concordance of collated crowd opinions to trained experts in classifying B-lines on lung ultrasound clips. RESULTS: Our clinical data set included patients with a mean age of 60.0 (SD 19.0) years; 105 (51.7%) patients were female and 114 (56.1%) patients were White. Over the 195 training clips, the expert-consensus label distribution was 114 (58%) no B-lines, 56 (29%) discrete B-lines, and 25 (13%) confluent B-lines. Over the 198 test clips, expert-consensus label distribution was 138 (70%) no B-lines, 36 (18%) discrete B-lines, and 24 (12%) confluent B-lines.
In total, 99,238 opinions were collected from 426 unique users. On a test set of 198 clips, the mean labeling concordance of individual experts relative to the reference standard was 85.0% (SE 2.0), compared with 87.9% crowdsourced label concordance (P=.15). When individual experts' opinions were compared with reference standard labels created by majority vote excluding their own opinion, crowd concordance was higher than the mean concordance of individual experts to reference standards (87.4% vs 80.8%, SE 1.6 for expert concordance; P<.001). Clips with discrete B-lines had the most disagreement from both the crowd consensus and individual experts with the expert consensus. Using randomly sampled subsets of crowd opinions, 7 quality-filtered opinions were sufficient to achieve near the maximum crowd concordance. CONCLUSIONS: Crowdsourced labels for B-line classification on lung ultrasound clips via a gamified approach achieved expert-level accuracy. This suggests a strategic role for gamified crowdsourcing in efficiently generating labeled image data sets for training ML systems.
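The two core operations in this pipeline, majority-rule aggregation of crowd opinions and concordance against a reference standard, can be sketched in a few lines (labels below are invented examples, not the study's data):

```python
from collections import Counter

def majority_label(opinions):
    """Aggregate one clip's crowd opinions by majority rule."""
    return Counter(opinions).most_common(1)[0][0]

def concordance(labels_a, labels_b):
    """Fraction of clips on which two label sets agree."""
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

# hypothetical quality-filtered opinions per clip
crowd = [["none", "discrete", "none"],
         ["confluent", "confluent"],
         ["discrete", "none", "discrete"]]
crowd_labels = [majority_label(ops) for ops in crowd]
expert = ["none", "confluent", "none"]   # reference standard labels
conc = concordance(crowd_labels, expert)
```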


Subject(s)
Crowdsourcing , Lung , Ultrasonography , Crowdsourcing/methods , Humans , Ultrasonography/methods , Ultrasonography/standards , Lung/diagnostic imaging , Prospective Studies , Female , Male , Machine Learning , Adult , Middle Aged , Retrospective Studies
15.
Nanomedicine (Lond) ; : 1-13, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38905147

ABSTRACT

Artificial intelligence has revolutionized many sectors with unparalleled predictive capabilities supported by machine learning (ML). So far, this tool has not been able to provide the same level of development in pharmaceutical nanotechnology. This review discusses the current data science methodologies related to polymeric drug-loaded nanoparticle production from an innovative multidisciplinary perspective while considering the strictest data science practices. Several methodological and data interpretation flaws were identified by analyzing the few qualified ML studies. Most issues lie in failing to follow appropriate analysis steps, such as cross-validation, balancing data, or testing alternative models. Thus, better-planned studies following the recommended data science analysis steps, along with adequate numbers of experiments, would change the current landscape, allowing the exploration of the full potential of ML.
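Cross-validation, the first of the recommended steps named above, amounts to partitioning the data into k folds and holding each one out in turn. A dependency-free sketch of the index splitting (real studies would typically use a library implementation such as scikit-learn's KFold):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle n sample indices and return k (train, test) index splits."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]                      # round-robin folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = k_fold_indices(10, 5)   # 5 splits, each holding out 2 of 10 samples
```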



16.
Transl Anim Sci ; 8: txae092, 2024.
Article in English | MEDLINE | ID: mdl-38939728

ABSTRACT

Advancements in technology have ushered in a new era of sensor-based measurement and management of livestock production systems. These sensor-based technologies have the ability to automatically monitor feeding, growth, and enteric emissions for individual animals across confined and extensive production systems. One challenge with sensor-based technologies is the large amount of data generated, which can be difficult to access, process, visualize, and monitor in real time to ensure equipment is working properly and animals are utilizing it correctly. A solution to this problem is the development of application programming interfaces (APIs) to automate downloading, visualizing, and summarizing datasets generated from precision livestock technology (PLT). For this methods paper, we developed three APIs and accompanying processes for rapid data acquisition, visualization, systems tracking, and summary statistics for three technologies (SmartScale, SmartFeed, and GreenFeed) manufactured by C-Lock Inc (Rapid City, SD). Program R markdown documents and example datasets are provided to facilitate greater adoption of these techniques and to further advance PLT. The methodology presented successfully downloaded data from the cloud and generated a series of visualizations to conduct systems checks, assess animal usage rates, and calculate summary statistics. These tools will be essential for further adoption of precision technology. There is huge potential to further leverage APIs to incorporate a wide range of datasets such as weather data, animal locations, and sensor data to facilitate decision-making on time scales relevant to researchers and livestock managers.
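The paper's tooling is in R and tied to C-Lock's cloud API, but the downstream summarization step it describes, turning downloaded visit records into per-animal usage statistics, can be sketched generically. The record format and field names below are invented stand-ins, not the SmartFeed API's actual schema:

```python
from statistics import mean

def usage_summary(visits):
    """visits: (animal_id, intake_g) records from a hypothetical feeder download.
    Returns per-animal visit counts and mean intake, the kind of summary used
    to check that animals are actually using the equipment."""
    per_animal = {}
    for animal, intake in visits:
        per_animal.setdefault(animal, []).append(intake)
    return {a: {"visits": len(v), "mean_intake_g": mean(v)}
            for a, v in per_animal.items()}

visits = [("cow1", 120.0), ("cow2", 90.0), ("cow1", 80.0), ("cow3", 150.0)]
summary = usage_summary(visits)
```

An animal missing from the summary, or one with an abnormally low visit count, would flag either an equipment fault or a non-adapted animal, the two systems checks the abstract mentions.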

17.
J Pharmacol Toxicol Methods ; 128: 107531, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38852688

ABSTRACT

The one-size-fits-all approach has been the mainstream in medicine, and well-defined standards have supported the development of safe and effective therapies for many years. Advancing technologies, however, have enabled precision medicine that treats a targeted patient population (e.g., HER2+ cancer). In safety pharmacology, computational population modeling has been successfully applied in virtual clinical trials to predict drug-induced proarrhythmia risk across a wide range of pseudo-cohorts. Population modeling in safety pharmacology experiments, in contrast, has remained challenging. Here, we used five commercially available human iPSC-derived cardiomyocyte lines grown in 384-well plates and analyzed the effects of ten potentially proarrhythmic compounds, each at four concentrations, on their calcium transients (CaTs). All cell lines exhibited the expected prolongation or shortening of calcium transient duration to varying degrees. In response to compounds that inhibit several ion channels, such as hERG, peak and late sodium, L-type calcium, or IKs channels, some cell lines exhibited irregular, discontinuous beating that was not predicted by computational simulations. To analyze CaT shapes and beat-pattern irregularities comprehensively, we defined six parameters characterizing compound-induced CaT waveform changes, successfully visualizing the similarities and differences in compound-induced proarrhythmic sensitivity across cell lines. We applied Bayesian statistics to predict sample populations from the experimental data, overcoming the limited number of replicates in high-throughput assays. This facilitated principal component analysis to classify the compound-induced sensitivities of cell lines objectively. Finally, we analyzed how sensitivities in compound-induced changes of phenotypic parameters related to the ion channel inhibition measured by patch clamp recording. The resulting ranking of cell-line sensitivity was in line with visual inspection of the raw data.
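The classification step can be sketched as follows. The per-cell-line sensitivity profiles below are invented for illustration; PCA projects them so that similarly responding lines cluster together:

```python
# Hedged sketch: each cell line is described by a vector of
# compound-induced waveform changes (values invented here), and PCA
# projects the profiles into a plane where similar sensitivities
# land close together.
import numpy as np
from sklearn.decomposition import PCA

# rows: cell lines; columns: illustrative waveform parameters,
# e.g. change in CaT duration, amplitude, rise time, irregularity
profiles = np.array([
    [0.9, 0.1, 0.2, 0.0],   # line A: strong duration prolongation
    [0.8, 0.2, 0.1, 0.1],   # line B: responds much like A
    [-0.5, 0.6, 0.0, 0.9],  # line C: irregular-beating phenotype
])

pca = PCA(n_components=2)
scores = pca.fit_transform(profiles)
# Lines A and B land close together; line C separates from them
print(np.round(scores, 2))
```

With only three points the projection is exact, but the same recipe scales to many cell lines and compounds, which is where an objective low-dimensional view becomes useful.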

18.
Surg Endosc ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872018

ABSTRACT

BACKGROUND: Laparoscopic cholecystectomy is a very frequent surgical procedure. In an ageing society, however, fewer surgical staff will be available to perform surgery. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, recognition of the intraoperative action workflow is a key challenge. METHODS: A surgical process model was developed for intraoperative surgical activities, covering actor, instrument, action, and target in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and on the CholecT45 dataset. The model was validated using fivefold cross-validation. RESULTS: In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. Trained and validated on our own dataset, the algorithm scored a mean average precision (mAP) of 25.7% and a top-K (K = 5) accuracy of 85.3%. With training and validation on both our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS: An activity model was developed and applied for fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A recognition algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training only on CholecT45 and recognizes frequently occurring activities well, but not infrequent ones. Analysis of the annotated dataset allowed quantification of the potential of collaborative surgical robots to reduce the workload of surgical staff: if cobots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e., excluding camera guidance) could be performed by robots.
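The top-K accuracy reported above has a simple definition: a frame counts as correct if the true activity is among the K highest-scoring classes. A minimal sketch, with invented scores:

```python
# Sketch of the top-K accuracy metric; scores are random stand-ins
# for per-frame activity predictions, not model output.
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """scores: (n_frames, n_classes) activity scores; labels: true class ids."""
    topk = np.argsort(scores, axis=1)[:, -k:]       # indices of the K best classes
    hits = [lbl in row for row, lbl in zip(topk, labels)]
    return sum(hits) / len(hits)

rng = np.random.default_rng(0)
scores = rng.random((4, 10))            # 4 frames, 10 activity classes
best = scores.argmax(axis=1)            # labels the model ranks first
worst = scores.argmin(axis=1)           # labels the model ranks last
print(top_k_accuracy(scores, best))     # 1.0
print(top_k_accuracy(scores, worst))    # 0.0
```

The gap between a modest mAP (25.7%) and a high top-5 accuracy (85.3%) is typical when the correct activity is usually among the model's top few guesses but not reliably its first choice.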

19.
Cytotherapy ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38842968

ABSTRACT

Although several cell-based therapies have received FDA approval and others are showing promising results, scalable, quality-driven, reproducible manufacturing of therapeutic cells at lower cost remains challenging. Challenges include starting-material and patient variability, limited understanding of how manufacturing process parameters affect quality, complex supply-chain logistics, and a lack of predictive, well-understood product quality attributes. These issues can manifest as increased production costs, longer production times, greater batch-to-batch variability, and lower overall yield of viable, high-quality cells. The underlying commonality behind all these problems is the lack of data-driven insight and decision-making in cell manufacturing and delivery. Data collection and analytics from discovery, preclinical and clinical research, process development, and product manufacturing have not been sufficiently utilized to develop a "systems" understanding and identify actionable controls. Experience from other industries shows that data science and analytics can drive technological innovation and manufacturing optimization, leading to improved consistency, reduced risk, and lower cost. The cell therapy manufacturing industry stands to benefit from implementing data science tools such as data-driven modeling, data management and mining, AI, and machine learning. Integrating data-driven predictive capabilities into cell therapy manufacturing, such as predicting product quality and clinical outcomes from manufacturing data, or ensuring robustness and reliability through data-driven supply-chain modeling, could enable more precise and efficient production processes and lead to better patient access and outcomes. In this review, we introduce some of the relevant computational and data science tools and how they are being, or can be, implemented in the cell therapy manufacturing workflow. We also identify areas where innovative approaches are required to address challenges and opportunities specific to the cell therapy industry. We conclude that interfacing data science throughout the product lifecycle, developing data-driven manufacturing workflows, designing better data collection tools and algorithms, using data analytics and AI-based methods to better understand critical quality attributes and critical process parameters, and training the appropriate workforce will be essential for overcoming current industry and regulatory barriers and accelerating clinical translation.
