ABSTRACT
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
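The paper's actual model combines supervised pose estimates with an unsupervised dimensionality-reduction stage. As a loose, hypothetical illustration of that second stage (not the authors' method), the sketch below reduces a toy 2-D keypoint trajectory to a single dominant behavioral axis using PCA, implemented from scratch with power iteration:

```python
# Minimal sketch (not the paper's model): project 2-D keypoint positions
# onto their first principal component via power iteration on the 2x2
# covariance matrix, yielding a 1-D "behavioral feature" per video frame.
import math

def principal_axis(points, iters=50):
    """Return the unit first principal component of a list of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    v = (1.0, 0.0)  # power-iteration start vector
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(w[0], w[1])
        v = (w[0] / norm, w[1] / norm)
    return v

# Toy "paw position" trajectory that varies along the direction (1, 2)
traj = [(float(t), 2.0 * t) for t in range(10)]
axis = principal_axis(traj)
# 1-D behavioral feature: projection of each centered frame onto the axis
scores = [(x - 4.5) * axis[0] + (y - 9.0) * axis[1] for x, y in traj]
```

In practice one would stack many keypoints per frame and keep several components; the principle (frames projected onto a low-dimensional basis) is the same.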
Subject(s)
Algorithms , Artificial Intelligence/statistics & numerical data , Behavior, Animal , Video Recording , Animals , Computational Biology , Computer Simulation , Markov Chains , Mice , Models, Statistical , Neural Networks, Computer , Supervised Machine Learning/statistics & numerical data , Unsupervised Machine Learning/statistics & numerical data , Video Recording/statistics & numerical data
ABSTRACT
In February 2020, the European Commission published a white paper on artificial intelligence (AI) as well as an accompanying communication and report. The paper sets out policy options to facilitate the secure and trustworthy development of AI and considers health to be one of its most important areas of application. We illustrate that the European Commission's approach, as applied to medical AI, presents some challenges that can be detrimental if not addressed. In particular, we discuss the issues of European values and European data, the update problem of AI systems, and the challenges of new trade-offs such as privacy, cybersecurity, accuracy, and intellectual property rights. We also outline what we view as the most important next steps in the Commission's iterative process. Although the European Commission has done good work in setting out a European approach for AI, we conclude that this approach will be more difficult to implement in health care. It will require careful balancing of core values, detailed consideration of nuances of health and AI technologies, and a keen eye on the political winds and global competition.
Subject(s)
Artificial Intelligence/statistics & numerical data , Data Management/methods , Delivery of Health Care/statistics & numerical data , Europe , Humans
ABSTRACT
The Medical Futurist says that radiology is one of the fastest-growing and developing areas of medicine, and therefore this might be the speciality in which we can expect to see the largest steps in development. So why do they think that, and does it apply to dose monitoring? The move from retrospective dose evaluation to a proactive dose management approach represents a serious area of research. Indeed, artificial intelligence and machine learning are consistently being integrated into best-in-class dose management software solutions. The development of clinical analytics and dashboards is already supporting operators in their decision-making, and these optimisations - if taken beyond a single machine, a single department, or a single health network - have the potential to drive real and lasting change. The question is: for whom exactly are these innovations being developed? How can the patient know that their scan has been performed to the absolute best that the technology can deliver? Do they know or even care how much their lifetime risk of developing cancer has changed post-examination? Do they want a personalised size-specific dose estimate or perhaps an individual organ dose assessment to share on Instagram? Let's get real about the clinical utility and regulatory application of dose monitoring, and shine a light on the shared responsibility in applying the technology and the associated innovations.
Subject(s)
Artificial Intelligence/statistics & numerical data , Inventions/statistics & numerical data , Machine Learning/statistics & numerical data , Radiation Dosage , Radiation Monitoring/statistics & numerical data , Radiation Protection/statistics & numerical data , Humans , Inventions/trends , Radiation Monitoring/instrumentation , Radiation Protection/instrumentation
ABSTRACT
The use of artificial intelligence (AI) in radiology, particularly machine learning (ML), has now become a reality in clinical practice. Since the end of the last century, several ML algorithms have been introduced for a wide range of common imaging tasks, not only for diagnostic purposes but also for image acquisition and postprocessing. AI is now recognized as a driving force in every aspect of radiology. There is growing evidence of the advantages of AI in radiology, whether in creating seamless imaging workflows for radiologists or even in replacing radiologists. Most current AI methods have internal and external disadvantages that are impeding their ultimate implementation in the clinical arena. As such, AI can be viewed as a commercial product seeking entry into the health care market. For this reason, this review analyzes the current status of AI, and specifically ML, as applied to radiology through a strengths, weaknesses, opportunities, and threats (SWOT) analysis.
Subject(s)
Artificial Intelligence/statistics & numerical data , Machine Learning , Printing, Three-Dimensional , Radiology/trends , Algorithms , Data Collection , Female , Forecasting , Health Care Sector , Humans , Male , Workflow
Subject(s)
Artificial Intelligence/statistics & numerical data , Artificial Intelligence/standards , Goals , Research/statistics & numerical data , Research/trends , Artificial Intelligence/economics , Artificial Intelligence/ethics , Authorship , Child , China , Datasets as Topic , Electronic Health Records , Humans , Personnel Selection , Population Density , Research/economics , Research/standards , Research Personnel/standards , Research Personnel/supply & distribution , Social Change , Time Factors , United States
ABSTRACT
BACKGROUND: The widespread adoption of smartphones provides researchers with expanded opportunities for developing, testing and implementing interventions. The National Institutes of Health (NIH) funds competitive, investigator-initiated grant applications. Funded grants represent the state of the science and are therefore expected to anticipate the progression of research in the near future. OBJECTIVE: The objective of this paper is to provide an analysis of the kinds of smartphone-based intervention apps funded in NIH research grants during the five-year period between 2014 and 2018. METHODS: We queried NIH RePORTER to identify candidate funded grants that addressed mHealth and the use of smartphones. From 1524 potential grants, we identified 397 that met the requisite of including an intervention app. Each grant's abstract was analyzed to understand the focus of the intervention. The year of funding, type of activity (eg, R01, R34, and so on) and funding were noted. RESULTS: We identified 13 categories of strategies employed in funded smartphone intervention apps. Most grants included either one (35.0%) or two (39.0%) intervention approaches. These included artificial intelligence (57 apps), bionic adaptation (33 apps), cognitive and behavioral therapies (68 apps), contingency management (24 apps), education and information (85 apps), enhanced motivation (50 apps), facilitating, reminding and referring (60 apps), gaming and gamification (52 apps), mindfulness training (18 apps), monitoring and feedback (192 apps), norm setting (7 apps), skills training (85 apps) and social support and social networking (59 apps). The most frequently observed grant types included Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) grants (40.8%) and Research Project Grants (R01s) (26.2%). The number of grants funded increased through the five-year period, from 60 in 2014 to 112 in 2018.
CONCLUSIONS: Smartphone intervention apps are increasingly competitive for NIH funding. They reflect a wide diversity of approaches that have significant potential for use in applied settings.
Subject(s)
Mobile Applications/statistics & numerical data , National Institutes of Health (U.S.)/economics , Smartphone/instrumentation , Artificial Intelligence/statistics & numerical data , Bionics/statistics & numerical data , Cognitive Behavioral Therapy/statistics & numerical data , Education/statistics & numerical data , Financial Management/economics , Financial Management/statistics & numerical data , Financing, Organized/economics , Financing, Organized/statistics & numerical data , Humans , Information Management/statistics & numerical data , Mobile Applications/trends , Research Personnel , Small Business/statistics & numerical data , Small Business/trends , Smartphone/economics , Technology Transfer , Telemedicine , United States/epidemiology
Subject(s)
Central Nervous System Neoplasms/surgery , Inventions/trends , Neurosurgery/methods , Neurosurgical Procedures/methods , Robotic Surgical Procedures/methods , Robotics/statistics & numerical data , Artificial Intelligence/statistics & numerical data , Artificial Intelligence/trends , Biopsy/methods , Central Nervous System Neoplasms/pathology , Electrodes , Humans , Male , Middle Aged , Neurosurgery/instrumentation , Neurosurgery/trends , Neurosurgical Procedures/instrumentation , Neurosurgical Procedures/trends , Robotic Surgical Procedures/adverse effects , Robotic Surgical Procedures/instrumentation , Robotics/trends , Tomography, X-Ray Computed
Subject(s)
Entrepreneurship/organization & administration , Public-Private Sector Partnerships/organization & administration , Research/organization & administration , Artificial Intelligence/economics , Artificial Intelligence/statistics & numerical data , Biomedical Technology/economics , Biomedical Technology/trends , Brain-Computer Interfaces , Early Detection of Cancer/methods , Fitness Trackers , Humans , Mobile Applications , National Institute of Mental Health (U.S.)/organization & administration , Privacy , Public-Private Sector Partnerships/economics , Public-Private Sector Partnerships/trends , Research/standards , Research/trends , Software , United States , United States Food and Drug Administration/legislation & jurisprudence , Workforce
Subject(s)
Artificial Intelligence/statistics & numerical data , Artificial Intelligence/trends , Education/trends , Employment/trends , Entrepreneurship/trends , Inventions/trends , Social Change , Adaptation, Psychological , Algorithms , Automation , Automobiles , Employment/psychology , Humans , Military Science , Models, Economic , Robotics/statistics & numerical data , Robotics/trends , Unemployment/psychology , Unemployment/trends
Subject(s)
Artificial Intelligence/trends , Efficiency , Employment/trends , Inventions/trends , Artificial Intelligence/statistics & numerical data , Employment/statistics & numerical data , Humans , Machine Learning/statistics & numerical data , Machine Learning/trends , Research/trends , Social Change , Workforce
ABSTRACT
The development and integration of machine learning/artificial intelligence into routine clinical practice will significantly alter the current practice of radiology. Changes in reimbursement and practice patterns will also continue to affect radiology. But rather than being a significant threat to radiologists, we believe these changes, particularly machine learning/artificial intelligence, will be a boon to radiologists by increasing their value, efficiency, accuracy, and personal satisfaction.
Subject(s)
Artificial Intelligence/statistics & numerical data , Radiologists , Clinical Competence , Efficiency , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Personal Satisfaction , Practice Patterns, Physicians' , Reimbursement Mechanisms
Subject(s)
Efficiency, Organizational/trends , Employment/trends , Research , Technology/trends , Artificial Intelligence/statistics & numerical data , Artificial Intelligence/trends , Efficiency , Efficiency, Organizational/statistics & numerical data , Employment/economics , Employment/statistics & numerical data , Humans , Policy Making , Research/trends , Technology/statistics & numerical data , United States , Workforce
Subject(s)
Artificial Intelligence , Industry , Personnel Selection , Research Personnel , Research , Artificial Intelligence/statistics & numerical data , Artificial Intelligence/trends , Education, Graduate/trends , Industry/economics , Research/economics , Research/education , Research Personnel/economics , Research Personnel/education , Salaries and Fringe Benefits , Universities , Workforce
ABSTRACT
Financial markets emanate massive amounts of data from which machines can, in principle, learn to invest with minimal initial guidance from humans. I contrast human and machine strengths and weaknesses in making investment decisions. The analysis reveals areas in the investment landscape where machines are already very active and those where machines are likely to make significant inroads in the next few years.
Subject(s)
Financial Management/statistics & numerical data , Investments/statistics & numerical data , Robotics/statistics & numerical data , Algorithms , Artificial Intelligence/statistics & numerical data , Artificial Intelligence/trends , Data Interpretation, Statistical , Decision Making , Financial Management/trends , Humans , Investments/economics , Robotics/trends , Trust
ABSTRACT
BACKGROUND: Unstable angina (UA) is widely accepted as a critical phase of coronary heart disease, with patients exhibiting widely varying risks. Early risk assessment of UA is at the center of the management program, allowing physicians to categorize patients according to clinical characteristics, risk stratification, and prognosis. Although many prognostic models are widely used for UA risk assessment in clinical practice, a number of studies have highlighted possible shortcomings. One serious drawback is that existing models lack the ability to deal with the intrinsic uncertainty of the variables utilized. METHODS: To help physicians refine knowledge for the stratification of UA risk with respect to vagueness in information, this paper develops an intelligent system combining a genetic algorithm with fuzzy association rule mining. In detail, it models the vagueness of the input information through fuzzy sets, and then applies a genetic fuzzy system to the acquired fuzzy sets to extract the fuzzy rule set for the problem of UA risk assessment. RESULTS: The proposed system is evaluated using a real dataset collected from the cardiology department of a Chinese hospital, consisting of 54 patient cases. Nine numerical and 17 categorical patient features appearing in the dataset were selected for the experiments. The proposed system made the same decision as the physician in 46 of the 54 tested cases (85.2%). CONCLUSIONS: By comparing the results obtained through the proposed system with those resulting from the physician's decisions, it was found that the developed model is highly reflective of reality. The proposed system could be used for educational purposes and, with further improvements, could assist and guide young physicians in their daily work.
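To make the fuzzy-rule idea concrete, here is a small illustrative sketch of how one fuzzy rule over patient features could be evaluated. The feature names, set boundaries, and the rule itself are hypothetical, not taken from the paper's mined rule set:

```python
# Illustrative only: evaluating one fuzzy IF-THEN rule. Feature names,
# membership boundaries, and the rule are hypothetical examples.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy sets for two hypothetical numerical features
age_elderly = lambda age: tri(age, 55, 75, 95)
hr_high     = lambda hr:  tri(hr, 80, 110, 140)

def rule_high_risk(age, hr):
    # IF age IS elderly AND heart_rate IS high THEN risk IS high.
    # AND is the min t-norm: the rule fires with the strength of its
    # weakest antecedent, so partial membership yields partial firing.
    return min(age_elderly(age), hr_high(hr))
```

A genetic algorithm in such a system would search over the membership boundaries and rule combinations, scoring candidate rule sets by their agreement with physician decisions.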
Subject(s)
Angina, Unstable , Artificial Intelligence/statistics & numerical data , Fuzzy Logic , Models, Genetic , Prognosis , Algorithms , Feasibility Studies , Humans , Risk Assessment
ABSTRACT
There has been some improvement in the treatment of preterm infants, which has helped to increase their chance of survival. However, the rate of premature births is still increasing globally. As a result, this group of infants is most at risk of developing severe medical conditions that can affect the respiratory, gastrointestinal, immune, central nervous, auditory and visual systems. In extreme cases, this can also lead to long-term conditions such as cerebral palsy, mental retardation and learning difficulties, as well as poor health and growth. In the US alone, the societal and economic cost of preterm births was estimated in 2005 at $26.2 billion per annum. In the UK, this figure was close to £2.95 billion in 2009. Many believe that a better understanding of why preterm births occur, and a strategic focus on prevention, will help to improve the health of children and reduce healthcare costs. At present, most methods of preterm birth prediction are subjective. However, a strong body of evidence suggests that the analysis of uterine electrical signals (electrohysterography) could provide a viable way of diagnosing true labour and predicting preterm deliveries. Most electrohysterography studies focus on true labour detection during the final seven days before labour. The challenge is to utilise electrohysterography techniques to predict preterm delivery earlier in the pregnancy. This paper explores this idea further and presents a supervised machine learning approach that classifies term and preterm records, using an open source dataset containing 300 records (38 preterm and 262 term). The synthetic minority oversampling technique is used to oversample the minority preterm class, and cross-validation techniques are used to evaluate the dataset against other similar studies. Our approach shows an improvement on existing studies, with 96% sensitivity, 90% specificity, a 95% area under the curve and an 8% global error using the polynomial classifier.
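The class-imbalance step (38 preterm vs 262 term records) can be illustrated with a simplified SMOTE-style oversampler. True SMOTE interpolates toward one of a sample's k nearest minority neighbours; the sketch below interpolates between random minority pairs, which keeps the core idea (synthetic points placed on segments between minority samples) without the k-NN machinery. The feature values are toy numbers, not EHG data:

```python
# Simplified sketch in the spirit of SMOTE: generate synthetic minority
# samples by linear interpolation between pairs of existing minority
# samples. Real SMOTE restricts the second point to a k-nearest neighbour.
import random

def oversample_minority(minority, n_new, rng):
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        z = rng.choice(minority)
        u = rng.random()  # interpolation factor in [0, 1)
        # New point lies on the segment between x and z
        synthetic.append(tuple(xi + u * (zi - xi) for xi, zi in zip(x, z)))
    return synthetic

rng = random.Random(0)
preterm = [(0.2, 1.1), (0.4, 0.9), (0.3, 1.4)]   # toy minority records
balanced_minority = preterm + oversample_minority(preterm, 5, rng)
```

Every synthetic point is a convex combination of two minority samples, so each feature stays within the minority class's observed range.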
Subject(s)
Artificial Intelligence/statistics & numerical data , Premature Birth/prevention & control , Uterus/physiopathology , Area Under Curve , Databases, Factual , Electrophysiological Phenomena , Female , Health Care Costs , Humans , Infant, Newborn , Infant, Premature , Predictive Value of Tests , Pregnancy , Premature Birth/economics , ROC Curve
ABSTRACT
In this work, we first review some extensions of the standard Hopfield model in the low-storage limit, namely the correlated-attractor case and the multitasking case recently introduced by the authors. The former is based on a modification of the Hebbian prescription that induces a coupling between consecutive patterns, an effect tuned by a parameter a. In the latter, dilution is introduced in the pattern entries, such that a fraction d of them is blank. We then merge these two extensions to obtain a system able to retrieve several patterns in parallel, where the quality of retrieval, encoded by the set of Mattis magnetizations {m(µ)}, is reminiscent of the correlation among patterns. By tuning the parameters d and a, qualitatively different outputs emerge, ranging from highly hierarchical to symmetric. The investigations are carried out by means of both numerical simulations and statistical mechanics analysis, properly adapting a novel technique originally developed for spin glasses, namely the Hamilton-Jacobi interpolation, with excellent agreement. Finally, we show the thermodynamical equivalence of this associative network with a (restricted) Boltzmann machine and study its stochastic dynamics to obtain a dynamical picture as well, perfectly consistent with the static scenario discussed earlier.
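The correlated (a ≠ 0) and diluted (d ≠ 0) couplings of the paper are not reproduced here, but the baseline they extend, the standard Hopfield model with its Mattis magnetization as the retrieval measure, can be shown in a few lines. This sketch corresponds to the a = 0, d = 0 limit, with two orthogonal patterns chosen so the outcome is exact:

```python
# Minimal standard-Hopfield sketch (the a = 0, d = 0 limit of the models
# reviewed above): Hebbian storage of two orthogonal patterns, one
# zero-temperature synchronous update, and the Mattis magnetization
# m^mu as the measure of retrieval quality.

N = 8
xi1 = [1] * N                 # pattern 1
xi2 = [1, -1] * (N // 2)      # pattern 2, orthogonal to pattern 1

# Hebbian couplings J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu
J = [[(xi1[i] * xi1[j] + xi2[i] * xi2[j]) / N for j in range(N)]
     for i in range(N)]

def update(sigma):
    """One synchronous update: sigma_i <- sign(sum_j J_ij sigma_j)."""
    return [1 if sum(J[i][j] * sigma[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

def mattis(xi, sigma):
    """Mattis magnetization m^mu = (1/N) * sum_i xi_i^mu sigma_i."""
    return sum(x * s for x, s in zip(xi, sigma)) / N

corrupted = list(xi1)
corrupted[0] = -1             # flip one spin of pattern 1
retrieved = update(corrupted)
# One update restores pattern 1, so mattis(xi1, retrieved) == 1.0
```

In the paper's multitasking regime, several magnetizations m(µ) are simultaneously nonzero; here, with uncorrelated patterns, retrieval drives a single magnetization to 1.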
Subject(s)
Artificial Intelligence , Computer Simulation , Neural Networks, Computer , Artificial Intelligence/statistics & numerical data , Computer Simulation/statistics & numerical data , Monte Carlo Method
ABSTRACT
Properties of data frequently vary depending on the sampled situations, which usually change along a time evolution or owing to environmental effects. One way to analyze such data is to find invariances, or representative features kept constant over changes. The aim of this paper is to identify one such feature, namely interactions or dependencies among variables that are common across multiple datasets collected under different conditions. To that end, we propose a common substructure learning (CSSL) framework based on a graphical Gaussian model. We further present a simple learning algorithm based on the Dual Augmented Lagrangian and the Alternating Direction Method of Multipliers. We confirm the performance of CSSL over other existing techniques in finding unchanging dependency structures in multiple datasets through numerical simulations on synthetic data and through a real-world application to anomaly detection in automobile sensors.
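The CSSL optimization itself is more involved than can be shown here, but one recurring ingredient of ADMM solvers for sparsity-regularized problems, including sparse graphical-model estimation, is the soft-thresholding step, i.e. the proximal map of the l1 penalty. A minimal sketch of that ingredient alone:

```python
# Ingredient-level sketch, not the CSSL algorithm itself: the
# soft-thresholding operator used inside each ADMM iteration of
# l1-penalized problems. Applied elementwise, it shrinks coefficients
# toward zero and zeroes out weak ones, producing sparse structures.

def soft_threshold(x, lam):
    """prox of lam*|.|: shrink x toward zero by lam, clipping at zero."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Elementwise application to one row of a (toy) precision-matrix estimate:
row = [3.0, -0.4, 1.5, -2.5]
sparse_row = [soft_threshold(v, 1.0) for v in row]   # -> [2.0, 0.0, 0.5, -1.5]
```

In a graphical Gaussian setting, the zeros produced by this step correspond to absent edges, which is what lets such solvers recover sparse dependency structures.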