Results 1 - 9 of 9
1.
Radiology ; 311(1): e232535, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38591971

ABSTRACT

Background Mammographic density measurements are used to identify patients who should undergo supplemental imaging for breast cancer detection, but artificial intelligence (AI) image analysis may be more effective. Purpose To assess whether AISmartDensity-an AI-based score integrating cancer signs, masking, and risk-surpasses measurements of mammographic density in identifying patients for supplemental breast imaging after a negative screening mammogram. Materials and Methods This retrospective study included randomly selected individuals who underwent screening mammography at Karolinska University Hospital between January 2008 and December 2015. The models in AISmartDensity were trained and validated using nonoverlapping data. The ability of AISmartDensity to identify future cancer in patients with a negative screening mammogram was evaluated and compared with that of mammographic density models. Sensitivity and positive predictive value (PPV) were calculated for the top 8% of scores, mimicking the proportion of patients in the Breast Imaging Reporting and Data System "extremely dense" category. Model performance was evaluated using area under the receiver operating characteristic curve (AUC) and was compared using the DeLong test. Results The study population included 65 325 examinations (median patient age, 53 years [IQR, 47-62 years])-64 870 examinations in healthy patients and 455 examinations in patients with breast cancer diagnosed within 3 years of a negative screening mammogram. The AUC for detecting subsequent cancers was 0.72 and 0.61 (P < .001) for AISmartDensity and the best-performing density model (age-adjusted dense area), respectively. For examinations with scores in the top 8%, AISmartDensity identified 152 of 455 (33%) future cancers with a PPV of 2.91%, whereas the best-performing density model (age-adjusted dense area) identified 57 of 455 (13%) future cancers with a PPV of 1.09% (P < .001). AISmartDensity identified 32% (41 of 130) and 34% (111 of 325) of interval and next-round screen-detected cancers, whereas the best-performing density model (dense area) identified 16% (21 of 130) and 9% (30 of 325), respectively. Conclusion AISmartDensity, integrating cancer signs, masking, and risk, outperformed traditional density models in identifying patients for supplemental imaging after a negative screening mammogram. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Kim and Chang in this issue.
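As a rough illustration of the evaluation described in this abstract, the sketch below computes the top-8% selection, sensitivity, positive predictive value, and AUC. It is a minimal sketch with synthetic data, not the authors' code; `scores` and `future_cancer` are hypothetical stand-ins for the AISmartDensity output and the 3-year outcome labels.

```python
# Minimal sketch (synthetic data): flag the top 8% of scores among negative
# screens and compute sensitivity, PPV, and AUC for future cancer detection.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 65_325                                         # examinations with a negative screen
future_cancer = rng.random(n) < 455 / n            # True = cancer diagnosed within 3 years
scores = rng.normal(size=n) + 1.2 * future_cancer  # stand-in for AISmartDensity scores

top_fraction = 0.08                                # mimic the "extremely dense" proportion
threshold = np.quantile(scores, 1 - top_fraction)
flagged = scores >= threshold

sensitivity = future_cancer[flagged].sum() / future_cancer.sum()
ppv = future_cancer[flagged].mean()
auc = roc_auc_score(future_cancer, scores)
print(f"sensitivity={sensitivity:.2%}  PPV={ppv:.2%}  AUC={auc:.2f}")
```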


Subject(s)
Breast Neoplasms , Early Detection of Cancer , Humans , Middle Aged , Female , Breast Neoplasms/diagnostic imaging , Artificial Intelligence , Retrospective Studies , Mammography
3.
Eur Phys J E Soft Matter ; 46(4): 27, 2023 Apr 11.
Article in English | MEDLINE | ID: mdl-37039923

ABSTRACT

We introduce a reinforcement learning (RL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows enclosed in a channel. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established RL agent programming interfaces. This allows for both testing existing deep reinforcement learning (DRL) algorithms against a challenging task and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing the user to choose different variables, such as velocity and pressure, at different locations in the domain. Given the complex, nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, enables leveraging the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient. Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively.
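For context, the opposition-control baseline mentioned above amounts to blowing and suction at the wall opposite to the wall-normal velocity sensed at a detection plane. The sketch below illustrates that control loop against a gym-style interface; `ChannelFlowEnv` is a toy stand-in with plausible observation and action shapes, not the paper's simulation environment.

```python
# Minimal sketch of opposition control in a gym-style loop (toy environment,
# not a flow solver): sense wall-normal velocity at a plane, actuate the wall.
import numpy as np


class ChannelFlowEnv:
    """Toy stand-in exposing wall blowing/suction as the action."""

    def __init__(self, n_actuators=64, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n = n_actuators

    def reset(self):
        # observation: wall-normal velocity sampled at a sensing plane
        return self.rng.normal(scale=0.1, size=self.n)

    def step(self, action):
        obs = self.rng.normal(scale=0.1, size=self.n)
        drag = 1.0 - 0.3 * np.clip(-np.mean(action * obs), 0, 1)  # fake drag signal
        return obs, -drag, False, {}


def opposition_control(v_plane, gain=1.0):
    """Blow/suck at the wall opposite to the sensed wall-normal velocity."""
    return -gain * v_plane


env = ChannelFlowEnv()
obs = env.reset()
for _ in range(100):
    obs, reward, done, info = env.step(opposition_control(obs))
```

A DRL agent such as deep deterministic policy gradient would replace `opposition_control` with a learned policy trained on the same observation and reward signal.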

4.
Bioinformatics ; 37(3): 360-366, 2021 04 20.
Article in English | MEDLINE | ID: mdl-32780838

ABSTRACT

MOTIVATION: Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure. Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. Alternatively, protein folding can be modeled using computational methods, which, however, are not guaranteed to always produce optimal results. GraphQA is a graph-based method for estimating the quality of protein models that possesses favorable properties such as representation learning, explicit modeling of both sequential and 3D structure, geometric invariance, and computational efficiency. RESULTS: GraphQA performs similarly to state-of-the-art methods despite using a relatively low number of input features. In addition, the graph network structure provides an improvement over the architecture used in ProQ4 when operating on the same input features. Finally, the individual contributions of the GraphQA components are carefully evaluated. AVAILABILITY AND IMPLEMENTATION: The PyTorch implementation, datasets, experiments, and a link to an evaluation server are available through this GitHub repository: github.com/baldassarreFe/graphqa. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
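The core idea of representing a protein model as a graph of residues and spatial contacts and scoring it with message passing can be sketched in a few lines of PyTorch. This is a generic illustration under assumed shapes and features, not the actual GraphQA architecture; see the linked repository for the real implementation.

```python
# Minimal message-passing sketch: residues are nodes, spatial contacts are
# edges, and the network predicts per-residue and global quality scores.
import torch
import torch.nn as nn


class TinyGraphScorer(nn.Module):
    def __init__(self, node_dim=16, hidden=32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, node_dim))
        self.readout = nn.Linear(node_dim, 1)

    def forward(self, x, edge_index):
        src, dst = edge_index                    # edges as a (2, n_edges) index tensor
        messages = self.msg(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, messages)  # sum over neighbours
        x = x + agg                              # one round of message passing
        local = torch.sigmoid(self.readout(x))   # per-residue quality in [0, 1]
        return local, local.mean()               # local and global scores


# toy protein model: 50 residues, random node features and contacts
x = torch.randn(50, 16)
edge_index = torch.randint(0, 50, (2, 200))
local_scores, global_score = TinyGraphScorer()(x, edge_index)
```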


Subject(s)
Neural Networks, Computer , Proteins , Protein Folding
5.
Nat Commun ; 11(1): 233, 2020 01 13.
Article in English | MEDLINE | ID: mdl-31932590

ABSTRACT

The emergence of artificial intelligence (AI) and its progressively wider impact on many sectors requires an assessment of its effect on the achievement of the Sustainable Development Goals. Using a consensus-based expert elicitation process, we find that AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets. However, current research foci overlook important aspects. The fast development of AI needs to be supported by the necessary regulatory insight and oversight for AI-based technologies to enable sustainable development. Failure to do so could result in gaps in transparency, safety, and ethical standards.

6.
Radiology ; 294(2): 265-272, 2020 02.
Article in English | MEDLINE | ID: mdl-31845842

ABSTRACT

Background Most risk prediction models for breast cancer are based on questionnaires and mammographic density assessments. By training a deep neural network, further information in the mammographic images can be considered. Purpose To develop a risk score that is associated with future breast cancer and compare it with density-based models. Materials and Methods In this retrospective study, all women aged 40-74 years within the Karolinska University Hospital uptake area in whom breast cancer was diagnosed in 2013-2014 were included, along with healthy control subjects. Network development was based on cases diagnosed from 2008 to 2012. The deep learning (DL) risk score, dense area, and percentage density were calculated for the earliest available digital mammographic examination for each woman. Logistic regression models were fitted to determine the association with subsequent breast cancer. False-negative rates were obtained for the DL risk score, age-adjusted dense area, and age-adjusted percentage density. Results A total of 2283 women, 278 of whom were later diagnosed with breast cancer, were evaluated. The age at mammography (mean, 55.7 years vs 54.6 years; P < .001), the dense area (mean, 38.2 cm2 vs 34.2 cm2; P < .001), and the percentage density (mean, 25.6% vs 24.0%; P < .001) were higher among women diagnosed with breast cancer than among those without a breast cancer diagnosis. The odds ratios and areas under the receiver operating characteristic curve (AUCs) were higher for the age-adjusted DL risk score than for dense area and percentage density: 1.56 (95% confidence interval [CI]: 1.48, 1.64; AUC, 0.65), 1.31 (95% CI: 1.24, 1.38; AUC, 0.60), and 1.18 (95% CI: 1.11, 1.25; AUC, 0.57), respectively (P < .001 for AUC). The false-negative rate was also lower for the DL risk score: 31% (95% CI: 29%, 34%), versus 36% (95% CI: 33%, 39%; P = .006) for age-adjusted dense area and 39% (95% CI: 37%, 42%; P < .001) for age-adjusted percentage density; this difference was most pronounced for more aggressive cancers. Conclusion Compared with density-based models, a deep neural network can more accurately predict which women are at risk for future breast cancer, with a lower false-negative rate for more aggressive cancers. © RSNA, 2019 Online supplemental material is available for this article. See also the editorial by Bahl in this issue.
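The modeling step described here, a logistic regression of subsequent breast cancer on a risk score summarized by an odds ratio and an AUC, can be sketched as follows. The data are synthetic stand-ins, not the study data, and the odds ratio is reported per standard deviation of the score as one common convention.

```python
# Minimal sketch (synthetic data): logistic regression of later cancer on a
# standardized risk score, with odds ratio per SD and AUC as summaries.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2283
cancer = rng.random(n) < 278 / n                 # True = later breast cancer diagnosis
risk_score = rng.normal(size=n) + 0.5 * cancer   # stand-in for the DL risk score
z = (risk_score - risk_score.mean()) / risk_score.std()

model = LogisticRegression().fit(z.reshape(-1, 1), cancer)
odds_ratio_per_sd = np.exp(model.coef_[0, 0])
auc = roc_auc_score(cancer, model.decision_function(z.reshape(-1, 1)))
print(f"OR per SD = {odds_ratio_per_sd:.2f}, AUC = {auc:.2f}")
```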


Subject(s)
Breast Density , Breast Neoplasms/diagnostic imaging , Mammography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Adult , Aged , Breast/diagnostic imaging , Deep Learning , Female , Humans , Middle Aged , Neural Networks, Computer , Retrospective Studies , Risk Assessment
7.
Cell Syst ; 6(6): 636-653, 2018 06 27.
Article in English | MEDLINE | ID: mdl-29953863

ABSTRACT

Phenotypic image analysis is the task of recognizing variations in cell properties using microscopic image data. These variations, produced through a complex web of interactions between genes and the environment, may hold the key to uncovering important biological phenomena or to understanding the response to a drug candidate. Today, phenotypic analysis is rarely performed completely by hand. The abundance of high-dimensional image data produced by modern high-throughput microscopes necessitates computational solutions. Over the past decade, a number of software tools have been developed to address this need. They use statistical learning methods to infer relationships between a cell's phenotype and data from the image. In this review, we examine the strengths and weaknesses of non-commercial phenotypic image analysis software, cover recent developments in the field, identify challenges, and give a perspective on future possibilities.


Subject(s)
High-Throughput Screening Assays/methods , Image Processing, Computer-Assisted/methods , Animals , Big Data , Humans , Machine Learning , Microscopy/methods , Phenotype , Software
8.
Transl Res ; 194: 19-35, 2018 04.
Article in English | MEDLINE | ID: mdl-29175265

ABSTRACT

Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcomes. Diagnosis by histopathology has proven instrumental in guiding breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years. But recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems, including image classification and speech recognition. In this review, we cover the use of AI and deep learning in diagnostic breast pathology, along with other recent developments in digital image analysis.


Subject(s)
Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Image Processing, Computer-Assisted/methods , Biomarkers, Tumor , Breast/pathology , Breast Neoplasms/pathology , Diagnosis, Computer-Assisted , Female , Humans , Machine Learning
9.
IEEE Trans Pattern Anal Mach Intell ; 38(9): 1790-802, 2016 09.
Article in English | MEDLINE | ID: mdl-26584488

ABSTRACT

Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (source), and the feed-forward unit activations of the trained network at a certain layer are used as a generic representation of an input image for a task with a relatively smaller training set (target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. These include parameters of the source ConvNet training, such as its architecture and the distribution of the training data, as well as parameters of the feature extraction, such as the chosen layer of the trained ConvNet and dimensionality reduction. By optimizing these factors, we show that significant improvements can be achieved on 17 varied visual recognition tasks. We further show that these tasks can be categorically ordered by their similarity to the source task, and that task performance correlates with this similarity with respect to the proposed factors.
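A generic version of the transfer setup studied here, using activations from a chosen layer of a pretrained ConvNet as features for a small target task, might look like the sketch below. It uses a torchvision ResNet and a toy target dataset as stand-ins, not the paper's exact networks or benchmarks.

```python
# Minimal sketch: extract pooled features from a source-trained ConvNet and
# train a simple linear classifier on a small target dataset.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# source network trained on ImageNet (downloads pretrained weights)
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()          # drop the source classifier; keep pooled features
backbone.eval()

def extract_features(images):
    """images: float tensor (N, 3, 224, 224), already normalized for ImageNet."""
    with torch.no_grad():
        return backbone(images).numpy()

# toy target task: 40 images, 2 classes (stand-ins for a small labeled dataset)
images = torch.randn(40, 3, 224, 224)
labels = [0] * 20 + [1] * 20
features = extract_features(images)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```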
