Results 1 - 20 of 159
1.
Sensors (Basel) ; 22(10)2022 May 10.
Article in English | MEDLINE | ID: covidwho-1875742

ABSTRACT

Convolutional neural networks are a class of deep neural networks that leverage spatial information, and they are therefore well suited to classifying images for a range of applications [...].


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer
2.
Sensors (Basel) ; 22(10)2022 May 12.
Article in English | MEDLINE | ID: covidwho-1855751

ABSTRACT

Studies and systems aimed at identifying the presence of people within an indoor environment and monitoring their activities and flows have received increasing attention in recent years, particularly since the beginning of the COVID-19 pandemic. This paper proposes a people-counting approach based on cameras and Raspberry Pi platforms, together with an edge-based transfer learning framework enriched with specific image processing strategies, so that the approach can be adopted in different indoor environments without the need for tailored training phases. The system was deployed on a university campus, which was chosen as the case study, and was able to work in classrooms with different characteristics. This paper reports a proposed architecture that could make the system scalable and privacy compliant, together with the evaluation tests conducted in different types of classrooms, which demonstrate the feasibility of this approach. Overall, the system was able to count the number of people in classrooms with a maximum mean absolute error of 1.23.
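For reference, the mean absolute error reported above is simply the average absolute difference between ground-truth and predicted counts. A minimal sketch (the classroom counts below are hypothetical, not from the paper):

```python
import numpy as np

def mean_absolute_error(true_counts, predicted_counts):
    """Mean absolute error between ground-truth and predicted people counts."""
    true_counts = np.asarray(true_counts, dtype=float)
    predicted_counts = np.asarray(predicted_counts, dtype=float)
    return float(np.mean(np.abs(true_counts - predicted_counts)))

# Hypothetical counts for five classroom snapshots
truth = [12, 30, 7, 25, 18]
pred = [13, 28, 7, 26, 18]
print(mean_absolute_error(truth, pred))  # 0.8
```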


Subject(s)
COVID-19 , Pandemics , Humans , Image Processing, Computer-Assisted , Machine Learning
3.
Comput Biol Med ; 145: 105498, 2022 06.
Article in English | MEDLINE | ID: covidwho-1838703

ABSTRACT

BACKGROUND: Automated generation of radiological reports for different imaging modalities is essential to streamline the clinical workflow and alleviate radiologists' workload. It involves the careful amalgamation of image processing techniques for medical image interpretation and language generation techniques for report generation. This paper presents CADxReport, a co-attention and reinforcement learning based technique for generating clinically accurate reports from chest x-ray (CXR) images. METHOD: CADxReport uses a VGG19 network pre-trained on the ImageNet dataset and a multi-label classifier to extract visual and semantic features from CXR images, respectively. A co-attention mechanism over both feature sets is used to generate a context vector, which is then passed to an HLSTM for radiological report generation. The model is trained using reinforcement learning to maximize CIDEr rewards. The OpenI dataset, comprising 7,470 CXRs along with 3,955 associated structured radiological reports, is used for training and testing. RESULTS: Our proposed model is able to generate clinically accurate reports from CXR images. The quantitative evaluations confirm satisfactory results, with the following performance scores: BLEU-1 = 0.577, BLEU-2 = 0.478, BLEU-3 = 0.403, BLEU-4 = 0.346, ROUGE = 0.618 and CIDEr = 0.380. CONCLUSIONS: The evaluation using BLEU, ROUGE, and CIDEr metrics indicates that the proposed model generates sufficiently accurate CXR reports and outperforms most of the state-of-the-art methods for the given task.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Radiography , Thorax , X-Rays
4.
Med Image Anal ; 79: 102461, 2022 Jul.
Article in English | MEDLINE | ID: covidwho-1804830

ABSTRACT

Ultrasound (US) imaging is widely used for anatomical structure inspection in clinical diagnosis. The training of new sonographers and of deep learning based algorithms for US image analysis usually requires a large amount of data. However, obtaining and labeling large-scale US imaging data are not easy tasks, especially for diseases with low incidence. Realistic US image synthesis can alleviate this problem to a great extent. In this paper, we propose a generative adversarial network (GAN) based image synthesis framework. Our main contributions include: (1) we present the first work that can synthesize realistic B-mode US images with high resolution and customized texture editing features; (2) to enhance the structural details of generated images, we introduce auxiliary sketch guidance into a conditional GAN, superposing the edge sketch onto the object mask and using the composite mask as the network input; (3) to generate high-resolution US images, we adopt a progressive training strategy that gradually generates high-resolution images from low-resolution ones; in addition, a feature loss is proposed to minimize the difference in high-level features between the generated and real images, which further improves the quality of generated images; (4) the proposed US image synthesis method is quite universal and can be generalized to US images of anatomical structures beyond the three tested in our study (lung, hip joint, and ovary); (5) extensive experiments on three large US image datasets are conducted to validate our method. Ablation studies, customized texture editing, user studies, and segmentation tests demonstrate the promising performance of our method in synthesizing realistic US images.
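The feature loss described in contribution (3) minimizes the difference between high-level features of generated and real images. A rough sketch of the idea, where the pooling-based extractor is a hypothetical stand-in for an intermediate layer of a pretrained network (the paper's actual extractor is not specified here):

```python
import numpy as np

def toy_feature_extractor(image):
    """Hypothetical stand-in for a pretrained CNN's high-level feature
    maps; here just 4x4 average pooling of a 2D image."""
    h, w = image.shape
    return image.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def feature_loss(generated, real):
    """Mean squared difference between high-level features of the
    generated and real images."""
    fg = toy_feature_extractor(generated)
    fr = toy_feature_extractor(real)
    return float(np.mean((fg - fr) ** 2))

rng = np.random.default_rng(0)
real = rng.random((64, 64))
identical = feature_loss(real, real)  # 0.0 for identical images
different = feature_loss(rng.random((64, 64)), real)
print(identical, different > 0)
```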


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Female , Humans , Image Processing, Computer-Assisted/methods , Ultrasonography
5.
Phys Med Biol ; 67(7)2022 03 29.
Article in English | MEDLINE | ID: covidwho-1774310

ABSTRACT

Chest x-ray (CXR) is one of the most commonly used imaging techniques for the detection and diagnosis of pulmonary diseases. One critical component in many computer-aided systems, for either detection or diagnosis in digital CXR, is the accurate segmentation of the lung. Due to low-intensity contrast around lung boundary and large inter-subject variance, it has been challenging to segment lung from structural CXR images accurately. In this work, we propose an automatic Hybrid Segmentation Network (H-SegNet) for lung segmentation on CXR. The proposed H-SegNet consists of two key steps: (1) an image preprocessing step based on a deep learning model to automatically extract coarse lung contours; (2) a refinement step to fine-tune the coarse segmentation results based on an improved principal curve-based method coupled with an improved machine learning method. Experimental results on several public datasets show that the proposed method achieves superior segmentation results in lung CXRs, compared with several state-of-the-art methods.


Subject(s)
Lung Diseases , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Lung Diseases/diagnosis , Radiography , Thorax/diagnostic imaging
6.
Med Phys ; 49(6): 3797-3815, 2022 Jun.
Article in English | MEDLINE | ID: covidwho-1750419

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) has spread rapidly across the globe, seriously threatening the health of people all over the world. To reduce the diagnostic pressure on front-line doctors, an accurate and automatic lesion segmentation method is highly desirable in clinical practice. PURPOSE: Many proposed two-dimensional (2D) methods for slice-based lesion segmentation cannot take full advantage of the spatial information in three-dimensional (3D) volume data, resulting in limited segmentation performance. Three-dimensional methods can utilize the spatial information but suffer from long training times and slow convergence. To solve these problems, we propose an end-to-end hybrid-feature cross fusion network (HFCF-Net) to fuse 2D and 3D features at three scales for the accurate segmentation of COVID-19 lesions. METHODS: The proposed HFCF-Net incorporates 2D and 3D subnets to extract features within and between slices effectively. A cross fusion module is designed to bridge the 2D and 3D decoders at the same scale to fuse both types of features. The module consists of three cross fusion blocks, each of which contains a prior fusion path and a context fusion path to jointly learn better lesion representations. The former explicitly provides the 3D subnet with lesion-related prior knowledge, and the latter uses the 3D context information as attention guidance for the 2D subnet, promoting precise segmentation of the lesion regions. Furthermore, we explore an imbalance-robust adaptive learning loss function that combines image-level and pixel-level losses to tackle the apparent imbalance between the proportions of lesion and non-lesion voxels, providing a learning strategy that dynamically adjusts the learning focus between the 2D and 3D branches during training for effective supervision.
RESULTS: Extensive experiments conducted on a publicly available dataset demonstrate that the proposed segmentation network significantly outperforms several state-of-the-art methods for COVID-19 lesion segmentation, yielding a Dice similarity coefficient of 74.85%. The visual comparison of segmentation performance also demonstrates the superiority of the proposed network in segmenting different-sized lesions. CONCLUSIONS: In this paper, we propose a novel HFCF-Net for rapid and accurate COVID-19 lesion segmentation from chest computed tomography volume data. It innovatively fuses hybrid features in a cross manner for lesion segmentation, exploiting the complementary advantages of the 2D and 3D subnets to enhance segmentation performance. Benefiting from the cross fusion mechanism, the proposed HFCF-Net can segment lesions more accurately with the knowledge acquired from both subnets.
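For readers unfamiliar with the metric, the Dice similarity coefficient quoted above measures the overlap between predicted and reference masks as 2|A∩B| / (|A| + |B|). A minimal sketch with hypothetical binary masks:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
true = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, true))  # 2*2/(3+3), about 0.667
```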


Subject(s)
COVID-19 , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods
7.
Sensors (Basel) ; 22(5)2022 Feb 22.
Article in English | MEDLINE | ID: covidwho-1742604

ABSTRACT

The axle box in the bogie system of subway trains is a key component connecting the primary damper and the axle. To extract deep features and large-scale fault features for rapid diagnosis, a novel fault reconstruction characteristics classification method is proposed for the rolling bearings of a subway train axle box, based on a deep residual network with a multi-scale stacked receptive field. First, multi-layer stacked convolutional kernels, and methods to insert them into ultra-deep residual networks, are developed. Then, the acquired vibration signals of four fault types are reconstructed with a Gramian angular summation field, yielding trainable large-scale 2D time-series images. Finally, the experimental results show that ResNet-152-MSRF has a network structure of low complexity and fewer trainable parameters than general convolutional neural networks, with no significant increase in network parameters or computation time after embedding the multi-layer stacked convolutional kernels. Moreover, accuracy improves significantly compared with shallower networks, and slightly compared with networks without the embedded multi-layer stacked convolutional kernels.
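The Gramian angular summation field used above to turn 1D vibration signals into 2D images has a compact definition: rescale the series to [-1, 1], take phi = arccos(x), and set GASF[i, j] = cos(phi_i + phi_j). A sketch (the sine signal is a hypothetical stand-in for a vibration record):

```python
import numpy as np

def gramian_angular_summation_field(series):
    """Encode a 1D signal as a 2D image via the Gramian angular
    summation field: rescale to [-1, 1], map to angles phi = arccos(x),
    then GASF[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])

signal = np.sin(np.linspace(0, 2 * np.pi, 8))  # hypothetical signal
img = gramian_angular_summation_field(signal)
print(img.shape)  # (8, 8)
```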


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Algorithms , Disease Progression , Humans , Image Processing, Computer-Assisted/methods
8.
J Healthc Eng ; 2022: 5998042, 2022.
Article in English | MEDLINE | ID: covidwho-1731358

ABSTRACT

Pulmonary medical image analysis using image processing and deep learning approaches has made remarkable achievements in the diagnosis, prognosis, and severity assessment of lung diseases. The COVID-19 epidemic, caused by the novel coronavirus, has triggered a critical need for artificial intelligence assistance in diagnosing and controlling the disease to reduce its effects on people and global economies. This study identifies the various COVID-19 medical imaging analysis models proposed by different researchers and outlines their merits and demerits. It gives a detailed discussion of existing COVID-19 detection methodologies (diagnosis, prognosis, and severity/risk detection) and the challenges they face. It also highlights the various preprocessing and post-processing methods used to enhance the detection mechanism. This work also identifies unexplored research areas in medical image analysis and shows how the vast body of COVID-19 research can advance the field. Although deep learning methods offer high levels of efficiency, some of their limitations are briefly described in the study. Hence, this review can help readers understand the uses, pros, and cons of deep learning in analyzing medical images.


Subject(s)
Artificial Intelligence , COVID-19 , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted , SARS-CoV-2 , Tomography, X-Ray Computed
9.
Biomed Res Int ; 2022: 8925930, 2022.
Article in English | MEDLINE | ID: covidwho-1723968

ABSTRACT

COVID-19 is a fatal disease caused by the SARS-CoV-2 virus that had caused around 5.3 million deaths globally as of December 2021. Detecting the disease is a time-consuming process, which has worsened the situation around the globe, and the disease has been declared a pandemic by the WHO. Deep learning-based approaches are widely used to diagnose COVID-19 cases, but the limited size of publicly available datasets causes model over-fitting. Modern artificial intelligence-based techniques can be used to enlarge the dataset and avoid the over-fitting problem. This research work presents the use of various deep learning models along with state-of-the-art augmentation methods, namely classical and generative adversarial network (GAN)-based data augmentation. Four existing deep convolutional networks, namely DenseNet-121, InceptionV3, Xception, and ResNet101, were used to detect the virus in X-ray images after training on the augmented dataset. Additionally, we propose a novel convolutional neural network (QuNet) to improve COVID-19 detection. The comparative analysis of the results shows that both QuNet and Xception achieved high accuracy with the classically augmented dataset, whereas QuNet also outperformed the others, delivering 90% detection accuracy with the GAN-based augmented dataset.
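"Classical" augmentation as referred to above typically means simple label-preserving transforms such as flips and rotations. A minimal sketch (the 4x4 array stands in for an X-ray image; the paper's exact transform set is not specified here):

```python
import numpy as np

def classical_augmentations(image):
    """Yield simple label-preserving variants of a 2D image:
    horizontal/vertical flips and 90-degree rotations."""
    yield np.fliplr(image)
    yield np.flipud(image)
    for k in (1, 2, 3):
        yield np.rot90(image, k)

xray = np.arange(16).reshape(4, 4)  # hypothetical stand-in image
augmented = list(classical_augmentations(xray))
print(len(augmented))  # 5 extra samples per original image
```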


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Computer Graphics , Databases, Factual , Humans , Neural Networks, Computer , Pneumonia/diagnostic imaging , Radiography
10.
Sci Rep ; 12(1): 3212, 2022 02 25.
Article in English | MEDLINE | ID: covidwho-1713208

ABSTRACT

Novel coronavirus disease (COVID-19) is a highly contagious respiratory infection that has had devastating effects on the world. Recently, new COVID-19 variants have been emerging, making the situation more challenging and threatening. Evaluation and quantification of COVID-19 lung abnormalities based on chest computed tomography (CT) images can help determine the disease stage, efficiently allocate limited healthcare resources, and inform treatment decisions. During the pandemic, however, visual assessment and quantification of COVID-19 lung lesions by expert radiologists has become expensive and prone to error, creating an urgent need for practical autonomous solutions. In this context, first, the paper introduces an open-access COVID-19 CT segmentation dataset containing 433 CT images from 82 patients, annotated by an expert radiologist. Second, a deep neural network (DNN)-based framework is proposed, referred to as the [Formula: see text], that autonomously segments lung abnormalities associated with COVID-19 from chest CT images. Performance of the proposed [Formula: see text] framework is evaluated through several experiments on the introduced and external datasets. Third, an unsupervised enhancement approach is introduced that can reduce the gap between the training and test sets and improve model generalization. The enhanced results show a Dice score of 0.8069 and specificity and sensitivity of 0.9969 and 0.8354, respectively. Furthermore, the results indicate that the [Formula: see text] model can efficiently segment COVID-19 lesions in both 2D CT images and whole lung volumes. Results on the external dataset illustrate the generalization capabilities of the [Formula: see text] model to CT images obtained from a different scanner.
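The specificity and sensitivity figures quoted above are voxel-wise rates computable from a confusion matrix: sensitivity is recall on the lesion class, specificity is recall on the background. A minimal sketch with hypothetical flattened masks:

```python
import numpy as np

def sensitivity_specificity(pred_mask, true_mask):
    """Voxel-wise sensitivity (recall on lesion) and specificity
    (recall on background) for binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    tp = np.logical_and(pred, true).sum()
    tn = np.logical_and(~pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    return tp / (tp + fn), tn / (tn + fp)

pred = np.array([1, 1, 0, 0, 1, 0])  # hypothetical flattened masks
true = np.array([1, 0, 0, 0, 1, 1])
sens, spec = sensitivity_specificity(pred, true)
print(sens, spec)  # both 2/3, about 0.667
```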


Subject(s)
COVID-19/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Radiography, Thoracic , Tomography, X-Ray Computed , Datasets as Topic , Female , Humans , Male , Middle Aged
11.
Sci Rep ; 12(1): 3090, 2022 02 23.
Article in English | MEDLINE | ID: covidwho-1704592

ABSTRACT

The World Health Organization (WHO) declared COVID-19 (COronaVIrus Disease 2019) a pandemic on March 11, 2020. Ever since then, the virus has been undergoing different mutations, with a high rate of dissemination. The diagnosis and prognosis of COVID-19 are critical to bringing the situation under control. The COVID-19 virus replicates in the lungs after entering the upper respiratory system, causing pneumonia and mortality. Deep learning has a significant role in detecting infections from computed tomography (CT). With the help of basic image processing techniques and deep learning, we have developed a two-stage cascaded 3D UNet to segment the contaminated area of the lungs. The first 3D UNet extracts the lung parenchyma from the CT volume input after preprocessing and augmentation. Since the CT volume is small, we apply appropriate post-processing to the lung parenchyma and input these volumes into the second 3D UNet, which extracts the infected 3D volumes. With this method, clinicians can input the complete CT volume of the patient and analyze the contaminated area without having to label the lung parenchyma for each new patient. For lung parenchyma segmentation, the proposed method obtained a sensitivity of 93.47%, specificity of 98.64%, accuracy of 98.07%, and Dice score of 92.46%. For lung infection segmentation, we achieved a sensitivity of 83.33%, a specificity of 99.84%, an accuracy of 99.20%, and a Dice score of 82%.


Subject(s)
Image Processing, Computer-Assisted
12.
Microsc Res Tech ; 85(6): 2313-2330, 2022 Jun.
Article in English | MEDLINE | ID: covidwho-1703067

ABSTRACT

The COVID-19 pandemic is spreading at a fast pace around the world and has a high mortality rate. There is no proper treatment for COVID-19, and its multiple variants (for example, Alpha, Beta, Gamma, and Delta), being more infectious in nature, are affecting millions of people and further complicate the detection process, putting victims at risk of death. Timely and accurate diagnosis of this deadly virus can not only save patients from loss of life but also spare them complex treatment procedures. Accurate segmentation and classification of COVID-19 is a tedious job due to the extensive variation in its shape and its similarity to other diseases such as pneumonia. Furthermore, existing techniques have hardly focused on estimating infection growth over time, which could help doctors better analyze the condition of COVID-19-affected patients. In this work, we address these shortcomings by proposing a model capable of segmenting and classifying COVID-19 from computed tomography images and predicting its behavior over a certain period. The framework comprises four main steps: (i) data preparation, (ii) segmentation, (iii) infection growth estimation, and (iv) classification. After the pre-processing step, we introduce a DenseNet-77 based UNET approach. Initially, DenseNet-77 is used in the encoder module of the UNET model to compute deep keypoints, which are then segmented to show the coronavirus region. Then, the infection growth of COVID-19 per patient is estimated using blob analysis. Finally, we employ the DenseNet-77 framework as an end-to-end network to classify the input images into three classes: healthy, COVID-19-affected, and pneumonia. We evaluated the proposed model on the COVID-19-20 and COVIDx CT-2A datasets for the segmentation and classification tasks, respectively. Furthermore, unlike existing techniques, we performed a cross-dataset evaluation to show the generalization ability of our method. The quantitative and qualitative evaluation confirms that our method is robust for both COVID-19 segmentation and classification and can accurately predict infection growth in a certain time frame. RESEARCH HIGHLIGHTS: We present an improved UNET framework with a DenseNet-77-based encoder for deep keypoint extraction, enhancing the identification and segmentation performance for the coronavirus while reducing computational complexity. We propose a computationally efficient approach for COVID-19 infection segmentation owing to fewer model parameters. Segmentation of COVID-19 is robust thanks to the accurate feature computation power of DenseNet-77. A module is introduced to predict the infection growth of COVID-19 for a patient, to analyze its severity over time. We present a framework that can effectively classify samples into several classes, that is, COVID-19, pneumonia, and healthy samples. Rigorous experimentation was performed, including cross-dataset evaluation, to prove the efficacy of the presented technique.
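Blob analysis, used above for infection growth estimation, amounts to labeling connected components in a binary lesion mask and measuring their areas; comparing areas across time points gives a growth estimate. A sketch using SciPy's connected-component labeling (the tiny mask is hypothetical):

```python
import numpy as np
from scipy import ndimage

def lesion_blob_stats(mask):
    """Blob analysis of a binary lesion mask: number of connected
    lesions and the pixel area of each."""
    labeled, n_blobs = ndimage.label(mask)
    areas = np.bincount(labeled.ravel())[1:]  # skip background label 0
    return n_blobs, areas.tolist()

scan_t0 = np.array([[1, 1, 0, 0],   # hypothetical lesion mask
                    [0, 0, 0, 1],
                    [0, 0, 0, 1]])
n, areas = lesion_blob_stats(scan_t0)
print(n, areas)  # 2 blobs with areas [2, 2]
```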


Subject(s)
COVID-19 , Pneumonia , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Pandemics , Tomography, X-Ray Computed/methods
13.
PLoS One ; 17(2): e0264139, 2022.
Article in English | MEDLINE | ID: covidwho-1690689

ABSTRACT

A pressure ulcer is an injury of the skin and underlying tissues adjacent to a bony eminence. Patients who suffer from this disease may have difficulty accessing medical care. Recently, the COVID-19 pandemic has exacerbated this situation. Automatic diagnosis based on machine learning (ML) brings promising solutions. Traditional ML requires complicated preprocessing steps for feature extraction. Its clinical applications are thus limited to particular datasets. Deep learning (DL), which extracts features from convolution layers, can embrace larger datasets that might be deliberately excluded in traditional algorithms. However, DL requires large sets of domain specific labeled data for training. Labeling various tissues of pressure ulcers is a challenge even for experienced plastic surgeons. We propose a superpixel-assisted, region-based method of labeling images for tissue classification. The boundary-based method is applied to create a dataset for wound and re-epithelialization (re-ep) segmentation. Five popular DL models (U-Net, DeeplabV3, PsPNet, FPN, and Mask R-CNN) with encoder (ResNet-101) were trained on the two datasets. A total of 2836 images of pressure ulcers were labeled for tissue classification, while 2893 images were labeled for wound and re-ep segmentation. All five models had satisfactory results. DeeplabV3 had the best performance on both tasks with a precision of 0.9915, recall of 0.9915 and accuracy of 0.9957 on the tissue classification; and a precision of 0.9888, recall of 0.9887 and accuracy of 0.9925 on the wound and re-ep segmentation task. Combining segmentation results with clinical data, our algorithm can detect the signs of wound healing, monitor the progress of healing, estimate the wound size, and suggest the need for surgical debridement.


Subject(s)
Algorithms , COVID-19/epidemiology , Deep Learning , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Pressure Ulcer/diagnosis , COVID-19/virology , Humans , Pressure Ulcer/diagnostic imaging , SARS-CoV-2/isolation & purification , Taiwan/epidemiology
14.
Acta Crystallogr D Struct Biol ; 78(Pt 2): 152-161, 2022 Feb 01.
Article in English | MEDLINE | ID: covidwho-1684950

ABSTRACT

Recently, there has been a dramatic improvement in the quality and quantity of data derived using cryogenic electron microscopy (cryo-EM), accompanied by a large increase in the number of atomic models built. Although the best achievable resolutions are improving, the local resolution is often variable, and a significant majority of data are still resolved at resolutions worse than 3 Å. Model building and refinement are often challenging at these resolutions, making atomic model validation even more crucial for identifying less reliable regions of the model. Here, a graphical user interface for atomic model validation, implemented in the CCP-EM software suite, is presented. The aim is to develop this into a platform where users can access multiple complementary validation metrics that work across a range of resolutions and obtain a summary of evaluations. Based on the validation estimates from atomic models associated with cryo-EM structures of SARS-CoV-2, it was observed that models typically favor adopting the most common conformations over fitting the observations, as judged by the model's agreement with the data. At low resolutions, stereochemical quality may be favored over fit to the data, but care should be taken to ensure that the model agrees with the data in terms of resolvable features. It is demonstrated that further re-refinement can improve the agreement with the data without loss of geometric quality. This also highlights the need for improved resolution-dependent weight optimization in model refinement and for an effective test for overfitting that would help to guide the refinement process.


Subject(s)
Cryoelectron Microscopy/methods , Software Validation , Software , COVID-19 , Image Processing, Computer-Assisted , Models, Molecular , Reproducibility of Results , User-Computer Interface
15.
Sensors (Basel) ; 22(3)2022 Jan 24.
Article in English | MEDLINE | ID: covidwho-1649264

ABSTRACT

The rapid spread of the COVID-19 pandemic, in early 2020, has radically changed the lives of people. In our daily routine, the use of a face (surgical) mask is necessary, especially in public places, to prevent the spread of this disease. Furthermore, in crowded indoor areas, the automated recognition of people wearing a mask is a requisite for the assurance of public health. In this direction, image processing techniques, in combination with deep learning, provide effective ways to deal with this problem. However, it is a common phenomenon that well-established datasets containing images of people wearing masks are not publicly available. To overcome this obstacle and to assist the research progress in this field, we present a publicly available annotated image database containing images of people with and without a mask on their faces, in different environments and situations. Moreover, we tested the performance of deep learning detectors in images and videos on this dataset. The training and the evaluation were performed on different versions of the YOLO network using Darknet, which is a state-of-the-art real-time object detection system. Finally, different experiments and evaluations were carried out for each version of YOLO, and the results for each detector are presented.
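As background (not a detail from the paper), predicted boxes from detectors such as YOLO are typically matched to ground-truth boxes via intersection-over-union (IoU) when computing evaluation metrics. A minimal sketch:

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two hypothetical 2x2 boxes overlapping in a 1x1 region
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```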


Subject(s)
COVID-19 , Pandemics , Humans , Image Processing, Computer-Assisted , Masks , SARS-CoV-2
16.
Contrast Media Mol Imaging ; 2022: 4352730, 2022.
Article in English | MEDLINE | ID: covidwho-1673528

ABSTRACT

Currently, countries across the world are suffering from a prominent viral infection called COVID-19. Most countries are still facing several issues due to this disease, which has resulted in many fatalities. The first COVID-19 wave caused devastation across the world owing to its virulence, led to massive loss of human life, and impacted national economies drastically. A dangerous disease called mucormycosis was discovered worldwide during the second COVID-19 wave in 2021, which lasted from April to July. Mucormycosis is commonly known as "black fungus" and is caused by fungi of the order Mucorales. It is usually a rare disease, but the level of destruction it causes is vast and unpredictable. The disease mainly targets people already suffering from other illnesses and taking heavy medication to counter them, because of the reduction in antibodies in the affected people: the patient's body lacks the ability to act against fungal infections. This black fungus is more commonly identified in patients with coronavirus disease in certain countries. The condition frequently manifests on the skin, but it can also harm organs such as the eyes and brain. This study designs a modified neural network for an artificial intelligence (AI) strategy with learning principles, called a hybrid learning-based neural network classifier (HLNNC). The proposed method is based on well-known techniques such as the convolutional neural network (CNN) and support vector machine (SVM). This article discusses a dataset containing several eye photographs of patients with and without black fungus infection. These images were collected from the real-time records of people afflicted with COVID-19 followed by black fungus. The proposed HLNNC scheme identifies black fungus disease via the following image processing procedures: image acquisition, preprocessing, feature extraction, and classification, performed according to dataset training and testing principles with proper performance analysis. The results are provided in graphical format with precise specifications, establishing the efficacy of the proposed method.


Subject(s)
COVID-19/complications , Coinfection/microbiology , Deep Learning , Mucorales/isolation & purification , Mucormycosis/epidemiology , Algorithms , COVID-19/drug therapy , Comorbidity , Humans , Image Processing, Computer-Assisted , India/epidemiology , Mucorales/classification , Mucorales/immunology , Mucormycosis/complications , Mucormycosis/microbiology , Neural Networks, Computer , Support Vector Machine
17.
Sci Rep ; 12(1): 1716, 2022 02 02.
Article in English | MEDLINE | ID: covidwho-1665719

ABSTRACT

The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm to aggregate the chest CT volume into a 2D representation that can be easily integrated with clinical metadata to distinguish COVID-19 pneumonia from chest CT volumes from healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of different classes of pulmonary lesions present in COVID-19 infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that the combination of features derived from the chest CT volumes improve the AUC performance to 0.80 from the 0.52 obtained by using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.
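The AUC values quoted above (0.80 vs. 0.52) can be read as the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A minimal sketch with hypothetical labels and scores:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive is scored higher than a randomly chosen
    negative (ties count as 0.5)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]            # hypothetical outcomes
scores = [0.9, 0.7, 0.6, 0.3, 0.8, 0.4]  # hypothetical risk scores
print(roc_auc(labels, scores))  # 1.0: every positive outranks every negative
```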


Subject(s)
COVID-19/diagnosis , COVID-19/virology , Deep Learning , SARS-CoV-2 , Thorax/diagnostic imaging , Thorax/pathology , Tomography, X-Ray Computed , Algorithms , COVID-19/mortality , Databases, Genetic , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Prognosis , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards
18.
Neurosci Lett ; 772: 136484, 2022 02 16.
Article in English | MEDLINE | ID: covidwho-1654975

ABSTRACT

Occupational burnout has become a pervasive problem, especially among medical professionals who are highly vulnerable to burnout. Since the beginning of the COVID-19 pandemic, medical professionals have faced greater levels of stress. It is critical to increase our understanding of the neurobiological mechanisms of burnout among medical professionals for the benefit of healthcare systems. Therefore, in this study, we investigated structural brain correlates of burnout severity in medical professionals using a voxel-based morphometric technique. Nurses in active service underwent structural magnetic resonance imaging. Two core dimensions of burnout, namely, emotional exhaustion and depersonalization, were assessed using self-reported psychological questionnaires. Levels of emotional exhaustion were found to be negatively correlated with gray matter (GM) volumes in the bilateral ventromedial prefrontal cortex (vmPFC) and left insula. Moreover, levels of depersonalization were negatively correlated with GM volumes in the left vmPFC and left thalamus. Altogether, these findings contribute to a better understanding of the neural mechanisms of burnout and may provide helpful insights for developing effective interventions for medical professionals.


Subject(s)
Brain/diagnostic imaging , Burnout, Professional/diagnostic imaging , Adult , COVID-19 , Cerebral Cortex/diagnostic imaging , Depersonalization , Emotions , Female , Gray Matter/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Nurses , Pandemics , Prefrontal Cortex/diagnostic imaging , Self Report , Surveys and Questionnaires , Thalamus/diagnostic imaging , Young Adult
19.
Cytometry A ; 101(5): 423-433, 2022 05.
Article in English | MEDLINE | ID: covidwho-1640695

ABSTRACT

Imaging Mass Cytometry (IMC) is a powerful high-throughput technique enabling resolution of up to 37 markers in a single fixed tissue section while preserving in situ spatial relationships. Currently, IMC processing and analysis necessitate multiple software packages, labour-intensive pipeline development, different operating systems, and knowledge of bioinformatics, all of which are barriers to many potential users. Here we present TITAN, an open-source, single-environment, end-to-end pipeline for the visualization, segmentation, analysis, and export of IMC data. TITAN is implemented as an extension within the publicly available 3D Slicer software. We demonstrate the utility, applicability, reliability, and comparability of TITAN using publicly available IMC data from recently published breast cancer and COVID-19 lung injury studies. Compared with current IMC analysis methods, TITAN provides a user-friendly, efficient single environment to accurately visualize, segment, and analyze IMC data for all users.


Subject(s)
COVID-19 , Data Analysis , Humans , Image Cytometry/methods , Image Processing, Computer-Assisted/methods , Reproducibility of Results , Software
20.
Curr Med Imaging ; 18(2): 103, 2022.
Article in English | MEDLINE | ID: covidwho-1625140