Results 1 - 16 of 16
1.
Phys Eng Sci Med ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652347

ABSTRACT

Schizophrenia (SZ) has long been acknowledged as a highly intricate mental disorder. Individuals with SZ experience a blurred line between fantasy and reality, leading to a lack of awareness about their condition, which can pose significant challenges during treatment. Given the importance of the issue, timely diagnosis of this illness can not only assist patients and their families in managing the condition but also enable early intervention, which may help prevent its advancement. EEG is a widely utilized technique for investigating mental disorders like SZ due to its non-invasive nature, affordability, and wide accessibility. In this study, our main goal is to develop an optimized system that can achieve automatic diagnosis of SZ with minimal input information. To optimize the system, we adopted a strategy of using single-channel EEG signals and integrated knowledge distillation and transfer learning techniques into the model. This approach was designed to improve the performance and efficiency of our proposed method for SZ diagnosis. Additionally, to leverage pre-trained models effectively, we converted the EEG signals into images using the Continuous Wavelet Transform (CWT). This transformation allowed us to harness the capabilities of pre-trained models in the image domain, enabling automatic SZ detection with enhanced efficiency. To achieve a more robust estimate of the model's performance, we employed fivefold cross-validation. The accuracy achieved from the 5-s records of the EEG signal, with the combination of self-distillation and VGG16 for the P4 channel, is 97.81%, indicating a high level of accuracy in diagnosing SZ using the proposed method.
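The EEG-to-image step described above can be illustrated with a minimal, numpy-only Morlet CWT sketch (not the authors' implementation; the sampling rate, scale range, and wavelet parameter are assumed for illustration):

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet.

    Returns a (len(scales), len(signal)) magnitude "scalogram" that can be
    rendered as an image and fed to a pre-trained CNN such as VGG16.
    """
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # discretize the scaled Morlet wavelet on a support of ~10 scales
        m = int(min(10 * s, n))
        t = np.arange(-m // 2, m // 2) / s
        wavelet = np.exp(1j * w0 * t) * np.exp(-t**2 / 2) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

# a 5-second synthetic "EEG" segment at an assumed 250 Hz: 10 Hz alpha + noise
fs = 250
t = np.arange(0, 5, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
scalogram = morlet_cwt(eeg, scales=np.arange(1, 64))
```

The 2-D magnitude array is what would be saved as an image for the pre-trained image-domain model; production code would typically use a library CWT (e.g. PyWavelets) rather than this direct convolution.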

2.
Int J Comput Assist Radiol Surg ; 19(1): 119-127, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37418109

ABSTRACT

PURPOSE: Medical imaging can be used to estimate a patient's biological age, which may provide complementary information to clinicians compared to chronological age. In this study, we aimed to develop a method to estimate a patient's age based on their chest CT scan. Additionally, we investigated whether chest CT estimated age is a more accurate predictor of lung cancer risk compared to chronological age. METHODS: To develop our age prediction model, we utilized composite CT images and Inception-ResNet-v2. The model was trained, validated, and tested on 13,824 chest CT scans from the National Lung Screening Trial, with 91% for training, 5% for validation, and 4% for testing. Additionally, we independently tested the model on 1849 CT scans collected locally. To assess chest CT estimated age as a risk factor for lung cancer, we computed the relative lung cancer risk between two groups. Group 1 consisted of individuals assigned a CT age older than their chronological age, while Group 2 comprised those assigned a CT age younger than their chronological age. RESULTS: Our analysis revealed a mean absolute error of 1.84 years and a Pearson's correlation coefficient of 0.97 for our local data when comparing chronological age with the estimated CT age. The model showed the most activation in the area associated with the lungs during age estimation. The relative risk for lung cancer was 1.82 (95% confidence interval, 1.65-2.02) for individuals assigned a CT age older than their chronological age compared to those assigned a CT age younger than their chronological age. CONCLUSION: Findings suggest that chest CT age captures some aspects of biological aging and may be a more accurate predictor of lung cancer risk than chronological age. Future studies with larger and more diverse patient populations are required to assess the generalizability of these findings.
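The relative-risk comparison between the CT-age-older and CT-age-younger groups can be sketched as a generic two-group relative risk with a Wald interval on the log scale (the event counts below are hypothetical, not the study's data):

```python
import math

def relative_risk(events1, n1, events2, n2, z=1.96):
    """Relative risk of group 1 vs. group 2 with a 95% Wald CI on the log scale."""
    p1, p2 = events1 / n1, events2 / n2
    rr = p1 / p2
    # standard error of log(RR)
    se = math.sqrt(1 / events1 - 1 / n1 + 1 / events2 - 1 / n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, (lo, hi)

# hypothetical counts: lung cancer events over group size
rr, ci = relative_risk(182, 1000, 100, 1000)
```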


Subject(s)
Deep Learning , Lung Neoplasms , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Radiography , Lung/diagnostic imaging
3.
Microsc Res Tech ; 87(2): 229-256, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37750465

ABSTRACT

Non-communicable diseases (NCDs) have recently drawn increased attention since they require specialized infrastructure for treatment. As per the cancer population registry estimate, nearly 800,000 new cancer cases will be detected yearly. These statistics underline the need for early cancer detection and diagnosis. Cancer identification can be made either through manual efforts or by computer-aided algorithms. Manual cancer detection is labor-intensive and time-consuming. In contrast, computer-aided algorithms offer the feasibility of reducing time and manual effort. With the motivation to develop a computer-aided diagnosis system for NCDs, we developed a cancer detection methodology. In the present article, a deep learning (DL)-based cancer identification model is developed. In DL-based architectures, features are generally extracted using convolutional neural networks. The proposed attention-guided, densely connected residual, and dilated convolution deep neural network, called DeepHistoNet, acquires precise patterns for classification. Experimentation has been carried out on the Kasturba Medical College (KMC), TCGA-LIHC, and LC25000 datasets to prove the robustness of the model. Performance evaluation metrics like F1-score, sensitivity, specificity, recall, and accuracy validate the experimentation. Experimental results demonstrate that the proposed DeepHistoNet model outperforms other state-of-the-art methods. The proposed model classified the KMC liver dataset with 97.1% accuracy and an area under the receiver operating characteristic curve (AUC-ROC) of 0.9867, the best result obtained compared to state-of-the-art techniques. The performance of DeepHistoNet was even better on the LC25000 dataset, where the proposed model achieved 99.8% classification accuracy. To our knowledge, DeepHistoNet is a novel approach for multiple histopathological image classification.
RESEARCH HIGHLIGHTS: A novel robust DL model is proposed for histopathological image carcinoma classification. The precise patterns for accurate classification are extracted using dense cross-connected residual blocks. Spatial attention is provided to the network so that spatial information is not lost during feature extraction. DeepHistoNet is trained and evaluated on liver, lung, and colon histopathology datasets to demonstrate its resilience. The results are promising and outperform state-of-the-art techniques. The proposed methodology obtained an AUC-ROC value of 0.9867 with a classification accuracy of 97.1% on the KMC dataset. The proposed DeepHistoNet classified the LC25000 dataset with 99.8% accuracy. These are the best results obtained to date.


Subject(s)
Carcinoma, Hepatocellular , Deep Learning , Liver Neoplasms , Humans , Machine Learning , Carcinoma, Hepatocellular/diagnosis , Liver Neoplasms/diagnosis , Lung , Colon
4.
Int J Comput Assist Radiol Surg ; 18(10): 1903-1914, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36947337

ABSTRACT

PURPOSE: The usage of iodinated contrast media (ICM) can improve the sensitivity and specificity of computed tomography (CT) for many clinical indications. However, the adverse effects of ICM administration can include renal injury, life-threatening allergic-like reactions, and environmental contamination. Deep learning (DL) models can generate full-dose ICM CT images from non-contrast or low-dose ICM administration or generate non-contrast CT from full-dose ICM CT. Eliminating the need for both contrast-enhanced and non-enhanced imaging or reducing the amount of required contrast while maintaining diagnostic capability may reduce overall patient risk, improve efficiency and minimize costs. We reviewed the current capabilities of DL to reduce the need for contrast administration in CT. METHODS: We conducted a systematic review of articles utilizing DL to reduce the amount of ICM required in CT, searching MEDLINE, Embase, Compendex, Inspec, and Scopus to identify papers published from 2016 to 2022. We classified the articles based on the DL model and ICM reduction. RESULTS: Eighteen papers met the inclusion criteria for analysis. Of these, ten generated synthetic full-dose (100%) ICM from real non-contrast CT, while four augmented low-dose to full-dose ICM CT. Three used DL to create synthetic non-contrast CT from real 100% ICM CT, while one paper used DL to translate the 100% ICM to non-contrast CT and vice versa. DL models commonly used generative adversarial networks trained and tested by paired contrast-enhanced and non-contrast or low ICM CTs. Image quality metrics such as peak signal-to-noise ratio and structural similarity index were frequently used for comparing synthetic versus real CT image quality. 
CONCLUSION: DL-generated contrast-enhanced or non-contrast CT may assist in diagnosis and radiation therapy planning; however, further work to optimize protocols to reduce or eliminate ICM for specific pathology is still needed along with a dedicated assessment of the clinical utility of these synthetic images.
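The image quality metrics the review highlights can be sketched in a few lines of numpy; note that the SSIM here is a simplified single-window (global-statistics) variant, not the sliding-window form most papers report:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range**2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over the whole image as one window (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# synthetic stand-ins for a real CT slice and its synthetic counterpart
rng = np.random.default_rng(1)
ct = rng.random((64, 64))
synthetic = np.clip(ct + rng.normal(0, 0.05, ct.shape), 0, 1)
```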


Subject(s)
Contrast Media , Deep Learning , Humans , Tomography, X-Ray Computed/methods
5.
Diagnostics (Basel) ; 12(12)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36553037

ABSTRACT

Glaucoma is an eye disease that gradually deteriorates vision. Much research focuses on extracting information from the optic disc and optic cup, the structures used for measuring the cup-to-disc ratio. These structures are commonly segmented with deep learning techniques, primarily using Encoder-Decoder models, which are hard to train and time-consuming. Object detection models using convolutional neural networks can extract features from fundus retinal images with good precision. However, the superiority of one model over another for a specific task is still being determined. The main goal of our approach is to compare object detection model performance for automated segmentation of cups and discs on fundus images. This study brings the novelty of examining the behavior of different object detection models in the detection and segmentation of the optic disc and optic cup (Mask R-CNN, MS R-CNN, CARAFE, Cascade Mask R-CNN, GCNet, SOLO, Point_Rend), evaluated on the Retinal Fundus Images for Glaucoma Analysis (REFUGE) and G1020 datasets. Reported metrics were Average Precision (AP), F1-score, IoU, and AUCPR. Several models achieved the highest AP with a perfect 1.000 when the IoU threshold was set at 0.50 on REFUGE, and the lowest was Cascade Mask R-CNN with an AP of 0.997. On the G1020 dataset, the best model was Point_Rend with an AP of 0.956, and the worst was SOLO with 0.906. It was concluded that the methods reviewed achieved excellent performance with high precision and recall values, showing efficiency and effectiveness. The question of how many images are needed was addressed with an initial value of 100, with excellent results. Data augmentation, multi-scale handling, and anchor box size brought improvements. The capability to translate knowledge from one database to another shows promising results too.
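The IoU metric reported above has a compact definition for binary masks; a minimal sketch (the toy masks are illustrative, not REFUGE/G1020 data):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# toy ground-truth "disc" mask and a slightly shifted prediction
disc = np.zeros((10, 10), bool); disc[2:8, 2:8] = True   # 36 px
pred = np.zeros((10, 10), bool); pred[3:8, 2:8] = True   # 30 px
```

At the paper's 0.50 threshold, a prediction like this (IoU ≈ 0.83) would count as a true positive toward AP.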

6.
SN Comput Sci ; 3(5): 403, 2022.
Article in English | MEDLINE | ID: mdl-35915832

ABSTRACT

Presently, the whole world is suffering from the Covid-19 pandemic. In this harmful situation, using information and Internet technology is mandatory for the government and medical practitioners. After the lockdown, the government needs to take important decisions to allow passengers to travel through air, rail, and land. In the present situation, people need to get a medical report from the hospitals to travel through various modes of transport. In this regard, the Covid-19 history of the passengers plays an important role in issuing tickets to the passengers. Hence, in this paper, a novel authentication method using InterPlanetary File System (IPFS) is suggested to retrieve the Covid-19 history of all passengers to determine whether to issue tickets and allow people to travel through various modes of transport. The government can share the Covid-19 status of passengers with the ticket issuing authority. The medical practitioners can share medical reports and medical images of such people for telediagnosis. To provide security, a novel privacy-preserving storage and sharing of Covid-19 records using secure authentication and image cryptosystem are proposed using chaos, cryptographic hash (SHA-256), Paillier cryptosystem, and IPFS. Security analysis shows that the system can withstand various kinds of attacks.
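The SHA-256 component of the proposed scheme can be illustrated with a simplified content-addressing sketch using Python's standard library (this omits the chaos-based image cryptosystem, the Paillier cryptosystem, and real IPFS CID/multihash encoding; the record fields are hypothetical):

```python
import hashlib
import json

def content_address(record: dict) -> str:
    """SHA-256 digest of a canonical JSON encoding of a record.

    IPFS-style content addressing, greatly simplified: real IPFS CIDs
    use multihash encoding over chunked DAG nodes.
    """
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify(record: dict, digest: str) -> bool:
    """Integrity check: the record authenticates only if its hash matches."""
    return content_address(record) == digest

report = {"passenger_id": "P-001", "covid_status": "negative", "issued": "2021-06-01"}
digest = content_address(report)
```

Any tampering with the stored record changes the digest, so the ticket-issuing authority can detect altered Covid-19 histories before honoring them.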

7.
Multimed Tools Appl ; 81(21): 30615-30645, 2022.
Article in English | MEDLINE | ID: mdl-35431611

ABSTRACT

One of the primary clinical observations for screening the novel coronavirus is capturing a chest x-ray image. In most patients, a chest x-ray contains abnormalities, such as consolidation, resulting from COVID-19 viral pneumonia. In this study, research is conducted on efficiently detecting imaging features of this type of pneumonia using deep convolutional neural networks in a large dataset. It is demonstrated that simple models, alongside the majority of pretrained networks in the literature, focus on irrelevant features for decision-making. In this paper, numerous chest x-ray images from several sources are collected, and one of the largest publicly accessible datasets is prepared. Finally, using the transfer learning paradigm, the well-known CheXNet model is utilized to develop COVID-CXNet. This powerful model is capable of detecting the novel coronavirus pneumonia based on relevant and meaningful features with precise localization. COVID-CXNet is a step towards a fully automated and robust COVID-19 detection system.

8.
Comput Med Imaging Graph ; 91: 101937, 2021 07.
Article in English | MEDLINE | ID: mdl-34087611

ABSTRACT

Rib fractures are injuries commonly assessed in trauma wards. Deep learning has demonstrated state-of-the-art accuracy for a variety of tasks, including image classification. This paper assesses the speed-accuracy trade-offs and general suitability of four popular convolutional neural networks to classify rib fractures from axial computed tomography imagery. We transfer-learned InceptionV3, ResNet50, MobileNetV2, and VGG16 models, additionally training "decomposed" models comprising only the first n blocks of each architecture. Given that acute (new) fractures are generally most important to detect, we trained two types of models: a classful model with classes acute, old (healed), and normal (non-fractured); and a binary model with acute vs. the other classes. We found that the first 7 blocks of InceptionV3 achieved the best results and general speed-accuracy trade-off. The classful model achieved a 5-fold cross-validation average accuracy and macro recall of 96.00% and 94.0%, respectively. The binary model achieved a 5-fold cross-validation average accuracy, macro recall, and area under receiver operating characteristic curve of 97.76%, 94.6%, and 94.7%, respectively. On a Windows 10 PC with 32GB RAM and an Nvidia 1080ti GPU, the model's average CPU and GPU per-crop inference times were 13.6 and 12.2 ms, respectively. Compared to the InceptionV3 Block 7 classful model, a radiologist with 9 years of experience was less accurate but more sensitive to acute fractures; meanwhile, the deep learning model had fewer false positive diagnoses and better sensitivity to old fractures and normal ribs. The Cohen's Kappa between the two was 0.813.
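The Cohen's kappa used to compare the model and the radiologist can be computed from a confusion matrix of the two raters' labels; a minimal sketch with toy labels (0 = normal, 1 = old, 2 = acute; not the study's data):

```python
import numpy as np

def cohens_kappa(a, b, n_classes):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    conf = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        conf[i, j] += 1
    n = conf.sum()
    po = np.trace(conf) / n                            # observed agreement
    pe = (conf.sum(0) * conf.sum(1)).sum() / n**2      # expected by chance
    return (po - pe) / (1 - pe)

# toy example: two raters labeling 10 rib crops
kappa = cohens_kappa([0, 0, 1, 1, 2, 2, 2, 0, 1, 2],
                     [0, 0, 1, 2, 2, 2, 2, 0, 1, 1], 3)
```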


Subject(s)
Deep Learning , Rib Fractures , Humans , Neural Networks, Computer , Research Design , Rib Fractures/diagnostic imaging , Tomography, X-Ray Computed
9.
Comput Med Imaging Graph ; 82: 101718, 2020 06.
Article in English | MEDLINE | ID: mdl-32464565

ABSTRACT

Ankylosing spondylitis (AS) is an arthritis with symptoms visible in medical imagery. This paper proposes, to the authors' best knowledge, the first use of statistical machine learning- and deep learning-based classifiers to detect erosion, an early AS symptom, via analysis of computed tomography (CT) imagery, giving some consideration to patient age in so doing. We used gray-level co-occurrence matrices and local binary patterns to generate input features to machine learning algorithms, specifically k-nearest neighbors (k-NN) and random forest. Deep learning solutions based on a modified InceptionV3 architecture were designed and tested, with one classifier produced by training with a cross-entropy loss function and another produced by additionally seeking to minimize validation loss. We found that the random forest classifiers outperform the k-NN classifiers and achieve an eightfold cross-validation average accuracy, recall, and area under receiver operator characteristic curve (ROC AUC) of 96.0%, 92.9%, and 0.97, respectively, for erosion vs. young control patients, and 82.4%, 80.6%, and 0.91, respectively, for erosion vs. old control patients. We found that the deep learning classifier trained without minimizing validation loss was best and achieves an eightfold cross-validation accuracy, recall, and ROC AUC of 99.0%, 97.5%, and 0.97, respectively, for erosion vs. all (combined young and old) control patients; this classifier outperforms a musculoskeletal radiologist with 9 years of experience in raw sensitivity and specificity by 8.4% and 9.5%, respectively. Despite the relatively small dataset on which we trained and cross-validated, our results indicate the potential of machine and deep learning to aid AS diagnosis, and further research using larger datasets should be conducted.
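The local binary pattern features mentioned above can be sketched with a basic 8-neighbour implementation in numpy (a simplified variant without the rotation-invariant or uniform-pattern refinements; the input patch is synthetic, not CT data):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns with a normalized 256-bin
    histogram, the kind of texture descriptor fed to k-NN / random forest."""
    c = img[1:-1, 1:-1]                      # centers (border pixels skipped)
    codes = np.zeros(c.shape, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (nb >= c).astype(int) << bit  # set bit where neighbour >= center
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (32, 32))       # synthetic stand-in for a CT crop
features = lbp_histogram(patch)
```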


Subject(s)
Spondylitis, Ankylosing/diagnostic imaging , Adult , Age Factors , Aged , Deep Learning , Early Diagnosis , Female , Humans , Machine Learning , Male , Middle Aged , ROC Curve , Sensitivity and Specificity , Tomography, X-Ray Computed
10.
Ultrasound Med Biol ; 46(5): 1119-1132, 2020 05.
Article in English | MEDLINE | ID: mdl-32059918

ABSTRACT

To assist radiologists in breast cancer classification in automated breast ultrasound (ABUS) imaging, we propose a computer-aided diagnosis based on a convolutional neural network (CNN) that classifies breast lesions as benign and malignant. The proposed CNN adopts a modified Inception-v3 architecture to provide efficient feature extraction in ABUS imaging. Because the ABUS images can be visualized in transverse and coronal views, the proposed CNN provides an efficient way to extract multiview features from both views. The proposed CNN was trained and evaluated on 316 breast lesions (135 malignant and 181 benign). An observer performance test was conducted to compare five human reviewers' diagnostic performance before and after referring to the predicting outcomes of the proposed CNN. Our method achieved an area under the curve (AUC) value of 0.9468 with five-fold cross-validation, for which the sensitivity and specificity were 0.886 and 0.876, respectively. Compared with conventional machine learning-based feature extraction schemes, particularly principal component analysis (PCA) and histogram of oriented gradients (HOG), our method achieved a significant improvement in classification performance. The proposed CNN achieved a >10% increased AUC value compared with PCA and HOG. During the observer performance test, the diagnostic results of all human reviewers had increased AUC values and sensitivities after referring to the classification results of the proposed CNN, and four of the five human reviewers' AUCs were significantly improved. The proposed CNN employing a multiview strategy showed promise for the diagnosis of breast cancer, and could be used as a second reviewer for increasing diagnostic reliability.
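The AUC values reported above can be computed directly from scores and labels via the Mann-Whitney formulation, without tracing the ROC curve; a minimal sketch with toy data:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation), with ties counted as half."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# toy malignancy scores: three malignant (1) and two benign (0) lesions
auc = roc_auc([1, 1, 1, 0, 0], [0.9, 0.8, 0.4, 0.7, 0.2])
```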


Subject(s)
Breast Neoplasms/classification , Breast Neoplasms/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography, Mammary/methods , Adult , Aged , Area Under Curve , Breast Diseases/classification , Breast Diseases/diagnostic imaging , Breast Diseases/pathology , Breast Neoplasms/pathology , Female , Humans , Middle Aged , Principal Component Analysis , Retrospective Studies
11.
Comput Med Imaging Graph ; 68: 1-15, 2018 09.
Article in English | MEDLINE | ID: mdl-29775951

ABSTRACT

Since the retinal blood vessel has been acknowledged as an indispensable element in both ophthalmological and cardiovascular disease diagnosis, the accurate segmentation of the retinal vessel tree has become the prerequisite step for automated or computer-aided diagnosis systems. In this paper, a supervised method is presented based on a pre-trained fully convolutional network through transfer learning. This proposed method has simplified the typical retinal vessel segmentation problem from full-size image segmentation to regional vessel element recognition and result merging. Meanwhile, additional unsupervised image post-processing techniques are applied to this proposed method so as to refine the final result. Extensive experiments have been conducted on DRIVE, STARE, CHASE_DB1 and HRF databases, and the accuracy of the cross-database test on these four databases is state-of-the-art, which also presents the high robustness of the proposed approach. This successful result has not only contributed to the area of automated retinal blood vessel segmentation but also supports the effectiveness of transfer learning when applying deep learning technique to medical imaging.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Retinal Vessels/diagnostic imaging , Databases, Factual , Humans
12.
Opt Express ; 25(21): 25867-25878, 2017 Oct 16.
Article in English | MEDLINE | ID: mdl-29041249

ABSTRACT

Occlusion handling in computer-generated holography is of vast importance as it enhances depth information by presenting correct motion parallax of the 3D scene within the viewing angle. In this paper, we propose a computationally efficient occlusion handling technique based on a fully analytic mesh based computer generated holography. The proposed technique uses angular spectrum convolution that renders exact occlusion while preserving all other aspects of the fully analytic mesh based computer generated holography. The proposed method is computationally efficient as only a single convolution operation is required for each mesh without numerical propagation between the meshes. The proposed method is also exact as it performs the occlusion processing in the tilted mesh plane, being free from artifacts coming from orthographic spatial masking. The proposed method can be applied to the self and the mutual occlusions between the objects in the 3D scene. The computer simulated results show the feasibility of the proposed method.
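The angular spectrum operation at the heart of the method can be sketched as a single FFT-based free-space propagation (this shows plain propagation only, not the authors' occlusion convolution on tilted mesh planes; the wavelength, pixel pitch, and aperture are assumed):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` with the angular
    spectrum method: FFT, multiply by the free-space transfer function,
    inverse FFT. Evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# propagate a small square aperture by 1 mm (633 nm light, 10 µm pixels)
field = np.zeros((64, 64), complex)
field[28:36, 28:36] = 1.0
out = angular_spectrum_propagate(field, 633e-9, 10e-6, 1e-3)
```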

13.
Opt Express ; 23(26): 33893-901, 2015 Dec 28.
Article in English | MEDLINE | ID: mdl-26832048

ABSTRACT

Fully analytic mesh-based computer generated hologram enables efficient and precise representation of three-dimensional scene. Conventional method assigns uniform amplitude inside individual mesh, resulting in reconstruction of the three-dimensional scene of flat shading. In this paper, we report an extension of the conventional method to achieve the continuous shading where the amplitude in each mesh is continuously varying. The proposed method enables the continuous shading, while maintaining fully analytic framework of the conventional method without any sacrifice in the precision. The proposed method can also be extended to enable fast update of the shading for different illumination directions and the ambient-diffuse reflection ratio based on Phong reflection model. The feasibility of the proposed method is confirmed by the numerical and optical reconstruction of the generated hologram.

14.
Article in English | MEDLINE | ID: mdl-19964558

ABSTRACT

The pulse transit time (PTT) based method has been suggested as a continuous, cuffless and non-invasive approach to estimate blood pressure. It is of paramount importance to accurately determine the pulse transit time from the measured electrocardiogram (ECG) and photoplethysmogram (PPG) signals. We apply the celebrated Hilbert-Huang Transform (HHT) to process both the ECG and PPG signals, and improve the accuracy of the PTT estimation. Further, the blood pressure variation is obtained by using a well-established formula reflecting the relationship between the blood pressure and the estimated PTT. Simulation results are provided to illustrate the effectiveness of the proposed method.
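The PTT estimation idea can be illustrated with a crude argmax-based sketch on synthetic signals (real pipelines, including this paper's HHT-based one, use robust R-peak and pulse-foot detection; the peak locations below are fabricated for illustration):

```python
import numpy as np

def pulse_transit_time(ecg, ppg, fs):
    """Crude PTT estimate: time from the ECG R-peak to the following PPG peak
    within one beat (argmax-based, for illustration only)."""
    r = int(np.argmax(ecg))                 # R-peak sample index
    p = r + int(np.argmax(ppg[r:]))         # PPG peak after the R-peak
    return (p - r) / fs

# one synthetic beat: R-peak at 0.30 s, pulse arrival at 0.55 s
fs = 500
t = np.arange(0, 1, 1 / fs)
ecg = np.exp(-((t - 0.30) / 0.01) ** 2)
ppg = np.exp(-((t - 0.55) / 0.05) ** 2)
ptt = pulse_transit_time(ecg, ppg, fs)
```

The estimated interval (0.25 s here) is what the well-established PTT-to-blood-pressure formula would then be applied to.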


Subject(s)
Blood Pressure , Electrocardiography
15.
Open Biomed Eng J ; 3: 1-7, 2009 Jan 21.
Article in English | MEDLINE | ID: mdl-19662151

ABSTRACT

The FANFARE (Falls And Near Falls Assessment Research and Evaluation) project has developed a system to fulfill the need for a wearable device to collect data for fall and near-falls analysis. The system consists of a computer and a wireless sensor network to measure, display, and store fall related parameters such as postural activities and heart rate variability. Ease of use and low power are considered in the design. The system was built and tested successfully. Different machine learning algorithms were applied to the stored data for fall and near-fall evaluation. Results indicate that the Naïve Bayes algorithm is the best choice, due to its fast model building and high accuracy in fall detection.
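The Naïve Bayes choice can be illustrated with a minimal Gaussian Naive Bayes in numpy (not the FANFARE code; the two features and the toy samples are hypothetical):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes for fall / near-fall style feature vectors."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # per-class Gaussian log-likelihood plus log prior, argmax over classes
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(-1) + np.log(self.prior)
        return self.classes[np.argmax(ll, axis=1)]

# hypothetical features: [trunk angle (deg), heart-rate change (bpm)]; 1 = fall
X = np.array([[5, 2], [8, 3], [6, 1], [80, 25], [75, 30], [85, 28]], float)
y = np.array([0, 0, 0, 1, 1, 1])
model = GaussianNB().fit(X, y)
```

Its speed advantage matches the paper's observation: fitting is just per-class means, variances, and priors, with no iterative training.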

16.
J Korean Med Sci ; 18(2): 299-300, 2003 Apr.
Article in English | MEDLINE | ID: mdl-12692435

ABSTRACT

Flumazenil, an imidazobenzodiazepine, is the first benzodiazepine antagonist and is being used to reverse the adverse pharmacological effects of benzodiazepine. There have been a few reports of central nervous system side effects with its use. We report a patient with generalized ballism following administration of flumazenil. The mechanism through which flumazenil induced this symptom is unknown. It is conceivable that flumazenil may antagonize the GABA-benzodiazepine receptor complex and induce dopamine hypersensitivity, thus inducing dyskinetic symptoms.


Subject(s)
Dyskinesias/etiology , Flumazenil/adverse effects , GABA Modulators/adverse effects , Diagnosis, Differential , Female , Humans , Middle Aged