Results 1 - 7 of 7
1.
Heliyon ; 9(2): e13577, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852023

ABSTRACT

The placenta is a fundamental organ throughout pregnancy, and fetal health is closely related to its proper function. Because of this importance, any suspicious placental condition warrants ultrasound investigation. In this paper, we propose an automated method for processing fetal ultrasonography images to identify placental abruption using machine learning. Placental imaging characteristics serve as semantic identifiers of the placental region, distinguishing it from amniotic fluid and hard organs. Quantitative feature extraction is then applied to the automatically identified placental regions to assign a vector of optical features to each ultrasonographic image. In the first classification step, a kernel-based Support Vector Machine (SVM) and a decision tree ensemble classifier are developed and compared for identifying abruption cases and controls, with Recursive Feature Elimination (RFE) applied to optimize the feature vector for the best performance of each classifier. In the second step, the deep learning classifiers multi-path ResNet-50 and Inception-V3 are used in combination with RFE. The resulting performances are compared to determine the best classification method for identifying abruption status. The best result was achieved by the optimized ResNet-50, with an accuracy of 82.88% ± 1.42% (SD) in identifying placental abruption on the testing dataset. These results show that an automated analysis method with acceptable performance for detecting placental abruption from ultrasound images can be constructed.
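The first classification stage described above (hand-crafted optical features, Recursive Feature Elimination, and a kernel SVM) can be illustrated with a minimal scikit-learn sketch. This is not the paper's implementation: RFE is driven here by a linear SVM because it needs coefficient-based feature rankings, the final decision is made by an RBF-kernel SVM, and the feature matrix X and labels y are random placeholders standing in for the extracted optical features.

```python
# Minimal sketch (not the paper's code): RFE-selected optical features feeding
# a kernel SVM, evaluated with cross-validation. X and y are random
# placeholders for the per-image optical feature vectors and abruption labels.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))        # 120 images x 64 optical features (placeholder)
y = rng.integers(0, 2, size=120)      # 0 = control, 1 = abruption (placeholder)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    # RFE needs coefficient-based importances, so a linear SVM ranks the
    # features; the selected subset is then classified by the RBF-kernel SVM.
    ("rfe", RFE(estimator=LinearSVC(dual=False), n_features_to_select=16)),
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])

print(f"mean CV accuracy: {cross_val_score(pipeline, X, y, cv=5).mean():.3f}")
```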

2.
Clin Transl Gastroenterol ; 14(1): e00548, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36434803

ABSTRACT

INTRODUCTION: Pancreatic cancer is the third leading cause of cancer deaths among men and women in the United States. We aimed to detect early changes on computed tomography (CT) images associated with pancreatic ductal adenocarcinoma (PDAC) based on quantitative imaging features (QIFs) for patients with and without chronic pancreatitis (CP). METHODS: Adults 18 years and older diagnosed with PDAC in 2008-2018 were identified. Their CT scans 3 months-3 years before the diagnosis date were matched to up to 2 scans of controls. The pancreas was automatically segmented using a previously developed algorithm. One hundred eleven QIFs were extracted. The data set was randomly split for training/validation. Neighborhood and principal component analyses were applied to select the most important features. A conditional support vector machine was used to develop prediction algorithms separately for patients with and without CP. The computer labels were compared with manually reviewed CT images 2-3 years before the index date in 19 cases and 19 controls. RESULTS: Two hundred twenty-seven of 554 scans of non-CP cancer cases/controls and 70 of 140 scans of CP cancer cases/controls were included (average age 71 and 68 years, 51% and 44% females for non-CP patients and patients with CP, respectively). The QIF-based algorithms varied based on CP status. For non-CP patients, accuracy measures were 94%-95% and area under the curve (AUC) measures were 0.98-0.99. Sensitivity, specificity, positive predictive value, and negative predictive value were in the ranges of 88%-91%, 96%-98%, 91%-95%, and 94%-96%, respectively. QIFs on CT examinations within 2-3 years before the index date also had very high predictive accuracy (accuracy 95%-98%; AUC 0.99-1.00). The QIF-based algorithm outperformed manual rereview of images for determination of PDAC risk. For patients with CP, the algorithms predicted PDAC perfectly (accuracy 100% and AUC 1.00). DISCUSSION: QIFs can accurately predict PDAC for both non-CP patients and patients with CP on CT imaging and represent promising biomarkers for early detection of pancreatic cancer.


Subject(s)
Carcinoma, Pancreatic Ductal; Pancreatic Neoplasms; Pancreatitis, Chronic; Male; Adult; Humans; Female; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/pathology; Carcinoma, Pancreatic Ductal/diagnostic imaging; Carcinoma, Pancreatic Ductal/pathology; Pancreas/diagnostic imaging; Pancreas/pathology; Tomography, X-Ray Computed/methods
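A rough sketch of the kind of QIF pipeline reported in the abstract above, assuming the 111 quantitative imaging features have already been extracted from the segmented pancreas: supervised dimensionality reduction (scikit-learn's neighborhood components analysis stands in for the neighborhood and principal component analyses) followed by an SVM classifier. The study's conditional SVM and the separate CP / non-CP models are not reproduced; X and y are placeholder arrays.

```python
# Rough sketch (placeholder data, standard SVC): dimensionality reduction over
# 111 QIFs followed by an SVM, evaluated by AUC on a held-out split.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 111))       # 300 scans x 111 QIFs (placeholder)
y = rng.integers(0, 2, size=300)      # 0 = control scan, 1 = pre-diagnostic PDAC scan

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=20, random_state=1)),
    ("svm", SVC(kernel="rbf", probability=True)),
])
model.fit(X_train, y_train)
print(f"held-out AUC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.2f}")
```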
3.
Front Oncol ; 12: 1007990, 2022.
Article in English | MEDLINE | ID: mdl-36439445

ABSTRACT

Early detection of Pancreatic Ductal Adenocarcinoma (PDAC) is complicated because PDAC remains asymptomatic until the cancer advances to late stages, when treatment is mostly ineffective. Stratifying the risk of developing PDAC can improve early detection, as subsequent screening of high-risk individuals through specialized surveillance systems reduces the chance of misdiagnosis at the initial stage of cancer. Risk stratification is, however, challenging because PDAC lacks specific predictive biomarkers. Studies have reported that the pancreas undergoes local morphological changes in response to the underlying biological evolution associated with PDAC development. Accurate identification of these changes can help stratify the risk of PDAC. In this retrospective study, an extensive radiomic analysis of precancerous pancreatic subregions was performed using abdominal Computed Tomography (CT) scans. The analysis used 324 pancreatic subregions identified in 108 contrast-enhanced abdominal CT scans, drawn in equal proportion from healthy control, pre-diagnostic, and diagnostic groups. In a pairwise feature analysis, several textural features were found to be potentially predictive of PDAC. A machine learning classifier was then trained to perform risk prediction of PDAC by automatically classifying CT scans into healthy control (low-risk) and pre-diagnostic (high-risk) classes and specifying the subregion(s) likely to develop a tumor. The proposed model was trained on CT scans from multiple phases, and model validation was performed on 42 CT scans from the venous phase, yielding ~89.3% classification accuracy on average, with sensitivity and specificity reaching 86% and 93%, respectively, for predicting the development of PDAC (i.e., high risk). To our knowledge, this is the first model to reveal micro-level precancerous changes across pancreatic subregions and quantify the risk of developing PDAC. The model improved prediction by 3.3% over the state-of-the-art method that uses global (whole-pancreas) features for PDAC prediction.
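The subregion analysis above is built on textural (radiomic) features computed from CT patches. As a minimal illustration only, and not the study's actual feature set or toolchain, the sketch below computes a few grey-level co-occurrence matrix (GLCM) statistics for one synthetic subregion patch using scikit-image; in practice, one such vector per pancreatic subregion would feed the low-risk vs. high-risk classifier.

```python
# Illustrative only: GLCM texture statistics for one synthetic CT subregion
# patch. One such vector per subregion would form a row of the radiomic
# feature matrix used for low-risk vs. high-risk classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
subregion_patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in CT patch

glcm = graycomatrix(subregion_patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
texture_vector = [graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(texture_vector)
```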

4.
Cancer Biomark ; 33(2): 211-217, 2022.
Article in English | MEDLINE | ID: mdl-35213359

ABSTRACT

BACKGROUND: Early-stage diagnosis of Pancreatic Ductal Adenocarcinoma (PDAC) is challenging due to the lack of specific diagnostic biomarkers. However, stratifying individuals at high risk of PDAC, and then monitoring their health on a regular basis, has the potential to allow diagnosis at early stages. OBJECTIVE: To stratify individuals at high risk of PDAC by identifying predictive features in pre-diagnostic abdominal Computed Tomography (CT) scans. METHODS: A set of CT features potentially predictive of PDAC was identified by analyzing 4000 raw radiomic parameters extracted from the pancreas in pre-diagnostic scans. A naïve Bayes classifier was then developed for automatic classification of pancreatic CT scans at high risk for PDAC. A set of 108 retrospective CT scans (36 scans from each of the healthy control, pre-diagnostic, and diagnostic groups) from 72 subjects was used for the study. Model development was performed on 66 multiphase CT scans, and external validation was performed on 42 venous-phase CT scans. RESULTS: The system achieved an average classification accuracy of 86% on the external dataset. CONCLUSIONS: Radiomic analysis of abdominal CT scans can reveal, quantify, and interpret micro-level changes in the pre-diagnostic pancreas and can efficiently assist in stratifying individuals at high risk of PDAC.


Subject(s)
Artificial Intelligence; Carcinoma, Pancreatic Ductal/diagnostic imaging; Pancreatic Neoplasms/diagnostic imaging; Tomography, X-Ray Computed/methods; Abdomen/diagnostic imaging; Bayes Theorem; Early Detection of Cancer; Humans
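Assuming the predictive radiomic features have already been selected, the classification step named in the abstract above (a naïve Bayes classifier separating healthy-control from pre-diagnostic scans) reduces to a few lines; the sketch below uses random placeholder data rather than the study's 4000 radiomic parameters.

```python
# Minimal sketch, assuming a precomputed radiomic feature matrix: Gaussian
# naive Bayes separating healthy-control scans (0) from pre-diagnostic,
# high-risk scans (1). X and y are random placeholders, not study data.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
X = rng.normal(size=(72, 40))       # 72 scans x 40 selected radiomic features (placeholder)
y = rng.integers(0, 2, size=72)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=3)
nb = GaussianNB().fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, nb.predict(X_test)):.2f}")
```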
5.
Article in English | MEDLINE | ID: mdl-24110814

ABSTRACT

Sleep apnea diagnosis requires analysis of a long-term polysomnographic signal recorded during one night of sleep. Limited access to sleep laboratories, the variety of devices required, and the need for dedicated assistants have left sleep apnea underdiagnosed and the diagnosis not easily accessible to the general population. In this work, a classification method is proposed that is based on a modified Kalman filter applied to the wavelet-transformed heart rate variability (HRV) signal obtained from a single electrocardiogram (ECG) lead. Pre-filtering was performed on the wavelet transform to improve the correlation of the extracted features, and sample entropy was used to enhance the convergence rate and accuracy of classification. The performance of the proposed method was evaluated in terms of accuracy, sensitivity, and specificity; the proposed classifier outperforms comparable methods by 5.3% to 7.2% in accuracy.


Subject(s)
Algorithms; Electrocardiography/methods; Heart Rate/physiology; Monitoring, Physiologic; Sleep Apnea Syndromes/diagnosis; Adult; Aged; Female; Head; Humans; Male; Middle Aged; Wavelet Analysis
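Two of the building blocks named in the abstract above, wavelet decomposition of the HRV (RR-interval) series and a sample-entropy feature, can be sketched as follows. The modified Kalman filter classifier itself is specific to the paper and is not reproduced; the RR series is synthetic placeholder data and the sample-entropy routine is a plain textbook implementation.

```python
# Sketch of two ingredients named in the abstract: wavelet decomposition of an
# RR-interval (HRV) series and a sample-entropy feature per sub-band. The
# modified Kalman filter classifier is not reproduced; the RR series is
# synthetic placeholder data.
import numpy as np
import pywt

def sample_entropy(x, m=2, r_factor=0.2):
    """Plain O(n^2) sample entropy; returns inf if no template matches are found."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dists <= r) - len(templates)   # drop self-matches

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(4)
rr_intervals = 0.8 + 0.05 * rng.standard_normal(512)     # synthetic RR series (seconds)
coeffs = pywt.wavedec(rr_intervals, "db4", level=4)      # [cA4, cD4, cD3, cD2, cD1]
print([sample_entropy(c) for c in coeffs])               # one feature per wavelet sub-band
```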
6.
Biomed Eng Online ; 10: 3, 2011 Jan 14.
Article in English | MEDLINE | ID: mdl-21235800

ABSTRACT

BACKGROUND: Speech production and phonetic features gradually improve in children who receive auditory feedback after cochlear implantation or through hearing aids. The aim of this study was to develop and evaluate automated classification of voice disorders in children with cochlear implants or hearing aids. METHODS: We considered four categories of children's voice, defined as follows. Level 1: children who produce spontaneous phonation and use words spontaneously and imitatively. Level 2: children who produce spontaneous phonation, use words spontaneously, and make short sentences imitatively. Level 3: children who produce spontaneous phonation and use words and arbitrary sentences spontaneously. Level 4: normal children without any history of hearing loss. The higher the level, the less significant the speech disorder. Thirty Persian children participated in the study: six children in each of levels one to three and 12 children in level four. Voice samples of five isolated Persian words, "mashin", "mar", "moosh", "gav", and "mouz", were analyzed. Frame-based and word-based features were extracted from the voice signals: frame-based features included intensity, fundamental frequency, formants, nasality, and approximate entropy, while word-based features included phase-space features and wavelet coefficients. Hidden Markov models were used as classifiers for the frame-based features, and a neural network for the word-based features. RESULTS: After classifier fusion with three methods (majority voting rule (MVR), linear combination, and stacked fusion), the best classification rates were obtained using both frame-based and word-based features with the MVR (level 1: 100%, level 2: 93.75%, level 3: 100%, level 4: 94%). CONCLUSIONS: The results of this study may help speech pathologists follow up on voice disorder recovery in children of the same age range with cochlear implants or hearing aids.


Subject(s)
Classification/methods; Cochlear Implantation; Hearing Aids; Voice Disorders/classification; Voice Disorders/surgery; Child; Child, Preschool; Female; Hearing Loss/complications; Hearing Loss/physiopathology; Humans; Language; Male; Phonation/physiology; Voice/physiology; Voice Disorders/complications; Voice Disorders/physiopathology
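The majority-voting-rule (MVR) fusion step reported above simply takes the most frequent label among the individual classifiers' outputs. A tiny sketch, with hypothetical level predictions in place of the actual HMM and neural-network outputs:

```python
# Minimal sketch of majority-voting fusion: the label predicted by most
# classifiers wins (ties resolve to the first label seen). The level values
# below are hypothetical stand-ins for the HMM (frame-based) and neural
# network (word-based) outputs described in the abstract.
from collections import Counter

def majority_vote(*predictions):
    """Return the most frequent prediction among the fused classifiers."""
    return Counter(predictions).most_common(1)[0][0]

hmm_level, nn_level, backup_level = 2, 2, 3   # hypothetical per-word level predictions (1-4)
print(majority_vote(hmm_level, nn_level, backup_level))   # prints 2
```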
7.
Med Biol Eng Comput ; 44(10): 919-30, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17031716

ABSTRACT

The science of human identification using physiological characteristics, or biometry, has long been of great interest for security systems. However, robust multimodal identification systems based on audio-visual information have not yet been thoroughly investigated. The aim of this work is therefore to propose a model-based feature extraction method that employs the physiological characteristics of the facial muscles producing lip movements. The approach uses intrinsic muscle properties such as viscosity, elasticity, and mass, which are extracted from a dynamic lip model. Because these parameters depend exclusively on the neuromuscular properties of the speaker, imitation of valid speakers can be reduced to a large extent. The parameters are fed into a hidden Markov model (HMM) audio-visual identification system. In this work, a combination of audio and video features is employed through a multistream pseudo-synchronized HMM training method. Noise-robust audio features, namely Mel-frequency cepstral coefficients (MFCC), spectral subtraction (SS), and relative spectra perceptual linear prediction (J-RASTA-PLP), were used to evaluate the performance of the multimodal system when efficient audio feature extraction methods are utilized. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits, together with a phonetically rich sentence. To evaluate the robustness of the algorithms, experiments were performed on genetically identical twins, and changes in speaker voice were simulated with drug inhalation tests. At a 3 dB signal-to-noise ratio (SNR), the dynamic muscle model improved the identification rate of the audio-visual system from 91% to 98%. Results on identical twins revealed a clear improvement for the dynamic muscle model-based system, whose audio-visual identification rate was enhanced from 87% to 96%.


Subject(s)
Facial Muscles/physiology; Lip/physiology; Pattern Recognition, Automated/methods; Recognition, Psychology/physiology; Adult; Algorithms; Artificial Intelligence; Biometry/methods; Female; Humans; Male; Markov Chains; Models, Biological; Movement/physiology; Speech Acoustics; Vision, Ocular
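A hypothetical sketch of the audio half of such a system: one Gaussian HMM per speaker trained on MFCC frames, with identification by maximum log-likelihood. The visual stream (lip-muscle model parameters) and the multistream pseudo-synchronized training are not reproduced, the "speakers" below are synthetic tones rather than recorded digits, and the librosa and hmmlearn packages are assumed to be available.

```python
# Sketch of the audio stream only: one Gaussian HMM per "speaker" trained on
# MFCC frames, identification by maximum log-likelihood. Speakers here are
# synthetic tones (placeholders), not the paper's recorded digits.
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

sr = 16000
rng = np.random.default_rng(5)

def mfcc_frames(signal):
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T   # (frames, 13)

speakers = {name: np.sin(2 * np.pi * f0 * np.arange(sr) / sr) + 0.05 * rng.standard_normal(sr)
            for name, f0 in {"spk_a": 120.0, "spk_b": 210.0}.items()}

models = {name: GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=20, random_state=5).fit(mfcc_frames(sig))
          for name, sig in speakers.items()}

test_frames = mfcc_frames(speakers["spk_b"])
print(max(models, key=lambda name: models[name].score(test_frames)))   # expected: spk_b
```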