1.
Sensors (Basel) ; 23(21)2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37960589

ABSTRACT

The human liver exhibits variable characteristics and anatomical features that are often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can then be further processed for computer-aided diagnosis. Clinicians prefer magnetic resonance imaging (MRI) over volumetric abdominal computed tomography (CT) scans for liver pathology diagnosis because of its superior representation of soft tissues. However, the convenient Hounsfield unit (HoU)-based preprocessing available for CT scans has no counterpart in MRI, which makes automatic segmentation of MR images challenging. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI. T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation (CHAOS) grand challenge. T1-weighted images were chosen because they show fat content as brighter, providing enhanced contrast for the segmentation task. Twenty-four state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated, and a novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for liver segmentation (on the same test set), with a Dice similarity coefficient (DSC) of 95.15% and an intersection over union (IoU) of 92.10%.
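The reported DSC and IoU can be computed directly from binary masks; below is a minimal NumPy sketch, with the predicted and ground-truth liver masks as illustrative placeholders rather than data from the paper.

```python
import numpy as np

def dice_and_iou(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8):
    """Compute the Dice similarity coefficient and intersection over union
    for a pair of binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * intersection) / (pred.sum() + gt.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou

# Example with random masks standing in for real liver segmentations
pred = np.random.rand(256, 256) > 0.5
gt = np.random.rand(256, 256) > 0.5
print(dice_and_iou(pred, gt))
```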


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Abdomen/diagnostic imaging , Liver/diagnostic imaging
2.
Front Plant Sci ; 14: 1175515, 2023.
Article in English | MEDLINE | ID: mdl-37794930

ABSTRACT

Mulberry leaves are fed to Bombyx mori silkworms to produce silk thread. Diseases affecting mulberry leaves reduce crop and silk yields in sericulture, which produces 90% of the world's raw silk. Manual identification of leaf diseases is tedious and error-prone, whereas computer vision can categorize leaf diseases early and overcome the challenges of manual identification. No deep learning (DL) models for mulberry leaf disease have been reported. Therefore, in this study, images of two leaf diseases, leaf rust and leaf spot, together with disease-free leaves, were collected from two regions of Bangladesh, and sericulture experts annotated the leaf images. The images were pre-processed, and 6,000 synthetic images were generated from the original 764 training images using typical image augmentation methods. An additional 218 and 109 images were used for testing and validation, respectively. A lightweight parallel depth-wise separable CNN model, PDS-CNN, was developed by applying depth-wise separable convolutional layers to reduce parameters, layers, and model size while boosting classification performance. Finally, the explainability of PDS-CNN was obtained through SHapley Additive exPlanations (SHAP), evaluated by a sericulture specialist. The proposed PDS-CNN outperforms well-known deep transfer learning models, achieving an accuracy of 95.05 ± 2.86% for three-class classification and 96.06 ± 3.01% for binary classification with only 0.53 million parameters, 8 layers, and a size of 6.3 megabytes. Compared with other well-known transfer models, the proposed model identified mulberry leaf diseases with higher accuracy, fewer parameters, fewer layers, and a smaller overall size. The visually expressive SHAP explanation images validate the model's findings, which align with the assessments of the sericulture specialist. Based on these findings, the explainable AI (XAI)-based PDS-CNN can provide sericulture specialists with an effective tool for accurately categorizing mulberry leaves.
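The parameter savings behind a depth-wise separable design come from splitting a standard convolution into a per-channel (depth-wise) convolution followed by a 1×1 point-wise convolution. A PyTorch sketch is shown below; the channel counts are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise convolution (one filter per channel) followed by a
    point-wise 1x1 convolution that mixes channels."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a standard convolution (illustrative sizes)
standard = nn.Conv2d(32, 64, 3, padding=1)
separable = DepthwiseSeparableConv(32, 64)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))  # the separable block uses far fewer parameters
```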

3.
Sensors (Basel) ; 23(18)2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37765780

ABSTRACT

Colorectal polyps in the colon or rectum are precancerous growths that can lead to a more severe disease, colorectal cancer. Accurate segmentation of polyps using medical imaging data is essential for effective diagnosis. However, manual segmentation by endoscopists can be time-consuming, error-prone, and expensive, leading to a high rate of missed anomalies. To solve this problem, an automated diagnostic system based on deep learning algorithms is proposed to find polyps. The proposed IRv2-Net model is built on the UNet architecture with a pre-trained InceptionResNetV2 encoder to extract rich features from the input samples. A test-time augmentation (TTA) technique, which combines predictions on the original image and its horizontal and vertical flips, is used to recover precise boundary information and multi-scale image features. The performance of numerous state-of-the-art (SOTA) models is compared using several metrics, including accuracy, Dice similarity coefficient (DSC), intersection over union (IoU), precision, and recall. The proposed model is tested on the Kvasir-SEG and CVC-ClinicDB datasets, demonstrating superior performance in handling unseen real-time data. It achieves the highest area under the receiver operating characteristic (ROC-AUC) and precision-recall (AUC-PR) curves. The model exhibits excellent qualitative results across different types of polyps, including large, small, over-saturated, sessile, and flat polyps, both within the same dataset and across datasets. Our approach can significantly reduce the number of missed anomalies. Lastly, a graphical interface is developed for producing segmentation masks in real time. The findings of this study have potential applications in clinical colonoscopy procedures and can serve as a basis for further research and development.
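A minimal sketch of the flip-based test-time augmentation described above, assuming a segmentation model that maps an image tensor to a probability map; the model and tensor shapes are placeholders, not the IRv2-Net implementation.

```python
import torch

def tta_predict(model, image: torch.Tensor) -> torch.Tensor:
    """Average predictions over the original image and its horizontal and
    vertical flips, un-flipping each prediction before averaging."""
    model.eval()
    with torch.no_grad():
        preds = [
            model(image),
            torch.flip(model(torch.flip(image, dims=[-1])), dims=[-1]),  # horizontal flip
            torch.flip(model(torch.flip(image, dims=[-2])), dims=[-2]),  # vertical flip
        ]
    return torch.stack(preds).mean(dim=0)
```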


Subject(s)
Deep Learning , Algorithms , Area Under Curve , Benchmarking , Image Processing, Computer-Assisted
4.
Sensors (Basel) ; 23(16)2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37631693

ABSTRACT

Each of us has a unique manner of communicating with the world, and such communication helps us interpret life. Sign language is the primary language of communication for people with hearing and speech disabilities. When a sign language user interacts with a non-signer, it becomes difficult for the signer to express themselves, and a sign language recognition system can help bridge this gap by interpreting the signer's gestures. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. To achieve this, two datasets were considered: (1) a raw dataset and (2) a face-and-hand region-based segmented dataset produced from the raw dataset. Moreover, an operational layer-based multilayer perceptron, "SelfMLP", is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2- and ResNet18-based CNN backbones and three SelfMLP variants were used to construct six different CNN-LSTM-SelfMLP models for performance comparison. The signer-independent mode was examined to reflect real-world application conditions. As a result, MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face-hand region-based segmentation and the SelfMLP-infused MobileNetV2-LSTM-SelfMLP surpassed previous findings on Arabic Sign Language recognition by 10.970% accuracy.
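A rough sketch of the CNN-LSTM part of such a pipeline: a pre-trained MobileNetV2 backbone from torchvision encodes each video frame and an LSTM aggregates the frame sequence. The paper's SelfMLP head is replaced here by a plain linear classifier, and all sizes and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmClassifier(nn.Module):
    """Per-frame MobileNetV2 features aggregated by an LSTM for sign-video classification."""
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")
        self.cnn = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(input_size=1280, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # stand-in for the paper's SelfMLP head

    def forward(self, videos):                      # videos: (batch, frames, 3, H, W)
        b, t, c, h, w = videos.shape
        feats = self.cnn(videos.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])

# Example: a batch of 2 clips, 16 frames each, 224x224 RGB
logits = CnnLstmClassifier(num_classes=10)(torch.randn(2, 16, 3, 224, 224))
```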


Subject(s)
Deep Learning , Humans , Language , Sign Language , Communication , Recognition, Psychology
5.
Polymers (Basel) ; 15(14)2023 Jul 17.
Article in English | MEDLINE | ID: mdl-37514453

ABSTRACT

This study experimentally investigates the effect of green polymeric nanoparticles on the interfacial tension (IFT) and wettability of carbonate reservoirs in order to improve enhanced oil recovery (EOR) parameters. The experiments compare the performance of xanthan/magnetite/SiO2 nanocomposites (NC) with that of green materials, i.e., eucalyptus plant nanocomposites (ENC) and walnut shell nanocomposites (WNC), on oil recovery by performing a series of spontaneous imbibition tests. Scanning electron microscopy (SEM), X-ray diffraction (XRD), energy-dispersive X-ray spectroscopy (EDAX), and BET (Brunauer, Emmett, and Teller) surface analysis were applied to characterize the morphology and crystalline structure of NC, ENC, and WNC. The IFT and contact angle (CA) were then measured in the presence of these materials under various reservoir conditions and solvent salinities. Both ENC and WNC decreased CA and IFT, but ENC performed better than WNC under the different salinities, namely seawater (SW), two-times-diluted seawater (2 SW), ten-times-diluted seawater (10 SW), formation water (FW), and distilled water (DIW), applied at 70 °C, 2000 psi, and a nanocomposite concentration of 0.05 wt.%. Based on these results, ENC nanofluids at salinities of 10 SW and 2 SW were selected for EOR of carbonate rocks under reservoir conditions. The contact angles of the ENC nanocomposites at salinities of 2 SW and 10 SW were 49° and 43.4°, respectively. Zeta potential values were -44.39 and -46.58 mV for the 2 SW and 10 SW ENC nanofluids, evidence of the high stability of the ENC nanocomposites. The imbibition tests at 70 °C and 2000 psi with 0.05 wt.% ENC at 10 SW and 2 SW yielded incremental oil recoveries of 64.13% and 60.12%, respectively, compared to 46.16% for NC.

6.
Materials (Basel) ; 16(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36837096

ABSTRACT

With an expected increase in the number of revision surgeries and patients receiving orthopedic implants in the coming years, joint replacement research needs to focus on improving the mechanical properties of implants. Head-stem trunnion fixation provides superior load support and implant stability. Fretting wear forms at the trunnion because of patients' dynamic loading activities, and this eventually causes the total hip implant system to fail. To optimize the design, researchers have performed multiple experiments with various trunnion geometries to examine the wear rate and associated mechanical performance characteristics of existing head-stem trunnions. The objective of this work is to quantify and evaluate the performance parameters of smooth and novel spiral head-stem trunnion types under dynamic loading. This study proposes a finite element method for estimating head-stem trunnion performance characteristics, namely contact pressure and sliding distance, for both trunnion types under walking and jogging loading conditions. The wear rate for both trunnion types was computed using the Archard wear model over a standard number of gait cycles. The results indicated that the spiral trunnion, with a uniform contact pressure distribution, achieved better fixation than the smooth trunnion, although the average contact pressure distribution was nearly the same for both trunnion types. The maximum and average sliding distances were both shorter for the spiral trunnion; hence, the summed sliding distance over a complete gait cycle was approximately 10% shorter for the spiral trunnion than for the smooth trunnion. Owing to the reduced sliding, hip implants with spiral trunnions achieved more stability than those with smooth trunnions. The anticipated wear rate for the spiral trunnion was 0.039 mm3 per million loading cycles, lower than the smooth trunnion wear rate of 0.048 mm3 per million loading cycles. The spiral trunnion achieved superior fixation stability with a shorter sliding distance and a lower wear rate than the smooth trunnion; therefore, the spiral trunnion can be recommended for future hip implant systems.
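The wear estimate relies on the Archard model, in which wear volume is proportional to contact load and sliding distance and inversely proportional to material hardness. Below is a minimal numeric sketch; the wear coefficient, load, sliding distance, and hardness values are illustrative assumptions, not values from the paper.

```python
def archard_wear_volume(k: float, load_n: float, sliding_distance_m: float,
                        hardness_pa: float) -> float:
    """Archard wear model: V = k * F * s / H, with V in cubic metres."""
    return k * load_n * sliding_distance_m / hardness_pa

# Illustrative values only: dimensionless wear coefficient, joint load in newtons,
# accumulated micro-sliding distance per gait cycle in metres, and alloy hardness in pascals.
volume_per_cycle = archard_wear_volume(k=1e-4, load_n=2300.0,
                                       sliding_distance_m=15e-6,
                                       hardness_pa=2.2e9)
print(volume_per_cycle * 1e9 * 1e6, "mm^3 per million cycles")
```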

7.
Bioengineering (Basel) ; 9(11)2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36421093

ABSTRACT

Cardiovascular diseases are among the most severe causes of mortality, annually taking a heavy toll on lives worldwide. Continuous monitoring of blood pressure seems to be the most viable option for early detection, but conventional continuous monitoring is invasive, which introduces several layers of complexity, while existing non-invasive techniques are not sufficiently accurate. This motivates us to develop a method to estimate the continuous arterial blood pressure (ABP) waveform non-invasively from photoplethysmogram (PPG) signals. We exploit deep learning, which makes handcrafted feature computation unnecessary and thus frees us from relying only on ideally shaped PPG signals, a shortcoming of existing approaches. We present PPG2ABP, a two-stage cascaded deep learning-based method that estimates the continuous ABP waveform from the input PPG signal with a mean absolute error of 4.604 mmHg, preserving shape, magnitude, and phase in unison. More notably, the values of diastolic blood pressure (DBP), mean arterial pressure (MAP), and systolic blood pressure (SBP) computed from the estimated ABP waveform outperform existing works under several metrics (mean absolute errors of 3.449 ± 6.147 mmHg, 2.310 ± 4.437 mmHg, and 5.727 ± 9.162 mmHg, respectively), even though PPG2ABP is not explicitly trained to do so. Notably, for both DBP and MAP, we achieve Grade A under the British Hypertension Society (BHS) standard and satisfy the AAMI (Association for the Advancement of Medical Instrumentation) standard.
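BHS grading is based on the fraction of absolute errors falling within 5, 10, and 15 mmHg. A small NumPy sketch of that check is shown below, using the Grade A/B/C thresholds as commonly stated for the BHS protocol (verify against the standard before relying on it); the error samples are synthetic placeholders.

```python
import numpy as np

def bhs_grade(errors_mmhg: np.ndarray) -> str:
    """Grade blood-pressure estimation errors per the BHS protocol:
    cumulative percentage of |error| within 5, 10, and 15 mmHg."""
    abs_err = np.abs(errors_mmhg)
    within = [np.mean(abs_err <= t) * 100 for t in (5, 10, 15)]
    thresholds = {"A": (60, 85, 95), "B": (50, 75, 90), "C": (40, 65, 85)}
    for grade, required in thresholds.items():
        if all(w >= r for w, r in zip(within, required)):
            return grade
    return "D"

# Example: synthetic DBP errors with a small spread
errors = np.random.normal(0, 3.0, size=10_000)
print(bhs_grade(errors))
```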

8.
Pharmaceuticals (Basel) ; 15(11)2022 Nov 14.
Article in English | MEDLINE | ID: mdl-36422535

ABSTRACT

This study constructs a machine learning method to simultaneously analyze the thermodynamic behavior of many polymer-drug systems. The solubility temperature of Acetaminophen, Celecoxib, Chloramphenicol, D-Mannitol, Felodipine, Ibuprofen, Ibuprofen Sodium, Indomethacin, Itraconazole, Naproxen, Nifedipine, Paracetamol, Sulfadiazine, Sulfadimidine, Sulfamerazine, and Sulfathiazole in 1,3-bis[2-pyrrolidone-1-yl] butane, Polyvinyl Acetate, Polyvinylpyrrolidone (PVP), PVP K12, PVP K15, PVP K17, PVP K25, PVP/VA, PVP/VA 335, PVP/VA 535, PVP/VA 635, PVP/VA 735, and Soluplus is analyzed from a modeling perspective. A least-squares support vector regression (LS-SVR) model is designed to approximate the solubility temperature of drugs in polymers from the polymer type, drug type, and drug loading in the polymer. The structure of this machine learning model is tuned by trial and error over the kernel type (i.e., Gaussian, polynomial, and linear) and the method used for adjusting the LS-SVR coefficients (i.e., leave-one-out and 10-fold cross-validation scenarios). Results of the sensitivity analysis show that the Gaussian kernel with 10-fold cross-validation is the best candidate for the given task. The built model yields results consistent with the 278 experimental samples reported in the literature; mean absolute relative deviations of 8.35% and 7.25% are achieved in the training and testing stages, respectively. The performance on the largest available dataset confirms its applicability. Such a reliable tool is essential for monitoring the stability and deliverability of polymer-drug systems, especially for poorly soluble drugs in polymers, and can be further validated through practical implementation in the future.
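A minimal least-squares SVR can be obtained by solving a regularized linear system in kernel space. The sketch below implements that closed-form training step with a Gaussian (RBF) kernel on synthetic data; it is only a schematic of the model family described, not the authors' implementation, and the feature construction is an assumption.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVR linear system for the bias and dual coefficients."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Synthetic stand-in for (drug type, polymer type, drug loading) -> solubility temperature
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 3))
y = 300 + 50 * np.sin(X.sum(axis=1))
b, alpha = lssvr_fit(X, y)
print(lssvr_predict(X, b, alpha, X[:5]))
```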

9.
Sensors (Basel) ; 22(19)2022 Oct 07.
Article in English | MEDLINE | ID: mdl-36236697

ABSTRACT

An intelligent insole system can monitor an individual's foot pressure and temperature in real time from the comfort of their home, which can help capture foot problems in their earliest stages. Continuous monitoring for foot complications is essential to avoid potentially devastating outcomes of common diseases such as diabetes mellitus. Motivated by these goals, the authors propose a complete design for a wearable insole that can detect both plantar pressure and temperature using off-the-shelf sensors. The design provides details of the specific temperature and pressure sensors, the circuit configuration for characterizing the sensors, and design considerations for creating a small system with suitable electronics. It also details how, using a low-power communication protocol, data on the individual's foot pressure and temperature can be sent wirelessly to a centralized device for storage. This work may aid in the creation of an affordable, practical, and portable foot monitoring system for patients. The solution can be used for continuous, at-home monitoring of foot problems through pressure patterns and temperature differences between the two feet, and the generated maps can be used for early detection of diabetic foot complications with the help of artificial intelligence.


Subject(s)
Artificial Intelligence , Diabetic Foot , Diabetic Foot/diagnosis , Humans , Pressure , Shoes , Temperature
10.
Bioengineering (Basel) ; 9(10)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36290527

ABSTRACT

Respiratory ailments are a serious health issue and can be life-threatening, especially for patients with COVID-19. Respiration rate (RR) is an important vital sign, and any abnormality in this metric indicates a deterioration in health; hence, continuous monitoring of RR can act as an early indicator. Despite this, RR monitoring equipment is generally provided only to intensive care unit (ICU) patients. Recent studies have established the feasibility of using photoplethysmogram (PPG) signals to estimate RR. This paper proposes a deep-learning-based end-to-end solution for estimating RR directly from the PPG signal. The system was evaluated on two popular public datasets: VORTAL and BIDMC. A lightweight model, ConvMixer, outperformed all of the other deep neural networks, providing a root mean squared error (RMSE), mean absolute error (MAE), and correlation coefficient (R) of 1.75 breaths per minute (bpm), 1.27 bpm, and 0.92, respectively, for VORTAL, and 1.20 bpm, 0.77 bpm, and 0.92, respectively, for BIDMC. The authors also showed how fine-tuning on a small subset can increase the performance of the model on an out-of-distribution dataset; in these fine-tuning experiments, the models produced an average R of 0.81. Hence, this lightweight model can be deployed to mobile devices for real-time monitoring of patients.
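The ConvMixer idea (a patch embedding followed by repeated depth-wise and point-wise convolutions with residual connections) carries over to 1D signals such as PPG. Below is a sketch of a 1D variant in PyTorch; all hyper-parameters and the regression head are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualAdd(nn.Module):
    """Wrap a module with a residual (skip) connection."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return x + self.fn(x)

def conv_mixer_1d(dim=64, depth=4, kernel_size=9, patch_size=8):
    """ConvMixer-style regressor mapping a 1-channel PPG window to a respiration rate."""
    mixer_block = lambda: nn.Sequential(
        ResidualAdd(nn.Sequential(                       # depth-wise (per-channel) convolution
            nn.Conv1d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(), nn.BatchNorm1d(dim))),
        nn.Conv1d(dim, dim, kernel_size=1),              # point-wise channel mixing
        nn.GELU(), nn.BatchNorm1d(dim))
    return nn.Sequential(
        nn.Conv1d(1, dim, kernel_size=patch_size, stride=patch_size),   # patch embedding
        nn.GELU(), nn.BatchNorm1d(dim),
        *[mixer_block() for _ in range(depth)],
        nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(dim, 1))

# Example: a batch of 4 PPG windows of 2048 samples each -> 4 RR estimates
rr_estimates = conv_mixer_1d()(torch.randn(4, 1, 2048))
```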

11.
Pharmaceutics ; 14(8)2022 Aug 05.
Article in English | MEDLINE | ID: mdl-36015258

ABSTRACT

Synthesizing micro-/nano-sized pharmaceutical compounds with an appropriate size distribution is a common route to enhance drug delivery and reduce side effects. Supercritical CO2 (carbon dioxide) is a well-known solvent in pharmaceutical synthesis processes. Reliable knowledge of a drug's solubility in supercritical CO2 is necessary for the feasibility study, modeling, design, optimization, and control of such a process. Therefore, the current study constructs a stacked/ensemble model by combining three up-to-date machine learning tools (i.e., extra trees, gradient boosting, and random forest) to predict the solubility of twelve anticancer drugs in supercritical CO2. An experimental databank comprising 311 phase equilibrium samples was gathered from the literature and used to design the proposed stacked model, which estimates the solubility of anticancer drugs in supercritical CO2 as a function of solute and solvent properties and operating conditions. Several statistical indices, including the average absolute relative deviation (AARD = 8.62%), mean absolute error (MAE = 2.86 × 10^-6), relative absolute error (RAE = 2.42%), mean squared error (MSE = 1.26 × 10^-10), and regression coefficient (R2 = 0.99809), were used to validate the model's performance. The statistical, sensitivity, and trend analyses confirm that the suggested stacked model performs excellently in correlating and predicting the solubility of anticancer drugs in supercritical CO2.
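A stacked ensemble of this kind can be reproduced schematically with scikit-learn's StackingRegressor over extra-trees, gradient-boosting, and random-forest base learners. The features, data, and meta-learner below are placeholders, not the study's databank or exact configuration.

```python
import numpy as np
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder features, e.g. temperature, pressure, solute molecular weight, ...
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(311, 5))
y = np.exp(-3 + 2 * X[:, 0] + X[:, 1])        # synthetic stand-in for solubility

stack = StackingRegressor(
    estimators=[("et", ExtraTreesRegressor(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0)),
                ("rf", RandomForestRegressor(n_estimators=200, random_state=0))],
    final_estimator=Ridge())                   # meta-learner choice is illustrative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
stack.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, stack.predict(X_te)))
```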

12.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684870

ABSTRACT

Diabetes mellitus (DM) is one of the most prevalent diseases in the world and is associated with high mortality. One of its major complications is diabetic foot, which can lead to plantar ulcers, amputation, and death. Several studies report that thermograms help detect changes in plantar foot temperature that may indicate a higher risk of ulceration. However, in diabetic patients, the distribution of plantar temperature does not follow a standard pattern, making the changes difficult to quantify. The abnormal temperature distribution in infrared (IR) foot thermogram images can be used for the early detection of diabetic foot, before ulceration, to avoid complications. No machine learning-based technique has been reported in the literature to classify these thermograms based on the severity of diabetic foot complications. This paper uses an available labeled diabetic thermogram dataset and applies k-means clustering to group the severity risk of diabetic foot ulcers in an unsupervised manner. Using the plantar foot temperature, the newly clustered dataset was verified by expert medical doctors in terms of risk for the development of foot ulcers. The newly labeled dataset was then investigated in terms of how robustly it can be classified by machine learning networks. Classical machine learning algorithms with feature engineering and convolutional neural networks (CNNs) with image-enhancement techniques were investigated to find the best-performing network for classifying thermograms by severity. The popular VGG19 CNN model showed an accuracy, precision, sensitivity, F1-score, and specificity of 95.08%, 95.08%, 95.09%, 95.08%, and 97.2%, respectively, in the stratification of severity. A stacking classifier built on trained gradient boosting, XGBoost, and random forest classifiers using features extracted from the thermograms provided comparable performance of 94.47%, 94.45%, 94.47%, 94.43%, and 93.25% for accuracy, precision, sensitivity, F1-score, and specificity, respectively.
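The unsupervised severity grouping can be sketched with scikit-learn's KMeans applied to simple per-foot temperature features. The features and the number of clusters below are illustrative assumptions, not necessarily those used by the authors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder feature table: one row per subject, e.g. mean and max plantar
# temperature of the left and right foot extracted from the thermograms.
rng = np.random.default_rng(1)
features = np.column_stack([
    rng.normal(27, 2, 200),   # left foot mean temperature (deg C)
    rng.normal(27, 2, 200),   # right foot mean temperature
    rng.normal(31, 2, 200),   # left foot max temperature
    rng.normal(31, 2, 200),   # right foot max temperature
])

scaled = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
severity_labels = kmeans.labels_           # cluster ids, later verified by clinicians
print(np.bincount(severity_labels))
```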


Subject(s)
Diabetes Mellitus , Diabetic Foot , Algorithms , Diabetic Foot/diagnostic imaging , Humans , Machine Learning , Neural Networks, Computer , Thermography/methods
13.
Comput Biol Med ; 147: 105620, 2022 08.
Article in English | MEDLINE | ID: mdl-35667155

ABSTRACT

Liver and liver tumor segmentation from 3D volumetric images has been an active research area in the medical image processing domain for the last few decades. The presence of other organs such as the heart, spleen, stomach, and kidneys complicates liver segmentation and tumor identification, since these organs share similar shape, texture, and intensity values. Many automatic and semi-automatic techniques have been presented in recent years in an attempt to establish a system for the reliable diagnosis and detection of liver illnesses, specifically liver tumors. With the evolution of deep learning techniques and their exceptional performance in medical image processing, segmentation of volumetric medical images using deep learning has received a great deal of attention. The goal of this study is to provide an overview of the available deep learning approaches for segmenting the liver and detecting liver tumors, together with their evaluation metrics, including accuracy, volume overlap error, Dice coefficient, and mean square distance. This review also includes a detailed overview of the various 3D volumetric imaging architectures designed specifically for semantic segmentation, as well as a comparison of the approaches offered in earlier liver and tumor segmentation challenges and the Dice scores reported by the respective sources.


Subject(s)
Deep Learning , Liver Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods
14.
Diagnostics (Basel) ; 12(4)2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35453968

ABSTRACT

Problem: Since the outbreak of the COVID-19 pandemic, mass testing has become essential to reduce the spread of the virus. Several recent studies suggest that a significant number of COVID-19 patients display no physical symptoms whatsoever; such patients are unlikely to undergo COVID-19 testing, which increases their chances of unintentionally spreading the virus. Currently, the primary diagnostic tool to detect COVID-19 is a reverse-transcription polymerase chain reaction (RT-PCR) test on respiratory specimens from the suspected patient, an invasive and resource-dependent technique. Recent research shows that asymptomatic COVID-19 patients cough and breathe differently from healthy people. Aim: This paper aims to use a novel machine learning approach to detect COVID-19 (symptomatic and asymptomatic) patients from the convenience of their homes, so that they neither overburden the healthcare system nor spread the virus unknowingly, by continuously monitoring themselves. Method: A Cambridge University research group shared a dataset of cough and breath sound samples from 582 healthy subjects and 141 COVID-19 patients; among the COVID-19 patients, 87 were asymptomatic while 54 were symptomatic (with a dry or wet cough). In addition to the available dataset, the proposed work deployed a real-time deep learning-based backend server with a web application to crowdsource cough and breath recordings and to screen for COVID-19 infection from the comfort of the user's home. The collected dataset includes data from 245 healthy individuals, 78 asymptomatic COVID-19 patients, and 18 symptomatic COVID-19 patients. Users can use the application from any web browser without installation, enter their symptoms, record audio clips of their cough and breath sounds, and upload the data anonymously. Two screening pipelines were developed based on the symptoms reported by the users: asymptomatic and symptomatic. A novel stacking CNN model was developed using three base learners selected from eight state-of-the-art deep learning CNN algorithms; the stacking model uses a logistic regression meta-learner and takes as input the spectrograms generated from the breath and cough sounds of symptomatic and asymptomatic patients in the combined (Cambridge and collected) dataset. Results: The stacking model outperformed the other eight CNN networks, with the best binary classification performance achieved using cough sound spectrogram images. The accuracy, sensitivity, and specificity for symptomatic and asymptomatic patients were 96.5%, 96.42%, and 95.47%, and 98.85%, 97.01%, and 99.6%, respectively. For breath sound spectrogram images, the corresponding metrics for symptomatic and asymptomatic patients were 91.03%, 88.9%, and 91.5%, and 80.01%, 72.04%, and 82.67%, respectively. Conclusion: The web application QUCoughScope records coughing and breathing sounds, converts them to spectrograms, and applies the best-performing machine learning model to classify COVID-19 patients and healthy subjects; the result is then reported back to the user in the application interface. This system can therefore be used by patients on their own premises as a pre-screening method to aid COVID-19 diagnosis by prioritizing patients for RT-PCR testing, thereby reducing the risk of spreading the disease.
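Spectrogram generation from recorded cough or breath clips can be done with librosa, as sketched below. The sampling rate, FFT, and mel parameters are illustrative, and the file path is a placeholder; this is not necessarily the pipeline used in QUCoughScope.

```python
import librosa
import librosa.display
import numpy as np
import matplotlib.pyplot as plt

def cough_to_spectrogram(wav_path: str, sr: int = 22050) -> np.ndarray:
    """Load an audio clip and convert it to a log-scaled mel spectrogram."""
    audio, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=2048,
                                         hop_length=512, n_mels=128)
    return librosa.power_to_db(mel, ref=np.max)

# Example: save a spectrogram image that a CNN classifier could consume
spec = cough_to_spectrogram("cough_sample.wav")      # path is a placeholder
librosa.display.specshow(spec, sr=22050, hop_length=512, x_axis="time", y_axis="mel")
plt.savefig("cough_spectrogram.png", bbox_inches="tight")
```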

15.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270938

ABSTRACT

Diabetes mellitus (DM) can lead to plantar ulcers, amputation, and death. Plantar foot thermogram images acquired using an infrared camera have been shown to detect changes in temperature distribution associated with a higher risk of foot ulceration. Machine learning approaches applied to such infrared images may have utility in the early diagnosis of diabetic foot complications. In this work, a publicly available dataset was categorized into different classes, corroborated by domain experts, based on a temperature distribution parameter, the thermal change index (TCI). We then explored different machine learning approaches for classifying thermograms of the TCI-labeled dataset. Classical machine learning algorithms with feature engineering and convolutional neural networks (CNNs) with image enhancement techniques were extensively investigated to identify the best-performing network for classifying thermograms. A multilayer perceptron (MLP) classifier, together with the features extracted from the thermogram images, showed an accuracy of 90.1% in multi-class classification, which outperformed the performance metrics reported on this dataset in the literature.


Subject(s)
Diabetes Mellitus , Diabetic Foot , Algorithms , Diabetic Foot/diagnostic imaging , Humans , Machine Learning , Neural Networks, Computer , Thermography
16.
Polymers (Basel) ; 14(3)2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35160516

ABSTRACT

Biodegradable polymers have recently found significant applications in pharmaceutics processing and drug release/delivery. Composites based on poly(L-lactic acid) (PLLA) have been suggested to enhance the crystallization rate and relative crystallinity of pure PLLA polymers. Despite the large amount of experimental research to date, the theoretical aspects of relative crystallinity have not been comprehensively investigated. Therefore, this research uses machine learning methods to estimate the relative crystallinity of biodegradable PLLA/PGA (polyglycolide) composites. Six different classes of artificial intelligence models were employed to estimate the relative crystallinity of PLLA/PGA polymer composites as a function of crystallization time, temperature, and PGA content. Cumulatively, 1510 machine learning topologies, including 200 multilayer perceptron neural networks, 200 cascade feedforward neural networks (CFFNN), 160 recurrent neural networks, 800 adaptive neuro-fuzzy inference systems, and 150 least-squares support vector regressions, were developed and their prediction accuracies compared. The modeling results show that a single-hidden-layer CFFNN with 9 neurons is the most accurate method for estimating the 431 experimentally measured data points. This model predicts the experimental database with an average absolute percentage difference of 8.84%, a root mean squared error of 4.67%, and a correlation coefficient (R2) of 0.999008. The modeling results and relevancy studies show that relative crystallinity increases with PGA content and crystallization time, whereas the effect of temperature on relative crystallinity is too complex to be easily explained.

17.
Sensors (Basel) ; 22(3)2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35161664

ABSTRACT

Cardiovascular diseases are the most common causes of death around the world. To detect and treat heart-related diseases, continuous blood pressure (BP) monitoring, along with many other parameters, is required. Several invasive and non-invasive methods have been developed for this purpose. Most existing methods used in hospitals for continuous BP monitoring are invasive, whereas cuff-based BP monitoring methods, which can measure systolic blood pressure (SBP) and diastolic blood pressure (DBP), cannot be used for continuous monitoring. Several studies have attempted to predict BP from non-invasively collectible signals such as photoplethysmograms (PPG) and electrocardiograms (ECG), which can be used for continuous monitoring. In this study, we explored the applicability of autoencoders in predicting BP from PPG and ECG signals. The investigation was carried out on 12,000 instances from 942 patients of the MIMIC-II dataset, and it was found that a very shallow, one-dimensional autoencoder can extract the relevant features to predict SBP and DBP with state-of-the-art performance on a very large dataset. An independent test set from a portion of the MIMIC-II dataset provided mean absolute errors (MAE) of 2.333 and 0.713 mmHg for SBP and DBP, respectively, and on an external dataset of 40 subjects, the model trained on the MIMIC-II dataset provided MAEs of 2.728 and 1.166 mmHg for SBP and DBP, respectively. In both cases, the results met British Hypertension Society (BHS) Grade A and surpassed studies in the current literature.
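A very shallow one-dimensional autoencoder of the kind described can be written in a few lines of PyTorch. The window length, channel count, and the way the bottleneck would feed a BP regressor are illustrative assumptions; this is a structural sketch, not the authors' model.

```python
import torch
import torch.nn as nn

class ShallowAutoencoder(nn.Module):
    """One 1D conv encoder layer and one transposed-conv decoder layer; the
    bottleneck features can later feed a small regressor for SBP/DBP."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, stride=2, padding=4), nn.ReLU())
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=9, stride=2,
                                          padding=4, output_padding=1)

    def forward(self, x):
        z = self.encoder(x)          # latent features usable for BP regression
        return self.decoder(z), z

# Example: reconstruct a batch of 8 PPG/ECG windows of 1024 samples
model = ShallowAutoencoder()
x = torch.randn(8, 1, 1024)
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)
```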


Subject(s)
Hypertension , Photoplethysmography , Blood Pressure , Blood Pressure Determination , Electrocardiography , Humans , Hypertension/diagnosis
18.
Sensors (Basel) ; 21(22)2021 Nov 12.
Article in English | MEDLINE | ID: mdl-34833602

ABSTRACT

MR images are visually inspected by domain experts for the analysis and quantification of tumorous tissue. Because of the large volume of data, manual reporting on the images is subjective, cumbersome, and error-prone. To address these problems, automatic image analysis tools are employed for tumor segmentation and subsequent statistical analysis. However, prior to tumor analysis and quantification, an important challenge lies in the pre-processing. In the present study, permutations of different pre-processing methods were comprehensively investigated, focusing in particular on Gibbs ringing artifact removal, bias-field correction, intensity normalization, and adaptive histogram equalization (AHE). The pre-processed MRI data were then passed to a 3D U-Net for automatic segmentation of brain tumors. The segmentation results demonstrated the best performance with the combination of two techniques, Gibbs ringing artifact removal and bias-field correction. The proposed technique achieved mean Dice scores of 0.91, 0.86, and 0.70 for the whole tumor, tumor core, and enhancing tumor, respectively, and testing mean Dice scores of 0.90, 0.83, and 0.71, respectively. The novelty of this work lies in a robust pre-processing sequence for improving the segmentation accuracy of MR images, which surpassed the testing Dice scores of state-of-the-art methods. The results are benchmarked against the existing techniques used in the Brain Tumor Segmentation (BraTS) 2018 challenge.
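The two winning pre-processing steps have common open-source implementations: Gibbs ringing removal in DIPY and N4 bias-field correction in SimpleITK. The sketch below chains them under those assumptions, with the file path as a placeholder; it shows one possible toolchain, not necessarily the one used in the paper.

```python
import numpy as np
import SimpleITK as sitk
from dipy.denoise.gibbs import gibbs_removal

def preprocess_volume(nifti_path: str) -> sitk.Image:
    """Remove Gibbs ringing, then apply N4 bias-field correction to an MR volume."""
    image = sitk.ReadImage(nifti_path, sitk.sitkFloat32)
    array = sitk.GetArrayFromImage(image)               # (slices, rows, cols)

    # Slice-wise Gibbs ringing suppression (slice axis 0 for this array layout)
    deringed = gibbs_removal(array, slice_axis=0)

    corrected_input = sitk.GetImageFromArray(deringed.astype(np.float32))
    corrected_input.CopyInformation(image)

    mask = sitk.OtsuThreshold(corrected_input, 0, 1)     # rough foreground mask
    n4 = sitk.N4BiasFieldCorrectionImageFilter()
    return n4.Execute(corrected_input, mask)

# corrected = preprocess_volume("subject01_t1.nii.gz")   # path is a placeholder
```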


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Humans , Image Enhancement , Image Processing, Computer-Assisted
19.
Diagnostics (Basel) ; 11(5)2021 May 17.
Article in English | MEDLINE | ID: mdl-34067937

ABSTRACT

Detecting COVID-19 at an early stage is essential to reduce the mortality risk of patients. In this study, a cascaded system is proposed to segment the lung and to detect, localize, and quantify COVID-19 infections from computed tomography images. An extensive set of experiments was performed using encoder-decoder convolutional neural networks (ED-CNNs), U-Net, and the Feature Pyramid Network (FPN), with different backbone (encoder) structures using variants of DenseNet and ResNet. The experiments for lung region segmentation showed a Dice similarity coefficient (DSC) of 97.19% and an intersection over union (IoU) of 95.10% using the U-Net model with a DenseNet161 encoder. Furthermore, the proposed system achieved excellent performance for COVID-19 infection segmentation, with a DSC of 94.13% and an IoU of 91.85% using the FPN with a DenseNet201 encoder. The proposed system can reliably localize infections of various shapes and sizes, especially small infection regions, which are rarely considered in recent studies. Moreover, the system achieved high COVID-19 detection performance, with 99.64% sensitivity and 98.72% specificity. Finally, the system was able to discriminate between different severity levels of COVID-19 infection over a dataset of 1110 subjects, with sensitivity values of 98.3%, 71.2%, 77.8%, and 100% for mild, moderate, severe, and critical cases, respectively.
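Encoder-decoder pairs like those evaluated (U-Net and FPN decoders with DenseNet encoders) are available off the shelf in the segmentation_models_pytorch package. The sketch below instantiates the two best-performing combinations reported above purely as an illustration of the architecture family; the cascading step and input sizes are schematic assumptions, not the authors' exact pipeline.

```python
import torch
import segmentation_models_pytorch as smp

# Lung segmentation: U-Net decoder with a DenseNet161 encoder (binary mask)
lung_model = smp.Unet(encoder_name="densenet161", encoder_weights="imagenet",
                      in_channels=1, classes=1, activation=None)

# Infection segmentation: FPN decoder with a DenseNet201 encoder
infection_model = smp.FPN(encoder_name="densenet201", encoder_weights="imagenet",
                          in_channels=1, classes=1, activation=None)

# A CT slice resized to a multiple of 32, as these encoders expect
ct_slice = torch.randn(1, 1, 256, 256)
lung_logits = lung_model(ct_slice)
# Schematic cascade: mask the CT slice with the lung prediction before infection segmentation
infection_logits = infection_model(lung_logits.sigmoid() * ct_slice)
```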
