Results 1 - 20 of 32
1.
PLoS One ; 19(7): e0306215, 2024.
Article in English | MEDLINE | ID: mdl-38980871

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0114406.].

2.
J Clin Med ; 12(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37685724

ABSTRACT

BACKGROUND: Sepsis, a life-threatening infection-induced inflammatory condition, has significant global health impacts. Timely detection is crucial for improving patient outcomes, as sepsis can rapidly progress to severe forms. The application of machine learning (ML) and deep learning (DL) to predict sepsis using electronic health records (EHRs) has gained considerable attention for enabling timely intervention. METHODS: PubMed, IEEE Xplore, Google Scholar, and Scopus were searched for relevant studies. All studies that used ML/DL to detect or early-predict the onset of sepsis in the adult population using EHRs were considered. Data were extracted and analyzed from all studies that met the criteria, and each included study was also assessed for quality. RESULTS: This systematic review examined 1942 articles and selected 42 studies that adhered to strict criteria. The chosen studies were predominantly retrospective (n = 38) and spanned diverse geographic settings, with a focus on the United States. Different datasets, sepsis definitions, and prevalence rates were employed, necessitating data augmentation. Heterogeneous parameter utilization, diverse model distribution, and varying quality assessments were observed. Longitudinal data enabled early sepsis prediction, fulfillment of quality criteria varied, and the correlation between funding and article quality was inconsistent. CONCLUSIONS: This systematic review underscores the significance of ML/DL methods for sepsis detection and early prediction from EHR data.

3.
Bioengineering (Basel) ; 10(5)2023 May 10.
Article in English | MEDLINE | ID: mdl-37237649

ABSTRACT

Electroencephalogram (EEG) signals suffer considerably from several physiological artifacts, including electrooculogram (EOG), electromyogram (EMG), and electrocardiogram (ECG) artifacts, which must be removed to ensure EEG's usability. This paper proposes a novel one-dimensional convolutional neural network (1D-CNN), MultiResUNet3+, to denoise physiological artifacts from corrupted EEG. A publicly available dataset containing clean EEG, EOG, and EMG segments is used to generate semi-synthetic noisy EEG to train, validate, and test the proposed MultiResUNet3+, along with four other 1D-CNN models (FPN, UNet, MCGUNet, LinkNet). Adopting a five-fold cross-validation technique, all five models' performance is measured by estimating the temporal and spectral percentage reduction in artifacts, the temporal and spectral relative root mean squared error, and the average power ratio of each of the five EEG bands to the whole spectrum. The proposed MultiResUNet3+ achieved the highest temporal and spectral percentage reduction of 94.82% and 92.84%, respectively, in removing EOG artifacts from EOG-contaminated EEG. Moreover, compared to the other four 1D-segmentation models, the proposed MultiResUNet3+ eliminated 83.21% of the spectral artifacts from the EMG-corrupted EEG, which is also the highest. In most situations, our proposed model performed better than the other four 1D-CNN models, as evidenced by the computed performance evaluation metrics.
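The semi-synthetic corruption step described in the abstract above is typically implemented by scaling an artifact segment and adding it to clean EEG at a prescribed signal-to-noise ratio. The sketch below shows one common formulation; the segment lengths, sampling rate, and random placeholders are illustrative assumptions, not the dataset or code used by the authors.

```python
import numpy as np

def mix_at_snr(clean_eeg: np.ndarray, artifact: np.ndarray, snr_db: float) -> np.ndarray:
    """Contaminate a clean EEG segment with an artifact (e.g., EOG/EMG) at a target SNR."""
    artifact = artifact[: len(clean_eeg)]
    p_signal = np.mean(clean_eeg ** 2)
    p_artifact = np.mean(artifact ** 2)
    # Scale factor so that 10*log10(p_signal / (lam**2 * p_artifact)) == snr_db
    lam = np.sqrt(p_signal / (p_artifact * 10 ** (snr_db / 10)))
    return clean_eeg + lam * artifact

# Example: 2-second segments at 256 Hz (synthetic placeholders for the public dataset)
rng = np.random.default_rng(0)
clean = rng.standard_normal(512)
eog = np.cumsum(rng.standard_normal(512))      # slow drift as a stand-in for EOG
noisy = mix_at_snr(clean, eog, snr_db=-3.0)    # heavier contamination at negative SNR
```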

4.
Sensors (Basel) ; 22(19)2022 Oct 07.
Article in English | MEDLINE | ID: mdl-36236697

ABSTRACT

An intelligent insole system may monitor an individual's foot pressure and temperature in real time from the comfort of their home, which can help capture foot problems at their earliest stages. Constant monitoring for foot complications is essential to avoid potentially devastating outcomes from common diseases such as diabetes mellitus. Inspired by these goals, the authors of this work propose a full design for a wearable insole that can detect both plantar pressure and temperature using off-the-shelf sensors. The design provides details of specific temperature and pressure sensors, circuit configurations for characterizing the sensors, and design considerations for creating a small system with suitable electronics. The procedure also details how, using a low-power communication protocol, data about the individual's foot pressure and temperature may be sent wirelessly to a centralized device for storage. This research may aid in the creation of an affordable, practical, and portable foot monitoring system for patients. The solution can be used for continuous, at-home monitoring of foot problems through pressure patterns and temperature differences between the two feet. The generated maps can be used for early detection of diabetic foot complications with the help of artificial intelligence.


Subject(s)
Artificial Intelligence , Diabetic Foot , Diabetic Foot/diagnosis , Humans , Pressure , Shoes , Temperature
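To illustrate the low-power transmission idea in the entry above, the sketch below packs one insole sample (pressure and temperature channels) into a compact fixed-size binary frame before it is handed to a radio link. The frame layout, channel counts, and resolutions are hypothetical choices, not the authors' design.

```python
import struct
import time

def pack_insole_frame(pressures_kpa, temps_c, seq: int) -> bytes:
    """Pack one insole sample into a compact binary frame for a low-power radio link.

    Illustrative little-endian layout: sequence number (uint16), timestamp in ms
    (uint32), 8 pressure channels (uint16, 0.1 kPa units), 2 temperature channels
    (int16, 0.01 degC units).
    """
    ts_ms = int(time.time() * 1000) & 0xFFFFFFFF
    fields = [seq & 0xFFFF, ts_ms]
    fields += [int(p * 10) for p in pressures_kpa]   # 0.1 kPa resolution
    fields += [int(t * 100) for t in temps_c]        # 0.01 degC resolution
    return struct.pack("<HI8H2h", *fields)

frame = pack_insole_frame(pressures_kpa=[42.5] * 8, temps_c=[30.75, 31.10], seq=1)
print(len(frame), "bytes per sample")                # 26 bytes per reading
```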
5.
Bioengineering (Basel) ; 9(10)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36290527

ABSTRACT

Respiratory ailments are a very serious health issue and can be life-threatening, especially for patients with COVID-19. Respiration rate (RR) is a very important vital sign for patients; any abnormality in this metric indicates a deterioration in health. Hence, continuous monitoring of RR can act as an early indicator. Despite this, RR monitoring equipment is generally provided only to intensive care unit (ICU) patients. Recent studies have established the feasibility of using photoplethysmogram (PPG) signals to estimate RR. This paper proposes a deep-learning-based end-to-end solution for estimating RR directly from the PPG signal. The system was evaluated on two popular public datasets: VORTAL and BIDMC. A lightweight model, ConvMixer, outperformed all of the other deep neural networks. The model provided a root mean squared error (RMSE), mean absolute error (MAE), and correlation coefficient (R) of 1.75 breaths per minute (bpm), 1.27 bpm, and 0.92, respectively, for VORTAL, while these metrics were 1.20 bpm, 0.77 bpm, and 0.92, respectively, for BIDMC. The authors also showed how fine-tuning on a small subset could increase the performance of the model in the case of an out-of-distribution dataset. In the fine-tuning experiments, the models produced an average R of 0.81. Hence, this lightweight model can be deployed to mobile devices for real-time monitoring of patients.
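The RMSE, MAE, and correlation coefficient reported in the entry above can be computed from reference and estimated respiration rates as follows; the toy values are illustrative only.

```python
import numpy as np

def rr_metrics(rr_true, rr_pred):
    """RMSE, MAE (breaths per minute) and Pearson R between reference and estimated RR."""
    rr_true, rr_pred = np.asarray(rr_true, float), np.asarray(rr_pred, float)
    err = rr_pred - rr_true
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    r = float(np.corrcoef(rr_true, rr_pred)[0, 1])
    return rmse, mae, r

# Toy example with made-up windowed estimates
print(rr_metrics([14, 16, 18, 20, 22], [13.5, 16.4, 17.2, 21.0, 22.3]))
```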

6.
Diagnostics (Basel) ; 12(9)2022 Sep 03.
Article in English | MEDLINE | ID: mdl-36140545

ABSTRACT

With the onset of the COVID-19 pandemic, the number of critically ill patients in intensive care units (ICUs) has increased worldwide, putting a burden on ICUs. Early prediction of ICU requirement is crucial for efficient resource management and distribution. Early-prediction scoring systems for critically ill patients using mathematical models are available but are not generalized to both COVID-19 and non-COVID-19 patients. This study aims to develop a generalized and reliable prognostic model for ICU admission for both COVID-19 and non-COVID-19 patients using the best feature combination from patient data at admission. A retrospective cohort study was conducted on a dataset collected from the pulmonology department of Moscow City State Hospital between 20 April 2020 and 5 June 2020. The dataset contains ten clinical features for 231 patients, of whom 100 were transferred to the ICU and 131 were stable (non-ICU) patients. There were 156 COVID-19-positive patients and 75 non-COVID-19 patients. Different feature selection techniques were investigated, and a stacking machine learning model was proposed and compared with eight different classification algorithms to detect the risk of ICU admission for both COVID-19 and non-COVID-19 patients combined and for COVID-19 patients alone. C-reactive protein (CRP), chest computed tomography (CT), lung tissue affected (%), age, admission to hospital, and fibrinogen parameters at hospital admission were found to be important features for ICU-requirement risk prediction. The best performance was produced by the stacking approach, with weighted precision, sensitivity, F1-score, specificity, and overall accuracy of 84.45%, 84.48%, 83.64%, 84.47%, and 84.48%, respectively, for both types of patients, and 85.34%, 85.35%, 85.11%, 85.34%, and 85.35%, respectively, for COVID-19 patients only. The proposed work can help doctors improve patient management through early prediction of the risk of ICU admission during the COVID-19 pandemic, as the model can be used for both types of patients.
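A stacked ensemble of the kind described in the entry above can be assembled with scikit-learn's StackingClassifier; the base learners, synthetic data, and hyperparameters below are placeholders rather than the study's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the 10-feature admission dataset (231 patients in the study)
X, y = make_classification(n_samples=231, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000), cv=5)
stack.fit(X_tr, y_tr)
print(classification_report(y_te, stack.predict(X_te), digits=3))
```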

7.
J Pers Med ; 12(9)2022 Sep 14.
Article in English | MEDLINE | ID: mdl-36143293

ABSTRACT

Patients with type 1 diabetes mellitus (T1DM) are at significant risk of developing chronic kidney disease (CKD) during their lifetime. However, there is a high chance of delay in CKD detection because CKD can be asymptomatic, and T1DM patients often bypass traditional CKD tests during their routine checkups. This study aims to develop and validate a prediction model and nomogram of CKD in T1DM patients using readily available routine checkup data for early CKD detection. This research utilized sixteen years of longitudinal data from 1375 T1DM patients in the multi-center Epidemiology of Diabetes Interventions and Complications (EDIC) clinical trials conducted at 28 sites in the USA and Canada and considered 17 routinely available features. Three feature ranking algorithms, extreme gradient boosting (XGB), random forest (RF), and extremely randomized trees classifier (ERT), were applied to create three feature ranking lists, and logistic regression analyses were performed to develop CKD prediction models from these ranked feature lists to identify the best-performing combination of top-ranked features. Finally, the most significant features were selected to develop a multivariate logistic regression-based CKD prediction model for T1DM patients. This model was evaluated using sensitivity, specificity, accuracy, precision, and F1 score on training and test data. A nomogram of the final model was also generated for easy application in clinical practice. Hypertension, duration of diabetes, drinking habit, triglycerides, ACE inhibitors, low-density lipoprotein (LDL) cholesterol, age, and smoking habit were the top eight features ranked by the XGB model and were identified as the most important features for predicting CKD in T1DM patients. These eight features were selected to develop the final prediction model using multivariate logistic regression, which showed 90.04% and 88.59% accuracy in internal and test data validation, respectively. The proposed model showed excellent performance and can be used for CKD identification in T1DM patients during routine checkups.
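The feature-ranking-plus-logistic-regression workflow in the entry above can be sketched as follows; synthetic data stand in for the EDIC features, and scikit-learn's random forest importances substitute for the paper's XGB/RF/ERT rankings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for the 17 routine-checkup features of the EDIC cohort
X, y = make_classification(n_samples=1375, n_features=17, n_informative=8, random_state=1)

# Rank features by impurity-based importance (the study used XGB, RF, and ERT rankings)
importances = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y).feature_importances_
rank = np.argsort(importances)[::-1]

# Evaluate logistic regression on an increasing number of top-ranked features
for k in (4, 8, 12):
    top_k = rank[:k]
    acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, top_k], y, cv=5).mean()
    print(f"top-{k} ranked features: CV accuracy = {acc:.3f}")
```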

8.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684870

ABSTRACT

Diabetes mellitus (DM) is one of the most prevalent diseases in the world and is associated with a high rate of mortality. One of its major complications is diabetic foot, which can lead to plantar ulcers, amputation, and death. Several studies report that thermograms help to detect changes in the plantar temperature of the foot, which may indicate a higher risk of ulceration. However, in diabetic patients, the distribution of plantar temperature does not follow a standard pattern, making it difficult to quantify the changes. The abnormal temperature distribution in infrared (IR) foot thermogram images can be used for the early detection of diabetic foot before ulceration to avoid complications. No machine learning-based technique has been reported in the literature to classify these thermograms based on the severity of diabetic foot complications. This paper uses an available labeled diabetic thermogram dataset and applies the k-means clustering technique to cluster the severity risk of diabetic foot ulcers using an unsupervised approach. Using the plantar foot temperature, the newly clustered dataset is verified by expert medical doctors in terms of the risk of foot ulcer development. The robustness of the newly labeled dataset for classification by machine learning networks is then investigated. Classical machine learning algorithms with feature engineering and a convolutional neural network (CNN) with image-enhancement techniques are investigated to provide the best-performing network for classifying thermograms based on severity. The popular VGG19 CNN model shows an accuracy, precision, sensitivity, F1-score, and specificity of 95.08%, 95.08%, 95.09%, 95.08%, and 97.2%, respectively, in the stratification of severity. A stacking classifier built on trained gradient boosting, XGBoost, and random forest classifiers using features extracted from the thermograms is also proposed. It provides a comparable performance of 94.47%, 94.45%, 94.47%, 94.43%, and 93.25% for accuracy, precision, sensitivity, F1-score, and specificity, respectively.


Subject(s)
Diabetes Mellitus , Diabetic Foot , Algorithms , Diabetic Foot/diagnostic imaging , Humans , Machine Learning , Neural Networks, Computer , Thermography/methods
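The unsupervised severity-labeling step from the entry above can be approximated with k-means on per-thermogram temperature summaries; the synthetic features and the temperature-ordered relabeling below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Stand-in features summarising each plantar thermogram (e.g., mean/percentile temperatures
# per region); the real study derives them from the labeled IR image dataset.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(27.0, 0.8, (60, 6)),   # cooler, more uniform feet
    rng.normal(30.5, 1.2, (60, 6)),   # intermediate
    rng.normal(33.5, 1.5, (60, 6)),   # hotter, more variable feet
])

X = StandardScaler().fit_transform(features)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Order clusters by mean plantar temperature so labels read as increasing severity risk;
# in the paper the resulting grouping was then verified by expert clinicians.
order = np.argsort([features[kmeans.labels_ == c].mean() for c in range(3)])
severity = {cluster: rank for rank, cluster in enumerate(order)}
labels = np.array([severity[c] for c in kmeans.labels_])
print(np.bincount(labels))
```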
9.
Sensors (Basel) ; 22(9)2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35590859

ABSTRACT

Electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals, which are highly non-stationary in nature, suffer greatly from motion artifacts when recorded using wearable sensors. Since successful detection of various neurological and neuromuscular disorders depends greatly on clean EEG and fNIRS signals, it is of utmost importance to remove or reduce motion artifacts from EEG and fNIRS signals using reliable and robust methods. In this regard, this paper proposes two robust methods: (i) wavelet packet decomposition (WPD) and (ii) WPD in combination with canonical correlation analysis (WPD-CCA), for motion artifact correction from single-channel EEG and fNIRS signals. The efficacy of these proposed techniques is tested using a benchmark dataset, and the performance of the proposed methods is measured using two well-established performance metrics: (i) the difference in signal-to-noise ratio (ΔSNR) and (ii) the percentage reduction in motion artifacts (η). The proposed WPD-based single-stage motion artifact correction technique produces the highest average ΔSNR (29.44 dB) when the db2 wavelet packet is used, whereas the greatest average η (53.48%) is obtained using the db1 wavelet packet for all 23 available EEG recordings. The proposed two-stage motion artifact correction technique, i.e., the WPD-CCA method utilizing the db1 wavelet packet, shows the best denoising performance, producing average ΔSNR and η values of 30.76 dB and 59.51%, respectively, for all the EEG recordings. On the other hand, for the 16 available fNIRS recordings, the two-stage motion artifact removal technique, WPD-CCA, produces the best average ΔSNR (16.55 dB, utilizing the db1 wavelet packet) and the largest average η (41.40%, using the fk8 wavelet packet). The highest average ΔSNR and η using the single-stage artifact removal technique (WPD) are found to be 16.11 dB and 26.40%, respectively, for all the fNIRS signals using the fk4 wavelet packet. In both EEG and fNIRS modalities, the percentage reduction in motion artifacts increases by 11.28% and 56.82%, respectively, when the two-stage WPD-CCA technique is employed in comparison with the single-stage WPD method. In addition, the average ΔSNR also increases when WPD-CCA is used instead of single-stage WPD for both EEG and fNIRS signals. The increase in both ΔSNR and η values is a clear indication that two-stage WPD-CCA performs better than single-stage WPD. The results reported using the proposed methods outperform most of the existing state-of-the-art techniques.


Subject(s)
Artifacts , Canonical Correlation Analysis , Algorithms , Electroencephalography/methods , Motion , Signal Processing, Computer-Assisted , Wavelet Analysis
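A generic wavelet-packet clean-up and a ΔSNR computation in the spirit of the entry above are sketched below with PyWavelets; the soft-thresholding rule stands in for the paper's CCA stage, and the ΔSNR formulation shown is one common definition, not necessarily the authors' exact one.

```python
import numpy as np
import pywt

def wpd_denoise(signal: np.ndarray, wavelet: str = "db2", level: int = 4) -> np.ndarray:
    """Generic single-stage WPD clean-up: decompose, soft-threshold each terminal node,
    reconstruct. (Illustrative only; the paper couples WPD with CCA, not thresholding.)"""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    for node in wp.get_level(level, order="natural"):
        coeffs = node.data
        thr = np.median(np.abs(coeffs)) / 0.6745 * np.sqrt(2 * np.log(len(coeffs)))
        node.data = pywt.threshold(coeffs, thr, mode="soft")
    return wp.reconstruct(update=True)[: len(signal)]

def delta_snr(clean, corrupted, cleaned):
    """Improvement in SNR (dB) relative to the artifact-corrupted recording."""
    before = 10 * np.log10(np.var(clean) / np.var(corrupted - clean))
    after = 10 * np.log10(np.var(clean) / np.var(cleaned - clean))
    return after - before

# Synthetic demonstration: slow drift added to a clean segment as a mock motion artifact
rng = np.random.default_rng(9)
clean = rng.standard_normal(1024)
corrupted = clean + np.cumsum(rng.standard_normal(1024)) / 30
cleaned = wpd_denoise(corrupted)
print("delta-SNR (dB):", round(delta_snr(clean, corrupted, cleaned), 2))
```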
10.
Sensors (Basel) ; 22(9)2022 May 05.
Article in English | MEDLINE | ID: mdl-35591196

ABSTRACT

Diabetic neuropathy (DN) is one of the prevalent forms of neuropathy and involves biomechanical alterations in the human gait. Diabetic foot ulceration (DFU) is one of the pervasive complications that arise due to DN. For the last 50 years, researchers have been trying to observe the biomechanical changes due to DN and DFU by studying muscle electromyography (EMG) and ground reaction forces (GRF); however, the literature is contradictory. In such a scenario, we propose using machine learning techniques to identify DN and DFU patients from EMG and GRF data. We used a dataset from the literature involving three patient groups: control (n = 6), DN (n = 6), and previous history of DFU (n = 9), with EMG from three lower limb muscles (tibialis anterior (TA), vastus lateralis (VL), gastrocnemius lateralis (GL)) and three GRF components (GRFx, GRFy, and GRFz). Raw EMG and GRF signals were preprocessed, and different feature extraction techniques were applied to extract the best features from the signals. The extracted feature list was ranked using four different feature ranking techniques, and highly correlated features were removed. In this study, we considered different combinations of muscles and GRF components to find the best-performing feature list for the identification of DN and DFU. We trained eight different conventional ML models: discriminant analysis classifier (DAC), ensemble classification model (ECM), kernel classification model (KCM), k-nearest neighbor model (KNN), linear classification model (LCM), naive Bayes classifier (NBC), support vector machine classifier (SVM), and binary decision classification tree (BDC), to find the best-performing algorithm. Our study found that the KNN algorithm performed well in identifying DN and DFU, so it was optimized and trained for different combinations of muscle and GRF component features, and the performance metrics were evaluated. We found the best accuracy of 96.18% for EMG analysis using the top 22 features from the chi-square feature ranking technique for the combined GL and VL muscle features. In the GRF analysis, the model showed 98.68% accuracy using the top 7 features from feature selection using neighborhood component analysis for the feature combination from the GRFx-GRFz signals. In conclusion, our study shows a potential solution for ML application in DN and DFU patient identification using EMG and GRF parameters. With careful signal preprocessing and strategic feature extraction from the biomechanical parameters, an optimized ML model can provide a potential solution for the diagnosis and stratification of DN and DFU patients from EMG and GRF signals.


Subject(s)
Diabetes Mellitus , Diabetic Foot , Diabetic Neuropathies , Algorithms , Bayes Theorem , Diabetic Foot/diagnosis , Diabetic Neuropathies/diagnosis , Electromyography/methods , Gait/physiology , Humans , Machine Learning , Support Vector Machine
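The chi-square ranking followed by a tuned KNN classifier reported in the entry above maps naturally onto a scikit-learn pipeline; the synthetic feature matrix and grid values below are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Stand-in for EMG/GRF features extracted per gait trial
X, y = make_classification(n_samples=300, n_features=40, n_informative=12, random_state=2)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                 # chi2 ranking requires non-negative inputs
    ("rank", SelectKBest(chi2, k=22)),         # keep the top-ranked features
    ("knn", KNeighborsClassifier()),
])
grid = GridSearchCV(pipe, {"knn__n_neighbors": [3, 5, 7, 9],
                           "knn__weights": ["uniform", "distance"]}, cv=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=2)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```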
11.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270938

ABSTRACT

Diabetes mellitus (DM) can lead to plantar ulcers, amputation and death. Plantar foot thermogram images acquired using an infrared camera have been shown to detect changes in temperature distribution associated with a higher risk of foot ulceration. Machine learning approaches applied to such infrared images may have utility in the early diagnosis of diabetic foot complications. In this work, a publicly available dataset was categorized into different classes, which were corroborated by domain experts, based on a temperature distribution parameter, the thermal change index (TCI). We then explored different machine-learning approaches for classifying thermograms of the TCI-labeled dataset. Classical machine learning algorithms with feature engineering and the convolutional neural network (CNN) with image enhancement techniques were extensively investigated to identify the best performing network for classifying thermograms. The multilayer perceptron (MLP) classifier along with the features extracted from thermogram images showed an accuracy of 90.1% in multi-class classification, which outperformed the literature-reported performance metrics on this dataset.


Subject(s)
Diabetes Mellitus , Diabetic Foot , Algorithms , Diabetic Foot/diagnostic imaging , Humans , Machine Learning , Neural Networks, Computer , Thermography
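A minimal version of the feature-based MLP classification in the entry above might look like the following; the synthetic feature matrix, class count, and network size are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for hand-crafted features extracted from each TCI-labeled thermogram
X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=3)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=3))
print("multi-class CV accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
```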
12.
IEEE Trans Neural Netw Learn Syst ; 33(11): 6068-6088, 2022 11.
Article in English | MEDLINE | ID: mdl-34086580

ABSTRACT

The staggering innovations and emergence of numerous deep learning (DL) applications have forced researchers to reconsider hardware architecture to accommodate fast and efficient application-specific computations. Applications, such as object detection, image recognition, speech translation, as well as music synthesis and image generation, can be performed with high accuracy at the expense of substantial computational resources using DL. Furthermore, the desire to adopt Industry 4.0 and smart technologies within the Internet of Things infrastructure has initiated several studies to enable on-chip DL capabilities for resource-constrained devices. Specialized DL processors reduce dependence on cloud servers, improve privacy, lessen latency, and mitigate bandwidth congestion. As we reach the limits of shrinking transistors, researchers are exploring various application-specific hardware architectures to meet the performance and efficiency requirements for DL tasks. Over the past few years, several software optimizations and hardware innovations have been proposed to efficiently perform these computations. In this article, we review several DL accelerators, as well as technologies with emerging devices, to highlight their architectural features in application-specific integrated circuit (IC) and field-programmable gate array (FPGA) platforms. Finally, we discuss design considerations for DL hardware in portable applications, along with future trends and potential research directions for further innovation in DL accelerator architectures. By compiling this review, we expect to help aspiring researchers widen their knowledge in custom hardware architectures for DL.


Subject(s)
Deep Learning , Neural Networks, Computer , Computers , Software
13.
Sensors (Basel) ; 21(24)2021 Dec 20.
Article in English | MEDLINE | ID: mdl-34960577

ABSTRACT

Epileptic seizures are temporary episodes of convulsions; approximately 70 percent of the diagnosed population can successfully manage their condition with proper medication and lead a normal life. Over 50 million people worldwide are affected by some form of epileptic seizure, and accurate detection can help millions properly manage this condition. Increasing research in machine learning has made a great impact on biomedical signal processing, especially on electroencephalogram (EEG) data analysis. The availability of various feature extraction techniques and classification methods makes it difficult to choose the most suitable combination for resource-efficient and correct detection. This paper reviews the relevant studies of wavelet and empirical mode decomposition-based feature extraction techniques used for seizure detection in epileptic EEG data. The articles were chosen for review based on their Journal Citation Report, feature selection methods, and classifiers used. High-dimensional EEG data fall under the category of '3N' biosignals: nonstationary, nonlinear, and noisy. Hence, two popular classifiers, random forest and support vector machine, were selected for review, as they are capable of handling high-dimensional data and have a low risk of over-fitting. The main metrics used are sensitivity, specificity, and accuracy; some papers were therefore excluded from the review due to insufficient metrics. To evaluate the overall performance of the reviewed papers, a simple mean of all metrics was used. This review indicates that a system using a Stockwell transform wavelet variant as the feature extractor and an SVM classifier leads to potentially better results.


Subject(s)
Epilepsy , Seizures , Electroencephalography , Epilepsy/diagnosis , Humans , Machine Learning , Seizures/diagnosis , Signal Processing, Computer-Assisted
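The entry above reviews wavelet-based feature extraction feeding RF/SVM classifiers; a bare-bones DWT-feature-plus-SVM pipeline is sketched below with PyWavelets and scikit-learn, using synthetic epochs and arbitrary sub-band statistics.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(segment: np.ndarray, wavelet: str = "db4", level: int = 5) -> np.ndarray:
    """Energy, standard deviation, and mean absolute value of each DWT sub-band."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.sum(c ** 2), np.std(c), np.mean(np.abs(c))]
    return np.array(feats)

# Synthetic stand-in for single-channel EEG epochs (rows) and seizure labels
rng = np.random.default_rng(4)
epochs = rng.standard_normal((200, 1024))
labels = rng.integers(0, 2, 200)

X = np.vstack([dwt_features(e) for e in epochs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy on synthetic data:", cross_val_score(clf, X, labels, cv=5).mean())
```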
14.
Diagnostics (Basel) ; 11(12)2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34943504

ABSTRACT

Chronic kidney disease (CKD) is one of the severe complications of type 1 diabetes mellitus (T1DM). However, the detection and diagnosis of CKD are often delayed because of its asymptomatic nature. In addition, patients often tend to bypass the traditional urine protein (urinary albumin)-based CKD detection test. Even though disease detection using machine learning (ML) is a well-established field of study, it is rarely used to diagnose CKD in T1DM patients. This research aimed to employ and evaluate several ML algorithms to develop models for quickly predicting CKD in patients with T1DM using easily available routine checkup data. This study analyzed 16 years of data from 1375 T1DM patients, obtained from the Epidemiology of Diabetes Interventions and Complications (EDIC) clinical trials directed by the National Institute of Diabetes and Digestive and Kidney Diseases, USA. Three data imputation techniques (RF, KNN, and MICE) and the SMOTETomek resampling technique were used to preprocess the primary dataset. Ten ML algorithms, including logistic regression (LR), k-nearest neighbor (KNN), Gaussian naïve Bayes (GNB), support vector machine (SVM), stochastic gradient descent (SGD), decision tree (DT), gradient boosting (GB), random forest (RF), extreme gradient boosting (XGB), and light gradient-boosted machine (LightGBM), were applied to develop prediction models. Each model included 19 demographic, medical history, behavioral, and biochemical features, and every feature's effect was ranked using three feature ranking techniques (XGB, RF, and Extra Trees). Lastly, each model's ROC, sensitivity (recall), specificity, accuracy, precision, and F1 score were estimated to find the best-performing model. The RF classifier model exhibited the best performance with 0.96 (±0.01) accuracy, 0.98 (±0.01) sensitivity, and 0.93 (±0.02) specificity. LightGBM performed second best and was quite close to RF with 0.95 (±0.06) accuracy. In addition to these two models, the KNN, SVM, DT, GB, and XGB models also achieved more than 90% accuracy.
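The imputation, SMOTETomek resampling, and classifier training described in the entry above can be outlined as follows; the synthetic data, missingness rate, and random forest settings are illustrative, and only the training split is resampled to avoid leakage.

```python
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Stand-in for the 19-feature checkup data with missing values and class imbalance
X, y = make_classification(n_samples=1375, n_features=19, weights=[0.85, 0.15], random_state=5)
rng = np.random.default_rng(5)
X[rng.random(X.shape) < 0.05] = np.nan                      # inject 5% missingness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=5)

imputer = KNNImputer(n_neighbors=5)
X_tr_imp, X_te_imp = imputer.fit_transform(X_tr), imputer.transform(X_te)

# Resample only the training split, then fit the classifier
X_res, y_res = SMOTETomek(random_state=5).fit_resample(X_tr_imp, y_tr)
clf = RandomForestClassifier(n_estimators=300, random_state=5).fit(X_res, y_res)

y_pred = clf.predict(X_te_imp)
print("accuracy:", accuracy_score(y_te, y_pred), "sensitivity:", recall_score(y_te, y_pred))
```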

15.
Diagnostics (Basel) ; 11(5)2021 May 07.
Article in English | MEDLINE | ID: mdl-34067203

ABSTRACT

A force-invariant feature extraction method derives identical information for all force levels. However, the physiology of muscles makes it hard to extract this unique information. In this context, we propose an improved force-invariant feature extraction method based on a nonlinear transformation of the power spectral moments, changes in amplitude, and the signal amplitude, along with spatial correlation coefficients between channels. The nonlinear transformation balances the forces and increases the margin among the gestures. Additionally, the correlation coefficient between channels evaluates the amount of spatial correlation, although it does not evaluate the strength of the electromyogram signal. To evaluate the robustness of the proposed method, we use an electromyogram dataset containing recordings from nine transradial amputees. In this study, the performance is evaluated using three classifiers with six existing feature extraction methods. The proposed feature extraction method yields higher pattern recognition performance, with significant improvements in accuracy, sensitivity, specificity, precision, and F1 score. In addition, the proposed method requires comparatively less computational time and memory, which makes it more robust than other well-known feature extraction methods.
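One way to compute spectral-moment and inter-channel-correlation features of the kind described in the entry above is shown below; the log compression used as the nonlinear transformation and the window/channel sizes are assumptions, not the paper's exact mapping.

```python
import numpy as np
from scipy.signal import welch

def force_invariant_features(emg: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Per-channel log-compressed spectral moments plus pairwise channel correlations.

    `emg` has shape (n_channels, n_samples). The log compression is only one possible
    nonlinear transformation; the paper's exact mapping may differ.
    """
    feats = []
    for ch in emg:
        f, pxx = welch(ch, fs=fs, nperseg=256)
        m0 = np.sum(pxx)                       # total power
        m2 = np.sum((f ** 2) * pxx)            # second spectral moment
        m4 = np.sum((f ** 4) * pxx)            # fourth spectral moment
        feats += list(np.log1p([m0, m2, m4]))
    # Spatial correlation coefficients between every channel pair
    corr = np.corrcoef(emg)
    iu = np.triu_indices_from(corr, k=1)
    return np.concatenate([feats, corr[iu]])

emg_window = np.random.default_rng(6).standard_normal((6, 2000))   # 6 channels, 2 s window
print(force_invariant_features(emg_window).shape)
```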

16.
Sensors (Basel) ; 21(5)2021 Mar 04.
Article in English | MEDLINE | ID: mdl-33806350

ABSTRACT

The front-end electronics (FEE) of the Compact Muon Solenoid (CMS) require very low power consumption and high readout bandwidth to match the low-power requirement of its Short Strip application-specific integrated circuit (SSA) and to handle a large number of pileup events in the High-Luminosity Large Hadron Collider (LHC). A low-noise, wide-bandwidth, and ultra-low-power FEE for the pixel-strip sensor of the CMS has been designed and simulated in a 0.35 µm Complementary Metal Oxide Semiconductor (CMOS) process. The design comprises a Charge Sensitive Amplifier (CSA) and a fast Capacitor-Resistor-Resistor-Capacitor (CR-RC) pulse shaper (PS). A compact structure for the CSA circuit has been analyzed and designed for high-throughput purposes. Analytical calculations were performed to achieve at least 998 MHz gain bandwidth and thereby overcome the pileup issue in the High-Luminosity LHC. SPICE simulations show that the circuit can achieve 88 dB DC gain while exhibiting up to 1 GHz gain-bandwidth product (GBP). The stability of the design was guaranteed with an 82-degree phase margin, while a 214 ns optimal shaping time was extracted for low-power operation. The robustness of the design against radiation was evaluated, and the amplitude resolution of the proposed front-end was controlled at 1.87% FWHM (full width at half maximum). The circuit has been designed to handle up to 280 fC input charge pulses with 2 pF maximum sensor capacitance. In good agreement with the analytical calculations, the simulation outcomes were validated by post-layout simulation results, which provided a baseline gain of 546.56 mV/MeV and 920.66 mV/MeV for the CSA and the shaping module, respectively, while the ENC (Equivalent Noise Charge) of the device was controlled at 37.6 e- at 0 pF with a noise slope of 16.32 e-/pF. Moreover, the proposed circuit dissipates only 8.72 µW from a 3.3 V supply, and the compact layout occupies just 0.0205 mm2 of die area.
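For intuition about the CR-RC shaping stage in the entry above, the sketch below evaluates the textbook equal-time-constant CR-RC impulse response h(t) = (t/τ)·exp(−t/τ), whose peak occurs at t = τ; absolute gain scaling and the CSA stage are omitted, and the numbers are taken from the abstract only for illustration.

```python
import numpy as np

def cr_rc_response(tau_ns: float = 214.0, q_fc: float = 280.0,
                   t_stop_ns: float = 2000.0, n: int = 4000):
    """Ideal CR-RC shaper output for a delta-like input charge.

    For equal differentiation/integration time constants tau, the normalized impulse
    response is h(t) = (t/tau) * exp(-t/tau), peaking at t = tau.
    """
    t = np.linspace(0.0, t_stop_ns, n)
    h = (t / tau_ns) * np.exp(-t / tau_ns)
    return t, q_fc * h          # output proportional to the injected charge

t, v = cr_rc_response()
print("peaking time ~", round(t[np.argmax(v)], 1), "ns")   # close to the 214 ns shaping time
```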

17.
Sci Rep ; 10(1): 21770, 2020 12 10.
Article in English | MEDLINE | ID: mdl-33303857

ABSTRACT

Although various clinical trials have used different diagnostic methods to identify diabetic sensorimotor polyneuropathy (DSPN), no reliable studies are available that establish the associations among diagnostic parameters from two different methods. Statistically significant diagnostic parameters from various methods can help determine whether two different methods can be incorporated together for diagnosing DSPN. In this study, a systematic review, meta-analysis, and trial sequential analysis (TSA) were performed to determine the associations among the different parameters from the screening methods most commonly used in clinical research for DSPN, namely, nerve conduction study (NCS), corneal confocal microscopy (CCM), and electromyography (EMG), for different experimental groups. Electronic databases (e.g., Web of Science, PubMed, and Google Scholar) were searched systematically for articles reporting different screening tools for diabetic peripheral neuropathy. A total of 22 studies involving 2394 participants (801 patients with DSPN, 702 controls, and 891 non-DSPN patients) were reviewed systematically. Meta-analysis was performed to determine the statistical significance of differences among four NCS parameters, i.e., peroneal motor nerve conduction velocity, peroneal motor nerve amplitude, sural sensory nerve conduction velocity, and sural sensory nerve amplitude (all p < 0.001); among three CCM parameters, including nerve fiber density, nerve branch density, and nerve fiber length (all p < 0.001); and among four EMG parameters, namely, the time to peak occurrence (from 0 to 100% of the stance phase) of four lower limb muscles, including the vastus lateralis (p < 0.001), tibialis anterior (p = 0.63), lateral gastrocnemius (p = 0.01), and gastrocnemius medialis (p = 0.004), and the vibration perception threshold (p < 0.001). Moreover, TSA was conducted to estimate the robustness of the meta-analysis. Most parameter comparisons were statistically significant, whereas some were not. This meta-analysis and TSA concluded that studies including NCS and CCM parameters were conclusive and robust. However, the included studies on EMG were inconclusive, and additional clinical trials are required.


Subject(s)
Diabetic Neuropathies/diagnosis , Electrophysiology/methods , Neural Conduction , Adult , Aged , Diabetic Neuropathies/physiopathology , Electromyography , Female , Humans , Male , Middle Aged , Peroneal Nerve/physiopathology , Sural Nerve/physiopathology
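The random-effects pooling behind a meta-analysis such as the one in the entry above can be illustrated with the DerSimonian-Laird estimator; the effect sizes and variances below are made-up numbers, not values from the included studies.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes.

    `effects` are study-level estimates (e.g., mean differences in nerve conduction
    velocity) and `variances` their squared standard errors.
    """
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = 1.0 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return y_re, y_re - 1.96 * se_re, y_re + 1.96 * se_re, tau2

# Toy example: pooled effect with 95% CI and between-study variance
print(random_effects_meta([-4.2, -3.1, -5.0, -2.7], [0.8, 1.1, 0.6, 1.4]))
```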
18.
Sensors (Basel) ; 20(19)2020 Oct 02.
Article in English | MEDLINE | ID: mdl-33023097

ABSTRACT

Growing plants in the Gulf region can be challenging because it is mostly desert and the climate is dry. A few plant species have the capability to grow in such a climate; however, those plants are not suitable as a food source. The aim of this work is to design and construct an indoor automatic vertical hydroponic system that does not depend on the outside climate. The designed system is capable of growing common types of crops that can be used as a food source inside homes without the need for a large space. The design of the system was made after studying different types of vertical hydroponic systems in terms of price, power consumption, and suitability to be built as an indoor automated system. A microcontroller works as the brain of the system, communicating with different types of sensors to control all the system parameters and to minimize human intervention. An open internet of things (IoT) platform was used to store and display the system parameters and to provide a graphical interface for remote access. The designed system is capable of maintaining healthy growing parameters for the plants with minimal input from the user. The functionality of the overall system was confirmed by evaluating the response from individual system components and monitoring them in the IoT platform. During peak summer, the system consumed 120.59 and 230.59 kWh without and with air conditioning control, respectively, which is equivalent to a running cost of 13.26 and 25.36 Qatari Riyal (QAR), respectively. The system circulated around 104,000 gallons of nutrient solution monthly; however, only 8-10 L of water was consumed by the system. The system offers real-time notifications to alert the user when conditions are not favorable, so several parameters can be monitored without laboratory instruments and the entire system can be controlled remotely. Moreover, the system provides a wide range of information that could be essential for plant researchers and offers a greater understanding of how the key parameters of a hydroponic system correlate with plant growth. The proposed platform can be used both for quantitatively optimizing the setup of indoor farming and for automating some of the most labor-intensive maintenance activities. Such a monitoring system could also potentially be used for high-level decision making once enough data have been collected. This work presents significant opportunities for people who live in the Gulf region to produce food as per their requirements.
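Publishing telemetry from a hydroponic controller to an open IoT platform, as in the entry above, commonly uses MQTT; the broker address, topic, payload fields, and sensor values below are placeholders, and with paho-mqtt >= 2.0 the client constructor additionally takes a callback API version argument.

```python
import json
import time
import paho.mqtt.client as mqtt

# Broker host and topic are placeholders; the paper's IoT platform is not named here.
BROKER, TOPIC = "broker.example.org", "hydroponics/rack1/telemetry"

def read_sensors() -> dict:
    """Stand-in for the microcontroller's sensor reads (pH, EC, temperatures, humidity)."""
    return {"ph": 6.1, "ec_ms_cm": 1.8, "water_temp_c": 23.4,
            "air_temp_c": 27.9, "humidity_pct": 61.0, "ts": int(time.time())}

client = mqtt.Client()                           # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

for _ in range(3):                               # publish a few samples, then stop
    payload = json.dumps(read_sensors())
    client.publish(TOPIC, payload, qos=1)        # QoS 1 so readings survive brief dropouts
    time.sleep(5)

client.loop_stop()
client.disconnect()
```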

19.
Sci Rep ; 10(1): 14891, 2020 09 10.
Article in English | MEDLINE | ID: mdl-32913303

ABSTRACT

A capacitive electromyography (cEMG) biomedical sensor measures the EMG signal from the human body through capacitive coupling. It has the flexibility to be insulated by different types of materials. Each type of insulator yields a unique skin-electrode capacitance, which determines the performance of a cEMG biomedical sensor. Most of the insulators explored so far are solid and non-breathable, which causes perspiration during long-term EMG measurement. This research aims to explore porous medical bandages, such as micropore, gauze, and crepe bandage, as insulators for a cEMG biomedical sensor. These materials are breathable and hypoallergenic, and their properties and characteristics are reviewed. A 50 Hz digital notch filter was developed and implemented in the EMG measurement system design to further enhance the performance of these porous-medical-bandage-insulated cEMG biomedical sensors. A series of experimental verifications, including noise floor characterization, EMG signal measurement, and performance correlation, was performed on all these sensors. The micropore-insulated cEMG biomedical sensor yielded the lowest noise floor amplitude of 2.44 mV and achieved the highest correlation coefficient in comparison with EMG signals captured by a conventional wet contact electrode.


Subject(s)
Bandages , Biosensing Techniques , Electric Capacitance , Electromyography/instrumentation , Humans , Porosity
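The 50 Hz digital notch filter mentioned in the entry above can be realized with SciPy's iirnotch design; the sampling rate, Q factor, and synthetic signal below are assumptions.

```python
import numpy as np
from scipy.signal import filtfilt, iirnotch

FS = 1000.0                                      # sampling rate (Hz), assumed here
b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)          # narrow notch centred on the mains frequency

# Synthetic EMG-like signal with 50 Hz mains interference
t = np.arange(0, 2.0, 1.0 / FS)
emg = np.random.default_rng(7).standard_normal(t.size)
contaminated = emg + 0.8 * np.sin(2 * np.pi * 50.0 * t)

filtered = filtfilt(b, a, contaminated)          # zero-phase filtering preserves waveform shape
print("50 Hz magnitude before/after:",
      np.abs(np.fft.rfft(contaminated))[100], np.abs(np.fft.rfft(filtered))[100])
```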
20.
Sensors (Basel) ; 20(11)2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32492902

ABSTRACT

Hypertension is a potentially dangerous health condition that can be detected directly from blood pressure (BP) and often leads to other health complications. Continuous monitoring of BP is very important; however, cuff-based BP measurements are discrete and uncomfortable for the user. To address this need, a cuff-less, continuous, and noninvasive BP measurement system is proposed using the photoplethysmogram (PPG) signal and demographic features with machine learning (ML) algorithms. PPG signals were acquired from 219 subjects and underwent preprocessing and feature extraction steps. Time-, frequency-, and time-frequency-domain features were extracted from the PPG and its derivative signals. Feature selection techniques were used to reduce computational complexity and decrease the chance of over-fitting the ML algorithms. The features were then used to train and evaluate ML algorithms. The best regression models were selected for systolic BP (SBP) and diastolic BP (DBP) estimation individually. Gaussian process regression (GPR) along with the ReliefF feature selection algorithm outperformed the other algorithms, estimating SBP and DBP with root mean square errors (RMSE) of 6.74 and 3.59, respectively. This ML model can be implemented in hardware systems to continuously monitor BP and help avoid critical health conditions due to sudden changes.


Subject(s)
Blood Pressure Determination , Blood Pressure , Machine Learning , Photoplethysmography , Adult , Aged , Algorithms , Female , Humans , Male , Middle Aged
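A GPR-based SBP estimator in the spirit of the entry above is sketched below; synthetic features replace the PPG-derived ones, and univariate f_regression selection is substituted for ReliefF, which is not part of scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for PPG-derived + demographic features and reference SBP values
X, y_sbp = make_regression(n_samples=219, n_features=60, n_informative=15, noise=5.0,
                           random_state=8)

kernel = ConstantKernel() * RBF() + WhiteKernel()
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_regression, k=20),
                      GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                               random_state=8))

X_tr, X_te, y_tr, y_te = train_test_split(X, y_sbp, test_size=0.2, random_state=8)
model.fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"SBP RMSE on held-out synthetic data: {rmse:.2f}")
```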