2.
Math Biosci Eng ; 19(5): 5031-5054, 2022 03 16.
Article in English | MEDLINE | ID: mdl-35430852

ABSTRACT

OBJECTIVE: Autism spectrum disorder (ASD) is usually characterised by altered social skills, repetitive behaviours, and difficulties in verbal/nonverbal communication. It has been reported that electroencephalograms (EEGs) in ASD are characterised by atypical complexity. The most commonly applied method in studies of ASD EEG complexity is multiscale entropy (MSE), where the sample entropy is evaluated across several scales. However, the accuracy of MSE-based classifications between ASD and neurotypical EEG activities is poor owing to several shortcomings in scale extraction and length, the overlap between amplitude and frequency information, and sensitivity to frequency. The present study proposes a novel, nonlinear, non-stationary, adaptive, data-driven, and accurate method for the classification of ASD and neurotypical groups based on EEG complexity and entropy without the shortcomings of MSE. APPROACH: The proposed method is as follows: (a) each ASD and neurotypical EEG (122 subjects × 64 channels) is decomposed using empirical mode decomposition (EMD) to obtain the intrinsic components (intrinsic mode functions). (b) The extracted components are normalised through the direct quadrature procedure. (c) The Hilbert transforms of the components are computed. (d) The analytic counterparts of components (and normalised components) are found. (e) The instantaneous frequency function of each analytic normalised component is calculated. (f) The instantaneous amplitude function of each analytic component is calculated. (g) The Shannon entropy values of the instantaneous frequency and amplitude vectors are computed. (h) The entropy values are classified using a neural network (NN). (i) The achieved accuracy is compared to that obtained with MSE-based classification. (j) The consistency of the results of entropy 3D mapping with clinical data is assessed. MAIN RESULTS: The results demonstrate that the proposed method outperforms MSE (accuracy: 66.4%), with an accuracy of 93.5%. Moreover, the entropy 3D mapping results are more consistent with the available clinical data regarding brain topography in ASD. SIGNIFICANCE: This study presents a more robust alternative to MSE, which can be used for accurate classification of ASD/neurotypical as well as for the examination of EEG entropy across brain zones in ASD.
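A minimal sketch of steps (c) through (g) of this pipeline, assuming the intrinsic mode functions have already been extracted by an EMD routine. For brevity, instantaneous frequency is taken here from the Hilbert phase derivative rather than the direct-quadrature normalisation described above, and all names are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.signal import hilbert

def shannon_entropy(x, bins=64):
    """Shannon entropy (bits) of a 1-D vector, estimated from a histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def imf_entropy_features(imfs, fs):
    """Per-IMF Shannon entropies of instantaneous amplitude and frequency.

    imfs : array of shape (n_imfs, n_samples), output of an EMD routine.
    """
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)                       # analytic counterpart of the IMF
        ia = np.abs(analytic)                         # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_f = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (Hz)
        feats += [shannon_entropy(ia), shannon_entropy(inst_f)]
    return np.array(feats)                            # feature vector for the NN classifier
```

One such feature vector per channel (or per subject, after averaging) would then be passed to the neural-network classifier of step (h).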


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Autism Spectrum Disorder/diagnosis , Autistic Disorder/diagnosis , Brain , Electroencephalography , Entropy , Humans
3.
Comput Math Methods Med ; 2022: 5975228, 2022.
Article in English | MEDLINE | ID: mdl-35222684

ABSTRACT

The mechanical heart valve is a crucial solution for many patients. However, unlike human tissue valves, it does not interact with blood in the same natural way, so people with mechanical valves are placed under anticoagulant therapy. A good measure of the state of the blood and of how long it takes blood to clot is the prothrombin time (PT); it also indicates how well the anticoagulant therapy is working and whether the patient is responding to the drug as needed. For a more standardized measurement of coagulation time, the international normalized ratio (INR) has been established. Clinical testing of INR and PT is relatively easy, but it requires the patient to visit the clinic for evaluation. Many techniques are therefore being developed to provide PT and INR self-testing devices. Unfortunately, those solutions are either inaccurate, complex, or expensive. The present work addresses the design of an anticoagulation self-monitoring device that is easy to use, accurate, and relatively inexpensive. To this end, a two-channel polymethyl methacrylate-based microfluidic point-of-care (POC) smart device has been developed. The Arduino-based lab-on-a-chip device applies optical measurement to a small amount of blood. The achieved accuracy is 96.7%.
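For reference, the INR mentioned above is a standardised transformation of the measured PT; the conventional formula below reflects general laboratory practice, not the device's firmware.

```python
def inr(pt_patient_s: float, pt_normal_mean_s: float, isi: float = 1.0) -> float:
    """International normalized ratio: (patient PT / mean normal PT) ** ISI,
    where ISI is the international sensitivity index of the thromboplastin reagent."""
    return (pt_patient_s / pt_normal_mean_s) ** isi

# Example: a measured PT of 21 s against a 12 s normal mean with ISI = 1.0 gives INR = 1.75.
print(round(inr(21.0, 12.0, 1.0), 2))
```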


Subject(s)
International Normalized Ratio/instrumentation , Lab-On-A-Chip Devices , Point-of-Care Testing , Prothrombin Time/instrumentation , Anticoagulants/therapeutic use , Computational Biology , Equipment Design , Heart Valve Prosthesis , Humans , International Normalized Ratio/methods , International Normalized Ratio/statistics & numerical data , Lab-On-A-Chip Devices/statistics & numerical data , Optical Devices/statistics & numerical data , Point-of-Care Testing/statistics & numerical data , Polymethyl Methacrylate , Prothrombin Time/methods , Prothrombin Time/statistics & numerical data , Self-Testing
4.
Comput Biol Med ; 134: 104548, 2021 07.
Article in English | MEDLINE | ID: mdl-34119923

ABSTRACT

BACKGROUND: Autism spectrum disorder is a common group of conditions affecting about one in 54 children. Electroencephalogram (EEG) signals from children with autism have a common morphological pattern which makes them distinguishable from normal EEG. We have used this type of signal to design and implement an automated autism detection model. MATERIALS AND METHOD: We propose a hybrid lightweight deep feature extractor to obtain high classification performance. The system was designed and tested with a large EEG dataset that contained signals from autism patients and normal controls. (i) A new signal-to-image conversion model is presented: features are extracted from the EEG signal using the one-dimensional local binary pattern (1D_LBP), and the generated features are used as input to the short-time Fourier transform (STFT) to generate spectrogram images. (ii) The deep features of the generated spectrogram images are extracted using a combination of pre-trained MobileNetV2, ShuffleNet, and SqueezeNet models. This method is named the hybrid deep lightweight feature generator. (iii) A two-layered ReliefF algorithm is used for feature ranking and feature selection. (iv) The most discriminative features are fed to various shallow classifiers, developed using a 10-fold cross-validation strategy, for automated autism detection. RESULTS: A support vector machine (SVM) classifier reached 96.44% accuracy based on features from the proposed model. CONCLUSIONS: The results strongly indicate that the proposed hybrid deep lightweight feature extractor is suitable for autism detection using EEG signals. The model is ready to serve as part of an adjunct tool that aids neurologists during autism diagnosis in medical centers.
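A minimal sketch of the signal-to-image step (i): a simple one-dimensional LBP followed by an STFT spectrogram. The neighbourhood size and STFT parameters are assumptions for illustration; the exact 1D_LBP variant used in the paper is not specified in the abstract.

```python
import numpy as np
from scipy.signal import stft

def lbp_1d(x, p=4):
    """1-D local binary pattern: compare each sample with its p left and p right
    neighbours and encode the comparisons as a (2*p)-bit integer."""
    codes = []
    for i in range(p, len(x) - p):
        neighbours = np.concatenate([x[i - p:i], x[i + 1:i + p + 1]])
        bits = (neighbours >= x[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    return np.array(codes, dtype=float)

eeg_epoch = np.random.randn(2048)                  # placeholder EEG epoch
f, t, z = stft(lbp_1d(eeg_epoch), fs=256.0, nperseg=128)
spectrogram = np.abs(z)                            # 2-D image passed to the pretrained CNNs
```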


Subject(s)
Autism Spectrum Disorder , Algorithms , Autism Spectrum Disorder/diagnosis , Child , Electroencephalography , Humans , Support Vector Machine
5.
Heliyon ; 6(4): e03669, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32274431

ABSTRACT

The relationship between the inputs and outputs of nonlinear systems can be modeled using machine and deep learning approaches, among which artificial neural networks (ANNs) are a promising option. However, noisy signals affect ANN modeling negatively; hence, it is important to investigate these signals prior to modeling. Herein, two customized and simple approaches, visual inspection and absolute correlation, are proposed to examine the relationship between the inputs and outputs of a nonlinear system. The system under consideration uses biosignals from surface electromyography as inputs and human finger joint angles as outputs, acquired from eight intact participants performing movement and grasping tasks in dynamic conditions. Furthermore, the results of these approaches are tested against the standard mutual information measure. The system dimensionality is thereby reduced and ANN learning (convergence) is accelerated, as the most informative inputs are selected for the next phase. Subsequently, four ANN types, i.e., feedforward, cascade-forward, radial basis function, and generalized regression ANNs, are used to perform the modeling. Finally, the performance of the ANNs is compared with the findings of the signal analysis. The results indicate a high level of consistency among the aforementioned signal pre-analysis techniques on the one hand, and agreement between these techniques and the ANN performances on the other. As an example, for a certain movement set, the ANN models estimated joint rotations with accuracies in the following descending order: carpometacarpal, metacarpophalangeal, proximal interphalangeal, and distal interphalangeal. The same ordering was indicated by the signal pre-analysis step. Therefore, this step is crucial for input-output variable selection prior to machine-/deep-learning-based modeling approaches.
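A minimal sketch of the absolute-correlation screening described above: each sEMG channel is ranked by its absolute Pearson correlation with each joint angle, and only the top-ranked channels are kept as ANN inputs. All names and the `keep` parameter are illustrative assumptions.

```python
import numpy as np

def rank_inputs_by_abs_correlation(emg, angles, keep=4):
    """emg: (n_samples, n_channels); angles: (n_samples, n_joints).
    Returns, for each joint, the indices of the `keep` most correlated channels."""
    selected = {}
    for j in range(angles.shape[1]):
        scores = [abs(np.corrcoef(emg[:, c], angles[:, j])[0, 1])
                  for c in range(emg.shape[1])]
        selected[j] = np.argsort(scores)[::-1][:keep]
    return selected
```

The same ranking could be cross-checked with sklearn's mutual_info_regression, mirroring the mutual-information test mentioned in the abstract.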

6.
Article in English | MEDLINE | ID: mdl-32033231

ABSTRACT

Autistic individuals often have difficulties expressing or controlling emotions and have poor eye contact, among other symptoms. The prevalence of autism is increasing globally, posing a need to address this concern. Current diagnostic systems have particular limitations; hence, some individuals go undiagnosed or the diagnosis is delayed. In this study, an effective autism diagnostic system using electroencephalogram (EEG) signals, which are generated from electrical activity in the brain, was developed and characterized. The pre-processed signals were converted to two-dimensional images using the higher-order spectra (HOS) bispectrum. Nonlinear features were extracted thereafter, and then reduced using locality sensitivity discriminant analysis (LSDA). Significant features were selected from the condensed feature set using Student's t-test, and were then input to different classifiers. The probabilistic neural network (PNN) classifier achieved the highest accuracy of 98.70% with just five features. Ten-fold cross-validation was employed to evaluate the performance of the classifier. It was shown that the developed system can be useful as a decision support tool to assist healthcare professionals in diagnosing autism.
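A rough sketch of a direct (segment-averaged) bispectrum estimate, the higher-order-spectra step that yields the two-dimensional images mentioned above. Window choice, segment length, and FFT size are assumptions, and the loop-based estimator is written for clarity rather than speed.

```python
import numpy as np

def bispectrum(x, seg_len=128, nfft=128):
    """Direct bispectrum estimate: average of X(f1) * X(f2) * conj(X(f1 + f2))
    over non-overlapping, Hann-windowed segments of the signal."""
    segments = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    window = np.hanning(seg_len)
    b = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segments:
        spec = np.fft.fft(s * window, nfft)
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                b[f1, f2] += spec[f1] * spec[f2] * np.conj(spec[f1 + f2])
    return np.abs(b) / len(segments)     # magnitude image used for feature extraction
```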


Subject(s)
Autism Spectrum Disorder/diagnosis , Adolescent , Autism Spectrum Disorder/physiopathology , Child , Child, Preschool , Discriminant Analysis , Electroencephalography , Female , Humans , Male , Neural Networks, Computer , Signal Processing, Computer-Assisted
7.
Biomed Microdevices ; 21(4): 80, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31418067

ABSTRACT

Blood viscosity measurements are crucial for the diagnosis and understanding of a range of hematological and cardiovascular diseases. Such measurements are heavily used in monitoring patients during and after surgeries, which necessitates the development of a highly accurate viscometer that uses a minimal amount of blood. In this work, we have designed and implemented a microfluidic device that was used to measure fluid viscosity with a high accuracy using less than 10 µl of blood. The device was further used to construct a blood viscosity model based on temperature, shear rate, and anti-coagulant concentration. The model has an R-squared value of 0.950. Finally, blood protein content was changed to simulate diseased conditions and blood viscosity was measured using the device and estimated using the model constructed in this work. Simulated diseased conditions were clearly detected when comparing estimated viscosity values using the model and the measured values using the device, proving the applicability of the setup in the detection of rheological anomalies and in disease diagnosis.
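A minimal sketch of fitting an empirical viscosity model to temperature, shear rate, and anticoagulant concentration. The linear model form and the synthetic data below are assumptions for illustration only; the abstract does not state the paper's model structure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
temperature = rng.uniform(22.0, 40.0, 60)      # deg C
shear_rate = rng.uniform(50.0, 400.0, 60)      # 1/s
heparin = rng.uniform(0.5, 2.5, 60)            # anticoagulant concentration (arbitrary units)
# Synthetic stand-in for measured viscosity (cP); NOT data from the paper.
viscosity = (6.0 - 0.05 * temperature - 0.003 * shear_rate - 0.4 * heparin
             + rng.normal(0.0, 0.1, 60))

X = np.column_stack([temperature, shear_rate, heparin])
model = LinearRegression().fit(X, viscosity)
print("R^2 of the fit:", round(model.score(X, viscosity), 3))
```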


Subject(s)
Blood Viscosity/drug effects , Heparin/pharmacology , Lab-On-A-Chip Devices , Models, Biological , Shear Strength , Temperature , Animals , Biomechanical Phenomena/drug effects , Blood Flow Velocity/drug effects , Dimethylpolysiloxanes , Dose-Response Relationship, Drug , Equipment Design , Nylons
8.
Brain Topogr ; 32(5): 914-921, 2019 09.
Article in English | MEDLINE | ID: mdl-31006838

ABSTRACT

The multiscale entropy (MSE) model quantifies the complexity of brain function by measuring entropy across multiple time scales. Although the MSE model has been applied to children with autism spectrum disorder (ASD) in previous studies, those studies were limited to distinguishing children with ASD from typically developing children, without addressing the severity of their autistic features. Therefore, we aimed to explore and identify the MSE features and patterns in children with mild and severe ASD using a high-density 64-channel EEG system. This is a cross-sectional study in which 36 children with ASD were recruited and classified into two groups: mild and severe ASD (18 children in each). Three calculated outcomes characterized the brain complexity of the mild and severe ASD groups: averaged MSE values, MSE topographical cortical representation, and MSE curve plotting. Averaged MSE values in children with mild ASD were higher than those in children with severe ASD in the right frontal (0.37 vs. 0.22, p = 0.022), right parietal (0.31 vs. 0.13, p = 0.017), left parietal (0.37 vs. 0.17, p = 0.018), and central cortical areas (0.36 vs. 0.21, p = 0.026). In addition, children with mild ASD showed a clearer and steeper increase in sample entropy values with increasing scale factors than children with severe ASD. The obtained data showed different brain complexity (MSE) features, values, and topographical representations in children with mild ASD compared with those with severe ASD. Consequently, MSE could serve as a sensitive method for identifying the severity level of ASD.
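A minimal sketch of the multiscale entropy computation underlying the per-region averages above: coarse-grain the EEG at each scale factor, then compute sample entropy. The embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults assumed here, not parameters quoted from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with tolerance r = r_factor * std(x); Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1        # exclude the self-match
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, max_scale=20):
    """Sample entropy of the coarse-grained signal at scales 1..max_scale."""
    values = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau], dtype=float).reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse))
    return np.array(values)
```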


Subject(s)
Autism Spectrum Disorder/physiopathology , Brain/physiopathology , Child , Cross-Sectional Studies , Electroencephalography , Entropy , Female , Humans , Male
9.
Behav Brain Res ; 362: 240-248, 2019 04 19.
Article in English | MEDLINE | ID: mdl-30641159

ABSTRACT

BACKGROUND: Previous automated EEG-based approaches to the diagnosis of autism spectrum disorder (ASD) using various nonlinear EEG analysis methods were limited to distinguishing children with ASD from typically developing children, without addressing the severity of their autistic features. OBJECTIVES: To identify potential differences between children with mild and severe ASD based on EEG analysis using empirical mode decomposition (EMD) and second-order difference plot (SODP) models, and to determine the accuracy of these outcome measures in distinguishing ASD severity levels. METHODS: Resting-state EEG data were recorded for 36 children, divided equally into two matched groups of mild and severe ASD. EMD analysis was applied to their EEG data to identify intrinsic mode function (IMF) features, SODP patterns, elliptical areas, and central tendency measure (CTM) values. An artificial neural network (ANN) was then used to determine the accuracy of these outcome measures in distinguishing between the two ASD groups. RESULTS: Children with severe ASD showed smaller IMFs with fewer twitches and oscillations, more stochastic SODP plots, lower CTM values, and higher ellipse area values compared with children with mild ASD, indicating greater EEG variability and a greater inability to suppress inappropriate behaviour. The ANN achieved a sensitivity of 100%, a specificity of 94.7%, and an overall accuracy of 97.2% in distinguishing between the ASD groups. CONCLUSION: Children with severe and mild ASD had different IMF features, SODP plots, elliptical areas, and CTM values. These EMD-based outcome measures could serve as a sensitive automated tool for distinguishing severity levels in children with ASD.
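A minimal sketch of the second-order difference plot and the central tendency measure referred to above; the radius r is an assumed analysis parameter.

```python
import numpy as np

def sodp_points(x):
    """Second-order difference plot coordinates:
    u(n) = x(n+1) - x(n) plotted against v(n) = x(n+2) - x(n+1)."""
    d = np.diff(np.asarray(x, dtype=float))
    return d[:-1], d[1:]

def central_tendency_measure(x, r):
    """CTM: fraction of SODP points lying inside a circle of radius r."""
    u, v = sodp_points(x)
    return float(np.mean(np.sqrt(u ** 2 + v ** 2) < r))
```

Higher CTM values correspond to points clustered near the origin, i.e., lower sample-to-sample variability.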


Subject(s)
Autism Spectrum Disorder/physiopathology , Autistic Disorder/physiopathology , Electroencephalography , Neural Networks, Computer , Autism Spectrum Disorder/diagnosis , Autistic Disorder/diagnostic imaging , Behavior/physiology , Child , Child, Preschool , Electroencephalography/methods , Female , Humans , Male , Sensitivity and Specificity , Signal Processing, Computer-Assisted
10.
J Med Syst ; 42(4): 58, 2018 Feb 17.
Article in English | MEDLINE | ID: mdl-29455440

ABSTRACT

Segmentation of blood leucocytes in medical images is a difficult process owing to the variability of blood cells in shape and size and the difficulty of determining the location of leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming, and error-prone because of the varied morphological components of the cells. Segmentation of such images is further complicated by the complexity of the images, the lack of leucocyte models that fully capture the probable shapes of each structure and account for cell overlapping, the wide variety of blood cells in shape and size, the various factors influencing the outer appearance of leucocytes, and the low contrast of static microscope images in the presence of noise. We propose a strategy for segmenting blood leucocytes in static microscope images that combines three established computer vision techniques: image enhancement, support vector machine (SVM)-based segmentation, and filtering of non-ROI (region of interest) areas on the basis of local binary patterns (LBP) and texture features. Each of these techniques is adapted to the blood leucocyte segmentation problem, so the resulting pipeline is considerably more robust than its individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments and identifies blood leucocytes from static microscope images. First, the method uses a trainable segmentation procedure and a trained SVM classifier to accurately identify the position of the ROI. Next, non-ROI regions are filtered out based on histogram analysis so that the correct object is retained. Finally, the leucocyte type is identified using texture features. The performance of the proposed approach was evaluated against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to manual segmentation for accurately determining the ROI. Leucocyte identification based on the ROI texture (LBP features) achieved an accuracy of about 95.3%, with 100% sensitivity and 91.66% specificity.
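A minimal sketch of the final identification step: an LBP texture histogram computed over a segmented ROI and fed to an SVM. The LBP parameters and the commented training call are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_roi, p=8, r=1):
    """Normalised histogram of uniform LBP codes over a grayscale ROI."""
    codes = local_binary_pattern(gray_roi, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# Illustrative training/prediction calls (train_rois, train_labels, new_roi are placeholders):
# clf = SVC(kernel="rbf").fit([lbp_histogram(roi) for roi in train_rois], train_labels)
# leucocyte_type = clf.predict([lbp_histogram(new_roi)])
```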


Subject(s)
Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Leukocytes/cytology , Pattern Recognition, Automated/methods , Support Vector Machine , Artificial Intelligence , Humans , Microscopy , Reproducibility of Results
11.
Water Sci Technol ; 76(11-12): 3227-3235, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29236002

ABSTRACT

A non-sacrificial boron-doped diamond electrode was prepared in the laboratory and used as a novel anode for the electrochemical oxidation of poultry slaughterhouse wastewater. This wastewater poses an environmental threat as it is characterized by a high content of recalcitrant organics. The influence of several process variables, namely applied current density, initial pH, the nature of the supporting electrolyte, and the concentration of electrocoagulant, on chemical oxygen demand (COD), color, and turbidity removal was investigated. The results showed that raising the applied current density to 3.83 mA/cm2 has a positive effect on COD, color, and turbidity removal, which increased to 100%, 90%, and 80%, respectively. A low pH of 5 favored oxidant generation and consequently increased COD removal to 100%. Complete removal of COD occurred in the presence of NaCl (1%) as the supporting electrolyte, whereas Na2SO4 was less efficient than NaCl in terms of COD removal. The COD decay follows pseudo-first-order kinetics. The simultaneous use of Na2SO4 and FeCl3 decreased the turbidity of the wastewater by 98% owing to electrocoagulation.
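As a point of reference, pseudo-first-order COD decay means that ln(COD0/COD) grows linearly with time; the sketch below fits the apparent rate constant by linear regression. The time and COD values are placeholders, not measurements from the paper.

```python
import numpy as np

# Electrolysis time (min) and residual COD (mg/L) -- placeholder values for illustration.
t = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
cod = np.array([1800.0, 1300.0, 950.0, 700.0, 500.0])

# Pseudo-first-order kinetics: ln(COD0 / COD) = k * t
y = np.log(cod[0] / cod)
k, intercept = np.polyfit(t, y, 1)
print(f"apparent rate constant k = {k:.4f} 1/min")
```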


Subject(s)
Abattoirs , Boron/chemistry , Diamond , Industrial Waste/analysis , Wastewater/chemistry , Water Pollutants, Chemical/chemistry , Biological Oxygen Demand Analysis , Electrodes , Electrolytes/chemistry , Kinetics , Oxidants , Oxidation-Reduction , Waste Disposal, Fluid/instrumentation , Waste Disposal, Fluid/methods
12.
BMC Med Educ ; 17(1): 129, 2017 Aug 04.
Article in English | MEDLINE | ID: mdl-28778157

ABSTRACT

BACKGROUND: Improving the medical content of Biomedical Engineering (BME) curricula through a qualitative assessment process, or through comparison with another high-standard program, has been approached by a number of studies. However, quantitative assessment tools have not been emphasized. Quantitative tools can be more accurate and robust for a challenging multidisciplinary field such as BME, which mixes biomedical elements with technological aspects. The major limitations of previous research are the heavy dependence on surveys or purely qualitative approaches, as well as the absence of a strong focus on medical outcomes distinct from technical ones. The proposed work presents the development and evaluation of an accurate and robust quantitative approach to improving the medical content of the multidisciplinary BME curriculum. METHODS: The work presents quantitative assessment tools, and the subsequent improvement of curriculum medical content, applied as an illustrative example to the ABET (Accreditation Board for Engineering and Technology, USA) accredited BME department at Jordan University of Science and Technology. The quantitative results of the curriculum/course, capstone, exit exam, and course assessment by students (CAS) assessments, as well as of surveys completed by alumni, seniors, employers, and training supervisors, were first mapped to the expected student outcomes related to the medical field (SOsM). The collected data were then analyzed and discussed to find curriculum weaknesses by tracking shortcomings in the degree of achievement of every outcome. Finally, actions were taken to fill the gaps in the curriculum; these actions were also mapped to the SOsM. RESULTS: Weighted averages of the obtained quantitative values, mapped to the SOsM, accurately indicated the achievement levels of all outcomes as well as the improvements to be made to the curriculum. Mapping the improvements to the SOsM also supports assessment in the following cycle. CONCLUSION: The suggested assessment tools can be generalized and extended to any other BME department, allowing robust improvement of the medical content of BME curricula.


Subject(s)
Accreditation/standards , Education, Medical, Graduate , Educational Measurement/standards , Students, Medical , Biomedical Engineering/standards , Curriculum , Education, Medical, Graduate/standards , Humans , Professional Competence , Quality Improvement
13.
J Med Biol Eng ; 37(6): 843-857, 2017.
Article in English | MEDLINE | ID: mdl-29541014

ABSTRACT

This paper presents an accurate nonlinear classification method that can help physicians diagnose seizure activity in electroencephalographic (EEG) signals characterized by disturbances in temporal and spectral content. This is accomplished in four steps. First, EEG signals containing healthy, ictal, and seizure-free (inter-ictal) activities are decomposed by the empirical mode decomposition (EMD) method. The instantaneous amplitudes and frequencies of the resulting bands (intrinsic mode functions, IMFs) are then tracked by the direct quadrature (DQ) method. In contrast to other approaches, DQ cancels the effect of amplitude modulation on the frequency calculation, so the dissociation between instantaneous amplitude and frequency information is fully achieved and feature confusion is avoided. Next, the Shannon entropy values of both sets of instantaneous values (amplitudes and frequencies) related to every IMF are calculated. Finally, the obtained entropy values are classified by a random forest. The proposed procedure yields 100% accuracy for the (healthy)/(ictal) problem and 98.3-99.7% for the (healthy)/(ictal)/(inter-ictal) classification problem. The suggested method is hence robust, accurate, fast, user-friendly, data-driven, and interpretable.
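A minimal sketch of the final classification step, assuming the per-IMF Shannon-entropy feature vectors have already been computed; the feature matrix here is synthetic and the forest size is an arbitrary choice, not the paper's setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Placeholder features: one row per EEG segment; columns are the entropies of the
# instantaneous amplitudes and frequencies of each IMF (e.g. 5 IMFs -> 10 features).
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)   # 0 = healthy, 1 = inter-ictal, 2 = ictal (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```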

14.
Biomed J ; 38(2): 153-61, 2015.
Article in English | MEDLINE | ID: mdl-25179722

ABSTRACT

BACKGROUND: Computerized lung sound analysis involves recording lung sounds via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as the non-linearity and non-stationarity caused by air turbulence. Automatic analysis is necessary to avoid dependence on expert skills. METHODS: This work exploits the autocorrelation function in the feature extraction stage. All processing stages were implemented in MATLAB. The classification was performed comparatively using both the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. RESULTS: The ANN outperformed the ANFIS system and returned better performance parameters: its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. These figures compare favourably with many recent approaches. CONCLUSIONS: The proposed method is an efficient, fast tool for the intended purpose, as reflected in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function for feature extraction in such applications enhances performance and avoids undesired computational complexity compared with other techniques.
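A minimal sketch of the autocorrelation-based feature extraction: the normalised autocorrelation of a lung-sound frame, with the first lags kept as the feature vector for the ANN/ANFIS classifiers. The number of lags is an assumption for illustration.

```python
import numpy as np

def autocorrelation_features(frame, n_lags=50):
    """Normalised autocorrelation of a lung-sound frame; the first n_lags
    coefficients form the feature vector."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]                    # lag 0 normalised to 1
    return ac[:n_lags]
```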


Subject(s)
Evoked Potentials/physiology , Nerve Net/physiology , Neural Networks, Computer , Respiratory Sounds/physiology , Artificial Intelligence , Humans , Signal Processing, Computer-Assisted , Software
15.
Biomed Eng Online ; 10: 38, 2011 May 24.
Article in English | MEDLINE | ID: mdl-21609459

ABSTRACT

BACKGROUND: Methods capable of recognizing abnormal brain activity rely on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that result in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. METHOD: Discrimination is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis presents a classification of normal and ictal activities using a feature based on the Hilbert-Huang transform. Through this method, information related to the intrinsic functions contained in the EEG signal is extracted to track the local amplitude and frequency of the signal. Based on this local information, weighted frequencies are calculated, and a comparison between ictal and seizure-free determinant intrinsic functions is performed. The comparison methods used are the t-test and Euclidean clustering. RESULTS: The t-test yields a P-value < 0.02, and the clustering leads to accurate (94%) and specific (96%) results. The proposed method is also contrasted with multivariate empirical mode decomposition, which reaches 80% accuracy. The comparison results strengthen the contribution of this paper not only in terms of accuracy but also with respect to fast response and ease of use. CONCLUSION: This paper presents an original tool for EEG signal processing that gives physicians the possibility of diagnosing abnormalities of brain function. The proposed system has the potential to provide several credible benefits, such as fast diagnosis, high accuracy, good sensitivity and specificity, time savings, and user-friendliness. Furthermore, mode mixing can be characterized using the extracted instantaneous information of every IMF, which would most likely be a hard task if only average values were used. Additional benefits of the proposed system include low cost and ease of interfacing, all of which indicate its usefulness as an efficient diagnostic tool.
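A minimal sketch of the weighted-frequency feature: an amplitude-weighted mean instantaneous frequency per intrinsic mode function, derived here from the Hilbert phase. The exact weighting used in the paper is not spelled out in the abstract, so this form is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def weighted_frequency(imf, fs):
    """Amplitude-weighted mean instantaneous frequency (Hz) of one IMF."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)[:-1]
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    return float(np.sum(amplitude * inst_freq) / np.sum(amplitude))
```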


Subject(s)
Electroencephalography/methods , Seizures/diagnosis , Signal Processing, Computer-Assisted , Case-Control Studies , Humans
16.
Article in English | MEDLINE | ID: mdl-19965156

ABSTRACT

The purpose of this study is to investigate the potential of the ensemble empirical mode decomposition (EEMD) to extract cardiogenic oscillations from inductive plethysmography signals in order to measure cardiac stroke volume. First, a simple cardio-respiratory model is used to simulate cardiac, respiratory, and cardio-respiratory signals. Second, application of empirical mode decomposition (EMD) to simulated cardio-respiratory signals demonstrates that the mode mixing phenomenon affects the extraction performance and hence also the cardiac stroke volume measurement. Stroke volume is measured as the amplitude of extracted cardiogenic oscillations, and it is compared to the stroke volume of simulated cardiac activity. Finally, we show that the EEMD leads to mode mixing removal.
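A rough sketch of the ensemble idea behind EEMD: decompose many noise-perturbed copies of the signal and average the resulting IMFs, which suppresses mode mixing. The per-trial decomposition assumes the third-party PyEMD package (PyPI: EMD-signal); the noise amplitude and trial count are arbitrary illustrative choices.

```python
import numpy as np
from PyEMD import EMD   # third-party package "EMD-signal"; an assumption, not the authors' code

def eemd(signal, n_trials=100, noise_std=0.2, seed=0):
    """Ensemble EMD: average the IMFs of noise-perturbed copies of the signal."""
    rng = np.random.default_rng(seed)
    emd = EMD()
    all_imfs = []
    for _ in range(n_trials):
        noisy = signal + noise_std * signal.std() * rng.standard_normal(len(signal))
        all_imfs.append(emd.emd(noisy))
    n_imfs = min(imfs.shape[0] for imfs in all_imfs)   # trials may yield different IMF counts
    return np.mean([imfs[:n_imfs] for imfs in all_imfs], axis=0)
```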


Subject(s)
Plethysmography/methods , Signal Processing, Computer-Assisted , Algorithms , Biomedical Engineering/methods , Computer Simulation , Heart Rate , Humans , Models, Statistical , Oscillometry/methods , Respiration , Stroke Volume , Time Factors
17.
Philos Trans A Math Phys Eng Sci ; 367(1908): 4741-57, 2009 Dec 13.
Article in English | MEDLINE | ID: mdl-19884178

ABSTRACT

To study the mechanical interactions between heart, lungs and thorax, we propose a mathematical model combining a ventilatory neuromuscular model and a model of the cardiovascular system, as described by Smith et al. (Smith, Chase, Nokes, Shaw & Wake 2004 Med. Eng. Phys. 26, 131-139. (doi:10.1016/j.medengphy.2003.10.001)). The respiratory model has been adapted from Thibault et al. (Thibault, Heyer, Benchetrit & Baconnier 2002 Acta Biotheor. 50, 269-279. (doi:10.1023/A:1022616701863)); using a Liénard oscillator, it allows the activity of the respiratory centres, the respiratory muscles and rib cage internal mechanics to be simulated. The minimal haemodynamic system model of Smith includes the heart, as well as the pulmonary and systemic circulation systems. These two modules interact mechanically by means of the pleural pressure, calculated in the mechanical respiratory system, and the intrathoracic blood volume, calculated in the cardiovascular model. The simulation by the proposed model provides results that are, first, close to experimental data and, second, in agreement with the literature, and that, finally, highlight the presence of mechanical cardiorespiratory interactions.
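For illustration, the van der Pol equation, a classic member of the Liénard family used to generate self-sustained oscillations such as a central respiratory drive, integrated with SciPy. The coefficient below is arbitrary and is not taken from the paper's respiratory module.

```python
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    """Van der Pol oscillator, a Lienard-type system:
    x'' - mu * (1 - x**2) * x' + x = 0, written as two first-order ODEs."""
    x, v = state
    return [v, mu * (1.0 - x ** 2) * v - x]

sol = solve_ivp(van_der_pol, (0.0, 60.0), [0.1, 0.0], max_step=0.01)
oscillation = sol.y[0]   # self-sustained limit-cycle output, a stand-in for respiratory drive
```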


Subject(s)
Heart/physiology , Lung/physiology , Models, Cardiovascular , Respiratory Mechanics/physiology , Thorax/physiology , Computer Simulation , Electrocardiography , Humans , Male , Plethysmography
18.
Article in English | MEDLINE | ID: mdl-18003550

ABSTRACT

We present progress on a comprehensive, modular, interactive modeling environment centered on the overall regulation of blood pressure and body fluid homeostasis. We call the project SAPHIR, for "a Systems Approach for PHysiological Integration of Renal, cardiac, and respiratory functions". The project uses state-of-the-art multi-scale simulation methods. The basic core model will give succinct input-output (reduced-dimension) descriptions of all relevant organ systems and regulatory processes, and it will be modular, multi-resolution, and extensible, in the sense that detailed submodules of any process(es) can be "plugged in" to the basic model in order to explore, e.g., system-level implications of local perturbations. The goal is to keep the basic core model compact enough to ensure fast execution time (in view of eventual use in the clinic) and yet to allow elaborate, detailed modules of target tissues or organs in order to focus on the problem area while maintaining the system-level regulatory compensations.


Subject(s)
Blood Pressure/physiology , Body Fluids/physiology , Models, Biological , Animals , Cardiovascular Physiological Phenomena , Homeostasis , Humans , Kidney/physiology , Respiratory Physiological Phenomena
19.
Article in English | MEDLINE | ID: mdl-18002147

ABSTRACT

The thoracocardiography approach aims to non-invasively monitor stroke volume through inductive plethysmographic recording of ventricular volume curves with a transducer placed on the chest. The purpose of this study was to investigate the potential of thoracocardiography to estimate stroke volumes during apnoea with an open glottis. We hypothesized that, when the glottis is open, stroke volumes would be better estimated if airway flow curves were taken into account.


Subject(s)
Artifacts , Cardiography, Impedance/methods , Diagnosis, Computer-Assisted/methods , Glottis/physiology , Respiratory Mechanics/physiology , Stroke Volume/physiology , Adult , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity