Results 1 - 6 of 6
1.
Pharmaceutics ; 14(5)2022 May 09.
Article in English | MEDLINE | ID: mdl-35631610

ABSTRACT

Bayesian therapeutic drug monitoring (TDM) software uses a reported pharmacokinetic (PK) model as prior information. Because the estimation is Bayesian, the performance of TDM software can be improved by using a PK model whose characteristics are similar to those of the patient. We therefore aimed to develop a machine learning (ML) classifier that selects a more suitable vancomycin PK model for TDM in a given patient. Nine vancomycin PK studies were selected, and a classifier was built to choose the most suitable of these models for each patient. The classifier was trained on 900,000 virtual patients, and its performance was evaluated on 9000 and 4000 virtual patients for internal and external validation, respectively. The accuracy of the classifier ranged from 20.8% to 71.6% across the simulation scenarios. TDM using the ML classifier produced more stable results than TDM using single models without the classifier. Based on these results, we discuss the further development of TDM using ML. In conclusion, we developed and evaluated a new method for selecting a PK model for TDM using ML. With more information, such as additional reported PK models and improvements to the ML model, the method can be enhanced further.
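As a rough illustration of the model-selection idea in this abstract, the sketch below trains a multiclass classifier that maps patient characteristics and measured concentrations to the index of one of nine candidate PK models. It is not the study's code: the feature set, the (random) labels, the scikit-learn classifier, and the much smaller virtual population are placeholder assumptions.

```python
# Illustrative sketch only: classify which of nine candidate vancomycin PK models
# suits a patient, given covariates and early TDM concentrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_virtual = 5000                      # stand-in for the 900,000 virtual patients
n_models = 9                          # nine candidate vancomycin PK models

# Hypothetical features: age, weight, serum creatinine, and two trough levels.
X = np.column_stack([
    rng.uniform(18, 90, n_virtual),       # age (years)
    rng.uniform(40, 120, n_virtual),      # weight (kg)
    rng.uniform(0.4, 3.0, n_virtual),     # serum creatinine (mg/dL)
    rng.uniform(5, 30, n_virtual),        # 1st trough concentration (mg/L)
    rng.uniform(5, 30, n_virtual),        # 2nd trough concentration (mg/L)
])
# In the study, the label would be the published PK model that predicts each
# virtual patient best; here it is random, purely to make the pipeline runnable.
y = rng.integers(0, n_models, n_virtual)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Near chance (~1/9) here because the labels are random placeholders.
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# At the bedside, the predicted index would pick the prior PK model that the
# Bayesian TDM software then uses for this patient.
print("selected PK model index for a new patient:", clf.predict(X_te[:1])[0])
```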

2.
Pharmaceuticals (Basel) ; 15(2)2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35215240

ABSTRACT

Most therapeutic drug monitoring (TDM) packages are based on maximum a posteriori (MAP) estimation. In this study, HMCtdm, a new TDM package, was developed using Hamiltonian Monte Carlo (HMC) simulation. The estimation process of HMCtdm for the drugs amikacin, vancomycin, theophylline, and phenytoin was based on the R package Torsten. The prior pharmacokinetic (PK) models of the drugs were derived from the Abbottbase® Pharmacokinetic Systems (PKS) program. The performance of HMCtdm for each drug was assessed through internal and external validation, and the internal validation results were compared with those of MAP-based estimation. The developed open-source HMCtdm package is user-friendly. The validation results were reviewed and interpreted using the mean percentage error and root mean squared error, and the successful transplantation of the prior PK structures (used in PKS) was confirmed by comparing the validation results with those of MAP estimation. An open-source HMC-based TDM package was thus successfully developed in this study, and its performance was evaluated. The package can be operated by users unfamiliar with C++ and can be further developed for various applications.
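To make the MAP-versus-HMC distinction concrete, here is a minimal, self-contained sketch of Bayesian individual PK estimation by Hamiltonian Monte Carlo for a one-compartment IV-bolus model. It is not HMCtdm or Torsten: the dose, sampling times, observed concentrations, prior values, and residual error are illustrative assumptions, and the gradient is approximated by finite differences rather than the automatic differentiation a Stan/Torsten backend would use.

```python
# Illustrative sketch only: HMC sampling of individual PK parameters
# (log CL, log V) given a population prior and a few observed concentrations.
import numpy as np

rng = np.random.default_rng(0)

dose = 1000.0                          # mg, single IV bolus (assumed)
t_obs = np.array([1.0, 6.0, 12.0])     # h, sampling times (assumed)
c_obs = np.array([38.0, 22.0, 12.0])   # mg/L, observed concentrations (assumed)

prior_mu = np.log(np.array([4.0, 30.0]))   # prior means of [CL (L/h), V (L)], log scale
prior_sd = np.array([0.35, 0.30])          # between-subject SDs (assumed)
sigma = 0.15                               # residual SD on log-concentration (assumed)

def log_posterior(theta):
    """Log posterior of theta = [log CL, log V] for a 1-compartment IV bolus."""
    cl, v = np.exp(theta)
    pred = np.log(dose / v) - (cl / v) * t_obs
    loglik = -0.5 * np.sum((np.log(c_obs) - pred) ** 2) / sigma ** 2
    logprior = -0.5 * np.sum(((theta - prior_mu) / prior_sd) ** 2)
    return loglik + logprior

def grad(theta, eps=1e-5):
    """Finite-difference gradient of the log posterior (kept simple on purpose)."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (log_posterior(theta + d) - log_posterior(theta - d)) / (2 * eps)
    return g

def hmc_step(theta, step=0.02, n_leapfrog=25):
    """One HMC transition: standard-normal momentum, leapfrog, Metropolis accept."""
    p = rng.standard_normal(theta.shape)
    theta_new = theta.copy()
    p_new = p + 0.5 * step * grad(theta_new)
    for _ in range(n_leapfrog):
        theta_new = theta_new + step * p_new
        p_new = p_new + step * grad(theta_new)
    p_new = p_new - 0.5 * step * grad(theta_new)   # undo the extra half momentum step
    h_old = -log_posterior(theta) + 0.5 * np.sum(p ** 2)
    h_new = -log_posterior(theta_new) + 0.5 * np.sum(p_new ** 2)
    return theta_new if np.log(rng.uniform()) < h_old - h_new else theta

theta = prior_mu.copy()
draws = []
for i in range(3000):
    theta = hmc_step(theta)
    if i >= 1000:                       # discard burn-in
        draws.append(np.exp(theta))
cl_draws, v_draws = np.array(draws).T
print(f"posterior median CL = {np.median(cl_draws):.2f} L/h, V = {np.median(v_draws):.1f} L")
```

A MAP-based package would instead report only the single parameter vector that maximizes the same log posterior; sampling yields the full posterior, from which medians and credible intervals can be drawn.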

3.
Sensors (Basel) ; 20(24)2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33322359

ABSTRACT

Electroencephalogram (EEG) biosignals are widely used to measure human emotional reactions, and recent progress in deep learning-based classification models has improved the accuracy of emotion recognition from EEG signals. We apply a deep learning-based emotion recognition model to EEG biosignals to show that illustrated surgical images reduce the negative emotional reactions that photographic surgical images generate. The strong negative emotional reactions caused by surgical images, which show the internal structure of the human body (including blood, flesh, muscle, fatty tissue, and bone), are an obstacle when explaining the images to patients or using them to communicate with non-professionals. We argue that the negative emotional reactions generated by illustrated surgical images are less severe than those caused by raw surgical photographs. To demonstrate this difference, we produce several illustrated surgical images from photographs and measure the emotional reactions they evoke using EEG biosignals, applying a deep learning-based emotion recognition model to extract the emotional reactions. Through this experiment, we show that the negative emotional reactions associated with photographic surgical images are much stronger than those caused by illustrated versions of the same images. We further conduct a self-assessment user survey to confirm that the emotions recognized from the EEG signals represent the user-annotated emotions well.


Subject(s)
Electroencephalography; Emotions; Adult; Female; General Surgery; Human Body; Humans; Male; Photography; Young Adult
4.
Sensors (Basel) ; 20(16)2020 Aug 13.
Article in English | MEDLINE | ID: mdl-32823741

ABSTRACT

Visual content such as movies and animation evokes various human emotions. We examine the hypothesis that the emotion evoked by visual content may vary with the contrast of its scenes. To test this, we sample scenes associated with three emotion categories (positive, neutral, and negative) from visual content and manipulate their contrast. We then measure the resulting changes in valence and arousal in human participants watching the content, using a deep emotion recognition module based on electroencephalography (EEG) signals. We conclude that enhancing contrast increases valence and reducing contrast decreases it, whereas contrast control has only a very small effect on arousal.


Subject(s)
Electroencephalography; Emotions; Arousal; Female; Humans; Interior Design and Furnishings; Male
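A small sketch of the contrast-manipulation step described in this abstract: each sampled scene frame is re-rendered at reduced, original, and enhanced contrast before presentation. The file name, contrast factors, and the RMS-contrast check are placeholders, not the study's pipeline.

```python
# Illustrative sketch only: generate contrast-controlled variants of a scene frame.
from PIL import Image, ImageEnhance
import numpy as np

def contrast_variants(path, factors=(0.5, 1.0, 1.5)):
    """Return the frame at reduced, original, and enhanced contrast."""
    img = Image.open(path).convert("RGB")
    return {f: ImageEnhance.Contrast(img).enhance(f) for f in factors}

def rms_contrast(img):
    """RMS contrast of the luminance channel, a quick check that the edit worked."""
    lum = np.asarray(img.convert("L"), dtype=float) / 255.0
    return lum.std()

variants = contrast_variants("scene_frame.png")   # hypothetical frame from a clip
for factor, img in variants.items():
    # Each variant would be shown to participants while EEG is recorded, and
    # valence/arousal estimated by the emotion recognition module.
    print(f"contrast factor {factor}: RMS contrast = {rms_contrast(img):.3f}")
```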
5.
Sensors (Basel) ; 19(24)2019 Dec 14.
Article in English | MEDLINE | ID: mdl-31847398

ABSTRACT

Visual stimuli from photographs and artworks elicit corresponding emotional responses, but whether the emotions that arise from photographs and from artworks differ has been difficult to establish. We address this question using electroencephalogram (EEG)-based biosignals and a deep convolutional neural network (CNN)-based emotion recognition model. We employ Russell's emotion model, which maps emotion keywords such as happy, calm, or sad onto a coordinate system whose axes are valence and arousal. We collect photographs and artwork images matching the emotion keywords and build eighteen one-minute video clips, covering nine emotion keywords for photographs and nine for artworks. We recruited forty subjects and measured their emotional responses to the video clips. From a t-test on the results, we conclude that valence differs between the two media, whereas arousal does not.


Subject(s)
Deep Learning; Emotions/physiology; Electroencephalography; Humans; Models, Theoretical; Neural Networks, Computer
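The statistical comparison in this abstract can be sketched as follows: per-subject valence and arousal scores for the photograph clips versus the artwork clips, compared with a t-test. The numbers below are synthetic placeholders, and since the abstract does not state whether the comparison was paired or independent, a paired test across subjects is assumed here.

```python
# Illustrative sketch only: compare valence/arousal between photograph and artwork
# conditions across 40 subjects with a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 40

# Synthetic per-subject mean scores on a [-1, 1] scale (placeholders only).
valence_photo = rng.normal(0.15, 0.20, n_subjects)
valence_art = rng.normal(0.30, 0.20, n_subjects)
arousal_photo = rng.normal(0.10, 0.25, n_subjects)
arousal_art = rng.normal(0.12, 0.25, n_subjects)

for name, a, b in [("valence", valence_photo, valence_art),
                   ("arousal", arousal_photo, arousal_art)]:
    res = stats.ttest_rel(a, b)          # paired t-test across subjects
    print(f"{name}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```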
6.
Sensors (Basel) ; 19(21)2019 Oct 31.
Article in English | MEDLINE | ID: mdl-31683608

ABSTRACT

We present a multi-column CNN-based model for emotion recognition from EEG signals. Deep neural networks have recently been widely employed for extracting features and recognizing emotions from various biosignals, including EEG signals, and a single CNN-based emotion recognition module already shows higher accuracy than conventional handcrafted-feature-based modules. To improve the accuracy of CNN-based modules further, we devise a multi-column model whose decision is produced by a weighted sum of the decisions of the individual recognition modules. We apply the model to EEG signals from the DEAP dataset for comparison and demonstrate the improved accuracy of our model.


Subject(s)
Electroencephalography; Emotions; Facial Recognition; Neural Networks, Computer; Signal Processing, Computer-Assisted; Algorithms; Arousal; Humans
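A minimal sketch of the multi-column idea, not the paper's architecture: several small CNN columns each score an EEG segment, and the final decision is a learnable weighted sum of the per-column outputs. The channel count, segment length, kernel sizes, and number of columns are assumptions chosen only to make the example run.

```python
# Illustrative sketch only: multi-column 1-D CNN whose decision is a weighted
# sum of per-column outputs.
import torch
import torch.nn as nn

class EEGColumn(nn.Module):
    """One 1-D CNN column operating on (batch, channels, samples) EEG segments."""
    def __init__(self, n_channels=32, n_classes=2, kernel=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel, padding=kernel // 2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )
    def forward(self, x):
        return self.net(x)

class MultiColumnEEG(nn.Module):
    """Weighted sum of the decisions of several columns; weights are learnable."""
    def __init__(self, n_columns=3, n_channels=32, n_classes=2):
        super().__init__()
        self.columns = nn.ModuleList(
            [EEGColumn(n_channels, n_classes, kernel=3 + 2 * i) for i in range(n_columns)]
        )
        self.weights = nn.Parameter(torch.ones(n_columns) / n_columns)
    def forward(self, x):
        logits = torch.stack([col(x) for col in self.columns], dim=0)  # (cols, batch, classes)
        w = torch.softmax(self.weights, dim=0).view(-1, 1, 1)
        return (w * logits).sum(dim=0)                                 # (batch, classes)

# Example: a batch of 4 segments, 32 channels x 128 samples (roughly one second
# of DEAP-style EEG).
x = torch.randn(4, 32, 128)
model = MultiColumnEEG()
print(model(x).argmax(dim=1))   # predicted class per segment (e.g., low/high valence)
```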