Results 1 - 5 of 5
1.
IEEE J Biomed Health Inform ; 23(2): 693-702, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29994012

ABSTRACT

The elderly population (over the age of 60) is predicted to reach 1.2 billion by 2025. Most elderly people prefer to live alone in their own homes because of the high cost of eldercare and concerns about privacy. For monitoring the daily activities of elderly people living alone, unobtrusive activity recognition is preferable to camera-based and wearable-device-based systems. We therefore propose an unobtrusive activity recognition classifier using a deep convolutional neural network (DCNN) and anonymous binary sensors, namely passive infrared motion sensors and door sensors. We employed the annotated Aruba open dataset, acquired from a smart home in which a volunteer elderly woman lived alone for eight months. First, ten basic daily activities, namely Eating, Bed_to_Toilet, Relax, Meal_Preparation, Sleeping, Work, Housekeeping, Wash_Dishes, Enter_Home, and Leave_Home, are segmented with different sliding-window sizes and then converted into binary activity images. Next, the activity images are used as the ground truth for the proposed DCNN model. The 10-fold cross-validation results indicate that the proposed DCNN model outperforms existing models, with F1-scores of 0.79 for all ten activities and 0.951 for eight activities (excluding Leave_Home and Wash_Dishes).


Subject(s)
Deep Learning , Health Services for the Aged , Human Activities/classification , Image Processing, Computer-Assisted/methods , Independent Living , Aged , Humans , Video Recording
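
To make the pipeline in this abstract concrete, the sketch below shows how binary sensor events could be windowed into "activity images" and fed to a small CNN. This is a minimal illustration, not the authors' code: the sensor count, window length, stride, network depth, and filter sizes are all assumed values, and random data stands in for the Aruba event log.

```python
# Minimal sketch (assumed parameters, not the paper's implementation):
# binary sensor events -> sliding-window "activity images" -> small CNN.
import numpy as np
import torch
import torch.nn as nn

NUM_SENSORS = 31      # assumed number of binary PIR/door sensors
WINDOW = 32           # assumed sliding-window length, in sensor events
NUM_ACTIVITIES = 10   # the ten daily activities listed in the abstract

def to_activity_images(events: np.ndarray, window: int = WINDOW, step: int = 8):
    """Slide a window over a (T, NUM_SENSORS) binary event matrix and stack
    the resulting (window, NUM_SENSORS) binary 'images'."""
    images = [events[t:t + window]
              for t in range(0, len(events) - window + 1, step)]
    return np.stack(images).astype(np.float32)

class ActivityDCNN(nn.Module):
    """A small 2-D CNN over activity images; depth and filter counts are
    placeholders, not the architecture reported in the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (WINDOW // 4) * (NUM_SENSORS // 4),
                                    NUM_ACTIVITIES)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage with random binary events standing in for a real sensor log.
events = (np.random.rand(500, NUM_SENSORS) > 0.95).astype(np.float32)
images = to_activity_images(events)            # (B, 32, 31)
batch = torch.from_numpy(images).unsqueeze(1)  # (B, 1, 32, 31)
logits = ActivityDCNN()(batch)                 # (B, 10) activity scores
```
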
2.
PLoS One ; 13(11): e0206916, 2018.
Article in English | MEDLINE | ID: mdl-30403736

ABSTRACT

In distributed speech recognition applications, the front-end device, typically a handheld device such as a smartphone or personal digital assistant (PDA), captures the speech signal, extracts the speech features, and sends the speech-feature vector sequence to the back-end server for decoding. Because the front-end device has limited computational capacity, battery power, and bandwidth, reducing the frame rate of the speech-feature vector sequence is a practical way to alleviate these constraints. Previously, we proposed a method that adjusts the transition probabilities of a hidden Markov model to counteract the degradation of recognition accuracy caused by the frame-rate mismatch between the input and the original model. That method, referred to as the adapting-then-connecting approach, adapts each model individually and then connects the adapted models into a word network for speech recognition. We have found that this adaptation approach introduces transitions that skip too many states, increasing the number of insertion errors. In this study, we propose an improved model adaptation approach, the connecting-then-adapting approach, which first connects the individual models into a word network and then adapts the connected network for speech recognition. The new approach computes the transition matrix of the connected model, adapts it according to the frame rate, and then creates a transition arc for each resulting transition probability. It aligns the speech-feature sequence with the states of the word network more accurately and therefore reduces insertion errors. We conducted experiments to evaluate the new approach and analyzed the results with respect to insertion, deletion, and substitution errors. The results indicate that the proposed method achieves a better recognition rate than the previous one.


Subject(s)
Neural Networks, Computer , Pattern Recognition, Automated/methods , Speech Recognition Software , Algorithms , Humans , Markov Chains , Smartphone , Speech , User-Computer Interface
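
A rough sketch of the connecting-then-adapting order of operations described in this abstract: connect the per-word transition matrices into one network-level matrix, adapt the connected matrix to the reduced frame rate, and enumerate the resulting transition arcs. The adaptation step here is approximated by a matrix power (one low-rate frame spanning several original frames), which is an assumption for illustration rather than the paper's adaptation formula; model sizes and probabilities are toy values.

```python
# Illustrative sketch, not the authors' implementation.
import numpy as np

def connect_models(transition_mats):
    """Place per-word transition matrices on the block diagonal and route the
    exit probability of each word's last state into the next word's first
    state, forming a simple word chain."""
    sizes = [a.shape[0] for a in transition_mats]
    A = np.zeros((sum(sizes), sum(sizes)))
    offset = 0
    for i, a in enumerate(transition_mats):
        s = sizes[i]
        A[offset:offset + s, offset:offset + s] = a
        if i + 1 < len(transition_mats):
            A[offset + s - 1, offset + s] = 1.0 - a[s - 1, s - 1]
        offset += s
    return A

def adapt_to_frame_rate(A, reduction_factor):
    """Assumed adaptation: one reduced-rate step spans `reduction_factor`
    original steps, so the connected matrix is raised to that power."""
    return np.linalg.matrix_power(A, reduction_factor)

# Usage: two toy 3-state left-to-right word models, frame rate halved.
word = np.array([[0.6, 0.4, 0.0],
                 [0.0, 0.6, 0.4],
                 [0.0, 0.0, 0.6]])
A_net = connect_models([word, word])
A_rfr = adapt_to_frame_rate(A_net, reduction_factor=2)
arcs = [(i, j, p) for (i, j), p in np.ndenumerate(A_rfr) if p > 0]
```
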
3.
J Acoust Soc Am ; 140(2): EL204, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27586781

ABSTRACT

Hidden Markov models have been widely applied to systems with sequential data. However, the conditional independence of the state outputs limits the output of a hidden Markov model to a piecewise-constant random sequence, which is a poor approximation of many real processes. In this paper, a high-order hidden Markov model for piecewise linear processes is proposed to better approximate the behavior of a real process. A parameter estimation method based on the expectation-maximization algorithm was derived for the proposed model. Experiments on speech recognition of noisy Mandarin digits were conducted to examine the effectiveness of the proposed method. The results show that the proposed method reduces the recognition error rate compared with a baseline hidden Markov model.


Subject(s)
Markov Chains , Speech Perception/physiology , Algorithms , Databases, Factual , Female , Humans , Male , Speech
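
As a hedged illustration of why a higher-order dependency helps with piecewise-linear behaviour, the forward recursion below uses an emission whose mean interpolates between the previous and the current state's output level, so the expected output can ramp between levels instead of staying piecewise constant. The Gaussian emission, state count, and all parameter values are assumptions for illustration; this is not the paper's model or its EM estimator.

```python
# Toy forward pass for an HMM whose emission depends on the previous state
# as well as the current one (all parameters are illustrative).
import numpy as np
from scipy.stats import norm

N = 3                                   # number of states (assumed)
pi = np.array([1.0, 0.0, 0.0])          # start in the first state
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])         # left-to-right transitions
mu = np.array([0.0, 1.0, 2.0])          # per-state output levels
sigma = 0.2

def emission(prev, cur, x):
    """Mean interpolates between the previous and current state levels,
    which is what lets the model follow a linear segment."""
    return norm.pdf(x, loc=0.5 * (mu[prev] + mu[cur]), scale=sigma)

def forward(obs):
    """alpha[t, j] = P(obs[:t+1], state_t = j) under the toy model."""
    T = len(obs)
    alpha = np.zeros((T, N))
    alpha[0] = pi * norm.pdf(obs[0], loc=mu, scale=sigma)  # no previous state yet
    for t in range(1, T):
        for j in range(N):
            alpha[t, j] = sum(alpha[t - 1, i] * A[i, j] * emission(i, j, obs[t])
                              for i in range(N))
    return alpha

obs = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])  # a ramping signal
print(forward(obs)[-1].sum())                           # sequence likelihood
```
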
4.
J Acoust Soc Am ; 135(3): EL166-71, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606311

ABSTRACT

In distributed speech recognition (DSR), data packets may be lost over error-prone channels. A commonly used remedy is to reconstruct a full-frame-rate data sequence for recognition using linear interpolation. In this study, an error-concealment decoding method is proposed that dynamically adapts the transition probabilities of hidden Markov models to match the frame-loss pattern of the observation sequence. Experimental results show that a DSR system using the proposed method achieves the same level of accuracy as a data-reconstruction method, is more robust against heavy frame loss, and significantly reduces the computation time.


Subject(s)
Acoustics , Models, Statistical , Pattern Recognition, Automated , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Production Measurement , Algorithms , Female , Humans , Likelihood Functions , Male
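
One way to read the adaptation described here, sketched under stated assumptions: during decoding, the transition matrix applied at each step is raised to the size of the gap between consecutive surviving frames, so lost frames are absorbed into the transitions instead of being reconstructed by interpolation. The matrix-power form and all model values are illustrative, not the paper's decoder.

```python
# Toy Viterbi score with loss-adapted transitions (illustrative only).
import numpy as np

def viterbi_score_with_loss(log_b, received, A, pi):
    """Best-path log-score over the surviving frames; the transition matrix
    used at step t is A raised to the gap between surviving frame indices."""
    T, N = log_b.shape
    delta = np.log(pi + 1e-12) + log_b[0]
    for t in range(1, T):
        gap = received[t] - received[t - 1]        # 1 when no frame was lost
        logA_gap = np.log(np.linalg.matrix_power(A, gap) + 1e-12)
        delta = (delta[:, None] + logA_gap).max(axis=0) + log_b[t]
    return delta.max()

# Usage: 3-state left-to-right toy model; frames 2 and 3 of a 6-frame
# utterance are lost, leaving surviving frame indices 0, 1, 4, 5.
A = np.array([[0.7, 0.3, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
pi = np.array([1.0, 0.0, 0.0])
received = np.array([0, 1, 4, 5])
log_b = np.log(np.random.rand(len(received), 3))   # stand-in emission scores
print(viterbi_score_with_loss(log_b, received, A, pi))
```
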
5.
IEEE Trans Cybern ; 43(6): 2114-21, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23757520

ABSTRACT

The frame rate of the observation sequence in distributed speech recognition applications may be reduced to suit a resource-limited front-end device. To use models trained on full-frame-rate data for recognizing reduced-frame-rate (RFR) data, we propose a method that adapts the transition probabilities of hidden Markov models (HMMs) to match the frame rate of the observation. Experiments on the recognition of clean and noisy connected digits are conducted to evaluate the proposed method. The results show that it effectively compensates for the frame-rate mismatch between the training and test data. Using the adapted model to recognize RFR speech data significantly reduces the computation time while achieving the same level of accuracy as a method that restores the frame rate by data interpolation.


Subject(s)
Algorithms , Artificial Intelligence , Models, Statistical , Pattern Recognition, Automated/methods , Speech Production Measurement/methods , Speech Recognition Software , Computer Simulation , Humans , Information Storage and Retrieval , Markov Chains
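
For contrast with the baseline mentioned in this abstract, the sketch below shows the two front-end options: decimating a feature sequence to a reduced frame rate, and restoring the full rate by linear interpolation. The proposed alternative, adapting the HMM transition probabilities instead (sketched under entry 2 above), avoids the restoration step. Feature dimensions and the reduction factor are assumed values.

```python
# Illustrative frame-rate reduction and interpolation baseline (toy data).
import numpy as np

def reduce_frame_rate(features, factor):
    """Keep every `factor`-th frame of a (T, D) feature sequence."""
    return features[::factor]

def restore_by_interpolation(rfr_features, factor, original_len):
    """Linearly interpolate the decimated sequence back to `original_len`
    frames, one feature dimension at a time."""
    kept = np.arange(0, original_len, factor)
    full = np.arange(original_len)
    return np.stack([np.interp(full, kept, rfr_features[:, d])
                     for d in range(rfr_features.shape[1])], axis=1)

# Usage: a toy 100-frame, 13-dimensional feature sequence at half frame rate.
feat = np.random.randn(100, 13)
rfr = reduce_frame_rate(feat, factor=2)             # (50, 13)
rec = restore_by_interpolation(rfr, 2, len(feat))   # (100, 13)
```
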