1.
Front Neurorobot ; 18: 1398703, 2024.
Article in English | MEDLINE | ID: mdl-38831877

ABSTRACT

Introduction: In recent years, heightened interest has been shown in classifying scene images depicting diverse robotic environments. This surge can be attributed to significant improvements in visual sensor technology, which have enhanced image analysis capabilities. Methods: Advances in vision technology have a major impact on multiple object detection and scene understanding. These tasks are integral to a variety of technologies, including scene integration in augmented reality, robot navigation, autonomous driving systems, and tourist information applications. Despite significant strides in visual interpretation, numerous challenges persist, including semantic understanding, occlusion, orientation, scarcity of labeled data, uneven illumination (shadows and lighting), variation in direction, object size, and changing backgrounds. To overcome these challenges, we propose an innovative scene recognition framework that proved highly effective and yielded remarkable results. First, we preprocess the scene data using kernel convolution. Second, we perform semantic segmentation with a UNet. We then extract features from the segmented data using the discrete wavelet transform (DWT), Sobel and Laplacian edge operators, and textural analysis (local binary patterns). A deep belief network recognizes the objects, after which object-to-object relations are derived. Finally, AlexNet assigns the relevant labels to the scene based on the objects recognized in the image. Results: The performance of the proposed system was validated on three standard datasets: PASCAL VOC-12, Cityscapes, and Caltech 101. The accuracy attained on the PASCAL VOC-12 dataset exceeds 96%, with 95.90% on the Cityscapes dataset. Discussion: Furthermore, the model demonstrates a commendable accuracy of 92.2% on the Caltech 101 dataset, marking a noteworthy advance beyond the capabilities of current models.
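As a rough illustration of the feature-extraction stage described above (DWT, Sobel/Laplacian edges, and local binary patterns), the following Python sketch uses standard libraries; the Haar wavelet, the chosen statistics, and all function names are assumptions, not the authors' implementation.

```python
import numpy as np
import pywt
from scipy import ndimage
from skimage.feature import local_binary_pattern

def extract_region_features(region: np.ndarray) -> np.ndarray:
    """Concatenate wavelet, edge, and texture descriptors for one segment."""
    g = region.astype(float)
    # Single-level 2D discrete wavelet transform (Haar as a common default).
    cA, (cH, cV, cD) = pywt.dwt2(g, "haar")
    dwt_stats = [c.mean() for c in (cA, cH, cV, cD)] + [c.std() for c in (cA, cH, cV, cD)]
    # Edge responses from Sobel and Laplacian operators.
    sobel_mag = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))
    lap = ndimage.laplace(g)
    edge_stats = [sobel_mag.mean(), sobel_mag.std(), lap.mean(), lap.std()]
    # Texture as a uniform local-binary-pattern histogram.
    lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([dwt_stats, edge_stats, hist])

segment = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder UNet segment
print(extract_region_features(segment).shape)               # (22,)
```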

2.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793886

ABSTRACT

The domain of human locomotion identification through smartphone sensors is expanding rapidly as a research area. It holds significant potential across various sectors, including healthcare, sports, security systems, home automation, and real-time location tracking. Despite the considerable volume of existing research, most of it has concentrated on locomotion activities; comparatively little emphasis has been placed on recognizing human localization patterns. In the current study, we introduce a system that recognizes both human physical and location-based patterns using the capabilities of smartphone sensors. Our goal is to accurately identify different physical and localization activities, such as walking, running, jumping, and indoor and outdoor activities. To achieve this, we preprocess the raw sensor data with a Butterworth filter for the inertial sensors and a median filter for the Global Positioning System (GPS), and then apply Hamming windowing to segment the filtered data. We then extract features from the raw inertial and GPS signals and select the relevant ones with the variance-threshold feature selection method. The Extrasensory dataset exhibits an imbalanced number of samples for certain activities; to address this, a permutation-based data augmentation technique is employed. The augmented features are optimized with the Yeo-Johnson power transformation before being passed to a multi-layer perceptron for classification. We evaluate the system using k-fold cross-validation. The datasets used in this study, Extrasensory and Sussex-Huawei Locomotion (SHL), contain both physical and localization activities. Our experiments demonstrate high accuracy: 96% and 94% on Extrasensory and SHL for physical activities, and 94% and 91% on Extrasensory and SHL for location-based activities, outperforming previous state-of-the-art methods on both types of activities.
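The filtering, windowing, selection, and classification chain described here maps onto common SciPy/scikit-learn calls. Below is a minimal sketch under assumed parameters (cutoff frequency, window length, network size); the random arrays are placeholders for real sensor streams and features.

```python
import numpy as np
from scipy.signal import butter, filtfilt, medfilt
from sklearn.feature_selection import VarianceThreshold
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import PowerTransformer

fs, win = 50.0, 128
imu_raw, gps_raw = np.random.randn(2000), np.random.randn(2000)  # placeholder streams

b, a = butter(3, 3.0, btype="low", fs=fs)   # low-pass Butterworth for inertial data
imu = filtfilt(b, a, imu_raw)               # zero-phase filtering
gps = medfilt(gps_raw, kernel_size=5)       # median filter for the GPS track

# Hamming-weighted segmentation into half-overlapping fixed-length windows.
w = np.hamming(win)
segments = np.stack([imu[s:s + win] * w for s in range(0, len(imu) - win + 1, win // 2)])

# Downstream: variance-threshold selection, Yeo-Johnson scaling, MLP classification.
X, y = np.random.randn(200, 24), np.random.randint(0, 5, 200)   # placeholder features/labels
X = VarianceThreshold(threshold=0.1).fit_transform(X)
X = PowerTransformer(method="yeo-johnson").fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```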


Subject(s)
Algorithms , Biosensing Techniques , Geographic Information Systems , Wearable Electronic Devices , Humans , Biosensing Techniques/methods , Locomotion/physiology , Smartphone , Walking/physiology , Internet of Things
3.
Front Physiol ; 15: 1344887, 2024.
Article in English | MEDLINE | ID: mdl-38449788

ABSTRACT

Human activity recognition (HAR) plays a pivotal role in various domains, including healthcare, sports, robotics, and security. With the growing popularity of wearable devices, particularly inertial measurement units (IMUs) and ambient sensors, researchers and engineers have sought to take advantage of these advances to accurately and efficiently detect and classify human activities. This research paper presents an advanced methodology for human activity and localization recognition, utilizing smartphone IMU, ambient, GPS, and audio sensor data from two public benchmark datasets: the Opportunity dataset and the Extrasensory dataset. The Opportunity dataset was collected from 12 subjects participating in a range of daily activities, and it captures data from various body-worn and object-associated sensors. The Extrasensory dataset features data from 60 participants, including thousands of data samples from smartphone and smartwatch sensors, labeled with a wide array of human activities. Our study incorporates novel feature extraction techniques for signal, GPS, and audio sensor data. Specifically, GPS, audio, and IMU sensors are utilized for localization, while IMU and ambient sensors are employed for locomotion activity recognition. To achieve accurate activity classification, state-of-the-art deep learning techniques, such as convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, have been explored: CNNs are applied for indoor/outdoor activities, while LSTMs are utilized for locomotion activity recognition. The proposed system has been evaluated using k-fold cross-validation, achieving accuracy rates of 97% and 89% for locomotion activity on the Opportunity and Extrasensory datasets, respectively, and 96% for indoor/outdoor activity on the Extrasensory dataset. These results highlight the efficiency of our methodology in accurately detecting various human activities, showing its potential for real-world applications. Moreover, the paper introduces a hybrid system that combines machine learning and deep learning features, enhancing activity recognition performance by leveraging the strengths of both approaches.
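For the locomotion branch, a compact LSTM of the kind mentioned above could look like the following sketch; the layer sizes, channel count, and class count are assumptions.

```python
import torch
import torch.nn as nn

class LocomotionLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

model = LocomotionLSTM()
windows = torch.randn(32, 128, 6)      # 32 windows, 128 samples, 6 IMU axes (placeholders)
logits = model(windows)                # (32, 8) class scores
```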

4.
Sensors (Basel) ; 24(3)2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38339452

ABSTRACT

Advancements in sensing technology have expanded the capabilities of both wearable devices and smartphones, which are now commonly equipped with inertial sensors such as accelerometers and gyroscopes. Initially, these sensors were used for device feature advancement, but now they can serve a variety of applications. Human activity recognition (HAR) is an interesting research area with many applications, such as health monitoring, sports, fitness, and medical purposes. In this research, we designed an advanced system that recognizes different human locomotion and localization activities. The data were collected from raw sensors and contain noise. In the first step, we detail our noise removal process, which employs a Chebyshev type 1 filter to clean the raw sensor data; the signal is then segmented using Hamming windows. After that, features are extracted for the different sensors. To select the best features for the system, recursive feature elimination is used. We then apply the SMOTE data augmentation technique to address the imbalanced nature of the Extrasensory dataset. Finally, the augmented, balanced data are passed to a long short-term memory (LSTM) deep learning classifier. The datasets used in this research were Real World HAR, Real-Life HAR, and Extrasensory. The presented system achieved 89% on Real-Life HAR, 85% on Real World HAR, and 95% on the Extrasensory dataset, outperforming the available state-of-the-art methods.
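A minimal sketch of the Chebyshev filtering, recursive feature elimination, and SMOTE balancing steps described above; the filter order, ripple, RFE base estimator, and placeholder arrays are assumptions not specified in the abstract.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from scipy.signal import cheby1, filtfilt
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

fs = 50.0
b, a = cheby1(4, rp=0.5, Wn=10.0, btype="low", fs=fs)  # Chebyshev type 1 low-pass
clean = filtfilt(b, a, np.random.randn(5000))          # placeholder noisy sensor stream

# Recursive feature elimination keeps the most informative feature columns.
X, y = np.random.randn(300, 40), np.random.randint(0, 4, 300)   # placeholders
X_sel = RFE(LogisticRegression(max_iter=1000), n_features_to_select=15).fit_transform(X, y)

# SMOTE synthesizes minority-class samples to balance the training set.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_sel, y)
```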


Subject(s)
Exercise , Wearable Electronic Devices , Humans , Locomotion , Human Activities , Recognition, Psychology
5.
Micromachines (Basel) ; 14(12)2023 Dec 03.
Article in English | MEDLINE | ID: mdl-38138373

ABSTRACT

Multiple Internet of Healthcare Things (IoHT)-based devices have been utilized as sensing methodologies for human locomotion decoding to aid applications related to e-healthcare. Different measurement conditions affect daily routine monitoring, including the sensor type, wearing style, data retrieval method, and processing model. Several models in this domain currently combine techniques for pre-processing, descriptor extraction and reduction, and classification of data captured from multiple sensors. However, such models, built on multiple subject-based data using different techniques, may degrade the accuracy of locomotion decoding. Therefore, this study proposes a deep neural network model that not only applies state-of-the-art quaternion-based filtration to motion and ambient data, along with background subtraction and skeleton modeling for video-based data, but also learns important descriptors from novel graph-based representations and Gaussian Markov random field mechanisms. Due to the non-linear nature of the data, these descriptors are further used to extract a codebook via a Gaussian mixture regression model. The codebook is then provided to a recurrent neural network to classify the activities for the locomotion-decoding system. We show the validity of the proposed model on two publicly available datasets, HWU-USP and LARa. The proposed model significantly improves over previous systems, achieving 82.22% and 82.50% accuracy on the HWU-USP and LARa datasets, respectively. The proposed IoHT-based locomotion-decoding model is useful for unobtrusive human activity recognition over extended periods in e-healthcare facilities.
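The codebook step could be approximated by fitting a Gaussian mixture over the learned descriptors and encoding each one by its component responsibilities. This is only a stand-in sketch (the paper uses Gaussian mixture regression), with the codeword count and placeholder descriptors assumed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

descriptors = np.random.randn(1000, 16)     # placeholder graph/GMRF descriptors
codebook = GaussianMixture(n_components=32, covariance_type="diag", random_state=0)
codebook.fit(descriptors)

# Each descriptor becomes a 32-dimensional soft-assignment vector for the RNN.
encoded = codebook.predict_proba(descriptors)
print(encoded.shape)                        # (1000, 32)
```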

6.
Sensors (Basel) ; 23(17)2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37687819

ABSTRACT

Ubiquitous computing has been a fertile research area that has managed to attract and sustain the attention of researchers for some time now. Human activity recognition and localization, as ubiquitous computing applications, have likewise attracted extensive work; they are used in healthcare monitoring, behavior analysis, personal safety, and entertainment. A robust model is proposed in this article that works over IoT data extracted from smartphone and smartwatch sensors to recognize the activities performed by the user and, in the meantime, classify the location at which the human performed that particular activity. The system starts by denoising the input signal using a second-order Butterworth filter and then uses a Hamming window to divide the signal into small data chunks. Multiple stacked windows are generated using three windows per stack, which in turn helps produce more reliable features (see the sketch below). The stacked data are then transferred to two parallel feature extraction blocks, i.e., human activity recognition and human localization, and the respective features are extracted for both modules to reinforce the system's accuracy. Recursive feature elimination is applied to the features of both categories independently to select the most informative ones. After feature selection, a genetic algorithm is used to generate ten different generations of each feature vector for data augmentation purposes, which directly impacts the system's performance. Finally, a deep neural decision forest is trained to classify the activity and the subject's location, working on both attributes in parallel. For evaluation and testing, two openly accessible benchmark datasets, the ExtraSensory dataset and the Sussex-Huawei Locomotion dataset, were used. The system outperformed the available state-of-the-art systems, recognizing human activities with an accuracy of 88.25% and classifying the location with an accuracy of 90.63% on the ExtraSensory dataset; for the Sussex-Huawei Locomotion dataset, the respective accuracies were 96.00% and 90.50%.
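The stacked-window idea (three consecutive Hamming windows per stack) can be sketched as follows; the window length and stride are assumptions.

```python
import numpy as np

def stacked_windows(signal, win=100, stride=50, per_stack=3):
    w = np.hamming(win)
    singles = [signal[s:s + win] * w
               for s in range(0, len(signal) - win + 1, stride)]
    # Group consecutive windows into one stack for steadier downstream features.
    return np.stack([np.concatenate(singles[i:i + per_stack])
                     for i in range(len(singles) - per_stack + 1)])

stream = np.random.randn(2000)     # placeholder Butterworth-denoised stream
stacks = stacked_windows(stream)
print(stacks.shape)                # (37, 300): 37 stacks of three 100-sample windows
```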


Subject(s)
Human Activities , Recognition, Psychology , Humans , Memory , Benchmarking , Intelligence
7.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687978

ABSTRACT

Gestures have long been used for nonverbal communication, and human-computer interaction (HCI) via gestures is becoming increasingly common in the modern era. To obtain a high recognition rate, traditional interfaces rely on various devices, such as gloves, physical controllers, and markers. This study provides a new markerless technique for capturing gestures without barriers or expensive hardware. In this paper, dynamic gestures are first converted into frames; noise is removed and intensity is adjusted in preparation for feature extraction. The hand is detected in the images, and its skeleton is computed mathematically. From the skeleton, features are extracted, including the joint color cloud, neural gas, and a directional active model. The features are then optimized, and a selected feature set is passed to a recurrent neural network (RNN) classifier to obtain higher-accuracy classification results. The proposed model is experimentally assessed and trained on three datasets: HaGRI, EgoGesture, and Jester. The experimental results on the three datasets show improved classification: the proposed system achieved an accuracy of 92.57% on HaGRI, 91.86% on EgoGesture, and 91.57% on the Jester dataset. To check the model's reliability, the proposed method was also tested on the WLASL dataset, attaining 90.43% accuracy. This paper also includes a comparison with other state-of-the-art methods to benchmark our model against standard recognition approaches. Our model delivers a higher accuracy rate with a markerless approach, saving money and time while classifying gestures for better interaction.
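The first stage, converting dynamic gestures into denoised, intensity-adjusted frames, might be sketched with OpenCV as below; the clip name and filter choices are placeholders.

```python
import cv2

cap = cv2.VideoCapture("gesture_clip.mp4")     # placeholder file name
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # noise removal
    gray = cv2.equalizeHist(gray)              # intensity adjustment
    frames.append(gray)
cap.release()
print(f"extracted {len(frames)} preprocessed frames")
```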


Subject(s)
Gestures , Nerve Agents , Humans , Automation , Neural Networks, Computer , Recognition, Psychology
8.
Sensors (Basel) ; 23(18)2023 Sep 16.
Article in English | MEDLINE | ID: mdl-37765984

ABSTRACT

Smart home monitoring systems based on the Internet of Things (IoT) are needed to take care of elders at home, giving families and caregivers the flexibility to monitor elders remotely. Activities of daily living are an efficient way to monitor elderly people at home and patients at caregiving facilities, and monitoring such actions depends largely on IoT-based devices, either wireless or installed at different places. This paper proposes an effective and robust layered architecture that uses multisensory devices to recognize activities of daily living from anywhere. Multimodality refers to sensory devices of multiple types working together to achieve the objective of remote monitoring; the proposed multimodal approach therefore fuses IoT devices, such as wearable inertial sensors, with videos recorded during daily routines. Data from these multiple sensors are processed in a pre-processing layer through several stages: data filtration, segmentation, landmark detection, and 2D stick modeling. In the next layer, feature processing, different features from the multimodal sensors are extracted, fused, and optimized. The final layer, classification, recognizes the activities of daily living via a deep learning technique, the convolutional neural network. The proposed IoT-based multimodal layered system achieves an acceptable mean accuracy rate of 84.14%.
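A toy sketch of the 2D stick-model stage: given detected landmarks, connect them into a skeleton drawing. The keypoints and edge list here are illustrative placeholders, not the output of the authors' landmark detector.

```python
import numpy as np
import cv2

# Illustrative landmark positions (x, y) and skeleton connectivity.
keypoints = {"head": (160, 40), "neck": (160, 80), "hip": (160, 180),
             "l_hand": (100, 140), "r_hand": (220, 140),
             "l_foot": (130, 280), "r_foot": (190, 280)}
edges = [("head", "neck"), ("neck", "hip"), ("neck", "l_hand"),
         ("neck", "r_hand"), ("hip", "l_foot"), ("hip", "r_foot")]

canvas = np.zeros((320, 320, 3), np.uint8)
for a, b in edges:
    cv2.line(canvas, keypoints[a], keypoints[b], (0, 255, 0), 2)   # limb segment
for pt in keypoints.values():
    cv2.circle(canvas, pt, 4, (0, 0, 255), -1)                     # joint marker
cv2.imwrite("stick_model.png", canvas)
```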

9.
Sensors (Basel) ; 23(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37430630

ABSTRACT

Locomotion prediction for human welfare has gained tremendous interest in the past few years. Multimodal locomotion prediction, built from small activities of daily living, is an efficient approach to supporting healthcare, but the complexity of motion signals, together with video processing, makes a good accuracy rate challenging for researchers. Multimodal Internet of Things (IoT)-based locomotion classification has helped in solving these challenges. In this paper, we propose a novel multimodal IoT-based locomotion classification technique using three benchmark datasets, each containing at least three types of data: physical motion, ambient, and vision-based sensors. The raw data are filtered with different techniques for each sensor type; the ambient and physical motion sensor data are then windowed, and a skeleton model is retrieved from the vision-based data. Features are then extracted and optimized using state-of-the-art methodologies. Finally, experiments verified that the proposed locomotion classification system is superior to conventional approaches, particularly on multimodal data. The system achieved accuracy rates of 87.67% and 86.71% on the HWU-USP and Opportunity++ datasets, respectively; the mean accuracy rate of 87.0% exceeds the traditional methods proposed in the literature.

10.
PeerJ Comput Sci ; 9: e1355, 2023.
Article in English | MEDLINE | ID: mdl-37346503

ABSTRACT

Innovative technology and improvements in intelligent machinery, transportation facilities, emergency systems, and educational services define the modern era, yet comprehending a scenario, analyzing crowds, and observing persons remain difficult. This article recommends an organized e-learning-based multi-object tracking and prediction framework for crowd data via a multilayer perceptron, taking e-learning crowd data covering usual and abnormal actions and activities as input. After superpixel and fuzzy c-means segmentation, we used fused dense optical flow and gradient patches for feature extraction, and for multi-object tracking we applied a compressive tracking algorithm and a Taylor series predictive tracking approach. The next step computes the mean, variance, speed, and frame occupancy used for trajectory extraction. To reduce data complexity and for optimization, we applied t-distributed stochastic neighbor embedding (t-SNE). For predicting normal and abnormal actions in e-learning-based crowd data, we used a multilayer perceptron (MLP) to classify the numerous classes. We used three crowd-activity datasets, the University of California San Diego Pedestrian (UCSD-Ped), ShanghaiTech, and Indian Institute of Technology Bombay (IITB) Corridor datasets, for experimental estimation over human and non-human-based videos. We achieve mean accuracies of 87.00% on UCSD-Ped, 85.75% on ShanghaiTech, and 88.00% on the IITB Corridor dataset.
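A minimal sketch of the t-SNE reduction feeding the MLP, with placeholder trajectory features and labels:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPClassifier

feats = np.random.randn(400, 30)        # placeholder mean/variance/speed/occupancy features
labels = np.random.randint(0, 2, 400)   # placeholder normal vs. abnormal labels
embedded = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(feats)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(embedded, labels)
```

Note that t-SNE provides no transform for unseen samples, so a deployed system would embed training and test data jointly or substitute a parametric reducer.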

11.
Int J Surg Case Rep ; 102: 107834, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36535177

ABSTRACT

INTRODUCTION: Hernias of the posterior rectus sheath are very rare abdominal wall hernias, with only around 15 cases reported to date. CLINICAL PRESENTATION: This case report examines a 27-year-old female who presented with epigastric abdominal pain and vomiting. An abdominal CT scan showed signs of small bowel obstruction (SBO) and herniation of the small bowel at the posterior rectus sheath. The patient underwent exploratory laparotomy, which revealed a right-sided obstructed posterior rectus sheath hernia that was repaired with primary closure. Postoperatively, the patient did well and was discharged on postoperative day 3 in good general condition. CONCLUSION: The patient had no complaints at her one-month follow-up. Given its rarity and potential complications, it is important to report this case to enhance the evidence base for posterior rectus sheath hernia and to familiarise radiologists, clinicians, and surgeons with this uncommon condition.

12.
PeerJ Comput Sci ; 8: e1105, 2022.
Article in English | MEDLINE | ID: mdl-36262158

ABSTRACT

Human locomotion is an important topic of discussion among researchers, and predicting human motion using multiple techniques and algorithms has long been a motivating subject. Different methods have shown the ability to recognize simple motion patterns; however, predicting the dynamics of complex locomotion patterns is still immature. This article therefore proposes unique methods, including a calibration-based filter algorithm and kinematic-static pattern identification, for predicting such complex activities from fused signals. Different types of signals are extracted from benchmark datasets and pre-processed using a novel calibration-based filter for inertial signals along with a Bessel filter for physiological signals. Next, overlapping sliding windows are used to obtain motion patterns defined over time, and a polynomial probability distribution is suggested to decide the nature of the motion patterns. For feature extraction based on kinematic-static patterns, time- and probability-domain features are extracted over the Physical Action Dataset (PAD) and the Growing Old Together Validation (GOTOV) dataset. The features are further optimized using quadratic discriminant analysis and orthogonal fuzzy neighborhood discriminant analysis. Manifold regularization algorithms have also been applied to assess the performance of the proposed prediction system. We achieved an accuracy rate of 82.50% for patterned signals on the Physical Action Dataset and 81.90% on the GOTOV dataset. As a result, the proposed system outperformed the other state-of-the-art models in the literature.
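The physiological-signal branch (Bessel filtering) and the quadratic discriminant analysis step could be sketched as follows, with the filter order, cutoff, and feature shapes assumed:

```python
import numpy as np
from scipy.signal import bessel, filtfilt
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

fs = 100.0
b, a = bessel(4, 5.0, btype="low", fs=fs)    # Bessel low-pass: gentle phase response
physio = np.random.randn(4000)                # placeholder physiological stream
smooth = filtfilt(b, a, physio)

X, y = np.random.randn(250, 12), np.random.randint(0, 3, 250)   # placeholder features
qda = QuadraticDiscriminantAnalysis().fit(X, y)
```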

13.
PeerJ Comput Sci ; 7: e764, 2021.
Article in English | MEDLINE | ID: mdl-34901426

ABSTRACT

The study of human posture analysis and gait event detection from various types of inputs is a key contribution to the human life log; with the help of such research and technologies, humans can save time and utility resources. In this paper we present a robust approach to human posture analysis and gait event detection from complex video-based data. Initially, posture and landmark information and the human 2D skeleton mesh are extracted, and from this information set we reconstruct the human model from 2D to 3D. Contextual features are then extracted, namely degrees of freedom over the detected body parts, joint angle information, periodic and non-periodic motion, and human motion direction flow. For feature mining, we applied a rule-based technique, and for gait event detection and classification, a deep learning-based CNN is applied over the MPII Video Pose, COCO, and PoseTrack datasets. For the MPII Video Pose dataset, we achieved a human landmark detection mean accuracy of 87.09% and a gait event recognition mean accuracy of 90.90%; for the COCO dataset, 87.36% and 89.09%; and for the PoseTrack dataset, 87.72% and 88.18%, respectively. The proposed system shows a significant improvement over existing state-of-the-art frameworks.
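One of the contextual features named above, the joint angle, reduces to a small vector computation over three 2D keypoints; the coordinates below are illustrative.

```python
import numpy as np

def joint_angle(p_prev, p_joint, p_next):
    """Interior angle (degrees) at p_joint between the two limb segments."""
    v1 = np.asarray(p_prev, float) - np.asarray(p_joint, float)
    v2 = np.asarray(p_next, float) - np.asarray(p_joint, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: knee angle from hip, knee, and ankle keypoints (illustrative pixels).
print(joint_angle((100, 200), (110, 260), (105, 320)))
```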

14.
Entropy (Basel) ; 23(5)2021 May 18.
Article in English | MEDLINE | ID: mdl-34069994

ABSTRACT

To prevent disasters and to control and supervise crowds, automated video surveillance has become indispensable. In today's complex and crowded environments, manual surveillance and monitoring systems are inefficient, labor intensive, and unwieldy. Automated video surveillance systems offer promising solutions, but challenges remain. One of the major challenges is the extraction of true foregrounds of pixels representing humans only. Furthermore, to accurately understand and interpret crowd behavior, human crowd behavior (HCB) systems require robust feature extraction methods, along with powerful and reliable decision-making classifiers. In this paper, we describe our approach to these issues by presenting a novel Particles Force Model for multi-person tracking, a vigorous fusion of global and local descriptors, and a robust improved entropy classifier for detecting and interpreting crowd behavior. In the proposed model, necessary preprocessing steps are followed by the application of a first distance algorithm for the removal of background clutter; true-foreground elements are then extracted via the Particles Force Model. The detected human forms are counted by labeling and cluster estimation, using a k-nearest neighbors search algorithm. After that, the location of all the human silhouettes is fixed, and multi-person tracking is performed using the Jaccard similarity index and normalized cross-correlation as a cost function. For HCB detection, we introduced human crowd contour extraction as a global feature and a particles gradient motion (PGD) descriptor, along with geometrical and speeded-up robust features (SURF), as local features. After feature extraction, we applied bat optimization to select optimal features; this step also works as a pre-classifier. Finally, we introduced a robust improved entropy classifier for decision making and automated crowd behavior detection in smart surveillance systems. We evaluated the performance of our proposed system on the publicly available PETS2009 and UMN benchmark datasets. Experimental results show that our system performed better than existing well-known state-of-the-art methods, achieving higher accuracy rates. The proposed system can be deployed to great benefit in numerous public places, such as airports, shopping malls, city centers, and train stations, to control, supervise, and protect crowds.
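The tracking cost combining the Jaccard similarity index with normalized cross-correlation might be sketched as below; the box format and the mixing weight are assumptions.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard (IoU) similarity of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def ncc(p, q):
    """Normalized cross-correlation of two equally sized image patches."""
    p, q = p - p.mean(), q - q.mean()
    return float((p * q).sum() / (np.sqrt((p ** 2).sum() * (q ** 2).sum()) + 1e-9))

def match_cost(box_a, box_b, patch_a, patch_b, w=0.5):
    # Low cost when boxes overlap strongly and appearance correlates.
    return w * (1 - jaccard(box_a, box_b)) + (1 - w) * (1 - ncc(patch_a, patch_b))
```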

15.
Entropy (Basel) ; 22(5)2020 May 20.
Article in English | MEDLINE | ID: mdl-33286351

ABSTRACT

Advancements in wearable sensor technologies have prominent effects on the daily life activities of humans. These wearable sensors are gaining awareness in healthcare for the elderly, to ensure their independent living and to improve their comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors, including inertial sensors, i.e., gyroscopes and accelerometers. First, the inertial data are processed via multiple filters, such as Savitzky-Golay, median, and Hampel filters, to examine lower/upper cutoff frequency behaviors. Second, a multifused model extracts statistical, wavelet, and binary features to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adapt learning rate patterns. These optimized patterns are further processed by a maximum entropy Markov model (MEMM) for empirical expectation and highest entropy, which measure signal variances for improved accuracy results. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark dataset and on the Intelligent Media Sporting Behavior (IMSB) dataset, a new self-annotated sports dataset. For evaluation, we used the leave-one-out cross-validation scheme, and the results outperformed existing well-known statistical state-of-the-art methods, achieving improved recognition accuracies of 91.25%, 93.66%, and 90.91% on the USC-HAD, IMSB, and Mhealth datasets, respectively. The proposed system should be applicable to man-machine interface domains, such as health exercises, robot learning, interactive games, and pattern-based surveillance.
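The three smoothing filters compared in the first step map to the following sketch; window sizes and the Hampel threshold are assumed, and since SciPy has no built-in Hampel filter, a simple rolling-median version is written out.

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

def hampel(x, k=5, t=3.0):
    """Replace samples more than t scaled MADs away from the local median."""
    y = x.copy()
    for i in range(k, len(x) - k):
        win = x[i - k:i + k + 1]
        med = np.median(win)
        mad = 1.4826 * np.median(np.abs(win - med))
        if abs(x[i] - med) > t * mad:
            y[i] = med
    return y

raw = np.random.randn(1000)                                  # placeholder inertial channel
sg = savgol_filter(raw, window_length=11, polyorder=3)       # Savitzky-Golay smoothing
md = medfilt(raw, kernel_size=5)                             # median filtering
hp = hampel(raw)                                             # Hampel outlier rejection
```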

16.
Entropy (Basel) ; 22(8)2020 Jul 26.
Article in English | MEDLINE | ID: mdl-33286588

ABSTRACT

Automatic identification of human interaction from video sequences is a challenging task, especially in dynamic environments with cluttered backgrounds. Advancements in computer vision sensor technologies provide powerful support for human interaction recognition (HIR) during routine daily life. In this paper, we propose a novel feature extraction method which incorporates robust entropy optimization and an efficient maximum entropy Markov model (MEMM) for HIR via multiple vision sensors. The main objectives of the proposed methodology are: (1) to propose a hybrid of four novel features, i.e., spatio-temporal features, energy-based features, shape-based angular and geometric features, and a motion-orthogonal histogram of oriented gradients (MO-HOG); (2) to encode the hybrid feature descriptors using a codebook, a Gaussian mixture model (GMM), and Fisher encoding; (3) to optimize the encoded features using a cross-entropy optimization function; (4) to apply a MEMM classification algorithm to examine empirical expectations and highest entropy, which measure pattern variances to achieve outperforming HIR accuracy. Our system is tested on three well-known datasets: SBU Kinect Interaction, UoL 3D Social Activity, and UT-Interaction. Through wide experimentation, the proposed feature extraction algorithm, along with cross-entropy optimization, achieved average accuracy rates of 91.25% on SBU, 90.4% on UoL, and 87.4% on UT-Interaction. The proposed HIR system will be applicable to a wide variety of man-machine interfaces, such as public-place surveillance, future medical applications, virtual reality, fitness exercises, and 3D interactive gaming.

17.
Sensors (Basel) ; 20(22)2020 Nov 21.
Article in English | MEDLINE | ID: mdl-33233412

ABSTRACT

Nowadays, wearable technology can enhance physical human life-log routines by shifting goals from merely counting steps to tackling significant healthcare challenges. Such wearable technology modules have presented opportunities to acquire important information about human activities in real-life environments. The purpose of this paper is to report on recent developments and to project future advances regarding wearable sensor systems for the sustainable monitoring and recording of human life-logs. On the basis of this survey, we propose a model designed to retrieve better information during physical activities in indoor and outdoor environments, in order to improve quality of life and reduce risks. This model uses a fusion of statistical and non-statistical features to recognize different activity patterns from wearable inertial sensors, i.e., triaxial accelerometers, gyroscopes, and magnetometers. These features include the signal magnitude, positive/negative peaks, and position direction, to explore signal orientation changes, position differentiation, temporal variation, and optimal changes among coordinates. The features are processed by a genetic algorithm for the selection and classification of inertial signals, to learn and recognize abnormal human movement. Our model was experimentally evaluated on four benchmark datasets: Intelligent Media Wearable Smart Home Activities (IM-WSHA), a self-annotated physical activities dataset; Wireless Sensor Data Mining (WISDM); the IM-SB dataset with different sporting patterns; and an SMotion dataset with different physical activities. Experimental results show that the proposed feature extraction strategy outperformed others, achieving improved recognition accuracies of 81.92%, 95.37%, 90.17%, and 94.58% on the IM-WSHA, WISDM, IM-SB, and SMotion datasets, respectively.
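Two of the listed features, the signal magnitude vector and positive/negative peak counts, can be sketched as follows; the peak threshold and placeholder stream are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

acc = np.random.randn(1000, 3)                 # placeholder triaxial accelerometer stream
smv = np.linalg.norm(acc, axis=1)              # signal magnitude vector
centered = smv - smv.mean()
pos_peaks, _ = find_peaks(centered, height=0.5)    # prominent positive peaks
neg_peaks, _ = find_peaks(-centered, height=0.5)   # prominent troughs via sign flip
features = [smv.mean(), smv.std(), len(pos_peaks), len(neg_peaks)]
```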


Subject(s)
Accelerometry , Algorithms , Exercise , Wearable Electronic Devices , Humans , Quality of Life
18.
Sensors (Basel) ; 20(14)2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32664434

ABSTRACT

In recent years, interest in scene classification of different indoor-outdoor scene images has increased due to major developments in visual sensor techniques. Scene classification has been demonstrated to be an efficient method for environmental observation, but it is a challenging task given the complexity of multiple objects in scenery images. Such images combine different properties and objects, i.e., color, texture, and regions, and are classified on the basis of optimal features. In this paper, an efficient multiclass object categorization method is proposed for the indoor-outdoor scene classification of scenery images using benchmark datasets. We illustrate two improved methods, the fuzzy c-means and mean shift algorithms, which infer multiple-object segmentation in complex images. Multiple-object categorization is achieved through multiple kernel learning (MKL), which considers local descriptors and region signatures. The relations between multiple objects are then examined by an intersection-over-union algorithm. Finally, scene classification is achieved using multi-class logistic regression (McLR). Experimental evaluation demonstrated that our scene classification method is superior to other conventional methods, especially when dealing with complex images. Our system should be applicable in various domains, such as drone targeting, autonomous driving, global positioning systems, robotics, and tourist guide applications.
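The intersection-over-union check between two segmented regions reduces to a short computation on boolean masks; the example shapes are placeholders.

```python
import numpy as np

def mask_iou(m1: np.ndarray, m2: np.ndarray) -> float:
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union else 0.0

a = np.zeros((100, 100), bool); a[20:60, 20:60] = True   # example segmented region
b = np.zeros((100, 100), bool); b[40:80, 40:80] = True   # overlapping second region
print(mask_iou(a, b))
```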

19.
Sensors (Basel) ; 20(13)2020 Jun 27.
Article in English | MEDLINE | ID: mdl-32605003

ABSTRACT

Implementation of dynamic spectrum access (DSA) in cognitive radio (CR) systems requires the unlicensed secondary users (SU) to implement spectrum sensing to monitor the activity of the licensed primary users (PU). Energy detection (ED) is one of the most widely used methods for spectrum sensing in CR systems, and in this paper we present a novel ED algorithm with an adaptive sensing threshold. The three-event ED (3EED) algorithm for spectrum sensing is considered for which an accurate approximation of the optimal decision threshold that minimizes the decision error probability (DEP) is found using Newton's method with forced convergence in one iteration. The proposed algorithm is analyzed and illustrated with numerical results obtained from simulations that closely match the theoretical results and show that it outperforms the conventional ED (CED) algorithm for spectrum sensing.
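A toy sketch of an energy detector and a one-step Newton threshold refinement; the decision-error function below is a smooth stand-in for the paper's closed-form DEP approximation, so the numbers are purely illustrative.

```python
import numpy as np

def energy_statistic(x):
    """Average received energy over the sensing window."""
    return np.mean(np.abs(x) ** 2)

def newton_step(f, lam, h=1e-4):
    """One Newton iteration toward the minimizer of f, via numeric derivatives."""
    d1 = (f(lam + h) - f(lam - h)) / (2 * h)
    d2 = (f(lam + h) - 2 * f(lam) + f(lam - h)) / h ** 2
    return lam - d1 / d2

rng = np.random.default_rng(0)
noise = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
pu_signal = 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(1000)) + noise

dep = lambda lam: (lam - 1.15) ** 2 + 0.02   # toy stand-in for the DEP curve
lam = newton_step(dep, 1.2)                  # one step lands on the minimizer, 1.15
print(energy_statistic(noise) > lam, energy_statistic(pu_signal) > lam)  # False, True
```

On a quadratic stand-in, a single Newton step reaches the minimizer exactly, which mirrors the forced one-iteration convergence described above.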

20.
Sensors (Basel) ; 14(7): 11735-59, 2014 Jul 02.
Article in English | MEDLINE | ID: mdl-24991942

ABSTRACT

Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and for generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction, and training of a Hidden Markov Model for each activity. Second, after training, the recognition engine recognizes the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people or examining the indoor activities of people at home, in the office, or in hospital.
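The per-activity HMM scheme (train one model per activity, classify by highest likelihood) might be sketched with hmmlearn as below; the feature dimensionality, state count, and random sequences are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_activity_models(sequences_by_activity, n_states=5):
    models = {}
    for name, seqs in sequences_by_activity.items():
        X = np.vstack(seqs)                  # stack (T, d) joint-feature sequences
        lengths = [len(s) for s in seqs]
        models[name] = GaussianHMM(n_components=n_states,
                                   covariance_type="diag", n_iter=50).fit(X, lengths)
    return models

def recognize(models, seq):
    # Choose the activity whose HMM assigns the sequence the highest likelihood.
    return max(models, key=lambda name: models[name].score(seq))

train = {"walk": [np.random.randn(80, 12) for _ in range(5)],
         "sit": [np.random.randn(80, 12) + 2.0 for _ in range(5)]}  # placeholder features
models = train_activity_models(train)
print(recognize(models, np.random.randn(80, 12)))                    # likely "walk"
```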


Subject(s)
Imaging, Three-Dimensional/instrumentation , Information Storage and Retrieval/methods , Motor Activity/physiology , Pattern Recognition, Automated/methods , Photography/instrumentation , Telemedicine/instrumentation , Video Recording/instrumentation , Activities of Daily Living , Aged , Aged, 80 and over , Artificial Intelligence , Equipment Design , Equipment Failure Analysis , Geriatric Assessment/methods , Home Care Services , Humans , Male