Results 1 - 11 of 11
1.
Sensors (Basel) ; 24(1)2023 Dec 22.
Article in English | MEDLINE | ID: mdl-38202937

ABSTRACT

This paper addresses the problem of feature encoding for gait analysis using multimodal time series sensory data. In recent years, the widespread adoption of sensors, e.g., inertial measurement units (IMUs), in everyday wearable devices has drawn the interest of the research community to collecting kinematic and kinetic data for gait analysis. The most crucial step in gait analysis is to find a set of appropriate features from continuous time series data that accurately represents human locomotion. This paper presents a systematic assessment of numerous feature extraction techniques. In particular, three different feature encoding techniques are presented to encode multimodal time series sensory data. In the first technique, we utilize eighteen different handcrafted features extracted directly from the raw sensory data. The second technique follows the Bag-of-Visual-Words model; the raw sensory data are encoded using a pre-computed codebook and a locality-constrained linear coding (LLC)-based feature encoding technique. We evaluate two different machine learning algorithms to assess the effectiveness of the proposed features in encoding the raw sensory data. In the third feature encoding technique, we propose two end-to-end deep learning models to automatically extract features from the raw sensory data. A thorough experimental evaluation is conducted on four large sensory datasets, and their outcomes are compared. A comparison of the recognition results with current state-of-the-art methods demonstrates the computational efficiency and high efficacy of the proposed feature encoding method. The robustness of the proposed feature encoding technique is also evaluated in recognizing human daily activities. Additionally, this paper presents a new dataset consisting of the gait patterns of 42 individuals, gathered using IMU sensors.
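The abstract does not enumerate the eighteen handcrafted features, so the following is only a minimal sketch of how a few typical time-domain features might be extracted from one IMU channel window; the feature names are illustrative, not taken from the paper.

```python
import math

def window_features(window):
    """Compute a few typical handcrafted features for one sensor-channel
    window. Illustrative only; the paper's 18 features are not listed in
    the abstract."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(x * x for x in window) / n)
    # zero-crossing rate: fraction of consecutive sample pairs changing sign
    zcr = sum(1 for a, b in zip(window, window[1:]) if a * b < 0) / (n - 1)
    return {"mean": mean, "std": std, "rms": rms, "zcr": zcr}

# hypothetical accelerometer samples for one sliding window
feats = window_features([0.1, -0.2, 0.3, -0.1, 0.2, -0.3])
```

In practice such features would be computed per channel over sliding windows and concatenated into one vector per window.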


Subject(s)
Gait Analysis , Gait , Humans , Algorithms , Kinetics , Locomotion
2.
Sensors (Basel) ; 21(7)2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33805368

ABSTRACT

Human activity recognition (HAR) aims to recognize the actions of the human body through a series of observations and environmental conditions. The analysis of human activities has drawn the attention of the research community in the last two decades due to its widespread applications, the diverse nature of activities, and the available recording infrastructure. Lately, one of the most challenging applications in this framework is recognizing human body actions using unobtrusive wearable motion sensors. Since activities of daily life (e.g., cooking, eating) comprise several repetitive and circumstantial short sequences of actions (e.g., moving an arm), it is quite difficult to use the sensory data directly for recognition, because multiple sequences of the same activity may exhibit large diversity. However, a similarity can be observed in the temporal occurrence of the atomic actions. Therefore, this paper presents a two-level hierarchical method to recognize human activities using a set of wearable sensors. In the first step, the atomic activities are detected from the original sensory data and their recognition scores are obtained. Secondly, the composite activities are recognized using the scores of the atomic actions. We propose two different methods of feature extraction from atomic scores to recognize composite activities: handcrafted features and features obtained using a subspace pooling technique. The proposed method is evaluated on the large, publicly available CogAge dataset, which contains instances of both atomic and composite activities. The data were recorded using three unobtrusive wearable devices: a smartphone, a smartwatch, and smart glasses. We also investigated the performance of different classification algorithms in recognizing the composite activities.
The proposed method achieved average recognition accuracies of 79% and 62.8% using the handcrafted features and the subspace pooling features, respectively. The recognition results of the proposed technique and their comparison with existing state-of-the-art techniques confirm its effectiveness.
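The second level of the hierarchy builds composite-activity features from the sequence of atomic recognition scores. The exact handcrafted features are not given in the abstract; a minimal illustration is mean- and max-pooling the per-window atomic score vectors into one fixed-length vector.

```python
def composite_features(atomic_scores):
    """Pool a sequence of per-window atomic-action score vectors into one
    fixed-length vector (mean and max pooling per atomic action). A
    simplified stand-in for the paper's handcrafted-feature variant."""
    n_actions = len(atomic_scores[0])
    n_windows = len(atomic_scores)
    means = [sum(s[i] for s in atomic_scores) / n_windows for i in range(n_actions)]
    maxes = [max(s[i] for s in atomic_scores) for i in range(n_actions)]
    return means + maxes

# hypothetical scores for two atomic actions over three windows
seq = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.7]]
vec = composite_features(seq)
```

The resulting vector would then be fed to a composite-activity classifier.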


Subject(s)
Human Activities , Smart Glasses , Algorithms , Humans , Recognition, Psychology , Smartphone
3.
Sensors (Basel) ; 21(5)2021 Feb 27.
Article in English | MEDLINE | ID: mdl-33673425

ABSTRACT

Unmanned Aerial Vehicles (UAVs) are among the latest technologies for high-spatial-resolution 3D modeling of the Earth. The objectives of this study are to assess low-cost UAV data using image radiometric transformation techniques and to investigate their effects on the global and local accuracy of the Digital Surface Model (DSM). This research uses UAV Light Detection and Ranging (LIDAR) data from an 80-meter flying height and UAV image data from 300- and 500-meter flying heights. RAW UAV images acquired from the 500-meter flying height are radiometrically transformed in Matrix Laboratory (MATLAB). UAV images from the 300-meter flying height are processed to generate a 3D point cloud and DSM in Pix4D Mapper. UAV LIDAR data are used for the acquisition of Ground Control Points (GCPs) and for the accuracy assessment of the UAV image data products. The accuracy of the enhanced DSM and the DSM generated from the 300-meter flight height was analyzed in terms of point cloud count, density, and distribution. The Root Mean Square Error (RMSE) of Z improved from ±2.15 meters to ±0.11 meters. For the local accuracy assessment of the DSM, four different types of land cover were statistically compared with UAV LIDAR, showing that the enhancement technique is compatible with UAV LIDAR accuracy.
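The vertical accuracy figure reported above (RMSE of Z) is computed from DSM elevations against check-point elevations; a small sketch with hypothetical values:

```python
import math

def rmse_z(dsm_elev, gcp_elev):
    """Vertical Root Mean Square Error between DSM elevations sampled at
    check points and the reference (e.g., LIDAR-derived GCP) elevations."""
    assert len(dsm_elev) == len(gcp_elev)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(dsm_elev, gcp_elev))
                     / len(dsm_elev))

# hypothetical elevations in meters at three GCP locations
err = rmse_z([101.2, 99.8, 100.5], [101.0, 100.0, 100.4])
```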

4.
Entropy (Basel) ; 23(2)2021 Feb 21.
Article in English | MEDLINE | ID: mdl-33670018

ABSTRACT

Multi-focus image fusion is the process of combining the focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of higher quality and contains more detail than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that groups locally connected pixels with similar colors and patterns, usually referred to as superpixels, and uses them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels, and they carry more distinctive statistical properties when compared with other superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is enforced on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better-quality fused images than existing image fusion techniques.
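The core idea can be sketched as follows: for each corresponding superpixel in the two source images, a focus statistic decides which source is in focus there. The paper's statistics are richer than plain intensity variance, which is used here only as a minimal stand-in.

```python
def focus_map(superpixels_a, superpixels_b):
    """For each corresponding superpixel (given as a flat list of intensity
    samples), pick the source image whose superpixel has the higher local
    variance, a simple focus measure; returns 0 for image A, 1 for image B.
    A simplification of the paper's statistical analysis."""
    def variance(px):
        m = sum(px) / len(px)
        return sum((x - m) ** 2 for x in px) / len(px)
    return [0 if variance(a) >= variance(b) else 1
            for a, b in zip(superpixels_a, superpixels_b)]

# two hypothetical superpixels per source image
m = focus_map([[10, 200, 15], [50, 52, 51]],
              [[100, 101, 99], [0, 255, 10]])
```

A real pipeline would then spatially regularize this initial map before fusing.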

5.
Sensors (Basel) ; 20(11)2020 Jun 10.
Article in English | MEDLINE | ID: mdl-32532113

ABSTRACT

Movement analysis of human body parts is important in several applications, including clinical diagnosis and rehabilitation programs. The objective of this research is to present a low-cost 3D visual tracking system to analyze the movement of various body parts during therapeutic procedures. Specifically, a marker-based motion tracking system is proposed in this paper to capture movement information in home-based rehabilitation. Different color markers are attached to the desired joint locations, and they are detected and tracked in the video to encode their motion information. The availability of this motion information for different body parts during therapy can be exploited to achieve more accurate results with better clinical insight, which in turn can help improve therapeutic decision making. The proposed framework is an automated and inexpensive motion tracking system with execution speed close to real time. The performance of the proposed method is evaluated on a dataset of 10 patients using two challenging metrics that measure the average accuracy of estimating the joints' locations and rotations. The experimental evaluation and its comparison with existing state-of-the-art techniques reveal the efficiency of the proposed method.
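The marker-detection step can be illustrated minimally: threshold pixels by color distance to the marker color and take the centroid of the matches. This is an assumed sketch, not the paper's actual detector, and the frame and color values are hypothetical.

```python
def marker_centroid(frame, target, tol=60):
    """Locate a color marker in an RGB frame (nested lists of (r, g, b)
    tuples): keep pixels within Euclidean color distance `tol` of the
    target color and return their centroid, or None if no pixel matches."""
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if ((r - target[0]) ** 2 + (g - target[1]) ** 2
                + (b - target[2]) ** 2) ** 0.5 < tol]
    if not hits:
        return None
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))

frame = [[(0, 0, 0), (250, 10, 5)],
         [(240, 5, 0), (0, 0, 0)]]
c = marker_centroid(frame, (255, 0, 0))  # track a red marker
```

Tracking the centroid across frames yields the joint's motion trajectory.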


Subject(s)
Human Body , Movement , Physical Therapy Modalities , Humans
6.
J Parasit Dis ; 44(1): 69-78, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32174707

ABSTRACT

Malaria is caused by the Plasmodium parasite, which is transmitted by the bite of the female Anopheles mosquito. Thick and thin blood smears of the patient are manually examined by an expert pathologist with the help of a microscope to diagnose the disease. Such expert pathologists may not be available in many parts of the world due to poor health facilities. Moreover, manual inspection requires the full concentration of the pathologist and is a tedious and time-consuming way to detect malaria. Therefore, the development of automated systems is important for quick and reliable detection of malaria. It can reduce the false negative rate and help detect the disease at early stages, where it can be cured effectively. In this paper, we present a computer-aided system to automatically detect malaria parasites in microscopic blood images. The proposed method uses bilateral filtering to remove noise and enhance image quality. Adaptive thresholding and morphological image processing algorithms are used to detect the malaria parasites inside individual cells. To measure the efficiency of the proposed algorithm, we tested our method on the NIH Malaria dataset and compared the results with existing similar methods. Our method achieved a detection accuracy of more than 91%, outperforming the competing methods. The results show that the proposed algorithm is reliable and can be of great assistance to pathologists and hematologists for accurate malaria parasite detection.
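The adaptive-thresholding step can be sketched as a local-mean rule: a pixel is foreground when it is darker than its neighborhood mean by a margin (stained parasites appear darker than the cell background). The block size and constant below are assumptions, not values from the paper.

```python
def adaptive_threshold(img, block=3, c=2):
    """Binarize a grayscale image (nested lists): a pixel is foreground (1)
    when it is darker than its local block mean minus constant `c`.
    A simplified version of the adaptive-thresholding stage."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            nb = [img[j][i]
                  for j in range(max(0, y - r), min(h, y + r + 1))
                  for i in range(max(0, x - r), min(w, x + r + 1))]
            row.append(1 if img[y][x] < sum(nb) / len(nb) - c else 0)
        out.append(row)
    return out

# a tiny synthetic patch: one dark "parasite" pixel on a bright background
img = [[200, 200, 200],
       [200,  50, 200],
       [200, 200, 200]]
mask = adaptive_threshold(img)
```

Morphological opening/closing would then clean the mask before counting detections per cell.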

7.
J Imaging ; 6(2)2020 Feb 24.
Article in English | MEDLINE | ID: mdl-34460555

ABSTRACT

Lung tumors are among the most lethal malignancies. They have a high occurrence rate and a high death rate, as they are frequently diagnosed at later stages. Computed Tomography (CT) scans are broadly used to detect the disease, and computer-aided systems are being developed to analyze it effectively at earlier stages. In this paper, we present a fully automatic framework for nodule detection from CT images of the lungs. A histogram of the grayscale CT image is computed to automatically isolate the lung region from the background. The results are refined using morphological operators. The internal structures are then extracted from the parenchyma. A threshold-based technique is proposed to separate the candidate nodules from other structures, e.g., bronchioles and blood vessels. Different statistical and shape-based features are extracted from these nodule candidates to form nodule feature vectors, which are classified using support vector machines. The proposed method is evaluated on a large lung CT dataset collected from the Lung Image Database Consortium (LIDC). The proposed method achieved excellent results compared to similar existing methods; it achieves a sensitivity rate of 93.75%, which demonstrates its effectiveness.
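The histogram-based lung isolation step is not detailed in the abstract; Otsu's method is a common histogram-thresholding choice for separating the dark lung region from brighter tissue, shown here as an assumed illustration.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's histogram threshold: choose the gray level that maximizes the
    between-class variance. Shown as a typical histogram-based separation;
    the paper's exact rule is not given in the abstract."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]          # weight of the dark class (<= t)
        if w0 == 0:
            continue
        w1 = total - w0        # weight of the bright class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# a bimodal toy intensity sample: dark lung pixels vs. bright tissue
t = otsu_threshold([10, 12, 11, 10, 200, 210, 205, 198])
```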

8.
J Imaging ; 6(7)2020 Jul 02.
Article in English | MEDLINE | ID: mdl-34460653

ABSTRACT

Image fusion is a process that integrates images of a similar type, collected from heterogeneous sources, into one image in which the information is more complete and reliable. Hence, the resultant image is expected to be more informative for both human and machine perception. Different image combination methods have been presented to consolidate the significant data from a collection of images into one image. Given its applications and advantages in a variety of fields, such as remote sensing, surveillance, and medical imaging, it is important to understand image fusion algorithms and to study them comparatively. This paper presents a review of the present state-of-the-art and well-known image fusion techniques. The performance of each algorithm is assessed qualitatively and quantitatively on two benchmark multi-focus image datasets. We also produce a multi-focus image fusion dataset by collecting the test images widely used in different studies. The quantitative evaluation of the fusion results is performed using a set of image fusion quality assessment metrics. The performance is also evaluated using different statistical measures. Another contribution of this paper is the proposal of a multi-focus image fusion library; to the best of our knowledge, no such library exists so far. The library provides implementations of numerous state-of-the-art image fusion algorithms and is made publicly available at the project website.
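One of the standard no-reference quality metrics used in multi-focus fusion assessment is spatial frequency, the RMS of horizontal and vertical first differences; whether this specific metric is in the paper's set is an assumption.

```python
import math

def spatial_frequency(img):
    """Spatial frequency of a grayscale image (nested lists): the square
    root of the mean squared horizontal (row) and vertical (column) first
    differences. Higher values indicate a sharper, more detailed image."""
    h, w = len(img), len(img[0])
    rf = sum((img[y][x] - img[y][x - 1]) ** 2
             for y in range(h) for x in range(1, w)) / (h * w)
    cf = sum((img[y][x] - img[y - 1][x]) ** 2
             for y in range(1, h) for x in range(w)) / (h * w)
    return math.sqrt(rf + cf)

flat = [[100, 100], [100, 100]]   # no detail
sharp = [[0, 255], [255, 0]]      # maximal local contrast
```

A fused image would typically be expected to score at least as high as each source.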

9.
Sensors (Basel) ; 18(10)2018 Sep 21.
Article in English | MEDLINE | ID: mdl-30248968

ABSTRACT

Movement analysis of infants' body parts is important for the early detection of various movement disorders such as cerebral palsy. Most existing techniques are either marker-based or use wearable sensors to analyze movement disorders. Such techniques work well for adults; however, they are not effective for infants, as wearing such sensors or markers may cause discomfort and affect their natural movements. This paper presents a method to help clinicians with the early detection of movement disorders in infants. The proposed method is marker-less and does not use any wearable sensors, which makes it ideal for the analysis of body-part movement in infants. The algorithm is based on the deformable part-based model: it detects the body parts and tracks them in the subsequent frames of the video to encode the motion information. The proposed algorithm learns a model using a set of part filters and the spatial relations between the body parts. In particular, it forms a mixture of part filters for each body part to determine its orientation, which is used to detect the parts and analyze their movements by tracking them in the temporal direction. The model is represented using a tree-structured graph, and the learning is carried out using a structured support vector machine. The proposed framework will assist clinicians and general practitioners in the early detection of infantile movement disorders. The performance evaluation of the proposed method is carried out on a large dataset, and the results, compared with existing techniques, demonstrate its effectiveness.
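The part-placement idea of a deformable part model can be sketched for a single part: choose the location that maximizes the appearance (filter) score minus a quadratic deformation cost from the part's anchor. This is a simplified, star-shaped version of the paper's tree-structured model, and the scores and locations below are hypothetical.

```python
def place_part(app, anchor, w):
    """Place one body part under a simplified deformable part model:
    maximize appearance score minus a quadratic deformation penalty
    (weight `w`) on the displacement from the anchor position."""
    def score(loc):
        dx, dy = loc[0] - anchor[0], loc[1] - anchor[1]
        return app[loc] - w * (dx * dx + dy * dy)
    return max(app, key=score)

# hypothetical appearance scores for a forearm filter at three locations
app = {(10, 10): 0.9, (14, 10): 0.95, (30, 30): 1.0}
loc = place_part(app, anchor=(10, 10), w=0.01)
```

The far location (30, 30) has the best raw score but loses after the deformation penalty, illustrating how spatial relations constrain detections.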


Subject(s)
Movement Disorders/diagnosis , Movement Disorders/physiopathology , Movement , Support Vector Machine , Video Recording , Adult , Cerebral Palsy/diagnosis , Cerebral Palsy/physiopathology , Humans , Infant
10.
Int J Med Inform ; 113: 85-95, 2018 05.
Article in English | MEDLINE | ID: mdl-29602437

ABSTRACT

A neurological illness is a disorder of the human nervous system that can result in various diseases, including motor disabilities. Neurological disorders may affect the motor neurons, which are associated with skeletal muscles and control body movement. Consequently, they cause diseases such as cerebral palsy, spinal scoliosis, peripheral paralysis of the arms/legs, hip joint dysplasia, and various myopathies. Vojta therapy is considered a useful technique to treat motor disabilities. In Vojta therapy, a specific stimulation is applied to the patient's body to trigger certain reflexive pattern movements which the patient is unable to perform in a normal manner. The repetition of the stimulation ultimately brings forth the previously blocked connections between the spinal cord and the brain. After a few therapy sessions, the patient can perform these movements without external stimulation. In this paper, we propose a computer vision-based system to monitor the correct movements of the patient during therapy using RGBD data. The proposed framework works in three steps. In the first step, the patient's body is automatically detected and segmented; two novel techniques are proposed for this purpose. In the second step, a multi-dimensional feature vector is computed to define the various movements of the patient's body during therapy. In the final step, a multi-class support vector machine is used to classify these movements. The experimental evaluation, carried out on a large captured dataset, shows that the proposed system is highly useful for monitoring the patient's body movements during Vojta therapy.
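The final classification step can be illustrated without dependencies. The paper uses a multi-class support vector machine; a nearest-centroid classifier is used here purely as a dependency-free stand-in, and the feature values and class names are hypothetical.

```python
def nearest_centroid(train, labels, query):
    """Classify a movement feature vector by the nearest class centroid
    (squared Euclidean distance). A stand-in for the paper's multi-class
    SVM, used only to illustrate the classification step."""
    groups = {}
    for vec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(vec)

    def mean(vecs):
        return [sum(col) / len(vecs) for col in zip(*vecs)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centroids = {lab: mean(vecs) for lab, vecs in groups.items()}
    return min(centroids, key=lambda lab: dist2(centroids[lab], query))

# hypothetical 2-D movement features for two movement classes
train = [[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]]
labels = ["movement_a", "movement_a", "movement_b", "movement_b"]
pred = nearest_centroid(train, labels, [0.95, 0.95])
```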


Subject(s)
Artificial Intelligence , Brain Diseases/rehabilitation , Monitoring, Physiologic , Movement Disorders/rehabilitation , Physical Therapy Modalities , Reflexotherapy/methods , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Infant , Infant, Newborn , Male , Physical Stimulation
11.
IEEE Trans Image Process ; 24(1): 205-19, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25438310

ABSTRACT

The future of novel 3D display technologies largely depends on the design of efficient techniques for 3D video representation and coding. Recently, multiple-view-plus-depth video formats have attracted many research efforts, since they enable intermediate view estimation and permit efficient representation and compression of 3D video sequences. In this paper, we present spatiotemporal occlusion compensation with panorama view (STOP), a novel 3D video coding technique based on the creation of a panorama view and the coding of occlusions in terms of spatiotemporal offsets. The panorama picture represents most of the visual information acquired from the multiple views using a single virtual view characterized by a larger field of view. By encoding the panorama video with state-of-the-art HEVC and representing occlusions with simple spatiotemporal ancillary information, STOP achieves a high compression ratio and good visual quality, with results competitive with existing techniques. Moreover, STOP enables free-viewpoint 3D TV applications while allowing legacy displays to obtain a two-dimensional service using a standard video codec and simple cropping operations.


Subject(s)
Imaging, Three-Dimensional/methods , Television , Algorithms , Humans , Video Recording