Results 1 - 20 of 24
1.
Mar Pollut Bull ; 198: 115874, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38056290

ABSTRACT

Oil spill accidents on the sea surface pose a severe threat to the marine environment and human health. This paper proposes a novel Semantic Segmentation Network (SSN) for processing oil spill images, so that low-contrast oil spills on the sea surface can be accurately identified. After comparing the detection accuracy and real-time performance of current SSNs, the basic DeeplabV3+ architecture for target detection is analyzed. The standard convolution in the Ghost Module's Depth-Wise separable Convolution (DWConv) is replaced by Omni-dimensional Dynamic Convolution (ODConv) to further enhance the feature-extraction ability of the network. Furthermore, a new DeeplabV3+ based network with ODGhostNetV2 as the main feature-extraction module is constructed, and an Adaptive Triplet Attention (ATA) module is deployed in both the encoder and the decoder. This not only improves the richness of semantic features but also enlarges the receptive fields of the network model. ATA integrates the Adaptively Spatial Feature Fusion (ASFF) module to optimize weight assignment during feature-map fusion. Ablation experiments verify the proposed network, showing high accuracy and good real-time performance for oil spill detection.
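The core idea behind dynamic convolution, which ODConv extends with further attentions over spatial, channel, and filter dimensions, is to blend several candidate kernels with input-dependent softmax attention weights. A minimal 1-D sketch (function names and the toy kernels are illustrative, not from the paper):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dynamic_kernel(kernels, attention_logits):
    """Combine candidate kernels with softmax attention weights:
    the basic mechanism of dynamic convolution. Each kernel is a
    flat list of taps; the result is their weighted sum."""
    weights = softmax(attention_logits)
    size = len(kernels[0])
    return [sum(w * k[i] for w, k in zip(weights, kernels))
            for i in range(size)]

# Two 3-tap candidate kernels blended with equal attention logits:
k = dynamic_kernel([[1.0, 0.0, -1.0], [0.5, 1.0, 0.5]], [0.0, 0.0])
```

With equal logits the two kernels are averaged; in a real network the logits come from a small attention branch conditioned on the input feature map.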


Subject(s)
Petroleum Pollution , Humans , Semantics , Oceans and Seas
2.
Mar Pollut Bull ; 190: 114840, 2023 May.
Article in English | MEDLINE | ID: mdl-36996611

ABSTRACT

This paper presents a novel split-frequency feature-fusion framework for processing dual-optical (infrared-visible) images of offshore oil spills. A self-encoding network based on local cross-stage residual dense blocks extracts high-frequency features from oil spill images and constructs a regularized fusion strategy. Adaptive weights are designed to increase the proportion of high-frequency features from the source images during low-frequency feature fusion. A global residual branch is established to reduce the loss of oil spill texture features. The network structure of the primary residual dense block auto-encoding network is optimized with the local cross-stage method to further reduce the number of network parameters and increase operation speed. To verify the effectiveness of the proposed infrared-visible image-fusion algorithm, the BiSeNetV2 algorithm is selected as the oil spill detector, achieving 91% pixel accuracy on the oil spill image features.
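The split-frequency idea can be sketched in 1-D: separate each source into a low-frequency base (moving average) and a high-frequency residual, then fuse bases and details with different rules. The max-magnitude detail rule below is a crude stand-in for the paper's learned adaptive weights; all names are illustrative:

```python
def split_frequencies(row, k=3):
    """Split a 1-D signal into a low-frequency component
    (moving average of width k) and a high-frequency residual."""
    n = len(row)
    low = []
    for i in range(n):
        lo, hi = max(0, i - k // 2), min(n, i + k // 2 + 1)
        low.append(sum(row[lo:hi]) / (hi - lo))
    high = [x - l for x, l in zip(row, low)]
    return low, high

def fuse(ir_row, vis_row):
    """Fuse infrared and visible rows: average the low-frequency
    parts, keep the larger-magnitude high-frequency detail."""
    ir_lo, ir_hi = split_frequencies(ir_row)
    vi_lo, vi_hi = split_frequencies(vis_row)
    return [0.5 * (a + b) + (h1 if abs(h1) >= abs(h2) else h2)
            for a, b, h1, h2 in zip(ir_lo, vi_lo, ir_hi, vi_hi)]

fused = fuse([1.0, 1.0, 5.0, 1.0], [2.0, 2.0, 2.0, 2.0])
```

Fusing two identical flat rows returns the row unchanged, which is a quick sanity check on any fusion rule.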


Subject(s)
Petroleum Pollution , Algorithms
3.
Comput Intell Neurosci ; 2022: 5827097, 2022.
Article in English | MEDLINE | ID: mdl-36156961

ABSTRACT

Vision plays an important role in human aesthetic cognition. When creating choreography, human dancers, who typically observe their own poses in a mirror, understand the aesthetics of those poses and aim to improve their performance. To develop a comparable artificial intelligence, a robot should establish a similar mechanism to imitate this human dance behaviour. Inspired by this, this paper designs a way for a robot to visually perceive its own dance poses and constructs a novel dataset of dance poses captured from real NAO robots. On this basis, the paper proposes a hierarchical processing network for automatic aesthetic evaluation of robotic dance poses. The hierarchical processing network first extracts primary visual features using three parallel CNNs, then uses a synthesis CNN to perform high-level association and comprehensive processing on the basis of multi-modal feature fusion, and finally makes an automatic aesthetic decision. Notably, the design of this hierarchical processing network is inspired by research findings in neuroaesthetics. Experimental results show that the approach achieves an aesthetic-evaluation accuracy of 82.3%, which is superior to existing methods.


Subject(s)
Dancing , Robotic Surgical Procedures , Robotics , Artificial Intelligence , Esthetics , Humans
4.
Sensors (Basel) ; 22(11)2022 Jun 04.
Article in English | MEDLINE | ID: mdl-35684904

ABSTRACT

Robotic grasp detection has mostly relied on extracting candidate grasping rectangles; such discrete sampling methods are time-consuming and may miss the best grasp. This paper proposes a new pixel-level grasp-detection method on RGB-D images. First, a fine grasping representation is introduced to generate parallel-jaw gripper configurations, which effectively resolves gripper-approach conflicts and improves applicability to unknown objects in cluttered scenes. In addition, an adaptive grasping width is used to represent the grasping attribute at a fine granularity for each object. Then, the encoder-decoder-inception convolutional neural network (EDINet) is proposed to predict the fine grasping configuration. EDINet uses encoder, decoder, and inception modules to improve the speed and robustness of pixel-level grasp detection. The proposed EDINet structure was evaluated on the Cornell and Jacquard datasets, achieving 98.9% and 96.1% test accuracy, respectively. Finally, grasping experiments on unknown objects show that the average success rate of the network model is 97.2% in single-object scenes and 93.7% in cluttered scenes, outperforming state-of-the-art algorithms. In addition, EDINet completes a grasp-detection pipeline within only 25 ms.
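Pixel-level grasp detection of this kind typically outputs dense per-pixel maps (grasp quality, jaw angle, gripper width) and reads the grasp from the highest-quality pixel. A minimal sketch of that readout step, with hypothetical map names, not EDINet's actual interface:

```python
def best_grasp(quality, angle, width):
    """Read a parallel-jaw grasp from dense per-pixel maps: take the
    pixel with the highest predicted quality and look up the angle
    and gripper width predicted at that same pixel."""
    best = max((q, r, c)
               for r, row in enumerate(quality)
               for c, q in enumerate(row))
    _, r, c = best
    return {"row": r, "col": c,
            "angle": angle[r][c], "width": width[r][c]}

# Tiny 2x2 toy maps standing in for full-resolution network outputs:
quality = [[0.1, 0.9], [0.3, 0.2]]
angle   = [[0.0, 1.57], [0.5, 0.2]]
width   = [[10.0, 30.0], [20.0, 15.0]]
g = best_grasp(quality, angle, width)
```

Because every pixel carries a full grasp hypothesis, no discrete rectangle sampling is needed, which is where the speed advantage over candidate-rectangle methods comes from.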


Subject(s)
Hand Strength , Robotics , Neural Networks, Computer , Robotics/methods
5.
Mar Pollut Bull ; 175: 113343, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35051846

ABSTRACT

Accidental oil spills from pipelines or tankers pose a serious threat to marine life and natural resources. This paper presents a novel lightweight bilateral segmentation network for detecting oil spills on the sea surface. A deep-learning semantic-segmentation algorithm is first developed by analyzing the characteristics of oil spill images. The Bilateral Segmentation Network (BiSeNetV2) is then selected as the basic architecture after experimental comparison with current mainstream networks on detection accuracy and real-time performance for oil samples. Furthermore, the Gather-and-Expansion (GE) layer of the semantic branch in the original network is redesigned to reduce parameter complexity. A dual attention mechanism is deployed in the two branches of BiSeNetV2 to address the problem of inter-class similarity. Finally, experimental results demonstrate the good detection accuracy of the proposed network.


Subject(s)
Petroleum Pollution , Accidents , Algorithms , Semantics
6.
Sensors (Basel) ; 20(18)2020 Sep 05.
Article in English | MEDLINE | ID: mdl-32899515

ABSTRACT

Obstacle detection is one of the essential capabilities for autonomous robots operating on unstructured terrain. In this paper, a novel laser-based approach is proposed for obstacle detection by autonomous robots, in which the Sobel operator is deployed in the edge-detection process for 3D laser point clouds. The point clouds of unstructured terrain are filtered by a VoxelGrid filter and then processed with a Gaussian kernel function to obtain the edge features of obstacles. The Euclidean clustering algorithm is optimized with super-voxels in order to cluster the point cloud of each obstacle. The characteristics of the obstacles are recognized by a Levenberg-Marquardt back-propagation (LM-BP) neural network. The proposed algorithm is a post-processing algorithm that operates on the reconstructed point cloud. Experiments using both existing datasets and real unstructured-terrain point clouds reconstructed by an all-terrain robot demonstrate the feasibility and performance of the proposed approach.
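The VoxelGrid filtering step mentioned above buckets 3-D points into cubic cells and replaces each cell's points by their centroid, reducing cloud density while preserving shape. A minimal stdlib sketch (the function name is illustrative; PCL's VoxelGrid is the usual implementation):

```python
import math

def voxel_downsample(points, voxel=0.5):
    """VoxelGrid-style filter: bucket 3-D points into cubic voxels
    of side `voxel` and replace each bucket by its centroid."""
    buckets = {}
    for p in points:
        # Integer voxel index along each axis identifies the cell.
        key = tuple(math.floor(c / voxel) for c in p)
        buckets.setdefault(key, []).append(p)
    # One centroid per occupied voxel.
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in buckets.values()]

pts = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.0), (2.0, 2.0, 0.0)]
down = voxel_downsample(pts)   # two close points collapse into one
```

The downsampled cloud is what the later edge-detection and clustering stages operate on, keeping their cost bounded.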

7.
Games Health J ; 8(5): 313-325, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31287734

ABSTRACT

This systematic review analyzes the state of the art of interaction modalities used in serious games for upper-limb rehabilitation. A systematic search was performed in the IEEE Xplore and Web of Science databases. The PRISMA and QualSyst protocols were used to filter and assess the articles. Articles had to meet the following inclusion criteria: written in English; at least four pages in length; use or develop serious games; focus on upper-limb rehabilitation; and published between 2007 and 2017. Of 121 articles initially retrieved, 33 met the inclusion criteria. Three interaction modalities were found: vision systems (42.4%), complementary vision systems (30.3%), and no-vision systems (27.2%). Vision systems and no-vision systems obtained a similar mean QualSyst score (86%), followed by complementary vision systems (85.7%). Almost half of the studies used vision systems as the interaction modality (42.4%), and 48.48% used the Kinect sensor to collect body movements. The shoulder was the most treated body part (19%). A key limitation of vision systems and complementary vision systems is that device performance can be affected by lighting conditions. A main limitation of no-vision systems is that the range of motion of body movements, in angles, might not be measured accurately. Given the limited number of studies, fruitful areas for further research include serious games focused on finger rehabilitation and trauma injuries, game-difficulty adaptation based on the user's muscle strength and posture, and multisensor data fusion across interaction modalities.


Subject(s)
Games, Experimental , Rehabilitation/methods , Upper Extremity , Exercise Therapy/methods , Exercise Therapy/standards , Exercise Therapy/trends , Humans , Rehabilitation/standards , Rehabilitation/trends
8.
Biomed Eng Online ; 17(1): 107, 2018 Aug 06.
Article in English | MEDLINE | ID: mdl-30081927

ABSTRACT

BACKGROUND: For functional control of a prosthetic hand, motion-pattern information alone is insufficient; practical use also demands control of the prosthetic hand's grip force. The application value of a prosthetic hand is greatly improved if a stable grip can be achieved. To address this problem, this study proposes a bio-signal-based method for grasping control of a prosthetic hand, aiming to improve the patient's sense of using the prosthesis and thus their quality of life. METHODS: A MYO gesture-control armband is used to collect surface electromyographic (sEMG) signals from the upper limb. An overlapping sliding-window scheme is applied for data segmentation, and correlated features are extracted from each segment. Principal component analysis (PCA) is then deployed for dimension reduction. A deep neural network is used to build an sEMG-force regression model for force prediction at different levels. The predicted force values are input to a fuzzy controller for grasping control of the prosthetic hand. A vibration feedback device feeds the grasping-force value back to the patient's arm to improve the sense of using the prosthesis and realize accurate grasping. To test the effectiveness of the scheme, 15 able-bodied subjects participated in the experiments. RESULTS: The classification results indicated that 8-channel sEMG with all four time-domain features, reduced by PCA from 32 to 8 dimensions, yields the highest classification accuracy. Across the 15 participants, the average recognition rate was over 95%. The standard deviations of between-subject variation ranged from 1.25% to 3.58%, demonstrating the robustness and stability of the proposed approach.
CONCLUSIONS: The proposed method controls grasping force through the patient's own sEMG signal, achieving a high recognition rate that improves the success rate of gripping and enhances the sense of operation, offering clear benefits for upper-limb amputees.
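The overlapping sliding-window segmentation and time-domain feature extraction described in METHODS can be sketched as follows. RMS and MAV are two of the standard sEMG time-domain features; the exact four features the paper uses are not specified here, and the function names are illustrative:

```python
import math

def sliding_windows(signal, size, step):
    """Overlapping sliding-window segmentation.
    `size` and `step` are in samples; step < size gives overlap."""
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

def features(window):
    """Two common sEMG time-domain features for one window:
    root mean square (RMS) and mean absolute value (MAV)."""
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    mav = sum(abs(x) for x in window) / len(window)
    return rms, mav

sig = [0.0, 1.0, -1.0, 1.0, -1.0, 0.0]
wins = sliding_windows(sig, size=4, step=2)   # 50% overlap
feats = [features(w) for w in wins]
```

Stacking such per-window, per-channel features produces the 32-dimensional vectors (8 channels x 4 features) that PCA then reduces to 8 dimensions.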


Subject(s)
Artificial Limbs , Electromyography , Hand Strength , Hand/physiology , Machine Learning , Muscles/physiology , Principal Component Analysis , Feasibility Studies , Female , Humans , Male
9.
Sensors (Basel) ; 18(6)2018 May 29.
Article in English | MEDLINE | ID: mdl-29844278

ABSTRACT

Environment perception is important for collision-free motion planning of outdoor mobile robots. This paper presents an adaptive obstacle-detection method for outdoor mobile robots using a single downward-looking LiDAR sensor. The method begins by extracting line segments from the raw sensor data, then estimates the height and direction vector of the scanned road surface at each moment. The segments are subsequently classified as road ground or obstacles based on the average height of each line segment and its deviation from the road vector estimated from previous measurements. A series of experiments was conducted in several scenarios, including both normal and complex scenes. The experimental results show that the proposed approach accurately detects obstacles on roads and effectively handles obstacles of different heights in urban road environments.
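The ground/obstacle split can be illustrated with the height half of the test: compare each segment's average height against the estimated road surface. This sketch omits the paper's deviation-from-road-vector check and uses made-up names and tolerances:

```python
def classify_segments(segments, road_height, height_tol=0.15):
    """Label each 2-D line segment (two (x, z) endpoints, z = height)
    as road or obstacle by comparing its average endpoint height with
    the estimated road-surface height. A simplified stand-in for the
    paper's combined height-plus-deviation test."""
    labels = []
    for (x1, z1), (x2, z2) in segments:
        avg_h = 0.5 * (z1 + z2)
        labels.append("road" if abs(avg_h - road_height) <= height_tol
                      else "obstacle")
    return labels

segs = [((0.0, 0.02), (1.0, 0.05)),   # lies near the road surface
        ((1.0, 0.40), (1.5, 0.45))]   # raised above it: an obstacle
labels = classify_segments(segs, road_height=0.0)
```

Updating `road_height` from previous scans is what makes the method adaptive to sloping or uneven roads.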

10.
Sensors (Basel) ; 18(1)2018 Jan 13.
Article in English | MEDLINE | ID: mdl-29342850

ABSTRACT

Articulated wheel loaders used in the construction industry are heavy vehicles with poor stability and a high accident rate, owing to unpredictable changes in body posture, mass, and centroid position in complex operating environments. This paper presents a novel distributed multi-sensor system for real-time attitude estimation and stability measurement of articulated wheel loaders, to improve their safety and stability. Four attitude and heading reference systems (AHRS) are constructed using micro-electro-mechanical system (MEMS) sensors and installed on the front body, rear body, rear axle, and boom of an articulated wheel loader to detect its attitude. A complementary filtering algorithm fuses the sensor data so that the steady-state margin angle (SSMA) can be measured in real time and used as the criterion for rollover stability. Experiments on a prototype wheel loader show that the proposed multi-sensor system detects potentially unstable states of an articulated wheel loader in real time with high accuracy.
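A complementary filter for attitude fuses a gyroscope (accurate at high frequency but drifting) with an accelerometer-derived angle (noisy but drift-free). One standard update step, as a sketch with illustrative parameter values rather than the paper's tuning:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One complementary-filter step for a tilt angle (degrees):
    integrate the gyro rate for the high-frequency part and blend in
    the accelerometer angle for the low-frequency part. alpha close
    to 1 trusts the gyro more."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Stationary machine: gyro reads 0 and the accelerometer agrees with
# the current state, so the estimate stays put.
a = complementary_filter(angle=10.0, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

Running one such filter per MEMS unit (front body, rear body, rear axle, boom) yields the attitudes from which the SSMA can be computed.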

11.
IEEE Trans Neural Netw Learn Syst ; 28(1): 177-190, 2017 01.
Article in English | MEDLINE | ID: mdl-26685265

ABSTRACT

This paper investigates the problem of multiclass, multiview 3-D object detection for service robots operating in cluttered indoor environments. A novel 3-D object detection system using laser point clouds is proposed to deal with cluttered indoor scenes given limited and imbalanced training data. Raw 3-D point clouds are first transformed into 2-D bearing-angle images to reduce computational cost, and jointly trained multiple object detectors are then deployed to perform multiclass, multiview 3-D object detection. A reclassification technique is applied to each detected low-confidence bounding box to reduce false alarms. The RUS-SMOTEboost algorithm is used to train a group of independent binary classifiers on the imbalanced training data. Dense histograms of oriented gradients and local binary pattern features are combined as the feature set for the reclassification task. Experimental results on the Dalian University of Technology (DUT)-3D dataset, collected from various office and household environments, show the validity and good performance of the proposed method.

12.
Sensors (Basel) ; 15(9): 23004-19, 2015 Sep 11.
Article in English | MEDLINE | ID: mdl-26378540

ABSTRACT

To deal with the projection problem that arises in fall detection with two-dimensional (2D) grey or color images, this paper proposes a robust fall-detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images captured by a Kinect sensor. In the pre-processing stage, the parameters of a Single Gaussian Model (SGM) are estimated and the coefficients of the floor-plane equation are extracted from the background images. Once a human subject appears in the scene, the silhouette is extracted by the SGM and an ellipse fitted to the foreground is used to determine the head position. The dense spatio-temporal context (STC) algorithm then tracks the head position, and the distance from the head to the floor plane is calculated in every subsequent depth frame. When this distance falls below an adaptive threshold, the centroid height of the person is used as a second criterion to decide whether a fall has occurred. Finally, four groups of experiments with different falling directions were performed. Experimental results show that the proposed method detects fall incidents occurring in different orientations with low computational complexity.
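The head-to-floor test is a standard point-to-plane distance against the fitted floor equation ax + by + cz + d = 0. A minimal sketch of this first-stage check (threshold and names are illustrative; the paper's threshold is adaptive):

```python
import math

def point_plane_distance(p, plane):
    """Distance from 3-D point p to the plane ax + by + cz + d = 0."""
    a, b, c, d = plane
    x, y, z = p
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

def fall_suspected(head, floor_plane, threshold=0.4):
    """First-stage fall test: the tracked head is suspiciously close
    to the floor plane. (The method then checks centroid height as a
    second criterion before declaring a fall.)"""
    return point_plane_distance(head, floor_plane) < threshold

floor = (0.0, 0.0, 1.0, 0.0)             # the z = 0 plane
standing = fall_suspected((1.0, 2.0, 1.7), floor)   # head at 1.7 m
fallen = fall_suspected((1.0, 2.0, 0.2), floor)     # head at 0.2 m
```

Because the distance is measured against the actual fitted plane rather than the image vertical, the test works for falls in any direction.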


Subject(s)
Accidental Falls , Head/physiology , Imaging, Three-Dimensional/methods , Adult , Algorithms , Humans , Video Recording
13.
IEEE Trans Neural Netw Learn Syst ; 23(8): 1279-90, 2012 Aug.
Article in English | MEDLINE | ID: mdl-24807524

ABSTRACT

This paper presents a method for modeling spatial functions in mobile wireless sensor networks using Gaussian process regression. A distributed Gaussian process regression (DGPR) approach is developed using a sparse Gaussian process regression method and a compactly supported covariance function. The resulting DGPR formulation requires only neighbor-to-neighbor communication, enabling each sensor node in the network to produce its regression result independently. Collective motion control is implemented with a locational optimization algorithm that utilizes the information entropy of the DGPR result. The collective mobility of the sensor network, combined with the online learning capability of the DGPR approach, also enables the mobile sensor network to adapt to spatiotemporal functions. Simulation results show the performance of the proposed approach in modeling both stationary spatial functions and spatiotemporal functions.
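A compactly supported covariance function is what makes the regression local: covariance is exactly zero beyond a support radius, so only neighboring nodes contribute. The Wendland-type form below is one common choice used as an illustration; the paper's exact kernel may differ:

```python
def compact_cov(r, support=1.0, sigma2=1.0):
    """A compactly supported covariance function of distance r:
    exactly zero beyond `support`, so distant sensor nodes can be
    ignored and only neighbor-to-neighbor communication is needed.
    This is a Wendland-type polynomial kernel."""
    t = r / support
    if t >= 1.0:
        return 0.0
    return sigma2 * (1.0 - t) ** 4 * (4.0 * t + 1.0)

near = compact_cov(0.0)   # full variance at zero distance
mid = compact_cov(0.5)    # decays smoothly inside the support
far = compact_cov(2.0)    # exactly zero beyond the support radius
```

In the GP covariance matrix this zeroing produces sparsity, which is what allows each node to solve its local regression independently.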

14.
Australas Phys Eng Sci Med ; 34(4): 497-513, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22124948

ABSTRACT

This paper presents a novel human-machine interface that allows disabled people to interact with assistive systems for a better quality of life. It is based on multi-channel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three sub-bands related to the facial electromyogram, electrooculogram, and electroencephalogram. Root-mean-square features of the bioelectric signals are extracted within non-overlapping 256 ms windows. The subtractive fuzzy c-means clustering method (SFCM) is applied to segment the feature space and generate initial fuzzy Takagi-Sugeno rules. An adaptive neuro-fuzzy inference system is then exploited to tune the premise and consequence parameters of the extracted SFCM rules. The average classifier discrimination ratio for eight facial gestures (smiling, frowning, pulling up the left/right lip corner, and eye movement to the left/right/up/down) is between 93.04% and 96.99%, depending on the combination and fusion of logical features. Experimental results show that the proposed interface discriminates the eight fundamental facial gestures with high accuracy and robustness. Some further capabilities of the approach in human-machine interfaces are also discussed.


Subject(s)
Electroencephalography/instrumentation , Electroencephalography/methods , Electrooculography/methods , Facial Muscles/physiology , Self-Help Devices , Signal Processing, Computer-Assisted , User-Computer Interface , Adolescent , Adult , Child , Cluster Analysis , Electrooculography/instrumentation , Facial Expression , Fuzzy Logic , Humans
15.
Clin EEG Neurosci ; 42(4): 225-9, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22208119

ABSTRACT

This paper presents a simple self-paced motor-imagery-based brain-computer interface (BCI) for controlling a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder when necessary. So that users can train their motor-imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed; this allows users to practice motor-imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to the one controlling the simulated robot. The emphasis is on allowing more potential users to operate the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results demonstrate the usefulness of online practice in the simulated scenario and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.


Subject(s)
Brain/physiology , Electroencephalography/methods , Imagination/physiology , Man-Machine Systems , Robotics , User-Computer Interface , Wheelchairs , Feedback , Humans
16.
Article in English | MEDLINE | ID: mdl-19965221

ABSTRACT

This paper evaluates supervised and unsupervised adaptive schemes applied to an online support vector machine (SVM) that classifies BCI data. The online SVM processes fresh samples as they arrive and updates the existing support vectors without referring to previous samples. It is shown that the performance of the online SVM is similar to that of the standard SVM, and that both supervised and unsupervised schemes improve the classification hit rate.
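One way to realize online SVM training is a stochastic sub-gradient step on the hinge loss per incoming sample (Pegasos-style). This is a generic sketch of the online linear-SVM idea, not the specific algorithm evaluated in the paper; names and learning rates are illustrative:

```python
def sgd_hinge_step(w, b, x, y, lr=0.1, lam=0.01):
    """One online linear-SVM update: a stochastic sub-gradient step
    on the regularized hinge loss. y must be +1 or -1."""
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    if margin < 1.0:                      # sample violates the margin
        w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
        b = b + lr * y
    else:                                 # only shrink the weights
        w = [wi - lr * lam * wi for wi in w]
    return w, b

w, b = [0.0, 0.0], 0.0
data = [([1.0, 1.0], 1), ([-1.0, -1.0], -1)] * 20
for x, y in data:                         # samples arrive one at a time
    w, b = sgd_hinge_step(w, b, x, y)
pred = 1 if sum(wi * xi for wi, xi in zip(w, [2.0, 2.0])) + b > 0 else -1
```

Each sample is touched once and then discarded, which mirrors the "no reference to previous samples" property the abstract highlights.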


Subject(s)
Brain Mapping/instrumentation , Pattern Recognition, Automated , Signal Processing, Computer-Assisted , Algorithms , Artificial Intelligence , Brain/pathology , Brain Mapping/methods , Computational Biology/methods , Equipment Design , Fuzzy Logic , Humans , Internet , Models, Statistical , Reproducibility of Results , Software , User-Computer Interface
17.
IEEE Trans Syst Man Cybern B Cybern ; 39(1): 167-81, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19068442

ABSTRACT

One of the fundamental issues for service robots is human-robot interaction. To perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data-fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the legs' position using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of the approach, showing that robust human tracking can be performed within complex indoor environments.


Subject(s)
Artificial Intelligence , Pattern Recognition, Automated/methods , Robotics/methods , Algorithms , Diagnostic Errors , Face , Humans , Lasers , Leg , Normal Distribution
18.
IEEE Trans Biomed Eng ; 55(8): 1956-65, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18632358

ABSTRACT

This paper proposes and evaluates the application of a support vector machine (SVM) to classify upper-limb motions using myoelectric signals. It explores the optimal configuration of SVM-based myoelectric control by suggesting an advantageous data-segmentation technique, feature set, model-selection approach for the SVM, and postprocessing methods. The work presents a method to adjust SVM parameters before classification, and examines overlapped segmentation and majority voting as two techniques to improve controller performance. The SVM, as the core classifier in myoelectric control, is compared with two commonly used classifiers: linear discriminant analysis (LDA) and multilayer perceptron (MLP) neural networks. It demonstrates exceptional accuracy, robust performance, and low computational load. The entropy of the classifier output is also examined as an online index of classification correctness, which can be used for online training in long-term myoelectric control operations.
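The majority-voting postprocessing examined here smooths the stream of per-segment class decisions so that an isolated misclassification cannot flip the controller output. A minimal sketch, with an illustrative window size and made-up class labels:

```python
from collections import Counter

def majority_vote(decisions, window=5):
    """Smooth a stream of per-segment class decisions by majority
    voting over the most recent `window` decisions: a common
    myoelectric-control postprocessing step that suppresses
    isolated misclassifications."""
    smoothed = []
    for i in range(len(decisions)):
        recent = decisions[max(0, i - window + 1):i + 1]
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# One spurious "close" decision is voted away by its neighbors:
raw = ["open", "open", "close", "open", "open", "open"]
smooth = majority_vote(raw)
```

The trade-off is a small added decision latency (up to `window` segments), which is why it is paired with overlapped segmentation to keep segments short.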


Subject(s)
Algorithms , Arm/physiology , Artificial Intelligence , Electromyography/methods , Models, Biological , Movement/physiology , Muscle Contraction/physiology , Pattern Recognition, Automated/methods , Action Potentials/physiology , Computer Simulation , Feedback/physiology , Humans
19.
Med Biol Eng Comput ; 46(3): 241-9, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18087743

ABSTRACT

In this paper, we introduce an interactive telecommunication system that supports video/audio signal acquisition, data processing, transmission, and 3D animation for post-stroke rehabilitation. It is designed for stroke patients to use in their homes. It records motion-exercise data and immediately transfers the data to hospitals via the Internet. A real-time videoconferencing interface allows patients to receive therapy instructions from therapists. The system uses a peer-to-peer network architecture, without the need for a server; this is a potentially effective approach to reducing costs, allowing easy setup, and permitting group rehabilitation sessions. We evaluate the system in three steps: (1) motion detection across different movement patterns, such as reach, drink, and reach-flexion; (2) online bidirectional visual telecommunication; and (3) 3D rendering using the proposed offline animation package. Subjective evaluation of the system yielded favourable results.


Subject(s)
Exercise Therapy/methods , Home Care Services, Hospital-Based , Stroke Rehabilitation , Telemedicine/methods , Upper Extremity/physiopathology , Humans , Internet , Stroke/physiopathology
20.
Article in English | MEDLINE | ID: mdl-19162656

ABSTRACT

This paper investigates the manifestation of fatigue in myoelectric signals during dynamic contractions produced while playing PC games. The hand's myoelectric signals were collected in 26 independent sessions with 10 subjects. Two methods, spectral analysis and time-scale analysis, were applied to compute signal frequency, and least-squares linear regression was used to model the trend of the frequency shift. Non-parametric statistical methods were employed to analyze the experimental results, which indicate a significant decline in signal frequency as a manifestation of fatigue during long-term muscle activity.
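The least-squares trend modeling reduces to fitting a slope to the per-epoch frequency estimates; a negative slope is the fatigue indicator. A minimal sketch with made-up numbers:

```python
def lsq_slope(times, freqs):
    """Least-squares slope of signal frequency over time. A negative
    slope corresponds to the frequency decline associated with
    muscle fatigue."""
    n = len(times)
    mt = sum(times) / n
    mf = sum(freqs) / n
    num = sum((t - mt) * (f - mf) for t, f in zip(times, freqs))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Frequency estimates drifting downward over four epochs (toy data):
slope = lsq_slope([0.0, 1.0, 2.0, 3.0], [80.0, 78.0, 76.0, 74.0])
```

The per-epoch frequency values themselves would come from the paper's spectral or time-scale analysis (e.g. the median frequency of each window's spectrum).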


Subject(s)
Algorithms , Electromyography/methods , Muscle Contraction/physiology , Muscle Fatigue/physiology , Muscle, Skeletal/physiology , Video Games , Female , Humans , Male