1.
Article in English | MEDLINE | ID: mdl-38857129

ABSTRACT

Over the past few years, monocular depth estimation and completion have received increasing attention from the computer vision community because of their widespread applications. In this paper, we introduce novel physics (geometry)-driven deep learning frameworks for these two tasks by assuming that 3D scenes consist of piecewise planes. Instead of directly estimating the depth map or completing the sparse depth map, we propose to estimate the surface normal and plane-to-origin distance maps, or to complete their sparse counterparts, as intermediate outputs. To this end, we develop a normal-distance head that outputs pixel-level surface normal and distance. After that, the surface normal and distance maps are regularized by a plane-aware consistency constraint and then transformed into depth maps. Furthermore, we integrate an additional depth head to strengthen the robustness of the proposed frameworks. Extensive experiments on the NYU-Depth-v2, KITTI and SUN RGB-D datasets demonstrate that our method outperforms prior state-of-the-art monocular depth estimation and completion methods.
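The geometric core of this abstract is that a pixel lying on a plane with unit normal n and plane-to-origin distance d has depth z = d / (n · K⁻¹p). A minimal NumPy sketch of that conversion (illustrative only, not the authors' code; `K` is the camera intrinsic matrix):

```python
import numpy as np

def normal_distance_to_depth(normal, distance, K):
    # Back-project each pixel through K, then intersect the ray with the
    # local plane n . P = d, giving depth z = d / (n . K^-1 p).
    H, W, _ = normal.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T           # K^-1 p per pixel, shape (H, W, 3)
    denom = np.sum(normal * rays, axis=-1)    # n . (K^-1 p)
    denom = np.where(np.abs(denom) < 1e-6, 1e-6, denom)  # avoid division by zero
    return distance / denom
```

For a fronto-parallel plane (normal along the optical axis) the recovered depth equals the plane distance everywhere.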

2.
Comput Biol Med ; 175: 108504, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701593

ABSTRACT

Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain-computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of the convolutional kernel, a CNN extracts features only from local regions, without considering the long-term dependencies needed for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it offers a more comprehensive view of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method based on signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. Experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our method outperforms state-of-the-art methods and achieves a 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. These results indicate the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provide a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
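The "two distinct perspectives" idea can be sketched in a few lines: pool a feature sequence into window-wise average and variance streams, then run the same attention weights over both. This is a hypothetical NumPy illustration with single-head attention and identity projections, not the authors' EEG-TransNet code:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over a sequence.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    s = q @ k.T / np.sqrt(k.shape[-1])
    a = np.exp(s - s.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)
    return a @ v

def dual_pool_attend(x, win, Wq, Wk, Wv):
    # Split a (time, channels) feature sequence into windows, pool each
    # window into an average view and a variance view, then apply the
    # SAME attention weights to both streams (the "shared" module).
    t, c = x.shape
    w = x[: (t // win) * win].reshape(-1, win, c)
    avg, var = w.mean(axis=1), w.var(axis=1)
    return self_attention(avg, Wq, Wk, Wv), self_attention(var, Wq, Wk, Wv)
```

In the real network the projections Wq, Wk, Wv are learned and the two attended streams are fused by a convolutional encoder.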


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Signal Processing, Computer-Assisted , Imagination/physiology , Deep Learning
3.
Biomimetics (Basel) ; 9(4)2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38667265

ABSTRACT

The exoskeleton robot is a wearable electromechanical device inspired by animal exoskeletons. It combines technologies such as sensing, control, information processing, and mobile computing to enhance human physical abilities and assist in rehabilitation training. In recent years, with the development of visual sensors and deep learning, the environmental perception of exoskeletons has drawn widespread attention in the industry. Environmental perception can provide exoskeletons with a degree of autonomous perception and decision-making ability, enhance their stability and safety in complex environments, and improve the human-machine-environment interaction loop. This paper reviews environmental perception and its related technologies for lower-limb exoskeleton robots. First, we briefly introduce the visual sensors and control systems involved. Second, we analyze and summarize the key technologies of environmental perception, including related datasets, detection of critical terrains, and environment-oriented adaptive gait planning. Finally, we analyze the factors currently limiting the development of exoskeleton environmental perception and propose future directions.

4.
Comput Biol Med ; 169: 107910, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38183703

ABSTRACT

Lower-limb exoskeletons have been used extensively in many rehabilitation applications to assist disabled people with their therapies. Brain-machine interfaces (BMIs) further provide effective and natural control schemes. However, the limited performance of decoding lower-limb kinematics from brain signals restricts the broad growth of both the BMI and rehabilitation industries. To address these challenges, we propose an ensemble method for lower-limb motor imagery (MI) classification. The proposed model employs multiple techniques, including deep and shallow parts, to boost performance. Traditional wavelet transformation followed by a filter-bank common spatial pattern (CSP) captures neurophysiologically plausible patterns, while multi-head self-attention (MSA) followed by a temporal convolutional network (TCN) extracts deeper, more generalized patterns. Experimental results with a customized lower-limb exoskeleton on 8 subjects over 3 consecutive sessions showed that the proposed method achieved 60.27% and 64.20% accuracy for the three-class (MI of left leg, MI of right leg, and rest) and two-class (lower-limb MI vs. rest) settings, respectively. Moreover, the proposed model improves accuracy by up to 4% and 2% in the subject-specific and subject-independent modes, respectively, compared with current state-of-the-art (SOTA) techniques. Finally, feature analysis was conducted to show discriminative brain patterns in each MI task and across sessions with different feedback modalities. The proposed models, integrated into the brain-actuated lower-limb exoskeleton, establish a potential BMI for gait training and neuroprosthesis control.
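The shallow branch of such a pipeline rests on CSP, which finds spatial filters maximizing the variance ratio between two classes via a generalized eigenproblem. A compact sketch of that one stage (generic CSP, not the paper's full filter-bank pipeline; trial shapes are assumptions):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    # Solve Ca w = lambda (Ca + Cb) w and keep the eigenvectors with the
    # most extreme eigenvalues; trials are (n_trials, channels, samples).
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                 # eigenvalues ascending
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T                          # (2*n_pairs, channels)

def csp_features(trial, W):
    # Classic normalized log-variance features of the filtered trial.
    z = W @ trial
    p = np.var(z, axis=1)
    return np.log(p / p.sum())
```

In a filter-bank variant, this is repeated per frequency band and the features are concatenated.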


Subject(s)
Brain-Computer Interfaces , Exoskeleton Device , Humans , Electroencephalography/methods , Brain/physiology , Leg , Gait , Imagination/physiology , Algorithms
5.
Am J Phys Med Rehabil ; 103(4): 318-324, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-37792502

ABSTRACT

OBJECTIVE: Poststroke cognitive impairment substantially affects patients' quality of life. This study explored the therapeutic efficacy of intermittent theta burst stimulation combined with cognitive training for poststroke cognitive impairment. DESIGN: The experimental group received intermittent theta burst stimulation and cognitive training, whereas the control group received cognitive training only, both for 6 wks. The outcome measures were the Loewenstein Occupational Therapy Cognitive Assessment, modified Barthel Index, transcranial Doppler ultrasonography, and functional near-infrared spectroscopy. RESULTS: After therapy, between-group comparisons revealed a substantial difference in the Loewenstein Occupational Therapy Cognitive Assessment scores (P = 0.024). Improvements in visuomotor organization and thinking operations were more noticeable in the experimental group than in the control group (P = 0.017 and P = 0.044, respectively). After treatment, the resistance index of the experimental group differed from that of the control group; channels 29, 37, and 41 were activated (P < 0.05). The active locations were the left dorsolateral prefrontal cortex, prefrontal polar cortex, and left Broca's region. CONCLUSIONS: Intermittent theta burst stimulation combined with cognitive training had a superior effect on improving cognitive function and everyday activities compared with cognitive training alone, notably in visuomotor organization and thinking operations. Intermittent theta burst stimulation may enhance cognitive performance by improving network connectivity.


Subject(s)
Cognitive Dysfunction , Transcranial Magnetic Stimulation , Humans , Transcranial Magnetic Stimulation/methods , Single-Blind Method , Cognitive Training , Quality of Life , Theta Rhythm/physiology , Prefrontal Cortex , Cognitive Dysfunction/etiology , Cognitive Dysfunction/therapy
6.
Article in English | MEDLINE | ID: mdl-38145527

ABSTRACT

The existing surface electromyography-based pattern recognition system (sEMG-PRS) exhibits limited generalizability in practical applications. In this paper, we propose a stacked weighted random forest (SWRF) algorithm to enhance the long-term usability and user adaptability of sEMG-PRS. First, the weighted random forest (WRF) is proposed to address the imbalanced performance of standard random forests (RF) caused by randomness in sampling and feature selection. Stacking is then employed to further enhance the generalizability of the WRF. Specifically, RF is utilized as the base learner, while the WRF serves as the meta-learning layer algorithm. The SWRF is evaluated against classical classification algorithms on offline datasets and in online experiments. The offline experiments indicate that the SWRF achieves an average classification accuracy of 89.06%, outperforming RF, WRF, long short-term memory (LSTM), and support vector machine (SVM) models. The online experiments indicate that the SWRF outperforms these algorithms in long-term usability and user adaptability. We believe that our method has significant potential for practical application in sEMG-PRS.
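Generic two-level stacking of the kind described can be sketched with scikit-learn: out-of-fold class probabilities from several base forests become the features of a meta-level forest. This is a simplified stand-in; the paper's WRF meta-learner is replaced here by a plain RandomForestClassifier, and all names are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def fit_stacked_forest(X, y, n_base=3, seed=0):
    # Level 0: several random forests; their out-of-fold class
    # probabilities are stacked into features for a level-1 forest.
    bases, oof = [], []
    for i in range(n_base):
        rf = RandomForestClassifier(n_estimators=50, random_state=seed + i)
        oof.append(cross_val_predict(rf, X, y, cv=5, method="predict_proba"))
        bases.append(rf.fit(X, y))
    meta = RandomForestClassifier(n_estimators=50, random_state=seed)
    meta.fit(np.hstack(oof), y)
    return bases, meta

def predict_stacked(bases, meta, X):
    meta_X = np.hstack([rf.predict_proba(X) for rf in bases])
    return meta.predict(meta_X)
```

Using out-of-fold rather than in-sample predictions keeps the meta-learner from simply memorizing the base learners' training fit.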


Subject(s)
Algorithms , Random Forest , Humans , Electromyography/methods , Support Vector Machine , Pattern Recognition, Automated/methods
7.
Sci Rep ; 13(1): 22681, 2023 12 19.
Article in English | MEDLINE | ID: mdl-38114592

ABSTRACT

In rehabilitation medicine, real-time gait analysis for humans wearing a lower-limb exoskeleton rehabilitation robot during walking can effectively prevent patients from developing excessive and asymmetric gaits during rehabilitation training, thereby avoiding falls or even secondary injuries. To address this situation, we propose a computer vision-based gait detection method for the real-time monitoring of gait during human-machine integrated walking. Specifically, we design a neural network model called GaitPoseNet for posture recognition in human-machine integrated walking. Using RGB images as input and depth features as output, the network regresses joint coordinates through the depth estimation of an implicitly supervised network. In addition, a joint guidance strategy (JGS) is designed into the network framework. The degree of correlation between the joints of the human body is used as a detection target to effectively overcome prediction difficulties caused by partial joint occlusion during walking. Finally, a post-processing algorithm is designed to describe patients' walking motion by combining the pixel coordinates of each joint point with leg length. Our method offers a non-contact measurement with strong universality and uses depth estimation and the JGS to improve measurement accuracy. Experiments on the Walking Pose with Exoskeleton (WPE) dataset show that our method reaches 95.77% PCKs@0.1 and 93.14% PCKs@0.08 with a 3.55 ms runtime. Our method therefore achieves advanced performance in terms of both speed and accuracy.
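The PCK@0.1 and PCK@0.08 numbers follow the standard Percentage of Correct Keypoints metric: a predicted joint counts as correct when its error is below a fraction of a per-sample reference length. A minimal sketch of the metric (the paper's exact reference length is not stated; leg length is a plausible choice):

```python
import numpy as np

def pck(pred, gt, ref_len, alpha=0.1):
    # pred, gt: (n_samples, n_joints, 2) pixel coordinates;
    # ref_len: (n_samples,) reference length, e.g. leg length in pixels.
    d = np.linalg.norm(pred - gt, axis=-1)          # per-joint error
    return float(np.mean(d < alpha * ref_len[:, None]))
```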


Subject(s)
Deep Learning , Exoskeleton Device , Humans , Gait Analysis , Gait , Walking , Biomechanical Phenomena
8.
Biomimetics (Basel) ; 8(4)2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37622958

ABSTRACT

The utilization of lower extremity exoskeletons has grown across diverse domains such as the military, medical treatment, and rehabilitation. This paper introduces a novel design of a lower extremity exoskeleton specifically tailored for individuals carrying heavy objects. The exoskeleton incorporates 12 degrees of freedom (DOF), four of which are controlled through hydraulic cylinders. To achieve optimal control of this intricate lower extremity exoskeleton system, the authors propose an adaptive dynamic programming (ADP) algorithm. Several crucial components are established to implement this control scheme: the formulation of a state equation for the lower extremity exoskeleton system suited to the ADP algorithm, a corresponding performance index function based on the tracking error, and the game algebraic Riccati equation. By employing the value-iteration ADP scheme, the lower extremity exoskeleton demonstrates highly effective tracking control. This study contributes to the advancement of lower extremity exoskeleton technology and offers valuable insights into the application of ADP algorithms for achieving precise and efficient control in demanding tasks such as heavy object carrying.
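At its core, value-iteration ADP for a linear-quadratic problem repeatedly applies the Riccati recursion until the value matrix converges. The sketch below shows that generic update for a discrete-time LQR (a textbook illustration, not the exoskeleton's actual dynamics or the paper's game-Riccati formulation):

```python
import numpy as np

def vi_lqr(A, B, Q, R, iters=200):
    # Value iteration on the discrete-time Riccati equation:
    #   K = (R + B'PB)^-1 B'PA     (policy improvement)
    #   P = Q + A'P(A - BK)        (value update)
    P = np.zeros_like(Q)
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K
```

On convergence, P satisfies the algebraic Riccati equation and K is the optimal state-feedback gain.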

9.
Article in English | MEDLINE | ID: mdl-37498754

ABSTRACT

Deep learning methods have been widely explored in motor imagery (MI)-based brain computer interface (BCI) systems to decode electroencephalography (EEG) signals. However, most studies fail to fully explore temporal dependencies among MI-related patterns generated in different stages of MI tasks, resulting in limited MI-EEG decoding performance. Apart from feature extraction, learning temporal dependencies is equally important for developing a subject-specific MI-based BCI, because every subject has their own way of performing MI tasks. In this paper, a novel temporal dependency learning convolutional neural network (CNN) with an attention mechanism is proposed for MI-EEG decoding. The network first learns spatial and spectral information from multi-view EEG data via a spatial convolution block. Then, a series of non-overlapping time windows is employed to segment the output data, and a discriminative feature is extracted from each time window to capture MI-related patterns generated in different stages. Furthermore, to explore temporal dependencies among discriminative features in different time windows, we design a temporal attention module that assigns different weights to features in the various time windows and fuses them into more discriminative features. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and OpenBMI datasets show that our proposed network outperforms state-of-the-art algorithms, achieving an average accuracy of 79.48%, an improvement of 2.30%, on the BCIC-IV-2a dataset. We demonstrate that learning temporal dependencies effectively improves MI-EEG decoding performance. The code is available at https://github.com/Ma-Xinzhi/LightConvNet.
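The fusion step of such a temporal attention module reduces to scoring each window feature, softmaxing the scores, and taking the weighted sum. A minimal sketch (here `w_proj` is a stand-in for the learned projection; not the authors' LightConvNet code):

```python
import numpy as np

def temporal_attention_fuse(feats, w_proj):
    # feats: (n_windows, dim) per-window features. Score each window,
    # softmax the scores, and return the attention-weighted sum.
    s = feats @ w_proj
    a = np.exp(s - s.max())
    a /= a.sum()
    return a @ feats
```

With a zero projection the scores are uniform and the fusion degenerates to a plain average over windows.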


Subject(s)
Brain-Computer Interfaces , Humans , Neural Networks, Computer , Algorithms , Electroencephalography/methods , Imagination
10.
Biomed Res Int ; 2023: 7563802, 2023.
Article in English | MEDLINE | ID: mdl-37082189

ABSTRACT

Background: The efficacy of robotic-assisted gait training (RAGT) should be assessed from multiple angles, among which gait assessment is one of the most important. Observational gait assessment is the most commonly used method in clinical practice, but its subjectivity imposes certain limitations. Instrumented assessments such as three-dimensional gait analysis (3DGA) and surface electromyography (sEMG) can capture gait data and muscle activation during walking in stroke patients with hemiplegia and thus better evaluate the rehabilitation effect of RAGT. Objective: This single-blind randomized controlled trial aimed to analyze the impact of RAGT on 3DGA parameters and muscle activation in patients with subacute stroke and to evaluate the clinical effect of RAGT on walking function. Methods: This randomized controlled trial evaluated the improvement from 4 weeks of RAGT in patients with subacute stroke using 3DGA and sEMG, combined with clinical scales: an experimental group (n = 18, 20 sessions of RAGT) and a control group (n = 16, 20 sessions of conventional gait training). Gait performance was evaluated by 3DGA, and clinical evaluations were based on the Fugl-Meyer assessment for lower extremity (FMA-LE), functional ambulation category (FAC), and 6-minute walk test (6MWT). Of these patients, 30 underwent sEMG measurement synchronized with 3DGA, and the cocontraction index in the swing phase of the knee and ankle of the affected side was calculated.
Results: After 4 weeks of intervention, intragroup comparison showed that walking speed, temporal symmetry, bilateral stride length, range of motion (ROM) of the bilateral hip, flexion angle of the affected knee, ROM of the affected ankle, FMA-LE, FAC, and 6MWT in the experimental group were significantly improved (p < 0.05); in the control group, significant improvements were observed in walking speed, temporal symmetry, stride length of the affected side, ROM of the affected hip, FMA-LE, FAC, and 6MWT (p < 0.05). Intergroup comparison showed that the experimental group significantly outperformed the control group in walking speed and temporal symmetry among the spatiotemporal parameters, ROM of the affected hip and peak flexion of the knee among the kinematic parameters, and FMA-LE and FAC among the clinical scales (p < 0.05). In patients evaluated by sEMG, the experimental group showed a noticeable improvement in the cocontraction index of the knee (p = 0.042), while no significant improvement was observed in the control group (p = 0.196), and the experimental group was better than the control group (p = 0.020). No noticeable changes were observed in the cocontraction index of the ankle in either group (p > 0.05). Conclusions: Compared with conventional gait training, RAGT improved part of the spatiotemporal parameters and optimized the motion of the affected lower-limb joints and muscle activation patterns during walking, which is crucial for further rehabilitation of walking ability in patients with subacute stroke. This trial is registered with ChiCTR2200066402.
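The cocontraction index reported here is commonly computed as the overlapping activation of an antagonist muscle pair relative to their total activation over a gait phase. A sketch of one standard formulation (the trial's exact computation is not published in the abstract):

```python
import numpy as np

def cocontraction_index(emg_agonist, emg_antagonist):
    # Falconer-Winter-style index: twice the common (overlapping) area
    # of the two rectified EMG envelopes divided by their total area.
    overlap = np.minimum(emg_agonist, emg_antagonist)
    total = emg_agonist + emg_antagonist
    return 2.0 * overlap.sum() / total.sum()
```

The index is 1 for perfectly matched activations and 0 when the two muscles never fire together.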


Subject(s)
Gait Disorders, Neurologic , Robotic Surgical Procedures , Stroke Rehabilitation , Stroke , Humans , Stroke Rehabilitation/methods , Gait Analysis , Single-Blind Method , Electromyography , Gait/physiology , Walking
11.
Sensors (Basel) ; 23(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36850775

ABSTRACT

Stairs are common vertical traffic structures in buildings, and stair detection is an important task in environmental perception for autonomous mobile robots. Most existing algorithms have difficulty combining the visual information from binocular sensors effectively and ensuring reliable detection at night or when visual clues are extremely fuzzy. To solve these problems, we propose a stair detection network with red-green-blue (RGB) and depth inputs. Specifically, we design a selective module that lets the network learn the complementary relationship between the RGB and depth feature maps and fuse the features effectively in different scenes. In addition, we propose several postprocessing algorithms, including a stair line clustering algorithm and a coordinate transformation algorithm, to obtain the stair geometric parameters. Experiments show that our method outperforms the existing state-of-the-art deep learning method, improving accuracy, recall, and runtime by 5.64%, 7.97%, and 3.81 ms, respectively. The improved indexes show the effectiveness of the multimodal inputs and the selective module. The estimated stair geometric parameters have root mean square errors within 15 mm when ascending stairs and 25 mm when descending stairs. Our method also has an extremely fast detection speed, which can meet the requirements of most real-time applications.

12.
IEEE Trans Biomed Eng ; 70(2): 446-458, 2023 02.
Article in English | MEDLINE | ID: mdl-35881595

ABSTRACT

BACKGROUND: Preoperative prediction of the origin site of premature ventricular complexes (PVCs) is critical for the success of operations. However, current methods are not efficient or accurate enough, and few existing methods combine electrocardiogram (ECG) images with deep learning for this prediction task. METHODS: We propose ECGNet, a new neural network for the classification of 12-lead ECG images. For ECGNet, 609 ECG images from 310 patients who had undergone successful surgery in the Division of Cardiology, the First Affiliated Hospital of Soochow University, are used to construct the dataset. We adopt dense blocks, special convolution kernels and divergent paths to improve the performance of ECGNet. In addition, a new loss function is designed to address the sample imbalance caused by the uneven distribution of cases, a situation common in the medical field. We also conduct extensive experiments on network prediction accuracy to compare ECGNet with other networks, such as ResNet and DarkNet. RESULTS: ECGNet achieves extremely high prediction accuracy (91.74%) and efficiency with a very small dataset. Our newly proposed loss function solves the problem of sample imbalance during the training process. CONCLUSION: The proposed ECGNet can quickly and accurately realize the multiclassification of PVCs after training with little data. Our network has the potential to assist doctors with the preoperative diagnosis of PVCs. We will continue to collect similar cases and refine our network structure to further improve prediction accuracy.
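A standard remedy for the sample imbalance the paper targets is to weight the cross-entropy loss inversely to class frequency. The paper's custom loss is not published in the abstract; the sketch below shows the generic weighted form for comparison:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, counts):
    # probs: (n, n_classes) predicted probabilities; labels: (n,) ints;
    # counts: per-class sample counts. Rarer classes get larger weights.
    w = counts.sum() / (len(counts) * counts)
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.mean(w[labels] * np.log(picked)))
```

With perfectly balanced counts the weights are all 1 and the loss reduces to plain cross-entropy.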


Subject(s)
Electrocardiography , Ventricular Premature Complexes , Ventricular Premature Complexes/diagnostic imaging , Ventricular Premature Complexes/physiopathology , Machine Learning , Neural Networks, Computer , Humans
13.
Rev Sci Instrum ; 93(11): 115114, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36461556

ABSTRACT

The functional coupling of the cerebral cortex and muscle contraction indicates that electroencephalogram (EEG) and surface electromyogram (sEMG) signals are coherent. The objective of this study is to describe the coupling relationship between EEG and sEMG through a variety of analysis methods. We collected EEG and sEMG data for left- or right-hand motor imagery and motor execution from six healthy subjects and six stroke patients. To enhance the coherence coefficient between EEG and sEMG signals, we propose an algorithm that modifies the EEG based on the peak positions of the sEMG signal. By comparing a variety of signal synchronization analysis methods, we select the most suitable coherence analysis algorithm. In addition, wavelet coherence analysis based on time-spectrum estimation was used to study the linear correlation of the frequency-domain components of the EEG and sEMG signals, verifying that wavelet coherence analysis can effectively describe the temporal variation of EEG-sEMG coherence. In the motor imagery task, significant EEG-sEMG coherence appears mainly during the imagination process, with a frequency distribution in the alpha and beta bands; in the motor execution task, significant EEG-sEMG coherence concentrates before and during the task, with a frequency distribution in the alpha, beta, and gamma bands. These results may provide a theoretical basis for the cooperative working mode of neurorehabilitation training and introduce a new method for evaluating the functional state of neural rehabilitation movement.
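The basic quantity behind this coupling analysis is the magnitude-squared coherence between the two channels. A self-contained sketch on synthetic data (not the study's recordings) shows two noisy signals sharing a 20 Hz rhythm producing a coherence peak at that frequency:

```python
import numpy as np
from scipy.signal import coherence

# Simulated "EEG" and "sEMG" channels sharing a 20 Hz component.
fs = 500
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 20 * t)
rng = np.random.default_rng(0)
eeg = shared + 0.5 * rng.standard_normal(t.size)
semg = shared + 0.5 * rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence; peaks near 20 Hz.
f, Cxy = coherence(eeg, semg, fs=fs, nperseg=512)
peak = f[np.argmax(Cxy)]
```

Wavelet coherence extends this idea by localizing the coherence estimate in time as well as frequency.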


Subject(s)
Electroencephalography , Wavelet Analysis , Humans , Electromyography , Imagination , Muscle Contraction
14.
Sci Rep ; 12(1): 16124, 2022 09 27.
Article in English | MEDLINE | ID: mdl-36167971

ABSTRACT

Staircases are among the most common building structures in urban environments. Stair detection is an important task for various applications, including the environmental perception of exoskeleton robots, humanoid robots and rescue robots and the navigation of visually impaired people. Most existing stair detection algorithms have difficulty dealing with the diversity of stair structures and materials, extreme lighting and serious occlusion. Inspired by human perception, we propose an end-to-end method based on deep learning. Specifically, we treat stair line detection as a multi-task problem combining coarse-grained semantic segmentation and object detection. The input images are divided into cells, and a simple neural network judges whether each cell contains stair lines. For cells containing stair lines, the locations of the stair lines relative to each cell are regressed. Extensive experiments on our dataset show that our method achieves 81.49% accuracy, 81.91% recall and a 12.48 ms runtime, outperforming previous methods in both speed and accuracy. A lightweight version can even achieve 300+ frames per second at the same resolution.


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Semantics
15.
Micromachines (Basel) ; 13(9)2022 Sep 07.
Article in English | MEDLINE | ID: mdl-36144108

ABSTRACT

Brain-machine interfaces (BMIs) have been applied as pattern recognition systems for neuromodulation and neurorehabilitation. Decoding brain signals (e.g., EEG) with high accuracy is a prerequisite to building a reliable and practical BMI. This study presents a deep convolutional neural network (CNN) for EEG-based motor decoding. Both upper-limb and lower-limb motor imagery were detected with this end-to-end learning approach on four datasets, yielding an average classification accuracy of 93.36 ± 1.68%. We compared the proposed approach with two other models, a multilayer perceptron and the state-of-the-art framework of common spatial patterns with a support vector machine, and observed that the CNN-based framework performed significantly better than both. Feature visualization was further conducted to identify the discriminative channels employed for decoding. We showed the feasibility of the proposed architecture for decoding motor imagery from raw EEG data without manually designed features. With the advances in computer vision and speech recognition, deep learning can not only boost EEG decoding performance but also help us gain more insight from the data, which may further broaden the knowledge of neuroscience for brain mapping.

16.
Med Image Anal ; 77: 102338, 2022 04.
Article in English | MEDLINE | ID: mdl-35016079

ABSTRACT

Recently, self-supervised learning technology has been applied to calculate depth and ego-motion from monocular videos, achieving remarkable performance in autonomous driving scenarios. One widely adopted assumption of depth and ego-motion self-supervised learning is that the image brightness remains constant within nearby frames. Unfortunately, the endoscopic scene does not meet this assumption because there are severe brightness fluctuations induced by illumination variations, non-Lambertian reflections and interreflections during data collection, and these brightness fluctuations inevitably deteriorate the depth and ego-motion estimation accuracy. In this work, we introduce a novel concept referred to as appearance flow to address the brightness inconsistency problem. The appearance flow takes into consideration any variations in the brightness pattern and enables us to develop a generalized dynamic image constraint. Furthermore, we build a unified self-supervised framework to estimate monocular depth and ego-motion simultaneously in endoscopic scenes, which comprises a structure module, a motion module, an appearance module and a correspondence module, to accurately reconstruct the appearance and calibrate the image brightness. Extensive experiments are conducted on the SCARED dataset and EndoSLAM dataset, and the proposed unified framework exceeds other self-supervised approaches by a large margin. To validate our framework's generalization ability on different patients and cameras, we train our model on SCARED but test it on the SERV-CT and Hamlyn datasets without any fine-tuning, and the superior results reveal its strong generalization ability. Code is available at: https://github.com/ShuweiShao/AF-SfMLearner.
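The generalized dynamic image constraint described here relaxes the brightness-constancy assumption by letting a per-pixel appearance flow absorb illumination changes before the photometric residual is computed. A deliberately simplified L1, grayscale sketch of that idea (the actual framework uses learned warping and richer losses):

```python
import numpy as np

def calibrated_photometric_loss(I_t, I_s_warped, appearance_flow):
    # Compensate brightness changes in the warped source image with a
    # per-pixel appearance flow map, then compare to the target frame.
    I_cal = I_s_warped + appearance_flow
    return float(np.mean(np.abs(I_t - I_cal)))
```

When the appearance flow exactly models a global brightness shift, the residual vanishes even though plain brightness constancy is violated.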


Subject(s)
Ego , Endoscopy, Gastrointestinal , Humans , Motion
17.
Int J Comput Assist Radiol Surg ; 17(1): 157-166, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34677745

ABSTRACT

PURPOSE: Image registration is a fundamental task in image processing and is critical to many clinical applications, e.g., computer-assisted surgery. In this work, we design an effective framework that gains higher accuracy at a minimal cost to the invertibility of the registration field. METHODS: A hierarchically aggregated transformation (HAT) module is proposed. Within each HAT module, we connect multiple convolutions in a hierarchical manner to capture multi-scale context, enabling small and large displacements between a pair of images to be taken into account simultaneously during the registration process. In addition, an adaptive feature scaling (AFS) mechanism is presented to refine the multi-scale feature maps derived from the HAT module by rescaling channel-wise features in the global receptive field. Based on the HAT module and AFS mechanism, we establish an efficacious and efficient unsupervised deformable registration framework. RESULTS: The devised framework is validated on the SCARED dataset and the MICCAI Instrument Segmentation and Tracking Challenge 2015 dataset, and the experimental results demonstrate that our method achieves better registration accuracy with fewer folding pixels than three widely used baseline approaches: SyN, NiftyReg and VoxelMorph. CONCLUSION: We develop a novel method for unsupervised deformable image registration by incorporating the HAT module and AFS mechanism into the framework, which provides a new way to obtain a desirable registration field between a pair of images.
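The "folding pixels" used to measure invertibility are the locations where the Jacobian determinant of the deformation x + u(x) is non-positive. A finite-difference sketch of that count for a 2-D displacement field (illustrative convention; not the paper's evaluation code):

```python
import numpy as np

def folding_ratio(disp):
    # disp[..., 0] is displacement along axis 0, disp[..., 1] along axis 1.
    # phi(x) = x + u(x) folds wherever det(J_phi) <= 0.
    d0_0, d0_1 = np.gradient(disp[..., 0])
    d1_0, d1_1 = np.gradient(disp[..., 1])
    det = (1 + d0_0) * (1 + d1_1) - d0_1 * d1_0
    return float(np.mean(det <= 0))
```

A smooth, well-behaved registration field keeps this ratio at or near zero.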


Subject(s)
Image Processing, Computer-Assisted , Unsupervised Machine Learning , Algorithms , Humans
18.
Materials (Basel) ; 13(21)2020 Oct 28.
Article in English | MEDLINE | ID: mdl-33126561

ABSTRACT

The dynamic properties of materials should be analyzed for the material selection and safety design of robots used in the military and other protective structural applications. The split Hopkinson pressure bar (SHPB) is a widely used system for measuring the dynamic behavior of materials at strain rates between 10^2 and 10^4 s^-1. To obtain accurate dynamic parameters of materials, the influences of friction and inertia should be considered in SHPB tests. In this study, the effects of friction conditions, specimen shape, and specimen configuration on SHPB results are numerically investigated for a rate-independent material, a rate-dependent elastic-plastic material, and a rate-dependent visco-elastic material. High-strength steel DP500 and polymethylmethacrylate represent the latter two materials; the rate-independent material uses the same elastic modulus and hardening modulus as the rate-dependent visco-elastic material but without strain rate effects, for comparison. The impact velocities were 3 and 10 m/s. The results show that friction and inertia can produce a significant increase in the flow stress, and their effects depend on impact velocity. The rate-dependent visco-elastic specimen is the most sensitive of the three materials to friction and inertia effects. A theoretical analysis based on the conservation of energy is conducted to quantify the relationship between the stress measured in the specimen and the friction and inertia effects. Furthermore, methods to reduce the influence of friction and inertia effects on the experimental results are analyzed.
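The quantities an SHPB test reports are usually derived from the reflected and transmitted strain-gauge signals via the classic two-wave analysis. The sketch below shows those textbook formulas; the symbols (bar modulus `E_bar`, areas, specimen length, wave speed `c0`) are generic, not the paper's simulation code:

```python
import numpy as np

def shpb_specimen_response(eps_r, eps_t, E_bar, A_bar, A_spec, L_spec, c0, dt):
    # Two-wave SHPB analysis:
    #   specimen stress   sigma = E_bar * (A_bar / A_spec) * eps_t
    #   strain rate       d(eps)/dt = -2 * c0 / L_spec * eps_r
    #   strain            time integral of the strain rate
    stress = E_bar * (A_bar / A_spec) * eps_t
    strain_rate = -2.0 * c0 / L_spec * eps_r
    strain = np.cumsum(strain_rate) * dt
    return stress, strain_rate, strain
```

Friction and radial inertia perturb the stress inferred this way, which is exactly the error source the paper quantifies.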

19.
Front Neurorobot ; 13: 67, 2019.
Article in English | MEDLINE | ID: mdl-31507400

ABSTRACT

As a leading cause of loss of functional movement, stroke often makes it difficult for patients to walk. Interventions to aid motor recovery in stroke patients should be carried out as a matter of urgency. However, muscle activity at the knee is usually too weak to generate overt movements, which poses a challenge for early post-stroke rehabilitation training. Although electromyography (EMG)-controlled exoskeletons have the potential to solve this problem, most existing robotic devices in rehabilitation centers are expensive, technologically complex, and allow only low training intensity. To address these problems, we have developed an EMG-controlled knee exoskeleton for home use to assist stroke patients in their rehabilitation. EMG signals of the subject are acquired by an easy-to-don EMG sensor and then processed by a Kalman filter to control the exoskeleton autonomously. A newly designed game is introduced to improve rehabilitation by encouraging patients' involvement in the training process. Six healthy subjects took part in an initial test of this new training tool. The test showed that subjects could use their EMG signals to control the exoskeleton to assist them in playing the game. Subjects found the rehabilitation process interesting and improved their control performance over 20 blocks of training, with game scores increasing from 41.3 ± 15.19 to 78.5 ± 25.2. The setup process was simplified compared with traditional studies and took only 72 s in a test with one healthy subject. The time lag of EMG signal processing, an important aspect of real-time control, was significantly reduced to about 64 ms by employing a Kalman filter, while the delay caused by the exoskeleton was about 110 ms. This easy-to-use rehabilitation tool greatly simplifies the training process and allows patients to undergo rehabilitation at home without a therapist present. It has the potential to improve the intensity of rehabilitation and the outcomes for stroke patients in the initial phase of rehabilitation.
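The low processing lag reported above is attributed to a Kalman filter tracking the EMG activation level. A scalar random-walk Kalman filter of the kind that could serve this role looks like the sketch below; the noise parameters q and r are assumed values for illustration, not the paper's tuning.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Scalar Kalman filter for an EMG activation envelope (sketch).

    Model: random walk x_k = x_{k-1} + w_k with process noise q,
    measurement z_k = x_k + v_k with measurement noise r.
    z is a sequence of rectified EMG samples.
    """
    x, p = 0.0, 1.0                      # state estimate and its variance
    out = np.empty(len(z), dtype=float)
    for k, zk in enumerate(z):
        p += q                           # predict step inflates variance
        g = p / (p + r)                  # Kalman gain
        x += g * (zk - x)                # correct with the new measurement
        p *= (1.0 - g)                   # updated estimate variance
        out[k] = x
    return out
```

Unlike a moving-average window, the filter's output depends only on past samples and a single multiply-add per step, which is why it can keep the processing delay low for real-time exoskeleton control.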

20.
IEEE Trans Neural Syst Rehabil Eng ; 26(8): 1626-1635, 2018 08.
Article in English | MEDLINE | ID: mdl-30004882

ABSTRACT

Brain-machine interfaces have been used to incorporate user intention into the triggering of robotic devices by decoding movement onset from electroencephalography. Active neural participation is crucial for promoting brain plasticity and thus enhancing the prospect of motor recovery. This paper presents the decoding of lower-limb movement-related cortical potentials with continuous classification and asynchronous detection. We conducted experiments in a customized gait trainer, in which 10 healthy subjects performed self-initiated ankle plantar flexion. We further analyzed the features, evaluated the impact of limb side, and compared the proposed framework with other typical decoding methods. No significant differences were observed between the left and right legs in terms of either the neural signatures of movement or classification performance. We obtained a higher true positive rate, fewer false positives, and comparable latencies with respect to existing online detection methods. This paper demonstrates the feasibility of the proposed framework for building a closed-loop gait trainer. Potential applications include gait-training neurorehabilitation in clinical trials.
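Asynchronous detection of this kind typically turns a stream of per-window classifier scores into discrete onset events, trading off true positives, false positives, and latency. A minimal debouncing scheme is sketched below; the consecutive-window criterion, refractory period, and parameter names are our illustration of the general idea, not the paper's specific detector.

```python
def asynchronous_detect(scores, threshold, n_consecutive=3, refractory=50):
    """Convert continuous classifier scores into onset events (sketch).

    An onset is flagged only after `n_consecutive` successive windows
    exceed `threshold` (suppressing isolated false positives), and a
    refractory period then blocks immediate re-triggering.
    Returns the window indices at which onsets were detected.
    """
    events, run, block_until = [], 0, -1
    for i, s in enumerate(scores):
        if i < block_until:              # inside the refractory period
            continue
        run = run + 1 if s > threshold else 0
        if run >= n_consecutive:
            events.append(i)             # detection latency = n_consecutive windows
            run = 0
            block_until = i + refractory
    return events
```

Raising `n_consecutive` lowers the false-positive rate at the cost of added latency, which is exactly the trade-off evaluated against existing online detection methods above.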


Subject(s)
Electroencephalography/classification , Electroencephalography/statistics & numerical data , Lower Extremity/physiology , Movement/physiology , Adult , Artifacts , Biomechanical Phenomena , Brain-Computer Interfaces , Cerebral Cortex/physiology , Female , Functional Laterality/physiology , Gait Disorders, Neurologic/rehabilitation , Healthy Volunteers , Humans , Male , Young Adult