Results 1 - 20 of 9,077
1.
Sci Rep ; 14(1): 10598, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719940

ABSTRACT

A popular and widely suggested measure for assessing unilateral hand motor skills in stroke patients is the box and block test (BBT). Our study aimed to create an augmented reality enhanced version of the BBT (AR-BBT) and evaluate its correlation to the original BBT for stroke patients. Following G-power analysis, clinical examination, and inclusion-exclusion criteria, 31 stroke patients were included in this study. AR-BBT was developed using the Open Source Computer Vision Library (OpenCV). MediaPipe's hand-tracking library uses a palm and a hand landmark machine learning model to detect and track hands. A computer and a depth camera were employed in the clinical evaluation of AR-BBT following the principles of traditional BBT. A strong, statistically significant positive correlation was found between the number of blocks moved in the BBT and the AR-BBT on the hemiplegic side (Pearson r = 0.918, p = 0.000008). The conventional BBT is currently the preferred assessment method. However, our approach offers an advantage: an AR-BBT solution could remotely monitor the assessment of a home-based rehabilitation program and provide additional hand kinematic information on hand dexterity in AR environment conditions. Furthermore, it requires minimal hardware.
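The headline statistic here is an ordinary Pearson correlation between the two block counts. A minimal sketch of that computation in Python, using hypothetical scores rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical block counts on the hemiplegic side (not the study's data).
bbt    = [10, 14, 22, 30, 35, 41]
ar_bbt = [ 9, 15, 20, 31, 33, 44]
r = pearson_r(bbt, ar_bbt)  # strongly correlated hypothetical scores
```

The study additionally reports a p-value for the correlation, which libraries such as SciPy compute alongside r.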


Subject(s)
Augmented Reality , Hand , Machine Learning , Stroke Rehabilitation , Stroke , Humans , Male , Female , Middle Aged , Stroke/physiopathology , Aged , Hand/physiopathology , Hand/physiology , Stroke Rehabilitation/methods , Motor Skills/physiology , Adult
2.
Sci Rep ; 14(1): 10144, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38698185

ABSTRACT

Arterial pulse wave velocity (PWV) is recognized as a convenient method to assess peripheral vascular stiffness. This study explored the clinical characteristics of hand PWV (hPWV) and hand pulse transit time (hPTT) in healthy adults; sixty males (42.4 ± 13.9 yrs) and sixty-four females (42.8 ± 13.9 yrs) voluntarily participated in this study. The arterial pulse waveform and the anatomical distance from the radial styloid process to the tip of the middle finger of both hands were recorded in the sitting position. The hPWV was calculated as the traversed distance divided by the hPTT between those two points. Male subjects showed significantly greater hPWV, systolic blood pressure, and pulse pressure than age-matched female subjects, while the hPTT was not significantly different between genders. Multiple linear regression analysis showed that gender is a common determinant of hPWV and hPTT, and that age and heart rate (HR) were negatively correlated with hPWV and hPTT, respectively. We conclude that male subjects have greater hPWV than female subjects. Ageing is associated with decreased hPWV, while increased HR is associated with a smaller hPTT. The hPWV and hPTT might be used as non-invasive indices to characterise the ageing and arterial stiffness of peripheral blood vessels.
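The hPWV computation described above is a simple ratio of distance to transit time; a minimal sketch with illustrative (not measured) values:

```python
def hand_pwv(distance_m, transit_time_s):
    """Hand pulse wave velocity: traversed distance divided by pulse transit time."""
    return distance_m / transit_time_s

# Hypothetical values: 0.18 m from the radial styloid process to the middle
# fingertip, and a 30 ms hand pulse transit time between those two points.
v = hand_pwv(0.18, 0.030)  # ~6 m/s
```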


Subject(s)
Blood Pressure , Hand , Heart Rate , Pulse Wave Analysis , Vascular Stiffness , Humans , Male , Female , Adult , Middle Aged , Hand/physiology , Vascular Stiffness/physiology , Blood Pressure/physiology , Heart Rate/physiology , Healthy Volunteers
3.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38771243

ABSTRACT

Variability in brain structure is associated with the capacity for behavioral change. However, a causal link between specific brain areas and behavioral change (such as motor learning) has not been demonstrated. We hypothesized that greater gray matter volume of a primary motor cortex (M1) area active during a hand motor learning task is positively correlated with subsequent learning of the task, and that disruption of this area blocks learning of the task. Healthy participants underwent structural MRI before learning a skilled hand motor task. Next, participants performed this learning task during fMRI to determine M1 areas functionally active during the task. This functional ROI was anatomically constrained by M1 boundaries to create a group-level "Active-M1" ROI used to measure gray matter volume in each participant. Greater gray matter volume in the left-hemisphere Active-M1 ROI was related to greater motor learning in the corresponding right hand. When the M1 hand area was disrupted with repetitive transcranial magnetic stimulation (rTMS), learning of the motor task was blocked, confirming its causal link to motor learning. Our combined imaging and rTMS approach revealed that greater cortical volume in a task-relevant M1 area is causally related to learning of a hand motor task in healthy humans.


Subject(s)
Gray Matter , Hand , Learning , Magnetic Resonance Imaging , Motor Cortex , Transcranial Magnetic Stimulation , Humans , Motor Cortex/physiology , Motor Cortex/diagnostic imaging , Male , Female , Hand/physiology , Learning/physiology , Adult , Young Adult , Gray Matter/physiology , Gray Matter/diagnostic imaging , Motor Skills/physiology , Brain Mapping , Functional Laterality/physiology
4.
Sci Rep ; 14(1): 11617, 2024 05 21.
Article in English | MEDLINE | ID: mdl-38773183

ABSTRACT

It has been argued that experiencing the pain of others motivates helping. Here, we investigate the contribution of somatic feelings while witnessing the pain of others to costly helping decisions, by contrasting the choices and brain activity of participants who report somatic feelings (self-reported mirror-pain synesthetes) against those who do not. Participants in fMRI witnessed a confederate receiving pain stimulations whose intensity they could reduce by donating money. The pain intensity could be inferred either from the facial expressions of the confederate in pain (Face condition) or from the kinematics of the pain-receiving hand (Hand condition). Our results show that self-reported mirror-pain synesthetes increase their donation more steeply as the intensity of the observed pain increases, and that their somatosensory brain activity (SII and the adjacent IPL) was more tightly associated with donation in the Hand condition. For all participants, activation in insula, SII, TPJ, pSTS, amygdala, and MCC correlated with the trial-by-trial donation made in the Face condition, while SI and MTG activation correlated with the donation in the Hand condition. These results further inform us about the role of somatic feelings while witnessing the pain of others in situations of costly helping.


Subject(s)
Magnetic Resonance Imaging , Pain , Humans , Female , Male , Adult , Pain/psychology , Pain/physiopathology , Young Adult , Brain/physiopathology , Brain/diagnostic imaging , Brain/physiology , Brain Mapping , Facial Expression , Helping Behavior , Hand/physiology
5.
PLoS One ; 19(5): e0294125, 2024.
Article in English | MEDLINE | ID: mdl-38781201

ABSTRACT

Most people know whether they are left-handed or right-handed, and usually base this assessment on preferences during one-handed tasks. There are several manual tasks that require the contribution of both hands, in which, in most cases, each hand plays a different role. In this respect, holding an ice-hockey stick is particularly interesting because the hand placement may influence the playing style. In this study (n = 854), the main objective was to determine to what extent the way of holding an ice-hockey stick is associated with other lateralized preferences. Amongst the 131 participants reporting a preference for the left hand in unilateral tasks, 70.2% reported a preference for shooting right (placing the right hand in the middle of the stick); and amongst the 583 participants reporting a preference for writing with the right hand, 66.2% reported a preference for shooting left. One hundred and forty participants (16.4%) were classified as ambidextrous, and 61.4% of them reported a preference for shooting right. This preference on the ice-hockey stick is closely correlated (uncrossed preference) with the way one holds a rake, shovel, broom, or golf club, but inversely related to the way one holds an ax or a baseball bat. The link between the way of holding the ice-hockey stick and eyedness or footedness is weak. These results are contrasted with those reported by Loffing et al. (2014) and reveal the need to clarify the exact nature and requirements of the targeted tasks when studying bilateral asymmetric preferences.


Subject(s)
Functional Laterality , Humans , Functional Laterality/physiology , Male , Female , Adult , Hockey/physiology , Young Adult , Hand/physiology , Middle Aged , Adolescent
6.
Curr Biol ; 34(10): 2238-2246.e5, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38718799

ABSTRACT

To sense and interact with objects in the environment, we effortlessly configure our fingertips at desired locations. It is therefore reasonable to assume that the underlying control mechanisms rely on accurate knowledge about the structure and spatial dimensions of our hand and fingers. This intuition, however, is challenged by years of research showing drastic biases in the perception of finger geometry.1,2,3,4,5 This perceptual bias has been taken as evidence that the brain's internal representation of the body's geometry is distorted,6 leading to an apparent paradox regarding the skillfulness of our actions.7 Here, we propose an alternative explanation of the biases in hand perception-they are the result of the Bayesian integration of noisy, but unbiased, somatosensory signals about finger geometry and posture. To address this hypothesis, we combined Bayesian reverse engineering with behavioral experimentation on joint and fingertip localization of the index finger. We modeled the Bayesian integration either in sensory or in space-based coordinates, showing that the latter model variant led to biases in finger perception despite accurate representation of finger length. Behavioral measures of joint and fingertip localization responses showed similar biases, which were well fitted by the space-based, but not the sensory-based, model variant. The space-based model variant also outperformed a distorted hand model with built-in geometric biases. In total, our results suggest that perceptual distortions of finger geometry do not reflect a distorted hand model but originate from near-optimal Bayesian inference on somatosensory signals.
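The Bayesian integration proposed here reduces, in its simplest Gaussian form, to a precision-weighted average of two estimates; the fused estimate is pulled toward the more reliable source, which is how individually unbiased signals can yield a biased percept. A minimal sketch (illustrative numbers, not the paper's fitted model):

```python
def gaussian_fusion(mu1, var1, mu2, var2):
    """Optimal (precision-weighted) fusion of two Gaussian estimates."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)   # weight of the more/less precise cue
    mu = w1 * mu1 + (1 - w1) * mu2            # posterior mean
    var = 1 / (1 / var1 + 1 / var2)           # posterior variance (always smaller)
    return mu, var

# Hypothetical fingertip-position estimates (cm): a precise cue at 8.0 and a
# noisier cue at 6.0. The fused estimate lands nearer the precise cue.
mu, var = gaussian_fusion(8.0, 1.0, 6.0, 4.0)
```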


Subject(s)
Bayes Theorem , Fingers , Hand , Humans , Hand/physiology , Fingers/physiology , Female , Male , Adult , Young Adult , Touch Perception/physiology
7.
Article in English | MEDLINE | ID: mdl-38722725

ABSTRACT

Utilization of hand-tracking cameras, such as Leap, for hand rehabilitation and functional assessments is an innovative approach to providing affordable alternatives for people with disabilities. However, prior to deploying these commercially available tools, a thorough evaluation of their performance for disabled populations is necessary. In this study, we provide an in-depth analysis of the accuracy of Leap's hand-tracking feature for individuals both with and without upper-body disabilities across common dynamic tasks used in rehabilitation. Leap is compared against motion capture using conventional techniques such as signal correlations, mean absolute errors, and digit segment length estimation. We also propose the use of dimensionality reduction techniques, such as Principal Component Analysis (PCA), to capture the complex, high-dimensional signal spaces of the hand. We found that Leap's hand-tracking performance did not differ between individuals with and without disabilities, yielding average signal correlations between 0.7 and 0.9. Both low and high mean absolute errors (between 10 and 80 mm) were observed across participants. Overall, Leap did well with general hand posture tracking, with the largest errors associated with the tracking of the index finger. Leap's hand model was found to be most inaccurate in the proximal digit segment, underestimating digit lengths with errors as high as 18 mm. Using PCA to quantify differences between the high-dimensional spaces of Leap and motion capture showed that high correlations between latent-space projections were associated with high accuracy in the original signal space. These results point to the potential of low-dimensional representations of complex hand movements to support hand rehabilitation and assessment.
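The PCA-based comparison can be sketched as follows: project each system's signals onto their top principal components and correlate the corresponding latent dimensions. This is a simplified illustration with synthetic data, not the authors' pipeline:

```python
import numpy as np

def pca_project(X, k=2):
    """Project zero-centered signals onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Hypothetical (time x sensor) matrices: two latent hand movements mixed into
# 20 channels, re-measured with a little noise by a second tracking system.
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 2))
W = np.zeros((2, 20))
W[0, :10] = 3.0   # first movement dominates the first ten channels
W[1, 10:] = 1.0   # second movement drives the rest, at lower amplitude
mocap = latent @ W
leap = mocap + 0.05 * rng.standard_normal(mocap.shape)

z_mocap = pca_project(mocap)
z_leap = pca_project(leap)
# Correlate corresponding latent dimensions (a PC's sign is arbitrary).
r = [abs(np.corrcoef(z_mocap[:, i], z_leap[:, i])[0, 1]) for i in range(2)]
```

High correlation in this low-dimensional space mirrors the paper's finding that latent-space agreement tracks accuracy in the original signal space.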


Subject(s)
Hand , Principal Component Analysis , Video Recording , Humans , Hand/physiology , Male , Female , Adult , Disabled Persons/rehabilitation , Middle Aged , Reproducibility of Results , Young Adult , Algorithms , Movement/physiology
8.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732808

ABSTRACT

Currently, surface EMG signals have a wide range of applications in human-computer interaction systems. However, selecting features for gesture recognition models based on traditional machine learning can be challenging and may not yield satisfactory results. Considering the strong nonlinear generalization ability of neural networks, this paper proposes a two-stream residual network model with an attention mechanism for gesture recognition. One branch processes surface EMG signals, while the other processes hand acceleration signals. Segmented networks are utilized to fully extract the physiological and kinematic features of the hand. To enhance the model's capacity to learn crucial information, we introduce an attention mechanism after global average pooling. This mechanism strengthens relevant features and weakens irrelevant ones. Finally, the deep features obtained from the two branches of learning are fused to further improve the accuracy of multi-gesture recognition. The experiments conducted on the NinaPro DB2 public dataset resulted in a recognition accuracy of 88.25% for 49 gestures. This demonstrates that our network model can effectively capture gesture features, enhancing accuracy and robustness across various gestures. This approach to multi-source information fusion is expected to provide more accurate and real-time commands for exoskeleton robots and myoelectric prosthetic control systems, thereby enhancing the user experience and the naturalness of robot operation.
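A channel-attention step after global average pooling, in the squeeze-and-excitation style the abstract describes, can be sketched in a few lines of NumPy; the shapes and weights below are illustrative, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def channel_attention(features, W1, W2):
    """Attention after global average pooling: pool each channel to one value,
    pass it through a small bottleneck, and rescale channels by the result,
    strengthening relevant features and weakening irrelevant ones."""
    s = features.mean(axis=0)                 # global average pool: (channels,)
    w = sigmoid(W2 @ np.maximum(W1 @ s, 0))   # excitation weights in (0, 1)
    return features * w                       # reweighted feature map

# Hypothetical feature map: 100 time steps x 8 channels, random bottleneck weights.
rng = np.random.default_rng(1)
feats = rng.standard_normal((100, 8))
W1 = rng.standard_normal((4, 8))   # squeeze: 8 -> 4
W2 = rng.standard_normal((8, 4))   # excite: 4 -> 8
out = channel_attention(feats, W1, W2)
```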


Subject(s)
Electromyography , Gestures , Neural Networks, Computer , Humans , Electromyography/methods , Signal Processing, Computer-Assisted , Pattern Recognition, Automated/methods , Acceleration , Algorithms , Hand/physiology , Machine Learning , Biomechanical Phenomena/physiology
9.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732843

ABSTRACT

As the number of electronic gadgets in our daily lives increases, and most of them require some kind of human interaction, innovative and convenient input methods are in demand. State-of-the-art (SotA) ultrasound-based hand gesture recognition (HGR) systems have limitations in terms of robustness and accuracy. This research presents a novel machine learning (ML)-based end-to-end solution for hand gesture recognition with low-cost micro-electromechanical system (MEMS) ultrasonic transducers. In contrast to prior methods, our ML model processes the raw echo samples directly instead of using pre-processed data. Consequently, the processing flow presented in this work leaves it to the ML model to extract the important information from the echo data. The success of this approach is demonstrated as follows. Four MEMS ultrasonic transducers are placed in three different geometrical arrangements. For each arrangement, different types of ML models are optimized and benchmarked on datasets acquired with the presented custom hardware (HW): convolutional neural networks (CNNs), gated recurrent units (GRUs), long short-term memory (LSTM), vision transformer (ViT), and cross-attention multi-scale vision transformer (CrossViT). The last three of these models reached more than 88% accuracy. The most important innovation described in this research paper is the demonstration that little pre-processing is necessary to obtain high accuracy in ultrasonic HGR for several arrangements of cost-effective and low-power MEMS ultrasonic transducer arrays; even the computationally intensive Fourier transform can be omitted. The presented approach is further compared to HGR systems using other sensor types such as vision, WiFi, radar, and state-of-the-art ultrasound-based HGR systems. Direct processing of the sensor signals by a compact model makes ultrasonic hand gesture recognition a true low-cost and power-efficient input method.


Subject(s)
Gestures , Hand , Machine Learning , Neural Networks, Computer , Humans , Hand/physiology , Pattern Recognition, Automated/methods , Ultrasonography/methods , Ultrasonography/instrumentation , Ultrasonics/instrumentation , Algorithms
10.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732871

ABSTRACT

Myoelectric hands are beneficial tools in the daily activities of people with upper-limb deficiencies. Because traditional myoelectric hands rely on detecting muscle activity in residual limbs, they are not suitable for individuals with short stumps or paralyzed limbs. Therefore, we developed a novel electric prosthetic hand that functions without myoelectricity, utilizing wearable wireless sensor technology for control. As a preliminary evaluation, our prototype hand with wireless button sensors was compared with a conventional myoelectric hand (Ottobock). Ten healthy therapists were enrolled in this study. The hands were fixed to their forearms, myoelectric hand muscle activity sensors were attached to the wrist extensor and flexor muscles, and wireless button sensors for the prostheses were attached to each user's trunk. Clinical evaluations were performed using the Simple Test for Evaluating Hand Function and the Action Research Arm Test. The fatigue degree was evaluated using the modified Borg scale before and after the tests. While no statistically significant differences were observed between the two hands across the tests, the change in the Borg scale was notably smaller for our prosthetic hand (p = 0.045). Compared with the Ottobock hand, the proposed hand prosthesis has potential for widespread applications in people with upper-limb deficiencies.


Subject(s)
Artificial Limbs , Hand , Wearable Electronic Devices , Wireless Technology , Humans , Hand/physiology , Pilot Projects , Wireless Technology/instrumentation , Male , Adult , Female , Electromyography/instrumentation , Prosthesis Design
11.
Sensors (Basel) ; 24(9)2024 May 03.
Article in English | MEDLINE | ID: mdl-38733030

ABSTRACT

This article presents a study on the neurobiological control of voluntary movements for anthropomorphic robotic systems. A corticospinal neural network model has been developed to control joint trajectories in multi-fingered robotic hands. The proposed neural network simulates cortical and spinal areas, as well as the connectivity between them, during the execution of voluntary movements similar to those performed by humans or monkeys. Furthermore, this neural connection allows for the interpretation of functional roles in the motor areas of the brain. The proposed neural control system is tested on the fingers of a robotic hand, which is driven by agonist-antagonist tendons and actuators designed to accurately emulate complex muscular functionality. The experimental results show that the corticospinal controller produces key properties of biological movement control, such as bell-shaped asymmetric velocity profiles and the ability to compensate for disturbances. Movements are dynamically compensated for through sensory feedback. Based on the experimental results, it is concluded that the proposed biologically inspired adaptive neural control system is robust, reliable, and adaptable to robotic platforms with diverse biomechanics and degrees of freedom. The corticospinal network successfully integrates biological concepts with engineering control theory for the generation of functional movement. This research significantly contributes to improving our understanding of neuromotor control in both animals and humans, thus paving the way towards a new frontier in the field of neurobiological control of anthropomorphic robotic systems.
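The bell-shaped velocity profiles mentioned above are classically idealized by the minimum-jerk model, v(t) = (30D/T)·τ²(1−τ)² with τ = t/T; the paper's profiles are asymmetric, but the symmetric minimum-jerk form captures the basic bell shape. A minimal sketch:

```python
def min_jerk_speed(t, D, T):
    """Minimum-jerk tangential speed for a point-to-point movement of
    amplitude D (m) and duration T (s)."""
    tau = t / T
    return 30 * D / T * tau**2 * (1 - tau)**2

# Speed is zero at both endpoints and peaks mid-movement (the bell shape).
D, T = 0.2, 1.0  # a 20 cm reach lasting 1 s (illustrative values)
peak = min_jerk_speed(T / 2, D, T)
```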


Subject(s)
Hand , Neural Networks, Computer , Robotics , Tendons , Humans , Robotics/methods , Hand/physiology , Tendons/physiology , Movement/physiology , Nerve Net/physiology , Biomechanical Phenomena/physiology , Pyramidal Tracts/physiology , Animals
12.
Article in English | MEDLINE | ID: mdl-38739518

ABSTRACT

The employment of surface electromyographic (sEMG) signals in the estimation of hand kinematics represents a promising non-invasive methodology for the advancement of human-machine interfaces. However, existing subject-specific methods are clearly limited: they confine the application to individual models custom-tailored for specific subjects, reducing the potential for broader applicability. In addition, current cross-subject methods struggle to serve the needs of both new and existing users effectively at the same time. To overcome these challenges, we propose the Cross-Subject Lifelong Network (CSLN). CSLN incorporates a novel lifelong learning approach, maintaining the patterns of sEMG signals across a varied user population and across different temporal scales. Our method enhances the generalization of acquired patterns, making it applicable to various individuals and temporal contexts. Our experimental investigations, encompassing both joint and sequential training approaches, demonstrate that the CSLN model not only attains enhanced performance in cross-subject scenarios but also effectively addresses the issue of catastrophic forgetting, thereby augmenting training efficacy.


Subject(s)
Algorithms , Electromyography , Hand , Humans , Electromyography/methods , Hand/physiology , Biomechanical Phenomena , Male , Adult , Learning/physiology , Female , Man-Machine Systems , Machine Learning , Young Adult , Neural Networks, Computer , Muscle, Skeletal/physiology
13.
Article in English | MEDLINE | ID: mdl-38739519

ABSTRACT

Intuitive regression control of prostheses relies on training algorithms to correlate biological recordings to motor intent. The quality of the training dataset is critical to run-time regression performance, but accurately labeling intended hand kinematics after hand amputation is challenging. In this study, we quantified the accuracy and precision of labeling hand kinematics using two common training paradigms: 1) mimic training, where participants mimic predetermined motions of a prosthesis, and 2) mirror training, where participants mirror their contralateral intact hand during synchronized bilateral movements. We first explored this question in healthy non-amputee individuals where the ground-truth kinematics could be readily determined using motion capture. Kinematic data showed that mimic training fails to account for biomechanical coupling and temporal changes in hand posture. Additionally, mirror training exhibited significantly higher accuracy and precision in labeling hand kinematics. These findings suggest that the mirror training approach generates a more faithful, albeit more complex, dataset. Accordingly, mirror training resulted in significantly better offline regression performance when using a large amount of training data and a non-linear neural network. Next, we explored these different training paradigms online, with a cohort of unilateral transradial amputees actively controlling a prosthesis in real-time to complete a functional task. Overall, we found that mirror training resulted in significantly faster task completion speeds and similar subjective workload. These results demonstrate that mirror training can potentially provide more dexterous control through the utilization of task-specific, user-selected training data. Consequently, these findings serve as a valuable guide for the next generation of myoelectric and neuroprostheses leveraging machine learning to provide more dexterous and intuitive control.
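Accuracy and precision of a kinematic label stream can be quantified as the mean and spread of its error against motion-capture ground truth; a minimal sketch with hypothetical joint-angle labels (not the study's data):

```python
import numpy as np

def label_quality(labels, ground_truth):
    """Accuracy (mean absolute error) and precision (spread of the error)
    of a kinematic label stream against ground-truth kinematics."""
    err = np.asarray(labels) - np.asarray(ground_truth)
    return np.abs(err).mean(), err.std()

# Hypothetical joint angles (degrees) labeled under the two training paradigms.
truth  = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
mimic  = np.array([14.0, 16.0, 37.0, 35.0, 58.0])  # drifts from the true posture
mirror = np.array([10.5, 19.0, 30.5, 41.0, 49.5])  # tracks it closely
acc_mimic, prec_mimic = label_quality(mimic, truth)
acc_mirror, prec_mirror = label_quality(mirror, truth)
```

Lower values on both measures correspond to the more faithful training dataset, which is the comparison the study draws between mimic and mirror training.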


Subject(s)
Algorithms , Artificial Limbs , Electromyography , Hand , Humans , Electromyography/methods , Biomechanical Phenomena , Male , Female , Adult , Hand/physiology , Reproducibility of Results , Amputees/rehabilitation , Neural Networks, Computer , Prosthesis Design , Movement/physiology , Young Adult , Healthy Volunteers , Nonlinear Dynamics
14.
Conscious Cogn ; 121: 103696, 2024 05.
Article in English | MEDLINE | ID: mdl-38703539

ABSTRACT

A serial reaction time task was used to test whether the representations of a probabilistic second-order sequence structure are (i) stored in an effector-dependent, effector-independent intrinsic or effector-independent visuospatial code and (ii) are inter-manually accessible. Participants were trained either with the dominant or non-dominant hand. Tests were performed with both hands in the practice sequence, a random sequence, and a mirror sequence. Learning did not differ significantly between left and right-hand practice, suggesting symmetric intermanual transfer from the dominant to the non-dominant hand and vice versa. In the posttest, RTs were shorter for the practice sequence than for the random sequence, and longest for the mirror sequence. Participants were unable to freely generate or recognize the practice sequence, indicating implicit knowledge of the probabilistic sequence structure. Because sequence-specific learning did not differ significantly between hands, we conclude that representations of the probabilistic sequence structure are stored in an effector-independent visuospatial code.


Subject(s)
Reaction Time , Space Perception , Transfer, Psychology , Humans , Male , Female , Adult , Reaction Time/physiology , Young Adult , Space Perception/physiology , Transfer, Psychology/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Functional Laterality/physiology , Serial Learning/physiology , Practice, Psychological , Hand/physiology
15.
Behav Brain Res ; 468: 115024, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38705283

ABSTRACT

Motor adaptations are responsible for recalibrating actions and facilitating the achievement of goals in a constantly changing environment. Once consolidated, the decay of motor adaptation is a process affected by available sensory information during deadaptation. However, the cortical response to task error feedback during the deadaptation phase has received little attention. Here, we explored changes in brain cortical responses due to feedback of task-related error during deadaptation. Twelve healthy volunteers were recruited for the study. Right hand movement and EEG were recorded during repetitive trials of a hand reaching movement. A visuomotor rotation of 30° was introduced to induce motor adaptation. Volunteers participated in two experimental sessions organized in baseline, adaptation, and deadaptation blocks. In the deadaptation block, the visuomotor rotation was removed, and visual feedback was only provided in one session. Performance was quantified using angle end-point error, averaged speed, and movement onset time. A non-parametric spatiotemporal cluster-level permutation test was used to analyze the EEG recordings. During deadaptation, participants experienced a greater error reduction when feedback of the cursor was provided. The EEG responses showed larger activity in the left centro-frontal parietal areas during the deadaptation block when participants received feedback, as opposed to when they did not receive feedback. Centrally distributed clusters were found for the adaptation and deadaptation blocks in the absence of visual feedback. The results suggest that visual feedback of the task-related error activates cortical areas related to performance monitoring, depending on the accessible sensory information.
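The cluster-level permutation statistics used here build on a simple core: randomly flip the sign of each participant's condition difference and count how often the shuffled mean is as extreme as the observed one. This sketch shows only that core, with hypothetical amplitude differences and without the spatiotemporal clustering step:

```python
import numpy as np

def permutation_test(diff, n_perm=2000, seed=0):
    """Sign-flip permutation test for a within-subject condition difference."""
    rng = np.random.default_rng(seed)
    observed = diff.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diff.size)
        null[i] = (signs * diff).mean()
    # Two-sided p-value: fraction of permutations at least as extreme.
    return (np.abs(null) >= abs(observed)).mean()

# Hypothetical per-participant EEG amplitude differences
# (feedback minus no-feedback), one value per volunteer.
diff = np.array([0.8, 1.1, 0.5, 0.9, 1.3, 0.4, 0.7, 1.0, 0.6, 1.2, 0.9, 0.8])
p = permutation_test(diff)
```

The full spatiotemporal cluster-level test additionally groups neighboring electrodes and time points into clusters before permuting, which controls for multiple comparisons across the scalp and over time.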


Subject(s)
Adaptation, Physiological , Electroencephalography , Feedback, Sensory , Psychomotor Performance , Humans , Male , Female , Adult , Psychomotor Performance/physiology , Adaptation, Physiological/physiology , Young Adult , Feedback, Sensory/physiology , Cerebral Cortex/physiology , Hand/physiology , Movement/physiology , Motor Activity/physiology
16.
Article in English | MEDLINE | ID: mdl-38771682

ABSTRACT

Gesture recognition has emerged as a significant research domain in computer vision and human-computer interaction. One of the key challenges in gesture recognition is how to select the most useful channels that can effectively represent gesture movements. In this study, we have developed a channel selection algorithm that determines the number and placement of sensors that are critical to gesture classification. To validate this algorithm, we constructed a Force Myography (FMG)-based signal acquisition system. The algorithm considers each sensor as a distinct channel, with the most effective channel combinations and recognition accuracy determined by assessing the correlation between each channel and the target gesture, as well as the redundant correlation between different channels. The database was created by collecting experimental data from 10 healthy individuals who wore 16 sensors to perform 13 unique hand gestures. The results indicate that the average number of channels across the 10 participants was 3, corresponding to a 75% decrease in the initial channel count, with an average recognition accuracy of 94.46%. This outperforms four widely adopted feature selection algorithms, including Relief-F, mRMR, CFS, and ILFS. Moreover, we have established a universal model for the position of gesture measurement points and verified it with an additional five participants, resulting in an average recognition accuracy of 96.3%. This study provides a sound basis for identifying the optimal and minimum number and location of channels on the forearm and designing specialized arm rings with unique shapes.
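The relevance-versus-redundancy trade-off described above is the same idea behind mRMR-style greedy selection; a minimal sketch with synthetic FMG-like data (not the authors' algorithm or dataset):

```python
import numpy as np

def select_channels(X, y, k):
    """Greedy relevance-minus-redundancy selection (mRMR-style): prefer channels
    correlated with the target gesture but not with already-chosen channels."""
    n_ch = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_ch)])
    selected = [int(relevance.argmax())]
    while len(selected) < k:
        best, best_score = -1, -np.inf
        for j in range(n_ch):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

# Hypothetical recording: the target depends on two muscle groups (a, b);
# channels 0 and 1 both read group a (redundant), channel 2 reads group b,
# channel 3 is pure noise.
rng = np.random.default_rng(2)
a, b = rng.standard_normal(300), rng.standard_normal(300)
y = a + b
X = np.column_stack([
    a + 0.1 * rng.standard_normal(300),
    a + 0.1 * rng.standard_normal(300),
    b + 0.1 * rng.standard_normal(300),
    rng.standard_normal(300),
])
chosen = select_channels(X, y, 2)  # one a-channel plus the b-channel
```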


Subject(s)
Algorithms , Gestures , Pattern Recognition, Automated , Humans , Male , Female , Adult , Pattern Recognition, Automated/methods , Young Adult , Myography/methods , Hand/physiology , Healthy Volunteers , Reproducibility of Results
17.
Sci Rep ; 14(1): 8707, 2024 04 15.
Article in English | MEDLINE | ID: mdl-38622201

ABSTRACT

In this study, we explored spatial-temporal dependencies and their impact on the tactile perception of moving objects. Building on previous research linking visual perception and human movement, we examined whether an imputed motion mechanism operates within the tactile modality. We focused on how biological coherence between space and time, characteristic of human movement, influences tactile perception. An experiment was designed wherein participants were stimulated on their right palm with tactile patterns, either ambiguous (incongruent conditions) or non-ambiguous (congruent conditions) relative to a biological motion law (the two-thirds power law), and asked to report the perceived shape and associated confidence. Our findings reveal that introducing ambiguous tactile patterns (1) significantly diminishes tactile discrimination performance, implying that motor features of shape recognition in vision are also observed in the tactile modality, and (2) undermines participants' response confidence, uncovering the degree of accessibility of the information determining the tactile percept's conscious representation. Analysis based on the Hierarchical Drift Diffusion Model unveiled the sensitivity of the evidence accumulation process to the stimulus's informational ambiguity and provided insight into tactile perception as predictive dynamics for reducing uncertainty. These discoveries deepen our understanding of tactile perception mechanisms and underscore the criticality of predictions in sensory information processing.
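The two-thirds power law referenced above links tangential speed to path curvature, v = K·κ^(−1/3) (equivalently, angular speed ∝ curvature^(2/3)), so movement slows where the path curves tightly. A minimal sketch with an illustrative gain K:

```python
def power_law_speed(curvature, K=1.0):
    """Tangential speed under the two-thirds power law: v = K * curvature**(-1/3)."""
    return K * curvature ** (-1.0 / 3.0)

# Speed drops as curvature rises: a tight curve is traversed slowly.
v_gentle = power_law_speed(0.125)  # gentle curve, higher speed
v_tight = power_law_speed(8.0)     # tight curve, lower speed
```

Patterns that violate this speed-curvature coupling are the "ambiguous" (incongruent) stimuli of the experiment.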


Subject(s)
Motion Perception , Touch Perception , Humans , Touch/physiology , Touch Perception/physiology , Visual Perception , Hand/physiology , Movement/physiology , Motion Perception/physiology
18.
J Agromedicine ; 29(3): 415-425, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38595034

ABSTRACT

OBJECTIVES: Continuous exposure to hand-arm vibration combined with poor posture and forceful movements is a known cause of musculoskeletal disorders (MSD). In most related studies, force and vibration levels are experimentally controlled. This study aims to determine how actual hand tractor field operation affects the upper limb of users. It characterizes the upper limb muscle activation applied during actual hand tractor usage and determines the immediate impacts on hand strength and perceived upper limb discomfort after the operation. METHODS: We recruited 15 farm operators with a mean working experience of 20.1 ± 12.2 years. They were asked to operate a hand tractor on paddy fields for at most 8 minutes. Handle vibration was measured using a tri-axial accelerometer. The total unweighted vibration acceleration was computed and used to represent the handle vibration magnitude. Muscle activation was measured using surface electromyography (sEMG). Six sEMG sensors were attached to the dominant and non-dominant sides of the extensor carpi radialis (ECR), biceps, and deltoid. Pre- and post-task hand strength and subjective discomfort ratings were also taken. RESULTS: The total unweighted handle vibration acceleration was 17.45 ± 7.53 m/s2, which exceeds the allowable safe value. The percentage of maximum voluntary contraction (% MVC) of the muscles ranged from 6% to 14%, with the ECR showing significantly higher activation (p < .05) than the biceps and deltoid. The post-task grip strength of the dominant hand was lower than its pre-task value (p < .01), while that of the non-dominant side did not vary significantly. There was a modest trend of higher post-task than pre-task hand discomfort on the non-dominant side (p < .10), although overall, perceived discomfort ranged from none to mild.
CONCLUSION: The study showed that the effects of vibration on humans are evident even at mild muscle exertion, with the exertion predominantly concentrated on the distal arm and clearly affecting grip strength and hand discomfort. Future recommendations can therefore revolve around improving the hand tractor handle grip to enhance comfort and ease of use.
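The total unweighted vibration magnitude from a tri-axial accelerometer is conventionally the root-sum-of-squares of the per-axis RMS accelerations. A minimal sketch under that assumption (the abstract does not give its exact formula, and standard hand-arm assessments additionally apply frequency weighting):

```python
import numpy as np

def total_vibration(samples):
    """Total vibration magnitude from tri-axial accelerometer samples.

    samples: (n_samples, 3) array of x/y/z accelerations in m/s^2.
    Returns the root-sum-of-squares of the per-axis RMS values:
    a_total = sqrt(a_x_rms**2 + a_y_rms**2 + a_z_rms**2).
    """
    rms = np.sqrt(np.mean(np.square(samples), axis=0))
    return float(np.sqrt(np.sum(rms ** 2)))
```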


Subject(s)
Electromyography , Hand Strength , Upper Extremity , Vibration , Humans , Adult , Male , Upper Extremity/physiology , Hand Strength/physiology , Hand/physiology , Farmers , Female , Occupational Exposure/adverse effects , Middle Aged , Muscle, Skeletal/physiology
19.
J Neurosci ; 44(21)2024 May 22.
Article in English | MEDLINE | ID: mdl-38589229

ABSTRACT

Hand movements are associated with modulations of neuronal activity across several interconnected cortical areas, including the primary motor cortex (M1) and the dorsal and ventral premotor cortices (PMd and PMv). Local field potentials (LFPs) provide a link between neuronal discharges and synaptic inputs. Our current understanding of how LFPs vary in M1, PMd, and PMv during contralateral and ipsilateral movements is incomplete. To help reveal unique features in the pattern of modulations, we simultaneously recorded LFPs in these areas in two macaque monkeys performing reach and grasp movements with either the right or left hand. The greatest effector-dependent differences were seen in M1, at low (≤13 Hz) and γ frequencies. In premotor areas, differences related to hand use were only present in low frequencies. PMv exhibited the greatest increase in low frequencies during instruction cues and the smallest effector-dependent modulation during movement execution. In PMd, δ oscillations were greater during contralateral reach and grasp, and β activity increased during contralateral grasp. In contrast, β oscillations decreased in M1 and PMv. These results suggest that while M1 primarily exhibits effector-specific LFP activity, premotor areas compute more effector-independent aspects of the task requirements, particularly during movement preparation for PMv and production for PMd. The generation of precise hand movements likely relies on the combination of complementary information contained in the unique pattern of neural modulations contained in each cortical area. Accordingly, integrating LFPs from premotor areas and M1 could enhance the performance and robustness of brain-machine interfaces.


Subject(s)
Functional Laterality , Hand Strength , Macaca mulatta , Motor Cortex , Psychomotor Performance , Animals , Motor Cortex/physiology , Hand Strength/physiology , Male , Psychomotor Performance/physiology , Functional Laterality/physiology , Movement/physiology , Hand/physiology
20.
Commun Biol ; 7(1): 506, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678058

ABSTRACT

Limb movement direction can be inferred from local field potentials in motor cortex during movement execution. Yet, it remains unclear to what extent intended hand movements can be predicted from brain activity recorded during movement planning. Here, we set out to probe the directional tuning of oscillatory features during motor planning and execution, using a machine learning framework on multi-site local field potentials (LFPs) in humans. We recorded intracranial EEG data from implanted epilepsy patients as they performed a four-direction delayed center-out motor task. Fronto-parietal LFP low-frequency power predicted hand-movement direction during planning, while execution was largely mediated by higher-frequency power and low-frequency phase in motor areas. By contrast, phase-amplitude coupling showed uniform modulations across directions. Finally, multivariate classification led to an increase in overall decoding accuracy (>80%). The novel insights revealed here extend our understanding of the role of neural oscillations in encoding motor plans.
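Decoding movement direction from multi-site LFP features, as described above, amounts to multivariate classification over band-power (and phase) feature vectors. A minimal nearest-centroid sketch on synthetic feature vectors; this is a stand-in for the study's unspecified machine learning framework, and real pipelines typically use regularized linear models:

```python
import numpy as np

class CentroidDecoder:
    """Nearest-centroid decoder over per-trial feature vectors
    (e.g. band power at each recording site)."""

    def fit(self, X, y):
        # One centroid per movement direction.
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        # Assign each trial to the direction with the nearest centroid.
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2
        )
        return self.classes_[np.argmin(d, axis=1)]
```

Combining planning-period low-frequency power with execution-period high-frequency power simply means concatenating both feature sets into each row of `X` before fitting.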


Subject(s)
Motor Cortex , Movement , Humans , Movement/physiology , Male , Adult , Motor Cortex/physiology , Female , Electroencephalography , Brain/physiology , Young Adult , Machine Learning , Electrocorticography , Epilepsy/physiopathology , Hand/physiology , Brain Mapping/methods