Results 1 - 20 of 95
1.
J Imaging Inform Med ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020154

ABSTRACT

This paper presents an innovative automatic fusion imaging system that combines 3D CT/MR images with real-time ultrasound acquisition. The system eliminates the need for external physical markers and complex training, making image fusion feasible for physicians with different experience levels. The integrated system involves a portable 3D camera for patient-specific surface acquisition, an electromagnetic tracking system, and ultrasound (US) components. The fusion algorithm comprises two main parts: skin segmentation and rigid co-registration, both integrated into the US machine. The co-registration aligns the surface extracted from CT/MR images with the 3D surface acquired by the camera, facilitating rapid and effective fusion. Experimental tests in different settings validate the system's accuracy, computational efficiency, noise robustness, and operator independence.
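
The abstract names the rigid co-registration of a CT/MR-derived skin surface to the camera-acquired surface but gives no implementation. A minimal sketch of that step using Open3D's point-to-point ICP; the file names, voxel size, and 10 mm correspondence threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np
import open3d as o3d

def coregister_surfaces(ct_surface_ply: str, camera_surface_ply: str, voxel: float = 3.0):
    """Rigidly align a CT/MR-derived skin surface to a 3D-camera surface (mm units assumed)."""
    source = o3d.io.read_point_cloud(ct_surface_ply)      # skin surface segmented from CT/MR
    target = o3d.io.read_point_cloud(camera_surface_ply)  # surface captured by the 3D camera
    source = source.voxel_down_sample(voxel)
    target = target.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=10.0,  # assumed tolerance for point matching, in mm
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 rigid transform mapping CT/MR space into camera space
```

In practice a coarse initial alignment (e.g., from landmarks or a global registration) would replace the identity `init` before ICP refinement.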

2.
Front Neurol ; 15: 1354092, 2024.
Article in English | MEDLINE | ID: mdl-39055321

ABSTRACT

Introduction: Alzheimer's disease and related disorders (ADRD) progressively impair cognitive function, prompting the need for early detection to mitigate its impact. Mild Cognitive Impairment (MCI) may signal an early cognitive decline due to ADRD. Thus, developing an accessible, non-invasive method for detecting MCI is vital for initiating early interventions to prevent severe cognitive deterioration. Methods: This study explores the utility of analyzing gait patterns, a fundamental aspect of human motor behavior, on straight and oval paths for diagnosing MCI. Using a Kinect v.2 camera, we recorded the movements of 25 body joints from 25 individuals with MCI and 30 healthy older adults (HC). Signal processing, descriptive statistical analysis, and machine learning techniques were employed to analyze the skeletal gait data in both walking conditions. Results and discussion: The study demonstrated that both straight and oval walking patterns provide valuable insights for MCI detection, with a notable increase in identifiable gait features in the more complex oval walking test. The Random Forest model excelled among various algorithms, achieving an 85.50% accuracy and an 83.9% F-score in detecting MCI during oval walking tests. This research introduces a cost-effective, Kinect-based method that integrates gait analysis-a key behavioral pattern-with machine learning, offering a practical tool for MCI screening in both clinical and home environments.
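
The abstract reports a Random Forest reaching 85.50% accuracy and an 83.9% F-score but does not show the evaluation setup. A hedged scikit-learn sketch of such a pipeline; the feature files and the 5-fold cross-validation scheme are assumptions, not details from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import cross_val_predict

# X: one row per participant, columns = gait features extracted from the 25 Kinect
# joint trajectories (e.g., stride time, step length, joint-angle statistics).
# y: 1 = MCI, 0 = healthy control. File names are hypothetical.
X, y = np.load("gait_features.npy"), np.load("labels.npy")

clf = RandomForestClassifier(n_estimators=200, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)  # out-of-fold predictions for every subject
print(f"accuracy={accuracy_score(y, y_pred):.3f}  F1={f1_score(y, y_pred):.3f}")
```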

3.
Int J Comput Assist Radiol Surg ; 19(7): 1349-1357, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38748053

ABSTRACT

PURPOSE: In this paper, we present a novel approach to the automatic evaluation of open surgery skills using depth cameras. This work is intended to show that depth cameras achieve results similar to those of RGB cameras, the common modality in the automatic evaluation of open surgery skills. Moreover, depth cameras offer advantages such as robustness to lighting variations and camera positioning, simplified data compression, and enhanced privacy, making them a promising alternative to RGB cameras. METHODS: Experts and novice surgeons completed two simulators of open suturing. We focused on hand and tool detection and action segmentation in suturing procedures. YOLOv8 was used for tool detection in RGB and depth videos. Furthermore, UVAST and MSTCN++ were used for action segmentation. Our study includes the collection and annotation of a dataset recorded with Azure Kinect. RESULTS: We demonstrated that using depth cameras in object detection and action segmentation achieves comparable results to RGB cameras. Furthermore, we analyzed 3D hand path length, revealing significant differences between experts and novice surgeons and emphasizing the potential of depth cameras in capturing surgical skills. We also investigated the influence of camera angles on measurement accuracy, highlighting the advantages of 3D cameras in providing a more accurate representation of hand movements. CONCLUSION: Our research contributes to advancing the field of surgical skill assessment by leveraging depth cameras for more reliable and privacy-preserving evaluations. The findings suggest that depth cameras can be valuable in assessing surgical skills and provide a foundation for future research in this area.
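
One of the reported metrics, 3D hand path length, is straightforward to compute once per-frame hand positions exist; a small sketch, assuming the back-projection of detections to 3D coordinates happens upstream:

```python
import numpy as np

def path_length_3d(positions: np.ndarray) -> float:
    """Total 3D distance travelled by a tracked hand.

    positions: (T, 3) array of per-frame hand coordinates in metres, e.g. a
    hand keypoint back-projected through the depth camera intrinsics.
    """
    steps = np.diff(positions, axis=0)              # per-frame displacement vectors
    return float(np.linalg.norm(steps, axis=1).sum())
```

Shorter, smoother paths for experts versus novices would then show up directly in this scalar.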


Subject(s)
Clinical Competence , Video Recording , Humans , Suture Techniques/education , Suture Techniques/instrumentation , Imaging, Three-Dimensional/methods
4.
Front Pediatr ; 12: 1256445, 2024.
Article in English | MEDLINE | ID: mdl-38374878

ABSTRACT

Background: Spinal Muscular Atrophy (SMA) is manifested by deformation of the chest wall, including a bell-shaped chest. We determined the ability of a novel non-ionizing, non-volitional method to measure and quantify bell-shaped chests in SMA. Methods: A 3D depth camera and a chest x-ray (CXR) were used to capture chest images in 14 SMA patients and 28 controls. Both methods measure the distance between two points, but measurements performed by 3D analysis allow for the consideration of the curve of a surface (geodesic measurements), whereas the CXR allows solely for the determination of the shortest path between two points, with no regard for the surface (Euclidean measurements). The ratio of the upper to lower chest distances was quantified to distinguish chest shape in imaging by both the 3D depth camera and the CXR, and the ratios were compared between healthy and SMA patients. Results: The mean 3D Euclidean ratio of distances measured by 3D imaging was 1.00 in the control group and 0.92 in the SMA group (p = 0.01), the latter indicative of a bell-shaped chest. This result repeated itself in the ratio of geodesic measurements (0.99 vs. 0.89, respectively, p = 0.03). Conclusion: The herein-described novel, noninvasive 3D method for measuring the upper and lower chest distances was shown to distinguish the bell-shaped chest configuration in patients with SMA from the chests of controls. This method bears several advantages over CXR and may be readily applicable in clinical settings that manage children with SMA.
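
The geodesic-versus-Euclidean distinction drives the whole measurement. A sketch of both distances and the upper-to-lower chest ratio, assuming the chest surface has been sampled as 3D polylines between landmarks (the sampling procedure itself is not described in the abstract):

```python
import numpy as np

def surface_distances(profile: np.ndarray):
    """profile: (N, 3) points sampled along the skin between two chest landmarks."""
    euclidean = float(np.linalg.norm(profile[-1] - profile[0]))                # CXR-style straight line
    geodesic = float(np.linalg.norm(np.diff(profile, axis=0), axis=1).sum())   # along the curved surface
    return euclidean, geodesic

def chest_ratio(upper: np.ndarray, lower: np.ndarray, geodesic: bool = True) -> float:
    """Upper-to-lower chest distance ratio: ~1.0 in controls, < 1.0 (narrower
    upper chest, i.e., bell shape) in the SMA group per the abstract."""
    return surface_distances(upper)[geodesic] / surface_distances(lower)[geodesic]
```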

5.
Transl Anim Sci ; 8: txae018, 2024.
Article in English | MEDLINE | ID: mdl-38410179

ABSTRACT

In numerous systems of animal production, there is increasing interest in the use of three-dimensional (3D)-imaging technology on farms for its ability to easily and safely measure traits of interest in living animals. With this information, it is possible to evaluate multiple morphological indicators of interest, either directly or indirectly, and follow them through time. Several tools for this purpose have been developed, but one of their main weaknesses was their sensitivity to light and animal movement, which limited their potential for large-scale application on farms. To address this, a new device, called Deffilait3D and based on depth camera technology, was developed. In tests on 31 Holstein dairy cows and 13 Holstein heifers, the values generated for most measured indicators were highly repeatable and reproducible, with coefficients of variation lower than 4%. A comparison of measurements obtained from both Deffilait3D and the previously validated system, called Morpho3D, revealed a high degree of similarity for most selected traits, e.g., less than 0.2% variation for animal volume and 1.2% for chest depth, with the highest degree of difference (8%) noted for animal surface area. Previously published equations used to estimate body weight with the Morpho3D device were equally valid with Deffilait3D. The new device was able to record 3D images regardless of animal movement and is affected only by direct daylight. The next step is to develop methods for automated analysis and feature extraction from the images, which should enable the rapid development of new tools and potentially lead to the large-scale adoption of this type of device on commercial farms.

6.
Biomed Eng Online ; 23(1): 19, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38347584

ABSTRACT

Individuals with incomplete spinal-cord injury/disease are at an increased risk of falling due to their impaired ability to maintain balance. Our research group has developed a closed-loop visual-feedback balance training (VFBT) system coupled with functional electrical stimulation (FES) for rehabilitation of standing balance (FES + VFBT system); however, clinical usage of this system is limited by the use of force plates, which are expensive and not easily accessible. This study aimed to investigate the feasibility of a more affordable and accessible sensor, such as a depth camera or pressure mat, in place of the force plate. Ten able-bodied participants (7 males, 3 females) performed three sets of four different standing balance exercises using the FES + VFBT system with the force plate. A depth camera and a pressure mat passively collected centre-of-mass and centre-of-pressure data, respectively. The depth camera showed a higher Pearson's correlation (r > 0.98) and lower root mean squared error (RMSE < 10 mm) than the pressure mat (r > 0.82; RMSE < 4.5 mm) when compared with the force plate overall. Stimulation based on the depth camera showed lower RMSE than that based on the pressure mat relative to the FES + VFBT system. The depth camera shows potential as a replacement sensor for the force plate in providing feedback to the FES + VFBT system.
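
The comparison against the force plate rests on two standard statistics; a minimal sketch, assuming the two signals have already been resampled to a common rate and aligned in time:

```python
import numpy as np
from scipy.stats import pearsonr

def agreement(reference: np.ndarray, candidate: np.ndarray):
    """Pearson r and RMSE of a candidate COM/COP signal vs. the force-plate reference."""
    r, _ = pearsonr(reference, candidate)
    rmse = float(np.sqrt(np.mean((reference - candidate) ** 2)))
    return r, rmse

# e.g. agreement(force_plate_cop_x, depth_camera_com_x)  # variable names hypothetical
```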


Subject(s)
Electric Stimulation Therapy , Spinal Cord Injuries , Male , Female , Humans , Feasibility Studies , Feedback, Sensory , Postural Balance/physiology , Electric Stimulation
7.
BMC Geriatr ; 24(1): 125, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38302872

ABSTRACT

BACKGROUND: Falls pose a severe threat to the health of older adults worldwide. Determining gait and kinematic parameters that are related to an increased risk of falls is essential for developing effective intervention and fall prevention strategies. This study aimed to investigate discriminatory parameters, laying an important basis for developing effective clinical screening tools for identifying high-fall-risk older adults. METHODS: Forty-one individuals aged 65 years and above living in the community participated in this study. The older adults were classified as high-fall-risk and low-fall-risk individuals based on their BBS scores. The participants wore an inertial measurement unit (IMU) while conducting the Timed Up and Go (TUG) test. Simultaneously, a depth camera acquired images of the participants' movements during the experiment. After segmenting the data according to subtasks, 142 parameters were extracted from the sensor-based data. A t-test or Mann-Whitney U test was performed on the parameters for distinguishing older adults at high risk of falling. Logistic regression was used to further quantify the role of different parameters in identifying high-fall-risk individuals. Furthermore, we conducted an ablation experiment to explore the complementary information offered by the two sensors. RESULTS: Fifteen participants were defined as high-fall-risk individuals, while twenty-six were defined as low-fall-risk individuals. Seventeen parameters reached significance with p-values less than 0.05. Some of these parameters, such as the usage of walking assistance, maximum angular velocity around the yaw axis during turn-to-sit, and step length, exhibited the greatest discriminatory abilities in identifying high-fall-risk individuals. Additionally, combining features from both devices for fall risk assessment resulted in a higher AUC of 0.882 compared to using each device separately. CONCLUSIONS: Utilizing different types of sensors can offer more comprehensive information. Relating parameters to physiology provides deeper insights into the identification of high-fall-risk individuals. High-fall-risk individuals typically exhibited a cautious gait, such as larger step width and shorter step length during walking. Besides, we identified some abnormal gait patterns of high-fall-risk individuals compared to low-fall-risk individuals, such as less knee flexion and a tendency to tilt the pelvis forward during turning.
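
The statistical screening plus logistic-regression fusion described above is easy to sketch; a hedged outline where the Mann-Whitney-only screening and 5-fold cross-validation are simplifying assumptions (the paper also uses t-tests where appropriate):

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def screen_parameters(X: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    """Indices of TUG parameters that separate high- and low-fall-risk groups.

    X: (41, 142) parameters from IMU + depth camera; y: 1 = high fall risk (BBS-based).
    """
    return [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < alpha]

def fused_auc(X: np.ndarray, y: np.ndarray, cols) -> float:
    """Cross-validated AUC of a logistic model on the selected (possibly fused) features."""
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X[:, cols], y,
                              cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)
```

Running `fused_auc` on IMU-only, camera-only, and combined columns reproduces the shape of the ablation experiment.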


Subject(s)
Independent Living , Postural Balance , Humans , Aged , Postural Balance/physiology , Gait/physiology , Walking , Risk Assessment/methods , Accidental Falls/prevention & control
8.
Heliyon ; 10(1): e23704, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38261861

ABSTRACT

Background: Following surgery, perioperative pulmonary rehabilitation (PR) is important for patients with early-stage lung cancer. However, current inpatient programs are often limited in time and space, and outpatient settings have access barriers. Therefore, we aimed to develop a background-free, zero-contact thoracoabdominal movement-tracking model that is easily set up and incorporated into a pre-existing PR program or extended to home-based rehabilitation and remote monitoring. We validated its effectiveness in providing preclinical real-time RGB-D (colour-depth camera) visual feedback. Methods: Twelve healthy volunteers performed deep breathing exercises following audio instruction for three cycles, followed by audio instruction and real-time visual feedback for another three cycles. In the visual feedback system, we used a RealSense™ D415 camera to capture RGB and depth images for human pose-estimation with Google MediaPipe. Target-tracking regions were defined based on the relative position of detected joints. The processed depth information of the tracking regions was visualised on a screen as a motion bar to provide real-time visual feedback of breathing intensity. Pulmonary function was simultaneously recorded using spirometric measurements, and changes in pulmonary volume were derived from respiratory airflow signals. Results: Our movement-tracking model showed a very strong correlation (r = 0.90 ± 0.05) between thoracic motion signals and spirometric volume, and a strong correlation (r = 0.73 ± 0.22) between abdominal signals and spirometric volume. Displacement of the chest wall was enhanced by RGB-D visual feedback (23 vs 20 mm, P = 0.034), and accompanied by an increased lung volume (2.58 vs 2.30 L, P = 0.003). Conclusion: We developed an easily implemented thoracoabdominal movement-tracking model and reported the positive impact of real-time RGB-D visual feedback on self-promoted external chest wall expansion, accompanied by increased internal lung volumes. This system can be extended to home-based PR.
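
The abstract describes the pipeline: MediaPipe pose landmarks define a tracking region whose processed depth becomes the feedback signal. A hedged sketch of one frame of that loop; the exact ROI geometry below the shoulder line is my assumption, not the authors' published definition:

```python
import numpy as np
import mediapipe as mp

pose = mp.solutions.pose.Pose()  # Google MediaPipe pose estimator

def breathing_signal(rgb: np.ndarray, depth: np.ndarray) -> float:
    """Mean depth (mm) of a thoracic region located from MediaPipe shoulder landmarks.

    rgb: HxWx3 uint8 image in RGB order; depth: HxW depth map aligned to rgb.
    """
    h, w = depth.shape
    res = pose.process(rgb)
    if res.pose_landmarks is None:
        return float("nan")
    lm = res.pose_landmarks.landmark
    ls, rs_ = lm[11], lm[12]                       # MediaPipe left/right shoulder indices
    x0, x1 = sorted((int(rs_.x * w), int(ls.x * w)))
    y0 = int(min(ls.y, rs_.y) * h)
    roi = depth[y0:y0 + max(1, (x1 - x0) // 2), x0:x1]  # assumed box below the shoulder line
    valid = roi[roi > 0]                           # drop invalid (zero) depth pixels
    return float(valid.mean()) if valid.size else float("nan")
```

Plotting this value per frame as a bar height gives the kind of real-time breathing-intensity feedback the study evaluates.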

9.
Neurol Sci ; 45(6): 2661-2670, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38183553

ABSTRACT

INTRODUCTION: The acute levodopa challenge test (ALCT) is an important and valuable examination but there are still some shortcomings with it. We aimed to objectively assess ALCT based on a depth camera and filter out the best indicators. METHODS: Fifty-nine individuals with parkinsonism completed ALCT and the improvement rate (IR, which indicates the change in value before and after levodopa administration) of the Movement Disorder Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale part III (MDS-UPDRS III) was calculated. The kinematic features of the patients' movements in both the OFF and ON states were collected with an Azure Kinect depth camera. RESULTS: The IR of MDS-UPDRS III was significantly correlated with the IRs of many kinematic features for arising from a chair, pronation-supination movements of the hand, finger tapping, toe tapping, leg agility, and gait (rs = -0.277 to -0.672, P < 0.05). Moderate to high discriminative values were found in the selected features in identifying a clinically significant response to levodopa with sensitivity, specificity, and area under the curve (AUC) in the range of 50-100%, 47.22%-97.22%, and 0.673-0.915, respectively. The resulting classifier combining kinematic features of toe tapping showed an excellent performance with an AUC of 0.966 (95% CI = 0.922-1.000, P < 0.001). The optimal cut-off value was 21.24% with sensitivity and specificity of 94.44% and 87.18%, respectively. CONCLUSION: This study demonstrated the feasibility of measuring the effect of levodopa and objectively assessing ALCT based on kinematic data derived from an Azure Kinect-based system.
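
Two quantities anchor this abstract: the improvement rate and the optimal cut-off. A sketch of both, assuming the conventional (OFF − ON)/OFF definition of IR and a Youden-index cut-off (the abstract does not state how the 21.24% value was chosen):

```python
import numpy as np
from sklearn.metrics import roc_curve

def improvement_rate(off_score: float, on_score: float) -> float:
    """IR for MDS-UPDRS III, assuming the usual (OFF - ON)/OFF * 100 definition."""
    return (off_score - on_score) / off_score * 100.0

def optimal_cutoff(y_true: np.ndarray, scores: np.ndarray) -> float:
    """Youden-index cut-off on the ROC curve: maximises sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return float(thresholds[np.argmax(tpr - fpr)])
```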


Subject(s)
Antiparkinson Agents , Feasibility Studies , Levodopa , Parkinsonian Disorders , Humans , Levodopa/administration & dosage , Levodopa/therapeutic use , Levodopa/pharmacology , Male , Female , Aged , Middle Aged , Antiparkinson Agents/therapeutic use , Antiparkinson Agents/administration & dosage , Biomechanical Phenomena/physiology , Parkinsonian Disorders/drug therapy , Parkinsonian Disorders/physiopathology , Parkinsonian Disorders/diagnosis , Severity of Illness Index
10.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-1027470

ABSTRACT

Objective: To evaluate the feasibility of 3D reconstruction techniques based on multiple depth cameras for daily patient positioning in radiotherapy. Methods: Through region-of-interest (ROI) extraction, filtering, registration, splicing, and other processing steps, multiple depth cameras (Intel RealSense D435i) were used to fuse point clouds in real time and obtain an optical 3D surface of the patient. The reconstructed surface was matched with the external contour of the localization CT to complete the positioning. The feasibility of the system was validated using multiple models, and clinical feasibility was further validated in 5 patients receiving head-and-neck radiotherapy, 10 receiving chest radiotherapy, and 5 receiving pelvic radiotherapy. The data of each group were analyzed by paired t-test. Results: The system running time was 0.475 s, which met the requirement of real-time monitoring. The six-dimensional registration errors in the model experiment were (1.00±0.74) mm, (1.69±0.69) mm, (1.36±0.87) mm, 0.15°±0.14°, 0.25°±0.20°, and 0.13°±0.13° in the x, y, z, rotation, pitch, and roll directions, respectively. In actual patient positioning, the mean positioning errors were (0.77±0.51) mm, (1.24±0.67) mm, (0.94±0.76) mm, 0.61°±0.41°, 0.69°±0.55°, and 0.52°±0.35° in the same six directions. The translational error was less than 2.8 mm, and the positioning error was largest in the pelvic region. Conclusions: Real-time 3D reconstruction based on multiple depth cameras is applicable to patient positioning during radiotherapy. The method is accurate and can detect small movements of the patient's position, meeting the requirements of radiotherapy.
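
The abstract outlines the fusion pipeline (ROI extraction, filtering, registration, splicing) without implementation detail. A minimal Open3D sketch of merging calibrated multi-camera clouds; the availability of extrinsic calibration and all parameter values are assumptions:

```python
import open3d as o3d

def fuse_views(clouds, extrinsics, voxel: float = 2.0):
    """Merge point clouds from several calibrated depth cameras into one surface.

    clouds:     list of o3d.geometry.PointCloud, one per camera
    extrinsics: list of 4x4 camera-to-room transforms from extrinsic calibration
    """
    merged = o3d.geometry.PointCloud()
    for pc, T in zip(clouds, extrinsics):
        merged += pc.transform(T)                 # bring every view into a common frame
    merged = merged.voxel_down_sample(voxel)      # splice overlapping regions / decimate
    merged, _ = merged.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # filter
    return merged
```

The fused cloud would then be registered to the localization-CT external contour (e.g., via ICP, as in the registration sketch under entry 1) to obtain the six-dimensional setup correction.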

11.
Front Plant Sci ; 14: 1268015, 2023.
Article in English | MEDLINE | ID: mdl-37822341

ABSTRACT

Maize (Zea mays L.) is one of the most important crops, influencing food production and even the whole industry. In recent years, global crop production has been facing great challenges from diseases. However, most of the traditional methods make it difficult to efficiently identify disease-related phenotypes in germplasm resources, especially in actual field environments. To overcome this limitation, our study aims to evaluate the potential of the multi-sensor synchronized RGB-D camera with depth information for maize leaf disease classification. We distinguished maize leaves from the background based on the RGB-D depth information to eliminate interference from complex field environments. Four deep learning models (i.e., Resnet50, MobilenetV2, Vgg16, and Efficientnet-B3) were used to classify three main types of maize diseases, i.e., the curvularia leaf spot [Curvularia lunata (Wakker) Boedijn], the small spot [Bipolaris maydis (Nishik.) Shoemaker], and the mixed spot diseases. We finally compared the pre-segmentation and post-segmentation results to test the robustness of the above models. Our main findings are: 1) The maize disease classification models based on the pre-segmentation image data performed slightly better than the ones based on the post-segmentation image data. 2) The pre-segmentation models overestimated the accuracy of disease classification due to the complexity of the background, but post-segmentation models focusing on leaf disease features provided more practical results with shorter prediction times. 3) Among the post-segmentation models, the Resnet50 and MobilenetV2 models showed similar accuracy and were better than the Vgg16 and Efficientnet-B3 models, and the MobilenetV2 model performed better than the other three models in terms of the size and the single image prediction time. Overall, this study provides a novel method for maize leaf disease classification using the post-segmentation image data from a multi-sensor synchronized RGB-D camera and offers the possibility of developing relevant portable devices.
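
The depth-based foreground segmentation the study evaluates can be sketched in a few lines with OpenCV; the depth thresholds here are illustrative, not values from the paper, and would be tuned per scene in the field:

```python
import cv2
import numpy as np

def remove_background(rgb: np.ndarray, depth: np.ndarray, near: int = 300, far: int = 900):
    """Keep only pixels whose depth (mm) falls in the leaf's expected range.

    rgb: HxWx3 image; depth: HxW depth map aligned to rgb; near/far: assumed limits.
    """
    mask = ((depth > near) & (depth < far)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # denoise mask
    return cv2.bitwise_and(rgb, rgb, mask=mask)   # background zeroed out for the classifier
```

The segmented output feeds the CNN classifiers (Resnet50, MobilenetV2, etc.) in the post-segmentation condition.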

12.
IEEE J Transl Eng Health Med ; 11: 479-486, 2023.
Article in English | MEDLINE | ID: mdl-37817821

ABSTRACT

BACKGROUND: Accidental falls are a major health issue in older people. One significant and potentially modifiable risk factor is reduced gait stability. Clinicians do not have sophisticated kinematic options to measure this risk factor with simple and affordable systems. Depth imaging with AI pose estimation can be used for gait analysis in young healthy adults. However, is it applicable for measuring gait in older adults at risk of falling? METHODS: In this methodological comparison, 59 older adults with and without a history of falls walked on a treadmill while their gait pattern was recorded with multiple inertial measurement units and with an Azure Kinect depth camera. Spatiotemporal gait parameters of both systems were compared for convergent validity and with a Bland-Altman plot. RESULTS: Correlation between systems for stride length (r=.992, [Formula: see text]) and stride time (r=0.914, [Formula: see text]) was high. Bland-Altman plots revealed a moderate agreement in stride length (-0.74 ± 3.68 cm; [-7.96 cm to 6.47 cm]) and stride time (-3.7 ± 54 ms; [-109 ms to 102 ms]). CONCLUSION: Gait parameters in older adults with and without a history of falls can be measured with inertial measurement units and Azure Kinect cameras. Affordable and small depth cameras agree with IMUs for gait analysis in older adults with and without an increased risk of falling. However, tolerable accuracy is limited to the average over multiple steps of spatiotemporal parameters derived from the initial foot contact. Clinical Translation Statement: Gait parameters in older adults with and without a history of falls can be measured with inertial measurement units and Azure Kinect. Affordable and small depth cameras, developed for various purposes in research and industry, agree with IMUs in clinical gait analysis in older adults with and without an increased risk of falling. However, tolerable accuracy to assess function or monitor changes in gait is limited to the average over multiple steps of spatiotemporal parameters derived from the initial foot contact.
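
Since Bland-Altman agreement is the methodological core here, a short sketch of the computation behind figures like "-0.74 ± 3.68 cm [-7.96 to 6.47 cm]" (variable names hypothetical):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias and 95% limits of agreement between two measurement systems."""
    diff = a - b                          # paired differences, e.g. Kinect minus IMU
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, (bias - half_width, bias + half_width)

# e.g. bland_altman(kinect_stride_len_cm, imu_stride_len_cm)
```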


Subject(s)
Accidental Falls , Gait Analysis , Humans , Aged , Accidental Falls/prevention & control , Gait , Walking , Exercise Test/methods
13.
Sensors (Basel) ; 23(18)2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37765865

ABSTRACT

Adolescent idiopathic scoliosis (AIS) is a prevalent musculoskeletal disorder that causes abnormal spinal deformities. The early screening of children and adolescents is crucial to identify and prevent the further progression of AIS. In clinical examinations, scoliometers are often used to noninvasively estimate the primary Cobb angle, and optical 3D scanning systems have also emerged as alternative noninvasive approaches for this purpose. The recent advances in low-cost 3D scanners have led to their use in several studies to estimate the primary Cobb angle or even internal spinal alignment. However, none of these studies demonstrate whether such a low-cost scanner satisfies the minimal requirements for capturing the relevant deformities of the human back. To practically quantify the minimal required spatial resolution and camera resolution to capture the geometry and shape of the deformities of the human back, we used multiple 3D scanning methodologies and systems. The results from an evaluation of 30 captures of AIS patients and 76 captures of healthy subjects showed that the minimal required spatial resolution is between 2 mm and 5 mm, depending on the chosen error tolerance. Therefore, a minimal camera resolution of 640 × 480 pixels is recommended for use in future studies.


Subject(s)
Musculoskeletal Diseases , Optical Devices , Adolescent , Child , Humans , Healthy Volunteers
14.
Comput Biol Med ; 164: 107292, 2023 09.
Article in English | MEDLINE | ID: mdl-37544250

ABSTRACT

BACKGROUND: Distal radius fractures (DRFs) treated with volar locking plates (VLPs) allow early rehabilitation exercises favourable to fracture recovery. However, the role of muscle forces induced by rehabilitation exercises in the biomechanical microenvironment at the fracture site remains to be fully explored. The purpose of this study is to investigate the effects of muscle forces on DRF healing by developing a depth camera-based fracture healing model. METHOD: First, rehabilitation-related hand motions were captured by a depth camera system. A macro-musculoskeletal model was then developed to analyse the captured data and estimate hand muscle and joint reaction forces, which were used as inputs for our previously developed DRF model to predict tissue differentiation patterns at the fracture site. Finally, the effect of different wrist motions (e.g., from 60° of extension to 60° of flexion) on DRF healing outcomes was studied. RESULTS: Muscle and joint reaction forces in the hand, which are highly dependent on hand motion, could significantly affect DRF healing through the compressive and bending forces they impose at the fracture site. There is an optimal range of wrist motion (i.e., between 40° of extension and 40° of flexion) that could promote mechanically stimulated healing while mitigating the risk of bony non-union due to excessive movement at the fracture site. CONCLUSION: The developed depth camera-based fracture healing model can accurately predict the influence of muscle loading induced by rehabilitation exercises on distal radius fracture healing outcomes. The outcomes from this study could potentially assist osteopathic surgeons in designing effective post-operative rehabilitation strategies for DRF patients.


Subject(s)
Radius Fractures , Wrist Fractures , Humans , Radius Fractures/surgery , Fracture Fixation, Internal , Wrist Joint , Muscle, Skeletal , Bone Plates , Range of Motion, Articular , Treatment Outcome
15.
Sensors (Basel) ; 23(14)2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37514799

ABSTRACT

Improving soybean (Glycine max L. (Merr.)) yield is crucial for strengthening national food security. Predicting soybean yield is essential to maximize the potential of crop varieties. Non-destructive methods are needed to estimate yield before crop maturity. Various approaches, including the pod-count method, have been used to predict soybean yield, but they often face issues with the crop background color. To address this challenge, we explored the application of a depth camera to real-time filtering of RGB images, aiming to enhance the performance of the pod-counting classification model. Additionally, this study aimed to compare object detection models (YOLOv7 and YOLOv7-E6E) and select the most suitable deep learning (DL) model for counting soybean pods. After identifying the best architecture, we conducted a comparative analysis of the model's performance by training the DL model with and without background removal from images. Results demonstrated that removing the background using a depth camera improved YOLOv7's pod detection performance by 10.2% precision, 16.4% recall, 13.8% mAP@0.5, and 17.7% mAP@0.5:0.95 compared to when the background was present. Using a depth camera and the YOLOv7 algorithm for pod detection and counting yielded a mAP@0.5 of 93.4% and mAP@0.5:0.95 of 83.9%. These results indicated a significant improvement in the DL model's performance when the background was segmented, and a reasonably larger dataset was used to train YOLOv7.


Subject(s)
Glycine max , Plant Breeding
16.
Nihon Hoshasen Gijutsu Gakkai Zasshi ; 79(5): 431-439, 2023 May 20.
Article in Japanese | MEDLINE | ID: mdl-36948627

ABSTRACT

PURPOSE: In this study, we propose a system that combines a depth camera with a deep-learning human-skeleton estimation model to identify the body part to be radiographed and to acquire the thickness of the subject, thereby providing optimized X-ray imaging conditions. METHODS: The system estimates the target body part and measures subject thickness using an RGB camera and a depth camera, and uses OpenPose, a posture-estimation library, for body-part estimation. RESULTS: The recognition rate of the target body part was 15.38% for the depth camera and 84.62% for the RGB camera at a distance of 100 cm, and 42.31% for the depth camera and 100% for the RGB camera at a distance of 120 cm. The measurement accuracy of subject thickness was within ±10 mm except for a few cases, indicating that the X-ray imaging conditions were optimized for subject thickness. CONCLUSION: Implementing this system in an X-ray unit is expected to enable automatic setting of X-ray imaging conditions. The system is also useful for preventing increased exposure dose due to excessive dose, or decreased image quality due to insufficient dose, caused by incorrectly set X-ray imaging conditions.


Subject(s)
Posture , Humans , X-Rays , Radiography
17.
Bioengineering (Basel) ; 10(2)2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36829620

ABSTRACT

Hand pose estimation (HPE) plays an important role during the functional assessment of the hand and in potential rehabilitation. It is a challenge to predict the pose of the hand conveniently and accurately during functional tasks, and this limits the application of HPE. In this paper, we propose a novel architecture of a shifted attention regression network (SARN) to perform HPE. Given a depth image, SARN first predicts the spatial relationships between points in the depth image and a group of hand keypoints that determine the pose of the hand. Then, SARN uses these spatial relationships to infer the 3D position of each hand keypoint. To verify the effectiveness of the proposed method, we conducted experiments on three open-source datasets of 3D hand poses: NYU, ICVL, and MSRA. The proposed method achieved state-of-the-art performance with 7.32 mm, 5.91 mm, and 7.17 mm of mean error at the hand keypoints, i.e., mean Euclidean distance between the predicted and ground-truth hand keypoint positions. Additionally, to test the feasibility of SARN in hand movement recognition, a hand movement dataset of 26K depth images from 17 healthy subjects was constructed based on the finger tapping test, an important component of neurological exams administered to Parkinson's patients. Each image was annotated with the tips of the index finger and the thumb. For this dataset, the proposed method achieved a mean error of 2.99 mm at the hand keypoints and comparable performance on three task-specific metrics: the distance, velocity, and acceleration of the relative movement of the two fingertips. Results on the open-source datasets demonstrated the effectiveness of the proposed method, and results on our finger tapping dataset validated its potential for applications in functional task characterization.
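
The headline numbers (7.32, 5.91, and 7.17 mm) are mean per-keypoint Euclidean errors; the metric itself is a one-liner, shown here for clarity:

```python
import numpy as np

def mean_keypoint_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between predicted and ground-truth keypoints.

    pred, gt: (N, K, 3) arrays of N frames x K hand keypoints in millimetres.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

The same per-frame fingertip positions also yield the task-specific distance, velocity, and acceleration metrics by differencing across frames.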

18.
Sensors (Basel) ; 23(3)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36772636

ABSTRACT

Face masks can effectively prevent the spread of viruses. It is necessary to determine the wearing condition of masks in various locations, such as traffic stations, hospitals, and other places with a risk of infection. Therefore, achieving fast and accurate identification in different application scenarios is an urgent problem to be solved. Contactless mask recognition can avoid the waste of human resources and the risk of exposure. We propose a novel method for face mask recognition, which exploits spatial and frequency features derived from 3D information. A ToF camera with a simple system and robust data is used to capture the depth images. The facial contour of the depth image is extracted accurately by the designed method, which reduces the dimension of the depth data to improve recognition speed. Additionally, the classification process is divided into two parts. The wearing condition of the mask is first identified by features extracted from the facial contour. The types of masks are then classified by new features extracted from the spatial and frequency curves. With appropriate thresholds and a voting method, the total recall accuracy of the proposed algorithm reaches 96.21%. In particular, the recall for images without a mask reaches 99.21%.


Subject(s)
Form Perception , Masks , Humans , SARS-CoV-2 , Algorithms , Recognition, Psychology
19.
J Hand Surg Glob Online ; 5(1): 39-47, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36704372

ABSTRACT

Purpose: Quantitative measurement of hand motion is essential in evaluating hand function. This study aimed to investigate the validity and reliability of a novel depth camera-based contactless automatic measurement system to assess hand range of motion and its potential benefits in clinical applications. Methods: Five hand gestures were designed to evaluate the hand range of motion using a depth camera-based measurement system. Seventy-one volunteers were enrolled to perform the designed hand gestures. The hand range of motion was then measured with the depth camera and by manual procedures. System validity was evaluated on 3 dimensions: repeatability, within-laboratory precision, and reproducibility. For system reliability, linear evaluation, the intraclass correlation coefficient, paired t-test, and bias were employed to test the consistency and difference between the depth camera and manual procedures. Results: When measuring phalangeal length, repeatability, within-laboratory precision, and reproducibility were 2.63%, 12.87%, and 27.15%, respectively. When measuring angles of hand motion, the mean repeatability and within-laboratory precision were 1.2° and 3.3° for extension of 5 digits, 2.7° and 10.2° for flexion of 4 fingers, and 3.1° and 5.3° for abduction of 4 metacarpophalangeal joints, respectively. For system reliability, the results showed excellent consistency (intraclass correlation coefficient = 0.823; P < .05) and good linearity with the manual procedures (r ≈ 0.909-0.982; P < .001). In addition, 78.3% of the measurements were clinically acceptable. Conclusions: Our depth camera-based evaluation system provides acceptable validity and reliability in measuring hand range of motion and offers potential benefits for clinical care and research in hand surgery. However, further studies are required before clinical application. Clinical relevance: This study suggests that a depth camera-based contactless automatic measurement system holds promise for assessing hand range of motion in hand function evaluation, diagnosis, and rehabilitation for medical staff. However, it is currently not adequate for all clinical applications.
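
The reliability analysis hinges on the intraclass correlation coefficient; a hedged sketch of computing it with the pingouin package on a toy long-format table (all values illustrative, and the specific ICC form used by the authors is not stated):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Long format: one row per (hand, method) angle measurement; values are made up.
df = pd.DataFrame({
    "hand":   np.repeat([1, 2, 3, 4, 5], 2),
    "method": ["camera", "manual"] * 5,
    "angle":  [62.0, 60.5, 71.3, 73.0, 55.2, 54.0, 68.4, 69.1, 59.7, 61.0],
})
icc = pg.intraclass_corr(data=df, targets="hand", raters="method", ratings="angle")
print(icc[["Type", "ICC"]])   # e.g. ICC2 for absolute agreement between the two methods
```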

20.
J Hand Surg Eur Vol ; 48(5): 453-458, 2023 05.
Article in English | MEDLINE | ID: mdl-36420794

ABSTRACT

The purpose of this cross-sectional study was to determine the precision and accuracy of the measurement of finger motion with a depth camera. Fifty-five healthy adult hands were included. Measurements were done with a depth camera and compared with traditional manual goniometer measurements. Repeated measuring showed that the overall repeatability and reproducibility of extension measured with the depth camera were within 3° and 4° and that of flexion were within 13° and 14°. Compared with traditional manual goniometry, biases of extension of all finger joints and flexion of metacarpophalangeal joints were less than 5°, and the average bias of flexion of proximal and distal interphalangeal joints was 29°. We conclude that the measurement of finger extension and flexion of the metacarpophalangeal joints with a depth camera was reliable, but improvement is required in the precision and accuracy of interphalangeal joint flexion.


Subject(s)
Finger Joint , Fingers , Adult , Humans , Cross-Sectional Studies , Healthy Volunteers , Reproducibility of Results , Range of Motion, Articular