Results 1 - 20 of 521
1.
Sensors (Basel) ; 24(11)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38894374

ABSTRACT

Visual Simultaneous Localization and Mapping (V-SLAM) plays a crucial role in the development of intelligent robotics and autonomous navigation systems. However, it still faces significant challenges in handling highly dynamic environments. Deep learning is currently the prevalent approach to recognizing dynamic objects in the environment, but models such as YOLOv5 and Mask R-CNN require significant computational resources, which limits their potential in real-time applications due to hardware and time constraints. To overcome this limitation, this paper proposes ADM-SLAM, a visual SLAM system designed for dynamic environments that builds upon ORB-SLAM2. This system integrates efficient adaptive feature point homogenization extraction, lightweight deep learning semantic segmentation based on an improved DeepLabv3, and multi-view geometric segmentation. It optimizes keyframe extraction, segments potential dynamic objects using contextual information with the semantic segmentation network, and detects the motion states of dynamic objects using multi-view geometric methods, thereby eliminating dynamic interference points. The results indicate that ADM-SLAM outperforms ORB-SLAM2 in dynamic environments, especially in high-dynamic scenes, where it achieves up to a 97% reduction in Absolute Trajectory Error (ATE). In various highly dynamic test sequences, ADM-SLAM outperforms DS-SLAM and DynaSLAM in terms of real-time performance and accuracy, proving its excellent adaptability.
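
The multi-view geometric check used to flag dynamic points in systems of this kind can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the fundamental matrix F, the homogeneous point format, and the 1-pixel threshold are assumptions. A match whose distance to its epipolar line exceeds the threshold is treated as a dynamic interference point and excluded from tracking.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (pixels) from p2 to the epipolar line F @ p1.
    p1, p2 are homogeneous image points (x, y, 1)."""
    l = F @ p1                                  # line a*x + b*y + c = 0 in image 2
    return abs(l @ p2) / np.hypot(l[0], l[1])

def flag_dynamic(F, pts1, pts2, thresh=1.0):
    """True for matches violating the epipolar constraint (likely dynamic)."""
    return [epipolar_distance(F, p1, p2) > thresh
            for p1, p2 in zip(pts1, pts2)]
```

A static point satisfies the constraint up to noise; a point on a moving object does not, which is what the threshold separates.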

2.
Sensors (Basel) ; 24(11)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38894383

ABSTRACT

Because of the absence of visual perception, visually impaired individuals encounter various difficulties in their daily lives. This paper proposes a visual aid system designed specifically for visually impaired individuals, aiming to assist and guide them in grasping target objects within a tabletop environment. The system employs a visual perception module that incorporates a semantic visual SLAM algorithm, achieved through the fusion of ORB-SLAM2 and YOLOv5s, enabling the construction of a semantic map of the environment. In the human-machine cooperation module, a depth camera is integrated into a wearable device worn on the hand, while a vibration array feedback device conveys directional information of the target to visually impaired individuals for tactile interaction. To enhance the system's versatility, a Dobot Magician manipulator is also employed to aid visually impaired individuals in grasping tasks. The performance of the semantic visual SLAM algorithm in terms of localization and semantic mapping was thoroughly tested. Additionally, several experiments were conducted to simulate visually impaired individuals' interactions in grasping target objects, effectively verifying the feasibility and effectiveness of the proposed system. Overall, this system demonstrates its capability to assist and guide visually impaired individuals in perceiving and acquiring target objects.


Subject(s)
Algorithms , Visually Impaired Persons , Wearable Electronic Devices , Humans , Visually Impaired Persons/rehabilitation , Hand Strength/physiology , Self-Help Devices , Visual Perception/physiology , Semantics , Male
3.
bioRxiv ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38826327

ABSTRACT

The Maternal-to-Zygotic transition (MZT) is a reprogramming process encompassing zygotic genome activation (ZGA) and the clearance of maternally provided mRNAs. While some factors regulating the MZT have been identified, there are thousands of maternal RNAs whose functions have not yet been ascribed. Here, we have performed a proof-of-principle CRISPR-RfxCas13d maternal screen targeting mRNAs encoding protein kinases and phosphatases in zebrafish and identified Bckdk as a novel post-translational regulator of the MZT. Bckdk mRNA knockdown caused epiboly defects, ZGA deregulation, H3K27ac reduction and a partial impairment of miR-430 processing. Phospho-proteomic analysis revealed that Phf10/Baf45a, a chromatin remodeling factor, is less phosphorylated upon Bckdk depletion. Further, phf10 mRNA knockdown also altered ZGA, and a constitutively phosphorylated Phf10 rescued the developmental defects observed after bckdk mRNA depletion. Altogether, our results demonstrate the competence of CRISPR-RfxCas13d screenings to uncover new regulators of early vertebrate development and shed light on the post-translational control of the MZT mediated by protein phosphorylation.

4.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931615

ABSTRACT

In this study, we enhanced odometry performance by integrating vision sensors with LiDAR sensors, which exhibit contrasting characteristics. Vision sensors provide extensive environmental information but are limited in precise distance measurement, whereas LiDAR offers high accuracy in distance metrics but lacks detailed environmental data. By utilizing data from vision sensors, this research compensates for the inadequate descriptors of LiDAR sensors, thereby improving LiDAR feature matching performance. Traditional fusion methods, which rely on extracting depth from image features, depend heavily on vision sensors and are vulnerable under challenging conditions such as rain, darkness, or light reflection. Utilizing vision sensors as primary sensors under such conditions can lead to significant mapping errors and, in the worst cases, system divergence. Conversely, our approach uses LiDAR as the primary sensor, mitigating the shortcomings of previous methods and enabling vision sensors to support LiDAR-based mapping. This maintains LiDAR odometry performance even in environments where vision sensors are compromised, while still benefiting from their support when available. We adopted five prominent algorithms from the latest LiDAR SLAM open-source projects and conducted experiments on the KITTI odometry dataset. This research proposes a novel approach by integrating a vision support module into the top three LiDAR SLAM methods, thereby improving performance. By making the source code of VA-LOAM publicly available, this work enhances the accessibility of the technology, fostering reproducibility and transparency within the research community.
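
A vision support module of this kind has to relate LiDAR points to image data. A minimal sketch of that projection step, assuming a pinhole model with known intrinsics K and LiDAR-to-camera extrinsics (R, t); this is a generic illustration, not VA-LOAM's actual code:

```python
import numpy as np

def project_lidar_to_image(pts_lidar, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates.
    R, t map the LiDAR frame to the camera frame; K is the 3x3 intrinsics.
    Returns pixel coords and a mask of points in front of the camera
    (mask before using uv; points behind the camera project invalidly)."""
    pts_cam = pts_lidar @ R.T + t               # N x 3 in camera frame
    in_front = pts_cam[:, 2] > 0
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    return uv, in_front
```

Once a LiDAR point has a pixel location, an image descriptor sampled there can stand in for the geometric descriptor LiDAR alone lacks.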

5.
Sensors (Basel) ; 24(12)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38931794

ABSTRACT

Simultaneous localization and mapping (SLAM) is a hot research area that is widely required in many robotics applications. In SLAM technology, it is essential to explore an accurate and efficient map model to represent the environment and to develop the corresponding data association methods needed to achieve reliable matching from measurements to maps. These two key elements impact the working stability of the SLAM system, especially in complex scenarios. However, previous literature has not fully addressed the problems of efficient mapping and accurate data association. In this article, we propose a novel hash multi-scale (H-MS) map to ensure query efficiency with accurate modeling. In the proposed map, an inserted map point simultaneously participates in modeling voxels of different scales in a voxel group, enabling the map to effectively represent objects of different scales in the environment. Meanwhile, the root node of the voxel group is saved to a hash table for efficient access. Secondly, considering the one-to-many (on the order of 1×10³) high-computational-cost data association problem caused by maintaining multi-scale voxel landmarks simultaneously in the H-MS map, we further propose a multi-scale bidirectional matching (MSBM) algorithm. This algorithm utilizes forward-reverse-forward projection to balance efficiency against accuracy. The proposed H-MS map and MSBM algorithm are integrated into a complete LiDAR SLAM system (HMS-SLAM). Finally, we validated the proposed map model, matching algorithm, and integrated system on the public KITTI dataset. The experimental results show that, compared with the ikd-Tree map, the H-MS map model has higher insertion and deletion efficiency, with both operations having O(1) time complexity. The computational efficiency and accuracy of the MSBM algorithm are better than those of the small-scale priority matching algorithm, and the MSBM runs in 49 ms per invocation on a single CPU thread. In addition, the HMS-SLAM system built in this article achieves excellent performance in terms of mapping accuracy and memory usage.
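
The core idea of a hash multi-scale map, one hash-table entry per voxel per scale with O(1) average insert and query, can be sketched compactly. This toy version registers every inserted point in one voxel at each scale; the scale values and the flat dict keying are assumptions for illustration, not the paper's data structure:

```python
from collections import defaultdict

class HashMultiScaleMap:
    """Toy multi-scale voxel map: each point is registered in one voxel
    per scale, and the dict (hash table) gives O(1) average
    insert/query, matching the complexity claim in the abstract."""
    def __init__(self, scales=(0.25, 1.0, 4.0)):
        self.scales = scales
        self.voxels = defaultdict(list)   # (scale_idx, ix, iy, iz) -> points

    def _key(self, s, p):
        size = self.scales[s]
        return (s, int(p[0] // size), int(p[1] // size), int(p[2] // size))

    def insert(self, p):
        for s in range(len(self.scales)):       # every scale at once
            self.voxels[self._key(s, p)].append(p)

    def query(self, s, p):
        """Points sharing the voxel of p at scale index s."""
        return self.voxels.get(self._key(s, p), [])
```

Coarse scales capture large structures while fine scales keep detail, which is the multi-scale representation the H-MS map is built around.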

6.
Microbiol Res ; 285: 127750, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38761489

ABSTRACT

The progress of viral infection involves numerous transcriptional regulatory events. Identifying newly synthesized transcripts helps us to understand the replication mechanisms and pathogenesis of the virus. Here, we utilized a time-resolved metabolic RNA labeling technique, thiol(SH)-linked alkylation for the metabolic sequencing of RNA (SLAM-seq), to differentially elucidate the levels of steady-state and newly synthesized RNAs of the BHK21 cell line in response to human coronavirus OC43 (HCoV-OC43) infection. Our results showed that the Wnt/β-catenin signaling pathway was significantly enriched among the newly synthesized transcripts of the BHK21 cell line in response to HCoV-OC43 infection. Moreover, inhibition of the Wnt pathway promoted viral replication in the early stage of infection but inhibited it in the later stage. Furthermore, remdesivir inhibits the upregulation of the Wnt/β-catenin signaling pathway induced by early infection with HCoV-OC43. Collectively, our study showed the diverse roles of the Wnt/β-catenin pathway at different stages of HCoV-OC43 infection, suggesting a potential target for antiviral treatment. In addition, although infection with HCoV-OC43 induces cytopathic effects in BHK21 cells, inhibiting apoptosis does not affect the intracellular replication of the virus. Monitoring newly synthesized RNA with such a time-resolved approach is a highly promising method for studying the mechanisms of viral infection.
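
SLAM-seq distinguishes new from pre-existing transcripts by counting T>C conversions introduced by metabolic (4sU) labeling. A deliberately simplified way to estimate the labeled fraction from aligned read bases; the read representation below is an assumption for illustration, not the analysis pipeline the authors used:

```python
def new_rna_fraction(reads):
    """Fraction of reads carrying at least one T>C conversion, a simple
    proxy for newly synthesized (4sU-labeled) transcripts.
    Each read is a list of (ref_base, read_base) pairs at aligned positions."""
    def converted(read):
        return any(r == 'T' and q == 'C' for r, q in read)
    labeled = sum(converted(r) for r in reads)
    return labeled / len(reads) if reads else 0.0
```

Real pipelines additionally model sequencing-error background and per-read conversion counts, but the labeled/unlabeled split above is the quantity that makes the approach time-resolved.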


Subject(s)
Adenosine Monophosphate , Alanine , Antiviral Agents , Coronavirus OC43, Human , Transcriptome , Virus Replication , Wnt Signaling Pathway , Coronavirus OC43, Human/genetics , Coronavirus OC43, Human/drug effects , Virus Replication/drug effects , Cell Line , Humans , Adenosine Monophosphate/analogs & derivatives , Adenosine Monophosphate/pharmacology , Adenosine Monophosphate/metabolism , Antiviral Agents/pharmacology , Alanine/analogs & derivatives , Alanine/pharmacology , Alanine/metabolism , Animals , Coronavirus Infections/virology , Coronavirus Infections/drug therapy
7.
Int J Comput Assist Radiol Surg ; 19(7): 1375-1383, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38771418

ABSTRACT

PURPOSE: Intraoperative reconstruction of endoscopic scenes is a key technology for surgical navigation systems. The accuracy and efficiency of 3D reconstruction directly determine the effectiveness of navigation systems in a variety of clinical applications. While current deformable SLAM algorithms can meet real-time requirements, their underlying reliance on regular templates still makes it challenging to efficiently capture abrupt geometric features within scenes, such as organ contours and surgical margins. METHODS: We propose a novel real-time monocular deformable SLAM algorithm with a geometrically adapted template. To ensure real-time performance, the proposed algorithm consists of two threads: a deformation mapping thread updates the template at keyframe rate, and a deformation tracking thread estimates the camera pose and the deformation at frame rate. To capture geometric features more efficiently, the algorithm first detects salient edge features using a pre-trained contour detection network and then constructs the template through a triangulation method under the guidance of the salient features. RESULTS: We thoroughly evaluated this method on the Mandala and Hamlyn datasets in terms of accuracy and performance. The results demonstrated that the proposed method achieves better accuracy, with a 0.75-7.95% improvement, and consistent effectiveness in data association compared with the closest method. CONCLUSION: This study verified that an adaptive template does improve the reconstruction of dynamic laparoscopic scenes with abrupt geometric features. However, further exploration is needed for applications in laparoscopic surgery with incisal margins caused by surgical instruments. This research serves as a crucial step toward enhanced automatic computer-assisted navigation in laparoscopic surgery. Code is available at https://github.com/Tang257/SLAM-with-geometrically-adapted-template .


Subject(s)
Algorithms , Imaging, Three-Dimensional , Laparoscopy , Humans , Laparoscopy/methods , Imaging, Three-Dimensional/methods , Surgery, Computer-Assisted/methods
8.
Int J Comput Assist Radiol Surg ; 19(7): 1259-1266, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38775904

ABSTRACT

PURPOSE: Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems for endoscopic procedures. Due to the visual feature scarcity and unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many of the recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized on singular domains such as arthroscopy, sinus endoscopy, colonoscopy or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS: To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box for several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS: We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. OneSLAM presents performance better than or comparable to existing approaches targeted to the specific data in all four tested domains, generalizing across domains without the need for retraining. CONCLUSION: OneSLAM benefits from the convincing performance of TAP foundation models but generalizes to endoscopic sequences of different anatomies, all while demonstrating performance better than or comparable to domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.
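
The local bundle adjustment step mentioned above minimizes reprojection error over camera poses and sparse structure. A deliberately reduced sketch, refining only a camera translation by Gauss-Newton with a numerical Jacobian; the identity rotation and pinhole intrinsics are simplifying assumptions, not OneSLAM's implementation:

```python
import numpy as np

def reprojection_residuals(points3d, obs, K, t):
    """Pixel residuals of 3D points projected by a camera at translation t
    (rotation fixed to identity for brevity)."""
    proj = (points3d + t) @ K.T
    proj = proj[:, :2] / proj[:, 2:3]
    return (proj - obs).ravel()

def refine_translation(points3d, obs, K, t0, iters=10, eps=1e-6):
    """Tiny Gauss-Newton loop over the translation only; full bundle
    adjustment applies the same kind of update jointly to all poses
    and points."""
    t = np.asarray(t0, dtype=float)
    for _ in range(iters):
        r = reprojection_residuals(points3d, obs, K, t)
        J = np.empty((r.size, 3))
        for j in range(3):                      # numerical Jacobian, column j
            dt = np.zeros(3)
            dt[j] = eps
            J[:, j] = (reprojection_residuals(points3d, obs, K, t + dt) - r) / eps
        t = t - np.linalg.solve(J.T @ J, J.T @ r)   # normal-equation step
    return t
```

The TAP-tracked correspondences supply the observations `obs`; the optimizer then pulls the pose toward the configuration where the tracked points reproject correctly.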


Subject(s)
Algorithms , Endoscopy , Humans , Endoscopy/methods , Imaging, Three-Dimensional/methods , Surgery, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
9.
Comput Biol Med ; 175: 108546, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704902

ABSTRACT

Three-dimensional reconstruction of images acquired through endoscopes is playing a vital role in an increasing number of medical applications. Endoscopes used in the clinic are commonly classified as monocular endoscopes and binocular endoscopes. We have reviewed the classification of methods for depth estimation according to the type of endoscope. Basically, depth estimation relies on feature matching of images and multi-view geometry theory. However, these traditional techniques have many problems in the endoscopic environment. With the increasing development of deep learning techniques, there is a growing number of works based on learning methods to address challenges such as inconsistent illumination and texture sparsity. We have reviewed over 170 papers published in the 10 years from 2013 to 2023. The commonly used public datasets and performance metrics are summarized. We also give a taxonomy of methods and analyze the advantages and drawbacks of algorithms. Summary tables and a results atlas are provided to facilitate the comparison of qualitative and quantitative performance of different methods in each category. In addition, we summarize commonly used scene representation methods in endoscopy and speculate on the prospects of depth estimation research in medical applications. We also compare the robustness performance, processing time, and scene representation of the methods to facilitate doctors and researchers in selecting appropriate methods based on surgical applications.


Subject(s)
Endoscopy , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Endoscopy/methods , Algorithms , Deep Learning
10.
Article in English | MEDLINE | ID: mdl-38745863

ABSTRACT

Augmented reality (AR) has seen increased interest and attention for its application in surgical procedures. AR-guided surgical systems can overlay segmented anatomy from pre-operative imaging onto the user's environment to delineate hard-to-see structures and subsurface lesions intraoperatively. While previous works have utilized pre-operative imaging such as computed tomography or magnetic resonance images, registration methods still lack the ability to accurately register deformable anatomical structures without fiducial markers across modalities and dimensionalities. This is especially true of minimally invasive abdominal surgical techniques, which often employ a monocular laparoscope, due to inherent limitations. Surgical scene reconstruction is a critical component towards accurate registrations needed for AR-guided surgery and other downstream AR applications such as remote assistance or surgical simulation. In this work, we utilize a state-of-the-art (SOTA) deep-learning-based visual simultaneous localization and mapping (vSLAM) algorithm to generate a dense 3D reconstruction with camera pose estimations and depth maps from video obtained with a monocular laparoscope. The proposed method can robustly reconstruct surgical scenes using real-time data and provide camera pose estimations without stereo or additional sensors, which increases its usability and is less intrusive. We also demonstrate a framework to evaluate current vSLAM algorithms on non-Lambertian, low-texture surfaces and explore using its outputs on downstream tasks. We expect these evaluation methods can be utilized for the continual refinement of newer algorithms for AR-guided surgery.

11.
Mol Oncol ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775167

ABSTRACT

Inactivation of cyclin-dependent kinase 12 (CDK12) characterizes an aggressive sub-group of castration-resistant prostate cancer (CRPC). Hyper-activation of MYC transcription factor is sufficient to confer the CRPC phenotype. Here, we show that loss of CDK12 promotes MYC activity, which renders the cells dependent on the otherwise non-essential splicing regulatory kinase SRSF protein kinase 1 (SRPK1). High MYC expression is associated with increased levels of SRPK1 in patient samples, and overexpression of MYC sensitizes prostate cancer cells to SRPK1 inhibition using pharmacological and genetic strategies. We show that Endovion (SCO-101), a compound currently in clinical trials against pancreatic cancer, phenocopies the effects of the well-characterized SRPK1 inhibitor SRPIN340 on nascent transcription. This is the first study to show that Endovion is an SRPK1 inhibitor. Inhibition of SRPK1 with either of the compounds promotes transcription elongation, and transcriptionally activates the unfolded protein response. In brief, here we discover that CDK12 inactivation promotes MYC signaling in an SRPK1-dependent manner, and show that the clinical grade compound Endovion selectively targets the cells with CDK12 inactivation.

12.
Sensors (Basel) ; 24(9)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38732904

ABSTRACT

In this paper, we present a novel approach referred to as audio-based virtual-landmark HoloSLAM. This innovative method leverages a single sound source and microphone arrays to estimate the voice-printed speaker's direction. The system allows an autonomous robot equipped with a single microphone array to navigate within indoor environments, interact with specific sound sources, and simultaneously determine its own location while mapping the environment. The proposed method requires neither multiple audio sources in the environment nor sensor fusion to extract pertinent information and make accurate sound source estimations. Furthermore, the approach incorporates Robotic Mixed Reality using Microsoft HoloLens to superimpose landmarks, effectively mitigating the audio landmark-related issues of conventional audio-based landmark SLAM, particularly in situations where audio landmarks cannot be discerned, are limited in number, or are completely missing. The paper also evaluates an active speaker detection method, demonstrating its ability to achieve high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of this method, emphasizing its precision and comprehensive mapping capabilities. The results of these experiments showcase the accuracy and efficiency of the proposed system, surpassing the constraints associated with traditional audio-based SLAM techniques and ultimately leading to a more detailed and precise mapping of the robot's surroundings.
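
Direction estimation from a microphone array is commonly built on time-delay estimation between microphone pairs, with GCC-PHAT as the standard tool. The sketch below is a generic illustration of that building block, not the paper's method; it recovers the delay of one channel relative to another, from which a pair geometry yields the source angle:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (seconds) of `sig` relative to `ref` using the
    Generalized Cross-Correlation with Phase Transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    G = S * np.conj(R)
    G /= np.abs(G) + 1e-12                  # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(G, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # lags -m..+m
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

The phase transform whitens the spectrum, which sharpens the correlation peak and makes the estimate robust to reverberant indoor acoustics.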

13.
Sensors (Basel) ; 24(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38793834

ABSTRACT

Localization and perception play an important role as the basis of autonomous Unmanned Aerial Vehicle (UAV) applications, providing the internal state of movements and the external understanding of environments. Simultaneous Localization And Mapping (SLAM), one of the critical techniques for localization and perception, is facing a technical upgrade, due to the development of embedded hardware, multi-sensor technology, and artificial intelligence. This survey focuses on the development of visual SLAM as a basis for UAV applications. The solutions to critical problems for visual SLAM are shown by reviewing state-of-the-art and newly presented algorithms, providing the research progression and direction in three essential aspects: real-time performance, texture-less environments, and dynamic environments. Visual-inertial fusion and learning-based enhancement are discussed for UAV localization and perception to illustrate their role in UAV applications. Subsequently, the trend of UAV localization and perception is shown. The algorithm components, camera configuration, and data processing methods are also introduced to give comprehensive preliminaries. In this paper, we provide coverage of visual SLAM and its related technologies over the past decade, with a specific focus on their applications in autonomous UAV applications. We summarize the current research, reveal potential problems, and outline future trends from academic and engineering perspectives.

14.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793892

ABSTRACT

Modern UAVs (unmanned aerial vehicles) equipped with video cameras can provide large-scale high-resolution video data. This poses significant challenges for structure from motion (SfM) and simultaneous localization and mapping (SLAM) algorithms, as most of them are developed for relatively small-scale and low-resolution scenes. In this paper, we present a video-based SfM method specifically designed for high-resolution large-size UAV videos. Despite the wide range of applications for SfM, performing mainstream SfM methods on such videos poses challenges due to their high computational cost. Our method consists of three main steps. Firstly, we employ a visual SLAM (VSLAM) system to efficiently extract keyframes, keypoints, initial camera poses, and sparse structures from downsampled videos. Next, we propose a novel two-step keypoint adjustment method. Instead of matching new points in the original videos, our method effectively and efficiently adjusts the existing keypoints at the original scale. Finally, we refine the poses and structures using a rotation-averaging constrained global bundle adjustment (BA) technique, incorporating the adjusted keypoints. To enrich the resources available for SLAM or SfM studies, we provide a large-size (3840 × 2160) outdoor video dataset with millimeter-level-accuracy ground control points, which supplements the current relatively low-resolution video datasets. Experiments demonstrate that, compared with other SLAM or SfM methods, our method achieves an average efficiency improvement of 100% on our collected dataset and 45% on the EuRoC dataset. Our method also demonstrates superior localization accuracy when compared with state-of-the-art SLAM or SfM methods.
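
The idea of adjusting existing keypoints at the original scale rather than re-matching can be illustrated minimally: upscale the coordinates detected on the downsampled frame, then search a small window in the full-resolution image for the best template match. The SSD criterion and window size below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def upscale_keypoints(kps, scale):
    """Map (x, y) keypoints from the downsampled frame to the original
    resolution by scaling and rounding to the nearest pixel."""
    return [(int(round(x * scale)), int(round(y * scale))) for x, y in kps]

def refine_keypoint(img, x, y, patch, win=2):
    """Search a (2*win+1)^2 window around the upscaled keypoint (x, y)
    for the integer offset minimizing SSD against `patch`, an odd-sized
    grayscale template from the reference view."""
    r = patch.shape[0] // 2
    best, best_xy = np.inf, (x, y)
    for dy in range(-win, win + 1):
        for dx in range(-win, win + 1):
            cx, cy = x + dx, y + dy
            cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
            ssd = np.sum((cand.astype(float) - patch) ** 2)
            if ssd < best:
                best, best_xy = ssd, (cx, cy)
    return best_xy
```

Because only small windows around already-known points are searched, the cost stays far below full-resolution feature matching, which is where the reported efficiency gain comes from.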

15.
Sensors (Basel) ; 24(10)2024 May 16.
Article in English | MEDLINE | ID: mdl-38794008

ABSTRACT

Multi-robot Simultaneous Localization and Mapping (SLAM) systems employing 2D lidar scans are effective for exploration and navigation within GNSS-limited environments. However, scalability concerns arise with larger environments and increased robot numbers, as 2D mapping necessitates substantial processor memory and inter-robot communication bandwidth. Thus, data compression prior to transmission becomes imperative. This study investigates the problem of communication-efficient multi-robot SLAM based on 2D maps and introduces an architecture that enables compressed communication, facilitating the transmission of full maps with significantly reduced bandwidth. We propose a framework employing a lightweight feature extraction Convolutional Neural Network (CNN) for a full map, followed by an encoder combining Huffman and Run-Length Encoding (RLE) algorithms to further compress a full map. Subsequently, a lightweight recovery CNN was designed to restore map features. Experimental validation involves applying our compressed communication framework to a two-robot SLAM system. The results demonstrate that our approach reduces communication overhead by 99% while maintaining map quality. This compressed communication strategy effectively addresses bandwidth constraints in multi-robot SLAM scenarios, offering a practical solution for collaborative SLAM applications.
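
The RLE half of the encoder described above is straightforward to sketch; occupancy grids are dominated by long runs of identical cells, which is what makes the combination with Huffman coding effective. A minimal version, where cell values follow the common 0 (free) / 100 (occupied) / -1 (unknown) occupancy convention, an assumption here rather than a detail from the abstract:

```python
def rle_encode(cells):
    """Run-length encode a flattened occupancy grid into (value, count)
    pairs; long uniform runs collapse to a single pair."""
    out = []
    for v in cells:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return [tuple(p) for p in out]

def rle_decode(pairs):
    """Exact inverse of rle_encode."""
    return [v for v, n in pairs for _ in range(n)]
```

In the paper's framework this sits after the CNN feature extractor and Huffman stage, but even alone it shows why map transmission bandwidth drops so sharply on sparse grids.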

16.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794061

ABSTRACT

Detecting objects, particularly naval mines, on the seafloor is a complex task. In naval mine countermeasures (MCM) operations, sidescan or synthetic aperture sonars have been used to search large areas. However, a single sensor cannot meet the requirements of high-precision autonomous navigation. Based on the ORB-SLAM3-VI framework, we propose ORB-SLAM3-VIP, which integrates a depth sensor, an IMU sensor and an optical sensor. This method integrates the measurements of depth sensors and an IMU sensor into the visual SLAM algorithm through tight coupling, and establishes a multi-sensor fusion SLAM model. Depth constraints are introduced into the process of initialization, scale fine-tuning, tracking and mapping to constrain the position of the sensor on the z-axis and improve the accuracy of pose estimation and map scale estimation. Tests on seven sets of underwater multi-sensor sequence data in the AQUALOC dataset show that, compared with ORB-SLAM3-VI, the proposed ORB-SLAM3-VIP system reduces the scale error in all sequences by up to 41.2% and the trajectory error by up to 41.2%; the root mean square error is also reduced by up to 41.6%.
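
Tightly coupling a pressure-derived depth reading into the estimator amounts to adding a measurement that pins the z component of the pose. A one-dimensional Kalman-style update, a conceptual sketch rather than the ORB-SLAM3-VIP formulation, shows the effect:

```python
def fuse_depth(z_pred, var_pred, depth_meas, var_meas):
    """One scalar Kalman-style update of the estimated vertical position
    with a pressure-derived depth reading; the correction both shifts the
    estimate toward the measurement and shrinks its variance, which is
    what suppresses vertical drift and scale error."""
    k = var_pred / (var_pred + var_meas)        # gain
    z = z_pred + k * (depth_meas - z_pred)
    var = (1 - k) * var_pred
    return z, var
```

In a full tightly coupled system the same constraint enters the joint optimization as a residual on the pose, rather than as a separate filter step.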

17.
Surg Neurol Int ; 15: 146, 2024.
Article in English | MEDLINE | ID: mdl-38742013

ABSTRACT

Background: Augmented reality (AR) applications in neurosurgery have expanded over the past decade with the introduction of headset-based platforms. Many studies have focused on either preoperative planning to tailor the approach to the patient's anatomy and pathology or intraoperative surgical navigation, primarily realized as AR navigation through microscope oculars. Additional efforts have been made to validate AR in trainee and patient education and to investigate novel surgical approaches. Our objective was to provide a systematic overview of AR in neurosurgery, outline the current limitations of this technology, and highlight several applications of AR in neurosurgery. Methods: We performed a literature search in PubMed/Medline to identify papers that addressed the use of AR in neurosurgery. The authors screened 375 papers, and 57 papers were selected, analyzed, and included in this systematic review. Results: AR has made significant inroads in neurosurgery, particularly in neuronavigation. In spinal neurosurgery, this primarily has been used for pedicle screw placement. AR-based neuronavigation also has significant applications in cranial neurosurgery, including neurovascular, neurosurgical oncology, and skull base neurosurgery. Other potential applications include operating room streamlining, trainee and patient education, and telecommunications. Conclusion: AR has already made a significant impact in neurosurgery in the above domains and has the potential to be a paradigm-altering technology. Future development in AR should focus on both validating these applications and extending the role of AR.

18.
Orthop J Sports Med ; 12(4): 23259671241241551, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38617888

ABSTRACT

Background: The epidemiology of musculoskeletal injuries at the Australian Open, Wimbledon, and US Open tennis tournaments has been investigated in recent studies; however, there is no published literature on the incidence of musculoskeletal injuries at the French Open. Purpose: To describe the incidence, location, and type of musculoskeletal injuries in tennis players during the French Open tournament from 2011 to 2022. Study Design: Descriptive epidemiology study. Methods: A review was performed of all injuries documented by a multidisciplinary medical team during the French Open from 2011 to 2022. All musculoskeletal injuries that occurred during the main draw of the female and male singles or doubles matches were included. Descriptive statistics were used to summarize the data. Injury locations were grouped into regions as well as into upper limb, trunk, and lower limb. Results: In total, there were 750 injuries in 687 tennis players, resulting in a mean of 62.5 injuries per tournament; however, there were no obvious trends in injury incidence over the time frame evaluated. The number of injuries in female and male players was similar (392 vs 358, respectively). The most common injury regions were the thigh/hip/pelvis (n = 156), ankle/foot (n = 114), and spine (n = 103). The most common injury types were muscle-related (n = 244), tendon-related (n = 207), and joint-related (n = 163), and the most affected muscles were the adductors (n = 45), rectus abdominis (n = 38), and lumbar muscles (n = 25). Conclusion: Over the 12-year period from 2011 to 2022, female and male players experienced similar numbers of musculoskeletal injuries, with most injuries occurring in the lower limbs compared with the upper limbs and trunk.

19.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610245

ABSTRACT

Simultaneous Localization and Mapping (SLAM) poses distinct challenges, especially in settings with variable elements, which demand the integration of multiple sensors to ensure robustness. This study addresses these issues by integrating advanced technologies like LiDAR-inertial odometry (LIO), visual-inertial odometry (VIO), and sophisticated Inertial Measurement Unit (IMU) preintegration methods. These integrations enhance the robustness and reliability of the SLAM process for precise mapping of complex environments. Additionally, incorporating an object-detection network aids in identifying and excluding transient objects such as pedestrians and vehicles, essential for maintaining the integrity and accuracy of environmental mapping. The object-detection network features a lightweight design and swift performance, enabling real-time analysis without significant resource utilization. Our approach focuses on harmoniously blending these techniques to yield superior mapping outcomes in complex scenarios. The effectiveness of our proposed methods is substantiated through experimental evaluation, demonstrating their capability to produce more reliable and precise maps in environments with variable elements. The results indicate improvements in autonomous navigation and mapping, providing a practical solution for SLAM in challenging and dynamic settings.

20.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676105

ABSTRACT

This research presents a comprehensive comparative analysis of SLAM algorithms and Deep Neural Network (DNN)-based Behavior Cloning (BC) navigation in outdoor agricultural environments. The study categorizes SLAM algorithms into laser-based and vision-based approaches, addressing the specific challenges posed by uneven terrain and the similarity between aisles in an orchard farm. The DNN-based BC navigation technique proves efficient, exhibiting reduced human intervention and providing a viable alternative for agricultural navigation. Despite the DNN-based BC navigation approach taking more time to reach its target due to a constant throttle limit for steady speed, its overall performance in terms of driving deviation and human intervention is notable compared to conventional SLAM algorithms. We provide comprehensive evaluation criteria for selecting optimal techniques for outdoor agricultural navigation. The algorithms were tested in three different scenarios: Precision, Speed, and Autonomy. Our proposed performance metric, P, is weighted and normalized. The DNN-based BC algorithm showed the best performance among the others, with a score of 0.92 in the Precision and Autonomy scenarios. When Speed is more important, RTAB-Map showed the best score, at 0.96. When Autonomy has higher priority, Gmapping showed a performance of 0.92, comparable with the DNN-based BC.
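
A weighted, min-max-normalized score of the kind the metric P describes can be computed as below. The criterion names, weights, and normalization choice are assumptions for illustration; the paper's exact definition of P may differ:

```python
def weighted_score(metrics, weights):
    """Weighted, min-max-normalized score in [0, 1]. `metrics` maps each
    algorithm to per-criterion values (higher is better); `weights` maps
    each criterion to its importance for the scenario at hand."""
    names = list(next(iter(metrics.values())).keys())
    lo = {n: min(m[n] for m in metrics.values()) for n in names}
    hi = {n: max(m[n] for m in metrics.values()) for n in names}
    def norm(v, n):
        return 0.0 if hi[n] == lo[n] else (v - lo[n]) / (hi[n] - lo[n])
    return {a: sum(weights[n] * norm(m[n], n) for n in names)
            for a, m in metrics.items()}
```

Re-weighting the same raw measurements per scenario is what lets one benchmark produce different winners for Precision, Speed, and Autonomy.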
