Results 1 - 20 of 593
1.
JMIR Mhealth Uhealth ; 12: e57158, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39331461

ABSTRACT

Wearable monitors continue to play a critical role in scientific assessments of physical activity. Recently, research-grade monitors have begun providing raw data from photoplethysmography (PPG) alongside standard raw data from inertial sensors (accelerometers and gyroscopes). Raw PPG enables granular and transparent estimation of cardiovascular parameters such as heart rate, thus presenting a valuable alternative to standard PPG methodologies (most of which rely on consumer-grade monitors that provide only coarse output from proprietary algorithms). The implications for physical activity assessment are tremendous, since it is now feasible to monitor granular and concurrent trends in both movement and cardiovascular physiology using a single noninvasive device. However, new users must also be aware of challenges and limitations that accompany the use of raw PPG data. This viewpoint paper therefore orients new users to the opportunities and challenges of raw PPG data by presenting its mechanics, pitfalls, and availability, as well as its parallels and synergies with inertial sensors. This includes discussion of specific applications to the prediction of energy expenditure, activity type, and 24-hour movement behaviors, with an emphasis on areas in which raw PPG data may help resolve known issues with inertial sensing (eg, measurement during cycling activities). We also discuss how the impact of raw PPG data can be maximized through the use of open-source tools when developing and disseminating new methods, similar to current standards for raw accelerometer and gyroscope data. Collectively, our comments show the strong potential of raw PPG data to enhance the use of research-grade wearable activity monitors in science over the coming years.
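
The abstract above argues that raw PPG enables transparent, granular heart rate estimation. As a purely illustrative sketch of that idea (not a method from the paper), the following Python snippet estimates heart rate from a raw PPG window as the dominant spectral peak in a physiological band; the sampling rate, band limits, and synthetic signal are assumptions.

```python
import numpy as np

def estimate_heart_rate(ppg, fs, hr_band=(0.7, 3.5)):
    """Estimate heart rate (bpm) as the dominant spectral peak of a raw PPG
    window; hr_band restricts the search to roughly 42-210 bpm."""
    ppg = ppg - np.mean(ppg)                       # remove the DC component
    windowed = ppg * np.hanning(len(ppg))          # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    return 60.0 * freqs[band][np.argmax(spectrum[band])]   # Hz -> bpm

# Synthetic 30 s "raw PPG" at 64 Hz: a 1.2 Hz cardiac component plus noise.
fs = 64
t = np.arange(0, 30, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
print(f"Estimated HR: {estimate_heart_rate(ppg, fs):.1f} bpm")   # ~72 bpm
```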


Subject(s)
Photoplethysmography , Wearable Electronic Devices , Photoplethysmography/instrumentation , Photoplethysmography/methods , Photoplethysmography/standards , Humans , Wearable Electronic Devices/standards , Wearable Electronic Devices/statistics & numerical data , Exercise/physiology , Monitoring, Physiologic/instrumentation , Monitoring, Physiologic/methods , Heart Rate/physiology , Accelerometry/instrumentation , Accelerometry/methods
2.
Sensors (Basel) ; 24(18)2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39338609

ABSTRACT

In the field of detection and ranging, multiple complementary sensing modalities may be used to enrich information obtained from a dynamic scene. One application of this sensor fusion is in public security and surveillance, where efficacy and privacy protection measures must be continually evaluated. We present a novel deployment of sensor fusion for the discreet detection of concealed metal objects on persons whilst preserving their privacy. This is achieved by coupling off-the-shelf mmWave radar and depth camera technology with a novel neural network architecture that processes radar signals using convolutional Long Short-Term Memory (LSTM) blocks and depth signals using convolutional operations. The combined latent features are then magnified using deep feature magnification to reveal cross-modality dependencies in the data. We further propose a decoder, based on the feature extraction and embedding block, to learn an efficient upsampling of the latent space to locate the concealed object in the spatial domain through radar feature guidance. We demonstrate the ability to detect the presence and infer the 3D location of concealed metal objects. We achieve accuracies of up to 95% using a technique that is robust to multiple persons. This work provides a demonstration of the potential for cost-effective and portable sensor fusion with strong opportunities for further development.
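
For readers unfamiliar with two-branch fusion, the toy PyTorch stub below shows the general pattern the paper builds on: one convolutional branch per modality, with the latent features concatenated before a detection head. It deliberately omits the paper's ConvLSTM blocks, deep feature magnification, and decoder; all layer sizes and shapes are invented.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Illustrative only: one conv branch per modality, concatenated latents."""
    def __init__(self):
        super().__init__()
        self.radar = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(8))
        self.depth = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(8))
        self.head = nn.Linear(2 * 8 * 8 * 8, 1)   # presence logit

    def forward(self, radar_frame, depth_frame):
        z = torch.cat([self.radar(radar_frame).flatten(1),
                       self.depth(depth_frame).flatten(1)], dim=1)
        return self.head(z)

model = TwoBranchFusion()
logit = model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
print(logit.shape)   # torch.Size([2, 1])
```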

3.
Sensors (Basel) ; 24(18)2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39338625

ABSTRACT

Recent advancements in vehicle technology have stimulated innovation across the automotive sector, from Advanced Driver Assistance Systems (ADAS) to autonomous driving and motorsport applications. Modern vehicles, equipped with sensors for perception, localization, navigation, and actuators for autonomous driving, generate vast amounts of data used for training and evaluating autonomous systems. Real-world testing is essential for validation but is complex, expensive, and time-intensive, requiring multiple vehicles and reference systems. To address these challenges, computer graphics-based simulators offer a compelling solution by providing high-fidelity 3D environments to simulate vehicles and road users. These simulators are crucial for developing, validating, and testing ADAS, autonomous driving systems, and cooperative driving systems, and enhancing vehicle performance and driver training in motorsport. This paper reviews computer graphics-based simulators tailored for automotive applications. It begins with an overview of their applications and analyzes their key features. Additionally, this paper compares five open-source (CARLA, AirSim, LGSVL, AWSIM, and DeepDrive) and ten commercial simulators. Our findings indicate that open-source simulators are best for the research community, offering realistic 3D environments, multiple sensor support, APIs, co-simulation, and community support. Conversely, commercial simulators, while less extensible, provide a broader set of features and solutions.
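
To give a flavor of the open-source APIs the review compares, here is a minimal CARLA client sketch. It assumes a CARLA simulator server is already running on localhost:2000 and that the `carla` Python package matching the server version is installed.

```python
import carla  # CARLA Python API; assumes a running CARLA server on localhost:2000

client = carla.Client("localhost", 2000)
client.set_timeout(5.0)
world = client.get_world()

# Spawn the first available vehicle blueprint at the first map spawn point.
blueprint = world.get_blueprint_library().filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)
vehicle.set_autopilot(True)   # hand control to the built-in traffic manager
print(vehicle.type_id)
```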

4.
Sensors (Basel) ; 24(18)2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39338751

ABSTRACT

Despite the many potential applications of an accurate indoor positioning system (IPS), no universal, readily available system exists. Much of the IPS research to date has been based on the use of radio transmitters as positioning beacons. Visible light positioning (VLP) instead uses LED lights as beacons. Either cameras or photodiodes (PDs) can be used as VLP receivers, and position estimates are usually based on either the angle of arrival (AOA) or the strength of the received signal. Research on the use of AOA with photodiode receivers has so far been limited by the lack of a suitable compact receiver. The quadrature angular diversity aperture receiver (QADA) can fill this gap. In this paper, we describe a new QADA design that uses only three readily available parts: a quadrant photodiode, a 3D-printed aperture, and a programmable system on a chip (PSoC). Extensive experimental results demonstrate that this design provides accurate AOA estimates within a room-sized test chamber. The flexibility and programmability of the PSoC mean that other sensors can be supported by the same PSoC. This has the potential to allow the AOA estimates from the QADA to be combined with information from other sensors to form future powerful sensor-fusion systems requiring only one beacon.
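
For intuition about how a quadrant photodiode yields AOA estimates, the sketch below implements the textbook quadrant-imbalance model: opposing quadrant sums give the aperture-projected spot offset, from which azimuth and incidence angles follow. The quadrant layout, calibration constant `spot_k`, and aperture height are hypothetical, not the paper's calibrated design.

```python
import numpy as np

def aoa_from_quadrants(qa, qb, qc, qd, spot_k, aperture_h):
    """Estimate angle of arrival from quadrant photodiode signals.
    Assumed layout: A=top-left, B=top-right, C=bottom-left, D=bottom-right.
    spot_k converts the normalized imbalance to a spot offset in mm
    (device-specific, calibrated in practice); aperture_h is the
    aperture-to-diode distance in mm."""
    total = qa + qb + qc + qd
    x = spot_k * ((qb + qd) - (qa + qc)) / total   # horizontal spot offset
    y = spot_k * ((qa + qb) - (qc + qd)) / total   # vertical spot offset
    azimuth = np.degrees(np.arctan2(y, x))
    incidence = np.degrees(np.arctan2(np.hypot(x, y), aperture_h))
    return azimuth, incidence

print(aoa_from_quadrants(1.0, 1.2, 0.8, 1.0, spot_k=1.5, aperture_h=3.0))
```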

5.
Sensors (Basel) ; 24(17)2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39275374

ABSTRACT

In recent years, the safety issues surrounding high-speed railways have remained severe. Intrusions of personnel or obstacles into the railway perimeter have occurred frequently, causing derailments or forced stops, especially in bad weather such as fog, haze, and rain. According to previous research, it is difficult for a single sensor to meet the application needs of all scenarios, all weather conditions, and all time domains. Because multi-sensor data such as images and point clouds have complementary advantages, multi-sensor fusion detection technology for high-speed railway perimeter intrusion is becoming a research hotspot. To the best of our knowledge, there has been no review of research on multi-sensor fusion detection technology for high-speed railway perimeter intrusion. To fill this gap and stimulate future research, this article first analyzes the current technical defense measures of high-speed railways and summarizes the research status of single-sensor detection. Secondly, based on an analysis of typical intrusion scenarios in high-speed railways, we review the research status of multi-sensor data fusion detection algorithms and datasets. Then, we discuss risk assessment of railway safety. Finally, the trends and challenges of multi-sensor fusion detection algorithms in the railway field are discussed. This provides effective theoretical support and technical guidance for high-speed rail perimeter intrusion monitoring.

6.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275571

ABSTRACT

In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. To address the inaccuracies of visual-inertial estimation caused by redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot's body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORB-SLAM3 algorithm reveals that, on the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results.
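
A small worked example of the pose parameterization described above: the robot body lives in SE(2), while the camera pose needed for reprojection residuals lives in SE(3). The numpy sketch below lifts a planar pose to a homogeneous SE(3) matrix and composes it with a hypothetical camera extrinsic; the extrinsic values are invented.

```python
import numpy as np

def se2_to_se3(x, y, theta):
    """Lift a planar robot pose (x, y, yaw) to a 4x4 homogeneous SE(3) matrix."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = [x, y]
    return T

# Hypothetical camera extrinsic: 0.10 m above the base frame, identity rotation.
T_base_cam = np.eye(4)
T_base_cam[2, 3] = 0.10

T_world_base = se2_to_se3(1.0, 2.0, np.pi / 4)   # SE(2) body pose
T_world_cam = T_world_base @ T_base_cam          # corresponding SE(3) camera pose
print(np.round(T_world_cam, 3))
```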

7.
Sensors (Basel) ; 24(17)2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39275752

ABSTRACT

Current state-of-the-art (SOTA) LiDAR-only detectors perform well on 3D object detection tasks, but point cloud data are typically sparse and lack semantic information. Detailed semantic information from camera images can be combined with existing LiDAR-based detectors to create a robust 3D detection pipeline. With two different data types, a major challenge in developing multi-modal sensor fusion networks is achieving effective data fusion while managing computational resources. With separate 2D and 3D feature extraction backbones, feature fusion becomes more challenging because the two modalities generate different gradients, leading to gradient conflicts and suboptimal convergence during network optimization. To this end, we propose a 3D object detection method, Attention-Enabled Point Fusion (AEPF). AEPF uses images and voxelized point cloud data as inputs and estimates the 3D bounding boxes of object locations as outputs. An attention mechanism is introduced into an existing feature fusion strategy to improve 3D detection accuracy, and two variants are proposed. These two variants, AEPF-Small and AEPF-Large, address different needs. AEPF-Small, with a lightweight attention module and fewer parameters, offers fast inference. AEPF-Large, with a more complex attention module and more parameters, provides higher accuracy than baseline models. Experimental results on the KITTI validation set show that AEPF-Small maintains SOTA 3D detection accuracy while running inference at higher speeds. AEPF-Large achieves mean average precision scores of 91.13, 79.06, and 76.15 for the car class's easy, medium, and hard targets, respectively, on the KITTI validation set. Results from ablation experiments are also presented to support the choice of model architecture.
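
As a rough sketch of attention-enabled feature fusion in general (not the AEPF modules themselves), the PyTorch snippet below gates image features with a sigmoid attention vector computed from both modalities before adding them to the LiDAR features; all dimensions are invented.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Toy channel-attention fusion of LiDAR and image features.
    A sigmoid gate computed from both modalities reweights the image
    features before they are added to the LiDAR features."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, lidar_feat, image_feat):        # both (batch, dim)
        attn = self.gate(torch.cat([lidar_feat, image_feat], dim=1))
        return lidar_feat + attn * image_feat         # attention-weighted sum

fusion = AttentionFusion()
out = fusion(torch.randn(4, 64), torch.randn(4, 64))
print(out.shape)   # torch.Size([4, 64])
```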

8.
Biomimetics (Basel) ; 9(9)2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39329579

ABSTRACT

Assistive robotic platforms have recently gained popularity in various healthcare applications, and their use has expanded to social settings such as education, tourism, and manufacturing. These social robots, often in the form of bio-inspired humanoid systems, provide significant psychological and physiological benefits through one-on-one interactions. To optimize the interaction between social robotic platforms and humans, it is crucial for these robots to identify and mimic human motions in real time. This research presents a motion prediction model, developed using convolutional neural networks (CNNs), that efficiently determines the type of motion from its initial state. Once the motion is identified, the corresponding robot reaction is executed by moving its joints along specific trajectories derived through temporal alignment and stored in a pre-selected motion library. In this study, we developed a multi-axial robotic arm integrated with a motion identification model to interact with humans by emulating their movements. The robotic arm follows pre-selected trajectories for the corresponding interactions, generated based on the identified human motions. To address the nonlinearities and cross-coupled dynamics of the robotic system, we applied a control strategy for precise motion tracking. This integrated system ensures that the robotic arm achieves adequately controlled outcomes, validating the feasibility of such an interactive robotic system for effective bio-inspired motion emulation.

9.
Article in English | MEDLINE | ID: mdl-39344095

ABSTRACT

Understanding the complex three-dimensional (3D) dynamic interactions between self-contained breathing apparatus (SCBA) and the human torso is critical to assessing potential impacts on firefighter health and informing equipment design. This study employed a multi-inertial sensor fusion technology to quantify these interactions. Six volunteer firefighters performed walking and running experiments on a treadmill while wearing the SCBA. Calculations of interaction forces and moments from the multi-inertial sensor technology were validated against a 3D motion capture system. The predicted interaction forces and moments showed good agreement with the measured data, especially for the forces (normal and lateral) and moments (x- and z-direction components) with relative root mean square errors (RMSEs) below 9.4%, 7.7%, 7.7%, and 7.8%, respectively. Peak pack force reached up to 150 N, significantly exceeding the SCBA's intrinsic weight during SCBA carriage. The proposed multi-inertial sensor fusion technique can effectively evaluate the 3D dynamic interactions and provide a scientific basis for health monitoring and ergonomic optimization of SCBA systems for firefighters.
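
The relative RMSE figures quoted above can be computed as RMSE normalized by the range of the reference signal; this is one common convention, since the abstract does not state the exact normalization. A minimal sketch with made-up force samples:

```python
import numpy as np

def relative_rmse(predicted, measured):
    """Relative RMSE (%): RMSE normalized by the peak-to-peak range of the
    reference signal. One common convention; the paper's exact normalization
    is not specified in the abstract."""
    err = np.asarray(predicted) - np.asarray(measured)
    return 100.0 * np.sqrt(np.mean(err ** 2)) / np.ptp(measured)

measured = np.array([20.0, 80.0, 150.0, 90.0, 30.0])   # e.g. pack force in N
predicted = np.array([22.0, 76.0, 143.0, 95.0, 28.0])
print(f"relative RMSE: {relative_rmse(predicted, measured):.1f}%")
```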

10.
SLAS Technol ; 29(5): 100181, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39209115

ABSTRACT

In the pursuit of advancing health and rehabilitation, human motion recognition technology has made essential quantitative contributions to physical performance assessment. This research introduces a novel recognition method based on fuzzy comprehensive evaluation. By fusing multi-sensor data with advanced classification algorithms, the proposed system offers granular quantitative analysis with implications for health and fitness monitoring, particularly rehabilitation. Our methodological approach, grounded in modal separation and Empirical Mode Decomposition (EMD), effectively distills the motion acceleration component from raw accelerometer data, facilitating the extraction of intricate motion patterns. Quantitative analysis revealed that the integrated framework significantly improves motion recognition accuracy, achieving an overall recognition rate of 90.03%, markedly surpassing conventional methods such as Support Vector Machines (SVM), Decision Trees (DT), and K-Nearest Neighbors (KNN), which hovered around 80%. Moreover, the system demonstrated an accuracy of 97% in discerning minor left-right swaying motions, showcasing its robustness in evaluating subtle movement nuances, a paramount feature for rehabilitation and patient monitoring. This precision in motion recognition enables objective and scalable analysis pertinent to individualized therapeutic interventions. The experimental evaluation underscores the system's ability to distinguish complex, intense motions from finer, subtler movements with high fidelity, substantiating the method's utility in delivering sophisticated, data-driven insights for monitoring rehabilitation trajectories.
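
The EMD step described above can be reproduced with the third-party `PyEMD` package (pip name `EMD-signal`), assuming it is available; which intrinsic mode functions to keep as "motion" is application-specific, and the split below is only illustrative.

```python
import numpy as np
from PyEMD import EMD   # third-party: pip install EMD-signal (assumed available)

fs = 100
t = np.arange(0, 10, 1.0 / fs)
# Synthetic accelerometer trace: slow orientation drift + motion + noise.
signal = (0.5 * np.sin(2 * np.pi * 0.05 * t)
          + np.sin(2 * np.pi * 2.0 * t)
          + 0.1 * np.random.randn(t.size))

imfs = EMD().emd(signal)           # intrinsic mode functions, fastest to slowest
motion = imfs[:-1].sum(axis=0)     # drop the slowest mode as the drift/trend
print(imfs.shape, motion.shape)
```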

11.
Sensors (Basel) ; 24(15)2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39124046

ABSTRACT

The labor shortage and rising costs in the greenhouse industry have driven the development of automation, with positioning and navigation technology at the core of autonomous operations. However, precise positioning in complex greenhouse environments and narrow aisles poses challenges to localization technologies. This study proposes a multi-sensor fusion positioning and navigation robot based on ultra-wideband (UWB), an inertial measurement unit (IMU), odometry (ODOM), and a laser rangefinder (RF). The system introduces a confidence optimization algorithm that weakens non-line-of-sight (NLOS) effects in UWB positioning, yielding calibrated UWB positioning results, which are then used as a baseline to correct the positioning errors accumulated by the IMU and ODOM. The extended Kalman filter (EKF) algorithm is employed to fuse the multi-sensor data. To validate the feasibility of the system, experiments were conducted in a Chinese solar greenhouse. The results show that the proposed NLOS confidence optimization algorithm significantly improves UWB positioning accuracy, by 60.05%. At a speed of 0.1 m/s, the root mean square error (RMSE) is 0.038 m for lateral deviation and 4.030° for course deviation. This study provides a new approach to greenhouse positioning and navigation, achieving precise positioning and navigation in complex commercial greenhouse environments and narrow aisles, thereby laying a foundation for the intelligent development of greenhouses.
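
The fusion step described above follows a standard predict-update pattern. The sketch below is a simplified linear Kalman filter stand-in for the paper's EKF: a constant-velocity prediction plays the role of the IMU/ODOM propagation, and calibrated UWB fixes provide the updates; all noise values and the simulated trajectory are invented.

```python
import numpy as np

# Minimal constant-velocity Kalman filter: a linear stand-in for the EKF,
# with UWB position fixes correcting the dead-reckoned state.
dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])   # UWB observes position only
Q = 1e-3 * np.eye(4)                           # process noise (dead-reckoning drift)
R = 0.05 * np.eye(2)                           # UWB noise after NLOS weighting

x = np.zeros(4)                                # state: [px, py, vx, vy]
P = np.eye(4)

def step(x, P, uwb_xy):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (uwb_xy - H @ x)               # update with the UWB fix
    P = (np.eye(4) - K @ H) @ P
    return x, P

for k in range(50):                            # simulated straight-line run
    fix = np.array([0.1 * k, 0.0]) + 0.05 * np.random.randn(2)
    x, P = step(x, P, fix)
print(np.round(x, 2))
```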

12.
Stud Health Technol Inform ; 316: 988-992, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176957

ABSTRACT

Continuous monitoring of physiological signals such as the electrocardiogram (ECG) in driving environments has the potential to reduce the need for frequent health check-ups by providing real-time information on cardiovascular health. However, capturing ECG from sensors mounted on steering wheels is difficult due to motion artifacts, noise, and dropouts. To address this, we propose a novel method for reliable and accurate detection of heartbeats using sensor fusion with a bidirectional long short-term memory (BiLSTM) model. Our dataset contains reference ECG, steering wheel ECG, photoplethysmogram (PPG), and imaging PPG (iPPG) signals, which are more feasible to capture in driving scenarios. We combine these signals for R-wave detection. We conduct experiments with individual signals and signal fusion techniques to evaluate the accuracy of the detected heartbeat positions. The BiLSTM model achieves a performance of 62.69% in the city driving scenario. The model can be integrated into the system to detect heartbeat positions for further analysis.
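
A minimal PyTorch sketch of the model class named above: a bidirectional LSTM over fused signal channels emitting a per-sample R-wave probability. The channel count, hidden size, and window length are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BeatDetector(nn.Module):
    """Sketch of a BiLSTM beat detector: fused channels in, per-sample
    R-wave probability out. Sizes are illustrative only."""
    def __init__(self, n_channels=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                 # x: (batch, time, channels)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, time)

# Fused input: e.g. steering-wheel ECG, PPG, and iPPG stacked as channels.
probs = BeatDetector()(torch.randn(2, 500, 3))
print(probs.shape)   # torch.Size([2, 500])
```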


Subject(s)
Electrocardiography , Photoplethysmography , Signal Processing, Computer-Assisted , Humans , Photoplethysmography/methods , Heart Rate/physiology , Automobile Driving , Algorithms
13.
Biomimetics (Basel) ; 9(8)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39194471

ABSTRACT

As a significant technological innovation in the fields of medicine and geriatric care, smart nursing wheelchairs offer a novel approach to providing high-quality care services and improving the quality of care. The aim of this review article is to examine the development, applications, and prospects of smart nursing wheelchairs, with particular emphasis on their assistive nursing functions, multi-sensor fusion technology, and human-machine interaction interfaces. First, we describe the assistive functions of nursing wheelchairs, including position changing, transferring, bathing, and toileting, which significantly reduce the workload of nursing staff and improve the quality of care. Second, we summarize the existing multi-sensor fusion technology for smart nursing wheelchairs, including LiDAR, RGB-D cameras, and ultrasonic sensors. These technologies give wheelchairs autonomy and safety, better meeting patients' needs. We also discuss the human-machine interaction interfaces of intelligent care wheelchairs, such as voice recognition, touch screens, and remote controls. These interfaces allow users to operate and control the wheelchair more easily, improving usability and maneuverability. Finally, we emphasize the importance of multifunctional integrated care wheelchairs that combine assistive care, navigation, and human-machine interaction into a comprehensive care solution. Looking ahead, we anticipate that smart nursing wheelchairs will play an increasingly important role in medicine and geriatric care. By integrating advanced technologies such as artificial intelligence, intelligent sensors, and remote monitoring, they can further improve patients' quality of care and quality of life.

14.
Sensors (Basel) ; 24(16)2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39204932

ABSTRACT

The engine in-cylinder pressure is a very important parameter for the optimization of internal combustion engines. This paper proposes an alternative recursive Kalman filter-based approach to engine cylinder pressure reconstruction using sensor-fused engine speed. In the proposed approach, the fused engine speed is first obtained using a centralized sensor fusion technique that synthesizes information from the engine vibration sensor and the engine flywheel angular speed sensor. Afterwards, with the fused speed, the engine cylinder pressure signal is reconstructed by inverse filtering of the engine structural vibration signal. The cylinder pressure reconstruction results of the proposed approach are validated with two combustion indicators: the pressure peak Pmax and the peak location Ploc. The reconstruction results are also compared with those obtained by the cylinder pressure reconstruction approach using the calculated engine speed. The sensor fusion results indicate that the fused speed becomes smoother as the vibration signal is trusted more. Furthermore, the cylinder pressure reconstruction results reveal the relationship between the sensor-fused speed and the reconstruction accuracy: the more the vibration signal is trusted, the better the reconstruction.
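
The reported trust effect can be illustrated with a toy inverse-variance fusion of two speed estimates: lowering the variance assigned to the vibration-derived speed (trusting it more) shifts weight toward the smoother channel. This is a stand-in for the paper's centralized fusion; all signals and variances are synthetic.

```python
import numpy as np

def fuse_speeds(flywheel_speed, vibration_speed, var_fly, var_vib):
    """Inverse-variance weighted fusion of two engine-speed estimates.
    Lowering var_vib (trusting the vibration channel more) shifts the
    weight toward it; a toy stand-in for centralized sensor fusion."""
    w_fly = var_vib / (var_fly + var_vib)
    return w_fly * flywheel_speed + (1.0 - w_fly) * vibration_speed

t = np.linspace(0, 1, 200)
true_speed = 1500 + 30 * np.sin(2 * np.pi * 25 * t)    # rpm with torsional ripple
flywheel = true_speed + 15 * np.random.randn(t.size)   # noisy angular-speed sensor
vibration = true_speed + 5 * np.random.randn(t.size)   # smoother vibration-derived speed
fused = fuse_speeds(flywheel, vibration, var_fly=15**2, var_vib=5**2)
print(np.std(fused - true_speed) < np.std(flywheel - true_speed))   # True
```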

15.
Sensors (Basel) ; 24(16)2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39205116

ABSTRACT

Corn, as one of the three major grain crops in China, plays a crucial role in ensuring national food security through its yield and quality. With the advancement of agricultural intelligence, agricultural robot technology has gained significant attention. High-precision navigation is the basis for realizing various operations of agricultural robots in corn fields and is closely related to the quality of operations. Corn leaf and stalk recognition and ranging are the prerequisites for achieving high-precision navigation and have attracted much attention. This paper proposes a corn leaf and stalk recognition and ranging algorithm based on multi-sensor fusion. First, YOLOv8 is used to identify corn leaves and stalks. Considering the large differences in leaf morphology and the large changes in field illumination that lead to discontinuous identification, an equidistant expansion polygon algorithm is proposed to post-process the leaves, thereby increasing the average recognition completeness of the leaves to 86.4%. Secondly, after eliminating redundant point clouds, the IMU data are used to calculate the confidence of the LiDAR and depth camera ranging point clouds, and point cloud fusion is performed based on this to achieve high-precision ranging of corn leaves. The average ranging error is 2.9 cm, which is lower than the measurement error of a single sensor. Finally, the stalk point cloud is processed and clustered using the FILL-DBSCAN algorithm to identify and measure the distance of the same corn stalk. The algorithm combines recognition accuracy and ranging accuracy to meet the needs of robot navigation or phenotypic measurement in corn fields, ensuring the stable and efficient operation of the robot in the corn field.
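
The equidistant expansion step can be pictured as an outward polygon buffer that lets fragments of a discontinuously detected leaf merge. The sketch below uses `shapely` (assumed available) with an invented leaf outline and expansion margin; the paper's actual algorithm and distance are not reproduced.

```python
from shapely.geometry import Polygon   # assumed available: pip install shapely

# Toy leaf-mask polygon (in image or ground coordinates).
leaf = Polygon([(0, 0), (4, 0.5), (6, 3), (3, 4), (0.5, 2.5)])

# Equidistant outward expansion: buffer every edge by the same margin so that
# fragments of a discontinuously detected leaf can merge into one region.
# The 0.3 margin is illustrative; join_style=2 keeps mitred corners.
expanded = leaf.buffer(0.3, join_style=2)

print(round(leaf.area, 2), round(expanded.area, 2))
```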


Subject(s)
Algorithms , Plant Leaves , Zea mays , Zea mays/anatomy & histology , Plant Leaves/anatomy & histology , Robotics , Agriculture/methods , Crops, Agricultural , China
16.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000829

ABSTRACT

This paper presents a new deep-learning architecture designed to enhance the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments, a limitation that event cameras overcome due to their superior temporal resolution and motion clarity. However, effective integration of these two technologies relies on achieving precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision strongly correlates with the spatial concentration and density of events, with denser distributions yielding better alignment results. Our empirical results demonstrate that areas with denser event clusters enhance calibration accuracy, with calibration errors increasing in more uniformly distributed event scenarios. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advancements in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for the future of visual perception technologies.
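
The core DGCNN operation on raw event data is building edge features over a k-nearest-neighbour graph. The PyTorch sketch below shows that step only, on a synthetic (x, y, t) event cloud; the value of k and the feature layout follow the common DGCNN convention, not necessarily the paper's.

```python
import torch

def knn_edge_features(points, k=4):
    """Edge features as used in DGCNN-style layers: for each point,
    concatenate (x_i, x_j - x_i) over its k nearest neighbours.
    points: (n, d) event coordinates, e.g. (x, y, t)."""
    dist = torch.cdist(points, points)                    # (n, n) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the self-neighbour
    neighbours = points[idx]                              # (n, k, d)
    center = points.unsqueeze(1).expand_as(neighbours)    # (n, k, d)
    return torch.cat([center, neighbours - center], dim=-1)   # (n, k, 2d)

events = torch.rand(128, 3)   # synthetic (x, y, t) event cloud
print(knn_edge_features(events).shape)   # torch.Size([128, 4, 6])
```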

17.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000897

ABSTRACT

Effective security surveillance is crucial in the railway sector to prevent security incidents, including vandalism, trespassing, and sabotage. This paper discusses the challenges of maintaining seamless surveillance over extensive railway infrastructure, considering both technological advances and the growing risks posed by terrorist attacks. Building on previous research, this paper discusses the limitations of current surveillance methods, particularly in managing the information overload and false alarms that result from integrating multiple sensor technologies. To address these issues, we propose a new fusion model that utilises Probabilistic Occupancy Maps (POMs) and Bayesian fusion techniques. The fusion model is evaluated on a comprehensive dataset comprising three use cases with a total of eight real-life critical scenarios. We show that, with this model, detection accuracy can be increased while simultaneously reducing false alarms in railway security surveillance systems. In this way, our approach enhances situational awareness and improves the effectiveness of railway security measures.
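
Bayesian fusion of probabilistic occupancy maps is commonly done by summing per-sensor log-odds under an independence assumption; the generic numpy sketch below illustrates how agreeing sensors reinforce a detection while disagreement damps it. The cell values are invented and this is not the paper's exact model.

```python
import numpy as np

def fuse_occupancy(prob_maps):
    """Fuse per-sensor occupancy probability maps via log-odds addition,
    the standard independent-sensor Bayesian update behind probabilistic
    occupancy maps. A generic sketch only."""
    logodds = sum(np.log(p / (1.0 - p)) for p in prob_maps)
    return 1.0 / (1.0 + np.exp(-logodds))

camera = np.array([[0.5, 0.8], [0.6, 0.5]])   # per-cell P(occupied), sensor 1
radar  = np.array([[0.5, 0.7], [0.2, 0.5]])   # per-cell P(occupied), sensor 2
print(np.round(fuse_occupancy([camera, radar]), 2))
# Agreement reinforces (0.8 and 0.7 fuse to ~0.9); disagreement damps the alarm.
```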

18.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000972

ABSTRACT

With the continuous development of new sensor features and tracking algorithms for object tracking, researchers have opportunities to experiment using different combinations. However, there is no standard or agreed method for selecting an appropriate architecture for autonomous vehicle (AV) crash reconstruction using multi-sensor-based sensor fusion. This study proposes a novel simulation method for tracking performance evaluation (SMTPE) to solve this problem. The SMTPE helps select the best tracking architecture for AV crash reconstruction. This study reveals that a radar-camera-based centralized tracking architecture of multi-sensor fusion performed the best among three different architectures tested with varying sensor setups, sampling rates, and vehicle crash scenarios. We provide a brief guideline for the best practices in selecting appropriate sensor fusion and tracking architecture arrangements, which can be helpful for future vehicle crash reconstruction and other AV improvement research.

19.
Sensors (Basel) ; 24(13)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39001042

ABSTRACT

With the transformation and development of the automotive industry, low-cost and seamless indoor and outdoor positioning has become a research hotspot for modern vehicles equipped with in-vehicle infotainment systems, Internet of Vehicles connectivity, or other intelligent systems (such as Telematics Box, Autopilot, etc.). This paper analyzes modern vehicles in different configurations and proposes a versatile indoor non-visual semantic mapping and localization solution based on low-cost sensors. First, a sliding window-based semantic landmark detection method is designed to identify non-visual semantic landmarks (e.g., entrances/exits, ramp entrances/exits, road nodes). Then, we construct an indoor non-visual semantic map that includes vehicle trajectory waypoints, non-visual semantic landmarks, and Wi-Fi fingerprints of received signal strength (RSS) features. Furthermore, to estimate the position of modern vehicles within the constructed semantic maps, we propose a graph-optimized localization method based on landmark matching that exploits the correlation between non-visual semantic landmarks. Finally, field experiments are conducted in two shopping mall scenes with different underground parking layouts to verify the proposed non-visual semantic mapping and localization method. The results show that the proposed method achieves a high accuracy of 98.1% in non-visual semantic landmark detection and a low localization error of 1.31 m.
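
As a simple illustration of the Wi-Fi RSS fingerprint component of the map, the sketch below matches a query RSS vector to the nearest stored waypoint by Euclidean distance. This is generic nearest-neighbour fingerprinting with hypothetical map entries, not the paper's graph-optimized landmark-matching method.

```python
import numpy as np

def match_fingerprint(rss_query, fingerprint_db):
    """Nearest-neighbour Wi-Fi RSS fingerprint matching: return the stored
    waypoint whose RSS vector is closest in Euclidean distance."""
    best = min(fingerprint_db,
               key=lambda wp: np.linalg.norm(rss_query - wp["rss"]))
    return best["xy"]

db = [  # hypothetical map entries: waypoint position + RSS over 3 access points
    {"xy": (0.0, 0.0), "rss": np.array([-40.0, -70.0, -80.0])},
    {"xy": (5.0, 0.0), "rss": np.array([-55.0, -50.0, -75.0])},
    {"xy": (5.0, 8.0), "rss": np.array([-70.0, -45.0, -50.0])},
]
print(match_fingerprint(np.array([-52.0, -52.0, -72.0]), db))   # (5.0, 0.0)
```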

20.
Front Plant Sci ; 15: 1369501, 2024.
Article in English | MEDLINE | ID: mdl-38988641

ABSTRACT

Diameter and height are crucial morphological parameters of banana pseudo-stems, serving as indicators of the plant's growth status. Currently, in densely cultivated banana plantations, there is a lack of applicable methods for the scalable measurement of phenotypic parameters such as the diameter and height of banana pseudo-stems. This paper introduces a handheld mobile LiDAR and Inertial Measurement Unit (IMU)-fused laser scanning system designed for measuring phenotypic parameters of banana pseudo-stems within banana orchards. To address the challenges posed by dense canopy cover in banana orchards, a distance-weighted feature extraction method is proposed. This method, coupled with LiDAR-IMU integration, constructs a three-dimensional point cloud map of the banana plantation area. To overcome difficulties in segmenting individual banana plants in complex environments, a combined segmentation approach is proposed, involving Euclidean clustering, K-means clustering, and threshold segmentation. A sliding window recognition method is presented to determine the connection points between pseudo-stems and leaves, mitigating issues caused by crown closure and heavy leaf overlap. Experimental results in banana orchards demonstrate that, compared with manual measurements, the mean absolute errors (relative errors) for banana pseudo-stem diameter and height are 0.2127 cm (4.06%) and 3.52 cm (1.91%), respectively. These findings indicate that the proposed method is suitable for scalable measurement of banana pseudo-stem diameter and height in complex, occluded environments, providing a rapid and accurate in-orchard measurement approach for banana plantation managers.
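
The Euclidean clustering step for stalks can be approximated with off-the-shelf DBSCAN, as sketched below on a synthetic ground-plane point cloud; the paper's FILL-DBSCAN adds a fill step that is not reproduced here, and the `eps`/`min_samples` values are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN   # assumed available: pip install scikit-learn

# Toy ground-plane point cloud: two stalk clusters plus scattered noise.
rng = np.random.default_rng(0)
stalk_a = rng.normal([0.0, 0.0], 0.03, size=(40, 2))
stalk_b = rng.normal([0.6, 0.1], 0.03, size=(40, 2))
noise = rng.uniform(-0.5, 1.0, size=(10, 2))
points = np.vstack([stalk_a, stalk_b, noise])

# eps is on the order of a stalk radius; label -1 marks noise points.
labels = DBSCAN(eps=0.1, min_samples=8).fit_predict(points)
for lbl in set(labels) - {-1}:
    centroid = points[labels == lbl].mean(axis=0)
    print(f"stalk {lbl}: centroid {np.round(centroid, 2)}")
```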
