Results 1 - 20 of 56
2.
Sci Rep ; 13(1): 12741, 2023 Aug 06.
Article in English | MEDLINE | ID: mdl-37544955

ABSTRACT

Cleaning is a fundamental routine task in human life that is now being handed over to leading-edge technologies such as robotics and artificial intelligence. Various floor-cleaning robots have been developed with different cleaning functionalities, such as vacuuming and scrubbing. However, failures can occur when a robot tries to clean an incompatible dirt type. Such situations not only reduce the efficiency of the robot but can also cause severe damage to it. Therefore, developing effective methods to classify the cleaning tasks required in different regions and assign them to the appropriate cleaning agent has become a trending research domain. This article proposes a vision-based system that employs the YOLOv5 and DeepSORT algorithms to detect and classify dirt and to create a dirt distribution map that indicates the regions to be assigned different cleaning requirements. Such a map would be useful in a collaborative cleaning framework for deploying each cleaning robot to its respective region to achieve uninterrupted and energy-efficient operation. The proposed method can be executed with any mobile robot and on any surface and dirt, achieving a high accuracy of 81.0% for dirt indication in the dirt distribution map.
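
As an illustration only (not drawn from the article), the Python sketch below shows one way per-frame dirt detections could be accumulated into a coarse dirt distribution grid; the hub model handle, confidence threshold, and grid resolution are assumptions, and the DeepSORT tracking stage is omitted.

# Hypothetical sketch: accumulate YOLOv5 detections into a dirt distribution grid.
# Uses the public ultralytics/yolov5 hub model as a stand-in, not the paper's weights.
import cv2
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # placeholder weights
GRID_W, GRID_H = 32, 32                                    # assumed map resolution
dirt_map = np.zeros((GRID_H, GRID_W), dtype=np.int32)

def update_dirt_map(frame_bgr):
    """Add this frame's detections to the dirt distribution grid."""
    h, w = frame_bgr.shape[:2]
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)       # model expects RGB
    results = model(rgb)
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if conf < 0.5:                                     # assumed confidence cut-off
            continue
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        gx = min(int(cx / w * GRID_W), GRID_W - 1)
        gy = min(int(cy / h * GRID_H), GRID_H - 1)
        dirt_map[gy, gx] += 1                              # more hits -> dirtier cell

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    update_dirt_map(frame)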

3.
Sensors (Basel) ; 23(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36850936

ABSTRACT

Hazardous object detection (escalators, stairs, glass doors, etc.) and avoidance are critical functional safety modules for autonomous mobile cleaning robots. Conventional object detectors are less accurate at detecting low-feature hazardous objects, and their missed-detection and false-classification rates are high when the object is under occlusion. Missed detection or false classification of hazardous objects poses an operational safety issue for mobile robots. This work presents a deep-learning-based context-aware multi-level information fusion framework for autonomous mobile cleaning robots to detect and avoid hazardous objects with a higher confidence level, even if the object is under occlusion. First, an image-level contextual-encoding module was proposed and incorporated with the Faster RCNN ResNet 50 object detector model to improve the detection of low-featured and occluded hazardous objects in an indoor environment. Further, a safe-distance-estimation function was proposed to avoid hazardous objects. It computes the distance of the hazardous object from the robot's position and steers the robot into a safer zone using the detection results and object depth data. The proposed framework was trained with a custom image dataset using fine-tuning techniques and tested in real time with the in-house-developed mobile cleaning robot BELUGA. The experimental results show that the proposed algorithm detected low-featured and occluded hazardous objects with a higher confidence level than the conventional object detector and scored an average detection accuracy of 88.71%.
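
A minimal sketch of a safe-distance check, assuming a detection bounding box and an aligned metric depth image; the safety threshold and steering behavior are assumptions, not the paper's exact function.

# Illustrative safe-distance check (assumed logic, not the paper's implementation).
import numpy as np

SAFE_DISTANCE_M = 1.0   # assumed safety threshold

def object_distance(depth_m, box):
    """Median depth inside the detection box, ignoring invalid (zero) pixels."""
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = depth_m[y1:y2, x1:x2]
    valid = patch[patch > 0]
    return float(np.median(valid)) if valid.size else np.inf

def avoidance_command(depth_m, detections):
    """Return (linear, angular) velocity; steer away if any hazard is too close."""
    for box, label in detections:
        d = object_distance(depth_m, box)
        if d < SAFE_DISTANCE_M:
            x_center = (box[0] + box[2]) / 2.0
            turn = 0.5 if x_center > depth_m.shape[1] / 2 else -0.5
            return 0.0, turn          # stop forward motion, rotate to a safer zone
    return 0.2, 0.0                   # clear path: keep moving slowly

depth = np.full((480, 640), 2.5, dtype=np.float32)
depth[200:300, 300:400] = 0.6        # synthetic nearby obstacle
print(avoidance_command(depth, [((300, 200, 400, 300), "escalator")]))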

4.
Sci Rep ; 12(1): 15938, 2022 09 24.
Article in English | MEDLINE | ID: mdl-36153413

ABSTRACT

Floor cleaning robots are widely used in public places like food courts, hospitals, and malls to perform frequent cleaning tasks. However, frequent cleaning tasks adversely impact the robot's performance and consume more cleaning accessories (such as the brush, scrubber, and mopping pad). This work proposes a novel selective area cleaning/spot cleaning framework for indoor floor cleaning robots using an RGB-D vision sensor-based Closed Circuit Television (CCTV) network, deep learning algorithms, and an optimal complete waypoint path planning method. In this scheme, the robot cleans only dirty areas instead of the whole region. The selective area cleaning/spot cleaning region is identified by combining two strategies: tracing human traffic patterns and detecting stains and trash on the floor. Here, a deep Simple Online and Real-time Tracking (SORT) human tracking algorithm was used to trace high human traffic regions, and the Single Shot Detector (SSD) MobileNet object detection framework was used to detect dirty regions. Further, optimal shortest waypoint coverage path planning based on evolutionary optimization was incorporated to traverse the robot efficiently to the designated selective area cleaning/spot cleaning regions. The experimental results show that the SSD MobileNet algorithm scored 90% accuracy for stain and trash detection on the floor. Further, compared to conventional methods, the evolutionary optimization-based path planning scheme reduces navigation time by 15% and energy consumption by 10%.
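
A simplified genetic-algorithm sketch for ordering spot-cleaning waypoints by travel distance; the operators, population size, and cost function are generic assumptions, not the paper's planner.

# Simplified GA for ordering spot-cleaning waypoints (illustrative only).
import random
import numpy as np

def tour_length(order, pts):
    return sum(np.linalg.norm(pts[order[i]] - pts[order[i - 1]]) for i in range(len(order)))

def evolve_order(pts, pop_size=60, generations=200):
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randint(1, n - 1)                 # simplified order crossover
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            i, j = sorted(random.sample(range(n), 2))      # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda o: tour_length(o, pts))

dirty_spots = np.random.rand(12, 2) * 10.0                 # synthetic dirty regions (meters)
best = evolve_order(dirty_spots)
print("visit order:", best, "length:", round(tour_length(best, dirty_spots), 2))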


Subject(s)
Deep Learning , Robotics , Algorithms , Floors and Floorcoverings , Humans , Robotics/methods
5.
Sci Rep ; 12(1): 14557, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008439

ABSTRACT

This work presents the vision pipeline for our in-house developed autonomous reconfigurable pavement sweeping robot, named Panthera. As the goal of Panthera is to be an autonomous self-reconfigurable robot, it has to understand the type of pavement it is moving on so that it can adapt smoothly to changing pavement width and perform cleaning operations more efficiently and safely. A deep learning (DL)-based vision pipeline is proposed for the Panthera robot to recognize pavement features, including pavement type identification, pavement surface condition prediction, and pavement width estimation. The DeepLabv3+ semantic segmentation algorithm was customized for pavement type classification, and an eight-layer CNN was proposed for pavement surface condition prediction. Furthermore, pavement width was estimated by fusing the segmented pavement region with the depth map. Finally, a fuzzy inference system was implemented that takes the detected pavement width and surface condition as inputs and outputs a safe operational speed. The vision pipeline was trained on a custom pavement image dataset. Its performance was evaluated using offline tests and real-time field trial images captured through the reconfigurable robot Panthera's stereo vision sensor. In the experimental analysis, the DL-based vision pipeline components scored 88.02% and 93.22% accuracy for pavement segmentation and pavement surface condition assessment, respectively, and took approximately 10 ms to process a single image frame from the vision sensor on the onboard computer.
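
A toy Mamdani-style fuzzy inference example mapping pavement width and surface condition to a speed command; the membership functions, rule base, and speed range are assumptions for illustration, not the paper's fuzzy inference system.

# Toy fuzzy inference: pavement width + surface condition -> safe operational speed.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership value of x for the triangle (a, b, c)."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def safe_speed(width_m, condition):          # condition in [0, 1], 1 = good surface
    narrow, wide = tri(width_m, 0, 0, 2.5), tri(width_m, 1.5, 4, 4)
    poor, good = tri(condition, 0, 0, 0.6), tri(condition, 0.4, 1, 1)
    rules = [
        (min(narrow, poor), 0.2),            # narrow & poor  -> very slow
        (min(narrow, good), 0.5),            # narrow & good  -> slow
        (min(wide, poor), 0.6),              # wide & poor    -> moderate
        (min(wide, good), 1.2),              # wide & good    -> nominal speed (m/s)
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den                          # weighted-average defuzzification

print(round(safe_speed(3.2, 0.9), 2), "m/s")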


Subject(s)
Robotics , Algorithms , Semantics
7.
Sensors (Basel) ; 22(14)2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35890883

ABSTRACT

Cleaning is an important task that is practiced in every domain and has prime importance. The significance of cleaning has led to several new technologies in the domestic and professional cleaning domains. However, strategies for auditing the cleanliness delivered by the various cleaning methods remain manual and are often ignored. This work presents a novel domestic dirt image dataset for cleaning-auditing applications, including AI-based dirt analysis and robot-assisted cleaning inspection. One of the significant challenges in AI-based, robot-aided cleaning auditing is the absence of a comprehensive dataset for dirt analysis. We bridge this gap by identifying nine classes of commonly occurring domestic dirt and providing a labeled dataset consisting of 3000 microscope dirt images curated from a semi-indoor environment. The dirt dataset, gathered using the adhesive dirt lifting method, can enhance current dirt sensing and dirt composition estimation for cleaning auditing. The dataset's quality is analyzed through an AI-based dirt analysis and a robot-aided cleaning auditing task using six standard classification models. The models trained with the dirt dataset were capable of yielding a classification accuracy above 90% in the offline dirt analysis experiment and 82% in real-time test results.
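
A hedged sketch of fine-tuning one standard classifier (ResNet-18 from torchvision) for nine dirt classes; the dataset directory layout, transforms, and hyperparameters are placeholders, not the paper's setup.

# Illustrative fine-tuning of one standard classifier for 9 dirt classes.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("dirt_dataset/train", transform=tfm)   # assumed layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 9)            # nine dirt classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                    # short fine-tuning run
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()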


Subject(s)
Soil , Datasets as Topic
8.
Sensors (Basel) ; 22(14)2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35890893

ABSTRACT

Cebrennus rechenbergi, a member of the huntsman spider family, has inspired researchers to adopt different locomotion modes in reconfigurable robotic development. Object-of-interest perception is crucial for such a robot, as it provides fundamental information on the traversed pathways and guides its locomotion mode transformation. Therefore, we present object-of-interest perception in a reconfigurable rolling-crawling robot and identify the appropriate locomotion modes. We demonstrate it in Scorpio, our in-house developed robot with two locomotion modes: rolling and crawling. We train the locomotion mode recognition framework, based on the Pyramid Scene Parsing Network (PSPNet), with a self-collected dataset composed of two categories of paths: unobstructed paths (e.g., floor) for rolling and obstructed paths (e.g., with people, railings, stairs, static objects, and walls) for crawling. The efficiency of the proposed framework has been validated with evaluation metrics in offline and real-time field trial tests. The experimental results show that the trained model achieves mIoU scores of 72.28 and 70.63 in offline and online testing, respectively, for both environments. The proposed framework's performance is compared with semantic segmentation frameworks (HRNet and DeepLabv3), which it outperforms in terms of mIoU and speed. Furthermore, the experimental results reveal that the robot's maneuverability is stable, and the proposed framework can successfully determine the appropriate locomotion modes with enhanced accuracy during complex pathways.
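
A simplified decision rule mapping a segmentation mask to a rolling or crawling mode; the class ids, region of interest, and obstruction threshold are assumptions, not the paper's values.

# Assumed post-processing: choose locomotion mode from a segmentation mask.
import numpy as np

FLOOR_ID = 0                       # "unobstructed path" class (assumed id)
OBSTRUCTION_THRESHOLD = 0.15       # fraction of obstructed pixels that triggers crawling

def select_mode(seg_mask):
    """seg_mask: HxW array of per-pixel class ids from the segmentation network."""
    lower_half = seg_mask[seg_mask.shape[0] // 2 :, :]     # region ahead of the robot
    obstructed = np.mean(lower_half != FLOOR_ID)
    return "crawling" if obstructed > OBSTRUCTION_THRESHOLD else "rolling"

mask = np.zeros((480, 640), dtype=np.uint8)
mask[300:480, 200:500] = 3         # synthetic obstacle (e.g., a stairs class)
print(select_mode(mask))           # -> "crawling"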


Subject(s)
Robotics , Humans , Locomotion , Perception , Robotics/methods
9.
Sensors (Basel) ; 22(14)2022 Jul 16.
Article in English | MEDLINE | ID: mdl-35890997

ABSTRACT

Robot-aided cleaning auditing is pioneering research that uses autonomous robots to assess a region's cleanliness level by analyzing dirt samples collected from various locations. Since the dirt sample gathering process is challenging, adopting a coverage planning strategy from a similar domain such as cleaning is not viable. Alternatively, a path planning approach that gathers dirt samples selectively at locations with a high likelihood of dirt accumulation is more feasible. This work presents a first-of-its-kind dirt sample gathering strategy for cleaning auditing robots by combining geometrical feature extraction and swarm algorithms. This combined approach generates an efficient optimal path covering all the identified dirt locations for efficient cleaning auditing. Besides being a foundational effort for cleaning auditing, a path planning approach considering the geometric signatures that contribute to the dirt accumulation of a region has not been devised so far. The proposed approach is validated systematically through experimental trials. The geometrical feature extraction-based dirt location identification method successfully identified dirt-accumulated locations in our post-cleaning analysis as part of the experimental trials. The path generation strategies are validated in a real-world environment using the in-house developed cleaning auditing robot BELUGA. From the experiments conducted, the ant colony optimization algorithm generated the best cleaning auditing path, with less travel distance, exploration time, and energy usage.
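
A compact ant colony optimization sketch for ordering a set of identified dirt locations; the pheromone parameters and cost model are generic defaults, not the paper's tuning.

# Minimal ant colony optimization over identified dirt locations (illustrative).
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((10, 2)) * 20.0                     # synthetic dirt locations (meters)
n = len(pts)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)
tau = np.ones((n, n))                                # pheromone matrix
alpha, beta, rho, ants, iters = 1.0, 2.0, 0.1, 20, 100

best_len, best_tour = np.inf, None
for _ in range(iters):
    for _ in range(ants):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            i = tour[-1]
            unvisited = np.ones(n, bool)
            unvisited[tour] = False
            w = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * unvisited
            tour.append(int(rng.choice(n, p=w / w.sum())))
        length = sum(dist[tour[k], tour[k + 1]] for k in range(n - 1))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1 - rho)                                  # evaporation
    for k in range(n - 1):                            # reinforce the best tour found
        tau[best_tour[k], best_tour[k + 1]] += 1.0 / best_len

print("best auditing path:", best_tour, "length:", round(best_len, 2))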


Subject(s)
Robotics , Algorithms , Robotics/methods
10.
Biomedicines ; 10(7)2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35884872

ABSTRACT

(1) Background: To study the feasibility of developing finite element (FE) models of the whole lumbar spine using clinical routine multi-detector computed tomography (MDCT) scans to predict failure load (FL) and range of motion (ROM) parameters. (2) Methods: MDCT scans of 12 subjects (6 healthy controls (HC), mean age ± standard deviation (SD): 62.16 ± 10.24 years, and 6 osteoporotic patients (OP), mean age ± SD: 65.83 ± 11.19 years) were included in the current study. Comprehensive FE models of the lumbar spine (5 vertebrae + 4 intervertebral discs (IVDs) + ligaments) were generated (L1-L5) and simulated. The coefficients of correlation (ρ) were calculated to investigate the relationship between FE-based FL and ROM parameters and bone mineral density (BMD) values of L1-L3 derived from MDCT (BMDQCT-L1-L3). Finally, Mann-Whitney U tests were performed to analyze differences in FL and ROM parameters between the HC and OP cohorts. (3) Results: The mean FE-based FL value of the HC cohort was significantly higher than that of the OP cohort (1471.50 ± 275.69 N (HC) vs. 763.33 ± 166.70 N (OP), p < 0.01). A strong correlation of 0.8 (p < 0.01) was observed between FE-based FL and BMDQCT-L1-L3 values. However, no significant differences were observed between the ROM parameters of the HC and OP cohorts (p = 0.69 for flexion; p = 0.69 for extension; p = 0.47 for lateral bending; p = 0.13 for twisting). In addition, no statistically significant correlations were observed between ROM parameters and BMDQCT-L1-L3. (4) Conclusions: Clinical routine MDCT data can be used for patient-specific FE modeling of the whole lumbar spine. ROM parameters do not seem to be significantly altered between HC and OP. In contrast, FE-derived FL may help identify patients with increased osteoporotic fracture risk in the future.
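
The reported comparisons (a rank correlation ρ between FE-based FL and BMD, and a Mann-Whitney U test between HC and OP) can be reproduced on one's own data with scipy; the numbers below are synthetic and only illustrate the calls.

# Illustrative statistics only (synthetic numbers, not the study's data).
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(1)
bmd = rng.normal(110, 25, 12)                       # BMD of L1-L3 (synthetic)
failure_load = 8.0 * bmd + rng.normal(0, 120, 12)   # FE-based failure load (N), synthetic

rho, p_corr = spearmanr(bmd, failure_load)          # correlation between FL and BMD
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")

hc_fl, op_fl = failure_load[:6], failure_load[6:]   # healthy controls vs osteoporotic
u_stat, p_group = mannwhitneyu(hc_fl, op_fl, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_group:.3f}")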

11.
Sensors (Basel) ; 22(13)2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35808427

ABSTRACT

Mosquito-borne diseases can pose serious risks to human health. Therefore, mosquito surveillance and control programs are essential for the wellbeing of the community. However, human-assisted mosquito surveillance and population mapping methods are time-consuming, labor-intensive, and require skilled manpower. This work presents an AI-enabled mosquito surveillance and population mapping framework using our in-house-developed robot, named 'Dragonfly', which uses the You Only Look Once (YOLO) V4 Deep Neural Network (DNN) algorithm and a two-dimensional (2D) environment map generated by the robot. The Dragonfly robot was designed with a differential drive mechanism and a mosquito trapping module to attract mosquitoes in the environment. YOLO V4 was trained with three mosquito classes, namely Aedes aegypti, Aedes albopictus, and Culex, to detect and classify the mosquito breeds caught on the mosquito glue trap. The efficiency of the mosquito surveillance framework was determined in terms of mosquito classification accuracy and detection confidence level in offline and real-time field tests in a garden, a drain perimeter area, and a covered car parking area. The experimental results show that the trained YOLO V4 DNN model detects and classifies the mosquito classes with an 88% confidence level on offline mosquito test image datasets and scores an average confidence level of 82% in the real-time field trial. Further, to generate the mosquito population map, the detection results are fused into the robot's 2D map, which will help in understanding mosquito population dynamics and species distribution.
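
A simplified view of the map-fusion step: registering per-trap detection counts at the robot's pose in a 2D map; the grid resolution, coordinate convention, and confidence cut-off are assumptions.

# Assumed map-fusion step: register mosquito counts at the robot's map pose.
RESOLUTION = 0.05            # meters per cell (assumed occupancy-grid resolution)
population_map = {}          # (cell_x, cell_y) -> per-species counts

def register_detections(robot_xy, detections):
    """detections: list of (species, confidence) from the glue-trap image."""
    cell = (int(robot_xy[0] / RESOLUTION), int(robot_xy[1] / RESOLUTION))
    counts = population_map.setdefault(cell, {})
    for species, conf in detections:
        if conf >= 0.5:
            counts[species] = counts.get(species, 0) + 1

register_detections((3.2, 7.8), [("Aedes aegypti", 0.91), ("Culex", 0.63)])
print(population_map)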


Subject(s)
Aedes , Culex , Robotics , Animals , Mosquito Vectors
13.
Sensors (Basel) ; 21(24)2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34960425

ABSTRACT

Cleaning is one of the fundamental tasks of prime importance in our day-to-day life. Moreover, the importance of cleaning drives research efforts towards bringing leading-edge technologies, including robotics, into the cleaning domain. However, an effective method to assess the quality of cleaning is an equally important research problem to be addressed. The primary step towards addressing the fundamental question of "How clean is clean" is taken using an autonomous cleaning-auditing robot that audits the cleanliness of a given area. This research work focuses on a novel reinforcement learning-based, experience-driven dirt exploration strategy for a cleaning-auditing robot. The proposed approach uses the proximal policy optimization (PPO) on-policy learning method to generate waypoints and sampling decisions to explore the probable dirt accumulation regions in a given area. The policy network is trained in multiple environments with simulated dirt patterns. Experiment trials have been conducted to validate the trained policy in both simulated and real-world environments using an in-house developed cleaning audit robot called BELUGA.
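
A skeleton showing how such a dirt-exploration policy could be trained with an off-the-shelf PPO implementation (stable-baselines3) on a custom Gymnasium environment; the observation, action, and reward definitions are placeholders, not the paper's formulation.

# Skeleton only: PPO training for waypoint-based dirt exploration.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class DirtExploreEnv(gym.Env):
    """Toy grid world where the agent picks the next cell to inspect."""
    def __init__(self, size=10):
        super().__init__()
        self.size = size
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(size * size,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(size * size)      # next cell to visit

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.dirt = (self.np_random.random(self.size * self.size) > 0.8).astype(np.float32)
        self.visited = np.zeros_like(self.dirt)
        return self.visited.copy(), {}

    def step(self, action):
        reward = float(self.dirt[action]) - 0.05 * float(self.visited[action])
        self.visited[action] = 1.0
        done = bool(self.visited.sum() >= self.size * self.size * 0.5)
        return self.visited.copy(), reward, done, False, {}

model = PPO("MlpPolicy", DirtExploreEnv(), verbose=0)
model.learn(total_timesteps=10_000)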


Subject(s)
Robotics
14.
Sci Rep ; 11(1): 22378, 2021 11 17.
Article in English | MEDLINE | ID: mdl-34789747

ABSTRACT

Drain blockage is a crucial problem in the urban environment. It heavily affects the ecosystem and human health. Hence, routine drain inspection is essential for the urban environment. Manual drain inspection is a tedious task and prone to accidents and water-borne diseases. This work presents a drain inspection framework using a convolutional neural network (CNN)-based object detection algorithm and our in-house developed reconfigurable teleoperated robot called 'Raptor'. The CNN-based object detection model was trained using a transfer learning scheme with our custom drain-blocking-objects dataset. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials. The experimental results indicate that our trained object detection algorithm detects and classifies the drain-blocking objects with 91.42% accuracy for both offline and online test images and is able to process 18 frames per second (FPS). Further, the maneuverability of the robot was evaluated in various open and closed drain environments. The field trial results show that the robot's maneuverability was stable, and its mapping and localization are accurate in a complex drain environment.
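
A simple way to measure detector throughput (FPS) on recorded inspection footage; the detector and video path are stand-in placeholders, not the paper's trained model or data.

# Illustrative FPS measurement for an inference loop (placeholders throughout).
import time
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")    # stand-in detector
cap = cv2.VideoCapture("drain_inspection.mp4")             # assumed recorded field trial

frames, start = 0, time.perf_counter()
while frames < 200:
    ok, frame = cap.read()
    if not ok:
        break
    _ = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    frames += 1

elapsed = time.perf_counter() - start
print(f"processed {frames} frames at {frames / elapsed:.1f} FPS")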

15.
Sensors (Basel) ; 21(21)2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34770593

ABSTRACT

Human visual inspection of drains is laborious, time-consuming, and prone to accidents. This work presents an AI-enabled, robot-assisted remote drain inspection and mapping framework using our in-house developed reconfigurable robot Raptor. A four-layer Internet of Robotic Things (IoRT) architecture serves as a bridge between the users and the robots, through which seamless information sharing takes place. The Faster RCNN ResNet50, Faster RCNN ResNet101, and Faster RCNN Inception-ResNet-v2 deep learning frameworks were trained using a transfer learning scheme with six typical concrete defect classes and deployed in the IoRT framework for the remote defect detection task. The efficiency of the trained CNN algorithm and the drain inspection robot Raptor was evaluated through various real-time drain inspection field trials using the SLAM technique. The experimental results indicate that the robot's maneuverability was stable, and its mapping and localization were accurate in different drain types. Finally, for effective drain maintenance, a SLAM-based defect map was generated by fusing the defect detection results into the lidar-SLAM map.
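
A hedged sketch of the standard torchvision recipe for adapting a Faster R-CNN ResNet-50 FPN detector to six defect classes via transfer learning; the weights flag and class count are assumptions, and the training loop is omitted.

# Sketch: swap the Faster R-CNN box-predictor head for six concrete-defect classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 6 + 1        # six defect classes + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# During fine-tuning, the model expects a list of image tensors and a list of
# target dicts with "boxes" (N x 4) and "labels" (N,) per image.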


Subject(s)
Raptors , Robotics , Algorithms , Animals , Humans
16.
World J Crit Care Med ; 10(5): 244-259, 2021 Sep 09.
Article in English | MEDLINE | ID: mdl-34616660

ABSTRACT

BACKGROUND: Our understanding of the severe acute respiratory syndrome coronavirus 2 has evolved since the first reported cases in December 2019, and a greater emphasis has been placed on the hyper-inflammatory response in severely ill patients. The purpose of this study was to determine risk factors for mortality and the impact of anti-inflammatory therapies on survival. AIM: To determine the impact of various therapies on outcomes in severe coronavirus disease 2019 patients, with a focus on anti-inflammatory and immune-modulating agents. METHODS: A retrospective analysis was conducted on 261 patients admitted or transferred to the intensive care units of two community hospitals between March 12, 2020 and June 17, 2020. In total, 167 patients received glucocorticoid (GC) therapy. Seventy-three patients received GC alone, 94 received GC and tocilizumab, 28 received tocilizumab monotherapy, and 66 received no anti-inflammatory therapy. RESULTS: Patient survival was associated with GC use, either alone or with tocilizumab, and decreased vasopressor requirements. Delayed administration of GC was found to decrease the survival benefit of GC therapy. No difference in survival was found with varying anticoagulant doses, convalescent plasma, tocilizumab monotherapy, prone ventilation, hydroxychloroquine, azithromycin, or intravenous ascorbic acid use. CONCLUSION: This analysis demonstrated the survival benefit associated with anti-inflammatory GC therapy, with or without tocilizumab, with the combination providing the most benefit. More studies are needed to assess the optimal timing of anti-inflammatory therapy initiation.

17.
Sensors (Basel) ; 21(18)2021 Sep 11.
Article in English | MEDLINE | ID: mdl-34577301

ABSTRACT

During a viral outbreak, such as COVID-19, autonomously operated robots are in high demand. Robots effectively address the environmental concerns of contaminated surfaces in public spaces, such as airports, public transport areas, and hospitals, which are considered high-risk areas. Walls make up most of the indoor areas in these public spaces and can be easily contaminated. Wall cleaning and disinfection processes are therefore critical for managing and mitigating the spread of viruses. Consequently, wall cleaning robots are preferred to address these demands. A wall cleaning robot needs to maintain a close and consistent distance from a given wall during cleaning and disinfection processes. In this paper, a reconfigurable wall cleaning robot with autonomous wall following ability is proposed. The robot platform, Wasp, possesses inter-reconfigurability, which enables it to be physically reconfigured into a wall-cleaning robot. The wall following ability has been implemented using a Fuzzy Logic System (FLS). The design of the robot and the FLS are presented in the paper. The platform and the FLS are tested and validated in several test cases. The experimental outcomes validate the real-world applicability of the proposed wall following method for a wall cleaning robot.
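
A minimal rule-based wall-following controller in the spirit of a fuzzy logic system; the membership breakpoints, target distance, and gains are assumptions, not the Wasp platform's FLS.

# Simplified fuzzy-style wall following: keep a target distance from the wall.
def wall_follow_cmd(side_distance_m, target_m=0.30):
    error = side_distance_m - target_m            # +: too far, -: too close
    # crude fuzzy degrees for near / ok / far
    near = max(min(-error / 0.15, 1.0), 0.0)
    far = max(min(error / 0.15, 1.0), 0.0)
    ok = max(1.0 - near - far, 0.0)
    # rule consequents: steer toward the wall when far, away when near, straight when ok
    angular = (far * 0.4 + ok * 0.0 + near * -0.4) / (near + ok + far + 1e-9)
    return 0.15, angular                          # (linear m/s, angular rad/s)

print(wall_follow_cmd(0.42))                      # too far -> steer toward the wall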


Subject(s)
COVID-19 , Robotics , Disinfection , Fuzzy Logic , Humans , SARS-CoV-2
18.
Sensors (Basel) ; 21(18)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34577486

ABSTRACT

Staircase cleaning is a crucial and time-consuming task in the maintenance of multistory apartments and commercial buildings. There are many commercially available autonomous cleaning robots on the market for building maintenance, but few of them are designed for staircase cleaning. A key challenge in automating staircase cleaning robots is the design of an Environmental Perception System (EPS), which assists the robot in detecting and navigating staircases. This system also recognizes obstacles and debris for safe navigation and efficient cleaning while climbing the staircase. This work proposes an operational framework leveraging the vision-based EPS for the modular reconfigurable maintenance robot called sTetro. The proposed system uses an SSD MobileNet real-time object detection model to recognize staircases, obstacles, and debris. Furthermore, the model filters out false detections of staircases by fusing depth information through the use of MobileNet and an SVM. The system uses a contour detection algorithm to localize the first step of the staircase and a depth clustering scheme for obstacle and debris localization. The framework has been deployed on the sTetro robot using NVIDIA's Jetson Nano hardware and tested with multistory staircases. The experimental results show that the entire framework takes an average of 310 ms to run and achieves an accuracy of 94.32% for staircase recognition tasks and 93.81% for obstacle and debris detection tasks during real operation of the robot.
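
An illustrative OpenCV snippet for localizing a candidate first-step edge via contour detection; the Canny thresholds and the shape heuristics are assumptions, not the sTetro pipeline.

# Illustrative first-step localization via edge + contour detection (assumed thresholds).
import cv2
import numpy as np

def first_step_y(gray):
    """Return the image row of the lowest long horizontal contour (candidate first step)."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    best_y = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w > gray.shape[1] * 0.5 and h < 15:          # long, nearly horizontal contour
            if best_y is None or y > best_y:            # lowest in the image = nearest step
                best_y = y
    return best_y

img = np.zeros((480, 640), dtype=np.uint8)
img[350:352, 50:600] = 255                              # synthetic step edge
print(first_step_y(img))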


Subject(s)
Deep Learning , Form Perception , Robotics , Algorithms
19.
Sensors (Basel) ; 21(17)2021 Aug 26.
Article in English | MEDLINE | ID: mdl-34502633

ABSTRACT

Frequent inspections are essential for drains to maintain proper function and to ensure public health and safety. Robots have been developed to aid the drain inspection process. However, existing robots designed for drain inspection require improvements in their design and autonomy. This paper proposes a novel design of a drain inspection robot named Raptor. The robot has been designed with a manually reconfigurable wheel axle mechanism, which allows the ground clearance height to be changed. Design aspects of the robot, such as the mechanical design, control architecture, and autonomy functions, are comprehensively described in the paper, and insights are included. Maintaining the robot's position in the middle of a drain while moving along the drain is essential for the inspection process. Thus, a fuzzy logic controller has been introduced to the robot to cater to this demand. Experiments have been conducted by deploying a prototype of the design in drain environments considering a set of diverse test scenarios. The experimental results show that the proposed controller effectively maintains the robot in the middle of a drain while moving along the drain. Therefore, the proposed robot design and controller would be helpful in improving the productivity of robot-aided inspection of drains.
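
A simple proportional stand-in for the paper's fuzzy logic controller, balancing the left and right wall distances to keep the robot near the drain centerline; the gain and limits are assumptions.

# Assumed centering behavior: balance left/right wall distances inside the drain.
def centering_cmd(left_m, right_m, gain=1.5, max_turn=0.6):
    """Return a (linear, angular) command that steers toward the drain centerline."""
    error = left_m - right_m                 # +: closer to the right wall, steer left
    angular = max(min(gain * error, max_turn), -max_turn)
    return 0.2, angular

print(centering_cmd(0.8, 0.5))               # off-center -> corrective turn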


Subject(s)
Raptors , Robotics , Animals , Fuzzy Logic
20.
Sensors (Basel) ; 21(15)2021 Jul 30.
Article in English | MEDLINE | ID: mdl-34372408

ABSTRACT

False-ceiling inspection is a critical factor in pest-control management within built infrastructure. Conventionally, false-ceiling inspection is done manually, which is time-consuming and unsafe. A lightweight robot is considered a good solution for automated false-ceiling inspection. However, due to the constraints imposed by the low load-carrying capacity and brittleness of false ceilings, inspection robots cannot rely upon heavy batteries, sensors, and computation payloads to enhance task performance. Hence, the inspection strategy has to ensure efficiency and the best performance. This work presents an optimal functional footprint approach for the robot to maximize the efficiency of an inspection task. With a conventional footprint approach in path planning, complete coverage inspection may become inefficient. In this work, the camera installation parameters are considered as the footprint-defining parameters for false-ceiling inspection. An evolutionary algorithm-based multi-objective optimization framework is utilized to derive the optimal robot footprint by minimizing the missed area and the path length taken for the inspection task. The effectiveness of the proposed approach is analyzed using numerical simulations. The results are validated on an in-house developed false-ceiling inspection robot, Raptor, through experimental trials on a false-ceiling test-bed.
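
A toy evaluation of candidate camera-installation parameters against the two competing objectives (missed area and path length) with Pareto-front extraction; both objective models are crude placeholders, not the paper's simulation, and random sampling stands in for the evolutionary search.

# Toy multi-objective evaluation of camera footprint parameters (illustrative).
import numpy as np

rng = np.random.default_rng(2)
candidates = rng.uniform([0.1, 10.0], [0.6, 80.0], size=(200, 2))   # (height m, tilt deg)

def objectives(params):
    height, tilt = params
    footprint = height * np.tan(np.radians(tilt))        # rough footprint length
    missed_area = 1.0 / (footprint + 0.05)               # smaller footprint -> more misses
    path_length = footprint * 4.0 + height * 2.0         # larger footprint -> longer sweeps
    return missed_area, path_length

scores = np.array([objectives(c) for c in candidates])

def pareto_front(points):
    """Indices of non-dominated points (minimize both objectives)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

front = pareto_front(scores)
print(f"{len(front)} Pareto-optimal footprint settings out of {len(candidates)}")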


Subject(s)
Robotics , Algorithms