Results 1 - 6 of 6
1.
Nat Commun ; 15(1): 4004, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734697

ABSTRACT

The current thyroid ultrasound relies heavily on the experience and skills of the sonographer and the expertise of the radiologist, and the process is physically and cognitively exhausting. In this paper, we report a fully autonomous robotic ultrasound system, which is able to scan thyroid regions without human assistance and identify malignant nodules. In this system, human skeleton point recognition, reinforcement learning, and force feedback are used to deal with the difficulties in locating thyroid targets. The orientation of the ultrasound probe is adjusted dynamically via Bayesian optimization. Experimental results on human participants demonstrated that this system can perform high-quality ultrasound scans, close to manual scans obtained by clinicians. Additionally, it has the potential to detect thyroid nodules and provide data on nodule characteristics for American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) calculation.
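The abstract does not specify how the Bayesian optimization of the probe orientation is set up, so the following is only a minimal sketch of the general idea: a Gaussian-process surrogate with a GP-UCB acquisition rule searching over two hypothetical orientation angles (pitch, roll). The quality_score function, the angle ranges, and all hyperparameters are illustrative assumptions, not the paper's implementation.

import numpy as np

def quality_score(angles):
    """Hypothetical image-quality feedback; peaks near pitch = 5 deg, roll = -10 deg."""
    pitch, roll = angles
    return np.exp(-((pitch - 5.0) ** 2 + (roll + 10.0) ** 2) / 200.0)

def rbf_kernel(A, B, length=10.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Gaussian-process mean and standard deviation at the candidate orientations Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v ** 2).sum(0)          # k(x, x) = 1 for this RBF kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(0)
grid = np.linspace(-30.0, 30.0, 41)       # candidate pitch/roll angles in degrees
candidates = np.stack(np.meshgrid(grid, grid), -1).reshape(-1, 2)
X = candidates[rng.choice(len(candidates), 3, replace=False)]   # initial random probes
y = np.array([quality_score(x) for x in X])

for _ in range(15):                       # GP-UCB acquisition loop
    mu, sigma = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(mu + 2.0 * sigma)]
    X = np.vstack([X, x_next])
    y = np.append(y, quality_score(x_next))

best = X[np.argmax(y)]
print(f"best orientation found: pitch = {best[0]:.1f} deg, roll = {best[1]:.1f} deg")

In a real system the synthetic quality_score would be replaced by the image-quality metric measured after physically reorienting the probe.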


Subject(s)
Robotics , Thyroid Gland , Thyroid Nodule , Ultrasonography , Humans , Thyroid Gland/diagnostic imaging , Ultrasonography/methods , Ultrasonography/instrumentation , Robotics/methods , Robotics/instrumentation , Thyroid Nodule/diagnostic imaging , Thyroid Nodule/pathology , Bayes Theorem , Female , Adult , Male , Thyroid Neoplasms/diagnostic imaging
2.
IEEE Trans Cybern ; 53(7): 4175-4188, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35171785

ABSTRACT

Existing driving fatigue detection methods rarely consider how to effectively fuse the advantages of electroencephalogram (EEG) and electrocardiogram (ECG) signals to enhance detection performance under noise conditions. To address this issue, this article proposes a new deep learning (DL) framework based on EEG and ECG, called the product fuzzy convolutional network (PFCN). Notably, this article is the first to investigate how to fuse EEG and ECG signals for driving fatigue detection under noise conditions in both simulated and real-field driving environments. The PFCN comprises three subnetworks. The first uses a fuzzy neural network (FNN) with feedback and a product layer, effectively capturing the particularity and temporal variation of high-dimensional EEG signals while reducing the time-space complexity. The second uses a 1-D convolution to convert the ECG data into feature sequences, providing high accuracy and low computational complexity in ECG classification. The third applies a fusion-separation mechanism to effectively fuse the extracted ECG and EEG features, suppressing noise interference and ensuring higher detection accuracy. To evaluate the performance of the PFCN, a series of experiments was conducted in both simulated and real-field driving environments. The results indicate that the proposed PFCN model offers better robustness and detection accuracy than several mainstream fatigue detection models.
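The PFCN architecture itself is not reproduced here, so the following is only a minimal two-branch sketch of the fusion idea, assuming PyTorch, a 32-channel EEG window, and a single-channel ECG window. The GRU branch merely stands in for the paper's fuzzy EEG subnetwork, and none of the layer sizes or channel counts come from the article.

import torch
import torch.nn as nn

class EegEcgFusionNet(nn.Module):
    def __init__(self, eeg_channels=32, ecg_channels=1, hidden=64):
        super().__init__()
        # EEG branch: a GRU over time captures temporal variation of multichannel EEG
        # (a simplified stand-in for the fuzzy neural subnetwork).
        self.eeg_rnn = nn.GRU(eeg_channels, hidden, batch_first=True)
        # ECG branch: 1-D convolutions turn the raw ECG window into a feature vector.
        self.ecg_conv = nn.Sequential(
            nn.Conv1d(ecg_channels, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, hidden, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Fusion head: concatenate branch features and classify fatigued vs. alert.
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 2))

    def forward(self, eeg, ecg):
        # eeg: (batch, time, eeg_channels); ecg: (batch, ecg_channels, time)
        _, h = self.eeg_rnn(eeg)                  # h: (1, batch, hidden)
        eeg_feat = h.squeeze(0)
        ecg_feat = self.ecg_conv(ecg).squeeze(-1)
        return self.head(torch.cat([eeg_feat, ecg_feat], dim=1))

model = EegEcgFusionNet()
eeg = torch.randn(8, 500, 32)    # 8 windows, 500 samples, 32 EEG channels
ecg = torch.randn(8, 1, 2000)    # 8 windows, 1 ECG channel, 2000 samples
logits = model(eeg, ecg)
print(logits.shape)              # torch.Size([8, 2])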


Subject(s)
Electroencephalography , Neural Networks, Computer , Electroencephalography/methods , Electrocardiography
3.
Front Robot AI ; 5: 125, 2018.
Article in English | MEDLINE | ID: mdl-33501004

ABSTRACT

With the development of Industry 4.0, cooperation between robots and people is increasing, so human-machine safety is the first problem that must be solved. In this paper, we propose a novel active collision avoidance methodology to safeguard a human who enters the robot's workspace. With conventional obstacle-avoidance approaches, it is difficult for robots and humans to work safely in a shared unstructured environment because the robot lacks awareness of the human. In our system, a Kinect monitors the robot's workspace and detects anyone who enters it. Once a person enters the workspace, the Kinect detects the human and computes the human skeleton in real time. Because the measurement errors grow over time, owing to tracking error and device noise, an Unscented Kalman Filter (UKF) is used to estimate the positions of the skeleton points. An expert system then estimates the behavior of the human, and the robot avoids the human by taking different measures, such as stopping, bypassing the human, or moving away. When the robot needs to bypass the human in real time, an artificial potential field method generates a new path for the robot. With this active collision avoidance, the system ensures that the robot cannot touch the human. The advantage of the proposed system is that it first detects the human, then analyzes the human's motion, and finally safeguards the human. We experimentally tested the active collision avoidance system in real-world applications; the results indicate that it can effectively ensure human safety.
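The abstract names the artificial potential field method for replanning around the detected person. Below is a minimal 2-D sketch of that idea, with the human reduced to a single tracked skeleton point; the gains, influence distance, step size, and workspace coordinates are purely illustrative assumptions, not the paper's parameters.

import numpy as np

def apf_step(pos, goal, obstacle, k_att=1.0, k_rep=2.0, rho0=0.6, step=0.05):
    """One step along the combined attractive + repulsive potential-field force."""
    f_att = k_att * (goal - pos)                      # pull toward the goal
    d = np.linalg.norm(pos - obstacle)
    f_rep = np.zeros(2)
    if d < rho0:                                      # push away only near the human
        f_rep = k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - obstacle) / d
    force = f_att + f_rep
    move = min(step, np.linalg.norm(goal - pos))      # fixed-length step, capped at goal
    return pos + move * force / (np.linalg.norm(force) + 1e-9)

pos = np.array([0.0, 0.0])                            # current end-effector position (m)
goal = np.array([1.0, 1.0])                           # original target
human = np.array([0.5, 0.55])                         # e.g. a tracked skeleton point
path = [pos]
for _ in range(400):
    pos = apf_step(pos, goal, human)
    path.append(pos)
    if np.linalg.norm(pos - goal) < 0.02:
        break

closest = min(np.linalg.norm(p - human) for p in path)
print(f"final position {np.round(pos, 3)} after {len(path) - 1} steps; "
      f"closest approach to the human: {closest:.2f} m")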

4.
ScientificWorldJournal ; 2014: 692165, 2014.
Article in English | MEDLINE | ID: mdl-24757430

ABSTRACT

This paper presents a human-robot interface system that incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling a robot manipulator. The system employs a 3D camera (Kinect) to determine the orientation and translation of the human hand. The Camshift algorithm is used to track the hand, and the PF is used to estimate the translation of the hand. Although a PF is used for estimating the translation, the translation error grows quickly whenever the sensors fail to detect the hand motion, so a method to correct the translation error is required. Moreover, owing to perceptive and motor limitations, it is difficult for the human operator to carry out high-precision operations. This paper therefore proposes an adaptive multispace transformation (AMT) method that assists the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.
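The abstract combines Camshift tracking with a particle filter for the hand translation. Below is a minimal particle-filter sketch for that translation estimate alone, assuming a random-walk motion model, a Gaussian measurement model, and a simulated hand trajectory in place of real Kinect data; the noise levels are illustrative, not the paper's.

import numpy as np

rng = np.random.default_rng(1)
n_particles = 500
particles = rng.normal(0.0, 0.05, size=(n_particles, 3))    # initial guess near origin
weights = np.full(n_particles, 1.0 / n_particles)

def pf_update(particles, weights, z, motion_std=0.01, meas_std=0.02):
    """One predict/update/resample cycle given measurement z (3-D position in metres)."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # predict
    d2 = ((particles - z) ** 2).sum(axis=1)
    weights = np.exp(-0.5 * d2 / meas_std**2)                             # update
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)           # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Simulated hand moving along x at 1 cm per frame, observed with Kinect-like noise.
true_pos = np.zeros(3)
for frame in range(100):
    true_pos = true_pos + np.array([0.01, 0.0, 0.0])
    z = true_pos + rng.normal(0.0, 0.02, 3)
    particles, weights = pf_update(particles, weights, z)
    estimate = particles.mean(axis=0)

print(f"true x = {true_pos[0]:.3f} m, estimated x = {estimate[0]:.3f} m")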


Subject(s)
Man-Machine Systems , Algorithms , Hand/physiology , Humans , Robotics
5.
ScientificWorldJournal ; 2014: 897242, 2014.
Article in English | MEDLINE | ID: mdl-24693252

ABSTRACT

This paper proposes a novel spatial-motion-constraints virtual fixtures (VFs) method for human-machine collaborative interfaces. In our method, two 3D flexible VFs are introduced, a warning pipe and a safe pipe, along with a potential-collision-detection method based on these two flexible VFs. The safe pipe dynamically constructs a safe workspace for the robot, which makes it possible to detect potential collisions between the robot and obstacles. By calculating the speed and acceleration of the robot end-effector (EE), the warning pipe adjusts its radius to detect the deviation of the EE from the reference path. These spatial constraints serve as constraint conditions for constrained robot control. The approach enables multi-obstacle manipulation tasks for a telerobot in a precise interactive teleoperation environment. We illustrate our approach on a teleoperative manipulation task and analyze the performance results. The performance-comparison experiments demonstrate that the control mode employing our method assists the operator more precisely in teleoperative tasks. Owing to properties such as collision avoidance and safety, operators can complete tasks more efficiently and with less operating tension.
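The abstract describes a warning pipe whose radius adapts to the end-effector speed and acceleration. The sketch below checks such a pipe around a straight reference path; the radius law, gains, sampling rate, and the simulated end-effector trajectory are all assumptions for illustration and are not the paper's constraint formulation.

import numpy as np

def distance_to_polyline(p, path):
    """Shortest distance from point p to a piecewise-linear reference path."""
    best = np.inf
    for a, b in zip(path[:-1], path[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(p - (a + t * ab)))
    return best

def warning_radius(speed, accel, r_min=0.02, k_v=0.02, k_a=0.005):
    """Assumed pipe-radius law (m): widen the pipe for fast or accelerating motion."""
    return r_min + k_v * abs(speed) + k_a * abs(accel)

# Straight reference path from (0,0,0) to (1,0,0); end-effector sampled at 100 Hz
# with a deliberate sinusoidal deviation from the path.
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dt = 0.01
t = np.linspace(0.0, 1.0, 101)
ee_positions = np.stack([t, 0.1 * np.sin(4 * t), np.zeros_like(t)], axis=1)

velocities = np.gradient(ee_positions, dt, axis=0)
accelerations = np.gradient(velocities, dt, axis=0)

flags = []
for p, v, a in zip(ee_positions, velocities, accelerations):
    r = warning_radius(np.linalg.norm(v), np.linalg.norm(a))
    d = distance_to_polyline(p, reference)
    if d > r:                                   # end-effector has left the warning pipe
        flags.append((p[0], d, r))

print(f"{len(flags)} of {len(ee_positions)} samples outside the warning pipe")
if flags:
    x, d, r = max(flags, key=lambda f: f[1] - f[2])
    print(f"largest deviation near x = {x:.2f} m: {d * 100:.1f} cm vs. pipe radius {r * 100:.1f} cm")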


Subject(s)
Robotics/instrumentation , Telecommunications/instrumentation , Robotics/methods
6.
ScientificWorldJournal ; 2013: 139738, 2013.
Article in English | MEDLINE | ID: mdl-24302854

ABSTRACT

Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires the IMU to be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that combines the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU; an Extended Kalman Filter (EKF) is then used to estimate the kinematic parameter errors. The proposed orientation estimation method improves the reliability and accuracy of determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not require complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method offers better accuracy, convenience, and effectiveness than vision-based methods.
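The abstract fuses an FQA-derived absolute orientation with gyroscope data through a Kalman filter before the EKF calibration step. The sketch below reduces that fusion to a single axis for clarity: the gyroscope rate drives the prediction and a noisy absolute angle stands in for the FQA output in the update; the state is [angle, gyro bias], and all noise levels and the simulated motion are illustrative assumptions.

import numpy as np

dt = 0.01
F = np.array([[1.0, -dt], [0.0, 1.0]])       # angle integrates (gyro rate - bias) * dt
B = np.array([dt, 0.0])
Q = np.diag([1e-5, 1e-7])                    # process noise: integration + bias drift
R = np.array([[4e-4]])                       # absolute-angle (FQA-style) noise, rad^2
H = np.array([[1.0, 0.0]])

x = np.zeros(2)                              # [angle estimate, gyro bias estimate]
P = np.eye(2) * 0.1

rng = np.random.default_rng(2)
true_angle, true_bias = 0.0, 0.03            # rad, rad/s
for k in range(1000):
    true_rate = 0.5 * np.sin(0.01 * k)       # simulated joint motion (rad/s)
    true_angle += true_rate * dt
    gyro = true_rate + true_bias + rng.normal(0, 0.01)
    fqa_angle = true_angle + rng.normal(0, 0.02)

    # Predict with the gyro reading, then correct with the absolute angle.
    x = F @ x + B * gyro
    P = F @ P @ F.T + Q
    y = fqa_angle - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"true angle {true_angle:.3f} rad, estimated {x[0]:.3f} rad, "
      f"estimated gyro bias {x[1]:.3f} rad/s (true {true_bias})")

The same predict/correct structure, with the state extended to the kinematic parameter errors and the measurement model linearized, is what the EKF calibration step builds on.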


Subject(s)
Robotics/instrumentation , Robotics/methods , Acceleration , Algorithms , Biomechanical Phenomena , Calibration , Computer Systems , Equipment Design , Models, Statistical , Online Systems , Reproducibility of Results , Software