Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-38598406

ABSTRACT

Autonomous Ultrasound Image Quality Assessment (US-IQA) is a promising tool to aid interpretation by practicing sonographers and to enable the future robotization of ultrasound procedures. However, autonomous US-IQA faces several challenges. Ultrasound images contain many spurious artifacts, such as noise due to handheld probe positioning, errors in the selection of probe parameters, and patient respiration during the procedure. Further, these images are highly variable in appearance with respect to individual patient physiology. We propose a deep Convolutional Neural Network (CNN), USQNet, which utilizes a Multi-scale and Local-to-Global Second-order Pooling (MS-L2GSoP) classifier to conduct a sonographer-like assessment of image quality. The classifier first extracts features at multiple scales to encode inter-patient anatomical variations, similar to a sonographer's understanding of anatomy. It then applies second-order pooling in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of multi-scale structural and multi-region textural features. The L2GSoP captures higher-order relationships between different spatial locations and provides the seed for correlating local patches, much as a sonographer prioritizes regions across the image. We experimentally validated USQNet on a new dataset of human urinary bladder ultrasound images. The validation involved first a subjective assessment against experienced radiologists' annotations, and then a comparison with state-of-the-art CNNs for US-IQA and with ablated counterparts. The results demonstrate that USQNet achieves a remarkable accuracy of 92.4% and outperforms the SOTA models by 3-14% while requiring comparable computation time.
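The abstract describes second-order pooling only at a high level. As a generic illustration of the underlying operation (not USQNet's exact MS-L2GSoP module), covariance pooling of a CNN feature map can be sketched as:

```python
import numpy as np

def second_order_pool(features):
    """Covariance (second-order) pooling of a C x H x W feature map.

    Returns the upper triangle of the channel covariance matrix as a
    fixed-length descriptor, independent of the spatial size H x W.
    """
    c, h, w = features.shape
    x = features.reshape(c, h * w)            # C x N spatial samples
    x = x - x.mean(axis=1, keepdims=True)     # center each channel
    cov = (x @ x.T) / (h * w - 1)             # C x C covariance
    iu = np.triu_indices(c)                   # covariance is symmetric,
    return cov[iu]                            # so keep only the upper half

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 16, 16))       # toy feature map
desc = second_order_pool(fmap)
print(desc.shape)                             # (36,) = 8 * 9 / 2
```

The key property is that the descriptor length depends only on the channel count, which is what lets such a layer sit between convolutional stages of differing spatial resolution.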

2.
Mil Med ; 188(Suppl 6): 412-419, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948233

ABSTRACT

INTRODUCTION: Remote military operations require rapid response times for effective relief and critical care. Yet the military theater operates under austere conditions, so communication links are unreliable and subject to physical and virtual attacks and degradation at unpredictable times. Immediate medical care at these austere locations requires semi-autonomous teleoperated systems, which enable the completion of medical procedures even under interrupted networks while isolating the medics from the dangers of the battlefield. However, to achieve autonomy for complex surgical and critical care procedures, robots require extensive programming or massive libraries of surgical skill demonstrations to learn effective policies using machine learning algorithms. Although such datasets are achievable for simple tasks, providing a large number of demonstrations for surgical maneuvers is not practical. This article presents a method for learning from demonstration, combining knowledge from demonstrations to eliminate reward shaping in reinforcement learning (RL). In addition to reducing the data required for training, the self-supervised nature of RL, in conjunction with expert knowledge-driven rewards, produces more generalizable policies tolerant to dynamic environment changes. A multimodal representation for interaction enables learning complex contact-rich surgical maneuvers. The effectiveness of the approach is shown using the cricothyroidotomy task, a standard critical care procedure for opening the airway. We also provide a method for segmenting the teleoperator's demonstration into subtasks and classifying the subtasks using sequence modeling. MATERIALS AND METHODS: A database of demonstrations for the cricothyroidotomy task was collected, comprising six fundamental maneuvers referred to as surgemes. The dataset was collected by teleoperating a collaborative robotic platform, SuperBaxter, with modified surgical grippers.
Then, two learning models were developed for processing the dataset: one for automatic segmentation of the task demonstrations into a sequence of surgemes and another for classifying each segment into labeled surgemes. Finally, a multimodal off-policy RL with rewards learned from demonstrations was developed to learn surgeme execution from these demonstrations. RESULTS: The task segmentation model has an accuracy of 98.2%. The surgeme classification model using the proposed interaction features achieved a classification accuracy of 96.25% averaged across all surgemes, compared to 87.08% without these features and 85.4% using a support vector machine classifier. Finally, the robot execution achieved a task success rate of 93.5%, compared to baselines of behavioral cloning (78.3%) and a twin-delayed deep deterministic policy gradient with shaped rewards (82.6%). CONCLUSIONS: Results indicate that the proposed interaction features for the segmentation and classification of surgical tasks improve classification accuracy. The proposed method for learning surgemes from demonstrations exceeds popular methods for skill learning. The effectiveness of the proposed approach demonstrates the potential for future remote telemedicine on battlefields.
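The abstract does not detail the segmentation and classification models. A deliberately simplified sketch of the two-stage idea (segment a demonstration signal, then label each segment) might look like the following, where the changepoint threshold and centroid classifier are toy stand-ins for the paper's learned models:

```python
import numpy as np

def segment_by_changepoints(signal, thresh):
    """Split a 1-D interaction signal into segments wherever the
    frame-to-frame change exceeds `thresh` (a stand-in for the
    learned segmentation model)."""
    jumps = np.flatnonzero(np.abs(np.diff(signal)) > thresh) + 1
    bounds = [0, *jumps, len(signal)]
    return [signal[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

def classify_segment(seg, centroids):
    """Assign a segment to the nearest surgeme centroid by mean value
    (a toy stand-in for the sequence-modeling classifier)."""
    m = seg.mean()
    return int(np.argmin([abs(m - c) for c in centroids]))

sig = np.array([0.1, 0.2, 0.1, 2.0, 2.1, 2.2, 0.0, 0.1])
segs = segment_by_changepoints(sig, thresh=1.0)
labels = [classify_segment(s, centroids=[0.0, 2.0]) for s in segs]
print(len(segs), labels)                       # 3 [0, 1, 0]
```

The real pipeline operates on multimodal interaction features rather than a scalar signal, but the segment-then-classify structure is the same.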


Subjects
Robotics , Surgery, Computer-Assisted , Humans , Robotics/methods , Algorithms , Surgery, Computer-Assisted/methods , Machine Learning
3.
J Anim Sci ; 101, 2023 Jan 03.
Article in English | MEDLINE | ID: mdl-37335911

ABSTRACT

Precision livestock farming (PLF) offers a strategic solution to enhance the management capacity of large animal groups while simultaneously improving profitability and efficiency and minimizing the environmental impacts associated with livestock production systems. Additionally, PLF contributes to optimizing the ability to manage and monitor animal welfare while providing solutions to the global grand challenges posed by the growing demand for animal products and the need to ensure global food security. By harnessing technological advancements to return to a "per animal" approach, PLF enables cost-effective, individualized care for animals through enhanced monitoring and control capabilities within complex farming systems. Meeting the nutritional requirements of a global population approaching ten billion people will likely depend on animal proteins for decades to come. The development and application of digital technologies are critical to facilitate the responsible and sustainable intensification of livestock production over the next several decades and to maximize the potential benefits of PLF. Real-time continuous monitoring of each animal is expected to enable more precise and accurate tracking and management of health and well-being. Importantly, the digitalization of agriculture is expected to provide the collateral benefits of ensuring auditability in value chains while assuaging concerns associated with labor shortages. Despite notable advances in PLF technology adoption, a number of critical concerns currently limit the viability of these state-of-the-art technologies. The potential benefits of PLF for livestock management systems, enabled by autonomous continuous monitoring and environmental control, can be rapidly enhanced through an Internet of Things approach to monitoring and (where appropriate) closed-loop management.
In this paper, we analyze the multilayered network of sensors, actuators, communication, networking, and analytics currently used in PLF, focusing on dairy farming as an illustrative example. We explore the current state of the art, identify key shortcomings, and propose potential solutions to bridge the gap between technology and animal agriculture. Additionally, we examine the potential implications of advancements in communication, robotics, and artificial intelligence on the health, security, and welfare of animals.
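The closed-loop management the abstract mentions can be pictured with a minimal sketch: a proportional controller that maps a barn temperature reading to a ventilation fan duty cycle. All names, setpoints, and gains here are illustrative assumptions, not values from the paper.

```python
def ventilation_setpoint(temp_c, target_c=20.0, gain=0.15, max_duty=1.0):
    """Proportional closed-loop control: fan duty cycle grows linearly
    with the temperature error, clipped to [0, max_duty]."""
    error = temp_c - target_c
    return min(max_duty, max(0.0, gain * error))

# sensor reading -> actuator command, the basic PLF feedback loop
for reading in (18.0, 22.0, 30.0):
    print(reading, round(ventilation_setpoint(reading), 2))
```

In an IoT deployment, the reading would arrive over the farm network and the command would go back out to an actuator node; the loop itself stays this simple.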


Precision technologies are revolutionizing animal agriculture by enhancing the management of animal welfare and productivity. To fully realize the potential benefits of precision livestock farming (PLF), the development and application of digital technologies are needed to facilitate the responsible and sustainable intensification of livestock production over the next several decades. Importantly, the digitalization of agriculture is expected to provide the collateral benefits of ensuring auditability in value chains while assuaging concerns associated with labor shortages. In this paper, we analyze the multilayered network of sensors, actuators, communication, and analytics currently in use in PLF. We examine the various aspects of sensing, communication, networking, and intelligence on the farm, leveraging dairy farms as an example system. We also discuss the potential implications of advancements in communication, robotics, and artificial intelligence on the security and welfare of animals.


Subjects
Animal Husbandry , Artificial Intelligence , Animals , Agriculture , Farms , Livestock , Technology
4.
IEEE Trans Biomed Eng ; 70(4): 1219-1230, 2023 04.
Article in English | MEDLINE | ID: mdl-36215341

ABSTRACT

The growing ubiquity of sensors in and around the environment has ushered in the age of smart animal agriculture, which has the potential to greatly improve animal health and productivity. The data gathered from sensors dwelling in animal agriculture settings have made farms part of the IoT space, leading to active research in developing efficient communication methodologies for farm networks. This study focuses on the first hop of farm networks, where data from inside the body of animals are communicated to a node dwelling outside the body. Novel experimental methods are used to calculate the channel loss at sub-GHz frequencies (100-900 MHz) to characterize the in-body to out-of-body (IBOB) communication channel in large animals. A first-of-its-kind 3D bovine model is built with computer vision techniques, capturing detailed morphological features of the animal body, to perform Finite Element Method (FEM) based electromagnetic simulations. The simulation results are experimentally validated to build a complete channel modeling methodology for IBOB animal body communication. The 3D bovine model is made publicly available on GitHub. The results illustrate that an IBOB communication channel is realizable from the rumen to the collar of ruminants with [Formula: see text] path loss at sub-GHz frequencies, making communication feasible. The developed methodology is illustrated for ruminants but can also be used for other IBOB studies. An efficient communication architecture can be formed using the illustrated channel modeling technique, paving the way for the design and development of future smart animal agriculture systems.
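The measured path-loss figure is elided in this extract ("[Formula: see text]"). As a generic illustration of how such a channel is often summarized, here is a log-distance path-loss model; the reference distance and exponent below are assumptions for lossy tissue, not the paper's measurements.

```python
import math

def path_loss_db(d_m, f_hz, d0_m=0.1, n=3.5):
    """Log-distance path-loss model: free-space loss at a reference
    distance d0 plus a distance-dependent term with exponent n.
    n = 3.5 is an illustrative value for a lossy in-body channel,
    not the exponent measured in the paper."""
    c = 3e8                                           # speed of light (m/s)
    fspl_d0 = 20 * math.log10(4 * math.pi * d0_m * f_hz / c)
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

# loss over a rumen-to-collar-scale distance at the studied band edges
for f in (100e6, 400e6, 900e6):
    print(int(f / 1e6), "MHz:", round(path_loss_db(0.5, f), 1), "dB")
```

Fitting `n` and `d0` to measured data is what turns this template into a usable link budget for the first hop of a farm network.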


Subjects
Agriculture , Ruminants , Cattle , Animals , Communication , Research Design
5.
J Dairy Sci ; 105(8): 6379-6404, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35773034

ABSTRACT

Quantifying digestive and fermentative processes within the rumen environment has been the subject of decades of research; however, our existing research methodologies preclude time-sensitive and spatially explicit investigation of this system. To better understand the temporal and spatial dynamics of the rumen environment, real-time and in situ monitoring of various chemical and physical parameters in the rumen through implantable microsensor technologies is a practical solution. Moreover, such sensors could contribute to the next generation of precision livestock farming, provided sufficient wireless data networking and computing systems are incorporated. In this review, various microsensor technologies applicable to real-time metabolic monitoring for ruminants are introduced, including the detection of parameters for rumen metabolism, such as pH, temperature, histamine concentrations, and volatile fatty acid concentrations. The working mechanisms and requirements of the sensors are summarized with respect to the selected target parameters. Lastly, future challenges and perspectives of this research field are discussed.


Subjects
Rumen , Ruminants , Animals , Farms , Fatty Acids, Volatile/metabolism , Livestock , Rumen/metabolism
6.
Rob Auton Syst ; 147: 103919, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34703078

ABSTRACT

Coexisting with the current COVID-19 pandemic is a global reality that comes with unique challenges impacting daily interactions, business, and facility maintenance. An accompanying monumental challenge is the continuous and effective disinfection of shared spaces, such as office/school buildings, elevators, classrooms, and cafeterias. Although ultraviolet light and chemical sprays are routine for indoor disinfection, they irritate humans and hence can only be used when the facility is unoccupied. Stationary air filtration systems, while irritation-free and commonly available, fail to protect all occupants due to limitations in air circulation and diffusion. Hence, we present a novel collaborative robot (cobot) disinfection system equipped with a Bernoulli Air Filtration Module, designed to minimize disturbance to the surrounding airflow while remaining maneuverable among occupants for maximum coverage. The influence of robotic air filtration on the dosage at neighbors of a coughing source is analyzed with derivations from a Computational Fluid Dynamics (CFD) simulation. Based on this analysis, a novel occupant-centric online rerouting algorithm decides the path of the robot. The rerouting ensures effective air filtration that minimizes the risk to occupants under their detected layout. The proposed system was tested on a 2 × 3 seating grid (empty seats allowed) in a classroom, with the worst-case dosage across all occupants chosen as the metric. The system reduced the worst-case dosage by 26% compared to a stationary air filtration system with the same flow rate, and by 19% compared to a robotic air filtration system that traverses all the seats but without occupant-centric planning of its path. Hence, we validated the effectiveness of the proposed robotic air filtration system.
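The occupant-centric rerouting idea can be illustrated with a toy greedy planner that always serves the occupant with the current worst-case dosage. The paper's actual algorithm is CFD-informed and more sophisticated; treat this purely as a sketch with made-up seat labels and a fixed filtration factor.

```python
def plan_route(dosage, visits, reduction=0.5):
    """Greedy occupant-centric rerouting: at each step, visit the
    occupant with the highest accumulated dosage and reduce it by a
    fixed filtration factor. Minimizing the worst case (the max),
    rather than the total, mirrors the paper's evaluation metric."""
    d = dict(dosage)
    route = []
    for _ in range(visits):
        worst = max(d, key=d.get)
        route.append(worst)
        d[worst] *= (1 - reduction)
    return route, max(d.values())

route, worst = plan_route({"A1": 1.0, "A2": 0.4, "B1": 0.7}, visits=3)
print(route, round(worst, 2))                  # ['A1', 'B1', 'A1'] 0.4
```

Note how the planner revisits seat A1 instead of covering every seat once: minimizing the maximum dosage is not the same objective as uniform coverage, which is exactly why the occupant-centric path beat the traverse-all-seats baseline.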

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 7570-7573, 2021 11.
Article in English | MEDLINE | ID: mdl-34892842

ABSTRACT

Continuous real-time health monitoring in animals is essential for ensuring animal welfare. In ruminants like cows, rumen health is closely intertwined with overall animal health, so in-situ monitoring of rumen health is critical. However, this demands in-body to out-of-body communication of sensor data. In this paper, we devise a channel modeling method for a cow using experiments and FEM-based simulations at 400 MHz. This technique can be further employed across all frequencies to characterize the communication channel for the development of a channel architecture that efficiently exploits its properties.


Subjects
Rumen , Ruminants , Agriculture , Animals , Cattle , Communication , Female
8.
Mil Med ; 186(Suppl 1): 288-294, 2021 01 25.
Article in English | MEDLINE | ID: mdl-33499518

ABSTRACT

INTRODUCTION: Short response time is critical for future military medical operations in austere settings or remote areas. Such effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers collected with the goal of training machine learning algorithms. Although this is attainable in controlled settings, obtaining surgical data in austere settings can be difficult. Hence, in this article, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg transfer task was selected as it is one of the six main tasks of laparoscopic training. In addition, we provide a machine learning framework to evaluate novel transfer learning methodologies on this database. METHODS: A set of surgical gestures was collected for a peg transfer task, composed of seven atomic maneuvers referred to as surgemes. The collected Dexterous Surgical Skill dataset comprises a set of surgical robotic skills using four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. Then, we explored two different learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain, whereas in the domain-transfer scenario, the training data are a blend of simulated and real robot data, which is tested on a real robot. RESULTS: Using simulation data to train the learning algorithms enhances performance on the real robot where limited or no real data are available. The transfer model showed an accuracy of 81% for the YuMi robot when the ratio of real-to-simulated data was 22% to 78%. For the Taurus II and the da Vinci, the model showed accuracies of 97.5% and 93%, respectively, training only with simulation data.
CONCLUSIONS: The results indicate that simulation can be used to augment training data to enhance the performance of learned models in real scenarios. This shows potential for the future use of surgical data from the operating room in deployable surgical robots in remote areas.
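The domain-transfer scenario blends simulated and real demonstrations at a fixed ratio, such as the 22% real to 78% simulated split reported for the YuMi robot. A minimal sketch of such a blend, with hypothetical helper names not taken from the paper:

```python
import random

def blend_training_set(real, simulated, real_ratio):
    """Mix real and simulated demonstrations at a given proportion.
    Sampling with replacement keeps the requested ratio even when one
    pool (typically the real data) is small."""
    n = len(real) + len(simulated)
    n_real = round(n * real_ratio)
    batch = random.choices(real, k=n_real) + \
            random.choices(simulated, k=n - n_real)
    random.shuffle(batch)
    return batch

real = [("real", i) for i in range(5)]         # scarce real demos
sim = [("sim", i) for i in range(20)]          # plentiful simulated demos
batch = blend_training_set(real, sim, real_ratio=0.22)
print(len(batch))                              # 25
```

Oversampling the scarce real pool while filling the rest with simulation is the standard way to exploit cheap simulated data when real demonstrations are limited, which is the core finding of this article.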


Subjects
Robotics , Clinical Competence , Computer Simulation , Humans , Laparoscopy , Machine Learning
9.
Exp Brain Res ; 238(3): 537-550, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31974755

ABSTRACT

Electroencephalography (EEG) activity in the mu frequency band (8-13 Hz) is suppressed during both gesture performance and observation. However, it is not clear if or how particular characteristics within the kinematic execution of gestures map onto dynamic changes in mu activity. Mapping the time course of gesture kinematics onto that of mu activity could help identify which aspects of gestures capture attention and aid in the classification of communicative intent. In this work, we test whether the timing of inflection points within gesture kinematics predicts the occurrence of oscillatory mu activity during passive gesture observation. The timing of salient features of performed gestures in video stimuli was determined by isolating inflection points in the hands' motion trajectories. Participants passively viewed the gesture videos while continuous EEG data were collected. We used wavelet analysis to extract mu oscillations at 11 Hz at central and occipital electrodes. We used linear regression to test for associations between the timing of inflection points in motion trajectories and mu oscillations that generalized across gesture stimuli. Separately, we also tested whether inflection point occurrences evoked mu/alpha responses that generalized across participants. Across all gestures and inflection points, and pooled across participants, peaks in 11 Hz EEG waveforms were detected 465 and 535 ms after inflection points at occipital and central electrodes, respectively. A regression model showed that inflection points in the motion trajectories strongly predicted subsequent mu oscillations ([Formula: see text]<0.01); effects were weaker and non-significant for low (17 Hz) and high (21 Hz) beta activity.
When segmented by inflection point occurrence rather than stimulus onset and testing participants as a random effect, inflection points evoked mu and beta activity from 308 to 364 ms at central electrodes, and broad activity from 226 to 800 ms at occipital electrodes. The results suggest that inflection points in gesture trajectories elicit coordinated activity in the visual and motor cortices, with prominent activity in the mu/alpha frequency band and extending into the beta frequency band. The time course of activity indicates that visual processing drives subsequent activity in the motor cortex during gesture processing, with a lag of approximately 80 ms.
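The wavelet step (extracting 11 Hz mu oscillations) is a standard analysis. A minimal complex-Morlet sketch, with illustrative parameters rather than the study's exact settings:

```python
import numpy as np

def band_power(signal, fs, freq=11.0, n_cycles=7):
    """Instantaneous power at `freq` Hz via convolution with a complex
    Morlet wavelet -- a common way to isolate mu-band (8-13 Hz) EEG
    activity. The cycle count and normalization are illustrative."""
    sigma = n_cycles / (2 * np.pi * freq)                 # Gaussian width (s)
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                      # unit-gain normalization
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

fs = 250                                                  # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
p11 = band_power(np.sin(2 * np.pi * 11 * t), fs).mean()   # in-band test tone
p40 = band_power(np.sin(2 * np.pi * 40 * t), fs).mean()   # out-of-band test tone
print(p11 > p40)                                          # True: only mu passes
```

The narrow frequency response of the 7-cycle wavelet is what lets the analysis separate mu (11 Hz) from the beta-band activity (17 and 21 Hz) tested as controls.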


Subjects
Attention/physiology , Brain Waves/physiology , Electrophysiological Phenomena/physiology , Gestures , Adolescent , Adult , Electroencephalography/methods , Female , Humans , Male , Mirror Neurons/physiology , Motor Cortex/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Young Adult
10.
IEEE Trans Image Process ; 22(6): 2306-16, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23475364

ABSTRACT

This paper presents a robust method for 3D object rotation estimation using a spherical harmonics representation and the unit quaternion vector. The proposed method provides a closed-form solution for rotation estimation without recurrence relations or searching for point correspondences between two objects. The rotation estimation problem is cast as a minimization problem that finds the optimal rotation angles between two objects of interest in the frequency domain. The optimal rotation angles are obtained by calculating the unit quaternion vector from a symmetric matrix, constructed from the two sets of spherical harmonics coefficients, using an eigendecomposition technique. Our experimental results on hundreds of 3D objects show that the proposed method is very accurate in rotation estimation, robust to noisy data and missing surface points, and can handle intra-class variability between 3D objects.
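The eigendecomposition-to-quaternion step can be illustrated with the classic Horn (1987) formulation: the optimal rotation quaternion is the leading eigenvector of a symmetric 4x4 matrix. Note the paper builds its symmetric matrix from spherical-harmonics coefficients, avoiding point correspondences entirely; the correspondence-based variant below is only a sketch of the shared algebraic core.

```python
import numpy as np

def rotation_quaternion(P, Q):
    """Optimal unit quaternion aligning point sets P -> Q (rows are
    corresponding points), recovered as the leading eigenvector of a
    symmetric 4x4 matrix (Horn's closed-form method)."""
    S = P.T @ Q                                   # 3x3 cross-covariance
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    vals, vecs = np.linalg.eigh(N)                # symmetric -> eigh
    return vecs[:, np.argmax(vals)]               # unit quaternion (w, x, y, z)

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

rng = np.random.default_rng(1)
P = rng.standard_normal((50, 3))                  # random 3D point cloud
angle = 0.4
R = np.array([[np.cos(angle), -np.sin(angle), 0],
              [np.sin(angle),  np.cos(angle), 0],
              [0, 0, 1]])                         # known z-axis rotation
q = rotation_quaternion(P, P @ R.T)               # recover it
print(np.allclose(quat_to_matrix(q), R, atol=1e-6))  # True
```

Because the 4x4 matrix is symmetric, the solution is closed-form and globally optimal, which is the property the paper exploits in the frequency domain.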
