Results 1 - 5 of 5
1.
Animals (Basel) ; 13(2)2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36670787

ABSTRACT

The objectives were to determine the sensitivity, specificity, and cutoff values of a visual-based precision livestock technology (NUtrack), and to determine the sensitivity and specificity of sickness score data collected via live observation by trained human observers. At weaning, pigs (n = 192; gilts and barrows) were randomly assigned to one of twelve pens (16/pen), and treatments were randomly assigned to pens. Sham-pen pigs all received subcutaneous saline (3 mL). LPS-pen pigs all received subcutaneous lipopolysaccharide (LPS; 300 µg/kg BW; E. coli O111:B4; in 3 mL of saline). For the last treatment, eight pigs were randomly assigned to receive LPS and the other eight received saline (same methods as above; half-and-half pens). Human data from the day of the challenge showed high true-positive and low false-positive rates (88.5% sensitivity; 85.4% specificity; 0.871 area under the curve, AUC); however, these values declined when half-and-half pigs were scored (75% sensitivity; 65.5% specificity; 0.703 AUC). Precision technology measures had excellent AUC, sensitivity, and specificity for the first 72 h after treatment, with AUC values >0.970 regardless of pen treatment. These results indicate that precision technology has greater potential for identifying pigs during a natural infectious disease event than trained professionals using timepoint sampling.
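For reference, the sensitivity, specificity, and AUC figures this abstract reports can be computed from labeled classifier scores as follows. This is a minimal sketch with invented example data, not the study's data; the AUC is computed with the rank-sum (Mann-Whitney) formulation rather than any particular statistics package.

```python
# Sensitivity, specificity, and ROC AUC from binary labels and scores
# (illustrative data only).

def sensitivity_specificity(labels, predictions):
    """labels, predictions: sequences of 0/1 (1 = sick)."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability a random positive outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
preds  = [1 if s >= 0.5 else 0 for s in scores]  # cutoff at 0.5
sens, spec = sensitivity_specificity(labels, preds)
print(sens, spec, auc(labels, scores))
```

Choosing the cutoff that maximizes sensitivity + specificity (Youden's index) is one common way to derive the cutoff values the abstract mentions.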

2.
Sensors (Basel) ; 19(4)2019 Feb 19.
Article in English | MEDLINE | ID: mdl-30791377

ABSTRACT

Computer vision systems have the potential to provide automated, non-invasive monitoring of livestock animals; however, the lack of public datasets with well-defined targets and evaluation metrics presents a significant challenge for researchers. Consequently, existing solutions often focus on achieving task-specific objectives using relatively small, private datasets. This work introduces a new dataset and method for instance-level detection of multiple pigs in group-housed environments. The method uses a single fully convolutional neural network to detect the location and orientation of each animal, where both body part locations and pairwise associations are represented in the image space. Accompanying this method is a new dataset containing 2000 annotated images with 24,842 individually annotated pigs from 17 different locations. The proposed method achieves over 99% precision and over 96% recall when detecting pigs in environments previously seen by the network during training. To evaluate the robustness of the trained network, it is also tested on environments and lighting conditions unseen in the training set, where it achieves 91% precision and 67% recall. The dataset is publicly available for download.
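Detection precision and recall such as those reported above are typically computed by matching predicted detections to ground-truth annotations one-to-one. The sketch below uses greedy matching of axis-aligned boxes by intersection-over-union with an illustrative 0.5 threshold; the boxes and threshold are invented, and the paper's actual matching criterion may differ.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thresh=0.5):
    """Greedily match each prediction to its best unmatched ground truth."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return precision, recall

gts   = [(0, 0, 10, 10), (20, 20, 30, 30)]   # two annotated pigs
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]   # one hit, one false positive
print(precision_recall(preds, gts))
```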

3.
Biomed Sci Instrum ; 51: 289-96, 2015.
Article in English | MEDLINE | ID: mdl-25996730

ABSTRACT

Shifting demographics in the U.S. have created an urgent need to reform the policies, practices, and technology associated with delivering healthcare to geriatric populations. Automated monitoring systems can improve quality of life while reducing healthcare costs for individuals aging in place. For these systems to be successful, both activity detection and localization are important, but most existing research focuses on only one of these technologies, and systems that do collect both treat the data sources separately. Here, we present SLAD (Simultaneous Localization and Activity Detection), a novel framework for simultaneously processing data collected from localization and activity classification systems. Using a hidden Markov model and machine learning techniques, SLAD fuses these two sources of data in real time within a probabilistic likelihood framework, which allows activity data to refine localization, and vice versa. To evaluate the system, a wireless sensor network was deployed to collect RSSI data and IMU data concurrently from a wrist-worn watch; the RSSI data were processed using a radial basis function neural network localization algorithm, and the resulting position likelihoods were combined with the likelihoods from an IMU activity classification algorithm. In an experiment conducted in an indoor office environment, the proposed method produced 97% localization accuracy and 85% activity classification accuracy.
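The likelihood-fusion idea described in this abstract can be sketched as a hidden Markov model forward update over joint (location, activity) states, where the emission probability multiplies the RSSI-based location likelihood by the IMU-based activity likelihood. This is a toy illustration: the state names, transition probabilities, and likelihood values are all invented, not taken from the paper.

```python
import itertools

# Joint (location, activity) state space for a toy HMM forward step.
locations = ["kitchen", "bedroom"]
activities = ["walking", "sitting"]
states = list(itertools.product(locations, activities))

# Uniform prior belief over joint states.
belief = {s: 1.0 / len(states) for s in states}

# "Sticky" transition model: stay with prob 0.7, else move uniformly.
def transition(prev, nxt):
    return 0.7 if prev == nxt else 0.3 / (len(states) - 1)

# Per-sensor observation likelihoods for the current time step
# (invented numbers standing in for the RSSI and IMU classifiers).
loc_like = {"kitchen": 0.8, "bedroom": 0.2}
act_like = {"walking": 0.6, "sitting": 0.4}

# Forward update: predict via the transition model, then weight each
# state by the fused emission likelihood, then renormalize.
new_belief = {}
for s in states:
    predicted = sum(belief[p] * transition(p, s) for p in states)
    new_belief[s] = predicted * loc_like[s[0]] * act_like[s[1]]
total = sum(new_belief.values())
belief = {s: v / total for s, v in new_belief.items()}
print(max(belief, key=belief.get))
```

Because the emission factorizes across sensors, a confident activity classification sharpens the posterior over locations and vice versa, which is the mutual-refinement behavior the abstract describes.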

4.
Article in English | MEDLINE | ID: mdl-25570416

ABSTRACT

As a first step toward building a smart home behavioral monitoring system capable of classifying a wide variety of human behavior, a wireless sensor network (WSN) system is presented for RSSI localization. The low-cost, non-intrusive system uses a smart watch worn by the user to broadcast data to the WSN, where the strength of the radio signal is evaluated at each WSN node to localize the user. A method is presented that uses simultaneous localization and mapping (SLAM) for system calibration, providing automated fingerprinting that associates radio signal strength patterns with the user's location within the living space. To improve localization accuracy, a novel refinement technique is introduced that takes into account typical movement patterns of people within their homes. Experimental results demonstrate that the system is capable of providing accurate localization results in a typical living space.
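The fingerprinting approach this abstract describes maps known positions to the RSSI pattern seen across the sensor nodes during calibration, then localizes an unknown reading by finding the closest stored pattern. A minimal nearest-neighbor sketch, with invented positions and dBm values (the paper's RBF-network and SLAM-based calibration are more sophisticated than this):

```python
import math

# Calibration fingerprints: (x, y) position in meters -> RSSI in dBm
# observed at each of three sensor nodes (illustrative values).
fingerprints = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -42, -75],
    (0.0, 5.0): [-78, -72, -45],
}

def locate(reading):
    """Return the fingerprint position whose RSSI vector is nearest
    (Euclidean distance in RSSI space) to the live reading."""
    return min(fingerprints,
               key=lambda pos: math.dist(fingerprints[pos], reading))

print(locate([-44, -68, -79]))  # nearest to the (0.0, 0.0) fingerprint
```

The movement-pattern refinement the abstract mentions would act on top of this, e.g. by penalizing candidate positions that imply implausible jumps between consecutive readings.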


Subject(s)
Behavior , Monitoring, Ambulatory/methods , Wireless Technology , Activities of Daily Living , Algorithms , Calibration , Electronic Data Processing , Humans , Internet , Lasers , Movement , Probability , Signal Processing, Computer-Assisted
5.
Surg Endosc ; 26(12): 3413-7, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22648119

ABSTRACT

BACKGROUND: Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). METHODS: The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. RESULTS: Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. CONCLUSIONS: The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
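The image-based reconstruction this abstract builds on recovers depth from the disparity between matched pixels in the two stereo views: for a rectified pair, Z = f·B/d, where f is the focal length, B the baseline between cameras, and d the pixel disparity. A minimal sketch with invented camera parameters (not those of the paper's miniaturized camera):

```python
# Stereo triangulation for a rectified camera pair.
FOCAL_PX = 800.0   # focal length in pixels (invented example value)
BASELINE_MM = 5.0  # distance between the two cameras in mm (invented)

def depth_mm(disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_MM / disparity_px

# A surface point matched 40 px apart between the left and right images:
print(depth_mm(40.0))  # 100.0 mm from the camera
```

Running this per pixel over a dense disparity map is the embarrassingly parallel workload that makes a GPU implementation, as used in the paper, a natural fit.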


Subject(s)
Imaging, Three-Dimensional , Laparoscopy/methods , Animals , Computer Systems , Swine