Results 1 - 7 of 7

1.
Front Vet Sci ; 9: 835529, 2022.
Article in English | MEDLINE | ID: mdl-35242842

ABSTRACT

Machine vision has demonstrated its usefulness in the livestock industry in terms of improving welfare in such areas as lameness detection and body condition scoring in dairy cattle. In this article, we present promising results from applying state-of-the-art object detection and classification techniques to insects, specifically the Black Soldier Fly (BSF) and the domestic cricket, with a view to enabling automated processing for insect farming. We also present the low-cost "Insecto" Internet of Things (IoT) device, which provides environmental monitoring of temperature, humidity, CO2, air pressure, and volatile organic compound levels together with high-resolution image capture. We show that we are able to accurately count and measure the size of BSF larvae and to classify the sex of domestic crickets by detecting the presence of the ovipositor. These early results point to future work on automating the selection of desirable phenotypes for subsequent generations and on providing early alerts should environmental conditions deviate from desired values.
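To make the image-analysis step concrete, the following is a minimal sketch of counting and measuring BSF larvae with classical contour analysis in OpenCV. It is illustrative only: the paper uses learned object detection and classification, and the function name, thresholds, and pixel-to-millimetre scale below are assumptions rather than values from the study.

```python
# Hedged sketch: larva counting and size measurement via classical contour
# analysis. Thresholds and the pixel-to-mm calibration are illustrative
# assumptions, not values from the paper.
import cv2
import numpy as np

PIXELS_PER_MM = 12.0  # assumed calibration of the imaging setup

def count_and_measure_larvae(image_path: str, min_area_px: int = 50):
    """Return (count, list of larva lengths in mm) for one tray image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Separate (assumed lighter) larvae from the darker substrate.
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Remove small specks before contour extraction.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    lengths_mm = []
    for c in contours:
        if cv2.contourArea(c) < min_area_px:
            continue
        # The long side of the minimum-area rectangle approximates larva length.
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        lengths_mm.append(max(w, h) / PIXELS_PER_MM)
    return len(lengths_mm), lengths_mm
```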

2.
Sci Rep ; 10(1): 17557, 2020 10 16.
Article in English | MEDLINE | ID: mdl-33067502

ABSTRACT

The digestive health of cows is one of the primary factors that determine their well-being and productivity. Under- and over-feeding are both commonplace in the beef and dairy industries, leading to welfare issues, negative environmental impacts, and economic losses. Unfortunately, digestive health is difficult for farmers to monitor routinely on large farms, due to many factors including the need to transport faecal samples to a laboratory for compositional analysis. This paper describes a novel means of monitoring digestive health via a low-cost, easy-to-use imaging device based on computer vision. The method involves the rapid capture of multiple visible and near-infrared images of faecal samples. A novel three-dimensional analysis algorithm is then applied to objectively score the condition of each sample based on its geometrical features. While there is no universal ground truth for comparison of results, the order of scores matched qualitative human predictions very closely. The algorithm is also able to detect the presence of undigested fibres and corn kernels using a deep learning approach; detection rates for corn and fibre in image regions were of the order of 90%. These results indicate the potential to develop this system for on-farm, real-time monitoring of the digestive health of individual animals, allowing early intervention to adjust feeding strategy effectively.
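As an illustration of how geometrical features of a sample might be turned into a condition score, the sketch below scores a depth map of a faecal pat from its footprint area and peak height. The paper's actual 3D analysis algorithm is not reproduced here; the feature choice, constants, and score mapping are assumptions.

```python
# Illustrative sketch only: geometric scoring of a faecal-pat depth map.
# Features, constants, and the 1-5 mapping are assumptions, not the
# published algorithm.
import numpy as np

def geometric_condition_score(depth_mm: np.ndarray) -> float:
    """Map coarse shape features of a depth map (mm above the ground plane)
    to a 1-5 consistency score: tall, compact pats score low (stiff),
    flat, spread pats score high (loose)."""
    footprint = depth_mm > 1.0              # pixels belonging to the sample
    if not footprint.any():
        return float("nan")
    peak_height = depth_mm[footprint].max()
    area_px = int(footprint.sum())
    # Flatness ratio: large when the pat is wide and shallow.
    flatness = area_px / (peak_height ** 2 + 1e-6)
    # Logistic squash onto a 1-5 scale; constants are purely illustrative.
    return float(np.clip(1.0 + 4.0 / (1.0 + np.exp(2.0 - np.log10(flatness))), 1.0, 5.0))
```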


Subject(s)
Animal Husbandry/instrumentation; Animal Husbandry/methods; Feces; Algorithms; Animal Feed/analysis; Animal Welfare; Animals; Behavior, Animal; Calibration; Cattle; Dairying; Deep Learning; Farms; Image Processing, Computer-Assisted/methods; Livestock; Software; Spectroscopy, Near-Infrared
3.
World J Gastrointest Endosc ; 12(5): 138-148, 2020 May 16.
Article in English | MEDLINE | ID: mdl-32477448

ABSTRACT

Colonoscopy screening for the detection and removal of colonic adenomas is central to efforts to reduce the morbidity and mortality of colorectal cancer. However, up to a third of adenomas may be missed at colonoscopy, and the majority of post-colonoscopy colorectal cancers are thought to arise from these. Adenomas have three-dimensional surface topographic features that differentiate them from adjacent normal mucosa. However, these topographic features are not enhanced by white light colonoscopy, and the endoscopist must infer these from two-dimensional cues. This may contribute to the number of missed lesions. A variety of optical imaging technologies have been developed commercially to enhance surface topography. However, existing techniques enhance surface topography indirectly, and in two dimensions, and the evidence does not wholly support their use in routine clinical practice. In this narrative review, co-authored by gastroenterologists and engineers, we summarise the evidence for the impact of established optical imaging technologies on adenoma detection rate, and review the development of photometric stereo (PS) for colonoscopy. PS is a machine vision technique able to capture a dense array of surface normals to render three-dimensional reconstructions of surface topography. This imaging technique has several potential clinical applications in colonoscopy, including adenoma detection, polyp classification, and facilitating polypectomy, an inherently three-dimensional task. However, the development of PS for colonoscopy is at an early stage. We consider the progress that has been made with PS to date and identify the obstacles that need to be overcome prior to clinical application.
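For readers unfamiliar with PS, the following is a minimal sketch of the textbook Lambertian formulation, recovering per-pixel surface normals from several images captured under known light directions. It is the generic technique described in the review, not the authors' colonoscopy-specific implementation; the function name and array conventions are assumptions.

```python
# Minimal Lambertian photometric-stereo sketch: per-pixel normals from
# k >= 3 images under known, distant light directions (least squares).
import numpy as np

def photometric_stereo(images: np.ndarray, light_dirs: np.ndarray):
    """
    images:     (k, H, W) grayscale intensities under k lights
    light_dirs: (k, 3) unit light direction vectors
    returns:    albedo (H, W) and unit normals (H, W, 3)
    """
    k, H, W = images.shape
    I = images.reshape(k, -1)                              # (k, H*W)
    # Solve L @ G = I, where G = albedo * normal at every pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)     # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / albedo, 0.0)
    return albedo.reshape(H, W), normals.T.reshape(H, W, 3)
```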

4.
Gigascience ; 8(5)2019 05 01.
Article in English | MEDLINE | ID: mdl-31127811

ABSTRACT

BACKGROUND: Tracking and predicting the growth performance of plants in different environments is critical for predicting the impact of global climate change. Automated approaches for image capture and analysis have allowed substantial increases in the throughput of quantitative growth trait measurements compared with manual assessments. Recent work has focused on adopting computer vision and machine learning approaches to improve the accuracy of automated plant phenotyping. Here we present PS-Plant, a low-cost and portable 3D plant phenotyping platform based on photometric stereo (PS), an imaging technique novel to plant phenotyping. RESULTS: We calibrated PS-Plant to track the model plant Arabidopsis thaliana throughout the day-night (diel) cycle and investigated growth architecture under a variety of conditions to illustrate the dramatic effect of the environment on plant phenotype. We developed bespoke computer vision algorithms and assessed available deep neural network architectures to automate the segmentation of rosettes and individual leaves, and to extract basic and more advanced traits from PS-derived data, including the tracking of 3D plant growth and diel leaf hyponastic movement. Furthermore, we have produced the first PS training data set, which includes 221 manually annotated Arabidopsis rosettes used for training and data analysis (1,768 images in total). A full protocol is provided, including all software components and an additional test data set. CONCLUSIONS: PS-Plant is a powerful new phenotyping tool for plant research that provides robust data at high temporal and spatial resolutions. The system is well-suited to both small- and large-scale research and will help to accelerate the bridging of the phenotype-to-genotype gap.
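As one example of a diel trait that can be read off a PS-derived normal map, the sketch below estimates a leaf's mean inclination angle, a plausible proxy for hyponastic movement. PS-Plant's own pipeline and trait definitions may differ; the function name and mask convention here are assumptions.

```python
# Hedged sketch: leaf inclination from a PS-derived normal map and a leaf
# mask. Only illustrates how a diel trait could be extracted from normals.
import numpy as np

def mean_leaf_inclination_deg(normals: np.ndarray, leaf_mask: np.ndarray) -> float:
    """
    normals:   (H, W, 3) unit surface normals from photometric stereo
    leaf_mask: (H, W) boolean mask for one segmented leaf
    returns:   mean angle (degrees) between the leaf surface and horizontal
    """
    nz = np.clip(np.abs(normals[..., 2][leaf_mask]), 0.0, 1.0)
    # A horizontal leaf has normals pointing straight up (nz = 1, angle = 0);
    # as the leaf tilts upward (hyponasty), nz decreases and the angle grows.
    return float(np.degrees(np.arccos(nz)).mean())
```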


Subject(s)
Deep Learning; Imaging, Three-Dimensional/methods; Photometry/methods; Plant Development; Arabidopsis; Imaging, Three-Dimensional/economics; Imaging, Three-Dimensional/standards; Phenotype; Photometry/economics; Photometry/standards
5.
J Opt Soc Am A Opt Image Sci Vis ; 33(3): 314-25, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26974900

ABSTRACT

This paper introduces an unsupervised modular approach for accurate, real-time eye center localization in images and videos, following a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to enhance the human-computer interaction (HCI) experience. The modular approach uses isophote and gradient features to estimate eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which undermine most eye center localization methods. A real-world implementation of these algorithms has been built in the form of an interactive advertising billboard to demonstrate the effectiveness of the method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database, and it outperforms all of the compared algorithms in terms of localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have shown the algorithm to be robust to moderate head poses and poor illumination conditions. The interactive advertising billboard demonstrated outstanding usability and effectiveness in our tests and shows great potential for a wide range of real-world HCI applications.
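The sketch below illustrates one well-known gradient-based eye-centre objective (gradients on the iris boundary point radially away from the true centre), which conveys the flavour of gradient-feature localization. It is not the paper's modular isophote-plus-selective-oriented-gradient pipeline, and the percentile threshold and function name are assumptions.

```python
# Hedged sketch of a gradient-voting eye-centre estimator; simpler and
# slower than the paper's method, intended only as an illustration.
import numpy as np

def locate_eye_center(eye_patch: np.ndarray) -> tuple[int, int]:
    """Return (row, col) of the estimated eye centre in a small grayscale patch."""
    gy, gx = np.gradient(eye_patch.astype(float))
    mag = np.hypot(gx, gy)
    keep = mag > np.percentile(mag, 90)        # only strong gradients vote
    ys, xs = np.nonzero(keep)
    gxn, gyn = gx[keep] / mag[keep], gy[keep] / mag[keep]

    H, W = eye_patch.shape
    best, best_score = (0, 0), -np.inf
    for cy in range(H):
        for cx in range(W):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            ok = norm > 0
            # Dot product of displacement and gradient directions: maximal
            # when gradients point radially away from the candidate centre.
            dots = (dx[ok] * gxn[ok] + dy[ok] * gyn[ok]) / norm[ok]
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best = score, (cy, cx)
    return best
```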


Subject(s)
Computers; Eye Movements; Pattern Recognition, Automated/methods; Humans; Unsupervised Machine Learning
6.
J Opt Soc Am A Opt Image Sci Vis ; 33(3): 333-44, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26974902

ABSTRACT

This paper compares encoded features from two-dimensional (2D) and three-dimensional (3D) face images in order to achieve automatic gender recognition with high accuracy and robustness. The Fisher vector encoding method is employed to produce 2D, 3D, and fused features with increased discriminative power. For 3D face analysis, a two-source photometric stereo (PS) method is introduced that enables 3D surface reconstructions with accurate detail and desirable efficiency. Moreover, a 2D+3D imaging device, built around the two-source PS method, has been developed that can simultaneously gather color images for 2D evaluation and PS images for 3D analysis. This system inherits the reconstruction accuracy of the standard (three or more light) PS method but simplifies both the reconstruction algorithm and the hardware design by requiring only two light sources. It also offers great potential for facilitating human-computer interaction by being accurate, inexpensive, efficient, and nonintrusive. Ten types of low-level 2D and 3D features were evaluated and encoded for Fisher vector gender recognition. Evaluations of the Fisher vector encoding method were performed on the FERET, Color FERET, LFW, and FRGCv2 databases, yielding 97.7%, 98.0%, 92.5%, and 96.7% accuracy, respectively. In addition, 2D and 3D features were compared on a self-collected dataset, constructed with the aid of the 2D+3D imaging device in a series of data capture experiments. Across these experiments and evaluations, the Fisher vector encoding method outperforms most state-of-the-art gender recognition methods. It has also been observed that 3D features reconstructed by the two-source PS method further boost Fisher vector gender recognition performance, with up to a 6% increase on the self-collected database.
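For reference, here is a hedged sketch of improved Fisher vector encoding over local descriptors using a diagonal-covariance GMM from scikit-learn. The paper's exact feature types, GMM size, and normalisation details are not reproduced; the helper name and the 64-component example are assumptions.

```python
# Hedged sketch of improved Fisher vector encoding (gradients w.r.t. GMM
# means and standard deviations, power + L2 normalisation).
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors: np.ndarray, gmm: GaussianMixture) -> np.ndarray:
    """descriptors: (T, D) local features; gmm fitted with covariance_type='diag'."""
    T, D = descriptors.shape
    q = gmm.predict_proba(descriptors)                          # (T, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_     # (K, D), (K, D), (K,)
    sigma = np.sqrt(var)

    diff = (descriptors[:, None, :] - mu[None]) / sigma[None]   # (T, K, D)
    # Gradients w.r.t. the means and standard deviations of each component.
    g_mu = (q[..., None] * diff).sum(0) / (T * np.sqrt(w)[:, None])
    g_sig = (q[..., None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * w)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation

# Usage example (assumed GMM size):
# gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(train_descriptors)
# fv = fisher_vector(image_descriptors, gmm)
```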


Subject(s)
Face; Imaging, Three-Dimensional; Pattern Recognition, Automated/methods; Sex Factors; Databases, Factual; Female; Humans; Male
7.
J Opt Soc Am A Opt Image Sci Vis ; 30(3): 278-86, 2013 Mar 01.
Article in English | MEDLINE | ID: mdl-23456103

ABSTRACT

This paper proposes and describes an implementation of a photometric stereo-based technique for in vivo assessment of three-dimensional (3D) skin topography in the presence of interreflections. The proposed method illuminates the skin with red, green, and blue colored lights and uses the resulting variation in surface gradients to mitigate the effects of interreflections. Experiments were carried out on Caucasian, Asian, and African American subjects to demonstrate the accuracy of our method and to validate the measurements produced by our system. Our method produced significant improvements in 3D surface reconstruction for Caucasian, Asian, and African American skin types alike. The results also illustrate the differences in recovered skin topography due to the nondiffuse bidirectional reflectance distribution function (BRDF) under each illumination color, which concur with existing multispectral BRDF data for skin.
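The sketch below shows the basic single-shot colour photometric stereo step that such a system builds on: with red, green, and blue lights fixed in three known directions, one RGB frame supplies three intensity measurements per pixel, so Lambertian normals can be solved directly. The interreflection mitigation described in the paper is not modelled, and the function signature is an assumption.

```python
# Hedged sketch of single-shot colour photometric stereo (Lambertian,
# no interreflection handling).
import numpy as np

def colour_ps_normals(rgb: np.ndarray, light_dirs: np.ndarray) -> np.ndarray:
    """
    rgb:        (H, W, 3) image; channel i was lit by light_dirs[i]
    light_dirs: (3, 3) unit directions of the red, green, and blue lights
    returns:    (H, W, 3) unit surface normals
    """
    H, W, _ = rgb.shape
    I = rgb.reshape(-1, 3).T                        # (3, H*W): one row per light
    G = np.linalg.solve(light_dirs, I)              # albedo-scaled normals
    norm = np.linalg.norm(G, axis=0, keepdims=True)
    N = np.where(norm > 1e-8, G / norm, 0.0)
    return N.T.reshape(H, W, 3)
```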


Subject(s)
Imaging, Three-Dimensional/methods; Optical Phenomena; Photometry/methods; Skin/cytology; Humans; Skin Aging/ethnology