1.
Front Plant Sci ; 15: 1334215, 2024.
Article in English | MEDLINE | ID: mdl-38405587

ABSTRACT

Canopy conductance is a crucial factor in modelling plant transpiration and is highly responsive to water stress. The objective of this study is to develop a straightforward method for estimating canopy conductance (gc) in grapevines. To predict gc, the study combines stomatal conductance to water vapor (gsw) measured on grapevine leaves, scaled to canopy size by the leaf area index (LAI), with atmospheric variables such as net solar radiation (Rn) and air vapor pressure deficit (VPD). The developed model was then validated by comparing its predictions with gc values calculated from the inverse of the Penman-Monteith equation. The proposed model proves effective in estimating gc: its highest root-mean-squared error (RMSE = 1.45 × 10⁻⁴ m s⁻¹) is lower than the minimum gc measured in the field (gc,obs = 0.0005 m s⁻¹). The results of this study reveal the significant influence of both VPD and gsw on grapevine canopy conductance.
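The validation step, recovering gc by inverting the Penman-Monteith equation, can be sketched as follows. This is a minimal sketch assuming standard FAO-style constants (air density, specific heat, psychrometric constant) and neglecting soil heat flux; function and variable names are illustrative and not taken from the paper.

```python
import math

def delta_svp(t_c):
    """Slope of the saturation vapour pressure curve (kPa per deg C)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def inverse_pm_gc(lam_e, rn, vpd, ga, t_c, pressure=101.3):
    """Canopy conductance gc (m/s) from the inverted Penman-Monteith
    equation, neglecting soil heat flux.

    lam_e : latent heat flux lambda*E (W m-2)
    rn    : net radiation (W m-2)
    vpd   : air vapour pressure deficit (kPa)
    ga    : aerodynamic conductance (m s-1)
    t_c   : air temperature (deg C)
    """
    rho, cp = 1.2, 1013.0            # air density, specific heat (assumed)
    gamma = 0.000665 * pressure      # psychrometric constant (kPa per deg C)
    delta = delta_svp(t_c)
    ra = 1.0 / ga                    # aerodynamic resistance (s m-1)
    # Solve Penman-Monteith for the canopy resistance rc, then invert.
    rc = (gamma * ra * lam_e) / (
        delta * rn + rho * cp * vpd / ra - lam_e * (delta + gamma))
    return 1.0 / rc
```

A round trip (computing lambda*E forward from a known gc and inverting it back) is a quick way to check the algebra.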

2.
Sensors (Basel) ; 18(9)2018 Sep 03.
Article in English | MEDLINE | ID: mdl-30177667

ABSTRACT

This paper presents a new methodology for estimating olive-fruit mass and size, the latter characterized by the major and minor axis lengths, using image analysis techniques. First, different sets of olives from the Picual and Arbequina varieties were photographed in the laboratory. An original algorithm based on mathematical morphology and statistical thresholding was developed for segmenting the acquired images. Estimation models for the three targeted features, specific to each variety, were established by linearly correlating the information extracted from the segmentations with objective reference measurements. The performance of the models was evaluated on external validation sets, giving relative errors of 0.86% for the major axis, 0.09% for the minor axis and 0.78% for mass in the case of the Arbequina variety; analogously, relative errors of 0.03%, 0.29% and 2.39% were obtained for Picual. Additionally, global estimation models applicable to both varieties were also tested, providing comparable or even better performance than the variety-specific ones. Given the achieved accuracy, the proposed method can be considered a first step towards a low-cost, automated and non-invasive system for olive-fruit characterization in industrial processing chains.
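The segmentation combines mathematical morphology with statistical thresholding. A minimal sketch of the two building blocks (Otsu's between-class-variance threshold and a 3x3 binary erosion), assuming a greyscale uint8 input, could look like this; it is illustrative, not the paper's algorithm:

```python
import numpy as np

def otsu_threshold(gray):
    """Statistical (Otsu) threshold on a 2-D uint8 image: pick the grey
    level that maximises the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1]              # pixels below the threshold
        w1 = total - w0              # pixels at or above it
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binary_erode(mask):
    """Morphological erosion with a 3x3 square structuring element."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy: 1 + dy + mask.shape[0],
                          1 + dx: 1 + dx + mask.shape[1]]
    return out
```

Thresholding yields a binary olive mask; erosion (and its dual, dilation) cleans up segmentation noise before the axis and mass measurements.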

3.
J Sci Food Agric ; 97(3): 784-792, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27173452

ABSTRACT

BACKGROUND: Grapevine flower number per inflorescence provides valuable information for assessing yield. Considerable research has been conducted towards developing a technological tool for this purpose, based on image analysis and predictive modelling. However, the behaviour of variety-independent predictive models and their yield-prediction capabilities across a wide set of varieties had never been evaluated. RESULTS: Inflorescence images from 11 grapevine (Vitis vinifera L.) varieties were acquired under field conditions. The flower number per inflorescence and the flower number visible in the images were counted manually, and automatically using an image analysis algorithm. These datasets were used to calibrate and evaluate two linear (single-variable and multivariable) models and a nonlinear variety-independent model. The integrated tool composed of the image analysis algorithm and the nonlinear approach showed the highest performance and robustness (RPD = 8.32, RMSE = 37.1). The yield-estimation capabilities of the flower number in conjunction with fruit set rate (R² = 0.79) and average berry weight (R² = 0.91) were also tested. CONCLUSION: This study proves the accuracy of flower number per inflorescence estimation using an image analysis algorithm and a nonlinear model that is generally applicable to different grapevine varieties. This provides a fast, non-invasive and reliable tool for yield estimation at harvest. © 2016 Society of Chemical Industry.
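The RPD and RMSE figures quoted for the model can be computed as below. Note that RPD definitions vary slightly in the literature; the sketch uses one common form (standard deviation of the observations divided by the RMSE of prediction), which may not be the paper's exact formula.

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-squared error between observed and predicted counts."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def rpd(y_true, y_pred):
    """Residual predictive deviation: sample standard deviation of the
    observations over the prediction error; larger values (e.g. the
    paper's 8.32) indicate a more robust model."""
    n = len(y_true)
    mean = sum(y_true) / n
    sd = math.sqrt(sum((t - mean) ** 2 for t in y_true) / (n - 1))
    return sd / rmse(y_true, y_pred)
```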


Subject(s)
Crop Production , Crops, Agricultural/growth & development , Inflorescence/growth & development , Models, Biological , Vitis/growth & development , Algorithms , Calibration , Computational Biology , Crops, Agricultural/metabolism , Fruit/growth & development , Fruit/metabolism , Image Processing, Computer-Assisted , Inflorescence/metabolism , Linear Models , Multivariate Analysis , Nonlinear Dynamics , Pigments, Biological/biosynthesis , Reproducibility of Results , Spain , Species Specificity , Vitis/metabolism
4.
Sensors (Basel) ; 15(9): 21204-18, 2015 Aug 28.
Article in English | MEDLINE | ID: mdl-26343664

ABSTRACT

Grapevine flowering and fruit set greatly determine crop yield. This paper presents a new smartphone application for automatically counting the flower number in grapevine inflorescence photos, non-invasively and directly in the vineyard, using artificial vision techniques. The application, called vitisFlower®, first guides the user to take an appropriate inflorescence photo with the smartphone's camera. Then, by means of image analysis, the flowers in the image are detected and counted. vitisFlower® was developed for Android devices and uses the OpenCV libraries to maximize computational efficiency. The application was tested on 140 inflorescence images of 11 grapevine varieties taken with two different devices. On average, more than 84% of the flowers in the captures were found, with a precision exceeding 94%. Additionally, the application's efficiency on four different devices covering a wide range of the market spectrum was also studied. This benchmarking showed significant differences among devices, but indicated that the application is usable even on low-range devices. vitisFlower® is one of the first applications for viticulture that is freely available on Google Play.
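The quoted detection figures (more than 84% of flowers found, precision above 94%) correspond to micro-averaged recall and precision over the test images. A small sketch, with illustrative field names not taken from the paper:

```python
def flower_count_scores(per_image):
    """Micro-averaged recall and precision over a set of images.

    per_image holds (flowers_present, flowers_detected, detections_correct)
    triples, one per image; totals are pooled before dividing."""
    present = sum(t[0] for t in per_image)
    detected = sum(t[1] for t in per_image)
    correct = sum(t[2] for t in per_image)
    return correct / present, correct / detected  # (recall, precision)
```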


Subject(s)
Agriculture/methods , Image Processing, Computer-Assisted/instrumentation , Image Processing, Computer-Assisted/methods , Inflorescence/physiology , Mobile Applications , Vitis/physiology , Algorithms , Smartphone
5.
Comput Biol Med ; 55: 61-73, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25450220

ABSTRACT

This paper presents a methodology for establishing the macular grading grid in digital retinal images by means of fovea centre detection. To this end, visual and anatomical feature-based criteria are combined with the aim of exploiting the benefits of both techniques. First, an acceptable fovea centre estimate is obtained using a priori known anatomical features relative to the optic disc and the vascular tree. Second, morphological processing is employed to refine this estimate when the fovea is detectable in the image; otherwise, the fovea is declared indistinguishable and the first result is retained. The methodology was tested on the MESSIDOR and DIARETDB1 databases using a distance criterion between the obtained and the true fovea centre. Fovea centres graded between the categories Excellent and Fair (those generally accepted as valid in the literature) accounted for 98.24% and 94.38% of the cases in MESSIDOR and DIARETDB1, respectively.
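A distance criterion of this kind grades an estimate by its distance to the reference centre, usually in optic-disc-radius units. The band limits below (1/4 R, 1/2 R, 1 R) are a common convention in this literature but are an assumption here, not figures from the paper:

```python
import math

def fovea_grade(est, ref, od_radius):
    """Grade an estimated fovea centre by its Euclidean distance to the
    reference centre. Band limits in optic-disc-radius units are
    illustrative assumptions, not taken from the paper."""
    d = math.dist(est, ref)
    if d <= od_radius / 4:
        return "Excellent"
    if d <= od_radius / 2:
        return "Good"
    if d <= od_radius:
        return "Fair"
    return "Poor"
```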


Subject(s)
Diabetic Retinopathy/pathology , Fovea Centralis/anatomy & histology , Image Processing, Computer-Assisted/methods , Databases, Factual , Diabetic Retinopathy/diagnosis , Early Diagnosis , Humans , Optic Disk/anatomy & histology , Reproducibility of Results , Retinal Vessels/anatomy & histology
6.
IEEE Trans Med Imaging ; 31(2): 231-9, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21926018

ABSTRACT

Retinal blood vessel assessment plays an important role in the diagnosis of ophthalmic pathologies. The use of digital images for this purpose enables a computerized approach and has fostered the development of multiple methods for automated vascular tree segmentation. Metrics based on contingency tables for binary classification have been widely used for evaluating the performance of these algorithms. Metrics from this family measure a success or failure rate over the detected pixels, obtained by pixel-to-pixel comparison between the automated segmentation and a manually labelled reference image. Vessel pixels are therefore not considered as part of a vascular structure with specific features. This paper contributes a function for evaluating the global quality of retinal vessel segmentations. The function is based on the characterization of vascular structures as connected segments with measurable area and length, and its design is thus sensitive to anatomical vascularity features. A comparison between the proposed function and other general quality evaluation functions shows that this proposal agrees closely with human quality perception. It can therefore be used to enhance quality evaluation in retinal vessel segmentations, supplementing the existing functions. More generally, the underlying concept of measuring descriptive properties may be used to design specialized functions for segmentation quality evaluation in other complex structures.
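The characterization of a binary vessel mask as connected segments with measurable area and length can be sketched as below. The labelling is a standard BFS over 8-connected pixels; using the bounding-box diagonal as a length proxy is an illustrative simplification, not the paper's measure:

```python
import numpy as np
from collections import deque

def vessel_segments(mask):
    """Label 8-connected segments in a binary vessel mask and return a
    per-segment dict of area (pixel count) and a simple length proxy
    (bounding-box diagonal)."""
    labels = np.zeros(mask.shape, dtype=int)
    feats = []
    nxt = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        nxt += 1
        labels[y, x] = nxt
        q = deque([(y, x)])
        pts = []
        while q:                      # breadth-first flood fill
            cy, cx = q.popleft()
            pts.append((cy, cx))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx2 = cy + dy, cx + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx2 < mask.shape[1]
                            and mask[ny, nx2] and not labels[ny, nx2]):
                        labels[ny, nx2] = nxt
                        q.append((ny, nx2))
        ys = [p[0] for p in pts]
        xs = [p[1] for p in pts]
        length = float(np.hypot(max(ys) - min(ys), max(xs) - min(xs)))
        feats.append({"area": len(pts), "length": length})
    return feats
```

A quality function in this spirit would then compare per-segment areas and lengths between the automated and reference segmentations, rather than counting isolated pixel hits.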


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Retinal Vessels/anatomy & histology , Retinoscopy/methods , Humans , Image Enhancement/methods , Image Enhancement/standards , Image Interpretation, Computer-Assisted/standards , Imaging, Three-Dimensional/standards , Observer Variation , Pattern Recognition, Automated/standards , Quality Assurance, Health Care/methods , Reproducibility of Results , Retinoscopy/standards , Sensitivity and Specificity , Spain
7.
IEEE Trans Med Imaging ; 30(1): 146-58, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20699207

ABSTRACT

This paper presents a new supervised method for blood vessel detection in digital retinal images. The method uses a neural network (NN) scheme for pixel classification and computes a 7-D vector of gray-level and moment invariants-based features for pixel representation. It was evaluated on the publicly available DRIVE and STARE databases, widely used for this purpose since they contain retinal images in which the vascular structure has been precisely marked by experts. Performance on both sets of test images is better than that of other existing solutions in the literature. The method proves especially accurate for vessel detection in STARE images: its application to this database (even when the NN was trained on the DRIVE database) outperforms all analyzed segmentation approaches. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make this blood vessel segmentation proposal suitable for retinal image computer analyses such as automated screening for early diabetic retinopathy detection.
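The gray-level plus moment-invariant recipe behind such a 7-D pixel descriptor can be sketched as follows. This is an assumption-laden illustration (five simple grey-level statistics plus the first two Hu moment invariants of the patch); the paper's actual seven features differ in detail:

```python
import numpy as np

def pixel_features(patch):
    """Illustrative 7-D feature vector for the centre pixel of a square
    grey-level patch: 5 grey-level statistics plus the first two Hu
    moment invariants. Not the paper's exact feature set."""
    patch = np.asarray(patch, dtype=float)
    c = patch.shape[0] // 2
    centre = patch[c, c]
    gray = [centre, centre - patch.min(), patch.max() - centre,
            patch.mean(), patch.std()]
    # Central moments of the patch intensities, normalised (eta), then Hu.
    y, x = np.mgrid[:patch.shape[0], :patch.shape[1]]
    m00 = patch.sum()
    xc, yc = (x * patch).sum() / m00, (y * patch).sum() / m00
    def eta(p, q):
        mu = (((x - xc) ** p) * ((y - yc) ** q) * patch).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array(gray + [hu1, hu2])
```

Each retinal pixel's feature vector is then fed to the NN classifier, which outputs vessel / non-vessel.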


Subject(s)
Fluorescein Angiography/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Retinal Vessels/anatomy & histology , Algorithms , Databases, Factual , Diabetic Retinopathy/diagnosis , Humans , Reproducibility of Results , Sensitivity and Specificity
8.
IEEE Trans Med Imaging ; 29(11): 1860-9, 2010 Nov.
Article in English | MEDLINE | ID: mdl-20562037

ABSTRACT

Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD in digital retinal images. The methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular approximation of the OD boundary. It requires, as initial information, a pixel located within the OD; for this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, with an average computational time of 1.67 s (standard deviation 0.14 s). The segmentation algorithm achieved an average overlap of 86% between automated segmentations and true OD regions, with an average computational time of 5.69 s (standard deviation 0.54 s). A discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented.
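The Circular Hough Transform at the core of the boundary approximation can be sketched minimally: every edge pixel votes for all candidate circle centres at each candidate radius, and the accumulator maximum gives the best (centre, radius). This is a generic textbook sketch, not the paper's implementation:

```python
import numpy as np

def circular_hough(edges, radii):
    """Minimal Circular Hough Transform over a boolean edge map.

    Each edge pixel casts votes for the centres of all circles of each
    candidate radius passing through it; the accumulator maximum is
    returned as (cy, cx, r)."""
    h, w = edges.shape
    acc = np.zeros((len(radii), h, w), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for ri, r in enumerate(radii):
        for y, x in zip(ys, xs):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]
```

In the OD setting, the edge map comes from the morphological and edge-detection pre-processing, and the candidate radii span plausible disc sizes around the located seed pixel.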


Subject(s)
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Optic Disk/anatomy & histology , Pattern Recognition, Automated/methods , Retinoscopy/methods , Fundus Oculi , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity