Results 1 - 6 of 6
1.
Sensors (Basel) ; 24(5)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38475100

ABSTRACT

Camera traps, an invaluable tool for biodiversity monitoring, capture wildlife activities day and night. In low-light conditions, near-infrared (NIR) imaging is commonly employed to capture images without disturbing animals. However, the reflection properties of NIR light differ from those of visible light in terms of chrominance and luminance, creating a notable gap in human perception. The objective is thus to enrich near-infrared images with color, thereby bridging this domain gap. Conventional colorization techniques are ineffective due to the difference between NIR and visible light, and regular supervised learning methods cannot be applied because paired training data are rare. Solutions to such unpaired image-to-image translation problems currently rely mostly on generative adversarial networks (GANs), but diffusion models have recently gained attention for their superior performance in various tasks. In response, we present a novel framework utilizing diffusion models for the colorization of NIR images. This framework allows efficient implementation of various methods for colorizing NIR images. We show that NIR colorization is primarily controlled by the translation of near-infrared intensities to those of visible light. The experimental evaluation of three implementations with increasing complexity shows that even a simple implementation inspired by visible-near-infrared (VIS-NIR) fusion rivals GANs. Moreover, we show that the third implementation is capable of outperforming GANs. With our study, we introduce an intersection field joining the research areas of diffusion models, NIR colorization, and VIS-NIR fusion.
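The core finding above, that NIR colorization hinges on translating near-infrared intensities into visible-light luminance, can be sketched as follows. The power-law mapping, its exponent, and the constant chrominance are illustrative placeholders (the paper learns this translation with a diffusion model); the recombination step is the standard BT.601 YCbCr-to-RGB conversion.

```python
import numpy as np

def nir_to_luminance(nir, gamma=0.8):
    """Map NIR intensities in [0, 1] to an estimated visible-light luminance.

    The power-law form and exponent are illustrative stand-ins for the
    learned intensity translation described in the abstract.
    """
    return np.clip(nir, 0.0, 1.0) ** gamma

def recombine_ycbcr(luma, cb, cr):
    """Fuse estimated luminance with predicted chrominance (BT.601 YCbCr -> RGB)."""
    r = luma + 1.402 * (cr - 0.5)
    g = luma - 0.344136 * (cb - 0.5) - 0.714136 * (cr - 0.5)
    b = luma + 1.772 * (cb - 0.5)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# A 1x2 toy "image"; neutral chrominance (cb = cr = 0.5) yields grayscale output.
nir = np.array([[0.25, 0.81]])
luma = nir_to_luminance(nir)
rgb = recombine_ycbcr(luma, np.full_like(luma, 0.5), np.full_like(luma, 0.5))
```

With neutral chrominance the output reduces to the translated luminance in all three channels, which is exactly why the intensity translation dominates the perceived result.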

2.
Sensors (Basel) ; 24(5)2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38475140

ABSTRACT

Land Surface Temperature (LST) is an important resource for a variety of tasks. The data are mostly free of charge and combine high spatial and temporal resolution with reliable data collection over a historical timeframe. When remote sensing is used to provide LST data, such as the MOD11 product derived from the MODIS sensors aboard NASA satellites, data acquisition can be hindered by clouds or cloud shadows occluding the sensors' view of different areas of the world. This makes it difficult to take full advantage of the high resolution of the data. A common solution is statistical interpolation, such as fitting polynomials or thin plate spline interpolation. These methods have difficulty incorporating additional knowledge about the research area and learning local dependencies that can help with the interpolation process. We propose a novel approach to interpolating remote sensing LST data in a fixed research area by incorporating local ground-site air temperature measurements. The two-step approach consists of learning the LST from air temperature measurements where the ground-site weather stations are located, and interpolating the remaining missing values with partial convolutions within a U-Net deep learning architecture. Our approach improves the interpolation of LST for our research area by 44% in terms of RMSE compared to state-of-the-art statistical methods. Because air temperature is used, we can provide 100% coverage even when no valid LST measurements are available. The resulting gapless coverage of high-resolution LST data will help unlock the full potential of remote sensing LST data.
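The gap-filling idea behind partial convolutions can be sketched with a single masked-average pass: invalid (cloud-occluded) pixels are replaced by the mean of their valid neighbors and the validity mask is updated, which is how partial convolutions propagate information layer by layer inside a U-Net. The window size and the dense double loop are illustrative simplifications, not the paper's trained model.

```python
import numpy as np

def partial_conv_fill(lst, mask, ksize=3):
    """One masked-average pass of partial-convolution-style gap filling.

    `lst` is a 2-D LST grid; `mask` is 1 where a pixel is valid and 0 where
    clouds occluded the sensor. Each invalid pixel becomes the mean of its
    valid neighbors, and the mask is updated accordingly.
    """
    pad = ksize // 2
    lst_p = np.pad(lst * mask, pad)   # zero out invalid pixels before padding
    mask_p = np.pad(mask, pad)
    out, new_mask = lst.astype(float).copy(), mask.copy()
    h, w = lst.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                continue              # valid pixels pass through unchanged
            win_v = lst_p[i:i + ksize, j:j + ksize]
            win_m = mask_p[i:i + ksize, j:j + ksize]
            if win_m.sum() > 0:       # only fill if at least one valid neighbor
                out[i, j] = win_v.sum() / win_m.sum()
                new_mask[i, j] = 1
    return out, new_mask

# Toy 2x2 grid (Kelvin) with one cloud-occluded pixel.
lst = np.array([[290.0, 291.0], [0.0, 292.0]])
mask = np.array([[1, 1], [0, 1]])
filled, new_mask = partial_conv_fill(lst, mask)
```

Repeating such passes fills arbitrarily large holes from their edges inward; the learned version replaces the plain average with trained convolution weights.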

3.
Sensors (Basel) ; 24(3)2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38339487

ABSTRACT

Remote sensing data represent one of the most important sources for automated yield prediction. High temporal and spatial resolution, historical record availability, reliability, and low cost are key factors in predicting yields around the world. Yield prediction as a machine learning task is challenging, as reliable ground truth data are difficult to obtain, especially since new data points can only be acquired once a year during harvest. Factors that influence annual yields are plentiful, and data acquisition can be expensive, as crop-related data often need to be captured by experts or specialized sensors. A solution to both problems can be provided by deep transfer learning based on remote sensing data. Satellite images are free of charge, and transfer learning allows recognition of yield-related patterns within countries where data are plentiful and transfers the knowledge to other domains, thus limiting the number of ground truth observations needed. Within this study, we examine the use of transfer learning for yield prediction, where the data preprocessing towards histograms is unique. We present a deep transfer learning framework for yield prediction and demonstrate its successful application in transferring knowledge gained from US soybean yield prediction to soybean yield prediction within Argentina. We perform a temporal alignment of the two domains and improve transfer learning by applying several transfer learning techniques, such as L2-SP, BSS, and layer freezing, to overcome catastrophic forgetting and negative transfer problems. Lastly, we exploit spatio-temporal patterns within the data by applying a Gaussian process. We are able to improve the performance of soybean yield prediction in Argentina by a total of 19% in terms of RMSE and 39% in terms of R² compared to predictions without transfer learning and Gaussian processes. This proof of concept for advanced transfer learning techniques for yield prediction with remote sensing data in the form of histograms can enable successful yield prediction, especially in emerging and developing countries where reliable data are usually limited.
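Of the transfer techniques named above, L2-SP is the simplest to illustrate: instead of decaying weights toward zero, shared backbone layers are pulled toward their pretrained starting point, which counteracts catastrophic forgetting. The sketch below shows only the penalty term; the layer names, coefficients, and dictionary layout are illustrative, not the paper's configuration.

```python
import numpy as np

def l2_sp_penalty(weights, anchors, alpha=0.1, beta=0.01):
    """L2-SP regularizer: anchor shared layers at their pretrained values.

    `weights` and `anchors` map layer names to parameter arrays. Layers with
    a pretrained anchor (the shared backbone) are penalized for drifting away
    from it; layers without one (e.g. a fresh regression head) fall back to
    standard L2 decay toward zero.
    """
    penalty = 0.0
    for name, w in weights.items():
        if name in anchors:   # shared layer: stay close to the source model
            penalty += alpha / 2 * np.sum((w - anchors[name]) ** 2)
        else:                 # new layer: ordinary weight decay
            penalty += beta / 2 * np.sum(w ** 2)
    return penalty

# Toy model: a "backbone" layer drifted one unit from its anchor, plus a new head.
weights = {"conv": np.full(2, 2.0), "head": np.ones(2)}
anchors = {"conv": np.ones(2)}
penalty = l2_sp_penalty(weights, anchors)
```

During fine-tuning this penalty is simply added to the task loss, so gradient steps trade prediction error against distance from the source-domain weights.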

4.
Sensors (Basel) ; 22(23)2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36501782

ABSTRACT

The development and application of modern technology are an essential basis for the efficient monitoring of species in natural habitats, both to assess changes in ecosystems, species communities, and populations and to understand important drivers of change. For estimating wildlife abundance, camera trapping combined with three-dimensional (3D) measurements of habitats is highly valuable. Additionally, 3D information improves the accuracy of wildlife detection using camera trapping. This study presents a novel approach to 3D camera trapping featuring highly optimized hardware and software. The approach employs stereo vision to infer the 3D information of natural habitats and is designated StereO CameRA Trap for monitoring of biodivErSity (SOCRATES). A comprehensive evaluation of SOCRATES shows not only a 3.23% improvement in animal detection (bounding box mAP75), but also its superior applicability for estimating animal abundance using camera trap distance sampling. The software and documentation of SOCRATES are openly provided.
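Stereo vision recovers the animal-to-camera distances that distance sampling needs via the classic pinhole relation depth = f · B / d (focal length times baseline over disparity). A minimal sketch; the focal length and baseline below are illustrative values, not SOCRATES calibration parameters.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity map to metric depth: depth = f * B / d.

    Pixels with zero (or negative) disparity have no valid match and are
    reported as infinitely far away.
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)
    np.divide(focal_px * baseline_m, d, out=depth, where=d > 0)
    return depth

# Illustrative rig: 1000 px focal length, 10 cm baseline.
# A 50 px disparity then corresponds to a 2 m animal-to-camera distance.
depths = disparity_to_depth(np.array([50.0, 0.0]), focal_px=1000.0, baseline_m=0.1)
```

Because depth is inversely proportional to disparity, range resolution degrades quadratically with distance, which is one reason the baseline and optics of a stereo camera trap need careful optimization.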


Subjects
Animals, Wild; Ecosystem; Animals; Biodiversity
5.
Syst Biol ; 71(2): 320-333, 2022 02 10.
Article in English | MEDLINE | ID: mdl-34143222

ABSTRACT

Automated species identification and delimitation is challenging, particularly in rare, and thus often scarcely sampled, species, which do not allow sufficient discrimination of infraspecific versus interspecific variation. Typical problems arising from either low or exaggerated interspecific morphological differentiation are best met by automated machine learning methods that learn efficient and effective species identification from training samples. However, limited infraspecific sampling remains a key challenge in machine learning as well. In this study, we assessed whether a data augmentation approach may help to overcome the problem of scarce training data in automated visual species identification. The stepwise augmentation of data comprised image rotation as well as visual and virtual augmentation. The visual data augmentation applies classic approaches of data augmentation and generates artificial images using a generative adversarial network (GAN) approach. Descriptive feature vectors are derived from bottleneck features of a VGG-16 convolutional neural network and are then stepwise reduced in dimensionality using Global Average Pooling and principal component analysis to prevent overfitting. Finally, the data augmentation employs synthetic additional sampling in feature space by an oversampling algorithm in vector space. Applied to four different image data sets, which include scarab beetle genitalia (Pleophylla, Schizonycha) as well as wing patterns of bees (Osmia) and cattleheart butterflies (Parides), our augmentation approach outperformed, in terms of identification accuracy, both a deep learning baseline approach with nonaugmented data and a traditional 2D morphometric approach (Procrustes analysis of scarab beetle genitalia). [Deep learning; image-based species identification; generative adversarial networks; limited infraspecific sampling; synthetic oversampling.]
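The final feature-space step can be sketched as SMOTE-style synthetic oversampling: each new sample is a random interpolation between a real feature vector and one of its k nearest neighbors. This is a minimal sketch of the general technique; the paper's exact oversampler and its parameters may differ.

```python
import numpy as np

def synthetic_oversample(features, n_new, k=2, rng=None):
    """SMOTE-style oversampling in feature space.

    Each synthetic vector lies on the segment between a randomly chosen real
    sample and one of its k nearest neighbors, so new points stay inside the
    region already occupied by the class.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(features, dtype=float)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation factor in [0, 1)
        synth.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack([X, np.array(synth)])

# Three real feature vectors, five synthetic ones appended.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
augmented = synthetic_oversample(X, n_new=5, rng=0)
```

Because the synthetic points are convex combinations of existing samples, they densify the training distribution without extrapolating beyond it, which is what makes the approach safe for scarcely sampled species.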


Assuntos
Borboletas , Algoritmos , Animais , Abelhas , Aprendizado de Máquina , Redes Neurais de Computação , Filogenia
6.
Sensors (Basel) ; 18(3)2018 Mar 02.
Article in English | MEDLINE | ID: mdl-29498702

ABSTRACT

Wine growers prefer cultivars with looser bunch architecture because of the decreased risk of bunch rot. As a consequence, grapevine breeders have to select seedlings and new cultivars with regard to appropriate bunch traits. Bunch architecture is a mosaic of different single traits, which makes phenotyping labor-intensive and time-consuming. In the present study, a fast and high-precision phenotyping pipeline was developed. The optical sensor Artec Spider 3D scanner (Artec 3D, L-1466, Luxembourg) was used to generate dense 3D point clouds of grapevine bunches under lab conditions, and an automated analysis software called 3D-Bunch-Tool was developed to extract different single 3D bunch traits, i.e., the number of berries, berry diameter, single berry volume, total volume of berries, convex hull volume of grapes, bunch width, and bunch length. The method was validated on whole bunches of different grapevine cultivars and phenotypically variable breeding material. Reliable phenotypic data were obtained, showing highly significant correlations (up to r² = 0.95 for berry number) with ground truth data. Moreover, it was shown that the Artec Spider can be used directly in the field, where the acquired data show precision comparable to the lab application. This non-invasive and non-contact field application facilitates the first high-precision phenotyping pipeline based on 3D bunch traits in large plant sets.
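The relationship between two of the extracted traits, berry diameter and single berry volume, follows directly from a spherical approximation, V = π d³ / 6. This sketch is only an illustration of that relationship; the 3D-Bunch-Tool measures volumes from the dense point cloud itself rather than assuming spheres.

```python
import math

def berry_volumes(diameters_mm):
    """Approximate each berry as a sphere: V = pi * d^3 / 6 (mm^3).

    Returns per-berry volumes and their total, mirroring the "single berry
    volume" and "total volume of berries" traits under a simplifying
    spherical assumption.
    """
    vols = [math.pi * d ** 3 / 6 for d in diameters_mm]
    return vols, sum(vols)

# Two illustrative berries of 10 mm and 12 mm diameter.
vols, total = berry_volumes([10.0, 12.0])
```

The cubic dependence on diameter is why small errors in measured berry diameter matter much more for volume traits than for berry counts.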


Assuntos
Vitis , Automação , Frutas , Fenótipo , Vinho