Results 1 - 4 of 4
1.
Int. j. high dilution res ; 21(2): 28-29, May 6, 2022.
Article in English | LILACS, HomeoIndex | ID: biblio-1396703

Abstract

The reaction of plants to ultra-high-dilute substances (UHD) is well known; however, there is still no widely accepted methodology for signaling their immediate effect. The objective of this experiment was to use non-destructive sampling to find signs of UHD soon after application to plants. Methods: The control consisted of untreated purslane [Pilea microphylla (L.) Liebm] plants imaged with a Mobius digital camera (CMOS, 1270x720 pixels) directed at a laser beam (±680 nm) emitted over the plant canopy for 220 seconds, at 6-second intervals. The same plants were then treated with Fluoricum acidum 30CH (Fl. ac. 30), and ten minutes later new images of the leaves were taken to verify the possible existence of plant reaction patterns generated by the biospeckle laser (1,2). Results: Several types of imaging were performed to choose the image pattern, and the NIR type was chosen, generated by the Mobius camera connected directly to a laptop (Figure 1). The images were processed with the THSP (Time History of the Speckle Pattern) algorithm, which generated data to compare the variation in pixel intensity with and without the presence of the UHD. Conclusion: The research showed that Fl. ac. 30 is identified in purslane plants soon after application, and that this sign persists for at least 180 minutes after application, with a significant difference from the control at the 1% probability level.
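The THSP analysis mentioned above can be sketched in a few lines. This is a minimal illustration, assuming a stack of grayscale frames held as a NumPy array, not the authors' actual pipeline; the inertia moment of the THSP co-occurrence matrix is a standard biospeckle activity measure, and the function names here are the writer's own:

```python
import numpy as np

def thsp(frames, points):
    """Time History of the Speckle Pattern: one row per tracked pixel,
    one column per frame. frames: (T, H, W) uint8; points: [(row, col)]."""
    return np.stack([frames[:, r, c] for r, c in points], axis=0)

def inertia_moment(thsp_matrix, levels=256):
    """Biospeckle activity: inertia moment of the co-occurrence matrix
    built from successive intensity values along each THSP row."""
    com = np.zeros((levels, levels))
    for row in thsp_matrix:
        for a, b in zip(row[:-1], row[1:]):
            com[a, b] += 1
    com /= com.sum()  # normalize to occurrence probabilities
    i, j = np.indices(com.shape)
    return float((com * (i - j) ** 2).sum())  # 0 for a static speckle
```

A static pixel yields an inertia moment of zero, while a fluctuating one yields a positive value, which is the kind of contrast the experiment uses to separate treated plants from controls.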


Subjects:
Computer Simulation
2.
Neuroscience Bulletin ; (6): 1454-1468, 2021.
Article in Chinese | WPRIM | ID: wpr-951946

Abstract

Visual object recognition in humans and nonhuman primates is achieved by the ventral visual pathway (ventral occipital-temporal cortex, VOTC), which shows a well-documented object-domain structure. An ongoing question is what type of information processed in the higher-order VOTC underlies such observations, with recent evidence suggesting effects of certain visual features. Combining computational vision models, an fMRI experiment using a parametric-modulation approach, and natural image statistics of common objects, we depicted the neural distribution of a comprehensive set of visual features in the VOTC, identifying voxel sensitivities to specific feature sets across geometry/shape, Fourier power, and color. The visual-feature combination pattern in the VOTC is significantly explained by the features' relationships to different types of response-action computation (fight-or-flight, navigation, and manipulation), as derived from behavioral ratings and natural image statistics. These results offer a comprehensive visual feature map in the VOTC and a plausible theoretical explanation as a mapping onto different types of downstream response-action systems.
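A parametric-modulation analysis of this kind can be sketched as a general linear model in which per-stimulus feature values serve as parametric regressors and each voxel's fitted weights index its sensitivity to each feature. The sketch below is an illustrative simplification under assumed shapes (trial-level responses rather than convolved time series), not the authors' analysis code:

```python
import numpy as np

def fit_parametric_modulation(bold, features):
    """Fit per-voxel feature sensitivities by least squares.
    bold: (T, V) responses for T trials and V voxels;
    features: (T, F) parametric feature values per trial.
    Returns (F, V): each voxel's weight on each feature."""
    # Design matrix: intercept column plus one parametric regressor per feature
    X = np.column_stack([np.ones(len(features)), features])
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas[1:]  # drop the intercept row
```

Mapping these fitted weights back onto cortex, feature by feature, is what produces a "neural distribution of visual features" of the sort the abstract describes.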

3.
Neuroscience Bulletin ; (6): 1454-1468, 2021.
Article in English | WPRIM | ID: wpr-922640

Abstract

Visual object recognition in humans and nonhuman primates is achieved by the ventral visual pathway (ventral occipital-temporal cortex, VOTC), which shows a well-documented object-domain structure. An ongoing question is what type of information processed in the higher-order VOTC underlies such observations, with recent evidence suggesting effects of certain visual features. Combining computational vision models, an fMRI experiment using a parametric-modulation approach, and natural image statistics of common objects, we depicted the neural distribution of a comprehensive set of visual features in the VOTC, identifying voxel sensitivities to specific feature sets across geometry/shape, Fourier power, and color. The visual-feature combination pattern in the VOTC is significantly explained by the features' relationships to different types of response-action computation (fight-or-flight, navigation, and manipulation), as derived from behavioral ratings and natural image statistics. These results offer a comprehensive visual feature map in the VOTC and a plausible theoretical explanation as a mapping onto different types of downstream response-action systems.


Subjects:
Animals, Humans, Brain Mapping, Magnetic Resonance Imaging, Occipital Lobe, Visual Pattern Recognition, Photic Stimulation, Temporal Lobe, Visual Pathways/diagnostic imaging, Visual Perception
4.
Braz. J. Vet. Res. Anim. Sci. (Online) ; 58(n.esp): e174951, 2021. tab, ilus, graf
Article in English | LILACS, VETINDEX | ID: biblio-1348268

Abstract

Vehicle-animal collisions represent a serious problem for roadway infrastructure. To avoid these collisions, different mitigation systems have been applied in various regions of the world. In this article, a system for detecting animals on highways is presented, using computer vision and machine-learning algorithms. The models were trained to classify two groups of animals: capybaras and donkeys. Two variants of the Yolo (You Only Look Once) convolutional neural network were used: Yolov4 and Yolov4-tiny (a lighter version of the network). Training was carried out from pre-trained models, and detection tests were performed on 147 images. The accuracies obtained were 84.87% and 79.87% for Yolov4 and Yolov4-tiny, respectively. The proposed system has the potential to improve road safety by reducing or preventing accidents with animals. (AU)
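Yolov4's post-processing relies on non-maximum suppression (NMS) to discard overlapping detections of the same animal, keeping only the highest-scoring box per object. The NumPy sketch below shows that generic step, not the authors' implementation; the threshold value is an illustrative default:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one [x1, y1, x2, y2] box against many."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep the best-scoring box, drop boxes overlapping it."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return keep
```

Run per class (capybara, donkey), this turns the network's many raw candidate boxes into the final detections that would be counted in an accuracy test like the one on the 147 images.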


Subjects:
Animals, Computer Simulation, Traffic Accidents