Results 1 - 4 of 4
1.
Comput Biol Med ; 163: 107200, 2023 09.
Article in English | MEDLINE | ID: mdl-37393786

ABSTRACT

Healthcare has benefited from deep-learning models that solve medical image classification tasks. For example, White Blood Cell (WBC) image analysis is used to diagnose pathologies such as leukemia. However, medical datasets are mostly imbalanced, inconsistent, and costly to collect, which makes it difficult to select an adequate model. Therefore, we propose a novel methodology to automatically select models for WBC classification tasks whose images were collected with different staining methods, microscopes, and cameras. The proposed methodology includes meta- and base-level learning. At the meta-level, we implemented meta-models based on prior models to acquire meta-knowledge by solving meta-tasks, using the shades of gray color constancy method. To determine the best models for new WBC tasks, we developed an algorithm that uses the meta-knowledge and the Centered Kernel Alignment metric. Next, a learning-rate finder method is employed to adapt the selected models. The adapted models (base-models) are used in an ensemble learning approach, achieving accuracy and balanced accuracy scores of 98.29 and 97.69 on the Raabin dataset, 100 on the BCCD dataset, and 99.57 and 99.51 on the UACH dataset, respectively. The results on all datasets outperform most state-of-the-art models, demonstrating our methodology's advantage of automatically selecting the best model for WBC tasks. The findings also indicate that the methodology can be extended to other medical image classification tasks where it is difficult to select an adequate deep-learning model for new tasks with imbalanced, limited, and out-of-distribution data.
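The Centered Kernel Alignment metric used for model selection above can be illustrated with a minimal linear-CKA sketch in NumPy. The linear variant is an assumption here; the paper's exact implementation is not shown in the abstract.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices
    of shape (n_samples, dim). Returns a similarity score in [0, 1];
    1 means the two representations have identical geometry up to
    rotation and isotropic scaling."""
    X = X - X.mean(axis=0)                      # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2  # cross-covariance energy
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)
```

A score near 1 indicates that two models produce very similar feature geometries for the same inputs, which is the kind of signal a model-selection algorithm can exploit.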


Subjects
Algorithms; Leukocytes; Image Processing, Computer-Assisted/methods; Microscopy
2.
Comput Biol Med ; 159: 106909, 2023 06.
Article in English | MEDLINE | ID: mdl-37071937

ABSTRACT

Speech imagery has been successfully employed in developing Brain-Computer Interfaces because it is a novel mental strategy that generates brain activity more intuitively than evoked potentials or motor imagery. There are many methods to analyze speech imagery signals, but those based on deep neural networks achieve the best results. However, more research is needed to understand the properties and features that describe imagined phonemes and words. In this paper, we analyze the statistical properties of speech imagery EEG signals from the KaraOne dataset to design a method that classifies imagined phonemes and words. Based on this analysis, we propose a Capsule Neural Network that categorizes speech imagery patterns into bilabial, nasal, consonant-vowel, and the vowels /iy/ and /uw/. The method is called Capsules for Speech Imagery Analysis (CapsK-SI). The input of CapsK-SI is a set of statistical features of EEG speech imagery signals. The Capsule Neural Network architecture is composed of a convolution layer, a primary capsule layer, and a class capsule layer. The average accuracy reached is 90.88%±7 for bilabial, 90.15%±8 for nasal, 94.02%±6 for consonant-vowel, 89.70%±8 for word-phoneme, 94.33%± for /iy/ vowel, and 94.21%±3 for /uw/ vowel detection. Finally, with the activity vectors of the CapsK-SI capsules, we generated brain maps that represent brain activity during the production of bilabial, nasal, and consonant-vowel signals.
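Capsule layers like those described above use a vector nonlinearity so that a capsule's output length can encode class probability. A generic sketch of the standard "squash" function follows; this is the textbook formulation, not necessarily the exact CapsK-SI implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squash nonlinearity: shrinks the vector norm into [0, 1)
    while preserving orientation. Short vectors are pushed toward zero,
    long vectors toward unit length, so length acts as a probability."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s
```

In a class capsule layer, the squashed vector with the largest norm indicates the predicted category (e.g., bilabial vs. nasal).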


Subjects
Brain-Computer Interfaces; Speech; Speech/physiology; Capsules; Electroencephalography/methods; Neural Networks, Computer; Brain/physiology; Imagination/physiology; Algorithms
3.
Cogn Process ; 23(1): 27-40, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34779948

ABSTRACT

Scene analysis in video sequences is a complex task for a computer vision system. Several schemes have been applied to this analysis, such as deep learning networks and traditional image processing methods. However, these methods require thorough training or manual parameter tuning to achieve accurate results, so novel methods for analyzing scene information in video sequences are needed. For this reason, this paper proposes a method for object segmentation in video sequences inspired by the structural layers of the visual cortex, called Neuro-Inspired Object Segmentation (SegNI). SegNI has a hierarchical architecture that analyzes object features such as edges, color, and motion to generate regions that represent the objects in the scene. Results on the Video Segmentation Benchmark (VSB100) dataset demonstrate that SegNI adapts automatically to videos whose scenes differ in nature, composition, and types of objects. Moreover, SegNI adapts its processing to new scene conditions without retraining, which is a significant advantage over deep learning networks.
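As a rough illustration of the lowest-level edge cue that a hierarchical front end like the one described above extracts, here is a generic Sobel edge-magnitude operator in NumPy. This is a standard textbook operator, not SegNI's actual layer.

```python
import numpy as np

def sobel_edges(gray):
    """Sobel edge magnitude of a 2-D grayscale image, one of the
    low-level cues (edges, color, motion) a hierarchical segmentation
    front end typically computes before grouping pixels into regions."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                   # vertical-gradient kernel
    pad = np.pad(gray.astype(float), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):                          # naive convolution for clarity
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)                     # gradient magnitude
```

Responses peak along intensity discontinuities, giving the region-growing stages candidate object boundaries.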


Subjects
Algorithms; Visual Cortex; Artificial Intelligence; Humans; Image Processing, Computer-Assisted; Motion (Physics)
4.
IEEE Trans Image Process ; 30: 7090-7100, 2021.
Article in English | MEDLINE | ID: mdl-34351859

ABSTRACT

Birds of prey, especially eagles and hawks, have visual acuity two to five times better than that of humans. Among the peculiar characteristics of their biological vision is that they have two types of foveae: a shallow fovea used in binocular vision and a deep fovea for monocular vision. The deep fovea allows these birds to see objects at long distances and to identify them as possible prey. Inspired by the biological functioning of the deep fovea, this paper proposes a model called DeepFoveaNet, a convolutional neural network to detect moving objects in video sequences. DeepFoveaNet emulates the monocular vision of birds of prey through two Encoder-Decoder convolutional neural network modules, combining the magnification capacity of the deep fovea with the context information of peripheral vision. Unlike the top-ranked moving-object detection algorithms in the Change Detection database (CDnet14), DeepFoveaNet depends neither on previously trained neural networks nor on a huge number of training images. In addition, its architecture allows it to learn spatiotemporal information from the video. DeepFoveaNet was evaluated on the CDnet14 database, achieved high performance, and was ranked among the ten best algorithms. The characteristics and results of DeepFoveaNet demonstrate that the model is comparable to state-of-the-art moving-object detection algorithms and that, through its deep fovea model, it can detect very small moving objects that other algorithms cannot.
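For contrast with a learned detector such as DeepFoveaNet, the classical non-learned approach to moving-object detection is background subtraction. The sketch below is a generic running-average baseline with hypothetical parameter values, shown only to make the task concrete; it is not the paper's method.

```python
import numpy as np

def moving_object_masks(frames, alpha=0.05, thresh=25.0):
    """Running-average background subtraction over grayscale frames
    (each an H x W array). Yields a boolean foreground mask per frame.
    alpha: background adaptation rate; thresh: intensity-difference
    threshold (both hypothetical defaults)."""
    background = None
    for frame in frames:
        f = frame.astype(np.float64)
        if background is None:
            background = f.copy()               # first frame seeds the model
        mask = np.abs(f - background) > thresh  # pixels that changed
        # update the background only where no motion was detected
        background = np.where(mask, background,
                              (1 - alpha) * background + alpha * f)
        yield mask
```

Such baselines break down with camera motion, shadows, and tiny targets, which is where learned spatiotemporal models like DeepFoveaNet are reported to excel.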


Subjects
Image Processing, Computer-Assisted/methods; Models, Biological; Neural Networks, Computer; Vision, Binocular/physiology; Algorithms; Animals; Databases, Factual; Eagles/physiology; Fovea Centralis/physiology; Humans; Motion/physiology; Video Recording