Results 1 - 4 of 4
1.
Int. j. morphol; 42(3): 826-832, Jun. 2024. illus, tab
Article in English | LILACS | ID: biblio-1564601

ABSTRACT

SUMMARY: The study aims to demonstrate the success of deep learning methods in sex prediction from the hyoid bone. Neck Computed Tomography (CT) images of people aged 15-94 years were reviewed retrospectively. The CT images were cleaned with the RadiAnt DICOM Viewer program (version 2023.1) so that only the hyoid bone remained. From each patient's segmented hyoid bone image, a total of 7 views were obtained in the anterior, posterior, superior, inferior, right, left, and right-anterior-upward directions. This yielded 2170 images from 310 male hyoid bones and 1820 images from 260 female hyoid bones; the resulting 3990 images were expanded to 5000 by data augmentation. The dataset was divided into 80 % for training, 10 % for testing, and 10 % for validation. The deep learning models DenseNet121, ResNet152, and VGG19 were compared. ResNet152 achieved an accuracy of 87 % and VGG19 achieved 80.2 %. The highest accuracy among the compared models, 89 %, was obtained with DenseNet121. This model had a specificity of 0.87, a sensitivity of 0.90, and an F1 score of 0.89 for females, and a specificity of 0.90, a sensitivity of 0.87, and an F1 score of 0.88 for males. Sex could thus be predicted from the hyoid bone with the deep learning methods DenseNet121, ResNet152, and VGG19, an approach that had not previously been tried on this bone. This study also brings us one step closer to strengthening and perfecting the use of these technologies, which will reduce the subjectivity of the methods and support the expert in the decision-making process of sex prediction.
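The DenseNet121 pipeline described above can be outlined with torchvision. The following is a minimal sketch, assuming the cleaned hyoid renderings are stored in an ImageFolder-style directory (hyoid_images/female, hyoid_images/male are hypothetical paths); the augmentation choice, learning rate, batch size, and epoch count are likewise assumptions, not the authors' reported configuration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# Basic preprocessing; the horizontal flip stands in for the paper's
# unspecified data-enrichment step.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

data = datasets.ImageFolder("hyoid_images", transform=transform)  # hypothetical path
n = len(data)
n_train, n_test = int(0.8 * n), int(0.1 * n)
train_set, test_set, val_set = random_split(
    data, [n_train, n_test, n - n_train - n_test])  # 80/10/10 split as in the study

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # female vs. male

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters
criterion = nn.CrossEntropyLoss()

for epoch in range(10):  # epoch count is not reported in the abstract
    for images, labels in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()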


Subjects
Humans, Male, Female, Adolescent, Adult, Middle Aged, Aged, Aged 80 and over, Young Adult, Tomography, X-Ray Computed, Sex Determination by Skeleton, Deep Learning, Hyoid Bone/diagnostic imaging, Predictive Value of Tests, Sensitivity and Specificity, Hyoid Bone/anatomy & histology
2.
Front Big Data; 7: 1384240, 2024.
Article in English | MEDLINE | ID: mdl-38812700

ABSTRACT

The Tradescantia plant is a complex system that is sensitive to environmental factors such as water supply, pH, temperature, light, radiation, impurities, and nutrient availability. It can be used as a biomonitor for environmental changes; however, the bioassays are time-consuming and subject to strong human interference, so the result may vary depending on who performs the analysis. We have developed computer vision models to study color variations in stamen hair cells of Tradescantia clone 4430, which can be stressed by air pollution and soil contamination. The study introduces a novel dataset, Trad-204, comprising single-cell images from Tradescantia clone 4430 captured during the Tradescantia stamen-hair mutation bioassay (Trad-SHM). The dataset contains images from two experiments, one focusing on air pollution by particulate matter and another on soil contaminated by diesel oil. Both experiments were carried out in Curitiba, Brazil, between 2020 and 2023. The images represent single cells with different shapes, sizes, and colors, reflecting the plant's responses to environmental stressors. An automatic classification task was developed to distinguish between blue and pink cells, and the study explores both a baseline model and three artificial neural network (ANN) architectures, namely TinyVGG, VGG-16, and ResNet34. Tradescantia proved sensitive both to airborne particulate matter concentration and to diesel oil in soil. The results indicate that the residual network architecture outperforms the other models in accuracy on both the training and testing sets. The dataset and findings contribute to the understanding of plant cell responses to environmental stress and provide valuable resources for further research in automated image analysis of plant cells. The discussion highlights the impact of turgor pressure on cell shape and the potential implications for plant physiology. The comparison between ANN architectures aligns with previous research, emphasizing the superior performance of ResNet models in image classification tasks. Artificial intelligence identification of pink cells improves counting accuracy by avoiding human errors due to differing color perception, fatigue, or inattention, in addition to facilitating and speeding up the analysis. Overall, the study offers insights into plant cell dynamics and provides a foundation for future investigations such as changes in cell morphology. This research corroborates that biomonitoring should be considered an important tool for political action, being relevant to risk assessment and to the development of new environmental public policies.
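As a rough illustration of the blue-versus-pink classification step, the sketch below builds the ResNet34 variant (the best performer reported above) via torchvision transfer learning; the frozen backbone and two-class head are assumptions about the setup, not details taken from the paper.

import torch.nn as nn
from torchvision import models

def build_cell_classifier(freeze_backbone: bool = True) -> nn.Module:
    """ResNet34 with its final layer replaced for two classes (blue, pink)."""
    model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for param in model.parameters():
            param.requires_grad = False  # train only the new classification head
    model.fc = nn.Linear(model.fc.in_features, 2)  # new head trains by default
    return model

Freezing the ImageNet backbone and training only the head is a common first pass on small single-cell datasets; unfreezing and fine-tuning the full network afterwards typically recovers additional accuracy.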

3.
Arq. bras. oftalmol; 87(5): e2022, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1527853

ABSTRACT

Purpose: This study aimed to evaluate the classification performance of pretrained convolutional neural network models, or architectures, on a fundus image dataset containing eight disease labels. Methods: The publicly available Ocular Disease Intelligent Recognition database was used for the diagnosis of eight disease labels. This database contains a total of 10,000 fundus images from both eyes of 5,000 patients covering eight categories: healthy, diabetic retinopathy, glaucoma, cataract, age-related macular degeneration, hypertension, myopia, and others. Ocular disease classification performance was investigated by constructing three pretrained convolutional neural network architectures, VGG16, Inceptionv3, and ResNet50, with the adaptive moment estimation (Adam) optimizer. These models were implemented in Google Colab, which made the task straightforward without spending hours installing the environment and supporting libraries. To evaluate the effectiveness of the models, the dataset was divided into 70%, 10%, and 20% for training, validation, and testing, respectively. For each class, the training images were augmented to 10,000 fundus images. Results: ResNet50 achieved an accuracy of 97.1%; sensitivity, 78.5%; specificity, 98.5%; and precision, 79.7%, and had the best area under the curve and final score for classifying cataract (area under the curve = 0.964, final score = 0.903). By contrast, VGG16 achieved an accuracy of 96.2%; sensitivity, 56.9%; specificity, 99.2%; precision, 84.1%; area under the curve, 0.949; and final score, 0.857. Conclusions: These results demonstrate the ability of pretrained convolutional neural network architectures to identify ophthalmological diseases from fundus images. ResNet50 is a good architecture for detecting and classifying glaucoma, cataract, hypertension, and myopia; Inceptionv3 for age-related macular degeneration and other diseases; and VGG16 for normal fundi and diabetic retinopathy.
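A minimal sketch of the ResNet50 variant with the Adam ("adaptive moment") optimizer appears below, together with how per-class sensitivity and specificity can be read off a confusion matrix; the class names, learning rate, and helper function are illustrative assumptions rather than the authors' code.

import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["normal", "diabetic_retinopathy", "glaucoma", "cataract",
           "amd", "hypertension", "myopia", "other"]  # label names assumed

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # 8-way head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

def sensitivity_specificity(cm: torch.Tensor, k: int):
    """Per-class metrics from an 8x8 confusion matrix (rows = true labels)."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity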


4.
J Imaging; 7(3), 2021 Mar 20.
Article in English | MEDLINE | ID: mdl-34460715

ABSTRACT

With the exponential growth of high-quality fake images on social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of an image, known as the copy-move technique. Traditional image processing approaches search manually for patterns related to the duplicated content, which limits their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and a need for careful selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of network depth is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the generalization problem is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice the inference time of the latter.
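A sketch of the transfer learning variant is given below: VGG-16 with its last fully connected layer swapped for a two-class (authentic vs. copy-move) head, plus the precision/recall/F1 definitions used in the comparison. The head replacement and metric helper are illustrative; the paper's custom architecture is not reproduced here.

import torch.nn as nn
from torchvision import models

# In torchvision's VGG-16, classifier[6] is the final 4096 -> 1000 layer.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)  # authentic / forged

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """P, R, and F1 from binary confusion counts, as reported in the study."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)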
