Results 1 - 5 of 5
1.
Diagnostics (Basel); 14(2), 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38248062

ABSTRACT

The integration of artificial intelligence (AI), particularly through machine learning (ML) and deep learning (DL) algorithms, marks a transformative progression in medical imaging diagnostics. This technical note elucidates a novel methodology for semantic segmentation of the vertebral column in CT scans, exemplified by a dataset of 250 patients from Riga East Clinical University Hospital. Our approach centers on the accurate identification and labeling of individual vertebrae, ranging from C1 to the sacrum-coccyx complex. Patient selection was meticulously conducted, ensuring demographic balance in age and sex, and excluding scans with significant vertebral abnormalities to reduce confounding variables. This strategic selection bolstered the representativeness of our sample, thereby enhancing the external validity of our findings. Our workflow streamlines the segmentation process by eliminating the need for volume stitching. By leveraging AI, we have introduced a semi-automated annotation system that enables initial data labeling even by individuals without medical expertise. This phase is complemented by thorough manual validation against established anatomical standards, significantly reducing the time traditionally required for segmentation. This dual approach not only conserves resources but also expedites project timelines. While this method significantly advances radiological data annotation, it is not devoid of challenges, such as the necessity for manual validation by anatomically skilled personnel and reliance on specialized GPU hardware. Nonetheless, our methodology represents a substantial leap forward in medical data semantic segmentation, highlighting the potential of AI-driven approaches to revolutionize clinical and research practices in radiology.
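To make the semi-automated annotation loop concrete, here is a minimal Python sketch. The `propose_masks` function is a hypothetical stand-in for the trained segmentation model, and the label numbering and NIfTI-based I/O are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the semi-automated annotation loop, assuming a
# trained segmentation model behind `propose_masks` (hypothetical) and
# NIfTI volumes; the label numbering is illustrative.
import numpy as np
import nibabel as nib

# 1 = C1 ... 7 = C7, 8 = T1 ... 19 = T12, 20 = L1 ... 24 = L5, 25 = sacrum-coccyx
VERTEBRA_LABELS = {i + 1: name for i, name in enumerate(
    [f"C{n}" for n in range(1, 8)]
    + [f"T{n}" for n in range(1, 13)]
    + [f"L{n}" for n in range(1, 6)]
    + ["sacrum-coccyx"]
)}

def propose_masks(volume: np.ndarray) -> np.ndarray:
    """Hypothetical model call: returns an integer label map with the
    same shape as `volume` (0 = background, labels per VERTEBRA_LABELS)."""
    raise NotImplementedError("plug in the trained segmentation model here")

def annotate(ct_path: str, out_path: str) -> None:
    img = nib.load(ct_path)
    labels = propose_masks(np.asanyarray(img.dataobj))
    # Save the AI proposal; a validator then corrects it against
    # anatomical standards before it enters the labeled dataset.
    nib.save(nib.Nifti1Image(labels.astype(np.int16), img.affine), out_path)
```

The key design point is that the model only proposes labels; the anatomically trained validator remains the authority over what enters the dataset.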

2.
Tomography; 9(5): 1772-1786, 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37888733

ABSTRACT

In this technical note, we examine the capabilities of deep convolutional neural networks (DCNNs) for diagnosing osteoporosis from cone-beam computed tomography (CBCT) scans of the mandible. The evaluation was conducted on mandibular CBCT images from 188 patients, using DCNN models built on the ResNet-101 framework. We adopted a segmented, three-stage method to assess osteoporosis: Stage 1 identified mandibular bone slices, Stage 2 pinpointed the coordinates for mandibular bone cross-sectional views, and Stage 3 computed the thickness of the mandibular bone, highlighting osteoporotic variances. The procedure showed efficacy in osteoporosis detection from CBCT scans: Stage 1 achieved 98.85% training accuracy, Stage 2 minimized L1 loss to 1.02 pixels, and the final stage's bone-thickness computation algorithm reported a mean squared error of 0.8377. These findings underline the significant potential of AI in osteoporosis identification and its promise for improved medical care. The compartmentalized method supports more robust DCNN training and greater model transparency. Moreover, the outcomes illustrate the efficacy of a modular transfer-learning approach for osteoporosis detection, even when relying on limited mandibular CBCT datasets. The methodology is accompanied by source code available on GitLab.
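The three-stage split maps naturally onto separately trained networks with task-specific heads. The sketch below, assuming PyTorch/torchvision, illustrates that pattern; the head sizes, loss choices, and thickness placeholder are assumptions, not the authors' exact configuration.

```python
# A sketch of the three-stage, transfer-learning pattern described above,
# assuming PyTorch/torchvision; details are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet101

def make_stage(out_features: int) -> nn.Module:
    """ResNet-101 backbone with a task-specific head (transfer learning)."""
    net = resnet101(weights="IMAGENET1K_V2")
    net.fc = nn.Linear(net.fc.in_features, out_features)
    return net

stage1 = make_stage(2)           # Stage 1: slice shows mandibular bone, yes/no
stage2 = make_stage(2)           # Stage 2: (x, y) coordinates of the cross-section
stage1_loss = nn.CrossEntropyLoss()
stage2_loss = nn.L1Loss()        # the note reports L1 loss measured in pixels

def bone_thickness(profile: torch.Tensor) -> float:
    """Stage 3 placeholder: count foreground pixels along a profile through
    a binary cross-section; a real measurement needs mm voxel spacing."""
    return float(profile.sum().item())
```

Splitting the pipeline this way is what makes each stage separately trainable and auditable, which is the transparency benefit the note emphasizes.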


Subjects
Osteoporosis; Humans; Cross-Sectional Studies; Osteoporosis/diagnostic imaging; Cone-Beam Computed Tomography/methods; Mandible/diagnostic imaging; Neural Networks, Computer
3.
Data Brief; 42: 108332, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35677456

ABSTRACT

With long-term changes in temperature and weather patterns, ecologically adaptable fruit varieties are becoming increasingly important in agriculture. For the selection of candidate cultivars in fruit breeding, or for yield predictions, fruit set characteristics at different growth stages need to be described and evaluated, which is largely done visually. This is a time-consuming and labor-intensive process that also requires considerable expert knowledge. The annotated dataset for Japanese quince, QuinceSet, consists of images of Japanese quince (Chaenomeles japonica) fruits taken at two phenological development stages and annotated for detection and phenotyping: first, after flowering, when the second fruit fall is over and the fruits have reached 30-50% of their final size, and second, at the ripening stage, just before the fruits are harvested. Images from both stages, classified as unripe and ripe, were annotated with ground-truth ROIs and are provided in YOLO format. The dataset contains 1515 high-resolution RGB .jpg images and the same number of annotation .txt files. The images were manually annotated using the LabelImg software, with a total of 17,171 annotations provided by experts. The images were acquired on site at the Institute of Horticulture in Dobele, Latvia. Acquisition was deliberately varied: images were taken under different weather conditions, at different times of the day, and from different capture angles. The dataset contains both fully visible quinces and quinces partially obscured by leaves. Care was also taken to ensure that the foreground, which contains the leaves, has adequate brightness with minimal shadows, while the background is darker. The presented dataset will make it possible to increase the efficiency of the breeding process and of yield estimation, and to identify and phenotype quinces more reliably; it may also be useful for breeding other crops.
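Since the labels are distributed in YOLO format, parsing them takes only a few lines. The sketch below assumes the standard YOLO convention of one "class x_center y_center width height" line per object, normalized to [0, 1]; the class-index order is an assumption.

```python
# A minimal parser for YOLO-format label files, assuming the standard
# convention described in the lead-in.
from pathlib import Path

def load_yolo_labels(txt_path: str) -> list[dict]:
    boxes = []
    for line in Path(txt_path).read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        boxes.append({
            "class": int(cls),                  # e.g. 0 = unripe, 1 = ripe (assumed)
            "x_center": float(xc), "y_center": float(yc),
            "width": float(w), "height": float(h),
        })
    return boxes

# Usage: each .jpg is paired with a .txt of the same stem, e.g.
# load_yolo_labels("quince_0001.txt") for quince_0001.jpg (hypothetical names).
```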

4.
J Imaging; 8(2), 2022 Jan 30.
Article in English | MEDLINE | ID: mdl-35200732

ABSTRACT

Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the explainability of the deep neural network (DNN) traffic sign classifier from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The resulting explanations were then used to compress the PRYSTINE CNN classifier by pruning its vague (low-impact) kernels, and the precision of the classifier was evaluated under different pruning scenarios. The proposed methodology was realised in original traffic sign and traffic light classification and explanation code. First, the kernels of the network were evaluated for explainability: a post-hoc, local, meaningful-perturbation-based forward explanation method was integrated into the model to assess the status of each kernel, making it possible to distinguish high-impact from low-impact kernels in the CNN. Second, the vague kernels of the last convolutional layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network's precision was evaluated at different kernel compression levels. It is shown that, using this XAI approach to network kernel compression, pruning 5% of the kernels leads to only a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is valuable wherever execution time and processing capacity are the binding constraints.
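The perturbation-and-prune idea can be sketched generically: silence one kernel (output channel) of a chosen convolutional layer at a time, measure how much the correct-class score drops, and then zero the lowest-impact fraction. The PyTorch sketch below is a generic illustration under those assumptions, not the PRYSTINE code.

```python
# A generic sketch of perturbation-based kernel scoring and pruning.
# Assumes a batch of one image and raw logit outputs from the model.
import torch
import torch.nn as nn

@torch.no_grad()
def kernel_impacts(model: nn.Module, conv: nn.Conv2d,
                   x: torch.Tensor, target: int) -> torch.Tensor:
    baseline = model(x)[0, target].item()
    impacts = torch.empty(conv.out_channels)
    saved = conv.weight.clone()
    for k in range(conv.out_channels):
        conv.weight[k].zero_()                    # perturb: silence kernel k
        impacts[k] = baseline - model(x)[0, target].item()
        conv.weight.copy_(saved)                  # restore the layer
    return impacts

@torch.no_grad()
def prune_lowest(conv: nn.Conv2d, impacts: torch.Tensor, frac: float = 0.05) -> None:
    n = max(1, int(frac * conv.out_channels))
    for k in torch.argsort(impacts)[:n]:          # "vague" = lowest impact
        conv.weight[int(k)].zero_()
        if conv.bias is not None:
            conv.bias[int(k)] = 0.0               # fully silence the channel
```

Scores should in practice be averaged over a representative set of inputs rather than a single image; the single-image version above is kept short for clarity.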

5.
Data Brief; 31: 105833, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32577458

ABSTRACT

Weed management technologies that can identify weeds and distinguish them from crops require artificial intelligence solutions based on computer vision, to enable the development of precisely targeted, autonomous robotic weed management systems. A prerequisite for such systems is robust and reliable object detection that can unambiguously distinguish weeds from food crops. One essential step towards precision agriculture is using annotated images to train convolutional neural networks to distinguish weeds from food crops, which can later be followed by mechanical weed removal or selective spraying of herbicides. In this data paper, we present an open-access dataset of manually annotated images for weed detection. The dataset is composed of 1118 images in which 6 food crop species and 8 weed species are identified, with 7853 annotations in total. Three RGB digital cameras were used for image capture: an Intel RealSense D435, a Canon EOS 800D, and a Sony W800. The images were taken of food crops and weeds grown under controlled-environment and field conditions, at different growth stages.
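To show how such an annotated dataset feeds detector training, here is a minimal PyTorch `Dataset` that pairs each image with its annotation file. The `images/` plus `labels/` layout and the matching file stems are assumptions about a common convention, not the published directory structure.

```python
# A minimal Dataset pairing images with per-image annotation files,
# under an assumed images/ + labels/ layout with matching stems.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

class WeedCropDataset(Dataset):
    def __init__(self, root: str):
        self.images = sorted(Path(root, "images").glob("*.jpg"))
        self.label_dir = Path(root, "labels")

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, i: int):
        img_path = self.images[i]
        image = Image.open(img_path).convert("RGB")
        # One annotation file per image, one object per line, covering
        # the 6 food crop and 8 weed classes (14 classes in total).
        lines = (self.label_dir / f"{img_path.stem}.txt").read_text().splitlines()
        return image, lines
```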
