Results 1 - 17 of 17
1.
Comput Biol Med ; 176: 108590, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38763066

ABSTRACT

Over the past two decades, machine analysis of medical imaging has advanced rapidly, opening up significant potential for several important medical applications. As complex diseases become more prevalent and case numbers rise, machine-based imaging analysis has become indispensable, serving as both a tool and an assistant to medical experts by providing valuable insights and guidance. A particularly demanding task in this area is lesion segmentation, which is difficult even for experienced radiologists. The complexity of this task highlights the urgent need for robust machine learning approaches to support medical staff. In response, we present our novel solution: the D-TrAttUnet architecture. This framework is based on the observation that different diseases often target specific organs. Our architecture comprises an encoder-decoder structure with a composite Transformer-CNN encoder and dual decoders. The encoder includes two paths: the Transformer path and the Encoders Fusion Module path. The dual-decoder configuration uses two identical decoders, each with attention gates, allowing the model to segment lesions and organs simultaneously and to integrate their segmentation losses. To validate our approach, we performed evaluations on the Covid-19 and Bone Metastasis segmentation tasks. We also investigated the adaptability of the model by testing it without the second decoder on the segmentation of glands and nuclei. The results confirmed the superiority of our approach, especially for Covid-19 infection and bone metastasis segmentation. In addition, the hybrid encoder showed excellent performance on gland and nuclei segmentation, solidifying its role in modern medical image analysis.
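
The dual-decoder idea described in the abstract, combining a lesion segmentation loss with an auxiliary organ loss, can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the head names, the use of binary cross-entropy, and the 0.5 weighting are all assumptions.

    import torch
    import torch.nn as nn

    bce = nn.BCEWithLogitsLoss()

    def dual_decoder_loss(lesion_logits, organ_logits,
                          lesion_target, organ_target, organ_weight=0.5):
        # Sum the two decoders' segmentation losses; organ_weight balances
        # the auxiliary organ task against the main lesion task (the 0.5
        # default is an assumption, not a value from the paper).
        return (bce(lesion_logits, lesion_target)
                + organ_weight * bce(organ_logits, organ_target))

    # Toy usage: batch of 2 single-channel 64x64 masks.
    lesion_logits = torch.randn(2, 1, 64, 64)
    organ_logits = torch.randn(2, 1, 64, 64)
    lesion_target = torch.randint(0, 2, (2, 1, 64, 64)).float()
    organ_target = torch.randint(0, 2, (2, 1, 64, 64)).float()
    loss = dual_decoder_loss(lesion_logits, organ_logits,
                             lesion_target, organ_target)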


Subjects
Neural Networks, Computer; Humans; COVID-19/diagnostic imaging; SARS-CoV-2; Machine Learning; Image Processing, Computer-Assisted/methods; Image Interpretation, Computer-Assisted/methods
2.
Sensors (Basel) ; 24(5)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38475092

ABSTRACT

COVID-19 analysis from medical imaging is an important task that has been intensively studied in recent years owing to the spread of the COVID-19 pandemic. Indeed, medical imaging has often been used as a complementary or primary tool to identify infected persons. Moreover, medical imaging can provide further details about COVID-19 infection, including its severity and spread, which makes it possible to evaluate the infection and follow up on the patient's state. CT scans are the most informative tool for COVID-19 infection, and the evaluation of the infection is usually performed through infection segmentation. However, segmentation is a tedious task that requires much effort and time from expert radiologists. To deal with this limitation, an efficient framework for estimating COVID-19 infection as a regression task is proposed. This paper provides an overview of the COVID-19 infection percentage estimation challenge (Per-COVID-19) held at MIA-COVID-2022. The goal of the challenge is to test the efficiency of modern deep learning methods on COVID-19 infection percentage estimation (CIPE) from CT scans. Participants had to develop an efficient deep learning approach that can learn from noisy data and cope with many challenges, including those related to the complexity of COVID-19 infection and cross-dataset scenarios. Details of the competition data, challenges, and evaluation metrics are presented, and the best-performing approaches and their results are described and discussed.
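
The challenge treats infection quantification as regression; the companion dataset paper (entry 8 below) reports the Pearson correlation coefficient, MAE, and RMSE for this task. A minimal sketch of those metrics, assuming percentage labels as plain float arrays:

    import numpy as np

    def cipe_metrics(y_true, y_pred):
        # Pearson correlation, mean absolute error, and root mean square
        # error between true and predicted infection percentages.
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        pc = np.corrcoef(y_true, y_pred)[0, 1]
        mae = np.mean(np.abs(y_true - y_pred))
        rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
        return pc, mae, rmse

    print(cipe_metrics([0.0, 25.0, 60.0], [5.0, 20.0, 55.0]))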


Subjects
COVID-19; Pandemics; Humans; Benchmarking; Radionuclide Imaging; Tomography, X-Ray Computed
3.
Med Image Anal ; 86: 102797, 2023 05.
Article in English | MEDLINE | ID: mdl-36966605

ABSTRACT

Since the emergence of the Covid-19 pandemic in late 2019, medical imaging has been widely used to analyze this disease. Indeed, CT scans of the lungs can help diagnose, detect, and quantify Covid-19 infection. In this paper, we address the segmentation of Covid-19 infection from CT scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet exploits input pyramids to preserve spatial awareness in all of the encoder layers, while DAtt-Unet is designed to guide the segmentation of Covid-19 infection inside the lung lobes. We also propose to combine these two architectures into a single one, which we refer to as PDAtt-Unet. To overcome the blurry segmentation of Covid-19 infection boundary pixels, we propose a hybrid loss function. The proposed architectures were tested on four datasets under two evaluation scenarios (intra- and cross-dataset). Experimental results showed that both PAtt-Unet and DAtt-Unet improve on Att-Unet in segmenting Covid-19 infection, and that the combined PDAtt-Unet architecture leads to further improvement. For comparison with other methods, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods. Moreover, PDEAtt-Unet is able to overcome various challenges in segmenting Covid-19 infection across the four datasets and both evaluation scenarios.
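
The paper's hybrid loss targets blurry boundary pixels; its exact form is not given in the abstract. As a stand-in, the sketch below combines binary cross-entropy with a Dice term, a common hybrid for segmentation; the composition and weighting are assumptions, not the authors' loss.

    import torch
    import torch.nn.functional as F

    def hybrid_seg_loss(logits, target, dice_weight=0.5, eps=1e-6):
        # Weighted sum of BCE and Dice losses over a binary mask.
        bce = F.binary_cross_entropy_with_logits(logits, target)
        probs = torch.sigmoid(logits)
        inter = (probs * target).sum()
        dice = 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)
        return (1 - dice_weight) * bce + dice_weight * dice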


Subjects
COVID-19; Pandemics; Humans; Tomography, X-Ray Computed; Image Processing, Computer-Assisted
4.
Sensors (Basel) ; 23(4)2023 Feb 05.
Article in English | MEDLINE | ID: mdl-36850392

ABSTRACT

The detection and quantification of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus particles in ambient waters using a membrane-based in-gel loop-mediated isothermal amplification (mgLAMP) method can play an important role in large-scale environmental surveillance for early warning of potential outbreaks. However, counting particles or cells in fluorescence microscopy is an expensive, time-consuming, and tedious task that only highly trained technicians and researchers can perform. Although such objects are generally easy to identify, manual annotation of cells is prone to fatigue errors and to arbitrariness arising from the operator's interpretation of borderline cases. In this research, we propose a method to detect and quantify multiscale, shape-variant SARS-CoV-2 fluorescent cells generated using a portable mgLAMP system and captured with a smartphone camera. The proposed method is based on the YOLOv5 algorithm, whose backbone is CSPNet, a recently proposed convolutional neural network (CNN) that duplicates gradient information within the network using a combination of DenseNet and ResNet blocks together with bottleneck convolution layers, reducing computation while maintaining high accuracy. In addition, we apply test-time augmentation (TTA) in conjunction with YOLO's one-stage multihead detection to detect cells of varying sizes and shapes. We evaluated the model using a private dataset provided by the Linde + Robinson Laboratory, California Institute of Technology, United States. The YOLOv5-s6 model achieved a mAP@0.5 score of 90.3.
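
Test-time augmentation with a stock YOLOv5 model can be reproduced in a few lines via the Ultralytics hub interface. This sketch loads generic pretrained weights, not the fine-tuned YOLOv5-s6 from the paper, and the image path is hypothetical.

    import torch

    # Generic pretrained YOLOv5s; the paper's model was trained on a
    # private mgLAMP fluorescence dataset that is not publicly available.
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    # augment=True enables test-time augmentation (scaled/flipped inputs).
    results = model('smartphone_image.jpg', augment=True)  # hypothetical path
    results.print()  # summary of detected objects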


Subjects
COVID-19; Deep Learning; Humans; SARS-CoV-2; Membranes; Microscopy, Fluorescence
5.
Artif Intell Med ; 134: 102392, 2022 12.
Article in English | MEDLINE | ID: mdl-36462909

ABSTRACT

Nowadays, breast and cervical cancers are, respectively, the first and fourth most common causes of cancer death in females. It is believed that automated systems based on artificial intelligence would enable early diagnosis, which significantly increases the chances of proper treatment and survival. Although Convolutional Neural Networks (CNNs) have achieved human-level performance in object classification tasks, the steady growth in the amount of medical data and the continuous increase in the number of classes make it difficult for them to learn new tasks without being retrained from scratch. Moreover, fine-tuning and transfer learning in deep models lead to the well-known catastrophic forgetting problem. In this paper, an Incremental Deep Tree (IDT) framework for biological image classification is proposed to address the catastrophic forgetting of CNNs, allowing them to learn new classes while maintaining acceptable accuracy on previously learnt ones. To evaluate its performance, the IDT framework is compared against three popular incremental methods, namely iCaRL, LwF, and SupportNet. The experimental results on the MNIST dataset reach 87% accuracy, and the values obtained on the BreakHis, LBC, and SIPaKMeD datasets are promising, at 92%, 98%, and 93%, respectively.
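
One of the baselines named above, LwF (Learning without Forgetting), counters catastrophic forgetting by distilling the old model's outputs while learning new classes. A minimal sketch of that objective; the temperature and weighting are conventional choices, not values from the paper.

    import torch.nn.functional as F

    def lwf_loss(new_logits, new_labels,
                 old_logits_current, old_logits_recorded,
                 T=2.0, distill_weight=1.0):
        # Cross-entropy on the new task plus a distillation term that keeps
        # old-class outputs close to those recorded before the new task.
        ce = F.cross_entropy(new_logits, new_labels)
        distill = F.kl_div(
            F.log_softmax(old_logits_current / T, dim=1),
            F.softmax(old_logits_recorded / T, dim=1),
            reduction='batchmean') * (T * T)
        return ce + distill_weight * distill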


Subjects
Artificial Intelligence; Female; Humans; Learning; Neural Networks, Computer
6.
Sensors (Basel) ; 22(10)2022 May 15.
Article in English | MEDLINE | ID: mdl-35632169

ABSTRACT

Currently, face recognition technology is the most widely used method for verifying an individual's identity. As its popularity has grown, however, so have concerns about face presentation attacks, in which a photo or video of an authorized person's face is used to obtain access to services. Based on a combination of background subtraction (BS) and convolutional neural networks (CNNs), together with an ensemble of classifiers, we propose an efficient and more robust face presentation attack detection algorithm. The algorithm combines a fully connected (FC) classifier with a majority vote (MV) over frames and is evaluated against different face presentation attack instruments (e.g., printed photos and replayed videos). By using a majority vote to determine whether the input video is genuine or not, the proposed method significantly enhances the performance of the face anti-spoofing (FAS) system. For evaluation, we considered the MSU MFSD, REPLAY-ATTACK, and CASIA-FASD databases. The obtained results compare very favorably with those of state-of-the-art methods. For instance, on the REPLAY-ATTACK database, we attained a half-total error rate (HTER) of 0.62% and an equal error rate (EER) of 0.58%, and we attained an EER of 0% on both the CASIA-FASD and MSU MFSD databases.
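
The video-level decision described above, a majority vote over per-frame classifier outputs, is simple to sketch; the label names are illustrative only.

    from collections import Counter

    def majority_vote(frame_labels):
        # Return the most frequent per-frame label as the video-level
        # decision (e.g. 'genuine' vs. 'attack').
        return Counter(frame_labels).most_common(1)[0][0]

    print(majority_vote(['genuine', 'attack', 'genuine', 'genuine']))  # genuine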


Subjects
Fetal Alcohol Spectrum Disorders; Algorithms; Automated Facial Recognition; Face/anatomy & histology; Female; Humans; Neural Networks, Computer; Pregnancy
7.
Sensors (Basel) ; 22(7)2022 Mar 22.
Article in English | MEDLINE | ID: mdl-35408048

ABSTRACT

For self-driving systems or autonomous vehicles (AVs), accurate lane-level localization is important for performing complex driving maneuvers. Classical GNSS-based methods are usually not accurate enough to provide lane-level localization in support of the AV's maneuvers. LiDAR-based localization can provide accurate localization, but the price of LiDARs remains one of the major obstacles preventing this kind of solution from becoming a widespread commodity. In this work, we therefore propose a low-cost solution for lane-level localization that combines a vision-based system with a low-cost GPS to achieve high-precision lane-level localization. Real-world, real-time experiments demonstrate that the proposed method achieves good lane-level localization accuracy, outperforming solutions based on GPS alone.

8.
J Imaging ; 7(9)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34564115

ABSTRACT

COVID-19 infection recognition is a very important step in the fight against the COVID-19 pandemic, and many methods have been used to recognize the infection, including Reverse Transcription Polymerase Chain Reaction (RT-PCR), X-ray scans, and Computed Tomography (CT) scans. Beyond recognizing the infection, CT scans can provide further information about the evolution of the disease and its severity. Given the extensive number of COVID-19 infections, estimating the infection percentage can help intensive care units free up resuscitation beds for critical cases and follow other protocols for less severe cases. In this paper, we introduce a dataset for COVID-19 percentage estimation from CT scans, where the labeling was performed by two expert radiologists. Moreover, we evaluate the performance of three Convolutional Neural Network (CNN) architectures: ResneXt-50, Densenet-161, and Inception-v3. For each architecture, we use two loss functions: MSE and Dynamic Huber. In addition, two pretraining scenarios are investigated (ImageNet pretrained models and models pretrained on X-ray data). The evaluated approaches achieved promising results for the estimation of COVID-19 infection. Inception-v3 with the Dynamic Huber loss function and X-ray pretraining achieved the best slice-level results: 0.9365, 5.10, and 9.25 for the Pearson correlation coefficient (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), respectively. At the subject level, the same approach achieved 0.9603, 4.01, and 6.79 for PCsubj, MAEsubj, and RMSEsubj, respectively. These results show that CNN architectures can provide an accurate and fast solution for estimating the COVID-19 infection percentage and monitoring the evolution of the patient's state.
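
The abstract names a "Dynamic Huber" loss without defining it. One plausible reading, offered purely as a sketch, is a Huber loss whose delta threshold is annealed over training; the linear schedule and the default deltas below are assumptions, not the authors' definition.

    import torch.nn.functional as F

    def dynamic_huber_loss(pred, target, epoch, max_epochs,
                           delta_start=10.0, delta_end=1.0):
        # Linearly shrink the Huber delta from delta_start to delta_end
        # as training progresses (schedule assumed, not from the paper).
        t = epoch / max(1, max_epochs - 1)
        delta = delta_start + t * (delta_end - delta_start)
        return F.huber_loss(pred, target, delta=delta)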

9.
Sensors (Basel) ; 21(17)2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34502769

ABSTRACT

Since the appearance of the COVID-19 pandemic (at the end of 2019 in Wuhan, China), the recognition of COVID-19 from medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained in the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans as normal, COVID-19, or community-acquired pneumonia (CAP). To this end, we propose a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we train four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we use the previously trained models with an XGBoost classifier to classify the whole CT scan as normal, COVID-19, or CAP. Our approach achieved good results on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, CAP, and normal, respectively. On the three SPGC test datasets, our approach ranked fifth overall in the COVID-19 challenge while achieving the best COVID-19 sensitivity, and it placed second on two of the three test sets.
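
The second stage, a gradient-boosted classifier over features aggregated from the slice-level CNNs, can be sketched as follows; the feature layout and placeholder data are assumptions, not the paper's pipeline.

    import numpy as np
    import xgboost as xgb

    # X: per-scan feature vectors aggregated from slice-level CNN outputs
    # (layout assumed); y: scan labels {0: normal, 1: COVID-19, 2: CAP}.
    X = np.random.rand(100, 32)
    y = np.random.randint(0, 3, 100)

    clf = xgb.XGBClassifier(objective='multi:softprob')
    clf.fit(X, y)
    scan_probs = clf.predict_proba(X[:5])  # per-class probabilities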


Subjects
COVID-19; Deep Learning; Humans; Pandemics; SARS-CoV-2; Tomography, X-Ray Computed
10.
J Imaging ; 7(3)2021 Mar 09.
Article in English | MEDLINE | ID: mdl-34460707

ABSTRACT

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The development of Whole Slide Images (WSIs) has provided the data required to create automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to assign the texture features to distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.
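
Mean-Ensemble-CNN, as named above, amounts to averaging the member networks' class probabilities; a minimal sketch, in which the model and class counts are illustrative only.

    import torch

    def mean_ensemble(prob_list):
        # Average the softmax outputs of several CNNs, then take argmax.
        return torch.stack(prob_list).mean(dim=0).argmax(dim=1)

    # Toy usage: 4 models, batch of 8 patches, 7 tissue classes.
    probs = [torch.softmax(torch.randn(8, 7), dim=1) for _ in range(4)]
    labels = mean_ensemble(probs)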

11.
Sensors (Basel) ; 21(9)2021 May 03.
Article in English | MEDLINE | ID: mdl-34063625

ABSTRACT

Most recent state-of-the-art anomaly detection methods are based on apparent-motion and appearance reconstruction networks, using the estimated error between generated and real information as detection features. These approaches achieve promising results while using only normal samples for training. In this paper, our contributions are twofold. On the one hand, we propose a flexible multi-channel framework to generate multi-type frame-level features. On the other hand, we study how detection performance can be improved through supervised learning. The multi-channel framework is based on four Conditional GANs (CGANs) that take various types of appearance and motion information as input and produce prediction information as output. These CGANs provide a better feature space in which to represent the distinction between normal and abnormal events. The difference between the generated and ground-truth information is then encoded via the Peak Signal-to-Noise Ratio (PSNR). We propose to classify these features in a classical supervised scenario by building a small training set containing some abnormal samples from the original test set of each dataset. A binary Support Vector Machine (SVM) is applied for frame-level anomaly detection. Finally, we use Mask R-CNN as a detector to perform object-centric anomaly localization. Our solution is extensively evaluated on the Avenue, Ped1, Ped2, and ShanghaiTech datasets. The experimental results demonstrate that PSNR features combined with a supervised SVM outperform the error maps computed by previous methods. We achieve state-of-the-art frame-level AUC performance on Ped1 and ShanghaiTech; in particular, on the most challenging ShanghaiTech dataset, the supervised training model outperforms the state-of-the-art unsupervised strategy by up to 9%.
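
The frame-level feature described above, the PSNR between a real frame and its CGAN prediction fed to a binary SVM, can be sketched as below; the four-channel feature layout and the placeholder data are assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def psnr(real, generated, max_val=1.0):
        # Peak Signal-to-Noise Ratio; lower values indicate frames the
        # generators predicted poorly, i.e. likely anomalies.
        mse = np.mean((real - generated) ** 2)
        return 10 * np.log10(max_val ** 2 / mse)

    # One PSNR value per CGAN channel -> a 4-D frame feature vector.
    X = np.random.rand(200, 4)         # placeholder PSNR features
    y = np.random.randint(0, 2, 200)   # 0 = normal, 1 = abnormal
    clf = SVC(kernel='rbf').fit(X, y)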

12.
Sensors (Basel) ; 21(5)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802428

ABSTRACT

The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the great efforts that have been made in this field since the appearance of COVID-19 (2019), it still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, the works carried out in the field are fragmented: there are no unified data, classes, or evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach that outperforms the individual deep learning architectures and shows promising results on both databases. Our proposed Ensemble-CNNs achieved high performance in the recognition of COVID-19 infection, with accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively, and promising overall recognition accuracies of 75.23% and 81.0% for the two scenarios. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies and comparisons.


Subjects
COVID-19/diagnostic imaging; Deep Learning; Neural Networks, Computer; Radiography, Thoracic; Algorithms; Humans; X-Rays
13.
J Med Imaging Radiat Sci ; 50(3): 425-440, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31128942

ABSTRACT

OBJECTIVE: To propose a hybrid multi-atlas fusion and correction approach to estimate a pseudo-computed tomography (pCT) image from T2-weighted brain magnetic resonance (MR) images in the context of MRI-only radiotherapy. MATERIALS AND METHODS: A set of eleven pairs of T2-weighted MR and CT brain images was included. Using leave-one-out cross-validation, atlas MR images were registered to the target MRI with multi-metric, multi-resolution deformable registration. The resulting deformations were applied to the atlas CT images, producing uncorrected pCT images. Afterward, a three-dimensional hybrid CT-number correction technique was used. This technique draws on MR intensity, spatial location, and tissue labels from MR images segmented with the fuzzy c-means algorithm, and combines them in a weighted fashion to correct the Hounsfield unit values of the uncorrected pCT images. The corrected pCT images were then fused into a final pCT image. RESULTS: The proposed hybrid approach proved effective in correcting Hounsfield unit values in terms of both qualitative and quantitative measures. The average correlation was 0.92 and 0.91 for the proposed approach when taking the mean and the median, respectively, compared with 0.86 for the uncorrected, unfused version. The average Dice similarity coefficient for bone was 0.68 and 0.72 for the fused corrected pCT images when taking the mean and the median, respectively, compared with 0.65 for the uncorrected, unfused version, indicating a significant improvement in bone estimation. CONCLUSION: A hybrid fusion and correction method is presented to estimate a pCT image from T2-weighted brain MR images.


Subjects
Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Neuroimaging/methods; Radiotherapy, Image-Guided/methods; Tomography, X-Ray Computed/methods; Algorithms; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods
14.
Article in English | MEDLINE | ID: mdl-26009857

ABSTRACT

Prostate contour delineation on Magnetic Resonance (MR) images is a challenging and important task in medical imaging, with applications in guiding biopsy, surgery, and therapy. While a fully automated method is highly desirable for this application, it can be very difficult owing to the structure of the prostate gland and its surrounding tissues. Traditional active-contour-based delineation algorithms are typically quite successful for piecewise-constant images; however, when MR images have diffuse edges or multiple similar objects in close proximity (e.g., the bladder close to the prostate), such approaches have proven unsuccessful. To mitigate these problems, we propose a new two-stage contour delineation framework based on directional active contours (DAC) that incorporates prior knowledge of the prostate shape. We first address the prostate contour delineation problem with a fast global DAC that incorporates both a statistical and a parametric shape prior model. In doing so, we exploit the global aspects of the contour delineation problem by incorporating user feedback into the delineation process, showing that only a small amount of user input can resolve ambiguous scenarios raised by the DAC. In addition, once the prostate contours have been delineated, a cost functional is designed to incorporate both user feedback and the parametric shape prior model. Using data from publicly available prostate MR datasets, including several challenging clinical cases, we highlight the effectiveness and capability of the proposed algorithm and compare it with several state-of-the-art methods.


Subjects
Image Processing, Computer-Assisted/methods; Models, Biological; Prostate/anatomy & histology; Algorithms; Humans; Magnetic Resonance Imaging; Male; Models, Statistical; Pelvic Bones/anatomy & histology; Urinary Bladder/anatomy & histology
15.
Radiat Oncol ; 10: 83, 2015 Apr 10.
Article in English | MEDLINE | ID: mdl-25890308

ABSTRACT

BACKGROUND: Cone-beam computed tomography (CBCT) image-guided radiotherapy (IGRT) systems are widely used to verify and correct the target position before each fraction, making it possible to maximize treatment accuracy and precision. In this study, we evaluate automatic three-dimensional intensity-based rigid registration (RR) methods for prostate setup correction using CBCT scans, and we study the impact of rectal distension on registration quality. METHODS: We retrospectively analyzed 115 CBCT scans of 10 prostate patients. CT-to-CBCT registration was performed using (a) global RR, (b) bony RR, or (c) bony RR refined by a local prostate RR using the CT clinical target volume (CTV) expanded with margins varying from 1 to 20 mm. After propagation of the manual CT contours, automatic CBCT contours were generated. For evaluation, a radiation oncologist manually delineated the CTV on the CBCT scans. The propagated and manual CBCT contours were compared using the Dice similarity coefficient and a measure based on the bidirectional local distance (BLD). We also conducted a blind visual assessment of the quality of the propagated segmentations. Moreover, we automatically quantified rectal distension between the CT and CBCT scans without using the manual CBCT contours, and we investigated its correlation with registration failures. To improve registration quality, the air in the rectum was replaced with soft tissue using a filter, and the results with and without filtering were compared. RESULTS: The statistical analysis of the Dice coefficients and the BLD values showed highly significant differences (p < 10⁻⁶) for the 5-mm and 8-mm local RRs versus the global, bony, and 1-mm local RRs. The 8-mm local RR provided the best compromise between accuracy and robustness (median Dice of 0.814 and a 97% success rate with air filtering in the rectum). We observed that all failures were due to high rectal distension. Moreover, the visual assessment confirmed the superiority of the 8-mm local RR over the bony RR. CONCLUSION: The most successful CT-to-CBCT RR method proved to be the 8-mm local RR. We have shown the correlation between its registration failures and rectal distension. Furthermore, we have provided a simple (easily applicable in routine) and automatic method to quantify rectal distension and predict registration failure using only the manual CT contours.
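
The Dice similarity coefficient used here to compare propagated and manual contours has a one-line definition; a minimal sketch over binary masks:

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        # Dice = 2*|A intersect B| / (|A| + |B|) for two binary masks.
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())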


Subjects
Cone-Beam Computed Tomography/methods; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy Setup Errors/prevention & control; Radiotherapy, Image-Guided/methods; Humans; Imaging, Three-Dimensional; Male; Motion; Organ Size; Organs at Risk; Pelvic Bones/diagnostic imaging; Prostate/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated; Rectum/diagnostic imaging; Retrospective Studies; Urinary Bladder/diagnostic imaging
16.
Article in English | MEDLINE | ID: mdl-24110607

ABSTRACT

We propose to evaluate automatic three-dimensional gray-value rigid registration (RR) methods for prostate localization on cone-beam computed tomography (CBCT) scans. In total, 103 CBCT scans of 9 prostate patients were analyzed. Each was registered to the planning CT scan using different methods: (a) global RR, (b) pelvic bone structure RR, and (c) bone RR refined by local soft-tissue RR using the CT clinical target volume (CTV) expanded with a 1-, 3-, 5-, 8-, 10-, 12-, 15-, or 20-mm margin. To evaluate the results, a radiation oncologist manually delineated the CTV on the CBCT scans. The Dice coefficients between each automatic CBCT segmentation, derived from the transformation of the manual CT segmentation, and the manual CBCT segmentation were calculated. Global and bone CT/CBCT RR were shown to yield insufficient results on average. Local RR with an 8-mm margin around the CTV after bone RR was found to be the best candidate for systematically and significantly improving prostate localization.


Subjects
Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Algorithms; Bone and Bones/diagnostic imaging; Humans; Male; Pelvis/diagnostic imaging; Reproducibility of Results; Tomography, X-Ray Computed/methods
17.
Article in English | MEDLINE | ID: mdl-18003279

ABSTRACT

Although bone mineral density measurements constitute one of the main clinical indicators of osteoporosis, bone fragility risk is also related to deterioration of the osseous architecture. Medical imaging provides a means of assessing bone architecture in vivo, which is particularly important in the follow-up of osteoporosis. This paper presents a method for classifying bone-texture MRI and CT images based on multifractal analysis with the 2D WTMM (Wavelet Transform Modulus Maxima) method. We propose three features for this classification: the average Hölder exponent at the peaks of the Legendre spectra, the wavelet-transform skeleton density per pixel, and the variance of the gradient directions. Preliminary results on 40 images from the two imaging modalities (MRI and CT) are encouraging: 90% of cases are correctly classified, and clustering the results yields two well-separated classes, one of healthy patients and one of osteoporotic patients.


Subjects
Algorithms; Artificial Intelligence; Bone and Bones/diagnostic imaging; Bone and Bones/pathology; Image Interpretation, Computer-Assisted/methods; Osteoporosis/diagnosis; Pattern Recognition, Automated/methods; Cluster Analysis; Humans; Image Enhancement/methods; Radiography; Reproducibility of Results; Sensitivity and Specificity