Results 1 - 20 of 29
1.
Entropy (Basel) ; 26(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38785623

ABSTRACT

This paper addresses the critical need for precise thermal modeling in electronics, where temperature significantly impacts system reliability. We emphasize the necessity of accurate temperature measurement and uncertainty quantification in thermal imaging, a vital tool across multiple industries. Current mathematical models and uncertainty measures, such as Rényi and Shannon entropies, are inadequate for the detailed informational content required in thermal images. Our work introduces a novel entropy that effectively captures the informational content of thermal images by combining local and global data, surpassing existing metrics. Validated by rigorous experimentation, this method enhances thermal images' reliability and information preservation. We also present two enhancement frameworks that integrate an optimized genetic algorithm and image fusion techniques, improving image quality by reducing artifacts and enhancing contrast. These advancements offer significant contributions to thermal imaging and uncertainty quantification, with broad applications in various sectors.
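As a rough illustration of combining local and global information in an entropy measure (the patch size, weighting, and normalization below are assumptions for the sketch, not the paper's definition), a minimal Python version could look like this:

import numpy as np

def shannon_entropy(values, bins=256):
    # Histogram-based Shannon entropy of intensities assumed to lie in [0, 1].
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def combined_entropy(image, patch=16, alpha=0.5):
    # Blend the global entropy with the mean entropy of non-overlapping patches.
    h, w = image.shape
    global_h = shannon_entropy(image.ravel())
    local = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            local.append(shannon_entropy(image[i:i+patch, j:j+patch].ravel()))
    local_h = float(np.mean(local)) if local else 0.0
    return alpha * global_h + (1 - alpha) * local_h

if __name__ == "__main__":
    img = np.random.rand(128, 128)  # stand-in for a normalized thermal image
    print(combined_entropy(img))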

2.
Heliyon ; 10(6): e27973, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38532999

ABSTRACT

Solar Photovoltaic (PV) systems are increasingly vital for enhancing energy security worldwide. However, their efficiency and power output can be significantly reduced by hotspots and snail trails, predominantly caused by cracks in PV modules. This article introduces a novel methodology for the automatic segmentation and analysis of such anomalies, utilizing unsupervised sensing algorithms coupled with 3D Augmented Reality (AR) for enhanced visualization. The methodology outperforms existing segmentation techniques, including Weka and the Meta Segment Anything Model (SAM), as demonstrated through computer simulations. These simulations were conducted using the Cali-Thermal Solar Panels and Solar Panel Infrared Image Datasets, with evaluation metrics such as the Jaccard Index, Dice Coefficient, Precision, and Recall, achieving scores of 0.76, 0.82, 0.90, 0.99, and 0.76, respectively. By integrating drone technology, the proposed approach aims to revolutionize PV maintenance by facilitating real-time, automated solar panel detection. This advancement promises substantial cost reductions, heightened energy production, and improved performance of solar PV installations. Furthermore, the innovative integration of unsupervised sensing algorithms with 3D AR visualization opens new avenues for future research and development in the field of solar PV maintenance.
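For reference, the overlap metrics reported above can be computed from binary masks as in the generic sketch below; this is not the authors' evaluation code:

import numpy as np

def segmentation_metrics(pred, truth):
    # Jaccard, Dice, precision, and recall for binary segmentation masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return {
        "jaccard": tp / (tp + fp + fn + 1e-9),
        "dice": 2 * tp / (2 * tp + fp + fn + 1e-9),
        "precision": tp / (tp + fp + 1e-9),
        "recall": tp / (tp + fn + 1e-9),
    }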

3.
Phys Med Biol ; 69(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38211307

ABSTRACT

OBJECTIVE: Liver cancer is a major global health problem, expected to increase by more than 55% by 2040. Accurate segmentation of liver tumors from computed tomography (CT) images is essential for diagnosis and treatment planning. However, this task is challenging due to variations in liver size, the low contrast between tumor and normal tissue, and noise in the images. APPROACH: In this study, we propose a novel method called the location-related enhancement network (LRENet), which enhances the contrast of liver lesions in CT images and facilitates their segmentation. LRENet consists of two steps: (1) locating the lesions and the surrounding tissues using a morphological approach and (2) enhancing the lesions and smoothing the other regions using a new loss function. MAIN RESULTS: We evaluated LRENet on two public datasets (LiTS and 3Dircadb01) and one dataset collected from a collaborative hospital (liver cancer dataset), and compared it with state-of-the-art methods on several metrics. The experiments showed that the proposed method outperformed the compared methods on all three datasets on several metrics. We also trained the Swin-Transformer network on the enhanced datasets and showed that our method could improve the segmentation performance of both the liver and its lesions. SIGNIFICANCE: Our method has potential applications in clinical diagnosis and treatment planning, as it can provide more reliable and informative CT images of liver tumors.
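A minimal sketch of what a morphology-based localization step like step (1) could look like, assuming a normalized CT slice and illustrative intensity thresholds; LRENet's actual localization and its enhancement loss are not reproduced here:

import numpy as np
from scipy import ndimage

def locate_lesion_roi(ct_slice, low=0.35, high=0.7, dilate_iter=5):
    # Rough lesion candidates from intensity windowing, cleaned up morphologically,
    # then dilated so the ROI also covers the surrounding tissue.
    candidate = (ct_slice > low) & (ct_slice < high)
    candidate = ndimage.binary_opening(candidate, iterations=2)
    candidate = ndimage.binary_closing(candidate, iterations=2)
    surrounding = ndimage.binary_dilation(candidate, iterations=dilate_iter)
    return candidate, surrounding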


Subjects
Image Processing, Computer-Assisted; Liver Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Liver Neoplasms/diagnostic imaging
4.
Math Biosci Eng ; 20(9): 16786-16806, 2023 08 23.
Article in English | MEDLINE | ID: mdl-37920034

ABSTRACT

Identifying and delineating suspicious regions in thermal breast images poses significant challenges for radiologists during the examination and interpretation of thermogram images. This paper aims to improve the differentiation between cancerous regions and the background so that regions where breast cancer (BC) is present have a uniform intensity. Furthermore, it aims to effectively segment tumors that exhibit limited contrast with the background and to extract relevant features that can distinguish tumors from the surrounding tissue. A new cancer segmentation scheme comprising two primary stages is proposed to tackle these challenges. In the first stage, an innovative image enhancement technique based on local image enhancement with a hyperbolization function is employed to significantly improve the quality and contrast of breast imagery. This technique enhances the local details and edges of the images while preserving global brightness and contrast. In the second stage, a dedicated algorithm based on an image-dependent weighting strategy is employed to accurately segment tumor regions within the given images. This algorithm assigns different weights to pixels based on their similarity to the tumor region and uses a thresholding method to separate the tumor from the background. The proposed enhancement and segmentation methods were evaluated using the Database for Mastology Research (DMR-IR). The experimental results demonstrate remarkable performance, with average segmentation accuracy, sensitivity, and specificity values of 97%, 80%, and 99%, respectively. These findings convincingly establish the superiority of the proposed method over state-of-the-art techniques. The obtained results demonstrate the potential of the proposed method to aid in the early detection of breast cancer through improved diagnosis and interpretation of thermogram images.
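A minimal sketch of tile-wise histogram hyperbolization in the spirit of the first stage, assuming a normalized grayscale image; the tile size, the constant c, and the absence of tile blending are simplifications for illustration:

import numpy as np

def hyperbolize(tile, c=0.5, bins=256):
    # Remap each tile's CDF onto a hyperbolic brightness curve in [0, 1].
    hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / max(hist.sum(), 1)
    idx = np.clip((tile * (bins - 1)).astype(int), 0, bins - 1)
    return c * ((1.0 + 1.0 / c) ** cdf[idx] - 1.0)

def local_hyperbolization(image, tile=32, c=0.5):
    # Apply hyperbolization independently per tile (a full method would blend tiles).
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            out[i:i+tile, j:j+tile] = hyperbolize(image[i:i+tile, j:j+tile], c)
    return out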


Subjects
Breast Neoplasms; Breast; Humans; Female; Breast/diagnostic imaging; Breast Neoplasms/diagnosis; Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods
5.
Article in English | MEDLINE | ID: mdl-37030670

ABSTRACT

Gaze-based implicit intention inference provides a new human-robot interaction channel through which people with disabilities can accomplish activities of daily living independently. Existing gaze-based intention inference is mainly implemented with data-driven methods that ignore prior object information in intention expression, which yields low inference accuracy. Aiming to improve inference accuracy, we propose a gaze-based hybrid method that integrates model-driven and data-driven intention inference, tailored to disability applications. Specifically, an intention is considered the combination of a verb and a noun. The objects corresponding to the nouns are regarded as intention-interpreting objects and serve as prior knowledge in the form of punished factors. A punished factor encodes object information, i.e., the priority of an object in selection. A class-specific attribute-weighted naive Bayes model learned from training data is presented to represent the relationship between intentions and objects. An intention inference engine is developed by combining the human prior knowledge and the data-driven class-specific attribute-weighted naive Bayes model. Computer simulations (i) verify the contribution of each critical component of the proposed model, (ii) evaluate the inference accuracy of the proposed model, and (iii) show that the proposed method is superior to state-of-the-art intention inference methods in terms of accuracy.
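An illustrative sketch (not the authors' model) of how a class-specific attribute-weighted naive Bayes scorer with an object-priority factor could be organized; the feature values, weights, and factor values below are toy assumptions:

import numpy as np

def weighted_nb_score(x, class_stats, attr_weights, prior_factor):
    # Score: log P(c) + log(prior_factor_c) + sum_j w_cj * log N(x_j | mu_cj, var_cj).
    scores = {}
    for c, (prior, mu, var) in class_stats.items():
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + np.log(prior_factor[c]) + np.dot(attr_weights[c], log_lik)
    return max(scores, key=scores.get)

# Toy usage with two candidate objects and three gaze-derived features.
stats = {"cup": (0.5, np.array([0.2, 0.5, 0.1]), np.array([0.04, 0.09, 0.01])),
         "book": (0.5, np.array([0.7, 0.3, 0.6]), np.array([0.04, 0.09, 0.04]))}
weights = {"cup": np.array([1.0, 0.8, 1.2]), "book": np.array([1.1, 1.0, 0.9])}
factor = {"cup": 1.0, "book": 0.6}  # hypothetical object-priority factors
print(weighted_nb_score(np.array([0.25, 0.45, 0.15]), stats, weights, factor))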


Subjects
Activities of Daily Living; Intention; Humans; Bayes Theorem; Computer Simulation
6.
IEEE Trans Cybern ; 53(9): 5448-5458, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37022843

ABSTRACT

Single-image haze removal is challenging due to its ill-posed nature. The breadth of real-world scenarios makes it difficult to find an optimal dehazing approach that works well for various applications. This article addresses this challenge by utilizing a novel robust quaternion neural network architecture for single-image dehazing. The architecture's dehazing performance and its impact on real applications, such as object detection, are presented. The proposed single-image dehazing network is based on an encoder-decoder architecture capable of taking advantage of quaternion image representation without interrupting the quaternion dataflow end to end. We achieve this by introducing a novel quaternion pixel-wise loss function and a quaternion instance normalization layer. The performance of the proposed QCNN-H quaternion framework is evaluated on two synthetic datasets, two real-world datasets, and one real-world task-oriented benchmark. Extensive experiments confirm that QCNN-H outperforms state-of-the-art haze removal procedures in visual quality and quantitative metrics. Furthermore, the evaluation shows increased accuracy and recall of state-of-the-art object detection in hazy scenes when using the presented QCNN-H method. This is the first time a quaternion convolutional network has been applied to the haze removal task.
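A rough sketch of the general shape a quaternion pixel-wise loss could take, assuming RGB pixels are embedded as pure quaternions; this is an assumption about form only, not the QCNN-H definition:

import numpy as np

def to_quaternion(img_rgb):
    # Embed each RGB pixel as a pure quaternion (0, r, g, b) -> array of shape (H, W, 4).
    zeros = np.zeros(img_rgb.shape[:2] + (1,), dtype=img_rgb.dtype)
    return np.concatenate([zeros, img_rgb], axis=-1)

def quaternion_pixel_loss(pred_rgb, target_rgb):
    # Mean quaternion norm of the per-pixel difference between prediction and target.
    diff = to_quaternion(pred_rgb) - to_quaternion(target_rgb)
    return float(np.mean(np.linalg.norm(diff, axis=-1)))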

7.
Diagnostics (Basel) ; 13(4)2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36832215

ABSTRACT

Age-related macular degeneration is a visual disorder caused by abnormalities in a part of the eye's retina and is a leading cause of blindness. The correct detection, precise localization, classification, and diagnosis of choroidal neovascularization (CNV) may be challenging if the lesion is small or if Optical Coherence Tomography (OCT) images are degraded by projection and motion artifacts. This paper aims to develop an automated quantification and classification system for CNV in neovascular age-related macular degeneration using OCT angiography images. OCT angiography is a non-invasive imaging tool that visualizes retinal and choroidal physiological and pathological vascularization. The presented system is based on a new feature extractor for macular diseases in retinal-layer OCT images, the Multi-Size Kernels ξcho-Weighted Median Patterns (MSKξMP). Computer simulations show that the proposed method (i) outperforms current state-of-the-art methods, including deep learning techniques, and (ii) achieves an overall accuracy of 99% using ten-fold cross-validation on the Duke University dataset and over 96% on the noisy Noor Eye Hospital dataset. In addition, MSKξMP performs well in binary eye disease classification and is more accurate than recent image texture descriptors.
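In the spirit of a median-pattern texture descriptor, a minimal single-kernel sketch is shown below; the actual MSKξMP operator uses multi-size kernels and ξcho weighting that are not reproduced here:

import numpy as np

def median_pattern_histogram(image):
    # For each interior pixel, compare the 8 neighbours to the local 3x3 median,
    # pack the comparisons into an 8-bit code, and return the code histogram.
    h, w = image.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            med = np.median(image[i-1:i+2, j-1:j+2])
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if image[i + di, j + dj] >= med:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)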

8.
Diagnostics (Basel) ; 13(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36766672

ABSTRACT

The World Health Organization estimates that there were around 10 million deaths due to cancer in 2020, and lung cancer was the most common type, with over 2.2 million new cases and 1.8 million deaths. While there have been advances in the diagnosis and prediction of lung cancer, there is still a need for new, intelligent methods and diagnostic tools to help medical professionals detect the disease. Since lung cancer is difficult to detect at an early stage, speedy detection and identification are crucial because they can increase a patient's chances of survival. This article focuses on developing a new tool for diagnosing lung tumors and providing thermal touch feedback using virtual reality visualization and thermal technology. The tool is intended to help identify and locate tumors and to measure the size and temperature of the tumor surface. It uses data from CT scans to create a virtual reality visualization of the lung tissue and includes a thermal display incorporated into a haptic device. The tool is also tested by touching virtual tumors in a virtual reality application. Thermal feedback could additionally be used as a sensory substitute for, or adjunct to, visual or tactile feedback. The experimental results are evaluated by comparing the performance of different algorithms and demonstrate that the proposed thermal model is effective. The results also show that the tool can estimate the characteristics of tumors accurately and that it has the potential to be used in a virtual reality application to "touch" virtual tumors. In other words, the results support the use of the tool for diagnosing lung tumors and providing thermal touch feedback using virtual reality visualization, force feedback, and thermal technology.

9.
IEEE Trans Cybern ; 53(7): 4718-4731, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35077381

ABSTRACT

Image restoration techniques process degraded images to highlight obscured details or enhance the scene with good contrast and vivid color for the best possible visibility. Poor illumination conditions cause issues such as high-level noise, unlikely color or texture distortions, nonuniform exposure, halo artifacts, and lack of sharpness. This article presents a novel end-to-end trainable deep convolutional neural network called the deep perceptual image enhancement network (DPIENet) to address these challenges. The novel contributions of the proposed work are: 1) a framework to synthesize multiple exposures from a single image and utilize the exposure variation to restore the image, and 2) a loss function based on an approximation of the logarithmic response of the human eye. Extensive computer simulations on the benchmark MIT-Adobe FiveK dataset, and user studies performed using the Google high dynamic range, DIV2K, and low-light image datasets, show that DPIENet has clear advantages over state-of-the-art techniques. It has the potential to be useful for many everyday applications, such as modernizing traditional camera technologies that currently capture images and videos with under- or overexposed regions due to sensor limitations, helping users capture appealing images in consumer photography, and serving a variety of intelligent systems, including automated driving and video surveillance applications.
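A minimal sketch of the two ideas above, under assumptions: exposure variants synthesized by simple gain/gamma changes, and a loss computed in a log domain as a crude stand-in for the eye's logarithmic response (not DPIENet's exact formulation):

import numpy as np

def synthesize_exposures(img, gains=(0.5, 1.0, 2.0), gamma=2.2):
    # Approximate exposure changes by scaling in a linearized domain, then re-encoding.
    linear = np.clip(img, 0, 1) ** gamma
    return [np.clip(g * linear, 0, 1) ** (1.0 / gamma) for g in gains]

def log_response_loss(pred, target, eps=1e-4):
    # Mean absolute difference in the log domain, for images normalized to [0, 1].
    return float(np.mean(np.abs(np.log(pred + eps) - np.log(target + eps))))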


Subjects
Image Enhancement; Photography; Humans; Image Enhancement/methods; Photography/methods; Neural Networks, Computer; Computer Simulation; Artifacts
10.
Diagnostics (Basel) ; 12(3)2022 Mar 07.
Article in English | MEDLINE | ID: mdl-35328202

ABSTRACT

Recently, many studies have shown the effectiveness of using augmented reality (AR) and virtual reality (VR) in biomedical image analysis. However, these systems do not automate the COVID-19 severity classification process. Additionally, even with the high potential of CT scan imagery to contribute to research and clinical use for COVID-19 (including two common tasks in lung image analysis: segmentation and classification of infection regions), publicly available datasets covering Algerian patients are still missing from the care system. This article proposes an automatic VR and AR platform for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic data analysis, classification, and visualization that addresses the above-mentioned challenges by (1) utilizing a novel automatic CT image segmentation and localization system to deliver critical information about the shapes and volumes of infected lungs, (2) elaborating volume measurements and a lung voxel-based classification procedure, and (3) developing an AR and VR user-friendly three-dimensional interface. The work also collected patient questionnaires and qualitative feedback from medical staff, which led to advances in scalability and higher levels of engagement and evaluation. Extensive computer simulations on CT image classification show better efficiency than state-of-the-art methods on a COVID-19 dataset of 500 Algerian patients. The developed system has been used by medical professionals for better and faster diagnosis of the disease and for providing an effective treatment plan more accurately by using real-time data and patient information.

11.
IEEE J Biomed Health Inform ; 26(4): 1650-1659, 2022 04.
Article in English | MEDLINE | ID: mdl-34606466

ABSTRACT

The application of Artificial Intelligence in dental healthcare has a very promising role due to the abundance of imagery and non-imagery-based clinical data. Expert analysis of dental radiographs can provide crucial information for clinical diagnosis and treatment. In recent years, Convolutional Neural Networks have achieved the highest accuracy in various benchmarks, including analyzing dental X-ray images to improve clinical care quality. The Tufts Dental Database, a new panoramic radiography X-ray image dataset, is presented in this paper. This dataset consists of 1000 panoramic dental radiography images with expert labeling of abnormalities and teeth. The classification of radiography images was performed on five different levels: anatomical location, peripheral characteristics, radiodensity, effects on the surrounding structure, and abnormality category. This first-of-its-kind multimodal dataset also includes the radiologist's expertise captured in the form of eye-tracking and a think-aloud protocol. The contributions of this work are: 1) a publicly available dataset that can help researchers incorporate human expertise into AI and achieve more robust and accurate abnormality detection; 2) a benchmark performance analysis of various state-of-the-art systems for dental radiograph image enhancement and image segmentation using deep learning; and 3) an in-depth review of various panoramic dental image datasets, along with segmentation and detection systems. The release of this dataset aims to propel the development of AI-powered automated abnormality detection and classification in dental panoramic radiographs, to enhance tooth segmentation algorithms, and to support distilling the radiologist's expertise into AI.


Subjects
Benchmarking; Tooth; Artificial Intelligence; Humans; Radiography, Panoramic/methods; Tooth/diagnostic imaging; X-Rays
12.
Biomed Signal Process Control ; 73: 103371, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34840591

ABSTRACT

Coronavirus disease (COVID-19) is a severe infectious disease that causes respiratory illness and has had devastating medical and economic consequences globally. Therefore, early and precise diagnosis is critical to controlling disease progression and management. Compared to the very popular RT-PCR (reverse-transcription polymerase chain reaction) method, chest CT imaging is a more consistent, sensitive, and fast approach for identifying and managing infected COVID-19 patients, specifically in epidemic areas. CT imaging uses computational methods to combine 2D X-ray projections and transform them into 3D images. One major drawback of CT scans in diagnosing COVID-19 is their tendency to produce false negatives, especially in early infection. This article aims to combine novel CT imaging tools and Virtual Reality (VR) technology to generate an automated system for accurately screening COVID-19 disease and navigating 3D visualizations of medical scenes. The key benefits of this system are that a) it offers stereoscopic depth perception, b) it gives better insight and comprehension into the overall imaging data, c) it allows doctors to visualize 3D models, manipulate them, study the inside of the 3D data, and take several kinds of measurements, and d) it supports real-time interactivity and accurately visualizes dynamic 3D volumetric data. The tool provides novel visualizations for medical practitioners to identify and analyze changes in the shape of COVID-19 infections. The second objective of this work is to generate, for the first time, a CT scan dataset of African COVID-19 patients, containing 224 patients positive for infection and CT scans of 70 normal patients. Computer simulations demonstrate the proposed method's effectiveness compared with state-of-the-art baseline methods. The results have also been evaluated with medical professionals. The developed system could be used for professional medical education and training and as a telehealth VR platform.

13.
IEEE J Biomed Health Inform ; 25(6): 1852-1863, 2021 06.
Article in English | MEDLINE | ID: mdl-33788696

ABSTRACT

The coronavirus (COVID-19) pandemic has been adversely affecting people's health globally. To diminish the effect of this widespread pandemic, it is essential to detect COVID-19 cases as quickly as possible. Chest radiographs are less expensive than CT images and are a widely available imaging modality for detecting chest pathology. They play a vital role in early prediction and in developing treatment plans for suspected or confirmed COVID-19 chest infection patients. In this paper, a novel shape-dependent Fibonacci-p pattern-based feature descriptor, used with a machine learning approach, is proposed. Computer simulations show that the presented system (1) increases the effectiveness of differentiating COVID-19, viral pneumonia, and normal conditions, (2) is effective on small datasets, and (3) has faster inference time than deep learning methods with comparable performance. Computer simulations are performed on two publicly available datasets: (a) the Kaggle dataset and (b) the COVIDGR dataset. To assess the performance of the presented system, various evaluation parameters, such as accuracy, recall, specificity, precision, and F1-score, are used. Nearly 100% differentiation between normal and COVID-19 radiographs is observed for the three-class classification scheme using the lung-area-specific Kaggle radiographs, while a recall of 72.65 ± 6.83 and a specificity of 77.72 ± 8.06 are observed for the COVIDGR dataset.
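As a hedged illustration only, Fibonacci p-numbers (F(n) = F(n-1) + F(n-p-1)) can serve as the bit weights of an LBP-style neighborhood code; the authors' shape-dependent construction is more involved and is not reproduced here:

import numpy as np

def fibonacci_p(count, p=1):
    # Generate the first `count` Fibonacci p-numbers (p=1 gives the ordinary Fibonacci sequence).
    seq = [1] * (p + 1)
    while len(seq) < count:
        seq.append(seq[-1] + seq[-(p + 1)])
    return seq[:count]

def fibonacci_p_code(patch3x3, p=1):
    # Compare the 8 neighbours to the centre pixel and weight the comparisons
    # by Fibonacci p-numbers instead of the usual powers of two.
    center = patch3x3[1, 1]
    neighbours = np.delete(patch3x3.ravel(), 4)
    weights = np.array(fibonacci_p(8, p))
    return int(np.sum((neighbours >= center) * weights))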


Subjects
COVID-19/diagnostic imaging; Pattern Recognition, Automated; Pneumonia, Viral/diagnostic imaging; Automation; COVID-19/virology; Computer Simulation; Humans; Machine Learning; Pneumonia, Viral/virology; Radiography, Thoracic; SARS-CoV-2/isolation & purification; Sensitivity and Specificity; Tomography, X-Ray Computed
14.
Pattern Recognit ; 114: 107747, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33162612

ABSTRACT

History shows that an infectious disease such as COVID-19 can stun the world quickly, causing massive losses to health and having a profound impact on the lives of billions of people, from both a safety and an economic perspective. For controlling the COVID-19 pandemic, the best strategy is to provide early intervention to stop the spread of the disease. In general, Computed Tomography (CT) is used to detect tumors and to assess pneumonia, lung disease, tuberculosis, emphysema, and other diseases of the pleura (the membrane covering the lungs). Disadvantages of CT imaging are inferior soft-tissue contrast compared to MRI and radiation exposure, since it is X-ray based. Lung CT image segmentation is a necessary initial step for lung image analysis. The main challenges for segmentation algorithms are exacerbated by intensity inhomogeneity, the presence of artifacts, and the closeness in gray level of different soft tissues. The goal of this paper is to design and evaluate an automatic tool for COVID-19 lung infection segmentation and measurement using chest CT images. Extensive computer simulations show the better efficiency and flexibility of this end-to-end learning approach to CT image segmentation with image enhancement compared to state-of-the-art segmentation approaches, namely GraphCut, Medical Image Segmentation (MIS), and Watershed. Experiments were performed on the COVID-CT dataset, containing 275 CT scans that are positive for COVID-19, and on new data acquired from the EL-BAYANE center for Radiology and Medical Imaging. The mean values of accuracy, sensitivity, F-measure, precision, MCC, Dice, Jaccard, and specificity are 0.98, 0.73, 0.71, 0.73, 0.71, 0.71, 0.57, and 0.99, respectively, which is better than the methods mentioned above. The achieved results prove that the proposed approach is more robust, accurate, and straightforward.

15.
IEEE Trans Pattern Anal Mach Intell ; 42(3): 509-520, 2020 03.
Article in English | MEDLINE | ID: mdl-30507525

ABSTRACT

Cross-modality face recognition is an emerging topic due to the widespread usage of different sensors in day-to-day life applications. The development of face recognition systems relies greatly on existing databases for evaluation and for obtaining training examples for data-hungry machine learning algorithms. However, currently, there is no publicly available face database that includes more than two modalities for the same subject. In this work, we introduce the Tufts Face Database, which includes images acquired in various modalities: photograph images, thermal images, near-infrared images, a recorded video, a computerized facial sketch, and 3D images of each volunteer's face. An Institutional Review Board protocol was obtained, and images were collected from students, staff, faculty, and their family members at Tufts University. The database includes over 10,000 images from 113 individuals from more than 15 different countries and of various gender identities, ages, and ethnic backgrounds. The contributions of this work are: 1) a detailed description of the content and acquisition procedure for images in the Tufts Face Database; 2) the Tufts Face Database being made publicly available to researchers worldwide, which will allow assessment and creation of more robust, consistent, and adaptable recognition algorithms; and 3) a comprehensive, up-to-date review of face recognition systems and face datasets.


Subjects
Automated Facial Recognition/methods; Databases, Factual; Image Processing, Computer-Assisted/methods; Adolescent; Adult; Aged; Algorithms; Benchmarking; Child; Child, Preschool; Face/anatomy & histology; Face/diagnostic imaging; Female; Humans; Imaging, Three-Dimensional; Male; Middle Aged; Young Adult
16.
Adv Urol ; 2019: 3590623, 2019.
Article in English | MEDLINE | ID: mdl-31164907

ABSTRACT

OBJECTIVE: To develop software to assess the potential aggressiveness of an incidentally detected renal mass using images. METHODS: Thirty randomly selected patients who underwent nephrectomy for renal cell carcinoma (RCC) had their images independently reviewed by engineers. Tumor "roughness" was based on an image algorithm quantifying tumor topographic features visualized on computed tomography (CT) scans. Univariate and multivariate statistical analyses were utilized for analysis. RESULTS: We investigated 30 subjects who underwent partial or radical nephrectomy. After excluding poorly rendered images, 27 patients remained (benign cyst = 1, oncocytoma = 2, clear cell RCC = 15, papillary RCC = 7, and chromophobe RCC = 2). The mean roughness scores for these masses are 1.18, 1.16, 1.27, 1.52, and 1.56 units, respectively (p < 0.004). Renal mass histology was correlated with tumor roughness (Pearson's, p = 0.02). However, tumor size itself was larger in benign tumors (p = 0.1). Linear regression analysis indicated that the roughness score was the most influential variable in the model, with all other demographics being equal, including tumor size (p = 0.003). CONCLUSION: Using basic CT imaging software, tumor topography ("roughness") can be quantified and correlated with histology, such as RCC subtype, and could help determine the aggressiveness of small renal masses.
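One plausible way (an assumption, not the study's algorithm) to reduce tumor topography to a single roughness number is the ratio of the mass outline's perimeter to that of its convex hull, where values near 1 indicate a smooth outline; a sketch using scikit-image:

from skimage.measure import perimeter
from skimage.morphology import convex_hull_image

def roughness_score(mask):
    # Ratio of the segmented mass perimeter to its convex-hull perimeter (>= 1; larger = rougher).
    mask = mask.astype(bool)
    hull = convex_hull_image(mask)
    return perimeter(mask) / max(perimeter(hull), 1e-9)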

18.
IEEE Rev Biomed Eng ; 8: 98-113, 2015.
Article in English | MEDLINE | ID: mdl-25055385

ABSTRACT

Prostate cancer (PCa) is currently diagnosed by microscopic evaluation of biopsy samples. Since tissue assessment heavily relies on the pathologist's level of expertise and interpretation criteria, it is still a subjective process with high intra- and interobserver variability. Computer-aided diagnosis (CAD) may have a major impact on the detection and grading of PCa by reducing the pathologist's reading time and increasing the accuracy and reproducibility of diagnostic outcomes. However, the complexity of prostatic tissue and the large volumes of data generated by biopsy procedures make the development of CAD systems for PCa a challenging task. The problem of automated diagnosis of prostatic carcinoma from histopathology has received a lot of attention. As a result, a number of CAD systems have been proposed for quantitative image analysis and classification. This review aims to provide a detailed description of selected literature in the field of CAD of PCa, emphasizing the role of texture analysis methods in tissue description. It includes a review of image analysis tools for image preprocessing, feature extraction, classification, and validation techniques used in PCa detection and grading, as well as future directions in pursuit of better texture-based CAD systems.
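As an example of the kind of texture features such CAD pipelines commonly compute, a gray-level co-occurrence (Haralick-style) sketch using scikit-image (>= 0.19 is assumed for the function names) is shown below; it is not tied to any specific system reviewed in the paper:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    # Co-occurrence statistics averaged over two distances and two directions.
    glcm = graycomatrix(gray_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}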


Subjects
Histocytochemistry/methods; Image Interpretation, Computer-Assisted/methods; Prostatic Neoplasms/diagnosis; Prostatic Neoplasms/pathology; Humans; Male; Neoplasm Grading; Pattern Recognition, Automated; Prostate/pathology
19.
Int J Biomed Imaging ; 2014: 937849, 2014.
Article in English | MEDLINE | ID: mdl-25177347

ABSTRACT

Medical imaging systems often require image enhancement, such as improving the image contrast, to provide medical professionals with the best visual image quality. This helps in anomaly detection and diagnosis. Most enhancement algorithms are iterative processes that require many parameters to be selected. Poor or nonoptimal parameter selection can have a negative effect on the enhancement process. In this paper, a quantitative metric for measuring image quality is used to select the optimal operating parameters for the enhancement algorithms. A variety of measures evaluating the quality of an image enhancement are presented, along with each measure's basis for analysis, namely image content and image attributes. We also provide guidelines for systematically choosing the proper measure of image quality for medical images.
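A minimal sketch of the parameter-selection idea, assuming an EME-like block-contrast measure as the quality metric and a gamma correction as the enhancement being tuned; both choices are illustrative, not the paper's specific metric or algorithm:

import numpy as np

def eme(image, block=8, eps=1e-4):
    # Average block-wise log contrast (max/min) over non-overlapping blocks.
    h, w = image.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = image[i:i+block, j:j+block]
            scores.append(20 * np.log10((b.max() + eps) / (b.min() + eps)))
    return float(np.mean(scores))

def best_gamma(image, candidates=(0.4, 0.6, 0.8, 1.0, 1.4, 2.0)):
    # Pick the enhancement parameter whose output scores highest under the metric.
    return max(candidates, key=lambda g: eme(np.clip(image, 0, 1) ** g))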

20.
Int J Biomed Imaging ; 2014: 931375, 2014.
Article in English | MEDLINE | ID: mdl-25132844

ABSTRACT

Edge detection is a key step in medical image processing. It is widely used to extract features, perform segmentation, and further assist in diagnosis. A poor-quality edge map can result in false alarms and misses in cancer detection algorithms. Therefore, it is necessary to have a reliable edge measure to assist in selecting the optimal edge map. Existing reference-based edge measures require a ground-truth edge map to evaluate the similarity between the generated edge map and the ground truth. However, ground-truth edge maps are not available for medical images. Therefore, a non-reference edge measure is ideal for medical image processing applications. In this paper, a non-reference reconstruction-based edge map evaluation (NREM) is proposed. The theoretical basis is that a good edge map keeps the structure and details of the original image and thus yields a good reconstructed image. Based on this concept, NREM compares the similarity between the reconstructed image and the original image. The edge measure is used for selecting the optimal edge detection algorithm and the optimal parameters for that algorithm. Experimental results show that the quantitative evaluations given by the edge measure correlate well with human visual analysis.
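A rough sketch of the reconstruction idea, with assumed choices of nearest-neighbor filling from edge-pixel intensities and SSIM as the similarity score (the paper's own reconstruction and comparison steps differ in detail); images are assumed normalized to [0, 1]:

import numpy as np
from scipy.interpolate import griddata
from skimage.metrics import structural_similarity

def nrem_like_score(image, edge_map):
    # Keep intensities only at edge locations, rebuild the rest by nearest-neighbor
    # interpolation, and score how close the reconstruction is to the original.
    ys, xs = np.nonzero(edge_map)
    if ys.size == 0:
        return 0.0
    grid_y, grid_x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    recon = griddata((ys, xs), image[ys, xs], (grid_y, grid_x), method="nearest")
    return float(structural_similarity(image, recon, data_range=1.0))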
