Results 1 - 20 of 33
1.
Curr Med Imaging ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38462827

ABSTRACT

BACKGROUND: Lumbar disc herniation (LDH) is a common clinical condition causing lower back and leg pain. Accurate segmentation of the lumbar discs is crucial for assessing and diagnosing LDH. Magnetic resonance imaging (MRI) can reveal the condition of articular cartilage. However, manual segmentation of MRI images is burdensome for physicians and lacks efficiency. OBJECTIVE: In this study, we propose a method that combines UNet and superpixel segmentation to address the loss of detailed information in the feature extraction phase, which leads to poor segmentation results at object edges. The aim is to provide a reproducible solution for diagnosing patients with lumbar disc herniation. METHODS: We build on the UNet network structure. Firstly, dense blocks are inserted into the UNet network, and training is performed using the Swish activation function. The Dense-UNet model extracts semantic features from the images and obtains rough semantic segmentation results. Then, an adaptive-scale superpixel segmentation algorithm is applied to segment the input images into superpixel images. Finally, high-level abstract semantic features are fused with the detailed information of the superpixels to obtain edge-optimized semantic segmentation results. RESULTS: Evaluation on a private dataset of multifidus muscles in magnetic resonance images demonstrates that, compared to other segmentation algorithms, this algorithm exhibits better semantic segmentation performance in detailed areas such as object edges. Compared to UNet, it achieves a 9.5% improvement in the Dice Similarity Coefficient (DSC) and an 11.3% improvement in the Jaccard Index (JAC). CONCLUSION: The experimental results indicate that this algorithm improves segmentation performance while reducing computational complexity.
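The edge-refinement idea described above can be illustrated with a minimal sketch: a coarse mask (standing in for the Dense-UNet output) is snapped to SLIC superpixel boundaries by majority vote. This is a generic illustration, not the authors' adaptive-scale superpixel fusion; the function name, parameter values, and the scikit-image >= 0.19 `channel_axis` keyword are assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def refine_with_superpixels(image, coarse_mask, n_segments=800, compactness=10.0):
    """Snap a coarse binary mask to superpixel boundaries by per-superpixel majority vote."""
    # A grayscale 2D input is assumed; channel_axis=None requires scikit-image >= 0.19.
    labels = slic(image, n_segments=n_segments, compactness=compactness,
                  start_label=0, channel_axis=None)
    refined = np.zeros_like(coarse_mask, dtype=bool)
    for sp in np.unique(labels):
        region = labels == sp
        # Keep whichever label dominates the coarse prediction inside this superpixel.
        refined[region] = coarse_mask[region].mean() > 0.5
    return refined
```

Because every pixel in a superpixel receives the same label, the refined boundary follows the superpixel edges rather than the blurred CNN output.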

2.
Brain Sci ; 13(6)2023 May 31.
Article in English | MEDLINE | ID: mdl-37371371

ABSTRACT

In the present scenario, Alzheimer's Disease (AD) is one of the incurable neurodegenerative disorders, which accounts for nearly 60% to 70% of dementia cases. Currently, several machine-learning approaches and neuroimaging modalities are utilized for diagnosing AD. Among the available neuroimaging modalities, functional Magnetic Resonance Imaging (fMRI) is extensively utilized for studying brain activities related to AD. However, analyzing complex brain structures in fMRI is a time-consuming and complex task, so a novel automated model was proposed in this manuscript for early diagnosis of AD using fMRI images. Initially, the fMRI images are acquired from an online dataset: the Alzheimer's Disease Neuroimaging Initiative (ADNI). Further, the quality of the acquired fMRI images is improved by implementing a normalization technique. Then, the Segmentation by Aggregating Superpixels (SAS) method is implemented for segmenting the brain regions (AD, Normal Controls (NC), Mild Cognitive Impairment (MCI), Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and Significant Memory Concern (SMC)) from the denoised fMRI images. From the segmented brain regions, feature vectors are extracted by employing Gabor and Gray Level Co-Occurrence Matrix (GLCM) techniques. The obtained feature vectors are dimensionally reduced by implementing the Honey Badger Optimization Algorithm (HBOA) and fed to the Multi-Layer Perceptron (MLP) model for classifying the fMRI images as AD, NC, MCI, EMCI, LMCI, and SMC. The extensive investigation indicated that the presented model attained a classification accuracy of 99.44%, a Dice Similarity Coefficient (DSC) of 88.90%, a Jaccard Coefficient (JC) of 90.82%, and a Hausdorff Distance (HD) of 88.43%. The attained results are better than those of conventional segmentation and classification models.
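As a rough illustration of the Gabor and GLCM feature-extraction step, the sketch below computes a few standard texture statistics for one grayscale patch. It is not the paper's feature configuration: the distances, angles, Gabor frequencies, and the assumption of 8-bit input with scikit-image >= 0.19 (graycomatrix/graycoprops naming) are choices made here for illustration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gabor

def texture_features(patch_u8):
    """GLCM statistics plus mean Gabor magnitudes for one uint8 grayscale patch."""
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, prop).mean()
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    patch_f = patch_u8.astype(float)
    for freq in (0.1, 0.3):                                     # two illustrative Gabor frequencies
        real, imag = gabor(patch_f, frequency=freq)
        feats.append(np.sqrt(real ** 2 + imag ** 2).mean())     # mean response magnitude
    return np.asarray(feats, dtype=float)
```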

3.
Food Res Int ; 169: 112866, 2023 07.
Article in English | MEDLINE | ID: mdl-37254314

ABSTRACT

This study developed a novel method for monitoring cheese contamination with Clostridium spores non-invasively using hyperspectral imaging (HSI). The ability of HSI to quantify Clostridium metabolites was investigated with control cheese and cheese manufactured with milk contaminated with Clostridium tyrobutyricum, Clostridium butyricum and Clostridium sporogenes. Microbial count, HSI and SPME-GC-MS data were obtained over 10 weeks of storage. The developed method using HSI successfully quantified butyric acid (R2 = 0.91, RPD = 3.38), a major product of Clostridium metabolism in cheese. This study opens a new avenue to monitor the spatial and temporal development of late blowing defect (LBD) in cheese using fast and non-invasive measurement.


Subjects
Cheese ; Vacuum ; Cheese/analysis ; Hyperspectral Imaging ; Clostridium/metabolism ; Butyric Acid/metabolism
4.
J Ambient Intell Humaniz Comput ; 14(7): 9217-9232, 2023.
Article in English | MEDLINE | ID: mdl-36310644

ABSTRACT

In the computer vision segmentation field, superpixel identity has become an important index in recent segmentation algorithms, especially for medical images. The Simple Linear Iterative Clustering (SLIC) algorithm is one of the most popular superpixel methods because of its robustness, low sensitivity to image type, and benefit to boundary recall in different kinds of image processing. Recently, COVID-19 severity increased amid the lack of an effective treatment or vaccine. As the coronavirus spreads in an unknown manner, there is a strong need to segment the infected lung regions, no matter how small, for fast tracking and early detection. This is difficult to achieve with traditional segmentation techniques. From this perspective, this paper presents an efficient modified central force optimization (MCFO)-based SLIC segmentation algorithm to analyze chest CT images for detecting positive COVID-19 cases. The performance of the proposed MCFO-based SLIC segmentation algorithm is evaluated and compared with a thresholding segmentation algorithm using different evaluation metrics such as accuracy, boundary recall, F-measure, similarity index, MCC, Dice, and Jaccard. The outcomes demonstrate that the proposed MCFO-based SLIC segmentation algorithm achieves better detection of small infected regions in CT lung scans than thresholding segmentation.
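For reference, a minimal sketch of three of the overlap metrics listed above (Dice, Jaccard and MCC) for binary lesion masks is given below; it is generic evaluation code, not the paper's implementation, and the small epsilon terms are an assumption to avoid division by zero on empty masks.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, Jaccard and Matthews correlation coefficient for two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    eps = 1e-8                                        # guards against empty masks
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    mcc = (tp * tn - fp * fn) / (np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps)
    return {"dice": dice, "jaccard": jaccard, "mcc": mcc}
```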

5.
Bioengineering (Basel) ; 9(8)2022 Jul 29.
Article in English | MEDLINE | ID: mdl-36004876

ABSTRACT

Lung segmentation of chest X-ray (CXR) images is a fundamental step in many diagnostic applications. Most lung field segmentation methods reduce the image size to speed up the subsequent processing time. Then, the low-resolution result is upsampled to the original high-resolution image. Nevertheless, the image boundaries become blurred after the downsampling and upsampling steps. It is necessary to alleviate blurred boundaries during downsampling and upsampling. In this paper, we incorporate the lung field segmentation with the superpixel resizing framework to achieve the goal. The superpixel resizing framework upsamples the segmentation results based on the superpixel boundary information obtained from the downsampling process. Using this method, not only can the computation time of high-resolution medical image segmentation be reduced, but also the quality of the segmentation results can be preserved. We evaluate the proposed method on JSRT, LIDC-IDRI, and ANH datasets. The experimental results show that the proposed superpixel resizing framework outperforms other traditional image resizing methods. Furthermore, combining the segmentation network and the superpixel resizing framework, the proposed method achieves better results with an average time score of 4.6 s on CPU and 0.02 s on GPU.
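A rough sketch of this resizing idea follows: segment at low resolution, upsample the soft prediction, and snap it to full-resolution SLIC superpixel boundaries. Here `segment_lowres` is a placeholder for any lung-field segmenter, and the scale factor, superpixel count, and scikit-image >= 0.19 `channel_axis` keyword are assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def superpixel_resize_segmentation(cxr, segment_lowres, scale=0.25, n_segments=1500):
    """Upsample a low-resolution lung probability map along superpixel boundaries."""
    h, w = cxr.shape
    low = resize(cxr, (int(h * scale), int(w * scale)), anti_aliasing=True)
    prob_low = segment_lowres(low)                   # low-resolution probability map in [0, 1]
    prob_up = resize(prob_low, (h, w), order=1)      # bilinear upsampling gives blurry edges
    sp = slic(cxr, n_segments=n_segments, compactness=0.1,
              start_label=0, channel_axis=None)      # superpixels on the full-resolution CXR
    mask = np.zeros((h, w), dtype=bool)
    for label in np.unique(sp):
        region = sp == label
        mask[region] = prob_up[region].mean() > 0.5  # final edges follow superpixel borders
    return mask
```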

6.
Sensors (Basel) ; 22(14)2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35890992

ABSTRACT

Semantic segmentation for accurate visual perception is a critical task in computer vision. In principle, the automatic classification of dynamic visual scenes using predefined object classes remains unresolved. The challenging problems of learning deep convolutional neural networks, specifically ResNet-based DeepLabV3+ (the most recent version), are threefold. The problems arise due to (1) biased centric exploitations of filter masks, (2) lower representational power of residual networks due to identity shortcuts, and (3) a loss of spatial relationship by using per-pixel primitives. To solve these problems, we present a proficient approach based on DeepLabV3+, along with an added evaluation metric, namely, Unified DeepLabV3+ and S3core, respectively. The presented unified version reduced the effect of biased exploitations via additional dilated convolution layers with customized dilation rates. We further tackled the problem of representational power by introducing non-linear group normalization shortcuts to solve the focused problem of semi-dark images. Meanwhile, to keep track of the spatial relationships in terms of the global and local contexts, geometrically bunched pixel cues were used. We accumulated all the proposed variants of DeepLabV3+ to propose Unified DeepLabV3+ for accurate visual decisions. Finally, the proposed S3core evaluation metric was based on the weighted combination of three different accuracy measures, i.e., the pixel accuracy, IoU (intersection over union), and Mean BFScore, as robust identification criteria. Extensive experimental analysis performed on the CamVid dataset confirmed the applicability of the proposed solution for autonomous vehicles and robotics for outdoor settings. The experimental analysis showed that the proposed Unified DeepLabV3+ outperformed DeepLabV3+ by a margin of 3% in terms of the class-wise pixel accuracy, along with a higher S3core, depicting the effectiveness of the proposed approach.
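As a worked illustration of a weighted composite score in the spirit of S3core, the snippet below combines pixel accuracy, mean IoU and mean BFScore; the abstract does not state the weights, so equal weights and precomputed per-measure inputs are assumed.

```python
import numpy as np

def composite_score(pixel_accuracy, mean_iou, mean_bfscore, weights=(1/3, 1/3, 1/3)):
    """Weighted combination of three accuracy measures, each expected in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise so the combined score stays in [0, 1]
    return float(np.dot(w, [pixel_accuracy, mean_iou, mean_bfscore]))

# Example: composite_score(0.92, 0.78, 0.70) -> 0.80 with equal weights.
```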


Subjects
Image Processing, Computer-Assisted ; Semantics ; Image Processing, Computer-Assisted/methods ; Neural Networks, Computer
7.
Comput Methods Programs Biomed ; 222: 106947, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35749885

ABSTRACT

BACKGROUND AND OBJECTIVES: Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders like tuberculosis (TB), pneumonia, coronavirus disease (COVID-19), etc. The radiomic features associated with different disease manifestations assist in detection, localization, and grading the severity of infected lung regions. The majority of existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, the existing deep learning approaches use class activation maps and saliency maps, which generate only a rough localization. This study aims to generate a compact disease boundary, infection map, and grade the infection severity using the proposed multistage superpixel classification-based disease localization and severity assessment framework. METHODS: The proposed method uses the simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. Initially, the different radiomic texture and proposed shape features are extracted and combined to train different benchmark classifiers in a multistage framework. Subsequently, the predicted class labels are used to generate an infection map, mark the disease boundary, and grade the infection severity. The performance is evaluated using the publicly available Montgomery dataset and validated using Friedman average ranking and Holm and Nemenyi post-hoc procedures. RESULTS: The proposed multistage classification approach achieved accuracy (ACC) = 95.52%, F-Measure (FM) = 95.48%, area under the curve (AUC) = 0.955 for Stage-I and ACC = 85.35%, FM = 85.20%, AUC = 0.853 for Stage-II using the calibration dataset, and ACC = 93.41%, FM = 95.32%, AUC = 0.936 for Stage-I and ACC = 84.02%, FM = 71.01%, AUC = 0.795 for Stage-II using the validation dataset. Also, the model demonstrated an average Jaccard Index (JI) of 0.82 and a Pearson's correlation coefficient (r) of 0.9589. CONCLUSIONS: The classification results obtained using the calibration and validation datasets confirm the promising performance of the proposed framework. Also, the average JI shows promising potential to localize the disease, and better agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical test justified the significance of the obtained results.
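To make the last step concrete, here is a hypothetical sketch of turning per-superpixel predictions into an infection map and a crude severity score (the infected fraction of the lung field); the feature extraction and multistage classifiers themselves are omitted, and the names and the severity definition are assumptions, not the paper's grading scheme.

```python
import numpy as np

def infection_map_and_severity(sp_labels, sp_predictions, lung_mask):
    """sp_labels: SLIC label image; sp_predictions: {superpixel id -> 0 (normal) / 1 (infected)}."""
    infection = np.zeros(sp_labels.shape, dtype=bool)
    for sp_id, is_infected in sp_predictions.items():
        if is_infected:
            infection[sp_labels == sp_id] = True
    infection &= lung_mask.astype(bool)                   # keep only pixels inside the lung field
    severity = infection.sum() / max(int(lung_mask.sum()), 1)
    return infection, float(severity)
```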


Subjects
COVID-19 ; Lung Diseases ; COVID-19/diagnostic imaging ; Diagnosis, Computer-Assisted/methods ; Humans ; Thorax ; X-Rays
8.
Comput Biol Med ; 145: 105466, 2022 06.
Article in English | MEDLINE | ID: mdl-35585732

ABSTRACT

Fast and accurate diagnosis is critical for the triage and management of pneumonia, particularly in the current scenario of a COVID-19 pandemic, where this pathology is a major symptom of the infection. With the objective of providing tools for that purpose, this study assesses the potential of three textural image characterisation methods: radiomics, fractal dimension and the recently developed superpixel-based histon, as biomarkers to be used for training Artificial Intelligence (AI) models in order to detect pneumonia in chest X-ray images. Models generated from three different AI algorithms have been studied: K-Nearest Neighbors, Support Vector Machine and Random Forest. Two open-access image datasets were used in this study. In the first one, a dataset composed of paediatric chest X-rays, the best-performing generated models achieved an 83.3% accuracy with 89% sensitivity for radiomics, 89.9% accuracy with 93.6% sensitivity for fractal dimension and 91.3% accuracy with 90.5% sensitivity for superpixel-based histons. Second, a dataset derived from an image repository developed primarily as a tool for studying COVID-19 was used. For this dataset, the best-performing generated models resulted in a 95.3% accuracy with 99.2% sensitivity for radiomics, 99% accuracy with 100% sensitivity for fractal dimension and 99% accuracy with 98.6% sensitivity for superpixel-based histons. The results confirm the validity of the tested methods as reliable and easy-to-implement automatic diagnostic tools for pneumonia.
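Of the three descriptors, the fractal dimension is the easiest to sketch; below is a standard box-counting estimate on a binary (e.g., edge) image. It is meant only to illustrate the kind of texture biomarker involved; the paper's exact fractal-dimension and histon implementations are not reproduced, and the box sizes are arbitrary.

```python
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension as the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h = (binary_img.shape[0] // s) * s            # crop so the image tiles exactly
        w = (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes that contain any foreground
    # Assumes each box size finds at least one occupied box (all counts > 0).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```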


Subjects
COVID-19 ; Deep Learning ; Artificial Intelligence ; COVID-19/diagnostic imaging ; Child ; Humans ; Pandemics ; SARS-CoV-2 ; X-Rays
9.
Front Radiol ; 2: 1061402, 2022.
Article in English | MEDLINE | ID: mdl-37492689

ABSTRACT

With the increased reliance on medical imaging, deep convolutional neural networks (CNNs) have become an essential tool in medical imaging-based computer-aided diagnostic pipelines. However, training accurate and reliable classification models often requires large fine-grained annotated datasets. To alleviate this, weakly-supervised methods can be used to obtain local information, such as regions of interest, from global labels. This work proposes a weakly-supervised pipeline to extract Relevance Maps of medical images from pre-trained 3D classification models using localized perturbations. The extracted Relevance Map describes a given region's importance to the classification model and produces the segmentation for the region. Furthermore, we propose a novel optimal perturbation generation method that exploits 3D superpixels to find the most relevant area for a given classification using a U-Net architecture. This model is trained with a perturbation loss, which maximizes the difference between unperturbed and perturbed predictions. We validated the effectiveness of our methodology by applying it to the segmentation of glioma brain tumours in MRI scans using only classification labels for glioma type. The proposed method outperforms existing methods in both the Dice Similarity Coefficient for segmentation and the resolution of the visualizations.
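A much simplified, occlusion-style sketch of the underlying idea follows: perturb one 3D superpixel at a time and record the drop in the model's class probability. This is not the paper's trained U-Net perturbation generator; `model_prob` is a placeholder callable and the zero fill value is an assumption.

```python
import numpy as np

def superpixel_relevance_map(volume, sp_labels, model_prob, fill_value=0.0):
    """Relevance of each 3D superpixel = drop in class probability when it is occluded."""
    baseline = model_prob(volume)                      # probability for the unperturbed volume
    relevance = np.zeros(volume.shape, dtype=float)
    for sp_id in np.unique(sp_labels):
        perturbed = volume.copy()
        perturbed[sp_labels == sp_id] = fill_value     # occlude a single superpixel
        relevance[sp_labels == sp_id] = baseline - model_prob(perturbed)
    return relevance
```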

10.
Med Biol Eng Comput ; 59(9): 1795-1814, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34304371

ABSTRACT

Microcalcifications (MCs) are considered the first indicator of breast cancer development. Their morphology, in terms of shape and size, is considered the most important criterion determining their degree of malignancy. Therefore, the accurate delineation of MCs is a cornerstone step in their automatic diagnosis process. In this paper, we propose a new conditional region growing (CRG) approach with the ability to find accurate MC boundaries starting from selected seed points. The starting seed points are determined based on regional maxima detection and superpixel analysis. The region growing step is controlled by a set of criteria adapted to MC detection in terms of contrast and shape variation. These criteria are derived from prior knowledge characterizing MCs and can be divided into two categories. The first one concerns the neighbourhood searching size. The second one deals with the analysis of gradient information and shape evolution within the growing process. In order to prove the effectiveness and reliability in terms of MC detection and delineation, several experiments have been carried out on MCs of various types, with both qualitative and quantitative analysis. The comparison of the proposed approach with the state of the art confirms the importance of the used criteria in the context of MC delineation, towards a better management of breast cancer. Graphical Abstract: Flowchart of the proposed approach.
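The two ingredients named above, regional-maxima seed detection and criterion-controlled growing, can be approximated with off-the-shelf scikit-image functions; the sketch below grows a region from each bright seed using a simple intensity tolerance, which stands in for, but does not reproduce, the paper's contrast and shape criteria. A float image scaled to [0, 1] and the parameter values are assumptions.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import flood

def grow_from_maxima(image, min_distance=5, tolerance=0.05):
    """Detect bright seeds, then grow a labelled region around each one by intensity tolerance."""
    seeds = peak_local_max(image, min_distance=min_distance)   # (row, col) seed coordinates
    regions = np.zeros(image.shape, dtype=int)
    for idx, (r, c) in enumerate(seeds, start=1):
        grown = flood(image, (int(r), int(c)), tolerance=tolerance)
        regions[np.logical_and(grown, regions == 0)] = idx     # first seed wins where regions overlap
    return regions
```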


Subjects
Breast Neoplasms ; Calcinosis ; Algorithms ; Breast Neoplasms/diagnostic imaging ; Calcinosis/diagnostic imaging ; Female ; Humans ; Mammography ; Reproducibility of Results
11.
J Digit Imaging ; 34(1): 162-181, 2021 02.
Article in English | MEDLINE | ID: mdl-33415444

ABSTRACT

Melanoma is the most fatal type of skin cancer. Detection of melanoma from dermoscopic images in an early stage is critical for improving survival rates. Numerous image processing methods have been devised to discriminate between melanoma and benign skin lesions. Previous studies show that the detection performance depends significantly on the skin lesion image representations and features. In this work, we propose a melanoma detection approach that combines graph-theoretic representations with conventional dermoscopic image features to enhance the detection performance. Instead of using individual pixels of skin lesion images as nodes for complex graph representations, superpixels are generated from the skin lesion images and are then used as graph nodes in a superpixel graph. An edge of such a graph connects two adjacent superpixels, where the edge weight is a function of the distance between the feature descriptors of these superpixels. A graph signal can be defined by assigning to each graph node the output of some single-valued function of the associated superpixel descriptor. Features are extracted from weighted and unweighted graph models in the vertex domain at both local and global scales and in the spectral domain using the graph Fourier transform (GFT). Other features based on color, geometry and texture are extracted from the skin lesion images. Several conventional and ensemble classifiers have been trained and tested on different combinations of those features using two datasets of dermoscopic images from the International Skin Imaging Collaboration (ISIC) archive. The proposed system achieved an AUC of [Formula: see text], an accuracy of [Formula: see text], a specificity of [Formula: see text] and a sensitivity of [Formula: see text].
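A compact sketch of the graph construction and graph Fourier transform (GFT) described above is given below, using Gaussian edge weights over per-superpixel descriptors and the combinatorial Laplacian; the paper's specific descriptors and weighting function are not reproduced, and superpixel ids are assumed to run from 0 to n-1 in step with the descriptor rows.

```python
import numpy as np

def adjacency_pairs(sp_labels):
    """Pairs of superpixel ids that touch horizontally or vertically in the label image."""
    right = np.stack([sp_labels[:, :-1].ravel(), sp_labels[:, 1:].ravel()], axis=1)
    down = np.stack([sp_labels[:-1, :].ravel(), sp_labels[1:, :].ravel()], axis=1)
    pairs = np.vstack([right, down])
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]                  # keep only superpixel boundaries
    return np.unique(np.sort(pairs, axis=1), axis=0)

def superpixel_gft(sp_labels, descriptors, signal, sigma=1.0):
    """Graph Fourier transform of a per-superpixel signal on a descriptor-weighted graph."""
    n = descriptors.shape[0]
    W = np.zeros((n, n))
    for i, j in adjacency_pairs(sp_labels):
        w = np.exp(-np.linalg.norm(descriptors[i] - descriptors[j]) ** 2 / (2 * sigma ** 2))
        W[i, j] = W[j, i] = w
    L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)      # graph frequencies and Fourier basis
    return eigvals, eigvecs.T @ signal        # spectrum of the graph signal
```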


Subjects
Melanoma ; Skin Neoplasms ; Algorithms ; Dermoscopy ; Humans ; Image Processing, Computer-Assisted ; Melanoma/diagnostic imaging ; Skin ; Skin Neoplasms/diagnostic imaging
12.
Ophthalmol Glaucoma ; 4(2): 209-215, 2021.
Article in English | MEDLINE | ID: mdl-32866692

ABSTRACT

PURPOSE: To compare local ganglion cell-inner plexiform layer (GCIPL) thickness measurements between 2 OCT devices and to explore factors that may influence the difference in measurements. DESIGN: Cross-sectional study. PARTICIPANTS: Sixty-nine glaucoma eyes (63 patients) with evidence of central damage or mean deviation (MD) of -6.0 dB or worse on a 24-2 visual field (VF). METHODS: Cirrus and Spectralis OCT macular volume scans were exported, data from the central 20° of both OCT devices were centered and aligned, and 50 × 50 arrays of 0.4° × 0.4° superpixels were created. We estimated nonparametric (Spearman's) correlations and used Bland-Altman plots to compare GCIPL thickness measurements between the two OCTs at the superpixel level. Factors that may have influenced the differences in thickness measurements between the two devices were explored with linear mixed models. MAIN OUTCOME MEASURES: Pooled and individual-eye Spearman's correlation and agreement between thickness measurements from the two devices. RESULTS: The median 24-2 VF MD was -6.8 dB (interquartile range [IQR], -4.9 to -12.3 dB). The overall pooled Spearman's correlation between the two devices for all superpixels and eyes was 0.97 (P < 0.001). The median within-eye correlation coefficient was 0.72 (IQR, 0.59-0.79). Bland-Altman plots demonstrated a systematic bias in most individual eyes, with Spectralis GCIPL measurements becoming larger than Cirrus measurements with increasing superpixel thickness. The average superpixel thickness and distance to the fovea influenced the thickness difference between the two devices in multivariate models (P < 0.001). CONCLUSIONS: Local macular thickness measurements from the Spectralis and Cirrus devices are highly correlated, but not interchangeable. Differences in thickness measurements between the two devices are influenced by the location of superpixels and their thickness.
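The two comparisons reported above, Spearman correlation and Bland-Altman bias with limits of agreement, amount to a few lines of generic statistics; the sketch below assumes matched 1-D arrays of superpixel thicknesses from the two devices and is not the study's analysis code.

```python
import numpy as np
from scipy.stats import spearmanr

def compare_devices(cirrus_um, spectralis_um):
    """Spearman correlation plus Bland-Altman bias and 95% limits of agreement."""
    rho, p = spearmanr(cirrus_um, spectralis_um)
    diff = np.asarray(spectralis_um, float) - np.asarray(cirrus_um, float)
    bias = diff.mean()                          # systematic difference (Spectralis - Cirrus)
    half_width = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits of agreement
    return {"spearman_rho": rho, "p_value": p,
            "bias": bias, "limits_of_agreement": (bias - half_width, bias + half_width)}
```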


Subjects
Nerve Fibers ; Tomography, Optical Coherence ; Cross-Sectional Studies ; Humans ; Intraocular Pressure ; Retinal Ganglion Cells
13.
Rev. mex. ing. bioméd ; 41(3): e1050, Sep.-Dec. 2020. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1150053

ABSTRACT

Multiple Sclerosis (MS) is the most common neurodegenerative disease among young adults. Diagnosis and monitoring of MS is performed with T2-weighted or T2 FLAIR magnetic resonance imaging, where MS lesions appear as hyperintense spots in the white matter. In recent years, multiple algorithms have been proposed to detect these lesions with varying success rates, which greatly depend on the amount of a priori information required by each algorithm, such as the use of an atlas or the involvement of an expert to guide the segmentation process. In this work, a fully automatic method that does not rely on a priori anatomical information is proposed and evaluated. The proposed algorithm is based on an over-segmentation in superpixels and their classification by means of Gauss-Markov Measure Fields (GMMF). The main advantage of the over-segmentation is that it preserves the borders between tissues, while the GMMF classifier is robust to noise and computationally efficient. The proposed segmentation is then applied in two stages: first to segment the brain region and then to detect hyperintense spots within the brain. The proposed method is evaluated with synthetic images from BrainWeb, as well as real images from MS patients. The proposed method produces competitive results with respect to other algorithms in the state of the art, without requiring user assistance or anatomical prior information.


14.
Med Biol Eng Comput ; 58(9): 1947-1964, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32566988

ABSTRACT

Automatic and reliable prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Nonetheless, automatic segmentation is challenging due to the lack of clear prostate boundaries, owing to the similar appearance of the prostate and surrounding tissues, and due to the wide variation in size and shape among patients ascribed to pathological changes or different image resolutions. In this regard, the state of the art includes methods based on a probabilistic atlas, active contour models, and deep learning techniques. However, these techniques have limitations that need to be addressed, such as MRI scans with the same spatial resolution, initialization of the prostate region with well-defined contours, and a set of hyperparameters of deep learning techniques determined manually, respectively. Therefore, this paper proposes an automatic and novel coarse-to-fine segmentation method for prostate 3D MRI scans. The coarse segmentation step combines local texture and spatial information using the Intrinsic Manifold Simple Linear Iterative Clustering algorithm and a probabilistic atlas in a deep convolutional neural network model, jointly with the particle swarm optimization algorithm, to classify prostate and non-prostate tissues. Then, the fine segmentation uses the 3D Chan-Vese active contour model to obtain the final prostate surface. The proposed method has been evaluated on the Prostate 3T and PROMISE12 databases, presenting a Dice similarity coefficient of 84.86%, relative volume difference of 14.53%, sensitivity of 90.73%, specificity of 99.46%, and accuracy of 99.11%. Experimental results demonstrate the high performance potential of the proposed method compared to previously published methods.


Subjects
Image Interpretation, Computer-Assisted/statistics & numerical data ; Imaging, Three-Dimensional/statistics & numerical data ; Magnetic Resonance Imaging/statistics & numerical data ; Neural Networks, Computer ; Prostatic Neoplasms/diagnostic imaging ; Algorithms ; Databases, Factual ; Deep Learning ; Humans ; Latent Class Analysis ; Male ; Models, Statistical
15.
Sensors (Basel) ; 19(24)2019 Dec 13.
Article in English | MEDLINE | ID: mdl-31847162

ABSTRACT

Geometric deep learning (GDL) generalizes convolutional neural networks (CNNs) to non-Euclidean domains. In this work, a GDL technique allowing the application of CNNs to graphs is examined. It defines convolutional filters with the use of the Gaussian mixture model (GMM). As those filters are defined in continuous space, they can be easily rotated without the need for additional interpolation. This, in turn, allows constructing systems with the rotation equivariance property. The characteristic of the proposed approach is illustrated with the problem of ear detection, which is of great importance in biometric systems enabling image-based, discrete human identification. The analyzed graphs were constructed taking into account superpixels representing image content. This kind of representation has several advantages. On the one hand, it significantly reduces the amount of processed data, allowing building simpler and more effective models. On the other hand, it seems to be closer to the conscious process of human image understanding as it does not operate on millions of pixels. The contributions of the paper lie both in extending the GDL application area (semantic segmentation of images) and in the novel concept of trained filter transformations. We show that even significantly reduced information about image content and a relatively simple model, in comparison with a classic CNN (a smaller number of parameters and significantly faster processing), allow obtaining detection results at a quality level similar to that reported in the literature on the UBEAR dataset. Moreover, we show experimentally that the proposed approach in fact possesses the rotation equivariance property, allowing detection of rotated structures without the need for labor-consuming training on all rotated and non-rotated images.

16.
Front Plant Sci ; 10: 1176, 2019.
Article in English | MEDLINE | ID: mdl-31616456

ABSTRACT

Crop yield is an essential measure for breeders, researchers, and farmers and may be calculated from the number of ears per square meter, grains per ear, and thousand-grain weight. Manual wheat ear counting, required in breeding programs to evaluate crop yield potential, is labor-intensive and expensive; thus, the development of a real-time wheat head counting system would be a significant advancement. In this paper, we propose a computationally efficient system called DeepCount to automatically identify and count the number of wheat spikes in digital images taken under natural field conditions. The proposed method tackles wheat spike quantification by segmenting an image into superpixels using simple linear iterative clustering (SLIC), deriving canopy-relevant features, and then constructing a rational feature model fed into the deep convolutional neural network (CNN) classification for semantic segmentation of wheat spikes. As the method is based on a deep learning model, it replaces hand-engineered features required for traditional machine learning methods with more efficient algorithms. The method is tested on digital images taken directly in the field at different stages of ear emergence/maturity (using visually different wheat varieties), with different canopy complexities (achieved through varying nitrogen inputs) and different heights above the canopy under varying environmental conditions. In addition, the proposed technique is compared with a wheat ear counting method based on a previously developed edge detection technique and morphological analysis. The proposed approach is validated with image-based ear counting and ground-based measurements. The results demonstrate that the DeepCount technique has a high level of robustness regardless of variables such as growth stage and weather conditions, hence demonstrating the feasibility of the approach in real scenarios. The system is a leap toward portable, smartphone-assisted wheat ear counting systems, reduces the labor involved, and is suitable for high-throughput analysis. It may also be adapted to work on Red, Green, Blue (RGB) images acquired from unmanned aerial vehicles (UAVs).

17.
Math Biosci Eng ; 16(3): 1115-1137, 2019 02 15.
Article in English | MEDLINE | ID: mdl-30947411

ABSTRACT

Ultrasound (US) imaging has technical advantages for the functional evaluation of the myocardium compared with other imaging modalities. However, extracting the myocardial tissues from the background is challenging due to the low quality of US images. To better extract the myocardial tissues, this study proposes a semi-supervised segmentation method of fast Superpixels and Neighborhood Patches based Continuous Min-Cut (fSP-CMC). The US image is represented by a graph, which is constructed depending on the features of superpixels and neighborhood patches. A novel similarity measure is defined to capture and enhance the feature correlation using the Pearson correlation coefficient and Pearson distance. Interactive labels provided by the user play a subsidiary role in the semi-supervised segmentation. The continuous graph cut model is solved via a fast minimization algorithm based on augmented Lagrangian and operator splitting. Additionally, Non-Uniform Rational B-Spline (NURBS) curve fitting is used as post-processing to solve the low-resolution problem caused by the graph-based method. 200 B-mode US images of the left ventricle of rats were collected in this study. The myocardial tissues were segmented using the proposed fSP-CMC method and compared with the method of fast Neighborhood Patches based Continuous Min-Cut (fP-CMC). The results show that fSP-CMC segmented the myocardial tissues with higher agreement with the ground truth (GT) provided by medical experts. The mean absolute distance (MAD) and Hausdorff distance (HD) were significantly lower than those of fP-CMC (p < 0.05), while the Dice was significantly higher (p < 0.05). In conclusion, the proposed fSP-CMC method accurately and effectively segments the myocardium in US images. This method has the potential to be a reliable segmentation method and useful for the functional evaluation of the myocardium in future studies.
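To illustrate the Pearson-based similarity mentioned above, the fragment below computes the Pearson correlation and Pearson distance between two patch or superpixel feature vectors and maps the distance to a positive edge weight; the paper's exact combined measure is not specified here, so the exponential mapping is only an assumed example.

```python
import numpy as np

def pearson_edge_weight(f1, f2):
    """Pearson r, Pearson distance (1 - r), and one possible positive graph-edge weight."""
    r = np.corrcoef(f1, f2)[0, 1]          # Pearson correlation coefficient
    d = 1.0 - r                            # Pearson distance
    return r, d, float(np.exp(-d))         # a simple way to turn distance into an edge weight
```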


Subjects
Heart/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; Myocardial Infarction/diagnostic imaging ; Myocardium/metabolism ; Ultrasonography ; Algorithms ; Animals ; Area Under Curve ; Imaging, Three-Dimensional ; Pattern Recognition, Automated/methods ; ROC Curve ; Rats ; Rats, Sprague-Dawley ; Software
18.
Elife ; 8, 2019 02 26.
Article in English | MEDLINE | ID: mdl-30803483

ABSTRACT

Correct cell/cell interactions and motion dynamics are fundamental in tissue homeostasis, and defects in these cellular processes cause diseases. Therefore, there is strong interest in identifying factors, including drug candidates that affect cell/cell interactions and motion dynamics. However, existing quantitative tools for systematically interrogating complex motion phenotypes in timelapse datasets are limited. We present Motion Sensing Superpixels (MOSES), a computational framework that measures and characterises biological motion with a unique superpixel 'mesh' formulation. Using published datasets, MOSES demonstrates single-cell tracking capability and more advanced population quantification than Particle Image Velocimetry approaches. From > 190 co-culture videos, MOSES motion-mapped the interactions between human esophageal squamous epithelial and columnar cells mimicking the esophageal squamous-columnar junction, a site where Barrett's esophagus and esophageal adenocarcinoma often arise clinically. MOSES is a powerful tool that will facilitate unbiased, systematic analysis of cellular dynamics from high-content time-lapse imaging screens with little prior knowledge and few assumptions.


Subjects
Cell Communication ; Cell Movement ; Cytological Techniques/methods ; Epithelial Cells/physiology ; Image Processing, Computer-Assisted/methods ; Esophagus/cytology ; Humans ; Phenotype
19.
Bio Protoc ; 9(18): e3365, 2019 Sep 20.
Article in English | MEDLINE | ID: mdl-33654862

ABSTRACT

Precise spatiotemporal regulation is the foundation for the healthy development and maintenance of living organisms. All cells must correctly execute their function in the right place at the right time. Cellular motion is thus an important dynamic readout of signaling in key disease-relevant molecular pathways. However, despite the rapid advancement of imaging technology, a comprehensive quantitative description of motion imaged under different imaging modalities at all spatiotemporal scales (molecular, cellular and tissue-level) is still lacking. Generally, cells move either 'individually' or 'collectively' as a group with nearby cells. Current computational tools specifically focus on one or the other regime, limiting their general applicability. To address this, we recently developed and reported a new computational framework, Motion Sensing Superpixels (MOSES). Incorporating the individual advantages of single-cell trackers for individual cells and particle image velocimetry (PIV) for collective cell motion analyses, MOSES enables 'mesoscale' analysis of both single-cell and collective motion over arbitrarily long times. At the same time, MOSES readily complements existing single-cell tracking workflows with additional characterization of global motion patterns and interaction analysis between cells, and also operates directly on PIV-extracted motion fields to yield rich motion trajectories analogous to single-cell tracks, suitable for high-throughput motion phenotyping. This protocol provides a step-by-step practical guide for those interested in applying MOSES to their own datasets. The protocol highlights the salient features of a MOSES analysis and demonstrates the ease of use and wide applicability of MOSES to biological imaging through demo experimental analyses with ready-to-use code snippets for four datasets from different microscope modalities: phase-contrast, fluorescence, light-sheet and intravital microscopy. In addition, we discuss critical points of consideration in the analysis.

20.
Med Biol Eng Comput ; 57(3): 653-665, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30327998

ABSTRACT

The analysis of cell characteristics from high-resolution digital histopathological images is the standard clinical practice for the diagnosis and prognosis of cancer. Yet, it is a rather exhausting process for pathologists to examine the cellular structures manually in this way. Automating this tedious and time-consuming process is an emerging topic of the histopathological image-processing studies in the literature. This paper presents a two-stage segmentation method to obtain cellular structures in high-dimensional histopathological images of renal cell carcinoma. First, the image is segmented into superpixels with the simple linear iterative clustering (SLIC) method. Then, the obtained superpixels are clustered by state-of-the-art clustering-based segmentation algorithms to find similar superpixels that compose the cell nuclei. Furthermore, global clustering-based segmentation methods and local region-based superpixel segmentation algorithms are also compared. The results show that the use of the superpixel segmentation algorithm as a pre-segmentation method improves the performance of cell segmentation compared to a single clustering-based segmentation algorithm. The true positive ratio (TPR), true negative ratio (TNR), F-measure, precision, and overlap ratio (OR) measures are utilized for segmentation performance evaluation. The computation times of the algorithms are also evaluated and presented in the study. Graphical Abstract: The visual flowchart of the proposed automatic cell segmentation in histopathological images via two-staged superpixel-based algorithms.
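A condensed sketch of this two-stage idea is shown below: SLIC superpixels, then k-means clustering of per-superpixel mean colours, with the darkest cluster taken as nuclei. The paper compares several clustering-based algorithms; plain k-means, the darkest-cluster heuristic, and all parameter values are stand-in assumptions.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def two_stage_nuclei_segmentation(rgb_image, n_segments=2000, n_clusters=3):
    """Stage 1: SLIC superpixels. Stage 2: cluster superpixel mean colours and pick nuclei."""
    sp = slic(rgb_image, n_segments=n_segments, compactness=10, start_label=0)
    ids = np.unique(sp)
    mean_colors = np.array([rgb_image[sp == i].mean(axis=0) for i in ids])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(mean_colors)
    darkest = np.argmin(km.cluster_centers_.mean(axis=1))    # haematoxylin-stained nuclei are darkest
    nuclei_ids = ids[km.labels_ == darkest]
    return np.isin(sp, nuclei_ids)                            # boolean nuclei mask
```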


Subjects
Algorithms ; Carcinoma, Renal Cell/pathology ; Histocytological Preparation Techniques/methods ; Image Processing, Computer-Assisted/methods ; Kidney Neoplasms/pathology ; Cluster Analysis ; Databases, Factual ; Humans