Results 1 - 16 of 16
1.
Comput Biol Med ; 170: 107916, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38237237

ABSTRACT

In the medical field, applying machine learning to the automatic diagnosis and monitoring of osteoporosis often faces domain-adaptation challenges in drug-therapy research. Existing neural networks for osteoporosis diagnosis can suffer a drop in performance when applied to new data domains, owing to changes in radiation dose and equipment. To address this issue, we propose a new method for multi-domain diagnosis on quantitative computed tomography (QCT) images, called DeepmdQCT. The method adopts a domain-invariant feature strategy and integrates a comprehensive attention mechanism to guide the fusion of global and local features, effectively improving diagnostic performance on multi-domain CT images. In experimental evaluations on a self-created OQCT dataset, the average accuracy reached 91% for dose-domain images and 90.5% for device-domain images. Our method also estimated bone density values with a fit of 0.95 to the gold standard. It thus not only achieved high accuracy on CT images across dose and equipment domains, but also estimated key bone density values, which is crucial for evaluating the effectiveness of osteoporosis drug treatment. In addition, we validated the feature-extraction ability of our architecture on three publicly available datasets. We encourage the application of DeepmdQCT to a wider range of medical image analysis fields to improve performance on multi-domain images.
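The abstract does not disclose how DeepmdQCT's attention-guided fusion is computed. As a purely illustrative sketch (the function names and the fixed two-way softmax weighting are our assumptions, not the paper's architecture), a softmax-weighted blend of a global and a local feature vector could look like:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_features(global_feat, local_feat, scores):
    """Blend a global and a local feature vector with two attention
    scores (in a real network the scores would be learned)."""
    w_global, w_local = softmax(scores)
    return [w_global * g + w_local * l
            for g, l in zip(global_feat, local_feat)]

# equal scores give equal weights of 0.5 each
fused = fuse_features([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])  # [0.5, 0.5]
```

In an actual attention module the scores would themselves be produced by a small learned network conditioned on the features; the fixed scores here only illustrate the weighting step.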


Subjects
Osteoporosis , Humans , Osteoporosis/diagnostic imaging , Bone Density , Tomography, X-Ray Computed , Computers , Machine Learning , Image Processing, Computer-Assisted
2.
Article in English | MEDLINE | ID: mdl-37844005

ABSTRACT

Rehabilitation movement assessment often requires patients to wear expensive and inconvenient sensors or optical markers. To address this issue, we propose a non-contact, real-time approach that uses a lightweight pose detection algorithm, Sports Rehabilitation-Pose (SR-Pose), together with a depth camera for accurate assessment of rehabilitation movement. Our approach uses an E-Shufflenet network to extract underlying features of the target, an RLE-Decoder module to directly regress the coordinates of 16 key points, and a Weight Fusion Unit (WFU) module to output the optimal human posture detection result. By combining the detected pose information with depth information, we accurately calculate the angle between joints in three-dimensional space. Furthermore, we apply the DTW algorithm to solve the distance measurement and matching problem for video sequences of different lengths in rehabilitation evaluation tasks. Experimental results show that our method detects human joints with an average detection time of 14.32 ms and an average pose detection accuracy of 91.2%, demonstrating its computational efficiency and effectiveness in practice. The proposed approach provides a low-cost, user-friendly alternative to traditional sensor-based methods, making it a promising solution for rehabilitation movement assessment.
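The abstract names two standard computations that can be sketched independently of SR-Pose itself: the angle between two 3-D limb vectors meeting at a joint, and dynamic time warping (DTW) for matching sequences of different lengths. A minimal pure-Python sketch (function names are ours, not the paper's):

```python
import math

def joint_angle(v1, v2):
    """Angle in degrees between two 3-D limb vectors meeting at a joint."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = (math.sqrt(sum(a * a for a in v1))
            * math.sqrt(sum(b * b for b in v2)))
    return math.degrees(math.acos(dot / norm))

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D sequences of
    possibly different lengths (absolute difference as local cost)."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Because DTW allows one frame to align with several frames of the other sequence, a repeated sample incurs no extra cost, e.g. `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is `0.0`.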


Subjects
Algorithms , Sports , Humans , Movement , Posture , Technology
3.
Comput Intell Neurosci ; 2023: 3018320, 2023.
Article in English | MEDLINE | ID: mdl-36970245

ABSTRACT

Osteoporosis is a significant global health concern that can be difficult to detect early due to a lack of symptoms. At present, screening for osteoporosis relies mainly on methods such as dual-energy X-ray and quantitative CT, which incur high costs in equipment and staff time. A more efficient and economical diagnostic method is therefore urgently needed. With the development of deep learning, automatic diagnosis models for various diseases have been proposed. However, building these models generally requires images containing only lesion areas, and annotating lesion areas is time-consuming. To address this challenge, we propose a joint learning framework for osteoporosis diagnosis that combines localization, segmentation, and classification to enhance diagnostic accuracy. Our method includes a boundary heat-map regression branch for thinning the segmentation and a gated convolution module for adjusting context features in the classification module. We also integrate segmentation and classification features and propose a feature fusion module to adjust the weights of different vertebral levels. Trained on a self-built dataset, our model achieved an overall accuracy of 93.3% over the three label categories (normal, osteopenia, and osteoporosis) on the test set. The area under the curve is 0.973 for the normal category, 0.965 for osteopenia, and 0.985 for osteoporosis. Our method provides a promising alternative for the diagnosis of osteoporosis.
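The per-category area-under-the-curve values reported above can be computed with the standard Mann-Whitney rank formulation of AUC; a minimal sketch (the function name is ours):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation: the probability that a randomly chosen positive
    example is scored higher than a randomly chosen negative one,
    with ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For a multi-class report such as the one above, this is applied one-vs-rest per category.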


Subjects
Bone Diseases, Metabolic , Osteoporosis , Humans , Osteoporosis/diagnostic imaging , Tomography, X-Ray Computed
4.
IEEE Trans Neural Netw Learn Syst ; 34(12): 9806-9820, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35349456

ABSTRACT

The study of mouse social behaviors has been increasingly undertaken in neuroscience research. However, automated quantification of mouse behaviors from videos of interacting mice remains a challenging problem, in which object tracking plays a key role in locating mice in their living spaces. Artificial markers are often applied for tracking multiple mice, but they are intrusive and consequently interfere with the movements of mice in a dynamic environment. In this article, we propose a novel method to continuously track several mice and their individual parts without requiring any specific tagging. First, we propose an efficient and robust deep-learning-based mouse part detection scheme to generate part candidates. Subsequently, we propose a novel Bayesian-inference integer linear programming (BILP) model that jointly assigns the part candidates to individual targets under the necessary geometric constraints while establishing pairwise associations between the detected parts. As no publicly available dataset provides a quantitative test bed for part detection and tracking of multiple mice, we introduce a new, challenging Multi-Mice PartsTrack dataset composed of complex behaviors. Finally, we evaluate our approach against several baselines on the new dataset, where the results show that our method outperforms the other state-of-the-art approaches in terms of accuracy. We also demonstrate the generalization ability of the proposed approach by tracking zebras and locusts.
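The paper's BILP solver is not reproduced here; as a toy stand-in, the underlying part-to-individual assignment subproblem can be illustrated by exhaustively searching one-to-one assignments over a small cost matrix (feasible only for a handful of targets, and without the paper's geometric constraints):

```python
from itertools import permutations

def best_assignment(cost):
    """Minimum-cost one-to-one assignment of detected parts (rows)
    to individuals (columns) by brute force. A stand-in for an
    integer-linear-programming or Hungarian solver, usable only
    when the number of targets is small."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# part 0 is cheap to assign to mouse 0, part 1 to mouse 1
print(best_assignment([[1, 10], [10, 1]]))  # ((0, 1), 2)
```

Real trackers replace the factorial search with an ILP or Hungarian algorithm, which is what makes joint assignment tractable at scale.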


Subjects
Algorithms , Neural Networks, Computer , Bayes Theorem , Movement
5.
Med Image Anal ; 83: 102640, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36260951

ABSTRACT

Domain shift is a problem commonly encountered when developing automated histopathology pipelines. The performance of machine learning models such as convolutional neural networks within these pipelines is often diminished when they are applied to novel data domains, due to factors arising from differing staining and scanning protocols. The Dual-Channel Auto-Encoder (DCAE) model was previously shown to produce feature representations that are less sensitive to appearance variation introduced by different digital slide scanners. In this work, the Multi-Channel Auto-Encoder (MCAE) model is presented as an extension of DCAE that learns from more than two domains of data. Experimental results show that the MCAE model produces feature representations that are less sensitive to inter-domain variation than the comparative StaNoSA method when tested on a novel synthetic dataset. This was also apparent when applying the MCAE, DCAE, and StaNoSA models to three different classification tasks from unseen domains, where the MCAE model outperformed the other models. These results show that, by actively learning normalised feature representations, the MCAE model generalises better to novel data, including data from unseen domains, than existing approaches.

6.
Front Physiol ; 14: 1308987, 2023.
Article in English | MEDLINE | ID: mdl-38169744

ABSTRACT

The structural morphology of mesenteric artery vessels is of significant importance for the diagnosis and treatment of colorectal cancer. However, developing automated vessel segmentation methods for this purpose remains challenging. Existing convolution-based segmentation methods have limitations in capturing long-range dependencies, while transformer-based models require large datasets, making them less suitable for tasks with limited training samples. Moreover, over-segmentation, mis-segmentation, and vessel discontinuity are common challenges in vessel segmentation tasks. To address these issues, we propose a parallel encoding architecture that combines transformers and convolutions to retain the advantages of both approaches. The model effectively learns position deviations and enhances robustness for small-scale datasets. Additionally, we introduce a vessel edge capture module to improve vessel continuity and topology. Extensive experimental results demonstrate the improved performance of our model, with Dice Similarity Coefficient and Average Hausdorff Distance scores of 81.64% and 7.7428, respectively.

7.
J Imaging ; 8(2)2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35200744

ABSTRACT

Developing Field Programmable Gate Array (FPGA)-based applications is typically a slow, multi-skilled task. Tools to support application development have gradually raised the level of abstraction at which developers work. This paper describes an approach that aims to further raise the level at which an application developer works when building FPGA-based implementations of image and video processing applications. The starting concept is a system of streamed soft coprocessors. We present a set of soft coprocessors that implement some of the key abstractions of Image Algebra. Our soft coprocessors are designed for easy chaining, allowing users to describe an application as a dataflow graph, and an application can be modified even during execution without requiring re-synthesis. A prototype implementation of a development environment, called SCoPeS, is presented. The paper concludes with performance and resource-utilization results for different implementations of a sample algorithm. We conclude that the soft coprocessor approach has the potential to deliver better performance than the soft processor approach, and can improve programmability over dedicated HDL cores for domain-specific applications while achieving competitive real-time performance and utilization.
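Hardware streaming is outside the scope of a text sketch, but the chained-coprocessor dataflow idea can be mimicked in software with Python generators, each playing the role of one streamed soft coprocessor (the stage names here are ours, not SCoPeS operators):

```python
def threshold(stream, t):
    """Stage 1: binarize a pixel stream (1 if value >= t, else 0)."""
    for v in stream:
        yield 1 if v >= t else 0

def invert(stream):
    """Stage 2: invert a binary pixel stream."""
    for v in stream:
        yield 1 - v

# Chain the stages into a dataflow pipeline; pixels flow one at a
# time through both stages, just as in a streamed coprocessor chain.
pixels = [10, 200, 50, 255]
result = list(invert(threshold(iter(pixels), 128)))  # [1, 0, 1, 0]
```

Because each stage only consumes and produces a stream, stages can be rechained freely, which is the software analogue of reconfiguring the coprocessor graph without re-synthesis.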

8.
IEEE J Biomed Health Inform ; 26(6): 2703-2713, 2022 06.
Article in English | MEDLINE | ID: mdl-35085096

ABSTRACT

Facial phenotyping for medical prediagnosis has recently been successfully exploited as a novel way to perform preclinical assessment of a range of rare genetic diseases, where facial biometrics has been revealed to have rich links to underlying genetic or medical causes. In this paper, we extend this facial prediagnosis technology to a more common disease, Parkinson's Disease (PD), and propose an Artificial-Intelligence-of-Things (AIoT) edge-oriented privacy-preserving facial prediagnosis framework to analyze the effect of Deep Brain Stimulation (DBS) treatment on PD patients. Since data privacy is a primary concern for wider exploitation of Electronic Health and Medical Records (EHR/EMR) over cloud-based medical services, the framework implements private deep facial diagnosis as a service over an AIoT-oriented, information-theoretically secure multi-party communication scheme. In our experiments on a facial dataset collected from PD patients, we showed for the first time that facial patterns can be used to evaluate the facial difference of PD patients undergoing DBS treatment. We further implemented a privacy-preserving, information-theoretically secure deep facial prediagnosis framework that achieves the same accuracy as the non-encrypted one, showing the potential of facial prediagnosis as a trustworthy edge service for grading the severity of PD in patients.
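The abstract does not specify its information-theoretically secure multi-party scheme in detail; one classical building block with that property is additive secret sharing over a prime field, sketched here as an illustration rather than the paper's actual protocol:

```python
import random

PRIME = 2**61 - 1  # prime field for the shares

def share(secret, n=3):
    """Split a secret (0 <= secret < PRIME) into n additive shares.
    Any n-1 shares are uniformly random and reveal nothing about the
    secret; all n shares sum to the secret modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

recovered = reconstruct(share(123456))  # == 123456
```

In a multi-party deployment, each party holds one share and computations are carried out on shares, so no single party ever sees the underlying facial features.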


Subjects
Deep Brain Stimulation , Parkinson Disease , Cloud Computing , Confidentiality , Electronic Health Records , Humans , Parkinson Disease/diagnosis , Parkinson Disease/therapy , Privacy
9.
Sci Total Environ ; 771: 145256, 2021 Jun 01.
Article in English | MEDLINE | ID: mdl-33736153

ABSTRACT

Earthquakes have become one of the leading causes of death from natural hazards in the last fifty years. Continuous efforts have been made to understand the physical characteristics of earthquakes and the interaction between physical hazards and the environment, so that appropriate warnings can be generated before earthquakes strike. However, earthquake forecasting is far from trivial. Reliable forecasting requires analysis of the signals that indicate a coming significant quake. Unfortunately, these signals are rarely evident before earthquakes occur, so detecting such precursors in seismic analysis is challenging. Among the technologies available for earthquake research, remote sensing has been commonly used for its unique features, such as fast imaging and a wide image-acquisition range. Nevertheless, early studies of pre-earthquake remote-sensing anomalies are mostly oriented towards identifying and analyzing anomalies in a single physical parameter. Many analyses are based on singular events, which provide little understanding of this complex natural phenomenon, because the earthquake signals are usually hidden in environmental noise, and the universality of such analyses has yet to be demonstrated on a worldwide scale. In this paper, we investigate physical and dynamic changes in seismic data and develop a novel machine learning method, Inverse Boosting Pruning Trees (IBPT), to issue short-term forecasts based on satellite data for 1371 earthquakes of magnitude six or above, selected for their impact on the environment. We analyzed and compared our framework against several state-of-the-art machine learning methods using ten different infrared and hyperspectral measurements collected between 2006 and 2013. Our method outperforms all six selected baselines and shows a strong capability to improve the likelihood of earthquake forecasting across different earthquake databases.

10.
IEEE Trans Cybern ; 50(2): 689-702, 2020 Feb.
Article in English | MEDLINE | ID: mdl-30296251

ABSTRACT

Multipopulation is an effective optimization component often embedded into evolutionary algorithms to solve optimization problems. In this paper, a new multipopulation-based multiobjective genetic algorithm (MOGA) is proposed, which uses a unique cross-subpopulation migration process inspired by biological processes to share information between subpopulations. Then, a Markov model of the proposed multipopulation MOGA is derived, the first of its kind, which provides an exact mathematical model for each possible population occurring simultaneously with multiple objectives. Simulation results of two multiobjective test problems with multiple subpopulations justify the derived Markov model, and show that the proposed multipopulation method can improve the optimization ability of the MOGA. Also, the proposed multipopulation method is applied to other multiobjective evolutionary algorithms (MOEAs) for evaluating its performance against the IEEE Congress on Evolutionary Computation multiobjective benchmarks. The experimental results show that a single-population MOEA can be extended to a multipopulation version, while obtaining better optimization performance.
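The paper describes its cross-subpopulation migration as a unique, biologically inspired process; the generic ring-migration pattern such schemes build on can be sketched as follows (the ring topology, migrant count k, and worst-replacement policy here are textbook defaults, not the paper's scheme):

```python
def migrate(subpops, fitness, k=1):
    """Ring migration for a multipopulation evolutionary algorithm:
    each subpopulation sends copies of its k fittest individuals to
    the next subpopulation in the ring, replacing that neighbour's
    k weakest individuals in place."""
    n = len(subpops)
    # select migrants from every subpopulation before any replacement
    migrants = [sorted(p, key=fitness, reverse=True)[:k] for p in subpops]
    for i, m in enumerate(migrants):
        dst = subpops[(i + 1) % n]
        dst.sort(key=fitness)   # weakest individuals first
        dst[:k] = list(m)       # replace them with incoming migrants
    return subpops

# each subpopulation ends up holding its neighbour's best individual
pops = migrate([[3, 1], [2, 5]], fitness=lambda x: x)
```

Selecting all migrants before performing any replacement keeps the exchange symmetric, so information flows around the ring once per migration epoch.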

11.
Comput Intell Neurosci ; 2019: 8214975, 2019.
Article in English | MEDLINE | ID: mdl-30863436

ABSTRACT

Zebrafish embryo fluorescent vessel analysis, which aims to automatically investigate the pathogenesis of diseases, has attracted much attention in medical imaging. Zebrafish vessel segmentation is a fairly challenging task, requiring foreground and background vessels to be distinguished in 3D projection images. Recently, there has been a trend towards introducing domain knowledge into deep learning algorithms to handle segmentation in complex environments, with accurate results. In this paper, a novel dual deep learning framework called Dual ResUNet is developed for zebrafish embryo fluorescent vessel segmentation. To avoid the loss of spatial and identity information, the U-Net model is extended to a dual model with a new residual unit. To achieve stable and robust segmentation performance, our approach merges domain knowledge with a novel contour term and shape constraint. We compare our method qualitatively and quantitatively with several standard segmentation models. Our experimental results show that the proposed method achieves better results than the state-of-the-art segmentation methods. By examining the quality of the vessel segmentation, we conclude that our Dual ResUNet model can learn characteristic features in cases where fluorescent protein is deficient or blood vessels overlap, and achieves robust performance in complicated environments.


Subjects
Blood Vessels/anatomy & histology , Blood Vessels/diagnostic imaging , Neural Networks, Computer , Signal Processing, Computer-Assisted , Algorithms , Animals , Blood Vessels/embryology , Embryo, Nonmammalian , Models, Anatomic , Zebrafish
12.
IEEE Trans Image Process ; 28(3): 1133-1148, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30307863

ABSTRACT

Automated recognition of mouse behaviors is crucial in studying psychiatric and neurologic diseases. To achieve this objective, it is very important to analyze the temporal dynamics of mouse behaviors; in particular, transitions between neighboring actions occur swiftly over short periods. In this paper, we develop and implement a novel hidden Markov model (HMM) algorithm to describe the temporal characteristics of mouse behaviors. Specifically, we propose a hybrid deep learning architecture in which the first, unsupervised layer relies on an advanced spatial-temporal segment Fisher vector encoding both visual and contextual features, and subsequent supervised layers, based on our segment aggregate network, are trained to estimate the state-dependent observation probabilities of the HMM. The proposed architecture can discriminate between visually similar behaviors and achieves high recognition rates, with the added strength of handling imbalanced mouse behavior datasets. Finally, we evaluate our approach using JHuang's dataset and our own, and the results show that our method outperforms other state-of-the-art approaches.
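In an HMM-based recognizer like the one described, decoding the most likely behavior sequence from per-frame observation probabilities is typically done with the Viterbi algorithm. A minimal sketch, with the role of the paper's deep network played by a fixed emission table (state and observation names below are our toy example, not the paper's behavior classes):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence,
    given start, transition, and emission probability tables."""
    # probability of the best path ending in each state at time 0
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # backtrack from the best final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```

With two toy states ('rest', 'groom') and observations ('still', 'move'), an initial 'still' followed by two 'move' frames decodes as a switch from rest to grooming.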

13.
IEEE Trans Cybern ; 49(8): 3191-3202, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29994697

ABSTRACT

Many approaches to unconstrained face identification exploit small patches, which are unaffected by distortions outside their locality. A larger area usually contains more discriminative information, but may be unidentifiable due to local appearance changes across its extent, given limited training data. We propose a novel block-based approach, as a complement to existing patch-based approaches, to exploit the greater discriminative information in larger areas while maintaining robustness to limited training data. A testing block contains several neighboring patches, each of a small size. We identify the matching training block by jointly estimating all of the matching patches, reducing the uncertainty of each small matching patch by adding neighboring-patch information without assuming additional training data. We further propose a multiscale extension in which block-based matching is carried out at several block sizes, combining complementary information across scales for further robustness. We have conducted face identification experiments on three datasets: the constrained Georgia Tech dataset, to validate the new approach, and two unconstrained datasets, LFW and UFI, to evaluate its potential for improving robustness. The results show that the new approach significantly improves over existing patch-based face identification approaches in the presence of uncontrolled pose, expression, and lighting variations, using small training datasets. The new block-based scheme can also be combined with existing approaches to further improve performance.


Subjects
Biometric Identification/methods , Face/anatomy & histology , Image Processing, Computer-Assisted/methods , Algorithms , Databases, Factual , Humans
14.
IEEE Trans Cybern ; 47(3): 796-808, 2017 Mar.
Article in English | MEDLINE | ID: mdl-26955057

ABSTRACT

In this paper, we introduce a novel approach to face recognition that simultaneously tackles three combined challenges: (1) uneven illumination, (2) partial occlusion, and (3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis, and finally face recognition based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved through novel approaches to feature extraction, LMA-based face image comparison, and unseen-data modeling. For face identification on the extended YaleB and AR face databases, our method, using only a single training image per person, outperforms other single-training-image methods and matches or exceeds methods that require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.

15.
IEEE Trans Biomed Eng ; 57(9): 2219-28, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20483698

ABSTRACT

In this paper, a novel motion-tracking scheme using scale-invariant features is proposed for automatic cell motility analysis in gray-scale microscopic videos, particularly for live-cell tracking in low-contrast differential interference contrast (DIC) microscopy. In the proposed approach, scale-invariant feature transform (SIFT) points around live cells in the microscopic image are detected, and a structure locality preservation (SLP) scheme using the Laplacian Eigenmap is proposed to track the SIFT feature points along successive frames of low-contrast DIC videos. Experiments on low-contrast DIC microscopic videos of various live-cell lines show that, in comparison with principal component analysis (PCA)-based SIFT tracking, the proposed Laplacian-SIFT can significantly reduce the error rate of SIFT feature tracking. With this enhancement, further experimental results demonstrate that the proposed scheme is a robust and accurate approach to the challenge of live-cell tracking in DIC microscopy.
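Frame-to-frame SIFT point association is often bootstrapped by nearest-neighbor descriptor matching with Lowe's ratio test; a minimal sketch of that standard baseline step (not the paper's Laplacian-Eigenmap SLP scheme, and with toy low-dimensional descriptors):

```python
import math

def match_descriptors(query, candidates, ratio=0.8):
    """Match one descriptor against a list of candidate descriptors
    using Lowe's ratio test: accept the nearest candidate only if it
    is clearly closer than the second nearest, otherwise reject the
    match as ambiguous. Returns the candidate index or None."""
    dists = sorted(
        (math.dist(query, c), i) for i, c in enumerate(candidates))
    if len(dists) > 1 and dists[0][0] >= ratio * dists[1][0]:
        return None  # ambiguous: two candidates are about equally close
    return dists[0][1]
```

Rejecting ambiguous matches is what keeps such trackers from latching onto the wrong cell when two cells look locally similar; structure-preserving schemes like the paper's SLP then refine the surviving matches.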


Subjects
Cell Movement/physiology , Image Processing, Computer-Assisted/methods , Microscopy, Interference/methods , Microscopy, Video/methods , Pattern Recognition, Automated/methods , Algorithms , Animals , Cell Line , Humans , Microscopy, Phase-Contrast , Principal Component Analysis
16.
Article in English | MEDLINE | ID: mdl-18002071

ABSTRACT

Cervical virtual slides are ultra-large, with sizes of up to 120K × 80K pixels. This paper introduces an image segmentation method for the automated identification of squamous epithelium in such virtual slides. To produce the best segmentation results while saving processing time and memory, a multiresolution segmentation strategy was developed. The squamous epithelium layer is first segmented at low resolution (2X magnification). The boundaries of the segmented squamous epithelium are then fine-tuned at the highest resolution (40X magnification) using an iterative boundary expanding-shrinking method. The block-based segmentation method uses robust texture feature vectors in combination with a Support Vector Machine (SVM) to perform classification, and medical histology rules are finally applied to remove misclassifications. Results demonstrate that, on typical virtual slides, classification accuracies of between 94.9% and 96.3% are achieved.


Subjects
Epithelium/pathology , Image Processing, Computer-Assisted/methods , Uterine Cervical Dysplasia/pathology , Uterine Cervical Neoplasms/pathology , Female , Humans