Results 1 - 20 of 57
2.
Med Image Anal; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility caused by amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for the vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips with an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips.
For the segmentation task, the baseline was overall the top performer (aggregated mIoU of 0.6763) and was the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline overall performed better than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
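
The headline segmentation metric above, mean intersection-over-union (mIoU), can be sketched in a few lines of plain Python. This is only an illustration of per-class IoU averaged over the classes present; the exact aggregation used by FetReg2021 may differ.

```python
# Illustrative per-class IoU and mean IoU for semantic segmentation, computed
# from flat label arrays (e.g. 0=background, 1=vessel, 2=tool, 3=fetus).

def iou_per_class(pred, target, num_classes):
    """Return IoU for each class; None when the class is absent from both."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        ious.append(inter / union if union else None)
    return ious

def mean_iou(pred, target, num_classes):
    """Average the per-class IoUs over classes that actually occur."""
    vals = [v for v in iou_per_class(pred, target, num_classes) if v is not None]
    return sum(vals) / len(vals)
```

For example, `mean_iou([0, 1, 1, 2], [0, 1, 2, 2], 3)` averages per-class IoUs of 1.0, 0.5 and 0.5.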


Subjects
Fetofetal Transfusion, Placenta, Female, Humans, Pregnancy, Algorithms, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Fetofetal Transfusion/pathology, Fetoscopy/methods, Fetus, Placenta/diagnostic imaging
3.
Comput Biol Med; 167: 107602, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37925906

ABSTRACT

Accurate prediction of fetal weight at birth is essential for effective perinatal care, particularly in the context of antenatal management, which involves determining the timing and mode of delivery. The current standard of care involves performing a prenatal ultrasound 24 hours prior to delivery. However, this task is challenging because it requires acquiring high-quality images, which becomes difficult in advanced pregnancy due to the lack of amniotic fluid. In this paper, we present a novel method that automatically predicts fetal birth weight from fetal ultrasound video scans and clinical data. Our method uses a Transformer-based architecture that combines a Residual Transformer Module with a Dynamic Affine Feature Map Transform, leveraging tabular clinical data to evaluate 2D+t spatio-temporal features in fetal ultrasound video scans. Development and evaluation were carried out on a clinical set comprising 582 2D fetal ultrasound videos and the corresponding clinical records from 194 patients, acquired less than 24 hours before delivery. Our results show that our method outperforms several state-of-the-art automatic methods and estimates fetal birth weight with an accuracy comparable to human experts. Hence, automatic measurements obtained by our method can reduce the risk of errors inherent in manual measurements. Observer studies suggest that our approach may be used as an aid for less experienced clinicians to predict fetal birth weight before delivery, optimizing perinatal care regardless of the available expertise.


Subjects
Fetal Weight, Prenatal Ultrasonography, Newborn, Pregnancy, Humans, Female, Birth Weight, Prenatal Ultrasonography/methods, Biometry
5.
Am J Obstet Gynecol MFM; 5(12): 101182, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37821009

ABSTRACT

BACKGROUND: Fetal weight is currently estimated from fetal biometry parameters using heuristic mathematical formulas. Fetal biometry requires measurements of the fetal head, abdomen, and femur. However, this examination is prone to inter- and intraobserver variability because of factors such as the experience of the operator, image quality, maternal characteristics, or fetal movements. Our study tested the hypothesis that a deep learning method can estimate fetal weight based on a video scan of the fetal abdomen and gestational age with similar performance to the full biometry-based estimations provided by clinical experts. OBJECTIVE: This study aimed to develop and test a deep learning method to automatically estimate fetal weight from fetal abdominal ultrasound video scans. STUDY DESIGN: A dataset of 900 routine fetal ultrasound examinations was used. Among those examinations, 800 retrospective ultrasound video scans of the fetal abdomen from 700 pregnant women between 15 6/7 and 41 0/7 weeks of gestation were used to train the deep learning model. After the training phase, the model was evaluated on an external, prospectively acquired test set of 100 scans from 100 pregnant women between 16 2/7 and 38 0/7 weeks of gestation. The deep learning model was trained to directly estimate fetal weight from ultrasound video scans of the fetal abdomen. The deep learning estimations were compared with manual measurements on the test set made by 6 human readers with varying levels of expertise. Human readers used the 3 standard measurements made on the standard planes of the head, abdomen, and femur and a heuristic formula to estimate fetal weight. Bland-Altman analysis, mean absolute percentage error, and the intraclass correlation coefficient were used to evaluate the performance and robustness of the deep learning method in comparison with the human readers. RESULTS: Bland-Altman analysis did not show systematic deviations between readers and deep learning. The mean and standard deviation of the mean absolute percentage error between the 6 human readers and the deep learning approach were 3.75%±2.00%. Excluding junior readers (residents), the mean absolute percentage error between the 4 experts and the deep learning approach was 2.59%±1.11%. The intraclass correlation coefficients reflected excellent reliability and varied between 0.9761 and 0.9865. CONCLUSION: This study reports the use of deep learning to estimate fetal weight using only ultrasound video of the fetal abdomen from fetal biometry scans. Our experiments demonstrated similar performance of human measurements and deep learning on prospectively acquired test data. Deep learning is a promising approach to directly estimating fetal weight from ultrasound video scans of the fetal abdomen.
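
The headline metric in this abstract, mean absolute percentage error (MAPE), is straightforward to reproduce. The birth weights below are illustrative values, not the study's data.

```python
# Sketch: mean absolute percentage error between estimated and actual birth
# weights (grams), reported in percent.

def mape(estimates, actuals):
    """Mean absolute percentage error, in percent."""
    errs = [abs(e - a) / a * 100.0 for e, a in zip(estimates, actuals)]
    return sum(errs) / len(errs)
```

For example, estimates of 3100 g and 2950 g against a true weight of 3000 g give errors of about 3.33% and 1.67%, so a MAPE of 2.5%.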


Subjects
Deep Learning, Fetal Weight, Pregnancy, Female, Humans, Retrospective Studies, Reproducibility of Results, Abdomen/diagnostic imaging
6.
J Hum Hypertens; 37(10): 898-906, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36528682

ABSTRACT

The study characterises vascular phenotypes of hypertensive patients using machine learning approaches. Newly diagnosed, treatment-naïve primary hypertensive patients without co-morbidities (aged 18-55, n = 73) and matched normotensive controls (n = 79) were recruited (NCT04015635). Blood pressure (BP) and BP variability were determined using 24-h ambulatory monitoring. Vascular phenotyping included SphygmoCor® measurement of pulse wave velocity (PWV), pulse wave analysis-derived augmentation index (PWA-AIx), and central BP; EndoPAT™-2000® provided the reactive hyperaemia index (LnRHI) and the augmentation index adjusted to a heart rate of 75 bpm. Ultrasound was used to analyse flow-mediated dilatation and carotid intima-media thickness (CIMT). In addition to standard statistical methods to compare the normotensive and hypertensive groups, machine learning techniques including biclustering explored hypertensive phenotypic subgroups. We report that arterial stiffness (PWV, PWA-AIx, EndoPAT-2000-derived AI@75) and central pressures were greater in incident hypertension than in normotension. Endothelial function, percent nocturnal dip, and CIMT did not differ between groups. The vascular phenotype of white-coat hypertension imitated sustained hypertension, with elevated arterial stiffness and central pressure, while masked hypertension demonstrated values similar to normotension. Machine learning revealed three distinct hypertension clusters, representing 'arterially stiffened', 'vaso-protected', and 'non-dipper' patients. Key clustering features were nocturnal and central BP, percent dipping, and arterial stiffness measures. We conclude that untreated patients with primary hypertension demonstrate early arterial stiffening rather than endothelial dysfunction or CIMT alterations. Phenotypic heterogeneity in nocturnal and central BP, percent dipping, and arterial stiffness observed early in the course of disease may have implications for risk stratification.
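
The subgroup discovery above used biclustering among other techniques. As a simpler, hedged illustration of grouping patients on standardized features (e.g. an arterial stiffness measure and percent nocturnal dip), here is a minimal k-means sketch; it is not the authors' method, and the feature values are invented.

```python
# Minimal k-means on 2-D feature vectors. Deterministic: initial centroids are
# the first k points. Illustrative only; the study's biclustering differs.

def kmeans(points, k, n_iter=50):
    centroids = [list(p) for p in points[:k]]

    def nearest(p):
        return min(range(k),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(p, centroids[j])))

    assign = [0] * len(points)
    for _ in range(n_iter):
        assign = [nearest(p) for p in points]          # assignment step
        for j in range(k):                              # update step
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = [sum(dim) / len(dim) for dim in zip(*members)]
    return assign, centroids
```

With two well-separated groups of points, the two clusters recover the groups exactly.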


Subjects
Hypertension, Vascular Stiffness, Humans, Carotid Intima-Media Thickness, Pulse Wave Analysis, Ambulatory Blood Pressure Monitoring, Hypertension/diagnosis, Blood Pressure/physiology, Phenotype
7.
Pediatr Res; 93(2): 376-381, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36195629

ABSTRACT

Necrotising enterocolitis (NEC) is one of the most common diseases in neonates and predominantly affects premature or very-low-birth-weight infants. Diagnosis is difficult and, for the best therapeutic effect, must be made within hours of first symptom onset. Artificial intelligence (AI) may play a significant role in NEC diagnosis. A literature search on the use of AI in the diagnosis of NEC was performed. Four databases (PubMed, Embase, arXiv, and IEEE Xplore) were searched with the appropriate MeSH terms. The search yielded 118 publications, which were reduced to 8 after screening and checking for eligibility. Of the eight, five used classic machine learning (ML) and three concerned deep ML. Most publications showed promising results. However, no publications with evident clinical benefits were found. Datasets used for training and testing AI systems were small and typically came from a single institution. The potential of AI to improve the diagnosis of NEC is evident. The body of literature on this topic is scarce, and more research in this area is needed, especially with a focus on clinical utility. Cross-institutional data for the training and testing of AI algorithms are required to make progress in this area. IMPACT: Only a few publications on the use of AI in NEC diagnosis are available, although they offer some evidence that AI may be helpful in NEC diagnosis. AI requires large, multicentre, and multimodal datasets of high quality for model training and testing. Published results in the literature are based on data from single institutions and, as such, have limited generalisability. Large multicentre studies evaluating broad datasets are needed to evaluate the true potential of AI in diagnosing NEC in a clinical setting.


Subjects
Necrotizing Enterocolitis, Newborn Diseases, Newborn, Humans, Premature Infant, Necrotizing Enterocolitis/prevention & control, Artificial Intelligence, Very Low Birth Weight Infant
8.
AMIA Annu Symp Proc; 2023: 389-396, 2023.
Article in English | MEDLINE | ID: mdl-38222421

ABSTRACT

The effectiveness of digital treatments can be measured by requiring patients to self-report their state through applications; however, this can be overwhelming and cause disengagement. We conducted a study to explore the impact of gamification on self-reporting. Our approach involves the creation of a system to assess cognitive load (CL) through the analysis of photoplethysmography (PPG) signals. Data from 11 participants were used to train a machine learning model to detect CL. Subsequently, we created two versions of surveys: a gamified one and a traditional one. We estimated the CL experienced by 13 further participants while they completed the surveys. We found that CL detector performance can be enhanced via pre-training on stress detection tasks. For 10 of the 13 participants, a personalized CL detector achieved an F1 score above 0.7. We found no difference between the gamified and non-gamified surveys in terms of CL, but participants preferred the gamified version.
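
The per-participant metric reported above, the F1 score of a binary cognitive-load detector, can be sketched directly from its definition (harmonic mean of precision and recall). The label vectors below are illustrative.

```python
# Sketch: F1 score for binary predictions (1 = high cognitive load).

def f1_score(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```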


Subjects
Gamification, Telemedicine, Humans, Feasibility Studies, Machine Learning, Cognition
9.
Phys Med Biol; 67(4), 2022 Feb 16.
Article in English | MEDLINE | ID: mdl-35051921

ABSTRACT

Objective. This work investigates the use of deep convolutional neural networks (CNNs) to automatically perform measurements of fetal body parts, including head circumference, biparietal diameter, abdominal circumference and femur length, and to estimate gestational age and fetal weight from fetal ultrasound videos. Approach. We developed a novel multi-task CNN-based spatio-temporal fetal US feature extraction and standard plane detection algorithm (called FUVAI) and evaluated the method on 50 freehand fetal US video scans. We compared FUVAI fetal biometric measurements with measurements made by five experienced sonographers at two time points separated by at least two weeks. Intra- and inter-observer variabilities were estimated. Main results. We found that automated fetal biometric measurements obtained by FUVAI were comparable to the measurements performed by experienced sonographers. The observed differences in measurement values were within the range of inter- and intra-observer variability, and analysis showed that these differences were not statistically significant when comparing any individual medical expert to our model. Significance. We argue that FUVAI has the potential to assist sonographers who perform fetal biometric measurements in clinical settings by providing them with suggestions regarding the best measuring frames, along with automated measurements. Moreover, FUVAI is able to perform these tasks in just a few seconds, compared with the six minutes taken, on average, by sonographers. This is significant given the shortage of medical experts capable of interpreting fetal ultrasound images in numerous countries.


Subjects
Deep Learning, Biometry/methods, Female, Fetus/diagnostic imaging, Gestational Age, Humans, Pregnancy, Prenatal Ultrasonography/methods
10.
J Nucl Med; 63(4): 500-510, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34740952

ABSTRACT

The nuclear medicine field has seen a rapid expansion of academic and commercial interest in developing artificial intelligence (AI) algorithms. Users and developers can avoid some of the pitfalls of AI by recognizing and following best practices in AI algorithm development. In this article, recommendations on technical best practices for developing AI algorithms in nuclear medicine are provided, beginning with general recommendations and then continuing with descriptions of how one might practice these principles for specific topics within nuclear medicine. This report was produced by the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging.


Subjects
Artificial Intelligence, Nuclear Medicine, Algorithms, Molecular Imaging, Scintigraphy
11.
Eur J Nucl Med Mol Imaging; 49(6): 1881-1893, 2022 May.
Article in English | MEDLINE | ID: mdl-34967914

ABSTRACT

PURPOSE: We sought to evaluate the diagnostic performance for coronary artery disease (CAD) of myocardial blood flow (MBF) quantification with 18F-flurpiridaz PET using motion correction (MC) and residual activity correction (RAC). METHODS: In total, 231 patients undergoing same-day pharmacologic rest and stress 18F-flurpiridaz PET from the Phase III flurpiridaz trial (NCT01347710) were studied. Frame-by-frame MC was performed, and RAC was accomplished by subtracting the rest residual counts from the dynamic stress polar maps. MBF and myocardial flow reserve (MFR) were derived with a two-compartment early kinetic model for the entire left ventricle (global), each coronary territory, and each of the 17 segments. Global and minimal values of the three territorial (minimal vessel) and segmental (minimal segment) estimates of stress MBF and MFR were evaluated in the prediction of CAD. MBF and MFR were evaluated with and without MC and RAC (1: no MC/no RAC, 2: no MC/RAC, 3: MC/RAC). RESULTS: The area under the receiver operating characteristic curve (AUC [95% confidence interval]) of stress MBF with MC/RAC was higher for minimal segment (0.89 [0.85-0.94]) than for minimal vessel (0.86 [0.81-0.92], p = 0.03) or global estimation (0.81 [0.75-0.87], p < 0.0001). The AUC of MFR with MC/RAC was higher for minimal segment (0.87 [0.81-0.93]) than for minimal vessel (0.83 [0.76-0.90], p = 0.014) or global estimation (0.77 [0.69-0.84], p < 0.0001). The AUCs of minimal segment stress MBF and MFR with MC/RAC were higher compared to those with no MC/RAC (p < 0.001 for both) or no MC/no RAC (p < 0.0001 for both). CONCLUSIONS: Minimal segment MBF or MFR estimation with MC and RAC improves the diagnostic performance for obstructive CAD compared to global assessment.
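
The AUCs reported above can be understood via the pairwise (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased one. A minimal sketch with invented scores follows; note that when a lower value (e.g. lower stress MBF) indicates disease, the score would be negated before applying this.

```python
# Pairwise (Mann-Whitney) AUC; ties count 0.5. Labels: 1 = CAD, 0 = no CAD.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```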


Subjects
Coronary Artery Disease, Myocardial Fractional Flow Reserve, Myocardial Perfusion Imaging, Coronary Artery Disease/diagnostic imaging, Coronary Circulation/physiology, Humans, Myocardial Perfusion Imaging/methods, Positron-Emission Tomography/methods
12.
PET Clin; 16(4): 483-492, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34353746

ABSTRACT

Artificial intelligence (AI) has significant potential to positively impact and advance medical imaging, including positron emission tomography (PET) imaging applications. AI has the ability to enhance and optimize all aspects of the PET imaging chain, from patient scheduling, patient setup, protocoling, data acquisition, detector signal processing, reconstruction, and image processing to interpretation. AI also poses industry-specific challenges that will need to be addressed and overcome to maximize its future potential in PET. This article provides an overview of these industry-specific challenges for the development, standardization, commercialization, and clinical adoption of AI, and explores the potential enhancements to PET imaging brought on by AI in the near future. In particular, the combination of on-demand image reconstruction, AI, and custom-designed data-processing workflows may open new possibilities for innovation which would positively impact the industry and ultimately patients.


Subjects
Artificial Intelligence, Positron-Emission Tomography, Humans, Computer-Assisted Image Processing, Radiography
13.
IEEE J Biomed Health Inform; 24(6): 1805-1813, 2020 Jun.
Article in English | MEDLINE | ID: mdl-28026794

ABSTRACT

This study aims to develop an automatic classifier based on deep learning for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop classification models, and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. A total of 10,300 subjects with 361 features each were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99% in a ten-fold cross-validation experiment. The analysis of DBN weights showed a good visual spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus shown by clinical rules and standards for COPD diagnostics. We thus demonstrate that the DBN is a competitive tool for exacerbation risk assessment in patients suffering from COPD.
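
The 91.99% accuracy above was obtained with ten-fold cross-validation. The index-splitting step of k-fold cross-validation can be sketched as follows; this is an illustration, not the authors' code.

```python
# Sketch: partition n samples into k contiguous folds; each fold serves once
# as the test set while the rest form the training set.

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size
```

In practice the data would be shuffled (or stratified by label) before splitting.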


Subjects
Deep Learning, Chronic Obstructive Pulmonary Disease, Algorithms, Cohort Studies, Disease Progression, Humans, Chronic Obstructive Pulmonary Disease/classification, Chronic Obstructive Pulmonary Disease/epidemiology, Chronic Obstructive Pulmonary Disease/genetics, Chronic Obstructive Pulmonary Disease/physiopathology, Sensitivity and Specificity, Support Vector Machine
15.
Expert Rev Med Devices; 14(3): 197-212, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28277804

ABSTRACT

INTRODUCTION: Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.


Subjects
Diagnostic Imaging, Heart/diagnostic imaging, Computer-Assisted Image Processing, Precision Medicine, Software, Diagnostic Imaging/instrumentation, Diagnostic Imaging/methods, Humans, Computer-Assisted Image Processing/instrumentation, Computer-Assisted Image Processing/methods, Precision Medicine/instrumentation, Precision Medicine/methods
16.
Phys Med Biol; 61(24): 8941-8944, 2016 Dec 21.
Article in English | MEDLINE | ID: mdl-27910819

ABSTRACT

The origin ensemble (OE) algorithm is a new method for image reconstruction from nuclear tomographic data. Its main advantages are the ease of implementation for complex tomographic models and its sound statistical foundation. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for improving the algorithm in its application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).


Subjects
Gamma Rays, Protons, Feasibility Studies, Monte Carlo Method, Proton Therapy
17.
Med Phys; 43(10): 5475, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27782695

ABSTRACT

PURPOSE: The authors are currently developing a dual-resolution multiple-pinhole microSPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time, such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space and can provide estimates of image uncertainty along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. METHODS: System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume of interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. RESULTS: For VOI measurements in the heart, liver, bladder, and soft tissue, MLEM- and OE-reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier, at ∼200 iterations. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M-count data set, with some loss of image-entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE. A combination of the two methods showed decreased reconstruction time and no loss of performance when compared to conventional OE alone. CONCLUSIONS: OE-reconstructed images were found to be quantitatively and qualitatively similar to MLEM, yet OE also provided estimates of image uncertainty. Some acceleration of the reconstruction can be gained through the use of parallel computing. The OE algorithm is useful for reconstructing multiple-pinhole SPECT data and can be easily modified for real-time reconstruction.
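
Total-image entropy, used above as a convergence measure for OE, can be read as the Shannon entropy of the normalized voxel-count distribution. A sketch under that assumption follows; the paper's exact definition and normalization may differ.

```python
import math

# Sketch: Shannon entropy (in nats) of a reconstructed count image, treating
# the normalized voxel counts as a probability distribution.

def image_entropy(counts):
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:
            p = c / total
            h -= p * math.log(p)
    return h
```

A uniform image maximizes the entropy (log of the number of voxels), while an image with all counts in one voxel has entropy zero.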


Subjects
Computer-Assisted Image Processing/methods, Single-Photon Emission Computed Tomography, Animals, Mice, Imaging Phantoms, Time Factors
18.
Phys Med; 32(10): 1252-1258, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27692754

ABSTRACT

INTRODUCTION: Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. MATERIALS AND METHODS: The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. RESULTS AND DISCUSSION: Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares method fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). CONCLUSIONS: The results of this work validate the new methods using computer simulations of FDG kinetics. They show that in situations where the classical approach fails to estimate uncertainty accurately, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although the particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities.
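
A toy version of the Markov chain Monte Carlo step can be sketched for a single kinetic rate k in a monoexponential model y(t) = A·exp(-k·t) with known Gaussian noise and a flat prior on k > 0. The paper itself uses a full compartmental model and a hierarchical noise prior, so this Metropolis sampler is only an illustration of the sampling idea.

```python
import math, random

def log_likelihood(k, times, ys, amp, sigma):
    """Gaussian log-likelihood (up to a constant) of rate k for y = amp*exp(-k*t)."""
    return sum(-0.5 * ((y - amp * math.exp(-k * t)) / sigma) ** 2
               for t, y in zip(times, ys))

def metropolis(times, ys, amp, sigma, n_iter=20000, step=0.02, k0=0.5, seed=0):
    rng = random.Random(seed)
    k = k0
    ll = log_likelihood(k, times, ys, amp, sigma)
    samples = []
    for _ in range(n_iter):
        cand = k + rng.gauss(0.0, step)        # random-walk proposal
        if cand > 0:                           # flat prior restricted to k > 0
            ll_cand = log_likelihood(cand, times, ys, amp, sigma)
            # accept with probability min(1, exp(ll_cand - ll))
            if ll_cand >= ll or rng.random() < math.exp(ll_cand - ll):
                k, ll = cand, ll_cand
        samples.append(k)
    return samples[n_iter // 2:]               # discard first half as burn-in

```

The retained samples approximate the posterior of k; their spread is the uncertainty estimate that, in the paper's setting, the non-linear least-squares approach failed to capture accurately.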


Subjects
Diagnostic Imaging/statistics & numerical data, Bayes Theorem, Biophysical Phenomena, Computer Simulation, Fluorodeoxyglucose F18/pharmacokinetics, Humans, Kinetics, Least-Squares Analysis, Markov Chains, Biological Models, Monte Carlo Method, Positron-Emission Tomography/statistics & numerical data, Radiopharmaceuticals/pharmacokinetics
19.
Phys Med; 31(8): 1105-1107, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26508015

ABSTRACT

OBJECTIVES: The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than, or even comparable with, the half-lives of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. METHODS: The formalism for the statistics of radioactive decay based on the binomial distribution is derived. The theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. RESULTS: The variance of the number of decays estimated using the Poisson distribution is significantly different from the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. CONCLUSIONS: Durations of nuclear medicine studies are often relatively long; they may even be a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations Poisson statistics are unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, this statement does not directly apply to counting statistics at the level of event detection: the low sensitivities of the detectors used in imaging studies make the Poisson approximation nearly exact.
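
The deviation factor discussed here follows directly from the binomial model: with decay constant λ = ln2/T½ and observation time T, the number of decays among N atoms is binomial with p = 1 − exp(−λT), so the true variance N·p·(1 − p) differs from the Poisson value N·p by the factor (1 − p) = exp(−λT). A sketch follows; the half-lives quoted in the usage note are approximate values.

```python
import math

# Ratio of the true (binomial) variance of the number of decays to the
# Poisson-approximation variance: (1 - p) = exp(-lam * T), lam = ln2 / T_half.

def variance_ratio(half_life, obs_time):
    lam = math.log(2) / half_life
    return math.exp(-lam * obs_time)
```

For a 5-minute acquisition, (82)Rb (half-life ≈ 1.27 min) gives a ratio of roughly 0.065, a large deviation consistent with the abstract, while (18)F (half-life ≈ 110 min) gives roughly 0.97, so the Poisson approximation remains reasonable there.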


Subjects
Poisson Distribution, Radioactivity, Nuclear Medicine, Radioisotopes
20.
Eur J Nucl Med Mol Imaging; 42(10): 1551-1561, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26012901

ABSTRACT

PURPOSE: Longstanding uncontrolled atherogenic risk factors may contribute to left atrial (LA) hypertension, LA enlargement (LAE) and coronary vascular dysfunction. Together, they may better identify the risk of major adverse cardiac events (MACE). The aim of this study was to test the hypothesis that chronic LA hypertension, as assessed by LAE, modifies the relationship between coronary vascular function and MACE. METHODS: In 508 unselected subjects with a normal clinical (82)Rb PET/CT, ejection fraction ≥40 %, and no prior coronary artery disease, valve disease or atrial fibrillation, LAE was determined based on LA volumes estimated from the hybrid perfusion and CT transmission scan images and indexed to body surface area. Absolute myocardial blood flow and global coronary flow reserve (CFR) were calculated. Subjects were systematically followed up for the primary end-point, MACE, a composite of all-cause death, myocardial infarction, hospitalization for heart failure, stroke, coronary artery disease progression or revascularization. RESULTS: During a median follow-up of 862 days, 65 of the subjects experienced a composite event. Compared with subjects with normal LA size, subjects with LAE showed significantly lower CFR (2.25 ± 0.83 vs. 1.95 ± 0.80, p = 0.01). LAE independently and incrementally predicted MACE even after accounting for clinical risk factors, medication use, stress left ventricular ejection fraction, stress left ventricular end-diastolic volume index and CFR (chi-squared statistic increased from 30.9 to 48.3; p = 0.001). Among subjects with normal CFR, those with LAE had significantly worse event-free survival (risk-adjusted HR 5.4, 95 % CI 2.3 - 12.8, p < 0.0001). CONCLUSION: LAE and reduced CFR are related but distinct cardiovascular adaptations to atherogenic risk factors. LAE is a risk marker for MACE independent of clinical factors and left ventricular volumes; individuals with LAE may be at risk of MACE despite normal coronary vascular function.


Subjects
Coronary Artery Disease/diagnosis, Coronary Artery Disease/mortality, Heart Atria/diagnostic imaging, Heart Failure/mortality, Myocardial Infarction/mortality, Aged, Boston/epidemiology, Causality, Comorbidity, Disease-Free Survival, Exercise Test/statistics & numerical data, Female, Humans, Male, Middle Aged, Multimodal Imaging/statistics & numerical data, Positron-Emission Tomography/statistics & numerical data, Reproducibility of Results, Risk Factors, Sensitivity and Specificity, Survival Rate, X-Ray Computed Tomography/statistics & numerical data, Vasodilators