Results 1 - 5 of 5
1.
J Med Internet Res; 26: e58599, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39042442

ABSTRACT

BACKGROUND: Diagnosing the underlying causes of nonneurogenic male lower urinary tract symptoms associated with bladder outlet obstruction (BOO) is challenging. Video-urodynamic studies (VUDS) and pressure-flow studies (PFS) are both invasive diagnostic methods for BOO. VUDS can more precisely differentiate etiologies of male BOO, such as benign prostatic obstruction, primary bladder neck obstruction, and dysfunctional voiding, potentially outperforming PFS. OBJECTIVE: The invasive nature of these examinations highlights the need for noninvasive predictive models that facilitate BOO diagnosis and reduce the necessity for invasive procedures. METHODS: We conducted a retrospective study of a cohort of men with medication-refractory, nonneurogenic lower urinary tract symptoms suspected of BOO who underwent VUDS from 2001 to 2022. Two BOO predictive models were developed: one based on the International Continence Society definition (ICS-BOO) and the other on video-urodynamic study-diagnosed bladder outlet obstruction (VBOO). The patient cohort was randomly split into training and test sets for analysis. A total of 6 machine learning algorithms, including logistic regression, were used for model development. During model development, we first performed development validation using repeated 5-fold cross-validation on the training set and then test validation to assess the model's performance on an independent test set. Both models were implemented as paper-based nomograms and integrated into a web-based artificial intelligence prediction tool to aid clinical decision-making. RESULTS: Among 307 patients, 26.7% (n=82) met the ICS-BOO criteria, while 82.1% (n=252) were diagnosed with VBOO. The ICS-BOO prediction model had a mean area under the receiver operating characteristic curve (AUC) of 0.74 (SD 0.09) and mean accuracy of 0.76 (SD 0.04) in development validation and an AUC and accuracy of 0.86 and 0.77, respectively, in test validation. The VBOO prediction model yielded a mean AUC of 0.71 (SD 0.06) and mean accuracy of 0.77 (SD 0.06) in development validation, with an AUC and accuracy of 0.72 and 0.76, respectively, in test validation. When both models' predictions are applied to the same patient, their combined insights can substantially enhance clinical decision-making and simplify the diagnostic pathway. Under the dual-model prediction approach, when both models predict BOO positively, suggesting that the obstruction results from medication-refractory primary bladder neck obstruction or benign prostatic obstruction, surgical intervention may be considered; thus, VUDS might be unnecessary for 100 (32.6%) patients. Conversely, when the ICS-BOO prediction is negative but the VBOO prediction is positive, indicating varied etiology, VUDS rather than PFS is advised for precise diagnosis and guidance of subsequent therapy, accurately identifying 51.1% (47/92) of patients for VUDS. CONCLUSIONS: The 2 machine learning models predicting ICS-BOO and VBOO, based on 6 noninvasive clinical parameters, demonstrate commendable discrimination performance. Using the dual-model prediction approach, when both models predict positively, VUDS may be avoided, assisting in male BOO diagnosis and reducing the need for such invasive procedures.
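The development-validation and test-validation steps described above can be illustrated with a minimal scikit-learn sketch, assuming a tabular file of the noninvasive parameters and a binary ICS-BOO label; the file name, column names, and number of cross-validation repeats below are hypothetical placeholders, not the study's actual data or settings.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical cohort table of noninvasive clinical parameters plus an ICS-BOO label.
df = pd.read_csv("vuds_cohort.csv")
X, y = df.drop(columns=["ics_boo"]), df["ics_boo"]

# Random training/test split, mirroring the study design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Logistic regression is one of the six algorithms named in the abstract.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Development validation: repeated 5-fold cross-validation on the training set.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=42)
scores = cross_validate(model, X_train, y_train, cv=cv, scoring=["roc_auc", "accuracy"])
print("CV AUC %.2f (SD %.2f), accuracy %.2f (SD %.2f)" % (
    scores["test_roc_auc"].mean(), scores["test_roc_auc"].std(),
    scores["test_accuracy"].mean(), scores["test_accuracy"].std()))

# Test validation: fit on the full training set and score the held-out test set.
model.fit(X_train, y_train)
print("Test accuracy: %.2f" % model.score(X_test, y_test))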


Subjects
Nomograms; Urinary Bladder Neck Obstruction; Urodynamics; Humans; Urinary Bladder Neck Obstruction/diagnosis; Urinary Bladder Neck Obstruction/physiopathology; Male; Retrospective Studies; Middle Aged; Aged; Artificial Intelligence
2.
Bioengineering (Basel); 9(8), 2022 Jul 29.
Article in English | MEDLINE | ID: mdl-36004876

ABSTRACT

Lung segmentation of chest X-ray (CXR) images is a fundamental step in many diagnostic applications. Most lung field segmentation methods reduce the image size to speed up subsequent processing; the low-resolution result is then upsampled back to the original high resolution. However, segmentation boundaries become blurred during these downsampling and upsampling steps, and this blurring needs to be alleviated. In this paper, we incorporate lung field segmentation into a superpixel resizing framework to achieve this goal. The superpixel resizing framework upsamples the segmentation results using the superpixel boundary information obtained during the downsampling process. With this method, the computation time of high-resolution medical image segmentation is reduced while the quality of the segmentation results is preserved. We evaluate the proposed method on the JSRT, LIDC-IDRI, and ANH datasets. The experimental results show that the proposed superpixel resizing framework outperforms traditional image resizing methods. Furthermore, combining the segmentation network with the superpixel resizing framework, the proposed method achieves better results with an average processing time of 4.6 s on a CPU and 0.02 s on a GPU.
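A minimal sketch of the superpixel-guided upsampling idea follows, assuming scikit-image's SLIC stands in for the paper's superpixel algorithm and that a coarse lung mask is already available; the segment count, compactness, and threshold are illustrative assumptions, not the authors' settings.

import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize

def superpixel_upsample(cxr_highres, coarse_mask, n_segments=600):
    """Refine a low-resolution lung mask by snapping it to superpixels
    computed on the original high-resolution CXR (2D float array)."""
    # Superpixels computed on the full-resolution image preserve true boundary detail.
    labels = slic(cxr_highres, n_segments=n_segments, compactness=0.1,
                  channel_axis=None, start_label=0)
    # Nearest-neighbour upsampling of the coarse mask to full resolution.
    mask_up = resize(coarse_mask.astype(float), cxr_highres.shape,
                     order=0, preserve_range=True) > 0.5
    # Each superpixel takes the majority vote of the upsampled mask inside it,
    # so the final boundary follows image edges rather than the blurred,
    # block-like boundary left by naive upsampling.
    refined = np.zeros_like(mask_up)
    for lab in np.unique(labels):
        region = labels == lab
        refined[region] = mask_up[region].mean() > 0.5
    return refined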

3.
IEEE Trans Biomed Circuits Syst; 13(4): 766-780, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31135368

ABSTRACT

The paper proposes an innovative deep convolutional neural network (DCNN) combined with a texture map for detecting cancerous regions and marking the region of interest (ROI) automatically in a single model. The proposed DCNN model contains two collaborative branches: an upper branch that performs oral cancer detection and a lower branch that performs semantic segmentation and ROI marking. The upper branch extracts the cancerous regions, while the lower branch delineates those regions more precisely. To make the features in the cancerous regions more regular, the model first extracts texture images from the input image. A sliding window is then applied to compute the standard deviation values of the texture image. Finally, the standard deviation values are used to construct a texture map, which is partitioned into multiple patches and used as the input to the deep convolutional network. The proposed method is called a texture-map-based branch-collaborative network. In the experiments, the average sensitivity and specificity of detection reach 0.9687 and 0.7129, respectively, with the wavelet transform, and 0.9314 and 0.9475, respectively, with the Gabor filter.
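A short sketch of the texture-map construction described above: compute a per-pixel standard deviation over a sliding window of a texture image (e.g. a wavelet or Gabor response) and partition the resulting map into patches for the network. The window and patch sizes are illustrative assumptions, not the paper's parameters.

import numpy as np
from scipy.ndimage import uniform_filter

def local_std_map(texture_img, window=15):
    """Per-pixel standard deviation over a window x window neighbourhood,
    computed as sqrt(E[x^2] - E[x]^2) using box filters."""
    x = texture_img.astype(float)
    mean = uniform_filter(x, size=window)
    mean_sq = uniform_filter(x ** 2, size=window)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

def to_patches(texture_map, patch=64):
    """Partition the texture map into non-overlapping patch x patch tiles."""
    h, w = texture_map.shape
    h, w = h - h % patch, w - w % patch
    tiles = texture_map[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, patch, patch)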


Subjects
Algorithms; Early Detection of Cancer; Mouth Neoplasms/diagnosis; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted; Wavelet Analysis
4.
J Microbiol Immunol Infect; 44(6): 449-55, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21684227

ABSTRACT

BACKGROUND: Useful predictive models for identifying patients at high risk of bacteremia at the emergency department (ED) are lacking. This study aimed to provide such models. METHODS: A prospective cohort study was conducted at the ED of a tertiary care hospital from October 1 to November 30, 2004. Patients aged 15 years or older who had at least two sets of blood cultures were recruited. Data were analyzed on selected covariates, including demographic characteristics, predisposing conditions, clinical presentations, laboratory tests, and presumptive diagnosis at the ED. An iterative procedure was used to build a logistic model, which was then simplified into a coefficient-based scoring system. RESULTS: A total of 558 patients with 84 episodes of true bacteremia were enrolled. Predictors of bacteremia and their assigned scores were as follows: fever greater than or equal to 38.3°C [odds ratio (OR), 2.64], 1 point; tachycardia greater than or equal to 120/min (OR, 2.521), 1 point; lymphopenia less than 0.5×10³/µL (OR, 3.356), 2 points; aspartate transaminase greater than 40 IU/L (OR, 2.355), 1 point; C-reactive protein greater than 10 mg/dL (OR, 2.226), 1 point; procalcitonin greater than 0.5 ng/mL (OR, 3.147), 2 points; and presumptive diagnosis of respiratory tract infection (OR, 0.236), -2 points. The areas under the receiver operating characteristic curves of the original logistic model and of the simplified scoring model using these seven predictors and their assigned scores were 0.854 (95% confidence interval, 0.806-0.902) and 0.845 (95% confidence interval, 0.798-0.894), respectively. CONCLUSION: This simplified scoring system could rapidly identify patients at high risk of bacteremia at the ED.
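The coefficient-based scoring system reported above can be written directly as a small function that sums the assigned points; the thresholds and point values come from the abstract, while the argument names are hypothetical.

def bacteremia_score(temp_c, heart_rate, lymphocytes_k_per_ul,
                     ast_iu_l, crp_mg_dl, pct_ng_ml, dx_respiratory_tract):
    score = 0
    if temp_c >= 38.3:              score += 1   # fever >= 38.3 C
    if heart_rate >= 120:           score += 1   # tachycardia >= 120/min
    if lymphocytes_k_per_ul < 0.5:  score += 2   # lymphopenia < 0.5 x 10^3/uL
    if ast_iu_l > 40:               score += 1   # aspartate transaminase > 40 IU/L
    if crp_mg_dl > 10:              score += 1   # C-reactive protein > 10 mg/dL
    if pct_ng_ml > 0.5:             score += 2   # procalcitonin > 0.5 ng/mL
    if dx_respiratory_tract:        score -= 2   # presumptive respiratory tract infection
    return score

# Example: a febrile, tachycardic patient with elevated procalcitonin scores 1 + 1 + 2 = 4.
print(bacteremia_score(39.0, 130, 1.2, 35, 8.0, 1.1, False))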


Subjects
Bacteremia/blood; Adolescent; Adult; Aged; Aged, 80 and over; Aspartate Aminotransferases/metabolism; Bacteremia/microbiology; Calcitonin/metabolism; Calcitonin Gene-Related Peptide; Cohort Studies; Emergency Service, Hospital/statistics & numerical data; Female; Fever/microbiology; Humans; Logistic Models; Male; Middle Aged; Models, Statistical; Multivariate Analysis; Predictive Value of Tests; Prospective Studies; Protein Precursors/metabolism; ROC Curve; Tachycardia/microbiology
5.
IEEE Trans Inf Technol Biomed; 7(3): 208-17, 2003 Sep.
Article in English | MEDLINE | ID: mdl-14518735

ABSTRACT

Identifying abdominal organs is one of the essential steps in visualizing organ structure to assist in teaching, clinical training, diagnosis, and medical image retrieval. However, due to partial volume effects, gray-level similarities of adjacent organs, contrast media effects, and the relatively high variation of organ position and shape, automatically identifying abdominal organs has always been a highly challenging task. To overcome these difficulties, this paper proposes combining a multimodule contextual neural network with spatial fuzzy rules and fuzzy descriptors for automatically identifying abdominal organs from a series of CT image slices. The multimodule contextual neural network segments each image slice through a divide-and-conquer strategy embedded within multiple neural network modules, where the results obtained from each module are forwarded to the other modules for integration so that contextual constraints are enforced. With this approach, the difficulties arising from partial volume effects, gray-level similarities of adjacent organs, and contrast media effects can be greatly reduced. To address the high variation in organ position and shape, spatial fuzzy rules and fuzzy descriptors are adopted, along with a contour modification scheme implementing consecutive organ-region overlap constraints. The approach was tested on 40 sets of abdominal CT images, where each set consists of about 40 image slices. We found that 99% of the organ regions in the test images are correctly identified as their corresponding organs, indicating the strong promise of the proposed method.
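An illustrative sketch of a spatial fuzzy descriptor of the kind described above: a trapezoidal membership function scoring how well a candidate region's centroid matches an organ's expected horizontal position in an axial slice. The breakpoints and the example spleen rule are assumptions for illustration, not the paper's actual rules.

import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a and above d, 1 between b and c."""
    return float(np.clip(min((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0))

def spleen_position_membership(centroid_x, image_width):
    # Hypothetical spatial rule: the spleen lies toward the patient's left side,
    # which appears on the right of an axial CT slice.
    rel_x = centroid_x / image_width
    return trapezoid(rel_x, 0.55, 0.65, 0.85, 0.95)

print(spleen_position_membership(centroid_x=380, image_width=512))  # -> 1.0, strong match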


Subjects
Algorithms; Digestive System/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Radiography, Abdominal/methods; Urography/methods; Anatomy, Cross-Sectional; Fuzzy Logic; Humans; Kidney/diagnostic imaging; Liver/diagnostic imaging; Organ Specificity; Pattern Recognition, Automated; Rectum/diagnostic imaging; Spleen/diagnostic imaging; Urinary Bladder/diagnostic imaging