1.
Diagnostics (Basel) ; 14(11)2024 May 29.
Article in English | MEDLINE | ID: mdl-38893657

ABSTRACT

A comparative interpretation of mammograms has become increasingly important, and it is crucial to develop subtraction processing and registration methods for mammograms. However, nonrigid image registration has seldom been applied to images composed almost entirely of soft tissue, such as mammograms. We examined whether subtraction processing for the comparative interpretation of mammograms can be performed using nonrigid image registration. As a preliminary study, we evaluated the results of subtraction processing by applying nonrigid image registration to normal mammograms, assuming a comparative interpretation between the left and right breasts. Mediolateral-oblique-view mammograms were obtained from noncancer patients and divided into 1000 cases for training, 100 for validation, and 500 for testing. Nonrigid image registration was applied to align the horizontally flipped left-breast mammogram with the right-breast mammogram. We compared the sum of absolute differences (SAD) of the bilateral difference image (Difference Image) with and without nonrigid image registration. The average SAD was significantly lower with nonrigid image registration than without it (0.0549 vs. 0.0692, p < 0.001). In subgroup analyses by breast area, breast density, compressed breast thickness, and the Difference Image without nonrigid image registration, the average SAD of the Difference Image was also significantly lower with nonrigid image registration than without it (p < 0.001). Nonrigid image registration proved sufficiently useful for aligning bilateral mammograms and is expected to be an important tool in the development of a support system for the comparative interpretation of mammograms.
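As a rough illustration of the kind of processing described above, the sketch below aligns a horizontally flipped left-breast image to the right-breast image with B-spline (nonrigid) registration and compares the SAD of the bilateral difference image with and without registration. It uses SimpleITK; the file names, control-point grid, optimizer settings, and intensity normalization are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: nonrigid (B-spline) registration of a flipped left MLO mammogram
# to the right MLO mammogram, then SAD of the bilateral difference image.
import numpy as np
import SimpleITK as sitk

right = sitk.ReadImage("right_mlo.png", sitk.sitkFloat32)   # placeholder file names
left = sitk.ReadImage("left_mlo.png", sitk.sitkFloat32)
left_flipped = sitk.Flip(left, [True, False])  # mirror so both breasts face the same way

def normalized_sad(fixed: sitk.Image, moving: sitk.Image) -> float:
    """Mean absolute pixel difference between two intensity-normalized images."""
    a = sitk.GetArrayFromImage(sitk.RescaleIntensity(fixed, 0.0, 1.0))
    b = sitk.GetArrayFromImage(sitk.RescaleIntensity(moving, 0.0, 1.0))
    return float(np.mean(np.abs(a - b)))

# B-spline transform initialized on a coarse control-point grid (assumed size).
tx0 = sitk.BSplineTransformInitializer(right, [8, 8])
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
reg.SetInitialTransform(tx0, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
tx = reg.Execute(right, left_flipped)

warped = sitk.Resample(left_flipped, right, tx, sitk.sitkLinear, 0.0, sitk.sitkFloat32)
print("SAD without registration:", normalized_sad(right, left_flipped))
print("SAD with registration:   ", normalized_sad(right, warped))
```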

2.
Front Med (Lausanne) ; 11: 1335958, 2024.
Article in English | MEDLINE | ID: mdl-38510449

ABSTRACT

Introduction: Physical measurements of expiratory flow volume and speed can be obtained using spirometry. These measurements are used for the diagnosis and risk assessment of chronic obstructive pulmonary disease and play a crucial role in delivering early care. However, spirometry is not performed frequently in routine clinical practice, which hinders the early detection of pulmonary function impairment. Chest radiographs (CXRs), though acquired frequently, are not used to measure pulmonary functional information. This study aimed to evaluate whether spirometry parameters can be estimated accurately from a single frontal CXR, without relying on image findings, using deep learning. Methods: Forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), and FEV1/FVC as spirometry measurements, together with the corresponding chest radiographs of 11,837 participants, were used in this study. The data were randomly allocated to training, validation, and evaluation datasets at an 8:1:1 ratio. A deep learning network pretrained on ImageNet was used; the inputs were CXRs and the outputs were the spirometry test values. The training and evaluation of the deep learning network were performed separately for each parameter. The mean absolute percentage error (MAPE) and Pearson's correlation coefficient (r) were used as the evaluation indices. Results: The MAPEs between the spirometry measurements and the AI estimates for FVC, FEV1, and FEV1/FVC were 7.59% (r = 0.910), 9.06% (r = 0.879), and 5.21% (r = 0.522), respectively. A strong positive correlation was observed between the measured and predicted indices for FVC and FEV1. An average accuracy of >90% was obtained for each estimated spirometry index. Bland-Altman analysis revealed good agreement between the estimated and measured values for FVC and FEV1. Discussion: Frontal CXRs contain information related to pulmonary function, and AI estimation using frontal CXRs alone, without image findings, could accurately estimate spirometry values. The proposed network could serve either as a prompt to perform spirometry or as an alternative method for estimating pulmonary function, suggesting its utility.
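A minimal sketch of the general approach described above (not the authors' exact network): fine-tune an ImageNet-pretrained CNN to regress one spirometry value from a frontal CXR and evaluate with MAPE and Pearson's r. The backbone, loss, and hyperparameters are assumptions.

```python
# Illustrative sketch: CXR-to-spirometry regression with an ImageNet-pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)  # assumed backbone
model.fc = nn.Linear(model.fc.in_features, 1)  # one regression output per spirometry index
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # mean absolute error as the training objective (an assumption)

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """One optimization step on a mini-batch of CXRs and measured spirometry values."""
    model.train()
    optimizer.zero_grad()
    preds = model(images.to(device)).squeeze(1)
    loss = loss_fn(preds, targets.to(device))
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def evaluate(preds: torch.Tensor, targets: torch.Tensor) -> tuple[float, float]:
    """Return (MAPE in %, Pearson's r) between predicted and measured values."""
    mape = (torch.abs(preds - targets) / targets).mean().item() * 100.0
    r = torch.corrcoef(torch.stack([preds, targets]))[0, 1].item()
    return mape, r
```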

3.
Front Oncol ; 14: 1255109, 2024.
Article in English | MEDLINE | ID: mdl-38505584

ABSTRACT

Background: Mammography is the modality of choice for breast cancer screening. However, some breast cancers are diagnosed through ultrasonography alone, with no or only benign findings on mammography (hereafter referred to as non-visibles). Therefore, this study aimed to identify factors that indicate the possibility of non-visibles based on the mammary gland content ratio estimated using artificial intelligence (AI), stratified by patient age and compressed breast thickness (CBT). Methods: We used an AI model we had previously developed to estimate the mammary gland content ratio and quantitatively analyzed 26,232 controls and 150 non-visibles. First, we evaluated divergence trends between controls and non-visibles based on the average estimated mammary gland content ratio to confirm the importance of analysis by age and CBT. Next, we evaluated whether groups with a mammary gland content ratio ≥50% contribute to the divergence between controls and non-visibles, in order to identify specific factors that indicate the possibility of non-visibles. The images were classified into two groups using a threshold of 50% for the estimated mammary gland content ratio, and logistic regression analysis was performed between controls and non-visibles. Results: The average estimated mammary gland content ratio was significantly higher in non-visibles than in controls in the overall sample, in patients aged ≥40 years, and in those with a CBT ≥40 mm (p < 0.05). The difference in the average estimated mammary gland content ratio between controls and non-visibles was 7.54% for the overall sample; 6.20%, 7.48%, and 4.78% for patients aged 40-49, 50-59, and ≥60 years, respectively; and 6.67%, 9.71%, and 16.13% for those with a CBT of 40-49, 50-59, and ≥60 mm, respectively. In the evaluation of groups with a mammary gland content ratio ≥50%, we also found positive associations for non-visibles, with controls as the baseline, for the overall sample, for patients aged 40-59 years, and for those with a CBT ≥40 mm (p < 0.05). The corresponding odds ratios were ≥2.20, with a maximum of 4.36. Conclusion: These findings indicate that an estimated mammary gland content ratio ≥50% in patients aged 40-59 years, or in those with a CBT ≥40 mm, could be an indicative factor for non-visibles.
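The statistical analysis described can be illustrated with a short sketch: dichotomize the AI-estimated mammary gland content ratio at 50%, fit a logistic regression of case status (non-visible vs. control) on that indicator, and report the odds ratio with its confidence interval. The data below are synthetic placeholders, not the study's values.

```python
# Illustrative logistic-regression analysis on a binary density indicator (ratio >= 50%).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_controls, n_cases = 1000, 50  # placeholder sample sizes
df = pd.DataFrame({
    "ratio": np.concatenate([rng.uniform(10, 80, n_controls), rng.uniform(30, 90, n_cases)]),
    "non_visible": np.concatenate([np.zeros(n_controls), np.ones(n_cases)]),
})
df["dense"] = (df["ratio"] >= 50).astype(int)  # estimated mammary gland content ratio >= 50%

X = sm.add_constant(df[["dense"]])
fit = sm.Logit(df["non_visible"], X).fit(disp=0)

odds_ratio = np.exp(fit.params["dense"])
ci_low, ci_high = np.exp(fit.conf_int().loc["dense"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {fit.pvalues['dense']:.3g}")
```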

4.
Cancers (Basel) ; 15(10)2023 May 17.
Article in English | MEDLINE | ID: mdl-37345132

ABSTRACT

Breast types have recently been categorized into four types based on the Breast Imaging Reporting and Data System (BI-RADS) atlas, and evaluating them is vital in clinical practice. A Japanese guideline, called breast composition, was developed for these breast types based on BI-RADS. The guideline is characterized by a continuous value, the mammary gland content ratio, which is calculated to determine the breast composition and therefore allows a more objective and visual evaluation. Although discriminative deep convolutional neural networks (DCNNs) have conventionally been developed to classify breast composition, they can produce errors of two categories or more. Hence, we propose an alternative regression DCNN based on the mammary gland content ratio. We used 1476 images evaluated by an expert physician. Our regression DCNN contained four convolution layers and three fully connected layers. With it, we obtained a high correlation of 0.93 (p < 0.01). Furthermore, to scrutinize the effectiveness of the regression DCNN, we categorized breast composition using the ratio estimated by the regression DCNN. The agreement rate was high at 84.8%, suggesting that breast composition can be determined with high accuracy using the regression DCNN. Moreover, errors of two categories or more are unlikely to occur, and the estimates produced by the proposed method can be interpreted intuitively.
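A rough sketch of such a regression DCNN (not the authors' exact architecture): four convolution layers and three fully connected layers that output the mammary gland content ratio, plus a helper mapping the ratio onto four breast-composition categories. The channel widths and the category cut-offs are illustrative assumptions.

```python
# Illustrative regression DCNN for mammary gland content ratio, with a categorization helper.
import torch
import torch.nn as nn

class RatioRegressionDCNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        chans = [1, 16, 32, 64, 128]  # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)      # four convolution layers
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(                  # three fully connected layers
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 32), nn.ReLU(inplace=True),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x)).flatten(1)
        return self.head(x).squeeze(1)  # estimated mammary gland content ratio (%)

def to_breast_composition(ratio_percent: float) -> str:
    """Map an estimated ratio to one of four breast-composition categories (illustrative cut-offs)."""
    if ratio_percent < 10:
        return "fatty"
    if ratio_percent < 50:
        return "scattered"
    if ratio_percent < 80:
        return "heterogeneously dense"
    return "extremely dense"

model = RatioRegressionDCNN()
print(to_breast_composition(model(torch.rand(1, 1, 256, 256)).item()))
```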
