Results 1 - 20 of 28
1.
Front Cell Neurosci ; 17: 1249043, 2023.
Article in English | MEDLINE | ID: mdl-37868193

ABSTRACT

Optogenetic techniques combine optics and genetics to enable cell-specific targeting and precise spatiotemporal control of excitable cells, and they are increasingly being employed. One of the most significant advantages of the optogenetic approach is that it allows for the modulation of nearby cells or circuits with millisecond precision, enabling researchers to gain a better understanding of the complex nervous system. Furthermore, optogenetic neuron activation permits the regulation of information processing in the brain, including synaptic activity and transmission, and also promotes nerve structure development. However, the optimal conditions remain unclear, and further research is required to identify the types of cells that can most effectively and precisely control nerve function. Recent studies have described optogenetic glial manipulation for coordinating the reciprocal communication between neurons and glia. Optogenetically stimulated glial cells can modulate information processing in the central nervous system and provide structural support for nerve fibers in the peripheral nervous system. These advances promote the effective use of optogenetics, although further experiments are needed. This review describes the critical role of glial cells in the nervous system and surveys optogenetic applications in several types of glial cells, as well as their significance in neuron-glia interactions. Finally, it briefly discusses the therapeutic potential and feasibility of optogenetics.

2.
Korean J Radiol ; 24(11): 1151-1163, 2023 11.
Article in English | MEDLINE | ID: mdl-37899524

ABSTRACT

OBJECTIVE: To develop a deep-learning-based bone age prediction model optimized for Korean children and adolescents and evaluate its feasibility by comparing it with a Greulich-Pyle-based deep-learning model. MATERIALS AND METHODS: A convolutional neural network was trained to predict age according to the bone development shown on a hand radiograph (bone age) using 21036 hand radiographs of Korean children and adolescents without known bone development-affecting diseases/conditions obtained between 1998 and 2019 (median age [interquartile range {IQR}], 9 [7-12] years; male:female, 11794:9242) and their chronological ages as labels (Korean model). We constructed 2 separate external datasets consisting of Korean children and adolescents with healthy bone development (Institution 1: n = 343; median age [IQR], 10 [4-15] years; male: female, 183:160; Institution 2: n = 321; median age [IQR], 9 [5-14] years; male: female, 164:157) to test the model performance. The mean absolute error (MAE), root mean square error (RMSE), and proportions of bone age predictions within 6, 12, 18, and 24 months of the reference age (chronological age) were compared between the Korean model and a commercial model (VUNO Med-BoneAge version 1.1; VUNO) trained with Greulich-Pyle-based age as the label (GP-based model). RESULTS: Compared with the GP-based model, the Korean model showed a lower RMSE (11.2 vs. 13.8 months; P = 0.004) and MAE (8.2 vs. 10.5 months; P = 0.002), a higher proportion of bone age predictions within 18 months of chronological age (88.3% vs. 82.2%; P = 0.031) for Institution 1, and a lower MAE (9.5 vs. 11.0 months; P = 0.022) and higher proportion of bone age predictions within 6 months (44.5% vs. 36.4%; P = 0.044) for Institution 2. CONCLUSION: The Korean model trained using the chronological ages of Korean children and adolescents without known bone development-affecting diseases/conditions as labels performed better in bone age assessment than the GP-based model in the Korean pediatric population. Further validation is required to confirm its accuracy.
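The comparison metrics used here (MAE, RMSE, and the proportion of predictions within 6, 12, 18, and 24 months of chronological age) are straightforward to reproduce. A minimal sketch, assuming bone age predictions and reference ages are given in months (example values are hypothetical, not from the study):

```python
import numpy as np

def bone_age_metrics(pred_months, ref_months, windows=(6, 12, 18, 24)):
    """Compute MAE, RMSE, and the proportion of predictions falling
    within +/- k months of the reference (chronological) age."""
    pred = np.asarray(pred_months, dtype=float)
    ref = np.asarray(ref_months, dtype=float)
    err = pred - ref
    metrics = {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
    }
    for k in windows:
        metrics[f"within_{k}_months"] = np.mean(np.abs(err) <= k)
    return metrics

# Hypothetical predictions and chronological ages, in months
print(bone_age_metrics([100, 118, 135], [96, 130, 140]))
```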


Subjects
Artificial Intelligence; Deep Learning; Adolescent; Humans; Child; Male; Female; Infant; Age Determination by Skeleton; Radiography; Republic of Korea
3.
Korean J Radiol ; 24(10): 1038-1041, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37793672
4.
Sci Rep ; 13(1): 5934, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37045856

ABSTRACT

The identification of abnormal findings manifested in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancy amongst readers. However, the opaque reasoning of deep neural networks (DNNs) has been a leading cause of reluctance to use them clinically as CAD systems. Here, we present a novel architectural and algorithmic design of DNNs to comprehensively identify 15 abnormal retinal findings and diagnose 8 major ophthalmic diseases from macula-centered fundus images with accuracy comparable to that of experts. We then define a counterfactual attribution ratio (CAR) which illuminates the system's diagnostic reasoning, representing how each abnormal finding contributed to its diagnostic prediction. Using CAR, we show that both quantitative and qualitative interpretation and interactive adjustment of the CAD result can be achieved. A comparison of the model's CAR with experts' finding-disease diagnosis correlations confirms that the proposed model identifies the relationships between findings and diseases in much the same way that ophthalmologists do.
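The abstract defines CAR only at a high level, as a measure of how much each abnormal finding contributed to a diagnostic prediction; the exact formulation is in the paper. The sketch below is one plausible, hypothetical realization of that idea, comparing the predicted disease probability with and without one finding signal:

```python
import numpy as np

def counterfactual_attribution_ratio(predict_disease, finding_scores, finding_idx):
    """Hypothetical CAR-style attribution: ratio of the disease probability
    given all finding scores to the probability when one finding is
    counterfactually suppressed (set to zero). Illustrative only; this is
    not the paper's exact definition."""
    scores = np.asarray(finding_scores, dtype=float)
    p_full = predict_disease(scores)
    counterfactual = scores.copy()
    counterfactual[finding_idx] = 0.0          # "remove" one abnormal finding
    p_without = predict_disease(counterfactual)
    return p_full / max(p_without, 1e-8)

# Toy disease head: logistic function of a weighted sum of 15 finding scores
rng = np.random.default_rng(0)
weights = rng.normal(size=15)

def toy_head(scores):
    return 1.0 / (1.0 + np.exp(-(weights @ scores)))

print(counterfactual_attribution_ratio(toy_head, rng.random(15), finding_idx=3))
```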


Subjects
Deep Learning; Eye Diseases; Humans; Algorithms; Neural Networks, Computer; Fundus Oculi; Retina/diagnostic imaging
5.
Ultrasonography ; 42(2): 297-306, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36935594

ABSTRACT

PURPOSE: The purpose of this study was to elucidate whether contrast-enhanced ultrasonography (CEUS) can visualize orally administered Sonazoid leaking into the peritoneal cavity in a postoperative stomach leakage mouse model. METHODS: Adult female mice (n=33, 9-10 weeks old) were used. Preoperative CEUS was performed after delivering Sonazoid via intraperitoneal injection and the per oral route. A gastric leakage model was then generated by making a surgical incision of about 0.5 cm at the stomach wall, and CEUS with per oral Sonazoid administration was performed. A region of interest was drawn on the CEUS images and the signal intensity was quantitatively measured. Statistical analysis was performed using a mixed model to compare the signal intensity sampled from the pre-contrast images with those of the post-contrast images obtained at different time points. RESULTS: CEUS after Sonazoid intraperitoneal injection in normal mice and after oral administration in mice with gastric perforation visualized the contrast medium spreading within the liver interlobar fissures continuous to the peritoneal cavity. A quantitative analysis showed that in the mice with gastric perforation, the orally delivered Sonazoid leaking into the peritoneal cavity induced a statistically significant (P<0.05) increase in signal intensity in all CEUS images obtained 10 seconds or longer after contrast delivery. However, enhancement was not observed before gastric perforation surgery (P=0.167). CONCLUSION: CEUS with oral Sonazoid administration efficiently visualized the contrast medium spreading within the peritoneal cavity in a postoperative stomach leakage mouse model.
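The statistical comparison described here (signal intensity at multiple post-contrast time points versus pre-contrast, with repeated measurements per animal) maps naturally onto a linear mixed model with a random intercept per mouse. A minimal sketch with statsmodels, using hypothetical column names (`intensity`, `timepoint`, `mouse_id`) and invented values:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one ROI intensity per mouse and time point
df = pd.DataFrame({
    "mouse_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
    "timepoint": ["pre", "10s", "30s"] * 6,
    "intensity": [12.1, 25.3, 28.7, 10.8, 22.4, 26.0,
                  11.5, 24.9, 27.2, 13.0, 26.1, 29.3,
                  10.2, 21.8, 25.5, 12.7, 23.6, 27.9],
})

# Treat "pre" as the reference level; random intercept per mouse
df["timepoint"] = pd.Categorical(df["timepoint"], categories=["pre", "10s", "30s"])
model = smf.mixedlm("intensity ~ timepoint", data=df, groups=df["mouse_id"])
result = model.fit()
print(result.summary())  # fixed-effect coefficients: post-contrast vs. pre-contrast
```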

6.
J Digit Imaging ; 35(4): 1061-1068, 2022 08.
Article in English | MEDLINE | ID: mdl-35304676

ABSTRACT

Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, where a deep neural network (DNN) is trained to classify and localize nodular patterns (including masses) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of nodular patterns arising in practical clinical settings. Obtaining large amounts of high-quality data is impractical in medical imaging, where (1) acquiring labeled images is extremely expensive, (2) annotations are subject to inaccuracies due to the inherent difficulty in interpreting images, and (3) normal cases occur far more frequently than abnormal cases. In this work, we devise a framework to generate realistic nodules and demonstrate how they can be used to train a DNN to identify and localize nodular patterns in CXR images. While most previous research applying generative models to medical imaging is limited to generating visually plausible abnormalities and using these patterns for augmentation, we go a step further and show how the training algorithm can be adjusted to benefit maximally from synthetic abnormal patterns. A high-precision detection model was first developed and tested on internal and external datasets, and the proposed method was shown to enhance the model's recall while retaining a low level of false positives.


Subjects
Neural Networks, Computer; Radiography, Thoracic; Algorithms; Humans; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography; Radiography, Thoracic/methods
7.
Eur Radiol ; 32(2): 1054-1064, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34331112

ABSTRACT

OBJECTIVES: To evaluate the effects of computer-aided diagnosis (CAD) on inter-reader agreement in Lung Imaging Reporting and Data System (Lung-RADS) categorization. METHODS: Two hundred baseline CT scans covering all Lung-RADS categories were randomly selected from the National Lung Cancer Screening Trial. Five radiologists independently reviewed the CT scans and assigned Lung-RADS categories without CAD and with CAD. The CAD system presented up to five of the most risk-dominant nodules with measurements and predicted Lung-RADS category. Inter-reader agreement was analyzed using multirater Fleiss κ statistics. RESULTS: The five readers reported 139-151 negative screening results without CAD and 126-142 with CAD. With CAD, readers tended to upstage (average, 12.3%) rather than downstage Lung-RADS category (average, 4.4%). Inter-reader agreement of five readers for Lung-RADS categorization was moderate (Fleiss kappa, 0.60 [95% confidence interval, 0.57, 0.63]) without CAD, and slightly improved to substantial (Fleiss kappa, 0.65 [95% CI, 0.63, 0.68]) with CAD. The major cause for disagreement was assignment of different risk-dominant nodules in the reading sessions without and with CAD (54.2% [201/371] vs. 63.6% [232/365]). The proportion of disagreement in nodule size measurement was reduced from 5.1% (102/2000) to 3.1% (62/2000) with the use of CAD (p < 0.001). In 31 cancer-positive cases, substantial management discrepancies (category 1/2 vs. 4A/B) between reader pairs decreased with application of CAD (pooled sensitivity, 85.2% vs. 91.6%; p = 0.004). CONCLUSIONS: Application of CAD demonstrated a minor improvement in inter-reader agreement of Lung-RADS category, while showing the potential to reduce measurement variability and substantial management change in cancer-positive cases. KEY POINTS: • Inter-reader agreement of five readers for Lung-RADS categorization was minimally improved by application of CAD, with a Fleiss kappa value of 0.60 to 0.65. • The major cause for disagreement was assignment of different risk-dominant nodules in the reading sessions without and with CAD (54.2% vs. 63.6%). • In 31 cancer-positive cases, substantial management discrepancies between reader pairs, referring to a difference in follow-up interval of at least 9 months (category 1/2 vs. 4A/B), were reduced in half by application of CAD (32/310 to 16/310) (pooled sensitivity, 85.2% vs. 91.6%; p = 0.004).
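Multirater Fleiss kappa, used here to quantify agreement among the five readers' Lung-RADS categories, can be computed directly from a subjects-by-categories count table. A small self-contained sketch with illustrative data (not the study's):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_subjects, n_categories) table, where each row
    holds how many raters assigned the subject to each category."""
    counts = np.asarray(counts, dtype=float)
    n_sub, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]                   # assumes equal raters per subject
    p_cat = counts.sum(axis=0) / (n_sub * n_raters)    # overall category proportions
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                                 # observed agreement
    p_e = np.sum(p_cat ** 2)                           # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Five readers assigning each CT scan to one of four Lung-RADS bins (toy data)
table = np.array([
    [5, 0, 0, 0],
    [3, 2, 0, 0],
    [0, 4, 1, 0],
    [0, 1, 3, 1],
    [0, 0, 2, 3],
])
print(round(fleiss_kappa(table), 3))
```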


Subjects
Lung Neoplasms; Computers; Early Detection of Cancer; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Observer Variation; Retrospective Studies; Tomography, X-Ray Computed
8.
Eur Radiol ; 31(12): 8947-8955, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34115194

ABSTRACT

OBJECTIVES: Bone age is considered an indicator for the diagnosis of precocious or delayed puberty and a predictor of adult height. We aimed to evaluate the performance of a deep neural network model in assessing rapidly advancing bone age during puberty using elbow radiographs. METHODS: In all, 4437 anteroposterior and lateral pairs of elbow radiographs were obtained from pubertal individuals from two institutions to implement and validate a deep neural network model. The reference standard bone age was established by five trained researchers using the Sauvegrain method, a scoring system based on the shapes of the lateral condyle, trochlea, olecranon apophysis, and proximal radial epiphysis. A test set (n = 141) was obtained from an external institution. The differences between the assessment of the model and that of reviewers were compared. RESULTS: The mean absolute difference (MAD) in bone age estimation between the model and reviewers was 0.15 years on internal validation. In the test set, the MAD between the model and the five experts ranged from 0.19 to 0.30 years. Compared with the reference standard, the MAD was 0.22 years. Interobserver agreement was excellent among reviewers (ICC: 0.99) and between the model and the reviewers (ICC: 0.98). In the subpart analysis, the olecranon apophysis exhibited the highest accuracy (74.5%), followed by the trochlea (73.7%), lateral condyle (73.7%), and radial epiphysis (63.1%). CONCLUSIONS: Assessment of rapidly advancing bone age during puberty on elbow radiographs using our deep neural network model was similar to that of experts. KEY POINTS: • Bone age during puberty is particularly important for patients with scoliosis or limb-length discrepancy to determine the phase of the disease, which influences the timing and method of surgery. • The commonly used hand radiographs-based methods have limitations in assessing bone age during puberty due to the less prominent morphological changes of the hand and wrist bones in this period. • A deep neural network model trained with elbow radiographs exhibited similar performance to human experts on estimating rapidly advancing bone age during puberty.


Subjects
Age Determination by Skeleton; Elbow; Adult; Elbow/diagnostic imaging; Humans; Infant; Neural Networks, Computer; Puberty; Radiography
9.
Radiology ; 299(2): 450-459, 2021 05.
Article in English | MEDLINE | ID: mdl-33754828

ABSTRACT

Background Previous studies assessing the effects of computer-aided detection on observer performance in the reading of chest radiographs used a sequential reading design that may have biased the results because of reading order or recall bias. Purpose To compare observer performance in detecting and localizing major abnormal findings including nodules, consolidation, interstitial opacity, pleural effusion, and pneumothorax on chest radiographs without versus with deep learning-based detection (DLD) system assistance in a randomized crossover design. Materials and Methods This study included retrospectively collected normal and abnormal chest radiographs between January 2016 and December 2017 (https://cris.nih.go.kr/; registration no. KCT0004147). The radiographs were randomized into two groups, and six observers, including thoracic radiologists, interpreted each radiograph without and with use of a commercially available DLD system by using a crossover design with a washout period. Jackknife alternative free-response receiver operating characteristic (JAFROC) figure of merit (FOM), area under the receiver operating characteristic curve (AUC), sensitivity, specificity, false-positive findings per image, and reading times of observers with and without the DLD system were compared by using McNemar and paired t tests. Results A total of 114 normal (mean patient age ± standard deviation, 51 years ± 11; 58 men) and 114 abnormal (mean patient age, 60 years ± 15; 75 men) chest radiographs were evaluated. The radiographs were randomized to two groups: group A (n = 114) and group B (n = 114). Use of the DLD system improved the observers' JAFROC FOM (from 0.90 to 0.95, P = .002), AUC (from 0.93 to 0.98, P = .002), per-lesion sensitivity (from 83% [822 of 990 lesions] to 89.1% [882 of 990 lesions], P = .009), per-image sensitivity (from 80% [548 of 684 radiographs] to 89% [608 of 684 radiographs], P = .009), and specificity (from 89.3% [611 of 684 radiographs] to 96.6% [661 of 684 radiographs], P = .01) and reduced the reading time (from 10-65 seconds to 6-27 seconds, P < .001). The DLD system alone outperformed the pooled observers (JAFROC FOM: 0.96 vs 0.90, respectively, P = .007; AUC: 0.98 vs 0.93, P = .003). Conclusion Observers including thoracic radiologists showed improved performance in the detection and localization of major abnormal findings on chest radiographs and reduced reading time with use of a deep learning-based detection system. © RSNA, 2021 Online supplemental material is available for this article.
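Because each radiograph was read both without and with DLD assistance, the per-image comparisons reported here are paired, and the McNemar test used in the study operates on the discordant pairs. A brief sketch with statsmodels, using toy counts rather than the study's data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired per-image outcomes on abnormal radiographs:
# rows = correct without DLD (yes/no), columns = correct with DLD (yes/no)
table = np.array([
    [520, 28],   # correct in both / correct only without assistance
    [ 88, 48],   # correct only with assistance / correct in neither
])

result = mcnemar(table, exact=False, correction=True)  # chi-square approximation
print(f"statistic={result.statistic:.2f}, p={result.pvalue:.4f}")
```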


Subjects
Deep Learning; Lung Diseases/diagnostic imaging; Radiography, Thoracic/methods; Cross-Over Studies; Female; Humans; Male; Middle Aged; Observer Variation; Republic of Korea; Retrospective Studies; Sensitivity and Specificity
10.
Sci Rep ; 11(1): 2876, 2021 02 03.
Article in English | MEDLINE | ID: mdl-33536550

ABSTRACT

There have been substantial efforts in using deep learning (DL) to diagnose cancer from digital images of pathology slides. Existing algorithms typically operate by training deep neural networks either specialized in specific cohorts or an aggregate of all cohorts when there are only a few images available for the target cohort. A trade-off between decreasing the number of models and their cancer detection performance was evident in our experiments with The Cancer Genomic Atlas dataset, with the former approach achieving higher performance at the cost of having to acquire large datasets from the cohort of interest. Constructing annotated datasets for individual cohorts is extremely time-consuming, with the acquisition cost of such datasets growing linearly with the number of cohorts. Another issue associated with developing cohort-specific models is the difficulty of maintenance: all cohort-specific models may need to be adjusted when a new DL algorithm is to be used, where training even a single model may require a non-negligible amount of computation, or when more data is added to some cohorts. In resolving the sub-optimal behavior of a universal cancer detection model trained on an aggregate of cohorts, we investigated how cohorts can be grouped to augment a dataset without increasing the number of models linearly with the number of cohorts. This study introduces several metrics which measure the morphological similarities between cohort pairs and demonstrates how the metrics can be used to control the trade-off between performance and the number of models.
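The abstract describes grouping cohorts by pairwise morphological similarity so that data can be pooled without training one model per cohort; the specific similarity metrics are defined in the paper. As an illustration only, the sketch below clusters cohorts from an assumed pairwise similarity matrix with a tunable distance threshold, which is the kind of knob that trades model count against performance:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cohorts = ["lung", "breast", "colon", "stomach", "prostate"]

# Hypothetical pairwise morphological similarity in [0, 1] (symmetric, 1 on diagonal)
sim = np.array([
    [1.00, 0.35, 0.40, 0.42, 0.30],
    [0.35, 1.00, 0.55, 0.50, 0.38],
    [0.40, 0.55, 1.00, 0.72, 0.33],
    [0.42, 0.50, 0.72, 1.00, 0.31],
    [0.30, 0.38, 0.33, 0.31, 1.00],
])

dist = 1.0 - sim                       # convert similarity to a distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")

# Lower threshold -> more groups (more models); higher -> fewer, larger groups
labels = fcluster(Z, t=0.5, criterion="distance")
for cohort, group in zip(cohorts, labels):
    print(cohort, "-> group", group)
```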


Subjects
Datasets as Topic; Deep Learning; Image Processing, Computer-Assisted/methods; Neoplasms/diagnosis; Cohort Studies; Humans; Neoplasms/pathology
11.
Eur Radiol ; 31(8): 6239-6247, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33555355

ABSTRACT

OBJECTIVES: To evaluate a deep learning-based model using model-generated segmentation masks to differentiate invasive pulmonary adenocarcinoma (IPA) from preinvasive lesions or minimally invasive adenocarcinoma (MIA) on CT, making comparisons with radiologist-derived measurements of solid portion size. METHODS: Four hundred eleven subsolid nodules (SSNs) (120 preinvasive lesions or MIAs and 291 IPAs) in 333 patients who underwent surgery between June 2010 and August 2016 were retrospectively included to develop the model (370 SSNs in 293 patients for training and 41 SSNs in 40 patients for tuning). Ninety SSNs of 2 cm or smaller (45 preinvasive lesions or MIAs and 45 IPAs) resected in 2018 formed a validation set. Six radiologists measured the solid portion of each nodule. Performances of the model and radiologists were assessed using receiver operating characteristics curve analysis. RESULTS: The deep learning model differentiated IPA from preinvasive lesions or MIA with areas under the curve (AUCs) of 0.914, 0.956, and 0.833 for the training, tuning, and validation sets, respectively. The mean AUC of the radiologists was 0.835 in the validation set, without significant differences between radiologists and the model (p = 0.97). The sensitivity, specificity, and accuracy of the model were 71% (32/45), 87% (39/45), and 79% (71/90), respectively, whereas the corresponding values of the radiologists were 75.2% (203/270), 76.7% (207/270), and 75.9% (410/540) with a 5-mm threshold for the solid portion size. CONCLUSIONS: The performance of the model for differentiating IPA from preinvasive lesions or MIA was comparable to that of the radiologists' measurements of solid portion size. KEY POINTS: • A deep learning-based model differentiated IPA from preinvasive lesions or MIA with AUCs of 0.914 and 0.956 for the training and tuning sets, respectively. • In the validation set including subsolid nodules of 2 cm or smaller, the model showed an AUC of 0.833, being on par with the performance of the solid portion size measurements made by the radiologists (AUC, 0.835; p = 0.97). • SSNs with a solid portion measuring > 10 mm on CT showed a high probability of being IPA (positive predictive value, 93.5-100.0%).
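The head-to-head comparison here reduces to computing AUCs for the model's IPA probability and for the radiologists' solid-portion measurements, plus operating-point metrics at a 5-mm cutoff. A minimal sketch with scikit-learn, using hypothetical arrays:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = invasive pulmonary adenocarcinoma (IPA), 0 = preinvasive lesion or MIA
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
model_prob = np.array([0.82, 0.20, 0.65, 0.91, 0.40, 0.15, 0.55, 0.35])
solid_size_mm = np.array([12.0, 3.0, 6.0, 15.0, 5.0, 0.0, 4.0, 7.0])  # reader measurement

print("model AUC:", roc_auc_score(y_true, model_prob))
print("size  AUC:", roc_auc_score(y_true, solid_size_mm))  # size itself used as a score

# Reader operating point: call IPA when the solid portion exceeds 5 mm
pred = solid_size_mm > 5.0
tp = np.sum(pred & (y_true == 1))
fn = np.sum(~pred & (y_true == 1))
tn = np.sum(~pred & (y_true == 0))
fp = np.sum(pred & (y_true == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```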


Subjects
Adenocarcinoma; Deep Learning; Lung Neoplasms; Adenocarcinoma/diagnostic imaging; Adenocarcinoma/surgery; Diagnosis, Differential; Humans; Lung Neoplasms/diagnostic imaging; Neoplasm Invasiveness; Retrospective Studies; Tomography, X-Ray Computed
12.
Radiology ; 299(1): 211-219, 2021 04.
Article in English | MEDLINE | ID: mdl-33560190

ABSTRACT

Background Studies on the optimal CT section thickness for detecting subsolid nodules (SSNs) with computer-aided detection (CAD) are lacking. Purpose To assess the effect of CT section thickness on CAD performance in the detection of SSNs and to investigate whether deep learning-based super-resolution algorithms for reducing CT section thickness can improve performance. Materials and Methods CT images obtained with 1-, 3-, and 5-mm-thick sections were obtained in patients who underwent surgery between March 2018 and December 2018. Patients with resected synchronous SSNs and those without SSNs (negative controls) were retrospectively evaluated. The SSNs, which ranged from 6 to 30 mm, were labeled ground-truth lesions. A deep learning-based CAD system was applied to SSN detection on CT images of each section thickness and those converted from 3- and 5-mm section thickness into 1-mm section thickness by using the super-resolution algorithm. The CAD performance on each section thickness was evaluated and compared by using the jackknife alternative free response receiver operating characteristic figure of merit. Results A total of 308 patients (mean age ± standard deviation, 62 years ± 10; 183 women) with 424 SSNs (310 part-solid and 114 nonsolid nodules) and 182 patients without SSNs (mean age, 65 years ± 10; 97 men) were evaluated. The figures of merit differed across the three section thicknesses (0.92, 0.90, and 0.89 for 1, 3, and 5 mm, respectively; P = .04) and between 1- and 5-mm sections (P = .04). The figures of merit varied for nonsolid nodules (0.78, 0.72, and 0.66 for 1, 3, and 5 mm, respectively; P < .001) but not for part-solid nodules (range, 0.93-0.94; P = .76). The super-resolution algorithm improved CAD sensitivity on 3- and 5-mm-thick sections (P = .02 for 3 mm, P < .001 for 5 mm). Conclusion Computer-aided detection (CAD) of subsolid nodules performed better at 1-mm section thickness CT than at 3- and 5-mm section thickness CT, particularly with nonsolid nodules. Application of a super-resolution algorithm improved the sensitivity of CAD at 3- and 5-mm section thickness CT. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Goo in this issue.


Subjects
Deep Learning; Diagnosis, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Multiple Pulmonary Nodules/diagnostic imaging; Tomography, X-Ray Computed/methods; Aged; Female; Humans; Male; Middle Aged; Radiographic Image Interpretation, Computer-Assisted/methods; Retrospective Studies
13.
Clin Cancer Res ; 27(3): 719-728, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33172897

ABSTRACT

PURPOSE: Gastric cancer remains the leading cause of cancer-related deaths in Northeast Asia. Population-based endoscopic screenings in the region have yielded successful results in early detection of gastric tumors. Endoscopic screening rates are continuously increasing, and there is a need for an automatic computerized diagnostic system to reduce the diagnostic burden. In this study, we developed an algorithm to classify gastric epithelial tumors automatically and assessed its performance in a large series of gastric biopsies and its benefits as an assistance tool. EXPERIMENTAL DESIGN: Using 2,434 whole-slide images, we developed an algorithm based on convolutional neural networks to classify a gastric biopsy image into one of three categories: negative for dysplasia (NFD), tubular adenoma, or carcinoma. The performance of the algorithm was evaluated by using 7,440 biopsy specimens collected prospectively. The impact of algorithm-assisted diagnosis was assessed by six pathologists using 150 gastric biopsy cases. RESULTS: Diagnostic performance in the prospective study, evaluated by the area under the receiver operating characteristic curve (AUROC), was 0.9790 for two-tier classification: negative (NFD) versus positive (all cases except NFD). When limited to epithelial tumors, the sensitivity and specificity were 1.000 and 0.9749. The algorithm-assisted digital image viewer (DV) resulted in a 47% reduction in review time per image compared with the DV alone and a 58% reduction compared with microscopy. CONCLUSIONS: Our algorithm demonstrated high accuracy in classifying epithelial tumors and clear benefits as an assistance tool, and it can serve as a potential screening aid in diagnosing gastric biopsy specimens.
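The two-tier evaluation described here (negative for dysplasia versus everything else) can be derived directly from the three-class output by summing the tumor-class probabilities. A small sketch of that reduction, assuming hypothetical softmax outputs ordered as [NFD, tubular adenoma, carcinoma]:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical softmax outputs for [NFD, tubular adenoma, carcinoma]
probs = np.array([
    [0.95, 0.04, 0.01],
    [0.10, 0.70, 0.20],
    [0.02, 0.18, 0.80],
    [0.88, 0.10, 0.02],
    [0.30, 0.50, 0.20],
])
labels = np.array([0, 1, 2, 0, 1])          # ground-truth class indices

positive_score = probs[:, 1] + probs[:, 2]  # "anything except NFD"
y_binary = (labels != 0).astype(int)
print("two-tier AUROC:", roc_auc_score(y_binary, positive_score))
```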


Subjects
Deep Learning; Gastric Mucosa/pathology; Image Interpretation, Computer-Assisted/methods; Pathologists/statistics & numerical data; Stomach Neoplasms/diagnosis; Adult; Aged; Aged, 80 and over; Biopsy/statistics & numerical data; Feasibility Studies; Female; Gastric Mucosa/diagnostic imaging; Gastroscopy/statistics & numerical data; Humans; Image Interpretation, Computer-Assisted/statistics & numerical data; Male; Middle Aged; Observer Variation; Prospective Studies; Retrospective Studies; Sensitivity and Specificity; Stomach Neoplasms/pathology
14.
ACS Chem Neurosci ; 11(24): 4280-4288, 2020 12 16.
Article in English | MEDLINE | ID: mdl-33269905

ABSTRACT

Increasing evidence demonstrates that optogenetics contributes to the regulation of brain behavior, cognition, and physiology, particularly during myelination, potentially allowing for the bidirectional modulation of specific cell lines with spatiotemporal accuracy. However, the type of cell to be targeted, namely, glia vs neurons, and the degree to which optogenetically induced cell activity can regulate myelination during the development of the peripheral nervous system (PNS) are still underexplored. Herein, we report the comparison of optogenetic stimulation (OS) of Schwann cells (SCs) and motor neurons (MNs) for activation of myelination in the PNS. Capitalizing on these optogenetic tools, we confirmed that the formation of the myelin sheath was initially promoted more by OS of calcium translocating channelrhodopsin (CatCh)-transfected SCs than by OS of transfected MNs at 7 days in vitro (DIV). Additionally, the level of myelination was substantially enhanced even until 14 DIV. Surprisingly, after OS of SCs, > 91.1% ± 5.9% of cells expressed myelin basic protein, while that of MNs was 67.8% ± 6.1%. The potent effect of OS of SCs was revealed by the increased thickness of the myelin sheath at 14 DIV. Thus, the OS of SCs could highly accelerate myelination, while the OS of MNs only somewhat promoted myelination, indicating a clear direction for the optogenetic application of unique cell types for initiating and promoting myelination. Together, our findings support the importance of precise cell type selection for use in optogenetics, which in turn can be broadly applied to overcome the limitations of optogenetics after injury.


Subjects
Optogenetics; Schwann Cells; Axons; Cells, Cultured; Motor Neurons; Myelin Sheath
15.
Transl Vis Sci Technol ; 9(6): 28, 2020 11.
Article in English | MEDLINE | ID: mdl-33184590

ABSTRACT

Purpose: To evaluate whether high accumulation of coronary artery calcium (CAC) can be detected from retinal fundus images with deep learning technologies, as an inexpensive and radiation-free screening method. Methods: Individuals who underwent bilateral retinal fundus imaging and CAC score (CACS) evaluation from coronary computed tomography scans on the same day were identified. With this database, the performance of deep learning algorithms (Inception-v3) in distinguishing high CACS from a CACS of 0 was evaluated at various thresholds for high CACS. Vessel-inpainted and fovea-inpainted images were also used as input to investigate the areas of interest in determining CACS. Results: A total of 44,184 images from 20,130 individuals were included. A deep learning algorithm for discriminating no CAC from CACS >100 achieved an area under the receiver operating characteristic curve (AUROC) of 82.3% (79.5%-85.0%) and 83.2% (80.2%-86.3%) using unilateral and bilateral fundus images, respectively, under a 5-fold cross-validation setting. AUROC increased as the criterion for high CACS was raised, showing a plateau at 100 and no significant improvement thereafter. AUROC decreased when the fovea was inpainted and decreased further when vessels were inpainted, whereas AUROC increased when bilateral images were used as input. Conclusions: Visual patterns in retinal fundus images of subjects with CACS >100, compared with those with no CAC, could be recognized by deep learning algorithms. Exploiting bilateral images improves discrimination performance, and ablation studies removing the retinal vasculature or fovea suggest that the recognizable patterns reside mainly in these areas. Translational Relevance: Retinal fundus images can be used by deep learning algorithms for prediction of high CACS.
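The backbone named here, Inception-v3, is available in torchvision; a minimal sketch of adapting it as a binary high-CACS screener is shown below. This only illustrates the architectural setup under assumed choices (binary logit head, auxiliary-loss weight of 0.4), not the study's training configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Inception-v3 expects 299x299 inputs and, with aux_logits=True, returns an
# auxiliary output during training in addition to the main logits.
net = models.inception_v3(weights=None, aux_logits=True)
net.fc = nn.Linear(net.fc.in_features, 1)                        # high CACS vs. no CAC
net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, 1)

x = torch.randn(2, 3, 299, 299)   # a batch of resized fundus images
net.train()
main_out, aux_out = net(x)

criterion = nn.BCEWithLogitsLoss()
target = torch.ones(2, 1)          # toy labels: both images "high CACS"
loss = criterion(main_out, target) + 0.4 * criterion(aux_out, target)
print(loss.item())
```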


Subjects
Coronary Vessels; Deep Learning; Algorithms; Coronary Vessels/diagnostic imaging; Fundus Oculi; Humans; Tomography, X-Ray Computed
16.
Ophthalmology ; 127(1): 85-94, 2020 01.
Article in English | MEDLINE | ID: mdl-31281057

ABSTRACT

PURPOSE: To develop and evaluate deep learning models that screen multiple abnormal findings in retinal fundus images. DESIGN: Cross-sectional study. PARTICIPANTS: For the development and testing of deep learning models, 309 786 readings from 103 262 images were used. Two additional external datasets (the Indian Diabetic Retinopathy Image Dataset and e-ophtha) were used for testing. A third external dataset (Messidor) was used for comparison of the models with human experts. METHODS: Macula-centered retinal fundus images from the Seoul National University Bundang Hospital Retina Image Archive, obtained at the health screening center and ophthalmology outpatient clinic at Seoul National University Bundang Hospital, were assessed for 12 major findings (hemorrhage, hard exudate, cotton-wool patch, drusen, membrane, macular hole, myelinated nerve fiber, chorioretinal atrophy or scar, any vascular abnormality, retinal nerve fiber layer defect, glaucomatous disc change, and nonglaucomatous disc change) with their regional information using deep learning algorithms. MAIN OUTCOME MEASURES: Area under the receiver operating characteristic curve and sensitivity and specificity of the deep learning algorithms at the highest harmonic mean were evaluated and compared with the performance of retina specialists, and visualization of the lesions was qualitatively analyzed. RESULTS: Areas under the receiver operating characteristic curves for all findings were high at 96.2% to 99.9% when tested in the in-house dataset. Lesion heatmaps highlight salient regions effectively in various findings. Areas under the receiver operating characteristic curves for diabetic retinopathy-related findings tested in the Indian Diabetic Retinopathy Image Dataset and e-ophtha dataset were 94.7% to 98.0%. The model demonstrated a performance that rivaled that of human experts, especially in the detection of hemorrhage, hard exudate, membrane, macular hole, myelinated nerve fiber, and glaucomatous disc change. CONCLUSIONS: Our deep learning algorithms with region guidance showed reliable performance for detection of multiple findings in macula-centered retinal fundus images. These interpretable, as well as reliable, classification outputs open the possibility for clinical use as an automated screening system for retinal fundus images.
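The operating points reported here are taken at the threshold with the highest harmonic mean of sensitivity and specificity; that threshold search is easy to reproduce from an ROC curve. A short sketch with scikit-learn, using illustrative scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7, 0.5, 0.35])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
sensitivity = tpr
specificity = 1.0 - fpr
hmean = np.where(
    sensitivity + specificity > 0,
    2 * sensitivity * specificity / (sensitivity + specificity),
    0.0,
)
best = np.argmax(hmean)   # operating point with the highest harmonic mean
print(f"threshold={thresholds[best]:.2f}, "
      f"sensitivity={sensitivity[best]:.2f}, specificity={specificity[best]:.2f}")
```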


Subjects
Algorithms; Deep Learning; Image Interpretation, Computer-Assisted/methods; Retinal Diseases/diagnostic imaging; Adult; Aged; Aged, 80 and over; Area Under Curve; Cross-Sectional Studies; Datasets as Topic; Female; Fundus Oculi; Humans; Machine Learning; Male; Middle Aged; Neural Networks, Computer; ROC Curve; Sensitivity and Specificity
17.
Eur Radiol ; 30(3): 1359-1368, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31748854

ABSTRACT

OBJECTIVE: To investigate the feasibility of a deep learning-based detection (DLD) system for multiclass lesions on chest radiograph, in comparison with observers. METHODS: A total of 15,809 chest radiographs were collected from two tertiary hospitals (7204 normal and 8605 abnormal with nodule/mass, interstitial opacity, pleural effusion, or pneumothorax). Except for the test set (100 normal and 100 abnormal (nodule/mass, 70; interstitial opacity, 10; pleural effusion, 10; pneumothorax, 10)), radiographs were used to develop a DLD system for detecting multiclass lesions. The diagnostic performance of the developed model and that of nine observers with varying experiences were evaluated and compared using area under the receiver operating characteristic curve (AUROC), on a per-image basis, and jackknife alternative free-response receiver operating characteristic figure of merit (FOM) on a per-lesion basis. The false-positive fraction was also calculated. RESULTS: Compared with the group-averaged observations, the DLD system demonstrated significantly higher performances on image-wise normal/abnormal classification and lesion-wise detection with pattern classification (AUROC, 0.985 vs. 0.958; p = 0.001; FOM, 0.962 vs. 0.886; p < 0.001). In lesion-wise detection, the DLD system outperformed all nine observers. In the subgroup analysis, the DLD system exhibited consistently better performance for both nodule/mass (FOM, 0.913 vs. 0.847; p < 0.001) and the other three abnormal classes (FOM, 0.995 vs. 0.843; p < 0.001). The false-positive fraction of all abnormalities was 0.11 for the DLD system and 0.19 for the observers. CONCLUSIONS: The DLD system showed the potential for detection of lesions and pattern classification on chest radiographs, performing normal/abnormal classifications and achieving high diagnostic performance. KEY POINTS: • The DLD system was feasible for detection with pattern classification of multiclass lesions on chest radiograph. • The DLD system had high performance of image-wise classification as normal or abnormal chest radiographs (AUROC, 0.985) and showed especially high specificity (99.0%). • In lesion-wise detection of multiclass lesions, the DLD system outperformed all 9 observers (FOM, 0.962 vs. 0.886; p < 0.001).


Subjects
Deep Learning; Lung Diseases/diagnostic imaging; Pleural Diseases/diagnostic imaging; Radiography, Thoracic/methods; Adult; Aged; Area Under Curve; Female; Humans; Lung Diseases, Interstitial/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Male; Middle Aged; Pleural Effusion/diagnostic imaging; Pneumothorax/diagnostic imaging; ROC Curve; Radiography; Sensitivity and Specificity; Solitary Pulmonary Nodule/diagnostic imaging
18.
Sci Rep ; 9(1): 18738, 2019 12 10.
Article in English | MEDLINE | ID: mdl-31822774

ABSTRACT

To investigate the reproducibility of computer-aided detection (CAD) of pulmonary nodules and masses on consecutive chest radiographs (CXRs) of the same patient within a short-term period. A total of 944 CXRs (chest PA) with nodules and masses, recorded between January 2010 and November 2016 at the Asan Medical Center, were obtained. In all, 1092 regions of interest for the nodules and masses were delineated using in-house software. All CXRs were randomly split into 6:2:2 sets for training, development, and validation. Furthermore, paired follow-up CXRs (n = 121) acquired within one week in the validation set, in which expert thoracic radiologists confirmed no changes, were used to evaluate the reproducibility of CAD and of two radiologists (R1 and R2). The reproducibility comparison of four different convolutional neural network algorithms and two chest radiologists (with 13 and 14 years' experience) was conducted. Model performance was evaluated by figure-of-merit (FOM) analysis of the jackknife free-response receiver operating characteristic curve, and reproducibility was evaluated in terms of percent positive agreement (PPA) and Chamberlain's percent positive agreement (CPPA). Reproducibility analysis of the four CADs and of R1 and R2 showed variations in the PPA and CPPA. The YOLO (You Only Look Once) v2-based eDenseYOLO achieved a FOM of 0.89 (0.85-0.93), compared with 0.89 (0.85-0.93) for RetinaNet and 0.85 (0.80-0.89) for the atrous spatial pyramid pooling (ASPP) U-Net. eDenseYOLO showed higher PPAs (97.87%) and CPPAs (95.80%) than Mask R-CNN, RetinaNet, ASPP U-Net, R1, and R2 (PPA: 96.52%, 94.23%, 95.04%, 96.55%, and 94.98%; CPPA: 93.18%, 89.09%, 90.57%, 93.33%, and 90.43%). There were moderate variations in the reproducibility of CAD with different algorithms, which likely indicates that measurement of reproducibility is necessary for evaluating CAD performance in actual clinical environments.
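Reproducibility here is summarized with PPA and Chamberlain's PPA between two readings of the same patient. The exact definitions used are in the paper; the sketch below implements commonly used formulations (assumed here), both derived from concordant and discordant positive calls:

```python
import numpy as np

def positive_agreement(first_read, second_read):
    """PPA and Chamberlain's PPA for paired binary detections.
    Assumed formulations: PPA = 2a / (2a + b + c), CPPA = a / (a + b + c),
    where a = positive on both reads, b/c = positive on only one read."""
    x = np.asarray(first_read, dtype=bool)
    y = np.asarray(second_read, dtype=bool)
    a = np.sum(x & y)     # detected on both radiographs
    b = np.sum(x & ~y)    # detected only on the first
    c = np.sum(~x & y)    # detected only on the second
    ppa = 2 * a / (2 * a + b + c)
    cppa = a / (a + b + c)
    return ppa, cppa

# Toy example: nodule detected (1) or missed (0) on each of two consecutive CXRs
first  = [1, 1, 1, 0, 1, 1, 0, 1]
second = [1, 1, 0, 0, 1, 1, 1, 1]
print(positive_agreement(first, second))
```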


Subjects
Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Thoracic/methods; Aged; Algorithms; Computers; Female; Humans; Image Processing, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Male; Middle Aged; Multiple Pulmonary Nodules/diagnostic imaging; Radiography/methods; Radiologists; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity; Software; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods
19.
Sci Rep ; 9(1): 17615, 2019 11 26.
Article in English | MEDLINE | ID: mdl-31772195

ABSTRACT

In this study, a deep learning-based method for developing an automated diagnostic support system that detects periodontal bone loss in panoramic dental radiographs is proposed. The presented method, called DeNTNet, not only detects lesions but also provides the corresponding tooth numbers of each lesion according to dental federation notation. DeNTNet applies deep convolutional neural networks (CNNs) using transfer learning and clinical prior knowledge to overcome the morphological variation of the lesions and the imbalanced training dataset. With 12,179 panoramic dental radiographs annotated by experienced dental clinicians, DeNTNet was trained, validated, and tested using 11,189, 190, and 800 panoramic dental radiographs, respectively. Each experimental model was subjected to a comparative study to demonstrate the validity of each phase of the proposed method. Compared with the dental clinicians, DeNTNet achieved an F1 score of 0.75 on the test set, whereas the average performance of the dental clinicians was 0.69.
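The headline comparison is an F1 score over per-tooth lesion labels, which can be reproduced as a micro-averaged F1 across teeth. A small sketch with scikit-learn, using hypothetical label matrices (tooth positions as columns):

```python
import numpy as np
from sklearn.metrics import f1_score

# Rows = radiographs, columns = tooth positions; 1 = periodontal bone loss present
y_true = np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
])
y_pred = np.array([
    [1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1],
])

# Micro-averaging pools all tooth-level decisions before computing F1
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
```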


Subjects
Alveolar Bone Loss/diagnostic imaging; Deep Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography, Panoramic; Algorithms; Datasets as Topic; Dental Hygienists; Humans; Observer Variation; Retrospective Studies
20.
Sci Rep ; 9(1): 3487, 2019 03 05.
Article in English | MEDLINE | ID: mdl-30837563

ABSTRACT

Schwann cells (SCs) constitute a crucial element of the peripheral nervous system, by structurally supporting the formation of myelin and conveying vital trophic factors to the nervous system. However, the functions of SCs in developmental and regenerative stages remain unclear. Here, we investigated how optogenetic stimulation (OS) of SCs regulates their development. In SC monoculture, OS substantially enhanced SC proliferation and the number of BrdU+-S100β+-SCs over time. In addition, OS also markedly promoted the expression of both Krox20 and myelin basic protein (MBP) in SC culture medium containing dBcAMP/NRG1, which induced differentiation. We found that the effects of OS are dependent on the intracellular Ca2+ level. OS induces elevated intracellular Ca2+ levels through the T-type voltage-gated calcium channel (VGCC) and mobilization of Ca2+ from both inositol 1,4,5-trisphosphate (IP3)-sensitive stores and caffeine/ryanodine-sensitive stores. Furthermore, we confirmed that OS significantly increased expression levels of both Krox20 and MBP in SC-motor neuron (MN) coculture, which was notably prevented by pharmacological intervention with Ca2+. Taken together, our results demonstrate that OS of SCs increases the intracellular Ca2+ level and can regulate proliferation, differentiation, and myelination, suggesting that OS of SCs may offer a new approach to the treatment of neurodegenerative disorders.


Subjects
Cell Differentiation; Cell Proliferation; Light; Myelin Basic Protein/metabolism; Animals; Calcium/metabolism; Calcium Channels, T-Type/metabolism; Cell Differentiation/drug effects; Cell Proliferation/drug effects; Cells, Cultured; Coculture Techniques; Culture Media/chemistry; Culture Media/pharmacology; Early Growth Response Protein 2/metabolism; Inositol 1,4,5-Trisphosphate/pharmacology; Mice; Motor Neurons/cytology; Motor Neurons/metabolism; Optogenetics; Schwann Cells/cytology; Schwann Cells/metabolism