Results 1 - 20 of 22
1.
Sci Rep ; 13(1): 21363, 2023 12 04.
Article in English | MEDLINE | ID: mdl-38049475

ABSTRACT

Rapid and precise intraoperative diagnostic systems are required to improve surgical outcomes and patient prognosis. Because of the poor quality and time-intensive nature of the prevalent frozen section procedure, various intraoperative diagnostic imaging systems have been explored. Microscopy with ultraviolet surface excitation (MUSE) is an inexpensive, maintenance-free, and rapid imaging technique that yields images resembling thin-sectioned samples without sectioning. However, pathologists find it nearly impossible to assign diagnostic labels to MUSE images of unfixed specimens; thus, AI for intraoperative diagnosis cannot be trained in a supervised manner. In this study, we propose a deep-learning pipeline for lymph node metastasis detection, in which a CycleGAN translates MUSE images of unfixed lymph nodes into formalin-fixed paraffin-embedded (FFPE)-style images, and diagnostic prediction is performed by a deep convolutional neural network trained on FFPE sample images. Our pipeline yielded an average accuracy of 84.6% when using each of three deep convolutional neural networks, an 18.3% increase over the classification-only model without CycleGAN. The modality translation to FFPE-style images using CycleGAN can be applied to various intraoperative diagnostic imaging systems and eliminates the difficulty for pathologists of labeling new-modality images at clinical sites. We anticipate that our pipeline will be a starting point for accurate, rapid intraoperative diagnostic systems for new imaging modalities, leading to improvements in healthcare quality.
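The two-stage inference described above (modality translation followed by classification) can be sketched as below. The function names and toy stand-in models are hypothetical illustrations of the composition, not the paper's code.

```python
# Minimal sketch of a translate-then-classify pipeline; the stage
# boundaries mirror the abstract, everything else is illustrative.

def translate_to_ffpe(muse_image, translator):
    """Stage 1: CycleGAN-style modality translation (stubbed)."""
    return translator(muse_image)

def predict_metastasis(ffpe_like_image, classifier):
    """Stage 2: CNN classifier trained on FFPE images (stubbed)."""
    return classifier(ffpe_like_image)

def pipeline(muse_image, translator, classifier):
    return predict_metastasis(translate_to_ffpe(muse_image, translator), classifier)

# Toy stand-ins: translation rescales intensities; classifier thresholds the mean.
toy_translator = lambda img: [p * 0.9 + 0.05 for p in img]
toy_classifier = lambda img: "metastasis" if sum(img) / len(img) > 0.5 else "benign"

label = pipeline([0.8, 0.9, 0.7], toy_translator, toy_classifier)
```

The point of the composition is that the classifier only ever sees FFPE-like inputs, so it can be trained entirely on labeled FFPE data.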


Subject(s)
Alprostadil , Neural Networks, Computer , Humans , Lymphatic Metastasis/diagnostic imaging , Microscopy, Fluorescence
2.
Sensors (Basel) ; 23(23)2023 Nov 22.
Article in English | MEDLINE | ID: mdl-38067704

ABSTRACT

In this paper, we present a prototype pseudo-direct time-of-flight (ToF) CMOS image sensor, achieving high distance accuracy, precision, and robustness to multipath interference. An indirect ToF (iToF)-based image sensor, which enables high spatial resolution, is used to acquire temporal compressed signals in the charge domain. Whole received light waveforms, like those acquired with conventional direct ToF (dToF) image sensors, can be obtained after image reconstruction based on compressive sensing. Therefore, this method has the advantages of both dToF and iToF depth image sensors, such as high resolution, high accuracy, immunity to multipath interference, and the absence of motion artifacts. Additionally, two approaches to refine the depth resolution are explained: (1) the introduction of a sub-time window; and (2) oversampling in image reconstruction and quadratic fitting in the depth calculation. Experimental results show the separation of two reflections 40 cm apart under multipath interference conditions and a significant improvement in distance precision down to around 1 cm. Point cloud map videos demonstrate the improvements in depth resolution and accuracy. These results suggest that the proposed method could be a promising approach for virtually implementing dToF imaging suitable for challenging environments with multipath interference.
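The "quadratic fitting in the depth calculation" step mentioned above can be illustrated with standard parabolic peak interpolation over three equally spaced samples; this is a generic textbook sketch, not the authors' implementation.

```python
def quadratic_peak_offset(y_prev, y_peak, y_next):
    """Sub-bin offset of the true peak relative to the sampled maximum,
    from a parabola fitted through three equally spaced samples."""
    denom = y_prev - 2.0 * y_peak + y_next
    if denom == 0.0:
        return 0.0  # degenerate (flat) case: no refinement possible
    return 0.5 * (y_prev - y_next) / denom

# Samples of y = 1 - (x - 0.2)**2 at x = -1, 0, 1; the fit recovers x = 0.2.
offset = quadratic_peak_offset(-0.44, 0.96, 0.36)
```

Applied to a reconstructed light waveform, the refined bin index (peak bin + offset) converts to depth with sub-bin precision, which is how precision below the native time-bin spacing becomes reachable.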

3.
J Biomed Opt ; 28(10): 107001, 2023 10.
Article in English | MEDLINE | ID: mdl-37915398

ABSTRACT

Significance: Evaluation of biological chromophore levels is useful for detection of various skin diseases, including cancer, monitoring of health status and tissue metabolism, and assessment of clinical and physiological vascular functions. Clinically, it is useful to assess multiple different chromophores in vivo with a single technique or instrument. Aim: To investigate the possibility of estimating the concentration of four chromophores, bilirubin, oxygenated hemoglobin, deoxygenated hemoglobin, and melanin from diffuse reflectance spectra in the visible region. Approach: A new diffuse reflectance spectroscopic method based on the multiple regression analysis aided by Monte Carlo simulations for light transport was developed to quantify bilirubin, oxygenated hemoglobin, deoxygenated hemoglobin, and melanin. Three different experimental animal models were used to induce hyperbilirubinemia, hypoxemia, and melanogenesis in rats. Results: The estimated bilirubin concentration increased after ligation of the bile duct and reached around 18 mg/dl at 50 h after the onset of ligation, which corresponds to the reference value of bilirubin measured by a commercially available transcutaneous bilirubin meter. The concentration of oxygenated hemoglobin and that of deoxygenated hemoglobin decreased and increased, respectively, as the fraction of inspired oxygen decreased. Consequently, the tissue oxygen saturation dramatically decreased. The time course of melanin concentration after depilation of skin on the back of rats was indicative of the supply of melanosomes produced by melanocytes of hair follicles to the growing hair shaft. Conclusions: The results of our study showed that the proposed method is capable of the in vivo evaluation of percutaneous bilirubin level, skin hemodynamics, and melanogenesis in rats, and that it has potential as a tool for the diagnosis and management of hyperbilirubinemia, hypoxemia, and pigmented skin lesions.
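At its core, recovering several chromophore concentrations from a measured spectrum is a linear unmixing problem. The sketch below shows that core under a simple Beer-Lambert assumption with hypothetical extinction coefficients; the paper's actual method additionally uses Monte Carlo light-transport simulations to build the regression model.

```python
import numpy as np

# Hypothetical extinction coefficients (rows: wavelengths; columns:
# bilirubin, HbO2, Hb, melanin). Values are illustrative, not physiological.
E = np.array([
    [0.90, 0.30, 0.50, 0.20],
    [0.40, 0.80, 0.30, 0.20],
    [0.10, 0.50, 0.90, 0.20],
    [0.10, 0.20, 0.40, 0.70],
    [0.05, 0.10, 0.20, 0.60],
])
true_c = np.array([1.5, 0.8, 0.4, 0.6])  # "true" concentrations
A = E @ true_c                            # modeled absorbance spectrum

# Multiple regression: recover the four concentrations from the spectrum.
c_hat, *_ = np.linalg.lstsq(E, A, rcond=None)
```

Because there are more wavelengths than chromophores, the least-squares fit is overdetermined, which is what makes the simultaneous estimation of four chromophores from one spectrum possible.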


Subject(s)
Bilirubin , Melanins , Rats , Animals , Melanins/analysis , Bilirubin/analysis , Bilirubin/metabolism , Spectrum Analysis/methods , Skin/chemistry , Hypoxia/diagnostic imaging , Hemoglobins/analysis , Oxyhemoglobins/analysis , Hyperbilirubinemia/diagnostic imaging , Hyperbilirubinemia/metabolism
4.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687990

ABSTRACT

A camera captures multidimensional information of the real world by convolving it into two dimensions using a sensing matrix. The original multidimensional information is then reconstructed from captured images. Traditionally, multidimensional information has been captured by uniform sampling, but by optimizing the sensing matrix, we can capture images more efficiently and reconstruct multidimensional information with high quality. Although compressive video sensing requires random sampling as a theoretical optimum, when designing the sensing matrix in practice, there are many hardware limitations (such as exposure and color filter patterns). Existing studies have found random sampling is not always the best solution for compressive sensing because the optimal sampling pattern is related to the scene context, and it is hard to manually design a sampling pattern and reconstruction algorithm. In this paper, we propose an end-to-end learning approach that jointly optimizes the sampling pattern as well as the reconstruction decoder. We applied this deep sensing approach to the video compressive sensing problem. We modeled the spatio-temporal sampling and color filter pattern using a convolutional neural network constrained by hardware limitations during network training. We demonstrated that the proposed method performs better than the manually designed method in gray-scale video and color video acquisitions.
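The forward model behind the approach above — a per-pixel binary exposure pattern compressing many frames into one capture — can be sketched as follows. The random pattern here stands in for the learned, hardware-constrained pattern of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy video: T frames of H x W pixels.
T, H, W = 8, 4, 4
video = rng.random((T, H, W))

# Binary per-pixel exposure code (the sensing matrix, simplified).
pattern = rng.integers(0, 2, size=(T, H, W))

# One coded capture compresses T frames into a single image; the
# reconstruction decoder would later invert this mapping.
coded = (pattern * video).sum(axis=0)
```

End-to-end training would replace `pattern` with learnable (thresholded) weights and optimize it jointly with the decoder, subject to the exposure and color-filter constraints the paper discusses.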

5.
Arthritis Res Ther ; 25(1): 181, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37749583

ABSTRACT

BACKGROUND: This work aims to develop a deep learning model for assessing atlantoaxial subluxation (AAS) in rheumatoid arthritis (RA), which can often be ambiguous in clinical practice. METHODS: We collected 4691 X-ray images of the cervical spine from 906 patients with RA. Of these images, 3480 were used for training the deep learning model, 803 for validating the model during training, and the remaining 408 for testing the performance of the trained model. The two-dimensional keypoint detection model of Deep High-Resolution Representation Learning for Human Pose Estimation was adopted as the base convolutional neural network. The model inferred four coordinates to calculate the atlantodental interval (ADI) and the space available for the spinal cord (SAC). Finally, these values were compared with those determined by clinicians to evaluate the performance of the model. RESULTS: Among the 408 cervical images used for testing, the trained model correctly identified the four coordinates in 99.5% of the dataset. The values of ADI and SAC were positively correlated between the model and two clinicians. The sensitivity of AAS diagnosis with ADI or SAC by the model was 0.86 and 0.97, respectively; the corresponding specificity was 0.57 and 0.50. CONCLUSIONS: We present a deep learning model for the evaluation of cervical lesions in patients with RA. The model was shown to be useful for quantitative evaluation.
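Once the four coordinates are inferred, deriving a measurement such as the ADI reduces to a scaled distance between keypoints. The sketch below illustrates that step; the keypoint names and pixel scaling are hypothetical, since the paper does not publish its coordinate conventions here.

```python
from math import dist  # Python 3.8+

def interval_mm(point_a, point_b, mm_per_px):
    """Distance between two inferred keypoints, scaled to millimetres.
    Illustrative only: the study derives both ADI and SAC from four
    model-inferred coordinates in some fixed convention."""
    return dist(point_a, point_b) * mm_per_px

# Hypothetical keypoints (pixel coordinates) and calibration.
adi = interval_mm((10.0, 20.0), (13.0, 24.0), mm_per_px=0.5)
```

The clinical threshold comparison (e.g., flagging AAS when ADI exceeds a cutoff) then operates on these derived values rather than on the raw image.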


Subject(s)
Arthritis, Rheumatoid , Deep Learning , Humans , Arthritis, Rheumatoid/complications , Arthritis, Rheumatoid/diagnostic imaging , Cervical Vertebrae , Neural Networks, Computer
6.
Front Psychiatry ; 14: 1184156, 2023.
Article in English | MEDLINE | ID: mdl-37457784

ABSTRACT

Introduction: Approaches for the early detection of possible risk clusters for mental health problems among undergraduate university students are warranted to reduce the duration of untreated illness (DUI). However, little is known about indicators of the need for care by others. Herein, we aimed to clarify the specific values of study engagement and lifestyle-habit variables that predict a potentially high-risk cluster for mental health problems among undergraduate university students. Methods: This cross-sectional study used a web-based questionnaire comprising demographic items and the Utrecht Work Engagement Scale for Students (UWES-S-J) as a study engagement scale. Information was also collected on lifestyle habits, such as sleep duration and meal frequency, and on mental health problems, such as depression and fatigue. Students with both mental health problems were classified as high risk. Characteristics of the students in the two groups were compared. Univariate logistic regression was performed to identify predictors of group membership. A receiver operating characteristic (ROC) curve was used to clarify the specific values that differentiated the groups in terms of the significant predictors from the univariate logistic analysis. Cut-off points were calculated using the Youden index. Statistical significance was set at p < 0.05. Results: A total of 1,644 students were assessed, and 30.1% were classified as high risk for mental health problems. Significant differences were found between the two groups in terms of sex, age, study engagement, weekday sleep duration, and meal frequency. In the ROC analysis, students with lower study engagement (UWES-S-J score < 37.5 points; sensitivity, 81.5%; specificity, 38.0%), < 6 h of sleep on weekdays (sensitivity, 82.0%; specificity, 24.0%), and < 2.5 meals per day (sensitivity, 73.3%; specificity, 35.8%) were more likely to be classified into the high-risk group for mental health problems.
Conclusion: Academic staff should identify students who meet these criteria as early as possible and provide mental health support to reduce DUI among undergraduate university students.
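The Youden-index cutoff selection used above can be sketched directly: for each candidate cutoff, compute J = sensitivity + specificity - 1 and keep the maximizer. The toy data below are invented to make the mechanics visible; lower scores indicate higher risk, as with the UWES-S-J criterion.

```python
def youden_best_cutoff(scores, labels, candidate_cutoffs):
    """Return the cutoff maximizing J = sensitivity + specificity - 1,
    where a score below the cutoff counts as a positive prediction."""
    best = None
    for c in candidate_cutoffs:
        tp = sum(1 for s, y in zip(scores, labels) if s < c and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s < c and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, c)
    return best[1]

# Hypothetical engagement scores; label 1 = high-risk student.
cutoff = youden_best_cutoff(
    scores=[30, 35, 36, 40, 45, 50],
    labels=[1, 1, 1, 0, 0, 0],
    candidate_cutoffs=[32.5, 37.5, 42.5],
)
```

In practice the candidate cutoffs are the midpoints between consecutive observed scores on the ROC curve.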

7.
BMC Med Inform Decis Mak ; 23(1): 80, 2023 05 04.
Article in English | MEDLINE | ID: mdl-37143041

ABSTRACT

PURPOSE: Estimating surgery length has the potential to be used for skill assessment, surgical training, and efficient surgical facility utilization, especially if it is done in real time as remaining surgery duration (RSD). Surgical length reflects a certain level of efficiency and mastery of the surgeon in a well-standardized surgery such as cataract surgery. In this paper, we design and develop a real-time RSD estimation method for cataract surgery that does not require manual labeling and is transferable with minimal fine-tuning. METHODS: A regression method consisting of convolutional neural networks (CNNs) and long short-term memory (LSTM) was designed for RSD estimation. The model was first trained and evaluated for a single main surgeon with a large number of surgeries. Then, a fine-tuning strategy was used to transfer the model to the data of two other surgeons. Mean absolute error (MAE, in seconds) was used to evaluate the performance of the RSD estimation. The proposed method was compared with a naïve method based on statistics of the historical data. A transferability experiment was also set up to demonstrate the generalizability of the method. RESULTS: The mean surgical time for the sample videos was 318.7 s (standard deviation 83.4 s) for the main surgeon in the initial training. In our experiments, the lowest MAE of 19.4 s (about 6.4% of the mean surgical time) was achieved by our best-trained model on the independent test data of the main target surgeon, reducing the MAE by 35.5 s (10.2%) compared with the naïve method. The fine-tuning strategy transferred the model trained for the main target surgeon to the data of other surgeons with only a small amount of training data (20% of the pre-training data). The MAEs for the two other surgeons were 28.3 s and 30.6 s with the fine-tuned model, 8.1 s and 7.5 s lower than with the per-surgeon models (an average reduction of 7.8 s, or 1.3% of video duration).
In an external validation study on Cataract-101, our method outperformed three reported methods: TimeLSTM, RSDNet, and CataNet. CONCLUSION: Building a pre-trained RSD estimation model on a single surgeon and then transferring it to other surgeons demonstrated both low prediction error and good transferability with a minimal number of fine-tuning videos.
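The naïve baseline and the MAE metric described above are simple to state exactly. The sketch below uses invented numbers purely to show how the comparison is computed; only the 318.7 s mean duration is taken from the abstract.

```python
def mae(pred, true):
    """Mean absolute error in seconds."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def naive_rsd(t_elapsed, mean_duration):
    """Naïve baseline: at elapsed time t, predict (historical mean - t),
    clipped at zero once the mean duration has passed."""
    return max(mean_duration - t_elapsed, 0.0)

# Hypothetical ground-truth remaining times at three probe instants.
true_rsd = [300.0, 200.0, 100.0]
naive_pred = [naive_rsd(t, 318.7) for t in (60.0, 120.0, 240.0)]
error = mae(naive_pred, true_rsd)
```

A learned CNN+LSTM regressor replaces `naive_rsd` with a prediction conditioned on the video content, which is what closes the gap on atypically fast or slow cases.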


Subject(s)
Cataract , Memory, Short-Term , Humans , Neural Networks, Computer
8.
PLOS Digit Health ; 2(1): e0000174, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36812612

ABSTRACT

The morphological features of retinal arterio-venous crossing patterns are a valuable source of cardiovascular risk stratification, as they directly capture vascular health. Although Scheie's classification, proposed in 1953, has been used as diagnostic criteria to grade the severity of arteriolosclerosis, it is not widely used in clinical settings because mastering this grading requires vast experience. In this paper, we propose a deep learning approach that replicates the diagnostic process of ophthalmologists while providing checkpoints that make the grading process explainable. The proposed pipeline is three-fold. First, we adopt segmentation and classification models to automatically obtain vessels in a retinal image with corresponding artery/vein labels and find candidate arterio-venous crossing points. Second, we use a classification model to validate the true crossing points. Finally, the severity grade of the vessel crossings is classified. To better address label ambiguity and imbalanced label distribution, we propose a new model, named multi-diagnosis team network (MDTNet), in which sub-models with different structures or different loss functions provide different decisions. MDTNet unifies these diverse decisions to give the final decision with high accuracy. Our automated grading pipeline validated crossing points with precision and recall of 96.3% each. Among correctly detected crossing points, the kappa value for agreement between the grading by a retina specialist and the estimated score was 0.85, with an accuracy of 0.92. The numerical results demonstrate that our method achieves good performance in both arterio-venous crossing validation and severity grading, following the diagnostic process of ophthalmologists.
With the proposed models, we built a pipeline that reproduces ophthalmologists' diagnostic process without requiring subjective feature extraction. The code is available (https://github.com/conscienceli/MDTNet).
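The "team" idea above — several sub-models with different structures or losses, unified into one decision — can be sketched in its simplest form as probability averaging. MDTNet's actual aggregation may differ (the repository linked above has the real implementation); this is only the generic ensemble pattern.

```python
def ensemble_decision(prob_lists):
    """Average per-class probabilities across sub-models and return the
    argmax class index. A generic ensemble sketch, not MDTNet itself."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[k] for p in prob_lists) / len(prob_lists) for k in range(n_classes)]
    return max(range(n_classes), key=lambda k: avg[k])

grade = ensemble_decision([
    [0.1, 0.7, 0.2],  # sub-model trained with loss A
    [0.2, 0.5, 0.3],  # sub-model trained with loss B
    [0.3, 0.4, 0.3],  # sub-model with a different backbone
])
```

Averaging soft probabilities rather than taking a majority vote lets confident sub-models outweigh uncertain ones, which is one common way to handle label ambiguity.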

9.
Cancer Cytopathol ; 131(4): 217-225, 2023 04.
Article in English | MEDLINE | ID: mdl-36524985

ABSTRACT

BACKGROUND: Several studies have used artificial intelligence (AI) to analyze cytology images, but AI has yet to be adopted in clinical practice. The objective of this study was to demonstrate the accuracy of AI-based image analysis for thyroid fine-needle aspiration cytology (FNAC) and to propose its application in clinical practice. METHODS: In total, 148,395 microscopic images of FNAC were obtained from 393 thyroid nodules for training and validation, and EfficientNetV2-L was used as the image-classification model. The 35 nodules classified as atypia of undetermined significance (AUS) were then predicted using the trained AI. RESULTS: The precision-recall area under the curve (PR AUC) was >0.95 for all classes except poorly differentiated thyroid carcinoma (PR AUC = 0.49) and medullary thyroid carcinoma (PR AUC = 0.91). Poorly differentiated thyroid carcinoma had the lowest recall (35.4%) and was difficult to distinguish from papillary, medullary, and follicular thyroid carcinoma. Follicular adenomas and follicular thyroid carcinomas were distinguished from each other with 86.7% and 93.9% recall, respectively. In a two-dimensional mapping of the data using t-distributed stochastic neighbor embedding, the lymphomas, follicular adenomas, and anaplastic thyroid carcinomas were divided into three, two, and two groups, respectively. Analysis of the AUS nodules showed 94.7% sensitivity, 14.4% specificity, 56.3% positive predictive value, and 66.7% negative predictive value. CONCLUSIONS: The authors developed an AI-based approach to analyze thyroid FNAC cases encountered in routine practice. This analysis could be useful for the clinical management of AUS and follicular neoplasm nodules (e.g., an online AI platform for thyroid cytology consultations).
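The four AUS metrics reported above all derive from a single 2x2 confusion table. The helper below shows those definitions with invented counts (not the study's data).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall of true positives
        "specificity": tn / (tn + fp),  # recall of true negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=90, fp=10, tn=40, fn=10)
```

Note that with heavily imbalanced counts (as in the AUS analysis), high sensitivity can coexist with low specificity, so the four numbers should be read together rather than singly.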


Subject(s)
Adenocarcinoma, Follicular , Adenoma , Deep Learning , Thyroid Neoplasms , Thyroid Nodule , Humans , Artificial Intelligence , Thyroid Neoplasms/diagnosis , Thyroid Neoplasms/pathology , Thyroid Nodule/diagnosis , Thyroid Nodule/pathology , Adenocarcinoma, Follicular/diagnosis , Adenocarcinoma, Follicular/pathology , Retrospective Studies
10.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4109-4121, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35925849

ABSTRACT

The unprecedented success of deep convolutional neural networks (CNNs) on the task of video-based human action recognition assumes the availability of good-resolution videos and the resources to develop and deploy complex models. Unfortunately, budgetary and environmental constraints on the camera system and the recognition model may not accommodate these assumptions and may require reducing their complexity. To alleviate these issues, we introduce a deep sensing solution that directly recognizes human actions from coded exposure images. Our deep sensing solution consists of a binary CNN-based encoder network that emulates the capture of a coded exposure image of a dynamic scene by a coded exposure camera, followed by a 2D CNN that recognizes human action in the captured coded exposure image. Furthermore, we propose a novel knowledge distillation framework to jointly train the encoder and the action recognition model, and we show that the proposed training approach improves the action recognition accuracy by an absolute margin of 6.2%, 2.9%, and 7.9% on the Something-Something v2, Kinetics-400, and UCF-101 datasets, respectively, compared with our previous approach. Finally, we built a prototype coded exposure camera using liquid crystal on silicon (LCoS) to validate the feasibility of our deep sensing solution. Our evaluation of the prototype camera shows results consistent with the simulation results.
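Knowledge distillation, mentioned above as the joint-training mechanism, is typically driven by a KL-divergence term between temperature-softened teacher and student distributions. The sketch below shows that standard soft-target loss; the paper's exact formulation may differ.

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions:
    the usual soft-target term in knowledge distillation."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

loss_same = kd_kl([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])  # identical nets
loss_diff = kd_kl([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])  # disagreeing nets
```

The loss is zero when the student matches the teacher and grows with disagreement, so minimizing it transfers the teacher's class-similarity structure into the student.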

11.
Sci Rep ; 12(1): 14067, 2022 08 18.
Article in English | MEDLINE | ID: mdl-35982217

ABSTRACT

This study sought to develop a deep learning-based diagnostic algorithm for plaque vulnerability by analyzing intravascular optical coherence tomography (OCT) images and to investigate the relation between AI-determined plaque vulnerability and clinical outcomes in patients with coronary artery disease (CAD). A total of 1791 patients who underwent OCT examinations were recruited from a multicenter clinical database, and the OCT images were first labeled as normal, stable plaque, or vulnerable plaque by expert cardiologists. A DenseNet-121-based deep learning algorithm for plaque characterization was developed by training on 44,947 prelabeled OCT images and demonstrated excellent differentiation among normal tissue, stable plaques, and vulnerable plaques. Patients diagnosed with vulnerable plaques by the algorithm had a significantly higher rate of both events arising from the OCT-observed segments and clinical events than patients with normal findings or stable plaques (log-rank p < 0.001). In multivariate logistic regression analyses, the OCT diagnosis of a vulnerable plaque by the algorithm was independently associated with both types of events (p = 0.047 and p < 0.001, respectively). AI analysis of intracoronary OCT imaging can assist cardiologists in diagnosing plaque vulnerability and identifying CAD patients with a high probability of future clinical events.


Subject(s)
Coronary Artery Disease , Plaque, Atherosclerotic , Coronary Angiography , Coronary Artery Disease/diagnostic imaging , Coronary Vessels/diagnostic imaging , Humans , Plaque, Atherosclerotic/diagnostic imaging , Tomography, Optical Coherence
12.
Sensors (Basel) ; 22(7)2022 Mar 22.
Article in English | MEDLINE | ID: mdl-35408057

ABSTRACT

Multi-path interference causes depth errors in indirect time-of-flight (ToF) cameras. In this paper, we demonstrate the resolution of multi-path interference caused by surface reflections using a multi-tap macro-pixel computational CMOS image sensor. The imaging area is implemented as an array of macro-pixels composed of four subpixels, each embodied by a four-tap lateral electric field charge modulator (LEFM). The sensor can simultaneously acquire 16 images for different temporal shutters and can reproduce more than 16 images based on compressive sensing with multi-frequency shutters and sub-clock shifting. In simulations, an object was placed 16 m away from the sensor, and the depth of an interfering object was varied from 1 to 32 m in 1 m steps. To investigate the potential of our sensor, the two reflections were separated in two stages: coarse estimation based on a compressive sensing solver, followed by refinement with a nonlinear search. Relative standard deviation (precision) and relative mean error (accuracy) were evaluated under the influence of photon shot noise. The proposed method was verified using a prototype multi-tap macro-pixel computational CMOS image sensor in single-path and dual-path situations. In the experiment, an acrylic plate was placed 1 m or 2 m from the sensor and a mirror 9.3 m from the sensor.

13.
Sensors (Basel) ; 22(5)2022 Mar 02.
Article in English | MEDLINE | ID: mdl-35271100

ABSTRACT

An ultra-high-speed computational CMOS image sensor with a burst frame rate of 303 megaframes per second, the fastest among solid-state image sensors to our knowledge, is demonstrated. This image sensor is compatible with ordinary single-aperture lenses and can operate in dual modes, single-event filming or multi-exposure imaging, by reconfiguring the number of exposure cycles. To realize this frame rate, the charge modulator drivers were designed to suppress the peak driving current by taking advantage of an operational constraint of the multi-tap charge modulator. The pixel array is composed of macropixels with 2 × 2 four-tap subpixels. Because temporal compressive sensing is performed in the charge domain without any analog circuit, an ultrafast frame rate, small pixel size, low noise, and low power consumption are achieved. In the experiments, single-event imaging of plasma emission in laser processing was demonstrated, as was multi-exposure transient imaging of light reflections, with a compression ratio of 8×, to extend the depth range and to decompose multiple reflections for time-of-flight (ToF) depth imaging. Time-resolved images similar to those obtained by direct ToF were reproduced in a single shot, while the charge modulator for indirect ToF was utilized.

14.
Eur J Orthod ; 44(4): 436-444, 2022 08 16.
Article in English | MEDLINE | ID: mdl-35050343

ABSTRACT

AIM: This study aimed to evaluate two artificial intelligence (AI) systems that create a prioritized problem list and a treatment plan, and to examine whether their performance is equivalent to that of orthodontists. MATERIALS AND METHODS: A total of 967 consecutive cases [800 training; 67 validation; 100 evaluation (40 randomly selected for the clinical evaluation)] were used. We used stored documents describing (1) the patient's clinical information, (2) the prioritized problem list, and (3) a treatment strategy without digital tooth movement. Sentences of (1) were vectorized according to the bag-of-words method (V); sentences of (2) and (3) were relabelled with 423 and 330 labels, respectively. AI systems that output labels for the prioritized list (subtask 1) and treatment planning (subtask 2) based on the vectors V were developed using a support vector machine and a self-attention network, respectively, and the systems were trained to improve precision and recall. Clinical evaluations were conducted by four orthodontists (no faculty or residents; the peer group) in two sessions: in the first session, the peer group and the developed AI systems created problem lists and treatment plans; in the second session, two of the peer group (not the AI) evaluated these lists and plans, including those of the AIs, by scoring them on 4-point scales [unacceptable (1) to ideal (4)]. Scores were compared between the systems and the peer group (Wilcoxon signed-rank test, P < 0.05). RESULTS: The precision after system training was 65% and 48% for subtasks 1 and 2, respectively, with recall of 55% and 48%, respectively. In the clinical evaluation, the AI system for subtask 1 achieved a mid-rank. For subtask 2, the AI system had a significantly lower score than three of the panel but the same rank as one panelist. CONCLUSIONS: Two AI systems that output a prioritized problem list and create a treatment plan were developed.
The clinical ability of the former system was mid-rank within the peer group, and the latter system was almost equivalent to the lowest-performing orthodontist.
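The bag-of-words vectorization ("vectors V") described above can be sketched in its simplest form: build a shared vocabulary and count word occurrences per document. Tokenization by whitespace and the sample phrases are illustrative assumptions, not the study's preprocessing.

```python
def bag_of_words(docs):
    """Build a shared vocabulary and per-document count vectors."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for w in d.lower().split():
            v[index[w]] += 1
        vectors.append(v)
    return vocab, vectors

# Hypothetical clinical-note fragments.
vocab, V = bag_of_words(["crowding upper arch", "crowding lower arch"])
```

These count vectors are what a downstream classifier (an SVM for subtask 1 here) consumes; real pipelines usually add stop-word removal and TF-IDF weighting on top.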


Subject(s)
Artificial Intelligence , Tooth Movement Techniques , Humans
15.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 5618-5630, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33848240

ABSTRACT

We introduce a method of recovering the shape of a smooth dielectric object using diffuse polarization images taken under different directional light sources. We present two constraints, on shading and on polarization, and use both in a single optimization scheme. This integration is motivated by the complementary abilities of photometric stereo and polarization-based methods. Polarization gives strong cues for the surface orientation and refractive index, which are independent of the light direction. However, employing polarization leads to an ambiguity between two choices of the surface orientation and to an ambiguity in the relationship between the refractive index and the zenith angle (observing angle). Moreover, polarization-based methods perform poorly for surface points with small zenith angles owing to the weak polarization there. In contrast, photometric stereo with multiple light sources disambiguates the surface normals and gives a strong relationship between surface normals and light directions; however, it has limited performance for large zenith angles and refractive index estimation, and it faces strong ambiguity when light directions are unknown. Taking advantage of both methods, our proposed method recovers surface normals for small and large zenith angles, the light directions, and the refractive index of the object. The proposed method is evaluated positively in simulations and real-world experiments.
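The shading constraint above builds on classical Lambertian photometric stereo: with known light directions L and observed intensities I, solve L n = I for the albedo-scaled normal. The sketch below is that textbook step only, not the paper's joint shading-polarization optimizer; the light directions and albedo are invented.

```python
import numpy as np

# Three known light directions (rows) and a Lambertian surface point.
L = np.array([
    [0.0, 0.0, 1.0],
    [0.7, 0.0, 0.714],
    [0.0, 0.7, 0.714],
])
true_n = np.array([0.0, 0.0, 1.0])  # ground-truth surface normal
albedo = 0.8
I = albedo * L @ true_n              # observed intensities (no shadows)

# Solve L g = I in the least-squares sense; g = albedo * n.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(g)              # recovered albedo
n_hat = g / rho                      # recovered unit normal
```

Because the solution is unique given three or more non-coplanar lights, photometric stereo has no two-fold normal ambiguity, which is exactly what the polarization cue lacks and why the two constraints complement each other.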

16.
Front Psychiatry ; 12: 731137, 2021.
Article in English | MEDLINE | ID: mdl-34589012

ABSTRACT

This study aimed to clarify the adaptation features of university students exposed to fully online education during the novel coronavirus disease 2019 (COVID-19) pandemic and to identify accompanying mental health problems and predictors of school adaptation. The pandemic forced many universities to transition rapidly to online education, but little is known about the impact of this drastic change on students' school adaptation. This cross-sectional study used an online questionnaire, including assessments of impressions of online education, study engagement, mental health, and lifestyle habits. In total, 1,259 students were assessed. The characteristics of school adaptation were analyzed by a two-step cluster analysis, and the proportion of mental health problems was compared among the resulting groups. A logistic regression analysis was used to identify predictors of cluster membership. P-values < 0.05 were considered statistically significant. The two-step cluster analysis determined three clusters: a school adaptation group, a school maladaptation group, and a school over-adaptation group. The last group exhibited significantly more mental health problems than the other groups. Membership of this group was significantly associated with being female (OR = 1.42; 95% CI 1.06-1.91), being older (OR = 1.21; 95% CI 1.01-1.44), considering online education to be less beneficial (OR = 2.17; 95% CI 1.64-2.88), shorter sleep duration on weekdays (OR = 0.826; 95% CI 0.683-0.998), longer sleep duration on holidays (OR = 1.21; 95% CI 1.03-1.43), and worse restorative sleep (OR = 2.27; 95% CI 1.81-2.86). The results suggest that academic staff should understand the distinctive features of school adaptation arising from the rapid transition of the educational system and should develop support systems to improve students' mental health.
They should consider ways to incorporate online classes into their lectures to improve students' perceived benefits of online education. Additionally, educational guidance on lifestyle, such as sleep hygiene, may be necessary.
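The odds ratios with 95% confidence intervals reported above follow the standard 2x2-table construction with a Woolf (log-scale) interval for a binary predictor. The counts below are invented for illustration, not the study's data (and the study's ORs come from a logistic model rather than raw tables).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table:
    a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/60 exposed cases/controls, 20/80 unexposed.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

An OR below 1 (as for weekday sleep duration above) indicates that each unit increase in the predictor lowers the odds of high-risk group membership.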

17.
Eur Radiol ; 31(4): 1978-1986, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33011879

ABSTRACT

OBJECTIVES: To compare diagnostic performance for pulmonary invasive adenocarcinoma between radiologists working with and without a three-dimensional convolutional neural network (3D-CNN). METHODS: A total of 285 patients were enrolled: adenocarcinoma in situ (AIS, n = 75), minimally invasive adenocarcinoma (MIA, n = 58), and invasive adenocarcinoma (IVA, n = 152). A 3D-CNN model was constructed with seven convolution-pooling layers, two max-pooling layers, and fully connected layers, using batch normalization, residual connections, and global average pooling. Only flipping was used for data augmentation. The output layer comprised two nodes representing the two prognostic conditions (AIS/MIA and IVA). Diagnostic performance of the 3D-CNN model across the 285 patients was estimated with nested 10-fold cross-validation. For 90 of the 285 patients, the results of each radiologist (R1, R2, and R3, with 9, 14, and 26 years of experience, respectively) with and without the 3D-CNN model were compared statistically. RESULTS: Without the 3D-CNN model, the accuracy, sensitivity, and specificity of the radiologists were as follows: R1, 70.0%, 52.1%, and 90.5%; R2, 72.2%, 75.0%, and 69.0%; R3, 74.4%, 89.6%, and 57.1%. With the 3D-CNN model, they were: R1, 72.2%, 77.1%, and 66.7%; R2, 74.4%, 85.4%, and 61.9%; R3, 74.4%, 93.8%, and 52.4%. Overall diagnostic performance of each radiologist did not differ significantly with versus without the 3D-CNN model (p > 0.88), although the accuracy of R1 and R2 was significantly higher with the model than without it (p < 0.01). CONCLUSIONS: The 3D-CNN model can help a less-experienced radiologist improve diagnostic accuracy for pulmonary invasive adenocarcinoma without degrading any other diagnostic performance. KEY POINTS:
• The 3D-CNN model is a non-invasive method for predicting pulmonary invasive adenocarcinoma on CT images with high sensitivity.
• Diagnostic accuracy of a less-experienced radiologist was better with the 3D-CNN model than without it.
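For readers less familiar with these reader-study metrics, the per-reader figures follow directly from 2×2 confusion-matrix counts. A minimal pure-Python sketch; the counts below are hypothetical values back-calculated to reproduce R1's unaided figures on the 90-patient subset, not data reported in the abstract:

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts (48 invasive, 42 AIS/MIA) consistent with R1's
# unaided reading: 70.0% accuracy, 52.1% sensitivity, 90.5% specificity.
acc, sen, spe = binary_metrics(tp=25, fn=23, tn=38, fp=4)
print(f"accuracy {acc:.1%}, sensitivity {sen:.1%}, specificity {spe:.1%}")
```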


Subject(s)
Adenocarcinoma of Lung , Lung Neoplasms , Adenocarcinoma of Lung/diagnostic imaging , Humans , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer , Radiologists , Tomography, X-Ray Computed
18.
Front Psychol ; 11: 568, 2020.
Article in English | MEDLINE | ID: mdl-32296374

ABSTRACT

Climate change is one of the most important issues facing humanity. To mitigate it, improving energy efficiency, making energy sources cleaner, and reducing energy consumption in urban areas are considered necessary. Since 2005, the Japanese government has recommended air-conditioner settings of 28°C in summer and 20°C in winter, with the aim of saving energy by keeping room temperatures constant. However, it is unclear whether these are appropriate temperatures for workers and students. This study examined whether thermal environments influence task performance over time. To examine whether the relationship between task performance and the thermal environment influences participants' psychological states, we recorded their subjective ratings of mental workload along with their working memory scores, electroencephalogram (EEG), heart rate variability, skin conductance level (SCL), and tympanic temperature during the task, and compared the results across conditions. Participants were asked to read texts and answer questions about them while room temperature (18, 22, 25, or 29°C) was manipulated at a constant humidity of 50%. The time required for the task and the theta power of the EEG, an index of concentration, decreased over time, whereas subjective mental workload increased with time. Moreover, the low-frequency to high-frequency (LF/HF) ratio and the SCL increased with time and with heat (25 and 29°C). These results suggest that mental workload, especially implicit mental workload, increases in warmer environments even when learning efficiency is facilitated. By analyzing behavioral, subjective, and physiological indexes multidirectionally, this study provides integrated evidence on the relationships among task performance, psychological state, and the thermal environment.
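As background for the EEG measure mentioned above, theta power is conventionally obtained by integrating the power spectrum over roughly the 4-8 Hz band. A minimal NumPy sketch; the synthetic signals and band limits are illustrative assumptions, not the study's actual analysis pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo=4.0, f_hi=8.0):
    """Mean periodogram power of a 1-D signal within [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# Synthetic check: a 6 Hz sinusoid carries far more theta-band power
# than a 20 Hz sinusoid, which lies outside the theta band.
fs = 256
t = np.arange(0, 4, 1 / fs)
theta_sig = np.sin(2 * np.pi * 6 * t)
beta_sig = np.sin(2 * np.pi * 20 * t)
print(band_power(theta_sig, fs) > band_power(beta_sig, fs))  # True
```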

19.
J Surg Res ; 242: 11-22, 2019 10.
Article in English | MEDLINE | ID: mdl-31059944

ABSTRACT

BACKGROUND: Biomedical imaging devices that exploit the optical characteristics of hemoglobin (Hb) have become widespread. In gastroenterology, there is strong demand for devices that apply this technique to surgical navigation. We aimed to introduce our novel multispectral device, which intraoperatively performs quantitative imaging of tissue oxygen (O2) saturation and Hb amount noninvasively and in real time, and to examine its application in deciding the appropriate anastomosis point after subtotal or total esophagectomy. MATERIALS AND METHODS: A total of 39 patients with esophageal cancer were studied. Tissue O2 saturation and Hb amount of the gastric tube were evaluated just before esophagogastric anastomosis using the multispectral quantitative tissue imaging device. The anastomosis point was chosen on the basis of the quantitative values and patterns of both tissue O2 saturation and Hb amount. RESULTS: The device instantaneously and noninvasively quantifies and visualizes tissue O2 saturation and Hb amount from reflected light. Tissue Hb status could be classified into four types: good circulation, congestion, ischemia, and a mixed type of congestion and ischemia. Postoperative anastomotic failure occurred in two cases, both of the mixed type. CONCLUSIONS: Quantitative, real-time, noninvasive imaging of tissue O2 saturation and Hb level with a multispectral device allows instantaneous assessment of the anastomosis site and related organ conditions, thereby helping to determine the appropriate treatment direction.
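For intuition about how multispectral reflectance yields O2 saturation, a standard approach (not necessarily this device's actual algorithm) solves a linear Beer-Lambert system for oxy- and deoxyhemoglobin concentrations; the extinction coefficients below are placeholder values for illustration, not literature constants:

```python
import numpy as np

# Placeholder extinction coefficients (rows: wavelengths; cols: [HbO2, Hb]).
# Real values must be taken from published hemoglobin absorption spectra.
E = np.array([[0.6, 1.4],    # wavelength 1: Hb absorbs more strongly
              [1.1, 0.8]])   # wavelength 2: HbO2 absorbs more strongly

def oxygen_saturation(absorbance):
    """Solve A = E @ c for c = [HbO2, Hb]; SO2 = HbO2 / (HbO2 + Hb)."""
    c_hbo2, c_hb = np.linalg.solve(E, absorbance)
    total = c_hbo2 + c_hb            # proportional to total Hb amount
    return c_hbo2 / total, total

# Synthetic round trip: absorbances generated at 70% saturation.
true_c = np.array([0.7, 0.3])        # 70% HbO2, 30% Hb
so2, total_hb = oxygen_saturation(E @ true_c)
print(f"SO2 = {so2:.0%}, total Hb = {total_hb:.2f}")  # SO2 = 70%, total Hb = 1.00
```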


Subject(s)
Esophageal Neoplasms/surgery , Esophagectomy , Esophagus/diagnostic imaging , Esophagus/surgery , Optical Imaging/instrumentation , Stomach/diagnostic imaging , Stomach/surgery , Aged , Aged, 80 and over , Anastomosis, Surgical , Biomarkers/metabolism , Esophagus/blood supply , Female , Hemoglobins/metabolism , Humans , Intraoperative Care/instrumentation , Intraoperative Care/methods , Male , Middle Aged , Optical Imaging/methods , Oxygen/metabolism , Stomach/blood supply
20.
Sensors (Basel) ; 18(3)2018 Mar 05.
Article in English | MEDLINE | ID: mdl-29510599

ABSTRACT

The photometric stereo method estimates surface normals from images captured under different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene, and it cannot be applied to dynamic scenes because the scene is assumed to remain static while those images are captured. In this work, we present a dynamic photometric stereo method for estimating the surface normals of a dynamic scene. We capture the required input images with a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor, which can distribute the photo-generated electrons of a single pixel among different taps during exposure and can therefore capture multiple images under different lighting conditions at almost identical timing. We implemented a camera-lighting system and created a software application that estimates the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that the proposed method can estimate the surface normals of dynamic scenes.
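The classical Lambertian formulation underlying this can be sketched in a few lines: with k ≥ 3 known light directions, per-pixel intensities satisfy I = L · (ρn), so a least-squares solve recovers the albedo ρ and unit normal n. The light directions and reflectance values below are illustrative:

```python
import numpy as np

def photometric_stereo(L, I):
    """Recover albedo and unit normal for one pixel (Lambertian model).

    L: (k, 3) array of unit light-direction vectors, k >= 3.
    I: (k,) array of observed intensities under those lights.
    """
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Synthetic check: shade a pixel with a known normal, then recover it.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.0, 0.6, 0.8])
I = 0.9 * L @ n_true                 # Lambertian shading with albedo 0.9
albedo, n_est = photometric_stereo(L, I)
print(np.allclose(n_est, n_true), round(albedo, 3))  # True 0.9
```

With only three lights the solve is exact; more lights overdetermine the system, and least squares then averages out sensor noise.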
