Results 1 - 3 of 3
1.
Ann Oper Res ; : 1-34, 2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37361091

ABSTRACT

With growing environmental concerns and the availability of ubiquitous big data, smart transportation is transforming logistics business and operations into a more sustainable practice. To answer questions in intelligent transportation planning, such as which data are feasible, which methods are applicable for intelligent prediction from such data, and which operations are available for prediction, this paper offers a new deep learning approach called the bi-directional isometric-gated recurrent unit (BDIGRU). It is integrated into a deep neural network framework for predictive analysis of travel time and business adoption for route planning. The proposed method directly learns high-level features from big traffic data and reconstructs them with its own attention mechanism, driven by temporal order, to complete the learning process recursively in an end-to-end manner. After deriving the computational algorithm with stochastic gradient descent, we use the proposed method to perform predictive analysis of stochastic travel time under various traffic conditions (especially congestion) and then determine the optimal vehicle route with the shortest travel time under future uncertainty. Based on empirical results with big traffic data, we show that the proposed BDIGRU method can (1) significantly improve the predictive accuracy of one-step (30 min ahead) travel time compared with several conventional (data-driven, model-driven, hybrid, and heuristic) methods under several performance criteria, and (2) efficiently determine the optimal vehicle route in relation to the predictive variability under uncertainty.
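
The BDIGRU architecture builds on the standard gated recurrent unit, run over the sequence in both temporal directions. The abstract does not give the isometric or attention components in enough detail to reproduce, so the following is only a minimal NumPy sketch of the bidirectional GRU backbone such a method extends; all weights, dimensions, and the random inputs are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One standard GRU step: update gate z, reset gate r, candidate state."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])
    return (1 - z) * h + z * h_tilde

def bidirectional_gru(xs, hidden, params_f, params_b):
    """Run forward and backward GRU passes and concatenate the final states."""
    h_f = np.zeros(hidden)
    for x in xs:                       # forward pass in temporal order
        h_f = gru_step(x, h_f, *params_f)
    h_b = np.zeros(hidden)
    for x in reversed(xs):             # backward pass in reversed order
        h_b = gru_step(x, h_b, *params_b)
    return np.concatenate([h_f, h_b])  # 2*hidden feature vector

def make_params(inp, hid, rng):
    """Random (untrained) GRU parameters for illustration only."""
    W = {k: rng.standard_normal((hid, inp)) * 0.1 for k in "zrh"}
    U = {k: rng.standard_normal((hid, hid)) * 0.1 for k in "zrh"}
    b = {k: np.zeros(hid) for k in "zrh"}
    return W, U, b

rng = np.random.default_rng(0)
seq = [rng.standard_normal(4) for _ in range(6)]  # 6 time steps, 4 features
feat = bidirectional_gru(seq, 8, make_params(4, 8, rng), make_params(4, 8, rng))
print(feat.shape)  # (16,)
```

In practice the concatenated forward/backward features would feed a regression head trained by stochastic gradient descent to predict travel time, as the abstract describes.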

2.
Clin Oral Investig ; 26(11): 6629-6637, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35881240

ABSTRACT

OBJECTIVE: Successful application of deep machine learning could reduce the time-consuming and labor-intensive clinical work of calculating the amount of radiographic bone loss (RBL) when diagnosing and treatment planning for periodontitis. This study aimed to test the accuracy of RBL classification by machine learning. MATERIALS AND METHODS: A total of 236 patients with standardized full-mouth radiographs were included. Each tooth from the periapical films was evaluated by three calibrated periodontists for categorization of RBL and radiographic defect morphology. Each image was pre-processed and augmented to ensure proper data balancing without data pollution; then a novel multitasking InceptionV3 model was applied. RESULTS: The model demonstrated an average accuracy of 0.87 ± 0.01 in the categorization of mild (< 15%) or severe (≥ 15%) bone loss with fivefold cross-validation. Sensitivity, specificity, positive predictive value, and negative predictive value of the model were 0.86 ± 0.03, 0.88 ± 0.03, 0.88 ± 0.03, and 0.86 ± 0.02, respectively. CONCLUSIONS: Application of deep machine learning to the detection of alveolar bone loss yielded promising results in this study. Additional data would be beneficial to enhance model construction and enable better machine learning performance for clinical implementation. CLINICAL RELEVANCE: With more clinical data and proper model construction, machine learning can classify radiographic bone loss more accurately, making it valuable for clinical application.


Subject(s)
Alveolar Bone Loss, Deep Learning, Periodontitis, Humans, Machine Learning, Radiography, Periodontitis/diagnostic imaging, Alveolar Bone Loss/diagnostic imaging
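
The sensitivity, specificity, and predictive values reported above all derive from a 2x2 confusion matrix over the mild/severe classes. As an illustration, the sketch below computes these criteria from confusion counts; the counts are hypothetical, chosen only so the outputs land near the reported averages, and are not the study's actual data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from confusion counts,
    treating severe (>= 15%) bone loss as the positive class."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)   # recall on the positive (severe) class
    spec = tn / (tn + fp)   # recall on the negative (mild) class
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return acc, sens, spec, ppv, npv

# Hypothetical counts for a single cross-validation fold
acc, sens, spec, ppv, npv = binary_metrics(tp=86, fp=12, tn=88, fn=14)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # 0.87 0.86 0.88
```

Under fivefold cross-validation, these metrics would be computed per fold and then averaged, which is how the ± values in the abstract arise.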
3.
Sensors (Basel) ; 18(9)2018 Sep 12.
Article in English | MEDLINE | ID: mdl-30213128

ABSTRACT

Dynamic voltage and frequency scaling (DVFS) is a well-known method for reducing energy consumption. Several DVFS studies have applied learning-based methods to implement the DVFS prediction model instead of complicated mathematical models. This paper proposes a lightweight learning-directed DVFS method that uses counter propagation networks to sense and classify task behavior and predict the best voltage/frequency setting for the system. An intelligent performance-adjustment mechanism is also provided to users under various performance requirements. The proposed algorithms and other competitive techniques are evaluated experimentally on the NVIDIA Jetson Tegra K1 multicore platform and the Intel PXA270 embedded platform. The results demonstrate that the learning-directed DVFS method can accurately predict the suitable central processing unit (CPU) frequency, given the runtime statistical information of a running program, and achieve an energy savings rate of up to 42%. Through this method, users can easily balance energy consumption and performance by specifying an acceptable factor of performance loss.
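
The core DVFS decision is to pick the lowest CPU frequency that still meets the program's performance requirement, relaxed by the user's chosen performance-loss factor. The sketch below illustrates that selection rule with a simple cycle-budget model; it stands in for the paper's counter-propagation-network predictor, and the frequency list and workload numbers are hypothetical.

```python
def pick_frequency(freqs_mhz, workload_cycles, deadline_s, perf_loss=0.0):
    """Return the lowest frequency (MHz) that finishes the workload within
    the deadline, relaxed by a user-specified performance-loss factor."""
    budget = deadline_s * (1.0 + perf_loss)     # tolerated runtime
    for f in sorted(freqs_mhz):                 # try lowest frequency first
        runtime = workload_cycles / (f * 1e6)   # estimated seconds at f
        if runtime <= budget:
            return f
    return max(freqs_mhz)                       # fall back to full speed

freqs = [300, 600, 900, 1200]  # example available CPU frequencies (MHz)
print(pick_frequency(freqs, workload_cycles=5e8, deadline_s=0.7))  # 900
```

Raising `perf_loss` (e.g. to 0.2) loosens the budget and lets the governor settle on a lower frequency, which is how tolerating a small performance loss translates into energy savings.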
