1.
Neural Netw ; 169: 713-732, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37976595

ABSTRACT

The remarkable performance of Convolutional Neural Networks (CNNs) has increased their use in real-time systems and resource-constrained devices. Compacting these networks while preserving accuracy has therefore become necessary, leading to multiple compression methods. However, most require intensive iterative procedures and do not examine the influence of the data used. To overcome these issues, this paper presents several contributions, framed in the context of explainable Artificial Intelligence (xAI): (a) two filter pruning methods for CNNs, which remove the less significant convolutional kernels; (b) a fine-tuning strategy to recover generalization; (c) a layer pruning approach for U-Net; and (d) an explanation of the relationship between performance and the data used. Filter and feature-map information drives the pruning process: Principal Component Analysis (PCA) is combined with a next-convolution influence metric, while the latter and the mean standard deviation are used in a method based on the distribution of importance scores. The developed strategies are generic and therefore applicable to different models. Experiments demonstrating their effectiveness are conducted on distinct CNNs and datasets, focusing mainly on semantic segmentation (using U-Net, DeepLabv3+, SegNet, and VGG-16 as highly representative models). Pruned U-Net on agricultural benchmarks achieves a 98.7% parameter and 97.5% FLOPs reduction, with a 0.35% gain in accuracy. DeepLabv3+ and SegNet on CamVid reach 46.5% and 72.4% parameter reductions and 51.9% and 83.6% FLOPs drops, respectively, with almost no decrease in accuracy. VGG-16 on CIFAR-10 obtains up to an 86.5% parameter and 82.2% FLOPs decrease with a 0.78% accuracy gain.


Subject(s)
Artificial Intelligence; Semantics; Neural Networks, Computer; Algorithms; Benchmarking
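
The abstract above describes scoring convolutional filters with PCA on feature-map information before removing the least significant ones. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch: it ranks the filters of one convolutional layer by their contribution to the dominant principal components of the layer's feature maps on a calibration batch. The model, layer, calibration data, and pruning fraction are placeholders, and the scoring heuristic is a generic stand-in rather than the paper's exact PCA/next-convolution influence metric.

# Minimal sketch of PCA-style filter scoring for one conv layer (PyTorch).
# The heuristic (filter contribution to the dominant principal components of
# the layer's feature maps) is a generic stand-in, not the paper's exact metric.
import torch
import torch.nn as nn

def collect_feature_maps(model: nn.Module, layer: nn.Conv2d,
                         batch: torch.Tensor) -> torch.Tensor:
    """Run one calibration batch and capture the layer's output feature maps."""
    captured = []
    handle = layer.register_forward_hook(lambda m, i, o: captured.append(o.detach()))
    with torch.no_grad():
        model(batch)
    handle.remove()
    return captured[0]                                    # (N, C_out, H, W)

def pca_filter_scores(feature_maps: torch.Tensor, top_frac: float = 0.5) -> torch.Tensor:
    """Score each filter by its loading on the dominant principal components."""
    n, c, h, w = feature_maps.shape
    flat = feature_maps.permute(1, 0, 2, 3).reshape(c, -1).float()
    flat = flat - flat.mean(dim=1, keepdim=True)
    cov = flat @ flat.t() / (flat.shape[1] - 1)           # (C, C) covariance across filters
    eigvals, eigvecs = torch.linalg.eigh(cov)             # eigenvalues in ascending order
    k = max(1, int(top_frac * c))
    top = eigvals.argsort(descending=True)[:k]
    # A filter matters if it contributes strongly to the high-variance components.
    return (eigvecs[:, top] ** 2 * eigvals[top].clamp(min=0)).sum(dim=1)

if __name__ == "__main__":
    # Placeholder model and calibration images, for illustration only.
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
    calib = torch.randn(8, 3, 32, 32)
    fmaps = collect_feature_maps(model, model[0], calib)
    scores = pca_filter_scores(fmaps)
    prune_idx = scores.argsort()[: int(0.3 * len(scores))]   # 30% least important filters
    print("candidate filters to prune:", prune_idx.tolist())

In a full pipeline, the lowest-ranked filters would then be removed and the network fine-tuned to recover generalization, as the abstract describes.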
2.
J Digit Imaging ; 36(5): 2259-2277, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37468696

ABSTRACT

Peri-implantitis can cause marginal bone remodeling around implants. The aim is to develop an automatic image-processing approach based on two artificial intelligence (AI) techniques applied to intraoral (periapical and bitewing) radiographs to assist dentists in determining bone loss. The first is a deep learning (DL) object detector (YOLOv3) that roughly identifies two objects, the prosthesis (crown) and the implant (screw); no exact localization is required. The second is an image-understanding-based (IU) process that fine-tunes lines on the screw edges and identifies significant points (intensity changes at the bone level, intersections between screw and crown). Distances between these points are used to compute bone loss. A total of 2920 radiographs were used for training (50%) and testing (50%) the DL process. The mAP@0.5 metric is used to evaluate DL performance for periapical/bitewing images and screws/crowns in the upper and lower jaws, with scores ranging from 0.537 to 0.898 (sufficient, because DL only needs an approximation). The IU performance is assessed on 50% of the testing radiographs using the t-test, obtaining p values of 0.0106 (line fitting) and 0.0213 (significant point detection). The IU performance is satisfactory, as these values are in accordance with the average/standard deviation in pixels for line fitting (2.75/1.01) and for significant point detection (2.63/1.28) according to the expert criteria of the dentists who established the ground-truth lines and significant points. In conclusion, AI methods have good prospects for automatic bone-loss detection in intraoral radiographs to assist dental specialists in diagnosing peri-implantitis.


Subject(s)
Alveolar Bone Loss; Peri-Implantitis; Tooth; Humans; Artificial Intelligence; Prostheses and Implants
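
The second abstract computes bone loss from distances between significant points found along the fitted screw edges. The sketch below is a hypothetical illustration of that final measurement step only: the point names (platform, bone level, apex) and the exposed-length to implant-length ratio are assumptions made for illustration, not the formula reported in the paper, and the detection of the points themselves (YOLOv3 plus the IU stage) is not reproduced here.

# Hypothetical sketch: turning detected landmark coordinates into a bone-loss
# estimate. Point names and the ratio below are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float

def distance(a: Point, b: Point) -> float:
    """Euclidean distance in pixels between two landmarks."""
    return math.hypot(a.x - b.x, a.y - b.y)

def bone_loss_ratio(platform: Point, bone_level: Point, apex: Point) -> float:
    """Exposed implant length (platform to current bone level) as a fraction of
    total implant length (platform to apex), measured along the screw edge."""
    implant_length = distance(platform, apex)
    exposed = distance(platform, bone_level)
    return exposed / implant_length if implant_length > 0 else 0.0

if __name__ == "__main__":
    # Made-up pixel coordinates from one periapical radiograph.
    platform = Point(120.0, 80.0)     # screw/crown intersection
    bone = Point(123.0, 112.0)        # first intensity change along the screw edge
    apex = Point(131.0, 230.0)        # implant tip
    print(f"bone loss: {bone_loss_ratio(platform, bone, apex):.1%} of implant length")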