1.
Sci Rep ; 14(1): 6498, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38499588

ABSTRACT

Three-dimensional (3D) images provide a comprehensive view of material microstructures, enabling numerical simulations unachievable with two-dimensional (2D) imaging alone. However, obtaining these 3D images can be costly and constrained by resolution limitations. We introduce a novel method capable of generating large-scale 3D images of material microstructures, such as metal or rock, from a single 2D image. Our approach circumvents the need for 3D image data while offering a cost-effective, high-resolution alternative to existing imaging techniques. Our method combines a denoising diffusion probabilistic model with a generative adversarial network framework. To compensate for the lack of 3D training data, we implement chain sampling, a technique that utilizes the 3D intermediate outputs obtained by reversing the diffusion process. During the training phase, these intermediate outputs are guided by a 2D discriminator. This technique enables our method to gradually generate 3D images that accurately capture the geometric properties and statistical characteristics of the original 2D input. This study features a comparative analysis of the 3D images generated by our method, SliceGAN (the current state-of-the-art method), and actual 3D micro-CT images, spanning a diverse set of rock and metal types. The results show up to a threefold improvement in the Fréchet inception distance (FID) score, a standard metric for evaluating the performance of image generative models, and enhanced accuracy in derived properties compared to SliceGAN. The potential of our method to produce high-resolution and statistically representative 3D images paves the way for new applications in material characterization and analysis domains.
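The abstract uses the Fréchet inception distance as its evaluation metric. As an illustration of what that metric computes, here is a minimal sketch of the Fréchet distance between Gaussian fits of two feature sets; note that the full FID pipeline would first pass both image sets through an Inception-v3 network to obtain the features, which is omitted here, and the feature arrays below are synthetic placeholders.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two (n, d) feature sets:
    ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^{1/2})."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):        # discard tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
same = rng.normal(size=(500, 8))        # stand-in for Inception features
shifted = same + 3.0                    # a clearly different distribution
print(fid(same, same) < 1e-6)           # identical sets: distance ~ 0
print(fid(same, shifted) > fid(same, same))
```

A lower FID indicates that the generated-image feature distribution is closer to that of the real images, which is why a threefold reduction is a meaningful gain.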

2.
Sci Rep ; 11(1): 19123, 2021 09 27.
Article in English | MEDLINE | ID: mdl-34580400

ABSTRACT

Obtaining an accurate segmentation of images acquired by computed microtomography (micro-CT) is a non-trivial process due to the wide range of noise types and artifacts present in these images. Current methodologies are often time-consuming, sensitive to noise and artifacts, and require skilled operators to give accurate results. Motivated by the rapid advancement of deep-learning-based segmentation techniques in recent years, we have developed a tool that fully automates the segmentation process in one step, without the need for extra image-processing steps such as noise filtering or artifact removal. To obtain a general model, we train our network on a dataset of high-quality three-dimensional micro-CT images from different scanners, rock types, and resolutions. In addition, we use a domain-specific augmented training pipeline with various types of noise, synthetic artifacts, and image transformations/distortions. For validation, we use a synthetic dataset to measure accuracy and analyze sensitivity to noise and artifacts. The results show robust and accurate segmentation performance for the most common types of noise present in real micro-CT images. We also compared our method's segmentations with those of five expert users working with commercial and open-source software packages on real rock images. We found that most current tools fail to reduce the impact of local and global noise and artifacts. We quantified the variation in human-assisted segmentation results in terms of physical properties and observed large discrepancies. In comparison, the new method is more robust to local noise and artifacts, outperforming human segmentation and giving consistent results. Finally, we compared the porosity of images segmented by our model with experimental porosity measured in the laboratory for ten untrained samples, finding very encouraging results.
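The abstract describes a domain-specific augmentation pipeline that injects noise and synthetic artifacts during training. The paper's actual pipeline is not reproduced here; the following is a hypothetical sketch of that idea, combining additive Gaussian noise with a ring-artifact-like radial perturbation (concentric intensity bands are a common micro-CT artifact). The function name and parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

def augment(img, rng, noise_sigma=0.05, ring_amp=0.03):
    """Hypothetical micro-CT training augmentation (illustration only):
    adds Gaussian noise plus concentric ring-artifact-like bands."""
    noisy = img + rng.normal(0.0, noise_sigma, img.shape)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)   # radius from image centre
    rings = ring_amp * np.sin(2.0 * r)          # periodic radial bands
    return np.clip(noisy + rings, 0.0, 1.0)     # keep valid intensity range

rng = np.random.default_rng(1)
img = rng.random((64, 64))                      # stand-in for a CT slice
out = augment(img, rng)
print(out.shape == img.shape)                   # geometry is preserved
```

Training on corruptions like these is what lets a single model segment raw scans directly, without a separate denoising or artifact-removal pass.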
