Results 1 - 14 of 14
1.
Nat Commun ; 15(1): 3939, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744870

ABSTRACT

Visualizing the internal structure of museum objects is a crucial step in acquiring knowledge about the origin, state, and composition of cultural heritage artifacts. Among the most powerful techniques for exposing the interior of museum objects is computed tomography (CT), a technique that computationally forms a 3D image using hundreds of radiographs acquired over a full circular range. However, the lack of affordable and versatile CT equipment in museums, combined with the challenge of transporting precious collection objects, currently keeps this technique out of reach for most cultural heritage applications. We propose an approach for creating accurate CT reconstructions using only standard 2D radiography equipment already available in most larger museums. Specifically, we demonstrate that a combination of basic X-ray imaging equipment, a tailored marker-based image acquisition protocol, and sophisticated data-processing algorithms can achieve 3D imaging of collection objects without the need for a costly CT imaging system. We implemented this approach at the British Museum (London), the J. Paul Getty Museum (Los Angeles), and the Rijksmuseum (Amsterdam). Our work paves the way for broad adoption of CT technology across museums worldwide.
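The abstract does not spell out the reconstruction algorithm; as a hedged, simplified sketch (parallel-beam geometry and unfiltered back-projection, chosen here for brevity and not the authors' marker-based cone-beam method), combining a set of radiograph rows into a 2D slice might look like:

```python
import numpy as np

def backproject(sinogram, angles):
    """Minimal unfiltered parallel-beam back-projection.
    sinogram: (n_angles, n_det) line integrals; angles in radians."""
    n = sinogram.shape[1]
    recon = np.zeros((n, n))
    # pixel coordinates, centred on the image
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    for proj, theta in zip(sinogram, angles):
        # detector coordinate hit by each pixel for this view
        t = xs * np.cos(theta) + ys * np.sin(theta) + (n - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon / len(angles)
```

A real reconstruction would add filtering (e.g. ramp-filtered back-projection) and the cone-beam, marker-estimated geometry the paper describes.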

2.
J Xray Sci Technol ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38701129

ABSTRACT

BACKGROUND: X-ray imaging is widely used for the non-destructive detection of defects in industrial products on a conveyor belt. In-line detection requires highly accurate, robust, and fast algorithms. Deep Convolutional Neural Networks (DCNNs) satisfy these requirements when a large amount of labeled data is available. To overcome the challenge of collecting these data, different methods of X-ray image generation are considered. OBJECTIVE: Depending on the desired degree of similarity to real data, different physical effects should either be simulated or can be ignored. X-ray scattering is known to be computationally expensive to simulate, and this effect can greatly affect the accuracy of a generated X-ray image. We aim to quantitatively evaluate the effect of scattering on defect detection. METHODS: Monte Carlo simulation is used to generate the X-ray scattering distribution. DCNNs are trained on data with and without scattering and applied to the same test datasets. Probability of Detection (POD) curves are computed to compare their performance, characterized by the size of the smallest detectable defect. RESULTS: We apply the methodology to a model problem of defect detection in cylinders. When trained on data without scattering, DCNNs reliably detect defects larger than 1.3 mm, and using data with scattering improves performance by less than 5%. If the analysis is restricted to cases with a large scattering-to-primary ratio (1 < SPR < 5), the difference in performance can reach 15% (approx. 0.4 mm). CONCLUSION: Excluding the scattering signal from the training data has the largest effect on the smallest detectable defects, and the difference decreases for larger defects. The scattering-to-primary ratio has a significant effect on detection performance and on the required accuracy of data generation.
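POD curves in the NDT literature are usually fitted with a statistical model; the following is a minimal binned estimate on hypothetical arrays of defect sizes and per-defect detection outcomes, not the authors' exact procedure:

```python
import numpy as np

def pod_curve(defect_sizes, detected, bins=5):
    """Empirical probability of detection: fraction of defects
    detected per size bin. defect_sizes in mm, detected as 0/1."""
    edges = np.linspace(defect_sizes.min(), defect_sizes.max(), bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pod = np.full(bins, np.nan)
    for i in range(bins):
        mask = (defect_sizes >= edges[i]) & (defect_sizes < edges[i + 1])
        if i == bins - 1:
            # include the right edge in the last bin
            mask |= defect_sizes == edges[-1]
        if mask.any():
            pod[i] = detected[mask].mean()
    return centers, pod

def smallest_reliable_size(centers, pod, level=0.9):
    """Smallest bin centre at which POD reaches the target level."""
    above = np.where(pod >= level)[0]
    return centers[above[0]] if above.size else None
```

The "smallest detectable defect" figure of merit in the abstract corresponds to the size at which such a curve crosses a chosen reliability level.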

3.
Sci Rep ; 13(1): 1881, 2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36732337

ABSTRACT

Although X-ray imaging is used routinely in industry for high-throughput product quality control, its capability to detect internal defects has strong limitations. The main challenge stems from the superposition of multiple object features within a single X-ray view. Deep convolutional neural networks can be trained on annotated datasets of X-ray images to detect foreign objects in real time. However, this approach depends heavily on the availability of a large amount of data, which strongly hampers its industrial viability when there is high variability between batches of products. We present a computationally efficient, CT-based approach for creating artificial single-view X-ray data based on just a few physically CT-scanned objects. By algorithmically modifying the CT volume, a large variety of training examples is obtained. Our results show that applying the generative model to a single CT-scanned object yields image analysis accuracy that would otherwise require scans of tens of real-world samples. Our methodology leads to a strong reduction in the training data needed, improved coverage of combinations of base and foreign objects, and extensive generalizability to additional features. Once trained on just a single CT-scanned object, the resulting deep neural network can detect foreign objects in real time with high accuracy.
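The kind of forward model such data generation rests on can be sketched minimally with a Beer-Lambert projection, ignoring scatter and beam hardening (the function names and the spherical-inclusion modification are illustrative, not the authors' pipeline):

```python
import numpy as np

def simulate_radiograph(volume, voxel_size=0.1, i0=1.0):
    """Beer-Lambert forward projection of an attenuation volume
    (mu in 1/mm) along axis 0, ignoring scatter and beam hardening."""
    path_integral = volume.sum(axis=0) * voxel_size
    return i0 * np.exp(-path_integral)

def insert_foreign_object(volume, center, radius, mu):
    """Algorithmically modify the CT volume: insert a spherical
    inclusion of attenuation mu to create a new training example."""
    grid = np.indices(volume.shape)
    dist2 = sum((g - c) ** 2 for g, c in zip(grid, center))
    modified = volume.copy()
    modified[dist2 <= radius ** 2] = mu
    return modified
```

Repeating the modification step with varied positions, sizes, and attenuations is what turns one scanned volume into many labeled single-view training images.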

4.
J Imaging ; 6(12)2020 Dec 02.
Article in English | MEDLINE | ID: mdl-34460529

ABSTRACT

An important challenge in hyperspectral imaging tasks is to cope with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the specific features relevant to the particular imaging task, but applying them directly to the spectral input data is constrained by computational cost. We propose a novel supervised deep learning approach that combines data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that the image features most relevant for the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods, and can be used in a wide range of problem settings. Integrating knowledge about the task allows for stronger compression and higher accuracy compared to standard data reduction methods.
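The learned reduction component can be thought of as a linear map over the spectral axis (a 1x1 convolution); a minimal sketch, assuming the weights come from end-to-end training elsewhere rather than the random values used here:

```python
import numpy as np

def spectral_reduction(cube, weights):
    """Learned data-reduction step: a 1x1 convolution over the
    spectral axis maps B bins to K << B task-specific channels.
    cube: (H, W, B) hyperspectral image; weights: (B, K)."""
    h, w, b = cube.shape
    reduced = cube.reshape(-1, b) @ weights   # (H*W, K)
    return reduced.reshape(h, w, weights.shape[1])
```

In the DRCNN setting, `weights` is trained jointly with the downstream CNN, so the K retained channels preserve exactly the sparse spectral features the task needs.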

5.
J Imaging ; 6(12)2020 Dec 11.
Article in English | MEDLINE | ID: mdl-34460535

ABSTRACT

X-ray plenoptic cameras acquire multi-view X-ray transmission images in a single exposure (light-field). Their development is challenging: designs have appeared only recently, and they are still affected by important limitations. Concurrently, the lack of available real X-ray light-field data hinders dedicated algorithmic development. Here, we present a physical emulation setup for rapidly exploring the parameter space of both existing and conceptual camera designs. This will assist and accelerate the design of X-ray plenoptic imaging solutions, and provide a tool for generating unlimited real X-ray plenoptic data. We also demonstrate that X-ray light-fields allow for reconstructing sharp spatial structures in three-dimensions (3D) from single-shot data.

6.
Opt Express ; 27(6): 7834-7856, 2019 Mar 18.
Article in English | MEDLINE | ID: mdl-31052612

ABSTRACT

Recently we have shown that light-field photography images can be interpreted as limited-angle cone-beam tomography acquisitions. Here, we use this property to develop a direct-space tomographic refocusing formulation that allows one to refocus both unfocused and focused light-field images. We express the reconstruction as a convex optimization problem, thus enabling the use of various regularization terms to help suppress artifacts, and a wide class of existing advanced tomographic algorithms. This formulation also supports super-resolved reconstructions and the correction of the optical system's limited frequency response (point spread function). We validate this method with numerical and real-world examples.
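For contrast with the tomographic formulation, the classical shift-and-add refocusing baseline can be sketched as follows (integer shifts only; the names and offset convention are assumptions, not the paper's notation):

```python
import numpy as np

def shift_and_sum_refocus(views, offsets, alpha):
    """Classical shift-and-add refocusing: each sub-aperture view is
    shifted in proportion to its angular offset and averaged.
    views: (N, H, W); offsets: (N, 2) sub-aperture positions;
    alpha: refocus parameter (0 = native focal plane)."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, (du, dv) in zip(views, offsets):
        shift = (int(round(alpha * du)), int(round(alpha * dv)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(views)
```

The tomographic formulation in the paper replaces this simple averaging with a regularized inverse problem, which is what enables artifact suppression, super-resolution, and PSF correction.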

7.
Opt Express ; 26(18): 22574-22602, 2018 Sep 03.
Article in English | MEDLINE | ID: mdl-30184917

ABSTRACT

Current computational methods for light field photography model the ray-tracing geometry inside the plenoptic camera. This representation of the problem, and some common approximations, can lead to errors in the estimation of object sizes and positions. We propose a representation that leads to the correct reconstruction of object sizes and distances to the camera, by showing that light field images can be interpreted as limited-angle cone-beam tomography acquisitions. We then quantitatively analyze its impact on image refocusing, depth estimation, and volumetric reconstruction, comparing it against other possible representations. Finally, we validate these results with numerical and real-world examples.

8.
IEEE Trans Vis Comput Graph ; 18(7): 1017-26, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22291149

ABSTRACT

Models of interaction tasks are quantitative descriptions of relationships between human temporal performance and the spatial characteristics of the interactive tasks. Examples include Fitts' law for the pointing task and Accot and Zhai's steering law for the path steering task. Interaction models can be used as guidelines to design efficient user interfaces and to quantitatively evaluate interaction techniques and input devices. In this paper, we introduce and experimentally verify an interaction model for a 3D object-pursuit interaction task. Object pursuit requires that a user continuously tracks an object that moves with constant velocity in a desktop virtual environment. For modeling purposes, we divide the total object-pursuit movement into a tracking phase and a correction phase. Following a two-step modeling methodology first proposed in this paper, the time for the correction phase is modeled as a function of path length, path curvature, target width, and target velocity. The object-pursuit model can be used to quantitatively evaluate the efficiency of user interfaces that involve 3D interaction with moving objects.
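The two earlier laws mentioned above are simple closed-form models; a minimal sketch, with hypothetical empirical constants (a, b must be fitted per device and task):

```python
import math

def fitts_time(a, b, distance, width):
    """Fitts' law: movement time grows with the index of
    difficulty ID = log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def steering_time(a, b, path_length, path_width):
    """Accot-Zhai steering law for a straight tunnel of
    constant width: T = a + b * (L / W)."""
    return a + b * (path_length / path_width)
```

The object-pursuit model described in the abstract extends this family by adding path curvature and target velocity as predictors for the correction phase.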


Subject(s)
Computers , Models, Theoretical , Psychomotor Performance/physiology , User-Computer Interface , Adult , Analysis of Variance , Computer Graphics , Feedback, Sensory , Female , Humans , Male , Regression Analysis , Statistics, Nonparametric , Video Games
9.
Proc Biol Sci ; 277(1700): 3555-61, 2010 Dec 07.
Article in English | MEDLINE | ID: mdl-20573621

ABSTRACT

In addition to experimental studies, computational models provide valuable information about colony development in scleractinian corals. Using our simulation model, we show how environmental factors such as nutrient distribution and light availability affect the growth patterns of coral colonies. To compare simulated growth forms with real ones, we quantitatively analyzed colonies of the morphologically variable Caribbean coral genus Madracis. Madracis species encompass a relatively large variation in colony morphology and hence represent a suitable genus for comparing, for the first time, simulated and real coral growth forms in three dimensions using a quantitative approach. This quantitative analysis of three-dimensional growth forms is based on a number of morphometric parameters (such as branch thickness and branch spacing). Our results show that simulated coral morphologies share several morphological features with real coral colonies (M. mirabilis, M. decactis, M. formosa and M. carmabi). A significant correlation was found between branch thickness and branch spacing for both real and simulated growth forms. Our present model is able to partly capture the morphological variation in closely related and morphologically variable coral species of the genus Madracis.
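The reported thickness-spacing relationship is a plain Pearson correlation; a minimal sketch on hypothetical morphometric measurements (the real study also tests significance, which is omitted here):

```python
import numpy as np

def branch_correlation(thickness, spacing):
    """Pearson correlation between per-branch thickness and
    spacing measurements from a coral colony (real or simulated)."""
    return float(np.corrcoef(thickness, spacing)[0, 1])
```

Computing this separately for scanned and simulated colonies and comparing the two coefficients is one way to quantify how well the model captures the morphospace.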


Subject(s)
Anthozoa/growth & development , Computer Simulation , Models, Biological , Animals , Anthozoa/anatomy & histology , Anthozoa/classification , Caribbean Region , Morphogenesis , Software , Species Specificity , Tomography, X-Ray Computed
10.
IEEE Trans Vis Comput Graph ; 16(1): 28-42, 2010.
Article in English | MEDLINE | ID: mdl-19910659

ABSTRACT

Display systems typically operate at a minimum rate of 60 Hz. However, existing VR architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame, which causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames through per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to those of a classic level-of-detail approach.
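Per-pixel warping along a motion field can be sketched as a simple forward splat (no depth test or hole filling, unlike the full architecture described above; all names are illustrative):

```python
import numpy as np

def warp_frame(frame, motion_field):
    """Forward-warp an application frame along a per-pixel 2D motion
    field to synthesize an intermediate display frame.
    motion_field[..., 0] is the row displacement, [..., 1] the column."""
    h, w = frame.shape[:2]
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.clip(ys + motion_field[..., 0].round().astype(int), 0, h - 1)
    tx = np.clip(xs + motion_field[..., 1].round().astype(int), 0, w - 1)
    out[ty, tx] = frame[ys, xs]  # splat each source pixel to its target
    return out
```

A production implementation resolves collisions with a depth test and fills disocclusion holes; this sketch only shows the core reprojection step.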


Subject(s)
Computer Graphics/instrumentation , Imaging, Three-Dimensional/instrumentation , Imaging, Three-Dimensional/methods , Models, Theoretical , User-Computer Interface , Equipment Design , Equipment Failure Analysis
12.
J Am Soc Mass Spectrom ; 19(6): 823-32, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18403214

ABSTRACT

High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed at high resolution. Here we present an automated alignment routine that uses principal component analysis to reduce the uncorrelated noise in the imaging datasets, which previously obstructed automated image alignment. An additional signal quality metric ensures that only regions with sufficient signal quality are considered. We demonstrate that this algorithm provides better alignment performance than manual stitching and can be used to automatically align large imaging mass spectrometry datasets comprising many individual mosaic tiles.
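The PCA-based noise reduction step can be sketched as a truncated SVD over the stack of mosaic tiles (a generic rank reduction, not the authors' exact routine):

```python
import numpy as np

def pca_denoise(tiles, n_components):
    """Project a stack of image tiles onto their leading principal
    components to suppress uncorrelated noise before alignment.
    tiles: (N, H, W) -> denoised (N, H, W)."""
    n, h, w = tiles.shape
    flat = tiles.reshape(n, -1).astype(float)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # SVD of the centered data; keep only the top components
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    return (approx + mean).reshape(n, h, w)
```

Cross-correlating the denoised tiles, rather than the raw noisy ones, is what makes automated stitching of overlapping mosaic regions reliable.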


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Mass Spectrometry/methods , Pattern Recognition, Automated/methods , Snails/anatomy & histology , Snails/metabolism , Subtraction Technique , Animals
13.
IEEE Trans Vis Comput Graph ; 12(5): 1037-43, 2006.
Article in English | MEDLINE | ID: mdl-17080832

ABSTRACT

Current practice in particle visualization renders particle position data directly onto the screen as points or glyphs. Using a camera placed at a fixed position, particle motions can be visualized by rendering trajectories or by animations. Applying such direct techniques to large, time-dependent particle data sets often results in cluttered images in which the dynamic properties of the underlying system are difficult to interpret. In this case study we take an alternative approach to the visualization of ion motions. Instead of rendering ion position data directly, we first extract meaningful motion information from the ion position data and then map this information onto geometric primitives. Our goal is to produce high-level visualizations that reflect the physicists' way of thinking about ion dynamics. Parameterized geometric icons are defined to encode motion information of clusters of related ions. In addition, a parameterized camera control mechanism is used to analyze relative instead of only absolute ion motions. We apply the techniques to simulations of Fourier transform mass spectrometry (FTMS) experiments. The data produced by such simulations can amount to 5×10^4 ions and 10^5 timesteps. This paper discusses the requirements, design and informal evaluation of the implemented system.

14.
IEEE Trans Vis Comput Graph ; 12(5): 1251-8, 2006.
Article in English | MEDLINE | ID: mdl-17080859

ABSTRACT

In this paper we propose an approach in which interactive visualization and analysis are combined with batch tools for processing large data collections. Such collections are difficult to analyze and pose specific problems to interactive visualization. Traditional interactive processing and visualization approaches, as well as pure batch processing, have considerable drawbacks for large and heterogeneous collections due to the amount and type of data: computing resources are insufficient for interactive exploration, and automated analysis gives the user only limited control and feedback on the analysis process. In our approach, an analysis procedure with the features and attributes of interest is defined interactively. This procedure is then used for off-line processing of large collections of data sets. The results of the batch process, along with "visual summaries", are used for further analysis. Visualization is not only used for the presentation of results, but also as a tool to monitor the validity and quality of the operations performed during the batch process. Operations such as feature extraction and attribute calculation on the collected data sets are validated by visual inspection. We illustrate this approach with an extensive case study in which a collection of confocal microscopy data sets is analyzed.


Subject(s)
Computer Graphics , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Microscopy, Confocal/methods , Software , User-Computer Interface , Algorithms , Database Management Systems , Databases, Factual