Results 1 - 8 of 8
1.
ACS Sens ; 6(1): 35-42, 2021 Jan 22.
Article in English | MEDLINE | ID: mdl-33372759

ABSTRACT

In this work, we introduce polarimetric balanced detection as a new attenuated total reflection (ATR) infrared (IR) sensing scheme, leveraging the unequal effective thicknesses achieved with laser light of different polarizations. We combined a monolithic, widely tunable Vernier quantum cascade laser (QCL-XT) with a multibounce ATR IR spectroscopy setup for the analysis of liquids in a process analytical setting. Polarimetric balanced detection enables simultaneous recording of background and sample spectra, significantly reducing long-term drifts. The root-mean-square noise could be improved by a factor of 10 in a long-term experiment, compared to conventional absorbance measurements obtained via the single-ended optical channel. The sensing performance of the device was further evaluated by on-site measurements of ethanol in water, where polarimetric balanced detection yielded an improved limit of detection (LOD). Sequential injection analysis was employed for automated injection of samples into a custom-built ATR flow cell mounted above a zinc sulfide multibounce ATR element. The QCL-XT proved suitable for mid-IR-based sensing in liquids owing to its wide tunability. Polarimetric balanced detection enhanced the robustness and long-term stability of the sensing device and improved the LOD by a factor of 5. This demonstrates the potential of new polarimetric QCL-based ATR mid-IR sensing schemes for in-field measurements or process monitoring, which are usually prone to a multitude of interferences.
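A minimal sketch of the balanced-detection idea described above, assuming the two polarization channels are digitized as intensity spectra (function and variable names are illustrative, not from the paper): because the effective thickness of the evanescent field differs between p- and s-polarization, subtracting the two single-channel absorbances suppresses drift that is common to both channels.

```python
import numpy as np

def balanced_absorbance(i_p, i_s, i_p0, i_s0):
    """Drift-suppressed ATR absorbance from two simultaneously recorded channels.

    i_p, i_s   -- intensity spectra with sample present (p- and s-polarized)
    i_p0, i_s0 -- reference intensity spectra (e.g., pure solvent)
    """
    a_p = -np.log10(i_p / i_p0)   # single-channel absorbance, p-polarization
    a_s = -np.log10(i_s / i_s0)   # single-channel absorbance, s-polarization
    # The analyte signal scales with the difference in effective thickness,
    # while common-mode fluctuations (source power, alignment) cancel.
    return a_p - a_s
```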


Subjects
Lasers, Semiconductor; Water; Spectrophotometry, Infrared
2.
Sensors (Basel) ; 19(1), 2018 Dec 21.
Article in English | MEDLINE | ID: mdl-30583457

ABSTRACT

In this paper, we present WaterSpy, a project developing an innovative, compact, cost-effective photonic device for pervasive water quality sensing, operating in the mid-IR spectral range. The approach combines advanced Quantum Cascade Lasers (QCLs) employing the Vernier effect, used as the light source, with novel, fibre-coupled, fast and sensitive Higher Operation Temperature (HOT) photodetectors, used as sensors. These are complemented by optimised laser driving and detector electronics, laser modulation, and signal conditioning technologies. The paper presents the WaterSpy concept, the requirements elicited, the preliminary architecture design of the device, and the use cases in which it will be validated, while highlighting the innovative technologies that advance the current state of the art.

3.
Invest Radiol ; 53(11): 655-662, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29847412

ABSTRACT

OBJECTIVE: The aims of this study were to quantitatively assess two new scan modes on a photon-counting detector computed tomography system, each designed to maximize spatial resolution, and to qualitatively demonstrate their potential clinical impact using patient data.
MATERIALS AND METHODS: This Health Insurance Portability and Accountability Act-compliant study was approved by our institutional review board. Two high-spatial-resolution scan modes (Sharp and UHR) were evaluated using phantoms to quantify spatial resolution and image noise, and the results were compared with the standard mode (Macro). Patients were scanned using a conventional energy-integrating detector scanner and the photon-counting detector scanner at the same radiation dose. In the first patient images, anatomic details were qualitatively evaluated to demonstrate potential clinical impact.
RESULTS: The Sharp and UHR modes showed a 69% and 87% improvement in in-plane spatial resolution, respectively, compared with Macro mode (10% modulation transfer function values of 16.05, 17.69, and 9.48 lp/cm, respectively). The cutoff spatial frequency of the UHR mode (32.4 lp/cm) corresponded to a limiting spatial resolution of 150 µm. The full-width-at-half-maximum values of the section sensitivity profiles were 0.41, 0.44, and 0.67 mm for the thinnest image thickness of each mode (0.25, 0.25, and 0.5 mm, respectively). At the same in-plane spatial resolution, Sharp and UHR images had up to 15% lower noise than Macro images. Patient images acquired in Sharp mode demonstrated better delineation of fine anatomic structures compared with Macro mode images.
CONCLUSIONS: Phantom studies demonstrated superior resolution and noise properties for the Sharp and UHR modes relative to the standard Macro mode, and patient images demonstrated the potential benefit of these scan modes for clinical practice.
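The quoted limiting resolution follows directly from the cutoff spatial frequency, since one line pair spans two resolution elements; a quick check of the arithmetic (illustrative code, not from the study):

```python
def limiting_resolution_um(cutoff_lp_per_cm: float) -> float:
    # One line pair = one bright + one dark bar, so the smallest resolvable
    # element is half the line-pair width.
    resolution_cm = 1.0 / (2.0 * cutoff_lp_per_cm)
    return resolution_cm * 1e4  # centimeters -> micrometers

print(round(limiting_resolution_um(32.4)))  # ~154 µm, quoted as 150 µm above
```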


Subjects
Kidney Calculi/diagnostic imaging; Lung/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Shoulder/diagnostic imaging; Skull/diagnostic imaging; Tomography, X-Ray Computed/methods; Adult; Humans; Phantoms, Imaging; Photons; Prospective Studies; Reproducibility of Results
4.
Invest Radiol ; 53(8): 486-494, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29794949

ABSTRACT

OBJECTIVES: The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon-counting detector (PCD) computed tomography (CT) of coronary stents and to evaluate to what extent iterative reconstruction can compensate for the associated increase in image noise.
MATERIALS AND METHODS: For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. The tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU to simulate epicardial fat. The phantom was scanned in a modified second-generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy-integrating detector and a PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and a tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE at level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons.
RESULTS: Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons).
CONCLUSIONS: A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. The resulting higher noise levels of sharp-kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.
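The reported noise reduction can be reproduced from the quoted mean noise values; a small illustrative calculation (values copied from the abstract):

```python
# Mean image noise (CT numbers) from the abstract: sharp kernel with filtered
# back-projection (D70) and with SAFIRE levels 3 and 5 (Q703, Q705).
noise = {"D70": 111.0, "Q703": 65.0, "Q705": 46.0}

for recon in ("Q703", "Q705"):
    reduction = 100.0 * (noise["D70"] - noise[recon]) / noise["D70"]
    print(f"{recon}: {reduction:.0f}% lower noise than D70")  # ~41% and ~59%
```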


Subjects
Coronary Angiography/methods; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Stents; Tomography, X-Ray Computed/methods; Algorithms; In Vitro Techniques; Photons; Reproducibility of Results
5.
Invest Radiol ; 53(3): 143-149, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28945655

ABSTRACT

PURPOSE: The aim of this study was to investigate the computed tomography (CT) imaging characteristics of coronary stents using a novel photon-counting detector (PCD) in comparison with a conventional energy-integrating detector (EID).
MATERIALS AND METHODS: In this in vitro study, 18 different coronary stents were expanded in plastic tubes of 3 mm diameter, filled with contrast agent (diluted to an attenuation of 250 Hounsfield units [HU] at 120 kVp), and sealed. The stents were placed in an oil-filled custom phantom calibrated to an attenuation of -100 HU at 120 kVp to resemble pericardial fat. The phantom was positioned in the gantry at 2 different angles, 0 degree and 90 degrees relative to the z axis, and was imaged in a research dual-source PCD-CT scanner. Detector subsystem "A" used a standard 64-row EID, while detector subsystem "B" used a PCD allowing high-resolution scanning (detector pixel size 0.250 × 0.250 mm at the isocenter). Images were obtained from both detector systems at identical tube voltage (100 kVp) and tube current-time product (100 mAs) and were reconstructed using a typical convolution kernel for stent imaging (B46f) and the same reconstruction parameters. Two independent, blinded readers evaluated in-stent visibility and measured noise, intraluminal stent diameter, and in-stent attenuation for each detector subsystem. Differences in noise, intraluminal stent diameter, and in-stent attenuation were tested using a paired t test; differences in subjective in-stent visibility were evaluated using a Wilcoxon signed-rank test.
RESULTS: The best results for in-stent visibility, noise, intraluminal stent diameter, and in-stent attenuation for both EID and PCD were observed at the 0-degree phantom position along the z axis, indicating higher in-plane than through-plane resolution. Subjective in-stent visibility was superior in coronary stent images obtained from the PCD compared with the EID (P < 0.001). Mean in-stent diameter was 28.8% and 8.4% greater for PCD (0.85 ± 0.24 mm; 0.83 ± 0.14 mm) than for EID acquisitions (0.66 ± 0.21 mm; 0.76 ± 0.13 mm) at the 0-degree and 90-degree phantom positions, respectively. Average noise was significantly lower (P < 0.001) for PCD (5 ± 0.2 HU) compared with EID (8.3 ± 0.2 HU). The increase in in-stent attenuation (0 degree: Δ 245 ± 163 HU vs Δ 156.5 ± 126 HU; P = 0.006; 90 degrees: Δ 194 ± 141 HU vs Δ 126 ± 78 HU; P = 0.001) was significantly lower for PCD compared with EID acquisitions.
CONCLUSIONS: At matched CT scan protocol settings and identical image reconstruction parameters, the PCD yields superior in-stent lumen delineation of coronary artery stents compared with conventional EID arrays.
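A hedged sketch of how the two paired comparisons named above could be run, using hypothetical per-stent data and scipy's standard tests (not the authors' code or data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_stents = 18

# Hypothetical per-stent noise measurements (HU) for the two detectors.
noise_eid = rng.normal(8.3, 0.2, n_stents)
noise_pcd = rng.normal(5.0, 0.2, n_stents)
print(stats.ttest_rel(noise_eid, noise_pcd))   # paired t test on continuous data

# Hypothetical ordinal in-stent visibility scores (higher = better).
vis_eid = rng.integers(1, 4, n_stents)
vis_pcd = np.minimum(vis_eid + rng.integers(0, 2, n_stents), 4)
print(stats.wilcoxon(vis_eid, vis_pcd))        # signed-rank test on ordinal scores
```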


Subjects
Computed Tomography Angiography/instrumentation; Computed Tomography Angiography/methods; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Stents; Algorithms; Contrast Media; In Vitro Techniques; Photons; Radiographic Image Enhancement/methods
6.
Phys Med Biol ; 57(21): 6849-6867, 2012 Nov 07.
Article in English | MEDLINE | ID: mdl-23038048

ABSTRACT

The purpose of this study was to develop and evaluate a hybrid scatter correction (HSC) algorithm for CT imaging. To this end, two established approaches to scatter correction, physical scatter correction based on Monte Carlo simulations and a convolution-based scatter correction algorithm, were combined to obtain an object-dependent, fast, and accurate correction. Based on a reconstructed CT volume, the patient-specific scatter intensity is estimated by a coarse Monte Carlo simulation that uses a reduced number of simulated photons to shorten the simulation time. To further speed up the Monte Carlo scatter estimation, scatter intensities are simulated only for a fraction of all projections. In a second step, this high-noise estimate of the scatter intensity is used to calibrate the open parameters of a convolution-based algorithm, which is then used to correct the measured intensities for scatter. The scatter-corrected intensities are then used to reconstruct a scatter-corrected CT volume data set. To evaluate the scatter reduction potential of the HSC, we conducted simulations in a clinical CT geometry and measurements with a flat detector CT system. In the simulation study, HSC-corrected images were compared to scatter-free reference images. For the measurements, no scatter-free reference image was available; therefore, an image corrected with a low-noise Monte Carlo simulation was used as the reference. The results show that the HSC can significantly reduce scatter artifacts: compared to the reference images, the error due to scatter artifacts decreased from 100% for uncorrected images to below 20% for HSC-corrected images, both for the clinical (simulated data) and the flat detector CT geometry (measurement). Compared to a low-noise Monte Carlo simulation, the HSC allows the number of photon histories per projection to be reduced by about a factor of 100 without losing correction accuracy. Furthermore, it was sufficient to calibrate the parameters of the convolution model at an angular increment of about 20°. The reduced number of simulated photon histories, together with the reduced number of simulated Monte Carlo scatter projections, decreased the total runtime of the scatter correction by about two orders of magnitude for the cases investigated here, compared with using a low-noise Monte Carlo simulation for scatter correction.
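A schematic sketch of the hybrid idea described above (not the authors' implementation): sparse, noisy Monte Carlo estimates calibrate the open parameters of a cheap convolution-based scatter model, which is then applied to all projections. The Gaussian-blur model and all names are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_model(projection, amplitude, sigma):
    # Convolution-based model: scatter ~ amplitude * (projection blurred with a
    # Gaussian of width sigma). Real models are more elaborate; this is a stand-in.
    return amplitude * gaussian_filter(projection, sigma)

def calibrate(projection, mc_scatter, sigmas=(4, 8, 16, 32)):
    # Fit the open parameters against one coarse (high-noise) Monte Carlo estimate.
    # For a fixed sigma, the least-squares amplitude has a closed form.
    best = None
    for sigma in sigmas:
        blurred = gaussian_filter(projection, sigma)
        amplitude = float(np.sum(blurred * mc_scatter) / np.sum(blurred ** 2))
        err = float(np.sum((amplitude * blurred - mc_scatter) ** 2))
        if best is None or err < best[0]:
            best = (err, amplitude, sigma)
    return best[1], best[2]

def hybrid_scatter_correction(projections, mc_estimates):
    # mc_estimates: {projection index: coarse MC scatter}, simulated only for a
    # fraction of all projections (the abstract finds ~20 degree spacing sufficient).
    fits = [calibrate(projections[i], s) for i, s in mc_estimates.items()]
    amplitude = float(np.mean([a for a, _ in fits]))
    sigma = float(np.mean([s for _, s in fits]))
    # Apply the calibrated model to every projection and subtract the scatter.
    return np.array([p - scatter_model(p, amplitude, sigma) for p in projections])
```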


Subjects
Artifacts; Monte Carlo Method; Scattering, Radiation; Tomography, X-Ray Computed/methods; Algorithms; Calibration; Humans; Photons; Time Factors
7.
Med Phys ; 38 Suppl 1: S95, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21978122

ABSTRACT

PURPOSE: Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to that of analytic reconstruction. The authors thoroughly analyze a so far unconsidered but valuable feature of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently.
METHODS: Instead of the standard 32 bit floating-point format (float), a recently standardized 16 bit floating-point format (half) is adopted for data representation in the image domain and in the rawdata domain. The reduction in the total data volume reduces the traffic on the memory bus, the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (the gold standard) and half reconstructions are compared visually via difference images and by quantitative image quality evaluation. This is done for analytic reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART).
RESULTS: The magnitude of the quantization noise caused by reducing the data precision of both rawdata and image data during image reconstruction is negligible. This is clearly shown for filtered backprojection and for iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in the half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged.
CONCLUSIONS: Half-precision floating-point values allow CT image reconstruction to be sped up without compromising image quality.
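A small numerical check of the core claim, illustrative only: storing the data as 16 bit floats halves the memory footprint while the representation error stays near the half-precision machine epsilon.

```python
import numpy as np

rng = np.random.default_rng(42)
rawdata32 = rng.uniform(1.0, 10.0, size=(720, 512)).astype(np.float32)  # mock sinogram
rawdata16 = rawdata32.astype(np.float16)                                # half-precision storage

rel_err = np.abs(rawdata16.astype(np.float32) - rawdata32) / rawdata32
print(f"max relative quantization error: {rel_err.max():.1e}")  # ~5e-4 (about 2**-11)
print(f"memory footprint: {rawdata16.nbytes / rawdata32.nbytes:.0%} of float32")  # 50%
```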


Subjects
Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
8.
Med Phys ; 36(8): 3818-3829, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19746815

ABSTRACT

Dual energy CT (DECT) measures the object of interest using two different x-ray spectra in order to provide energy-selective CT images or to obtain a material decomposition of the object. Today, two decomposition techniques are known. Image-based DECT uses linear combinations of reconstructed images to obtain an image that contains material-selective DECT information. Rawdata-based DECT treats the available information correctly by passing the rawdata through a decomposition function that uses information from both rawdata sets to create DECT-specific (e.g., material-selective) rawdata; image reconstruction then yields material-selective images. Rawdata-based decomposition generally achieves better image quality, but it requires matched rawdata sets, meaning that physically the same lines need to be measured for each spectrum. In today's CT scanners, this is not the case. The authors propose a new image-based method to combine mismatched rawdata sets for DECT information. The method can be implemented in a scanner's rawdata precorrection pipeline or used in the image domain. They compare the ability of the three methods (the image-based standard method, the proposed method, and the rawdata-based standard method) to perform material decomposition and to provide monochromatic images, using typical clinical and preclinical scanner arrangements including circular cone-beam CT and spiral CT. The proposed method is found to perform better than the image-based standard method but remains inferior to the rawdata-based method. However, the proposed method can be used in the frequent case of mismatched data sets, which excludes rawdata-based methods.
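A minimal sketch of the image-based route mentioned above, i.e. a per-pixel linear combination of the two reconstructed images; the basis materials and attenuation values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Effective linear attenuation coefficients (1/cm) of the two basis materials at
# the two spectra (illustrative numbers):       water   dilute iodine
M = np.array([[0.227, 4.0],    # low-kV spectrum
              [0.184, 2.0]])   # high-kV spectrum

def image_based_decomposition(img_low, img_high):
    """Decompose two reconstructed attenuation images into basis-material images."""
    mu = np.stack([img_low.ravel(), img_high.ravel()])  # shape (2, n_pixels)
    coeffs = np.linalg.solve(M, mu)                      # solve the 2x2 system per pixel
    water_img, iodine_img = coeffs.reshape(2, *img_low.shape)
    return water_img, iodine_img

# A virtual monochromatic image is then just another linear combination:
# mu_mono(E) = water_img * mu_water(E) + iodine_img * mu_iodine(E)
```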


Subjects
Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Animals; Calibration; Mice; Radiography, Thoracic; X-Ray Microtomography