1.
J Biophotonics ; : e202400106, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719459

ABSTRACT

To date, the training required for reproducible operation of multispectral optoacoustic tomography (MSOT) has received little discussion. The aim of this study was therefore to assess the teachability of MSOT imaging. Five operators (two experienced and three inexperienced) performed repositioning imaging experiments. The inexperienced operators were introduced to the system in one of three ways: personal supervision, a video meeting, or printed instructions. The task was to image the exact same position on the calf muscle seven times on five volunteers in two rounds of investigations. In the first session, operators used ultrasound guidance during measurements, while in the second session they used only photoacoustic data. The performance comparison was carried out with full-reference image quality measures to quantitatively assess the difference between repeated scans. The study demonstrates that, given personal supervision and hybrid real-time ultrasound imaging during MSOT measurements, inexperienced operators are able to match experienced operators in repositioning accuracy.
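Full-reference measures of the kind used in this study can be computed directly between repeated scans. The sketch below uses mean-squared error and Pearson correlation as two simple examples; the specific metrics and array shapes are illustrative, not necessarily those of the study.

```python
import numpy as np

def full_reference_metrics(ref, test):
    """Compare a repeated scan against a reference scan.

    Returns mean-squared error and Pearson correlation, two simple
    full-reference image quality measures.
    """
    ref = ref.astype(float).ravel()
    test = test.astype(float).ravel()
    mse = np.mean((ref - test) ** 2)
    corr = np.corrcoef(ref, test)[0, 1]
    return mse, corr

# A perfectly repositioned scan yields zero error and correlation ~1.
rng = np.random.default_rng(0)
scan = rng.random((64, 64))
mse, corr = full_reference_metrics(scan, scan)
print(mse)  # 0.0
```

Lower MSE and higher correlation between repeated scans then indicate better repositioning accuracy.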

2.
IEEE Trans Med Imaging ; 43(3): 1214-1224, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37938947

ABSTRACT

Accurate measurement of optical absorption coefficients from photoacoustic imaging (PAI) data would enable direct mapping of molecular concentrations, providing vital clinical insight. The ill-posed nature of the problem of absorption coefficient recovery has prohibited PAI from achieving this goal in living systems due to the domain gap between simulation and experiment. To bridge this gap, we introduce a collection of experimentally well-characterised imaging phantoms and their digital twins. This first-of-a-kind phantom data set enables supervised training of a U-Net on experimental data for pixel-wise estimation of absorption coefficients. We show that training on simulated data results in artefacts and biases in the estimates, reinforcing the existence of a domain gap between simulation and experiment. Training on experimentally acquired data, however, yielded more accurate and robust estimates of optical absorption coefficients. We compare the results to fluence correction with a Monte Carlo model from reference optical properties of the materials, which yields a quantification error of approximately 20%. Application of the trained U-Nets to a blood flow phantom demonstrated spectral biases when training on simulated data, while application to a mouse model highlighted the ability of both learning-based approaches to recover the depth-dependent loss of signal intensity. We demonstrate that training on experimental phantoms can restore the correlation of signal amplitudes measured in depth. While the absolute quantification error remains high and further improvements are needed, our results highlight the promise of deep learning to advance quantitative PAI.
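The fluence-correction baseline mentioned above divides the reconstructed initial pressure by a modelled light fluence to recover absorption. A minimal one-dimensional sketch, with an illustrative Grüneisen parameter and a simple exponential fluence model standing in for a full Monte Carlo simulation:

```python
import numpy as np

grueneisen = 0.2                           # Grüneisen parameter (illustrative value)
depth = np.linspace(0, 20, 50)             # depth in mm
mu_a_true = np.full_like(depth, 0.05)      # true absorption coefficient, mm^-1
mu_eff = 0.1                               # effective attenuation, mm^-1 (illustrative)
fluence = np.exp(-mu_eff * depth)          # modelled light fluence vs. depth

# Forward model: initial pressure p0 = Gamma * mu_a * fluence
p0 = grueneisen * mu_a_true * fluence

# Fluence correction: divide out the modelled fluence to recover mu_a
mu_a_est = p0 / (grueneisen * fluence)
print(np.max(np.abs(mu_a_est - mu_a_true)))  # ~0.0
```

In practice the modelled fluence never matches the true fluence exactly, which is the source of the roughly 20% quantification error reported for the Monte Carlo baseline.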


Subject(s)
Photoacoustic Techniques , Animals , Mice , Phantoms, Imaging , Photoacoustic Techniques/methods , Diagnostic Imaging , Computer Simulation , Monte Carlo Method
3.
J Biomed Opt ; 29(Suppl 1): S11506, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38125716

ABSTRACT

Significance: Photoacoustic imaging (PAI) provides contrast based on the concentration of optical absorbers in tissue, enabling the assessment of functional physiological parameters such as blood oxygen saturation (sO2). Recent evidence suggests that variation in melanin levels in the epidermis leads to measurement biases in optical technologies, which could potentially limit the application of these biomarkers in diverse populations. Aim: To examine the effects of skin melanin pigmentation on PAI and oximetry. Approach: We evaluated the effects of skin tone in PAI using a computational skin model, two-layer melanin-containing tissue-mimicking phantoms, and mice of a consistent genetic background with varying pigmentations. The computational skin model was validated by simulating the diffuse reflectance spectrum using the adding-doubling method, allowing us to assign our simulation parameters to approximate Fitzpatrick skin types. Monte Carlo simulations and acoustic simulations were run to obtain idealized photoacoustic images of our skin model. Photoacoustic images of the phantoms and mice were acquired using a commercial instrument. Reconstructed images were processed with linear spectral unmixing to estimate blood oxygenation. Linear unmixing results were compared with a learned unmixing approach based on gradient-boosted regression. Results: Our computational skin model was consistent with representative literature for in vivo skin reflectance measurements. We observed consistent spectral coloring effects across all model systems, with an overestimation of sO2 and more image artifacts observed with increasing melanin concentration. The learned unmixing approach reduced the measurement bias, but predictions made at lower blood sO2 still suffered from a skin tone-dependent effect. Conclusion: PAI demonstrates measurement bias, including an overestimation of blood sO2, in higher Fitzpatrick skin types. 
Future research should aim to characterize this effect in humans to ensure equitable application of the technology.
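Linear spectral unmixing, as used here to estimate sO2, solves a per-pixel least-squares system relating the measured spectrum to chromophore concentrations. A minimal sketch with illustrative (not tabulated) extinction values:

```python
import numpy as np

# Illustrative absorption spectra for HbO2 and Hb at four wavelengths;
# real unmixing would use literature extinction coefficients.
wavelengths = [700, 750, 800, 850]             # nm
E = np.array([[0.3, 1.2],                      # columns: [HbO2, Hb]
              [0.5, 1.0],
              [0.8, 0.8],
              [1.0, 0.6]])

# Synthetic pixel spectrum for 70% oxygenation
c_true = np.array([0.7, 0.3])
spectrum = E @ c_true

# Least-squares unmixing, then sO2 = HbO2 / (HbO2 + Hb)
c_est, *_ = np.linalg.lstsq(E, spectrum, rcond=None)
so2 = c_est[0] / c_est.sum()
print(round(so2, 3))  # 0.7
```

Spectral coloring by melanin perturbs the measured spectrum away from this linear model, which is why the estimate becomes biased at higher Fitzpatrick skin types.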


Subject(s)
Photoacoustic Techniques , Skin Pigmentation , Humans , Animals , Mice , Oxygen , Melanins , Photoacoustic Techniques/methods , Oximetry/methods , Phantoms, Imaging
4.
Photoacoustics ; 32: 100539, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37600964

ABSTRACT

Photoacoustic imaging (PAI), also referred to as optoacoustic imaging, has shown promise in early-stage clinical trials in a range of applications, from inflammatory diseases to cancer. While the first PAI systems have recently received regulatory approvals, successful adoption of PAI technology into healthcare systems for clinical decision making must still overcome a range of barriers, from education and training to data acquisition and interpretation. The International Photoacoustic Standardisation Consortium (IPASC) undertook a community exercise in 2022 to identify and understand these barriers and to develop a roadmap of strategic plans to address them. Here, we outline the nature and scope of the barriers that were identified, along with the short-, medium- and long-term community efforts required to overcome them, both within and beyond the IPASC group.

5.
J Vis Exp ; (196)2023 06 16.
Article in English | MEDLINE | ID: mdl-37395576

ABSTRACT

Establishing tissue-mimicking biophotonic phantom materials that provide long-term stability is imperative to enable the comparison of biomedical imaging devices across vendors and institutions, support the development of internationally recognized standards, and assist the clinical translation of novel technologies. Here, a manufacturing process is presented that results in a stable, low-cost, tissue-mimicking copolymer-in-oil material for use in photoacoustic, optical, and ultrasound standardization efforts. The base material consists of mineral oil and a copolymer with defined Chemical Abstract Service (CAS) numbers. The protocol presented here yields a representative material with a speed of sound c(f) = 1,481 ± 0.4 m·s⁻¹ at 5 MHz (corresponding to the speed of sound of water at 20 °C), acoustic attenuation α(f) = 6.1 ± 0.06 dB·cm⁻¹ at 5 MHz, optical absorption µa(λ) = 0.05 ± 0.005 mm⁻¹ at 800 nm, and optical scattering µs'(λ) = 1 ± 0.1 mm⁻¹ at 800 nm. The material allows independent tuning of the acoustic and optical properties by varying the polymer concentration and the concentrations of the scattering agent (titanium dioxide) and absorbing agent (oil-soluble dye), respectively. The fabrication of different phantom designs is demonstrated, and the homogeneity of the resulting test objects is confirmed using photoacoustic imaging. Due to its facile, repeatable fabrication process and durability, as well as its biologically relevant properties, the material recipe holds great promise for multimodal acoustic-optical standardization initiatives.


Subject(s)
Diagnostic Imaging , Mineral Oil , Phantoms, Imaging , Ultrasonography/methods , Acoustics , Polymers/chemistry
6.
Photoacoustics ; 28: 100402, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36281320

ABSTRACT

Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties with high spatial resolution. However, previous attempts to solve the optical inverse problem with supervised machine learning were hampered by the absence of labeled reference data. While this bottleneck has been tackled by simulating training data, the domain gap between real and simulated images remains an unsolved challenge. We propose a novel approach to PAT image synthesis that involves subdividing the challenge of generating plausible simulations into two disjoint problems: (1) probabilistic generation of realistic tissue morphology, and (2) pixel-wise assignment of corresponding optical and acoustic properties. The former is achieved with Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data. According to a validation study on a downstream task, our approach yields more realistic synthetic images than the traditional model-based approach and could therefore become a fundamental step for deep learning-based quantitative PAT (qPAT).

7.
Photoacoustics ; 26: 100357, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35574188

ABSTRACT

Mesoscopic photoacoustic imaging (PAI) enables non-invasive visualisation of tumour vasculature. The visual or semi-quantitative 2D measurements typically applied to mesoscopic PAI data fail to capture the 3D vessel network complexity and lack robust ground truths for assessment of accuracy. Here, we developed a pipeline for quantifying 3D vascular networks captured using mesoscopic PAI and tested the preservation of blood volume and network structure with topological data analysis. Ground truth data of in silico synthetic vasculatures and a string phantom indicated that learning-based segmentation best preserves vessel diameter and blood volume at depth, while rule-based segmentation with vesselness image filtering accurately preserved network structure in superficial vessels. Segmentation of vessels in breast cancer patient-derived xenografts (PDXs) compared favourably to ex vivo immunohistochemistry. Furthermore, our findings underscore the importance of validating segmentation methods when applying mesoscopic PAI as a tool to evaluate vascular networks in vivo.

8.
J Biomed Opt ; 27(8)2022 04.
Article in English | MEDLINE | ID: mdl-35380031

ABSTRACT

SIGNIFICANCE: Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings. AIM: To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards. APPROACH: SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models. RESULTS: To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations. CONCLUSIONS: SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
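The modular-pipeline idea described above — exchangeable forward models operating on shared data — can be sketched generically as follows. The class and stage names here are hypothetical and do not reflect SIMPA's actual API; see the linked repository for the real interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch of a modular simulation pipeline in the spirit of
# SIMPA; all names below are hypothetical, not SIMPA's API.

@dataclass
class Pipeline:
    stages: list = field(default_factory=list)

    def add(self, stage: Callable[[dict], dict]) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, data: dict) -> dict:
        # Each stage consumes and enriches a shared data dictionary, so a
        # forward model can be swapped without touching the other stages.
        for stage in self.stages:
            data = stage(data)
        return data

def tissue_model(d): d["volume"] = "vessel phantom"; return d
def optical_model(d): d["fluence"] = "MC estimate"; return d
def acoustic_model(d): d["pressure"] = "k-space sim"; return d

result = Pipeline().add(tissue_model).add(optical_model).add(acoustic_model).run({})
print(sorted(result))  # ['fluence', 'pressure', 'volume']
```

Abstracting each forward model behind a common stage interface is what allows the seamless exchange of module implementations described in the abstract.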


Subject(s)
Optics and Photonics , Software , Acoustics , Dimethylpolysiloxanes , Image Processing, Computer-Assisted/methods
9.
Photoacoustics ; 26: 100341, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35371919

ABSTRACT

Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data are used as reference and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.

10.
Photoacoustics ; 26: 100339, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35287304

ABSTRACT

Photoacoustic imaging (PAI) is an emerging modality that has shown promise for improving patient management in a range of applications. Unfortunately, the current lack of uniformity in PAI data formats compromises inter-user data exchange and comparison, which impedes: technological progress; effective research collaboration; and efforts to deliver multi-centre clinical trials. To overcome this challenge, the International Photoacoustic Standardisation Consortium (IPASC) has established a data format with a defined consensus metadata structure and developed an open-source software application programming interface (API) to enable conversion from proprietary file formats into the IPASC format. The format is based on Hierarchical Data Format 5 (HDF5) and designed to store photoacoustic raw time series data. Internal quality control mechanisms are included to ensure completeness and consistency of the converted data. By unifying the variety of proprietary data and metadata definitions into a consensus format, IPASC hopes to facilitate the exchange and comparison of PAI data.
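The general pattern of storing raw time-series data with metadata in HDF5 can be sketched as below. The dataset and attribute names are illustrative and should not be taken as the consortium's normative schema; the IPASC API defines the actual structure and quality-control checks.

```python
import numpy as np
import h5py

# Sketch of storing photoacoustic raw time-series data in HDF5, in the
# spirit of the IPASC format; names below are illustrative only.
time_series = np.random.default_rng(1).random((128, 2048))  # detectors x samples

with h5py.File("pa_scan.h5", "w") as f:
    f.create_dataset("binary_time_series_data", data=time_series)
    f.attrs["sampling_rate_hz"] = 40e6       # hypothetical metadata fields
    f.attrs["num_detectors"] = time_series.shape[0]

# Any consumer can now read the data back without vendor-specific code.
with h5py.File("pa_scan.h5", "r") as f:
    restored = f["binary_time_series_data"][...]
    rate = f.attrs["sampling_rate_hz"]

print(restored.shape)  # (128, 2048)
```

HDF5's self-describing attributes are what let the consensus metadata travel with the raw data across vendors and analysis tools.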

11.
Int J Comput Assist Radiol Surg ; 16(7): 1101-1110, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33993409

ABSTRACT

PURPOSE: Photoacoustic tomography (PAT) is a novel imaging technique that can spatially resolve both morphological and functional tissue properties, such as vessel topology and tissue oxygenation. While this capacity makes PAT a promising modality for the diagnosis, treatment, and follow-up of various diseases, a current drawback is the limited field of view provided by the conventionally applied 2D probes. METHODS: In this paper, we present a novel approach to 3D reconstruction of PAT data (Tattoo tomography) that does not require an external tracking system and can smoothly be integrated into clinical workflows. It is based on an optical pattern placed on the region of interest prior to image acquisition. This pattern is designed in a way that a single tomographic image of it enables the recovery of the probe pose relative to the coordinate system of the pattern, which serves as a global coordinate system for image compounding. RESULTS: To investigate the feasibility of Tattoo tomography, we assessed the quality of 3D image reconstruction with experimental phantom data and in vivo forearm data. The results obtained with our prototype indicate that the Tattoo method enables the accurate and precise 3D reconstruction of PAT data and may be better suited for this task than the baseline method using optical tracking. CONCLUSIONS: In contrast to previous approaches to 3D ultrasound (US) or PAT reconstruction, the Tattoo approach neither requires complex external hardware nor training data acquired for a specific application. It could thus become a valuable tool for clinical freehand PAT.


Subject(s)
Imaging, Three-Dimensional/methods , Phantoms, Imaging , Tattooing/methods , Tomography, X-Ray Computed/methods , Ultrasonography/methods , Humans
12.
Sci Data ; 8(1): 101, 2021 04 12.
Article in English | MEDLINE | ID: mdl-33846356

ABSTRACT

Image-based tracking of medical instruments is an integral part of surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the proposed methods still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on method robustness and generalization capabilities. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all video frames as well as information on instrument presence and corresponding instance-wise segmentation masks for surgical instruments (if any) in more than 10,000 individual frames. The data has successfully been used to organize international competitions within the Endoscopic Vision Challenges 2017 and 2019.


Subject(s)
Colon, Sigmoid/surgery , Proctocolectomy, Restorative/instrumentation , Rectum/surgery , Surgical Navigation Systems , Data Science , Humans , Laparoscopy
13.
Photoacoustics ; 22: 100241, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33717977

ABSTRACT

Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.

14.
Sci Rep ; 11(1): 6565, 2021 03 22.
Article in English | MEDLINE | ID: mdl-33753769

ABSTRACT

The ability of photoacoustic imaging to measure functional tissue properties, such as blood oxygenation sO2, enables a wide variety of possible applications. sO2 can be computed from the ratio of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb), which can be distinguished by multispectral photoacoustic imaging due to their distinct wavelength-dependent absorption. However, current methods for estimating sO2 yield inaccurate results in realistic settings, due to the unknown and wavelength-dependent influence of the light fluence on the signal. In this work, we propose learned spectral decoloring to enable blood oxygenation measurements to be inferred from multispectral photoacoustic imaging. The method computes sO2 pixel-wise, directly from initial pressure spectra, which represent the initial pressure values at a fixed spatial location over all recorded wavelengths. The method is compared to linear unmixing approaches, as well as to pO2 and blood gas analysis reference measurements. Experimental results suggest that the proposed method is able to obtain sO2 estimates from multispectral photoacoustic measurements in silico, in vitro, and in vivo.
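The core idea — estimating sO2 per pixel from a fluence-coloured initial pressure spectrum using a model built from simulated spectra — can be illustrated with a minimal nearest-neighbour stand-in for the learned approach. The extinction values and fluence coloring below are illustrative only.

```python
import numpy as np

# Minimal stand-in for learned spectral decoloring: a lookup "model" is
# built from simulated initial-pressure spectra and queried per pixel.
wavelengths = np.array([700.0, 750.0, 800.0, 850.0])
E = np.array([[0.3, 1.2], [0.5, 1.0], [0.8, 0.8], [1.0, 0.6]])  # [HbO2, Hb]
coloring = np.exp(-0.02 * (wavelengths - 700.0) / 50.0)  # spectral fluence effect

def simulate_spectrum(so2, amplitude=1.0):
    chromophores = np.array([so2, 1.0 - so2])
    return amplitude * coloring * (E @ chromophores)

def normalise(p):
    return p / np.linalg.norm(p)

# "Training": simulate spectra over a dense sO2 grid
grid = np.linspace(0.0, 1.0, 101)
dictionary = np.stack([normalise(simulate_spectrum(s)) for s in grid])

def predict_so2(p0_spectrum):
    # The overall fluence magnitude cancels after normalisation; the
    # nearest simulated spectrum then gives the sO2 estimate.
    distances = np.linalg.norm(dictionary - normalise(p0_spectrum), axis=1)
    return grid[np.argmin(distances)]

est = predict_so2(simulate_spectrum(0.63, amplitude=5.0))
print(round(float(est), 2))  # 0.63
```

The published method replaces this lookup with a neural network trained on simulated spectra, which generalises to colorings not seen at training time.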

15.
J Biomed Opt ; 25(4): 1, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32301318

ABSTRACT

The erratum corrects an error in the published article.

16.
Sci Rep ; 9(1): 8661, 2019 06 17.
Article in English | MEDLINE | ID: mdl-31209253

ABSTRACT

Spreading depolarization (SD) is a self-propagating wave of near-complete neuronal depolarization that is abundant in a wide range of neurological conditions, including stroke. SD was only recently documented in humans and is now considered a therapeutic target for brain injury, but the mechanisms related to SD in complex brains are not well understood. While there are numerous approaches to interventional imaging of SD on the exposed brain surface, measuring SD deep in the brain has so far only been possible with low spatiotemporal resolution and poor contrast. Here, we show that photoacoustic imaging enables the study of SD and its hemodynamics deep in the gyrencephalic brain with high spatiotemporal resolution. As rapid neuronal depolarization causes tissue hypoxia, we achieve this by continuously estimating blood oxygenation with an intraoperative hybrid photoacoustic and ultrasonic imaging system. Due to its high resolution, promising imaging depth, and high contrast, this novel approach to SD imaging can yield new insights into SD and thereby lead to advances in stroke and brain injury research.


Subject(s)
Cerebral Cortex/diagnostic imaging , Gray Matter/diagnostic imaging , Neuroimaging/methods , Oxygen/analysis , Photoacoustic Techniques/instrumentation , Ultrasonography/instrumentation , Animals , Cerebral Cortex/blood supply , Cerebral Cortex/drug effects , Cortical Spreading Depression/drug effects , Electrocorticography , Female , Gray Matter/blood supply , Gray Matter/drug effects , Hemodynamics/drug effects , Hemodynamics/physiology , Humans , Neuroimaging/instrumentation , Oxygen/physiology , Potassium Chloride/pharmacology , Swine
17.
Int J Comput Assist Radiol Surg ; 14(6): 997-1007, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30903566

ABSTRACT

PURPOSE: Optical imaging is evolving as a key technique for advanced sensing in the operating room. Recent research has shown that machine learning algorithms can be used to address the inverse problem of converting pixel-wise multispectral reflectance measurements to underlying tissue parameters, such as oxygenation. Assessment of the specific hardware used in conjunction with such algorithms, however, has not properly addressed the possibility that the problem may be ill-posed. METHODS: We present a novel approach to the assessment of optical imaging modalities, which is sensitive to the different types of uncertainties that may occur when inferring tissue parameters. Based on the concept of invertible neural networks, our framework goes beyond point estimates and maps each multispectral measurement to a full posterior probability distribution which is capable of representing ambiguity in the solution via multiple modes. Performance metrics for a hardware setup can then be computed from the characteristics of the posteriors. RESULTS: Application of the assessment framework to the specific use case of camera selection for physiological parameter estimation yields the following insights: (1) estimation of tissue oxygenation from multispectral images is a well-posed problem, while (2) blood volume fraction may not be recovered without ambiguity. (3) In general, ambiguity may be reduced by increasing the number of spectral bands in the camera. CONCLUSION: Our method could help to optimize optical camera design in an application-specific manner.


Subject(s)
Machine Learning , Neural Networks, Computer , Optical Imaging/methods , Algorithms , Humans , Uncertainty
18.
J Biomed Opt ; 23(5): 1-9, 2018 05.
Article in English | MEDLINE | ID: mdl-29777580

ABSTRACT

Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. Although photoacoustic (PA) imaging is a modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. We introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information of the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context encoding-qPAI enables highly accurate and robust quantification of the local fluence and thereby the optical absorption from PA images.


Subject(s)
Machine Learning , Optical Imaging/methods , Photoacoustic Techniques/methods , Signal Processing, Computer-Assisted , Algorithms , Carotid Arteries/diagnostic imaging , Humans , Models, Cardiovascular , Oxyhemoglobins/analysis , Oxyhemoglobins/chemistry
19.
Int J Comput Assist Radiol Surg ; 12(3): 351-361, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27687984

ABSTRACT

PURPOSE: Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI, such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication protocol such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow. METHODS: MITK-OpenIGTLink is presented as a network interface within MITK that allows easy-to-use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow, from data acquisition over processing to visualization. RESULTS: We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images at typical ultrasound (US) and greyscale high-definition (HD) resolutions is possible at up to 512 and 128 Hz, respectively. CONCLUSION: With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence ( http://mitk.org ).


Subject(s)
Image Processing, Computer-Assisted/methods , Software , Surgery, Computer-Assisted/methods , Telecommunications , Ultrasonography , Humans , Robotic Surgical Procedures , Robotics , Workflow