Results 1 - 20 of 79
1.
Sensors (Basel) ; 24(7)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610507

ABSTRACT

In cardiac cine imaging, acquiring high-quality data is challenging and time-consuming due to the artifacts generated by the heart's continuous movement. Volumetric, fully isotropic data acquisition with high temporal resolution is, to date, intractable due to MR physics constraints. To assess whole-heart movement under minimal acquisition time, we propose a deep learning model that reconstructs the volumetric shape of multiple cardiac chambers from a limited number of input slices while simultaneously optimizing the slice acquisition orientation for this task. We mimic the current clinical protocols for cardiac imaging and compare the shape reconstruction quality of standard clinical views and optimized views. In our experiments, we show that the jointly trained model achieves accurate high-resolution multi-chamber shape reconstruction with errors of <13 mm HD95 and Dice scores of >80%, indicating its effectiveness in both simulated cardiac cine MRI and clinical cardiac MRI with a wide range of pathological shape variations.


Subject(s)
Cardiac Surgical Procedures , Deep Learning , Cardiac Volume , Heart/diagnostic imaging , Artifacts
2.
Med Image Anal ; 89: 102887, 2023 10.
Article in English | MEDLINE | ID: mdl-37453235

ABSTRACT

3D human pose estimation is a key component of clinical monitoring systems. The clinical applicability of deep pose estimation models, however, is limited by their poor generalization under domain shifts along with their need for sufficient labeled training data. As a remedy, we present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain. Our method comprises two complementary adaptation strategies based on prior knowledge about human anatomy. First, we guide the learning process in the target domain by constraining predictions to the space of anatomically plausible poses. To this end, we embed the prior knowledge into an anatomical loss function that penalizes asymmetric limb lengths, implausible bone lengths, and implausible joint angles. Second, we propose to filter pseudo labels for self-training according to their anatomical plausibility and incorporate the concept into the Mean Teacher paradigm. We unify both strategies in a point cloud-based framework applicable to unsupervised and source-free domain adaptation. Evaluation is performed for in-bed pose estimation under two adaptation scenarios, using the public SLP dataset and a newly created dataset. Our method consistently outperforms various state-of-the-art domain adaptation methods, surpasses the baseline model by 31%/66%, and reduces the domain gap by 65%/82%. Source code is available at https://github.com/multimodallearning/da-3dhpe-anatomy.
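As an illustration of the anatomical loss described above, the following is a minimal PyTorch sketch of one of its components, a penalty on asymmetric limb lengths; the joint indices and limb pairs are hypothetical, and the actual loss additionally constrains bone lengths and joint angles.

```python
# Minimal sketch (not the authors' code) of a limb-length symmetry penalty for 3D poses.
# LIMB_PAIRS lists hypothetical (left bone, right bone) joint-index pairs.
import torch

LIMB_PAIRS = [((0, 1), (3, 4)), ((1, 2), (4, 5))]

def symmetry_loss(pose):
    """pose: (B, J, 3) tensor of predicted 3D joint positions."""
    loss = 0.0
    for (la, lb), (ra, rb) in LIMB_PAIRS:
        left = (pose[:, la] - pose[:, lb]).norm(dim=-1)    # left bone length
        right = (pose[:, ra] - pose[:, rb]).norm(dim=-1)   # corresponding right bone length
        loss = loss + (left - right).abs().mean()          # penalise asymmetry
    return loss / len(LIMB_PAIRS)

pred = torch.randn(8, 6, 3, requires_grad=True)            # dummy batch: 8 poses, 6 joints
print(symmetry_loss(pred))                                  # differentiable; can be added to the training loss
```

In the same spirit, plausibility scores of this kind could be thresholded to filter pseudo labels for self-training.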


Subject(s)
Learning , Software , Humans
3.
Sensors (Basel) ; 23(6)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36991588

ABSTRACT

Image registration for temporal ultrasound sequences can be very beneficial for image-guided diagnostics and interventions. Cooperative human-machine systems that enable seamless assistance for both inexperienced and expert users during ultrasound examinations rely on robust, real-time motion estimation. Yet rapid and irregular motion patterns, varying image contrast and domain shifts in imaging devices pose a severe challenge to conventional real-time registration approaches. While learning-based registration networks have the promise of abstracting relevant features and delivering very fast inference times, they come at the potential risk of limited generalisation and robustness for unseen data, in particular when trained with limited supervision. In this work, we demonstrate that these issues can be overcome by using end-to-end differentiable displacement optimisation. Our method involves a trainable feature backbone, a correlation layer that evaluates a large range of displacement options simultaneously and a differentiable regularisation module that ensures smooth and plausible deformation. In extensive experiments on public and private ultrasound datasets with very sparse ground truth annotation, the method showed better generalisation abilities and overall accuracy than a VoxelMorph network with the same feature backbone, while being twice as fast at inference.
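The correlation layer mentioned above can be pictured as a cost volume over a discrete set of displacements; the following 2D sketch (an assumption-laden simplification, not the paper's implementation) shows the basic computation.

```python
# Hedged sketch of a 2D correlation layer that scores a discrete range of displacements
# between fixed and moving feature maps. The feature extractor and the displacement
# range are assumptions.
import torch
import torch.nn.functional as F

def correlation_volume(feat_fixed, feat_moving, max_disp=4):
    """feat_*: (B, C, H, W). Returns (B, (2*max_disp+1)**2, H, W) similarity scores."""
    B, C, H, W = feat_fixed.shape
    pad = F.pad(feat_moving, [max_disp] * 4)
    scores = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = pad[:, :, dy:dy + H, dx:dx + W]
            scores.append((feat_fixed * shifted).sum(1, keepdim=True) / C)  # normalised dot product
    return torch.cat(scores, dim=1)

fixed = torch.randn(1, 16, 64, 64)
moving = torch.randn(1, 16, 64, 64)
vol = correlation_volume(fixed, moving)    # a soft-argmin over dim 1 would yield a displacement field
print(vol.shape)                            # torch.Size([1, 81, 64, 64])
```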

4.
Signal Image Video Process ; 17(4): 981-989, 2023.
Article in English | MEDLINE | ID: mdl-35910403

ABSTRACT

Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables a more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves the performance of semantic segmentation tasks on challenging COVID-19 lesion segmentation. The validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% and 61.4% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves the accuracy by 4.2 percentage points over a baseline U-Net and by 3.09 percentage points over a baseline U-Net with matched parameters. Supplementary Information: The online version contains supplementary material available at 10.1007/s11760-022-02302-3.
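For intuition, a rough sketch of an attention-augmented convolution block, which concatenates convolutional and self-attention feature maps along the channel axis, might look as follows; the channel widths and module choices are illustrative assumptions, not the AA-U-Net configuration.

```python
# Illustrative sketch (not the AA-U-Net code) of an attention-augmented convolution:
# convolutional features and self-attention features are computed in parallel and
# concatenated channel-wise, as described in the abstract.
import torch
import torch.nn as nn

class AttentionAugmentedConv(nn.Module):
    def __init__(self, in_ch, conv_ch, attn_ch, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
        self.attn = nn.MultiheadAttention(attn_ch, heads, batch_first=True)
        self.proj = nn.Conv2d(in_ch, attn_ch, 1)            # project input to the attention width

    def forward(self, x):
        B, _, H, W = x.shape
        conv_out = self.conv(x)
        tokens = self.proj(x).flatten(2).transpose(1, 2)     # (B, H*W, attn_ch)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = attn_out.transpose(1, 2).reshape(B, -1, H, W)
        return torch.cat([conv_out, attn_out], dim=1)        # channel-wise concatenation

block = AttentionAugmentedConv(in_ch=64, conv_ch=48, attn_ch=16)
print(block(torch.randn(2, 64, 8, 8)).shape)                 # torch.Size([2, 64, 8, 8])
```

In a U-Net, a block of this kind would typically replace the standard convolutions in the bottleneck, where the spatial resolution is small enough for full self-attention.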

5.
Med Image Anal ; 83: 102628, 2023 01.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.


Subject(s)
Neuroma, Acoustic , Humans , Neuroma, Acoustic/diagnostic imaging
6.
J Neurosci ; 42(18): 3797-3810, 2022 05 04.
Article in English | MEDLINE | ID: mdl-35351831

ABSTRACT

Humans have the ability to store and retrieve memories with various degrees of specificity, and recent advances in reinforcement learning have identified benefits to learning when past experience is represented at different levels of temporal abstraction. How this flexibility might be implemented in the brain remains unclear. We analyzed the temporal organization of male rat hippocampal population spiking to identify potential substrates for temporally flexible representations. We examined activity both during locomotion and during memory-associated population events known as sharp-wave ripples (SWRs). We found that spiking during SWRs is rhythmically organized with higher event-to-event variability than spiking during locomotion-associated population events. Decoding analyses using clusterless methods further indicate that a similar spatial experience can be replayed in multiple SWRs, each time with a different rhythmic structure whose periodicity is sampled from a log-normal distribution. This variability increases with experience despite the decline in SWR rates that occurs as environments become more familiar. We hypothesize that the variability in temporal organization of hippocampal spiking provides a mechanism for storing experiences with various degrees of specificity.

SIGNIFICANCE STATEMENT: One of the most remarkable properties of memory is its flexibility: the brain can retrieve stored representations at varying levels of detail where, for example, we can begin with a memory of an entire extended event and then zoom in on a particular episode. The neural mechanisms that support this flexibility are not understood. Here we show that hippocampal sharp-wave ripples, which mark the times of memory replay and are important for memory storage, have a highly variable temporal structure that is well suited to support the storage of memories at different levels of detail.


Subject(s)
Hippocampus , Learning , Animals , Male , Rats
7.
Sensors (Basel) ; 22(3)2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35161851

ABSTRACT

Deep learning-based medical image registration remains very difficult and often fails to improve over its classical counterparts where comprehensive supervision is not available, in particular for large transformations, including rigid alignment. The use of unsupervised, metric-based registration networks has become popular, but so far no universally applicable similarity metric is available for multimodal medical registration, requiring a trade-off between local contrast-invariant edge features and more global statistical metrics. In this work, we aim to improve over the use of handcrafted metric-based losses. We propose to use synthetic three-way (triangular) cycles that for each pair of images comprise two multimodal transformations to be estimated and one known synthetic monomodal transform. Additionally, we present a robust method for estimating large rigid transformations that is differentiable in end-to-end learning. By minimising the cycle discrepancy and adapting the synthetic transformation to be close to the real geometric difference of the image pairs during training, we successfully tackle intra-patient abdominal CT-MRI registration and reach performance on par with state-of-the-art metric-supervision and classic methods. Cyclic constraints enable the learning of cross-modality features that excel at accurate anatomical alignment of abdominal CT and MRI scans.
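The triangular cycle idea can be sketched for affine transforms as follows; this is our reading of the abstract rather than the released code, and the 3x4 matrix parameterisation is an assumption.

```python
# Minimal sketch of a triangular cycle-consistency loss for rigid/affine registration:
# two estimated cross-modality transforms composed together should reproduce the known
# synthetic monomodal transform that generated the third image.
import torch

def compose(T1, T2):
    """Compose two batches of 3x4 affine transforms (rotation | translation)."""
    R = T2[:, :, :3] @ T1[:, :, :3]
    t = T2[:, :, :3] @ T1[:, :, 3:] + T2[:, :, 3:]
    return torch.cat([R, t], dim=2)

def cycle_discrepancy(T_ab, T_b_asyn, T_syn):
    """T_ab, T_b_asyn: predicted transforms A->B and B->A_syn;
    T_syn: known synthetic transform A->A_syn. Their composition should match T_syn."""
    return ((compose(T_ab, T_b_asyn) - T_syn) ** 2).mean()

eye = torch.eye(3).unsqueeze(0).repeat(2, 1, 1)
T_syn = torch.cat([eye, torch.tensor([[0.1, 0.0, 0.0]]).view(1, 3, 1).repeat(2, 1, 1)], dim=2)
T_ab = torch.cat([eye, torch.zeros(2, 3, 1)], dim=2).requires_grad_()
T_b_asyn = T_syn.clone().requires_grad_()
print(cycle_discrepancy(T_ab, T_b_asyn, T_syn))    # near zero when the cycle closes
```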


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Humans
8.
Comput Methods Programs Biomed ; 211: 106374, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34601186

ABSTRACT

BACKGROUND AND OBJECTIVE: Fast and robust alignment of pre-operative MRI planning scans to intra-operative ultrasound is an important aspect of automatically supporting image-guided interventions. Thus far, learning-based approaches have failed to tackle the intertwined objectives of fast inference computation time and robustness to unexpectedly large motion and misalignment. In this work, we propose a novel method that decouples deep feature learning and the computation of long-ranging local displacement probability maps from fast and robust global transformation prediction. METHODS: In our approach, we first train a convolutional neural network (CNN) to extract modality-agnostic features with sub-second computation times for both 3D volumes during inference. Using sparsity-based network weight pruning, the model complexity and computation times can be substantially reduced. Based on these features, a large discretized search range of 3D motion vectors is explored to compute a probabilistic displacement map for each control point. These 3D probability maps are employed in our newly proposed, computationally efficient instance optimisation that robustly estimates the most likely global linear transformation that best reflects the local displacement beliefs subject to outlier rejection. RESULTS: Our experimental validation demonstrates state-of-the-art accuracy on the challenging CuRIOUS dataset with average target registration errors of 2.50 mm, a model size of only 1.2 MByte and run times of approx. 3 seconds for a full 3D multimodal registration. CONCLUSION: We show that a significant improvement in accuracy and robustness can be gained with instance optimisation, and that our fast self-supervised deep learning model can achieve state-of-the-art accuracy on a challenging registration task in only 3 seconds.
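As a rough picture of the instance-optimisation step, the sketch below fits a single global affine transform to per-control-point displacement estimates using iteratively re-weighted least squares for outlier rejection; the weighting scheme and all names are assumptions, not the published method.

```python
# Hedged sketch: robustly fit a global affine transform to sparse displacement estimates
# (e.g. the argmax of per-control-point probability maps), down-weighting outliers.
import torch

def fit_affine_robust(pts, disp, iters=10, sigma=5.0):
    """pts: (N, 3) control points, disp: (N, 3) most likely displacements.
    Returns a 3x4 affine transform mapping pts to pts + disp."""
    tgt = pts + disp
    X = torch.cat([pts, torch.ones(pts.shape[0], 1)], dim=1)   # (N, 4) homogeneous coordinates
    w = torch.ones(pts.shape[0])
    for _ in range(iters):
        Xw, tw = X * w.unsqueeze(1), tgt * w.unsqueeze(1)       # apply per-point weights
        A = torch.linalg.lstsq(Xw, tw).solution                 # (4, 3) weighted least-squares fit
        residual = (X @ A - tgt).norm(dim=1)
        w = torch.exp(-(residual / sigma) ** 2)                 # soft outlier rejection
    return A.T                                                  # (3, 4)

pts = torch.rand(200, 3) * 100
disp = torch.tensor([2.0, -1.0, 0.5]) + 0.1 * torch.randn(200, 3)
disp[:10] += 30.0                                               # simulate a few gross outliers
print(fit_affine_robust(pts, disp))                             # approx. identity rotation + [2, -1, 0.5] shift
```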


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Motion , Ultrasonography , Ultrasonography, Interventional
9.
Article in English | MEDLINE | ID: mdl-34531633

ABSTRACT

A major goal of lung cancer screening is to identify individuals with particular phenotypes that are associated with a high risk of cancer. Identifying relevant phenotypes is complicated by variation in body position and body composition. In the brain, standardized coordinate systems (e.g., atlases) have enabled local features to be considered separately from gross/global structure. To date, no analogous standard atlas has been presented to enable spatial mapping and harmonization in chest computed tomography (CT). In this paper, we propose a thoracic atlas built upon a large low-dose CT (LDCT) database from a lung cancer screening program. The study cohort includes 466 male and 387 female subjects with no screening-detected malignancy (age 46-79 years, mean 64.9 years). To provide spatial mapping, we optimize a multi-stage inter-subject non-rigid registration pipeline for the entire thoracic space. Briefly, with 50 scans of 50 randomly selected female subjects as the fine-tuning dataset, we search for the optimal configuration of the non-rigid registration module over a range of adjustable parameters, including registration search radius, degree of keypoint dispersion, regularization coefficient and similarity patch size, to minimize the registration failure rate approximated by the number of samples with a low Dice similarity score (DSC) for lung and body segmentation. We evaluate the optimized pipeline on a separate cohort (100 scans of 50 female and 50 male subjects) relative to two baselines with alternative non-rigid registration modules: the same software with default parameters and an alternative software package. We achieve a significant improvement in registration success rate based on manual QA. For the entire study cohort, the optimized pipeline achieves a registration success rate of 91.7%. The application validity of the developed atlas is evaluated in terms of its discriminative capability for different anatomic phenotypes, including body mass index (BMI), chronic obstructive pulmonary disease (COPD), and coronary artery calcification (CAC).
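The parameter search described above amounts to a small grid search that minimises the number of low-Dice (failed) registrations on the tuning set; a toy sketch, with a hypothetical run_registration helper and illustrative parameter values, is given below.

```python
# Toy sketch (hypothetical helpers, not the paper's pipeline): try configurations of the
# registration module on a tuning set and keep the one with the fewest failures.
import itertools

def tune_registration(run_registration, tuning_pairs, dice_threshold=0.9):
    """run_registration(pair, **cfg) -> Dice score; tuning_pairs: list of scan pairs."""
    search_space = {
        "search_radius": [8, 16, 32],          # illustrative parameter values
        "keypoint_dispersion": [1.0, 2.0],
        "regularisation": [0.1, 1.0],
        "patch_size": [3, 5],
    }
    best_cfg, best_failures = None, float("inf")
    for values in itertools.product(*search_space.values()):
        cfg = dict(zip(search_space.keys(), values))
        failures = sum(run_registration(pair, **cfg) < dice_threshold for pair in tuning_pairs)
        if failures < best_failures:
            best_cfg, best_failures = cfg, failures
    return best_cfg, best_failures

# Example with a dummy registration stub that only depends on the search radius:
scores = {8: 0.85, 16: 0.95, 32: 0.92}
best = tune_registration(lambda pair, **cfg: scores[cfg["search_radius"]], tuning_pairs=range(20))
print(best)   # first radius-16 configuration, with zero failures
```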

10.
NPJ Digit Med ; 4(1): 137, 2021 Sep 15.
Article in English | MEDLINE | ID: mdl-34526639

ABSTRACT

Deep vein thrombosis (DVT) is a blood clot most commonly found in the leg, which can lead to fatal pulmonary embolism (PE). Compression ultrasound of the legs is the diagnostic gold standard, leading to a definitive diagnosis. However, many patients with possible symptoms are not found to have a DVT, resulting in long referral waiting times for patients and a large clinical burden for specialists. Thus, diagnosis at the point of care by non-specialists is desired. We collect images in a pre-clinical study and investigate a deep learning approach for the automatic interpretation of compression ultrasound images. Our method provides guidance for free-hand ultrasound and aids non-specialists in detecting DVT. We train a deep learning algorithm on ultrasound videos from 255 volunteers and evaluate on a sample size of 53 prospectively enrolled patients from an NHS DVT diagnostic clinic and 30 prospectively enrolled patients from a German DVT clinic. Algorithmic DVT diagnosis performance results in a sensitivity within a 95% CI range of (0.82, 0.94), specificity of (0.70, 0.82), a positive predictive value of (0.65, 0.89), and a negative predictive value of (0.99, 1.00) when compared to the clinical gold standard. To assess the potential benefits of this technology in healthcare we evaluate the entire clinical DVT decision algorithm and provide cost analysis when integrating our approach into diagnostic pathways for DVT. Our approach is estimated to generate a positive net monetary benefit at costs up to £72 to £175 per software-supported examination, assuming a willingness to pay of £20,000/QALY.

11.
Int J Comput Assist Radiol Surg ; 16(12): 2079-2087, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34420184

ABSTRACT

PURPOSE: Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data. METHODS: We propose a novel deep learning framework, which comprises two 3D CNN modules solving the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient's volumetric surface without a cover. Second, the patient's weight is predicted from this 3D volume by means of a 3D CNN architecture, which we optimized for weight regression. RESULTS: We evaluate our approach on a lying pose dataset (SLP) under two different cover conditions. The proposed framework considerably improves on the baseline model by up to [Formula: see text] and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to [Formula: see text]. CONCLUSION: We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that were required for accurate weight estimates by previous contactless methods and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
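Schematically, the two-step framework can be wired as below; the modules are hypothetical stand-ins (a single convolution in place of the 3D U-Net, a small 3D CNN regressor), not the architecture used in the paper.

```python
# Schematic sketch of the two-step pipeline: step 1 predicts the uncovered body volume
# from the voxelised point cloud, step 2 regresses body weight from that volume.
import torch
import torch.nn as nn

class WeightRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)                    # single scalar: weight in kg

    def forward(self, volume):
        return self.head(self.features(volume).flatten(1))

def estimate_weight(voxels, uncover_net, weight_net):
    """voxels: (B, 1, D, H, W) occupancy grid of the covered patient."""
    uncovered = torch.sigmoid(uncover_net(voxels))      # step 1: virtually remove the cover
    return weight_net(uncovered)                        # step 2: regress weight from the volume

uncover_net = nn.Conv3d(1, 1, 3, padding=1)             # stand-in for the 3D U-Net
weight_net = WeightRegressor()
print(estimate_weight(torch.rand(2, 1, 32, 32, 32), uncover_net, weight_net).shape)  # (2, 1)
```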


Subject(s)
Cloud Computing , Machine Learning , Humans
12.
Sensors (Basel) ; 21(11)2021 May 28.
Article in English | MEDLINE | ID: mdl-34071615

ABSTRACT

There is a current healthcare need for improved prosthetic socket fit provision for the masses using low-cost and simple-to-manufacture sensors that can measure pressure, shear, and friction. There is also a need to address society's increasing concerns regarding the environmental impact of electronics and IoT devices. Prototype thin, low-cost, and low-weight pressure, shear, and loss-of-friction sensors have been developed and assembled for trans-femoral amputees. These flexible and conformable sensors are simple to manufacture and utilize more environmentally friendly novel magnetite-based QTSS™ (Quantum Technology Supersensor™) quantum materials. They have undergone initial tests on flat and curved surfaces in a pilot amputee trial, which are presented in this paper. These initial findings indicate that the prototype pressure sensor strip is capable of measuring pressure on both flat and curved socket surfaces. They have also demonstrated that the prototype shear sensor can indicate increasing shear forces, the resultant direction of the shear forces, and loss-of-friction/slippage events. Further testing, amputee trials, and ongoing optimization are continuing as part of the SocketSense project to assist prosthetic comfort and fit.


Subject(s)
Amputees , Wearable Electronic Devices , Electronics , Friction , Humans
13.
J Biomed Inform ; 119: 103816, 2021 07.
Article in English | MEDLINE | ID: mdl-34022421

ABSTRACT

Deep learning-based medical image segmentation is an important step within diagnosis, which relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data are particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent criss-cross-attention module aims to approximate global self-attention while remaining memory and time efficient by separating horizontal and vertical self-similarity computations. However, capturing attention from all non-local locations can adversely impact the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that enables a more accurate contextual information computation in a similarly efficient way. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep U-Net (Schlemper et al., 2019) segmentation network that employs this attention mechanism is able to capture attention from pertinent non-local locations and also improves the performance on semantic segmentation tasks compared to criss-cross attention within a U-Net on a challenging COVID-19 lesion segmentation task. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture dynamic and precise attention context. Our DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation and improves the accuracy by 4.9 percentage points compared to a baseline U-Net and by 24.4 percentage points compared to current state-of-the-art methods (Fan et al., 2020).
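To illustrate the idea of learning attention offsets in a continuous way, the sketch below predicts per-position sampling offsets and attention coefficients and blends bilinearly sampled features; it is our simplified illustration, not the DDANet block, and all module choices are assumptions.

```python
# Rough sketch of deformable attention: a conv head predicts continuous 2D offsets per
# position, features are sampled at the offset locations with bilinear interpolation,
# and a second head predicts the coefficients used to blend the sampled features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttention2d(nn.Module):
    def __init__(self, channels, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offset_head = nn.Conv2d(channels, 2 * n_points, 1)   # (dx, dy) per sample point
        self.weight_head = nn.Conv2d(channels, n_points, 1)       # attention coefficients

    def forward(self, x):
        B, C, H, W = x.shape
        offsets = self.offset_head(x).view(B, self.n_points, 2, H, W)
        weights = self.weight_head(x).softmax(dim=1)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).to(x)                 # (H, W, 2) grid in [-1, 1]
        out = 0.0
        for p in range(self.n_points):
            grid = base + offsets[:, p].permute(0, 2, 3, 1)        # continuous sampling locations
            sampled = F.grid_sample(x, grid, align_corners=True)
            out = out + weights[:, p:p + 1] * sampled
        return out

print(DeformableAttention2d(16)(torch.randn(1, 16, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```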


Subject(s)
COVID-19 , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , SARS-CoV-2 , Semantics , Tomography, X-Ray Computed
14.
IEEE Trans Med Imaging ; 40(9): 2246-2257, 2021 09.
Article in English | MEDLINE | ID: mdl-33872144

ABSTRACT

In the last two years, learning-based methods have started to show encouraging results in different supervised and unsupervised medical image registration tasks. Deep neural networks enable (near) real-time applications through fast inference times and have tremendous potential for increased registration accuracy through task-specific learning. However, estimation of large 3D deformations, for example present in inhale-to-exhale lung CT or inter-patient abdominal MRI registration, is still a major challenge for the widely adopted U-Net-like network architectures. Even when using multi-level strategies, current state-of-the-art DL registration results do not yet reach the high accuracy of conventional frameworks. To overcome the problem of large deformations for deep learning approaches, in this work, we present GraphRegNet, a sparse keypoint-based geometric network for dense deformable medical image registration. Similar to the successful 2D optical flow estimation of FlowNet or PWC-Net, we leverage discrete dense displacement maps to facilitate the registration process. In order to cope with the enormously increased memory requirements when working with displacement maps in 3D medical volumes and to obtain a well-regularised and accurate deformation field, we 1) formulate the registration task as the prediction of displacement vectors on a sparse irregular grid of distinctive keypoints and 2) introduce our efficient GraphRegNet for displacement regularisation, a combination of convolutional and graph neural network layers in a unified architecture. In our experiments on exhale-to-inhale lung CT registration we demonstrate substantial improvements (TRE below 1.4 mm) over other deep learning methods. Our code is publicly available at https://github.com/multimodallearning/graphregnet.
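The discrete displacement formulation on sparse keypoints can be sketched as follows: candidate displacements around each keypoint are scored by feature similarity and converted to a vector via soft-argmax. This is an illustrative simplification (without the graph-based regularisation), not GraphRegNet itself, and the feature volumes are assumed inputs.

```python
# Condensed sketch: predict displacement vectors on a sparse set of keypoints by scoring
# a discrete grid of candidate displacements and taking a soft-argmax.
import torch
import torch.nn.functional as F

def keypoint_displacements(feat_fix, feat_mov, keypts, radius=4):
    """feat_*: (C, D, H, W) feature volumes, keypts: (K, 3) integer voxel coordinates.
    Returns (K, 3) displacement vectors on the sparse keypoint grid."""
    r = torch.arange(-radius, radius + 1, dtype=torch.float32)
    grid = torch.stack(torch.meshgrid(r, r, r, indexing="ij"), dim=-1).view(-1, 3)  # candidates
    disps = []
    for z, y, x in keypts.tolist():
        q = feat_fix[:, z, y, x]                                   # (C,) query descriptor
        cand = (grid + torch.tensor([z, y, x], dtype=torch.float32)).long()
        cand = cand.clamp(0, feat_mov.shape[-1] - 1)               # crude border handling, cubic volume
        vals = feat_mov[:, cand[:, 0], cand[:, 1], cand[:, 2]]     # (C, N) candidate descriptors
        scores = (q.unsqueeze(1) * vals).sum(0)                    # similarity per candidate
        disps.append((F.softmax(scores, dim=0).unsqueeze(1) * grid).sum(0))  # soft-argmax
    return torch.stack(disps)

feat = torch.randn(8, 16, 16, 16)
kp = torch.tensor([[8, 8, 8], [4, 10, 6]])
print(keypoint_displacements(feat, feat, kp).shape)                # torch.Size([2, 3])
```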


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Lung/diagnostic imaging , Magnetic Resonance Imaging , Tomography, X-Ray Computed
15.
PLoS Biol ; 19(3): e3001121, 2021 03.
Article in English | MEDLINE | ID: mdl-33661886

ABSTRACT

Hematopoietic stem and progenitor cells (HSPCs) are a small population of undifferentiated cells that have the capacity to self-renew and differentiate into all blood cell lineages. These cells are the most useful for clinical transplantation and for regenerative medicine. So far, it has not been possible to expand adult hematopoietic stem cells (HSCs) without losing their self-renewal properties. CD74 is a cell surface receptor for the cytokine macrophage migration inhibitory factor (MIF), and its mRNA is known to be expressed in HSCs. Here, we demonstrate that mice lacking CD74 exhibit an accumulation of HSCs in the bone marrow (BM) due to their increased potential to repopulate and compete for BM niches. Our results suggest that CD74 regulates the maintenance of HSCs and CD18 expression; its absence leads to enhanced survival of these cells and an accumulation of quiescent and proliferating cells. Furthermore, in in vitro experiments, blocking of CD74 elevated the numbers of HSPCs. Thus, we suggest that blocking CD74 could provide improved clinical insight into BM transplant protocols, enabling improved engraftment.


Subject(s)
Antigens, Differentiation, B-Lymphocyte/genetics , Antigens, Differentiation, B-Lymphocyte/metabolism , Hematopoietic Stem Cells/metabolism , Histocompatibility Antigens Class II/genetics , Histocompatibility Antigens Class II/metabolism , Adult , Animals , Bone Marrow Cells/metabolism , Bone Marrow Transplantation/methods , Cell Lineage , Female , Healthy Volunteers , Hematopoietic Stem Cells/cytology , Hematopoietic Stem Cells/physiology , Humans , Intramolecular Oxidoreductases/metabolism , Macrophage Migration-Inhibitory Factors/metabolism , Male , Mice , Mice, Inbred C57BL , Signal Transduction
16.
Med Image Anal ; 67: 101822, 2021 01.
Article in English | MEDLINE | ID: mdl-33166774

ABSTRACT

Methods for deep learning-based medical image registration have only recently approached the quality of classical model-based image alignment. The dual challenge of a very large trainable parameter space and often insufficient availability of expert-supervised correspondence annotations has led to slower progress compared to other domains such as image segmentation. Yet, image registration could also benefit more directly from an iterative solution than segmentation. We therefore believe that significant improvements, in particular for multi-modal registration, can be achieved by disentangling appearance-based feature learning and deformation estimation. In this work, we examine an end-to-end trainable, weakly supervised deep learning-based feature extraction approach that is able to map the complex appearance to a common space. Our results on thoracoabdominal CT and MRI image registration show that the proposed method compares favourably to state-of-the-art hand-crafted multi-modal features, Mutual Information-based approaches and fully integrated CNN-based methods, and handles even the limitation of small and only weakly labeled training datasets.


Subject(s)
Imaging, Three-Dimensional , Magnetic Resonance Imaging , Humans , Supervised Machine Learning
17.
Cell ; 180(3): 552-567.e25, 2020 02 06.
Article in English | MEDLINE | ID: mdl-32004462

ABSTRACT

Cognitive faculties such as imagination, planning, and decision-making entail the ability to represent hypothetical experience. Crucially, animal behavior in natural settings implies that the brain can represent hypothetical future experience not only quickly but also constantly over time, as external events continually unfold. To determine how this is possible, we recorded neural activity in the hippocampus of rats navigating a maze with multiple spatial paths. We found neural activity encoding two possible future scenarios (two upcoming maze paths) in constant alternation at 8 Hz: one scenario per ∼125-ms cycle. Further, we found that the underlying dynamics of cycling (both inter- and intra-cycle dynamics) generalized across qualitatively different representational correlates (location and direction). Notably, cycling occurred across moving behaviors, including during running. These findings identify a general dynamic process capable of quickly and continually representing hypothetical experience, including that of multiple possible futures.


Subject(s)
Behavior, Animal/physiology , Cognition/physiology , Decision Making/physiology , Hippocampus/physiology , Action Potentials/physiology , Animals , Locomotion/physiology , Male , Maze Learning/physiology , Nerve Net/physiology , Neurons/physiology , Rats , Rats, Long-Evans , Theta Rhythm/physiology
18.
IEEE Trans Med Imaging ; 39(3): 777-786, 2020 03.
Article in English | MEDLINE | ID: mdl-31425023

ABSTRACT

In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS 2018, which received 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained with the public RESECT database, and then ranked based on a test dataset of 10 additional cases with identical data curation and annotation protocols as the RESECT database. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.


Subject(s)
Algorithms , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neurosurgical Procedures/methods , Surgery, Computer-Assisted/methods , Ultrasonography/methods , Brain/surgery , Brain Neoplasms/diagnostic imaging , Databases, Factual , Glioma/diagnostic imaging , Glioma/surgery , Humans
19.
Oncogene ; 39(9): 1997-2008, 2020 02.
Article in English | MEDLINE | ID: mdl-31772329

ABSTRACT

Chronic lymphocytic leukemia (CLL) is a malignancy of mature B lymphocytes. The microenvironment of the CLL cells is a vital element in the regulation of the survival of these malignant cells. CLL cell longevity is dependent on external signals originating from cells in their microenvironment, including secreted and surface-bound factors. Dendritic cells (DCs) play an important part in the tumor microenvironment, but their role in the CLL bone marrow (BM) niche has not been studied. We show here that CLL cells induce an accumulation of bone marrow dendritic cells (BMDCs). Depletion of this population attenuates disease expansion. Our results show that the support of the microenvironment is partly dependent on CD84, a cell surface molecule belonging to the Signaling Lymphocyte Activating Molecule (SLAM) family of immunoreceptors. Our results suggest a novel therapeutic strategy whereby eliminating BMDCs or blocking the CD84 expressed on these cells may reduce the tumor load.


Subject(s)
Bone Marrow/pathology , Dendritic Cells/pathology , Leukemia, Lymphocytic, Chronic, B-Cell/pathology , Signaling Lymphocytic Activation Molecule Family/metabolism , Tumor Microenvironment/immunology , Animals , Apoptosis , Bone Marrow/immunology , Bone Marrow/metabolism , Cell Proliferation , Dendritic Cells/immunology , Dendritic Cells/metabolism , Female , Humans , Leukemia, Lymphocytic, Chronic, B-Cell/immunology , Leukemia, Lymphocytic, Chronic, B-Cell/metabolism , Mice , Mice, Transgenic , Prognosis , Tumor Cells, Cultured
20.
Int J Comput Assist Radiol Surg ; 15(2): 269-276, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31741286

ABSTRACT

PURPOSE: Nonlinear multimodal image registration, for example, the fusion of computed tomography (CT) and magnetic resonance imaging (MRI), fundamentally depends on a definition of image similarity. Previous methods that derived modality-invariant representations focused on either global statistical grayscale relations or local structural similarity, both of which are prone to local optima. In contrast to most learning-based methods that rely on strong supervision of aligned multimodal image pairs, we aim to overcome this limitation for further practical use cases. METHODS: We propose a new concept that exploits anatomical shape information and requires only segmentation labels for both modalities individually. First, a shape-constrained encoder-decoder segmentation network without skip connections is jointly trained on labeled CT and MRI inputs. Second, an iterative energy-based minimization scheme is introduced that relies on the capability of the network to generate intermediate nonlinear shape representations. This further eases the multimodal alignment in the case of large deformations. RESULTS: Our novel approach robustly and accurately aligns 3D scans from the multimodal whole-heart segmentation dataset, outperforming classical unsupervised frameworks. Since both parts of our method rely on (stochastic) gradient optimization, it can be easily integrated in deep learning frameworks and executed on GPUs. CONCLUSIONS: We present an integrated approach for weakly supervised multimodal image registration. Achieving promising results due to the exploration of intermediate shape features as registration guidance encourages further research in this direction.
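The iterative, energy-based stage can be pictured as instance-wise gradient descent on transform parameters so that the warped shape representation of the moving image matches that of the fixed image; the sketch below does this for a 2D affine transform with MSE as the energy, a simplification of, and an assumption about, the actual scheme.

```python
# Hedged sketch (not the authors' pipeline): optimise an affine transform by gradient
# descent so that the moving image's soft shape representation aligns with the fixed one.
import torch
import torch.nn.functional as F

def register_shapes(shape_fix, shape_mov, steps=100, lr=0.01):
    """shape_*: (1, C, H, W) soft segmentation / shape representations from the network."""
    theta = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], requires_grad=True)  # 2x3 affine
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        grid = F.affine_grid(theta.unsqueeze(0), shape_fix.shape, align_corners=False)
        warped = F.grid_sample(shape_mov, grid, align_corners=False)
        loss = F.mse_loss(warped, shape_fix)            # energy to be minimised
        opt.zero_grad(); loss.backward(); opt.step()
    return theta.detach(), loss.item()

fix = torch.zeros(1, 1, 64, 64); fix[:, :, 20:40, 20:40] = 1.0
mov = torch.zeros(1, 1, 64, 64); mov[:, :, 25:45, 25:45] = 1.0   # same square, shifted
theta, final_loss = register_shapes(fix, mov)
print(theta, final_loss)          # translation components move toward the simulated shift
```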


Subject(s)
Imaging, Three-Dimensional/methods , Multimodal Imaging/methods , Deep Learning , Humans , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods