Results 1 - 4 of 4
1.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2993-2996, 2021 11.
Article in English | MEDLINE | ID: mdl-34891874

ABSTRACT

TRUS-MR fusion guided biopsy depends heavily on the quality of alignment between the pre-operative Magnetic Resonance (MR) image and the live trans-rectal ultrasound (TRUS) image during biopsy. Many factors influence prostate alignment during the procedure, such as rigid motion due to patient movement and deformation of the prostate due to probe pressure. For MR-TRUS alignment during a live procedure, both the efficiency and the accuracy of the algorithm play an important role. In this paper, we have designed a comprehensive framework for fusion-based biopsy using an end-to-end deep learning network that performs both rigid and deformation correction. Handling both corrections in a single network reduces the computation time required for live TRUS-MR alignment. We used 6500 images from 34 subjects for this study. Our proposed registration pipeline provides a Target Registration Error (TRE) of 2.51 mm after rigid and deformation correction on an unseen patient dataset. In addition, with a total computation time of 70 ms, we achieve a rendering rate of 14 frames per second (FPS), which makes our network well suited for live procedures. Clinical Relevance: The literature shows that systematic biopsy, the standard method for prostate biopsy sampling, has a high false negative rate. TRUS-MR fusion guided biopsy reduces the false negative rate of sampling in prostate biopsy; therefore, a live TRUS-MR fusion framework is helpful for prostate biopsy clinical procedures.
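The reported 14 FPS rendering rate follows directly from the 70 ms per-frame computation time; a minimal sketch of that arithmetic (the function name is illustrative, not from the paper):

```python
# Rendering rate achievable given a fixed per-frame computation time.
def frames_per_second(latency_ms: float) -> float:
    """Upper bound on rendering rate for a given per-frame latency in ms."""
    return 1000.0 / latency_ms

# 70 ms per TRUS-MR alignment, as reported in the abstract:
print(frames_per_second(70.0))  # ~14.29, consistent with the reported 14 FPS
```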


Subject(s)
Deep Learning , Prostatic Neoplasms , Humans , Image-Guided Biopsy , Male , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Ultrasonography
2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3209-3212, 2021 11.
Article in English | MEDLINE | ID: mdl-34891924

ABSTRACT

Longitudinal follicle tracking is needed in clinical practice for diagnosis and management in assisted reproduction. Follicles are tracked over the in-vitro fertilization (IVF) cycle, and this analysis is usually performed manually by a medical practitioner. Manual analysis is challenging and prone to error, as it is largely operator dependent. In this paper, we propose a two-stage framework to address the clinical need for follicular growth tracking. The first stage comprises an unsupervised deep learning network, SFR-Net, that automates registration of every follicle across the IVF cycle. SFR-Net is composed of the standard 3DUNet [1] and Multi-Scale Residual Blocks (MSRB) [2] in order to register follicles of varying sizes. In the second stage, we use the registration result to track individual follicles across the IVF cycle. The 3D Transvaginal Ultrasound (3D TVUS) volumes were acquired from 26 subjects every 2-3 days, resulting in a total of 96 volume pairs for the registration and tracking task. On the test dataset we achieved an average DICE score of 85.84% for the follicle registration task, and we are successfully able to track follicles above 4 mm. Ours is a novel attempt at automated tracking of follicular growth [3]. Clinical Relevance: Accurate tracking of follicle count and growth is of paramount importance to increasing the effectiveness of the IVF procedure. Correct predictions can help doctors provide better counselling to patients and individualize treatment for ovarian stimulation. A favorable outcome of this assisted reproductive technique depends on estimates of the quality and quantity of the follicular pool. Therefore, automated longitudinal tracking of follicular growth is in high demand in assisted reproduction clinical practice [4].
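The DICE score used to evaluate registration measures the overlap between two binary masks; a minimal stand-alone sketch (the paper operates on 3D volumes, flattened here to simple 0/1 lists purely for illustration):

```python
def dice_score(a, b):
    """DICE overlap between two binary masks (iterables of 0/1).
    2 * |A ∩ B| / (|A| + |B|); 1.0 for two empty masks by convention."""
    a, b = list(a), list(b)
    intersection = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0

# Example: masks agree on 3 of their foreground voxels (4 each):
print(dice_score([1, 1, 1, 0, 1], [1, 1, 1, 1, 0]))  # 0.75
```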


Subject(s)
Deep Learning , Fertilization in Vitro , Humans , Ovulation Induction , Reproduction , Ultrasonography
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2109-2112, 2020 07.
Article in English | MEDLINE | ID: mdl-33018422

ABSTRACT

Quantification of ovarian and follicular volume and follicle count is performed in clinical practice for diagnosis and management in assisted reproduction. Ovarian volume and Antral Follicle Count (AFC) are typically tracked over the ovulation cycle. Volumetric analysis of the ovary and follicles is manual and largely operator dependent. In this manuscript, we propose a deep-learning method, S-Net, for automatic simultaneous segmentation of the ovary and follicles in 3D Transvaginal Ultrasound (TVUS). The proposed loss function restricts false detection of follicles outside the ovary. Additionally, we use a multi-layered loss to provide deep supervision for training the network. S-Net is optimized for inference time and memory while utilizing 3D context in a 2D deep-learning network. 66 3D TVUS volumes (13,200 2D image slices) were acquired from 66 subjects in this Institutional Review Board (IRB) approved study. The segmentation framework provides approximately 92% and 87% average DICE overlap with the ground truth annotations for the ovary and follicles, respectively. We obtain state-of-the-art results, with detection rates of 88%, 91%, and 98% for follicles of size 2-4 mm, 4-12 mm, and >12 mm, respectively.
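Per-size detection rates like those reported above can be computed by bucketing follicles by diameter; a hypothetical sketch (the function, data structure, and half-open bins are assumptions for illustration, with bin edges matching the 2-4 mm, 4-12 mm, and >12 mm ranges in the abstract):

```python
def detection_rate_by_size(follicles, bins=((2, 4), (4, 12), (12, float("inf")))):
    """Fraction of detected follicles per diameter bin.
    follicles: list of (diameter_mm, detected: bool) pairs.
    Returns {(lo, hi): rate or None if the bin is empty}."""
    rates = {}
    for lo, hi in bins:
        in_bin = [f for f in follicles if lo <= f[0] < hi]
        rates[(lo, hi)] = (
            sum(1 for f in in_bin if f[1]) / len(in_bin) if in_bin else None
        )
    return rates

# Toy data: one of two small follicles missed, larger ones all detected.
print(detection_rate_by_size([(3, True), (3, False), (5, True), (15, True)]))
```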


Subject(s)
Deep Learning , Ovary , Female , Humans , Imaging, Three-Dimensional , Ovarian Follicle/diagnostic imaging , Ovary/diagnostic imaging , Ultrasonography
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2133-2136, 2020 07.
Article in English | MEDLINE | ID: mdl-33018428

ABSTRACT

The Uterine Junctional Zone (JZ) is an important anatomical region in the implantation process during assisted reproduction. The JZ changes throughout the hormone stimulation cycle and has predictive value for implantation success. Despite advances in imaging techniques, assessment of the JZ remains challenging. The state-of-the-art method for assessing the JZ is largely manual, which is time consuming, depends on operator experience, and often introduces subjective bias. In this paper, we present methods for automated visualization and quantification of the JZ in three-dimensional transvaginal ultrasound imaging (3D-TVUS). The JZ is best visualized in the midcoronal plane of the 3D-TVUS uterus acquisition. We propose an algorithm pipeline that uses a deep learning model to generate a point cloud representing the surface of the endometrium. A regularized midcoronal surface passing through the point cloud is rendered to obtain the midcoronal plane. The automated solution is designed to accommodate multiple structural deformations and pathologies of the uterus. An expert assisted-reproduction clinician evaluated the results on 136 3D-TVUS volumes; reliable performance was observed in more than 89% of cases, where the automated solution reproduces, and sometimes even outperforms, the manual workflow. Automation speeds up the clinical workflow by approximately a factor of ten and reduces operator bias.
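As a much-simplified 1D analogue of fitting a regularized surface through the endometrial point cloud, consider moving-average smoothing of a noisy height profile (entirely illustrative; the abstract does not specify the paper's regularization method):

```python
def smooth_profile(heights, window=3):
    """Moving-average regularization of a 1D surface profile.
    A toy stand-in for fitting a regularized midcoronal surface
    through scattered surface points; window is clipped at the edges."""
    half = window // 2
    out = []
    for i in range(len(heights)):
        lo, hi = max(0, i - half), min(len(heights), i + half + 1)
        out.append(sum(heights[lo:hi]) / (hi - lo))
    return out

# A jagged profile is pulled toward a smooth curve:
print(smooth_profile([0.0, 10.0, 0.0, 10.0, 0.0]))
```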


Subject(s)
Deep Learning , Uterus , Embryo Implantation , Endometrium/diagnostic imaging , Female , Humans , Ultrasonography , Uterus/diagnostic imaging