Results 1 - 2 of 2
1.
Int J Comput Assist Radiol Surg; 15(9): 1477-1485, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32656685

ABSTRACT

PURPOSE: Real-time, two- (2D) and three-dimensional (3D) ultrasound (US) has been investigated as a potential alternative to fluoroscopy in various surgical and non-surgical orthopedic procedures. However, a low signal-to-noise ratio, imaging artifacts, and bone surfaces appearing several millimeters (mm) in thickness have hindered the widespread adoption of this safe imaging modality. A limited field of view and manual data collection cause additional problems during US-based orthopedic procedures. In order to overcome these limitations, various bone segmentation and registration methods have been developed. The acoustic bone shadow is an important image artifact used to identify the presence of bone boundaries in the collected US data. Information about the bone shadow region can be used (1) to guide the orthopedic surgeon or clinician to a standardized diagnostic viewing plane with minimal artifacts, and (2) as a prior feature to improve bone segmentation and registration. METHOD: In this work, we propose a computational method, based on a novel generative adversarial network (GAN) architecture, to segment bone shadow images from in vivo US scans in real time. We also show how these segmented shadow images can be incorporated, as a proxy, into a multi-feature guided convolutional neural network (CNN) architecture for real-time and accurate bone surface segmentation. Quantitative and qualitative evaluation studies are performed on 1235 scans collected from 27 subjects using two different US machines. Finally, we provide qualitative and quantitative comparison results against state-of-the-art GANs. RESULTS: We obtained a mean Dice coefficient (± standard deviation) of [Formula: see text] ([Formula: see text]) for bone shadow segmentation, showing that the method is close to manual expert annotation. Statistically significant improvements over state-of-the-art GAN methods (paired t-test, [Formula: see text]) are also obtained. Using the segmented bone shadow features, an average bone localization accuracy of 0.11 mm ([Formula: see text]) was achieved. CONCLUSIONS: The reported accurate and robust results make the proposed method promising for various orthopedic procedures. Although we did not investigate it in this work, the segmented bone shadow images could also be used as an additional feature to improve the accuracy of US-based registration methods. Further extensive validation is required to fully understand the clinical utility of the proposed method.


Subject(s)
Bone Diseases/diagnostic imaging; Bone and Bones/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Fluoroscopy; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Ultrasonography; Acoustics; Algorithms; Artifacts; Computer Simulation; Humans; Image Processing, Computer-Assisted; Orthopedic Procedures; Orthopedics; Reproducibility of Results
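
Note: the abstract above does not specify the exact GAN or multi-feature guided CNN layers, so the following is only a minimal sketch, assuming PyTorch and hypothetical layer sizes, of the general idea of feeding a pre-segmented bone shadow map as an extra input channel (a proxy feature) to a bone surface segmentation CNN, together with the mean Dice coefficient used for evaluation. It is not the authors' architecture.

```python
# Illustrative sketch only; layer choices and sizes are assumptions,
# not the architecture described in the abstract.
import torch
import torch.nn as nn


def dice_coefficient(pred, target, eps=1e-6):
    """Mean Dice coefficient between a binarized prediction and ground truth."""
    pred = (pred > 0.5).float()
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return ((2.0 * inter + eps) / (union + eps)).mean()


class ShadowGuidedSegNet(nn.Module):
    """Toy CNN that concatenates the B-mode image with a bone shadow map
    (e.g., produced by a separate shadow segmentation network) and predicts
    a bone surface probability map."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, bmode, shadow):
        # Channel-wise concatenation of B-mode US and the bone shadow map.
        return self.net(torch.cat([bmode, shadow], dim=1))


if __name__ == "__main__":
    bmode = torch.rand(1, 1, 256, 256)   # synthetic B-mode frame
    shadow = torch.rand(1, 1, 256, 256)  # synthetic shadow probability map
    surface = ShadowGuidedSegNet()(bmode, shadow)
    print(dice_coefficient(surface, (shadow > 0.5).float()).item())
```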
2.
Int J Comput Assist Radiol Surg; 14(5): 775-783, 2019 May.
Article in English | MEDLINE | ID: mdl-30868478

ABSTRACT

PURPOSE: Ultrasound (US) provides safe, real-time, two- and three-dimensional imaging. Due to these capabilities, it is considered an alternative to intra-operative fluoroscopy in various computer-assisted orthopedic surgery (CAOS) procedures. However, interpretation of the collected bone US data is difficult due to high levels of noise, various imaging artifacts, and bone surface responses appearing several millimeters (mm) in thickness. For US-guided CAOS procedures, it is essential to have a segmentation mechanism that is both robust and computationally inexpensive. METHOD: In this paper, we present a convolutional neural network-based technique for segmentation of bone surfaces from in vivo US scans. The novelty of our proposed design is that it fuses feature maps and employs multi-modal images to reduce sensitivity to variations caused by imaging artifacts and low-intensity bone boundaries. B-mode US images and their corresponding local phase filtered images are used as multi-modal inputs for the proposed fusion network. Different fusion architectures are investigated for fusing the B-mode US image and the local phase features. RESULTS: The proposed method was quantitatively and qualitatively evaluated on 546 in vivo scans collected from 14 healthy subjects. We achieved an average F-score above 95% with an average bone surface localization error of 0.2 mm. The reported improvements over the state of the art are statistically significant. CONCLUSIONS: The reported accurate and robust segmentation results make the proposed method promising for CAOS applications. Further extensive validation is required to fully understand the clinical utility of the proposed method.


Subject(s)
Bone and Bones/diagnostic imaging; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Bone and Bones/surgery; Humans; Orthopedic Procedures/methods
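
Note: the abstract above does not detail the fusion architectures that were compared, so the sketch below only illustrates, with assumed PyTorch layers and hypothetical channel sizes, two common ways of fusing B-mode US images with their local phase filtered counterparts: early (input-level) fusion and mid-level feature-map fusion. It is a sketch of the general technique, not the authors' implementation.

```python
# Illustrative sketch of two feature-fusion strategies; the actual
# architectures evaluated in the paper are not given in the abstract.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class EarlyFusionNet(nn.Module):
    """Concatenate the two modalities at the input and process them jointly."""

    def __init__(self):
        super().__init__()
        self.features = conv_block(2, 16)
        self.head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.Sigmoid())

    def forward(self, bmode, local_phase):
        return self.head(self.features(torch.cat([bmode, local_phase], dim=1)))


class MidFusionNet(nn.Module):
    """Encode each modality separately, then fuse their feature maps."""

    def __init__(self):
        super().__init__()
        self.enc_bmode = conv_block(1, 16)
        self.enc_phase = conv_block(1, 16)
        self.head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())

    def forward(self, bmode, local_phase):
        fused = torch.cat(
            [self.enc_bmode(bmode), self.enc_phase(local_phase)], dim=1
        )
        return self.head(fused)


if __name__ == "__main__":
    bmode = torch.rand(1, 1, 256, 256)  # synthetic B-mode frame
    phase = torch.rand(1, 1, 256, 256)  # synthetic local phase filtered image
    print(EarlyFusionNet()(bmode, phase).shape, MidFusionNet()(bmode, phase).shape)
```

Early fusion lets the first convolution see both modalities at once, while mid-level fusion keeps modality-specific encoders and combines their feature maps later; comparing such variants is one way to study which stage of fusion best suppresses artifact-driven variation.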