Results 1 - 12 of 12
1.
Stud Health Technol Inform ; 163: 264-70, 2011.
Article in English | MEDLINE | ID: mdl-21335801

ABSTRACT

Many projects have focused on the improvement of virtual education. We have contributed the global virtual anatomy course for teaching students in multiple locations with stereoscopic volume rendering, audio/video conferencing, and additional materials. This year we focused on further simplifying classroom deployment by using the new collaborative, web-based visualization system CoWebViz to transfer stereoscopic visualization to the classrooms. Besides the hardware installations necessary for stereoscopy, only a web browser is needed to view and interact with the remote 3D stereo visualization. The system proved stable, delivered higher-quality images, and was easier to deploy. Its success in our classrooms at the University of Chicago and Cardiff University has motivated us to continue CoWebViz development.


Subject(s)
Anatomy/education , Computer-Assisted Instruction/methods , Imaging, Three-Dimensional/methods , Internet , Models, Anatomic , Teaching/methods , User-Computer Interface , Computer Simulation , Humans , Internationality , Multimedia , Software
2.
Stud Health Technol Inform ; 132: 463-8, 2008.
Article in English | MEDLINE | ID: mdl-18391345

ABSTRACT

Many prototype projects aspire to develop a sustainable model of immersive radiological volume visualization for virtual anatomic education. Some have focused on distributed or parallel architectures. However, very few, if any, have combined multi-location, multi-directional, multi-stream sharing of video, audio, desktop applications, and parallel stereo volume rendering to converge on an open, globally scalable, and inexpensive collaborative architecture and implementation method for anatomic teaching using radiological volumes. We have focused our efforts on bringing all of this together for several years. We outline here the technology we are making available to the open source community and suggest a system implementation for creating global immersive virtual anatomy classrooms. With the releases of Access Grid 3.1 and our parallel stereo volume rendering code, inexpensive, globally scalable technology is available to enable collaborative volume visualization on an award-winning framework. Based on these technologies, immersive virtual anatomy classrooms that share educational or clinical principles can be constructed with the setup described here, moderate technological expertise, and global scalability.


Subject(s)
Anatomy/education , Computer Simulation , Cooperative Behavior , Educational Technology , Systems Integration , User-Computer Interface , Computer Peripherals , General Surgery , Humans , United States
3.
Stud Health Technol Inform ; 125: 439-44, 2007.
Article in English | MEDLINE | ID: mdl-17377320

ABSTRACT

For more than a decade, various approaches have been taken to teach anatomy using immersive virtual reality. This is the first complete anatomy course we are aware of that directly substitutes immersive virtual reality, via stereo volume visualization of clinical radiological datasets, for cadaver dissection. The students valued the new approach highly, and the overall course was very well received. Students performed well on examinations. The course efficiently added human anatomy to the University of Chicago undergraduate biology electives.


Subject(s)
Anatomy/education , Computer Simulation , User-Computer Interface , Humans , Image Processing, Computer-Assisted , Radiology , United States
4.
Stud Health Technol Inform ; 119: 518-22, 2006.
Article in English | MEDLINE | ID: mdl-16404112

ABSTRACT

Integrating anatomic nomenclature with geometric anatomic data via a web interface, while providing illustration and visualization tools, presents numerous challenges. We have developed a library of anatomic models and methods for navigating anatomic nomenclature. Building on this library and these tools, we have developed a simple yet powerful open web-based tool that uses tree navigation and selection to assemble and download self-documenting anatomic scenes in the Virtual Reality Modeling Language.
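The scene-assembly step described above can be sketched minimally, assuming the tool emits VRML 2.0 with one Inline node per selected model and a WorldInfo node documenting the selection. The model names and file paths here are illustrative placeholders, not the actual library's contents:

```python
# Hedged sketch: assemble a self-documenting VRML 2.0 scene from a
# user's tree selection of named anatomic models. Names/paths are
# hypothetical, not the real model library.

def assemble_vrml_scene(selected_models):
    """Return VRML 2.0 text inlining one external file per selected model,
    with a WorldInfo node recording which structures were chosen."""
    header = "#VRML V2.0 utf8\n"
    info = 'WorldInfo { title "Anatomic scene" info [ %s ] }\n' % ", ".join(
        '"%s"' % name for name in selected_models)
    nodes = "".join(
        'DEF %s Inline { url "%s.wrl" }\n'
        % (name.replace(" ", "_"), name.replace(" ", "_"))
        for name in selected_models)
    return header + info + nodes

scene = assemble_vrml_scene(["Liver", "Portal vein"])
```

Because each model is an `Inline` reference rather than embedded geometry, the downloaded scene stays small and the heavy meshes are fetched only when the scene is opened.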


Subject(s)
Internet , Models, Anatomic , Terminology as Topic , Computer Simulation , Humans , Information Storage and Retrieval , Programming Languages , United States , User-Computer Interface
5.
Stud Health Technol Inform ; 112: 70-9, 2005.
Article in English | MEDLINE | ID: mdl-15923717

ABSTRACT

By leveraging the advances of today's commodity graphics hardware, adopting community-proven collaboration technology, and using standard Web and Grid technologies, a flexible system is designed to enable the construction of a distributed collaborative radiological visualization application. The system builds on a prototype application as well as requirements gathered from users. Finally, constraints on the system are evaluated to complete the design process.


Subject(s)
Artificial Intelligence , Computer Communication Networks , Radiology Information Systems , Computer Systems , Humans , Teleradiology/methods , User-Computer Interface
6.
Stud Health Technol Inform ; 111: 477-81, 2005.
Article in English | MEDLINE | ID: mdl-15718782

ABSTRACT

This paper describes early technical success toward enabling high-quality distributed shared volumetric visualization of radiological data in concert with multipoint video collaboration using Grid infrastructures. Key principles are the use of commodity off-the-shelf hardware for client machines and open source software to permit deployment over a large and diverse group of sites. Key software includes the Access Grid Toolkit, the Visualization Toolkit, and Chromium.


Subject(s)
Diagnostic Imaging , Information Storage and Retrieval/methods , Radiography , United States
7.
Stud Health Technol Inform ; 111: 473-6, 2005.
Article in English | MEDLINE | ID: mdl-15718781

ABSTRACT

Creating a library of binary segmentation mask sequences for the abdominal anatomy hierarchy of SNOMED, and developing a quick, flexible, automatic method for generating iso-surface models from these named structure masks, have been primary goals of our research. This paper describes our development of a clear path for computing visualizations of arbitrary groups of organs from these masks (typically generated from Visible Human data). One use of these methods is teaching the anatomy of various organ systems in the human body using virtual reality environments.
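The mask-to-surface path described here can be illustrated at its simplest level: locating the boundary voxels of a binary segmentation mask, the set from which an iso-surface mesh would then be triangulated. This is a generic sketch, not the authors' actual pipeline:

```python
# Hedged sketch: find the boundary voxels of a binary structure mask
# (6-connectivity). A surface-extraction step (e.g. marching cubes)
# would triangulate these into an iso-surface model.

def boundary_voxels(mask):
    """mask: 3D nested list of 0/1. Return the set of (z, y, x) voxels
    that are set but touch at least one unset or out-of-bounds neighbor."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])

    def get(z, y, x):
        if 0 <= z < nz and 0 <= y < ny and 0 <= x < nx:
            return mask[z][y][x]
        return 0  # treat everything outside the volume as background

    surface = set()
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if mask[z][y][x] and any(
                    get(z + dz, y + dy, x + dx) == 0
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1))):
                    surface.add((z, y, x))
    return surface

# A solid 3x3x3 cube: every voxel except the center is on the boundary.
mask = [[[1] * 3 for _ in range(3)] for _ in range(3)]
surface = boundary_voxels(mask)
```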


Subject(s)
Computer Simulation , Diagnostic Imaging/instrumentation , Models, Anatomic , Human Body , Imaging, Three-Dimensional , Libraries, Medical , Systematized Nomenclature of Medicine , United States
8.
Stud Health Technol Inform ; 98: 347-52, 2004.
Article in English | MEDLINE | ID: mdl-15544303

ABSTRACT

Radiological volumes are typically reviewed by surgeons using cross-sections and iso-surface reconstructions. Applications that combine collaborative stereo volume visualization with symbolic anatomic information and data fusion would expand surgeons' capabilities in interpreting data and planning treatment. Such an application has not been seen clinically. We are developing methods to systematically combine symbolic anatomy (term hierarchies and iso-surface atlases) with patient data using data fusion. We describe our progress toward integrating these methods into our collaborative virtual reality application. The fully combined application will be a feature-rich stereo collaborative volume visualization environment for use by surgeons, in which DICOM datasets will self-report underlying anatomy with visual feedback. Using hierarchical navigation of SNOMED-CT anatomic terms integrated with our existing tele-immersive DICOM-based volumetric rendering application, we will display polygonal representations of anatomic systems on the fly from menus that query a database. The methods and tools involved in this application development are SNOMED-CT, DICOM, the Visible Human, volumetric fusion, and C++ on a tele-immersive platform. This application will allow us to identify structures and display polygonal representations from atlas data overlaid with the volume rendering. First, atlas data is automatically translated, rotated, and scaled to the patient data during loading using a public domain volumetric fusion algorithm. This generates a modified symbolic representation of the underlying canonical anatomy. Then, through the use of collision detection or intersection testing of various transparent polygonal representations, the polygonal structures are highlighted within the volumetric representation while the SNOMED names are displayed. Thus, structural names and polygonal models are associated with the visualized DICOM data.
This novel juxtaposition of information promises to expand surgeons' abilities to interpret images and plan treatment.
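The intersection-testing step described above can be illustrated with a toy lookup: testing a picked point in the volume against per-structure bounding extents and reporting the anatomic names of every structure it falls inside. The structures and coordinates below are hypothetical placeholders, not atlas data, and real intersection testing would use the full polygonal meshes rather than boxes:

```python
# Hedged sketch: map a picked point to the names of the structures that
# contain it, using axis-aligned bounding boxes as stand-ins for the
# transparent polygonal representations. All extents are made up.

# (min_corner, max_corner, structure name) per registered atlas structure
STRUCTURES = [
    ((10, 10, 10), (60, 50, 40), "Liver"),
    ((30, 20, 15), (45, 35, 30), "Portal vein"),
]

def structures_at(point):
    """Return names of all structures whose bounding box contains the
    (x, y, z) point picked in the rendered volume."""
    hits = []
    for lo, hi, name in STRUCTURES:
        if all(l <= p <= h for p, l, h in zip(point, lo, hi)):
            hits.append(name)
    return hits
```

In the described application the returned names would come from the SNOMED-CT hierarchy, and the matching polygonal models would be highlighted within the volume rendering.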


Subject(s)
Anatomy , User-Computer Interface , Systematized Nomenclature of Medicine , United States
9.
Dis Colon Rectum ; 46(3): 349-52, 2003 Mar.
Article in English | MEDLINE | ID: mdl-12626910

ABSTRACT

PURPOSE: A clear understanding of the intricate spatial relationships among the structures of the pelvic floor, rectum, and anal canal is essential for the treatment of numerous pathologic conditions. Virtual-reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereoscopic vision, viewer-centered perspective, large angles of view, and interactivity. We describe a novel virtual reality-based model designed to teach anorectal and pelvic floor anatomy, pathology, and surgery. METHODS: A static physical model depicting the pelvic floor and anorectum was created and digitized at 1-mm intervals in a CT scanner. Multiple software programs were used along with endoscopic images to generate a realistic interactive computer model, which was designed to be viewed on a networked, interactive, virtual-reality display (CAVE or ImmersaDesk). A standard examination of ten basic anorectal and pelvic floor anatomy questions was administered to third-year (n = 6) and fourth-year (n = 7) surgical residents. A workshop using the Virtual Pelvic Floor Model was then given, and the standard examination was readministered to evaluate the model's effectiveness as an educational instrument. RESULTS: Training on the Virtual Pelvic Floor Model produced substantial improvements in the overall average test scores for the two groups, with increases of 41 percent (P = 0.001) and 21 percent (P = 0.0007) for third-year and fourth-year residents, respectively. Resident evaluations after the workshop also confirmed the model's effectiveness for understanding pelvic anatomy. CONCLUSION: This model provides an innovative interactive educational framework that allows educators to overcome some of the barriers to teaching surgical and endoscopic principles that depend on understanding highly complex three-dimensional anatomy.
Using this collaborative, shared virtual-reality environment, teachers and students can interact from locations worldwide to manipulate the components of this model, achieving the educational goals of this project along with the potential for virtual surgery.


Subject(s)
Anal Canal/anatomy & histology , Colorectal Surgery/education , Educational Technology , Pathology/education , Pelvic Floor/anatomy & histology , Rectum/anatomy & histology , User-Computer Interface , Anal Canal/surgery , Computer Simulation , Endoscopy , Humans , Internship and Residency , Models, Anatomic , Pelvic Floor/surgery , Rectum/surgery
10.
Surgery ; 132(2): 274-7, 2002 Aug.
Article in English | MEDLINE | ID: mdl-12219023

ABSTRACT

BACKGROUND: Understanding the spatial relationships among the liver segments, and intrahepatic portal and hepatic veins is essential for surgical treatment of liver diseases. Teleimmersive virtual reality enables improved visualization over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity with remote locations. We report a successful pilot study teaching hepatic surgical principles using teleimmersion. METHODS: We developed a teleimmersive environment for teaching with biomedical models including virtual models of the liver segments and portal and hepatic veins. Using the environment, 1 instructor gave a workshop to 6 senior general surgery residents at 2 physical locations. A 24-question (36-point) examination was administered before and after the workshop. RESULTS: The workshop produced significant improvements in the mean test scores between the pretests and posttests (17.67 to 23.67, P <.02). We found no differences between residents who were with the instructor and those at the remote location. Six-month delayed testing demonstrated complete retention of new knowledge. CONCLUSIONS: The teleimmersive environment enabled surgeons to overcome some of the barriers to teaching complex surgical anatomic principles. Using teleimmersive environments, surgical educators and trainees can interact from locations worldwide using virtual anatomic information to achieve their educational goals.


Subject(s)
Computer-Assisted Instruction/methods , Education, Medical/methods , General Surgery/education , Liver Diseases/surgery , User-Computer Interface , Biliary Tract/anatomy & histology , Hepatic Veins/anatomy & histology , Humans , Liver/anatomy & histology , Liver/blood supply , Liver/surgery , Portal Vein/anatomy & histology , Surgical Procedures, Operative/education
11.
Stud Health Technol Inform ; 85: 24-30, 2002.
Article in English | MEDLINE | ID: mdl-15458055

ABSTRACT

By combining teleconferencing, tele-presence, and virtual reality, the Tele-Immersive environment enables master surgeons to teach residents in remote locations. The design and implementation of a Tele-Immersive medical educational environment, Teledu, is presented in this paper. Teledu defines a set of Tele-Immersive user interfaces for medical education. In addition, an Application Programming Interface (API) is provided so that developers can easily build applications with different requirements in this environment. With the help of this API, programmers need only design a plug-in to load their application-specific data set. The plug-in is an object-oriented data set loader; methods for rendering, handling, and interacting with the data set for each application can be programmed in the plug-in. The environment has a teacher mode and a student mode. The teacher and the students can interact with the same medical models, point, gesture, converse, and see each other.
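The plug-in pattern described here, an object-oriented loader that each application supplies for its own data set, can be sketched roughly as follows. The class and method names are hypothetical illustrations, not the actual Teledu API:

```python
# Hedged sketch of a data-set-loader plug-in interface: the environment
# defines the base class and a registry; applications subclass it to
# load, render, and handle their own data. Names are invented.

class DatasetPlugin:
    """Base class an application implements to join the environment."""
    def load(self, path):
        raise NotImplementedError
    def render(self, scene):
        raise NotImplementedError

class MeshPlugin(DatasetPlugin):
    """Example application plug-in for polygonal mesh data."""
    def load(self, path):
        self.path = path   # a real loader would parse the file here
        return self
    def render(self, scene):
        scene.append("mesh:" + self.path)

registry = {}

def register(name, plugin_cls):
    """The environment looks plug-ins up by name at load time."""
    registry[name] = plugin_cls

register("mesh", MeshPlugin)
scene = []
registry["mesh"]().load("pelvis.wrl").render(scene)
```

The point of the design is that the environment never needs to know a data format in advance; it only calls the plug-in's `load` and `render` methods.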


Subject(s)
Education, Medical/methods , General Surgery/education , Internship and Residency , Telecommunications , Telemedicine/methods , User-Computer Interface , Computer Communication Networks , Curriculum , Humans , Imaging, Three-Dimensional/methods , Pelvic Floor/anatomy & histology , Pelvic Floor/surgery , Temporal Bone/anatomy & histology , Temporal Bone/surgery
12.
Stud Health Technol Inform ; 85: 494-500, 2002.
Article in English | MEDLINE | ID: mdl-15458139

ABSTRACT

INTRODUCTION: Skill, effort, and time are required to identify and visualize anatomic structures in three dimensions from radiological data. Fundamentally, automating these processes requires a technique that uses symbolic information not in the dynamic range of the voxel data. We have been developing such a technique based on mutual information for automatic multi-modality image fusion (MIAMI Fuse, University of Michigan). This system previously demonstrated facility at fusing one voxel dataset with integrated symbolic structure information to a CT dataset (of different scale and resolution) from the same person. The next step in developing our technique was to accommodate the variability of anatomy from patient to patient by using warping to fuse our standard dataset to arbitrary patient CT datasets. METHODS: A standard symbolic information dataset was created from the full-color Visible Human Female by segmenting the liver parenchyma, portal veins, and hepatic veins and overwriting each set of voxels with a fixed color. Two arbitrarily selected patient CT scans of the abdomen were used as reference datasets. We used the warping functions in MIAMI Fuse to align the standard structure data to each patient scan. The key to successful fusion was the focused use of multiple warping control points that place themselves around the structure of interest automatically; the user assigns only a few initial control points to align the scans. Fusions 1 and 2 transformed the atlas with 27 points around the liver to CT1 and CT2, respectively. Fusion 3 transformed the atlas with 45 control points around the liver to CT1, and Fusion 4 transformed the atlas with 5 control points around the portal vein. The CT dataset is augmented with the transformed standard structure dataset, such that the warped structure masks are visualized in combination with the original patient dataset.
This combined volume visualization is then rendered interactively in stereo on the ImmersaDesk in an immersive virtual reality (VR) environment. RESULTS: The accuracy of the fusions was determined qualitatively by comparing the transformed atlas overlaid on the appropriate CT, examining where the transformed structure atlas was incorrectly overlaid (false positives) and where it was incorrectly not overlaid (false negatives). By this method, fusions 1 and 2 were correct roughly 50-75% of the time, while fusions 3 and 4 were correct roughly 75-100% of the time. The CT dataset augmented with the transformed dataset was viewed arbitrarily in user-centered perspective stereo, taking advantage of features such as scaling, windowing, and volumetric region-of-interest selection. CONCLUSIONS: This process of auto-coloring conserved structures in variable datasets is a step toward the goal of a broader, standardized automatic structure visualization method for radiological data. If successful, it would permit identification, visualization, or deletion of structures in radiological data by semi-automatically applying canonical structure information to the data (not just processing and visualizing the data's intrinsic dynamic range). More sophisticated selection of control points and patterns of warping may allow for more accurate transforms, and thus advances in visualization, simulation, education, diagnostics, and treatment planning.
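The quantity that a mutual-information registration such as MIAMI Fuse drives upward while warping can be illustrated with a textbook estimator computed from the joint intensity histogram of two aligned images. This is a generic sketch of the metric only, not the package's code or its optimizer:

```python
# Hedged sketch: estimate mutual information (in bits) between two
# aligned, equally sized images from their joint intensity histogram.
# A registration optimizer adjusts the warp to maximize this value.

from collections import Counter
from math import log2

def mutual_information(img_a, img_b):
    """img_a, img_b: equal-length sequences of (binned) intensities,
    one entry per corresponding voxel pair."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))   # joint histogram p(a, b)
    pa = Counter(img_a)                  # marginal histogram p(a)
    pb = Counter(img_b)                  # marginal histogram p(b)
    return sum(
        (c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in joint.items())
```

Identical images yield MI equal to the marginal entropy, while statistically independent images yield zero, which is why the metric works across modalities: it rewards predictable intensity pairings rather than equal intensities.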


Subject(s)
Computer Simulation , Electronic Data Processing , Hepatic Veins/diagnostic imaging , Image Enhancement , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Liver/diagnostic imaging , Portal Vein/diagnostic imaging , Tomography, X-Ray Computed , User-Computer Interface , Algorithms , Anatomy, Cross-Sectional , Female , Humans , Reference Values , Software