Results 1 - 15 of 15
1.
Ergonomics ; 1-17, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515318

ABSTRACT

This paper examines opportunities and challenges of integrating augmented reality (AR) into education and investigates requirements to enable instructors to author AR educational experiences. Although AR technology is recognised for its potential in educational enhancement, it poses challenges for instructors creating AR-based experiences due to their limited digital skills and the complexity of 3D authoring tools. Semi-structured interviews with 17 aviation instructors identified current pedagogical approaches, gaps, and potential applications of AR in aviation weather education. Additionally, results highlighted the benefits of AR and obstacles to its integration into education, followed by outlining design priorities and user needs for educational AR authoring. For AR authoring toolkit development, this study recommended incorporating interactive AR lesson modules, early development of user requirements, and prebuilt AR modules. Findings will guide the development of a 3D authoring toolkit for non-technologist instructors, enabling wider AR use in aviation weather education and other educational fields.


Research interviews with aviation instructors were conducted to derive design implications of AR authoring toolkits for non-technologist instructors. Key findings highlighted gaps in aviation weather education, potential AR applications, and barriers to AR in education. Design recommendations emphasised incorporating interactive AR lesson modules, initial user requirements, and prebuilt AR modules.

2.
Front Psychol ; 12: 553015, 2021.
Article in English | MEDLINE | ID: mdl-33732174

ABSTRACT

This research assessed how the performance and team skills of three-person teams working with an Intelligent Team Tutoring System (ITTS) on a virtual military surveillance task were affected by feedback privacy, participant role, task experience, prior team experience, and teammate familiarity. Previous work in Intelligent Tutoring Systems (ITSs) has focused on outcomes for task skill training for individual learners. As research extends into intelligent tutoring for teams, both task skills and team skills are necessary for good team performance. This work includes a brief review of previous research on ITTSs, feedback, teams, and teamwork, including the recounting of two categories of a framework of teamwork performance, Communication and Cognition, which are relevant to the present study. This research examines the effects of an intelligent agent, as well as features of the team, its members, and the task being undertaken, on team communication (measured by relevant key-presses) and team situation awareness (as measured by scores on a quiz). Thirty-seven teams of three participants, each at their own computer running a multiplayer surveillance simulation, were given just-in-time private (individually delivered) or public (team-delivered) performance feedback during four 5-min trials. In the fourth trial, two of the three participants switched roles. Feedback type, teamwork experience, and teammate familiarity had no statistically significant effect on communication or team situation awareness. However, higher levels of role experience and task experience showed significant and medium-sized effects on communication performance. Results, based on performance data and structured interview responses, also revealed areas of improvement in future feedback design and a potential benchmark for feedback frequency in an action-oriented serious game-based ITTS. 
Among the conclusions are six design objectives for future ITTSs, establishing a foundation for future research on designing effective ITTSs that train interpersonal skills to nascent teams.

3.
J Digit Imaging ; 30(6): 738-750, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28585063

ABSTRACT

Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Add this to the rapid adoption of mobile devices for everyday work and the need to visualize fMRI data on tablets or smartphones arises. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
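The abstract reports only the achieved frame rates; the core step of the technique it benchmarks, front-to-back alpha compositing along each ray, can be sketched roughly as follows. This is a toy 1-D volume with a made-up constant-alpha transfer function, not the paper's implementation; the sampling interval governs the samples-per-ray versus speed trade-off the reported frame rates depend on.

```python
import numpy as np

def composite_ray(samples):
    """Front-to-back alpha compositing along one ray, the core step of
    volume raycasting. Each sample is a (color, alpha) pair; the loop
    terminates early once accumulated opacity is nearly 1."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early-ray termination
            break
    return color, alpha

def cast_ray(volume_1d, step):
    """Sample a toy 1-D scalar volume at a given interval. A coarser step
    means fewer samples per ray, trading image quality for speed."""
    positions = np.arange(0, len(volume_1d), step)
    # Toy transfer function: sample value as color, constant alpha of 0.1.
    samples = [(float(volume_1d[int(p)]), 0.1) for p in positions]
    return composite_ray(samples)
```

With a uniform volume, a step of 1 accumulates more opacity than a step of 2, mirroring the sampling-interval dependence of the frame rates reported above.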


Subject(s)
Brain/diagnostic imaging; Computers, Handheld; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Telemedicine/instrumentation; Humans; Smartphone; Telemedicine/methods
4.
Comput Biol Med ; 61: 138-43, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25909641

ABSTRACT

In the medical field, digital images are present in diagnosis, pre-operative planning, minimally invasive surgery, instruction, and training. The use of medical digital imaging has afforded new ways to interact with a patient, such as seeing fine details inside a body. This increased usage also raises many basic research questions on human perception and performance when utilizing these images. The work presented here attempts to answer the question: How would adding the stereopsis depth cue affect relative position tasks in a medical context compared to a monoscopic view? By designing and conducting a study to isolate the differences between monoscopic 3D and stereoscopic 3D displays in a relative position task, the following hypothesis was tested: stereoscopic 3D displays are beneficial over monoscopic 3D displays for relative position judgment tasks in a medical visualization setting. Forty-four medical students completed a series of relative position judgment tasks. Consistent with the hypothesis, the stereoscopic condition yielded higher scores than the monoscopic condition.


Subject(s)
Diagnostic Imaging/methods; Imaging, Three-Dimensional/methods; Models, Theoretical; Humans
5.
J Laparoendosc Adv Surg Tech A ; 23(1): 65-70, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23101794

ABSTRACT

Visualization of medical data in three-dimensional (3D) or two-dimensional (2D) views is a complex area of research. In many fields 3D views are used to understand the shape of an object, and 2D views are used to understand spatial relationships. It is unclear what role 2D and 3D views play in the medical field. Using 3D views can potentially shorten the learning curve associated with traditional 2D views by providing a whole representation of the patient's anatomy. However, 3D views also pose challenges compared with 2D. The current study expands on a previous study to evaluate the mental workload associated with both 2D and 3D views. Twenty-five first-year medical students were asked to localize three anatomical structures (gallbladder, celiac trunk, and superior mesenteric artery) in either 2D or 3D environments. Accuracy and time were taken as the objective measures of mental workload. The NASA Task Load Index (NASA-TLX) was used as a subjective measure of mental workload. Results showed that participants viewing in 3D had higher localization accuracy and a lower subjective measure of mental workload, specifically on the mental demand component of the NASA-TLX. Results from this study may prove useful for designing curricula in anatomy education and improving training procedures for surgeons.
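For context on the subjective measure: the NASA-TLX overall workload score combines six subscale ratings (mental, physical, and temporal demand, performance, effort, and frustration), each on a 0-100 scale. A minimal sketch of the standard scoring; the ratings and weights below are made up for illustration, not data from the study.

```python
def nasa_tlx(ratings, weights=None):
    """Overall NASA-TLX workload from six subscale ratings (0-100).
    With no weights this is the unweighted ('raw TLX') mean; otherwise
    each subscale is weighted by its tally from the 15 pairwise
    comparisons of the standard procedure."""
    if weights is None:
        return sum(ratings.values()) / len(ratings)
    total = sum(weights.values())  # 15 in the standard procedure
    return sum(ratings[k] * weights[k] for k in ratings) / total

# Made-up ratings for illustration only:
ratings = {"mental": 60, "physical": 10, "temporal": 40,
           "performance": 30, "effort": 50, "frustration": 20}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
raw = nasa_tlx(ratings)                # 35.0
weighted = nasa_tlx(ratings, weights)  # 44.0
```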


Subject(s)
Anatomy; Diagnostic Imaging; Imaging, Three-Dimensional; Mental Processes; Task Performance and Analysis; Workload; Humans; Image Processing, Computer-Assisted; Software
6.
Comput Biol Med ; 42(12): 1170-8, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23099211

ABSTRACT

Segmenting tumors from grayscale medical image data can be difficult due to the close intensity values between tumor and healthy tissue. This paper presents a study that demonstrates how colorizing CT images prior to segmentation can address this problem. Colorizing the data a priori accentuates the tissue density differences between tumor and healthy tissue, thereby allowing for easier identification of the tumor tissue(s). The method presented allows pixels representing tumor and healthy tissues to be colorized distinctly in an accurate and efficient manner. The associated segmentation process is then tailored to utilize this color data. It is shown that colorization significantly decreases segmentation time and allows the method to be performed on commodity hardware. To show the effectiveness of the method, a basic segmentation method, thresholding, was implemented with and without colorization. To evaluate the method, False Positives (FP) and False Negatives (FN) were calculated from 10 datasets (476 slices) with tumors of varying size and tissue composition. The colorization method demonstrated statistically significant differences for lower FP in nine out of 10 cases and lower FN in five out of 10 datasets.
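A rough sketch of the idea, assuming a grayscale slice in Hounsfield units as a NumPy array. The density bands, colors, and exact-match thresholding rule below are illustrative stand-ins, not the paper's actual values or method.

```python
import numpy as np

def colorize_ct(slice_hu, bands):
    """Map CT intensities (Hounsfield units) to distinct RGB colors per
    tissue-density band, accentuating tumor vs. healthy tissue. `bands`
    is a list of ((lo, hi), (r, g, b)) tuples; the limits here are
    illustrative, not the paper's values."""
    rgb = np.zeros(slice_hu.shape + (3,), dtype=np.uint8)
    for (lo, hi), color in bands:
        mask = (slice_hu >= lo) & (slice_hu < hi)
        rgb[mask] = color
    return rgb

def threshold_segment(rgb, target_color):
    """Basic thresholding tailored to the colorized data: select pixels
    whose color matches the band assigned to tumor-density tissue."""
    return np.all(rgb == target_color, axis=-1)

# Toy 2x2 "slice": one tumor-density pixel (60 HU) among other tissues.
slice_hu = np.array([[60, -500], [30, 200]])
bands = [((-1000, 0), (0, 0, 255)),       # air/fat -> blue
         ((0, 50), (0, 255, 0)),          # soft tissue -> green
         ((50, 100), (255, 0, 0)),        # tumor-density band -> red
         ((100, 3000), (255, 255, 255))]  # bone -> white
mask = threshold_segment(colorize_ct(slice_hu, bands), (255, 0, 0))
```

Because the colorization step collapses each density band to a single color, the subsequent threshold needs only an exact color comparison, which is what makes the tailored segmentation cheap enough for commodity hardware.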


Subject(s)
Color; Image Processing, Computer-Assisted/methods; Neoplasms/diagnosis; Neoplasms/pathology; Tomography, X-Ray Computed/methods; Algorithms; Databases, Factual; Humans
7.
IEEE Trans Vis Comput Graph ; 18(4): 581-8, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22402685

ABSTRACT

Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
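The ray-intersection model mentioned above follows from similar triangles: a point rendered at depth z behind the screen for a CoP at distance d is perceived by a viewer at distance d' at depth z·d'/d, so forward displacement (d' < d) predicts compression and backward displacement expansion, matching the direction of the observed effects (though the study found the actual distortion smaller than this prediction). A small sketch, with an illustrative 6.5 cm interpupillary distance:

```python
def rendered_disparity(z, cop_dist, ipd=0.065):
    """On-screen disparity (m) of a point z meters behind the screen,
    rendered correctly for a viewer at the center of projection."""
    return ipd * z / (z + cop_dist)

def perceived_depth(s, viewer_dist, ipd=0.065):
    """Depth behind the screen at which the two eye rays intersect for a
    viewer at viewer_dist who sees on-screen disparity s."""
    return s * viewer_dist / (ipd - s)

# A shape 0.5 m behind the screen, rendered for a CoP 2.0 m away:
s = rendered_disparity(0.5, 2.0)
compressed = perceived_depth(s, 1.5)  # forward displacement: 0.375 m
expanded = perceived_depth(s, 2.5)    # backward displacement: 0.625 m
```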


Subject(s)
Depth Perception; User-Computer Interface; Computer Graphics; Environment; Female; Humans; Male
8.
Stud Health Technol Inform ; 163: 343-7, 2011.
Article in English | MEDLINE | ID: mdl-21335815

ABSTRACT

Graphics technology has extended medical imaging tools beyond the radiology suite to the hands of surgeons and doctors. However, a common issue with most medical imaging software is its added complexity for non-radiologists. This paper presents the development of a unique software toolset that is highly customizable and targeted at general physicians as well as medical specialists. The core functionality includes features such as viewing medical images in two- and three-dimensional representations, clipping, tissue windowing, and coloring. Additional features, such as tumor segmentation, tissue deformation, and surgical planning, can be loaded in the form of 'plug-ins'. This keeps the software lightweight and easy to use while still giving users the flexibility to add the features they need, thus catering to a wide range of users.


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Radiology Information Systems; Software; User-Computer Interface; Computer Graphics; Humans; Image Enhancement/methods; Programming Languages; Reproducibility of Results; Sensitivity and Specificity; Software Design
9.
Comput Biol Med ; 41(1): 56-65, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21146165

ABSTRACT

Automatic segmentation of tumors is a complicated and difficult process, as most tumors are rarely clearly delineated from healthy tissues. A new probabilistic segmentation method was developed to efficiently segment tumors within CT data and to improve the use of digital medical data in diagnosis. Image data are first enhanced by manually setting the appropriate window center and width, and, if needed, applying a sharpening or noise removal filter. To initialize the segmentation process, a user places a seed point within the object of interest and defines a search region for segmentation. Based on the pixels' spatial and intensity properties, a probabilistic selection criterion is used to extract pixels with a high probability of belonging to the object. To facilitate the segmentation of multiple slices, an automatic seed selection algorithm was developed to keep the seeds in the object as its shape and/or location changes between consecutive slices. The seed selection algorithm performs a greedy search for pixels with intensity matching the original seed point, close to its location. A total of ten CT datasets were used as test cases, each with varying difficulty in terms of automatic segmentation. Five test cases had mean false positive error rates less than 10%, and four test cases had mean false negative error rates less than 10% when compared to manual segmentation of those tumors by radiologists.
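The pipeline described above amounts to seeded region growing under a probabilistic acceptance rule, plus a greedy seed search for each subsequent slice. A toy sketch, substituting a simple Gaussian intensity-similarity probability for the paper's full spatial-and-intensity criterion; the parameter values are assumptions for illustration:

```python
import numpy as np
from collections import deque

def segment_slice(img, seed, radius=10, prob_thresh=0.5, sigma=20.0):
    """Grow a region from a user-placed seed: a 4-connected neighbor is
    accepted when a Gaussian of its intensity difference from the seed
    exceeds prob_thresh, restricted to a square search region around the
    seed. (Illustrative probability model, not the paper's.)"""
    sy, sx = seed
    seed_val = float(img[seed])
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not region[ny, nx]
                    and abs(ny - sy) <= radius and abs(nx - sx) <= radius):
                p = np.exp(-((float(img[ny, nx]) - seed_val) ** 2)
                           / (2.0 * sigma ** 2))
                if p > prob_thresh:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

def propagate_seed(next_img, prev_seed, seed_intensity, window=5):
    """Greedy seed selection for the next slice: scan a small window
    around the previous seed location and pick the pixel whose intensity
    best matches the previous seed's intensity."""
    sy, sx = prev_seed
    best, best_cost = prev_seed, float("inf")
    for y in range(max(0, sy - window), min(next_img.shape[0], sy + window + 1)):
        for x in range(max(0, sx - window), min(next_img.shape[1], sx + window + 1)):
            cost = abs(float(next_img[y, x]) - seed_intensity)
            if cost < best_cost:
                best, best_cost = (y, x), cost
    return best
```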


Subject(s)
Algorithms; Computational Biology/methods; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Tomography, X-Ray Computed/methods; Humans; Reproducibility of Results
10.
Comput Biol Med ; 39(10): 869-78, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19647818

ABSTRACT

A new segmentation method using a fuzzy rule based system to segment tumors in three-dimensional CT data was developed. To initialize the segmentation process, the user selects a region of interest (ROI) within the tumor in the first image of the CT study set. Using the ROI's spatial and intensity properties, fuzzy inputs are generated for use in the fuzzy rule inference system. With a set of predefined fuzzy rules, the system generates a defuzzified output for every pixel in terms of similarity to the object. Pixels with the highest similarity values are selected as tumor. This process is automatically repeated for every subsequent slice in the CT set without further user input, as the segmented region from the previous slice is used as the ROI for the current slice. This propagates information from previous slices into the segmentation of the current slice. The membership functions used during the fuzzification and defuzzification processes adapt to changes in the size and pixel intensities of the current ROI. The method is highly customizable to suit the different needs of users, requiring information from only a single two-dimensional image. The method successfully segmented the tumor in seven of the 10 CT datasets with <10% false positive errors and in five test cases with <10% false negative errors. The segmentation statistics also showed a high repeatability factor, with low values of inter- and intra-user variability for both methods.
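A minimal sketch of the fuzzy-membership idea: a single triangular membership function whose limits adapt to the current ROI's intensity statistics stands in for the paper's full rule base (the spread heuristic below is an assumption, not the published method):

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, peak of 1 at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def similarity(img, roi_mask):
    """Defuzzified per-pixel 'similarity to tumor' under one rule:
    intensity near the ROI mean is similar. The membership limits adapt
    to the current ROI's statistics; the 3-sigma spread is a heuristic
    chosen for this sketch."""
    vals = img[roi_mask].astype(float)
    mean = vals.mean()
    spread = max(vals.std() * 3.0, 1.0)  # assumed adaptive width
    return triangular(img.astype(float), mean - spread, mean, mean + spread)
```

Thresholding the similarity map would then yield the segmented region, which becomes the ROI for the next slice, giving the slice-to-slice propagation the abstract describes.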


Subject(s)
Fuzzy Logic; Image Interpretation, Computer-Assisted; Neoplasms/diagnostic imaging; Tomography, X-Ray Computed/methods; Humans
11.
Stud Health Technol Inform ; 142: 97-102, 2009.
Article in English | MEDLINE | ID: mdl-19377123

ABSTRACT

The proliferation of virtual reality visualization and interaction technologies has changed the way medical image data are analyzed and processed. This paper presents a multi-modal environment that combines a virtual reality application with a desktop application for collaborative surgical planning. Both visualization applications can function independently but can also be synced over a network connection for collaborative work. Any change to either application is immediately synced to the other. This is an efficient collaboration tool that allows multiple teams of doctors, with only an internet connection, to visualize and interact with the same patient data simultaneously. With this multi-modal framework, one team working in the VR environment and another team working on a desktop machine at a remote location can collaborate in the examination and discussion required for procedures such as diagnosis, surgical planning, teaching, and tele-mentoring.


Subject(s)
Computer Simulation; Cooperative Behavior; General Surgery/organization & administration; Planning Techniques; User-Computer Interface
12.
J Laparoendosc Adv Surg Tech A ; 19 Suppl 1: S211-7, 2009 Apr.
Article in English | MEDLINE | ID: mdl-18999974

ABSTRACT

Visualizing patient data in a three-dimensional (3D) representation can be an effective surgical planning tool. As medical imaging technologies improve with faster and higher resolution scans, the use of virtual reality for interacting with medical images adds another level of realism to a 3D representation. The software framework presented in this paper is designed to load and display any DICOM/PACS-compatible 3D image data for visualization and interaction in an immersive virtual environment. In "examiner" mode, the surgeon can interact with a 3D virtual model of the patient using an intuitive set of controls designed to allow slicing, coloring, and windowing of the image to show different tissue densities and enhance important structures. In the simulated "endoscopic camera" mode, the surgeon sees through the point of view of a virtual endoscopic camera to navigate inside the patient. These tools allow the surgeon to perform virtual endoscopy on any suitable structure. The software is highly scalable, as it can run on anything from a single desktop computer to a cluster of computers driving an immersive multiprojection virtual environment. By wearing a pair of stereo glasses, a surgeon becomes immersed within the model itself, providing a sense of realism, as if the surgeon is "inside" the patient.


Subject(s)
Endoscopy; Surgical Procedures, Operative; User-Computer Interface; Humans; Software
13.
J Laparoendosc Adv Surg Tech A ; 18(5): 697-706, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18803512

ABSTRACT

The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies in use in other fields have yet to be fully applied in medicine. It is our estimation that usability is the key barrier keeping this new technology from wider use by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques but also features powerful yet simple-to-use interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Last, a desktop application was designed to provide a simple visualization tool that can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.


Subject(s)
Image Processing, Computer-Assisted/instrumentation; Magnetic Resonance Imaging; Radiology Information Systems/instrumentation; Tomography, X-Ray Computed; User-Computer Interface; Data Display; Humans; Software
14.
Stud Health Technol Inform ; 132: 120-2, 2008.
Article in English | MEDLINE | ID: mdl-18391270

ABSTRACT

An immersive virtual environment for viewing and interacting with three-dimensional representations of medical image data is presented. Using a newly developed automatic segmentation method, a segmented object (e.g., tumor or organ) can also be viewed in the context of the original patient data. Real time interaction is established using joystick movements and button presses on a wireless gamepad. Several open-source platforms have been utilized, such as DCMTK for processing of DICOM formatted data, Coin3D for scenegraph management, SimVoleon for volume rendering, and VRJuggler to handle the immersive visualization. The application allows the user to manipulate representations with features such as fast pseudo-coloring to highlight details of the patient data, windowing to select a range of tissue densities for display, and multiple clipping planes to allow the user to slice into the patient.


Subject(s)
Computer Simulation; Neoplasms/pathology; User-Computer Interface; Algorithms; Humans; Imaging, Three-Dimensional; Software
15.
Behav Ther ; 38(1): 39-48, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17292693

ABSTRACT

This report examined whether Virtual Reality Exposure Therapy (VRET) could be used in the treatment of posttraumatic stress disorder (PTSD) symptoms in the aftermath of a serious motor vehicle accident. Six individuals reporting either full or severe subsyndromal PTSD completed 10 sessions of VRET, which was conducted using software designed to create real-time driving scenarios. Results indicated significant reductions in posttrauma symptoms involving reexperiencing, avoidance, and emotional numbing, with effect sizes ranging from d=.79 to d=1.49. Indices of clinically significant and reliable change suggested that the magnitude of these changes was meaningful. Additionally, high levels of perceived reality ("presence") within the virtual driving situation were reported, and patients reported satisfaction with treatment. Results are discussed in light of the possibility for VRET to be useful in guiding exposure in the treatment of PTSD following road accidents.


Subject(s)
Accidents, Traffic/psychology; Reality Therapy/methods; Stress Disorders, Post-Traumatic/etiology; Stress Disorders, Post-Traumatic/therapy; User-Computer Interface; Confusion/epidemiology; Escape Reaction; Female; Habituation, Psychophysiologic; Humans; Male; Middle Aged; Nausea/epidemiology; Severity of Illness Index; Software; Stress Disorders, Post-Traumatic/diagnosis; Surveys and Questionnaires; Treatment Outcome