Results 1 - 7 of 7
1.
Stud Health Technol Inform ; 103: 93-100, 2004.
Article in English | MEDLINE | ID: mdl-15747910

ABSTRACT

Information and Communication Technology (ICT) assists healthcare professionals in the processes of diagnosis, treatment, monitoring, medication prescription, referral, information retrieval and communication, documentation and transactions. This applies to intramural, transmural and extramural conditions. About 67% of patients involved in safety incidents believe that the incidents could have been prevented had an electronic health record been available. However, preliminary research also suggests that about 30% of patient safety incidents relate to software problems. ICT might improve the quality of healthcare provided that ICT products, their administration and their use comply with essential requirements. This paper presents information about the state of the art of software quality, the measurement of software quality, quality management systems, and an evaluation and certification method.


Subject(s)
Medical Informatics/instrumentation , Medical Informatics/organization & administration , Medical Records Systems, Computerized , Software Validation , Medical Errors/prevention & control , Medical Records Systems, Computerized/instrumentation , Medical Records Systems, Computerized/organization & administration , Quality Assurance, Health Care/methods , Quality Assurance, Health Care/organization & administration , Quality Control , Risk Assessment , Safety Management/methods , Safety Management/organization & administration
2.
Article in English | MEDLINE | ID: mdl-15058407

ABSTRACT

A standard is a set of agreements between all parties involved. Delivering healthcare is a matter of co-operation: healthcare can only be delivered in a responsible fashion when the parties involved comply with standards. This becomes evident when a healthcare information infrastructure is under development. The authors provide a comprehensive introduction to standards which apply to the Electronic Healthcare Record (EHR), describe work in progress in the relevant standardisation committees, and argue for the future work such committees should undertake in pursuit of a coherent healthcare information infrastructure.


Subject(s)
Information Management/organization & administration , Information Management/standards , Medical Records Systems, Computerized/organization & administration , Medical Records Systems, Computerized/standards , Computer Communication Networks/organization & administration , Computer Communication Networks/standards , Documentation/methods , Documentation/standards , Interprofessional Relations , Medical Record Linkage/methods , Medical Record Linkage/standards
3.
Technol Health Care ; 3(2): 75-89, 1995 Oct.
Article in English | MEDLINE | ID: mdl-8574765

ABSTRACT

Virtual Environments allow a human to interact with a (computer) system in such a way that a high level of presence in a computer-synthesised world is experienced. In principle, all human senses are involved in the interaction. Many applications may benefit from this type of human-machine interfacing; however, few have emerged so far in medicine. In this paper we elaborate on some realistic potential applications of Virtual Environment technology in the field of medicine. These applications can be found in education/training, therapy, surgery, rehabilitation, diagnosis, telemedicine and biomechanics. The value added to these applications by VE technology lies in the fact that patient data or patient models may be presented to the physician in a more intuitive and natural manner. Despite this potential, the short-term feasibility of these applications can be questioned for several reasons. Firstly, the current generation of display devices has a resolution that may prove too low to achieve a sufficiently high degree of realism for medical applications. Secondly, there are no commercially available actuators for the tactile and force feedback which the physician needs for simulating contact with the (virtual) patient. Thirdly, the enormous computing power required for these applications (still) demands a considerable investment. With these limitations in mind, we believe that we are at the cradle of a whole new generation of VE applications in medicine.


Subject(s)
Computer Simulation , Medical Informatics Applications , User-Computer Interface , Diagnosis, Computer-Assisted , Education, Medical , Feasibility Studies , Feedback , Humans , Technology Assessment, Biomedical , Telemedicine
4.
Comput Biol Med ; 25(2): 139-48, 1995 Mar.
Article in English | MEDLINE | ID: mdl-7554832

ABSTRACT

The minimally invasive nature of endoscopic surgery allows operations to be performed on patients through small incisions, often under local anaesthesia. Patient recovery times and cosmetic detriment are thus greatly reduced, while overall quality of care is improved. Presently, surgeons are trained to perform endosurgical procedures in a number of ways: practising with surgical training devices, using animal models and assisting experienced surgeons. In this paper, the focus is on answering the key question: "Can virtual environment technology assist surgeons in training and maintaining endoscopic surgery skills?" Initial developments towards surgical simulators have clearly demonstrated the great potential of virtual environment technology for surgical training purposes. Breakthroughs towards surgery are expected within the next 5 to 10 years.


Subject(s)
Computer Simulation , Education, Medical, Graduate , Endoscopy , General Surgery/education , Clinical Competence , Forecasting , Humans , Models, Anatomic , Models, Biological
5.
Int J Card Imaging ; 10(3): 205-15, 1994 Sep.
Article in English | MEDLINE | ID: mdl-7876660

ABSTRACT

For clinical decision-making and documentation purposes we have developed techniques to extract, label and analyze the coronary vasculature from arteriograms in an automated, quantitative manner. Advanced image-processing techniques were applied to extract and analyze the vasculature from non-subtracted arteriograms, while artificial-intelligence techniques were employed to assign anatomical labels. Lumen diameters of 11 phantom vessels were assessed with an accuracy of 0.27 +/- 0.19 mm (d_true = 0.45 + 0.92 d_measured; r > 0.99) from cine images and 0.21 +/- 0.15 mm (d_true = 0.42 + 0.91 d_measured; r > 0.99) from digital images. We collected a total of 15 routinely acquired cine-arteriograms showing 74 vessel segments with 18 stenoses (severity greater than 30%, assessed quantitatively), and 53 digital arteriograms showing 236 vessel segments with 69 stenoses. From the cine arteriograms we extracted 64 (86%) of the vessel segments without manual correction, and 196 (83%) from the digital arteriograms. Repeated analysis (3 times) of the arteriograms by the same operator resulted in a standard deviation of the mean segment diameters (precision) of 0.064 mm for the cine images and 0.020 mm for the digital images, while the standard deviations in the measurement of the minimal luminal diameter of the observed stenoses were 0.020 mm and 0.019 mm, respectively. The LAD artery and the septal and diagonal branches were correctly identified automatically in 86% of the segments. From these evaluations we conclude that our automated approach provides reliable tools for the assessment of multi-vessel disease, in both off-line and on-line environments.


Subject(s)
Coronary Angiography , Radiographic Image Interpretation, Computer-Assisted , Cineangiography/methods , Coronary Angiography/methods , Coronary Disease/diagnostic imaging , In Vitro Techniques , Models, Structural
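The calibration relations reported in the abstract above can be made concrete with a short sketch. The coefficients come directly from the abstract (d_true = 0.42 + 0.91 d_measured for digital images); the function names and the percent-stenosis helper are hypothetical, added only for illustration.

```python
def calibrate_diameter(d_measured_mm: float) -> float:
    """Map a measured lumen diameter (mm) to the calibrated estimate,
    using the digital-image regression reported in the abstract:
    d_true = 0.42 + 0.91 * d_measured."""
    return 0.42 + 0.91 * d_measured_mm


def percent_stenosis(d_ref_mm: float, d_min_mm: float) -> float:
    """Hypothetical helper: percent diameter stenosis from a reference
    diameter and the minimal luminal diameter, both calibrated with the
    same relation before taking the ratio."""
    ref = calibrate_diameter(d_ref_mm)
    mld = calibrate_diameter(d_min_mm)
    return 100.0 * (1.0 - mld / ref)
```

With a 3.0 mm reference segment and a 1.0 mm minimal luminal diameter this yields a stenosis of roughly 58%, which would count as significant under the abstract's >30% threshold.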
6.
Int J Card Imaging ; 5(2-3): 213-24, 1990.
Article in English | MEDLINE | ID: mdl-2230298

ABSTRACT

In theory, radiographic myocardial perfusion imaging allows a quantitative assessment of the functional significance of a coronary stenosis. However, in conventional two-dimensional projection images there is no one-to-one relationship between a selected myocardial region of interest (ROI) and the particular coronary segment perfusing that area: myocardial regions in front of and behind the selected ROI, perfused by other arterial segments, are over-projected onto it, which may result in measurements that are difficult to interpret or even unreliable. To overcome these problems, we have developed two algorithms to determine the spatial distribution of perfusion levels in slices of the heart, selected approximately perpendicular to the left ventricular long axis, from two orthogonal angiographic views: the Segmental Reconstruction Technique (SRT) and the Network Programming Reconstruction Technique (NPRT). Both techniques require a priori geometric information about the myocardium, which can be obtained from the epicardial coronary tree (epicardial boundaries) and the left ventricular lumen (endocardial boundaries). In the SRT approach, pie-shaped segments are defined for each slice within the myocardial geometric constraints such that the superimposition of these segments, when projected in the orthogonal biplane views, is minimal. The reconstruction process uses a model with identical myocardial geometry and definition of segments. Each segment of the model is assigned a relative perfusion level of one if no other a priori information is available. In this case, the model contains geometric information only. If a priori information about expected segmental perfusion levels is available, a level between zero and one is assigned to each segment.
The a priori information on the myocardial perfusion levels can be extracted either from anatomic information about the location and severity of existing coronary arterial obstructions, or from a slice adjacent to the one under reconstruction. In the NPRT approach, perfusion levels are computed for each volume picture element of a slice within the reconstructed myocardial geometry, resulting in a much higher spatial resolution than the SRT approach. A priori information on perfusion levels must be included in this approach, again based either upon anatomical information or upon the slice adjacent to the one under reconstruction. The very first slice of a myocardial study is reconstructed by the SRT approach. Extensive computer simulations for the SRT have shown that the mean difference between the actual and reconstructed segmental perfusion levels, on a scale from 0 to 1, is smaller than 0.45 (SEE = 0.0033, REE = 1.80) for various coronary artery disease states without the use of a priori information on expected perfusion levels. This error becomes smaller than 0.36 (SEE = 0.0026, REE = 1.42) if a priori information is included in the reconstruction technique. Similar computer simulations for the NPRT have shown that these mean differences, in geometric segments equal to those defined for the SRT, are smaller than 2.94 (SEE = 0.0308, REE = 0.77) on a scale from 0 to 16 without the use of a priori information on expected perfusion levels, and smaller than 1.72 (SEE = 0.0304, REE = 1.10) on the same scale when a priori information is included. It may therefore be concluded that slice-wise three-dimensional reconstruction of perfusion levels is feasible from biplane computer-simulated data, and that a similarity exists between mean perfusion levels in corresponding regions of the simulated and reconstructed slices, for various states of single coronary artery disease.


Subject(s)
Coronary Circulation , Heart/anatomy & histology , Image Processing, Computer-Assisted/methods , Algorithms , Heart/physiology , Models, Cardiovascular
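Why the SRT needs a priori perfusion information can be seen in a toy version of the biplane geometry. In this hypothetical 2x2 segment layout the two orthogonal views supply only three independent ray sums for four unknown levels, so one a priori segment level closes the system; the layout and function names are illustrative, not the authors' implementation.

```python
def reconstruct(row_sums, col_sums, s1_prior):
    """Recover levels s1..s4 of a 2x2 segment layout

        s1 s2
        s3 s4

    from orthogonal ray sums: row_sums = [s1+s2, s3+s4] (horizontal view),
    col_sums = [s1+s3, s2+s4] (vertical view). The four sums carry only
    three independent equations, so an a priori value for s1 is required."""
    s1 = s1_prior
    s2 = row_sums[0] - s1      # top ray:    s1 + s2
    s3 = col_sums[0] - s1      # left ray:   s1 + s3
    s4 = row_sums[1] - s3      # bottom ray: s3 + s4
    # The remaining ray sum is a consistency check, not new information.
    assert abs((s2 + s4) - col_sums[1]) < 1e-9
    return [s1, s2, s3, s4]
```

With true levels [0.2, 0.5, 0.7, 0.4] the ray sums are [0.7, 1.1] and [0.9, 0.9], and supplying the correct prior for s1 recovers the other three levels exactly, mirroring the abstract's finding that reconstruction error drops when a priori information is included.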
7.
Int J Card Imaging ; 3(2-3): 141-52, 1988.
Article in English | MEDLINE | ID: mdl-3171240

ABSTRACT

The assessment of coronary flow reserve from the instantaneous distribution of the contrast agent within the coronary vessels and myocardial muscle at the control state and at maximal flow has been limited by the superimposition of myocardial regions of interest in the two-dimensional images. To overcome these limitations, we are in the process of developing a three-dimensional (3D) reconstruction technique to compute the contrast distribution in cross sections of the myocardial muscle from two orthogonal cineangiograms. To limit the number of feasible solutions in the 3D-reconstruction space, the 3D-geometry of the endo- and epicardial boundaries of the myocardium must be determined. For the geometric reconstruction of the epicardium, the centerlines of the left coronary arterial tree are manually or automatically traced in the biplane views. Next, the bifurcations are detected automatically and matched in these two views, allowing a 3D-representation of the coronary tree. Finally, the circumference of the left ventricular myocardium in a selected cross section can be computed from the intersection points of this cross section with the 3D coronary tree using B-splines. For the geometric reconstruction of the left ventricular cavity, we envision applying the elliptical approximation technique using the LV boundaries defined in the two orthogonal views, or applying more complex 3D-reconstruction techniques including densitometry. The actual 3D-reconstruction of the contrast distribution in the myocardium is based on a linear programming technique (Transportation model) using cost coefficient matrices. Such a cost coefficient matrix must contain a maximum amount of a priori information, provided by a computer-generated model and updated with actual data from the angiographic views. We have only begun to solve this complex problem.
However, based on our first experimental results we expect that the linear programming approach with advanced cost coefficient matrices and computed model will lead to acceptable solutions in the 3D-reconstruction of the myocardial contrast distribution from biplane cineangiograms.


Subject(s)
Cineangiography , Coronary Angiography , Coronary Circulation , Heart/diagnostic imaging , Radiographic Image Enhancement , Densitometry , Humans , Programming, Linear
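The transportation model mentioned in the abstract can be sketched with a classic starting heuristic. This is a generic least-cost allocation, not the authors' algorithm: row totals and column totals stand in for the ray sums of the two orthogonal views, and the cost matrix carries the a priori preference for where contrast should be placed; all names and numbers are illustrative.

```python
def least_cost_allocation(supply, demand, cost):
    """Greedy least-cost starting solution for a transportation problem:
    repeatedly fill the cheapest remaining cell until all row totals
    (supply) and column totals (demand) are met."""
    supply, demand = supply[:], demand[:]   # work on copies
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    # Visit cells in order of increasing cost (lowest a priori "penalty" first).
    for _, i, j in sorted((cost[i][j], i, j) for i in range(m) for j in range(n)):
        q = min(supply[i], demand[j])
        if q:
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
    return alloc
```

For the actual reconstruction the abstract implies minimising total cost exactly (e.g. with the simplex-based transportation algorithm); a heuristic like this only supplies a feasible starting allocation that respects both sets of projection totals.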