Results 1 - 16 of 16
1.
Cell Signal ; 21(11): 1634-44, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19591923

ABSTRACT

3'-Phosphoinositide-dependent protein kinase-1 (PDK1), the direct upstream kinase of Akt, can localize to the nucleus during specific signalling events. The mechanism used for its import into the nucleus, however, remains unresolved as it lacks a canonical nuclear localization signal (NLS). Expression of activated Src kinase in C6 glioblastoma cells promotes the association of tyrosylphosphorylated PDK1 with the NLS-containing tyrosine phosphatase SHP-1 as well as the nuclear localization of both proteins. A constitutive nucleo-cytoplasmic SHP-1:PDK1 shuttling complex is supported by several lines of evidence including (i) the distribution of both proteins to similar subcellular compartments following manipulation of the nuclear pore complex, (ii) the nuclear retention of SHP-1 upon overexpression of a PDK1 protein bearing a disrupted nuclear export signal (NES), and (iii) the exclusion of PDK1 from the nucleus upon overexpression of SHP-1 lacking the NLS or following siRNA-mediated knock-down of SHP-1. The latter case results in a perinuclear distribution of PDK1 that corresponds with the distribution of PIP3 (phosphatidylinositol 3,4,5-trisphosphate), while a PDK1 protein bearing a mutated PH domain that abrogates PIP3-binding is excluded from the nucleus. Our data suggest that the SHP-1:PDK1 complex is recruited to the nuclear membrane by binding to perinuclear PIP3, whereupon SHP-1 (and its NLS) facilitates active import. Export from the nucleus relies on PDK1 (and its NES). The intact complex contributes to Src kinase-induced, Akt-sensitive podial formation in C6 cells.


Subject(s)
Cell Nucleus/enzymology , Protein Serine-Threonine Kinases/metabolism , Protein Tyrosine Phosphatase, Non-Receptor Type 6/metabolism , 3-Phosphoinositide-Dependent Protein Kinases , Cell Line , Gene Knockdown Techniques , Humans , Phosphatidylinositol Phosphates/metabolism , Phosphorylation , Protein Tyrosine Phosphatase, Non-Receptor Type 6/genetics , Proto-Oncogene Proteins c-akt/metabolism , RNA, Small Interfering/metabolism , Signal Transduction , src-Family Kinases/metabolism
2.
Yearb Med Inform ; : 57-67, 2006.
Article in English | MEDLINE | ID: mdl-17051296

ABSTRACT

OBJECTIVES: The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. METHODS: Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlear implants, where time-frequency analysis is applied for controlling the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are considered. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. RESULTS: The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form the sensor informatics, while fields 2 to 5 form signal or image informatics with respect to the nature of the data considered.
CONCLUSIONS: Biomedical data acquisition and pre-processing, as well as data handling, analysis, and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted , Medical Informatics Applications , Humans , Man-Machine Systems
3.
Methods Inf Med ; 43(4): 308-14, 2004.
Article in English | MEDLINE | ID: mdl-15472739

ABSTRACT

Starting from raw data files coding eight bits of gray values per image pixel and identified with no more than eight characters to refer to the patient, the study, and technical parameters of the imaging modality, biomedical imaging has undergone manifold and rapid developments. Today, rather complex protocols such as Digital Imaging and Communications in Medicine (DICOM) are used to handle medical images. Most restrictions to image formation, visualization, storage and transfer have basically been solved and image interpretation now sets the focus of research. Currently, a method-driven modeling approach dominates the field of biomedical image processing, as algorithms for registration, segmentation, classification and measurements are developed on a methodological level. However, a further metamorphosis of paradigms has already started. The future of medical image processing is seen in task-oriented solutions integrated into diagnosis, intervention planning, therapy and follow-up studies. This alteration of paradigms is also reflected in the literature. As German activities are strongly tied to the international research, this change of paradigm is demonstrated by selected papers from the German annual workshop on medical image processing collected in this special issue.


Subject(s)
Electronic Data Processing/trends , Image Interpretation, Computer-Assisted/methods , Medical Informatics Applications , Forecasting , Humans , Pattern Recognition, Automated
4.
Methods Inf Med ; 43(4): 315-9, 2004.
Article in English | MEDLINE | ID: mdl-15472740

ABSTRACT

OBJECTIVES: To design, implement in Java, and evaluate a method and means for the automated localization of artificial landmarks in optical images for tuned-aperture computed tomography (TACT) that allows the replacement of radiographic with optical landmarks. METHODS: Circular, colored, optical landmarks were designed to provide flexibility with regard to landmark constellation, imaging equipment, and lighting conditions. The landmark detection was based on Hough transforms (HT) for ellipses and lines. The HT for ellipses was extended to enable selective detection of bright ellipses on a dark background and vice versa, and the number of irrelevant votes in the accumulator arrays was reduced. An experiment was performed in vitro to test the automated landmark localization scheme, verify registration accuracy, and measure the required computation time. RESULTS: A visual evaluation of the tomographic slices that were produced using the new method revealed good registration accuracy. A comparison to tomographic slices similarly produced by means of conventional TACT showed identical results. The algorithm ran sufficiently fast on standard hardware to allow landmark localization in "real time" during successive image acquisition in clinical applications. CONCLUSIONS: The proposed method provides robust automated localization of landmarks in optical images. Using a hybrid imaging system, TACT can now be clinically applied without manual intervention by a human operator and without radiopaque landmarks, which might cover anatomic details of diagnostic interest.
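The authors' Java implementation is not reproduced here. As an illustrative sketch only, the core voting step of a Hough transform for circles of an assumed-known radius (the simplest case of the circular landmarks described) might look like the following in NumPy, given a binary edge image:

```python
import numpy as np

def hough_circle_votes(edges, radius, n_angles=64):
    """Accumulate Hough votes for circle centers at a fixed radius.

    edges  -- 2D boolean array of edge pixels
    radius -- assumed-known circle radius in pixels
    The accumulator's maximum marks the most likely center.
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        # Every edge pixel votes for all centers that could explain it.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic check: a circle of radius 10 centered at (25, 30).
edges = np.zeros((50, 60), dtype=bool)
t = np.linspace(0.0, 2.0 * np.pi, 200)
edges[np.round(25 + 10 * np.sin(t)).astype(int),
      np.round(30 + 10 * np.cos(t)).astype(int)] = True
acc = hough_circle_votes(edges, 10)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
```

The paper's extensions (ellipses, polarity-selective voting, vote pruning) build on this same accumulator idea.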


Subject(s)
Imaging, Three-Dimensional , Radiographic Image Interpretation, Computer-Assisted/instrumentation , Radiography, Dental, Digital/instrumentation , Tomography, X-Ray Computed/instrumentation , Algorithms , Humans , Medical Informatics Applications , Radiographic Image Enhancement , Signal Processing, Computer-Assisted
5.
Methods Inf Med ; 43(4): 354-61, 2004.
Article in English | MEDLINE | ID: mdl-15472746

ABSTRACT

OBJECTIVES: To develop a general structure for semantic image analysis that is suitable for content-based image retrieval in medical applications and an architecture for its efficient implementation. METHODS: Stepwise content analysis of medical images results in six layers of information modeling incorporating medical expert knowledge (raw data layer, registered data layer, feature layer, scheme layer, object layer, knowledge layer). A reference database with 10,000 images categorized according to the image modality, orientation, body region, and biological system is used. By means of prototypes in each category, identification of objects and their geometrical or temporal relationships are handled in the object and the knowledge layer, respectively. A distributed system designed with only three core elements is implemented: (i) the central database holds program sources, processing scheme descriptions, images, features, and administrative information about the workstation cluster; (ii) the scheduler balances distributed computing; and (iii) the web server provides graphical user interfaces for data entry and retrieval, which can be easily adapted to a variety of applications for content-based image retrieval in medicine. RESULTS: Leaving-one-out experiments were distributed by the scheduler and controlled via corresponding job lists, offering transparency from the viewpoint of both the distributed system and the user. The proposed architecture is suitable for content-based image retrieval in medical applications. It improves current picture archiving and communication systems that still rely on alphanumerical descriptions, which are insufficient for image retrieval of high recall and precision.


Subject(s)
Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Information Storage and Retrieval/methods , Medical Informatics Applications , Pattern Recognition, Automated , Databases as Topic , Humans , Information Management
6.
Methods Inf Med ; 42(1): 89-98, 2003.
Article in English | MEDLINE | ID: mdl-12695800

ABSTRACT

OBJECTIVES: To provide a comprehensive bottom-up categorization of model-based segmentation techniques that allows one to select, implement, and apply well-suited active contour models for segmentation of medical images, where major challenges are the high variability in shape and appearance of objects, noise, artifacts, partial occlusions of objects, and the required reliability and correctness of results. METHODS: We consider the general purpose of segmentation, the dimension of images, the object representation within the model, image and contour influences, as well as the solution and the parameter selection of the model. Potentials and limits are characterized for all instances in each category providing essential information for the application of active contours to various purposes in medical image processing. Based on prolapse surgery planning, we exemplify the use of the scheme to successfully design robust 3D-segmentation. RESULTS: The construction scheme allows the design of robust segmentation methods, which, in particular, should avoid any gaps of dimension. Such gaps result from different image domains and value ranges with respect to the applied model domain and the dimension of relevant subsets for image influences, respectively. CONCLUSIONS: A general segmentation procedure with sufficient robustness for medical applications is still missing. It is shown that in almost every category, novel techniques are available to improve the initial snake model, which was introduced in 1987.
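The categorization above is conceptual, but the underlying snake idea can be sketched in code. The following is a minimal greedy active-contour step, not any specific model from the paper: each contour point moves to the 3x3 neighbor minimizing an external image energy plus a continuity penalty. The `alpha` weight and the energy definition are illustrative assumptions.

```python
import numpy as np

def greedy_snake_step(points, energy, alpha=0.5):
    """One greedy iteration of a simple active contour (snake).

    points -- (N, 2) integer array of (row, col) points on a closed contour
    energy -- 2D external-energy image (low values attract the contour)
    alpha  -- weight of the internal continuity term (illustrative)
    """
    n = len(points)
    new_pts = points.copy()
    for i in range(n):
        prev_pt = new_pts[(i - 1) % n]          # already-updated neighbor
        next_pt = points[(i + 1) % n]
        best, best_e = tuple(points[i]), np.inf
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                r, c = points[i, 0] + dr, points[i, 1] + dc
                if not (0 <= r < energy.shape[0] and 0 <= c < energy.shape[1]):
                    continue
                # Continuity: stay close to the midpoint of the two neighbors.
                mid = (prev_pt + next_pt) / 2.0
                e = energy[r, c] + alpha * ((r - mid[0]) ** 2 + (c - mid[1]) ** 2)
                if e < best_e:
                    best, best_e = (r, c), e
        new_pts[i] = best
    return new_pts
```

Iterating this step on a bowl-shaped energy pulls an initial circle inward toward the minimum; real models add curvature terms and gradient-based image energies.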


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted/methods , Humans
7.
Dentomaxillofac Radiol ; 31(3): 187-92, 2002 May.
Article in English | MEDLINE | ID: mdl-12058267

ABSTRACT

OBJECTIVE: To develop a three-dimensional (3D) model for quantitative analysis of image subtraction methods simulating clinical conditions and relevant to dental radiology. METHOD: A high-resolution volume representation of a formalin-preserved segment of a human maxilla was synthesized from a set of 51 digital radiographs equidistantly covering the entire sampling aperture by means of Tuned-Aperture Computed Tomography (TACT). Two-dimensional (2D) projection renderings of a 3D model were generated yielding arbitrary but well-known 2D projections with, and without, structured noise producing 'virtual radiographs'. RESULTS: Virtual radiographs were found to be similar to actual clinical images with respect to appearance, structure, and texture. Because the TACT reconstruction process allows all possible positions and orientations of source, specimen, and image plane to be simulated with negligible undersampling over a reasonable range of solid angles (sampling aperture), the resulting 3D model provided a rigorous method for establishing a truly objective gold standard (ground truth) for testing different registration techniques. CONCLUSIONS: TACT image registration can be assessed quantitatively by comparing actually observed vs theoretically professed parameters that presumably constrain the underlying projection geometries. Other attributes that vary from one method to the next, such as the use of nonlinear or region-specific techniques to facilitate registration, likewise, now can be rigorously measured by context-based methods such as quantitative determination of image similarity. Hence, a 3D model that renders idealized virtual radiographs from any desired projection geometry makes possible truly objective comparison of various digital subtraction techniques.
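The study simulates arbitrary cone-beam projection geometries; as a much-simplified sketch of what a "virtual radiograph" is, a parallel projection through an attenuation volume under the Beer-Lambert law can be rendered as follows. The axis-aligned geometry and the attenuation values are assumptions for brevity, not the study's setup.

```python
import numpy as np

def virtual_radiograph(attenuation, axis=0):
    """Render a parallel-projection 'virtual radiograph' from a 3D
    attenuation volume via the Beer-Lambert law: I/I0 = exp(-line integral)."""
    path_integral = attenuation.sum(axis=axis)
    return np.exp(-path_integral)

# A dense column of voxels shows up as a darker pixel in the projection.
volume = np.zeros((4, 5, 6))
volume[:, 2, 3] = 0.5
image = virtual_radiograph(volume, axis=0)
```

A cone-beam renderer would instead integrate along diverging rays from a point source, which is what makes the projection geometry a free parameter of the model.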


Subject(s)
Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Radiography, Dental, Digital/methods , Subtraction Technique , Tomography, X-Ray Computed/methods , Algorithms , Artifacts , Computer Simulation , Humans , Maxilla/diagnostic imaging , Signal Processing, Computer-Assisted , User-Computer Interface
8.
Dentomaxillofac Radiol ; 31(4): 249-56, 2002 Jul.
Article in English | MEDLINE | ID: mdl-12087442

ABSTRACT

OBJECTIVES: To implement, refine, and evaluate a generalized TACT reconstruction method that corrects for misregistration caused by uncontrolled variations in projective magnification, alleviates normalization artifacts at borders of backprojections, and exploits all available source information to minimize losses produced from projective truncations in three dimensions. METHODS: A new Java-based software application was designed and tested in vitro using clinically representative data derived from four titanium dental implants in a cadaver jaw segment. These implants were irradiated by an intra-oral X-ray machine from various angles and distances using a solid-state sensor producing 48 radiographs. Six radiopaque markers were attached to the segment facilitating inference of associated projection geometries from analyses of the distributions of their respective shadows as seen by the sensor. Three-dimensional (3D) images were produced using the new algorithm, and the results were compared with those obtained from existing code. RESULTS: Slices processed using the new program were corrected for magnification errors. The resulting 3D displays showed significantly reduced tomosynthetic blur relative to uncorrected counterparts. The new reconstructions also minimized known border artifacts and made use of all available information. These images demonstrated apparent details otherwise hidden or lost when comparably processed using the control algorithm. CONCLUSIONS: The new software reduces both misregistration and scaling artifacts in tomosynthetically reconstructed slices. Hence, these modifications are expected to increase diagnostic accuracy and facilitate the appropriate application of TACT to an enlarged set of diagnostic tasks as compared with earlier implementations of the method.
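The generalized reconstruction itself is not reproduced here, but the basic shift-and-add tomosynthesis that TACT builds on can be sketched as follows. The per-projection shifts, here assumed integer and known, select the depth plane; the paper's contribution adds per-projection magnification correction and border normalization on top of this scheme.

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Reconstruct one tomosynthetic slice by shift-and-add.

    projections -- list of 2D radiographs
    shifts      -- per-projection (drow, dcol) integer shifts that bring
                   the chosen depth plane into register
    Features in the selected plane align and reinforce; features at other
    depths are smeared out as tomosynthetic blur.
    """
    acc = np.zeros_like(projections[0], dtype=float)
    for img, (dr, dc) in zip(projections, shifts):
        acc += np.roll(np.roll(img, dr, axis=0), dc, axis=1)
    return acc / len(projections)
```

Uncorrected magnification differences between projections violate the pure-translation assumption above, which is exactly the misregistration the abstract describes.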


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Imaging, Three-Dimensional/methods , Radiography, Dental, Digital/methods , Tomography, X-Ray Computed/methods , Artifacts , Humans , Mandible/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted , Radiographic Magnification , Software , Technology, Radiologic
9.
Dentomaxillofac Radiol ; 31(4): 264-72, 2002 Jul.
Article in English | MEDLINE | ID: mdl-12087444

ABSTRACT

OBJECTIVES: To identify and analyse methods/algorithms for image processing provided by various commercial software programs used in direct digital dental imaging and to map them onto a standardized nomenclature. METHODS: Twelve programs presented at the 28th International Dental-Show, March, 2001, Cologne, Germany and the Emago advanced software were included in this study. An artificial test image, composed of gray-scale ramps, step wedges, fields with Gaussian-distributed noise, and salt-and-pepper noise, was synthesized and imported to all programs to classify algorithms for display; linear, non-linear and histogram-based point processing; pseudo-coloration; linear and non-linear spatial filtering; frequency domain filtering; measurements; image analysis; and annotations. RESULTS: The 13 programs were found to possess a great variety of image processing and enhancement facilities. All programs offer gray-scale image display with interactive brightness and contrast adjustment and gray-scale inversion as well as calibration and length measurements. While Emago enables arbitrary spatial filtering with user-defined masks up to 7x7 pixels in size, most programs sparsely include filters and tools for image analysis and comparison. Moreover, the naming and implementation of provided functions differ. Some functions inappropriately use standardized image processing terms to describe their operations. CONCLUSIONS: Image processing and enhancement functions are rarely incorporated in commercial software for direct digital imaging in dental radiology. Until now, comparison of software was limited by the arbitrary naming used in each system. Standardized terminology and increased functionality of image processing should be offered to the dental profession.
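A test image of the kind described, with a gray-scale ramp, a step wedge, a Gaussian-noise field, and salt-and-pepper noise, could be synthesized as follows. The quadrant layout and all parameter values are illustrative assumptions, not the study's actual phantom:

```python
import numpy as np

def make_test_image(size=256, seed=0):
    """Synthesize a four-quadrant test image: gray-scale ramp, step wedge,
    Gaussian-noise field, and salt-and-pepper noise (layout assumed)."""
    rng = np.random.default_rng(seed)
    half = size // 2
    img = np.zeros((size, size), dtype=np.uint8)
    # Upper left: linear gray ramp from 0 to 255 across the columns.
    img[:half, :half] = np.linspace(0, 255, half, dtype=np.uint8)[None, :]
    # Upper right: step wedge with 8 discrete gray levels.
    steps = (np.arange(half) * 8 // half) * (255 // 7)
    img[:half, half:] = steps.astype(np.uint8)[None, :]
    # Lower left: Gaussian noise around mid-gray, clipped to 8 bits.
    img[half:, :half] = np.clip(rng.normal(128, 20, (half, half)), 0, 255).astype(np.uint8)
    # Lower right: mid-gray with ~5% salt-and-pepper corruption.
    sp = np.full((half, half), 128, dtype=np.uint8)
    u = rng.random((half, half))
    sp[u < 0.025] = 0
    sp[u > 0.975] = 255
    img[half:, half:] = sp
    return img
```

Feeding such a synthetic image through each program makes point operations, filters, and mislabeled functions directly comparable, since the ground-truth content is known.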


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Radiography, Dental, Digital/methods , Software , Color , Filtration , Humans , Phantoms, Imaging , Terminology as Topic
10.
J Opt Soc Am A Opt Image Sci Vis ; 18(11): 2679-91, 2001 Nov.
Article in English | MEDLINE | ID: mdl-11688858

ABSTRACT

The estimation of illuminant color is mandatory for many applications in the field of color image quantification. However, it is an unresolved problem if no additional heuristics or restrictive assumptions apply. Assuming uniformly colored and roundly shaped objects, Lee has presented a theory and a method for computing the scene-illuminant chromaticity from specular highlights [H. C. Lee, J. Opt. Soc. Am. A 3, 1694 (1986)]. However, Lee's method, called image path search, is less robust to noise and is limited in the handling of microtextured surfaces. We introduce a novel approach to estimate the color of a single illuminant for noisy and microtextured images, which frequently occur in real-world scenes. Using dichromatic regions of different colored surfaces, our approach, named color line search, reverses Lee's strategy of image path search. Reliable color lines are determined directly in the domain of the color diagrams by three steps. First, regions of interest are automatically detected around specular highlights, and local color diagrams are computed. Second, color lines are determined according to the dichromatic reflection model by Hough transform of the color diagrams. Third, a consistency check is applied by a corresponding path search in the image domain. Our method is evaluated on 40 natural images of fruit and vegetables. In comparison with those of Lee's method, accuracy and stability are substantially improved. In addition, the color line search approach can easily be extended to scenes of objects with macrotextured surfaces.
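The full color-line-search pipeline is not reproduced here, but its geometric core can be sketched: pixels of one dichromatic surface lie on a line in the chromaticity diagram, and the intersection of two such lines estimates the illuminant chromaticity. In this sketch a least-squares (PCA) line fit stands in for the paper's Hough-based line detection:

```python
import numpy as np

def fit_color_line(chroma):
    """Fit a 2D line (mean point + unit direction) to chromaticity
    samples from one highlight region, via the principal component."""
    mean = chroma.mean(axis=0)
    _, _, vt = np.linalg.svd(chroma - mean)
    return mean, vt[0]

def intersect_lines(p1, d1, p2, d2):
    """Intersect two parametric lines p + t*d; for dichromatic color
    lines this intersection estimates the illuminant chromaticity."""
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1
```

With noisy, microtextured images the robustness comes from the Hough voting and the consistency check in the image domain, which this sketch omits.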


Subject(s)
Color , Light , Models, Theoretical , Algorithms , Scattering, Radiation
11.
IEEE Trans Med Imaging ; 20(7): 660-5, 2001 Jul.
Article in English | MEDLINE | ID: mdl-11465471

ABSTRACT

This paper analyzes B-spline interpolation techniques of degree 2, 4, and 5 with respect to all criteria that have been applied to evaluate various interpolation schemes in a recently published survey on image interpolation in medical imaging (Lehmann et al., 1999). It is shown that high-degree B-spline interpolation has superior Fourier properties, smallest interpolation error, and reasonable computing times. Therefore, high-degree B-splines are preferable interpolators for numerous applications in medical image processing, particularly if high precision is required. If no aliasing occurs, this result neither depends on the geometric transform applied for the tests nor the actual content of images.
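As an illustration (not the paper's code), high-degree B-spline interpolation is available in SciPy's `ndimage`, which prefilters the samples and then evaluates the spline at arbitrary coordinates. Because a degree-5 spline reproduces low-order polynomials, interpolating samples of f(r, c) = r·c at off-grid positions recovers the product almost exactly away from the borders:

```python
import numpy as np
from scipy import ndimage

# f(r, c) = r * c sampled on a 32 x 32 grid.
grid = np.arange(32, dtype=float)
img = np.outer(grid, grid)

# Degree-5 B-spline interpolation at off-grid positions near the center.
coords = np.array([[15.25, 15.25, 15.25],   # rows
                   [14.5, 15.5, 16.5]])     # columns
vals = ndimage.map_coordinates(img, coords, order=5)

# At the sample points themselves, B-spline interpolation is exact.
on_grid = ndimage.map_coordinates(img, [[10.0], [12.0]], order=5)
```

The prefiltering step is what distinguishes true B-spline interpolation from mere B-spline smoothing; skipping it (prefilter=False) turns the scheme into an approximator.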


Subject(s)
Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Fourier Analysis
12.
IEEE Trans Biomed Eng ; 48(6): 706-17, 2001 Jun.
Article in English | MEDLINE | ID: mdl-11396600

ABSTRACT

This paper presents a system for computer-assisted quantification of axo-somatic boutons at motoneuron cell-surface membranes. Different immunohistochemical stains can be used to prepare tissue of the spinal cord. Based on micrographs displaying single neurons, a finite element balloon model has been applied to determine the exact location of the cell membrane. A synaptic profile is extracted next to the cell membrane and normalized with reference to the intracellular brightness. Furthermore, a manually selected reference cell is used to normalize settings of the microscope as well as variations in histochemical processing for each stain. Thereafter, staining, homogeneity, and allocation of boutons are determined automatically from the synaptic profiles. The system is evaluated by applying the coefficient of variation (Cv) to repeated measurements of a quantity. Based on 1856 motoneuronal images acquired from four animals with three stains, 93% of the images are analyzed correctly. The others were rejected, based on process protocols. Using only rabbit anti-synaptophysin as primary antibody, the correctness increases above 96%. Cv values are below 3%, 5%, and 6% for all measures with respect to stochastic optimization, cell positioning, and a large range of microscope settings, respectively. A sample size of about 100 is required to validate a significant reduction of staining in motoneurons below a hemi-section (Wilcoxon rank-sum test, alpha = 0.05, beta = 0.9). Our system yields statistically robust results from light micrographs. In future, it is hoped that this system will substitute for the expensive and time-consuming analysis of spinal cord injury at the ultra-structural level, such as by manual interpretation of nonoverlapping electron micrographs.
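The evaluation statistic is straightforward to state in code; a minimal sketch of the coefficient of variation applied to repeated measurements:

```python
import numpy as np

def coefficient_of_variation(measurements):
    """Cv = sample standard deviation / mean, the repeatability measure
    applied to repeated measurements of a quantity."""
    m = np.asarray(measurements, dtype=float)
    return m.std(ddof=1) / m.mean()
```

A Cv below 3% thus means the spread of repeated measurements stays under 3% of their mean.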


Subject(s)
Image Processing, Computer-Assisted , Motor Neurons/metabolism , Spinal Cord/anatomy & histology , Spinal Cord/metabolism , Animals , Cell Membrane , Female , Immunoenzyme Techniques , Microscopy , Rats , Rats, Sprague-Dawley
13.
Dentomaxillofac Radiol ; 29(6): 323-46, 2000 Nov.
Article in English | MEDLINE | ID: mdl-11114663

ABSTRACT

OBJECTIVES: (1) To review computerized a posteriori techniques for geometry and contrast registration prior to digital subtraction in dental radiography; (2) to define a uniform notation for their methodological and technical classification and based on this key code; (3) to derive criteria for successful application of computer-based a posteriori registration for routine clinical subtraction. METHODS: All techniques are classified with respect to the (1) dimension of geometry registration; (2) origin; (3) abstraction level, and (4) linkage of features used for registration of geometry; (5) elasticity; (6) domain, and (7) parameter determination of the geometrical transform used; (8) interaction of geometrical registration; as well as (9) origin of features, (10) model of transform, and (11) interaction of procedure for contrast correction. RESULTS: With respect to clinical practicability, superior registration techniques are based on the low level abstraction of intrinsic features for both geometry and contrast registration. By approximately linking the features, a global projective transform should be generated for geometry registration by automatic methods, while automatic contrast correction should be non-parametric. This challenge is met only by one out of 36 published algorithms. Hence, although numerous computer-based techniques have been published, only a few of them are applied more than once in practice. CONCLUSION: The key code proposed in this paper is useful for technical classification of a posteriori registration methods in dental radiography and allows their objective comparison. Further investigations will focus on standardization of practicable procedures to evaluate the robustness of competing methods.


Subject(s)
Image Processing, Computer-Assisted/methods , Radiographic Image Enhancement/methods , Radiography, Dental/methods , Subtraction Technique , Algorithms , Classification , Electronic Data Processing , Humans , Pattern Recognition, Automated , Software
14.
IEEE Trans Biomed Eng ; 47(7): 941-51, 2000 Jul.
Article in English | MEDLINE | ID: mdl-10916266

ABSTRACT

Leukocytes play an important role in the host defense as they may travel from the blood stream into the tissue in reaction to inflammatory stimuli. The leukocyte-vessel wall interactions are studied in postcapillary vessels by intravital video microscopy during in vivo animal experiments. Sequences of video images are obtained and digitized with a frame grabber. A method for automatic detection and characterization of leukocytes in the video images is developed. Individual leukocytes are detected using a neural network that is trained with synthetic leukocyte images generated using a novel stochastic model. This model makes it feasible to generate images of leukocytes with different shapes and sizes under various lighting conditions. Experiments indicate that neural networks trained with the synthetic leukocyte images perform better than networks trained with images of manually detected leukocytes. The best performing neural network trained with synthetic leukocyte images resulted in an 18% larger area under the ROC curve than the best performing neural network trained with manually detected leukocytes.
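The comparison metric, area under the ROC curve, can be computed directly from classifier scores via the rank-sum identity; a minimal sketch, independent of the paper's networks:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    AUC = P(a random positive scores higher than a random negative),
    counting ties as one half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation, so an 18% larger AUC is a substantial gain.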


Subject(s)
Leukocytes/cytology , Neural Networks, Computer , Animals , Biomedical Engineering , Blood Vessels/cytology , Cell Adhesion , Image Processing, Computer-Assisted , Microscopy , Models, Biological , Stochastic Processes
15.
IEEE Trans Med Imaging ; 18(11): 1049-75, 1999 Nov.
Article in English | MEDLINE | ID: mdl-10661324

ABSTRACT

Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 x 1 up to 8 x 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced and fundamental properties of interpolators are derived. Successful methods should be direct current (DC)-constant and interpolators rather than DC-inconstant or approximators. Each method's parameters are tuned with respect to those properties. This results in three novel kernels, which are introduced in this paper and proven to be within the best choices for medical image interpolation: the 6 x 6 Blackman-Harris windowed sinc interpolator, and the C2-continuous cubic kernels with N = 6 and N = 8 supporting points. For quantitative error evaluations, a set of 50 direct digital X-rays was used. They have been selected arbitrarily from clinical routine. In general, large kernel sizes were found to be superior to small interpolation masks. Except for truncated sinc interpolators, all kernels with N = 6 or larger sizes perform significantly better than N = 2 or N = 3 point methods (p << 0.005). However, the differences within the group of large-sized kernels were not significant.
Summarizing the results, the cubic 6 x 6 interpolator with continuous second derivatives, as defined in (24), can be recommended for most common interpolation tasks. It appears to be the fastest six-point kernel to implement computationally. It provides eminent local and Fourier properties, is easy to implement, and has only small errors. The same characteristics apply to B-spline interpolation, but the 6 x 6 cubic avoids the intrinsic border effects produced by the B-spline technique. However, the goal of this study was not to determine an overall best method, but to present a comprehensive catalogue of methods in a uniform terminology, to define general properties and requirements of local techniques, and to enable the reader to select that method which is optimal for his specific application in medical imaging.
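One of the kernels discussed, the Blackman-Harris windowed sinc with six supporting points, can be sketched as follows. The centered window form and the normalization of the weights (which enforces the DC-constant property) are common conventions and may differ in detail from the paper's tuned definition:

```python
import numpy as np

def bh_sinc_kernel(x, support=6):
    """Blackman-Harris windowed sinc, centered, zero outside |x| <= support/2.
    Uses the standard 4-term Blackman-Harris window coefficients."""
    a = (0.35875, 0.48829, 0.14128, 0.01168)
    x = np.asarray(x, dtype=float)
    half = support / 2.0
    w = (a[0] + a[1] * np.cos(np.pi * x / half)
              + a[2] * np.cos(2.0 * np.pi * x / half)
              + a[3] * np.cos(3.0 * np.pi * x / half))
    return np.where(np.abs(x) <= half, np.sinc(x) * w, 0.0)

def interpolate(samples, t, support=6):
    """Estimate the signal at real-valued position t from integer-spaced
    samples; weights are renormalized so constant signals are reproduced
    exactly (DC-constant). Border handling is omitted for brevity."""
    k0 = int(np.floor(t)) - support // 2 + 1
    ks = np.arange(k0, k0 + support)
    w = bh_sinc_kernel(t - ks, support)
    w = w / w.sum()
    return np.dot(w, samples[ks])
```

Because sinc vanishes at nonzero integers, the kernel is a true interpolator: at sample positions it returns the samples unchanged, and between samples the window suppresses the truncation ripple of a plain cut-off sinc.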


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted/methods , Fourier Analysis , Humans
16.
Dentomaxillofac Radiol ; 27(3): 140-50, 1998 May.
Article in English | MEDLINE | ID: mdl-9693526

ABSTRACT

OBJECTIVES: To prove that the model of perspective projection allows precise registration of intra-oral radiographs regardless of whether they have been acquired with or without individual adjustment aids and independent of the human observer or computer algorithm marking corresponding landmarks in the images and, based on in vivo radiographs, to introduce and evaluate a model-based registration method. METHODS: Five observers (three experts and two non-experts) were asked to define corresponding points in 24 pairs of in vivo dental radiographs from the same region of the same patient. The landmarks were used to fit the model of perspective projection applying the least squares method. Misplaced landmarks were detected and suppressed by analysing the quality of all subsets of landmarks with respect to the minimal residual (leave-one-out method). In addition, local correlation was used to optimize the quality of registration as well as observer independence. RESULTS: Using six or more corresponding landmarks in both radiographs the correlation of the images registered was > 0.95 (S.D. < 0.063) irrespective of the observers' expertise. CONCLUSIONS: Perspective projection is a reliable model for sequentially acquired intra-oral radiographs. The co-ordinates of anatomical landmarks are useful for determining the parameters of perspective projection. Local correlation and leave-one-out techniques improve the geometrical adjustment as well as observer independence. Registration is nearly independent of the actual position of the landmarks and hence independent of the observer. Our algorithm will also be useful for registration techniques based on automatically detected landmarks.
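The simplest perspective-projection model relating two views of (near-)coplanar landmarks is a 3x3 homography, and its least-squares fit from corresponding points can be sketched with the standard direct linear transform (DLT). This is an illustration of the fitting idea, not the authors' algorithm; their leave-one-out check would refit with each landmark removed and compare the residuals.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares DLT estimate of a 3x3 projective transform mapping
    src -> dst, each an (N, 2) array with N >= 4 corresponding points.
    (Coordinate pre-normalization is omitted for brevity.)"""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    a = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(a)
    return vt[-1].reshape(3, 3)   # null-space vector = flattened transform

def apply_homography(h, pts):
    """Apply a 3x3 projective transform to (N, 2) points."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ h.T
    return p[:, :2] / p[:, 2:3]
```

With six or more landmarks the system is overdetermined, so the SVD yields the least-squares solution and the per-landmark residuals expose misplaced points.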


Subject(s)
Radiography, Dental, Digital/methods , Humans , Least-Squares Analysis , Radiographic Image Enhancement , Radiographic Image Interpretation, Computer-Assisted , Subtraction Technique