1.
J Bone Miner Res; 16(5): 901-10, 2001 May.
Article in English | MEDLINE | ID: mdl-11341335

ABSTRACT

In this article we examine the role of bone mineral density (BMD) in the diagnosis of osteoporosis. Using information from 7671 women in the Study of Osteoporotic Fractures (SOF) with BMD measurements at the proximal femur, lumbar spine, forearm, and calcaneus, we examine three models with differing criteria for the diagnosis of osteoporosis. Model 1 is based on the World Health Organization (WHO) criteria using a T score of -2.5 relative to the manufacturers' young normative data aged 20-29 years, with modifications using information from the Third National Health and Nutrition Examination Survey (NHANES). Model 2 uses a T score of -1 relative to women aged 65 years at the baseline of the SOF population. Model 3 classifies women as osteoporotic if their estimated osteoporotic fracture risk (spine and/or hip) based on age and BMD is above 14.6%. We compare the agreement in osteoporosis classification according to the different BMD measurements for the three models. We also consider whether reporting additional BMD parameters at the femur or forearm improves risk assessment for osteoporotic fractures. We observe that using the WHO criteria with the manufacturers' normative data results in very inconsistent diagnoses. Only 25% of subjects are consistently diagnosed by all of the eight BMD variables. Such inconsistency is reduced by using a common elderly normative population as in model 2, in which case 50% of the subjects are consistently diagnosed as osteoporotic by all of the eight diagnostic methods. Risk-based diagnostic criteria as in model 3 improve consistency substantially to 68%. Combining the results of BMD assessments at more than one region of interest (ROI) from a single scan significantly increases prediction of hip and/or spine fracture risk and elevates the relative risk with increasing number of low BMD subregions. 
We conclude that standardization of normative data, perhaps referenced to an older population, may be necessary when applying T scores as diagnostic criteria in patient management. A risk-based osteoporosis classification does not depend on the manufacturers' reference data and may be more consistent and efficient for patient diagnosis.
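The two T-score models above differ only in which reference population supplies the standardization. A minimal sketch of the calculation, using illustrative reference values rather than the manufacturers' or NHANES figures from the study:

```python
def t_score(bmd, ref_mean, ref_sd):
    """T score: BMD expressed in standard deviations from a reference-population mean."""
    return (bmd - ref_mean) / ref_sd

# Hypothetical femoral-neck BMD (g/cm^2); reference values are invented for illustration.
score = t_score(bmd=0.54, ref_mean=0.85, ref_sd=0.12)
osteoporotic = score <= -2.5   # WHO-style cutoff, as in model 1
```

Model 2 amounts to the same formula with `ref_mean` and `ref_sd` taken from the 65-year-old SOF baseline population and a cutoff of -1; the inconsistency reported above arises because each device and region of interest carries its own reference mean and SD.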


Subject(s)
Osteoporosis/classification; Osteoporosis/physiopathology; Adult; Aged; Bone Density; Female; Hip Fractures/physiopathology; Humans; Spinal Fractures/physiopathology
2.
AJR Am J Roentgenol; 173(2): 329-34, 1999 Aug.
Article in English | MEDLINE | ID: mdl-10430129

ABSTRACT

OBJECTIVE: The aim of our study was to evaluate the diagnostic agreement between quantitative sonography of the calcaneus and dual X-ray absorptiometry (DXA) of the spine and femur for revealing osteoporosis. SUBJECTS AND METHODS: In 1252 patients (795 women, 54.9 ± 15 years old; 457 men, 50.5 ± 15 years old [mean ± SD]), bone mineral density measurements of the lumbar spine (posteroanterior, L1-L4) and the proximal femur (neck, trochanter, intertrochanteric region, total proximal femur, and Ward's triangle) and quantitative sonographic measurements of the stiffness of the calcaneus were performed. The presence of osteoporosis was defined, according to the World Health Organization criteria, as a T-score lower than -2.5. The percentage of patients below the threshold (prevalence of osteoporosis) was calculated for each imaging technique. The diagnostic agreement in identifying individuals as osteoporotic was assessed using kappa scores. RESULTS: Forty-nine percent of the women and 42% of the men were classified as osteoporotic by quantitative sonography, 32% of women and 30% of men by DXA of the spine, and 23-54% of women and 16-54% of men by the different regions of interest on femoral DXA. Kappa analysis showed the diagnostic agreement among these measures to be generally poor (kappa = 0.28-0.41 [women] and 0.25-0.45 [men]). CONCLUSION: The considerable diagnostic disagreement between quantitative sonography and DXA could cause confusion in the daily practice of radiology and make establishing the correct diagnosis a difficult task. The choice of imaging technique influences which patients are diagnosed as osteoporotic.
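The kappa statistic used here corrects observed agreement for the agreement expected by chance alone, which is why two techniques can agree on 75% of patients yet still score poorly. A self-contained sketch for two binary classifications; the data are invented, not the study's:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary ratings (sequences of 0/1 labels)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    p_chance = pa * pb + (1 - pa) * (1 - pb)            # agreement expected by chance
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical osteoporotic (1) / non-osteoporotic (0) calls by two techniques:
dxa = [1, 1, 0, 0, 1, 0, 1, 0]
qus = [1, 0, 0, 0, 1, 1, 1, 0]
kappa = cohens_kappa(dxa, qus)   # 0.5, despite 75% raw agreement
```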


Subject(s)
Absorptiometry, Photon; Calcaneus/diagnostic imaging; Femur/diagnostic imaging; Spine/diagnostic imaging; Absorptiometry, Photon/instrumentation; Absorptiometry, Photon/methods; Absorptiometry, Photon/statistics & numerical data; Adolescent; Adult; Aged; Aged, 80 and over; Bone Density; Diagnostic Errors; Female; Humans; Male; Middle Aged; Osteoporosis/diagnosis; Sex Characteristics; Ultrasonography/statistics & numerical data
3.
Stat Med; 16(16): 1889-905, 1997 Aug 30.
Article in English | MEDLINE | ID: mdl-9280040

ABSTRACT

Comparative calibration is the broad statistical methodology used to assess the calibration of a set of p instruments, each designed to measure the same characteristic, on a common group of individuals. Different from the usual calibration problem, the true underlying quantity measured is unobservable. Many authors have shown that this problem, in general, does not have a unique solution. Most commonly used assumptions to obtain a unique solution are (i) one instrument is the gold standard (that is, unbiased) and (ii) the measurement errors of the p instruments are independent. Such constraints, however, may not be valid for many clinical applications, for example, the universal standardization project for dual X-ray absorptiometry (DXA) scanners. In this paper, we propose a new approach to resolve the comparative calibration problem when a gold standard is unavailable. Instead of the usual assumptions, we use external information in addition to data from the p instruments, to solve the problem. We address statistical estimation, hypothesis testing and missing data problems. We apply the new method specifically to the universal standardization project data where a group of individuals have been measured for bone mineral density (BMD) by three DXA scanners. We compare the results of the new method to currently used methods and show that they have better statistical properties.
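Assumption (i) above, treating one instrument as the gold standard, reduces the problem to ordinary least-squares calibration of each remaining instrument against it. A sketch of that conventional approach (the one whose validity the paper questions, not the paper's proposed method), with hypothetical paired BMD readings:

```python
def ols_calibrate(x, y):
    """Least-squares line y ~ a + b*x, calibrating instrument y against
    instrument x under the 'x is unbiased' (gold standard) assumption."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical BMD pairs (g/cm^2) from two DXA scanners:
scanner1 = [0.8, 0.9, 1.0, 1.1, 1.2]
scanner2 = [0.85, 0.95, 1.05, 1.15, 1.25]
a, b = ols_calibrate(scanner1, scanner2)   # a = 0.05, b = 1.0
```

When neither scanner is unbiased, this regression confounds the true calibration with both instruments' measurement error, which is the motivation for the external-information approach proposed in the paper.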


Subject(s)
Absorptiometry, Photon/instrumentation; Absorptiometry, Photon/statistics & numerical data; Bone Density/physiology; Adult; Aged; Aged, 80 and over; Algorithms; Calibration; Female; Humans; Likelihood Functions; Middle Aged; Models, Statistical; Osteoporosis/prevention & control; Radionuclide Imaging; Reproducibility of Results; Spine/diagnostic imaging
4.
J Bone Miner Res; 11(5): 626-37, 1996 May.
Article in English | MEDLINE | ID: mdl-9157777

ABSTRACT

Dual X-ray absorptiometry (DXA) is widely used to monitor treatment efficacy in reducing the rate of bone mineral loss. In order to assure the validity of these measurements, instrument quality control of the DXA scanners becomes very important. This paper compares five quality control procedures (visual inspection, Shewhart chart with sensitizing rules, Shewhart chart with sensitizing rules and a filter for clinically insignificant mean changes, moving average chart and standard deviation, and cumulative sum chart [CUSUM]) in their ability to identify scanner malfunction by means of (1) an analysis of five longitudinal phantom data sets that had been collected during a clinical trial and (2) an analysis of simulated data sets. The visual inspection method is relatively subjective and depends on the operator's experience and attention. The regular Shewhart chart with sensitizing rules has a high false alarm rate. The Shewhart chart with sensitizing rules and an additional filter for clinically insignificant mean changes has the lowest false alarm rate but a relatively low sensitivity. The CUSUM method has good sensitivity and a low false alarm rate. In addition, this method provides an estimate of the date a change in the DXA scanner performance might have occurred. The method combining a moving average chart and a moving standard deviation chart came closest to the performance of the CUSUM method. Comparing the advantages and disadvantages of all methods, we propose the use of the CUSUM method as a quality control procedure for monitoring DXA scanner performance. For clinical trials use of the more intuitive Shewhart charts may be acceptable at the individual sites provided their scanner performance is followed up by CUSUM analysis at a central quality assurance center.


Subject(s)
Absorptiometry, Photon/methods; Bone Density; Absorptiometry, Photon/standards; Humans; Reference Standards
5.
Int J Card Imaging; 10(2): 113-21, 1994 Jun.
Article in English | MEDLINE | ID: mdl-7963749

ABSTRACT

The effects of misregistration artifacts and background corrections on the densitometric measurement of left ventricular ejection fraction (EF) from digital subtraction angiography (DSA) images were studied in 20 patients. Densitometric ejection fraction measurements were performed on both conventional time subtraction images and on dual-energy subtraction images. Dual-energy subtraction is not sensitive to the motion induced artifacts which often mar time subtraction images. While the time subtraction images had varying degrees of misregistration artifacts, none of the dual-energy studies contained significant misregistration artifacts. Densitometrically determined ejection fractions measured with and without correction for background signals were compared. Poor agreement between time subtraction ejection fractions determined with and without background correction (EFNO-BKG = 0.88 EFBKG - 6.0%, SEE = 8.1%, r = 0.83) demonstrated the sensitivity of time subtraction EFs to the performance of a background correction procedure. Conversely, densitometric measurement of ejection fraction using dual-energy subtraction was significantly less sensitive to the performance of a background correction (EFNO-BKG = 0.99 EFBKG - 5.3%, SEE = 4.3%, r = 0.96). Since background correction requires accurate definition of ventricular borders, but motion artifacts often preclude accurate border definition, it is concluded that dual-energy subtraction is a significantly more robust method for measuring left ventricular ejection fraction using densitometry. It is further concluded that identification of the systolic endocardial border is not required when performing densitometric EF measurements on dual-energy images. Drawing of the end-diastolic border alone is sufficient to produce an accurate ejection fraction measurement.
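Densitometric EF is computed from summed ventricular density rather than traced volumes, so any uncorrected background signal inflates both the end-diastolic and end-systolic sums and biases the ratio. A toy illustration of why the correction matters; the signal values are invented, not patient data:

```python
def densitometric_ef(d_ed, d_es, background=0.0):
    """Ejection fraction from summed densitometric signals at end-diastole
    (d_ed) and end-systole (d_es), optionally background-corrected."""
    return ((d_ed - background) - (d_es - background)) / (d_ed - background)

uncorrected = densitometric_ef(1000.0, 500.0)                   # 0.50
corrected = densitometric_ef(1000.0, 500.0, background=200.0)   # 0.625
```

The numerator is unchanged by a uniform background, but the denominator is not, which is why the abstract finds time-subtraction EFs so sensitive to whether the correction is performed.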


Subject(s)
Angiography, Digital Subtraction/methods; Densitometry; Radiography, Dual-Energy Scanned Projection/methods; Stroke Volume/physiology; Ventricular Function, Left/physiology; Adult; Aged; Algorithms; Artifacts; Cardiac Output/physiology; Cardiac Volume/physiology; Diastole/physiology; Heart Ventricles/diagnostic imaging; Humans; Middle Aged; Sensitivity and Specificity; Systole/physiology
6.
Med Phys; 20(3): 795-803, 1993.
Article in English | MEDLINE | ID: mdl-8350839

ABSTRACT

A system to measure the absolute iodine concentration (mg iodine/milliliter blood) in the left ventricle (LV) during digital subtraction ventriculography has been developed. The technique uses a catheter to draw blood from the LV through a detection cell. This occurs as the iodine bolus passes through the heart. The cell determines iodine concentration by measuring x-ray attenuation as the iodinated blood passes between a low-power x-ray tube and a diode detector. In vitro and in vivo testing of the system was conducted. The system response was linear (r = 0.99) with respect to iodine concentration. This response was independent of hematocrit. Dispersion of contrast medium in the catheter caused distortion of the shape of the time-concentration curve. The impulse response of the system was measured and found to be independent of hematocrit. A correction algorithm based on Wiener filter deconvolution was developed. In vitro testing using simulated time-concentration curves demonstrated that the rms error in the iodine measurement after dispersion correction did not exceed 4% over the region of the time-concentration curve extending from the peak to the point on the tail where the signal fell to 50% of its peak value. Cardiac output was measured from the time-concentration curve via the indicator-dilution method in an animal model. This cardiac output measurement (CO(I)) agreed closely with cardiac output measured simultaneously with an aortic flow probe (CO(P)), namely, CO(I) = 1.02 CO(P) - 0.03 L/min, r = 0.95, SEE = 10%, p < 0.001. (ABSTRACT TRUNCATED AT 250 WORDS)
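The indicator-dilution (Stewart-Hamilton) calculation used above divides the injected indicator dose by the area under the measured time-concentration curve. A sketch with invented numbers, not the animal data:

```python
def cardiac_output_l_min(dose_mg, conc_mg_per_ml, dt_s):
    """Stewart-Hamilton: flow = indicator dose / area under the
    time-concentration curve; samples are spaced dt_s seconds apart."""
    area = sum(conc_mg_per_ml) * dt_s        # mg*s/ml, rectangle-rule integral
    flow_ml_per_s = dose_mg / area
    return flow_ml_per_s * 60.0 / 1000.0     # convert ml/s to L/min

# Hypothetical LV dilution curve sampled once per second:
curve = [5, 15, 25, 30, 20, 12, 7, 4, 1.5, 0.5]
co = cardiac_output_l_min(10000.0, curve, 1.0)   # 5.0 L/min
```

This dependence on the curve's area is why the dispersion correction matters: catheter dispersion reshapes the curve, and any distortion of its tail propagates directly into the integral and hence the flow estimate.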


Subject(s)
Cardiac Output/physiology; Iodine/blood; Ventricular Function, Left/physiology; Animals; Dogs; Humans