1.
J Imaging Inform Med ; 37(2): 864-872, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343252

ABSTRACT

In CT imaging of the head, multiple image series are routinely reconstructed with different kernels and slice thicknesses. Reviewing the redundant information is an inefficient process for radiologists. We address this issue with a convolutional neural network (CNN)-based technique, synthesiZed Improved Resolution and Concurrent nOise reductioN (ZIRCON), that creates a single, thin, low-noise series that combines the favorable features from smooth and sharp head kernels. ZIRCON uses a CNN model with an autoencoder U-Net architecture that accepts two input channels (smooth- and sharp-kernel CT images) and combines their salient features to produce a single CT image. Image quality requirements are built into a task-based loss function with smooth and sharp loss terms specific to anatomical regions. The model is trained using supervised learning with paired routine-dose clinical non-contrast head CT images as training targets and simulated low-dose (25%) images as training inputs. One hundred unique de-identified clinical exams were used for training, ten for validation, and ten for testing. Visual comparisons and contrast measurements of ZIRCON revealed that thinner slices and the smooth-kernel loss function improved gray-white matter contrast. Combined with lower noise, this increased visibility of small soft-tissue features that would otherwise be impaired by partial volume averaging or noise. Line profile analysis showed that ZIRCON images largely retained sharpness compared to the sharp-kernel input images. ZIRCON combined desirable image quality properties of both smooth and sharp input kernels into a single, thin, low-noise series suitable for both brain and skull imaging.
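The region-specific loss described above can be sketched in miniature. The binary brain mask, plain MSE terms, and equal weighting below are illustrative assumptions, not the authors' exact formulation: the idea is only that soft-tissue pixels are scored against the smooth-kernel target and bone pixels against the sharp-kernel target.

```python
def mse(a, b):
    """Mean squared error over two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def task_based_loss(output, smooth_target, sharp_target, brain_mask):
    """Combine a smooth-kernel loss on brain pixels and a sharp-kernel
    loss on skull pixels, selected by a binary brain mask."""
    brain_out = [o for o, m in zip(output, brain_mask) if m]
    brain_tgt = [t for t, m in zip(smooth_target, brain_mask) if m]
    skull_out = [o for o, m in zip(output, brain_mask) if not m]
    skull_tgt = [t for t, m in zip(sharp_target, brain_mask) if not m]
    loss = 0.0
    if brain_out:
        loss += mse(brain_out, brain_tgt)
    if skull_out:
        loss += mse(skull_out, skull_tgt)
    return loss

# Tiny 1D "image": first three pixels brain, last two skull (values in HU).
out    = [10.0, 12.0, 11.0, 900.0, 905.0]
smooth = [10.0, 12.0, 11.0, 880.0, 890.0]   # smooth-kernel reconstruction
sharp  = [ 9.0, 13.0, 10.0, 900.0, 905.0]   # sharp-kernel reconstruction
mask   = [1, 1, 1, 0, 0]
print(task_based_loss(out, smooth, sharp, mask))  # 0.0: matches each regional target
```

An output that copies the smooth kernel in brain and the sharp kernel in skull scores zero, which is the behavior the combined series is trained toward.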

2.
J Comput Assist Tomogr ; 47(4): 603-607, 2023.
Article in English | MEDLINE | ID: mdl-37380148

ABSTRACT

OBJECTIVE: Noise quantification is fundamental to computed tomography (CT) image quality assessment and protocol optimization. This study proposes a deep learning-based framework, Single-scan Image Local Variance EstimatoR (SILVER), for estimating the local noise level within each region of a CT image. The local noise level will be referred to as a pixel-wise noise map. METHODS: The SILVER architecture resembled a U-Net convolutional neural network with mean-square-error loss. To generate training data, 100 replicate scans were acquired of 3 anthropomorphic phantoms (chest, head, and pelvis) using a sequential scan mode; 120,000 phantom images were allocated into training, validation, and testing data sets. Pixel-wise noise maps were calculated for the phantom data by taking the per-pixel SD from the 100 replicate scans. For training, the convolutional neural network inputs consisted of phantom CT image patches, and the training targets consisted of the corresponding calculated pixel-wise noise maps. Following training, SILVER noise maps were evaluated using phantom and patient images. For evaluation on patient images, SILVER noise maps were compared with manual noise measurements at the heart, aorta, liver, spleen, and fat. RESULTS: When tested on phantom images, the SILVER noise map prediction closely matched the calculated noise map target (root mean square error <8 Hounsfield units). Within 10 patient examinations, the SILVER noise map had an average percent error of 5% relative to manual region-of-interest measurements. CONCLUSION: The SILVER framework enabled accurate pixel-wise noise level estimation directly from patient images. This method is widely accessible because it operates in the image domain and requires only phantom data for training.
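The training target described above, the per-pixel SD across replicate phantom scans, is simple to compute. A pure-Python sketch with illustrative 2 × 2 "scans" standing in for the 100 replicate 512 × 512 acquisitions:

```python
from statistics import pstdev

def pixelwise_noise_map(replicates):
    """replicates: list of scans, each a list of rows of pixel values.
    Returns the per-pixel standard deviation across the replicates."""
    n_rows = len(replicates[0])
    n_cols = len(replicates[0][0])
    return [[pstdev(scan[r][c] for scan in replicates)
             for c in range(n_cols)] for r in range(n_rows)]

# Three replicate "scans" of the same static phantom (values in HU).
scans = [
    [[50.0, 52.0], [48.0, 51.0]],
    [[54.0, 50.0], [50.0, 49.0]],
    [[46.0, 54.0], [52.0, 53.0]],
]
noise_map = pixelwise_noise_map(scans)
print(noise_map[0][0])  # population SD of [50, 54, 46] ~ 3.266
```

Because the phantom is static, the only pixel-to-pixel variation across replicates is noise, so this map is a direct measurement of the local noise level the CNN learns to predict from a single scan.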


Subject(s)
Deep Learning , Humans , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Thorax , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
3.
Med Phys ; 50(10): 6283-6295, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37042049

ABSTRACT

BACKGROUND: Photon-counting-detector CT (PCD-CT) enables the production of virtual monoenergetic images (VMIs) at a high spatial resolution (HR) via simultaneous acquisition of multi-energy data. However, noise levels in these HR VMIs are markedly increased. PURPOSE: To develop a deep learning technique that utilizes a lower noise VMI as prior information to reduce image noise in HR, PCD-CT coronary CT angiography (CTA). METHODS: Coronary CTA exams of 10 patients were acquired using PCD-CT (NAEOTOM Alpha, Siemens Healthineers). A prior-information-enabled neural network (Pie-Net) was developed, treating one lower-noise VMI (e.g., 70 keV) as a prior input and one noisy VMI (e.g., 50 keV or 100 keV) as another. For data preprocessing, noisy VMIs were reconstructed by filtered back-projection (FBP) and iterative reconstruction (IR), which were then subtracted to generate "noise-only" images. Spatial decoupling was applied to the noise-only images to mitigate overfitting and improve randomization. Thicker slice averaging was used for the IR and prior images. The final training inputs for the convolutional neural network (CNN) inside the Pie-Net consisted of thicker-slice signal images with the reinsertion of spatially decoupled noise-only images and the thicker-slice prior images. The CNN training labels consisted of the corresponding thicker-slice label images without noise insertion. Pie-Net's performance was evaluated in terms of image noise, spatial detail preservation, and quantitative accuracy, and compared to a U-net-based method that did not include prior information. RESULTS: Pie-Net provided strong noise reduction, by 95 ± 1% relative to FBP and by 60 ± 8% relative to IR. For HR VMIs at different keV (e.g., 50 keV or 100 keV), Pie-Net maintained spatial and spectral fidelity. 
The inclusion of prior information from the spectral domain of the PCD-CT data improved the robustness of deep learning-based denoising compared to the U-Net-based method, which lost some spatial detail and introduced some artifacts. CONCLUSION: The proposed Pie-Net achieved substantial noise reduction while preserving the spatial and spectral properties of the HR VMIs.
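The FBP − IR subtraction step used in the preprocessing above can be sketched directly: because IR largely preserves the signal while suppressing noise, the difference is an approximately noise-only image that can later be reinserted onto low-noise images. The pixel values here are illustrative.

```python
def noise_only(fbp, ir):
    """Approximate noise-only image as the FBP - IR difference, assuming
    IR preserves the signal while suppressing most of the noise."""
    return [f - i for f, i in zip(fbp, ir)]

def reinsert(signal, noise, scale=1.0):
    """Reinsert (optionally scaled) noise onto a low-noise signal image
    to synthesize a noisy training input."""
    return [s + scale * n for s, n in zip(signal, noise)]

fbp = [102.0, 97.0, 105.0, 99.0]   # noisy filtered back-projection pixels
ir  = [100.0, 100.0, 101.0, 100.0] # low-noise iterative reconstruction
n = noise_only(fbp, ir)
print(n)                 # [2.0, -3.0, 4.0, -1.0]
print(reinsert(ir, n))   # recovers the FBP pixels
```

In the actual framework the extracted noise is additionally spatially decoupled and added to thicker-slice signal images; this sketch shows only the extract-and-reinsert identity that makes the synthesis work.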


Subject(s)
Computed Tomography Angiography , Deep Learning , Humans , Computed Tomography Angiography/methods , Phantoms, Imaging , Tomography, X-Ray Computed/methods , Coronary Angiography/methods
4.
Med Phys ; 50(7): 4173-4181, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37069830

ABSTRACT

BACKGROUND: Small coronary arteries containing stents pose a challenge in CT imaging due to metal-induced blooming artifact. High spatial resolution imaging capability is needed, as the presence of highly attenuating materials limits noninvasive assessment of luminal patency. PURPOSE: The purpose of this study was to quantify the effective lumen diameter within coronary stents using a clinical photon-counting-detector (PCD) CT in concert with a convolutional neural network (CNN) denoising algorithm, compared to an energy-integrating-detector (EID) CT system. METHODS: Seven coronary stents of different materials and inner diameters between 3.43 and 4.72 mm were placed in plastic tubes of diameters 3.96-4.87 mm containing 20 mg/mL of iodine solution, mimicking stented contrast-enhanced coronary arteries. Tubes were placed parallel with or perpendicular to the scanner's z-axis in an anthropomorphic phantom emulating an average-sized patient and scanned with a clinical EID-CT and PCD-CT. EID scans were performed using our standard coronary computed tomography angiography (cCTA) protocol (120 kV, 180 quality reference mAs). PCD scans were performed using the ultra-high-resolution (UHR) mode (120 × 0.2 mm collimation) at 120 kV with tube current adjusted so that CTDIvol was matched to that of EID scans. EID images were reconstructed per our routine clinical protocol (Br40, 0.6 mm thickness), and with the sharpest available kernel (Br69). PCD images were reconstructed at a thickness of 0.6 mm and a dedicated sharp kernel (Br89) which is only possible with the PCD UHR mode. To address increased image noise introduced by the Br89 kernel, an image-based CNN denoising algorithm was applied to the PCD images of stents scanned parallel to the scanner's z-axis. Stents were segmented based on full-width half maximum thresholding and morphological operations, from which effective lumen diameter was calculated and compared to reference sizes measured with a caliper.
RESULTS: Substantial blooming artifacts were observed on EID Br40 images, resulting in larger stent struts and reduced lumen diameter (effective diameter underestimated by 41% and 47% for parallel and perpendicular orientations, respectively). Blooming artifacts were observed on EID Br69 images with 19% and 31% underestimation of lumen diameter compared to the caliper for parallel and perpendicular scans, respectively. Overall image quality was substantially improved on PCD, with higher spatial resolution and reduced blooming artifacts, resulting in the clearer delineation of stent struts. Effective lumen diameters were underestimated by 9% and 19% relative to the reference for parallel and perpendicular scans, respectively. CNN reduced image noise by about 50% on PCD images without impacting lumen quantification (<0.3% difference). CONCLUSION: The PCD UHR mode improved in-stent lumen quantification for all seven stents as compared to EID images due to decreased blooming artifacts. Implementation of CNN denoising algorithms to PCD data substantially improved image quality.
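The full-width-half-maximum thresholding used for lumen sizing can be illustrated in one dimension. This sketch omits the morphological operations, works on a single line profile rather than a segmented image, and uses illustrative numbers.

```python
def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a 1D line profile, with linear
    interpolation at the half-maximum crossings; spacing is the pixel
    size (e.g., in mm)."""
    peak, base = max(profile), min(profile)
    half = base + (peak - base) / 2.0
    i = next(k for k, v in enumerate(profile) if v >= half)
    left = i if i == 0 else i - (profile[i] - half) / (profile[i] - profile[i - 1])
    j = max(k for k, v in enumerate(profile) if v >= half)
    right = j if j == len(profile) - 1 else j + (profile[j] - half) / (profile[j] - profile[j + 1])
    return (right - left) * spacing

# Idealized profile across a contrast-filled lumen (values in HU, illustrative).
profile = [0.0, 0.0, 50.0, 100.0, 100.0, 50.0, 0.0, 0.0]
print(fwhm(profile, spacing=0.5))  # 1.5 (mm): 3 pixels wide at half maximum
```

Blooming effectively widens the apparent stent struts in the profile, which narrows the measured lumen width; sharper kernels and higher spatial resolution keep the half-maximum crossings closer to the true edges.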


Subject(s)
Coronary Vessels , Tomography, X-Ray Computed , Humans , Coronary Vessels/diagnostic imaging , Tomography, X-Ray Computed/methods , Computed Tomography Angiography/methods , Neural Networks, Computer , Phantoms, Imaging , Stents , Photons
5.
J Med Imaging (Bellingham) ; 10(1): 014003, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36743869

ABSTRACT

Purpose: Deep convolutional neural network (CNN)-based methods are increasingly used for reducing image noise in computed tomography (CT). Current attempts at CNN denoising are based on 2D or 3D CNN models with a single- or multiple-slice input. Our work aims to investigate if the multiple-slice input improves the denoising performance compared with the single-slice input and if a 3D network architecture is better than a 2D version at utilizing the multislice input. Approach: Two categories of network architectures can be used for the multislice input. First, multislice images can be stacked channel-wise as the multichannel input to a 2D CNN model. Second, multislice images can be employed as the 3D volumetric input to a 3D CNN model, in which the 3D convolution layers are adopted. We make performance comparisons among 2D CNN models with one, three, and seven input slices and two versions of 3D CNN models with seven input slices and one or three output slices. Evaluation was performed on liver CT images using three quantitative metrics with full-dose images as reference. Visual assessment was made by an experienced radiologist. Results: When the input channels of the 2D CNN model increase from one to three to seven, a trend of improved performance was observed. Comparing the three models with the seven-slice input, the 3D CNN model with a one-slice output outperforms the other models in terms of noise texture and homogeneity in liver parenchyma as well as subjective visualization of vessels. Conclusions: We conclude that the multislice input is an effective strategy for improving performance for 2D deep CNN denoising models. The pure 3D CNN model tends to have a better performance than the other models in terms of continuity across axial slices, but the difference was not significant compared with the 2D CNN model with the same number of slices as the input.
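The two input arrangements compared above differ only in how the slice axis is presented to the network. A minimal sketch of the resulting tensor shapes, following the common (channels, [depth,] height, width) convention; the slice count and image size are illustrative:

```python
def as_2d_multichannel(slices):
    """Stack S slices of shape (H, W) into an (S, H, W) multichannel
    input: a 2D CNN mixes slices only in its first convolution."""
    return ("channels", len(slices), len(slices[0]), len(slices[0][0]))

def as_3d_volume(slices):
    """Treat the same S slices as a one-channel (1, S, H, W) volume,
    so 3D convolutions can mix information along the slice axis at
    every layer."""
    return ("volume", 1, len(slices), len(slices[0]), len(slices[0][0]))

slices = [[[0.0] * 4 for _ in range(4)] for _ in range(7)]  # 7 slices of 4x4
print(as_2d_multichannel(slices))  # ('channels', 7, 4, 4)
print(as_3d_volume(slices))        # ('volume', 1, 7, 4, 4)
```

The data are identical in both cases; the study's comparison is about whether repeated through-plane mixing (3D convolutions) extracts more from the seven slices than a single channel-wise merge.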

6.
Med Phys ; 50(2): 821-830, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36385704

ABSTRACT

BACKGROUND: Deep artificial neural networks such as convolutional neural networks (CNNs) have been shown to be effective models for reducing noise in CT images while preserving anatomic details. A practical bottleneck for developing CNN-based denoising models is the procurement of training data consisting of paired examples of high-noise and low-noise CT images. Obtaining these paired data is not practical in a clinical setting where the raw projection data are not available. This work outlines a technique to optimize CNN denoising models using methods that are available in a routine clinical setting. PURPOSE: To demonstrate a phantom-based training framework for CNN noise reduction that can be efficiently implemented on any CT scanner. METHODS: The phantom-based training framework uses supervised learning in which training data are synthesized using an image-based noise insertion technique. Ten patient image series were used for training and validation (9:1), and noise-only images were obtained from anthropomorphic phantom scans. Phantom noise-only images were superimposed on patient images to imitate low-dose CT images for use in training. A modified U-Net architecture was used with mean-squared-error and feature reconstruction loss. The training framework was tested on clinically indicated whole-body low-dose CT images, as well as routine abdomen-pelvis exams for which projection data were unavailable. Performance was assessed based on root-mean-square error, structural similarity, line profiles, and visual assessment. RESULTS: When the CNN was tested on five reserved quarter-dose whole-body low-dose CT images, noise was reduced by 75%, root-mean-square error was reduced by 34%, and structural similarity increased by 60%. Visual analysis and line profiles indicated that the method significantly reduced noise while maintaining spatial resolution of anatomic features.
CONCLUSION: The proposed phantom-based training framework demonstrated strong noise reduction while preserving spatial detail. Because this method is based within the image domain, it can be easily implemented without access to projection data.
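One way to set the amount of phantom noise superimposed on a patient image is to scale it for the target dose level, assuming quantum noise scales as 1/sqrt(dose) and the inserted noise is independent of the noise already in the image. This scaling rule is a common convention, not necessarily the authors' exact procedure.

```python
import math

def dose_noise_scale(dose_fraction):
    """Multiplier for full-dose-level noise so that, added to a full-dose
    image, the result matches the noise of the given dose fraction
    (independent additions: sigma_total^2 = sigma^2 + (s*sigma)^2)."""
    return math.sqrt(1.0 / dose_fraction - 1.0)

def simulate_low_dose(image, phantom_noise, dose_fraction):
    """Superimpose scaled phantom noise-only pixels on a patient image."""
    s = dose_noise_scale(dose_fraction)
    return [p + s * n for p, n in zip(image, phantom_noise)]

print(round(dose_noise_scale(0.25), 3))  # 1.732: quarter dose needs sqrt(3)x noise
```

Under this rule the quarter-dose test condition in the abstract corresponds to adding roughly 1.7 times the full-dose noise magnitude on top of the routine-dose image.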


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Tomography Scanners, X-Ray Computed , Phantoms, Imaging , Signal-To-Noise Ratio
7.
Radiology ; 306(1): 229-236, 2023 01.
Article in English | MEDLINE | ID: mdl-36066364

ABSTRACT

Background Photon-counting detector (PCD) CT and deep learning noise reduction may improve spatial resolution at lower radiation doses compared with energy-integrating detector (EID) CT. Purpose To demonstrate the diagnostic impact of improved spatial resolution in whole-body low-dose CT scans for viewing multiple myeloma by using PCD CT with deep learning denoising compared with conventional EID CT. Materials and Methods Between April and July 2021, adult participants who underwent a whole-body EID CT scan were prospectively enrolled and scanned with a PCD CT system in ultra-high-resolution mode at matched radiation dose (8 mSv for an average adult) at an academic medical center. EID CT and PCD CT images were reconstructed with Br44 and Br64 kernels at 2-mm section thickness. PCD CT images were also reconstructed with Br44 and Br76 kernels at 0.6-mm section thickness. The thinner PCD CT images were denoised by using a convolutional neural network. Image quality was objectively quantified in two phantoms and a randomly selected subset of participants (10 participants; median age, 63.5 years; five men). Two radiologists scored PCD CT images relative to EID CT by using a five-point Likert scale to detect findings reflecting multiple myeloma. The scoring for the matched reconstruction series was blinded to scanner type. Reader-averaged scores were tested with the null hypothesis of equivalent visualization between EID and PCD. Results Twenty-seven participants (median age, 68 years; IQR, 61-72 years; 16 men) were included. The blinded assessment of 2-mm images demonstrated improvement in viewing lytic lesions, intramedullary lesions, fatty metamorphosis, and pathologic fractures for PCD CT versus EID CT (P < .05 for all comparisons). 
The 0.6-mm PCD CT images with convolutional neural network denoising also demonstrated improvement in viewing all four pathologic abnormalities and detected one or more lytic lesions in 21 of 27 participants compared with the 2-mm EID CT images (P < .001). Conclusion Ultra-high-resolution photon-counting detector CT improved the visibility of multiple myeloma lesions relative to energy-integrating detector CT. © RSNA, 2022 Online supplemental material is available for this article.


Subject(s)
Deep Learning , Multiple Myeloma , Adult , Aged , Humans , Male , Middle Aged , Phantoms, Imaging , Photons , Tomography, X-Ray Computed/methods , Female
8.
Phys Med Biol ; 67(17)2022 09 02.
Article in English | MEDLINE | ID: mdl-35944556

ABSTRACT

Objective. To develop a convolutional neural network (CNN) noise reduction technique for ultra-high-resolution photon-counting detector computed tomography (UHR-PCD-CT) that can be efficiently implemented using only clinically available reconstructed images. The developed technique was demonstrated for skeletal survey, lung screening, and head CT angiography (CTA). Approach. Thirty-nine participants were enrolled in this study, each of whom received a UHR-PCD and an energy-integrating detector (EID) CT scan. The developed CNN noise reduction technique uses image-based noise insertion and UHR-PCD-CT images to train a U-Net via supervised learning. For each application, 13 patient scans were reconstructed using filtered back projection (FBP) and iterative reconstruction (IR) and allocated into training, validation, and testing datasets (9:1:3). The subtraction of FBP and IR images resulted in approximately noise-only images. The 5-slice average of IR produced a thick reference image. The CNN training input consisted of thick reference images with reinsertion of spatially decoupled noise-only images. The training target consisted of the corresponding thick reference images without noise insertion. Performance was evaluated based on difference images, line profiles, noise measurements, nonlinear perturbation assessment, and radiologist visual assessment. UHR-PCD-CT images were compared with EID images (clinical standard). Main results. Up to 89% noise reduction was achieved using the proposed CNN. Nonlinear perturbation assessment indicated reasonable retention of 1 mm radius and 1000 HU contrast signals (>80% for skeletal survey and head CTA, >50% for lung screening). A contour plot indicated reduced retention for small-radius and low-contrast perturbations. Radiologists preferred CNN over IR for UHR-PCD-CT noise reduction. Additionally, UHR-PCD-CT with CNN was preferred over standard-resolution EID-CT images. Significance. CT images reconstructed with very sharp kernels and/or thin sections suffer from increased image noise. Deep learning noise reduction can be used to offset the noise level and increase the utility of UHR-PCD-CT images.
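The spatial decoupling step above can be sketched as shuffling non-overlapping patches of a noise-only image, so the noise texture the network sees is no longer tied to the underlying anatomy. The patch size and the shuffling scheme here are illustrative assumptions.

```python
import random

def spatially_decouple(noise_image, patch, seed=0):
    """Shuffle non-overlapping patch x patch blocks of a noise-only
    image. Pixel values are preserved; only their locations move."""
    rng = random.Random(seed)
    h, w = len(noise_image), len(noise_image[0])
    patches = [[row[c:c + patch] for row in noise_image[r:r + patch]]
               for r in range(0, h, patch) for c in range(0, w, patch)]
    rng.shuffle(patches)
    out = [[0.0] * w for _ in range(h)]
    idx = 0
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            p = patches[idx]; idx += 1
            for dr in range(patch):
                for dc in range(patch):
                    out[r + dr][c + dc] = p[dr][dc]
    return out

noise = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
shuffled = spatially_decouple(noise, patch=2)
# The multiset of pixel values is preserved; only locations change.
print(sorted(v for row in shuffled for v in row) ==
      sorted(v for row in noise for v in row))  # True
```

Decoupling like this mitigates overfitting: the network cannot learn to predict noise from anatomical structure, only to remove noise of the right texture.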


Subject(s)
Photons , Radiographic Image Enhancement , Humans , Neural Networks, Computer , Phantoms, Imaging , Radiographic Image Enhancement/methods , Tomography, X-Ray Computed/methods
9.
Med Phys ; 49(8): 4988-4998, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35754205

ABSTRACT

BACKGROUND: A common rule of thumb for object detection is the Rose criterion, which states that a signal must be five standard deviations above background to be detectable to a human observer. The validity of the Rose criterion in CT imaging is limited due to the presence of correlated noise. Recent reconstruction and denoising methodologies are also able to restore apparent image quality in very noisy conditions, and the ultimate limits of these methodologies are not yet known. PURPOSE: To establish a lower bound on the minimum achievable signal-to-noise ratio (SNR) for object detection, below which detection performance is poor regardless of reconstruction or denoising methodology. METHODS: We consider a numerical observer that operates on projection data and has perfect knowledge of the background and the objects to be detected, and determine the minimum projection SNR that is necessary to achieve predetermined lesion-level sensitivity and case-level specificity targets. We define a set of discrete signal objects O that encompasses any lesion of interest and could include lesions of different sizes, shapes, and locations. The task is to determine which object of O is present, or to state the null hypothesis that no object is present. We constrain each object in O to have equivalent projection SNR and use Monte Carlo methods to calculate the required projection SNR. Because our calculations are performed in projection space, they impose an upper limit on the performance possible from reconstructed images. We chose O to be a collection of elliptical or circular low-contrast metastases and simulated detection of these objects in a parallel-beam system with Gaussian statistics. Unless otherwise stated, we assume a target of 80% lesion-level sensitivity and 80% case-level specificity and a search field of view that is 6 cm by 6 cm by 10 slices.
RESULTS: When O contains only a single object, our problem is equivalent to two-alternative forced choice (2AFC) and the required projection SNR is 1.7. When O consists of circular 6-mm lesions at different locations in space, the required projection SNR is 5.1. When O is extended to include ellipses and circles of different sizes, the required projection SNR increases to 5.3. The required SNR increases if the sensitivity target, specificity target, or search field of view increases. CONCLUSIONS: Even with perfect knowledge of the background and target objects, the ideal observer still requires an SNR of approximately 5. This is a lower bound on the SNR that would be required in real conditions, where the background and target objects are not known perfectly. Algorithms that denoise lesions with less than 5 projection SNR, regardless of the denoising methodology, are expected to show vanishing effects or false-positive lesions.
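The single-object (2AFC-equivalent) figure can be reproduced analytically: with unit-variance Gaussian test statistics, one threshold meets both an 80% sensitivity and an 80% specificity target only if the null and signal distributions are separated by z(0.80) + z(0.80) standard deviations. This reproduces the 1.7 result above; the multi-object search values (≈5) require the Monte Carlo treatment the abstract describes, since the observer must beat the maximum over many alternatives.

```python
from statistics import NormalDist

def required_snr(sensitivity, specificity):
    """Minimum separation, in standard deviations, between the null and
    signal Gaussians so a single threshold meets both operating-point
    targets: z(specificity) + z(sensitivity)."""
    z = NormalDist().inv_cdf
    return z(specificity) + z(sensitivity)

print(round(required_snr(0.80, 0.80), 1))  # 1.7
```

Raising either target pushes the threshold and the required separation up, consistent with the abstract's observation that stricter sensitivity or specificity targets increase the required SNR.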


Subject(s)
Algorithms , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/methods , Monte Carlo Method , Phantoms, Imaging , Radiation Dosage , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods
10.
Article in English | MEDLINE | ID: mdl-36685338

ABSTRACT

The Rose criterion, stating that an object is detectable if it is five standard deviations above background, has been used as a rule of thumb for decades but its applicability is limited in computed tomography. Recent denoising algorithms, powered by convolutional neural networks, promise to reveal objects that were previously obscured by noise, but any denoising algorithm is fundamentally limited by the statistics of the sinogram. In this work, we estimate the minimum SNR necessary for detecting one of a set of objects in the projection domain. We assume there is a set of objects O for which detection is desired, and we study an ideal observer that sequentially compares each member of O to the null hypothesis. This comparison can be reduced to the classic one-dimensional signal detection problem between two Gaussians with different mean values, and from this we define a quantity, the projection SNR. We use simulations to estimate the minimum projection SNR necessary to achieve a sensitivity of 80% and specificity of 80%. We find that when we model a search task of a circular 6 mm lesion in a region of interest that is 60 mm by 60 mm by 10 slices, the minimum projection SNR is 5.1. This required SNR is reminiscent of the Rose criterion but is derived with entirely different assumptions, including the application of the ideal observer in the projection domain.

11.
J Comput Assist Tomogr ; 45(4): 544-551, 2021.
Article in English | MEDLINE | ID: mdl-34519453

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate a narrowly trained convolutional neural network (CNN) denoising algorithm when applied to images reconstructed differently than the training data set. METHODS: A residual CNN was trained using 10 noise-inserted examinations. Training images were reconstructed with a 275-mm field of view (FOV), a medium-smooth kernel (D30), and 3-mm thickness. Six examinations were reserved for testing; these were reconstructed with FOVs of 100 to 450 mm, smooth to sharp kernels, and thicknesses of 1 to 5 mm. RESULTS: When test and training reconstruction settings were not matched, there was either reduced denoising efficiency or resolution degradation. Denoising efficiency was reduced when the FOV was decreased or a smoother kernel was used. Resolution loss occurred when the network was applied to an increased FOV, a sharper kernel, or decreased image thickness. CONCLUSIONS: CNN denoising performance was degraded by variations in FOV or kernel and by decreased thickness; it was not affected by increased thickness.


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Algorithms , Deep Learning , Humans
12.
J Med Imaging (Bellingham) ; 8(5): 052104, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33889658

ABSTRACT

Purpose: We developed a deep learning method to reduce noise and beam-hardening artifact in virtual monoenergetic images (VMI) at low x-ray energy levels. Approach: An encoder-decoder type convolutional neural network was implemented with customized inception modules and in-house-designed training loss (denoted as Incept-net), to directly estimate VMI from multi-energy CT images. Images of an abdomen-sized water phantom with varying insert materials were acquired from a research photon-counting-detector CT. The Incept-net was trained with image patches (64 × 64 pixels) extracted from the phantom data, as well as synthesized, random-shaped numerical insert materials. The whole CT images (512 × 512 pixels) with the remaining real insert materials that were unseen in network training were used for testing. Seven contrast-enhanced abdominal CT exams were used for preliminary evaluation of Incept-net generalizability over anatomical background. Mean absolute percentage error (MAPE) was used to evaluate CT number accuracy. Results: Compared to commercial VMI software, Incept-net largely suppressed beam-hardening artifact and reduced noise (53%) in the phantom study. Incept-net presented comparable CT number accuracy at higher-density inserts (P-value [0.0625, 0.999]) and improved it at lower-density inserts (P-value = 0.0313), with overall MAPE: Incept-net [2.9%, 4.6%]; commercial VMI [6.7%, 10.9%]. In patient images, Incept-net suppressed beam-hardening artifact and reduced noise (up to 50%, P-value = 0.0156). Conclusion: In this preliminary study, Incept-net presented the potential to improve low-energy VMI quality.
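The MAPE metric used above for CT number accuracy is worth stating explicitly; the insert CT numbers below are illustrative, not the phantom's actual composition.

```python
def mape(measured, reference):
    """Mean absolute percentage error of measured CT numbers against
    the reference values, in percent."""
    return 100.0 * sum(abs(m - r) / abs(r)
                       for m, r in zip(measured, reference)) / len(reference)

ref  = [100.0, 200.0, 400.0]   # nominal insert CT numbers (HU), illustrative
meas = [ 97.0, 206.0, 392.0]   # network estimates for the same inserts
print(round(mape(meas, ref), 1))  # 2.7
```

Because the error is normalized per insert, low-density materials with small CT numbers dominate the percentage error, which is why accuracy gains at lower-density inserts move the overall MAPE noticeably.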

13.
Article in English | MEDLINE | ID: mdl-35386837

ABSTRACT

In this study, we describe a systematic approach to optimize deep-learning-based image processing algorithms using random search. The optimization technique is demonstrated on a phantom-based noise reduction training framework; however, the techniques described can be applied generally for other deep learning image processing applications. The parameter space explored included number of convolutional layers, number of filters, kernel size, loss function, and network architecture (either U-Net or ResNet). A total of 100 network models were examined (50 random search, 50 ablation experiments). Following the random search, ablation experiments resulted in a very minor performance improvement indicating near optimal settings were found during the random search. The top performing network architecture was a U-Net with 4 pooling layers, 64 filters, 3×3 kernel size, ELU activation, and a weighted feature reconstruction loss (0.2×VGG + 0.8×MSE). Relative to the low-dose input image, the CNN reduced noise by 90%, reduced RMSE by 34%, and increased SSIM by 76% on six patient exams reserved for testing. The visualization of hepatic and bone lesions was greatly improved following noise reduction.
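The random-search procedure above reduces to sampling configurations uniformly from the parameter space and keeping the best scorer. A minimal sketch; the option lists mirror the parameters named in the abstract but are illustrative, and the stand-in score function replaces the actual train-and-validate step.

```python
import random

# Hypothetical search space mirroring the parameters described above.
SPACE = {
    "architecture": ["U-Net", "ResNet"],
    "n_layers":     [2, 3, 4, 5],
    "n_filters":    [16, 32, 64, 128],
    "kernel_size":  [3, 5, 7],
    "loss":         ["MSE", "VGG", "0.2*VGG + 0.8*MSE"],
}

def sample_config(rng):
    """Draw one configuration uniformly at random from SPACE."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def random_search(n_trials, score_fn, seed=0):
    """Sample n_trials random configurations and keep the best-scoring
    one. score_fn would train and evaluate a model; here it is a
    stand-in returning a number to maximize."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        cfg = sample_config(rng)
        s = score_fn(cfg)
        if best is None or s > best[0]:
            best = (s, cfg)
    return best

# Stand-in score: prefer more filters (a real run would use validation loss).
score, cfg = random_search(50, lambda c: c["n_filters"])
print(score, cfg["architecture"])
```

Ablation experiments then perturb one parameter of the winning configuration at a time, which is how the study confirmed the random search had already landed near-optimal settings.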

14.
J Acoust Soc Am ; 141(3): EL239, 2017 03.
Article in English | MEDLINE | ID: mdl-28372141

ABSTRACT

Refracto-vibrometry was used to optically image propagating Mach cones in water. These Mach cones were produced by ultrasonic longitudinal and shear waves traveling through submerged 12.7 mm diameter metal cylinders. Full-field videos of the propagating wave fronts were obtained using refracto-vibrometry. A laser Doppler vibrometer, directed at a retroreflective surface, sampled time-varying water density at numerous scan points. Wave speeds were determined from the Mach cone apex angles; the measured longitudinal and shear wave speeds in steel (6060 ± 170 m/s and 3310 ± 110 m/s, respectively) and beryllium (12 400 ± 700 m/s and 8100 ± 500 m/s) agreed with published values.
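The apex-angle relation behind the reported wave speeds is the standard Mach construction: a wave traveling along the cylinder at speed v faster than sound in water radiates a cone whose half-angle satisfies sin(θ) = c_water / v. A sketch assuming c_water ≈ 1480 m/s; the abstract does not give the measured angles, so the angle below is back-computed for illustration.

```python
import math

def wave_speed_from_apex(half_angle_deg, c_water=1480.0):
    """Speed of the guided wave in the cylinder from the Mach cone
    half-angle: sin(theta) = c_water / v, so v = c_water / sin(theta).
    c_water is the assumed sound speed in water (m/s)."""
    return c_water / math.sin(math.radians(half_angle_deg))

# Half-angle that a 6060 m/s longitudinal wave in steel would produce:
theta = math.degrees(math.asin(1480.0 / 6060.0))
print(round(theta, 1))                    # ~14.1 degrees
print(round(wave_speed_from_apex(theta)))  # 6060
```

The uncertainty in the reported speeds follows from the same relation: a small error in the measured apex angle propagates into the m/s figures roughly in proportion to v/tan(θ).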
