Results 1 - 17 of 17
1.
Artif Intell Med ; 95: 82-87, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30266546

ABSTRACT

In this paper, we propose a pathological image compression framework to address the needs of Big Data image analysis in digital pathology. Big Data image analytics require analysis of large databases of high-resolution images using distributed storage and computing resources along with transmission of large amounts of data between the storage and computing nodes that can create a major processing bottleneck. The proposed image compression framework is based on the JPEG2000 Interactive Protocol and aims to minimize the amount of data transfer between the storage and computing nodes as well as to considerably reduce the computational demands of the decompression engine. The proposed framework was integrated into hotspot detection from images of breast biopsies, yielding considerable reduction of data and computing requirements.
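The core data-reduction idea (transfer only the code-stream pieces that a region of interest actually needs) can be sketched independently of JPIP; the tile size, grid, and function names below are illustrative, not the paper's API:

```python
TILE = 64  # assumed tile edge length, in pixels

def tiles_for_roi(roi, tile=TILE):
    # (row, col) indices of the tiles overlapping a region of interest;
    # only these tiles need to travel from storage to the compute node.
    (r0, r1), (c0, c1) = roi
    return [(r, c)
            for r in range(r0 // tile, (r1 - 1) // tile + 1)
            for c in range(c0 // tile, (c1 - 1) // tile + 1)]

roi = ((100, 180), (60, 70))      # half-open pixel ranges of a detected hotspot
needed = tiles_for_roi(roi)
fraction = len(needed) / (8 * 8)  # image assumed 512x512, i.e. an 8x8 tile grid
```

For this hotspot only 4 of 64 tiles are fetched, which is the kind of transfer and decompression saving the framework targets at whole-slide scale.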


Subject(s)
Big Data , Breast Neoplasms/diagnosis , Data Compression/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Information Storage and Retrieval
2.
IEEE Trans Image Process ; 10(3): 465-70, 2001.
Article in English | MEDLINE | ID: mdl-18249635

ABSTRACT

Blur identification is a crucial first step in many image restoration techniques. An approach for identifying image blur using vector quantizer encoder distortion is proposed. The blur in an image is identified by choosing from a finite set of candidate blur functions. The method requires a set of training images produced by each of the blur candidates. Each of these sets is used to train a vector quantizer codebook. Given an image degraded by unknown blur, it is first encoded with each of these codebooks. The blur in the image is then estimated by choosing from among the candidates, the one corresponding to the codebook that provides the lowest encoder distortion. Simulations are performed at various bit rates and with different levels of noise. Results show that the method performs well even at a signal-to-noise ratio (SNR) as low as 10 dB.
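A minimal sketch of the identification scheme, assuming box-blur candidates, tiny LBG-style codebooks, and synthetic training images (all sizes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(img, kernel):
    # Circular 2-D convolution via FFT (a simple stand-in blur model).
    K = np.fft.fft2(kernel, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def patches(img, p=4):
    h, w = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, h - p + 1, p)
                     for j in range(0, w - p + 1, p)])

def train_codebook(data, k=16, iters=10):
    # Tiny k-means (LBG-style) codebook trainer.
    cb = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((data[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(idx == c):
                cb[c] = data[idx == c].mean(0)
    return cb

def encoder_distortion(data, cb):
    return ((data[:, None, :] - cb[None]) ** 2).sum(-1).min(1).mean()

# Two candidate blurs: a wide and a narrow box kernel.
candidates = [np.ones((5, 5)) / 25.0, np.ones((2, 2)) / 4.0]

train = [rng.random((64, 64)) for _ in range(4)]
codebooks = [train_codebook(np.vstack([patches(blur(t, c)) for t in train]))
             for c in candidates]

# An image degraded by candidate 0; identify the blur by lowest distortion.
test_img = blur(rng.random((64, 64)), candidates[0])
dists = [encoder_distortion(patches(test_img), cb) for cb in codebooks]
identified = int(np.argmin(dists))
```

The codebook trained on matching blur statistics encodes the unknown image with the least distortion, which is exactly the selection rule the abstract describes.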

3.
Magn Reson Med ; 43(5): 682-90, 2000 May.
Article in English | MEDLINE | ID: mdl-10800033

ABSTRACT

An adaptive implementation of the spatial matched filter and its application to the reconstruction of phased array MR imagery is described. Locally relevant array correlation statistics for the NMR signal and noise processes are derived directly from the set of complex individual coil images, in the form of sample correlation matrices. Eigen-analysis yields an optimal filter vector for the estimated signal and noise array correlation statistics. The technique enables near-optimal reconstruction of multicoil MR imagery without a-priori knowledge of the individual coil field maps or noise correlation structure. Experimental results indicate SNR performance approaching that of the optimal matched filter. Compared to the sum-of-squares technique, the RMS noise level in dark image regions is reduced by as much as the square root of N, where N is the number of coils in the array. The technique is also effective in suppressing localized motion and flow artifacts.
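A toy version of the eigen-analysis step, with simulated coil sensitivities standing in for real field maps (the flattened 1-D pixel layout, coil count, and noise level are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

n_coils, n_pix = 4, 2048
truth = rng.random(n_pix)                                         # underlying image
sens = rng.normal(size=n_coils) + 1j * rng.normal(size=n_coils)   # coil sensitivities
noise = 0.05 * (rng.normal(size=(n_coils, n_pix))
                + 1j * rng.normal(size=(n_coils, n_pix)))
coils = sens[:, None] * truth[None, :] + noise                    # complex coil images

# Sample array correlation matrix, estimated directly from the coil data.
R = coils @ coils.conj().T / n_pix

# Eigen-analysis: the dominant eigenvector approximates the sensitivity
# vector and serves as the (near-optimal) matched-filter weights.
w, V = np.linalg.eigh(R)
m = V[:, -1]

combined = np.abs(m.conj() @ coils)           # adaptive matched-filter image
sos = np.sqrt((np.abs(coils) ** 2).sum(0))    # conventional sum-of-squares

corr = np.corrcoef(combined, truth)[0, 1]
```

No a priori field maps are used: everything the filter needs is estimated from the coil data themselves, mirroring the adaptive scheme in the abstract.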


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/instrumentation , Artifacts , Humans , Magnetic Resonance Angiography , Mathematics , Movement , Thorax
4.
J Opt Soc Am A Opt Image Sci Vis ; 17(2): 265-75, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10680628

ABSTRACT

The Viterbi algorithm (VA) is known to give an optimal solution to the problem of estimating one-dimensional sequences of discrete-valued pixels corrupted by finite-support blur and memoryless noise. A row-by-row estimation along with decision feedback and vector quantization is used to reduce the computational complexity of the VA and allow the estimation of two-dimensional images. This reduced-complexity VA (RCVA) is shown to produce near-optimal estimation of random binary images. In addition, simulated restorations of gray-scale images show the RCVA estimates to be an improvement over the estimates obtained by the conventional Wiener filter (WF). Unlike the WF, the RCVA is capable of superresolution and is adaptable for use in restoring data from signal-dependent Poisson noise corruption. Experimental restorations of random binary data gathered from an optical imaging system support the simulations and show that the RCVA estimate has fewer than one third of the errors of the WF estimate.
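The one-dimensional case can be sketched directly: a binary sequence through a known 3-tap blur plus Gaussian noise, restored by a Viterbi search whose states are the last two pixels (the kernel and noise level here are illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Binary source through a known 3-tap blur plus Gaussian noise.
h = np.array([0.25, 0.5, 0.25])
x = rng.integers(0, 2, 200).astype(float)
y = np.convolve(x, h, mode="full")[:len(x)] + 0.05 * rng.normal(size=len(x))

# Viterbi over states = last two pixels; branch metric = squared error
# between the observation and the blur output implied by the transition.
states = list(product([0.0, 1.0], repeat=2))
cost = {s: 0.0 for s in states}
back = []
for n in range(len(y)):
    new_cost, ptr = {}, {}
    for s in states:            # s = (x[n-1], x[n]) after the transition
        best, arg = np.inf, None
        for p in states:        # predecessor p = (x[n-2], x[n-1])
            if p[1] != s[0]:
                continue
            pred = h[0] * s[1] + h[1] * s[0] + h[2] * p[0]
            c = cost[p] + (y[n] - pred) ** 2
            if c < best:
                best, arg = c, p
        new_cost[s], ptr[s] = best, arg
    cost, back = new_cost, back + [ptr]

# Trace back the minimum-cost path.
s = min(cost, key=cost.get)
est = [s[1]]
for ptr in reversed(back[1:]):
    s = ptr[s]
    est.append(s[1])
est = np.array(est[::-1])

bit_errors = int((est != x).sum())
```

The RCVA of the abstract extends this idea to 2-D images row by row, using decision feedback and vector quantization to keep the state space tractable.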


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Models, Theoretical , Artifacts , Humans , Likelihood Functions
5.
IEEE Trans Image Process ; 9(2): 295-8, 2000.
Article in English | MEDLINE | ID: mdl-18255400

ABSTRACT

This correspondence presents an improved version of an algorithm designed to perform image restoration via nonlinear interpolative vector quantization (NLIVQ). The improvement results from using lapped blocks during the decoding process. The algorithm is trained on original and diffraction-limited image pairs. The discrete cosine transform is again used in the codebook design process to control complexity. Simulation results are presented which demonstrate improvements over the nonlapped algorithm in both observed image quality and peak signal-to-noise ratio. In addition, the nonlinearity of the algorithm is shown to produce super-resolution in the restored images.

6.
IEEE Trans Image Process ; 9(11): 1972-7, 2000.
Article in English | MEDLINE | ID: mdl-18262932

ABSTRACT

Reversible integer wavelet transforms allow both lossless and lossy decoding using a single bitstream. We present a new fully scalable image coder and investigate the lossless and lossy performance of these transforms in the proposed coder. The lossless compression performance of the presented method is comparable to JPEG-LS. The lossy performance is quite competitive with other efficient lossy compression methods.
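The reversibility that allows one bitstream to serve both lossless and lossy decoding can be illustrated with the S (integer Haar) transform, whose lifting structure guarantees exact inversion in integer arithmetic; this is the basic building block, not the paper's full coder:

```python
import numpy as np

def s_transform(x):
    # Reversible integer (S) transform: one lifting step of the Haar wavelet.
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    d = a - b               # integer detail coefficients
    s = b + (d >> 1)        # integer approximation, equals floor((a + b) / 2)
    return s, d

def inverse_s_transform(s, d):
    # Exact inverse: the same lifting steps, undone in reverse order.
    b = s - (d >> 1)
    a = d + b
    out = np.empty(2 * len(s), dtype=int)
    out[0::2], out[1::2] = a, b
    return out

x = np.array([12, 10, 7, 7, 255, 0, 3, 200])
s, d = s_transform(x)
rec = inverse_s_transform(s, d)
```

Because every step maps integers to integers and is exactly invertible, truncating the bitstream gives a lossy image while the full bitstream decodes losslessly.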

7.
Appl Opt ; 39(2): 269-76, 2000 Jan 10.
Article in English | MEDLINE | ID: mdl-18337894

ABSTRACT

We present a new image-restoration algorithm for binary-valued imagery. A trellis-based search method is described that exploits the finite alphabet of the target imagery. This algorithm seeks the maximum-likelihood solution to the image-restoration problem and is motivated by the Viterbi algorithm for traditional binary data detection in the presence of intersymbol interference and noise. We describe a blockwise method to restore two-dimensional imagery on a row-by-row basis and in which a priori knowledge of image pixel correlation structure can be included through a modification to the trellis transition probabilities. The performance of the new Viterbi-based algorithm is shown to be superior to Wiener filtering in terms of both bit error rate and visual quality. Algorithmic choices related to trellis state configuration, complexity reduction, and transition probability selection are investigated, and various trade-offs are discussed.

8.
Appl Opt ; 39(11): 1799-814, 2000 Apr 10.
Article in English | MEDLINE | ID: mdl-18345077

ABSTRACT

A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

9.
IEEE Trans Image Process ; 8(11): 1527-33, 1999.
Article in English | MEDLINE | ID: mdl-18267428

ABSTRACT

Complex phase history data in synthetic aperture radar (SAR) systems require extensive processing before useful images can be obtained. In spotlight mode SAR systems, useful images can be obtained by applying aperture weighting and inverse Fourier transform operations to SAR phase history data. In this paper, we are concerned with the compression of the complex phase history data obtained by a spotlight SAR system. We exploit knowledge of the aperture weighting function along with Fourier transform processing to attach a "gain" factor to each complex phase history data sample. This gain factor is then used to efficiently allocate bits to the phase history data during quantization. Performance evaluations are presented for this compression system relative to other existing SAR phase history data compression systems.
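The gain-weighted bit allocation can be sketched with the classical high-rate rule; the gains and bit budget below are made up, whereas the paper's actual gain factors come from the aperture weighting and Fourier processing:

```python
import numpy as np

def allocate_bits(gains, total_bits):
    # High-rate optimal allocation: b_i = b_avg + 0.5 * log2(g_i^2 / GM(g^2)),
    # so samples with larger gain receive proportionally more bits.
    g2 = np.asarray(gains, dtype=float) ** 2
    gm = np.prod(g2) ** (1.0 / len(g2))       # geometric mean of the g_i^2
    b = total_bits / len(g2) + 0.5 * np.log2(g2 / gm)
    return np.clip(np.round(b), 0, None).astype(int)

gains = [4.0, 2.0, 1.0, 0.5]                  # hypothetical per-sample gains
bits = allocate_bits(gains, total_bits=16)
```

High-gain phase history samples, which influence the reconstructed image most, are quantized finely, while low-gain samples get few or no bits.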

10.
IEEE Trans Image Process ; 8(11): 1638-43, 1999.
Article in English | MEDLINE | ID: mdl-18267438

ABSTRACT

In this work, we present coding techniques that enable progressive transmission when trellis coded quantization (TCQ) is applied to wavelet coefficients. A method for approximately inverting TCQ in the absence of least significant bits is developed. Results are presented using different rate allocation strategies and different entropy coders. The proposed wavelet-TCQ coder yields excellent coding efficiency while supporting progressive modes analogous to those available in JPEG.

11.
IEEE Trans Image Process ; 8(12): 1677-87, 1999.
Article in English | MEDLINE | ID: mdl-18267446

ABSTRACT

A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.
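A minimal 4-state trellis-coded quantizer with uniform thresholds conveys the basic mechanism; the trellis layout and subset labeling below are illustrative stand-ins, not the exact UTCQ design:

```python
import numpy as np

# Uniform codebook partitioned into four subsets D0..D3 by index mod 4.
step = 0.5
levels = step * np.arange(-8, 9)
subset = np.arange(len(levels)) % 4

# From state s, branch b leads to next_state[s][b]; each branch is labeled
# with one subset index (an illustrative 4-state trellis).
next_state = [[0, 2], [0, 2], [1, 3], [1, 3]]
branch_subset = [[0, 2], [2, 0], [1, 3], [3, 1]]

def tcq_encode(x):
    inf = float("inf")
    cost, hist = [0.0, inf, inf, inf], []
    for v in x:
        new, back = [inf] * 4, [None] * 4
        for s in range(4):
            if cost[s] == inf:
                continue
            for b in range(2):
                allowed = levels[subset == branch_subset[s][b]]
                lvl = allowed[np.argmin(np.abs(allowed - v))]
                c = cost[s] + (v - lvl) ** 2
                t = next_state[s][b]
                if c < new[t]:
                    new[t], back[t] = c, (s, lvl)
        cost, hist = new, hist + [back]
    # Trace back the minimum-cost path to recover the chosen levels.
    s, out = int(np.argmin(cost)), []
    for back in reversed(hist):
        s, lvl = back[s]
        out.append(lvl)
    return np.array(out[::-1])

x = np.array([0.1, -0.6, 1.2, 0.33, -1.9])
q = tcq_encode(x)
mse = float(np.mean((q - x) ** 2))
```

Because the two branches out of each state together offer levels at half the within-subset spacing, the Viterbi search gets much of the benefit of a finer quantizer at the rate of a coarser one, with no stored codebook beyond the uniform grid.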

12.
IEEE Trans Image Process ; 7(1): 119-24, 1998.
Article in English | MEDLINE | ID: mdl-18267386

ABSTRACT

This paper presents a novel technique for image restoration based on nonlinear interpolative vector quantization (NLIVQ). The algorithm performs nonlinear restoration of diffraction-limited images concurrently with quantization. It is trained on image pairs consisting of an original image and its diffraction-limited counterpart. The discrete cosine transform is used in the codebook design process to control complexity. Simulation results are presented that demonstrate improvements in visual quality and peak signal-to-noise ratio of the restored images.
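The train-on-pairs mechanism (match the degraded block against a degraded codebook, but decode with the paired original codeword) can be sketched in 1-D; a random codebook stands in for real LBG training and the data are synthetic, so no restoration gain is claimed by this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def blur1d(x):
    # Stand-in for the diffraction-limiting degradation.
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

train = rng.random(4096)
B = 4
deg_blocks = blur1d(train).reshape(-1, B)
orig_blocks = train.reshape(-1, B)

# Toy codebook: a random subset of training pairs (stand-in for LBG design).
idx = rng.choice(len(deg_blocks), 256, replace=False)
cb_deg, cb_orig = deg_blocks[idx], orig_blocks[idx]

# Encode a degraded signal against cb_deg, decode with the paired cb_orig:
# restoration happens inside the quantization step itself.
test = rng.random(256)
test_deg = blur1d(test).reshape(-1, B)
nearest = np.argmin(((test_deg[:, None] - cb_deg[None]) ** 2).sum(-1), axis=1)
restored = cb_orig[nearest].ravel()

mse_deg = float(np.mean((blur1d(test) - test) ** 2))
mse_res = float(np.mean((restored - test) ** 2))
```

With a properly trained codebook and correlated imagery, the decoded blocks carry restored detail at no extra decoding cost, which is the point of the NLIVQ approach.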

13.
IEEE Trans Image Process ; 7(2): 225-8, 1998.
Article in English | MEDLINE | ID: mdl-18267396

ABSTRACT

A near-lossless image compression scheme is presented. It is essentially a differential pulse code modulation (DPCM) system with a mechanism incorporated to minimize the entropy of the quantized prediction error sequence. With a "near-lossless" criterion of no more than d gray levels of error for each pixel, where d is a small nonnegative integer, trellises describing all allowable quantized prediction error sequences are constructed. A set of "contexts" is defined for the conditioning prediction error model and an algorithm that produces minimum entropy conditioned on the contexts is presented. Finally, experimental results are given.
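The ±d guarantee follows from quantizing the prediction error with a uniform step of 2d+1; a minimal previous-pixel DPCM sketch (the context modeling and trellis search of the paper are omitted here):

```python
import numpy as np

def near_lossless_dpcm(pixels, d):
    # DPCM with a uniform quantizer of step 2d+1 on the prediction error;
    # every reconstructed pixel is guaranteed within +/- d of the original.
    step = 2 * d + 1
    labels, rec, prev = [], [], 0
    for p in pixels:
        e = p - prev                          # error of previous-pixel predictor
        q = int(np.floor((e + d) / step))     # quantizer label
        prev = prev + q * step                # decoder-side reconstruction
        labels.append(q)
        rec.append(prev)
    return np.array(labels), np.array(rec)

x = np.array([100, 103, 101, 90, 95, 95])
labels, rec = near_lossless_dpcm(x, d=1)
```

Each label maps the error to the nearest multiple of 2d+1, so the residual left after reconstruction can never exceed d; the paper's contribution is then choosing among the allowable label sequences to minimize entropy.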

14.
IEEE Trans Image Process ; 6(11): 1473-86, 1997.
Article in English | MEDLINE | ID: mdl-18282907

ABSTRACT

This paper investigates various classification techniques, applied to subband coding of images, as a way of exploiting the nonstationary nature of image subbands. The advantages of subband classification are characterized in a rate-distortion framework in terms of "classification gain" and overall "subband classification gain." Two algorithms, maximum classification gain and equal mean-normalized standard deviation classification, which allow unequal number of blocks in each class, are presented. The dependence between the classification maps from different subbands is exploited either directly while encoding the classification maps or indirectly by constraining the classification maps. The trade-off between the classification gain and the amount of side information is explored. Coding results for a subband image coder based on classification are presented. The simulation results demonstrate the value of classification in subband coding.
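For Gaussian-like blocks, the classification gain is the overall variance divided by the probability-weighted geometric mean of the per-class variances; a toy two-class example (the data and class split are synthetic):

```python
import numpy as np

def classification_gain(samples, labels):
    # Rate-distortion classification gain: overall variance over the
    # probability-weighted geometric mean of per-class variances.
    samples, labels = np.asarray(samples, float), np.asarray(labels)
    gm = 1.0
    for c in np.unique(labels):
        p = np.mean(labels == c)
        gm *= samples[labels == c].var() ** p
    return samples.var() / gm

rng = np.random.default_rng(3)
low = rng.normal(0, 1, 500)      # "smooth" blocks
high = rng.normal(0, 10, 500)    # "busy" blocks
samples = np.concatenate([low, high])
labels = np.array([0] * 500 + [1] * 500)
gain_db = 10 * np.log10(classification_gain(samples, labels))
```

When a subband mixes smooth and busy regions, separating them into classes before quantization yields several dB of gain, which must then be traded against the side information needed to transmit the classification maps.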

15.
IEEE Trans Image Process ; 6(4): 566-73, 1997.
Article in English | MEDLINE | ID: mdl-18282949

ABSTRACT

A training-sequence-based entropy-constrained predictive trellis coded quantization (ECPTCQ) scheme is presented for encoding autoregressive sources. For encoding a first-order Gauss-Markov source, the mean squared error (MSE) performance of an eight-state ECPTCQ system exceeds that of entropy-constrained differential pulse code modulation (ECDPCM) by up to 1.0 dB. In addition, a hyperspectral image compression system is developed, which utilizes ECPTCQ. A hyperspectral image sequence compressed at 0.125 b/pixel/band retains an average peak signal-to-noise ratio (PSNR) of greater than 43 dB over the spectral bands.

16.
IEEE Trans Image Process ; 4(6): 725-33, 1995.
Article in English | MEDLINE | ID: mdl-18290023

ABSTRACT

The discrete wavelet transform has recently emerged as a powerful technique for decomposing images into various multi-resolution approximations. Multi-resolution decomposition schemes have proven to be very effective for high-quality, low bit-rate image coding. In this work, we investigate the use of entropy-constrained trellis-coded quantization (ECTCQ) for encoding the wavelet coefficients of both monochrome and color images. ECTCQ is known as an effective scheme for quantizing memoryless sources with low to moderate complexity. The ECTCQ approach to data compression has led to some of the most effective source codes found to date for memoryless sources. Performance comparisons are made using the classical quadrature mirror filter bank of Johnston and nine-tap spline filters that were built from biorthogonal wavelet bases. We conclude that the encoded images obtained from the system employing nine-tap spline filters are marginally superior, although at the expense of additional computational burden. Excellent peak-signal-to-noise ratios are obtained for encoding monochrome and color versions of the 512x512 "Lenna" image. Comparisons with other results from the literature reveal that the proposed wavelet coder is quite competitive.

17.
IEEE Trans Image Process ; 4(8): 1061-9, 1995.
Article in English | MEDLINE | ID: mdl-18292000

ABSTRACT

A predictive image coder having minimal decoder complexity is presented. The image coder utilizes recursive interpolative DPCM in conjunction with adaptive classification, entropy-constrained trellis coded quantization, and optimal rate allocation to obtain signal-to-noise ratios (SNRs) in the range of those provided by the most advanced transform coders.
