Results 1 - 20 of 62
1.
J Bioinform Comput Biol ; 18(6): 2050031, 2020 12.
Article in English | MEDLINE | ID: mdl-32938284

ABSTRACT

The amount of sequencing data is growing at a fast pace due to a rapid revolution in sequencing technologies. Quality scores, which indicate the reliability of each of the called nucleotides, account for a significant portion of the sequencing data. In addition, quality scores are more challenging to compress than nucleotides, and they are often noisy. Hence, a natural solution to further decrease the size of the sequencing data is to apply lossy compression to the quality scores. Lossy compression may result in a loss of precision; however, it has been shown that, when operating at certain rates, lossy compression can achieve variant calling performance similar to that achieved with the losslessly compressed data (i.e. the original data). We propose Coding with Random Orthogonal Matrices for quality scores (CROMqs), the first lossy compressor for quality scores with the "infinitesimal successive refinability" property. With this property, the encoder needs to compress the data only once, at a high rate, while the decoder can decompress it iteratively, reconstructing the set of quality scores with lower distortion at each step. This property is particularly useful in sequencing data compression, since the encoder generally does not know the most appropriate compression rate, e.g. one that does not degrade variant calling accuracy. CROMqs avoids the need to compress the data at multiple rates, thereby saving time. In addition to this property, we show that CROMqs obtains rate-distortion performance comparable to the state-of-the-art lossy compressors. Moreover, we show that it achieves variant calling performance comparable to that of the losslessly compressed data while reducing the size by more than 50%.
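
The successive-refinement idea can be illustrated with a small sketch (assumed and simplified, not the authors' CROMqs codec): the quality scores are projected once through a random orthogonal matrix, and the decoder refines its reconstruction as it consumes more of the projection coefficients.

```python
# Minimal sketch of successive refinement with a random orthogonal matrix
# (illustrative only; CROMqs itself works differently and operates on bit rates).
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = rng.integers(2, 41, size=n).astype(float)      # toy Phred-like quality scores

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix

y = Q @ x                                          # encode once, at full rate

for k in (32, 64, 128, 256):                       # the decoder refines iteratively
    x_hat = Q[:k].T @ y[:k]                        # reconstruction from the first k coefficients
    print(f"k={k:3d}  MSE={np.mean((x - x_hat) ** 2):8.3f}")  # distortion shrinks as k grows
```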


Subject(s)
Algorithms , Data Compression/methods , High-Throughput Nucleotide Sequencing/methods , Chromosomes, Human, Pair 20/genetics , Computational Biology , Computer Simulation , Data Compression/standards , Data Compression/statistics & numerical data , Databases, Genetic/statistics & numerical data , Fourier Analysis , High-Throughput Nucleotide Sequencing/standards , High-Throughput Nucleotide Sequencing/statistics & numerical data , Humans , Software
2.
BMC Bioinformatics ; 21(1): 321, 2020 Jul 20.
Article in English | MEDLINE | ID: mdl-32689929

ABSTRACT

BACKGROUND: Recent advancements in high-throughput sequencing technologies have generated an unprecedented amount of genomic data that must be stored, processed, and transmitted over the network for sharing. Lossy genomic data compression, especially of the base quality values of sequencing data, is emerging as an efficient way to handle this challenge due to its superior compression performance compared to lossless methods. Many lossy compression algorithms have been developed for, and evaluated using, DNA sequencing data. However, whether these algorithms can be used on RNA sequencing (RNA-seq) data remains unclear. RESULTS: In this study, we evaluated the impact of lossy quality value compression on common RNA-seq data analysis pipelines, including expression quantification, transcriptome assembly, and short variant detection, using RNA-seq data from different species and sequencing platforms. Our study shows that lossy quality value compression can effectively improve RNA-seq data compression. In some cases, lossy algorithms achieved up to 1.2-3 times further reduction in the overall RNA-seq data size compared to existing lossless algorithms. However, lossy quality value compression can affect the results of some RNA-seq data processing pipelines, and hence its impact on RNA-seq studies cannot be ignored in some cases. Pipelines using HISAT2 for alignment were the most significantly affected by lossy quality value compression, while no effects were observed for pipelines that do not depend on quality values, e.g., STAR-based expression quantification and transcriptome assembly pipelines. Moreover, regardless of whether STAR or HISAT2 was used as the aligner, variant detection results were affected by lossy quality value compression, albeit to a lesser extent when the STAR-based pipeline was used. Our results also show that the impact of lossy quality value compression depends on the compression algorithm being used and, if the algorithm supports multiple compression levels, on the level chosen. CONCLUSIONS: Lossy quality value compression can be incorporated into existing RNA-seq analysis pipelines to alleviate the data storage and transmission burdens. However, compression tools and levels should be selected carefully, based on the requirements of the downstream analysis pipelines, to avoid introducing undesirable adverse effects on the analysis results.


Subject(s)
Algorithms , Data Compression/methods , Data Compression/standards , Genomics/methods , High-Throughput Nucleotide Sequencing/methods , Sequence Analysis, RNA/methods , Base Sequence , Gene Expression Profiling , Genome, Human , Humans
3.
PLoS One ; 15(4): e0230997, 2020.
Article in English | MEDLINE | ID: mdl-32298280

ABSTRACT

Existing tamper detection schemes for absolute moment block truncation coding (AMBTC) compressed images are able to detect tampering. However, the marked image quality of these schemes could be improved, and their authentication methods may fail to detect some special forms of tampering. We propose a secure AMBTC tamper detection scheme that preserves high image fidelity with excellent detectability. In the proposed approach, a bit in the bitmap of each AMBTC code is sequentially toggled to generate a set of authentication codes. The code that causes the least distortion is embedded into the quantization levels with the guidance of a key-generated reference table (RT). Without the correct key, the same reference table cannot be constructed. Therefore, the proposed method is able to detect various kinds of malicious tampering, including the special tampering techniques designed to defeat RT-based authentication schemes. The proposed method not only offers better image quality but also provides excellent detectability compared with previous works.
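
For readers unfamiliar with the cover format, the sketch below shows plain AMBTC block coding (mean, bitmap, and two quantization levels); it is background only and does not reproduce the paper's authentication-code embedding.

```python
# Background sketch of plain AMBTC block coding; the paper's authentication-code
# embedding into the quantization levels is not reproduced here.
import numpy as np

def ambtc_encode(block):
    """Encode one grayscale block into (low, high, bitmap)."""
    block = block.astype(float)
    mean = block.mean()
    bitmap = block >= mean                                    # 1 bit per pixel
    high = block[bitmap].mean() if bitmap.any() else mean     # level for "1" pixels
    low = block[~bitmap].mean() if (~bitmap).any() else mean  # level for "0" pixels
    return int(round(low)), int(round(high)), bitmap

def ambtc_decode(low, high, bitmap):
    """Reconstruct the block from its two quantization levels and bitmap."""
    return np.where(bitmap, high, low).astype(np.uint8)

block = np.array([[12, 200], [180, 15]], dtype=np.uint8)
low, high, bm = ambtc_encode(block)
print(low, high)                      # 14 190
print(ambtc_decode(low, high, bm))
```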


Subject(s)
Image Processing, Computer-Assisted , Security Measures , Algorithms , Computer Security/standards , Computer Security/statistics & numerical data , Data Compression/standards , Data Compression/statistics & numerical data , Humans , Image Processing, Computer-Assisted/standards , Image Processing, Computer-Assisted/statistics & numerical data , Internet/standards , Internet/statistics & numerical data , Security Measures/standards , Security Measures/statistics & numerical data
4.
PLoS One ; 15(1): e0226943, 2020.
Article in English | MEDLINE | ID: mdl-31923261

ABSTRACT

In this work, we propose a framework to store and manage spatial data, which includes new efficient algorithms for operations that take a raster dataset and a vector dataset as input. More concretely, we present an algorithm for solving a spatial join between a raster and a vector dataset with a restriction on the values of the raster cells, and an algorithm for retrieving the K objects of a vector dataset that overlap cells of a raster dataset, such that the K retrieved objects are those overlapping the highest (or lowest) cell values among all objects. The raster data is stored using a compact data structure, which can directly operate on compressed data without prior decompression. This leads to better running times and lower memory consumption. In our experimental evaluation against other baselines, our solution obtains the best space/time trade-offs.
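
As a point of reference, a naive (uncompressed) version of the value-restricted spatial join described above could look like the sketch below; the paper's contribution is answering such queries directly over compact compressed structures, which is not shown here.

```python
# Naive reference implementation of the query semantics only (not the paper's
# compact-structure algorithms): report vector objects that overlap at least
# one raster cell whose value falls inside a given range.
import numpy as np

raster = np.array([[3, 9, 1],
                   [7, 2, 8],
                   [5, 6, 4]])
# Vector objects as axis-aligned bounding boxes in cell coordinates:
# (name, row_min, row_max, col_min, col_max)
objects = [("A", 0, 0, 0, 1), ("B", 1, 2, 1, 2), ("C", 2, 2, 0, 0)]

lo, hi = 6, 9                                        # restriction on raster cell values

def spatial_join(objects, raster, lo, hi):
    for name, r0, r1, c0, c1 in objects:
        cells = raster[r0:r1 + 1, c0:c1 + 1]
        if ((cells >= lo) & (cells <= hi)).any():    # object overlaps a qualifying cell
            yield name

print(list(spatial_join(objects, raster, lo, hi)))   # -> ['A', 'B']
```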


Subject(s)
Data Compression/methods , Information Storage and Retrieval/methods , Algorithms , Data Compression/standards , Datasets as Topic , Information Storage and Retrieval/standards
5.
Neural Netw ; 123: 134-141, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31855748

ABSTRACT

Recurrent neural networks (RNNs) have recently achieved remarkable success in a number of applications. However, the huge size and computational burden of these models make them difficult to deploy on edge devices. A practical and effective approach is to reduce the overall storage and computation costs of RNNs with network pruning techniques. However, despite their successful application, pruning methods based on Lasso produce irregular sparsity patterns in the weight matrices, which do not translate into practical speedups. To address this issue, we propose a structured pruning method based on neuron selection, which can remove independent neurons from RNNs. More specifically, we introduce two sets of binary random variables that can be interpreted as gates or switches on the input neurons and the hidden neurons, respectively. We demonstrate that the corresponding optimization problem can be addressed by minimizing the L0 norm of the weight matrix. Finally, experimental results on language modeling and machine reading comprehension tasks indicate the advantages of the proposed method over state-of-the-art pruning competitors. In particular, a nearly 20× practical speedup during inference was achieved without losing performance for the language model on the Penn TreeBank dataset, indicating the promising performance of the proposed method.
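
The structured effect of such gates can be illustrated with a toy sketch (assumed, not the paper's code): binary gates on input and hidden units zero out whole rows and columns of the RNN weight matrices, so the gated neurons can be removed outright, unlike the scattered zeros produced by Lasso-style pruning.

```python
# Toy illustration of structured pruning by neuron selection: binary gates on
# input and hidden units remove whole rows/columns of the RNN weight matrices.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 8, 6
W_ih = rng.standard_normal((n_hid, n_in))   # input-to-hidden weights
W_hh = rng.standard_normal((n_hid, n_hid))  # hidden-to-hidden weights

g_in = rng.random(n_in) > 0.5               # gates (learned in the paper; random here)
g_hid = rng.random(n_hid) > 0.5             # their L0 norm counts the surviving neurons

W_ih_pruned = W_ih[g_hid][:, g_in]          # keep only rows/columns of surviving neurons
W_hh_pruned = W_hh[g_hid][:, g_hid]
print(W_ih.shape, "->", W_ih_pruned.shape)
print("hidden neurons kept:", int(g_hid.sum()), "of", n_hid)
```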


Subject(s)
Neural Networks, Computer , Data Compression/methods , Data Compression/standards , Natural Language Processing
6.
BMC Bioinformatics ; 20(Suppl 9): 302, 2019 Nov 22.
Article in English | MEDLINE | ID: mdl-31757199

ABSTRACT

MOTIVATION: Current NGS techniques are becoming exponentially cheaper. As a result, genomic data are growing exponentially, a growth unfortunately not matched by storage capacity, which makes compression a necessity. Most of the entropy of NGS data lies in the quality values associated with each read, and those values are often more diversified than necessary. Because of that, many tools, such as Quartz or GeneCodeq, try to change (smooth) quality scores in order to improve compressibility without altering the important information they carry for downstream analyses such as SNP calling. RESULTS: We use the FM-index, a type of compressed suffix array, to reduce the storage requirements of a dictionary of k-mers, together with an effective smoothing algorithm that maintains high precision for SNP calling pipelines while reducing the entropy of the quality scores. We present YALFF (Yet Another Lossy Fastq Filter), a tool for quality score compression by smoothing, leading to improved compressibility of FASTQ files. The succinct k-mer dictionary allows YALFF to run on consumer computers with only 5.7 GB of available free RAM. YALFF's smoothing algorithm can improve genotyping accuracy while using fewer resources. AVAILABILITY: https://github.com/yhhshb/yalff.
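
The general k-mer-based smoothing idea (in the spirit of the tools above, not YALFF's actual code) can be sketched as follows: bases covered by a k-mer found in a trusted dictionary receive a fixed high quality, which lowers entropy and improves compressibility; the trusted-k-mer set below is a made-up stand-in for the FM-index dictionary.

```python
# Sketch of k-mer-based quality smoothing (illustrative; not YALFF's code).
TRUSTED_KMERS = {"ACGTA", "CGTAC", "GTACG"}   # made-up stand-in for the k-mer dictionary
K, SMOOTH_Q = 5, 40

def smooth(read, quals):
    out = list(quals)
    for i in range(len(read) - K + 1):
        if read[i:i + K] in TRUSTED_KMERS:    # k-mer supported by the dictionary
            for j in range(i, i + K):
                out[j] = SMOOTH_Q             # flatten qualities of covered bases
    return out

print(smooth("ACGTACGG", [30, 12, 25, 33, 8, 17, 39, 11]))
# -> [40, 40, 40, 40, 40, 40, 40, 11]; the constant run compresses far better
```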


Subject(s)
Data Compression/standards , High-Throughput Nucleotide Sequencing/methods , Algorithms , Base Sequence , Humans , Polymorphism, Single Nucleotide/genetics , Quality Control , ROC Curve , Software
7.
J Comput Biol ; 25(10): 1141-1151, 2018 10.
Article in English | MEDLINE | ID: mdl-30059248

ABSTRACT

Previous studies on quality score compression can be classified into two main lines: lossy schemes and lossless schemes. Lossy schemes enable better management of computational resources; thus, in practice, and for preliminary analyses, bioinformaticians may prefer to work with a lossy quality score representation. However, the original quality scores might be required for a deeper analysis of the data, and it might therefore be necessary to keep them; this calls for lossless compression in addition to lossy compression. We developed a space-efficient hierarchical representation of quality scores, QScomp, which allows users to work with lossy quality scores in routine analysis without sacrificing the ability to recover the original quality scores when further investigation is required. Each quality score is represented by a tuple through a novel decomposition. The first and second dimensions of these tuples are compressed separately, such that the first-level compression is a lossy scheme, while the compressed information of the second dimension allows users to recover the original quality scores. Experiments on real data reveal that downstream analysis with the lossy part, which spends only 0.49 bits per quality score on average, shows competitive performance, and that the total space usage with the inclusion of the compressed second dimension is comparable to that of competing lossless schemes.
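
To make the two-part idea concrete, here is one simple way such a tuple decomposition could look (a hedged illustration only; QScomp's novel decomposition is different): a coarse, lossily coded bin plus a residual that restores the exact score when needed.

```python
# One simple two-part decomposition, for illustration only.
STEP = 8

def decompose(q):
    return q // STEP, q % STEP          # (first dimension, second dimension)

def lossy_value(bin_):
    return bin_ * STEP + STEP // 2      # usable approximation for routine analysis

def exact_value(bin_, residual):
    return bin_ * STEP + residual       # original score recovered when needed

quals = [37, 12, 40, 2]
tuples = [decompose(q) for q in quals]
print([lossy_value(b) for b, _ in tuples])      # [36, 12, 44, 4]
print([exact_value(b, r) for b, r in tuples])   # [37, 12, 40, 2]
```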


Subject(s)
Algorithms , Data Compression/methods , Data Compression/standards , Genetic Variation , High-Throughput Nucleotide Sequencing/methods , Sequence Analysis, DNA/standards , Genomics , Humans
8.
IEEE/ACM Trans Comput Biol Bioinform ; 14(6): 1228-1236, 2017.
Article in English | MEDLINE | ID: mdl-27214907

ABSTRACT

Research in DNA data compression lacks a standard dataset for testing compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of a scientifically compiled whole-genome sequence dataset, and proposes a benchmark dataset compiled using a multistage sampling procedure. Taking the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of running three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only in a comparison based on such a scientifically compiled benchmark dataset. AVAILABILITY: The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.


Subject(s)
Data Compression/standards , Databases, Genetic/standards , Genome/genetics , Genomics/methods , Genomics/standards , Sequence Analysis, DNA/standards , Algorithms , Bacteria/genetics , Benchmarking , Humans , Yeasts/genetics
9.
J Magn Reson Imaging ; 44(2): 433-44, 2016 08.
Article in English | MEDLINE | ID: mdl-26777856

ABSTRACT

PURPOSE: To determine the efficacy of compressed sensing (CS) reconstructions for specific clinical magnetic resonance neuroimaging applications beyond more conventional acceleration techniques such as parallel imaging (PI) and low-resolution acquisitions. MATERIALS AND METHODS: Raw k-space data were acquired from five healthy volunteers on a 3T scanner with a 32-channel head coil, using T2-FLAIR, FIESTA-C, time-of-flight (TOF), and spoiled gradient echo (SPGR) sequences. In a series of blinded studies, three radiologists independently evaluated CS, PI (GRAPPA), and low-resolution images at up to 5× accelerations. Synthetic T2-FLAIR images with artificial lesions were used to assess diagnostic accuracy for CS reconstructions. RESULTS: CS reconstructions were of diagnostically acceptable quality at up to 4× acceleration for T2-FLAIR and FIESTA-C (average qualitative scores 3.7 and 4.3, respectively, on a 5-point scale at 4× acceleration), and at up to 3× acceleration for TOF and SPGR (average scores 4.0 and 3.7, respectively, at 3× acceleration). The qualitative scores for CS reconstructions were significantly better than those for low-resolution images for T2-FLAIR, FIESTA-C, and TOF, and significantly better than GRAPPA for TOF and SPGR (Wilcoxon signed rank test, P < 0.05), with no significant difference found otherwise. Diagnostic accuracy was acceptable for both CS and low-resolution images at up to 3× acceleration (area under the ROC curve 0.97 and 0.96, respectively). CONCLUSION: Mild to moderate accelerations are possible for these sequences with a combined CS and PI reconstruction. Nevertheless, for certain sequences and applications one might mildly reduce the acquisition time by appropriately reducing the imaging resolution rather than using the more complicated CS reconstruction. J. Magn. Reson. Imaging 2016;44:433-444.


Subject(s)
Data Compression/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Quality Assurance, Health Care/methods , Signal Processing, Computer-Assisted , Data Compression/standards , Female , Humans , Image Enhancement/methods , Magnetic Resonance Imaging/standards , Male , Neuroimaging/standards , Observer Variation , Ontario , Reproducibility of Results , Sensitivity and Specificity , Single-Blind Method
10.
Physiol Meas ; 36(9): 1981-94, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26260978

ABSTRACT

The aim of electrocardiogram (ECG) compression is to reduce the amount of data as much as possible while preserving the information that is significant for diagnosis. Objective metrics derived directly from the signal are suitable for controlling the quality of the compressed ECGs in practical applications. Many approaches have employed figures of merit based on the percentage root mean square difference (PRD) for this purpose. The benefits and drawbacks of PRD measures, along with other metrics for quality assessment in ECG compression, are analysed in this work. We propose the use of the root mean square error (RMSE) for quality control because it gives a clearer and more stable indication of how far the retrieved ECG waveform, which is the reference signal for establishing a diagnosis, deviates from the original. For this reason, the RMSE is applied here as the target metric in a thresholding algorithm that relies on the retained energy. A state-of-the-art compressor based on this approach, and its PRD-based counterpart, are implemented to test the actual capabilities of the proposed technique. Both compression schemes are employed in several experiments with the whole MIT-BIH Arrhythmia Database to assess both global and local signal distortion. The results show that, using the RMSE for quality control, the distortion of the reconstructed signal is better controlled without reducing the compression ratio.
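
For reference, the two metrics contrasted above can be written down in a few lines (a minimal sketch; the PRD shown uses the common normalization by signal energy, and variants that subtract the mean also exist).

```python
# Minimal definitions of RMSE and PRD; x is the original ECG, x_hat the reconstruction.
import numpy as np

def rmse(x, x_hat):
    return np.sqrt(np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2))

def prd(x, x_hat):
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

x = np.array([0.0, 0.5, 1.2, 0.4, -0.3])
x_hat = x + 0.05                                  # toy reconstruction error
print(f"RMSE={rmse(x, x_hat):.3f}  PRD={prd(x, x_hat):.2f}%")
```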


Subject(s)
Data Compression/methods , Electrocardiography/methods , Algorithms , Arrhythmias, Cardiac/physiopathology , Data Compression/standards , Databases, Factual , Electrocardiography/standards , Quality Control
11.
Bioinformatics ; 31(19): 3122-9, 2015 10 01.
Article in English | MEDLINE | ID: mdl-26026138

ABSTRACT

MOTIVATION: Recent advancements in sequencing technology have led to a drastic reduction in the cost of sequencing a genome. This has generated an unprecedented amount of genomic data that must be stored, processed and transmitted. To facilitate this effort, we propose a new lossy compressor for the quality values present in genomic data files (e.g. FASTQ and SAM files), which comprise roughly half of the storage space (in the uncompressed domain). Lossy compression allows for compression of data beyond its lossless limit. RESULTS: The proposed algorithm QVZ exhibits better rate-distortion performance than previously proposed algorithms, for several distortion metrics and for the lossless case. Moreover, it allows the user to define any quasi-convex distortion function to be minimized, a feature not supported by previous algorithms. Finally, we show that QVZ-compressed data exhibit better genotyping performance than data compressed with previously proposed algorithms, in the sense that, for a similar rate, the genotyping is closer to that achieved with the original quality values. AVAILABILITY AND IMPLEMENTATION: QVZ is written in C and can be downloaded from https://github.com/mikelhernaez/qvz. CONTACT: mhernaez@stanford.edu or gmalysa@stanford.edu or iochoa@stanford.edu SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Algorithms , Data Compression/standards , Animals , Databases, Genetic , Genotype , Genotyping Techniques , Humans , Polymorphism, Single Nucleotide/genetics
12.
AJR Am J Roentgenol ; 203(5): 1006-12, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25341138

ABSTRACT

OBJECTIVE: The purpose of this article is to examine the rates of appendiceal visualization by sonography, imaging-based diagnoses of appendicitis, and CT use after appendiceal sonography, before and after the introduction of a sonographic algorithm involving sequential changes in patient positioning. MATERIALS AND METHODS: We used a search engine to retrospectively identify patients who underwent graded-compression sonography for suspected appendicitis during 6-month periods before (period 1; 419 patients) and after (period 2; 486 patients) implementation of a new three-step positional sonographic algorithm. The new algorithm included initial conventional supine scanning and, as long as the appendix remained nonvisualized, left posterior oblique scanning and then "second-look" supine scanning. Abdominal CT within 7 days after sonography was recorded. RESULTS: Between periods 1 and 2, appendiceal visualization on sonography increased from 31.0% to 52.5% (p < 0.001), postsonography CT use decreased from 31.3% to 17.7% (p < 0.001), and the proportion of imaging-based diagnoses of appendicitis made by sonography increased from 63.8% to 85.7% (p = 0.002). The incidence of appendicitis diagnosed by imaging (either sonography or CT) remained similar at 16.5% and 17.3%, respectively (p = 0.790). Sensitivity and overall accuracy were 57.8% (95% CI, 44.8-70.1%) and 93.0% (95% CI, 90.1-95.3%), respectively, in period 1 and 76.5% (95% CI, 65.8-85.2%) and 95.4% (95% CI, 93.1-97.1%), respectively, in period 2. Similar findings were observed for adults and children. CONCLUSION: Implementation of an ultrasound algorithm with sequential positioning significantly improved the appendiceal visualization rate and the proportion of imaging-based diagnoses of appendicitis made by ultrasound, enabling a concomitant decrease in abdominal CT use in both children and adults.


Subject(s)
Algorithms , Appendicitis/diagnosis , Image Enhancement/methods , Patient Positioning/methods , Tomography, X-Ray Computed/statistics & numerical data , Ultrasonography/methods , Ultrasonography/statistics & numerical data , Adolescent , Aged , Aged, 80 and over , Child , Child, Preschool , Data Compression/methods , Data Compression/standards , Female , Humans , Infant , Male , Middle Aged , Observer Variation , Patient Positioning/statistics & numerical data , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Young Adult
13.
ScientificWorldJournal ; 2014: 536930, 2014.
Article in English | MEDLINE | ID: mdl-25258724

ABSTRACT

The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates secret images and specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of a security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force and differential attacks. The measured running times for both encryption and decryption indicate that the cryptosystem can work effectively in real-time applications.
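
A classic form of the Arnold cat map used for the permutation step is easy to sketch (shown below for a square grayscale image; the paper's parameterization and its Henon-map key stream are omitted).

```python
# A classic Arnold cat map pixel permutation (square N x N image assumed).
import numpy as np

def arnold_cat(img, iterations=1):
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]  # unimodular map, mod N
        out = nxt
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(arnold_cat(img, iterations=3))
# The map is area-preserving and invertible, so the permutation can be undone
# with the inverse matrix (or, by periodicity, with enough further iterations).
```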


Subject(s)
Algorithms , Computer Communication Networks , Computer Security , Image Processing, Computer-Assisted/methods , Data Compression/methods , Data Compression/standards , Image Processing, Computer-Assisted/standards , Reproducibility of Results , Time Factors
14.
ScientificWorldJournal ; 2014: 803983, 2014.
Article in English | MEDLINE | ID: mdl-25028681

ABSTRACT

This paper presents a novel watermarking method to facilitate authentication and forgery detection for Quran images. A two-layer embedding scheme operating in the wavelet and spatial domains is introduced to enhance the sensitivity of the fragile watermark and defend against attacks. A discrete wavelet transform is applied to decompose the host image into wavelet coefficients prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficients are inverse-transformed back to the spatial domain, and the least significant bits are then used to hide another watermark. A chaotic map is used to blur the watermark and make it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed method is fragile and offers superior tamper detection even when the tampered area is very small.
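
The spatial-domain layer mentioned above boils down to least-significant-bit embedding; a toy sketch is shown below (the wavelet-domain layer and the chaotic blurring are not reproduced).

```python
# Toy sketch of LSB watermark embedding and extraction.
import numpy as np

def embed_lsb(pixels, bits):
    out = pixels.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b     # clear the LSB, then set the watermark bit
    return out.reshape(pixels.shape)

def extract_lsb(pixels, n_bits):
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

host = np.array([[200, 201], [202, 203]], dtype=np.uint8)
marked = embed_lsb(host, [1, 0, 1, 1])
print(extract_lsb(marked, 4))            # -> [1, 0, 1, 1]
```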


Subject(s)
Data Compression/methods , Algorithms , Computer Graphics/standards , Data Compression/standards , Wavelet Analysis
15.
J Med Syst ; 38(6): 54, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24832688

ABSTRACT

Watermarking is a widely used technology in the field of copyright and biological information protection. In this paper, we apply quantization-based digital watermarking to the electrocardiogram (ECG) to protect patient rights and information. Three transform domains (DWT, DCT, and DFT) are adopted to implement the quantization-based watermarking technique. Although the watermark embedding process is not invertible, the changes to the PQRST complexes and the amplitude of the ECG signal are very small, so the watermarked data still meet the requirements of physiological diagnosis. In addition, the hidden information can be extracted without knowledge of the original ECG data; in other words, the proposed watermarking scheme is blind. Experimental results verify the efficiency of the proposed scheme.
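
Quantization-based (QIM-style) embedding of a single bit into a transform coefficient can be sketched in a few lines (an illustration of the general technique with a made-up step size, not the paper's exact parameters).

```python
# Sketch of quantization-index-modulation (QIM) style bit embedding.
DELTA = 0.5

def embed_bit(coeff, bit):
    offset = bit * DELTA / 2.0
    return DELTA * round((coeff - offset) / DELTA) + offset   # snap to the bit's lattice

def extract_bit(coeff):
    d0 = abs(coeff - embed_bit(coeff, 0))   # distance to the "0" lattice
    d1 = abs(coeff - embed_bit(coeff, 1))   # distance to the "1" lattice
    return 0 if d0 <= d1 else 1

c = 3.37                                    # e.g. a DWT/DCT/DFT coefficient
for b in (0, 1):
    marked = embed_bit(c, b)
    print(b, round(marked, 3), extract_bit(marked))
```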


Subject(s)
Computer Security/standards , Confidentiality/standards , Data Compression/methods , Electrocardiography/methods , Signal Processing, Computer-Assisted , Algorithms , Data Compression/standards , Electrocardiography/standards , Humans
16.
Skin Res Technol ; 20(1): 67-73, 2014 Feb.
Article in English | MEDLINE | ID: mdl-23724923

ABSTRACT

BACKGROUND: Despite the importance of images in the discipline and the diffusion of digital imaging devices, the issue of image compression in dermatology has been discussed in only a few studies, which yielded results that are often not comparable and left some questions unanswered. OBJECTIVE: To evaluate and compare the performance of the JPEG and JPEG2000 algorithms for compression of dermatological images. METHODS: Nineteen macroscopic and fifteen videomicroscopic images of skin lesions were compressed with JPEG and JPEG2000 at 18 different compression rates, from 90% to 99.5%. Compressed images were shown, next to their uncompressed versions, to three dermatologists with different levels of experience, who judged quality and suitability for educational/scientific and diagnostic purposes. Moreover, alterations and quality were evaluated by calculating the mean 'distance' of pixel colors between compressed and original images and the peak signal-to-noise ratio, respectively. RESULTS: JPEG2000 was qualitatively better than JPEG at all compression rates, particularly the highest ones, as shown by the dermatologists' ratings and the objective parameters. Agreement between raters was high, but with some differences in specific cases, showing that different professional experience can influence judgement of images. CONCLUSION: In consideration of its high qualitative performance and wide diffusion, JPEG2000 represents an optimal solution for the compression of digital dermatological images.
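
The two objective measures mentioned above can be sketched as follows for 8-bit RGB images (an assumed formulation; the study's exact computation may differ in detail).

```python
# Assumed formulation of mean per-pixel colour distance and PSNR for 8-bit RGB.
import numpy as np

def mean_color_distance(orig, comp):
    diff = orig.astype(float) - comp.astype(float)
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))   # Euclidean distance per pixel

def psnr(orig, comp, peak=255.0):
    mse = np.mean((orig.astype(float) - comp.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))               # in dB

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)
comp = np.clip(orig.astype(int) + rng.integers(-3, 4, size=orig.shape), 0, 255).astype(np.uint8)
print(f"distance={mean_color_distance(orig, comp):.2f}  PSNR={psnr(orig, comp):.1f} dB")
```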


Subject(s)
Data Compression/standards , Dermoscopy/standards , Image Interpretation, Computer-Assisted/methods , Photography/standards , Signal Processing, Computer-Assisted , Skin Diseases/pathology , Benchmarking , Data Compression/methods , Dermoscopy/methods , Humans , Image Interpretation, Computer-Assisted/standards , Internationality , Observer Variation , Photography/methods , Reproducibility of Results , Sensitivity and Specificity
17.
IEEE Trans Image Process ; 23(1): 274-86, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24196862

ABSTRACT

In this paper, we investigate a new inter-channel coding mode called the LM mode, proposed for the next-generation video coding standard High Efficiency Video Coding (HEVC). This mode exploits inter-channel correlation by using reconstructed luma to predict chroma linearly, with parameters derived from neighboring reconstructed luma and chroma pixels at both the encoder and the decoder to avoid overhead signaling. In this paper, we analyze the LM mode and prove that the LM parameters for predicting the original chroma and the reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some situations in which the LM mode is problematic and propose three novel LM-like modes, called LMA, LML, and LMO, to address them. To limit the increase in complexity due to the LM-like modes, we propose fast algorithms with the help of new cost functions. We further identify some potentially problematic conditions in the parameter estimation (including the regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and the model correction technique. In addition, the performance gain of the two techniques appears to be essentially additive when combined.
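
The core of the LM mode, deriving a linear luma-to-chroma model from neighboring reconstructed samples so that nothing needs to be signaled, can be sketched with a least-squares fit (floating-point illustration; the standard uses integer arithmetic and specific neighbor selections).

```python
# Least-squares sketch of luma-to-chroma prediction: pred_C = alpha * rec_Y + beta.
import numpy as np

neigh_luma = np.array([60.0, 72.0, 80.0, 95.0])      # reconstructed neighbouring luma
neigh_chroma = np.array([110.0, 116.0, 120.0, 128.0])

n = len(neigh_luma)
alpha = (n * np.sum(neigh_luma * neigh_chroma) - neigh_luma.sum() * neigh_chroma.sum()) / (
    n * np.sum(neigh_luma ** 2) - neigh_luma.sum() ** 2)
beta = (neigh_chroma.sum() - alpha * neigh_luma.sum()) / n

rec_luma_block = np.array([[65.0, 78.0], [88.0, 90.0]])
pred_chroma = alpha * rec_luma_block + beta          # chroma prediction for the block
print(round(float(alpha), 3), round(float(beta), 3))
print(pred_chroma.round(1))
```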


Subject(s)
Algorithms , Color , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Video Recording/methods , Data Compression/standards , Internationality , Reference Standards , Reproducibility of Results , Sensitivity and Specificity , Video Recording/standards
18.
Stud Health Technol Inform ; 192: 1001, 2013.
Article in English | MEDLINE | ID: mdl-23920775

ABSTRACT

Data storage formats for personal health-monitoring devices, such as blood-pressure and body-composition meters, vary according to manufacturer and model. In contrast, the data format of images from digital cameras is unified as JPEG with an Exif area and is already familiar to many users. We have devised a method that stores health data within a JPEG file. The health data are stored in the Exif area of the JPEG in an HL7 format. There is, however, a capacity limit of 64 KB for the Exif area. The aim of this study is to examine how much health data can actually be stored in the Exif area. We found that, even with combined data from multiple devices, it was possible to store over a month of health data in a JPEG file, and that using multiple JPEG files simply overcomes this limit. We believe that this method will help people to more easily handle health data regardless of the various device models they use.
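
The 64 KB constraint discussed above can be illustrated with a simple packing sketch (made-up HL7-style records; the actual writing into the Exif area is not shown): serialized health records are split into chunks that each fit one JPEG's Exif area, spilling over into additional files when needed.

```python
# Illustrative packing of health records against the 64 KB Exif limit.
EXIF_LIMIT = 64 * 1024                    # 65,536 bytes per Exif area

records = [f"OBX|{i}|NM|BP^Blood Pressure||120/80|mmHg\r".encode() for i in range(3000)]

chunks, current = [], b""
for rec in records:
    if len(current) + len(rec) > EXIF_LIMIT:
        chunks.append(current)            # this JPEG's Exif area is full
        current = b""
    current += rec
chunks.append(current)

print(f"{sum(map(len, records))} bytes of health data -> {len(chunks)} JPEG file(s)")
```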


Subject(s)
Computer Graphics/statistics & numerical data , Computer Graphics/standards , Data Compression/statistics & numerical data , Data Compression/standards , Electronic Health Records/standards , Information Storage and Retrieval/statistics & numerical data , Information Storage and Retrieval/standards , Data Curation/standards , Data Curation/statistics & numerical data , Health Level Seven/standards
19.
J Digit Imaging ; 26(5): 866-74, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23589187

ABSTRACT

This work compares the image quality delivered by two popular JPEG2000 programs. The two medical image compression implementations are both based on JPEG2000, but they differ in interface, convenience, computation speed, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the implementation of the compression algorithm. Do they provide the same quality? The quality of compressed medical images produced by the two programs, Apollo and JJ2000, was evaluated extensively using objective metrics. The algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1, and the quality of the reconstructed images was then evaluated using five objective metrics. Spearman rank correlation coefficients between the two programs were measured for every metric. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo implementations is statistically equivalent for medical image compression.


Subject(s)
Data Compression/methods , Magnetic Resonance Imaging/methods , Radiographic Image Enhancement/methods , Tomography, X-Ray Computed/methods , Algorithms , Data Compression/standards , Data Compression/statistics & numerical data , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging/standards , Magnetic Resonance Imaging/statistics & numerical data , Radiographic Image Enhancement/standards , Radiographic Image Interpretation, Computer-Assisted/methods , Radiographic Image Interpretation, Computer-Assisted/standards , Statistics, Nonparametric , Tomography, X-Ray Computed/standards , Tomography, X-Ray Computed/statistics & numerical data
20.
J Telemed Telecare ; 18(4): 204-10, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22604273

ABSTRACT

We assessed the feasibility, image adequacy and clinical utility of a tele-echocardiography service which combined video compression with low-bandwidth store-and-forward transmission. Echocardiograms were acquired by a hospital geriatrician, compressed and transmitted using both near real-time (urgent) and delayed (pre-programmed) protocols via an Internet connection to the notebook PC of a remote cardiologist. Clinical utility was evaluated as a change in therapeutic management. During a one-year period, 101 tele-echocardiography consultations were successfully performed (feasibility = 100%) on 95 patients (age 22-95 years), admitted with cardiovascular or neurological diagnoses (24% of the consultations were urgent). In total, 4617 files (1.4 GByte of data) were transmitted, 2669 of which were short video clips. On average, 46 files (13.8 MByte) were transmitted (mean duration 10 min) at each examination. Consultations (both urgent and pre-programmed) were clinically useful in 83% of examinations. Logistic regression analysis showed that both a low left ventricular systolic function and the examination indication were determinants of clinical utility. The transmitted images were considered adequate for diagnosis in 100% of the pre-programmed teleconsultations. Tele-echocardiography using MPEG-4 video compression is a feasible, adequate and clinically useful tool for telemedicine.


Subject(s)
Data Compression/methods , Echocardiography , Remote Consultation/methods , Adult , Aged , Aged, 80 and over , Cardiovascular Diseases/diagnostic imaging , Data Compression/standards , Female , Humans , Internet , Logistic Models , Male , Middle Aged , Nervous System Diseases/diagnostic imaging , Prospective Studies , Remote Consultation/standards , Reproducibility of Results , Telemetry/instrumentation , Young Adult