Lossless data compression
Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows an approximation of the original data to be reconstructed, in exchange for better compression rates.

Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the Unix tool gzip. It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by the LAME MP3 encoder and other lossy audio encoders).
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data could be deleterious. Typical examples are executable programs, text documents and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, with smaller lossy audio files being typically used on portable players and in other cases where storage space is limited and/or exact replication of the audio is unnecessary.
Lossless compression techniques
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.

The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by DEFLATE) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
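To make the Huffman idea concrete, here is a minimal Python sketch (our own illustration, not DEFLATE's actual implementation) that builds a code table by repeatedly merging the two least probable subtrees, so that frequent symbols receive short codes:

    import heapq
    from collections import Counter

    def huffman_table(data: bytes) -> dict:
        """Build a Huffman code table mapping each byte to a bit string."""
        freq = Counter(data)
        if len(freq) == 1:                     # degenerate one-symbol input
            return {next(iter(freq)): "0"}
        # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)  # two least frequent subtrees
            f2, _, right = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    table = huffman_table(b"abracadabra")
    print(table)  # 'a' (5 occurrences) gets a shorter code than 'c' or 'd'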
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces a single model to be used for all data being compressed, and so performs poorly on files containing heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
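A hedged sketch of the adaptive approach: both sides maintain identical symbol counts, starting from a trivial uniform model, so nothing needs to be stored alongside the compressed data (the class name and order-0 design are ours for illustration):

    class AdaptiveModel:
        """Order-0 adaptive model: encoder and decoder update it identically."""
        def __init__(self, alphabet_size: int = 256):
            self.counts = [1] * alphabet_size   # trivial model: all symbols equal
            self.total = alphabet_size

        def probability(self, symbol: int) -> float:
            return self.counts[symbol] / self.total

        def update(self, symbol: int) -> None:  # called after coding each symbol
            self.counts[symbol] += 1
            self.total += 1

    model = AdaptiveModel()
    for sym in b"aaaaabbbc":
        p = model.probability(sym)  # an entropy coder would spend about -log2(p) bits
        model.update(sym)
    print(model.probability(ord("a")))  # 'a' is now far more probable than 1/256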
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can compress any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that is not of the form they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images.
Text and Image
Statistical modeling algorithms for text (or text-like binary data such as executables) include:
- Context Tree Weighting method (CTW)
- Burrows–Wheeler transform (block sorting preprocessing that makes compression more efficient)
- LZ77 (used by DEFLATE)
- LZW
- PPMd
Multimedia
Techniques that take advantage of the specific characteristics of images, such as the common phenomenon of contiguous 2-D areas of similar tones:

Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
This is often also applied to sound files and can compress files which contain mostly low frequencies and low volumes.
For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
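A minimal Python sketch of this left-neighbor filtering (illustrative only; real codecs pack the residuals with an entropy coder afterwards):

    def delta_filter(row):
        """Keep the first value; replace every other value by the difference
        to its left neighbor, which concentrates the output around zero."""
        return [row[0]] + [b - a for a, b in zip(row, row[1:])]

    def delta_unfilter(deltas):
        """Exact inverse: a running sum restores the original values."""
        out = [deltas[0]]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    row = [100, 101, 103, 103, 102, 104]
    assert delta_unfilter(delta_filter(row)) == row
    print(delta_filter(row))  # [100, 1, 2, 0, -1, 2] -- small, peaked values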
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called the discrete wavelet transform. JPEG 2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors have to be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but hopefully the distribution of values is more peaked.
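As a sketch of one level of such a pairwise transform (a Haar-style sum/difference; the integer-factor mixing used by JPEG 2000 is more elaborate):

    def pair_transform(values):
        """One hierarchy level: per neighboring pair, store sum and difference.
        The sums form the next, lower-resolution level. Assumes an even count."""
        sums = [a + b for a, b in zip(values[0::2], values[1::2])]
        diffs = [a - b for a, b in zip(values[0::2], values[1::2])]
        return sums, diffs

    def pair_untransform(sums, diffs):
        """Exactly invertible in integers: s + d = 2a and s - d = 2b are even."""
        out = []
        for s, d in zip(sums, diffs):
            out += [(s + d) // 2, (s - d) // 2]
        return out

    level0 = [10, 12, 13, 11, 40, 41, 39, 42]
    sums, diffs = pair_transform(level0)  # diffs are small; recurse on sums
    assert pair_untransform(sums, diffs) == level0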
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
Historical legal issues
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries, and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing image files in favor of Portable Network Graphics (PNG), which combines the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialised area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data – essentially using models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the "error") tends to be small, then certain difference values (like 0, +1, -1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
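A toy Python sketch of this predict-and-encode-the-error idea, using a simple linear predictor (real codecs such as FLAC fit more sophisticated filters):

    def prediction_residuals(samples):
        """Predict each sample as the straight-line continuation of the previous
        two and keep only the (hopefully tiny) error; exactly invertible."""
        residuals = list(samples[:2])          # first two samples sent verbatim
        for i in range(2, len(samples)):
            predicted = 2 * samples[i - 1] - samples[i - 2]
            residuals.append(samples[i] - predicted)
        return residuals

    wave = [0, 3, 6, 8, 9, 9, 8, 6, 3, 0]      # a slowly varying "wave"
    print(prediction_residuals(wave))          # errors cluster around 0 and -1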
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of an image). This is called delta compression (from the Greek letter Δ, which is commonly used in mathematics to denote a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta compression from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
Lossless compression methods
By operation of the pigeonhole principle, no lossless compression algorithm can efficiently compress all possible data, and completely random data streams cannot be compressed. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.

Some of the most common lossless compression algorithms are listed below.
General purpose
- Run-length encoding (RLE) – a simple scheme that provides good compression of data containing lots of runs of the same value (see the sketch after this list)
- Lempel-Ziv 1978 (LZ78), Lempel–Ziv–Welch (LZW) – used by GIF images and compress, among many other applications
- DEFLATE – used by gzip, ZIP (since version 2.0), and as part of the compression process of Portable Network Graphics (PNG), Point-to-Point Protocol (PPP), HTTP, and SSH
- bzip2 – using the Burrows–Wheeler transform, this provides slower but higher compression than DEFLATE
- Lempel–Ziv–Markov chain algorithm (LZMA) – used by 7-Zip, xz, and other programs; higher compression than bzip2 as well as much faster decompression
- Lempel–Ziv–Oberhumer (LZO) – designed for compression/decompression speed at the expense of compression ratios
- Statistical Lempel–Ziv – a combination of statistical and dictionary-based methods; better compression ratio than either method alone
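A minimal sketch of run-length encoding as named in the first item above (our own framing; real RLE formats differ in how they pack counts):

    def rle_encode(data: bytes):
        """Collapse each run of identical bytes into a (count, value) pair."""
        runs = []
        for b in data:
            if runs and runs[-1][1] == b and runs[-1][0] < 255:
                runs[-1][0] += 1               # extend the current run
            else:
                runs.append([1, b])            # start a new run
        return runs

    def rle_decode(runs) -> bytes:
        return bytes(b for count, b in runs for _ in range(count))

    data = b"aaaaabbbcccccccc"
    assert rle_decode(rle_encode(data)) == data
    print(rle_encode(data))  # [[5, 97], [3, 98], [8, 99]]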
Audio
- Free Lossless Audio Codec – FLAC
- Apple Lossless – ALAC (Apple Lossless Audio Codec)
- apt-X Lossless
- Adaptive Transform Acoustic Coding – ATRAC
- Audio Lossless Coding – also known as MPEG-4 ALS
- MPEG-4 SLS – also known as HD-AAC
- Direct Stream Transfer – DST
- Dolby TrueHD
- DTS-HD Master Audio
- Meridian Lossless Packing – MLP
- Monkey's Audio – Monkey's Audio APE
- OptimFROG
- Original Sound Quality – OSQ
- RealPlayer – RealAudio Lossless
- Shorten – SHN
- TTA – True Audio Lossless
- WavPack – WavPack lossless
- WMA Lossless – Windows Media Lossless
Graphics
- ILBM – lossless RLE compression of Amiga IFF images
- JBIG2 – lossless or lossy compression of B&W images
- JPEG-LS – lossless/near-lossless compression standard
- JPEG 2000 – includes a lossless compression method, as proven by Sunil Kumar of San Diego State University
- JPEG XR – formerly WMPhoto and HD Photo; includes a lossless compression method
- PGF – Progressive Graphics File (lossless or lossy compression)
- PNG – Portable Network Graphics
- TIFF – Tagged Image File Format
- Gifsicle (GPL) – optimize GIF files
- Jpegoptim (GPL) – optimize JPEG files
Cryptography
Cryptosystems often compress data before encryption for added security; compression prior to encryption helps remove redundancies and patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms introduce predictable patterns (such as headers, wrappers, and tables) into the compressed data that may actually make cryptanalysis easier. Therefore, cryptosystems often incorporate specialized compression algorithms specific to the cryptosystem—or at least demonstrated or widely held to be cryptographically secure—rather than standard compression algorithms that are efficient but provide potential opportunities for cryptanalysis.
Executables
Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs the original application. This is especially often used in demo coding, where competitions are held for demos with strict size limits, as small as 1k.

This type of compression is not strictly limited to binary executables, but can also be applied to scripts, such as JavaScript.
Lossless compression benchmarks
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks. There are a number of better-known compression benchmarks. Some benchmarks cover only the compression ratio, so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software. The benchmarks listed in the 5th edition of the Handbook of Data Compression (Springer, 2009) are:
- The Maximum Compression benchmark, started in 2003 and frequently updated, includes over 150 programs. Maintained by Werner Bergmans, it tests on a variety of data sets, including text, images and executable code. Two types of results are reported: single file compression (SFC) and multiple file compression (MFC). Not surprisingly, context-mixing programs often win here; programs from the PAQ series and WinRK are often at the top. The site also has a list of pointers to other benchmarks.
- UCLC (the ultimate command-line compressors) benchmark by Johan de Bock is another actively maintained benchmark including over 100 programs. The winners in most tests are usually PAQ programs and WinRK, with the exception of lossless audio encoding and grayscale image compression, where some specialized algorithms shine.
- Squeeze Chart by Stephan Busch is another frequently updated site.
- The EmilCont benchmarks by Berto Destasio are somewhat outdated, having been most recently updated in 2004. A distinctive feature is that the data set is not made public, in order to prevent optimizations targeting it specifically. Nevertheless, the best-ratio winners are again the PAQ family, SLIM and WinRK.
- The Archive Comparison Test (ACT) by Jeff Gilchrist included 162 DOS/Windows and 8 Macintosh lossless compression programs, but it was last updated in 2002.
- The Art Of Lossless Data Compression by Alexander Ratushnyak provides similar tests performed in 2003.
Matt Mahoney, in his February 2010 edition of the free booklet Data Compression Explained, additionally lists the following:
- The Calgary Corpus, dating back to 1987, is no longer widely used due to its small size, although Leonid A. Broukhis still maintains The Calgary Corpus Compression Challenge, which started in 1996.
- The Large Text Compression Benchmark and the similar Hutter Prize, both using a trimmed Wikipedia XML UTF-8 data set.
- The Generic Compression Benchmark, maintained by Mahoney himself, tests compression on random data.
- Sami Runsas (author of NanoZip) maintains Compression Ratings, a benchmark similar to the Maximum Compression multiple file test, but with minimum speed requirements. It also offers a calculator that allows the user to weight the importance of speed and compression ratio. The top programs here are fairly different due to the speed requirements. In January 2010, the top programs were NanoZip followed by FreeArc, CCM, flashzip, and 7-Zip.
- The Monster of Compression benchmark by N. F. Antonio tests compression on 1 GB of public data with a 40-minute time limit. As of December 20, 2009, the top-ranked archiver is NanoZip 0.07a and the top-ranked single file compressor is ccmx 1.30c, both context mixing.
Limitations
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any (lossless) data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm. This is easily proven with elementary mathematics using a counting argument, as follows:

- Assume that each file is represented as a string of bits of some arbitrary length.
- Suppose that there is a compression algorithm that transforms every file into a distinct file which is no longer than the original file, and that at least one file will be compressed into something that is shorter than itself.
- Let M be the least number such that there is a file F with length M bits that compresses to something shorter. Let N be the length (in bits) of the compressed version of F.
- Because N < M, every file of length N keeps its size during compression. There are 2^N such files. Together with F, this makes 2^N + 1 files which all compress into one of the 2^N files of length N.
- But 2^N is smaller than 2^N + 1, so by the pigeonhole principle there must be some file of length N which is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should that yield?), which contradicts the assumption that the algorithm was lossless.
- We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.
Any lossless compression algorithm that makes some files shorter must necessarily make some files longer, but it is not necessary that those files become very much longer. Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, DEFLATE compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
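A back-of-the-envelope sketch of that DEFLATE bound (assuming, per the figure above, 5 bytes of header per stored block of at most 65,535 bytes):

    import math

    def deflate_worst_case_overhead(n_bytes: int) -> int:
        """Worst-case growth if every block falls back to 'stored'."""
        blocks = max(1, math.ceil(n_bytes / 65535))
        return 5 * blocks

    print(deflate_worst_case_overhead(10**6))  # 80 bytes of overhead on ~1 MB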
In fact, if we consider files of length N and assume that all files are equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily-modeled redundancy
Redundancy (information theory)
Redundancy in information theory is the number of bits used to transmit a message minus the number of bits of actual information in the message. Informally, it is the amount of wasted "space" used to transmit certain data...
that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm: indeed, this result is used to define the concept of randomness in algorithmic complexity theory.
An algorithm that is asserted to be able to losslessly compress any data stream is provably impossible. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N-1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 0. Allegedly "perfect" compression algorithms are usually derisively called "magic" compression algorithms.
On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity; hence, given any particular file, even if it appears random, it's possible that it may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi, which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
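A hedged sketch of the counting behind that theorem (notation ours): among the $2^n$ files of $n$ bits, only outputs shorter than $n-8$ bits can represent files compressed by more than one byte, so

    \frac{\#\{\text{outputs of length} \le n-9\}}{2^{n}}
      \;=\; \frac{2^{n-8}-1}{2^{n}} \;<\; 2^{-8} \approx 0.4\%,

leaving over 99% of n-bit files that cannot shrink by more than one byte, whatever the decompressor.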
Mathematical background
Any compression algorithm can be viewed as a function that maps sequences of units (normally octets) into other sequences of the same units. Compression is successful if the resulting sequence is shorter than the original sequence plus the map needed to decompress it. In order for a compression algorithm to be considered lossless, there needs to exist a reverse mapping from compressed bit sequences to original bit sequences; that is to say, the compression method would need to encapsulate a bijection between "plain" and "compressed" bit sequences.
The sequences of length N or less are clearly a strict superset of the sequences of length N-1 or less. It follows that there are more sequences of length N or less than there are sequences of length N-1 or less. It therefore follows from the pigeonhole principle that it is not possible to map every sequence of length N or less to a unique sequence of length N-1 or less. Therefore it is not possible to produce an algorithm that reduces the size of every possible input sequence.
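Making the count explicit (notation ours):

    \#\{\text{sequences of length} \le N\} \;=\; \sum_{k=0}^{N} 2^{k} \;=\; 2^{N+1}-1
      \;>\; 2^{N}-1 \;=\; \#\{\text{sequences of length} \le N-1\},

so any map from the first set into the second must send two distinct inputs to the same output.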
Psychological background
Most everyday files are relatively 'sparse' in an information entropy sense, and thus most lossless algorithms a layperson is likely to apply to regular files compress them relatively well. This may, through misapplication of intuition, lead some individuals to conclude that a well-designed compression algorithm can compress any input, thus constituting a magic compression algorithm.
Points of application in real compression theory
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing whether its output is smaller than its input. Sometimes, detection is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" incompressible without any more sophisticated detection. A common way of handling this situation is quoting the input, or incompressible parts of the input, in the output, minimising the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.
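A minimal Python sketch of this quote-or-compress handling (the one-byte method flag and framing are our own illustration, not the actual zip format):

    import os
    import zlib

    def pack(data: bytes) -> bytes:
        """Prefix 0x01 + DEFLATE output if that is smaller, else 0x00 + raw data,
        so no input ever grows by more than the single method byte."""
        deflated = zlib.compress(data)
        if len(deflated) < len(data):
            return b"\x01" + deflated
        return b"\x00" + data                  # 'stored': quoted verbatim

    def unpack(blob: bytes) -> bytes:
        method, payload = blob[0], blob[1:]
        return zlib.decompress(payload) if method == 1 else payload

    high_entropy = os.urandom(1024)            # incompressible input gets stored
    assert unpack(pack(high_entropy)) == high_entropy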
The Million Random Number Challenge
Mark Nelson, frustrated over many cranks trying to claim having invented a magic compression algorithm appearing in comp.compression, has constructed a 415,241 byte binary file (http://marknelson.us/attachments/million-digit-challenge/AMillionRandomDigits.bin) of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute ("decompress") it without error.

The FAQ for the comp.compression newsgroup contains a challenge by Mike Goldman offering $5,000 for a program that can compress random data. Patrick Craig took up the challenge, but rather than compressing the data, he split it up into separate files, all of which ended in the digit '5', which was not stored as part of the file. Omitting this character allowed the resulting files (plus, in accordance with the rules, the size of the program that reassembled them) to be smaller than the original file. However, no actual compression took place: the information stored in the names of the files was necessary to reassemble them in the correct order into the original file, and this information was not taken into account in the file size comparison. The files themselves are thus not sufficient to reconstitute the original file; the file names are also necessary. A full history of the event, including discussion on whether or not the challenge was technically met, is on Patrick Craig's web site.
See also
- Audio compression (data)
- Comparison of file archivers
- David A. Huffman
- Entropy (information theory)
- Kolmogorov complexity
- Data compression
- Precompressor
- Lossy compression
- Lossless Transform Audio Compression (LTAC)
- List of codecs
- Information theory
- Universal code (data compression)
- Grammar induction
External links
- Comparison of Lossless Audio Compressors at Hydrogenaudio Wiki
- Comparing lossless and lossy audio formats for music archiving
- http://www.data-compression.com/theory.html — data-compression.com's overview of data compression and its fundamental limitations
- http://www.faqs.org/faqs/compression-faq/part2/section-4.html — comp.compression's FAQ item 73, What is the theoretical compression limit?
- http://www.c10n.info/archives/439 — c10n.info's overview of US patent #7,096,360, a "Frequency-Time Based Data Compression Method" supporting "the compression, encryption, decompression, and decryption and persistence of many binary digits through frequencies where each frequency represents many bits."