Saturday, July 12, 2008

MP3

MPEG-1 Audio Layer 3, more commonly referred to as MP3, is a digital audio encoding format using a form of lossy data compression.

It is a common audio format for consumer audio storage, as well as a de facto standard encoding for the transfer and playback of music on digital audio players.

MP3 is an audio-specific format that was co-designed by several teams of engineers at Fraunhofer IIS in Erlangen, Germany, AT&T-Bell Labs in Murray Hill, NJ, USA, Thomson-Brandt, and CCETT. It was approved as an ISO/IEC standard in 1991.

MP3's use of a lossy compression algorithm is designed to greatly reduce the amount of data required to represent the audio recording and still sound like a faithful reproduction of the original uncompressed audio to most listeners, though it is not considered high fidelity audio by audiophiles. An MP3 file created using the mid-range bit rate setting of 128 kbit/s will typically be about 1/10th the size of the CD file created from the original audio source. An MP3 file can also be constructed at higher or lower bit rates, with higher or lower resulting quality. The compression works by reducing the accuracy of certain parts of the sound that are deemed beyond the auditory resolution ability of most people. This method is commonly referred to as perceptual coding. It internally provides a representation of sound within a short-term time/frequency analysis window, using psychoacoustic models to discard or reduce the precision of components less audible to human hearing, and recording the remaining information in an efficient manner. This is broadly similar to the principles used by JPEG, an image compression format.
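
The "about 1/10th" figure follows directly from the arithmetic, since uncompressed CD audio runs at roughly 1.4 Mbit/s. A minimal sketch (illustrative only; real files also carry frame headers and metadata):

```python
# Rough file-size comparison between uncompressed CD audio and a
# 128 kbit/s MP3. Illustrative arithmetic only; real files also carry
# frame headers and metadata.

CD_BITRATE = 44_100 * 16 * 2   # samples/s x bits/sample x channels = 1,411,200 bit/s
MP3_BITRATE = 128_000          # 128 kbit/s

def size_mb(bitrate_bps, seconds):
    """Megabytes needed for `seconds` of audio at a constant bit rate."""
    return bitrate_bps * seconds / 8 / 1_000_000

song = 4 * 60  # a four-minute song
print(f"CD:  {size_mb(CD_BITRATE, song):.1f} MB")   # about 42.3 MB
print(f"MP3: {size_mb(MP3_BITRATE, song):.1f} MB")  # about 3.8 MB
```

The ratio works out to 1,411,200 / 128,000, or slightly more than 11:1.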

The MP3 audio data compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking. In 1894, Mayer reported that a tone could be rendered inaudible by another tone of lower frequency. In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. Ernst Terhardt et al. later created an algorithm describing auditory masking with high accuracy.

The psychoacoustic masking codec was first proposed in 1979, apparently independently, by Manfred Schroeder et al. of AT&T-Bell Labs in Murray Hill, NJ, and by M. A. Krasner, both in the United States. Krasner was the first to publish and to produce hardware, but the publication of his results as a relatively obscure Lincoln Laboratory Technical Report did not immediately influence the mainstream of psychoacoustic codec development. Manfred Schroeder was already a well-known and revered figure in the worldwide community of acoustical and electrical engineers, and his paper had influence in acoustic and source-coding (audio data compression) research. Both Krasner and Schroeder built upon the work of Eberhard F. Zwicker in the areas of tuning and masking of critical bands, which in turn built on the fundamental research in the area at Bell Labs by Harvey Fletcher and his collaborators. A wide variety of (mostly perceptual) audio compression algorithms were reported in IEEE's refereed Journal on Selected Areas in Communications. That journal reported in February 1988 on a wide range of established, working audio bit compression technologies, most of them using auditory masking as part of their fundamental design, and several showing real-time hardware implementations.

The immediate predecessors of MP3 were "Optimum Coding in the Frequency Domain" (OCF) and Perceptual Transform Coding (PXFM). These two codecs, along with block-switching contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG and won the quality competition, but was mistakenly rejected as too complex to implement. The first practical implementation of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use) was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips. MP3 is directly descended from OCF and PXFM. MP3 represents the outcome of the collaboration of Dr. Karlheinz Brandenburg, working as a postdoc at AT&T-Bell Labs with Mr. James D. Johnston of AT&T-Bell Labs, collaborating with the Fraunhofer Society for Integrated Circuits, Erlangen, with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders.

MPEG-1 Audio Layer 2 encoding began as the Digital Audio Broadcast (DAB) project managed by Egon Meier-Engelen of the Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt (later on called Deutsches Zentrum für Luft- und Raumfahrt, German Aerospace Center) in Germany. The European Community financed this project, commonly known as EU-147, from 1987 to 1994 as a part of the EUREKA research program.

As a doctoral student at Germany's University of Erlangen-Nuremberg, Karlheinz Brandenburg began working on digital music compression in the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989 and became an assistant professor at Erlangen-Nuremberg. While there, he continued to work on music compression with scientists at the Fraunhofer Society (in 1993 he joined the staff of the Fraunhofer Institute).

In 1991 there were two proposals available: Musicam and ASPEC (Adaptive Spectral Perceptual Entropy Coding). The Musicam technique, as proposed by Philips (The Netherlands), CCETT (France) and Institut für Rundfunktechnik (Germany), was chosen due to its simplicity and error robustness, as well as the low computational power required for encoding high quality compressed audio. The Musicam format, based on sub-band coding, was the basis of the MPEG Audio compression format (sampling rates, structure of frames, headers, number of samples per frame). Much of its technology and ideas were incorporated into the definition of ISO MPEG Audio Layer I and Layer II, and the filter bank alone into the Layer III (MP3) format, as part of the computationally inefficient hybrid filter bank. Under the chairmanship of Professor Musmann (University of Hannover), the editing of the standard was carried out under the responsibilities of Leon van de Kerkhof (Layer I) and Gerhard Stoll (Layer II).

A working group consisting of Leon van de Kerkhof (The Netherlands), Gerhard Stoll (Germany), Leonardo Chiariglione (Italy), Yves-François Dehery (France), Karlheinz Brandenburg (Germany) and James D. Johnston (USA) took ideas from ASPEC, integrated the filter bank from Layer 2, added some of their own ideas and created MP3, which was designed to achieve the same quality at 128 kbit/s as MP2 at 192 kbit/s.

All algorithms were approved in 1991 and finalized in 1992 as part of MPEG-1, the first standard suite by MPEG, which resulted in the international standard ISO/IEC 11172-3, published in 1993. Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2, more formally known as international standard ISO/IEC 13818-3, originally published in 1995.

Compression efficiency of encoders is typically defined by the bit rate, because compression ratio depends on the bit depth and sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the CD parameters as references (44.1 kHz, 2 channels at 16 bits per channel or 2×16 bit), or sometimes the Digital Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which demonstrates the problem with use of the term compression ratio for lossy encoders.
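
The ambiguity can be made concrete with a quick calculation: the same 128 kbit/s MP3 stream yields a different "compression ratio" depending on which uncompressed reference is chosen.

```python
# The same 128 kbit/s MP3 stream gives different "compression ratios"
# depending on the uncompressed reference used.

MP3 = 128_000            # bit/s
CD  = 44_100 * 16 * 2    # 1,411,200 bit/s (CD: 44.1 kHz, 2 x 16 bit)
DAT = 48_000 * 16 * 2    # 1,536,000 bit/s (DAT SP: 48 kHz, 2 x 16 bit)

print(f"vs CD:  {CD / MP3:.2f}:1")   # 11.03:1
print(f"vs DAT: {DAT / MP3:.2f}:1")  # 12.00:1
```

Neither number says anything about the fidelity of the result, which is why bit rate is the more meaningful figure for lossy encoders.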

Karlheinz Brandenburg used a CD recording of Suzanne Vega's song "Tom's Diner" to assess and refine the MP3 compression algorithm. This song was chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections in the compression format during playback. Some jokingly refer to Suzanne Vega as "the mother of MP3". Some more critical audio excerpts (glockenspiel, triangle, accordion, etc.) were taken from the EBU V3/SQAM reference compact disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats. The "Tom's Diner" recording is also notable in that it poses substantial difficulties arising from Binaural Masking Level Depression (BMLD), as discussed, for instance, in Brian C. J. Moore's book on the psychology of human hearing.

Encoding Audio
The MPEG-1 standard does not include a precise specification for an MP3 encoder. Implementers of the standard were supposed to devise their own algorithms suitable for removing parts of the information in the raw audio (or rather its MDCT representation in the frequency domain). During encoding, 576 time-domain samples are taken and are transformed to 576 frequency-domain samples. If there is a transient, 192 samples are taken instead of 576. This is done to limit the temporal spread of quantization noise accompanying the transient.
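
The long/short window decision can be illustrated with a toy transient detector. The energy measure and the threshold below are assumptions made for this sketch, not the psychoacoustic model of any real encoder:

```python
# Toy illustration of MP3-style window switching: use a long block of
# 576 samples normally, but switch to short 192-sample blocks when a
# transient (sudden jump in energy) is detected, limiting the temporal
# spread of quantization noise. The energy measure and threshold are
# illustrative assumptions, not the standard's psychoacoustic model.

def block_sizes(samples, threshold=4.0):
    """Return the transform sizes chosen for consecutive 576-sample granules."""
    sizes = []
    for i in range(0, len(samples) - 576 + 1, 576):
        granule = samples[i:i + 576]
        thirds = [granule[j:j + 192] for j in (0, 192, 384)]
        energies = [sum(x * x for x in part) / 192 for part in thirds]
        # A transient shows up as a large energy jump between sub-blocks.
        transient = max(energies) > threshold * (min(energies) + 1e-12)
        sizes.append([192, 192, 192] if transient else [576])
    return sizes

quiet = [0.01] * 576                  # steady, low-level signal
attack = [0.01] * 384 + [1.0] * 192   # sudden attack in the last third
print(block_sizes(quiet + attack))    # [[576], [192, 192, 192]]
```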

As a result, there are many different MP3 encoders available, each producing files of differing quality. Comparisons are widely available, so it is easy for a prospective user of an encoder to research the best choice. It must be kept in mind that an encoder that is proficient at encoding at higher bit rates (such as LAME) is not necessarily as good at lower bit rates.

Decoding Audio
Decoding, on the other hand, is carefully defined in the standard. Most decoders are "bitstream compliant", which means that the decompressed output they produce from a given MP3 file will be the same (within a specified degree of rounding tolerance) as the output specified mathematically in the ISO/IEC standard document. The MP3 file has a standard format, which is a frame consisting of 384, 576, or 1152 samples (depending on MPEG version and layer), and all frames have associated header information (32 bits) and side information (9, 17, or 32 bytes, depending on MPEG version and stereo/mono). The header and side information help the decoder decode the associated Huffman-encoded data correctly.
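
For MPEG-1 Layer III this frame structure leads to a simple size formula: a frame holds 1152 samples, so it spans 1152 / sample_rate seconds, and at a given bit rate occupies bitrate x duration / 8 bytes. A small sketch:

```python
# Size in bytes of one MPEG-1 Layer III frame. A frame carries 1152
# samples, so it lasts 1152 / sample_rate seconds; at a given bit rate
# this works out to 144 * bitrate / sample_rate bytes (plus an optional
# padding byte), truncated to a whole number.

def frame_bytes(bitrate_bps, sample_rate_hz, padding=0):
    return 144 * bitrate_bps // sample_rate_hz + padding

print(frame_bytes(128_000, 44_100))  # 417 bytes (418 with the padding byte)
print(frame_bytes(320_000, 48_000))  # 960 bytes
```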

Therefore, comparison of decoders is usually based on how computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process).

Audio Quality
When performing lossy audio encoding, such as creating an MP3 file, there is a trade-off between the amount of space used and the sound quality of the result. Typically, the creator is allowed to set a bit rate, which specifies how many kilobits the file may use per second of audio, for example, when ripping a compact disc to this format. The lower the bit rate used, the lower the audio quality will be, but the smaller the file size. Likewise, the higher the bit rate used, the higher the quality, and therefore the larger the resulting file will be.

Files encoded with a lower bit rate will generally play back at a lower quality. With too low a bit rate, "compression artifacts" (i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such as ringing or pre-echo are usually heard. A sample of applause compressed with a relatively low bit rate provides a good example of compression artifacts.

Besides the bit rate of an encoded piece of audio, the quality of MP3 files also depends on the quality of the encoder itself, and the difficulty of the signal being encoded. As the MP3 standard allows quite a bit of freedom with encoding algorithms, different encoders may feature quite different quality, even when targeting similar bit rates. As an example, in a public listening test featuring two different MP3 encoders at about 128 kbit/s, one scored 3.66 on a 1–5 scale, while the other scored only 2.22.

Quality is heavily dependent on the choice of encoder and encoding parameters. While quality around 128 kbit/s was somewhere between annoying and acceptable with older encoders, modern MP3 encoders can provide adequate quality at those bit rates (as of January 2006). By contrast, in 1998, MP3 at 128 kbit/s provided quality only equivalent to AAC-LC at 96 kbit/s and MP2 at 192 kbit/s.

The transparency threshold of MP3 can be estimated at about 128 kbit/s with good encoders on typical music, as evidenced by its strong performance in the test cited above. However, some particularly difficult material, or music encoded for listeners with more sensitive hearing, can require 192 kbit/s or higher. As with all lossy formats, some samples cannot be encoded to be transparent for all users.

The simplest type of MP3 file uses one bit rate for the entire file — this is known as Constant Bit Rate (CBR) encoding. Using a constant bit rate makes encoding simpler and faster. However, it is also possible to create files where the bit rate changes throughout the file. These are known as Variable Bit Rate (VBR) files. The idea behind this is that, in any piece of audio, some parts will be much easier to compress, such as silence or music containing only a few instruments, while others will be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for the less complex passages and a higher one for the more complex parts. With some encoders, it is possible to specify a given quality, and the encoder will vary the bit rate accordingly. Users who know a particular "quality setting" that is transparent to their ears can use this value when encoding all of their music, and not need to worry about performing personal listening tests on each piece of music to determine the correct settings.
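
The effect of VBR on file size comes down to the time-weighted average bit rate. A sketch with made-up per-passage rates (illustrative values, not the output of any real encoder):

```python
# Sketch of the VBR idea: spend fewer bits on simple passages (silence,
# sparse instrumentation) and more on complex ones. The per-passage
# rates below are illustrative, not output of a real encoder.

passages = [
    # (duration in seconds, bit rate chosen by the encoder, in kbit/s)
    (30, 96),    # quiet intro
    (150, 192),  # full arrangement
    (20, 32),    # near-silence at the end
]

total_kbits = sum(sec * rate for sec, rate in passages)
total_sec = sum(sec for sec, _ in passages)
print(f"average bit rate: {total_kbits / total_sec:.1f} kbit/s")  # 161.6 kbit/s
print(f"file size: {total_kbits / 8 / 1000:.2f} MB")              # 4.04 MB
```

A 192 kbit/s CBR file of the same length would need 4.8 MB, so VBR here saves space while spending full quality on the complex passage.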

In a listening test, MP3 encoders at low bit rates performed significantly worse than those using more modern compression methods (such as AAC). In a 2004 public listening test at 32 kbit/s, the LAME MP3 encoder scored only 1.79/5 — behind all modern encoders — with Nero Digital HE AAC scoring 3.30/5.

Perceived quality can be influenced by the listening environment (ambient noise), listener attention, listener training, and, in most cases, by the listener's audio equipment (such as sound cards, speakers and headphones).

Bit Rates
Several bit rates are specified in the MPEG-1 Layer 3 standard: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s, and the available sampling frequencies are 32, 44.1 and 48 kHz. A sample rate of 44.1 kHz is almost always used, because this is also used for CD audio, the main source for creating MP3 files. A greater variety of bit rates is used on the Internet. 128 kbit/s is the most common, because it typically offers adequate audio quality in a relatively small space. 192 kbit/s is often used by those who notice artifacts at lower bit rates. As Internet bandwidth and hard drive sizes have increased, 128 kbit/s files are slowly being replaced with higher bit rates like 192 kbit/s, with some audio encoded up to MP3's maximum of 320 kbit/s. Higher bit rates are unlikely to become popular with any lossy audio codec, since rates above 320 kbit/s encroach on the domain of lossless codecs such as FLAC.

By contrast, uncompressed audio as stored on a compact disc has a bit rate of 1,411.2 kbit/s (16 bits/sample × 44100 samples/second × 2 channels / 1000 bits/kilobit).

Some additional bit rates and sample rates were made available in the MPEG-2 and the (unofficial) MPEG-2.5 standards: bit rates of 8, 16, 24, and 144 kbit/s and sample rates of 8, 11.025, 12, 16, 22.05 and 24 kHz.

Non-standard bit rates up to 640 kbit/s can be achieved with the LAME encoder and the freeformat option, although few MP3 players can play those files. According to the ISO standard, decoders are only required to be able to decode streams up to 320 kbit/s.

An MP3 file is made up of multiple MP3 frames, which consist of the MP3 header and the MP3 data. This sequence of frames is called an elementary stream. Because of the "bit reservoir", frames are not independent items and cannot be extracted on arbitrary frame boundaries. The MP3 data is the actual audio payload. The MP3 header consists of a sync word, which is used to identify the beginning of a valid frame. This is followed by a bit indicating that this is the MPEG standard and two bits that indicate that Layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3 file. ISO/IEC 11172-3 defines the range of values for each section of the header along with the specification of the header. Most MP3 files today contain ID3 metadata, which precedes or follows the MP3 frames.
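
The fixed header fields described above can be pulled apart with a few bit shifts. The sketch below follows the ISO/IEC 11172-3 layout, but the lookup tables are trimmed to a couple of common MPEG-1 Layer III entries for brevity:

```python
# Decode the fixed fields of a 32-bit MP3 frame header: sync word,
# version bits, layer bits, bitrate index, sample rate index, padding.
# Field positions follow ISO/IEC 11172-3; the lookup tables here are
# trimmed to a few common MPEG-1 Layer III entries.

BITRATES_V1_L3 = {9: 128, 14: 320}                # bitrate index -> kbit/s (subset)
SAMPLE_RATES_V1 = {0: 44100, 1: 48000, 2: 32000}  # sample rate index -> Hz

def parse_header(header: int):
    if header >> 21 != 0x7FF:          # 11-bit sync word: all ones
        raise ValueError("not a frame header")
    version = (header >> 19) & 0b11    # 3 = MPEG-1
    layer = (header >> 17) & 0b11      # 1 = Layer III
    bitrate_idx = (header >> 12) & 0b1111
    rate_idx = (header >> 10) & 0b11
    padding = (header >> 9) & 0b1
    return {
        "mpeg1": version == 3,
        "layer3": layer == 1,
        "bitrate_kbps": BITRATES_V1_L3.get(bitrate_idx),
        "sample_rate_hz": SAMPLE_RATES_V1.get(rate_idx),
        "padding": padding,
    }

# 0xFFFB9000: MPEG-1 Layer III, 128 kbit/s, 44.1 kHz, no padding.
print(parse_header(0xFFFB9000))
```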

Source: Wikipedia


Windows Media Audio (WMA)

Windows Media Audio (WMA) is an audio data compression technology developed by Microsoft. The name can be used to refer to its audio file format or its audio codecs. It is a proprietary technology which forms part of the Windows Media framework. WMA consists of four distinct codecs. The original WMA codec, known simply as WMA, was conceived as a competitor to the popular MP3 and RealAudio codecs. Today it is one of the most popular codecs, together with MP3 and MPEG-4 AAC. In 2003 it came second after MP3 in terms of standalone players supporting it. WMA Pro, a newer and more advanced codec, supports multichannel and high resolution audio. A lossless codec, WMA Lossless, compresses audio data without loss of audio fidelity. And WMA Voice, targeted at voice content, applies compression using a range of low bit rates.

The first WMA codec was based on previous work by Henrique Malvar and his team. According to the published article, the technology was transferred to the Windows Media team at Microsoft. Malvar was a senior researcher and manager of the Signal Processing Group at Microsoft Research, whose team worked on a project called MSAudio. The first finalized codec was initially referred to as MSAudio 4.0. It was later officially released as Windows Media Audio, as part of Windows Media Technologies 4.0. Microsoft initially claimed that WMA delivers the same quality as MP3 at half the bit rate; Microsoft also claimed that WMA delivers "CD-quality" audio at 64 kbit/s. The former claim, however, was rejected by some audiophiles, according to EDN. RealNetworks also challenged Microsoft's claims regarding WMA's superior audio quality compared to RealAudio.

Newer versions of WMA became available: Windows Media Audio 2 in 1999, Windows Media Audio 7 in 2000, Windows Media Audio 8 in 2001, and Windows Media Audio 9 in 2003. Microsoft first announced its plans to license WMA technology to third-parties in 1999. Although earlier versions of Windows Media Player played WMA files, support for WMA file creation was not added until the seventh version. In 2003, Microsoft released new audio codecs which were not compatible with the original WMA codec. These codecs were Windows Media Audio 9 Professional, Windows Media Audio 9 Lossless, and Windows Media Audio 9 Voice.

A WMA file is in most circumstances encapsulated, or contained, in the Advanced Systems Format (ASF) container format, featuring a single audio track in one of the following codecs: WMA, WMA Pro, WMA Lossless, or WMA Voice. These codecs are technically distinct and mutually incompatible. The ASF container format specifies how metadata about the file is to be encoded, similar to the ID3 tags used by MP3 files. Metadata may include song name, track number, artist name, and also audio normalization values.

This container can optionally support digital rights management (DRM) using a combination of elliptic curve cryptography key exchange, DES block cipher, a custom block cipher, RC4 stream cipher and the SHA-1 hashing function.

Windows Media Audio (WMA) is the most common codec of the four WMA codecs. Colloquial usage of the term WMA, especially in marketing materials and device specifications, usually refers to this codec only. The first version of the codec, released in 1999, is regarded as WMA 1. In the same year, the bit stream syntax, or compression algorithm, was altered in minor ways and became WMA 2. Since then, newer versions of the codec have been released, but the decoding process remained the same, ensuring compatibility between codec versions. WMA is a lossy audio codec based on the study of psychoacoustics. Audio signals that are deemed imperceptible to the human ear are encoded with reduced resolution during the compression process.

WMA can encode audio signals sampled at up to 48000 times per second (48 kHz) with up to two discrete channels (stereo). WMA 9 introduced variable bit rate (VBR) and average bit rate (ABR) coding techniques into the Microsoft encoder, although both were technically supported by the original format. WMA 9.1 also added support for low-delay audio, which reduces latency for encoding and decoding.

Fundamentally, WMA is a transform coder based on the modified discrete cosine transform (MDCT), somewhat similar to AAC and Vorbis. The bit stream of WMA is composed of superframes, each containing one or more frames of 2048 samples. If the bit reservoir is not used, a frame is equal to a superframe. Each frame contains a number of blocks, which are 128, 256, 512, 1024, or 2048 samples long after being transformed into the frequency domain via the MDCT. In the frequency domain, masking for the transformed samples is determined, and then used to requantize the samples. Finally, the floating point samples are decomposed into coefficient and exponent parts and independently Huffman coded. Stereo information is typically mid/side coded. At low bit rates, line spectral pairs (typically below 17 kbit/s) and a form of noise coding (typically below 33 kbit/s) can also be used to improve quality.
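
The transform at the heart of WMA (and of AAC and Vorbis) can be sketched in a few lines. The following is a minimal, slow, pure-Python MDCT/IMDCT pair with a sine window; real codecs use fast algorithms, psychoacoustic quantization, and the multiple block sizes described above. Overlap-adding the windowed inverse transforms of adjacent 50%-overlapping blocks reconstructs the signal exactly, which is the time-domain aliasing cancellation property these codecs rely on:

```python
# Minimal MDCT/IMDCT demonstration of time-domain aliasing cancellation.
# Blocks of 2N samples overlap by N; a sine window satisfying the
# Princen-Bradley condition makes windowed overlap-add reconstruction exact.
import math

def mdct(x):
    """MDCT of 2N (windowed) samples -> N coefficients."""
    N = len(x) // 2
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N)) for k in range(N)]

def imdct(X):
    """Inverse MDCT of N coefficients -> 2N samples."""
    N = len(X)
    return [2.0 / N * sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                          for k in range(N)) for n in range(2 * N)]

N = 64
win = [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]  # sine window

# A test signal three half-blocks long; the two blocks overlap by N samples.
sig = [math.sin(0.05 * n) + 0.3 * math.sin(0.31 * n) for n in range(3 * N)]
blocks = [sig[0:2 * N], sig[N:3 * N]]

# Analysis: window, then MDCT. Synthesis: IMDCT, window again, overlap-add.
coeffs = [mdct([s * w for s, w in zip(b, win)]) for b in blocks]
outs = [[s * w for s, w in zip(imdct(c), win)] for c in coeffs]

# The middle N samples are the sum of the two overlapping windowed halves.
mid = [outs[0][N + n] + outs[1][n] for n in range(N)]
err = max(abs(m - s) for m, s in zip(mid, sig[N:2 * N]))
print(f"max reconstruction error: {err:.2e}")  # float rounding noise only
```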

Like AAC and Ogg Vorbis, WMA was intended to address perceived deficiencies in the MP3 standard. Given their common design goals, it is not surprising that the three formats ended up making similar design choices. All three are pure transform codecs. Furthermore, the MDCT implementation used in WMA is essentially a superset of those used in Ogg Vorbis and AAC, such that WMA's iMDCT and windowing routines can be used to decode AAC and Ogg Vorbis almost unmodified. However, quantization and stereo coding are handled differently in each codec. The primary distinguishing trait of the WMA Standard format is its unique use of five different block sizes, compared to MP3, AAC, and Ogg Vorbis, which each restrict files to just two sizes.

WMA is one of the most popular audio codecs. Certified PlaysForSure devices, as well as a large number of uncertified devices, ranging from portable hand-held music players to set-top DVD players, support the playback of WMA files. Most PlaysForSure-certified online stores distribute content using this codec only. In 2005, Nokia announced its plans to support WMA playback in future Nokia handsets. In the same year, an update was made available for the PlayStation Portable (version 2.60) which allowed WMA files to be played on the device for the first time.

Microsoft claims that audio encoded with WMA sounds better than MP3 at the same bit rate; Microsoft also claims that WMA at lower bit rates sounds better than MP3 at higher bit rates. Double-blind listening tests against other lossy audio codecs have shown varying results, ranging from failure to support Microsoft's claims of superior quality to supremacy over other codecs. One independent test conducted in May 2004 at 128 kbit/s showed that WMA was roughly equivalent to LAME MP3, inferior to AAC and Vorbis, and superior to ATRAC3 (software version). Another test performed by ExtremeTech showed different results, however, placing WMA at the top of the list in terms of quality.

Some conclusions made by recent studies:

  • At 32 kbit/s, WMA Standard was noticeably better than LAME MP3, but not better than other modern codecs in a collective, independent test in July 2004.
  • At 48 kbit/s, WMA 10 Pro was ranked second after Nero HE-AAC and better than WMA 9.2 in an independent listening test organized and supported by Sebastian Mares and Hydrogenaudio Forums in December 2006. This test, however, used CBR for WMA 10 Pro and VBR for the other codecs.
  • At 64 kbit/s, WMA Pro outperformed Nero HE-AAC in a commissioned, independent listening test performed by the National Software Testing Labs in 2005. Out of 300 participants, "71% of all listeners indicated that WMA Pro was equal to or better than HE AAC."
  • At 80 kbit/s and 96 kbit/s, WMA had lower quality than HE-AAC, AAC-LC, and Vorbis; near-equivalent quality to MP3, and better quality than MPC in individual tests done in 2005.
  • At 128 kbit/s, there was a four-way tie between aoTuV Vorbis, LAME MP3, WMA 9 Pro and AAC in a large scale test in January 2006, with each codec sounding close to the uncompressed music file for most listeners.
  • At 768 kbit/s, WMA 9 Pro delivered full-spectrum response at half the bit rate required for DTS in a comparative test done by EDN in October 2003. The test sample was a 48 kHz, 5.1 channel surround audio track.


Source: Wikipedia