Digital video
Digital video is a type of digital recording system that works by using a digital rather than an analog video signal. The terms camera, video camera, and camcorder are used interchangeably in this article.
History
Starting in the late 1970s to the early 1980s, several types of video production equipment, such as time base correctors (TBCs) and digital video effects (DVE) units, were introduced that operated by taking a standard analog composite video input and digitizing it internally. One example of the former was the Thomson-CSF 9100 Digital Video Processor, an internally all-digital full-frame TBC introduced in 1980; two examples of the latter were the Ampex ADO and the Nippon Electric Corporation (NEC) DVE. Digitizing made it easier to correct or enhance the video signal, in the case of a TBC, or to manipulate and add effects to the video, in the case of a DVE unit. The digitized and processed video information from these units would then be converted back to standard analog video.
Later in the 1970s, manufacturers of professional video broadcast equipment, such as Bosch (through their Fernseh division), RCA, and Ampex, developed prototype digital videotape recorders (VTRs) in their research and development labs. Bosch's machine used a modified 1" Type B transport and recorded an early form of CCIR 601 digital video. None of these machines was ever marketed commercially, however.
Digital video was first introduced commercially in 1986 with the Sony D-1 format, which recorded an uncompressed standard-definition component video signal in digital form instead of the high-band analog forms that had been commonplace until then. Due to its expense, D-1 was used primarily by large television networks. It would eventually be replaced by cheaper systems using video compression, most notably Sony's Digital Betacam (still heavily used as an electronic field production (EFP) recording format by professional television producers), which were introduced into network television studios.
One of the first digital video products to run on personal computers was PACo: The PICS Animation Compiler, from The Company of Science & Art in Providence, RI, which was developed starting in 1990 and first shipped in May 1991. PACo could stream unlimited-length video with synchronized sound from a single file on CD-ROM. Creation required a Mac; playback was possible on Macs, PCs, and Sun SPARCstations. In 1992, Bernard Luskin of Philips Interactive Media and Eric Doctorow of Paramount Worldwide Video put the first fifty videos on CD in digital MPEG-1, developed the packaging, and launched movies on CD, paving the way for later versions of MPEG and for DVD.
QuickTime, Apple Computer's architecture for time-based and streaming data formats, appeared in June 1991. Initial consumer-level content creation tools were crude, requiring an analog video source to be digitized to a computer-readable format. While low-quality at first, consumer digital video increased rapidly in quality, first with the introduction of playback standards such as MPEG-1 and MPEG-2 (adopted for use in television transmission and DVD media), and then with the introduction of the DV tape format, which allowed recording directly to digital data and simplified the editing process, letting non-linear editing systems (NLE) be deployed cheaply and widely on desktop computers with no external playback/recording equipment needed. The widespread adoption of digital video has also drastically reduced the bandwidth needed for a high-definition video signal (HDV and AVCHD, as well as several commercial variants such as DVCPRO-HD, all use less bandwidth than a standard-definition analog signal) and has enabled tapeless camcorders based on flash memory, often recording a variant of MPEG-4.
Overview of basic properties
Digital video comprises a series of orthogonal bitmap digital images displayed in rapid succession at a constant rate. In the context of video these images are called frames. We measure the rate at which frames are displayed in frames per second (FPS).
Since every frame is an orthogonal bitmap digital image, it comprises a raster of pixels. If it has a width of W pixels and a height of H pixels, we say that the frame size is WxH.
Pixels have only one property: their color. The color of a pixel is represented by a fixed number of bits; the more bits, the more subtle variations of color can be reproduced. This is called the color depth (CD) of the video.
An example video can have a duration (T) of 1 hour (3600 s), a frame size of 640x480 (WxH), a color depth of 24 bits, and a frame rate of 25 fps. This example video has the following properties:
- pixels per frame = 640 * 480 = 307,200
- bits per frame = 307,200 * 24 = 7,372,800 = 7.37 Mbit
- bit rate (BR) = 7,372,800 * 25 = 184,320,000 bit/s = 184.32 Mbit/s
- video size (VS) = 184.32 Mbit/s * 3600 s = 663,552 Mbit = 82,944 Mbyte = 82.9 Gbyte
The most important properties are bit rate and video size. The formulas relating these two to all the other properties are:
BR = W * H * CD * FPS
VS = BR * T = W * H * CD * FPS * T
(units are: BR in bit/s, W and H in pixels, CD in bits, VS in bits, T in seconds)
while some secondary formulas are:
pixels_per_frame = W * H
pixels_per_second = W * H * FPS
bits_per_frame = W * H * CD
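These formulas translate directly into a short calculation. The sketch below (Python, with illustrative function names that are not from any particular library) applies them to the 640x480 example above:

```python
# Uncompressed bit rate and video size, using the variable names from the text.

def uncompressed_bit_rate(W, H, CD, FPS):
    """BR = W * H * CD * FPS, in bits per second."""
    return W * H * CD * FPS

def video_size(BR, T):
    """VS = BR * T, in bits."""
    return BR * T

# The example video from the text: 640x480, 24-bit color, 25 fps, 1 hour.
BR = uncompressed_bit_rate(W=640, H=480, CD=24, FPS=25)  # 184,320,000 bit/s
VS = video_size(BR, T=3600)                              # 663,552,000,000 bits

print(f"bit rate   = {BR / 1e6:.2f} Mbit/s")   # ~184.32 Mbit/s
print(f"video size = {VS / 8e9:.1f} GB")       # ~82.9 GB (decimal gigabytes)
```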
Regarding Interlacing
In interlaced video each frame is composed of two halves of an image. The first half contains only the odd-numbered lines of a full frame, and the second half contains only the even-numbered lines. Those halves are referred to individually as fields; two consecutive fields compose a full frame. If an interlaced video has a frame rate of 15 frames per second, the field rate is 30 fields per second. All the properties and formulas discussed here apply equally to interlaced video, but one should be careful not to confuse the fields-per-second rate with the frames-per-second rate.
Properties of compressed video
The above formulas are accurate for uncompressed video. Because of the relatively high bit rate of uncompressed video, video compression is used extensively. In the case of compressed video each frame requires only a small fraction of the original bits. Assuming a compression algorithm that shrinks the input data by a factor of CF, the bit rate and video size become:
BR = W * H * CD * FPS / CF
VS = BR * T = W * H * CD * FPS * T / CF
Note that all frames need not be compressed by the same factor CF; in practice they are not, so CF is the average compression factor over all the frames taken together.
The above equation for the bit rate can be rewritten by combining the compression factor and the color depth like this:
BR = W * H * ( CD / CF ) * FPS
The value (CD / CF) represents the average bits per pixel (BPP). As an example, if we have a color depth of 12 bits/pixel and an algorithm that compresses at 40x, then BPP equals 0.3 (12/40). So in the case of compressed video the formula for bit rate is:
BR = W * H * BPP * FPS
In fact the same formula is valid for uncompressed video because in that case one can assume that the "compression" factor is 1 and that the average bits per pixel equal the color depth.
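As a minimal sketch of these compressed-video relationships, assuming only the definitions given above (the function names are illustrative):

```python
# BPP = CD / CF, and BR = W * H * BPP * FPS for compressed video.

def bits_per_pixel(CD, CF):
    """Average bits per pixel after compression."""
    return CD / CF

def compressed_bit_rate(W, H, CD, FPS, CF):
    """BR = W * H * (CD / CF) * FPS, in bits per second."""
    return W * H * bits_per_pixel(CD, CF) * FPS

# Example from the text: a 12 bits/pixel source compressed 40x gives BPP = 0.3.
print(bits_per_pixel(CD=12, CF=40))                                   # 0.3
# Uncompressed video is the special case CF = 1, where BPP equals CD.
print(compressed_bit_rate(W=640, H=480, CD=24, FPS=25, CF=1) / 1e6)   # 184.32 (Mbit/s)
```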
More on bit rate and BPP
As is obvious from its definition, bit rate is a measure of the rate of information content of the digital video stream. In the case of uncompressed video, bit rate corresponds directly to the quality of the video (recall that bit rate is proportional to every property that affects video quality). Bit rate is an important property when transmitting video because the transmission link must be able to support that bit rate. It is also important when storing video because, as shown above, the video size is proportional to the bit rate and the duration. The bit rate of uncompressed video is too high for most practical applications, so video compression is used to greatly reduce it.
BPP is a measure of the efficiency of compression. A true-color video with no compression at all may have a BPP of 24 bits/pixel. Chroma subsampling can reduce the BPP to 16 or 12 bits/pixel. Applying JPEG compression to every frame can reduce the BPP to 8 or even 1 bit/pixel. Applying video compression algorithms such as MPEG-1, MPEG-2 or MPEG-4 allows for fractional BPP values.
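The same relationship can be inverted to estimate the average BPP an encoder actually achieved for a given file. The sketch below uses made-up example numbers purely for illustration:

```python
# Average BPP achieved by an encoded file, from VS = W * H * BPP * FPS * T.

def average_bpp(file_size_bytes, W, H, FPS, T):
    total_bits = file_size_bytes * 8
    total_pixels = W * H * FPS * T
    return total_bits / total_pixels

# Hypothetical file: 700 MB holding 90 minutes of 640x480 video at 25 fps.
bpp = average_bpp(700 * 10**6, W=640, H=480, FPS=25, T=90 * 60)
print(f"{bpp:.3f} bits/pixel")   # ~0.135 bits/pixel
```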
Constant bit rate versus variable bit rate
As noted above, BPP represents the average bits per pixel. Some compression algorithms keep the BPP almost constant throughout the entire duration of the video, producing output with a constant bit rate (CBR). CBR video is suitable for real-time, non-buffered, fixed-bandwidth video streaming (e.g. in videoconferencing).
Because not all frames can be compressed to the same degree (quality is more severely impacted in scenes of high complexity), some algorithms constantly adjust the BPP, keeping it high while compressing complex scenes and low for less demanding scenes. In this way one gets the best quality at the smallest average bit rate (and, accordingly, the smallest file size). When using this method the bit rate is of course variable, because it tracks the variations of the BPP.
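A toy bit-allocation loop can make the contrast concrete. The sketch below is not a real rate-control algorithm; the complexity scores are invented solely to illustrate how VBR redistributes the same total budget that CBR spreads evenly:

```python
# Toy illustration of CBR vs VBR bit allocation (not a real rate-control algorithm).

def allocate_cbr(total_bits, n_frames):
    """Every frame receives the same share of the bit budget."""
    return [total_bits / n_frames] * n_frames

def allocate_vbr(total_bits, complexity):
    """Frames receive bits in proportion to a (made-up) complexity score."""
    total_complexity = sum(complexity)
    return [total_bits * c / total_complexity for c in complexity]

# Invented complexity scores for six frames: a static scene, then fast motion.
complexity = [1.0, 1.0, 1.0, 4.0, 5.0, 3.0]
budget = 600_000  # total bits available for these six frames

print(allocate_cbr(budget, len(complexity)))  # constant: 100,000 bits per frame
print(allocate_vbr(budget, complexity))       # more bits where the scene is complex
```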
Technical overview
Standard film stocks such as 16 mm and 35 mm record at 24 frames per second. For video, there are two frame rate standards: NTSC, which shoots at 30/1.001 (about 29.97) frames per second or 59.94 fields per second, and PAL, which uses 25 frames per second or 50 fields per second.
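The NTSC rate of 30/1.001 is exactly 30000/1001 frames per second; a quick check with exact fractions (illustrative only):

```python
from fractions import Fraction

# Exact NTSC and PAL frame rates and the corresponding frame timings.
ntsc = Fraction(30000, 1001)     # 30/1.001 frames per second
pal = Fraction(25, 1)

print(float(ntsc))               # 29.97002997... frames per second
print(float(2 * ntsc))           # ~59.94 fields per second (interlaced NTSC)
print(float(1000 / ntsc))        # ~33.37 ms per NTSC frame
print(float(1000 / pal))         # 40.0 ms per PAL frame
```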
Digital video cameras come in two different image capture formats: interlaced and deinterlaced/progressive scan.
Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, then the even-numbered lines are scanned, then the odd-numbered lines are scanned again, and so on. One set of odd or even lines is referred to as a "field", and a consecutive pairing of two fields of opposite parity is called a frame. Deinterlaced cameras record each frame as distinct, with all scan lines captured at the same moment in time. Thus, interlaced video samples the scene motion twice as often as progressive video does for the same number of frames per second. Progressive-scan camcorders generally produce a slightly sharper image. However, motion may not be as smooth as with interlaced video, which uses 50 or 59.94 fields per second, particularly if the camcorders employ the 24 frames per second standard of film.
Digital video can be copied with no degradation in quality: no matter how many generations of a digital source are copied, it will still be as clear as the original first-generation digital footage. However, a change in parameters such as frame size, or a change of the digital format, can decrease the quality of the video due to the new calculations that have to be made. Digital video can be manipulated and edited into any order or sequence on an NLE, or non-linear editing workstation, a computer-based device intended to edit video and audio. Increasingly, videos are edited on readily available, increasingly affordable consumer-grade computer hardware and software. However, such editing systems require ample disk space for video footage; the many video formats and parameters involved make it impossible to give a single figure for how much storage a given number of minutes of footage requires.
Digital video has a significantly lower cost than 35 mm film: the tape stock itself is very inexpensive, and digital video allows footage to be viewed on location without the expensive chemical processing required by film. Physical delivery of tapes for broadcast is also no longer necessary. Digital television (including higher-quality HDTV) started to spread in most developed countries in the early 2000s. Digital video is also used in modern mobile phones and video conferencing systems, and for Internet distribution of media, including streaming video and peer-to-peer movie distribution. Even within Europe, however, many TV stations do not broadcast in HD, due to restricted budgets for new equipment capable of processing HD.
Many types of video compression exist for serving digital video over the internet and on optical discs. The file sizes of digital video used for professional editing are generally not practical for these purposes, and the video requires further compression with codecs such as Sorenson, H.264 and, more recently, Apple ProRes, especially for HD. Probably the most widely used formats for delivering video over the internet are MPEG-4, QuickTime, Flash and Windows Media, while MPEG-2 is used almost exclusively for DVDs, providing an exceptional image in minimal size but requiring a high level of CPU power to decompress.
The highest resolution demonstrated for digital video generation is 35 megapixels (8192 x 4320). The highest speed is attained in industrial and scientific high-speed cameras, which are capable of filming 1024x1024 video at up to 1 million frames per second for brief periods of recording.
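Plugging such a camera into the bit-rate formula above shows why recording periods must be brief. The bit depth is not stated in the source, so the 8 bits per pixel below is only an assumption for illustration:

```python
# Raw data rate of a 1024x1024 camera at 1,000,000 frames per second,
# assuming 8 bits per pixel (an assumption; the source gives no bit depth).
W, H, CD, FPS = 1024, 1024, 8, 1_000_000

bit_rate = W * H * CD * FPS          # BR = W * H * CD * FPS, in bit/s
print(bit_rate / 1e12, "Tbit/s")     # ~8.39 Tbit/s
print(bit_rate / 8 / 1e9, "GB/s")    # ~1049 GB/s, hence only brief recording bursts
```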
Poster frame
A poster frame or preview frame is a selected frame of the video used as a thumbnail.
Interfaces and cables
Many interfaces have been designed specifically to handle the requirements of uncompressed digital video (at roughly 400 Mbit/s):
- Serial Digital Interface
- FireWire
- High-Definition Multimedia Interface
- Digital Visual Interface
- Unified Display Interface
- DisplayPort
- USB
- Digital component video
The following interface has been designed for carrying MPEG-Transport compressed video:
- DVB-ASI
Compressed video is also carried using UDP-IP over Ethernet. Two approaches exist for this:
- Using RTP as a wrapper for video packets
- Placing 1-7 MPEG transport packets directly in the UDP packet (see the sketch below)
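As a rough sketch of the second approach (raw UDP without RTP): MPEG transport stream packets are a fixed 188 bytes, so up to 7 of them (1316 bytes) fit within a typical 1500-byte Ethernet MTU. The multicast address and port below are placeholders, not values from the source:

```python
import socket

TS_PACKET_SIZE = 188        # MPEG transport stream packets are always 188 bytes
PACKETS_PER_DATAGRAM = 7    # 7 * 188 = 1316 bytes, fits a standard 1500-byte MTU

def send_ts_over_udp(ts_data, addr=("239.0.0.1", 1234)):
    """Send an MPEG transport stream over plain UDP, 1-7 TS packets per datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunk = TS_PACKET_SIZE * PACKETS_PER_DATAGRAM
    for offset in range(0, len(ts_data), chunk):
        sock.sendto(ts_data[offset:offset + chunk], addr)
    sock.close()

# Usage sketch: three dummy TS packets (0x47 is the TS sync byte).
dummy_stream = (b"\x47" + b"\x00" * 187) * 3
send_ts_over_udp(dummy_stream)
```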
Encoding
All current formats, which are listed below, are PCM-based.
- CCIR 601, used for broadcast stations
- MPEG-4, good for online distribution of large videos and video recorded to flash memory
- MPEG-2, used for DVDs, Super-VCDs, and many broadcast television formats
- MPEG-1, used for video CDs
- H.261
- H.263
- H.264, also known as MPEG-4 Part 10 or AVC, used for Blu-ray Discs and some broadcast television formats
- Theora, used for video on Wikipedia
Tapes
- Betacam SX, Betacam IMX, Digital Betacam, or DigiBeta: commercial video systems by Sony, based on original Betamax technology
- HDCAM: introduced by Sony as a high-definition alternative to DigiBeta
- D1, D2, D3, D5, D9 (also known as Digital-S): various SMPTE commercial digital video standards
- DV, MiniDV: used in most of today's videotape-based consumer camcorders; designed for high quality and easy editing; can also record high-definition data (HDV) in MPEG-2 format
- DVCAM, DVCPRO: used in professional broadcast operations; similar to DV but generally considered more robust; though DV-compatible, these formats have better audio handling
- DVCPRO50, DVCPROHD: support higher bandwidths as compared to Panasonic's DVCPRO
- Digital8: DV-format data recorded on Hi8-compatible cassettes; largely a consumer format
- MicroMV: MPEG-2-format data recorded on a very small, matchbook-sized cassette; obsolete
- D-VHS: MPEG-2-format data recorded on a tape similar to S-VHS