JPEG (often stored in the JFIF file format) is a lossy compression standard for images. It was created by the Joint Photographic Experts Group (JPEG), from which it takes its name. Because of the small files it produces, this compression method is used mainly on websites (like GIF) and in digital cameras: at high resolutions an uncompressed image can occupy up to 100 MB, while the same image as a JPEG takes about 3 MB. The file extensions used for JPEG content are .jpg, .jpeg, .jif, .jpe and .jfif.
An uncompressed bitmap image can be very large, depending on its resolution. For example, an image at a resolution of 1024 × 768 occupies 2.25 MB, taking up a corresponding amount of space on a storage device such as a hard disk or a memory card, or requiring a long time to download for an Internet user with a slow connection. The size had to be reduced without much loss of quality, so a standard for image compression became necessary. Its creation was undertaken by the JPEG group. The group was formed in 1986 and the standard followed in 1992; it was officially designated ISO 10918-1:1994.
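The size figure quoted above can be checked with a quick calculation, assuming 24-bit color (3 bytes per pixel), which is the usual convention for uncompressed RGB bitmaps:

```python
# Size of an uncompressed 24-bit bitmap at 1024 x 768, as in the
# example above (3 bytes per pixel is an assumption about the format).
width, height = 1024, 768
bytes_per_pixel = 3  # 24-bit RGB

size_bytes = width * height * bytes_per_pixel
size_mb = size_bytes / (1024 * 1024)
print(f"{size_mb} MB")  # prints "2.25 MB"
```

A 100:1 ratio, as in the 100 MB → 3 MB camera example, is on the aggressive end; typical photographic JPEG compression is closer to 10:1 to 20:1 at good quality.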
Because JPEG is a lossy compression method, imperfections appear in the image. Depending on the compression level chosen (0 to 100), image quality increases or decreases along with the size of the file. There are several kinds of imperfections in a JPEG image. One is the visible separation of the image into blocks of 8 × 8 pixels, a phenomenon called "macroblocking". Others are color distortion, deformation of the edges of the image, and color bleeding (the colors are not solid and blend at the edges of the depicted object). In most cases, however, the losses are not visible when the image is displayed on a computer screen. They start to appear when the compression ratio grows large enough (more than 60%) and the image is shown through a large projector on a screen of large dimensions. JPEG images are therefore not suitable for print work on a printer or on large-format plotters, and their use in these cases is avoided.
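The 0-100 quality setting works by scaling the quantization tables: lower quality means coarser quantization steps, smaller files, and more visible block artifacts. The sketch below follows the scaling convention used by the IJG libjpeg reference implementation (the mapping itself is not part of the JPEG standard, so other encoders may differ):

```python
# How a 0-100 "quality" setting maps to quantization strength, following
# the IJG libjpeg convention (a sketch, not a full encoder).

def scale_factor(quality: int) -> int:
    """libjpeg-style scaling: quality 50 leaves the base table unchanged."""
    quality = max(1, min(100, quality))
    if quality < 50:
        return 5000 // quality
    return 200 - quality * 2

def scaled_quant(base: int, quality: int) -> int:
    """Scale one base quantization-table entry, clamped to [1, 255]."""
    q = (base * scale_factor(quality) + 50) // 100
    return max(1, min(255, q))

# 16 is the DC entry of the example luminance table in Annex K of the spec.
for q in (10, 50, 90):
    print(q, scaled_quant(16, q))  # coarser steps at low quality
```

At quality 10 the example entry grows from 16 to 80 (heavy loss, visible macroblocking); at quality 90 it shrinks to 3 (near-transparent quality, larger file).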
The JPEG2000 standard represents the latest development in image compression technology and is optimized not only for performance but also for the ability to provide scalable services and interoperability between network environments and mobile applications. With the enormous spread of the Internet and the widespread use of digital images, JPEG2000 is a powerful tool in the hands of designers and users of networked imaging applications. The standard includes a number of advanced features covering many advanced and emerging applications, fully exploiting new technologies. It successfully handles cases where current standards fail to achieve maximum quality or performance, and it opens new opportunities in markets that have not used compression technology to date. Applications and markets better served by the new standard include the Internet, color fax, printing, scanning, digital photography, mobile devices, medical imaging, digital libraries and archives, and e-commerce.
The first compression method used for television signals was Motion JPEG, a technical extension of the JPEG technique for still images. It is a lossy compression scheme implemented in several steps. For a component TV signal, the encoding method must be repeated in three parallel channels: one for the luminance and two for the color differences. As is well known, an electronic signal can be represented in two ways: either as a variation in time or by its spectral composition. For the purposes of compression, the representation as a function of spectral content is chosen, because in this form the redundant information becomes more apparent and the process is easier. Before the transformation, each frame is divided into blocks of 8 × 8 pixels. This size offers several advantages: it allows fast processing, and it yields a set of pixels that usually show satisfactory correlation with one another, that is, many neighboring pixels have the same values.
Applying lossy compression techniques directly at this stage would be very inefficient; for this reason a passage into the frequency domain is chosen, where a larger number of similar terms is produced. The Discrete Cosine Transform (DCT) is particularly effective in this domain, and in addition it can easily be implemented in the form of integrated circuits.
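The point about the frequency domain can be seen directly: a block of highly correlated (here, identical) pixels concentrates nearly all its energy in a single coefficient after the DCT, leaving the rest at zero and easy to compress. Below is a minimal, deliberately naive sketch of the 8 × 8 two-dimensional DCT-II used by JPEG and Motion JPEG:

```python
import math

N = 8

def dct2(block):
    """Naive O(N^4) 2-D DCT-II of an N x N block (clear, not fast)."""
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[128] * N for _ in range(N)]  # a perfectly uniform 8x8 block
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # DC coefficient carries all the energy: 1024
print(round(coeffs[0][1]))  # all AC coefficients are ~0
```

Real encoders use fast factored DCT algorithms rather than this quadruple loop, but the output is the same.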
MPEG (Moving Picture Experts Group) is an international committee, working under ISO/IEC, that develops international standards for the compression, decompression, processing and coding of audio and moving pictures. The standards the group has presented so far are MPEG-1, MPEG-2, MPEG-4, MPEG-7 and MPEG-21. The MPEG compression algorithm combines elements of lossy and lossless compression and shows several similarities with JPEG compression. The basic difference is that an image sequence contains redundancy not only in space but also in time. Thus the value of a particular pixel can be predicted not only from neighboring pixels belonging to the same frame (intraframe coding) but also from pixels belonging to neighboring frames (interframe coding). To reduce the temporal redundancy between frames, motion-compensated prediction is used. It is based on estimating the motion of a pixel from a previously encoded frame using motion vectors, together with prediction-error images that are transmitted to the receiver. Due to the significant spatial correlation of motion vectors, the movement of a block of neighboring pixels can be represented by a single representative motion vector.
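A toy illustration of motion-compensated prediction: for one block of the current frame, find the motion vector that minimizes the sum of absolute differences (SAD) against the previous frame. The frame contents, block size, and the ±2-pixel exhaustive search window below are made up for illustration; real encoders use larger windows and much faster search strategies.

```python
# Toy block-matching motion estimation by exhaustive SAD search.

def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between a current-frame block at
    (bx, by) and the reference-frame block shifted by (dx, dy)."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def best_vector(cur, ref, bx, by, bs, rng):
    """Exhaustive search over a +/- rng window; returns the (dx, dy)
    pointing at the best-matching block in the reference frame."""
    best = None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            if (0 <= bx + dx and bx + dx + bs <= len(ref[0])
                    and 0 <= by + dy and by + dy + bs <= len(ref)):
                cost = sad(cur, ref, bx, by, dx, dy, bs)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best[1]

# Reference frame has a bright 2x2 patch; in the current frame the patch
# has moved one pixel right and one pixel down.
ref = [[0] * 8 for _ in range(8)]
ref[2][2] = ref[2][3] = ref[3][2] = ref[3][3] = 200
cur = [[0] * 8 for _ in range(8)]
cur[3][3] = cur[3][4] = cur[4][3] = cur[4][4] = 200

print(best_vector(cur, ref, 2, 2, 4, 2))  # prints "(-1, -1)"
```

The vector (-1, -1) says the block's content came from one pixel up and to the left in the reference frame; the encoder then transmits only this vector plus the (here zero) prediction error.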
The MPEG-1 moving-picture algorithm was developed on the basis of the JPEG group's work and the H.261 standard. MPEG video sequences consist of several layers, which allow random access into the sequence as well as protection against erroneous information. The basic MPEG-1 compression technique relies on a macroblock structure, on motion compensation, and on the conditional replacement of macroblocks. In this process the sequence is divided into groups of pictures, and each picture is divided into sections (blocks); a collection of blocks forms a macroblock. The sequence consists of three different types of encoded pictures:
- Intra-coded (I-frames): encoded as discrete frames, without reference to any previous or next frame.
- Predictive-coded (P-frames): motion compensation is applied with reference to the previous frame.
- Bidirectionally-predictive-coded (B-frames): motion compensation is applied with reference to previous and next I- or P-frames.

The first frame of the sequence is encoded with the Intra method. At the encoder, the DCT is applied to each 8 × 8 block of luminance and chrominance. The output then undergoes quantization, and the resulting coefficients are transmitted to the receiver. The DC coefficient represents the average intensity of the block and is encoded with a differential DC-prediction method. The non-zero AC values, by contrast, are collected in "zig-zag" order and encoded with entropy methods. It should be noted that I-frames have the worst compression ratio of the three types, P-frames lead to a reasonably sized encoded frame, and B-frames offer the highest compression. The three types are combined into a flexible sequence adapted to the needs of the application. The compression finally achieved by this standard is about 26:1.
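The "zig-zag" order mentioned above reads the 8 × 8 coefficient block from low to high frequency, so that the trailing zeros produced by quantization cluster together and run-length/entropy coding becomes effective. A minimal sketch of how that ordering is generated:

```python
# Build the zig-zag scan order for an n x n coefficient block by walking
# the anti-diagonals and alternating direction on each one.

def zigzag_indices(n=8):
    """Return (row, col) pairs in zig-zag order for an n x n block."""
    order = []
    for s in range(2 * n - 1):                      # each anti-diagonal
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()                          # alternate direction
        order.extend(diag)
    return order

idx = zigzag_indices()
print(idx[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

The first entry (0, 0) is the DC coefficient, which is handled separately by differential prediction; the remaining 63 AC coefficients follow in order of increasing spatial frequency.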
MPEG-2 provides generic coding of moving pictures and associated audio information, developed for application in digital television. The basic image resolution follows the CCIR-601 television standard (broadcast quality), i.e. 704 × 480 pixels (NTSC) or 704 × 576 pixels (PAL), and interlaced scanning is supported. The transmission rate ranges from 3 to 10 Mbit/s. Current applications are in cable TV and direct-broadcast satellite TV, and are expected to extend to terrestrial TV. It is also used to store films on DVD (Digital Video Disc). MPEG-3 originally targeted various HDTV applications but was later incorporated into MPEG-2.
The growth of multimedia on the Internet led the MPEG group to develop this standard. MPEG-4 provides data-synchronization processes before transmission to attain the desired QoS, and allows interactive manipulation of the scene at the receiver's console. It allows different components to be combined in a multimedia application and supports the coding of objects structured in both time and space. Today there are four versions of this standard.
H.261 (Video Telephony)
A standard for video compression for transmission over low-bandwidth lines.
H.263 (video conferencing)
Based on H.261, but designed for transmission over the IP protocol.
H.264 (MPEG-4 Part 10)
A standard for high-quality streaming and video on demand. It can be used over the IP protocol (e.g. the Internet) and was designed for compressing and decompressing digital video. H.264 reduces the bandwidth required to transmit and store video, offering new opportunities to cut costs and increase efficiency. In applications that require high resolution and high frame rates (25/30 fps), as in the gaming industry, at airports and in traffic monitoring, H.264 makes a difference and offers big savings by reducing bandwidth and storage needs. H.264 is expected to be the main video standard in the coming years, as it can reduce the size of recorded video by more than 80% compared with Motion JPEG, by about 50% compared with the traditional MPEG-2 standard and by approximately 30% compared with MPEG-4 compression.