Bits, Bytes, and Multiple Bytes

The various terms used for digital measurements can be confusing. This page attempts to explain what the various terms actually mean. CKnow can’t control people’s use of those terms, however, and ignorance of the true meanings has sometimes crept into common use. With that caveat in mind, let’s start at the beginning.

Bit

The most basic piece of information that a computer understands is a bit. The term bit is a shortened form of binary digit and a bit can have only one of two values: 0 or 1. Think of a bit like a light switch; it’s either on or it’s off.

The short form of a bit is a lower-case b. To be certain your meaning is understood, you should probably use the full word bit since, in general use, people will sometimes improperly use the lower-case b for byte.

As interesting as a bit may be, humans really can’t think in terms of bits. Only computers “think” in terms of bits. So, humans need some organization to the bits in order to interact with the computer. That’s where bytes and other bit collections come in…

Byte

A byte is a common unit for groupings of bits. In general use, a byte is taken to mean a contiguous sequence of eight bits. There are other, less common meanings for a byte but for this discussion the eight bit sequence is a good one and will be used throughout.

The short form of a byte is an upper-case B.

The word byte is pronounced like the word “bite” and is thought to have come from a byte being the smallest amount of data a computer could bite at one time. Couple that with early computer scientists feeling the need to construct their own terminology and the possible confusion between bit and bite and you end up with the spelling: byte.

To take the analogy of eating a step further, some early computers worked in sequences of four bits instead of eight for some operations. This, as you might have guessed, led to the term nibble for these four-bit sequences. (You might also see nybble but since there is no other term to confuse nibble with, it is the spelling generally used.)

In some circumstances, instead of byte you might see the word octet. This term is often used in standards and in communications and networking settings where, historically, the number of bits representing a single piece of information could differ from eight; an octet always means exactly eight bits. If you look at MIME types, you will probably note its use in the type application/octet-stream, which basically tells the communication system to transfer the data stream without any modification during transit.

In terms of a conversion for human use, consider what eight bits can represent. Starting with zero (counting by a computer generally starts at zero instead of one) you get a sequence like…

  • 00000000 = decimal 0
  • 00000001 = decimal 1
  • 00000010 = decimal 2
  • 00000011 = decimal 3
  • 11111111 = decimal 255

Each byte can therefore represent up to 256 different things. You can map the integers 0 through 255 onto one byte, use one byte to select any of 256 different ASCII characters, use one byte to represent the hexadecimal numbers 00 through FF, or define any other mapping you like. This ability to map values allows humans to better interact with computers without having to think in binary.
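
If it helps to see that mapping in action, here is a minimal Python sketch (the variable names are mine, purely for illustration) that prints the byte values listed above along with their decimal and hexadecimal equivalents:

    # Minimal sketch: enumerate a few of the values a single 8-bit byte can hold.
    for value in (0, 1, 2, 3, 255):
        bits = format(value, "08b")          # e.g. 0 -> "00000000", 255 -> "11111111"
        print(f"{bits} = decimal {value:3d} = hex {value:02X}")

    # One byte covers 2**8 = 256 distinct values (0 through 255), which is why a
    # single byte can also index any of 256 ASCII/code-page characters or any
    # two-digit hexadecimal number 00 through FF.
    print("distinct values per byte:", 2 ** 8)
    print("character for byte value 65:", chr(65))   # 'A' in ASCII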

So far, so good. However, things get a bit more complicated when talking about groupings of bytes in larger quantities. The problem is that terminology from the decimal numbering system has made its way into binary numbering as if it were equivalent but, in reality, the two are not. As one example, the prefix “kilo” is generally used to denote 1,000 of something in the decimal numbering system. In binary usage, however, “kilo” means 2^10 (two to the tenth power), which turns out to be 1,024 in decimal. This confusion of terms continues up the numbering scale for all the prefixes.

In 1998, the International Electrotechnical Commission (IEC) attempted to resolve this problem by introducing new prefixes for binary multiples. The names were taken from the decimal prefixes with the addition of “bi” at the end to stand for “binary” and clearly indicate that a multiple referred to the 1,024 factor and not the 1,000 factor (the “bi” is pronounced as the word “bee”). The prefixes introduced include: kibi-, mebi-, gibi-, and others. More of a similar nature were added in 2005 as the need arose. CKnow will show both below when discussing multiples.

But, even that didn’t completely work. Certain units are generally understood to be decimal despite being used when talking about computing.

  • Hertz (Hz) measures clock rate so a 2GHz CPU clock performs 2,000,000,000 clock ticks per second.
  • Bits per second measures a data rate so a 128 kbit/s MP3 stream will send 128,000 bits every second which works out to 16,000 bytes per second when you divide the 128,000 by 8 (assuming an 8-bit byte and no overhead).
  • Hard disk drive makers typically state capacity in decimal units. The numbers (for retail purposes) are larger and so the disk drive sounds bigger. The operating system, however, reports the binary size of the disk, which is why when you format a 100GB hard drive you only get a bit over 93GB of free space reported (the arithmetic is spelled out in the sketch after this list). There are good engineering reasons why this convention is used but it doesn’t hurt marketing either 🙂 (although there have been some legal battles over this as well).
  • Floppy disk sizes (and some other newer devices) used a hybrid of the two systems to report disk size. Disks are accessed using sectors which are measured in binary sizes (from 512 to 2048 bytes depending on the device). Thus, for the most basic measure the “kilo” means 1,024 bytes. However, the sizes above that are decimal multiples rather than binary ones, so a megabyte for a disk of this type really means a thousand 1,024-byte kilobytes. This means that a 1.44MB floppy diskette holds 1.44x1000x1024 bytes!
  • To further confuse things, CDs are measured using binary units but DVDs are measured using decimal units! A 4.7GB DVD really only has a binary capacity of about 4.38 GiB (binary gigabytes).
  • Bus bandwidth is typically a decimal measure. Again, since the bus is clock-based, decimal units are used instead of binary units even though binary bytes are being pumped over the bus.
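
The bullets above all come down to the same simple arithmetic. Here is a short Python sketch of those calculations, using only the figures quoted in the list (nothing below is measured from real hardware):

    # 128 kbit/s MP3 stream: decimal kilo, 8-bit bytes, no overhead assumed.
    bits_per_second = 128 * 1000
    print(bits_per_second // 8, "bytes per second")                 # 16000

    # A "100 GB" (decimal) hard drive as reported by an OS that counts in binary GB.
    decimal_bytes = 100 * 10**9
    print(round(decimal_bytes / 2**30, 2), "binary GB reported")    # about 93.13

    # A 1.44 MB floppy: hybrid unit, 1.44 x 1000 decimal kilo of 1024-byte kilobytes.
    print(1440 * 1024, "bytes on a 1.44 MB floppy")                 # 1474560

    # A "4.7 GB" DVD expressed in binary gigabytes.
    print(round(4.7 * 10**9 / 2**30, 2), "binary GB on a 4.7 GB DVD")  # about 4.38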

With all this in mind, here are the various prefixes in use today and what they refer to in actual numbers. Each multiple is shown first as its decimal value and next as its binary value.

  • Kilo (K) = 1,000 = 10^3 decimal
    Kibi (Ki) = 1,024 = 2^10 binary
  • Mega (M) = 1,000,000 = 10^6 decimal
    Mebi (Mi) = 1,048,576 = 2^20 binary
  • Giga (G) = 1,000,000,000 = 10^9 decimal
    Gibi (Gi) = 1,073,741,824 = 2^30 binary
  • Tera (T) = 1,000,000,000,000 = 10^12 decimal
    Tebi (Ti) = 1,099,511,627,776 = 2^40 binary
  • Peta (P) = 1,000,000,000,000,000 = 10^15 decimal
    Pebi (Pi) = 1,125,899,906,842,624 = 2^50 binary
  • Exa (E) = 1,000,000,000,000,000,000 = 10^18 decimal
    Exbi (Ei) = 1,152,921,504,606,846,976 = 2^60 binary
  • Zetta (Z) = 1,000,000,000,000,000,000,000 = 10^21 decimal
    Zebi (Zi) = 1,180,591,620,717,411,303,424 = 2^70 binary
  • Yotta (Y) = 1,000,000,000,000,000,000,000,000 = 10^24 decimal
    Yobi (Yi) = 1,208,925,819,614,629,174,706,176 = 2^80 binary
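
To see how quickly the decimal and binary interpretations drift apart, here is a small Python sketch; the dual_format helper is my own illustrative name, not part of any standard library:

    # Format a byte count with both decimal (kilo, mega, ...) and binary
    # (kibi, mebi, ...) prefixes so the growing gap between them is visible.
    DECIMAL = [("KB", 10**3), ("MB", 10**6), ("GB", 10**9), ("TB", 10**12)]
    BINARY = [("KiB", 2**10), ("MiB", 2**20), ("GiB", 2**30), ("TiB", 2**40)]

    def dual_format(num_bytes):
        lines = []
        for (dec_sym, dec_val), (bin_sym, bin_val) in zip(DECIMAL, BINARY):
            lines.append(f"{num_bytes / dec_val:>16,.2f} {dec_sym}"
                         f" = {num_bytes / bin_val:>16,.2f} {bin_sym}")
        return "\n".join(lines)

    # A "1 TB" (decimal) drive shrinks to roughly 0.91 TiB in binary units.
    print(dual_format(10**12))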

Now we’ll list each in turn to discuss any particular meanings relevant to that specific prefix as it applies to various uses.

Kilo- and Kibi-

  • Kilobit. A unit of information abbreviated kbit or kb. Use of this term can be confusing as it was introduced early and therefore takes on either the decimal or binary meaning in various circumstances more so than most of the other prefixes. Fortunately, at this level the difference is small. A decimal kilobit equals 125 8-bit bytes while a binary kilobit equals 128 8-bit bytes. In telecommunications the decimal meaning is almost always used. When precision is necessary, kibibit should be used.
  • Kibibit. A unit of information or storage abbreviated Kibit or Kib (note that when kibi is used it is always capitalized unlike kilo). One kibibit is always equal to 1,024 bits or 128 8-bit bytes.
  • Kilobyte. A unit of information or storage abbreviated KB, kB, Kbyte, kbyte, or, informally, K or k. Kilo was chosen early because, while computers use binary, the tenth power of 2 (2^10) is 1,024 and is close to the 1,000 that kilo meant. This informal adoption has unfortunately confused things ever since, as seen from the discussion above. Be careful when encountering kilobyte and know the specific reference.
  • Kibibyte. A unit of information or storage abbreviated KiB (never with a lower case “k”). One KiB will always mean exactly 1,024 (2^10) bytes. When precision is needed, use KiB instead of KB if you are discussing binary measures.

Mega- and Mebi-

  • Megabit. A unit of information or storage abbreviated Mbit or Mb. Again, this term takes on a dual meaning depending on the context. However, in the case of the megabit the term is almost always used in a communication setting and therefore most typically takes on the decimal meaning where 1 megabit = 10^6 = 1,000,000 bits (one million).
  • Mebibit. A unit of information or storage abbreviated Mibit or Mib. As with all the other binary bit definitions, 1 mebibit = 2^20 bits = 1,048,576 bits = 1,024 kibibits.
  • Megabyte. A unit of information or storage abbreviated MB (never Mb which would mean bits and not bytes — see just above) or meg. Again, there are confusing interpretations for this term as they have developed over time and for different contexts (also see the more general discussion above).
    • 1,000,000 bytes (10^6) when used in a networking context, clocks, or performance measures.
    • 1,048,576 bytes (2^20) when used discussing memory and file size.
    • 1,024,000 bytes (1,024×1,000) when used for floppy disk sizes.
  • Mebibyte. A unit of information or storage abbreviated MiB. This is the specific measure of the binary representation of 1,048,576 bytes = 1,024 kibibytes = 2^20 bytes. When precision is demanded, this is the term to use. Note: Mebibyte is often misspelled as “Mibibyte.”

Giga- and Gibi-

  • Gigabit. A unit of information or storage abbreviated Gbit or Gb. One gigabit most typically equals 10^9 or 1,000,000,000 bits (one billion or one milliard or thousand million in long scale measure*).
  • Gibibit. A unit of information or storage abbreviated Gibit or Gib. This is the absolute binary measure equaling 1,073,741,824 (2^30) bits. Use it when precision is needed.
  • Gigabyte. A unit of information or storage abbreviated GB (never Gb) or gig when writing informally or speaking. Again, there are confusing interpretations for this term as they have developed over time and for different contexts (also see the more general discussion above).
    • 1,000,000,000 bytes (10^9) when used in a networking context, clocks, or performance measures. Disk drive manufacturers also typically use this form as it results in an apparent larger size for the disk which the operating system then formats and reports out in binary, or lower numbers.
    • 1,073,741,824 (2^30) bytes. This definition is used for memory, file and formatted disk size, and other contexts where binary notation fits better.
  • Gibibyte. A unit of information or storage abbreviated GiB. This is the specific measure of the binary representation of 1,073,741,824 (2^30) bytes. When precision is demanded, this is the term to use.

Tera- and Tebi-

  • Terabit. A unit of information or storage abbreviated Tbit or Tb. One terabit most typically equals 10^12 or 1,000,000,000,000 bits (one trillion or one billion in long scale measure*).
  • Tebibit. A unit of information or storage abbreviated Tibit or Tib. This is the absolute binary measure equaling 1,099,511,627,776 (2^40) bits. Use it when precision is needed.
  • Terabyte. A unit of information or storage abbreviated TB. Again, there are/will be confusing interpretations for this term for different contexts (also see the more general discussion above). Note: Tera derives from the Greek word “teras” which means “monster.”
    • 1,000,000,000,000 bytes (10^12) when used in a networking context, clocks, or performance measures.
    • 1,099,511,627,776 (2^40) bytes. This definition is used for memory, file and formatted disk size, and other contexts where binary notation fits better.
  • Tebibyte. A unit of information or storage abbreviated TiB. This is the specific measure of the binary representation of 1,099,511,627,776 (2^40) bytes. When precision is demanded, this is the term to use.

Peta- and Pebi-

  • Petabit. A unit of information or storage abbreviated Pbit or Pb. One petabit most typically equals 10^15 or 1,000,000,000,000,000 bits (one quadrillion or one billiard in long scale measure*).
  • Pebibit. A unit of information or storage abbreviated Pibit or Pib. This is the absolute binary measure equaling 1,125,899,906,842,624 (2^50) bits. Use it when precision is needed.
  • Petabyte. A unit of information or storage abbreviated PB. Again, there are/will be confusing interpretations for this term for different contexts (also see the more general discussion above).
    • 1,000,000,000,000,000 bytes (10^15) when used in a networking context, clocks, or performance measures.
    • 1,125,899,906,842,624 (2^50) bytes. This definition is used for memory, file and formatted disk size, and other contexts where binary notation fits better.
  • Pebibyte. A unit of information or storage abbreviated PiB. This is the specific measure of the binary representation of 1,125,899,906,842,624 (2^50) bytes. When precision is demanded, this is the term to use.

Exa- and Exbi-

  • Exabit. A unit of information or storage abbreviated Ebit or Eb. One exabit most typically equals 10^18 or 1,000,000,000,000,000,000 bits (one quintillion or one trillion in long scale measure*).
  • Exbibit. A unit of information or storage abbreviated Eibit or Eib. This is the absolute binary measure equaling 1,152,921,504,606,846,976 (2^60) bits. Use it when precision is needed.
  • Exabyte. A unit of information or storage abbreviated EB. Again, there are/will be confusing interpretations for this term for different contexts (also see the more general discussion above).
    • 1,000,000,000,000,000,000 bytes (10^18) when used in a networking context, clocks, or performance measures.
    • 1,152,921,504,606,846,976 (2^60) bytes. This definition is used for memory, file and formatted disk size, and other contexts where binary notation fits better.
  • Exbibyte. A unit of information or storage abbreviated EiB. This is the specific measure of the binary representation of 1,152,921,504,606,846,976 (2^60) bytes. When precision is demanded, this is the term to use.

Zetta- and Zebi-

  • Zettabit. A unit of information or storage abbreviated Zbit or Zb. One zettabit most typically equals 10^21 or 1,000,000,000,000,000,000,000 bits (one sextillion or one trilliard in long scale measure*). Note: Zettabit is not yet used; though it’s only a matter of time before it is.
  • Zebibit. A unit of information or storage abbreviated Zibit or Zib. This is the absolute binary measure equaling 1,180,591,620,717,411,303,424 (2^70) bits. Use it when precision is needed. Note: Zebibit is not yet used; though it’s only a matter of time before it is.
  • Zettabyte. A unit of information or storage abbreviated ZB. Again, there are/will be confusing interpretations for this term for different contexts (also see the more general discussion above).
    • 1,000,000,000,000,000,000,000 bytes (10^21) when used in a networking context, clocks, or performance measures.
    • 1,180,591,620,717,411,303,424 (2^70) bytes. This definition is used for memory, file and formatted disk size, and other contexts where binary notation fits better.
  • Zebibyte. A unit of information or storage abbreviated ZiB. This is the specific measure of the binary representation of 1,180,591,620,717,411,303,424 (2^70) bytes. When precision is demanded, this is the term to use.

Yotta- and Yobi-

  • Yottabit. A unit of information or storage abbreviated Ybit or Yb. One yottabit most typically equals 10^24 or 1,000,000,000,000,000,000,000,000 bits (one septillion or one quadrillion in long scale measure*). Note: Yottabit is not yet used; though it’s only a matter of time before it is.
  • Yobibit. A unit of information or storage abbreviated Yibit or Yib. This is the absolute binary measure equaling 1,208,925,819,614,629,174,706,176 (2^80) bits. Use it when precision is needed. Note: Yobibit is not yet used; though it’s only a matter of time before it is.
  • Yottabyte. A unit of information or storage abbreviated YB. Again, there are/will be confusing interpretations for this term for different contexts (also see the more general discussion above).
    • 1,000,000,000,000,000,000,000,000 bytes (10^24) when used in a networking context, clocks, or performance measures.
    • 1,208,925,819,614,629,174,706,176 (2^80) bytes. This definition is used for memory, file and formatted disk size, and other contexts where binary notation fits better.
  • Yobibyte. A unit of information or storage abbreviated YiB. This is the specific measure of the binary representation of 1,208,925,819,614,629,174,706,176 (2^80) bytes. When precision is demanded, this is the term to use.

As a matter of trivia, if you think teleportation is coming soon, consider what it would take to describe an average human. The average makeup of a 75 kg (165 pound) person works out to about 11,800 moles of atoms. If you assign 100 bytes to store the location, type, and state of every atom in that average body, you would need about 600,000 yottabytes. Just try sending that over your local area network! 🙂
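
For what it is worth, the back-of-the-envelope arithmetic behind that figure can be reconstructed roughly as follows (a Python sketch; the 100-bytes-per-atom figure is the assumption stated above, and the rest is just Avogadro’s number):

    AVOGADRO = 6.022e23          # atoms per mole
    moles_of_atoms = 11_800      # quoted figure for a 75 kg person
    bytes_per_atom = 100         # storage assumption from the paragraph above

    total_bytes = moles_of_atoms * AVOGADRO * bytes_per_atom
    print(f"{total_bytes / 10**24:,.0f} decimal yottabytes")   # roughly 710,000
    print(f"{total_bytes / 2**80:,.0f} binary yobibytes")      # roughly 590,000, i.e. the
                                                               # "about 600,000" quoted above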

*Note: The description of a particular number of digits (e.g., thousand for 1,000) comes in two forms; long scale and short scale. In the long scale a billion means a million millions and in the short scale a billion means a thousand millions.

Video Display Standards

Video display standards have evolved from early monochrome to today’s high resolution color. The evolution of these standards is summarized here.

Initial video standards were developed by IBM, which was at the time one of the only significant players in the PC marketplace. As IBM’s influence over the hardware waned (or got diluted, whichever viewpoint you care to take) the Video Electronics Standards Association (VESA) was formed to define new standards for computer video displays.

But, it all started with the…

Monochrome Display Adapter (MDA)

Introduced in 1981, MDA was a pure text display showing 80 characters per line and 25 lines on the screen. Typically, the display was green text on a black background. Individual characters were 9 pixels wide by 14 pixels high (7×11 for the character, the rest for spacing). If you multiply that out you get a resolution of 720×350, but since the individual pixels could not be addressed there were no graphics. Still, some programs managed some interesting bar charts and line art using various ASCII characters, particularly those above 128 used by code page 437.
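
The 720×350 figure follows directly from the character grid, as this small Python sketch spells out (it is just the multiplication, not anything read back from MDA hardware):

    # MDA text grid: 80 columns by 25 rows, each character cell 9 x 14 pixels.
    cols, rows = 80, 25
    cell_w, cell_h = 9, 14

    print(cols * cell_w, "x", rows * cell_h)     # 720 x 350 effective resolution

    # Each screen cell needs a character byte plus an attribute byte, which is
    # why the 4 KB of video memory mentioned below is enough for one text page.
    print("bytes for one text page:", cols * rows * 2)   # 4000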

The IBM MDA card had 4 KB of video memory. Display attributes included: invisible, underline, normal, bright/bold, reverse video, and blinking. Some attributes could be combined. IBM’s card also contained a parallel printer port, giving it the full name: Monochrome Display and Printer Adapter.

The monitor’s refresh rate was 50 Hz and users tended to complain about eyestrain after long days in front of the monitor.

Hercules Graphics Card

Noting the 720×350 resolution of the MDA display, a company called Hercules Computer Technology (founded by Van Suwannukul), in 1982, developed an MDA-compatible video card that could display MDA text as well as graphics by having routines to individually address each pixel in the display. Because the screen height had to be a multiple of four, the full resolution of the Hercules Graphics Card was 720×348.

The Hercules card addressed two graphic pages, one at B0000h and the other at B8000h. When the second page was disabled there was no conflict with other adapters and the Hercules card could run in a dual-monitor mode with CGA or other graphics cards on the same computer. Hercules even made a CGA-compatible card called the Hercules Color Card and later the Hercules Graphics Card Plus (June 1986) followed by the Hercules InColor Card (April 1987) which had capabilities similar to EGA cards.

The graphics caught on and not only did Hercules cards multiply rapidly but clones of them started to appear; the ultimate homage to success. Most major software included a Hercules driver.

However, despite its attempts to keep up, Hercules started to fail as a company and was acquired by ELSA in August 1998 for $8.5 million. ELSA then declared bankruptcy in 1999 and the Hercules brand was bought by Guillemot Corporation, a French company, for $1.5 million. In 2004 Guillemot stopped producing graphics cards but Hercules, the name, lives on in some of their software and other products.

But, color was still the ultimate goal and Hercules was pushed out by other IBM specifications…

Color Graphics Adapter (CGA)

IBM came back to the fore when color started to appear in computer displays. The CGA standard, introduced in 1981 and primitive by today’s standards, was still color, even if only 16 colors. Because the first PCs were for business, color did not catch on at first and the MDA monochrome standard was more often used. As prices came down and clones of the IBM PC were introduced, CGA became more of a standard.

The CGA card came with 16 KB of video memory and supported several different modes:

  • Text mode which included 80×25 text (like the MDA system) in 16 colors. The resolution, however, was lower as each character was made up of 8×8 pixels instead of the MDA’s 9×14 pixels. A 40×25 text mode was also supported in 16 colors. In both, the foreground and background colors could be changed for each character.
  • Monochrome graphics mode which displayed graphics at 640×200 pixels. This was lower than the Hercules card but seemed to serve the purpose for an initial release; it was quickly superseded by the EGA standard.
  • Color graphics mode which came in two flavors: a 320×200 pixel mode with four colors and a lesser-used resolution of 160×200 in 16 colors. The four-color mode only had two official palettes to choose from:
    • Magenta, cyan, white and background color (black by default).
    • Red, green, brown/yellow and background color (black by default).

The 16-color graphic mode used a composite color mode instead of the 16 colors of the CGA text above. Because the color technique was not supported in the BIOS there was little adoption of that mode except by some games.

The CGA card was built around the Motorola MC6845 display controller. Red, green, and blue were created by the monitor’s three electron beams, with black being the absence of any beam. The other colors were mixes of two different colors and white used all three color beams. An “intensifier” bit gave a brighter version of the basic 8 colors for a total of 16. There was one exception to this. In the normal RGB model color #6 should be a dark yellow (#AAAA00); however, IBM changed the monitor circuitry to detect it and lower its green component to more closely match a brown (#AA5500) color. Other monitor makers mimicked this, which is why the intense version of #6, brown, turned out to be a bright yellow as the intense version was not so modified. There is no clear reason expressed why IBM did this but it’s speculated they wanted to match 3270 mainframe colors. So, the colors appeared as listed below (a small sketch that generates this palette follows the list)…

  • Color 0 – Black – #000000 and the intense version, color 8 – Dark Grey – #555555
  • Color 1 – Blue – #0000AA and the intense version, color 9 – Bright Blue – #5555FF
  • Color 2 – Green – #00AA00 and the intense version, color 10 – Bright Green – #55FF55
  • Color 3 – Cyan – #00AAAA and the intense version, color 11 – Bright Cyan – #55FFFF
  • Color 4 – Red – #AA0000 and the intense version, color 12 – Bright Red – #FF5555
  • Color 5 – Magenta – #AA00AA and the intense version, color 13 – Bright Magenta – #FF55FF
  • Color 6 – Brown – #AA5500 and the intense version, color 14 – Bright Yellow – #FFFF55
    Color 6 in some clone monitors –Yellow – #AAAA00
  • Color 7 – Light Grey – #AAAAAA and the intense version, color 15 – Bright White – #FFFFFF
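
Here is the palette sketch promised above: a plain Python illustration of the RGBI scheme just described (the function name is my own), including the special-case handling of color 6:

    # CGA/RGBI palette: 4 bits per color (Intensity, Red, Green, Blue).
    # Each set component contributes AA hex; the intensity bit adds 55 hex to
    # every channel. Color 6 is lowered to brown, as the monitor circuitry did.
    def rgbi_to_rgb(index):
        i = (index >> 3) & 1
        r = (index >> 2) & 1
        g = (index >> 1) & 1
        b = index & 1
        red, green, blue = (0xAA * r + 0x55 * i,
                            0xAA * g + 0x55 * i,
                            0xAA * b + 0x55 * i)
        if index == 6:                 # dark yellow pulled down to brown
            green = 0x55
        return red, green, blue

    for index in range(16):
        red, green, blue = rgbi_to_rgb(index)
        print(f"color {index:2d}: #{red:02X}{green:02X}{blue:02X}")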

There were several tweaks to the CGA text and graphics systems which resulted in different default background colors, different colored borders, and other tweaks which gave the appearance of the CGA system having more than the graphic modes above; but, these were all tweaks and not changes to the basic system itself.

Refresh rate for CGA monitors was increased to 60 Hz as a result of eyestrain complaints from the MDA 50 Hz rate. (The higher the refresh rate the less likely pixels on the screen will flicker as the phosphor is refreshed at a faster rate.)

But, the low resolution of CGA begged for higher resolutions. To fill those demands IBM developed EGA…

Enhanced Graphics Adapter (EGA)

The Enhanced Graphics Adapter was introduced by IBM in 1984 as the primary display for the new PC-AT Intel 286-based computer. EGA increased resolution to 640×350 pixels in 16 colors. The card itself contained 16 KB of ROM to extend the system BIOS to add graphics functions. The card started with 64 KB of video memory but later cards and clone cards came with 256KB of video memory to allow full implementation of all EGA modes which included…

  • High-resolution mode with 640×350 pixel resolution. On any given screen display a total of 16 colors could be displayed; however, these could be selected from a palette of 64 colors.
  • CGA mode included full 16-color versions of the CGA 640×200 and 320×200 graphics modes. The original CGA modes were present in the card but EGA is not 100% hardware-compatible with CGA.
  • MDA could be supported to some degree. By setting switches on the card, an MDA monitor could be driven by an EGA card; however, only the 640×350 display could be supported.

Some EGA clones extended the EGA features to include 640×400, 640×480, and 720×540 along with hardware detection of the attached monitor and a special 400-line interlace mode to use with older CGA monitors. None of these became standard however.

EGA’s life was fairly short as VGA was introduced by IBM in April of 1987 and quickly took over the market. In the meantime, IBM had a brief go with a specialized graphics system called PGC and the 8514 Display Standard…

Professional Graphics Controller (PGC)

The Professional Graphics Controller (PGC) enjoyed a short lifetime between 1984 and 1987. It offered the “high” resolution of 640×480 pixels with 256 colors out of a palette of 4,096 colors. Refresh rate was 60 Hz. The card had 320 KB of video RAM and an on-board microprocessor. The card had a CGA mode as well but this could be turned off in order to maintain a CGA card in the same computer if necessary.

Designed for high-end use, the controller was composed of three(!) adapter cards (two cards, each taking a single adapter slot, and a third card sandwiched between and attached to the other two). All were physically connected together with cables.

The price of several thousand dollars and the complicated hardware brought the PGC to a quick end even though it was a very good graphics card for its day.

8514 Display Standard

IBM introduced the 8514 Display Standard in 1987; about the same time as VGA. The companion monitor (model 8514) was also sold by IBM. The pair (8514/A Display Adapter and 8514 monitor) comprise the 8514 Display Standard and is generally regarded as the first mass-market video card accelerator. It was certainly not the first in the industry, but others before it were largely designed for workstations. Workstation accelerators were programmable; the 8514 was not; it was a fixed-function accelerator and could therefore be sold at a much lower price for mass-market use. The card typically had 2D-drawing functions like line-draw, color-fill, and BITBLT offloaded to it while the CPU worked on other tasks.

The basic modes the 8514 was designed to operate at were…

  • 1024×768 pixels at 256 colors and 43.5 Hz interlaced.
  • 640×480 pixels at 256 colors and 60 Hz non-interlaced, plus the other regular VGA modes.

The 8514/A card itself was only responsible for the 1024×768 graphics mode. All other modes were created using the VGA hardware on the computer’s motherboard and then the video was fed through the adapter card to the monitor, which was connected to the adapter card. The 8514 did not support an 800×600 pixel mode even though you might think it could.

Note the difference between interlaced and non-interlaced display and the frequency above. While the 8514 displayed a much higher resolution screen than most other mass-market solutions of the day, the use of an interlaced display was unusual.

8514 was replaced by IBM’s XGA standard which we’ll talk about later on this page. For now, we’ll get back in sequence with VGA…

Video Graphics Array (VGA)

With VGA you see a change in the terminology from adapter to array. This was a result of the fact that VGA graphics started to come on the motherboard as a single chip and not as plug-in adapter boards that took up an expansion slot in the computer. While since replaced with other standards for general use, VGA’s 640×480 remains a sort of lowest common denominator for all graphics cards. Indeed, even the Windows splash screen logo comes in at 640×480 because it shows before the graphics drivers for higher resolution are loaded into the system.

VGA supports both graphics and text modes of operation and can be used to emulate most (but not all) of the EGA, CGA, and MDA modes of operation. The most common VGA graphics modes include:

  • 640×480 in 16 colors. This is a planar mode with four bit planes. When speaking about VGA, this is the mode most often thought of and is often what is meant when some say “VGA.”
  • 640×350 in 16 colors.
  • 320×200 in 16 colors.
  • 320×200 in 256 colors (Mode 13h). This is a packed-pixel mode.

The VGA specification dictated 256KB of video RAM, 16- and 256-color modes, a 262,144 color palette (six bits for each of red, green, and blue), a selectable master clock (25 MHz or 28 MHz), up to 720 horizontal pixels, up to 480 lines, hardware smooth scrolling, split screen support, soft fonts, and more.

Another VGA programming trick essentially created another graphics mode: Mode X. By reprogramming the VGA to treat its 256 KB of video RAM as four separate planes, each still using 256 colors, Mode X transferred some of the video memory operations to the video hardware instead of keeping them with the CPU. This sped up the display for things like games and was most often seen at 320×240 pixel resolution as that produced square pixels in a 4:3 aspect ratio. Mode X also allowed double buffering; a method of keeping multiple video pages in memory in order to quickly flip between them. All VGA 16-color modes supported double buffering; only Mode X could do it in 256 colors.
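
To make the packed-pixel versus planar distinction concrete, here is a hedged Python sketch of the address arithmetic only; it touches no real VGA registers and simply applies the usual offset formulas for Mode 13h and the common 320×240 Mode X layout:

    # Address arithmetic for two VGA 256-color modes (offsets into the A0000h window).

    def mode13h_offset(x, y):
        # Mode 13h, 320x200: packed pixels, one byte per pixel, linear layout.
        return y * 320 + x

    def mode_x_offset(x, y):
        # Mode X, 320x240: four planes; plane = x mod 4, one byte covers 4 pixels.
        plane = x & 3
        offset = y * 80 + (x >> 2)     # 320 / 4 = 80 bytes per scan line per plane
        return plane, offset

    print(hex(0xA0000 + mode13h_offset(160, 100)))   # centre pixel in Mode 13h
    print(mode_x_offset(160, 120))                   # (plane, byte offset) in Mode X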

Many other programming tweaks to VGA could (and were) also performed. Some, however, caused monitor display problems such as flickering, roll, and other abnormalities so they were not used commercially. Commercial software typically used “safe” VGA modes.

Video memory typically mapped into real mode memory in a PC in the memory spaces…

  • B0000h (used for monochrome text mode)
  • B8000h (used for color text and CGA graphics modes)
  • A0000h (used for EGA/VGA graphics modes)

Note that by using the different memory areas it is possible to have two different monitors attached and running in a single computer. Early on, Lotus 1-2-3 took advantage of this by having the ability to display “high resolution” text on an MDA display along with color (low-resolution) graphics showing an associated graph of some part of the spreadsheet. Other such uses included coding on one screen with debugging information on another and similar applications.

VGA also had a subset called…

Multicolor Graphics Adapter (MCGA)

MCGA shipped first with the IBM PS/2 Model 25 in 1987. MCGA graphics were built into the motherboard of the computer. As a sort of step between EGA and VGA, MCGA had a short life and was shipped with only two IBM models, the PS/2 Model 25 and PS/2 Model 30 and fully discontinued by 1992. The MCGA capabilities were incorporated into VGA. Note: Some say that the 256-color mode of VGA is MCGA but, to be accurate, no MCGA cards were ever made; only the two IBM PS/2 models indicated had true MCGA chips. The 256-color mode of VGA, while similar, stands alone as part of the VGA specification.

The specific MCGA display modes included:

  • All CGA modes (except the text mode that allowed connection of the MDA (model 5151) monitor).
  • 640×480 monochrome at a 60 Hz refresh rate.
  • 320×200 256-color at a 70 Hz refresh rate. The 256 colors were chosen from a palette of 262,144 colors.

Like the other IBM standards, clone makers quickly cloned VGA. Indeed, while IBM produced later graphics specifications as we’ll see below, the VGA specification was the last IBM standard that other manufacturers followed closely. Over time, as extensions to VGA appeared, they were loosely grouped under the name Super VGA.

Super VGA (SVGA)

Super VGA was first defined in 1989 by the Video Electronics Standards Association (VESA); an association dedicated to providing open standards instead of the closed standards from a single company (IBM). While initially defined as 800×600 with 16 colors, SVGA evolved to 1024×768 with 256 colors and even higher resolutions and colors as time went on.

As a result SVGA is more of an umbrella than a fixed standard. Indeed, most any graphics system released between the early 1990s and early 2000s (a decade!) has generally been called SVGA. And, it was up to the user to determine from the specifications if the graphics system supported their needs.

The VESA SVGA standard was also called the VESA BIOS Extension (VBE). VBE could be implemented in either hardware or software. Often you would find a version of the VBE in a graphic card’s hardware BIOS with extensions in software drivers.

How could a standard be so fractured? With the introduction of VGA, the video interface between the adapter and the monitor changed from digital to analog. An analog system can support what is effectively an infinite number of colors, so color depth largely became a function of how the video adapter was constructed and not the monitor. For any given set of monitors, then, there could be thousands of different video adapters able to connect to and drive them. Of course, the monitors had to be able to handle the various refresh frequencies and some had to be larger to support the increasing number of pixels, but it was easier to produce a few large multi-frequency monitors than it was to produce the graphics computing power necessary to drive them.

Thus, while SVGA is an accepted term, it has no specific meaning except to indicate a display capability generally somewhere between 800×600 pixels and 1024×768 pixels at color depths ranging from 256 colors (8-bits) to 65,536 colors (16-bits). But, even those values overlap the various XGA standards…

Extended Graphics Array (XGA)

IBM’s XGA was introduced in 1990 and is generally considered to be a 1024×768 pixel display. It would be wrong, however, to consider XGA a successor to SVGA as the two were initially released about the same time. Indeed, the SVGA “definition” has expanded as seen above and one might consider XGA to have been folded under the SVGA umbrella.

Initially, XGA was an enhancement to VGA and added two modes to VGA…

  • 800×600 pixels at 16-bits/pixel for 65,536 colors.
  • 1024×768 pixels at 8-bits/pixel for 256 colors.

Graphic display processing offloading features from the 8514 system were incorporated into and expanded under XGA. The number and type of drawing primitives were increased over the 8514 and the 16-bit color mode added.

Later, an XGA-2 specification added 640×480 at true color, increased the 1024×768 mode to high color (16 bits/pixel for 65,536 colors), and improved the graphics accelerator performance.

Note: XGA was an IBM standard; VESA released a similar standard called Extended Video Graphics Array (EVGA) in 1991. The two should not be confused. EVGA, as a standalone term, never really caught on.

XGA, over time, developed into a family of different standards. The following entries summarize this family…

  • Wide Extended Graphics Array (WXGA). A widescreen standard that varied somewhat in the supported resolutions. The most common were 1280×720 pixels with an aspect ratio of 16:9, 1280×768 pixels with an aspect ratio of 5:3, 1280×800 pixels with an aspect ratio of 8:5, 1360×768 pixels with an aspect ratio of about 16:9, and 1366×768 pixels with an aspect ratio of about 16:9. The latter two are generally used for LCD televisions; the first three are often found in notebook computers. The 1280×720 pixel mode is also called 720p when used with HDTV.

Super Extended Graphic Array (SXGA)

Super XGA was another step up in resolution and became a family of its own…

  • Super Extended Graphic Array (SXGA and SXGA+). A resolution of 1280×1024 pixels with an aspect ratio of 5:4 and 1.3 million pixels. This resolution is common in 17-inch to 19-inch LCD monitors. The plus version has a resolution of 1400×1050 pixels with an aspect ratio of 4:3 and 1.47 million pixels. You might find this on notebook LCD screens.
  • Wide Super Extended Graphics Array (WSXGA and WSXGA+). A resolution of 1440×900 pixels with an aspect ratio of 8:5 and 1.3 million pixels. The Apple widescreen MacBook Pro came with this resolution but it’s not otherwise widely used. The plus version has a resolution of 1680×1050 pixels with an aspect ratio of 8:5 and 1.76 million pixels. Some Dell and Apple products have come with this native resolution.

Ultra Extended Graphics Array (UXGA)

Ultra XGA was another step up in resolution, based on four times the standard 800×600 resolution of the SVGA standard. Its basic format is 1600×1200 pixels and it also became a family of its own…

  • Ultra Extended Graphics Array (UXGA). A resolution of 1600×1200 pixels with an aspect ratio of 4:3 and 1.9 million pixels. Some manufacturers refer to this standard as Ultra Graphics Array (UGA) but that is not a recognized official name. There is no “plus” version of UXGA.
  • Widescreen Ultra Extended Graphics Array (WUXGA). A resolution of 1920×1200 pixels with an aspect ratio of 8:5 and 2.3 million pixels. There is no “plus” version of WUXGA although one manufacturer sells a 17-inch monitor with this resolution and claims that it is WUXGA+. A number of different manufacturers make products with this resolution.

Quad (Quantum) Extended Graphics Array (QXGA)

As of 2005, the QXGA family represented some of the highest resolutions defined. As of this writing, there are few commercial monitors with these resolutions and only a very few higher-end digital cameras. Expect more as fabrication techniques improve.

The Quad term derives from a multiple of 4 against a lower resolution standard. QXGA, for example, is 4 times the number of pixels of XGA at the same aspect ratio (4 times 786,432 pixels = 3,145,728 pixels which, at a 4:3 aspect ratio becomes a display of 2048×1536 pixels). Sometimes the name quantum is used instead to indicate there are so many pixels you’d have to measure them at the quantum level; a bit of an exaggeration but in keeping with the fun people have inventing names.
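
Since the “quad” (and, later, “hex[adecatuple]”) names are just pixel-count multiplication at a fixed aspect ratio, a tiny Python sketch can verify the figures; the helper below is my own and only works for perfect-square factors such as 4 and 16:

    import math

    # Scale a base resolution by a pixel-count factor, keeping the aspect ratio.
    def scale_resolution(width, height, pixel_factor):
        linear = math.isqrt(pixel_factor)     # 4x the pixels -> 2x each dimension
        return width * linear, height * linear

    print(scale_resolution(1024, 768, 4))     # QXGA: (2048, 1536) = 3,145,728 pixels
    print(scale_resolution(1024, 768, 16))    # HXGA: (4096, 3072) = 12,582,912 pixels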

The QXGA family therefore can be summarized as…

  • Quad Extended Graphics Array (QXGA). A resolution of 2048×1536 with an aspect ratio of 4:3 and 3.1 million pixels.
  • Wide Quad Extended Graphics Array (WQXGA). A resolution of 2560×1600 with an aspect ratio of 8:5 and 4.1 million pixels.
  • Quad Super Extended Graphics Array (QSXGA). A resolution of 2560×2048 with an aspect ratio of 5:4 and 5.2 million pixels.
  • Wide Quad Super Extended Graphics Array (WQSXGA). A resolution of 3200×2048 with an aspect ratio of 25:16 and 6.6 million pixels.
  • Quad Ultra Extended Graphics Array (QUXGA). A resolution of 3200×2400 with an aspect ratio of 4:3 and 7.7 million pixels.
  • Wide Quad Ultra Extended Graphics Array (WQUXGA). A resolution of 3840×2400 with an aspect ratio of 8:5 and 9.2 million pixels.

Hex[adecatuple] Extended Graphics Array (HXGA)

As of 2005, the HXGA and related standards are the highest resolution presently defined. As of this writing, there are no commercial monitors with these resolutions and only a very few high-end digital cameras.

The Hex[adecatuple] term derives from a multiple of 16 against a lower resolution standard. HXGA, for example, is 16 times the number of pixels of XGA at the same aspect ratio (16 times 786,432 pixels = 12,582,912 pixels which, at a 4:3 aspect ratio becomes a display of 4096×3072 pixels).

The HXGA family therefore can be summarized as…

  • Hex[adecatuple] Extended Graphics Array (HXGA). A resolution of 4096×3072 with an aspect ratio of 4:3 and 12.6 million pixels.
  • Wide Hex[adecatuple] Extended Graphics Array (WHXGA). A resolution of 5120×3200 with an aspect ratio of 8:5 and 16.4 million pixels.
  • Hex[adecatuple] Super Extended Graphics Array (HSXGA). A resolution of 5120×4096 with an aspect ratio of 5:4 and 21 million pixels.
  • Wide Hex[adecatuple] Super Extended Graphics Array (WHSXGA). A resolution of 6400×4096 with an aspect ratio of 25:16 and 26 million pixels.
  • Hex[adecatuple] Ultra Extended Graphics Array (HUXGA). A resolution of 6400×4800 with an aspect ratio of 4:3 and 31 million pixels.
  • Wide Hex[adecatuple] Ultra Extended Graphics Array (WHUXGA). A resolution of 7680×4800 with an aspect ratio of 8:5 and 37 million pixels.