
Amiga Graphics Explained

by Eric Schwartz
from the AmiTech Gazette, March 2015

There have been a number of updates and developments in the Amiga sphere, but perhaps the biggest recent news is that 2015 marks the 30th anniversary of the official launch of the original Commodore Amiga 1000 computer system, in all its Mitchie’s-paw-print-inside-the-case glory. A celebration event is planned at the Computer History Museum in Mountain View, California, on July 25th and 26th. A Kickstarter campaign has been set up to fund the event, so anyone looking to contribute can look here:

Amiga 30th Anniversary in California

I have spoken on the subject of Amiga technology in the past, and how the technological strengths of one time can be the weaknesses of another. Reading the book “The Future Was Here” by Jimmy Maher, with its in-depth and honest examination of the workings and history of the Amiga system, has given me a better understanding of the reasoning behind certain design decisions. Amiga users and CG people in general are probably familiar with "bit-planes" making up a graphics display, and may have heard about the debate regarding "planar" vs. "chunky" pixels, and how they relate to graphics performance. For those not in the know: the more bitplanes on a display, the more possible colors, doubling with each bit per pixel added. Meanwhile, most computers operate in bytes, 8 bits in a chunk (or 16, or 32, or...). This leaves two possible ways of handling the display: storing all of a pixel’s bits together in a single byte (the "chunky" way), or spreading that byte across eight adjoining pixels, giving 1 bit to each, with a different byte handling the next bitplane for those pixels, and so on. This is "planar" graphics, and the way a stock Amiga handles things, building a display layer by layer.

Chunky graphics are much easier to understand, as they basically amount to plugging one value into one pixel. So why did the Amiga take a seemingly more convoluted approach? It came down to efficient use of memory. The original Amiga only had the power to use six bits per pixel, and could only use 32 user-defined colors (five bits... it doesn’t sound like much now, but that was better than the vast majority of home systems available in 1985). Using a full 8-bit byte for each pixel would mean at least a quarter of those bits would go unused and therefore wasted, and if fewer bitplanes were needed, that waste would only be magnified. Smearing those bits across the planes wastes the least memory, which was good for a system that shared video memory and system memory for common usage (another cost-saving decision).
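To make the two layouts concrete, here is a small Python sketch (illustrative only, not actual Amiga code) that takes a row of "chunky" color indices and spreads their bits across bitplanes the way a planar display stores them:

```python
def chunky_to_planar(pixels, depth):
    """Spread each pixel's bits across `depth` bitplanes.

    pixels: list of color indices (0 .. 2**depth - 1), one per pixel;
    the row length must be a multiple of 8 so planes pack evenly.
    Returns `depth` byte strings; plane p holds bit p of every pixel,
    packed 8 pixels per byte, leftmost pixel in the most significant bit.
    """
    assert len(pixels) % 8 == 0
    planes = []
    for p in range(depth):
        plane = bytearray()
        for i in range(0, len(pixels), 8):
            byte = 0
            for px in pixels[i:i + 8]:
                byte = (byte << 1) | ((px >> p) & 1)  # 1 bit per pixel
            plane.append(byte)
        planes.append(bytes(plane))
    return planes

# Eight pixels, all color 5 (binary 0101), on a 4-plane display:
planes = chunky_to_planar([5] * 8, depth=4)
print([p.hex() for p in planes])  # ['ff', '00', 'ff', '00']
```

Note how a single pixel's value ends up scattered across four separate memory areas; reading or changing one pixel means touching every plane, which is the trade-off the rest of the article explores.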

The planar graphics approach would prove to have other advantages, once discovered. Having each bitplane handled separately in memory means the system can shuffle them around like cards, for visual effects or things like parallax scrolling, where overlapping graphic elements can be moved at different speeds for a pseudo-3D effect. Another trick that probably couldn’t be done without a planar display would become part of the Amiga’s claim to fame. While the original system only had 5 bits’ worth of colors in the palette, the Amiga display could use six. One way this was used was the "half-bright" mode (in Amiga models later than the A1000), which took the 32 colors of 5 bits and used the sixth to cut the brightness of that pixel in half if active, like a sunglass lens. The true star was the Hold-and-Modify, or HAM, graphic mode, which took bitplanes to a new level. HAM mode took two out of the six bits per pixel and turned them into control information for how the remaining four were used: as either one of sixteen set palette colors, or a modification of the red, green, or blue component of the pixel immediately to the left to make a new color. This mode made it possible for the Amiga to display its complete possible palette of 4,096 colors, or 12 bits worth of color information, on-screen at once using only 6 bits worth of data, at the expense of "fringes" or minor color artifacts in the displayed image. It was essentially a form of lossy image compression well before formats like JPEG came into common use.
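The hold-and-modify idea can be sketched in a few lines of Python. This is a simplified model of HAM6 decoding (the palette and pixel stream here are made up for illustration, and the starting color at the left edge is assumed to be black rather than the actual border color):

```python
def decode_ham6(pixels, palette):
    """Decode a row of 6-bit HAM pixels into (r, g, b) tuples.

    palette: 16 entries of (r, g, b), 4 bits per channel.
    The top two bits of each pixel are control bits: 00 = use palette
    entry, 01 = modify blue, 10 = modify red, 11 = modify green; the
    "modify" cases hold the other two channels from the pixel to the
    left, so full 12-bit color builds up one channel at a time.
    """
    out = []
    r = g = b = 0  # left-edge starting color assumed black here
    for px in pixels:
        control, data = px >> 4, px & 0x0F
        if control == 0:
            r, g, b = palette[data]   # set all three channels at once
        elif control == 1:
            b = data                  # hold r and g, modify blue
        elif control == 2:
            r = data                  # hold g and b, modify red
        else:
            g = data                  # hold r and b, modify green
        out.append((r, g, b))
    return out

palette = [(0, 0, 0)] * 16
palette[1] = (15, 0, 0)        # hypothetical palette: entry 1 is pure red
stream = [0x01, 0x3F, 0x1F]    # set red, then max green, then max blue
print(decode_ham6(stream, palette))
# [(15, 0, 0), (15, 15, 0), (15, 15, 15)] — red, yellow, white
```

The sketch also shows where the famous "fringing" comes from: reaching a color that differs from its left neighbor in more than one channel takes several pixels of intermediate, slightly wrong colors.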

Despite the upsides to a planar display, there was still the one obvious downside. While the chunky style display is easier to explain, as seen above, it’s also easier for the average computer to do. Where a single chunky pixel needs only to be written once to a single byte of memory, a planar pixel needs as many memory writes as there are bitplanes behind it, up to six, or eight for the later AGA chipset Amigas, which takes more time. While this difference is usually trivial at the speeds a computer operates, especially an Amiga with hardware designed to shuffle around video and graphics at high speed, it makes a difference when lots of pixels are changed in a short time, like a whole screen’s worth. This became most apparent with the rise of 3D video games like DOOM. The amount of computing required to calculate a 3D viewpoint like that for every frame is very heavy, but the added steps of buffering the display and writing to the screen in bitplanes pile on more time and slow down the frame rate. It was around this time when the DOS and Windows PCs, which concentrated more on raw horsepower with large amounts of memory and storage, took the advantage with these types of games, and planar displays started looking more like a detriment. The Amiga CD-32 game console tried to address this shortcoming with a new chip called "Akiko," which sped up the conversion between chunky and planar graphics for 3D gaming, but it was too little, too late, as few developers bothered with it, and Commodore went down in 1994, taking the CD-32 with it.
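The write-cost difference described above can be shown with a toy model (plain Python, nothing like real Amiga blitter or Akiko code): plotting one pixel into a chunky frame buffer is a single byte write, while the planar version must do a read-modify-write on every plane.

```python
def plot_chunky(framebuf, x, color):
    """One pixel, one byte: a single memory write."""
    framebuf[x] = color
    return 1  # number of writes

def plot_planar(planes, x, color):
    """One pixel touches every plane: one read-modify-write each."""
    writes = 0
    byte, bit = x // 8, 7 - (x % 8)   # MSB-first within each byte
    for p, plane in enumerate(planes):
        mask = 1 << bit
        if (color >> p) & 1:
            plane[byte] |= mask       # set this pixel's bit in plane p
        else:
            plane[byte] &= ~mask      # clear it
        writes += 1
    return writes

chunky = bytearray(320)                       # 320 pixels, 1 byte each
planes = [bytearray(40) for _ in range(6)]    # 6 planes, 320 pixels wide
print(plot_chunky(chunky, 100, 21))   # 1 write
print(plot_planar(planes, 100, 21))   # 6 writes, one per plane
```

Multiply that six-fold difference by every pixel of a full-screen 3D view redrawn many times a second, and the frame-rate gap the article describes follows directly.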

The Amiga as a whole didn’t go down nearly so easily, and many of the users of more professional-grade machines would outfit them with new video cards, using chips in common with those from Windows systems... "chunky" ones with dedicated banks of video-only memory. While these video cards were unable to use the tricks of the Amiga’s native video chips, they were usually faster, given enough CPU horsepower, and offered higher screen resolutions and color depths of 16, 24, or 32 bits, which would undoubtedly overwhelm a system that had to write each plane individually. The new cards required driver software, such as Picasso96 or CyberGraphX, to act as a layer between the Amiga system and the hardware, allowing well-behaved Amiga software to use a video card display as easily as a native Amiga chip display, with chunky-versus-planar concerns reduced to a matter of screen speed. Unfortunately, older software, or things that get their speed and efficiency by programming directly to the Amiga hardware, like games, can’t be promoted to a video card display as simply, and the new hardware wouldn’t be able to understand HAM graphics or bitplane effects anyway. Newer Amiga-related systems running OS4 or MorphOS or AROS use modern versions of the old PC video cards, so the "chunkies" have officially won, because the high memory prices that once prompted more efficient ways to use it are no longer a concern, at least nowhere near the levels seen in the mid-1980s.