As promised in the previous article, today we will try to untangle the confusion that this simple word generates among many computer users: « resolution ».
To do that, we are going to explain screen resolution, printing resolution, pixel dimensions, physical dimensions, and how everything relates to bitmaps.
Bitmap dimensions are expressed in pixels (image width × height), either stated separately (usually for image files) or as the result of the multiplication (usually to express the resolution of digital camera sensors).
For example, an « 8-megapixel digital camera » means the photos it produces (actually bitmaps) are made up of ca. 8 million pixels; a photo might be, for example, 3456 pixels wide by 2304 pixels high, which gives 7,962,624 pixels (ca. 8 MP).
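The megapixel arithmetic above can be sketched in a couple of lines (a minimal illustration, using the same figures as the example):

```python
# Megapixel count is simply width x height, divided by one million.
width_px = 3456
height_px = 2304

total_pixels = width_px * height_px        # 7,962,624 pixels
megapixels = total_pixels / 1_000_000      # ca. 8 MP

print(total_pixels)          # 7962624
print(round(megapixels, 1))  # 8.0
```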
The funny thing is that the pixel dimensions of an image have little to do with how the bitmap appears on-screen, but a lot to do with how the image will print!
Well, to begin with, a pixel is a logical unit, not a physical one.
In other words, a pixel doesn’t have a fixed physical size; its size depends on the screen it is displayed on.
For example, imagine an LCD monitor having a screen resolution of 1000×1000 pixels, the screen’s physical dimensions being 1000×1000 mm.
In that case, the size of a pixel will be 1 square millimeter (1×1 mm).
Now imagine that you change the output resolution setting of the same monitor to 500×500 pixels: as the monitor’s physical dimensions don’t change, a pixel will now be 2×2 mm, meaning 4 square millimeters.
To continue this thought experiment, consider a bitmap image with pixel dimensions of 1000×1000 pixels: in the first case (screen resolution of 1000×1000), the image will be rendered full screen.
In the second case (with the same monitor set to 500×500 pixels), the image will appear larger (because each pixel is bigger than before) and will be displayed only partially on screen.
Of course, no such monitors or resolution standards exist in real life; we used these examples only to show that image size in pixels is relative, not « absolute ».
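The thought experiment above boils down to one division: screen dimension divided by resolution. A minimal sketch, using the same imaginary 1000×1000 mm monitor:

```python
def pixel_size_mm(screen_width_mm, screen_height_mm, res_x, res_y):
    """Physical size of one pixel, given the screen's dimensions and its resolution."""
    return screen_width_mm / res_x, screen_height_mm / res_y

# Imaginary monitor from the example, at the two resolutions discussed:
print(pixel_size_mm(1000, 1000, 1000, 1000))  # (1.0, 1.0) -> 1 x 1 mm per pixel
print(pixel_size_mm(1000, 1000, 500, 500))    # (2.0, 2.0) -> 2 x 2 mm per pixel
```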
So perhaps at this point it will be easier to understand what pixels-per-inch (PPI) is: it is a value expressing how many pixels « fit » within an inch, but its significance depends on the context:
-when used with regard to an image’s resolution, PPI (often referred to as DPI in this context) is the number of pixels per inch in the bitmap grid and is meant for printers (to determine the size at which the image is to be printed);
-when used with regard to a screen’s appearance, PPI is the number of pixels per inch (or pixels-per-centimeter (PPCM) when the metric system is used), which depends on the screen’s physical size and the screen resolution set by the user.
Here are a few real-life facts to help you understand:
-a monitor in 800×600 mode has a lower PPI than the same monitor has in a 1024×768 or 1280×1024 mode;
-a monitor 12 inches wide (horizontal) by 9 inches high (vertical) at a resolution of 1024×768 pixels has a PPI value of ca. 85 (1024 pixels / 12 inches = 768 pixels / 9 inches = 85.3);
-a monitor on the Windows operating system is typically assumed to display at 96 PPI;
-a bitmap image of 1,000 × 1,000 pixels, if labeled as 250 PPI (DPI is frequently misused to replace PPI in such context), will instruct the printer to print it at a size of 4 × 4 inches.
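The arithmetic behind these facts is a straight division; here is a minimal sketch, with the figures matching the monitor and print examples above:

```python
def ppi(pixels, inches):
    """Pixels-per-inch: pixel count along one side divided by physical length."""
    return pixels / inches

def print_size_inches(pixels, ppi_label):
    """Print size a printer derives from pixel dimensions and the image's PPI label."""
    return pixels / ppi_label

print(round(ppi(1024, 12), 1))       # 85.3 -> the 12-inch-wide monitor at 1024x768
print(print_size_inches(1000, 250))  # 4.0  -> a 1000-pixel side labeled 250 PPI
```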
With printers, there’s a different story.
First of all, there is an important difference between how an image appears to the human eye on the screen and how it appears when printed.
Due to visual perception physiology, an image doesn’t need to have a very high resolution in order to appear at a decent size and with good quality on a computer screen.
Unfortunately, this is not the case when it comes to printing it: what is seen on print simply doesn’t match the quality of what is seen on-screen.
Screens create colors from red, green, and blue (the RGB color model), mixing them into a vast color palette; the light is emitted directly toward the eyes.
White is obtained when all three colors are displayed at full intensity, and black by their absence (zero intensity), hence the name « additive color model » for RGB.
Printers work differently: they create colors by mixing cyan, magenta, yellow, and black inks (the CMYK color model; there is also a CMY model); the inks absorb light, and human eyes see the light reflected from the paper.
White is obtained by using none of the four inks (it is simply the paper’s color), while black is obtained by the full combination of all four (or three, for CMY) inks, hence the name « subtractive color model » for CMYK.
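A commonly used, very simplified conversion between the two models treats CMY as the complement of RGB (a naive sketch only; real printing relies on color profiles and far more elaborate math):

```python
def rgb_to_cmy(r, g, b):
    """Naive additive-to-subtractive conversion: each ink is the complement
    of the corresponding light channel (values in the 0.0-1.0 range)."""
    return 1.0 - r, 1.0 - g, 1.0 - b

print(rgb_to_cmy(1.0, 1.0, 1.0))  # (0.0, 0.0, 0.0) -> white on screen = no ink
print(rgb_to_cmy(0.0, 0.0, 0.0))  # (1.0, 1.0, 1.0) -> black on screen = full ink
```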
Similar to how a pixel is the picture element of an image on the screen, a dot is the picture element of a printed image.
And similar to how PPI value describes the density of pixels on the screen, the DPI (« dots-per-inch ») value expresses how many individually printed dots « accommodate » within an inch.
Of course, the higher the DPI value, the better the quality of the printed image.
But printers have a limited range of colors for each dot, and their color palette is smaller than that of screens.
So, in order to obtain similar output quality, a bitmap image has to be printed at a much higher DPI than the PPI needed for good on-screen viewing.
It is said that the printing process « could require a region of four to six dots (measured across each side) to faithfully reproduce the color contained in a single pixel. » So, if a 100×100-pixel image is to be printed inside a one-inch square, the printer must be capable of 400 to 600 dots per inch to accurately reproduce the image.
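Following that rule of thumb, the printer DPI needed for a given image can be estimated from its PPI and the dots-per-pixel factor (a sketch only; the 4-to-6 range comes from the quote above):

```python
def required_dpi(image_ppi, dots_per_pixel_side):
    """DPI a printer needs so each pixel maps to a region of
    dots_per_pixel_side x dots_per_pixel_side printed dots."""
    return image_ppi * dots_per_pixel_side

# 100 x 100 pixels printed inside one square inch -> 100 PPI:
print(required_dpi(100, 4))  # 400
print(required_dpi(100, 6))  # 600
```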
Finally, as if the confusions described above were not enough, DPI is also used to express resolution in scanning, whereas the correct term there appears to be « samples-per-inch » (SPI).
We hope that from now on, when using PaperScan, it will be easier for you to deal with resolution-related terminology in bitmaps!
See you next week!