grayscales
background
when we convert a color image into grayscale, we lose information about the original image. instead of tracking the red, green, and blue channels for each pixel, we only keep track of the luminance -- how light or dark a pixel is, a value between 0 (black) and 255 (white).
if you've ever worked with image manipulation, the first method you learn for computing luminance is the naive one: just average the three channels. this produces reasonable images, but grayscale images generated by most other programs look different. in particular, they render pure red, pure green, and pure blue as distinct shades of gray instead of one uniform gray.
these rendering algorithms take advantage of how our eyes see color to create more natural images, using a weighted average according to how sensitive our eyes are to each color. on a pretty green planet we're most sensitive to green light, so the green channel contributes the most information to the grayscale version; next red, then blue.
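to make the difference concrete, here's a minimal sketch comparing the naive average against a weighted average. the weights below are the standard luminance coefficients for sRGB primaries (the CIE 1931 Y component under Rec. 709 primaries); the post's exact constants may differ slightly.

```python
def naive_luminance(r, g, b):
    """plain average of the three channels -- pure red, green, and blue
    all collapse to the same gray (85)."""
    return (r + g + b) / 3

def weighted_luminance(r, g, b):
    """weighted average: green dominates, then red, then blue, so the
    three pure tones map to distinct grays."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# naive: red, green, and blue are indistinguishable
print(naive_luminance(255, 0, 0), naive_luminance(0, 255, 0), naive_luminance(0, 0, 255))
# weighted: three clearly different grays (~54, ~182, ~18)
print(weighted_luminance(255, 0, 0), weighted_luminance(0, 255, 0), weighted_luminance(0, 0, 255))
```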
using the CIE 1931 standard for computing luminance, I've reconstructed images below so that all of the pixels in each image -- despite having distinct color values -- have the same luminance, and thus appear identical in grayscale. there are only so many standard colors that share a given luminance; the palette for each image is also shown below.
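one hypothetical way to build such a palette is a brute-force search: scan a standard color set for colors whose weighted luminance lands within a tolerance of a target value. the sketch below uses the web-safe colors (six steps per channel) as the candidate set; the post's actual color set and tolerance are assumptions here.

```python
def luminance(r, g, b):
    """CIE 1931-style luminance with sRGB (Rec. 709) coefficients."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def equal_luminance_palette(target, tolerance=0.5, step=51):
    """collect candidate colors whose luminance is within `tolerance`
    of `target`; step=51 walks the web-safe values 0, 51, ..., 255."""
    palette = []
    for r in range(0, 256, step):
        for g in range(0, 256, step):
            for b in range(0, 256, step):
                if abs(luminance(r, g, b) - target) <= tolerance:
                    palette.append((r, g, b))
    return palette

# every color returned renders as the same gray (here, 102) in grayscale
print(equal_luminance_palette(102))
```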
the results vary, and each image uses dithering or halftoning effects to punch above its weight given the limited color palette.
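for reference, a common dithering technique is Floyd–Steinberg error diffusion. the sketch below applies it to a single grayscale channel, reducing it to pure black and white; the post's images presumably diffuse error across a small color palette instead, but the mechanism is the same.

```python
def dither(image):
    """Floyd-Steinberg dithering of a grid of luminance values (0-255),
    quantizing each pixel to 0 or 255 in place."""
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = 255 if old >= 128 else 0
            image[y][x] = new
            err = old - new
            # push the quantization error onto not-yet-visited neighbors
            if x + 1 < w:
                image[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    image[y + 1][x - 1] += err * 3 / 16
                image[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    image[y + 1][x + 1] += err * 1 / 16
    return image

# a flat mid-gray becomes a scattered mix of black and white pixels
# whose average brightness approximates the original gray
print(dither([[100] * 4 for _ in range(4)]))
```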


extra!
extra images generated using halftones



