Neural networks have shown great promise in converting black and white images to color. In particular, the DeOldify network can produce outstanding results for typical photo imagery. I had used this network in the past to colorize microscope images. Occasionally I had success, but frequently the network assigned the image a drab greenish yellow or added hardly any color at all. I assume this is because microscopy is far outside the network's training domain. That gave me hope that a network could colorize microscope images well if it were trained on the right dataset.
An example of an image that DeOldify colorizes well.
An example of an image that DeOldify hardly colorizes at all.
For my training dataset, I used almost 2,500 prizewinning photos from 45 years of Nikon's Small World photo competition. These images are typically beautifully colorized or capture vibrant microscopic scenes. Converting each one to grayscale then provided an input image paired with its original color counterpart as the target.
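The pairing step is simple: each color photo yields its own grayscale input. A minimal sketch using Pillow (the function name is my own, not from any particular codebase):

```python
from PIL import Image

def to_grayscale(img: Image.Image) -> Image.Image:
    """Return a luminance-only copy of a color photo.

    The (grayscale, color) pair becomes one training example:
    grayscale as the network input, the original as the target.
    """
    return img.convert("L")

if __name__ == "__main__":
    color = Image.open("small_world_photo.jpg").convert("RGB")
    gray = to_grayscale(color)
    gray.save("small_world_photo_gray.jpg")
```

Running this over the whole photo set produces the paired dataset in one pass.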
I attempted to retrain DeOldify, but its training pipeline is quite complex and finicky, and I was never able to get good results. Instead I trained a Pix2Pix image translation network. This network is a GAN that takes an input image and seeks to transform it into a form that is indistinguishable from a target image. In my case, the black and white image would be transformed into color. The task of colorization is well within the network's capacity, and it started generating plausible results fairly rapidly (within 30 epochs).
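For intuition, the Pix2Pix generator is trained on two terms: an adversarial loss that pushes the discriminator to label its output as real, plus an L1 loss that keeps the colorized output close to the ground-truth colors. A minimal NumPy sketch of that combined objective (the lambda weight of 100 follows the original Pix2Pix paper; the function name and shapes are illustrative):

```python
import numpy as np

def generator_loss(disc_out_on_fake, fake_rgb, real_rgb, lam=100.0):
    """Combined Pix2Pix generator objective: GAN term + lam * L1 term.

    disc_out_on_fake: discriminator probabilities (0..1) on the
        generator's colorized output.
    fake_rgb / real_rgb: generated and ground-truth color images.
    """
    eps = 1e-7  # avoid log(0)
    # Adversarial term: reward fooling the discriminator (outputs near 1).
    gan = -np.mean(np.log(disc_out_on_fake + eps))
    # Reconstruction term: mean absolute error against the true colors.
    l1 = np.mean(np.abs(fake_rgb - real_rgb))
    return gan + lam * l1
```

When the discriminator is fully fooled and the colors match exactly, both terms go to zero; in practice the L1 term dominates early training and gives the rapid plausible results I saw.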