From: Mark Shore on

> I start out with the RGB and know how to split it into R,G,B. Then I know how to make it gray scale. The next step, I complete a function that connects some pixels, but discards others. I want to know if I can take this gray image, with some discarded pixels, and get it back into RGB.

The reason ImageAnalyst asks whether you've retained separate greyscale images for the individual RGB components is that information is irretrievably lost when you convert a color image to a single greyscale one. The conversion is essentially one way only.
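To see why: MATLAB's rgb2gray collapses each pixel to a single weighted sum, 0.2989*R + 0.5870*G + 0.1140*B, so many different colors map to the same grey level. A small illustration, with made-up values:

c1 = reshape([0.5 0.5 0.5], 1, 1, 3);  % mid grey
c2 = reshape([0.7 0.4 0.5], 1, 1, 3);  % a pinkish color
g1 = rgb2gray(c1)  % about 0.500
g2 = rgb2gray(c2)  % about 0.501 -- nearly identical to g1
% Two distinct colors, essentially one grey value: the original
% colors cannot be recovered from the grey image alone.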
From: Walter Roberson on
Stacy Ross wrote:

> I start out with the RGB and know how to split it into R,G,B. Then I
> know how to make it gray scale. The next step, I complete a function
> that connects some pixels, but discards others. I want to know if I can
> take this gray image, with some discarded pixels, and get it back into RGB.

If you can unambiguously determine whether particular pixels have been
"discarded" or not, then you can build an index of the remaining pixels
in the image matrix, copy those locations from the color matrix into a
new matrix, and set the rest of the locations to some sentinel value
that is to be interpreted as "discarded". In particular, if you are
working with double precision images, the locations to be discarded
could be set to nan.
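
A minimal sketch of that, assuming the result of your function is a
logical matrix keepMask (true where a pixel survived) and rgbImage is
the original MxNx3 double image:

result = nan(size(rgbImage));        % everything starts out "discarded"
mask3 = repmat(keepMask, [1 1 3]);   % apply the same mask to all three channels
result(mask3) = rgbImage(mask3);     % copy surviving pixels back from the color image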

This idea works only when what you get out is a "mask" (or the
boundaries of a set of regions of interest), and does not work if you
are trying to do something like blur the grayscale image, where the
amount of blur determines how the original pixel colors should be
modified.

On the other hand, if a particular operation upon the grayscale image
changes the luminance of the pixels, it is possible to back-calculate
how to scale the original RGB in order that the luminance comes out as
the new value. You will, though, have to determine the priority for each
channel in doing the back calculation. For example, if the green channel
was already fully "on" (double 1.0), that contributes about 0.59 to the
luminance. If the revised luminance is higher than the original
luminance, then you would be tempted to scale each of the R, G, and B
higher in order to produce the new luminance -- but in such a case you
would not be able to move the G any higher because 1.0 is the maximum.
If the new luminance was 1.0 (e.g., because the mostly-green pixel was
smoothed out beside some adjacent white pixels) then the only solution
would be (R, G, B) all 1.0, which could not be any kind of uniform
or proportional scaling of the RGB channel values. So how much would you
want to preserve original proportions, vs how much would you want the
new pixel to reflect the new calculated luminance? There isn't a "right"
answer to this question: it depends upon your intent.
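
If you do settle on simple proportional scaling, a sketch (assuming
newGray holds the modified luminance and rgbImage is the original
double image) might look like the following; the min() at the end is
exactly where the proportionality described above breaks down:

oldGray = rgb2gray(rgbImage);           % Rec. 601 weighted sum, green weight 0.5870
scale = newGray ./ max(oldGray, eps);   % per-pixel scale factor; eps avoids division by zero
newRGB = rgbImage .* repmat(scale, [1 1 3]);
newRGB = min(newRGB, 1);                % clipping abandons the original channel proportions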