Category Archives: Astrophotography

Processing workflow for lunar surface images using GIMP and G’MIC

This post is going to illustrate a post-processing workflow for lunar surface images using the open source tools GIMP and G’MIC. GIMP has been largely ignored by astrophotographers in the past since it only supported 8-bit colour channels. The long-awaited GIMP 2.10 release in April 2018 introduced 16-bit and 32-bit colour channel support, along with many other important improvements that enable high quality post-processing.

Astrophotographers seeking to present highly detailed images of The Moon have long recognised that capturing a single still image is not sufficient. Instead, normal practice is to capture a high definition video at a high frame rate, lasting a minute or more, by attaching a webcam to a telescope in place of the eyepiece. A program such as AutoStakkert2 will then process the video, analysing the quality of each video frame, selecting the “best” frames, and merging them to produce a single frame with less noise and more detail. The output of AutoStakkert2, though, is not a finished product and requires further post-processing to correct various image artefacts and pull out the inherent detail. A common tool used for this is Registax, which found popularity in particular because of its wavelet sharpening feature.

Use of AutoStakkert2 could be a blog post in its own right, so won’t be covered here. What follows picks up immediately after stacking has produced a merged image, and shows how GIMP and G’MIC can replace the closed source, Windows-based Registax tool.

The source material for this blog post is a 40 second long video captured with a modified Microsoft Lifecam HD paired with a Celestron Nexstar 4GT telescope. Most astrophotographers will spend £100 or more on CCD cameras designed specifically for use with telescopes, so this modded Lifecam is very much at the low end of what can be used. This presents some extra challenges but, as will be seen, still allows great results to be obtained with minimal expense.

The first noticeable characteristic of the video is a strong pink/purple colour cast at the edges of the frame. This is caused by unwanted infrared light reaching the webcam sensor. An IR cut filter is attached to the webcam, but it is positioned too far away from the CCD chip to be fully effective. A look at a single video frame at 100% magnification shows a high level of speckled chromatic noise across the frame. Finally, the image slowly drifts due to inaccurate tracking of The Moon’s movement by the telescope mount, and features are stretched and squashed by atmospheric distortion.

100% magnification crop of a single still video frame before any processing

After the video frames are stacked using AutoStakkert2, the resulting merged frame shows significant improvements. The speckled noise has been completely eliminated by the stacking process, which effectively averages out the noise across hundreds (even thousands) of frames. The image, however, appears very soft, lacking any fine detail, and there is chromatic aberration present on the red and blue channels.

100% magnification crop after stacking the top (50% by quality) video frames in AutoStakkert2

AutoStakkert2 will save the merged image as a 16-bit PNG file, and GIMP 2.10 will honour this bit depth when loading the image. It is possible to then convert it to 32-bit per channel before processing, but for lunar images this is probably overkill. The first task is to get rid of the chromatic aberration, since that has the effect of making the image even softer. With this particular webcam and telescope combination it is apparent that the blue channel is shifted 2 or 3 pixels up relative to the green, while the red is shifted 2 or 3 pixels down. It is possible to fix this in GIMP alone by decomposing the image, creating a layer for each colour channel, then moving the x,y offset of the blue and red layers until they line up with green, and finally recomposing the layers to produce a new colour image.
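To illustrate what is going on, here is a minimal Python sketch of that decompose/shift/recompose approach using NumPy and Pillow. The filenames and the 3 pixel offsets are illustrative and would need adjusting for your own data, and it works on an 8-bit copy for brevity, whereas GIMP keeps the full 16-bit precision.

import numpy as np
from PIL import Image

# Hypothetical filename: the stacked output from AutoStakkert2.
# convert("RGB") drops to 8-bit; fine for a sketch, GIMP keeps 16-bit.
img = np.asarray(Image.open("moon-stacked.png").convert("RGB")).astype(np.float32)

# With this camera/telescope combination the blue channel appears shifted
# 2-3 pixels up and the red 2-3 pixels down relative to green, so shift
# them back (positive = move rows down). Exact offsets are found by eye.
red_shift, blue_shift = -3, 3

aligned = img.copy()
aligned[..., 0] = np.roll(img[..., 0], red_shift, axis=0)   # red channel
aligned[..., 2] = np.roll(img[..., 2], blue_shift, axis=0)  # blue channel

Image.fromarray(aligned.clip(0, 255).astype(np.uint8)).save("moon-aligned.png")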

This is a rather long-winded process that is best automated, which is where G’MIC comes into play. It is a general purpose image processing tool with the ability to run as a GIMP plugin, providing more than 450 image filters. The relevant filter for our purpose is called “Degradations -> Chromatic Aberrations”. It allows you to simply enter the desired x,y offset for the red and blue channels and will re-align them in one go, avoiding the multi-step decompose process in GIMP.

G’MIC Chromatic Aberration filter. The secondary colour defaults to green, but it is simpler if it is changed to blue, since that is the second fringe colour we’re looking to eliminate. The preview should be zoomed in to about 400% to allow alignment to be clearly viewed when adjusting x,y offsets.

100% magnification crop after aligning the RGB colour components to correct chromatic aberration.

With the chromatic aberration removed, the next step is to get rid of the colour cast. The Moon is not a purely monochrome object: different areas of its surface have distinct colours which would ideally be preserved in any processed image. Due to the limitations of the camera being used, however, the IR wavelength pollution makes that largely impractical. The only real option is to desaturate the image to create a uniformly monochrome image. If a slightly off-grey colour tint is desired in the end result, it can be added by colourizing the final image.
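In GIMP this is simply a Colors -> Desaturate operation; as a rough sketch of what that amounts to, continuing the hypothetical Python workflow from above and using Rec.709 luminance weights:

import numpy as np
from PIL import Image

# Hypothetical filename: the channel-aligned image from the previous step.
rgb = np.asarray(Image.open("moon-aligned.png").convert("RGB")).astype(np.float32)

# Luminance-weighted desaturation; any reasonable desaturation mode works
# here since the colour information is dominated by the IR cast anyway.
lum = rgb[..., 0] * 0.2126 + rgb[..., 1] * 0.7152 + rgb[..., 2] * 0.0722

Image.fromarray(lum.clip(0, 255).astype(np.uint8), mode="L").save("moon-mono.png")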

100% magnification crop after desaturating to remove colour cast due to IR wavelengths

The image we have at this stage is still very soft, lacking any fine detail. One of the most popular features in Registax is its wavelet based sharpening facility. Fortunately there are now a number of options available in GIMP that can achieve comparable results. GIMP 2.10 comes with a “Filters -> Enhance -> Wavelet decompose” operation, while G’MIC has “Details -> Split Details (wavelets)”, both of which can get results comparable to Registax wavelets operating in linear mode. The preferred Registax approach, though, is to use gaussian wavelets, and this has an equivalent in G’MIC available as “Details -> Split Details (gaussian)”. The way the G’MIC filter is used, however, is rather different, so needs some explaining.

Split details (gaussian) filter. The image will be split into 6 layers by default: 5 layers of detail and a final background residual layer. Together the layers are identical to the original image. The number of layers, together with the two scale settings, determines the granularity of detail in each layer. The defaults are reasonable but there’s scope to experiment if desired.

Describing the real mathematical principles behind gaussian wavelets is beyond the scope of this post, but those interested can learn more from the AviStack implementation. Sticking to the high level: when the plugin is run it will split the visible layer into a sequence of layers. There is a base “residual” layer and then multiple layers of increasingly fine detail applied with “Grain Merge” mode. Taken together these new layers are precisely equivalent to the original image.

The task now is to work on the individual detail layers to emphasize the details that are desired in the image, and (much less frequently) to de-emphasize details that are not. To increase the emphasis of details at a particular level, all that is required is to duplicate the appropriate layer. The finest detail layer may be duplicated many, many times, while coarse detail layers may be duplicated only once, or not at all. If even one duplication is too strong, the duplicated layer’s opacity can be reduced to control its impact.
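The combined effect of the gaussian split and the layer duplication can be approximated in a short Python sketch. This is illustrative only, not the exact algorithm or defaults used by G’MIC: each detail layer is the difference between two successive blur levels, duplicating a layer N times corresponds to weighting it by N+1, and the “Grain Merge” recombination is essentially adding the weighted detail layers back onto the residual.

import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Hypothetical filename: the desaturated image from the previous step.
img = np.asarray(Image.open("moon-mono.png")).astype(np.float32)

# Successively larger blur radii; each detail "layer" is the difference
# between adjacent blur levels (illustrative values, not G'MIC's defaults).
sigmas = [1, 2, 4, 8, 16]
blurred = [img] + [gaussian_filter(img, s) for s in sigmas]
details = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
residual = blurred[-1]

# With all weights set to 1 this reconstructs the original exactly.
# Boost the finest layer heavily (like duplicating it several times in GIMP),
# boost the middle layers gently, and leave the coarse layers alone.
weights = [6, 2, 1.5, 1, 1]   # finest ... coarsest

sharpened = residual + sum(w * d for w, d in zip(weights, details))
Image.fromarray(sharpened.clip(0, 255).astype(np.uint8)).save("moon-sharpened.png")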

GIMP layers. The default G’MIC split details filter settings created 6 layers. The layer labelled “Scale #5” holds the fine details and has been duplicated 5 times to enhance fine details. The “Scale #4” and “Scale #3” layers have both been duplicated once, and opacity reduced on the “Scale #3” duplicate.

It is recommended to work in order from the coarsest (“Scale #1”) to the finest (“Scale #5”) detail layers, and typically the first two or three levels of detail would be left unchanged to avoid unnatural looking images with too much contrast. There is no perfect set of wavelet adjustments that will provide the right amount of sharpening. It will vary depending on the camera, the telescope, the subject, the seeing conditions, the quality of stacking and more. Some experimentation will be required to find the right balance, but fortunately this is easy with layers since changes can be easily rolled back. After working on an image, ensure it is saved in GIMP’s native XCF format, leaving all layers intact. It can then be revisited the following day with a fresh eye, whereupon the sharpening may be further fine-tuned with the benefit of hindsight.

100% magnification crop after sharpening using G’MIC gaussian wavelets filter and GIMP layer blending

As the image below shows, even with a modified webcam costing < £20 on a popular auction site, it is possible to produce high detail images of The Moon’s surface, using open source tools for all post-processing after the initial video stacking process.

Complete final image after all post-processing

 

Creating star trails with automatic sky glow removal

Creating star trail images is one of the easier astrophotography tasks there is, since it doesn’t require any messing about with tracking devices. You just set a camera to point skywards, with a programmable remote control set to snap a few hundred images over an hour or more. There are then software programs which can merge the individual images together to create the final star trail. On Linux, I use a startrails plugin for GIMP to perform the merging of individual frames, which is simple to use and works pretty well.

The basic star trail image with no sky glow adjustments applied

This is all very straightforward if you’re capturing images under ideal conditions, but life doesn’t always work out that way. The first thing people do is to add in dark frames, which are images captured with the lens cap on. These frames record any hot pixels or general sensor noise that may be present. The startrail application will merge all the dark frames and then subtract the result from the light frames, removing the hot pixels from the final image.

When taking images in London, though, there is a major problem with sky glow from the ever-present light pollution. There are a number of techniques for removing sky glow, each with varying pluses and minuses. A simple way is to play with the curves/levels to reduce the intensity of the background glow and/or use colour balance corrections to try and make it less noticeable. The problem with this is that the corrections apply uniformly to the background sky, the stars themselves and any foreground objects. All too often the sky glow is not uniform, but in fact a gradient from the top to the bottom of the frame, meaning the results of curve adjustment show up the gradient.

The basic star trail image with curve adjustments to reduce intensity of the sky glow. Due to the gradient, it is only possible to remove part of the sky glow

The overall goal is to produce a background that is completely black. This could be achieved if we were able to extract the background component from the main image and then subtract it, leaving just the stars behind. The layers feature in any photo editor can be used as a mechanism for accomplishing this task.

If we assume that the sky glow is approximately the same in each frame, we can follow a simple series of steps to extract a reasonable approximation of the sky glow background. Start by opening one of the individual frames. Then select the menu option Filters -> Blur -> Gaussian Blur. The radius setting should be set to a large value such as 150px. The resulting image should now be a very smooth colour gradient representing just the sky glow background.

A single image frame blurred to leave just the sky glow background

Copy this blurred image, and switch to the star trail image previously produced by the plugin. Create a new layer, paste the blurred image into it and anchor the floating selection. Finally change the layer mode to “Subtract”.
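The same blur-and-subtract operation can be expressed outside GIMP as well. A rough Python equivalent looks like the following sketch; the filenames are hypothetical and the sigma value is merely in the same ballpark as a 150px radius blur.

import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Hypothetical filenames: the merged star trail and one individual frame.
trail = np.asarray(Image.open("startrail.png").convert("RGB")).astype(np.float32)
frame = np.asarray(Image.open("frame-0001.jpg").convert("RGB")).astype(np.float32)

# Blur the single frame heavily so only the smooth sky glow gradient remains;
# blur spatially (rows, columns) but not across colour channels.
glow = gaussian_filter(frame, sigma=(50, 50, 0))

# "Subtract" layer mode: trail minus glow, clamped at zero.
result = np.clip(trail - glow, 0, 255)
Image.fromarray(result.astype(np.uint8)).save("startrail-subtracted.png")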

The final star trail image with the gaussian blurred sky glow subtracted

The final result above doesn’t have a completely black background, because the sky glow we extracted is merely an approximation, but it does have a less significant colour cast and lower overall intensity. A small adjustment of curves can further reduce the remaining glow:

The final star trail image with the gaussian blurred sky glow subtracted and a further curves adjustment applied

While this is much improved, there are still some limitations with this technique. If the sky conditions were changing during the course of the image capture session, any single frame will not be close to the average glow of the final image, so will either over- or under-correct the sky glow. Weather often plays a part, sending clouds through the scene during capture. Ideally one would simply wait for a clear day before capturing images, but not everyone has such a luxury of time. When some images contain clouds, the final image (as seen above) will have a quite uneven, patchy sky glow which is not easily removed with a simple subtraction layer.

Ideally one would subtract the sky glow from each individual frame before merging them to produce the trail. With hundreds of frames this is quite a tedious process, but since the GIMP startrails plugin is open source, GPLv3+ licensed Python code, we have the freedom to modify it. To that end I have created a fork which can perform automatic sky glow removal by applying the gaussian blur technique to each individual frame. This more than doubles the amount of time required to produce a star trail image, but the results are very satisfying indeed.
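The fork is the place to look for the real implementation, but the core idea reduces to something like the following sketch. It is not the plugin’s actual code: the startrails merge is approximated here by a lighten-only (per-pixel maximum) blend, dark frame handling is omitted, and the directory name is hypothetical.

import glob
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def remove_sky_glow(frame, sigma=50):
    # Subtract a heavily blurred copy of the frame from itself, leaving
    # stars (and any foreground) on a much darker background.
    glow = gaussian_filter(frame, sigma=(sigma, sigma, 0))
    return np.clip(frame - glow, 0, 255)

trail = None
for path in sorted(glob.glob("frames/*.jpg")):   # hypothetical frame directory
    frame = np.asarray(Image.open(path).convert("RGB")).astype(np.float32)
    frame = remove_sky_glow(frame)
    # Star trail merging: keep the brightest value seen so far for every pixel.
    trail = frame if trail is None else np.maximum(trail, frame)

Image.fromarray(trail.astype(np.uint8)).save("startrail-glow-removed.png")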

The star trail when sky glow has been subtracted from each individual frame before merging

The only downside is that when there are foreground objects present, they tend to get a halo around their edges. On the plus side, subtracting the sky glow from each individual frame has totally removed the clouds from the image. The background is a pretty dark grey, but not completely black. This can be tweaked with a little use of curves to produce the final image.

The star trail when sky glow has been subtracted from each individual frame before merging, with final curve adjustments applied

One final tip is to run a gaussian blur with a radius of 1.2px on the final image before saving. This will smooth out any jagged edges on the trails caused by the tiny gaps between successive frames.

Stacking multiple images to reduce noise

One of the critical problems when producing astronomical images is to minimize the amount of noise in an image, while still capturing the very faint detail which is barely distinguishable from noise. The post-processing technique used to address this problem is to merge together multiple images of the same subject. The constant signal in the images gets emphasized while the random noise gets smoothed out and cancelled. There is specialized software to perform stacking of astronomical images which deals with alignment between subsequent frames, as the earth’s rotation can cause drift over time if the camera mount isn’t compensating. The image stacking technique is not merely something for astrophotographers to use, though; it is generally applicable to any kind of photography.

Image stacking in non-astrophotography scenarios is in fact simpler than one might imagine. The only physical requirement is that the camera is fixed relative to the scene being photographed, which is trivially achieved with a tripod or other similar fixed mounting. In terms of camera settings, it is necessary to have consistency across all the shots, so manual focus, fixed aperture, fixed shutter speed, fixed ISO and fixed white balance are all important. With the camera configured and the subject framed, all that remains is to take a sequence of shots. How many shots to take will depend on the quality of each individual image versus the desired end result. The noisier the initial images, the larger the number of shots that will be required. As a starting point, 10 shots may be sufficient, but as many as 100 is not unreasonable for very noisy images.

To illustrate the versatility of the image stacking technique, rather than use images from my DSLR, I’ll use a series captured from the night vision webcam of the Wurzburg radio telescope. A single captured frame from the webcam exhibits large amounts of random noise:

Wurzburg Radio Telescope single image

 

Over the course of a few minutes, 200 still frames were captured from the webcam. The task is now to combine all 200 images into one single higher quality image. Processing 200 images in a graphical user interface would be painfully time consuming, so some kind of automation is desirable. The ImageMagick program is the perfect tool for the job. It has an option “-evaluate-sequence” which can be used to perform a mathematical calculation for each pixel, across a sequence of images. The idea for minimizing noise is to take the median pixel value across the set of images. Stacking the images is thus as simple as running

# convert webcam/*.jpeg -evaluate-sequence median webcam-stack.jpeg

This is a pretty CPU-intensive process, taking a couple of minutes to run on my 8 CPU laptop. At the end of it, though, there will be a pretty impressive resulting image:

Wurzburg Radio Telescope stacked image

The observant will have noticed that the timestamp in the top left corner of the image gets mangled. This is an inevitable result of the stacking process when part of the image is moving or changing in every single frame. In this case it is no big deal, since the timestamp can either be cropped out, cloned out, or replaced with the timestamp from a single frame. In other scenarios this behaviour might actually work to your advantage. For example, consider taking a picture of a building while a person walks through the scene. If they are only present in a relatively small subset of the total captured images (say 5 out of 100), the median calculation will “magically” remove them from the resulting image, since the pixel values the moving person contributes lie far away from the median pixel values.
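For completeness, the same median stack can be written in a few lines of Python with NumPy and Pillow. This is a sketch only; it loads all 200 frames into memory at once, so it is less frugal than the ImageMagick approach.

import glob
import numpy as np
from PIL import Image

# Load every captured frame into one (N, height, width, 3) array.
paths = sorted(glob.glob("webcam/*.jpeg"))
stack = np.stack([np.asarray(Image.open(p).convert("RGB")) for p in paths])

# Per-pixel, per-channel median across all frames cancels the random noise.
median = np.median(stack, axis=0).astype(np.uint8)
Image.fromarray(median).save("webcam-stack.jpeg")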

Going back to our example image, the massive reduction in noise can be clearly seen when viewing at 1:1 pixel size with the two images adjacent to each other:

Wurzburg Radio Telescope comparison

With the reduction in noise it is now possible to apply other post-processing techniques to the image to pull out detail that would otherwise have been lost. For example, by using curves to lighten the above image it is possible to expose detail of the structure holding up the telescope dish:

Wurzburg Radio Telescope comparison

So next time you are in a situation where your camera’s high ISO noise performance is not adequate, consider whether you can make use of image stacking to solve the problem in post processing.