Just what is live stacking? Live stacking is the process of averaging successive frames of an object such as a galaxy, nebula or star cluster to improve the signal-to-noise ratio (SNR) while you watch the stacked image improve right before your eyes in real time. Live stacking has become a common technique among amateur astronomers engaged in deep sky video astronomy (also called Electronically Assisted Astronomy, Near Real Time Viewing, Camera Assisted Viewing, etc.). The SNR improves because noise is random while signal is fixed: for a given pixel on the sensor, the noise varies randomly from frame to frame while the signal remains fairly constant. Therefore, when multiple frames are averaged together, the noise largely averages out while the signal is preserved. The SNR goes up, the image shows more detail, and the background sky takes on a smoother, more uniform appearance rather than the grainy look typical of a single frame. In other words, the image looks a lot better. The more frames averaged, the better the image appears, up to a point.
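The statistics behind this can be shown with a short simulation (a sketch with made-up numbers, not data from any particular camera): for independent random noise, averaging N frames cuts the noise by roughly a factor of the square root of N, so the SNR grows as the square root of N.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGNAL = 100.0        # constant "true" pixel value (arbitrary units)
NOISE_SIGMA = 20.0    # per-frame random noise level

def snr_after_stacking(n_frames, n_pixels=100_000):
    """Average n_frames noisy measurements and estimate the resulting SNR."""
    frames = SIGNAL + rng.normal(0.0, NOISE_SIGMA, size=(n_frames, n_pixels))
    stacked = frames.mean(axis=0)     # the live-stacked result
    return SIGNAL / stacked.std()     # signal over remaining noise

for n in (1, 4, 16, 64):
    print(f"{n:3d} frames: SNR ~ {snr_after_stacking(n):.1f}")
```

Each quadrupling of the frame count roughly doubles the SNR, which is why the view keeps improving as more frames go into the stack.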
There are two kinds of live stacking in use: 1) in-camera stacking; 2) stacking with a computer and a specialized software program. Let's discuss these one at a time.
In-camera stacking is available on some of the more recent analog video cameras but not on any of the digital cameras, such as the many CMOS cameras that have become available in the last few years. Analog cameras like the Revolution Imagers 1 & 2 (RI1 and RI2), the Mallincam Micro and the LnTech 300 can do live stacking internally without the need for a computer. All of these are security cameras pressed into service for astronomy; they are not purpose-built for it. In low light, security camera images can look very grainy, making it hard to distinguish features such as license plates and people, which for us means less ability to make out detail in faint nebulae and galaxies. Security camera manufacturers combat this problem with Digital Noise Reduction (DNR), which comes in two types, 2D-DNR and 3D-DNR. With 2D-DNR, the camera's algorithm compares the signal at each pixel with the signal at the same pixel in successive frames, identifying and reducing noise by averaging each pixel's signal from frame to frame. 2D-DNR is a temporal noise reduction method, since it relies on time-averaged variations. Its drawback is the potential to blur moving objects. That would not be a problem for astronomical use, but, once again, these cameras were created for video surveillance and are only being adapted to astronomy. To eliminate this blur, another algorithm called 3D-DNR was developed. 3D-DNR combines the frame-to-frame temporal noise reduction of 2D-DNR with spatial noise reduction within a frame, examining and comparing the signal at neighboring pixels to further reduce the overall noise in an image. 3D-DNR is more effective at reducing noise and improving image clarity and detail than 2D-DNR.
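The difference between the two approaches can be illustrated with a toy simulation (this is my own sketch, not the cameras' actual firmware; the function names are invented). Temporal-only averaging is compared with temporal-plus-spatial averaging on a synthetic flat, noisy scene:

```python
import numpy as np

def dnr_2d(frames):
    """Temporal only: average the same pixel across successive frames."""
    return frames.mean(axis=0)

def dnr_3d(frames):
    """Temporal average followed by a 3x3 neighborhood (spatial) average."""
    t = frames.mean(axis=0)
    padded = np.pad(t, 1, mode="edge")
    neighborhood = sum(
        padded[dy:dy + t.shape[0], dx:dx + t.shape[1]]
        for dy in range(3) for dx in range(3)
    )
    return neighborhood / 9.0

rng = np.random.default_rng(1)
frames = 50.0 + rng.normal(0.0, 10.0, size=(4, 64, 64))  # flat scene + noise
print(f"single frame noise: {frames[0].std():.2f}")
print(f"2D-DNR noise:       {dnr_2d(frames).std():.2f}")
print(f"3D-DNR noise:       {dnr_3d(frames).std():.2f}")
```

On this synthetic scene the spatial pass removes additional noise beyond what temporal averaging alone achieves, matching the claim that 3D-DNR is the more effective of the two; the trade-off on real images is some loss of fine spatial detail.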
The DNR feature in the camera menu allows one to turn this type of noise reduction on or off. The RI1, Micro and LnTech 300 cameras use 3D-DNR, while the RI2 does not specify 2D or 3D; I would assume it is also 3D since it is the newest of these cameras. When DNR is turned on, the user specifies the number of frames to average, from 1 to 5 (RI1, Micro, LnTech 300) or 1 to 6 (RI2). When DNR is off, the camera does no averaging and what you see on your monitor is just a single-frame exposure.
So, as an example, let's assume that DNR is turned on with frame averaging set to 4 frames, and that the exposure is set to 5 sec. After 5 seconds, the camera has captured its first frame, which it sends to a buffer and outputs to an attached video monitor at a rate of 60 (NTSC) or 50 (PAL) frames per second. We see that same image until the next 5 sec exposure is finished. At that point, the camera performs the DNR algorithm on the 1st and 2nd frames, stores the result, an average of both frames, in the camera buffer, and simultaneously outputs it to the monitor. When this happens, we see an improvement in the image quality. This process is repeated for the 3rd frame, at which point the displayed image is an average of 3 frames, and once again for the 4th frame. The image quality continues to improve after each frame until the end of the 4th frame. If the camera is allowed to continue running with no changes to the exposure or DNR settings, the 1st exposure is discarded and the 5th exposure is averaged with the 2nd, 3rd and 4th exposures, placed in the camera buffer and output to the monitor. This image, averaged over four 5 sec frames, has taken 20 sec to complete. In general, once the maximum number of frames has been reached, 4 in this example, the image will not get any better. However, if for instance a satellite trail passes through the field of view during one of the frames, it will be eliminated once 4 new frames have been taken after the satellite has passed. This averaging process continues, with each new frame replacing the oldest frame in the sequence, until the exposure is stopped.
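The rolling buffer behavior just described can be sketched in a few lines of Python (synthetic frames standing in for 5 sec exposures; the real camera does this in firmware):

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(2)
N_FRAMES = 4                       # the DNR frame-averaging setting
buffer = deque(maxlen=N_FRAMES)    # the oldest frame is dropped automatically

def on_new_exposure(frame):
    """Add the latest exposure and return the image shown on the monitor."""
    buffer.append(frame)
    return np.mean(list(buffer), axis=0)

for i in range(6):                 # six successive 5 sec exposures
    frame = 100.0 + rng.normal(0.0, 15.0, size=(32, 32))  # synthetic frame
    displayed = on_new_exposure(frame)
    print(f"exposure {i + 1}: averaging {len(buffer)} frame(s), "
          f"noise ~ {displayed.std():.1f}")
```

The noise drops as the buffer fills and then levels off at the 4-frame average; from the 5th exposure on, each new frame simply replaces the oldest one, which is also why a transient satellite trail ages out of the displayed image.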
In-camera stacking is a simple and convenient way to greatly improve the image you see on your monitor without a computer and additional software, and since the camera continuously outputs frames, you get to watch the image improve live. However, in-camera stacking has two limitations. First, it supports only 5 or 6 frame averaging, depending on which camera is used. This greatly limits the amount of improvement to be had, since many very faint objects would benefit from averaging over many more frames. Second, the camera makes no allowance for the object drifting within the field of view from frame to frame. In other words, there is no alignment of the frames being stacked; the camera's algorithm assumes the image does not move. This is fine for surveillance applications, but in astronomical use objects can drift within the field of view if the telescope is not perfectly aligned with the polar axis, or if one is using an Alt-Az mount. In that case, stars tend to become elongated as successive frames are averaged. How much depends on the exposure time, the number of frames selected for DNR, and the mount type and setup. For instance, the maximum exposure of the Revolution Imager 1 is 20.48 sec and the maximum DNR setting is 5, so the object would have to remain fixed in the field of view for 102.4 sec. If one were using an Alt-Az mount, it is virtually certain that the stars would appear elongated rather than round. For the Revolution Imager 2, the maximum exposure is 5.12 sec and the maximum DNR setting is 6, so the maximum total exposure would be about 31 sec, which would probably not show appreciable star trailing even with an Alt-Az mount.
Stacking with a Computer
Now, let's suppose you want to use a computer and software to do live stacking. One advantage of this method is that you can stack as many frames as you like to bring even more detail and sharpness to the image. Another advantage is that the software aligns each successive frame to the first, which can avoid star trailing for stacks of 10 min or more without guiding when using a polar-aligned EQ mount; the maximum time without star trailing depends on the quality of the polar alignment and the capability of the mount. In addition to stacking frames, most software also offers one or more of the following: dark frame subtraction; flat frames; and a histogram to stretch the stacked image. Computer stacking is a necessity for all of the digital cameras, since none of them has the capability to do in-camera stacking. Examples of these are the USB cameras from Atik (Infinity), ZWO (ASI224/1600/294, etc.), QHY (Horizon), Mallincam (SkyRaider) and Starlight Xpress (Lodestar, Ultrastar). Computer stacking can also be used with analog cameras even if they have their own internal stacking function; in that case, it is best to turn off in-camera stacking while using computer stacking.
Computer stacking works a little differently with a digital camera than with an analog camera, since a digital camera outputs an image only when an exposure is complete, whereas an analog camera outputs images at 60 or 50 frames/sec. Let's discuss stacking with a digital camera first.
With a digital camera, the output USB cable is connected to a USB port on the computer. This allows the software to control camera functions like exposure, gain and cooling, preview an image, capture an image and perform live stacking. Some software works natively with certain cameras, which means the software simply recognizes the camera and provides control of all of its functions. This is true for SharpCap with the ZWO, Altair Astro, QHY, Starlight Xpress and a few other cameras. The Starlight Xpress Lodestar and Ultrastar cameras also work natively with Lodestar Live, while the Atik Infinity camera works natively with the Infinity software, and the Mallincam SkyRaider cameras with the MallincamSky software. If the software does not natively support a given camera, it will likely support it through an ASCOM driver, though perhaps not with every camera function.
Now, as an example of live computer stacking with a USB camera, consider an exposure setting of 5 sec. The camera sends a new image to the computer every 5 sec, and the software aligns each successive image to the first by translating and rotating it to match. This continues until the user stops the stacking process. So, if stacking is allowed to run for 20 frames, the total stacking time is 5 sec/frame x 20 frames = 100 sec. If dark frame subtraction is turned on, the appropriate dark frame from a dark frame library is subtracted from each frame prior to stacking. Similarly, if the software supports flat frames, flat frame correction is applied to each captured frame prior to stacking. The user also has access to a histogram during the stacking process, where the black and white levels can be adjusted on the fly to bring out more detail. At the end of the stacking process, the user has the option to save the stacked frame.
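Putting those steps together, the per-frame pipeline can be sketched as follows (this is my own illustrative code, not any particular program's implementation; the alignment step is a stub, since real software estimates a shift and rotation from matched star positions):

```python
import numpy as np

def calibrate(frame, dark, flat):
    # Dark subtraction removes the fixed thermal pattern; dividing by the
    # normalized flat corrects vignetting and dust shadows.
    return (frame - dark) / (flat / flat.mean())

def align(frame, reference):
    # Placeholder: real software matches star positions in `frame` and
    # `reference` and resamples `frame` to remove drift and rotation.
    return frame

def live_stack(frames, dark, flat):
    """Yield the running average after each new frame, as shown on screen."""
    stack, n = None, 0
    for raw in frames:
        frame = calibrate(raw, dark, flat)
        if stack is None:
            stack, n = frame, 1
        else:
            n += 1
            stack = stack + (align(frame, stack) - stack) / n  # running mean
        yield stack

rng = np.random.default_rng(3)
dark = np.full((32, 32), 5.0)     # synthetic dark frame (fixed offset)
flat = np.ones((32, 32))          # synthetic (perfectly even) flat
frames = (100.0 + dark + rng.normal(0.0, 12.0, size=(32, 32))
          for _ in range(20))     # twenty synthetic 5 sec exposures
for n_stacked, view in enumerate(live_stack(frames, dark, flat), start=1):
    pass
print(f"after {n_stacked} frames, background noise ~ {view.std():.1f}")
```

Each yielded `view` is what the user watches improve; a histogram stretch for display would be applied to this running average without altering the stack itself.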
Computer stacking with an analog camera is identical to what has just been described except for two things. First, the camera's output must be connected to a video capture device, which digitizes the frames and sends them to the computer over USB. Second, the analog camera sends out frames at 60 or 50 frames per second regardless of the length of the exposure. So, if the exposure is set to 5 sec as before, the camera will output 60 (or 50) frames/sec x 5 sec = 300 (or 250) frames from the first 5 sec exposure before it starts to output a new exposure. All of those frames will be stacked even though they are essentially identical. The software has no idea whether the individual frames are identical, since it never sees the camera, only the video capture device, and does not know what exposure time the camera is set to. Thus, for the same 100 sec stack, there will be 60 (or 50) frames/sec x 100 sec = 6000 (or 5000) frames in the stack. The computer just sees each frame as it arrives and diligently stacks them all. The fact that the analog camera outputs 60 (or 50) nearly identical frames per second during each 5 sec exposure is not really a problem. In fact, because the signal is analog, it is prone to pick up noise in the cable between the camera and the capture device which digitizes the frames. This noise is random, so the frames are not completely identical, and stacking them all actually helps improve the image SNR a bit further. An exception to this is the MiloSlick Mallincam Control software, which can do all of the live stacking functions described above but can also be synchronized to the actual exposure time of the analog camera, so that it stacks only the new frame output at the end of each complete exposure. This software was designed to work with a variety of Mallincam analog cameras such as the Xtreme, the Xterminator and others.
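The frame-count arithmetic above can be spelled out explicitly (a sketch using the NTSC rate from the example):

```python
# NTSC figures; a PAL camera would stream at 50 frames/sec instead.
FPS = 60               # analog output rate, fixed by the video standard
EXPOSURE_S = 5         # the camera's internal (sense-up) exposure time
STACK_TIME_S = 100     # how long live stacking is allowed to run

frames_per_exposure = FPS * EXPOSURE_S        # near-identical copies streamed
total_frames_stacked = FPS * STACK_TIME_S     # everything the software stacks
distinct_exposures = STACK_TIME_S // EXPOSURE_S
print(frames_per_exposure, total_frames_stacked, distinct_exposures)
# prints: 300 6000 20
```

Only 20 genuinely new exposures are captured, yet the software happily stacks all 6000 frames it receives.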
So, that is how live stacking works for both analog and digital cameras, using either in-camera stacking (if available) or stacking with a computer. No matter which method is used, live stacking is a powerful tool that greatly improves the quality of the deep sky objects we view in real time.