eXtreme Imaging – going deep

 

A long time ago in a galaxy far away, a stream of photons made their escape after spending a million years bouncing around inside their birth stars, then spread out into nearly empty space.  No time passes for them, thanks to relativity, but from our perspective nearly 50 million years go by before a minuscule fraction of them falls onto our Earth.

 

As seen from that galaxy, which we designate NGC7331, the Earth is a tiny spot less than 0.00001 arcsec across, intercepting less than 10^-12 of the photons produced by that galaxy.  But the galaxy creates a prodigious number of photons, and about 10^15 of its visual and NIR photons fall onto the Earth every second.  Not every Earth-bound photon makes it to the surface, due to atmospheric absorption and scattering, but when NGC7331 is high in the night sky, a 14” telescope intercepts approximately 1 million visible and NIR photons/sec from that distant place.

 

Signal-to-noise (S/N)

 

Image depth is all about “signal-to-noise ratio” and so it is worthwhile to spend a little time exploring the subject of S/N, paying particular attention to aspects related to the imaging of dim objects. 

 

In the case of NGC7331, we might regard all detected photons from the entire galaxy as the signal, particularly if we were primarily concerned with simply detecting the galaxy.  This is a rather strong signal, at least for an astronomical object.  If the 14” scope and camera had a combined efficiency near 30% then a 1 second exposure would register about 300,000 photons, with object S/N > 500.  Obviously, NGC7331 is easily detected in a 14” scope.

 

But NGC7331 is a complex object with both bright and dim features.  It is interesting to note that a single sun-like star in NGC7331 contributes less than 1 billionth of the total number of photons from the galaxy.  This means that a 14” scope will receive less than 1 photon per hour from that star, and any planet of that star will contribute less than one photon every century or so.  So even if that 14” scope could potentially “resolve” such planets, we would still never be able to see them.

 

But what about dim features that we can resolve, such as the outer arms?  Extended objects are photometrically characterized by photon rate per angular-area, usually expressed as magnitudes per square-arcsec.  Atmospheric sky glow is also characterized by magnitudes per square-arcsec and it is usually a major factor determining image depth.

 

Signal to Noise Equation

 

There are many forms of the “signal-to-noise” equation, depending on the degree of simplicity or precision desired.  The simplest form includes only the Poisson noise of the object itself and the noise from the associated sky-glow:

 

            S/N = P / sqrt(P + sky)

            Where P = total photons = photonRate * time

                        sky = sky photons from the area of the object = skyRate/area * time * objSize

 

Note that this equation is valid whether P is taken as the total signal or as the signal per angular-area.  The equation ignores camera issues and computes the “ideal S/N” for a potential image.  It can be thought of as the result of a perfect/ideal camera having 100% QE, zero noise, and infinitesimal pixel size.  It is thus the upper limit possible for any real-world image, and it is instructive to examine some of the consequences of exposure time and sky brightness on image depth.
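
As a concrete illustration, here is a minimal Python sketch of that ideal S/N calculation (the function name and the example rates are made up for illustration, not measured values):

import math

def ideal_snr(photon_rate, sky_rate_per_sq_arcsec, obj_area_sq_arcsec, time):
    """Ideal S/N: only the Poisson noise of the object and of the sky-glow under it."""
    p = photon_rate * time                                    # total object photons
    sky = sky_rate_per_sq_arcsec * obj_area_sq_arcsec * time  # sky photons over the object's area
    return p / math.sqrt(p + sky)

# e.g. a 300,000 photon/sec object spread over 100 sq-arcsec of sky glowing at 5,000 photons/sec/sq-arcsec
print(ideal_snr(300000, 5000, 100, 1))   # ~335 for a 1 second exposure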

 

S/N values

 

Before examining the S/N equation, it is useful to have a notion of some typical S/N values.

 

A commonly used value for object S/N for detection at limiting magnitude is 3. This limit is usually applied to point sources (stars or very small galaxies). Note that this is “object S/N” and the limiting mag for “pixel S/N” can easily be much lower, depending on sampling.

 

Determining useful S/N limits for differentiating extended objects is a problem of contrast visibility.  We want to determine the minimum S/N necessary to differentiate a given contrast.  I propose that a good threshold for visually differentiating two signals is a difference equal to the Poisson noise of the higher signal:

 

contrast = (signal_1-signal_2) / signal_1

 

Algebraically rearranging the equations gives:

 

            contrast = sqrt(signal_1) / signal_1 = 1 / (S/N)

 

            lower limit sn = 1/contrast

 

This means that a given S/N will support contrasts greater than the inverse of the S/N.

(This form ignores sky noise, though you can use the sky as “signal_2” to compute the contrast between the sky and the object.)

 

Examples:

 

S/N=3; contrast = 33%

S/N=10; contrast = 10%

S/N=100; contrast = 1%

S/N=500; contrast = 0.2%
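
In code, this rule is just a reciprocal; a short Python sketch reproduces the table above:

for sn in (3, 10, 100, 500):
    print(f"S/N={sn}; contrast = {1 / sn:.1%}")   # 33.3%, 10.0%, 1.0%, 0.2%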

 

Signal affects and effects

 

Returning to the basic S/N equation, one of the first things to notice is that P represents the number of photons gathered by a certain aperture over a certain amount of time. 

 

Not all of the photons intercepted by the scope make it to the image plane.  Each reflective surface loses about 12% of the visible & NIR photons and each glass transmission loses another 1-2%.  This can add up: my system has 3 mirrors and 2 glass plates, which results in an overall transmission of about 67%.

 

Furthermore, my camera (ST-10) does not detect all of the photons delivered to it.  It has approximately 40% QE across the visual/NIR band, which means that my camera detects less than 30% of the photons intercepted by the scope.
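
The overall efficiency is simply the product of the individual surface transmissions and the camera QE.  A quick Python sketch using the approximate loss figures quoted above:

mirror_reflectivity = 0.88    # ~12% loss per reflective surface
glass_transmission = 0.985    # ~1-2% loss per glass transmission
qe = 0.40                     # approximate ST-10 QE across the visual/NIR band

optics = mirror_reflectivity**3 * glass_transmission**2   # 3 mirrors, 2 glass plates
print(f"optical transmission ~ {optics:.0%}")              # ~66%
print(f"overall efficiency   ~ {optics * qe:.0%}")         # ~26%, i.e. under 30%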

 

This leads to a modification of “signal” in the S/N equation:

 

            S/N = eP / sqrt(eP + e*sky)

            where e = overall system efficiency (optics transmission * camera QE)

 

This equation tells us several interesting things about S/N:

1)      S/N scales linearly with changes in aperture diameter.  An 8” scope will yield 2x better S/N than a 4” scope.

2)      S/N scales by the square-root of changes in the exposure time.  Doubling the exposure time yields 1.4x better S/N.

3)      Improvements or degradations in efficiency affect S/N by the square-root of the change.  For example, if I were to replace my ST-10 with an ST-10me, the 1.15x better QE would result in an S/N improvement of only 1.07x.  (See the sketch after this list.)
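
Those three scaling rules are easy to capture in a small Python sketch (pure ratios; no absolute calibration is implied):

import math

def snr_change(aperture_ratio=1.0, time_ratio=1.0, efficiency_ratio=1.0):
    """Relative change in S/N for relative changes in aperture diameter, exposure time, and efficiency."""
    return aperture_ratio * math.sqrt(time_ratio) * math.sqrt(efficiency_ratio)

print(snr_change(aperture_ratio=2))        # 8" vs 4" scope        -> 2.0x
print(snr_change(time_ratio=2))            # doubled exposure time -> ~1.41x
print(snr_change(efficiency_ratio=1.15))   # ST-10 -> ST-10me QE   -> ~1.07x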

 

Sky effects and affects

 

The S/N equation can be algebraically rearranged to:

 

            S/N = sqrt(P) / sqrt(1 + sky/P)

 

This is an interesting form: it shows that the relationship of S/N to sky brightness depends strongly on the sky-to-signal ratio.  The equation can be simplified into two approximations, based on the brightness of the object relative to the sky:

 

For objects significantly brighter than the sky

 

            S/N ≈ sqrt(P)

 

If the object is significantly brighter than the sky, the sky-to-signal term is insignificant and sky brightness has only a small effect on S/N.  Thus bright objects are not much harmed by light-polluted skies.

 

 

For objects dimmer than the sky

 

            S/N ≈ P / sqrt(sky)

 

As the object becomes dimmer than the sky, the equation is dominated by the sky term and S/N scales with the inverse square-root of the sky brightness.  Thus for dim objects, the time and sky terms operate nearly in tandem: a 2x brighter sky requires almost 2x more exposure time to reach a similar dim-object S/N.  A typical semi-urban/suburban sky has a brightness of approximately 17 mag/arcsec^2 and a good rural sky is about mag 20.  That means a dim object will require approximately 16x more exposure time under the bright sky to equal the S/N obtained under the dark sky.  (But recall from above that bright objects do not require much extra time.)
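
That 16x figure follows directly from the 3-magnitude difference in sky brightness; a quick sketch of the arithmetic:

# dim-object regime: required exposure time scales with sky flux
bright_sky_mag = 17.0   # semi-urban/suburban sky, mag/arcsec^2
dark_sky_mag = 20.0     # good rural sky, mag/arcsec^2

sky_flux_ratio = 10 ** ((dark_sky_mag - bright_sky_mag) / 2.5)
print(f"the brighter sky needs ~{sky_flux_ratio:.0f}x more exposure time")   # ~16x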

 

Measuring your sky glow

 

You can measure your particular sky brightness by comparing image background to a star of known magnitude.  A JavaScript calculator to do this can be found on my web site: http://www.stanmooreastro.com/CCD_topics.html
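
Purely to illustrate the idea behind that calculator, here is a rough Python sketch of my own (the function name, formulation, and numbers below are made up; they are not taken from the JavaScript calculator):

import math

def sky_mag_per_sq_arcsec(star_mag, star_counts, sky_counts_per_pixel, arcsec_per_pixel):
    """Estimate sky brightness by comparing the background level to a star of known magnitude.
    star_counts should be the star's total background-subtracted counts."""
    sky_counts_per_sq_arcsec = sky_counts_per_pixel / arcsec_per_pixel**2
    return star_mag - 2.5 * math.log10(sky_counts_per_sq_arcsec / star_counts)

# hypothetical numbers: a mag 12 star totalling 200,000 counts, 500 counts/pixel background, 1.2"/pixel
print(sky_mag_per_sq_arcsec(12.0, 200000, 500, 1.2))   # ~18.9 mag/arcsec^2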

 

Filters

 

Obviously filters have a strong effect on sky flux.  Narrow-band filters usually produce very low sky brightness.

 

Factors that affect sky glow

 

1)      location

2)      time of night

3)      phase of moon

4)      sky sector

5)      filter

 

Transparency

 

Another important sky effect is transparency, which varies with site elevation, atmospheric conditions, the object's altitude in the sky, and spectral zone.  A blue object low in the sky, observed from a low-elevation site, can be attenuated by as much as 80%.  Transparency t can be incorporated into the S/N equation:

 

            S/N = tP / sqrt(tP + sky)

 

The S/N behavior with respect to sky transparency is complicated by the relative brightness of the sky-glow.  For a bright object and/or a dark sky, S/N scales approximately with the square-root of changes in transparency.  But for dim objects and/or a bright sky, S/N scales nearly linearly with changes in transparency (which means it is especially advantageous to image near the zenith).
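
A small Python sketch of those two regimes, using the transparency-modified equation above (the photon counts are arbitrary illustrative values):

import math

def snr_with_transparency(t, p, sky):
    """S/N with atmospheric transparency t applied to the object photons; the sky-glow is not attenuated."""
    return t * p / math.sqrt(t * p + sky)

# bright object / dark sky: halving transparency costs about sqrt(2) in S/N
print(snr_with_transparency(1.0, 1e6, 1e3) / snr_with_transparency(0.5, 1e6, 1e3))   # ~1.41
# dim object / bright sky: halving transparency costs about 2x in S/N
print(snr_with_transparency(1.0, 1e3, 1e6) / snr_with_transparency(0.5, 1e3, 1e6))   # ~2.0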

 

 

Camera and Processing Noise

 

Real cameras produce small amounts of noise, which can easily affect eXtreme images.

 

           

 

 

There are several sources of camera noise:

1)      readout noise per pixel

2)      dark current per pixel

3)      QE variation between pixels

4)      Outlier events (cosmic rays)

 

In addition there are some processing noises:

1)      calibration error

2)      combine method effects (incl resampling)

3)      arithmetic noise (almost always negligible)

 

It is possible to create S/N equations that capture most of these errors.  For example, here is an equation for the object S/N of a simple combine of offset (dithered) sub-exposures with no outliers:

 

            S/N = S / sqrt( S + n*p*( b + d^2 + r^2 + (b/fsn)^2 ) )

 

            where

                        n = number of images

                        p = number of object pixels per image

                        S = sum of signal from the p pixels in all n images (excluding background)

                        b = mean sky background e- / pixel

                        d = dark noise e- / pixel (incl. noise of the calibration frame)

                        r = readout noise e- / pixel

                        fsn = flat frame’s signal-to-noise

 

That equation is a little intimidating at first, but several instructive points can be drawn from it, particularly as they relate to deep imaging:

 

1)      Camera noise is not always significant, particularly for “Sky limited” exposures.  If it is possible to take “sky limited” exposures, then do!

2)      Camera noise increases as a function of the number of pixels used to capture an object.  From a purely S/N point of view, it does not much matter whether the number of pixels is driven by sampling or by sub-exposures.  When it is not possible to take a “sky limited” exposure, it can be advantageous to minimize the number of pixels via binning and/or fewer, longer sub-exposures (see the sketch after this list).

3)      It is very advantageous to offset (dither) each sub-exposure; otherwise the calibration errors add arithmetically (instead of quadratically) and CCD flaws are reinforced.

4)      A high quality flat frame is important for dim objects taken in bright skies and/or long exposures resulting in a high background.   Flat error is also decreased by many offset sub-exposures, though the readout error of non-sky-limited sub-exposures could easily overpower that effect.

5)      Dark current can play a significant role in narrow-band images.  The dark current of Kodak KAF CCDs has a very non-normal distribution, with at least two hot-pixel populations.  The hot pixels can inject a surprising amount of error, so it is important to cool the camera as much as possible and to create very high quality master dark frames.  For narrow-band imaging, I recommend combining at least 30 exposures to create the master dark in order to tame the readout noise and some of the hot pixels.
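
Here is a rough Python sketch of the combined S/N calculation, following the variable definitions given above; the demo numbers are hypothetical, chosen only to illustrate the trade-off in point 2:

import math

def combined_snr(S, n, p, b, d, r, fsn):
    """Object S/N for n offset (dithered) sub-exposures, simply combined.
    S = object signal (e-) summed over p pixels in all n images; b = sky e-/pixel;
    d = dark noise e-/pixel; r = readout noise e-/pixel; fsn = flat's S/N."""
    variance = S + n * p * (b + d**2 + r**2 + (b / fsn)**2)
    return S / math.sqrt(variance)

# same total time and signal, split into 10 longer vs 40 shorter (non-sky-limited) sub-exposures
print(combined_snr(S=20000, n=10, p=100, b=200, d=3, r=10, fsn=300))   # ~35
print(combined_snr(S=20000, n=40, p=100, b=50,  d=3, r=10, fsn=300))   # ~25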

 

Sky-limited exposures

 

The purpose of sky-limited exposure time is to bury the camera’s readout noise with sky noise. If the image's total background noise is within about 5% of the noise due only to sky-glow, then the image is basically sky-limited (i.e. the added readout noise is trivial).

 

 

To get the ratio totalNoise / skyNoise <= 1.05 (i.e. a 5% increase), where totalNoise = sqrt(skyGlow + readNoise^2) and skyNoise = sqrt(skyGlow), the skyGlow in electrons should be

 

            skyGlow >= 9.8 * readNoise^2

 

expressed in ADU (g = gain in e-/ADU; pedestal = the camera’s ADU offset):

 

            background >= 9.8 * readNoise^2 / g + pedestal

 

For example, my ST-10 has read noise = 10 e- and gain = 1.3 e-/ADU, so to achieve the sky-limit my background level should be at least 9.8*10^2/1.3 + 100 (pedestal) ≈ 854 ADU.  For my site, f/8 scope, and unfiltered ST-10, this level is usually reached in about 15 minutes.
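
The same arithmetic as a tiny Python sketch:

read_noise = 10.0   # e-
gain = 1.3          # e-/ADU
pedestal = 100      # ADU offset the camera adds to every pixel

min_sky_e = 9.8 * read_noise**2              # sky-glow electrons needed to bury the readout noise
min_sky_adu = min_sky_e / gain + pedestal    # the same threshold expressed in ADU
print(round(min_sky_e), round(min_sky_adu))  # 980 e-  ->  ~854 ADU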

 

 

Resolution and depth

 

There is a relationship between resolution and S/N for small objects and features.  Poor seeing smears the object across much more sky than it would cover in good seeing.  This dilutes the object's signal over a larger area of sky background, so the noise is increased by the extra sky-glow.  The effect can be surprisingly strong for stars and very small galaxies.
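
A rough sketch of that dilution effect for a point source, with the measuring aperture set by the seeing disk (all rates below are hypothetical):

import math

def point_source_snr(star_rate, sky_rate_per_sq_arcsec, fwhm_arcsec, time):
    """Approximate point-source S/N when the star's light is measured over a seeing-disk-sized aperture."""
    area = math.pi * (fwhm_arcsec / 2) ** 2            # crude aperture area ~ the seeing disk
    p = star_rate * time
    sky = sky_rate_per_sq_arcsec * area * time
    return p / math.sqrt(p + sky)

print(point_source_snr(50, 1000, 1.5, 60))   # good seeing (1.5") -> ~9
print(point_source_snr(50, 1000, 4.0, 60))   # poor seeing (4.0") -> ~3.4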

 

A somewhat similar effect occurs with over-sampling in a non-sky-limited exposure: the camera noise from using an excessive number of pixels hurts the dim-object S/N and the limiting magnitude.  Likewise, an excessive number of non-sky-limited sub-exposures damages S/N.

 

System and environmental considerations

 

Real scopes, adapters, reducers, correctors, filters, and cameras usually produce small defects due to reflections, scatter, stray light, and the like.  Most of these effects are ameliorated via flat fielding, but not all defects can be addressed with a simple flat.  For example, filter/CCD-window reflections of a star or object cannot be calibrated out with a normal flat frame.  Additionally, the sky-glow may possess a slight gradient that cannot be removed via a normal flat.

 

Usually these effects are small but they can become problematic for eXtreme imaging.  It is surprisingly difficult to obtain an image that is flat within 1% and if you are doing a very long deep exposure then that 1% error can become obvious and effectively frustrate the usefulness of longer exposures.

 

Here are a few processing techniques to deal with these defects:

1)      Most sky gradients are very nearly linear across the small FOV of a scope and can be removed with a simple linear (planar) fit; see the sketch after this list.

2)      More complicated gradients might be removed by taking images of nighttime “blank sky” in the same alt/az region as the target.  These images can then be processed into a secondary flat to be applied to the base calibrated image.  This is especially useful for the “fringing” that often occurs in thinned, back-illuminated CCDs (due to atmospheric emissions).

3)      Filter/CCD window reflections of bright objects can be removed by taking secondary calibration images of a bright star and processing those images to produce a secondary flat or subtraction mask.

4)      As a last resort, some people “dodge & burn” the image in Photoshop.  This practice is dangerous because it can invent false detail and/or erase true data.
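
As an example of point 1, here is a minimal Python sketch of removing a linear sky gradient by least-squares fitting a plane to the image and subtracting it (it assumes numpy, and that bright objects have been masked or occupy a negligible fraction of the frame):

import numpy as np

def remove_linear_gradient(img):
    """Fit a plane a*x + b*y + c to the image and subtract it, preserving the mean background level."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()

# demo on synthetic data: a flat sky of 500 plus a gentle left-to-right gradient
sky = 500 + 0.05 * np.arange(200)[None, :] + np.zeros((200, 200))
print(remove_linear_gradient(sky).std())   # ~0 once the gradient is removed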

 

Summary – Important factors for eXtremely deep images

 

1)      Aperture

2)      Efficiency – scope and camera (QE)

3)      Total exposure time

4)      Sky glow and transmission; filters; sky-limited sub-exposures

5)      Offsetting sub-exposures

6)      Limiting the number of sub-exposures

7)      Resolution and sampling

8)      High quality calibration frames

9)      Appropriate processing

10)  Special processing for idiosyncratic defects