Forget about the camera, sampling and pixels for now and consider the virtual image. Photons from astronomical objects (and sky glow) are focused onto the focal plane to form a virtual image. A particular “object” can be any real target such as a star, galaxy, feature, or even an arbitrary patch of sky. The only requirement is that the object be bounded (limited angular extent) and fully contained within the image.
The number of photons collected from any object is entirely determined by the intrinsic brightness of the object (usually measured in magnitudes), the collecting area and efficiency of the optical objective (lens or mirror), and the length of time used to create the virtual image. Note that focal length (FL), and thus f-ratio, is irrelevant because it has no effect on any of those determinants.
The signal-to-noise ratio (S/N) of any object in the virtual image is the number of photons from that object divided by the combined Poisson noise of the object and the background sky:
Object S/N:  S/N = s / √(s + b)    (equation 1)
s = (signal) number of photons collected from the object
b = (background) number of photons collected from the background sky-glow within the same area
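As a minimal sketch (in Python), equation 1 can be evaluated directly from the definitions above; the photon counts used here are hypothetical:

```python
import math

def object_snr(s: float, b: float) -> float:
    """Equation 1: object S/N with Poisson noise from both object and sky."""
    return s / math.sqrt(s + b)

# e.g. 400 object photons on a sky background of 2100 photons (illustrative)
print(object_snr(400, 2100))  # -> 8.0
```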
The concept of object S/N can be applied directly to extended objects like nebulae and galaxies, but a useful refinement is to measure and characterize angular photon density (e.g. magnitudes per arcsec²). “Object S/N” is then replaced by “angular S/N”. The equation is the same.
Object S/N is a measure of the quality of detection. “Noise” is uncertainty, and S/N is a measure of the certainty of an observation or determination. Noise is usually expressed in units of sigma, where there is a 68% probability that the “real” value lies within ± one sigma of the observed value. Multiples of sigma increase certainty; for example, there is a 99.7% probability that the “real” and observed values are within 3 sigma of each other.
S/N is basically an expression of the certainty of the signal at 1 sigma. Thus an object S/N = 1 means that the probability of detecting that object is 68%. An object S/N > 3 means that there is >99% probability of detecting the object. Object S/N = 3 is a commonly used threshold to determine limiting magnitude. Beyond detection, S/N also measures the accuracy of brightness determination. The magnitude estimate of a low S/N star has a larger uncertainty than a high S/N star.
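The sigma-to-probability figures above follow from the Gaussian (normal) approximation to the noise; a quick check, assuming that approximation holds:

```python
import math

def prob_within(n_sigma: float) -> float:
    # Two-sided probability that the true value lies within ±n sigma
    # of the observed value, for Gaussian-distributed noise.
    return math.erf(n_sigma / math.sqrt(2))

print(round(prob_within(1), 4))  # -> 0.6827
print(round(prob_within(3), 4))  # -> 0.9973
```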
For extended diffuse objects, angular S/N governs contrast transfer. Low-contrast features in a nebula can be buried in noise at low angular S/N and revealed at higher angular S/N; higher angular S/N allows finer contrast discrimination. It is not difficult to work out the math needed to quantify contrast transfer, but that is not done here beyond noting: contrast basically consists of adjacent areas of different brightness, and if the difference in brightness is within only a few sigma then the areas will be difficult or impossible to differentiate. This is very similar to limiting magnitude for object S/N.
Lay an arbitrary grid over the virtual image at the focal plane. Does the presence of that grid actually change anything in the virtual image? Is new information made available? Do stars appear or disappear? Does a grid make a nebula brighter or dimmer?
Now use that grid to sample the image by blurring everything within each grid square. Does this grid-blurred image reveal any new information that was not present before blurring? Or does it actually destroy information? (More precisely, it convolves information.)
Now realize that the sampling grid is “pixels”, and varying the grid spacing is identical to varying FL (“f-ratio” changes for fixed aperture) and/or varying pixel size. Larger grid squares correspond to a “faster” f-ratio (assuming fixed aperture). It is obvious that the only thing “faster” f-ratios (larger grid spacing) accomplish is convolution of information. No new information is produced.
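The grid experiment can be sketched numerically. The photon counts below are hypothetical, and 2x2 binning stands in for the grid blur; the point is that binning neither adds nor removes photons, it only discards spatial detail:

```python
import random

random.seed(0)
# An 8x8 "virtual image" of photon counts (hypothetical values)
image = [[random.randint(0, 10) for _ in range(8)] for _ in range(8)]

def bin2x2(img):
    """Sum photons within each 2x2 grid square (the 'blurring' step)."""
    n = len(img)
    return [[img[2*r][2*c] + img[2*r][2*c+1] + img[2*r+1][2*c] + img[2*r+1][2*c+1]
             for c in range(n // 2)] for r in range(n // 2)]

binned = bin2x2(image)
print(sum(map(sum, image)) == sum(map(sum, binned)))  # True: no photons gained or lost
print(len(binned) < len(image))                       # True: spatial detail convolved away
```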
Angular pixel size is the amount of sky that a pixel covers and is usually expressed as the number of arcsec per side. Varying physical pixel size (including binning) and/or varying the FL (e.g. via reducer) results in different angular pixel sizes. Typical angular pixel sizes run from 0.5 arcsec to 5 arcsec.
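The usual conversion is pixel scale (arcsec) = 206.265 × pixel size (µm) / FL (mm), which follows from the 206,265 arcsec in a radian. A small helper, with example numbers that are illustrative only:

```python
def pixel_scale_arcsec(pixel_um: float, focal_length_mm: float) -> float:
    # 206265 arcsec per radian; pixel size in microns, focal length in mm
    return 206.265 * pixel_um / focal_length_mm

# e.g. 9 µm pixels at 2000 mm focal length
print(round(pixel_scale_arcsec(9, 2000), 3))  # -> 0.928
```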
It is possible to calculate a “pixel S/N” based on whatever the pixel size happens to be:
Pixel S/N:  S/Np = sp / √(sp + bp + cn²)    (equation 2)
sp = (signal) number of astronomical photons collected by the pixel
bp = number of background sky-glow photons collected by the pixel
cn = camera noise (readout noise per pixel)
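A sketch of equation 2, again with hypothetical values; note that the camera's read noise enters in quadrature alongside the Poisson terms:

```python
import math

def pixel_snr(sp: float, bp: float, cn: float) -> float:
    """Equation 2: per-pixel S/N; read noise cn adds in quadrature."""
    return sp / math.sqrt(sp + bp + cn**2)

# 100 object photons, 200 sky photons, read noise of 10 (illustrative)
print(round(pixel_snr(100, 200, 10), 2))  # -> 5.0
```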
Pixel S/N can describe some superficial aspects of the image as presented at a particular size and intensity scaling. But pixel S/N is divorced from the contents of the image (objects) because the sampling rate (pixel size) is arbitrary. For example, a star might occupy 1 pixel or 20 pixels and pixel S/N within the star could be 100 or 20, yet it is exactly the same star in exactly the same virtual image. Thus “pixel S/N” by itself cannot determine limiting magnitude or contrast transfer and is not able to meaningfully characterize imaged objects. Nor is it meaningful to directly compare pixel S/N between images having different angular pixel sizes.
To compare S/N from disparate sample rates (pixel sizes), it is necessary to normalize pixel sizes:
Normalized pixel S/N:  S/N = (a · sp) / √(a · sp + a · bp + a · cn²)    (equation 3)
a = number of pixels per arcsec² OR number of pixels within object
Notice that equation 3 is basically the same as equation 1 except for the camera noise term:
Camera object S/N:  S/N = s / √(s + b + a · cn²)    (equation 4)
The camera noise term increases with smaller pixel sizes (finer sampling). However, the effect of the camera noise term depends on its value relative to the signal and background. If the signal and/or background is significantly greater than the camera noise, then camera noise has little effect on the S/N because the noise terms add in quadrature.
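Equation 4 can be sketched to show how the pixel count a scales the read-noise contribution; all of the numbers below are hypothetical:

```python
import math

def camera_object_snr(s: float, b: float, a: float, cn: float) -> float:
    """Equation 4: object S/N including a pixels' worth of read noise."""
    return s / math.sqrt(s + b + a * cn**2)

# Same object and sky, sampled by 4 pixels vs 16 pixels (finer sampling):
print(round(camera_object_snr(400, 2100, 4, 5), 2))   # -> 7.84
print(round(camera_object_snr(400, 2100, 16, 5), 2))  # -> 7.43
```

With the sky this bright, quadrupling the pixel count costs only a few percent of S/N; the penalty grows as the read-noise term approaches the Poisson terms.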
There is a common misconception that the “depth” of astronomical images is primarily determined by f-ratio, regardless of aperture. This misunderstanding is often expressed by claims such as:
“Reducing an f/10 SCT to f/5 will allow you to take images of equal depth in 1/4th the time.”
“Binning 2x will produce the same quality (depth) in 1/4th the time.“
These assertions are incorrect and represent a profound misunderstanding of physics and information theory. These erroneous notions constitute the “f-ratio myth” (also “pixel size myth”).
The historical origin of this notion doubtless lies in the photographic experience of varying lens f-ratio, which varies aperture and thus really does vary the photon statistics of objects. Further misunderstanding arises from evaluating images based on the cosmetic appearance of the image as a whole, as seen at a particular display size, rather than analyzing the accuracy of the imaged objects.
Shrinking the image scale via a “faster” f-ratio (keeping aperture constant) or via larger pixels will convolve (blur) apparent graininess even though it produces no improvement in information quality (shrinking can actually impair information quality if the resulting display is significantly undersampled).
The f-ratio myth is reinforced by “common experience” when people take camera-limited images (i.e. read noise dominates the background) that produce results seeming to mimic the supposed “f-ratio” relationship. For example, a 10 sec binned exposure usually reveals more stars than a 10 sec unbinned exposure, but that is entirely due to read noise and is NOT due to pixel size or f-ratio per se.
Review equation 4 and consider the effects of the camera noise term as a function of pixel size. If the object is dim and the sky glow is low then the noise term can be dominated by camera noise. In that case, varying pixel size by changing FL or binning can have significant and visible effects due to the varying number of pixels within the object or angular area. These effects superficially mimic the erroneous “f-ratio myth” but the quantitative results do not agree.
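The two regimes can be contrasted with equation 4. The photon counts and read noise below are hypothetical, chosen only to oppose a sky-limited case against a camera-limited one; 2x2 binning is modeled as reducing the pixel count a from 16 to 4:

```python
import math

def camera_object_snr(s, b, a, cn):
    """Equation 4: object S/N including a pixels' worth of read noise."""
    return s / math.sqrt(s + b + a * cn**2)

cn = 10.0  # read noise per pixel (hypothetical camera)

# Sky-limited: bright background swamps read noise -> binning barely matters
skylim_1x1 = camera_object_snr(400, 20000, 16, cn)
skylim_2x2 = camera_object_snr(400, 20000, 4, cn)

# Camera-limited: faint sky, short exposure -> binning helps, but not 4x
camlim_1x1 = camera_object_snr(50, 20, 16, cn)
camlim_2x2 = camera_object_snr(50, 20, 4, cn)

print(round(skylim_2x2 / skylim_1x1, 3))  # -> 1.028 (no real "speed" gain)
print(round(camlim_2x2 / camlim_1x1, 3))  # -> 1.885 (read-noise effect, not f-ratio)
```

Neither ratio matches the factor-of-4 “speed” the myth predicts, which is the quantitative disagreement described above.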
There is a potential f-ratio effect but it is complicated and does not follow the geometric function of f-ratio or pixel size.
Each image is a 10-minute exposure using the same aperture (Tak CN-212) at vastly different f-ratios. Notice that the f/12.4 image has a very slightly noisier rendering of the dimmest diffuse objects, which is due to the effects of read noise as described above. The images are obviously very similar; if the “f-ratio myth” were true, the f/3.9 exposure should be 10 times “better”!
© Stan Moore 2010