Abstract
C-Munipack is a software suite providing a complete solution for the reduction of images acquired by a CCD camera, intended for the observation of variable stars. Since the first public release in April 2003, it has been successfully used by tens of amateur as well as professional astronomers, and it is highly appreciated mainly for the easy-to-use design of its user interface.
This document is a part of the software documentation. Unlike the user's manual, which teaches how to use the program, the intention of this document is to give the reader a detailed explanation of how the data are processed in the software, how the output data are computed and how the output data are formatted. The text is divided into three sections: a description of the mathematical algorithms, specifications of the file formats, and the design of the library's application programming interface.
The algorithms involved at the various stages of the data processing are described in detail in this section. Its purpose is to explain to a curious reader how the data are handled, what the numbers and graphs mean and how they are derived. I do not claim that I have invented these algorithms; wherever possible, proper references to the original materials are given.
In the following text, reference is often made to bad and overexposed pixels. A pixel is regarded as bad if its value is equal to or less than the value of the configurable parameter Minimum pixel value. Usually this parameter is set to zero, the lowest possible value that can be stored in an image. An overexposed pixel has a value equal to or greater than the value of a second configurable parameter, Maximum pixel value. This parameter should be set to the value at which the analog-to-digital converter saturates, but it can be set to a lower value to exclude the upper part of the intensity range where the detector response is not sufficiently linear. In the correction algorithms, both bad and overexposed pixels are treated separately from other, valid, pixels.
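As a concrete illustration, the classification can be written in a few lines. This is a sketch only; the parameter names mirror the configurable parameters described above, not the C-Munipack API:

```python
import numpy as np

def classify_pixels(frame, min_pixel_value=0.0, max_pixel_value=65535.0):
    """Return boolean masks of bad, overexposed and valid pixels.

    The thresholds correspond to the Minimum/Maximum pixel value
    parameters; the default values shown here are only illustrative.
    """
    bad = frame <= min_pixel_value
    overexposed = frame >= max_pixel_value
    valid = ~(bad | overexposed)
    return bad, overexposed, valid
```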
There is a choice of two calibration schemes for the dark-frame correction. In the standard calibration scheme, the dark-frame correction is a subtraction of a dark frame D from a source frame X to generate an output frame Y. The following formula is applied to each pixel (x,y) independently:
$$ Y(x,y) = X(x,y) - D(x,y) \tag{1.1} $$
Bad and overexposed pixels require special treatment:
In the advanced calibration scheme, a scalable dark frame that was acquired with exposure duration t_SD is scaled to match the exposure duration t_X of a source frame X. Then, the scaled dark frame is subtracted from the source frame to generate an output frame Y.
$$ Y(x,y) = X(x,y) - \frac{t_X}{t_{SD}}\,D(x,y) \tag{1.2} $$
The program makes sure that the dark frame is a scalable dark frame by checking the status of the SCALABLE flag, which is stored in the file’s header.
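A minimal sketch of both calibration schemes follows; it assumes bad and overexposed pixels have already been masked out by the caller (the program's actual treatment of them is described above):

```python
def dark_correction(source, dark, t_x=None, t_sd=None, scalable=False):
    """Subtract a dark frame from a source frame; equations (1.1) and (1.2).

    source, dark -- 2-D numpy arrays of equal shape
    t_x, t_sd    -- exposure durations of the source and the scalable dark
    """
    if scalable:
        # advanced scheme: scale the dark frame by the exposure ratio (1.2)
        return source - (t_x / t_sd) * dark
    # standard scheme: plain subtraction (1.1)
    return source - dark
```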
The flat-frame correction is a division of a source frame X by a flat frame F, multiplied by a constant k. The constant k is determined such that the range of pixel values and the gain factor (the number of photons per ADU) are roughly preserved; it is computed as the mean value of the correction frame F by means of the robust mean algorithm, see chapter 1.15.
$$ Y(x,y) = k\,\frac{X(x,y)}{F(x,y)} \tag{1.3} $$
Bad and overexposed pixels require special treatment:
The bias-frame correction is a subtraction of a bias frame B from a source frame X to generate an output frame Y. Unlike the dark correction, scaling is not applied, because the bias is independent of the exposure duration. The bias-frame correction is applied only in the advanced calibration scheme.
$$ Y(x,y) = X(x,y) - B(x,y) \tag{1.4} $$
The following statements describe how bad and overexposed pixels are treated:
A master dark frame is created from a set of raw dark frames. The robust mean algorithm (chapter 1.15) is applied on a pixel-by-pixel basis, while leaving out bad and overexposed pixels. If there are enough source frames, the robustness of the algorithm ensures that an accidental artifact (e.g. cosmic ray particle) is ruled out and does not affect the mean value. If a pixel has an invalid value on all source frames, it is marked as a bad pixel on the resulting frame.
The source dark frames must have the same exposure duration, otherwise the resulting frame is not correct.
A scalable dark frame is computed in the same way as a master dark frame. The program checks that the bias-frame correction was applied to all source frames and sets the SCALABLE flag in the file’s header. The status of this flag is checked in the dark correction, see chapter 1.1.
A master flat frame is created from a set of raw flat frames. Unlike the dark frames, we normalize the source frames first to ensure an equal mean value for all of them. This is correct, because the mean level of a flat frame depends on the intensity of the light source that was used to illuminate the camera. What we are seeking here is the ratio of the sensitivity of a detector pixel to the mean sensitivity of all pixels. It would be natural to normalize the flat frames to a mean value of 1.0, but if we attempted to store such a normalized frame to a file storing each pixel value as an integer number, the quantization noise would destroy all our efforts. Therefore, we have to choose a value which is high enough to reduce the quantization noise. On the other hand, the allowed range of values that can be stored to a file is limited and depends on the number of bits reserved for each pixel; if we put the mean intensity too high, some of the pixels would be out of range. The value to which the flat frames are normalized is a configurable parameter; its default value of 10,000 provides a trade-off between the quantization noise and the limited range, and it is appropriate in the vast majority of cases.
The steps are as follows: First, take each of the source frames, apply the robust mean algorithm (chapter 1.15) to compute its mean intensity, then normalize the frame by dividing each pixel value by the mean value and multiplying it by the required mean value (10,000 by default). Next, apply the robust mean algorithm to the normalized frames on a pixel-by-pixel basis, while leaving out bad and overexposed pixels, as in the sketch below. If a pixel has an invalid value on all source frames, it is marked as a bad pixel on the resulting frame.
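A sketch of the procedure, with numpy's median functions standing in for the robust mean of chapter 1.15 and NaNs marking bad or overexposed pixels:

```python
import numpy as np

def make_master_flat(frames, target_level=10000.0):
    """Combine raw flat frames into a master flat (a sketch only)."""
    normalized = []
    for f in frames:
        level = np.nanmedian(f)            # stand-in for the robust mean
        normalized.append(f * (target_level / level))
    stack = np.stack(normalized)
    # pixel-by-pixel combination; pixels invalid on all frames stay NaN
    # and would be marked as bad pixels in the resulting frame
    return np.nanmedian(stack, axis=0)
```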
A master bias frame is created from a set of raw bias frames. The robust mean algorithm (chapter 1.15) is applied on a pixel-by-pixel basis, while leaving out bad and overexposed pixels. If there are enough source frames, the robustness of the algorithm ensures that an accidental artifact is ruled out and does not affect the mean value. If a pixel has an invalid value on all source frames, it is marked as a bad pixel on the resulting frame.
Star detection is an algorithm that finds stellar objects on a CCD frame. The input of the algorithm is a calibrated CCD frame F; that is, the bias, dark and flat corrections have already been applied to the source image data. The output of the algorithm is a table of x and y coordinates that correspond to the centers of the detected stellar objects. For practical use, the robustness of the detection is a crucial property, because the signal-to-noise ratio of real images tends to be low, and there are often artifacts present.
The algorithms in the C-Munipack software follow those of Stetson's FIND routine; see [6] for the documentation of the original code.
In the prologue of the star detection, the overall background mean level and noise level are determined; these values are required in the following phases. The robust mean algorithm, described in section 1.15, is used to determine the mean level and the standard deviation of the background. It is assumed that the majority of pixels are not covered by stars; in this case the pixels that are covered by stars are rejected by the robust mean as outliers. If the field were almost entirely covered by stars, the background estimate would be biased.
Not all pixels are included in the computation of the robust mean; the program filters out pixels that belong to the user-defined border, as well as bad and overexposed pixels.
Because of the noise, the shapes of stars are not regular. Especially for faint stars, the maximum pixel need not be located at the star's center. If the telescope was slightly defocused during the exposure, the profiles of stars have a depression in the center, caused by the projection of the secondary mirror on the detector. Insufficient filtering at this stage causes multiple detections of a single object.
The low-pass filter is implemented by fitting a function y to each pixel, using several pixels in its neighborhood. The function y is a linear combination of a sampled two-dimensional Gaussian function g(i,j), which represents the image of a stellar object, and a constant function, which represents the local background level. The linear least squares method is used to determine the two coefficients h and s of the fitted function y:
$$ y(i,j) = h\,g(i,j) + s, \qquad g(i,j) = \exp\!\left(-\frac{(i - i_0)^2 + (j - j_0)^2}{2c^2}\right) \tag{1.5} $$
where i_0 and j_0 are the coordinates of the central pixel and c is computed from the configurable parameter FWHM, which determines the width of the Gaussian function and the filter's frequency response. The parameter c is related to the FWHM parameter according to:
$$ c = \frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}} \tag{1.6} $$
The linear least squares method finds the values of the free parameters h and s which minimize the sum of squared residuals between the original image data F(i,j) and its model y(i,j):
$$ S = \sum_{i,j}\left[F(i,j) - y(i,j)\right]^2 \tag{1.7} $$
The Gaussian function as well as the constant function are continuous and of unbounded extent; for a practical implementation, we have to restrict the fitted area to the pixel's neighborhood. We determine the half-length of the filter using the FWHM parameter:
$$ \text{filter half-length} = \max\left(2,\ \left\lfloor 0.637\,\mathrm{FWHM} \right\rfloor\right) \tag{1.8} $$
For practical use, we define n as the width and height of a square matrix, n = 2 · (filter half-length) + 1, and N as the number of pixels in the pixel's neighborhood, N = n².
From the solution of the linear least squares we get the following formula for the coefficient h that multiplies the Gaussian function:
$$ h = \frac{\sum_{i,j} g(i,j)\,F(i,j) - \frac{1}{N}\sum_{i,j} g(i,j)\sum_{i,j} F(i,j)}{\sum_{i,j} g^2(i,j) - \frac{1}{N}\left(\sum_{i,j} g(i,j)\right)^2} \tag{1.9} $$
Equation 1.9 is applied to each pixel of a source frame F independently, leaving out pixels that are closer to the border than the filter half-length. The computed values of the h coefficient are stored in a new auxiliary frame H.
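The filter can be sketched as a direct (and deliberately slow) double loop over the frame. The half-length rule follows the reconstruction of equation 1.8 above and should be treated as an assumption:

```python
import numpy as np

def lowpass_h_frame(F, fwhm):
    """Compute the auxiliary frame H of equation (1.9) by brute force."""
    c = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))     # equation (1.6)
    half = max(2, int(0.637 * fwhm))                  # equation (1.8), assumed
    n = 2 * half + 1
    N = n * n
    jj, ii = np.meshgrid(np.arange(-half, half + 1),
                         np.arange(-half, half + 1))
    g = np.exp(-(ii ** 2 + jj ** 2) / (2.0 * c * c))  # sampled Gaussian
    denom = (g * g).sum() - g.sum() ** 2 / N
    H = np.zeros_like(F, dtype=float)
    rows, cols = F.shape
    for y in range(half, rows - half):
        for x in range(half, cols - half):
            win = F[y - half:y + half + 1, x - half:x + half + 1]
            # numerator of equation (1.9) evaluated on the local window
            H[y, x] = ((g * win).sum() - g.sum() * win.sum() / N) / denom
    return H
```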
The object candidates are found by searching for local maxima in the auxiliary frame H. The neighborhood of each pixel at coordinates x and y is checked out to the radius equal to the filter half-length. If the pixel in the center has a greater value than all pixels in its neighborhood, we have found a candidate.
The candidates are subjected to several subsequent tests. Each test was designed to be sensitive to a specific class of artifacts.
The height of the fitted two-dimensional Gaussian function of a candidate with a local maximum at coordinates x and y, stored in the filtered frame H, must be equal to or greater than the minimum height h_min:
$$ H(x,y) \geq h_{min} = \mathrm{THRESHOLD} \cdot relerr \cdot \sigma_F \tag{1.10} $$
where THRESHOLD is the detection threshold parameter, adjustable in the configuration of the program.
The relerr term in the equation is determined as:
$$ relerr = \frac{1}{\sqrt{\sum_{i,j} g^2(i,j) - \frac{1}{N}\left(\sum_{i,j} g(i,j)\right)^2}} \tag{1.11} $$
where N is the number of pixels in the matrix G.
The σ_F² term is an estimate of the random noise per pixel in ADU. The random noise has two independent sources. The first one is the Poisson statistics that applies to the number of photons that hit the detector during the exposure (the number of discrete events that occurred in a fixed time). The variance σ² of a Poisson random variable with mean value λ is σ² = λ. If we know the number of photons per ADU, i.e. the gain p of the detector, and the mean level s of the background, the mean number of photons λ that hit the detector is p·s.
The variance of the random error per pixel in ADU, σ_ADU², is determined from the formula
$$ \sigma_{ADU}^2 = \frac{p\,s}{p^2} = \frac{s}{p} \tag{1.12} $$
The second source of noise is the preamplifier and the A/D converter, σ_RN², usually referred to as the read-out noise. This parameter should be given by the manufacturer in the camera's data sheet; however, this value is stated for a single frame, and when a processed frame has been made by summing or averaging multiple raw frames, a corresponding transformation of the specified read-out noise value must take place. This applies namely when one is processing DSLR images or combined CCD images.
If the processed frame has been made as a sum of N raw frames, then
$$ \sigma_{RN}^2 = N \cdot \mathrm{READOUT\_NOISE}^2 \tag{1.13} $$
where READOUT_NOISE is a read-out noise value in A/D units per frame (from the data sheet).
If the processed frame has been made as an average (arithmetic mean) of M raw frames, then
$$ \sigma_{RN}^2 = \frac{\mathrm{READOUT\_NOISE}^2}{M} \tag{1.14} $$
These sources are independent; therefore, from the error propagation rules we determine the overall noise σ_F² as:
$$ \sigma_F^2 = \sigma_{ADU}^2 + \sigma_{RN}^2 \tag{1.15} $$
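The noise model of equations 1.12-1.15 translates directly into code; the parameter names in this sketch are illustrative:

```python
def readout_variance(readout_noise, combined="single", n=1):
    """Read-out noise variance in ADU^2 per pixel of the processed frame."""
    if combined == "sum":          # sum of n raw frames, equation (1.13)
        return n * readout_noise ** 2
    if combined == "average":      # mean of n raw frames, equation (1.14)
        return readout_noise ** 2 / n
    return readout_noise ** 2      # single raw frame

def frame_variance(sky_level, gain, readout_noise, combined="single", n=1):
    """Total per-pixel variance sigma_F^2 in ADU^2, equation (1.15)."""
    poisson = sky_level / gain     # equation (1.12); gain in photons per ADU
    return poisson + readout_variance(readout_noise, combined, n)
```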
This test is intended to rule out diffuse objects, traces of cosmic-ray particles, etc. The sharpness value of a candidate must be within configurable limits. Following Stetson's FIND, sharpness is defined as the ratio of the height d of the best-fitting sampled two-dimensional δ function to the height h of the best-fitting Gaussian function:
$$ \mathrm{sharpness} = \frac{d}{h} \tag{1.16} $$
The height h of the best-fitting Gaussian function has already been determined in advance - it is stored in the auxiliary frame H. The procedure of fitting the delta function is the same as for the two-dimensional Gaussian function: we model the image as a linear combination of the delta function δ and a constant function.
$$ y(i,j) = d\,\delta(i - i_0,\, j - j_0) + t \tag{1.17} $$
We limit the fitting of the delta function to the same neighborhood of pixels as we used for fitting the Gaussian function. By solving the linear equations to find the values of the free parameters d and t that minimize the sum of squared residuals between the model y(i,j) and the original frame F, we get the height of the best-fitting delta function as the value of d:
$$ d = \frac{N\,F(i_0,j_0) - \sum_{i,j} F(i,j)}{N - 1} = F(i_0,j_0) - \frac{1}{N-1}\sum_{(i,j)\neq(i_0,j_0)} F(i,j) \tag{1.18} $$
The last form of the equation is easy to implement. It says: take the central pixel and subtract the average of all surrounding pixels.
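In code, the test is indeed only a few lines. A sketch, with the d/h orientation taken from Stetson's FIND as noted above:

```python
def sharpness(window, h):
    """Sharpness of a candidate, equations (1.16) and (1.18).

    window -- the n-by-n numpy array of the candidate's pixel neighborhood
    h      -- the fitted Gaussian height taken from the auxiliary frame H
    """
    center = window[window.shape[0] // 2, window.shape[1] // 2]
    # central pixel minus the average of all surrounding pixels, eq. (1.18)
    d = center - (window.sum() - center) / (window.size - 1)
    return d / h                                             # eq. (1.16)
```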
This test rules out candidates whose width along the x-axis is much different from that along the y-axis. Such local maxima do not correspond to real stars, but to artifacts caused during the read-out of the CCD chip.
The roundness parameter is defined as the relative difference between the heights of the best-fitting Gaussians in the x and y axes. Please note that, unlike in the previous steps, we fit the image data to a one-dimensional Gaussian function; therefore, we need to reduce the dimensionality of the image data. The distribution of the signal is modeled as a 2D Gaussian function, and we take advantage of the fact that the integral of the 2D Gaussian function G(x,y) over x is a 1D Gaussian function G(y) and vice versa.
First, we sum the image data in the neighborhood of the local maximum by rows to get a vertical profile of the object, and we fit the result with a linear combination of a one-dimensional Gaussian function and a constant function. From the least squares fit we get the height of the Gaussian, h_x. Second, we sum the image data by columns to get a horizontal profile of the object, and in the same way we get the value h_y. The roundness parameter is derived as the difference between those two values divided by their average:
$$ \mathrm{roundness} = \frac{2\,(h_x - h_y)}{h_x + h_y} \tag{1.19} $$
If the heights of both Gaussian functions are equal, the roundness is zero. If one of the Gaussian functions has zero height, the value is 2 or -2. The default allowed range for the roundness is from -1 to 1.
Up to this point, we have worked with the data as if the center of the object were located at the center of the brightest pixel. In this step, we derive the position of the object by estimating the offset vector from the brightest pixel to the centroid of the brightness enhancement.
The offset vector is derived in two subsequent steps; its dx and dy components are computed separately. The value of dx is computed by fitting the first derivative of a one-dimensional Gaussian function G′(x) to the difference between the image data F(x) and the best-fitting Gaussian function G(x). We use the same horizontal profile of the object as in the computation of the roundness parameter, and we subtract the best-fitting Gaussian function h_x G(x) that we have already derived. Then we fit the result of the subtraction with a linear combination of the function G′(x) and a constant function, using the linear least squares method:
$$ S = \sum_i \left[ F(i) - h_x G(i) - a\,G'(i) - t \right]^2 \tag{1.20} $$
We find the values of the free parameters a and t that minimize the sum in equation 1.20. The value of a is then used to estimate the offset between the central pixel and the centroid of the brightness enhancement using equation 1.21.
$$ dx = -\frac{a}{h_x} \tag{1.21} $$
The same method is used to derive the offset dy in the vertical direction.
The object's center is computed as the sum of the coordinates of the central pixel (x_0, y_0) and the offset vector (dx, dy).
Aperture photometry determines the brightness of a stellar object by integrating the signal in a small area of a frame. This area, usually called the aperture, is chosen in such a way that it includes all pixels that were exposed by the light from the measured object, but not by the light from another source.
The point spread function that represents the spatial distribution of the signal from an ideal stellar object is rotationally symmetric, so the aperture has a circular shape. The conditions described above define the requirements on the ideal aperture size: if it were too small, a non-negligible part of the signal would be omitted; on the other hand, the bigger the aperture, the bigger the chance that another object appears in the area.
Supposing the frames were properly calibrated, the frame contains a superposition of two sources - the sky background and the signal from objects. When we sum the pixel values in an aperture, the sum contains both of these contributions. To get the signal from the object only, we need to subtract the background level.
It is not possible to measure the background at the positions of the objects directly. We need to estimate the local background level from neighboring pixels. For this estimation we use another aperture that has an annular shape. Its center is aligned to the same point as the integrating aperture. The central blank part of the annulus masks the signal from the measured object. The mean value of the pixels in the annulus is used to estimate the background level at its center. In an ideal case, the annulus would contain only pixels that represent the background, without objects. In practice this is not generally the case; therefore, a robust algorithm for the estimation of the mean level must be applied - such an algorithm must effectively reject pixels exposed by stellar objects in the annulus. See chapter 1.15 for its implementation in the C-Munipack software.
The C-Munipack software uses the same algorithm for aperture photometry as Stetson's DAOPHOT software; see [6] for the documentation that comes with the original code.
We have stated that the sum S of pixels in a small area A around an object is the sum of the object's signal I plus the background signal B·A:
$$ S = I + B \cdot A \tag{1.22} $$
The values of S and B are derived from the source frame; the area A is determined as the area of a circle of radius r, where r is the size of the aperture in pixels. It is then easy to compute the signal I of an object in ADU:
$$ I = S - B \cdot A \tag{1.23} $$
Supposing that the signal I is proportional to the observed flux F, we can derive the apparent magnitude m of the object, utilizing Pogson’s law:
$$ m = -2.5\,\log_{10}\frac{I}{I_0} \tag{1.24} $$
where I_0 is the signal from an object of reference flux F_0. The ratio between flux and signal is not known; however, it is legitimate to choose any reference signal I_0, provided that only the magnitude difference between two objects is required - the difference is independent of the choice of the reference I_0. In the C-Munipack software, the reference signal I_0 was set to 10^10.
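The whole measurement reduces to a short computation. In this sketch, the robust mean of the annulus is replaced by a plain median, and aperture membership is decided by pixel centers; both are simplifications of what the program actually does:

```python
import numpy as np

def aperture_magnitude(frame, x0, y0, r, r_in, r_out, I0=1e10):
    """Aperture photometry sketch, equations (1.22)-(1.24)."""
    yy, xx = np.indices(frame.shape)
    dist2 = (xx - x0) ** 2 + (yy - y0) ** 2
    in_aperture = dist2 <= r ** 2
    in_annulus = (dist2 >= r_in ** 2) & (dist2 <= r_out ** 2)
    B = np.median(frame[in_annulus])   # stand-in for the robust mean
    A = in_aperture.sum()              # aperture area in pixels
    S = frame[in_aperture].sum()
    I = S - A * B                      # object signal, equation (1.23)
    if I <= 0:
        return None                    # brightness cannot be measured
    return -2.5 * np.log10(I / I0)     # Pogson's law, equation (1.24)
```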
In some situations, the brightness of an object cannot be measured:
When the output data are saved to a DAOPHOT-compatible photometry file, unmeasured objects are indicated by the brightness value 99.9999. The reader should treat any brightness greater than 99.0 magnitudes as an invalid measurement.
Once we have derived the raw instrumental brightness of an object, we will try to estimate its standard error. First of all, we recall a few general rules that apply to the standard error and its propagation. The general rule for error propagation through a function f of an uncertain value X is:
$$ \sigma_{f(X)} = \left| f'(X) \right| \sigma_X \tag{1.25} $$
Using this general rule, we derive two laws of error propagation. In the first case, the uncertain value X is multiplied by a constant a and shifted by a constant offset b. This law can also be used in the case where only a multiplication or only an offset occurs.
$$ \sigma_{aX+b} = |a|\,\sigma_X \tag{1.26} $$
The second law defines the error of a logarithm of an uncertain value X:
$$ \sigma_{\ln X} = \frac{\sigma_X}{X} \tag{1.27} $$
Please note that the log function here is the natural logarithm, while Pogson's formula (see above) incorporates the base-10 logarithm. The following identity helps us to deal with this difference:
$$ \log_{10} X = \frac{\ln X}{\ln 10} \tag{1.28} $$
Putting these two equations together we get:
$$ \sigma_{\log_{10} X} = \frac{1}{\ln 10}\,\frac{\sigma_X}{X} \tag{1.29} $$
If we have two uncorrelated uncertain variables X and Y, the variance of their sum is the sum of their variances; this relation is known as the Bienaymé formula.
$$ \sigma_{X+Y}^2 = \sigma_X^2 + \sigma_Y^2 \tag{1.30} $$
From this formula, we can also derive the standard error of a sample mean. If we have N observations of a random variable X with a sample-based estimate s of the population standard deviation, then the standard error of the sample mean, as an estimate of the population mean, is
$$ \sigma_{\bar{X}} = \frac{s}{\sqrt{N}} \tag{1.31} $$
Armed with this knowledge, we can start thinking about the estimation of the standard error of an object's brightness. We will consider the following three sources of uncertainty: (1) random noise inside the star aperture, which includes the thermal noise of the detector and the read-out noise of the signal amplifier and the analog-to-digital converter, (2) the Poisson statistics of counting discrete events (photons incident on the detector) that occur during a fixed period of time, and (3) the error of the estimation of the mean sky level.
For the estimation of the mean sky level, we have used the robust mean algorithm, which also provides an estimate of the sample variance σ_pxl². This is a per-pixel variance, and because we have summed together A pixels in the star aperture, the Bienaymé formula applies: the sum S is a sum of A uncorrelated random variables, each of which has the variance σ_pxl². For the variance of the first source of error we get:
$$ \sigma_1^2 = A\,\sigma_{pxl}^2 \tag{1.32} $$
where A is the number of pixels in the star aperture.
From Poisson statistics we can derive the variance that occurs due to the counting of discrete events - photons incident on the detector - during a fixed period of time, the exposure. We will again need the gain p of the detector to convert a signal in ADU to a number of photons. If the measured signal of an object is I, we compute the mean number of photons λ as
$$ \lambda = p \cdot I \tag{1.33} $$
Then, the variance of intensity due to Poisson statistics is equal to its mean value.
$$ \sigma_{photons}^2 = \lambda \tag{1.34} $$
This variance is in photons; we have to convert it back to ADU to get the variance in units of ADU².
$$ \sigma_2^2 = \frac{\lambda}{p^2} = \frac{I}{p} \tag{1.35} $$
We have derived the sky level as a sample mean of the pixel population in the sky annulus. Because each pixel in the annulus has the variance σ_pxl², the variance of the sample mean is
$$ \sigma_3^2 = \frac{\sigma_{pxl}^2}{n_{sky}} \tag{1.36} $$
where n_sky is the number of pixels in the sky annulus.
From equation 1.30 we compute the variance of the object's signal as
$$ \sigma_I^2 = A\,\sigma_{pxl}^2 + \frac{I}{p} + A^2\,\frac{\sigma_{pxl}^2}{n_{sky}} \tag{1.37} $$
Note that in equation 1.23 the sky level is multiplied by A, so we have to multiply its variance by A² - see equation 1.37. Now we use the law of error propagation for the logarithm, adapted to match the formula of Pogson's law:
$$ \sigma_m = \frac{2.5}{\ln 10}\,\frac{\sigma_I}{I} \tag{1.38} $$
Putting equations 1.38 and 1.37 together, we can derive the standard error of the object’s brightness in magnitudes as
$$ \sigma_m = \frac{2.5}{\ln 10}\,\frac{1}{I}\sqrt{A\,\sigma_{pxl}^2 + \frac{I}{p} + A^2\,\frac{\sigma_{pxl}^2}{n_{sky}}} \tag{1.39} $$
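Expressed in code, the estimate takes the quantities already derived during the measurement. A sketch with illustrative parameter names:

```python
import numpy as np

def magnitude_error(I, A, n_sky, sigma_pxl2, gain):
    """Standard error of the brightness in magnitudes, equation (1.39).

    I          -- object signal in ADU
    A          -- number of pixels in the star aperture
    n_sky      -- number of pixels in the sky annulus
    sigma_pxl2 -- per-pixel sky variance from the robust mean
    gain       -- detector gain in photons per ADU
    """
    var_I = A * sigma_pxl2 + I / gain + A ** 2 * sigma_pxl2 / n_sky  # (1.37)
    return 2.5 / np.log(10) * np.sqrt(var_I) / I                     # (1.39)
```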
I'd like to point out that I did not invent the algorithm presented below; I reimplemented the algorithm that was part of the original Munipack software. According to the original source code, Filip Hroch is the author, but I haven't found any paper regarding this method. In case you know of one, please let me know.
Up to this point, the source frames are processed independently. To make a light curve of a particular object, we need to trace that object throughout the set of frames. This step is called "matching" in the C-Munipack software. It takes two or more lists of stellar objects (the result of photometry) and determines which objects on one list correspond to the same physical object on the other lists.
The matching algorithm implemented in the C-Munipack software is an extension of the algorithm published by Valdes[7]. The main difference is that it builds polygons of n ≥ 3 vertices instead of triangles. The text follows the terminology of Valdes wherever possible. The matching algorithms based on similarity of triangles were described and implemented also by Stetson[5] and Groth[8].
The algorithm works on two catalogs of objects; each object is described by its coordinates in some local coordinate system and by its brightness. We assume that the coordinate systems of these catalogs may be shifted, scaled, rotated or even inverted with respect to each other. To make it even more difficult, the catalogs can overlap only partially; objects from one catalog need not be present in the second catalog and vice versa. Problems with the separation of close objects might cause objects present in one catalog to be missing in the other. In addition, due to random noise and field distortions, the positions of objects are known only imprecisely.
The matching algorithm works in three subsequent phases. In the first phase, possible transformations between the coordinate systems of the two catalogs are found. Then the best transformation is chosen, and finally a relation between the objects from the two catalogs is established.
Although the method in its basic form works on a pair of catalogs only, its extension to an arbitrary number of frames is obvious. One frame from the set of source frames is chosen as a reference catalog and all other source frames are matched against the reference one. The alternative is to use any other catalog of the same field and match the source frames against it.
A catalog of objects can be seen as a large number of triangles - it is possible to create a triangle for every combination of three objects. If the coordinate systems of two catalogs are shifted, scaled, rotated or inverted (flipped) with respect to each other and the two catalogs show at least partially the same set of physical objects, there must be many triangles that have a similar shape in both catalogs, because the said operations do not change their shape. Not all triangles from one catalog have their counterparts in the other one. Also, due to the noise component in the position measurements and the field distortion, the triangles are never exactly the same.
Following Stetson[5], the shape of a triangle is described by two shape indices, which form a point in a two-dimensional triangle space (x_t, y_t), defined as:
$$ x_t = \frac{b}{a} \tag{1.40} $$
and
$$ y_t = \frac{c}{a} \tag{1.41} $$

where a ≥ b ≥ c are the lengths of the triangle's sides.
It can be shown, using the law of sines, that if a triangle is made by shifting, rotation, scaling, inversion or their combination from an original triangle, it projects to the same point in the triangle space. The Euclidean distance in the triangle space is used to measure the similarity of two triangles: if it is less than some value ϵ, the triangles match.
The algorithm implemented in the C-Munipack software follows the method described by Valdes. It sorts the objects by brightness in decreasing order and takes up to N_obj objects from both catalogs. But it does not stop when it finds a pair of triangles that match; it continues to build up a pair of polygons of N vertices from them by taking one of the sides and finding two objects that would form yet another pair of similar triangles. On success, it adds the objects to the list to make a quadrilateral, and so on up to a polygon with N sides. The value N is a configurable parameter. A constant value ϵ = 0.005 is used to check whether two triangles match.
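A sketch of the triangle-space test. The shape indices follow the reconstruction of equations 1.40-1.41 above, so the exact definition should be treated as an assumption:

```python
import numpy as np
from itertools import combinations

def triangle_shape(p1, p2, p3):
    """Shape indices (x_t, y_t) of a triangle given by three (x, y) points."""
    sides = sorted((float(np.hypot(u[0] - v[0], u[1] - v[1]))
                    for u, v in combinations((p1, p2, p3), 2)), reverse=True)
    a, b, c = sides                    # side lengths with a >= b >= c
    return b / a, c / a                # equations (1.40) and (1.41)

def triangles_match(t1, t2, eps=0.005):
    """Euclidean distance in the triangle space below the constant eps."""
    return np.hypot(t1[0] - t2[0], t1[1] - t2[1]) < eps
```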
The following optimization reduces the number of false matches. When a pair of objects (A,B) and (A′,B′) is taken, their distances d = ∥AB∥ and d′ = ∥A′B′∥ are computed. The ratio d : d′ is used as a scale between the two frames. Also, the midpoints P and P′ of the segments AB and A′B′ are determined. For any point C from the first catalog, its distance to the point P is divided by the ratio d : d′ to get the expected distance of the desired object C′ to the point P′. It is then possible to restrict the search to objects whose distance to P′ does not differ much from the expected value. The quicksort algorithm is used to sort objects by distance from the point P′, which makes it possible to stop the search when the distance exceeds the upper limit.
By means of the algorithm described in the previous section, two similar polygons P1 and P2 were found, each of which has N vertices. The next step is to find the best-fitting transformation between coordinates of vertices of P1 and P2. The scale factor is estimated first. Next, the inversion is determined. Finally, the linear least squares (LLSQ) method is applied to find the values of the rest of the parameters.
The scale factor s between the two polygons P and P′ is estimated as the robust mean of the ratios of corresponding vertex distances of the two polygons. The mean value is determined using the robust mean algorithm (chapter 1.15).
The inversion is represented by the value m = 1 if mirroring does not occur (the local coordinate systems are of the same handedness) or m = -1 if it does. To determine the value of m, the first two vertices of the polygons P and P′ are used to make oriented lines, and we check whether the third vertices of P and P′ lie on the same side of these oriented lines. The test can be performed in 3-D space by comparing the sign of the z-coordinate of the cross product of the vectors (B - A) and (C - A), where A, B and C are the first three vertices of P, with the sign of the z-coordinate of the cross product of the corresponding vectors (B′ - A′) and (C′ - A′) of P′. The value m = 1 if the signs are equal, m = -1 otherwise.
The rotation and offset between the two frames are estimated by determining the coefficients A, B, X_0 and Y_0 by means of the LLSQ method.
The first estimate of the transformation matrix M_0 is created by combining the scale factor, the value m and the results of the LLSQ fitting. The variance σ_0² of the solution is determined using the minimized value of the sum of squares S:
$$ \sigma_0^2 = \frac{S}{2N - 4} \tag{1.42} $$
The denominator, 2N - 4, is the number of statistical degrees of freedom; 2N is the number of equations (each vertex gives two equations) and 4 is the number of free parameters. The variance will be used to determine the allowed displacement between the expected and the observed position of a star.
The transformation found using polygon vertices only is further refined in a second stage by including the stars from S and S′. It was stated above that not all stars from S and S′ necessarily have corresponding objects on the other frame; thus it is necessary to use a robust approach and rule such stars out.
For each star from the set S, its coordinates are transformed into the second frame using the matrix M_0 (the first estimate). Then, the star from the set S′ with the minimum distance to the computed location is found, and if its distance is less than the maximum displacement t_0, the two stars are included in the second estimation. The value t_0 is determined using the variance σ_0²:
$$ t_0 = c \cdot \sqrt{\sigma_0^2} \tag{1.43} $$

where c is the clipping threshold parameter.
The stars from the set S that were successfully identified in S′ are used in the LLSQ method (the same as in the first iteration) in order to determine a second estimate of the transformation matrix M_1. The scale and the mirror value are kept unchanged. Also, a new value of the sample variance σ_1² is determined.
The number of identified stars and the sum of squares from the second iteration are used to select the best-fitting solution.
The vote array is organized as a map whose keys are unordered sets of polygon vertices; for each key it stores the number of matches, the stars that were identified in the second stage, the sum of squares and the transformation matrix. When a new candidate is identified, the program compares its set of polygon vertices with the existing keys in the vote array. When there is a matching item, its match counter is incremented by one; if not, a new record is added to the vote array and its match counter is initialized to one.
The selection of the best-fitting solution is based on the number of matches. If the vote array contains multiple records with the maximum match count, the first one wins.
The result of the matching algorithm applied to a set of source frames is a cross-reference table which represents the relations between objects detected on the source frames. The table should make it possible to efficiently find the measurements for a particular physical object. In the C-Munipack software, each detected object on a single source frame is assigned a unique integer number at the final stage of the photometry step; these identification numbers are frame-wise.
The cross-reference table is a map where the keys are a frame identification and a frame-wise object identifier, and the values are global object identifiers. If the matching is done against one of the source frames, the object identifiers on that source frame are used as global identifiers. If a catalog file is used, the object identifiers in the catalog are used as global identifiers.
The table is stored in a distributed form: each object on each frame has an attribute that defines whether the object was found on the reference frame or in the catalog file, and if so, what its global identification is.
The artificial comparison star is used to enhance the quality of a light curve by utilizing multiple stars in the computation of differential magnitudes of a variable star.
A new virtual comparison star is created from a set of stars, and its brightness in magnitudes and error estimate are derived for each frame. Then the normal course of light curve construction is followed. The brightness of the artificial star is computed by averaging the intensities (not magnitudes) of the stars from the set. The same set of stars must be incorporated in the average on each frame, otherwise the results would be incorrect. Therefore, if a measurement is missing for one of the stars, it is not possible to compute the brightness of the comparison star for that frame.
First, we use the inverse of Pogson's law to convert the brightness in magnitudes m_i into an intensity I_i for each star i from the set.
$$ I_i = I_0 \cdot 10^{-0.4\,m_i} \tag{1.44} $$
Next, the intensities are combined by means of the arithmetic mean. Please note that the factor 1/N is used to keep the brightness in the same range as the source data.
$$ I = \frac{1}{N}\sum_{i=1}^{N} I_i \tag{1.45} $$
Then, the intensity is converted back to magnitudes:
$$ m = -2.5\,\log_{10}\frac{I}{I_0} \tag{1.46} $$
The inverse of equation 1.38 can be used to transform the error estimates from magnitudes to intensity units. Supposing that the noise sources are independent, we can compute the variance of I as:
$$ \sigma_I^2 = \frac{1}{N^2}\sum_{i=1}^{N} \sigma_{I_i}^2, \qquad \sigma_{I_i} = \frac{\ln 10}{2.5}\, I_i\, \sigma_{m_i} \tag{1.47} $$
Equation 1.38 is used to convert the resulting variance back to magnitudes. Putting all three formulas together, we get the following formula for the error estimate of the resulting brightness of an artificial comparison star in magnitudes:
$$ \sigma_m = \frac{\sqrt{\sum_{i=1}^{N} \left(I_i\,\sigma_{m_i}\right)^2}}{\sum_{i=1}^{N} I_i} \tag{1.48} $$
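A sketch of the whole computation for one frame; mags and errs hold the per-frame magnitudes and error estimates of the selected stars, all of which must be valid:

```python
import numpy as np

def artificial_comparison(mags, errs, I0=1e10):
    """Brightness and error of the artificial star, equations (1.44)-(1.48)."""
    mags = np.asarray(mags, dtype=float)
    errs = np.asarray(errs, dtype=float)
    I_i = I0 * 10.0 ** (-0.4 * mags)                     # equation (1.44)
    I = I_i.mean()                                       # equation (1.45)
    m = -2.5 * np.log10(I / I0)                          # equation (1.46)
    s = np.sqrt(np.sum((I_i * errs) ** 2)) / I_i.sum()   # equation (1.48)
    return m, s
```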
The 'Find variables' tool is based on the dependency of the noise level on the mean brightness. The fainter an object is, the smaller is its signal-to-noise ratio and the noisier is its measurement. Two objects of similar brightness which do not vary during the observation also have similar noise levels, computed as the sample standard deviation. If one of the objects changes its brightness for reasons other than measurement noise - for example, it is an eclipsing binary - its sample standard deviation is greater than that of a constant object of the same mean brightness. Using this idea, we can make a graph of the sample standard deviation vs. the mean brightness of all objects detected on a frame set. Supposing the majority of objects are constant, the graph will reveal a curve which grows towards lower brightness. Any outlying point located above the curve denotes an object that has brightness variations other than measurement noise, and thus it might be a variable star.
The ’Find variables’ tool does not find variable stars automatically. It presents the graph described above and provides a means of selecting any object on the graph and showing its light curve. The decision whether an object is a variable or not is left to the user.
A list of source frames is analyzed and the raw instrumental magnitudes are stored in memory. Each detected object must be uniquely identified across all the frames. One star is chosen as the comparison star, and we assume that this star was constant during the observation.
However, an object might not be measured correctly on all frames in a frame set. If the comparison star has an invalid measurement on one or more frames, we cannot derive a differential magnitude for any object on those frames, so we have to reject these frames as a whole. For the rest of the frames, it may happen that an object does not have a valid measurement on one or more frames; in this case we cannot derive a differential magnitude of that particular object on those frames.
Let us denote N the number of source frames and N_C the number of frames where the comparison star has a valid measurement. For each object i we can derive the number of frames N_i where both the object and the comparison star have valid measurements. If N_i is less than a minimum number of frames N_min, we rule the object out of the output. The minimum number of frames N_min is calculated from N_C and a configurable threshold t in units of percent:
$$ N_{min} = \frac{t}{100}\,N_C \tag{1.49} $$
For each object i that has a valid measurement on frame j, we compute the differential magnitude m_{i,j}:
$$ m_{i,j} = mag_{i,j} - mag_{C,j} \tag{1.50} $$
where mag_{i,j} is the instrumental magnitude of the object i and mag_{C,j} is the instrumental magnitude of the comparison star C on frame j. Then, the mean value m̄_i is computed by means of the robust mean algorithm, and the sample variance s_i² is also determined. Please note that although the robust mean algorithm is used to compute the mean brightness of an object, all valid measurements should be taken into account in the computation of the sample variance; otherwise, for a variable star, some of its valid measurements would be unjustly rejected as outliers, resulting in a lower variance. The term 1/(N_i - 1) in equation 1.51 is known as Bessel's correction.
$$ s_i^2 = \frac{1}{N_i - 1}\sum_j \left( m_{i,j} - \bar{m}_i \right)^2 \tag{1.51} $$
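One point of the mag-dev graph can be sketched as follows; np.median stands in for the robust mean of chapter 1.15, while the variance uses all valid measurements, as required above:

```python
import numpy as np

def magdev_point(mag_obj, mag_comp, robust_mean=np.median):
    """Mean differential magnitude and scatter, equations (1.50)-(1.51)."""
    m = np.asarray(mag_obj) - np.asarray(mag_comp)   # equation (1.50)
    m = m[np.isfinite(m)]                            # keep valid measurements
    mean = robust_mean(m)
    var = np.sum((m - mean) ** 2) / (m.size - 1)     # equation (1.51)
    return mean, np.sqrt(var)
```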
Figure 1.2 shows a scatter graph of stars that were measured in the field of the eclipsing binary CR Cassiopeiae, showing the relationship between the stars' mean magnitudes and their sample standard deviations s_i.
If the user does not pick a comparison star, the program uses the following algorithm to choose one and uses it to make a mag-dev curve. There are several criteria that the program can use to determine the suitability of an object as a comparison star. A good comparison star should 1) be present on the greatest number of frames, to avoid more data rejection than necessary, 2) not vary in brightness, and 3) have a low measurement noise, to reduce the noise of the differential magnitudes and the resulting data.
Let us take all objects that were found on the reference frame. To each object i we assign the number of frames F_i on which it has been successfully identified. The maximum of the values F_i is determined, and all objects that have this maximum value are kept on the list; the others are removed.
For each pair of objects that were left on the list, a differential light curve is derived, and a robust mean and a sample standard deviation are computed from the light curve data. Finally, the star that has the lowest sum of standard deviations is picked as the comparison star.
Please note that although the robust mean algorithm is used to compute the mean brightness, all valid measurements should be taken into account in the computation of the sample standard deviation. If there were a variable star on the list of candidates, some of its valid measurements would be unjustly rejected as outliers, resulting in a lower deviation.
Frame merging is a technique which uses averaging to combine multiple CCD frames into one frame. In a naive implementation, the mean value of corresponding pixels of the source frames would be computed. However, this would not work well in practice, since the sky revolves around the celestial pole throughout the night, and in most cases even a perfectly adjusted motorized mount does not track the stars with sufficient precision.
Therefore, in tasks where frame merging is performed, the reduction is done in two phases. First, the original frames are reduced up to the matching step to get the information about their relative offsets. Then the original frames are aligned to one reference frame by means of a shift transformation. The frames are divided into groups, and for each group one combined frame is made. In this way we get a set of combined frames, which are processed in the second phase. Unlike in the first phase, calibration is not applied to the combined frames, since the images were already calibrated in the first phase.
The heliocentric correction compensates for the differences in the observer's distance to the observed object due to the movement of the Earth around the Sun. When we measure the time elapsed between two events and the time span is on the order of days or more, we may need to compensate the timing for the finite speed of light. A common approach is to transform the observation time to that at which the event would be observed from the center of the Sun, which is considered to be fixed with respect to the object. The Julian date JD is transformed into the Heliocentric Julian date HJD. The difference between those two dates is called the heliocentric correction HC, and it is usually expressed in days.
$$ HJD = JD + HC \tag{1.52} $$
Figure 1.3 illustrates the situation. The heliocentric correction HC is the time that light needs to cover the distance $\|\vec{p}\|$. The vector $\vec{p}$ can be thought of as the projection of the Earth-Sun vector $\vec{r}$ onto the direction of the Earth-object unit vector $\vec{u}$.
The length of the vector $\vec{p}$ can be computed as:
$$ \|\vec{p}\| = \|\vec{r}\|\,\cos\theta \tag{1.53} $$
The cosine of the angle θ can be derived from the expression for the dot product of two vectors:
$$ \cos\theta = \frac{\vec{r}\cdot\vec{u}}{\|\vec{r}\|\,\|\vec{u}\|} \tag{1.54} $$
When we put the two equations together, we get
$$ \|\vec{p}\| = \|\vec{r}\|\,\frac{\vec{r}\cdot\vec{u}}{\|\vec{r}\|\,\|\vec{u}\|} = \frac{\vec{r}\cdot\vec{u}}{\|\vec{u}\|} \tag{1.55} $$
the length of the vector $\vec{r}$ cancels out, and because the vector $\vec{u}$ is a unit vector, its length is equal to 1. Then we can simplify the previous equation to
$$ \|\vec{p}\| = \vec{r}\cdot\vec{u} \tag{1.56} $$
The vector $\vec{r}$ is computed using van Flandern's formulas for the Earth-Sun distance R_S and the Sun's ecliptic longitude λ_S - see [4].
The object's position is specified by two equatorial coordinates - right ascension α and declination δ. These coordinates must be transformed into the ecliptic coordinate system to get the object's ecliptic longitude λ and latitude β.
The easiest way to compute the dot product of two vectors in polar coordinates is to transform them into the 3-D Cartesian system first; using the trigonometric identity for the cosine of a difference of two angles, we get
$$ \vec{r}\cdot\vec{u} = R_S\,\cos\beta\,\cos(\lambda - \lambda_S) \tag{1.57} $$
Using the speed of light c = 299,792,458 meters per second, we can derive the formula for the heliocentric correction. Please note that it is necessary to convert the units to get the value in days:
$$ HC = -\frac{R_S\,\cos\beta\,\cos(\lambda - \lambda_S)}{86400\,c} \tag{1.58} $$
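A sketch of equations 1.57-1.58, assuming angular inputs in radians and R_S in meters; the sign convention matches equation 1.52 (HJD = JD + HC):

```python
import numpy as np

def heliocentric_correction(R_S, lambda_sun, lam, beta):
    """Heliocentric correction in days, equations (1.57)-(1.58).

    R_S        -- Earth-Sun distance in meters
    lambda_sun -- Sun's ecliptic longitude (radians)
    lam, beta  -- object's ecliptic longitude and latitude (radians)
    """
    c = 299_792_458.0                                    # speed of light, m/s
    dot = R_S * np.cos(beta) * np.cos(lam - lambda_sun)  # equation (1.57)
    return -dot / c / 86400.0                            # seconds -> days
```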
Airmass is the length of the optical path that light from an observed object travels through the Earth's atmosphere. Along this path, the light is attenuated by scattering and absorption. The greater the airmass, the greater the attenuation. This is illustrated in Figure 1.4: objects near the horizon appear less bright than when they are at the zenith.
The airmass coefficient X is the air mass relative to that at the zenith. So, its value is 1 when the object is at zenith, and increases as the zenith angle z grows, reaching approximately 38 at the horizon.
In versions 1.2.21 and older, the airmass coefficient X was computed using the approximation published by Hardie in 1962 [2]. It gives usable results for zenith angles up to about 85 degrees. For greater zenith angles, the program returned a negative value to indicate that the airmass could not be computed.
Since version 1.2.22, the airmass coefficient X has been computed using the approximation published by Pickering in 2002 [3]. This formula works well for zenith angles up to 90 degrees. The program stores a negative value to indicate that an object is below the horizon.
$$ X = \frac{1}{\sin\!\left( h + \dfrac{244}{165 + 47\,h^{1.1}} \right)} \tag{1.59} $$
where h is the apparent altitude (90° - z) in degrees.
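Pickering's formula is straightforward to implement; the negative return value for objects below the horizon mimics the behavior described above:

```python
import math

def airmass(h_degrees):
    """Relative airmass for apparent altitude h in degrees, equation (1.59)."""
    if h_degrees <= 0.0:
        return -1.0          # object below the horizon
    arg = h_degrees + 244.0 / (165.0 + 47.0 * h_degrees ** 1.1)
    return 1.0 / math.sin(math.radians(arg))
```

At h = 90° this yields X = 1, and near h = 0° it approaches the value of about 38 quoted above.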
The computation of the robust mean is based on a re-descending Ψ-type M-estimator that uses Hampel's influence function, see [9, 10]:
$$ \Psi(x) = \begin{cases} x & 0 \le |x| \le a \\ a\,\operatorname{sgn}(x) & a < |x| \le b \\ a\,\dfrac{c - |x|}{c - b}\,\operatorname{sgn}(x) & b < |x| \le c \\ 0 & |x| > c \end{cases} \tag{1.60} $$
where a = 1.7, b = 3.4 and c = 8.5. The function Ψ(x) and its first derivative Ψ′(x) are plotted in Figure 1.5.
The starting point for the iterations is determined by the median as an estimate of the location and by the median of absolute deviations (MAD) as an estimate of the scale s.
$$ \mu_0 = \operatorname{median}\{x_i\}, \qquad s = \operatorname{median}\{|x_i - \mu_0|\} \tag{1.61} $$
The optimum solution is found by means of the Newton-Raphson optimization algorithm. In iteration j, a new estimate of the location μ_{j+1} is determined using the following formula:
$$ \mu_{j+1} = \mu_j + s\,\frac{\sum_i \Psi(r_i)}{\sum_i \Psi'(r_i)} \tag{1.62} $$
where r_i is the residual
$$ r_i = \frac{x_i - \mu_j}{s} \tag{1.63} $$
If the input vector has a normal distribution, the estimate of the variance is:
$$ \sigma^2 = \frac{n}{n - 1}\, s^2\, \frac{\sum_i \Psi^2(r_i)}{\left(\sum_i \Psi'(r_i)\right)^2} \tag{1.64} $$
where n is the number of samples. The term n/(n - 1) is Bessel's correction.
The iterations are stopped when the solution converges, i.e. when the change of the location estimate becomes negligible, or when a maximum number of iterations is reached.
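The whole estimator can be sketched as follows. The specific stopping rule (a step-size tolerance plus an iteration cap) and the variance formula follow the reconstruction above and should be treated as assumptions:

```python
import numpy as np

A, B, C = 1.7, 3.4, 8.5          # Hampel's constants a, b, c

def psi(x):
    """Hampel's influence function, equation (1.60)."""
    ax = np.abs(x)
    return np.where(ax <= A, x,
           np.where(ax <= B, A * np.sign(x),
           np.where(ax <= C, A * (C - ax) / (C - B) * np.sign(x), 0.0)))

def dpsi(x):
    """First derivative of the influence function."""
    ax = np.abs(x)
    return np.where(ax <= A, 1.0,
           np.where(ax <= B, 0.0,
           np.where(ax <= C, -A / (C - B), 0.0)))

def robust_mean(x, tol=1e-6, max_iter=50):
    """Robust mean and variance of a sample, equations (1.61)-(1.64)."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                    # initial location, equation (1.61)
    s = np.median(np.abs(x - mu))        # MAD as the scale estimate
    if s == 0.0:
        return mu, 0.0
    n = x.size
    for _ in range(max_iter):
        r = (x - mu) / s                 # residuals, equation (1.63)
        d = dpsi(r).sum()
        if d <= 0.0:
            break
        step = s * psi(r).sum() / d      # Newton-Raphson step, eq. (1.62)
        mu += step
        if abs(step) < tol * s:          # assumed convergence criterion
            break
    r = (x - mu) / s
    var = n / (n - 1.0) * s ** 2 * (psi(r) ** 2).sum() / dpsi(r).sum() ** 2
    return mu, var
```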
In this chapter, you will find the description of file formats that are created and used by the C-Munipack 1.2 software.
All keywords and invariant phrases which are included directly in the files are printed in an italic font. Names of files and directories are printed in a teletype face. Default values of parameters are placed in [square brackets].
Photometry files are made by the photometry process during the reduction of CCD frames. In the early stages of development of version 1.2, the old DAOPHOT-compatible format was superseded by a new XML-based format. It turned out that this decision influenced the speed of the program a lot, because any XML parser is slow in comparison to reading the old text-based files. For this reason, yet another format of photometry files was introduced in later releases. This new format was designed for good performance in terms of the time required to save and access the data. While the core library of the software supports reading and writing all three formats, the Muniwin program uses the optimized binary format. The other two formats are kept to be used in scripts and applications built atop the library, because in many scripting languages it is easier to process text files or XML documents than binary data.
phot root element of the document
Description The phot element is the root element of the document. The mandatory attribute version identifies the revision of the file format; the current revision, described in this document, is revision 1. The document consists of meta-data (header), a table of apertures and measurement data.
Model (width | height | jd | date | time | filter | exptime | temp | origin
| crdate | phot_stars | phot_datalo | phot_datahi | phot_gain | phot_rnoise | phot_fwhm_exp
| phot_fwhm_mean | phot_fwhm_err | phot_thresh | phot_losharp | phot_hisharp
| phot_loround | phot_hiround)+
Description The head element is a container for photometry file meta-data. The parameters are given as child elements; parameter values are stored in the child elements' content. The mandatory child elements are width and height. The order of child elements is not significant. The description of the parameters is given in table 1.
Element name | Presence | Description |
width | mandatory | Frame width in pixels |
height | mandatory | Frame height in pixels |
jd | optional | Julian date of observation |
date | optional | Date of observation |
time | optional | Time of observation |
filter | optional | Optical filter |
exptime | optional | Exposure duration in seconds |
temp | optional | CCD temperature in degrees Celsius |
origin | optional | Reduction software |
crdate | optional | Date of creation |
object | optional | Object’s designation |
ra2000 | optional | Object’s right ascension |
dec2000 | optional | Object’s declination |
location | optional | Observer’s location designation |
longitude | optional | Observer’s longitude |
latitude | optional | Observer’s latitude |
phot_stars | optional | Number of stars |
phot_datalo | optional | Lowest good pixel value |
phot_datahi | optional | Highest good pixel value |
phot_gain | optional | Gain in electrons per ADU |
phot_rnoise | optional | Readout noise in ADU |
phot_fwhm_exp | optional | Expected FWHM of objects |
phot_fwhm_mean | optional | Mean FWHM of objects |
phot_fwhm_err | optional | Standard error of FWHM of objects |
phot_thresh | optional | Detection threshold |
phot_losharp | optional | Low sharpness cutoff |
phot_hisharp | optional | High sharpness cutoff |
phot_loround | optional | Low roundness cutoff |
phot_hiround | optional | High roundness cutoff |
match_rstars | optional | Number of stars used in matching |
match_istars | optional | Number of polygon vertices |
match_clip | optional | Clipping threshold |
match_stars | optional | Number of matched stars |
offsetx | optional | Offset in X axis to the ref. frame in pixels |
offsety | optional | Offset in Y axis to the ref. frame in pixels |
Description The apertures element is a container for aperture definitions. Each aperture is defined by a child element aper.
Description The aper element defines an aperture. The mandatory id attribute is the aperture's identifier; it must be unique within the file. The attribute radius is optional, and its value is the aperture's radius in pixels.
Description The body element is a container for objects and measurement data, which are organized in a hierarchical structure.
object object parameters and data
Attributes id, x, y, x-ref, skymed, skysig
Description The object element defines an object. The mandatory attribute id is the object's identifier; it must be unique within the file. The mandatory attributes x and y store the position of the object within the frame; the coordinates are specified in pixels, and the origin is the top left corner of the frame. The attribute x-ref is optional, but if it is used, its value must be unique within the file; the value of this attribute is the identification assigned in the matching process. The optional attributes skymed and skysig store the local background level in ADU and the local background noise level in ADU, respectively. The measurement data are stored in child elements p.
Description The p element is used to store the measurement data for one object - the object whose element it is placed in - and one aperture. The aperture's identifier is specified in the attribute a. The attributes m and e specify the raw instrumental brightness in magnitudes and its error estimate. If the object could not be measured successfully in an aperture, the corresponding p element is omitted, so the number of p elements in an object element is not required to be the same as the number of apertures defined.
The binary format of photometry files was introduced in later versions of C-Munipack 1.2, because it turned out that the XML-based formats are too slow to parse.
Multi-byte data is stored in little-endian format (Intel). Real numbers are stored in binary64 format of IEEE 754-1985.
The general structure of a binary photometry file is given in table 2.
Offset | Type | Length | Description |
0 | char[] | 28 | Format identifier, must be |
’C-Munipack photometry file\r\n’ | |||
28 | long | 4 | Format revision number, current revision is 1 |
32 | long | 4 | Length of metadata in bytes |
The parameters are stored at fixed offsets. The length of the header is 492 bytes. See table 2.2 for the description of the header fields.
The table of apertures immediately follows the block of parameters. The first four bytes (long type) specify the number of records in the table; the first record starts at offset 4, and records have 12 bytes each. The description of the fields in the table of apertures is given in table 4.
Offset | Type | Length | Description |
0 | long | 4 | Identifier |
4 | binary64 | 8 | Radius in pixels |
The table of objects immediately follows the table of apertures. The first four bytes (long type) specify the number of records in the table; the first record starts at offset 4, and records have 40 bytes each. The description of the fields in the table of objects is given in table 5.
Offset | Type | Length | Description |
0 | long | 4 | Identifier within a file |
4 | long | 4 | Global identifier (assigned in matching), |
zero if the object was not matched | | |
8 | binary64 | 8 | X-coordinate of position of the object |
16 | binary64 | 8 | Y-coordinate of position of the object |
24 | binary64 | 8 | Local mean background level in ADU |
32 | binary64 | 8 | Std. deviation of local background level |
The table of measurements immediately follows the table of objects. The number of records is not given explicitly, but it is always the number of objects multiplied by the number of apertures. Objects and apertures are stored in the same order as they appear in the table of objects and the table of apertures, respectively. The first record, which specifies the measurement data for the first object and the first aperture, starts at offset 0; the second record specifies the measurement data for the first object and the second aperture, etc. Each record has 8 bytes. Unlike the previous fields, a magnitude and its error are stored in the 8.24 fixed-point format. The value 0x7FFFFFFF means an undefined value. The description of the fields in the table of measurements is given in table 6.
Offset | Type | Length | Description |
0 | long | 4 | Raw instrumental brightness in magnitudes, |
8.24 fixed point format | |||
4 | long | 4 | Error estimation of brightness, |
8.24 fixed point format | |||
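For illustration, one record of the table of measurements can be decoded as follows; the 8.24 layout (24 fractional bits in a signed 32-bit little-endian integer) is assumed from the table above:

```python
import struct

UNDEFINED = 0x7FFFFFFF   # sentinel for an undefined value

def decode_measurement(record):
    """Decode one 8-byte record of the table of measurements."""
    raw_mag, raw_err = struct.unpack('<ii', record)   # little-endian longs
    mag = None if raw_mag == UNDEFINED else raw_mag / float(1 << 24)
    err = None if raw_err == UNDEFINED else raw_err / float(1 << 24)
    return mag, err
```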
Using catalog files can speed up your work in case you often observe a particular star field, for example in long-term monitoring of a variable star. The structure of the data stored in the file is very similar to a photometry file, but it also contains a selection of stars (variable star, comparison star, etc.). When a catalog file is used instead of a reference file in the matching phase of the reduction, the selection is automatically restored from that file.
Catalog files are saved as XML documents. The EXPAT library has been used for reading the files in the XML-based formats; see the References section for details.
cat_file root element of XML document
Model (info, selection, taglist, stars)
Description The cat_file element is the root element of the document. The document consists of meta-data (header), a table of selected stars and a table of stars.
Model (object |ra2000 |dec2000 |observer |observatory |telescope |camera |filter |fov |orientation |comment)*
Description The info element is a container for catalog file meta-data. The parameters are given as child elements; parameter values are stored in the child elements' content. The order of child elements is not significant. The description of the parameters is given in table 7.
Element name | Presence | Description |
object | optional | Object’s designation |
ra2000 | optional | Object’s right ascension |
dec2000 | optional | Object’s declination |
observer | optional | Observer’s name |
observatory | optional | Observer’s location designation |
longitude | optional | Observer’s longitude |
latitude | optional | Observer’s latitude |
telescope | optional | Designation of the telescope |
camera | optional | Camera manufacturer and model |
filter | optional | Optical filter |
fov | optional | View field size |
rotation | optional | View field orientation |
comment | optional | User comments |
selection table of selected stars
Description The selection element is a container for elements that specify which stars are used to make a light curve. Each select element defines one object and its role. If there are multiple roles for a single object, the result is undefined.
select selected star and its role
Description The select element assigns a role and a caption to one star. The mandatory attribute id specifies the object's identifier; it must be one of the object identifiers that appear in the table of stars. The attribute label is a description of the object. It starts with a prefix, which must be 'var' for a variable star, 'comp' for a comparison star or 'chk' for a check star. When there are more stars of the same role, an ordinal number is appended to the prefix to give them unique labels.
Description The taglist element is a container for elements that specify user-defined strings (tags) for objects. Each tag element defines a tag for a single object. If there are multiple tags for a single object, the result is undefined.
Description The tag element assigns a tag to one star. The mandatory attribute id specifies the object's identifier; it must be one of the object identifiers that appear in the table of stars. The attribute value is the string assigned to it.
stars table of all stars on the frame
Description The stars element is a container for stars; each child element s specifies a single star. The mandatory attributes width and height define the frame size in pixels. If there are multiple definitions with the same identifier, the result is undefined.
s position and brightness of a star
Description The s element defines an object. The mandatory attribute id is the object’s identifier; it must be unique within the file. The mandatory attributes x and y store the position of the object within the frame; the coordinates are specified in pixels, with the origin at the top left corner of the frame. The attributes m and e specify the raw instrumental brightness in magnitudes and its error estimate.
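For illustration, a minimal catalog file consistent with the element descriptions above might look as follows (all values are fictitious):

    <?xml version="1.0"?>
    <cat_file>
      <info>
        <object>V123 Cyg</object>
        <filter>V</filter>
      </info>
      <selection>
        <select id="1" label="var"/>
        <select id="2" label="comp"/>
        <select id="3" label="chk"/>
      </selection>
      <taglist>
        <tag id="1" value="target"/>
      </taglist>
      <stars width="1024" height="1024">
        <s id="1" x="512.3" y="498.7" m="12.345" e="0.012"/>
        <s id="2" x="130.8" y="222.1" m="11.876" e="0.010"/>
        <s id="3" x="700.2" y="810.5" m="13.102" e="0.018"/>
      </stars>
    </cat_file>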
Light curves (the dependency of brightness on observation time) are usually the final product of the reduction process. A light curve table consists of the observation time in Julian date form followed by the magnitudes and their errors for all selected stars. Two basic output modes are provided: in differential mode, the magnitudes stored in the table are differences between pairs of the selected stars; in instrumental mode, the raw values computed by the photometry are printed out. Note that instrumental magnitudes cannot be compared to the absolute magnitudes found in photometry catalogs without further post-processing.
Output files are stored in ASCII format; the end of a line is represented by CR+LF in the DOS/MS Windows environment and by LF in the Unix/Linux environment. The first line always contains a list of column names separated by a single space character. The second line consists of additional information (aperture, filter, etc.) and has no special formatting; it must not start with a number, though.
The table values are stored on the following lines. The values are separated by a tab character or a single space; rows are separated by the end-of-line character (see above). Parsers must ignore all additional white-space characters. An empty line indicates that the corresponding frame was not successfully processed and thus the brightness of a variable or a comparison star could not be determined. See table 8 for a short description of the columns.
Section | Keyword | Description |
Date and | JD | Geocentric Julian date of observation |
time | JDHEL | Heliocentric Julian date of observation |
Differential | V-C | Difference of variable and comparison star |
magnitudes | s1 | Error of V-C value |
V-K1 | Difference of variable and check star #1 | |
s2 | Error of V-K1 value | |
... | ... | |
Instrumental | V | Brightness of variable star |
magnitudes | s1 | Error of V value |
C | Brightness of comparison star | |
s2 | Error of C value | |
K1 | Brightness of check star #1 | |
s3 | Error of K1 value | |
... | ... | |
Additional | HELCOR | Heliocentric correction in days |
data | AIRMASS | Air-mass coefficient |
ALTITUDE | Altitude (elevation) in degrees above horizon | |
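For illustration, the first few lines of a differential-mode light curve file might look like this; all values are fictitious, and the empty line marks a frame that could not be processed:

    JD V-C s1 V-K1 s2
    Aperture: 5, Filter: V
    2455123.34567 0.812 0.009 1.523 0.011
    2455123.35012 0.815 0.010 1.519 0.012

    2455123.35987 0.809 0.009 1.527 0.011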
The AVE compatible file format allows saving a light curve with differential magnitudes for the AVE software [11]. The output file is an ASCII text file. Each frame is stored on a separate line; a line begins with the Julian date of observation, followed by a space and the brightness of the variable star in magnitudes. Frames where the variable was not measured are omitted. The file does not contain any header.
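A fictitious two-line sample:

    2455123.34567 0.812
    2455123.35012 0.815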
The MCV compatible file format allows saving instrumental magnitudes of the selected stars in a file that can be processed by the MCV software [12]. The output file is an ASCII text file. Each frame is stored on a separate line; a line begins with the Julian date of observation, followed by the instrumental magnitude of each selected star. Fields are separated by a space character. A zero value stored in place of a valid magnitude indicates missing or invalid data. The file does not have any header.
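A fictitious sample with three selected stars; the zero in the second line marks a missing measurement:

    2455123.34567 12.345 11.876 13.102
    2455123.35012 12.351 11.880 0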
Track lists (the dependency of frame shift on observation time) are usually used for determining the periodic error of a telescope mount. A track-list table consists of the observation time in Julian date form followed by the shifts along the horizontal (X) and vertical (Y) axes, in pixels, relative to the reference frame.
Output files are stored in ASCII format; the end of a line is represented by CR+LF in the DOS/MS Windows environment and by LF in the Unix/Linux environment. The first line contains a list of column names separated by a single space character. The second line consists of additional information and has no special formatting; it must not start with a number, though.
The table values are stored on the following lines. The values are separated by a tab character or a single space; rows are separated by the end-of-line character (see above). Parsers must ignore all additional white-space characters. An empty line indicates that the corresponding frame was not successfully processed and thus the shift values could not be determined. See table 9 for a short description of the columns.
Keyword | Description | |
JD | Geocentric Julian date of observation | |
OFFSETX | Relative shift in the horizontal direction, in pixels | |
OFFSETY | Relative shift in the vertical direction, in pixels | |
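A fictitious sample of a track-list file (the second line carries additional information with no fixed format):

    JD OFFSETX OFFSETY
    Reference frame: 1
    2455123.34567 0.00 0.00
    2455123.35012 1.25 -0.75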
The table of air-mass coefficients is stored in ASCII format; lines are separated by a CR+LF, LF or CR character. The writer can choose any of these separators; it is recommended to use the separator native to the writer’s environment. The reader must correctly handle all of them. The first line contains a list of column names separated by a single space character. The second line is left empty.
The table values are stored on the following lines. The values are separated by a space character; rows are separated by an end-of-line character (see above). Parsers must ignore any additional white-space characters between fields, at the beginning and at the end of a line. See table 10 for the description of the columns.
Keyword | Description | |
JD | Geocentric Julian date of observation | |
AIRMASS | Air-mass coefficient | |
ALTITUDE | Apparent altitude in degrees above horizon | |
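A fictitious sample; note the empty second line required by the format:

    JD AIRMASS ALTITUDE

    2455123.34567 1.42 44.73
    2455123.35012 1.40 45.12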
The mag-dev curve data file consists of a table of indexes, mean magnitudes and standard deviations for all detected stars. Such data are intended for detecting variable stars on a set of CCD frames. The magnitudes are always in differential form; they are relative to the comparison star.
Output files are stored in ASCII format; the end of a line is represented by CR+LF in the DOS/MS Windows environment and by LF in the Unix/Linux environment. The first line contains a list of column names separated by a single space character. The second line consists of additional information and has no special formatting; it must not start with a number, though.
The table values are stored on the following lines. The values are separated by a tab character or a single space; rows are separated by the end-of-line character (see above). Parsers must ignore all additional white-space characters. See table 11 for a short description of the columns.
Keyword | Description | |
INDEX | Ordinal number of a star in the reference file | |
MEAN_MAG | Mean relative magnitude of a star | |
STDEV | Standard deviation of previous value | |
GOODPOINTS | Number of measurements used for computation | |
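A fictitious sample of a mag-dev curve file:

    INDEX MEAN_MAG STDEV GOODPOINTS
    Aperture: 5, Filter: V
    1 0.812 0.009 42
    2 1.904 0.051 40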
This format is used to export the measurement data for all objects and a set of frames into one file. Its name originates from the old ’readall’ utility. In the C-Munipack software, such a file can be created by the ’Find variables’ tool, and this tool can also import data from such a file.
The readall file is a text file. Lines are separated by a CR+LF, CR or LF character. The writer can use any line separator; the reader should correctly handle any of these. Fields are separated by a space character; the reader should ignore any additional space characters between fields, at the beginning of a line, or between the last field and the following line separator.
The first line is used to identify the file. It should contain the following text:
The second line starts with a hash character and is followed by an aperture identifier and a color filter name. The third and following lines consist of measurement data.
The first field of each line is the Julian date of observation. The following fields contain the raw instrumental brightness of the objects, each followed by its error estimate. The data are organized in such a way that one object always occupies the same fields on each line. Invalid magnitudes are represented by the value ’99.99999’ and invalid error estimates by the value ’9.99999’.
The length of a line is not limited. Because each line carries the measurement data for an entire frame, the reader must be designed to handle very long lines correctly.
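For illustration, the second and third lines of a readall file with two objects might look like this; the identification line that must appear first is omitted here, and all values are fictitious:

    #1 V
    2455123.34567 12.34567 0.01234 99.99999 9.99999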
The C-Munipack library provides an extensive set of functions with simple application programming interface (API) for reduction of images carried out by a CCD camera, aimed at the observation of variable stars.
The library has been developed as a part of the C-Munipack project. See the project’s home page for more information about the project and other interfaces.
The following text describes the implementation of a very simple application which performs photometry on a single CCD frame using functions from the C-Munipack library. It is assumed that you have the C-Munipack library and its headers installed.
Let’s start - make a new project in your favorite IDE and make a big mug of coffee for yourself. First of all, we need to include some standard headers:
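    #include <stdio.h>     /* fprintf */
    #include <stdlib.h>    /* EXIT_SUCCESS, EXIT_FAILURE */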
The declarations from the C-Munipack library are sorted into several header files, but we need to include just one umbrella header, which in turn includes all the other public ones (the exact file name may differ between versions; check your installation):
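    #include <cmunipack.h>    /* umbrella header; the actual name may differ */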
We will use command-line arguments to specify the name of the file with the input CCD frame and the name of the file where the result shall be saved. Because of this, our declaration of the main function looks like this:
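    int main(int argc, char *argv[])
    {
        CmpackPhot *lc;    /* photometry context, created below */

        /* ... the body is developed step by step in the following text ... */
    }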
The lc variable is called the context. It stores the configuration parameters and keeps the internal data used during the photometry process. It allows the caller to process multiple frames in different threads without interference. Contexts are the trick that makes all functions from the library re-entrant; you can safely call the functions from different threads without any synchronization or locking, provided that those threads use different contexts. There are two exceptions to what was said about thread safety: the initialization routine must be called before any other call to the library, and the clean-up routine must be the last call to a library function.
At the beginning of our sample program, we call the library’s initialization routine:
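    cmpack_init();    /* must precede any other library call; the exact
                         signature is assumed here - see the API Reference */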
Next, we check the command-line parameters. This demo expects two parameters: the first one is the name of the input file with a CCD frame in the FITS format, the second one is the name of the file where the result of the photometry shall be written.
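A straightforward check might look like this:

    if (argc != 3) {
        fprintf(stderr, "Usage: %s <input frame> <output file>\n", argv[0]);
        return EXIT_FAILURE;
    }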
Before we do any real work, we need to create a new photometry context. This is an opaque data structure that keeps the configuration parameters required to perform photometry on a frame. The context is created by calling the cmpack_phot_init routine. This function creates a new photometry context, sets all parameters to their default values and sets the reference counter to one. A pointer to the context is returned. The caller becomes the owner of the context; when we don’t need it anymore, we should release the reference by calling the cmpack_phot_free function, which decrements the reference counter and destroys the context.
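A sketch, assuming cmpack_phot_init takes no arguments:

    lc = cmpack_phot_init();
    if (lc == NULL) {
        fprintf(stderr, "Failed to create a photometry context\n");
        return EXIT_FAILURE;
    }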
Let’s set up at least the two most important configuration parameters – the filter half-width and the detection threshold. We leave all other parameters at their default values.
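The setter names below are hypothetical placeholders; consult the API Reference for the real accessor functions and the units they expect:

    /* hypothetical setter names - check the API Reference */
    cmpack_phot_set_fwhm(lc, 3.0);         /* filter half-width, in pixels */
    cmpack_phot_set_threshold(lc, 4.0);    /* detection threshold */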
Now everything is ready to do the real work. The photometry is executed by calling cmpack_phot, which takes the context as its first parameter and the names of the input and output files as the second and third parameters. The fourth parameter is NULL.
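Following the parameter order described above:

    /* in real code, check the return value for errors */
    cmpack_phot(lc, argv[1], argv[2], NULL);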
Before we finish the program, we should release our reference to the photometry context; since this is the last reference, the context and all data stored within it are destroyed as well.
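The release is a single call:

    cmpack_phot_free(lc);    /* last reference: the context is destroyed */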
Before terminating the program, you should call the library clean-up function.
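In our example, these are the final statements of main:

    cmpack_cleanup();
    return EXIT_SUCCESS;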
The C-Munipack library comes with a configuration file for pkg-config. If your system supports pkg-config, this is the easiest way to pass all the options that are necessary to compile and link an application against the C-Munipack library.
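For example, assuming the package is registered under the name cmunipack (the actual name may differ on your system):

    gcc -o phot phot.c $(pkg-config --cflags --libs cmunipack)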
Otherwise, you have to set up all the options on the command line. Here is an example that builds our source code on Debian 4 using the gcc compiler.
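The include and library paths below are illustrative; adjust them to your installation:

    gcc -o phot phot.c -I/usr/local/include -L/usr/local/lib -lcmunipack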
In the C-Munipack library, all dynamic memory allocations are performed by means of a group of routines which correspond to the C standard library memory handling functions.
When you build your own application using the C-Munipack library, please check the API Reference to find out when the caller is responsible for freeing allocated memory blocks. To free such blocks, you have to call the cmpack_free routine; otherwise, the behavior of the program is unpredictable.
When the C-Munipack library is compiled with the _DEBUG symbol defined, the memory handling functions keep track of the blocks allocated on the heap. When the application calls the cmpack_cleanup routine, the list of blocks that are still allocated is printed to the standard output. This feature is designed to detect memory blocks that were allocated but never freed.
When the C-Munipack library is compiled with the _DEBUG symbol defined, the memory handling functions also check for out-of-block modifications. When a memory block is requested, the function allocates 8 extra bytes and writes 4 bytes of constant data at the beginning of the block and 4 bytes of constant data at the end of the block. The function returns a pointer to the memory just after the first four bytes, so from the caller’s perspective everything is transparent.
When a memory block is freed or reallocated, the program checks whether the leading and trailing data have been modified. If they have, an assertion is issued to stop the execution of the program.
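The scheme can be illustrated by the following sketch. This is not the library’s actual code; among other things, a real implementation must also record the block size so that the trailing sentinel can be located when the block is freed.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define GUARD 0xDEADBEEFu    /* arbitrary sentinel pattern */

    /* Allocate 'size' payload bytes with 4-byte sentinels on both sides;
       the caller receives a pointer to the payload only. */
    static void *debug_alloc(size_t size)
    {
        uint32_t guard = GUARD;
        unsigned char *block = malloc(size + 8);
        if (block == NULL)
            return NULL;
        memcpy(block, &guard, 4);              /* leading sentinel */
        memcpy(block + 4 + size, &guard, 4);   /* trailing sentinel */
        return block + 4;
    }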
The library initialization routine cmpack_init must be called prior to any other call to the library. It initializes the static data and, when the library was compiled with the _DEBUG symbol defined, it also initializes the data required by the memory leak detection.
The library clean-up routine cmpack_cleanup can be called before the program terminates. It is not strictly necessary if you don’t care about memory leaks, which is a bad idea in general. The clean-up code releases any memory allocated in static data. When the library was compiled with the _DEBUG symbol defined, it prints to the standard output a report on any memory blocks that were allocated but not freed. Please note that only memory blocks handled by the library’s allocation/free routines are tracked by the leak detector.
Functions from the C-Munipack library are re-entrant. This means that you can call them from different threads without any external synchronization or locking, as long as they are called on different data. The data used inside a function call are stored on the stack; the data passed between function calls are stored in memory allocated on the heap, called a context.
On the other hand, functions from the library are not thread-safe. When you call a function on the same context (data) from two threads at the same time, the behavior of the program and its results are unpredictable.
For example, it is safe to perform photometry on two frames in two threads simultaneously, but you have to create two photometry contexts (instances of the CmpackPhot type).
There are two exceptions to this rule: the library initialization routine cmpack_init and the clean-up routine cmpack_cleanup. These routines are neither re-entrant nor thread-safe.
Most of the structures published by the C-Munipack library are opaque. They can be referenced by pointers, but their internal structure and content can be accessed only by calling the appropriate functions. This slows down the execution of the resulting program, but it provides binary compatibility for code that was dynamically linked against the library.
To ease binding into other languages, the structures contain reference counters that can be incremented and decremented; when the last reference to a structure is lost, the structure is freed automatically. For each structure that supports reference counting, there are two functions:
A function with the reference suffix increments the reference counter of the object given as its argument; its return value is a copy of the argument. A function with the free suffix decrements the reference counter of the object given as its argument, and when the counter reaches zero it destroys the object and frees the allocated memory.
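Assuming the CmpackPhot context follows this pattern, the naming convention implies usage like this (the reference-counter values are shown in the comments):

    CmpackPhot *extra = cmpack_phot_reference(lc);    /* counter: 2 */
    /* ... both lc and extra refer to the same context ... */
    cmpack_phot_free(extra);                          /* counter: 1 */
    cmpack_phot_free(lc);                             /* counter: 0, destroyed */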
[1] Berry, R.; Burnell, J.: The Handbook of Astronomical Image Processing. Richmond, VA: Willmann-Bell, 2000
[2] Hardie, R. H.: in Astronomical Techniques, Hiltner, W. A. (ed.), Chicago: University of Chicago Press, 1962
[3] Pickering, K. A.: The Southern Limits of the Ancient Star Catalog. DIO 12:1, 20, n. 39, 2002
[4] van Flandern, T. C.; Pulkkinen, K. F.: Low-precision formulae for planetary positions. Astrophysical Journal Supplement Series, vol. 41, Nov. 1979, p. 391-411
[5] Stetson, P. B.: in V Advanced School of Astrophysics, eds. B. Barbuy, E. Janot-Pacheco, A. M. Magalhães and S. M. Viegas, São Paulo: Univ. de São Paulo, 1989
[6] Stetson, P. B.: MUD/9 - DAOPHOT II User’s Manual. Available at: http://www.star.bris.ac.uk/~mbt/daophot/, retrieved 12.09.11
[7] Valdes, F. D.; Campusano, L. E.; Velásquez, J. D.; Stetson, P. B.: FOCAS automatic catalog matching algorithms. Publications of the Astronomical Society of the Pacific, vol. 107, 1995, p. 1119
[8] Groth, E. J.: A pattern-matching algorithm for two-dimensional coordinate lists. Astronomical Journal, vol. 91, May 1986, p. 1244-1248
[9] Hampel, F. R.; Ronchetti, E. M.; Rousseeuw, P. J.; Stahel, W. A.: Robust Statistics. The Approach Based on Influence Functions. New York: John Wiley and Sons, 1986
[10] Huber, P. J.: Robust Statistics. New York: John Wiley and Sons, 1981
[11] Barberá, R.; Iparraguirre, J.: AVE. Available at: http://www.astrogea.org/soft/ave/aveint.htm, retrieved 27.12.11
[12] Andronov, I. L.; Baklanov, A. V.: Algorithm of the artificial comparison star for the CCD photometry. Astronomical School’s Report, vol. 5, no. 1-2, 2004, p. 264-272
brightness
minimum, 7
context, 38
correction
Bessel’s, 19
bias-frame, 4
dark-frame, 3
flat-frame, 4
heliocentric, 20, 22
curve
light, 34
mag-dev, 18
date
Julian heliocentric, 21
detection
stars, 5
file
catalogue, 31
light curve, 34
mag-dev curve, 36
photometry
binary, 28
XML-based, 25
track list, 35
filter
low-pass, 6
format
readall, 36
frame
bias, 4
dark, 3
flat, 4
master bias, 5
master dark, 4
master flat, 5
matching, 14
merging, 20
function
delta, 8
Gaussian, 6, 9
point-spread, 10
interface
application programming, 38
law
Pogson’s, 11
library
clean-up, 40
initialization, 40
magnitude
differential, 11
instrumental, 11
magnitudes
differential, 34
instrumental, 34
mean robust, 23
memory
allocation, 40
leaks, 40
photometry
aperture, 10
reference counting, 41
roundness, 9
scheme
calibration
advanced, 3
standard, 3
sharpness, 8
sky annulus, 10
star
comparison, 18
artificial, 17
thread safety, 41
tool
Find variables, 18