Last updated: Dec/2015
This text is not meant to be a self-contained manual for learning the method of polarimetric reduction; rather, it should be viewed as a roadmap with key milestones and tips, with particular emphasis on polarimetry. Indeed, I recommend that you read my short tutorial on the procedure for polarimetric observations before your first attempt at data reduction; it describes various aspects of polarimetry in more detail.
We discuss the reduction of data
obtained with the polarimetric module installed at the
Observatório do Pico dos Dias (OPD/LNA), in Brazópolis, Brazil.
This instrument is described in Magalhães
et al. (1996). It is worth taking a look at the instrument's web page, where you can also find other texts on polarimetric reduction. The heart of the instrument is a rotating retarder followed by a fixed analyser. Specifically, the retarder has 16 possible positions separated by 22.5 deg (22.5 deg x 16 = 360 deg). The analyser produces two images of the same object, because the original beam is split in two. The core of a polarimetric measurement is to obtain several images of an object at different positions of the retarder. The ratio between the difference and the sum of the two beam counts is used to measure the polarization of the object.
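To make this concrete, here is a minimal sketch in plain Python (hypothetical names, not pccdpack code) of the quantity carried by each retarder position:

```python
# Illustrative sketch (hypothetical names; not pccdpack code):
# at each retarder position, the polarimetric information is carried by
# the normalized difference of the counts in the two beams.

def normalized_difference(ordinary, extraordinary):
    """(o - e) / (o + e) for one pair of beam counts."""
    return (ordinary - extraordinary) / (ordinary + extraordinary)

# Example: 10500 counts in one image of a star, 9500 in the other
z = normalized_difference(10500.0, 9500.0)   # 0.05
```

Note that any factor common to both beams (exposure time, transparency) cancels in this ratio, which is why the method is robust.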
We caution the reader that the data reduction procedure described in this manual may not be directly applicable to polarimetric data obtained with other instruments. In addition, we only deal with the reduction of data taken using a Savart prism as the analyser, an element that splits the beam of a given object into two rays (more on that later!). As it is made of calcite, it is sometimes referred to simply as "calcite". Thus, if the frame you are reducing has two images for each object (a cross-eyed view of the sky...), you can be sure that it was obtained with calcite, and this document applies. See here an example of an image obtained with calcite; in particular, note the two images of the brightest object in the field. Polarimetric data can also be obtained with an analyser that produces only one image per object, such as a Polaroid, but I do not treat that type of reduction here.
Polarimetric data can be obtained with two types of retarders: a half-wave plate (l/2) or a quarter-wave plate (l/4). In the first case, only linear polarization can be obtained, and the minimum number of images needed to measure the polarization of an object is 4. With a quarter-wave plate, we obtain both linear and circular polarization, but at least 8 images are needed.
The technique and calculation procedure are described in Magalhães et al. (1984) for a half-wave retarder plate and in Rodrigues et al. (1998) for a quarter-wave plate.
Presently, some routines of the pccdpack package used by INPE's group are not the same as those included in the pccdpack distribution. Ask Claudia about this INPE version. Besides pccdpack, you should also install the following IRAF packages: tables, stsdas, and ctio. See the links for installation instructions.
Many IRAF routines accept not only a single input file but also a list of input files, so you can process many files with a single command; this is very useful and is used a lot in this document. To do this, use as the input a file containing the names of the individual files to be reduced. To tell IRAF that the input is a list of files rather than a file itself, precede its name with an @. See how to create a list of files here.
I also suggest organizing the images in different directories: one for the bias images, another for the flats, and one for each object. Use the task imrename to move images between directories from within IRAF. If you prefer to do this in UNIX, use the command mv.
There is a task in IRAF called apropos that can be used to find the tasks related to a particular word. For example, apropos gaussian returns tasks involving Gaussians.
Check whether the entire image is displayed in DS9/SAOimage. Use the command "imhead image l-" to verify the image size. Using the cursor in the DS9 window, check whether the upper right corner coordinates correspond to the image size. If not, check whether:
the display parameters are set to display the entire image;
The basic reduction outlined here applies to any dataset obtained with a CCD. It aims to correct the images for noise and distortion introduced by the characteristics of the detector. Again, a quick read-through of A User's Guide to CCD Reductions with IRAF is recommended.
Steps:
create a bias or zero image to correct for readout noise and determine the overscan region of the CCD;
create a flat-field image to correct for differences in the response of the pixels in the CCD;
determine the trim section of the CCD (one must trim the images to cut out the edges with very noisy signal or no signal);
calculate the offset between the images.
Below, a detailed procedure for each of these steps is described.
The master bias is an estimate of the error introduced by just reading out the detector. Here, we outline the process of creating a master bias by combining a series of bias images. The bias images should be taken with the shutter closed and with very short exposure times. The bias images and science images should have the same readout parameters (binning and readout frequency and speed, for example), since these parameters modify the readout noise pattern across the frame. The steps for creating a master bias are:
create a list containing the names of all the individual bias
images - see tip
combine the images using the zerocombine task (see lpar)
Important parameters:
(combine = "average") Type of combine operation
(reject = "avsigclip") Type of rejection
(ccdtype = "") CCD image type to combine
The images that should be corrected by bias are those of:
objects (science, standard, etc) and flats.
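The combine-with-rejection step performed by zerocombine can be sketched as follows (a simplified robust average in numpy; this is a stand-in for illustration, not the actual IRAF avsigclip algorithm):

```python
import numpy as np

def combine_bias(stack, nsigma=5.0):
    """Robust average of a stack of bias frames: reject pixels more than
    nsigma (robust) sigmas from the per-pixel median, then average the
    survivors. A simplified stand-in for IRAF's zerocombine with
    reject="avsigclip" (not the actual IRAF algorithm)."""
    stack = np.asarray(stack, dtype=float)
    med = np.median(stack, axis=0)
    # robust per-pixel scatter (MAD scaled to ~1 sigma), floored to avoid /0
    sigma = np.maximum(1.4826 * np.median(np.abs(stack - med), axis=0), 1.0)
    mask = np.abs(stack - med) <= nsigma * sigma
    return np.where(mask, stack, 0.0).sum(axis=0) / mask.sum(axis=0)

# Five fake 2x2 bias frames at ~100 counts; one cosmic-ray hit at 999
frames = [np.full((2, 2), 100.0) for _ in range(5)]
frames[2][0, 0] = 999.0
master_bias = combine_bias(frames)   # every pixel back to 100.0
```

The point of the rejection step is exactly what this toy shows: a single cosmic-ray hit in one frame does not contaminate the master bias.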
A master flat is made by averaging a series of images obtained while the CCD is uniformly illuminated. Thus, any difference in counts is due to the sensitivity (or illumination) of the individual pixels. Take care to create a master flat-field for each filter independently.
Now that you have the master bias and master flat images, you can use these calibration frames to correct your science images. Each science image should be corrected by bias and flat; each flat image (or the master flat) should be corrected by bias.
For this we use the routine ccdproc. This routine also removes the portion of the CCD which does not have a usable signal. See here for the parameters of this routine. The reader is cautioned, however, that several of the parameters used for ccdproc may vary from instrument to instrument as well as over time. In particular, the "trimsec" varies according to the CCD used. See the link below for the appropriate values for the LNA CCDs.
TRIMSEC and other information about the LNA detectors
Before performing ccdproc, make a backup of all images! One suggestion is to create a directory with a helpful mnemonic as a name (bck, for example) and then make a copy of all images with the task imcopy.
WARNING! If the images have vignetting, do not use ccdproc to correct for the flat field. See below.
Overscan: CCD frames typically include a number of rows/columns not exposed to light. This region is called the overscan, and it can be used to determine an additive level in the counts. However, tests with LNA images have shown that applying an overscan correction separate from the bias (readout noise) correction only worsens the outcome.
Flatfield: ccdproc normalizes each flat field so that all frames have an average count of 1. The division of subsequent images by the master flat therefore doesn't change their average counts. There remains, however, an important question: what is the best method to normalize a flat? Because differential photometry and polarimetry are both based on quantities that are ratios of fluxes, any multiplicative constant applied to all pixels cancels out. For example, whether we use an average (as ccdproc does) or a mode (which must be done by correcting the flat "manually"), one will determine the same polarization values and differential magnitudes. There are, however, methods that are based on normalization by a predefined function. In those cases, the normalizing constant will not be invariant. I have not done any tests to determine whether such methods allow a better end result.
ccdproc puts flags in the headers of images identified as having been processed with a given correction. These flags can be viewed with the task imhead (long output) or with the task ccdlist. The following is an example of the output of the latter:
ccdred> ccdlist @arch
PAD2-R0001.fits[1023,1023][real][none][][TZF]: HD110984
PAD2-R0002.fits[1023,1023][real][none][][TZF]: HD110984
PAD2-R0003.fits[1023,1023][real][none][][TZF]: HD110984
Note the flags T, Z and F, which mean that these images have been trimmed, zero corrected (= bias) and flat-fielded, respectively.
The master bias frame (i.e., the correction for the readout noise) should be trimmed. The master flat frame (i.e., the correction for the sensitivity of the pixels) must be trimmed and bias corrected. ccdproc uses these flags to check whether a bias or flat correction has already been performed and, if it has not, makes the correction. Be sure to use ccdlist to check that the master bias and master flat have been properly corrected: they should have the T and TZ flags, respectively.
Run ccdproc (bias and trim) on the average of the flat-field images, hereafter the master flat;
Calculate the average value of the plateau (i.e., the region without vignetting) of the master flat. To do this:
Create a file with the center pixel of the flat (test it with tvmark);
Do photometry with apertures of 20, 30, 40 ... up to a large number that roughly represents the border of the plateau region;
Choose the maximum count number obtained with the different apertures. This number should represent the counts of the region without vignetting.
Divide the master flat by this value:
imarith [master flat] / [mean value] [final master flat]
The plateau in the resultant image should now have a value close to 1;
Perform the flat correction by dividing each image by the final master flat:
imarith @arch / flatave_norm @arch
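The plateau normalization described above can be sketched numerically (numpy; the circular plateau mask and all names here are illustrative only):

```python
import numpy as np

def normalize_flat(flat, center, radius):
    """Divide a flat by the mean counts inside a circular 'plateau'
    (the region free of vignetting). Sketch of the manual procedure
    described in the text; in practice the center and radius come from
    the tvmark and aperture-photometry tests."""
    y, x = np.indices(flat.shape)
    cy, cx = center
    plateau = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    return flat / flat[plateau].mean()

# A fake 64x64 flat with ~20000 counts: after normalization the
# plateau sits close to 1, as the text requires
flat = np.full((64, 64), 20000.0)
norm = normalize_flat(flat, center=(32, 32), radius=20)
```

The key design choice, as in the manual procedure, is that only the unvignetted region defines the normalizing constant, so the vignetted borders do not bias it.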
Tests were made using the above procedure and also using ccdproc (i.e., using the normalization over the entire image area). With the proposed alternative procedure, the errors are smaller and closer to the value expected from photon noise.
estimate the characteristics of the image: seeing and sky noise (imexamine);
determine the objects that will be reduced, i.e., create the coordinate file for the first image (daofind and ordem);
create the coordinate files for each image taking into account a possible displacement between them (xregister and ordshift). You can use xregister to also check the shifts. I really recommend you do that!
perform aperture photometry (phot);
perform differential photometry and/or polarimetry. We use the tasks cria_dat and pccdgen, respectively. The reduction procedure depends on whether the data are from an l/2 or l/4 wave plate.
Let's start by investigating the tasks found in the apphot package. Several routines share parameters that are, in turn, defined in specific parameter tasks. The parameter tasks to be edited are:
fitskypars (see lpar): parameters associated with the calculation of the sky background level. These values should be kept constant throughout the data reduction of a given observing run.
annulus: the inner radius of the ring whose pixels are used to estimate the sky. It must be set so that the ring lies beyond the two images of the same star, but not too far from them (~50 pixels);
dannulus: the width added to the inner radius to define the outer radius of the ring where the sky is estimated (~10 pixels);
algorithm: mode
Comment/suggestion: the choice of annulus and dannulus does not change the result very much. However, you should make sure of that! After a first complete reduction, when you have some confidence in the correctness of the process, re-reduce the image of a bright star and of a faint star with different annulus, dannulus and algorithm settings. Compare the final polarizations (circular, linear, PA and errors).
datapars: image parameters
two parameters must be edited in this task
FWHM
Sigma
Be careful with saturated stars. Typically, a CCD has a maximum count of ~32000. If the image has many saturated objects, you may want to set "datamax" in datapars. This will turn on an error flag in the phot output if a pixel with a value above "datamax" is found inside the aperture. Another option is to use a task developed by Cristiane G. Targon, called disp_sat. This routine displays the saturated stars (defined as those with counts greater than 29,000) and facilitates their identification and removal from the coordinate file.
photpars: photometry parameters
choose the apertures to be used: use 10 apertures. Usually the range of values is 2 to 11 (2,3,4,....,11)
constant magnitude: if you want the output magnitudes in the mag files to have a basis in reality, you should set the magnitude zero point; change it in photpars. However, these mag values are NOT used in the reduction, so your results DO NOT depend on this value.
centerpars: parameters for centering
the object. The centering method must be set to 'centroid'.
There can be some cross-talk between packages for some of the parameters listed above. For example, centerpars is used in both apphot and daophot. This can cause some confusion: you may remember setting a parameter in apphot when you actually did so in daophot. It is wise to check for this kind of error if a routine is not behaving as you expect. One tip is to keep the correct parameters in both packages, since IRAF can move between them without the user being aware.
a (press a at a given position in your image): performs aperture photometry at the cursor position;
r: tries to center on an object and plots the radial profile around the calculated center;
?: Help.
Use the above actions to estimate the full width at half maximum (FWHM) of the stellar profile and the sky counts. The width is an estimate of the seeing, and the square root of the sky value is the sigma value; both should be input in datapars.
To perform aperture photometry, you must define which objects will be reduced. In practice, this means that you should create a file containing the coordinate pairs. It has the following format:
Ax1 Ay1
Ax2 Ay2
Bx1 By1
Bx2 By2
....
where each pair (A, B, etc.) represents the two images of the same object, i.e., (Ax1,Ay1) is the ordinary image of object A and (Ax2,Ay2) is the extraordinary image of the same object. So each object has TWO lines in the coordinate file. Note that the order within each pair must always be the same. Example: the upper image of a given object must always be the first of the pair. This can be achieved if you use the routines daofind and ordem to find the centers and define the pairs. This care is necessary because the angle of polarization depends on the order of the two images (of the same object) in the coordinate file. Specifically, if you calculate the linear polarization of the pair:
Ax1 Ay1
Ax2 Ay2
it is different by 90 degrees from the result you obtain if the
coordinate file is:
Ax2 Ay2
Ax1 Ay1 .
Hence, decide that the bottom (or the upper) image is the first one of the pair to be listed in the coordinate file, and follow this convention throughout the reduction of the run! If this is not done, the angles of polarization will be WRONG! The order of the images in each pair will automatically be kept the same if you use the daofind and ordem procedures.
The first step in creating a coordinate file is to find the stellar images in the frame. This is done with the IRAF task daofind. You can decide the threshold above which objects are detected. This threshold is given in units of the standard deviation of the sky, hereafter sigma. Let's see an example. Suppose you have a sky level of around 100 (so the sigma is 10). You are working with the frame of a standard star whose profile has 10,000 counts at the peak, and it is by far the brightest object in the frame. So you can tell daofind to find only objects with counts above 8,000 counts (800 sigma). It will probably find two objects (the ordinary and extraordinary images of your standard). Their centers will be written to an output file called <image>.coo.<number>. The <number> is 1 if it is the first coo file of this image, 2 if it is the second, and so on. You can visually check which objects were found using the tvmark task, which marks the objects in the coo file on your image display (ds9 or ximtool).
To get a file with the format described above (with the images ordered in pairs), you need to use the task ordem (from pccdpack). This task finds the pairs of ordinary and extraordinary images of a given object and creates a new file, whose extension should be .ord. Among its parameters, shiftx and shifty represent the distance, in pixels, along the x and y axes between the two images of the same object. You have to estimate these values for each observing run. You can do this using the imexamine task: center three or four pairs of images in different frames and average the differences along the x and y axes.
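What ordem does can be illustrated with a toy matcher (plain Python; the tolerance and all names are hypothetical, and the real task is more general and robust):

```python
def pair_images(coords, shiftx, shifty, tol=3.0):
    """Pair each (x, y) detection with the detection found near
    (x + shiftx, y + shifty). Toy stand-in for the ordem task:
    returns a list of (ordinary, extraordinary) pairs, keeping a
    fixed order within each pair."""
    pairs = []
    used = set()
    for i, (x, y) in enumerate(coords):
        if i in used:
            continue
        for j, (x2, y2) in enumerate(coords):
            if j == i or j in used:
                continue
            # counterpart expected at the known beam separation
            if abs(x2 - (x + shiftx)) <= tol and abs(y2 - (y + shifty)) <= tol:
                pairs.append(((x, y), (x2, y2)))
                used.update({i, j})
                break
    return pairs

# Two stars, each with two beam images separated by ~30 pixels in x
stars = [(100.0, 200.0), (130.2, 199.8), (50.0, 80.0), (80.1, 79.9)]
pairs = pair_images(stars, shiftx=30.0, shifty=0.0)
```

Because the first member of each pair is always the one found first in the input list, this kind of matching naturally enforces the fixed pair ordering that the text insists on.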
If you need to use ordem with a file containing many objects (over 100), use the routine ordem2 (a task written by Claudia Rodrigues, not a normal part of pccdpack). Because it is compiled, it is much faster.
Usually one uses the first frame in the sequence for the initial
coordinate file, but sometimes another image may be more suitable
because it provides a better signal-to-noise ratio or because
pointing shifts have occurred during observations.
Let's illustrate with two extreme cases:
polarization of a single object;
polarization of all the stars in the field.
This section outlines how to reduce data when there is only a single object of interest in the field. An example is the calculation of the polarization of a polar with a quarter-wave plate. If the object is the brightest in the field, use the task daofind with a detection limit slightly below the peak count of the object. Easy! After finding the star, use tvmark to verify that the object of interest was found. If so, use the routine ordem to order the pair. If not, decrease the detection limit.
For polars in a crowded field, the coordinate file may contain a large number of objects. When doing the polarimetry, this file can be edited to contain only the polar (i.e., the object of interest).
Set daofind to a detection limit of around 4 or 5 sigma. Use tvmark to see whether the desired number of objects was found. If so, run ordem. Then use tvmark again to check the pairs actually found. You can set tvmark to mark and number the objects, so you can see the order of the objects in the coordinate file. Example:
Pair 1 (=Object 1): centers 1 and 2
Pair 2 (=Object 2): centers 3
and 4
Pair 3 (=Object 3): centers 5
and 6
and so on
It can be a good idea to print the image with these numbers overlaid; this is the easiest way to know the correspondence between the polarization values and the field objects.
It is important at this stage to determine whether there are saturated stars in the field, i.e., do any objects have brightness profiles indicating that some pixels have hit the ceiling above which the CCD cannot record properly? Saturated stars can be removed by manually editing the output file of either daofind or ordem. A tip from Cristiane Targon is to set imexam to display with a minimum level (z1) around 29,000 and a maximum level (z2) of 35,000. It also helps to set zscale and zrange to "no" and ztrans = log. She has developed a task called disp_sat, which displays the saturated stars (defined as those with counts greater than 29,000) and facilitates their identification and removal from the coordinate file.
You now have a coordinate file for an initial reference image. However, you need a file for each image in your dataset.
Why can't you use the same file for all images? It is quite common for the telescope to move slowly over a number of integrations. This results in a given object having different center positions (x, y) in different images. Thus, the coordinate file that was created based on the first image (or any other reference you have chosen) may not be suitable for the others. To solve this problem, we (1) calculate the offset of each image with respect to the reference and (2) apply them to the coordinate file of the reference image, creating a coordinate file for each image. The tasks used for this are xregister (step 1 - calculate the shifts) and ordshift (step 2 - create a coordinate file for each image considering the calculated shifts).
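Step 2 is conceptually simple: add each image's measured offset to every coordinate of the reference file (a toy stand-in for what ordshift produces; names are illustrative):

```python
def shift_coords(ref_coords, dx, dy):
    """Apply one image's (dx, dy) offset, as measured by xregister,
    to every coordinate in the reference list. Toy stand-in for the
    per-image coordinate files written by ordshift."""
    return [(x + dx, y + dy) for (x, y) in ref_coords]

# One ordinary/extraordinary pair from the reference image:
ref = [(100.0, 200.0), (130.0, 200.0)]
# Suppose xregister found a later frame displaced by (-2.4, +1.1):
coords_later = shift_coords(ref, -2.4, 1.1)
```

Note that a pure translation preserves the pair ordering, so the polarization-angle convention discussed earlier is not affected by this step.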
The commands are as follows; I suggest the following procedure:
With the objects now defined, let's carry out our aperture photometry using the task phot (see parameters). It is essential that the tasks of apphot are configured properly, so one must confirm several parameters encountered in phot. Phot will calculate the sky background for each object (using annulus, dannulus and the other parameters from fitskypars) as well as integrate the counts through the apertures defined in photpars. The result, for each image, will be recorded in a file named <image>.mag.*, where * is a sequential number created by IRAF. If you run phot twice on the same image called huaqr001.fits, for instance, you will have two magfiles: huaqr001.mag.1 and huaqr001.mag.2.
At this point, you already have an instrumental magnitude for each object in each image.
Once the *.mag.* files are created, you must choose the type of reduction you are interested in: polarimetry (l/2 or l/4) or photometry. Below we describe the procedures for each case.
To calculate the polarization of your target, you must select fields from the *.mag.* files and write them to one or more separate files, which are the input for the pccdgen task, the program that calculates the polarization. It is customary to call these files *.dat. Each dat file corresponds to ONE polarization point. Hence, if you are working with a time series, you will have as many dat files as polarization points in the final polarization curve. If you are interested in only ONE value of polarization, using all the frames you have, you will create only one dat file.
There are two tasks which may be used to select the data fields required to perform polarimetry: txdump or cria_dat (this one is simpler).
cria_dat: This task creates input files for pccdgen. The steps are:
create a file-list with magfiles. A suggested name is mag:
files *.mag.1 > mag
create one or more dat files with cria_dat. An example of a parameter setup which may be used to create dat files with 8 images from many magfiles is shown here. Typically, each dat file uses 4, 8 or 16 magfiles, where each magfile corresponds to a wave-plate position. As a rule of thumb, use:
16 for standard stars;
8 for variable objects observed with l/4;
4 for variable objects observed with l/2.
txdump is a task for selecting and printing specific fields of a magfile within IRAF. See here the parameters used to create an input file for the program that calculates the polarization.
Considering a given object, in each frame we can calculate the following value:

        (ordinary flux - extraordinary flux)
    X = ------------------------------------
        (ordinary flux + extraordinary flux)
This value depends on the Stokes parameters of the incident beam and also on the characteristics and positions of the optical elements that make up the polarimeter. As the latter are known, we can estimate the Stokes parameters from the modulation of the value above.
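To make the modulation concrete: for a half-wave plate, the value X at wave-plate position psi_i varies approximately as X_i = Q cos(4 psi_i) + U sin(4 psi_i) (the technique of Magalhães et al. 1984), so Q and U follow from a linear least-squares fit. A minimal numpy sketch, for illustration only (this is not the pccdgen algorithm):

```python
import numpy as np

# Sketch only: for a half-wave plate the normalized difference X_i at
# wave-plate position psi_i modulates as
#   X_i = Q*cos(4*psi_i) + U*sin(4*psi_i)
# so Q and U follow from a linear least-squares fit to the measured X_i.

def fit_halfwave(psi_deg, x):
    psi = np.radians(np.asarray(psi_deg, dtype=float))
    design = np.column_stack([np.cos(4 * psi), np.sin(4 * psi)])
    (q, u), *_ = np.linalg.lstsq(design, np.asarray(x, dtype=float), rcond=None)
    p = np.hypot(q, u)                                   # polarization degree
    theta = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0   # position angle
    return q, u, p, theta

# Fake data: 4 positions, 22.5 deg apart, generated from Q = 0.03, U = 0
psi = [0.0, 22.5, 45.0, 67.5]
x = [0.03 * np.cos(np.radians(4 * a)) for a in psi]
q, u, p, theta = fit_halfwave(psi, x)
```

The amplitude of the fitted curve, sqrt(Q^2 + U^2), is the polarization degree, which is exactly why the text says the amplitude of the modulation is directly related to the polarization.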
Over a full 360-degree rotation of the wave plate, the modulation of this value traces a curve which can be fit to determine the polarization. Specifically, the amplitude is directly related to the polarization degree. See examples of this modulation for a linearly polarized object observed with l/2 and l/4 wave plates, respectively. Note, at the top of the chart, the values of the Stokes parameters determined by the fit. Another example shows the same plot for an object with circular polarization, a polar. Objects which are linearly polarized show a modulation with a period equivalent to 90 degrees, while the modulation period of circularly polarized light is 180 degrees.
There is more than one task to calculate the polarization from a dat file, i.e., to fit the modulation of the value X.
pccdgen: a generic program that can in principle calculate the polarization from any type of data. It is the recommended task.
pccd: a simple program that calculates the polarization of one set of data obtained with a half-wave plate.
vccd: another simple program that calculates the polarization of one set of data obtained with a quarter-wave plate.
The task pccdgen creates a *.log file: the file that has the polarization value! Hooray! You're almost done... See an example here. See also the parameters of the routine pccdgen for l/2 and l/4 wave plates. Among the input parameters, one is the executable to be used. Be careful to use the correct exe file!!
(fileexe = "/Users/cryan/iraf/extern/pccdpack/pccd/pccd4000gen08.exe") PCCD executable
Always check which is the latest version of the executable above. The version above was the correct one as of October 2009.
The exe file has to be appropriate for your operating system and installed libraries. You can run it in your unix/linux console to check whether any error occurs. Even if you do not do that, you can check whether pccdgen ran correctly by inspecting your logfile.
It is worth remembering to always check the configuration of the parameter pospars. In particular, it is not enough to define the number of plate positions, NHW, as a given number, 8 for example. You also need to explicitly edit pospars so that only and exactly those n (8, in the example) positions of the plate are used in the calculation.
If you are reducing l/4 data, you should use the option "normalization = no" in the pccdgen task. Another case in which you have to do this is when you are NOT using a complete set of frames in the modulation, in other words, when the number of frames is not a multiple of 4 for a half-wave plate or 8 for a quarter-wave plate. Please see more on that in this section.
When we are dealing with a series of images that will be used to obtain time-resolved polarimetry, cria_dat creates dat files as:
dat.001 - frames 1 to 8
dat.002 - frames 2 to 9
dat.003 - frames 3 to 10
and so on
In this case, the zero position is valid for the first set. For the second, you should add 22.5 degrees to the zero position; for the third, 22.5 degrees x 2 = 45 degrees; and so on. Thus, you return to the zero value (+ 360 degrees) at position 17. There is a task that does this automatically, but if you run pccdgen on a dat file that does not correspond to the zero position, you must update this value.
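The zero-position bookkeeping above amounts to a one-liner (hypothetical helper, for illustration; the pccdpack task does this for you):

```python
def zero_position(first_frame, step=22.5):
    """Wave-plate zero offset (degrees) for a dat file whose first
    frame is 'first_frame' (1-based): frame 1 -> 0, frame 2 -> 22.5,
    frame 3 -> 45, ..., frame 17 -> back to 0 (360 mod 360).
    Hypothetical helper, not a pccdpack routine."""
    return ((first_frame - 1) * step) % 360.0

# dat.001 starts at frame 1, dat.003 at frame 3, etc.
offsets = [zero_position(n) for n in (1, 2, 3, 17)]
```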
As suggested above, the photometry was extracted for a series of 10 apertures. Thus, it is necessary to choose the one that corresponds to the smallest error. The parameter guiding this choice is the sigma of the fit: see a logfile. The procedure for this choice depends on the number of objects reduced.
If you are using the l/2 wave plate (lucky you), continue with this tutorial. If you are using the l/4 wave plate, it is necessary to find the angle of the optical axis. This is done by the procedure specified below.
Be careful! The new polarimetric module installed in early 2007 rotates in the opposite sense to the original module. This brings about a sign inversion of the position angle of the linear polarization and of the circular polarization. Hence, make sure that you select the correct option in the pccdgen routine; otherwise you can get into trouble when converting your polarization measurements to the equatorial coordinate system.
For a single star, the best aperture choice can be made by visual inspection. One way of recording the result is through the graph of the variation of the value:

    (ordinary - extraordinary) flux
    -------------------------------
    (ordinary + extraordinary) flux

The routine graf allows you to view, print or save the plots of this value.
To send the output of graf to the printer, you must:
Edit the graf metacode (.mc)
run graf
In stdplot, set device = stdplot
Run stdplot using the *.mc file created by graf
To create a postscript file, you must:
Edit the graf metacode (.mc)
run graf
In stdplot, set device = epsfl (landscape) or epsfp (portrait)
Run stdplot using the *.mc created by graf
If you are interested in only one object the reduction is completed. Congratulations! Save the graf printout as a record of your reduction. Do not forget to delete the backup images!
Suppose you are reducing a field with hundreds of objects for which the polarization is to be calculated. A manual determination of the aperture corresponding to the error minimum for each object is not practical. You should use the routine macrol (or its variant, macrol_v, for l/4 wave-plate data). This routine creates an output file where the lowest-error measurement for each object is recorded in a row, one row per object. Thus, the number of lines of the outfile should equal the number of objects measured. macrol has two options for choosing the error minimum: absolute or first minimum. The latter is the default, but I prefer to choose the absolute minimum (= full).
The task select (the link points to an example of lpar output) provides two different outputs. It selects specific sub-groups of your data defined by the polarization range, the maximum error, and so on. It also produces graphic outputs of the selected data: histograms and other diagrams. I have used this routine only with l/2 data, hence it is still necessary to test whether it works for l/4 wave-plate data.
One of the select products is a map of the polarization vectors. But you usually also want a figure that illustrates the distribution of polarization vectors over a real image. There are many ways to do that. I recommend a two-step procedure: (1) create the polarimetric catalog; (2) plot the catalog using Aladin. The catalog is a table of RA, DEC, and polarization information for each selected object. Note that the polarimetric catalog can be included in your publication. Let's start with it.
Victor de S. Magalhães developed the one-step task coords_real to produce a catalog of polarization (the link points to an example of lpar output). This task is included in the pccdpack_inpe package and is based on original tasks of the pccdpack package from Antonio Pereyra. To run this task, you should download an image of your field from Aladin. In Aladin, choose File, Load astronomical image, and pick a survey; I usually choose DSS Red. Choose an image size slightly larger (10 - 20%) than your CCD image. Fill in the coords_real parameters according to your CCD image. The catalog extension is ftb. After running coords_real, be sure that RA and DEC were correctly calculated. The task coords_real overplots the calculated positions on the downloaded image, but you should also check it in Aladin - see below.
Having the catalog, you can use Aladin with a filter to plot the catalog superposed on any image of your field. Steps in Aladin:
{
draw
draw ellipse(10.*${POLARIZATION},0.*${POLARIZATION},${THETA})
}
It may be necessary to change the number 10 to scale the vector sizes as you want.
You are done!
If you have more than one catalog (ftb file) covering the same region, you should use the combine_pol Fortran routine. It combines the catalogs, averaging the measurements of the common stars.
== this part is obsolete ==
The step-by-step procedure is explained below.
The vecplot
task allows you to make a figure with a background image of the
field, not necessarily from your data, with superimposed polarization
vectors whose size and direction correctly represent the field
polarimetry. You must first run an auxiliary routine called refer.
This routine converts the coordinates of the objects in your image to
the coordinates in the DSS image. refer, in turn, uses the output of
select.
Vecplot seems to work only on the 32-bit version
of IRAF.
For the background image, you can use an image of DSS obtained from the SkyView (Advanced Form). Use the filter closest to the one used in your data. Some suggestions for the options in SkyView:
If you have trouble running vecplot, you must include the following data in select: (a) the angle correction to convert the polarization angles to the standard system (see the section below); (b) the position of the north and east directions in the image (of your data);
refer converts the coordinates of the objects in your image to those
of the background image that will be used in vecplot. To do that, you
need to know the plate scale of your telescope. If you are using the
OPD telescopes, this information can be found on the LNA page. Below
we present some numbers appropriate for the CCD101 at the IAG
telescope of the OPD as examples of the parameters of the tasks
vecplot and refer.
Example: the plate scale of this telescope is 25.09"/mm, which is
equivalent to 0.02509"/um. Since each pixel of CCD101 (and also
CCD106) is 24 um, the angular pixel size is 0.602". Thus, the refer
parameter for the CCD plate scale should be 0.602"/pixel. To calculate
the plate scale of an image obtained from SkyView, divide the size of
the image in degrees by the number of pixels. Example: an image of
0.25 deg corresponds to 900", so the plate scale is 900/1024 =
0.88"/pixel. It is also necessary to define a common reference star in
the two images: the DSS one and your data.
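The arithmetic above can be checked with a small sketch (plain Python, using the CCD101 numbers from the example):

```python
def ccd_pixel_scale(plate_scale_arcsec_per_mm, pixel_size_um):
    """Angular pixel size in arcsec/pixel from the telescope plate scale."""
    return plate_scale_arcsec_per_mm * pixel_size_um / 1000.0

def survey_pixel_scale(field_deg, npix):
    """Plate scale of a downloaded (e.g. SkyView) image in arcsec/pixel."""
    return field_deg * 3600.0 / npix

print(ccd_pixel_scale(25.09, 24))      # ~0.602 arcsec/pixel (CCD101 at the IAG telescope)
print(survey_pixel_scale(0.25, 1024))  # ~0.88 arcsec/pixel (0.25 deg over 1024 pixels)
```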
Some suggestions if you have problems running vecplot:
- Compare the DSS image with the CCD one. The DSS image has to be a
little larger than the CCD one and with approximately the same center;
- Create a CCD image with the objects marked with tvmark;
- Perform an initial run of vecplot without vectors and compare with
the image above;
- Also test tvmark on the DSS image that refer gives as output;
- It may be necessary to adjust the minimum and maximum levels of the
background image (for example, if the background image is too light or
too dark). This is done by setting the vecplot parameter niveisfull to
"no" and setting the values of the parameters z1 and z2. As initial
guesses, though likely inadequate ones, use the values shown by the
display command.
It is worth reading the vecplot help when working with this task.
16-March-2011 - Claudia:
When we made the catalogs for Cristiane Targon's article, we noticed
an addition of +90 degrees in the final catalog. Apparently, refer is
adding this number when creating the txt file. It is worth keeping an
eye on this and carefully checking the plots as well as the catalog
values.
==>> According to an e-mail exchange with A. Pereyra, the angles in
the txt file do indeed have an addition of 90 deg. The fintab task,
which also works with magnitudes, seems to take care of this. However,
the version of fintab that I modified carried this addition
erroneously. This correction should be made when Victor works on this.
In order to reduce quarter-wave plate data, you must determine the
absolute position of the waveplate relative to its optical axis at
each waveplate position. This is done by calculating the offset of the
instrumental angle relative to the real value. We call this offset the
zero: from "waveplate zero" or "zero da lamina" (in Portuguese). In
short, to estimate the waveplate zero you should calculate the
polarimetry for all possible values of the zero (0 - 90 deg) and
choose the one that provides the smallest error. This should be done
for objects in a given run. Yes, it is a lot of work indeed... This
procedure is described in more detail below and should be done for all
linearly polarized standards and also for stars having a non-zero
circular polarization, usually your program star, if you are doing
research on polars.
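The brute-force search can be sketched as follows. This is illustrative Python only: the real error comes from the pccdgen modulation-curve fit, which is replaced here by a toy function with a known minimum.

```python
def find_waveplate_zero(fit_error, step=1.0):
    """Scan trial zeros between 0 and 90 deg and keep the one that
    minimizes the modulation-curve fit error, as acha_zero/zerofind do."""
    best_zero, best_err = None, float("inf")
    zero = 0.0
    while zero <= 90.0:
        err = fit_error(zero)
        if err < best_err:
            best_zero, best_err = zero, err
        zero += step
    return best_zero, best_err

# Toy error function with a minimum at 37 deg (a stand-in for the real
# fit error read from the pccdgen output).
best, err = find_waveplate_zero(lambda z: (z - 37.0) ** 2)
print(best)  # 37.0
```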
Initially, you must create files which contain the selection of the
appropriate mag files, the dat files. See the specific section above.
The task used is cria_dat.
Note that the parameter interval should be set to 8 for the program
star (a polar, usually) and to the total number of images when dealing
with standard stars.
/home1/claudiavr/pccd/vccd.exe
Alternatively, pccdpack has a routine similar to acha_zero called
zerofind. This program selects the aperture which minimizes the linear
polarization error. zerofind uses pccdgen to compute the polarization,
so make sure it too is configured properly. Configure either pccd or
pccdgen with the zero value you just determined, and calculate the
final polarization as described in the section about calculating the
polarization. If you are interested in obtaining time-resolved
polarimetry, follow the steps in the next section.
In some projects, we are interested in obtaining time-resolved
polarimetry of a given object. There are some tasks that facilitate
the reduction of such a data-set, as described below.
pccd_var uses pccdgen to calculate the polarization of a series of
datfiles. It is therefore essential to set up pccdgen correctly:
number of plate positions (NHW), number of apertures (nap), type of
plate, readout noise, the executable used to calculate the
polarization, etc. Never forget to check pccdgen!
pccd_var calculates the polarization of a series of datfiles (e.g.
dat.001, dat.002, etc.), producing a log for each datfile (e.g.
log.001, log.002, ...). In doing the aperture photometry (phot), we
use different apertures. Thus, each datfile contains the result of
several apertures (typically 10) and therefore each log has the
polarization of each object calculated using 10 different apertures.
Which aperture provides the best estimate of the value of the
polarization? First, check the variable "SIGMA", which measures the
error of the polarization based on the quality of the fit of the
modulation curve. It is not an error based on photon noise! The latter
is the Sigma_theor value, also present in the log file. You should
therefore select the aperture having the lowest value of SIGMA. In
this example logfile, the lowest SIGMA is the one that corresponds to
an aperture of radius = 8 pixels. This search is done automatically by
the routine macrol. pccd_var uses macrol and produces a file called
*.out (e.g., log.out) that contains the best estimates of the
polarization for the given datfiles.
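The selection done by macrol can be sketched as follows (illustrative Python with invented log entries; the real pccdgen log format differs):

```python
def best_aperture(log_entries):
    """Pick the aperture with the smallest SIGMA (the modulation-curve
    fit error) from one log, as the macrol routine does."""
    return min(log_entries, key=lambda e: e["sigma"])

# Toy entries for three apertures (radii in pixels):
log_entries = [
    {"aperture": 6,  "p": 0.031, "sigma": 0.004},
    {"aperture": 8,  "p": 0.030, "sigma": 0.002},
    {"aperture": 10, "p": 0.033, "sigma": 0.005},
]
print(best_aperture(log_entries)["aperture"])  # 8
```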
If the first frame of the first .dat file was not taken at position 1 (of the 16 possible positions of the plate), you need to be careful with some parameters. If you used the half-wave plate, delta_theta must be configured as:
[(position of the first frame) - 1] * (-45).
Use values between -360 and 0.
If you used a quarter-wave plate, the plate zero should be:
[(position of the first frame) - 1] * (22.5).
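A minimal sketch of these two formulas, assuming `first_position` is the waveplate position (1 - 16) of the first frame:

```python
def delta_theta_halfwave(first_position):
    """delta_theta for half-wave data when the first frame is not at
    waveplate position 1; the result stays in (-360, 0]."""
    return (first_position - 1) * (-45.0)

def zero_quarterwave(first_position):
    """Plate zero offset for quarter-wave data under the same convention."""
    return (first_position - 1) * 22.5

print(delta_theta_halfwave(5))  # -180.0
print(zero_quarterwave(5))      # 90.0
```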
Rejoice! You have calculated the polarization. But you still need to
see the result. Do this using the routine plota_pol.
For this you need to create a file with the HJD time of each image.
You can do this as described here. The file created (hjd.txt, for
example) should have a number of rows equal to the number of images
used to create the datfiles.
plota_pol can create an output file with two columns containing the
HJD (or phase) and the polarization, which can be used as input to
diagfase2.
If you want to use SELECT with an output from quarter-wave (l/4) data,
create a new *.out with the command:
fields [out.original] 3-11 > [out.select]
Above we described how to determine the polarimetry of a given object.
However, in principle the method should be calibrated. This
calibration can be divided into three basic corrections:
Below we discuss each correction and how to calculate it.
The conversion to the equatorial standard system, which has its zero direction defined by the direction of north and increases eastward. This reference frame is defined by taking measurements of two polarized standard stars over one night. Because the degree of polarization varies with wavelength, this calibration is filter dependent, and this correction must therefore be done for every filter observed. If the instrumental setup does not change throughout a mission, the value for each filter will be unique, i.e., no variation should be observed from night to night. The correction is calculated from the differences between the instrumental polarization angles and the reference values obtained from the literature. A list of measurements already taken can be found here (Polarimetry / Observation of polarimetric standards). These links provide some rough estimates for the values of the polarization standards and the references in which they can be found.
Specifically, the steps are:
Now apply the correction to all measurements to change the angle from
the instrumental to the celestial equatorial system.
Remember that polarization angles are between 0 and 180 degrees. So,
if you get negative angles you can convert them to positive values by
adding 180.
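The conversion can be sketched in one line, assuming the correction is a simple additive offset and folding the result into [0, 180):

```python
def to_equatorial(theta_inst, dtheta):
    """Convert an instrumental polarization angle to the equatorial
    system and fold the result into [0, 180) degrees.  The modulo
    takes care of both overflow and negative angles (adding 180)."""
    return (theta_inst + dtheta) % 180.0

print(to_equatorial(150.0, 52.13))  # 22.13
print(to_equatorial(-30.0, 0.0))    # 150.0 (negative angle + 180)
```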
Problems with the conversion? Make sure you used the correct value of
the pccdgen parameter new_module.
The instrument introduces a small degree of polarization as well.
Therefore, in the case of incoming non-polarized light, the
measurement would have a polarization value different from zero. The
instrumental polarization is negligible for the LNA polarimeter, but
an estimate of it can be obtained by measuring a standard star known
to have zero polarization and seeing what the final (post-instrument)
polarization is.
We will define the polarimetric throughput (or efficiency) of an
instrument as its ability to fully measure the polarization of a given
object. For example, if the beam enters the instrument with a 4%
polarization, but the instrument provides a measurement of 2%, the
efficiency of the whole is 0.5 (or 50%). The efficiency of the
polarimetric module installed at LNA is very close to 100%. However,
you can/should take steps to confirm this value. This is done by
inserting a Glan prism at the entrance of the beam. This element
converts any optical beam into a 100% polarized beam. Thus, the
inferred polarization of any source observed with a Glan prism should
be ~100%. If not, the ratio between the obtained value and 100% is an
estimate of the efficiency and should be applied to each measured
polarization value.
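These two corrections can be sketched as follows. This is a hypothetical illustration: the instrumental correction is applied as a subtraction in (q, u), which is one common convention, and the efficiency as a simple division.

```python
def correct_polarization(q_meas, u_meas, q_inst=0.0, u_inst=0.0, efficiency=1.0):
    """Remove the instrumental polarization (a vector subtraction in
    q, u) and divide by the polarimetric efficiency.  For the LNA
    polarimeter both corrections are usually negligible
    (instrumental ~0, efficiency ~100%)."""
    q = (q_meas - q_inst) / efficiency
    u = (u_meas - u_inst) / efficiency
    return q, u

# A Glan-prism measurement of 98% would imply an efficiency of 0.98:
print(correct_polarization(0.02, 0.0, efficiency=0.98))
```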
In this section we describe how to obtain light curves from a sequence
of polarimetric frames. You must create *.mag.* files, which
correspond to the output of the phot routine, i.e., the aperture
photometry. Your coordinate file must contain at least two objects:
the program star and the comparison. However, we recommend you include
many objects of different magnitudes to give you room to choose the
most appropriate comparison.
I suggest that you print out an image of the star field highlighting
the objects for which you did the photometry and their sequential
numbers. That is, run display on a frame and then tvmark with the
appropriate coordinate file. You can also write it out to a postscript
or pdf file. Printing can be done through saoimage (ds9), which also
has the option to create a ps file. With this reference, you can
quickly find the sequential number of a given field star. This figure
is also useful to register (for future reference) which object is the
program star and which comparison star you used in the final light
curves.
The procedure described in this section applies to data obtained with
calcite as the analyzer, irrespective of whether the images were
obtained with a half-wave or a quarter-wave plate.
The procedure can be summarized in two steps:
- Create a single file (for all frames) that selects the data
contained in the *.mag* files needed for the differential photometry;
- Do the differential photometry using the task phot_pol.
The visualization is done with a separate task, plota_luz, as
described below.
Here is a general outline of the procedure:
To keep things simple, please make sure that the file which contains
the list of *.mag* files has the extension ".pht". It will be created
with the task txdump, using the following parameters:
textfiles = "*.mag.1" Input apphot/daophot text database(s)
fields = "image, msky, nsky, rapert, sum, area" Fields to be extracted
expr = "yes" Boolean expression for record selection
(headers = no) Print the field headers?
(parameters = yes) Print the parameters if headers is yes?
(mode = "ql") Mode of task
A list of input files can be specified with wildcards, as in the example above, or through a file containing the list of *mag* files.
Run txdump directing the output to the desired file:
txdump > name_of_the_file.pht
Now, simply run phot_pol (be sure to edit your login.cl to include
this external task). This routine performs differential photometry,
i.e., the fluxes are calculated using one of the objects of the field
as the flux standard. Thus, the output corresponds to the flux
relative to that of the comparison star. I suggest setting the
extension of the output file to ".lc". The light curves are calculated
for all the stars in the coordinate file and for all apertures used in
phot. You need to run phot_pol only once to get all the light curves
using a given star as the comparison.
file_in = "rej1007.pht" file with the txdump output
file_out = "rej1007.luz" Name of the output file
(nstars = 32) Number of stars - in this case, the pht file must have 32 *
2 = 64 lines per frame
(NHW = 112) Number of positions of the waveplate (that equals
the number of frames!)
(nap = 10) Number of apertures
(comp = 1) Number of the comparison star
(star_out = 0) Star that is not included in the sum of the fluxes
(gain = 5.5) Gain e-/adu
(mode = "ql")
This task also creates a fake object whose counts are the sum of all
the objects of the field, except the comparison. You can cut any other
star out of that sum by setting the parameter "star_out".
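The differential photometry itself amounts to a flux ratio converted to a magnitude difference. A minimal sketch of that step (not the phot_pol code):

```python
import math

def differential_magnitude(counts_star, counts_comp):
    """Differential magnitude of the target relative to the comparison
    star, from the summed counts measured by phot in one frame."""
    return -2.5 * math.log10(counts_star / counts_comp)

# A target half as bright as the comparison is about 0.75 mag fainter:
print(round(differential_magnitude(5000.0, 10000.0), 3))  # 0.753
```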
When starting to reduce a data-set, one usually does not know which
field star will be most suitable as the comparison object. Ideally,
this star will be the brightest among the objects of constant flux
(i.e. not variable) in the field. Thus, it is necessary to run
phot_pol using different comparison objects and inspect the results,
which should be done with the task plota_luz.
Before running plota_luz, you need to create a file with a list of the
HJD of each image. Here is how to do that. Let's call this file
hjd.txt.
Be careful! The hjd.txt file should have the same number of rows as plate positions used in phot_pol.
An example of the plota_luz parameters is shown below:
arqpht = "saida2.lc" Input pht file
(time = "teste.hjd") File with the time input
(star = 1) Number of the star to plot
(aper = 1) Ordinal number of the aperture
(connected = no) Connect the points
(points = yes) Plot points
(title = "Cet FL") Title of the graphics
(phase = no) Convert HJD to orbital phase
(to = 0.) T0 of the ephemeris
(per = 0.) Period (days)
(convert_mag = yes) Convert input data to magnitudes?
(ffile = no) Create HJD, mag file?
(mmagfile = "") Name of the HJD, mag file
(metafile = no) Create mc file
(eps = no) Create eps file
(lim = no) Change limits
(flist = "saida2.luz")
(mode = "ql")
The parameter "aper" refers to the order of the aperture in the phot
configuration, not to its actual value. For example, suppose you
worked with 10 apertures between 2 and 11 pixels. Aperture 1 in
plota_luz corresponds to the first one, in this case the one with a
value of 2.
Conventions of plota_luz:
- if you type "0" for the star number, the task plots the light curves
of all objects;
- if you type "enter" for the aperture or star/aperture, 1 is added to
the last aperture used and the star is kept the same.
At this point, you need to choose the best comparison field star. One
definition would be the brightest star in the field that has the least
variability. (Actually, it would also be appropriate for this object
to have the same color as the program object.) This selection must be
made based on the analysis of the light curves of the star compared
with others in the field. For this, you can use the mean and sigma of
a given light curve. These values appear in the console when you use
plota_luz. If you are working with data from multiple observing runs,
it is useful to take a look at the mean magnitude differences among
the stars in the field to make sure that no long-period variability
has gone unnoticed.
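The selection by mean and sigma can be sketched as follows (illustrative Python; plota_luz itself prints these numbers on the console):

```python
import statistics

def rank_comparisons(light_curves):
    """Order candidate comparison stars by the scatter (sigma) of their
    light curves; the best comparison is a bright star with the
    smallest sigma.  `light_curves` maps a star name to its magnitudes."""
    ranked = []
    for name, mags in light_curves.items():
        ranked.append((statistics.stdev(mags), statistics.mean(mags), name))
    ranked.sort()  # smallest sigma first
    return [(name, mean, sigma) for sigma, mean, name in ranked]

curves = {"star2": [12.00, 12.01, 11.99], "star3": [11.5, 11.9, 11.3]}
print(rank_comparisons(curves)[0][0])  # star2 (the least variable)
```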
The differential photometry is done! You have the light curve of the target object using a comparison star and a given aperture. Congratulations! Now you may want to calibrate the differential magnitudes. To do this properly, you need to have observed photometric standard stars at several air masses, in order to carry out the procedure for absolute photometry. If this is not your case, you may obtain a rough estimate of the magnitudes of the objects in the field using the magnitudes of the USNO catalog:
http://www.nofs.navy.mil/data/fchpix/
It is useful to compare the differences in magnitudes of the field
stars with those obtained from this catalog. Do not forget to write
down the USNO names of your program object and of the chosen
comparison star. This is a universal reference that you must specify
in your work (article, dissertation, thesis, etc.).
You can access the USNO catalog (and plenty of others!) from the
Aladin interface. Try it!
Use plota_luz to create an output file with two columns containing the
HJD and the differential magnitude. It can be used as input to
diagfase2 or pdm, for example.
The IRAF task pdm can be used to find the period of a light curve. To
use it, do not forget to choose the range of periods in which to make
the search. It is worth remembering the following pdm commands:
h -> plot data in HJD (the first graph)
k -> fit the period
p -> plot the phase diagram of the period at the cursor position
(Contributed by Karleyne MG da Silva)
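For reference, the statistic minimized by phase dispersion minimization can be sketched in a few lines. This is a simplified single-pass version for illustration, not the IRAF implementation:

```python
import math

def pdm_theta(times, mags, period, nbins=10):
    """Phase Dispersion Minimization statistic: the ratio of the mean
    in-bin variance of the phased light curve to the total variance.
    The best period is the one that minimizes theta."""
    mean = sum(mags) / len(mags)
    total_var = sum((m - mean) ** 2 for m in mags) / (len(mags) - 1)
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    num, den = 0.0, 0
    for b in bins:
        if len(b) > 1:
            bm = sum(b) / len(b)
            num += sum((m - bm) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / total_var

# A sinusoidal light curve with a 2-day period: theta is smaller at the
# true period than at a wrong one.
ts = [0.13 * i for i in range(200)]
ms = [math.sin(2 * math.pi * t / 2.0) for t in ts]
print(pdm_theta(ts, ms, 2.0) < pdm_theta(ts, ms, 1.3))  # True
```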
Filter | Frequency (10^14 Hz) | Wavelength (micron) | Flux of mag = 0 (Jy) (1) | Flux of mag = 0 (Jy) (2) | Flux of mag = 0 (Jy) (3)
U | 8.33 | 0.36 | 1880 | 1810 | 1823
B | 6.18 | 0.44 | 4650 | 4260 | 4130
V | 5.45 | 0.55 | 3950 | 3640 | 3781
V* | 5.45 | | | | 3540
Rc | 4.68 | 0.64 | 3080 | |
Ic | 3.79 | 0.79 | 2550 | |
R | 4.28 | 0.70 | 2870 | 2941 |
I | 3.33 | 0.90 | 2240 | 2636 |
Note: To convert flux per unit wavelength (F_lambda) to flux per unit
frequency (F_nu), multiply F_lambda by (lambda^2 / c), where c is the
speed of light.
References
* For Vega in the V band, according to reference 3, flux = 3540 Jy or
3.44e-8 W m^-2 micron^-1.
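The table can be used to put a rough flux scale on your magnitudes; a minimal sketch of the magnitude-to-flux conversion and of the F_lambda to F_nu note above:

```python
def flux_from_mag(mag, zero_flux_jy):
    """Flux in Jy of a star of magnitude `mag`, given the flux of a
    mag = 0 star in the same filter (columns (1)-(3) of the table)."""
    return zero_flux_jy * 10.0 ** (-0.4 * mag)

def fnu_from_flambda(flux_lambda, wavelength_m):
    """Convert F_lambda (per unit wavelength) to F_nu (per unit
    frequency): F_nu = F_lambda * lambda^2 / c."""
    c = 2.99792458e8  # speed of light in m/s
    return flux_lambda * wavelength_m ** 2 / c

# A V = 5 star, using the 3640 Jy zero point from column (2):
print(flux_from_mag(5.0, 3640.0))  # 36.4 Jy
```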
That's it! You did differential photometry using polarimetric
data. Now put your results in an article.
We append the "lpars" of many of the tasks used at the end of this document.
To get IRAF to show the steps taken to correct the images, type the command:
epar ccdred
verbose = yes
To read data remotely, use the tip in the FAQ.
files *.fits > file_name
Remove the string ".fits" from "file_name" (or whatever name you chose) so that the files created from it do not contain ".fits". For example, if one of the images is called "mrser01.fits", we want the magfile created by phot to be mrser01.mag.001, not mrser01.fits.mag.001.
To add the HJD to the header of all images, you should follow two steps:
rename <filename> <extension> extn (see the epar of rename)
TRIMSEC and other information about the LNA detectors
ccdlist shows the list of images and the reduction flags
You can register images in different ways. Below we describe the
procedure using the routine xregister. An example of the parameters
used for this routine is shown below.
Steps:
Inspect the images and determine the maximum displacement in x and y of the whole set. They will be called deltax and deltay. These can be estimated by choosing a reference star whose position is checked in some images along the sequence (this can be done with the photometry option of imexamine).
xregister parameters:
xwindow & ywindow: should be 2 times deltax and deltay;
regions: should have dimensions greater than 2 times xwindow and
ywindow. This region must contain one (or more) objects - present
in all the images - which is (are) used by xregister to calculate
the displacement.
Visualization of the correction: the task disp_var (/home1/claudiavr/iraf/tasks) enables one to display multiple images consecutively. It may be useful to check whether the offsets applied by xregister were correct.
Be sure to double-check the registration; it is a common source of problems!
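The displacement estimated by xregister is essentially the peak of a cross-correlation over the chosen region. A minimal sketch of the idea (integer shifts only, using numpy; not the IRAF implementation):

```python
import numpy as np

def estimate_shift(reference, image):
    """Estimate the integer (dy, dx) shift between two frames from the
    peak of their FFT cross-correlation, similar in spirit to what
    xregister does over its 'regions' window."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks beyond the half-size to negative shifts.
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

# A synthetic star field shifted by (3, -2) pixels:
ref = np.zeros((64, 64))
ref[20, 30] = 100.0
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(estimate_shift(ref, img))  # (-3, 2) -> apply this shift to realign
```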
On standard polarized stars
Hsu, J.-C.; Breger, M.
http://ads.nao.ac.jp/cgi-bin/nph-bib_query?bibcode=1982ApJ...262..732H&db_key=AST
An atlas of Hubble Space Telescope photometric,
spectrophotometric, and polarimetric calibration objects
Turnshek, D. A.; Bohlin, R. C.; Williamson, R. L., II; Lupie, O.
L.; Koornneef, J.; Morgan, D. H.
http://ads.nao.ac.jp/cgi-bin/nph-bib_query?bibcode=1990AJ.....99.1243T&db_key=AST
The Hubble Space Telescope Northern-Hemisphere grid of stellar
polarimetric standards
Schmidt, Gary D.; Elston, Richard; Lupie, Olivia L.
http://adsabs.harvard.edu/abs/1992AJ....104.1563S
A Medium-Resolution Search for Polarimetric Structure: Moderately
Reddened Sightlines
Wolff, Michael J.; Nordsieck, Kenneth H.; Nook, Mark A.
http://adsabs.harvard.edu/abs/1996AJ....111..856W
Standard Stars for Linear Polarization Observed with FORS1
Fossati, L.; Bagnulo, S.; Mason, E.; Landi Degl'Innocenti, E.
http://adsabs.harvard.edu/abs/2007ASPC..364..503F
Systematic variations in the wavelength dependence of interstellar linear polarization
ESO Polarimetric Standard Stars
http://www.eso.org/sci/facilities/paranal/decommissioned/isaac/tools/swp1.html
http://www.eso.org/sci/facilities/paranal/instruments/fors/inst/pola.html
Compilation from Instituto Astrofisica de Canarias
http://www.ing.iac.es/Astronomy/observing/manuals/html_manuals/wht_instr/isispol/node39.html
indata = "bd32_r.dat" Input file/directory
(outmac = "think") Output file/list for macrol
(platebeg = 0) waveplate zero - beginning (degrees)
(plateend = 90) waveplate zero - end (degrees)
(step = 1) step to vary the zero (degrees)
(mode = "ql")
coords_real
(objeto = "hh160") Root of the output files
(referencia = "ESO_POSS2UKSTU_Red.fits") Reference image
(imgccd = "zhh160c1_001.fits") CCD image (your data!)
(espccd = 0.35) Plate scale of the CCD image (arcsec/pixel)
(xtamccd = 2048.) CCD size in the X direction (pixels)
(ytamccd = 2048.) CCD size in the Y direction (pixels)
(norde = "right") North direction in the CCD image
(ost = "bottom") East direction in the CCD image
(inclina = 0.) Angle between CCD axes and the equatorial reference frame
(recentra = yes) Recenter objects?
(centrobox = 5.) Centering box width in scale units
(flist1 = "tmp$coords6338e")
(flist2 = "tmp$coords6338g")
(mode = "q")
file_in = "teste.coo" coordinate file from DAOFIND
file_out = "test" output file
(shiftx = 33.5) x-axis distance of the pair (in pixels) - this is where we put the difference in position
(shifty = 4.5) y-axis distance of the pair (in pixels)
(deltax = 2) error permitted in the x-axis distance
(deltay = 2) error permitted in the y-axis distance
(deltamag = 1) error permitted in magnitude
(side = "left") position of the top object (right|left)
(control = no) include only the first pair?
(flist1 = )
(mode = ql)
image = "@arq" The input image(s)
skyfile = "" The sky input file(s)
(coords = "hd110984.ord") The input coordinate file(s) (default: image.coo.?)
(output = "default") The output photometry file(s) (default: image.mag.?)
(plotfiles = "") The output plots metacode file
(datapars = "") Data dependent parameters
(centerpars = "") Centering parameters
(fitskypars = "") Sky fitting parameters
(photpars = "") Photometry parameters
(interactive = no) Interactive mode?
(radplots = no) Plot the radial profiles in interactive mode?
(icommands = "") Image cursor: [xy wcs] key [cmd]
(gcommands = "") Graphics cursor: [xy wcs] key [cmd]
(wcsin = ")_.wcsin") The input coordinate system (logical, tv, physical)
(wcsout = ")_.wcsout") The output coordinate system (logical, tv, physical)
(cache = ")_.cache") Cache the input image pixels in memory?
(verify = ")_.verify") Verify critical parameters in non-interactive mode?
(update = ")_.update") Update critical parameters in non-interactive mode?
(verbose = ")_.verbose") Print messages in non-interactive mode?
(graphics = ")_.graphics") Graphics device
(display = ")_.display") Display device
(mode = "ql")
file_out = "hh83.out" macrol input file (.out)
file_ord = "obj1c10001.ord" order input file (.ord)
(file_sel = "hh83.sel") output file (.sel)
(polmin = 2.) minimum S/N
(polinf = 0.) minimum polarization (range: 0 - 1)
(polmax = 0.1) maximum polarization (range: 0 - 1)
(maiors = no) select between higher sigma and stheo?
(stheomax = 1.) maximum theoretical error
(thetainf = 0.) minimum theta (range: 0 - 180)
(thetasup = 180.) maximum theta (range: 0 - 180)
(deltatheta = 52.13) delta theta
(coorq = 0.) Q correction
(cooru = 0.) U correction
(xpixmax = 1000.) ccd x size (pixels)
(ypixmax = 1000.) ccd y size (pixels)
(outgraph = yes) create eps from graphic output?
(vecconst = "10000") scale for the field plot
(north = "left") position of north in the CCD field
(east = "top") position of east in the CCD field
(binpol = 0.01) binwidth for the pol histogram
(thetafit = yes) fit the theta histogram?
(gaussparst = "gausspars") parameters for the theta-histogram fit (:e to edit)
(bintheta = 10.) binwidth for the theta histogram
(thetamin = 0.) minimum theta for the histogram
(thetamax = 180.) maximum theta for the histogram
(starelim = "") file with stars to eliminate
(meanvalue = yes) print mean values?
(ivdata = no) IV data?
(flist = "")
(flist1 = "")
(flist2 = "")
(line = "")
(line1 = "")
(mode = "ql")
tvmark - here
input = "@arq" Input images to be registered
reference = "sds0001" Input reference images
regions = "[340:420,340:420]" Reference image regions used for registration
shifts = "shifts.txt" Input/output shifts database file
(output = "@arq") Output registered images
(databasefmt = no) Write the shifts in database file format?
(append = no) Open shifts database for writing in append mode
(records = "") List of shifts database records
(coords = "") Input coordinate files defining the initial shifts
(xlag = 0) Initial shift in x
(ylag = 0) Initial shift in y
(dxlag = 0) Incremental shift in x
(dylag = 0) Incremental shift in y
(background = "none") Background function fitting
(border = INDEF) Width of border for background fitting
(loreject = INDEF) Low side k-sigma rejection factor
(hireject = INDEF) High side k-sigma rejection factor
(apodize = 0.) Fraction of endpoints to apodize
(filter = "none") Spatially filter the data
(correlation = "discrete") Cross-correlation function
(xwindow = 40) Width of correlation window in x
(ywindow = 40) Width of correlation window in y
(function = "centroid") Correlation peak centering function
(xcbox = 20) X box width for centering correlation peak
(ycbox = 20) Y box width for centering correlation peak
(interp_type = "linear") Interpolant
(boundary_typ = "nearest") Boundary (constant, nearest, reflect, wrap)
(constant = 0.) Constant for constant boundary extension
(interactive = no) Interactive mode?
(verbose = yes) Verbose mode?
(graphics = "stdgraph") The standard graphics device
(display = "stdimage") The standard image display device
(gcommands = "") The graphics cursor
(icommands = "") The image display cursor
(mode = "ql")
indata = "dat.073" Input file
(outmac = "ZeroQI") macrol output file
(emin = "first") macrol: minimum? (first|full)
(platebeg = 0) Initial zero (degrees)
(plateend = 90) Final zero (degrees)
(step = 1) zero step (degrees)
(mode = "ql")
==> Do not forget to configure pccdgen