POLARIMETRIC REDUCTION


Last updated: Dec/2015

C.V. Rodrigues



Index

  1. Scope
  2. Introduction
  3. Getting Started: working with the IRAF
  4. Remarks about specific detectors
  5. Basic Reduction
    1. Creating a Master Bias
    2. Creating a Master flat
    3. Correcting images
      1. Correction of the flat-field for images with vignetting
  6. Reduction Specific to Polarimetric Data-Sets
    1. A quick overview of the entire process
    2. Automatic routines
    3. Aperture photometry
      1. Image Characteristics
      2. Finding the stars
      3. Defining the coordinates of a single star in a field
      4. Defining the coordinates of all stars in the field
      5. Finding the displacement between the images
      6. Finishing the aperture photometry
    4. Polarimetry
      1. Field selection of magfiles - create dat files
      2. Calculating the polarization
        1. Polarization of a single object in the field
        2. Polarization of various objects in the field
          1. Viewing the result in a field with several objects
      3. What is the optical axis of a quarter-wave plate?
    5. Time Resolved Polarimetry
  7. Polarimetric calibration: Standard Stars
    1. Conversion of the polarization angle to the equatorial system
    2. Estimation of instrumental polarization
    3. Estimation of the Polarimetric Throughput
  8. Differential photometry using polarimetric data
    1. Zero point of the system of magnitudes
  9. General Tips
    1. How to register images
    2. About the normalization option in the pccdgen
  10. POLARIMETRIC STANDARDS
  11. PARAMETERS OF SOME TASKS USED
    1. acha_zero
    2. ccdproc
    3. cria_dat
    4. fitskypars
    5. flatcombine
    6. macrol
    7. ordem
    8. pccd for l/2 [deprecated]
    9. pccd for l/4 [deprecated]
    10. pccdgen mode l/2
    11. pccdgen mode l/4
    12. phot
    13. refer - for the IAG with CCD101
    14. select
    15. txdump
    16. vecplot - for the telescope IAG / OPD with CCD101
    17. xregister
    18. zerocombine
    19. zerofind
  12. Acknowledgments


Scope

(rev. jan/2010)

This text is not meant to be a self-contained manual for learning the method of polarimetric reduction, but should be viewed as a roadmap with key milestones and tips with particular emphasis on polarimetry. Indeed, I recommend that you read my short tutorial on the procedure for polarimetric observations before your first attempt at data reduction. In it various aspects of polarimetry are described in more detail.


Introduction

(rev. jan/2010)

We discuss the reduction of data obtained with the polarimetric module installed at the Observatório do Pico dos Dias (OPD/LNA), in Brazópolis, Brazil. This instrument is described in Magalhães et al. (1996). It is worth taking a look at the page of this instrument, where you can also find other texts on polarimetric reduction. The heart of the instrument is composed of a rotating retarder and a fixed analyser. Specifically, the retarder has 16 possible positions separated by 22.5 deg (22.5 deg x 16 = 360 deg). The analyser produces two images of the same object, because the original beam is split in two. The core of a polarimetric measurement is to obtain several images of an object at different positions of the retarder. The ratio between the difference and the sum of the counts in the two beams is used to measure the polarization of the object.

We caution the reader that the data reduction procedure described in this manual may not be directly applicable to polarimetric data obtained with other instruments. In addition, we only deal with the reduction of data obtained using a Savart prism as the analyser, which is an element that splits the beam of a given object into two rays (more on that later!). As it is made of calcite, it is sometimes referred to simply as calcite. Thus, if the frame you are reducing has two images for each object (a cross-eyed view of the sky...), you can be sure that it was obtained with calcite, and this document applies. See here an example of an image obtained with calcite: in particular, see the two images of the brightest object in the field. Polarimetric data can also be obtained with an analyser that produces only one image per object, such as a Polaroid, but I do not treat this type of reduction here.

The polarimetric data can be obtained with two types of retarders: a half-wave plate (l/2) or a quarter-wave plate (l/4). In the first case, only the linear polarization can be obtained, and the minimum number of images needed to measure the polarization of an object is 4. With a quarter-wave plate, we obtain both linear and circular polarization, but at least 8 images are needed.

The technique and calculation procedure are described in Magalhães et al. (1984) for a half-wave retarder plate and in Rodrigues et al. (1998) for a quarter-wave plate.

Getting Started: working with the IRAF

(rev. jan/2010)
We use IRAF to reduce our datasets. Before starting, I recommend reading "The Beginner's Guide to Using IRAF" for an initial familiarization with it. IRAF also provides some guided exercises, such as intro, ccd1, and phot, which are incredibly helpful - especially the first two (link). Those interested in a better understanding of the steps of preparing CCD images should read the document "A User's Guide to CCD Reductions with IRAF".

The IRAF environment can be configured using the login.cl file (which can be found in the IRAF directory of your account). I present a sample login.cl with suggested changes to the default file created by IRAF. They are of two types: (1) changes to the user environment and (2) changes that include additional routines beyond the standard IRAF distribution. A simple reason to edit your user environment is to adapt the image size in the display to something that makes sense given the pixel resolution of your monitor, which is done by updating the stdimage value. Another parameter that I strongly recommend you change is the image type (use FITS images). Some of the routines (tasks, in the IRAF jargon) cited here come from the standard IRAF packages. Others are from the external package pccdpack developed by Antonio Pereyra (Pereyra 2000), so your login.cl must contain the changes necessary for these additional routines to work. In particular, remember that for a given task to work, the package that includes it must be "loaded". For example, to run the task phot, the apphot package should be loaded. To load a package, simply type its name at the CL prompt. To find the package in which a routine is included, type "help <name_of_the_task>".

Presently, some routines of the pccdpack package used by INPE's group are not the same as those included in the pccdpack distribution. Ask Claudia about this INPE version. Because of that, besides pccdpack, you should also install the following IRAF packages: tables, stsdas, and ctio. See the links for installation instructions.

Many IRAF routines accept not just a single input file but a list of input files, so you can process a whole list with only one command - this is very useful and is used a lot in this document. This is done by using as input a text file containing the list of individual files to be reduced. To tell IRAF that your input is a list of files and not an input file itself, the file name must be preceded by an @. See how to create a list of files here.
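An @-list is just a plain text file with one image name per line. You can also build it outside IRAF; below is a minimal Python sketch (the file pattern and list name are only illustrative):

```python
import glob

def write_list(pattern, listname):
    """Write one matching file name per line, the format IRAF '@' lists expect."""
    names = sorted(glob.glob(pattern))
    with open(listname, "w") as f:
        f.write("\n".join(names) + "\n")
    return names

# e.g. write_list("PAD2-R*.fits", "arch"), then in IRAF: ccdproc @arch
```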

I also suggest organizing the images in different directories: one for the bias images; another for the flats; and one for each object. Use the task imrename to move the images between directories from within IRAF. If you prefer to do this on UNIX, use the command mv.

There is a task in IRAF called apropos that can be used to find the tasks related to a particular word. For example, apropos gaussian returns the tasks involving Gaussians.

Check whether the entire image is displayed in DS9/SAOimage. Use the command "imhead image l-" to verify the image size. Using the cursor in the DS9 window, check whether the upper-right corner coordinates correspond to the image size. If not, check if:

 

Remarks about specific detectors

There are two remarks about images obtained using the S800 detector at OPD.

First, you may want to remove the blank spaces in the file names using the UNIX script tira_espaco.

Secondly, you can include/correct some header keywords, such as UT, using the IRAF script s800_header.

Basic Reduction

The basic reduction outlined here applies to any dataset obtained with a CCD. It aims to correct the images for the noise and distortions introduced by the characteristics of the detector. Again, a quick read-through of A User's Guide to CCD Reductions with IRAF is recommended.

Steps:

Below, a detailed procedure for each of these steps is described.


Creating a Master Bias - rev. May/2014

The master bias is an estimate of the error introduced by simply reading out the detector. Here, we outline the process of creating a master bias by combining a series of bias images. The bias images should be taken with the shutter closed and with very short exposure times. The bias images and science images should have the same readout parameters - binning and readout frequency/speed, for example - since these parameters modify the readout noise pattern along the frame. The steps for creating a master bias are:

Create one master bias per night. Combine these nightly masters to create the master bias of the run. You can probably use a single master bias for a whole run.

After creating the night and run master bias images, check if they are consistent.

IRAF provides several tools that can be used to compare images.

The images that should be corrected by bias are those of: objects (science, standard, etc) and flats.
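The actual combination is done with the IRAF task zerocombine, but it may help to see what the combination amounts to. A minimal numpy sketch with synthetic frames (the 100 ADU level and 3 ADU read noise are made-up numbers):

```python
import numpy as np

def master_bias(frames):
    """Median-combine a stack of bias frames; the median rejects cosmic-ray hits."""
    return np.median(np.asarray(frames, dtype=float), axis=0)

# synthetic example: 5 bias frames with a 100 ADU level plus 3 ADU read noise
rng = np.random.default_rng(0)
frames = 100.0 + rng.normal(0.0, 3.0, size=(5, 64, 64))
mb = master_bias(frames)
```

zerocombine offers other combining/rejection options (average, sigma clipping); the median is just the simplest robust choice.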

Creating a Master flat - updated 2015.06.17

A master flat is made by averaging a series of images obtained while the CCD is uniformly illuminated. Thus, any differences in counts are due to the sensitivity (or illumination) of the individual pixels. One should take care to create a master flat-field for each filter independently.
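As discussed in the ccdproc section below, the master flat is divided by its own average so that applying it does not change the mean counts of the science frames. A one-line numpy sketch of that normalization:

```python
import numpy as np

def normalize_flat(flat):
    """Scale the (bias-subtracted) master flat to a mean of 1, as ccdproc does."""
    flat = np.asarray(flat, dtype=float)
    return flat / flat.mean()
```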

High frequency flats


It may be the case that you are interested in correcting the images only for the high-frequency sensitivity variations. To do that, do the following:

Low frequency flats


Victor has done some tests using boxcar and imsurfit to smooth master flats with vignetting. He found good results using boxcar, and very bad results with imsurfit: with imsurfit, the master flat acquired a spurious low-frequency modulation. The boxcar smoothing should be applied only if there is no sign of high-frequency flat-field variation. This can be checked by comparing the expected Poisson error in one pixel of the average master flat with the difference between the smoothed and the non-smoothed flat.
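The check suggested above can be scripted. The sketch below (numpy only; a simple edge-padded boxcar, not IRAF's) declares the smoothing safe when the median smoothed-minus-original difference is below the expected Poisson error of the average flat, assuming counts in electrons (gain = 1) and nframes combined flats; the threshold choice is mine, not pccdpack's:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def boxcar(img, size):
    """Moving-average smoothing, analogous to IRAF's boxcar task (size must be odd)."""
    pad = size // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    w = sliding_window_view(p, (size, size))
    return w.mean(axis=(-1, -2))

def smoothing_is_safe(avg_flat, nframes, size=5):
    """True if the smoothed-minus-original difference is compatible with the
    Poisson error of the average flat (counts assumed in electrons, gain = 1)."""
    diff = np.abs(boxcar(avg_flat, size) - avg_flat)
    poisson_sigma = np.sqrt(avg_flat / nframes)
    return np.median(diff) < poisson_sigma.mean()
```

If the flat contains real high-frequency structure (dust rings, fringes), the difference greatly exceeds the Poisson error and the function returns False, signalling that the boxcar would erase genuine pixel-to-pixel information.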

Correcting images

Now that you have the master bias and master flat images, you can use these calibration frames to correct your science images. Each science image should be corrected by bias and flat; each flat image (or the master flat) should be corrected by bias.

For this we use the routine ccdproc. This routine also removes the portion of the CCD which does not have a usable signal. See here for the parameters of this routine. The reader is cautioned, however, that several of the parameters used for ccdproc may vary from instrument to instrument as well as over time. In particular, the "trimsec" varies according to the CCD used. See the link below for the appropriate values for the LNA CCDs.

TRIMSEC and other information about detectors LNA

Before performing ccdproc, make a backup of all images! One suggestion is to create a directory with a helpful mnemonic as a name (bck, for example) and then make a copy of all images with the task imcopy.

WARNING! If images have vignetting, do not use the ccdproc to correct by flatfield. See below.

Overscan: CCD frames typically include a number of rows/columns not exposed to light. This region is called the overscan. This area can be used to determine an additive level in the counts (the overscan level). However, tests with LNA images have shown that using the overscan area to apply an overscan correction separate from the bias (read noise) only worsens the result.

Flatfield: ccdproc normalizes each flatfield so that all frames have an average count of 1. The division of subsequent images by the master, therefore, does not change their average counts. There remains, however, an important question: what is the best method to normalize a flat? Because differential photometry and polarimetry are both based on quantities that are ratios of fluxes, any multiplicative constant applied to all pixels cancels out. For example, whether we normalize by an average (as ccdproc does) or by a mode (which must be done by applying the flat correction "manually"), one will obtain the same polarization values and differential magnitudes. There are, however, methods that are based on normalization by a predefined function. In these cases, the normalizing constant is not invariant. I have not done any tests to determine whether such methods allow a better end result.
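A quick numerical check of the invariance claimed above: rescaling both beams by the same flat-normalization constant leaves the difference-over-sum ratio, and any differential magnitude, unchanged (the flux values are arbitrary):

```python
import math

def ratio(f_ord, f_ext):
    """The polarimetric observable: (difference)/(sum) of the two beam fluxes."""
    return (f_ord - f_ext) / (f_ord + f_ext)

f_ord, f_ext = 12000.0, 9000.0
for c in (1.0, 0.87, 2.5):            # different flat-normalization constants
    # dividing the frame by a flat scaled by c rescales both fluxes by 1/c
    assert abs(ratio(f_ord / c, f_ext / c) - ratio(f_ord, f_ext)) < 1e-12
    # the differential magnitude -2.5 log10(f1/f2) is equally unaffected
    m = -2.5 * math.log10((f_ord / c) / (f_ext / c))
    assert abs(m - (-2.5 * math.log10(f_ord / f_ext))) < 1e-12
```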

ccdproc puts a flag in the images that were identified as having been processed with a given correction. These flags can be viewed with the task imheader (long output), or with the task ccdlist. The following is an example of the output of the latter:

ccdred> ccdlist @arch
PAD2-R0001.fits [1023,1023] [real] [none] [] [TZF]: HD110984
PAD2-R0002.fits [1023,1023] [real] [none] [] [TZF]: HD110984
PAD2-R0003.fits [1023,1023] [real] [none] [] [TZF]: HD110984

At the end of each line you can see the flags T, Z and F, which mean that these images have been "trimmed", zero corrected (= bias) and flat-fielded, respectively.

The master bias frame (i.e., the correction for the readout noise) should be trimmed. The master flat frame (i.e., the correction for the sensitivity of the pixels) must be trimmed and bias corrected. ccdproc uses these flags to check whether a bias or flat correction has already been performed and, if it has not, applies the correction. Be sure to use ccdlist to check that the master bias and flats have been properly corrected: they should have the T and TZ flags, respectively.

Correction of the flatfield for images with vignetting

If your images are affected by vignetting, the conventional normalization of the master flat introduces an amplification of the counts in the clear region of the image. To avoid this effect, I suggest the following procedure.

Tests were made using the above procedure and also using ccdproc (i.e., normalizing over the whole image area). With the proposed alternative procedure, the errors are smaller and closer to the value expected from photon noise.



Reduction Specific to Polarimetric Data-Sets

(rev. jan/2010)
Now that we have corrected the images, the next step is to perform the polarimetry and, if desired, the photometry. In the next section, we present a quick enumeration of the reduction steps. After that, there are specific sections for each reduction step with more detailed information.

A quick overview of the entire process

(rev. jan/2010)
The process to obtain the polarization of one or more objects is outlined below. This section may be a bit repetitive in itself and also redundant with the remainder of this document, but we think this structure can be useful for an easier understanding of the process. The main steps of the polarimetric reduction are:
  1. create the coordinate files: you have to create a file with the positions of the objects of interest for each image.
  2. do the aperture photometry: you have to do the aperture photometry centered on each coordinate listed in the coordinate file.
  3. calculate the polarization: using the aperture photometry results, you have to prepare a file that is the input for the task that calculates the polarization.
  4. plot the results: there are many ways to visualize the measured polarization. There are  three main graphs that can be plotted.
    1. You can plot the modulation from which we calculate one value of the polarization for ONE object;
    2. you can graph the temporal variation of the polarization for ONE object, if you are interested in time variability;
    3. you can graph the polarization of all objects in a field.
Each of the steps above is done using IRAF tasks, either native or from pccdpack. Now, we present a quick description of the tasks used in each step.
  1. create the coordinate file:
    1. daofind: this task finds objects in an image. The output has the following format: <root_image>.coo.<number>.
    2. ordem: this task makes the ordinary and extraordinary pairs from the *.coo.* file. It creates a file named *.ord - the coordinate file for polarimetry.
    3. If the shifts between images are negligible, you can use the same coordinate file (*.ord) for all images. Otherwise, you should create one "ord" file for each image. For that, we need to calculate the shifts using the tasks xregister and ordshift.
  2. aperture photometry
    1. phot: this task performs the aperture photometry. It creates a <imagename>.mag.<number> file for each image.
  3. calculate the polarization:
    1. cria_dat: this task creates one or more <root>.dat files from the *.mag.* files created by phot. The *.dat file has the format appropriate for the task that calculates the polarization (see below). One *.dat file is created when you calculate only one value for the polarization of each star in the field. More than one is created when you are working with a temporal series of images. In other words, each dat file corresponds to ONE value of polarization.
    2. getting the polarization: if you are working with only one dat file, you should use the pccdgen task. Otherwise, use the pccd_var task. The first task produces ONE file with extension log that contains the values of the polarization for each object in the coordinate file. The pccd_var task creates as many log files as the dat files you use.
    3. choosing the best aperture: it is usual to make the aperture photometry using more than one aperture, so it is necessary to choose the aperture that provides the smallest error. This is done by the macrol task.
  4. plot the results: there are many ways to visualize the polarization. There are  three main graphs that can be plotted.
    1. You can graph the modulation from which we calculate one value of the polarization for ONE object. Use the graf or grafv task.
    2. you can graph the temporal variation of the polarization for ONE object, if you are interested in time variability. In this case, use the plota_pol task.
    3. you can graph the polarization of all objects in a field. For this, use the select task.

The sequence of tasks to be used is then:

after preparing the images (bias, flat and trim)
\/
daofind - find objects - coo file
\/
ordem - make pairs - ord file
\/
xregister - calculate the shifts between images

ordshift - applies the shifts to the ord file - corrected ord files
\/
phot - aperture photometry - mag files
\/
cria_dat - create input file to polarization calculation - dat file
\/
  zerofind - if l/4 - estimate the waveplate absolute position
\/

pccdgen or pccd_var - calculate the polarization - log file
\/
macrol - choose the best aperture - out file
\/
graf or select or plota_pol - see the result

 

If you work with the quarter-wave plate, i.e., you are interested in the circular polarization, you are not so lucky... You need to estimate the absolute position of this plate. It is a parameter to pccdgen and pccd_var (implicit in the latter case). The method we use is to do the reduction for all the possible values (0 - 90) of the zero position using all the data, and then choose the value that gives the smallest error. In short, you have to: (1) use the task zerofind on good choices of dat files; (2) note the value of the waveplate position that gives the smallest error for each dat file; (3) then choose a zero value that you think is the best compromise based on your results. Hint: it is best to use dat files of polarized objects with high SNR: standard stars and polars.
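The zero-point search can be pictured as a grid scan: for each candidate zero, fit the Stokes parameters and keep the zero with the smallest residual. The sketch below uses the quarter-wave modulation expression z = Q cos^2(2psi) + U sin(2psi)cos(2psi) - V sin(2psi) (the Serkowski-type form adopted in Rodrigues et al. 1998); it illustrates the method, not the zerofind implementation:

```python
import numpy as np

def quarter_wave_model(psi, q, u, v):
    """Normalized modulation for a quarter-wave plate plus analyser."""
    c, s = np.cos(2 * psi), np.sin(2 * psi)
    return q * c**2 + u * s * c - v * s

def find_zero(positions_deg, z_obs, step=1.0):
    """Scan candidate zero points in 0-90 deg; return (zero, residual, (Q, U, V))
    for the zero whose least-squares fit leaves the smallest residual."""
    best = None
    for zero in np.arange(0.0, 90.0, step):
        psi = np.radians(np.asarray(positions_deg, dtype=float) - zero)
        c, s = np.cos(2 * psi), np.sin(2 * psi)
        A = np.column_stack([c**2, s * c, -s])
        coef, *_ = np.linalg.lstsq(A, np.asarray(z_obs, dtype=float), rcond=None)
        err = np.sum((A @ coef - z_obs) ** 2)
        if best is None or err < best[1]:
            best = (zero, err, coef)
    return best
```

With real data you would run this for several high-SNR dat files and adopt a compromise zero, exactly as described above.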

Automatic routines

The above steps can be done one by one, but our group has developed IRAF tasks to do them automatically. They are in the pccdpack_inpe package. Their names are:

Aperture photometry

(rev. jan/2010)
The core of the process consists of estimating the flux of each of the two images of each object. These correspond to the ordinary and extraordinary rays emerging from the analyser (calcite). This section outlines the basics of aperture photometry, which is required for polarimetry and/or differential photometry. We use the native IRAF package apphot. The task phot performs the aperture photometry of one or more objects specified by their center coordinates (x,y). The steps of this procedure can be summarized as:
  1. estimate the characteristics of the image: seeing and sky noise (imexamine)

  2. determine the objects that will be reduced, i.e., create the coordinate file for the first image (daofind and ordem)

  3. create the coordinate files for each image taking into account a possible displacement between them (xregister and ordshift). You can use xregister also to check the shifts. I really recommend you do that!

  4. perform aperture photometry (phot)

  5. perform differential photometry and/or polarimetry. We use the tasks cria_dat and pccdgen, respectively. The reduction procedure depends on whether the data were taken with a l/2 or l/4 wave plate.

Let's start by investigating the tasks found in the apphot package. There are several routines that share parameters that are, in turn, defined in specific parameter tasks. The parameter tasks to be edited are:

There can be some cross-talk between packages for some of the parameters listed above. For example, centerpars is used in both apphot and daophot. This can cause some confusion: you may remember setting a parameter in apphot when you actually did so in daophot. It is wise to check whether any such error occurred if a routine is not behaving in the way you expect. One tip is to leave the correct parameters in both packages, since IRAF can move between them without the user being aware.

Image Characteristics

(rev jan/2010)
The characteristics of the image can be estimated using the routine imexamine. It is worth playing around with this routine and learning its capabilities. Some examples of actions of this task are:

Use the above actions to estimate the full width at half-maximum (FWHM) of the stellar profile and the sky counts. The width is an estimate of the seeing, and the square root of the sky value is an estimate of the sky sigma; both should be entered in datapars.
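The sqrt(sky) rule assumes pure Poisson statistics at a gain of 1 e-/ADU; for other gains, the sigma in ADU is sqrt(sky x gain)/gain. A small sketch:

```python
import math

def sky_sigma(sky_counts, gain=1.0):
    """Poisson estimate of the sky standard deviation, in ADU.
    gain is in electrons/ADU; the manual's sqrt(sky) rule is the gain = 1 case."""
    return math.sqrt(sky_counts * gain) / gain

# e.g. a 100 ADU sky at gain 1 gives sigma = 10 ADU
```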


Finding the stars

(rev. jan-2010)

To perform aperture photometry, you must define which objects will be reduced. In practice, this means that you should create a file containing the coordinate pairs. It has the following format:

Ax1 Ay1
Ax2 Ay2
Bx1 By1
Bx2 By2
....

where each pair (A, B, etc.) represents the two images of the same object, i.e., (Ax1,Ay1) is the ordinary image of object A and (Ax2,Ay2) is the extraordinary image of the same object. So each object has TWO lines in the coordinate file. Note that the order within the pair must always be the same. Example: the upper image of a given object must always be the first of the pair. This is achieved automatically if you use the routines daofind and ordem to find the centers and define the pairs. This care is necessary because the angle of polarization depends on the order of the two images (of the same object) in the coordinate file. Specifically, if you calculate the linear polarization of the pair:

Ax1 Ay1
Ax2 Ay2

it is different by 90 degrees from the result you obtain if the coordinate file is:

Ax2 Ay2
Ax1 Ay1 .

Hence, define that the bottom (or the upper) image is the first one of the pair to be listed in the coordinate file and follow this convention throughout the reduction of the run! If this is not done, the angles of polarization will be WRONG! The order of the images in each pair will automatically be kept the same if you use the daofind and ordem procedures.

The first step in creating a coordinate file is to find the stellar images in the frame. This is done with the IRAF task daofind. You can choose the threshold above which objects are detected. This threshold is given in units of the standard deviation of the sky, hereafter sigma. Let's see an example. Suppose you have a sky of around 100 counts (so the sigma is 10). You are working with the frame of a standard star whose profile has 10,000 counts at the peak. It is also by far the brightest object in the frame. So you can tell daofind to find only objects with counts above 8,000 (800 sigma). It will probably find two objects (the ordinary and extraordinary images of your standard). Their centers will be written in an output file called <image>.coo.<number>. The <number> is 1 if it is the first coo file of this image, 2 if it is the second, and so on. You can visually check which objects were found using the tvmark task, which marks the objects of the coo file on your image display (ds9 or ximtool).

To get a file with the format described above (with the images ordered by pairs), you need to use the task ordem (from pccdpack). This task finds the pairs of ordinary and extraordinary images of a given object and creates a new file, whose extension should be ord. Among its parameters, shiftx and shifty represent the distance, in pixels, along the x and y axes between the two images of the same object. You have to estimate these values for each observing run. You can do this using the imexamine task: center three or four pairs of images in different frames and average the differences in x and y.
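Conceptually, ordem scans the detection list for companions displaced by about (shiftx, shifty). The sketch below is a simplified stand-in for what ordem does, not its actual algorithm; the coordinates, shifts and tolerance are invented:

```python
def make_pairs(coords, shiftx, shifty, tol=2.0):
    """Group detections into (ordinary, extraordinary) pairs: a companion is a
    detection displaced by roughly (shiftx, shifty) from a given detection."""
    pairs = []
    for (x, y) in coords:
        for (x2, y2) in coords:
            if abs(x2 - (x + shiftx)) < tol and abs(y2 - (y + shifty)) < tol:
                pairs.append(((x, y), (x2, y2)))
    return pairs
```

Because each pair is always stored as (detection, companion-at-positive-shift), the ordering within pairs is the same for every object, which is exactly the property the polarization angle depends on.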

If you need to use "ordem" with a file containing many objects (over 100), use the routine ordem2 (a task written by Claudia Rodrigues, not a normal part of pccdpack. Because it is compiled, it is much faster.)

Usually one uses the first frame in the sequence for the initial coordinate file, but sometimes another image may be more suitable because it provides a better signal-to-noise ratio or because pointing shifts have occurred during observations.

Let's illustrate with two extreme cases:

  1. polarization 1 single object;

  2. polarization of all the stars in the field.

Defining the coordinates of a single star in a field

(rev. jan-2010)

This section outlines how to reduce data when there is only a single object of interest in the field. An example is the calculation of the polarization of a polar with a quarter-wave plate. If the object is the brightest in the field, use the task daofind with a detection limit slightly below the peak count of the object. Easy! After finding the star, use tvmark to verify that the object of interest was found. If so, use the routine ordem to order the pair. If not, decrease the detection limit.

For polars in a crowded field, the coordinate file may contain a large number of objects. When doing the polarimetry, this file can be edited to contain only the polar (i.e. the object of interest).

Defining the coordinates of all stars in the field

(rev. jan-2010)

Set daofind to a detection limit of around 4 or 5 sigma. Use tvmark to see if the desired number of objects was found. If so, use ordem. Then, use tvmark to check the pairs actually found. You can set tvmark to mark and number the objects; then you can see the order of the objects in the coordinate file. Example:

Pair 1 (=Object 1): centers 1 and 2
Pair 2 (=Object 2): centers 3 and 4
Pair 3 (=Object 3): centers 5 and 6
and so on

It can be a good idea to print the image with these numbers overlaid - this is the easiest way to know the correspondence between the values of polarization and the field objects.

It is important at this stage to determine whether there are saturated stars in the field, i.e., do any objects have brightness profiles indicating that some pixels have reached the level above which the CCD no longer responds properly? The saturated stars can be removed by manually editing the output file of either daofind or ordem. A tip from Cristiane Targon is to set imexam to display with a minimum level (z1) around 29,000 and a maximum level (z2) of 35,000. Also helpful is setting zscale and zrange to "no" and ztrans = log. She has developed a task called disp_sat. This routine displays the saturated stars (defined as those with counts greater than 29,000) and facilitates their identification and removal from the coordinate file.

Finding the displacement between the images

You now have a coordinate file for an initial reference image. However, you need a file for each image in your dataset.

Why can't you use the same file for all images? It is quite common for the telescope to drift slowly over a number of integrations. This results in a given object having different center positions (x, y) in different images. Thus, the coordinate file that was created from the first image (or any other reference you have chosen) may not be suitable for the others. To solve this problem, we (1) calculate the offset of each image with respect to the reference and (2) apply these offsets to the coordinate file of the reference image, creating a coordinate file for each image. The tasks used for this are xregister (step 1 - calculate the shifts) and ordshift (step 2 - create a coordinate file for each image, applying the calculated shifts).

The commands are:

I suggest the following procedure:

Another tip regarding the shift estimation. Suppose the coordinate file of the reference image contains only one star (two beams, though) and some of the images were taken through clouds, so that the star counts in one or both beams are too low in some images. Proper centering of the object may not be possible, leading to errors in xregister. In this case, choose a brighter object that has enough counts in ALL images to center on.
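What ordshift does, conceptually, is apply each frame's (dx, dy) offset from xregister to the reference coordinates. A sketch with invented frame names and shifts:

```python
def shift_coords(ref_coords, dx, dy):
    """Apply the (dx, dy) offset of one frame relative to the reference image
    to every (x, y) of the reference coordinate list."""
    return [(x + dx, y + dy) for (x, y) in ref_coords]

# one shifted coordinate list per frame, given the offsets measured by xregister
shifts = {"img002": (1.5, -0.7), "img003": (2.1, -1.2)}   # hypothetical values
coords = [(100.0, 100.0), (130.0, 85.0)]
per_frame = {img: shift_coords(coords, dx, dy) for img, (dx, dy) in shifts.items()}
```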


Finishing the aperture photometry

With the objects now defined, let's carry out the aperture photometry using the task phot (see parameters). It is essential that the tasks of apphot are configured properly. As such, one must confirm several parameters encountered in phot. Phot will calculate the sky background for each object (using the annulus, dannulus and other parameters from fitskypars) as well as integrate within the apertures defined by photpars. The result, for each image, will be recorded in a file named <image>.mag.*, where * corresponds to a sequential number created by IRAF. If you run phot twice on the same image called huaqr001.fits, for instance, you will have two magfiles: huaqr001.mag.1 and huaqr001.mag.2.

At this point, you already have an instrumental magnitude for each object in each image.

Once the *.mag.* files are created, you must choose the type of reduction you are interested in: polarimetry (l/2 or l/4) or photometry. Below we describe the procedures for each case.

Polarimetry

rev. jan-2010

To calculate the polarization of your target, you must select fields from the *.mag.* files and write them to one or more separate files, which are the input for the pccdgen task - the program that calculates the polarization. It is customary to call these files *.dat. Each dat file corresponds to ONE polarization point. Hence, if you are working with temporal series, you are going to have as many dat files as polarization points in the final polarization curve. If you are interested in only ONE value of polarization, using all the frames you have, you are going to create only one dat file.

Field selection of magfiles - create dat files

rev. jan-2010

There are two tasks which may be used to select the data fields required to perform polarimetry: txdump or cria_dat (the latter is simpler).



Calculating the polarization

rev. jan-2010

Considering a given object, in each frame we can calculate the following value:

X = (ordinary flux - extraordinary flux) / (ordinary flux + extraordinary flux)


This value depends on the Stokes parameters of the incident beam and also on the characteristics and positions of optical elements that make up the polarimeter. As the latter are known, we can estimate the Stokes parameters by the modulation of the value above.

Over a full 360-degree rotation of the wave-plate, the modulation of this value creates a curve which can be fitted to determine the polarization. Specifically, the amplitude is directly related to the polarization degree. See examples of this modulation for a linearly polarized object observed with l/2 and l/4 wave-plates, respectively. Note, at the top of the chart, the values of the Stokes parameters determined by the fit. Another example shows the same plot for an object with circular polarization - a polar. Linearly polarized objects show a modulation with a period of 90 degrees, while the modulation period for circularly polarized light is 180 degrees.
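For concreteness, here is a minimal numpy sketch of how the l/2 modulation can be fitted. It is not part of pccdpack and the function name is illustrative: for a half-wave plate, X at waveplate position angle psi follows X = Q cos(4 psi) + U sin(4 psi), so a linear least-squares fit recovers Q and U, hence p = sqrt(Q^2 + U^2) and theta = 0.5 atan2(U, Q).

```python
import numpy as np

def fit_halfwave_modulation(psi_deg, x):
    """Least-squares fit of X(psi) = Q cos(4 psi) + U sin(4 psi)."""
    psi = np.radians(np.asarray(psi_deg))
    A = np.column_stack([np.cos(4 * psi), np.sin(4 * psi)])
    (q, u), *_ = np.linalg.lstsq(A, np.asarray(x), rcond=None)
    p = np.hypot(q, u)                                   # polarization degree
    theta = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0   # position angle, [0, 180)
    return q, u, p, theta

# Simulated example: 8 positions in steps of 22.5 deg, Q = 0.03, U = 0.01
psi = 22.5 * np.arange(8)
x = 0.03 * np.cos(4 * np.radians(psi)) + 0.01 * np.sin(4 * np.radians(psi))
q, u, p, theta = fit_halfwave_modulation(psi, x)
```

With noiseless simulated data the fit recovers the input Stokes parameters exactly; with real data the scatter of the points around the fitted curve is what pccdgen reports as SIGMA.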


There is more than one task to calculate the polarization from a datfile, i.e., to fit the modulation of the value X.

The task pccdgen creates a *.log file: the file that has the polarization value! Hooray! You're almost done... See an example here. See also the parameters of the routine pccdgen for l/2 and l/4 wave plates. Among the input parameters, one is the executable to be used. Be careful to use the correct exe file!!

(fileexe = "/Users/cryan/iraf/extern/pccdpack/pccd/pccd4000gen08.exe") * PCCD executable

Always check which is the latest version of the executable above. The version above was the correct one as of October 2009.

The exe file has to be adequate for your operating system and installed libraries. You can run it in your unix/linux console to check whether any error occurs. Even if you do not do that, you can verify that pccdgen ran correctly by checking your logfile.

It is worth remembering to always check the configuration of the parameter pospars. In particular, it is not enough to define the number of plate positions, NHW, as a given number, 8 for example. You also need to explicitly edit pospars so that only and exactly those n (8, in the example) positions of the plate are used in the calculation.

If you are reducing l/4 data, you should use the option "normalization = no" in the pccdgen task. Another case in which you have to do this is when you are NOT using a complete set of frames in the modulation, in other words, when the number of frames is not a multiple of 4 for a half-wave plate or 8 for a quarter-wave plate. Please see more on that in the section about the normalization option.

If you are using a l/4 wave-plate, another input parameter of pccdgen is the optical axis position of the waveplate (relative to the first instrumental position of the waveplate, position 1 during the observation). To simplify, let's call this position "zero". The polarization does not depend on zero if you have used a l/2 plate. The procedure to determine zero is explained in a separate section.

When we are dealing with a series of images that will be used to obtain time resolved polarimetry, cria_dat creates dat files as:

dat.001 - frames 1 to 8
dat.002 - frames 2 to 9
dat.003 - frames 3 to 10
and so on

In this case, the zero position is valid for the first set. For the second, you should add 22.5 degrees to the zero position; for the third, 22.5 degrees x 2 = 45 degrees, and so on. Thus, you return to the zero value + 360 degrees at position 17. There is a task that does this automatically, but if you run pccdgen on a datfile that does not correspond to the zero position, you must update this value.
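The bookkeeping above can be sketched in a few lines (illustrative only; the task mentioned above does this automatically, and the function name here is made up):

```python
def zero_for_datfile(first_frame, zero_deg):
    """Zero of the waveplate for a datfile whose first frame is 'first_frame'.

    Each waveplate position advances the effective zero by 22.5 degrees;
    frame 17 is optically equivalent to frame 1 (zero + 360 degrees).
    """
    return zero_deg + 22.5 * (first_frame - 1)

print(zero_for_datfile(1, 10.0))   # 10.0 - the zero itself
print(zero_for_datfile(2, 10.0))   # 32.5 - second datfile of the sliding series
print(zero_for_datfile(17, 10.0))  # 370.0, i.e., the zero value + 360 degrees
```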

As suggested above, the photometry was extracted for a series of 10 apertures. Thus, it is necessary to choose the one that corresponds to the smallest error. The parameter guiding this choice is the sigma of the fit: see a logfile. The procedure for this choice depends on the number of objects reduced.

If you are using the l / 2 wave plate (lucky you), continue with this tutorial. If you are using l / 4 wave-plate, it is necessary to find the angle of the optical axis. This is done by a procedure specified below.

Be careful! The new polarimetric module installed in early 2007 rotates in the opposite sense of the original module. This brings about a sign inversion of the position angle of the linear polarization and of the circular polarization. Hence make sure that you select the correct option in the pccdgen routine, otherwise you can get in trouble when converting your polarization measurements to the equatorial coordinate system.

Polarization of a single object in the field

For a single star, the best aperture choice can be made by visual inspection. One way of recording the result is through the graph of the variation of the value:

(ordinary flux - extraordinary flux) / (ordinary flux + extraordinary flux)

The routine graf allows you to view, print or save plots of this value.

To send the output of graf to the printer, you must:

To create a postscript file, you must:

If you are interested in only one object the reduction is completed. Congratulations! Save the graf printout as a record of your reduction. Do not forget to delete the backup images!



Polarization of various objects in the field

Suppose you are reducing a field where there are hundreds of objects for which the polarization is to be calculated. A manual determination of the aperture corresponding to the error minimum for each object is not practical. You should use the routine macrol (or its variant, macrol_v, for l/4 waveplate data). This routine creates an output file where the lowest-error measurement among the apertures is recorded in one row per object. Thus, the number of lines of the outfile should equal the number of objects measured. macrol has two options for choosing the error minimum: absolute or first minimum. The last option is the default, but I prefer to choose the absolute minimum (= full).

Viewing the result in a field with several objects (rev. 2015 May)

The task select (the link points to an example of lpar output) provides two different outputs. It selects specific sub-groups of your data defined by the polarization range, the maximum error, and so on. It also produces graphic outputs of the selected data: histograms and other diagrams. I have used this routine only with l/2 data, hence it still needs to be tested with l/4 waveplate data.

One of the select products is a map of the polarization vectors. But you usually also want a figure that illustrates the distribution of polarization vectors over a real image. There are many ways to do that. I recommend a two-step procedure: (1) create the polarimetric catalog; (2) plot the catalog using Aladin. The catalog is a table of RA, Dec, and polarization information for each selected object. Note that the polarimetric catalog can be included in your publication. Let's start with the catalog.

Victor de S. Magalhães developed the one-step task coords_real to produce a catalog of polarization (the link points to an example of lpar output). This task is included in the pccdpack_inpe package. The task coords_real is based on original tasks of the pccdpack package from Antonio Pereyra. To run this task, you should download an image of your field from Aladin. In Aladin, choose: file, load astronomical image, and select a survey. I usually choose DSS Red. Choose an image size slightly larger (10 - 20%) than your CCD image. Fill the coords_real parameters according to your CCD image. The catalog extension is ftb. After running coords_real, make sure that RA and Dec were correctly calculated. The task coords_real overplots the calculated positions on the downloaded image. But you should also check it in Aladin - see below.

Having the catalog, you can use Aladin with a filter to plot the catalog superposed on any image of your field. Steps in Aladin:

  • load the appropriate image;
  • load the catalog. Use [open local file] and choose the ftb file. At this point, the positions of your objects should appear as small circles on the image. Take time to check whether the positions are correct. If not, try to change the plate scale;
  • now, load a filter that plots the polarization information in the catalog as vectors. To use a filter, click on [filter] on the right of the image. The filter is:
  • {
    draw
    draw ellipse(10.*${POLARIZATION},0.*${POLARIZATION},${THETA})
    }

    It can be necessary to change the number 10 to scale the vector sizes as you want.

    You are done!

    If you have more than one catalog (ftb file) covering the same region, you should use the combine_pol fortran routine. It combines multiple catalogs by averaging the common stars.

    == this part is obsolete ==

    The step-by-step procedure is explained below.

    The vecplot task allows you to make a figure with a background image of the field, not necessarily from your data, with polarization vectors superimposed whose size and direction correctly represent the field polarimetry. You must first run an auxiliary routine called refer. This routine converts the coordinates of the objects in your image to the coordinates in the DSS image. refer, in turn, uses the output of select.

    Vecplot seems to work only on the 32-bit version of IRAF.

    For the background image, you can use an image of DSS obtained from the SkyView (Advanced Form). Use the filter closest to the one used in your data.  Some suggestions for the options in SkyView:

    If you have trouble running vecplot, you must include the following data in select: (a) the angle correction to convert the angles of polarization for the standard system (see section below), (b) the position of the north and east directions in the image (of your data);

    refer converts the coordinates of the objects in your image to those of the background image that will be used in vecplot. To do that, you need to know the plate scale of your telescope. If you are using the OPD telescopes, this information can be found on the LNA page. Below we present some numbers appropriate for the CCD101 on the IAG telescope of the OPD, as examples for the parameters of the tasks vecplot and refer.

    Example: the plate scale of this telescope is 25.09 "/mm, which is equivalent to 0.02509 "/um. Since each pixel of CCD101 (and also CCD106) is 24 um, the angular pixel size is 0.602 ". Thus, the refer parameter for the CCD plate scale should be 0.602 "/pixel. To calculate the plate scale of an image obtained with SkyView, divide the size of the image in arcseconds by the number of pixels. Example: 0.25 deg = 900 ", so the plate scale is 900/1024 = 0.88 "/pixel. It is also necessary to define a common reference star in the two images: the DSS one and yours.
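The arithmetic above can be checked with a few lines of Python (the numbers are the ones quoted for CCD101 and for a 1024-pixel SkyView image):

```python
# Telescope plate scale: 25.09 arcsec/mm = 0.02509 arcsec/um
plate_scale_arcsec_per_mm = 25.09
pixel_size_um = 24.0  # CCD101 / CCD106 pixel size

ccd_scale = plate_scale_arcsec_per_mm * pixel_size_um / 1000.0
print(round(ccd_scale, 3))  # 0.602 arcsec/pixel

# DSS/SkyView image: 0.25 deg = 900 arcsec across 1024 pixels
dss_scale = 0.25 * 3600.0 / 1024.0
print(round(dss_scale, 2))  # 0.88 arcsec/pixel
```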

    Some suggestions if you have problems running vecplot:
    - Compare the DSS image with the CCD one. The DSS image has to be a little larger than the CCD one and have approximately the same center;
    - Create a CCD image with the objects marked with tvmark;
    - Perform an initial vecplot run without vectors and compare with the image above;
    - Also test tvmark on the DSS image given as output;
    - It may be necessary to adjust the minimum and maximum levels of the background image (for example, if the background image is too light or too dark). This is done by setting the parameter niveisfull of vecplot to "no" and setting the parameter values z1 and z2. As initial guesses, though likely inadequate ones, use those shown by the display command.

    It is worth reading the vecplot help.

    16-March-2011 - Claudia: When we made the catalogs for Cristiane Targon's article, we noticed an addition of +90 degrees in the final catalog. Apparently, refer is adding this number when creating the txt file. It is good to keep an eye on this and to check carefully both the plots and the catalog values.
    ==>> According to an e-mail exchange with A. Pereyra, the angles in the txt do indeed carry an addition of 90 deg. fintab, which also works with magnitudes, seems to take care of this. However, the version of fintab that I modified carried this addition erroneously. Make this correction when Victor starts working on this.

    == obsolete

    What is the optical axis of a quarter-wave plate? (Rev. jan-2010)

    In order to reduce quarter-wave plate data, you must determine the position of the waveplate optical axis relative to the first instrumental waveplate position. This is done by calculating the offset of the instrumental angle relative to the real value. We call this offset zero: from "waveplate zero" or "zero da lamina" (in Portuguese). In short, to estimate the waveplate zero you should calculate the polarimetry for all possible values of zero (0 - 90 deg) and choose the one that provides the smallest error. This should be done for several objects in a given run. Yes, it is a lot of work indeed... This procedure is described in more detail below and should be done for all linearly polarized standards and also for stars having a non-zero circular polarization, usually your program star if you are doing polar research.

    Initially, you must create files which contain the selection of the appropriate mag files, the dat files. See the specific section above. The task used is cria_dat. Note that the parameter interval should be set to 8 for the program star (a polar, usually) and to the total number of images when dealing with standard stars.

    How do you find the angle that provides the smallest error in the polarization? We start by the standard stars.

    ---- forget this text ---- CVR - jan 2010
    One method is to use the task acha_zero (which is semi-obsolete, see zerofind, below). This routine calculates the Stokes parameters using a range of optical axis angles specified by an initial value, a final value and a step. The output should be examined visually to find the angle that provides the smallest error. acha_zero uses the task PCCD to calculate the individual values of polarization. Thus, ensure that PCCD is configured properly before running acha_zero. In particular, use this executable:

    /home1/claudiavr/pccd/vccd.exe

    Alternately, pccdpack has a similar routine to acha_zero called zerofind. This program selects the aperture which minimizes the linear polarization error. zerofind uses pccdgen to compute the polarization. Make sure it too is configured properly.

    -----until here -------------

    Pccdpack has a routine called zerofind that does the job of calculating the polarization for a series of values of zero. It also plots the error as a function of zero, so it is very easy to find the "right" value. The task zerofind also selects the aperture which minimizes the linear polarization error. zerofind uses pccdgen to compute the polarization, so make sure it is also configured properly. The task zerofind only works for dat files with ONE object!

    zerofind must be repeated for each polarized standard star and for the program objects as well. In the latter case, I suggest running zerofind on sets of data in which the first image number is a multiple of 8, plus 1 (e.g., 1, 9, 17...). This is because the offset is calculated assuming that the first frame in the dat file was taken at instrumental position 1. Positions 9, 17, ... are optically equivalent to position 1, but that is not true for the others.
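The idea behind zerofind can be sketched as a brute-force search: for each trial zero, fit the l/4 modulation and keep the value that minimizes the fit residual. The sketch below assumes the usual quarter-wave plate modulation X(psi) = Q cos^2(2 psi) + (U/2) sin(4 psi) - V sin(2 psi), with psi measured from the optical axis; it is an illustration of the principle, not the pccdpack implementation.

```python
import numpy as np

def quarterwave_design(psi_rad):
    """Design matrix of X = Q cos^2(2 psi) + (U/2) sin(4 psi) - V sin(2 psi)."""
    return np.column_stack([np.cos(2 * psi_rad) ** 2,
                            0.5 * np.sin(4 * psi_rad),
                            -np.sin(2 * psi_rad)])

def find_zero(psi_inst_deg, x, zeros_deg):
    """Return the trial zero with the smallest least-squares residual."""
    best_zero, best_resid = None, np.inf
    for zero in zeros_deg:
        psi = np.radians(psi_inst_deg - zero)   # angles relative to the optical axis
        A = quarterwave_design(psi)
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
        resid = np.sum((x - A @ coeffs) ** 2)
        if resid < best_resid:
            best_zero, best_resid = zero, resid
    return best_zero

# Simulated data: 16 positions, true zero = 30 deg, Q, U, V = 0.02, 0.01, -0.03
psi_inst = 22.5 * np.arange(16)
x = quarterwave_design(np.radians(psi_inst - 30.0)) @ np.array([0.02, 0.01, -0.03])
zero = find_zero(psi_inst, x, np.arange(0.0, 90.0, 1.0))
```

In this noiseless example the residual drops to essentially zero at the true value, 30 degrees; with real data you would inspect the residual-versus-zero curve, as zerofind plots it.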

    Every standard or program datafile will provide a given zero value. They should be similar, but that is not always the case. You will have to decide on the angle that best represents the optical axis by analyzing the values found and the quality of the data used in each case. I do not have a recipe, unfortunately.


    You have now determined the value that best represents the optical axis of the quarter-wave plate. Remember that this value will change from mission to mission and may also change if you change the instrument during the mission.

    Configure either pccd or pccdgen with the zero value you just determined, and calculate the final polarization as described in the section about calculating the polarization. If you are interested in obtaining time-resolved polarimetry, follow the steps in the next section.

    Time Resolved Polarimetry

    rev. jan-2010

    In some projects, we are interested in obtaining time resolved polarimetry of a given object. There are some tasks that facilitate the reduction of such a data-set, as described below.

    With the aperture photometry completed (phot), we have the magfiles in hand. Thus, we can use cria_dat to create a series of datfiles, each with the number of images suitable for the reduction. When observing the circular polarization of a polar, good time resolution is important. You can use 8 images in each datfile, which is the recommended minimum number for the calculation of polarization with a quarter-wave plate. Imagine a series of 80 images. cria_dat will create a series of dat files as follows:

    dat.001 - frames 1 to 8
    dat.002 - frames 2 to 9
    ...
    dat.073 - frames 73 to 80 (which is a last datfile with 8 images)
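The sliding-window scheme above can be sketched as follows (illustrative only; cria_dat itself writes the dat files):

```python
def sliding_windows(n_frames, window=8):
    """Frame ranges (first, last) for datfiles built with a sliding window."""
    return [(first, first + window - 1)
            for first in range(1, n_frames - window + 2)]

windows = sliding_windows(80)    # 80 frames, windows of 8 images
print(len(windows))              # 73 datfiles
print(windows[0], windows[-1])   # (1, 8) (73, 80)
```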

    If you are reducing data obtained with a quarter-wave plate, you need to determine the zero plate position: see the section accordingly. If you already know the zero plate position, you can use pccd_var to calculate polarizations of all datfiles at once.

    pccd_var uses pccdgen to calculate the polarization of a series of datfiles. It is therefore essential to set up pccdgen correctly: number of plate positions (NHW), number of apertures (nap), type of plate, readout noise, the executable used to calculate the polarization, etc. Never forget to check pccdgen!

    pccd_var calculates the polarization of a series of datfiles (e.g., dat.001, dat.002, etc.), producing a log for each datfile (e.g., log.001, log.002, ...). In undertaking the aperture photometry (phot), we used different apertures. Thus, each datfile contains the result of several apertures (typically 10), and therefore each log has the polarization of each object calculated using 10 different apertures. Which aperture provides the best estimate of the polarization? First, check the variable "SIGMA", which measures the error of the polarization based on the quality of the fit of the modulation curve. It is not an error based on photon noise! The latter is the Sigma_theor value, also present in the log file. You must therefore select the aperture having the lowest value of SIGMA. In this example logfile, the lowest SIGMA is the one that corresponds to an aperture of radius = 8 pixels. This search is done automatically by the routine macrol. pccd_var uses macrol and produces a file called *.out (e.g., log.out) that contains the best estimates of polarization for the given datfiles.
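The aperture selection performed by macrol amounts to picking, for each object, the aperture with the lowest SIGMA. A sketch (the sigma values below are made up for illustration, with the minimum at radius 8 as in the example logfile):

```python
import numpy as np

apertures = np.arange(2, 12)  # radii 2..11 pixels, as in the phot setup
# Hypothetical SIGMA of the modulation fit for each aperture of one object
sigmas = np.array([0.41, 0.25, 0.18, 0.14, 0.11, 0.09, 0.08, 0.10, 0.13, 0.17])

best = int(np.argmin(sigmas))
print(apertures[best], sigmas[best])  # radius 8, the lowest SIGMA
```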

    If the first frame of the first .dat file was not taken at position 1 (of the 16 possible positions of the plate), you need to be careful with some parameters. If you used the half-wave plate, delta_theta must be configured as:

    [(# plate positions) -1] * (-45).

    Use values between -360 and 0.

    If you used a quarter-wave plate, the plate zero should be:

    [(# plate positions) -1] * (22.5).

    Rejoice! You have calculated the polarization. But you still need to see the result. Do this using the routine plota_pol. For this you need to create a file with the HJD of each image. You can do this as described here. The file created (hjd.txt, for example) should have a number of rows equal to the number of images used to create the datfiles.

    plota_pol can create an output file with two columns containing the HJD (or phase) and the polarization, which can be used as input to diagfase2.

    If you want to use select with an output of l/4, create a new *.out with the command:

    fields [out.original] 3-11 > [out.select]




    Polarimetric calibration: Standard Stars

    Above we described how to determine the polarimetry of a given object. However, in principle the method should be calibrated. This calibration can be divided into three basic corrections:

    1. converting the polarization angle from instrumental to equatorial systems;
    2. correcting for the efficiency of the instrumental ensemble;
    3. subtraction of any bias introduced by the system, the so-called instrumental polarization.

    Below we discuss each correction and how to calculate it.

    Conversion of the polarization angle to the equatorial system

    The conversion is to the equatorial standard system, which has its zero direction defined by north and increases eastward. This reference frame is defined by taking measurements of two polarized standard stars over one night. Because the polarization angle varies with wavelength, this calibration is filter dependent, and this correction must therefore be done for every filter observed. If the instrumental setup does not change throughout a mission, the value for each filter will be unique, i.e., no variation should be observed from night to night. The correction is calculated from the differences between the instrumental polarization angles and the reference values obtained from the literature. A list of measurements already undertaken can be found here (Polarimetry / Observation of polarimetric standards). These links provide some rough estimates for the values of the polarization standards and the references in which they can be found.

    Specifically, the steps are:

    Now apply the correction to all measurements to change the angle from instrumental to celestial equatorial systems.

    Remember that polarization angles are between 0 and 180 degrees. So, if you get negative angles you can convert them to positive values by adding 180 degrees.
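In code, the correction and the 0-180 wrap can be written as follows (delta is the correction derived from the standard stars; the function name is illustrative):

```python
def to_equatorial_angle(theta_inst_deg, delta_deg):
    """Apply the standard-star correction and wrap the angle into [0, 180)."""
    return (theta_inst_deg + delta_deg) % 180.0

print(to_equatorial_angle(10.0, -30.0))   # -20 wraps to 160.0
print(to_equatorial_angle(175.0, 20.0))   # 195 wraps to 15.0
```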

    Problems with the conversion? Make sure you used the correct new_module parameter of pccdgen.

    Estimation of instrumental polarization

    The instrument introduces a small degree of polarization as well. Therefore, for an incoming non-polarized beam, the measurement would have a polarization value different from zero. The instrumental polarization of the LNA polarimeter is negligible, but an estimate of it can be obtained by measuring a standard star known to have zero polarization and checking the final (post-instrument) polarization.

    Estimation of the Polarimetric Throughput

    We define the polarimetric throughput (or efficiency) of an instrument as its ability to fully measure the polarization of a given object. For example, if the beam enters the instrument having 4% polarization, but the instrument provides a measurement of 2%, the efficiency of the ensemble is 0.5 (or 50%). The efficiency of the polarimetric drawer installed at the LNA is very close to 100%. However, you can/should take steps to confirm this value. This is done by inserting a Glan prism at the entrance of the beam. This element converts any optical beam into a 100% polarized beam. Thus, the polarization inferred from any source with a Glan prism should be ~100%. If not, the ratio of the obtained value to 100% is an estimate of the efficiency and should be applied to each polarization measurement.
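The correction amounts to dividing each measured polarization by the efficiency inferred from the Glan-prism measurement. A sketch using the 4%/2% example from the text (function names are illustrative):

```python
def polarimetric_efficiency(p_glan_percent):
    """Efficiency inferred from a Glan-prism measurement (should be ~100%)."""
    return p_glan_percent / 100.0

def correct_polarization(p_measured, efficiency):
    """Undo the throughput loss: p_true = p_measured / efficiency."""
    return p_measured / efficiency

eff = polarimetric_efficiency(50.0)    # instrument measuring 50% with the Glan prism
print(correct_polarization(2.0, eff))  # 4.0: a 2% measurement means 4% polarization
```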

    Differential photometry using polarimetric data

    revision: jan-2010

    In this section, we describe how to obtain light curves from a sequence of polarimetric frames. You must create the *.mag.* files, which correspond to the output of the phot routine, i.e., the aperture photometry. Your coordinate file must contain at least two objects: the program star and the comparison. However, we recommend including many objects of different magnitudes to give you room to choose the most appropriate comparison.

    I suggest that you print out an image of the star field highlighting the objects for which you did the photometry and their sequential numbers. That is, run display on a frame and then tvmark with the appropriate coordinate file. You can also write it out to a postscript or pdf file. Printing can be done through saoimage (ds9), which also has the option to create a ps file. With this reference, you can quickly find the sequential number of a given field star. This figure is also useful to register (for future reference) which is the program object and which comparison star you used in the final light curves.

    The procedure described in this section applies to data obtained with calcite as the analyzer, irrespective of whether the images were obtained with a half-wave or a quarter-wave plate.

    The procedure can be summarized in two steps:

    - Create a single file (for all frames) that selects the data contained in the *.mag* files that are necessary for the differential photometry;
    - Do the differential photometry using the task phot_pol.

    The visualization is done with a separate task, plota_luz, as described below.

    Here is a general outline of the procedure:

    To keep things simple, please make sure that the file which contains the list of *.mag* files has the extension ".pht". It will be created with the task txdump, using the following parameters:

    textfiles = "*.mag.1" Input apphot/daophot text database(s)
    fields = "image, msky, nsky, rapert, sum, area" Fields to be extracted
    expr = "yes" Boolean expression for record selection
    (headers = no) Print the field headers?
    (parameters = yes) Print the parameters if headers is yes?
    (mode = "ql") Mode of task

    A list of input files can be specified with wildcards, as in the example above, or through a file containing the list of *.mag.* files.

    Run txdump directing the output to the desired file:

    txdump> name_of_the_file.pht

    Now, simply run phot_pol (be sure to edit your login.cl to include this external task). This routine performs differential photometry, i.e., the fluxes are calculated using one of the objects of the field as the flux standard. Thus, the output corresponds to flux relative to that of the comparison star. I suggest setting the extension of the output file to ".lc". The light curves are calculated for all the stars in the coordinate file and for all apertures used in phot. You need to run phot_pol only once to get all the light curves using a given star as the comparison.

    phot_pol is no different from programs that perform differential photometry on purely photometric data. In our case, however, the light of an object is separated into two "images" in a frame. Therefore, you must add the two images before making the comparison.
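The core of that operation can be sketched with numpy: add the ordinary and extraordinary counts of each object and divide by the summed counts of the comparison star (the array layout and names are illustrative, not phot_pol's internals):

```python
import numpy as np

def differential_flux(counts, comp):
    """Differential photometry for dual-beam (calcite) data.

    counts: array of shape (n_frames, n_stars, 2), the two beams of each star.
    comp:   index of the comparison star.
    Returns fluxes relative to the comparison, shape (n_frames, n_stars).
    """
    total = counts.sum(axis=2)              # add ordinary + extraordinary beams
    return total / total[:, comp][:, None]  # divide by the comparison star

# Two frames, two stars: star 0 (comparison) and star 1
counts = np.array([[[100., 100.], [50., 50.]],
                   [[200., 200.], [120., 80.]]])
rel = differential_flux(counts, comp=0)
print(rel)  # star 1 has relative flux 0.5 in both frames
```

Note that summing the two beams also removes the polarimetric modulation from the light curve, which is why the comparison is made on the summed counts.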

    Here's an example of parameters phot_pol:

    file_in = "rej1007.pht" file with output txdump
    file_out = "rej1007.luz" Name of the output file
    (nstars = 32) Number of stars - in this case, the pht file must have 32 * 2 = 64 lines per frame
    (NHW = 112) Number of positions of the waveplate (that equals the number of frames!)
    (nap = 10) Number of apertures
    (comp = 1) Number of the comparison star
    (star_out = 0) Star that is not included in the sum of the fluxes
    (gain = 5.5) Gain e- / adu
    (mode = "ql")

    This task also creates a fake object whose counts are the sum of all the objects of the field, except the comparison. You can cut out any other stars from that sum by setting the parameter "star_out".

    When starting to reduce a data-set, one usually does not know which field star will be most suitable to use as a comparison object. Ideally, this star will be the brightest among the objects of constant flux (i.e. not variable) in the field. Thus, it is necessary to run phot_pol using different comparison objects and inspect the results, what should be done with the task plota_luz.

    Before running the plota_luz, you need to create a file with a list of the current HJD for each image. Here is how to do that. Let's call this file hjd.txt.

    Be careful! The hjd.txt should have the same number of rows as plate positions used in phot_pol.

    An example of parameters plota_luz is shown below:      

    arqpht = "saida2.lc"     Input file pht
    (time = "teste.hjd")        File with time input
    (star = 1) Number of star to plot the pho
    (aper = 1) Ordinal number of the aperture
    (connected = no) Connect the points
    (points = yes) Plot points
    (title = "Cet FL") Title of the graphics
    (phase = no) Convert to HJD orbital phase
    (to = 0.)             T0 of the ephemeris
    (per = 0.)             Period (days)
    (convert_mag = yes) Convert dat to input magnitudes?
    (ffile = no) Create HJD, mag file?
    (mmagfile = "") Name of the HJD, mag file
    (metafile = no) Create mc file
    (eps = no) Create eps file
    (lim = no) Change limits
    (flist = "saida2.luz")  
    (mode = "ql")         

    The parameter "aper" refers to the aperture order in the configuration of phot, not to its actual value. For example, suppose you worked with 10 apertures between 2 and 11. Aperture 1 in plota_luz corresponds to the first one, in this case with a value of 2.

    Conventions of plota_luz:
    - if you type "0" for the star number, the task plots the light curves of all objects;
    - typing "enter" at the aperture or star/aperture prompt adds 1 to the last aperture used and keeps the same star.

    At this point, you need to choose the best comparison field star. One definition would be the brightest star in the field that has the least variability. (Ideally, this object would also have the same color as the program object.) This selection must be made based on the analysis of the light curves of the star compared to others in the field. For this, you can use the mean and sigma of a given light curve. These values appear in the console when you use plota_luz. If you are working with data from multiple observing runs, it is useful to look at the mean magnitude differences among the stars in the field to make sure that no long-period variability has gone unnoticed.

    Once you have chosen a comparison star, you should choose the best aperture: it should be the one which results in the lowest dispersion (sigma) of the light curve.

    The differential photometry is done! You have the light curve of the target object using a comparison star and a given aperture. Congratulations! Now you may want to calibrate the differential magnitudes. To do this properly, you need to have observed photometric standard stars at several air masses, in order to carry out the procedure for absolute photometry. If this is not your case, you may obtain a rough estimate of the magnitudes of the field objects using the magnitudes of the USNO catalog:

    http://www.nofs.navy.mil/data/fchpix/

    It is useful to compare the magnitude differences of the field stars with those obtained from this catalog. Do not forget to write down the USNO names of your program object and of the chosen comparison star. This is a universal reference that you must specify in your work (article, dissertation, thesis, etc.).

    You can access the USNO catalog (and plenty of others!) from the Aladin interface. Try it!

    Use plota_luz to create an output file with two columns containing the HJD and the differential magnitude. It can be used as input to diagfase2 or pdm, for example.

    The IRAF task pdm can be used to find the period of a light curve. To use it, do not forget to choose the range of periods in which to make the search. It is worth remembering the following pdm commands:

    h -> plot data in HJD (the first graph)
    k -> fit the period
    p -> plot the phase diagram of the period at cursor position


    Zero point of the system of magnitudes

    (Contributed by Karleyne MG da Silva)

    It is possible that you need the results of the photometry in units of absolute flux instead of magnitudes. An example is the use of the program CYCLOPS (cyclotron emission in polar shocks), which calculates the continuum emission from polars: if you want to model the optical flux in more than one band, the fluxes must be in consistent units. We recommend using the values listed in the table below to convert magnitudes to fluxes in the standard system.

    Constants to convert magnitude to flux in the Johnson-Cousins system

    Filter   Frequency    Wavelength   Flux of mag = 0 (Jy)
             (10^14 Hz)   (micron)     (1)     (2)     (3)
    ------   ----------   ----------   ----    ----    ----
    U        8.33         0.36         1880    1810    1823
    B        6.18         0.44         4650    4260    4130
    V        5.45         0.55         3950    3640    3781
    V*       5.45?        --           --      --      3540
    Rc       4.68         0.64         --      3080    --
    Ic       3.79         0.79         --      2550    --
    R        4.28         0.70         2870    --      2941
    I        3.33         0.90         2240    --      2636

    Note: the fluxes in the table are per unit frequency (Jy). To convert a flux per unit frequency to a flux per unit wavelength (lambda), multiply the result by (c / lambda^2), where c is the speed of light; to go the other way, multiply by (lambda^2 / c).

    References

    (1) Léna, P., Lebrun, F., Mignard, F., 1998, Observational Astrophysics, p. 91
    (2) Bessell, M. S., 1979, PASP, 91, 589 - UBVRI photometry. II - The Cousins VRI system, its temperature and absolute flux calibration, and relevance for two-dimensional photometry
    (3) Cox, A. N., 2000, Allen's Astrophysical Quantities

    * For Vega in the V band, according to reference (3), flux = 3540 Jy or 3.44e-8 W m^-2 micron^-1.
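    The conversion described above (F = F0 * 10^(-0.4 m), plus the per-frequency to per-wavelength change of units) can be sketched as follows. The zero-magnitude fluxes are the reference (2) values from the table; the function and variable names are illustrative, not from any existing package.

```python
# Sketch: magnitude -> flux using the zero-magnitude fluxes of the table
# (reference 2 values), F = F0 * 10**(-0.4 * m).
F0_JY = {"U": 1810.0, "B": 4260.0, "V": 3640.0, "Rc": 3080.0, "Ic": 2550.0}

def mag_to_flux_jy(mag, band):
    return F0_JY[band] * 10.0 ** (-0.4 * mag)

def fnu_to_flam(f_nu_jy, wavelength_um):
    """Convert F_nu (Jy) to F_lambda (W m^-2 micron^-1): multiply by c/lambda^2."""
    c = 2.99792458e14            # speed of light in micron/s
    f_nu_si = f_nu_jy * 1.0e-26  # 1 Jy = 1e-26 W m^-2 Hz^-1
    return f_nu_si * c / wavelength_um ** 2

print(mag_to_flux_jy(0.0, "V"))        # 3640.0 by construction
print(fnu_to_flam(3540.0, 0.55))       # ~3.5e-8, close to the 3.44e-8 quoted for Vega
```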


    That's it! You did differential photometry using polarimetric data. Now put your results in an article.



    General Tips



    Before running setjd, make sure that the following keywords are present and have the correct values:
    If necessary, use hedit to correct the above keywords.

    To create a file with a list of the HJD of all files, run:


    How to register images

    You can register images in different ways. Below we describe the procedure using the routine xregister. An example of parameters used for this routine is shown below.

    Steps:

    xregister creates a file with various information about the computed displacements (the shifts parameter). It can be useful in case of problems.

    xregister records the applied shifts in the image header through the alteration/inclusion of the keywords LTV1 and LTV2. The trimming process may already have created these keywords, in which case they will hold the total offsets (trim + registration). Note that this procedure of tracking the changes made to an image using specific header fields is widely used in IRAF.
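    The LTV bookkeeping above follows the standard IRAF convention: logical (current) pixel coordinate = physical (original) coordinate + LTV, and successive operations simply accumulate their offsets. A minimal sketch with hypothetical offset values:

```python
# Sketch of the IRAF LTV1/LTV2 convention: logical = physical + LTV.
# When an image is first trimmed and then registered, the offsets add up.
def logical_coord(physical, ltv):
    return physical + ltv

ltv1_trim = -9.0      # e.g. a trim section starting at column 10 (hypothetical)
ltv1_register = 3.0   # shift applied by xregister (hypothetical)
ltv1_total = ltv1_trim + ltv1_register  # value stored in the header after both steps

# a star at physical column 100 now sits at logical column 94
print(logical_coord(100.0, ltv1_total))  # 94.0
```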

    About the normalization option in the pccdgen


    The polarization (i.e., the Stokes parameters) is calculated from the modulation of the ratio between the difference and the sum of the ordinary and extraordinary fluxes. Specifically, pccdgen calculates the following quantity:

               extraord(i) - ord(i)
    z(i)  = ----------------------------   (1)
               extraord(i) + ord(i)

    where

    extraord(i) : counts of extraordinary beam in waveplate position i
    ord(i): counts of ordinary beam in waveplate position i

    In the case of retardance = 180 deg (half-wave plate), one can show that the sum of the ordinary fluxes equals the sum of the extraordinary fluxes, provided the number of waveplate positions used is a multiple of 4 (images 1 to 4, or 1 to 8, for instance). In this case, you can use the following expression for z(i):

               extraord(i) - ord(i) * k
    z(i)  = ----------------------------        (2)
               extraord(i) + ord(i) * k

    where

    k = [sum over i of extraord(i)] / [sum over i of ord(i)]

    The k value (calculated from the data themselves) corrects for the different sensitivities of the detector to incoming beams of orthogonal polarizations, such as the ordinary and extraordinary beams, as well as for other effects.

    In pccdgen, the option norm=yes turns on the normalization by k. The option norm=no assumes k = 1, which is equivalent to using expression (1) above.

    For retardances different from 180 deg, the expected value of k is not 1, so the normalization cannot be applied. pccdgen has been modified so that, if the waveplate is not half-wave, norm is forced to "no". But you should take care and verify the header of the logfile created.
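    The effect of the normalization can be illustrated with a few lines of code. This is a sketch of expressions (1) and (2) above, not of pccdgen itself; the counts are synthetic, mimicking an extraordinary beam 10% more sensitive than the ordinary one and no real polarization signal.

```python
# Sketch of the two z(i) definitions: eq. (1) without and eq. (2) with
# the normalization factor k = sum(extraord) / sum(ord).
def z_values(extraord, ord_, normalize=True):
    k = sum(extraord) / sum(ord_) if normalize else 1.0
    return [(e - o * k) / (e + o * k) for e, o in zip(extraord, ord_)]

ord_ = [1000.0, 1100.0, 1000.0, 900.0]   # ordinary counts, 4 waveplate positions
extraord = [1100.0, 1210.0, 1100.0, 990.0]  # same signal, 10% sensitivity offset

# norm=yes: the spurious offset cancels and z(i) ~ 0 at every position;
# norm=no: the offset masquerades as a polarization signal.
print(z_values(extraord, ord_))                      # all ~0
print(z_values(extraord, ord_, normalize=False)[0])  # ~0.0476
```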


    POLARIMETRIC STANDARDS

    On standard polarized stars
    Hsu, J.-C.; Breger, M.
    http://ads.nao.ac.jp/cgi-bin/nph-bib_query?bibcode=1982ApJ...262..732H&db_key=AST


    An atlas of Hubble Space Telescope photometric, spectrophotometric, and polarimetric calibration objects
    Turnshek, D. A.; Bohlin, R. C.; Williamson, R. L., II; Lupie, O. L.; Koornneef, J.; Morgan, D. H.
    http://ads.nao.ac.jp/cgi-bin/nph-bib_query?bibcode=1990AJ.....99.1243T&db_key=AST


    The Hubble Space Telescope Northern-Hemisphere grid of stellar polarimetric standards
    Schmidt, Gary D.; Elston, Richard; Lupie, Olivia L.
    http://adsabs.harvard.edu/abs/1992AJ....104.1563S

    A Medium-Resolution Search for Polarimetric Structure: Moderately Reddened Sightlines
    Wolff, Michael J.; Nordsieck, Kenneth H.; Nook, Mark A.
    http://adsabs.harvard.edu/abs/1996AJ....111..856W


    Standard Stars for Linear Polarization Observed with FORS1
    Fossati, L.; Bagnulo, S.; Mason, E.; Landi Degl'Innocenti, E.
    http://adsabs.harvard.edu/abs/2007ASPC..364..503F

    Systematic variations in the wavelength dependence of interstellar linear polarization

    ESO Polarimetric Standard Stars
    http://www.eso.org/sci/facilities/paranal/decommissioned/isaac/tools/swp1.html
    http://www.eso.org/sci/facilities/paranal/instruments/fors/inst/pola.html

    Compilation from Instituto Astrofisica de Canarias
    http://www.ing.iac.es/Astronomy/observing/manuals/html_manuals/wht_instr/isispol/node39.html








    PARAMETERS OF SOME TASKS USED

    acha_zero

    indata = "bd32_r.dat" Input file / directory
    (outmac = "think") Output file / list for macrolide
    (platebeg = 0) Waveplate position - beginning
    (plateend = 90) Waveplate position - final
    (step = 1) Step to vary the waveplate position
    (mode = "ql")

    ccdproc

    images = "@arq" List of CCD images to correct
    (output = "") List of output CCD images
    (ccdtype = "") CCD image type to correct
    (max_cache = 0) Maximum image caching memory (in Mbytes)
    (noproc = no) List processing steps only?
    (fixpix = no) Fix bad CCD lines and columns?
    (overscan = no) Apply overscan strip correction?
    (trim = yes) Trim the image?
    (zerocor = yes) Apply zero level correction?
    (darkcor = no) Apply dark count correction?
    (flatcor = yes) Apply flat field correction?
    (illumcor = no) Apply illumination correction?
    (fringecor = no) Apply fringe correction?
    (readcor = no) Convert zero level image to readout correction?
    (scancor = no) Convert flat field image to scan correction?
    (readaxis = "line") Read out axis (column|line)
    (fixfile = "") File describing the bad lines and columns
    (biassec = "") Overscan strip image section
    (trimsec = "[10:521,1:512]") Trim data section
    (zero = "../../bias/biasave") Zero level calibration image
    (dark = "") Dark count calibration image
    (flat = "../../flat/v/flatvave") Flat field images
    (illum = "") Illumination correction images
    (fringe = "") Fringe correction images
    (minreplace = 1.) Minimum flat field value
    (scantype = "shortscan") Scan type (shortscan|longscan)
    (nscan = 1) Number of short scan lines
    (interactive = no) Fit overscan interactively?
    (function = "legendre") Fitting function
    (order = 1) Number of polynomial terms or spline pieces
    (sample = "*") Sample points to fit
    (naverage = 1) Number of sample points to combine
    (niterate = 1) Number of rejection iterations
    (low_reject = 3.) Low sigma rejection factor
    (high_reject = 3.) High sigma rejection factor
    (grow = 0.) Rejection growing radius
    (mode = "ql")


    coords_real
    (objeto = "hh160")        Root of the output files
    (referencia = "ESO_POSS2UKSTU_Red.fits") Reference image
    (imgccd = "zhh160c1_001.fits") CCD image (your data!)
    (espccd = 0.35)           Plate scale in the CCD image (arcsec/pixel)
    (xtamccd = 2048.)          CCD size in X direction (pixels)
    (ytamccd = 2048.)          CCD size in Y direction (pixels)
    (norde = "right")        North direction in CCD image
    (ost = "bottom")       East direction in CCD image
    (inclina = 0.)             Angle between axis and equatorial reference fra
    (recentra = yes)            Recenter objects?
    (centrobox = 5.)             centering box width in scale units
    (flist1 = "tmp$coords6338e")
    (flist2 = "tmp$coords6338g")
    (mode = "q")           

    cria_dat

    varim = "@mag" Input mag list
    (outdate = "dat") Root of the output files
    (interval = 8) Number of images in one dat file
    (flistvar = "tmpvar9215co")
    (mode = "ql")


    fitskypars

    (salgori = mode) Sky fitting algorithm
    (annulus = 50.) Inner radius of sky annulus in scale units
    (dannulu = 10.) Width of sky annulus in scale units
    (skyvalu = 0.) User sky value
    (smaxite = 10) Maximum number of sky fitting iterations
    (sloclip = 0.) Lower clipping factor in percent
    (shiclip = 0.) Upper clipping factor in percent
    (snrejec = 50) Maximum number of sky fitting rejection iteratio
    (sloreje = 3.) Lower K-sigma rejection limit in sky sigma
    (shireje = 3.) Upper K-sigma rejection limit in sky sigma
    (khist = 3.) Half width of histogram in sky sigma
    (binsize = 0.1) binsize of histogram in sky sigma
    (smooth = no) Boxcar smooth the histogram
    (rgrow = 0.) Region growing radius in scale units
    (mksky = no) Mark sky annuli on the display
    (mode = ql)

    flatcombine

    input = @arch List of flat field images to combine
    (output = flatave) Output flat field root name
    (combine = average) Type of combine operation
    (reject = avsigclip) Type of rejection
    (ccdtype =) CCD image type to combine
    (process = no) Process images before combining?
    (subsets = no) Combine images by subset parameter?
    (delete = no) Delete input images after combining?
    (clobber = no) Clobber existing output image?
    (scale = mode) Image scaling
    (statsec =) Image section for computing statistics
    (nlow = 1) minmax: Number of low pixels to reject
    (nhigh = 1) minmax: Number of high pixels to reject
    (nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
    (mclip = yes) Use median in sigma clipping algorithms?
    (lsigma = 3.) Lower sigma clipping factor
    (hsigma = 3.) Upper sigma clipping factor
    (rdnoise = 0.) ccdclip: CCD readout noise (electrons)
    (gain = 1.) ccdclip: CCD gain (electrons / DN)
    (snoise = 0.) ccdclip: Sensitivity noise (fraction)
    (pclip = -0.5) pclip: Percentile clipping parameter
    (blank = 1.) Value if there are no pixels
    (mode = ql)

    macrolide

    (file_in = "imagem_4.log") PCCD output file (.log) or list (@*)
    (file_out = "imagem_4.out") Output file (.out)
    (minimun = "full") minimum? (first, full)
    (flist = "")
    (flistvar = "")
    (mode = "ql")


    order

    file_in = teste.coo Coordinate file from DAOFIND
    file_out = test Output file
    (shiftx = 33.5) x-axis distance of the pair (in pixels); this is where the position difference between the beams goes
    (shifty = 4.5) y-axis distance of the pair (in pixels)
    (deltax = 2) Error permitted in x-axis distance
    (deltay = 2) Error permitted in y-axis distance
    (deltamag = 1) Error permitted in magnitude
    (side = "left") Position of top object (right|left)
    (control = no) Include only the first pair?
    (flist1 =)
    (mode = ql)


    PCCD to l / 2 [deprecated]

    filename = input file (.dat)
    (nstars = 1) number of stars (maximum 2000)
    (nhw = 16) number of wave-plate positions (maximum 16)
    (nap = 10) number of apertures (maximum 10) ! usually 10
    (calc = "c") analyzer: calcite (c) / polaroid (p) ! leave c for calcite
    (readnoise = 5.) CCD readnoise (adu) ! use the appropriate value for the CCD used, see /home1/claudiavr/iraf/lembretes
    (gain = 0.82) CCD gain (e/adu) ! idem
    (deltatheta = 0.) correction in polarization angle (degrees) ! set to zero
    (zero = 0.) zero position of the l/4 plate ! set to zero
    (fileout = "test") output file (.log) ! ".log" will be appended to this name
    (fileexe = "/home1/claudiavr/pccd/pccd.exe") PCCD executable file (.exe)
    (mode = "ql")

    PCCD to l / 4 [deprecated]

    filename = input file (.dat)
    (nstars = 1) number of stars (maximum 2000)
    (nhw = 16) number of wave-plate positions (maximum 16)
    (nap = 10) number of apertures (maximum 10) ! usually 10
    (calc = "c") analyzer: calcite (c) / polaroid (p) ! leave c for calcite
    (readnoise = 5.) CCD readnoise (adu) ! use the appropriate value for the CCD used, see /home1/claudiavr/iraf/lembretes
    (gain = 0.82) CCD gain (e/adu) ! idem
    (deltatheta = 0.) correction in polarization angle (degrees) ! set to zero
    (zero = 0.) zero position of the l/4 plate ! set to zero
    (fileout = "test") output file (.log) ! ".log" will be appended to this name
    (fileexe = "/home1/claudiavr/pccd/vccd.exe") PCCD executable file (.exe)
    (mode = "ql")

    PCCDGEN MODE l / 2

    filename = "dat.001" input file (.dat) # put the name of the dat file that you created
    (nstars = 1) number of stars (max. 2000)
    (wavetype = "half") wave-plate used? (half,quarter,other) # keep this option
    (retar = 180.) retardance of waveplate (degrees) # keep this value
    (nhw = 16) total number of wave-plate positions in input f # it may be necessary to modify this number
    (pospars = "") wave-plate positions used to calculus? :e
    (nap = 10) number of apertures (max. 10)
    (calc = "c") analyzer: calcite (c) / polaroid (p)
    (readnoise = 5.) CCD readnoise (adu) # use the appropriate values for the CCD used
    (gain = 0.82) CCD gain (e/adu) # ditto above
    (deltatheta = 0.) correction in polarization angle (degrees) # always use 0 for standards
    (zero = 50.) zero of waveplate # this number is not used in the reduction of data obtained with a half-wave plate
    (fileout = "teste.log") output file (.log) # one option is to use the name of the star
    (fileexe = "/home/claudiavr/iraf/pccdpack/PCCD/pccd4000gen08.exe" **) PCCD executable # path in hydra
    (new_module = yes) Did you use the new polarimetric module (after
    (norm = yes) include normalization?
    (flist = "pccdgen8522ij")
    (line1 = "")
    (line2 = "")
    (mode = "al")
    ** Check that this is the latest version.

    PCCDGEN MODE l / 4

    filename = "dat.001"       input file (.dat)
    (nstars = 1)              number of stars (max. 2000)
    (wavetype = "quarter")      wave-plate used ? (half,quarter,other,v0other)
    (retar = 90.)            retardance of waveplate (degrees)
    (nhw = 16)             number of total wave-plate positions in input f
    (pospars = "")             wave-plate positions used to calculus? :e
    (nap = 10)             number of apertures (max. 10)
    (calc = "c")            analyser: calcite (c) / polaroid (p)
    (readnoise = 7.18)           CCD readnoise (adu)
    (ganho = 0.62)           CCD gain (e/adu)
    (deltatheta = 0.)             correction in polarization angle (degrees)
    (zero = 73.)            Zero of waveplate
    (fileout = "teste.log")    output file (.log)
    (fileexe = "/home/claudia/iraf/pccdpack/pccd/pccd4000gen11.e") pccd executable **
    (new_module = yes)            Did you use the new polarimetric module (after
    (norm = yes)            include normalization?
    (flist = "pccdgen4875ha")
    (line1 = "")            
    (line2 = "")            
    (mode = "ql")          

    ** Check that this is the latest version.

    phot


    image = "@arq" The input image(s)
    skyfile = "" The sky input file(s)
    (coords = "hd110984.ord") The input coordinate file(s) (default: image.c
    (output = "default") The output photometry file(s) (default: image.m
    (plotfiles = "") The output plots metacode file
    (datapars = "") Data dependent parameters
    (centerpars = "") Centering parameters
    (fitskypars = "") Sky fitting parameters
    (photpars = "") Photometry parameters
    (interactive = no) Interactive mode?
    (radplots = no) Plot the radial profiles in interactive mode?
    (icommands = "") Image cursor: [x y wcs] key [cmd]
    (gcommands = "") Graphics cursor: [x y wcs] key [cmd]
    (wcsin = )_.wcsin) The input coordinate system (logical, tv, physica
    (wcsout = )_.wcsout) The output coordinate system (logical, tv, physic
    (cache = )_.cache) Cache the input image pixels in memory?
    (verify = )_.verify) Verify critical parameters in non-interactive m
    (update = )_.update) Update critical parameters in non-interactive m
    (verbose = )_.verbose) Print messages in non-interactive mode?
    (graphics = )_.graphics) Graphics device
    (display = )_.display) Display device
    (mode = "ql")          


    refer - for the IAG with CCD101


    (file_sel = "hh83.sel") select input file (.sel)
    (file_txt = "hh83.txt") output file (.txt)
    (xo = 511.) x-coordinate in reference image
    (yo = 474.) y-coordinate in reference image
    (xoi = 531.) x-coordinate of reference frame in CCD
    (yoi = 638.) y-coordinate of reference frame in CCD
    (epimg = 0.88) image plate scale (arcsec/pixel)
    (epccd = 0.62) CCD plate scale (arcsec/pixel)
    (ximage = 1024.) x image size (pixels)
    (yimagem = 1024.) y image size (pixels)
    (xSides = 1024.) x CCD size (pixels)
    (yside = 1024.) y CCD size (pixels)
    (north = "left") position of north in CCD field?
    (east = "top") position of east in CCD field?
    (tilt = 0.) angle of axes with respect to the equatorial system?
    (recen = yes) recenter?
    (imgrefer = "skvhh83.fits") reference image (.imh, .fits)
    (cbox = 10.) centering box width in scale units
    (flist = "")
    (flist1 = "")
    (line = "")
    (mode = "al")

    select

    file_out = "hh83.out" macrolide input file (.out)
    file_ord = "obj1c10001.ord" order input file (.ord)
    (file_sel = "hh83.sel") output file (.sel)
    (polmin = 2.) minimum S/N
    (polinf = 0.) minimum polarization (range: 0 - 1)
    (polmax = 0.1) maximum polarization (range: 0 - 1)
    (maiors = no) select between higher sigma and stheo?
    (stheomax = 1.) maximum theoretical error?
    (thetainf = 0.) theta minimum (range: 0 - 180)
    (thetasup = 180.) theta maximum (range: 0 - 180)
    (deltatheta = 52.13) delta theta
    (coorq = 0.) Q correction
    (cooru = 0.) U correction
    (xpixmax = 1000.) x CCD size (pixels)
    (ypixmax = 1000.) y CCD size (pixels)
    (outgraph = yes) create eps from graphic output?
    (vecconst = "10000") scale for fieldplot
    (north = "left") position of north in CCD field?
    (east = "top") position of east in CCD field?
    (binpol = 0.01) binwidth for polarization histogram
    (thetafit = yes) fit theta-histogram?
    (gaussparst = "gausspars") parameters to fit the theta-histogram (:e to edit)
    (bintheta = 10.) binwidth for theta-histogram
    (thetamin = 0.) minimum for theta-histogram
    (thetamax = 180.) maximum for theta-histogram
    (starelim = "") file with stars to eliminate
    (meanvalue = yes) print mean values?
    (ivdata = no) IV data?
    (flist = "")
    (flist1 = "")
    (flist2 = "")
    (line = "")
    (line1 = "")
    (mode = "ql")


    txdump

    textfiles = "*.mag.1" Input apphot/daophot text database(s)
    fields = "image, msky, nsky, rapert, sum, area" Fields to be extracted
    expr = "yes" Boolean expression for record selection
    (headers = no) Print the field headers?
    (parameters = yes) Print the parameters if headers is yes?
    (mode = "ql") Mode of task

    vecplot - for the telescope IAG / OPD with CCD101

    (files_txt = "hh83.txt") input file with vectors (.txt)
    (file_img = "skvhh83.fits") reference image (.imh, .fits)
    (file_ps = "skvhh83") output postscript file
    (glut = "psikern") Graphics LUT table (psikern)
    (bin = 1.) scale for binning (pixels)
    (devps = "psi_port") postscript device to use
    (xorig = 1.) x-origin image is offset
    (yorig = 1.) y-origin image is offset
    (pvec = yes) plot vectors?
    (posvec = "middle") vector position respect to object?
    (range = 10000.) scale plot
    (title = "") title string?
    (escalatitle = yes) title with scale?
    (longescala = 5) scale size (%)
    (pimage = yes) overplot image?
    (niveisfull = no) zrange lower and higher levels?
    (z1 = 4850.) custom zrange lower level?
    (z2 = 6500.) custom zrange higher level?
    (mapsao = "none") saocmap file name
    (typeimg = yes) negative image?
    (typestar = no) plot star number?
    (repositions = yes) locus of star number repositioned?
    (typefont = "medium") character size
    (pext = no) plot the extinction?
    (file_ext = "") extinction image
    (newcont = "") newcont parameters (: e to edit)
    (deligi = yes) delete igi file?
    (flist0 = "")
    (flist1 = "")
    (line1 = "")
    (mode = "ql")


    xregister

    input = "@arq" Input images to be registered
    reference = "sds0001" Input reference images
    regions = [340:420,340:420] Reference image regions used for registrati
    shifts = "shifts.txt" Input/output shifts database file
    (output = "@arq") Output registered images
    (databasefmt = no) Write the shifts in database file format?
    (append = no) Open shifts database for writing in append mode
    (records = "") List of shifts database records
    (coords = "") Input coordinate files defining the initial shi
    (xlag = 0) Initial shift in x
    (ylag = 0) Initial shift in y
    (dxlag = 0) Incremental shift in x
    (dylag = 0) Incremental shift in y
    (background = "none") Background fitting function
    (border = INDEF) Width of border for background fitting
    (loreject = INDEF) Low side k-sigma rejection factor
    (hireject = INDEF) High side k-sigma rejection factor
    (apodize = 0.) Fraction of endpoints to apodize
    (filter = "none") Spatially filter the data
    (correlation = "discrete") Cross-correlation function
    (xwindow = 40) Width of correlation window in x
    (ywindow = 40) Width of correlation window in y
    (function = "centroid") Correlation peak centering function
    (xcbox = 20) X box width for centering correlation peak
    (ycbox = 20) Y box width for centering correlation peak
    (interp_type = "linear") Interpolant
    (boundary_typ = "nearest") Boundary (constant, nearest, reflect, wrap)
    (constant = 0.) Constant for constant boundary extension
    (interactive = no) Interactive mode?
    (verbose = yes) Verbose mode?
    (graphics = "stdgraph") The standard graphics device
    (display = "stdimage") The standard image display device
    (gcommands = "") The graphics cursor
    (icommands = "") The image display cursor
    (mode = "ql")          

     zerocombine

    input = @arch List of zero level images to combine
    (output = biasave) Output zero level name
    (combine = average) Type of combine operation
    (reject = avsigclip) Type of rejection
    (ccdtype =) CCD image type to combine
    (process = no) Process images before combining?
    (delete = no) Delete input images after combining?
    (clobber = no) Clobber existing output image?
    (scale = none) Image scaling
    (statsec =) Image section for computing statistics
    (nlow = 0) minmax: Number of low pixels to reject
    (nhigh = 1) minmax: Number of high pixels to reject
    (nkeep = 1) Minimum to keep (pos) or maximum to reject (neg)
    (mclip = yes) Use median in sigma clipping algorithms?
    (lsigma = 3.) Lower sigma clipping factor
    (hsigma = 3.) Upper sigma clipping factor
    (rdnoise = 0.) ccdclip: CCD readout noise (electrons)
    (gain = 1.) ccdclip: CCD gain (electrons / DN)
    (snoise = 0.) ccdclip: Sensitivity noise (fraction)
    (pclip = -0.5) pclip: Percentile clipping parameter
    (blank = 0.) Value if there are no pixels
    (mode = ql)

    zerofind

    indata = "dat.073" Input file
    (outmac = "ZeroQI") macrolide output file
    (emin = "first") macrolide: minimum? (first|full)
    (platebeg = 0) Initial zero (degrees)
    (plateend = 90) Final zero (degrees)
    (step = 1) Step in zero (degrees)
    (mode = "ql")

    ==> Do not forget to run pccdgen



    Acknowledgments

    I would like to thank people who have contributed to this cookbook:  Cristiane Godoy Targon,  Karleyne M. G Silva, Silvia Regina dos Santos,  and Victor de Souza Magalhães. I am also indebted to Ryan Campbell for the first English version of this guide.