Jurg Ott / 15 July 2018
Rockefeller University, New York

Statistical Genetics Utility programs


2BY2 program -- Fisher's exact test in 2x2 table, 1sided and 2sided p-values
3BY3 program -- Haplotype frequencies and D' for a pair of SNPs
BINOM program  -- Compute confidence interval on proportions
CHIPROB program -- Compute p-values for chi-square with n df
CONTING program -- Compute chi-square and p-value for contingency tables
HET program -- Heterozygosity estimates for codominant marker systems
HIST program -- Quick screen histogram from a set of observations
HWE program -- Test Hardy-Weinberg equilibrium (HWE)
HWE2 program -- Exact test of HWE for biallelic markers (SNPs)
IBD program -- Compute IBD probabilities for 2-locus disease model
LODS program -- Equivalent numbers of recombinants from lod score curve, and lods for numbers of recombinants

MAPFUN program -- Apply map functions to recombination fractions
NOCOM program (separate document) -- Analyze mixture of normals possibly with power transformation
NORINV program -- Find normal deviate for given tail (error) probability
NORPROB program -- Find tail (error) probability from given normal deviate
ODDSRATIO program -- homogeneity of OR in 2 x 2 tables
PAR program -- Compute Population Attributable Risk
PVALUES program -- Combine independent p-values by Fisher and Z method
RELRISK program -- replaced by ODDSRATIO program
SEXDIF program -- Convert male-female recombination fractions to map distances
TPROB program -- Find p-value associated with given t statistic
TREND program -- Compute chi-square for trend in 2 x 3 table of genotypes


Updated -- The conting program now has an added feature: For 2 x 3 contingency tables, the table entries are interpreted as numbers of genotypes, with rows = cases/controls and columns referring to three genotypes, 1/1, 1/2, 2/2, in that order. In addition to computing the usual chi-square for this 2 x 3 table, the program constructs the associated 2 x 2 table of alleles and computes its chi-square and permutation-based p-value. The latter is immune to population admixture (stratification, substructure).

In the TREND program, equation (15.1) has been replaced by (15.6), Armitage et al (2002), to better allow for small numbers of observations.

Windows: The UTIL package may be obtained as a set of small programs. Simply copy the UtilWin.zip file into a specific directory (folder), for example, c:\util, and unzip it. Then, double-click on UP (up.exe file). Note that all extracted files must be in the same folder, or be accessible via the search path.

Linux: Compiled programs and source code are available here. The source code is the same as in the Windows distribution. In Ubuntu Linux, the downloaded program files should be made executable with the chmod command and copied, for example, into the /usr/local/bin folder.

Most programs are interactive. To use interactive programs in batch mode, an input file may be created (for example, with the Windows Notepad program) containing exactly the information asked by the program during interactive operation; the program is then run with redirection of standard input and output. An example is given for the HET program, below.

The UTIL package is a set of programs useful in statistical genetics. Information about most of the methods underlying these programs may be found in Ott (1999). All but the IBD and NOCOM programs are written in Free Pascal in a manner as close to standard Pascal as possible. The Free Pascal compiler is available at no charge and is largely compatible with Turbo Pascal but does not suffer from Turbo Pascal's many restrictions. It is available for Windows, Linux, etc. Programs may be used as they come and there is generally no need to recompile them. Linux users will find additional information in a separate document.


Generally, a statistical test of a hypothesis is carried out at a certain significance level, α1, where α1 is the probability of a false positive test result. When m independent tests are carried out, each at the α1 level, the probability that at least one of them leads to a false positive result is given by αm = 1 – (1 – α1)^m, where αm > α1 may be regarded as the overall significance level. To keep αm at some predetermined value, for example, 0.05, one must carry out each individual test at the α1 level, which is given by α1 = 1 – (1 – αm)^(1/m) ≈ αm/m (Bonferroni correction). However, when tests are non-independent, this correction is known to be quite conservative. The most accurate and unbiased way of computing significance levels is with permutation tests.
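
The UTIL programs themselves are written in Free Pascal; as an illustration only, the two formulas above can be sketched in Python (function names are ours):

```python
def overall_level(a1, m):
    """alpha_m = 1 - (1 - alpha_1)^m: probability of at least one
    false positive among m independent tests, each at level alpha_1."""
    return 1.0 - (1.0 - a1) ** m

def per_test_level(am, m):
    """alpha_1 = 1 - (1 - alpha_m)^(1/m) (Sidak form), which the
    Bonferroni correction approximates by alpha_m / m."""
    return 1.0 - (1.0 - am) ** (1.0 / m)
```

For αm = 0.05 and m = 10 tests this gives α1 = 0.0051, close to the Bonferroni value 0.05/10 = 0.005.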

2BY2 program

Fisher's exact test (based on the hypergeometric distribution) is carried out for 2 × 2 tables. The maximum number of observations is set equal to 100,000. In addition to p-values, the program computes the disequilibrium parameter, both in absolute value and in % of its maximum, and the odds ratio (approx. relative risk; see also ODDSRATIO program).

 4  16
 1  21
------
 5  37

The empirical significance level (p-value) is computed in a one-sided and two-sided manner (see p. 136 in Armitage et al, 2002) and is given by the total probability of all those tables (with fixed marginal frequencies) whose individual probabilities are as small as or smaller than the probability of the observed table. Example data from page 135 in Armitage et al (2002) are shown in the table above. Values to be entered in the 2BY2 program would be 4 16 1 21. The resulting 1-sided p-value is 0.1435, the corresponding mid-p value is 0.0809 (counting only half of the probability of the observed data), and the 2-sided p-value is 0.1745. In comparison, the likelihood ratio chi-square furnishes a (2-sided) p-value of 0.1131.
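
Following the definition just given (hypergeometric probabilities with fixed margins), the calculation can be sketched in Python; this is an illustration, not the 2BY2 program itself:

```python
from math import comb

def fisher_2x2(a, b, c, d):
    """One-sided, mid-p and two-sided Fisher p-values for the 2x2 table
    [[a, b], [c, d]], per the definition in the text."""
    r1, c1, n = a + b, a + c, a + b + c + d
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    denom = comb(n, c1)
    # hypergeometric probability of each table with the same margins
    probs = {k: comb(r1, k) * comb(n - r1, c1 - k) / denom
             for k in range(lo, hi + 1)}
    p_obs = probs[a]
    # upper tail; the program picks the tail appropriate to the data
    one_sided = sum(p for k, p in probs.items() if k >= a)
    mid_p = one_sided - 0.5 * p_obs
    two_sided = sum(p for p in probs.values() if p <= p_obs + 1e-12)
    return one_sided, mid_p, two_sided
```

Calling fisher_2x2(4, 16, 1, 21) reproduces the three p-values quoted above.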

In addition to p-values, the program also computes the odds ratio (approximate relative risk), its 95% confidence interval, and linkage disequilibrium parameters D and D', where D' = D/Dmax (for D > 0) or D/Dmin (for D < 0). As outlined below, D and D' are not relevant when the data are cases and controls.

Such 2 × 2 tables occur, for example, in case-control studies when some individuals are exposed to a risk factor while others are not. In particular, the risk factor may be a specific marker genotype and the non-risk category would then be made up of all individuals not carrying the risk genotype. For case-control designs, D and D' are not defined. These disequilibrium parameters are only reasonable when the 2 × 2 table contains numbers of alleles at each of two loci. For example, when rows 1 and 2 refer to alleles #1 and #2, respectively, and analogously for columns, then for the numbers in the above table the program furnishes D = –0.1246 and D' = –1. For ease of interpretation, D' takes the sign of D although, by definition, D' ranges from 0 to 1.

In a case-control association study, data may be broken down into subgroups depending on the values of non-genetic risk factors so as to make individuals in a given subgroup more homogeneous (Ott 2004). Results from different subgroups may then be combined via their p-values (see PVALUES program). For such combinations of results obtained from one-sided tests, one should preferably use mid-p values rather than p-values as the latter will on average be larger than 0.5 when the null hypothesis is true (Armitage et al 2002, p. 137). However, for a single table, the p-values represent the accurate empirical significance levels.

3BY3 program

For a 3 × 3 table of genotypes at two SNPs, this program estimates haplotype frequencies and determines D'.
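
The document does not spell out the estimation method; a standard approach is the EM algorithm, sketched here in Python under the assumption that only the double heterozygote is phase-ambiguous (the 3BY3 program may differ in its details):

```python
def haplotype_em(g, iters=200):
    """g[i][j] = count of individuals with i copies of allele 2 at SNP1
    and j copies at SNP2 (i, j = 0, 1, 2). Returns haplotype frequencies
    h[(x, y)], x = allele at SNP1, y = allele at SNP2."""
    n_hap = 2.0 * sum(sum(row) for row in g)
    h = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
    alleles = {0: (0, 0), 1: (0, 1), 2: (1, 1)}  # alleles carried per genotype
    for _ in range(iters):
        c = dict.fromkeys(h, 0.0)
        for i in range(3):
            for j in range(3):
                if g[i][j] == 0:
                    continue
                if (i, j) == (1, 1):
                    # double heterozygote: phase 00/11 vs 01/10 (E-step weights)
                    w1, w2 = h[0, 0] * h[1, 1], h[0, 1] * h[1, 0]
                    t = (w1 + w2) or 1.0
                    for hap, w in (((0, 0), w1), ((1, 1), w1),
                                   ((0, 1), w2), ((1, 0), w2)):
                        c[hap] += g[i][j] * w / t
                else:
                    # unambiguous: the two haplotypes are determined
                    a, b = alleles[i], alleles[j]
                    c[a[0], b[0]] += g[i][j]
                    c[a[1], b[1]] += g[i][j]
        h = {k: v / n_hap for k, v in c.items()}  # M-step
    return h

def d_prime(h):
    """D' from haplotype frequencies, with D' = D/Dmax or D/Dmin."""
    pA = h[1, 0] + h[1, 1]
    pB = h[0, 1] + h[1, 1]
    D = h[1, 1] - pA * pB
    dmax = (min(pA * (1 - pB), (1 - pA) * pB) if D >= 0
            else min(pA * pB, (1 - pA) * (1 - pB)))
    return D / dmax if dmax > 0 else 0.0
```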

BINOM program

The BINOM program is interactive and 1) computes class probabilities for binomial data, or 2) confidence limits for p on the basis of an observed proportion, k/n, where the number k of "successes" is binomially distributed as Bin(n, p).

Multinomial data may be handled by considering one class at a time and lumping together the remaining classes, thereby reducing the problem to a binomial one (Murphy 1982, p. 296). The overall confidence coefficient for confidence intervals obtained in this manner will then be larger than the confidence coefficient specified for each class (see section on multiple tests). A well-known example of a multinomial problem is that of estimating gene frequencies for more than two alleles.

1) CLASS PROBABILITIES: BINOM will ask for values of n (number of observations) and p (probability of 'success' for an observation). One may enter either a single value of k = observed number of successes, or two values of k, k1 and k2. The program will then calculate the binomial probability, P(k), or the sum of these probabilities for all k-values between k1 and k2 (including k1 and k2), assuming that k follows a binomial distribution, Bin(n, p). The number, n, currently has an upper limit of 100,000.

Among the many uses of the BINOM program, consider the following application. Assume that under the null hypothesis, each observation (opportunity for recombination) has probability 0.5 of resulting in a 'success' (i.e., a recombination). In n = 20 observations, only 4 (rather than the 10 expected) were recombinants. Is the observed proportion of recombinants, t = 4/20 = 0.20, significantly smaller than the expected proportion of 50%? The empirical significance level is defined as the probability, under the null hypothesis, of obtaining an outcome as extreme or more extreme than the one observed. Using BINOM with n = 20 and p = 0.5, one finds that the k-values from 0 through 4 have a combined probability of occurrence of 0.0059. The result is, thus, significant at the 0.01 level. Note that in linkage analysis, more stringent significance levels are generally required, e.g., 0.0001.
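
The calculation just described takes only a few lines; a Python sketch (BINOM itself is a Free Pascal program):

```python
from math import comb

def binom_tail(n, p, k1, k2):
    """Sum of binomial probabilities P(k) for k = k1..k2, with k ~ Bin(n, p)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(k1, k2 + 1))
```

binom_tail(20, 0.5, 0, 4) gives 0.0059, the combined probability quoted above.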

2) CONFIDENCE INTERVALS: The user will be asked to specify the lower (EL) and upper (EU) error probabilities associated with the lower and upper endpoint of the confidence interval. Usually, EL and EU will be equal (eg, 0.025 each for a 95% confidence coefficient). The lower endpoint, pL , is determined numerically such that the sum of the binomial probabilities, P(i, pL, n), i = k..n, is equal to EL; pU is found analogously such that EU is equal to the sum of the P(i, pU, n), i = 0..k. The numerical method used for solving these equations is that of halving the interval. Statistics based on Pfanzagl (1966).
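
A Python sketch of this interval-halving scheme (the function names are ours, and BINOM's implementation may differ in details):

```python
from math import comb

def binom_upper_tail(k, n, p):
    """Sum of P(i; n, p) for i = k..n."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

def confidence_interval(k, n, el=0.025, eu=0.025, iters=60):
    """Exact confidence limits for p given k successes in n trials,
    solved by halving the interval as described above."""
    # lower endpoint pL: upper tail (i = k..n) at pL equals el
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if binom_upper_tail(k, n, mid) < el:
            lo = mid
        else:
            hi = mid
    p_low = 0.5 * (lo + hi)
    # upper endpoint pU: lower tail (i = 0..k) at pU equals eu
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if 1.0 - binom_upper_tail(k + 1, n, mid) > eu:
            lo = mid
        else:
            hi = mid
    return p_low, 0.5 * (lo + hi)
```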

The results will be accurate to the number of decimal places chosen for output. Thus, be aware that a large number of decimal places and/or a large number, n, may require long computing times on older machines. For numerical reasons, the endpoint of confidence intervals cannot be smaller than 0.0001 or larger than 0.9999.

CHIPROB program

This program computes the p-value associated with a given value of chi-square and its number of degrees of freedom (df). The method is based on formulas 26.4.4 and 26.4.21 in Abramowitz and Stegun (1968) (exact calculation of p-values). For χ2 > 169 and df > 1, an approximation is used, corresponding to formula 26.4.14 in Abramowitz and Stegun (1968).
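
The closed forms behind such calculations can be sketched as follows (even df: a finite sum; odd df: erfc plus a recurrence). This is an illustration of the mathematics, not CHIPROB's actual code:

```python
from math import erfc, exp, gamma, sqrt

def chi2_pvalue(x, df):
    """Upper-tail p-value for chi-square x with df degrees of freedom.
    Even df: Q = exp(-x/2) * sum_{i < df/2} (x/2)^i / i!.
    Odd df: start from Q(x; 1) = erfc(sqrt(x/2)) and use the recurrence
    Q(x; v+2) = Q(x; v) + (x/2)^(v/2) * exp(-x/2) / Gamma(v/2 + 1)."""
    if df % 2 == 0:
        term, s = 1.0, 1.0
        for i in range(1, df // 2):
            term *= (x / 2) / i
            s += term
        return exp(-x / 2) * s
    q, v = erfc(sqrt(x / 2)), 1
    while v < df:
        q += (x / 2) ** (v / 2) * exp(-x / 2) / gamma(v / 2 + 1)
        v += 2
    return q
```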

CONTING program

For contingency tables with given numbers of rows and columns, this program interactively calculates likelihood ratio or Pearson chi-squares and associated p-values. For 2 x 3 tables, it interprets the two rows as corresponding to control and case individuals, with the three columns referring to SNP genotypes 11, 12, and 22, in that order. In addition to computing the regular chi-square and nominal p-value for this table of genotypes, CONTING will also find the associated 2 x 2 table of alleles and compute its chi-square and the corresponding permutation-based p-value, which is immune to population stratification. Note: The program requires a positive integer seed for its random number generator, which it will read from the seed.txt file if present; if not, it will create a seed based on the system clock. The number of permutations used is 100,000, including the observed data, so that the smallest possible significance level is 1/100,000 = 0.00001.
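
The genotype-to-allele collapse and the permutation p-value can be sketched in Python (here with far fewer permutations than CONTING's 100,000, and without claiming to match its internals):

```python
import random

def allele_chi2(genotypes):
    """Pearson chi-square for the 2x2 allele table derived from a 2x3
    genotype table (rows = cases/controls; columns = genotypes 11, 12, 22)."""
    table = [[2 * r[0] + r[1], r[1] + 2 * r[2]] for r in genotypes]
    n = sum(map(sum, table))
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            e = sum(table[i]) * (table[0][j] + table[1][j]) / n
            chi2 += (table[i][j] - e) ** 2 / e
    return chi2

def permutation_p(genotypes, nperm=500, seed=1):
    """Shuffle case/control labels over individuals (each keeping its
    genotype) and count permutations whose allele chi-square reaches the
    observed one; the observed data count as one permutation."""
    rng = random.Random(seed)
    obs = allele_chi2(genotypes)
    pool = [g for row in genotypes for g in (0, 1, 2) for _ in range(row[g])]
    n_cases = sum(genotypes[0])
    hits = 1
    for _ in range(nperm - 1):
        rng.shuffle(pool)
        cases = [pool[:n_cases].count(g) for g in range(3)]
        ctrls = [pool[n_cases:].count(g) for g in range(3)]
        if allele_chi2([cases, ctrls]) >= obs:
            hits += 1
    return hits / nperm
```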

HET program

For codominant marker systems, this program calculates maximum likelihood and unbiased estimates of heterozygosity (see Weir 1996). Also, standard errors are computed (based on Nei and Roychoudhury 1974) and used to provide confidence intervals for true heterozygosity.

HIST program

The HIST program is interactive and almost self-explanatory. It produces on the screen a simple histogram for a number of quantitative observations. The number of classes is 20. Observed values x may be subjected to a power transformation,
y = (x^r – 1)/r, where the user can choose the exponent (power) r. Note that a limiting value of r = 0 corresponds to y = ln(x), and r = 1 (default value) is equivalent to y = x. A sample input file is provided. It was obtained by generating random normal deviates with a standard deviation of 2, where the first 150 observations have mean 10 and the next 50 observations have mean 15.

HWE program

This interactive program tests deviations from Hardy-Weinberg equilibrium (HWE) in codominant systems. It requests the number of alleles and observed genotypes. As noted in program output, genotypes must be furnished in the order, 1/1, 1/2, 2/2, 1/3, 2/3, 3/3, 1/4, and so on. The program compares observed numbers of genotypes with the numbers of genotypes expected under HWE, where the latter are computed on the basis of allele frequencies estimated from the genotype frequencies (null hypothesis, H0). For observed numbers, the relative cell frequencies are the estimates of the genotype probabilities (alternative hypothesis, H1). For the comparison between observed and expected numbers of genotypes, likelihood ratio chi-square is computed. If n is the number of alleles, the number of parameters estimated under H0 and H1 is n – 1 and n(n + 1)/2 – 1, respectively. Thus, the number of df for the chi-square test is n(n – 1)/2. Results are unreliable with small numbers of observations. Then, more elaborate approaches are advocated (Guo and Thompson 1992, Zaykin et al. 1995). The program also lists the log likelihoods (natural logarithms) under H0 (HWE) and H1 (no HWE).
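
For the simplest case of a SNP (n = 2 alleles, hence df = n(n – 1)/2 = 1), the likelihood ratio chi-square can be sketched as follows; this is an illustration of the test, not the HWE program's code:

```python
from math import log

def hwe_lr_chi2(n11, n12, n22):
    """Likelihood ratio chi-square for HWE at a biallelic locus:
    G^2 = 2 * sum obs * ln(obs/exp), 1 df."""
    n = n11 + n12 + n22
    p = (2 * n11 + n12) / (2 * n)          # estimated frequency of allele 1
    expected = (n * p * p,                  # 1/1
                2 * n * p * (1 - p),        # 1/2
                n * (1 - p) ** 2)           # 2/2
    chi2 = 0.0
    for obs, exp_ in zip((n11, n12, n22), expected):
        if obs > 0:
            chi2 += 2 * obs * log(obs / exp_)
    return chi2
```

Counts 25/50/25 are in perfect HWE (chi-square 0); counts 30/40/30 show a heterozygote deficit that is significant at the 0.05 level.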

HWE2 program

This program carries out exact (probability) tests of HWE, using a formula on page 99 in Weir (1996). Consider three genotypes, AA, AB, and BB at a SNP, and corresponding observed numbers of genotypes, nAA, nAB, and nBB. For a constant total number of observed genotypes and number of A alleles, the program evaluates all possible numbers of genotypes consistent with these constant numbers of genotypes and A alleles, and computes the probability of occurrence of each such set of genotypes. The empirical significance level p is then obtained as the sum of all those probabilities of occurrence that are smaller than or equal to the probability of occurrence of the observed set of genotypes. The current program version works with a total number of observations of up to 1,000,000.
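
Such an exact test can be sketched in Python with the usual conditional genotype probability, P(nAB | n, nA) proportional to [n!/(nAA! nAB! nBB!)] 2^nAB (we assume this matches the formula Weir gives; HWE2's own code may differ):

```python
from math import factorial

def hwe_exact(naa, nab, nbb):
    """Exact HWE test for a SNP: enumerate all genotype sets with the same
    total n and allele count nA, then sum the probabilities that are as
    small as or smaller than that of the observed set."""
    n = naa + nab + nbb
    na = 2 * naa + nab                 # number of A alleles
    nb = 2 * n - na
    def prob(k):                       # P(nAB = k | n, nA)
        paa, pbb = (na - k) // 2, (nb - k) // 2
        return (factorial(n) // (factorial(paa) * factorial(k) * factorial(pbb))
                * 2 ** k * factorial(na) * factorial(nb) / factorial(2 * n))
    # nAB must have the parity of nA and cannot exceed min(nA, nB)
    probs = {k: prob(k) for k in range(na % 2, min(na, nb) + 1, 2)}
    p_obs = probs[nab]
    return sum(p for p in probs.values() if p <= p_obs + 1e-12)
```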

IBD program (developed by Harald Göring)

1. Description of program

The program computes expected frequencies of ibd sharing from both parents given that (exactly) two offspring are affected with a 2-locus trait. The two trait loci are unlinked with each other. Locus parameters are the penetrances for the 9 two-locus genotype pairs and population allele frequencies at each locus.

Each locus is assumed to be in Hardy-Weinberg equilibrium. In particular, absence of assortative mating and full viability/fertility are assumed. All meioses are taken to be informative with regard to ibd sharing at both disease loci (i.e., 100% informative).

The program also computes deviations from independence between the ibd sharing at the two loci, where the expected values under independence are computed from the marginals for the two loci. In addition, trait prevalence based on the entered parameter values is computed.

2. Algorithm

The program goes through all possible parental genotypes at both loci. (The order of alleles is taken into account for convenience of programming; i.e. genotype ab is different from ba.) For each parental genotype, all possible transmission possibilities to two offspring are considered and ibd sharing at each locus with regard to both parents is computed. The ibd sharing is weighted by the (unconditional) probability of the joint parental 2-locus genotypes and the probability that both sibs are affected. The weighted ibd sharing is added to the appropriate cell in the ibd sharing table. Finally, the frequencies of 2-locus ibd sharing for all cells in the 2-locus ibd sharing table are divided by the sum of the weights (see above) so that the frequencies in the table sum to 1.

                  Locus 2
              bb    bB/Bb    BB
Locus 1  aa   f1     f2      f3
      aA/Aa   f4     f5      f6
         AA   f7     f8      f9

3. Usage

The program prompts the user for keyboard input of the 9 penetrances, f1 - f9, at the two loci jointly. Input order must be as shown in the table above (a, b = wildtype alleles; A, B = disease alleles). Also, the allele frequencies of the trait-predisposing alleles at both trait loci need to be entered. Output appears on the screen.

4. Example

                Locus 2
               b        B
Locus 1   b  0.2024   0.2262
          B  0.2262   0.3452
   marginal  0.4286   0.5714

Consider the following theoretical example (Frankel & Schork 1996): f3 = f7 = 1, f5 = 0.5, all other penetrances being equal to 0, with all allele frequencies being equal to 0.50. The program will compute IBD sharing for a 3 × 3 genotype table and a 2 × 2 allele table. Results for the 2 × 2 allele table are shown above. While under this model there is no marginal effect in case-control association studies, in affected sib-pair linkage analysis there clearly is increased IBD sharing, although the increase over the null value of 0.5 is small.

LODS program

This program calculates equivalent observations of recombinants and nonrecombinants, as introduced by J.H. Edwards (for details, see Ott 1999). For a given maximum lod score Z and recombination fraction t at which Z occurs, the program assumes phase-known data and computes the numbers k of recombinants and total number n of meioses leading to Z and t, where t = k/n. Analogous calculations are carried out for any two pairs of values (Z, t) on the lod score curve.
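
Assuming the phase-known binomial lod, Z(t) = k log10(2t) + (n – k) log10(2(1 – t)), the computation reduces to one division; a Python sketch:

```python
from math import log10

def equivalent_counts(z, t):
    """Equivalent numbers of recombinants (k) and meioses (n) for a
    phase-known maximum lod score z occurring at t = k/n."""
    # lod contributed per meiosis when the proportion of recombinants is t
    unit = t * log10(2 * t) + (1 - t) * log10(2 * (1 - t))
    n = z / unit
    return t * n, n

def lod(k, n, t):
    """Phase-known lod score for k recombinants among n meioses."""
    return k * log10(2 * t) + (n - k) * log10(2 * (1 - t))
```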

MAPFUN program

For seven map functions, MAPFUN will calculate the map distance for given recombination fraction or vice versa. The program is interactive and self-explanatory. Recombination fractions (theta values) will have to be given as decimal fractions (not in %), map distances in Morgans (not cM).

As an example, the M option expects input of a recombination fraction and will compute the corresponding map distance. With the option MS (S standing for sum), MAPFUN will calculate a running addition of the map distances and corresponding recombination fractions for each of the map functions.

The map functions used are Haldane, Kosambi, Carter-Falconer, Rao, Felsenstein, Sturt, and Binomial. The latter assumes a maximum of N binomially distributed crossovers.

To convert map distances from one metric into another (e.g., from Haldane into Kosambi cM), one first converts the map distances in each interval between adjacent loci into recombination fractions and then converts the recombination fraction in each interval into the desired type of map distance (Keats, Ott, and Conneally 1989). Clearly, with today's dense marker maps, there is no longer much need for mapping functions.
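
As an illustration of this conversion route, here are Python sketches of two of the seven map functions (Haldane and Kosambi) and a Haldane-to-Kosambi conversion via the recombination fraction; MAPFUN itself covers all seven:

```python
from math import atanh, exp, log, tanh

def haldane_m(theta):
    """Haldane map distance (Morgans) for recombination fraction theta."""
    return -0.5 * log(1 - 2 * theta)

def haldane_theta(x):
    """Inverse Haldane: recombination fraction for map distance x."""
    return 0.5 * (1 - exp(-2 * x))

def kosambi_m(theta):
    """Kosambi map distance: x = (1/4) ln[(1+2t)/(1-2t)] = atanh(2t)/2."""
    return 0.5 * atanh(2 * theta)

def kosambi_theta(x):
    return 0.5 * tanh(2 * x)

def haldane_to_kosambi(x):
    """Convert a Haldane map distance into Kosambi via theta."""
    return kosambi_m(haldane_theta(x))
```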

NORINV program

This program computes the inverse of the normal distribution function, that is, it provides the normal deviate, z, for a given tail probability P or Q of the normal distribution. It is based on formula 26.2.14 in Abramowitz and Stegun (1968, page 932). Under option 1 (upper tail probability, Q), the program reports the corresponding normal deviate, z, and the density of the normal curve at z.

NORPROB program

This program computes the upper tail probability, Q, associated with a normal deviate, z. It is based on several formulas in Abramowitz and Stegun (1968); formulas 26.2.12 and 26.2.14, tables 26.1 and 26.2.
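
Both directions (NORPROB and NORINV) can be sketched with standard-library functions; the programs themselves use the Abramowitz and Stegun approximations cited above:

```python
from math import erfc, sqrt
from statistics import NormalDist

def upper_tail(z):
    """Q(z): upper-tail probability of the standard normal (as in NORPROB)."""
    return 0.5 * erfc(z / sqrt(2))

def normal_deviate(q):
    """Normal deviate z with upper-tail probability q (as in NORINV)."""
    return NormalDist().inv_cdf(1 - q)
```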


ODDSRATIO program

This program replaces an earlier program called RELRISK. It combines the log odds ratio (OR) over 2 x 2 contingency tables by the Woolf (1955) method (see Armitage et al 2002). It focuses on two factors, each of which is either present or absent. One of these may be a risk factor and the other a response variable. For example, the layout of each table may be as follows:

                                           Risk factor
             Cases Controls               present absent
---------------------------       ----------------------
Risk factor    a      b      or   Cases      a       b
not at risk    c      d           Controls   c       d
---------------------------       ----------------------

For a set of 2 × 2 tables (observed numbers, a, b, c, d in the four cells of each table), odds ratios (approx. relative risks) are calculated in each table and tests of homogeneity are carried out for the tables in a set. Observations in the different tables are considered independent. For example, in a given table, the two rows may correspond to case and control individuals, and the two columns may represent the two alleles at a given SNP marker. Different tables (same SNP) may correspond to different ethnicities, or to groups of people with different non-genetic risk factors. As recommended by Breslow and Day (1980), 1/2 is added to each table entry.

The program is interactive and prints results after each set of tables. The input data (numbers of observations in the four cells per 2 × 2 table) are expected to be entered on the keyboard, but an input file (e.g., OR.txt) may be created and the program run with the command, oddsratio <OR.txt. The program creates an output file called ORout.txt.

Example data as quoted in Table 4.3, page 145, of Breslow and Day (1980):

1 0 9 106
4 5 26 164
25 21 29 138
42 34 27 139
19 36 18 88
5 8 0 31

Resulting output produced by ODDSRATIO program:
Group  OR      95% confid.interval
 1   1.5714      0.5333 4.6304
 2   1.9412      0.5904 6.3819
All  1.7289      0.7768 3.8479
Chi-square for het: 0.0664 1 df, p = 0.796691

NOTE: The odds ratio for "All" is the weighted average of table odds ratios as given by the Woolf method; it is not the OR for the summed table entries.
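
The Woolf combination and the homogeneity chi-square (df = number of tables minus 1) can be sketched as follows; 1/2 is added to each cell as noted above, and the example tables in the test are ours, not the program's:

```python
from math import exp, log

def woolf(tables):
    """Weighted average of log odds ratios over 2x2 tables (a, b, c, d)
    and Woolf's homogeneity chi-square."""
    lnors, weights = [], []
    for a, b, c, d in tables:
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        lnors.append(log(a * d / (b * c)))
        # weight = inverse variance of the log odds ratio
        weights.append(1.0 / (1 / a + 1 / b + 1 / c + 1 / d))
    mean = sum(w * y for w, y in zip(weights, lnors)) / sum(weights)
    het = sum(w * (y - mean) ** 2 for w, y in zip(weights, lnors))
    return exp(mean), het
```

As the NOTE above says, the combined OR is the weighted average of the table ORs, not the OR of the summed entries.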

PAR program

The population attributable risk (PAR) estimates the proportion of cases in the total population that are attributable to a given risk factor. Based on equation (19.35) on page 684 in Armitage/Berry/Matthews (2002), the PAR program computes this quantity for the following input table:

                                           Risk factor
             Cases Controls               present absent
---------------------------       ----------------------
Risk factor    a      c       =   Cases      a      b
not at risk    b      d           Controls   c      d
---------------------------       ----------------------

            Risk factor
             yes    no
Cases         83     3
Controls      73    14

Note that the order of observations is crucial. The quoted equation only applies to tables with odds ratio OR > 1. The program also computes 95% confidence intervals for PAR values. However, these intervals are based on normal approximations; endpoints below 0 should be interpreted as 0, and endpoints above 1 as 1. The table above shows example data from page 679 in Armitage/Berry/Matthews (2002). The estimated PAR and associated 95% confidence interval are 0.7857 (0.2810, 0.9361).

PVALUES program

This program combines independent p-values by two different methods as follows.

1) Fisher method. When p-values from n independent investigations should be combined for a total judgment in the form of one p-value, R.A. Fisher's method (page 99 in Fisher [1970]) specifies that one should transform each value of p, which has a uniform distribution under the null hypothesis, into c = –2 × LN(p), which has a chi-square distribution with 2 df. The resulting n c-values are added together. Their sum, Σ(c), represents a chi-squared variable with 2n df. The PVALUES program carries out these transformations and applies the CHIPROB program to arrive at a total p-value. For example, assume that three independent tests (not necessarily chi-square tests) have furnished the p-values of 0.011, 0.047, and 0.35. The combined p-value is then equal to 0.008. Elston (1991) has published interesting comments about Fisher's method.
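
A Python sketch of the method; since the df, 2n, is always even, the chi-square tail has a closed form (this illustrates the calculation, not PVALUES itself):

```python
from math import exp, log

def fisher_combine(pvalues):
    """Fisher's method: -2 * sum(ln p) is chi-square with 2n df;
    its upper-tail p-value is computed directly via the finite sum
    valid for even df."""
    x = -2.0 * sum(log(p) for p in pvalues)
    term, s = 1.0, 1.0
    for i in range(1, len(pvalues)):   # df/2 = n terms
        term *= (x / 2) / i
        s += term
    return exp(-x / 2) * s
```

fisher_combine([0.011, 0.047, 0.35]) reproduces the combined p-value of 0.008 quoted above.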

The case of n = 2 (two independent p-values, p1 and p2) is particularly easy to handle as the distribution of chi-square with 4 df has an explicit form (Abramowitz and Stegun 1968): Their combined p-value is given by  p12 = p1 × p2 × [1 – LN(p1 × p2)]. As an example application, assume that a study furnished a result of p1 = 0.01. A replication study was carried out with approximately the same sample size (it should really be much larger than the original sample size, see Lernmark and Ott [1998]), resulting in a larger p-value of p2 = 0.12. Is this result valuable and does it confirm the original study? Combination of the two p-values furnishes p12 = 0.0093, which is smaller than the p-value for the original study. Generally, one may ask, "what value of p2 must not be exceeded so that a combination of p1 and p2 is (smaller than or) equal to p1?" The table on the right shows some results.
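
The closed form, and the question just posed, can be sketched in Python; max_p2 answers the question by interval halving (the function names are ours):

```python
from math import log

def combine_two(p1, p2):
    """Closed form for n = 2: p12 = p1*p2*(1 - ln(p1*p2))."""
    q = p1 * p2
    return q * (1.0 - log(q))

def max_p2(p1, iters=80):
    """Largest p2 such that the combined p-value does not exceed p1;
    combine_two is increasing in p2, so bisection applies."""
    lo, hi = p1, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if combine_two(p1, mid) <= p1:
            lo = mid
        else:
            hi = mid
    return lo
```

For p1 = 0.01 and p2 = 0.12, combine_two returns the 0.0093 quoted above.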

Technical note on boundary values: Fisher's method does not work when one of the p-values (perhaps estimated through computer simulation) is equal to zero. It may then be reasonable to replace the estimate by a value midway between 0 and the upper boundary of a suitable confidence interval. In the PVALUES program, an input value smaller than 10^-290 (in particular, p = 0) will be replaced by 10^-290, and an input value larger than 1 – 10^-290 (in particular, p = 1) will be replaced by this boundary value. If these replacements are undesirable, users must make their own adjustments.

2) Z transform method. Alternative methods have been proposed for combining p-values from n independent tests, with some of them said to be superior to Fisher's method (Whitlock 2005). Here, the unweighted Z-transform is applied, which is analogous to the probit transformation. Each p-value is transformed into a normal deviate via the normal probability transformation, that is, z = Φ^-1(p), where Φ is the normal probability distribution function. The sum, S = Σ(zi), has a normal distribution with mean 0 and variance n, so that S/√(n) again has a standard normal N(0, 1) distribution, from which the combined p-value is computed as the normal distribution function associated with S/√(n), that is, as p = Φ[S/√(n)].  For example, the three p-values above (0.011, 0.047, and 0.35) result in a combined significance level of 0.006.
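
This unweighted Z-transform (Stouffer) combination takes only a few lines; a Python sketch using the standard library:

```python
from math import sqrt
from statistics import NormalDist

def z_combine(pvalues):
    """Unweighted Z-transform combination of one-sided p-values:
    p = Phi(sum(Phi^-1(p_i)) / sqrt(n))."""
    nd = NormalDist()
    s = sum(nd.inv_cdf(p) for p in pvalues)
    return nd.cdf(s / sqrt(len(pvalues)))
```

z_combine([0.011, 0.047, 0.35]) reproduces the 0.006 quoted above.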

Boundary values: For numerical reasons, an input value smaller than the lower bound used in the Fisher method (see above) is replaced by this lower bound. An input value larger than 0.999999999999999 will be replaced by this upper boundary value. Again, however, values of p near 0 and p near 1 will not generally yield reasonable results.

Note: In my experience, this transformation furnishes p-values with highly unusual distributions; my preference is to stick to the Fisher method.

SEXDIF program

Calculates conversions among

Tm = male recombination fraction
Tf = female recombination fraction
Ta = sex-averaged recombination fraction (assuming equal number of meioses in both sexes)
Xm = male map distance
Xf = female map distance
R = Xf/Xm

For example, SEXDIF can transform a pair of values (Tm, Tf) into (Tm, R), that is, for given male and female recombination fractions, it calculates the ratio of female-to-male map distance. The program is interactive and self-explanatory. Also, for a given sex-averaged recombination fraction and R, it solves for Tm and Tf values. With dense genetic maps, there is no longer much need for mapping functions.

TPROB program

Interactive program to calculate the two-sided tail probability for a given value t of the t distribution with a given number of degrees of freedom (whole numbers only, no fractional degrees of freedom). The method is based on equations 4.2-4.4, page 96, in Johnson & Kotz (1970).

TREND program

For a 2 × 3 table with 2 rows (cases, controls or vice versa) and 3 columns (SNP genotypes A/A, A/B, B/B), this program carries out the (approximate) Cochran-Armitage test for trend in proportions across columns. It is based on equation (15.6) in Armitage/Berry/Matthews (2002), which replaced equation (15.1) to better accommodate small numbers of observations (see the update note above). For example, the following data,

   19  29  24
  497 560 269

furnish chi-square (1 df) = 7.1927, p = 0.0073, indicating evidence for a difference in trend in the two rows.
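
These numbers can be reproduced with the trend statistic in its usual form, using scores 0, 1, 2 for the three genotypes; a Python sketch (TREND itself is a Free Pascal program):

```python
def trend_chi2(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend chi-square (1 df) for a 2x3 genotype table."""
    n_cols = [c + d for c, d in zip(cases, controls)]
    big_n = sum(n_cols)                 # total sample size
    big_r = sum(cases)                  # total number of cases
    s_r = sum(s * r for s, r in zip(scores, cases))
    s_n = sum(s * n for s, n in zip(scores, n_cols))
    s2_n = sum(s * s * n for s, n in zip(scores, n_cols))
    num = big_n * (big_n * s_r - big_r * s_n) ** 2
    den = big_r * (big_n - big_r) * (big_n * s2_n - s_n ** 2)
    return num / den
```

trend_chi2([19, 29, 24], [497, 560, 269]) reproduces the chi-square of 7.1927 quoted above.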


Abramowitz and Stegun (1968) Handbook of mathematical functions. Dover, New York

Armitage P, Berry G, Matthews JNS (2002) Statistical Methods in Medical Research, 4th edition. Blackwell, Oxford

Breslow NE, Day NE (1980) Statistical methods in cancer research, vol 1--The analysis of case-control studies. International Agency for Research on Cancer, Lyon

Elston RC (1991) On Fisher's method of combining p-values. Biom J 33, 339-345

Fisher RA (1970) Statistical Methods for Research Workers, 14th edition. Hafner/MacMillan, New York

Frankel & Schork (1996) Who is afraid of epistasis? Nat Genet 14, 371-373

Guo SW, Thompson EA (1992) Performing the exact test of Hardy-Weinberg proportion for multiple alleles. Biometrics 48, 361-372

Johnson SM (1963) Generation of permutations by adjacent transposition. Math Comp 17, 282-285

Johnson NL, Kotz S (1970) Continuous univariate distributions-2. Houghton Mifflin, Boston

Keats BJB, Ott J, Conneally PM (1989) Human Gene Mapping 10 - Report of the committee on linkage and gene order. Cytogenet Cell Genet 51, 459-502

Lehmer DH (1964) The machine tools of combinatorics; chapter 1 in: Applied combinatorial mathematics, ed. Beckenbach EF. Wiley, New York

Mantel N, Haenszel W (1959) Statistical aspects of the analysis of data from retrospective studies of disease. J Natl Cancer Inst 22, 719-748

Murphy EA (1982) Biostatistics in Medicine. Johns Hopkins University Press, Baltimore

Nei M, Roychoudhury AK (1974) Sampling variances of heterozygosity and genetic distance. Genetics 76, 379-390

Nijenhuis A, Wilf HS (1978) Combinatorial algorithms for computers and calculators, second edition. New York: Academic Press

Ott J (1985) A chi-square test to distinguish allelic association from other causes of phenotypic association between two loci. Genet Epidemiol 2, 79-84

Ott J (1992) Strategies for characterizing highly polymorphic markers in human gene mapping. Am J Hum Genet 51, 283-290

Ott J (1995) Estimating crossover frequencies and testing for numerical interference with highly polymorphic markers. In Genetic Mapping and DNA Sequencing, Vol. 81 in "The IMA Volumes in Mathematics and its Applications," eds. Terry Speed and Michael S. Waterman. New York: Springer, pp 49-63

Ott J (1997) Testing for interference in human genetic maps. J Mol Med 75, 414-419

Ott J (1999) Analysis of human genetic linkage, 3rd edition. Johns Hopkins University Press, Baltimore

Ott J (2004) Association of genetic loci -- Replication or not, that is the question. Neurology 63, 955-958 (review)

Pfanzagl J (1966) Allgemeine Methodenlehre der Statistik II. Sammlung Göschen Band 747, Walter de Gruyter, Berlin

Terwilliger JD, Ott J (1994) Handbook of Human Genetic Linkage. Johns Hopkins University Press, Baltimore

Weir BS (1996) Genetic Data Analysis II. Sinauer, Sunderland

Whitlock MC (2005) Combining probability from independent tests: the weighted Z-method is superior to Fisher's approach. J Evol Biol 18, 1368-1373

Woolf B (1955) On estimating the relation between blood group and disease. Ann Hum Genet 19, 251-253

Zaykin D, Zhivotovsky L, Weir BS (1995) Exact tests for association between alleles at arbitrary numbers of loci. Genetica 96, 169-178