Comparison to Other Methods

Our system of excluding activated voxels from the empirical distribution is a variant of the 'step-down' procedure, in which the empirical distribution is recomputed at every step of Phase 3 so as to exclude voxels that have been declared activated. Since complete recomputation of the distribution is computationally expensive, Holmes et al. [1996] proposed a hybrid between this procedure and single-step permutation testing: rather than computing one adjusted probability and excluding one activated voxel on every iteration of Phase 3, their method iterates in 'jumps' in which all voxels with adjusted probabilities less than α are declared activated and excluded en masse. While this hybrid procedure is more sensitive than the plain permutation test, it still fails to identify as many voxels as the complete step-down version of the algorithm. Although, as Holmes et al. observe, it would be impractical to recompute the empirical distribution de novo on every iteration of Phase 3, we note that such recomputation can be avoided using the substitution procedure outlined above, in which replacements for deleted elements are precomputed in Phase 2 and applied in Phase 3.
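
A minimal sketch of this idea in C follows; the identifiers, the data layout, and the depth K of the per-permutation list of largest statistics are illustrative assumptions rather than the exact substitution procedure described above.

    /* Step-down Phase 3 with precomputed substitutions (sketch).
     * Assumed Phase 2 output: for each permutation, the K largest voxel
     * statistics in descending order, together with the voxels that
     * produced them, so that an excluded voxel's contribution can be
     * replaced without rescanning all N voxels. */
    #include <stdio.h>
    #include <stdlib.h>

    #define M 1000   /* permutations (assumed)              */
    #define N 5000   /* voxels (assumed)                    */
    #define K 8      /* statistics retained per permutation */

    typedef struct {
        double stat[K];   /* K largest statistics, descending   */
        int    voxel[K];  /* voxel that produced each statistic */
    } TopK;

    /* Maximum statistic for one permutation over the voxels not yet
     * declared activated. */
    static double current_max(const TopK *t, const char *excluded)
    {
        for (int k = 0; k < K; ++k)
            if (!excluded[t->voxel[k]])
                return t->stat[k];
        return 0.0;   /* all K candidates excluded: a rescan would be needed */
    }

    /* Repeatedly test the most significant remaining voxel against the
     * empirical distribution of per-permutation maxima, excluding each
     * activated voxel before the next step. */
    static void step_down(const TopK *tops, const double *observed, double alpha)
    {
        char *excluded = calloc(N, 1);

        for (;;) {
            int best = -1;          /* most significant remaining voxel */
            for (int v = 0; v < N; ++v)
                if (!excluded[v] && (best < 0 || observed[v] > observed[best]))
                    best = v;
            if (best < 0)
                break;

            int count = 0;          /* permutations meeting or exceeding it */
            for (int m = 0; m < M; ++m)
                if (current_max(&tops[m], excluded) >= observed[best])
                    ++count;
            double p = (double)(count + 1) / (M + 1);

            if (p >= alpha)
                break;              /* nothing further can be declared */
            excluded[best] = 1;     /* activated: exclude and step down */
            printf("voxel %d activated (p = %.4f)\n", best, p);
        }
        free(excluded);
    }

    int main(void)
    {
        static TopK   tops[M];
        static double observed[N];

        /* Synthetic data purely so that the sketch compiles and runs. */
        srand(1);
        for (int m = 0; m < M; ++m) {
            double s = 2.0 + (double)rand() / RAND_MAX;
            for (int k = 0; k < K; ++k) {
                tops[m].stat[k]  = s;          /* descending within the record */
                tops[m].voxel[k] = rand() % N;
                s *= 0.9;
            }
        }
        for (int v = 0; v < N; ++v)
            observed[v] = (double)rand() / RAND_MAX;
        observed[7] = 10.0;                    /* one clearly activated voxel */

        step_down(tops, observed, 0.05);
        return 0;
    }

In this sketch each exclusion in Phase 3 costs only a pass over the M precomputed records, rather than a de novo recomputation of the distribution over all voxels and time points.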

In a method developed for the analysis of PET images, Heckel et al. [1998] suggest ordering the sequence in which the randomised permutations are used so as to minimise the number of changes between successive permutations. This ordering facilitates incremental computation of the correlation values, since the only time points that need to be examined are those whose corresponding points in the ideal time series differ from the previous permutation; the sums computed for the previous permutation can then be updated accordingly, reducing the total number of memory accesses. Although finding such a minimum-change ordering for a given set of permutations is NP-hard in the general case (indeed, it reduces fairly directly to the well-known Travelling Salesman Problem), Heckel et al. note that a good approximate solution can be computed in M² time. They note further that in typical implementations the constant factors are such that the savings in the MNT term may more than offset this extra M² term.
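
The sketch below illustrates the style of incremental update this ordering makes possible; it assumes a binary on/off ideal time series and a statistic whose permutation-dependent part reduces to a per-voxel sum of data values at the 'on' time points, a simplification for illustration rather than Heckel et al.'s actual implementation.

    /* Incremental update between two successive permutations of the ideal
     * time series (sketch).  Only the time points whose on/off label
     * changed are visited; under a minimum-change ordering there are few
     * such points per step. */
    #include <stdio.h>

    #define T    8   /* time points (toy size) */
    #define NVOX 2   /* voxels (toy size)      */

    static void incremental_update(const int *prev, const int *next,
                                   const double data[NVOX][T],
                                   double sums[NVOX])
    {
        for (int t = 0; t < T; ++t) {
            int delta = next[t] - prev[t];   /* -1, 0, or +1 */
            if (delta == 0)
                continue;                    /* label unchanged: no data access */
            for (int v = 0; v < NVOX; ++v)
                sums[v] += delta * data[v][t];
        }
    }

    int main(void)
    {
        const double data[NVOX][T] = {
            { 1, 2, 3, 4, 5, 6, 7, 8 },
            { 8, 7, 6, 5, 4, 3, 2, 1 },
        };
        /* Two successive permutations that differ at only two time points,
         * as a minimum-change ordering would arrange. */
        const int perm_a[T] = { 1, 1, 0, 0, 1, 1, 0, 0 };
        const int perm_b[T] = { 1, 0, 1, 0, 1, 1, 0, 0 };
        const int zeros[T]  = { 0 };

        double sums[NVOX] = { 0.0, 0.0 };
        incremental_update(zeros,  perm_a, data, sums);  /* full first pass    */
        incremental_update(perm_a, perm_b, data, sums);  /* two points touched */
        printf("sums: %.1f %.1f\n", sums[0], sums[1]);   /* prints 15.0 21.0   */
        return 0;
    }

The M² cost of choosing the ordering is paid once; with it in place, most iterations of the outer loop fall through the delta test, so the number of data accesses per permutation is proportional to the number of changed time points rather than to T.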

The savings produced by Heckel's optimisation depend strongly on the hardware on which the algorithm runs. Modern developments such as high-speed cache memory, pipelined instruction processing, and vectorisation make multiple memory references much less of a performance concern. On our test platform, a 500 MHz Alpha 21164 processor (Compaq Computer Corporation, Houston, Texas) with 8 KB of Level 1 cache and 96 KB of Level 2 cache, running optimised code generated by the Digital UNIX C compiler, applying this optimisation made no appreciable difference to execution time. Heckel's method thus remains a valuable option, but only on systems where memory accesses dominate execution time.

