The design of the design of experiments
Maybe a more accurate title for this piece would be ‘the
design of the designer of experiments’, but that is not quite right either. The story is actually one of evolution
through a series of chance happenings, though I hope the result is intelligent
design as far as the experiments are concerned.
We all do experiments all the time – try something, see what
happens and learn something in the process. That is not good enough for
science. As Richard Feynman said, “The first principle is that you must not fool
yourself – and you are the easiest person to fool”. So, scientists developed principles and tools
for designing experiments that would be robust against biases and allow the
level of certainty, or uncertainty, of results to be quantified. No more
fooling. These principles were written down by R A Fisher in his 1925 book Statistical
Methods for Research Workers, and that book has been the basis of experimental
design for the research areas in which I work – mainly agriculture, ecology and
environmental science – ever since.
However,
things have been changing, and I was prompted to reflect on these changes as I
moved books on experimental design from my old house and took them to the Stats4SD
office. Arranging them on a shelf, I realised they represented in part the way
my thinking and understanding have evolved during the 40+ years I have been
helping researchers design experiments.
I did have a
copy of Fisher (1925), but gave it away years ago, so at the left-hand side of
my shelf are some of the books firmly aligned with the Fisher tradition. Finney’s
book was the first from which I started to grasp theory. Cox made it all seem
so simple. Box, Hunter and Hunter had the whole subject tidied up with clever optimal
designs for many situations. Kempthorne derived the almost mystical connection
between randomisation- and model-based analyses, though I never really
understood why it happens. Cochran and Cox presented a catalogue of designs
into which researchers seemed to fit any problem, and Gomez and Gomez gave
recipes for the design and analysis of, apparently, any useful design. These
two books were the standard references for thousands of agricultural scientists
around the world.
The next
group of books represents the beginning of a shift away from starting with a
theory of design and making the research question fit it, to acknowledging that
real research problems and contexts rarely allow those neat designs in the
catalogues to be used. The sizes of blocks are not all equal, the number of
varieties is not a perfect square, not all ewes have two lambs. Roger Mead’s
book emphasised how a good experimental design could be chosen based on
principles, even if there was no exact theory or the design could not be shown
to be optimal. I had the privilege of being taught by Roger Mead and those
messages still underlie much of how I think about experimental design. Since
then, I have also come to understand something represented by books such as those on
experiments with perennials, experiments for tree improvement and ecosystem
experiments. The application area matters. All the design theory and principles
will not help you design a good tree trial if you don’t understand something about
trees: how they grow, how they are managed and measured, and what experimenters
are really trying to find out about them. Somewhere I lost my copy of
Robinson’s Practical Strategies for Experimenting, a helpfully practical
book which started with trying to place experiments within a larger context of
how research works, for example by asking whether a designed experiment is
actually needed.
When the
participatory principle became a standard part of much agricultural and
environmental research, experimental design had to be adapted to the fact that
people participating did not always have the same objectives as a researcher. In
this framework, experiments have multiple purposes, and designs have to trade off
multiple interests and concerns. Most importantly, designs have to be
negotiated between interested parties. Some principles long assumed to be
fundamental to experimental research, such as random allocation of treatments
to units, might need to be questioned. For example, why exactly do we
need to randomise in this situation and what will be lost if we don’t? Might
the errors and confusion caused by trying to randomise or blind a trial be a
bigger threat to validity than using a systematic layout that is assumed to be
‘as good as random’?
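To make that contrast concrete, here is a minimal sketch (entirely my own invented example, not taken from any of the trials discussed here) of the difference between randomly allocating treatments to plots within blocks and using a fixed, systematic layout that repeats the same order in every block:

```python
import random

# Hypothetical example: four treatments allocated to four plots in each of three blocks.
# Treatment names and block counts are invented purely for illustration.
treatments = ["A", "B", "C", "D"]
n_blocks = 3

random.seed(1)  # fix the seed so the layout can be reproduced

# Randomised layout: shuffle the treatment order independently within every block.
randomised = []
for block in range(1, n_blocks + 1):
    order = treatments[:]      # copy so the master list is untouched
    random.shuffle(order)      # random allocation of treatments to plots
    randomised.append((block, order))

# Systematic layout: the same fixed order repeated in every block, the kind of
# arrangement sometimes assumed to be 'as good as random'.
systematic = [(block, treatments[:]) for block in range(1, n_blocks + 1)]

for block, order in randomised:
    print("block", block, "randomised:", " ".join(order))
for block, order in systematic:
    print("block", block, "systematic:", " ".join(order))
```

Neither layout is offered here as the right choice; the point of the question above is precisely that the answer depends on what randomisation is protecting against in a particular setting.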
At the
right-hand side of that shelf are a few books that are products of some of the
research in which I have been involved. They are full of discoveries, insights,
new theory and new practices. In almost every case, these are based not on the
results of a single experiment, but on assembling the results of many
experiments and other non-experimental studies. This assembling of facts is
done retrospectively, but can we design experiments that might make it more
efficient? The standard approaches to experimental design, which we still teach
and expect agricultural scientists to understand, are based on the notion that there
is ‘an effect’ of treatments, which can be measured by differences in response
between two or more treatment groups. The experiment is designed to estimate
that effect, or test hypotheses about it. But in applied sciences, like
agriculture and much of ecology, we know there is not ‘an effect’, but a
complex pattern of responses that depend on conditions and context. This is true
when we measure a biophysical response such as crop yield, and even more so
when we measure a human and subjective response, such as opinions about the
crop. We can design experiments that can help us understand what is happening
in these situations. They incorporate elements of the design of non-experimental
studies, may be limited by what can be deduced from observational studies, and they
can look very different from the designs in those classic books.
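As a small, invented illustration of that point (the numbers are made up and do not come from any study mentioned here), the sketch below first estimates a single ‘effect’ as a difference in mean response between two treatment groups, and then estimates the same comparison separately by context, where a quite different pattern appears:

```python
import statistics

# Invented yields (t/ha) for two treatments observed in two contexts (e.g. soil types).
data = [
    ("new", "sandy", 2.1), ("new", "sandy", 2.3), ("new", "clay", 3.9), ("new", "clay", 4.1),
    ("old", "sandy", 2.0), ("old", "sandy", 2.2), ("old", "clay", 3.0), ("old", "clay", 3.2),
]

def mean_yield(treatment, context=None):
    """Mean yield for a treatment, optionally restricted to one context."""
    return statistics.mean(y for t, c, y in data
                           if t == treatment and (context is None or c == context))

# The classical single estimate: one 'effect' of the new treatment.
print("overall effect:", round(mean_yield("new") - mean_yield("old"), 2))

# The pattern of responses: the same comparison estimated within each context.
for context in ("sandy", "clay"):
    effect = mean_yield("new", context) - mean_yield("old", context)
    print(f"effect on {context} soil:", round(effect, 2))
```

In this toy example the overall estimate of 0.5 t/ha hides the fact that almost all of the benefit appears on one soil type, which is the kind of pattern the classical single-effect framing struggles to express.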
Over the
last few weeks, I have been working with a network of farmers, development NGOs
and researchers who are setting up a study with up to 1000 people who are
learning about principles and practices for improving how they make and use compost.
They will be setting up an experiment with all participants making compost in
two or more ways, each based on their own choices, and evaluating their effects
on soils and crop production. The design is being negotiated to match the aims,
interests, possibilities and constraints of all those involved. I think we have
come up with something that is feasible, useful and perhaps new. What we don’t
have is the theory and sets of derived principles on which it is based. We are
still waiting for someone to write that book.
Author: Ric Coe
Ric’s main focus is on improving the quality and effectiveness of research for development through the application of statistical principles and ideas. He is particularly interested in research design, including the design of complex integrative research projects.