On-Farm Trials to Farmers’ Trials – How Evolving Perspectives Drive Evolving Methods
The history of research in support of smallholder farmers has been told before, and it is a story that is still unfolding as problems and situations keep evolving. This blog examines some of the changes happening now and how they affect the research methods we need.
Unsurprisingly, the research methods used in farmers’ trials have had to keep up with the changes witnessed over time. Those classical experimental designs that agricultural students all over the world learn about - the nicely replicated randomised block design, or some of its clever and intricate derivatives – still have a role but are certainly not enough. During the 1980s there was a distinct shift in research paradigms to include participatory research. The principle is that farmers are not just recipients of research results: if the research is for their benefit, they have an ethical right to influence it, and their involvement can make it more relevant and efficient. During that time and into the 1990s, we saw the development of methods for participatory experiments, which were widely described in books and manuals and are still extensively used today.
Now there are further shifts in the way research works in support of smallholders, and this is again changing some of the ways we think about experimental designs and some of the methods needed. These changes include the recognition of heterogeneity and complexity, the need to consider farmers’ systems and situations (not just technologies), and the way the distinction between research and development has blurred.
What does this mean for the way we see experiments and design them?
There are many implications and here I describe three of the changes I have noticed and needed to implement.
- From Noise to Information
| Observation | Conventional experiments | Farmers’ experiments |
| --- | --- | --- |
| Trial results from different farms are highly variable | The variation is noise and treated as ‘experimental error’ in the analysis. Aim to reduce it by careful design. Use averages across farms to show treatment effects. Farmers are replicates – replicate as necessary. | The variation arises because farms and farmers are different; it represents the way the world works. Aim to understand it and use it to give nuanced, context-specific results. Farmers take part in experiments for multiple reasons but are never just replicates. |
A common researcher’s complaint about on-farm participatory trials is that the data are highly variable. The classic experimental design position is that this variation is error or noise that obscures the effects you are looking for. You therefore design and manage the trial to minimise it and replicate enough to average out what is left. Now we recognise that at least part of that variation is real in that it reflects the real variation between farmers and their situations. People are not all the same, so if we are serious about the needs of farmers we need to start trying to understand, and not remove or hide the variation between them.
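The shift from noise to information can be sketched numerically. In this hypothetical simulation (the soil contexts, sample sizes and effect sizes are all invented for illustration), the same trial data give one averaged answer when farm-to-farm variation is treated as error, and a more useful, context-specific answer when that variation is examined:

```python
import numpy as np

rng = np.random.default_rng(42)

n_farms = 60
# Invented context: half the farms are on poor soils where a new variety
# gains little, half on good soils where it gains a lot.
context = np.repeat(["poor", "good"], n_farms // 2)
true_gain = np.where(context == "poor", 0.1, 0.8)  # t/ha gain, new vs old

# Each farm compares both varieties; measurement noise on top.
gain_observed = true_gain + rng.normal(0, 0.2, n_farms)

# "Noise" view: one average, farm differences averaged away.
print(f"Average gain: {gain_observed.mean():.2f} t/ha")

# "Information" view: split by context, the variation becomes a finding.
for c in ["poor", "good"]:
    m = gain_observed[context == c].mean()
    print(f"Gain on {c} soils: {m:.2f} t/ha")
```

The single average would suggest a modest benefit for everyone; the split shows the benefit is concentrated on one type of farm, which is exactly the kind of nuanced result farmers can act on.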
- From On-Farm Trials to Farmers’ Experiments
| Characteristic | On-Farm Trials | Farmers’ Experiments |
| --- | --- | --- |
| Overall aim | Use the systematic methods of scientific experimentation to generate data relevant to decision-making by farmers and others. | (same) |
| Project of which they are part | A technology development project, aiming to test options in real farm situations and to get farmers’ input to help that development. | A farmer-researcher collaboration aiming to use science to support farmers’ interests. |
| Origins | Making agricultural research more relevant to farmers | Increasing options and information for farmers’ informed choices |
| Farmers participating | Small numbers (10-100) of strategically selected farmers | Large numbers (hundreds or thousands) of project beneficiaries |
| Farmer roles | Recipient and implementer | Partner negotiating and influencing all stages |
| Primary information generated for different interest groups | Farmers: not clear. Researchers: differences in means for different options. | Farmers: experience for decisions. Researchers: variation across farms, diversity and interactions with context. |
| Primary use of information | Generating recommendations | Generating information for farmer decision-making |
| Typical designs | Each farmer testing all options | Each farmer testing self-selected options |
| Heterogeneity | Aim to control or avoid | Aim to characterise and understand |
| Measurements | Responses measured by researchers | Responses and context measured by farmers |
On-farm and participatory trials were seen as part of developing technologies or recommendations for farmers. But if research is seen as helping farmers understand what’s going on so they can make better decisions, several aspects of the methods and designs have to change as well. For example, farmers can join in negotiating the design, which may result in them not all doing exactly the same thing.
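One consequence of farmers self-selecting options is that the resulting data are incomplete and unbalanced: no farmer tests everything, so options can only be compared indirectly through the network of overlapping choices. Below is a minimal sketch, with invented numbers, of how option effects can still be estimated from such a design by treating each farm as a block in a linear model:

```python
import numpy as np

rng = np.random.default_rng(1)

n_farmers, n_options = 100, 5
# Invented illustration: each farmer picks 2 of the 5 options to try,
# so no single farmer grows everything.
true_effect = np.array([0.0, 0.3, 0.5, 0.2, 0.6])  # option effects (t/ha)
farm_level = rng.normal(2.0, 0.5, n_farmers)       # farm-to-farm differences

rows, y = [], []
for f in range(n_farmers):
    for opt in rng.choice(n_options, size=2, replace=False):
        x = np.zeros(n_farmers + n_options)
        x[f] = 1.0                 # farm (block) indicator
        x[n_farmers + opt] = 1.0   # option indicator
        rows.append(x)
        y.append(farm_level[f] + true_effect[opt] + rng.normal(0, 0.1))

X, y = np.array(rows), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Option effects are identified only up to a constant, so report them
# relative to the first option.
est = beta[n_farmers:] - beta[n_farmers]
print(np.round(est, 2))
```

Differences between options are estimable as long as farmers’ choices overlap enough to connect every option with every other; absolute levels are not, which is why the effects are reported relative to one option.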
- From Standard Methods to Adapted Detail
| Element | Researcher’s standard methods | Farmers’ self-recording | Trained youth from community | Participatory assessment at season end |
| --- | --- | --- | --- | --- |
| Pest and disease | Regular visits for identification and counting | Regular recording of damage score | Regular recording of damage score | Perception at end of season |
| Production | Yield at harvest, dried and weighed | Score | Volume of grain using standard container | Perception at end of season |
| Who measures and records | Technician | Farmers using record cards | Youth using ODK | Recorded at a group meeting |
| Advantages | Standardised, objective, comparable with others | Regular observation throughout season. High farmer engagement | Enthusiastic engagement. Learning transferable skills | Directly relevant to farmers’ decisions and interests. Generates discussion |
| Disadvantages | Costly. No farmer involvement or ownership. Disconnected from farmer interests | Requires literacy. Some farmers not motivated. Questions over data quality | Costs of setting it up. Reliability. | Difficult to compare across sites or learn from large N |
The way research methods have to be adapted to meet multiple new objectives is illustrated nicely by an example from a recent legume variety trial involving about 100 farmers. The design involved each farmer growing 5 varieties to assess yield and pests. The question arose as to what should be measured and how the data should be collected.
“Easy” said the breeders, “We do this all the time. Let the technician measure yield and record pests using the standard methods”.
But then as the discussion continued, other alternatives emerged, each with its own advantages and disadvantages. Wouldn’t farmers be more engaged if they collected their own data? Maybe we can get some of the unemployed youngsters in the village interested in collecting data electronically, and get good data while giving them an interesting experience? Or perhaps what will be most relevant to farmers’ decision-making is their preferences, so are these what should be recorded? None of these is right or wrong, and there are more alternatives and schemes that merge several of them. The point is that there might be good reasons for doing something other than ‘the standard’.
If statistics and statisticians are to make a contribution in this evolving domain of research with farmers, then they too have to evolve. This means learning new concepts, new methods and new ways of implementing them. In my experience, this requires working closely with the researchers and farmers driving the process. But what else does it take – any ideas? And what is going to happen next?
Author: Ric Coe
Ric’s main focus is on improving the quality and effectiveness of research for development through the application of statistical principles and ideas. He is particularly interested in research design, including the design of complex integrative research projects.