Scrutinise those trial results

Vegetables
01.08.2017

By Scott Mathew, Senior Solutions Development Lead @HortApplication

Growers are often presented with trial results for new products and innovations at product information days and evenings. I think it’s always a good idea to approach such results carefully.

 

Questions you should ask

Sometimes the results appear overwhelmingly impressive, and with good reason, because sometimes they are. On other occasions, when you dig a little deeper, all may not be as it seems.

When being presented with research results, you are entitled, and certainly encouraged, to ask questions. The first question I would ask is, “Are the results statistically significant?” If the answer is no, then the trial doesn’t tell you anything, no matter how glossy the brochure or how well-presented the graphs or tables appear. A truncated or disproportionately scaled graph axis can exaggerate differences that are not really there.
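As a rough illustration of what “statistically significant” means in practice, here is a minimal sketch in Python (using the scipy library, with made-up yield figures purely for illustration, not data from any real trial). It runs a one-way analysis of variance on replicated plot yields and reports whether the treatment differences are unlikely to be down to chance alone.

```python
# Illustrative only: made-up replicated plot yields (t/ha) for three treatments.
from scipy import stats

untreated = [42.1, 40.8, 43.0, 41.5]   # four replicate plots per treatment
product_a = [45.2, 44.1, 46.0, 44.7]
product_b = [42.9, 41.7, 43.5, 42.2]

# One-way ANOVA: are the treatment means more different than chance would explain?
f_stat, p_value = stats.f_oneway(untreated, product_a, product_b)

print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value below 0.05 is the conventional cut-off for significance at the 95% level.
print("Significant at the 5% level" if p_value < 0.05 else "Not significant at the 5% level")
```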

 

Factors to take into consideration

Meaningful trials are set up in a way that allows them to be statistically analysed. Field trials by their very nature contain what researchers often call background noise: factors that may influence a trial’s results. Background noise includes things like variable soil type or moisture content, weather, weed pressure and insect attack, along with a range of other possible variables. Good site selection and proper site management can minimise background noise, although at any given site there will always be some variation.

 

The 2 key R’s in research

The key R’s of research are to replicate the treatments and randomise the sequence of treatments. Replicating or repeating treatments allows the researcher to mathematically separate treatment effects from the background variation or differences (e.g. soil types) between plots. Randomising the sequence of treatments reduces the odds that spatial variability (e.g. soil type) will unfairly influence treatment effects.
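To make the two R’s concrete, here is a minimal sketch of how a randomised, replicated layout might be generated, with hypothetical treatment names chosen only for illustration. Each replicate acts as a block in which every treatment appears once, and the order of treatments is shuffled independently within each block.

```python
import random

# Hypothetical treatments and a four-replicate layout, for illustration only.
treatments = ["Untreated", "Product A", "Product B", "District standard"]
n_replicates = 4

random.seed(1)  # fixed seed so the example layout is reproducible
for block in range(1, n_replicates + 1):
    order = treatments[:]     # every treatment appears once in each block...
    random.shuffle(order)     # ...but in a freshly randomised plot order
    print(f"Block {block}: {order}")
```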

 

The value is in the statistics

There is a range of statistical tests researchers can use. These are best left to the experts; however, “LSD” is a term you should familiarise yourself with.

The term “least significant difference”, or LSD, is often reported with trial results. Basically, the LSD is used to compare the means of different treatments that have an equal number of replications. A significance level of 0.1 (LSD 0.1) means the researcher is 90% certain that the treatments are indeed different and that the result is not just down to random chance. An LSD at the 0.05 level is even stronger, with 95% certainty that the treatments differ. LSDs are the most commonly quoted statistic because they allow data to be eyeballed without requiring a deep understanding of statistics first.
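For readers who want to see where the number comes from, the LSD for equally replicated treatments is commonly calculated as t × √(2 × MSE ÷ r), where MSE is the error mean square from the trial’s ANOVA, r is the number of replicates per treatment and t is the critical t value at the chosen significance level. The sketch below uses made-up figures purely for illustration: it computes a 5% LSD and checks whether the difference between two hypothetical treatment means exceeds it.

```python
import math
from scipy import stats

# Made-up ANOVA outputs, purely for illustration.
mse = 1.8      # error mean square from the trial's ANOVA
reps = 4       # replicates per treatment (the LSD assumes equal replication)
df_error = 9   # error degrees of freedom from the ANOVA
alpha = 0.05   # 5% significance level, i.e. 95% certainty

t_crit = stats.t.ppf(1 - alpha / 2, df_error)   # two-sided critical t value
lsd = t_crit * math.sqrt(2 * mse / reps)

mean_a, mean_untreated = 45.0, 41.9             # hypothetical treatment means (t/ha)
difference = abs(mean_a - mean_untreated)

print(f"LSD (5%) = {lsd:.2f} t/ha")
print("Means differ significantly" if difference > lsd else "Difference is within the LSD")
```

If the gap between two treatment means is larger than the LSD, the difference can be called significant at that level; if it is smaller, the trial has not shown the treatments to be different.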

 

Replicate, replicate

The next thing I would ask is, “How many times have you been able to replicate these results?” In other words, have the trial results been reproduced at a number of different sites, under varying environmental conditions and over a number of years or seasons?

 

Local relevance?

Finally, you should ask the question, “Is the trial relevant to my growing operations, pest and disease management programs or crop protection chemical usage?” I recently sat in on a meeting where Western Australian fungicide resistance trial results were quoted to an eastern states audience. Use patterns and pest and disease pressure can vary between regions and states, so it can be misleading to take a ‘one cap fits all’ approach.

 

Always ask questions

Just remember that these information sessions are designed to introduce you to new products and innovations. The more questions you ask, the better your understanding of the product, the relevance of the trials and the product’s potential fit within your growing operation or spray program.

 

If you are uncertain about introducing any new technology, you should seek advice from your trusted adviser.