There are few business environments as unpredictable as a farm field. Any number of variables can affect a field’s yield from season to season.
Despite the many high-tech tools farmers have to record their fields’ history and use it to predict and adjust for future outcomes, questions remain.
How does data collected from on-farm research compare to small plot samples? What standards should be used to evaluate that information? How should it be disseminated among farmers, dealers and suppliers?
Joshua McGrath, University of Kentucky extension specialist in Agricultural Soil Management, and Joe Luck, University of Nebraska-Lincoln associate professor of Biological Systems Engineering, sat down to discuss these challenges at the 2018 Agricultural Equipment Technology Conference (AETC) in Louisville earlier this year. Here’s an excerpt of the discussion, and you can read more from their conversation here.
Joshua McGrath: “I had a friend in the industry call me a couple of weeks ago and we spent hours talking about research. You do small-plot research, and theoretically you’re taking these really precise 2-row combines and you’re measuring this yield, and then you’re looking at that variability. So you’ve got 6 replications of treatment A, and then you’re using that variability. So that’s measured precision — a standard deviation of the mean yield response — where the mean yield response is 150 bushels plus or minus 20 bushels, right?
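To ground the statistics McGrath is describing, here is a minimal sketch in Python of how a mean yield response and its variability would come out of six replications. The yield values, and everything else in it, are hypothetical placeholders, not data from any trial he mentions.

```python
# Minimal sketch of the small-plot statistics McGrath describes.
# The six yields below are hypothetical, chosen only to illustrate
# a mean response near 150 bu/ac with some spread around it.
import math
import statistics

yields = [148.0, 162.0, 139.0, 155.0, 171.0, 127.0]  # bu/ac, treatment A, 6 reps

mean = statistics.mean(yields)
sd = statistics.stdev(yields)      # sample standard deviation
se = sd / math.sqrt(len(yields))   # standard error of the mean

# Two-sided 95% t critical value for n - 1 = 5 degrees of freedom
T_CRIT = 2.571

print(f"mean yield: {mean:.1f} bu/ac")
print(f"std dev:    {sd:.1f} bu/ac")
print(f"95% CI:     {mean:.1f} +/- {T_CRIT * se:.1f} bu/ac")
```

The interval those six plots produce is only as meaningful as the plots’ ability to represent the whole field, which is exactly the gap McGrath turns to next.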
“You assume those plots represent that variability within the field, but you get a completely different number. You’re converging on the performance within a field when you go to a longer plot. And it comes down to a question of confidence. What confidence interval do you select to separate treatments? The question is, ‘What do we need?’
“You’re talking about on-farm research and it’s at scale, so we have this ability to spatially measure, to manage at this very fine scale, but do we even have the recommendations to support that ability? Then there’s the question, ‘How do we assess that performance?’ It has to be through on-farm, production-scale trials. And what is our standard for that data? Who handles that data? Who interprets that data? Because I’ve seen people interpret data, and I look at it and go, ‘That’s not what I see’ or, ‘I wouldn’t do it that way.’ You look at the comparisons folks are doing and you go, ‘You’re not comparing apples to apples.’ I think someone has to step in and be the unbiased arbiter of that on-farm research.”
Joe Luck: “You look at the systems. We generate these as-applied data sets, and people have pretty high expectations of what that should be. If I go out and spread a field with dry fertilizer and have a little left in the tank, I could just run around the field again or run up and down the field somewhere. But that doesn’t get flagged anywhere; it gets recorded, but it’s not summed in the math. That area of the field’s been affected. We’ve actually talked to producers who are starting to see yield data error artifacts in their prescription maps.”
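Luck’s example is easy to reproduce on paper. The sketch below, with entirely hypothetical rates and field dimensions, shows how one extra clean-out pass doubles the applied rate on part of the field; if the as-applied record never sums the two passes, the map understates what those acres actually received.

```python
# Hypothetical illustration of the double-pass problem Luck describes:
# the rates, grid size, and pass pattern are all made up.
import numpy as np

target_rate = 200.0          # lb/ac of dry fertilizer
field = np.zeros((10, 10))   # 10 x 10 grid of field cells

field += target_rate         # first full pass covers the whole field

# Emptying leftover product with one extra strip up and down the field
field[:, 4] += target_rate   # those cells actually received 400 lb/ac

over = field > target_rate
print(f"cells over target: {over.sum()} of {field.size}")
print(f"max applied rate:  {field.max():.0f} lb/ac vs. target {target_rate:.0f}")
```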
McGrath: “Supposedly we can all share all this data, but how do we manage its quality? We’re all sharing data. We have this metadata, and a lot of the companies are selling recommendation systems based on all the data they’re collecting from their growers. They’re bringing all this data in and they’re looking at these big trends, saying, ‘Well, this is how much potassium I need, right?’ But how do you manage the quality of that data, especially from a machinery standpoint? Because yield is basically what we’re talking about again. We have high-resolution yield data, but that doesn’t mean we have good yield data. What do we need to measure?”
Luck: “That’s right. We talked about confidence in analyzing the data, so when we do our on-farm research studies, any of the studies, we go the marginal net return route: we incorporate the cost of the product, the response, the yield, etc. The confidence level you analyze at is a big factor in where you make your decision.”
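As a rough illustration of the marginal net return calculation Luck refers to, here is a minimal sketch; the corn price, product cost, rate and yield response are hypothetical placeholders, not figures from the Nebraska studies.

```python
# Minimal sketch of a marginal net return comparison.
# All prices, rates, and yield responses below are hypothetical.
CORN_PRICE = 3.50      # $/bu
PRODUCT_COST = 0.45    # $/lb of input

def marginal_net_return(extra_yield_bu: float, extra_input_lb: float) -> float:
    """Added revenue from the yield response minus the added input cost, $/ac."""
    return extra_yield_bu * CORN_PRICE - extra_input_lb * PRODUCT_COST

# A treatment that added 40 lb/ac of input and returned 7 bu/ac over the check
print(f"${marginal_net_return(7.0, 40.0):.2f}/ac")  # -> $6.50/ac
```

Whether a margin like that counts as a real response or as noise depends on the confidence level the trial is analyzed at, which is the decision point Luck flags.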
McGrath: “What do I need to measure? Ideally, I can measure this, and it tells me I need to apply that. But I don’t know what to measure, and I don’t know how to convert that measurement to an application. We assume a lot of this is known, and it just isn’t.”