Agriculture serves a wide range of purposes, and new requirements and objectives continue to be added. Besides food and fiber production, we expect modern agricultural systems to conserve soils and biodiversity, regulate water and carbon cycles, provide fuel, generate employment, and offer many other ecosystem services.1 Whether agriculture succeeds in delivering all these services depends on a complex array of cultural, technological, educational, political, legal, demographic, sociological, climatic, and economic drivers. The goals and values of people working on farms also influence agricultural outcomes.2

Predicting how farms respond to changes, such as new farming practices, price shocks, or climatic events, is very difficult. There is normally no way of knowing precisely how such changes will play out. This dilemma often leaves those who make decisions about agricultural systems with little certainty that their choices are right.

But it is not only decision-makers who struggle with the complexity of agricultural systems. Researchers are also challenged by how to study them effectively. Many common research methods are not well equipped to deal with complexity. They are designed for investigating systems that can easily be controlled and manipulated and for testing hypotheses about their behavior, aiming to identify generally applicable rules that help us understand how these systems work. While there continues to be great need for research that follows these principles, such work rarely allows comprehensive assessment of system dynamics. When it comes to supporting practical decisions on complex agricultural systems affected by many uncertain, related, and dynamically changing variables, classical hypothesis testing based on controlled experiments is of little relevance.

The Insurmountable Complexity Challenge

Many studies have tried to precisely predict agricultural outcomes, often using complex models fed with large datasets.3,4 It is striking that virtually all successful simulations dealt with relatively simple settings, mostly working on highly mechanized single-crop systems, with homogeneous soils and advanced management of nutrients, water, pests, diseases, and weeds.5 Simulations also generally assume well-functioning input and output markets and predictable social and economic environments.

We suspect that successful simulations for fairly simple systems are the main reason many agricultural scientists have confidence in their models. While many models convincingly describe photosynthesis, nutrient uptake, or light competition,6,7 the impacts of pests and diseases, labor constraints, and weather extremes are often either excluded or not captured sufficiently. Many researchers still see opportunities for precise modeling even where system components become somewhat more complex.8,9 For instance, models can plausibly be built to describe tree-crop interactions,10 other intercropping situations,11 or biotic stresses.12 However, such models will probably never become complex enough to simulate many real-life agricultural systems.

Figure 1. Complex agroforestry systems in Africa—difficult to study with purely data-driven research approaches.

We work on agroforestry systems, which are agricultural systems that integrate trees (or shrubs) with crops and/or livestock.13 Such systems are widespread throughout the tropics and subtropics, especially among smallholder farmers. In addition to their biophysical settings, smallholder farms are normally shaped by a host of economic, social, and cultural factors that influence farm performance. They are also highly variable,14 so that generalizations about them become very problematic (Figure 1). There is little hope for making precise predictions for such systems. Fortunately, we probably don’t need such predictions for good decision making.

A possible answer to the challenge of modelling complex systems could be a massive increase in data collection. Unfortunately, this strategy is often not promising for complex systems, because it would devour far more resources—time and money—than are available in most contexts. Complex models can also be quite error-prone because they often involve the simulation of many processes, each of which may introduce inaccuracies.15 System analysts should therefore aspire to initially build lean but balanced models (Figure 2), in which all major processes are adequately represented, rather than models that cover some parts of the system in detail but largely ignore others.


Towards Meeting the Complexity Challenge: Accept Inevitable Limitations

In the face of system complexity and data scarcity, which seem ubiquitous throughout much of the developing world (but not only there), it is hard to be optimistic about the ability of research to deliver meaningful decision support. This is not helped by reports indicating that most 'research for development' is never actually considered in decision-making processes.16 In many cases, better communication by researchers could remedy the situation, but we suspect that quite often decision-makers realize that studies do not address systems as comprehensively as they should. Where research fails to consider impacts on critical stakeholders, site-specific risk factors, or institutional constraints, people with intimate knowledge of the local context may easily dismiss the research findings.

This leaves researchers who aim to facilitate development with a problem: how can research meaningfully support decision processes? The first step towards a solution is accepting the inevitable limitations: no matter how hard we try, we cannot eliminate uncertainty about complex agricultural systems. Accepting that complexity and uncertainty are inherent to the systems we attempt to manage opens the door to research approaches that can accommodate them. Such approaches exist in other disciplines but are not yet commonly used in the agricultural sciences.


Decision Analysis: A Promising Solution for the Complexity Challenge

Having to understand complex systems sufficiently well to make decisions on them, even without perfect information, is a very common challenge. Similar situations are regularly faced by entrepreneurs deciding on whether to launch new products, by judges having to decide on court cases, by governments contemplating new policies, and in a large number of other contexts. In fact, we meet similar decision challenges in our everyday lives all the time.

Such decision dilemmas are the object of interest of Decision Theory. This discipline has a long history of working on exactly the kind of problem agricultural decision-makers face: how to make risky decisions on complex systems with limited information. This problem has attracted the attention of researchers working in many scientific fields, including economics, psychology, sociology, mathematics, computer science, and statistics. Thanks to the combined efforts of this community and the abundance of potential applications, pragmatic approaches, known as Decision Analysis, have been developed to support real-life decisions. Decision Analysis is widely applied in many contexts, including business decision support,17,18 public health intervention planning,19,20 legal reasoning,21 policy process support,22 and natural resource management.23

So far, research for agricultural development has not seen broad application of Decision Analysis methods. We posit that embracing this discipline and its principles could resolve the difficulties agricultural research has had in supporting decisions. We are working to introduce pragmatic Decision Analysis approaches into research for development in order to overcome the disconnect between research and practice that has been standing in the way of evidence-based decision-making.


The Principles of Decision Analysis


Focus on a Decision

‘Decision Analysis’ is concerned with making rational recommendations on how decisions should be taken. Decisions are situations in which a decision-maker or decision-making body can choose between at least two alternative options, with some uncertainty as to which option is preferable. Decision Analysis aims to identify the rational choice, based on the current state of knowledge and the preferences of decision-makers. This motivation draws our attention away from the classic scientific pursuit of understanding how a system works and towards the more focused context determined by a particular decision question. The analysis then no longer needs to describe all parts of a system but can instead focus on the parts that stand to be affected by the decision.


Use the Current State of Knowledge

Much modern research, including research for development, is strongly focused on empirical data, as opposed to other sources of information. We often assume that we know next to nothing about a particular issue until we have collected data on it. At the same time, we place great (and probably often unwarranted) trust in results from surveys or experiments. This so-called ‘frequentist’ mindset originates from the common belief that science should be objective and that scientists’ beliefs and values should not be allowed to interfere with analyses. The problem with this mindset is that studying complex systems is very difficult if our starting point is a blank slate. Even though most frequentist researchers naturally consider earlier work when designing their studies, it is difficult to describe systems comprehensively with this approach. An alternative mindset is the so-called ‘Bayesian’ approach to research, which allows analysts to insert their initial state of knowledge, their prior beliefs, into their studies. These prior beliefs, which can be updated as additional information becomes available, serve as a starting point for systems analysis.

The difference between these views on the scientific process has substantial implications for our ability to study complex systems. While the frequentist approach requires us to first invest significant effort in data collection, before we can say anything at all about a system, the Bayesian approach allows us to progress towards a coarse understanding of system dynamics relatively quickly and much more cheaply. This cost and time effectiveness is a prerequisite for research that supports decisions in real time.
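To make this concrete, here is a minimal sketch (in Python, with hypothetical numbers of our own choosing) of the Bayesian logic described above: an expert's prior belief about the probability that farmers adopt a new practice is updated as survey observations arrive. The conjugate Beta-Binomial update keeps the arithmetic simple.

```python
# Minimal Bayesian updating sketch; all numbers are hypothetical.
from scipy import stats

# Prior belief (expert estimate): adoption probability most likely
# around 0.4, encoded as a Beta(4, 6) distribution.
prior_a, prior_b = 4, 6

# New (hypothetical) survey data: 12 of 20 farmers adopted.
adopted, surveyed = 12, 20

# Conjugate Beta-Binomial update: successes and failures are added
# to the prior's pseudo-counts.
posterior = stats.beta(prior_a + adopted, prior_b + (surveyed - adopted))

print(f"Posterior mean adoption rate: {posterior.mean():.2f}")
print(f"90% credible interval: {posterior.ppf(0.05):.2f} to {posterior.ppf(0.95):.2f}")
```

The prior is not discarded when data arrive; it is weighed against them, and further observations would shift the posterior again.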


Include Experts, Stakeholders, and Decision-Makers

If researchers without much knowledge of a particular system build a model of it, the results are often not very useful. This is why decision analysts engage subject matter experts and stakeholders, often the best available source of information, in participatory processes to harvest their knowledge and construct models that reflect their beliefs and priorities. Besides improving the models, this participation brings in different perspectives on the decision and, especially if decision-makers themselves participate, raises the chance that research outputs will be considered when the decision is finally made.


Explicitly Express Uncertainty

Figure 2. ‘Liebig’s barrel’ of model precision (borrowing from an image commonly used to explain the concept of essential plant nutrients). The precision we can expect from our model is limited by the process we understand least (where the barrel loses water). More detailed information on aspects we already understand well will not make our models much more precise.

In working with expert knowledge, it is crucial to adopt robust procedures that acknowledge that the information we use is uncertain. We can express uncertainty about variable values through probability distributions that describe our beliefs about the true values. If we adopt simulation techniques that can work with such distributions, we can express our expectations of decision outcomes in the same manner. We cannot offer certainty about what the outcome of a decision will be, but we can produce a plausible range for its impacts. Given that uncertainty about decision outcomes is inevitable in practice, this may be the most honest answer science can provide. Common methodologies for implementing such analyses include Monte Carlo simulation and Bayesian Networks, which allow uncertainty to be represented in variable values and, to some extent, even in the processes that translate decisions into outcomes.
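As a minimal illustration of this idea, the following Python sketch (all variables and numbers are hypothetical, not taken from one of our models) converts elicited 90% confidence intervals into probability distributions and propagates them to a decision outcome via Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # number of Monte Carlo model runs

def from_interval(lower, upper, size):
    """Turn an elicited 90% interval into a normal distribution
    (the 5th/95th percentiles lie ~1.645 sd from the mean); a
    skewed distribution may often be more realistic."""
    mean = (lower + upper) / 2
    sd = (upper - lower) / (2 * 1.645)
    return rng.normal(mean, sd, size)

extra_yield = from_interval(0.2, 1.5, n)   # t/ha gained
price = from_interval(150, 400, n)         # $ per t
extra_cost = from_interval(50, 250, n)     # $ per ha

net_benefit = extra_yield * price - extra_cost
print(f"Median net benefit: {np.median(net_benefit):.0f} $/ha")
print(f"90% interval: {np.percentile(net_benefit, 5):.0f} to "
      f"{np.percentile(net_benefit, 95):.0f} $/ha")
print(f"Probability of a loss: {(net_benefit < 0).mean():.0%}")
```

The output is not a point prediction but a plausible range of outcomes, together with the probability that the decision backfires.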

An important obstacle to including uncertainty in simulations is the observation that most people, including experts, are not very good at accurately expressing their state of knowledge in quantitative terms.17,24 Experts are commonly overconfident, meaning they think they know more than they actually do. For instance, an expert who says she is 90 percent confident that a value lies within a specified range is likely to be right less than 90 percent of the time. Overconfidence is only one of a large number of cognitive biases that have been described.25 Decision analysts often attempt to counteract such biases by subjecting experts to so-called calibration training, in which they are made aware of their biases and instructed in techniques that help overcome them.17,21
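A tiny hypothetical check illustrates what calibration means in practice: score an expert's stated 90% intervals against values measured later. A well-calibrated expert should capture the true value roughly nine times out of ten; overconfident experts score considerably lower.

```python
# Hypothetical 90% confidence intervals stated by an expert,
# and the true values measured later.
intervals = [(10, 50), (2, 8), (100, 300), (0.1, 0.9), (20, 40)]
truths = [60, 5, 250, 0.5, 55]

hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
print(f"Hit rate: {hits}/{len(truths)} "
      f"(a calibrated expert would score ~{0.9 * len(truths):.1f})")
```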


Consider Everything that Matters

The capacity to work with uncertain information opens new opportunities for taking holistic perspectives on systems, considering everything that the experts, stakeholders, and decision-makers we work with think should be included. This often covers factors for which no hard data exist or that are difficult to measure in principle. If such factors are expected to affect system dynamics, their expected effects can nevertheless be expressed in quantitative terms. Decision analysts refer to such factors as ‘intangibles,’ and many instances of their successful inclusion in decision models have been reported.17

Not having to be absolutely precise also opens opportunities for expanding the range of outcome dimensions we consider. If, for instance, we want to predict the impacts of a decision to adopt agroforestry practices, we now no longer have to restrict our assessment to outcomes that can be precisely measured, such as the yields of annual crops. Instead, we can now estimate other outcome dimensions, such as the benefits of soil conservation, sequestered carbon, fuelwood, etc., even though in the absence of data these estimates may initially remain quite uncertain. For reliable decision support, inclusion of such factors is critical.


Use the Value of Information to Prioritize Decision-Specific Research

A key concept in Decision Analysis is the Value of Information. It captures the insight that not all uncertainties associated with a decision need to be reduced before a good decision can be made. There are normally many knowledge gaps whose closure would contribute very little additional clarity to the decision challenge. Conversely, some variables typically stand out with substantial information values, meaning that investments in measuring them could significantly facilitate decision-making. Value of Information analysis aims to identify such decision-specific research priorities. It has often been shown that the most pertinent knowledge gaps become apparent only after an initial understanding of the overall decision context has been reached and the uncertainties have been analyzed. One might therefore look at Decision Analysis as a transdisciplinary umbrella for systems analysis, which serves first to appreciate how the entire system works, then to evaluate its performance based on the current state of knowledge, and finally to point out where measurements would be most useful. In this way, Decision Analysis can integrate expert knowledge with available data, providing a much better basis for supporting decisions than either source of information on its own.
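The core arithmetic is simple enough to sketch (hypothetical numbers): the Expected Value of Perfect Information (EVPI), one common Value of Information measure, is the difference between the expected payoff if uncertainty could be resolved before choosing and the expected payoff of committing now to the option that looks best on average.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated net benefits of two decision options across possible
# states of the world; option B is riskier but has more upside.
option_a = rng.normal(100, 30, n)
option_b = rng.normal(120, 90, n)

# Without further information: commit to the option with the best
# expected outcome.
best_expected = max(option_a.mean(), option_b.mean())

# With perfect information: pick the better option in each state
# of the world, then average over states.
expected_with_info = np.maximum(option_a, option_b).mean()

print(f"EVPI: {expected_with_info - best_expected:.1f}")
```

If the EVPI is lower than the cost of further research, gathering more information cannot be worthwhile; variable-specific versions of this calculation direct research effort to the measurements with the highest information value.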


Decision Analysis in Development Practice

For the past four years, we have been applying Decision Analysis methods in research for agricultural development. We started by using the well-established procedures of Applied Information Economics in partnership with the developer of this approach.17 The process starts with participatory analysis of the decision problem. Decision analysts convene decision-makers, stakeholders, and potentially additional experts to jointly develop a decision model (Figure 3). Participants are encouraged to bring up any factors they deem important for the decision, in particular the various costs, benefits, and risks, as well as the objectives and concerns of decision-makers and stakeholders. This information is arranged into a conceptual model, which aims for a balanced representation of the entire decision context. It should not include excessive detail on some parts of the model while disregarding other important parts (Figure 2).

Figure 3. Illustration of the Decision Analysis process used at the World Agroforestry Centre. Decision-makers, stakeholders, and analysts join hands in a participatory analysis of the decision in question. This joint understanding is translated into a transdisciplinary decision model. After parameterizing the model based on the current state of knowledge, using various sources of information, probabilistic simulation can indicate plausible ranges of outcomes for decision alternatives. Models can be refined based on supporting research on key knowledge gaps identified by Value of Information analysis.

The analyst converts the conceptual model into a mathematical model, translating stakeholder inputs into equations as accurately as possible. The members of the model-building team, and possibly additional experts, are critical informants in parameterizing the model, especially where no reliable data are available. Even when there are data, they often need to be filtered or adapted to the given context. All experts are subjected to calibration training to make their estimates as reliable as possible. The major techniques we apply are based on a substantial body of research in cognitive psychology and have been described in detail by Douglas Hubbard and others.17,24,26 Experts are then requested to estimate their state of knowledge for all uncertain variables. With this information, simulations can be run, producing plausible outcome ranges for alternative decision options. In many cases, these simulations reveal a preferred course of action. Where no clearly preferable option emerges, Value of Information analysis can identify the most important knowledge gaps, which can then be narrowed by targeted research. With the new information, the model can be run again. The process is repeated until decision-makers feel confident that they can make a well-informed decision.27
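A compact, purely hypothetical sketch of such a decision model shows the shape of the workflow: calibrated estimates enter as probability distributions, and the simulation returns the distribution of the difference between the options under consideration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000  # model runs

def decision_model(yield_gain, price, extra_cost, risk_of_failure):
    """Net benefit of adopting a practice minus the status quo (= 0)."""
    return yield_gain * price * (1 - risk_of_failure) - extra_cost

outcome = decision_model(
    yield_gain=rng.uniform(0.1, 1.2, n),   # t/ha, calibrated estimate
    price=rng.normal(250, 60, n),          # $ per t
    extra_cost=rng.uniform(80, 200, n),    # $ per ha
    risk_of_failure=rng.beta(2, 8, n),     # probability of failure
)

print(f"Probability that adoption is preferable: {(outcome > 0).mean():.0%}")
print(f"90% interval: {np.percentile(outcome, 5):.0f} to "
      f"{np.percentile(outcome, 95):.0f} $/ha")
```

If the resulting probability is close to neither zero nor one, Value of Information analysis indicates which of the inputs is worth measuring before the decision is taken.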

Applications in Development

Over the past four years, we have used this process in a number of decision contexts. In one of the first applications, we built a simple decision model for estimating the yield benefits that African smallholder farmers can expect from introducing Conservation Agriculture principles. Unlike most other studies, our decision model considered not only biophysical factors that can easily be measured but also less tangible aspects, such as land tenure or access to markets and information. Including these influences, which were considered in the form of calibrated expert estimates, we found that, in many types of socio-economic settings, farmers stand to gain little from introducing Conservation Agriculture, even though their biophysical setting appears favorable.28

Working with teams of scientists involved in water, land, and ecosystem research, we evaluated several potential development decisions, ranging from establishment of a large dam in Laos to the use of payments for ecosystem services to manage urban water supply in Kenya. Addressing a controversial decision in northern Kenya, we modeled plans to ensure the water supply to a dryland city by tapping an aquifer and transporting water through a 100-km pipeline. We convened stakeholder workshops and worked with local experts to model outcomes of this intervention for several stakeholder groups. Our main finding was that implementing this project carried high risks for all stakeholders. Key uncertainties included the feasibility of a commercial water supply business, the extent and valuation of a reduction in infant mortality, and the risk of political interference (Figure 4).27

Figure 4. Distribution of projected overall net impacts of constructing a groundwater-fed water supply pipeline for Wajir, a town in northern Kenya, and key uncertainties determined through Decision Analysis procedures.

We have also evaluated the potential of several agricultural interventions in East Africa, the applicability of a Decision Analysis framework for monitoring and evaluating development projects, and the prospects of strengthening resilience through large-scale irrigation, boreholes for watering livestock, or improving roads with innovative technology. Current projects include a cost–benefit analysis of small reservoirs in West Africa and an assessment of the nutrition impacts of tree-based agriculture in East Africa. We have also reflected on the benefits of using Decision Analysis methods for monitoring the Sustainable Development Goals (SDGs), which could provide a low-cost alternative to large-scale data collection while actually supporting decision-makers aiming to further the cause of the SDGs.29 We have published some of our tools in an open-access analysis package and started exploring the use of Bayesian Networks as an additional Decision Analysis strategy, including for project management.30,31


The Way Forward

We feel confident that the tools and methods of Decision Analysis can lead to major progress in the analysis of complex systems, especially where concrete decisions are contemplated. The ability to make projections even in the absence of precise information opens opportunities to support a much wider range of decisions than would be feasible with a purely data-driven approach. Working directly with decision-makers on the concrete decisions they face can bridge the gap between science and practice, fostering a solution-oriented dialogue that allows science to truly inform decision processes.

This dialogue requires decision-makers and experts to make explicit their expectations of how the impacts of the decision will unfold, with particular focus on trade-offs and risks. This makes it possible to identify potential weaknesses in the chosen intervention and to strengthen it by adjusting its design. Decision models can also explicitly capture decision-makers’ preferences by eliciting, directly from these decision-makers, the value estimates or utility weights to be assigned to various costs and benefits. This can be critically important for capturing real constraints to the adoption of new technologies.
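As a small hypothetical sketch, elicited preference weights can collapse several outcome dimensions into a single utility score for each simulation run (the dimensions and weights below are invented for illustration):

```python
# Weights elicited from a decision-maker: positive for benefits,
# negative for burdens.
weights = {"income": 1.0, "soil_health": 0.6, "labor_demand": -0.8}

def utility(outcomes):
    """Weighted sum of outcome dimensions for one simulation run."""
    return sum(weights[k] * v for k, v in outcomes.items())

print(utility({"income": 120, "soil_health": 30, "labor_demand": 40}))
# 120*1.0 + 30*0.6 - 40*0.8 = 106.0
```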

Decision models can be of value even after a decision has been made. As an intervention is implemented, measurements can be taken on many variables that were uncertain at the outset, allowing continuous updating of impact projections. Expected impacts can be adjusted for the effects of variable factors, such as the weather or political stability, which may strongly influence intervention outcomes. Decision models are useful tools for intervention impact evaluation, because they allow comparison of actual project outcomes with targets that are realistic given the occurrence of influential events beyond the project’s control. Such ‘random’ factors are difficult to account for when impact evaluation relies on before–after comparisons that do not consider causal relationships in the intervention’s impact pathway.

Decision Analysis principles have potential as an entry point to transdisciplinary systems analysis. They allow analyses to start with a coarse understanding of the system of interest before zooming in to the detailed system components on which research is needed. This forms a contrast to traditional multidisciplinary approaches that start with robust research on particular system elements but often struggle to put the various pieces together to generate system understanding.

There have been challenges in applying Decision Analysis in development. Most researchers and many stakeholders in development have been trained in data-driven research approaches, leaving many uncomfortable with providing estimates and with using information that is not thoroughly supported by data. Moreover, when given the freedom to insert into models everything they think worthy of inclusion, stakeholders may come up with models that are far from accurate. They can fail to consider important processes, dedicate attention to unimportant ones, and, intentionally or inadvertently, introduce their personal biases and opinions. Good facilitation can safeguard against this to some extent, but a residual risk remains. Where initial models are wrong and analysts fail to recognize this, the Decision Analysis approach to knowledge generation may not produce useful results. Finally, analysts may introduce their own biases and interpretations into the decision model, because they normally have to make at least some choices when translating participatory models into computer code.

Farmers in Senegal discuss a young, income-generating papaya tree.

Despite these challenges, it is important to recognize that the primary motivation of Decision Analysis is to improve the way people make decisions. Its use therefore only has to lead to better decisions than unaided intuition, which is often the only alternative. Decision models should not be compared to hypothetical resource-intensive research projects serving the same purpose, because such projects are very rarely a realistic option.

Our experience so far has shown that decision analysts should be skilled in facilitation, mathematical modeling, and ideally the subject matter of the model, a combination of skills that rarely coincides in one person. Teams of analysts with complementary skills can be an effective solution. In the longer term, the necessary skill base for wider deployment of Decision Analysis in research for development could be built through a shift in the educational focus of development-related study courses, away from an exclusive reliance on data-driven research methods and towards more systems-oriented approaches. This could produce a generation of decision analysts who could make full use of pragmatic Decision Analysis methods to further the cause of sustainable development.

References

  1. De Groot, RS, Alkemade, R, Braat, L, Hein, L & Willemen, L. Challenges in integrating the concept of ecosystem services and values in landscape planning, management and decision making. Ecological Complexity 7(3), 260–272 (2010).
  2. McConnell, DJ & Dillon, JL. Farm management for Asia: a systems approach (FAO Farm Systems Management Series – 13) (Food and Agriculture Organization of the United Nations, Rome, 1997).
  3. van Ittersum, MK et al. Yield gap analysis with local to global relevance—a review. Field Crops Research 143, 4–17 (2013).
  4. Rosenzweig, C et al. Assessing agricultural risks of climate change in the 21st century in a global gridded crop model intercomparison. Proceedings of the National Academy of Sciences 111(9), 3268–3273 (2014).
  5. Asseng, S et al. Uncertainty in simulating wheat yields under climate change. Nature Climate Change 3(9), 827–832 (2013).
  6. Keating, BA et al. An overview of APSIM, a model designed for farming systems simulation. European Journal of Agronomy 18(3–4), 267–288 (2003).
  7. Van Noordwijk, M, Khasanah, N, Lusiana, B & Mulia, R. WaNuLCAS 4.0. Background on a model of water, nutrient and light capture in agroforestry systems (World Agroforestry Centre – ICRAF, SEA Regional Office, Indonesia, 2011).
  8. Huth, NI, Carberry, PS, Poulton, PL, Brennan, LE & Keating, BA. A framework for simulating agroforestry options for the low rainfall areas of Australia using APSIM. European Journal of Agronomy 18(1–2), 171–185 (2002).
  9. Araya, A et al. Assessment of maize growth and yield using crop models under present and future climate in southwestern Ethiopia. Agricultural and Forest Meteorology 214, 252–265 (2015).
  10. Luedeling, E et al. Field-scale modeling of tree–crop interactions: Challenges and development needs. Agricultural Systems 142, 51–69 (2016).
  11. Tsubo, M, Walker, S & Ogindo, H. A simulation model of cereal–legume intercropping systems for semi-arid regions: I. Model development. Field Crops Research 93(1), 10–22 (2005).
  12. Munier-Jolain, N, Guyot, S & Colbach, N. A 3D model for light interception in heterogeneous crop: weed canopies: Model structure and evaluation. Ecological Modelling 250, 101–110 (2013).
  13. Nair, PKR. An Introduction to Agroforestry (Kluwer Academic Publishers, Dordrecht, 1993).
  14. Coe, R, Sinclair, F & Barrios, E. Scaling up agroforestry requires research ‘in’ rather than ‘for’ development. Current Opinion in Environmental Sustainability 6, 73–77 (2014).
  15. Passioura, JB. Simulation models: Science, snake oil, education, or engineering? Agronomy Journal 88(5), 690–694 (1996).
  16. Clapp, A, DauSchmidt, N, Millar, M, Hubbard, D & Shepherd, K. A survey and analysis of the data requirements for stakeholders in African agriculture (World Agroforestry Centre (ICRAF), Nairobi, 2013).
  17. Hubbard, DW. How to Measure Anything – Finding the Value of Intangibles in Business (Wiley, Hoboken, 2014).
  18. Howard, RA & Abbas, AE. Foundations of Decision Analysis (Prentice Hall, 2015).
  19. Konijeti, GG, Sauk, J, Shrime, MG, Gupta, M & Ananthakrishnan, AN. Cost-effectiveness of competing strategies for management of recurrent Clostridium difficile infection: a decision analysis. Clinical Infectious Diseases 58(11), 1507–1514 (2014).
  20. Claxton, K, Griffin, S, Koffijberg, H & McKenna, C. How to estimate the health benefits of additional research and changing clinical practice. BMJ 351, h5987 (2015).
  21. Fenton, N & Neil, M. Risk Assessment and Decision Analysis with Bayesian Networks (CRC Press, Boca Raton, 2012).
  22. Carley, M. Rational Techniques in Policy Analysis (Policy Studies Institute/Heinemann Educational Books, London, 2013).
  23. Williams, BK & Johnson, FA. Value of information in natural resource management: technical developments and application to pink-footed geese. Ecology and Evolution 5(2), 466–474 (2015).
  24. Tversky, A & Kahneman, D. Judgment under uncertainty: Heuristics and biases. Science 185(4157), 1124–1131 (1974).
  25. Croskerry, P, Singhal, G & Mamede, S. Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Quality & Safety 22(Suppl 2), ii58–ii64 (2013).
  26. Klein, G. Performing a project premortem. Harvard Business Review 85(9), 18–19 (2007).
  27. Luedeling, E et al. Fresh groundwater for Wajir – ex-ante assessment of uncertain benefits for multiple stakeholders in a water supply project in Northern Kenya. Frontiers in Environmental Science 3, Article 16 (2015).
  28. Rosenstock, TS et al. Targeting conservation agriculture in the context of livelihoods and landscapes. Agriculture, Ecosystems & Environment 187, 47–51 (2014).
  29. Shepherd, K et al. Policy: Development goals should enable decision-making. Nature 523(7559), 152–154 (2015).
  30. Göhring, L & Luedeling, E. decisionSupport: Quantitative Support of Decision Making under Uncertainty. CRAN archive [online] (2015). https://cran.r-project.org/web/packages/decisionSupport/.
  31. Yet, B et al. A Bayesian network framework for project cost, benefit and risk analysis with an agricultural development case study. Expert Systems with Applications 60, 141–155 (2016).

Eike Luedeling

Eike is a Senior Decision Analyst at the World Agroforestry Centre in Nairobi, Kenya, and a Senior Scientist at the Center for Development Research at the University of Bonn, Germany. His work focuses...
