IFPRI Discussion Paper 02314

December 2024

Market Information and R&D Investment under Ambiguity
A Framed Artefactual Experiment with Plant Breeding Professionals

Carly Trachtman
Berber Kramer
Jérémy do Nascimento Miguel

Markets, Trade, and Institutions Unit

INTERNATIONAL FOOD POLICY RESEARCH INSTITUTE

The International Food Policy Research Institute (IFPRI), a CGIAR Research Center established in 1975, provides research-based policy solutions to sustainably reduce poverty and end hunger and malnutrition. IFPRI’s strategic research aims to foster a climate-resilient and sustainable food supply; promote healthy diets and nutrition for all; build inclusive and efficient markets, trade systems, and food industries; transform agricultural and rural economies; and strengthen institutions and governance. Gender is integrated in all the Institute’s work. Partnerships, communications, capacity strengthening, and data and knowledge management are essential components to translate IFPRI’s research from action to impact. The Institute’s regional and country programs play a critical role in responding to demand for food policy research and in delivering holistic support for country-led development. IFPRI collaborates with partners around the world.

AUTHORS

Carly Trachtman (c.trachtman@cgiar.org) is an Associate Research Fellow in the Markets, Trade, and Institutions (MTI) Unit of the International Food Policy Research Institute (IFPRI), Washington, DC.

Berber Kramer (b.kramer@cgiar.org) is a Senior Research Fellow with IFPRI’s MTI Unit, Nairobi, Kenya.

Jérémy do Nascimento Miguel (jeremy.do-nascimento-miguel@u-bordeaux.fr) is a Postdoctoral Researcher in Economics at the University of Bordeaux, Bordeaux, France.

Notices

1. IFPRI Discussion Papers contain preliminary material and research results and are circulated in order to stimulate discussion and critical comment. They have not been subject to a formal external review via IFPRI’s Publications Review Committee.
Any opinions stated herein are those of the author(s) and are not necessarily representative of or endorsed by IFPRI.

2. The boundaries and names shown and the designations used on the map(s) herein do not imply official endorsement or acceptance by the International Food Policy Research Institute (IFPRI) or its partners and contributors.

3. Copyright remains with the authors. The authors are free to proceed, without further IFPRI permission, to publish this paper, or any revised version of it, in outlets such as journals, books, and other publications.

Abstract

Investments in R&D are often made under ambiguity about the potential impacts of various projects. High-quality, systematic market research could help reduce that ambiguity, including in investments in agricultural research-for-development, such as plant breeding. Using an online framed artefactual experiment with a diverse sample of breeding experts working in various disciplines across the world, we ask how market information and information quality influence breeding experts’ investments in prospects with ambiguous returns, and how the quality and source of information affect willingness to pay for market information. We find that providing market information leads participants to make more prioritized (rather than diversified) decisions. However, participants do not consider differences in information quality, instead over-extrapolating from noisy and biased information signals. Finally, while most participants are willing to use experimental funds to purchase market information, around half prefer lower quality information even if higher quality information is available at the same price. We conclude that prioritizing R&D projects with greater impact opportunities will require better awareness among decision-makers of quality issues in various types of market research.
Keywords: Agricultural R&D, plant breeding, investment prioritization, ambiguity, experiment

Acknowledgments

This work was undertaken as part of the CGIAR Research Initiative on Market Intelligence. The CGIAR Initiative on Market Intelligence brings together strategic information on future crops, market segments and trait priorities aligned to the needs and preferences of farmers, agri-business and consumers. We would like to thank all funders who supported this research through their contributions to the CGIAR Trust Fund: https://www.cgiar.org/funders/. The experiment was approved by IFPRI’s Institutional Review Board as protocol EPTD-22-1268. We thank Robert Andrade, Melanie Connor, Matty Demont, Guy Hareau, Gaudiose Mujawamariya, Pauline Mundi, Dina Najjar, Valerien Pede, and Vivian Polar for their support in recruiting participants for this study. We are grateful for Ruth Hill’s valuable comments on a previous version of this manuscript. Any opinions expressed here belong to the authors, and do not necessarily reflect those of IFPRI, Market Intelligence or CGIAR. All errors are our own.

1 Introduction

Each year, the public sector invests millions of dollars in agricultural research-for-development for the Global South. For example, since its inception in 1971, the CGIAR has spent around $60 billion (in present value terms), with a large share of the funds spent on improving staple crops for smallholder farmers through plant breeding [Alston et al., 2022]. Ex post evidence has shown many, though not all, of these investments to be highly profitable [Gollin et al., 2021]. However, there is ambiguity about the potential social return on investment of various agricultural R&D projects.
Rather than invest funds in systematic market research to identify breeding pipelines with the greatest impact opportunities and potential demand (an important pathway to impact), breeding programs typically manage this ambiguity by diversifying funds across a large number of pipelines, potentially reducing the impacts of agricultural R&D in the Global South. As environmental conditions worsen and breeding program objectives become more complex over time, a key question is how reducing this ambiguity could help breeding programs concentrate funds in the most promising potential investments.

Historically, breeders and other biophysical scientists have played a central role in setting breeding objectives in public sector programs. Yet, with limited information about consumers’ wants and needs [Orr et al., 2018], priorities were often set based on informal relationships with small sets of farmers or other customers, rather than through more systematic, scientific approaches [Cobb et al., 2019]. Extrapolating population-level needs and preferences from a small, potentially selected sample of farmers is problematic, and yet such small, localized “participatory” approaches are still seen as a gold standard in many breeding programs. Moreover, efforts to conduct market research at a larger scale are perceived to compete for funds that could instead be used for plant breeding itself. In recent years, several breeding programs have started encouraging more “transdisciplinary” decision-making processes, which involve a wider range of stakeholders who may have a broader sense of diverse farmers’ and consumers’ needs and preferences, including social scientists [Amoak et al., 2023]. However, the involvement of a more diverse set of experts does not necessarily mean that the way information is gathered and combined to make breeding investment decisions has become more systematic.
Indeed, in a recent global survey of plant breeding professionals, less than 45% of respondents say that their team uses a systematic process for collecting and sharing market information in designing target product profiles [Rice et al., 2024]. Moreover, existing power dynamics and attitudes appear to have inhibited some of these stakeholders, and especially social scientists, from having a truly meaningful say in these processes [Cullen et al., 2023].

One way to encourage these teams to systematically gather and combine market information to direct investments towards breeding pipelines with larger impact opportunities is to incentivize development of widely-adopted varieties with strong impacts in targeted market segments. In Rice et al. [2024], only around half of CGIAR and National Agricultural Research and Extension Systems (NARES) breeding professionals report that adoption rates of developed varieties influence job promotions, as compared to 88% of private sector respondents. Such incentives would be less effective, though, if high-quality market information on impact opportunities is unavailable, or if decision-makers over-extrapolate from lower-quality market information. For example, due to statistical sampling properties, research that uses a larger, more representative sample may provide a more precise estimate of what farmers value than a similar exercise with only a few farmers. Yet due to higher costs, teams may not invest scarce funds in obtaining higher quality evidence. Similarly, if decision-makers place less weight on information coming from disciplines in which they have less training (for instance, social science in the case of plant breeders, or natural sciences in the case of economists), they may not invest in working with actors from those other disciplines to collect market information that could be of high quality.
In this paper, we study whether and how breeding professionals from various disciplines use market information to prioritize investment projects with greater potential return on investment. Specifically, by means of an online incentivized experiment with ambiguity about the returns on investment in different breeding pipelines or investment opportunities, we address four main research questions. First, does reducing ambiguity through the provision of market information change the degree to which breeding professionals prioritize between investment opportunities? Second, do respondents internalize differences in information quality and differences in source? Third, are respondents willing to spend scarce resources on market information? Fourth, does this willingness to pay depend on information quality and/or source? Additionally, we explore heterogeneity in investment behavior based on observable respondent characteristics.

Given the challenge of exogenously varying program incentives and the availability of information in a high-stakes real-world decision environment, we implement a framed artefactual experiment. The experiment asks breeding professionals involved in plant breeding for the Global South to allocate investment funds between two pipelines with varying probabilities of future success. During a first set of rounds, we vary participants’ access to information about the probability of success of each pipeline, as well as the quality of this information. After completing these rounds, where market information is provided free of charge, participants are offered the opportunity to purchase market information, with the prices and quality of available market information varying between rounds. We also vary between participants the source of information (for instance, whether the information is framed as coming from an agronomist or an economist) to explore whether there might be biases in how information is valued depending on its source.
Our main results are as follows. First, we find that under ambiguity, breeding professionals tend to split funds equally between pipelines, while they use ambiguity-reducing market information to prioritize more between investment opportunities. This is especially the case when ambiguity is removed entirely such that participants know the true success probability of each investment option, but even signals that reduce ambiguity only to some extent help participants concentrate their investments. At the same time, we find that participants do not internalize differences in information quality. If participants were fully internalizing differences in information quality when making decisions, we would not expect to see stronger investment prioritization in response to lower quality signals. Instead, even when evaluating the same investment options, participants often prioritize more when low-quality signals are provided, in a pattern suggesting that they take the estimated success probabilities at face value, rather than incorporating differing uncertainty or bias. We do not see systematic differences in investments based on the source of information. If anything, participants receiving information from an economist are slightly more likely to trust biased signals compared to those receiving their information from an agronomist.

Additionally, participants are willing to pay for market information to inform their investments. When signals are equally priced, the most commonly purchased piece of information is the highest quality (precise and accurate), but around half of respondents choose to purchase one of the lower quality signals offered. Purchasing low-quality signals in this scenario cannot be fully explained by low task comprehension, suggesting inherent preferences for certain research methods, such as small participatory trials, even when they come with lower precision and accuracy.
Finally, while the source of information generally does not affect willingness to pay, participants do purchase additional pieces of market information when matched to an expert from their own discipline (i.e., ‘agronomist’ for breeders and natural scientists, ‘economist’ for social scientists) in rounds where all market information is inexpensive. Taken together, these results suggest that more systematic evidence-based decision-making and prioritization in breeding programs will require more than just incentives; it will also require addressing a revealed preference for lower-quality (i.e., biased and inaccurate) evidence among breeding professionals.

This study most directly contributes to the literature in two key domains. First, we add to the body of empirical and synthesis work studying how breeding programs and professionals incorporate various types of market information in decision-making processes [Rice et al., 2024, Falloon et al., 2015, Egesi et al., 2024, Tarjem, 2023, Ojwang et al., 2023, Cairns et al., 2022, Ashby and Lilja, 2004, Ceballos et al., 2021, Occelli et al., 2024, Puerto, 2024]. While most of this work consists of retrospective case studies assessing how a specific type of market information (like climate projections or gender-disaggregated preferences) is incorporated in a single breeding program, we introduce experimental evidence of how a global and diverse population of breeding professionals uses market information more generally, and how these professionals respond to exogenous variation in the quality of this information. Second, this study contributes to the experimental literature exploring the impacts of information on decision-making under ambiguity.
Much of the existing literature uses laboratory experiments with a standard subject pool of university students to explore how information of differing qualities and types affects ambiguity preferences [Aggarwal and Mohanty, 2022, Arad and Gayer, 2012, Peysakhovich and Karmarkar, 2016, Bricet, 2018]. More relevant to our work, Batteux et al. [2023] and Couture et al. [2024] conduct framed experiments in which individuals choose to make risky investments under the provision of more and less precise information signals. Our work builds on this by also introducing biased information, exploring participants’ willingness to pay for information signals of varying quality, and by using a non-standard subject pool of breeding professionals from all over the world.

The rest of the paper proceeds as follows. Section 2 describes the experimental design, including the experimental task and the treatments that we varied within and between participants to address our research questions. Section 3 introduces strategies used for sampling and recruitment, and describes the resulting experimental sample. Section 4 presents results on how participants’ investment decisions respond to the provision of market information. Section 5 presents results regarding participants’ decisions to purchase such market information. Section 6 concludes.

2 Experimental Design

2.1 Investment Task

The experiment was conducted online and was framed as a series of 12 investment tasks (summarized in Table 1). Participants started with a first set of two “fundamental” rounds where they made investment decisions under both “no ambiguity” and “full ambiguity” scenarios (Part 1). This was followed by a second set of six rounds during which participants received free “market research” information (of varying quality across rounds) to partially reduce the ambiguity and help inform their investment decisions (Part 2).
A third and final set of four rounds let participants first choose whether to purchase such market information before making their investment decision (Part 3). Figure 1 depicts the investment tasks in these three parts.

In each round, a participant was presented with two independent breeding pipelines, named Pipeline 1 and Pipeline 2, and their task was to allocate their investment budget between the two pipelines (there was no outside option). Earnings from a given round would equal the sum of the shares invested in each pipeline multiplied by the payment from that pipeline, which depends on the market’s use of pipeline varieties. A pipeline could either develop varieties highly valued in the future, which we framed as 30 tons of seed being used by the market (and a $30 payoff to the participant); or the varieties developed within a pipeline could end up being moderately valued, with only 10 tons of seed being used (and a $10 payment to the participant).1 For instance, a participant investing 60% in Pipeline 1, which turns out to be a success, and 40% in Pipeline 2, which turns out to be a failure, would earn 0.6*30+0.4*10=$22.

Participants allocated investment shares to each pipeline using two sliders representing the two pipelines. For simplicity, shares could be moved in increments of 0.1 only, so the value invested in each pipeline could be any value in {0, 0.1, 0.2, . . . , 1} and shares were required to sum to 1. At the beginning of each round, the default positions for both sliders were set to zero (and hence had to be moved to make them sum to 1), so as not to bias respondents toward any particular investment split. Tables including all information about each pipeline, including payoffs under the two possible outcomes, were included in every investment task screen for easy reference. Notably, besides one practice example, participants were not shown whether each pipeline resulted in success or failure after making their investment choice.

1 For brevity, we refer to these two outcomes here as “success” and “failure,” respectively, though this terminology was not presented to participants.

2.1.1 Market Information

Critically, in most rounds, participants did not know the probability that each pipeline would yield a success, making the payoff of each investment ambiguous. In Part 2, participants were presented with signals based on “market research”, which provided an estimate of the probability of success of each pipeline, and in Part 3, they were offered the opportunity to purchase these research-based signals. We refer to the probabilities of success of Pipelines 1 and 2 as Z1 and Z2 respectively, and hence the signals provided described potential values of these two parameters. The signals were framed as coming from data collection exercises, as the main type of market research that breeding teams might use in a real-world context.

There were four possible signal types that participants would encounter. These signals varied in terms of their precision and accuracy levels. The text for each signal type is as follows; between-subject variations in language are indicated in parentheses with a slash mark (/).

1. Precise, Accurate Signal (PA): “An (agronomist/economist) conducted a rigorous scientific study with 2,000 farmers to estimate Z1 and Z2. True values of Z1 and Z2 are not observable, and so the study’s estimates may not be correct. However, because the study’s methodology is quite rigorous, their estimates will not be very noisy. Specifically, estimates of both Z1 and Z2 will be no greater than 10 more or less than their true values.”

2. Imprecise, Accurate Signal (IA): “A team of breeders (had a series of conversations/performed a participatory varietal selection exercise) with 20 local (value chain stakeholders/farmers) to estimate Z1 and Z2. True values of Z1 and Z2 are not observable, and so the exercise’s estimates may not be correct.
Given the somewhat small sample of (value chain stakeholders/farmers), the estimates are a bit noisy. Hence their estimates of both Z1 and Z2 will be no greater than 30 more or less than their true values.”

3. Precise, Biased Signal (PB): “An (agronomist/economist) conducted a rigorous scientific study with 2,000 farmers to estimate Z1 and Z2. True values of Z1 and Z2 are not observable, and so the study’s estimates may not be correct. Importantly, due to the way the study’s sampling was done, the set of farmers who evaluated potential varieties from Pipeline (1/2) were much richer than the average farmers. Hence (Z1/Z2)’s estimate will be upward biased by about 5 percentage points. At the same time, because the study’s methodology is quite rigorous, their estimates will not be very noisy. Specifically, estimates of both Z1 and Z2 will be no greater than 10 more or less than their expected values.”

4. Imprecise, Biased Signal (IB): “A team of breeders (had a series of conversations/performed a participatory varietal selection exercise) with 20 local (value chain stakeholders/farmers) to estimate Z1 and Z2. True values of Z1 and Z2 are not observable, and so the exercise’s estimates may not be correct. Importantly, due to the way the study’s sampling was done, (stakeholders who evaluated potential varieties from Pipeline (1/2) are those who tend to cater to richer than average consumers/the set of farmers who evaluated potential varieties from Pipeline (1/2) were much richer than the average farmers). Hence (Z1/Z2)’s estimate will be upward biased by about 5 percentage points. Given the somewhat small sample of (value chain stakeholders/farmers), the estimates are a bit noisy.
Hence their estimates of both Z1 and Z2 will be no greater than 30 more or less than their expected values.”

To summarize mathematically, for k ∈ {1, 2}, the PA signal’s estimate is drawn from [Zk − 10, Zk + 10], the IA signal’s estimate is drawn from [Zk − 30, Zk + 30], the PB signal’s estimate is drawn from [Zk − 5, Zk + 15], and the IB signal’s estimate is drawn from [Zk − 25, Zk + 35].2 The PA signal is the highest quality, given that it provides the smallest range of possible values for the true values of Z1 and Z2, and does not require a mental correction to recenter one pipeline’s estimate around its true mean. Conversely, the IB signal is the lowest quality, as it both provides a very large possible range of values based on the point estimate provided, and requires a mental adjustment to recenter one of the estimates around its true value. Figures 1c and 1d show how market information was presented to participants. Estimates from the market research would be placed in the tables describing the pipelines, though they were clearly labeled as being estimates, rather than true values. In rounds where multiple estimates were provided, the number coming from each would be clearly indicated. The accompanying text explaining the signal type was presented below the pipeline information tables, but above the decision slider, to make sure participants did not miss it.

2.1.2 Purchasing Market Information

In the third set of rounds (Part 3), participants were endowed with 10 tokens that they could use to either purchase market information signals or invest in “varietal development.” The idea was to mimic the trade-off that breeding teams face in reality, where limited resources can either go towards conducting high quality market research or towards the breeding process itself. An investment in varietal development corresponds to an increase in the payoffs under both the success and failure outcomes of both pipelines.
Each token invested in varietal development increases payoffs by $0.50. For example, investing all 10 tokens in varietal development increases payoffs from $10 to $15 under failure and from $30 to $35 under success. Investing only 6 tokens in varietal development increases payoffs of each pipeline to $13 under failure and $33 under success, but investing the remaining 4 tokens in market research gives the participant information on the probability of success for both pipelines.

The progression of information that a participant would see in one of these rounds can be seen in Figure 2. First, participants would see a menu of possible pieces of market research they could purchase (Figure 2a). The menu summarized the framing, information about accuracy and precision, and the price. No new types of signals were introduced; all signal types correspond to one of the four signals with which the participant had already gained experience from the second set of rounds with free market information (as described above). Once purchase decisions were finalized, participants would then see an investment task screen, similar to what they were shown during the rounds with free market information (Figure 2b). The updated payoffs under success and failure (depending on how much was invested in varietal development) would appear in the table. In case participants forgot the attributes of the signal(s) they purchased, they could hover over the name of the market research project and be reminded of that information (Figure 2c).

2.2 Experimental Variation: Within Participants

The experiment has both within- and between-subject variation in treatments. Table 1 summarizes treatments with within-subject variation, which varied between experimental rounds. First, we vary what type of information is available about the success probabilities of each pipeline. In the first part, we either provide no information (round 1), or the true success probabilities (round 2).
These rounds serve as useful benchmarks of how participants invest when the returns are fully ambiguous or not ambiguous at all. In the second part, participants receive free market information. They either receive one information signal, with one round for each of the four signal types described above (rounds 3 to 6), or two information signals simultaneously (rounds 7 and 8). This allows us to analyze how participants respond to information of different quality types and how they respond to receiving more than one signal. The latter point is critical given that breeding teams often have to aggregate information across multiple, potentially conflicting, sources of evidence.

In the third part, participants have the opportunity to purchase any signal type, with the price of different signals varying across rounds. In round 9, all signals are relatively cheap (2 tokens), and hence participants can choose to buy all five available signals (two PA signals plus one signal of each other type). In round 10, all signals are relatively expensive (8 tokens), meaning that participants can only choose up to one signal to purchase. In rounds 11 and 12, we introduce price premiums for improved precision and accuracy, respectively, with the higher quality signal costing twice as much as a lower quality signal. In both rounds, participants can choose to invest in one PA signal for 8 tokens or buy up to two IA signals (round 11) or two PB signals (round 12) for 4 tokens each.

A final treatment that is varied within subjects relates to the set of pipelines evaluated and their true probability of success.

2 In practice, signal values were drawn from a uniform distribution over the specified range and rounded to the nearest whole number in {0, 1, . . . , 100}. Signal values were censored to fit between 0 and 100. This was mentioned to participants in rounds where 0 or 100 appeared.
In Part 2, we have two sets of pipelines that participants may be evaluating, and we randomize which set of pipelines they evaluate in each round. Ideally, we would want to see how individuals invest under different information signals holding the set of pipelines evaluated constant. However, we feared that participants could realize that they were evaluating the same investment multiple times, and that this would influence their behavior. We also wanted to make sure the results did not depend on the specific underlying investments. Given that having more than two sets of pipelines to evaluate for each round was computationally complex and hence may have made the software work slowly, we randomized which of the two was evaluated independently for each round.

2.3 Experimental Variation: Between Participants

We randomized at the participant level the framed source of available market information. For precise (P) signals, we randomized whether the information was produced based on research conducted by an economist or an agronomist. While this information is not truly meaningful in the context of the investment task, we hypothesized that implicit bias against other disciplines may cause participants to subconsciously put less trust in estimates coming from scientists in said disciplines. However, we decided that randomizing this within participants had a significant potential to induce experimenter demand effects, and hence randomized between participants. Similarly, for the imprecise (I) signals, we independently randomized whether the information was framed as coming from farmers or value chain stakeholders, to see if there were any implicit preferences for farmers’ versus other stakeholders’ opinions.
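To make the signal structure concrete, the draws described in Section 2.1.1 can be sketched in a few lines of code. This is an illustrative reconstruction, not the study’s actual implementation; in particular, in the experiment the +5 point bias applied to only one of the two pipelines in biased rounds, whereas this sketch simply applies the bias whenever a biased signal type is drawn.

```python
import random

# Half-width of the uniform noise and additive bias, in percentage points,
# per the signal descriptions in Section 2.1.1.
SIGNALS = {
    "PA": (10, 0),  # precise, accurate:   drawn from [Z-10, Z+10]
    "IA": (30, 0),  # imprecise, accurate: drawn from [Z-30, Z+30]
    "PB": (10, 5),  # precise, biased:     drawn from [Z-5,  Z+15]
    "IB": (30, 5),  # imprecise, biased:   drawn from [Z-25, Z+35]
}

def draw_estimate(z_true, signal_type, rng):
    """Draw one market-research estimate of a true success probability
    z_true (0-100): a uniform draw around the (possibly biased) center,
    rounded to a whole number and censored to [0, 100]."""
    half_width, bias = SIGNALS[signal_type]
    raw = rng.uniform(z_true + bias - half_width, z_true + bias + half_width)
    return min(100, max(0, round(raw)))
```

For example, with a true success probability of 60, a PA draw always lands in [50, 70], while an IB draw can land anywhere in [35, 95], which is why taking IB estimates at face value risks substantial over-extrapolation.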
2.4 Measuring Investment Prioritization

Our main outcome variable is the degree of prioritization in a participant’s investments, that is, the extent to which a participant chooses to allocate a greater share to (and places more priority on) one investment pipeline compared to the other pipeline. To quantify the degree of prioritization, we use the absolute difference of the shares invested in each pipeline. Mathematically, this is: |Share in Pipeline 1 − Share in Pipeline 2|. The greater the value of this variable, the more prioritized one investment is over the other; if funds are equally split with 0.5 invested in each pipeline, this variable takes on a value of 0, and if the entire budget is invested in one pipeline, this variable takes on a value of 1. In between these two extremes, the prioritization metric is monotonically increasing in max(Share in Pipeline 1, Share in Pipeline 2).

2.5 Procedures

The experiment was administered online through Qualtrics. We estimated the experiment to take about 30-45 minutes to complete, and informed participants that they would earn between US$30-55: a $20 participation fee to be awarded upon completing the experiment, and an additional $10-35 based on decisions made within the experiment. Specifically, we informed participants that they would be paid based on their earnings in one of the 12 rounds, and that this round would be randomly selected after completing all rounds of the experiment. To send these payments, we provided digital Visa gift cards, which they could redeem with any company accepting payments using Visa credit or debit cards.3 Participants from a few countries in which the provider does not offer digital gift cards were informed that they were not eligible to participate in the experiment.4 After providing consent to participate, participants answered around 10 questions about their demographic characteristics and their involvement in the breeding process.
Next, they completed a module with a first set of introductory experimental instructions that contextualized the investment task and provided directions on how to complete it. Comprehension questions were asked periodically throughout the module. If participants got any of these questions wrong on their first try, they had to repeat that section of the module. If they got a question wrong twice, they were shown the correct answer with an accompanying explanation. Participants then engaged in the 12 rounds of investment tasks, which were broken up into the three parts described above. The three parts were always presented in the same order, though the order of rounds within each part was randomized between subjects. Before each part, an additional information module introduced the relevant concepts for that part.

3 Sample Description

The study targeted “plant breeding professionals”, defined as either plant breeders or other experts working directly with plant breeders. The goal of imposing a somewhat broad definition was to include not only breeders but also natural and social scientists, as well as seed system experts, who may participate in decision-making processes within breeding programs or serve on transdisciplinary breeding teams. Given the lack of a clear, existing list of plant breeding professionals to draw from, we implemented respondent-driven sampling methods to construct our sample. Recruitment began in October 2023 and was initially done by advertising the experiment at meetings of plant breeding professionals who predominantly serve the Global South (including the 2023 International Rice Congress and the African Plant Breeders Association Conference), as well as through participant referrals from CGIAR colleagues.5 Later, we limited access to individuals who had already completed a concurrently running survey (promoted through the same channels) with the same target population and who appeared to genuinely work in plant breeding.
We closed the experiment in June 2024. Characteristics of the sample can be found in Table 2. Around 61% of the sample consists of plant breeders, whereas around 25% are other natural scientists, 6% social scientists, and 9% other types of seed system experts. The majority of respondents work in breeding programs in either South Asia (e.g., Bangladesh, India, Pakistan), Eastern/Central Africa (e.g., Kenya, Malawi, Zambia), or Western/Central Africa (e.g., Ghana, Nigeria, Côte d'Ivoire). The 22% of the sample working outside these regions is made up of participants working in Central/West Asia and North Africa (7% of the total sample), Latin America (4.5% of the total sample), and various other regions. Most come from public sector breeding programs, with 45% from NARES institutions and 25% from CGIAR Centers. The rest of the sample is evenly split between academic institutions and private sector companies (primarily regional seed companies). The most common crops on which respondents work are key staples: rice, wheat, and maize.

3 In practice, for respondents in a select number of countries, we needed to shift to other types of gift cards. This depended on what was available in each country through the rewards provider we worked with, and participants were made aware of this as part of the consent form.
4 This list includes: Afghanistan, Belarus, Central African Republic, Cuba, Democratic Republic of the Congo, Ethiopia, Iran, Iraq, Kosovo, Lebanon, Libya, Mali, Myanmar, Nicaragua, North Korea, Papua New Guinea, Russia, Somalia, South Sudan, Sudan, Syria, Tibet, Timor-Leste, Uganda, Ukraine, Vanuatu, Venezuela, Yemen, and Zimbabwe.
5 While we did not technically require that participants work in a breeding program serving the Global South (as opposed to the Global North), our recruitment strategy meant that in practice almost all respondents did.

Moreover, only 7.5% of respondents do not work on at least one of the staple crop
categories listed in the table. Around 71% of respondents are male, consistent with science, including plant breeding, historically being a male-dominated field. Respondents are also highly educated, with 40% holding a master's as their highest degree and 42% holding a PhD, which makes sense given the technical expertise required in plant breeding. Most participants are likely mid-career, with around 59% of respondents between the ages of 35 and 54. Finally, the distribution of participant nationalities is very similar to the distribution of the countries in which participants work. Around two thirds of participants likely completed the survey in one sitting, which we define as the time from the first click to survey completion being less than 2 hours. Other respondents may have opened the link and then decided to complete the survey later when they had more time, meaning they may still have completed the survey in one sitting, but this is more difficult to assess. Among those who completed the survey in one sitting, participants took around 1 hour on average. Hence we expect most participants put significant effort into this task. Given that most participants are highly educated, we expected a high task comprehension rate as well, despite the task's complexity. Of the 7 comprehension questions assessed, both the mean and median respondent answered 5 questions correctly on their first attempt, and 6 questions correctly by their second attempt. Hence we believe that most participants understood the exercise relatively well, though perhaps not perfectly.

4 Results: Effect of Market Information on Investment Decisions

4.1 Responses to Provision of Market Information

First, we explore whether the provision of any information causes participants to change their investments.
Specifically, we compare round 1, where the returns to each pipeline investment are fully ambiguous, to round 2, where the returns to each investment are unambiguous, and to rounds 3-8, where returns are somewhat ambiguous. Figure 3 shows the average degree of investment prioritization in each round. Participants concentrate their investments more in one pipeline as they face less ambiguity. Under full ambiguity, the average degree of prioritization is 0.19. Note that a value of 0.2 would correspond to an investment of 0.6 in one pipeline and 0.4 in the other; hence, on average, participants invest fairly similar shares in both pipelines under this condition. In contrast, when there is no ambiguity and participants know the true success probability of each investment, their average degree of prioritization more than doubles, to 0.44. The average degree of prioritization sits between these two scenarios, at 0.36, in Part 2, where market information reduces ambiguity only partially. We can quantify these differences through regressions, as presented in Table 4, where we also control for individual, round order, and investment quality (which pipeline set was evaluated) fixed effects. The results reaffirm the patterns in the raw data: when ambiguity over the returns to each investment decreases, prioritization between investments increases. Alleviating the ambiguity partially by providing market information increases the degree of prioritization by about 0.15, whilst alleviating it fully leads to an increase in prioritization of 0.25. This is equivalent to shifting 7.5% or 12.5% (respectively) of one's investment budget from the pipeline with the smaller investment share to the pipeline with the larger investment share.
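The prioritization metric from Section 2.4 and the arithmetic linking a change in prioritization to the share of budget shifted can be made concrete in a short sketch (a minimal illustration with names of our choosing, not the paper's estimation code):

```python
def prioritization(share_1, share_2):
    """Degree of prioritization: the absolute difference between the
    budget shares invested in the two pipelines (Section 2.4)."""
    assert abs(share_1 + share_2 - 1.0) < 1e-9, "shares must exhaust the budget"
    return abs(share_1 - share_2)

def budget_shift(delta_prioritization):
    """Because prioritization = |s1 - s2| with s1 + s2 = 1, moving a
    fraction x of the budget from the smaller-share pipeline to the
    larger-share one raises the absolute difference by 2x. A change in
    prioritization of delta therefore corresponds to shifting delta / 2
    of the budget."""
    return delta_prioritization / 2

# An equal split is fully diversified; an all-in bet is fully prioritized.
assert prioritization(0.5, 0.5) == 0.0
assert prioritization(1.0, 0.0) == 1.0

# The regression estimates above, +0.15 (some ambiguity) and +0.25
# (no ambiguity), correspond to shifting 7.5% and 12.5% of the budget.
assert budget_shift(0.15) == 0.075
assert budget_shift(0.25) == 0.125
```

The same halving rule underlies the later interpretation of Table 10, where increases in prioritization are translated into shares of the budget moved between pipelines.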
Additionally, the estimated coefficients are statistically different from each other at the 1% level; hence having “no ambiguity” induces more prioritization than having “some ambiguity”.6 In sum, these findings suggest that participants do indeed use ambiguity-reducing market information to prioritize one pipeline in their investments.

6 This comparison between “full” and “some” ambiguity must be interpreted with caution, because the investment options without ambiguity were different from the investment options in the scenario with “some ambiguity”. Hence, while it is valid to compare both to the “full ambiguity” case (where the underlying probabilities of success could theoretically be anything), it may not be valid to directly compare the “some ambiguity” and “no ambiguity” conditions.

4.2 Responses to Signal Quality

Next, we ask whether participants adjust for differences in signal quality. In rounds 3-8, where market information provides estimates of the probability that a pipeline produces widely adopted varieties, participants evaluated one of two sets of underlying investments. If participants were mentally adjusting for differences in signal quality, we would expect roughly the same degree of prioritization regardless of the type of information provided, or at least not greater prioritization in response to lower quality signals. However, Figure 4 shows that participants do not fully adjust their investments in response to signal quality. For instance, participants receiving imprecise signals tend to prioritize more than those receiving precise signals (unless two imprecise signals are provided). We also see this pattern in the regressions presented in Table 5, where we compare the degree of prioritization under the receipt of various market information signals to the omitted group of receiving no signals (full ambiguity).
In column one, we include dummy variables for the unique combinations of signals provided during a round, while in column two, we include indicators of having received a given signal type for each of the four types, plus a separate indicator for having received multiple signals. Controlling for the investment being evaluated and for individual and round order fixed effects, we observe systematic differences in investment prioritization based on signal type. Notably, in column 1, the coefficient on receiving 1 Precise/Accurate signal is statistically different from receiving 1 Imprecise/Accurate signal (at the 10% significance level), receiving 1 Imprecise/Biased signal (at the 5% level), and receiving 2 Imprecise/Accurate signals (at the 1% level), but not from receiving 1 Precise/Biased signal. This suggests that participants systematically over-extrapolate in response to noisy signals rather than biased ones. Additionally, receiving 2 Imprecise/Accurate signals may confuse respondents, causing them to prioritize much less. This is confirmed in column 2, where we observe a significant negative impact of receiving two signals. Some of this confusion may stem from cases where the signals “disagree” about which pipeline is more profitable. Looking at the rounds in which two signals are provided, for each of the two investment sets, the only case of signal disagreement about which pipeline has a higher success probability is the 2 Imprecise/Accurate signals round for one set of pipelines. We see in Figure 7a that prioritization is especially low in this case.

4.2.1 Response Heterogeneity

Next, we ask whether there is heterogeneity in responses to signal quality based on observable characteristics. We explore heterogeneity based on gender (Table 6), education level (Table 7), institution type (Table 8), and area of expertise (Table 9). The results are also summarized visually in Figure 5.
Men and women tend to have similar investment patterns, though men seem to prioritize slightly more on average. A notable difference is that under perfect information, women make significantly less prioritized decisions, and on average do not treat the non-ambiguous signal differently from ambiguous signals. Education does not seem to matter much either. The only notable difference is that experts with less than a master's degree seem to have both a higher and a more consistent level of prioritization across signal types. In terms of institution types, we see a much more dramatic response to differences in signal quality from individuals at academic and private sector institutions. Academics in particular prioritize very little when they have no information. In terms of expertise, social scientists also seem particularly sensitive to signal quality differences. This suggests that simply adding social scientists to teams may not increase attention to the quality of evidence. We may also be concerned that some of the failure of participants to adjust to information quality differences could be driven by them not understanding the experiment or not taking the task seriously. However, in Figure 6 we see that this does not seem to be the case. Participants who understood the task better (answering at least 6 of the 7 comprehension checks correctly by the second try) tend to invest similarly to those with a lower level of comprehension. The only difference is that those with a better understanding tend to prioritize less when they have no information. Time, a proxy for experimental effort, does seem to be associated with investment decisions. Those who take longer to finish (still omitting anyone taking over 2 hours) seem more responsive to differences in signal type. Participants who take less than 40 minutes are less responsive to signals and prioritize investments more overall.
Even so, this does not mean that the failure to adjust for quality differences was simply the result of people moving through the exercise too quickly and not paying attention.

4.2.2 Explaining Signal Response

Why do we see large differences in investment decisions depending on the signal type provided? That is, what is it about the information presented that leads respondents to invest differently? First, it is useful to note that for the two unique sets of pipelines evaluated, the same signal types do not inspire the most and least prioritized investments. We see in Figure 7 that while the imprecise/accurate signal inspires the highest level of prioritization in investment set 1, the same signal inspires relatively less prioritization in investment set 2. This is also true for the other signal types. Hence the differences in investment are likely not due to the contextual information describing the signal, which does not vary between investment sets. Instead, it seems that respondents accept the estimated success probabilities at face value, and do not account for the bias and noise in lower quality signals. In Figure 8, we calculate the difference between the success probabilities of the two pipelines implied by the signals provided in a given round, and plot this against the average degree of prioritization among participants. The idea is that if individuals respond only to the signal values, rather than to the bias and noise introduced in generating the signals, then the greater a pipeline's estimated success probability relative to the other pipeline's, the more they will prioritize that pipeline, regardless of signal bias and noise. This indeed appears to be roughly the case: there is a clear linear relationship between the difference in signal values and the degree of prioritization.
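To illustrate this face-value mechanism, the simulation below treats each signal as the true success probability plus independent mean-zero Gaussian noise; this is our own illustrative assumption, not the paper's actual signal-generating process. A reader who takes signal values at face value responds to the apparent gap |s1 − s2| between the two pipelines' signals, and that apparent gap grows mechanically with noise, even when the pipelines are equally promising:

```python
import random

def mean_abs_signal_gap(p1, p2, noise_sd, n_draws=20000, seed=0):
    """Illustrative simulation: each signal equals the true success
    probability plus independent mean-zero Gaussian noise. Returns the
    average absolute gap between the two pipelines' signals, which is
    what a face-value reader responds to."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        s1 = p1 + rng.gauss(0.0, noise_sd)
        s2 = p2 + rng.gauss(0.0, noise_sd)
        total += abs(s1 - s2)
    return total / n_draws

# With equally promising pipelines, near-precise signals imply almost no
# gap, but noisy signals manufacture an apparent gap out of pure noise.
precise_gap = mean_abs_signal_gap(0.5, 0.5, noise_sd=0.01)
imprecise_gap = mean_abs_signal_gap(0.5, 0.5, noise_sd=0.15)
assert imprecise_gap > precise_gap
```

This is one way noisy signals can induce more, rather than less, prioritization among face-value readers, consistent with the pattern in Figure 8.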
Table 10 shows that, even after controlling for the true difference in the success probabilities of the two pipelines, every percentage point increase in one pipeline's probability of success relative to the other, as suggested by the signal values, increases prioritization by 0.5 percentage points. So, for a given set of investments, if the apparent gap between the probabilities of success (as suggested by the signal values) increases by 10 percentage points, a participant shifts 2.5% of the investment budget from one pipeline to the other.

4.2.3 Response to Signal Source

Finally, we looked at whether investment choices and/or responses to signal quality varied by the framed source of the information. Figure 9 shows that this does not seem to be the case. We do not see any systematic patterns in investment based on: whether the precise information was framed as coming from an economist or from an agronomist; whether the precise information was framed as coming from an expert in the same discipline or not; or whether the imprecise information was framed as coming from farmers or from value chain stakeholders.7

7 For the disciplinary match analysis, breeders and natural scientists are considered to come from the same field as the agronomist (natural science), and social scientists are considered to come from the same field as the economist (social science). Other seed system experts are not included in this analysis.

5 Willingness to Pay for Information Signals

5.1 Signal Purchase Decision

Next, we investigate whether participants are willing to pay for information signals that will help them prioritize between investments. Looking at Figure 10, which shows the number of signals purchased in each round, we find that regardless of the prices or the number of signals available, the median participant chooses to purchase one signal.
Even when a signal effectively costs $4, or 20% of the difference in payoffs between the success and failure conditions (the equal price-high scenario in Figure 10b), about 93% of participants still choose to purchase a signal. When participants can afford to buy multiple signals (up to 5, each effectively costing $1, in Figure 10a), 96% choose to buy at least one signal and 38% buy multiple signals, suggesting some participants see at least $1 of additional value in each extra signal. In the other rounds, where respondents could purchase up to two signals (the precision premium and accuracy premium rounds), 86-87% of participants choose to purchase one signal, with only 7% buying two. However, in these cases, buying two signals means having to purchase low quality signals, as opposed to potentially one higher quality signal. So the choice to purchase only one signal in these rounds does not necessarily imply that no additional marginal value would be gleaned from an additional signal.

5.2 Demand for Quality

Critically, not all participants prefer the highest quality information signal. This is evident from purchase decisions in the rounds where all information signals were equally priced, shown in Figure 11. Because higher and lower quality signals cost the same in these rounds, we can interpret the signals that participants choose as the ones they truly prefer. Figure 11a shows which signals respondents chose to purchase when all signals cost $1 and they could purchase up to 5 signals. With two precise/accurate signals on offer, a participant choosing to buy at least one of those signals is counted under “PA1” in the graph, and one who bought both is also counted under “PA2”. Notably, while 96% of participants buy a signal of some type, only 63% buy at least one precise/accurate signal. Hence, while most participants place the highest value on the best evidence, around one third of the sample does not.
Moreover, of those who choose to purchase only one signal, only 46% purchase the highest quality signal. Of those who choose to purchase exactly two signals, only 49% choose the two highest quality signals. This divergence in preferences for quality also comes out clearly in the round where signals are equally priced but expensive, meaning participants can only afford one. We see in Figure 11b that despite the significant price increase, 93% of participants still choose to purchase a signal. Yet only 44% of participants purchasing a signal choose the highest quality precise/accurate signal. While the precise/accurate signal is still the most frequently purchased, around 32% of signal purchasers choose the imprecise/accurate signal. We may wonder whether this decision to purchase more imprecise signals reflects a true preference or simply a misunderstanding of the experimental task. It is plausible that it reflects a true preference: if an individual takes the signal estimates at face value without adjusting for quality, imprecise/accurate signals may seem more informative. Participants may also prefer the framing of the low-quality signals, which are framed as coming from exercises conducted by breeders as opposed to agronomists and economists. However, we also explore the hypothesis that participants choose lower quality signals because they do not understand the task. We compare signal choice, when only one equally-priced signal can be purchased, between participants who answered at least 6 of the 7 comprehension questions correctly by the second try and those who did not. Figure 12 shows that participants with lower comprehension are relatively more likely to buy a low quality signal: 59% of these participants choose a low quality signal, as opposed to 46% of participants with higher comprehension.
However, as purchasing a low quality signal is still common among the latter group, a lack of comprehension does not fully explain participants' preference for lower quality signals.

5.3 Willingness to Pay for Precision versus Accuracy

Participants' willingness to pay for quality also varies by quality dimension: participants are on average willing to pay much more for increased accuracy than for increased precision. To see this, first note that in rounds where all signals were equally priced, participants were much more likely to purchase an imprecise signal than a biased one. We can also see this in the rounds where the higher quality signal costs double the price of a lower quality signal ($4 vs. $2), whose results are shown in Figure 13. Regardless of whether the price premium is for increased precision or for increased accuracy, the most popular choice is the expensive, high-quality signal. Yet, when the imprecise signal is half the price of the precise one (Figure 13a), participants frequently purchase one imprecise signal instead: about 46% of participants buy the precise signal and 41% buy one imprecise signal, only a 5 percentage point difference. This is not the case when participants must decide between the more expensive accurate signal and the cheaper biased signal (Figure 13b). In this case, about 56% of participants buy the accurate signal, while only 30% buy the biased signal. This suggests that participants are more aware of the detriments of extrapolating from biased information than from noisy information. However, the story is not as simple as a visual comparison of the two graphs in Figure 13 might imply; it is not the case that 10% of the sample switches from buying the cheap imprecise signal (when there is a precision premium) to the expensive unbiased signal (when there is an accuracy premium) while everyone else makes roughly the same choice between rounds.
Table 11 compares individuals' purchasing decisions in the accuracy premium and precision premium rounds, which illuminates which individuals are willing to pay for higher precision, higher accuracy, both, or neither. Interestingly, although the shares of individuals buying two signals and buying no signals stay almost constant between rounds, it is not the same individuals behaving this way in both rounds. A little less than a third of the sample (31%) demands the high quality information in both rounds, and around a fifth of the sample demands low quality signal(s) in both rounds: 18% always buying one low quality signal and 2.5% buying one or two low quality signals. The majority of the remaining half of the sample consists of “switchers”: 27.5% who buy noisy signals but not biased ones, and 16.5% who buy biased signals but not noisy ones. Hence there are “switchers” in both directions, but about 10 percentage points more bias-averse switchers than noise-averse ones. To understand these different behavior patterns, Table 12 characterizes the individuals who consistently demand high-quality information, consistently demand low-quality information, or switch between rounds (favoring either noisy or biased signals). The “Always High Quality” group comprises those who purchase the highest quality signal in both rounds, and the “Always Low Quality” group comprises those who purchase any low quality signal(s) in both rounds. The “Bias Averse” group buys imprecise signals but not biased ones (either opting for an accurate signal or no signal at all when there is an accuracy premium), and the “Noise Averse” group buys biased signals but not noisy ones (either opting for a precise signal or no signal at all when there is a precision premium). Overall, we see very few statistically significant differences between these types of purchasers (likely due in part to the small sample size). However, a few patterns are perhaps worth noting.
Older participants (55 and older) are the most likely to buy only high quality signals, perhaps due to greater experience on the job and hence more attention to information quality. Additionally, while social scientists never choose to buy only low quality signals in both rounds, they are more likely to be either bias-averse or noise-averse than to consistently choose the high quality signal. This contrasts with natural scientists, who are most likely to consistently choose high quality signals. Breeders' most common response is to buy low quality signals in both rounds, though they are not significantly more likely to be in this category than in any other. Notably, participants with higher task comprehension are not more likely to consistently choose high quality signals. If anything, they are more likely to be noise averse, opting for cheap, biased signals if offered but avoiding noisy signals. This may be because these participants find it conceptually easier to mentally “correct” for bias than for noise. Taken together, there are not many systematic correlates of information quality preferences, and these differences in preferences for information quality may exist within countries, institutions, or even individual breeding teams.

5.4 Willingness to Pay Based on Signal Source

Finally, we look at whether the perceived source of the information signal has any bearing on purchasing decisions. Figure 14 displays the shares of participants who purchased each type of signal, disaggregated by whether the information was framed as coming from an expert in their same discipline (“same”) versus another discipline (“different”). In 3 of the 4 rounds, we see no significant difference in purchasing behavior based on this framing, but when signals are equally priced and cheap, the framing does affect purchasing decisions.
Critically, individuals are more likely to purchase a precise signal if that signal comes from an expert in the same discipline (where the source of precise signals was framed as either an agronomist or an economist). Moreover, in Figure 15, we see that in the same round, participants purchased a greater number of signals overall if precise signals were framed as coming from an expert in the same discipline: in that case, participants purchased 1.6 signals on average, versus only 1.4 signals when the information was framed as coming from someone in another discipline. These averages are statistically different from each other at the 5% level. This suggests that individuals may be implicitly biased, at least to some extent, against information produced by less familiar disciplines, which may hinder successful transdisciplinary collaboration. We also consider differences in purchasing decisions depending on whether imprecise signals were framed as coming from a participatory varietal selection exercise with farmers or from a series of conversations with value chain stakeholders. Interestingly, when there is a premium charged for precision, this framing seems to matter. Specifically, individuals are more likely to opt for the precise signal, and not the imprecise signal, when the information is framed as coming from conversations with stakeholders. Perhaps participants perceive a “series of conversations” to be less rigorous than a “participatory selection exercise”, making imprecision more salient in the former (even though the actual quality of the signal stays constant). As further evidence supporting this hypothesis, there is no similar effect of the stakeholder framing when there is a premium for accuracy. If anything, in that case, individuals are more likely to purchase two biased signals when these are framed as coming from stakeholders rather than farmers.
6 Conclusion

In this study, we conduct an experiment with plant breeding professionals to analyze the effect of providing market information that reduces ambiguity around the returns to breeding investment prospects and can thereby help prioritize investments across prospects. We vary the quality and framed source of this information and study how this affects both investment decisions and willingness to pay for information. Reassuringly, we find that study participants pay attention to market information and prioritize more in their investments as a result. However, participants do not appear to factor differences in information quality into their decisions; they invest as if the noise in imprecise signals provides useful information, which may lead to suboptimal decision-making regardless of their underlying risk preferences. Receiving multiple signals can help temper this over-reaction to noise: when multiple signals seemingly disagree, respondents prioritize only slightly more than if they have no market information at all, and when the two signals seemingly “agree”, respondents do not prioritize more than when they receive only one imprecise signal. Additionally, the framed source of the market information does not systematically affect participants' investment choices. We also find that participants value market information, as the vast majority of respondents voluntarily purchase information signals, regardless of the price. However, most respondents choose to purchase only one signal, even when given the opportunity to obtain up to five different pieces of market information. This suggests that participants see little marginal value (or even negative value) in obtaining additional signals, possibly because evaluating multiple signals increases decision complexity. This is perhaps troubling given that the information environment in our experiment is much simpler than in the real world.
Even more noteworthy is that while many participants prefer higher quality information, a non-negligible share of respondents buys lower quality signals even when they could have obtained higher quality signals at the same price. This pattern can only be partially explained by comprehension issues, and it is worth noting that lower comprehension in the task may itself be correlated with preferences over information quality. More participants seem averse to biased signals than to imprecise ones, suggesting that participants may be more cognizant of some information quality issues than others. Finally, while the framed source of information had little impact on actual investment decisions, it does appear to affect willingness to pay for information. When information signals are relatively cheap and participants can purchase up to five signals, participants for whom signals are framed as coming from an expert outside their discipline purchase less information than those whose signals come from an expert in a similar discipline. While acknowledging that there are natural limits to what we can learn from a framed lab-style experiment, we see key policy implications emerging from this work. First, we find an inherent demand for market information among breeding professionals. Hence the narrative that breeding programs remain primarily “supply-driven,” with breeders prioritizing pipelines yielding the greatest technological advancement rather than those the market demands, is not accurate. Instead, the problem lies in the types of market information consulted, the ways in which it is used, and the ability to aggregate across multiple sources of market information. Critically, our results suggest the importance of finding ways to promote the use of high quality evidence from multiple sources. Moreover, incentives alone are unlikely to get decision-makers to pay attention to information quality.
Even with real money on the line, study participants over-extrapolated from noise as if it were actual information. The money at stake in our experimental context was nowhere close to the amounts invested in breeding pipelines, which might lead us to believe that in the "real world," large enough incentives would induce more attention to information quality. Yet our interactions with respondents suggested that our reward did constitute a meaningful sum of money, exceeding the hourly wage in many of the countries where respondents are located. Moreover, in the real world, individual contributions to investment decision-making and varietal development may be even harder to attribute, whereas in our experiment the respondent was the sole decision-maker; in that sense, we could expect decision-makers to be less responsive to more realistically structured, program-level incentives than to those in the experiment. Critically, this means that programs seeking to encourage the development of market-demanded, impactful varieties through incentives alone may be unsuccessful. Especially given that small-scale participatory trials and focus groups have become the norm in the breeding discipline, additional capacity building and/or contract stipulations may be needed to encourage breeding programs to seek out higher quality market information.

This study also has critical implications for the proliferation of transdisciplinary breeding methods. First, introducing social scientists into breeding teams should not be seen as a panacea ensuring that breeding programs invest in pipelines with greater impact opportunities: in our experiment, social scientists working with breeding programs over-extrapolated from noisy information just like other experts.
Yet, given that participants readily change their investment decisions based on a single, imprecise piece of information, it is critical to ensure that there are multiple perspectives in the room when decisions are being made. If even uninformative framing about whether an estimate came from an "agronomist" or an "economist" can affect willingness to invest in market information, then more must be done to facilitate productive collaboration between scientists from different disciplines. Only then can teams make well-informed investment decisions that respond to consumers' multidimensional wants and needs in an increasingly complex world.

7 Tables

Table 1: Description of Experimental Variation by Round

Part | Round | Information Signal | Cost
1 | 1 | None (Full Ambiguity) | –
1 | 2 | Perfect (True Values; No Ambiguity) | Free
2 | 3 | Precise/Accurate (PA) | Free
2 | 4 | Imprecise/Accurate (IA) | Free
2 | 5 | Precise/Biased (PB) | Free
2 | 6 | Imprecise/Biased (IB) | Free
2 | 7 | 2 Imprecise/Accurate (2 IA) | Free
2 | 8 | Imprecise/Accurate + Precise/Biased (IA + PB) | Free
3 | 9 | Equal Price (Cheap)–2 PA, IA, PB, IB | 2 tokens
3 | 10 | Equal Price (Expensive)–PA, IA, PB, IB | 8 tokens
3 | 11 | Premium for Precision–PA, 2 IA | 8 tokens (PA) or 4 tokens (IA)
3 | 12 | Premium for Accuracy–PA, 2 PB | 8 tokens (PA) or 4 tokens (PB)

Table 2: Sample Characteristics

Statistic | Mean | St. Dev. | N
Gender: Female | 28.9% | 45.5 | 197
Nationality: South Asian | 33.2% | 47.2 | 199
Nationality: Eastern/Southern African | 22.6% | 41.9 | 199
Nationality: Western/Central African | 22.1% | 41.6 | 199
Nationality: Other | 22.1% | 41.6 | 199
Age: 34 and Under | 32.0% | 46.8 | 197
Age: 35–44 | 40.1% | 49.1 | 197
Age: 45–54 | 19.3% | 39.6 | 197
Age: 55 and Older | 8.6% | 28.2 | 197
Education: Below Master's | 17.6% | 38.2 | 199
Education: Master's Degree | 40.2% | 49.2 | 199
Education: Doctorate (PhD) | 42.2% | 49.5 | 199
Main Expertise: Breeder | 60.8% | 48.9 | 199
Main Expertise: Natural Scientist | 25.1% | 43.5 | 199
Main Expertise: Social Scientist | 9.0% | 28.8 | 199
Main Expertise: Other | 5.0% | 21.9 | 199
Institution: Nat'l Agr. Research/Extension | 45.2% | 49.9 | 199
Institution: CGIAR Center/Int'l Org. | 24.6% | 43.2 | 199
Institution: University/Academic | 16.6% | 37.3 | 199
Institution: Private Sector | 13.6% | 34.3 | 199
Work Location: South Asia | 32.2% | 46.8 | 199
Work Location: Eastern/Southern Africa | 24.1% | 42.9 | 199
Work Location: Western/Central Africa | 21.6% | 41.3 | 199
Work Location: Other | 22.1% | 41.6 | 199
Crop: Rice | 32.7% | 47.0 | 199
Crop: Wheat | 28.1% | 45.1 | 199
Crop: Maize | 22.6% | 41.9 | 199
Crop: Grains, Legumes, and Dryland Cereals | 22.6% | 41.9 | 199
Crop: Roots, Tubers, and Bananas | 12.1% | 32.6 | 199

Note: Numbers of observations below 199 reflect respondents choosing "prefer not to say" on a given question. Categories in each block are mutually exclusive, except for the "crop" category, in which respondents could choose multiple crops, including others not listed (or the option "I don't work on specific crops"). Grains, Legumes, and Dryland Cereals (following CGIAR definitions) includes sorghum, groundnut, soybean, cowpea, pearl millet, pigeonpea, chickpea, finger millet, and lentils. Similarly, Roots, Tubers, and Bananas primarily includes potatoes, cassava, sweet potatoes, yams, and bananas.

Table 3: Completion Time and Comprehension

Statistic | Mean | St. Dev. | Median | Min | Max | N
Likely Completed in 1 Sitting (<2 hours) | 66.3% | 47.4 | 1 | 0 | 1 | 199
Total Minutes (if completed in <2 hours) | 60.6 | 25.6 | 59.6 | 8.8 | 115.0 | 132
Comprehension Checks Correct on First Try (out of 7) | 4.9 | 1.5 | 5 | 0 | 7 | 199
Comprehension Checks Correct by Second Try (out of 7) | 6.0 | 1.1 | 6 | 2 | 7 | 199

Note: We observe the time from when the online Qualtrics form was first started to when the activity was submitted, not the time to complete the actual experimental exercise. Hence it is possible that many people started the survey, saw that it would take some time, and decided to come back to it later. In the lead-up to the exercise, respondents completed 7 comprehension questions in chunks that each contained information about the exercise and 1–2 questions checking understanding. If not all questions in a chunk were answered correctly on the first try, the participant had to go through it again. If a question was still answered incorrectly on the second try, the right answers were displayed with explanations.

Table 4: Investment Prioritization vs. Levels of Ambiguity

Dependent variable: Degree of Prioritization
No Ambiguity | 0.253∗∗∗ (0.027)
Some Ambiguity | 0.153∗∗∗ (0.029)
Individual FE | Y
Round Order FE | Y
Investment Quality FE | Y
Observations | 1,592
R2 | 0.327
Adjusted R2 | 0.226

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level. "Some Ambiguity" pools over all levels in which some market information was provided (for free) that was not fully certain. "Degree of Prioritization" is the absolute difference between the shares invested in each pipeline. The omitted category is full ambiguity.

Table 5: Investment Prioritization vs. Signal Quality

Dependent variable: Degree of Prioritization
Variable | (1) | (2)
Precise/Accurate (PA) | 0.186∗∗∗ (0.042) | 0.136∗∗∗ (0.036)
Imprecise/Accurate (IA) | 0.227∗∗∗ (0.041) | 0.176∗∗∗ (0.034)
Precise/Biased (PB) | 0.183∗∗∗ (0.041) | 0.132∗∗∗ (0.025)
Imprecise/Biased (IB) | 0.261∗∗∗ (0.043) | 0.210∗∗∗ (0.042)
2 Imprecise/Accurate (2 IA) | 0.085∗∗ (0.040) |
Imprecise/Accurate + Precise/Biased (IA + PB) | 0.217∗∗∗ (0.040) |
2 Signals | | -0.142∗∗∗ (0.023)
Individual FE | Y | Y
Round Order FE | Y | Y
Investment Quality FE | Y | Y
Observations | 1,393 | 1,393
R2 | 0.362 | 0.362
Adjusted R2 | 0.248 | 0.248

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level. In column 1, each possible signal/combination of signals is treated as a unique object; for example, in a level with 2 imprecise/accurate signals provided, only "2 Imprecise/Accurate" equals 1. In column 2, we instead use the indicators to denote that a given type of signal is available as part of the bundle of signals, and use "2 Signals" to denote that multiple signals were provided; so the level with 2 imprecise/accurate signals has both "Imprecise/Accurate" and "2 Signals" equal to one. The omitted category is receiving no signal.

Table 6: Investment Prioritization vs. Signal Quality by Gender

Dependent variable: Degree of Prioritization
Variable | Men (1) | Women (2)
Precise/Accurate | 0.157∗∗∗ (0.045) | 0.077 (0.063)
Imprecise/Accurate | 0.197∗∗∗ (0.041) | 0.130∗ (0.071)
Precise/Biased | 0.140∗∗∗ (0.030) | 0.111∗∗ (0.051)
Imprecise/Biased | 0.225∗∗∗ (0.052) | 0.183∗∗ (0.080)
2 Signals | −0.153∗∗∗ (0.027) | −0.122∗∗ (0.049)
Individual FE | Y | Y
Round Order FE | Y | Y
Investment Quality FE | Y | Y
Observations | 980 | 399
R2 | 0.372 | 0.343
Adjusted R2 | 0.257 | 0.206

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level.
The indicators denote that a given type of signal is available as part of the bundle of signals, and we use "2 Signals" to denote that multiple signals were provided.

Table 7: Investment Prioritization vs. Signal Quality by Education

Dependent variable: Degree of Prioritization
Variable | Below Master's (1) | Master's (2) | PhD (3)
Precise/Accurate | 0.206∗ (0.103) | 0.113∗∗ (0.051) | 0.130∗∗ (0.055)
Imprecise/Accurate | 0.175∗ (0.086) | 0.193∗∗∗ (0.053) | 0.167∗∗∗ (0.053)
Precise/Biased | 0.196∗∗∗ (0.058) | 0.112∗∗∗ (0.039) | 0.129∗∗∗ (0.038)
Imprecise/Biased | 0.206 (0.126) | 0.234∗∗∗ (0.064) | 0.200∗∗∗ (0.059)
2 Signals | −0.182∗∗∗ (0.056) | −0.150∗∗∗ (0.036) | −0.109∗∗∗ (0.037)
Individual FE | Y | Y | Y
Round Order FE | Y | Y | Y
Investment Quality FE | Y | Y | Y
Observations | 245 | 560 | 588
R2 | 0.310 | 0.343 | 0.419
Adjusted R2 | 0.146 | 0.214 | 0.306

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level. The indicators denote that a given type of signal is available as part of the bundle of signals, and we use "2 Signals" to denote that multiple signals were provided.

Table 8: Investment Prioritization vs. Signal Quality by Institution Type

Dependent variable: Degree of Prioritization
Variable | Academic (1) | CGIAR/Int'l Org. (2) | NARES (3) | Private (4)
Precise/Accurate | 0.274∗∗∗ (0.092) | 0.163∗ (0.090) | 0.125∗∗∗ (0.045) | −0.039 (0.088)
Imprecise/Accurate | 0.338∗∗∗ (0.089) | 0.130 (0.083) | 0.169∗∗∗ (0.047) | 0.142 (0.087)
Precise/Biased | 0.234∗∗∗ (0.045) | 0.120∗ (0.064) | 0.147∗∗∗ (0.034) | −0.026 (0.061)
Imprecise/Biased | 0.321∗∗∗ (0.098) | 0.214∗∗ (0.099) | 0.239∗∗∗ (0.057) | −0.012 (0.111)
2 Signals | −0.260∗∗∗ (0.058) | −0.104∗ (0.052) | −0.129∗∗∗ (0.033) | −0.156∗∗ (0.062)
Individual FE | Y | Y | Y | Y
Round Order FE | Y | Y | Y | Y
Investment Quality FE | Y | Y | Y | Y
Observations | 231 | 343 | 630 | 189
R2 | 0.428 | 0.257 | 0.449 | 0.346
Adjusted R2 | 0.289 | 0.096 | 0.342 | 0.175

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level. The indicators denote that a given type of signal is available as part of the bundle of signals, and we use "2 Signals" to denote that multiple signals were provided.

Table 9: Investment Prioritization vs. Signal Quality by Area of Expertise

Dependent variable: Degree of Prioritization
Variable | Breeder (1) | Natural Scientist (2) | Social Scientist (3) | Other (4)
Precise/Accurate | 0.122∗∗∗ (0.045) | 0.181∗∗ (0.072) | 0.281∗∗ (0.114) | −0.061 (0.157)
Imprecise/Accurate | 0.175∗∗∗ (0.043) | 0.231∗∗∗ (0.071) | 0.243∗∗ (0.106) | −0.046 (0.102)
Precise/Biased | 0.111∗∗∗ (0.032) | 0.179∗∗∗ (0.046) | 0.265∗∗∗ (0.067) | −0.089 (0.102)
Imprecise/Biased | 0.190∗∗∗ (0.049) | 0.262∗∗∗ (0.094) | 0.444∗∗∗ (0.136) | −0.101 (0.166)
2 Signals | −0.150∗∗∗ (0.028) | −0.150∗∗∗ (0.055) | −0.161∗∗∗ (0.052) | −0.051 (0.097)
Individual FE | Y | Y | Y | Y
Round Order FE | Y | Y | Y | Y
Investment Quality FE | Y | Y | Y | Y
Observations | 847 | 350 | 126 | 70
R2 | 0.402 | 0.319 | 0.427 | 0.436
Adjusted R2 | 0.291 | 0.172 | 0.246 | 0.173

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level. The indicators denote that a given type of signal is available as part of the bundle of signals, and we use "2 Signals" to denote that multiple signals were provided.

Table 10: Investment Prioritization vs. Signal Values

Dependent variable: Degree of Prioritization
Difference in Signal Values | 0.005∗∗∗ (0.001)
Individual FE | Y
Round Order FE | Y
Investment Quality FE | Y
Observations | 796
R2 | 0.495
Adjusted R2 | 0.319

Note: Standard errors are clustered at the participant level. ∗ denotes significance at the 10% level, ∗∗ at the 5% level, and ∗∗∗ at the 1% level. Levels in which a participant received one ambiguous signal are included. Signal values are measured from 0 to 1%.
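As a concrete restatement of the outcome variable used in Tables 4–10 — the absolute difference between the shares invested in the two pipelines — the following minimal sketch computes it (the function name and the budget-check tolerance are ours, introduced for illustration):

```python
def degree_of_prioritization(share_pipeline_a, share_pipeline_b):
    """Absolute difference between the shares invested in the two pipelines.

    0 means a fully diversified 50/50 split; 1 means the entire budget
    is allocated to a single pipeline (full prioritization).
    """
    # Shares are fractions of the investment budget and should sum to 1.
    assert abs(share_pipeline_a + share_pipeline_b - 1.0) < 1e-9
    return abs(share_pipeline_a - share_pipeline_b)
```

For example, a 70/30 allocation yields a degree of prioritization of 0.4, while an even split yields 0.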
Table 11: Relationship between Signal Purchases in Different Rounds

Variable | PA Signal | PB Signal | 2 PB Signals | None
PA Signal | 31% | 9% | 5% | 1%
IA Signal | 19% | 18% | 0.5% | 3.5%
2 IA Signals | 4% | 1.5% | 0.5% | 1%
None | 2.5% | 1.5% | 1% | 1%

Table 12: Comparison of Purchaser Types

Variable | (1) Always High Quality | (2) Always Low Quality | (3) Bias Averse | (4) Noise Averse
Gender: Female | 28.3% | 22.5% | 32.7% | 33.3%
Nationality: South Asian | 32.8% | 39.0% | 30.9% | 21.2%
Nationality: Eastern/Southern African | 21.3% | 17.1% | 27.3% | 27.3%
Nationality: Western/Central African | 21.3% | 12.2% | 29.1%² | 27.3%
Nationality: Other | 24.6% | 31.7%³ | 12.7% | 24.2%
Age: 34 and Under | 33.9% | 36.6% | 27.3% | 33.3%
Age: 35–44 | 32.2% | 43.9% | 41.8% | 51.5%
Age: 45–54 | 15.3% | 17.1% | 27.3%⁴ | 9.1%
Age: 55 and Over | 18.6%²٬³ | 2.4% | 3.6% | 6.1%
Education: Below Master's | 23.0% | 12.2% | 12.7% | 27.3%
Education: Master's Degree | 36.1% | 53.7% | 38.2% | 36.4%
Education: Doctorate (PhD) | 41.0% | 34.1% | 49.1% | 36.4%
Main Expertise: Breeder | 54.1% | 68.3% | 63.6% | 54.5%
Main Expertise: Natural Scientist | 36.1%³ | 24.4% | 14.5% | 30.3%
Main Expertise: Social Scientist | 3.3% | 0.0% | 16.4%¹٬² | 15.2%²
Main Expertise: Other | 6.6%⁴ | 7.3% | 5.5% | 0.0%
Institution: Nat'l Agr. Research/Extension | 41.0% | 46.3% | 47.3% | 48.5%
Institution: CGIAR Center/Int'l Org. | 23.0% | 22.0% | 29.1% | 27.3%
Institution: University/Academic | 19.7% | 14.6% | 16.4% | 15.2%
Institution: Private Sector | 16.4% | 17.1% | 7.3% | 9.1%
Work Location: South Asia | 32.8% | 39.0%⁴ | 29.1% | 18.2%
Work Location: Eastern/Southern Africa | 26.2% | 14.6% | 27.3% | 30.3%
Work Location: Western/Central Africa | 19.7% | 12.2% | 29.1%² | 27.3%
Work Location: Other | 21.3% | 34.1%³ | 14.5% | 24.2%
Crop: Rice | 36.1% | 36.6% | 27.3% | 27.3%
Crop: Wheat | 27.9% | 39.0%⁴ | 23.6% | 18.2%
Crop: Maize | 21.3% | 14.6% | 23.6% | 30.3%
Crop: Grains, Legumes, and Dryland Cereals | 27.9% | 19.5% | 21.8% | 21.2%
Crop: Roots, Tubers, and Bananas | 14.8% | 9.8% | 16.4% | 6.1%
High Comprehension | 63.9% | 65.9% | 70.9% | 78.8%
N | 61 | 41 | 55 | 33

Note: Superscripts indicate that the mean is significantly greater than the mean in the column number indicated by the superscript, at the 5% level, using a t-test of means. Not included in any column are individuals who did not purchase a signal in either round (2) and individuals who purchased a precise/accurate signal in one round but none in the other round (7).

8 Figures

Figure 1: Investment Decision Examples. (a) Full Ambiguity; (b) No Ambiguity; (c) IA Signal; (d) IA and PB Signals.

Figure 2: MI Purchase Example. (a) Purchase Decision; (b) Accessing Market Information; (c) Reminder of Information Quality (shown as a textbox when hovering over the project name).

Figure 3: Investment Prioritization Across Different Levels of Ambiguity (Raw). "Some Ambiguity" pools over all levels in which some market information was provided (for free) that was not fully certain. "Degree of Prioritization" is the absolute difference between the shares invested in each pipeline. Orange bars show standard errors.

Figure 4: Investment Prioritization Across Different Types of Signal Quality. "P" stands for precise, "I" stands for imprecise, "A" stands for accurate, and "B" stands for biased.
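The four signal types combine two quality dimensions: precision (noise around the signal) and accuracy (presence of a systematic bias). As an illustration only — the noise and bias magnitudes below are assumptions made for this sketch, not the values used in the experiment — the signal-generating process can be thought of as:

```python
import random

def draw_signal(truth, precise=True, accurate=True, rng=random):
    """Illustrative signal-generating process for the four quality types.

    Precision controls the spread of random noise around the truth;
    accuracy controls whether a systematic bias is added. The magnitudes
    here (0.05/0.20 noise, 0.10 bias) are hypothetical.
    """
    noise_sd = 0.05 if precise else 0.20  # assumed noise spreads
    bias = 0.0 if accurate else 0.10      # assumed systematic offset
    return truth + bias + rng.gauss(0, noise_sd)
```

Under this stylized process, an imprecise/accurate signal is unbiased but noisy (averaging many such draws recovers the truth), whereas a precise/biased signal is tightly clustered around the wrong value — which is one way to see why over-extrapolating from a single imprecise draw is costly.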
Figure 5: Heterogeneity in Signal Response by Observable Characteristics. (a) Gender; (b) Education; (c) Institution Type; (d) Expertise.

Figure 6: Heterogeneity in Signal Response by Experimental Comprehension and Effort. (a) Comprehension; (b) Time to Complete Experiment.

Figure 7: Investment Prioritization Across Different Quality Signals (Holding fixed each pipeline's probability of success). (a) Set 1 Underlying Values; (b) Set 2 Underlying Values. Set 1 and Set 2 denote two unique sets of pipelines with fixed underlying probabilities of success. Hence within each panel, we see choices over the exact same investments when provided with different signals of varying quality.

Figure 8: Investment Prioritization vs. Differences in the Signal Value of the Two Pipelines Presented. Points represent the average level of investment over all levels in which a particular difference in signal value occurred. Levels with 2 signals are excluded. The blue line is a line of best fit through the plotted points.

Figure 9: Response to Quality by Framed Source. (a) Framing: Scientist Expertise; (b) Framing: Expertise Matches Participants; (c) Framing: Farmers vs. Stakeholders.

Figure 10: Number of Signals Purchased by Level. (a) Equal Price–Low; (b) Equal Price–High; (c) Precise-Expensive/Imprecise-Inexpensive; (d) Accurate-Expensive/Biased-Inexpensive.

Figure 11: Types of Signals Demanded Under Equal Price Scenarios. (a) Low Prices; (b) High Prices. Color of the bars in Panel A indicates the total number of signals a participant purchased. PA1 indicates the "first" Precise/Accurate signal a participant purchased, and PA2 indicates that they also bought an additional Precise/Accurate signal. (This is why no one who bought only one signal also purchased PA2.)

Figure 12: Signal Purchase Choice by Comprehension Level. Median value is 6/7 questions answered correctly.
Figure 13: Types of Signals Demanded Under Unequal Price Scenarios. (a) Precision Differential; (b) Accuracy Differential.

Figure 14: Signal Purchase Behavior by Whether Framed Expert is Same Type. (a) Equal, Low Price; (b) Equal, High Price; (c) Precision Premium; (d) Accuracy Premium.

Figure 15: Number of Signals Purchased in the Equal Low Price Condition by Whether Expert is Framed as the Same Type.

Figure 16: Signal Purchase Behavior by Whether Framed Expert is Same Type. (a) Equal, Low Price; (b) Equal, High Price; (c) Precision Premium; (d) Accuracy Premium.

ALL IFPRI DISCUSSION PAPERS
All discussion papers can be downloaded free of charge from www.ifpri.org.

INTERNATIONAL FOOD POLICY RESEARCH INSTITUTE
IFPRI HEADQUARTERS
1201 Eye Street, NW, Washington, DC 20005 USA
Tel.: +1-202-862-5600 | Fax: +1-202-862-5606
Email: ifpri@cgiar.org | www.ifpri.org