
4.4. Economic threshold
4.4.1. Intuitively we can get a feel for some kind of threshold pest population
4.4.1.1. If the pest population (and the resulting damage) is low enough, it does not pay to take control measures
4.4.1.2. As the pest population continues to rise, it reaches a point where the resulting damage would justify taking control measures
4.4.2. Unfortunately, our intuitive notion of an "economic threshold" has been corrupted by many rather loose and completely different definitions of the term in the early IPM literature:
4.4.2.1. "The maximum pest population that can be tolerated at a particular time and place without a resultant economic crop loss"
4.4.2.2. "The density of a pest population below which the cost of applying control measures exceeds the losses caused by the pest". (Glass, 1975)
4.4.2.3. "That point at which the incremental cost of pest control is equal to the incremental return resulting from pest control" (Thompson and White, 1979) (also "economic injury level" - Stern, 1959)
4.4.2.4. "The pest population at which pest control measures must be taken to prevent the pest population from rising to the economic injury level" (Stern, 1959) (also "action threshold")
4.4.3. Because some authors used very different definitions for the terms "economic threshold" and "economic injury level," we have to read the IPM literature carefully.
4.4.4. In recent years Larry Pedigo has straightened out some of the confusion. In his 1989 textbook "Entomology and Integrated Pest Management," he revived Stern's original definition of the economic injury level, and to make it a practical management tool, he expressed it in terms of variables that can be estimated empirically.
4.4.4.1. Pedigo's definition of the economic injury level (EIL) is derived from the decision criterion in partial budget analysis:

partial revenue > partial cost

4.4.4.2. Considering that the partial revenue is the yield loss prevented by controlling the insect population, and simplifying the partial cost to only the cost of the insect control (which is probably valid in most cases), the above inequality can be written

V x loss x N > C

where "loss" is the proportion of the yield lost per insect, V is the value of the crop, N is the insect population, and C is the cost of the insect control. Rearranging terms, we get

N > C/(V x loss)

Note that this assumes that the loss is directly proportional to the insect population, or in other words, loss is a linear function of insect population. Pedigo expressed his definition of the economic injury level as

EIL = C/(VIDK)

where:

EIL = economic injury level
C = cost of insect control
V = value of a unit of the crop
I = injury units per insect
D = damage (proportion of yield lost) per injury unit
K = proportionate reduction in injury

This equation expands the simple "proportion of yield lost per insect" into three factors: "injury" (I), which represents the physiological effects of insect feeding; "damage" (D), which is the measurable loss in yield or quality per unit of injury; and a dimensionless constant, K, which represents the proportionate reduction in injury as a result of the insect control.
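As a quick numerical sketch, the EIL formula can be coded directly. The parameter values below are hypothetical, chosen only to illustrate the calculation (they do not come from the text):

```python
def economic_injury_level(C, V, I, D, K):
    """Pedigo's EIL = C / (V * I * D * K): the pest density at which
    the cost of control just equals the value of the loss it prevents.

    C: cost of insect control ($/acre)
    V: value of a unit of the crop ($)
    I: injury units per insect
    D: proportion of yield lost per injury unit
    K: proportionate reduction in injury achieved by the control
    """
    return C / (V * I * D * K)

# Hypothetical illustration: a $12/acre spray on a crop worth $4 per unit,
# 0.5 injury units per insect, 0.2% of yield lost per injury unit,
# and a spray that reduces injury by 80%.
eil = economic_injury_level(C=12.0, V=4.0, I=0.5, D=0.002, K=0.8)
print(round(eil))  # 3750
```

With these made-up numbers, control pays only once the population exceeds about 3750 insects; note that a cheaper spray (smaller C) or a more valuable crop (larger V) lowers the threshold.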

4.4.4.3. Empirically determined economic injury levels have proven very useful in reducing the number of insecticide sprays needed to control many insect pest species. The approach is reactive rather than proactive, however, and therefore may not be applicable to pests whose populations develop too rapidly to be managed by any reactive means (e.g., many plant pathogens).
4.5. Optimization
4.5.1. Introduction
4.5.1.1. Until now we have discussed management decisions involving only a single management variable at a single moment in time.
  • Real management situations are rarely that simple

  • We usually have to consider several management variables, all of which simultaneously affect crop yield

  • And/or we might have to make these decisions at several times throughout the season
4.5.1.2. Optimization is a systematic procedure for finding the "best" solution (or solutions!) to a complex problem
4.5.1.3. It is necessary to explicitly state the objective
  • To maximize something or to minimize something

  • e.g., to maximize yields, maximize profits, minimize effort, or minimize costs

  • It is possible to optimize only one objective at a time

  • The solution(s) is/are the combination of decision variables that optimizes the objective
4.5.2. Simulation
4.5.2.1. By repeated execution of a computer simulation model with different values for the input variables
4.5.2.2. e.g., suppose we have 3 partially resistant cultivars which require different levels of fungicide application to control a fungus disease
  • Unfortunately in this example the more resistant varieties have either a lower quality or lower yield than the most susceptible one

  • Execute the simulation with a range of fungicide levels (number of sprays per season) for each of the cultivars

  • Using the marginal analysis procedure, determine the profit for each set of input values

    • In this example Cultivar C is the most susceptible, and Cultivar A the least susceptible

    • In the absence of fungicides Cultivar A would yield maximum profit

    • There are two optimum solutions, 3 sprays on Cultivar B and 4 sprays on Cultivar C
4.5.2.3. The simulation approach is usually faster and cheaper than doing the optimization empirically in the field
4.5.2.4. The simulation approach is limited by the number of runs of the simulation that is feasible
  • In this example 20 runs were required -- not unreasonable

  • Suppose we had 5 cultivars, 4 different fungicides, 3 spray schedules, 7 levels of fungicide and wanted to look at the mean and variance of the profit using the past 10 seasons of weather data

    • 5 x 4 x 3 x 7 x 10 = 4200 runs

    • If each run cost $.50 on the supercomputer, the cost would be $2100

    • If each run took 1 min on a microcomputer, it would take about 3 days of continuous computing
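The run-count arithmetic in this example can be checked with a few lines:

```python
# Each factor multiplies the number of simulation runs required.
cultivars, fungicides, schedules, dose_levels, seasons = 5, 4, 3, 7, 10
runs = cultivars * fungicides * schedules * dose_levels * seasons

dollars = runs * 0.50        # at $0.50 per run on the supercomputer
days = runs * 1 / 60 / 24    # at 1 minute per run on a microcomputer

print(runs)            # 4200
print(dollars)         # 2100.0
print(round(days, 1))  # 2.9
```

Adding even one more factor (say, 3 irrigation regimes) triples the count, which is why the combinatorial growth quickly makes brute-force simulation impractical.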
4.5.3. Linear programming
4.5.3.1. Programming here refers to planning, not computer programming
4.5.3.2. Is used to solve problems of allocation (e.g., allocation of land area in a crop rotation plan, allocation of effort in a pest monitoring scheme, etc.)
4.5.3.3. The procedures are illustrated in the following simple example:
  • Define the objective function

    • Allocate weed control costs between herbicide application and hand weeding to maximize profit

    • In this example (for simplicity) we will make the yield and price constant and set the total revenue at $250/acre

    • Therefore, maximizing profit in this example means minimizing cost

  • Identify the constraints

    • Cost constraint: if x is the cost of hand weeding and y is the cost of herbicide application, then (since total cost cannot exceed the $250 total revenue)

      x + y ≤ 250

    • Weed control effectiveness constraint:

      • To achieve the yield that gives us the above total revenue ($250), we must invest at least $300/acre in hand weeding

      • The amount of hand weeding required can be reduced by $2 for every $1 spent on the herbicide

      • Therefore, the constraint is given by

        x + 2y ≥ 300

    • Herbicide label constraint:

      • The amount of herbicide is limited because of possible phytotoxicity; the cost of the maximum allowable rate is $100/acre

      • Label constraint:

        y ≤ 100

  • The region of feasible solutions is where all of the constraint regions overlap

  • The optimum solution (if the objective is to minimize cost) lies where a line of equal cost, parallel to the cost constraint line, is pushed as far from the cost constraint line as possible while still touching the feasible region
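A minimal sketch of this example in Python, assuming the constraints implied above are x + y ≤ 250 (total cost cannot exceed the $250 revenue), x + 2y ≥ 300 (weed-control effectiveness), and y ≤ 100 (label limit). For a two-variable linear program, candidate optima lie at vertices of the feasible region, so a short script can simply enumerate the intersections of constraint lines:

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c (>= constraints are negated).
# x = hand-weeding cost, y = herbicide cost; values are the assumptions above.
constraints = [
    (1, 1, 250),     # x + y <= 250   (cost cannot exceed revenue)
    (-1, -2, -300),  # x + 2y >= 300  (weed-control effectiveness)
    (0, 1, 100),     # y <= 100       (herbicide label limit)
    (-1, 0, 0),      # x >= 0
    (0, -1, 0),      # y >= 0
]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Vertices are intersections of pairs of constraint boundary lines.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel lines never intersect
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

# Objective: minimize total weed-control cost x + y.
x_opt, y_opt = min(vertices, key=lambda v: v[0] + v[1])
print(x_opt, y_opt, x_opt + y_opt)  # 100.0 100.0 200.0
```

Under these assumed constraints the optimum spends $100 on herbicide (the label maximum) and $100 on hand weeding, for a total cost of $200 and a profit of $50/acre; real LP software (the simplex method) does the same search far more efficiently in many dimensions.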
4.5.3.4. Computer software exists for a wide range of linear programming applications
  • Can handle a large number of allocation variables (not limited to 2 dimensions as we are on a 2-dimensional graph)

  • Can handle huge numbers of constraint functions

  • The models must be linear, or at least must be able to be approximated by linear functions

  • These models are not dynamic (allocation through time), but they can approximate a dynamic solution by repeating the analysis at intervals through time

  • Linear programming cannot handle stochastic models, but probability distributions can be created by repeating the analysis with different constraints that vary according to known probability distributions
4.5.4. Dynamic programming
4.5.4.1. Particularly useful for solving sequential decision problems
  • The optimum sequence of decisions is not simply a matter of making the optimum decision at every decision point

  • The optimum sequence is often counter-intuitive
4.5.4.2. A simple 2-decision-period example
  • Suppose at each decision period we have 3 possible alternatives:

    • Do nothing; no cost; no insect mortality

    • Low dose of insecticide; costs $20/acre; kills 1/3 of insects

    • High dose of insecticide; costs $100/acre; kills 3/4 of insects

  • Further suppose that the total revenue accumulated during a time period is equal to $200 minus $1 times the pest population at the beginning of the time period. (Each insect does $1 worth of damage.)

  • Suppose that the insect populations increase 3-fold during each time period

  • If we start with a pest population of 72, the accumulated profits during the first period are as follows:

    • No insecticide: $200 - 72 - 0 = $128

    • Low dose: $200 - (2/3)72 - 20 = $132

    • High dose: $200 - (1/4)72 - 100 = $82

  • The pest populations at the beginning of time period 2 are:

    • No insecticide: 72 x 3 = 216

    • Low dose: (2/3)72 x 3 = 144

    • High dose: (1/4)72 x 3 = 54

  • The profits accumulated during time period 2 and the final insect populations are shown in the accompanying figure

  • Making the optimum decision at each decision point would dictate using the low dose of insecticide at each decision point for a total profit of $132 + 84 = $216

  • However, the optimum sequence would be to use the high dose for the first spray and nothing for the second: $82 + 146 = $228

  • This example simply illustrates the need for a systematic optimization procedure
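The two-period example above can be sketched in Python. With only 3^2 = 9 possible sequences, a plain exhaustive search (rather than the full dynamic programming recursion) is enough to contrast the greedy decisions with the optimum sequence:

```python
actions = ["none", "low", "high"]
cost = {"none": 0, "low": 20, "high": 100}     # $/acre per spray
kill = {"none": 0.0, "low": 1 / 3, "high": 3 / 4}

def step(pop, a):
    """Return (profit for one period, population entering the next period).

    Revenue for the period is $200 minus $1 per surviving insect;
    the survivors then triple before the next decision point.
    """
    survivors = (1 - kill[a]) * pop
    return 200 - survivors - cost[a], survivors * 3

# Greedy: make the locally best decision at each of the two decision points.
pop, greedy_total = 72, 0.0
for _ in range(2):
    profit, pop = max((step(pop, a) for a in actions), key=lambda t: t[0])
    greedy_total += profit

# Exhaustive: evaluate all nine two-decision sequences.
best_seq, best_total = None, float("-inf")
for a1 in actions:
    p1, mid_pop = step(72, a1)
    for a2 in actions:
        p2, _ = step(mid_pop, a2)
        if p1 + p2 > best_total:
            best_seq, best_total = (a1, a2), p1 + p2

print(round(greedy_total))          # 216
print(best_seq, round(best_total))  # ('high', 'none') 228
```

The greedy strategy (low dose in both periods) yields $216, while the optimum sequence (high dose, then nothing) yields $228, matching the figures in the text: knocking the population down hard early is worth more than it appears from the first period alone.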
4.5.4.3. The number of possible decision combinations equals the number of control alternatives raised to the power of the number of decision points
  • For our trivial example, N = 3^2 = 9

  • If we had 5 control alternatives and 7 decision points,

    N = 5^7 = 78125

  • The dynamic programming algorithm does not analyze all possible combinations; instead it works backward from the last decision stage, retaining at each stage only those decisions that can be part of an optimal sequence (Bellman's principle of optimality)
4.5.4.4. The dynamic programming technique can handle nonlinear models, stochastic models, and a large, but limited, number of decision variables
4.5.4.5. Dynamic programming can be used where the system can be adequately modeled with a relatively modest number of variables
4.6. Economics of public pest management programs
