Environmental scanning is the first step in the strategic management process

“Strategic management can be defined as the process of making decisions with respect to scarce resources, in order to achieve goals.” It is a skill that many people find difficult to grasp, and it has been studied extensively. That study yields knowledge that helps you navigate business life successfully.

“What are the 7 steps of the strategic management process?” is a question that many people have asked. The first step in the strategic management process is to define your objectives and goals, which can be achieved through various methods.


The design, execution, and assessment of a company’s long-term business strategies is referred to as strategic management. The first phase in strategic management is strategy formulation, which entails obtaining, assessing, and arranging information.

What are the phases in the strategic management process, then?

Goal-setting, analysis, strategy creation, strategy execution, and strategy monitoring are the five steps of the process.

  1. Clarify your objectives. The aim of goal-setting is to define your company’s vision.
  2. Gather and analyze information.
  3. Create a plan of action.
  4. Put your plan into action.
  5. Control and evaluate.

What, then, are the six stages in the strategic management process? 6 Steps of the Strategic Management Process:

  • Creating a company’s vision and mission.
  • Conducting an organizational strategic analysis.
  • Developing objectives.
  • Formulating a strategy.
  • Implementing the strategy.
  • Controlling and evaluating strategic initiatives.

“What is the first phase in the strategic management process?” is also a popular question.

Establishing the purpose and vision is the first stage in the Strategic-Management Process. In the second stage, environmental scanning is used to establish the grand plan.

What are the five stages involved in planning?

Five Important Steps in the Planning Process

  • Step 1: Make a list of your goals. To set out on the road to retirement, you must first figure out where you want to go.
  • Step 2: Decide on your investment strategy.
  • Step 3: Assess your investments.
  • Step 4: Select an appropriate investment strategy.
  • Step 5: Carry out the plan and review it on a regular basis.

Answers to Related Questions

What is the definition of a strategic process?

Defining the organization’s strategy is part of the strategic management process. It’s also described as the process through which managers choose from a variety of strategies for the company that will help it perform better.

What is the definition of a strategic mission?

Strategic mission is a form of marketing strategy that is based on an organization’s basic ideology. It is a long-term strategy for achieving the company’s mission and vision objectives and goals. Developing a strategic mission begins with setting strategic goals (Prahalad and Hamel).

What are the five components of a strategic plan?

A strategy is a collection of options that work together. These options are divided into five categories that managers must evaluate when making decisions: arenas, differentiators, vehicles, staging and pacing, and economic logic.

What should be included in a strategy?

The following are the primary components of a basic strategic plan:

  • Aspirations, mission, and vision.
  • Core values.
  • Strengths, weaknesses, opportunities, and threats.
  • Objectives, strategies, and operational tactics.
  • Funding streams and metrics.

What exactly do you mean when you say “mission”?

Mission Statement Definition: a brief written declaration of your business aims and principles; a phrase explaining a company’s purpose, markets, and competitive advantages. A mission statement explains what a company is, what it does, and why it exists.

What’s the best way to come up with a strategy?

Here are six easy steps to help you create a successful company strategy:

  1. Gather information. To know where you’re going, you must first understand where you are now.
  2. Create a vision statement for your company.
  3. Create a mission statement for your company.
  4. Determine your strategic goals.
  5. Develop tactical strategies.
  6. Manage performance.

What exactly do you mean when you say “competitive advantage”?

A competitive advantage is an edge gained over rivals by providing customers with greater value, either through lower prices or by delivering superior goods and services that justify higher prices.

What are the different sorts of strategies?

Different types of strategies include:

  • Corporate strategies, sometimes known as grand strategies, may be divided into four categories: growth, stability, retrenchment, and combination.
  • Business-level strategies: rivalry is at the heart of business-level strategies.
  • Functional strategies.

What is the definition of strategy formulation?

The process through which an organization picks the most suitable courses of action to accomplish its set objectives is known as strategy formulation. This process is critical to an organization’s success because it establishes a framework for the activities that will result in the desired outcomes.

In the strategic management process, what is the second step?

Examination of the External Business Environment: The analysis of the external business environment is the second phase in the strategy development process. It’s about researching or watching what’s going on in the external business environment, as well as any changes that have occurred.

What is the procedure for planning?

The steps a corporation takes to establish budgets to direct its future operations are referred to as the planning process. Strategic plans, tactical plans, operational plans, and project plans are some of the documents that may be created. The stages in the planning process are as follows: Set goals. Create tasks to achieve those goals.

What do you mean by strategic skills?

Strategic thinking describes how individuals consider, appraise, perceive, and shape the future for themselves and others. Strategic thinkers know how to think strategically and how to create a visioning process. They possess both abilities and use them to complement one another.

What are the four different styles of planning?

This course will explain the four forms of planning used by managers: strategic, tactical, operational, and contingency planning. Single-use plans, ongoing plans, policies, procedures, and rules will also be defined.

What are some planning examples?

Examining a few instances of organizational planning might help you improve your own.

  • Workforce development planning. The goal of workforce development is to produce a diverse, high-performing workforce composed of loyal and satisfied workers.
  • Product and service planning.
  • Expansion strategies.
  • Financial planning, of which budgeting is an important aspect.

What role does planning have in your life?

Assists in achieving goals: Every company has a set of goals or targets. It continues to work hard to achieve these objectives. Planning aids an organization in achieving these goals with relative ease and speed. Planning also aids an organization in avoiding haphazard (unplanned) actions.

When five or six first-order impacts have been identified or when the space around the initial event is occupied, the process is repeated for each first-order impact. Again, the task is to determine the possible impacts if this event were to occur. The second-order impacts are linked to their first-order impacts by two lines. These steps are repeated for third- and fourth-order impacts, or as far as the group would like to go. Typically, third- and fourth-order impacts are sufficient to explore all of the significant impacts of the initial event. Usually a group identifies several feedback loops; for example, a fourth-order impact might increase or decrease a third- or a second-order impact. The value of impact networks lies in their simplicity and in their potential to identify a wide range of impacts very quickly. If more impacts or higher-order impacts need to be considered, the process is repeated.

A simple example of the use of an impact network illustrates the impact of the elimination of tenure in higher education (Wagschall 1983). As shown in figure 7, the immediate or first-order consequences of the event were perceived to be (1) reduced personnel costs, (2) more frequent turnover of faculty, and (3) an improvement in the academic quality of the faculty. Each consequence then becomes the center of an impact network, and the search for impacts continues. For example, the improvement of the faculty's academic quality causes improved learning experiences, students' increased satisfaction with their education, and the accomplishment of more research. The reduction in personnel costs produces stronger faculty unions, more funds for non-personnel items, and decreased costs per student. Increased faculty turnover produces a decrease in average faculty salary, an increase in overall quality of the faculty, and a decrease in the average age of the faculty. Each consequence in turn becomes the center of the third-order impact network, and so on.

A completed impact network is often very revealing. In one sense, it serves as a Rorschach test of the authoring group or the organization because the members of the group are most likely to identify impacts highlighting areas of concern. In another sense, by trying to specify the range of second-order impacts, new insights into the total impact of a potential development can be identified. For example, while an event may stimulate a majority of small, positive, first-order impacts, these first-order impacts may stimulate a wide range of predominantly negative second-order impacts that in total would substantially reduce if not eliminate the positive value of the first-order impacts. Feedback loops may promote the growth of an impact that would far outweigh the original estimate of its importance.
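The branching procedure above can be sketched as a breadth-first walk over a small graph. The dictionary below encodes a paraphrase of the tenure example; the event names and the traversal helper are illustrative, not part of the original method:

```python
from collections import deque

# Hypothetical encoding of the tenure-elimination impact network: each event
# maps to the list of next-order impacts the group perceived it to cause.
impacts = {
    "eliminate tenure": [
        "reduced personnel costs",
        "more frequent faculty turnover",
        "improved academic quality of faculty",
    ],
    "reduced personnel costs": [
        "stronger faculty unions",
        "more funds for non-personnel items",
        "decreased cost per student",
    ],
    "more frequent faculty turnover": [
        "decrease in average faculty salary",
        "increase in overall faculty quality",
        "decrease in average faculty age",
    ],
    "improved academic quality of faculty": [
        "improved learning experiences",
        "increased student satisfaction",
        "more research accomplished",
    ],
}

def impacts_by_order(network, initial_event, max_order=4):
    """Group impacts by order (first, second, ...) via breadth-first traversal."""
    orders = {}
    seen = {initial_event}
    queue = deque([(initial_event, 0)])
    while queue:
        event, order = queue.popleft()
        if order == max_order:
            continue
        for impact in network.get(event, []):
            orders.setdefault(order + 1, []).append(impact)
            if impact not in seen:  # a feedback loop would revisit an earlier impact
                seen.add(impact)
                queue.append((impact, order + 1))
    return orders

result = impacts_by_order(impacts, "eliminate tenure")
```

Here `result[1]` holds the three first-order impacts and `result[2]` the nine second-order ones; third- and fourth-order impacts would appear as soon as the corresponding branches were added to the dictionary.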

Forecasting
Scanning typically leads to the identification of more issues than the organization can reasonably expect to explore in depth, given its limitations of time, money, and people. Simple evaluation techniques like those described in the previous section can help reduce the set of candidates to manageable size. The surviving issues can then be subjected to detailed forecasting, analysis, and policy evaluation. Many methods have been developed for forecasting. This section surveys the range of methods, beginning with several varieties of the simplest, most popular type of forecasting, individual judgmental forecasting. It then briefly describes techniques of mathematical trend extrapolation and group forecasting, cross-impact models, and scenarios.

Implicit forecasting

According to Yogi Berra, "You can observe a lot just by watching." And much of what can be observed is the future. Despite the constant flood of assertions about the accelerating pace of change, despite endless warnings about impermanence and future shock, despite the vigor of the minor industry that produces one book or report after another that begins by telling us that we are on the verge of a societal transformation every bit as profound as the industrial revolution (all of which may actually be true), the present still foreshadows the future. If only we knew the past and present well enough, far fewer "surprises" would catch us unaware in the future. It pays to watch, and it especially pays to watch the largest systems--government, education, transportation, primary metals, finance, health care, energy--for they usually change very slowly and only after protracted debate and consensus building.

No one should have any difficulty with the notion that many of the developments causing turmoil and confusion in each of these systems today were being widely discussed--even passionately advocated or resisted--at least 10 or more years ago. Five or 10 years from now no one should find it hard to look back to today and discover that the same was true.


Administrators in large institutions know that very long lead times are often required before major decisions can be initiated and fully implemented. They also know that the environment can change in peculiar, sometimes unpredictable ways while these decisions are coursing through the system. The result can be that by the time the decisions should have been fully implemented, the world will have changed so much that they must be abandoned or radically altered. To the extent, however, that the original expectations were shattered by forces arising from large systems, why should administrators be surprised by the outcome? They may be exceedingly disappointed that they have persevered in a losing battle, but they should not be surprised.

Real surprises usually come from failing to keep track of small-scale developments in the external environment, not from excluding small-scale developments within one's own system. By systematically following these external developments it is possible not only to anticipate the directions and potential impacts of the slower, more pronounced, more profoundly influential changes but also to obtain the early warning needed for timely adjustments of strategy. Emerging patterns of events, the ebb and flow of particular sets of issues that can be revealed by close monitoring, provide a basis for forecasts relevant to policy. These forecasts are intuitive, to be sure, and perhaps seen only dimly in outline, but they are nonetheless the best forecasts available.

Even when the output from scanning consists of forecasts, we must still make our own judgments about the future, because we must decide what is relevant and we must make judgments as to whether we agree with the given forecasts. The same process is at play when we read newspapers, journals, reports, and government documents or listen to a broadcast. We constantly make personal forecasts on the basis of sparse and fragmented historical data in an attempt to distill the future that may be implied.

This process of trying to infer the future by mentally extending current or historical data is sometimes called "implicit forecasting." Such forecasting is obviously as useful as it is unavoidable when it comes to obtaining an appreciation of the broad outlines of possible futures. By itself, however, implicit forecasting is not sufficient when it comes to making today's decisions about our own most important long-range issues--the direction of a career, the development of a profession, the survival of an institution, department, or program, for example. In such cases, the need is also for methods that deal much more formally, systematically, and comprehensively with the nature and likely dynamics of future events, trends, and policy choices.

It is easy to see why our implicit forecasts of the general context are progressively less trustworthy as the questions at stake become more important. These forecasts are entirely subjective, they are no doubt idiosyncratic, they are often made on topics we are unqualified to assess because of a lack of relevant experience or knowledge, they rest very largely on unspoken arguments from historical precedent or analogy, and they are haphazard in that they are made primarily in response to information we receive that is itself usually developed haphazardly or opportunistically.

As futures research has developed since the mid-1960s, much work has gone into the invention and application of techniques intended to overcome these and other limitations of widely practiced methods of forecasting. In general, the newer methods are alike in that they tend to deal as explicitly and systematically as possible with the various elements of alternative futures, the aim being to provide the wherewithal for users to retrace the steps taken. The following paragraphs highlight some of these methods.

Genius forecasting

Apart from implicit forecasting, the most common approach to forecasting throughout history has been for a single individual simply to make explicit guesstimates about the future. In their weaker moments, many bright and otherwise well-informed people--including even futures researchers--are sometimes cajoled into offering such guesstimates, which typically take the form of one-line forecasts ("cancer will be cured," "no ship will ever be sunk by a bomb," or "the end is near"). But if they are persuaded to reflect on the future in a widely ranging way, to try to articulate the underlying logic of affairs and its likely evolution over time, to reason through the obvious alternatives and imagine the not so obvious ones, when in short they offer a careful but creative image of the future in its richness and complexity, then a much different process is involved. It has no common name, but in futures research it is often lightly called "genius forecasting." It is a powerful and highly cost-effective way to obtain forecasts if the "genius" is indeed thoughtful, imaginative, and well read in many areas.

The disadvantages of genius forecasting are clear enough to require no enumeration here. "In the end, genius forecasting depends on more than the genius of the forecaster; it depends on luck and insight. There may be many geniuses whose forecasts are made with full measure of both, but it is nearly impossible to recognize them a priori, and this, of course, is the weakness of the method" (Gordon 1972, p. 167).

If used properly, however, the strengths of the method usually outweigh its weaknesses. The probability of the integrated forecast produced by the "genius" is certain to be virtually zero. Time will show that the forecast was oversimplified, led astray by biases, and ignorant of critical possibilities. Yet the genius has the ability to identify unprecedented future events, to imagine current policies that might be abandoned, to assess the interplay of trends and future events in a far more meaningful way than any existing model can, to trace out the significance of this interplay, to identify opportunities for action that no one else might ever see, and to explain assumptions and reasoning. Although the genius forecast will be both "wrong" and incomplete, it will nevertheless have provided something very useful: an intelligent base case.

Occasionally, genius forecasts can serve as the only forecasts in a study. This approach makes excellent sense in studies being accomplished under severely constrained time and resources. Increasingly in futures research, however, studies are begun by commissioning one or more genius forecasts, which take the form of essays or scenarios of one sort or another. With them in hand, the investigators explore them carefully for omissions and inconsistencies, and then the forecasts are carefully pulled apart to identify the specific trends, events, and policies that appear to warrant detailed evaluation; that is, the most uncertain, problematical, intractable, and potentially valuable statements about the future can be selected. Being able to launch a more sophisticated forecasting effort from such a basis is much better than having random thoughts and blank paper.

Extrapolation of mathematical trends

Most forecasters and some practitioners of futures research use techniques of mathematical trend extrapolation that are well understood, rest on a fairly adequate theoretical foundation, convey the impression of being scientific and objective, and in skilled hands are usually quick and inexpensive to use. One of the most commonly used techniques is regression analysis, one purpose of which is to estimate the predicted values of a trend (the dependent variable) from observed values of other trends (the independent variables). Hierarchical regression models are sometimes referred to as "causal" models if an observed statistical relationship exists between the independent and dependent variables, if the independent variables occur before the dependent variable, and if one can develop a reasonable explanation for the causal relationship. A forecast of the independent variables makes possible a forecast of the dependent ones to which they are statistically linked, whether the case is simple or complex. In either case, however, the purpose behind causal regression models is always to explain complex dynamic trends (for example, college and university enrollment patterns) in terms of elementary stable trends (for example, demographics or government spending).
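As a rough sketch of this idea, the following fits a single-predictor least-squares line relating a dependent trend (enrollment) to an independent one (the college-age population), then turns a forecast of the independent variable into a forecast of the dependent one. All figures are invented for illustration:

```python
# Minimal single-predictor regression sketch (illustrative data throughout):
# fit enrollment (dependent trend) from the 18-year-old population
# (independent trend), then forecast enrollment from a forecast of population.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

pop_18 = [4.2, 4.1, 3.9, 3.8, 3.6]        # millions (invented)
enroll = [12.1, 11.9, 11.4, 11.2, 10.7]   # millions (invented)

a, b = fit_line(pop_18, enroll)

forecast_pop = 3.4                         # assumed forecast of the independent trend
forecast_enroll = a + b * forecast_pop     # implied forecast of the dependent trend
```

The dependent forecast is only as good as the statistical link and the forecast of the independent variable, which is exactly the point of the "causal" caveats above.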

When cause is not an essential factor, trends are often forecast using time as the independent variable. Much of the "trend extrapolation" in futures research takes this form. Common methods of time-series forecasting being used today are the smoothing, decomposition, and autoregression/moving average methods. Smoothing methods are used to eliminate randomness from a data series to identify an underlying pattern, if one exists, but they make no attempt to identify individual components of the underlying pattern. Decomposition methods can be used to identify those components--typically, the trend, the cycle, and the seasonal factors--which are then predicted individually. The recombination of these predicted patterns is the final forecast of the series. Like smoothing methods, decomposition methods lack a fully developed theoretical basis, but they are being used today because of their simplicity and short-term accuracy. Autoregression is essentially the same as the classical multivariate regression, the only difference being that the independent (predictor) variables are simply the time-lagged values of the dependent (predicted) variable. Because time-lagged values tend to be highly correlated, coupling autoregression with the moving average method produces a very general class of time-series models called autoregression/moving average (ARMA) models.
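A minimal illustration of the smoothing idea named above, using a centered moving average on an invented series; as the text notes, the underlying components (trend, cycle, season) are not separated, only the randomness is damped:

```python
# Centered moving-average smoothing: replace each interior point with the
# mean of its window, exposing the underlying pattern of an invented series.

def moving_average(series, window=3):
    """Centered moving average; the smoothed series is shorter by window - 1."""
    half = window // 2
    return [
        sum(series[i - half : i + half + 1]) / window
        for i in range(half, len(series) - half)
    ]

raw = [10, 14, 9, 13, 11, 15, 10]
smoothed = moving_average(raw, window=3)
```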

All regression and time-series methods rest on the assumption that the historical data can, by themselves, be used to forecast the future of a series. In other words, they assume that the future of a trend is exclusively a function of its past. This assumption, however, will always prove false eventually because of the influence of forces not measured by the time series itself. That is to say, unprecedented sorts of events always occur and affect the series, which is precisely why the historical data are so irregular.

These difficulties have not deterred many traditional analysts and long-range forecasters from using such methods and thereby generating dubious advice for their sponsors. Within futures research, however, these techniques--when used well--are applied in a very distinctive way. The objective is not to foretell the future, which is obviously impossible, but to provide purely extrapolative base-line projections to use as a point of reference when obtaining projections of the same trends by more appropriate methods. What would the world look like if past and current forces for change were allowed to play themselves out? What if nothing novel ever happened again? The only value of these mathematical forecasting techniques in futures research is to provide answers to these remarkably speculative questions. But once they are answered, a reference will have been established for getting on with more serious forecasting.

For example, in a study by Boucher and Neufeld (1981), a set of 111 trends was forecast 20 years hence both mathematically (using an ARMA technique) and judgmentally (using the Delphi technique). Analysis of the results showed that the average difference between the two sets of forecasts was over 15 percent. By the first forecasted year (which was less than a year from the date of the completion of the Delphi), the divergence already averaged more than 10 percent; by the 20th year, it had reached 20 percent. This result is interesting because even experienced managers usually accept mathematical forecasts uncritically. They like their apparent scientific objectivity, they have been trained in school to accept their plausibility, and acceptance has been reinforced by an endless stream of such projections from government, academia, and other organizations. Seeing judgmental and mathematical results side-by-side can thus be most instructive. Moreover, as some futures researchers believe, if the difference between such a pair of projections is 10 percent or more, it is probably worth examining in depth.
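The side-by-side comparison can be reduced to a single statistic, sketched below with invented forecast values (not the Boucher and Neufeld data):

```python
# Mean absolute percent divergence between a mathematical (extrapolative)
# forecast and a judgmental forecast of the same trends. All values invented.

def mean_divergence(math_fc, judg_fc):
    """Mean absolute percent difference, relative to the mathematical forecast."""
    diffs = [abs(m - j) / abs(m) for m, j in zip(math_fc, judg_fc)]
    return 100 * sum(diffs) / len(diffs)

arma = [102.0, 95.0, 110.0]     # extrapolative projections for three trends
delphi = [85.0, 100.0, 121.0]   # judgmental projections for the same trends

divergence = mean_divergence(arma, delphi)

# Some futures researchers treat a divergence of 10 percent or more as a
# signal that the pair of projections is worth examining in depth.
flagged = divergence >= 10.0
```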

The Delphi technique

Given the limitations of personal forecasting (implicit or genius) and of mathematical projections, it is now common--and usually wise--to rely on systematic methods for using a group of persons to prepare the forecasts and assessments needed in strategic planning. Experience suggests, however, that at least five conditions must be present before the decision to use a group should be made: (1) No "known" or "right" answers exist or can be had (that is, acceptable forecasts do not exist or are not available); (2) equally reputable persons disagree about the nature of the problem, the relative importance of various issues, and the probable future; (3) the questions to be investigated cross disciplinary, political, or jurisdictional lines, and no one individual is considered competent enough to cope with so many subjects; (4) cross-fertilization of ideas seems worthwhile and possible; and (5) a credible method exists for defining group consensus and evaluating group performance.

The fifth condition is especially important--and often slighted. As a matter of fact, the emphasis one places on this consideration often determines the method of group forecasting one chooses. If, for example, the person seeking the forecasts will be content with an oral summary of the results (or perhaps a memo for the record), then a conventional face-to-face meeting of some sort may be the appropriate method. If, at the other extreme, it is known that the intended user will insist on having a detailed comprehensive forecast and that the persons whose views should be solicited would never speak openly or calmly to each other at a face-to-face meeting, then a different scheme for eliciting, integrating, and reporting the forecasts would surely be required.

Considerations like these were responsible in large part for the invention of what is no doubt the most famous and popular of all forecasting methods associated with futures research: the Delphi technique. Delphi was designed to obtain consensus forecasts from a group of "experts" on the assumption that many heads are indeed often better than one, an assumption supported by the argument that a group estimate is at least as reliable as that of a randomly chosen expert (Dalkey 1969). But Delphi was developed to deal especially with the situation in which risks were inherent in bringing these experts together for a face-to-face meeting--for example, possible reluctance of some participants to revise previously expressed judgments, possible domination of the meeting by a powerful individual or clique, possible bandwagon effects on some issues, and similar problems of group psychology. The Delphi method was intended to overcome or minimize such obstacles to effective collaborative forecasting by four simple procedural rules, the first of which is desirable, the last three of which are mandatory.

First, no participant is told the identity of the other members of the group, which is easily accomplished if, as is common, the forecasts are obtained by means of questionnaires or individual interviews. When the Delphi is conducted in a workshop setting--one of the more productive ways to proceed in many cases--this rule cannot be honored, of course.

Second, no single opinion, forecast, or other key input is attributed to the individual who provided it or to anyone else. Delphi questionnaires, interviews, and computer conferences all easily provide this protection. In the workshop setting, it is more difficult to ensure, but it can usually be obtained by using secret ballots or various electronic machines that permit anonymous voting with immediate display of the distribution of answers from the group as a whole.

Third, the results from the initial round of forecasting must be collated and summarized by an intermediary (the experimenter), who feeds these data back to all participants and invites each to rethink his or her original answers in light of the responses from the group as a whole. If, for example, the participants have individually estimated an event's probability by some future year, the intermediary might compute the mean or median response, the interquartile range or upper and lower envelopes of the estimates, the standard deviation, and so forth, and pass these data back to the panelists for their consideration in making a new estimate. If the panelists provided qualitative information as well--for example, reasons for estimating the probabilities as they did or judgments as to the consequences of the event if it were actually to occur--the role of the intermediary would be to edit these statements, eliminate the redundant ones, and arrange them in some reasonable order before returning them for the group's consideration.
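The intermediary's computation for a single event might look like the following sketch, using invented panel estimates of an event's probability:

```python
import statistics

# Summarize one round of panel estimates for a single event so each panelist
# can reconsider in light of the group response. Estimates are invented.

def round_feedback(estimates):
    """Median, interquartile range, and standard deviation of panel estimates."""
    q1, _, q3 = statistics.quantiles(estimates, n=4)
    return {
        "median": statistics.median(estimates),
        "interquartile_range": (q1, q3),
        "std_dev": statistics.stdev(estimates),
    }

panel = [0.2, 0.35, 0.4, 0.5, 0.5, 0.6, 0.8]  # P(event by year X), one per panelist
feedback = round_feedback(panel)
```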

Fourth, the process of eliciting judgments and estimates (deriving the group response, feeding it back, and asking for re-estimates in light of the results obtained so far) should be continued until either of two things happens: The consensus within the group is close enough for practical purposes, or the reasons why such a consensus cannot be achieved have been documented.

In sum, the defining characteristics of Delphi are anonymity of the estimates, controlled feedback, and iteration. The promise of Delphi was that if these characteristics were preserved, consensus within the panel would sharpen and the opinions or forecasts derived by the process would be closer to the "true" answer than forecasts derived by other judgmental approaches.

Thousands of Delphi studies of varying quality have been conducted throughout the world since 1964, when the first major report on the technique was published (Gordon and Helmer 1964). The subjects forecast have ranged from the future of absenteeism in the work force to the future of war and along the way have included topics as diverse as prospective educational technologies, the likely incidence of breast cancer, the future of the rubber industry, the design of an ideal telephone switchboard, and the future of Delphi itself. Some of these studies proved to be extremely helpful in strategic planning; a few virtually decided the future of the sponsoring organization. But most had little or no effect, apart from providing general background information or satisfying a momentary curiosity about this novel method of forecasting.

Part of the problem in many cases is that practitioners have had false hopes. The literature conveys the impression that Delphi is so powerful and simple that anyone can "run one" on any subject. What the literature often fails to mention is that no established conventions yet exist for any aspect of study design, execution, analysis, or reporting. Intermediaries, who are the key to useful and responsible results, are very much on their own. As novices they should examine studies by others, but because these studies are all different, it may be very difficult to find or recognize good models. Even with an excellent model in hand, the newcomer cannot fully appreciate what it means to use it. Only through practice can one discover the significance of four key facts about Delphi: (1) The amount of information and data garnered through the process can and will explode from round to round; (2) good questions are difficult to devise, and the better the design of the questions asked, the more likely it is that good participants will resign from the panel out of what has been called the BIF factor--boredom, irritation, and fatigue--because they will be asked to answer the same challenging questions again and again for each trend or event in the set they are forecasting; (3) the likelihood of such attrition within the panel means not that the questions should be cheapened but that large panels must be established so that each participant will have fewer questions to answer, which is very time consuming; (4) Delphi itself does not include procedures for synthesizing the entire set of specific forecasts and supporting arguments it produces, so that when the study is "completed," the work has usually just begun. And if, as one hopes, the intermediary and the panelists take the process and the questions seriously, the probability is high that the schedule will slip, the budget will be overrun, and so on and on.

Another reason that success with Delphi is hard to achieve is that, despite 20 years of serious applications, very little is known about how and why the consensus-building process in Delphi works or what it actually produces. No wide-ranging research on the fundamentals of the method has been done for more than a decade. According to Olaf Helmer, one of the inventors of Delphi, "Delphi still lacks a completely sound theoretical basis.... Delphi experience derives almost wholly either from studies carried out without proper experimental controls or from controlled experiments in which students are used as surrogate experts" (Linstone and Turoff 1975, p. v). The same is true today. The practical implication is that most of what is "known" about Delphi consists of rules of thumb based on the experience of individual practitioners.

For example, a goal of Delphi is to facilitate a sharpening of consensus forecasts from round to round of interrogation. And, in fact, there probably has yet to be a Delphi study in which the consensus among the participating experts did not actually grow closer on almost all of the estimates requested (as measured by, say, a decline in the size of the interquartile range of estimates). Yet the limited empirical evidence available on this phenomenon is replete with suggestions that increased consensus is produced only in slight part by the panelists' deliberations on the group feedback from the earlier round. The greater part of the shift seems to come from two other causes: (1) the panelists simply reread the questions and understand them better, and (2) the panelists are biased by the group's response in the preceding round of interrogation (that is, they allow themselves to drift toward the mean or median answer). The difficulty posed by this situation--which is far from atypical of the problems presented by Delphi--is that no way has yet been found to sort out the effects of these different influences on the final forecast. Accordingly, the investigator must be extremely careful when interpreting the results. Claims that Delphi is "working" are always suspect.
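The decline in the interquartile range mentioned above can be computed directly. A minimal sketch in Python, using invented panel estimates of the year in which an event will occur:

```python
import statistics

def interquartile_range(estimates):
    """Spread of the middle 50 percent of panel estimates."""
    q1, _, q3 = statistics.quantiles(estimates, n=4)
    return q3 - q1

# Hypothetical forecasts from a nine-member panel, rounds one and two.
round_1 = [1995, 1998, 2000, 2001, 2003, 2005, 2008, 2010, 2020]
round_2 = [1998, 1999, 2000, 2001, 2002, 2003, 2005, 2006, 2008]

print(interquartile_range(round_1))  # wider spread in round 1
print(interquartile_range(round_2))  # narrower spread in round 2
```

A shrinking interquartile range from round to round is the conventional evidence of "consensus," but, as noted above, it does not by itself reveal why the panel converged.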

On the positive side, though again as a strictly practical, non-theoretical matter, Delphi appears to have a number of important advantages as a group evaluation or forecasting technique. It is not difficult to explain the essence of the method to potential participants or to one's superiors. It is quite likely that some types of forecasts could not be obtained from a group without the guarantee of anonymity and the opportunity for second thoughts in later rounds (certainly true when hostile stake holders are jointly evaluating the implications of policy actions that might affect them differently). Areas of agreement and disagreement within the panel can be readily identified, thanks to the straightforward presentation of data. Perhaps most important, every participant's opinion can be heard on the forecasts in every round, and every participant has the opportunity to comment on every qualitative argument or assessment. For this reason, it becomes much easier to determine the uncertainties that responsible persons have about the problem under study. If the panelists are chosen carefully, a full spectrum of hopes, fears, and other expectations can be defined.


When successes with Delphi occur, it would seem that the explanation is not that the panel converged from round to round (which, as indicated earlier, almost always happens). Nor is it that the mean or median response moved toward the "true" answer (which is something that no one could know at the time). Rather, it is that the investigation was conducted professionally and that the results did in fact have the effect of increasing the user's understanding of the uncertainties surrounding the problem, the range of strategic options available in light of those uncertainties, and the need to monitor closely the possible, real-world consequences of options that may actually be implemented.

Delphi has been used in many policy studies in higher education. In one case, it was used to determine priorities for a program in family studies (Young 1978). Nash (1978), after reviewing its use in a number of studies concerning educational goals and objectives, curriculum and campus planning, and effectiveness and cost-benefit measures, concluded that Delphi is a convenient methodology appropriate for a non-research-oriented population. The technique has also been used in a number of planning studies (Judd 1972). For example, it was used as a tool for gathering planning data to meet the needs of adult part-time students in North Carolina (Fendt 1978).

In general, the more successful practitioners of Delphi appear to have tried to follow the 15 steps presented in figure 8. These "rules" may appear platitudinous, and virtually no one has ever followed all of them in a single Delphi. Yet the intrinsic quality and practical value of Delphi results are certain to be a function of the degree to which they are followed.

FIGURE 8
STEPS IN A PERFECT DELPHI
  1. Understand Delphi (for example, that at least two rounds of interrogation are necessary).
  2. Specify the subject and the objectives. (Don't study "the future." Study alternative futures of X--and do so with clear purpose.)
  3. Specify whether the forecasting mode to be adopted is exploratory or normative--or some clear combination of both.
  4. Specify all desired products, level of effort, responsibilities, and schedule.
  5. Specify the uses to which the results will be put, if they are actually achieved.
  6. Exploit the methodology and substantive results developed in earlier Delphi studies.
  7. Design the study so that it includes only judgmental questions (except in extreme cases), and see to it that these questions are precisely phrased and cover all topics of interest as specifically as possible.
  8. Design all rounds of the study before administering the first round. (Don't forget that this step includes the design of forms or software for collating the responses.)
  9. Design the survey instrument so that the questions are explained clearly and simply, can be answered as painlessly as possible, and can be answered responsibly.
  10. Include appropriate historical data and a set of assumptions about the future in the survey instrument so that the respondents will all be dealing with future developments in the context of the same explicit past and "given" future.
  11. Assemble a group of respondents capable of answering the questions creatively, in depth, and on schedule, and large enough to ensure that all important points of view are represented.
  12. Collate the responses wisely, consistently, and promptly.
  13. Analyze the data wisely, consistently, and promptly.
  14. Probe the methodology and the substantive results constantly during and after the effort to identify problems and important needed improvements.
  15. Synthesize and present the final results to management intelligently.

Other group techniques

Delphi is generally considered one of the better techniques of pooling the insight, experience, imagination, and judgment of those who are knowledgeable in strategic matters and who have an obligation to deal with them responsibly. Many other ways, however, can be used to exploit the power of groups in forecasting and futures research: brainstorming, gaming, synectics, the nominal group technique, focus groups, and others, including the Quick Environmental Scanning Technique (QUEST), the Focused Planning Effort (FPE), and the Delphi Decision Support System (D2S2). The last three are discussed in this section because they are currently used in futures research.

QUEST (Nanus 1982) was developed to quickly and inexpensively provide the grist for strategic planning: forecasts of events and trends, an indication of the interrelationships among them and hence the opportunities for policy intervention, and scenarios that synthesize these results into coherent alternative futures. It is a face-to-face technique, accomplished through two day-long meetings spaced about a month apart. The procedure produces a comprehensive analysis of the external environment and an assessment of an organization's strategic options.

A QUEST exercise usually begins with the recognition of a potentially critical strategic problem. The process requires a moderator, who may be an outside consultant, to facilitate posing questions that challenge obsolete management positions and to maintain an objective perspective on ideas generated during the activity. The process also requires a project coordinator, who must be an "insider," to facilitate translating the results of QUEST exercises into scenarios that address strategic questions embedded in the organizational culture.

QUEST involves four steps. The first step, preparation, requires defining the strategic issue to be analyzed, selecting participants (12 to 15), developing an information notebook elaborating the issue, and selecting distraction-free workshop sites.

The second step is to conduct the first planning session. It is important that at least one day be scheduled to provide sufficient time to discuss the strategic environment in the broadest possible terms. This discussion includes identifying the organization's strategic mission, the objectives reflected in this mission, key stake holders, priorities, and critical environmental events and trends that may have significant impacts on the organization. Much of this time will be spent evaluating the magnitude and likelihood of these impacts and their cross-impacts on each other and on the organization's strategic posture. Participants are encouraged to focus on strategic changes but not on the strategic implications of these changes. This constraint is imposed to delay evaluations and responses until a complete slate of alternatives is developed.

The third step is to summarize the results of the first planning session in two parts: (1) a statement of the organization's strategic position, mission, objectives, stake holders, and so on, and (2) a statement of alternative scenarios illustrating possible external environments facing the organization over the strategic period. It is important that the report be attributed to the group, not sections to particular individuals. Correspondingly, it is important that the report reflect that ideas were considered on the basis of merit, not on who advanced them. The report should be distributed a few days before the second group meeting, the final step.

The second meeting focuses on the report and the strategic options facing the organization. These options are evaluated for their responsiveness to the changing external environment and for their consistency with internal strengths and weaknesses. While this process will not produce an immediate change in strategy, it should result in directions to evaluate the most important options in greater depth. Consequently, a QUEST exercise ends with specific assignments vis-à-vis the general nature of the inquiry needed to evaluate each option, including a completion date.

The Focused Planning Effort was developed in 1971 (Boucher 1972). Like QUEST, it is an unusual kind of face-to-face meeting that draws systematically on the judgment and imagination of line and staff managers to define future threats and opportunities and find practical actions for dealing with them. Because the process is perfectly general--that is, it can be used to address any complex judgmental questions on future mission or strategic policy--the range of applications has been widely varied. In recent years, topics have ranged from the potential merit of technologies to improve agricultural yields, to alternative futures for the data communications industry, to the assessment of human resources in the future.

The FPE has the following features, which in concert make it a distinctive approach to strategic forecasting and policy assessment:

  • All topics relevant to the subject chosen for investigation are explored, one by one and in context with each other. An FPE seeks to be comprehensive. Typically, the participants define the organization's mission, objectives, and goals, and then identify, forecast, and evaluate several issues: (1) the elements of their business environment, including relevant prospective social, economic, technological, and political developments; (2) the alternatives open to the organization; (3) criteria for deciding among the alternatives vis-à-vis the organization's mission, objectives, and goals; (4) the degree to which each important alternative satisfies the criteria; and (5) the dynamic cross-support interrelationships among the preferred alternatives.
  • No idea is off-limits. As in brainstorming, the first objective is to expand the group's sense of the options available.
  • All participants have a full and equal opportunity to influence the outcome at each step. In particular, each participant evaluates every important issue raised after it has been examined in face-to-face discussion by the group.
  • These individual evaluations become the group's response, but the range of opinion (that is, the uncertainty or lack of consensus) is captured and serves as a basis for clarifying differences and sharpening the group's final judgment.
  • Thus, the participants typically respond to the opinion of the group, not to the opinions of individuals within the group. In this way, team building is enhanced and personal confrontations avoided.
  • The FPE is highly systematic, thanks to the use of an interlocking combination of methods that have proven successful in structuring and eliciting judgment. Unlike QUEST, which uses a fixed combination of techniques, the mix used in an FPE varies depending on the subject, the number of participants, and the time available. It can include relevance tree analysis, brainstorming, the Delphi technique, subjective trend extrapolation, polling, operational gaming, cross-support and cross-impact analysis, and scenario development. And while such techniques are used in the FPE, they are not given a particular prominence; they are treated as means, not ends.
  • All judgments on important issues are quantified through individual votes, usually taken on private ballots. This quantification permits objective comparisons of the subjective inputs. Anonymous voting enables everyone to speak his mind.
  • The judgment of the group as a whole is available to each participant at the completion of every step of the FPE. These results then become the basis of the next step, thus helping to ensure that each part of the problem being addressed is dealt with in a context.
  • The major results of the FPE are available at the end of the activity, in writing, and each participant has a copy of the results to take with him.
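The private-ballot voting and feedback loop described in these features can be collated along the following lines; the rating scale and ballots here are invented for illustration:

```python
import statistics

# Hypothetical private ballots on one issue, scored 1 (reject) to 10 (endorse).
ballots = [3, 7, 8, 8, 9, 4, 8, 6, 7, 9]

# The group's response is the median vote; the range of opinion is
# reported alongside it as a measure of uncertainty or lack of consensus.
group_response = statistics.median(ballots)
spread = max(ballots) - min(ballots)

print(group_response, spread)
```

A wide spread flags an issue for clarifying discussion before the group's final judgment is taken, which is exactly how the FPE uses disagreement constructively.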

The FPE process has three parts. The first--pre-meeting design--is the key. Each FPE requires its own design, and the process does not involve a pat formula. The design phase usually requires 10 to 15 days, spread over a few calendar weeks. During this phase, the problem is structured, needed historical data are collected, the FPE logic is defined in detail, and first-cut answers to the more important questions are obtained through interviews or a questionnaire or both. These preliminary answers serve as a check on the FPE design and as a basis for the discussion that will occur during the FPE itself. Ordinarily, this information is gathered from a larger group of people than the one that will participate in the FPE.

The final design is usually formulated in two ways: first, as an agenda, which is distributed to the participants, and, second, as a set of written "modules," each describing a specific task to be completed in the FPE, its purpose, the methods to be used, the anticipated outcomes, and the time allotted for each step in the task. These modules serve as the basis of the sign-off in the final pre-FPE review.

The second part of the process is the FPE itself. The number of participants can range from as few as seven or eight to as many as 20 to 25. The FPE normally requires two to three full days of intensive work, though FPEs have run anywhere from one to 12 days. The period can be consecutive or be spread out in four-hour blocks over a schedule that is convenient to all participants. Typically, the FPE is preceded by a luncheon or dinner meeting and a brief roundtable discussion, which serves to break the ice and helps to clarify expectations about the work to follow.

The FPE can be manual or computer-assisted. D2S2™, developed by the Policy Analysis Company, uses a standard floppy disk and personal computer, usually connected to a large-screen monitor or projector (Renfro 1985). The larger the group of participants, the greater the desirability of using such computer assistance. Not only is the collation of individual votes greatly speeded; in addition, the software developed by some consulting organizations that provide the FPE service (for example, the ICS Group and the Policy Analysis Company) can reveal the basis of differences among subgroups of the participants and draw certain inferences that are implied by the data but not readily apparent on the basis of the estimates themselves. In D2S2™, these capabilities include confidence weighting, vote sharing, and vote assignment.

Although the design of the FPE is quite detailed, it is never rigid. On-the-spot changes are always required during the FPE in light of the flow of the group's discussion and the discoveries it makes. But the design makes it possible to know the opportunity costs of these adjustments and hence when it is appropriate to rein in the group and return to the agenda.

The final part of the process is post-meeting analysis and documentation of the results and specification of areas requiring action or further analysis. Although the principal findings will be known at the end of the FPE, this post-meeting activity is important because the results will have been quantified, and it is necessary to transcend the numbers and capture in words the reasons for various estimates, the basis of irreducible disagreements, and the areas of greatest uncertainty. Additionally, it may be necessary to perform special analyses to distill the full implications of these results.

Cross-impact analysis

Cross-impact analysis is an advanced form of forecasting that builds upon the results achieved through the various subjective and objective methods described in the preceding pages. Although as many as 16 distinct types of cross-impact analysis models have been identified (Linstone 1983), an idea common to each is that separate and explicit account is taken of the causal connections among a set of forecasted developments (perhaps derived by genius forecasting or Delphi). Among some futures researchers, a model that includes only the interactions of events is called a cross-impact model. A model that includes only the interactions of events on forecasted trends but not the impacts of the events on each other is called a "trend impact analysis" (TIA) model. In the general case, however, "cross-impact analysis" is increasingly coming to refer to models in which event-to-event and event-to-trend impacts are considered simultaneously. Constructing such a model involves estimating how the occurrence of each event in the set might affect ("impact") the probability of occurrence of every other event in the set as well as the nominal forecast of each of the trends. (These nominal trend forecasts may be derived through mathematical trend extrapolation or subjective projections.) When these relationships have been specified, it then becomes possible to let events "happen"--either randomly in accordance with their estimated probability or in some prearranged way--and then trace out a distinct, plausible, and internally consistent future. Importantly, it also becomes possible to introduce policy choices into the model to explore their potential value.

Developing a cross-impact model and defining the cross-impact relationships are tedious and demanding tasks. The most complex model that can be built today (using existing software) can include as many as 100 events and 85 trends. Although these may seem like small numbers--after all, how many truly important problems can be described with reference to only 85 trends and 100 possible "surprise" events?--consider the magnitude of the effort required to specify such a model. First, it is necessary to identify where "hits" exist among pairs of events or event-trend pairs. For a model of this size, 18,400 possible cross-impact relationships need to be evaluated (9,900 for the events on the events and 8,500 for the events on the trends). This evaluation is done judgmentally, usually by a team of experts. Experience suggests that hits will be found in about 20 percent of the possible cases, which means that some 3,700 impacts of events on events or events on trends will need to be described in detail.
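The bookkeeping behind these figures is easy to verify; a sketch of the arithmetic, using the 20 percent hit rate cited as a rule of thumb:

```python
# Model size described in the text: 100 events and 85 trends.
events, trends = 100, 85

# Each event can impact every *other* event (ordered pairs)...
event_on_event = events * (events - 1)        # 9,900
# ...and every trend.
event_on_trend = events * trends              # 8,500

possible_relationships = event_on_event + event_on_trend
print(possible_relationships)                 # 18,400

# Hits are found in about 20 percent of the possible cases.
expected_hits = round(0.20 * possible_relationships)
print(expected_hits)                          # 3,680, i.e., "some 3,700"
```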

How are they described? In the most sophisticated model, seven estimates are required to depict the impact of one event on the probability of another:

  1. The length of time from the occurrence of the impacting event until its effects would first be felt by the impacted event;
  2. The degree of change in the probability of the impacted event at the point when the impacting event would have its maximum impact;
  3. The length of time from the occurrence of the impacting event until this maximum impact (that is, change in probability) would be achieved;
  4. The length of time from the occurrence of the impacting event that this maximum impact level would endure;
  5. If the maximum impact would taper off, the change in probability of the impacted event when its new, stable level is reached;
  6. The length of time from the occurrence of the impacting event to reach this stable impact level;
  7. A judgment as to whether or not these effects had already been taken into account when the probabilities of the impacting and impacted events were estimated in the Delphi.

Eight cross-impact factors need to be estimated to describe the hit of an event on a trend. The first seven are the same as those specified above, except that estimates 2 and 5 are not for changes in probability but for changes in the nominal forecasted value of the trend. The eighth estimate specifies whether the changes in the trend values are to be multiplicative or additive.

In short, if we have 3,700 hits to describe and if, say, 60 percent of them (2,220) are impacts of events on events and 40 percent (1,480) are of events on trends, then 27,380 judgments must be made to construct the model (that is, 2,220 x 7 + 1,480 x 8). With these estimates, plus the initial forecasts of the probability of the events and the level of the trends, the model is complete. It can then be run to generate an essentially unlimited number of individual futures. In one version of cross-impact analysis, developed at the University of Southern California, the model can be run so that the human analyst has the opportunity to intervene in the future as it emerges, introducing policies that can change the probabilities of the events or the level of the trends. This model operates as follows:

  1. The time period is divided into annual intervals.
  2. The cross-impact model computes the probabilities of occurrences of each of the events in the first year.
  3. A random number generator is used to decide which (if any) of the events occurred in the first year. (It should perhaps be emphasized that once the estimated probability of an event exceeds zero, the event can happen. No one may think it will happen, or conversely everyone may be convinced that it will. If it happens--or fails to happen--the event is a surprise. In cross-impact analysis, events are made to "happen" in accordance with their probability; that is, a 10 percent event will happen in 10 percent of all futures, a 90 percent event will happen in 90 percent of them, and so on. One would be surprised indeed if he or she were betting on a future world in which the 10 percent event was expected not to happen but did, and the 90 percent event was expected to happen but did not.)
  4. The results of the simulated first year are used to adjust the probabilities of the remaining events in subsequent years and the trend forecasts for the end of the first year and their projected performance for the subsequent years.
  5. The computer reports these results to the human analysts interacting with the simulation and stops, awaiting additional instructions.
  6. The human analysts assume that the simulated time is real time and assess the result as they think they would had this outcome actually taken place. They decide which aspects of their strategy (if any) they would change and input these changes to the computer model, which then simulates the next year's results using the same procedure described for the first year.
  7. The simulation repeats these steps until all of the years in the strategic time period have been decided (Enzer 1983, p. 80).
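The annual loop in steps 1 through 7 can be sketched as follows. This is a deliberately minimal illustration: the events, probabilities, and impact values are invented, and a single additive adjustment stands in for the seven-estimate impact descriptions discussed earlier:

```python
import random

# Hypothetical model: two events, with a rule that event A's occurrence
# raises event B's probability and each event shifts the trend additively.
probabilities = {"A": 0.10, "B": 0.30}
cross_impacts = {("A", "B"): +0.20}            # A occurring adds 0.20 to B
event_trend_impacts = {"A": -5.0, "B": +8.0}   # additive shifts to the trend
trend = [100.0]                                # nominal trend level by year

occurred = set()
random.seed(42)  # reproducible run

for year in range(1, 6):                       # a five-year strategic period
    level = trend[-1]
    for event, p in probabilities.items():
        if event in occurred:
            continue
        if random.random() < p:                # events "happen" per probability
            occurred.add(event)
            level += event_trend_impacts[event]
            # Propagate the event's impact to other events' probabilities.
            for (src, dst), delta in cross_impacts.items():
                if src == event and dst not in occurred:
                    probabilities[dst] = min(1.0, probabilities[dst] + delta)
    trend.append(level)
    # In the USC model, analysts would inspect the results here and
    # introduce policy changes before the next simulated year.

print(occurred, trend)
```

Each complete run traces one internally consistent future; repeating the simulation with fresh random draws (and different analyst interventions) generates the family of alternative futures described below.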

When all intervals are complete, one possible long-term future is described by modified trend projections over time, the events that occurred and the years in which they occurred, a list of the policy changes introduced by the analysts, and the impacts of those policy changes on the resulting scenario. The analysts may also prepare a narrative describing how they viewed the simulated conditions and how effective their policy choices appeared in retrospect.

By repeating the simulation many times, perhaps with different groups of analysts, it is possible to develop a number of alternative futures, thereby minimizing surprise when the transition is made from the analytic model to the real world. Perhaps the most important contribution that the USC model (or cross-impact methods generally) can make in improving strategic planning, however, is in its continued use as the strategic plan is implemented (Enzer 1980a, 1980b). The uncertainty captured in the initial model will be subject to change as anticipations give way to reality. Such changes may in turn suggest revisions to the plan.

Models of such complexity are expensive to develop and currently can be run only on a large, mainframe computer. For these reasons, their use is warranted only in the most seriously perplexing and vital situations. A number of less complex microcomputer-based cross-impact models are under development, however. For example, the Institute for Future Systems Research, Inc. (Greenwood, South Carolina), has developed a cross-impact model that can be run on an Apple IIe. Although still in the alpha stage of development, this model can accommodate 30 events and 20 policies impacting three trends.

Much simpler models are commonplace. In essence, they are the same, but the rigorous calculations required for complex models can be approximated manually while preserving much of the qualitative value of the results, such as identifying the most important events in a small set. In the simplified manual calculation, the impact of the event is multiplied by its probability: A 50 percent probable event will have 50 percent of its impact occur, a 75 percent event will have 75 percent of its impact occur, and so on. This probability-weighted impact is calculated and, depending on its direction, added to or subtracted from the level of the extrapolated trend at point a (see figure 9). The event-impacted forecast for the years beyond b is determined by connecting points b and a with the dashed line as shown. This process is repeated for each of the potential surprise events until a final expected value of the event-impacted indicator is developed. The event with the highest product of probability and impact is the most important event, that is, the event having the greatest potential impact on the trend. This simple calculation is the basis of cross-impact analysis, though the detail and complexity (not to mention effort and cost) can be much greater in computer simulations. (For a more detailed discussion of this approach, including an example from the field of education, see Renfro and Morrison 1982.)
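The simplified manual calculation reduces to weighting each event's impact by its probability and summing; a sketch with invented figures:

```python
# Extrapolated (surprise-free) value of the indicator at the forecast horizon.
baseline = 1000.0

# Hypothetical surprise events: (probability, impact on the indicator).
events = {
    "new state funding formula": (0.50, +120.0),
    "double-digit inflation returns": (0.25, -200.0),
    "enrollment cap legislation": (0.75, -40.0),
}

# Probability-weighted (expected) impact of each event.
expected_impacts = {name: p * impact for name, (p, impact) in events.items()}

# Event-impacted forecast: baseline plus all expected impacts.
forecast = baseline + sum(expected_impacts.values())
print(forecast)  # 1000 + 60 - 50 - 30 = 980.0

# The most important event has the largest probability-times-impact product.
most_important = max(expected_impacts, key=lambda k: abs(expected_impacts[k]))
print(most_important)
```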

FIGURE 9
QUALITATIVE EXAMPLE OF AN EVENT-IMPACTED INDICATOR
Source: Renfro and Morrison 1982.

Policy impact analysis

Most of the techniques of futures research developed in the last 20 years provide information about futures in which the decision makers who have the information are presumed not to use it; that is, new decisions and policies are not included in the futures described by these techniques (Renfro 1980c). The very purpose of this information, however, is to guide decision makers as they adopt policies designed to achieve more desirable futures--to change their expected future. In this sense, traditional techniques of futures research describe futures that happen to the decision makers, but decision makers use this information to work toward futures that happen for them. Apart from policy-oriented uses of cross-impact analysis, policy impact analysis is the first model that focuses on identifying and evaluating policies, strategies, and decisions designed to respond to information generated by traditional techniques of futures research.

The steps involved in policy impact analysis are based on the results obtained from the probabilistic forecasting procedure outlined previously. When the events have been ranked according to their importance (their probability-weighted impacts), these results are typically fed back to the group, panel, or decision makers providing the judgmental estimates used to generate the forecast. Just as this group was asked to select and evaluate the surprise events, it is now asked to nominate specific policies that would modify the probability and impact of those events. Decision makers may change the forecast of a trend in three principal ways: first, by implementing policies to change the probability of one or more of the events that have been judged to influence the future of the trend; second, by implementing policies to change the timing, direction, or magnitude of the impact of one or more of the events; and third, by adopting policies that in effect create new events. If all or most of the important events affecting a trend have been considered, then new events should have little or no direct impact on the indicator. For some events, such as the return of double-digit inflation, it may not be possible for the decision makers at one university to change the events' probability, but it may be possible to affect the timing and magnitude of their impacts should they occur. For example, it may not be possible to affect the president's decision to issue a particular executive order, such as cutting federal aid to higher education, but its impact can be diminished if administrators develop other sources of funding. Usually it is possible to identify policies that change both the probability and the impact of each event (Renfro 1980a).

Policies are typically nominated on the basis of their effect on one particular event. To ensure that primary (or secondary) impacts on other events do not upset the intended effect of the policy, the potential impact of each policy on all events should be reviewed, which is easily done with a simple chart like the one shown in figure 10.


FIGURE 10
POLICIES-TO-EVENT MATRIX
Source: Renfro 1980b.
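The figure 10 chart lends itself to a small data-structure sketch. The following is a minimal illustration, not part of the original method's tooling; the policy and event names and effect codes are hypothetical:

```python
# Minimal sketch of a policies-to-events review chart (cf. figure 10).
# Policy names, event names, and cell codes are hypothetical illustrations.
# Each cell records the judged effect of a policy on an event:
# "+" raises the event's probability, "-" lowers it, "." means no effect.

policies = ["Develop alternative funding sources", "Market to adult learners"]
events = ["Federal aid to higher education cut", "Double-digit inflation returns"]

chart = {
    ("Develop alternative funding sources", "Federal aid to higher education cut"): "+",
    ("Market to adult learners", "Double-digit inflation returns"): "-",
}

def review_rows(chart, policies, events):
    """Return one row per policy so side effects on every event stay visible."""
    rows = []
    for p in policies:
        rows.append([chart.get((p, e), ".") for e in events])
    return rows

for p, row in zip(policies, review_rows(chart, policies, events)):
    print(p, row)
```

Laying the rows side by side reproduces the review step: a policy's unintended effect on any other event shows up immediately in its row rather than being discovered after implementation.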

Policies can impact the forecasts of an indicator in three ways: through the events, through the events and directly on the trends, and directly on the trends only. The relationship of policies to trends to the indicators might be envisioned as shown in figure 11. The policies that affect the indicator through events have four avenues of impact. A policy can change the probability of an event by making it more or less likely to occur, or a policy can change the impact of an event by increasing or changing the level of an impact, changing the timing of an impact, or changing both level and timing of an impact (see figure 12). (If a computer-based routine is used in policy impact analysis, numerical estimates must be developed to describe completely the shape and timing of the impacts, which, for the impact of one event on a trend, may require as many as eight estimates. These detailed mathematical estimates quickly mushroom into a monumental task that can overwhelm the patience and intellectual capacities of the most dedicated professionals if the task is not structured and managed to ease the burden. For a discussion of the details of the numerical estimates, see Renfro 1980b.)

FIGURE 11
RELATIONSHIP OF POLICIES TO EVENTS TO TRENDS: THREE WAYS POLICIES IMPACT TRENDS

Source: Renfro 1980b.

FIGURE 12
IMPACT OF POLICY CHANGES
Source: Renfro 1980b.

The new estimates of probability and impact are used to recalculate the probabilistic forecasts along the lines outlined earlier. The difference between the probabilistic forecast and the policy-impacted forecast shows the benefit of implementing each of the policies identified. Completing all of the steps yields three forecasts: the extrapolated surprise-free forecast, the probabilistic event-impacted forecast, and the policy-impacted forecast.
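The recalculation just described can be sketched numerically under a strongly simplified assumption: suppose each event's impact is a single additive shift that begins at an onset year. All trend values, probabilities, impacts, and years below are hypothetical:

```python
# Numerical sketch of the three forecasts of policy impact analysis.
# All numbers are hypothetical, and event impacts are assumed to be simple
# additive shifts once the event occurs (real estimates would also describe
# the shape and timing of each impact).

def event_impacted(baseline, events):
    """Add each event's probability-weighted impact from its onset year on."""
    forecast = list(baseline)
    for ev in events:
        for t in range(ev["onset"], len(forecast)):
            forecast[t] += ev["prob"] * ev["impact"]
    return forecast

# Extrapolated surprise-free forecast of, say, an enrollment index.
baseline = [100, 98, 96, 94, 92]

# One surprise event: a federal aid cut, 50 percent likely, costing 6 points.
events = [{"name": "federal aid cut", "prob": 0.5, "impact": -6, "onset": 2}]

# A policy (developing other funding sources) cannot change the event's
# probability but diminishes and delays its impact, as in the text's example.
with_policy = [{"name": "federal aid cut", "prob": 0.5, "impact": -2, "onset": 3}]

probabilistic = event_impacted(baseline, events)        # event-impacted forecast
policy_impacted = event_impacted(baseline, with_policy)
benefit = [a - b for a, b in zip(policy_impacted, probabilistic)]

print(probabilistic)    # [100, 98, 93.0, 91.0, 89.0]
print(policy_impacted)  # [100, 98, 96, 93.0, 91.0]
print(benefit)          # year-by-year gain from implementing the policy
```

The `benefit` series is exactly the difference the text describes: the gap between the probabilistic forecast and the policy-impacted forecast.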

To illustrate, suppose that the policy issue being studied is enrollment in liberal arts baccalaureate programs and that measurements of those enrollments since 1945 are part of the database available to a research study team. Further assume that those enrollments were forecast to decrease over the next 10 years, although the desired future would be one in which they would remain the same or increase. In this stage of the model, the team would first identify those events that could affect enrollments adversely--for example, a sudden jump in the rate of inflation, sharply curtailed federally funded financial aid, a significant cut in private financial support, and so on. The team would also identify events that could positively affect enrollments--for example, commercial introduction of low cost, highly sophisticated CAI programs for use on personal computers for mid-career retraining, a new government program to help fund the efforts of major corporations to provide continuing professional education programs for their employees, and so on. Such events may positively affect enrollments because a widely held assumption of liberal arts education is that it facilitates the development of thinking and communication skills easily translatable to a wide variety of requirements for occupational skills.

The next step would be to identify possible policies that could affect those events (or that could affect enrollments directly). For example, policies could be designed to increase enrollments by aggressively pursuing marketing strategies lauding the value of a liberal arts education as essential preparation for later occupational training. This strategy could be undertaken with secondary school counselors and students and with first- and second-year undergraduates and their advisors. Graduate and professional school faculty could be encouraged to consider adopting and publicly announcing admissions policies that grant preferential consideration to liberal arts graduates. Another policy could be to form coalitions with higher education organizations in other regions to press for increased federal aid to students and to institutions. With respect to the potential market in the business, industrial, and civil service sectors, policies with respect to establishing joint programs to provide liberal arts education on a part-time or "special" semester basis could be designed and implemented.

Policies could also be designed to maintain enrollments within the current student population. For example, one policy could concern an "early warning" system to identify liberal arts students who may be just experiencing academic difficulty. Others could be designed to inhibit attrition by improving the quality of the educational environment. Such policies would involve establishing faculty and instructional development programs and improving student personnel services, among others.

Next, the policies need to be linked formally to the events they are intended to affect, and their influence can then be evaluated. (As part of this process it is also important to look carefully at the cross-impacts among the policies themselves, as several of them may work against each other.) The result of this somewhat complex activity is a policy-impacted forecast for undergraduate baccalaureate programs, given the implementation of specific policies designed to improve enrollments. Thus, competing policy options may be evaluated by identifying those policies with the most favorable cost-benefit ratio, those having the most desirable effect, those with the most acceptable trade-offs, and so on.

Figure 13 is an example of a complete policy impact analysis where one may examine the relationship of an organizational goal for a particular trend, the extrapolative forecast, the probabilistic forecast, and the policy-impacted forecast. Note that the distinction between the projected forecasts is the result of the difference between the assumptions involved; that is, the extrapolative forecast does not include the probable impact of surprise events, whereas the probabilistic forecast does. Furthermore, the probabilistic forecast includes not only the effects of events on the trend but also the interactive effects of particular events on the trend. The policy impact forecast not only incorporates those features distinguishing probabilistic forecasts; it also includes estimates of the impact of policies on events affecting the trend as well as on the trend itself.

FIGURE 13
EXAMPLE OF A COMPLETE POLICY IMPACT ANALYSIS
Source: Renfro 1980c.

Evaluation occurs when the policy impact analysis model is iterated after the preferred policies have been implemented in the real world. That is, the process of monitoring begins anew, thereby enabling the staff to evaluate the effectiveness of the policies by comparing actual impacts with those forecast. Implementation of this model requires that a data base of social/educational indicators be updated and maintained by the scanning committee to evaluate the forecasts and policies and to add new trends as they are identified as being important in improving education in the future, that new and old events be reevaluated, and that probabilistic forecasts be updated to enable goals to be refined and reevaluated. This activity leads to the development of new policies or the reevaluation of old ones, which in turn enables the staff to update policy-impacted forecasts (Morrison 1981b). (The techniques of futures research described here, particularly the probabilistic forecasting methods, have been developed only within the last 10 to 20 years, and they have been used primarily in business and industry, with mixed results. The success of this model depends upon the ability of the staff to identify those events that may affect a trend directly or indirectly, accurately assign subjective probabilities to those events, design and obtain a reliable and valid data base of social/educational indicators, and specify appropriate factors that depict the interrelationships among the events, the trends, and the policies. The efficacy of the policy impact analysis model depends upon the close interaction of the research staff and decision makers within each stage of the model.)

The scenario
A key tool of integrative forecasting is the scenario--a story about the future. Many types of scenarios exist (Boucher 1984), but in general they are written either as a history of the future, describing developments across a period ranging from a few years to a century or more, or as a slice of time at some point in the future. Scenarios written as future histories are the more useful tool in planning because they explain the developments along the way that lead to the particular circumstances found in the final state in the future.

A good scenario has a number of properties. To be useful in planning, it should be credible, it should be self-contained (in that it includes the important developments central to the issue being addressed), it should be internally consistent, it should be consistent with one's impression of how the world really works, it should clearly identify the events that are pivotal in shaping the future described, and it should be interesting and readable to ensure its use. Scenarios have been used both as launching devices to stimulate thinking about the future at the beginning of a study and as wrap-ups designed to summarize, integrate, and communicate the many detailed results of a forecasting study. For example, the information generated in the policy impact analysis process can easily be used to generate scenarios. A random number generator is used to determine which events happen and when. This sequence of events provides the outline of a scenario. With this technique, a wide range of scenarios can quickly be produced.
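The random-number technique for outlining scenarios can be sketched as a small Monte Carlo routine; the events, probabilities, and occurrence windows below are hypothetical illustrations:

```python
import random

# Sketch of scenario-outline generation from policy impact analysis output.
# Each event carries a judged probability and a window of years in which it
# could occur; a random draw decides whether it happens and, if so, when.
# Event names and numbers are hypothetical illustrations.

events = [
    {"name": "Double-digit inflation returns", "prob": 0.4, "window": (1986, 1992)},
    {"name": "Federal student aid sharply cut", "prob": 0.3, "window": (1985, 1990)},
    {"name": "Low-cost CAI widely adopted", "prob": 0.6, "window": (1987, 1994)},
]

def scenario_outline(events, rng):
    """Draw one future: decide which events occur and assign each a year."""
    outline = []
    for ev in events:
        if rng.random() < ev["prob"]:             # does the event happen?
            outline.append((rng.randint(*ev["window"]), ev["name"]))
    return sorted(outline)                         # chronological outline

rng = random.Random(7)
for i in range(3):                                 # three alternative outlines
    print("Scenario %d:" % (i + 1), scenario_outline(events, rng))
```

Each run yields a different chronological sequence of events, and each sequence is the skeleton of one scenario, which a writer then fleshes out into a future history.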

Frequently, several alternative scenarios are written, each based upon a central theme. For example, in the 1970s many studies on energy resources focused on three scenarios: (1) an energy-rich scenario, in which continued technological innovations and increased energy production eliminate energy shortages; (2) a muddling-through scenario, in which events remain essentially out of control and no resolution of the energy situation is realized; and (3) an energy-scarce scenario, in which we are unable to increase production or to achieve desired levels of conservation.

By creating multiple scenarios, one hopes to gain further insight into not only the potential range of demographic, technological, political, social, and economic trends and events but also how these developments may interact with each other, given various chance events and policy initiatives. Each scenario deals with a particular sequence of developments. Of course, if the scenarios are based on the results from earlier forecasting, the range of possibilities should already be reasonably well known, and the scenarios will serve to synthesize this knowledge. If, however, the earlier research has not been done, then the scenarios must be made of whole cloth. This practice is very common; indeed, some consulting organizations recommend it. Such scenarios can be quite effective, as long as the user recognizes that the product is actually a form of genius forecasting and shares all of the strengths and weaknesses of that approach.

Slice-of-time scenarios serve to provide a context for planning; indeed, they are similar to the budgeting or enrollment assumptions that often accompany planning instructions. Yet instead of single assumptions for each planning parameter, a range of assumptions may be considered. In turn, assumptions for different parameters are woven together to form internally consistent wholes, each of which forms a particular scenario, and the set may then be distributed as background for a new cycle of planning.

Multiple scenarios communicate to planners that while the future is unknowable, it may be anticipated and its possible forms can surely be better understood. In the language of strategic planning, a plan may be assessed against any scenario to test its "robustness." An effective plan, therefore, is one that recognizes the possibility of any plausible scenario. For example, in a planning conference with the president, the academic vice president might speculate how a particular strategy being proposed would "play itself out" if the future generally followed Scenario I and, then, what would happen given Scenario II. Heydinger has developed several plausible scenarios for higher education, which, although lacking the specificity required for actual institutional planning purposes, convey the flavor of a scenario (see figure 14).

The analysis of multiple scenarios requires attention to a number of factors discussed elsewhere in this monograph, among them improbable yet important developments (Heydinger and Zentner 1983). Moreover, in developing the scenarios, it is helpful to recognize that they can be used to describe futures on almost any level of generality, from higher education on the national level to the outlook for an individual department. In addition, agreement on a "time horizon" is necessary. Because many colleges and universities depend heavily on enrollment for income, the time horizon might be 15 years, a foreseeable horizon with regard to college attendance rates, students' demographic characteristics, and composition of the faculty.

FIGURE 14
POSSIBLE SCENARIOS FOR HIGHER EDUCATION
  1. The Official Future
     Enrollments are down, and while adult and part-time students are more numerous, their presence has not offset the decline of traditional-age students. One in 10 state colleges has closed in the last seven years, and 25 percent of liberal arts colleges have closed since 1980. With the supply of traditional college-age students resurging, however, a mood of optimism is returning to campuses. Industry establishes its own training facilities at an unheard-of pace and competes with higher education for the best postgraduate students.

     In high-tech areas, cooperative research arrangements with industry are commonplace. Most campuses now find that academic departments divide into the "haves" (technology-related areas) and "have-nots" (humanities and social sciences).

  2. Tooling and Retooling
     With job skills changing at an ever-quickening pace, individuals now make several career changes in a lifetime, and college is still considered the best place for training. Nationwide enrollment has thus fallen only 1.5 percent.

     Students are more serious about their studies. Passive acceptance of poor teaching is a relic of the past, and lawsuits by students are common. The implicit view that the professor is somehow superior to the student (left over from the days of in loco parentis) is gone. As students focus almost exclusively on job skills, faculty who prize the liberal arts become a minority.

  3. Youth Reject Schooling
     The plummeting economy makes structural unemployment a reality. With fewer job openings that require a college degree, all but the most elite youth reject formal schooling. Most young people, weaned on fast-paced information with instant feedback, come to find college teaching methods archaic.

     Student bodies are smaller and more homogeneous, comprised mainly of those who can afford the high cost of post-secondary education. A spirit of elitism grows on campus. Among faculty, the mood is one of "minding the store" while waiting for better days.

  4. Long-Term Malaise
     The long-awaited enrollment decline hits with full force, and the advent of lifelong learning never materializes. The slumping economy forces the states to make deeper funding cuts and close some public campuses.

     Faculty attention is focused on fighting closure, and little discussion of programmatic change is evident. Feeling themselves under increasing pressure, many of the best faculty flee the academy. Higher education becomes a shrunken image of its former self.

  5. A New Industry Is Born
     High technology creates a burgeoning demand for job skills. To meet the new challenge, some professional schools break away from their parent university to set up independent institutions. Private corporations establish larger training programs. Even individuals now hang out a shingle and offer educational training. Amid this explosion of new educational forms, the traditional research university breaks down. Community colleges flourish as they adapt to the new needs of the educational market.
Source: Richard Heydinger, cited in Administrator 3 (1): 2-3.

Scenario development is essentially a process of selecting from the total environment those external and internal elements most relevant to the purpose of the strategic plan. This process might well embrace information on demographic characteristics of students, legislative appropriations, research contracts, the health of the economy, public opinion (about the value of a college degree, for example), developments in the field of information processing and telecommunications, and so on.

Furthermore, assumptions about the behavior of a particular variable in a particular scenario must be explicated. Thus, if the size and composition of the 18- to 20-year-old cohort were the variable under consideration, different assumptions might be developed vis-a-vis college attendance rates. One scenario, for example, might assume that in 1995 the number of students in attendance would be the same as in 1983 but that the number of students in the 25- to 45-year-old group would equal the number of students in the 18- to 24-year-old group. An alternative scenario might assume that the number of students would increase by 1995 and that most of them would be third-generation students in the 18- to 21-year-old group. Similar assumptions must be developed for each variable included in the scenario.

Explicating these assumptions is the most important part of creating scenarios and can require a good deal of prior research or, in the case of genius forecast scenarios, great experience, knowledge, and imagination. Once the assumptions are established, however, the nature of each scenario is established. Accordingly, to ensure that they are credible within the institution, it may be worthwhile to review them with local experts. For example, for key factors concerning students, the admissions office might be consulted. For economic variables, the economics department should be consulted. Such consultations are likely not only to improve the quality of the final products but also to build "ownership" into the scenarios, thereby enhancing the chances that they will be considered reasonable possibilities throughout the institution.

In addition to their other advantages, multiple scenarios force those involved in planning to put aside personal perspectives and to consider the possibility of other futures predicated on value sets that may not otherwise be articulated. Grappling with different scenarios also compels the user to deal explicitly with the cause-and-effect relationships of selected events and trends. Thus, multiple scenarios give a primary role to human judgment, the most useful and least well used factor in the planning process. Scenarios therefore provide a useful context in which planning discussions may take place and provide those within the college or university a shared frame of reference concerning the future. (See Heydinger and Zentner (1983) for a more complete discussion of multiple scenario analysis; see also Boucher and Ralston (1983) and Hawken, Ogilvy, and Schwartz (1982) for a more detailed discussion of the types and uses of scenarios.)

Goal Setting
Some years ago, in what was apparently the first serious attempt to understand the range and severity of difficulties that face long-range planners, UCLA's George Steiner surveyed real-world experiences in U.S. corporations (Steiner 1972). Steiner's questionnaire, which was completed by 215 executives in large corporations (typically, long-range planners themselves), presented a list of 50 possible planning pitfalls, invited the respondents to suggest others, and then asked three basic questions for each: (1) How would you rank the pitfalls by importance? (2) Has your own corporation recently fallen into any of the pitfalls, partly or completely? (3) If it has, how great an impact has the pitfall had on the effectiveness of long-range planning in your company?

Steiner used the answers from the first question--a more or less global assessment of the influence of the pitfalls on long-range planning--to rank order the items. He did not, however, exploit the much more interesting information about actual experience revealed by the answers to the second and third questions. Fortunately, he published the raw data in an appendix. An analysis of those data produces a very different picture of the obstacles to effective planning than does his rank-ordered list. If, for example, one looks for the pitfalls that the largest percentage of companies confess they have recently encountered, "partly" or "completely," the top 10 items are those shown in figure 15. This list is most instructive for planners in all types of organizations, including educational institutions, but seven of these 10 items did not appear anywhere among Steiner's top 10!

Far more significant, however, are the results from the third question, which asked the impact of the pitfalls on the effectiveness of the organization's long-range planning. After all, some mistakes or barriers are more serious than others. If one ranks all of the pitfalls on the basis of the frequency with which real-world planners cited them as having great negative impacts on their effectiveness, another list of the top items emerges (see figure 16). Again, the list is different from Steiner's, but this time five of his candidates appear.

FIGURE 15
THE TEN PLANNING PITFALLS MOST COMMONLY FALLEN INTO BY THE LARGEST PERCENTAGE OF CORPORATIONS: RESULTS FROM A SURVEY

Pitfall 49 (82 percent of corporations; rank 1). Failing to encourage managers to do good long-range planning by basing rewards solely on short-range performance measures.

Pitfall 16 (78 percent; rank 2-4). Failing to make sure that top management and major line officers really understand the nature of long-range planning and what it will accomplish for them and the company.

Pitfall 24 (78 percent; rank 2-4). Becoming so engrossed in current problems that top management spends insufficient time on long-range planning, and the process becomes discredited among other managers and staff.

Pitfall 47 (78 percent; rank 2-4). Failing to use plans as standards for measuring managers' performance.

Pitfall 31 (74 percent; rank 5). Failing to make realistic plans (as the result, for example, of overoptimism and/or overcautiousness).

Pitfall 50 (71 percent; rank 6). Failing to exploit the fact that formal planning is a managerial process that can be used to improve managers' capabilities throughout a company.

Pitfall 10 (69 percent; rank 7). Failing to develop a clear understanding of the long-range planning procedure before the process is actually undertaken.

Pitfall 28 (67 percent; rank 8). Failing to develop company goals suitable as a basis for formulating long-range plans.

Pitfall 37 (65 percent; rank 9-10). Doing long-range planning periodically and forgetting it between cycles.

Pitfall 39 (65 percent; rank 9-10). Failing, on the part of top management and/or the planning staff, to give departments and divisions sufficient information and guidance (for example, top management's interests, environmental projections, etc.).

Source: Steiner 1972.

FIGURE 16
THE ELEVEN PLANNING PITFALLS WITH GREATEST IMPACT ON THE EFFECTIVENESS OF CORPORATE LONG-RANGE PLANNING: RESULTS FROM A SURVEY

Pitfall 28 (43 percent answering "much"; rank 1-2). Failing to develop company goals suitable as a basis for formulating long-range plans.

Pitfall 42 (43 percent; rank 1-2). Failing, by top management, to review with department and division heads the long-range plans they have developed.

Pitfall 24 (40 percent; rank 3). Becoming so engrossed in current problems that top management spends insufficient time on long-range planning, and the process becomes discredited among other managers and staff.

Pitfall 45 (37 percent; rank 4). Top management's consistently rejecting the formal planning mechanism by making intuitive decisions that conflict with formal plans.

Pitfall 38 (36 percent; rank 5). Failing to develop planning capabilities in major operating units.

Pitfall 7 (35 percent; rank 6). Thinking that a successful corporate plan can be moved from one company to another without change and with equal success.

Pitfall 3 (34 percent; rank 7). Rejecting formal planning because the system failed in the past to foresee a critical problem and/or did not result in substantive decisions that satisfied top management.

Pitfall 49 (34 percent; rank 8). Failing to encourage managers to do good long-range planning by basing rewards solely on short-range performance measures.

Pitfall 1 (33 percent; rank 9-11). Assuming that top management can delegate the planning function to a planner.

Pitfall 23 (33 percent; rank 9-11). Assuming that long-range planning is only strategic planning, or just planning for a major product, or simply looking ahead at likely development of a present product (that is, failing to see that comprehensive planning is an integrated managerial system).

Pitfall 32 (33 percent; rank 9-11). Extrapolating rather than rethinking the entire process in each cycle (that is, if plans are made for 1971 through 1975, adding 1976 in the 1972 cycle rather than redoing all plans from 1972 to 1975).

Source: Steiner 1972.

The results for pitfall 28 clearly underscore the importance of appropriate goal setting in an organization. Not only is failure to do it well one of the most frequently encountered barriers to long-range planning (as indicated in figure 15); it also surfaces at the top of the list of pitfalls that can most debilitate comprehensive planning (as shown in figure 16). Moreover, this finding has a certain face validity, for even if an organization has a good idea of what it wants to be (if, that is, it has what is known in strategic planning as a good "mission statement"), it is exceedingly improbable that its forecasting and planning will be fruitful in the absence of clear, actionable statements about how it will know if it is getting there. Such statements are variously called "goals" or "objectives."

Some confusion surrounds these terms in the planning literature. Most authors assert that objectives are more general than goal statements, that objectives are long range while goals are short range, that objectives are non-quantitative ("to provide students with a thorough grounding in the humanities") while goals are quantitative ("to require each student to complete two years of instruction in English, philosophy, and history"), that objectives are "timeless" statements ("to provide quality education that properly equips each student for his chosen career") while goals are "time-pegged" ("to implement a program of education, career counseling, and placement by 1989 such that at least 60 percent of graduates find employment for which they are qualified by virtue of their education at this institution"), and so on. But other authors argue other positions. This problem of vocabulary is in large part one of hierarchies or levels of discourse, as one person's objective can obviously be another person's goal (see Granger 1964 or Kastens 1976, chap. 9). For purposes of this paper, the terms are used interchangeably to mean simply a broad but non-platitudinous statement of a fundamental intention or aspiration for an organization, consistent with its mission. Metaphorically, a goal or objective in this sense is like a trend around which the actual performance of the institution is expected to fluctuate as closely as possible.

The purpose of goals is to provide discipline. More specifically, the "objectives for having objectives" include:

  • To ensure unanimity of purpose within the organization.
  • To provide a basis for the motivation of the organization's resources.
  • To develop a basis or standard for allocating an organization's resources.
  • To establish a general tone or organizational climate, for example, to suggest a businesslike operation.
  • To serve as a focal point for those who can identify with the organization's purpose and direction and as an explication to deter those who cannot from participating further in the organization's activities.
  • To facilitate the translation of objectives and goals into a work-breakdown structure involving the assignment of tasks to responsible elements within the organization.
  • To provide a specification of organizational purposes and the translation of these purposes into goals (that is, lower-level objectives) in such a way that the cost, time, and performance parameters of the organization's activities can be assessed and controlled (King and Cleland 1978, p. 124).

The last two purposes lead especially to management control systems, such as the Planning-Programming-Budgeting system, Zero-Based Budgeting, and Management by Objectives.

To these ends, goals are necessary for every formal structure within an organization, including temporary task forces. If, for example, futures research itself is recognized as a distinct function, the failure to specify goals adequately can lead the futures researcher to assume that his or her domain includes all possible future states of affairs. But the job then becomes futile; all too often the planner is reduced to rummaging in the future, looking willy-nilly for the hitherto unanticipated but "relevant" possibility (Boucher 1978).

Steiner's surprise that pitfall 28 ranked so high on the list of dangerous pitfalls prompted his asking several respondents why they had given it such prominence. Their answers clarify some of the attributes of an "unsuitable" goal:

  • It is too vague to be implemented ("optimize profits" or "establish the best faculty").
  • It is excessively optimistic. For example, an educational institution with a total annual budget of $10 million would be deluding itself if it sought to "establish the nation's premier faculty in physics."
  • It is clear enough to those on the top level who formulated it, but it provides "insufficient guidance" to those on lower levels.
  • Finally, it simply has not been formulated. For example, top management has recognized the need to develop goals for lower levels and lower levels would clearly welcome them, but management has not yet been able to specify goals.

How are goals or objectives developed? The short answer is that because they are about the future, they must at bottom be subjective and judgmental. In many organizations, especially small ones, no formal process is required to capture these judgments: The ultimate goals, at least, are the articulated or unarticulated convictions of the founder or top executives about how the organization is likely to look if everyone works intelligently to achieve the mission in the years ahead. The absence of a formal goal-setting process need not mean that the organization is doing something wrong. Indeed, for some of the largest and best-run firms in corporate America, it would appear that the presence or absence of such a process apparently does not matter greatly; what matters more is that a vision is shared and is regularly reinforced by the key people through direct, persistent contact with everyone else. For these companies, this process is a part of what has been called "Management by Wandering Around"--to discover what employees, customers, suppliers, investors, and other stake holders actually think about the organization and its products or services (Peters 1983). By reinforcing a vision through such contacts, these companies are able to adjust their behavior by comparing their mission, goals, and interim performance toward those goals and then shucking subgoals that are blocking the performance they seek.

No educational institution, to our knowledge, practices Management by Wandering Around. Educational planners and policy makers are more likely to use a formal process for setting goals of some sort, particularly those recommended by business schools for use in strategic planning. The many models available (Granger 1964; Hughes 1965; King and Cleland 1978; Steiner 1969a) tend to be bad models in at least one respect: Almost without exception they fail to recognize the contribution that futures research itself can make to the process of setting goals. The tendency in the literature--and hence in practice--is to suggest that one should, of course, look ahead at the organization's alternative external and internal environments, but, having done that job, one should then proceed to other, more or less independent things, such as setting goals. But futures research can contribute much to this activity, and it can make this contribution directly. Indeed, when futures research is operating in the normative mode, goals or objectives may be its principal output.

The key to exploiting this source of information is for the organization to explicitly establish the preliminary statement of goals as one of the goals of its futures research. We can make this notion more tangible by a simple example. King and Cleland (1978, p. 148 ff.), among others, recommend a process of goal setting that is based largely on "claimant analysis." In that procedure, each of the organization's claimants, or stakeholders, is identified--for a public university or college, for example, they might include the trustees, the faculty, other employees, the students, government on all levels, vendors of one sort or another, competing universities, alumni, the local community, and the general public--and each group's principal "claims" on the organization are listed. The claims of students, for example, might include obtaining a quality education, varied extracurricular opportunities, contact with faculty, a good library and computer center, non-bureaucratic administrative support services, and so on. Then, for each such claim, a numerical measure is developed, whether direct or indirect. Although the measures will often be difficult to specify, especially in an enterprise as soft as education, the effort should be made. (For example, the quality of education at an institution can be measured in a variety of indirect ways, from counting the number of applications or the number of dropouts to summing the scores on teacher rating sheets, to tracking the results of outside evaluations of the institution's own schools or departments, to measuring the socioeconomic status of alumni.) Finally, past and current levels of these measures are compared to discern whether the institution has been moving toward fulfilling each claimant's proper expectations. When it has not, the institution has found a new objective. When it has, the current objective has been sustained or rejustified.
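The claimant-analysis bookkeeping described above can be sketched in a few lines of code. This is a hedged illustration only: the claimants, claims, measures, and numbers below are all hypothetical, and `review_claims` is an invented helper, not part of King and Cleland's procedure.

```python
# Sketch of claimant analysis: for each (claimant, claim) pair, compare past
# and current levels of its measure. A claim whose measure has moved the
# wrong way signals a candidate new objective. All data are hypothetical.

claims = {
    ("students", "quality education"): {
        "measure": "mean teacher rating (1-5)",
        "past": 3.9, "current": 3.6, "higher_is_better": True,
    },
    ("students", "contact with faculty"): {
        "measure": "student-faculty ratio",
        "past": 18.0, "current": 16.5, "higher_is_better": False,
    },
    ("alumni", "institutional reputation"): {
        "measure": "applications per seat",
        "past": 4.2, "current": 4.8, "higher_is_better": True,
    },
}

def review_claims(claims):
    """Flag each claim as sustained or as a candidate new objective."""
    findings = []
    for (claimant, claim), m in claims.items():
        improved = (m["current"] > m["past"]) == m["higher_is_better"]
        status = "sustained" if improved else "NEW OBJECTIVE"
        findings.append((claimant, claim, status))
    return findings

for claimant, claim, status in review_claims(claims):
    print(f"{claimant:10s} | {claim:28s} | {status}")
```

In this toy run, the falling teacher ratings surface "quality education" as a new objective, while the other two claims are sustained.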

This process--whatever its merits--could be strengthened considerably through futures research. If we know who our claimants have been and are now, it is immediately relevant to ask how the nature and mix of claimants might change in the future--or how it should be made to change. The same is true for the claims they might make. By the same token, having measures of their claims, it is clearly worthwhile to project these measures into the future, perhaps using a technique like Delphi, to see what surprises may lie ahead, including conflicts among forecasted measures. With projections of the measures, it is readily possible to ask about the forces that might upset these projections, using a method like cross-impact analysis. Having these results makes it possible to explore the potential efficacy of alternative strategies. Discovering how these strategies might work can then be the source of insight into the need for new or revised goals--goals that not only are responsive to present conditions but also are likely to provide useful guidance as the future emerges. And all of these considerations could then easily be wrapped up in a small set of scenarios (or planning assumptions), which could serve as a framework for the development of future strategic and operational plans.
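One of the techniques mentioned above, cross-impact analysis, can be illustrated with a toy Monte Carlo run. This is a minimal sketch, not a full cross-impact model: the events, initial probabilities, and additive impact shifts below are all hypothetical, and real implementations use richer probability-adjustment schemes.

```python
import random

# Toy cross-impact analysis: each trial visits the events in random order,
# decides whether each occurs, and shifts the probabilities of events not
# yet resolved according to an impact matrix. Averaging over many trials
# yields probabilities adjusted for the interactions among events.

events = ["enrollment decline", "state funding cut", "tuition increase"]
p0 = [0.5, 0.4, 0.3]  # hypothetical initial probabilities
# impact[i][j]: additive shift to P(event j) if event i occurs
impact = [
    [0.0, 0.2, 0.3],
    [0.2, 0.0, 0.4],
    [0.0, -0.1, 0.0],
]

def run_once(rng):
    p = list(p0)
    occurred = [False] * len(events)
    for i in rng.sample(range(len(events)), len(events)):
        if rng.random() < p[i]:
            occurred[i] = True
            for j in range(len(events)):
                if not occurred[j]:
                    p[j] = min(1.0, max(0.0, p[j] + impact[i][j]))
    return occurred

def cross_impact(trials=10000, seed=1):
    rng = random.Random(seed)
    counts = [0] * len(events)
    for _ in range(trials):
        for i, hit in enumerate(run_once(rng)):
            counts[i] += hit
    return [c / trials for c in counts]

for name, p_adj in zip(events, cross_impact()):
    print(f"{name:20s} adjusted P = {p_adj:.2f}")
```

Because both of the other events push the probability of a tuition increase upward, its adjusted probability comes out noticeably above its initial 0.3--exactly the kind of interaction among forecasted measures the text describes.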

Implementation
Forecasting and goal setting work together to define two alternative futures: the expected future and the desired future. The expected future is one that assumes that things continue as they are. It is the "hands-off" future, in which decision makers do not use their newly acquired information about the future to change it. The desired future is the "hands-on" one, and it assumes that whatever the decision makers decide to do works and works well. In stable environments, the two worlds are the same for complacent administrators. But where stability is vanishing and complacency is much too dangerous (as seems to be the case in education today), management must lead in taking a final active step in the strategic planning process: to establish the policies, programs, and plans to move the organization from the expected future to the desired future.


If forecasting and goal setting have been done rigorously and professionally, much of the information needed to accomplish this stage is already identified. A complete forecast contains the structure, framework, and context in which it was produced so as to enable the user to identify appropriate policy responses (De Jouvenel 1967), which can then be implemented. Bardach (1977), Nakamura and Smallwood (1980), Pressman and Wildavsky (1973), and Williams and Elmore (1976) include excellent discussions of this type.

Monitoring
Monitoring is an integral part of environmental scanning and of strategic planning. Although the specific functions of monitoring are different in the two processes, it serves the same purpose in both--to renew the process cycle.

In many planning models, monitoring constitutes one of the first steps, for it is in this step that areas of study are identified and the indicators descriptive of those areas selected. These indicators are then prepared for analysis through the development of a data bank, which can then be used to display trend lines showing the history of the indicators. For example, if enrollments are the area of concern, it is important to select indicators that have historically shown important enrollment patterns and can be expected to do so in the future. That is, one would collect data containing information about entering students (sex, race, age, aptitude scores, major, high school, and rank in the school's graduating class) and perhaps how these students fared while enrolled (grade point average, graduation pattern, and so on). Furthermore, one might select information concerning characteristics of entering college students in similar institutions or nationally in all institutions so that entering students at one's own institution could be compared with others. Such comparisons are readily available through data gathered by the Cooperative Institutional Research Program, an annual survey of new college freshmen conducted by UCLA and the American Council on Education (Astin et al. 1984) and available directly from ACE or from the National Center for Education Statistics.
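The trend lines mentioned above can be produced with an ordinary least-squares fit. The enrollment series below is hypothetical; the point is only to show how a data bank of historical indicators supports a simple projection.

```python
# Minimal trend-line sketch for an enrollment indicator (hypothetical data).
years = list(range(1975, 1985))
enrollment = [4210, 4305, 4390, 4415, 4520, 4480, 4610, 4655, 4720, 4800]

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = linear_fit(years, enrollment)
projected_1990 = slope * 1990 + intercept
print(f"trend: {slope:+.1f} students/year; 1990 projection = {projected_1990:.0f}")
```

The fitted slope (about +61 students per year on these made-up figures) is the kind of trend line a planner would display from the data bank and then hand off to the forecasting stage.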

In this first role of monitoring, historical information is developed and prepared for analysis. This role depends upon the identification of selected areas for study. In the model described here, the areas for study would be developed around the issues identified from environmental scanning and rated as important during evaluation. Monitoring begins its initial cycle at this point in strategic planning. That is, indicators that describe these prioritized issues are selected and prepared for analysis during forecasting.

A number of criteria determine the selection of variables in this cycle. For example, does the trend describe a historical development related to the issue of concern? Is the trend or variable expected to describe future developments? Are the historical data readily available? Gathering data is expensive, and novel sources of data will introduce errors until new procedures are standardized and understood by those supplying the data.

A primary consideration involves the reliability and accuracy of the data. Several writers have dealt thoroughly with criteria for developing and assessing reliable and valid historical data (see, for example, Adams, Hawkins, and Schroeder 1978 and Halstead 1974). Beyond reliability, information contained in variables derived from the data must be independent of other factors that would tend to mislead the analysis. For example, if the issue concerns educational costs, is the measure independent of inflation?

Finally, history must be sufficient so that the data cover the cycle needed for projections; for example, if one is projecting over 10 years, are 10 years of historical data available on that trend?
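The selection criteria in the last few paragraphs can be collected into a simple screening check. The indicator record and field names below are hypothetical, chosen only to mirror the questions in the text (historical relevance, data availability, independence from inflation, sufficient history).

```python
# Hedged sketch: screen a candidate indicator against the selection criteria.
def screen(indicator, horizon_years):
    """Return the problems that would disqualify an indicator."""
    problems = []
    if not indicator["historically_related"]:
        problems.append("no historical link to the issue")
    if not indicator["data_available"]:
        problems.append("historical data not readily available")
    if indicator["is_cost_measure"] and not indicator["inflation_adjusted"]:
        problems.append("cost measure not adjusted for inflation")
    if indicator["years_of_history"] < horizon_years:
        problems.append("history shorter than projection horizon")
    return problems

candidate = {
    "name": "instructional cost per student",   # hypothetical indicator
    "historically_related": True,
    "data_available": True,
    "is_cost_measure": True,
    "inflation_adjusted": False,
    "years_of_history": 6,
}
print(screen(candidate, horizon_years=10))
```

On this made-up record, the check flags both the inflation problem and the insufficient history for a 10-year projection.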

The second role of monitoring begins after decision makers have developed goals and alternative strategies to reach those goals and have implemented specific programs, policies, and strategies to move toward the goals. That is, new data in the area of concern are added for analysis so that managers can determine whether the organization is beginning to move toward its desired future or is continuing to move toward the expected future. For example, if the strategies discussed during implementation to increase liberal arts enrollment were employed, the second cycle of the monitoring stage would involve collecting data on enrollments and comparing "new" data to "old" data. Thus, in effect, monitoring is the stage where the effects of programs, policies, and strategies are estimated. The information thus obtained is again used during forecasting. In this fashion, the planning cycle is iterated.
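The comparison of "new" data against the expected and desired futures can be sketched as a one-line test. The enrollment figures below are hypothetical, and `monitor` is an invented helper for illustration.

```python
# Sketch of the second monitoring cycle: is a newly observed value closer
# to the desired-future goal than the expected-future projection was?
# All figures are hypothetical.

def monitor(observed, expected, desired):
    """Report which future a new observation is tracking."""
    if abs(desired - observed) < abs(desired - expected):
        return "moving toward desired future"
    return "still tracking expected future"

# e.g., liberal arts enrollment one year into the new strategy
print(monitor(observed=512, expected=480, desired=600))
```

A real monitoring cycle would of course compare whole trend lines, not single points, but the logic--estimating whether the gap to the desired future is closing--is the same.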

For the environmental scanning model, the specific techniques of monitoring are a function of where an issue is in the development cycle of issues. For some issues, it may be useful to apply some concepts from the emerging field of issues management. (The Issues Management Association was first conceived in 1982 and formally established in 1983 with over 400 members. The major concepts and methods of issues management are still in the experimental and developmental stages.) The issues development cycle shown in figure 17 focuses on how issues move from the earliest stages of changing values and emerging social trends through the legislative process to the final stages of federal regulations (Renfro 1982). This model is used to understand the relative development stages of issues and to forecast their likely course of development. Thus, one can see, for example, how the publication of Rachel Carson's Silent Spring led to a social awakening to the problems of environmental pollution, which eventually culminated in the formation of the Environmental Protection Agency in 1970. Similarly, Betty Friedan's The Feminine Mystique helped to organize and stimulate the emerging social consciousness of the women's movement.

[Figure 17: The issues development cycle. Copyright 1983 by Policy Analysis Co., Inc. Used by permission.]

Championing issues through publications is not a new phenomenon. Upton Sinclair used the technique at the turn of the century to alert the country to the issue of food safety in Chicago's meat packing houses with The Jungle. Richard Henry Dana used it in Two Years before the Mast, published in 1840, to alert the country to the plight of seamen, whose lives were in many ways similar to those of slaves. Thomas Paine's revolutionary pamphlet, Common Sense, may be the earliest use of the technique in this country.

Other key stages in the development of public issues are a defining event, recognition of the name of a national issue, and the formation of a group to campaign about the issue. The early stages have no particular order, but each has been essential for dealing with most recent public issues. For example, the nuclear power issue had everything except a defining event to put it into focus until Three Mile Island. Usually the defining event also gives the issue its name--Love Canal, the DC-10, the Pinto. Of course, not all issues make it through these stages, and many--if not most--are stopped somewhere along the way.

In addition to these general requirements for the development of an issue, several specific additional criteria are needed to achieve recognition by the media: suddenness, clarity, confirmation of preexisting opinions or stereotypes, tragedy or loss, sympathetic persons, randomness, the ability to illustrate related or larger issues, the arrogance of powerful institutions toward the little guy, good opportunities for photos, and articulate, involved spokesmen. Issues that eventually appear in the national media usually have histories in the regional and local media, where many of the same factors operate (Naisbitt 1982).

At the legislative stage, an issue is recognized by Congress--recognition being defined as the introduction of at least one bill specifically addressing the issue. The issue must then compete with many others for priority on the congressional agenda.

For those issues legislated by Congress and signed into law by the president, the regulatory process begins. The basic guidelines for writing new rules are the Administrative Procedures Act (APA) and Executive Order 12291, which requires streamlined regulatory procedures, special regulatory impact analyses, and plain language. After the various notices in the Federal Register, proposed rules, and official public participation, the regulations may go into effect. This process usually takes three to ten or more years, making the evolving regulatory environment relatively easy to anticipate using this model and a legislative tracking and forecasting service like Legiscan® or CongresScan™ or following developments in the Congressional Record.

This model of the national public issues process is of course continuously evolving. The early stages have shifted from national issues with a single focus to national issues with many local, state, or regional foci--as the drunk driving, child abuse, spouse abuse, and similar issues demonstrate. The legislative/regulatory process has also been evolving. First, many of the regulations themselves became an issue, especially those dealing with horizontal, social regulation rather than vertical, economic regulation. Regulations for the Clean Air Act, the Equal Employment Opportunity Commission, the Clean Water Act, the Occupational Safety and Health Administration, the Environmental Protection Agency, and the Federal Trade Commission, among others, have all defined new issues and stimulated the formation of new issue groups, which, like the original issue group, came to Congress for relief. Thus, Congress now is deeply involved in relegislation between organized, opposing issue groups--a slow, arduous process with few victories and no heroes.

With Congress stuck in relegislation at so detailed a level that it is itself redrafting federal regulations, new issues are not moving through Congress. As a result, the list of public issues pending in Congress without resolution continues to grow. Frustrated with congressional delays, issue groups are turning to other forums--the courts, the states, and directly to the regulatory agencies. No doubt the process of recycling issues seen by Congress will emerge in these forums eventually (see figure 18).

[Figure 18: Copyright 1983 by Policy Analysis Co., Inc. Used by permission.]

The emergence of the states as a major forum for addressing national public issues is not related to new federalism, which is a fundamentally intergovernmental issue. States are taking the lead on a wide range of issues that a decade ago would have been resolved by Congress--the transportation and disposal of hazardous wastes, the right of privacy, the right of workers to know about carcinogens in their work environment, counterfeit drugs, Agent Orange, and noise pollution. The process of anticipating issues among the states requires another model, one focused not on the development of issues across time but across states. In most states, legislators do not have the resources or the experience to draft complicated legislation on major public issues. Moreover, issues tend to be addressed or dropped within one session of the legislative body, and such a hit-or-miss process is almost impossible to forecast. Thus, the legislative ideas from the first state to address an issue are likely to become de facto the national standard for legislation among the other states. The National Conference of State Legislatures and the Council of State Governments encourage this cribbing from one state to another, even publishing an annual volume of "Suggested State Legislation." A state legislator need only write in his or her state's name to introduce a bill on a major public issue. The process of forecasting legislative issues across the states then involves tracking the number of states that have introduced bills on the issue and the states that have passed or rejected those bills. While the particular language and detailed implementation policies will of course vary from state to state, this model is reasonably descriptive of the process and represents the current state of the art (Henton, Chmura, and Renfro 1984).
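The across-states tracking model described above reduces to tallying bill statuses by issue. The bill records below are hypothetical (though the text does note Oregon's early bottle bill), and `issue_status` is an invented helper.

```python
from collections import Counter

# Sketch of across-states issue tracking: count the states that have
# introduced, passed, or rejected bills on an issue. Records are
# hypothetical (state, issue, status) tuples.

bills = [
    ("OR", "bottle bill", "passed"),
    ("VT", "bottle bill", "passed"),
    ("CA", "bottle bill", "introduced"),
    ("NY", "bottle bill", "rejected"),
    ("FL", "privacy", "passed"),
]

def issue_status(bills, issue):
    """Tally bill statuses for one issue across the states."""
    return Counter(status for _state, iss, status in bills if iss == issue)

print(issue_status(bills, "bottle bill"))
```

Watching these tallies change from session to session is the forecasting signal: once a few states pass a bill modeled on the first state's language, the remaining states tend to follow.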

Like the model of the national legislative process, this model has been refined several times. Some states tend to lead on particular issues. While it was once theorized that generic precursor states exist, this concept has been found to be too crude to be useful today. On particular issues, the concept still has some value, however. Oregon, for example, tends to lead on environmental issues; it passed the first bottle bill more than 10 years ago. California and New York lead on issues of taxes, governmental procedures, and administration. Florida leads on the issue of the right of privacy.

The piggy-backing of issues is also important. Twenty-two states have passed legislation defining the cessation of brain activity as death. The issue is an important moral and religious one but without substantial impact on its own. Seven states have, however, followed this concept with the concept of a "living will"; that is, a person may authorize the suspension of further medical assistance when brain death is recognized. This piggybacked issue has tremendous importance for medical costs, social security, estate planning, nursing homes, and so on.

A state forecasting model would be incomplete without another phenomenon, policy cross-over. Occasionally after an issue has been through the entire legislative process, the legislative policy being implemented is reapplied to another related issue without repeating the entire process. The concept of providing minimum electric service to the poor, the elderly, and shut-ins took years to implement, but the concept was reapplied to telephone service in a matter of months. And telephone companies did not foresee the development.

The monitoring stage of the strategic planning process therefore involves tracking not only those variables of traditional interest to long-range planners in higher education (enrollment patterns, for example) but also issues identified through environmental scanning. Moreover, by identifying issues as to where they are in the development cycle of issues, more information is introduced for iteration in the planning process.
