
CHAPTER 2

The MONASH Style of Computable General Equilibrium Modeling: A Framework for Practical Policy Analysis

Peter B. Dixon*, Robert B. Koopman**, Maureen T. Rimmer*

*Centre of Policy Studies, Monash University

**US International Trade Commission

Abstract


MONASH models are descended from Johansen's 1960 model of Norway. The first MONASH model was ORANI, used in Australia's tariff debate of the 1970s. Johansen's influence combined with institutional arrangements in their development gave MONASH models distinctive characteristics, facilitating a broad range of policy-relevant applications. MONASH models currently operate in numerous countries to provide insights on a variety of questions including:

the effects on: macro, industry, regional, labor market, distributional and environmental variables

of changes in: taxes, public consumption, environmental policies, technologies, commodity prices, interest rates, wage-setting arrangements, infrastructure and major-project expenditures, and known levels and exploitability of mineral deposits (the Dutch disease).

MONASH models are also used for explaining periods of history, estimating changes in technologies and preferences and generating baseline forecasts. Creation of MONASH models involved a series of enhancements to Johansen's model, including: (i) a computational procedure that eliminated Johansen's linearization errors without sacrificing simplicity; (ii) endogenization of trade flows by introducing into computable general equilibrium (CGE) modeling imperfect substitution between imported and domestic varieties (the Armington assumption); (iii) increased dimensionality allowing for policy-relevant detail such as transport margins; (iv) flexible closures; and (v) complex functional forms to specify production technologies. As well as broad theoretical issues, this chapter covers data preparation and introduces the GEMPACK purpose-built CGE software. MONASH modelers have responded to client demands by developing four modes of analysis: historical, decomposition, forecast and policy. Historical simulations produce up-to-date data, and estimate trends in technologies, preferences and other naturally exogenous but unobservable variables. Decomposition simulations explain historical episodes and place policy effects in historical context. Forecast simulations provide baselines using extrapolated trends from historical simulations together with specialist forecasts. Policy simulations generate effects of policies as deviations from baselines. To emphasize the practical orientation of MONASH models, the chapter starts with a MONASH-style policy story.


Keywords

MONASH computable general equilibrium models, flexible closures, computable general equilibrium forecasting, policy-oriented modeling, telling a computable general equilibrium story, Johansen's computable general equilibrium influence

JEL classification codes

C68, C63, D58, F16, F14

2.1 INTRODUCTION

This chapter describes the MONASH style of CGE modeling, which started with the

ORANI model of Australia (Dixon et al., 1977, 1982). MONASH models are directly

descended from the seminal work of Leif Johansen (1960). The influence of Johansen

combined with the institutional arrangements under which MONASH models have

been developed has given them distinctive technical characteristics, facilitating a broad

range of policy-relevant and influential applications. MONASH models are now

operated on behalf of governments and private sector organizations in numerous

countries, including Australia, the US, China, Finland, Netherlands, Malaysia, Taiwan,

Brazil, South Africa and Vietnam.1 The MONASH approach underlies the worldwide

Global Trade Analysis Project (GTAP) network (see Chapter 12 by Hertel in this

Handbook).

The practical focus of MONASH models reflects their history. ORANI was created

in the IMPACT Project, a research initiative of the Industries Assistance Commission

(IAC). The IAC was the agency of the Australian government with responsibility for

advising policy makers on the economic and social effects of tariffs, quotas and other

protective devices against imports.

To understand why the IMPACT Project was established and why it produced

a model such as ORANI, we need to go back to the federation of the Australian

colonies. Before federation in 1901, the dominant British colonies on the Australian

continent were New South Wales, which followed a free trade policy, and Victoria,

which adopted high tariffs against manufactured imports. After a heated debate,2 the

federated nation adopted what was close to the Victorian policy, setting protection of

manufacturing industries at an average rate of about 23%. Protection increased during

the 1930s and continued to rise after World War II, reaching rates of more than 50%

for some industries. Resentments about protection persisted and intensified as rates

rose, especially in the export-oriented state of Western Australia. By the 1960s,

1 A complete technical exposition of a modern MONASH model is Dixon and Rimmer (2002) together with supporting web material at http://www.monash.edu.au/policy/monbook2.htm. See also Honkatukia (2009).

2 See Glezer (1982, chapter 1).



Australia’s protectionist stance was being challenged analytically by leading economists

such as Max Corden (e.g. Corden, 1958) and by the 1970s, there was demand by

policy makers for a quantitative tool for analyzing protection. Policy makers wanted to

know how people whose jobs would disappear with lower protection could be

reabsorbed into employment. The IAC responded in 1975 by setting up the IMPACT

Project with the task of building an economy-wide model that could be used to trace

out the effects of changes in protection for one industry on employment prospects for

other industries.

The arrangements for the IMPACT Project maximized the probability of

a successful outcome. It had a sharp focus on a major policy problem (protection),

thereby attracting policy-oriented, ambitious economists. It had an outstanding initial

director, Alan A. Powell, who was a highly respected applied econometrician.

Without blunting its practical orientation, Powell conducted the Project at arm's length from the bureaucracy. Under his leadership, IMPACT was an open environment that allowed talented young economists to flourish. The outcome was the

ORANI model, the first version of which was operational in 1977. By providing

detailed quantification of the effects of cuts in protection on winners as well as losers,

and by showing where jobs would be gained as well as lost, ORANI helped in the

formation of an anti-protection movement that eventually prevailed and converted

Australia from high protection in the mid-1970s to having almost free trade by the

end of the century.

With changes in political circumstances, the IMPACT Project left the IAC in 1979.

The IMPACT team was split between the University of Melbourne and La Trobe

University. Nevertheless, it continued to work as a group and in 1984 was reunited at the

University of Melbourne. Since 1991 the team has operated as the Centre of Policy

Studies (CoPS) at Monash University. Throughout its 37-year history, CoPS/IMPACT

has maintained an extraordinary level of staff cohesion. Three researchers have been with

the Project continuously for 37 years, six others have devoted more than 20 years to the

Project, while many others have served more than 10 years. Seven researchers have been

promoted at the Project to the rank of Full Professor. What explains the Project’s success

and longevity?

One factor is that since its beginning, when Powell set the standards,


in flow (i,s,j,k) and a(i,s,j,k,r) is the percentage

change in the use of margin r per unit of flow (i,s,j,k). In many ORANI simulations

variables such as a(i,s,j,k,r) were interpreted as changes in technology.

To reduce the computational dimensions of ORANI, (2.7) was used to substitute out

x(i,s,j,k,r), i.e. (2.7) was deleted and x(i,s,j,k,r) was replaced by the right-hand side of (2.7)

wherever it appeared in the rest of the model. By this process, the dimensions of the

matrix to be inverted in (2.6) were reduced to a manageable size: about 200×200 in the 1977 version of ORANI and about 400×400 in the 1982 version.22

While variables and equations disappear from a model during condensation, no

information is lost. Results for eliminated variables can be recovered by backsolving

using the eliminated equations. One implication of this is that eliminated variables are

necessarily endogenous.

Through condensation in a Johansen linear framework, problems of dimensionality

were largely removed. This gave two advantages: (i) the full dimensionality of available

input-output tables could be used and (ii) computational-theoretical compromises were

reduced. For example, in ORANI there was no inhibition on computational grounds

about including a high-dimension equation such as (2.7) if this was considered the

theoretically appropriate specification.
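To make the mechanics of condensation concrete, a minimal numerical sketch (illustrative only; the small three-variable system below is invented and is not ORANI itself): a block of variables is substituted out using its defining equations, the condensed system is solved, and the eliminated variables are then recovered by backsolving.

import numpy as np

# Toy system in percentage changes:
#   retained variables v = [v1, v2, v3], eliminated variables w = [w1, w2]
#   "definition" equations (in the spirit of (2.7)):  w = C @ v
#   remaining equations:                              A_v @ v + A_w @ w = 0
C   = np.array([[1.0, 0.5, 0.0],
                [0.0, 1.0, 1.0]])
A_v = np.array([[2.0, -1.0, 0.0],
                [0.0,  1.0, 3.0]])
A_w = np.array([[1.0, 0.0],
                [0.0, 2.0]])

# Condensation: substitute w out, leaving a smaller system in v only.
A_cond = A_v + A_w @ C          # 2 equations in 3 variables

# Close the condensed system: treat v3 as exogenous, v1 and v2 as endogenous.
A_endog = A_cond[:, :2]
A_exog  = A_cond[:, 2:]
v3_shock = np.array([1.0])      # 1% shock to the exogenous variable
v_endog = np.linalg.solve(A_endog, -A_exog @ v3_shock)

# Backsolving: results for the eliminated variables are recovered from w = C v.
v_full = np.concatenate([v_endog, v3_shock])
w = C @ v_full
print(v_endog, w)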

2.3.2.3 Closure flexibility in the Johansen framework

Johansen used just one closure. However, his framework was readily extended to

encompass closure flexibility. This was done in ORANI by leaving the allocation of

22 The condensations of the two versions are described in Sutton (1976, 1977) and Dixon et al. (1982, pp. 207–229).


variables between y (endogenous) and x (exogenous) in Johansen’s equation (2.2) as

a user choice. This imparted an important degree of flexibility.

If, for example, the focus was on the short-run effects of a policy, then capital in each

industry was treated exogenously, unaffected in the short-run. At the same time, rates of

return were treated endogenously. Simulations conducted under this closure were

thought to reveal effects that would emerge after about 2 years.23 If a long-run focus

were required, then the closure was reversed. It was assumed that deviations in rates of

return would be temporary. Thus, in long-run simulations, rates of return were exogenous while capital stocks adjusted endogenously to allow rates of return to be maintained at their initial levels.
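A minimal sketch of what closure flexibility means computationally (illustrative only; the coefficients and variable names below are invented): the same linearized system is solved under two different partitions of its variables into exogenous and endogenous sets.

import numpy as np

# Toy linearized system A @ v = 0 with m = 2 equations and n = 3 variables:
#   v = [capital, rate_of_return, wage]  (percentage changes; numbers are made up)
A = np.array([[1.0, -2.0,  0.5],
              [0.3,  1.0, -1.0]])
names = ["capital", "rate_of_return", "wage"]

def solve_closure(exog, shock):
    """Treat one variable as exogenous, shock it, and solve for the other two."""
    j = names.index(exog)
    endog = [i for i in range(3) if i != j]
    y = np.linalg.solve(A[:, endog], -A[:, [j]] * shock)
    return dict(zip([names[i] for i in endog], y.ravel()))

# Short-run style closure: capital is exogenous, the rate of return adjusts.
print(solve_closure("capital", 1.0))
# Long-run style closure: the rate of return is exogenous, capital adjusts.
print(solve_closure("rate_of_return", 1.0))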

An early ORANI study that took advantage of closure flexibility was that by Dixon

et al. (1979). This was commissioned by the Crawford Group, set up by the Australian

Government in 1977 to report on macro and industry policies to achieve a broad-based

industry and regional recovery from what was then a deeply recessed situation. To widen

the appeal of the ORANI results and defuse criticism, simulations were conducted under

two closures. In both closures real wages were treated exogenously, reflecting their

determination in what was at the time a legalistic, centralized system that could produce

outcomes with little resemblance to those that would be expected from market forces.

The closures differed with respect to capital utilization and exports.

In what was referred to as a neoclassical closure, capital used in each industry was set

exogenously to fully employ the capital available to the industry. Rental rates adjusted

endogenously to ensure compatibility between demand for capital and the exogenously

given levels of capital usage. Exports in the neoclassical closure were determined by the

interaction of production costs in Australia and price-elastic foreign demands.

In what was referred to as a neo-Keynesian closure,24 the rental rate on capital was

treated as a profit mark-up and linked exogenously in each industry to variable costs per

unit of production.25 Capital in use was treated endogenously. Exports were assumed

rigid and to make this computationally possible, a phantom export subsidy was endogenized for each commodity.

Despite these seemingly radical differences in closure, the policy implications of the

two sets of simulations were the same: a combination of reduction in the real costs of

employing labor and an expansion in demand offered the best prospect for a broad-based

recovery. Real cost reduction would stimulate trade-exposed industries and regions

while demand expansion would stimulate the rest of the economy. Naturally, the

23 This was worked out by Cooper and McLaren (1983) who compared ORANI comparative static short-run results

with those produced by a continuous-time macro model. See also Breece et al. (1994) and Dixon (1987).

24 The terms neoclassical and neo-Keynesian have been used by a number of authors, but never in quite the same way.

For a discussion of closure possibilities in early CGE models with associated nomenclature, see Rattso (1982) and

Robinson (2006).

25 For a more recent application of this idea in a dynamic setting, see Dixon and Rimmer (2011).


question arose as to what policies could reduce labor costs in an acceptable way while

expanding demand. One answer, illustrated by ORANI simulations in Corden and

Dixon (1980), was a wage-tax bargain under which workers forego wage increases in

return for tax cuts or improvements in social capital. Such bargains were an important

part of Australian economic policy in the 1980s.

In the 1990s, the idea of flexible closures was extended in dynamic MONASH

models to allow for four modes of analysis: historical, decomposition, forecast and policy.

These are described in Section 2.5.

2.3.2.4 Complex functional forms in the Johansen framework

Early CGE modelers outside the Johansen school worried that the use of the Johansen

linear percentage-change format was limiting with respect to model specification. For

example, Dervis et al. (1982, p. 137) comment that:

Johansen linearized the general equilibrium model (in logarithms) and so was able to solve it by simple matrix inversion… Since then there have been advances in solution methods that permit CGE models to be solved directly for the levels of all endogenous variables and so permit model specifications that cannot easily be put into log-linear forms.

Far from being limiting, the Johansen framework simplified the introduction into CGE

modeling of the advanced functional forms that were being developed in this period. For

example, consider the CRESH26 cost-minimizing problem:

choose $X_i$, $i = 1, \dots, n$,

to minimize:

$$\sum_{i=1}^{n} P_i \cdot X_i \qquad (2.8)$$

subject to:

$$\sum_{i=1}^{n} \left(\frac{X_i}{Z}\right)^{h_i} \frac{Q_i}{h_i} = a, \qquad (2.9)$$

where Z is output, $P_i$ and $X_i$ are input prices and quantities, and the $Q_i$, $h_i$ and $a$ are parameters, with the $Q_i$ values being positive and summing to one and the $h_i$ values being less than one but not precisely zero.

On the basis of problem (2.8)–(2.9) it is difficult to obtain an intuitive understanding of the input-demand functions: they have no closed-form levels representation. Given values for $h_i$, values for $Q_i$ and $a$ can be determined on the basis of input-output data, but this is technically awkward. By contrast, the Johansen-style percentage change representation of the input-demand functions is readily interpretable and easily calibrated. As shown in Section 2.4.4, (2.8)–(2.9) leads to:

$$x_i = z - \sigma_i \cdot (p_i - p), \quad i = 1, \dots, n, \qquad (2.10)$$

where $x_i$, $z$ and $p_i$ are percentage changes in the variables represented by the corresponding uppercase symbols, $\sigma_i$ is a positive substitution parameter defined by $\sigma_i = 1/(1 - h_i)$, and $p$ is a weighted average of the percentage changes in all input prices defined by:

$$p = \sum_{k=1}^{n} S_k^{\#} \cdot p_k. \qquad (2.11)$$

The weights $S_k^{\#}$ are modified cost shares of the form:

$$S_k^{\#} = \frac{S_k \cdot \sigma_k}{\sum_{i=1}^{n} S_i \cdot \sigma_i}, \qquad (2.12)$$

where $S_k$ is the share of k in costs.

26 CRESH was introduced as a generalization of CES by Hanoch (1971).

Reflecting constant returns to scale, (2.10) implies that a 1% increase in output, holding input prices constant, causes a 1% increase in demand for all inputs. An increase in the price of i relative to the average price of all inputs causes substitution away from i and towards other inputs. The sensitivity of demand for i with respect to its relative price is controlled by the parameter $\sigma_i$. If this parameter has the same value for all i, then (2.10) takes the familiar CES form. However, if we wish to introduce differences between inputs in their price sensitivity then this can be done by adopting different values for $\sigma_i$. Once values have been assigned for $\sigma_i$, calibration can be completed on the basis of cost shares ($S_k$) from input-output data.
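As an illustration of how (2.10)–(2.12) are used once calibrated, a small sketch (the cost shares and substitution parameters below are invented) computes the percentage changes in input demands implied by given changes in output and input prices.

import numpy as np

# Calibration inputs: cost shares S_k from input-output data and
# assigned substitution parameters sigma_k (both invented here).
S     = np.array([0.5, 0.3, 0.2])     # cost shares, sum to one
sigma = np.array([0.8, 1.2, 2.0])     # sigma_k = 1/(1 - h_k) > 0

# Modified cost shares, equation (2.12).
S_mod = S * sigma / np.sum(S * sigma)

# A simulation: output up 1%, price of input 1 up 2%, other prices unchanged.
z = 1.0
p = np.array([2.0, 0.0, 0.0])

# Average input price, equation (2.11), and input demands, equation (2.10).
p_bar = S_mod @ p
x = z - sigma * (p - p_bar)
print(p_bar, x)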

Dixon et al. (1992, pp. 124–148) give derivations of Johansen-style demand and

supply equations for a variety of optimizing problems based on CES, CET, Translog,

CRESH and CRETH functions. In all these cases, the Johansen-style input-demand

functions or output-supply functions look like (2.10): the percentage change in the

particular input (output) equals the percentage change in the relevant activity variable

minus (plus) a substitution (transformation) term that compares the percentage change in

the particular price with a share-weighted average over the percentage changes in the

prices of all the substitutes (transformates).

More generally, all differentiable input demand functions and output supply

functions can be represented in a Johansen format. Usually the Johansen represen-

tation is more transparent than the levels representation. Perhaps reflecting this,

rapid progress was made in the adoption of sophisticated functional forms in the

ORANI model.


2.4 EXTENDING JOHANSEN'S COMPUTATIONAL FRAMEWORK: THE MATHEMATICAL STRUCTURE OF A MONASH MODEL

This section is a broad technical overview of MONASH modeling. We start in Section

2.4.1 by describing a MONASH model as a system of m equations in n variables. We

emphasize two points: (i) the variables and equations are concerned with a single period,

usually thought of as year t, and (ii) we always have an initial solution, i.e. a set of values

for the n variables that satisfy the m equations. Starting from the initial solution, other

solutions can be obtained by derivative methods. We describe the Johansen/Euler

method used for MONASH models. Section 2.4.2 shows how periods are linked in

MONASH models to make them dynamic. Section 2.4.3 establishes the existence of the

initial solution for each year t. MONASH models are written largely as equations in

which the variables are percentage changes in prices and quantities in year t away from

their initial solution. The derivation of percentage change equations from underlying

levels equations is discussed in Section 2.4.4. An overview of the GEMPACK software

used in building, solving and analyzing MONASH models is presented in Section 2.4.5.

Section 2.4.6 provides some notes on the creation of a database for a MONASH-style

CGE model.

2.4.1 Theory of the Johansen/Euler solution method

A MONASH model can be represented as a system of m equations in n variables:

$$F(X, Y) = 0, \qquad (2.13)$$

where F is a vector of m functions, X is the vector of n−m variables chosen to be exogenous and Y is the vector of m variables chosen to be endogenous.

In discussing (2.13) it is convenient to assume that we are dealing with a national

model with annual periodicity.27 In such a model the vector (X,Y ) includes flow

variables for year t at the national level representing quantities and values of demands

and supplies. Other variables in (X,Y ) refer to stocks or levels at an instant of time,

examples being capital stocks at the start of year t and at the end of year t, and the level

of the exchange rate at the start and end of year t. (X,Y ) also includes lagged variables,

e.g. the lagged consumer price index for year t, which is the consumer price index for

year t−1.

The m equations include links between flow variables in year t provided by market-

clearing conditions, zero-pure-profit conditions and demand and supply equations

derived from optimizing problems. The equations also impose links between stock and

27 MONASH-style regional and multicountry models are discussed by Giesecke and Madden in Chapter 7 and Hertel

in Chapter 12 of this Handbook. MONASH models with quarterly periodicity can be found in Adams et al. (2001)

and Dixon et al. (2010).


flow variables. For example, end-of-year capital stocks are linked to start-of-year capital

stocks via investment and depreciation during the year. Lagged adjustment processes may

be included among the equations. For example, wage rates in year t might be related to

lagged consumer prices in year t.

This brings us to the first critical point in understanding the MONASH paradigm. A

MONASH model is a system of equations connecting variables for year t. These can be

current variables, lagged variables, stock variables or flow variables, but they are all

variables for year t.

To solve the model for year t we need a method for computing the value for the Y

vector in (2.13) corresponding to the year t value for the X vector. If (2.13) were small

and sufficiently simple we might contemplate solving it explicitly to obtain the

relationship:

$$Y = G(X). \qquad (2.14)$$

However, in realistic practical CGE models, (2.13) consists of many thousands of variables

connected by non-linear relationships. In these circumstances, solution via discovery of

an explicit form for G is out of the question.

This brings us to the second critical point in understanding the MONASH paradigm.

While we can rarely have an explicit form for the solution function G, we can always have an initial solution, i.e. a vector $(\bar{X}(t), \bar{Y}(t))$ that satisfies:

$$F(\bar{X}(t), \bar{Y}(t)) = 0 \quad \text{or equivalently} \quad \bar{Y}(t) = G(\bar{X}(t)). \qquad (2.15)$$

As will be discussed in Section 2.4.3, $(\bar{X}(t), \bar{Y}(t))$ usually represents the situation in a particular year, often year t−1. With an initial solution in place, and assuming that F is differentiable,28 further solutions can be computed by derivative methods. These involve estimation of the partial derivatives, $G_X$, of the G function but not explicit representation of G itself. With $G_X$ we can estimate the effects on Y of moving X from its initial value, $\bar{X}(t)$, to its required value for year t, X(t).

The derivative method used in MONASH models to move from the initial solution

for year t to the final solution is the Johansen/Euler method.29 We named this method in

recognition of the contributions of Johansen (1960) who, as described in Section 2.3,

applied a one-step version of it to solve his CGE model of Norway, and Euler, the

eighteenth century mathematician who set out the theory of the method as an approach

to numerical integration.30

28 Non-differentiabilities associated with complementarity conditions are discussed in Horridge et al. in Chapter 20 of

this Handbook.

29 Other derivative methods are described in Dervis et al. (1982, pp. 491–496).

30 Early followers of Johansen's one-step method include Taylor and Black (1974), Staelin (1976), Dixon et al. (1977), and Keller (1980). The multistep version of the Johansen/Euler method was developed by Dixon et al. (1982).

Another early application of the multistep approach is Bovenberg and Keller (1984).


In Johansen/Euler computations we start by replacing the original system of non-linear equations in X and Y with a linear system in which the variables are changes in X and Y:

$$F_X(\bar{X}(t), \bar{Y}(t)) \cdot \Delta X + F_Y(\bar{X}(t), \bar{Y}(t)) \cdot \Delta Y = 0, \qquad (2.16)$$

where $F_X(\bar{X}(t), \bar{Y}(t))$ and $F_Y(\bar{X}(t), \bar{Y}(t))$ are the m×(n−m) and m×m matrices of first-order partial derivatives of F with respect to X and Y evaluated at $(\bar{X}(t), \bar{Y}(t))$, and ΔX and ΔY are (n−m)×1 and m×1 vectors of deviations in the values of the variables away from $(\bar{X}(t), \bar{Y}(t))$. The left-hand side of (2.16) is an approximation to the vector of changes in the F functions caused by changing the variable values from $(\bar{X}(t), \bar{Y}(t))$ to $(\bar{X}(t) + \Delta X, \bar{Y}(t) + \Delta Y)$. As we are looking for a new solution to (2.13), we put the vector of approximate changes in F equal to zero. We recognize that in going from the initial solution for year t to the new solution, we must leave the values of the F functions unchanged from zero.

From (2.16), we obtain:

$$\Delta Y = B(\bar{X}(t), \bar{Y}(t)) \cdot \Delta X, \qquad (2.17)$$

where $B(\bar{X}(t), \bar{Y}(t))$ is the $G_X$ matrix evaluated at $(\bar{X}(t), \bar{Y}(t))$ and computed according to:31

$$B(\bar{X}(t), \bar{Y}(t)) = -F_Y(\bar{X}(t), \bar{Y}(t))^{-1} \cdot F_X(\bar{X}(t), \bar{Y}(t)). \qquad (2.18)$$

Equation (2.17) is a version of Johansen's linear approximation of the true relationship between changes in X and changes in Y. By setting ΔX at $X(t) - \bar{X}(t)$ we can estimate the required value for Y in year t as:

$$Y_1^1(t) = \bar{Y}(t) + B(\bar{X}(t), \bar{Y}(t)) \cdot \Delta X, \qquad (2.19)$$

where $Y_1^1(t)$ is the estimate obtained in the first step (superscript) of a one-step (subscript) procedure.

The linearization errors generated in the application of (2.19) are illustrated in Figure 2.4 for a two-variable, one-equation model in which abcd is the true relationship between X and Y and ebf is Johansen's linear approximation, a straight line tangent to the true relationship at the initial solution. In using (2.19) to compute the effect of moving X from its initial value $\bar{X}(t)$ to its final value X(t), the linearization error is fc, i.e. the gap between the Johansen solution, $Y_1^1(t)$, and the true solution, Y(t).

31 We assume that $F_Y(\bar{X}(t), \bar{Y}(t))$ is non-singular. Via the implicit function theorem, this is equivalent to assuming the existence of a unique G function such that F(X(t), G(X(t))) = 0 for all X in a neighborhood of $\bar{X}(t)$. If $F_Y(\bar{X}(t), \bar{Y}(t))$ is singular, then the Johansen method will fail. However, this is not a computational problem. Any method should fail because the model does not imply that movements in Y are uniquely determined by movements in X in the neighborhood of $\bar{X}(t)$.

Figure 2.4 Johansen solution.


The reason for linearization errors in the use of (2.19) to solve (2.13) can be found in (2.16), which led to (2.19). The left-hand side of (2.16) only approximates the vector of changes in the F functions caused by changes in X and Y. As we move away from $(\bar{X}(t), \bar{Y}(t))$, the partial derivatives of the F functions are also moving. On the left-hand side of (2.16), we evaluate the effects on the F functions of movements in the variables with the partial derivatives fixed at their initial values. More accurate solutions of (2.13) can be achieved by multistep Johansen/Euler calculations where we allow for changes in the partial derivatives of F.

In a two-step Johansen/Euler computation, we impose the change, ΔX, in the exogenous variables in two equal steps. In the first step we use (2.19) to compute:

$$Y_2^1(t) = \bar{Y}(t) + B(\bar{X}(t), \bar{Y}(t)) \cdot \frac{\Delta X}{2}. \qquad (2.20)$$

$Y_2^1(t)$ is an estimate of the solution of (2.13) at $X = X_2^1(t)$, where:

$$X_2^1(t) = \bar{X}(t) + \Delta X/2, \qquad (2.21)$$

and the superscript 1 and the subscript 2 in (2.20) and (2.21) denote values reached at the end of the first step in a two-step procedure. In the second step we re-evaluate the partial derivatives of F at $(X_2^1(t), Y_2^1(t))$, recompute the B matrix according to:

$$B(X_2^1(t), Y_2^1(t)) = -F_Y(X_2^1(t), Y_2^1(t))^{-1} \cdot F_X(X_2^1(t), Y_2^1(t)), \qquad (2.22)$$

and obtain our two-step estimate $[Y_2^2(t)]$ of the required year t value of Y as:

$$Y_2^2(t) = Y_2^1(t) + B(X_2^1(t), Y_2^1(t)) \cdot \frac{\Delta X}{2}. \qquad (2.23)$$

We can expect the two-step answer, $Y_2^2(t)$, to be considerably closer to the true solution, Y(t), than the one-step answer, $Y_1^1(t)$. This is illustrated in Figure 2.5 in which the linearization error for the two-step computation, sc, is much smaller than that for the one-step computation, fc. In drawing Figure 2.5 we have assumed that $B(X_2^1(t), Y_2^1(t))$ is a good approximation to $B(X_2^1(t), G(X_2^1(t)))$. In other words, we have assumed that moving off the solution line abwcd in the first step does not invalidate formula (2.22) as an approximation to the slope of the solution line at $X_2^1(t)$. The formal justification of this assumption is given in the Appendix: it depends on the derivatives of B with respect to Y being bounded so that B does not move too far away from the slope on the solution line as Y moves away from the solution line.

Figure 2.5 Two-step Johansen/Euler solution.

Another feature of Figure 2.5 worthy of comment is that sc is about half of fc, i.e. as we double the number of steps (from 1 to 2) the linearization error is halved. This is not just an artifact of the particular illustration in Figure 2.5. As discussed in the Appendix, it is a quite general phenomenon. It can be exploited via Richardson's extrapolation to obtain highly accurate solutions from a small number of low-step computations. For example, with:

$$Y_2^2(t) - Y(t) \approx 0.5 \cdot \big(Y_1^1(t) - Y(t)\big), \qquad (2.24)$$

we can often generate a highly accurate extrapolated estimate of Y(t) on the basis of one- and two-step solutions as:

$$Y_{extrap}^{1,2}(t) = 2 \cdot Y_2^2(t) - Y_1^1(t). \qquad (2.25)$$

Even more accurate solutions can be obtained via higher-step computations and associated extrapolations.32
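A compact sketch of the n-step Johansen/Euler procedure and the extrapolation in (2.25), applied to a deliberately tiny one-equation example chosen here only for illustration: F(X, Y) = Y − X² = 0, with initial solution (1, 1) and a shock that moves X to 2.

# Tiny illustrative model: F(X, Y) = Y - X**2 = 0, so the true G is Y = X**2.
def F_X(X, Y):  # derivative of F with respect to X
    return -2.0 * X
def F_Y(X, Y):  # derivative of F with respect to Y
    return 1.0

def johansen_euler(X0, Y0, X_target, n_steps):
    """Move X from X0 to X_target in n equal steps, updating Y with B = -F_Y^{-1} F_X."""
    X, Y = X0, Y0
    dX = (X_target - X0) / n_steps
    for _ in range(n_steps):
        B = -F_X(X, Y) / F_Y(X, Y)      # equation (2.18), scalar case
        Y += B * dX                      # one step of (2.19)/(2.23)
        X += dX
    return Y

X0, Y0, X_target = 1.0, 1.0, 2.0          # initial solution (1, 1); shock X to 2
Y1 = johansen_euler(X0, Y0, X_target, 1)  # one-step (Johansen) estimate
Y2 = johansen_euler(X0, Y0, X_target, 2)  # two-step estimate
Y_extrap = 2.0 * Y2 - Y1                  # Richardson extrapolation, equation (2.25)
print(Y1, Y2, Y_extrap, X_target**2)      # compare with the true solution Y = 4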

As an alternative to working with a system of equations such as (2.16) connecting

changes in variables, it is usually more convenient to work with a system in which most

of the variables are percentage changes. Johansen’s model was mainly in percentage

changes as are MONASH models. The advantage of percentage changes is that they

are immediately interpretable without worrying about units. However, for some

variables, those that may pass through zero in a simulation, percentage changes are not

an option.

Starting from (2.16) we can produce a mixed system in which some variables are changes and some are percentage changes by replacing relevant components, $\Delta X_i$ and $\Delta Y_j$, of ΔX and ΔY with percentage change variables:

$$x_i = 100 \cdot \frac{\Delta X_i}{\bar{X}_i(t)} \quad \text{and} \quad y_j = 100 \cdot \frac{\Delta Y_j}{\bar{Y}_j(t)}, \qquad (2.26)$$

and relevant columns of $F_X$ and $F_Y$ by:

$$\big[0.01 \cdot \bar{X}_i(t) \cdot F_{X,i}(\bar{X}(t), \bar{Y}(t))\big] \quad \text{and} \quad \big[0.01 \cdot \bar{Y}_j(t) \cdot F_{Y,j}(\bar{X}(t), \bar{Y}(t))\big], \qquad (2.27)$$

where $F_{X,i}(\bar{X}(t), \bar{Y}(t))$ and $F_{Y,j}(\bar{X}(t), \bar{Y}(t))$ are the m×1 vectors of derivatives of F with respect to the ith component of X and the jth component of Y.

32 Equation (2.25) is the simplest version of Richardson’s extrapolation. For other versions, see Dahlquist et al. (1974,

p. 269).


Using the mixed system we can proceed to a linearized form of our model (corresponding to equation 4.5) that can be written as:

$$y = b(\bar{X}(t), \bar{Y}(t)) \cdot x, \qquad (2.28)$$

where $b(\bar{X}(t), \bar{Y}(t))$ is derived using matrices incorporating (2.27), and y and x are deviation vectors with percentage changes for most variables but changes for some variables such as the balance of trade for which zero is a realistic value.

With minor changes in the calculation of (X,Y ) at the end of each step, the mixed

linearized system can be used in a multistep Johansen/Euler computation in the same

way as the pure change system.

2.4.2 Linking the periods: dynamics

Assume that we have a solution, (X(0), Y(0)), for our model depicting the situation in year 0. Then we can use this as an initial solution for year 1:

$$(\bar{X}(1), \bar{Y}(1)) = (X(0), Y(0)). \qquad (2.29)$$

From here we can use the Johansen/Euler technique to generate the required solution for year 1 by applying shocks reflecting the difference between X(0) and X(1). The changes dY in the endogenous variables generated in this process can be interpreted as growth between year 0 and year 1. As shown in Figure 2.6, we can create a sequence of solutions showing year-on-year growth through any desired simulation period.

In a year-on-year sequence of solutions, start-of-year stock variables in the required solution for year t adopt the values of end-of-year stock variables in the required solution for year t−1. Consider, for example, a situation in which the start-of-year and end-of-year quantities of capital in industry j in year t−1 are given by:

$$K_j^{start}(t-1) = 10 \quad \text{and} \quad K_j^{end}(t-1) = 12. \qquad (2.30)$$

Figure 2.6 Sequence of solutions using the required solution for year t−1 as the initial solution for year t.

In the initial solution for year t, we have:

$$\bar{K}_j^{start}(t) = 10 \quad \text{and} \quad \bar{K}_j^{end}(t) = 12. \qquad (2.31)$$

In using the Johansen/Euler method to generate the required solution for year t, we must make sure that the start-of-year capital stock for industry j moves up by 20%, from its initial value of 10 to its required value of 12. If we include start-of-year stock variables among the components of X, then the required year-to-year changes can be imposed exogenously via shocks. More convenient methods are available via the use of homotopy equations. For example, we can include in (2.13) equations of the form:

$$K_j^{start}(t) = \bar{K}_j^{start}(t) + \big(\bar{K}_j^{end}(t) - \bar{K}_j^{start}(t)\big) \cdot U, \qquad (2.32)$$

where the barred coefficients referring to the initial solution are treated as parameters, and U is a variable (known as a homotopy variable) whose initial value is zero and final value is one.

With U on zero, (2.32) is satisfied by the initial solution (i.e. $K_j^{start}(t) = \bar{K}_j^{start}(t)$). When U moves to one, $K_j^{start}(t)$ moves to its required value, $K_j^{start}(t) = \bar{K}_j^{end}(t)$.
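A minimal numerical illustration of the homotopy device in (2.32), using the capital stock numbers of the example above: as U is moved from zero to one in equal steps, the start-of-year capital stock glides from its initial-solution value to its required value.

# Homotopy equation (2.32):
#   K_start(t) = K_start_bar(t) + (K_end_bar(t) - K_start_bar(t)) * U
K_start_bar, K_end_bar = 10.0, 12.0   # initial solution for year t, as in (2.31)

n_steps = 4
for step in range(1, n_steps + 1):
    U = step / n_steps                # U is shocked from 0 to 1 in equal steps
    K_start = K_start_bar + (K_end_bar - K_start_bar) * U
    print(step, U, K_start)           # ends at the required value 12 when U = 1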

The sequence of annual solutions depicted in Figure 2.6 is recursive (i.e. the solution

for year 1 uses year 0 as a starting point, the solution for year 2 uses year 1 as a starting

point, etc.) In models with forward-looking expectations, a simple recursive approach

will not work: in computing the solution for year 1 we need information on year 2.

Nearly all MONASH calculations have been conducted with static or adaptive expectations so that the recursive approach is adequate. However, as described in Dixon et al.

(2005), it is possible to handle forward-looking expectations by an iterative method

while retaining an essentially recursive approach. First, we set the model up with static

expectations and solve it recursively for years 1, 2, …, T. This gives us the basis for guessing values for variables in years t+1 and beyond when we are computing the

solution for year t. With these guesses in place, we repeat the recursive sequence of

solutions. The guesses for forward-looking variables are refined from sequence to

sequence.33

33 Another method of solving models with forward-looking variables is to compute all years simultaneously. This

method was developed by Wilcoxen (1985, 1987) and Bovenberg (1985). See also Malakellis (1998, 2000). A

disadvantage of simultaneous-solution methods is that they are feasible only if the underlying model is small.


Many MONASH computations are not concerned with the year-on-year

evolution of the economy. For example, in a decomposition analysis we may wish to

use a MONASH simulation to explain economic developments across a period of

several years, say 1992e1998. In this case, the initial solution for 1998 is the situation

in 1992, i.e.:

ð�Xð1998Þ; �Yð1998ÞÞ ¼ ðXð1992Þ;Yð1992ÞÞ; (2.33)

and the simulation consists of looking at the effects on the endogenous variables of

moving the exogenous variables from their 1992 values to their 1998 values. In such

a simulation, it is no longer appropriate to assume that start-of-year stock values in the

required solution equal end-of-year stock values in the initial solution. In our example,

this would entail the unwarranted assumption that stock values at the start of 1998 were

the same as stock values at the end of 1992.

2.4.3 Developing a solution for year 0 from the input-output data

2.4.3.1 Solution for year 0: overview

To implement the Johansen/Euler method (or any other derivative method) we need

a starting point, (X(0),Y(0)), which is a solution for year 0. As explained earlier, once we

have a starting solution we can generate other solutions. However, how do we get

a starting solution?

Most of the components in (X(0),Y(0)) can be derived from input-output or social

accounting data for year 0. We start by explaining this in general terms. Then we will

look more specifically at the input-output database for a typical MONASH model.

Input-output data are normally given as values. To separate out prices and quantities

we can adopt quantity units that are compatible with all prices in year 0 being one. For

example, if the price of a bushel of wheat is $4, then we adopt the quarter bushel as the

quantity unit for wheat. If the input-output data shows a flow of $1 billion of wheat from

farmers to bakers, then we say that 1 billion units (quarter bushels) of wheat are sold to

bakers.

Given the balance conditions in input-output data, we can be sure that the quantities

and prices derived in this way are compatible with demand/supply equality and zero pure

profits. What about equations derived from utility maximization and cost minimization

problems? These are satisfied with prices on one and the resulting quantities implied by

input-output data via calibration of the parameters or the introduction of shift variables.

For example, if households maximize a Cobb-Douglas utility function so that demand for commodity i ($C_i$) is related to the price of commodity i ($P_i$) and to total consumption (CTOT) by:

$$C_i = \frac{a_i \cdot CTOT}{P_i} \quad \text{for all commodities } i, \qquad (2.34)$$


then the parameter $a_i$ is calibrated or estimated as:

$$a_i = \frac{C_i(0) \cdot P_i(0)}{CTOT(0)} \quad \text{for all commodities } i, \qquad (2.35)$$

where Pi(0) is set at one, and Ci(0) and CTOT(0) are obtained from the household

column of the input-output data. With ai set via (2.35), it is clear that the input-output

values for Ci, Pi and CTOT satisfy (2.34). More generally, all of the demand and supply

equations in MONASH models (and models built in other input-output/social-

accounting-matrix traditions) contain sufficient free parameters and shift variables so that

they can be satisfied by the initial input-output data.
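A small sketch of the calibration in (2.35), using an invented household column of input-output data: with year 0 prices set to one, the $a_i$ fall out as budget shares, and the calibrated demand system (2.34) reproduces the year 0 data.

import numpy as np

# Invented household column of an input-output database (values for year 0).
C0 = np.array([40.0, 35.0, 25.0])   # household spending on each commodity
P0 = np.ones(3)                      # quantity units chosen so year 0 prices are one
CTOT0 = C0.sum()

# Calibration, equation (2.35): a_i = C_i(0) * P_i(0) / CTOT(0).
a = C0 * P0 / CTOT0

# Check: the calibrated demand system (2.34) reproduces the year 0 data.
C_check = a * CTOT0 / P0
print(a, np.allclose(C_check, C0))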

Input-output tables may not cover all of the flow variables in a model. For example,

MONASH models include variables making up the balance of payments and the public

sector budget. Additional data tables are necessary to provide an initial solution for these

variables (see Dixon and Rimmer, 2002; pp. 212–219). As well as flow variables,

MONASH models contain stock variables. Year 0 data are required for variables such as

start-of-year capital stocks by industry, start-of-year foreign debts and assets and start-of-

year public sector liabilities. Values for end-of-year stock variables in year 0 can be

derived from start-of-year values and relevant year 0 flow variables.

2.4.3.2 Solution for year 0 and the input-output database for a MONASH model

The input-output database for a typical MONASH model is illustrated in Figure 2.7.

These data not only provide the bulk of the year 0 solution, but they also give an

immediate impression of the model’s properties. By looking at the input-output data we

can see the levels of commodity, industry and occupational disaggregation. We can also

see: whether imported and domestic good i are treated as distinct varieties; whether

margins and indirect taxes are taken seriously and a distinction is made between

purchasers’ and producer prices; and whether there are industries that produce more

than one commodity (multiproduct industries) and commodities that are produced by

more than one industry (multi-industry products).


The data in Figure 2.7 has three parts: an absorption matrix; a joint-production

matrix; and a vector of import duties. The first row of matrices in the absorption matrix,

BAS1, …, BAS6, shows flows in year 0 of commodities to producers, investors, households, exports, public consumption and inventory accumulation. Each of these matrices has C×S rows, one for each of C commodities from S sources. C can be large.

For example in USAGE, a MONASH-style model of the US, there are over 500

commodities.34 S is usually 2: domestic and imported. However, it can be larger to

facilitate analyses in which it is important to identify imports from different countries.

For example, the US International Trade Commission (2007, 2009) uses a version of

34 See US International Trade Commission (2004) and Dixon and Rimmer (2004).

Absorption Matrix (columns: 1 Producers, size I; 2 Investors, size I; 3 Households, size 1; 4 Exports, size 1; 5 Government, size 1; 6 Inventories, size 1):

Basic Flows (C×S rows): BAS1, BAS2, BAS3, BAS4, BAS5, BAS6
Margins (C×S×N rows): MAR1, MAR2, MAR3, MAR4, MAR5, MAR6
Sales Taxes (C×S rows): TAX1, TAX2, TAX3, TAX4, TAX5, TAX6
Labor (M rows): LABOR
Capital (1 row): CAPITAL
Land (1 row): LAND
Production Taxes (1 row): TAX0

Joint Production Matrix (size C×I): MAKE
Import Duty (size C×1): TARIFF

C = number of commodities; I = number of industries; S = number of sources, usually 2 (dom & imp); M = number of occupations; N = number of commodities used as margins.

Figure 2.7 Input-output database for a typical MONASH model.


USAGE with 23 import sources (S = 24) to capture the effects of country-specific

import quotas.

BAS1 and BAS2 each have I columns, where I is the number of industries (usually

approximately the same as the number of commodities). The typical component of

BAS1 is the value of good i from source s [good (i,s)] used by industry j as an input to

current production, and the typical component of BAS2 is the value of (i,s) used to create


capital for industry j. As shown in Figure 2.7, BAS3, …, BAS6 each have one column.

Most MONASH-style models recognize one household, one foreign buyer, one category of public demand and one category of inventory demand. These dimensions can be

extended in work concerned with income distribution, free trade agreements and

multiple levels of government.

All of the flows in BAS1, …, BAS6 are valued at basic prices. The basic price of

a domestically produced good is the price received by the producer (that is the price paid

by users excluding sales taxes, transport costs and other margin costs). The basic price of

an imported good is the landed-duty-paid price, i.e. the price at the port of entry just

after the commodity has cleared customs.

Costs separating producers or ports of entry from users appear in the input-output

data in the margin matrices and in the row of sales-tax matrices. The margin matrices,

MAR1, …, MAR6, show the values of N margin commodities used in facilitating the flows identified in BAS1, …, BAS6. Typical margin commodities are the domestic varieties of wholesale trade, retail trade, road transport, rail transport, water transport, air transport, natural gas and other pipelines. Each of the matrices MAR1, …, MAR6 has C×S×N rows corresponding to the use of N margin commodities in facilitating flows of C commodities from S sources. The sales tax matrices TAX1, …, TAX6 show

collections of sales taxes (positive) or payments of subsidies (negative) associated with

each of the flows in the BAS matrices.

Payments by industries for M occupational groups are recorded in Figure 2.7 in the

matrix LABOR. In models and applications focusing on labor market issues, such as

training needs and immigration, M can be large. For example, some versions of the

USAGE model distinguish 750 occupations.

In most MONASH models, payments by industries for the use of capital and land are

recorded in the input-output data as vectors: CAPITAL and LAND in Figure 2.7.

However in studies concerned with food security and biofuels, the land dimension has

been disaggregated (see, e.g. Winston, 2009). The vector TAX0 shows collections of

taxes net of subsidies on production.

The final two data items in Figure 2.7 are TARIFF and MAKE. TARIFF is a C×1 vector showing tariff revenue by imported commodity. The joint-product matrix, MAKE, has dimensions C×I. Its typical component is the output (valued in basic

prices) of commodity c by industry i.

Together, the absorption and joint-production matrices satisfy two balance conditions. (i) The column sums of MAKE, which are values of industry outputs, are identical

to the values of industry inputs. Hence, the jth column sum of MAKE equals the jth

column sum of BAS1, MAR1, TAX1, LABOR, CAPITAL, LAND and TAX0. (ii) The

row sums of MAKE, which are basic values of outputs of domestic commodities, are

identical to basic values of demands for domestic commodities. If i is a non-margin

commodity, then the ith row sum of MAKE is equal to the sum across the (i,‘dom’)-rows


of BAS1 to BAS6. If i is a margin commodity, then the ith row sum of MAKE is equal to

the direct uses of domestic commodity i, i.e. the sum across the (i,‘dom’)-rows of BAS1

to BAS6, plus the margins use of commodity i. The margins use of i is the sum of the

components in the (c,s,i )-rows of MAR1 to MAR6 for all commodities c and sources s.

To obtain a year 0 solution for MONASH flow variables from a database such as that

in Figure 2.7, we start by defining quantity units for commodities as the amounts that had

a basic price of one. Now we can read from BAS1, …, BAS6 and MAR1, …, MAR6 both values and quantities of commodity demands. Similarly, we can read from MAKE both values and quantities of commodity supplies. With basic prices of commodities assigned the value one, the input-output data quickly reveals purchasers' prices for year 0. For example, the year 0 purchasers' price for good i from source s bought by industry j for use in current production is:

$$P1(i,s,j) = \Big[BAS1(i,s,j) + \sum_{n=1}^{N} MAR1(i,s,j,n) + TAX1(i,s,j)\Big] \Big/ BAS1(i,s,j). \qquad (2.36)$$

Do year 0 prices and quantities defined in this way satisfy MONASH equations

specifying that:

Quantity demanded of domestic product i = quantity supplied (2.37)

Value of output from industry j = value of j's inputs plus production taxes (2.38)

Purchasers' values = basic values plus margins and sales taxes? (2.39)

The balancing properties of the input-output data ensure that the values we have assigned to year 0 prices and quantities satisfy (2.37) and (2.38). Equation (2.39) is satisfied via definitions of year 0 purchasers' prices such as (2.36).
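A small sketch of how a year 0 purchasers' price such as (2.36) is read off the database, and how a balance condition in the spirit of (2.37) would be checked; the BAS, MAR and TAX values below are invented stand-ins for single entries of the matrices in Figure 2.7.

import numpy as np

# Stand-in slices of the database for one (i, s, j) flow: basic value,
# margin values for N = 2 margin commodities, and a sales tax (all invented).
BAS1 = 100.0
MAR1 = np.array([5.0, 3.0])
TAX1 = 2.0

# Purchasers' price, equation (2.36); with basic prices of one it is the ratio
# of the purchasers' value of the flow to its basic value.
P1 = (BAS1 + MAR1.sum() + TAX1) / BAS1
print(P1)   # 1.10

# A balance check in the spirit of (2.37): demand for a domestic non-margin
# commodity (summed over the BAS matrices) should equal the row sum of MAKE.
demand_dom = np.array([60.0, 40.0])   # invented (i,'dom') entries of BAS1..BAS6
MAKE_row_sum = 100.0
assert np.isclose(demand_dom.sum(), MAKE_row_sum)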

MONASH models contain many more equations connecting input-output variables

than those indicated by (2.37)–(2.39). All of these additional equations contain either

free parameters and/or free variables. That is, they contain parameters or variables for

which we are free to assign values that allow the equations to be satisfied by our year

0 values for prices and quantities. For example, consider the equation:

$$X1MARG(i,s,j,r) = X1(i,s,j) \cdot A1MARG(i,s,j,r), \qquad (2.40)$$

where X1MARG(i,s,j,r) is the use of margin-commodity r (e.g., road transport) to

facilitate the flow of intermediate input i from source s (domestic or imported) to

industry j, X1(i,s,j ) is the use of good i from source s by industry j as an intermediate

input and A1MARG(i,s,j,r) is the use of margin-commodity r per unit of flow of

intermediate input (i,s) to industry j.


From our input-output data, we have already assigned year 0 values to X1MARG(i,s,j,r) and X1(i,s,j). However, A1MARG(i,s,j,r) is free. If X1(i,s,j) is non-zero, then we ensure that (2.40) is satisfied by the year 0 quantities read from our input-output data by choosing the year 0 value for A1MARG(i,s,j,r) to be the ratio of the year 0 values of X1MARG(i,s,j,r) and X1(i,s,j). If X1(i,s,j) is zero, then A1MARG(i,s,j,r) can be assigned any value provided X1MARG(i,s,j,r) is also zero. If X1(i,s,j) is zero but X1MARG(i,s,j,r) is not zero, then we have a data error requiring correction.

Now consider a less trivial example. Part of the theory of MONASH models is that

industry j chooses its current inputs of domestic and imported good i to minimize costs

subject to a CES constraint in which the industry’s requirements for good i are

proportional to its activity level, Z( j ), i.e. industry j chooses:

$$X1(i,s,j), \quad s \in \{\text{dom}, \text{imp}\},$$

to minimize:

$$\sum_{s} P1(i,s,j) \cdot X1(i,s,j), \qquad (2.41)$$

subject to:

$$Z(j) = \Big[\sum_{s} X1(i,s,j)^{-\rho(i,j)} \cdot \delta 1(i,s,j)\Big]^{-1/\rho(i,j)}, \qquad (2.42)$$

where the $\delta 1(i,s,j)$ are non-negative parameters summing to one over s and $\rho(i,j)$ is a substitution parameter assigned a value greater than −1 (but not precisely zero) reflecting econometric estimates or views about import/domestic substitution.

Problem (2.41)–(2.42) leads to equations for the ratio of domestic to imported inputs of the form:

$$\frac{X1(i,\text{dom},j)}{X1(i,\text{imp},j)} = \left[\frac{\delta 1(i,\text{dom},j)}{\delta 1(i,\text{imp},j)} \cdot \frac{P1(i,\text{imp},j)}{P1(i,\text{dom},j)}\right]^{1/(1+\rho(i,j))}. \qquad (2.43)$$

Values can be assigned to the parameters $\delta 1(i,s,j)$, s = dom and imp, to ensure that (2.43) is satisfied by the year 0 values for X1(i,s,j) and P1(i,s,j), together with the value for the substitution parameter $\rho(i,j)$.
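A sketch of this calibration step (the year 0 flows and the value of ρ(i,j) below are invented): given ρ(i,j) and the year 0 values of X1 and P1, values of δ1(i,dom,j) and δ1(i,imp,j) that satisfy (2.43) and sum to one over s can be backed out directly.

import numpy as np

# Year 0 data for one (i, j) pair (invented): quantities and prices of the
# domestic and imported varieties, plus an assigned substitution parameter.
X_dom, X_imp = 80.0, 20.0
P_dom, P_imp = 1.0, 1.0        # year 0 prices set to one
rho = -0.5                      # implies sigma = 1/(1 + rho) = 2

# Back out delta1(dom)/delta1(imp) from (2.43), then impose summing to one.
ratio = (X_dom / X_imp) ** (1.0 + rho) * (P_dom / P_imp)
delta_dom = ratio / (1.0 + ratio)
delta_imp = 1.0 - delta_dom

# Check that (2.43) is reproduced by the calibrated parameters.
lhs = X_dom / X_imp
rhs = (delta_dom / delta_imp * P_imp / P_dom) ** (1.0 / (1.0 + rho))
print(delta_dom, delta_imp, np.isclose(lhs, rhs))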

A few examples do not constitute a proof of the existence of a year 0 solution, (X(0),Y(0)), to

(2.13). A complete proof for any model involves working through every equation,

identifying free parameters or variables. This is not difficult, but it is tedious.

2.4.4 Deriving change and percentage change equations

MONASH models are represented as linear systems of the form:

$$A(V) \cdot v = 0, \qquad (2.44)$$

where V is an n×1 vector of initial values or values generated during a multistep process for the variables (denoted as (X,Y) in the previous subsection), A is an m×n matrix of

coefficients each of which is a function of V and v is a vector of changes and percentage

changes in the variables away from their values in V.

In this subsection we describe how equations that make up the change/percentage

change system (2.44) can be derived from the levels system (2.13).

Most equations in (2.44) can be derived from the corresponding equation in (2.13) by the application of the three rules in Table 2.3. For example, the multiplication and power rules applied to (2.43) give the percentage change equation:

$$x1(i,\text{dom},j) - x1(i,\text{imp},j) = \sigma(i,j) \cdot \big(p1(i,\text{imp},j) - p1(i,\text{dom},j)\big), \qquad (2.45)$$

where the lowercase x and p are percentage changes in the variables represented by the corresponding uppercase symbols and $\sigma(i,j)$, which equals $1/(1+\rho(i,j))$, is the elasticity of substitution in industry j between domestic and imported units of commodity i.
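To spell out the step from (2.43) to (2.45), a short derivation: taking logarithms of (2.43) gives

$$\ln X1(i,\text{dom},j) - \ln X1(i,\text{imp},j) = \frac{1}{1+\rho(i,j)} \Big[\ln \delta 1(i,\text{dom},j) - \ln \delta 1(i,\text{imp},j) + \ln P1(i,\text{imp},j) - \ln P1(i,\text{dom},j)\Big].$$

Totally differentiating, with the parameters $\delta 1(i,\text{dom},j)$, $\delta 1(i,\text{imp},j)$ and $\rho(i,j)$ held fixed, and multiplying through by 100 to convert differentials of logarithms into percentage changes, gives

$$x1(i,\text{dom},j) - x1(i,\text{imp},j) = \frac{1}{1+\rho(i,j)} \cdot \big(p1(i,\text{imp},j) - p1(i,\text{dom},j)\big),$$

which is (2.45) with $\sigma(i,j) = 1/(1+\rho(i,j))$.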

In representing optimization problems in system (2.44), it is often convenient to use percentage change versions of the first-order conditions. For example, in the optimization problem (2.8)–(2.9), the first-order conditions are:

$$P_i = \Lambda \cdot X_i^{h_i - 1} \cdot Z^{-h_i} \cdot Q_i, \quad i = 1, \dots, n, \qquad (2.46)$$

where Λ is the Lagrangian multiplier, together with the constraint (2.9). These conditions can be represented in (2.44) as:

$$p_i = \lambda + (h_i - 1) \cdot x_i - h_i \cdot z, \quad i = 1, \dots, n, \qquad (2.47)$$

$$\sum_{j} (x_j - z) \cdot S_j = 0, \qquad (2.48)$$

where again we use lowercase symbols to represent percentage changes in the variables denoted by the corresponding uppercase symbols and:

$$S_j = \left(\frac{X_j}{Z}\right)^{h_j} Q_j \Big/ \sum_{k} \left(\frac{X_k}{Z}\right)^{h_k} Q_k, \quad j = 1, \dots, n. \qquad (2.49)$$

It is apparent from (2.46) that $S_j$ is the share of total costs accounted for by input j. While (2.47) and (2.48) can be used in (2.44) to represent optimization problem (2.8)–(2.9), we may prefer to eliminate the percentage change in the Lagrangian multiplier, λ. After a little algebra we can obtain (2.10) and (2.11).35

Table 2.3 Rules for deriving percentage-change equations

Multiplication rule: levels U = RW; percentage changes u = r + w
Power rule: levels U = R^α; percentage changes u = α·r
Addition rule: levels U = R + W; percentage changes Uu = Rr + Ww, or u = S_r·r + S_w·w

U, R and W are levels of variables, u, r and w are percentage changes, α is a parameter and S_r and S_w are shares evaluated at the current solution. In the first step of a Johansen/Euler computation, the current solution is the initial solution. Hence, S_r = R(0)/U(0) and S_w = W(0)/U(0). In subsequent steps, S_r and S_w are recomputed as U, R and W move away from their initial values.

Not all of the equations in (2.44) can be derived by the simple rules in Table 2.3.

Occasionally more complicated differentiations are required. For example, consider the

levels equation:

$$R = R_{norm} + C \cdot \ln\left[\frac{G - G_{min}}{G_{max} - G} \cdot \frac{G_{max} - T}{T - G_{min}}\right]. \qquad (2.50)$$

Variants of this equation are used in MONASH models to relate an industry's capital growth (G) through year t36 to the industry's expected rate of return (R) and its normal rate of return ($R_{norm}$).37 In (2.50), $G_{min}$, $G_{max}$, T and C are parameters with C positive and $G_{min} < T < G_{max}$. If the expected rate of return equals the normal rate of return ($R = R_{norm}$), then via (2.50) the growth in capital through the year is at its trend value (G = T). Capital growth will exceed trend (G > T) if the expected rate of return is greater than the normal rate of return ($R > R_{norm}$). Capital growth will never move above $G_{max}$ (as R → ∞, G → $G_{max}$). Similarly, capital growth will never move below $G_{min}$ (as R → −∞, G → $G_{min}$). By choosing suitable values for $G_{min}$ and $G_{max}$, we can ensure that our model always implies growth rates for capital in a realistic range. To obtain a form of (2.50) suitable for inclusion in (2.44) we can totally differentiate both sides. This gives:

$$del\_R - del\_R_{norm} - C \cdot \left[\frac{1}{G - G_{min}} + \frac{1}{G_{max} - G}\right] \cdot del\_G = 0. \qquad (2.51)$$

In (2.51), del_R, del_Rnorm and del_G are change variables and form part of the v vector in (2.44). We use change variables because R, Rnorm and G are variables for which zero is a sensible value. The coefficients on del_R, del_Rnorm and del_G in the relevant row of the A matrix are 1, −1 and $-C \cdot \{1/(G - G_{min}) + 1/(G_{max} - G)\}$.

35 The first move in deriving (2.10) and (2.11) from (2.47) and (2.48) is to multiply (2.47) by $S_i/(h_i - 1)$ and then sum over i.

36 G is J − 1, where J is the ratio of capital at the end of year t to capital at the start of year t.

37 Details on the MONASH treatment of capital growth are in Dixon and Rimmer (2002, pp. 189–198).
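A small sketch of the capital-growth relationship (2.50) (the parameter values below are invented): evaluating (2.50) and its inverse confirms the properties just listed, namely that G equals T when R equals Rnorm and that G remains between Gmin and Gmax however far R moves from Rnorm.

import numpy as np

# Invented parameter values satisfying Gmin < T < Gmax, with C > 0.
G_min, G_max, T, C, R_norm = -0.05, 0.20, 0.04, 0.03, 0.10

def R_of_G(G):
    """Expected rate of return implied by capital growth G, equation (2.50)."""
    return R_norm + C * np.log(((G - G_min) / (G_max - G)) *
                               ((G_max - T) / (T - G_min)))

def G_of_R(R):
    """Inverse of (2.50): capital growth implied by expected rate of return R."""
    w = np.exp((R - R_norm) / C) * (T - G_min) / (G_max - T)
    return (G_min + G_max * w) / (1.0 + w)

print(R_of_G(T))                 # equals R_norm: growth is at trend when R = Rnorm
print(G_of_R(R_norm))            # equals T
print(G_of_R(1.0), G_of_R(-1.0)) # approaches G_max and G_min for extreme R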


2.4.5 Introduction to the GEMPACK programs for solving and analyzing MONASH models

MONASH models are built, solved and analyzed using GEMPACK. The existence of

this software explains much of the popularity of MONASH-style models throughout the

world. GEMPACK is described in detail by Horridge et al. in Chapter 20 of this

Handbook. In this subsection we start with some brief comments on GEMPACK’s

history. Then we look at the structure of how model solutions are computed via

GEMPACK. This will be helpful in summarizing the technical material that has been

covered so far in this section.

The computer code for the first MONASH model (the ORANI model) was

developed by Sutton (1977). This code was effective and handled what was for the time

a very large CGE model. However, Sutton’s programs were model specific: they solved

the ORANI model. In 1980 Ken Pearson started work on GEMPACK. His objective

was to create a suite of programs that could be used to solve any CGE model in the

Johansen/MONASH tradition. The first version of GEMPACK was used for teaching

in 1984 and shortly after that was adopted by Australian CGE modelers. The first

GEMPACK manuals were published in 1986 (see Codsi and Pearson, 1986). Early

journal descriptions of GEMPACK are Pearson (1988) and Codsi and Pearson (1988).

Evidence of the success of GEMPACK is its adoption over the last 25 years by thousands

of CGE modelers.

The structure of a GEMPACK solution of a MONASH model is illustrated in

Figure 2.8. GEMPACK users start by presenting their model in TABLO code. This is

a language close to ordinary algebra. For example, assume that model (2.13) consists of

three equations and nine variables:

$$ DTOT(i) = \sum_{j=1}^{2} D(i,j), \qquad i = 1, 2, 3, \qquad (2.52) $$

where DTOT(i ) is total demand for good i and D(i,j ) is demand for i by user j. In

percentage-change form the model is:

$$ dtot(i) = \sum_{j=1}^{2} S(i,j)\, d(i,j), \qquad i = 1, 2, 3, \qquad (2.53) $$

where lowercase symbols are percentage changes in variables represented by the corre-

sponding uppercase symbols and S(i,j ) is the share of user j in the total demand for good i.

Largely self-explanatory TABLO code for the model is shown in Table 2.4.

TABLO code has two main roles in GEMPACK. The first is to give a set of

instructions for reading a database, which we can think of as revealing a value for V, and

using it to evaluate A(V) in (2.44). In the example in Table 2.4 the declaration of variables is an instruction to create an A matrix with nine columns, called d(C1,U1), ..., d(C3,U2), dtot(C1), ..., dtot(C3). The command starting with ‘Equation’ is an instruction that the A matrix should have three rows called E_dtot(C1), ..., E_dtot(C3). The equation itself is an instruction that the row E_dtot(Ci) should contain 1 in the dtot(i) column and −S(i,j) in the d(i,j) column. The Read and Formula commands contain instructions on how to evaluate S(i,j) from a dataset (e.g. input-output data of the form shown in Figure 2.7). In Figure 2.8 we show the data to be used in the (k+1)th step in an n-step sequence as Data($V^k_n$), k = 0, ..., n − 1.

Figure 2.8 GEMPACK solution of a MONASH model.

Table 2.4 Sample of TABLO code

File DATA # Data file #;
Set COM # Commodities # (C1-C3);
Set USER # User # (U1-U2);
Coefficient ! Declaration of coefficients !
 (All,i,COM)(All,j,USER) BAS(i,j) # Data for demand for i by j #;
 (All,i,COM) TBAS(i) # Total demand for i #;
 (All,i,COM)(All,j,USER) S(i,j) # Share of j in demand for i #;
Read BAS from file DATA Header "BAS";
Formula
 (All,i,COM) TBAS(i) = Sum(j,USER, BAS(i,j));
 (All,i,COM)(All,j,USER) S(i,j) = BAS(i,j)/TBAS(i);
Variable ! Declaration of variables !
 (All,i,COM)(All,j,USER) d(i,j) # Demand for i by j #;
 (All,i,COM) dtot(i) # Total demand for i #;
Equation E_dtot
 (All,i,COM) dtot(i) = Sum(j,USER, S(i,j)*d(i,j));
Update
 (All,i,COM)(All,j,USER) BAS(i,j) = d(i,j);


Having evaluated A(V) with the initial database to obtain $A(V^0_n)$, we introduce the closure. This can be done via a subprogram that lists the exogenous variables. GEMPACK can now split the A matrix into $A_X(V^0_n)$ and $A_Y(V^0_n)$.

The next two subprograms introduce the shocks (movements in exogenous variables)

and specify the solution method (e.g. Johansen/Euler with n steps). From these subpro-

grams, GEMPACK can compute the shocks to be applied in the first step of the n-step

sequence. All the information has now been assembled to enable GEMPACK to compute

the movements in the endogenous variables in the first step of the n-step sequence.

This brings us to the second main role of TABLO. It provides instructions for updating to a new database, Data($V^1_n$), that incorporates the movements in the variables imposed and generated in the first step. In Table 2.4, the update instruction says that the data item BAS(i,j) should be increased by d(i,j)%. Once Data($V^1_n$) has been created, GEMPACK is ready to undertake the second step of the n-step sequence, and so on through the n steps.38
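To make the loop just described concrete, the following sketch (our own toy code, not part of GEMPACK) carries out an n-step Johansen/Euler computation for the three-commodity, two-user model of (2.52) and (2.53): at each step it evaluates the shares S(i,j) from the current database, computes the percentage changes in dtot implied by the (equally split) shocks to d(i,j), and then updates the database exactly as the Update statement in Table 2.4 prescribes.

```python
import numpy as np

def gempack_style_solve(bas0, d_total_pct, n_steps):
    """Mechanics of the Figure 2.8 loop for the toy model (2.52)-(2.53).

    bas0        : 3x2 array of initial flows BAS(i,j) (demand for good i by user j)
    d_total_pct : 3x2 array of total percentage shocks to the exogenous variables d(i,j)
    n_steps     : number of Johansen/Euler steps
    Returns the cumulative percentage changes in dtot(i) and the updated database.
    """
    bas = np.asarray(bas0, dtype=float).copy()
    # Split each total shock into n equal proportional steps.
    step_pct = 100.0 * ((1.0 + np.asarray(d_total_pct) / 100.0) ** (1.0 / n_steps) - 1.0)
    dtot_factor = np.ones(bas.shape[0])
    for _ in range(n_steps):
        shares = bas / bas.sum(axis=1, keepdims=True)    # evaluate S(i,j) from the current data
        dtot_step_pct = (shares * step_pct).sum(axis=1)  # linearized equation (2.53) for this step
        dtot_factor *= 1.0 + dtot_step_pct / 100.0       # accumulate the endogenous results
        bas *= 1.0 + step_pct / 100.0                    # Update statement: BAS(i,j) rises by d(i,j)%
    return 100.0 * (dtot_factor - 1.0), bas

bas0 = [[10.0, 30.0], [20.0, 20.0], [5.0, 45.0]]
shocks = [[10.0, 0.0], [0.0, 5.0], [20.0, 20.0]]
print(gempack_style_solve(bas0, shocks, n_steps=4)[0])
```

Because the levels equation (2.52) is linear, the percentage-change form (2.53) is exact and the results here do not depend on n; in a full MONASH model, where the coefficients A(V) vary nonlinearly with the database, it is the multistep recomputation of A(V) that drives the linearization error toward zero.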

The core set of programs outlined in Figure 2.8 has been supplemented since the

mid-1980s by an ever-expanding set of wonderfully useful GEMPACK features

contributed by Ken Pearson and his colleagues including George Codsi, Mark Horridge, Jill Harrison and Michael Jerie. For example, AnalyseGE allows GEMPACK users to see the value of any coefficient (i.e. any function of database items) or any variable in a particular solution via point-and-click applied to the TABLO representation of the model. ViewSOL allows GEMPACK users to see a series of simulation results in a variety of styles (e.g. year-on-year growth, cumulative growth from an initial year or cumulative difference between two series of results) and a variety of formats (graphs or numbers). These aids greatly enhance the user's ability to undertake MONASH-style modeling.

38 Apart from the two roles described here, TABLO also provides instructions for condensation, see Section 2.3.2.2.

2.4.6 Creating a database for a MONASH model

One of the most difficult and least teachable CGE skills is the compilation of a database

for year 0. For MONASH models, the central components are a set of input-output

accounts as illustrated in Figure 2.7 and estimates of capital stock by industry.39 What

makes compilation of these data so hard is that, for the most part, they must be gleaned

from bulletins prepared by statistical agencies for purposes far removed from CGE

modeling. The accounting conventions adopted by the agencies are often opaquely

documented and tortuous to follow.

Here, we flag some of the difficulties, drawing on our experience in preparing

a year 0 database for the USAGE model of the US. However, experience for the US

may not be directly relevant for other countries. In creating a practical, policy-relevant

CGE model for any country, there is no substitute for the time-consuming work of

looking at the data presented by the statistical agencies, thinking about what it means,

working out the underlying accounting conventions, and contacting the agencies and

asking questions.40 The practice adopted by some CGE modelers of delegating the

data preparation task to research assistants is inappropriate for models intended for

serious policy analysis.

2.4.6.1 Input-output data published by the BEA

The starting point for the USAGE database was the 498 × 498 input-output data for 1992 published by the BEA (Bureau of Economic Analysis, 1998). The first challenge in

using these data was to sort out the meaning of mysterious rows and columns designed by

the BEA to give desired row and column sums. For example, the BEA wanted the

household consumption and export columns to add to total household consumption and

total exports as shown in the National Income and Product Accounts (NIPA) for 1992.

39 Other data items required for year 0 include the balance of payments and the public sector budget. These are not discussed here but are described in Dixon and Rimmer (2002, pp. 212–219).

40 For the US, the main statistical agency supplying data relevant for CGE modeling is the BEA. Their officers,

particularly Karen Horowitz, were extremely helpful in answering the numerous questions that arose as we prepared

the database for the USAGE model.


In making estimates for their input-output tables of consumption expenditures dis-

aggregated by commodity, the BEA felt unable to distinguish between expenditures by

residents and expenditures by visitors. They recorded all consumption expenditures on

each commodity in the household consumption column and did not include expen-

ditures by visitors in the export column. On the other hand, NIPA data excludes total

visitor expenditure from the estimate of total household consumption and includes it in

total exports. To achieve their objective of NIPA compatibility, the BEA included in

their input-output tables a row and corresponding user column labeled ‘Rest-of-world

adjustment to final use’. The row contains two non-zero entries: a negative entry in the household column representing expenditures by visitors and a positive entry in the export column representing the same thing. The column consists entirely of zeros.

Initially in using the BEA data we deleted both the column and row for ‘Rest-of-world

adjustment to final use’. Eventually we dealt with the issue satisfactorily by using data

from the BEA’s Tourism Satellite Accounts (Okubo and Planting, 1998) to itemize

visitor expenditures which we reallocated from the household consumption column to

a new industry, Export tourism. We modeled the output of this industry as being entirely

exported.

After further adjustments we were able to present the BEA input-output data in the form shown in Figure 2.9 where: PV1, ..., PV6 represent direct uses of commodities (not identified by import/domestic source) valued in producer prices; MAR1, ..., MAR6 represent margins on the flows in PV1, ..., PV6; PVM represents imports; LAB, TAX0 and OVA are a breakdown of value added into compensation of employees, indirect taxes and other value added. MAKE represents commodity outputs by industries. The sum down a column of [PV1, MAR1, LAB, TAX0, OVA] matches the corresponding column sum of MAKE. For non-margin commodities the row sums of [PV1, PV2, ..., PV6, −PVM] match the corresponding row sums of MAKE. Finally, for any commodity n which is a margin, the sum across the n-row of PV1, ..., −PVM plus all the n-entries in MAR1, ..., MAR6 matches the commodity n-row sum in MAKE.

To move from Figure 2.9 to a MONASH-style input-output database of the form

shown in Figure 2.7 it was necessary to consider conventions in the BEA data con-

cerning: valuation of flows and the recording of indirect taxes; imports; public sector

demands, particularly the use of negative entries; investment; and value added. The

following subsections describe some of these conventions and our efforts to cope with

them. However, in the space available we cannot be comprehensive. Many important

details must be omitted concerning, for example, the BEA treatment of: real estate agents

and home ownership; royalties; scrap and used and second hand goods; auto rental;

secondary production; capital stocks in public sector enterprises; and foreign ownership

of US capital.41

41 Documentation on these issues is available from the authors.

Absorption matrix

                             Producers  Investors  Households  Exports  Government  Inventories  −Imports
Size                           ← I →      ← 1 →      ← 1 →      ← 1 →     ← 35 →      ← 1 →       ← 1 →
Commodity flows     (C)         PV1        PV2        PV3        PV4       PV5         PV6         −PVM
Margins           (C × N)       MAR1       MAR2       MAR3       MAR4      MAR5        MAR6          0
Labor               (1)         LAB
Taxes               (1)         TAX0
Other value added   (1)         OVA

Joint production matrix

Size        ← I →
      (C)   MAKE

C = number of commodities (= 483); I = number of industries (= 493); N = number of commodities used as margins (= 8).

Figure 2.9 Schematic representation of BEA benchmark input-output data for 1992.


2.4.6.2 Valuation and treatment of indirect taxes

All commodity flows in the BEA data, and therefore in Figure 2.9, are valued at

producer prices, i.e. basic values (prices accruing to producers) plus sales and excise

taxes. In tables at producer prices, the indirect tax row (TAX0) normally represents

taxes paid on the sales of the industry’s products together with production taxes and

taxes on the use of primary factors.42 Hence, we expected to find large entries in the

Tobacco and Petrol columns of TAX0. However, these entries were only moderate,

whereas the entries in the Wholesale and Retail columns were surprisingly large. We

found that in the BEA tables, taxes are recorded in the column of the industry that

collects the taxes. Apparently, tobacco and petrol taxes are collected largely by

wholesalers and retailers.

In these circumstances, we expected large amounts of wholesale and retail margins

to be associated with sales of tobacco and petrol, reflecting large taxes associated with

the wholesaling and retailing of these products. For sales to consumers we did, in fact,

find large wholesale and retail margins in MAR3 associated with sales of tobacco and

42 For a description of the standard types of input-output tables (basic values, producer values and purchasers values with

either direct or indirect allocation of imports), see Dixon et al. (1992, chapter 2).


petrol. For example, wholesale and retail margins on petrol sales to households were

$58.7 billion on a producer value of only $51.4 billion. We suspected that most of the

$58.7 billion was tax paid by wholesalers and retailers on their sales of petrol to

households. On petrol sales to industries the ratio of wholesale and retail margins to

producer value was only about 28%, i.e. about a quarter of the value of the ratio

applying to petrol sales to consumers. We suspected that this was the result of two

factors: (i) lower taxes on industry use of petrol than on household use, and (ii) lower

payments by industry than by households to wholesalers and retailers per gallon of

petrol, i.e. genuinely lower margins.

More generally, the practice of allocating taxes to the collecting industry is unsatis-

factory for CGE purposes. For example, without knowing the tax content of retail and

wholesale margins associated with consumer purchases of tobacco and petrol, we cannot

project effects on tax collections and retail and wholesale activity of changes in consumer

demands for these products. For USAGE we needed to reclassify indirect taxes so that

they were excluded from wholesale and retail margins and so that they were associated

(as in Figure 2.7) with the purchases which give rise to them. In doing this we were

assisted by the BEA who gave us about 10 000 items of unpublished data showing

indirect taxes by commodity and user. In most cases the BEA indicated where the item

was placed in their published producer value input-output tables. This enabled us to

work out, for example, how much of PV3(Cigarettes), MAR3(Cigarettes, Wholesale)

and MAR3(Cigarettes, Retail) in Figure 2.9 were in fact sales taxes. With this infor-

mation, we reduced these three flows to basic values and made corresponding adjust-

ments to TAX3(Cigarettes) and to the values of Cigarettes, Wholesale and Retail outputs

in the MAKE matrix.
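The mechanics of this reallocation can be sketched as follows (illustrative Python with made-up numbers and simplified data structures of our own; the actual USAGE work operated on the full BEA matrices):

```python
def strip_tax_to_basic_values(pv3, mar3, tax3, make, commodity, tax_amounts):
    """Move sales taxes out of producer-price flows and margins into a tax vector.

    pv3         : dict commodity -> household purchases at producer prices
    mar3        : dict (commodity, margin) -> margin on household purchases
    tax3        : dict commodity -> sales tax on household purchases
    make        : dict (commodity, industry) -> value of output
    tax_amounts : dict with keys 'direct', 'Wholesale', 'Retail' giving the tax
                  embedded in pv3[commodity] and in each margin flow
    """
    # Reduce the direct flow to basic values; producer output falls by the same amount.
    pv3[commodity] -= tax_amounts['direct']
    make[(commodity, commodity)] -= tax_amounts['direct']
    # Reduce the wholesale and retail margins to basic values; margin output falls too.
    for margin in ('Wholesale', 'Retail'):
        mar3[(commodity, margin)] -= tax_amounts[margin]
        make[(margin, margin)] -= tax_amounts[margin]
    # Record all of the removed tax against household purchases of the commodity.
    tax3[commodity] = tax3.get(commodity, 0.0) + sum(tax_amounts.values())

# Illustrative numbers only (not BEA data).
pv3 = {'Cigarettes': 30.0}
mar3 = {('Cigarettes', 'Wholesale'): 8.0, ('Cigarettes', 'Retail'): 12.0}
tax3 = {}
make = {('Cigarettes', 'Cigarettes'): 60.0,
        ('Wholesale', 'Wholesale'): 500.0, ('Retail', 'Retail'): 400.0}
strip_tax_to_basic_values(pv3, mar3, tax3, make, 'Cigarettes',
                          {'direct': 5.0, 'Wholesale': 6.0, 'Retail': 9.0})
print(tax3['Cigarettes'])   # 20.0
```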

2.4.6.3 Imports

The BEA input-output tables adopt indirect allocation of imports. Consequently, in PV1, ..., PV6 in Figure 2.9, competing imports are aggregated with output from domestic producers. Imports valued at producer prices (which for the BEA tables are landed-duty-paid prices) are shown as negative entries in a single import column (−PVM). The first problem we noticed with the BEA's treatment of imports is that

three of the entries in the import column were positive, seemingly implying

negative levels of imports for Wholesale trade, Water transport and Non-ferrous

metal ores.

After inquiries with the BEA we found that import duties were recorded as if they

were negative imports of Wholesale trade. This treatment has a column and row logic,

but it is opaque for CGE modeling. The column logic is that the BEA wanted the total

of the import column to reflect the cost to the US of imports (payment to foreigners).

As part of achieving this they needed to deduct duties from the total of the landed-

duty-paid values recorded in the import column. The row logic is that the BEA


recorded duties as a tax (part of TAX0 in Figure 2.9) on the sales of the Wholesale

industry, the industry they deemed to have collected the import duties. With the

producer value of the output of the Wholesale industry inflated in this way, the BEA

needed to inflate the value of sales of the Wholesale industry. Negative imports of wholesale services achieved this purpose. The BEA supplied us with an unpublished

disaggregation of import duties by commodity, enabling us to form the Tariff vector in

Figure 2.7, and to undo the BEA’s treatment of import duties by zeroing out the

Wholesale entry in the import column and making corresponding deductions from

TAX0 in the Wholesale column and from the (Wholesale, Wholesale) entry in the

MAKE matrix.
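In code, undoing the duty treatment amounts to something like the following (a stylized sketch with names and numbers of our choosing):

```python
def undo_duty_as_negative_imports(import_col, tax0, make, duty_by_commodity):
    """Replace the 'negative Wholesale imports' treatment of duties with a tariff vector."""
    total_duty = sum(duty_by_commodity.values())
    import_col['Wholesale trade'] = 0.0                           # zero out the offending entry
    tax0['Wholesale trade'] -= total_duty                         # deduct duties from Wholesale's taxes
    make[('Wholesale trade', 'Wholesale trade')] -= total_duty    # and from the Wholesale output value
    return dict(duty_by_commodity)                                # duties by commodity form the Tariff vector

# Illustrative numbers only.
import_col = {'Wholesale trade': 18.0, 'Apparel': -50.0}
tax0 = {'Wholesale trade': 120.0}
make = {('Wholesale trade', 'Wholesale trade'): 700.0}
tariff = undo_duty_as_negative_imports(import_col, tax0, make,
                                       {'Apparel': 10.0, 'Crude oil': 8.0})
print(tariff, import_col['Wholesale trade'], tax0['Wholesale trade'])
```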

For Water transport, the seemingly negative value of imports arose from the BEA's treatment of water transport services provided by US shipping in delivering imports to US ports. The cost of these services is embedded in the landed-duty-paid value of imports recorded in the import column. Treating US-provided water transport services on imports as negative imports rather than as exports was motivated by the objective of ensuring that the total for the import column reflected the cost to the US of imports. Using unpublished data provided by the BEA we were able to reclassify negative imports of Water transport as positive exports, leaving only genuine imports of Water transport (e.g. cruises by US residents on foreign ships) in the import column. Similar adjustments were necessary for Air transport, although the problem was less obvious in the BEA data because the Air transport entry in the import column had the expected sign (negative): genuine imports of Air services outweighed US-provided air services embedded in US imports.

In the case of Non-ferrous metal ores the negative value for imports was due to the

BEA’s treatment of gold. For this item, the BEA is willing to estimate net imports, but

not imports and exports separately. For 1992 the BEA estimated that net imports of gold

were negative. We reclassified these negative imports as exports.

The second problem with the BEA’s treatment of imports is that it provides no

disaggregation of imports by using industry or final demander. Such a disaggregation is

required for a MONASH-style database, see Figure 2.7. Again the BEA came to the

rescue with unpublished data enabling us to turn the import column into an import

matrix.

2.4.6.4 Public sector demands

The BEA tables give 35 columns of government expenditures: there are 35 columns in

PV5 and MAR5 in Figure 2.9. Fourteen of these columns refer to government

consumption activities and have labels such as Federal Government consumption expenditures,

national defense. The remaining 21 columns refer to government investment activities and

have labels such as Federal Government gross investment, national defense. Of the 21

investment activities, 14 are investment counterparts of the 14 consumption activities.


For each of the 35 activities the corresponding column in Figure 2.9 shows the

commodity composition of public expenditure. For example, the column for State

and local government consumption expenditures, elementary and secondary public school

systems (column 9800C1) shows expenditures totaling $224.107 billion accounted

for mainly by expenditure of $186.326 billion on General government (commodity

820000). While most of the expenditures in PV5 are positive, some are negative. In

column 9800C1 of PV5, for example, there are expenditures of: e$2.680 billion

on Eating and drinking places (commodity 740000), e$3.078 billion on Elementary

and secondary schools (commodity 770401), and e$0.002 billion on Pens, etc.

(commodity 640501). In response to our queries the BEA explained that negative

entries in the government vectors are government sales, e.g. sales of Eating and

drinking (lunch program) by State and local government schools. In their input-

output tables, the BEA follows the convention of making the row sum for

a commodity across PV1, ..., PV6, −PVM,43 equal to the value of domestic non-

government production.44 The restriction to non-government production is achieved

by the negative entries in PV5.

For CGE modeling it is inappropriate to treat only non-government enterprises as

producers and it is counter-intuitive to treat government outputs as negative demands.

Consequently we dropped the BEA convention. We converted the 14 columns for

government consumption activities and the seven investment columns with no

consumption counterpart into industries. (The treatment of the remaining 14 govern-

ment investment activities is described in Section 2.4.6.5.) In creating these 21 new

government industries we regarded the positive entries from the original BEA columns

as inputs and the negative entries as outputs. Thus, we interpreted the data in column

9800C1 of PV5 and MAR5 as showing that government industry 9800C1 produced

output of $229.867 billion. This was the sum of the positive entries in column 9800C1 of

PV5 and MAR5. The negative entries were entered as positives in the MAKE matrix and interpreted as showing that industry 9800C1 produced $2.680 billion of Eating and drinking places (commodity 740000), $3.078 billion of Elementary and secondary schools (commodity 770401) and $0.002 billion of Pens, etc. (commodity 640501). In addition, industry 9800C1 produced $224.107 billion of its principal product (= 229.867 − 2.680 − 3.078 − 0.002), which was designated as commodity 9800C1. We assumed that

the sales of industry 9800C1’s principal product were entirely to a single category of

government final demand. Sales of industry 9800C1’s other outputs were already

accounted for in Figure 2.9 in purchases by households and other demanders of

commodities 740000, 770401 and 640501.
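The conversion can be sketched as follows (illustrative Python using the column 9800C1 figures quoted above; the variable names and the filler entry are ours):

```python
def government_column_to_industry(column, make, industry):
    """Convert one BEA government expenditure column into a MONASH-style industry.

    column   : dict commodity -> expenditure (negative entries are government sales)
    make     : dict (commodity, industry); outputs of the new industry are written into it
    industry : label of the new government industry (its principal product shares the label)
    Returns the industry's input vector (the positive entries).
    """
    inputs = {c: v for c, v in column.items() if v > 0}
    total_output = sum(inputs.values())              # e.g. $229.867 billion for column 9800C1
    byproducts = 0.0
    for c, v in column.items():
        if v < 0:
            make[(c, industry)] = -v                 # negative entries become (positive) outputs
            byproducts += -v
    make[(industry, industry)] = total_output - byproducts   # principal product, e.g. $224.107 billion
    return inputs

# Column 9800C1, heavily abridged (figures from the text; remaining inputs lumped together).
col_9800C1 = {'820000 General government': 186.326,
              '740000 Eating and drinking places': -2.680,
              '770401 Elementary and secondary schools': -3.078,
              '640501 Pens, etc.': -0.002,
              'other inputs': 229.867 - 186.326}
make = {}
government_column_to_industry(col_9800C1, make, '9800C1')
print(round(make[('9800C1', '9800C1')], 3))          # 224.107
```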

43 Here we consider only non-margin commodities.

44 An exception is the output of General government where the commodity row sum is entirely government production.


The BEA’s government columns (PV5, MAR5) did not include any purchases of

labor or other primary factor inputs. These inputs were accounted for via purchases of

General government (commodity 820000), which is produced by industry 820000 entirely

from labor and other value added. This left us unable to distinguish in our modeling

between the composition of primary factor inputs in different government activities.

2.4.6.5 Investment by investing industry

The investment column in Figure 2.9 shows gross private fixed investment by

commodity. For a MONASH-style model we need to give investment an industry

dimension, see Figure 2.7. Our main data source for doing this in the case of the USAGE

model was a 163 commodity by 64 industry matrix of private investment expenditures

published by the BEA (BEA product NDN-0224).45 The commodities in this matrix

mapped easily to the input-output commodities for which the BEA’s input-output tables

showed non-zero investment. Thus, the investment matrix provided an adequate basis

for giving the input-output investment column a 64-industry dimension. Within the 64

industries we allocated investment expenditures on each commodity to component

industries at the detailed USAGE level (approximately 500 industries) using indicators

such as other value added and employment. Thus, we assumed that all component

industries within a 64-order industry had the same commodity composition of invest-

ment expenditures.
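A minimal sketch of the two assumptions just described, allocating a 64-order industry's investment column to its component USAGE industries in proportion to an indicator while preserving the commodity composition (names and numbers are illustrative only):

```python
def spread_investment(invest_by_commodity_64, detailed_industries, indicator):
    """Spread one 64-order industry's investment column to its component industries.

    invest_by_commodity_64 : dict commodity -> investment by the 64-order industry
    detailed_industries    : list of component USAGE industries
    indicator              : dict industry -> weight (e.g. other value added or employment)
    Each component industry gets the same commodity composition, scaled by its weight.
    """
    total = sum(indicator[d] for d in detailed_industries)
    return {(c, d): v * indicator[d] / total
            for c, v in invest_by_commodity_64.items()
            for d in detailed_industries}

# Illustrative only: a 64-order industry with two USAGE components.
inv64 = {'Machinery': 100.0, 'Construction': 40.0}
weights = {'IndA': 30.0, 'IndB': 10.0}
print(spread_investment(inv64, ['IndA', 'IndB'], weights))
```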

This procedure covered only private sector industries, not our 21 government

industries. The BEA government investment expenditure vectors became the

investment vectors for 14 of the government industries. We left the other seven

government industries with no investment (zero entries in their columns of BAS2,

MAR2 and TAX2 in Figure 2.7). With investment in 14 government industries, we

recognized that these industries must have capital stocks with corresponding rentals.

In handling this we replaced expenditures on General government by these 14 industries

with entries in the LABOR and CAPITAL vectors in Figure 2.7. These entries

represented the primary factor constituents of General government46 expenditures by

the 14 industries.

2.4.6.6 Value added, self-employment and capital stocks

The value-added section of an input-output table provides the main data for CGE models on resource constraints. Perhaps reflecting the interests and times of Wassily Leontief, the originator of input-output economics, published input-output tables often lack adequate detail on value added for CGE modeling. Writing in the 1930s,47 Leontief saw his input-

45 See Bonds and Aylor (1998).

46 Recall that the General government industry has only primary factor inputs.

47 See, e.g. Leontief (1936).


output system as a means of estimating the effects on employment by industry of demand

stimulation policies in an environment of high unemployment and excess capacity,

a situation in which resource constraints are unimportant. Consequently, relative to the

demand side of his model, Leontief gave little emphasis to value added. This bias in the

presentation of input-output tables has continued even in countries in which full

employment and inflationary conditions were present for much of the second half of the

twentieth century, making resource constraints of paramount interest. Apart from taxes

which we have already discussed, the BEA input-output tables divide value added for

each industry in the US into only two categories: Compensation of employees and Other

value added (LAB and OVA in Figure 2.9).

For a CGE model we require the measure of labor input in each industry to be

compensation of employees plus the value of non-payroll labor (the self-employed and

family helpers). Data from the Bureau of Labor Statistics (BLS) indicates that about

10% of all jobs are held by non-payroll workers and that for some industries this

percentage is much higher. For example, non-payroll workers hold about half the

jobs in agriculture. In developing the USAGE database, we imputed a wage (dis-

cussed below) to non-payroll workers in each industry. We then adjusted the BEA

value-added data by removing the estimated values of non-payroll labor from the

OVA row and adding them to the LAB row. BLS data also allowed us to disaggregate

the adjusted LAB row into a large number of occupations. This has been important in

studies such as that described in Section 2.2 concerned with immigration and other

labor market issues.

We interpreted the entries in the adjusted OVA vector as rental on capital.48

However, in many cases further adjustments to OVA were necessary so that our database

implied reasonable values for rates of return on capital. In working out the implied rate of

return for industry j, we divided OVAj by the value of capital stock (Kj) and deducted the

rate of depreciation (Dj).
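In symbols, the check is RORj = OVAj/Kj − Dj. A compact sketch of this calculation and of the preceding self-employment adjustment (illustrative units and names of our own):

```python
def adjust_for_self_employment(lab, ova, nonpayroll_jobs, wage_ratio, employee_wage):
    """Move an imputed wage bill for non-payroll workers from OVA to LAB."""
    imputed = nonpayroll_jobs * wage_ratio * employee_wage
    return lab + imputed, ova - imputed

def implied_rate_of_return(ova, capital_stock, depreciation_rate):
    """Rate of return implied by the database: OVA_j / K_j - D_j."""
    return ova / capital_stock - depreciation_rate

# Illustrative numbers only.
lab, ova = adjust_for_self_employment(lab=50.0, ova=40.0, nonpayroll_jobs=0.2,
                                      wage_ratio=1.0, employee_wage=45.0)
print(round(implied_rate_of_return(ova, capital_stock=250.0, depreciation_rate=0.06), 3))
```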

Estimates of capital stocks and depreciation rates are important in dynamic CGE

models but unfortunately relevant data are scarce. For the USAGE model the main data

source was the BEA’s dataset NDN-0216 (see Bureau of Economic Analysis, 1999)

which gives usable data for capital stocks, investment and depreciation rates for the

economy divided into 55 sectors.

An unattractive feature of these data is that they are classified to sectors on a company

and ownership basis. For example, the capital stock for the construction sector in NDN-

0216 refers to fixed capital owned by companies whose principal activity is construction.

For modeling purposes we want to know how much capital is used in construction

48 In the initial version of USAGE, rental on land was not modeled. Agricultural land was included in later versions

concerned with biofuels; see, e.g. Winston (2009) and Gehlhar et al. (2010).


activities. Capital used in construction activities can differ sharply from the NDN-0216

concept for several reasons:

• Non-construction companies may undertake construction (e.g. mining companies

may drill new wells, a construction activity) and therefore own capital that is used for

construction activities.

• Construction companies may operate across several non-construction activities and

therefore own capital that is used for non-construction activities.

• Construction companies may hire capital from financial institutions and

therefore use capital in construction activities that is not owned by construction

companies.

While the capital and investment data in NDN-0216 are on a company and ownership

basis, the investment data in NDN-0224 (used in our estimation of investment by

industry, see Section 2.4.6.5) are on an activity basis.49 By comparing sectoral investment

from NDN-0216 with a 55 sector aggregation of our input-output investment estimates,

we made an assessment of the extent to which the company/ownership capital data in

NDN-0216 was likely to be a satisfactory basis for estimating capital stocks by industry

defined on an activity basis. For most sectors investment on the two bases was reasonably

compatible. However, for some sectors the differences were dramatic. For example,

NDN-0224 showed investment in the construction sector of $32 billion, whereas

NDN-0216 showed $6 billion. It appears that construction in the US is carried out to

a large extent by companies that do not specialize in construction or by construction

companies using rented capital.

Despite the NDN-0216 and NDN-0224 incompatibilities, we had no choice but to

use NDN-0216 as the basis for the USAGE capital stock estimates. To estimate capital

stocks on an activity basis at the 55 sector level, we assumed that depreciation rates (Dj) and capital growth (Ij/Kj − Dj) for sector j calculated on an ownership basis from NDN-0216 also applied on an activity basis.
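One way to read this assumption (the chapter does not spell out the algebra, so the following is our reconstruction) is that, with $D_j$ and $g_j = I_j^{own}/K_j^{own} - D_j$ carried over from NDN-0216 and activity-basis investment $I_j^{act}$ taken from our input-output estimates, the activity-basis capital stock is

$$ K_j^{act} = \frac{I_j^{act}}{g_j + D_j} = K_j^{own}\cdot\frac{I_j^{act}}{I_j^{own}}, $$

i.e. the ownership-basis stock scaled by the ratio of activity- to ownership-basis investment.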

With sectoral capital stocks on an activity basis estimated in this way, we were able to

calculate implied sectoral rates of return. This calculation gave rates of return for 26 of the 55 sectors outside the range 0–20%. We considered estimates outside this range to be unrealistic and likely to cause difficulties in simulations.

Our first step in dealing with this problem was to revisit the issue of self-

employment. For our initial estimates of imputed wages of self-employed workers we

used the average wage rate of employees. Now for each of the 55 sectors we looked at

the effects on OVA and implied rates of return of varying the self-employed/

employee wage ratio between 0.5 and 4. In most sectors self-employment is

49 NDN-0224 is compiled using input-output conventions. Under input-output conventions, capital is assigned to the

industrial activity for which it is used. NDN-0216 is compiled using NIPA conventions. Under these conventions,

capital is assigned to owning industries regardless of how it is used.


unimportant and variations in the wage ratio have little effect on estimated rates of

return. However, for some sectors, we were able to make a plausible change in the

wage ratio and at the same time produce a more realistic rate-of-return estimate. For

health services, we raised the wage ratio to 2, thereby recognizing that self-employed

health professionals are likely to be paid considerably more than health employees.

This reduced the estimated rate of return in health services from 21% to a more

reasonable 14%. For construction, on the other hand, we lowered the wage ratio, to

0.5. This seems reasonable because self-employed construction contractors (which

include handymen) are likely to be paid considerably less than construction

employees of major firms. The adjustment in the construction wage ratio increased the estimated rate of return for the sector from an unlikely −14.3% to a less unreasonable −3.5%.

Having done as much as we could with the wage ratio, we still had 18 sectors with estimated rates of return outside the range 0–20%. For each of these 18 sectors we reset the value of capital stock. For sectors having initial estimated rates of return of over 20%, we raised our estimate of their capital stocks so that their rates of return fell to 20%.


For sectors having initial estimated rates of return below zero, we lowered our estimate of

their capital stocks so that their rates of return rose to zero. We then spread these final

estimates of activity-based sectoral capital stocks to constituent USAGE industries

mainly according to our estimates of other value added.

2.5 RESPONDING TO THE NEEDS OF CGE CONSUMERS: THE FOUR

CLOSURE APPROACH

From their beginnings in the 1970s, MONASH models have been produced to satisfy

the needs of consumers of CGE services in the public and private sectors. These are real

needs expressed via willingness to pay from limited budgets. This means that the

evolution of MONASH models has been largely demand driven. Section 2.5.1

describes what it is that consumers of CGE services demand. Section 2.5.2 then

describes how, with MONASH models, we have tried to satisfy these demands via

simulations conducted under four closures: historical, decomposition, forecast

and policy.

2.5.1 What consumers of CGE services want

Both public and private sector consumers of services based on MONASH models are

mainly concerned with current policy proposals. In assessing the quality of modeling

services they assign heavy weight to: up-to-date data; detailed disaggregation in the focus

area and accurate representation of relevant policy instruments; and disaggregated results.

While not directly demanded by consumers, we have found that servicing their needs is


made easier if we can produce forecasts showing likely developments in the economy

with and without the policy under consideration and decomposition analyses quanti-

fying the role of similar policy changes in the past.

2.5.1.1 Up-to-date data

Consumers are often well-informed about the latest statistics for their particular

industries of interest. If they see conflict between what they know and data in

a model, they lose confidence in all aspects of the model and its results. We suffered an

example of this in 2006 when we were working on the US International Trade

Commission’s flagship publication concerned with the economy-wide effects of

removing all major import restraints (US International Trade Commission, 2007).

At a late stage in this work a US International Trade Commission Commissioner, who

was knowledgeable about the Textile and clothing sector, drew our attention to data

showing that US Apparel output in 2004 was $34 billion, yet our model’s database was

showing $61 billion. Our $61 billion was an estimate and overlooked data in the latest

Annual Survey of Manufactures. In the context of the overall project, the problem

seemed relatively minor. Nevertheless, rectifying it could not be avoided, even though doing so involved considerable delays: it was essential for the credibility of the entire project. While satisfying consumer demands for accurate up-to-date data is a chore for

CGE modelers, there is often a genuine payoff in terms of improved real-world

relevance. In our example, failure to recognize that $34 billion was the right number

would have led to an overstatement of the welfare gain from removing import

restraints on apparel.

2.5.1.2 Detailed disaggregation in the focus area and accurate representation

of relevant policy instruments

Policy proposals often call for the application of complicated instruments at a fine level

of industry/commodity disaggregation. This sometimes causes consumers of modeling

services to demand model features that stretch producers of these services to their

limits or beyond. In the US, for example, our colleague Ashley Winston (Winston,

2009) has responded to demands by consumers interested in biofuel policy by

extending a 500 commodity model to include as separate commodities: corn; switch

grass; crop residue; cellulosic materials; organic byproducts; corn ethanol; dried

distillers grains with solubles; cellulosic ethanol; advanced ethanol; gasoline; diesel; and

Other fuels. Reflecting consumer demands, Winston also incorporated explicit

complementarity conditions specifying the operation of tariff rate quotas on imports of

Sugar and other agricultural products together with 72 types of agricultural land. To

achieve all this required highly skilled theoretical, computing and data work over

a long period of time. Being stretched to meet consumer demands can lead to

productive and creative outcomes, as in Winston’s case. However, being stretched can


cause difficulties for CGE modelers in terms of budgets, time constraints and research

priorities. CGE modelers must sometimes be firm in asking their customers to set the

problem (e.g. work out the effects of replacing x % of imported oil with domestically

produced biofuels) but not to dictate the way in which the modeler should tackle the

problem. While a natural inclination of consumers is to think that highly elaborate

modeling is called for, producers can often find shortcuts. [See, e.g. Dixon et al.

(2007) in which the biofuel issue was tackled as a technological change in the

production of motor fuels.] In these cases, the consumer is usually convinced of the

adequacy of the short cut when defensible results are produced on time and within

budget.

2.5.1.3 Disaggregated results

Consumers of modeling services want more than bottom-line aggregate welfare and

GDP effects. The real policy debate is often about reallocating large revenues across

industries and factors: there can be big winners and losers even when the bottom line

is small. Table 2.5 is an example of the kind of information that consumers find

useful. It shows results from a US International Trade Commission study on the

effects of imposing a Steel Safeguard tariff. The US International Trade Commission

estimated that the imposition of the tariff would result in a net loss in GDP of $30.4

million. This tiny net effect reflects large gains for the government in tariff revenue

and for the Iron and steel industry in capital income, offset by losses in labor income

and capital income in other industries, particularly those that use iron and steel

inputs. Ability to provide disaggregated information is a CGE strength and demands

by consumers for this information justify the retention in CGE models of consid-

erable detail.

2.5.1.4 Baseline forecasts

Many consumers of CGE analyses have little background in economics. It does not come

naturally to them to think in terms of the effect on variable i of changes in policy j

Table 2.5 CGE results: income changes from an iron and steel safeguard tariff

Income changes ($ million)
Tariff revenue 649.9
Labor income −386.0
Capital income −294.3
  Iron and steel industry 239.5
  Input suppliers to iron and steel 67.4
  Other industries (including steel users) −601.2
GDP −30.4

Source: US International Trade Commission (2003, pp. 4–5).


holding all other exogenous variables constant. A current example in Australia is the

heated debate concerning the government’s buyback of water rights from farmers along

the Murray and Darling rivers. The aim is to increase water flow in the rivers, thereby

improving the downstream environment. Implementation of the policy coincided with

a severe drought and depressed conditions in rural communities. Economists’ estimates

showing that the effects of buyback on rural economic activity are negligible are not

accepted because community leaders and their constituents are not separating the effects

of buyback from the effects of drought.

We have found that the presentation of a baseline without the policy change

together with a projection with the policy change helps consumers to separate out

the effects of the policy change from the effects of other factors. Figure 2.10 is

a diagrammatic presentation that has been used to advantage by the US International

Trade Commission to explain the effects of unilateral trade liberalization by the US

involving the dismantling of all major import restraints. The figure shows USAGE

results for percentage changes between 2005 and 2013 in outputs of textiles and

apparel in a baseline without liberalization (circles) and an alternative projection that

includes liberalization (crosses). The circles immediately tell consumers that most of

the textile/apparel sector is in decline and that none of it is likely to achieve growth

to match that of GDP.

Figure 2.10 Percentage changes in outputs of textiles and apparel, baseline projection and liberalization, 2005–2013. Source: USAGE results presented by the US International Trade Commission (2009, p. 49).

For about half the industries in the sector, the crosses and

circles are close together indicating that liberalization would have only a minor effect

on their prospects. For five industries (Narrow fabrics, Thread, Knit fabrics, Yarn

and textile finishing n.e.c., and Pleating and stitching), liberalization is projected to

have a severely negative effect on output growth: the gap between the crosses and

circles is more than 15 percentage points. In the case of Narrow fabrics, liberalization

converts relatively strong growth into contraction. For the other four industries,

liberalization converts poor prospects into substantially worse prospects. As explained

by the US International Trade Commission (2009) and more fully in Fox et al.

(2008), the five textile/apparel industries worst affected by liberalization would all

suffer from loss of export markets. These markets depend on rules of origin which

give some countries an incentive to import textile inputs from the US. With

sufficient US content in their textile/apparel exports, these countries gain access to

the US market at zero tariff. With liberalization, which reduces the tariff to zero on

US imports from all countries, the incentive to source intermediate inputs from the

US disappears.

While a baseline is valuable from a presentational point of view, its role goes deeper

than that. As discussed in Section 2.5.2.3, answers to policy questions can be improved by

generating them as deviations around a realistic baseline forecast. There are at least three

other reasons (discussed later in this Handbook50) for baseline forecasting.

(i) Consumers are interested in the baseline: they want to know where we think the

economy is going, not just how the economy will be affected by a particular policy

change or other shock to the economy.

(ii) A forecast is necessary in calculating adjustment costs associated with a policy

change.

(iii) Forecasting opens up a possibility for model validation and model improvement.

2.5.1.5 Historical decomposition analyses

Another useful device for helping consumers to separate out the effects of policy

changes from the effects of other factors is an historical decomposition. Table 2.6 is

an example. It shows results from a 1987–1994 decomposition simulation with Australia's MONASH model undertaken to support a report by the Industry

Commission (1997). The Commission was investigating the effects of reductions,

proposed for 2001, in the tariff applying to imports of Motor vehicles and parts

(MVP).51 The technique of historical decomposition is described in Section 2.5.2.2.

Here we will simply explain the results.

50 See Section 19.6 of Dixon and Rimmer in Chapter 19 of this Handbook.

51 Dixon et al. (1997) gives the details of the motor vehicle decomposition study. Another decomposition study, focused

on the determinants of growth in Australia’s international trade, is described in Dixon et al. (2000).

Table 2.6 Output of the Australian MVP industry, 1987–1994

Driving factor                                          Percentage effect
1. Shifts in foreign demands and import supply curves   −4.8
2. Changes in protection                                −5.6
3. Technical change                                     24.4
4. Growth in aggregate employment                       16.7
5. Changes in import/domestic preferences               −4.0
6. Changes in required rates of return                  −7.0
7. Other factors                                        −5.2
Total                                                   14.5

Source: Extracted from results reported in table 5.5 of Dixon and Rimmer (2002).


As shown in the last row of Table 2.6, the output of Australia’s MVP industry grew

between 1987 and 1994 by 14.5%. Our historical decomposition simulation attributes

this growth to seven factors.

The first is shifts in the positions of foreign demand curves for Australian exports and

foreign supply curves for Australian imports. Between 1987 and 1994 these shifts were

generally favorable. Holding constant all other exogenous variables (protection, tech-

nology, etc.) MONASH showed that the shifts in these curves gave Australia an

improvement in its terms of trade of nearly 20%. However, this was bad for the MVP

industry, reducing its output by 4.8%. The industry was damaged by good news for the

rest of the economy via exchange rate effects. Improvement in the terms of trade

strengthens Australia’s real exchange rate. The MVP industry faces considerable

competition from imports and real appreciation associated with terms-of-trade

improvement weakened its competitive position.

The second factor is changes in protection. Between 1987 and 1994, tariffs were reduced on almost all imports. The MVP tariff cut reduced the landed-duty-paid price of

MVP imports by 6.5%. Although the import/domestic substitution elasticity for MVP

products is high (2.55), the damage to MVP output was limited to 5.6% (row 2). The

MVP industry benefited from cuts in tariffs on its inputs and from real exchange rate

devaluation associated with tariff reductions more generally.

The third factor is technical change throughout the economy.52 In the MVP

industry, technical change favored intermediate inputs and capital relative to labor but

there was almost no net improvement in total factor productivity. The large (24.4%)

contribution to growth in MVP output attributed to technical change in Table 2.6

arises from two indirect sources. (i) Technical change is a major driver of GDP growth

52 This is an amalgam of the effects of many types of technical change: input-saving for each intermediate and primary

factor flow to each industry, margin-saving, and input-saving in the creation of units of capital.


which in turn contributes to growth in demand for MVP products. (ii) Between 1987

and 1994 there was a large increase throughout Australian industries in the use of MVP

products per unit of output. In our calculations this was treated as an MVP-using

technical change. Rather than being strictly technological, much of the increase in

MVP inputs reflected the exploitation of a loophole in Australia’s tax laws in this

period which allowed employers to give workers tax-free use of company cars for

private purposes in lieu of taxable income.

The fourth factor is growth in aggregate employment. Together with changes in

technology, employment growth was responsible for most of the growth between 1987

and 1994 in GDP.53 Thus, like row 3, row 4 of Table 2.6 shows a major contribution

(16.7%) to the output of the MVP industry.

The fifth factor refers to changes in import/domestic preferences reflected by

changes in import/domestic quantity ratios beyond those that can be explained by

changes in import/domestic price ratios. Between 1987 and 1994, rationalization

of the Australian MVP industry reduced the variety of Australian-produced cars.

Simultaneously, there was an increase in the variety of imported cars available to

Australian consumers.54 This generated a strong twist in preferences in favor of

imported cars. However the damage to MVP output shown in row 5 of Table 2.6

is only 4.0%. This entry includes not only the effects of the MVP twist, but the

effects of all other import/domestic twists. These were generally in

Figure 2.11 Connections between four modes of analysis with MONASH-style models. [Schematic only: for each mode the figure lists the data and shocks (changes in technologies, tastes, etc., 1998–2005; forecasts for technologies, tastes, etc., and for other naturally exogenous variables, 2005–2013; expert forecasts for naturally endogenous variables, 2005–2013; baseline paths for shift variables; policy shocks), the model and closure (USAGE calibrated at 1998 or at 2005, run under the historical, decomposition, forecast or policy closure) and the output of each simulation, including baseline and policy forecasts for industries, etc.]

2.5.2.1 MONASH-style historical simulations

When the US International Trade Commission study was undertaken, the latest USAGE

database was for 1998 and there were no published input-output data for a year beyond

that date. The US International Trade Commission required a baseline and policy

simulation for 2005-2013. Thus, the first job was to move the USAGE database forward

to 2005. To do this, we performed an historical simulation. As shown in panel 2 of

Figure 2.11, we started with USAGE calibrated with input-output and other data for

1998, and shocked it with observed movements between 1998 and 2005 in both

naturally exogenous and naturally endogenous variables.

As is typical in historical simulations, the shocked naturally exogenous variables included

tax rates, tariff rates, public expenditure and population. The shocked naturally endogenous

variables included standard macro variables and a large number of industry and commodity

variables. Absorbing macro variables requires endogenization of naturally exogenous

propensities. For example, to allow growth in household consumption to be set exoge-

nously at its observed value requires endogenization of the average propensity to consume.

Absorbing micro observations requires endogenization of corresponding naturally exoge-

nous taste, technology and trade variables. For example, data on growth in consumption of

tobacco products (a naturally endogenous variable) is absorbed by allowing the model to tell

us endogenously that there was a change in consumer preferences (a naturally exogenous

variable) against this product. As indicated in the output column of panel 2 in Figure 2.11,

the historical simulation undertaken for the US International Trade Commission produced

the required up-to-date data for 2005 (including an input-output table) which, in principle,

incorporated all statistical information that was available in 2005. It also produced estimates

of changes between 1998 and 2005 in tastes, technologies, required rates of return and

positions of export demand and import supply curves.

While the broad ideas underlying an historical simulation are straightforward,

coping with the details of the data makes the process time-consuming and difficult. For

example, the USAGE model in our 1998-2005 historical simulation had 500 indus-

tries/commodities but data availability made it necessary to introduce micro shocks at

a variety of different levels of disaggregation: 397-order export and import values from

the US International Trade Commission; 160-order import prices from the BLS;

100-order export prices from the BLS; 56-order private consumption quantities and

prices from the BEA; 20-order public consumption quantities and prices from the

BEA; 68-order industry outputs and value-added prices from the BEA; 60-order

occupational wage rates from the BEA; and 338-order industry employment from the

BLS. Each of these micro data concepts was defined and absorbed in the USAGE

historical simulation via special purpose equations. For example, to allow us to use BLS

data on employment by 338 industries, we included in USAGE equations of the form:

\[ l_{\mathrm{BLS}}(q) = \sum_{i=1}^{500} S(i,q)\, l_{\mathrm{USAGE}}(i), \qquad q = 1, 2, \ldots, 338, \qquad (2.54) \]

and

\[ a_{\mathrm{labUSAGE}}(i) = \sum_{q=1}^{338} M(i,q)\, f_{\mathrm{BLS}}(q), \qquad i = 1, 2, \ldots, 500, \qquad (2.55) \]

where lBLS(q) is growth in employment in BLS sector q, lUSAGE(i ) is growth in employ-


ment in USAGE industry i, S(i,q) is the share of BLS sector q’s employment accounted for

by USAGE industry i, alabUSAGE(i ) is labor-saving technical change in USAGE industry i,

fBLS(q) is a shift variable for BLS sector q, and M(i,q) is a coefficient that has value 1 if

USAGE industry i is part of BLS sector q and zero otherwise. For simplicity we assume in

this example that each USAGE industry is contained in just one BLS sector.

Equation (2.54) defines growth in employment by BLS sector q in terms of growth in

employment in component USAGE industries. In the historical simulation the naturally

endogenous variable lBLS(q) was exogenized and shocked with the value implied by the

BLS data on employment. Correspondingly, fBLS(q) was endogenized. Via (2.55), we

imposed the assumption that labor-saving technical change was the same in each USAGE

industry i contained in BLS sector q.

The obvious alternative to within-model determination of USAGE industry

employment growth via equations such as (2.54) and (2.55) is to assume, outside the

model, that BLS sector q’s employment growth applied to each USAGE industry in the

sector. However, we prefer the within-model approach because it allows the allocation of

employment growth in sector q to component USAGE industries to be informed by

other information used in the historical simulation. For example, if USAGE industries 1

and 2 are both in BLS sector q and other information in the historical simulation

indicates that output in industry 1 grew rapidly relative to that in industry 2, then it is

reasonable to suppose that employment in industry 1 grew rapidly relative to employ-

ment in industry 2. This will be the result in an historical simulation under the uniform-

within-sector technology assumption implemented in the model via (2.54) and (2.55)

but not under the uniform-within-sector employment assumption implemented outside

the model. For a more general discussion of the advantages of within-model allocation

procedures, see Dixon and Rimmer (2002, pp. 200-201).
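To make the mechanics of (2.54) and (2.55) concrete, here is a minimal sketch, in Python, of the two mappings for a toy aggregation. The 500-industry and 338-sector dimensions and the names lBLS, lUSAGE, S, M, fBLS and alabUSAGE follow the notation above; the toy dimensions and numbers below are invented for illustration and are not USAGE data.

```python
import numpy as np

# Toy dimensions: 5 USAGE industries aggregated into 3 BLS sectors
# (the real application uses 500 industries and 338 sectors).
n_usage, n_bls = 5, 3

# M[i, q] = 1 if USAGE industry i belongs to BLS sector q (each industry in one sector)
M = np.zeros((n_usage, n_bls))
M[[0, 1], 0] = 1          # industries 0,1 -> sector 0
M[[2, 3], 1] = 1          # industries 2,3 -> sector 1
M[4, 2] = 1               # industry 4    -> sector 2

# S[i, q] = share of sector q's employment accounted for by industry i (columns sum to 1)
base_employment = np.array([10.0, 30.0, 20.0, 20.0, 15.0])   # hypothetical levels
S = M * base_employment[:, None]
S = S / S.sum(axis=0, keepdims=True)

# Equation (2.54): growth in BLS-sector employment implied by USAGE industry growth
l_usage = np.array([1.0, 2.0, 0.5, 3.0, -1.0])   # % growth by USAGE industry
l_bls = S.T @ l_usage                             # % growth by BLS sector

# Equation (2.55): uniform-within-sector labor-saving technical change
f_bls = np.array([0.2, -0.4, 0.1])                # shift variable by BLS sector
alab_usage = M @ f_bls                            # same value for every industry in a sector

print("lBLS:", l_bls)
print("alabUSAGE:", alab_usage)
```

In the historical closure, lBLS(q) is exogenous (shocked with BLS data) and fBLS(q) is endogenous; the sketch simply evaluates the two linking equations for given values.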

2.5.2.2 MONASH-style decomposition simulations

Once an historical simulation is completed, then we can perform a decomposition

simulation. The decomposition simulation uses the same model and data as the historical

simulation, but a different closure. All of the exogenous variables in the decomposition

closure are naturally exogenous. As indicated by the arrows from panel 2 to panel 1 in

Figure 2.11, these naturally exogenous variables were shocked in the decomposition

simulation with the same values they had (either exogenously or endogenously) in the

historical simulation. Consequently, the decomposition simulation produces the same

results as the historical simulation.


The reason for performing decomposition simulations is that they allow us to

decompose movements in macro and industry variables into parts attributable to

different driving forces. This is done by partitioning the exogenous variables and

separately computing the effects of the shocks for each subset.55 The results obtained

in this way are a legitimate decomposition to the extent that the exogenous variables in

the decomposition simulation can be thought of as varying independently of each other.

In setting up the decomposition closure, the exogenous variables are chosen with exactly

this property in mind. Thus, in the decomposition closure we find on the exogenous list

variables representing policy instruments, technologies, tastes, required rates of return

and positions of export demand and import supply curves. All of these can be considered

as independently determined and all can be thought of as making their own contribu-

tions to movements in endogenous variables such as incomes, consumption, exports,

imports, outputs, employment and investment.
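The sketch below illustrates the decomposition idea on a stylized two-driver "model", using the halfway-point linearization described in footnote 55 below. The functional form and shock values are invented purely for illustration and have nothing to do with the USAGE database.

```python
import numpy as np

# Stylized nonlinear model: endogenous y as a function of two exogenous drivers.
def g(x1, x2):
    return x1 ** 0.6 * x2 ** 0.4          # hypothetical reduced form, not the USAGE model

x_initial = np.array([1.00, 1.00])         # initial exogenous values
x_final = np.array([1.10, 0.95])           # final (historical) exogenous values

# Evaluate derivatives at the halfway point between initial and final exogenous values,
# then attribute the total change in y to each driver separately.
xm = 0.5 * (x_initial + x_final)
eps = 1e-6
grad = np.array([
    (g(xm[0] + eps, xm[1]) - g(xm[0] - eps, xm[1])) / (2 * eps),
    (g(xm[0], xm[1] + eps) - g(xm[0], xm[1] - eps)) / (2 * eps),
])
contributions = grad * (x_final - x_initial)   # contribution of each exogenous driver
total_change = g(*x_final) - g(*x_initial)

print("contributions:", contributions, "sum:", contributions.sum())
print("actual change:", total_change)
```

The two contributions sum almost exactly to the total change, which is the sense in which the decomposition is a legitimate attribution of movements in the endogenous variable to independently varying exogenous drivers.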

2.5.2.3 MONASH-style forecast simulations

MONASH-style forecast simulations are conducted with models calibrated to data for

a recent year. These data are often generated by an historical simulation. As indicated by

the solid arrow from panel 2 to panel 3 in Figure 2.11, the 2005-2013 baseline forecast

for the US International Trade Commission project was created by USAGE calibrated

with data for 2005 created by our historical simulation for 1998-2005.

In creating shocks to generate a baseline forecast, we draw as much as possible on the

work of specialist forecasting organizations. In many countries, well-informed forecasts

are available from organizations covering different aspects of the economy. In Australia,

for example, macro forecasts are provided by Access Economics and the Australian

Treasury; forecasts for volumes and prices of agricultural and mineral exports are

provided by the Australian Bureau of Agricultural and Resource Economics; and

forecasts for tourist numbers are provided by the Bureau of Tourism Research. In the

US, macro forecasts are provided by the Congressional Budget Office, the US

Department of Agriculture and the BLS; and forecasts for an array of energy variables are

provided by the Energy Information Administration. All these forecasts are prepared by

large teams of economists with considerable expertise. In forecast simulations with

MONASH-style models we try to take advantage of their knowledge.

We do this by exogenizing variables for which there are reputable expert forecasts and

using these forecasts as shocks. To accommodate macro forecasts we endogenize macro

55 Because USAGE is a non-linear system, the effect on endogenous variable i of movements in exogenous variable

j cannot be computed unambiguously: the effects of movements in any exogenous variable depend on the values

adopted for other exogenous variables. To resolve this problem we, in effect, carry out decomposition simulations in

a linear system in which derivatives of endogenous variables with respect to exogenous variables are evaluated at

a half-way point between the initial and final values of the exogenous variables. The computations can be done

conveniently in GEMPACK, see Harrison et al. (2000).


propensities. To accommodate micro forecasts we endogenize corresponding micro shift

variables. For example, if forecasts are available from the Energy Information Admin-

istration on the sale of electricity to US industries and households, then we endogenize

electricity-using technical change in US industries and an electricity preference variable

for US households. In Figure 2.11, the input of expert forecasts occurs mainly in the area

in panel 3 marked 'Expert forecasts for naturally endogenous variables, 2005-2013'. We

may also have expert input in the area marked ‘Forecasts, other naturally exogenous

variables, 2005e2013’ for tariff rates and other naturally exogenous variables.

Because we know less about the future than the past, MONASH-style forecast

closures are more conventional than historical closures. In forecast closures most dis-

aggregated technology and preference variables are exogenous. In setting their forecast

values, we rely heavily on extrapolations from historical simulations. This is indicated by

the dotted arrow connecting panel 2 with panel 3 in Figure 2.11.

There are two outputs from a forecast simulation. The first is a baseline forecast for

a potentially huge set of disaggregated variables. The forecasts start from an up-to-date

database and incorporate technology, preference and trade trends derived from recent

history together with expert forecasts for macro variables and for whatever micro

variables are covered by specialist forecasting organizations. The second output is

forecasts for movements in naturally exogenous shift variables, such as the average

propensity to consume and electricity-saving technical change, that were endogenized to

absorb expert forecasts.
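A minimal sketch of the closure swap involved, using a single hypothetical relationship C = apc × Y between consumption and GDP: in the forecast closure the expert forecasts for C and Y are imposed and the average propensity to consume apc is backed out endogenously; in the policy closure (next subsection) apc is held at that value and C responds. The numbers are invented for illustration.

```python
# Minimal sketch of a closure swap, assuming a one-equation "model": C = apc * Y.

def forecast_closure(C_forecast: float, Y_forecast: float) -> float:
    """Absorb expert macro forecasts; return the implied propensity to consume."""
    return C_forecast / Y_forecast

def policy_closure(apc: float, Y_policy: float) -> float:
    """Hold the propensity exogenous; consumption responds to the policy value of GDP."""
    return apc * Y_policy

apc = forecast_closure(C_forecast=13.0, Y_forecast=20.0)   # hypothetical expert numbers
C_policy = policy_closure(apc, Y_policy=19.6)              # GDP lower under the policy
print(round(apc, 3), round(C_policy, 2))
```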

2.5.2.4 MONASH-style policy simulations

Policy closures are similar to decomposition closures. In policy closures naturally

endogenous variables (such as macro variables and sales of electricity) are endogenous.

They must be allowed to respond to the policy change under consideration. Corre-

spondingly, in policy closures naturally exogenous variables (such as the average

propensity to consume and electricity-saving technical change) are exogenous. If there

are no policy shocks, a policy simulation generates the same solution as the baseline

forecast. With no policy shocks all of the exogenous variables would have the same values

as in the baseline: this is indicated by the arrows from panel 3 to panel 4 in Figure 2.11.

Thus the differences between results in a policy simulation and the baseline forecast are

entirely due to policy shocks. Under the assumption that the non-policy exogenous

variables are genuinely independent of the policy, these differences can be interpreted as

the effects of the policy.

The effects of any given policy depend on the structure of the economy. For example,

the removal in the US of tariffs on imports of Textiles and apparel will have a different

effect on the economy if the domestic sector accounts for 0.7% of aggregate employment

(as it did in 1998) than if it accounts for only 0.2% of aggregate employment (as is likely

in 2015). In considering policy proposals we want to know the likely effects in the future.


These depend on the structure of the economy in the future. Thus, for policy analysis, it

is a major advantage to be able to calculate policy effects in the MONASH style as

deviations from a baseline that gives a plausible picture of the future structure of the

economy.
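As a stylized illustration of why deviations from the baseline depend on the baseline's structure, the sketch below scales a hypothetical 20% contraction of the protected sector by the two employment shares quoted above. Only the 0.7% and 0.2% shares for Textiles and apparel come from the text; the 20% sectoral contraction is invented for illustration.

```python
# Minimal sketch: the same sectoral policy shock has a smaller economy-wide effect
# when the sector is a smaller share of the baseline economy.
sector_contraction = -0.20                     # assumed fall in the protected sector

for label, sector_share in (("1998 structure", 0.007), ("2015-style baseline", 0.002)):
    baseline_employment = 100.0                # aggregate employment index
    policy_employment = baseline_employment * (1 + sector_share * sector_contraction)
    deviation = 100.0 * (policy_employment / baseline_employment - 1.0)
    print(f"{label}: aggregate employment deviation = {deviation:.2f}%")
```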

2.6 CONCLUDING REMARKS

Here are the ideas that we hope readers will take from this chapter.

First, results from detailed CGE modeling can be explained in a convincing manner

to people without CGE backgrounds. We illustrated this in Section 2.2 by explaining

USAGE results for the effects of restricting the supply of unauthorized immigrants to the

US workforce. Our explanation relied on elementary microeconomics (e.g. demand and

supply curves) and on identifying key data items (e.g. numbers of unauthorized workers

in different occupations). Explaining results in a way that is accessible to people with

backgrounds in economics, but not CGE modeling, is necessary for CGE modeling to

be influential in policy circles. Policy advisors cannot effectively carry our results forward

unless they have confidence in them. They can only have sufficient confidence to defend

our results if they understand them.

A second idea from Section 2.2 is that CGE results are often best explained in

a macro-to-micro, non-circular sequence. For example, our explanation of the USAGE

results for the effects of restricting the supply of unauthorized immigrants started with

aggregate employment and aggregate capital. Then we moved to the expenditure side of

the national accounts and eventually to occupations.

A third idea illustrated in Section 2.2 is that disaggregated CGE modeling can

produce results that are credible, new, policy-relevant and not available from aggregated

models. An example in Section 2.2 is the Occupation-mix effect. Identifying this effect

depended on having a model with considerable labor market disaggregation. Critics,

with the benefit of our explanations, sometimes suggest that our results are obvious and

did not require the application of a large-scale model. Our response is that it was the

model that alerted us, and we suspect them, to the result. We would not have thought of

the Occupation-mix effect and numerous other subtle results, let alone quantified them,

without a detailed MONASH-style model.

The main idea in Section 2.3 is that Johansen is still worth reading. His 1960 book sets

out a simple effective computing technique based on a representation of a model as

a rectangular system of linear equations in changes and percentage changes of the variables.

He then introduces a BOTE method for interpreting results and applies it in an analysis of

the matrix showing the elasticities of endogenous variables with respect to exogenous

variables. He uses this matrix in several applications including a decomposition of history

and a validation check of his model’s forecasting performance. The addendum in his 1974

book shows Johansen’s enthusiasm for having his model used, developed and scrutinized


in policy departments of the Norwegian government. By starting with Johansen’s simple

linear framework, MONASH modelers were able to make rapid progress in the 1970s

with innovations in the specification of international trade, dimensionality, closure

flexibility and the use of complex functional forms. They also eliminated Johansen’s

linearization errors. This was done without sacrificing simplicity and transparency by

introducing the Johansen/Euler multistep method.

The first key idea in Section 2.4 is that the initial solution is important. It can be

derived mainly from the input-output database. Then derivative methods can be used to

compute other solutions either for the same year (comparative statics) or for a linked

sequence of years (dynamics). The derivative method used by MONASH models is

Johansen/Euler. This can be applied routinely even for very large models using

GEMPACK software.

Another idea in Section 2.4 is that creation of a database from available statistics for

a detailed policy-relevant CGE model is a major task requiring skill and perseverance.

It is certainly too hard for a lightly supervised research assistant.

The central idea in Section 2.5 is that the primary purpose of CGE modeling is to

assist in policy formation. Policy advisors on trade, microeconomic reform, the envi-

ronment, labor markets, natural resources and taxation want models with high levels of

disaggregation, up-to-date data and accurate representations of policy instruments.

These wishes should be respected by producers of CGE services. In trying to satisfy

consumer demands, MONASH modelers have devised the four-closure approach:

historical; decomposition; forecast; and policy.

The final idea is that research in CGE modeling benefits from a team environment.

The enduring team at CoPS/IMPACT has facilitated the creation and application of

MONASH models in several ways. First, it has allowed members of the team to adopt

a degree of specialization in theory, data, computing and application. The most obvious

payoff from specialization has been the development of the GEMPACK software

alongside the models. The GEMPACK group, headed within CoPS/IMPACT by Ken

Pearson and Mark Horridge, understands and responds to modeling needs as they

emerge and anticipates future needs. However, this is not the only payoff from

specialization. Team members specialize on particular countries (e.g. Australia, US and

China) and particular issues (e.g. labor markets, energy and environment). While

building their own specialist knowledge, they absorb general techniques (e.g. the four

closure approach) from other members of the team. Transfer of knowledge within the

team is of particular advantage to new members who start with fully functioning models

and draw on many years of accumulated experience from people who know how to

adapt models for particular applications. A second benefit of an enduring team has been

the accumulation of modeling improvements. With a long collective memory, CoPS/

IMPACT is able to maintain ambitious, large-scale, continuously improving models that

frequently generate insights that are not available from small single-purpose models.


APPENDIX: THEORETICAL JUSTIFICATION FOR THE JOHANSEN/EULER

SOLUTION METHOD

For many people, the least convincing aspect of Figure 2.5 as an explanation of the

Johansen/Euler method is the assumption that the slope of rs (an ‘off-solution’ slope) is

a good approximation to the derivative of Y with respect to X on the solution line at w.

Once they have doubts about that assumption, then their confidence in the theoretical

underpinnings of the method is seriously eroded.

In this Appendix we provide some reassurance by proving a proposition concerning

the convergence of Johansen/Euler solutions as the number of steps approaches infinity.

This proposition was first proved in the context of an n-variable/m-equation CGE

model by Dixon et al. (1982, section 35). Here we set out the proof for a two-variable/

one-equation model. Nothing essential is lost from the mathematical argument by

cutting down the dimensions. Being able to treat X and Y as scalars eliminates the need

for some rather clumsy matrix notation. We also provide an explanation of the idea

mentioned in Section 2.4.1 underlying Richardson’s extrapolation: doubling the

number of steps in a Johansen/Euler computation tends to halve the linearization error.
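Before turning to the formal argument, a small numerical sketch (not from the chapter) applies the n-step Johansen/Euler procedure to the toy model Y = G(X) = exp(X), for which B(X,Y) = Y can be evaluated from (X,Y) without knowing G itself. The ratio of the n-step error to the 2n-step error approaches 2 as n grows, and the standard two-estimate Richardson extrapolation sharply reduces the remaining error.

```python
import math

def B(X: float, Y: float) -> float:
    # For the toy model Y = G(X) = exp(X), dY/dX = Y, so the derivative function
    # can be evaluated from (X, Y) without knowing G.
    return Y

def euler(X0: float, Y0: float, dX: float, n: int) -> float:
    """n-step Johansen/Euler estimate of G(X0 + dX), starting from an exact solution."""
    X, Y = X0, Y0
    for _ in range(n):
        Y += (dX / n) * B(X, Y)
        X += dX / n
    return Y

X0, Y0, dX = 0.0, 1.0, 1.0                 # start on the solution: 1.0 = exp(0.0)
truth = math.exp(X0 + dX)

for n in (1, 2, 4, 8):
    err_n = truth - euler(X0, Y0, dX, n)
    err_2n = truth - euler(X0, Y0, dX, 2 * n)
    richardson = 2 * euler(X0, Y0, dX, 2 * n) - euler(X0, Y0, dX, n)
    print(n, round(err_n / err_2n, 2), round(truth - richardson, 4))
```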

A.1 Convergence proposition for the Johansen/Euler method

Proposition. Assume that we are dealing with a two-variable/one-equation model in which the endogenous variable, Y, is a differentiable function of the exogenous variable, X:

\[ Y = G(X). \qquad (A.1) \]

While we do not know the form of G, assume that we do know how to evaluate a function B(X,Y) which has the property that:

\[ B(X,Y) = G_X(X) \quad \text{if } Y = G(X), \qquad (A.2) \]

where G_X(X) is the Jacobian matrix of G evaluated at X. In the scalar case we are considering, G_X(X) is simply ∂Y/∂X where Y is given by (A.1).

Assuming that the derivative of G_X with respect to X and the derivative of B(X,Y) with respect to Y are bounded over the relevant domain of (X,Y), then the Johansen/Euler method will converge, i.e. given a starting point $(\bar{X}, \bar{Y})$ satisfying:

\[ \bar{Y} = G(\bar{X}), \qquad (A.3) \]

then:

\[ \lim_{n \to \infty} Y^{n}_{n} = G(\bar{X} + \Delta X), \qquad (A.4) \]

where $\Delta X$ is any given change in X and $Y^{n}_{n}$ is the n-step estimate of $G(\bar{X} + \Delta X)$.


Proof. We denote the values of X and Y reached in the rth step of an n-step computation by $X^{r}_{n}$ and $Y^{r}_{n}$. Then:

\[ X^{r}_{n} = X^{0}_{n} + \left(\frac{r}{n}\right) \Delta X, \qquad r = 1, \ldots, n, \qquad (A.5) \]

and:

\[ Y^{r}_{n} = Y^{r-1}_{n} + \frac{1}{n}\, B\!\left(X^{r-1}_{n}, Y^{r-1}_{n}\right) \Delta X, \qquad r = 1, \ldots, n, \qquad (A.6) \]

where:

\[ X^{0}_{n} = \bar{X} \quad \text{and} \quad Y^{0}_{n} = \bar{Y}. \qquad (A.7) \]

We denote the true value of Y corresponding to $X^{r}_{n}$ as $\bar{Y}^{r}_{n}$, i.e.:

\[ \bar{Y}^{r}_{n} = G(X^{r}_{n}). \qquad (A.8) \]

Note that (A.8), (A.7) and (A.3) imply that:

\[ \bar{Y}^{0}_{n} = Y^{0}_{n} = \bar{Y}. \qquad (A.9) \]

By applying Taylor's theorem we can relate the true value for Y in the first step of an n-step procedure to the starting value of Y by:

\[ \bar{Y}^{1}_{n} = \bar{Y}^{0}_{n} + \frac{1}{n}\, B\!\left(X^{0}_{n}, \bar{Y}^{0}_{n}\right) \Delta X + \frac{1}{2 n^{2}}\, G^{0,n}_{XX}. \qquad (A.10) \]

In (A.10), $G^{0,n}_{XX}$ is $(\Delta X)^{2}$ multiplied by the derivative of $G_X$ evaluated between $X^{0}_{n}$ and $X^{1}_{n}$. More generally, we use the notation:

\[ G^{r,n}_{XX} = G_{XX}(X^{r,n})\, (\Delta X)^{2}, \qquad r = 0, 1, \ldots, n-1, \qquad (A.11) \]

where $G_{XX}$ is the derivative of $G_X$, that is the second derivative of G; and $X^{r,n}$ is a particular point between $X^{r}_{n}$ and $X^{r+1}_{n}$. Combining (A.6) and (A.10), and using (A.7) allows us to relate the true value of Y in the first step of the n-step procedure to the estimated value by:

\[ \bar{Y}^{1}_{n} = Y^{1}_{n} + \frac{1}{2 n^{2}}\, G^{0,n}_{XX}. \qquad (A.12) \]

Again applying Taylor's theorem, we relate the true value of Y in the second step in an n-step procedure to the true value in the first step by:

\[ \bar{Y}^{2}_{n} = \bar{Y}^{1}_{n} + \frac{1}{n}\, B\!\left(X^{1}_{n}, \bar{Y}^{1}_{n}\right) \Delta X + \frac{1}{2 n^{2}}\, G^{1,n}_{XX} \ldots \]

REFERENCES

Dixon, P.B., Rimmer, M.T., 2002. Dynamic General Equilibrium Modelling for Forecasting and Policy: A Practical Guide and Documentation of MONASH. Contributions to Economic Analysis 256. North-Holland, Amsterdam.
Dixon, P.B., Rimmer, M.T., 2004. The US economy from 1992 to 1998: results from a detailed CGE model. Econ. Rec. 80 (Special Issue), S13-S23.
Dixon, P.B., Rimmer, M.T., 2009. Restriction or Legalization? Measuring the Economic Benefits of Immigration Reform. Trade Policy Analysis Paper 40. Cato Institute, Washington, DC. Available from: http://www.freetrade.org/node/949.
Dixon, P.B., Rimmer, M.T., 2010a. US imports of low-skilled labor: restrict or liberalize? In: Gilbert, John (Ed.), New Developments in Computable General Equilibrium Analysis of Trade Policy. Frontiers of Economics and Globalization 7. Emerald Publishing, Lewes, pp. 103-151.
Dixon, P.B., Rimmer, M.T., 2010b. Optimal tariffs: should Australia cut automotive tariffs unilaterally? Econ. Rec. 86, 143-161.
Dixon, P.B., Rimmer, M.T., 2010c. Johansen's contribution to CGE modelling: originator and guiding light for 50 years. CoPS/IMPACT Working Paper G-203. Available from: http://www.monash.edu.au/policy/ftp/workpapr/g-203.pdf.
Dixon, P.B., Rimmer, M.T., 2011. You can't have a CGE recession without excess capacity. Econ. Model. 28, 602-613.
Dixon, P.B., Parmenter, B.R., Ryland, G.J., Sutton, J., 1977. ORANI, A General Equilibrium Model of the Australian Economy: Current Specification and Illustrations of Use for Policy Analysis, Volume 2 of the First Progress Report of the IMPACT Project. Australian Government Publishing Service, Canberra.
Dixon, P.B., Parmenter, B.R., Sutton, J., 1978. Spatial disaggregation of ORANI results: a preliminary analysis of the impact of protection at the state level. Econ. Anal. Pol. 8, 35-86.
Dixon, P.B., Powell, A.A., Parmenter, B.R., 1979. Structural Adaptation in an Ailing Macroeconomy. Melbourne University Press, Melbourne.
Dixon, P.B., Parmenter, B.R., Sutton, J., Vincent, D.P., 1982. ORANI: A Multisectoral Model of the Australian Economy. Contributions to Economic Analysis 142. North-Holland, Amsterdam.
Dixon, P.B., Parmenter, B.R., Powell, A.A., Wilcoxen, P.J., 1992. Notes and Problems in Applied General Equilibrium Economics. North-Holland, Amsterdam.
Dixon, P.B., Malakellis, M., Rimmer, M.T., 1997. The Australian Automotive Industry from 1986-87 to 2009-10: Analysis Using the MONASH Model. A Report to the Industry Commission. Centre of Policy Studies and IMPACT Project, Monash University.
Dixon, P.B., Menon, J., Rimmer, M.T., 2000. Changes in technology and preferences: a general equilibrium explanation of rapid growth in trade. Aust. Econ. Paper 39, 33-55.
Dixon, P.B., Pearson, K.R., Picton, M.R., Rimmer, M.T., 2005. Rational expectations for large CGE models: a practical algorithm and a policy application. Econ. Model. 22, 1001-1019.
Dixon, P.B., Osborne, S., Rimmer, M.T., 2007. The economy-wide effects in the United States of replacing crude petroleum with biomass. Energ. Environ. 18, 709-722.

Dixon, P.B., Lee, B., Muehlenbeck, T., Rimmer, M.T., Rose, A.Z., Verikios, G., 2010. Effects on the US of an H1N1 epidemic: analysis with a quarterly CGE model. J. Homeland Security Emerg. Manage. 7 (1), article 75. Available from: http://www.bepress.com/jhsem/vol7/iss1/75.
Dixon, P.B., Johnson, M., Rimmer, M.T., 2011. Economy-wide effects of reducing illegal immigrants in US employment. Contemp. Econ. Pol. 29, 14-30.
Fallon, J., 1982. Disaggregation of the ORANI employment projections to statistical divisions - Theory. ORANI Research Memorandum, Archive OA0160, IMPACT Project, Melbourne.
Fan, Z., 2008. Armington meets Melitz: introducing firm heterogeneity in a global CGE model of trade. J. Econ. Integrat. 23, 575-604.
Fox, A., Powers, W., Winston, A., 2008. Textile and apparel barriers and rules of origin: what's left to gain after the agreement on textiles and clothing? J. Econ. Integrat. 23, 656-684.
Gehlhar, M., Somwaru, A., Dixon, P.B., Rimmer, M.T., Winston, R.A., 2010. Economy-wide implications from US bioenergy expansion. Am. Econ. Rev.: Papers Proceedings 100, 172-177.
Glezer, L., 1982. Tariff Politics: Australian Policy-making 1960-1980. Melbourne University Press.
Griswold, D.T., 2002. Willing Workers: Fixing the Problem of Illegal Mexican Migration to the United States. Trade Policy Analysis Paper 19. Cato Institute, Washington, DC. Available from: http://www.cato.org/pubs/tpa/tpa-019.pdf.
Hanoch, G., 1971. CRESH production functions. Econometrica 39, 695-712.
Harrison, W.J., Horridge, J.M., Pearson, K.R., 2000. Decomposing simulation results with respect to exogenous shocks. Comput. Econ. 15, 227-249.
Honkatukia, J., 2009. VATTAGE - A Dynamic, Applied General Equilibrium Model of the Finnish Economy. Research Report 150. Government Institute for Economic Research, Helsinki.
Hudson, E.A., Jorgenson, D.W., 1974. US energy policy and economic growth, 1975-2000. Bell J. Econ. Manage. Sci. 5, 461-514.
Industry Commission, 1997. The Automotive Industry, Volumes I and II. Industry Commission Report 58. Australian Government Publishing Service, Canberra.
Johansen, L., 1960. A Multisectoral Study of Economic Growth. Contributions to Economic Analysis 21. North-Holland, Amsterdam.
Johansen, L., 1974. A Multisectoral Study of Economic Growth, Second enlarged ed. Contributions to Economic Analysis 21. North-Holland, Amsterdam.
Keller, W.J., 1980. Tax Incidence: A General Equilibrium Approach. Contributions to Economic Analysis 134. North-Holland, Amsterdam.
Leontief, W.W., 1936. Quantitative input-output relations in the economic system of the United States. Rev. Econ. Stat. 18, 105-125.
Malakellis, M., 1998. Should tariff reductions be announced? An intertemporal computable general equilibrium analysis. Econ. Rec. 74, 121-138.
Malakellis, M., 2000. Integrated Macro-Micro-Modelling under Rational Expectations: with an Application to Tariff Reform in Australia. Physica-Verlag, Heidelberg.
Melitz, M.J., 2003. The impact of trade on intra-industry reallocations and aggregate industry productivity. Econometrica 71, 1695-1725.
Okubo, S., Planting, M., 1998. US travel and tourism satellite accounts for 1992. Sur. Curr. Bus., July, 8-22.
Pearson, K.R., 1988. Automating the computation of solutions of large economic models. Econ. Model. 5, 385-395.
Powell, A.A., Lawson, T., 1990. A decade of applied general equilibrium modelling for policy work. In: Bergman, L., Jorgenson, D., Zalai, E. (Eds.), General Equilibrium Modeling and Economic Policy Analysis. Basil Blackwell, Boston, MA, pp. 241-290.
Powell, A.A., Snape, R.H., 1993. The contribution of applied general equilibrium analysis to policy reform in Australia. J. Pol. Model. 15, 393-414.
Rattso, J., 1982. Different macro closures of the original Johansen model and their impact on policy evaluation. J. Pol. Model. 4, 85-97.

Rector, R., Kim, C., 2007. The Fiscal Cost of Low-Skill Immigrants to the US Taxpayer. Heritage Special Report SR-14. The Heritage Foundation, Washington, DC. Available from: http://www.heritage.org/Research/Immigration/sr14.cfm.
Robinson, S., 2006. Macro model and multipliers: Leontief, Stone, Keynes, and CGE models. In: de Janvry, A., Kanbur, R. (Eds.), Poverty, Inequality and Development: Essays in Honor of Erik Thorbecke. Springer, New York, pp. 205-232.
Scarf, H.E., 1967. On the computation of equilibrium prices. In: Fellner, W. (Ed.), Ten economic studies in the tradition of Irving Fisher. Wiley, New York, pp. 207-230.
Scarf, H.E., 1973. The Computation of Economic Equilibria. Yale University Press, New Haven, CT.
Spurkland, S., 1970. MSG - a tool in long-term planning. Report prepared for the First Seminar on Mathematical Methods and Computer Techniques. UN Economic Commission for Europe, Varna.
Staelin, C.P., 1976. A general equilibrium model of tariffs in a non-competitive economy. J. Int. Econ. 6, 39-63.
Strayhorn, C.K., 2006. Undocumented Immigrants in Texas: A Financial Analysis of the Impact to the State Budget and Economy. Special Report. Office of the Comptroller of Texas, Austin, TX.
Sutton, J., 1976. The Solution Method for the ORANI Module. Preliminary Working Paper OP-03. IMPACT Project, Melbourne.
Sutton, J., 1977. Computing Manual for the ORANI Model. IMPACT Computing Document C1-01. IMPACT Project, Melbourne.
Taylor, L., Black, S.L., 1974. Practical general equilibrium estimation of resource pulls under trade liberalization. J. Int. Econ. 4, 35-58.
Taylor, L., Bacha, E.L., Cardoso, E.A., Lysy, F.J., 1980. Models of Growth and Distribution for Brazil. Oxford University Press for the World Bank, New York.
US International Trade Commission, 2003. Steel: Monitoring Developments in the Domestic Industry. Investigation TA-204-9,3. Steel-Consuming Industries: Competitive Conditions with Respect to Steel Safeguard Measures. Investigation 332-452, Publication 3632. US ITC, Washington, DC. Available from: http://www.usitc.gov/publications/safeguards/3632/pub3632_vol3_all.pdf.
US International Trade Commission, 2004. The Economic Effects of Significant US Import Restraints: Fourth Update. Investigation 332-325, Publication 3701. US ITC, Washington, DC.
US International Trade Commission, 2007. The Economic Effects of Significant US Import Restraints: Fifth Update. Investigation 332-325, Publication 3906. US ITC, Washington, DC.
US International Trade Commission, 2009. The Economic Effects of Significant US Import Restraints: Sixth Update. Investigation 332-325, Publication 4904. US ITC, Washington, DC.
Wilcoxen, P.J., 1985. Numerical Methods for Investment Models with Foresight. IMPACT Project Preliminary Working Paper IP-23. IMPACT Project, Monash University, Clayton.
Wilcoxen, P.J., 1987. Investment with Foresight in General Equilibrium. IMPACT Project Preliminary Working Paper IP-35. IMPACT Project, Monash University, Clayton.
Winston, R.A., 2009. Enhancing Agriculture and Energy Sector Analysis in CGE Modelling: An Overview of Modifications to the USAGE Model. CoPS/IMPACT Working Paper G-180. Available from: http://www.monash.edu.au/policy/elecpapr/g-180.htm.



Section 2.4, particularly

Section 2.4.6 on input-output accounting, would be difficult to read passively straight

through. Input-output conventions are important, but tortuous and slippery. We hope

that by scanning this subsection readers will get an idea of what is involved. They may

then find it useful to return to the material if they are constructing or assessing a detailed

policy-relevant model.

2.2 TELLING A CGE STORY

One of our graduate students recently asked us how to cope with skeptics: who will not

believe anything from a model unless all the parameters are estimated by time-series

econometrics; who harp on about the input-output data being outdated; who highlight

what they see as the absurdity of competitive assumptions and constant returns to scale;

who insist that general equilibrium means that all markets clear, thus ruling out real-

world phenomena such as involuntary unemployment; and who claim that, like a chain,

CGE models are only as strong as their weakest part.

Our advice is to get the results up front. Do not start by telling the audience about

general features of the model. The idea is to tell a story that is so interesting and engaging

that general-purpose gripes about CGE modeling are at least temporarily forgotten in

favor of genuine enquiry about the application under discussion. The assumptions that

really matter for the particular application can then be drawn out. The aim is to lead the

audience to an understanding of what specific things they need to believe about behavior

and data if they are to accept the results and policy conclusions being presented.

Here, we will try to follow our own advice. We will tell a CGE story without

explicitly describing the model. We will use BOTE calculations to identify assumptions

and data items that matter for the results. We will rely on explanatory devices such as

demand and supply diagrams that are accessible to all economists, not just those with

a CGE background. Only when we have given an illustration of what a MONASH-style


CGE application can deliver will we turn our attention in the rest of the chapter to the

technicalities of MONASH modeling.

Our illustrative CGE story concerns the effects on the US economy of tighter border

security to restrict unauthorized immigration. This is a good CGE topic for two reasons.

First, it is a contentious policy issue with many people in the political debate demanding

greater government efforts to improve border security and reduce unauthorized

immigration. Popular opinion is that unauthorized immigrants do economic damage to

legal residents of the US by generating a need for increased public expenditures and by

taking low-skilled jobs. However, these opinions may not be the whole story. This brings

us to the second reason that tighter border security is a good CGE topic. To get beyond

popular opinions we need to look at interactions between different parts of the economy

(i.e. we need to adopt a general equilibrium approach). We need to quantify the effects of

varying the supply of unskilled foreign workers: on wage rates and employment

opportunities of US workers in different occupations; on output, employment and

international competitiveness in different industries; on public sector budgets; and on

macroeconomic variables including the welfare of legal US residents.

2.2.1 Tighter border security

In 2005 there were about 7.3 million unauthorized foreign workers holding jobs in the

US, about 5% out of total employment of 147 million. On business-as-usual assumptions

unauthorized employment was expected to grow to about 12.4 million in 2019, about

7.2% out of total employment of 173 million. As unauthorized immigrants have low-paid

jobs, their share in the total wage bill is less than their employment share. In the business-

as-usual forecast, their wage bill share goes from 2.69% in 2005 to 3.64% in 2019.

In our CGE policy simulation we analyze the effects of a reduction in unauthorized

employment caused by a restriction in supply. Specifically, we imagine that starting in

2006 the US implements a successful policy of tighter border security that has a long-run

effect (2019) of reducing unauthorized employment by 28.6%: from 12.4 million in the

baseline (business-as-usual situation) to 8.8 million in the policy situation. We have in

mind policies that increase the costs and dangers of unauthorized entry to the US. These

policies are represented in our model as a preference shift by foreign households against

US employment. However, the exact nature and size of the policy is not important. Our

focus is on the long-run effects of a substantial reduction in supply of unauthorized

employment, however caused.

2.2.1.1 Macroeconomic effects

In the long run, we would not expect a policy implemented in 2006 to have a significant

effect on the employment rate of legal workers. Thus, we would expect the policy to

reduce total employment in 2019 by about 3.6 million (= 12.4 - 8.8). That is, we would expect a reduction in total employment in the US of about 2.1% (= 100 × 3.6/173).

[Figure 2.1 plots percentage deviations from the baseline over 2005-2019 for four series: GDP, employment measured as effective labor input, employment measured in jobs, and aggregate capital. The jobs deviation settles at about -2.1% and the effective-labor-input deviation at about -1.6%.]

Figure 2.1 GDP, employment and capital with tighter border security (percentage deviations from baseline).


This is confirmed by the ‘Employment jobs’ line in Figure 2.1 that shows results from

our CGE model for the effects on employment of the tighter border security policy as

percentage deviations from the business-as-usual forecast.

Higher up the page in Figure 2.1 we can see the line showing deviations in

‘Employment, effective labor input’. In this measure, aggregate employment falls if the

economy gains a job in a low-wage occupation but loses a job in a high-wage occu-

pation. Under the assumption that wage rates reflect the marginal product of workers,

deviations in effective labor input show the effects of a policy on the productive power of

the labor input. With unauthorized immigrants concentrated mainly in low-wage

occupations it is not surprising that Figure 2.1 shows smaller percentage reductions in

effective labor input than in number of jobs. Whereas our tighter-border policy reduces

jobs in the long run by 2.1%, it reduces effective labor input by only 1.6%.

In the long run, we would not expect a tighter border-security policy to have an

identifiable effect on the US capital/labor ratio, that is the amount of buildings and machines

used to support each unit of effective labor input. Underlying this expectation is the

assumption that rental per unit of capital equals the value of the marginal product of capital:

\[ \frac{Q}{P_g} = A \cdot F_K\!\left(\frac{K}{L}\right), \qquad (2.1) \]


where Q is the rental rate for a unit of capital, Pg is the price of a unit of output (the price deflator for GDP), A represents technology, K and L are aggregate inputs of capital and effective labor, and A·F_K is a monotonically decreasing function derived by differentiating an aggregate constant-returns-to-scale production function [A·F(K,L)] with

respect to K.

On the assumption that the cost of making a unit of capital (the asset price) moves in

line with the price of a unit of output, the left-hand side of (2.1) is closely related to the

rate of return on capital. In the long run we would not expect changes in border policy

to affect rates of return.5 These are determined by interest rates and perceptions of risk,

neither of which is closely linked to border policy. Thus we would expect little long-run

effect on the left-hand side of (2.1). On the right-hand side we would not expect any

noticeable impact of border policy on US technology, represented by A. We can

conclude that K/L will not be affected noticeably by changes in border policy. This is

confirmed in Figure 2.1 where the long-run reduction of 1.6% in effective labor input is

approximately matched by the long-run percentage reduction in capital. Figure 2.1

shows that the long-run deviation in GDP is also about -1.6%. This is consistent with both K and L having long-run deviations of about -1.6%, together with our assumption

that border policy does not affect technology (A).
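A BOTE sketch of this reasoning, assuming for illustration a Cobb-Douglas form for the aggregate production function (the chapter itself does not impose Cobb-Douglas): with Q/Pg and A unchanged, A·F_K(K/L) is unchanged, so K/L is unchanged and GDP moves in proportion to effective labor input.

```python
# BOTE sketch around equation (2.1), assuming an illustrative Cobb-Douglas form
# Y = A * K**alpha * L**(1 - alpha). The -1.6% labor deviation comes from the text;
# the other numbers are arbitrary.
alpha = 0.3                      # hypothetical capital share
A = 1.0

def gdp(K: float, L: float) -> float:
    return A * K**alpha * L**(1 - alpha)

L0, K0 = 100.0, 300.0            # baseline effective labor and capital (arbitrary units)
L1 = L0 * (1 - 0.016)            # 1.6% long-run cut in effective labor input
K1 = K0 * (L1 / L0)              # unchanged K/L ratio

print(round(100 * (gdp(K1, L1) / gdp(K0, L0) - 1), 2))   # about -1.6% GDP deviation
```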

Figure 2.2 shows the deviations in the expenditure components of GDP. In the long

run these are all negative and range around that for GDP. The long-run deviations in

private consumption, public consumption and imports are less negative than that for

GDP while those for exports and investment are more negative than that for GDP.

We can understand these results as a sequence. The first element is that tighter border

security improves the US terms of trade. This is a benefit from having a 1.6% smaller

economy that demands fewer imports (thereby lowering their price) and supplies fewer

exports (thereby raising their price). The second element is that terms-of-trade

improvement allows private and public consumption to rise (as shown in Figure 2.2)

relative to GDP. This is because an improvement in the terms of trade increases the prices

of the goods and services produced by the US relative to prices of the goods and services

consumed by the US, allowing the US to sustain a higher level of consumption for any

given level of output (GDP).

The third element in understanding the long-run results in Figure 2.2 concerns

investment. In the very long run, the change in immigration policy that we are

considering will have little identifiable effect on the growth rate of labor input.

Consequently it will have little effect on the growth rate of capital and therefore on the

5 As will be explained shortly, there is a long-run increase in the terms of trade. Despite this, the cost of making a unit of

capital (which includes import prices but not export prices) does not fall relative to Pg (which includes export prices

but not import prices). This is mainly because in the long run the construction industry suffers an increase in its labor

costs relative to other industries reflecting its intensive use of unauthorized labor.

Figure 2.2 Expenditure aggregates with tighter border security (percentage deviations from

baseline).


ratio of investment to GDP.6 As can be seen in Figure 2.1, the deviation line for capital is

still falling slightly in 2019 indicating that the capital stock has not fully adjusted to the

1.6% reduction in labor input.7 With capital still adjusting downwards in 2019, the

investment to GDP ratio is still below its eventual long-run level. However, in terms of

contributions to GDP, the positive gaps in 2019 between the consumption and GDP

deviations outweigh the negative gap between the investment and GDP deviation (the

ac and bc gaps in Figure 2.2 weighted by private and public consumption easily

outweigh the dc gap weighted by investment). This explains why the long-run deviation

in imports in Figure 2.2 is less negative than that in exports.

The fourth element concerns the long-run relationships between the GDP deviation

and the trade deviations. Although we have now understood why Figure 2.2 shows

a long-run increase in imports relative to exports, we have not explained why these two

6 The rate of growth of capital is given by k = (I/Y) × (Y/K) - d, where I and Y are investment and GDP, and d is the

rate of depreciation (treated as a constant). Under our assumptions, the change in immigration policy does not affect k

or Y/K in the long run. Therefore, it does not affect I/Y.

7 It is apparent that the K/L ratio is heading to a slightly lower long-run value than in the baseline. This is because the

cost of making a unit of capital increases slightly in the long run relative to the price deflator for GDP, reflecting the

intensive use of unauthorized labor in the construction industry.


deviations straddle the GDP deviation. If a reduction in the supply of unauthorized

immigrants were particularly harmful (cost increasing) to export-oriented industries

then it would be possible for both the import and export deviation lines at 2019 to be

below that of GDP. Similarly, if a reduction in the supply of unauthorized immigrants

were particularly harmful to import-competing industries then it would be possible for

both the import and export deviation lines at 2019 to be above that of GDP. As shown in

Dixon et al. (2011), the long-run effects of reduced unauthorized immigration on the

industrial composition of activity are quite small, with no pronounced bias against or in

favor of either export-oriented or import-competing industries. The lack of bias is

a consequence of unauthorized employment being spread over many industries and

representing only a small share of costs in almost all industries. Thus, the required gap

between imports and exports is achieved with the import deviation line above that of

GDP in the long run and the export deviation line below that of GDP.

Two further points about the effects of reduced unauthorized immigration on trade

are worth noting. (i) The increase in imports relative to exports is in quantity terms.

With an improvement in the terms of trade there is no deterioration in the balance of

trade. (ii) The increase in imports relative to exports is facilitated, as shown in Figure 2.2,

by a long-run increase in the real exchange rate (an increase in the nominal exchange rate

relative to the foreign/US price ratio).

The final aspect of the long-run results in Figure 2.2 that we will explain is the

relative movements in public and private consumption. Public consumption falls relative

to private consumption because consumption of public goods by unauthorized immi-

grants is high relative to their consumption of private goods. In the baseline forecast for

2019, unauthorized immigrants account for 3.7% of public consumption, but only 2.4%

of private consumption.

The short-run results in Figure 2.2 are dominated by the need for the economy to

adjust to a lower capital stock than it had in the baseline forecast. In the short run, the

policy causes a relatively sharp reduction in investment. With US investment at the

margin being financed mainly by foreigners, a reduction in investment weakens demand

for the US dollar. Consequently, a reduction in investment weakens the US exchange

rate. This temporarily stimulates exports and inhibits imports. As the downward

adjustment in the capital stock is completed, investment recovers, causing the real

exchange rate to rise, exports to fall and imports to rise.

2.2.1.2 Effects on the occupational composition of legal employment

The starting point for the explanation of the long-run macroeconomic results was the

finding that a 28.7% cut in unauthorized employment reduces effective labor input

by 1.6%. But why 1.6? Recall that in the baseline forecast the share of the US wage

bill accounted for by unauthorized employment in 2019 is 3.64%. This suggests that

a 28.7% cut in unauthorized employment should reduce effective labor input by only


1.0% (= 3.64 × 0.287). The explanation of the discrepancy (1.0 versus 1.6) hinges on

changes in the occupational mix of legal US employment.
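A BOTE sketch of the discrepancy is given below: the 3.64% wage-bill share and the 28.7% cut come from the text, while the occupation-mix numbers are invented to show how a shift of legal employment toward lower-wage occupations adds roughly the missing 0.6 percentage points to the loss of effective labor input.

```python
# BOTE sketch: direct effect plus a stylized occupation-mix effect on effective labor input.
unauthorized_wage_bill_share = 0.0364
cut_in_unauthorized_employment = 0.287

# Direct effect: removing 28.7% of a group that earns 3.64% of the wage bill.
direct_effect = unauthorized_wage_bill_share * cut_in_unauthorized_employment   # ~1.0%

# Occupation-mix effect (hypothetical magnitudes): a share of legal labor moves from
# jobs paying w_high to jobs paying w_low, lowering wage-weighted (effective) labor
# input even though the number of legal jobs is roughly unchanged.
share_reallocated, w_high, w_low, average_wage = 0.015, 1.2, 0.8, 1.0
occupation_mix_effect = share_reallocated * (w_high - w_low) / average_wage      # ~0.6%

total_effective_labor_loss = direct_effect + occupation_mix_effect
print(round(100 * direct_effect, 2), round(100 * total_effective_labor_loss, 2))
```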

Table 2.1 gives occupational data for 2005 and deviation results for 2019. Column (1)

shows the share of unauthorized immigrants in the wage bill of each US occupation. The

occupational classification was chosen to give maximum detail on employment of

unauthorized immigrants, with about 90% of their employment being spread across the

first 49 occupations. The last occupation, ‘Services, other’, accounts for about 60% of US

employment, but only 10% of unauthorized employment. Columns (2) and (3) show the

long-run effects of the supply-restriction policy on employment and real wage rates of

legal US workers by occupation. In broad terms, the employment results in Table 2.1

show a long-run transfer of legal workers from ‘Services, other’, an amalgam of

predominantly high-skilled, high-wage jobs, to the occupations that currently employ

large numbers of unauthorized immigrants. The correlation coefficient between the

deviations in jobs for legal workers (column 2) and unauthorized shares (column 1) is

close to one. In occupations vacated by unauthorized immigrants, legal workers not only

gain jobs, but also benefit from significant wage increases. The correlation coefficient

between the employment and wage results in columns (2) and (3) is also close to one.

The long-run change in occupational mix implied by column (2) does not mean that

existing US workers change their occupations. For each occupation, restricting the

supply of unauthorized workers presents legal workers with opportunities to replace

unauthorized workers. On the other hand, the economy is smaller, generating a negative

effect on employment opportunities for legal workers. The positive replacement effect

dominates in the low-wage occupations that currently employ large numbers of unau-

thorized immigrants. The negative effect of a smaller economy dominates in high-wage

occupations that currently employ few unauthorized immigrants. Thus, there is an

increase in vacancies in low-wage occupations relative to high-wage occupations,

allowing low-wage occupations to absorb an increased proportion of new entrants to the

workforce and unemployed workers. Another way of understanding the change in the

occupation mix of legal workers is to recognize that the labor market involves job

shortages. At any time, not everyone looking for a job in a given occupation can find

a job in that occupation. So people settle for second best. The college graduate who

wants to be an economist settles for a job as an administrative officer; the high-school

graduate who wants to be a police officer settles for a job in private security; the

unemployed person who wants to be a chef settles for a job as a short-order cook; and so

on. Through this shuffling process, a reduction in supply of unauthorized immigrants

reduces the skill composition of employment of legal workers. It lowers the contribution

of these workers to effective labor input, explaining the 1.0 versus the 1.6 discrepancy.

We refer to this as a negative Occupation-mix effect. The idea of an Occupation-mix

effect will be familiar to students of the history of US immigration. As described by

Griswold (2002, p. 13), the inflow of low-skilled immigrants early in the twentieth century induced native-born US residents to complete their education and enhance their skills. In our terms, that was a positive Occupation-mix effect.

Table 2.1 Occupational data for 2005 and deviation results for 2019
Columns: (1) unauthorized immigrants, % of labor costs in 2005; (2) legal jobs, % deviation in 2019; (3) legal real wage, % deviation in 2019.

Occupation (1) (2) (3)
1. Cooks 15.6 4.20 1.89
2. Grounds maintenance 24.8 7.45 3.19
3. House keeping and cleaning 22.0 6.56 2.82
4. Janitor and building cleaner 10.4 2.31 1.19
5. Miscellaneous agriculture worker 34.3 10.70 4.55
6. Construction laborer 23.9 7.10 3.16
7. Transport packer 24.6 7.37 3.19
8. Carpenter 15.1 3.90 1.92
9. Transport laborer 7.2 1.09 0.71
10. Cashier 4.7 0.31 0.43
11. Food serving 6.4 0.88 0.62
12. Transport driver 4.0 -0.09 0.25
13. Waiter 5.7 0.64 0.53
14. Production, miscellaneous assistant 8.3 1.07 0.72
15. Food preparation worker 13.3 3.42 1.61
16. Painter 24.9 7.46 3.31
17. Dishwasher 22.7 6.83 2.86
18. Construction, helper 24.8 7.42 3.30
19. Retail sales 2.4 -0.50 0.11
20. Production, helper 20.4 5.54 2.52
21. Packing machine operator 23.6 6.88 3.01
22. Butchers 21.0 6.20 2.74
23. Stock clerk 4.6 0.26 0.40
24. Child care 5.2 0.56 0.51
25. Miscellaneous food preparation 14.5 3.80 1.74
26. Dry wall installer 35.8 11.43 4.87
27. Nursing 2.8 -0.01 0.29
28. Industrial truck operator 8.5 1.47 0.87
29. Transport, cleaners 15.8 4.24 1.93
30. Automotive repairs 6.3 0.88 0.64
31. Sewing machine operator 18.8 4.95 2.39
32. Concrete mason 22.6 6.61 3.00
33. Roofers 28.2 8.64 3.78
34. Plumbers 7.1 1.07 0.80
35. Personal care 5.7 0.91 0.66
36. Shipping clerk 5.2 0.35 0.43
37. Brick mason 22.5 6.56 2.97
38. Carpet installer 21.4 6.21 2.82
39. Laundry 15.5 4.22 1.93
40. Other production workers 9.1 1.57 0.91
41. Maintenance and repairs 2.2 -0.71 -0.01
42. Repair, helper 16.8 4.56 2.09
43. Welder 6.2 0.31 0.41
44. Supervisor, food preparation 3.4 -0.20 0.22
45. Construction supervisors 3.4 -0.27 0.27
46. Farm/food/clean, other 6.1 0.61 0.53
47. Construction, other 5.5 0.38 0.49
48. Production, other 4.6 -0.11 0.21
49. Transport, other 3.2 -0.40 0.13
50. Services, other 0.4 -1.27 -0.13
Total 2.6 -0.16 -0.46

Before leaving Table 2.1, it is worth commenting on the deviations shown in the

‘Total’ row. The reduction (0.16%) in employment of legal workers is caused by the shift

in the composition of their employment towards low-skilled occupations. These occu-

pations have relatively high equilibrium rates of unemployment, which we have assumed

are unaffected by immigration policy. It is sometimes asserted that cuts in employment of

unauthorized immigrants would reduce unemployment rates of low-skilled legal

workers. While our modeling suggests that there would be increases in the number of

jobs for legal workers in low-skilled occupations, this does not mean that unemployment

rates in these occupations would fall. With cuts in unauthorized immigration, low-skilled

legal workers might find themselves under increased pressure from higher-skilled workers

who can no longer find vacancies in higher-skilled occupations.

The overall reduction of 0.46% in the wage rate of legal workers seems surprising at

first glance. Column (3) of Table 2.1 shows an increase or a negligible decrease in wage

rates for legal workers in all occupations except ‘Services, other’ in which the wage rate is

reduced by 0.13%. However, the average hourly wage rate of legal workers is reduced by

the shift in the occupational composition of their employment to low-wage jobs.

2.2.1.3 Effects on the welfare of legal households

The headline number that policy makers are often looking for from a CGE study is the

effect on aggregate welfare. In the present study, we take this as referring to long-run

(2019) private and public consumption by legal US residents. We find that a reduction of
28.7% in unauthorized employment caused by tighter border security would generate
a sustained annual welfare loss for legal residents of 0.52% (about $80 billion in 2009
dollars).

Table 2.2 Long-run (2019) percentage effects of tighter border security on consumption of legal residents

F1  Direct effect               -0.29
F2  Occupation-mix effect       -0.31
F3  Legal-employment effect     -0.11
F4  Capital effect              -0.24
F5  Public-expenditure effect    0.17
F6  Terms-of-trade effect        0.23
    BOTE totals                 -0.55
    CGE result                  -0.52

This result can be explained in terms of the six factors indicated in Table 2.2 by
F1–F6. As detailed in Dixon et al. (2011), each of these factors can be quantified by
a BOTE calculation. The total of the BOTE calculations (-0.55) is an accurate estimate
of the CGE result (-0.52). This gives us confidence that we have adequately identified
the data and mechanisms in our model that explain the result. Here, we briefly describe
the factors and their quantification.
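As a quick arithmetic check, the 'BOTE totals' row of Table 2.2 is simply the sum of the six factor contributions:

-0.29 - 0.31 - 0.11 - 0.24 + 0.17 + 0.23 = -0.55.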

F1: Direct effect

With a reduction in supply, the wage rate of unauthorized workers will rise. This is

illustrated in Figure 2.3 in which DD is the demand curve for unauthorized labor in

2019, SS is the supply curve in the baseline forecast and S′S′ is the supply curve with the

tighter security policy in place. The numbers shown in the diagram are taken from our

simulation: the policy reduces unauthorized employment in 2019 from 12.4 to 8.8

million and increases annual wage rates for unauthorized workers by 9.2%, from $52,660

to $57,500 (2019 dollars).

If workers are paid according to the value of their marginal product then the loss in

output (represented by GDP) from reducing employment is the change in the area under

the demand curve, area (abcd) in Figure 2.3. The change in the total cost to employers

of unauthorized immigrants is area (gaef), the increase in costs associated with the in-

crease in the unauthorized immigrant wage rate, minus area (ebcd), the reduction in

costs associated with employment of fewer unauthorized immigrants. Ignoring taxes, the

analysis so far suggests that the Direct effect of cutting illegal employment (the change in

GDP less the change in the costs of employing illegal workers) is a loss represented by

area (abfg). As indicated in Figure 2.3, this area is worth $51.5 billion.

Figure 2.3 Demand for and supply of illegal immigrants in 2019.

Taxes complicate the situation in two ways. (i) The change in the area under the demand
curve is an underestimate of the loss in GDP because indirect taxes mean that wage rates are less than

the value of the marginal product of workers. (ii) Unauthorized immigrants pay income

taxes. Consequently, area (ebcd) overstates the saving to the US economy associated

with paying wages to 28.6% fewer unauthorized immigrants and area (gaef) overstates

the cost to the US economy of paying higher wage rates to unauthorized immigrants

who remain in employment. After adjusting for taxes, the final estimate that Dixon et al.

(2011) obtained for the Direct effect was a loss of $77.3 billion. This causes a 0.29%

reduction in consumption by legal households (row 1, Table 2.2).
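For readers who want to see where a number like $51.5 billion comes from, the pre-tax part of this calculation can be reproduced approximately from the figures quoted above by treating the demand curve as roughly linear between the two equilibria. This is only an illustrative back-of-the-envelope sketch, not the model's own computation:

```python
# Back-of-the-envelope check of area (abfg) in Figure 2.3, assuming the demand
# curve is approximately linear between the two equilibria reported in the text.
L0, w0 = 12.4e6, 52660.0   # baseline 2019 employment and annual wage of unauthorized workers
L1, w1 = 8.8e6, 57500.0    # employment and wage with tighter border security

# Loss in GDP: area under the demand curve between L1 and L0 (trapezoid), area (abcd).
gdp_loss = 0.5 * (w0 + w1) * (L0 - L1)

# Change in employers' outlays on unauthorized labor: area (gaef) minus area (ebcd).
cost_change = w1 * L1 - w0 * L0            # negative: the total wage bill falls

# Direct effect before tax adjustments: change in GDP less change in costs, area (abfg).
direct_effect = -gdp_loss - cost_change
print(f"GDP loss ${gdp_loss/1e9:.1f}bn, cost change ${cost_change/1e9:.1f}bn, "
      f"pre-tax direct effect ${direct_effect/1e9:.1f}bn")
```

The sketch gives a pre-tax loss of roughly $51 billion, close to the $51.5 billion shown in Figure 2.3; the tax adjustments described above then take the Direct effect to the $77.3 billion reported in Dixon et al. (2011).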

F2: Occupation-mix effect

Restricting the supply of unauthorized immigrants changes the occupational mix

of employment of legal workers, reducing their average hourly wage rate by 0.46% (Table

2.1). In the baseline forecast for 2019, wages are 66% of the total income of legal resi-

dents.8 A 0.46% reduction in average wage rates translates into a 0.31% (= 0.46 × 0.66)

reduction in the ability of legal residents to consume private and public goods.

8 This is GNP (i.e. GDP less net income flowing to foreign investors) minus post-tax income accruing to unauthorized

immigrants.


F3: Legal-employment effect

As explained in Section 2.2.1.2, we assume that equilibrium rates of unemployment are

higher for low-skilled occupations than for high-skilled occupations, leading in our

simulation to a reduction in legal employment of 0.16% (Table 2.1). With wages being

66% of the total income of legal residents, this reduces their income and consumption by

0.11%.

F4: Capital effect

If a change in immigration policy had no effect on savings by legal residents (including the

government) up to 2019, then it would have no effect on US ownership of capital in 2019.

In this case, if a change in immigration policy led to a reduction in the stock of capital in the

US, then it would lead to a corresponding reduction in the stock of foreign-owned capital,

with little effect on capital income accruing to legal households. Nevertheless, they would

suffer a welfare loss because the US treasury would lose taxes paid by foreign owners of US

capital. Via the Direct effect and other negative effects in Table 2.2, the tighter border-

security policy reduces saving by legal households throughout the simulation period. Thus,

in the policy run, legal households own less US capital in 2019 than they had in the baseline

forecast and lose the full-income of this lost capital. As explained in Section 2.2.1.1, the

policy causes a long-run reduction in US capital stock of about 1.6%. This is split

approximately evenly between reductions in foreign-owned and US-owned capital.

Taking account of the tax effects of the loss of foreign-owned capital and the full-income

effects of the loss of US-owned capital, we find that the 1.6% reduction in capital

contributes -0.24% to sustained long-run welfare of legal households.

F5: Public-expenditure effect

With a reduction in the number of unauthorized immigrants working in the US, the

public sector would cut its expenditures, particularly on elementary education, emer-

gency healthcare and correctional services. This would allow either cuts in taxes or

increased provision of public services to legal households. This effect contributes 0.17%

to sustained long-run welfare of legal households.9 It should be noted that F5 encom-

passes only public sector expenditures and does not take account of taxes paid by illegal

immigrants. These taxes are accounted for in F1 where we compute the Direct

contribution of illegal immigrants to GDP net of their post-tax wages.

9 The underlying data on public expenditures on unauthorized immigrants were taken from Rector and Kim (2007) and Strayhorn (2006).

F6: Terms-of-trade effect

In our simulation, the cut in unauthorized immigration reduces the prices of the goods

and services that are consumed in the US relative to the prices of goods and services that

are produced in the US. In 2019, the policy-induced reduction in the price index for

private and public consumption relative to that for GDP is 0.23%. This increases the

consuming power of legal households by 0.23%. The main reason for the relative decline

in the price of consumption is the improvement in the terms of trade, discussed in

Section 2.2.1.1. A terms-of-trade improvement generally reduces the price index for

consumption (which includes imports, but not exports) relative to that for GDP (which

includes exports, but not imports).

2.2.2 Engaging the audience: hoped-for reactions

When we present a CGE story to an audience we are hoping for certain reactions. We

want the audience to engage on the topic, not on prejudices and general views about

CGE modeling. Whether we get the desired reaction depends on how well we have told

the story in terms of mechanisms that are accessible to people without a CGE

background.

In the case of our tighter border-security story, we hope that the audience is

enthusiastic enough to want to know about extensions. For example, what would

happen if the reduction in unauthorized employment were achieved by restricting

demand through more rigorous prosecution of employers rather than restricting supply?

More radically, what would happen if we replaced unauthorized immigrants by low-

skilled guest workers? If we have told our story sufficiently well then audiences or readers

of our papers can go a long way towards answering these questions without relying on

our model.

The effects of demand restriction can be visualized in terms of Figure 2.3 as an inward

movement in DD rather than SS. If the demand policy were scaled to achieve the same

reduction in unauthorized employment as in the supply policy, then we would expect

similar results for F2–F6, which depend primarily on the reduction in the number of

unauthorized workers. At first glance we might expect the Direct effect (F1) for

demand-side restriction to be more favorable than that for supply-side restriction: with

demand-side restriction, wage rates for unauthorized workers fall rather than rise.

However, when we think of the gap between the


supply-restricted wage (da in

Figure 2.3) and the demand-restricted wage (dh) as being absorbed by prosecution-

avoiding activities, then we can conclude that even the Direct effect will be similar under

the demand- and supply-side policies. Thus, on the basis of F1–F6 we would expect

little difference in the effects of equally scaled demand- and supply-side policies. This is

confirmed in Dixon et al. (2011).

The guest-worker question arose from comments by Dan Griswold of the Cato

Institute (a free-trade think tank in Washington, DC). After seeing a presentation on the

negative welfare result in Table 2.2, he asked whether welfare would increase if there

were more low-skilled immigrants employed in the US rather than fewer. This led to

a consideration of a program under which US businesses could obtain permits to legally


employ low-skilled immigrants. In terms of Figure 2.3, we can envisage such a program

as shifting the supply curve outward: at any given wage, more low-skilled immigrants

would be willing to enter the US under a guest-worker program than under the present

situation in which they incur considerable costs from illegal entry. The outward shift in

the supply curve would reverse the signs of the six effects identified in Table 2.2. As

shown in Dixon and Rimmer (2009), a permit charge paid by employers could be used

to control the number of low-skilled immigrants. It would also be a useful source of

revenue, effectively transferring to the US treasury what are currently the costs to

immigrants of illegal entry.

A second hoped-for reaction from audiences is well-directed questions about

robustness and sensitivity. By this we mean questions about data items and parameter

values that can be identified from our story as being important for our results. In

presentations of our work on unauthorized immigration, we welcome questions con-

cerning: our baseline forecasts for unauthorized employment (7.3 million in 2005

growing to 12.4 million in 2019); our data on the occupational and industrial compo-

sition of unauthorized and legal employment; our assumptions about the level of public

expenditures associated with unauthorized employment; our choice of values for the

elasticities of demand and supply for unauthorized workers; our adoption of a one-

country framework that ignores effects outside the US; and other key ingredients of our

story. When questions are asked about the existence, uniqueness and stability of

competitive equilibria, then we suspect that our presentation has not effectively led the

audience to understand how they should assess what we are saying.

A third satisfying reaction is curiosity about results for other dimensions. In our story

here we have concentrated on macro and occupational results but for an interested

audience we could also report industry results: our simulations were conducted at a 38-

industry level. Greater industrial disaggregation can be introduced for organizations with

a particular industry focus. For example, in a current study on unauthorized workers in

agriculture, the US Department of Agriculture has expanded the industrial dimension to

70, emphasizing agricultural activities.

For readers of this chapter, we hope that our story has done two things. (i) We hope

that it has demonstrated how CGE results can be explained in non-circular, macro-to-

micro fashion. As in this story, we have found that in explaining most CGE results the

best starting point is the inputs to the aggregate production function. In this case we

started with what a cut in unauthorized immigration would do to aggregate employ-

ment. We then moved to the effect on aggregate capital. From there we went to the

expenditure side of GDP. Eventually we worked down to employment by occupation.

(ii) We hope that our story has aroused curiosity about some methodological issues. We

have mentioned the business-as-usual forecast or baseline. How is this created in

a MONASH model? We have reported policy-induced deviations. How are policy

simulations conducted and what is their relationship to baseline simulations? We have


worked with considerable labor market disaggregation: 50 occupations by two birthplaces
by two legal statuses by 38 industries. How do we cope with large dimensions? We
have shown year-by-year results. How do we handle dynamic mechanisms such as capital
accumulation that provide connections between years?10 These are among the issues
discussed in the rest of this chapter.

10 For dynamic mechanisms that are specific to our immigration work, such as vacancy-induced occupational shuffling, we refer readers to Dixon and Rimmer (2010a).

2.3 FROM JOHANSEN TO ORANI

Modern CGE modeling has not evolved from a single starting point and there are still

quite distinct schools of CGE modeling. While Johansen (1960) made the first CGE

model, there were several other largely independent starting points including the

contributions of Scarf (1967, 1973), Jorgenson and associates (e.g. Hudson and

Jorgenson, 1974) and the World Bank group (e.g. Adelman and Robinson, 1978; Taylor

et al., 1980). Each of these later contributors adopted a style quite distinct from that of

Johansen: different computational techniques, estimation methods, approaches to result

analysis and issue focuses. In the case of the MONASH models, the ancestor is Johansen.

His style was simple, effective and adaptable. It facilitated the inclusion of policy-relevant

detail in CGE models and opened a path to result interpretation via BOTE explanations.

In this section we describe Johansen’s model and the extensions that were made in

creating the ORANI model.

2.3.1 Johansen model

Johansen presented his 22-commodity/20-industry model of Norway as a system of 86

linear equations connecting 86 endogenous and 46 exogenous variables:

A_X \cdot x + A_Y \cdot y = 0,   (2.2)

where x and y are 46 × 1 and 86 × 1 vectors of exogenous and endogenous variables, and
A_X and A_Y are matrices of coefficients of dimensions 86 × 46 and 86 × 86, built mainly

from Norwegian input-output data for 1950 supplemented by estimates of income

elasticities for consumer demand.

The 46 exogenous variables are: aggregate employment (1); aggregate capital (1);

population (1); Hicks-neutral primary factor technical change in each industry (20);

exogenous demand for each commodity (22); and the price of non-competing imports

(1).11 The 86 endogenous variables are: labor input and capital input by industry
(2 × 20); output and prices by commodity (2 × 22); the average rate of return on capital
(1); and aggregate private consumption (1).

11 We are sometimes asked about the numéraire in Johansen’s model. It is the nominal wage rate, which is exogenously fixed on zero growth and then omitted from the model.

All of the variables in Johansen’s system are growth rates or percentage growth rates.
Johansen derived the equations in (2.2) from

underlying levels forms. For example, in (2.2) he represented the Cobb–Douglas relationship:

Z_j = N_j^{\gamma_j} \cdot K_j^{\beta_j} \cdot e^{\varepsilon_j t},   (2.3)

between the output in industry j (Z_j) and labor and capital inputs (N_j and K_j) as:

z_j - \gamma_j n_j - \beta_j k_j - e_j = 0,   (2.4)

where z_j, n_j and k_j are percentage growth rates in Z_j, N_j and K_j, and e_j is the rate of
technical progress.
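The step from (2.3) to (2.4) is ordinary differentiation in logarithms:

\ln Z_j = \gamma_j \ln N_j + \beta_j \ln K_j + \varepsilon_j t
\;\;\Longrightarrow\;\;
\frac{\dot{Z}_j}{Z_j} = \gamma_j \frac{\dot{N}_j}{N_j} + \beta_j \frac{\dot{K}_j}{K_j} + \varepsilon_j ,

which, with growth rates written as z_j, n_j, k_j and the technical-progress rate as e_j, is equation (2.4).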

From (2.2) Johansen solved his model, i.e. expressed growth rates in endogenous

variables in terms of exogenous variables, as:

y = b \cdot x,   (2.5)

where b is the 86 × 46 matrix given by:

b = -A_Y^{-1} \cdot A_X.   (2.6)
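To make the mechanics of (2.5) and (2.6) concrete, here is a minimal numerical sketch of the Johansen procedure using a tiny made-up system (the dimensions and coefficients are illustrative, not Johansen’s):

```python
import numpy as np

# Toy Johansen-style system A_X x + A_Y y = 0 with 3 endogenous and 2 exogenous variables.
rng = np.random.default_rng(0)
A_Y = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)   # square and comfortably invertible
A_X = rng.normal(size=(3, 2))

# Elasticity matrix b = -A_Y^{-1} A_X, equation (2.6).
b = -np.linalg.solve(A_Y, A_X)

# Equation (2.5): percentage changes in endogenous variables caused by, say,
# 1 per cent growth in the first exogenous variable and none in the second.
x = np.array([1.0, 0.0])
y = b @ x

print(b)   # each entry is the elasticity of one endogenous variable w.r.t. one exogenous variable
print(y)   # one-step (Johansen) approximation to the endogenous growth rates
```

The multistep Johansen/Euler extension described in Section 2.4.1 repeats calculations of this kind over a sequence of small shocks, which is how ORANI and later MONASH models remove the linearization error of the one-step solution.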

Johansen was fascinated by the b matrix in (2.6) and devoted much of his book to

discussing it. The b matrix shows the sensitivity (usually an elasticity) of every endog-

enous variable with respect to every exogenous variable. Johansen regarded the 3956

entries in the


b matrix as his basic set of results and he looked at every one of them. His

management strategy for coping with 3956 results was to use a simple one-sector BOTE

model as a guide. The BOTE model told him what to look for and what to expect in his

full-scale model.

For example, the BOTE model suggested that the entries in the b matrix referring to

the elasticities of industry outputs with respect to movements in aggregate capital and

labor should lie in the (0,1) interval. This follows from a macro version of equation

(2.4).12 With one exception, this expectation was fulfilled: the b matrix shows a negative

entry for the elasticity of equipment output with respect to an increase in aggregate

employment. Following up and explaining exceptions is an important part of the BOTE

methodology. In this way we can locate result-explaining mechanisms in the full model

that are not present in the BOTE model. In other words, we can figure out what the full

model knows that the BOTE model does not know. In the case of the equipment-

output/employment elasticity, the explanation of the negative result is that an increase in

aggregate employment changes the composition of the economy’s capital stock in favor

of structures and against equipment. This morphing of the capital stock reduces

maintenance demand for equipment, thereby reducing the output of the equipment

industry.13

12 That is, z = γ·n + β·k + e, where γ and β are parameters with values between 0 and 1.

One of the most interesting parts of b is the submatrix relating movements in industry

outputs to movements in exogenous demands. At the time when Johansen was writing,

Leontief’s input-output model, with its emphasis on input-output multipliers, was the
dominant tool for quantitative multisectoral analysis. In Leontief’s model, if an extra unit of
output from industry j is required by final users, then production in j must increase by at
least one unit and production in other industries will increase to provide intermediate
inputs to production in j. Further rounds of this process can be visualized, with suppliers to j
requiring extra intermediate inputs. Thus, in Leontief’s picture of the economy, developed
in the depressed 1930s,14 industries are in a complementary relationship, with good news
for any one industry spilling over to every other industry. Johansen, working in the
booming 1950s, challenged this orthodoxy. His industry-output/exogenous-demand
submatrix implies diagonal effects that in most cases are less than one and off-diagonal
effects that are predominantly negative. Rather than emphasizing complementary

relationships between industries, Johansen emphasized competitive relationships. In

Johansen’s model, expansion of output in one industry drags primary factors away from

other industries. Only where there are particularly strong input-output links did Johansen

find that stimulation of one industry (e.g. food) benefits another industry (e.g. agriculture).
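The contrast can be made concrete with a small numerical sketch (coefficients invented for illustration). In a pure Leontief system, the response of gross outputs to extra final demand is the Leontief inverse (I - A)^{-1}, every entry of which is non-negative, so good news for one industry is never bad news for another; Johansen’s b matrix, computed from a general equilibrium system with fixed aggregate primary factors, has no such restriction:

```python
import numpy as np

# Leontief multipliers for a toy 3-industry economy (illustrative coefficients only).
A = np.array([[0.10, 0.20, 0.05],   # a[i, j]: units of commodity i used per unit
              [0.15, 0.05, 0.10],   # of output of industry j
              [0.05, 0.10, 0.20]])

multipliers = np.linalg.inv(np.eye(3) - A)   # (I - A)^(-1)
print(multipliers)
# Every entry is >= 0 and every diagonal entry is >= 1: extra final demand for any
# industry raises (or leaves unchanged) the output of every industry. In Johansen's
# b matrix, by contrast, many of the corresponding off-diagonal responses are negative
# because expansion of one industry drags labor and capital away from the others.
```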

Having examined the b matrix, Johansen used it to decompose movements in

industry outputs, prices and primary factor inputs into parts attributable to observed

changes in exogenous variables. In making his calculations he shocked all 46 exogenous

variables with movements representing average annual growth rates for the period

around 1950. In discussing the results of his decomposition exercise, Johansen paid

particular attention to agricultural employment. This was a contentious issue among

economists in 1960. On the one hand, diminishing returns to scale suggested that relative

agricultural employment would grow with population and perhaps even with income

despite low expenditure elasticities for agricultural products. On the other hand, agri-

culture was experiencing rapid technical progress, suggesting that employment in

agriculture might not only fall as a share of total employment but might even fall in

absolute terms. Johansen was able to separate and quantify these conflicting forces. He

found that growth in capital, employment and population around 1950 caused relatively

strong increases in agricultural employment, consistent with diminishing returns to scale

interacting with increased consumption of food. However, the dominant effect on

agricultural employment was technical change. This was strongly negative, leaving

agriculture with net declining employment.

13 For a fuller explanation of Johansen’s negative equipment-output/employment elasticity, see Dixon and Rimmer

(2010c, p. 7).

14 Leontief (1936).


In another exercise, Johansen performed a validation test. He compared observed

average annual growth rates around 1950 in endogenous variables such as industry

outputs, employment and capital inputs with the total effects calculated in his decom-

position exercise. He used this comparison to pinpoint weaknesses in his model and to

organize a discussion of real-world developments. For agriculture, he found that the

computed growth rate in output closely matched reality, but that the computed growth

rate in employment was too high while that in capital was too low. This led to

a discussion of reasons, not accounted for in the model, for exodus of rural workers to the

cities. For forestry, the computed growth rates for output and primary factor inputs were

too high. He thought that the income elasticity of demand for forestry products may have

been set too high and also that there may have been a taste change, not included in his

model, against the use of forestry products as fuel. By going through his results in this

way, Johansen developed an agenda for model improvement.

2.3.2 Building on the Johansen legacy: creating the ORANI model

MONASH modelers owe an enormous intellectual debt to Johansen.15 (i) They pre-

sented their models as linear systems in changes and percentage changes, Johansen’s

equation (2.2). This simplified the interpretation of their models and facilitated teaching.

(ii) They adopted Johansen’s solution equations (2.5) and (2.6). This enabled them to

solve models in the 1970s and 1980s with much larger dimensions than was possible with

other styles of CGE modeling. Johansen’s use of BOTE models was also taken up and

extended in the MONASH paradigm. (iii) MONASH modelers followed Johansen in

using the b matrix to understand properties of their models, to explain periods of history

via decomposition analysis (see Section 2.5) and to perform validation exercises (see

Dixon and Rimmer in Chapter 19 of this Handbook).

While Johansen’s techniques were simple and effective, adopting them came at a cost.

His solution equations (2.5) and (2.6) give only an approximation to effects implied by

the underlying non-linear model: (2.5) and (2.6) produce solutions that are subject to

linearization errors.16 At the time that Johansen was developing his model, incurring this

cost was a computational necessity. Later CGE pioneers were keen to avoid linearization

errors and perhaps this caused them to overlook the strengths of Johansen’s approach to

CGE modeling. In any case, sustained development of Johansen’s style of CGE modeling

was not undertaken until work commenced in Australia on the ORANI model, a decade


and a half after the publication of Johansen’s book.17 As described in Section 2.4.1,

linearization errors were avoided in ORANI and later MONASH models without

sacrificing Johansen’s simplicity and interpretability. This was done by adopting

a multistep extension of Johansen’s solution method.

15 In their overview of the IMPACT Project’s first 10 years of operation, Powell and Lawson (1990, pp. 265–266) identify the decision to use Johansen strategies as a key ingredient in the Project’s success.

16 Johansen recognized this problem and reported (Johansen, 1974) experience with a method implemented in the late 1960s by Spurkland (1970) for calculating accurate solutions. Spurkland’s method used (2.5) and (2.6) to obtain an approximate solution, and then moved to an accurate solution via a general non-linear equation method such as Newton’s algorithm. Spurkland’s method sacrificed Johansen’s simplicity and was rather awkward to implement. It was not widely adopted.

The multistep solution method was not the only innovation in the creation of

ORANI. As outlined in Sections 2.3.2.1–2.3.2.4, other innovations were: the treatment

of imports and competing domestic products as imperfect substitutes; the incorporation

of policy-relevant detail requiring large dimensionality; allowance for closure flexibility;

and inclusion of complex functional forms.

2.3.2.1 Imperfect substitution between imports and domestic products: the

Armington specification

Johansen paid little attention to trade, simply setting net exports exogenously for all

commodities except non-competing imports, which were handled as Leontief inputs to

production.18 For a trade-focused model, a more elaborate approach is required. The

builders of ORANI turned to Armington (1969, 1970) who had built a 15-country

trade model in which each country produced just one good, but consumed all 15 goods,

treating the goods from different countries as imperfect substitutes. In ORANI, imports

were disaggregated by commodity rather than country of origin. For each using agent

(industries, capital creators, households and government), imports of a commodity were

specified as constant elasticity of substitution (CES) substitutes for the corresponding

domestic commodity.
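As a stylized illustration of how the Armington mechanism works (the share parameters and elasticity values below are invented, not ORANI’s estimates), cost-minimizing CES behavior makes the import/domestic quantity ratio a smooth function of relative prices:

```python
# Stylized Armington import/domestic choice for one commodity and one using agent.
# For the CES aggregator Q = (delta_d * X_d**rho + delta_m * X_m**rho)**(1/rho),
# with rho = (sigma - 1)/sigma, cost minimization gives
#     X_m / X_d = ((delta_m / delta_d) * (P_d / P_m)) ** sigma,
# where sigma is the Armington (import/domestic) substitution elasticity.
def import_domestic_ratio(p_m, p_d, sigma, delta_m=0.3, delta_d=0.7):
    return ((delta_m / delta_d) * (p_d / p_m)) ** sigma

# A 10% fall in the import price raises import use relative to domestic use by the
# factor (1/0.9)**sigma, smoothly rather than flipping all demand to the cheaper source.
for sigma in (0.5, 2.0, 5.0):
    base = import_domestic_ratio(1.0, 1.0, sigma)
    cheap = import_domestic_ratio(0.9, 1.0, sigma)
    print(f"sigma = {sigma}: X_m/X_d rises from {base:.3f} to {cheap:.3f}")
```

With a finite sigma the response to a price change is always partial; it is only with perfect substitution that demand jumps entirely to the cheaper source, which is the 'flip-flop' problem discussed below.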

The import/domestic substitution elasticities (named Armington elasticities in

Dixon et al. 1982) were econometrically estimated for about 50 commodities by Alaouze

et al. (1977) and Alaouze (1977) using a quarterly database assembled for this purpose on

import and domestic prices and quantities for the period 1968(2) to 1975(2). This work

is summarized in Dixon et al. (1982, pp. 181–189).19 With its Armington specification,

ORANI produced results in which imports responded in a realistic manner to changes in

the relative prices of imported and domestic goods, avoiding import-domestic ‘flip-flop’.

This refers to extreme and unrealistic movements in the share of a country’s demands for

a commodity that are satisfied by imports. It occurs in long-run simulations with models

in which import/domestic price ratios are allowed to play a role in import/domestic


choice and imported and domestically produced units of a commodity are treated as

perfect substitutes. Flip-flop can also be a problem with exports. When export prices are

taken as given and long-run supply curves are flat, there is a tendency for models to show

extreme and unrealistic specialization in the commodity composition of exports. The

ORANI modelers avoided export flip-flop by the introduction of downward-sloping

export demand curves (Dixon et al., 1982, pp. 195–196; Dixon and Rimmer, 2002, pp.
222–225).20 Even for a small country, downward-sloping export demand curves can be

justified by attributing Armington behavior to foreigners (i.e. by assuming that they treat

imports of any given commodity from different countries as imperfect substitutes).

17 There were some important one-off flurries using Johansen’s techniques in the mid-1970s (see, e.g. Taylor and Black, 1974; Staelin, 1976; Bergman, 1978; and Keller, 1980).

18 This exogenous treatment of trade was also the approach of Hudson and Jorgenson (1974) for the US. Adelman and Robinson (1978) in their study of Korea set exports of some commodities exogenously and fixed the share of exports in domestic output for other commodities. For most imports, Adelman and Robinson fixed the import share in domestic demand. Taylor et al. (1980, chapter 7) in their study of Brazil exogenized exports and related imports to industry outputs and final demands via exogenous coefficients.

19 For an overview on recent work on Armington elasticities and other elasticities used in modeling international trade, see Hillberry and Hummels in Chapter 18 of this Handbook.

Following ORANI, the Armington specification has been adopted almost univer-

sally in CGE models, although there is some dissatisfaction with this approach. The

Armington specification with elasticity values in the empirically relevant range leads to

negative terms-of-trade effects that outweigh efficiency gains for countries undertaking

unilateral tariff cuts even from quite high levels (e.g. 30%) (Brown, 1987). This is

worrying to people who believe that low tariffs are always better than high tariffs. For

a discussion of the relevant issues, see Dixon and Rimmer (2010b). While no alternative

to Armington for practical CGE modeling has emerged, incorporation of ideas from

Melitz (2003) seems promising, see Fan (2008) and Balistreri and Rutherford in Chapter

24 in this Handbook. The Melitz specification introduces productivity differences

between firms within industries. Efficiency effects of tariff cuts are increased by allowing

for elimination of low-productivity firms. However, potentially large terms-of-trade

effects remain.

2.3.2.2 Incorporation of policy-relevant detail requiring large dimensionality

Policy makers want detail. They want results for identifiable industries (e.g. motor

vehicle parts), not vague aggregates (e.g. manufacturing). They want results for regions,

not just the nation. Consequently, ORANI was designed from its outset to encompass

considerable detail. The first version had 113 industries (Dixon et al., 1977) and was

quickly endowed with a facility for generating results for Australia’s eight states/terri-

tories (Dixon et al., 1978).21 Later, this facility was extended to 56 substate regions
(Fallon, 1982). The imperative of providing results that were persuasive in policy circles

meant that ORANI was equipped not only with industry and regional detail, but also

with detail in other areas that were normally ignored by academics. For example, from its

outset ORANI had a detailed specification of margin costs (road transport, rail transport,

air transport, water transport, wholesale trade and retail trade). Recognition of margin


costs is important in translating the effects of tariff changes (that impact basic prices) into

implications for purchasers’ prices (that influence demand responses). Attention to such
details was important in providing results that could be believed by policy makers.

20 As recognized by Johansen (1974), Taylor and Black (1974) avoided extreme flip-flop in their trade-oriented model by adopting a short-run closure (fixed capital in each industry), thereby giving supply curves a positive slope. With his focus on long-run tendencies, Johansen could not adopt the Taylor and Black approach. Instead, he continued to treat exports and competitive imports exogenously.

21 Also see Giesecke and Madden in Chapter 7 of this Handbook.

Detail expands dimensionality. In ORANI, the dimensions of the A_Y matrix in (2.2)

were far too large to allow direct solution via (2.6). This dimensionality problem was

overcome by a process of condensation in which high-dimension variables were

substituted out of the computational form of the model. For example, consider the

variable x(i,s,j,k,r), which represents the percentage change in the use of margin

commodity r (e.g. road transport) to facilitate the flow of commodity i from source s

(domestic or imported) to industry j for purpose k (current production or capital

creation). In a model with 100 commodities/industries and 10 margin commodities this

variable has 400,000 components. These were explained in ORANI by 400,000

Johansen-style linear percentage change equations:

x(i,s,j,k,r) = x(i,s,j,k) + a(i,s,j,k,r),   (2.7)

where x(i,s,j,k) is the percentage change in the flow of commodity i from source s to industry j for purpose k, and a(i,s,j,k,r) is the percentage change in the use of margin r per unit of that flow.
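The idea of condensation can be illustrated with a stylized example; the share weights S(i,s,j,k,r) below are hypothetical, not ORANI’s actual coefficients. Suppose demand for margin commodity r enters its market-clearing condition as a share-weighted sum of the x(i,s,j,k,r). Substituting from (2.7) before the system is solved gives

\sum_{i,s,j,k} S(i,s,j,k,r) \, x(i,s,j,k,r) \;=\; \sum_{i,s,j,k} S(i,s,j,k,r) \left[ x(i,s,j,k) + a(i,s,j,k,r) \right],

so the 400,000-component variable never has to appear among the variables of the condensed system that is actually inverted.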

References

Dixon, P.B., Rimmer, M.T., 2002. Dynamic General Equilibrium Modelling for Forecasting and Policy: A Practical Guide and Documentation of MONASH. Contributions to Economic Analysis 256. North-Holland, Amsterdam.
Dixon, P.B., Rimmer, M.T., 2004. The US economy from 1992 to 1998: results from a detailed CGE model. Econ. Rec. 80 (Special Issue), S13–S23.
Dixon, P.B., Rimmer, M.T., 2009. Restriction or Legalization? Measuring the Economic Benefits of Immigration Reform. Trade Policy Analysis Paper 40. Cato Institute, Washington, DC. Available from: http://www.freetrade.org/node/949.
Dixon, P.B., Rimmer, M.T., 2010a. US imports of low-skilled labor: restrict or liberalize? In: Gilbert, J. (Ed.), New Developments in Computable General Equilibrium Analysis of Trade Policy. Frontiers of Economics and Globalization 7. Emerald Publishing, Lewes, pp. 103–151.
Dixon, P.B., Rimmer, M.T., 2010b. Optimal tariffs: should Australia cut automotive tariffs unilaterally? Econ. Rec. 86, 143–161.
Dixon, P.B., Rimmer, M.T., 2010c. Johansen’s contribution to CGE modelling: originator and guiding light for 50 years. CoPS/IMPACT Working Paper G-203. Available from: http://www.monash.edu.au/policy/ftp/workpapr/g-203.pdf.
Dixon, P.B., Rimmer, M.T., 2011. You can’t have a CGE recession without excess capacity. Econ. Model. 28, 602–613.
Dixon, P.B., Parmenter, B.R., Ryland, G.J., Sutton, J., 1977. ORANI, A General Equilibrium Model of the Australian Economy: Current Specification and Illustrations of Use for Policy Analysis. Volume 2 of the First Progress Report of the IMPACT Project. Australian Government Publishing Service, Canberra.
Dixon, P.B., Parmenter, B.R., Sutton, J., 1978. Spatial disaggregation of ORANI results: a preliminary analysis of the impact of protection at the state level. Econ. Anal. Pol. 8, 35–86.
Dixon, P.B., Powell, A.A., Parmenter, B.R., 1979. Structural Adaptation in an Ailing Macroeconomy. Melbourne University Press, Melbourne.
Dixon, P.B., Parmenter, B.R., Sutton, J., Vincent, D.P., 1982. ORANI: A Multisectoral Model of the Australian Economy. Contributions to Economic Analysis 142. North-Holland, Amsterdam.
Dixon, P.B., Parmenter, B.R., Powell, A.A., Wilcoxen, P.J., 1992. Notes and Problems in Applied General Equilibrium Economics. North-Holland, Amsterdam.
Dixon, P.B., Malakellis, M., Rimmer, M.T., 1997. The Australian Automotive Industry from 1986–87 to 2009–10: Analysis Using the MONASH Model. A Report to the Industry Commission. Centre of Policy Studies and IMPACT Project, Monash University.
Dixon, P.B., Menon, J., Rimmer, M.T., 2000. Changes in technology and preferences: a general equilibrium explanation of rapid growth in trade. Aust. Econ. Paper 39, 33–55.
Dixon, P.B., Pearson, K.R., Picton, M.R., Rimmer, M.T., 2005. Rational expectations for large CGE models: a practical algorithm and a policy application. Econ. Model. 22, 1001–1019.
Dixon, P.B., Osborne, S., Rimmer, M.T., 2007. The economy-wide effects in the United States of replacing crude petroleum with biomass. Energ. Environ. 18, 709–722.
Dixon, P.B., Lee, B., Muehlenbeck, T., Rimmer, M.T., Rose, A.Z., Verikios, G., 2010. Effects on the US of an H1N1 epidemic: analysis with a quarterly CGE model. J. Homeland Security Emerg. Manage. 7 (1), article 75. Available from: http://www.bepress.com/jhsem/vol7/iss1/75.
Dixon, P.B., Johnson, M., Rimmer, M.T., 2011. Economy-wide effects of reducing illegal immigrants in US employment. Contemp. Econ. Pol. 29, 14–30.
Fallon, J., 1982. Disaggregation of the ORANI employment projections to statistical divisions – Theory. ORANI Research Memorandum, Archive OA0160. IMPACT Project, Melbourne.
Fan, Z., 2008. Armington meets Melitz: introducing firm heterogeneity in a global CGE model of trade. J. Econ. Integrat. 23, 575–604.
Fox, A., Powers, W., Winston, A., 2008. Textile and apparel barriers and rules of origin: what’s left to gain after the agreement on textiles and clothing? J. Econ. Integrat. 23, 656–684.
Gehlhar, M., Somwaru, A., Dixon, P.B., Rimmer, M.T., Winston, R.A., 2010. Economy-wide implications from US bioenergy expansion. Am. Econ. Rev.: Papers Proceedings 100, 172–177.
Glezer, L., 1982. Tariff Politics: Australian Policy-making 1960–1980. Melbourne University Press, Melbourne.
Griswold, D.T., 2002. Willing Workers: Fixing the Problem of Illegal Mexican Migration to the United States. Trade Policy Analysis Paper 19. Cato Institute, Washington, DC. Available from: http://www.cato.org/pubs/tpa/tpa-019.pdf.
Hanoch, G., 1971. CRESH production functions. Econometrica 39, 695–712.
Harrison, W.J., Horridge, J.M., Pearson, K.R., 2000. Decomposing simulation results with respect to exogenous shocks. Comput. Econ. 15, 227–249.
Honkatukia, J., 2009. VATTAGE – A Dynamic, Applied General Equilibrium Model of the Finnish Economy. Research Report 150. Government Institute for Economic Research, Helsinki.
Hudson, E.A., Jorgenson, D.W., 1974. US energy policy and economic growth, 1975–2000. Bell J. Econ. Manage. Sci. 5, 461–514.
Industry Commission, 1997. The Automotive Industry, Volumes I and II. Industry Commission Report 58. Australian Government Publishing Service, Canberra.
Johansen, L., 1960. A Multisectoral Study of Economic Growth. Contributions to Economic Analysis 21. North-Holland, Amsterdam.
Johansen, L., 1974. A Multisectoral Study of Economic Growth, second enlarged ed. Contributions to Economic Analysis 21. North-Holland, Amsterdam.
Keller, W.J., 1980. Tax Incidence: A General Equilibrium Approach. Contributions to Economic Analysis 134. North-Holland, Amsterdam.
Leontief, W.W., 1936. Quantitative input-output relations in the economic system of the United States. Rev. Econ. Stat. 18, 105–125.
Malakellis, M., 1998. Should tariff reductions be announced? An intertemporal computable general equilibrium analysis. Econ. Rec. 74, 121–138.
Malakellis, M., 2000. Integrated Macro–Micro-Modelling under Rational Expectations: with an Application to Tariff Reform in Australia. Physica-Verlag, Heidelberg.
Melitz, M.J., 2003. The impact of trade on intra-industry reallocations and aggregate industry productivity. Econometrica 71, 1695–1725.
Okubo, S., Planting, M., 1998. US travel and tourism satellite accounts for 1992. Sur. Curr. Bus., July, 8–22.
Pearson, K.R., 1988. Automating the computation of solutions of large economic models. Econ. Model. 5, 385–395.
Powell, A.A., Lawson, T., 1990. A decade of applied general equilibrium modelling for policy work. In: Bergman, L., Jorgenson, D., Zalai, E. (Eds.), General Equilibrium Modeling and Economic Policy Analysis. Basil Blackwell, Boston, MA, pp. 241–290.
Powell, A.A., Snape, R.H., 1993. The contribution of applied general equilibrium analysis to policy reform in Australia. J. Pol. Model. 15, 393–414.
Rattso, J., 1982. Different macro closures of the original Johansen model and their impact on policy evaluation. J. Pol. Model. 4, 85–97.
Rector, R., Kim, C., 2007. The Fiscal Cost of Low-Skill Immigrants to the US Taxpayer. Heritage Special Report SR-14. The Heritage Foundation, Washington, DC. Available from: http://www.heritage.org/Research/Immigration/sr14.cfm.
Robinson, S., 2006. Macro models and multipliers: Leontief, Stone, Keynes, and CGE models. In: de Janvry, A., Kanbur, R. (Eds.), Poverty, Inequality and Development: Essays in Honor of Erik Thorbecke. Springer, New York, pp. 205–232.
Scarf, H.E., 1967. On the computation of equilibrium prices. In: Fellner, W. (Ed.), Ten Economic Studies in the Tradition of Irving Fisher. Wiley, New York, pp. 207–230.
Scarf, H.E., 1973. The Computation of Economic Equilibria. Yale University Press, New Haven, CT.
Spurkland, S., 1970. MSG – a tool in long-term planning. Report prepared for the First Seminar on Mathematical Methods and Computer Techniques. UN Economic Commission for Europe, Varna.
Staelin, C.P., 1976. A general equilibrium model of tariffs in a non-competitive economy. J. Int. Econ. 6, 39–63.
Strayhorn, C.K., 2006. Undocumented Immigrants in Texas: A Financial Analysis of the Impact to the State Budget and Economy. Special Report. Office of the Comptroller of Texas, Austin, TX.
Sutton, J., 1976. The Solution Method for the ORANI Module. Preliminary Working Paper OP-03. IMPACT Project, Melbourne.
Sutton, J., 1977. Computing Manual for the ORANI Model. IMPACT Computing Document C1-01. IMPACT Project, Melbourne.
Taylor, L., Black, S.L., 1974. Practical general equilibrium estimation of resource pulls under trade liberalization. J. Int. Econ. 4, 35–58.
Taylor, L., Bacha, E.L., Cardoso, E.A., Lysy, F.J., 1980. Models of Growth and Distribution for Brazil. Oxford University Press for the World Bank, New York.
US International Trade Commission, 2003. Steel: Monitoring Developments in the Domestic Industry, Investigation TA-204-9; Volume 3, Steel-Consuming Industries: Competitive Conditions with Respect to Steel Safeguard Measures, Investigation 332-452. Publication 3632. US ITC, Washington, DC. Available from: http://www.usitc.gov/publications/safeguards/3632/pub3632_vol3_all.pdf.
US International Trade Commission, 2004. The Economic Effects of Significant US Import Restraints: Fourth Update. Investigation 332-325, Publication 3701. US ITC, Washington, DC.
US International Trade Commission, 2007. The Economic Effects of Significant US Import Restraints: Fifth Update. Investigation 332-325, Publication 3906. US ITC, Washington, DC.
US International Trade Commission, 2009. The Economic Effects of Significant US Import Restraints: Sixth Update. Investigation 332-325, Publication 4904. US ITC, Washington, DC.
Wilcoxen, P.J., 1985. Numerical Methods for Investment Models with Foresight. IMPACT Project Preliminary Working Paper IP-23. IMPACT Project, Monash University, Clayton.
Wilcoxen, P.J., 1987. Investment with Foresight in General Equilibrium. IMPACT Project Preliminary Working Paper IP-35. IMPACT Project, Monash University, Clayton.
Winston, R.A., 2009. Enhancing Agriculture and Energy Sector Analysis in CGE Modelling: An Overview of Modifications to the USAGE Model. CoPS/IMPACT Working Paper G-180. Available from: http://www.monash.edu.au/policy/elecpapr/g-180.htm.
