
7. IN LIEU OF A SUMMARY: CONSENSUS AND DISSENSUS BETWEEN THE VARIOUS APPROACHES

Looking at the relationship between the various theoretical developments, there are complementarities as well as incompatibilities. Although modern Industrial Economics has come a long way, it still seems firmly rooted in the structure-conduct-performance paradigm. Not only have the main questions remained the same; the basic conjectures have also largely remained unchanged. What has changed is the toolkit: whereas the Harvard paradigm used to be primarily inductive, modern Industrial Economics has turned deductive, being based on game theory. Moreover, modern Industrial Economics no longer focuses solely on structural factors, but tries to incorporate the behavioural incentives of the relevant actors.

Transaction Cost Economics also takes the behavioural incentives of the actors explicitly into account. Many of its representatives also draw heavily on game theory, and there is thus substantial overlap with modern Industrial Economics. This is documented, e.g., in the textbook by Tirole (1988), one of the leading representatives of the New Industrial Organisation, who for a number of chapters draws heavily on Joskow’s lectures delivered at MIT. Joskow, in turn, is one of the leading representatives of Transaction Cost Economics.

Yet, there are a number of incompatibilities between NIO and TCE. The most important one seems to be the underlying standard of reference: the NIO remains within traditional neoclassical thinking – define an abstract welfare standard, compare reality with it, and, if reality diverges substantially, devise some policy in order to bring reality closer to the theoretical standard. Transaction Cost Economics believes that such an approach is of little help. Based on the notion of Comparative Institutional Analysis, it redefines efficiency in a way that takes the specific constraints explicitly into account. Traditional welfare economics has identified a host of so-called “market failures”. TCE explicitly recognises that it is not only the market that can fail but also government and bureaucracies. Once these failures are taken into account, the current situation can often not be improved upon. If that is the case, it is called “efficient”, even though it does not fulfil the tight criteria named by more traditional approaches. Thus, although representatives of both approaches talk of “efficiency”, they mean completely different things.

Representatives of TCE start from the assumption that the borders of a firm are the result of transaction cost minimising strategies. Contracts and agreements that have generally been subsumed under behaviour “in restraint of competition” – be they horizontal, vertical, or conglomerate – therefore need to be re-evaluated. It was representatives of TCE who were able to show that such agreements are often entered into with the goal of saving on transaction costs and that they are thus not necessarily restraining competition. Market structure in the traditional sense thus no longer plays a decisive role in TCE. In recent decades, moreover, new forms of cooperation between firms have emerged.

Structural approaches towards competition policy still seem to be the dominant ones. One possible reason is that it is still very difficult to measure transaction costs as well as other central notions of TCE such as asset specificity, uncertainty, and frequency. In order to gain further ground, representatives of TCE should thus think hard about hands-on approaches for dealing with these measurement issues. Structure-oriented economists, by contrast, only need to do some simple number-crunching after having delineated the relevant market in order to come up with concentration ratios and the like.
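To illustrate the kind of simple number-crunching involved, the following sketch (in Python, with purely hypothetical market shares) computes two standard structural measures, the four-firm concentration ratio (CR4) and the Herfindahl-Hirschman Index (HHI); nothing comparably mechanical exists for measuring transaction costs, asset specificity, uncertainty, or frequency.

    # Hypothetical post-delineation market shares in per cent (illustrative figures only)
    shares = [30.0, 25.0, 15.0, 10.0, 10.0, 5.0, 5.0]

    def concentration_ratio(shares, n=4):
        """n-firm concentration ratio: sum of the n largest market shares."""
        return sum(sorted(shares, reverse=True)[:n])

    def hhi(shares):
        """Herfindahl-Hirschman Index: sum of squared market shares (scale 0 to 10,000)."""
        return sum(s ** 2 for s in shares)

    print(concentration_ratio(shares))  # 80.0
    print(hhi(shares))                  # 2000.0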

So far, only three approaches – namely the traditional structure-conduct-performance paradigm, the New Industrial Organisation, and Transaction Cost Economics – have been mentioned. As fully-fledged approaches, they do indeed seem to dominate discussions on competition policy. But what about the other two approaches and their relationship to these three more important ones?

“Chicago” did not develop only as a critique of Harvard but also of US-style antitrust policy. Its representatives had the impression that there were many inconsistencies in antitrust policy as practised in the US during the 60s and 70s. This is the reason why Bork (1978) gave his book the subtitle “A policy at war with itself.” Many of the shortcomings pointed out by representatives of “Chicago” have been corrected in the meantime: state-mandated barriers were not only recognised as a serious impediment to competition but were also dismantled to a large degree during the privatisation and deregulation policies observed in many countries in the 80s and 90s (see also Chapter III for more on this).

The representatives of contestability theory did not attack Harvard as sweepingly as Chicago did. Yet, on theoretical grounds, they were often taken more seriously than the Chicago boys, as they argued from within the same paradigm and came to the conclusion that, under certain carefully specified conditions, structure did not matter at all for the results (“performance”) to be expected in a market. Although contestability theory has been criticised because these conditions seem very seldom – if ever – to apply in reality, it has had an important effect on competition theory: according to it, the effectiveness of potential competition crucially depends on the significance of barriers to entry. Part of the message is almost identical to that of Chicago, although the representatives of contestability come from a different theoretical background: (competition) policy should focus on reducing state-mandated barriers to entry, as this will increase the likelihood of beneficial results of the competitive process.

In order to make the points of consensus and dissensus among the various approaches even more concrete, we will discuss the ways in which they deal with one issue that any approach claiming to be applicable to policy questions needs to deal with somehow: the recognition that mistakes can occur and the question of how one deals with that possibility. Two types of mistakes can be distinguished:

– Type I errors: Efficiency-increasing and thus welfare-increasing mergers are wrongly prohibited.

– Type II errors: Mergers that are not efficiency-enhancing and thus not welfare-increasing are wrongly allowed.

This classification of possible errors is based on welfare economic grounds. From a welfare economic point of view, the gains of any merger can be expressed in terms of increased productive efficiency. But mergers can also cause allocative inefficiencies if they enable a firm to gain market power.11 The competition authority thus needs to trade off gains in productive efficiency against losses in allocative efficiency. In doing so, it can commit two kinds of mistake: it can either exaggerate the expected allocative inefficiencies (and turn notified mergers down although they should be passed) or it can overestimate the gains in productive efficiency (and clear a merger although it should be prohibited). These two types of errors thus reflect the welfare economic approach towards mergers.
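A stylised way of expressing this trade-off – a sketch under simplifying assumptions, not a formalisation taken from the text – compares the cost savings on the output still produced after the merger with the deadweight loss caused by the post-merger price increase:

\Delta W \;\approx\; \Delta c \cdot q_1 \;-\; \tfrac{1}{2}\,\Delta p\,\Delta q

where the first term is the gain in productive efficiency (\Delta c the reduction in unit costs, q_1 the post-merger output) and the second term the loss in allocative efficiency (\Delta p the price increase, \Delta q the resulting reduction in quantity). In this simple sense, a merger is welfare-increasing whenever \Delta W > 0; the two error types just defined correspond to misjudging one of the two terms of this expression.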

For competition policy this does not, however, mean that only those mergers should be allowed that explicitly generate efficiencies. In competition policy, all mergers should be passed as long as they do not overly restrain competition; only if a merger threatens to do so should efficiency considerations play an explicit role. From a welfare economic point of view, one would then ask whether the allocative inefficiency can be expected to be overcompensated by gains in productive efficiency.

Any competition authority faces the dilemma of having to trade off the two types of errors against each other. If the authority decides to take a tougher stance on mergers, thus letting fewer mergers pass, it reduces the probability of committing type II errors but simultaneously increases the probability of committing type I errors. The inverse relationship also holds: if a competition authority decides to take a more relaxed stance on mergers, thus letting more mergers pass, it reduces the likelihood of committing type I errors but simultaneously increases the probability of committing type II errors. The choice is thus a genuine dilemma.

Table 3: The Trade-off Between Type I and Type II Errors

Stance of the competition authority      Probability of type I errors   Probability of type II errors
Tougher (fewer mergers passed)           increases                      decreases
More relaxed (more mergers passed)       decreases                      increases

It is, of course, tempting to think of the “optimal” decision concerning this trade-off as the one that minimises the overall costs to be expected. The costs caused by type I errors consist of the unrealised efficiency gains that would have resulted had the prohibited mergers in fact been implemented. But these are not the entire costs: every decision by a competition authority contains signals concerning likely future decisions. If the authority takes a tough stance on a particular merger, it can be expected to take a similarly tough stance on similar mergers. This will most likely lead to some potentially welfare-increasing mergers never being seriously pursued, because every prohibited merger entails huge costs for the notifying parties. These could be called the dynamic effects of type I errors.

The costs of type II errors are primarily caused by allowing welfare-reducing mergers. Allocative inefficiencies will be reflected in higher prices and lower quantities. But there is also a dynamic aspect to type II errors: if companies expect a liberal decision practice in merger control, this will affect the number and quality of the mergers notified. It is possible that mergers will be attempted for reasons other than efficiency improvements, such as gaining market power. Here, the competition authorities fail to send signals that would reduce adverse selection among notified mergers (Besanko/Spulber 1993, 11).
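In equally stylised terms – again a sketch rather than part of the original argument – let s denote the stringency of merger control, p_I(s) and p_{II}(s) the probabilities of the two error types (with p_I increasing and p_{II} decreasing in s), and c_I and c_{II} their respective costs, including the dynamic effects just described. The “optimal” stance would then solve

\min_{s}\; E[C(s)] \;=\; p_{I}(s)\,c_{I} \;+\; p_{II}(s)\,c_{II}

and the controversy discussed in the following paragraphs is essentially a dispute about the relative size of c_I and c_{II}.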

For identifying an optimum, the costs of both error types need to be compared. It is in the evaluation of the costs expected from committing the two error types that the approaches presented in this chapter differ. Representatives of the Chicago approach would rather commit an error of type II than of type I because they believe that errors of type I are – at least on average – more costly. Type II errors can be corrected ex post, whereas there is no clearly identifiable ex post correction mechanism for type I errors. Mergers that are not efficiency-enhancing but are passed nevertheless remain subject to the market test: if other producers are more productive or meet consumer preferences better than the merged company, the new company will lose market share – and profits. If it is too large, capital markets are expected to correct for this (Manne 1965). In many jurisdictions, competition authorities can let mergers pass but can still check the conduct of firms that are presumed to hold a dominant market position; this is an additional channel for keeping the costs of type II errors low. But if efficiency-enhancing mergers are wrongly prohibited, there is no ex post market test: the efficiencies simply cannot be realised.

Representatives of the Chicago approach thus argue that costs of type I errors regularly outweigh costs of type II errors. Judge Easterbrook, e.g., argues (1984, 15): “... the economic system corrects monopoly more readily than it corrects judicial errors ... in many cases the costs of monopoly wrongly permitted are small, while the costs of competition wrongly condemned are large.”

Representatives of the Harvard approach seem more likely to argue in favour of committing type I rather than type II errors. Traditionally, they have been much more critical of the market than representatives of Chicago have been. This is reflected in their evaluation of the costs of type I errors relative to those of type II errors.

Representatives of Transaction Cost Economics have also explicitly dealt with the issue of wrong decisions in merger policy. In a recent paper, Joskow (2002, 6) writes: “The test of a good legal rule is not primarily whether it leads to the correct decision in a particular case, but rather whether it does a good job deterring anticompetitive behaviour throughout the economy given all of the relevant costs, benefits, and uncertainties associated with diagnosis and remedies.” The dynamic effects of errors are clearly recognised here. Moreover, Joskow clearly recognises that our knowledge of cause-effect relationships is very limited and that enforcement agencies have only very limited knowledge at their disposal. Rather than taking a clear stance on which type of error should rather be committed, the policy advice seems to point towards broad and general rules. This cannot, however, be easily reconciled with the specific type of efficiency defence that TCE stands for, namely efficiencies based on asset specificity, uncertainty, and frequency.

In conclusion, it can be said that, with regard to competition theory, a lot has been learned over the last couple of decades. In theory, a competition policy based on sound economic reasoning should thus be possible. The problem to be solved is to devise rules that allow the intricacies of a specific case to be taken explicitly into account, yet are general and robust enough to allow for a high degree of predictability. Succumbing to economic trends is not good advice here, as they have often turned out to be short-lived fads. Before we develop, in Chapter IV, some proposals for how this could possibly be achieved, we turn to a description of some business trends that an up-to-date merger policy should probably take explicitly into account.