
2.5.3. Improvements Due to Trends in the Business Environment

In chapter III, it was shown that the business environment of many companies has fundamentally changed. This holds true for both the supply and the demand side of the market. The value of brands can be subject to quick erosion; consumer loyalty seems to have become fickle in many cases. This means that high market shares can also erode quickly, and predictions concerning future market shares have become virtually impossible. Merger policy should react to these changes, but what is the adequate response? In this section, two proposals are advanced for how merger policy could explicitly take the globalised and rapidly changing business environment into account: one is concerned with the time dimension of the relevant market, the other with its geographic dimension.

Taking the Time Dimension Adequately into Account

In a rapidly changing business environment, reliable predictions concerning future market shares have become virtually impossible. But a responsible merger policy needs to take likely developments into account: if today's high market share resulting from a merger is not very likely to persist tomorrow, the merger should be passed. Some observers have proposed extending the time horizon that competition authorities consider in their decisions to five or even more years. This is, however, a mission impossible: if short-term predictions are impossible, long-term predictions are even less feasible.

Drawing on market shares in merger analysis rests on the hypothesis that they can be translated into market power and can thus be used to the detriment of consumers. But what if the equation "high market share = low consumer rent" no longer holds? Rapid change induced by supply-side or demand-side factors (or by both) can prevent the emergence and the use of market power because it makes market structures unstable. Rapid change is thus the crucial variable. Structure could only be consolidated – and probably used in order to raise prices and restrict output – if change is slowed down. Firms with a dominant position might thus have an incentive to try to slow down change. But they will often fail in that endeavour: competitors who expect to gain by innovating will undermine it, and so will consumption patterns that are themselves subject to rapid change.

We thus propose that competition authorities analyse (1) the speed of change in a given industry and (2) who controls the factors responsible for that rapid change. Ascertaining the speed of change in a given industry is, of course, not easy. Counting the number of patents will not do, as some innovations never get patented and new products do not necessarily rely on patentable knowledge (SONY created the Walkman drawing on readily available techniques). As already mentioned, rapid change can be due to supply-side factors, but also to demand-side factors. Demand-side factors are certainly not beyond any influence from the supply side (marketing), but they are difficult to control. In some cases, control over necessary inputs (resources, patents, public orders, etc.) can seriously constrain the capacity of others to be innovative. In such cases, merger policy should be restrictive. If, however, the parties willing to merge do not control the process, the merger should pass even if a highly concentrated market structure is the – temporary – result.

The Guidelines on horizontal mergers deal with the issue of markets in which innovations are important. Notified mergers can be prohibited if the two most innovative firms want to merge, even if they do not command important market shares. According to the Commission, mergers between the most innovative firms in a market can lead to a substantial impediment of effective competition. But there is no empirical evidence proving that mergers between innovation leaders do indeed slow down the speed of innovation in the market (Orth 2000). At the end of the day, the implementation of this new criterion can mean that innovation leaders are effectively sanctioned for being "too" innovative. Since innovation is the key to competition in these markets, it would be competitors rather than competition that is protected by this provision.

Taking the Geographic Dimension Adequately into Account

In chapter III, it was shown that deregulation and privatisation have occurred on a worldwide scale. It was concluded that international interaction costs have in many industries been reduced to such an extent that one can talk of truly global markets. This does not seem to be adequately reflected in merger policy practice. Very often, markets are still delineated on a much narrower scale.

What is crucial for the geographic delineation is the possibility of border-crossing trade, in particular imports. It is thus insufficient to look at actual trade statistics. What should instead be taken explicitly into consideration is the sheer possibility of trade. This can be ascertained by analysing the relevant transport costs as well as costs due to state-mandated barriers to entry. This procedure thus explicitly acknowledges the close relationship between defining the relevant market and ascertaining the relevance of barriers to entry.

Predictability could be further advanced if the Porter classification of completely globalised, partially globalised, and regional markets were taken into consideration by the Commission in the definition of geographical markets. If the firms knew ex ante how their industry was classified, predictability would be greatly increased.

2.5.2. Improvements Due to Theoretical Developments

Take Efficiencies Explicitly into Consideration

As spelt out above, mergers may increase market power but still be welfare-increasing if they enable the merging companies to realise efficiency gains. In U.S. merger policy, the efficiency defence has been around for a while: efficiencies are explicitly mentioned in the U.S. Merger Guidelines of 1992, but they had already been applied on the basis of a 1984 publication of the Federal Trade Commission. The 1997 revision of the Guidelines meant a more restrictive application of efficiencies. The Guidelines now demand that the proof of cost advantages be clear and unequivocal; they specify more precisely what is meant by efficiencies. Other jurisdictions have followed suit and have also incorporated an efficiency defence into their merger law. It can now be found in the U.K., but also in Australia, Canada, and New Zealand. The Guidelines on horizontal mergers that went into effect in May 2004 also contain an efficiency defence. Factually, efficiency considerations had, however, already played some role even before (see, e.g., NORDIC SATELLITE, AÉROSPATIALE/DE HAVILLAND, and HOLLAND MEDIA GROEP).

Efficiency Defence in the US


Drawing on the various versions of the US Merger Guidelines issued between 1968 and 1997, it is possible to trace the development of the relevance that efficiencies have had in US merger control policy. The Guidelines issued in 1968 (U.S. Department of Justice 1968) are very restrictive concerning the possibility of taking efficiency effects explicitly into account. Cost advantages as a reason to justify a merger were basically not accepted by the Department of Justice, except in exceptional circumstances. The realisation of product and process innovations was the prime candidate for such exceptional circumstances. In its decisions, the Department of Justice consistently denied the presence of such exceptional circumstances. The main reasons offered by the Department of Justice for its restrictive stance on accepting efficiencies as a justification for mergers were that cost savings could also be realised through internal firm growth and that efficiency claims were notoriously difficult to verify.
This critical stance was retained in the 1982 version of the Guidelines (U.S. Department of Justice 1982). They reiterated that efficiencies could only play a role in extraordinary cases. However, these extraordinary cases did not play a significant role in merger policy. In 1984, the Department of Justice explicitly introduced efficiencies into the Merger Guidelines (U.S. Department of Justice 1984). According to them, significant efficiencies could play a role if there was "clear and convincing evidence" in their favour and it was impossible to realise them through alternative means. This was thus the first time that efficiencies appeared as a criterion of their own for justifying mergers. But simply pointing at expected efficiencies was never sufficient for getting a merger passed. They could only play a role in conjunction with other reasons.
The 1992 revision of the Merger Guidelines did not lead to substantial changes with regard to efficiencies (U.S. Department of Justice/FTC 1992). But the precondition of "clear and convincing evidence" was dropped. This was not, however, accompanied by a reversal of the burden of proof, which thus still lies with the merging firms. All in all, this made it easier to draw on the efficiency defence (Stockum 1993). Finally, the 1997 version of the Guidelines made the rather general formulations with regard to efficiencies more concrete (U.S. Department of Justice/FTC 1997).
The revision of the Merger Guidelines of April 1997 served the purpose of making the recognition of efficiencies more concrete. Clearly specified criteria are to make it easier for firms to advance efficiency arguments and easier for courts to decide cases in which efficiencies play a role (Kinne 2000, 146). The Merger Guidelines (1997, 31) demand that "... merging firms must substantiate efficiency claims so that the Agency can verify by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved (and any costs of doing so), how each would enhance the merged firm's ability and incentive to compete, and why each would be merger-specific." And further: "Efficiency claims will not be considered if they are vague or speculative or otherwise cannot be verified by reasonable means." Relevant from a competition policy point of view are the so-called "cognisable efficiencies." These are savings in marginal costs that can be realised in the short run. Cost savings in the areas of research and development, procurement, or management are classified as not being sufficiently verifiable and in that sense not "cognisable."

Efficiency aspects are estimated by following a three-step procedure:
(1) Determination of merger-specific efficiencies
Only cost savings caused by the merger itself are recognised. Efficiency advantages caused by increased market power must not be recognised.

(2) Analysis of the relevance of merger-specific efficiencies
The merging firms must document

– when, how, to what degree, and at what cost the efficiency gains will be realised,
– why the claimed efficiencies can only be realised by a merger, and
– how these efficiencies will affect the competitiveness of the merging firms.

If the firms' documentation is to the satisfaction of the competition authorities, the efficiencies will be evaluated as "cognisable."

(3) Evaluation
After having estimated the size of the expected efficiencies, they are compared with the disadvantages that consumers will most likely have to incur. The higher the expected disadvantages (measured by the post-merger HHI, possible unilateral effects, and the relevance of potential competition), the more "... extraordinarily great cognisable efficiencies would be necessary to prevent the merger from being anticompetitive" (Merger Guidelines, 32). The creation of a monopoly as the consequence of a merger can never be compensated by efficiency arguments.
The 1997 version of the Guidelines clarified under what conditions efficiencies could be taken into account in notified mergers. Compared to the vague formulations of the 1992 Guidelines, this meant a substantial improvement. It is thus encouraging that the European Commission used the 1997 version of the U.S. Guidelines as a model rather than the earlier versions.

An increase in the predictability of European merger policy is possible if the basis on which efficiency considerations are taken into account is spelt out explicitly. Firms can then form expectations on whether the efficiencies that they expect to realise as a consequence of a merger will be taken into account by the Commission or not. But given that predictions concerning the size of realisable efficiencies rest on shaky grounds, there will be quarrels as to realistic levels. This will limit the gains in predictability.

The inclusion of efficiencies in the recently published Guidelines is thus welcome. There, the Commission demands that efficiencies "benefit consumers, be merger-specific, and be verifiable." The Commission loosely follows the US Merger Guidelines here. Unfortunately, the Guidelines contain a number of indeterminate legal terms that need to be made more concrete. The terms just mentioned are described, but concrete conditions that need to be met for efficiencies to be taken into account are not named. The US Merger Guidelines are more concrete here, and predictability could be further increased if the Commission were to follow the US Guidelines in this regard too.

The main issue to be decided for the inclusion of efficiencies is which criteria need to be fulfilled. In the US, three criteria apply, namely (1) that gains in efficiency need to be passed on to consumers in the form of lower prices or increased quality, (2) that efficiencies must be merger-specific, and (3) that they must be verifiable. As just pointed out, the Commission is to use similar criteria in Europe. In order to ascertain possible consequences, these three criteria will be dealt with in a little more detail.

The probability that efficiencies will be passed on to consumers appears to be higher if they originate in lower marginal costs rather than in lower fixed costs. Expected cost savings in, e.g., administrative overhead do not have an impact on the marginal costs of a company. Reductions in its fixed costs will, however, only be passed on to consumers if the degree of competition in a given market is sufficiently high. But the reason for drawing on efficiencies as a justification for letting a merger pass is precisely that the creation of a market-dominant position is suspected. It thus makes sense to weigh reductions in marginal costs more heavily than reductions in fixed costs. Reductions in marginal costs are to be expected if the merged entity is able to realise efficiencies on the supply side as well as during its production process, e.g., via economies of scale or scope. The more important the input factors on which the merged entity can save, the more important the efficiencies can be expected to be. This is reflected in the Guidelines, which attribute more weight to savings in marginal costs but do not exclude that savings in fixed costs might also lead to efficiencies that benefit the consumer.
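The different treatment of marginal and fixed costs can be made precise with a minimal textbook illustration (a monopolist facing linear demand; the symbols are illustrative and not taken from the Guidelines). With inverse demand and cost function

\[
p(q) = a - bq, \qquad C(q) = F + cq,
\]

profit maximisation yields

\[
q^{*} = \frac{a - c}{2b}, \qquad p^{*} = \frac{a + c}{2}, \qquad \frac{\partial p^{*}}{\partial c} = \frac{1}{2}, \qquad \frac{\partial p^{*}}{\partial F} = 0.
\]

Even a monopolist thus passes on half of any reduction in the marginal cost c, while the fixed cost F does not enter the price at all; under stronger competition, the pass-through rate of marginal-cost savings is typically even higher.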

Additionally, efficiencies that can be realised in R & D are also highly welcome. Improved R & D will not only increase the efficiency of the undertakings concerned, but will also put other companies under pressure to improve their own R & D. Overall welfare gains are thus to be expected.

The criterion that efficiencies need to be merger-specific is more problematic than estimating whether efficiencies will be passed on to consumers. The basic notion is, of course, quite straightforward: efficiencies are only to be recognised as an offsetting argument if the merger is the only way to realise them. If there are other ways, such as internal growth, joint ventures, licensing agreements, or other forms of cooperation between the firms, efficiency gains are not to be used as an argument offsetting the creation of a market position which gives rise to serious concerns. From an economic point of view, however, the notion is not as straightforward as it might first seem.

In order to make the point, we draw on transaction cost economics, whose representatives are used to thinking in terms of alternative institutional arrangements. Suppose that the management of an undertaking has incentives to choose the institutional arrangement that promises to maximise profits. It is then unclear why management should opt for a merger if an institutional alternative such as a joint venture or a licensing agreement could do the trick. A merger regularly involves high organisation costs. These costs will only be incurred if institutional alternatives are expected to be less beneficial. Moreover, it remains unclear why a joint venture should be preferred over a merger: joint ventures have often been the source of cartel-like arrangements. Merger-specificity thus appears to be a problematic criterion that should play no role in merger policy.

Critics might object that the criterion of merger specificity primarily serves to prevent the genesis of allocative inefficiencies that are due to market power resulting from a merger. But the efficiency defence will only trump a prohibition if the merger is expected to make consumers better off, i.e., if it increases consumer surplus. And if post-merger prices need to be lower than pre-merger prices, the probability of allocative inefficiencies appears to be negligible. If, on the other hand, the criterion of merger specificity is applied rigidly, this could mean that mergers will be prohibited – or not even proposed – although they would increase overall welfare. In order to protect consumers against profits due to market power, checking for the change in consumer surplus is sufficient.

Empirical Evidence on Efficiencies

The effects of mergers on the efficiency of merged firms have been subject to extensive empirical analysis. The main results can be summarised as follows (Gugler et al. 2003, Mueller 1997, Tichy 2001): Mergers often lead to higher profit levels while turnover decreases. This would mean that mergers increase market power but not efficiency. According to the available empirical studies, these effects appear independent of the sector or the country in which the merger occurs. The reduction in turnover is more pronounced in conglomerate mergers than in horizontal mergers (Caves/Barton 1990). On the other hand, and in contrast to the results just reported, one half of all mergers leads to a reduction in both profits and turnover, which would mean that mergers reduce efficiency (Gugler et al. 2003). If profits increase, these increases can be explained by increased market power as well as by increased efficiency. Market power effects are correlated with large mergers, efficiency effects with small mergers (Kleinert/Klodt 2001).

Concerning the relationship between mergers and technical progress, some studies did not find any significant correlation between the two (Healy/Palepu/Ruback 1992). Some studies have found a negative correlation (Geroski 1990, Blundell/Griffith/Van Reenen 1995), while others found a weakly positive correlation between mergers and technical progress (Hamill/Castledine 1986). The latter authors pointed at the possible relevance of the acquisition of patents as a consequence of mergers. The empirical results are thus not unequivocal, and it is difficult to draw policy implications.

The few cases in which positive efficiencies as a consequence of mergers appear unequivocal seem to have occurred in mergers in which the merging parties were producing very similar products. Efficiencies are thus more likely in horizontal mergers in which the merging firms produce close substitutes (Ravenscraft/Scherer 1989, Gugler et al. 2003). Conglomerate mergers, on the other hand, seem rather unlikely to lead to additional efficiencies. With regard to vertical mergers, no significant savings in transaction costs were found. Rather, the increased difficulty of entering a market and the extension of dominant positions into upstream or downstream markets seem to dominate (Gugler et al. 2003).

The empirical evidence with regard to the efficiency-enhancing effects of mergers is thus mixed at best. Yet, this mixed empirical evidence is not a sufficient reason for not relying on efficiencies in merger control. It is not entirely clear whether these studies do indeed measure what they purport to measure. The most frequently relied upon indicators for the success – or failure – of a merger are the development of profits, share prices, or the return on turnover. The connection between these indicators and the efficiency of a firm is, however, anything but clear-cut. Moreover, it is in the nature of these tests that the development of these indicators in the absence of a merger must remain systematically unknown.

This means that it would be premature to conclude from lower share prices or reduced profit margins that the merger must have been inefficient. The available studies provide some important insights into the conditions under which mergers are likely to be a success or a failure. But they appear to be far from conclusive. This is why they do not provide sufficient evidence against the incorporation of an efficiency defence into merger control.

The critical issue in the recognition of efficiencies certainly is the capacity to assess them. It is quite understandable that the burden of proof should be with the entities that want to merge. But that does not solve the problem of information asymmetries. It was already pointed out above that the main problem with the explicit consideration of efficiencies is that none of the actors concerned has any incentive to reveal them according to their true expectations. The pragmatic question thus is whether any second-best mechanisms can be thought of. In recent years, a literature on the virtues of independent agencies has developed. It started out with the analysis of the effects of independent central banks but has been extended to a number of areas (Voigt and Salzberger 2002 is an overview). It is conceivable to delegate the task of evaluating the realisable efficiencies of a proposed merger to an independent "efficiency agency" that would specialise in such estimates. A similar suggestion has already been made by Neven et al. (1993). Unfortunately, the Guidelines on horizontal mergers did not realise any of these proposals. A number of indeterminate legal terms are used, but they are not made sufficiently concrete.

Assess Importance of Asset Specificity
When describing the insights of Transaction Cost Economics, it was pointed out that (1) asset specificity, (2) uncertainty, and (3) the frequency of interactions all bear on the optimal governance structure. It was assumed that firms try to economise on transaction costs and that unified governance – i.e., large firms – could be the result. This means that transaction cost arguments are basically efficiency arguments. They are dealt with separately here because they are intimately connected with one specific theoretical development, namely Transaction Cost Economics.

Once invested, highly specific assets make the firm that has invested in them subject to opportunistic behaviour by its interaction partners. This might lead to investment rates below the optimum level. It can therefore be in the interest of both sides of an interaction to agree on specific governance structures in order to reduce the risk of being exposed to opportunism. This insight has potential consequences for merger policy: the higher the degree and amount of specific assets involved, the better the justification for a unified governance structure, in this case for a merger. In order to take asset specificity explicitly into account, one either needs to measure it or to use proxies for it. As described in the part on Transaction Cost Economics in chapter II, four kinds of asset specificity are usually distinguished, namely (1) site specificity (costs of geographical relocation are great), (2) physical asset specificity (relationship-specific equipment), (3) human asset specificity (learning-by-doing, especially in teams comprising various stages of the production process), and (4) dedicated assets (investments that are incurred due to one specific transaction with one specific customer). Physical proximity of contracting firms has been used as a proxy for site specificity (e.g., by Joskow 1985, 1987, 1990 and Spiller 1985) and R&D expenditure as a proxy for physical asset specificity. With regard to both human asset specificity and dedicated assets, survey data have been used.

It is thus possible to get to grips with asset specificity empirically. Since the merger rationale in cases of high asset specificity is quite straightforward, it should be taken into account explicitly.

Assess Importance of Uncertainty
The theoretical argument concerning uncertainty bears great similarities to the argument concerning asset specificity: even if interactions could be made beneficial for all parties concerned, they might still not take place if too high a degree of uncertainty is involved. In such cases, welfare could be increased if the interested parties were allowed to form a common governance structure in order to cope with uncertainty. With regard to merger policy, this means that in mergers in which uncertainty plays an important role, the evaluation should be somewhat less strict than in cases in which uncertainty is marginal.

Getting to grips with uncertainty empirically is no mean feat. In the literature, various proxies have been discussed; volatility in sales is one of them. Others (Walker and Weber 1984, 1987) have proposed to focus on one specific kind of uncertainty, namely "technological uncertainty", measured as the frequency of changes in product specification and the probability of technological improvements. Given that technological uncertainty seems to have increased dramatically, it seems worthwhile to take it into account explicitly. The argument is that mergers are more likely in markets with high uncertainty as proxied by high volatility in sales or high technological uncertainty. These mergers are potentially welfare-increasing and should thus be passed.
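A minimal sketch of how the two proxies just mentioned could be computed; the data, the function name, and the choice of the coefficient of variation as the volatility measure are illustrative assumptions, not an established measurement standard.

```python
import numpy as np

def uncertainty_proxies(sales, n_spec_changes, n_periods):
    """Illustrative proxies for market uncertainty (hypothetical data).

    sales: per-period sales of the relevant market
    n_spec_changes: number of product-specification changes observed
    n_periods: length of the observation window
    """
    sales = np.asarray(sales, dtype=float)
    # Volatility in sales: coefficient of variation (std / mean)
    sales_volatility = sales.std(ddof=1) / sales.mean()
    # Technological uncertainty: frequency of specification changes
    tech_uncertainty = n_spec_changes / n_periods
    return {"sales_volatility": sales_volatility,
            "spec_changes_per_period": tech_uncertainty}

# Twelve quarters of hypothetical sales and three specification changes
print(uncertainty_proxies(
    [100, 130, 85, 150, 90, 160, 70, 155, 95, 140, 80, 165], 3, 12))
```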

Assess Importance of Frequency
Frequency is the last component of the transaction cost economics triad of asset specificity, uncertainty, and frequency. The argument is that the more frequently interactions between two specified parties take place, the higher the potential benefits from a unified governance structure. The implications for merger policy are obvious: assess the frequency with which the parties willing to merge interact. The more frequent it is, the higher the chance that efficiencies can be realised, and the more relaxed the competition policy stance should be.

2.5.1. Simple Tools

Delineate the Relevant Market Taking Both Demand and Supply Side into Account

The above analysis has shown that in order to delineate the relevant product market, the Commission relies heavily on the demand side. This has often led to too narrow a definition of the relevant market. The consequence of this narrow approach – which is also documented in the Commission's Notice on the definition of the relevant market – is that the application of the SSNIP test remains incomplete: if prices were raised by 5 or 10 per cent, new competitors might be induced to enter the market and the price increase would thus turn out to be unprofitable. The current practice is thus incoherent and should be modified.
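The logic of the SSNIP test can be illustrated with a standard critical loss calculation (a minimal sketch; the function names, the 40 per cent margin, and the loss estimates are purely illustrative assumptions).

```python
def critical_loss(price_increase, margin):
    """Break-even critical loss for a SSNIP: the fraction of sales the
    hypothetical monopolist can lose before a price rise of
    `price_increase` becomes unprofitable. Standard formula: X / (X + M),
    with X the relative price increase and M the relative margin."""
    return price_increase / (price_increase + margin)

def ssnip_profitable(price_increase, margin, predicted_loss):
    """The increase is profitable iff the predicted sales loss --
    demand-side switching plus supply-side entry -- stays below
    the critical loss."""
    return predicted_loss < critical_loss(price_increase, margin)

# A 5% increase at a 40% margin: critical loss = 0.05 / 0.45, about 11%.
# If supply-side substitution pushes the predicted loss to 15%, the
# increase is unprofitable and the candidate market was drawn too narrowly.
print(round(critical_loss(0.05, 0.40), 3))   # 0.111
print(ssnip_profitable(0.05, 0.40, 0.15))    # False -> widen the market
```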

Some of the business trends described in chapter III clearly point to the increased relevance of supply-side substitutability. For example, many steps in the value chain of a firm have been outsourced in recent years, often via management buy-outs or spin-offs. The newly created suppliers are frequently no longer confined to working exclusively for their former "parent company", but operate as independent suppliers in the market. Their products can thus be bought by anybody. This possibility of outsourcing many steps of the value chain means that it has become easier for entirely new firms to enter a market, as they do not have to make heavy investments. Supply-side substitutability has therefore increased in relevance and should regularly be taken into account by the Commission in its delineation of the relevant market.

Take the case of VARTA/BOSCH, already briefly alluded to above. In this case, the Commission identified two different product markets, namely the one for starter batteries on the original equipment market and the one for starter batteries on the replacement market. With regard to the goods' characteristics, there is no difference between these batteries. Nevertheless, the Commission identified two different groups demanding batteries: car producers on the one hand, and car maintenance and repair dealers on the other. But the Commission did not check whether the suppliers of replacement batteries were capable of providing batteries for the original car equipment. If this had been the case, it would have constrained the merged entity VARTA/BOSCH considerably.

This is an example in which the Commission did not test sufficiently for supply-side substitutability. It is proposed here that supply-side substitutability should be taken into consideration if a small but non-transitory price increase would enable producers to modify their supply and to offer it on the market within a reasonable time span. This was the case in VARTA/BOSCH, and supply-side substitutability should therefore have been taken into account.

Reliance on Quantitative Methods to Delineate the Relevant Market

This proposal essentially remains within the Harvard paradigm, i.e., the overwhelming importance attributed to the structure of markets is in no way questioned.

The delineation of the relevant market is often the single most important step in merger analysis: if it is delineated broadly, chances that a proposed merger will be approved are high and vice versa. The quest for “objective” procedures that allow the firms participating in a merger to predict the way the Commission will delineate the relevant market can therefore be easily understood. Quantitative methods are believed by some to be more objective than the currently used procedures. We thus set out to describe some quantitative tools before evaluating them critically.

In determining the geographically relevant market, the Commission has primarily relied on three criteria: (1) transport costs, (2) differences in prices, (3) differences in market shares caused by regional differences or differences in customer behaviour. At times, the last two criteria appear to be questionable: there can very well be single geographic markets with different prices and different market shares of the competitors in different parts of that market. What is important, however, is that prices are interdependent. Interdependency can be checked for using quantitative techniques.

A variety of quantitative methods are discussed in the literature (Bishop/Walker 1999, 167ff.). Two of them appear especially noteworthy, namely price-correlation analyses and shock analyses.

Price-correlation analysis looks at price changes of various products over a period of time. In case one observes high correlations between two or more products, they are grouped into the same market. In NESTLÉ/PERRIER, the correlation between still and sparkling mineral waters was high, whereas the correlation between mineral waters and other soft drinks (such as Coke) was low. It was concluded that still and sparkling waters belong to the same product market whereas mineral waters and other soft drinks do not. As is well known, there are many spurious correlations. A high correlation coefficient as such is thus not sufficient for grouping two or more products into the same market. Other potentially important variables such as the business cycle, inflation, seasonality, etc., need to be controlled for.

In order to group two or more products into one single market, a value judgment is necessary, namely above what level of the correlation coefficient one would group the products into a single market. Correlation coefficients can take on any value between –1 and +1, +1 indicating a perfect positive relationship and –1 a perfectly inverse relationship. A correlation of 0 means that the two time series display no (linear) relationship. It has been proposed not to use a fixed number (say .75) beyond which a single market should be assumed, but to draw a different line depending on the individual case. The line could be established by taking the (average of the) correlation coefficients of products that everybody agrees belong to one single market as a benchmark. If the correlation coefficients of the products whose grouping is questionable were sufficiently close to this benchmark, they would also be counted as being part of the same product market. Of course, there might be quarrels as to (1) agreeing on the definition of the benchmark products, and (2) agreeing on what "sufficiently" close means in an individual case.
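A minimal sketch of this benchmark approach (the function name, the tolerance band, and the price data are illustrative assumptions; in practice the series would first be purged of common factors such as inflation and seasonality):

```python
import numpy as np

def candidate_in_market(candidate_prices, member_prices, tolerance=0.10):
    """Compare the candidate's average correlation with the agreed market
    members against the average pairwise correlation among the members
    themselves (the benchmark)."""
    members = [np.asarray(p, dtype=float) for p in member_prices]
    # Benchmark: average pairwise correlation among undisputed members
    pairs = [np.corrcoef(members[i], members[j])[0, 1]
             for i in range(len(members))
             for j in range(i + 1, len(members))]
    benchmark = float(np.mean(pairs))
    # Candidate: average correlation with each member
    cand = np.asarray(candidate_prices, dtype=float)
    cand_corr = float(np.mean([np.corrcoef(cand, m)[0, 1] for m in members]))
    # "Sufficiently close" operationalised as a fixed tolerance band
    return cand_corr >= benchmark - tolerance, benchmark, cand_corr

# Hypothetical monthly prices: two undisputed members and one candidate
still     = [1.00, 1.02, 1.05, 1.04, 1.08, 1.10]
sparkling = [1.10, 1.13, 1.15, 1.13, 1.18, 1.21]
cola      = [0.90, 0.88, 0.93, 0.97, 0.91, 0.89]
print(candidate_in_market(cola, [still, sparkling]))
```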

Application of price correlation analysis is not restricted to determining the relevant product market but can also be extended to ascertaining the relevant geographic market: if price levels move together in various regions or states, they would be grouped into a single market.

Price-correlation analysis is, however, not without problems. Its results crucially depend on the level of aggregation chosen. The higher the level of aggregation, the more average values enter the analysis. Drawing on average values will, however, blur the results. Carrying out price-correlation analyses, one should therefore choose a rather low level of aggregation. Another problem is that a low correlation of prices can, e.g., be caused by time lags that occur only with regard to one product due to production-specific reasons. Conversely, a high correlation can be caused by common influences that occur although the products are on two different markets. Energy prices in energy-intensive production processes could be an example. In order to control for this problem, price-correlation analysis should therefore be combined with so-called "unit root tests". The underlying idea is very simple: one checks whether the time series under consideration converges to a constant at least in the long run. Following a shock, one would observe deviations from the long-term trend, but these would only have a transitory character and the time series would return to its long-term path.
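A minimal sketch of such a unit root test on the ratio of two price series, using the augmented Dickey-Fuller test from statsmodels (the significance level and the interpretation rule are illustrative assumptions):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def stationary_price_ratio(prices_1, prices_2, alpha=0.05):
    """Test whether the log price ratio of two goods reverts to a
    constant long-run level. Rejecting the unit root (small p-value)
    supports grouping both goods into the same market."""
    ratio = (np.log(np.asarray(prices_1, dtype=float))
             - np.log(np.asarray(prices_2, dtype=float)))
    adf_stat, p_value, *_ = adfuller(ratio)
    return p_value < alpha  # True: stationary ratio -> one market
```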

Figure 8: Price ratio between goods 1 and 2

In the long run, the price of good 1 is constant relative to the price of good 2. Deviations from the long-run trend are only temporary. In such a case, both products should be grouped into the same market.

Shock analysis looks at price changes subsequent to exogenous shocks that were unpredictable for the market participants. Examples of such shocks are political crises, wars, currency crises, etc. Shocks can change the business environment for market participants, and reactions to them can reveal how firms themselves perceive the relevant markets. Take an unanticipated change in exchange rates. This can lead to price disparities between currency areas. If the relevant geographic market extends beyond currency areas, this should be reflected in modified trade flows and a tendency of prices to converge.
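One simple way to operationalise this is to compare the price gap between two regions before and after the shock (a sketch under stated assumptions: the convergence horizon and the 10 per cent band are arbitrary illustrations).

```python
import numpy as np

def prices_reconverge(prices_a, prices_b, shock_at, horizon=12):
    """Check whether the inter-regional price gap opened by a shock
    closes again within `horizon` periods -- evidence that arbitrage
    links both regions into one geographic market."""
    gap = np.abs(np.asarray(prices_a, dtype=float)
                 - np.asarray(prices_b, dtype=float))
    pre_gap = gap[:shock_at].mean()
    post_gap = gap[shock_at + 1:shock_at + 1 + horizon].mean()
    shock_widened_gap = gap[shock_at] > pre_gap
    # Convergence: post-shock gap falls back towards its pre-shock level
    return shock_widened_gap and post_gap <= 1.1 * pre_gap
```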

Entry of a new product sometimes comes as a “shock” to competitors. Their reaction to new products can help ascertain whether they perceive themselves to be acting in the same market. This could be reflected in price reductions, but also in increased spending for advertising.

It would, however, not make sense to demand that the Commission use shock analysis on a regular basis, simply because a suitable shock might not always be available. As the European economies have moved ever closer together, dramatic differences have become less likely. The most obvious example of this is the introduction of the euro, which makes currency shocks within Euroland virtually impossible. A problem with the application of shock analysis is that all analysts have to agree that a certain event was really unpredictable and thus constituted a genuine shock.

The so-called Elzinga-Hogarty test can be used as a tool to further clarify the correct delineation of the relevant geographic market. It is an example of a trade-pattern test. It is based on the presumption that if there are either substantial exports from a region or substantial imports into another region, the regions are closely intertwined and can be grouped together as forming a single geographic market. It should be stressed from the outset that turning the test around could mean committing a fallacy of reverse causation: from the absence of any interregional trade, one cannot convincingly conclude that one is dealing with two (or more) separate geographic markets.

The Elzinga-Hogarty procedure establishes two criteria, namely LIFO (little in from outside) – a low import share – and LOFI (little out from inside) – a low export share. If either of the criteria is not fulfilled, there is a presumption that one is dealing with a single geographic market spanning the regions concerned. In applying the test, one again needs to make a value judgment: here, it needs to be decided beyond which level imports or exports can be called "significant".
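A minimal sketch of the two criteria (the 10 per cent cut-off stands in for the value judgment just mentioned and is purely illustrative):

```python
def elzinga_hogarty(production, consumption, imports, exports, cutoff=0.10):
    """LIFO/LOFI check for a candidate region. Only if both shares stay
    below the cut-off does the region qualify as a separate market."""
    lifo = imports / consumption  # share of consumption served from outside
    lofi = exports / production   # share of production shipped outside
    return {"LIFO": lifo, "LOFI": lofi,
            "separate_market": lifo <= cutoff and lofi <= cutoff}

# Hypothetical region: 25% of consumption is imported, so LIFO fails and
# the region belongs to a wider geographic market.
print(elzinga_hogarty(production=1000, consumption=1200,
                      imports=300, exports=80))
```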

At times, use of quantitative methods can be of help in delineating the relevant market in its various dimensions. Their use by the firms willing to merge should thus be encouraged by the Commission. On the other hand, all available quantitative methods display some shortcomings (which are dealt with quite extensively by Froeb and Werden 1992). They should thus only be one instrument for the delineation of the relevant market.

Reliance on Quantitative Methods to Assess Dominance

The current practice assesses concentration by drawing on concentration ratios (particularly CR1, CR4, and CR8; see Schmidt 2001, 137). These indicators only comprise information concerning market structure before and after a notified merger. As such, they do not enable the Commission to make any predictions concerning the intensity of competition or the use of competitive parameters such as prices. Assessing dominance solely on the basis of these indicators is therefore of little help.
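For reference, the concentration ratio CRk is simply the combined share of the k largest firms (a short sketch with hypothetical shares):

```python
def concentration_ratio(shares, k):
    """CRk: combined market share of the k largest firms -- a purely
    structural snapshot, which is exactly the limitation noted above."""
    return sum(sorted(shares, reverse=True)[:k])

shares = [30, 20, 15, 10, 10, 8, 4, 3]  # hypothetical shares in per cent
print(concentration_ratio(shares, 1))   # CR1 = 30
print(concentration_ratio(shares, 4))   # CR4 = 75
print(concentration_ratio(shares, 8))   # CR8 = 100
```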


Figure 9: A hypothetical example of price-concentration analysis

Drawing on price-concentration analysis could enhance predictive power. The assumption underlying the entire structure-conduct-performance approach is that high concentration is a good proxy for market power. High degrees of concentration are thus expected to lead to higher prices. Price-concentration analysis compares different geographic markets in which different degrees of concentration are observed. It asks whether higher degrees of concentration do indeed tend to lead to higher prices.

Figure 9 is a hypothetical example. The horizontal axis denotes the degree of concentration found in a specific geographic market; the vertical axis shows the price found there. Each data point represents one country. Simple econometric analysis reveals that – in the example – higher concentration tends, if anything, to lead to lower prices. If that is observed, a merger in one of these markets cannot be expected to lead to higher prices. It should thus be passed.
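A minimal sketch of such an analysis as a one-variable OLS regression (the country data are invented to mirror the hypothetical figure; a real application would control for demand and cost differences across markets):

```python
import numpy as np

def price_concentration_slope(concentration, prices):
    """OLS slope of price on concentration across geographic markets.
    A non-positive slope suggests that higher concentration does not
    translate into higher prices."""
    slope, _intercept = np.polyfit(np.asarray(concentration, dtype=float),
                                   np.asarray(prices, dtype=float), 1)
    return slope

# One (concentration, price) pair per country, hypothetical values
concentration = [0.35, 0.45, 0.55, 0.65, 0.75]
prices        = [10.4, 10.1, 10.0, 9.6, 9.3]
print(price_concentration_slope(concentration, prices))  # negative here
```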

Some Critical Remarks Concerning Quantitative Techniques

Quantitative techniques are appealing. They seem to indicate exactness, which is conducive to predictability. Yet, the precision that can be gained by drawing on quantitative techniques should not be overestimated. One still needs to decide what to count – and what not. And simply counting certain entities will not do the trick: one needs criteria by which to judge the numbers one has come up with. Value judgments concerning the definition of threshold levels and the like will thus remain necessary. Still, one could argue that value judgments are necessary in any event and that, having made them, quantitative techniques can increase the objectivity of the assessments made by the Commission.

The Commission operates under tight resource constraints. Demands that it should regularly carry out a certain number of tests thus seem to have little chance of implementation. It is established practice that notifying parties offer one or more quantitative techniques in order to make their case. Predictability could be improved by publishing a list of quantitative tools that the Commission will consider. But the Commission will need to retain the competence to decide whether the tools have been adequately used. Such a published list should not be interpreted as exhaustive, though, as that would inhibit the use of innovative techniques unknown at present.

Assessing the Importance of Customer Loyalty for Delineating the Relevant Market More Systematically

The existence of brand labels and customer loyalty has been a crucial factor in the definition of product markets and the ascertainment of barriers to entry in the decision practice of the European Commission. Given that brand labels and no-name labels are functionally equivalent, the central question is whether these products are substitutable or not. In the case of SCA/METSÄ TISSUE, the Commission denied the existence of substitutability between brand and no-name labels and hence defined two different product markets exactly along these borders. But this has certainly not been the only case in which the Commission believed that the existence of a brand label constituted a serious barrier to entry.

Quite frequently, the European Commission decides to delineate separate product markets if customer loyalty with regard to certain products exists. This often leads to too narrow a delineation of markets. The way customer loyalty is taken into account often appears unsystematic. Based on sound economic reasoning, one would ask in what situations customer loyalty leads to significant gaps in substitution given that the products are functionally equivalent. If gaps in substitution are substantial, then the delineation of separate product markets seems reasonable. We propose to take customer loyalty into account only if a transaction cost rationale for it can be named. We thus propose to distinguish between rationally explainable and rationally unexplainable customer loyalty and argue that only the first kind should play any role in merger policy.

The emergence of consumer loyalty can be explained by the presence of asymmetric information between suppliers and buyers. In general, one can expect uncertainty to be present on the side of consumers with regard to the quality and other characteristics of the goods offered by the suppliers. The supplier can be assumed to be informed about the characteristics of the goods offered ex ante; the buyer, on the other hand, will often only be able to ascertain the quality after having consumed the good, i.e., ex post. It has been shown that such information asymmetries can lead to the breakdown of markets (Akerlof 1970). One consequence of uncertainty with regard to quality is higher transaction costs caused by additional information-gathering activities. In such a situation, consumer loyalty can be interpreted as a mechanism to reduce uncertainty and the transaction costs accruing as a consequence of uncertainty. As soon as a consumer has found a good that conforms to his/her preferences, s/he will keep consuming it even if other suppliers offer equally good products. Consumer loyalty can thus be conceptualised as a device to save on transaction costs.

With regard to information asymmetry, the economic literature distinguishes between three kinds of goods, namely search, experience, and trust goods (Nelson 1970). With search goods, consumers can acquire information on the relevant characteristics of the good before buying it. After having bought it, the information can be verified. With experience goods, reliable information on the characteristics is not obtainable before buying the good; the characteristics have to be "experienced" after the purchase. With trust goods, the relevant characteristics are ascertainable neither before nor after buying the good.

In merger policy, the good's characteristics with regard to information asymmetries should thus be taken into account. A rather narrow delineation based on substitution gaps can only be justified if one deals with long-lived experience goods or trust goods. In ascertaining the relevant information asymmetries, the focus should be on the relevance of screening activities on the part of the consumers and signalling activities on the part of the supplying firms. The harder screening is for consumers, the more relevant signalling activities will be. Customer loyalty is a ubiquitous phenomenon. Based on the approach proposed here, it could be taken into account in a more systematic fashion than has hitherto been the case.

It is thus proposed that customer loyalty be used to delineate different product markets only if one is dealing with durable experience goods or trust goods. For search goods and non-durable experience goods, reliance on customer loyalty will lead to rather spurious delineations.

A representative of the traditional welfare economic approach might object that the creation of brand labels can enable firms to set prices above marginal costs and that this would mean losses in allocative efficiency and the transformation of consumer rents into producer rents. But this objection overlooks that the utility of some consumers can be increased if they are offered differentiated products that cater to their preferences better than standardised ones. For some consumers, it is even utility-enhancing to buy brand labels at a higher price and thus differentiate themselves from other consumers (Veblen effect). It can hence be asked whether it should be the task of competition policy to protect consumers from this kind of market power, as doing so would prevent some consumers from reaping additional utility. This argument should be considered in the light of the obvious resource constraints present in competition authorities. It is thus argued that competition policy should rather focus on those cases that could mean a monopolisation of entire markets.

In defining relevant product markets, recognition of consumer loyalty should thus play a marginal role at best. It should also be recognised that the creation of brand labels is a genuine entrepreneurial task, which necessitates substantial investment and involves substantial risks. A producer will only be able to create customer loyalty if he offers a product with convincing quality characteristics or other traits that cater to the preferences of the consumer. From an economic point of view, the ensuing increase in his price-setting range also has advantages: it entails incentives to offer products that best meet consumer preferences. As long as functional substitutability is guaranteed, firms successful in creating a brand and customer loyalty should not be punished for successful entrepreneurial activity.

The relevance of consumer loyalty should not be overestimated either. It is true that a newcomer would have to invest in the creation of a new brand. But it is also true that an incumbent has to invest in conserving a brand label and reputation. Although the creation of new brands often involves substantial investment, there are exceptions: the firm producing the energy drink Red Bull, for example, successfully established its product against brands like Coca-Cola and Pepsi with very little advertising expenditure. This case shows that consumer loyalty does not necessarily constitute a substantial barrier to entry.

2.5. Proposals Towards Enhancing Predictability

In this section of chapter IV, we have so far dealt with the standard approach used to delineate relevant markets and to assess dominance, we have pointed to some possible consequences of theoretical developments as well as of business trends, and we have described and critically evaluated the current practice of the European Commission with regard to these issues. We now set out to make some proposals for how the current practice of the European Commission could be improved. These are grouped into three sections: simple steps to improve predictability, improvements based on theoretical developments, and improvements taking the changes in the business environment explicitly into account.

2.4.3. Assessing Dominance

After having delineated the relevant market, an assessment has to be made whether a merger would significantly impede effective competition and, in particular, whether it would create or strengthen a dominant position. As already mentioned in the introduction to this chapter, our discussion here will be confined to the way the Commission has been assessing dominance in the past. According to the European Merger Regulation, mergers that create or strengthen a dominant position that prevents effective competition from taking place in the Common Market (or in a substantial part of it) must be declared incompatible with the Common Market [Art. 2 (3) MR].

The European Merger Regulation does not provide an explicit answer to the question of when a dominant position is created or strengthened. The notion of a dominant position had been around long before the Merger Regulation was passed; it is part of Art. 82 TEC. There are a number of decisions by the Court of Justice which the Commission explicitly recognises in its own decisions. According to the Court, a firm is considered to have a dominant position if it has the capacity to develop market strategies independently of its competitors. This implies that the firm has at its disposal some leeway that is not controlled by either its competitors (horizontal aspect) or its suppliers or consumers (vertical aspect).

This means that the Commission has to analyse a number of aspects in order to assess whether a firm will command a dominant position. The Regulation [Art. 2 (1) lit. b MR] explicitly mentions the economic and financial power of the firms participating in the merger, the options available to suppliers and customers, their access to supply and outlet markets, legal and factual barriers to entry, the development of supply and demand as well as the interests of intermediate and final consumers. In assessing horizontal mergers, the Commission has regularly analysed four structural factors (European Commission 1992, 408; Neven/Nuttal/Seabright 1993, 101; and Schmidt/Schmidt 1997, 184):

(1) The market position of the merged firm;
(2) The strength of remaining competition;
(3) Buyer power;
(4) Potential competition.

With regard to the market position of the merged firm, the Commission has been reluctant to publish degrees of concentration that it considers critical. It seems nevertheless possible to form various classes of market shares. If the combined market share is below 25%, single dominance can regularly be excluded (recital 15 of the old Merger Regulation, now recital 32). With market shares of up to 39%, dominance will only rarely be assumed. If market shares are between 40 and 69%, the assessment will have to take the relevance of actual and potential competition explicitly into account. But generally, market shares of 40% and more are interpreted as a strong indication that a dominant position might exist (Jones/González-Diaz 1992, 132). Besides aggregate market shares, the financial strength of the participating firms, their technological know-how, existing capacities, their product range, their distribution networks, and long-term relationships with important customers are taken into account. In evaluating the market shares, the relevant market phase is explicitly recognised. High market shares in high-technology growth sectors are thus evaluated less critically than the same market shares in slowly growing markets.

The strength of the remaining competition is evaluated by drawing on the market shares of the remaining competitors, their financial strength, their know-how and production capacities, as well as their distribution networks. In case a new market leader emerges as the result of the merger, the difference in market share between the merged firm and the next largest competitors is considered. Additionally, the number of remaining competitors is counted in order to assess the alternative sources of supply for the relevant products.

In ascertaining buyer power, the Commission focuses primarily on the bargaining strength of the other side of the market. It is assumed to have considerable bargaining power once relatively small market shares (5-15%) are surpassed.

Potential competition is recognised if there is clear evidence of a high probability that quick and substantial entry by either established competitors or entirely new firms will take place. Potential competition must be perceived as a threat by the merged firm in the sense that it is sufficient to prevent it from acting independently of competitive pressure post-merger.

Moreover, the Merger Regulation requires the Commission to take the development of technological and economic progress into account as long as it serves the consumer and does not prevent competition [Art. 2 (1) lit. b MR]. This element has been interpreted as meaning that efficiency aspects should be taken into consideration in evaluating proposed mergers (Noel 1997, 503; Camesasca 1999, 24). But this element is only taken into consideration if the notifying parties claim that their merger serves technological or economic progress. It is the firms who carry the onus of proof. This criterion was checked in AÉROSPATIALE-ALENIA/DE HAVILLAND, GENCOR/LONRHO, and NORDIC SATELLITE DISTRIBUTION. In all these cases, the Commission deemed the efficiency gains insufficient. In the history of European merger regulation, not a single merger has been cleared with explicit reference to its positive technological or economic effects. Efficiency considerations have thus only played a marginal role in the Commission's decision-making practice (Schmidt 1998, 250).

In assessing dominance, market shares are still the single most important criterion, although the Commission insists that market shares as such are not sufficient for the assumption of a dominant position. This would be the case if the other criteria spelt out above reduce the relevance of market shares. The Commission stressed such compensating effects in its decisions in ALCATEL/TELETTRA, MANNESMANN/HOESCH, and MERCEDES-BENZ/KÄSSBOHRER. What is problematic about these offsetting factors is the enormous leeway that the Commission has in applying and interpreting them. This is especially noteworthy with regard to the evaluation of potential competition. In some cases, it was liberally assumed to be relevant (MANNESMANN/VALLOUREC/ILVA, MERCEDES-BENZ/KÄSSBOHRER, SIEMENS/ITALTEL); in others, it was interpreted quite restrictively (RTL/VERONICA/ENDEMOL, ST. GOBAIN/WACKER CHEMIE/NOM, VOLVO/SCANIA, and METSO/SVEDALA). This enormous leeway has thus led to inconsistencies in the Commission's decisions.

The revised Merger Regulation (139/2004) has introduced a new substantive criterion for ascertaining the compatibility of a notified merger with the common market. Before the revision, a merger had to be prohibited if it created or strengthened a dominant position. This criterion was extended through the revision: now, mergers that threaten to significantly impede effective competition will not be accepted (SIC test for short). The Commission has thus moved most of the way towards the Substantial Lessening of Competition (SLC) test prevalent in Anglo-Saxon countries. That the EU uses a different acronym for the new test seems to be an issue of semantics rather than of substance. The introduction of the SIC test has substantially increased the Commission's power to prohibit notified mergers. The purpose of the new criterion is to capture mergers that do create additional market power but do not quite reach the threshold of a dominant position. In its Guidelines, the Commission names a number of factors whose presence would lead it to expect that competition would be substantially impeded. The Commission proposes to distinguish between coordinated and non-coordinated effects.

Following the enactment of the Guidelines on horizontal mergers, market shares have gained additional ground as an important criterion for ascertaining whether a proposed merger is compatible with the common market. This implies a further strengthening of the very traditional Structure-Conduct-Performance paradigm; the introduction of the Herfindahl-Hirschman Index (HHI) attests to that. According to the Commission, a merger raises serious doubts if it leads to a post-merger HHI above 2,000 and to an increase in the index of at least 150 points. Although these threshold values are above those used in US merger control (1,800 and 100, respectively), they might well lead to a tougher merger policy. Suppose two companies with market shares of 8 and 10 per cent want to merge and there are four other companies that each hold 20.5 per cent of the market. The post-merger HHI would then amount to 2,005 and the merger-induced increase to 160 points, so the Commission would raise serious doubts as to whether the proposed merger is in accordance with the Merger Regulation. Notice, however, that recital 32 of the MR declares that mergers creating a combined market share below 25% are regularly not suspected to be problematic. In the example, a new firm with a combined market share of only 18% would make the Commission raise serious doubts. The Regulation and the Guidelines are thus partially contradictory.
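To make the arithmetic of the example explicit, here is a minimal sketch in Python; the function name is ours and the shares are the illustrative ones from the text, not data from any actual case.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares
    (shares expressed in percentage points)."""
    return sum(s ** 2 for s in shares)

# Example from the text: the merging firms hold 8% and 10%,
# four rivals hold 20.5% each.
pre = hhi([8, 10, 20.5, 20.5, 20.5, 20.5])   # 1845.0
post = hhi([18, 20.5, 20.5, 20.5, 20.5])     # 2005.0
delta = post - pre                           # 160.0 (= 2 * 8 * 10)
print(post, delta)  # both thresholds (2000, 150) are exceeded
```

Note that the merger-induced increase always equals twice the product of the merging firms’ shares, which is why it can be computed without knowing the rivals’ shares.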

What are the possible effects of switching to SIC on predictability? It seems sensible to distinguish between two cost categories that can arise as a consequence of the switch: (1) costs due to reduced predictability during the transition to the new test, and (2) costs that arise because predictability remains lower under the new test even after the transition uncertainty has vanished.

It is certain that transition costs will accrue. They arise because the Commission has leeway in interpreting the new Regulation; the same holds for the Court of First Instance and the European Court of Justice. Ex ante, it is unclear how much precedent will be worth under the new Regulation. Firms willing to merge will therefore have problems predicting the likely actions of the European organs. If the costs of switching to another Regulation are higher than the discounted value of the improvements brought about by the new Regulation, then it would, of course, be rational not to switch criteria.18 It is argued in this section that the predictability of European merger policy might suffer not only from transition costs but from a lower level of predictability even after the transition.
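The switching condition just stated can be formalised in a minimal way. All symbols below are our own notation: $C_S$ denotes the (transition) costs of switching, $\Delta B_t$ the period-$t$ improvement brought about by the new test, and $\delta \in (0,1)$ a discount factor:

\[
\text{switch} \iff C_S \;<\; \sum_{t=1}^{\infty} \delta^{t}\,\Delta B_t
\]

If predictability remains lower even after the transition, some of the $\Delta B_t$ may be negative, making the condition all the harder to satisfy.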

Many observers believe that European merger policy will become more restrictive after the switch to the SIC test, whereas the Commission insists that nothing substantial will change and that the switch only serves to increase legal certainty. These competing expectations concerning the future of European merger policy are part of the transition costs just mentioned. We believe that the new Regulation does indeed open up the possibility of a more restrictive merger policy:

– Non-coordinated effects can now be taken into account even if they do not create or strengthen a dominant position. One could even argue that this amounts to an introduction of oligopoly control.

– Non-coordinated effects will apply to all kinds of oligopolies. The Regulation does not confine its application to “highly concentrated” oligopolies or the like.

– Paragraphs 61 to 63 of the Merger Guidelines are entitled “Mergers creating or strengthening buyer power in upstream markets”. This might lead to an increase in the number of cases in which vertical or conglomerate issues are named as a cause for concern.

– There has even been discussion about whether the change of the substantive test will lead to a lowering of the SSNIP threshold value used.

What is relevant with regard to predictability is not that merger policy will necessarily become more restrictive but simply that such a possibility exists and that it is unclear what will, in fact, happen.

2.4.2. The Relevant Geographic Market

The next important step in delineating the relevant market is to take the geographic dimension into account. In practice, the different dimensions are ascertained sequentially: first the product and then the geographic dimension; the EU is no exception here, which is why we now turn to the geographic dimension. The general concept was already described in section 2.1 above. In summary fashion, the geographic market can be described as that area in which “the conditions of competition applying to the product concerned are the same for all traders” (Bellamy/Child 1993, 614). In the current practice of the Commission, the delineation of relevant geographic markets is dominated by four factors:

(1) Regional differences
In delineating the relevant geographic market, the European Commission often cites regional differences as a basis for a narrow delineation. Differences in national procurement, the existence of cross-border import duties, the need to access distribution and marketing infrastructure, and differences in language are all cited as reasons why competitors from abroad can be disregarded. In these cases, the Commission evidently deemed regional differences so important that it chose to delineate markets as national in scope.

The Commission considers differences in market shares an important indicator of separate geographic markets (MERCEDES-BENZ/KÄSSBOHRER; VOLVO/SCANIA). Remember that a market is defined as a set of products worth monopolising. The assumption that different market shares indicate different markets would thus need to be connected to this definition of the relevant market, but there does not seem to be a straightforward linkage. Moreover, there is no linkage between the similarity of market shares and substitutability in demand or supply.

(2) Prices
If two regions are in the same relevant market, then the prices charged in one region should affect the prices charged in the other region. This is not the same as saying that prices in both regions must be precisely identical: the geographic extent of the relevant market should reflect the similarity of price movements rather than the similarity of price levels. In VOLVO/SCANIA, however, differences in the prices for heavy trucks indicated different geographic markets according to the Commission: since prices differed from Member State to Member State, the Commission assumed that the markets for heavy trucks had to be delineated at the Member State level.
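To illustrate how co-movement rather than level similarity might be checked, consider the following minimal sketch in Python; the price series, the function name and the observation frequency are purely hypothetical.

```python
import numpy as np

def price_comovement(prices_a, prices_b):
    """Correlation of log price *changes* in two regions.

    A high correlation of changes suggests the two regions belong to
    one geographic market, even if price *levels* differ persistently
    (e.g., because of taxes or distribution costs).
    """
    changes_a = np.diff(np.log(np.asarray(prices_a, dtype=float)))
    changes_b = np.diff(np.log(np.asarray(prices_b, dtype=float)))
    return np.corrcoef(changes_a, changes_b)[0, 1]

# Hypothetical truck prices: region B is persistently ~10% dearer,
# yet both series react to the same shocks.
region_a = [100, 102, 101, 105, 107, 106]
region_b = [110, 112, 111, 115, 118, 117]
print(price_comovement(region_a, region_b))  # close to 1 -> co-movement
```

On this criterion, the two hypothetical regions would belong to the same market despite the persistent difference in price levels.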

(3) Consumer Preferences
According to the Commission, different consumer habits are another important indicator that one is dealing with different geographic markets. Different consumer habits are interpreted as an important barrier to entry, which could inhibit the interpenetration of markets. These could be based on different tastes, e.g., concerning beer, or different user habits, e.g., concerning feminine hygiene products. Additionally, differences in language and culture are taken as an indicator of different geographic markets (RTL/VERONICA/ENDEMOL; NORDIC SATELLITE DISTRIBUTION; BERTELSMANN/KIRCH/PREMIERE).

(4) Transport Costs
High transport costs can be an important cause of little trade between regions. According to the Commission, high transport costs are thus a reason for assuming different geographic markets. Firms are regularly asked about the maximum distance over which distribution of their products seems worthwhile; the answers received serve to delineate the relevant geographic market. They played an important role in a number of cases, e.g., in CROWN CORK & SEAL/METALBOX and SCA/METSÄ TISSUE. But the Commission should recognise that differences in transport costs – transport cost disadvantages, if you will – do not always justify the delineation of different markets. This is the case if these disadvantages can be offset by other advantages, such as economies of scale resulting from higher output.
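The offsetting argument can be made concrete with a small sketch, assuming a simple linear freight cost; all numbers are hypothetical.

```python
def delivered_cost(unit_cost, freight_per_km, distance_km):
    """Cost of one unit delivered over the given distance."""
    return unit_cost + freight_per_km * distance_km

# A distant producer enjoying scale economies (unit cost 80 vs. the
# local producer's 100) remains competitive up to 400 km at a freight
# cost of 0.05 per km: 80 + 0.05 * 400 = 100.
print(delivered_cost(80, 0.05, 400))  # 100.0 - disadvantage fully offset
print(delivered_cost(80, 0.05, 600))  # 110.0 - local producer wins
```

Within the 400 km radius, the transport cost disadvantage would thus not justify the assumption of a separate geographic market.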

2.4.1. The Relevant Product Market

Although the concepts of demand-side substitutability and supply-side substitutability are widely recognised by the Commission (see the Commission’s Notice) and the Community Courts, they have not been applied consistently in the past. The test of demand-side substitutability employed by the Commission seeks to determine which products are sufficiently similar to be regarded by users as reasonable substitutes for one another. In AÉROSPATIALE-ALENIA/DE HAVILLAND, the Commission stated that a relevant product market comprises, in particular, all those products that are regarded as interchangeable or substitutable by the consumer with regard to their characteristics, their prices and their intended use.

The Commission’s practice concerning the recognition of supply-side effects has been subject to harsh criticism. Neven et al. (1993, 77) wrote: “The procedures used for market definition are frequently inconsistent; in particular supply substitution is sometimes taken into account at the market definition stage and sometimes at the stage of assessing dominance, with usually no clear rationale for the difference in treatment. Supply substitution is also sometimes counted twice. The failure to take supply substitution into account will probably tend on average to result in excessively narrow market definitions. Although it is not possible to point with confidence to particular cases in which this bias has made a difference to the market definition adopted, we indicate one or two instances where it could have been significant.” This evaluation was written some two years after the European Merger Regulation had been passed. Here, we ask whether the Commission’s practice has been modified during the last ten years.

At times, the Commission points to supply-side substitutability; often, however, it is not incorporated into the delineation of the relevant market. An example of this practice is VOLVO/SCANIA.

While the concept of supply-side substitution is recognised in practice, experience suggests that the Commission’s delineation of the relevant market focuses principally on demand-side considerations; supply-side substitution, if it is considered at all, tends to be more of an afterthought. Moreover, supply-side considerations play no role in the product market definition according to Form CO. The Commission’s view of whether two products or regions should be included in the same relevant market thus depends almost exclusively on their substitutability from the perspective of consumers.

A worrying aspect of the Commission’s rare use of supply-side considerations is that it has defined separate markets on supply-side grounds where demand-side considerations would suggest a single market. In these cases, the Commission used differences in the conditions of competition and distribution to identify different product markets, although the products themselves were fully interchangeable or even identical. In VARTA/BOSCH, e.g., the Commission introduced a distinction between starter batteries supplied to the original equipment market and those supplied to the replacement market. However, the Commission did not consider the relevant question of whether suppliers of replacement batteries would switch to supplying original equipment batteries if the prices of the latter were to rise. Another example is SCA/METSÄ TISSUE: here, the Commission defined separate markets for branded and private-label toilet tissues and kitchen towels, although the products were made by the same producers using the same technology and identical machines.

This approach is problematic and cannot easily be reconciled with the concept of the relevant market. From the perspective of economic theory, a relevant market is defined as a set of products worth monopolising. This has entered antitrust practice by way of the SSNIP test described above. As long as a candidate market is not sufficiently attractive to be worth monopolising, it cannot be the relevant market and needs to be broadened.
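The logic of “worth monopolising” can be sketched via a break-even (“critical loss”) calculation, a standard way of operationalising the SSNIP test; the function names and numbers below are ours and purely illustrative.

```python
def critical_loss(price_rise, margin):
    """Break-even loss of sales for a hypothetical monopolist.

    A price rise of `price_rise` (e.g. 0.05 for a 5% SSNIP) remains
    profitable as long as the share of sales lost stays below
    price_rise / (price_rise + margin), where `margin` is the gross
    margin (price - cost) / price.
    """
    return price_rise / (price_rise + margin)

def worth_monopolising(price_rise, margin, predicted_loss):
    """True if the candidate market passes the SSNIP test."""
    return predicted_loss < critical_loss(price_rise, margin)

# 5% SSNIP at a 40% margin: critical loss = 0.05 / 0.45, roughly 11.1%.
print(worth_monopolising(0.05, 0.40, 0.08))  # True  -> relevant market
print(worth_monopolising(0.05, 0.40, 0.15))  # False -> broaden the market
```

If the predicted loss of sales exceeds the critical loss, the hypothetical price rise is unprofitable, the candidate market is not worth monopolising, and the delineation must be widened.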

When defining the relevant product market, the Commission has focused its analysis primarily on three factors:

(1) Physical characteristics of the product or service
The European Commission has stated that if two products are physically so different that they cannot in fact be used for the same end use, they will not be considered substitutes. In RENAULT/VOLVO and VOLVO/SCANIA, e.g., the Commission defined separate markets for trucks (below 5 tons, between 5 and 16 tons, and over 16 tons) on the grounds that they were technically very different and had different end uses. If consumers in the segment between 5 and 16 tons can, in fact, not easily use larger trucks for the same purposes as the smaller ones, then the delineation of distinct product markets seems justified. However, it appears at least not unlikely that there are also consumers who would substitute between the two segments under some circumstances. For instance, some consumers might respond to an increase in the price of 17-ton trucks by buying a 15-ton truck instead; conversely, if the price of a 15-ton truck were to increase, some customers might switch to a 16- or 17-ton truck. The question is therefore whether enough consumers would switch their demand to make an attempt to increase prices in one category unprofitable.

In MERCEDES-BENZ/KÄSSBOHRER, VOLVO/SCANIA and MAN/AUWÄRTER, the Commission distinguished three individual segments of the bus market: city buses, inter-city buses and touring coaches. The Commission defined the market narrowly in part because there were differences in the specific use of the buses: it was argued that there was no demand-side substitutability between low-floor city buses and double-decker touring coaches equipped with toilet, kitchen and video. Yet the various types of buses can be produced using exactly the same machinery; the possibility of supply-side substitution therefore imposes constraints on the pricing of the various types of buses. This suggests that the relevant product market did indeed encompass all three types of bus.

(2) Product prices
The Commission has often inferred that two products are not reasonably substitutable if they have substantially different prices. Examples of this line of reasoning can be found in NESTLÉ/PERRIER, PROCTER&GAMBLE/VP SCHICKEDANZ, AÉROSPATIALE-ALENIA/DE HAVILLAND and KIMBERLY-CLARK/SCOTT PAPER. Price differences have therefore been used to distinguish between products that are perceived as different by consumers but that may be functionally substitutable. If differences in price are caused by different characteristics of the good, they can indeed justify the assumption of more than one relevant market. But at times, differences in price are completely unrelated to functional substitutability, as between branded labels and distributors’ own labels. Differences in price are thus not a sufficient condition for delineating two or more different markets: were the price of a premium product to rise substantially, consumers might be willing to switch to a good outside the premium segment of the market that has been cheaper all along.

The argument can be highlighted by drawing on a real-world example. In KIMBERLY-CLARK/SCOTT and SCA/METSÄ TISSUE, branded products compete with unbranded, i.e., private-label, equivalents. A question central to the assessment of competition was whether private-label toilet tissue was in the same relevant market as branded tissue. Branded tissue often costs more than private-label tissue, despite the fact that there is little to distinguish between them on a physical basis; indeed, manufacturers of branded products produce many private-label products. Using differences in absolute prices to delineate relevant markets would place the branded product and the private-label product in separate relevant markets. Part of this price differential is obviously due to perceived quality differences, which are valued by consumers. Still, one would have to ask whether the pricing of branded-label products was constrained by the prices of private-label products. This is clearly the case; hence, price differences are not sufficient for identifying two different markets. Rather, it is the interdependence of prices that matters.

(3) Consumer Preferences
The Commission also regards consumer preferences as relevant to delineating relevant markets. Despite the existence of substitutes at similar prices, the Commission may hold that consumer loyalty will limit substitution away from the product concerned following a price rise. Whilst the use of consumer loyalty for the purposes of either market definition or the assessment of barriers to entry can be questioned, we do not take issue with this fundamental question here. If consumer loyalty is particularly high, then a price rise by a hypothetical monopolist may not induce substitution, suggesting separate markets. But in most decisions it is unclear how the Commission could even measure the extent of consumer loyalty. Without an empirical test of the importance of brand or consumer loyalty, this argument therefore introduces a high degree of subjectivity into the assessment of competition. In section 2.5 of this chapter, we develop a proposal for how the relevance of customer loyalty for the delineation of the relevant product market can be assessed on a systematic basis.