Monday, August 3, 2009

2.5.1. Simple Tools

Delineate Relevant Market Taking Both Demand and Supply Side Into Account The above analysis has shown that in order to delineate the relevant product market, the Commission relies heavily on the demand side. This has often led to too narrow a definition of the relevant market. The consequence of this narrow approach – which is also documented in the Commission’s Notice on the definition of the relevant market – is that the application of the SSNIP test remains incomplete: if prices were raised by 5 or 10 per cent, new competitors might be induced to enter the market and the price increase would thus turn out to be unprofitable. The current practice is thus incoherent and should be modified.
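The logic of the argument can be illustrated with a hypothetical calculation (a Python sketch; all numbers and the `lost_share` parameter are invented for illustration): a 5 per cent price increase that looks profitable under a narrow, demand-only view becomes unprofitable once supply-side substitution or entry is factored in.

```python
def ssnip_profitable(price, quantity, unit_cost, increase=0.05, lost_share=0.0):
    """Check whether a small but significant non-transitory price increase
    pays off for a hypothetical monopolist. `lost_share` is the fraction of
    sales lost to substitutes or new entrants after the increase (hypothetical
    parameter for illustration)."""
    profit_before = (price - unit_cost) * quantity
    new_price = price * (1 + increase)
    new_quantity = quantity * (1 - lost_share)
    profit_after = (new_price - unit_cost) * new_quantity
    return profit_after > profit_before

# Hypothetical numbers: price 100, unit cost 60, 1000 units sold.
# With little substitution the 5% increase is profitable ...
print(ssnip_profitable(100, 1000, 60, lost_share=0.02))   # True
# ... but if entry induced by the higher price costs 20% of sales, it is not,
# and the candidate market was delineated too narrowly.
print(ssnip_profitable(100, 1000, 60, lost_share=0.20))   # False
```

The point of the sketch is only that ignoring the supply side biases the SSNIP test towards finding the price increase profitable, and hence towards too narrow a market.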

Some of the business trends described in Chapter III clearly pointed to the increased relevance of supply-side substitutability. For example, many steps in the value chain of a firm have been outsourced in recent years, often via management buy-outs or spin-offs. The newly independent suppliers are frequently no longer confined to working exclusively for their former “parent company” but operate freely in the market, so their products can be bought by anybody. This possibility to outsource many steps of the value chain means that it has become easier for entirely new firms to enter a market, as they do not have to make heavy investments. Supply-side substitutability has therefore increased in relevance and should be taken into account regularly by the Commission in its delineation of the relevant market.

Take the case of VARTA/BOSCH, already briefly alluded to above. In this case, the Commission identified two different product markets, namely one for starter batteries on the original equipment market and one for starter batteries on the replacement market. With regard to the goods’ characteristics, there is no difference between these batteries. Nevertheless, the Commission identified two different groups demanding batteries: car producers on the one hand, and car maintenance and repair dealers on the other. But the Commission did not check whether the suppliers of replacement batteries were capable of providing batteries for the original car equipment. If this had been the case, it would have constrained the merged entity VARTA/BOSCH considerably.

This is an example in which the Commission did not test sufficiently for supply-side substitutability. It is proposed here that supply-side substitutability should be taken into consideration whenever a small but non-transitory price increase would enable producers to modify their supply and to offer it on the market within a reasonable time span. This was the case in VARTA/BOSCH, and supply-side substitutability should therefore have been taken into account.

Reliance on Quantitative Methods to Delineate the Relevant Market This proposal essentially remains within the Harvard paradigm, i.e., the overwhelming importance attributed to the structure of markets is in no way questioned.

The delineation of the relevant market is often the single most important step in merger analysis: if it is delineated broadly, chances that a proposed merger will be approved are high and vice versa. The quest for “objective” procedures that allow the firms participating in a merger to predict the way the Commission will delineate the relevant market can therefore be easily understood. Quantitative methods are believed by some to be more objective than the currently used procedures. We thus set out to describe some quantitative tools before evaluating them critically.

In determining the relevant geographic market, the Commission has primarily relied on three criteria: (1) transport costs, (2) differences in prices, and (3) differences in market shares caused by regional differences or differences in customer behaviour. At times, the last two criteria appear questionable: there can very well be a single geographic market with different prices and different market shares of the competitors in different parts of that market. What is important, however, is that prices are interdependent. Interdependence can be checked for using quantitative techniques.

A variety of quantitative methods are discussed in the literature (Bishop/Walker 1999, 167ff.). Two of them appear especially noteworthy, namely price-correlation analyses and shock analyses.

Price-correlation analysis looks at the price changes of various products over a period of time. If one observes high correlations between two or more products, they are grouped into the same market. In NESTLÉ/PERRIER, the correlation between still and sparkling mineral waters was high, whereas the correlation between mineral waters and other soft drinks (such as Coke) was low. It was concluded that still and sparkling waters belong to the same product market whereas mineral waters and other soft drinks do not. As is well known, however, there are many spurious correlations. A high correlation coefficient as such is thus not sufficient for grouping two or more products into the same market. Other potentially important variables such as the business cycle, inflation, seasonality, etc., need to be controlled for.

In order to group two or more products into one single market, a value judgment is necessary, namely at what level of the correlation coefficient one would group the products into a single market. Correlation coefficients can take on any value between –1 and +1, with +1 indicating a perfect positive relationship and –1 a perfectly inverse relationship. A correlation of 0 means that there is no linear relationship between the two time series. It has been proposed not to use a fixed number (say .75) beyond which a single market should be assumed but to draw a different line depending on the individual case. The line could be established by taking the (average of the) correlation coefficients of products that everybody agrees belong to one single market as a benchmark. If the correlation coefficients of the products whose grouping is questionable were sufficiently close to this benchmark, they would also be counted as part of the same product market. Of course, there might be quarrels as to (1) agreeing on the definition of the benchmark products, and (2) agreeing on what “sufficiently close” means in an individual case.
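The benchmark idea can be sketched as follows (a Python illustration using NumPy; the price series are simulated and the 0.2 closeness margin is an arbitrary choice standing in for the value judgment just described):

```python
import numpy as np

def correlation(p1, p2):
    """Pearson correlation between two price series."""
    return float(np.corrcoef(p1, p2)[0, 1])

# Hypothetical monthly prices. Goods A and B are the agreed-on benchmark
# pair (everybody accepts they share a market); good C is the doubtful one.
rng = np.random.default_rng(0)
common = rng.normal(0, 1, 60)                    # shared demand shocks
p_a = 100 + 5 * common + rng.normal(0, 1, 60)
p_b = 102 + 5 * common + rng.normal(0, 1, 60)
p_c = 95 + rng.normal(0, 5, 60)                  # independent price path

benchmark = correlation(p_a, p_b)
candidate = correlation(p_a, p_c)
# "Sufficiently close" is the value judgment; here, within 0.2 of the benchmark.
same_market = candidate >= benchmark - 0.2
print(f"benchmark {benchmark:.2f}, candidate {candidate:.2f}, same market: {same_market}")
```

The simulated good C fails the closeness test and would be placed on a separate market; with real data, the contentious steps remain the choice of benchmark products and of the margin.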

Application of price correlation analysis is not restricted to determining the relevant product market but can also be extended to ascertaining the relevant geographic market: if price levels move together in various regions or states, they would be grouped into a single market.

Price-correlation analysis is, however, not without problems. Its results crucially depend on the level of aggregation chosen: the higher the level of aggregation, the more average values enter the analysis, and drawing on average values will blur the results. In carrying out price-correlation analyses, one should therefore choose a rather low level of aggregation. Another problem is that a low correlation of prices can be caused, e.g., by time lags that occur only with regard to one product due to production-specific reasons. Conversely, a high correlation can be caused by common influences that occur although the products are on two different markets; energy prices in energy-intensive production processes could be an example. In order to control for this problem, price-correlation analysis should therefore be combined with so-called “unit root tests”. The underlying idea is very simple: one checks whether the time series under consideration – for instance the ratio of two prices – converges to a constant at least in the long run. Following a shock, one would observe deviations from the long-term trend, but these would only have a transitory character and the time series would return to its long-term path.
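As an illustration of the idea behind unit root tests, the following sketch runs a deliberately simplified Dickey-Fuller-style regression on simulated data (a real application would use a proper augmented Dickey-Fuller test with its tabulated critical values; the function and data here are purely illustrative):

```python
import numpy as np

def df_statistic(series):
    """Crude Dickey-Fuller-style regression: dy_t = a + rho * y_{t-1} + e.
    A clearly negative rho estimate suggests the series reverts to a
    long-run level; rho near zero suggests a unit root (shocks are
    permanent). A real test would compare a t-statistic with DF critical
    values rather than eyeball the coefficient."""
    y = np.asarray(series, dtype=float)
    dy = np.diff(y)
    lag = y[:-1]
    X = np.column_stack([np.ones_like(lag), lag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    return float(coef[1])   # estimated rho

rng = np.random.default_rng(1)
# Mean-reverting price ratio (as in Figure 8): deviations die out.
ratio = np.empty(300)
ratio[0] = 1.0
for t in range(1, 300):
    ratio[t] = 1.0 + 0.7 * (ratio[t - 1] - 1.0) + rng.normal(0, 0.02)
# Random walk: shocks are permanent, no common long-run path.
walk = 1.0 + np.cumsum(rng.normal(0, 0.02, 300))

print(df_statistic(ratio))  # clearly negative: ratio reverts, same market
print(df_statistic(walk))   # near zero: no tendency to return
```

The mean-reverting ratio corresponds to two products on one market; the random walk to two unrelated price paths whose occasional high correlation would be spurious.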

Figure 8: Price ratio between goods 1 and 2

In the long run, the price of good 1 is constant relative to the price of good 2. Deviations from the long-run trend are only temporary. In such a case, both products should be grouped into the same market.

Shock analysis looks at price changes subsequent to exogenous shocks that were unpredictable for the market participants. Examples of such shocks are political crises, wars, currency crises, etc. Shocks can change the business environment for market participants, and reactions to them can reveal how firms themselves perceive the relevant markets. Take an unanticipated change in exchange rates. This can lead to price disparities between currency areas. If the relevant geographic market extends beyond currency areas, this should be reflected in modified trade flows and a tendency of prices to converge.

Entry of a new product sometimes comes as a “shock” to competitors. Their reaction to new products can help ascertain whether they perceive themselves to be acting in the same market. This could be reflected in price reductions, but also in increased spending for advertising.

It would, however, not make sense to demand that the Commission use shock analysis on a regular basis, for the simple reason that a suitable shock might not always be available. As the European economies have moved ever closer together, dramatic differences have become less likely. The most obvious example of this is the introduction of the euro, which makes currency shocks within Euroland virtually impossible. A further problem with the application of shock analysis is that all analysts have to agree that a certain event was really unpredictable and thus constituted a genuine shock.

The so-called Elzinga-Hogarty test can be used as a tool to further clarify the correct delineation of the relevant geographic market. It is an example of a trade-pattern test. It is based on the presumption that if there are either substantial exports from a region or substantial imports into another region, the regions are closely intertwined and can be grouped together as forming a single geographic market. It should be stressed from the outset that turning the test around would risk committing a fallacy of reversed causation: from the absence of any interregional trade, one cannot convincingly conclude that one is dealing with two (or more) separate geographic markets.

The Elzinga-Hogarty procedure establishes two criteria, namely LIFO (little in from outside) – a low import share – and LOFI (little out from inside) – a low export share. If either of the criteria is not fulfilled, the presumption is that the region and its trading partners together form a single geographic market. In applying the test, one again needs to make a value judgment: here, it needs to be decided beyond which level imports or exports can be called “significant”.
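A minimal sketch of the procedure (the sales volumes are hypothetical, and the 10 per cent threshold is one arbitrary choice for what counts as “little” – exactly the value judgment mentioned above):

```python
def elzinga_hogarty(local_sales_to_locals, imports, exports, threshold=0.10):
    """LIFO/LOFI check for a candidate region. All arguments are sales
    volumes; the 10% threshold for 'little' is an illustrative choice."""
    consumed = local_sales_to_locals + imports      # what residents buy
    produced = local_sales_to_locals + exports      # what local firms sell
    lifo = imports / consumed < threshold           # little in from outside
    lofi = exports / produced < threshold           # little out from inside
    # Only if BOTH hold does the region look like a self-contained market;
    # otherwise it is intertwined with its trading partners.
    return lifo and lofi

# Hypothetical region: 900 units sold locally, 50 imported, 60 exported.
print(elzinga_hogarty(900, 50, 60))   # True: region is its own market
# Heavy imports: the region is intertwined with its neighbours.
print(elzinga_hogarty(900, 300, 60))  # False
```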

At times, the use of quantitative methods can help in delineating the relevant market in its various dimensions. Their use by firms wishing to merge should thus be encouraged by the Commission. On the other hand, all available quantitative methods display some shortcomings (which are dealt with quite extensively by Froeb and Werden 1992). They should thus be only one instrument among several for the delineation of the relevant market.

Reliance on Quantitative Methods to Assess Dominance The current practice assesses concentration by drawing on concentration ratios (particularly CR1, CR4, and CR8; see Schmidt 2001, 137). These indicators only contain information on market structure before and after a notified merger. As such, they do not enable the Commission to make any predictions concerning the intensity of competition or the use of competitive parameters such as prices. Assessing dominance solely on the basis of these indicators is therefore of little help.
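For illustration, concentration ratios are trivial to compute from market shares (the shares below are hypothetical), which is precisely why they carry no information beyond market structure:

```python
def concentration_ratio(shares, k):
    """CR_k: combined market share of the k largest firms."""
    return sum(sorted(shares, reverse=True)[:k])

# Hypothetical market shares (in %) before a notified merger.
shares = [30, 22, 15, 10, 8, 6, 5, 4]
print(concentration_ratio(shares, 1))  # CR1 = 30
print(concentration_ratio(shares, 4))  # CR4 = 77
print(concentration_ratio(shares, 8))  # CR8 = 100
```

Nothing in these numbers says anything about prices, entry conditions, or the intensity of competition on the market.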


Figure 9: A hypothetical example of price-concentration analysis

Drawing on price concentration analysis could enhance predictive power. The assumption underlying the entire structure-conduct-performance approach is that high concentration is a good proxy for market power. High degrees of concentration are thus expected to lead to higher prices. Price concentration analysis compares different geographical markets in which different degrees of concentration are observed. It is asked whether higher degrees of concentration tend indeed to lead to higher prices.

Figure 9 shows a hypothetical example. The horizontal axis denotes the degree of concentration found in a specific geographic market; the vertical axis shows the price found there. Every icon represents one country. Simple econometric analysis reveals that – in this example – higher concentration rather tends to lead to lower prices. If that is observed, a merger in one of these markets cannot be expected to lead to higher prices and should thus be cleared.
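The regression behind such a figure can be sketched as follows (a NumPy sketch; the country data are invented to reproduce the negative relationship of the hypothetical example):

```python
import numpy as np

def price_concentration_slope(concentration, price):
    """OLS slope of price on concentration across geographic markets.
    A positive slope supports the structure-conduct-performance presumption
    that concentration proxies market power; a negative one, as in the
    hypothetical Figure 9, speaks against blocking a merger on
    concentration grounds alone."""
    c = np.asarray(concentration, dtype=float)
    p = np.asarray(price, dtype=float)
    X = np.column_stack([np.ones_like(c), c])
    coef, *_ = np.linalg.lstsq(X, p, rcond=None)
    return float(coef[1])

# Hypothetical country data: CR4 (in %) and average price per unit.
cr4   = [40, 55, 60, 70, 85]
price = [12.0, 11.5, 11.0, 10.2, 9.8]
print(price_concentration_slope(cr4, price))  # negative in this example
```

A serious application would of course control for cost and demand differences across countries before drawing any conclusion from the slope.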

Some Critical Remarks Concerning Quantitative Techniques Quantitative techniques are appealing. They seem to promise exactness, which is conducive to predictability. Yet the precision that can be gained by drawing on quantitative techniques should not be overestimated. One still needs to decide what to count – and what not. And simply counting certain entities will not do the trick: one needs criteria by which to judge the numbers that one has come up with. Value judgments concerning the definition of threshold levels and the like will thus remain necessary. Still, one could argue that value judgments are necessary in any event and that, having made them, quantitative techniques can increase the objectivity of the assessments made by the Commission.

The Commission operates under tight resource constraints. Demands that it should regularly carry out a certain number of tests thus seem to have little chance of implementation. It is established practice that notifying parties offer one or more quantitative techniques in order to make their case. Predictability could be improved by publishing a list of quantitative tools that the Commission will consider, although the Commission will need to retain the competence to decide whether the tools have been adequately used. Such a published list should not be interpreted as exhaustive, though, as that would inhibit the use of innovative techniques unknown at present.

Assessing the Importance of Customer Loyalty for Delineating the Relevant Market More Systematically The existence of brand labels and customer loyalty has been a crucial factor in the definition of product markets and the ascertainment of barriers to entry in the decision practice of the European Commission. Given that brand labels and no-name labels are functionally equivalent, the central question is whether these products are substitutable or not. In SCA/METSÄ TISSUE, the Commission denied the existence of substitutability between brand and no-name labels and hence defined two different product markets exactly along these borders. But this has certainly not been the only case in which the Commission believed that the existence of a brand label constituted a serious barrier to entry.

Quite frequently, the European Commission decides to delineate separate product markets if customer loyalty with regard to certain products exists. This often leads to too narrow a delineation of markets, and the way customer loyalty is taken into account often appears unsystematic. Based on sound economic reasoning, one would ask in what situations customer loyalty leads to significant gaps in substitution given that the products are functionally equivalent. If the gaps in substitution are substantial, then the delineation of separate product markets seems reasonable. We propose to take customer loyalty into account only if a transaction-cost rationale for it can be named. We thus propose to distinguish between rationally explainable and rationally unexplainable customer loyalty and argue that only the first kind should play any role in merger policy.

The emergence of consumer loyalty can be explained by the presence of asymmetric information between suppliers and buyers. In general, one can expect uncertainty to be present on the side of consumers with regard to the quality and other characteristics of the goods offered by the suppliers. The supplier can be assumed to be informed about the characteristics of the goods offered ex ante; the buyer, on the other hand, will often only be able to ascertain the quality after having consumed the good, i.e., ex post. It has been shown that such information asymmetries can lead to the breakdown of markets (Akerlof 1970). One consequence of uncertainty with regard to quality is higher transaction costs caused by additional information-gathering activities. In such a situation, consumer loyalty can be interpreted as a mechanism to reduce uncertainty and the transaction costs accruing as a consequence of uncertainty: as soon as a consumer has found a good that conforms to his/her preferences, s/he will keep consuming it even if other suppliers offer equally good products. Consumer loyalty can thus be conceptualised as a device to save on transaction costs.

With regard to information asymmetry, the economic literature distinguishes between three kinds of goods, namely search, experience, and trust goods (Nelson 1970). With search goods, consumers can acquire information on the relevant characteristics of the good before buying it; after having bought it, the information can be verified. With experience goods, reliable information on the characteristics is not obtainable before buying the good; its characteristics have to be “experienced” after having bought it. With trust goods, the relevant characteristics are ascertainable neither before nor after buying the good.

In merger policy, a good’s characteristics with regard to information asymmetries should thus be taken into account. A rather narrow delineation based on substitution gaps can only be justified if one is dealing with long-lived experience goods or trust goods. In ascertaining the relevant information asymmetries, the focus should be on the relevance of screening activities on the part of the consumers and signalling activities on the part of the supplying firms: the harder screening is for consumers, the more relevant signalling activities become. Customer loyalty is a ubiquitous phenomenon. Based on the approach proposed here, it could be taken into account in a more systematic fashion than has hitherto been the case.

It is thus proposed that customer loyalty be used to delineate different product markets only if one is dealing with durable experience goods or trust goods. For search goods and non-durable experience goods, reliance on customer loyalty will lead to rather spurious delineations.

A representative of the traditional welfare-economic approach might object that the creation of brand labels can enable firms to set prices above marginal costs and that this would mean losses in allocative efficiency and the transformation of consumer rents into producer rents. But this objection overlooks that the utility of some consumers can be increased if they are offered differentiated products that cater to their preferences better than standardised ones do. For some consumers, it is even utility-enhancing if they can buy brand labels at a higher price and thus differentiate themselves from other consumers (Veblen effect). It can hence be asked whether it should be the task of competition policy to protect consumers from this kind of market power, as doing so would prevent some consumers from reaping additional utility. This argument should be considered in the light of the obvious resource constraints present in competition authorities. It is thus argued that competition policy should rather focus on those cases that could mean a monopolisation of entire markets.

In defining relevant product markets, recognition of consumer loyalty should thus play a marginal role at best. It should also be recognised that the creation of brand labels is a genuine entrepreneurial task, which necessitates substantial investment and is connected with important risks. A producer will only be able to create customer loyalty if he offers a product with convincing quality characteristics or other traits that cater to the preferences of the consumer. From an economic point of view, the ensuing increase of his price-setting range also has advantages: it entails incentives to offer products that best meet consumer preferences. As long as functional substitutability is guaranteed, firms successful in creating a brand and customer loyalty should not be punished for successful entrepreneurial activity.

The relevance of consumer loyalty should not be overestimated either. It is true that a newcomer would have to invest in the creation of a new brand, but it is also true that an incumbent has to invest in conserving a brand label and reputation. Although the creation of new brands often involves substantial investment, there are exceptions: the firm producing the energy drink Red Bull, for example, successfully established its product against brands like Coca-Cola and Pepsi with very little advertising expenditure. This case shows that consumer loyalty does not necessarily constitute a substantial barrier to entry.