Micky Midha is a trainer in finance, mathematics, and computer science, with extensive teaching experience.
Learning Objectives
Distinguish among the inputs to the portfolio construction process.
Evaluate the methods and motivation for refining alphas in the implementation process.
Describe neutralization and methods for refining alphas to be neutral.
Describe the implications of transaction costs on portfolio construction.
Assess the impact of practical issues in portfolio construction such as determination of risk aversion, incorporation of specific risk aversion, and proper alpha coverage.
Describe portfolio revisions and rebalancing, and evaluate the tradeoffs between alpha, risk, transaction costs, and time horizon.
Determine the optimal no-trade region for rebalancing with transaction costs.
Evaluate the strengths and weaknesses of the following portfolio construction techniques: screens, stratification, linear programming, and quadratic programming.
Describe dispersion, explain its causes, and describe methods for controlling forms of dispersion.
Implementation is the efficient translation of research into portfolios. Good implementation can’t help poor research, but poor implementation can foil good research. A manager with excellent information and faulty implementation can snatch defeat from the jaws of victory. Implementation includes both portfolio construction and trading.
Portfolio construction requires several inputs:
the current portfolio,
alphas,
covariance estimates,
transactions cost estimates, and
an active risk aversion.
Of these inputs, we can measure only the current portfolio with near certainty. The alphas, covariances, and transactions cost estimates are all subject to error. The alphas are often unreasonable and subject to hidden biases. The covariances and transactions costs are noisy estimates; we hope that they are unbiased, but we know that they are not measured with certainty. Even risk aversion is not certain. Most active managers will have a target level of active risk that we must make consistent with an active risk aversion.
Implementation schemes must address two questions. First, what portfolio would we choose given inputs (alpha, covariance, active risk aversion, and transactions costs) known without error? Second, what procedures can we use to make the portfolio construction process robust in the presence of unreasonable and noisy inputs? How do you handle perfect data, and how do you handle less than perfect data?
How to handle perfect data is the easier dilemma. With no transactions costs, the goal is to maximize value added within any limitations on the manager’s behavior imposed by the client. Transactions costs make the problem more difficult. We must be careful to compare transactions costs incurred at a point in time with returns and risk realized over a period of time.
Alpha And Portfolio Construction
Most active managers construct portfolios subject to certain constraints, agreed upon with the client. For example, most institutional portfolio managers do not take short positions and limit the amount of cash in the portfolio. Others may restrict asset coverage because of requirements concerning liquidity, self-dealing, and so on. These limits can make the portfolio less efficient, but they are hard to avoid.
Managers often add their own restrictions to the process to make portfolio construction more robust. A manager may require that the portfolio be neutral across economic sectors or industries. The manager may limit individual stock positions to ensure diversification of active bets, or may want to avoid any position based on a forecast of the benchmark portfolio’s performance, etc.
Another way of reaching the same final portfolio is simply by adjusting the inputs. We can replace a very complicated portfolio construction procedure that leads to active holdings \(h^*_{P_A}\), active risk \(\psi^*\), and an ex ante information ratio \(IR\) with a direct unconstrained mean/variance optimization using a modified set of alphas and the appropriate level of risk aversion. The modified alphas are
\[ \alpha' = 2\,\lambda_A\,V\,h^*_{P_A} \]
where \(V\) is the covariance matrix and \(\lambda_A = IR/(2\psi^*)\) is the appropriate active risk aversion.
We can replace any portfolio construction process, regardless of its sophistication, by a process that first refines the alphas and then uses a simple unconstrained mean/variance optimization to determine the active positions.
Alpha Analysis – Scaling The Alphas
Alphas have a natural structure: \( \alpha = \text{volatility} \times IC \times \text{score} \). This structure includes a natural scale for the alphas. We expect the information coefficient (\(IC\)) and residual risk (volatility) for a set of alphas to be approximately constant, with the score having mean 0 and standard deviation 1 across the set. Hence the alphas should have mean 0 and standard deviation, or scale, of \( \text{Std}(\alpha) \approx \text{volatility} \times IC \). Example – An information coefficient of 0.05 and a typical residual risk of 30 percent would lead to an alpha scale of 1.5 percent. In this case, the mean alpha would be 0, with roughly two-thirds of the stocks having alphas between -1.5 percent and +1.5 percent, and roughly 5 percent of the stocks having alphas larger than +3.0 percent or less than -3.0 percent.
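The rescaling step can be sketched in a few lines of Python. The raw alphas below are purely illustrative; the target scale uses the IC and residual risk from the example above.

```python
import statistics

def rescale_alphas(alphas, ic=0.05, volatility=0.30):
    """Re-center raw alphas to mean 0 and rescale them to the natural
    scale Std(alpha) = volatility * IC (here 0.30 * 0.05 = 1.5%)."""
    target_scale = volatility * ic
    mean = statistics.mean(alphas)
    sd = statistics.pstdev(alphas)
    # shift to mean 0, then shrink/stretch to the target standard deviation
    return [(a - mean) * target_scale / sd for a in alphas]

raw = [0.04, 0.01, -0.02, 0.00, -0.03]   # hypothetical raw alphas
scaled = rescale_alphas(raw)
```

After rescaling, the alphas have mean 0 and standard deviation equal to the natural scale, regardless of how inflated the raw forecasts were.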
The scale of the alphas will depend on the information coefficient of the manager, and the decrease in the information coefficient that results from a decrease in the scale of the alphas can be calculated. If the alphas input to portfolio construction do not have the proper scale, they should be rescaled.
Alpha Analysis – Trim Alpha Outliers
The second refinement of the alphas is to trim extreme values. Very large positive or negative alphas can have undue influence. As an example, the threshold for “large” values can be three times the scale of the alphas. A detailed analysis may show that some of these alphas depend upon questionable data and should be ignored (set to zero), while others may appear genuine. These remaining genuine alphas can be pulled in to three times scale in magnitude.
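The two treatments described above (zeroing questionable forecasts, pulling genuine outliers in to three times scale) can be sketched as follows; the alphas and the set of questionable indices are hypothetical.

```python
def trim_alphas(alphas, scale, questionable=frozenset()):
    """Trim alpha outliers: set forecasts resting on questionable data
    to zero, and pull the remaining genuine alphas in to at most
    3 * scale in magnitude."""
    limit = 3.0 * scale
    trimmed = []
    for i, a in enumerate(alphas):
        if i in questionable:                 # suspect data -> ignore (set to 0)
            trimmed.append(0.0)
        else:                                 # genuine but extreme -> clip
            trimmed.append(max(-limit, min(limit, a)))
    return trimmed

alphas = [0.08, 0.01, -0.09, 0.002]           # hypothetical alphas, scale = 1.5%
trimmed = trim_alphas(alphas, scale=0.015, questionable={2})
```

With a 1.5 percent scale, the 8 percent forecast is clipped to 4.5 percent, while the flagged forecast at index 2 is ignored entirely.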
A second and more extreme approach to trimming alphas is to force them into a normal distribution with benchmark alpha equal to 0 and the required scale factor. Such an approach is extreme because it typically utilizes only the ranking information in the alphas and ignores the size of the alphas. After such a transformation, the benchmark neutrality and scaling must be rechecked.
Alpha Analysis – Neutralization
Neutralization is the process of removing biases and undesirable bets from alpha. It has implications, not surprisingly, in terms of both alphas and portfolios.
Neutralization is a sophisticated procedure, but it isn’t uniquely defined. We can achieve even benchmark neutrality in more than one way. This is easy to see from the portfolio perspective: We can choose many different portfolios to hedge out any active beta.
The choices will include benchmark, cash, industry, and factor neutralization.
Alpha Analysis – Benchmark And Cash Neutral Alphas
Benchmark neutralization – The first and simplest neutralization is to make the alphas benchmark-neutral. Benchmark neutralization means that the benchmark has an alpha of 0, although the benchmark may experience exceptional return. Setting the benchmark alpha to 0 ensures that the alphas are benchmark-neutral and avoids benchmark timing.
If our initial alphas imply an alpha for the benchmark, the neutralization process re-centers the alphas to remove the benchmark alpha. From the portfolio perspective, benchmark neutralization means that the optimal portfolio will have a beta of 1, i.e., the portfolio will not make any bet on the benchmark.
Cash neutralization – In the same spirit, we may also want to make the alphas cash-neutral; i.e., the alphas will not lead to any active cash position.
It is possible to make the alphas both cash-neutral and benchmark-neutral.
Alpha Analysis – Risk-Factor Neutral Alphas
Risk-factor neutralization – The multiple-factor approach to portfolio analysis separates return along several dimensions. A manager can identify each of those dimensions as either a source of risk or a source of value added. By this definition, the manager does not have any ability to forecast the risk factors. Therefore, he or she should neutralize the alphas against the risk factors. The neutralized alphas will include only information on the factors the manager can forecast, along with specific asset information. Once neutralized, the alphas of the risk factors will be 0.
For example, a manager can ensure that her portfolios contain no active bets on industries or on a size factor. A simple approach to making alphas industry-neutral can be: The (capitalization-weighted) alpha for each industry is calculated, and then the industry average alpha is subtracted from each alpha in that industry.
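The simple industry-neutralization approach just described can be sketched as below; the alphas, capitalizations, and industry labels are illustrative.

```python
def industry_neutralize(alphas, caps, industries):
    """Subtract the capitalization-weighted average alpha of each
    industry from every alpha in that industry."""
    avg = {}
    for ind in set(industries):
        members = [i for i, g in enumerate(industries) if g == ind]
        total_cap = sum(caps[i] for i in members)
        avg[ind] = sum(caps[i] * alphas[i] for i in members) / total_cap
    return [a - avg[g] for a, g in zip(alphas, industries)]

alphas = [0.02, 0.04, -0.01, 0.01]    # hypothetical alphas
caps = [100.0, 300.0, 200.0, 200.0]   # market capitalizations
inds = ["tech", "tech", "util", "util"]
neutral = industry_neutralize(alphas, caps, inds)
```

After neutralization, the cap-weighted average alpha within each industry is zero, so the alphas carry no industry bets.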
The alphas can be modified to achieve desired active common-factor positions and to isolate the part of the alpha that does not influence the common-factor positions.
Transaction Costs
In portfolio construction, apart from alpha and active risk, the third consideration is transactions costs, i.e., the cost of moving from one portfolio to another. Accurate estimation of transactions costs can be as important as accurate forecasts of exceptional return. In addition to complicating the portfolio construction problem, transactions costs have their own inherent difficulties.
When we consider only alphas and active risk in the portfolio construction process, we can offset any problem in setting the scale of the alphas by increasing or decreasing the active risk aversion. Finding the correct trade-off between alpha and active risk is a one-dimensional problem. Transactions costs make this a two-dimensional problem. The trade-off between alpha and active risk remains, but now there is a new trade-off between the alpha and the transactions costs. We therefore must be precise in our choice of scale, to correctly trade off between the hypothetical alphas and the inevitable transactions costs.
The objective in portfolio construction is to maximize risk-adjusted annual active return. Rebalancing incurs transactions costs at that point in time. To contrast transactions costs incurred at that time with alphas and active risk expected over the next year requires a rule to allocate the transactions costs over the one-year period. We must amortize the transactions costs to compare them to the annual rate of gain from the alpha and the annual rate of loss from the active risk. The rate of amortization will depend on the anticipated holding period.
An example demonstrates the impact of transactions costs. A risk-free rate of zero is assumed, along with the following:
Stock 1’s current price is $100.
The price of stock 1 will increase to $102 in the next 6 months and then remain at $102.
Stock 2’s current price is also $100.
The price of stock 2 will increase to $108 over the next 24 months and then remain at $108.
The cost of buying and selling each stock is $0.75.
The annual alpha for both stock 1 and stock 2 is 4 percent.
To contrast the two situations more clearly, it can be assumed that in 6 months, and again in 12 months and in 18 months, another stock like stock 1 can be found.
The sequence of 6-month purchases of stock 1 and its successors will each fetch a $2.00 profit before transactions costs. There will be transactions costs of $0.75, $1.50, $1.50, $1.50, and
$0.75 at 0, 6, 12, 18, and 24 months, respectively. The total trading cost is $6, the gain on the shares is $8, the profit over 2 years is $2, and the annual percentage return is 1 percent.
With stock 2, over the 2-year period, costs of $0.75 are incurred at 0 and 24 months. The total cost is $1.50, the gain is $8, the profit is $6.50, and the annual percentage return is 3.25 percent. With the series of stock 1 trades, an annual alpha of 4 percent is realized along with an annualized transactions cost of 3 percent, whereas with the single deal in stock 2, an annual alpha of 4 percent is realized along with an annualized transactions cost of 0.75 percent. Hence, stock 2 is more profitable than stock 1.
For a 6-month holding period, the round-trip transactions costs are doubled to get the annual transactions cost, and for a 24-month holding period, we halve the round-trip transactions cost to get the annual transactions cost. There’s a general rule here: The annualized transactions cost is the round-trip cost divided by the holding period in years.
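The amortization rule and the two-stock example above can be checked with a short sketch (percentages of the $100 price, as in the example):

```python
def annualized_tc(round_trip_cost, holding_period_years):
    """The annualized transactions cost is the round-trip cost
    divided by the holding period in years."""
    return round_trip_cost / holding_period_years

# $0.75 each way on a $100 stock -> 1.5% round trip
tc_stock1 = annualized_tc(0.015, 0.5)   # 6-month holding  -> 3.00% per year
tc_stock2 = annualized_tc(0.015, 2.0)   # 24-month holding -> 0.75% per year

# net annual return = alpha - annualized transactions cost
net_stock1 = 0.04 - tc_stock1           # 1.00% per year
net_stock2 = 0.04 - tc_stock2           # 3.25% per year
```

The same 4 percent alpha nets very different results once costs are amortized over the holding period, reproducing the 1 percent and 3.25 percent annual returns in the example.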
Practical Issues – Level Of Risk Aversion
An optimality relationship between the information ratio, the risk aversion, and the optimal active risk needs to be found. Practically, a portfolio manager might not have an intuitive idea of the optimal active risk aversion, but will have an idea of his information ratio and the amount of active risk he is willing to accept to get active returns. A measure of active risk aversion can be written as
\[ \lambda_A = \frac{IR}{2\,\psi_P} \]
Active Risk, or 𝜓P, is also known as Tracking Error.
We must be careful to verify that our optimizer is using percents and not decimals. Example – If our information ratio is 0.5 and our desired active risk is 10 percent, we should choose an active risk aversion of
\( \frac{0.5}{2 \times 10} = 0.025 \)
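The calculation above is a one-liner, but it is worth encoding the percent convention explicitly so the optimizer input is unambiguous:

```python
def active_risk_aversion(ir, psi_percent):
    """lambda_A = IR / (2 * psi), with active risk psi expressed in
    percent (not decimal), matching the convention in the text."""
    return ir / (2.0 * psi_percent)

lam = active_risk_aversion(ir=0.5, psi_percent=10.0)   # 0.5 / 20 = 0.025
```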
Practical Issues – Aversion To Specific Risk
A second practical matter concerns aversion to specific as opposed to common-factor risk.
There are at least two reasons to consider implementing a higher aversion to specific risk.
First, since specific risk arises from bets on specific assets, a high aversion to specific risk reduces bets on any one stock. In particular, this will reduce the size of your bets on the (to be determined) biggest losers.
Second, for managers of multiple portfolios, aversion to specific risk can help reduce dispersion. This will push all those portfolios toward holding the same names.
Considering these two motivations for specific risk aversion will help a manager determine appropriate values to include in portfolio optimization.
Practical Issues – Proper Alpha Coverage
Proper alpha coverage refers to addressing situations where
The manager has forecasts of stocks that are not in the benchmark, or
The manager does not have alpha forecasts for stocks in the benchmark.
When the manager has forecasts for stocks that are not in the benchmark (say, stock n), the situation can be handled by expanding the benchmark to include those stocks, but with ZERO weight. This keeps stock n in the benchmark, but with no weight in determining the benchmark return or risk. Any position in stock n will be an active position, with active risk correctly handled.
To handle the second issue, where the manager does not have alpha forecasts for some stocks in the benchmark, alphas for the stocks with missing forecasts can be inferred from the alphas of stocks in the same benchmark for which forecasts are available.
The following approach can be used:
Let \(N_1\) represent the collection of stocks with forecasts, and \(N_0\) the stocks without forecasts. The value-weighted fraction of stocks with forecasts is \( H\{N_1\} = \sum_{n \in N_1} h_{B,n} \); informally stated, the value-weighted fraction of stocks with forecasts is the sum of the benchmark holdings of the stocks with forecasts.
The average alpha for the stocks with forecasts is then
\[ \alpha\{N_1\} = \frac{\sum_{n \in N_1} h_{B,n}\,\alpha_n}{H\{N_1\}} \]
i.e., the benchmark-weighted sum of the alphas with forecasts divided by the value-weighted fraction of stocks with forecasts.
To round out the set of forecasts, set \( \alpha^*_n = \alpha_n - \alpha\{N_1\} \) for stocks in \(N_1\), and \( \alpha^*_n = 0 \) for stocks in \(N_0\). These alphas are benchmark-neutral. Moreover, the stocks we did not cover will have a zero, and therefore neutral, forecast.
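The rounding-out procedure can be sketched as follows; the alphas and benchmark weights are hypothetical, with `None` marking a stock without a forecast.

```python
def round_out_alphas(alphas, bench_weights):
    """Fill in missing forecasts (None) so the full alpha set is
    benchmark-neutral: subtract alpha{N1} from covered stocks and
    assign 0 to uncovered stocks."""
    covered = [(a, w) for a, w in zip(alphas, bench_weights) if a is not None]
    h_n1 = sum(w for _, w in covered)                  # H{N1}
    alpha_n1 = sum(w * a for a, w in covered) / h_n1   # average alpha over N1
    return [0.0 if a is None else a - alpha_n1 for a in alphas]

alphas = [0.03, -0.01, None, 0.02]   # None -> no forecast for that stock
weights = [0.4, 0.3, 0.2, 0.1]       # benchmark weights h_B
adjusted = round_out_alphas(alphas, weights)
```

The benchmark-weighted sum of the adjusted alphas is zero, confirming benchmark neutrality, and the uncovered stock carries a neutral forecast of 0.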
Portfolio Revisions
If a manager knows how to make the correct trade-off between expected active return, active risk, and transactions costs, then frequent revision will not present a problem. In that case, the manager should revise a portfolio every time new information arrives. However, if the manager is not sure of his or her ability to correctly specify the alphas, the active risk, and the transactions costs, then the manager may resort to less frequent revision as a safeguard.
Even with accurate transactions cost estimates, frequent trading and short time horizons cause alpha estimates to exhibit a great deal of uncertainty, because the returns themselves become noisier over shorter horizons. Rebalancing at very short horizons would mean frequently reacting to noise, not signal, while the transactions costs stay the same whether we are reacting to signal or to noise. Hence the manager must choose an optimal time horizon at which the certainty of the alpha is sufficient to justify a given trade relative to the transactions costs incurred.
This trade-off between alpha, risk, and costs is difficult to analyze because of the inherent importance of the horizon. Since alpha is realized over some horizon, the transactions costs must be amortized over that horizon.
Portfolio Revisions – Optimal No Trade Regions
The impact of new information can be captured, and a decision can be taken whether to trade or not, by comparing the Marginal Contribution to Value Added for Stock 𝑛, (𝑀𝐶𝑉𝐴n), to the transactions costs. The marginal contribution to value added shows how value added, as measured by risk-adjusted alpha, changes as the holding of the stock is increased, with an offsetting decrease in the cash position. As our holding in stock n increases, αn measures the effect on portfolio alpha. The change in value added also depends upon the impact (at the margin) on active risk of adding more of stock n.
The stock’s marginal contribution to active risk, \(MCAR_n\), measures the rate at which active risk changes as we add more of stock n. The loss in value added due to changes in the level of active risk will be proportional to \(MCAR_n\). Stock n’s marginal contribution to value added depends on its alpha and marginal contribution to active risk; in particular,
\[ MCVA_n = \alpha_n - 2\,\lambda_A\,\psi_P\,MCAR_n \]
If this \(MCVA_n\) value lies between the negative of the cost of selling and the cost of purchase, the manager should not trade that asset; i.e., the no-trade region is
\[ -\text{cost of selling stock } n \le MCVA_n \le \text{cost of purchasing stock } n \]
Rearranged in terms of alpha, the no-trade region becomes
\[ 2\,\lambda_A\,\psi_P\,MCAR_n - \text{cost of selling} \le \alpha_n \le 2\,\lambda_A\,\psi_P\,MCAR_n + \text{cost of purchase} \]
Hence a band has been put around the alpha for each stock. As long as the alpha stays within that band, the portfolio will remain optimal, and there should be no reaction to new information.
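A minimal sketch of the no-trade decision, working in percent units; the risk aversion, active risk, marginal contribution, and cost figures are all illustrative, not from the source.

```python
def no_trade_band(lam, psi, mcar, cost_sell, cost_buy):
    """Alpha band implied by the no-trade region:
    2*lam*psi*MCAR - cost_sell <= alpha <= 2*lam*psi*MCAR + cost_buy."""
    center = 2.0 * lam * psi * mcar   # marginal risk penalty for the stock
    return center - cost_sell, center + cost_buy

def decide(alpha, band):
    """Trade only when alpha escapes the band."""
    low, high = band
    if alpha > high:
        return "buy"
    if alpha < low:
        return "sell"
    return "hold"

# hypothetical inputs: lambda_A = 0.025, psi = 10 (percent), MCAR_n = 0.5,
# round-trip costs split into 0.3 (percent) each way
band = no_trade_band(lam=0.025, psi=10.0, mcar=0.5, cost_sell=0.3, cost_buy=0.3)
```

Here the band center is 2 × 0.025 × 10 × 0.5 = 0.25, so alphas between -0.05 and 0.55 percent trigger no trade.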
Portfolio Construction Techniques
There are as many techniques for portfolio construction as there are managers. Each manager adds a special twist. Despite this personalized nature of portfolio construction techniques, there are four generic classes of procedures that cover the vast majority of institutional portfolio management applications:
Screens
Stratification
Linear programming
Quadratic programming
Before we examine these procedures in depth, we should recall our criteria. We are interested in high alpha, low active risk, and low transactions costs. Our figure of merit is value added less transactions costs, i.e.
\[ \alpha_P - \lambda_A\,\psi_P^2 - \text{(amortized transactions costs)} \]
Screens are simple. Here is a screen recipe for building a portfolio from scratch:
Rank the stocks by alpha.
Choose the first 50 stocks (for example).
Equal-weight (or capitalization-weight) the stocks.
The screening technique can also be used for rebalancing. Example – Suppose we have alphas on 200 stocks.
The stocks can be divided into three categories: the top 40, the next 60, and the remaining 100.
Any stock in the top 40 is put on the buy list, any stock in the bottom 100 on the sell list, and any stock in the middle 60 on the hold list.
Starting with the current 50-stock portfolio, any stocks on the buy list but not in the portfolio are bought.
Then any assets in the portfolio that are on the sell list are sold. The numbers 40, 60, and 100 can be adjusted to regulate turnover.
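The rebalancing screen above can be sketched directly; the universe, alphas, and current holdings below are hypothetical (lower index means higher alpha).

```python
def screen_rebalance(alphas, current, n_buy=40, n_hold=60):
    """Screen-based rebalancing: buy everything on the buy list,
    keep current holdings that land on the hold list, sell the rest."""
    ranked = sorted(alphas, key=alphas.get, reverse=True)  # highest alpha first
    buy = set(ranked[:n_buy])
    hold = set(ranked[n_buy:n_buy + n_hold])
    return buy | (current & hold)

# hypothetical universe of 200 stocks, alpha decreasing with index
alphas = {f"S{i}": -float(i) for i in range(200)}
current = {"S10", "S50", "S150"}        # current holdings (abridged)
new_portfolio = screen_rebalance(alphas, current)
```

`S10` is retained because it ranks in the top 40, `S50` is retained from the hold list, and `S150` falls onto the sell list and is sold. Widening or narrowing the three lists regulates turnover.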
Portfolio Construction Techniques – Screens Advantages
Screens have several attractive features:
The screen is easy to understand, with a clear link between cause (membership on a buy, sell, or hold list) and effect (membership in the portfolio).
The screen is easy to computerize; it might be that mythical computer project that can be completed in two days!
The screen is robust. It depends solely on ranking. Wild estimates of positive or negative alphas will not alter the result.
The screen enhances alphas by concentrating the portfolio in the high-alpha stocks.
It strives for risk control by including a sufficient number of stocks and by weighting them to avoid concentration in any single stock.
Transactions costs are limited by controlling turnover through judicious choice of the size of the buy, sell, and hold lists.
Portfolio Construction Techniques – Screens Disadvantages
Screens also have several shortcomings:
They ignore all information in the alphas apart from the rankings.
They do not protect against biases in the alphas. If all the utility stocks happen to be low in the alpha rankings, the portfolio will not include any utility stocks.
Risk control is fragmentary at best. In their consulting experience, the authors have come across portfolios produced by screens that were considerably more risky than their managers had imagined.
In spite of these significant shortcomings, screens are a very popular portfolio construction technique.
Portfolio Construction Techniques – Stratification
Stratification is glorified screening, i.e., an improved version of screening. The key to stratification is splitting the list of followed stocks into categories; these categories are generally exclusive. The idea is to obtain risk control by making sure that the portfolio has a representative holding in each category.
Example –
Assume that stocks are classified into 10 economic sectors and the stocks within each sector are classified by size: big, medium, and small. Thus, all stocks can be classified into 30 categories based on economic sector and size. The benchmark weight in each of the 30 categories is known.
To construct a portfolio, the screening exercise is repeated within each category. The stocks are ranked by alpha and placed into buy, hold, and sell groups within each category in a way that will keep the turnover reasonable. Then the stocks are weighted so that the portfolio’s weight in each category matches the benchmark’s weight in that category. Stratification ensures that the portfolio matches the benchmark along these important dimensions.
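The category-matching step can be sketched as below. The (sector, size) categories, benchmark weights, and per-category buy lists are illustrative; in practice the picks would come from the within-category screen just described.

```python
def stratify_weights(bench_category_weight, picks_per_category):
    """Equal-weight the chosen stocks inside each category so that the
    portfolio's weight in every category matches the benchmark's weight."""
    weights = {}
    for category, picks in picks_per_category.items():
        for stock in picks:
            weights[stock] = bench_category_weight[category] / len(picks)
    return weights

# hypothetical (sector, size) categories
bench = {("tech", "big"): 0.30, ("util", "small"): 0.10}
picks = {("tech", "big"): ["AAA", "BBB", "CCC"], ("util", "small"): ["DDD"]}
weights = stratify_weights(bench, picks)
```

Each category's total portfolio weight matches the benchmark weight exactly, which is precisely the risk control stratification provides.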
The stratification scheme is robust and has the same benefits as screening, plus some more:
Improving upon screening, it ignores any biases in the alphas across categories.
It is somewhat transparent and easy to code, having the same mechanism as screening for controlling turnover.
Stratification retains some of the shortcomings of a screen.
It ignores some information, and does not consider slightly over-weighting one category and underweighting another.
Often, little substantive research underlies the selection of the categories, and so risk control is rudimentary.
Reasonable risk control can be achieved if the categories are chosen well, but if some important risk dimensions are excluded, risk control will fail.
Portfolio Construction Techniques – Linear Programming
A linear program (LP) is space-age stratification. The linear programming approach characterizes stocks along dimensions of risk, e.g., industry, size, volatility, and beta. The linear program does not require that these dimensions distinctly and exclusively partition the stocks; the stocks can be characterized along all of these dimensions. The linear program will then attempt to build portfolios that are reasonably close to the benchmark portfolio in all of the dimensions used for risk control.
Linear Programming Technique has the following merits:
It is possible to set up a linear program with explicit transactions costs, a limit on turnover, and upper and lower position limits on each stock.
It achieves the objective of maximizing the portfolio’s alpha less transactions costs, while remaining close to the benchmark portfolio in the risk control dimensions.
The linear program takes all the information about alpha into account and controls risk by keeping the characteristics of the portfolio close to the characteristics of the benchmark.
Linear Programming Technique has the following demerits:
The linear program has difficulty producing portfolios with a prespecified number of stocks.
Also, the risk-control characteristics may work at cross-purposes with the alphas. For example, if the alphas tell you to shade the portfolio toward smaller stocks at some times and toward larger stocks at other times, you should not control risk on the size dimension.
Portfolio Construction Techniques – Quadratic Programming
Quadratic programming (QP) is the ultimate in portfolio construction. The quadratic program explicitly considers each of the three elements in our figure of merit: alpha, risk, and transactions costs. In addition, since a quadratic program includes a linear program as a special case, it can include all the constraints and limitations one finds in a linear program. Like any other technique, however, it cannot compensate for imperfect input data: the result of any technique is only as good as its inputs.
The quadratic program requires a great many more inputs than the other portfolio construction techniques. More inputs mean more noise. For example, a universe of 500 stocks will require 500 volatility estimates and 124,750 correlation estimates. The benefit of explicitly considering risk may or may not outweigh the cost of introducing additional noise, and there are ample opportunities to make mistakes. It is a fear of garbage in, garbage out that deters managers from using a quadratic program. Small estimation errors in the covariances will not necessarily reduce value added significantly, but even moderate levels of estimation error can significantly reduce value added, and with relatively high estimation error, value added may even become zero or negative. Hence it is very important to have proper estimates of the relevant inputs, especially covariances. The following example illustrates this point.
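The parameter count quoted above follows from simple combinatorics: one volatility per stock plus one correlation per distinct pair.

```python
def risk_input_counts(n):
    """Risk inputs a quadratic program needs for an n-stock universe:
    n volatilities plus n*(n-1)/2 pairwise correlations."""
    return n, n * (n - 1) // 2

vols, corrs = risk_input_counts(500)   # 500 volatilities, 124,750 correlations
```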
Example – Suppose we consider a simple cash versus market trade-off. Let \(\sigma\) be the actual volatility of the market and \(\hat{\sigma}\) our perceived volatility. If \(VA^*\) is the optimal value added that we can obtain with the correct risk estimate \(\sigma\), then the loss we obtain with the estimate \(\hat{\sigma}\) can be expressed as a fraction of \(VA^*\).
This figure shows the percentage loss, \(\frac{\text{Loss}}{VA^*}\), as a function of the estimated market risk, assuming that the true market risk is 17 percent. In this example, market volatility estimates within 1 percent of the true market volatility do not hurt value added very much, but as estimation error begins to exceed 3 percent, the effect on value added becomes significant, especially if the error is an underestimate of volatility. In fact, a volatility estimate of 12 percent (5 percent below the “true” 17 percent) leads to a negative value added. Hence it is vital to have good estimates.
Source: Figure 5-2 2019 Financial Risk Manager Exam Part II: Risk Management Seventh Edition by Global Association of Risk Professionals
Dispersion
Dispersion can be client-driven. Portfolios differ because individual clients impose different constraints. One pension fund may restrict investment in its company stock. Another may not allow the use of futures contracts. These client-initiated constraints lead to dispersion, but they are completely beyond the manager’s control. But managers can control other forms of dispersion. Often, dispersion arises through a lack of attention. Separate accounts exhibit different betas and different factor exposures through lack of attention. Managers should control this form of dispersion.
Separate accounts with the same factor exposures and betas can still exhibit dispersion because of owning different assets. Often the cost of holding exactly the same assets in each account will exceed any benefit from reducing dispersion. In fact, because of transactions costs, some dispersion is optimal. If transactions costs were zero, rebalancing all the separate accounts so that they hold exactly the same assets in the same proportions would have no cost. Dispersion would disappear, at no cost to investors. With transactions costs, however, managers can achieve zero dispersion only with increased transactions costs. Managers should reduce dispersion only until further reduction would substantially lower returns on average, because much higher transactions costs would be incurred.
Managing Dispersion
We have seen how some level of dispersion is optimal and have discussed why dispersion arises. The next question is whether dispersion decreases over time: Do dispersed portfolios converge, and how fast? In general, convergence will depend on the type of alphas in the strategy, the transactions costs, and possibly the portfolio construction methodology.
If alphas and risk stay absolutely constant over time, then dispersion will never disappear. There will always be a transactions cost barrier, and an exact matching of portfolios will never pay. Furthermore, we can show that the remaining tracking error is bounded based on the transactions costs and the active risk aversion:
\[ \psi \le \sqrt{\frac{TC}{2\,\lambda_A}} \]
where \(TC\) measures the cost of trading from the initial portfolio to the zero-transactions-cost optimal portfolio (which we will refer to as portfolio \(Q\)), and we are measuring tracking error and risk aversion relative to portfolio \(Q\). With very high risk aversion, all portfolios must be close to one another, but the higher the transactions costs, the more tracking error there is. Given intermediate risk aversion of \(\lambda_A = 0.10\) and round-trip transactions costs of 2 percent, and assuming that moving from the initial portfolio to portfolio \(Q\) involves 10 percent turnover (so \(TC = 2\% \times 10\% = 0.2\) in percent units), the above inequality implies a tracking error upper bound of 1 percent.
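The worked numbers can be sketched as follows. The functional form \( \psi \le \sqrt{TC/(2\lambda_A)} \) is an assumption chosen because it reproduces the 1 percent figure quoted in the text; it should be checked against the source derivation before being relied upon.

```python
import math

def tracking_error_bound(round_trip_cost, turnover, lam):
    """Upper bound on remaining tracking error, assuming the bound takes
    the form psi <= sqrt(TC / (2 * lambda_A)). This form matches the
    1 percent figure in the text but is an assumption, not a derivation."""
    tc = round_trip_cost * turnover   # cost of trading to portfolio Q
    return math.sqrt(tc / (2.0 * lam))

# percent units: 2% round-trip cost, 10% turnover, lambda_A = 0.10
bound = tracking_error_bound(round_trip_cost=2.0, turnover=0.10, lam=0.10)
```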
Since tracking error is bounded, dispersion is also bounded. Dispersion is proportional to tracking error, with the constant of proportionality dependent on the number of portfolios being managed. Using the standard approximation for the expected range of \(N\) normal outcomes,
\[ \text{Dispersion} \approx 2\,\psi\,\Phi^{-1}\!\left[\frac{N - 0.375}{N + 0.25}\right] \]
where
\(\Phi^{-1}\) denotes the inverse of the cumulative normal distribution function,
\(N\) = number of portfolios, and
\(\psi\) = the tracking error of each portfolio relative to the composite.
This figure displays this function. For a given tracking error, more portfolios lead to more dispersion because more portfolios will further probe the extremes of the return distribution.
If the alphas and risk vary over time, which is the usual case, then convergence will occur. It can be shown that with changing alphas and risk each period, the portfolios will either maintain or, more typically, decrease the amount of dispersion. Over time, the process inexorably leads to convergence, because each separate account portfolio is chasing the same moving target. These general arguments do not, however, imply any particular time scale.
In real-life situations, client-initiated constraints and client-specific cash flows will act to keep separate accounts from converging.
One final question is whether convergence can be increased by changing our portfolio construction technology. In particular, what if dual-benchmark optimization is used? Instead of penalizing only active risk relative to the benchmark, such an approach would also penalize active risk relative to either the composite portfolio or the optimum calculated ignoring transactions costs.
Dual-benchmark optimization can clearly reduce dispersion, but only at an undesirable price. It simply reintroduces the trade-off we analyzed earlier: dispersion versus return, i.e., return must be given up in order to lower dispersion.