In 1990, one of the biggest speculative bubbles in history burst in Japan. The Japanese benchmark index Nikkei 225 (hereinafter also referred to as Nikkei) plunged by more than 60% within five years and has not recovered to this day. The **maximum drawdown** over the entire history of the Nikkei amounts to an **unimaginable 80%** and spans almost two decades. The Japanese economy suffered for years, and Japanese folklore refers to this momentous period as the **Lost Decades**.

In this blog post, we will look at the historical performance of the Nikkei and examine it in relation to the performance of the global stock market. In doing so, we will look at what is known as **home bias**, the tendency of investors to disproportionately overweight home market stocks in their portfolios. Home bias is a global phenomenon. [1]

The case of the Nikkei will show why such a strategy can have damaging consequences on returns and is a plea for global diversification in stock investments.

The Nikkei 225 is one of the world’s most important stock indices, tracking the performance of 225 companies traded on the Tokyo Stock Exchange. Among these companies are global players such as Toyota, Nissan and Canon. The Nikkei is a price-weighted index, meaning it is calculated excluding dividends and other special payments. Figure 1 shows the historical performance from **January 1965 to March 2021**. The maximum drawdown of approx. 80% is drawn in red. It stretched over almost two decades.
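A maximum drawdown like this can be computed from a series of index levels by comparing each value with the running peak. A minimal sketch in Python (using a toy price series, not actual Nikkei data):

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough decline of a price series, as a (negative) fraction."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)  # highest level seen so far
    drawdowns = prices / running_peak - 1.0       # current loss relative to that peak
    return drawdowns.min()                        # most negative value

# Toy series: peak at 100, trough at 20 -> maximum drawdown of -80%
print(max_drawdown([50, 100, 60, 20, 40]))  # -0.8
```

Applied to the full Nikkei price history, this would return approximately -0.8.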

We obtained the historical price data of the Nikkei from Yahoo Finance (for more information, see Financial Data). The data covers the period from January 1965 to April 2021 and is available for most business days. The first ten measurement points of the Nikkei data can be seen in Table 1.

We will use the MSCI World as the benchmark for the analysis of the Nikkei data. We choose this index as a benchmark because it represents the lion’s share of the global stock market with the 1600 largest exchange-traded companies in the industrialized countries (including Japan, which currently accounts for 7.5% of the index). Even more suitable would be the MSCI ACWI, which includes emerging markets in addition to the industrialized countries. For this index, however, the available data does not extend far enough into the past. The MSCI World data goes back to 1970 and is available at monthly intervals. To compare the two indices, we will look at the MSCI World in price index form. The first ten measurement points of the MSCI World are shown in Table 2.

In the following, we will compare the performance of the Nikkei 225 with that of the MSCI World. Figure 2 shows the historical performance of the two indices over the last 50 years on a logarithmic scale.

The price trend of the Nikkei can be roughly divided into **three phases I, II, and III** with long-term trends. In phases I and III, the Nikkei rises just like the MSCI World, albeit with short-term setbacks. The situation is different in phase II, which stretches from 1990 to 2012. While the MSCI World more than doubled during this period, rising from 557 to 1315 points, the Nikkei fell from 38,064 to 9057 points, a drop of almost 80%. Although a correlation between the two indices can still be identified in this period, e.g. during the bursting of the dotcom bubble or the global financial crisis of 2008, the two indices trended in opposite directions overall.

What impact did this phase have on the Nikkei’s return distribution? To answer this question, Figure 3 shows histograms of the return distributions of the Nikkei and the MSCI World.

The distribution of returns shows that the distribution for the Nikkei is somewhat broader than for the MSCI World, i.e. months with particularly large negative or positive returns occur more frequently for the Nikkei. Nevertheless, the average returns for the total period show only minor differences. For the Nikkei, the average monthly return is 0.57%, which corresponds to an **annual return of 7.06%**, while the MSCI World is just above this with a monthly value of 0.64% and an **annual value of 7.96%**. It is important to note that these are nominal values and no dividends have been taken into account.
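The annual figures follow from compounding the average monthly return over twelve months. A quick sketch of the conversion, using the average monthly values quoted above:

```python
def annualize(monthly_return):
    """Compound a constant monthly return over twelve months."""
    return (1.0 + monthly_return) ** 12 - 1.0

print(round(annualize(0.0057), 4))  # Nikkei: 0.0706 -> 7.06% p.a.
print(round(annualize(0.0064), 4))  # MSCI World: 0.0796 -> 7.96% p.a.
```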

But how do things look if we only consider the critical period II from 1990 to 2012? After all, this period spans two decades and even up to the present day, the record values of the Nikkei have not been reached again. The distribution of returns for this period is shown in Figure 4.

With the beginning of the 1990s, the speculative bubble in the national stock and real estate market burst in Japan. Figure 4 demonstrates the devastating consequences this development had on the returns of the Nikkei. While the MSCI World’s monthly return of 0.41% (**5.03% annually**) is slightly lower than for the period as a whole, the Nikkei posted a monthly return of -0.39% (**-4.78% annually**).

It is noteworthy that this negative return was not achieved over a short period of a few years, which is often the case during major stock market crashes, but over a period of more than two decades. Another important finding is that this development was limited to the Japanese stock market. Although returns on the global stock market were also lower for this period, price gains were still achieved in contrast to the Nikkei. The suspicion is therefore that the reasons for this prolonged bear market were of national origin.

From 1980 to 1985, the U.S. dollar appreciated by around 50% on the international currency markets against the currencies of the other G5 nations of that time (Great Britain, West Germany, France and Japan). This caused great difficulties for American industry, since imported products, including those from Japan, could be offered at much lower prices than domestic products, and it led to large trade imbalances at the expense of the USA. After U.S. industry had built up sufficient pressure on the government, the **Plaza Accord** was concluded between the G5 nations in 1985 at the urging of the United States.

The aim of the Plaza Accord was to eliminate imbalances in trade balances through controlled interventions in the international currency market. Following the intervention, the yen appreciated from 238 per dollar in 1985 to 165 per dollar in 1986. [2]

However, the appreciation of the yen accelerated faster than planned as speculators, driven by the appreciation, pumped massive amounts of capital into buying yen and investing in the Japanese real estate and stock markets. The **Louvre Agreement** in 1987 attempted to counteract the yen’s appreciation, but to no avail. The massive inflow of capital as a result of the yen’s appreciation ultimately led to very high valuations of Japanese companies and real estate, and this ultimately led to one of the **largest speculative bubbles in history at the national level**. As a result of the strong appreciation of the yen starting in 1985, Japan’s international competitiveness deteriorated and the Japanese central bank failed to take appropriate measures to counteract it.

The fact that the speculative bubble on the Japanese stock market was able to develop in this form was thus largely fueled by political decisions at the international level, and the Japanese government did not find suitable means to counteract this development.

What conclusions can investors draw from the fall of the Nikkei? The multilateral Plaza Accord of 1985 was largely responsible for the emergence of the speculative bubble in Japan. As a result of the Plaza Accord, the yen appreciated dramatically against the U.S. dollar. The Japanese central bank failed to prevent or at least cushion the emergence of the speculative bubble with monetary policy measures.

The result was a bear market for the Japanese benchmark index that lasted for more than two decades, with losses of up to 80%. Although this period was not the best for the global stock market either, prices for the MSCI World more than doubled during this time. So if you had been a typical Japanese stock investor with your typical dose of home bias during that time, you would have experienced painful losses in the stock market. The globally diversified investor with an investment in the MSCI World would have more than doubled his assets in nominal terms instead.

What is the **reason behind home bias**? Apparently, psychological factors play an important role here, especially investors’ feeling that they know the companies of their own country and are thus better able to judge their shares (stock picking by the active investor). The more competence one attributes to oneself, the greater the tendency toward home bias. Whether this self-attributed competence is real may be doubted.

So if you don’t trust yourself to reliably forecast the developments of the global economy and the international currency market and their implications for the domestic stock market, you should rather rely on a globally diversified stock portfolio. Of course, there will always be national stock markets that outperform a globally diversified index for a certain period of time, e.g. currently the US stock market. But if you cannot reliably forecast this market, you will not be able to take advantage of this circumstance.

- At the end of 1989, one of the biggest national speculative bubbles of all time burst on the Japanese stock market. Japan’s leading index, the Nikkei, recorded losses of up to 80% and has still not been able to regain its all-time high of that time.
- While the Nikkei succumbed to a bear market over a period of two decades with an average return of -4.78%, the MSCI World generated a positive annual return of 5.03%.
- Home bias can result in large losses in extreme cases or reduced returns in less drastic cases. With ETFs it is nowadays very easy and cost effective to avoid this risk and to build up a globally diversified stock portfolio.

What is your situation? Do you mainly have stocks or ETFs with a focus on a specific country or region in your portfolio and what are the arguments for this approach? Or do you go for a globally diversified approach?

[1]: Vanguard, https://personal.vanguard.com/pdf/ISGGAA.pdf

[2]: World Bank, https://data.worldbank.org/indicator/PA.NUS.FCRF?locations=JP

For many investors, the MSCI World is the base index for their ETF portfolio. It tracks the development of the stock market in 23 industrialized countries, contains about 1600 companies and comprises about 85% of the market capitalization. In this blog post, we look at the spearhead of the MSCI World, the Top 10 positions by weight in the index, and thus the most valuable companies in the industrialized countries. The MSCI World Index weights companies by their market capitalization, or market value. As this is subject to daily fluctuations on the stock market, the weight with which the companies are represented in the index also fluctuates.

We look at the development over the last ten years: Which companies were featured, when were they featured, how heavily were they weighted, and has the cluster risk in the MSCI World increased over time?

To examine the performance of the Top 10 positions in the MSCI World, I collected data from the MSCI World factsheets. The data is available at monthly intervals from January 2011 to the present time (March 2021) and includes all positions in the MSCI World. Thus, we have information on the composition of the MSCI World for the past 123 months.

In addition to the weighting, the data also contain information on the industry and the domicile of the companies. Table 1 shows an example of the data for the current (as of March 2021) Top 10 positions of the MSCI World.

The table already indicates that the development of the MSCI World depends to a large extent on US companies (currently, US companies account for approx. 66% of the index). [1] In addition, tech companies such as Apple, Microsoft, Amazon and Alphabet (parent company of Google, which appears twice in the table with different share classes) currently have a particularly strong influence. But has that always been the case over the past decade?

In order to be able to take a closer look at the development over time, you will find below an animation of the composition of the Top 10 of the MSCI World for the last ten years. The animation shows a bar chart according to the weightings of the companies and is updated in monthly steps. In addition to the weights of the individual positions, the development of the cumulative weights of the Top 10 positions over time is also shown.

The animation reveals that the MSCI World Top 10 has by no means been static over the past ten years, but has seen lively turnover. A total of 25 companies have held the title of “Top 10 member” over the past decade. For a better overview, these 25 companies are listed again in Table 2 together with their current weighting.

At the beginning of the period, more traditional companies, including those from the oil industry, appeared in the Top 10, e.g. ExxonMobil at No. 1, General Electric, Chevron, HSBC, Procter & Gamble and Nestlé. Apple and Microsoft were also represented, but with a much smaller weight than today. In today’s Top 10, by contrast, the major tech corporations from the U.S. occupy a dominant role. Apple, Microsoft, Amazon, Alphabet and Facebook now occupy the top ranks and account for almost 13% of the MSCI World. The rise of tech stocks has accelerated significantly with the Corona pandemic in 2020.

Table 2 also shows that with the exception of three companies from tiny Switzerland and HSBC from the UK, all of the Top 10 positions over the past decade have come from the US. We will examine how the country weightings of MSCI indices have evolved over time in an upcoming blog post.

In the following section, we examine the cumulative weight of the top positions in the MSCI World in order to assess whether the cluster risk has increased. By cluster risk, in the context of stock indices, I mean the risk that arises when the performance of an index is significantly dominated by a small number of companies.

In addition to the cumulative weight of the Top 10, which was already shown in the animation above, we will also look at the development over time of the cumulative weights of the Top 20, Top 50 and Top 100 positions of the MSCI World. These are shown graphically in Figure 1.

Figure 1 clearly illustrates that the cluster risk in the MSCI World has increased over the last ten years. Whereas in January 2011 only around 9% of the weight was concentrated in the Top 10, today it is around 16%. For the Top 100, the figure has risen by around 5 percentage points. In total, the Top 100 currently account for 45% of the MSCI World. As a reminder, the entire index comprises around 1600 companies, i.e. around 6% of the companies represented account for almost half of the index. The trend toward a concentration of weight in the top positions was further accelerated in 2020 by the run on tech stocks during the Corona pandemic. Time will tell whether this is just a momentary outlier or whether the trend will continue.
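The cumulative weight of the top positions can be computed by sorting the index weights. A small sketch with hypothetical weights (in percent of the index), not actual MSCI data:

```python
def top_n_weight(weights, n):
    """Cumulative weight of the n largest positions."""
    return sum(sorted(weights, reverse=True)[:n])

# Hypothetical five-stock index with weights in percent
weights = [5.0, 4.0, 3.0, 2.0, 1.0]
print(top_n_weight(weights, 2))  # 9.0
```

Applied to the full list of roughly 1600 MSCI World weights per month, this yields the curves shown in Figure 1.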

How can investors respond to this increasing cluster risk in the MSCI World over the past few years? In my view, it can make sense to add an ETF on the MSCI EM or MSCI EM IMI (also contains small cap stocks) to the portfolio or to replace the MSCI World with the MSCI ACWI. Both options add emerging market stocks to the portfolio and thus moderate the strong weighting of the Top 10 of the MSCI World (currently tech stocks). It also reduces the proportion of U.S. companies in the overall portfolio.

Another option would be to include factors such as small cap stocks (companies with small market capitalization) or value stocks (cheaply valued stocks). This can be achieved for example with ETFs on the MSCI World Small Cap and MSCI World Value. In general, it should be understood that a portfolio holding only the MSCI World and MSCI EM does not strictly track the overall market. The simple reason is that such a portfolio completely neglects companies with small market capitalization, although they have outperformed the overall market in the past (small cap effect).

- In total, 25 different companies have been part of the MSCI World Top 10 over the past decade.
- In the last decade, more tech companies have entered the Top 10.
- The cumulative weight of the Top 10, Top 20, Top 50 and Top 100 has increased sharply in recent years, i.e. a small number of companies increasingly determine the price development of the MSCI World. Thus, the cluster risk has increased.
- To reduce the high weighting of individual positions, it is possible to add other indices to the portfolio. Possibilities would be to include the MSCI EM and thus shares from emerging markets or to overweight factors such as value or small cap.

It was clear to me before writing the article that the largest positions in the MSCI World account for a large part of the weight. However, I would not have thought that this concentration has accelerated so much in recent years. I was also surprised by which companies were already in the Top 10. Would you have thought that the concentration of weight in the top positions of the MSCI World has increased so much in recent years? And which members of the Top 10 surprised you?

[1]: MSCI, https://www.msci.com/equity-fact-sheet-search (search for “MSCI World”)

The hysteria of the all-time high is widespread in the financial media and unsettles investors. For years, stock markets have gone from all-time high to all-time high, and cries that stock markets are grossly overvalued are a constant companion. Graphical illustrations such as the price trend of the MSCI World in Figure 1 are often used to underpin the overvaluation of stocks. But is the **occurrence of an all-time high a suitable indicator** to assess the situation on the stock market at all, and should investors really base their investment decisions on it?

We use two different data sets to answer the question about the relevance of all-time highs: **Robert Shiller’s data and MSCI World data**. The Shiller data covers the performance of the **S&P 500 from 1871 to the present**. This is the longest period for which data is available on any stock market. The S&P 500 tracks the performance of the 500 largest publicly traded U.S. companies. Moreover, the Shiller data includes inflation-adjusted values in addition to nominal values, hence we can examine the impact of inflation on the frequency of all-time highs. We will use the price-only index, excluding dividends. Table 1 shows the first ten measurement points of the Shiller data. The measurement points are recorded at monthly intervals.

We do not only look at the US stock market, but at the nominal price development of the **MSCI World over the last 50 years** as well. It covers the development of the stock market in 23 industrialized countries and tracks around 85% of stocks by market capitalization. We also use the price index without dividends here. Unfortunately, inflation-adjusted values are not available for the MSCI World. The first ten measurement points for the MSCI World can be seen in Table 2. The measurement points are recorded at monthly intervals.

You can find further information and how to find the Shiller data and the MSCI World data here: Financial Data.

In order to answer the question about the relevance of all-time highs, we will examine three factors: the frequency of all-time highs, the time intervals between them, and the average returns after reaching an all-time high (here we will refer to the blog of Gerd Kommer, a German expert on stock market research). We will also examine the impact of inflation on the frequency of stock market records.

First, let’s go back to Figure 1, which shows the price trend of the MSCI World. Indeed, in this plot, the increase in price seems to be accelerating. In absolute terms, this is true. But we have to look at the **percentage increase and not the absolute one**.

After all, an investor will not care whether his assets double at a price of USD 100 or USD 1000. What matters are the percentage changes and these are not adequately represented in a linear price scale as in Figure 1, because the absolute index value for recent times is much higher due to the compound interest effect. Therefore, in the linear representation, it looks as if the prices practically did not move at all until the 1980s, followed by an explosion starting in the 1990s.

**This misleading effect can be eliminated by using a logarithmic plot.** On this scale, the distances for identical percentage changes are equal throughout the chart. Figure 2 therefore shows the Shiller data and the MSCI World data with a logarithmic price scale. In addition to the price developments, all all-time highs of the respective index are marked with red dots.

In some cases, the individual points are indistinguishable because the highs follow one another at short intervals. In the 1803 months of Shiller data, **there are a total of 302 all-time highs**. This corresponds to a **share of 16.7%**. In the MSCI World, the occurrence is even more frequent; **out of 615 months, 133 months set a new all-time high**. That’s a whopping **21.6%**.
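The share of record months can be determined by comparing each index level with its running maximum. A sketch with a toy price series (not the actual Shiller or MSCI data):

```python
import numpy as np

def all_time_high_share(prices):
    """Fraction of months in which the index stands at its running maximum."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)
    return (prices >= running_peak).mean()  # True whenever a record is (re)set

# Toy series: records in months 1, 2 and 4 -> share of 3/5
print(all_time_high_share([100, 110, 105, 120, 118]))  # 0.6
```

Run over the 1803 months of Shiller data, this would give the 16.7% quoted above.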

The figure demonstrates that all-time highs are by no means a peculiarity but rather the rule. If we recall the results of the blog entry Returns of Stock Shares and ETFs over the last 50 years, we know that returns for stocks follow a probability distribution with a positive expected value, i.e., average returns are positive. For this trivial reason, the stock market tends to go up in the long run and reaching new highs is in the nature of things.

Gerd Kommer also addressed the myth of the all-time high in one of his blog posts. [1] He examined the Shiller data with respect to the average return after an all-time high is reached and could not find any detrimental effect on returns. On the contrary, for short- and medium-term periods of up to three years after reaching an all-time high, average returns were actually higher than the overall average. This could be related to the so-called **momentum factor**, which we will highlight in an upcoming blog post. It describes that companies with recent relatively high returns tend to maintain higher returns for a limited period of time.

The mere existence of an all-time high has no informational content regarding the valuation or overvaluation of the stock market. Indeed, **the absolute index value alone says nothing about the valuation of the market**, but is often perceived as a valuation indicator due to its negligent use in the financial media to generate headlines.

In order to be able to assess the value of a share, the price value must be considered in relation to another variable. **Fundamental business indicators** such as profit, cash flow or book value are often used for this purpose. This then results in ratios such as the price-earnings ratio (P/E ratio), price-cash flow ratio (P/CF) and price-book ratio (P/B), which are also frequently stated in the fact sheets of ETFs.

Nonetheless, **all-time highs are not regularly distributed**. The Shiller data for the S&P 500 reveal this particularly clearly. From 1871 to 1929, 77 stock market records occurred. Then followed a long dry spell from 1930 to 1954 without a single all-time high. After that, the pace picked up again, and in the period up to the present a further 225 records were set.

The fact that not a single all-time high was reached from 1930 to 1954 is certainly related to the devastating crises of that era, the Great Depression and the subsequent Second World War. After that, new all-time highs were again reached in batches. The stabilization that set in after World War II and the interconnection of the global economic system were probably the main reasons for the stock market’s outstanding performance.

It is also striking that the **all-time highs usually occur in clusters**, i.e. they follow one another at short intervals. This circumstance could in turn be related to the momentum effect. In the following section, we will therefore look at the temporal distribution of all-time highs and answer the question of how much time passes on average until the next stock market record is reached.

Figure 3 shows this distribution for the S&P 500 and the MSCI World. For most all-time highs, **the time interval is only one month**, in line with the clustering of all-time highs. For the S&P 500 the maximum time interval is 300 months and for the MSCI World 80 months. These stock market phases can be explained by drastic crises: for the S&P 500, as mentioned above, the Great Depression and World War II, and for the MSCI World, the bursting of the dotcom bubble in March 2000.
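The intervals themselves follow from the positions of the record months. A sketch, again with toy data rather than the actual index series:

```python
import numpy as np

def record_intervals(prices):
    """Number of months between consecutive all-time highs."""
    prices = np.asarray(prices, dtype=float)
    record_months = np.flatnonzero(prices >= np.maximum.accumulate(prices))
    return np.diff(record_months)

# Toy series with records in months 0, 1 and 3 -> intervals of 1 and 2 months
print(record_intervals([100, 110, 105, 120, 118]))  # [1 2]
```

A histogram of these intervals for the real data produces Figure 3.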

The last question we want to address is what **impact inflation** has on the frequency of stock market records. We base this analysis on the Shiller data, which provide inflation-adjusted values in addition to nominal values. If we look at the inflation-adjusted values, the number of months with new highs decreases from 302 to 172. **The share is thus reduced from 16.7% to 9.5%.** Although inflation thus wipes out more than 40% of the nominal records, real records on the stock markets are by no means a rarity.

- The occurrence of a new all-time high alone does not provide any information regarding the valuation or overvaluation of the stock market and should therefore under no circumstances be used to make investment decisions. On the contrary, if one postpones an investment in the stock market solely because of the presence of an all-time high for fear of overvaluation, this can have a negative impact on the return achieved (opportunity costs).
- For the valuation of shares, the absolute price must be set in relation to business metrics such as profit, cash flow and book value.
- New stock market records are the rule rather than the exception over long-term periods. In the past, 16.7% of all months set a new stock market record for the S&P 500 and as much as 21.6% for the MSCI World. Around 40% of these records disappear when inflation is taken into account (examined only for the S&P 500).
- All-time highs often occur in clusters and their time interval is in most cases only one month (momentum effect). The banal reason for reaching new all-time highs is that stock markets have positive average returns for sufficiently long periods of time.
- The average returns achieved after a new all-time high are not smaller than the historical average.

[1]: Blog von Gerd Kommer, https://www.gerd-kommer-invest.de/angst-vor-dem-allzeithoch/ (only in German)

The reality we experience is only one manifestation of an infinite number of different paths. This assumption is more likely to be associated with a philosophical debate or the many worlds interpretation of quantum mechanics, but not necessarily with one of the standard methods of the financial industry. We are talking about the Monte Carlo simulation. In this article, we will shed light on this technique and present an online tool that you can use to simulate possible future scenarios for your stock/ETF portfolio yourself.

The procedure of Monte Carlo simulation was published as early as 1949 by the scientists Metropolis and Ulam. [1] It is a mathematical method in which a large number of random experiments are simulated according to a probability distribution. The results are then used to generate possible future scenarios.

To illustrate the process, we can consider the Saturday lottery draw as a Monte Carlo simulation in the physical world. Here, drawing a lottery ball corresponds to the random experiment mentioned above. However, since six balls are drawn in the lottery drawing rather than a single ball, the simulation of a possible scenario or path consists of successively drawing six balls. If we repeat this process several times, the result is a corridor of possible future scenarios and from this we can deduce how probable a particular scenario is.

In the lottery example, drawing “six correct numbers” would be an extreme scenario, because only very few paths lead to this result, while drawing no correct number is most likely, because most of the paths result in this scenario. Thus, one should come to the decision not to play the lottery. Besides these “simulations” in the physical world, Monte Carlo simulations with a large number of scenarios can be simulated with computers. We will use the lottery example again further below to describe the procedure for simulating stock returns. Instead of a lottery number, each ball can be labeled with a monthly return. Then drawing a lottery ball would correspond to randomly determining a monthly return.
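In code, the ball-drawing analogy corresponds to sampling with replacement. A minimal sketch with a hypothetical drum of five monthly returns (the values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed so the sketch is reproducible

# Hypothetical "drum": each ball is labeled with one monthly return
drum = np.array([0.02, -0.01, 0.03, -0.05, 0.01])

# Drawing a ball = randomly determining a monthly return; sampling with
# replacement puts each ball back after the draw
draws = rng.choice(drum, size=6, replace=True)
print(draws)
```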

What are the preconditions for simulating the development of the stock market with the help of the Monte Carlo simulation? First, we must assume that the stock market’s price development is random. This assumption is widespread among scientists, but it is not uncontroversial. [2],[3] However, if we make this assumption and we know the probability distribution underlying the returns, we can simulate the possible evolution of the stock market. If the number of simulated scenarios is large enough, insightful findings can be obtained.

In the last blog entry, we already determined a probability distribution for the returns of the global stock market. We used the MSCI World data of the last 50 years and derived a normal distribution from it, which provides in general a good description of the return distribution. However, stock market crashes with strongly negative returns, such as the Dotcom Bubble or the Great Financial Crisis, are significantly underestimated. The reason is that the distribution of returns is asymmetric or skewed and exhibits a so-called “fat tail” for strongly negative returns. The monthly return distribution of the MSCI World and the normal distribution derived from it with an expected value of 0.87% and a standard deviation of 4.3% are shown in Figure 1.

Since strongly negative returns are significantly underestimated by the normal distribution, we will run Monte Carlo simulations based on the normal distribution on the one hand and on the actual return distribution on the other. Based on the results, we can then assess whether the underrepresentation of months with strongly negative returns by the normal distribution has a decisive impact on the performance or risk of a portfolio.

In the context of the Monte Carlo simulation, each possible path or future scenario consists of a sequence of random events weighted according to a probability distribution. In the simulation of a stock portfolio, the random events correspond to the random determination of a return. We will restrict ourselves to the monthly evolution of the return and consider the evolution of the portfolio over a period of 360 months, i.e. 30 years. For our Monte Carlo simulation, this means that for each path we need to randomly generate 360 monthly returns according to one of the two probability distributions described above.

Figuratively, the process for the MSCI World actual return distribution can be thought of as follows. The distribution of actual returns, shown in green in Figure 1, consists of 613 monthly returns. Theoretically, one could get 613 balls and write one of the past returns of the MSCI World on each of them. Using a lottery drum, we could then randomly draw one of these balls. After each draw, the drawn ball is put back and the whole thing is repeated 360 times. This would then be the result of a single Monte Carlo simulation. For the normal distribution, one can proceed in the same way, the balls in the drum would only have to be weighted according to the normal distribution.

In the simulations, we assume that the return is an independent variable, i.e., the return of one month does not depend on the returns of past months; there are no correlations between the returns of different months. This assumption can be doubted if one looks, for example, at a long-lasting so-called bear market, in which prices consistently decline over a long period of time. We will look at how to incorporate such correlations into the model, and how they play out, in an upcoming blog post.

Since the process with the balls would be somewhat tedious and time-consuming, we will use a computerized random number generator that can generate random returns according to the two distributions far more efficiently. In total, we will run 10,000 Monte Carlo simulations for each of the two distributions, i.e., for each distribution we will obtain 10,000 possible future scenarios for our portfolio, based on the past performance of the MSCI World.
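In Python, this drawing procedure can be sketched with the standard `random` module. This is a minimal sketch: the historical returns below are generated placeholders, since the full MSCI World series is not reproduced in this post.

```python
import random

random.seed(42)  # fix the seed for reproducibility

MONTHS = 360                # 30 years of monthly returns per path
MU, SIGMA = 0.0087, 0.043   # monthly mean and standard deviation (MSCI World)

# Illustrative stand-in for the 613 historical monthly returns
# (in the real analysis this list would hold the actual MSCI World data).
historical_returns = [random.gauss(MU, SIGMA) for _ in range(613)]

def simulate_normal():
    """One path: 360 returns drawn from the fitted normal distribution."""
    return [random.gauss(MU, SIGMA) for _ in range(MONTHS)]

def simulate_empirical(returns):
    """One path: 360 returns drawn with replacement from the historical data
    (the lottery drum with 613 balls described above)."""
    return random.choices(returns, k=MONTHS)

path = simulate_empirical(historical_returns)
print(len(path))  # 360 randomly drawn monthly returns
```

Running `simulate_normal()` or `simulate_empirical()` 10,000 times each would yield the full set of scenarios discussed below.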

Of course, no one can know whether the MSCI World or the stock markets in general will develop similarly in the future as they did in the past. But since we have no other data and know no general laws governing the stock market, the past is our only source of knowledge. And with the Monte Carlo simulation, we have a powerful tool to derive possible future scenarios from it.

In this section, we will analyze the results of the Monte Carlo simulations. We will compare the results of the simulations based on a normal distribution with the results obtained from simulations based on the actual return distribution.

To illustrate the process of the Monte Carlo simulation, you can watch a video with an exemplary simulation based on the normal distribution. We start with a portfolio value of 100 monetary units. On the left side, you can see the normal distribution and a histogram of the simulated, i.e., randomly drawn, returns, which slowly builds up over time. On the right side, you can see the corresponding development of the portfolio value. With increasing time, the compound interest effect can be observed very nicely, as the absolute value of the portfolio fluctuates much more strongly than at the beginning.

Table 1 shows exemplary results of Monte Carlo simulations with the normal distribution. The whole table has a total of 10,000 columns, one for each possible future scenario, and 361 rows, one for each simulated month (the first row contains the initial value of the portfolio). From the last row, we can already see that the final value of the portfolio after 30 years can vary a lot.

Of course, we cannot present all values in tabular form here due to the sheer quantity of data points. However, to get an overall impression of the results, Figure 2 shows the value developments of all 10,000 possible future scenarios. Since portfolio values may have grown considerably after 30 years of compounding, we have used a logarithmic scale so that the development of very poorly performing portfolios with low absolute value remains visible. In the animation above, on the other hand, we used a linear scale. The final values range from 70 monetary units to almost 50,000 monetary units. It is important to note that these are nominal values, not adjusted for inflation.

At first glance, the overall picture of all simulations is hardly distinguishable between the normal distribution and the actual return distribution, except that for the actual returns, extreme scenarios in which the final portfolio value remains around 100 monetary units are more common. The black line represents the average portfolio value across all simulations at the given time. After 30 years, it is 2325 monetary units for the normal distribution and 2252 monetary units for the actual return distribution, so there is no significant difference. This is to be expected, because the expected values of both distributions are identical.
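The simulated averages can also be checked against theory: with independent monthly draws, the expected final value is simply the starting value times (1 + mean monthly return) raised to the number of months. A quick sanity check:

```python
start_value = 100      # initial portfolio value in monetary units
mu_monthly = 0.0087    # expected monthly return of the MSCI World
months = 360           # 30 years of monthly compounding

# Expected final value under independent monthly returns:
# E[V_360] = V_0 * (1 + mu)^360
expected_final = start_value * (1 + mu_monthly) ** months
print(round(expected_final))  # ≈ 2261 monetary units
```

This theoretical value of roughly 2261 monetary units sits between the two simulated averages, as one would expect from sampling noise.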

Since Figure 2 is very cluttered with the totality of all simulations, we consider below a histogram of all portfolio values in year 20, i.e., 240 months after the start of the simulation. The histogram is effectively a cut through all portfolio values at month 240; the result can be seen in Figure 3.

At first glance, the histograms for the normal distribution and the actual return distribution look identical. Only the quantiles show that the portfolio values after 20 years are slightly shifted toward lower values for the actual return distribution. Briefly, on the meaning of quantiles: the 90% quantile, for example, says that 90% of all simulations for the actual return distribution have a portfolio value of up to 1500 monetary units, while this value is about 1550 for the normal distribution. But the difference between the normal distribution and the actual return distribution is again very small.
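The meaning of a quantile can be illustrated with Python's `statistics` module; the sample below is a made-up stand-in for the simulated portfolio values:

```python
import statistics

# Toy sample: 100 hypothetical final portfolio values (monetary units)
values = list(range(1, 101))

# statistics.quantiles with n=10 returns the nine decile cut points;
# the last one is the 90% quantile.
q90 = statistics.quantiles(values, n=10)[-1]
print(q90)  # 90% of the sample lies at or below this value
```

For the simulated portfolios, the same call applied to all 10,000 values at month 240 yields the quantiles quoted above.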

Another parameter that plays an important role in the evaluation of stock investments is the so-called maximum drawdown. A drawdown represents the loss between the high and the subsequent low within a certain period. The maximum of these drawdowns is then the maximum drawdown.

To illustrate the meaning, Figure 4 shows the maximum drawdown for the MSCI World over the last 50 years. It took place at the time of the Great Financial Crisis, peaking in October 2007 and bottoming in February 2009, with a maximum drawdown of a whopping 53.6%.
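The maximum drawdown of a price series can be computed by tracking the running peak and recording the largest relative drop from it. A minimal sketch with a made-up price series:

```python
def max_drawdown(prices):
    """Largest peak-to-trough loss, as a fraction of the preceding peak."""
    peak = prices[0]
    worst = 0.0
    for price in prices:
        peak = max(peak, price)                    # running high so far
        worst = max(worst, (peak - price) / peak)  # drop from that high
    return worst

# Toy example: high of 120 followed by a low of 60 is a drawdown of 50%
print(max_drawdown([100, 120, 60, 90, 130, 80]))  # 0.5
```

Applied to the monthly MSCI World index values, this function reproduces the 53.6% figure from Figure 4.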

How does the maximum drawdown behave for our simulated scenarios? Figure 5 shows histograms for the maximum drawdown of all 10,000 Monte Carlo simulations for the normal distribution and the actual return distribution. Looking closely, one can see that for the actual return distribution, small maximum drawdowns up to 37.5% occur significantly less frequently than for the normal distribution. As a result, the average of the maximum drawdowns of all simulations for the actual return distribution is 1.8 percentage points larger.

The probability of temporarily higher losses is thus greater for the actual return distribution than for the normal distribution. This is consistent with the underrepresentation of events with strongly negative returns in the normal distribution. The maximum drawdown plays an essential role for investors from a psychological point of view. No investor likes to see a minus of 50% in their portfolio, and there is a risk that the investor will actually realize these losses by selling.

Table 2 summarizes the average portfolio values after 5, 10, 20 and 30 years as well as the maximum drawdown of the Monte Carlo simulations for the normal distribution and the actual return distribution of the MSCI World.

- In this post, we have looked at Monte Carlo simulations for stock portfolios and investigated whether the results differ depending on whether the actual return distribution or a normal distribution is used.
- From Table 2, we can see that the portfolio values and corresponding quantiles at different points in time do not show substantial differences.
- The only variable that shows a significant difference is the maximum drawdown, whose average value is 1.8 percentage points larger for the actual return distribution; high temporary losses are thus more frequent.
- At the outset, we mentioned that in our Monte Carlo simulations we assume that the monthly returns are independent. However, since we have observed prolonged bear and bull markets in the past, the question arises to what extent past returns and future returns are correlated. In an upcoming blog article, we will look at the distribution of the time length of certain stock market phases. We will investigate whether these can be reproduced with an independent distribution of returns or whether we need to add correlations to our simple Monte Carlo simulation. For example, does a negative return follow a negative return with 50% probability?
- One book I can strongly recommend on the subject of Monte Carlo simulations for portfolios and the role of randomness in life is “Fooled by Randomness” by Nassim Nicholas Taleb (see book recommendations for more information).
- There are tools on the Internet that allow you to run Monte Carlo simulations for stock portfolios online. See financial tools for more information.

Have you ever used Monte Carlo simulations to simulate possible future scenarios for your stock portfolio, ETF savings plans or withdrawal plans? Let me know in the comments.

[1]: Metropolis, N. C./Ulam, S. (1949): The Monte Carlo Method, Journal of the American Statistical Association, Vol. 44, No. 247 (Sep. 1949), pp. 335-341.

[2]: Cootner, Paul H. (1964). *The Random Character of Stock Market Prices*

[3]: Lo, Andrew (1999). *A Non-Random Walk Down Wall Street*

Unpredictable events, such as the recent Corona Pandemic, have a massive impact on stock market prices. Such crises, but also the everyday chaotic price developments on the stock market, inevitably lead to the question whether the returns of the stock market can be modeled with the simple normal distribution or whether this assumption, which is predominantly made in financial statistics, is incorrect. The Brachiosaurus with its long tail provides a first clue to answer this question.

The normal distribution is a powerful tool used in many disciplines, describing a wide range of phenomena from the distribution of body size in the population to the irregular motion of molecules in liquids and gases. In financial statistics, the normal distribution is also used to describe the potential return and risk of a portfolio. The Black-Scholes model, which provides a theory for valuing financial options and is considered a milestone in financial mathematics, assumes a stock price that follows the normal distribution.

In order to define a normal distribution, you need two parameters: the expected value (aka the return potential) and the standard deviation (aka the risk). Because of its shape, the normal distribution is also called a bell curve. The normal distribution is symmetrical, so that the median and the expected value of the distribution are identical. With the help of the standard deviation, useful insights can be made regarding the probability of a measured value (see Figure 1):

- Approx. 68.3% of all measured values are within one standard deviation of the expected value.
- Approx. 95.5% of all measured values are within two standard deviations of the expected value.
- Approx. 99.7% of all measured values are within three standard deviations of the expected value.

As a consequence, measured values that are more than three standard deviations away from the expected value have a very small probability and are cumulatively responsible for only about 0.3% of the measured points. This finding will play a central role in assessing the adequacy of the normal distribution for describing stock market returns.
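These coverage probabilities follow directly from the normal distribution: the probability of landing within k standard deviations of the expected value is erf(k/√2), which can be checked with Python's `math` module:

```python
import math

def within_k_sigma(k):
    """P(|X - mu| <= k * sigma) for a normally distributed X."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    # prints approximately 0.683, 0.954, 0.997
    print(k, round(within_k_sigma(k), 4))
```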

In order to be able to assess the adequacy of the normal distribution for describing the returns of the stock market, we will consider the return distribution of the probably best known index of the index provider MSCI, the MSCI World.

The MSCI World comprises shares of the approximately 1600 largest companies by market capitalization from currently 23 industrialized countries. It forms the basis of the portfolios of many passive investors who invest in the stock market with the help of exchange-traded funds (ETFs). MSCI provides even broader indices such as the MSCI ACWI IMI, which includes not only industrialized countries but also shares of companies from emerging markets, as well as companies with smaller market capitalization, so-called small caps. However, historical data for this index only goes back to 1994, whereas data for the MSCI World has been recorded since 1970 and thus provides a better data basis.

More information about the MSCI World data and where to download it can be found here: https://en.guidingdata.com/financial-data/

The data used include the corresponding price at monthly intervals, starting in January 1970 and ending in February 2021. We use the so-called total returns form of the index, i.e., in addition to price gains, this form also includes dividend payments. We use monthly returns instead of annual returns to increase the number of available measurement points. The returns are nominal returns, i.e., the data are not adjusted for inflation. In total, the dataset includes index values for 613 months. Of these, 383 months have positive returns and 230 months have negative returns.

The first ten measurement points of the MSCI World data can be seen in Table 1. A look at the return column reveals that the 1970 stock market year was not an easy one for investors, with monthly negative returns of up to -9.26% (April 1970).

Figure 2 shows the development of the MSCI World over the last 50 years. In addition to the normal linear representation, the index is also shown with a logarithmic scale. The logarithmic representation has the advantage that developments in the range of smaller values can also be visibly represented, even if the data extends over several orders of magnitude (i.e. several powers of 10). Since the base (the index value) keeps increasing, developments closer to the present are exaggerated with a linear scale.

For example, an increase of 10% in November 2020, when the MSCI World was at $11,449, would have resulted in an absolute change of more than $1,000, whereas a 10% increase at the beginning in January 1970 would have corresponded to an absolute change of only about $10. On a linear scale, the development in 2020 is shown much more dramatically than that in 1970, although the decisive percentage change is the same. Therefore, in the linear representation it looks as if prices had practically not moved at all until the 1980s. This misleading effect can be eliminated by using a logarithmic plot, in which the distances for identical percentage changes in the chart are always the same.

We can now plot the returns we have calculated from the index values in a histogram as in Figure 3. We can also determine the expected value and standard deviations for the MSCI World return. The expected value for the monthly return is 0.87%. Taking into account the compound interest effect, this results in an annualized return of approximately 10.95%. The standard deviation of the monthly return is 4.3% and the annualized standard deviation is 14.9%.
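The annualized figures follow from the monthly ones: the return compounds over twelve months, while the standard deviation scales with the square root of time (under the assumption of independent monthly returns). A quick check:

```python
import math

mu_monthly = 0.0087     # expected monthly return
sigma_monthly = 0.043   # monthly standard deviation

# Compounding: (1 + monthly return)^12 - 1
mu_annual = (1 + mu_monthly) ** 12 - 1
# Square-root-of-time rule for independent returns
sigma_annual = sigma_monthly * math.sqrt(12)

print(f"{mu_annual:.2%}")     # ≈ 10.95%
print(f"{sigma_annual:.1%}")  # ≈ 14.9%
```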

These values for expected value and standard deviation are nominal values and therefore do not take inflation into account. The normal distribution resulting from the data can be seen as a red line in Figure 3. From the figure, we can see that the normal distribution is able to reproduce the basic characteristics of the actual return distribution. However, if we look at the region for strongly negative returns, we see that these returns are underestimated by the normal distribution compared to the actual returns. Apparently, the normal distribution performs poorly in predicting turbulent stock market periods, especially in the strongly negative range of -10% to -20%.

Another tool for examining differences between two distributions is the so-called quantile-quantile (QQ) diagram. This is a graphical tool in which the quantiles of two probability distributions are plotted against each other to check their consistency. In Figure 4, the actual data are shown as blue dots and the theoretical normal distribution is shown as the red diagonal. The more the blue dots deviate from the red line, the more the underlying distribution deviates from the normal distribution.
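The construction behind a QQ diagram can be sketched without any plotting library: sort the observed returns and pair each one with the quantile of the fitted normal distribution at the same plotting position. The sample below is a made-up stand-in for the actual return data:

```python
import random
import statistics
from statistics import NormalDist

random.seed(1)

# Toy sample standing in for the observed monthly returns
sample = sorted(random.gauss(0.0087, 0.043) for _ in range(200))

# Fit a normal distribution to the sample
mu = statistics.mean(sample)
sigma = statistics.stdev(sample)
fitted = NormalDist(mu, sigma)

# Pair each sorted observation with the normal quantile at the same
# plotting position; large gaps between the two signal a fat tail.
qq_pairs = [
    (fitted.inv_cdf((i + 0.5) / len(sample)), x)
    for i, x in enumerate(sample)
]
print(qq_pairs[0])  # most negative observation vs. its theoretical quantile
```

Plotting the pairs (theoretical quantile, observed value) against the diagonal gives exactly the picture in Figure 4.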

While the positive returns follow the normal distribution very well even for high values, returns smaller than approx. -7.5% are in part strongly underestimated by the normal distribution.

In contrast to the normal distribution, the actual distribution has a so-called “fat tail” for negative returns, similar to the long tail of the Brachiosaurus at the beginning. The actual distribution is asymmetric or skewed with a higher weighting of events with strongly negative returns, whereas months with exceptionally good returns occur less frequently. If we look at all the measurement points in the data that have a monthly return below -10%, we get the following table:

In total, there are ten measurement points that have a worse monthly return than -10%. These measurement points are related to major stock market crashes:

- The oil crisis in 1973
- I could not find any particular event related to the March 1980 decline
- The Black Monday in 1987
- The Japan crisis in 1990
- The Asian crisis in 1998
- In 2002 as a result of the dot-com bubble
- In September and October 2008 when stock markets around the world collapsed in response to the Lehman Brothers bankruptcy and the onset of the financial crisis
- In March 2020 as a result of the Corona Pandemic

Let us take as an example the return from October 2008 of a whopping -18.9%. Already on the basis of the standard deviation of 4.3%, we can deduce that such an event is extremely unlikely, lying 18.9/4.3 ≈ 4.4 standard deviations from the expected value. According to the normal distribution, such an event would have a probability of only about 5 in a million and would thus occur only about once every 180,000 months, or roughly 15,000 years. If we take into account other events from the last century, such as the First and Second World Wars and the Great Depression of the 1930s, which were also followed by drastic economic slumps, this value seems far too optimistic.
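The tail probability for such an event can be computed exactly from the normal CDF, using the complementary error function from Python's `math` module:

```python
import math

z = 18.9 / 4.3  # distance from the expected value in standard deviations

# One-sided tail probability of the standard normal, P(Z <= -z),
# expressed via the complementary error function
p = 0.5 * math.erfc(z / math.sqrt(2))
years = 1 / p / 12  # expected waiting time in years for one such month

print(f"P = {p:.1e}, i.e. once every {years:,.0f} years")
```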

How can we explain the existence of the “fat tail” for negative returns? A plausible explanation for this phenomenon can be found in the field of behavioral psychology, namely the so-called negativity bias. It states that people tend to be more influenced by negative events and emotions than by positive ones. Since most market players are human, this effect is also reflected in the stock markets. For example, when unexpectedly good economic data is announced, this tends to have a smaller effect than the negative equivalent.

This asymmetry in processing negative events makes sense for evolutionary reasons, because organisms that have adapted better to coping with life-threatening situations have been more likely to reproduce. In addition, missing opportunities with positive outcomes usually has less drastic consequences than ignoring dangers.

During drastic crises, such as last year’s Coronacrash, this effect is particularly evident. An article worth reading that provides an overview of the research on the negativity effect in the context of behavioral psychology is the review article “Bad is stronger than good”. [2]

So what conclusions can we draw from the examination of the MSCI World data? The normal distribution is a good approximation for the returns of the global stock market: positive returns and slightly negative returns are reproduced very well, while strongly negative returns can be substantially underestimated. It is also important to keep in mind that there are large gaps in the data, especially in the negative range, that contain no measurement points at all. This is a clear indication that the historical period for which MSCI World data is available does not extend far enough into the past to make more reliable statements about the distribution of returns.

Nonetheless, the probabilities that follow from the normal distribution for impactful events such as the 2008 financial crisis are probably too low and lead to an underrepresentation of such events. Other MSCI indices, such as the MSCI ACWI IMI, which tracks 99% of the global equity market, provide similar results. However, the period for which these data are available is even shorter and conclusions drawn from them should be taken with even more caution. Only time will tell whether the “fat tail” for negative returns will persist in its current strong form.

An interesting question that now arises is to what extent the underestimation of events with strongly negative returns by the normal distribution affects the predicted return of stock portfolios. To investigate this, we will simulate different portfolios using the Monte Carlo method in the next article. In one case, we will simulate the performance of a portfolio using the normal distribution; in the other, we will use the actual historical returns of the MSCI World.

In addition, there are so-called skewed distributions in mathematics, which are able to reproduce “fat tails”. How these distributions compare to the actual historical data will also be highlighted in an upcoming blog post.