Mean-Variance Portfolio Optimisation

Hey guys,

Below is the link to download my MV Portfolio Optimisation spreadsheet.

MV Portfolio Optimisation

Basically, this spreadsheet determines how much of each stock an investor should hold for the next investment period (year/month/week/day) to minimise the variance (risk) of the portfolio's returns for that period, given a target return. The spreadsheet also plots the Efficient Frontier for the portfolio, and the required weight of each asset against portfolio target return.

Check out the instructional video below for a brief demonstration on how to use the spreadsheet.

Once the user has input the ticker symbol for each stock within their portfolio on the “Assets” sheet, they can initiate the download of all historical pricing data from Yahoo Finance. This data is then used to calculate the Variance-Covariance matrix based on a user-defined Holding Period (Day, Week, Month or Year). The inverse of the Variance-Covariance matrix can then be estimated, and the expected returns of each asset calculated. The expected return for each asset is the arithmetic mean of the historical returns for that asset over the previously defined Holding Period, i.e. the same Holding Period used to calculate the Variance-Covariance matrix. The user can then optimise their portfolio, i.e. calculate the asset weights that minimise portfolio variance for a given target return, based on the theory below. The user will be prompted to input their desired target return as an annual figure when they click the “Optimise” button on the “Results” sheet; the target return is then scaled to coincide with the previously defined Holding Period. Finally, once the ‘a’ and ‘b’ vectors (see the theory below) have been calculated, the user can plot the Efficient Frontier of the portfolio, and each asset weight against target return.
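To make the return calculation concrete, here is a minimal VBA sketch of the statistics described above. The named range, its layout and the use of sample variance are illustrative assumptions, not the spreadsheet's actual implementation.

```vba
' Illustrative only: "PricesAssetA" is a hypothetical named range of
' closing prices with the newest price at the top (Yahoo-style ordering).
Sub ExampleReturnStats()
    Dim prices As Variant, rets() As Double
    Dim i As Long, n As Long
    prices = Range("PricesAssetA").Value
    n = UBound(prices, 1)
    ReDim rets(1 To n - 1)
    For i = 1 To n - 1
        ' Simple holding-period return between consecutive closes
        rets(i) = prices(i, 1) / prices(i + 1, 1) - 1
    Next i
    ' Expected return = arithmetic mean of the holding-period returns
    Dim expRet As Double
    expRet = Application.WorksheetFunction.Average(rets)
    ' Variance of the returns = one diagonal entry of the Var-Cov matrix
    Dim varA As Double
    varA = Application.WorksheetFunction.Var(rets)
    MsgBox "E[r] = " & Format(expRet, "0.0000") & _
           ", Var = " & Format(varA, "0.000000")
End Sub
```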

The spreadsheet adopts Mean-Variance Portfolio theory, a mathematical technique that determines asset weights within a portfolio to minimise the variance (risk) of the portfolio's returns given a target return for that portfolio. This spreadsheet assumes the portfolio is composed of only risky assets, specifically stocks listed on the Australian Securities Exchange (ASX). To minimise the variance for a given target return, MV Portfolio theory adopts the Method of Lagrange Multipliers, and after a bunch of tedious matrix algebra, the following relationship for portfolio weights is derived:
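$$ \mathbf{w} = \mathbf{a} + \mathbf{b}\,\mu_p, \qquad \mathbf{a} = \frac{A\,\Sigma^{-1}\mathbf{1} - B\,\Sigma^{-1}\boldsymbol{\mu}}{\Delta}, \qquad \mathbf{b} = \frac{C\,\Sigma^{-1}\boldsymbol{\mu} - B\,\Sigma^{-1}\mathbf{1}}{\Delta} $$

(written here in the usual textbook form, with $\mu_p$ the target return, $A = \boldsymbol{\mu}^{\mathsf T}\Sigma^{-1}\boldsymbol{\mu}$, $B = \mathbf{1}^{\mathsf T}\Sigma^{-1}\boldsymbol{\mu}$, $C = \mathbf{1}^{\mathsf T}\Sigma^{-1}\mathbf{1}$ and $\Delta = AC - B^2$; this should correspond to the ‘a’ and ‘b’ vectors the spreadsheet reports).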

where Σ represents the Variance-Covariance matrix and µ represents a vector of each asset's expected return. The ‘a’ and ‘b’ vectors shown in the above relationship are calculated and displayed in the “Results” sheet.

As can be seen from the above formula, the inverse of the Variance-Covariance matrix is required to compute the asset weights vector. Since the Variance-Covariance matrix is always square and generally non-singular, MS Excel’s “MInverse” function can normally calculate the inverse without issue. Where the Variance-Covariance matrix contains near-singular values (smaller than 1e-16), or where “MInverse” returns an error for some reason, the Moore-Penrose Pseudoinverse of the matrix is estimated instead. Refer to my previous blog post, Moore-Penrose Pseudoinverse, for more information on the algorithm used in this scenario.
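As a rough sketch of that fallback logic (the function name other than MInverse is illustrative, and the actual workbook also checks for near-singular values rather than relying on the error path alone):

```vba
' Illustrative only: try Excel's built-in inverse first, then fall back
' to the Moore-Penrose pseudoinverse estimate if it fails.
Function InvertCovMatrix(covMat As Variant) As Variant
    On Error GoTo Fallback
    InvertCovMatrix = Application.WorksheetFunction.MInverse(covMat)
    Exit Function
Fallback:
    ' MInverse errored (e.g. singular or near-singular matrix), so
    ' estimate the pseudoinverse instead (hypothetical helper name).
    InvertCovMatrix = PseudoInverse(covMat, 1E-16)
End Function
```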

So the reason I went to all the trouble of building a VBA algorithm for calculating the Moore-Penrose Pseudoinverse was that the first Variance-Covariance matrix I tried to invert was singular (non-invertible). I stupidly assumed that this would be the case for all Variance-Covariance matrices… it isn’t. The Variance-Covariance matrix was only singular in that instance because I had input the same stock twice (to represent the purchase of shares in the same company on two separate occasions), producing duplicate rows and columns in the matrix and making it non-invertible. In any case, the “Calculate Inverse” button on the “Results” sheet will produce at least an estimate of the inverse of the Variance-Covariance matrix. So the user can input the same ticker symbol any number of times and the portfolio can still be optimised, but I would recommend entering each ticker symbol only once; the calculation will be quicker and the result simpler.

I hope you find this spreadsheet useful. Let me know if you have any trouble or if you come up with any improvements I might be able to incorporate.

Enjoy!

Cheers,

Nick

Moore-Penrose Pseudoinverse Calculation

So this isn’t the type of spreadsheet I normally upload to nicksfinancetricks.com but it was something I spent quite a bit of time preparing and something that my research indicated was not easily found on the internet…

This spreadsheet calculates the Moore-Penrose Pseudoinverse (i.e. an estimate of the hypothetical inverse) of a matrix, a feature I thought was required for my soon-to-be-released Mean-Variance Portfolio Optimisation spreadsheet. As it turned out, being able to calculate the Pseudoinverse of the Variance-Covariance matrix isn’t critical, as this matrix is always square and seldom singular. Nonetheless, this feature means an estimate of the inverse can always be obtained, whereas MS Excel's built-in matrix inverse function can sometimes fail due to the presence of near-singular values.

Below is the link to download my Moore-Penrose Pseudoinverse spreadsheet:

Moore-Penrose Pseudoinverse

I would appreciate any contributions to the VBA Subroutine if anyone is interested. The following link will take you to my Github repository:

Github – VBA Pseudoinverse

Ironically, what can be done in 1 line of code in MATLAB using the “pinv” built-in function took me almost 400 lines of code in VBA. But if you like working with spreadsheets like I do then you’ll appreciate the convenience of having the ability to calculate the pseudoinverse of a matrix and immediately use the result within your spreadsheet. Plus, not everybody has access to MATLAB.

Anyway, if you’re interested in the mechanics behind the pseudoinverse algorithm it’s probably worth reading on. There are three main components to the algorithm: QR Factorisation via Gram-Schmidt Orthogonalisation, Singular Value Decomposition (SVD), and finally the removal of any singular values so the pseudoinverse can be formulated.

The algorithm I used for the QR Factorisation function follows the pseudo-code in Chapter 4 of Timothy Sauer's Numerical Analysis (2nd edition).
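For reference, a minimal VBA sketch of reduced QR via classical Gram-Schmidt might look like the following. This is my own illustration rather than Sauer's pseudo-code verbatim; it assumes A is m x n with m ≥ n, and returns Q with orthonormal columns and R upper triangular.

```vba
' Classical Gram-Schmidt QR sketch: A (m x n) -> Q (m x n), R (n x n)
Sub GramSchmidtQR(A As Variant, Q As Variant, R As Variant)
    Dim m As Long, n As Long, i As Long, j As Long, k As Long
    m = UBound(A, 1): n = UBound(A, 2)
    ReDim Q(1 To m, 1 To n)
    ReDim R(1 To n, 1 To n)
    Dim y() As Double
    For j = 1 To n
        ReDim y(1 To m)
        For i = 1 To m: y(i) = A(i, j): Next i
        ' Subtract the projections onto the columns of Q found so far
        For k = 1 To j - 1
            R(k, j) = 0
            For i = 1 To m: R(k, j) = R(k, j) + Q(i, k) * A(i, j): Next i
            For i = 1 To m: y(i) = y(i) - R(k, j) * Q(i, k): Next i
        Next k
        ' Normalise what is left to get the next orthonormal column
        R(j, j) = 0
        For i = 1 To m: R(j, j) = R(j, j) + y(i) * y(i): Next i
        R(j, j) = Sqr(R(j, j))
        For i = 1 To m: Q(i, j) = y(i) / R(j, j): Next i
    Next j
End Sub
```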

For the SVD algorithm, I converted a MATLAB function prepared by Paul Godfrey (23/10/2006) into a VBA function. Paul’s MATLAB function can be found on the Mathworks File Exchange at the following link:

Mathworks – Simple SVD

Paul’s algorithm uses QR Factorisation on matrix-A (the matrix to be inverted) to gradually pull matrix-U out from the left, and then uses QR Factorisation on matrix-A transposed to gradually pull matrix-V out from the right. The process makes matrix-A lower triangular and then upper triangular alternately. Eventually, matrix-A becomes both upper and lower triangular at the same time (i.e. diagonal) with the singular values on the diagonal. The algorithm outputs three matrices U, S and V, where S is a diagonal matrix holding the singular values and A = U*S*Vᵀ.
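Stripped of the convergence test and the sign clean-up, the core of that loop looks roughly like this in VBA. QRFactorise is an illustrative name for the QR routine above; EYE, MultArrays and TransposeArray are the helper functions listed further down, and their exact signatures are assumed here.

```vba
' Sketch of the alternating-QR SVD loop only (convergence test omitted)
Sub SimpleSVD(A As Variant, U As Variant, S As Variant, V As Variant)
    Dim Q As Variant, iter As Long
    U = EYE(UBound(A, 1))
    V = EYE(UBound(A, 2))
    S = TransposeArray(A)
    For iter = 1 To 100
        ' QR of S' pulls an orthogonal factor out on the left...
        QRFactorise TransposeArray(S), Q, S
        U = MultArrays(U, Q)
        ' ...and QR of the updated S' pulls one out on the right
        QRFactorise TransposeArray(S), Q, S
        V = MultArrays(V, Q)
    Next iter
    ' S is now (close to) diagonal, holding the singular values up to sign
End Sub
```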

For the final step, I converted MATLAB’s own “pinv” function to a VBA function that uses the above SVD function in place of MATLAB’s “svd” built-in function. This step identifies any values in the diagonal S-matrix as singular if they are smaller than the user-defined tolerance. These singular values are removed from matrix-S so the inverse can be estimated without rounding errors forcing the result erroneously towards infinity. The function outputs matrix X of the same dimensions as Aᵀ so that A*X*A = A, X*A*X = X and A*X and X*A are Hermitian.
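In standard notation, with the decomposition A = U S Vᵀ in hand, the pseudoinverse is assembled as

$$ A^{+} = V\,S^{+}\,U^{\mathsf T}, \qquad \left(S^{+}\right)_{ii} = \begin{cases} 1/S_{ii}, & |S_{ii}| > \text{tol} \\ 0, & \text{otherwise,} \end{cases} $$

so values below the tolerance simply drop out of the result rather than blowing the estimate up towards infinity.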

The other, more trivial, VBA functions required to produce the pseudoinverse are as follows:

  • EYE – returns identity matrix;
  • ZEROS – returns matrix composed entirely of zeros;
  • NORM – returns the length of a vector;
  • TRIU – returns the upper triangular part of a matrix;
  • DIAG – returns a column vector of the main diagonal elements of a matrix;
  • TransposeArray – returns the transpose of a matrix;
  • MultArrays – returns the matrix multiplication of two matrices of compatible size.

To assess the accuracy of the pseudoinverse, the result of multiplying matrix-A by matrix-X (the pseudoinverse) is compared with the appropriately sized identity matrix. This is based on the notion that a matrix multiplied by its inverse equals the identity, i.e. A*A⁻¹ = EYE, where EYE is the identity matrix. The difference between EYE and A*X is measured via the Frobenius Norm and printed to the “Start” sheet. The closer this number is to zero, the closer the pseudoinverse is to the hypothetical inverse.
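For reference, the Frobenius Norm of the difference is simply the square root of the sum of squared element-wise differences:

$$ \lVert \mathrm{EYE} - A X \rVert_F = \sqrt{\sum_{i}\sum_{j} \left( \mathrm{EYE}_{ij} - (AX)_{ij} \right)^2 }. $$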

The condition number of matrix-A is also estimated by dividing the largest absolute value found in the diagonal matrix-S (from SVD) by the smallest absolute value. This number can be used to estimate the expected number of correct digits produced by inverting matrix-A.
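As a rough rule of thumb for double-precision arithmetic, the number of correct significant digits is about 16 minus log10 of the condition number: a condition number around 8e+01 leaves roughly 14 correct digits (compare the F-Norms of tests 1 and 2 below), whereas one around 6e+16 leaves essentially none (tests 4 to 6).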

By way of comparison between MATLAB’s “pinv” function and the pseudoinverse algorithm I wrote in VBA, I’ve carried out a couple of tests. The test matrices were based on a 15-stock variance-covariance matrix. The results are summarised below:

Test  Matrix-A size  Tol.   Cond. #   Rank  Singular values  F-Norm (VBA)  F-Norm (MATLAB)  Max. Diff.  Matrix Type
1     15 x 15        1e-16  8.27e+01  15    0                2.09e-14      1.56e-14         4.64e-11    Square
2     14 x 15        1e-16  8.24e+01  14    0                2.16e-14      1.96e-14         4.58e-11    Underdetermined
3     15 x 14        1e-16  8.24e+01  14    0                7.82e-02      1.00e+00         4.56e-11    Overdetermined
4     16 x 16        1e-16  6.52e+16  15    1                1.00e+00      1.00e+00         4.81e-11    Square
5     15 x 16        1e-16  4.90e+16  14    1                1.00e+00      1.00e+00         4.59e-11    Underdetermined
6     16 x 15        1e-16  6.61e+16  14    1                1.00e+00      1.41e+00         4.65e-11    Overdetermined

where ‘Max. Diff.’ represents the maximum element-wise difference between the result of MATLAB’s “pinv” function and that of my VBA algorithm.

So basically, for overdetermined matrices (matrices with more rows than columns, equivalent to systems with more equations than unknowns) and matrices with singular values, both the VBA and MATLAB functions produce pseudoinverse estimates lacking in accuracy (since an accurate result is not achievable under these circumstances given double-precision data types and the tolerance specified). For underdetermined matrices (matrices with fewer rows than columns, or systems with fewer equations than unknowns) and square matrices without singular values, both the MATLAB and VBA functions produce pseudoinverse estimates with up to approximately 14 decimal places of accuracy.

The MATLAB and VBA functions typically produce results of comparable accuracy according to the Frobenius Norms. The maximum differences observed between the results of the two functions are less meaningful in terms of accuracy, but still serve as an indicator of correctness.

ASX Stock Screener

*** Yahoo Finance has discontinued the API this spreadsheet relies on, meaning it can no longer download stock data. Sorry guys ***

Hey guys,

Below is the link to download my ASX Stock Screener spreadsheet.

Download Spreadsheet here!

The spreadsheet allows the user to screen ASX-listed stocks based on company data obtained from Yahoo Finance using the website's API. The “tags” I used in my spreadsheet were gathered from a blog by a lad named Ido Green at the following website:

Yahoo Finance (hidden) API

More information can be found here if the user desires.

The following instructional video briefly explains how to use the Screener.

After downloading and opening the spreadsheet, the user first updates the list of ASX companies using button 1 on the “Screener” sheet. This extracts a .csv file from www.asx.com [full link here] and populates the “Screener” sheet with a complete and current list of ASX-listed companies along with their ticker symbols and GICS Industry Group.

Next, the user has the option of downloading the pre-set array of information for each company, or customising the information that will be downloaded from Yahoo Finance via the website's API. To download the pre-set information, the user simply clicks button 2 on the “Screener” sheet. Alternatively, the user can add to or delete items shown in the list on the “Tags” sheet. The full list of headings available for download via the Yahoo Finance API is shown on the “Yahoo API” page. The user can simply copy the desired headings along with their tags and paste them at the bottom of the list on the “Tags” sheet. Clicking button 2 will now download the user-defined headings.
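From memory, requests to that (now discontinued) API looked roughly like http://finance.yahoo.com/d/quotes.csv?s=BHP.AX&f=nl1, where the “s” parameter carries the ticker and “f” the concatenated tags (here “n” for name and “l1” for last trade price), with the response returned as a row of CSV. Treat the exact host and tag codes as approximate, since the service no longer exists and I'm quoting from memory.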

The user may need to apply a filter to the top row once all downloads have completed; otherwise the screener is good to go. The user is free to use all the convenient features of Excel as they wish. I have already applied some conditional formatting to the first two columns, Price-EPS ratio and Price-Book Value ratio, which helps to filter out possibly overpriced stocks. The user should keep in mind that a number of columns have already been formatted based on the pre-set headings, which means the user may need to re-format some columns if the list of tags is changed substantially.

I hope you find this spreadsheet useful. Let me know if you have any trouble or if you come up with any improvements I might be able to incorporate.

Happy screening!

Cheers,

Nick

Stock valuation via DDM and CAPM with Beta Calculation

Hey guys,

Below is the link to download my Stock valuation via DDM and CAPM with Beta Calculation spreadsheet.

Stock Valuation spreadsheet

In brief, the spreadsheet automatically downloads stock, market, sector and rates data from various websites before performing a number of calculations to produce an estimate of the stock's intrinsic value. The spreadsheet calculates the stock's systematic risk (beta), its cost of equity via the Capital Asset Pricing Model (CAPM), and its intrinsic value via the Dividend Discount Model (DDM) and minor variations thereof.

The following instructional video runs through the steps for analysing an example stock, Transurban (TCL) in this instance, and provides a high-level description of the calculations which are carried out.

The second instructional video below runs through the VBA subroutines which perform most of the calculations in the background. This is not necessary viewing for people who simply want to use the spreadsheet without knowing the mechanics of all the calculations.

A more detailed explanation of the mechanics behind the spreadsheet follows…

Firstly, the spreadsheet, at the command of the user, downloads the following data from various websites:

  • The stock's previous closing price (among other key statistics) from Yahoo Finance;
  • Current Australian Government Bond yields from Bloomberg;
  • A current list of all the companies listed on the ASX;
  • The stock's complete historical pricing data from Yahoo Finance;
  • The market's (e.g. ASX200) complete historical index data from Yahoo Finance; and
  • The GICS Sector's (e.g. Industrials) complete historical index data from Yahoo Finance.

The spreadsheet can be updated on a daily basis simply by repeating the steps outlined in the above instructional videos; any new historical pricing or index data will be inserted above the existing data, i.e. the original data set won't be replaced but added to, improving the accuracy of estimates over time.

After all the necessary data has been downloaded/updated, the spreadsheet can calculate the company's systematic risk, or Beta. This is done using daily, weekly, monthly and yearly Holding Period Returns (HPRs), allowing the user to make a choice based on personal preference. In my opinion, the shorter the holding period, the more accurate the results; for example, there are far more data points available for daily HPRs than for yearly HPRs.

For each holding period duration, Beta is calculated in two ways for additional comparison. The company's returns are compared with the market's returns (e.g. ASX200 returns) to produce a typically more accurate Beta estimate. Alternatively, the company's returns are compared with the returns of the relevant GICS Sector Index (e.g. Industrials). I would typically only use Beta estimates produced via Market comparisons, but sometimes it might be useful to review estimates produced via Sector comparisons.
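For anyone curious, the calculation itself boils down to something like the following sketch, where "StockReturns" and "MarketReturns" are hypothetical named ranges holding two aligned series of holding-period returns (not the actual ranges used in the spreadsheet).

```vba
' Illustrative only: beta as the slope of stock returns regressed on
' market (or sector) returns, i.e. Cov(stock, market) / Var(market).
Sub ExampleBeta()
    Dim beta As Double
    With Application.WorksheetFunction
        beta = .Slope(Range("StockReturns"), Range("MarketReturns"))
        ' Equivalent, via the covariance definition:
        ' beta = .Covar(Range("StockReturns"), Range("MarketReturns")) / _
        '        .VarP(Range("MarketReturns"))
    End With
    MsgBox "Beta = " & Format(beta, "0.000")
End Sub
```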

Once Beta has been calculated, the cost of equity can be calculated via the Capital Asset Pricing Model (CAPM). The cost of equity is calculated for daily, weekly, monthly and yearly compounding intervals. For example, the weekly cost of equity value represents the rate of return an investor requires to invest in this company for 1 week given the company’s current systematic risk (beta).
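For reference, the standard CAPM relationship is

$$ r_e = r_f + \beta \left( r_m - r_f \right), $$

where $r_f$ is the risk-free rate, $r_m$ the expected market return and $\beta$ the systematic risk calculated above, with all three rates expressed per holding period (daily, weekly, monthly or yearly).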

The CAPM uses the current Australian government bond yield as the risk-free rate. The particular yield (2-, 5-, 10- or 15-year maturity) used is based on the user-defined investment horizon. For long-term investors this will likely be 15 years, but 2 years might be more suitable for short-term traders. The selected risk-free rate is then scaled appropriately for each holding period duration.
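One common way to de-annualise a yield, and roughly what I'd expect the scaling here to do, is compound scaling:

$$ r_{f,\text{period}} = \left( 1 + r_{f,\text{annual}} \right)^{1/n} - 1, $$

with $n$ = 252, 52, 12 or 1 for daily, weekly, monthly or yearly holding periods respectively.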

The Dividend Discount Model (DDM) is then applied for both constant growth and zero growth scenarios, where the growth factor is calculated based on the company’s Return on Equity (ROE) and Dividend Payout Ratio.
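In standard form, the constant-growth (Gordon) and zero-growth cases are

$$ P_0 = \frac{D_0\,(1+g)}{r_e - g} \qquad \text{and} \qquad P_0 = \frac{D}{r_e}, $$

where the growth rate is presumably the usual sustainable-growth estimate $g = \text{ROE} \times (1 - \text{payout ratio})$.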

An alternative method to the DDM is also presented on the first CAPM sheet. This method compounds the user-defined previous year's annual dividend forward by the growth rate for one to forty years. The present value of each of the expected future dividend payments is then calculated by discounting the cash flow at the appropriate cost of equity given the holding period duration. The intrinsic value is the sum of all discounted future cash flows up to the end of the user's investment horizon. This is the same concept as the DDM, except this method assumes a finite investment horizon (as opposed to the infinite horizon assumed by the classic DDM) and might prove more suitable if the growth rate used is greater than the calculated cost of equity.
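Written out, that finite-horizon value is just

$$ P_0 = \sum_{t=1}^{T} \frac{D_0\,(1+g)^{t}}{(1+r_e)^{t}}, $$

where $D_0$ is the previous year's dividend, $g$ the growth rate, $r_e$ the appropriate cost of equity and $T$ the user's investment horizon (anywhere from 1 to 40 years); because the sum is finite, it stays well-behaved even when $g$ exceeds $r_e$.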

There is a secondary method presented for calculating the company's cost of equity and, ultimately, intrinsic value. This method is presented on the CAPM2 sheet and is fundamentally based on how the company's average yearly returns compare with the market's or sector's average yearly returns. These average yearly returns are compound averages calculated with daily, weekly and monthly compounding periods. Rather than using yearly holding period returns, which consider only one data point per year and perhaps overlook large spikes and/or dips in the stock price, this method considers the following:

  • 12 data points for monthly compounding yearly averages;
  • approximately 52 data points for weekly compounding; and
  • approximately 252 data points for daily compounding;

while still considering the covariance between yearly returns. This might be the user's preferred method, but in my opinion it produces less accurate results than those obtained via the first CAPM method.

I hope you find this spreadsheet handy and feel free to leave any comments you might have.

Stay tuned for the next spreadsheet which will calculate the variance of your portfolio with only a few clicks and strokes.

Cheers,

Nick
