2019 Inquire Europe seminar (2/3): Deep learning framework for asset pricing models

Source: investors-corner.bnpparibas-am.com

Deep learning and asset pricing models

Following our first post, we now discuss the “Deep Learning in Asset Pricing” paper by Guanhao Feng of City University of Hong Kong. The paper proposes a departure from machine learning research that attempts to forecast financial asset returns directly and stumbles on the low signal-to-noise ratio of such returns. Instead, to increase the chances of success, Guanhao presented a framework that applies deep learning to the construction of asset pricing models.

Examples of such models for stock returns include the Capital Asset Pricing Model (CAPM), with just one factor, and the Fama-French three- and five-factor models. The question addressed was whether machine learning can be used to build or improve such models.

Guanhao highlighted that researchers build asset pricing models by looking for factors, i.e. characteristics of stocks such as size, value and momentum, that can predict future stock returns. If a factor works, sorting the stocks by that factor at a given date is correlated with their future performance.

Thus, an easy way to demonstrate the predictive power of the factors is to use them to sort stocks and create investable long-short portfolios. The portfolio that buys the stocks with the highest expected returns and sells short the stocks with the lowest expected returns must generate positive and significant average returns over time if rebalanced at a frequency in line with the updates of the underlying characteristic data – typically monthly, quarterly or annually. Those factors are said to pay a factor premium.
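As an illustration, here is a minimal sketch of such a decile sort in Python using pandas (the DataFrames `chars` and `fwd_returns` are hypothetical, indexed by date with one column per stock):

```python
import pandas as pd

def long_short_return(chars: pd.DataFrame, fwd_returns: pd.DataFrame,
                      n_quantiles: int = 10) -> pd.Series:
    """Equal-weighted top-minus-bottom decile return for each date."""
    out = {}
    for date in chars.index:
        # Assign each stock to a quantile bucket by its characteristic value
        buckets = pd.qcut(chars.loc[date], n_quantiles, labels=False)
        long_leg = fwd_returns.loc[date, buckets == n_quantiles - 1].mean()
        short_leg = fwd_returns.loc[date, buckets == 0].mean()
        out[date] = long_leg - short_leg
    return pd.Series(out).sort_index()
```

A positive and statistically significant mean of the resulting return series over time is the factor premium referred to above.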

What makes an asset pricing model good?

An asset pricing model is considered good when it relies on a parsimonious set of such factors that is exhaustive in explaining stock and portfolio returns. If the factors in the model are exhaustive, the returns of any stock or portfolio can be well explained by a mimicking linear combination of the model factors. That means that the stock or portfolio returns and the returns from the respective mimicking linear combination of factors should be essentially identical, i.e. the pricing errors should average zero over time. A good asset pricing model minimises pricing errors for all possible portfolios.
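In the usual notation (symbols are ours, not the post's), with $r_{i,t}$ the return of asset $i$ in period $t$ and $f_{k,t}$ the returns of the $K$ factors, the time-series regression

$$ r_{i,t} = \alpha_i + \sum_{k=1}^{K} \beta_{i,k}\, f_{k,t} + \varepsilon_{i,t} $$

should produce intercepts (pricing errors) $\alpha_i$ that are statistically indistinguishable from zero for every asset if the factor set is exhaustive.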

What researchers typically do is use many characteristics and search for a parsimonious combination of factors that leads to pricing errors close to zero. But Guanhao argues that this approach suffers from a major drawback: the usefulness of the characteristics is tested statistically ex-post, and the feedback from model fitting is never returned to the construction of the characteristics.

This is perhaps not entirely true, since traditional researchers are likely to experiment with different specifications of stock characteristics after seeing the outcome of a first specification. But the process can be automated with a deep learning model: the algorithm is asked to learn iteratively until the objective is reached, i.e. pricing errors are driven close to zero. Using what is known as backpropagation, the model is refitted sequentially with the feedback obtained from the changes in the pricing errors at every step of the learning process.
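A stylised version of this objective (again in our notation, not the paper's): if $\theta$ denotes the network parameters that generate the factor returns $f_{k,t}(\theta)$, training seeks

$$ \min_{\theta}\; \frac{1}{N} \sum_{i=1}^{N} \hat{\alpha}_i(\theta)^2, $$

where $\hat{\alpha}_i(\theta)$ is the estimated pricing error of test asset $i$, recomputed from the regression above at every backpropagation step.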

How to set up a deep learning framework to construct such models?

The proposed framework starts by looking at an asset pricing model such as the Fama-French five-factor model through the eyes of a data scientist. Guanhao shows that the characteristics (size, value, etc.) can be seen as inputs into a neural network (sketched in code after the list below) where

  • the first layer is the sorting of stocks by those characteristics
  • the hidden layer is the construction of the long-short portfolios from those sorts
  • the output layer is made up of the returns of those long-short portfolios.
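Here is a minimal TensorFlow sketch of the idea (our simplification, not the paper's exact architecture: the non-differentiable sorting step is replaced by a learned mapping from characteristics to long-short weights):

```python
import tensorflow as tf

n_stocks, n_chars, n_factors = 3000, 10, 5  # toy dimensions

# Map each stock's characteristics to one long-short weight per factor.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(n_chars,)),
    tf.keras.layers.Dense(n_factors, activation="tanh"),
])

def factor_returns(chars, rets):
    """chars: (T, n_stocks, n_chars) characteristics;
    rets: (T, n_stocks) next-period stock returns.
    Returns (T, n_factors) long-short factor returns."""
    w = tf.reshape(net(tf.reshape(chars, (-1, n_chars))),
                   (-1, n_stocks, n_factors))
    w = w - tf.reduce_mean(w, axis=1, keepdims=True)  # dollar-neutral weights
    return tf.einsum("tsk,ts->tk", w, rets)
```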

The neural network can be trained using a large set of different portfolios. Guanhao used various sets to train the model: 49 industry portfolios, 25 size and price-to-book sorted portfolios and, finally, 147 portfolios built by sorting stocks on different variables. The framework was implemented using the TensorFlow library. The approach generated factors automatically from the characteristics and produced parsimonious models of returns that minimise pricing errors.
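Continuing the sketch above, a training step could penalise the pricing errors of the chosen training portfolios, with backpropagation feeding them back into the weight mapping (`asset_rets` is hypothetical and would hold, for instance, the returns of the 25 size/price-to-book portfolios):

```python
opt = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(chars, rets, asset_rets):
    """asset_rets: (T, n_assets) returns of the training portfolios."""
    with tf.GradientTape() as tape:
        f = factor_returns(chars, rets)                    # (T, n_factors)
        X = tf.concat([tf.ones([tf.shape(f)[0], 1]), f], axis=1)
        # OLS of portfolios on the factors; row 0 of coef holds the alphas.
        coef = tf.linalg.solve(tf.transpose(X) @ X,
                               tf.transpose(X) @ asset_rets)
        loss = tf.reduce_mean(coef[0] ** 2)                # mean squared alpha
    grads = tape.gradient(loss, net.trainable_variables)
    opt.apply_gradients(zip(grads, net.trainable_variables))
    return loss
```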

Guanhao used data from 1975 through 2017 for 3 000 stocks and a universe of factors covering all the main categories: market beta, book-to-market ratio, dividend yield, earnings-price ratio, asset growth, operating profitability, return on equity, volatility of returns, 12-month momentum and others.

The 1975-2002 sample was used in-sample to train models with different numbers of layers and factors. The 2003-2010 sample served as a validation sample to select the best model, and 2010-2017 was used as the test period.
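In pandas terms, the split looks like this sketch (`panel` is a hypothetical stand-in for the real dataset; boundary years as given in the post):

```python
import pandas as pd

dates = pd.date_range("1975-01-31", "2017-12-31", freq="M")
panel = pd.DataFrame(index=dates)  # placeholder for the monthly data

train = panel.loc["1975":"2002"]   # in-sample: fit the candidate models
valid = panel.loc["2003":"2010"]   # validation: select the best model
test  = panel.loc["2010":"2017"]   # held-out test period
```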

In the most sophisticated examples, Guanhao allowed for more than one hidden layer in the model and included macro predictors such as the Treasury bill rate, inflation, the long-term yield and the term spread. In such cases, the mapping from stock characteristics to long-short portfolio weights becomes a more complex function, determined by a neural network whose inputs include the macro predictors and their products with the characteristics.
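A sketch of those interaction inputs (names and shapes are ours): each macro predictor multiplies each characteristic, and the products are appended to the network's input vector:

```python
import numpy as np

def with_macro_interactions(chars: np.ndarray, macro: np.ndarray) -> np.ndarray:
    """chars: (n_stocks, n_chars) for one date; macro: (n_macro,).
    Returns (n_stocks, n_chars + n_chars * n_macro) augmented inputs."""
    # Outer product of each stock's characteristics with the macro vector
    cross = np.einsum("sc,m->scm", chars, macro).reshape(len(chars), -1)
    return np.concatenate([chars, cross], axis=1)
```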

Limited or no consistent improvements

While the asset pricing models created using deep learning showed a significant improvement in the in-sample and validation periods, which should perhaps not be a surprise, there was only a marginal improvement in the test period.

And when the model was trained on the 147 portfolios, the framework resulted in overfitting. Moreover, when the framework was used to try to do better than the Fama-French three- and five-factor models, there was no consistent improvement in either the test sample or even the validation sample. Only starting from the CAPM produced an improvement. This, too, should not be a surprise.

In all, we see this as an interesting idea, but the results are not sufficiently encouraging. We are more inclined to rely on machine learning algorithms such as clustering as a tool for searching through factors, even if this involves less automation and more pragmatism: it is easier to investigate the economic sense of factors and to research changes in factor correlations over time.
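For instance, hierarchical clustering on factor return correlations, a generic illustration rather than the proprietary method referred to below:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_factors(factor_rets: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    """factor_rets: (T, n_factors). Groups factors whose returns co-move."""
    corr = np.corrcoef(factor_rets, rowvar=False)
    dist = np.sqrt(0.5 * (1.0 - corr))       # correlation -> distance metric
    condensed = dist[np.triu_indices_from(dist, k=1)]
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```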

That is why we have been relying on such techniques for our own proprietary factor models for both equity and corporate bond portfolios.
