Modeling Finance with autocorrelations from Physics

What does it take to model and digitize the six activities of fund management? 
The activities of fund management are six rational processes:

  1. screening,
  2. ranking,
  3. sizing,
  4. timing,
  5. weighting, and
  6. validation.

Screening is a process of filtering, and ranking is a process of sorting. Sizing is the process that lets you diversify across the highest ranks. Weighting and timing use estimation or regression techniques to compute the weightings and timings that optimize an investment objective. Validation is the process of backtesting with a preset holding period over a preset validation period. It enables you to search for strong and persistent correlations over time in the power spectrum of Wall Street's random price fluctuations.
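
As a minimal sketch of how the six processes chain together, assuming synthetic price data, a naive trend-reversion ranking, and equal weights as placeholders for the actual optimization (all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weekly returns for 20 tickers over 52 weeks.
returns = rng.normal(0.001, 0.03, size=(52, 20))
dollar_volume = rng.uniform(1e5, 1e8, size=20)  # average daily dollar volume

# 1. Screening: filter for a minimum trading liquidity.
screened = np.where(dollar_volume > 1e6)[0]

# 2. Ranking: sort screened stocks by last week's loss (trend-reversion).
ranked = screened[np.argsort(returns[-1, screened])]

# 3. Sizing: diversify over the top-ranked names.
picks = ranked[:6]

# 4./5. Weighting and timing: equal weights and a one-week holding period,
#       as stand-ins for the actual estimation techniques.
weights = np.full(len(picks), 1.0 / len(picks))

# 6. Validation: backtest the equal-weighted picks over the sample.
portfolio_returns = returns[:, picks] @ weights
print(portfolio_returns.mean())
```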

The three challenges of quantitative finance
From a mathematical point of view, you only have to answer three questions when you want to maximize the returns and/or minimize the risks of your investment plan:

  1. Which stocks are most likely to increase/decrease in price as your long/short positions?
  2. How long should you hold on to each stock to maximize the price correlations over time?
  3. How much should you trade of each stock to maximize your investment objective?

Formally, to compute the answers to (1) – (3), you only need the historical asset prices. The statistics of historical asset prices is the domain of Quantitative Finance. CSI is our data provider of choice.

The three answers
The answer to the first question is a technique called Statistical Arbitrage: each holding period you rank stocks by decreasing likelihood of a price increase and take only the highest ranks.

The answer to the second question is revealed by the power spectrum of the price fluctuations, in line with the Wiener-Khinchin-Einstein theorem. The power spectrum is the chart that gives the investment objective as a function of the size of the holding period. As investment objective, one usually takes maximizing the annual growth rate, minimizing the maximum drawdown, or maximizing the ratio of the two. The holding period with the largest annual growth rate, smallest maximum drawdown, or largest ratio of the two gives you the strongest autocorrelations over time. Hence, strong autocorrelations are associated with a single holding period used over economic upturns and downturns.

The third question involves a technique widely applied and refined in signal processing: gradient descent. You expand the portfolio returns as a weighted sum of the individual stock returns during each holding period and compute the portfolio weightings that optimize your investment objective. Such portfolios are called optimal portfolios.
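
The Wiener-Khinchin link between the power spectrum and the autocorrelations can be illustrated numerically. The sketch below uses synthetic fluctuations rather than market data, and the (circular) autocorrelation convention is chosen to match the FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical detrended, zero-mean price fluctuations.
x = rng.normal(0.0, 1.0, size=1024)
x -= x.mean()

# Power spectrum via the FFT.
spectrum = np.abs(np.fft.fft(x)) ** 2

# Wiener-Khinchin: the autocorrelation is the inverse FFT of the spectrum.
acf_wk = np.fft.ifft(spectrum).real / len(x)

# Direct circular autocorrelation at lags 0..4, for comparison.
acf_direct = np.array(
    [np.dot(x, np.roll(x, -k)) / len(x) for k in range(5)]
)

print(np.allclose(acf_wk[:5], acf_direct))  # → True
```

The same identity underlies scanning holding periods for the strongest correlations: the spectrum and the autocorrelation function carry the same information in two different forms.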

How can a back test be your best predictor of success?
To compute the annual growth rate and maximum drawdown over economic upturns and downturns as a function of the size of the holding period, you need to perform a backtest over all holding periods spanning such upturns and downturns. It is often bluntly stated that a backtest gives you no insight into the future success of your trading plan. To us, this is like bluntly denying the existence of correlations over time in the power spectra of Wall Street's random price fluctuations. According to a 2005 quote from Jim Simons, the founder of RenTech, past performance is your best predictor of success. The actual returns of his Medallion fund are the undeniable empirical evidence that (simulated) past performance is indeed your best predictor of success: https://www.bloomberg.com/news/articles/2016-11-21/how-renaissance-s-medallion-fund-became-finance-s-blackest-box.

In-sample forecast versus out-of-sample forecast
Our software computes optimal portfolios using historical end-of-day exchange data up to and including the last trading day of each week, usually a Friday. It then forecasts that these optimal portfolios stay optimal during the next week, so that rebalancing takes place one trading day later at closing, usually a Monday. Our software computes the risks and rewards of the Friday portfolios as an in-sample forecast. More importantly, it computes the risks and rewards of the Monday portfolios as an out-of-sample forecast.
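
A toy walk-forward split can illustrate the difference between the two forecasts. The three-stock picker below is a hypothetical stand-in for the actual optimizer, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical daily returns for 10 stocks over two trading weeks.
daily = rng.normal(0.0, 0.01, size=(10, 10))
week1, week2 = daily[:5], daily[5:]

# Fit on data through Friday: equal-weight the week's 3 best performers
# (a placeholder for the portfolio optimization).
picks = np.argsort(week1.sum(axis=0))[-3:]
weights = np.full(3, 1.0 / 3.0)

# In-sample forecast: the Friday portfolio evaluated on its fitting week.
in_sample = week1[:, picks] @ weights

# Out-of-sample forecast: the same portfolio held the following week,
# entered at the Monday close (so Tuesday through Friday returns count).
out_of_sample = week2[1:, picks] @ weights

print(in_sample.sum(), out_of_sample.sum())
```

Only the out-of-sample number says anything about forecasting skill; the in-sample number measures the quality of the fit.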

The six activities of portfolio management
The six activities (1) – (6) of fund management determine not only the past performance in terms of risks and rewards but also the composition of your current portfolio. Your current portfolio is computed at the beginning of the current holding period. The process of screening, for instance, may involve screening for a minimum daily trading liquidity in terms of daily dollar volume. You may want to make sure that your stocks have a trading liquidity that is large compared to your investment per stock, so that you trade under Perfect Competition. Ranking of the screened stocks is established by Statistical Arbitrage, abbreviated to StatArb. Ranking may be driven by a drift term in the rate of change of the probability density function as well as by a diffusion process in which the largest price fall has the largest probability of reverting its course. Drift and diffusion are the two terms making up the rate of change of the probability density function in the Fokker-Planck equation. In quantitative analysis of the financial markets, they correspond to ranking mechanisms based on trend-following and trend-reversion, respectively.
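
A liquidity screen followed by a trend-reversion ranking might be sketched as follows; the liquidity threshold, position size, and universe are hypothetical assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
# Hypothetical universe: last week's return and average daily dollar volume.
last_week_return = rng.normal(0.0, 0.05, size=n)
dollar_volume = rng.lognormal(15, 2, size=n)
investment_per_stock = 50_000.0

# Screening: require daily dollar volume well above the position size,
# so that each trade approximates Perfect Competition.
liquid = dollar_volume > 100 * investment_per_stock

# Ranking by trend-reversion: the largest price fall is ranked first,
# on the premise that it has the largest probability to revert.
candidates = np.where(liquid)[0]
ranked = candidates[np.argsort(last_week_return[candidates])]
longs = ranked[:6]

print(longs)
```

A trend-following variant would simply reverse the sort order, ranking the largest rises first.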

Weighting according to Markowitz and according to Fama-French
The mathematical formulation of weighting was first introduced by Markowitz in 1952, who expanded the portfolio returns into a weighted sum of individual stock returns. He proposed to minimize the variance of the weighted portfolio returns and assumed a normal probability distribution for these returns. Standard regression then gives the weighting factors. Rather than using regression, we maximize the reward/risk ratio and search by gradient ascent for this maximum, with the maximum drawdown acting as risk and the sum of weighted annualized returns as reward. In that case, the reward/risk ratio equals the MAR ratio. We use that ratio as the Performance Indicator (PI) of our investment systems, validated over several recessions. We consider the MAR ratio well-balanced when MAR > 1. Maximizing the MAR ratio by varying the portfolio weightings is a form of gradient ascent. By maximizing the MAR ratio, you calculate optimal portfolios: portfolios weighted such that your investment objective is achieved with minimized risks and maximized rewards. In 1992, Fama and French extended the CAPM model by expanding a stock price into a weighted sum of market premiums or factors. This approach adds nuisance parameters but no new information to the optimization of Markowitz's weighting factors.
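
A gradient ascent on the MAR ratio can be sketched as below, using a numerical gradient and synthetic weekly returns; the step size, iteration count, and long-only constraint are illustrative assumptions, not our production settings:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical weekly returns of 5 stocks over 4 years (208 weeks).
R = rng.normal(0.002, 0.03, size=(208, 5))

def mar(w):
    """MAR ratio (CAGR / maximum drawdown) of the weighted portfolio."""
    w = w / w.sum()
    equity = np.cumprod(1.0 + R @ w)
    years = len(equity) / 52.0
    cagr = equity[-1] ** (1.0 / years) - 1.0
    peak = np.maximum.accumulate(equity)       # running high-water mark
    max_dd = np.max(1.0 - equity / peak)
    return cagr / max(max_dd, 1e-9)

# Gradient ascent with a central-difference numerical gradient.
w = np.full(5, 0.2)
for _ in range(200):
    grad = np.zeros(5)
    for i in range(5):
        e = np.zeros(5)
        e[i] = 1e-5
        grad[i] = (mar(w + e) - mar(w - e)) / 2e-5
    w = np.clip(w + 0.01 * grad, 1e-6, None)   # keep weights positive

w /= w.sum()
print(w, mar(w))
```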

Machine learning (ML), natural language processing (NLP), and artificial intelligence (AI)
ML, NLP, and AI are techniques that parametrize assumption (3). As long as price fluctuations remain fundamentally unpredictable, in line with assumption (1), employing these techniques results in fitting nuisance parameters that reveal no new information. In Einstein's formulation of Brownian motion, the connection between past, present, and future of each individual fluctuation is governed by the diffusion equation, with the coefficient of thermal diffusivity as the fundamental parameter. Equivalent parameters for price movements are nuisance parameters, since for portfolio management only the parameters of screening, ranking, weighting, and timing are of immediate interest.

Does fundamental analysis give you a competitive edge?
Financial experts often give their Discounted Cash Flow (DCF) projections for individual stocks or assets based on whatever information, insight, or breaking news they have. These future projections are called analyst estimates. Such projections, or changes thereof, can be imposed on the ranking process as additional conditions. These additional conditions influence the future dynamics of the markets. Hence, the investor who first knows and understands these consequences appears to be at an advantage.

Does technical analysis give you a competitive edge?
Besides DCF projections, some patterns of technical indicators are believed to result in profitable trades. Such trading systems may require a time resolution of less than a quarter of a second, and that kind of playing field is not level for just any participant. In addition, the trend-following patterns of technical analysis have yet to be proven reliable on objective scientific grounds.

Does algorithmic trading give you a competitive edge?
Algorithmic trading digitizes the search for the best deal of the asset transfer at the time of the trade. Time resolution may get as small as a microsecond and becomes an important part of the trade. That makes this playing field not level for just any participant. In practice, this implies that the cost of rebalancing a calculated and optimized portfolio may vary per trading algorithm. The outcomes of such trading algorithms may differ at different times for the same weighted portfolios. Therefore, the results of such algorithms are difficult to backtest, and hence to validate. We prefer to use Market-On-Close orders (see below), which have a predictable cost structure, in our field of Low-Frequency trading. We recognize that in the field of HF trading, algorithmic trading may give a competitive edge on the costs of the trades.

Which market orders allow you to validate your game plan?
In principle, validation is possible with Market-On-Close (MOC) orders, and the screening for a minimum liquidity can be set such that trading conditions approach those of Perfect Competition. Such orders appear to warrant a level competitive field. As long as additional information is imposed on the estimation techniques after it has become common knowledge, in our view it will not make much of a difference: the additional information is then already reflected in the historical exchange data used by the estimation techniques. As long as the additional information has not been universally spread, investors who know about it may be at an advantage. The line between foreknowledge and universal knowledge is the domain of compliance officers.

Are the weighted averages of the past the expectation values of the future?
As in the field of insurance, weighted averages of the past are the expectation values of the future. To calculate the expectation values in the financial markets, no distribution functions are required, nor is any set of rules governing the dynamics of financial markets. Only the clean historical exchange data are needed to provide the best fit between these data and the optimally weighted and timed portfolios. These optimal portfolios are tuned to one's own investment objective in a computational search for the portfolio weightings that balance risks and rewards.

So what does it take to model and digitize the activities of fund management?
You need a web designer, a software programmer, and a system manager. The web designer develops the web layout of the user interface, as all modern applications should interact through a browser. The so-called quant is the one who understands the six processes of a fund manager and knows how to model these processes and put the models into software code. The software programmer writes the application code for these six processes; usually another programmer writes the event-driven, cross-platform code. The system manager takes care of the hosting and of the setup and maintenance of the various server environments and the Content Management System. The use of Market-On-Close orders and the condition of Perfect Competition allow for reliable validations in foresight and hindsight.

Don’t banks have a public responsibility to not juggle the financial system into instabilities?
When banks want their quants to play with non-linear models and large positions, it is known from a mathematical point of view that the optimization routines become prone to instabilities and local optima. Hence, when big players start playing with non-linear modelling, the outcome of the game may be an unstable equilibrium.
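
The sensitivity of non-linear optimization to local optima is easy to demonstrate on a toy objective; the function below is purely illustrative and has nothing to do with any bank's model:

```python
import numpy as np

# A toy non-linear objective with several local minima.
def f(x):
    return np.sin(5.0 * x) + 0.1 * x * x

def grad(x):
    return 5.0 * np.cos(5.0 * x) + 0.2 * x

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent from a given starting point."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = descend(-2.0)
b = descend(2.0)
# Different starting points land in different local optima.
print(a, b, f(a), f(b))
```

A convex objective would not show this behavior: every start would converge to the same optimum, which is why non-linear models make the outcome of the optimization, and of the game, far less predictable.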

Each private investor has his own set of choices and circumstances resulting in his own investment fund.
The six rational processes of fund management allow for the existence and computation of so-called optimal portfolios, each one different for each given investment amount, portfolio size, time, and set of correlation times. These portfolios are optimally tuned to investment objectives of balancing risks and rewards and to one's own screening and ranking conditions, given the quality of the data provider and the height of the broker fees. Practically each individual set of choices and circumstances results in its own unique set of optimal portfolios that can be validated over a preset number of years. It is a transparent process without proprietary secrets.

Our findings after extensively backtesting a variety of game plans can be summarized as follows:

  • Our StatArb and forecasting algorithms are competitive with the best in the market.
  • Strong and persistent correlations can be found in Wall Street's power spectrum of wide-sense stationary random price fluctuations.  
  • Weekly selecting the 6 highest-ranked long and short positions gives expected returns that are competitive with RenTech's Medallion Fund over the past 30+ years.
  • For smaller and mid-sized investments, it pays off to design your own active investment plan tuned to your own individual needs and requirements.
  • In this field of opposing forces, you have the choice of ending up in a straitjacket of a common denominator or quantifying your own choices to let them optimally bloom.

The proof is always in the pudding
Any efficient ranking system should be able to rank, each week during the past 34 years, the top 6 long positions out of the presently 6,800 active non-OTC stocks of Wall Street, using first a few screening conditions, and get about 67% annually compounded:

The MAR ratio is 1.14, with a CAGR of 67% and YTD = 58.1% as of the close on Friday, November 22, 2019; the individual win ratio for stocks is 53%, the portfolio win ratio is 58%, and the weekly stock turnover is 85%. The good thing is that there are as many trading strategies as there are retail quants, with each trading strategy tuned to personal needs.
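
The statistics quoted above (CAGR, MAR ratio, win ratio) can all be computed from a single equity curve. The sketch below runs on synthetic weekly returns, so its numbers do not reproduce the quoted backtest:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical weekly portfolio returns over 34 years.
weekly = rng.normal(0.01, 0.04, size=52 * 34)
equity = np.cumprod(1.0 + weekly)

# Compound annual growth rate from the final equity value.
years = len(weekly) / 52.0
cagr = equity[-1] ** (1.0 / years) - 1.0

# Maximum drawdown relative to the running high-water mark.
peak = np.maximum.accumulate(equity)
max_dd = np.max(1.0 - equity / peak)
mar_ratio = cagr / max_dd

# Portfolio win ratio: share of holding periods with a positive return.
win_ratio = np.mean(weekly > 0)

print(round(cagr, 3), round(mar_ratio, 2), round(win_ratio, 2))
```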

What is the available software?
We know of four software packages on the market that allow one to set up one's own game plan. The first three, SmartQuant, QuantShare, and DLPAL, are based on technical indicators and focus on intraday or High-Frequency (HF) trading of individual assets. The last one, DigiFundManager, is based on selecting and trading optimal portfolios, uses only the clean historical Market-On-Close exchange data for its computations, and works with preset holding periods of multiples of weeks (Low-Frequency trading). Each optimal portfolio can be uniquely tuned to one's personal choices, circumstances, and investment objectives in terms of balancing risks and rewards. The complete set of optimal portfolios is validated over a preset validation period. These four packages appear to us to be transparent in their concepts and modelling. We haven't seen evidence that High-Frequency trading outperforms Low-Frequency trading. That is understandable from the point of view that trend-following appears to be less effective than trend-reversion, and that the latter has little to do with the frequency of trading.

Jan G. Dil and Nico C. J. A. van Hijningen,
November 28, 2019.