Forums  > Trading  > Portfolio Construction  
     

gill


Total Posts: 190
Joined: Nov 2004
 
Posted: 2017-09-20 15:04
Hello

I have a question about optimal allocation between different trading strategies. Unlike classical portfolio optimization, the task is more practical, since the variance-covariance matrix between the returns of the individual strategies is unstable, and it's not easy to find the drivers of the individual strategies' performance (i.e. these are not carry trades or some sort of short-vol trades).
I saw portfolio optimization with maximum entropy; is there anything else that is sensible?
By sensible I mean a practical approach without risk of overfitting.


Maggette


Total Posts: 959
Joined: Jun 2007
 
Posted: 2017-09-20 15:30
I'm not sure how entropy fixes the problem (which I guess is non-stationarity... and that includes the correlation between the return time series of your strategies).

I do like very, very simple heuristics and numerical approaches. This might be stupid, but how about this:

"block-resample" your strategies and use a meta-heuristic (like differential evolution) to optimize your portfolio weights.

Let us assume you have three strategies x, y, z. These lead to three time series:

x: (x1,x2,x3,x4,x5,x6,....,xn)
y: (y1,y2,y3,y4,y5,y6,....,yn)
z: (z1,z2,z3,z4,z5,z6,....,zn)

By block resampling I mean you create a Monte Carlo simulation that simulates all three strategies at once via block resampling or block bootstrapping.
You draw random samples with replacement from your history of x, y, z for several time intervals of random length k.

For example, your first sample could have k = 4 and be called s1:
s1_x: (x3,x4,x5,x6)
s1_y: (y3,y4,y5,y6)
s1_z: (z3,z4,z5,z6)

The next (s2) for k = 2 might look like :
s2_x: (x1,x2)
s2_y: (y1,y2)
s2_z: (z1,z2)

The next again (s3) for k = 3 might be

s3_x: (x5,x6,x7)
s3_y: (y5,y6,y7)
s3_z: (z5,z6,z7)

Then you string together a history (this is a single path in a classical MC simulation) of all strategies:

x: s1_x + s2_x + s3_x
y: s1_y + s2_y + s3_y
z: s1_z + s2_z + s3_z

You do that m times (giving m sets of 3 return series). Then you use differential evolution (or some other meta-heuristic) to find the Sharpe-ratio-optimizing weights for x, y, z over all paths.

The idea here is that you might capture the autocorrelation of your strategies as well as the cross-correlations between the strategies.
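A minimal sketch of that pipeline in Python, assuming NumPy and SciPy are available. The return series, block parameters, and path count m below are made-up placeholders, not anything from the actual strategies:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Placeholder history: n = 500 days of daily returns for strategies x, y, z.
returns = rng.normal(0.0005, 0.01, size=(500, 3))

def block_resample(returns, n_blocks=50, max_k=10, rng=rng):
    """One bootstrap path: string together blocks of random length k drawn
    with replacement from the joint history. The same rows are kept for all
    strategies, preserving cross- and short-range autocorrelation."""
    n = len(returns)
    blocks = []
    for _ in range(n_blocks):
        k = int(rng.integers(1, max_k + 1))     # random block length
        start = int(rng.integers(0, n - k + 1)) # random block start
        blocks.append(returns[start:start + k])
    return np.vstack(blocks)

paths = [block_resample(returns) for _ in range(20)]  # m = 20 paths

def neg_mean_sharpe(w):
    """Objective for the meta-heuristic: negative Sharpe averaged over paths."""
    w = np.asarray(w)
    w = w / max(w.sum(), 1e-12)                 # normalize weights to sum to 1
    sharpes = [(p @ w).mean() / (p @ w).std() for p in paths]
    return -float(np.mean(sharpes))

res = differential_evolution(neg_mean_sharpe, bounds=[(0.0, 1.0)] * 3,
                             seed=1, maxiter=50)
weights = res.x / res.x.sum()
```

Only the weight ratios matter here (the objective normalizes them), so the final division just rescales the optimizer's answer to sum to one.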

I came here and saw you and your people smiling, and said to myself: Maggette, screw the small talk, better let your fists do the talking...

ronin


Total Posts: 216
Joined: May 2006
 
Posted: 2017-09-20 15:32

It really depends on what you are optimising for.

We tend to worry about max drawdown, so we use some measures based on that. But that's just us.

"People say nothing's impossible, but I do nothing every day" --Winnie The Pooh

gill


Total Posts: 190
Joined: Nov 2004
 
Posted: 2017-09-20 16:43
Thank you Maggette!
That's similar to a bootstrapping procedure used by economists to get a bigger sample size. Have you tried that approach? Was out-of-sample performance consistent with your expectations?

gill


Total Posts: 190
Joined: Nov 2004
 
Posted: 2017-09-20 16:46
Thank you Ronin!
I was thinking about maximizing the Sharpe ratio, since the returns can be approximated by a normal distribution, but if you can share your approach to drawdown minimization I would very much appreciate it!

Maggette


Total Posts: 959
Joined: Jun 2007
 
Posted: 2017-09-20 16:48
Right. IMHO classical bootstrapping just takes samples of dimension number_of_strategies × 1 => [x_i, y_i, z_i]; I wasn't aware that my great ;) idea is simply to take samples of dimension number_of_strategies × k.

For my strategies (which are simple ETF strategies) I did this, but my out-of-sample sample size is n = 3 :) (I just went live with it), so I can't say anything smart about it yet.


edit: I used differential evolution for the optimization part. You have to be careful with the parameters here, or your optimization is just fancy overfitting.

I came here and saw you and your people smiling, and said to myself: Maggette, screw the small talk, better let your fists do the talking...

EspressoLover


Total Posts: 240
Joined: Jan 2015
 
Posted: 2017-09-20 21:03
I agree with ronin. If you're talking about dynamic, sophisticated trading strategies, the covariance is likely to be highly regime-dependent. Not because of any fundamental linkage, but because investors tend to treat all dynamic trading strategies the same during risk-off periods. For example, non-agency MBS and equity stat-arb had zero correlation until August 2007, at which point they suddenly had near-perfect correlation. The same multistrat redemptions that hit one strategy spread contagion to the other. The Moskowitz paper on the carry factor is another example: carry strategies across asset classes tend to have low correlation, except when the global carry factor is in a steep drawdown, at which point coordinated unwinds across asset classes change the regime.

That doesn't always mean correlations go to 1; sometimes they're even inverted. Short-term liquid strategies, like HFT, often do better during risk-off periods: the impact of distressed unwinds is small relative to the enhanced opportunities from market dislocations. Just taking a return matrix as a black box doesn't really work. Even if you had perfect knowledge of future returns, that wouldn't be enough. If you can quickly go to cash at the press of a button when shit hits the fan, that's worth a lot. It also depends on the tolerance of the invested capital: if things get really dislocated, will your investors tolerate the losses to stay around for the reversion?

gill


Total Posts: 190
Joined: Nov 2004
 
Posted: 2017-09-21 09:57
EspressoLover:
It's a prop book I was talking about, so outflows are not a factor. And as long as I stay within the limits, risk management is okay with it. Although, so far it has been a big success without any serious drawdowns, and I am not 100% sure about their reaction when things turn sour...

ronin


Total Posts: 216
Joined: May 2006
 
Posted: 2017-09-21 10:55
@gill,

There is no specific formalism. You have to use your judgment, like @espressolover says.

If there were a formalism, it would still involve optimizing the "return per unit of risk" of your portfolio. It's just a question of how you interpret the "unit of risk" part.

In the lognormal world, your basic unit of risk is the standard deviation. That leads to the classical portfolio theory.

Or you could use any other measure of risk: VaR (which would still end up optimizing for Sharpe), or tail measures like cVaR/ES or maxDD (which may end up quite different, depending on how skewed the individual strategies are).

But at the end of the day you will just end up with a bunch of numbers - it's up to you what you decide to do with those numbers.
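For concreteness, here is a small sketch (plain NumPy; the function name and default alpha are my own choices, not ronin's) of the candidate "units of risk" mentioned above, computed from a daily return series:

```python
import numpy as np

def risk_measures(returns, alpha=0.05):
    """Candidate 'units of risk' for a daily return series: standard
    deviation, historical VaR, expected shortfall (cVaR/ES), and maximum
    drawdown of the compounded equity curve."""
    r = np.asarray(returns, dtype=float)
    q = np.quantile(r, alpha)              # alpha-quantile of returns
    equity = np.cumprod(1.0 + r)           # compounded equity curve
    peak = np.maximum.accumulate(equity)   # running maximum of the curve
    return {
        "std": float(r.std()),
        "VaR": float(-q),                  # loss at the alpha quantile
        "ES": float(-r[r <= q].mean()),    # mean loss beyond VaR
        "maxDD": float(np.max(1.0 - equity / peak)),
    }
```

Swapping any of these in as the denominator of "return per unit of risk" gives a different set of optimal weights, which is exactly where the approaches diverge for skewed strategies.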


"People say nothing's impossible, but I do nothing every day" --Winnie The Pooh

finanzmaster


Total Posts: 125
Joined: Feb 2011
 
Posted: 2017-10-03 12:09
I encountered this at wikifolio, where one can not only run one's own strategy but also make a portfolio of such strategies (a so-called DACH-wikifolio).

One guy asked me to optimize his DACH-wikifolio.
However, since wikifolio publishes all the info, I didn't look just at the performance statistics of the components but rather scrutinized which stocks each component trades, and so on.

If you do not have this info (and all you have is the time series of returns for each strategy), then you have to check for their stationarity.
And if they are (more or less) stationary, you may try e.g. my approach based on the Kelly criterion.
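finanzmaster's exact method isn't spelled out here, but for illustration, a textbook multi-strategy Kelly sketch under the (strong) assumption of jointly Gaussian, stationary returns: the optimal leverage vector is f* = C⁻¹μ, with μ the expected excess returns and C the covariance matrix. All numbers below are made up:

```python
import numpy as np

# Assumed (made-up) daily statistics for three strategies.
mu = np.array([0.0008, 0.0005, 0.0010])        # expected excess returns
C = np.array([[1.0e-4, 2.0e-5, 1.0e-5],
              [2.0e-5, 9.0e-5, 3.0e-5],
              [1.0e-5, 3.0e-5, 1.6e-4]])       # covariance matrix

f_star = np.linalg.solve(C, mu)  # full-Kelly leverages: solves C f = mu
f_half = 0.5 * f_star            # fractional Kelly to soften estimation error
```

Note this presumes exactly the stationarity caveat above; in practice people run fractional Kelly because μ and C are estimated with error.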

www.yetanotherquant.com - Knowledge rather than Hope: A Book for Retail Investors and Mathematical Finance Students