Forums  > Trading  > When to exit trading strategy  
     

Energetic
Forum Captain

Total Posts: 1476
Joined: Jun 2004
 
Posted: 2018-04-16 19:26

This question came up in the other thread and I gave it some thought. Most generally, the answer will always come down to the model doing something unusual or unprecedented. But the threshold will have to be somewhat arbitrary. For example, one can decide to act on exceeding the previous max DD by 5%. I can see why some people are not excited about such an approach. I am not too excited myself, mostly because I'm not such a big fan of DDs.

I want to detect the loss of signal based on the relative performance of the strategy against its own history. By performance I mean performance relative to the benchmark (despite the reservations that I may not have one). I also don't want it to be a one-time observation; I prefer to measure it over time.

Before I begin laying out possible implementation details, how does this sound in principle?

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

HitmanH


Total Posts: 459
Joined: Apr 2005
 
Posted: 2018-04-17 00:41
Very sensible.

What I've done is compile some stats, in % terms, from a strategy's backtest: what we expect the worst DD to be, the 2nd worst DD, the 3rd worst, etc.
Then, when a strategy goes live, you can compare it to that: how is the vol of realised returns looking, the DD profile, the Sharpe? What's a 2x 1SD move, a 3x SD move? You know the tails are fat, but you can compile a profile of how fat they have been.
Once you have enough data points, you can start subbing out the backtest with the realised results of the strategy itself.

We've started with counting instances of x-SD moves in a window, and questioning: is something in the environment different, has the market changed, etc., rather than a hard turn on / off. But that's very much the kind of thing you're talking about, I think.

I would say do this for a specific strategy / model - don't do it for a fund or book as a whole...
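
HitmanH's checks could be sketched roughly like this, assuming monthly returns in % terms. This is an illustrative sketch, not his actual code; all function names here are mine.

```python
# Sketch of the live-vs-backtest comparison: rank the backtest drawdowns
# (worst, 2nd worst, ...) and count large-SD moves in a trailing window.
import statistics

def drawdowns(returns):
    """All peak-to-trough drawdowns of a summed (additive) return path."""
    dds, equity, peak, dd = [], 0.0, 0.0, 0.0
    for r in returns:
        equity += r
        if equity >= peak:
            if dd > 0:
                dds.append(dd)       # close out the finished drawdown
            peak, dd = equity, 0.0
        else:
            dd = max(dd, peak - equity)
    if dd > 0:
        dds.append(dd)               # a drawdown still open at the end
    return sorted(dds, reverse=True) # worst, 2nd worst, 3rd worst, ...

def count_large_moves(returns, n_sd=2.0, window=12):
    """How many of the last `window` returns exceeded n_sd standard deviations."""
    mu = statistics.mean(returns)
    sd = statistics.pstdev(returns)
    return sum(1 for r in returns[-window:] if abs(r - mu) > n_sd * sd)
```

In use, you would compare `drawdowns(live)` against `drawdowns(backtest)` position by position, and, as suggested above, substitute the realised series for the backtest once enough live history accumulates.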

ronin


Total Posts: 314
Joined: May 2006
 
Posted: 2018-04-17 11:49
There is no universal answer. It's also different depending on whether you are running a single strategy or a portfolio of strategies.

But basically, at any point you have to be asking "why am I running this strategy?"

You run some strategies for high returns. You run others for negative beta. You run some others for diversification. Etc.

So you put together a list of targets for your return distribution. Returns like this, volatility like that, correlations like these, etc. Then, every once in a while, test your strategy against those hypotheses. What's the probability that these still hold?

Based on the answers, you make some decisions.

I don't want to comment directly on your target vs benchmark - it's up to you how you manage that. My only observation would be that optimising for performance vs benchmark would drive you to high beta. You have to think carefully about whether that is what you are really looking for, or what you can sell.
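
The "test your strategy against those hypotheses" step could be sketched as a simple z-test of the realised mean return against the target. This is a toy illustration under an i.i.d.-normal assumption, not ronin's actual procedure, and the names are mine.

```python
# Toy hypothesis check: is the realised return series still consistent
# with the target mean the strategy was run for? Assumes i.i.d. normal
# returns with a known target volatility.
import math

def z_test_mean(realised, target_mean, target_vol):
    """Two-sided z-statistic and p-value for H0: mean == target_mean."""
    n = len(realised)
    sample_mean = sum(realised) / n
    z = (sample_mean - target_mean) / (target_vol / math.sqrt(n))
    # standard normal tail via the error function (no scipy needed)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

If the p-value drops below a threshold you chose in advance, the answer to "why am I running this strategy?" no longer holds, and that is when you make a decision.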

"There is a SIX am?" -- Arthur

Energetic
Forum Captain

Total Posts: 1476
Joined: Jun 2004
 
Posted: 2018-04-17 18:30
I understand the difference between managing a portfolio and a single strategy.

In terms of a single strategy, I believe both of you are thinking in the same direction as I am: checking the characteristics of realized distributions vs. expected ones.

In my case, the goal was to beat S&P but beta is relatively low. But I do want to make sure that it does beat S&P at least under conditions when it did so historically.

What I can sell is an entirely different question. Probably nothing.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Nonius
Founding Member
Nonius Unbound
Total Posts: 12733
Joined: Mar 2004
 
Posted: 2018-04-17 19:50

"This question came up in the other thread and I gave it some thought. Most generally, the answer will always come down to the model doing something unusual or unprecedented. But the threshold will have to be somewhat arbitrary. For example, one can decide to act on exceeding the previous max DD by 5%. I can see why some people are not excited about such an approach. I am not too excited myself, mostly because I'm not such a big fan of DDs.

I want to detect the loss of signal based on the relative performance of the strategy against its own history. By performance I mean performance relative to the benchmark (despite the reservations that I may not have one). I also don't want it to be a one-time observation; I prefer to measure it over time.

Before I begin laying out possible implementation details, how does this sound in principle?"

if you normally think the Sharpe should be X, and you subsequently see a max DD >> 1/X^2 (expressed as a fraction of average annual PL), then that's an indication something is wrong. That's if your returns are reasonably normally distributed; details left to the reader. So a 4.5-Sharpe strat shouldn't be having DDs much larger than 5% of annual PL.
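
That rule of thumb reduces to a one-line check. A sketch, with an illustrative tolerance multiplier that is my own addition, since "much larger" is left to taste:

```python
def dd_red_flag(max_dd_frac_of_annual_pl, sharpe, tolerance=2.0):
    """Flag when the observed max DD (as a fraction of average annual PL)
    exceeds `tolerance` times the ~1/Sharpe^2 rule of thumb."""
    threshold = 1.0 / sharpe ** 2
    return max_dd_frac_of_annual_pl > tolerance * threshold

# The example above: a Sharpe-4.5 strat has a 1/4.5^2 ~ 4.9% threshold,
# so a DD of 5% of annual PL is unremarkable but 15% would be a red flag.
```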

on loss of signal, I'd prefer just checking, on an ongoing basis, the lead-lag corr or the R^2 between the signal and the future return you're trying to predict; the strat has a bunch of other logic (thresholds, sizing, costs, slippage, execution) that clouds the assessment of signal quality.
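
The ongoing lead-lag check could look like a rolling correlation between the signal and the one-period-forward return, computed away from all the sizing and execution logic. A sketch under that reading; the function names are mine.

```python
# Rolling lead-lag correlation: signal[t] vs. the next period's return,
# as a direct read on signal quality, ignoring the rest of the strategy.
import math
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def rolling_signal_corr(signal, returns, window=24):
    """Correlation of signal[t] with returns[t+1] over trailing windows."""
    pairs = list(zip(signal[:-1], returns[1:]))  # align with forward return
    out = []
    for end in range(window, len(pairs) + 1):
        xs, ys = zip(*pairs[end - window:end])
        out.append(pearson(xs, ys))
    return out
```

A sustained drift of the rolling correlation toward zero is the "loss of signal" being discussed, before it ever shows up in the PL.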

Chiral is Tyler Durden

Energetic
Forum Captain

Total Posts: 1476
Joined: Jun 2004
 
Posted: 2018-04-17 20:27
I'd say 1/X^2 is a rather soft criterion for strategies with Sharpe O(1).

I agree with your last point.

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Nonius
Founding Member
Nonius Unbound
Total Posts: 12733
Joined: Mar 2004
 
Posted: 2018-04-17 20:56
hahaha, true on O(1), but then again, a Sharpe-1 strat I'd expect to have sort of mean-reverting PL over time....up one year, down another, flat over a few years, etc. Like a punter or a CTA.

not to say you're punting, I'm sure the Capn has some awesome strats!

Chiral is Tyler Durden

Energetic
Forum Captain

Total Posts: 1476
Joined: Jun 2004
 
Posted: 2018-04-17 22:39
Dude, check out the next thread:

http://www.nuclearphynance.com/Show%20Post.aspx?PostIDKey=186211

I'm told there's at least $10mln/yr to be made ;)

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

Energetic
Forum Captain

Total Posts: 1476
Joined: Jun 2004
 
Posted: 2018-04-17 22:56
I'll assume that my variable of interest, outperformance vs. benchmark, is somehow correlated with, or has a joint distribution with, the returns of the benchmark itself and maybe other variables, e.g. volatility. But for simplicity of exposition, let's say only with the benchmark.

I have a time series of, say, monthly returns for both variables. I sort it by the benchmark's returns and bucketize. Within each bucket, I observe a bunch of strategy returns, which I'll also sort from low to high.

In live trading, each month I observe the relative return of the strategy vs. the benchmark. Then I look in the bucket where the benchmark performance fell and find the current score := the percentile of the relative performance within that bucket. For example, suppose I chose to have a bucket for S&P returns in [-1%, 1%]. Assume that in backtesting, when the S&P returned between -1% and 1%, the strategy had returns {-5, -3, -1, 0, 1, 2, 4, 5, 7, 8}%. Suppose in a given month of live trading the S&P returned 0 and the strategy returned 6%. Then the strategy scored in the 80th percentile. If the strategy returned -4%, then it scored in the 10th. That's a current score.
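
The current-score step can be sketched directly, using the worked S&P example above as a check. The bucket and names are illustrative.

```python
import bisect

def current_score(bucket_returns, live_return):
    """Percentile of the live strategy return within the backtest returns
    observed in the matching benchmark bucket (fraction strictly below)."""
    srt = sorted(bucket_returns)
    below = bisect.bisect_left(srt, live_return)  # count strictly below
    return 100.0 * below / len(srt)

# Backtest returns in the bucket for S&P monthly returns in [-1%, 1%]:
bucket = [-5, -3, -1, 0, 1, 2, 4, 5, 7, 8]
```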

How to work with the current score? One low number is not necessarily a problem, but a sequence or a majority of low numbers in recent history is. A low number followed by high numbers soon ceases to be a problem. I suppose an EMA is a decent way of aggregating the current and recent scores. Roughly, I can probably forget about a bad month in about half a year if no new problems appear, so I'd choose an EMA with a decay factor of about 0.7 per month to build aggregate scores.

I could build a first approximation to the empirical joint distribution with my training set and then use the OOS set to build historical aggregate scores in backtest. Analyzing live performance, I could use the historical lows of the aggregate scores as a benchmark for the current state of the strategy. If the aggregate score crosses a historical low, that is certainly a sign of trouble.

Thoughts?

For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken

TonyC
Nuclear Energy Trader

Total Posts: 1264
Joined: May 2004
 
Posted: 2018-05-01 06:49
Hitman's idea strikes me as being a lot like the classic CUSUM chart from "manufacturing quality control" stats, going all the way back to Deming
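
The CUSUM analogy could be applied to the monthly percentile scores directly. Below is the textbook one-sided (lower) CUSUM, which accumulates shortfalls below target and alarms when the cumulative deficit gets large; the target, slack, and limit values are illustrative, not from the thread.

```python
def lower_cusum(scores, target=50.0, slack=5.0, limit=30.0):
    """One-sided CUSUM on percentile scores: accumulate shortfalls below
    (target - slack) and alarm when the running sum exceeds `limit`."""
    s, alarms = 0.0, []
    for i, x in enumerate(scores):
        s = max(0.0, s + (target - slack - x))
        if s > limit:
            alarms.append(i)  # month index of the alarm
            s = 0.0           # reset after an alarm (common convention)
    return alarms
```

Scores hovering around the 50th percentile never trip it; a run of deep underperformance trips it quickly, which is the same "sequence of low numbers" intuition as the EMA, but with classical control-chart calibration behind it.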

flaneur/boulevardier/remittance man/energy trader

Energetic
Forum Captain

Total Posts: 1476
Joined: Jun 2004
 
Posted: 2018-05-22 18:10
I implemented what I described above with one small change: I re-centered the scores to zero for better visualization. Positive values mean that the strategy performs better than it has on average historically, conditional on the current performance of the benchmark. E.g., the high values this Feb-Mar mean that the strategy not only performed well but performed better than it did during similar bear markets in the past.

Here's how it looks for the last year of live trading. Both the current and aggregate scores fluctuate around zero, as expected. It looks like I should get really worried when the aggregate score crosses -25 or so.


For every complex problem there is an answer that is clear, simple and wrong. - H. L. Mencken