Money Management

How Many Trades Should You Have At Once?

Ever wondered how increasing the number of trades in a period changes your performance? I'm close to pinning that down with a precisely defined calculation. The one missing variable is correlation: I don't yet know how to account for how correlation between positions impacts the probabilities.

So for this experiment we're going to assume binary outcomes of -100% or +250%. I'm going to use my own variation of the Kelly criterion strategy so I can model the different trade counts at a roughly equal amount of risk.

The logic is: if the maximum you can bet to maximize geometric growth on one single bet is 10%, then after placing that bet you are left with 90% in cash. If you lose, your next bet will be 9% of initial capital anyway, so we can allocate 19% across two trades held simultaneously, provided there is zero correlation. For a third trade we are at 81% cash, so we add another 8.1% of risk and divide the total amount at risk by three, and so on. You could arguably risk slightly more than 19% for two trades (but less than 20%) because a loss isn't guaranteed; we position as if the first trade has already lost, both because it may lose and because of the asymmetry of overbetting, which provides less return at increased risk (whereas underbetting reduces risk while return declines only slightly).


For some brief background on finding the risk amount that maximizes growth, see the Kelly criterion. Essentially you compute (O1^N1)*(O2^N2), where O1 is the wealth-multiplying outcome at the given position size and N1 is the number of times that outcome occurs. For instance, at a 1% position size with a +250% gain and a -100% loss as the two outcomes, O1 is 1.025, because a 1% position producing a 250% gain grows our wealth by 2.5%, a factor of 1.025.

This is calculated by 1 + (0.01 × 2.5) = 1.025.

And the loss outcome is 0.99, which is calculated by 1 − (0.01 × 1.00) = 0.99.
So for 20 wins and 30 losses, the equation for how much our wealth changes is (1.025^20)(0.99^30).
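As a quick check, that arithmetic can be reproduced in a few lines (a sketch, using the 1% position size and binary outcomes assumed above):

```python
# Wealth multiple after 20 wins and 30 losses at a fixed 1% position size,
# per the (O1^N1)*(O2^N2) formula above.
win_outcome = 1 + 0.01 * 2.5   # +250% on a 1% position -> 1.025
loss_outcome = 1 - 0.01 * 1.0  # -100% on a 1% position -> 0.99

wealth_multiple = win_outcome**20 * loss_outcome**30
print(round(wealth_multiple, 4))  # → 1.2121, about a 21% gain
```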

The Kelly criterion reverse engineers this to solve for the position size that maximizes the geometric rate of return, based upon your assumptions, over an infinite number of bets. As volatility increases, return eventually decreases; the Kelly bet finds the point beyond which you can no longer improve return, with little regard for the volatility incurred. In reality you'd probably prefer a fractional Kelly strategy if you want to reduce risk. It's also a good way to compare systems at an equal amount of risk. Right now I'm using a modified Kelly: similar logic to the Kelly, but not the Kelly exactly, adapted to increase position size based upon multiple simultaneously held positions.

You could manually adjust the position size, with a proportional number of wins and losses, until you can't seem to increase the result any further, approximating the solution by trial and error, or just use a Kelly criterion calculator. You can run more complex calculations with more than two outcomes, but for now I'm using two.
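For example, a crude grid search over position sizes approximates the same answer a Kelly calculator would give (a sketch, assuming as in this post a 35% chance of +250% and otherwise a -100% loss):

```python
import math

# Rough numeric grid search for the growth-maximizing position size f.
def growth_rate(f, p=0.35, win=2.5, loss=1.0):
    # expected log-growth per bet at position size f
    return p * math.log(1 + f * win) + (1 - p) * math.log(1 - f * loss)

best = max((i / 10000 for i in range(1, 9999)), key=growth_rate)
print(best)  # → 0.09, i.e. the ~9% full Kelly discussed below
```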

Optimal Bet Size

So the optimal bet size for a single bet is 9%, given a 35% chance of a 2.5-to-1 payout.

We can then solve for the optimal bet size for 40 bets, assuming zero correlation between trades, but with all 40 trades held during the same time period. There's an important distinction here: rather than multiplying our wealth by 1.025 with each win, we are only adding 0.025 to the total portfolio per win, because we don't get the benefit of compounding while the trades are all on at once. Similarly, if we hold 40 1% positions at once and lose them all, we don't lose 1−(0.99^40)=0.331, or 33.1%, but the full 40%. So the first formula is not sufficient to describe what happens to our wealth. This is also where correlation is a bigger liability than the formula currently recognizes, since the chance of larger drawdowns increases as correlation increases.
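The distinction is easy to verify numerically (a sketch of the arithmetic above):

```python
# Sequential 1% losses compound; simultaneous 1% positions simply add.
sequential_loss = 1 - 0.99**40   # ~33.1% lost over 40 losing periods in a row
simultaneous_loss = 40 * 0.01    # full 40% lost if all 40 held at once lose
print(round(sequential_loss, 3), simultaneous_loss)  # → 0.331 0.4
```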

As explained before, we are going to assume a full Kelly bet, then an additional full Kelly of risk on the remaining capital for each additional bet. Since the full Kelly is 9%, the cash on hand remaining after one bet is 0.91 of our portfolio, which can be compounded across each of 40 bets to determine how much cash to keep on hand:

0.91^40 ≈ 2.3% cash, which leaves 97.7% at risk; divided by 40 bets, that is 2.4425% per bet.

We can repeat this for 50, 30, 20, 10, 5, and 1 bet to construct a table of optimal bet sizes per bet.

Bet size given total number of positions.

50 bets 1.98209% per bet
40 bets 2.44251% per bet
30 bets 3.13649% per bet
20 bets 4.241775% per bet
10 bets 6.105839% per bet
5 bets 7.519357% per bet
1 bet 8.9999999% per bet
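The table above can be reproduced with a short function (a sketch of the stacking logic described earlier, assuming a 9% single-bet Kelly):

```python
# Stack a fresh Kelly bet on each tranche of remaining cash, then split
# the total amount at risk evenly across the n simultaneous bets.
def per_bet(n, kelly=0.09):
    total_at_risk = 1 - (1 - kelly)**n   # cash on hand is (1 - kelly)^n
    return total_at_risk / n

for n in (50, 40, 30, 20, 10, 5, 1):
    print(n, round(per_bet(n) * 100, 5))  # matches the table, e.g. 40 -> 2.44251
```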

Now we can construct a simulator that randomizes each bet's outcome according to the probabilities, sums the total % gained across the bets in a period, and repeats for 12 periods.

Excel gives us the function =RAND(), which returns a number between 0 and 1. If that number is less than 0.35, the trade returns 2.5 times the position size; if it's more than 0.35, it returns -1 times the position size. All position outcomes for a period are summed, the number 1 is added, the portfolio is multiplied by that total, and then the period's fees are subtracted. Twelve periods are simulated, giving a yearly total. We can then run through 1,000 different yearly results and see the distribution, the average, and even estimate the compound annual rate of return.
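A minimal Python version of that Excel setup might look like this (a sketch; the $20,000 start, $6 fee per buy or sell, and 35% win probability are the assumptions used later in this post):

```python
import random

def simulate_year(n_bets, bet_size, start=20000.0, fee=6.0, p_win=0.35):
    # 12 periods of n simultaneous bets; outcomes add within a period,
    # then flat fees are subtracted (no kill switch in this sketch).
    portfolio = start
    for _ in range(12):
        period_return = 0.0
        for _ in range(n_bets):
            if random.random() < p_win:
                period_return += bet_size * 2.5   # +250% of the position
            else:
                period_return -= bet_size         # -100% of the position
        portfolio *= 1 + period_return
        portfolio -= n_bets * fee * 2             # buy and sell per trade
    return portfolio

random.seed(1)
results = [simulate_year(10, 0.061058) for _ in range(1000)]
print(round(sum(results) / len(results), 2))      # mean ending equity
```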

This way we can see when the benefits of diversification outweigh the costs for smaller five-figure portfolios where fees eat into profits. I am probably overestimating fees slightly: I used $6 per trade and assumed both buys and sells for all trades, whereas in reality there is only an opening trade for the 100% loss trades.

The CAGR is a crude estimate, as the simulator only gives me the first 100 results. I take each return plus 1, multiply them all together, and solve for X where X^100 equals that product. The CAGR will be substantially less than the mean outcome. To illustrate, imagine a 25% average return where the results alternate between -50% and +100%: the actual CAGR of an equal number of -50% and +100% returns is zero, not 25%. The CAGR reflects the loss due to volatility.

I assumed a 20,000 starting portfolio and $6 fees with the assumption that there was both a buy and a sell order for each trade. Trade fees were deducted after each period’s multiplier was applied.

40 trades @ 2.4425% position per bet

30 trades @ 3.1365% position per bet

20 trades @ 4.2418% position per bet

15 trades @ 5.0466% position per bet

10 trades @ 6.1058% position per bet

5 trades @ 7.5194% position per bet

1 trade @ 9% per bet

For 12 independent trades, one per one-month period, the theoretical gain is 0.97% growth per bet, or 1.0097^12 ≈ 12.28% growth per year... theoretically. But that's over an infinite time horizon, and as you can see from the distribution, should 1,000 traders have the same exact expectation, actual results over a year can vary wildly. Also, with only a $20,000 account, taking too much risk, or too little, can cause problems if losses occur early, because trading fees are a flat amount.

We find we can greatly enhance the return by adding more positions, but the benefit of diversification declines with each additional bet.

Adjusting For Correlation

While I think the above gives you a good idea of how many trades to hold at once for a given portfolio size (and we could easily adjust the calculation for half of the initial Kelly bet), we still have yet to develop a system that adjusts bet size based upon correlation. What I believe is true is that as correlation approaches one, the total amount risked should approach the single Kelly bet. After all, if you split your capital across multiple bets on the same coin flip, it would be no different than a single bet on that coin flip. In other words, in our previous example, as correlation increases the total amount at risk should approach 9%. This means the ideal bet in reality is somewhere between the zero-correlation bet size (calculated as shown in the prior table) and the correlation-of-1 bet size, which is 9% divided by the number of bets.

For instance, if the correlation was 0.50 across 20 bets:
A correlation of zero suggests [1-(.91^20)]/20 = 4.241775% per bet.
A correlation of 1 suggests 9%/20 = 0.0045, or 0.45% per bet.
Since 0.50 is the midpoint between 0 and 1, we can average the two and get 2.35%*

*but that's only an approximation.

Unfortunately the relationship may not be linear, so while we can be confident the optimal bet size for maximizing CAGR across 20 simultaneously held bets is somewhere between 0.45% and 4.241775%, we can't be sure it is the exact average of 0.0234589, or ~2.35%, per bet.
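As a placeholder until a better model exists, the linear interpolation can be written down explicitly (hypothetical; as noted above, the true relationship probably isn't linear):

```python
def corr_adjusted_bet(n, corr, kelly=0.09):
    # Blend the zero-correlation bet size with the correlation-1 bet size.
    zero_corr = (1 - (1 - kelly)**n) / n   # stacked-Kelly schedule
    full_corr = kelly / n                  # one Kelly bet split n ways
    return zero_corr + corr * (full_corr - zero_corr)

print(round(corr_adjusted_bet(20, 0.5), 7))  # → 0.0234589, i.e. ~2.35%
```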

I also want to look at "half Kelly" strategies in two different ways. One is dividing the per-trade bet by 2: for 40 bets, if we calculated 2.44%, the half Kelly would be 1.22% per trade, which halves the total capital at risk. The other is starting from a 4.5% single-bet number, so 0.955^40 ≈ 15.85% cash on hand, or ~84.15% invested, divided by 40 for 2.10365% per trade instead of 2.44%. We can see that this is still much more aggressive than halving the amount per bet.
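The two variants compare like this (a sketch using the numbers above):

```python
n = 40

# (a) halve each bet from the full-Kelly schedule
full_per_bet = (1 - 0.91**n) / n
halved = full_per_bet / 2

# (b) rebuild the whole schedule from a 4.5% single-bet Kelly
rebuilt = (1 - 0.955**n) / n

print(round(halved * 100, 3), round(rebuilt * 100, 3))  # → 1.221 2.104
```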

Normally the half-Kelly solution provides about 3/4 of the return at 50% of the volatility of the full Kelly. This is really promising for multiple bets, where we can reduce the amount at risk by only a small amount and still be at the equivalent of a half-Kelly strategy in some respects.

In the future I also want to come up with a different calculation such as solving for the “probability of a 50% decline or more in a year” (or probability of 100% gain for example). This is pretty easy to set up.

If the result is -50% or worse in a year, a simple formula returns a 1; otherwise zero. The average of that indicator is the probability of the event. This helps you model the probability of achieving a certain result (such as a 100% return) while measuring it against the probability of a negative outcome (such as a 50% loss), giving you another way to compare the risk and reward of position size and number of trades and to understand expectations.
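In simulation form, the indicator approach is just an average over many runs (a sketch; `simulate_year_return` here is a hypothetical stand-in for the spreadsheet model, with no fees):

```python
import random

def simulate_year_return(n_bets=10, bet=0.061058, p_win=0.35):
    # 12 periods of simultaneous binary bets; returns the year's total return
    r = 1.0
    for _ in range(12):
        period = sum(bet * 2.5 if random.random() < p_win else -bet
                     for _ in range(n_bets))
        r *= 1 + period
    return r - 1

random.seed(0)
years = [simulate_year_return() for _ in range(10000)]
# indicator average = estimated probability of losing 50% or more
p_halved = sum(1 for y in years if y <= -0.5) / len(years)
print(p_halved)
```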

For now it seems more bets are better, up to the point where the quality of opportunities, the expectations, and the fees become problematic. It's hard to identify where that point is, even with thousands of simulations, because the skew increases (the expectation becomes dependent upon a smaller and smaller probability of a more and more spectacular outcome) as risk and the number of trades increase. Also, as you decrease bet size and increase the number of trades, fees become more problematic, which I think we will see in half-Kelly and quarter-Kelly simulations. Dropping position size below 0.50% on a $20,000 portfolio, for example, may eat into returns too much. As we seek to decrease risk, we will eventually have to decrease our number of trades, or else position sizes will be too small, given the fees, to retain much of an edge.


Trading System: Cliff Notes


1) Own no more than 20 *active* option positions.

2) Own no more than 40 option positions total.

3) 1% position size per option, with 5 possible exceptions.

4) Those 5 exceptions may all be 2%; at most 2 may be 3% and at most 1 may be a 4% position.

5) Target a maximum of 50% in stock positions: ~5 positions of 10% or less.

6) A 10% income position that is always on, except when sold to avoid margin.

7) Remaining capital goes to 1-5% asset-allocation options (commodities, currency/cash, stocks/short VXX, bonds/income), plus a possible 2% hedge.

8) Only take a trade where the reward is 3 times the risk or more.

9) Monitor breadth: add more when it's oversold or makes a strong breadth thrust off of oversold, add normally when it's not overbought, and add proportionally to the rate of selling, or slower, when it's overbought or trending down from overbought (until an oversold signal).

*A position that falls 75% below its original value is basically written off as a loss.


-Watchlist is developed from OABOT's top 400 and manually filtered down, usually to 20-60 names.

-Import the watchlist into a spreadsheet with suggested stops, targets, and current price.

-Reward/risk will update automatically once it's in the watchlist.

-Entries must follow the portfolio rules above plus an entry checklist, which should also use triggers.


1) As stocks are bought, input the stop, target, and entry price into the spreadsheet.

2) Since many of you are already trading, don't worry about importing a large list of current holdings; just track each new trade until you phase out the old ones.

3) A stock must be below its stop 5-10 minutes before the trading close to trigger a sell.

4) A stock above its target follows separate rules: wait for a candle to close below the prior candle's low, or fail to close above the prior candle's high, then sell before the following candle's close.

5) Which timeframe you use when a stock is above target depends on conditions, but in general: for investments use a monthly or weekly chart; for stock trades and options with more than a week until expiry use a daily chart; on options expiry week use a 1-hour or 4-hour chart; the day before options expiry use a 30-minute chart; on options expiry day use a 5-minute chart.

6) Generally update and check the spreadsheet every hour, and monitor watchlist stocks for purchase. On options expiry day check every 10-15 minutes. One trick: if the stock is higher than the last time you checked, you don't have to look at the chart.



2014 Goals: Streamlining The Process Part 1

I find goals are far more likely to succeed when you allow time for old goals to develop beyond what you intended, see where they take you, and make goal setting an active, ongoing process to manage rather than a single event that could otherwise overwhelm. Goal setting around New Year's Day has become more of a marketing ploy, and if anything a distraction from what you really want to accomplish. So what I like to do is let the old goals play out for a while, then start with a relatively unspecific long-term goal that my past work is already progressing towards. In this case it is "streamlining the investment/trading process."

It is certainly not the only goal, as constant improvement, education, and carrying out my strategies more efficiently and effectively are other important trading goals. But by limiting the focus to one general concept, I can direct this year's effort towards it. That is the "big picture" idea; the spreadsheet helps with just one of its elements, analyzing and managing risk with more precision in what I can expect. From this idea I can begin work on aspects of it for a while. Based upon how much I get done in a sample period, how much time I can spend per week, and how much there is to do, I can then set a goal that reflects reality.

I find that this method of goal setting allows for expedient results.

Setting financial goals is a bit tricky as it involves chance and uncertainty, but setting financial goals is something that can be done as you can see here.

So from that I have a number of ideas as to how different elements may work together to help me streamline the process. First I need to sort out some of the things I would want in a perfect world:
1) Market analysis, sector analysis, and industry analysis.
2) Using the above, a ranking or grading system for individual stocks, based upon UNIQUE classifications of whether the market, sector, industry, stock type, and market-cap size are currently "in phase," "on deck," or "not in phase," with DIFFERENT applications/formulas for ranking depending on the particular TYPE/classification of stock and the stage it and its industry/sector are in.
3) A tool that can quickly look at both stock and option pricing, plus a manual assessment of expectation, probability, and timing, to compare risk/reward and analyze the effects of using any one particular option alongside the others, the stock, and position sizing.
4) The ability to take the inputs and add them to the trading journal spreadsheet, where I can track and manage the various allocations by a number of categorical breakdowns if wanted.
5) A more multifaceted trading simulator that considers multiple, simultaneous, and overlapping approaches on different timelines, and compares strategies for shifting allocation between them, adjusting risk, and adding/reducing capital from the account over time.
6) Ultimately, a more flexible, dynamic approach to allocation that can maintain certain general allocations without having to sell individual positions short of their targets, and that adapts allocations according to behavior and the numbers, relating expectations from every asset class or strategy to the others as well as to future opportunity.
7) A summary of all the data that factors in everything (fees, risk tolerance, alternative investments, expectation) and quickly converts it into a recommendation, based upon my own inputs, to optimize the portfolio.

This will take a lot of time to develop and will likely be an ongoing project over the next few years, so I have to make the spreadsheet flexible enough to change along with my strategies, positions, and expectations. Very few elements will be "set in stone"; most will be inputs I can change, and very few fixed assumptions will be made.


How A Portfolio “Kill Switch” Can Change Everything About Long Term Expectations

We all most likely have some psychological breaking point at which we would give up on a system, or at which we should probably re-examine our strategies, and in some cases a point at which people "break" and start revenge trading, no longer actually trading their system. The risk-management modeling I've done previously with the spreadsheet did not account for fees or for how one handles drawdowns. I have modified the spreadsheet so that there WILL be a fixed fee on every trade (so after a significant drawdown, fees play a much bigger role as a percentage of the account), and I have added the ability to add a "kill switch." Since I added fees, it's necessary to have at least a kill switch at a 100% loss, so the model doesn't keep charging $12 fees when the account is below $12 and go negative. I also set it up for 1,000 hypothetical trades; now that the model is more practical, I can simulate much longer time periods if need be.

So my hypothesis is that people will give up after drawdowns of varying amounts. This drawdown is measured from the account HIGHS, not from the starting amount. Thus, for some people the end result after 1,000 trading periods (whether they participate in all 1,000 using the given strategy or not) will sometimes actually be better if they risk less, to avoid this breaking point.

PLEASE understand that this simulator runs only one trade at a time, as if you only held a single OTM option trade at a time with no simultaneous positions. In reality, holding more relatively uncorrelated trades at a smaller risk % per trade can often improve results at less risk, especially if you combine multiple systems, such as profitable investing, stock trading, and other asset classes alongside your options. I will get around to testing partially correlated simultaneous trades, and possibly entirely different systems, in the future. For now the expectancy is based upon:

p1 17% chance of w1 +294%
p2 20% chance of w2 +52%
p3 16% chance of w3 0%
p4 23% chance of w4 -64%
p5 24% chance of w5 -100%

These are my 2013 OTM option trading results, run at various risk levels and various kill switches to see a distribution of simulated outcomes. For all accounts I will use a $10,000 starting portfolio and $12 per completed transaction (buy fee plus sell fee), just to use round numbers and so that fees are somewhat significant to the equation.
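A sketch of that kill-switch simulation in Python, using the five-outcome distribution above (the pairing of probabilities to payoffs and the break-on-drawdown logic are my reading of the setup described here):

```python
import random

# probability, payoff per unit risked, from the 2013 results above
OUTCOMES = [(0.17, 2.94), (0.20, 0.52), (0.16, 0.0), (0.23, -0.64), (0.24, -1.0)]

def run_trader(risk, kill_drawdown, n_trades=1000, start=10000.0, fee=12.0):
    equity, high = start, start
    for _ in range(n_trades):
        r, cum = random.random(), 0.0
        for p, payoff in OUTCOMES:
            cum += p
            if r < cum:
                equity += equity * risk * payoff - fee
                break
        high = max(high, equity)
        # drawdown is measured from the account HIGH, not the start
        if equity <= high * (1 - kill_drawdown) or equity <= 0:
            break  # the trader quits (or is wiped out) and stops trading
    return equity

random.seed(7)
finals = [run_trader(0.01, 0.10) for _ in range(1000)]
print(round(sum(finals) / len(finals), 2))  # mean ending equity at 1% risk
```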

First let's start with someone who is very risk averse, cannot handle even a 5% drawdown without feeling incredibly nervous and emotional, and plans to quit at a 10% drawdown.

I had to think about the results and double-check them at first, because I was surprisingly getting a significant right skew with only 1% risk. The mean is greater than the median (i.e. the "average" is pulled up by a small number of outliers, such that the majority of traders actually finish below the "mean").

The reason was that at 1% risk with a 10% kill switch, you not only have too little capital at risk, to the point where fees eat away at the account and a large percentage of runs never get off the ground, but it is also extremely common to hit the kill switch at some point within the first 200 of those 1,000 trades. Very few runs make it through all 1,000 trades to produce the significant gains which, on average, are pretty high even at 1% risk (given that you last that long). The overall average was still above the starting $10,000, but in most cases the fear of drawing down killed the trader's hope of making money and taking risk. Increasing risk beyond 1% actually reduced average results, because more risk means more volatility and a greater probability of drawing down before the system has a chance to compound its gains. In theory, around 10% is "optimal" for maximizing geometric return, but for someone this risk averse, it is probably less than 1%.

I then modeled someone who is mildly risk averse: doesn't like 10% drawdowns, really starts to go crazy after 15%, and ultimately feels compelled to shut down the account after a 20% drawdown. This was interesting because the average ending amount really increased from 1% to 2% risk, but then decreased from 2% to 5%, to the point where you are better off risking 1% than 5% if you can't handle a 20% drawdown.

I haven't gathered the histograms yet, but here are the data points for 1,000 Monte Carlo simulations of 1,000 trades under each given condition.

[Data: drawdown kill switch at 10% and 20%]

30% and 40% drawdowns were up next.

[Data: drawdown kill switch at 30% and 40%]


As you increase risk and relax the "kill switch," the worst-case scenario gets worse, but the best-case scenario gets better. The skew actually becomes less noticeable with a larger drawdown switch AND less risk per trade. Over the long run, if you LET the trading system work, the results begin to normalize and cluster around the average. However, increased risk increases the skew, and depending on the drawdown kill switch, may not improve even the MEAN result. With these trades the approximate Kelly criterion is 15% of capital, but even with the risk tolerance to withstand a 30% drawdown from the highs, you are still best off risking less than 1/3 of the Kelly.


So that brings us back to the Kelly criterion graph, which is entirely misleading if you are not aware of the other variables: personal risk aversion, sample size, time period, etc. The graph assumes infinite time on your side, because that was mathematically convenient. But based upon real data over a finite amount of time, given the psychological barriers one has to cross, you know that for practical application even risking 1/3 of this amount is incredibly aggressive when using an option-based strategy with one single uncorrelated bet at a time. We are also assuming KNOWN information and a fixed edge, as opposed to a more uncertain one; both favor more caution. On the other hand, I believe I left a lot of money on the table and can improve my system by executing and managing it more efficiently, and I will eventually get around to testing how multiple trading systems work in one account. My theory is that you can reduce risk and increase return with complementary systems: a stable, consistent, risk-averse system combined with an aggressive one, both relatively uncorrelated, in the right mixture will complement each other, improving return and reducing risk while "normalizing" the returns over a finite period of time.


Hopefully this post is enlightening and helps you really analyze and understand risk. I look forward to advancing my spreadsheet in the future to more thoroughly analyze a multi-dimensional (synergistic) approach to risk.


Modeling Your Past Trading Results At Different Risk Levels

So in the last few posts I have really focused on objectively modeling risk within a portfolio given a particular system. Before I amend the heck out of the spreadsheet to include fees as an input, monthly additions of new capital to combat some of the decay, a "kill switch" input that automatically caps results at a particular loss if you draw down below a certain amount, and whatever other features I may add, I wanted to actually use it as-is with objective numbers, rather than an arbitrary 20% probability of each particular result with a set expectation.

So I have compiled all my 2013 options trading results, not including open positions, with stock trades set aside separately. I chose to include hedges in the calculation. I did not update a few trades, including my 1,200% gain in Twitter calls. Here is how I looked at the results. I had 281 closed trades since I started tracking. I have room for 5 theoretical "result" inputs, so how I bucket the trades may create a slight gap between reality and the model, but it's just a model. I want to keep all the 100% losses together in their own bucket: 71 trades expired worthless, or 71/281 = ~25.267%. That is better than I expected, given the aggressiveness of the options.

Now I want to take the trades slightly better and slightly worse than break-even and average them at zero. I took any option trade that made between -18% and +18% and got 33 trades that effectively equal a scratch: 33/281 = ~11.744%.

Then I average all the remaining losses not included in the "scratch" bucket. These are mostly premium I salvaged to avoid a 100% loss, and positions nearing expiration that had failed to move enough. The average loss here is ~63.8143%, across 74 trades: 74/281 = ~26.3345%. (The actual expectation of the scratch bucket was positive but less than 1%, so I rounded it down to 0%.)

Now the WINS. Any win over 100% deserves its own category: there are 43 of these, 43/281 = ~15.3025% of all trades, with an average ROI of ~293.0585%.

And the remaining WINS. These were mostly trades I either managed poorly and took off before they reached my target, or sold and/or rolled out as expiration forced the issue. There are 60 of these, 60/281 = ~21.3523%, with an average ROI of ~53.87795%.

So… Now we can define our system. I like to list the GAINS from highest to lowest for easy and consistent interpretation when I look at other systems or modify the expectations.
This is one way to show what the system looks like.


We confirm that the probabilities are correct because they all add up to one. 37% of my trades produce a win, and 48% win or approximately break even. But the largest gains clearly outweigh the losses.
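As a sanity check, the five buckets and the per-trade expectancy can be recomputed directly (a sketch using the counts and average ROIs listed above):

```python
buckets = [
    (43 / 281, 2.930585),    # wins over 100%
    (60 / 281, 0.5387795),   # remaining wins
    (33 / 281, 0.0),         # scratches (-18% to +18%)
    (74 / 281, -0.638143),   # salvaged / partial losses
    (71 / 281, -1.0),        # expired worthless
]

prob_total = sum(p for p, _ in buckets)          # should be 1.0
expectancy = sum(p * roi for p, roi in buckets)  # per unit risked, per trade
print(round(prob_total, 6), round(expectancy, 4))  # → 1.0 0.1428
```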

The old way I did things would be to plug this into a Kelly criterion calculator, find that at a full Kelly I could risk 10% per trade if trading one trade at a time, and then use my own calculator, factoring in fees, correlation, and multiple simultaneous bets, to conclude that for a $10,000 account the "optimal" number of trades at a 60% combined correlation would be 14 trades at 2.6% risk each, for a total of 36.4% of capital at risk. Then I would curb that, aiming for maybe 7 trades at 2% each.

But now I have learned that 10% risk behaves more like a "lotto ticket," even after 300 trades. Still, I have my baseline of 10% as the max, and can make an entirely new distribution at 1%, 2%, 5%, and 10% to show the difference between this system and one with a 20% probability of each of +150%, +50%, 0%, -50%, and -100%.
First let us redisplay the results from the arbitrarily determined system.

Now that we have real numbers, I considered being more thorough than 1,000 simulations and bumping it to 10,000 per risk level, but I will leave it at 1,000 so the numbers remain comparable. Keep in mind that the Kelly criterion for my real trades is 10%, versus 14% for the theoretical system, so 1% risk is actually MORE aggressive relative to Kelly with my system than with the theoretical one. As a result you should expect a higher standard deviation and a higher average. (A larger simulation alone would also produce a more extreme minimum and maximum anyway.)

And here is what the histograms and data look like:


More telling than the distribution, since it's hard to see where the large declines really start when dealing with such large numbers, are the sample equity curves, so I will run a few of those. The equity curve of the theoretical model is linked in the prior post; here are a few sample equity curves modeled after my own trading.



[Equity curve: 1% risk]


[Equity curves: 2% risk, two examples]


[Equity curve: 5% risk]

AND half a dozen examples of 10% risk and the vicious account volatility

[Equity curves: six examples at 10% risk]

It's important to understand that 10 simultaneous trades at 1% each behave much differently than 1 trade at 10%, or 10 sequential trades at 1% over 10 trading periods. Unfortunately it isn't easy to model this, and the results depend greatly upon how correlated the trades end up being (the lower, the better, provided the system remains as profitable). For informational purposes, assuming no fees, 10 simultaneous trades at 1% each behave like a cross between 1% sequential trades and a single 10% trade: you get some of the low-volatility benefits of small position size and some of the high-return benefits of 10% risk. The result is usually a better return per unit of risk.

Going forward, I am working on improving this simulator to allow additional inputs for testing how fees, added capital, multiple simultaneous partially correlated bets, and "complementary systems" can positively influence return while also reducing risk.


Equity Curve Of Risk – How Risk Influences Expectations

In the last post I discussed using a system and position-sizing simulator to look at the ENDING equity of thousands of traders trading a theoretical system. I mentioned I would show sample equity curves at a given amount of risk by pulling up a random trader. It's much easier in the spreadsheet itself, where pressing F9 recalculates the random iterations and instantly brings up an entirely new random equity curve with the same settings, so you can run through several examples quickly. Creating new JPEG images of each and posting them here is more time-consuming, so I will only show a few.

To further illustrate the type of "risk" you are taking with a particular strategy, I provide just one random "trader's" equity curve for each. Results may not be entirely typical, but pay attention to the % drawdowns to get a broad sense of the type of risk you may have to endure.

Please note: The actual expectations of the system you use will drastically impact the type of volatility you see with every 1% change in risk. These sample equity curves are only made with the trading system with an expectation of a 20% chance of each of a 50% loss, 50% gain, no change, 100% loss and 150% gain.

1% risk

[Sample equity curve at 1% risk]

2% risk

[Sample equity curve at 2% risk]

5% risk

[Sample equity curve at 5% risk]

As you increase risk, the results become more polarized and more extreme, so I will provide a few examples at the supposedly “optimal” risk percentage of 14%.

[Four sample equity curves at 14% risk]

The phenomenal results of a few skew the results of the rest. The drawdowns are insane as you see 70% and 80% drawdowns.

Can you stand 80 trades of steady losses as your account grinds down to HALF of what it started with? Most people cannot and would capitulate, so even putting 5% of your capital into this “system” becomes problematic. Granted, multiple bets with a lower correlation that add up to 5% or even more may actually be “lower risk” than a single 5% bet. Granted, you can use strategies that actually profit from overall market volatility, such as allocation models, rebalancing, modern portfolio theory, hedging and pairs trades; you can add some income and weight a lot of your portfolio toward stocks with a slow and steady upward drift, which 70% of the time actually provides more stability and increased liquidity that can combat the negative effects of account volatility. Granted, a MORE profitable system can allow you to risk quite a bit more without the same drawdown expectations. But even so, we are talking about a winning system where, even at 1/3rd of what some quants would suggest is “optimal”, the returns over a finite amount of time are very likely to be terrible for a significant stretch.

Can you see now why Long-Term Capital Management went bust? They did not test their assumptions, and they evaluated their “expected risk” using only a small sliver of past data.

I could get into how uncertain the world is, and how your estimated “edge” within a system is also not a certainty; it is still an assumption this model must make to provide results, though at least it can be recalculated with different sets of expectations. But I hope that this post has been educational enough for you to make, at a minimum, slight and productive adjustments to your way of thinking, if nothing else.

Don’t blow up like LTCM. Test all of even your most basic assumptions. Evaluate your risk in as many ways as you can. Understand risk and how to manage it. Control your destiny rather than being a victim of your own emotional compulsion to sell at the worst point in time and capitulate just before your system takes off because it is too volatile. Understand the dynamic nature of reality: increasingly large leverage and risk may be increasingly volatile while also being more vulnerable to small changes in the conditions your assumptions were based upon. Understand the need to be well capitalized, and that fees aren’t factored in here; they hit hardest the volatile systems that have an increased probability of drawing down significantly from the starting point. Constantly let the facts guide your conclusions, and seek productive improvement in the way you look at things. Then risk can serve you, rather than you “getting Serrrrrrved” by risk.


Comments »

How I Let Data Guide My Conclusions and Results Of Thousands Of Monte Carlo Simulated Trades!

I stand before you today to announce that I have used data and simulation to prove myself wrong. Call me a flip-flopper if you like, but I view this as constructive: I have chosen the most profitable and beneficial path rather than the most comfortable one. While some remain attached to certain ideas, I let the data guide my conclusions whenever possible. The human mind is full of biases and we are often too rigid in our ideas. Be open to examining the assumptions you take for granted on a daily basis. Just as testing the assumption that “the world is flat” proved productive, you may make significantly greater progress than all those around you who resist changing their ideas.

My previous paradigm was guided by a particular understanding of the relationship between risk and return. Unfortunately, every model has certain “assumptions” it must make to construct any generalized “model”. It is usually not the model itself you should test, but the assumptions within the model, as well as your own personal assumptions, which can only be tested with data. After adjusting and testing these assumptions and thinking more dynamically, I can see that this paradigm is simply not practical, as you will also see in a bit.

At first I had a simulator created to calculate all possible permutations of theoretical trades, but realized the simulator could be improved. Rather than continuing down the direction I was headed, I “flip-flopped” again, instead opting for constant improvement. I came up with a spreadsheet that uses a random number generator and a “Monte Carlo simulator” plugin, which I view as much more efficient and flexible in terms of the number of trades I want to test. Although it lacks the same degree of precision, that is a productive tradeoff, as you can still buy back precision by running more random iterations, at the cost of a longer Monte Carlo simulation.

I used the spreadsheet to look at returns dynamically over a finite number of trades, such as 300. Out of a thousand traders, for example, some percentage may gain 20%, another percentage may gain 100%, and another may lose 50%. Using this data, a histogram was made plotting the simulated results of each of the thousand random iterations of 300 trades at various levels of risk for a given system. The simulation allows any of these inputs to be changed (the probability of each of 5 different trade “results”, the ROI given each of these 5 “results”, the number of traders randomly simulated, and the number of trades they make). You can even look through random equity curves across all 300 trades at a given risk level and refresh with a push of a button to pull up another random trader, to get a better sense of drawdown at different points over the course of those 300 trades.
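A rough Python sketch of the same experiment (the 5% risk level and 1,000 traders are just example settings, using the five-outcome system described in this post) makes the right skew easy to see by comparing the mean and median ending equities:

```python
import random
import statistics

random.seed(7)
OUTCOMES = [-1.0, -0.5, 0.0, 0.5, 1.5]   # p = 20% each

def ending_equity(risk, n_trades=300):
    """Final bankroll multiple for one random trader."""
    equity = 1.0
    for _ in range(n_trades):
        equity *= 1.0 + risk * random.choice(OUTCOMES)
    return equity

traders = [ending_equity(0.05) for _ in range(1000)]
print(f"mean ending equity  : {statistics.mean(traders):.2f}x")
print(f"median ending equity: {statistics.median(traders):.2f}x")
print(f"share ending down   : {sum(e < 1.0 for e in traders) / len(traders):.1%}")
```

A handful of huge winners drag the mean far above the median, which is exactly the skew the histograms below show.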

Without further ado, here are some results!



pX=probability of event X.

wX=win % (ROI) given event X.

System: P1-P5 = 20%; W1 = 150%, W2 = 50%, W3 = 0%, W4 = -50%, W5 = -100%. This is a winning system.

Risk is defined as capital at risk, since this is an option strategy and you can lose the entire premium.

“optimal F % / full kelly = 14% risk”
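That ~14% full-Kelly figure can be sanity-checked numerically. A minimal sketch (the grid search and its 0.001 step are my own choices, not from the post) maximizes the expected log growth per trade for the system above:

```python
import math

# The five-outcome system from the post: p = 20% each
P = [0.2] * 5
R = [1.5, 0.5, 0.0, -0.5, -1.0]

def growth(f):
    """Expected log growth per trade when risking fraction f of the bankroll."""
    return sum(p * math.log(1.0 + f * r) for p, r in zip(P, R))

# coarse grid search for the bet fraction that maximizes expected log growth
best_f = max((i / 1000 for i in range(1, 1000)), key=growth)
print(f"growth-maximizing f ≈ {best_f:.3f}")   # lands near 0.14, matching the post
```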

Note the severe rightward skew. This means that as you increase risk, extreme outliers begin to pull the average higher than what is “typical”. Skew right means the mean (average) is way higher than the median (the middle result). The “worst case scenario” grows with risk. The probability that you end with a lower than average result (that is not a typo) increases as risk increases, given a finite amount of time. Eventually the probability of a poor result is so great that as you increase risk, the long term geometric return suffers. If you are a true cowboy looking to become an “outlier” and willing to take on the risk, then perhaps that is okay with you, but just know that going beyond the “optimal” amount is destructive as you approach “an infinite number of trades”. Just know the type of CRAZY account volatility you will have to endure, and the large probability that you actually end down even after 300 trades. That’s almost 6 years at 1 trade a week!

Since I took the time to create this spreadsheet, I can simulate thousands, or if I like, tens of thousands of traders trading anywhere from 1 to 300 trades (or more if I take 5 minutes to set up more) with a given system with a push of the button. I can instantly adjust the expectations of the trading system and see how the results change.

In the next post, titled “Equity Curve of Risk – How Risk Influences Expectations”, I show sample equity curves at particular risk percentages.


Comments »

Building Cover Sheet, Determining Probability of Results

Part 1 WoodShedder Time! Creating A Quantifiable Approach To Position Sizing

Part 2 Building A “Position System Simulator”

Part 3 Building Cover Sheet, Determining Probability of Results

Part 4 Adding To The Cover Sheet

Part 5: Setting Up The Calculations
In this part we will go over, as the title suggests, building a cover sheet. Ultimately this will summarize all the data on one sheet, so it will be the only sheet we need to look at and adjust. We will then set up a formula to determine the probability of each possible outcome of the 5 trades we have set up. Once you have all the possible combinations of outcomes plotted from the first part, it’s time to begin building the “cover sheet”.

The cover sheet will have all the “adjustments” made to the system on it. For example, you basically list how you manage your trade and the probabilities. W stands for your “win ratio”, your edge or ROI:

W1 -20%
W2 -10%
W3 0%
W4 10%
W5 20%

This would signal that your system either takes a 10% loss or a 20% loss, and targets a 10% gain or a 20% gain. The next portion assigns probabilities to that system.

P1 10%
P2 25%
P3 20%
P4 25%
P5 20%

This would indicate that you have a 10% chance of hitting your 20% loss, a 25% chance of hitting a 10% loss, a 20% chance of scratching out for no gain, a 25% chance of a 10% profit and a 20% chance of a 20% profit.

So you make a cover sheet that, for starters, can be really simple and look something like this. It really doesn’t matter at this point what the data is; you will adjust it for your system once it’s done.

More will be added to this cover sheet later. The goal is to not have to look at any of the other sheets unless you are adding some other feature or modifying it. Once you are done, the cover sheet will be where you input the data AND give you all the information you need to draw any conclusions.

Now you can go back to your other sheet and use the find and replace function (CTRL+H) to replace each of the generated numbers with the corresponding data on the cover sheet. You don’t want just the raw data, because then you would have to redo the spreadsheet every time. Instead you want a formula in place of each cell that pulls from the cover sheet. So you create a find and replace something like this.

In this particular example, I find and replace all cells with a “1” in them with =cover!C8 which pulls the data in the C8 cell (of the tab labeled “cover”) and puts it in place of each cell with a 1. The C8 cell, in this case, is the data corresponding to W1. Then repeat for each number. The 1-5 should correspond to W1-W5.

(Warning: you may need to rename the numbers first if the formula you just created contains that particular number, so you don’t end up changing it. For example, replace the 1 with 11111, the 2 with 22222 and so on. Otherwise, if you have the formula =cover!C10 and you find and replace the 1, it will turn into something like =cover!C=cover!C90. Also note that I had to highlight only the cells I wanted to replace, otherwise it would also have replaced the adjacent ones designated for P1-P5.)

Repeat for P1-P5 corresponding to the coversheet P1-P5.


Now we need to start with some actual value so we can see, IF this scenario plays out, what our portfolio will do. But that will be saved for another time. We also need to calculate the “probability” of that particular set of 5 trades in a row occurring. With this information we will be able to come up with things such as “probability of a return over X after 5 trades”. The idea is then to leverage that information to give us a probability of certain returns over 125 trades. I’m not yet sure what issues I will run into, so one step at a time.

To get the probability of a particular sequence of events occurring, you simply multiply together the cells dedicated to probability, your “P1-P5” amounts, for each of the 5 trades. Check your work by ensuring that the probabilities across all rows sum to 100%.
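That check can be sketched outside the spreadsheet too. Here the probabilities are the hypothetical cover-sheet values from above; the code enumerates all 5^5 sequences of 5 trades, multiplies each sequence’s probabilities, and confirms they total 100%:

```python
from itertools import product

# Hypothetical probabilities from the cover-sheet example above (P1-P5)
P = {1: 0.10, 2: 0.25, 3: 0.20, 4: 0.25, 5: 0.20}

total = 0.0
for seq in product(P, repeat=5):       # every possible sequence of 5 trades
    prob = 1.0
    for outcome in seq:                # multiply P values along the sequence
        prob *= P[outcome]
    total += prob

print(f"{len(P) ** 5} sequences; probabilities sum to {total:.6f}")
```

If the total is anything other than 100%, either the P1-P5 inputs don’t sum to 1 or a find-and-replace went wrong somewhere.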


I am hoping that by this point people are starting to “get” what I am doing and trying to do. It is sometimes difficult to take formatting, programming and mathematical concepts, translate them into language, and hope the people reading actually get something out of it. I think it is a little easier to show you images like this. But I appreciate those who bear with me through this process.

Bet sizing and making sure your “system” AND bet size both work together to accomplish your goals and risk tolerance is of utmost importance.

At the risk of sounding repetitive, in order to drive home what I am saying, I will continue to rephrase it. One side of the equation is “what is one possible return given a set of data” (for all possible sets of returns over 5 trades). The other side is “what is the probability of each return occurring given this set of data” (over the course of 5 consecutive trades). Once you have both, you will be able to sum and sort the data to determine the probability of each given set of returns.

For example, after 5 trades you will sort each overall portfolio “outcome”, which will ultimately be a portfolio SIZE or a percentage gain on your overall “bankroll”. You can then SUM the probability of every outcome at or above a certain level to determine the probability that you achieve a certain pace of returns.

If your goal over 10 years is to reach a certain “retirement” number, and you place 12.5 trades per year on average, you simply need to solve for the bet size and system that maximize the probability you get there in 10 years, by looking at the data over 125 trades and adjusting the bet size or valuing entire systems by their probability of reaching your goals. Or perhaps you just need to avoid a 20% drawdown; then you simply look at the probability of a 20% drawdown and MINIMIZE it. Or perhaps you want the BEST return you can get with less than a 30% chance of ending down 20% after a year of trading, or without any single year ending 20% below the previous one at any point throughout the 10 years (125 trades).

The simulator will not create a system for you, but it will give you comfort in knowing that, if you can find reliable data, you can maximize your chance of reaching retirement, minimize your chance of a 20% drawdown, or whatever your goals are.
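The goal-probability idea can be sketched with a quick Monte Carlo estimate. Everything below is an illustrative placeholder of my own choosing (the cover-sheet example system, a “double the bankroll” goal, and three arbitrary bet sizes), not a recommendation:

```python
import random

random.seed(3)
# Illustrative system: ROI and probability per result (the cover-sheet example)
R = [-0.20, -0.10, 0.0, 0.10, 0.20]
P = [0.10, 0.25, 0.20, 0.25, 0.20]

def p_reach_goal(bet, goal=2.0, n_trades=125, sims=10_000):
    """Monte Carlo estimate of P(bankroll >= goal) after n_trades at a bet size."""
    hits = 0
    for _ in range(sims):
        equity = 1.0
        for r in random.choices(R, weights=P, k=n_trades):
            equity *= 1.0 + bet * r
        hits += equity >= goal
    return hits / sims

probs = {bet: p_reach_goal(bet) for bet in (0.25, 0.50, 1.00)}
for bet, p in probs.items():
    print(f"bet size {bet:.0%}: P(double in 125 trades) ≈ {p:.1%}")
```

Sweeping the bet size and picking whichever maximizes (or minimizes) the probability of interest is exactly the kind of goal-driven sizing described above.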

But we are getting ahead of ourselves here since this is still under construction and you will have to continue reading this series to see how I build the spreadsheet to accomplish this task.


Before we jump into determining a “return” after 5 trades, the next thing to adjust is “how much” to risk per trade. We will have to update the cover sheet to include things like starting bankroll, fee per trade, and % of capital allocated per trade. We will use that data to build a formula for the “ending amount” after the series of 5 trades, calculated automatically for each row. But that will all be covered in the next post.

Comments »

Building A “Position System Simulator”

Okay, so let’s get started on the quantifiable approach to position sizing. The intro sort of explained what I am trying to accomplish.

Part 1 WoodShedder Time! Creating A Quantifiable Approach To Position Sizing

Part 2 Building A “Position System Simulator”

Part 3 Building Cover Sheet, Determining Probability of Results (Position Size Simulator Part 2)

Part 4 Adding To The Cover Sheet

Part 5: Setting Up The Calculations

The first step is to conceptualize how we are going to simulate a position size and what we are going to accomplish. I don’t want the typical “Monte Carlo simulation” where you have the computer spit out a TON of series of trades using some sort of random number generator and determine the results based upon that randomness. I have found other people who have done that.

What I am basically looking for is a tool that will, given a set of basic expectations for the trade, determine the probability that at a given bet size I will finish down 20% after 100 trades. I could also look at total drawdown within those 100 trades, but determining all the possible combinations of trades that draw down may be a little more complex. We will see.

I may also want to know the probability that I draw down 50%. I will also want to know the probability that I hit a certain “threshold” for gains, such as 100% after 100 trades. The idea is that I may want to find the bet size that maximizes my chances of reaching my “goal result” after 100 trades using a system (or 10 years at 100 trades per year, or whatever), and by knowing the system, I can predict the time required and also know the risks.

For some, the goal result could be what you need to walk away from trading. For others it could be the amount needed to retire. For others, it might just mark a change to a more long-term-oriented approach or a more passive index-ETF strategy. Or maybe it allows you to go buy that business franchise you wanted.

This to me is far more productive than “optimizing your overall maximum gain”, which is insane. The Kelly criterion, “optimal F” and other position sizing strategies often assume an infinite number of trades to weather the volatility storm. In reality those systems could give you a much worse chance of getting to where you want to be, and a far too high chance of a drawdown so significant that it will take ages just to get back to even.

Some run a “Monte Carlo simulation” that will simulate 50 trades or 500 trades with the click of a button; you can repeat that several times, and each randomization gives a different result. But I don’t like that approach because it doesn’t help me maximize anything, it just gives me a good idea of several possible results.

Instead I want to run all the permutations (possibilities) that a series of multiple trades can work out, and use the likelihood of each result and sum up the data that is beyond certain thresholds and determine an exact percentage chance of achieving that result.

Unfortunately, the number of cells available for computation in Excel may make a large number of trades difficult. In the version of Excel I use, there are 1,048,576 rows available. For just 10 trades involving 5 possible outcomes, you have 5^10 = 9,765,625 combinations of results. If we cut this to 4 possible outcomes we have exactly enough rows: 4^10 = 1,048,576.

Determining probabilities and results after only 10 trades is not really all that valuable. Instead, I will have one spreadsheet give me the data for a group of 5 trades, then use that data to combine groups: 5 groups of 5 trades to produce 25 trades, and then 5 groups of 25 to produce 125 trades. This is good enough to give me the kind of picture I am looking for.
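As an aside, there is a shortcut that sidesteps the row-count explosion for a single fixed system: the bankroll multiple after N trades depends only on HOW MANY times each outcome occurred, not on their order, so you can enumerate count vectors (C(N+4, 4) of them) instead of all 5^N sequences. A sketch, using an illustrative system and bet size of my own choosing:

```python
from math import comb, prod

# Illustrative system and bet size (placeholders, not a recommendation)
R = [-0.20, -0.10, 0.0, 0.10, 0.20]
P = [0.10, 0.25, 0.20, 0.25, 0.20]
BET, N = 0.5, 25

# Walk the C(N+4, 4) = 23,751 count vectors instead of 5**25 sequences.
p_down = p_total = 0.0
for n1 in range(N + 1):
    for n2 in range(N - n1 + 1):
        for n3 in range(N - n1 - n2 + 1):
            for n4 in range(N - n1 - n2 - n3 + 1):
                ns = (n1, n2, n3, n4, N - n1 - n2 - n3 - n4)
                # multinomial coefficient: orderings that give these counts
                ways = prod(comb(sum(ns[:i + 1]), n) for i, n in enumerate(ns))
                prob = ways * prod(p ** n for p, n in zip(P, ns))
                wealth = prod((1.0 + BET * r) ** n for r, n in zip(R, ns))
                p_total += prob
                if wealth < 1.0:
                    p_down += prob

print(f"P(down after {N} trades) = {p_down:.4f}  (probabilities sum to {p_total:.6f})")
```

This gives exact probabilities for 25 trades in a fraction of a second; the tradeoff is that it only works when every trade uses the same bet size and outcome set.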

This way the formulas give me data, I can quickly derive new data, plug it into the next spreadsheet tab, then gather new data and input that into the final spreadsheet tab (or have it pull automatically to the next tab if set up right).

What I plan to do is set up a macro that will automatically list all permutations for, say, 5 trades, by putting a number 1 through 5 in each cell that corresponds to a certain result.


As you can see, the macro is running. It automatically spits out numbers 1 through 5 and goes over all possible combinations of ways you can have a set of five 5-digit numbers. You can also see the formula that I set up in Visual Basic to run it. It was very difficult to get a screenshot midway through the process like this because it runs very quickly, but I did.
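For readers outside Excel, the same listing the macro builds can be generated in a few lines of Python. This is only an illustration of the output, not the VBA itself:

```python
from itertools import product

# Every way a block of 5 trades can play out, each trade ending in one of
# the outcomes numbered 1 through 5 -- the same table the macro produces.
rows = list(product(range(1, 6), repeat=5))

print(len(rows))          # 3125 rows
print(rows[0], rows[-1])  # first and last permutations
```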

The next step is to copy and paste the data directly next to the other data, and THEN copy and paste that into multiple sheets, since there will be multiple groupings of 5 trades and the results of each of them. One set of data is for the probability; the other is for the ROI given those sets of circumstances. You will see this done in a later post.

Then a find and replace function can be set up to replace each number with a formula that pulls the data from a “cover sheet”, as briefly mentioned in the introduction [W1-W4, R1-R4, bet size]. I will then need to add another set of data that determines, for each potential series of bets, a cash allocation percentage and what percentage goes into the system.

Eventually, my hope with the spreadsheet is that I can run the simulator for MULTIPLE simultaneous trades at a given correlation. While I have constructed the spreadsheet/calculator that adjusts return for fees and for multiple positions at a given correlation based upon the “optimal bet percentage”, I have not yet figured out how I want to do this for the “simulator”.

Calculating multiple position sizes and determining the probability that both close down or up, and by how much, using only data such as correlation seems like a task I am not quite sure how to do. Hopefully, by talking my way through it, I will come up with something as I develop this spreadsheet.

The next post I hope to accomplish a lot more and show you the developments of the spreadsheet. That’s all for today, class dismissed.

Comments »

WoodShedder Time! Creating A Quantifiable Approach To Position Sizing


Part 1 WoodShedder Time! Creating A Quantifiable Approach To Position Sizing

Part 2 Building A “Position System Simulator”

Part 3 Building Cover Sheet, Determining Probability of Results (Position Size Simulator Part 2)

Part 4 Adding To The Cover Sheet

Part 5: Setting Up The Calculations

Since Woodshedder is semi-retiring from IBC, I decided it was a good time to kick off a more quantified approach in his honour(sic). While he quantifies entire trading systems, I will just be setting up a spreadsheet that lets you input either back-tested quantifiable data or your history of results from running a more discretionary system. The spreadsheet will then help you determine position sizing.

There will be 2 spreadsheets. One is very close to complete, but requires some review to ensure I have not messed up a formula somewhere. This spreadsheet helps determine “expectations” given a set of possible outcomes over a set number of trades.


I will explain all of this stuff and how I use it another time. In developing this spreadsheet, I have come to the conclusion that such leverage, even at a fraction of this, is insanely reckless and can produce hugely volatile, entirely unnecessary swings in your bankroll. There is a very good chance that even over 50 trades the system could lose a very large percentage of your bankroll. Since people don’t live forever, they won’t be able to withstand “infinite” bets in a given system. I accounted for fees, the value of correlation and multiple bets, but it still isn’t enough for my taste.

Unfortunately, the first spreadsheet, with all its formulas of modified Kelly criterion / optimal F%, doesn’t give me enough data. I have recently learned what drastic drawdowns such a strategy can produce. If you happen to hit that drastic drawdown, it can take significantly longer than expected to ever recover (perhaps longer than your lifetime); it can pose severe psychological risks (going on “tilt”, as they call it in poker); and such volatility often means fees do more harm at larger bet sizes than if you had just kept your bet small to begin with and avoided the volatile swings in which fees become a more significant issue.

And so, I want to know the probability of ending up down after a certain number of trades, or up 100%, down 20%, down 50%, up 50%, and things like this.

The goal is to construct a system simulator that takes a given system. The system will be defined with:

W1=ROI for result 1
W2=ROI for result 2
W3=ROI for result 3
W4=ROI for result 4
W5=ROI for result 5
P1=Probability of result 1
P2=Probability of result 2
P3=Probability of result 3
P4=Probability of result 4
P5=Probability of result 5

I have gone beyond this for my “optimal bet size” calculator. You can see 10 “results”, but it can list up to 15 events (I hid a few rows in Excel that aren’t visible in the JPEG). I could get more precise for a strategy such as a trailing stop with several possible exits, but I do not want to go beyond 5 results for the simulator.

So after the 5 possible trade outcomes are listed, the “simulator” will be able to determine the set of data I want to know at the listed bet size, which can be modified until it gives me numbers I am comfortable with, and give me a much better idea of what a given bet size will accomplish.

The next post, coming in a few minutes, will get into more detail about building the simulator I speak of, which I will be building in front of you, screenshots and all.


Comments »