
How Many Trades Should You Have At Once?

Ever wondered how increasing the number of trades you hold in a period changes your performance? I'm close to pinning that down with a very specifically defined calculation. The one missing variable is "correlation": I don't yet know how to account for it or how it impacts the probabilities.

So for this experiment we're going to assume binary outcomes of -100% or +250%. I'm going to use my own variation of the Kelly criterion strategy so I can model the different trade counts at a roughly equal amount of risk.

The logic is: if the maximum you can bet on a single position to maximize geometric growth is 10%, then after placing that bet you are left with 90% in cash. If the bet loses, your next bet would be 9% of your initial capital anyway, so we can allocate 19% across two trades held simultaneously, provided there is zero correlation. If we make a third trade, we are 81% in cash, so we can add an additional 8.1% of risk and divide the total amount at risk by 3, and so on. You could probably risk slightly more than 19% for two trades (but less than 20%) because a loss isn't guaranteed. Still, we size as if the first position had already lost, both because it can lose and because of the nature of the Kelly curve: overbetting provides less return at increased risk, whereas underbetting provides less risk while return declines only slightly.

Background

For some brief background on how the risk amount that maximizes growth is found, see the Kelly criterion. Essentially you are going to take (O1^N1)*(O2^N2), where O1 is the wealth-multiplying outcome at the given position size and N1 is the number of times you produce that outcome. For instance, at a 1% position size with a 250% gain and a 100% loss as the two outcomes, O1 is 1.025, because a 1% position producing a 250% gain multiplies our wealth by 2.5%, or a factor of 1.025.

This is calculated by

1+(.01*2.5)=1.025

And the loss is .99 which is calculated by

1+(.01*-1)=.99.

So for 20 wins and 30 losses the equation for how much our wealth changes is (1.025^20)(.99^30).

The Kelly criterion reverse engineers the position size that maximizes the geometric rate of return, based upon your assumptions, over an infinite number of bets. As volatility increases, eventually return decreases. The Kelly bet seeks the point beyond which you can no longer improve the return, without much regard for the volatility along the way. In reality, you'd probably want to run a fractional Kelly strategy if you want to reduce risk. Still, it's a good way to compare systems at an equal amount of risk. Right now I'm using a modified Kelly: similar logic to the Kelly, but not the Kelly exactly, adjusted to increase position size when multiple positions are held at once.

You could manually adjust the position size, over a very large number of outcomes with a proportional amount of wins and losses, until you can't seem to increase the growth any further, which approximates the solution; or just use a Kelly criterion calculator. You can run more complex calculations with more than two outcomes, but for now I'm just using two.

Optimal Bet Size

So the optimal bet size for a single bet is 9%, given a 35% chance of a 2.5-to-1 payout.
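As a quick check, here's a small Python sketch of such a calculator (my own illustration, not part of the original spreadsheet): it computes the closed-form Kelly fraction for the binary bet above and confirms it with a brute-force search over the expected log growth.

```python
# Kelly fraction for the binary bet above: 35% chance of +250%, 65% chance of -100%.
import numpy as np

p, b = 0.35, 2.5              # win probability, payout multiple on a win
q = 1 - p                     # loss probability

# Closed-form Kelly fraction for a binary bet: f* = (b*p - q) / b
f_closed = (b * p - q) / b
print(f"closed-form Kelly fraction: {f_closed:.4f}")      # 0.0900, i.e. 9%

# Brute-force check: maximize expected log growth  p*ln(1 + f*b) + q*ln(1 - f)
fs = np.linspace(0.001, 0.40, 4000)
growth = p * np.log(1 + fs * b) + q * np.log(1 - fs)
print(f"numeric Kelly fraction:     {fs[np.argmax(growth)]:.4f}")
print(f"geometric growth per bet:   {np.exp(growth.max()) - 1:.4%}")  # ~0.97-0.98% per bet
```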

We can then solve for the optimal bet size for 40 bets, assuming zero correlation between trades but with all 40 trades held during the same exact time period. There's an important distinction here. Rather than multiplying our wealth by 1.025 with each win, at this point each win only adds 0.025 to the total portfolio, because we don't get the benefit of compounding while the trades are on simultaneously. Similarly, if we hold 40 1% positions at once and lose them all, we don't lose 1-(.99^40)=.331, or 33.1%, but instead lose the full 40%. So the first formula is not sufficient to describe what happens to our wealth. This is also where correlation is a bigger liability than the formula currently captures, as the chance of larger drawdowns increases as the correlation increases.

As explained before, we are going to assume a full Kelly bet and then an additional full Kelly of risk on the remaining capital for each additional bet. Since the full Kelly is 9%, the cash on hand remaining is .91 of our portfolio, which can be multiplied out for each of the 40 bets to determine how much cash on hand to keep:

.91^40 = ~2.3% cash, which leaves 97.7% at risk; divided by 40, that is 2.4425% per bet.

We can repeat this for 30 bets, 20 bets, 10 bets and 5 bets to construct a table of optimal bet sizes per bet.

Bet size given total number of positions.

50 bets 1.98209% per bet
40 bets 2.44251% per bet
30 bets 3.13649% per bet
20 bets 4.241775% per bet
10 bets 6.105839% per bet
5 bets 7.519357% per bet
1 bet 8.9999999% per bet
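A few lines of Python (again my own sketch of the arithmetic above) reproduce that table from the rule "keep 0.91^N of the portfolio in cash and split the rest equally across N bets":

```python
# Bet size per position, given n simultaneous, uncorrelated bets and a 9% single-bet Kelly:
# keep (1 - 0.09)**n of the portfolio in cash and split the remainder equally across the n bets.
def per_bet_size(n, single_kelly=0.09):
    at_risk = 1 - (1 - single_kelly) ** n
    return at_risk / n

for n in (50, 40, 30, 20, 10, 5, 1):
    print(f"{n:>2} bets: {per_bet_size(n):.5%} per bet")
# 50: 1.98209%, 40: 2.44251%, 30: 3.13649%, 20: 4.24178%, 10: 6.10584%, 5: 7.51936%, 1: 9%
```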

Now we can construct a simulator that sums the total % gained per bet over a period for 12 periods and randomizes the outcome according to the probability.

Excel gives us a function, =RAND(), which delivers a number between 0 and 1. If that number is less than .35, the bet delivers 2.5 times the position size as its outcome; if it's more than .35, it delivers -1 times the position size. All position outcomes for a period are summed, the number 1 is added, the portfolio is multiplied by that total, and then the fees for the period are subtracted. 12 periods are simulated, giving us a yearly total. We can then run through 1,000 different yearly results and see the distribution of results, the average, and even estimate the compound annual rate of return.

This way we can see when the benefits of diversification outweigh the costs for smaller five-figure portfolios where fees eat into profits. I am probably overestimating fees slightly, as I used $6 per trade and assumed both buys and sells for all trades, whereas in reality there is only an opening trade for the 100%-loss trades.

The CAGR is a crude estimate, as the simulator only gives me the first 100 results. I am basically taking the returns plus 1, multiplying them all together, and solving for X where X^100 equals that wealth multiple of the first 100 results; X minus 1 is the CAGR. The CAGR will be substantially less than the mean outcome. To illustrate, imagine a 25% average return where the results are -50% of your portfolio and then +100%. The actual CAGR of an equal number of -50% returns and +100% returns would be zero, not 25%. The CAGR reflects the loss due to volatility.

I assumed a $20,000 starting portfolio and $6 fees, with the assumption that there was both a buy and a sell order for each trade. Trade fees were deducted after each period's multiplier was applied.
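Here is a rough Python version of that simulator as I understand it (an approximation of the spreadsheet, not a copy of it): binary outcomes summed across simultaneous positions each period, 12 periods per year, $6 per order with a buy and a sell per trade, and fees deducted after each period's multiplier.

```python
# Rough Monte Carlo version of the Excel simulation described above.
import numpy as np

rng = np.random.default_rng(0)

def simulate_year(n_trades, bet_size, start=20_000.0,
                  p_win=0.35, win=2.5, fee_per_order=6.0, periods=12):
    """One simulated year: n_trades simultaneous, uncorrelated bets per period."""
    capital = start
    for _ in range(periods):
        outcomes = np.where(rng.random(n_trades) < p_win, win, -1.0)
        capital *= 1 + (outcomes * bet_size).sum()   # summed, not compounded, within a period
        capital -= 2 * fee_per_order * n_trades      # a buy and a sell per trade
    return capital / start - 1

def summarize(n_trades, bet_size, n_years=1000):
    results = np.array([simulate_year(n_trades, bet_size) for _ in range(n_years)])
    geo = np.exp(np.log1p(np.clip(results, -0.9999, None)).mean()) - 1   # crude CAGR estimate
    print(f"{n_trades:>2} trades @ {bet_size:.4%}: mean {results.mean():.1%}, est. CAGR {geo:.1%}")

for n, size in [(40, 0.024425), (20, 0.042418), (10, 0.061058), (5, 0.075194), (1, 0.09)]:
    summarize(n, size)
```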

40 trades @ 2.4425% position per bet

30 trades @ 3.1365% position per bet

20 trades @ 4.2418% position per bet

15 trades @ 5.0466% position per bet

10 trades @ 6.1058% position per bet

5 trades @ 7.5194% position per bet

1 trade @ 9% per bet

For 12 independent trades, or 1 per one-month period, the theoretical gain is .97% growth per bet, or 1.0097^12 = ~12.28% growth per year… theoretically. But that's over a time horizon of infinity, and as you can see by the distribution, should 1,000 traders have the same exact expectation, the actual results over a year can vary wildly. Also, with only a $20,000 account, taking too much risk, or not enough, can result in problems should losses occur early, because trading fees are a flat amount.

We find we can greatly enhance the return by adding more positions, but the benefit of diversification declines with each additional bet.

Adjusting For Correlation

While I think the above can give you a good idea of how many trades you should hold at once for a given portfolio size (and we could easily adjust the calculation for half of the initial Kelly bet), we still have yet to develop a system that adjusts bet size based upon correlation. What I believe is true is that as correlation approaches one, the total amount risked should approach the single Kelly bet. After all, if you spread your capital across multiple trades on the same coin flip, it would be no different than making a single bet on that coin flip. In other words, in our previous example, as the correlation increases the total amount at risk should approach 9%. This means the ideal bet in reality is somewhere between the bet size calculated above at zero correlation (as shown in the prior table) and the bet size at a correlation of 1, which is 9% divided by the number of bets.

For instance, if the correlation was 0.50 across 20 bets.
A correlation of zero suggests [1-(.91^20)]/20=4.241775% per bet.
A correlation of 1 suggests 9%/20=.0045 or 0.45%
Since the correlation of 0.50 is the midpoint between 0 and 1 we can average the 2 and get 2.35%*

*but that’s only an approximation.

Unfortunately the relationship may not be linear, so while we can be sure the optimal bet size for maximizing CAGR across 20 simultaneously held bets is somewhere between 0.45% and 4.241775%, we can’t be sure it is the exact average of 0.0234589 or ~2.35% per bet.
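As a sketch, the interpolation looks like this (with the same caveat that a linear blend is only an approximation):

```python
# Crude linear interpolation between the zero-correlation and correlation-of-one bet sizes.
# The true relationship is probably not linear; this is only an approximation.
def approx_bet_size(n, correlation, single_kelly=0.09):
    size_rho0 = (1 - (1 - single_kelly) ** n) / n   # zero correlation
    size_rho1 = single_kelly / n                    # perfect correlation
    return size_rho1 + (1 - correlation) * (size_rho0 - size_rho1)

print(f"{approx_bet_size(20, 0.0):.4%}")   # 4.2418% per bet
print(f"{approx_bet_size(20, 1.0):.4%}")   # 0.4500% per bet
print(f"{approx_bet_size(20, 0.5):.4%}")   # ~2.3459% per bet
```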

I also want to look at "half Kelly" strategies in 2 different ways. One is dividing the per-trade bet by 2: for 40 bets, if we calculated 2.44%, the half Kelly would be 1.22% per trade. That halves the total amount of capital at risk. The other is starting from a 4.5% number, so 0.955^40 = ~15.85% cash on hand, or ~84.15% invested, divided by 40 is 2.10365% per trade instead of 2.44%. We can see that this is still much more aggressive than halving the amount per bet.
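The two variants side by side, as a quick arithmetic check:

```python
# Two ways to "half Kelly" a 40-position book.
n, full_kelly = 40, 0.09

full = (1 - (1 - full_kelly) ** n) / n            # ~2.4425% per trade (full Kelly layering)
variant_a = full / 2                              # ~1.2213%: halve each per-trade bet
variant_b = (1 - (1 - full_kelly / 2) ** n) / n   # ~2.1036%: start from a 4.5% single bet, then layer

print(f"full: {full:.4%}   half A: {variant_a:.4%}   half B: {variant_b:.4%}")
```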

Normally the half Kelly solution provides about 3/4 of the return at 50% of the volatility of the full Kelly. This is really promising for multiple bets, where we can reduce the amount at risk by only a small amount and still be at the equivalent of a half Kelly strategy in some regards.

In the future I also want to come up with a different calculation such as solving for the “probability of a 50% decline or more in a year” (or probability of 100% gain for example). This is pretty easy to set up.

If the result is -50% or less in a year, a simple formula will give me a 1. Otherwise zero. The average is the probability of this event. This helps you better model the probability of achieving a certain result (such as 100% return) while measuring it against a probability of a negative outcome (such as 50% loss) so you have a different way to compare risk and reward of position size and number of trades and understand expectations.
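In code form (with stand-in data; the real input would be the 1,000 simulated yearly returns from the simulator above):

```python
# Estimate "probability of X outcome in a year" from simulated yearly returns.
import numpy as np

rng = np.random.default_rng(1)
yearly_returns = rng.normal(0.12, 0.45, 1000)   # stand-in data; substitute the simulator output

p_down_50 = (yearly_returns <= -0.50).mean()    # P(losing 50% or more in a year)
p_up_100 = (yearly_returns >= 1.00).mean()      # P(gaining 100% or more in a year)
print(f"P(-50% or worse): {p_down_50:.1%}   P(+100% or better): {p_up_100:.1%}")
```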

For now it seems more bets is better, up to a certain point where the quality of opportunities and expectations, as well as the fees, become problematic. It's hard to identify where that is, even with thousands of simulations, because of the increased "skew" (the expectation becomes increasingly dependent upon a smaller and smaller probability of a more and more spectacular outcome) as risk and the number of trades increase. Also, as you decrease the bet size to be less aggressive while increasing the number of trades, the fees should become more problematic, which I think we will see in a half Kelly and quarter Kelly simulation. Lowering your position below 0.50% when you have a $20,000 portfolio, for example, might become a problem and eat into returns too much. As such, as we seek to decrease our risk, we will eventually have to decrease our number of trades, or else position size will be too small, given the fees, to provide as big of an edge.


Breadth Is A Little Too Strong

[image: breadth]

Right now it seems that the breadth reading over various time frames is coming into the range where you need to be cognizant of any potential sell signals. Overbought can stay that way for a long time, but keep in mind that the more overbought a market becomes, the more new buying it requires to sustain the newly established prices. Some strong markets can stay overbought for a long time as people wait for dips that aren't available, chase, and even go on margin until they are flushed out.

As Charles Ponzi eventually learned, you need increasingly more new money to pay off the old. Don't get me wrong, I'm not suggesting the market is a Ponzi scheme, but in certain ways the mechanism operates the same way. More buying demand, higher prices and new investors… but too much buying demand and too many investors to pay off, and suddenly prices are unsustainable for the time being. In a Ponzi scheme the jig is probably up, but in an actual market backed by earnings the bullish trend can resume after a repositioning period.
The market is backed by solid earnings, but earnings are less important than how the market reacts to them. Price is governed only by buying demand and selling supply; everything else is a rationalization of WHY the buying occurs. It just so happens that strong earnings and valuations happen to be one of the more persuasive rationalizations for people to buy. However, should investors reach a point where they're all tapped out long and can't recruit any more buyers, the market will have to rest or sell off until people can come up with more cash and credit, or until more capital from new money enters the market or shifts from another asset.

In Paul Tudor Jones's "Trader" documentary I believe he said something about how this occurred throughout the 80s in a strong bull market. It would rally up to overbought as people chased until they were over-levered long, and then it would gap down. The market would force everyone to be leveraged long, and then the crashes would be bullish as they flushed people out, and people would quickly try to chase again after satisfying the margin clerks. PTJ's take was that the market was governed by private and public debt and margin. Each had a cycle of when payment was due and when cash would leave the system, as well as the introduction of new debt that would inject capital into the system. This eventually led up to the 1987 crash, which many feared was a sign of a new depression on the way, until the market eventually took out the high.

So to be clear, I am not predicting a crash, nor is the breadth. I'm saying the market is backing itself into a corner where, should this buying pressure continue, a selloff is perhaps the only healthy move left. The alternatives are a buying mania that gets money off of the sidelines and turns this into more of a bubble market, or increased debt and margin so that buying persists and the market becomes increasingly vulnerable to a crash. The Feb 1st print of a VIX under 10 also may have signaled complacency.

I'm a little concerned about a correlated selloff ahead, as the market has had so little correlation, such a low VIX, and so many more stocks moving higher than moving lower over various time frames.

Breadth is usually more of a concern when it begins diverging from the market and showing decreased interest and increased selling under the surface from high levels, but be aware.


Climbing The Risk Ladder

Human nature is to avoid pain (loss) and seek pleasure (gain). This inherent trait causes individuals to make irrational decisions. We try to make the best decisions, but individuals are prone to biases due to emotion.

One of the ways market participants react to conditions is to look for confirmation of existing emotions and beliefs and to ignore information that contradicts them. When collective emotional extremes are reached, that creates mispricings, because human nature swings between avoiding pain and seeking pleasure.

[image: risk cycle]

One layer of how markets climb the “risk ladder” is the longer term generational time frame shift from savings and bonds to stocks. This is a very, very long process as you can see by the history of 10 year interest rates.
[image: 10-year interest rates]

Note that because assets are priced in relation to each other, and because the total amount of capital and credit available changes, drawing conclusions in isolation from where interest rates are is probably not a great idea. Speed, or rate of change, and trend may tell you something about the conditions of the market, but not the pricing itself. Interest rates compared to valuation metrics of the market adjusted for inflation, and compared to the pricing of other assets, can potentially tell you which assets are being neglected and which assets wealth has concentrated in. But the point of the illustration is to get an idea of the time frame and to understand how capital shifts and interest rates change.

Side note: there are generations of investors and traders who have never really experienced an uptrend in interest rates. That's the pain trade, because a declining number of people have experienced it as market participants, especially from a low point like the 1940s and 1950s. That would require you to have been in the market for 65-75 years.

[image: risk vs. reward]

This image is a graphical representation of a mostly "efficient market" (with normally distributed mispricings). I believe there is more of an irrational component. When stocks are underowned and priced based upon an emotional fear of loss, they represent lower risk AND higher reward. When stocks are overowned and priced based upon a manic seeking of gains, they represent higher risk AND lower reward. I believe markets swing between extremes, not only adjusting to interest rates, but also adjusting to the emotional component and using confirmation bias to rationalize the choice after the fact.

It's not that people make rational decisions; it's that they like to think of themselves as rational creatures while in fact behaving totally irrationally most of the time. Rationalization is the explaining away of a thought in a way that seems legitimate enough to satisfy the ego. Our precious egos won't allow us to admit that humans are incapable of seeing truth as it is, so we literally invent ways to convince ourselves that acting according to our emotions is justified.

Sometimes emotions align with rational decisions. For example, someone feels good when they get a good deal, so they may choose the lower-priced item when the decision is not complicated and there aren't different features to weigh. Or they may avoid the emotional annoyance of traveling too far away to get a better price, and that happens to be financially better anyway. But when emotion is involved and there are multiple variables, you can be sure that a lot of people will make the emotional decision over the rational one.

 

[image: sentiment chart]

Take for example valuation investing. It seems totally rational to invest in securities with the lower P/Es. It seems totally rational to avoid the markets where the P/Es and P/Bs are high. But I think it’s a mechanism to rationalize herding.

Imagine there's only $1,000 in circulation because of a huge deflationary vortex. Earnings have mostly gone negative and companies are going bankrupt. The P/Es are either enormously high or negative. Is it rational to buy? According to the "logic" of P/Es it is not rational at all. However, you have the stocks that everyone has been forced to sell, an economy where deflation has kicked in, and maximum fear. My irrational filter, which says everyone is irrational most of the time, would say this is a great time to buy, because everyone else is acting irrationally. And in fact 2008-2009 is a great example of when emotions ran high, deflation set in, Lehman went bankrupt and Bear Stearns was bought out for $2 per share. Then price action led the way higher after several bear market rallies had convinced people that there would only be short-lasting bounces.

Now let's say the opposite. The market has totally inflated future growth expectations, and much of the current earnings are due to a massive concentration of capital in the US. Because of all the capital inflows, there are businesses raking in cash as people have the highest amount of disposable income they've ever had. But that disposable income is based upon the cash and credit made available by an inflated market, so any valuations that seem rational are not. Certainly in 1998-2000 there was a period where the irrational got more irrational in internet stocks. No longer were prices driven by institutions, but instead by the masses crowding into the idea. Because of a history of valuation no longer mattering, and the thesis that the internet would change the world economy (yes, eventually it did, but only after 15 years of pain) that people convinced themselves was rational, we had a period of excess that extended further than seemed possible.

I also believe that at a certain point risk looks more like this:

[image]

But that's more a component of how you construct your portfolio than of the availability and pricing of market securities. You can take the highest-risk assets (like stock options) and make them work if you size positions correctly. However, this also applies to global economies and how capital is allocated across the world. Initially, increased allocation to riskier assets is healthy for a market. But there's a certain point where the allocation is too large and capital concentration becomes a liability, particularly as participants seek risk. Pushing prices to a point where it takes an increasingly large percentage of the world's capital to support them is what results in major tops in assets.

But this climbing of the “risk ladder” occurs on different wavelengths of time and correlates to sentiment.

[image: risk cycle]

I am working on slicing up the market into different layers to see what is consolidating. I also want to look at the breadth of these ideas in the future to see what is happening now as well as what is setting up. I've noticed a bit of a shift happening where the scores have improved in the higher float-short-% names. You can't necessarily see it completely, since I haven't given you the context of history, and while I've seen each update I've made, I haven't been able to show you how it has changed over time. But anything with a consolidation score above 500 seems to be where there are still stocks worth considering, although it usually seems like only the historically volatile names with the lower scores matter, and the lower-beta names seem to always score well due to how I've constructed the score. Some of that also has to do with the financials, transports and others making a large move, with the rest comparably less volatile than the move in November.

[image: risk]

So far it seems the market is not quite ready for the highest-risk stocks, but I have noticed a shift taking place that's hard to articulate. It seems that stocks near their highs are coiling (possibly for a second leg up), while the stocks that have sold off and are near their lows are starting to set up, and the stocks that have really advanced off their lows have either been resting or have been behaving more wildly; it's hard to say without combining the data with performance stats. Meanwhile the market has gained an appetite for slightly higher short interest names, and the low-float stocks that were once not at all showing signs of consolidation are starting to consolidate more than before.

I’m hoping development of indicators like this will allow me to get a better feel for what’s working, what’s setting up, how the market behaves, and ultimately what’s next.

There is another big project ahead: automating the risk classification of stocks. I don't know the details yet, but stocks near their highs, with higher float, higher market cap, lower short interest, greater institutional ownership, higher profit margins and ROE, more consistent and positive earnings, and lower beta and volatility on the long-term time frame with a positive trend will provide a sort of "risk profile" and average "risk" level, which can then be adjusted by sector and industry average and compared relative to the sector and industry average as well. I want to spot in more detail how the market goes through this risk cycle and when each layer is setting up. I want to look at sectors by risk cycle and look at the consolidation score, and for the industries with enough qualifying stocks I want to identify the setups by risk level and performance by risk level. I want to know what specifically to set up for and whether or not I can identify themes.


Trade Review

[image: the two trades]

My entries were around the lower range of the day for both of these trades.

My exit for CAAS was made lower. BCLI met the target today so I’ll look for a failure to make new highs or take out a candle low or I may exit on a close below the target.

[images: CAAS, BCLI]

Maybe a reentry in CAAS?
[image: CAAS]


Groups Setting Up

I set up the main tab in Neo to filter out most rising patterns. Bull flags and rising wedges are mostly gone, set aside under their own pattern categories should you choose to scan for them. I haven't really optimized this setting. For now, anything with weekly AND monthly movement both up, or day AND week both up more than 1%, or the stock 15% above its 20-day moving average or 50% above its 50-day moving average, is classified separately for scans. Also, anything that has moved more than 60% from its 50-day low and has declined less than 50% from the 50-day high, with a medium-term consolidation range 1.2x higher than the market's, is a "possible high tight flag."
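Roughly, the classification logic looks like this in code (the field names and exact comparisons are my own stand-ins, not Neo's actual formulas; in particular I'm reading "1.2x higher than the market" as at least 1.2 times the market's consolidation range):

```python
# Rough sketch of the "filter out rising patterns" classification described above.
# Field names and thresholds are stand-ins for whatever Neo actually computes.
def classify(stock, market_consolidation_range):
    rising = (
        (stock["week_change"] > 0 and stock["month_change"] > 0)
        or (stock["day_change"] > 0.01 and stock["week_change"] > 0.01)
        or stock["price"] > 1.15 * stock["sma_20"]
        or stock["price"] > 1.50 * stock["sma_50"]
    )
    possible_htf = (
        stock["price"] >= 1.60 * stock["low_50d"]           # up 60%+ from the 50-day low
        and stock["price"] >= 0.50 * stock["high_50d"]      # down less than 50% from the 50-day high
        and stock["consolidation_range"] >= 1.2 * market_consolidation_range
    )
    if possible_htf:
        return "possible high tight flag"
    if rising:
        return "bull flags / rising wedges"
    return "falling wedges / triangles"
```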

I also filtered out exchange traded funds and any “errors”

The “consolidation score” now only relates to stocks that are consolidating sideways to downwards so if a bunch of stocks from the same sectors have higher scores they are much more likely to have a similar correlated setup going on.

As the Option Addict teaches, when you have an uncorrelated market and suddenly a group starts correlating that usually signals stocks ready to move as a group.

So for the first time since implementing these new changes, I want to test out OABOT 2.0 / Neo's ability to spot groups setting up.

[image: Neo output]

If a group doesn't have at least 10 names that qualified, I chose not to bother looking at it. Perhaps 5 is enough in some cases.

[image: group scores]

Aside from industries, I can also look at sectors and exchanges, and in the future I hope to be able to look at short interest, groups in similar ranges from highs to lows, and institutional ownership. Some of these I may also add at the group level so you can see the average short interest and average institutional ownership of a group… but that's later down the road.

There's a little bias towards stocks that generally do not move, as I have not yet added a system to reward stocks with higher-than-average movement (I do that on my own when I run scans sometimes). The first number is the consolidation score of all those that qualified. The remaining numbers are the short-term, medium-term and long-term scores for the entire group, including those that were filtered out (I haven't changed these yet). The next number is the average RSI of the group, and the final number is the total number of stocks that qualified.

Now we can use pivot tables to find the industries, highlight all of these stocks, and paste them into finviz. I decided to also include all stocks of that group, not just those with a consolidation score over 1000.
See here.

Since I sorted them by industry, you can just page through the list and if you see any groups with several setups that look kind of the same you can focus on that group.

Apparel and Property & Casualty Insurance seem to be setting up together, but I've certainly seen more actionable themes. I'd say these groups may need a little bit more time to consolidate, but that doesn't mean you can't be early and find some individual names setting up. I could look down the list.

One potential problem I'll have to fix, which I just realized, is that I applied a huge penalty to sort out stocks that are likely in an M&A situation (buyout offer) so that they don't show up in my list. I'll probably need to instead classify them as "possible M&A," and if they're classified that way they won't show up in the "falling wedges and triangle patterns" classification.

Aside from that, you can just scan for individual setups.

I’m going to put in a slight bias for high short interest.

What I did was take short interest times 1,000, and if the result was above 1,500 I capped it at 1,500. I added this to the total consolidation score. Here are the top 400 names in the "possible triangles and falling wedges" category. There may be some other false positives, because I haven't optimized and, out of 400, some setups aren't going to be perfect. Last time I found a record high of 107 names out of 400 that looked interesting enough to put on a watchlist.
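In other words (field names are my own; the sketch assumes short interest expressed as a fraction, e.g. 0.20 for 20%, since the spreadsheet's units aren't stated):

```python
# Short-interest bias: short interest times 1,000, capped at 1,500, added to the consolidation score.
def biased_score(consolidation_score, short_interest):
    return consolidation_score + min(short_interest * 1000, 1500)

print(biased_score(1200, 0.20))   # 1200 + 200 = 1400; any bonus above 1,500 gets capped
```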

Here's the list. From this list I like to go through it manually. If I wanted to reduce this step I'd crank up a lot of the scoring methods and only look at the top 50… but in the process of eliminating false positives I'd also eliminate a lot of good setups because they didn't pass one threshold.

Normally I’d post a condensed version of my own manual scan from that list but today I wanted to illustrate the results and provide a reference so I could check them later this weekend.

 


Trading System: Cliff Notes

Rules:

1)Own no more than 20 *active* option positions

2)Own no more than 40 option positions total.

3)1% position size per option with 5 possible exceptions.

4)Those 5 exceptions may all be 2%; at most 2 may be 3% and at most 1 may be a 4% position.

5)Target a max of 50% in stock positions: ~5 positions of 10% or less.

6)10% income position that is always on, except when sold to avoid margin.

7)Remaining capital with 1-5% in asset allocation options. (commodities, currency/cash, stocks/short VXX, bonds/income) plus possible 2% hedge.

8)Only take a trade where reward is 3 times the risk or more.

9)Monitor breadth to add more when it’s oversold or when it makes a strong breadth thrust off of oversold, add normally when it’s not overbought, and add proportional to rate of selling or slower when it’s overbought or trending down from overbought (until oversold signal).

*A position that falls 75% below its original value is basically written off as a loss.
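For reference, the same limits collected in one place as a config-style sketch (a hypothetical structure that just restates the numbers above):

```python
# The portfolio limits above, restated as data (hypothetical structure, not actual code I run).
PORTFOLIO_RULES = {
    "max_active_option_positions": 20,
    "max_total_option_positions": 40,
    "base_option_position_size": 0.01,              # 1% per option position
    "oversize_exceptions": {"count": 5, "max_at_3pct": 2, "max_at_4pct": 1},
    "max_stock_allocation": 0.50,                   # ~5 stock positions of 10% or less
    "income_position": 0.10,                        # always on, except when sold to avoid margin
    "asset_allocation_sleeve_range": (0.01, 0.05),  # per sleeve, plus a possible 2% hedge
    "min_reward_to_risk": 3.0,
    "write_off_threshold": -0.75,                   # a position down 75% is treated as a loss
}
```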

Watchlist:

-Watchlist is developed through OABOT’s top 400 and manually filtered from there to usually 20-60 names.

-Import watchlist into a spreadsheet with suggested stops and targets and current price

-Reward/risk will automatically update once it’s in the watchlist.

-Trading rules for entry must follow the above portfolio rules and also an entry checklist, which should also be using triggers.

Management:

1)As stocks are bought, input the stop, target, entry price into spreadsheet.

2)Since many of you are already trading, don’t worry about importing a large list of current holdings… just update the next trade until you phase out the old trades.

3)Stock must be below stop 5-10 minutes before trading close to trigger a sell.

4)A stock above target has a separate rule: wait for the candle to close below the prior candle's low, or to fail to close above the prior candle's high, then sell before the following candle's close.

5)Which timeframe you use when a stock is above target depends on conditions, but in general: with investments you may use a monthly or weekly chart. With stock trades and options that have more than a week remaining until expiry, you use a daily chart. On options expiry week you might use a 1hr or 4hr chart. One day before options expiry you may use a 30-minute chart. On options expiry day you may use a 5-minute chart.

6)Generally update and check spreadsheet every hour. Also monitor watchlist stocks for purchase. On options expiry day check them every 10-15 minutes. One trick with that is if the stock is higher than the last time you checked, you don’t have to look at a chart.


 


Backtesting Patterns And Triggers

Some people like to short breakdowns of bearish patterns or “shorting into the hole”. Let me show you why that has been a terrible idea at least during a bull market.

The following are the results of simply buying the breakdown and holding for 3 months. No stop. No targets. No complicated additional signals. Virtually every bearish "breakdown" reverses so hard it ends up positive, and many of them actually beat the market. You could have bought the broadening top breakdown, or a bearish pennant breakdown, or a diamond top or double top breakdown, and ended up beating the market.

[image: contrarian buy results]

These results of course are volatile and ever changing, so don’t expect the future to necessarily resemble the past. Still it’s a good reminder of why fighting bull markets by “shorting into the hole” is probably a terrible idea.

Certainly you might argue that you'll be in and out much sooner than 3 months, but had I put another criterion on it, I'm sure the results would also have been bad for the short. Maybe you'll find the one exit strategy that works for this strategy, but I'm not going to bet on it.

Let's look at some bullish patterns, buying after the stock breaks out (buying late).

[image: bullish pattern results]

Only falling wedges (which have been an awesome pattern) beat buying the top 3 traditionally bearish patterns after the breakdown. Some of them didn't even beat the Dow. Buying falling wedges BEFORE they break out is probably better still.

I don’t know if anything is necessarily actionable here in terms of adjusting your strategy because there was a time when diamond bottoms were one of the top patterns and falling wedges were only slightly above average at least if you bought from the breakout. There also was a time when inverse H+S worked brilliantly and times when it didn’t work very well.

But there is a lesson here.

Be early or don’t bother. (Generally).

Also, since the strategy is to be early, we could work to fine-tune what exactly triggers an entry point and how we prioritize which stocks to enter from a watchlist. We can look at setup quality to some extent, but that's a little subjective and might only help us narrow down a large list to a smaller one.

So we can use backtesting for looking at “triggers”. If you can find a simple signal to trigger a trade within the context of a developing pattern and backtest it, it should beat the market if it’s a better signal than just “randomly” buying.

The bullish hammer candlestick pattern:

[image: hammer candlestick results]

And the RSI(5) crossing below 20 (buying oversold).
[image: RSI(5) results]

The RSI in particular is not a good signal to use on its own, even though the results are good. This only backtested S&P stocks, which reduces the risk of holding a stock going to zero, as a removal from the S&P would constitute a sell. There's really no risk management if you're buying something oversold; you'd likely end up selling it even more oversold. However, if you buy oversold with a developing pattern providing a clear support area, you can manage the stock by selling a failure to hold support, and that actually probably makes some sense, since there's still a range of buyers now selling. Also, if we are using OTM options to buy, we have a built-in risk management mechanism in the full premium, so we still have exposure after the pattern breaks down and it can still reverse.
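For reference, here is one common way to compute RSI(5) and flag the oversold cross in pandas (a generic Wilder-style RSI; the backtesting site's exact implementation may differ):

```python
# Generic RSI(5) oversold trigger (Wilder smoothing).
import pandas as pd

def rsi(close: pd.Series, period: int = 5) -> pd.Series:
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

def oversold_cross(close: pd.Series, level: float = 20.0) -> pd.Series:
    r = rsi(close, 5)
    return (r < level) & (r.shift(1) >= level)   # True on the bar where RSI(5) crosses below 20
```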

A candlestick pattern can be used on its own in some regards, since only holding it for 5 days limits the probability of it going down too far while keeping a proportional chance at an equal upside. Also, if you had to, you could sell on a close below the candlestick pattern's low itself, or find a candlestick near the support of other candles and sell on a close below one of those. Nevertheless, the only time I like using candlestick patterns on their own for a signal is as a hedge. It can be hard to find a bearish pattern worth trading during a bull market unless you have a process to quickly identify one that will at least underperform and reduce risk exposure, if not actually decline, in a bull market.

There are other worthwhile triggers from your watchlist to activate a trade.

The purpose of a trigger should be one of the following (or more)

1)To increase chances of the trade working in your favor (the signal should have better than 50% chance of an equal move to the upside as to the downside).

2)To increase the risk/reward (the entry should be lower or closer to support).

3)To decrease the time waiting for the trade to work (The entry should be closer to the breakout point).

4)To narrow down a group of stocks to the one that is most likely to provide the best risk adjusted return.

Aside from RSI oversold or a hammer candlestick there are a few more triggers:
-Buy at support or near support or even below support intraday and exit on a close below support (better R/R)
-Pattern within pattern setup. (momentum of intraday pattern may trigger the actual pattern you’re trading to break out… plus this generally means greater consolidation and greater volatility/range expansion usually results)
-Breakout of pattern within pattern (Rather than buying before the intraday pattern goes you wait until it starts to move and this way you may get it closer to the regular pattern’s breakout point)
-Break above prior day low in bottom half of pattern (usually a move above the prior candle leads to some sort of short term price movement that may trigger a breakout)
-Tight multiday range (volatility compression leads to volatility expansion A.K.A. breakouts)
-Near Apex of pattern (shortens your holding period waiting for the pattern to develop and break)

If you are still having trouble deciding which stock to add from a watchlist, there are other factors you may consider:
-Picking the stock from your watchlist with the highest short interest
-Picking the highest reward/risk
-Looking at the underlying options and finding the option with the best reward/risk

-Using the best reward/risk and calculating the amount you have to pay to get that reward/risk for every remaining stock, putting a good-till-canceled limit order there, and then just watching to make sure the pattern is still intact and canceling the trade if it breaks.

Not all of these ideas are easily testable with the available backtesting tools, but you should at least have a process that clearly defines or provides rough guidelines for a method to decide which trade to enter, how many trades you can enter with the same expiry, how much max option exposure (current portfolio), max exposure (by initial purchase), max number of purchases in a single day/week/month… and a checklist to go through in the morning before trading and before placing each trade to help you navigate these decisions.

I’m still working on defining this, and I’ve been trading for >10 years so it isn’t a must… but I’m pretty sure I’d have better results if I was more organized and had a more precise process… At a minimum it’d bother me less if I missed a buy… because this way it won’t be because of lack of organization, but instead just because of the way I chose to define the system.

update: Here’s an outline of a trading system I’m working on to more clearly define decisions and to eliminate uncertainty with regards to decisions.

Generally you should aim for the best R/R on the trade itself if any one trade is clearly above the others. Outside of that, here's a possible priority list:

1)RSI (5) combined with intraday RSI oversold (1m,5m, or 30m) while in the lower half of the pattern and near support.

2)RSI(5) combined with hammer candlestick at support (rejecting breakdown of support)

3)Hammer candlestick at support

4)At pattern support independent of any other signal.

5)RSI (5)

6)Tight day range

7)Hammer candlestick somewhere near support

8)Pattern within pattern

9)Close above prior day

10)Breakout of pattern within pattern

11)Buy the failure of the pattern (breakdown) and wait longer.

12)Breakout of actual pattern but within 3% of breakout point and 10% of low.

I still kind of think that calculating the cost of an option at support at the current day and prioritizing by the risk/reward at that price (and then canceling the remaining orders if you hit your maximum) might be the best approach… but with multiple strike prices and expiry cycles and different targets depending on expiry cycles that can be a little challenging too.

Even having this priority list isn’t really enough to tell you how patient to be waiting for a priority 1 vs whether or not you should just take any one of these triggers. It can tell you if you have 5 stocks you like and only want to buy 2 how to decide which ones to choose, but it is only a small part of the trading system.


Automatically Sorting Stocks By Patterns

One of the false positives I’ve been getting is the “rising wedge” pattern. This pattern represents upwards consolidation which is more of a bearish setup. While one can certainly trade this pattern, I want to separate it from the bullish patterns at a minimum.

I also currently don’t want to trade the bull flags just yet, although I want them to be available should I need them.

However, I may wish to trade “high tight flags” so I want to keep these separate.

Both patterns show at some point significant price rises so if I scan for significant price rises over a particular period of time or combination of periods, I may be able to filter some of these out and be left with mostly triangle and falling wedge patterns which is mostly what I want to trade.

If I do trade bull flag patterns, I want to start with the high tight flags.

High tight flags are technically defined as a doubling in price over 3 months or less, followed by a consolidation period where the stock falls less than 20% from the high. These I would consider trading, and I want to know about them independent of bull flags and rising wedges. While I don't have a 3-month period, I have a 50-day low and high, which I can use to define a stock that is up 80% or more and has fallen less than 20%, PLUS I will also want a consolidation rank 20% higher than the market average at the time to qualify. I set this up for high tight flags.
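As a formula, my reading of that rule looks roughly like this (interpreting "20% higher consolidation rank" as at least 1.2 times the market average):

```python
# Proxy "high tight flag" screen as defined above (my rendering of the rule).
def is_possible_high_tight_flag(price, low_50d, high_50d, consolidation_rank, market_avg_rank):
    up_enough = price >= 1.80 * low_50d            # up 80% or more from the 50-day low
    shallow_pullback = price >= 0.80 * high_50d    # fallen less than 20% from the 50-day high
    tight = consolidation_rank >= 1.20 * market_avg_rank
    return up_enough and shallow_pullback and tight
```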

[image: filters]

The -1, 0 and 1 represent stocks that I omit due to errors or lack of liquidity. While stocks like DRAM and GSAT and PHMD are satisfactory, there are still others that I wouldn't touch. I could probably bump up the consolidation rank requirement to 30% higher than average to filter some of these out, but for now it works.

When done, I should be able to look among those labeled “not high tight flags” and have a separate filter from there to “rising wedges and bull flags” or “remaining stocks” (falling wedges and triangles). This way I can look at the patterns I want and ignore the ones I don’t.

As mentioned before, now I want to work on coming up with a filter for rising wedges and bull flags. The key here is I don’t want falling wedges and triangle patterns showing up as “bull flags and rising wedges”. Even if I have to leave a few rising wedges and bull flags in the remainder, that’s more acceptable to me than having tradable falling wedges be filtered out. I’m okay with keeping bull flags and rising wedges together because it is a challenge to separate them from each other since both consolidate upwards and move from high and low established ranges.

I intend on coming up with a few formulas and changing the details of the formulas to increase the amount of patterns that get filtered until I see a clear falling wedge or triangle show up as a “bull flag and rising wedge” and then tightening the filters so they’re gone. In this case, I probably want to filter out stocks that have no clear pattern but also happen to have been rising since the goal is not to clearly identify rising wedge and bull flags, but instead eliminate stocks from contention from the patterns I am actually looking for.

This should help me narrow down the list and then I can sort that list by consolidation rank and I will be left with triangles and falling wedges that are liquid enough and have the look that I like to trade…

Should I choose to also look through possible rising wedges and bull flags for patterns I can, but the goal is to reduce the number of bad and untradable patterns without removing any trading opportunities. After filtering the high tight flags, the rising wedges and bull flags I want to look for other patterns that I don’t want in the remaining. If I can filter out some bear flags or something that will improve the quality of what I’m looking at.

The perhaps more important reason behind doing this is that I want to know which industries have multiple falling wedges and triangle patterns because these are the industries I want to trade. Of those industries with multiple falling wedges and triangle patterns I want to be able to sort by average consolidation rank of these patterns. Stocks that set up together in a low correlation environment tend to break out together.

While the original OABOT did a pretty good job of compartmentalizing the setups by where they were relative to the highs and lows and adjusting based upon industry, the inability to rapidly split test variations in formulas prevented me from being able to make improvements quickly. The new OABOT will eventually be much better once I go through every layer I want to.

The next step after this will probably be a "risk factor," which will attempt to handicap a stock's level of "risk" by looking at beta, ATR, monthly volatility, change (over 1 quarter, 6 months and 1 year), distance from highs and lows, and industry metrics vs. the market, as well as the individual stock vs. its peers in the sector, industry, market cap size, exchange or index it's trading in, and other variables.

I could also look at classifying stocks within 10% of highs, 10-25% below highs, 25%-40% below highs and stocks near lows and run a count by number of stocks in each category and consolidation rank of each of these categories and other variables to help me better understand what is setting up right now.

 


A Peek Behind The Curtain

So when it comes to picking stocks, OA teaches that there are certain points in the market where the appetite for risk is different. Right now I don't have a great way with the OABOT 2.0 to capture that without making some changes. However, I am experimenting with a way I can modify the score of the screen I ran to make a quick list that approximates some higher-risk names. So far it has worked pretty well, as I will demonstrate.

The original OABOT was designed to mostly focus on where a stock is in relation to its highs or lows (and in some cases what the short interest and fundamentals were), except you could only classify it once, and because the stocks were ranked according to a limited number of factors, the classifications were not all that good. However, what the classifications were good for was an adaptive scoring model that graded stocks near the highs with different criteria once they were classified as "laggards" or "trash," so it did some good things.

The current version of the OABOT needs to be changed to somehow capture a measurement of risk. I suppose I could try regularly altering some of the exact numbers. For example, when I'm looking for low-risk stocks, I'd score absolute low volatility higher than relative volatility. When I'm looking for higher-risk stocks, I'd flip that on its head. Or I could add a reward for stocks that have high total volatility relative to the amount of volatility compression. However, for the time being I have another idea.

In order to capture the higher-risk names, I decided to make a pivot table and then copy and paste the names that hadn't been filtered out by the liquidity and error filters (I'm not real strict about these filters; they're just to eliminate the stuff that has no pattern, or errors in the data that prevent me from scoring them).


Then I eliminated the worst 1,000 or so stocks by consolidation score. No real reason for this amount.


Then from the remaining 3,000 or so, I chose to incorporate a stock's ATR divided by price, or its average % movement over the last 14 days. Since these are usually numbers like 0.014 or 0.022, I multiplied the result by 70,000, which mostly gives a number about equal to or less than the consolidation score (except for a few), and combined it with the total consolidation score to give me a total score that favors the stocks whose relative consolidation still represents a lot of movement.
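Roughly (my own stand-in for the spreadsheet column):

```python
# Movement-weighted score: ATR divided by price (~average daily % move over 14 days),
# scaled by 70,000 and added to the consolidation score.
def movement_weighted_score(consolidation_score, atr_14, price):
    return consolidation_score + (atr_14 / price) * 70_000

print(movement_weighted_score(1200, 0.50, 25.0))   # 0.02 * 70,000 = 1,400 -> 2,600.0
```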


 

Then I imported a few hundred names; it ended up being 276 names (just done randomly). From this list I manually skimmed through looking for setups, because there are still going to be false positives. I came up with 52 names that looked interesting enough to think about over the next week and posted them for viewing in After Hours chat. That's 18.8% of the top 276 names that I liked enough for a closer look. I could have done more, but I'm not really into bull flag patterns just yet and have not tried to filter out rising wedge patterns. For now I don't mind the false positives because I'm looking to just make this a research tool. Should I choose to make this more of a trading system or process, I'll want to be much more strict and will want a list of only a few dozen, and then simply monitor them for some kind of buy signal on the intraday chart, or else some sort of buy point.

Sample of list (full list in AH /w OA chat)

[image: sample of the final list]

 

I'm thinking moving forward I may be able to use various indications of how much a stock has moved, between beta and ATR and maybe some other indicators, to indicate "risk." It isn't perfect yet; I basically just picked the standard arbitrarily. I can easily put in some formulas to create limits and criteria to cap a score, or to remove a name from consideration as a "higher beta" name, and then seek to optimize those numbers as a follow-up. I'd probably also create some sort of metric based upon days since the IPO, which might also be a good method to identify names that represent a higher risk appetite.

Certainly this screen does a pretty good job of capturing the "4's and 5's" of the market that have not yet moved. It didn't take a ton of additional effort. For now I'm not sure whether a lot of change is needed, but I certainly could spend the time, as I'm curious how the names with certain classifications are doing currently and how their consolidation scores have changed as a group.

For now I have just categorized stocks by market cap, indices/exchange averages, sector, industry, etc., and am only looking at the consolidation rank on various timeframes, using the old version when I want to run breadth or look at where the capital is moving.

There's currently a small bias, due to the way I created the score, that favors the lower-beta setups, and this "hack" eliminates it completely by rewarding the stocks that have moved a lot per day on average over the last 14 days but still have a generally high consolidation score.

 

 

 

 
