So when it comes to picking stocks, OA teaches that there are certain points in the market where the appetite for risk is different. Right now I don't have a great way with the OABOT 2.0 to capture that without making some changes. However, I am experimenting with a way to modify the score of the screen I ran to make a quick list that approximates some higher-risk names. So far it has worked pretty well, as I will demonstrate.
The original OABOT was designed mostly to focus on where a stock is in relation to its highs or lows (and in some cases what the short interest and fundamentals were), except you could only classify each stock once, and because stocks were ranked on a limited number of factors, the classifications were not all that good. What the classifications did enable, however, was an adaptive scoring model that graded stocks near their highs with different criteria once they were classified as "laggards" or "trash," so it did some good things.
The current version of the OABOT needs to be changed to somehow capture a measurement of risk. I suppose I could try regularly altering some of the exact numbers. For example, when I'm looking for low-risk stocks, I'd score absolute low volatility higher than relative volatility; when I'm looking for higher-risk stocks, I'd flip that on its head. Or I could add a reward for stocks that have high total volatility relative to the amount of volatility compression. However, for the time being I have another idea.
In order to capture the higher-risk names, I decided to make a pivot table and then copy and paste the names that hadn't been filtered out by the liquidity and error filters. (I'm not really strict about these filters; they're just there to eliminate stocks that have no pattern, or data errors that prevent me from scoring them.)
Then I eliminated the worst 1,000 or so stocks by consolidation score. There's no real reason for this exact amount.
Then, from the remaining 3,000 or so, I chose to incorporate each stock's ATR divided by price, i.e. its average % movement over the last 14 days. Since these are usually numbers like 0.014 or 0.022, I multiplied the result by 70,000, which mostly gives you a number about equal to or less than the consolidation score (with a few exceptions), and combined it with the total consolidation score. This gives a total score that favors stocks whose relative consolidation still represents a lot of movement.
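A rough sketch of that arithmetic in Python. The 70,000 multiplier is the one from the text; the function itself is just an illustrative stand-in for the spreadsheet columns, not the actual sheet formula:

```python
def combined_score(consolidation_score, atr, price, scale=70_000):
    """Favor stocks whose consolidation still represents real movement:
    scale ATR/price (usually ~0.01-0.03) by 70,000 so it lands on the
    same order of magnitude as the consolidation score, then add them.
    The 70,000 multiplier is from the text; the rest is illustrative."""
    atr_pct = atr / price  # e.g. 0.014 or 0.022
    return consolidation_score + atr_pct * scale

# A stock with a consolidation score of 900 and a 2.2% average daily move:
print(round(combined_score(900, atr=2.2, price=100.0)))  # 2440
```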
Then I imported a few hundred names; it ended up being 276 (the cutoff was arbitrary). From this list I manually skimmed for setups, because there are still going to be false positives. I came up with 52 names that looked interesting enough to think about over the next week and posted them for viewing in After Hours chat. That's 18.8% of the top 276 that I liked enough for a closer look. I could have kept more, but I'm not really into bull flag patterns just yet and have not tried to filter out rising wedge patterns. For now I don't mind the false positives because I'm looking to make this a research tool. Should I choose to turn it into more of a trading system or process, I'll want to be much stricter: a list of only a few dozen names that I simply monitor for some kind of buy signal on the intraday chart or some other buy point.
Sample of list (full list in AH /w OA chat)
I'm thinking moving forward I may be able to use various indications of how much a stock has moved, between beta, ATR, and maybe some other indicators, to indicate "risk." It isn't perfect yet; for now I basically just picked the standard arbitrarily. I can easily put in formulas to create limits and criteria that cap a score or remove a stock from consideration among "higher beta" names, and then seek to optimize those numbers as follow-up. I'd probably also create some sort of metric based on days since IPO, which might be another good way to identify names that represent a higher risk appetite.
Certainly this screen does a pretty good job of capturing the "4's and 5's" of the market that have not yet moved, and it didn't take a ton of additional effort. For now I'm not sure whether a lot of change is needed, but I could certainly spend the time, as I'm curious how the names with certain classifications are doing currently and how their consolidation scores would change as a group.
For now I have just categorized stocks by market cap, indices/exchange averages, sector, industry, etc., and am only looking at the consolidation rank on various timeframes, using the old version when I want to run breadth or look at where the capital is moving.
There's currently a small bias, due to the way I created the score, that favors the lower-beta setup. This "hack" eliminates it completely by rewarding stocks that have moved a lot per day on average over the last 14 days but still have a generally high consolidation score.
Taken early this morning, less than an hour after the open. We've sold off since then, and breadth is less overbought on a daily basis now.
Most recent signal, pulled nearly two and a half hours after the open. Still seeing bullish activity even as the Russell reversed its gains and turned negative on the day. It seems like the market is just digesting gains for now, and is still largely in chase/short-squeeze mode on the day.
I'm going to continue doing some work to improve the OABOT 2.0, a.k.a. "Neo". Neo has been removed from the matrix of the market data and now must be trained to "see the code". To start, Neo needs to learn to discern reality from illusion, which means removing the setups that get him in trouble and adding skills like a kung fu master so that he can identify when to strike.
The next optimization step is ATR divided by beta. Beta is a measurement using (I believe) a year's worth of data to measure a stock's typical movement. ATR is the average daily price movement in dollars over the last 14 days, so ATR divided by price gives you an average percentage move. When ATR is small relative to beta, in theory it suggests the last 14 days have been more of a consolidation, with a tighter range, than is typical for the stock. This is common among consolidation patterns.
In theory this should give you stocks moving less in the most recent 14 days than in the prior year.
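As a sketch, the ratio described above might look like this; the 3% cutoff is hypothetical, just to show the kind of threshold that could be tested:

```python
def recent_vs_longterm(atr, price, beta):
    """ATR/price (average daily % move over 14 days) divided by beta,
    a rough proxy for the stock's typical volatility over ~1 year.
    A low ratio suggests the last 14 days were quieter than usual."""
    return (atr / price) / beta

# Hypothetical cutoff: flag anything under 3% as a recent consolidation.
r = recent_vs_longterm(atr=1.0, price=100.0, beta=1.5)
print(r < 0.03)  # True: about a 0.67% daily move for a 1.5-beta stock
```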
So I made major changes to the scoring. What I noticed was that the setups I was getting tended to be a little longer in developing.
The best patterns began to show up after around 300 names. This suggests the punishment knocked off a few good names while the reward let in too many bad setups. In other words, compared to before, the score is too high and not as important as some of the other variables. It may also be that the score matters when ATR/price divided by beta is less than 3%, but much less when it's below 2.5% or 2%. In fact, it's even possible that taking an extreme reading, like less than 0.50%, and punishing stocks for being in that tight of a range would eliminate many of the merger/acquisition types, where the stock has a ceiling equal to the buyout offer and thus hardly moves at all over 14 days. This can be tested later. For now I have to get through the main scoring and worry about that afterwards.
Next test: 1/6th the size, 100/-33. The results definitely improved from 600/-200, but the original 12/-4 was still better. Now, is something like 12/-4 optimal, or is 20/-6 better? Or perhaps 4/-1? For now I'll just return it to 12/-4 and leave a note to test 20/-6 and 4/-1 later. Optimizing this further seems less worthwhile than optimizing other criteria, so that's all the work I need to do for now.
Now we have ATR vs monthly volatility. ATR measures the average move over 14 days; monthly volatility measures 30-day volatility. If 14-day "volatility" (in the form of the average daily move over that timeframe) is less than 30-day volatility, it should represent consolidation, assuming they are calculated in similar ways. This is similar to the first test, where we wanted weekly volatility (7 day) less than ATR/price (14 day), except now it's 14-day less than 30-day, representing a slightly longer horizon of consolidation. I think this is one we'll want to give a larger score than last time. These were the settings prior to changing it.
Last time, with the 7-day vs 14-day test, I may have started with too small a score, even though I thought it was pretty massive. It would have been better to start with 1000/-750 just to rule out a higher score; then I'd have a better sense of direction. Instead I went with 200/-150, and now I have to test both 400/-300 and 100/-75 instead of just one additional setting.
So I'm going to start this one off at 1000/-333. There certainly were some good setups, but also some not so good. In fact, some of the highest scores were filled with false positives of the merger/acquisition variety. That stuck out to me and led to changes I describe later. There were still some decent setups mixed into the top 50, which probably means that if ATR/price is less than something like .25 or even .125 of the monthly volatility, I could substantially punish all such stocks and probably eliminate a huge number of false positives. After those, the setups started to pop up. In the 300-400 range I couldn't see as many quality setups as in the 200-300 range, but because both the score and the punishment were so large, I looked at the 800-1000 range, and the setups improved there. As predicted, the score and/or punishment is too high, and since I started with such a large number, it's directionally clear I have to reduce it. I went with 800/-200 instead of 800/-266, so I reduced the punishment more than proportionally to the reward; I'm pretty confident the punishment for missing these criteria was a little too high, given the quality of setups in the 800-1000 range. I'm also going to change the score for ratios less than .25 to -10,000 (with the regular punishment set to zero) so it's obvious which names were filtered out and I can quickly review them. Then I can adjust this cutoff to .125 or .33 depending on whether it filters out too many good stocks or not enough merger/acquisition stocks. These were the only 12 stocks the score filtered out, so it looks like there's a pretty low chance I'd be missing out on anything as a result.
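For what it's worth, the filtering step I'm describing can be sketched like this. The 800/-200 scores and the .25 floor are the numbers from the text; the function shape is my approximation of what the spreadsheet formula does, not the actual sheet logic:

```python
def atr_vs_monthly_score(atr_pct, monthly_vol,
                         reward=800, punishment=-200, floor=0.25):
    """Score ATR/price relative to monthly volatility. Ratios below
    `floor` (stocks that barely move, often merger/acquisition targets)
    get a huge negative score so they sort to the bottom for quick
    review rather than silently disappearing."""
    ratio = atr_pct / monthly_vol
    if ratio < floor:
        return -10_000
    return reward if ratio < 1.0 else punishment

print(atr_vs_monthly_score(0.005, 0.04))  # ratio 0.125: filtered, -10000
print(atr_vs_monthly_score(0.02, 0.04))   # ratio 0.5: rewarded, 800
print(atr_vs_monthly_score(0.05, 0.04))   # ratio 1.25: punished, -200
```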
I changed the criteria to punish any stock below .46 of the volatility, and I still didn't really miss any while filtering out over two dozen. Occasionally I may miss a name, but it isn't very likely. A score threshold at .50 was another requirement; there will hardly be any stocks between .46 and .50, and if I change it to something like .7 there isn't much room between .7 and .75, so I'd have to change all of the points of measurement together.
I also decided to test all stocks between .45 and .65 to see how many in that range are worth a good score, or whether I should make additional adjustments. They weren't all that great: filled with utilities and defensive names, though with a couple of good setups. I decreased the score given to those between .45 and .65 and eventually moved the cutoff up to .70.
I decided the same problem probably existed for the weekly volatility test, and it did, so I had to remove stocks scoring less than .4. That removed a few non-merger/acquisition names at .4, but they were mostly bond funds and defensive funds, so I wasn't bothered by it.
Because of these changes, I had to slide all of the scores on both metrics. It currently looks like this. It isn't necessarily anything close to optimal yet, just much better than before, so these numbers are likely to change soon. For now I need to take a break.
I'm probably going to have to increase the first criteria's scoring and decrease the second until I'm satisfied. I also may decide to play with the sliders and the exact multipliers of volatility.
It seems that the best stocks may actually sit in a volatility range that is neither too quiet nor too volatile, though generally less is better. I'll have to figure out where that range is to keep making improvements from here.
I was also going to test weekly change divided by beta, but since ATR divided by beta wasn't super helpful, I probably don't need to change this one much. For now I'm going to skip it and circle back later.
Everything remaining that I want to test in the intermediate-term rank relates to the moving averages and other volatility metrics. If a stock has departed significantly from its moving average, it represents a larger move away from a range; the goal will be to find the levels at which we can remove stocks for going too far, and which comparative metrics help with this. Perhaps a stock with a smaller change relative to the 20-day moving average than its monthly volatility, or a smaller daily or weekly change relative to the 20-day, will capture volatility contraction. It's really not clear. A 20-day moving average can also be used as a reference point to manage risk, with a stop on a cross under the average if the stock is above it, so a stock being close to the 20-day may represent a good entry.
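A sketch of what a 20-day moving average proximity check might look like. Comparing the distance to monthly volatility is the idea from the text; the exact form and the 1.0x multiplier are guesses to be tuned, not anything the sheet currently does:

```python
def near_20dma(price, sma20, monthly_vol, max_mult=1.0):
    """True when the stock sits within `max_mult` x monthly volatility
    of its 20-day moving average, i.e. it hasn't strayed farther from
    the average than it normally moves in a month."""
    dist = abs(price - sma20) / sma20
    return dist < max_mult * monthly_vol

print(near_20dma(101.0, 100.0, 0.04))  # 1% away vs 4% monthly vol: True
print(near_20dma(110.0, 100.0, 0.04))  # 10% away: False
```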
I will probably test the long term consolidation (monthly to quarterly change and beta) rank before the short term.
It is unfortunate that there isn't a "quarterly volatility" or "yearly volatility," or more yearly-based volatility metrics aside from beta, let alone 5-year and 10-year volatility metrics. For now those are the limitations I'm working with. If the 20-day moving average works well, maybe I'll add the 50-day, the 20-day vs the 50-day, and so on.
What’s left to do?
1)Adjusting the moving average based rankings
2)Adjusting the long term consolidation rank
3)Adjusting the weightings for the “total consolidation rank” (which combines short term, long term and intermediate term)
4)Possibly adding 50 day moving average into long term consolidation rank.
5)Cycling through a couple more times to fine tune the score.
6)Possibly making more significant changes to eliminate the rising wedge patterns, or reclassifying them so that when I sort through stocks I can exclude them if I'd like.
7)Considering categorizing stocks by setup, using each stock's proximity to highs/lows and its behavior, and making unique scores for changes over the week/day/month as well as changes in trend that signal the pattern or type of setup. Then, if a stock meets the criteria, it gets a separate score just for that "pattern". This is probably the best way to go. I did something similar the first time with the OABOT, so I'll probably copy and paste most of the old method rather than start over.
8)Probably setting up a better "summary"/"cover" tab. I like stocks that set up together, but if I don't have any way to differentiate a bullish consolidation pattern from a bearish one, it won't be as useful.
Right now it looks something like this
It has stocks categorized by market, by sector, and by industry; market cap size is next to be added. This is another reason I'd like to see how certain "types" of stocks are doing, like "stocks near highs," "stocks that IPO'd in the last 2 years," stocks 5-15% off highs, stocks 15-40% off highs, stocks near lows, etc. That'd probably be a good way to get a feel for the market's risk appetite, but I can also use a feature that sums up a list of individual stocks and shows the average score. The last cover tab was more about what is "working" now and where money was flowing; this one is more about consolidation ranking, but I may add other breadth metrics there as well. I'm all ears for new ideas too.
I'll probably keep a duplicate of my spreadsheet, because for the cover tab I need very few if any false positives that might skew the average, whereas as a research tool for finding specific stocks I have no problem looking through several false positives.
I also have a portion of the main tab set up for entering tickers, if you just want to see how a group of, say, the FANG stocks compares to a hand-picked group, market averages, your own GARP index, or whatever. This is how I constructed the S&P, Dow, Russell 2000, etc. market index average scores.
I've started to recover the OABOT and recreate the consolidation score. It turns out that subscribing to Finviz Elite does allow immediate updates of stock data, as long as you are logged in at the time in Internet Explorer (Firefox doesn't work for some reason; it may be a settings problem). I'm not as mad at them as I was for shutting off the export function without warning, but it is a bit of a hassle to log in first, redo things, and pay for the subscription. So far, though, no one else I'm aware of has stepped up and offered nearly as much data to compile, sort through, and export to Excel with formulas preset to adjust the way I want. Originally http://investexcel.net/ had a spreadsheet that would pull all the data from Finviz with the push of a button. I altered it by creating a formula that pulls from the tab holding the data, and then I set up my own formulas. I wouldn't really know how to create a formula to do that automatically with another free or paid data source.
There's a list of 400 stocks that I pulled based on the top "intermediate consolidation score" as calculated by my OABOT spreadsheet. From those I manually scan and pull all the ones I like; I pulled 39 names. I probably could have pulled more if I wanted to grab the ones that share the same "bull flag" pattern, like WABC, but I decided not to.
For some reason I’m having trouble posting the link so for now you have to copy and paste it.
What I will be doing next is going through a process to improve either the quality or the quantity of the results from the 400 I list.
The process will look something like this:
1)First run the scan as currently exists.
2)Change something about the scan’s formula.
3)Compare the results and repeat until the results improve.
I basically want to start with the extremes on each end and move toward the middle until I can't seem to improve it further, or the improvements are no longer noticeable or worth the effort.
Fortunately the new spreadsheet is better designed for quick adjustments. I created a formula tab that allows me to change lots of numbers in the formula without scanning through computer gibberish to do so.
All I have to do is change a number and all the formulas and results change to reflect these changes.
One criterion is that the stock's weekly volatility is less than a multiple of ATR/price (the average daily % movement); meeting it awards points, and failing it applies a punishment (negative points). I can adjust the reward, the multiplier, and the punishment to remove stocks from, or include stocks in, the top 400 that wouldn't otherwise make it. The plan is to change the formulas dramatically and pay particularly close attention to how the stocks at the top of the ranking change: first set the score to zero so no points are awarded, then to 200 or something extreme so it makes up a larger percentage of the total intermediate-term score. By doing this I can better see whether this criterion is worth having at all, whether it should be removed completely, or whether it should be one of the primary indicators. Most likely the answer is somewhere in between, and that's what I'm after.
The ATR measures a stock's movement each day over the course of 14 days to come up with an average. In theory, if the weekly volatility is less, it means the last 7 days were less volatile than the previous 7, but that may not always be the case depending on how the volatility is calculated…
That will be the first criterion I test. For now I have 4 separate scores, so that a stock whose weekly volatility is less than 1x ATR/price is awarded points, and stocks below .75x, .5x, and .25x ATR are given additional cumulative awards.
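In Python, the tiered scheme might look something like this. The 200/-150 per-tier values match the setting I tested for these criteria; treating every failed tier as a punishment is my assumption about how the sheet scores it:

```python
def tiered_weekly_score(weekly_vol, atr_pct,
                        reward=200, punishment=-150,
                        tiers=(1.0, 0.75, 0.5, 0.25)):
    """Cumulative scoring across the four thresholds: each tier the
    stock clears adds the reward, each tier it fails adds the
    punishment (negative points)."""
    score = 0
    for mult in tiers:
        score += reward if weekly_vol < mult * atr_pct else punishment
    return score

# Weekly volatility at 0.6x the 14-day average daily move clears the
# 1.0x and 0.75x tiers but fails the 0.5x and 0.25x tiers:
print(tiered_weekly_score(0.012, 0.02))  # 200 + 200 - 150 - 150 = 100
```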
original 400 stocks
The next stage will be to see if the quality and/or amount of names I pull improves.
For this criterion I went with +200 and -150 for all 4 of these thresholds. The one below, involving ATR/price divided by beta, is a separate metric that I will test another time.
Later I can see which threshold is more important, whether an even or exponential score is best, and whether scoring something like .10x, or less than 2x, would be more appropriate. Is it better to use this to make sure the recent week isn't totally out of control, or to actively seek weeks that are certainly quiet, even if that eliminates some that aren't super quiet by this criterion but are still reasonable? We'll eventually find out. For now we want to know how powerful this SET of criteria is together, and then we can make adjustments to figure out which parts matter more.
So with 200/-150 as the scores I came up with a new list of 400 and a new grouping. I maybe should have gone with 1000/-500, because there was still an overlap of names, but that's okay.
By doing this I actually got 59 names. I'm not sure if that's because a single signal was more dominant, a sign that this indicator should be scored higher, or just evidence that there's value in multiple scans. Certainly there was some overlap.
I think what I'll do is remove the tickers the two lists have in common and compare the ones that differ. After doing this, I find that the 200/-150 score not only produces more names, but I like them better. So the best solution is probably closer to 200/-150 than to 20/-12; we don't know whether the exact solution should be higher than 200/-150 or more like 115/-85. I also see advantages to multiple screens. Nevertheless, for now the 200/-150 stays, and we'll test something like 400/-300 and 100/-75 to see if either improves the results in my opinion. If neither is significantly better, I'll probably just pick one and stop there. I may also circle back to this after I change other variables for the better; for now I'm just trying to get a quick approximation of where the better results lie. Later I can test punishing stocks more than I reward them, or only rewarding with no punishment for stocks that fail. I can also test scoring stocks that meet every criterion less than the previous criterion, or more (for example, if weekly volatility is also less than .25x ATR, do I score 1000/-500 or something like 20/-5?).
As you can see there's a lot to test, so I'm taking a pragmatic, split-testing approach and trying to move quickly from one thing to the next to keep improving this. I wasn't nearly this efficient the first time I built the score, and making changes was more difficult, so I should be able to improve this tool rather quickly by running a similar process.
For now the short-term consolidation score doesn't need any changes, as it serves its purpose of making sure the stock isn't breaking out or down on a daily basis without any false positives (although there may be false negatives that get filtered out that are still good to trade). But I want to keep improving variables in the intermediate term, and then get to the long-term consolidation rank and seek improvements there as well.
I've been busy due to family issues that have come up.
I just finished the formulas for short, medium, and long term. I still have to do the busy work of making changes so that the medium- and long-term formulas apply to every single stock. Some formulas need to remain "locked in" so they always access one specific cell on the formula tab; others need to change to correspond to a completely different stock and set of data.
The long-term consolidation rank will use quarterly and monthly performance and compare them to one another. It will also look at the overall quarterly range and the overall monthly range, as well as total monthly volatility, the monthly change in price, and change/beta.
The consolidation score, at its various levels, has a formula tab that makes it easy to change the formulas and know exactly what you are changing. This makes it possible to tailor the formulas to the individual.
Eventually I also will allow a more complex layer of formulas that actually adjusts the individual stock’s score based upon the strength of the score of the industry that it’s in. If many stocks in the same industry are setting up, these stocks will get a little bit of a boost.
The next step may take some time before I'm able to set it all up, but it involves establishing risk factors and comparing the risk factor of individual stocks to their industry. I may adjust the risk score, as well as the overall score, based on relative strength and industry cycles, and I may even add an input for "market phase" which would apply a bonus to stocks that are "up next".
All of this is time consuming, though, and I want to get a functional tool available, so the first version probably won't be very complete, but it will be complete enough to come up with a list that includes most of the names that are consolidating.
The eventual end point, if I can free up more time to work on it, would be a stock market simulator based on projections and subjective probabilities of certain performance outcomes. Then I could set up a simulation that would take only a minute to run a few thousand possible outcomes over several iterations.
In other words: what happens if we draw a random number for each stock and, based on the probabilities, convert that number to a performance? Then we recalculate the scores, and the probabilities of certain outcomes, for all stocks based on how those performances change each stock's score. Then we interpret the next random number based on these new probabilities, and repeat the process to represent weeks or months of performance. Finally the simulation records the final result, and this entire series of simulations runs thousands of times for a particular spot. This is what the simulation would determine.
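A toy version of that loop, minus the feedback step where group scores update the probabilities each week. All the inputs here are made up; in the real version the probabilities would come from the scoring model:

```python
import random

def simulate(stocks, weeks=12, runs=1000, seed=42):
    """Toy Monte Carlo of the loop described above. `stocks` maps a
    ticker to (p_up, up_move, down_move). The step where scores and
    probabilities are re-derived from each week's outcomes is omitted
    to keep the sketch short."""
    random.seed(seed)
    results = []
    for _ in range(runs):
        value = {t: 1.0 for t in stocks}
        for _ in range(weeks):
            for t, (p_up, up, down) in stocks.items():
                move = up if random.random() < p_up else down
                value[t] *= 1 + move
        results.append(sum(value.values()) / len(value))
    return sum(results) / runs  # average ending value of $1 invested

# Made-up inputs: a 55% weekly up-move chance for AAA, 50% for BBB.
stocks = {"AAA": (0.55, 0.03, -0.02), "BBB": (0.50, 0.05, -0.04)}
print(round(simulate(stocks), 3))
```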
The coiled-up stocks will tend to move more, the stocks in industries making breakouts will also tend to break out, and stocks all setting up together will have a higher probability of breaking out, with more significant amounts of capital leading to larger moves. As certain moves develop, they will influence the development of other setups through their effect on group scores.
The point of all of this is to project the probabilities of a few outcomes for a particular option trade, convert that set of expectations into a risk-adjusted growth rate for your portfolio, and compare the top-performing trades to determine the best risk-adjusted decisions individually.
It should also be possible, based on the tendency of trades to correlate positively or negatively with one another, to construct a more optimal portfolio that accounts for the edge and correlation in a stock's performance, rather than relying on broad theory that starts from the assumption that the market is random.
In other words, rather than simulating the expected outcome of an individual stock, you can set up a mock portfolio with particular weightings and look at the probability of the TOTAL performance. Since certain stocks in the same industry may each be expected to outperform, but when they don't there's a greater chance that none of them do, a portfolio consisting of all of the best stocks may not have the best return on risk. By adjusting the total portfolio and the weightings, and finding stocks with inverse correlations but positive returns on risk, you should be able to normalize your downside without it costing you much. Below is an illustration of how, with the right balance of positive expectation and negative correlation, adding a trade with a lower expectation can actually improve risk and reward, producing a much better return on risk than either trade independently.
I wish I had the time to really set this up well within a few weeks to a month, but unfortunately it will take longer than that. There's plenty of theory out there about reducing correlation through the efficient market hypothesis and appropriate diversification, but that assumes an efficient, mostly random market. There's a greater edge to be had if you can successfully handicap the market, though that doesn't eliminate the benefits of appropriate weightings and of mitigating risk by reducing portfolio correlations.
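The negative-correlation effect can be sketched numerically. The scenario returns below are invented, but they show the point: a lower-expectation trade that moves opposite the first one gives the blend a much better return per unit of risk than either trade alone.

```python
from statistics import mean, stdev

# Two hypothetical trades across four equally likely scenarios.
# Trade B has a lower expected return than A, but tends to move
# opposite to A, so a 50/50 blend earns a far better return per
# unit of risk than either trade on its own.
a = [0.20, -0.10, 0.15, -0.05]   # mean = 5.0%
b = [-0.05, 0.12, -0.02, 0.07]   # mean = 3.0%
blend = [(x + y) / 2 for x, y in zip(a, b)]

for name, r in [("A", a), ("B", b), ("50/50", blend)]:
    print(f"{name}: mean={mean(r):.3f} stdev={stdev(r):.3f} "
          f"return/risk={mean(r) / stdev(r):.2f}")
```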
The short term consolidation score looks at:
Daily Change Relative To ATR
Daily Change Relative To Weekly.
The short term represents any movement of less than 1 week, but really only involves daily moves because of the lack of data between 1-6 days. It also scales the measurements so that the less a stock moves, the more points are added to its score.
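A sketch of that kind of inverse scaling; the specific linear form and the 1x ATR zero-point are my assumptions, not the sheet's actual formula:

```python
def short_term_score(daily_change, atr_pct, max_points=100):
    """Scale points inversely with the size of the daily move relative
    to the 14-day average (ATR/price): the quieter the day, the more
    points, dropping to zero once the move reaches a full average day."""
    ratio = abs(daily_change) / atr_pct
    return max(0.0, max_points * (1 - ratio))

print(short_term_score(0.005, 0.02))  # a quarter of an average day: 75.0
print(short_term_score(0.03, 0.02))   # bigger than an average day: 0.0
```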
The intermediate term consolidation gives us more data.
Here are a few possible considerations to score:
1)Weekly change vs monthly performance
2)Weekly volatility vs monthly volatility
3)ATR (14-day average daily move) ratios…*
4)Weekly volatility vs beta
5)Weekly volatility overall
6)Weekly performance vs beta
7)Absolute weekly performance not too extreme in either direction.
8)Distance from 20 day moving average?
9)Distance from 20 day moving average relative to some volatility metrics**
We can tinker a bit more with how one metric relates to another, in different layers, to get a lot more clarity on whether a stock is undergoing volatility compression or volatility expansion over the last 7-20 days.
*ATR divided by price gives us the average daily percentage move over the last 14 days. While this refers only to daily movement, it's a function of 14-day volatility on a daily basis. If weekly volatility is small relative to ATR/price (14-day volatility), it tends to represent a stock that is contracting in volatility. If ATR is smaller than the monthly volatility, it also tends to represent a stock that has contracted more over the last 14 days than over the last 30. You can also apply a beta-adjusted bonus so that high-beta stocks (typically more volatile over a longer time frame) that also have a low ATR (less volatility over the last 14 days) score well.
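The footnote above boils down to a simple ordering check, something like:

```python
def is_compressing(weekly_vol, atr_pct, monthly_vol):
    """Volatility shrinking at each step: the last 7 days are quieter
    than the last 14, which are quieter than the last 30. All three
    inputs are expressed as fractional moves so they are comparable."""
    return weekly_vol < atr_pct < monthly_vol

print(is_compressing(0.010, 0.015, 0.025))  # True: contracting
print(is_compressing(0.030, 0.015, 0.025))  # False: recent week expanded
```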
**A stock that is closer to its 20-day moving average will tend either to have moved less over the last 20 days OR to be near the "balance" of its range, suggesting it's close to the apex if the chart is fully consolidated. However, a stock with a higher beta, ATR, or monthly or weekly volatility may be able to tolerate a bit more movement away from the 20-day average and still have no confirmed breakout or breakdown. Also, a stock that has moved less in the last week or 14 days than over its 20-day average may be consolidating. The closer a stock is to its 20-day moving average, typically the less it has moved over those 20 days, or at least the more likely it is to have stayed in a proportionally rangebound area or recently regressed to the mean.
Also: a stock in an uptrend with the 20-day moving average under the 50-day may represent short-term weakness and consolidation within an uptrend. A stock in a downtrend with the 20-day above the 50-day may be consolidating and possibly forming a bottom, particularly if the stock is also above the 20-day moving average. This sort of deduction probably won't be used, but it may be a decent idea to pair with consolidation data to find chart patterns. It would be best used as a binary (1 for yes, 0 for no) and as an additional filter (e.g., a table showing stocks that score over 80 and also pass this "patterning from uptrend" filter). Defining a longer-term uptrend can be based on the stock being above its 200-day moving average, the 50-day being above the 200-day, distance from the 52-week low, distance from the 52-week high, or other data. A little off topic here, so I'll scratch it.
A volatility measurement tends to measure a stock's standard deviation of movement over a particular time frame. A stock will move within 1 standard deviation ~68% of the time if its movement is normally distributed; 95% of moves will be contained within 2 standard deviations and 99.7% within 3. Whether stock movements are actually normally distributed has famously been questioned by Nassim Taleb in his books "The Black Swan" and "Fooled by Randomness", but for the purpose of measuring volatility this doesn't matter unless we are going to sell option premium expecting moves within a particular range.
This is a bit different than looking at a stock's average movement, but both relate to movement over a time period.
So as long as we evaluate each stock with the same measurement, comparing ATR (average daily movement over 14 days) to weekly or monthly volatility (standard deviation of movement over the time period) should still give us an idea of how volatility has changed over time.
By looking for contracting volatility over time and comparing volatility across time frames, as well as the magnitude of the change, we can look at 1 day, 1 week, 2 week, and 1 month periods and compare volatility to a stock's long term movement relative to the S&P (beta). That gives us not only a good idea of whether a stock is located within the normal bounds of a range, but also whether those ranges are contracting, to what extent they are contracting, and how the recent "quietness" compares to the historical movement of the stock.
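A rough sketch of the multi-timeframe comparison, using standard deviation of daily returns as the volatility measure. The 5/10/21 day windows are my stand-ins for the 1 week, 2 week, and 1 month periods mentioned above; nothing here is the actual spreadsheet formula.

```python
import statistics

def period_volatility(returns):
    # volatility as the (population) standard deviation of % returns
    return statistics.pstdev(returns)

def contraction_profile(daily_returns):
    """Given a list of daily % returns (most recent last), compare
    volatility over the last 5, 10, and 21 trading days.
    Returns the per-window volatilities, whether each shorter window
    is quieter than the longer one, and the magnitude of contraction."""
    vol = {n: period_volatility(daily_returns[-n:]) for n in (5, 10, 21)}
    contracting = vol[5] < vol[10] < vol[21]
    magnitude = vol[21] / vol[5] if vol[5] else float("inf")
    return vol, contracting, magnitude
```

A stock that swung ±3% a month ago, ±1% two weeks ago, and ±0.2% this week would come back as contracting with a large magnitude.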
The pictures above don’t fairly represent how our scores also consider relative movement, relative volatility, and average daily moves and standard deviations over time periods. Overall, using all the data presented will provide a much greater chance of having consolidation patterns rise to the top than may be represented by the images. I covered this in a different post.
Again, once we gather this data, we can have the spreadsheet sort the data by certain categories. We can then make adjustments to the final individual score based upon the average of the group. A stock showing signs of consolidation in a group with an amazing consolidation score may be worth more than an "ideal" setup in an average or worse group. When you see multiple stocks in an industry setting up, it's less likely to just be randomness or a fake-out. If every single stock in a group is setting up, that represents a large amount of capital preparing to rotate into a theme. One or two signals can be wrong, but say 20 of 25 names in the group setting up? The probability of catching an idea before it makes a big move is greater.
I feel similarly about seasonal data. Seasonal data on an individual stock may be due to a few quarters of earnings at a certain time a few years ago that skewed the data. But seasonal data suggesting the individual stock outperforms the industry, the industry outperforms the sector, and the sector outperforms at a particular time of year is more likely due to a causal relationship, such as capital moving in reaction to or in anticipation of holiday shopping leading to an increase in earnings, plus a fundamental reason why a particular stock has been more effective at capitalizing on this seasonality than its peers in the industry. This is a little off topic here as well.
I don't necessarily need to understand the cause as long as there is evidence the move will continue to correlate with the timeframe, as opposed to a correlation without cause that will be effectively random going forward. If you don't understand a cycle and just select a point, your results will be distributed as if they were random. But if instead there were a cycle with waves that expand and contract in duration, and you were able to identify this cycle and buy closer to the low and sell closer to the high, you would show superior results. Your results would come at the expense of someone on the other side, so in aggregate the data would still show a normal distribution. If you are an outlier to the upside, someone or a group of people will be an outlier to the downside, and overall the data set would sit within a range with little evidence that you were actually able to exploit a tendency. That is the nature of an interconnected system where the wins of one person correlate with the losses of another around a collective average. A little off topic again.
So we basically have a good outline of the different things we can consider when making the intermediate term volatility rank, which I have begun working on. I have finished the short term consolidation rank, unless I decide that something in the intermediate term rank belongs in the short term or I come up with a different idea of what I can add.
If the heart of the OABOT is the consolidation score, then the short term consolidation score is the aorta and other major arteries, the intermediate term is the blood, and the long term is the veins.
Although there aren't too many possible data points that go into a daily score, there's a surprising number of pieces of the formula that can be built by combining these variables. The short term consolidation rank helps punish stocks that are breaking down or breaking out, while rewarding stocks that are moving less today than their recent average movement over various periods of time. When combined with high intermediate scores, this generally allows you to identify stocks that are closer to the apex. Stocks showing an "inside day" tend to rise to the top a little more often, all other things being equal. It also avoids stocks that may be consolidating but either are breaking out or aren't "quiet" today, potentially suggesting they need more sideways work, or that it may be too late to chase.
Before I get into the nuts and bolts of how I am preparing the daily score, I want to show you how this spreadsheet will be “cleaner” than the last one, and thus, much easier to tweak.
I have a “data” tab (where the data can be updated and replaced with new data completely), an “all tab” where all the data is repeated so that formulas can be applied, scores can be added (and re-added if/when I put the summary tab back on), and the “formula tab” where specific numbers and details within the formula can be tweaked.
Rather than looking at some nonsense like:
and then having to interpret the nonsense and substitute certain variables and change the formula itself, I have instead set it up so the “all” tab pulls from both the data and the formula tab where necessary. Even though the formula still looks like crazy nonsense, I don’t have to look at it and deconstruct it.
If I want to make tweaks, I look at this clean version where I can read what’s happening, and I can make adjustments without adjusting the formula entirely. If I want to make structural changes to how the rule is applied or apply a different rule, I can more easily read what’s actually taking place and what cell on the all tab contains this formula.
Now we can resume talking about the short term consolidation rank tab….
For the time being, I broke the short term consolidation rank into 3 subsections. Ultimately I am combining them all, but if I later decide I want a weighted average so that one of them counts more towards the final score, I can do that.
The categories are:
1)Changes as a multiple of the ATR (A stock that moves less than 0.5 times the ATR will score more than a stock that moves more than 2x the ATR)
2)Absolute Changes (A stock that moves somewhere between -1% and 1% will score more than a stock that moves between -4.5% and 4.5%)
3)Changes relative to the weekly performance.
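The three categories above can be sketched as separate sub-scores that sum into a short term score. To be clear, the point values and bucket boundaries below are hypothetical stand-ins (only the 0.5x/2x ATR and 1%/4.5% cutoffs come from the text); inputs are assumed to be in percent terms.

```python
def atr_multiple_score(day_change, atr_pct):
    # 1) change as a multiple of the ATR: under 0.5x scores best,
    #    over 2x is punished (point values are illustrative)
    m = abs(day_change) / atr_pct
    if m < 0.5:
        return 20
    if m < 1.0:
        return 10
    if m > 2.0:
        return -10
    return 0

def absolute_change_score(day_change):
    # 2) absolute daily change: within +/-1% scores best,
    #    within +/-4.5% scores a little, beyond that is punished
    a = abs(day_change)
    if a <= 1.0:
        return 20
    if a <= 4.5:
        return 5
    return -10

def weekly_relative_score(day_change, week_change):
    # 3) daily move relative to the weekly move: quieter day than week
    #    is rewarded, the reverse is punished
    return 10 if abs(day_change) < abs(week_change) else -5

def short_term_score(day_change, week_change, atr_pct):
    return (atr_multiple_score(day_change, atr_pct)
            + absolute_change_score(day_change)
            + weekly_relative_score(day_change, week_change))
```

A stock up 0.3% on the day, 1.2% on the week, with a 1.5% ATR would score the maximum from all three buckets under these assumed values.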
The 3rd variable is the trickiest, but important. To avoid confusion, I chose to classify the weekly performance as "narrow weekly range" for a smaller weekly move in either direction or "wide weekly range" for a larger weekly move in either direction. I also classify it as "positive" for a stock that is up on the week and "negative" for one that is down, so I don't get in trouble with the less-than and greater-than signs when negatives get involved.
This helps me avoid confusion in building the formula, since you may have a positive week but a negative day or vice versa, and writing the formula without separating these gets tricky.
Let's say you want the daily range to be within the weekly move, and a stock moved a positive 1% on the week. If the stock moves less than 1% but more than -1% on the day, it represents consolidation. However, if the weekly number is instead negative, the same formula would say less than -1% but more than 1%, the opposite of what you are trying to express. So setting up separate formulas for positive and negative numbers makes things a bit easier.
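The sign-branching trick described above looks like this outside of Excel (a sketch, with hypothetical function names):

```python
def inside_week(day_change, week_change):
    """True if today's move stays inside the weekly move.
    Branching on the sign of the weekly change keeps the inequalities
    from flipping when negatives get involved, exactly as described
    for the separate positive/negative Excel formulas."""
    if week_change >= 0:
        return -week_change < day_change < week_change
    else:
        return week_change < day_change < -week_change
```

Note that both branches reduce to `abs(day_change) < abs(week_change)`; in a general-purpose language you can collapse the two cases, which is harder to see when you are juggling cell references in a spreadsheet formula.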
If a stock is not moving strongly on the week, and is moving less today than that weekly action, it's probably looking like a better target. This is more true if the daily action is less than half of the weekly move, and even more true if it is less than a quarter of the weekly move.
While it’s possible a stock started the week moving up 10% and then went down 8% to finish only up 1.2% suggesting less volatility than there actually is, if the stock on the day is only moving 0.30%, you are still looking at a stock that is less volatile on the day than over the course of that week. Not all stocks that show less movement on the day than the week will be consolidating, but most stocks consolidating will see less movement on the day than the week.
When backed by weekly (last 7 days) being less than a multiple of the ATR, it begins to paint a picture of stocks that are in narrowing ranges, even if that contraction may not be orderly.
The above graphic shows that inverse head and shoulders patterns, as well as really steep moves with significant declines on the week in falling wedges, may not always pass with a high consolidation score. There may also be some stocks that pass within the thresholds set but don't actually show a pattern.
We can try to give the falling wedges after significant increases in price a boost through comparisons of monthly volatility to weekly volatility, and of movement relative to other stocks.
Capturing the inverse head and shoulders look off a low can actually be done by classifying stocks properly ahead of time, so that the stock is near the low but has started to see recent strength, though there will be a lot of false positives and a few false negatives. For example, if you have a stock that is close to the low, has seen some movement off the low, is above the 20 day moving average with a recent weekly move not surpassing a certain amount, hasn't declined too much, gets a very mild boost for weekly volatility being less than monthly, and the daily volatility data doesn't matter, you may capture a few that have that look. However, I won't be using the OABOT to create a special formula to find these patterns at this time.
We are looking for stocks that have not passed certain thresholds on different timelines and getting a bit more strict about the movement as the time range decreases, and using different criteria and different measurements for volatility and movement.
We are looking for contracting volatility. That doesn’t necessarily imply there will always be a pattern of contracting volatility just because volatility on the month is less than the week and the day is less, and the ranges are contained, and the ATR is small, and the beta adjusted movement on all time frames is low. But it’s an excellent start.
This is also why we must look at the same question in different ways. It’s not enough that a stock passes a filter of not breaking out to the upside, but also that the daily movement relative to absolute terms, recent average volatility on the day, and relative to the week suggests consolidation. As the data suggests more consolidation we continue to improve the value of stocks with a high consolidation score.
A stock may have started its consolidation a while after the cutoff. For example, a high tight flag will see the stock advance for 3 months straight before it consolidates, so both the quarterly and monthly performances may not suggest consolidation. If the consolidation only started midway through the week, the weekly may not even capture it.
Nevertheless, in terms of finding stocks in consolidation patterns with good setups near the apex, I was able to get about a 40% hit rate on stocks with very high consolidation scores the first time around, and that’s when I wasn’t as nuanced with the short term, intermediate term and long term consolidation scores. However, the VIX at that time I believe was under 12 so setups were abundant.
That was also when I started to include separate criteria to evaluate stocks near the high differently than those near the low, but I actually think I can eventually get a higher hit rate than that.
This means in the right conditions I can look through the top 100 stocks and find around 40 that are at least starting to set up or show some kind of consolidation. Not all of these 40 are going to be worth trading, and certainly many of them won't trigger a trade, but they at least capture the "look" I was going for.
I don’t really need it to be that high, even in low volatility environments. However, starting out that high gives me the flexibility to be strict about other data that will represent other things I like about the stock such as being in an industry with a good score as well, or being in an industry that has performed well in the long run that is starting to show weakness AND a high setup score, or focusing on the higher beta stocks or better swings in the past, or looking at a separate filter to evaluate a stock’s “risk” and compare it to the phase of the risk cycle.
Reducing the workload from 7000 stocks to 400 or less nearly automatically is the intention, while also tracking a few things that may be useful in providing a "market overview".
Perhaps the most time effective thing I've done with the OABOT is the consolidation score.
The consolidation score looks to reward stocks that are consolidating with higher scores. It looks to avoid stocks that have already started the breakout, and identify stocks that have shown at least some evidence of consolidation. Not all of the stocks with a good consolidation score will have the look I want, but it certainly significantly helps narrow the field.
By applying a consolidation rank individually, it helps identify and rank the stocks in order of those who have the most coiled up setups.
By slicing stocks into numerous classifications and then looking at each of those classifications, you can apply an average score for each category. You can also look at the total count of stocks in a classification and the number of stocks with a consolidation score in the top 20%, and come up with a percentage out of 100 for each "theme". You can then use Excel to sort industries (or sectors, market cap sizes, risk classifications, etc.) to see which "themes" are setting up.
When I created the OABOT the first time, I complicated things a bit by evaluating stocks differently based upon where they were in relation to their 52 week high, moving averages, and lows, plus performance over the days, weeks, month, quarter, and year, % of shares that float, short float %, and other factors, to label each stock into categories. I still think that's valuable, but in the process I got a little away from the ability to quickly find "themes", in that I combined "what's working now" with relative volume of themes to multiply scores and mashed everything together.
The reboot of OABOT will possibly look to simplify and just keep it contained to identifying
1)Consolidation rank on a short, medium and long term basis, as well as overall consolidation score
2)Consolidation score by “groups” or “themes” (sectors, industry, market cap size, Location, etc)
3)The number of themes that have a certain NUMBER of scores over certain thresholds (For example, what percentage of stocks in each industry have a consolidation score over 80? Over 90?).
If every stock in a theme has, on average, a better score than the rest, OR if there are multiple stocks in the best 20% or best 10% of setups (even if the others are bad and skew the theme's average score lower), these metrics will help confirm that institutional capital is likely preparing to enter the theme, even if it is being a bit more selective about how many names within the theme to enter.
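The per-theme rollup described above (average score, count, and share of names over a threshold) amounts to a group-by. Here is a plain-Python sketch of it; the function name, threshold of 80, and input shape are my assumptions, and in practice this lives in an Excel pivot table rather than code.

```python
from collections import defaultdict

def theme_summary(stocks, threshold=80):
    """stocks: list of (theme, consolidation_score) pairs, where a
    theme could be an industry, sector, market cap bucket, etc.
    Returns per-theme average score, count, and the percentage of
    names scoring over the threshold."""
    groups = defaultdict(list)
    for theme, score in stocks:
        groups[theme].append(score)
    summary = {}
    for theme, scores in groups.items():
        over = sum(1 for s in scores if s > threshold)
        summary[theme] = {
            "avg": sum(scores) / len(scores),
            "count": len(scores),
            "pct_over": 100 * over / len(scores),
        }
    return summary
```

Sorting themes by `pct_over` (or by `avg`) is the code equivalent of sorting the pivot table to see which themes are setting up.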
This time around, I want to make sure I compartmentalize different aspects of what I’m looking at.
- I want to quickly identify the next theme rotation based upon consolidation score only. At some point I might also be able to separately look at theme performance and what’s working well now so I can manually glance at them and see if they have anything in common “beneath the surface” (are they all showing a particular technical chart pattern or candlestick pattern? Is there a fundamental reason or news related reason why certain stocks are outperforming? Is there some idea that might be next based on this knowledge?)
- Then I want to separately look at the top stocks in some of these top performing themes only; as well as separately looking at the top consolidation setups within themes that are working; as well as the top overall consolidation scores independent of industry.
- It will eventually be valuable to also look at the risk rotation of stocks, and the “What risk cycle is working” vs “what risk cycle is next” in each theme in a more anticipatory fashion. Although this is far more time consuming as it requires a layer of classification, analysis of the theme, and then reclassification providing a second layer of classification.
- And then look at the holistic score that factors in everything as an average.
The consolidation score has to look at a lot of different variables and apply them in different ways. Some of the finviz stats can be used in a way that makes them relevant to consolidation:
9)[Distance of price from moving averages]
10)Ratios and multiples of one score to the other.
I don't believe I will be using the distance from moving averages, but it's interesting to think about. A stock near the 20 day moving average relative to the ATR suggests movement in the last 20 days has been either quiet or nearly evenly distributed between the upper and lower parts of the range. Or a stock above its 50 day but below its 20 day may suggest the trend has turned upwards, but the stock has recently begun drifting slightly downward. If backed up by the consolidation score, it might be a wedge or flag pattern, but it's hard to tell from this data alone whether it's a pullback, an orderly bullish consolidation, or a sharp decline FOLLOWED by orderly consolidation (bear flag).
On its own it doesn't necessarily suggest consolidation, but combined with consolidation it suggests recent order, or that the stock may be near the apex of the consolidation. Perhaps sorting stocks over a certain score by distance from the 20 day moving average might be a better approach than factoring it into the score.
Additionally, it’s possible that a stock far above the 20 day moving average that is consolidating substantially in the last week might be bullish while the stock far below the 20 day may not. This is because the one far above that is consolidating may be a bull flag, while one far below is possibly a bear flag. So if you are going to use moving averages, you have to combine them with other data and possibly create separate classifications that combine certain elements.
I will probably do without the moving average data.
You want weekly volatility less than monthly volatility; price change less than 2x ATR but more than -2x ATR; weekly change less than some multiple of the ATR and more than the negative of it. When using ATR you either need to convert the % change to $ terms or convert the ATR to % terms. You also want daily and monthly volatility under a particular amount. Higher weekly and monthly volatility may be acceptable if the beta is higher, so you might adjust for beta.
The ratio of weekly volatility to monthly volatility can influence how much you score the stock. The smaller the weekly and the larger the monthly, the more consolidation that’s taking place in the last week relative to the month.
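Scaling the score by the weekly-to-monthly ratio, rather than a yes/no check, might look like this. The maximum bonus of 25 points and the linear scaling are illustrative assumptions; the real spreadsheet could use any curve.

```python
def ratio_bonus(weekly_vol, monthly_vol, max_bonus=25):
    """The smaller the weekly volatility relative to the monthly,
    the more consolidation has taken place in the last week relative
    to the month, so the bigger the bonus (hypothetical scaling)."""
    if monthly_vol <= 0:
        return 0
    ratio = weekly_vol / monthly_vol
    if ratio >= 1:
        return 0  # no contraction: the week is as wild as the month
    return max_bonus * (1 - ratio)
```

A week at half the month's volatility earns half the maximum bonus; a week as volatile as the month earns nothing.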
The way you create a consolidation score is using IF formulas, and possibly AND and OR formulas, in Excel. For example, one formula in Excel would allow you to say
"IF weekly consolidation is less than monthly consolidation, then add 20 points; if not, take away 5 points." This could be expressed in a format like this:
=IF(AY2<AZ2,20,-5)+[other parts of formula that add or subtract a score]
where AY2 is the cell that lists the weekly consolidation, AZ2 is the cell that lists the monthly consolidation, and the brackets represent a placeholder for other parts of the consolidation score formula.
You continue to apply certain rules that reward or punish, but not eliminate stocks. Once you are done, you can filter out stocks with a total score below a particular threshold.
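For comparison, the same reward-and-punish accumulation followed by a threshold filter can be written outside Excel. Only the first rule (the `=IF(AY2<AZ2,20,-5)` example above) comes from the text; the second rule's 15/-10 points and the cutoff of 20 are made-up placeholders.

```python
def consolidation_score(weekly_vol, monthly_vol, day_change, atr_pct):
    """Accumulate rewards and penalties the way the Excel formula does:
    each rule adds or subtracts points, and nothing is eliminated
    until a final threshold filter."""
    score = 0
    # equivalent of =IF(AY2<AZ2,20,-5): reward weekly consolidation
    # being tighter than monthly
    score += 20 if weekly_vol < monthly_vol else -5
    # hypothetical second rule: daily change within +/-2x the ATR
    score += 15 if abs(day_change) < 2 * atr_pct else -10
    return score

def screen(rows, cutoff=20):
    # rows: (ticker, weekly_vol, monthly_vol, day_change, atr_pct)
    # only now do we filter out stocks below the threshold
    return [r for r in rows if consolidation_score(*r[1:]) >= cutoff]
```

The key design point, mirroring the text: individual rules never eliminate a stock, they only move its score, so one bad reading can be outweighed by several good ones.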
Rewarding monthly, weekly, and daily performance above negative absolute thresholds and below positive absolute thresholds, as well as ATR or beta adjusted thresholds, would be another way to punish stocks that are acting too fast and loose over a particular time frame. You could use quarterly and yearly data as well, but that should be devalued or relegated to a long term consolidation score that contributes little if anything to the overall score.
I welcome any challenge on my ideas or alternative ideas on how best to convert data into a system that rewards stocks that consolidate to try to filter out stocks that are definitely not consolidating, and rewarding stocks that probably are.
A lot of the ATR vs daily score, and the absolute daily scores above a particular negative and below a particular positive, will apply to the short term data. Possibly, if the weekly move is above a particular amount and the daily movement is greater than the weekly, apply a penalty. A lot of the weekly data, weekly vs monthly, and ATR (14 day) can apply to the intermediate term. The long term can be mostly monthly data and beyond.
As many of you know, I made the decision to undergo a difficult project to try to convert some of OA’s teachings into code.
Along the way, I abandoned the project after I could no longer update the data with a single push of a button, due to finviz automatically redirecting the export to a sign-in/register/pay-us-money page.
The main focus was on volatility contraction and scoring a stock's classification according to its own movement.
The next stage was to use industry metrics as well as location, sector & market cap size metrics to apply a bonus to those in an industry or theme that was “working”.
I also applied a bonus to stocks that were undergoing significant volatility contraction.
An additional bonus was layered for “themes” that were all showing significant scores (and likely saw significant volatility contraction as a group as well)
The next stage, which I had begun working on, was reclassifying stocks according to how they performed relative to their industry.
After taking much time away from the market followed by some time trading without this spreadsheet functioning, I have made the decision to find a work around and get back to it.
With a fresh set of eyes comes a new perspective.
The formulas got too taxing to make adjustments, especially if I want to make quick, temporary ones. I had so many formulas on top of formulas that I lost track at times of what the numbers in the formula meant. That made things very difficult to change.
The solution to that will be to create a tab separately for adjusting the amounts in the formula and explaining in English what the formula does. This way you can tweak a number on the formula page and it will automatically adjust without having to re-copy and paste the formulas and adjust the entire formulas.
While determining how stocks in a particular "theme" are doing relative to each other is useful, I need to do more to classify the "risk" level of a stock. Certainly, determining whether a stock is consolidating off its 52 week high or 52 week low makes a difference in the "look" you are going for and the qualities you are looking for. However, I need to better fine tune definitions with separate metrics that score the "risk factor" of a stock, its industry average, and how it relates to its industry average (as well as other areas it may have in common with other stocks), to come up with an overall weighted average that converts to a 1-5 score.
I also want to use earnings data to sort of be able to X out stocks that have earnings coming up in X days.
I want to either possibly run some kind of beta test and invite all the IBCers to try it free for some time, and/or possibly even build it live, explaining the details of what I’m doing and effectively making it open source.
Less importantly, I will no longer need the sort of time sensitive data since I won’t be able to use it as efficiently as I would have hoped.
I had worked on breadth statistics of groups to try to find the stock in a group that wasn’t moving when the group was, and also try to classify where the group was in the “risk cycle”.
This was most useful for stocks in larger industries, but I’m going to focus on being productive and ignore the more taxing parts of the projects for now.
If I happen to finish these steps, the next priority will be to try to "normalize" the scores, from -100 to 100 or -1000 to 1000 or something, and then try to make a very rough price projection.
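Normalizing onto a fixed range like that is a one-liner worth pinning down. A simple min-max rescale, sketched here (the linear approach is my assumption; rank-based or z-score normalization would also fit the stated goal):

```python
def normalize(scores, lo=-100, hi=100):
    """Linearly rescale raw scores onto [lo, hi]: the worst raw score
    maps to lo, the best to hi, everything else in between."""
    mn, mx = min(scores), max(scores)
    if mx == mn:
        return [0 for _ in scores]  # all equal: no spread to rescale
    return [lo + (s - mn) * (hi - lo) / (mx - mn) for s in scores]
```

Swapping `lo`/`hi` to -1000 and 1000 gives the alternative range mentioned above without changing anything else.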
The price projection is not expected to be accurate, but instead useful to help determine the larger rotation of where the capital is headed now, and what those movements will mean for the setups next.
The idea will be to be able to have ONE dimension of metrics based on the here and now, and another dimension of metrics based upon the anticipated movements….
In other words, what happens IF the price action occurs as projected? How does that change the scores of all of the stocks, and what’s the result of the next projection and the next one.
If a stock is consolidating, a 1-3 day price move influenced by capital flows into the industry or related elements could be enough to signal when to expect a breakout, and a second iteration of that, combined with the 5-10 day look, would allow you to look forward and fine tune the "YOLO" trades a bit more accurately, as well as get a better picture of which 5-10 day moves are likely to fizzle and which will do well.
The 1-3 day projection strengthens the 5-10 day projection and narrows the timeframe, as well as helps filter the 2nd and 3rd iterations into, eventually, a 30 day projection.
Without the time sensitive data that influences the 1-3 day projections being available in a timely fashion, in this next version I'm probably going to focus instead on a ~5-10 day projection as the shorter term and ~10-20 days as the longer term, with a couple of iterations forward to give me roughly a monthly projection, maybe a few monthly projections to give me a quarterly one, and a few quarterly projections to give me a yearly one.
The idea of being able to project some of the best OTM LEAP option purchases and quickly identify the run away trending stocks before they happen is very intriguing to me, but I wish I had some of the shorter term “character” movements that help anticipate sort of the breakout and fizzle, break and chase, fake out and break out, and the false moves to fast moves that really better help define what to expect and how to handle it.
Unfortunately it takes a very long time to set up the calculations of how breadth within an industry and volume within an industry compares to the individual stocks and then looking at which moves are lagging on the days and weeks and determining which stocks are working in the cycle and which are next… and it can only really be used for industries with the greatest number of publicly traded stocks; which is about 10% of the market.
So this time around, I'm probably not using it.