Home / Excel Spreadsheet Wizardry

Excel Spreadsheet Wizardry

Groups Setting Up

I set up the main tab in Neo to filter out most rising patterns. Bull flags and rising wedges are mostly gone, set aside into their own classifications should you choose to scan for them. I haven’t really optimized this setting. For now, anything with both weekly AND monthly movement up, or day AND week both up more than 1%, or the stock 15% above its 20 day moving average or 50% above its 50 day moving average, is classified separately for scans. Also, anything that has moved more than 60% from its 50 day low, has declined less than 50% from the 50 day high, and has a medium term consolidation range 1.2x higher than the market is a “possible high tight flag”.
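As a rough sketch, the bucketing above might look like this in Python (the field names and dict layout are hypothetical stand-ins; the real logic lives in the spreadsheet’s formulas):

```python
def classify(row):
    """Bucket one stock (a dict of metrics; changes as fractions, 0.01 = 1%)."""
    # "Possible high tight flag": up 60%+ off the 50-day low, down less than
    # 50% from the 50-day high, medium-term consolidation 1.2x the market's.
    if (row["pct_above_50d_low"] > 0.60
            and row["pct_below_50d_high"] < 0.50
            and row["mid_consol_rank"] > 1.2 * row["market_consol_rank"]):
        return "possible high tight flag"
    # Rising patterns get set aside into their own classification.
    if ((row["week_change"] > 0 and row["month_change"] > 0)
            or (row["day_change"] > 0.01 and row["week_change"] > 0.01)
            or row["pct_above_sma20"] > 0.15
            or row["pct_above_sma50"] > 0.50):
        return "rising patterns (bull flags / rising wedges)"
    return "falling wedges and triangles"
```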

I also filtered out exchange traded funds and any “errors”

The “consolidation score” now only relates to stocks that are consolidating sideways to downwards, so if a bunch of stocks from the same sector have higher scores, they are much more likely to have a similar correlated setup going on.

As the Option Addict teaches, when you have an uncorrelated market and suddenly a group starts correlating that usually signals stocks ready to move as a group.

So for the first time since implementing these new changes I want to test out OABOT 2.0 / Neo’s ability to spot groups setting up.


If a group doesn’t have at least 10 names that qualified, I chose not to bother looking at it. Perhaps 5 is enough in some cases.


Aside from industries, I can also look at sectors and exchanges, and in the future I hope to be able to look at “short interest,” “groups in similar ranges from highs to lows,” and institutional ownership. Some of these I may also add on by group, so you can see the average short interest and average institutional ownership of a group… but that’s later down the road.

There’s a little bias towards stocks that generally do not move, as I have not yet added a system to reward stocks with higher than average movement (I do that on my own when I run scans sometimes). The first number is the consolidation score of all those that qualified. The next numbers are the short term, medium term and long term scores for the entire group, including those that were filtered out (I haven’t changed these yet). The next number is the average RSI of the group, and the final number is the total number of stocks that qualified.

Now we can use pivot tables to find the industries, highlight all of these stocks, and paste them into finviz. I decided to also include all stocks of that group, not just those with a consolidation score over 1000.
see here.

Since I sorted them by industry, you can just page through the list and if you see any groups with several setups that look kind of the same you can focus on that group.

Apparel and Property & Casualty Insurance seem to be setting up together, but I’ve certainly seen more actionable themes. I’d say these groups may need a little bit more time to consolidate, but that doesn’t mean you can’t be early and find some individual names setting up. I could look down the list.

One potential problem I’ll have to fix, I just realized, is that I made a huge punishment to sort out stocks that are likely in an M&A situation (buyout offer) so that they don’t show up in my list. I’ll probably need to instead classify them as “possible M&A,” and if they’re classified that way they won’t show up in the “falling wedges and triangle patterns” classification.

Aside from that you can just scan for individual setups.

I’m going to put in a slight bias for high short interest.

What I did was take short interest times 1,000 and cap the result at 1,500 for anything above 1,500. I added this to the total consolidation score. Here are the top 400 names in the “possible triangles and falling wedges” category. There may be some other false positives, because I haven’t optimized and out of 400 some setups aren’t going to be perfect. Last time I found a record high of 107 names out of 400 that looked interesting enough to put on a watchlist.
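The short-interest bias is simple enough to sketch (the function name is mine; units follow however short interest is expressed in the sheet):

```python
def adjusted_score(consolidation_score, short_interest):
    """Bias toward high short interest: short interest x 1,000,
    capped at 1,500, added to the total consolidation score."""
    return consolidation_score + min(short_interest * 1000, 1500)
```

The cap keeps one crowded short from dominating the rest of the ranking.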

Here’s the list. From this list I like to go through it manually. If I wanted to reduce this step I’d crank up a lot of the scoring methods and only look at the top 50… but in the process of eliminating false positives I’d also eliminate a lot of good setups because they didn’t pass one threshold.

Normally I’d post a condensed version of my own manual scan from that list but today I wanted to illustrate the results and provide a reference so I could check them later this weekend.


Comments »

Automatically Sorting Stocks By Patterns

One of the false positives I’ve been getting is the “rising wedge” pattern. This pattern represents upwards consolidation which is more of a bearish setup. While one can certainly trade this pattern, I want to separate it from the bullish patterns at a minimum.

I also currently don’t want to trade the bull flags just yet, although I want them to be available should I need them.

However, I may wish to trade “high tight flags” so I want to keep these separate.

Both patterns show significant price rises at some point, so if I scan for significant price rises over a particular period of time or combination of periods, I may be able to filter some of these out and be left with mostly triangle and falling wedge patterns, which are mostly what I want to trade.

If I do trade bull flag patterns, I want to start with the high tight flags.

High tight flags are technically defined as a doubling in price over 3 months or less, followed by a consolidation period where the stock falls less than 20% from the high. These I would consider trading, and I want to know about them independent of bull flags and rising wedges. While I don’t have a 3 month period, I have a 50 day low and high, which I can use to define a stock that is up 80% or more and has fallen less than 20%, PLUS I will also want a consolidation rank 20% higher than the market average at the time to qualify. I set this up for high tight flags.
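A sketch of that 50-day proxy for the high tight flag (argument names are mine):

```python
def is_possible_high_tight_flag(price, low_50d, high_50d,
                                consol_rank, market_avg_rank):
    """50-day proxy for the classic high tight flag: up 80%+ off the
    50-day low, fallen less than 20% from the 50-day high, and a
    consolidation rank at least 20% above the market average."""
    up_from_low = price / low_50d - 1
    down_from_high = 1 - price / high_50d
    return (up_from_low >= 0.80
            and down_from_high < 0.20
            and consol_rank >= 1.20 * market_avg_rank)
```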


The -1,0 and 1 represent stocks that I omit due to errors or lack of liquidity. While stocks like DRAM and GSAT and PHMD are satisfactory, there are still others that I wouldn’t touch. I could probably bump up the consolidation rank to 30% higher than average to filter some of these, but for now it works.

When done, I should be able to look among those labeled “not high tight flags” and have a separate filter from there to “rising wedges and bull flags” or “remaining stocks” (falling wedges and triangles). This way I can look at the patterns I want and ignore the ones I don’t.

As mentioned before, now I want to work on coming up with a filter for rising wedges and bull flags. The key here is I don’t want falling wedges and triangle patterns showing up as “bull flags and rising wedges”. Even if I have to leave a few rising wedges and bull flags in the remainder, that’s more acceptable to me than having tradable falling wedges be filtered out. I’m okay with keeping bull flags and rising wedges together because it is a challenge to separate them from each other since both consolidate upwards and move from high and low established ranges.

I intend to come up with a few formulas and change their details to increase the number of patterns that get filtered, until I see a clear falling wedge or triangle show up as a “bull flag and rising wedge,” and then tighten the filters until those are gone. In this case, I probably want to filter out stocks that have no clear pattern but also happen to have been rising, since the goal is not to clearly identify rising wedges and bull flags, but instead to eliminate stocks from contention for the patterns I am actually looking for.

This should help me narrow down the list and then I can sort that list by consolidation rank and I will be left with triangles and falling wedges that are liquid enough and have the look that I like to trade…

Should I choose to also look through possible rising wedges and bull flags for patterns I can, but the goal is to reduce the number of bad and untradable patterns without removing any trading opportunities. After filtering the high tight flags and the rising wedges and bull flags, I want to look for other patterns that I don’t want in the remainder. If I can filter out some bear flags or the like, that will improve the quality of what I’m looking at.

The perhaps more important reason behind doing this is that I want to know which industries have multiple falling wedges and triangle patterns because these are the industries I want to trade. Of those industries with multiple falling wedges and triangle patterns I want to be able to sort by average consolidation rank of these patterns. Stocks that set up together in a low correlation environment tend to break out together.

While the original OABOT did a pretty good job of compartmentalizing the setups by where they were relative to the highs and lows and adjusting based upon industry, the inability to rapidly split test variations in formulas prevented me from making improvements quickly. The new OABOT will eventually be much better once I go through every layer I want to.

The next step after this will probably be “risk factor,” which will attempt to handicap a stock’s level of “risk” by looking at beta, ATR, monthly volatility, change (over 1 quarter, 6 months and 1 year), distance from highs and lows, industry metrics vs the market, the individual stock vs its peers by sector, industry, market cap size, and the exchange or index it’s trading in, and other variables.

I could also look at classifying stocks within 10% of highs, 10-25% below highs, 25-40% below highs, and stocks near lows, and run a count of the number of stocks in each category, the consolidation rank of each of these categories, and other variables to help me better understand what is setting up right now.
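That bucketing and counting is straightforward; a minimal sketch (bucket boundaries from the paragraph above, names mine):

```python
from collections import Counter

def high_distance_bucket(pct_off_high):
    """Bucket a stock by its fractional decline from the highs (0.12 = 12%)."""
    if pct_off_high <= 0.10:
        return "within 10% of highs"
    if pct_off_high <= 0.25:
        return "10-25% below highs"
    if pct_off_high <= 0.40:
        return "25-40% below highs"
    return "near lows"

def bucket_counts(pct_off_highs):
    """Count how many names fall into each category."""
    return Counter(high_distance_bucket(p) for p in pct_off_highs)
```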


Comments »

A Peek Behind The Curtain

So when it comes to picking stocks, OA teaches that there are certain points in the market where the appetite for risk is different. Right now I don’t have a great way with the OABOT 2.0 to capture that without making some changes. However, I am experimenting with a way to modify the score of the screen I ran to make a quick list that approximates some higher risk names. So far it has worked pretty well, as I will demonstrate.

The original OABOT was designed to mostly focus on where a stock is in relationship to its highs or lows (and in some cases what the short interest and fundamentals were), except you could only classify a stock once, and because the stocks were ranked according to a limited number of factors, the classifications were not all that good. However, what the classifications did enable was an adaptive scoring model that graded stocks near the highs with different criteria once they were classified as “laggards” or “trash,” so it did some good things.

The current version of the OABOT needs to be changed to somehow capture a measurement of risk. I suppose I could try regularly altering some of the exact numbers. For example, when I’m looking for low risk stocks, I’d score absolute low volatility higher than relative volatility. When I’m looking for higher risk I’d flip that on its head. Or I could add a reward for stocks that have high total volatility relative to the amount of volatility compression. However, for the time being I have another idea.

In order to capture the higher risk names, I decided to make a pivot table and then copy and paste the names that hadn’t been filtered out by liquidity filters and error filters (I’m not real strict about these filters; they’re just to eliminate the stuff that has no pattern, or errors in data that prevent me from scoring them).


Then I eliminated the worst 1000 or so stocks by consolidation score. No real reason for this amount.


Then, from the remaining 3000 or so, I chose to incorporate a stock’s ATR divided by price, or average % movement in the last 14 days. Since these are usually numbers like 0.014 or 0.022, I multiplied the result by 70,000, which mostly gives you a number about equal to or less than the consolidation score except for a few, and combined it with the total consolidation score to give me a total score that favors the stocks whose relative consolidation still represents a lot of movement.
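In code, that adjustment might look like this (a sketch; the 70,000 multiplier is the scaling described above):

```python
def movement_adjusted_score(consol_score, atr, price):
    """Combine the consolidation score with ATR/price (average daily % move,
    typically ~0.014-0.022) scaled by 70,000, favoring names whose
    consolidation still represents a lot of movement."""
    return consol_score + (atr / price) * 70_000
```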



Then I imported a few hundred names; it ended up being 276 (just done randomly). From this list I manually skimmed through looking for setups, because there are still going to be false positives. I came up with 52 names that looked interesting enough to think about over the next week and posted them for viewing in After Hours chat. That’s 18.8% of the top 276 names that I liked enough for a closer look. I could have done more, but I’m not really into bull flag patterns just yet and have not tried to filter out rising wedge patterns. For now I don’t mind the false positives because I’m looking to just make this a research tool. Should I choose to make this more of a trading system or process, I’ll want to be much more strict: just a list of a few dozen only, and then simply monitor them for some kind of buy signal on the intraday chart or else some sort of buy point.

Sample of list (full list in AH /w OA chat)



I’m thinking moving forward I may be able to use various indications of how much a stock has moved, between beta and ATR and maybe some other indicators, to indicate “risk.” It isn’t perfect yet; I basically just picked the standard arbitrarily. I can easily put in some formula to create limits and criteria to cap a score or remove a name from consideration among “higher beta” names, and then seek to optimize those numbers as follow-up. I’d probably also create some sort of metric based upon days since the IPO, which might also be a good method to identify names that represent a higher risk appetite.

Certainly this screen does a pretty good job of capturing the “4’s and 5’s” of the market that have not yet moved. It didn’t take a ton of additional effort. For now I’m not sure whether a lot of change is needed, but I certainly could spend the time, as I’m curious how the names with certain classifications are doing currently and how their consolidation score would change as a group.

For now I have just categorized stocks by market cap, indices/exchange averages, sector, industry, etc., and am only looking at the consolidation rank on various timeframes, using the old version when I want to run breadth or look at where the capital is moving.

There’s currently a small bias, due to the way I created the score, that favors the lower beta setups, and this “hack” eliminates it by rewarding the stocks that have moved a lot per day on average over the last 14 days but still have a generally high consolidation score.





Comments »

Neo Enters The Matrix

I’m going to continue doing some work to improve the OABOT 2.0, a.k.a. “Neo”. Neo has been removed from the matrix of the market data and now must begin to be trained to “see the code”. To start with, Neo needs to develop the ability to discern reality from illusion, which means being able to remove the setups that get him in trouble, and to add skills like a kung fu master so that he can identify when to strike.


The next optimization step is ATR divided by beta. Beta is a measurement using, I believe, a year’s worth of data to measure average movement. ATR is a measurement of average price movement in dollars over the last 14 days. ATR divided by price gives you an average percentage move. When ATR is small relative to beta, in theory it suggests a tighter recent consolidation and range in the last 14 days than is typical. This is common among consolidation patterns.

In theory this should give you stocks moving less in the most recent 14 days than in the prior year.
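The metric itself is just a ratio; a sketch (function name mine):

```python
def recent_vs_yearly_quiet(atr, price, beta):
    """(ATR / price) / beta: recent 14-day percentage movement relative to
    the roughly year-long movement beta captures. Smaller values suggest
    the last 14 days are quieter than the stock's norm, i.e. consolidation."""
    return (atr / price) / beta
```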


So I made major changes to the scoring. What I noticed was the setups that I was getting tended to be a little bit longer developing.

The best patterns began to show up after around the first 300 names. This suggests the punishment knocked off a few good names and the reward also rewarded too many bad setups. In other words, compared to before, the score is too high and not as important as some of the other variables. However, it may also be that the score is important when ATR/price divided by beta is less than 3%, but much less important when it’s less than 2.5% or 2%. In fact, it’s even possible that taking an extreme reading like less than 0.50% and punishing stocks for being in that tight of a range will eliminate many of the merger/acquisition types, where the stock has a ceiling equal to the buyout offer and thus hardly moves at all over 14 days. This can be tested later. For now I have to get through the main scoring and worry about that later.

Next test: 1/6th the size, 100/-33. I definitely thought the results improved from the 600/-200, but the original 12/-4 was still better. Now, is something like 12/-4 optimal, or is something like 20/-6 better? Or perhaps 4/-1 is more optimal. For now I’ll just return it to 12/-4 and leave a note to test 20/-6 and 4/-1 to try to improve it later. Optimizing this seems less worth the effort than optimizing other criteria, so that’s all the work I need to do for now.

Now we have ATR vs monthly volatility. ATR measures the average move over 14 days. Monthly volatility measures 30 day volatility. If 14 day “volatility” (in the form of the average daily move during that timeframe) is less than 30 day volatility, it should represent consolidation, if they are calculated in similar ways. This is similar to the first test we did, where we wanted weekly volatility (7 day) less than ATR/price (14 day), except now it’s 14 day less than 30 day, representing a little bit longer time horizon of consolidation. I think this is one we’ll want to start with a larger score than we did last time. Prior to changing it, these were the settings.


Last time, with the 7 day / 14 day, I may have started with too small a score, even though I thought the score was pretty massive. It would have been better to start with 1000/-750 just so I could rule out a higher score; then I’d have a better sense of direction. Instead I went with 200/-150, and now I have to test 400/-300 and 100/-75 instead of just one additional test.

So I’m going to start this one off with 1000/-333. There certainly were some good setups, but also some not so good. In fact, some of the highest scores were filled with false positives of the merger/acquisition variety. That stuck out to me, which led to changes I describe later. But there were still some decent setups mixed into the top 50. That probably means that if ATR/price is less than something like .25, or even .125 or lower, I could actually substantially punish all such stocks and probably eliminate a huge number of false positives.

After those, the setups started to pop up. When I searched the 300-400 range I couldn’t really see as many quality setups as in the 200-300 range, but because the score was so large, as was the punishment, I decided to look at the 800-1000 ranked stock range, and the setups improved. As predicted, the score and/or punishment is too high. Since I started with such a large number, directionally it is clear I have to reduce it. I went with 800/-200 instead of 800/-266, so I reduced the punishment more than proportionally to the reward. I’m pretty confident the punishment for not meeting these criteria was a little too high, because of the quality of setups in the 800-1000 range.

I’m also going to change the score for the less-than-.25 names to -10,000 and the punishment to zero, so that it is obvious which ones were filtered out and I can quickly look at them. Then I can adjust this number to .125 or .33 depending on whether it filtered out too many good stocks or not enough merger/acquisition stocks. These were the only 12 stocks that the score filtered out. Looks like there’s a pretty low chance I’d be missing out on anything as a result of this score.
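The scoring described in this test could be sketched as follows (the 800/-200 reward/punishment and the -10,000 flag are the settings discussed above, all still subject to tuning):

```python
def atr_vs_monthly_score(atr_pct, monthly_vol):
    """Score the 14-day vs 30-day volatility comparison. A ratio under .25
    is flagged out hard (-10,000) as a likely merger/acquisition name that
    barely moves; otherwise, 14-day quieter than 30-day earns 800, else -200."""
    ratio = atr_pct / monthly_vol
    if ratio < 0.25:
        return -10_000  # obvious flag so the filtered names are easy to review
    return 800 if ratio < 1.0 else -200
```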

I changed the criteria to punish any stock below .46 of the volatility, and I still didn’t really miss out on any while filtering out over two dozen. Occasionally I may miss a name, but it isn’t very likely. A stock scoring less than .50 was another requirement; there will hardly be any stocks between .46 and .50, and if I change it to something like .7 then there isn’t much room between .7 and .75, so I’d have to change all of the points of measurement.

I also decided to test all stocks between .45 and .65, to see how many stocks in that range are worth giving a good score or whether I should make additional adjustments. They weren’t necessarily all that great, filled with utilities and defensive names, but there were also a couple of good setups. I decreased the score I gave those between .45 and .65 and eventually moved the cutoff up to .70.

I decided the same problem probably existed for the weekly volatility, and it did, so I had to remove those less than .4. It removed a few non merger/acquisition names at .4, but mostly bond funds and defensive funds, so I wasn’t bothered by it.

Because of the changes I made, I had to slide all of the scores on both metrics. It currently looks like this, but it isn’t necessarily anything close to optimal just yet; it’s just much better than before. As such, these numbers are likely to change soon, but for now I need to take a break.

I’m probably going to have to increase the first criterion’s scoring and decrease the second until I’m satisfied. I also may decide to play with the sliders and the exact multipliers of volatility.

It seems that the best stocks may actually be in a range of volatility of not too quiet and not too volatile, but generally less is better. So I’ll have to figure out where that range is to continue to make improvements from here.

I was also going to test weekly change divided by beta, but since ATR divided by beta wasn’t super helpful I probably don’t need to change this one too much so for now I’m going to skip it and circle back to it.

Everything remaining that I want to test in the intermediate term rank is related to the moving averages relative to other volatility metrics. If a stock has departed significantly from the moving average, it represents a larger move away from a range. The goal will be to find the levels at which we can remove stocks for going too far, and which comparative metrics help with this. Perhaps a stock with a smaller change relative to the 20 day moving average than the monthly volatility, or a smaller daily or weekly change relative to the 20 day, will capture volatility contraction. It’s really not clear. A 20 day moving average can also be used as a reference point to manage risk, with a stop on a cross under the average if the stock is above it, so a stock being close to the 20 day may represent a good entry.

I will probably test the long term consolidation (monthly to quarterly change and beta) rank before the short term.

It is unfortunate that there isn’t a “quarterly volatility” or “yearly volatility” or other year-based volatility metrics aside from beta, let alone 5 year and 10 year volatility metrics. But for now those are the limitations I am working with. If the 20 day moving average works well, maybe I’ll use the 50 day, and the 20 day vs the 50 day, and such.

What’s left to do?

1)Adjusting the moving average based rankings

2)Adjusting the long term consolidation rank

3)Adjusting the weightings for the “total consolidation rank” (which combines short term, long term and intermediate term)

4)Possibly adding 50 day moving average into long term consolidation rank.

5)Cycling through a couple more times to fine tune the score.

6)Possibly making more significant changes to eliminate most of the rising wedge patterns, or reclassifying them so that when I sort through stocks I can eliminate them if I’d like.

7)Considering categorizing the stocks based on setup, using the stock’s proximity to highs/lows and its behavior, and making unique scores for the changes over the week/day/month as well as changes in trend that signal the pattern or type of setup. Then, if a stock meets this criteria, having a separate score just for that “pattern”. This is probably the best way to go. I did something similar the first time with the OABOT, but I’ll probably just copy and paste most of the old method rather than start over.

8)Probably setting up a better “summary”/”cover” tab. I like stocks that set up together, but if I don’t have any way to differentiate a bullish consolidation pattern from a bearish one, it won’t be as useful.

Right now it looks something like this


It has stocks categorized by market, by sector, and by industry, and market cap size is next to be added. This is another reason I’d like to see how certain “types” of stocks are doing, like “stocks near highs”, “stocks IPO’d in the last 2 years”, stocks 5-15% off highs, stocks 15-40% off highs, stocks near lows, etc. That’d probably be a good way to get a feel for the market’s risk appetite, but I can also use a feature that sums up a list of individual stocks and lists the average score. The last cover tab was more about what is “working” now and where money was flowing. This one is more about consolidation ranking, but I may add other breadth metrics and such as well. I’m all ears for new ideas too.

I probably will have a duplicate of my spreadsheet, because for the cover tab I need very few if any false positives that might skew the average, whereas as a research tool to find specific stocks I have no problem looking at several false positives.

I also have a portion of the main tab set up for entering tickers, if you just want to see how a group of, say, the FANG stocks compares to a hand-picked group, market averages, your own GARP index, or whatever. This is how I constructed the S&P, Dow, Russell 2000, etc. market index average scores.


Comments »

Optimization Process

I’ve started to recover the OABOT and recreate the consolidation score. It turns out that subscribing to finviz elite does allow immediate updates on stock data as long as you are logged in at the time in Internet Explorer (Firefox doesn’t work for some reason; it may be a settings problem). I’m not as mad at them as I was for shutting off the export function without warning, but it is a bit of a hassle to log in first and redo things, as well as pay for the subscription. So far, though, no one else that I’m aware of has stepped up and offered nearly as much data to compile, sort through, and set up formulas against in a way that I can export to Excel with formulas preset to adjust the way I want. Originally http://investexcel.net/ had a spreadsheet that would pull all the data from finviz with the push of a button. Basically, I altered it by creating a formula that would pull from the tab that had the data, and then I set up my own formulas. I wouldn’t really know how to create a formula to do that automatically with another free or paid data source.

There’s a list of 400 stocks that I pulled based upon the top “intermediate consolidation score” as applied via my OABOT spreadsheet. From those I manually scan and pull all the ones I like. I pulled 39 names. I probably could have pulled more if I wanted to grab the ones that all have the similar “bull flag” pattern like WABC, but I decided not to.

For some reason I’m having trouble posting the link so for now you have to copy and paste it.

What I will be doing next is going through a process attempting to improve either the quality of the results or the quantity of results from the 400 I list.

The process will look something like this:

1)First run the scan as currently exists.

2)Change something about the scan’s formula.

3)Compare the results and repeat until the results improve.

I want to basically start with the extremes on each end and move towards the middle until I can’t seem to improve it or it no longer is noticeable or worth the improvements.

Fortunately the new spreadsheet is better designed for quick adjustments. I created a formula tab that allows me to change lots of numbers in the formula without scanning through computer gibberish to do so.


All I have to do is change a number and all the formulas and results change to reflect these changes.

One criterion is that the stock’s weekly volatility is less than a multiple of ATR/price (the % daily movement average); meeting it awards points, and failing it applies a punishment (negative points). I can adjust the reward, the multiplier, and the punishment to try to remove stocks from the top 400 or include stocks that wouldn’t otherwise make it. The goal is to change the formulas dramatically and pay particularly close attention to the stocks at the top of the ranking to see how they change if I set the number to zero (so no points are rewarded), and then change it to 200 or something extreme (so it makes up a larger percentage of the total intermediate term score). By doing this I’m better able to see whether this criterion is good to have at all, whether it should be removed completely, or whether it should be one of the primary indicators used. Most likely it will be somewhere in between, and that’s what I’m after.

The ATR measures a stock’s daily movement each day over the course of 14 days to come up with an average. In theory if the weekly volatility is less, it means the last 7 days are less volatile than the previous 7 but that may not necessarily always be the case depending on how the volatility is calculated…

That will be the first criterion I test. For now I have 4 separate scores, so that a stock with weekly volatility less than 1x ATR/price is awarded points. Stocks below .75x, .5x, and .25x ATR/price are given additional cumulative awards.
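Those four cumulative awards might be sketched like this (the per-tier reward and punishment are the adjustable knobs; the defaults here are illustrative):

```python
def weekly_vs_atr_score(weekly_vol, atr_pct, reward=200, punish=-150):
    """Award points for each multiple of ATR/price the weekly volatility
    falls under (1x, .75x, .5x, .25x, cumulatively); punish each miss."""
    score = 0
    for mult in (1.0, 0.75, 0.5, 0.25):
        score += reward if weekly_vol < mult * atr_pct else punish
    return score
```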

Initially, as is, I got the 400 stocks narrowed down to these 39


original 400 stocks


The next stage will be to see if the quality and/or amount of names I pull improves.


So here is the first criteria test

For the next test I went +200 and -150 for all 4 of these criteria. The one below, involving ATR/price divided by beta, is a separate metric that I will test another time.

I can see which is more important later and if an even score or exponential score is best and whether scoring something like .10x or less than 2x would be more appropriate. Is it better to use this as something to make sure the recent week isn’t totally out of control or to actively seek weeks that are certainly quiet even if it eliminates some that aren’t super quiet according to this criteria but are still reasonable? We’ll eventually find out. For now we want to know how powerful this SET of criteria is together and then we can make adjustments to figure out which part matters more.

So with 200/-150 as the scores I came up with a new list of 400 and a new grouping. I maybe should have gone with 1000/-500, because there was still an overlap of names, but that’s okay.

By doing this I actually got 59 names. I’m not sure if this is because there was a single signal that was more dominant or if this is a sign that this indicator needs to be scored higher or if there’s just more value in multiple scans. Certainly there was some overlap.


I think what I will do is remove the tickers the lists have in common and compare the ones that are different. After doing this, I find that not only does the 200/-150 score develop more names, but I like them better. So the best solution is probably closer to 200/-150 than 20/-12. We don’t know whether the exact solution should be higher than 200/-150 or more like 115/-85. I also see advantages to multiple screens. Nevertheless, for now the 200/-150 stays, and we will test something like 400/-300 and 100/-75 to see if either of those improves the results in my opinion. If the results are not significantly better I’ll probably just pick one and stop there. The other thing is, I may circle back to this after I change other variables for the better, so for now I’m just trying to get a quick approximation of about where the better results lie. Later I can test punishing the stocks more than I reward, or only rewarding with no punishment for the stocks that fail. I can also test scoring stocks that meet every criterion less than the previous criterion, or more (for example, if weekly volatility is also .25x less than ATR, do I score 1000/-500 or something like 20/-5?).
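Removing the common tickers and comparing what differs between two runs is simple set arithmetic; a sketch:

```python
def compare_runs(old_list, new_list):
    """Split two top-400 ticker lists into (common, only_old, only_new)."""
    old, new = set(old_list), set(new_list)
    return old & new, old - new, new - old
```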

As you can see there's a lot to test, so I'm taking a pragmatic, split-testing-based approach and trying to move quickly on to other things to keep improving this. I wasn't nearly this efficient the first time I built the score, and making changes was more difficult, so I should be able to improve this tool rather quickly by running a similar process.

For now the short term consolidation score doesn't need any changes, as it serves its purpose of making sure the stock isn't breaking out or down on a daily basis without any false positives (although there may be false negatives that get filtered out that are still good to trade). But I want to keep improving different variables in the intermediate term and then get to the long term consolidation rank and seek improvements there as well.


OA Bot Update


I’ve been busy due to family issues that have come up.

I just finished the formulas for short, medium, and long term. I still have to do the busy work of making changes so that the medium and long term formulas apply to every single stock. Some formulas need to remain “locked in” so they always access one specific cell on the formula tab. Other formulas need to change to correspond to a completely different stock and set of data.

The long term consolidation rank will use quarterly and monthly performance and compare them to one another. It will also look at the overall range of the quarterly and monthly data, as well as total monthly volatility, the monthly change in price, and the change relative to beta.

The consolidation score, at its various levels, has a formula tab that makes it easy to change the formulas and know exactly what you are changing. This makes it possible to tailor the formulas to the individual.

Eventually I also will allow a more complex layer of formulas that actually adjusts the individual stock’s score based upon the strength of the score of the industry that it’s in. If many stocks in the same industry are setting up, these stocks will get a little bit of a boost.

The next step may take some time before I am able to find the time to set it all up, but it involves setting up risk factors, comparing the risk factor of individual stocks to their industry. I may adjust the risk score as well as the overall score based upon relative strength and industry cycles, and I may even apply an input for “market phase” which would then apply a bonus to stocks that are “up next”.

All of this is time consuming though, and I want to get a functional tool available, so the first one probably won’t be very complete, but it will be complete enough to come up with a list filled with most of the names that are consolidating.

The sort of end point, if I am able to free up more time to work on it, would be a stock market simulator based upon projections and subjective probabilities of certain performance results. Then I could set up a simulation that takes only a minute to run a few thousand possible outcomes over several iterations.

In other words, what happens if we draw a random number for each stock, then, based upon the probabilities, convert that number to a performance, then recalculate the scores and probabilities of certain outcomes for all stocks based upon how those performances change each stock’s score? …Then what if we interpret the next random number based upon these new probabilities, and repeat the process to represent weeks or months of performance? …Then what if the simulation recorded the final result and ran this entire series of simulations thousands of times for a particular spot? This is what the simulation would determine.
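A minimal sketch of that loop, with invented probabilities and payoffs standing in for the subjective estimates described above:

```python
import random

# Sketch of the iterative simulation idea: draw a random number per stock,
# map it to a performance outcome via (subjective) probabilities, then let the
# realized performance feed back into the next period's odds.
# All probabilities and payoff sizes here are invented for illustration.

def draw_performance(p_breakout, rng):
    """One period: a +10% breakout with probability p_breakout, else quiet drift."""
    return 0.10 if rng.random() < p_breakout else -0.01

def simulate_path(p_breakout, periods, rng):
    total = 1.0
    p = p_breakout
    for _ in range(periods):
        r = draw_performance(p, rng)
        total *= 1 + r
        # Feedback: a breakout this period slightly raises next period's odds,
        # a stand-in for recalculating group/industry scores after each move.
        p = min(0.9, p + 0.05) if r > 0 else max(0.05, p - 0.01)
    return total

rng = random.Random(42)
# Run the whole series thousands of times to build a distribution of outcomes.
results = [simulate_path(0.2, periods=12, rng=rng) for _ in range(5000)]
avg_outcome = sum(results) / len(results)
```

Each path is one "history"; the distribution of `results` is what the simulator would report for a given spot.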

Coiled up stocks will tend to move more; stocks in industries making breakouts will also tend to break out; and stocks all setting up together will have a higher probability of breaking out, with more significant amounts of capital leading to larger moves. As certain moves develop, they will influence the development of other setups through their tendency to influence group scores.

The point of all of this will be to project the probabilities of a few outcomes for a particular option trade, convert that set of expectations into a risk adjusted growth rate of your portfolio, and compare the top performing trades to determine the best risk adjusted decisions individually.

It should also be possible, based upon the tendency of trades to correlate together or inversely, to construct a more optimal portfolio that accounts for the edge and the correlation related to a stock’s performance, rather than broad theory that starts with the assumption that the market is random.

In other words, rather than simulating the expected outcome of an individual stock, you can set up a mock portfolio with a particular weighting and look at the probability of the TOTAL performance. Since certain stocks from the same industry may be expected to outperform, but when they don’t there is a greater chance that neither will, a portfolio consisting of all of the best stocks may not have the best return on risk. By adjusting the total portfolio as well as the weightings, and finding stocks with inverse correlations but positive returns on risk, you should be able to normalize your downside without it costing you. Below is an illustration of how, with the right balance of positive expectation and negative correlation, adding a trade with a lower expectation can actually improve risk and reward, producing a much better return on risk than either trade independently.

[Image: risk correlation illustration]
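The effect described here can also be sketched numerically. The means, standard deviations, and correlation below are invented for illustration, not market estimates:

```python
import math

# Sketch: two trades with positive expectancy but negative correlation can
# combine into a better return-on-risk than either trade alone.

def return_on_risk(mean, std):
    return mean / std

mean_a, std_a = 0.10, 0.20   # the stronger trade
mean_b, std_b = 0.06, 0.20   # the weaker trade
rho = -0.5                   # assumed negative correlation between the two

# 50/50 portfolio of A and B: standard two-asset mean/variance formulas
mean_p = 0.5 * mean_a + 0.5 * mean_b
var_p = (0.5 * std_a) ** 2 + (0.5 * std_b) ** 2 \
        + 2 * 0.5 * 0.5 * rho * std_a * std_b
std_p = math.sqrt(var_p)

ror_a = return_on_risk(mean_a, std_a)  # 0.50
ror_b = return_on_risk(mean_b, std_b)  # 0.30
ror_p = return_on_risk(mean_p, std_p)  # 0.80, better than either alone
```

Even though trade B has a lower expectation, the negative correlation cuts the combined volatility enough that the blended return on risk beats the best single trade.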

I wish I had the time to really set this up well in a few weeks to a month, but unfortunately it will take longer than that. There’s plenty of theory out there about reducing correlation through the efficient market hypothesis and appropriate diversification, but that assumes an efficient, mostly random market. There is a greater edge to be had if you can successfully handicap the market, but that doesn’t entirely eliminate the benefits of appropriate weightings and mitigating risk by reducing portfolio correlations.


Intermediate Term Consolidation

The short term consolidation score looks at:
Daily Change
Daily Change Relative To ATR
Daily Change Relative To Weekly.

The short term represents any movement less than 1 week, but really only involves the daily moves because of the lack of data between 1 and 6 days. It also scales the measurements so that the less a stock moves, the more points get added to the score.

The intermediate term consolidation gives us more data.
Here are a few possible considerations to score:
1) Weekly change vs monthly performance
2) Weekly volatility vs monthly volatility
3) ATR (14-day average daily move) ratios…*
4) Weekly volatility vs beta
5) Weekly volatility overall
6) Weekly performance vs beta
7) Absolute weekly performance not too extreme in either direction
8) Distance from 20 day moving average?
9) Distance from 20 day moving average relative to some volatility metrics**

We can tinker a bit more with how one thing relates to another in different layers to provide a lot more clarity on whether or not the stock is undergoing volatility compression or volatility expansion in the last 7-20 days.
*ATR divided by price gives us a percentage daily move averaged over the last 14 days. While this only refers to daily movement, it’s a function of 14-day volatility on a daily basis. If the weekly volatility is small relative to the ATR (14-day volatility), it tends to represent a stock that is contracting in volatility. If the ATR is smaller than the monthly volatility, it also tends to represent a stock that has contracted more in the last 14 days than over the last 30. You can also apply a beta-adjusted bonus so that high beta stocks (stocks with typically more volatility over a longer time frame) that also have a low ATR (less volatility over the last 14 days) score well.
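As a concrete sketch of the conversion (the price, ATR, and volatility numbers are made up):

```python
# Sketch: converting a dollar ATR to percentage terms so it can be compared
# against percentage-based weekly/monthly volatility figures.
price = 50.0
atr_dollars = 1.25       # 14-day ATR in dollars (made-up)
weekly_vol_pct = 2.0     # weekly volatility in percent (made-up)

atr_pct = atr_dollars / price * 100   # 2.5% average daily move over 14 days

# Weekly volatility below the 14-day ATR suggests contracting volatility.
contracting = weekly_vol_pct < atr_pct
```

Once both figures are in percentage terms, every comparison in the score can be a simple less-than test.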

**A stock that is closer to its 20 day moving average will tend to either have moved less over the last 20 days OR be near the “balance” of its range, suggesting it’s close to the apex if the chart is fully consolidated. However, a stock with a higher beta, ATR, or monthly or weekly volatility may be able to tolerate a little more movement away from the 20 day moving average and still have no confirmed breakout or breakdown. Also, a stock that has moved less in the last week or 14 days than over its 20 day average may be consolidating. The closer a stock is to the 20 day moving average, typically the less it has moved over those 20 days, or at least the more likely it has stayed in a proportionally rangebound area or regressed to the mean recently.

Also: A stock in an uptrend with a 20 day moving average under the 50 day may represent short term weakness and consolidation within an uptrend. A stock in a downtrend with the 20 day above the 50 day may be consolidating and possibly forming a bottom, particularly if the stock is also above the 20 day moving average. This sort of deduction from the data probably won’t be used, but may be a decent idea to pair with consolidation data to find chart patterns. It would best be used as a binary (1 for yes, 0 for no) and as an additional filter (e.g. you can set up a table to show you stocks that score over 80 and also pass this “patterning from uptrend” filter). Defining a stock as being in a longer term uptrend can be based upon the stock being above a 200 day moving average, the 50 day being above the 200, distance from the 52 week low, distance from the 52 week high, or other data. A little off topic here, so I will scratch it.

A volatility measurement tends to measure a stock’s standard deviation of movement over a particular time frame. If movement is normally distributed, a stock will move within 1 standard deviation ~68% of the time, 95% of moves will be contained within 2 standard deviations, and 99.7% within 3. Whether stock movements are actually normally distributed has famously been questioned by Nassim Taleb in “The Black Swan” and “Fooled by Randomness,” but for the purpose of measuring volatility this doesn’t matter unless we are going to sell option premium expecting moves within a particular range.
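A quick empirical check of those containment figures, using simulated normally distributed “moves” (illustrative only, not market data):

```python
import random

# Draw 100,000 simulated moves from a standard normal distribution and count
# how many land within 1, 2, and 3 standard deviations of the mean.
rng = random.Random(0)
n = 100_000
moves = [rng.gauss(0, 1) for _ in range(n)]

within_1 = sum(abs(m) <= 1 for m in moves) / n   # ~0.68
within_2 = sum(abs(m) <= 2 for m in moves) / n   # ~0.95
within_3 = sum(abs(m) <= 3 for m in moves) / n   # ~0.997
```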

Standard deviation is a bit different from a stock’s average movement, but both still relate to movement over a time period.

So as long as we evaluate each stock with the same measurement, comparing ATR (average daily movement over 14 days) to weekly or monthly volatility (standard deviation of movement over the time period) should still give us an idea of how volatility has changed over time.

By looking for contracting volatility over time, comparing volatility across time frames, and measuring the magnitude of the change, we can look at 1 day, 1 week, 2 week, and 1 month periods and compare volatility to a stock’s long term movement relative to the S&P (beta). This gives us a good idea not only of whether a stock is located within the normal bounds of a range, but whether those ranges are contracting, to what extent they are contracting, and how the recent “quietness” compares to the stock’s historical movement.

[Images: volatility over time; volatility compression]

The pictures above don’t fully represent how our scores also consider relative movement, relative volatility, and average daily moves and standard deviations over time periods. Overall, using all the data presented will provide a much greater chance of having consolidation patterns rise to the top than the images suggest. I covered this in a different post.

Again, once we gather this data, we can have the spreadsheet sort it by certain categories. We can then adjust the final individual score based upon the average of the group. A stock showing signs of consolidation in a group with an amazing consolidation score may be worth more than an “ideal” setup in an average group or worse. When you see multiple stocks in an industry setting up, it’s less likely to be randomness or a fake. If every single stock in a group is setting up, that represents a large amount of capital preparing to rotate into a theme. One or two signals can be wrong, but say 20 of 25 names in the group are setting up? The probability of catching an idea before it makes a big move is greater.

I feel similarly about seasonal data. Seasonality in an individual stock may be due to a few quarters of earnings at a certain time of year that skewed the data. But seasonal data suggesting that the individual stock outperforms the industry, the entire industry outperforms the sector, and the sector outperforms at a particular time of year is more likely due to a causal relationship, such as capital moving in reaction to or in anticipation of holiday shopping leading to an increase in earnings, plus a fundamental reason why a particular stock has been more effective at capitalizing on this seasonality than its peers in the industry. This is a little off topic here as well.

I don’t necessarily need to understand the cause as long as there is evidence the move will continue to correlate with the timeframe, as opposed to it having just happened to correlate without cause, in which case it will be effectively random moving forward.

If you don’t understand a cycle and are just selecting a point, your results will be normally distributed as if it were random. But if instead there were a cycle with waves that expand and contract in duration, and you were able to identify this cycle and buy closer to the low and sell closer to the high, then you would be able to show superior results. Your results would come at the expense of someone on the other side, so in aggregate the data would still show a normal distribution. If you are an outlier to the upside, someone or some group will be an outlier to the downside, and overall the data set would sit within a range with little evidence that you were actually able to exploit a tendency. That is the nature of an interconnected system where the wins of one person correlate with the losses of another around a collective average. A little off topic again.

So we basically have a good outline of the different things we can consider when making our intermediate term volatility rank, which I have begun working on. I have finished the short term consolidation rank, unless I decide that something in the intermediate term rank belongs in the short term, or I come up with a different idea of what I can add.


Consolidation Rating: The Heart Of The OABOT 2.0


Perhaps the most time effective thing I’ve done with the OABOT is the consolidation score.

The consolidation score looks to reward stocks that are consolidating with higher scores. It looks to avoid stocks that have already started the breakout, and identify stocks that have shown at least some evidence of consolidation. Not all of the stocks with a good consolidation score will have the look I want, but it certainly significantly helps narrow the field.

[Image: volatility compression]

By applying a consolidation rank individually, it helps identify and rank the stocks in order of those who have the most coiled up setups.

By slicing stocks into numerous classifications and then looking at each of those classifications, you can apply an average score for each category. You can also look at the total count of stocks in a classification and the number of stocks with a consolidation score in the top 20%, and come up with a percentage out of 100 for each “theme.” You can then use Excel to sort industries (or sectors, market cap sizes, risk classifications, etc.) to see which themes are setting up.
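A rough sketch of that per-theme rollup, with a made-up handful of rows standing in for the spreadsheet data (the industry names and scores are placeholders):

```python
# Each row: (industry, consolidation score). Invented data for illustration.
stocks = [
    ("Gold Miners", 85), ("Gold Miners", 90), ("Gold Miners", 40),
    ("Airlines", 55), ("Airlines", 60),
    ("Biotech", 95), ("Biotech", 20),
]

# Crude market-wide top-20% cutoff: the score at the 20th-percentile rank.
scores = sorted((s for _, s in stocks), reverse=True)
top20_cutoff = scores[max(0, len(scores) // 5 - 1)]

# Group scores by industry.
groups = {}
for industry, score in stocks:
    groups.setdefault(industry, []).append(score)

# Per-theme rollup: average score, member count, share of names in the top 20%.
summary = {
    ind: {
        "avg": sum(vals) / len(vals),
        "count": len(vals),
        "pct_top20": 100 * sum(v >= top20_cutoff for v in vals) / len(vals),
    }
    for ind, vals in groups.items()
}
```

Sorting `summary` by either `avg` or `pct_top20` is the code equivalent of sorting the industry table in Excel to see which themes are setting up.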

When I created the OABOT the first time, I complicated things a bit by evaluating stocks differently based upon where they were relative to their 52 week high, moving averages, and lows, plus performance over the day, week, month, quarter, and year, % of shares that float, short float %, and other factors, to label each stock into categories. I still think that’s valuable, but in the process I got a little away from the ability to quickly find “themes,” in that I combined “what’s working now” with the relative volume of themes to multiply scores and merged everything together.

The reboot of the OABOT will possibly look to simplify and just keep it contained to identifying:

1) Consolidation rank on a short, medium, and long term basis, as well as an overall consolidation score

2) Consolidation score by “groups” or “themes” (sector, industry, market cap size, location, etc.)

3) The number of themes that have a certain NUMBER of scores over certain thresholds (for example, what percentage of stocks in each industry have a consolidation score over 80? Over 90?)

If the stocks in a theme have a better average score than the rest of the market, OR if multiple stocks sit in the best 20% or best 10% of setups (even if the others are bad and skew the theme’s average lower), these metrics will help confirm that institutional capital is likely being prepared to enter a theme, even if it is being a bit more selective about how many names within the theme to enter.

This time around, I want to make sure I compartmentalize different aspects of what I’m looking at.

  • I want to quickly identify the next theme rotation based upon consolidation score only. At some point I might also be able to separately look at theme performance and what’s working well now so I can manually glance at them and see if they have anything in common “beneath the surface” (are they all showing a particular technical chart pattern or candlestick pattern? Is there a fundamental reason or news related reason why certain stocks are outperforming? Is there some idea that might be next based on this knowledge?)
  • Then I want to separately look at the top stocks in some of these top performing themes only; as well as separately looking at the top consolidation setups within themes that are working; as well as the top overall consolidation scores independent of industry.
  • It will eventually be valuable to also look at the risk rotation of stocks, and the “What risk cycle is working” vs “what risk cycle is next” in each theme in a more anticipatory fashion. Although this is far more time consuming as it requires a layer of classification, analysis of the theme, and then reclassification providing a second layer of classification.
  • And then look at the holistic score that factors in everything as an average.

The consolidation score has to look at a lot of different variables and apply them in different ways. Several of the Finviz stats can be used in a way that makes them relevant to consolidation:

1) Monthly volatility

2) Weekly volatility

3) Change (daily)

4) Weekly performance

5) Monthly performance

6) Distance of price from moving averages

7) Ratios and multiples of one score to another

I don’t believe I will be using the distance from moving averages, but it’s interesting to think about. A stock near the 20 day moving average relative to the ATR suggests movement in the last 20 days has been either quiet or nearly evenly distributed between the upper and lower range. A stock above its 50 day but below its 20 day may suggest the trend has turned upwards but the stock has recently begun drifting lower… If backed up by the consolidation score, that might be a wedge or flag pattern… but it’s hard to tell from this data alone whether it’s a pullback, an orderly bullish consolidation, or a sharp decline FOLLOWED by orderly consolidation (bear flag).

On its own it doesn’t necessarily suggest consolidation, but combined with consolidation it suggests recent order, or that the stock may be near the apex of the consolidation. Perhaps sorting stocks over a certain score by distance from the 20 day moving average would be a better approach than factoring it into the score.

Additionally, it’s possible that a stock far above the 20 day moving average that has consolidated substantially in the last week might be bullish, while a stock far below the 20 day may not be: the one far above may be forming a bull flag, while the one far below is possibly forming a bear flag. So if you are going to use moving averages, you have to combine them with other data and possibly create separate classifications that combine certain elements.

I will probably do without the moving average data.

You want weekly volatility less than monthly volatility; price change less than 2x ATR but more than -2x ATR; weekly change less than some multiple of ATR and more than the negative of it; and daily and monthly volatility under particular amounts. When using ATR you either need to convert the change from % terms to $ terms or convert the ATR to percentage terms. Higher weekly and monthly volatility may be acceptable if the beta is higher, so you might adjust for beta.

The ratio of weekly volatility to monthly volatility can influence how much you score the stock: the smaller the weekly and the larger the monthly, the more consolidation has taken place in the last week relative to the month.
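One way to express that as a scaled score. This is only a sketch; the 30-point maximum is an arbitrary placeholder, not a tuned value:

```python
# Sketch: scale points by the weekly-to-monthly volatility ratio.
# The quieter the week relative to the month, the more points awarded.
def ratio_points(weekly_vol, monthly_vol, max_points=30):
    if monthly_vol <= 0:
        return 0
    ratio = weekly_vol / monthly_vol
    # ratio < 1 means the last week was quieter than the month overall;
    # award a share of max_points proportional to how much quieter it was.
    return round(max_points * max(0.0, 1.0 - ratio))
```

A week at half the monthly volatility earns half the maximum; a week as loud as the month (or louder) earns nothing.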

The way you create a consolidation score is with IF formulas, possibly combined with AND and OR formulas, in Excel. For example, one formula in Excel would allow you to say:

“IF weekly consolidation is less than monthly consolidation, then add 20 points; if not, take away 5 points.” This could be expressed in a format like this:

=IF(AY2<AZ2,20,-5)+[other parts of formula that add or subtract a score]

where AY2 is the cell that lists the weekly consolidation, AZ2 is the cell that lists the monthly consolidation, and the brackets are just a placeholder for other parts of the consolidation score formula.

You continue to apply certain rules that reward or punish, but not eliminate stocks. Once you are done, you can filter out stocks with a total score below a particular threshold.

Rewarding monthly, weekly, and daily performance that stays above negative absolute thresholds and below positive absolute thresholds, as well as within ATR or beta adjusted thresholds, is another way to punish stocks that are acting too fast and loose over a particular time frame. You could use quarterly and yearly data as well, but that should be devalued or relegated to a long term consolidation score that applies very little, if at all, to the overall score.
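To make the reward/punish chain explicit, here is a rough Python mirror of the Excel IF logic. The threshold multiples and point values are illustrative placeholders, not the tuned numbers:

```python
# Python mirror of the Excel IF-chain scoring idea. All inputs are in
# percentage terms; thresholds and point values are illustrative only.
def consolidation_score(weekly_vol, monthly_vol, daily_change_pct, atr_pct,
                        weekly_change_pct):
    score = 0
    # Like =IF(AY2<AZ2,20,-5): reward a quieter week relative to the month.
    score += 20 if weekly_vol < monthly_vol else -5
    # Daily change within +/- 2x ATR: not breaking out or down today.
    score += 10 if -2 * atr_pct < daily_change_pct < 2 * atr_pct else -10
    # Weekly change within +/- 3x ATR (an arbitrary multiple for illustration).
    score += 10 if -3 * atr_pct < weekly_change_pct < 3 * atr_pct else -10
    return score
```

As in the spreadsheet, each rule rewards or punishes without eliminating anyone; the filtering happens afterwards on the total score.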

I welcome any challenge on my ideas or alternative ideas on how best to convert data into a system that rewards stocks that consolidate to try to filter out stocks that are definitely not consolidating, and rewarding stocks that probably are.

A lot of the ATR vs daily scores, and the absolute daily scores above a particular negative and below a particular positive, will apply to the short term data. Possibly, if the weekly move is above a particular amount, or if the daily movement is greater than the weekly, apply a penalty. A lot of the weekly data, weekly vs monthly, and ATR (14 day) can apply to the intermediate term. The long term can be mostly monthly data and beyond.


Reviving OABOT


As many of you know, I made the decision to undergo a difficult project to try to convert some of OA’s teachings into code.
Along the way, I abandoned the project after I could no longer update the data with a single push of a button, due to Finviz automatically redirecting the export page to a sign-in/register/pay-us-money page.

The main focus was on volatility contraction and scoring a stock’s classification according to its own movement.
The next stage was to use industry metrics, as well as location, sector, and market cap size metrics, to apply a bonus to those in an industry or theme that was “working”.
I also applied a bonus to stocks that were undergoing significant volatility contraction.
An additional bonus was layered on for “themes” that were all showing significant scores (and likely saw significant volatility contraction as a group as well).
The next stage, which I had begun working on, was reclassifying stocks according to how they performed relative to their industry.

After much time away from the market, followed by some time trading without this spreadsheet functioning, I have made the decision to find a workaround and get back to it.

With a fresh set of eyes comes a new perspective.

The formulas got too taxing to adjust, especially when I wanted to make quick, temporary changes. I had so many formulas on top of formulas that I lost track at times of what the numbers in a formula meant. That made things very difficult to change.

The solution will be to create a separate tab for adjusting the amounts in the formulas and explaining in English what each formula does. This way you can tweak a number on the formula page and it will adjust automatically, without having to re-copy and paste and rework entire formulas.
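In code form, the “formula tab” idea amounts to keeping the tunable numbers in one named table so a tweak in one place flows through every formula. The parameter names and values below are illustrative:

```python
# Sketch: a single parameter table (the code equivalent of a formula tab).
# Change a number here and every formula that reads it updates automatically.
PARAMS = {
    "weekly_vs_monthly_reward": 20,    # points added when the week is quieter
    "weekly_vs_monthly_penalty": -5,   # points removed when it is not
    "min_score_threshold": 50,         # filter cutoff applied at the end
}

def weekly_vs_monthly_points(weekly_vol, monthly_vol, params=PARAMS):
    if weekly_vol < monthly_vol:
        return params["weekly_vs_monthly_reward"]
    return params["weekly_vs_monthly_penalty"]
```

In Excel the same effect comes from pointing each formula at a locked cell on the formula tab instead of hard-coding the 20 and -5.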

While determining how stocks in a particular “theme” are doing relative to each other is useful, I need to do more to classify the “risk” level of a stock. Certainly, whether a stock is consolidating off its 52 week high or its 52 week low makes a difference in the “look” you are going for and what qualities you are looking for. However, I need to fine tune the definitions with separate metrics that score the “risk factor” of a stock, its industry average, and how it relates to its industry average (as well as other areas it may have in common with other stocks), to come up with an overall weighted average that converts to a 1-5 score.

I also want to use earnings data to be able to X out stocks that have earnings coming up within X days.

I want to either possibly run some kind of beta test and invite all the IBCers to try it free for some time, and/or possibly even build it live, explaining the details of what I’m doing and effectively making it open source.


Less importantly, I will no longer need the time sensitive data, since I won’t be able to use it as efficiently as I had hoped.
I had worked on breadth statistics of groups to try to find the stock in a group that wasn’t moving when the group was, and also try to classify where the group was in the “risk cycle”.
This was most useful for stocks in larger industries, but I’m going to focus on being productive and ignore the more taxing parts of the projects for now.


If I happen to finish these steps, the next priority will be to try to “normalize” the scores, from -100 to 100 or -1000 to 1000 or something, and then try to make a very rough price projection.
The price projection is not expected to be accurate, but instead useful to help determine the larger rotation of where the capital is headed now, and what those movements will mean for the setups next.
The idea will be to be able to have ONE dimension of metrics based on the here and now, and another dimension of metrics based upon the anticipated movements….

In other words, what happens IF the price action occurs as projected? How does that change the scores of all of the stocks, and what’s the result of the next projection, and the next one?

If a stock is consolidating, a 1-3 day price move influenced by capital flows into the industry or related elements could be enough to give it the boost needed to understand when to expect a breakout. A second iteration of that, combined with the 5-10 day look, would allow you to look forward and fine tune the “YOLO” trades a bit more accurately, as well as get a better picture of which 5-10 day moves are likely to fizzle and which will do well.

The 1-3 day projection strengthens the 5-10 day projection and narrows the timeframe, and it helps filter the 2nd and 3rd iterations into, eventually, a 30 day projection.

Without the time sensitive data that influences the 1-3 day projections being available in this next version, I’m probably going to focus instead on a ~5-10 day projection as the shorter term and ~10-20 days as the longer term, with a couple of iterations forward to give me roughly a monthly projection, a few monthly projections to give me a quarterly one, and a few quarterly projections to give me a yearly one.

The idea of being able to project some of the best OTM LEAP option purchases and quickly identify runaway trending stocks before they happen is very intriguing to me, but I wish I had some of the shorter term “character” movements that help anticipate the breakout and fizzle, break and chase, fake out and break out, and the false moves to fast moves that really better define what to expect and how to handle it.

Unfortunately, it takes a very long time to set up the calculations of how breadth and volume within an industry compare to the individual stocks, then to look at which moves are lagging on the days and weeks, and to determine which stocks are working in the cycle and which are next… and it can only really be used for the industries with the greatest number of publicly traded stocks, which is about 10% of the market.

So this time around, I’m probably not using it.


2014 Goals: Streamlining The Process, Part 2

The analysis and grading system discussed in part one will look something like this, but with more in depth data, calculations, and filtering systems, along with the ability to categorize based upon the data and pull the information to a cover sheet with a clearer summary of the findings.

[Image: a more detailed breakdown of how subcategories will work]

[Image: stocks overview]

At this point, it is mostly just a concept in my head that I have recently started to get on paper, along with a brief draft of one aspect of what it will look like and how it is possible. I don’t even know how far I am going to be able to take this spreadsheet, or how much can really be automated versus how much I will have to set up manually.

I have a number of rough, general pictures in my head of all these spreadsheets and how they will work together. Ideally I press a few buttons (as few as possible, but as many as necessary for quality results) and get a result. Some of those results I will manually check in Finviz, looking over the charts and assessing risk/reward on however many I want. Then I sort those by the best available (ideally with streaming updates) by expectation per equal unit of risk, and combine them in the risk simulator to see how the broad strategy will help me meet my goals, so I know how the pieces fit within the broad strategy.

With that in mind, the spreadsheet will pull a combination of the possible trades into different categories and make suggestions, which I will be able to confirm by adding them to my trading journal for tracking, categorizing, and reviewing my results: what I did, what condition the market was in, and the other variables I want to track over the course of many years to keep finding areas I need to improve, trades I need to avoid, trades I should make more of, and strategies that could use some tweaking. My trading journal will then be able to adjust to reflect the “best fit” match relative to the target allocations, hopefully account for fees, evaluate whether the benefit is worth the cost of rebalancing and/or adding new positions, and provide a suggestion on position sizing, or a look at some simulations of how it would all work assuming the opportunities available reflect reality.

But to go from a conceptual rough draft to an actual concrete set of spreadsheets is a huge leap. One step at a time. The first step will be to really get into the specifics of what I want just one of these spreadsheets to accomplish, and work from there.

Since I have done work on the position sizing/trading system simulator, I have a few adjustments I want to make, likely before year end.

1) Allow the spreadsheet to add in deposits or withdrawals on a per-trade basis.

2) Allow the spreadsheet to adjust the “drawdown killswitch” AFTER subtracting the amount added after each trade, so the drawdown calculation does not include deposits.

3) Allow the grand total gain to subtract all capital added and the starting amount to get a net gain.

4) A binary Yes/No flag for whether the drawdown killswitch is hit, so you can track the percentage chance of hitting the killswitch in X trades or fewer, and potentially simulate the percentage of traders who would meet that result over a given time frame.

5) Consider adding in a “target goal” that functions as a “reverse killswitch,” where trading is halted after the goal is reached.

6) A binary Yes/No flag for “target reached,” so you can estimate the percentage chance of reaching the target in X trades or fewer, given the assumptions you plugged in about the expectations of the system(s).

7) Secondary portfolio targets and dynamically adjusted risk – set it up so that IF a particular portfolio target is reached, the risk percentage per trade and/or the amount deposited/withdrawn is adjusted. This would simulate reaching a goal where you attempt to retire from your job, managing the sudden need to withdraw from the account while being more conservative in your strategy. Alternatively, it could increase the chances of reaching your target: if you get really close, you don’t take unnecessary risk to get there at the cost of greater volatility that isn’t needed if you have traded well.

8) Experiment with correlated trades held simultaneously within the same trading system (the results of one influence the probability of another).

9) If that works, experiment with correlated trades held simultaneously with DIFFERENT expectations (such as a stock trading system combined with an option trading system) and different risk amounts.

10) … ideally some sort of adjustment will have to be made to allow different average holding periods, so the simulation can more accurately reflect the timing of the trades.

11) If you can do 8 and 9, you should be able to set it up for up to 5 simultaneous trades across up to 5 unique “trading systems” within the portfolio, though it may require a lot of busy work.

12) Come up with ideas to test a lot of different assumptions/strategies.

13) Use the spreadsheet to do a lot of testing of those assumptions.
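Before wiring items 1–6 into the spreadsheet itself, the logic can be prototyped as a quick Monte Carlo sketch. To be clear, every number here (win rate, R-multiples, killswitch and target levels, deposit size) is a made-up assumption for illustration, and the function names are mine, not part of any existing tool:

```python
import random

def simulate(trades=100, start=10_000, risk_pct=0.01,
             win_rate=0.45, avg_win=2.0, avg_loss=1.0,
             deposit=100, killswitch_dd=0.20, target=15_000):
    """One simulated run of a trading system. Returns the binary
    (hit_killswitch, hit_target) outcomes from items 4 and 6.
    Deposits are excluded from the drawdown and net-gain math,
    per items 1-3."""
    equity = start
    deposited = 0.0
    peak_gain = 0.0  # peak of gains net of capital added
    for _ in range(trades):
        risk = equity * risk_pct
        if random.random() < win_rate:
            equity += risk * avg_win
        else:
            equity -= risk * avg_loss
        equity += deposit                      # item 1: per-trade deposit
        deposited += deposit
        net_gain = equity - start - deposited  # item 3: net of all capital added
        peak_gain = max(peak_gain, net_gain)
        drawdown = (peak_gain - net_gain) / start  # item 2: deposits excluded
        if drawdown >= killswitch_dd:
            return True, False   # item 4: killswitch hit
        if equity - deposited >= target:
            return False, True   # items 5-6: reverse killswitch, halt on goal
    return False, False

# Run it many times to estimate the percentage chances in items 4 and 6.
runs = 2_000
ks = tgt = 0
for _ in range(runs):
    hit_ks, hit_tgt = simulate()
    ks += hit_ks
    tgt += hit_tgt
print(f"killswitch hit: {ks/runs:.1%}, target reached: {tgt/runs:.1%}")
```

The two booleans returned per run are exactly the Yes/No columns described above: averaging them across runs gives the percentage chance of hitting the killswitch or the target in X trades or fewer under the plugged-in assumptions.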

Update: You can check out the progress of the OA Bot.
