Hi everyone, this is a short tutorial on getting better use out of the GAT engine. It covers two approaches for making the most of this engine.

So far we know that GAT tables have cycles: they hit well over a few consecutive draws, then calm down, then start over again. This is easy to see when testing against 100 or more draws. The reason GATs have cycles is that the signature being analyzed changes over time. When such a change occurs, a well-performing GAT that trapped a good signature needs to recalibrate, and during that transition it stops hitting. Other GAT tables, however, can trap this transition (the signature change) and hit. So at almost any point there are GATs that do hit on every tested draw. The big problem is that we: 1) don't know when a signature change occurs, and 2) don't know which GAT will deliver the next hit. Hence the need to test the engine over a given set of past draws, so we get a better picture of those transitions.

Overall, GATs produce considerably more hits than natural chance whether we test 10, 50, 100 or even 1000 past draws; however, the more draws we test, the more the improvement ratio tends to drop (while still staying above natural chance). This is due to the "cold periods" of each GAT (the cycles). So we really have two distinct ways to utilize the GAT engine.
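To make the cycle idea concrete, here is a small hypothetical sketch (not actual engine output — the hit record and the `moving_average` helper are my own illustration) of how you could smooth a GAT's per-draw hit record so the hot and cold periods stand out:

```python
# Hypothetical sketch: visualising a GAT table's hit cycle over tested draws.
# The hit record below is made-up illustration data, not real engine output.

def moving_average(hits, window=5):
    """Smooth per-draw hit counts so hot/cold cycles stand out."""
    out = []
    for i in range(len(hits)):
        lo = max(0, i - window + 1)
        out.append(sum(hits[lo:i + 1]) / (i + 1 - lo))
    return out

# 1 = the GAT hit that tested draw, 0 = it missed.
hits = [1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0]
trend = moving_average(hits)
# Rising values in `trend` suggest the start of a "hill" (a hot cycle);
# falling values suggest the GAT is entering a cold period.
```

This is essentially what the blue line in the engine's hit chart shows you at a glance.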
The first approach is the officially supported one: test the engine against 100 or more past draws so that the cycles become apparent. Our task is then to pick a GAT based on its overall hit performance over that set of tested draws. With this approach, the most consistent GATs, the ones that hit regularly, will emerge. Our selection should also take into account each GAT's hit cycle (the blue line): we aim to be at the beginning of a "hill" in that line, which means we expect an increase in hits over the next few draws.
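A rough sketch of this "long test" selection logic might look like the following. Everything here is an assumption for illustration: the GAT names, the hit records, and the simple trend measure (second half of the recent draws minus the first half) are mine, not the engine's:

```python
# Hypothetical sketch of the long-test approach: rank GAT tables by total
# hits over the tested draws, then prefer ones whose recent hits are rising
# (the beginning of a "hill"). All names and data are illustrative.

def score(hits_per_draw, recent=10):
    """Return (total hits, recent trend) for one GAT's per-draw hit record."""
    total = sum(hits_per_draw)
    half = recent // 2
    tail = hits_per_draw[-recent:]
    trend = sum(tail[half:]) - sum(tail[:half])  # positive = warming up
    return total, trend

# Made-up hit records for three GAT tables over the same tested draws.
gats = {
    "GAT-A": [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0],
    "GAT-B": [0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0],
    "GAT-C": [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
}
best = max(gats, key=lambda name: score(gats[name]))
```

The tuple comparison means consistency (total hits) decides first, and the recent trend breaks ties, which matches the idea of picking a consistent GAT that is also at the start of a hill.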
An alternative approach skips our dependence on observed cycles entirely. Since GATs perform well short-term over a few draws, then calm down and start over again, we can simply look for GATs that perform admirably over a much smaller set of tested draws, in the range of 10-20 at most. If we run an analysis against only 10-20 tested draws, we get the GATs that are "very hot" at the moment, regardless of whether this is a signature transition point or not. Naturally, the overall improvement ratio will be considerably higher than when testing against e.g. 100 past draws (because of the cold periods present in that case). The logic here is to take advantage of a GAT table's short-term performance. In a test against 10-20 past draws, the GATs that emerge are the "hottest" ones, meaning they are in a top-performing cycle. Each such GAT will enter a cold period after a while, and might not be the best in the long run if tested against e.g. 100 draws, but it provides the most hits right now. So by picking a GAT this way we may trap a good hit quickly, assuming that GAT's hot cycle hasn't ended yet, though we are unable to observe the actual cycles. Insisting on that GAT for more than 5-10 draws (using the run factor), however, is probably a bad idea, because the GAT will enter a cold cycle sooner or later.
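The short-test approach could be sketched like this. Again, this is purely illustrative: the GAT names and hit records are made up, and `MAX_RUN` is my stand-in for the run-factor limit the text suggests:

```python
# Hypothetical sketch of the short-test approach: look only at the last
# 10-20 tested draws, pick the hottest GAT, and cap how long we stay on it.
# All names and numbers are illustrative assumptions.

def hottest(gats, window=15):
    """Pick the GAT with the most hits in the last `window` tested draws."""
    return max(gats, key=lambda name: sum(gats[name][-window:]))

MAX_RUN = 5  # per the text: don't insist on a hot GAT for more than ~5-10 draws

gats = {
    "GAT-A": [0] * 10 + [1, 1, 0, 1, 1],  # hot right now
    "GAT-B": [1] * 10 + [0, 0, 1, 0, 0],  # better long-term, but cooling off
}
pick = hottest(gats, window=5)
```

Note how the short window deliberately favours GAT-A even though GAT-B has more hits overall; that is the trade-off this approach makes, which is why the run limit matters.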
Each approach has its advantages and disadvantages. The latter is riskier, but it can provide good hits more quickly if the GAT picked is the "right one", meaning it hadn't entered a cold cycle at the moment we decided to use it. The former is more consistent, because we have a good overall understanding of that GAT table's performance, so sooner or later it will deliver the hit we expect from it.
In both approaches, we should follow the 100/X rule as described in various posts.
G.A.T. Engine general discussion