Pregame’s grading system handles standard picks in the main sports wonderfully: full-game sides and totals in NFL, CFB, NBA, CBB, MLB, and NHL.
But the grading system doesn’t handle alternative bet types as well, such as:
* Parlays
* Teasers
* Team Totals
* 1st Half or 1st 5 innings (sides or totals)
When the standard system is used for these bets, sometimes the auto-grading is incorrect.
* When the auto-grading is incorrect, sometimes an actual loss is shown as a win in the pick result grid.
For example … a 2-team parlay where the leg loaded into the standard system wins, but the second leg, documented in the pick analysis, loses. The auto-system only recognizes the first leg (thus a loss is shown as a win).
* When the auto-grading is incorrect, sometimes an actual win is shown as a loss in the pick result grid.
For example … the pick is loaded as a full-game under, but the documented pick analysis clarifies that it is a FIRST HALF under. The game then goes under in the first half but over for the full game (thus a win is shown as a loss).
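To make the limitation concrete, here is a minimal hypothetical sketch (not Pregame’s actual grading code, just an illustration of the behavior described in the two examples above): the standard grader only sees the single full-game leg loaded into the system, while the real bet lives in the documented analysis.

```python
# Hypothetical sketch only -- not Pregame's actual grading code.
# The standard grader sees just the one "loaded" full-game leg; the real bet
# (second parlay leg, first-half total, etc.) exists only in the written analysis.

def auto_grade(loaded_leg_result: str) -> str:
    """Standard system: grades on the single loaded leg alone."""
    return "WIN" if loaded_leg_result == "win" else "LOSS"

def true_grade(documented_leg_results: list[str]) -> str:
    """Actual result: the documented bet wins only if every leg wins."""
    return "WIN" if all(r == "win" for r in documented_leg_results) else "LOSS"

# Case 1: 2-team parlay -- loaded leg wins, second (documented) leg loses.
print(auto_grade("win"))            # WIN  <- what the result grid shows
print(true_grade(["win", "loss"]))  # LOSS <- the actual parlay result

# Case 2: FIRST HALF under loaded as a full-game under -- the first half
# goes under (actual win) but the full game goes over (grid shows a loss).
print(auto_grade("loss"))           # LOSS <- grid graded the full game
print(true_grade(["win"]))          # WIN  <- documented first-half pick won
```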
It’s important to understand clearly that in every case the actual pick is documented (with a time stamp). In fact, this public documentation is precisely how the recent posts in this thread identified these alternative pick issues.
The public documentation on these picks is correct.
The records promoted by the Pros are always 100% accurate.
The auto-grading is sometimes wrong (sometimes in a way that helps the Pro, sometimes in a way that hurts the Pro).
How often is the auto-grade incorrect due to alternative pick types?
Pregame has over 120,000 picks from Pros graded in our system. Even with what appears to be an exhaustive review, far fewer than 120 instances have been presented. If the count ever reached 120, that would mean 0.1% of pick grades have a potential problem (said another way, over 99.9% are correct).
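For anyone who wants to check that arithmetic, here is a trivial sketch using the round figures above (the 120 is a hypothetical ceiling, not a confirmed count):

```python
# Rough arithmetic with the round figures cited above.
total_graded = 120_000   # picks from Pros graded in the system
flagged_cap = 120        # hypothetical ceiling on mis-graded instances

error_rate = flagged_cap / total_graded
print(f"{error_rate:.1%} of grades potentially affected")   # 0.1%
print(f"{1 - error_rate:.1%} of grades correct")            # 99.9%
```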
That fact alone, that fewer than 0.1% of picks are affected, far too few to attempt any impactful deception, should make it obvious that there has been no attempt at deception. Add in the fact that all the information used to identify these issues was publicly provided by Pregame; the instances being posted are obvious to anyone who looks. That’s not an attempt at deception, that’s a known limitation of the grading system. Then add in the fact that sometimes the auto-grading turned a winner into a loser, resulting in a negative impact for the Pro, which would make no sense if the mis-grades were purposeful. Lastly, add in that the auto mis-grades that in theory improved their records were never promoted by the Pros. The results the Pros promote get many times more attention than the auto-grades in the grid archives. If anyone was trying to deceive, why would they promote the correct records? The answer is simple … they wouldn’t. With a basic understanding of the facts, any reasonable person sees this for exactly what it is: a grading system that doesn’t handle alternative picks well.
Beyond swatting away the ludicrous implications of attempted deception like fleas, the important takeaway is that this bug should be dealt with. As great as >99.9% accuracy is, it’s best to eliminate every error if possible. In addition, the verified auto mis-grades have been manually corrected (as will any future verified instances).
By end of day Tuesday I’ll have my solution for this posted right here in this thread.
Now it’s worth a few moments to consider just how feebly disingenuous these forum watchers have been.
#1) Unless I missed it, not once did they point out a single instance in which an actual win was auto-graded as a loss (i.e., a case where a Pro’s record was actually better than the grid results). There are many such instances. Which means they checked game after game … thousands of them … and gleefully posted the cases in which an actual loss was auto-graded as a win (i.e., cases in which the grid record made the Pros look better than they actually were). Which means they purposely withheld the cases that showed the Pros to be honest, while making all the noise they could about the instances that called the Pros’ honesty into question. If there was any doubt about their agenda, it is crystal clear now.
#2) Sherlock Holmes referenced “the dog that didn’t bark.” Consider that we now know that the entire Pregame Pros pick log has been combed over one pick at a time. There are numerous ways a pick log could hypothetically be falsified.
For example:
* Losers being deleted from the system at a later date
* Lines being changed after the fact (leaving the pick, but turning winners into losers)
A software system could easily be rigged to delete, for example, 1 out of 10 losers from the system after 100 days have passed. Or a software system could easily be rigged to improve the spread on 1 out of 10 losers, turning a losing pick into a winner (once again, long enough after game day that people wouldn’t be likely to notice).
It’s funny to think about the wasted hours spent squinting at the screen, hoping to find proof of systematic dishonesty.
So they were forced into a convoluted attempt to make an innocent auto-grading limitation into something it is clearly not.
No dishonesty could be found. The dog didn’t bark. Which means what they have actually done is validate beyond any reasonable doubt that Pregame’s pick archive is at least 99.9% accurate (leaving room for human and system error). Think about it … when haters who were just exposed as being willing to lie if it made Pregame look bad can’t come up with any contrary proof, you know our honesty is beyond debate.
So all that time wasn’t a waste … there’s nothing we could have done that would have more effectively proven the trustworthiness of Pregame’s records. Thank you.