Fezzik said:
This one confuses me. What are they over the last 1,000 data points? Who cares about 21 data points?
This is a great question that I was hoping someone would ask. Of course, it is easy to answer simply by changing the search-from date in the SDQL text. How about this:
H and line<=-140 and p:margin=1 and SG=1 and rest=0 and season>=2007
Reading the query: H = home game; line<=-140 = favored at -140 or more; p:margin=1 = won the previous game by exactly one run; SG=1 = first game of the series; rest=0 = played yesterday, with no day off; season>=2007 = the 2007 season onward.
Over the past seven-plus seasons, MLB teams are 118-115 (50.6%) as a favorite of -140 or more in a home series opener when they are off a one-run win the day before. These teams have carried an average line of -178, yet they have won only about 50% of the time. The ROI from betting against these teams has been a staggering +28.8%.
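As a sanity check on that ROI figure, here is a minimal sketch of the flat-bet arithmetic. The 118-115 record comes from the query above; the +158 average underdog price is an assumption for illustration (roughly the -178 favorite line less typical vig), not a number from the query output.

```python
def fade_roi(fav_wins, fav_losses, dog_price):
    """ROI per unit staked when flat-betting the underdog at American odds dog_price."""
    bets = fav_wins + fav_losses
    # The underdog bet cashes exactly when the favorite loses.
    profit = fav_losses * (dog_price / 100.0) - fav_wins * 1.0
    return profit / bets

# 118-115 favorite record from the query; the +158 underdog price is assumed.
roi = fade_roi(118, 115, 158)
print(f"ROI fading these favorites: {roi:+.1%}")
```

With these assumed inputs the sketch lands near +27%, the same ballpark as the +28.8% quoted above; the exact figure depends on the actual underdog price in each game, which the average line alone cannot recover.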
Perhaps I'm trying to see things that aren't there, but I can imagine a team being overconfident as a big favorite in a home series opener off a one-run win. Certainly, a close game is more physically taxing on the relievers: with a three-run lead they just have to "throw strikes," but in a close game a reliever can't afford to make a mistake on a single pitch.
I'm not stating with any certainty that this is not just random noise, though there are statistical tests indicating a high probability that it is not. However, just because something was a good play in the past does not mean it will be a good play in the future. One thing I will say with confidence: I would not bet on a favorite in this spot.
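For readers curious what such a statistical test might look like, here is a minimal sketch of a one-proportion z-test. It assumes the -178 average line implies a market win probability of 178/278, about 64%; this ignores vig, which would only lower the implied probability slightly, and it treats the average line as if every game were priced at it.

```python
import math

def one_prop_ztest(wins, games, p0):
    """Two-sided one-proportion z-test of an observed win rate against null rate p0."""
    p_hat = wins / games
    se = math.sqrt(p0 * (1 - p0) / games)   # standard error under the null
    z = (p_hat - p0) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

p0 = 178 / (178 + 100)   # implied win probability of a -178 favorite (vig ignored)
z, p = one_prop_ztest(118, 233, p0)
print(f"observed {118/233:.1%} vs implied {p0:.1%}: z = {z:.2f}, p = {p:.2g}")
```

Under these assumptions the gap between the roughly 50% observed win rate and the roughly 64% implied rate is several standard errors wide, which is the kind of result being alluded to here. Note that a small p-value only says the past record was unlikely under the market's pricing; it says nothing about whether the edge persists going forward.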
This is what handicapping is all about.
Step one: KNOW THE HISTORICAL RESULTS.
Step two: EVALUATE THE MEANING AND SIGNIFICANCE OF THOSE RESULTS.
The SDQL will help with step one. You need brains for step two.
Dr M.