|
Post by hostrauser on Oct 31, 2022 20:37:36 GMT -6
No chit-chat, you don't pay me for my clever witticisms, you pay me for the algorithm. *touches earpiece* I've just been informed that I do not get paid for this.

NATIONAL TOP 50
1. Avon H.S., IN (95.674 in the algorithm, FYI)
2. Hebron H.S., TX (95.672. Not posting every one, just these two for now)
3. Flower Mound H.S., TX
4. Carmel H.S., IN
5. Vandegrift H.S., TX
6. Broken Arrow H.S., OK
7. Cedar Ridge H.S., TX
8. Blue Springs H.S., MO
9. Claudia Taylor Johnson H.S., TX
10. The Woodlands H.S., TX
11. Ronald Reagan H.S., TX
12. William Mason H.S., OH
13. Tarpon Springs H.S., FL
14. Marcus H.S., TX
15. Jenks H.S., OK
16. Cedar Park H.S., TX
17. Brownsburg H.S., IN
18. Dobyns-Bennett H.S., TN
19. Fishers H.S., IN
20. Wando H.S., SC
21. Coppell H.S., TX
22. Vista Ridge H.S., TX
23. Cy-Fair H.S., TX
24. Bentonville H.S., AR
25. Round Rock H.S., TX
------------------------------------
26. Pearland H.S., TX
27. American Fork H.S., UT
28. Bridgeland H.S., TX
29. Wakeland H.S., TX
30. Mustang H.S., OK
31. Moe & Gene Johnson H.S., TX
32. James Bowie H.S., TX
33. James F. Byrnes H.S., SC
34. O'Fallon Township H.S., IL
35. San Marcos H.S., CA
36. L.D. Bell H.S., TX
37. Vista Murrieta H.S., CA
38. Castle H.S., IN
39. Rouse H.S., TX
40. Kiski Area H.S., PA
41. Westlake H.S., TX
42. Lincoln H.S., SD
43. Chino Hills H.S., CA
44. Rosemount H.S., MN
45. Keller H.S., TX
46. Jenison H.S., MI
47. Ayala H.S., CA
48. Lincoln-Way Community H.S., IL
49. James Logan H.S., CA
50. Bartlett H.S., TN

CLASS AAAA TOP 10
1. Avon H.S., IN
2. Hebron H.S., TX
3. Flower Mound H.S., TX
4. Carmel H.S., IN
5. Vandegrift H.S., TX
6. Broken Arrow H.S., OK
7. Cedar Ridge H.S., TX
8. Claudia Taylor Johnson H.S., TX
9. The Woodlands H.S., TX
10. Ronald Reagan H.S., TX

CLASS AAA TOP 10
1. Blue Springs H.S., MO
2. Cedar Park H.S., TX
3. Dobyns-Bennett H.S., TN
4. Wakeland H.S., TX
5. Castle H.S., IN
6. Rouse H.S., TX
7. Lincoln H.S., SD
8. Rosemount H.S., MN
9. Stephen F. Austin H.S., TX
10. Foster H.S., TX

CLASS AA TOP 10
1. Tarpon Springs H.S., FL
2. Kiski Area H.S., PA
3. Jenison H.S., MI
4. Grain Valley H.S., MO
5. South Jones H.S., MS
6. Morton H.S., IL
7. Anderson County H.S., KY
8. Franklin H.S., TN
9. Green Canyon H.S., UT
10. Lake Hamilton H.S., AR

(because you'll ask)
14. Dartmouth H.S., MA
16. Norwin H.S., PA
22. Marian Catholic H.S., IL

CLASS A TOP 10
1. Bourbon County H.S., KY
2. Edgewood H.S., IN
3. Beechwood H.S., KY
4. Estill County H.S., KY
5. Murray H.S., KY
6. Buhler H.S., KS
7. MOC-Floyd Valley H.S., IA
8. Signal Mountain H.S., TN
9. Archbishop Alter H.S., OH
10. Russell County H.S., KY
|
|
|
Post by TXHillCountryBands on Oct 31, 2022 22:01:35 GMT -6
Still waiting for Hostrauser's lottery-winning algorithm 🤠
|
|
|
Post by Subito Fortissimo on Nov 1, 2022 19:50:14 GMT -6
Still waiting for Hostrauser's lottery-winning algorithm 🤠 hostrauser is going to win the lottery and become a billionaire, start a space rocket company, and then provide the funding so that BOA can rotate Grand Nationals between Indianapolis and the moon.
|
|
|
Post by hostrauser on Nov 2, 2022 19:09:23 GMT -6
Still waiting for Hostrauser's lottery-winning algorithm 🤠 I still say that creating that algorithm would be easier and less random than some local circuits' judging.
|
|
|
Post by Samuel Culper on Nov 3, 2022 9:04:46 GMT -6
I want my money back.
|
|
|
Post by ncscbandfan on Nov 3, 2022 12:34:11 GMT -6
Catawba Ridge, SC, a regional champion this year, not even in the top 10 in the AA class? Interesting.
|
|
|
Post by yayband914 on Nov 3, 2022 12:40:15 GMT -6
Catawba Ridge, SC, a regional champion this year, not even in the top 10 in the AA class? Interesting. The algorithm has been questioned! 😱 I agree, though: put them and the bottom four or five at the same contest and there's no question who would take the W.
|
|
|
Post by trumpet300 on Nov 3, 2022 14:02:39 GMT -6
Catawba Ridge, SC, a regional champion this year, not even in the top 10 in the AA class? Interesting. The algorithm has been questioned! 😱 I agree, though: put them and the bottom four or five at the same contest and there's no question who would take the W. Agreed. You can apply that to a lot of the fabled algorithm's predictions/placements, particularly in the 2A class but in the others as well.
|
|
|
Post by hostrauser on Nov 3, 2022 18:47:46 GMT -6
Catawba Ridge, SC, a regional champion this year, not even in the top 10 in the AA class? Interesting. Yep. The competition level at Winston-Salem was not very high and the algorithm was unimpressed. It has Catawba Ridge in 13th in AA, one spot ahead of Dartmouth. The algorithm definitely noticed that bands from Winston-Salem who went on to other BOA regionals got knocked down a peg. Cary H.S. (NC) was 9th/10th at Winston-Salem and just BARELY made Finals (by less than 0.1) the very next week in Orlando. Walton H.S. (GA) was 2nd/3rd at Winston-Salem, but two weeks later, against better competition, was 7th/9th in Jacksonville. The algorithm sees all.
|
|
|
Post by trumpet300 on Nov 3, 2022 18:58:16 GMT -6
So now I'm confused. You said that the competition at Winston-Salem wasn't very high. So what you are saying is: the algorithm ranks ensembles based on multiple factors (i.e., scores and who they beat), but it also uses its own rankings of bands to determine their placement based on what it sees in terms of their competitive level? I'm confused about how the point of the algorithm is to rank bands, but one of the factors used to rank the bands is their ranking in the algorithm against other bands. Maybe I'm not understanding what you're saying, but that seems incredibly circular.
|
|
|
Post by hostrauser on Nov 3, 2022 19:17:51 GMT -6
You're right, I worded that very poorly. Let me try again.

The algorithm tries to "predict" what a band would score if every band competed at BOA Grand National Prelims. For BOA shows, it uses the point spreads at a given competition and estimates a "cap" for the top three bands in both Prelims and Finals to give a "power rating" for the show overall. Every band's subtotal (penalties are never included) is then adjusted by the power rating to get a data point for the algorithm.

BOA is very consistent, but every so often a show has scoring that just goes off the rails. When that happens, data points from future BOA shows can prompt my conditional formatting to flag the outlier scores. If a bunch of outlier scores are all from the same show, I may go back and completely revise the ratings from that show to be more in line with more current results. Nothing is ever set in stone. In this case, the power rating for Winston-Salem was very low, but the results from Orlando and Jacksonville ended up confirming that it was not out of line with the rest of the season.

Another complicating factor is that band shows grow at different rates, but the algorithm looks at all performances. If Band B beats Band A by two points in September, but Band A beats Band B by five points in October, both shows count in the algorithm. The October show has more weight because it's more recent, but the September show doesn't just get tossed in the trash because it's older. That show was part of the season, too.
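For anyone who wants to see the shape of that in code, here's a minimal sketch of the pipeline as just described: a per-show power rating, an adjusted data point per performance, and a recency-weighted average. The function names, the additive adjustment, and the 0.85 decay constant are illustrative assumptions, not the actual spreadsheet formulas.

```python
# Illustrative sketch only: the names, the additive adjustment, and the
# decay constant are assumptions, not the algorithm's actual formulas.
from dataclasses import dataclass

@dataclass
class DataPoint:
    band: str
    subtotal: float     # raw score, penalties never included
    week: int           # week of the season the show took place
    show_rating: float  # "power rating" of the show it came from

def power_rating(top3_subtotals: list[float], estimated_cap: float) -> float:
    """Offset between what a show's top bands actually scored and what
    the model estimates they would score at Grand National Prelims."""
    return estimated_cap - sum(top3_subtotals) / len(top3_subtotals)

def adjusted(dp: DataPoint) -> float:
    """Put a raw subtotal onto the common Grand National Prelims scale."""
    return dp.subtotal + dp.show_rating

def season_estimate(points: list[DataPoint], current_week: int,
                    decay: float = 0.85) -> float:
    """Recency-weighted average of adjusted scores: an October show
    outweighs a September one, but nothing gets tossed in the trash."""
    weights = [decay ** (current_week - dp.week) for dp in points]
    total = sum(w * adjusted(dp) for w, dp in zip(weights, points))
    return total / sum(weights)

# e.g., a show whose top three averaged 89.0 but are estimated to cap at
# 92.0 gets a +3.0 rating, and every subtotal there is lifted by it:
print(power_rating([90.0, 89.0, 88.0], estimated_cap=92.0))  # 3.0

# Example: one show that judged "cold" (+3.0) and one that judged "hot" (-1.5).
history = [DataPoint("Band A", 82.5, week=4, show_rating=3.0),
           DataPoint("Band A", 88.1, week=8, show_rating=-1.5)]
print(round(season_estimate(history, current_week=9), 1))  # 86.2
```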
|
|
|
Post by doublegeez on Nov 3, 2022 19:29:11 GMT -6
Is the algorithm on a computer or in your galaxy-sized brain? This is some cool stuff, seeing it explained.
|
|
|
Post by srv1084 on Nov 3, 2022 19:40:46 GMT -6
Is the algorithm on a computer or in your galaxy-sized brain? This is some cool stuff, seeing it explained. Sort of like that.
|
|
|
Post by hewhowaits on Nov 4, 2022 5:45:49 GMT -6
Maybe I'm not understanding what you're saying, but that seems incredibly circular. hostrauser explains this in more detail, but think of it as a strength-of-schedule calculation. Winning is good. Winning against stronger bands is better.
|
|
|
Post by trumpet300 on Nov 4, 2022 7:15:27 GMT -6
I read the explanation above, but I'm still not understanding this completely. Even in your explanation, you said winning is good but winning against better bands has a more powerful impact. My point is: how are we determining the so-called "power rating"? If the algorithm ranks bands against one another and that's how the power rating is determined, how can it then use that power rating to rank the bands? Trying to type out my confusion is proving difficult, lol, but to me it sounds like the algorithm uses the algorithm to push out scores and overall ranks, and that just isn't reliable. Obviously I understand that beating Carmel is a bigger showing of strength than beating someone like Marian Catholic at this point in time (nothing against them, it's just an example), but how is this represented in the algorithm? This particular example is clear cut, but not everything is. Once you start comparing the same caliber of band (i.e., Carmel and Avon, Brownsburg and Fishers, Bourbon County and Murray, Norwin and Kiski, etc.), then it's not so easy to merely say which group is better.
|
|
|
Post by hewhowaits on Nov 4, 2022 10:13:00 GMT -6
You answered your own question. Beating Carmel or Avon means more than beating Brownsburg or Fishers, which means more than beating Ben Davis or Center Grove. Beating Carmel means about the same as beating Avon. Think in terms of college football: beating Alabama means more than beating Cincinnati, which means more than beating Kent State.
|
|
|
Post by srv1084 on Nov 4, 2022 11:46:12 GMT -6
This type of modeling is widely used across many other activities, sports, and in the financial world.
Evaluating investment opportunities through multi-variable financial modeling is something that's frequently done in the real world. One of the most difficult professional exams in the world (the Chartered Financial Analyst certification) places heavy emphasis on a candidate's ability to construct and evaluate financial data in the decision-making process. What is one key component of financial modeling? You guessed it: subjectivity.
The subjective theory of value suggests that an object's value is not intrinsic but changes according to its context (e.g. adding seasonal sale adjustments to a company's cash flows to arrive at an adjusted company value). In the absence of direct comparisons, some level of subjectivity is required. These are meant to convey projections, not facts, and there will always be some level of variability in outcome. If I ever establish an annual budget for my company and come within 1% variance, I will consider myself incredibly lucky and instantly go out and play the lottery (I can even pull together a luck-adjusted model on my likelihood of winning, but it won't mean I'm any closer to winning the lottery).
Look at online fantasy sports betting. You would think that player values scale only with data from their recent performance. Dig deeper, and you'll find that certain quarterbacks who have been valued highly have historically had some difficulty against a great pass-rushing defense and are suddenly facing Pittsburgh: their value will very likely come down that week, and inevitably some folks will complain that they are valued too low without considering an important variable like this.
This is why I've always had trouble understanding why people get so worked up over these and other rankings, like the HR weekly poll. They are projections based on a combination of real data and subjectivity. If anything, the subjectivity of the assumptions used is the only point of criticism, and if it's subjective for one person, it's subjective for all people. Lastly, the activity itself is inherently subjective in arriving at even direct scoring comparisons, so any assumptions are simply adding subjectivity upon subjectivity.
With that in mind, and adding the fact that this is purely for fun and has no impact at all on actual outcomes, I will leave you with the following point:
Hostrauser - these are clearly all wrong, and I will not rest until I see Timber Creek, FL and Timber Creek, TX back-to-back in their rightful places.
|
|
|
Post by paddy on Nov 4, 2022 12:01:03 GMT -6
This is why I've always had trouble understanding why people get so worked up over these and other rankings, like the HR weekly poll. They are projections based on a combination of real data and subjectivity. If anything, the subjectivity of the assumptions used is the only point of criticism, and if it's subjective for one person, it's subjective for all people. Lastly, the activity itself is inherently subjective in arriving at even direct scoring comparisons, so any assumptions are simply adding subjectivity upon subjectivity. I have no problem with the HR weekly poll or someone's personal subjective ranking of bands. My problem was always with how this algorithm was initially presented as a dispassionate, scientific, unbiased evaluation of a subjective, biased, and passionate subject. Early questions were met with scoffs and derision about how people just didn't understand the calculation and how they were wrong because the algorithm was so deftly crafted. People were told that what they saw with their own eyes was wrong because it disagreed with the rankings and score projections.
|
|
|
Post by trumpet300 on Nov 4, 2022 13:08:10 GMT -6
You answered your own question. Beating Carmel or Avon means more than beating Brownsburg or Fishers, which means more than beating Ben Davis or Center Grove. Beating Carmel means about the same as beating Avon. Think in terms of college football: beating Alabama means more than beating Cincinnati, which means more than beating Kent State. But that doesn't answer the question. I'm asking how the algorithm represents these differences in caliber of bands.
|
|
|
Post by hewhowaits on Nov 4, 2022 14:01:26 GMT -6
But that doesn't answer the question. I'm asking how the algorithm represents these differences in caliber of bands. By a long-term comparison of results. It's not just about this week or this year. It's about how bands have performed over time.
|
|
|
Post by trumpet300 on Nov 4, 2022 17:10:29 GMT -6
By a long-term comparison of results. It's not just about this week or this year. It's about how bands have performed over time. And I'm not saying that I don't understand that. What I'm trying to understand is how rankings of bands are determining rankings of bands. It's like proving something happened merely by saying that it happened. Like, that's true, but it doesn't explain anything.
|
|
|
Post by hewhowaits on Nov 4, 2022 17:23:26 GMT -6
And I'm not saying that I don't understand that. What I'm trying to understand is how rankings of bands are determining rankings of bands. It's like proving something happened merely by saying that it happened. Like, that's true, but it doesn't explain anything. You appear to be stuck in an infinite loop. There must be a baseline to use in assessing relative strength. If that baseline is not ratings from performance to date, it would just be random number generation or the rantings of a madman.
|
|
|
Post by trumpet300 on Nov 4, 2022 18:17:03 GMT -6
You appear to be stuck in an infinite loop. There must be a baseline to use in assessing relative strength. If that baseline is not ratings from performance to date, it would just be random number generation or the rantings of a madman. That's my point... it already seems to be an infinite loop... that's why I'm asking how the baseline is determined. The explanation for the baseline was "the bands are ranked"... well, yes, but then we are ranking based off of rankings... that's my entire point. What is the baseline, how is it established, and how does it work in the math?
|
|
|
Post by srv1084 on Nov 4, 2022 18:38:34 GMT -6
Why do you need to establish an equal baseline for everyone as a starting point when a baseline already exists? If you start completely from scratch when data already exists, it's not a worthwhile approach to forecasting future events. If Band A has beaten Band B by 20 points, and Band B then beats Band C by another 10 points, what are we accomplishing by saying no data exists and assigning a new base rating to compare Bands A and C? There are clearly facts already in existence that you can point to. Your model could already be well on its way to doing its job by predicting their next meeting's results, rather than having to wait for even more data before getting it off the ground to prove what you already know is likely. This is exactly how modeling works. You see it in so many elements of your life already without even noticing, including the prices you pay for goods and services. I genuinely don't understand the issue.
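To make the Band A/B/C point concrete, here's a toy sketch of how pairwise margins chain into ratings for bands that never met head-to-head. Every number, name, and constant here is a made-up assumption for illustration, not how the actual model computes anything.

```python
# Toy illustration of chaining head-to-head margins into ratings; the
# data, learning rate, and iteration count are made-up assumptions.
results = [("Band A", "Band B", 20.0),  # A beat B by 20 points
           ("Band B", "Band C", 10.0)]  # B beat C by 10 points

bands = {b for winner, loser, _ in results for b in (winner, loser)}
ratings = {b: 0.0 for b in bands}  # everyone starts equal...

# ...then each observed margin nudges the two ratings toward it.
for _ in range(200):
    for winner, loser, margin in results:
        error = margin - (ratings[winner] - ratings[loser])
        ratings[winner] += 0.05 * error
        ratings[loser] -= 0.05 * error

# A and C never met, but the existing data already implies a ~30-point gap.
print(round(ratings["Band A"] - ratings["Band C"], 1))  # 30.0
```

A real system would also weight shows by recency and revise outliers, as described above; the point is only that existing pairwise data gives you a baseline for free.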
|
|
|
Post by hostrauser on Nov 4, 2022 19:25:53 GMT -6
That's my point... it already seems to be an infinite loop... that's why I'm asking how the baseline is determined. The explanation for the baseline was "the bands are ranked"... well, yes, but then we are ranking based off of rankings... that's my entire point. What is the baseline, how is it established, and how does it work in the math? Your question doesn't make any sense. There is no baseline until scores start rolling in. Everyone is at zero. The algorithm does refer to the prior season's scores for all competing bands for the first week or two, until there's enough data present; then it "auto-corrects" to using only the present season's scores. We aren't ranking off of rankings, we're ranking off of the scores issued at competitions. But you can't directly compare scores from competitions in different states with different judges, so the algorithm tries to find some middle ground. The later in the season we get, the more data there is, and the more precise the fine-tuning is.
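A rough sketch of how that early-season "auto-correct" could work: lean on last season's rating until enough current-season scores exist. The two-show threshold and the linear blend are assumptions for illustration, not the actual spreadsheet logic.

```python
# Hypothetical blend of prior-season and current-season data; the
# min_shows threshold and linear weighting are my own assumptions.
def blended_rating(prior_season: float, current_scores: list[float],
                   min_shows: int = 2) -> float:
    """Shift weight from last season's rating to this season's average
    as current-season data points accumulate."""
    if not current_scores:
        return prior_season  # week zero: the prior season is all we have
    w = min(len(current_scores) / min_shows, 1.0)  # 0 = all prior, 1 = all current
    current_avg = sum(current_scores) / len(current_scores)
    return (1 - w) * prior_season + w * current_avg

print(blended_rating(90.0, []))            # 90.0 -- no data yet
print(blended_rating(90.0, [87.0]))        # 88.5 -- halfway there
print(blended_rating(90.0, [87.0, 89.0]))  # 88.0 -- current season only
```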
|
|
|
Post by hdni on Nov 5, 2022 12:45:15 GMT -6
Lol, all of these South Carolina campies getting offended. CR getting by Walton and Harrison this year was not as impressive as it would have been 5 years ago. And no, they would not get by Bartlett, Lincoln-Way, Ayala, or Jenison (can't speak for James Logan, I've not seen them) if they saw them. To Hostrauser's points: a regional win is only as impressive as the bands who attend.
|
|
|
Post by trumpet300 on Nov 5, 2022 18:19:00 GMT -6
I don't think it has anything to do with being offended... it's a fair and genuine question.
|
|