|
Post by ohioguy2 on Sept 8, 2021 13:49:01 GMT -6
But that doesn't invalidate any of the scores! Jenison beat Plymouth-Canton at 2019 Toledo. Plymouth-Canton beat Jenison at 2019 GN Semis. Both of these statements are true and must be accepted; one doesn't cancel out the other. It would be wrong to disregard Jenison's score at Toledo just because PCEP beat them later in the year. And it would be wrong to disregard PCEP's score at Grand Nationals just because they lost to Jenison earlier in the season. Just curious: what, then, was the process that left Miamisburg out of the rankings? They’ve been pretty comparable with Jenison the last few years and medaled at Grand Nationals the last time it occurred. They were AA in 2019 and I believe are again this year.
|
|
|
Post by Marching Observer on Sept 8, 2021 14:10:55 GMT -6
Since we are having a rather large topic going on in the weekly rankings, figured it was time for an actual thread to be created so that we can truly share and discuss our own rankings. From preseason, midseason, etc., let's rank 'em and tank 'em.
Continuing the topic from there, I was merely curious about Camdenton being so high and GFC nowhere to be found, when GFC had made semis in '18 and '19. I get that early season scores probably tanked them though. I also figured since Norton is generally in A that you'd assume they were still in it, but I get wanting final confirmation on them too.
|
|
|
Post by Marching Observer on Sept 8, 2021 14:12:44 GMT -6
Hey all, I created a thread in General Discussions so we can move the topic of current-year rankings over there. Let's keep the party going in that thread!
EDIT: thewho here, I moved all relevant posts to this thread. This post was the beginning of the thread before the move.
|
|
|
Post by lostchoirguy on Sept 8, 2021 15:25:50 GMT -6
I'm curious, on those classification rankings, were only bands that compete in BOA included?
I can think of a few really good A bands in Texas that would probably have made the list but they don't do BOA.
|
|
|
Post by Marching Observer on Sept 8, 2021 16:42:30 GMT -6
I'm curious, on those classification rankings, were only bands that compete in BOA included? I can think of a few really good A bands in Texas that would probably have made the list but they don't do BOA. Yeah, his list is generally only based upon bands that have done BOA. I agree that there are some other great groups not represented, but trying to compare numbers across different circuits is a fool's errand.
|
|
|
Post by hostrauser on Sept 8, 2021 16:57:46 GMT -6
I'm curious, on those classification rankings, were only bands that compete in BOA included? I can think of a few really good A bands in Texas that would probably have made the list but they don't do BOA. Yeah, his list is generally only based upon bands that have done BOA. I agree that there are some other great groups not represented, but trying to compare numbers across different circuits is a fool's errand. Plus, there are very, very few great marching bands in this country that don't compete in BOA. A few in the SCSBOA in Southern California, some in UIL, and a couple in New England. Other than that, maybe one or two here or there scattered across the country.
|
|
|
Post by hostrauser on Sept 8, 2021 17:01:01 GMT -6
Continuing the topic from there, I was merely curious about Camdenton being so high and GFC nowhere to be found, when GFC had made semis in '18 and '19. I get that early season scores probably tanked them though. I also figured since Norton is generally in A that you'd assume they were still in it, but I get wanting final confirmation on them too. Reminder that Greenfield-Central only made GN Semis in 2019 to satisfy class requirements. Quite simply, Camdenton's 2019 season was much better scored relative to their BOA competition than GFC's was to their BOA competition.
|
|
|
Post by ohioguy2 on Sept 8, 2021 17:09:29 GMT -6
Continuing the topic from there, I was merely curious about Camdenton being so high and GFC nowhere to be found, when GFC had made semis in '18 and '19. I get that early season scores probably tanked them though. I also figured since Norton is generally in A that you'd assume they were still in it, but I get wanting final confirmation on them too. Reminder that Greenfield-Central only made GN Semis in 2019 to satisfy class requirements. Quite simply, Camdenton's 2019 season was much better scored relative to their BOA competition than GFC's was to their BOA competition. Do you have Miamisburg listed in AAA and is that why they’re not included in this or do they not score high enough? They were AA in 19 and I believe are again this year.
|
|
|
Post by hostrauser on Sept 8, 2021 17:11:46 GMT -6
I'm bringing this over from the other thread... "Sept 8, 2021 14:27:44 GMT -5 abtwitch said: I think my biggest issue is putting so much stock into prelims scores. Judges are only trying to figure out who should qualify for the next round, everything else doesn't matter that much. Any sort of ranking system using prelims scores is going to be inherently flawed." I completely and totally disagree. Bands of America has the best rubric and the best training system in the country. The judges are, with few exceptions, remarkably consistent and uniform. Case in point: 2019 BOA GN Prelims. The Top 12 bands from prelims (overall, across both panels) were the 12 bands that made Finals. The two prelims panels called 12/12. In 2018, the two prelims panels had 10 of the 12 finalists in the Prelims Top 12 (and another, Homestead, was 14th). In my experience, it's the SEMIFINALS scores you have to watch out for. The judges in Semifinals usually put more spacing between the caption scores to make sure the best 12 bands make it into Finals. Some bands take a precipitous fall from Prelims to Semis despite performing just as well. If anything, I think BOA GN Prelims scores are some of the most "perfect" you will see all season.
|
|
|
Post by hostrauser on Sept 8, 2021 17:13:49 GMT -6
Reminder that Greenfield-Central only made GN Semis in 2019 to satisfy class requirements. Quite simply, Camdenton's 2019 season was much better scored relative to their BOA competition than GFC's was to their BOA competition. Do you have Miamisburg listed in AAA and is that why they’re not included in this or do they not score high enough? They were AA in 19 and I believe are again this year. Yes, I have them in AAA. Their 2020 enrollment would have put them in AAA (by only six kids! So I'm not terribly surprised to hear they'll be back in AA).
|
|
|
Post by thewho on Sept 8, 2021 17:43:17 GMT -6
I'm bringing this over from the other thread... "Sept 8, 2021 14:27:44 GMT -5 abtwitch said: I think my biggest issue is putting so much stock into prelims scores. Judges are only trying to figure out who should qualify for the next round, everything else doesn't matter that much. Any sort of ranking system using prelims scores is going to be inherently flawed." I completely and totally disagree. Bands of America has the best rubric and the best training system in the country. The judges are, with few exceptions, remarkably consistent and uniform. Case in point: 2019 BOA GN Prelims. The Top 12 bands from prelims (overall, across both panels) were the 12 bands that made Finals. The two prelims panels called 12/12. In 2018, the two prelims panels had 10 of the 12 finalists in the Prelims Top 12 (and another, Homestead, was 14th). In my experience, it's the SEMIFINALS scores you have to watch out for. The judges in Semifinals usually put more spacing between the caption scores to make sure the best 12 bands make it into Finals. Some bands take a precipitous fall from Prelims to Semis despite performing just as well. If anything, I think BOA GN Prelims scores are some of the most "perfect" you will see all season. I'm intrigued by how you would use the preliminary scores in ranking the bands. Do you use them as validation for your algorithm accuracy? E.g. your algorithm prints out an end-of-year ranking of bands as if GN (or one major competition bringing the majority of the top 50 programs together) doesn't exist, and you use the actual GN Prelims results to compare the end-of-year rankings.
|
|
|
Post by hostrauser on Sept 8, 2021 19:20:19 GMT -6
I'm intrigued by how you would use the preliminary scores in ranking the bands. Do you use them as validation for your algorithm accuracy? E.g. your algorithm prints out an end-of-year ranking of bands as if GN (or one major competition bringing the majority of the top 50 programs together) doesn't exist, and you use the actual GN Prelims results to compare the end-of-year rankings. You're kind of halfway there. My algorithm/formulas try to guesstimate what the scores of EVERY BOA band would have been had they ALL been at Grand Nationals. There are two basic principles I try to follow when calculating the rankings: start with the new and work toward the old, and start at the top and work toward the bottom. I feel that Grand Nationals scores will be more accurate than October regional scores, which in turn will be more accurate than September regional scores. The top bands tend to be very, very consistent over the course of a season (and, often, year after year), so they are very good touchstones to use when trying to calculate everyone else's "would have been" scores. So I take the twelve GN Finalists' scores, and I compare them to how those twelve bands scored in Semifinals. Then I take a look at the other ~24 Semifinalists' scores and try to calculate what their Finals score would have been. Then I take those ~36 actual and estimated Finals scores, plus the ~36 actual Semifinals scores, and compare both sets to the Prelims scores for those bands. That done, I calculate an estimated Finals rating for every band at Grand Nationals. 
Once that is all done, every Grand Nationals band will get 1 to 3 scores input into the algorithm: (1) Finalist bands get their ACTUAL Finals score, their actual or adjusted Semifinals score (whichever is higher), and their actual or adjusted Prelims score (whichever is higher); (2) Semifinalist bands get their actual or adjusted Semifinals score (whichever is higher), and their actual or adjusted Prelims score (whichever is higher); and (3) Prelims bands get their actual or adjusted Prelims score (whichever is higher). Once Grand Nationals is all done, I move back to the regionals the week before Grand Nationals, then the week before, etc. I start by looking for bands that attended both the regional and Grand Nationals and chart scoring paths. Some bands peak early, some peak late. At the end of the day, I try to get the best estimate I can as to what every BOA band in the country would have scored at Grand Nationals. Up to five adjusted regional scores can be put into the formula. If a band attends and makes Finals in three or more regionals, they will have more than five scores. Bands get credit for competing more often: in that scenario, the lowest scores would be discarded and only the five highest adjusted scores used. Up to three Grand National scores and up to five Regional scores can be put into the formula. The formula then doubles the best score of the season for that band and averages out the sum. You'll note the repeated usage of "whichever is higher." I err on the side of favorability for every band. I also completely ignore penalties, as those are procedural errors not indicative of the performance capability of the band: only subtotals are used.
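The final averaging step described above can be sketched roughly as follows. This is a minimal illustration of the stated rules (up to three Grand Nationals scores, the five highest adjusted regional scores, best score of the season counted twice, then averaged); the function name and the handling of an empty score list are my own assumptions, not the actual spreadsheet.

```python
def season_rating(gn_scores, regional_scores):
    """Combine a band's actual/adjusted scores into one season rating.

    gn_scores: 0-3 actual or adjusted Grand Nationals scores.
    regional_scores: any number of adjusted regional scores; only the
        five highest are kept, so extra appearances can't hurt a band.
    """
    # Keep only the five best adjusted regional scores.
    regionals = sorted(regional_scores, reverse=True)[:5]
    scores = list(gn_scores) + regionals
    if not scores:
        return None  # no data for this band
    best = max(scores)
    # The best score of the season is "doubled" (counted twice),
    # then the sum is averaged over the enlarged score count.
    return (sum(scores) + best) / (len(scores) + 1)
```

For example, a band with one GN score of 90.0 and one adjusted regional score of 80.0 would rate (90 + 80 + 90) / 3, so the doubled best score pulls the average toward the band's peak performance.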
|
|
|
Post by marimba11 on Sept 8, 2021 19:33:56 GMT -6
Sometimes I wish BOA would just do the ordinal system like UIL, or how George Hopkins wanted 😳
|
|
|
Post by Marching Observer on Sept 8, 2021 19:51:39 GMT -6
Sometimes I wish BOA would just do the ordinal system like UIL, or how George Hopkins wanted 😳 I've actually checked on the ordinal system after some shows and have found that the placings are honestly not that much different and, in most cases, don't change at all.
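For anyone curious what "checking the ordinal system" looks like in practice, here is a rough sketch of the usual rank-sum approach: each judge's scores are converted to ordinals (1 for the highest score), each band's ordinals are summed, and the lowest total places first. The function name and the name-based tiebreak are my own assumptions for illustration, not any circuit's actual procedure.

```python
def ordinal_placements(scores_by_judge):
    """Place bands by an ordinal (rank-sum) system.

    scores_by_judge: dict mapping judge -> {band: score}.
    Each judge's scores become ordinals (1 = that judge's top band);
    ordinals are summed per band, and the lowest total wins.
    Ties are broken alphabetically here just to stay deterministic.
    """
    totals = {}
    for judge_scores in scores_by_judge.values():
        # Rank this judge's bands from highest score to lowest.
        ranked = sorted(judge_scores, key=judge_scores.get, reverse=True)
        for place, band in enumerate(ranked, start=1):
            totals[band] = totals.get(band, 0) + place
    # Lowest ordinal total places first.
    return sorted(totals, key=lambda band: (totals[band], band))
```

Running this alongside the raw score totals for a show makes it easy to see whether any placements actually flip under ordinals.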
|
|
|
Post by marimba11 on Sept 8, 2021 19:54:40 GMT -6
Sometimes I wish BOA would just do the ordinal system like UIL, or how George Hopkins wanted 😳 I've actually checked on the ordinal system after some shows and have found that the placings are honestly not that much different and, in most cases, don't change at all. Exactly
|
|
|
Post by LeanderMomma on Sept 8, 2021 21:03:38 GMT -6
Y’all makin my head hurt.
|
|
|
Post by Samuel Culper on Sept 8, 2021 21:05:26 GMT -6
Y’all makin my head hurt. NERDS!!!!! 😁
|
|
|
Post by srv1084 on Sept 8, 2021 22:06:34 GMT -6
I'm intrigued by how you would use the preliminary scores in ranking the bands. Do you use them as validation for your algorithm accuracy? E.g. your algorithm prints out an end-of-year ranking of bands as if GN (or one major competition bringing the majority of the top 50 programs together) doesn't exist, and you use the actual GN Prelims results to compare the end-of-year rankings. You're kind of halfway there. My algorithm/formulas try to guesstimate what the scores of EVERY BOA band would have been had they ALL been at Grand Nationals. There are two basic principles I try to follow when calculating the rankings: start with the new and work toward the old, and start at the top and work toward the bottom. I feel that Grand Nationals scores will be more accurate than October regional scores, which in turn will be more accurate than September regional scores. The top bands tend to be very, very consistent over the course of a season (and, often, year after year), so they are very good touchstones to use when trying to calculate everyone else's "would have been" scores. So I take the twelve GN Finalists' scores, and I compare them to how those twelve bands scored in Semifinals. Then I take a look at the other ~24 Semifinalists' scores and try to calculate what their Finals score would have been. Then I take those ~36 actual and estimated Finals scores, plus the ~36 actual Semifinals scores, and compare both sets to the Prelims scores for those bands. That done, I calculate an estimated Finals rating for every band at Grand Nationals. 
Once that is all done, every Grand Nationals band will get 1 to 3 scores input into the algorithm: (1) Finalist bands get their ACTUAL Finals score, their actual or adjusted Semifinals score (whichever is higher), and their actual or adjusted Prelims score (whichever is higher); (2) Semifinalist bands get their actual or adjusted Semifinals score (whichever is higher), and their actual or adjusted Prelims score (whichever is higher); and (3) Prelims bands get their actual or adjusted Prelims score (whichever is higher). Once Grand Nationals is all done, I move back to the regionals the week before Grand Nationals, then the week before, etc. I start by looking for bands that attended both the regional and Grand Nationals and chart scoring paths. Some bands peak early, some peak late. At the end of the day, I try to get the best estimate I can as to what every BOA band in the country would have scored at Grand Nationals. Up to five adjusted regional scores can be put into the formula. If a band attends and makes Finals in three or more regionals, they will have more than five scores. Bands get credit for competing more often: in that scenario, the lowest scores would be discarded and only the five highest adjusted scores used. Up to three Grand National scores and up to five Regional scores can be put into the formula. The formula then doubles the best score of the season for that band and averages out the sum. You'll note the repeated usage of "whichever is higher." I err on the side of favorability for every band. I also completely ignore penalties, as those are procedural errors not indicative of the performance capability of the band: only subtotals are used. As someone whose career requires a great deal of financial modeling, I really dig this. I know you also track non-BOA bands, so what, if anything, is done there to achieve full national rankings? Some combination of in-state spreads measured against BOA scores for the bands that do participate?
|
|
|
Post by hostrauser on Sept 9, 2021 7:40:31 GMT -6
As someone whose career requires a great deal of financial modeling, I really dig this. I know you also track non-BOA bands, so what, if anything, is done there to achieve full national rankings? Some combination of in-state spreads measured against BOA scores for the bands that do participate? You won't be surprised to find out that I work in accounting. I spend all day inside business Excel spreadsheets and then come home to my personal band Excel spreadsheets. For the non-BOA bands, yes, I try to find the "common opponents" or bands from their region that do compete in BOA. But I also look at the local caption/scoring sheets to see if things need to be adjusted. For example, there are several states (WI, MN, IA, plus the SCSBOA and NCBA in CA and the NWAPA circuit in the Pacific Northwest) that include percussion and guard captions in the overall score. Since BOA does not do that, I remove the percussion and guard captions, and then pro-rate the rest of the local score back up to 100-point scale before doing the cross-circuit comparisons with BOA.
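The pro-rating step described above (drop the percussion and guard captions, then scale the remainder back up to 100 points) can be sketched like this. The function name, the dict layout, and the default caption names are illustrative assumptions, not the actual spreadsheet formulas.

```python
def normalize_to_boa_scale(caption_scores, excluded=("percussion", "guard")):
    """Re-scale a local-circuit score to a BOA-style 100-point scale.

    caption_scores: dict mapping caption name -> (score, max_possible).
    Captions named in `excluded` are dropped, and the remaining earned
    points are pro-rated back up to a 100-point scale.
    """
    kept = {name: pair for name, pair in caption_scores.items()
            if name not in excluded}
    earned = sum(score for score, _ in kept.values())
    possible = sum(max_pts for _, max_pts in kept.values())
    # Pro-rate the kept captions back up to 100 points.
    return 100.0 * earned / possible
```

So a hypothetical circuit sheet of music 40/50, visual 30/40, percussion 8/10 would drop percussion and rescale 70 of 90 points to roughly 77.8, which is then comparable to BOA subtotals.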
|
|
|
Post by thestraightestlegs on Sept 9, 2021 7:50:58 GMT -6
Maybe a bit too much in the weeds, but do you ever look at individual judges and how they rate bands to make predictions? I started doing a little bit of that in WGI and found it to add an extra wrinkle that I wasn’t entirely expecting.
|
|
|
Post by thewho on Sept 9, 2021 8:33:28 GMT -6
This is a great overview of your algorithm, thanks for explaining a bit more in detail. Just to be clear (and as a crude summary), you attempt to extrapolate the GN "predicted" score from each week of competition and use those "predicted" scores in finding the power ranking by averaging the actual GN scores and "predicted" scores. Was that correct? And jumping on thestraightestlegs, do you also weigh in potential factors in your algorithm? E.g. Judge, order, weather, etc. Or is this just a straight extrapolation from crude scores?
|
|
|
Post by hostrauser on Sept 9, 2021 11:06:28 GMT -6
Just to be clear (and as a crude summary), you attempt to extrapolate the GN "predicted" score from each week of competition and use those "predicted" scores in finding the power ranking by averaging the actual GN scores and "predicted" scores. Was that correct? And jumping on thestraightestlegs , do you also weigh in potential factors in your algorithm? E.g. Judge, order, weather, etc. Or is this just a straight extrapolation from crude scores? Yes, that is the correct basic summary. It is mostly a straight extrapolation from crude scores. If the weather is extremely foul, they're probably not marching. Order and Judge influence, in my opinion, are vastly overrated. BOA judges are very, very good at numbers management. Regarding judge bias... I looked at that in detail several years ago using DCI results, and found that (despite popular opinion) there was little to no judge bias for any particular corps. The one or two examples I did find that could possibly be attributed to bias usually resulted in a corps caption score being slightly inflated over the norm. I didn't find any examples of a judge with an ax to grind consistently low-balling a particular corps over the season(s). I haven't done a similar study for BOA, but I'm highly skeptical it would produce anything of value. There were 1,066 individual band-performances in BOA in 2019, which translates to 7,462 independent sub-caption scores (minus maybe a handful if a show was missing a full panel). There's no value added in analyzing all 7,500ish scores to find maybe 10-20 that are really whack.
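For anyone who wants to try the kind of judge-bias screen described above, a crude version is just flagging caption scores that sit far from the rest of the panel. This is a hypothetical sketch of that idea, not the method actually used in the DCI study; the function name, data layout, and 3-point threshold are all my own assumptions.

```python
from statistics import mean

def caption_deviations(panel_scores, threshold=3.0):
    """Flag judge-band caption scores that deviate sharply from the panel.

    panel_scores: dict mapping judge -> {band: caption score on a
        common scale}. Returns (judge, band, deviation) tuples where a
        judge's score differs from the other judges' average for that
        band by more than `threshold` points.
    """
    judges = list(panel_scores)
    flags = []
    for band in panel_scores[judges[0]]:
        for judge in judges:
            # Compare this judge to the average of everyone else.
            others = [panel_scores[j][band] for j in judges if j != judge]
            deviation = panel_scores[judge][band] - mean(others)
            if abs(deviation) > threshold:
                flags.append((judge, band, round(deviation, 2)))
    return flags
```

As the post notes, running something like this over thousands of sub-caption scores tends to surface only a handful of outliers, and consistent low-balling of one group is even rarer.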
|
|
|
Post by mrmatthews on Sept 9, 2021 12:02:51 GMT -6
....I've got nothing to do since I'm at work, so I'll throw together a quick pre-season poll to pass your time... I need your job!
|
|
|
Post by OldSchoolTrumpet on Sept 9, 2021 12:11:12 GMT -6
....I've got nothing to do since I'm at work, so I'll throw together a quick pre-season poll to pass your time... I need your job! Well, you just posted on a Marching Band forum at 2:02 PM (EST), so you don't seem very overworked yourself! To be honest, I'd worry about my IT department tracking my activities but I sorta run the IT department.
|
|
|
Post by dbalash on Sept 9, 2021 12:38:56 GMT -6
I need your job! Well, you just posted on a Marching Band forum at 2:02 PM (EST), so you don't seem very overworked yourself! To be honest, I'd worry about my IT department tracking my activities but I sorta run the IT department. My boss (his daughter went to a Chicago area school that used to be a lot more competitive) and I talk about competitive marching band whenever we're both in the office.
|
|
|
Post by marimba11 on Sept 9, 2021 14:48:11 GMT -6
I need your job! Well, you just posted on a Marching Band forum at 2:02 PM (EST), so you don't seem very overworked yourself! To be honest, I'd worry about my IT department tracking my activities but I sorta run the IT department. Hahaha that’s nice! No idea what people must think when they see horn rank open on my browser 😂
|
|
|
Post by boahistorybuff on Sept 9, 2021 16:07:56 GMT -6
I would love to try to do a pre season guess, but I am bowing out of that this year. In all my years following the activity, I think this is going to be the hardest to predict. No BOA season last year certainly makes things challenging for this year. I hope all of the kids stay healthy this fall, but I would not be surprised if a Covid outbreak in a band causes some cancelled practice time and/or cancelled competitions. My fingers will remain crossed. Just a lot of variables this fall that we have never had before.
Since I am expunging 2020 from the BOA history books, there are two standing records that I am curious as to whether or not they will hold this year.
1. Broken Arrow has won every BOA regional they have competed in going all the way back to 2007. If my math is correct, I believe that is 14 consecutive BOA Regional titles. Can they add to this record this season?
2. Avon has finished in the top three in Grand National Finals every year since 2007; 13 consecutive top three finishes at Grand Nationals. Not even Marian is close to that record. Can they again finish in the top 3 in Grand National Finals again this year?
|
|
|
Post by principalagent on Sept 9, 2021 16:22:41 GMT -6
1. Broken Arrow has won every BOA regional they have competed in going all the way back to 2007. If my math is correct, I believe that is 14 consecutive BOA Regional titles. Can they add to this record this season? 2. Avon has finished in the top three in Grand National Finals every year since 2007; 13 consecutive top three finishes at Grand Nationals. Not even Marian is close to that record. Can they again finish in the top 3 in Grand National Finals again this year? Absolutely yes and leaning no.
|
|
|
Post by srv1084 on Sept 9, 2021 16:22:52 GMT -6
You won't be surprised to find out that I work in accounting. Hello fellow accountant! At least three of us in this thread alone.
|
|
|
Post by marimba11 on Sept 9, 2021 16:27:38 GMT -6
I would love to try to do a pre season guess, but I am bowing out of that this year. In all my years following the activity, I think this is going to be the hardest to predict. No BOA season last year certainly makes things challenging for this year. I hope all of the kids stay healthy this fall, but I would not be surprised if a Covid outbreak in a band causes some cancelled practice time and/or cancelled competitions. My fingers will remain crossed. Just a lot of variables this fall that we have never had before. Since I am expunging 2020 from the BOA history books, there are two standing records that I am curious as to whether or not they will hold this year. 1. Broken Arrow has won every BOA regional they have competed in going all the way back to 2007. If my math is correct, I believe that is 14 consecutive BOA Regional titles. Can they add to this record this season? 2. Avon has finished in the top three in Grand National Finals every year since 2007; 13 consecutive top three finishes at Grand Nationals. Not even Marian is close to that record. Can they again finish in the top 3 in Grand National Finals again this year? Totally agree. I think this season is going to be super hard to predict and I wouldn’t dare to that much extent, but I am glad others are more confident!
|
|