Since the season is over, I'll discuss all of the poorly run events I've competed in this year.
Battle of Valley Forge/Conestoga
Robot Arm: Although I didn't compete, I watched my team's runs and noticed a big problem. None of the supervisors were watching my team's first scored run, so no one stopped the run when the robot left the box, and we got an extra ~1 minute of "illegal" run time.
Tiger Invitational
Invasive Species(6): The ES just reused the exact slides from last year's States and Regionals tests - copies of which happened to be given to every team that attended the PA coaches clinic this year - plus only 3 or 4 original slides. We brought the National list binder instead, but I was able to remember a lot of answers from last year, so we didn't do as badly as we should have with the wrong binder.
Wind Power(13): It wasn't terrible, but they enforced the old 5 cm distance rule and they didn't tell us our voltage immediately after each test. Again, this wasn't that bad; it's just annoying that a state ES doesn't know the current rules.
SOUP
Wind Power(6): The build portion was very poorly run, which I've already touched on here.
Southeast PA Regionals
Robot Arm(9): They took waaaayyy too long to set up. Our timeslot started at 9:00, we were the second team to go, and we didn't leave until 9:35. They also made mistakes with the scoring. When there were stacks of pennies on the target, they counted the bottom penny toward the score even though the rules are pretty explicit that it shouldn't be counted. This caused our score to be about double what it should have been ¯\_(ツ)_/¯
Disease Detectives(6): Roughly the first half of the test was copied directly from last year's Battle of Valley Forge test.
PA States
Disease Detectives(4): This one wasn't necessarily a bad test, but it was just... weird. The multiple choice was pretty easy (with a few recycled questions from last year) and the case study was very short, but it repeated a lot of questions. There were two questions that asked you to interpret the same data (both were confidence intervals of odds ratios with p-values), except one was worth 10 points and the other was worth 8. Pretty much all of the differentiating points came from those two questions and another question that asked for advantages and disadvantages of different interview methods.
Experimental Design(19): Something here is very fishy, and this event was not even close to well run. More detail here.
Invasive Species(4): The test was harder than last year's, which is good, but it was oddly structured. Each station had four very easy 1-point questions and then a much harder 2-point all-or-nothing question. Since the hard question on each slide is probably what differentiated most teams, it just doesn't seem right to me to make it worth more and not give partial credit. This wasn't a huge problem though. One big thing was that the test was given on a PowerPoint presentation and there was a big, uncovered window on the door, so if you got there early (which everyone in my timeslot did because it ran a little late), you could see the last few species on the test as long as you didn't make it obvious to everyone else that you were trying. I'd like to think no one cheated this way, but it's a problem that it was an option.
Wind Power(1): This event had both good and bad parts to it, which is why I will also be mentioning it in Awesomely Run Event Stories. The test was a mix of harder theory questions and ridiculously easy Ohm's Law or unit conversion questions (they literally gave you a bunch of numbers and the unit they wanted the answer in, so all you had to do was multiply a few numbers). The only problem with the hard questions is that they were all taken directly from the Tiger or Regionals tests, so some teams had a clear advantage there. For blade testing, they enforced the 5 cm rule again. This shouldn't have had a big impact on scores, but come on, at least know the rules for the event you're running.