ehuang wrote:
Hi! I have a few things to say to this:

Unome wrote:
17 stations makes a lot more sense, and also sounds more like what you would do. For the MC, personally I'm not a fan of separate packets of questions unrelated to the stations because it's just more papers to keep track of, but I can understand the appeal.

varunscs11 wrote:
There were only 17 stations, with 14 having specimens to identify, and most answers were a few words (originally I had 15 but had to add 2 more because there are 17 teams per block). And you had 2.5 minutes to complete each station. 10% is approximately 1 question per station (some had more). There were also a lot of bonus points built in for knowing extra material; I think New Trier was the only team to get any of them.
A mistake lots of teams made on the exam was not working separately on the MC portion. I know you couldn't have taken the exam apart, but there are many ways to make it work. I know there were only 7.5 minutes total to work on Part A, but a lot of the questions were easy points that few teams got.

I still think even 1 in e.g. 15 questions being trivia is too much, though we may be working off of different definitions of "trivia".

Firstly, I don't know about the last time block, but rotation instructions would've been extremely helpful for me, and shouldn't have "slipped through the cracks" regardless. We weren't even told at the beginning which way the stations rotated - a lot of people went the wrong way because they assumed the rotation was to the right when it was actually to the left.
Secondly, even if the supervisors have no control over the room, why were the stations taped to the chairs? This made it extremely difficult to balance a binder, bend over to look at the station, and try to write on the answer sheet all at the same time, especially with the tiny space between the desks and the seats below.
Thirdly, point differences being minuscule is obviously not good, but neither is having the top team get only 20% of the points. Both are extremes that should be avoided. And about the length - an alum from my school peer reviewed your test and told you it was too hard, but apparently that meant nothing because you said you didn't care and that it was your "style". Maybe you should listen to your peers next time - 10-15 short-answer questions is too much for a station that was only, what, 2 minutes long? My partner couldn't even write at the same time because she was busy balancing and using the binder, which left me frantically scribbling down what I could. At that point, it becomes a matter of who's the fastest writer, not who knows their stuff. 6-8 short-answer questions? Probably fine. 10-15? Way too many, so many that it even became a nuisance to find which question I was answering (since I skipped around, as I'm sure most people did).
Fourthly, let's talk about the content! As Cherrie said, there was too much fringe information - things that were related but weren't really relevant to the event. Additionally, some of the ID was really pointless - asking us to identify 12 range maps at a station, or 4 different skeletons. This isn't me whining like "oh no, range maps and skeletons!!!" It's just that the sheer amount of it, combined with the questions, made it pointless. Sure, maybe some of the content was things that top schools would have known, but a majority of it was so high-level that the test probably could've made an actual herpetologist light-headed. I mean, pictures of blood cells and skin sections? How were we supposed to know that?
And finally, maybe you're right! Maybe Cherrie just /sucks/ at herpetology. But also, maybe next time, before you indirectly call my team member not smart enough or not hard-working enough, you should consider the extremely valid criticisms of your event. I heard complaints after Rocks at MIT, and I heard complaints after this one as well. No one is "clinging" to the notion that a test has to be a certain format with certain questions. We just want a test that gives a more valid representation of how good we are at the event. And if your goal was to help people practice for future competitions, I have to say, I didn't really find it that valuable as practice. In fact, it felt like a complete waste of time. But that's just my opinion!

1. Touché. You're right, it shouldn't have slipped through the cracks, but I also had the execs breathing down my neck to get started.
2. The stations could have been taped to the tables, but that would have meant putting the binder in your lap and the answer sheet on top of the station. Again, a valid point, but the room setup wasn't in my control.
3. The top team actually got 25% of the points, and when 1st place in Dynamic at MIT was at 30%, that's not a crazy difference between the two. And I never said that it was my "style"; in fact, I got no comments that said such things. And once again, stations were 2.5 minutes. And no, it does not become a matter of who is the faster writer, because teams that have memorized the information won't even have to look at the binder, which saves a lot of time.
4. Skeletons are within the scope of the rules; in fact, rule 3b explicitly gives "skeletal material" as an example. Furthermore, rules 3c and 3d state that competitors are expected to show knowledge of biogeography and distribution of specimens. Asking to identify range maps fits that, and in fact I drew inspiration from the 2009 Nationals Herpetology test, so there is precedent. Blood cells and skin sections fit within rule 3d as well, and they wouldn't have made a real herpetologist (or vet) light-headed.
5. I never called your team member "not smart" or "not hard-working". I was simply pointing out areas of weakness that, as a team, you may want to improve upon.
6. I don't know who you heard complaints from about Rocks at MIT, but the responses I got were, on average, positive, with people noting the specimen quality, the exam difficulty, and the application-based questions as positives and the test length as a negative. And that exam had a great distribution, and in general, the teams that did well deservedly did so.
7. I'm not going to apologize for the length or the "fringe" information. Some of that "fringe" information relates to questions that have shown up on previous exams (for example, MIT asking what animal the Egyptian god Sobek is, or about children walking in a line). I will, however, apologize for the logistics. If you believe it was a complete waste of time, then so be it - that's just your opinion. If you are looking for exams that ask you to identify from standard images with a standard set of questions, you aren't going to find them at the top-tier invitationals.