jkang wrote: However, I think one screw-up we had in our device portion may have been what cost us a medal at nats, which is unfortunate.

chalker wrote: 6th place got 96.48 total, 7th got 95.95.

jkang wrote: LOL my bad.

jkang wrote: Also, would it be possible to see the raw scores for It's About Time for the top 6?

chalker wrote:
93.52
91.21
90.61
89.34
88.91
88.58

jkang wrote: Very interesting. I thought the test was well put together, and I even took the national test from 2010; my score on that test would have placed 2nd at nationals that year, only a fraction of a point from first. I wonder if messing up our clock is what put us in 12th place.

asdfqwerzzz2 wrote: Hey Chalker, I'm not sure if you even have access to this or the ability to disclose it, but what was the point distribution among the top 6 Astronomy teams?

chalker wrote:
69.5
68
63
62.5
61.5
60

asdfqwerzzz2 wrote: A little bummed that we were just half a point away from an additional place, but that's extra motivation for next year. Thanks for posting the results!

Unome wrote:
Fossils: This was a very good test: 19 stations covering a broad range of material from the rules, pacing tight enough that you had to be fast to do well, and questions ranging from fairly easy to difficult, so that we finished some stations with ~20 seconds to spare and others (dinosaurs, as mentioned above) we didn't finish at all.
5/5
Simple Machines: As said in the post above, this test was too easy. It required only superficial knowledge and basic calculations; we had to reference the binder just twice in the whole test, and we finished early even after thoroughly checking our answers. We most likely got 2-3 questions wrong, so as RontgensWallaby said above, the placings came down to how well teams did on the lever portion (which is the reason we managed to do well on easy tests at all; our lever method has been strong enough to score 45-48 out of 50 even for the most inexperienced competitors since last January). That said, the rest of the event was run very well: the supervisors gave specific instructions for everything, so I never had to ask how to write the ratios, how many significant figures to use (or whether to use them at all), how to stop the lever-portion timer, etc., as I normally do at other competitions.
4/5
Bio-Process Lab: This was a superb test: directly to the rules (naturally, since it was written by the Biology Rules Committee Chair), well paced, and pitched at a very good difficulty (we only answered about 80% of the questions, yet we still got 4th). It shows exactly what a Bio-Process Lab test should be, since, as most of you know, this event is rarely run well.
5/5
Meteorology: I'm not sure how difficult a Meteorology test should be, so I'm not sure what to say, but my impression was that it was too easy (though not necessarily easy for me, since this is one of my weaker events); it was about the length of our state test, but with less multiple choice, and no question I can remember was very difficult.
4/5
Anatomy: This test was a good example of how not to write (or more specifically, how to not write) a test. It was the Division C Nebraska state test, printed out of order and recycled from an answer key. Most of the answers were blanked out, but badly: much of the Integumentary short-answer section was left too dark to write in. Some answers were not blanked out at all, some questions were blanked out, and some questions referenced a diagram that didn't exist, which the proctors said would be thrown out. On top of that, the end of the test was never printed, including half of a matching section, and for the matching questions whose answers were missing we were just told to "do our best," which suggests to me that they graded that part anyway.
2/5

sciolyboy123 wrote: Wait, in Bio, you only answered 80% of the questions? My partner and I answered 89% of the test and got 14th. We probably did something wrong, but it was a good test.

Unome wrote: Approximately; it was definitely somewhere between 80% and 90%, and I'm reasonably certain we fully completed no more than two stations.


Intj wrote: Are the efficiencies for the top 6 bridges also available?

chalker wrote: The top three were ~3600, ~3500, and ~3200, and seventh was 2456.
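For anyone comparing those numbers: as I understand it, bridge efficiency is the load supported divided by the bridge's mass, both in grams (check that year's rules to be sure). A rough sketch of the arithmetic, assuming the top bridges held the full load; the 15,000 g figure is an assumption on my part, not something from this thread:

    # Rough sketch of the efficiency arithmetic, not an official scoring script.
    # Assumes efficiency = load_supported_g / bridge_mass_g and that each bridge
    # held the full load; FULL_LOAD_G = 15_000 is an assumed value, check the rules.
    FULL_LOAD_G = 15_000

    def efficiency(load_supported_g: float, bridge_mass_g: float) -> float:
        """Grams of load held per gram of bridge mass."""
        return load_supported_g / bridge_mass_g

    def implied_mass_g(eff: float, load_g: float = FULL_LOAD_G) -> float:
        """Bridge mass implied by a given efficiency, if the full load was held."""
        return load_g / eff

    for eff in (3600, 3500, 3200, 2456):  # figures quoted above (approximate)
        print(f"efficiency ~{eff} -> bridge mass ~{implied_mass_g(eff):.1f} g")

So an efficiency around 3600 would imply a bridge of roughly 4.2 g, if it held a full 15 kg load.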
