Re: Science Olympiad at MIT Invitational 2019
Posted: January 13th, 2019, 5:18 pm
randomperson123 wrote: Anyone know about scores for boomilever?
I think top scores were greater than 1200-1300 efficiency.
Raleway wrote: Gonna be a spicy hot take here;
This being my last year competing (and therefore my last MIT invitational), and having competed at many of the other top competitions such as Princeton and SOUP, I strongly disliked MIT. I competed here last year and also disliked it, but this year felt much worse. I competed in the last block of Sounds of Music, and I waited 50 minutes in line to do my five-minute build portion. Having to stand in the hallway with your instrument while people walk by, with everything going on, is terrible. Luckily, my other build portions went early in the day and finished right on time; big props to the Wright Stuff event supervisors, who I know from personal experience are great at what they do. That said, I definitely feel the flying arena was poor and created inherent disadvantages, though that's nothing against the event supervisors, since the choice of arena is fully on the MIT organizers.
It's not just that: knowing that my Fermi exam was scored incorrectly, as well as my Codebusters exam (after only glancing through it for about 30 minutes on our long bus ride home), really irks me. One error was in my favor, but I strongly dislike any error in grading, though I understand the time crunch. My team also mentioned that in Thermodynamics two teams were even handed the answer key, and those two teams reported it only 5 minutes after they got it. Completely unacceptable, in more ways than one. On top of that, every team in the first period of that event was at a disadvantage, having only 30 minutes rather than 50 because of these issues.
It is simply my opinion that the best invitational is one that runs smoothly and simply. All these small irritations pile up and really make for an uncomfortable invitational. Having attended PUSO and SOUP, I felt each was better run than MIT, especially PUSO. It's simple, it is exactly what it says it is, and we actually got a homeroom that was a real "room" with a men's bathroom on the same floor. SOUP's campus layout is slightly confusing, though not as bad as MIT's; even so, they had volunteers out in the cold directing people and answering questions to keep competitors in the right place.
I appreciate that MIT tried to expand its team list and numbers, but inevitably there comes a point where that is infeasible, and that's what happened here. I hope every following invitational reads this feedback and applies it so the same mistakes don't happen again. Many teams travel long distances and put up with some pretty bad complications to compete, and it's very disheartening to see avoidable mistakes. Congratulations to our Mason 2.0 from MIT this year and to everyone who competed!
*This is simply my own opinion, given my experience and thoughts*
I noticed many events had delays in device testing. There seems to have been a general shortage of volunteers this year, which may have been a part of it; for example, I heard that Mission ran with something like half the expected staff, and I ran with 2 people, one of whom arrived halfway into the second session and left immediately after the last one.
antoine_ego wrote: Will MIT watermark their tests this year?
Hopefully not...
antoine_ego wrote: Will MIT watermark their tests this year?
They will not. On the website, it says all tests will be released. Additionally, ESes have been given permission to post their exams and keys online.
Unome wrote: I noticed many events had delays in device testing. There seems to have been a general shortage of volunteers this year, which may have been a part of it.
I believe the primary reason for the volunteer shortage was that roughly 25% of people weren't on campus, and volunteer recruitment started somewhat late. I honestly thought I would be in trouble needing more volunteers for Mission, but I overestimated how many I needed and was actually fine with the 4-5 that I had.
I don't think the size is a serious problem. Some builds definitely needed more people and more device testing setups, but grading was not a problem for me, despite what people tell me was an extraordinarily long and difficult test. I had an assistant grade the multiple choice (85 questions at one point each) while I graded the rest of the test (195 points, divided into thematically appropriate chunks), and we had no problems at all with grading. We finished grading each session's tests during the next one, with time to spare and at a downright leisurely pace (at least, leisurely by my standards). I finished my final grading by 4:30 and left for scoring with a fully tidied-up room by 4:45. I'd like to think my grading is accurate, and I was particularly careful with summations, because those are such dangerous errors. Honestly, I probably could have graded the entire test myself and still been done in time to make it to awards by the official start time.
Room selection is more like 25% the organizers and 75% what the administration will allow. This is likely the reason I was in the same room as Disease; I assume the organizers had something else in mind but ran into problems with the administration or another organization that caused them to lose a room.
I don't find MIT confusing, but I'm generally good at navigation and maps, so I can't really comment on that.
windu34 wrote: Many of the build event supervisors were first-time supervisors at MIT, and although we had experience supervising and almost all of us are national medalists, supervising for 76 teams is something that is hard to even imagine until you have done it.
This is an important point. For test or lab events, having alumni with national experience is often the best way to ensure quality exams. MIT has excelled at recruiting these individuals. For build events, I would argue that past supervising experience at large tournaments is far more important than being an alumnus. Experience as a competitor doesn't translate as directly into quality supervising for these events. This is exactly why we focused on recruiting national and state event supervisors for these events at the Solon HS Invitational this year.
TheSquaad wrote: MIT this year was one of the best competitions I've ever been to in terms of content. The build testing rigs/facilities were great. The tests I took were a challenge unlike anything I'd seen. Overall a great tournament.
Except for one major issue: I had 3 scheduled build events, and I wasn't able to test any of them in the block I scheduled. The build testing facilities (Mission Possible tables, boomi rig, sounds room) could accommodate far too few people. For example, boomi testing normally takes ~6 minutes per team, and MIT has six 60-minute blocks, but there were 70 teams at MIT and only 1 boomi testing rig. It doesn't add up.
Build tests were constantly backed up; my Mission test was pushed into my boomi block, which forced me to test after block six, and my sounds build test also ran after its scheduled block 6.
Each of the actual testing facilities was great, but MIT needs more of them if it wants to stay this big.
Congrats to the teams from MIT's invite Saturday! Despite this being my third year as MIT's balsa ES, nothing quite compared to the insanity of running this year's Boom event. With 76 teams, 1 functioning (but messy, sand-spraying) rig, and belated volunteer recruiting, you're right: the minutes didn't compute. We ended up racing through testing the entire day, stayed open an extra two hours past the end of the last event slot just to get everyone in, and *still* had folks backed up in lines basically all day. I'm sorry I didn't have much chance to talk with every team or give them the proper time they deserved, but thank you all for being as patient and understanding as possible. Despite the drawbacks, MIT still runs one of the best invitationals around and has some of the best talent there is. Hopefully next year we'll learn from this year's mistakes and keep up the caliber of competition we always hoped for as competitors ourselves.
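For what it's worth, a quick sanity check of those numbers (a sketch assuming the ~6 minutes per test estimate above) shows one rig genuinely couldn't fit everyone in the scheduled blocks:

```python
# Back-of-the-envelope Boomilever throughput check. All numbers come
# from the posts above; ~6 min per test is TheSquaad's estimate, and
# the team count is 70 per TheSquaad (the ES cites 76).
MINUTES_PER_TEST = 6
TEAMS = 70
RIGS = 1
BLOCKS = 6
MINUTES_PER_BLOCK = 60

demand = TEAMS * MINUTES_PER_TEST             # 420 rig-minutes needed
capacity = RIGS * BLOCKS * MINUTES_PER_BLOCK  # 360 rig-minutes available

print(f"demand:    {demand} rig-minutes")
print(f"capacity:  {capacity} rig-minutes")
print(f"shortfall: {demand - capacity} rig-minutes")
# 60 rig-minutes short with 70 teams (96 short with 76 teams), before
# any setup or weighing overhead; consistent with the ES staying open
# roughly two extra hours.
```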