Re: 2019 Harvard Undergraduate Science Olympiad Invitational
Posted: February 3rd, 2019, 6:35 am
Good luck to all teams competing today!
TheSquaad wrote:
1. Acton-Boxborough B (53)
2. Acton-Boxborough A (91)
3. Newton North B (227)
4. Hillsborough A (229)
5. Belmont A (243)
6. Newton South A (292)
Note: AB had a superscore of 32

This is amazing and scary at the same time. Such dominance! Are the official (or unofficial) scores up anywhere?
demir wrote:
This is amazing and scary at the same time. Such dominance! Are the official (or unofficial) scores up anywhere?

https://app.avogadro.ws/invitational/harvard-c/ ?
TheSquaad wrote:
Sounds: The test was probably the best I've ever taken in my Scioly career (shoutouts to windu34). The instrument testing however had an extremely bad decibel meter (the entire testing room could hear my build through the walls and it only scored 82 dB when I know it hits over 100).

I'm so glad you enjoyed it! I certainly spent a lot of time trying to write a balanced exam (not just pure physics or music theory) that focused on application, with relatively few questions that could be answered from a binder. Admittedly, I may have put a little too much saxophone and jazz theory on it, but hopefully that gave it a uniqueness that may not appear at other tournaments. I thought the woodwind questions were particularly interesting (and difficult: only 1 or 2 teams got more than half credit on each of them), and I dug through many PhD dissertations to figure out how the physics worked myself. The exam had a pretty good, mostly even score distribution, with a high of ~150 and a low of 15 out of 240.
I thought I wouldn't have the same troubles MIT Sounds had, since I had 8 instead of 12 teams per time block and I focused on grading and proctoring rather than scoring devices, but as many of you learned, I was wrong, and we were backed up by about 20 minutes most of the day and about 40 by the end of it. After speaking to numerous other tournament directors, it is clear that Sounds REQUIRES two sets of device-testing personnel to get through all the teams in one time block. I'm slightly disappointed I didn't foresee this, seeing as I helped run one of the TWO optics boards at nationals. This will be a lesson I carry into supervising future physics events.

That said, I know several teams were not pleased with the dynamic volume test, but we did have one full score (a trombone) and several in the 80s. I was actually quite pleased not to have an overwhelming number of high dynamic scores like I expected, and the microphone Harvard SciOly purchased for me was quite easy to use; I definitely think it helped with pitch test consistency.

Now for the pitch test: plenty of teams seemed not to have practiced under testing conditions, in which the AVERAGE pitch is what matters. A lot of teams with otherwise in-tune devices didn't do so well. I won't go into any more depth about which device designs I saw and what worked best, but I will say that my volunteers running the device testing thought that the teams that had put in the most effort ended up scoring the best, and that the trombone was clearly the best instrument for the event (their opinion; I cannot vouch for it).

Finally, I hope everyone who attended, and everyone who ends up taking my test in the future (as I'm sure it will circulate throughout the "black market"), will enjoy it and learn something new from it, as I put a lot of effort into making sure it wasn't "just another binder-regurgitating Sounds test" of the kind I have seen so much of.
scienceisfunalil wrote:
Hi Windu! I wasn't at Harvard, but my B team was. They did say they had some issues with in-tune notes reading out of tune. Could you perhaps elaborate on what you mean by not practicing under testing conditions? Do you mean just not practicing playing the note for 5 seconds? Thanks

I think he means that most people only tune their instrument with a traditional tuner, which has a low refresh rate and always centers on the actual pitch. Google Science Journal (which is what was used) picks up the pitch continuously, including any overtones and points with no audible pitch. Those often create huge spikes and bowls in the pitch curve, messing up the average pitch.
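To put rough numbers on that (the readings and the cutoff below are purely invented for illustration, not the actual scoring setup or anyone's real trace), here is a small Python sketch of the averaging effect: a note held just a few cents sharp of A4, plus a couple of logger artifacts, averages out far sharper than the note that was actually played.

```python
# Made-up illustration: a few bad readings in a continuous pitch trace can
# drag the AVERAGE pitch well off, even when the sustained note is in tune.
from math import log2

def cents_off(measured_hz: float, target_hz: float = 440.0) -> float:
    """Deviation from the target pitch in cents (100 cents = 1 semitone)."""
    return 1200 * log2(measured_hz / target_hz)

# A steady note sampled many times by a continuous pitch logger...
trace = [440.0 + i % 3 for i in range(50)]   # hovers a few cents sharp of 440 Hz

# ...plus a few artifacts of the kind described above (invented values):
trace += [880.0, 880.0]                      # an overtone read as an octave spike
trace += [60.0]                              # a "no audible pitch" reading (room hum)

avg = sum(trace) / len(trace)
print(f"average of full trace: {avg:.1f} Hz ({cents_off(avg):+.0f} cents from A4)")

# A traditional tuner effectively ignores those outliers by locking onto the
# dominant pitch; dropping them here recovers the nearly in-tune reading.
steady = [f for f in trace if 300 < f < 600]
steady_avg = sum(steady) / len(steady)
print(f"average of steady samples: {steady_avg:.1f} Hz ({cents_off(steady_avg):+.0f} cents from A4)")
```

The full trace comes out roughly 40 cents sharp while the steady samples sit within about 4 cents of A4, which is presumably why practicing against a continuous logger rather than a snapshot tuner makes such a difference for this event.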