Re: National Test Discussion
Posted: May 24th, 2017, 6:01 am
pikachu4919 wrote: Wait kenniky why were you at home during nationals this year...?
check the 2017 MA thread
bernard wrote: Here's some data you can play with. I've included the Pearson correlation coefficient comparing each event to overall team rank. This can be done for any tournament's results.
So I'm not really understanding this. Do the numbers get higher if more good teams succeed in a certain event? (If so, WIDI is almost like the lottery)
Tailsfan101 wrote: So I'm not really understanding this. Do the numbers get higher if more good teams succeed in a certain event? (If so, WIDI is almost like the lottery)
Coefficient of correlation tests the correlation between two variables, in this case overall team ranking and team ranking in a certain event. The closer the coefficient of correlation gets to +1.00, the closer it is to a perfect correlation between overall team rank and the team rank in a certain event.
Uber wrote: Coefficient of correlation tests the correlation between two variables, in this case overall team ranking and team ranking in a certain event. The closer the coefficient of correlation gets to +1.00, the closer it is to a perfect correlation between overall team rank and the team rank in a certain event.
The Pearson correlation coefficient ranges from -1 to +1, where -1 indicates negative correlation (teams that do well overall do poorly in this event), 0 indicates no correlation, and +1 indicates positive correlation (teams that do well overall do well in this event). Note that the PCC does not tell us the slope of a regression.
You're right about WIDI, and it's likely because it's difficult to perform consistently, so a lot of teams just gave up. Ours definitely did.
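To make the arithmetic behind these coefficients concrete, here is a minimal Python sketch of the Pearson calculation on two rank lists. The ranks are made up purely for illustration; a results spreadsheet would get the same number from a function such as CORREL.

[code]
# Minimal sketch: Pearson correlation between overall team rank and rank in one event.
# The two rank lists below are made up purely for illustration.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length number lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

overall_rank = [1, 2, 3, 4, 5, 6]   # overall team ranks
event_rank   = [2, 1, 4, 3, 6, 5]   # the same teams' ranks in one event
print(round(pearson(overall_rank, event_rank), 2))  # 0.83: event ranks track overall ranks closely
[/code]

Since both inputs here are already ranks, this is the same thing as Spearman's rank correlation: +1 would mean the event order matches the overall order exactly, while a value near 0 would mean the event order tells you essentially nothing about the overall order.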
chalker wrote:
chalker wrote:
Max event correlation: 0.52
Average event correlation: 0.39
Standard deviation: 0.09
Min event correlation: 0.20
Ecology had a correlation coefficient of 0.28. This means it was about 1 standard deviation below the average correlation for events. There were several events with lower correlations.
In essence, what this means is that statistically, the resulting ranks in Ecology are reasonably well aligned across all teams with the overall team ranks.
Oops.. I did this during lunch and thanks to Bernard posting his sheet I realized I did the wrong ranges. Below are the actual numbers, but the general conclusion is the same:
Max: 0.85
Average: 0.75
Std. Dev.: 0.10
Min: 0.42
Ecology: 0.57
WIDI and EV are lower than Ecology.
8 of the 23 events are under the average.
Chalker, is there ever any discussion of removing or refining events if they consistently show a poor correlation with team scores? If WIDI is so variable and a poor predictor of team outcome, why has its grading/scope not been better standardized or controlled?
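For anyone who wants to reproduce this kind of summary from their own results, here is a small Python sketch. The per-event coefficients you would pass in come from your own sheet (the example dictionary below uses made-up entries); the last two lines simply redo the "about 1 standard deviation below the average" check using the first-run numbers quoted above (average 0.39, standard deviation 0.09, Ecology 0.28).

[code]
# Sketch: summarizing per-event correlation coefficients along the lines chalker describes.
from statistics import mean, pstdev

def summarize(correlations):
    """correlations: dict of event name -> correlation with overall team rank."""
    values = list(correlations.values())
    avg = mean(values)
    return {
        "max": max(values),
        "average": avg,
        "std_dev": pstdev(values),  # population SD; a sheet may use the sample SD instead
        "min": min(values),
        "events_below_average": sum(1 for v in values if v < avg),
    }

# Toy call with made-up coefficients (not the real 2017 values):
print(summarize({"Event A": 0.85, "Event B": 0.57, "Event C": 0.42}))

# First-run numbers quoted above: average 0.39, SD 0.09, Ecology 0.28.
avg, sd, ecology = 0.39, 0.09, 0.28
print(round((ecology - avg) / sd, 1))  # -1.2, i.e. about one standard deviation below the average
[/code]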
Uber wrote: 5. MIT ecology was hands down the most brutal test I've ever taken. Golden Gate also had a much more difficult test than nationals, with more critical thinking involved. We won both. The national test came nowhere close. We finished most stations and double-checked with time to spare.
I'm glad you liked my MIT test. It was supposed to be brutal and to separate the good teams from the weak ones. I remember you won first by a significant margin, so congrats!