I have a London 2012 Olympic hangover. For the last 16 days, my morning routine has been to get up at crazy early hours, turn on the television with my 7-year-old, then make breakfast, coffee, etc. while we watch all sorts of athletic events. I had my select favourites that I could not miss, but I learned some new things watching other events with my son.
For instance, diving... I noticed the numbers up on the screen with a few 'strike-outs', not understanding what that meant. I didn't understand the scoring, nor why they had such huge totals. I then learned from the announcers and webpages: "Seven judges score the dive out of 10. The top and bottom two scores are discarded, and the remaining three scores are added together and multiplied by the tariff. Marks are based on a host of factors, including: take-off position, flight, performance of move and entry. The quieter the splash and the straighter the back, the better." This led me to wonder: could we do something similar for evaluating proposals?
Right now, an RFP evaluation book weights each individual criterion from the RFP (similar to the diving tariff). Individual evaluators (judges) score the proposals (dives) against the questions. The next step is to meet for consensus on a final score - this is where some differences occur: some groups will average all the scores (to save time and effort), whereas others will sit in a room for hours discussing each question until everyone comes to a consensus. A hybrid of the two is to discuss only the 'outliers' for consensus and use an average for the rest.
Maybe we should consider discarding the outliers (highest and lowest) and adding the remaining evaluators' scores? Would that be much different from averaging?
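To get a feel for whether it makes a real difference, here's a rough sketch comparing a plain average with a diving-style "drop the outliers" approach on one question. The evaluator scores are invented for illustration, and I've used a trimmed average rather than a raw sum so the two numbers are directly comparable:

```python
def average_score(scores):
    """Consensus by averaging: mean of all evaluators' scores."""
    return sum(scores) / len(scores)

def trimmed_score(scores):
    """Diving-style: drop the single highest and lowest score, average the rest."""
    middle = sorted(scores)[1:-1]
    return sum(middle) / len(middle)

# Hypothetical scores out of 10 from five evaluators on one RFP question
scores = [6, 7, 7, 8, 3]   # one evaluator scored well below the rest
print(round(average_score(scores), 2))  # 6.2  - pulled down by the outlier
print(round(trimmed_score(scores), 2))  # 6.67 - closer to where most evaluators landed
```

When everyone scores in the same neighbourhood, the two methods land in nearly the same place; they only diverge when one or two evaluators are well out of line with the rest, which is exactly the case the hours-in-a-room consensus discussion is meant to resolve.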
What do you think? Keeping in mind the need for openness, transparency and fairness in the process...
--------
Ref: Tom Daley bids for Olympic Diving gold