After an RFP has closed, proposals are first checked for compliance against any mandatory (must) criteria. The proposals that meet the mandatories are then given to the evaluation committee. Each committee member evaluates every proposal individually first. Then the group meets to reach a consensus score, so that each proposal ends up with one score agreed upon by every committee member. HOW this consensus is to be reached should be decided in advance: an in-person meeting, averaging, or a hybrid? (The hybrid approach is explained below.)
Consensus Evaluation - there are two issues that occur with groups. First, some people don't bother reading/evaluating. Second, it's a struggle to get more than 3 people together for an efficient evaluation: people get tired, and things get missed in the effort to get through the last few responses. Generally, the first response is marked hard, the second one more 'appropriately', and things drift downward from there. So, my suggestion in the past has been to have committee members evaluate individually and submit their scores to the chair, who puts them into a spreadsheet. Where scores fall outside a predetermined variance, those specific questions are the ones the committee meets to discuss and bring to a consensus score. This streamlines and targets the discussion and keeps everyone 'focused' instead of getting tired and losing steam in the process. It also tends to 'motivate' people to submit scores (or at least read the proposals), and even if they don't, they are still involved in the consensus meeting; with targeted discussions, they aren't slowing down the evaluations.
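To make the flagging step concrete, here is a minimal Python sketch of the hybrid workflow, assuming the chair has transcribed each evaluator's per-question scores into a simple table. The names (scores, VARIANCE_THRESHOLD) and the sample numbers are illustrative, not from any real evaluation tool.

```python
VARIANCE_THRESHOLD = 2  # predetermined: spreads of 2+ points go to discussion

# evaluator -> {question: score}, as the chair might transcribe from submissions
scores = {
    "Evaluator A": {"Q1": 4, "Q2": 3, "Q3": 5},
    "Evaluator B": {"Q1": 4, "Q2": 5, "Q3": 4},
    "Evaluator C": {"Q1": 3, "Q2": 2, "Q3": 5},
}

questions = sorted({q for sheet in scores.values() for q in sheet})

for q in questions:
    values = [sheet[q] for sheet in scores.values()]
    spread = max(values) - min(values)
    if spread >= VARIANCE_THRESHOLD:
        # flag for the consensus meeting: targeted discussion only
        print(f"{q}: scores {values} (spread {spread}) -> discuss at meeting")
    else:
        # close enough: record the average as the consensus score
        print(f"{q}: scores {values} -> average {sum(values) / len(values):.1f}")
```

Running this, Q1 and Q3 average out quietly while Q2 (a 3-point spread) gets flagged, so the meeting agenda is only the questions that actually need discussion.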
Options for Consensus Scoring:
1) average all scores that are within 1 point - get consensus on the highlighted ones that have a 2+ point spread
2) go with 'majority rules' on the scores within 1 point - get consensus on the highlighted ones that have a 2+ point spread
3) go with 'majority rules' on the scores within 1 point - average the highlighted ones that have a 2+ point spread. (This is not the best option, because we need notes to support why there was such a spread in scores.)
4) get consensus on all points - regardless of point spread.
There are pros and cons to each approach; however, with a team of more than 3 evaluators, using a scale of 0-5 or 0-10 and averaging the scores that fall within an acceptable degree of variance should not dilute the importance factor.
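As a rough illustration of how options 1 through 3 differ per question, here is a small Python sketch. The resolve() helper, the option numbering, and the sample score lists are hypothetical conveniences for this post, not part of any procurement standard.

```python
from statistics import mean, multimode

def resolve(values, option):
    """Return a consensus score per the chosen option, or None if the
    question needs to go to a consensus discussion."""
    spread = max(values) - min(values)
    if spread <= 1:
        if option == 1:
            return round(mean(values), 2)  # option 1: average close scores
        return multimode(values)[0]        # options 2 & 3: majority rules
    # spread of 2+ points
    if option == 3:
        return round(mean(values), 2)      # option 3: average even wide spreads
    return None                            # options 1 & 2: meet and discuss

print(resolve([4, 4, 3], option=1))  # 3.67 (averaged)
print(resolve([4, 4, 3], option=2))  # 4 (majority)
print(resolve([5, 2, 4], option=1))  # None -> consensus meeting
```

Note that option 3 silently papers over a wide spread, which is exactly why it's the weakest choice: the average hides the disagreement that the discussion notes are supposed to document.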