
Analysis of Rating results in the (single) Results table

When you add a Rating sheet in the Meeting dashboard, a Results table is created with it. Add extra Results tables, for instance, if you want to include a differently sorted table in the Meeting report or add a Results chart to such a differently sorted table.

Accessibility of the table

By default, the results of any particular Rating are only accessible after it is over.

While the Rating is in progress, the Facilitator or any participant entering the Results table will not see the current state of the results but merely an anonymous "progress meter" which indicates the status of the rating process. Most participants will control their curiosity if you explain that this restriction provides a level playing field, not just in the rating process but also in analysis. It is fairer if nobody knows the state of play when they submit their ratings, and it is fairer to reveal the results to everyone at the same time rather than give an advantage to those who could prepare their view of how the results should be read.

To access the Results, simply 'close' the Rating sheet.

That said, if you have good cause to enable access to the results while rating is in progress, you can do so by selecting the relevant option in the 'View' settings tab of the Rating sheet.

Joint or individual analysis

If you use Rating for prioritization or decision-making by the group, you'll want to analyze the results with the group.

For this you can choose between

  1. Sharing your screen
    and walking the group through the table
  2. Opening the table
    for participants to enter and peruse it individually

If you have the time and the importance of the occasion warrants it, you can combine both approaches by giving participants a couple of minutes of free rein before walking (or talking) them through the table for joint analysis in the group. When talking participants through the table, use the 'Focus' button to draw attention to particular items and transmit your view of the table (sorting, team comparison, scroll position etc.) to the group.

When going through the results, you'll probably want to watch out for the following:

  1. Which items have scored highly?
    Usually, top-scoring items warrant further attention.
  2. Which items are irrelevant?
    If the group agrees that items are not really 'effective', 'important' or 'promising', you probably don't want to spend time on them, even if they are somebody's hobby horse or stock argument.
  3. Where do we disagree?
    Depending on your purpose, items rated very high by several participants and very low by others can be the most interesting of the lot. On the other hand, if you are looking for potential fixes and you have identified some highly effective and feasible solutions, you may want to focus on those rather than controversial ideas that might also work after building consensus.

Sorting

By default, Results tables are sorted by the mean rating in descending order, putting the highest rated items at the top of the table.

Sorting can be controlled in two ways:

  1. Persistently
    by specifying the sort order in the Basic settings tab. This setting determines how the table is sorted on entering and in the Word report. Add extra Results tables in the Meeting dashboard if you want the report to give the results of the rating sorted by mean and, for instance, also sorted by normalized standard deviation (SD).
  2. Temporarily
    by clicking on the header of a table column. This temporary sorting has no effect on how others see the table when they enter it or how the table is printed.
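
If you export the ratings (for instance to Excel) and want to reproduce the default order outside MeetingSphere, the rule is simply 'sort items by their mean rating, highest first'. The following is a minimal sketch in Python with invented item names and ratings; it is not MeetingSphere's actual data format.

  from statistics import mean

  # Hypothetical exported ratings: item -> individual ratings on a 1-10 scale
  ratings = {
      "Improve onboarding": [8, 9, 7, 10, 8],
      "Hire a facilitator": [10, 2, 10, 1, 10],
      "Redesign intranet":  [4, 5, 5, 3, 4],
  }

  # Default Results table order: mean rating, descending (highest rated first)
  by_mean = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)

  for item, values in by_mean:
      print(f"{mean(values):5.2f}  {item}")

Temporary sorting by clicking a column header (point 2 above) applies the same idea with a different sort key.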

Standard deviation

Standard deviation is an indicator of consensus in the group. The SD value given in MeetingSphere is 'normalized' for the rating method by which the rating has occurred.

Thanks to this adjustment, values

  • below '0.2' indicate general to strong consensus in the group. '0.0' means identical rating by all.
  • above '0.3' indicate strong dissent (controversy) between participants. '0.5' means maximum dissent.

By default the threshold for highlighting 'strong dissent' is set to 0.3. Adjust this value in the 'View' settings tab for sensitivity analysis.
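
The exact normalization MeetingSphere applies is not spelled out here, but one simple normalization that reproduces the anchor values above (0.0 for identical ratings, 0.5 for maximum dissent) is to divide the population standard deviation by the width of the rating scale. The sketch below illustrates this assumption for a 1-10 scale; treat it as illustrative, not as MeetingSphere's exact formula.

  from statistics import pstdev

  def normalized_sd(values, scale_min, scale_max):
      # Population SD divided by scale width (an assumption, see above):
      # 0.0 when all ratings are identical, 0.5 when the group splits
      # evenly between the two ends of the scale.
      return pstdev(values) / (scale_max - scale_min)

  print(normalized_sd([7, 7, 7, 7], 1, 10))    # 0.0   - identical ratings
  print(normalized_sd([1, 1, 10, 10], 1, 10))  # 0.5   - maximum dissent
  print(normalized_sd([6, 7, 7, 8], 1, 10))    # ~0.08 - strong consensus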

Scale values

For ratings by numerical scale, the detailed results which sit to the right of the main table show how often each particular value was clicked. Expand these details to analyze items whose mean rating is above average but whose standard deviation is high. Such items will have been rated very highly by part of the group while scoring very low with other members of the group.

Exploring the reasons for such divergent assessments, such as differences in information or assumptions, can yield valuable insight.
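
If you work with exported ratings outside the tool, the same 'how often was each value clicked' view is easy to rebuild, and polarized items (above-average mean combined with high spread) can be flagged automatically. The sketch below reuses the invented data and the assumed normalization from the examples above.

  from collections import Counter
  from statistics import mean, pstdev

  # Hypothetical exported ratings on a 1-10 scale
  ratings = {
      "Improve onboarding": [8, 9, 7, 10, 8],
      "Hire a facilitator": [10, 2, 10, 1, 10],
      "Redesign intranet":  [4, 5, 5, 3, 4],
  }

  overall_mean = mean(mean(v) for v in ratings.values())

  for item, values in ratings.items():
      counts = Counter(values)  # how often each scale value was clicked
      freq = ", ".join(f"{v}: {n}x" for v, n in sorted(counts.items()))
      print(f"{item}: mean={mean(values):.2f}  clicks -> {freq}")
      # Flag items with an above-average mean but high normalized SD
      # (scale width 9 for a 1-10 scale, threshold 0.3 as in the 'View' tab)
      if mean(values) > overall_mean and pstdev(values) / 9 > 0.3:
          print("  -> polarized item, worth exploring the reasons")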

Team comparison

If rating occurred with 'team' tags, results can be reviewed by team and be compared between teams.

Select the relevant teams with the 'team comparison' widget at the bottom of the table to find out whether consensus or dissent runs across teams or between teams, and how specific perspectives influence perceptions.
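
Outside the tool, a team comparison amounts to grouping the individual ratings by their team tag before computing the statistics. Below is a minimal sketch with invented team tags and the same assumed normalization; the real export layout may differ.

  from collections import defaultdict
  from statistics import mean, pstdev

  # Hypothetical ratings for one item: (team tag, rating on a 1-10 scale)
  item_ratings = [
      ("Sales", 9), ("Sales", 8), ("Sales", 10),
      ("Engineering", 3), ("Engineering", 4), ("Engineering", 2),
  ]

  by_team = defaultdict(list)
  for team, value in item_ratings:
      by_team[team].append(value)

  for team, values in by_team.items():
      nsd = pstdev(values) / 9  # normalized by scale width, as assumed above
      print(f"{team:12s} mean={mean(values):.2f}  normalized SD={nsd:.2f}")

  # Consensus within each team but a large gap between team means suggests
  # the disagreement runs between teams rather than across the whole group.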

Note that you can also specify team comparison for the Word report. Export to Excel automatically produces all relevant views of the table, including 'by team', as separate worksheets.