"View" settings of (single) Results tables

In the 'View' tab, you control the appearance of the Results table.

1 Delegate structuring of table by folder

By default, the Facilitator (you) controls the 'Structuring by folder' of results tables. The Facilitator's choice also controls the way participants see the table.

Switch 'Delegate to participants' (Default: OFF)
shows (ON) or hides (OFF) that control for participants. When delegated, the Facilitator's setting merely becomes the participants' default setting. Such delegation only matters if you open, or plan to open, the Results table for participants to peruse independently.

2 Mark up dissent

By default, high values for standard deviation are marked up in the Results table. Disable this option if you do not want to draw attention to dissent.

The default threshold value for 'strong dissent' is 0.3. Adjust this value, for instance, for sensitivity analysis. If you lower the threshold to, say, 0.28, does the number of red flags increase dramatically? If so, there may be more dissent than immediately meets the eye. Or do most red flags go away if you raise the threshold slightly? In that case, the graphic impression of many red flags may overdramatize the situation.
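The sensitivity check described above can be sketched in a few lines of Python. The normalized SD values below are hypothetical, and the sketch assumes that items at or above the threshold are flagged:

```python
# Hypothetical normalized standard deviation values for seven rated items
normalized_sd = [0.12, 0.29, 0.31, 0.27, 0.45, 0.30, 0.08]

# Count how many items would be flagged as 'strong dissent'
# at the default threshold (0.30) and slightly below/above it
for threshold in (0.28, 0.30, 0.32):
    flagged = sum(1 for sd in normalized_sd if sd >= threshold)
    print(f"threshold {threshold:.2f}: {flagged} item(s) flagged")
```

If a small change in the threshold swings the count sharply, the default picture of dissent deserves a closer look.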

For an explanation of how MeetingSphere normalizes Standard Deviation as an indicator of consensus that works across rating methods, see further below.

3 Folders

By default, Results tables do not show the folder of rated items, which they may still 'remember' from categorization in the Brainstorm workspace or to which the Facilitator may have sorted them in the Rating sheet.

To inform users of an item's category when the results table is not 'structured by folder' (see above), Facilitators can display the folder name

  1. in a 'separate folder column' (Default: OFF), or
  2. appended to the item text (Default: OFF), or
  3. prefixed to the item text (Default: OFF)

Note on (normalized) Standard Deviation (SD)  

The Standard Deviation of a data set summarizes how much its values vary from the average (mean) value.

An indicator of "consensus"

A high standard deviation among ratings on a particular issue means that the individual ratings submitted were quite different from each other; a low or zero standard deviation indicates that they were all similar or identical. When applied to Rating results, a high standard deviation implies a contentious issue, while a low standard deviation implies a general consensus.


The usual formula for Standard Deviation is to square* the difference between each value and the mean, add up the squared values, divide by the number of values, and take the square root**. The dimension of the resulting value is the same as that of the data set itself; for instance, the average height of adult men in the U.S. is said to be 70", with a standard deviation of 3". Statistically, this means that about two thirds of all men are between 67" and 73" tall.
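The steps above translate directly into code. This is a minimal sketch of the (population) standard deviation as described, not MeetingSphere's actual implementation:

```python
import math

def standard_deviation(values):
    """Population standard deviation, following the steps in the text:
    square each value's difference from the mean, average the squares,
    then take the square root."""
    mean = sum(values) / len(values)
    squared_diffs = [(v - mean) ** 2 for v in values]
    return math.sqrt(sum(squared_diffs) / len(values))

print(standard_deviation([0, 10]))  # maximally split ratings -> 5.0
print(standard_deviation([5, 5]))   # identical ratings -> 0.0
```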


The magnitude of the standard deviation is usually linked to the magnitude of the values in the data set. Since MeetingSphere offers different rating methods (e.g. "Assign scale value from 0 to 10", or "Distribute $10,000 among these seven items") and can display their results side by side, adjacent standard deviation values would not be comparable: both magnitude and dimension can differ between two successive Rating activities. In this example, the same item would probably appear far more contentious in the second Rating activity, simply because the magnitude of possible results ranges from 0-10,000 rather than from 0-10. To make these values comparable, MeetingSphere always normalizes the "standard deviation" values in result tables by dividing them by the maximum possible distance between ratings (10 in the first case and 10,000 in the second). This means that the displayed figures always range from 0 to 0.5 and are dimensionless numbers.


If there were one rating of '0' and one rating of '10' for the same issue, the displayed value would be the theoretical maximum (0.5), whereas according to the original formula it would be ten times larger (5.0); for two ratings of '5' the value would remain zero. In the second Rating activity, if one rating were submitted as '0' and one as '10,000', the normalized standard deviation would also be 0.5, implying that the item is just as contentious under the second rating method as under the first. The original formula would have yielded the much larger value 5,000, even though opinions are equally divided on both counts.
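The normalization can be illustrated with a short sketch. It assumes only what the text states: the population standard deviation is divided by the maximum possible distance between ratings:

```python
import math

def normalized_sd(values, scale_min, scale_max):
    """Population standard deviation divided by the maximum possible
    distance between ratings, so that results from different rating
    methods become comparable, dimensionless numbers (0 to 0.5)."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return sd / (scale_max - scale_min)

# Maximally split ratings on a 0-10 scale and on a 0-10,000 scale
print(normalized_sd([0, 10], 0, 10))         # 0.5
print(normalized_sd([0, 10000], 0, 10000))   # 0.5 as well
```

Both rating methods yield the same normalized value, so the item reads as equally contentious in both columns of the Results table.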

* The squaring step ensures that deviations above and below the mean are properly taken into account rather than canceling each other out.

** Taking the square root brings the value back to the same scale and dimension as the ratings themselves.