A few answers to frequently asked questions about 360 degree feedback. If you have a question that isn't answered here, please don't hesitate to get in touch and ask it.

1. What is 360 degree feedback (or Multi-Source Feedback)?

It is defined as "The systematic collection and feedback of performance data on an individual, derived from a number of stakeholders in their performance."

2. Is 360 an appraisal tool, and is it a substitute for performance appraisals?

Clearly 360 is an appraisal tool, in that respondents are asked to make judgments on a rating scale for each of the ratable questions in a survey. The ratings are a shorthand way of providing some feedback to the survey subject.

However, it is not imperative that 360 feedback surveys have rated items; they can have just open questions that require written comment.

Typically, a 360 questionnaire will have a mix of both rated questions and open comment questions. The ratings provide a useful means of highlighting where your respondents see you delivering a strong performance in a specific activity, and the comments will further elaborate on why those impressions have been gained.

360 is NOT a substitute for formal performance appraisals, but it complements them.

3. Is there a theoretical model that underpins 360?

Yes, there is. "Survey/Feedback" is one of many Organisation Development tools used worldwide, in many different guises. The different types of applications include:

  • Customer satisfaction surveys
  • Employee satisfaction surveys
  • Team effectiveness surveys
  • Gallup polls

4. On what basis are questions set in a 360 survey?

The questions in any 360 survey questionnaire need to be based on a competence framework that is relevant to the role that is being performed by the survey subject.

If respondents' ratings show consistency, that consistency provides a fair basis for accepting the ratings as valid.

A survey will not cover all aspects of an individual's role: for practical purposes a survey will typically be designed with up to 30 to 40 questions that cover the main aspects of the role. A longer survey becomes unwieldy and time-consuming, which will affect both the return rate (the percentage of completed survey questionnaires) and the quality of feedback given. A short questionnaire with scope for written comment makes for a much more useful document.

5. Is there any evidence that 360 is effective?

To answer this question one needs to clarify: effective for what purpose?

Clearly any instrument that relies on other people's perceptions will have shortcomings. Nevertheless, the 360 tool has its place in the appraisal and evidential processes as a way of collecting a broad spectrum of views about the way that an individual performs their role.

There is sometimes an expectation that 360 will be a negative experience... but the reality is usually somewhat different.

Research has been carried out on behaviours and their impact upon people in organisations, and the impact upon the quality of team leadership. Some of the research findings include:

  • Teams share moods and emotions when working together (Kelly & Barsade, 2001).
  • Good and bad moods perpetuate themselves, and skew perceptions (Bower, 2001).
  • Negative emotions powerfully disrupt work (Wood, Matthews & Dalgleish, 2001).
  • The emotions that people feel whilst they work impact directly on their quality of work life (Fisher & Noble, 2000).
  • The percentage of time people feel positive emotions at work is a strong predictor of how likely they are to quit (Fisher, 2000).
  • Borrill and West (2001) have demonstrated an association between the sophistication and extensiveness of staff management practices in NHS hospitals and lower patient mortality. Appraisal systems have the strongest association with lower patient mortality.

6. Who should provide feedback in a 360 survey? Does allowing the survey subject to choose their respondents distort feedback?

The criterion for providing feedback is that each respondent is, or has been, working regularly and reasonably closely with the person being surveyed, so that they have an informed view of how the subject performs the various elements of their role.

Respondents are briefed not to guess at what the performance might be if they haven't seen it - a "not seen" option is one of the rating options. The surveys are also set up so that particular respondent groups will not see categories of questions about which they will not have an informed opinion.

Is it possible that allowing a subject to choose their respondents will distort (favourably) their survey results? In theory, the answer has to be "yes" - but there are some safeguards around this:

  • Within any respondent group (peers, direct reports, others, etc.) there will always be two or more respondents, ensuring that any individual's feedback is non-attributable and their anonymity is protected. This encourages candid and helpful feedback.
  • Whilst a survey subject might think that choosing respondents with whom s/he gets on well will produce more favourable results, that good working relationship may in fact make respondents more inclined to give candid feedback, precisely because it will be helpful to the survey subject.
  • The survey itself is run on-line and password protected so that only the respondent has access to their questionnaire and its contents, protecting confidentiality.
  • 360 is rarely, if ever, directly linked to individual reward (pay) decisions to minimize the potential for distorted feedback.

7. Why is 360 administered outside the employing organisation? Why not in-house?

Whether surveys are run in-house or by an external agency is a matter of choice for the employer. There are cost advantages to running surveys in-house, but the senior staff of many organisations prefer the sense of security that comes from having such personal feedback data held externally.

Where an external agency is used to run surveys, it is good practice to set out the ground rules about data ownership, who sees the data, and how the data is to be used. Some organisations also have a confidentiality agreement set up with their supplier.

8. Who sees final reports? How is confidentiality guaranteed for such personal data?

The final report, which consolidates all the feedback from all the respondents, will usually be seen by three people:

  1. The subject of the 360
  2. A "facilitator", who will facilitate a discussion of the feedback report; this may well form part of the annual performance appraisal discussion.
  3. The survey administrator who runs the surveys and produces the reports. In the case of 360 is us Ltd, staff have a confidentiality agreement.

9. Are reports edited?


10. What is the role of the feedback facilitator?

Here is an extract from a research paper about the pilot use of 360 at Poole Hospital, published in 2004 in the Clinicians in Management journal (Vol 12, No 4): Medical Appraisal: collecting evidence of performance through 360 degree appraisal (Bennett, Gatrell and Packham).

"The role of the facilitator is to help persons receiving feedback to take a balanced view. There are a few negative comments in most, if not all reports. Most people, on receiving such feedback, find it difficult to keep it in perspective. Despite having read and apparently absorbed the whole report, negative comments can become the sole focus, with the response: "Who wrote that about me?" This can significantly reduce the potential development value of the process unless attention can be turned to preparation of a development plan based on the report.

"On the rare occasions when more serious issues were raised by respondents, the role of the facilitator was to ensure that the consultant committed to making these a part of their personal development plan and subsequent appraisal discussion."

To summarise, the role of a 360-degree feedback facilitator has a number of components:

  • To help the subject (of the survey) to understand how the data has been structured into the final report so that they make sense of the information and come to (developmental) conclusions.
  • To encourage the subject to take a balanced and positive view of their feedback, countering a fairly typical tendency to focus on (perceived) criticisms rather than take credit for the positive comments and strengths revealed by the survey. 360 is intended to be both positively motivational and a diagnostic tool.
  • To encourage and help the subject to identify one or more specific actions that will help them become more effective in their role and/or share their strengths across the organisation by perhaps coaching others. (It's not about focusing on weaknesses; individuals have skills and abilities that others will benefit from learning.)

In choosing an effective Facilitator, one would expect:

  • Absolute confidentiality. The subject needs to be sure that the discussions are in strict confidence.
  • The ability to look at the survey raw data and pick out any patterns that emerge, in order to help the subject target appropriate development actions.
  • An ability to deal with "denial" - which sometimes occurs - where there are clearly behavioural issues that may need to be addressed. A key role of the facilitator is to help the subject accept that the feedback is valid, so that they can move on to doing something about it.
  • The confidence and tact to explore and suggest changes with the subject, especially where the subject is more senior in the organisational hierarchy.