Crowdsourcing Empathetic Responses and Cognitive Reappraisals

22 May

At the Collective Intelligence conference in April, Rob Morris presented a paper that he and Rosalind Picard wrote, titled “Crowdsourcing Collective Emotional Intelligence”.

It sounds crazy, I know, but they figured out how to structure micro-tasks on Amazon Mechanical Turk so that they elicit empathetic responses and cognitive reappraisals from anonymous workers with no training in psychotherapy. For example, the system starts with a stressor text that a distressed user might enter, such as, “I’m going to flunk out of school and I’ll never get a job, I know it!”

Generating empathetic responses was fairly straightforward. They post the stressor comment and some guidelines for generating an empathetic response:

(1) address the user directly (e.g., “Michael, I’m sorry to hear …”), (2) let the user know that his/her emotion makes sense, given the situation, and (3) share how you might feel if you were in a similar situation.

Turkers generate candidate responses, and other Turkers vote on whether those candidates are appropriately empathetic. In an experiment, responses produced this way were rated as much more empathetic than responses from workers who were simply instructed to make the stressed user feel better about his/her situation (5.71 vs. 4.14 on a 7-point scale).
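The generate-then-vote step can be sketched in a few lines. This is just an illustration of the aggregation logic, not the paper’s actual system; the candidate texts and vote counts below are made up.

```python
# Hypothetical sketch of the generate-then-vote pipeline: one group of
# workers proposes candidate responses, a second group votes on whether
# each candidate is appropriately empathetic, and the top-voted candidate
# is the one shown to the distressed user.

def select_response(candidates):
    """Return the candidate response with the most 'empathetic' votes."""
    return max(candidates, key=lambda c: c["votes"])

# Illustrative data only.
candidates = [
    {"text": "Michael, I'm sorry to hear that. Exams are really stressful.", "votes": 7},
    {"text": "Don't worry, it will all be fine.", "votes": 2},
]

print(select_response(candidates)["text"])
```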

Even more interestingly, the crowd could follow a structured process to generate cognitive reappraisals. They first ask some Turkers to classify the stressor statement as containing a cognitive distortion or not, where distortions are “logical fallacies within negative statements (Beck, 1979).” The example statement about flunking out and never getting a job is a distortion because there’s no way the speaker could know that s/he’ll never get a job in the future. On average, workers made this binary classification correctly 89% of the time, and using several workers to classify a single statement could increase accuracy further.
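To see why pooling workers helps, assume each worker is independently correct with probability 0.89 (the figure reported above) and take a majority vote. The 0.89 number is from the paper; the vote-pooling arithmetic below is my own back-of-the-envelope illustration.

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority of n independent workers, each correct
    with probability p, produces the correct binary classification."""
    k_min = n // 2 + 1  # smallest number of correct votes that wins
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

print(round(majority_accuracy(0.89, 1), 3))  # 0.89  -- a single worker
print(round(majority_accuracy(0.89, 3), 3))  # 0.966 -- three workers, majority vote
print(round(majority_accuracy(0.89, 5), 3))  # 0.989 -- five workers
```

So even three workers per statement would push the (hypothetical, independence-assuming) accuracy from 89% to about 97%.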

When the worker marks a statement as a cognitive distortion, they are asked to give a “thought-based reappraisal” explaining the nature of the distortion. No complex training is needed for the workers: they are simply given some sample responses for inspiration.

When the worker does not indicate a distortion, the worker is asked to give a “situation-based reappraisal” that suggests a different way of thinking about the situation. Workers are introduced to the concept and given a few examples of good and bad reappraisals (the latter are needed to dissuade workers from offering advice or making unrealistic assumptions about the original speaker’s situation, two common errors the authors observed). Some workers were asked to come up with their own reappraisal suggestions, while others were asked to try specific strategies such as finding a silver lining or taking a long-term perspective.
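The two-branch workflow above amounts to a simple routing rule: the crowd’s binary distortion classification decides which reappraisal micro-task gets posted next. A minimal sketch, with hypothetical function and label names:

```python
def reappraisal_task(is_distortion):
    """Choose which reappraisal micro-task to post next, based on the
    crowd's binary classification of the stressor statement."""
    if is_distortion:
        # The worker explains the logical fallacy in the negative statement.
        return "thought-based reappraisal"
    # The worker suggests a different way of thinking about the situation,
    # e.g. finding a silver lining or taking a long-term perspective.
    return "situation-based reappraisal"

print(reappraisal_task(True))   # thought-based reappraisal
print(reappraisal_task(False))  # situation-based reappraisal
```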

Responses were limited to four sentences. In the experiment, reappraisals solicited this way were rated as better at offering a positive way to think about the situation (5.45 vs. 4.41) than responses from workers who were simply asked to make the stressed user feel better about his/her situation.

Overall, this suggests that the crowd can, with little training, be a useful source of informational feedback and emotional support.


One Response to “Crowdsourcing Empathetic Responses and Cognitive Reappraisals”

  1. Michael Bernstein, May 22, 2012 at 7:06 pm

    It would be very interesting to see what happens once workers become experts in a particular domain. For example, one person could become a reappraisal expert for individuals who just went through breakups, and another person could work with people who just got fired. An interesting specialization in the economy…
