Online Experiments with the Behavioral Economics of Social Systems

Overview: My primary research this summer was with Benjamin Ho on a project about the value of explanations in social settings. For example, let’s say you take the last cookie from the cookie jar. Once someone notices the last cookie missing, there are a variety of ways in which you might explain yourself: apology (“I’m sorry!”), guilt (“Oops, I feel terrible!”), and selfishness (“Too bad, I wanted the cookie!”) amongst others. We set out to determine the different types of explanations people give, the circumstances in which they arise, the strategies that people employ when explaining themselves, and whether people actually change their behavior as a result of past explanations (i.e., after saying sorry, one might expect less selfish behavior).

Process: First, we needed a way to obtain explanations. We created an online survey (http://goo.gl/xtshAl) and turned to Amazon Mechanical Turk (MTurk), a crowdsourcing platform, to distribute it to people across the country. Participants in the experiment are given money for completing simple tasks. They then have the option to take money from other participants, profiting at the others’ expense. After deciding how much money to take, participants are asked to explain their actions to their partners. Some of the “takers” know about this explanation ahead of time, giving them more information when deciding how much money to take. In other treatments, we vary the takers’ ability to pay either to avoid sending an explanation or to guarantee that one is sent. The experiment is then repeated three times with different people. Lastly, we vary both the money awarded to takers (ascending vs. descending payoffs across rounds) and the timing of the explanation in the experimental process.
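The taker’s decision above can be sketched as a simple payoff model. Everything in this sketch is a hypothetical parameterization for illustration (the endowment sizes, take fraction, and avoidance fee are not the actual values used in the survey):

```python
# Sketch of a taker's payoff in one round, under assumed parameters.
# All dollar amounts and the avoidance fee are hypothetical.

def taker_payoff(endowment, partner_endowment, take_fraction,
                 pays_to_avoid_explanation=False, avoidance_fee=0.10):
    """Money the taker ends the round with.

    take_fraction: share of the partner's endowment the taker takes (0 to 1).
    pays_to_avoid_explanation: models the treatment where takers can pay
    a fee so that no explanation is sent.
    """
    taken = take_fraction * partner_endowment
    fee = avoidance_fee if pays_to_avoid_explanation else 0.0
    return endowment + taken - fee

# A taker with $1.00 who takes 45% of a partner's $1.00 and sends an explanation:
print(taker_payoff(1.00, 1.00, 0.45))  # 1.45

# The same taker paying the (hypothetical) $0.10 fee to avoid the explanation:
print(taker_payoff(1.00, 1.00, 0.45, pays_to_avoid_explanation=True))  # 1.35
```

The point of the model is only to show the tension the treatments manipulate: taking more raises the monetary payoff, while the prospect of explaining yourself (or the fee to avoid doing so) pushes against it.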

Brief summary of results: We obtained over 1850 responses across the three rounds of our survey. Five MTurk users sorted each explanation we collected into categories, and robustness checks confirmed that the sorting was consistent. Below is a summary of people’s take rates by the type of explanation they gave.
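The sorting step, in which five raters each assign a category to an explanation, can be reduced to a single label by majority vote; the agreement share is one rough signal of robustness. The category names below are illustrative, and the post does not specify the exact robustness test used:

```python
from collections import Counter

def majority_label(ratings):
    """Pick the category most raters chose; report the agreement share.

    ratings: list of category labels, one per rater (e.g., five MTurk raters).
    Returns (winning label, fraction of raters who chose it).
    """
    counts = Counter(ratings)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(ratings)

# Five raters classify one explanation (hypothetical categories):
print(majority_label(["apology", "apology", "guilt", "apology", "selfish"]))
# ('apology', 0.6)
```

A low agreement share flags explanations whose category is ambiguous and may need review; a proper robustness analysis would also use an inter-rater agreement statistic across the whole dataset.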

We found that a majority (55%) of people declined to take any money at all from their partners, suggesting that many people are unselfish even on a platform like MTurk, which is designed solely for earning money. Of those who did take money, the average take rate was around 45%, but it varied widely by the type of explanation sent: those apologizing or admitting to selfishness took the most; those who were honest or fair in their explanations took the least; and those who admitted to feeling guilty took roughly the average. Only 11% were willing to pay to ensure that their explanations were or were not sent, and most of these people sent guilty explanations, even though guilty explainers behaved the most averagely of any group. We did not find significant changes in behavior across rounds of the experiment, which suggests that people’s behaviors are not calculated in a split second but rather formulated over the longer term. Additionally, people who were informed ahead of time about the future explanation took about 12% less than average, and those who could pay to avoid the explanation took over 5% more than average.

Afterwards & moving forwards: This June I gave a talk at the BEEMA conference at Haverford, entitled “The Present Value of Future Explanations,” explaining these results. In addition, the Economics faculty at Vassar have helped us greatly in designing our experiment and analyzing our findings. As we finalize our understanding of the mechanisms behind people’s behavior in the context of explanations, I will write up our results for publication sometime this fall.