Crowd Labs Incorporated
November 22nd, 2010 by Ville Miettinen
The history of experimenting on humans doesn’t have what you’d call a spotless reputation. Google it, and you get Nazis, CIA mind control, and conspiracy theories about Guantanamo Bay and Ritalin. And that’s just in the first ten hits.
Given this unpleasant past, crowdsourced workers might, at first, be less than enthusiastic about their growing popularity as subjects for human research. Thankfully, academia has moved on since the bad old days of the Stanford prison experiment, and the worst thing most modern participants ever have to deal with is a badly worded questionnaire.
A researcher’s paradise?
Social scientists, psychologists and economists constantly require thousands of people to take part in experiments. Trouble is, running a lab is expensive: assistants to pay, participants to hunt down and schedules to organize. Compare that (as Lauren Schmidt did at CrowdConf this year) to crowdsourced labor: a global pool of potential subjects who’ll work for a fraction of the price, and don’t need travel expenses.
It’s a concept the research community is just beginning to get behind. Some classic thought experiments – like the prisoner’s dilemma and the Asian disease problem – have already been tried out using Amazon’s Mechanical Turk.
One ingenious Harvard PhD candidate – John Horton – also used Mechanical Turk for several “field tests”, where workers undertook tasks without knowing they were in an experiment. So while people thought they were merely tagging a few photos, they had, in fact, unwittingly contributed to a paper on worker productivity. (For now we’re ignoring the obvious ethical issues regarding disclosure.)
Another group put together an entire iPhone app that gets you to record how often you daydream throughout the day (apparently some people even recorded while they were engaging in more active bedroom activity, which is taking dedication to science a bit too far, in my humble opinion). Over 2000 people joined the study, making it the largest in its field.
Not everyone, though, is convinced about the wisdom of crowd-based research. We are talking about academics here, after all – they’re basically professional skeptics.
One big issue is how you know whether participants are really who they say they are. Mechanical Turk lets you select workers by age, gender, location and so on, but only based on what workers themselves claim. Was that task really completed by a housewife in Minnesota, or was it a kid in Mumbai on his mom’s laptop?
There’s also worry over whether low-paid, crowdsourced workers will pay enough attention to experiments. I imagine it’s pretty infuriating to have your research data skewed because a subject was checking out Beyoncé videos in between answers.
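One common safeguard against inattentive subjects is to sprinkle in “catch trials” – questions with a known correct answer – and throw out anyone who fails too many of them. Here’s a rough sketch in Python; the data format and the 80% threshold are my own assumptions, not any particular platform’s API:

```python
# Sketch: filter out workers who fail "catch" questions
# (questions with a known correct answer). Data format is hypothetical.

def passes_attention_check(responses, catch_answers, min_correct=0.8):
    """Return True if the worker got enough catch questions right.

    responses     -- dict mapping question id -> worker's answer
    catch_answers -- dict mapping catch question id -> correct answer
    min_correct   -- fraction of catch questions that must be right
    """
    correct = sum(
        1 for qid, answer in catch_answers.items()
        if responses.get(qid) == answer
    )
    return correct / len(catch_answers) >= min_correct

# Example: worker A passes both catch trials, worker B fails one.
catch = {"q3": "blue", "q7": "yes"}
worker_a = {"q1": "cat", "q3": "blue", "q7": "yes"}
worker_b = {"q1": "dog", "q3": "red", "q7": "yes"}

print(passes_attention_check(worker_a, catch))  # True
print(passes_attention_check(worker_b, catch))  # False
```

Crude, but even a filter this simple goes a long way toward keeping the Beyoncé fans out of your dataset.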
As crowdsourcing becomes a more accepted research technique, people will no doubt find ways around these problems. One solution could be a dedicated site or API (a kind of virtual lab) which allowed researchers to customize experiments easily, and attracted workers who knew, at least roughly, what to expect.
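To make the “virtual lab” idea concrete, here’s a rough sketch of what such an API could look like – every name and structure below is invented for illustration, not a description of any real service:

```python
# Hypothetical "virtual lab" API sketch: a researcher defines an
# experiment, screens workers, randomly assigns conditions, and
# collects responses. All names here are invented.
import random

class Experiment:
    def __init__(self, name, questions, min_age=18, countries=None):
        self.name = name
        self.questions = questions
        self.min_age = min_age
        self.countries = countries      # None means "anywhere"
        self.responses = []

    def eligible(self, worker):
        """Screen workers on their (self-reported) demographics."""
        if worker["age"] < self.min_age:
            return False
        if self.countries and worker["country"] not in self.countries:
            return False
        return True

    def assign(self, worker):
        """Randomly assign an eligible worker to a condition."""
        if not self.eligible(worker):
            return None
        return random.choice(["control", "treatment"])

    def record(self, worker_id, condition, answers):
        self.responses.append(
            {"worker": worker_id, "condition": condition, "answers": answers}
        )

# Usage sketch:
exp = Experiment("framing-study", ["q1", "q2"], min_age=18, countries={"US"})
worker = {"id": "w42", "age": 30, "country": "US"}
condition = exp.assign(worker)
if condition:
    exp.record(worker["id"], condition, {"q1": "A", "q2": "B"})
```

The point is less the code than the shape of it: screening, random assignment and data collection all handled in one place, so researchers can focus on the experiment rather than the plumbing.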
Whatever the future holds, it looks like our knowledge of who we are, what we do and why we do it is set to get a serious boost from the crowd.
As always, if you’ve already been experimenting with the crowd, we would love to hear from you.