Science Revolution: Cognitive Science 3.0

Scholars have been wondering how thought works — and how it is even possible — for a long time. Philosophers such as John Locke and early psychologists such as Sigmund Freud relied mostly on thinking about thought very hard. As a method, this is called introspection. That’s Cognitive Science 1.0.

Experimental Psychology, which got its start around 100 years ago (William James was an important early proponent), uses the scientific method to probe the human mind. Psychologists develop controlled experiments in which participants are assigned to one of several conditions and their reactions are measured. Famous social psychology experiments include the Milgram experiment and the Stanford Prison Experiment, but most are more mundane, such as probing how humans read by monitoring their eye movements as they work through a simple text. This is Cognitive Science 2.0.

Typically, such experiments involve anywhere from 2 to 20 participants, rarely more. Partly, this is because each participant is expensive — they have to be recruited and then tested. There are many bottlenecks, including the fact that most labs have only a small number of testing facilities and a handful of experimenters. Partly, cognitive scientists have settled into this routine because a great variety of questions can be studied with a dozen or so subjects.

The Internet is changing this just as it has changed so much else. If you can design an experiment that will run via the Web, the bottlenecks disappear. Thousands of people can participate simultaneously, each in their own home, on their own computer, and without the presence of an experimenter. This isn’t just theoretical; one recent paper in Science reported the results of a 2,399-person Web-based survey. I currently have over 1,200 respondents for my Birth Order Survey.

Distance also ceases to be an issue. Want to know whether people from different cultures make different decisions regarding morality? In the past, you would need to travel around the world and survey people in many different countries. Now, you can just put the test online. (The results of a preliminary study of around 5,000 participants found that reasoning about certain basic moral scenarios does not differ across several different cultures.) Or perhaps you are interested in studying people with a very rare syndrome like CHARGE. There is no need to go on a road trip to track down enough participants for a study — just email them with a URL. (Results from this study aren’t available, but it was done by Timothy Hartshorne’s lab.)

This may not be as radical a shift as adopting the scientific method, but it is affecting what can be studied and — because Web experiments are so cheap — who can study it. I don’t think it’s an accident that labs at poorer institutions seem to be over-represented on the Web. It is also opening up direct participation in science to pretty much anybody who cares to click a few buttons.

Beyond surveys

The experiments above were essentially surveys, and although using the Web for surveys is a good idea, surveys are limited in terms of what they can tell you. If you ask somebody, “Do you procrastinate?” you will learn whether your subject thinks they procrastinate, not necessarily whether they do. Also, surveys aren’t very good for studying vision, speech perception and many other interesting questions. If surveys were all the Web was good for, I would not be nearly so excited.

A few labs have begun pushing the Web envelope, seeking ways to perform more “dynamic” experiments. One of my favorite labs is Face Research. Their experiments involve tasks such as rating faces; the researchers carefully manipulate different aspects of a face (contrast, angle, etc.) to see which lead you to say the face is more attractive. An even more ambitious experiment — and the one that prompted me to start my own Web-based research — is the AudioVisual Test, which integrates sound and video clips to study speech processing.

Of the hundreds of studies posted on the largest clearinghouse for Web-based experiments, all but a handful are surveys. Part of the issue is that cognitive science experiments often focus on the speed with which you can do something, and there has been a lot of skepticism about the ability to accurately record reaction times over the Internet. However, one study I recently completed successfully found a significant reaction time effect of less than 80 milliseconds. 80 ms is a medium-sized amount of time in cognitive science, but it is a start. This may improve, as people like Tim O’Donnell (a graduate student at Harvard) are building new software with more accurate timing. The Implicit Association Test, which attempts to measure subtle, underlying racial biases, is also based on reaction time. I do not know whether they have been getting usable data, but I do know that they were also recently developing proprietary software that should improve the timing accuracy.
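
To get a feel for why noisy timing isn’t necessarily fatal, here is a toy simulation (my own illustration, not taken from any of the studies mentioned, and the numbers are made up for the sketch): even if the recording software adds up to 50 ms of random jitter to every measurement, an 80 ms difference between two conditions still shows up once you average over enough trials, because the jitter washes out in the means.

```python
import random

random.seed(0)  # make the simulation repeatable

def measured_mean_rt(true_mean_ms, n_trials, jitter_ms=50):
    """Average of n_trials simulated reaction times: normal subject-to-subject
    variability plus uniform timing error from the recording software."""
    total = 0.0
    for _ in range(n_trials):
        rt = random.gauss(true_mean_ms, 100)   # trial-to-trial variability
        rt += random.uniform(0, jitter_ms)     # software timing jitter
        total += rt
    return total / n_trials

easy = measured_mean_rt(500, 2000)  # condition A: true mean 500 ms
hard = measured_mean_rt(580, 2000)  # condition B: 80 ms slower on average
print(round(hard - easy))           # difference comes out close to 80 ms
```

The jitter adds the same amount on average to both conditions, so it cancels in the difference; what it costs you is precision, which more trials (or more participants) buy back.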

Do you really need 5,000 participants?

It has been argued that the only reason to put an experiment on the Web is that you’re too lazy to test people yourself. After all, if the typical experiment takes 12 participants, why would you need thousands?

There are many reasons. One is to avoid repeating stimuli. The reason we can get away with having only 12 participants is that we typically ask each one several hundred questions. That can get very boring. For instance, in a memory experiment, you might be asked to remember a few words (dog, cat, elephant) for a few seconds. Then you are presented with another word (mouse) and asked if that’s one of the words you were trying to remember. And you will do this several hundred times. After a few hundred, you might simply get confused. So, in a recent Web-based project, I looked specifically at the very first response. If 20 subjects answering 200 questions gives me 4000 responses, that means I need 4000 subjects if I want to ask each one only one question.
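
The arithmetic behind that trade-off is easy to sketch. The numbers below are the ones from the example above, not from any particular study:

```python
# Trade-off between questions per participant and participants needed,
# holding the total number of responses fixed.

TOTAL_RESPONSES = 20 * 200  # 20 lab subjects x 200 questions each = 4000

def participants_needed(questions_per_person):
    """Participants required to collect the same total number of responses."""
    # Ceiling division, so we never fall short of the target.
    return -(-TOTAL_RESPONSES // questions_per_person)

print(participants_needed(200))  # -> 20   (the traditional lab design)
print(participants_needed(25))   # -> 160
print(participants_needed(1))    # -> 4000 (one question per person)
```

The total amount of data is constant; the Web just lets you spread it across far more people.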

Similarly, in an ongoing experiment, I am interested in how people understand different sentences. There are about 600 sentences that I want to test. Reading through that many makes my eyes start to glaze, and it’s my experiment. I put the experiment on the Web so that I could ask each person to read only 25 sentences — which takes no time at all — and I’ll make up the difference by having more participants.
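
One simple way to implement that design — a sketch under my own assumptions, not the software actually used in the experiment — is to shuffle the stimulus pool and cut it into disjoint lists, assigning each participant one list:

```python
import random

def make_lists(items, per_person):
    """Shuffle the item pool and split it into disjoint lists,
    one list per participant."""
    pool = items[:]          # copy so the caller's list isn't mutated
    random.shuffle(pool)
    return [pool[i:i + per_person] for i in range(0, len(pool), per_person)]

# Stand-ins for the real stimuli: 600 sentences, 25 per participant.
sentences = [f"sentence {i}" for i in range(600)]
lists = make_lists(sentences, 25)
print(len(lists))  # -> 24 disjoint lists; every sentence appears exactly once
```

With 24 lists, every 24th participant completes one full pass over the materials, so coverage grows automatically as volunteers arrive.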

This is sort of like the Open Source model of programming. Rather than having a small number of people (participants) each put in a great deal of work, you get a large number of volunteers to each put in a small amount of time.

Sometimes the effect you are looking for is subtle, so you need many participants to see it. This is true of the work the folks at the Moral Sense Test are doing.

That said, many experiments can be done happily with just a few participants, and some are actually done better with only a small number of participants. Just as introspection (Cognitive Science 1.0) is still a useful technique and still exists alongside Cognitive Science 2.0, Web-based experimentation will not replace the brick-and-mortar paradigm, but extend into new territory.

But are these experiments really valid?

There are several typical objections to Web-based experiments. Isn’t the sample unrepresentative? It’s on the Internet — how do you know people aren’t just lying? None of them turn out to be important problems.

Isn’t the sample unrepresentative? It is true that the people who surf to my Web-based lab are a self-selected group probably not representative of all humans. However, this is true of the participants in pretty much every cognitive or social science experiment. Typically, participants are either undergraduates required to participate in order to pass Psych 101, or they are paid a few dollars for their time. Either way, they aren’t simply random people off the street.

It turns out that it typically doesn’t matter. While I am fairly certain that liberals and conservatives don’t see eye to eye on matters like single-payer health care or the war in Iraq, I am equally certain that both groups see in the same way. It doesn’t really matter if there is an over-abundance of liberal students in my subject pool if I want to study how their vision works. Or speech processing. Etc.

In any case, Web-based experiments allow researchers to reach out beyond the Psych 101 subject pool. That’s actually why I put my Birth Order Survey online. I have already surveyed Psych 101 students, and I wanted to make sure my results weren’t specific to students in that class at that university. Similarly, the Moral Sense Test is being used to compare people from different social backgrounds in terms of their moral reasoning. Finding conservatives in Harvard Square (MST is run by the Hauser Lab at Harvard) is tough, but online, it’s easy.

One study specifically compared Web-based surveys to traditional surveys (Gosling, Vazire, Srivastava, & John, “Should We Trust Web-Based Studies?”, American Psychologist, Vol. 59(2), 2004) and found that “Internet samples are shown to be relatively diverse with respect to gender, socioeconomic status, geographic region, and age.”

It’s on the Internet. Maybe people are just lying. People do lie on the Internet. People who come to the lab also lie. In fact, my subjects in the lab are in some sense coerced. They are doing it either for the money or for course credit. Maybe they are interested in the experiment. Maybe they aren’t. If they get tired halfway through, they are stuck until the end. (Technically, they are allowed to quit at any time, but it is rare for them to do so.) Online, everybody is a volunteer, and they can quit whenever they want. Who do you think lies less?

In fact, a couple of studies have compared the results of Web-based studies and “typical” studies and found no difference in terms of results (Gosling, Vazire, Srivastava, & John, 2004; Meyerson & Tryon, “Validating Internet research,” Behavior Research Methods, Instruments, & Computers, Vol. 35(4), 2003). As Gosling and colleagues pointed out, a few “nonserious or repeat” participants did not adversely affect the results of Web-based experiments.

The field seems to be growing more comfortable with Web-based experiments. According to one study (Skitka & Sargis, “The Internet as a psychological laboratory,” Annual Review of Psychology, Vol. 57, 2006), at least 21% of American Psychological Association journals have published the results of Web-based experiments. Both Science and Nature, the gold standards of science, have published Web-based research.

Please help get my Web-based research published by participating. On the site you can also find results of a couple older studies and more information about my research topics.

