Beyond the Unitary Factor of Intelligence

Defining intelligence has been an undertaking as difficult as any in the field of psychology. Binet referred to intelligence as “judgment, or common sense, initiative, the ability to adapt oneself.” Thorndike’s definition reads “the power of good responses from the point of view of truth.” Finally, Wechsler defined intelligence as “the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment” (Cattell, 1943, pp. 158-159).

The operational definition of intelligence has continued to grow in sophistication and complexity alongside the advance of cognitive and neuropsychological research over the last half-century. This essay will examine issues of intelligence and intelligence testing, beginning with a brief review of history, followed by a discussion of current research trends regarding intelligence-related phenomena such as working memory, long-term memory, and metacognition. The essay will conclude with a discussion of the relationship between these phenomena and the construct of intelligence, as well as the implications of this relationship for future psychometric assessments of intelligence.

While a detailed history of the construct of intelligence is beyond the scope of this essay, several figures in the development of intelligence and intelligence testing deserve attention. One of these influential figures was Francis Galton, a 19th-century psychometrician who postulated that intellectual ability was both heritable and positively related to sensory ability. He developed a line of sensorimotor tests that he believed tapped intelligence. Alfred Binet, however, criticized Galton’s approach as too simplistic. To Binet, mental behaviors were complex, involving numerous co-occurring processes that could not be reduced to separate intelligences or separate tests of intelligence. Commissioned to screen Parisian schoolchildren for mental impairment, Binet developed a multifaceted test of intelligence involving memory, judgment, reasoning, and social comprehension. Terman later translated Binet’s test for use with American children, launching the American psychometric era of intellectual testing built around the intelligence quotient (Cohen & Swerdlik, 1999).

With the onset of World War I, the American government reasoned that intelligence measures might facilitate the screening of soldiers, and it employed Arthur Otis to adapt the Stanford-Binet into a group-administered test, the Army Alpha. David Wechsler later attempted a more sophisticated adaptation of these earlier efforts, producing the Wechsler-Bellevue scale, which ultimately evolved into the Wechsler series of adult and child intelligence tests (Cohen & Swerdlik, 1999).

Some early psychologists, however, were dissatisfied with the evolution of psychometric intelligence testing, believing that the available tests captured only a portion of the construct of intelligence. As early as 1904, Charles Spearman observed that most intelligence tests correlated with one another. Accordingly, he offered a two-factor theory of intelligence, arguing that the variance different tests shared reflected a general factor of intelligence, or “g,” while the variance they did not share reflected either error or a factor specific to each test (Cohen & Swerdlik, 1999; Lashley, 1929). Raymond Cattell later articulated his own two-factor theory of intelligence by splitting “g” into two components.
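
To make the idea of shared variance concrete, the short sketch below (not drawn from Spearman’s own methods; the sample size and loadings are invented for illustration) simulates scores on four hypothetical tests that each tap a single latent ability, and then shows that the first principal component of their correlation matrix captures the variance the tests share, in the spirit of “g.”

```python
import numpy as np

# Purely illustrative: 1,000 simulated examinees whose scores on four
# hypothetical tests all draw on one latent ability ("g") plus test-specific
# noise, echoing Spearman's general and specific factors.
rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)                       # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5])    # how strongly each test taps g
noise = rng.normal(size=(n, 4)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise       # observed test scores

# The tests all correlate positively because they share g.
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 2))

# The largest eigenvalue of the correlation matrix reflects the shared
# variance; the remainder is specific variance plus error.
eigvals = np.linalg.eigvalsh(corr)
print("share of variance on the first component:",
      round(eigvals[-1] / eigvals.sum(), 2))
```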

The first component, fluid intelligence, refers to “a purely general ability to discriminate and perceive relations between any fundaments new or old.” The second component, crystallized intelligence, “consists of discriminatory habits long established in a particular field” (Cattell, 1943, p. 178). Cattell theorized that crystallized intelligence grew out of fluid intelligence, that fluid intelligence was more prevalent among children and adolescents, and that crystallized intelligence was more common among adults. While Cattell’s notions of fluid and crystallized intelligence still hold credence in current theorizing, the mechanisms that interact to actualize these phenomena are multifaceted and appear more complex than was commonly believed in Cattell’s day. These mechanisms involve reciprocity among working memory, long-term memory, and metacognitive functions.

An understanding of working memory may be facilitated by a review of the development of the construct of short-term memory. In the late 1960s, Atkinson and Shiffrin published a paper explicating a theoretical model of the encoding and storage of information that would have a major impact on cognitive theorizing (Anderson, 2000). Essentially, the model proposed that incoming information was first held in a brief, transient sensory register; if attended to, it was then transferred into a short-term memory store. The short-term store was limited in capacity, and mental rehearsal was essential for information to remain there. If rehearsal was long or elaborate enough, the information would be transferred from short-term to long-term memory, awaiting future retrieval.
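
As a rough illustration of the flow Atkinson and Shiffrin described, the toy sketch below passes stimuli through a transient sensory register, an attention filter, a capacity-limited short-term store, and a rehearsal-based transfer to long-term memory. The capacity limit, rehearsal threshold, and item names are arbitrary stand-ins rather than parameters of the original model.

```python
from collections import Counter, deque

STM_CAPACITY = 7          # arbitrary nod to the classic "seven plus or minus two"
REHEARSALS_TO_ENCODE = 3  # arbitrary rehearsal threshold for long-term transfer

def process(stimuli, attended):
    short_term = deque(maxlen=STM_CAPACITY)   # oldest items are displaced when full
    rehearsals = Counter()
    long_term = set()

    for item in stimuli:                       # sensory register: brief and transient
        if item in attended:                   # only attended items reach the store
            short_term.append(item)
        for held in short_term:                # each cycle, stored items are rehearsed
            rehearsals[held] += 1
            if rehearsals[held] >= REHEARSALS_TO_ENCODE:
                long_term.add(held)            # enough rehearsal: transfer to long-term memory
    return list(short_term), long_term

stm, ltm = process(["cat", "dog", "sun", "pen", "map"], attended={"cat", "sun"})
print(stm, ltm)   # the attended items survive rehearsal and reach long-term memory
```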

The concept of a short-term memory store helped cognitive psychologists develop memory tasks assessing capacity limits and the rehearsal strategies that lead to long-term storage. Data from these tasks began to reveal differences in subjects’ innate abilities. Over time, however, the concept of short-term memory proved too simplistic to account for variation in memory performance across subjects and task modalities. To address some of these concerns, Baddeley and Hitch proposed a more sophisticated model of working memory (Baddeley & Hitch, 1994).

Essentially, the theory postulated that memory activation was the product of two slave systems, the phonological loop and the visuospatial sketchpad, overseen by a supervisory control system, the central executive. The key difference between the short-term memory model and the working memory model is that information does not have to remain within the slave systems to enter long-term memory; rather, the slave systems are simply auxiliary systems that keep information activated for the central executive (Anderson, 2000; Baddeley & Hitch, 1994).
To illustrate this process, Anderson (2000) describes the mental computation of a multi-step multiplication problem. An individual trying to solve such a problem may hold a visual image of the problem in his or her mind (i.e., use of the visuospatial sketchpad), as well as verbally rehearse the product of the first step of multiplication (i.e., use of the phonological loop). Further, the individual may need to access mathematical rules learned through years of education that are found in neither slave system. Working memory, then, is not so much a mechanism of storage as it is an executive system that coordinates and regulates mental operations involving interactions among incoming stimuli and the short- and long-term memory stores.
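
A caricature of Anderson’s example, phrased in the Baddeley-Hitch vocabulary, may make the division of labor concrete. In the sketch below, the class and method names are invented; the central executive routine coordinates a visuospatial record of the problem, retrieval of arithmetic facts from long-term memory, and verbal rehearsal of partial products in the phonological loop.

```python
# A caricature of the multiplication example using the Baddeley-Hitch vocabulary.
# Class, attribute, and method names are invented for illustration only.

class WorkingMemory:
    def __init__(self):
        self.sketchpad = {}           # visuospatial sketchpad: image of the problem
        self.phonological_loop = []   # verbal rehearsal of intermediate products
        # Long-term memory: single-digit multiplication facts, held outside both slave systems.
        self.long_term_memory = {(a, b): a * b for a in range(10) for b in range(10)}

    def multiply(self, number, digit):
        """Central executive: coordinate the slave systems and long-term retrieval."""
        self.sketchpad["problem"] = f"{number} x {digit}"   # keep the problem layout in view
        total, place = 0, 1
        while number:
            number, column = divmod(number, 10)             # work column by column
            fact = self.long_term_memory[(column, digit)]   # retrieve a stored fact
            partial = fact * place
            self.phonological_loop.append(str(partial))     # rehearse the partial product
            total += partial                                # the executive combines results
            place *= 10
        return total

wm = WorkingMemory()
print(wm.multiply(47, 6))       # 282
print(wm.phonological_loop)     # ['42', '240']: the rehearsed intermediate products
```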

The notion of a working memory central executive is reminiscent of Cattell’s belief in a fluid intelligence factor. The ability to discriminate and perceive relations between new and old “fundaments,” however, requires more than the simple activation and regulation of incoming information, and it highlights the importance of the long-term store. The long-term store refers to the relatively permanent storage of once-novel information (Anderson, 2000). Long-term storage maps more closely onto Cattell’s notion of crystallized intelligence.

Classically, great debate has centered on whether long-term memory is a function of modular (i.e., localized) or distributed (i.e., cortex-wide) processes. Current evidence supports a more distributed account of long-term memory storage (Anderson, 2000). Generally, research suggests that items are stored in neural networks of interrelated nodes of information, and that the accessibility of a given long-term memory depends on the number of interconnections between that memory and other nodes in the network (Anderson, 2000; Craik & Tulving, 1975).
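
The node-and-link picture can be sketched with a toy associative network. The concepts and links below are invented purely for illustration; accessibility is approximated simply as the number of connections a node has, echoing the claim that richly interconnected memories are easier to retrieve.

```python
# Toy associative network: a memory's accessibility is approximated by how many
# other nodes connect to it. The concepts and links are invented for illustration.

network = {
    "dog":    {"animal", "bark", "pet", "leash"},
    "cat":    {"animal", "pet"},
    "quark":  {"physics"},
    "animal": {"dog", "cat"},
}

def accessibility(node):
    """More interconnections mean more retrieval routes and easier access."""
    outgoing = network.get(node, set())
    # Links are stored one-way above, so also count links pointing back at the node.
    incoming = {other for other, links in network.items() if node in links}
    return len(outgoing | incoming)

for concept in ("dog", "cat", "quark"):
    print(concept, accessibility(concept))
# A richly connected node ("dog") is easier to reach than an isolated one ("quark").
```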

The concept of metacognition is also important for understanding current accounts of intellectual processes. Fernandez-Duque et al. (2000) distinguish between two facets of metacognitive function: knowledge and regulation. Metacognitive knowledge refers to the understanding or awareness individuals have of their own cognitive abilities. Metacognitive regulation refers to the processes involved in coordinating cognition, such as error detection, source monitoring, error correction, inhibitory control, and resource allocation.
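
The knowledge/regulation distinction can also be sketched in code, with the caveat that everything below (the routines, the confidence value, the threshold) is an invented stand-in rather than anything proposed by Fernandez-Duque et al. The agent’s stored confidence plays the role of metacognitive knowledge; the monitoring, re-allocation of effort, and correction it triggers play the role of metacognitive regulation.

```python
def quick_estimate(numbers):
    # Stand-in for a fast but error-prone cognitive routine: it rounds first.
    return sum(round(n, -1) for n in numbers)

def careful_sum(numbers):
    # Stand-in for a slower, exact routine that costs more "resources".
    return sum(numbers)

class MetacognitiveAgent:
    def __init__(self, confidence_in_quick_routine=0.5):
        # Metacognitive knowledge: the agent's awareness of its own reliability.
        self.confidence = confidence_in_quick_routine

    def solve(self, numbers):
        answer = quick_estimate(numbers)
        # Metacognitive regulation: low confidence triggers monitoring,
        # re-allocation of effort to the careful routine, and error correction.
        if self.confidence < 0.8:
            checked = careful_sum(numbers)
            if checked != answer:   # error detection
                answer = checked    # error correction
        return answer

print(MetacognitiveAgent().solve([3, 14, 15, 9, 2]))   # 43, after self-correction
```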

Some theorists have questioned whether metacognition and the central executive differ only in nomenclature.

-HEATH SOMMER

For more information on this article and related topics please visit www.heathsommer.com

