Happy Birthday, Noam Chomsky
He remains as influential, and divisive, as he was when Larissa MacFarquhar wrote a Profile of Chomsky in The New Yorker nearly a decade ago (“The Devil’s Accountant”). He has at least three new essays on linguistics coming out soon, and if time has slowed him down, it’s not by very much. A few months ago, I sent him a manuscript and he replied, with comments, in less than half an hour.
I can’t speak to his politics, for which he is equally well known. But since his earliest days, Chomsky’s scientific concerns have been as much about philosophy as linguistics. For most of us, words and sentences are tools for communicating. But for Chomsky, words and sentences are tools for understanding the nature and origins of knowledge. Chomsky sees himself, correctly, as continuing a conversation that goes back to Plato, especially the Meno dialogue, in which a slave boy is revealed by Socrates to know truths about geometry that he hadn’t realized he knew. Plato’s question was whether any of what we know about the world is innate as opposed to acquired through experience. For Chomsky, the interest in linguistics isn’t so much whether one language uses infinitives and another uses subjunctives but whether all languages are, at some level, deeply related and constrained by what Chomsky dubbed “universal grammar.”
That idea of universal grammar didn’t just change linguistics, it had repercussions for virtually every field that concerns the mind. In developmental psychology, for example, no idea has ever been as controversial, or as pivotal. How much do children know about language before they even begin to talk? Do they learn language simply by imitating their parents, or is there a built-in language-acquisition device (or what Steven Pinker called a “language instinct”)? Parallel questions soon arose in other aspects of cognitive development as well.
Part of Chomsky’s argument begins with the observation that language is infinite, with no end to the number of possible sentences that you can produce and comprehend. Take Chomsky’s sole entry in Bartlett’s Quotations, “colorless green ideas sleep furiously”; even if you don’t know what it means, you can still immediately apprehend that the sentence is grammatical, whereas “green furiously ideas colorless sleep” is not. Yet any given child’s experience is finite; we constantly encounter sentences that we have never heard or seen before. For Chomsky, the question of linguistics is how children bridge the gap between finite and infinite, from the finite input that they have heard to the infinity of what they can comprehend. In framing the problem in this way, Chomsky took an ancient and seemingly imponderable question—about nature versus nurture—and turned it into something that is actually testable.
One of Chomsky’s most remarkable traits is his willingness to change his own mind, like Bob Dylan suddenly going electric to the consternation of his early fans. Take for example the distinction that he once made between “deep structure” and “surface structure.” In its crudest form, the notion is that an active sentence (“John loved Mary”) and a passive sentence (“Mary was loved by John”) might seem superficially different; yet they have some important underlying commonality, both in meaning and in their representation in the brain. It’s a neat idea that makes certain very specific claims about how language is represented in our mind, and how sound relates to meaning; decades of linguistic work have been based on it. It also (somewhat uncharacteristically for Chomsky) makes for a perfect sound bite; there are plenty of people who know nothing about linguistics, but have the sense that what he was talking about was “deep versus shallow”; there was even a Nobel Prize winner, Niels Jerne, who used the metaphor in his Nobel address about language and the immune system. Most people would have lived off a metaphor that good for the rest of their careers; Chomsky has spent the past twenty-five years arguing that he made a mistake. Although the basic metaphor is simple, the distinction between deep structure and surface structure required a great deal of behind-the-scenes technical examination in order to make it work with the complexities of different languages. In its place, Chomsky has recently been trying to develop a simpler, more elegant theory (known as the Minimalist Program) that encompasses the spirit of the original. (Not all of us are convinced about the success of that approach; my own view is that language is irreducibly messy, and that the elegance that Chomsky seeks will not be forthcoming.)
More recently, in a co-written 2002 paper, Chomsky seemed to open a door to a view that he’d long criticized: the idea that the “faculty of language,” as he called it, might draw on parts of the brain that weren’t specialized for language. Up to then, Chomsky had been known in part for an idea called “the autonomy of syntax,” which, in crude terms, suggested that grammar was cognitively separate from other aspects of the mind (like our understanding of the world and our desire to eat pizza for dinner). I was so surprised by the dramatic shift that I wrote to him to ask: “A lot of people take [your new] paper to be a renouncing of your earlier arguments.” Was that really the case? His response, as immediate as ever: “As for my own views, they’ve of course evolved over the years. This conception of ‘renouncing beliefs’ is very odd, as if we’re in some kind of religious cult. I ‘renounce beliefs’ practically every time I think about the topics or find out what someone else is thinking.”
Nine academics out of ten never change their mind about anything; most (though there are salient exceptions, like Wittgenstein) lock into a position early in their careers and then defend it to the hilt. Chomsky, in contrast, has never stopped critiquing his own theories with the same vigor with which he has criticized others. For fifty years, his search for linguistic truth has been relentless.
At times, Chomsky can be maddening. He is not a particularly good listener, and he aims to win every argument, preferably by taking the most contrarian stance possible, like arguing that language evolved not for communication but for internal thought. The odder the arguments are, the more he seems to enjoy himself. A friend of mine was a student of Chomsky’s in the mid-seventies, and he recalls his strategy for meeting with Chomsky. Instead of the usual one-on-one meetings that most students have with their advisors, my friend would bring a classmate along. The two students would take turns debating Chomsky, one stepping in when the other ran out of things to say. Chomsky would win every argument (or at least never admit defeat), and the two would go home exhausted, but also elated. Each debate with Noam brought them a step closer to understanding the true nature of language and mind.
Chomsky can also be dismissive, in ways that still rankle—and stir people to action. He has, for example, never been kind to the field of semantics (which is about meaning) as opposed to the field of syntax (which is about grammar). He has also paid too little attention to the experimental data of psycholinguistics, which focusses on the dynamics of how people understand sentences as they unfold, millisecond by millisecond. Because Chomsky has a kind of first-mover advantage, his views in any area tend to dominate, even in areas, like the evolution of language, in which he is not an expert. When he makes a bad choice and unfairly dismisses an important question (like the way in which meaning is represented in the brain), linguists who aim to answer the questions that Chomsky neglects often face unnecessary resistance, as if the guru hasn’t sanctioned their mission. A good way for a young linguistics graduate student to make a name is to develop an intriguing idea that Chomsky mentions in one of his footnotes; it’s a riskier move to study something that Chomsky doesn’t find to be important.
A year and a half ago, at a symposium at M.I.T., Chomsky said, somewhat flippantly, that he didn’t think that statistical data mining approaches to language had contributed significantly to our understanding of how language worked. A more polite person would have put that sentiment more gently; a less influential person would simply have been ignored. Instead, one of the world’s busiest and most influential software engineers—Peter Norvig, the director of research at Google—wrote an eight thousand four hundred word blog post, extensively footnoted, critiquing Chomsky’s remark. It was two hundred or so paragraphs in response to a handful of Chomsky’s sentences. In practically any other context, it would be unseemly for a leading researcher at one of the world’s largest companies to spend so much effort picking on an off-the-cuff remark made by a man in his eighties. But I see it in a different way: two titans facing off, with Chomsky, as ever, defining the contest.
When I was a second-year graduate student at M.I.T., Chomsky taught a class on philosophy, which I was lucky enough to sit in on. The class itself was an event, almost a circus; people came from all over Boston, not just M.I.T. Grad students in my department (brain and cognitive science) would walk over en masse, facetiously chanting, as if it was a mantra, “Noam… Noam… Noam.” We would poke fun at Chomsky’s body language, and his teaching habits, as if we were above it all. But the truth is that few of us have ever been to a course that stimulating, before or since. To my eternal chagrin, I became exceptionally busy that semester (working on a language acquisition project that had just taken off), and had no choice but to drop the class when I realized I wouldn’t have time to write the required paper.
Sometimes I think of my whole career since as a kind of penance, still trying to wend my way through the philosophical stage that Noam had set. None of the questions Chomsky has posed has yet been fully answered, to his satisfaction or to anybody else’s, but no scholar of the mind has ever been more influential. Chomsky may not always have the right answers. But he has always had the wisdom to pose the right questions.
Gary Marcus, a professor of psychology at N.Y.U. and the author of “Guitar Zero: The Science of Becoming Musical At Any Age,” has written for newyorker.com about the facts and fictions of neuroscience; moral machines; and Ray Kurzweil’s new book.
Photograph by Philip Jones Griffiths/Magnum.
Chomsky’s Revolution in Linguistics
John R. Searle
The New York Review of Books, June 29, 1972
Throughout the history of the study of man there has been a fundamental opposition between those who believe that progress is to be made by a rigorous observation of man’s actual behavior and those who believe that such observations are interesting only in so far as they reveal to us hidden and possibly fairly mysterious underlying laws that only partially and in distorted form reveal themselves to us in behavior. Freud, for example, is in the latter class, most of American social science in the former.

Noam Chomsky is unashamedly with the searchers after hidden laws. Actual speech behavior, speech performance, for him is only the top of a large iceberg of linguistic competence distorted in its shape by many factors irrelevant to linguistics. Indeed he once remarked that the very expression “behavioral sciences” suggests a fundamental confusion between evidence and subject matter. Psychology, for example, he claims is the science of mind; to call psychology a behavioral science is like calling physics a science of meter readings. One uses human behavior as evidence for the laws of the operation of the mind, but to suppose that the laws must be laws of behavior is to suppose that the evidence must be the subject matter.
In this opposition between the methodology of confining research to observable facts and that of using the observable facts as clues to hidden and underlying laws, Chomsky’s revolution is doubly interesting: first, within the field of linguistics, it has precipitated a conflict which is an example of the wider conflict; and secondly, Chomsky has used his results about language to try to develop general anti-behaviorist and anti-empiricist conclusions about the nature of the human mind that go beyond the scope of linguistics.
His revolution followed fairly closely the general pattern described in Thomas Kuhn’s The Structure of Scientific Revolutions: the accepted model or “paradigm” of linguistics was confronted, largely by Chomsky’s work, with increasing numbers of nagging counterexamples and recalcitrant data which the paradigm could not deal with. Eventually the counter-examples led Chomsky to break the old model altogether and to create a completely new one. Prior to the publication of his Syntactic Structures in 1957, many, probably most, American linguists regarded the aim of their discipline as being the classification of the elements of human languages. Linguistics was to be a sort of verbal botany. As Hockett wrote in 1942, “Linguistics is a classificatory science.”
Suppose, for example, that such a linguist is giving a description of a language, whether an exotic language like Cherokee or a familiar one like English. He proceeds by first collecting his “data,” he gathers a large number of utterances of the language, which he records on his tape recorder or in a phonetic script. This “corpus” of the language constitutes his subject matter. He then classifies the elements of the corpus at their different linguistic levels: first he classifies the smallest significant functioning units of sound, the phonemes, then at the next level the phonemes unite into the minimally significant bearers of meaning, the morphemes (in English, for example, the word “cat” is a single morpheme made up of three phonemes; the word “uninteresting” is made up of three morphemes: “un,” “interest,” and “ing”), at the next higher level the morphemes join together to form words and word classes such as noun phrases and verb phrases, and at the highest level of all come sequences of word classes, the possible sentences and sentence types.
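The layered classification Searle describes can be made concrete with a toy segmenter. (A sketch only: the prefix, stem, and suffix inventories below are hypothetical and hand-written, and real morphological analysis is far harder than this.)

```python
def segment(word):
    """Try prefix + stem + suffix decompositions against a tiny
    hand-written inventory, returning the morphemes or None."""
    prefixes = ["un", ""]   # "" allows words with no prefix
    suffixes = ["ing", ""]  # "" allows words with no suffix
    stems = {"interest", "cat"}
    for p in prefixes:
        for s in suffixes:
            if word.startswith(p) and word.endswith(s):
                stem = word[len(p):len(word) - len(s)] if s else word[len(p):]
                if stem in stems:
                    return [m for m in (p, stem, s) if m]
    return None

print(segment("uninteresting"))  # ['un', 'interest', 'ing'] -- three morphemes
print(segment("cat"))            # ['cat'] -- a single morpheme
```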
The aim of linguistic theory was to provide the linguist with a set of rigorous methods, a set of discovery procedures which he would use to extract from the “corpus” the phonemes, the morphemes, and so on. The study of the meanings of sentences or of the uses to which speakers of the language put the sentences had little place in this enterprise. Meanings, scientifically construed, were thought to be patterns of behavior determined by stimulus and response; they were properly speaking the subject matter of psychologists. Alternatively they might be some mysterious mental entities altogether outside the scope of a sober science or, worse yet, they might involve the speaker’s whole knowledge of the world around him and thus fall beyond the scope of a study restricted only to linguistic facts.
Structural linguistics, with its insistence on objective methods of verification and precisely specified techniques of discovery, with its refusal to allow any talk of meanings or mental entities or unobservable features, derives from the “behavioral sciences” approach to the study of man, and is also largely a consequence of the philosophical assumptions of logical positivism. Chomsky was brought up in this tradition at the University of Pennsylvania as a student of both Zellig Harris, the linguist, and Nelson Goodman, the philosopher.
Chomsky’s work is interesting in large part because, while it is a major attack on the conception of man implicit in the behavioral sciences, the attack is made from within the very tradition of scientific rigor and precision that the behavioral sciences have been aspiring to. His attack on the view that human psychology can be described by correlating stimulus and response is not an a priori conceptual argument, much less is it the cry of an anguished humanist resentful at being treated as a machine or an animal. Rather it is a claim that a really rigorous analysis of language will show that such methods when applied to language produce nothing but falsehoods or trivialities, that their practitioners have simply imitated “the surface features of science” without having its “significant intellectual content.”
As a graduate student at Pennsylvania, Chomsky attempted to apply the conventional methods of structural linguistics to the study of syntax, but found that the methods that had apparently worked so well with phonemes and morphemes did not work very well with sentences. Each language has a finite number of phonemes and a finite though quite large number of morphemes. It is possible to get a list of each; but the number of sentences in any natural language like French or English is, strictly speaking, infinite. There is no limit to the number of new sentences that can be produced; and for each sentence, no matter how long, it is always possible to produce a longer one. Within structuralist assumptions it is not easy to account for the fact that languages have an infinite number of sentences.
Furthermore the structuralist methods of classification do not seem able to account for all of the internal relations within sentences, or the relations that different sentences have to each other. For example, to take a famous case, the two sentences “John is easy to please” and “John is eager to please” look as if they had exactly the same grammatical structure. Each is a sequence of noun-copula-adjective-infinitive verb. But in spite of this surface similarity the grammar of the two is quite different. In the first sentence, though it is not apparent from the surface word order, “John” functions as the direct object of the verb to please; the sentence means: it is easy for someone to please John. Whereas in the second “John” functions as the subject of the verb to please; the sentence means: John is eager that he please someone. That this is a difference in the syntax of the sentences comes out clearly in the fact that English allows us to form the noun phrase “John’s eagerness to please” out of the second, but not “John’s easiness to please” out of the first. There is no easy or natural way to account for these facts within structuralist assumptions.
Another set of syntactical facts that structuralist assumptions are inadequate to handle is the existence of certain types of ambiguous sentences where the ambiguity derives not from the words in the sentence but from the syntactical structure. Consider the sentence “The shooting of the hunters is terrible.” This can mean that it is terrible that the hunters are being shot or that the hunters are terrible at shooting or that the hunters are being shot in a terrible fashion. Another example is “I like her cooking.” In spite of the fact that it contains no ambiguous words (or morphemes) and has a very simple superficial grammatical structure of noun-verb-possessive pronoun-noun, this sentence is in fact remarkably ambiguous. It can mean, among other things, I like what she cooks, I like the way she cooks, I like the fact that she cooks, even, I like the fact that she is being cooked.
Such “syntactically ambiguous” sentences form a crucial test case for any theory of syntax. The examples are ordinary pedestrian English sentences, there is nothing fancy about them. But it is not easy to see how to account for them. The meaning of any sentence is determined by the meanings of the component words (or morphemes) and their syntactical arrangement. How then can we account for these cases where one sentence containing unambiguous words (and morphemes) has several different meanings? Structuralist linguists had little or nothing to say about these cases; they simply ignored them. Chomsky was eventually led to claim that these sentences have several different syntactical structures, that the uniform surface structure of, e.g., “I like her cooking” conceals several different underlying structures which he called “deep” structures. The introduction of the notion of the deep structure of sentences, not always visible in the surface structure, is a crucial element of the Chomsky revolution, and I shall explain it in more detail later.
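The structural point here — one string of unambiguous words admitting more than one syntactic analysis — can be sketched with a small chart parser. (A sketch only: it uses the classic ambiguous sentence “I saw the man with the telescope” rather than Searle’s examples, whose ambiguities need machinery beyond a simple context-free grammar, and the grammar and lexicon are invented for illustration.)

```python
# Count the distinct parses of a sentence under a tiny grammar in
# Chomsky normal form, using the CKY chart-parsing algorithm.
grammar = {                # (B, C) -> list of A such that A -> B C
    ("NP", "VP"): ["S"],
    ("V", "NP"): ["VP"],
    ("VP", "PP"): ["VP"],  # attach the PP to the verb phrase...
    ("NP", "PP"): ["NP"],  # ...or to the noun phrase: the ambiguity
    ("Det", "N"): ["NP"],
    ("P", "NP"): ["PP"],
}
lexicon = {
    "I": ["NP"], "saw": ["V"], "the": ["Det"],
    "man": ["N"], "with": ["P"], "telescope": ["N"],
}

def count_parses(words):
    n = len(words)
    # chart[i][j] maps a category to the number of ways it spans words[i:j]
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for cat in lexicon[w]:
            chart[i][i + 1][cat] = chart[i][i + 1].get(cat, 0) + 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for (b, c), parents in grammar.items():
                    ways = chart[i][k].get(b, 0) * chart[k][j].get(c, 0)
                    for a in (parents if ways else []):
                        chart[i][j][a] = chart[i][j].get(a, 0) + ways
    return chart[0][n].get("S", 0)

print(count_parses("I saw the man with the telescope".split()))  # 2
```

The grammar assigns the sentence two parses: one where the prepositional phrase modifies the verb phrase (the telescope is the instrument of seeing) and one where it modifies the noun phrase (the man has the telescope) — one surface string, two underlying structures.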
One of the merits of Chomsky’s work has been that he has persistently tried to call attention to the puzzling character of facts that are so familiar that we all tend to take them for granted as not requiring explanation. Just as physics begins in wonder at such obvious facts as that apples fall to the ground or genetics in wonder that plants and animals reproduce themselves, so the study of the structure of language begins in wondering at such humdrum facts as that “I like her cooking” has different meanings, “John is eager to please” isn’t quite the same in structure as “John is easy to please,” and the equally obvious but often overlooked facts that we continually find ourselves saying and hearing things we have never said or heard before and that the number of possible new sentences is infinite.
The inability of structuralist methods to account for such syntactical facts eventually led Chomsky to challenge not only the methods but the goals and indeed the definition of the subject matter of linguistics given by the structuralist linguists. Instead of a taxonomic goal of classifying elements by performing sets of operations on a corpus of utterances, Chomsky argued that the goal of linguistic description should be to construct a theory that would account for the infinite number of sentences of a natural language. Such a theory would show which strings of words were sentences and which were not, and would provide a description of the grammatical structure of each sentence.
Such descriptions would have to be able to account for such facts as the internal grammatical relations and the ambiguities described above. The description of a natural language would be a formal deductive theory which would contain a set of grammatical rules that could generate the infinite set of sentences of the language, would not generate anything that was not a sentence, and would provide a description of the grammatical structure of each sentence. Such a theory came to be called a “generative grammar” because of its aim of constructing a device that would generate all and only the sentences of a language.
This conception of the goal of linguistics then altered the conception of the methods and the subject matter. Chomsky argued that since any language contains an infinite number of sentences, any “corpus,” even if it contained as many sentences as there are in all the books of the Library of Congress, would still be trivially small. Instead of the appropriate subject matter of linguistics being a randomly or arbitrarily selected set of sentences, the proper object of study was the speaker’s underlying knowledge of the language, his “linguistic competence” that enables him to produce and understand sentences he has never heard before.
Once the conception of the “corpus” as the subject matter is rejected, then the notion of mechanical procedures for discovering linguistic truths goes as well. Chomsky argues that no science has a mechanical procedure for discovering the truth anyway. Rather, what happens is that the scientist formulates hypotheses and tests them against evidence. Linguistics is no different: the linguist makes conjectures about linguistic facts and tests them against the evidence provided by native speakers of the language. He has in short a procedure for evaluating rival hypotheses, but no procedure for discovering true theories by mechanically processing evidence.
The Chomsky revolution can be summarized in the following chart:
Most of this revolution was already presented in Chomsky’s book Syntactic Structures. As one linguist remarked, “The extraordinary and traumatic impact of the publication of Syntactic Structures by Noam Chomsky in 1957 can hardly be appreciated by one who did not live through this upheaval.” In the years after 1957 the spread of the revolution was made more rapid and more traumatic by certain special features of the organization of linguistics as a discipline in the United States. Only a few universities had separate departments of linguistics. The discipline was (by contrast to, say, philosophy or psychology), and still is, a rather cozy one. Practitioners were few; they all tended to know one another; they read the same very limited number of journals; they had, and indeed still have, an annual get-together at the Summer Linguistics Institute of the Linguistic Society of America, where issues are thrashed out and family squabbles are aired in public meetings.
All of this facilitated a rapid dissemination of new ideas and a dramatic and visible clash of conflicting views. Chomsky did not convince the established leaders of the field but he did something more important, he convinced their graduate students. And he attracted some fiery disciples, notably Robert Lees and Paul Postal.
The spread of Chomsky’s revolution, like the spread of analytic philosophy during the same period, was a striking example of the Young Turk phenomenon in American academic life. The graduate students became generative grammarians even in departments that had traditionalist faculties. All of this also engendered a good deal of passion and animosity, much of which still survives. Many of the older generation still cling resentfully to the great traditions, regarding Chomsky and his “epigones” as philistines and vulgarians. Meanwhile Chomsky’s views have become the conventional wisdom, and as Chomsky and his disciples of the Sixties very quickly become Old Turks, a new generation of Young Turks (many of them among Chomsky’s best students) arises to challenge Chomsky’s views with a new theory of “generative semantics.”
The aim of the linguistic theory expounded by Chomsky in Syntactic Structures (1957) was essentially to describe syntax, that is, to specify the grammatical rules underlying the construction of sentences. In Chomsky’s mature theory, as expounded in Aspects of the Theory of Syntax (1965), the aims become more ambitious: to explain all of the linguistic relationships between the sound system and the meaning system of the language. To achieve this, the complete “grammar” of a language, in Chomsky’s technical sense of the word, must have three parts, a syntactical component that generates and describes the internal structure of the infinite number of sentences of the language, a phonological component that describes the sound structure of the sentences generated by the syntactical component, and a semantic component that describes the meaning structure of the sentences. The heart of the grammar is the syntax; the phonology and the semantics are purely “interpretative,” in the sense that they describe the sound and the meaning of the sentences produced by the syntax but do not generate any sentences themselves.
The first task of Chomsky’s syntax is to account for the speaker’s understanding of the internal structure of sentences. Sentences are not unordered strings of words, rather the words and morphemes are grouped into functional constituents such as the subject of the sentence, the predicate, the direct object, and so on. Chomsky and other grammarians can represent much, though not all, of the speaker’s knowledge of the internal structure of sentences with rules called “phrase structure” rules.
The rules themselves are simple enough to understand. For example, the fact that a sentence (S) can consist of a noun phrase (NP) followed by a verb phrase (VP) we can represent in a rule of the form: S → NP + VP. And for purposes of constructing a grammatical theory which will generate and describe the structure of sentences, we can read the arrow as an instruction to rewrite the left-hand symbol as the string of symbols on the right-hand side. The rewrite rules tell us that the initial symbol S can be replaced by NP + VP. Other rules will similarly unpack NP and VP into their constituents. Thus, in a very simple grammar, a noun phrase might consist of an article (Art) followed by a noun (N); and a verb phrase might consist of an auxiliary verb (Aux), a main verb (V), and a noun phrase (NP). A very simple grammar of a fragment of English, then, might look like this:
1. S → NP + VP
2. NP → Art + N
3. VP → Aux + V + NP
4. Aux → (can, may, will, must, etc.)
5. V → (read, hit, eat, etc.)
6. Art → (a, the)
7. N → (boy, man, book, etc.)
If we introduce the initial symbol S into this system, then construing each arrow as the instruction to rewrite the left-hand symbol with the elements on the right (and where the elements are bracketed, to rewrite it as one of the elements), we can construct derivations of English sentences. If we keep applying the rules to generate strings until we have no elements in our strings that occur on the left-hand side of a rewrite rule, we have arrived at a “terminal string.” For example, starting with S and rewriting according to the rules mentioned above, we might construct the following simple derivation of the terminal string underlying the sentence “The boy will read the book”:
NP + VP (by rule 1)
Art + N + VP (by rule 2)
Art + N + Aux + V + NP (by rule 3)
Art + N + Aux + V + Art + N (by rule 2)
the + boy + will + read + the + book (by rules 4, 5, 6, and 7)
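The derivation above can be mechanized directly. (A sketch only: this program rewrites the leftmost remaining nonterminal at each step, so the intermediate strings differ slightly in order from Searle’s, but the terminal string is the same.)

```python
# The seven rewrite rules, with bracketed alternatives as word lists.
rules = {
    "S":   [["NP", "VP"]],
    "NP":  [["Art", "N"]],
    "VP":  [["Aux", "V", "NP"]],
    "Aux": [["can"], ["may"], ["will"], ["must"]],
    "V":   [["read"], ["hit"], ["eat"]],
    "Art": [["a"], ["the"]],
    "N":   [["boy"], ["man"], ["book"]],
}

def derive(string, choose):
    """Rewrite the leftmost nonterminal until only terminal words
    remain, printing each intermediate string of the derivation."""
    while True:
        for i, sym in enumerate(string):
            if sym in rules:
                string = string[:i] + choose(rules[sym]) + string[i + 1:]
                print(" + ".join(string))
                break
        else:
            return string  # a terminal string: nothing left to rewrite

# Reproduce "The boy will read the book" by picking the right
# alternative at each rewrite step.
picks = iter([
    ["NP", "VP"], ["Art", "N"], ["the"], ["boy"],
    ["Aux", "V", "NP"], ["will"], ["read"],
    ["Art", "N"], ["the"], ["book"],
])
sentence = derive(["S"], choose=lambda alternatives: next(picks))
print(" ".join(sentence))  # the boy will read the book
```

Passing a different `choose` function (a random pick from the alternatives, say) would make the same seven rules generate every sentence of this toy fragment of English.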
The information contained in this derivation can be represented graphically in a tree diagram of the following form:
This “phrase marker” is Chomsky’s representation of the syntax of the sentence “The boy will read the book.” It provides a description of the syntactical structure of the sentence. Phrase structure rules of the sort I have used to construct the derivation were implicit in at least some of the structuralist grammars; but Chomsky was the first to render them explicit and to show their role in the derivations of sentences. He is not, of course, claiming that a speaker actually goes consciously or unconsciously through any such process of applying rules of the form “rewrite X as Y” to construct sentences. To construe the grammarian’s description this way would be to confuse an account of competence with a theory of performance.
But Chomsky does claim that in some form or other the speaker has “internalized” rules of sentence construction, that he has “tacit” or “unconscious” knowledge of grammatical rules, and that the phrase structure rules constructed by the grammarian “represent” his competence. One of the chief difficulties of Chomsky’s theory is that no clear and precise answer has ever been given to the question of exactly how the grammarian’s account of the construction of sentences is supposed to represent the speaker’s ability to speak and understand sentences, and in precisely what sense of “know” the speaker is supposed to know the rules of the grammar.
Phrase structure rules were, as I have said, already implicit in at least some of the structuralist grammars Chomsky was attacking in Syntactic Structures. One of his earliest claims was that such rules, even in a rigorous and formalized deductive model such as we have just sketched, were not adequate to account for all the syntactical facts of natural languages. The entering wedge of his attack on structuralism was the claim that phrase structure rules alone could not account for the various sorts of cases such as “I like her cooking” and “John is eager to please.”
First, within such a grammar there is no natural way to describe the ambiguities in a sentence such as “I like her cooking.” Phrase structure rules alone would provide only one derivation for this sentence; but as the sentence is syntactically ambiguous, the grammar should reflect that fact by providing several different syntactical derivations and hence several different syntactical descriptions.
Secondly, phrase structure grammars have no way to picture the differences between “John is easy to please” and “John is eager to please.” Though the sentences are syntactically different, phrase structure rules alone would give them similar phrase markers.
Thirdly, just as in the above examples surface similarities conceal underlying differences that cannot be revealed by phrase structure grammar, so surface differences also conceal underlying similarities. For example, in spite of the different word order and the addition of certain elements, the sentence “The book will be read by the boy” and the sentence “The boy will read the book” have much in common: they both mean the same thing—the only difference is that one is in the passive voice and the other in the active voice. Phrase structure grammars alone give us no way to picture this similarity. They would give us two unrelated descriptions of these two sentences.
To account for such facts, Chomsky claims that in addition to phrase structure rules the grammar requires a second kind of rule, “transformational” rules, which transform phrase markers into other phrase markers by moving elements around, by adding elements, and by deleting elements. For example, by using Chomsky’s transformational rules, we can show the similarity of the passive to the active voice by showing how a phrase marker for the active voice can be converted into a phrase marker for the passive voice. Thus, instead of generating two unrelated phrase markers by phrase structure rules, we can construct a simpler grammar by showing how both the active and the passive can be derived from the same underlying phrase marker.
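A deliberately simplified sketch can show the shape of such a rule. This is a hypothetical toy, not Chomsky's actual formulation: the passive transformation is stated here as a structural change on the flat string NP1 + Aux + V + NP2 rather than on a full phrase marker.

```python
# Toy morphology table for the past participle (an assumption; "read"
# happens to be its own participle).
PARTICIPLE = {"read": "read"}

def passive(deep):
    """Map an active structure NP1 + Aux + V + NP2 onto
    NP2 + Aux + be + V-en + by + NP1."""
    np1, aux, v, np2 = deep
    return [np2, aux, "be", PARTICIPLE[v], "by", np1]

deep_structure = ["the boy", "will", "read", "the book"]
print(" ".join(deep_structure))           # the boy will read the book
print(" ".join(passive(deep_structure)))  # the book will be read by the boy
```

One underlying structure thus yields both surface forms: left alone it surfaces as the active sentence, transformed it surfaces as the passive.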
To account for sentences like “I like her cooking” we show that what we have is not just one phrase marker but several different underlying sentences each with a different meaning, and the phrase markers for these different sentences can all be transformed into one phrase marker for “I like her cooking.” Thus, underlying the one sentence “I like her cooking” are phrase markers for “I like what she cooks,” “I like the way she cooks,” “I like the fact that she cooks,” etc. For example, underlying the two meanings, “I like what she cooks” and “I like it that she is being cooked,” are the two phrase markers:
Different transformational rules convert each of these into the same derived phrase marker for the sentence “I like her cooking.” Thus, the ambiguity in the sentence is represented in the grammar by phrase markers of several quite different sentences. Different phrase markers produced by the phrase structure rules are transformed into the same phrase marker by the application of the transformational rules.
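The converse, many-to-one direction can be sketched the same way. The single "nominalization" below is a hypothetical stand-in for the different transformations involved; the point is only that distinct deep structures collapse into one surface string.

```python
def nominalize(deep):
    """Toy rule: rewrite a clausal object as the possessive
    gerund 'her cooking'."""
    subject, verb, _clause = deep
    return f"{subject} {verb} her cooking"

reading_1 = ("I", "like", "what she cooks")
reading_2 = ("I", "like", "that she is being cooked")
print(nominalize(reading_1))  # I like her cooking
print(nominalize(reading_2))  # I like her cooking
```

The ambiguity of the surface sentence is thus recorded not in the sentence itself but in the plurality of deep structures behind it.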
Because of the introduction of transformational rules, grammars of Chomsky’s kind are often called “transformational generative grammars” or simply “transformational grammars.” Unlike phrase structure rules, which apply to a single left-hand element in virtue of its shape, transformational rules apply to an element only in virtue of its position in a phrase marker: instead of rewriting one element as a string of elements, a transformational rule maps one phrase marker into another. Transformational rules therefore apply after the phrase structure rules have been applied; they operate on the output of the phrase structure rules of the grammar.
Corresponding to the phrase structure rules and the transformational rules respectively are two components of the syntax of the language, a base component and a transformational component. The base component of Chomsky’s grammar contains the phrase structure rules, and these (together with certain rules restricting which combinations of words are permissible so that we do not get nonsense sequences like “The book will read the boy”) determine the deep structure of each sentence. The transformational component converts the deep structure of the sentence into its surface structure. In the examples we just considered, “The book will be read by the boy” and “The boy will read the book,” two surface structures are derived from one deep structure. In the case of “I like her cooking,” one surface structure is derived from several different deep structures.
At the time of the publication of Aspects of the Theory of Syntax it seemed that all of the semantically relevant parts of the sentence, all the things that determine its meaning, were contained in the deep structure of the sentence. The examples we mentioned above fit in nicely with this view. “I like her cooking” has different meanings because it has different deep structures though only one surface structure; “The boy will read the book” and “The book will be read by the boy” have different surface structures, but one and the same deep structure, hence they have the same meaning.
This produced a rather elegant theory of the relation of syntax to semantics and phonology: the two components of the syntax, the base component and the transformational component, generate deep structures and surface structures respectively. Deep structures are the input to the semantic component, which describes their meaning. Surface structures are the input to the phonological component, which describes their sound. In short, deep structure determines meaning, surface structure determines sound. Graphically the theory of a language was supposed to look like this:
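Schematically (a sketch reconstructed from the description just given, not the original diagram):

```
base component
      |
      v  (phrase structure rules)
deep structures ------> semantic component ------> meanings
      |
      v  (transformational rules)
surface structures ---> phonological component --> sounds
```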
The task of the grammarian is to state the rules that are in each of the little boxes. These rules are supposed to represent the speaker’s competence. In knowing how to produce and understand sentences, the speaker, in some sense, is supposed to know or to have “internalized” or have an “internal representation of” these rules.
The elegance of this picture has been marred in recent years, partly by Chomsky himself, who now concedes that surface structures determine at least part of meaning, and more radically by the younger Turks, the generative semanticists, who insist that there is no boundary between syntax and semantics and hence no such entities as syntactic deep structures.
Seen as an attack on the methods and assumptions of structural linguistics, Chomsky’s revolution appears to many of his students to be not quite revolutionary enough. Chomsky inherits and maintains from his structuralist upbringing the conviction that syntax can and should be studied independently of semantics; that form is to be characterized independently of meaning. As early as Syntactic Structures he was arguing that “investigation of such [semantic] proposals invariably leads to the conclusion that only a purely formal basis can provide a firm and productive foundation for the construction of grammatical theory.”
The structuralists feared the intrusion of semantics into syntax because meaning seemed too vaporous and unscientific a notion for use in a rigorous science of language. Some of this attitude appears to survive in Chomsky’s persistent preference for syntactical over semantic explanations of linguistic phenomena. But, I believe, the desire to keep syntax autonomous springs from a more profound philosophical commitment: man, for Chomsky, is essentially a syntactical animal. The structure of his brain determines the structure of his syntax, and for this reason the study of syntax is one of the keys, perhaps the most important key, to the study of the human mind.
It is of course true, Chomsky would say, that men use their syntactical objects for semantic purposes (that is, they talk with their sentences), but the semantic purposes do not determine the form of the syntax or even influence it in any significant way. It is because form is only incidentally related to function that the study of language as a formal system is such a marvelous way of studying the human mind.
It is important to emphasize how peculiar and eccentric Chomsky’s overall approach to language is. Most sympathetic commentators have been so dazzled by the results in syntax that they have not noted how much of the theory runs counter to quite ordinary, plausible, and common-sense assumptions about language. The commonsense picture of human language runs something like this. The purpose of language is communication in much the same sense that the purpose of the heart is to pump blood. In both cases it is possible to study the structure independently of function but pointless and perverse to do so, since structure and function so obviously interact. We communicate primarily with other people, but also with ourselves, as when we talk or think in words to ourselves. Human languages are among several systems of human communication (some others are gestures, symbol systems, and representational art) but language has immeasurably greater communicative power than the others.
We don’t know how language evolved in human prehistory, but it is quite reasonable to suppose that the needs of communication influenced the structure. For example, transformational rules facilitate economy and so have survival value: we don’t have to say, “I like it that she cooks in a certain way,” we can say, simply, “I like her cooking.” We pay a small price for such economies in having ambiguities, but it does not hamper communication much to have ambiguous sentences because when people actually talk the context usually sorts out the ambiguities. Transformations also facilitate communication by enabling us to emphasize certain things at the expense of others: we can say not only “Bill loves Sally” but also “It is Bill that loves Sally” and “It is Sally that Bill loves.” In general an understanding of syntactical facts requires an understanding of their function in communication since communication is what language is all about.
Chomsky’s picture, on the other hand, seems to be something like this: except for having such general purposes as the expression of human thoughts, language doesn’t have any essential purpose, or if it does there is no interesting connection between its purpose and its structure. The syntactical structures of human languages are the products of innate features of the human mind, and they have no significant connection with communication, though, of course, people do use them for, among other purposes, communication. The essential thing about languages, their defining trait, is their structure. The so-called “bee language,” for example, is not a language at all because it doesn’t have the right structure, and the fact that bees apparently use it to communicate is irrelevant. If human beings evolved to the point where they used syntactical forms to communicate that are quite unlike the forms we have now and would be beyond our present comprehension, then human beings would no longer have language, but something else.
For Chomsky language is defined by syntactical structure (not by the use of the structure in communication) and syntactical structure is determined by innate properties of the human mind (not by the needs of communication). On this picture of language it is not surprising that Chomsky’s main contribution has been to syntax. The semantic results that he and his colleagues have achieved have so far been trivial.
Many of Chomsky’s best students find this picture of language implausible and the linguistic theory that emerges from it unnecessarily cumbersome. They argue that one of the crucial factors shaping syntactic structure is semantics. Even such notions as “a grammatically correct sentence” or a “well-formed” sentence, they claim, require the introduction of semantic concepts. For example, the sentence “John called Mary a Republican and then SHE insulted HIM” is a well-formed sentence only on the assumption that the participants regard it as insulting to be called a Republican.
Much as Chomsky once argued that structuralists could not comfortably accommodate the syntactical facts of language, so the generative semanticists now argue that his system cannot comfortably account for the facts of the interpenetration of semantics and syntax. There is no unanimity among Chomsky’s critics—Ross, Postal, Lakoff, McCawley, Fillmore (some of these are among his best students)—but they generally agree that syntax and semantics cannot be sharply separated, and hence there is no need to postulate the existence of purely syntactical deep structures.
Those who call themselves generative semanticists believe that the generative component of a linguistic theory is not the syntax, as in the above diagrams, but the semantics, that the grammar starts with a description of the meaning of a sentence and then generates the syntactical structures through the introduction of syntactical rules and lexical rules. The syntax then becomes just a collection of rules for expressing meaning.
It is too early to assess the conflict between Chomsky’s generative syntax and the new theory of generative semantics, partly because at present the arguments are so confused. Chomsky himself thinks that there is no substance to the issues because his critics have only rephrased his theory in a new terminology.
But it is clear that a great deal of Chomsky’s over-all vision of language hangs on the issue of whether there is such a thing as syntactical deep structure. Chomsky argues that if there were no deep structure, linguistics as a study would be much less interesting because one could not then argue from syntax to the structure of the human mind, which for Chomsky is the chief interest of linguistics. I believe on the contrary that if the generative semanticists are right (and it is by no means clear that they are) that there is no boundary between syntax and semantics and hence no syntactical deep structures, linguistics if anything would be even more interesting because we could then begin the systematic investigation of the way form and function interact, how use and structure influence each other, instead of arbitrarily assuming that they do not, as Chomsky has so often tended to assume.
It is one of the ironies of the Chomsky revolution that the author of the revolution now occupies a minority position in the movement he created. Most of the active people in generative grammar regard Chomsky’s position as having been rendered obsolete by the various arguments concerning the interaction between syntax and semantics. The old-time structuralists whom Chomsky originally attacked look on with delight at this revolution within the revolution, rubbing their hands in glee at the sight of their adversaries fighting each other. “Those TG [transformational grammar] people are in deep trouble,” one warhorse of the old school told me. But the traditionalists are mistaken to regard the fight as support for their position. The conflict is being carried on entirely within a conceptual system that Chomsky created. Whoever wins, the old structuralism will be the loser.
The most spectacular conclusion about the nature of the human mind that Chomsky derives from his work in linguistics is that his results vindicate the claims of the seventeenth-century rationalist philosophers, Descartes, Leibniz, and others, that there are innate ideas in the mind. The rationalists claim that human beings have knowledge that is not derived from experience but is prior to all experience and determines the form of the knowledge that can be gained from experience. The empiricist tradition by contrast, from Locke down to contemporary behaviorist learning theorists, has tended to treat the mind as a tabula rasa, containing no knowledge prior to experience and placing no constraints on the forms of possible knowledge, except that they must be derived from experience by such mechanisms as the association of ideas or the habitual connection of stimulus and response. For empiricists all knowledge comes from experience, for rationalists some knowledge is implanted innately and prior to experience. In his bluntest moods, Chomsky claims to have refuted the empiricists and vindicated the rationalists.
His argument centers around the way in which children learn language. Suppose we assume that the account of the structure of natural languages we gave in Section II is correct. Then the grammar of a natural language will consist of a set of phrase structure rules that generate underlying phrase markers, a set of transformational rules that map deep structures onto surface structures, a set of phonological rules that assign phonetic interpretations to surface structures, and so on. Now, asks Chomsky, if all of this is part of the child’s linguistic competence, how does he ever acquire it? That is, in learning how to talk, how does the child acquire that part of knowing how to talk which is described by the grammar and which constitutes his linguistic competence?
Notice, Chomsky says, several features of the learning situation: The information that the child is presented with—when other people address him or when he hears them talk to each other—is limited in amount, fragmentary, and imperfect. There seems to be no way the child could learn the language just by generalizing from his inadequate experiences, from the utterances he hears. Furthermore, the child acquires the language at a very early age, before his general intellectual faculties are developed.
Indeed, the ability to learn a language is only marginally dependent on intelligence and motivation—stupid children and intelligent children, motivated and unmotivated children, all learn to speak their native tongue. If a child does not acquire his first language by puberty, it is difficult, and perhaps impossible, for him to learn one after that time. Formal teaching of the first language is unnecessary: the child may have to go to school to learn to read and write but he does not have to go to school to learn how to talk.
Now, in spite of all these facts the child who learns his first language, claims Chomsky, performs a remarkable intellectual feat: in “internalizing” the grammar he does something akin to constructing a theory of the language. The only explanation for all these facts, says Chomsky, is that the mind is not a tabula rasa, but rather, the child has the form of the language already built into his mind before he ever learns to talk. The child has a universal grammar, so to speak, programmed into his brain as part of his genetic inheritance. In the most ambitious versions of this theory, Chomsky speaks of the child as being born “with a perfect knowledge of universal grammar, that is, with a fixed schematism that he uses,…in acquiring language.” A child can learn any human language on the basis of very imperfect information. That being the case, he must have the forms that are common to all human languages as part of his innate mental equipment.
As further evidence in support of a specifically human “faculté de langage” Chomsky points out that animal communication systems are radically unlike human languages. Animal systems have only a finite number of communicative devices, and they are usually controlled by certain stimuli. Human languages, by contrast, all have an infinite generative capacity, and the utterances of sentences are not predictable on the basis of external stimuli. This “creative aspect of language use” is peculiarly human.
One traditional argument against the existence of an innate language learning faculty is that human languages are so diverse. The differences between Chinese, Nootka, Hungarian, and English, for example, are so great as to destroy the possibility of any universal grammar, and hence languages could only be learned by a general intelligence, not by any innate language learning device. Chomsky has attempted to turn this argument on its head: In spite of surface differences, all human languages have very similar underlying structures; they all have phrase structure rules and transformational rules. They all contain sentences, and these sentences are composed of subject noun phrases and predicate verb phrases, etc.
Chomsky is really making two claims here. First, a historical claim that his views on language were prefigured by the seventeenth-century rationalists, especially Descartes. Second, a theoretical claim that empiricist learning theory cannot account for the acquisition of language. Both claims are more tenuous than he suggests. Descartes did indeed claim that we have innate ideas, such as the idea of a triangle or the idea of perfection or the idea of God. But I know of no passage in Descartes to suggest that he thought the syntax of natural languages was innate. Quite the contrary, Descartes appears to have thought that language was arbitrary; he thought that we arbitrarily attach words to our ideas. Concepts for Descartes are innate, whereas language is arbitrary and acquired. Furthermore Descartes does not allow for the possibility of unconscious knowledge, a notion that is crucial to Chomsky’s system. Chomsky correctly cites Descartes’s claim that the creative use of language distinguishes man from the lower animals. But that by itself does not support the thesis that Descartes is a precursor of Chomsky’s theory of innate ideas.
The positions are in fact crucially different. Descartes thought of man as essentially a language-using animal who arbitrarily assigns verbal labels to an innate system of concepts. Chomsky, as remarked earlier, thinks of man as essentially a syntactical animal producing and understanding sentences by virtue of possessing an innate system of grammar, triggered in various possible forms by the different human languages to which he has been exposed. A better historical analogy than with Descartes is with Leibniz, who claimed that innate ideas are in us in the way that the statue is already prefigured in a block of marble. In a passage Chomsky frequently quotes, Leibniz makes
…the comparison of a block of marble which has veins, rather than a block of marble wholly even, or of blank tablets, i.e., of what is called among philosophers, a tabula rasa. For if the soul resembles these blank tablets, truth would be in us as the figure of Hercules is in the marble, when the marble is wholly indifferent to the reception of this figure or some other. But if there were veins in the block which would indicate the figure of Hercules rather than other figures, this block would be more determined thereto, and Hercules would be in it as in some sense innate, although it would be needful to labor to discover these veins, to clear them by polishing, and by cutting away what prevents them from appearing. Thus, it is that ideas and truths are for us innate, as inclinations, dispositions, habits, or natural potentialities, and not as actions, although these potentialities are always accompanied by some actions, often insensible, which correspond to them.
But if the correct model for the notion of innate ideas is the block of marble that contains the figure of Hercules as “disposition,” “inclination,” or “natural potentiality,” then at least some of the dispute between Chomsky and the empiricist learning theorists will dissolve like so much mist on a hot morning. Many of the fiercest partisans of empiricist and behaviorist learning theories are willing to concede that the child has innate learning capacities in the sense that he has innate dispositions, inclinations, and natural potentialities. Just as the block of marble has the innate capacity of being turned into a statue, so the child has the innate capacity of learning. W. V. Quine, for example, in his response to Chomsky’s innateness hypothesis argues, “The behaviorist is knowingly and cheerfully up to his neck in innate mechanisms of learning readiness.” Indeed, claims Quine, “Innate biases and dispositions are the cornerstone of behaviorism.”
If innateness is the cornerstone of behaviorism what then is left of the dispute? Even after all these ecumenical disclaimers by behaviorists to the effect that of course behaviorism and empiricism require innate mechanisms to make the stimulus-response patterns work, there still remains a hard core of genuine disagreement. Chomsky is arguing not simply that the child must have “learning readiness,” “biases,” and “dispositions,” but that he must have a specific set of linguistic mechanisms at work. Claims by behaviorists that general learning strategies are based on mechanisms of feedback, information processing, analogy, and so on are not going to be enough. One has to postulate an innate faculty of language in order to account for the fact that the child comes up with the right grammar on the basis of his exposure to the language.
The heart of Chomsky’s argument is that the syntactical core of any language is so complicated and so specific in its form, so unlike other kinds of knowledge, that no child could learn it unless he already had the form of the grammar programmed into his brain, unless, that is, he had “perfect knowledge of a universal grammar.” Since there is at the present state of neurophysiology no way to test such a hypothesis by inspection of the brain, the evidence for the conclusion rests entirely on the facts of the grammar. In order to meet the argument, the anti-Chomskyan would have to propose a simpler grammar that would account for the child’s ability to learn a language and for linguistic competence in general. No defender of traditional learning theory has so far done this (though the generative grammarians do claim that their account of competence is much simpler than the diagram we drew in Section II above).
The behaviorist and empiricist learning theorist who concedes the complexity of grammar is faced with a dilemma: either he relies solely on stimulus-response mechanisms, in which case he cannot account for the acquisition of the grammar, or he concedes, à la Quine, that there are innate mechanisms which enable the child to learn the language. But as soon as the mechanisms are rich enough to account for the complexity and specificity of the grammar, then the stimulus-response part of the theory, which was supposed to be its core, becomes uninteresting; for such interest as it still has now derives entirely from its ability to trigger the innate mechanisms that are now the crucial element of the learning theory. Either way, the behaviorist has no effective reply to Chomsky’s arguments.
The weakest element of Chomsky’s grammar is the semantic component, as he himself repeatedly admits. But while he believes that the semantic component suffers from various minor technical limitations, I think that it is radically inadequate; that the theory of meaning it contains is too impoverished to enable the grammar to achieve its objective of explaining all the linguistic relationships between sound and meaning.
Most, though not all, of the diverse theories of meaning advanced in the past several centuries from Locke to Chomsky and Quine are guilty of exactly the same fallacy. The fallacy can be put in the form of a dilemma for the theory: either the analysis of meaning itself contains certain of the crucial elements of the notion to be analyzed, in which case the analysis fails because of circularity; or the analysis reduces the thing to be analyzed into simpler elements which lack its crucial features, in which case the analysis fails because of inadequacy.
Before we apply this dilemma to Chomsky let us see how it works for a simple theory of meaning such as is found in the classical empiricist philosophers, Locke, Berkeley, and Hume. These great British empiricists all thought that words got their meaning by standing for ideas in the mind. A sentence like “The flower is red” gets its meaning from the fact that anyone who understands the sentence will conjoin in his mind an idea of a flower with an idea of redness. Historically there were various arguments about the details of the theory (e.g., were the ideas for which general words stood themselves general ideas or were they particular ideas that were made “general in their representation”?). But the broad outlines of the theory were accepted by all. To understand a sentence is to associate ideas in the mind with the descriptive terms in the sentence.
But immediately the theory is faced with a difficulty. What makes the ideas in the mind into a judgment? What makes the sequence of images into a representation of the speech act of stating that the flower is red? According to the theory, first I have an idea of a flower, then I have an idea of redness. So far the sequence is just a sequence of unconnected images and does not amount to the judgment that the flower is red, which is what is expressed in the sentence. I can assume that the ideas come to someone who understands the sentence in the form of a judgment, that they just are somehow connected as representing the speech act of stating that the flower is red—in which case we have the first horn of our dilemma and the theory is circular, since it employs some of the crucial elements of the notion of meaning in the effort to explain meaning. Or on the other hand if I do not assume the ideas come in the form of a judgment then I have only a sequence of images in my mind and not the crucial feature of the original sentence, namely, the fact that the sentence says that the flower is red—in which case we have the second horn of our dilemma and the analysis fails because it is inadequate to account for the meaning of the sentence.
The semantic theory of Chomsky’s generative grammar commits exactly the same fallacy. To show this I will first give a sketch of what the theory is supposed to do. Just as the syntactical component of the grammar is supposed to describe the speaker’s syntactical competence (his knowledge of the structure of sentences) and the phonological component is supposed to describe his phonological competence (his knowledge of how the sentences of his language sound), so the semantic component is supposed to describe the speaker’s semantic competence (his knowledge of what the sentences mean and how they mean what they mean).
The semantic component of a grammar of a language embodies the semantic theory of that language. It consists of the set of rules that determine the meanings of the sentences of the language. It operates on the assumption, surely a correct one, that the meaning of any sentence is determined by the meaning of all the meaningful elements of the sentence and by their syntactical combination. Since these elements and their arrangement are represented in the deep structure of the sentence, the “input” to the semantic component of the grammar will consist of deep structures of sentences as generated by the syntactic component, in the way we described in Section II.
The “output” is a set of “readings” for each sentence, where the readings are supposed to be a “semantic representation” of the sentence; that is, they are supposed to be descriptions of the meanings of the sentence. If for example a sentence has three different meanings the semantic component will duplicate the speaker’s competence by producing three different readings. If the sentence is nonsense the semantic component will produce no readings. If two sentences mean the same thing, it will produce the same reading for both sentences. If a sentence is “analytic,” that is, if it is true by definition because the meaning of the predicate is contained in the meaning of the subject (for example, “All bachelors are unmarried” is analytic because the meaning of the subject “bachelor” contains the meaning of the predicate “unmarried”), the semantic component will produce a reading for the sentence in which the reading of the predicate is contained in the reading of the subject.
Chomsky’s grammarian in constructing a semantic component tries to construct a set of rules that will provide a model of the speaker’s semantic competence. The model must duplicate the speaker’s understanding of ambiguity, synonymy, nonsense, analyticity, self-contradiction, and so on. Thus, for example, consider the ambiguous sentence “I went to the bank.” As part of his competence the speaker of English knows that the sentence is ambiguous because the word “bank” has at least two different meanings. The sentence can mean either I went to the finance house or I went to the side of the river. The aim of the grammarian is to describe this kind of competence; he describes it by constructing a model, a set of rules, that will duplicate it. His semantic theory must produce two readings for this sentence.
If, on the other hand, the sentence is “I went to the bank and deposited some money in my account” the semantic component will produce only one reading because the portion of the sentence about depositing money determines that the other meaning of bank—namely, side of the river—is excluded as a possible meaning in this sentence. The semantic component then will have to contain a set of rules describing which kinds of combinations of words make which kind of sense, and this is supposed to account for the speaker’s knowledge of which kinds of combinations of words in his language make which kind of sense.
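The input–output behavior being attributed to the semantic component can be caricatured in a few lines of code. Everything here is invented for illustration: the toy lexicon, the “selector” words, and the disambiguation rule merely stand in for the semantic markers of the Katz–Fodor theory. Note that the “readings” this sketch emits are just English paraphrases, which is precisely the first horn of the dilemma discussed below.

```python
# A toy "semantic component": maps a sentence to a set of readings.
# Lexicon, selectors, and rules are all invented for illustration.

LEXICON = {
    "bank": ["finance house", "side of the river"],
}

# Words that, when they co-occur with "bank", select one of its senses.
SELECTORS = {
    "deposited": "finance house",
    "account": "finance house",
    "fished": "side of the river",
}

def readings(sentence: str) -> list[str]:
    words = sentence.lower().strip(".").split()
    senses = LEXICON["bank"] if "bank" in words else []
    if not senses:
        return [sentence]  # unambiguous: a single reading
    # Contextual exclusion: a selector word narrows the admissible senses.
    forced = {SELECTORS[w] for w in words if w in SELECTORS}
    kept = [s for s in senses if not forced or s in forced]
    # The "readings" are mere paraphrases -- Searle's first horn.
    return [sentence.replace("bank", s) for s in kept]

print(readings("I went to the bank"))
# → ['I went to the finance house', 'I went to the side of the river']
print(readings("I went to the bank and deposited some money in my account"))
# → ['I went to the finance house and deposited some money in my account']
```

The sketch duplicates the speaker’s recognition of ambiguity and of contextual exclusion, but it answers the question “what is a reading?” only by handing back another English sentence.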
All of this can be, and indeed has been, worked up into a very elaborate formal theory by Chomsky and his followers. But when we have constructed a description of what the semantic component is supposed to look like, a nagging question remains: What exactly are these “readings”? What is the string of symbols that comes out of the semantic component supposed to represent or express in such a way as to constitute a description of the meaning of a sentence?
The same dilemma with which we confronted Locke applies here: either the readings are just paraphrases, in which case the analysis is circular, or the readings consist only of lists of elements, in which case the analysis fails because of inadequacy; it cannot account for the fact that the sentence expresses a statement. Consider each horn of the dilemma. In the example above when giving two different readings for “I went to the bank” I gave two English paraphrases, but that possibility is not open to a semantic theory which seeks to explain competence in English, since the ability to understand paraphrases presupposes the very competence the semantic theory is seeking to explain. I cannot explain general competence in English by translating English sentences into other English sentences. In the literature of the Chomskyan semantic theorists, the examples given of “readings” are usually rather bad paraphrases of English sentences together with some jargon about “semantic markers” and “distinguishers” and so on. We are assured that the paraphrases are only for illustrative purposes, that they are not the real readings.
But what can the real readings be? The purely formal constraints placed on the semantic theory are not much help in telling us what the readings are. They tell us only that a sentence that is ambiguous in three ways must have three readings, a nonsense sentence no readings, two synonymous sentences must have the same readings, and so on. But so far as these requirements go, the readings need not be composed of words but could be composed of any formally specifiable set of objects. They could be numerals, piles of stones, old cars, strings of symbols, anything whatever. Suppose we decide to interpret the readings as piles of stones. Then for a three-ways ambiguous sentence the theory will give us three piles of stones, for a nonsense sentence, no piles of stones, for an analytic sentence the arrangement of stones in the predicate pile will be duplicated in the subject pile, and so on. There is nothing in the formal properties of the semantic component to prevent us from interpreting it in this way. But clearly this will not do because now instead of explaining the relationships between sound and meaning the theory has produced an unexplained relationship between sounds and stones.
When confronted with this objection the semantic theorists always make the same reply. Though we cannot produce adequate readings at present, ultimately the readings will be expressed in a yet to be discovered universal semantic alphabet. The elements in the alphabet will stand for the meaning units in all languages in much the way that the universal phonetic alphabet now represents the sound units in all languages. But would a universal semantic alphabet escape the dilemma? I think not.
Either the alphabet is a kind of a new artificial language, a new Esperanto, and the readings are once again paraphrases, only this time in the Esperanto and not in the original language; or we have the second horn of the dilemma and the readings in the semantic alphabet are just a list of features of language, and the analysis is inadequate because it substitutes a list of elements for a speech act.
The semantic theory of Chomsky’s grammar does indeed give us a useful and interesting adjunct to the theory of semantic competence, since it gives us a model that duplicates the speaker’s competence in recognizing ambiguity, synonymy, nonsense, etc. But as soon as we ask what exactly the speaker is recognizing when he recognizes one of these semantic properties, or as soon as we try to take the semantic theory as a general account of semantic competence, it cannot cope with the dilemma. Either it gives us a sterile formalism, an uninterpreted list of elements, or it gives us paraphrases, which explain nothing.
Various philosophers working on an account of meaning in the past generation have provided us with a way out of this dilemma. But to accept the solution would involve enriching the semantic theory in ways not so far contemplated by Chomsky or the other Cambridge grammarians. Chomsky characterizes the speaker’s linguistic competence as his ability to “produce and understand” sentences. But this is at best very misleading: a person’s knowledge of the meaning of sentences consists in large part in his knowledge of how to use sentences to make statements, ask questions, give orders, make requests, make promises, warnings, etc., and to understand other people when they use sentences for such purposes. Semantic competence is in large part the ability to perform and understand what philosophers and linguists call speech acts.
Now if we approach the study of semantic competence from the point of view of the ability to use sentences to perform speech acts, we discover that speech acts have two properties, the combination of which will get us out of the dilemma: they are governed by rules and they are intentional. The speaker who utters a sentence and means it literally utters it in accordance with certain semantic rules and with the intention of invoking those rules to render his utterance the performance of a certain speech act.
This is not the place to recapitulate the whole theory of meaning and speech acts, but the basic idea is this. Saying something and meaning it is essentially a matter of saying it with the intention to produce certain effects on the hearer. And these effects are determined by the rules that attach to the sentence that is uttered. Thus, for example, the speaker who knows the meaning of the sentence “The flower is red” knows that its utterance constitutes the making of a statement. But making a statement to the effect that the flower is red consists in performing an action with the intention of producing in the hearer the belief that the speaker is committed to the existence of a certain state of affairs, as determined by the semantic rules attaching to the sentence.
Semantic competence is largely a matter of knowing the relationships between semantic intentions, rules, and conditions specified by the rules. Such an analysis of competence may in the end prove incorrect, but it is not open to the obvious dilemmas I have posed for classical empiricist and Chomskyan semantic theorists. It is not reduced to providing us with a paraphrase or a list of elements. The glue that holds the elements together into a speech act is the semantic intentions of the speaker.
The defect of the Chomskyan theory arises from the same weakness we noted earlier, the failure to see the essential connection between language and communication, between meaning and speech acts. The picture that underlies the semantic theory and indeed Chomsky’s whole theory of language is that sentences are abstract objects that are produced and understood independently of their role in communication. Indeed, Chomsky sometimes writes as if sentences were only incidentally used to talk with. I am claiming that any attempt to account for the meaning of sentences within such assumptions is either circular or inadequate.
The dilemma is not just an argumentative trick; it reveals a more profound inadequacy. Any attempt to account for the meaning of sentences must take into account their role in communication, in the performance of speech acts, because an essential part of the meaning of any sentence is its potential for being used to perform a speech act. There are two radically different conceptions of language in conflict here: one, Chomsky’s, sees language as a self-contained formal system used more or less incidentally for communication. The other sees language as essentially a system for communication.
The limitations of Chomsky’s assumptions become clear only when we attempt to account for the meaning of sentences within his system, because there is no way to account for the meaning of a sentence without considering its role in communication, since the two are essentially connected. So long as we confine our research to syntax, where in fact most of Chomsky’s work has been done, it is possible to conceal the limitations of the approach, because syntax can be studied as a formal system independently of its use, just as we could study the currency and credit system of an economy as an abstract formal system independently of the fact that people use money to buy things with or we could study the rules of baseball as a formal system independently of the fact that baseball is a game people play. But as soon as we attempt to account for meaning, for semantic competence, such a purely formalistic approach breaks down, because it cannot account for the fact that semantic competence is mostly a matter of knowing how to talk, i.e., how to perform speech acts.
The Chomsky revolution is largely a revolution in the study of syntax. The obvious next step in the development of the study of language is to graft the study of syntax onto the study of speech acts. And this is indeed happening, though Chomsky continues to fight a rearguard action against it, or at least against the version of it that the generative semanticists who are building on his own work now present.
There are, I believe, several reasons why Chomsky is reluctant to incorporate a theory of speech acts into his grammar: First, he has a mistaken conception of the distinction between performance and competence. He seems to think that a theory of speech acts must be a theory of performance rather than of competence, because he fails to see that competence is ultimately the competence to perform, and that for this reason a study of the linguistic aspects of the ability to perform speech acts is a study of linguistic competence. Secondly, Chomsky seems to have a residual suspicion that any theory that treats the speech act, a piece of speech behavior, as the basic unit of meaning must involve some kind of a retreat to behaviorism. Nothing could be further from the truth. It is one of the ironies of the history of behaviorism that behaviorists should have failed to see that the notion of a human action must be a “mentalistic” and “introspective” notion since it essentially involves the notion of human intentions.
The study of speech acts is indeed the study of a certain kind of human behavior, but for that reason it is in conflict with any form of behaviorism, which is conceptually incapable of studying human behavior. But the third, and most important reason, I believe, is Chomsky’s only partly articulated belief that language does not have any essential connection with communication, but is an abstract formal system produced by the innate properties of the human mind.
Chomsky’s work is one of the most remarkable intellectual achievements of the present era, comparable in scope and coherence to the work of Keynes or Freud. It has done more than simply produce a revolution in linguistics; it has created a new discipline of generative grammar and is having a revolutionary effect on two other subjects, philosophy and psychology. Not the least of its merits is that it provides an extremely powerful tool even for those who disagree with many features of Chomsky’s approach to language. In the long run, I believe his greatest contribution will be that he has taken a major step toward restoring the traditional conception of the dignity and uniqueness of man.
 Quoted in R. H. Robins, A Short History of Linguistics (Indiana University Press, 1967), p. 239.
 Howard Maclay, “Overview,” in D. Steinberg and L. Jacobovitz, eds., Semantics (Cambridge University Press, 1971), p. 163.
 Not all grammarians would agree that these are exactly the right phrase markers for these two meanings. My point here is only to illustrate how different phrase markers can represent different meanings.
 Noam Chomsky, Syntactic Structures (Mouton & Co., 1957), p. 100.
 As distinct from “John called Mary beautiful and then she INSULTED him.”
 Cf., e.g., Noam Chomsky, “Deep Structure, Surface Structure, and Semantic Interpretation,” in D. Steinberg and L. Jacobovitz, eds., Semantics (Cambridge University Press, 1971).
 Noam Chomsky, “Linguistics and Philosophy,” in S. Hook, ed., Language and Philosophy (NYU Press, 1969), p. 88.
 G. Leibniz, New Essays Concerning Human Understanding (Open Court, 1949), pp. 45-46.
 W. V. O. Quine, “Linguistics and Philosophy,” in S. Hook, ed., Language and Philosophy (NYU Press, 1969), pp. 95-96.
 I am a little reluctant to attribute the semantic component to Chomsky, since most of its features were worked out not by him but by his colleagues at MIT; nonetheless since he incorporates it entirely as part of his grammar I shall assess it as such.
For example, one of the readings given for the sentence “The man hits the colorful ball” contains the elements: [Some contextually definite] (Physical object) (Human) (Adult) (Male) (Action) (Instancy) (Intensity) [Collides with an impact] [Some contextually definite] (Physical object) (Color) [[Abounding in contrast or variety of bright colors] [Having a globular shape]]. J. Katz and J. Fodor, “The Structure of a Semantic Theory,” in The Structure of Language, J. Katz and J. Fodor, eds. (Prentice-Hall, 1964), p. 513.
 In, e.g., L. Wittgenstein, Philosophical Investigations (Macmillan, 1953); J. L. Austin, How to Do Things with Words (Harvard, 1962); P. Grice, “Meaning,” in Philosophical Review, 1957; J. R. Searle, Speech Acts, An Essay in the Philosophy of Language (Cambridge University Press, 1969); and P. F. Strawson, Logico-Linguistic Papers (Methuen, 1971).
 For an attempt to work out some of the details, see J. R. Searle, Speech Acts, An Essay in the Philosophy of Language (Cambridge University Press, 1969), Chapters 1-3.
 E.g., meaning, he writes, “need not involve communication or even the attempt to communicate,” Problems of Knowledge and Freedom (Pantheon Books, 1971), p. 19.
Discover Interview: The Radical Linguist Noam Chomsky
Over 50 years ago, he began a revolution that’s still playing out today.
For centuries experts held that every language is unique. Then one day in 1956, a young linguistics professor gave a legendary presentation at the Symposium on Information Theory at MIT. He argued that every intelligible sentence conforms not only to the rules of its particular language but to a universal grammar that encompasses all languages. And rather than absorbing language from the environment and learning to communicate by imitation, children are born with the innate capacity to master language, a power imbued in our species by evolution itself. Almost overnight, linguists’ thinking began to shift.
Avram Noam Chomsky was born in Philadelphia on December 7, 1928, to William Chomsky, a Hebrew scholar, and Elsie Simonofsky Chomsky, also a scholar and an author of children’s books. While still a youngster, Noam read his father’s manuscript on medieval Hebrew grammar, setting the stage for his work to come. By 1955 he was teaching linguistics at MIT, where he formulated his groundbreaking theories. Today Chomsky continues to challenge the way we perceive ourselves. Language is “the core of our being,” he says. “We are always immersed in it. It takes a strong act of will to try not to talk to yourself when you’re walking down the street, because it’s just always going on.”
Chomsky also bucked against scientific tradition by becoming active in politics. He was an outspoken critic of American involvement in Vietnam and helped organize the famous 1967 protest march on the Pentagon. When the leaders of the march were arrested, he found himself sharing a cell with Norman Mailer, who described him in his book Armies of the Night as “a slim, sharp-featured man with an ascetic expression, and an air of gentle but absolute moral integrity.”
Chomsky discussed his ideas with Connecticut journalist Marion Long after numerous canceled interviews. “It was a very difficult situation,” Long says. “Chomsky’s wife was gravely ill, and he was her caretaker. She died about 10 days before I spoke with him. It was Chomsky’s first day back doing interviews, but he wanted to go through with it.” Later, he gave even more time to DISCOVER reporter Valerie Ross, answering her questions from his storied MIT office right up to the moment he dashed off to catch a plane.
You describe human language as a unique trait. What sets us apart?
Humans are different from other creatures, and every human is basically identical in this respect. If a child from an Amazonian hunter-gatherer tribe comes to Boston, is raised in Boston, that child will be indistinguishable in language capacities from my children growing up here, and vice versa. This unique human possession, which we hold in common, is at the core of a large part of our culture and our imaginative intellectual life. That’s how we form plans, do creative art, and develop complex societies.
When and how did the power of language arise?
If you look at the archaeological record, a creative explosion shows up in a narrow window, somewhere between 150,000 and roughly 75,000 years ago. All of a sudden, there’s an explosion of complex artifacts, symbolic representation, measurement of celestial events, complex social structures–a burst of creative activity that almost every expert on prehistory assumes must have been connected with the sudden emergence of language. And it doesn’t seem to be connected with physical changes; the articulatory and acoustic [speech and hearing] systems of contemporary humans are not very different from those of 600,000 years ago. There was a rapid cognitive change. Nobody knows why.
What first sparked your interest in human language?
I read modern Hebrew literature and other texts with my father from a very young age. It must have been around 1940 when he got his Ph.D. from Dropsie College, a Hebrew college in Philadelphia. He was a Semitist, working on medieval Hebrew grammar. I don’t know if I officially proofread my father’s book, but I read it. I did get some conception of grammar in general from that. But back then, studying grammar meant organizing the sounds, looking at the tense, making a catalog of those things, and seeing how they fit together.
Linguists have distinguished between historical grammars and descriptive grammars. What is the difference between the two?
Historical grammar is a study of how, say, modern English developed from Middle English, and how that developed from Early and Old English, and how that developed from Germanic, and that developed from what’s called Proto-Indo-European, a source system that nobody speaks so you have to try to reconstruct it. It is an effort to reconstruct how languages developed through time, analogous to the study of evolution. Descriptive grammar is an attempt to give an account of what the current system is for either a society or an individual, whatever you happen to be studying. It is kind of like the difference between evolution and psychology.
And linguists of your father’s era, what did they do?
They were taught field methods. So, suppose you wanted to write a grammar of Cherokee. You would go into the field, and you would elicit information from native speakers, called informants.
What sort of questions would the linguists ask?
Suppose you’re an anthropological linguist from China and you want to study my language. The first thing you would try to do is see what kind of sounds I use, and then you’d ask how those sounds go together. So why can I say “blick” but not “bnick,” for example, and what’s the organization of the sounds? How can they be combined? If you look at the way word structure is organized, is there a past tense on a verb? If there is, does it follow the verb or does it precede the verb, or is it some other kind of thing? And you’d go on asking more and more questions like that.
But you weren’t content with that approach. Why not?
I was at Penn, and my undergraduate thesis topic was the modern grammar of spoken Hebrew, which I knew fairly well. I started doing it the way we were taught. I got a Hebrew-speaking informant, started asking questions and getting the data. At some point, though, it just occurred to me: This is ridiculous! I’m asking these questions, but I already know the answers.
Soon you started developing a different approach to linguistics. How did those ideas emerge?
Back in the early 1950s, when I was a graduate student at Harvard, the general assumption was that language, like all other human activities, is just a collection of learned behaviors developed through the same methods used to train animals—by reinforcement. That was virtually dogma at the time. But there were two or three of us who didn’t believe it, and we started to think about other ways of looking at things.
In particular, we looked at a very elementary fact: Each language provides a means to construct and interpret infinitely many structured expressions, each of which has a semantic interpretation and an expression in sound. So there’s got to be what’s called a generative procedure, an ability to generate infinite sentences or expressions and then to connect them to thought systems and to sensory motor systems. One has to begin by focusing on this central property, the unbounded generation of structured expressions and their interpretations. Those ideas crystallized and became part of the so-called biolinguistic framework, which looks at language as an element of human biology, rather like, say, the visual system.
You theorized that all humans have “universal grammar.” What is that?
It refers to the genetic component of the human language faculty. Take your last sentence, for example. It’s not a random sequence of noises. It has a very definite structure, and it has a very specific semantic interpretation; it means something, not something else, and it sounds a particular way, not some other way. Well, how do you do that? There are two possibilities. One, it’s a miracle. Or two, you have some internal system of rules that determines the structures and the interpretations. I don’t think it’s a miracle.
What were the early reactions to your linguistic ideas?
At first, people mostly dismissed or ignored them. It was the period of behavioral science, the study of action and behavior, including behavior control and modification. Behaviorism held that you could basically turn a person into anything, depending on how you organized the environment and the training procedures. The idea that a genetic component entered crucially into this was considered exotic, to put it mildly.
Later, my heretical idea was given the name “the innateness hypothesis,” and there was a great deal of literature condemning it. You can still read right now, in major journals, that language is just the result of culture and environment and training. It’s a commonsense notion, in a way. We all learn language, so how hard could it be? We see that environmental effects do exist. People growing up in England speak English, not Swahili. And the actual principles—they’re not accessible to consciousness. We can’t look inside ourselves and see the hidden principles that organize our language behavior any more than we can see the principles that allow us to move our bodies. It happens internally.
How do linguists go about searching for these hidden principles?
You can find information about a language by collecting a corpus of data—for instance, the Chinese linguist studying my language could ask me various questions about it and collect the answers. That would be one corpus. Another corpus would just be a tape recording of everything I say for three days. And you can investigate a language by studying what goes on in the brain as people learn or use language. Linguists today should concentrate on discovering the rules and principles that you, for example, are using right now when you interpret and comprehend the sentences I’m producing and when you produce your own.
Isn’t this just like the old system of grammar that you rejected?
No. In the traditional study of grammar, you’re concentrating on the organization of sounds and word formation and maybe a few observations about syntax. In the generative linguistics of the last 50 years, you’re asking, for each language, what is the system of rules and principles that determines an infinite array of structured expressions? Then you assign specific interpretations to them.
Has brain imaging changed the way we understand language?
There was an interesting study of brain activity in language recently conducted by a group in Milan. They gave subjects two types of written materials based on nonsense language. One was a symbolic language modeled on the rules of Italian, though the subjects didn’t know that. The other was devised to violate the rules of universal grammar. To take a particular case, say you wanted to negate a sentence: “John was here, John wasn’t here.” There are particular things that you are allowed to do in languages. You can put the word “not” in certain positions, but you can’t put it in other positions. So one invented language put the negation element in a permissible place, while the other put it in an impermissible place. The Milan group seems to have found that permissible nonsense sentences produced activity in the language areas of the brain, but the impermissible ones—the ones that violated principles of universal grammar—did not. That means the people were just treating the impermissible sentences as a puzzle, not as language. It’s a preliminary result, but it strongly suggests that the linguistic principles discovered by investigating languages have neural correlates, as one would expect and hope.
Recent genetic studies also offer some clues about language, right?
In recent years a gene has been discovered called FOXP2. This gene is particularly interesting because mutations on it correspond with some deficiencies in language use. It relates to what’s called orofacial activation, the way you control your mouth and your face and your tongue when you speak. So FOXP2 plausibly has something to do with the use of language. It’s found in many other organisms, not just humans, and functions in many different ways in different species; these genes don’t do one single thing. But that’s an interesting preliminary step toward finding a genetic basis for some aspects of language.
You say that innate language is uniquely human, yet FOXP2 shows a continuity among species. Is that a contradiction?
It’s almost meaningless that there’s a continuity. Nobody doubts that the human language faculty is based on genes, neurons, and so on. The mechanisms that are involved in the use, understanding, acquisition, and production of language at some level show up throughout the animal world, and in fact throughout the organic world; you find some of them in bacteria. But that tells you almost nothing about evolution or common origins. The species that are maybe most similar to humans with regard to anything remotely like language production are birds, but that’s not due to common origin. It’s what’s called convergence, a development of somewhat analogous systems independently. FOXP2 is quite interesting, but it’s dealing with fairly peripheral parts of language like [physical] language production. Whatever’s discovered about it is unlikely to have much of an effect on linguistic theory.
Over the past 20 years you’ve been working on a “minimalist” understanding of language. What does that entail?
Suppose language were like a snowflake: it takes the form it does because of natural law, subject to the condition that it satisfy certain external constraints. That approach to the investigation of language came to be called the minimalist program. It has achieved, I think, some fairly significant results in showing that language is indeed a perfect solution for semantic expression—the meaning—but badly designed for articulate expression, the particular sound you make when you say “baseball” and not “tree.”
What are the outstanding big questions in linguistics?
There are a great many blanks. Some are “what” questions, like: What is language? What are the rules and principles that enter into what you and I are now doing? Others are “how” questions: How did you and I acquire this capacity? What was it in our genetic endowment and experience and in the laws of nature? And then there are the “why” questions, which are much harder: Why are the principles of language this way and not some other way? To what extent is it true that the basic language design yields an optimal solution to the external conditions that language must satisfy? That’s a huge problem. To what extent can we relate what we understand about the nature of language to activity taking place in the brain? And can there be, ultimately, some serious inquiry into the genetic basis for language? In all of these areas there’s been quite a lot of progress, but huge gaps remain.
Every parent has marveled at the way children develop language. It seems incredible that we still know so little about the process.
We now know that an infant, at birth, has some information about its mother’s language; it can distinguish its mother’s language from some other language when both are spoken by a bilingual woman. There are all kinds of things going on in the environment, what William James called a “blooming, buzzing confusion.” Somehow the infant reflexively selects out of that complex environment the data that are language-related. No other organism can do that; a chimpanzee can’t do that. And then very quickly and reflexively the infant proceeds to gain an internal system, which ultimately yields the capacities that we are now using. What’s going on in the [infant’s] brain? What elements of the human genome are contributing to this process? How did these things evolve?
What about meaning at a higher level? The classic stories that people retell from generation to generation have a number of recurring themes. Could this repetition indicate something about innate human language?
In one of the standard fairy tales, the handsome prince is turned into a frog by the wicked witch, and finally the beautiful princess comes around and kisses the frog, and he’s the prince again. Well, every child knows that the frog is actually the prince, but how do they know it? He’s a frog by every physical characteristic. What makes him the prince? It turns out there is a principle: We identify persons and animals and other living creatures by a property that’s called psychic continuity. We interpret them as having some kind of a mind or a soul or something internal that persists independent of their physical properties. Scientists don’t believe that, but every child does, and every human knows how to interpret the world that way.
You make it sound like the science of linguistics is just getting started.
There are many simple descriptive facts about language that just aren’t understood: how sentences get their meaning, how they get their sound, how other people comprehend them. Why don’t languages use linear order in computation? For example, take a simple sentence like “Can eagles that fly swim?” You understand it; everyone understands it. A child understands that it’s asking whether eagles can swim. It’s not asking whether they can fly. You can say, “Are eagles that fly swimming?” You can’t say, “Are eagles that flying swim?” Meaning, is it the case that eagles that are flying swim? These are rules that everyone knows, knows reflexively. But why? It’s still quite a mystery, and the origins of those principles are basically unknown.
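Chomsky's "Can eagles that fly swim?" example can be made concrete in code. The toy sketch below is my own illustration, not from the interview: it contrasts a linear rule, which fronts the first auxiliary it encounters, with a structure-dependent rule, which ignores material inside the relative clause. The hand-built parse and the function names are hypothetical.

```python
# Declarative: "eagles that can fly can swim" (hand-parsed; no real parser here).
# Question formation: which "can" moves to the front?

def front_linear(words, auxiliaries):
    """Linear rule: front the FIRST auxiliary in the string (the wrong rule)."""
    for i, w in enumerate(words):
        if w in auxiliaries:
            return [w] + words[:i] + words[i + 1:]
    return words

def front_structural(words, relative_clause, auxiliaries):
    """Structure-dependent rule: skip anything inside the relative clause
    and front the auxiliary of the MAIN clause (the rule speakers use)."""
    for i, w in enumerate(words):
        if w in auxiliaries and i not in relative_clause:
            return [w] + words[:i] + words[i + 1:]
    return words

sentence = ["eagles", "that", "can", "fly", "can", "swim"]
relative = {1, 2, 3}   # indices of "that can fly"
aux = {"can"}

print(" ".join(front_linear(sentence, aux)))
# → "can eagles that fly can swim" (garbled: it moved the relative clause's auxiliary)
print(" ".join(front_structural(sentence, relative, aux)))
# → "can eagles that can fly swim" (the question a child actually forms)
```

Only the structure-dependent rule yields the question every speaker reflexively produces; the linear rule, though computationally simpler, generates exactly the error that children never make.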
Everything Was a Problem and We Did Not Understand a Thing
An interview with Noam Chomsky.
By Graham Lawton | Posted Sunday, March 25, 2012, at 7:00 AM ET
Noam Chomsky on human nature and climate change
Why can everyone learn Portuguese? Are some aspects of our nature unknowable? Can you imagine Richard Nixon as a radical? Is Twitter a trivializer? New Scientist takes a whistle-stop tour of our modern intellectual landscape in the company of Noam Chomsky.
Let’s start with the idea that everyone connects you with from the 1950s and ’60s—a “universal grammar” underlying all languages. How is that idea holding up in 2012?
It’s virtually a truism. There are people who misunderstand the term but I can’t deal with that. It’s perfectly obvious that there is some genetic factor that distinguishes humans from other animals and that it is language-specific. The theory of that genetic component, whatever it turns out to be, is what is called universal grammar.
But there are critics such as Daniel Everett, who says the language of the Amazonian people he worked with seems to challenge important aspects of universal grammar.
It can’t be true. These people are genetically identical to all other humans with regard to language. They can learn Portuguese perfectly easily, just as Portuguese children do. So they have the same universal grammar the rest of us have. What Everett claims is that the resources of the language do not permit the use of the principles of universal grammar.
That’s conceivable. You could imagine a language exactly like English except it doesn’t have connectives like “and” that allow you to make longer expressions. An infant learning truncated English would have no idea about this: They would just pick it up as they would standard English. At some point, the child would discover the resources are so limited you can’t say very much, but that doesn’t say anything about universal grammar, or about language acquisition. Actually, I doubt very much that a language like that could exist.
Ideas about human nature naturally crop up in your work. It’s a fuzzy term, so what do you mean by it?
To me it’s just like bee nature. Humans have certain properties and characteristics which are intrinsic to them, just as every other organism does. That’s human nature. We don’t know very much about it except in a few domains. We know a lot about how the digestive system develops, that’s part of human nature. We know some things about the visual system. With regard to cognitive systems, the systems are more complex and difficult to investigate, so less is known. But something is. Language is one component of the human cognitive capacity which happens to be fairly amenable to enquiry. So we know a good deal about that.
In your new book, you suggest that many components of human nature are just too complicated to be really researchable.
That’s a pretty normal phenomenon. Take, say, physics, which restricts itself to extremely simple questions. If a molecule becomes too complex, they hand it over to the chemists. If it becomes too complex for them, they hand it to biologists. And if the system is too complex for them, they hand it to psychologists … and so on until it ends up in the hands of historians or novelists. As you deal with more and more complex systems, it becomes harder and harder to find deep and interesting properties.
If human nature is relatively fixed, as you argue, how do we achieve social and political change?
Human nature is not totally fixed, but on any realistic scale evolutionary processes are much too slow to affect it. With language, for example, we have very good evidence that for the last 50,000 years there has been no evolution. That is a reflection of the fact that our basic capacities have not evolved.
So within a realistic time frame there is not going to be any change in human nature. But human nature allows many different options and the choice among those options can change, and it has. So there are striking changes, even in our own lifetime, of what we accept as tolerable. Take something like women’s rights: If you go back not so many years women were basically regarded as property. That’s a sign of the expansion of our moral spheres. So sure, human nature remains the same but a lot of things can change.
Sticking with social and political change, what is going on with climate-change denial in the United States?
The Republican party now has its catechism of things you have to repeat in lockstep, kind of like the old Communist party. One of them is denying climate change.
Why is it happening?
It happens that there’s a huge propaganda offensive carried out by the major business lobbies, the energy associations, and so on. It’s no secret, they’re trying to convince people that the science is unreliable, that it’s a liberal hoax. Those who want to be funded by business and energy associations and so on might be led into repeating this catechism. Or maybe they actually believe it.
The Republican-dominated House of Representatives is now dismantling measures of control over environmental destruction that were instituted by Richard Nixon. That shows you how far to the right they have gone. Today Nixon would be a flaming radical and Dwight D. Eisenhower would be off the spectrum. Even Ronald Reagan would be on the left somewhere. These are interesting, important things happening in the richest and most powerful country in the world that we should be very much concerned about.
The media has been one of your big interests over the years. Are new and social media really changing the way we do things?
I’m probably the wrong person to ask. I’m kind of out of the Stone Age, I don’t use any of these things and don’t know a lot about them, but they are doubtless effective. For example, Occupy Wall Street could not have developed like it did without social media.
Are they affecting other things very much?
I think that is open to question. For one thing, by their very nature they have to be fairly superficial, there isn’t a lot you can say in a tweet or even an internet post. Almost by necessity, I think it is going to lead, or has led, to some superficiality. So like most technology, there is an upside and a downside.
You argue that the United States is in political and economic decline. Is that also true of the intellectual and scientific worlds?
Well, there are some who do claim that, but I’m not convinced. For example, if you look at the journal Science, the editor-in-chief Bruce Alberts has a series of editorials in which he is deploring the way science is taught in the U.S. In the federally funded schools and the universities people are being taught factoids; they are taught the periodic table to memorize when they do not understand what it is about. Alberts says this totally misleads people about the nature of science and that it is driving kids away from science. If what he is describing does overwhelm the education system it will presumably lead to a decline in scientific competence and capacity as well.
Looking back on your long career, if you were to start all over again would you still choose to study language?
When I was a college student and I got interested in linguistics the concern among students was, this is a lot of fun, but after we have done a structural analysis of every language in the world what’s left? It was assumed there were basically no puzzles.
In the 1950s, there was a serious attempt to address the core problems of language and it was immediately discovered that everything was a problem and we did not understand a thing. Now a great deal has been learned and we understand a lot more about the nature of language. The contemporary field is still very exciting. It is a living field. If you’re teaching today what you were teaching five years ago, either the field is dead or you are.
This article originally appeared in New Scientist.
Where Artificial Intelligence Went Wrong
By Yarden Katz
Nov 1, 2012, 2:22 PM ET
An extended conversation with the legendary linguist
If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.
In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would show ultimate understanding, and have obvious practical applications in the creation of intelligent devices or even robots.
Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, where animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, in which Skinner attempted to explain linguistic ability using behaviorist principles.
Skinner’s approach stressed the historical associations between a stimulus and the animal’s response — an approach easily framed as a kind of empirical statistical analysis, predicting the future as a function of the past. Chomsky’s conception of language, on the other hand, stressed the complexity of internal representations, encoded in the genome, and their maturation in light of the right data into a sophisticated computational system, one that cannot be usefully broken down into a set of associations. Behaviorist principles of associations could not explain the richness of linguistic knowledge, our endlessly creative use of it, or how quickly children acquire it with only minimal and imperfect exposure to language presented by their environment. The “language faculty,” as Chomsky referred to it, was part of the organism’s genetic endowment, much like the visual system, the immune system and the circulatory system, and we ought to approach it just as we approach these other more down-to-earth biological systems.
David Marr, a neuroscientist colleague of Chomsky’s at MIT, defined a general framework for studying complex biological systems (like the brain) in his influential book Vision, one that Chomsky’s analysis of the language capacity more or less fits into. According to Marr, a complex biological system can be understood at three distinct levels. The first level (“computational level”) describes the input and output to the system, which define the task the system is performing. In the case of the visual system, the input might be the image projected on our retina and the output might be our brain’s identification of the objects present in the image we had observed. The second level (“algorithmic level”) describes the procedure by which an input is converted to an output, i.e. how the image on our retina can be processed to achieve the task described by the computational level. Finally, the third level (“implementation level”) describes how our own biological hardware of cells implements the procedure described by the algorithmic level.
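Marr's three levels can be illustrated with a deliberately humble analogy (sorting a list), which is my example rather than Marr's: the levels separate what the task is, which procedure achieves it, and what physical substrate runs that procedure.

```python
# Computational level: WHAT is the task? A specification relating input to output,
# silent about how the output is produced.
def is_sorted_permutation(inp, out):
    """True iff `out` is `inp` rearranged into nondecreasing order."""
    return sorted(inp) == sorted(out) and all(a <= b for a, b in zip(out, out[1:]))

# Algorithmic level: HOW is the input converted to output? One procedure among
# many that satisfies the spec (insertion sort; mergesort would do equally well).
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# Implementation level: the physical machinery executing the procedure,
# silicon here, neurons in the brain. (The Python interpreter stands in for it.)
data = [3, 1, 2]
result = insertion_sort(data)
assert is_sorted_permutation(data, result)
```

The point of the separation is that one can study the computational-level spec, or compare candidate algorithms, without knowing anything about the hardware underneath.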
The approach taken by Chomsky and Marr toward understanding how our minds achieve what they do is as different as can be from behaviorism. The emphasis here is on the internal structure of the system that enables it to perform a task, rather than on external association between past behavior of the system and the environment. The goal is to dig into the “black box” that drives the system and describe its inner workings, much like how a computer scientist would explain how a cleverly designed piece of software works and how it can be executed on a desktop computer.
As written today, the history of cognitive science is a story of the unequivocal triumph of an essentially Chomskyian approach over Skinner’s behaviorist paradigm — an achievement commonly referred to as the “cognitive revolution,” though Chomsky himself rejects this term. While this may be a relatively accurate depiction in cognitive science and psychology, behaviorist thinking is far from dead in related disciplines. Behaviorist experimental paradigms and associationist explanations for animal behavior are used routinely by neuroscientists who aim to study the neurobiology of behavior in laboratory animals such as rodents, where the systematic three-level framework advocated by Marr is not applied.
In May of last year, during the 150th anniversary of the Massachusetts Institute of Technology, a symposium on “Brains, Minds and Machines” took place, where leading computer scientists, psychologists and neuroscientists gathered to discuss the past and future of artificial intelligence and its connection to the neurosciences.
The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?
Noam Chomsky, speaking in the symposium, wasn’t so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.
This critique sparked an elaborate reply to Chomsky from Google’s director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI’s new methods and definition of progress are not far off from what happens in the other sciences.
Chomsky acknowledged that the statistical approach might have practical value, as in the example of a useful search engine, and that it is enabled by the advent of fast computers capable of processing massive data. But as far as science goes, Chomsky would argue, it is inadequate, or, more harshly, kind of shallow. We wouldn’t have taught the computer much about what the phrase “physicist Sir Isaac Newton” really means, even if we can build a search engine that returns sensible hits to users who type the phrase in.
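The kind of statistical modeling at issue can be suggested with a minimal sketch (mine, not from the article): a bigram model that predicts the next word purely from co-occurrence counts, with no grammar, structure, or meaning anywhere in it. The corpus and function names are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "eagles can fly",
    "eagles can swim",
    "eagles can fly high",
]
model = train_bigrams(corpus)
print(predict_next(model, "can"))   # → "fly", the commonest continuation
```

This is "predicting the future as a function of the past" in its purest form: the model can be useful (autocomplete, search) while telling us nothing about why the sentences are structured as they are, which is exactly the gap Chomsky points to.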
It turns out that related disagreements have been pressing biologists who try to understand more traditional biological systems of the sort Chomsky likened to the language faculty. Just as the computing revolution enabled the massive data analysis that fuels the “new AI”, so has the sequencing revolution in modern biology given rise to the blooming fields of genomics and systems biology. High-throughput sequencing, a technique by which millions of DNA molecules can be read quickly and cheaply, turned the sequencing of a genome from a decade-long expensive venture to an affordable, commonplace laboratory procedure. Rather than painstakingly studying genes in isolation, we can now observe the behavior of a system of genes acting in cells as a whole, in hundreds or thousands of different conditions.
The sequencing revolution has just begun and a staggering amount of data has already been obtained, bringing with it much promise and hype for new therapeutics and diagnoses for human disease. For example, when a conventional cancer drug fails to work for a group of patients, the answer might lie in the genome of the patients, which might have a special property that prevents the drug from acting. With enough data comparing the relevant features of genomes from these cancer patients and the right control groups, custom-made drugs might be discovered, leading to a kind of “personalized medicine.” Implicit in this endeavor is the assumption that with enough sophisticated statistical tools and a large enough collection of data, signals of interest can be weeded out from the noise in large and poorly understood biological systems.
The success of fields like personalized medicine and other offshoots of the sequencing revolution and the systems-biology approach hinges on our ability to deal with what Chomsky called “masses of unanalyzed data” — placing biology at the center of a debate similar to the one taking place in psychology and artificial intelligence since the 1960s.
Systems biology did not rise without skepticism. The great geneticist and Nobel-prize winning biologist Sydney Brenner once defined the field as “low input, high throughput, no output science.” Brenner, a contemporary of Chomsky who also participated in the same symposium on AI, was equally skeptical about new systems approaches to understanding the brain. When describing an up-and-coming systems approach to mapping brain circuits called Connectomics, which seeks to map the wiring of all neurons in the brain (i.e. diagramming which nerve cells are connected to others), Brenner called it a “form of insanity.”
Brenner’s catch-phrase bite at systems biology and related techniques in neuroscience is not far off from Chomsky’s criticism of AI. An unlikely pair, systems biology and artificial intelligence both face the same fundamental task of reverse-engineering a highly complex system whose inner workings are largely a mystery. Yet, ever-improving technologies yield massive data related to the system, only a fraction of which might be relevant. Do we rely on powerful computing and statistical approaches to tease apart signal from noise, or do we look for the more basic principles that underlie the system and explain its essence? The urge to gather more data is irresistible, though it’s not always clear what theoretical framework these data might fit into. These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?
I sat with Noam Chomsky on an April afternoon in a somewhat disheveled conference room, tucked in a hidden corner of Frank Gehry’s dazzling Stata Center at MIT. I wanted to better understand Chomsky’s critique of artificial intelligence and why it may be headed in the wrong direction. I also wanted to explore the implications of this critique for other branches of science, such as neuroscience and systems biology, which all face the challenge of reverse-engineering complex systems — and where researchers often find themselves in an ever-expanding sea of massive data.

The motivation for the interview was in part that Chomsky is rarely asked about scientific topics nowadays. Journalists are too occupied with getting his views on U.S. foreign policy, the Middle East, the Obama administration and other standard topics. Another reason was that Chomsky belongs to a rare and special breed of intellectuals, one that is quickly becoming extinct. Ever since Isaiah Berlin’s famous essay, it has become a favorite pastime of academics to place various thinkers and scientists on the “Hedgehog-Fox” continuum: the Hedgehog, a meticulous and specialized worker, driven by incremental progress in a clearly defined field, versus the Fox, a flashier, ideas-driven thinker who jumps from question to question, ignoring field boundaries and applying his or her skills where they seem applicable.

Chomsky is special because he makes this distinction seem like a tired old cliche. Chomsky’s depth doesn’t come at the expense of versatility or breadth, yet for the most part, he devoted his entire scientific career to the study of defined topics in linguistics and cognitive science. Chomsky’s work has had tremendous influence on a variety of fields outside his own, including computer science and philosophy, and he has not shied away from discussing and critiquing the influence of these ideas, making him a particularly interesting person to interview. Videos of the interview can be found here.
I want to start with a very basic question. At the beginning of AI, people were extremely optimistic about the field’s progress, but it hasn’t turned out that way. Why has it been so difficult? If you ask neuroscientists why understanding the brain is so difficult, they give you very intellectually unsatisfying answers, like that the brain has billions of cells, and we can’t record from all of them, and so on.
Chomsky: There’s something to that. If you take a look at the progress of science, the sciences are kind of a continuum, but they’re broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say, physics — greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of science has. If something gets too complicated, they hand it to someone else.
Like the chemists?
Chomsky: If a molecule is too big, you give it to the chemists. For the chemists, if the molecule or the system gets too big, they give it to the biologists. And if it gets too big for them, they give it to the psychologists, and finally it ends up in the hands of the literary critic, and so on. So what the neuroscientists are saying is not completely false.
However, it could be — and it has been argued in my view rather plausibly, though neuroscientists don’t like it — that neuroscience for the last couple hundred years has been on the wrong track. There’s a fairly recent book by a very good cognitive neuroscientist, Randy Gallistel, written with Adam King, arguing — in my view, plausibly — that neuroscience developed kind of enthralled to associationism and related views of the way humans and animals work. And as a result they’ve been looking for things that have the properties of associationist psychology.
Like Hebbian plasticity? [Editor’s note: A theory, attributed to Donald Hebb, that associations between an environmental stimulus and a response to the stimulus can be encoded by strengthening of synaptic connections between neurons.]
Chomsky: Well, like strengthening synaptic connections. Gallistel has been arguing for years that if you want to study the brain properly you should begin, kind of like Marr, by asking what tasks it is performing. So he’s mostly interested in insects. So if you want to study, say, the neurology of an ant, you ask what does the ant do? It turns out the ants do pretty complicated things, like path integration, for example. If you look at bees, bee navigation involves quite complicated computations, involving position of the sun, and so on and so forth. But in general what he argues is that if you take a look at animal cognition, human too, these are computational systems. Therefore, you want to look at the units of computation. Think about a Turing machine, say, which is the simplest form of computation: you have to find units that have properties like “read,” “write,” and “address.” That’s the minimal computational unit, so you’ve got to look in the brain for those. You’re never going to find them if you look for strengthening of synaptic connections or field properties, and so on. You’ve got to start by looking for what’s there and what’s working, and you see that from Marr’s highest level.
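Chomsky's remark about "read," "write," and "address" as minimal computational units can be illustrated with a toy Turing-machine simulator. This sketch is mine, not Gallistel's or Chomsky's; the rule table and helper names are hypothetical.

```python
def run_turing(tape, rules, state="start", pos=0, max_steps=100):
    """Run a tiny Turing machine. `rules` maps (state, symbol) to
    (new_state, write_symbol, move), with move in {"L", "R"}."""
    tape = dict(enumerate(tape))              # addressable cells; "_" = blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")           # READ the cell at this address
        state, write, move = rules[(state, symbol)]
        tape[pos] = write                     # WRITE to the cell
        pos += {"L": -1, "R": 1}[move]        # change the ADDRESS
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit, then halt on the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing("1011", rules))   # → "0100_"
```

The point of the analogy is that the entire machine is built from just those three primitives; Gallistel's argument, as Chomsky summarizes it, is that something with those functional properties, not merely graded synaptic strengths, is what one should look for in the brain.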
Noam Chomsky turns eighty-four today, more than a half century after he exploded onto the scene of linguistics, in the late nineteen-fifties, as a young professor at M.I.T.
What do you think the use of technocratic governments in Europe says about European democracy?
There are two problems with it. First of all it shouldn’t happen, at least if anybody believes in democracy. Secondly, the policies that they’re following are just driving Europe into deeper and deeper problems. The idea of imposing austerity during a recession makes no sense whatsoever. There are problems, especially in the southern European countries, but in Greece the problems are not alleviated by compelling the country to reduce its growth because the debt relative to GDP simply increases, and that’s what the policies have been doing. In the case of Spain, which is a different case, the country was actually doing quite well up until the crash: it had a budget surplus. There were problems, but they were problems caused by banks, not by the government, including German banks, who were lending in the style of their US counterparts (subprime mortgages). So the financial system crashed and then austerity was imposed on Spain, which is the worst policy. It increases unemployment, it reduces growth; it does bail out banks and investors, but that shouldn’t be the prime concern.
Europe needs stimulus – even the IMF is coming around to that position – and there’s plenty of capacity for stimulus. Europe’s a rich place, there are plenty of reserves available to the European Central Bank. The Bundesbank doesn’t like it, investors don’t like it, banks don’t like it, but those are the policies which should be pursued. Even writers in the US business press agree with that. If Europe doesn’t change policy, they’re just going to go into a deeper recession. The European Commission just released its report on expectations for next year, which are for very low growth and increasing unemployment, which is the main problem. It’s a very serious problem: unemployment is destroying a generation, which is not a trivial matter. It’s also economically outlandish. If people are forced into unemployment then that’s not only extremely harmful from a human point of view – to individuals – but even from an economic point of view. It means there are unused resources, which could be used to grow and develop.
Europe’s policies make sense only on one assumption: that the goal is to try and undermine and unravel the welfare state. And that’s almost been said. Mario Draghi, the President of the European Central Bank, had an interview with the Wall Street Journal where he said that the social contract in Europe is dead. He wasn’t advocating it, he was describing it, but that’s essentially what the policies lead to.
Happy Birthday to Noam Chomsky. He was born on December 7, 1928.
“The US healthcare system is an international scandal. It’s about twice the per capita cost of comparable countries, and there are relatively poor outcomes. And the reason traces mostly to the fact, not entirely, but mostly to the fact that it’s largely privatized and pretty much unregulated. And that leads to huge administrative expenses on other non-health expenses, like advertising, profits, and so on; it leads to cherry-picking, all kinds of things. So it’s a highly inefficient system, and it’s been [proved], with pretty good evidence, some good economists worked on this, especially Dean Baker, that if we move to just the kind of healthcare system that other industrial countries have, it would wipe out the deficit. Now, that’s not discussed, and instead what is being discussed are ways to make the system worse! So, for example, one of the proposals about cutting down healthcare costs is to raise the eligibility age for Medicare by a couple of years. Well, what does that do? That shifts people from a fairly efficient government-run program, Medicare, to a highly inefficient private program, much more expensive, namely, private insurance. So that’s a way of making the problem worse! Of course it makes it look a little better from the point of view of the power system, because it shifts the cost from government to individuals; individuals are still paying more, just themselves, but the total healthcare expenses go up.”
So what do you do? It’s going to be harder to run things as a private club. Therefore, obviously, you have to control what people think. There had been public relation specialists but there was never a public relations industry. There was a guy hired to make Rockefeller’s image look prettier and that sort of thing. But this huge public relations industry, which is a U.S. invention and a monstrous industry, came out of the first World War. The leading figures were people in the Creel Commission. In fact, the main one, Edward Bernays, comes right out of the Creel Commission. He has a book that came out right afterwards called Propaganda. The term “propaganda,” incidentally, did not have negative connotations in those days. It was during the second World War that the term became taboo because it was connected with Germany, and all those bad things. But in this period, the term propaganda just meant information or something like that. So he wrote a book called Propaganda around 1925, and it starts off by saying he is applying the lessons of the first World War. The propaganda system of the first World War and this commission that he was part of showed, he says, it is possible to “regiment the public mind every bit as much as an army regiments their bodies.” These new techniques of regimentation of minds, he said, had to be used by the intelligent minorities in order to make sure that the slobs stay on the right course. We can do it now because we have these new techniques.
This is the main manual of the public relations industry. Bernays is kind of the guru. He was an authentic Roosevelt/Kennedy liberal. He also engineered the public relations effort behind the U.S.-backed coup which overthrew the democratic government of Guatemala.
His major coup, the one that really propelled him into fame in the late 1920s, was getting women to smoke. Women didn’t smoke in those days and he ran huge campaigns for Chesterfield. You know all the techniques—models and movie stars with cigarettes coming out of their mouths and that kind of thing. He got enormous praise for that. So he became a leading figure of the industry, and his book was the real manual.
Who Owns the World? Noam Chomsky on U.S.-Fueled Dangers, from Climate Change to Nuclear Weapons
In the week when President Obama and Republican presidential hopeful Mitt Romney debated issues of foreign policy and the economy, we turn to world-renowned political dissident, linguist, author and MIT professor, Noam Chomsky. In a recent speech, Chomsky examined topics largely ignored or glossed over during the campaign: China, the Arab Spring, global warming, nuclear proliferation, and the military threat posed by Israel and the U.S. versus Iran. He reflects on the Cuban missile crisis, which took place 50 years ago this week and is still referred to as “the most dangerous moment in human history.” He delivered this talk last month at the University of Massachusetts in Amherst at an event sponsored by the Center for Popular Economics. Chomsky’s talk was entitled “Who Owns the World?” [includes rush transcript]
AMY GOODMAN: We’re on the road in Portland, Oregon. We are here as part of our 100-city Silenced Majority tour. On this week when President Obama and Republican presidential hopeful Mitt Romney debated issues of foreign policy and the economy, we turn to world-renowned political dissident, linguist, author, MIT Professor Noam Chomsky. In a recent speech, Professor Chomsky examined topics largely ignored or glossed over during the campaign, from China to the Arab Spring, to global warming and the nuclear threat posed by Israel versus Iran. He spoke last month at the University of Massachusetts in Amherst at an event sponsored by the Center for Popular Economics. His talk was entitled “Who Owns the World?”
NOAM CHOMSKY: When I was thinking about these remarks, I had two topics in mind, couldn’t decide between them—actually pretty obvious ones. One topic is, what are the most important issues that we face? The second topic is, what issues are not being treated seriously—or at all—in the quadrennial frenzy now underway called an election? But I realized that there’s no problem; it’s not a hard choice: they’re the same topic. And there are reasons for it, which are very significant in themselves. I’d like to return to that in a moment. But first a few words on the background, beginning with the announced title, “Who Owns the World?”
Actually, a good answer to this was given years ago by Adam Smith, someone we’re supposed to worship but not read. He was—a little subversive when you read him sometimes. He was referring to the most powerful country in the world in his day and, of course, the country that interested him, namely, England. And he pointed out that in England the principal architects of policy are those who own the country: the merchants and manufacturers in his day. And he said they make sure to design policy so that their own interests are most peculiarly attended to. Their interests are served by policy, however grievous the impact on others, including the people of England.
But he was an old-fashioned conservative with moral principles, so he added the victims of England, the victims of the—what he called the “savage injustice of the Europeans,” particularly in India. Well, he had no illusions about the owners, so, to quote him again, “All for ourselves and nothing for other people, seems, in every age of the world, to have been the vile maxim of the masters of mankind.” It was true then; it’s true now.
Britain kept its position as the dominant world power well into the 20th century despite steady decline. By the end of World War II, dominance had shifted decisively into the hands of the upstart across the sea, the United States, by far the most powerful and wealthy society in world history. Britain could only aspire to be its junior partner, as the British Foreign Office ruefully recognized. At that point, 1945, the United States had literally half the world’s wealth, incredible security, controlled the entire Western Hemisphere, both oceans, the opposite sides of both oceans. There’s nothing—there hasn’t ever been anything like that in history.
And planners understood it. Roosevelt’s planners were meeting right through the Second World War, designing the post-war world. They were quite sophisticated about it, and their plans were pretty much implemented. They wanted to make sure that the United States would control what they called a “grand area,” which would include, routinely, the entire Western Hemisphere, the entire Far East, the former British Empire, which the U.S. would be taking over, and as much of Eurasia as possible—crucially, its commercial and industrial centers in Western Europe. And within this region, they said, the United States should hold unquestioned power with military and economic supremacy, while ensuring the limitation of any exercise of sovereignty by states that might interfere with these global designs.
And those were pretty realistic plans at the time, given the enormous disparity of power. The U.S. had been by far the richest country in the world even before the Second World War, although it wasn’t—was not yet the major global actor. During the Second World War, the United States gained enormously. Industrial production almost quadrupled, got us out of depression. Meanwhile, industrial rivals were devastated or seriously weakened. So that was an unbelievable system of power.
Actually, the policies that were outlined then still hold. You can read them in government pronouncements. But the capacity to implement them has significantly declined. Actually there’s a major theme now in foreign policy discussion—you know, journals and so on. The theme is called “American decline.” So, for example, in the most prestigious establishment international relations journal, Foreign Affairs, a couple of months ago, there was an issue which had on the front cover in big bold letters, “Is America Over?” question mark. That’s announcing the theme of the issue. And there is a standard corollary to this: power is shifting to the east, to China and India, the rising world powers, which are going to be the hegemonic states of the future.
Actually, I think the decline—the decline is quite real, but some serious qualifications are in order. First of all, the corollary is highly unlikely, at least in the foreseeable future. China and India are very poor countries. Just take a look at, say, the human development index of the United Nations: they’re way down there. China is around 90th. I think India is around 120th or so, last time I looked. And they have tremendous internal problems—demographic problems, extreme poverty, hopeless inequality, ecological problems. China is a great manufacturing center, but it’s actually mostly an assembly plant. So it assembles parts and components, high technology that comes from the surrounding industrial—more advanced industrial centers—Japan, Taiwan, South Korea, Singapore, the United States, Europe—and it basically assembles them. So, if, say, you buy one of these i-things—you know, an iPad from China—that’s called an export from China, but the parts and components and technology come from outside. And the value added in China is minuscule. It’s been calculated. They’ll move up the technology ladder, but it’s a hard climb, India even harder. Well, so I think one should be skeptical about the corollary.
But there’s another qualification that’s more serious. The decline is real, but it’s not new. It’s been going on since 1945. In fact, it happened very quickly. In the late 1940s, there’s an event that’s known here as “the loss of China.” China became independent. That’s a loss of a huge piece of the grand area of Asia. And it became a major issue in American domestic policy. Who’s responsible for the loss of China? A lot of recriminations and so on. Actually, the phrase is kind of interesting. Like, I can’t lose your computer, right? Because I don’t own it. I can lose my computer. Well, the phrase “loss of China” kind of presupposes a deeply held principle of kind of American elite consciousness: we own the world, and if some piece of it becomes independent, we’ve lost it. And that’s a terrible loss; we’ve got to do something about it. It’s never questioned, which is interesting in itself.
Well, right about the same time, around 1950, concerns developed about the loss of Southeast Asia. That’s what led the United States into the Indochina wars, the worst atrocities of the post-war period—partly lost, partly not. A very significant event in modern history was in 1965, when in Indonesia, which was the main concern—that’s the country of Southeast Asia with most of the wealth and resources—there was a military coup in Indonesia, Suharto coup. It led to an extraordinary massacre, what the New York Times called a “staggering mass slaughter.” It killed hundreds of thousands of people, mostly landless peasants; destroyed the only mass political party; and opened the country up to Western exploitation. Euphoria in the West was so enormous that it couldn’t be contained. So, in the New York Times, describing the “staggering mass slaughter,” it called it a “gleam of light in Asia.” That was the column written by James Reston, the leading liberal thinker in the Times. And the same elsewhere—Europe, Australia. It was a fantastic event.
Years later, McGeorge Bundy, who was the national security adviser for Kennedy and Johnson, in retrospect, he pointed out that it probably would have been a good idea to end the Vietnam War at that point, to pull out. Contrary to a lot of illusions, the Vietnam War was fought primarily to ensure that an independent Vietnam would not develop successfully and become a model for other countries in the region. It would not—to borrow Henry Kissinger’s terminology speaking about Chile, we have to prevent what they called the—what he called the “virus” of independent development from spreading contagion elsewhere. That’s a critical part of American foreign policy since the Second World War—Britain, France, others to a lesser degree. And by 1965, that was over. Vietnam was—South Vietnam was virtually destroyed. Word spread to the rest of Indochina it wasn’t going to be a model for anyone, and the contagion was contained. There were—the Suharto regime made sure that Indonesia wouldn’t be infected. And pretty soon the U.S. had dictatorships in every country of the region—Marcos in the Philippines, a dictatorship in Thailand, Chun in South—Park in South Korea. It was no problem about the infection. So that would have been a good time to end the Vietnam War, he felt. Well, that’s Southeast Asia.
But the decline continues. In the last 10 years, there’s been a very important event: the loss of South America. For the first time in 500 years, the South—since the conquistadors, the South American countries have begun to move towards independence and a degree of integration. The typical structure of one of the South American countries was a tiny, very rich, Westernized elite, often white, or mostly white, and a huge mass of horrible poverty, countries separated from one another, oriented to—each oriented towards its—you know, either Europe or, more recently, the United States. Last 10 years, that’s been overcome, significantly—beginning to integrate, the prerequisite for independence, even beginning to face some of their horrendous internal problems. Now that’s the loss of South America. One sign is that the United States has been driven out of every single military base in South America. We’re trying to restore a few, but right now there are none.
AMY GOODMAN: MIT Professor Noam Chomsky. Coming up, he discusses global warming, nuclear war and the Arab Spring, in a minute.
AMY GOODMAN: We’re on the road in Portland, Oregon, part of our 100-city tour. Today, though, we’re spending the hour with world-renowned political dissident, linguist, author, MIT Professor Noam Chomsky. As Election Day comes closer, Chomsky examines topics largely ignored or glossed over during the presidential campaign, including the threat posed to U.S. power by the Arab Spring.
NOAM CHOMSKY: Well, moving on to just last year, the Arab Spring is another such threat. It threatens to take that big region out of the grand area. That’s a lot more significant than Southeast Asia or South America. You go back to the 1940s, the State Department recognized that the energy resources of the Middle East are what they called “one of the greatest material prizes in world history,” a spectacular source of strategic power; if we can control Middle East energy, we can control the world.
Take a look at the U.S.-British coup in Iran in 1953. Very important event. Its shadows are cast over the world until today. Now there was a pretense that it was part of the Cold War; it had nothing to do with the Cold War. What it had to do with was the usual fear: independent nationalism. And it wasn’t even concerned with access to oil or profits. It was concerned with control, control of the oil resources of Iran and, in fact, of the region. And that’s a theme that runs right through policy decisions. It’s not discussed much, but it’s very important to have control, exactly as State Department advisers pointed out in the ’40s. If you can control the oil, you can control most of the world. And that goes on.
So far, the threat of the Arab Spring has been pretty well contained. In the oil dictatorships, which are the most important ones for the West, every effort to join the Arab Spring has just been crushed by force. Saudi Arabia was so extreme that when there was an effort to go out into the streets, the security presence was so enormous that people were even afraid to go out. There’s a little discussion of what goes on in Bahrain, where it’s been crushed, but eastern Saudi Arabia was much worse. The emirates totally control. So that’s OK. We managed to ensure that the threat of democracy would be smashed in the most important places.
Egypt is an interesting case. It’s an important country, not a major oil producer, though it is a small one. But in Egypt, the United States followed a standard operating procedure. If any of you are going into the diplomatic service, you might as well learn it. There’s a standard procedure when one of your favorite dictators gets into trouble. First, you support him as long as possible. But if it becomes really impossible—say, the army turns against him—then you send him out to pasture and get the intellectual class to issue ringing declarations about your love of democracy, and then try to restore the old system as much as possible. There’s case after case of that—Somoza in Nicaragua, Duvalier in Haiti, Marcos in the Philippines, Chun in South Korea, Mobutu in the Congo, over and over. I mean, it takes genius not to see it. And it’s exactly what was done in Egypt and what France tried to do, not quite with as much success, in Tunisia.
Well, the future is uncertain, but the threat of democracy so far is contained. And it’s a real threat. I’ll return to that. It’s also to—important to recognize that the decline over the past 50 years is, to a significant extent, self-inflicted, particularly since the ’70s. I’ll go back to that, too. But first let me say a couple of things about the issues that are most important today and that are being ignored or not dealt seriously—dealt with seriously in the electoral campaigns, for good reasons. So let me start with the most important issues. Now there are two of these. They’re of overwhelming significance, because the fate of the species depends on them. One is environmental disaster, and the other is nuclear war.
I’m not going to take much time reviewing the threats of environmental disaster. Actually, they’re on the front pages almost daily. So, for example, last week the New York Times had a front-page story with the headline, “Ending Its Summer Melt, Arctic Sea Ice Sets a New Low That Leads to Warnings.” The melting this summer was far faster than was predicted by the sophisticated computer models and the most recent United Nations report. It’s now predicted that the summer ice might be gone by 2020. It was assumed before that it may be 2050. They quoted scientists who said this is “a prime example of the built-in conservatism of [our] climate forecasts. As dire as [the warnings are] about the long-term consequences of heat-trapping emissions … many of [us] fear [that] they may still be underestimating the speed and severity of the impending changes.” Actually, there’s a climate change study program at MIT, where I am. They’ve been warning about this for years, and repeatedly have been proven right.
The Times report discusses, briefly, the severe attack—the severe impact of all of this on the global climate, and it adds, “But governments have not responded to the change with any greater urgency about limiting greenhouse emissions. To the contrary, their main response has been to plan for exploitation of newly accessible minerals in the Arctic, including drilling for more oil.” That is, to accelerate the catastrophe. It’s quite interesting. It demonstrates an extraordinary willingness to sacrifice the lives of our children and grandchildren for short-term gain, or perhaps an equally remarkable willingness to shut our eyes so as not to see impending peril—these things you sometimes find with young infants: something looks dangerous, close my eyes and won’t look at it.
Well, there is another possibility. I mean, maybe humans are somehow trying to fulfill a prediction of the great American biologist Ernst Mayr, who died recently. He argued years ago that intelligence seems to be a lethal mutation. He—and he had some pretty good evidence. There’s a notion of biological success, which is how many of you are there around. You know, that’s biological success. And he pointed out that if you look at the tens of billions of species in human—in world history, the ones that are very successful are the ones that mutate very quickly, like bacteria, or the ones that have a fixed ecological niche, like beetles. They seem to make out fine. But as you move up the scale of what we call intelligence, success declines steadily. When you get up to mammals, it’s very low. There are very few of them around. I mean, there’s a lot of cows; it’s only because we domesticate them. When you get to humans, it’s the same. ‘Til very recently, much too recent a time to show up in any evolutionary accounting, humans were very scattered. There were plenty of other hominids, but they disappeared, probably because humans exterminated them, but nobody knows for sure. Anyhow, maybe we’re trying to show that humans just fit into the general pattern. We can exterminate ourselves, too, the rest of the world with us, and we’re hell bent on it right now.
Well, let’s turn to the elections. Both political parties demand that we make the problem worse. In 2008, both party platforms devoted some space to how the government should address climate change. Today, the—in the Republican platform, the issue has essentially disappeared. But the platform does demand that Congress take quick action to prevent the Environmental Protection Agency from regulating greenhouse gases. So let’s make sure to make it worse. And it also demands that we open Alaska’s Arctic Refuge to drilling—I’m quoting now—in order to take “advantage of all of our American God-given resources.” You can’t disobey God, after all. On environmental policy, the program says, “We must restore scientific integrity to our public research institutions and remove political incentives from publicly funded research.” All that’s a code word for climate science: stop funding climate science. Romney himself says there’s no scientific consensus, so we should support more debate and investigation within the scientific community, but no action, except to act to make the problems worse.
Well, what about the Democrats? They concede that there’s a problem and advocate that we should work toward an agreement to set emissions limits in unison with other emerging powers. But that’s it. No action. And, in fact, as Obama has emphasized, we have to work hard to gain what he calls a hundred years of energy independence by exploiting domestic or Canadian resources by fracking or other elaborate technologies. Doesn’t ask what the world would look like in a hundred years. So, there are differences. The differences are basically about how enthusiastically the lemmings should march towards the cliff.
Let’s turn to the second major issue: nuclear war. That’s also on the front pages daily, but in a way that would seem outlandish to some independent observer viewing what’s going on on earth, and in fact does seem outlandish to a considerable majority of the countries of the world. Now, the current threat, not for the first time, is in the Middle East, focusing on Iran. The general picture in the West is very clear: it’s far too dangerous to allow Iran to reach what’s called “nuclear capability.” That is, the capability enjoyed by many powers, dozens of them, to produce nuclear weapons if they decide to do so. As to whether they’ve decided, U.S. intelligence says it doesn’t know. The International Atomic Energy Agency just produced its most recent report a couple weeks ago, and it concludes—I’ll quote it: it cannot demonstrate “the absence of undeclared nuclear material and activities in Iran.” Now, that is, it can’t demonstrate something which cannot—a condition that can’t be satisfied. There’s no way to demonstrate the absence of the work—that’s convenient—therefore Iran must be denied the right to enrich uranium that is guaranteed to every power that signed the Non-Proliferation Treaty.
Well, that’s the picture in the West. That’s not the picture in the rest of the world. As you know, I’m sure, there was just a meeting of the Non-Aligned Movement—that’s the large majority of the countries in the world, representing most of the world’s population—a meeting in Tehran. And once again, not for the first time, they issued a ringing declaration of support for Iran’s right to enrich uranium, a right that every country that signed the Non-Proliferation Treaty has. Pretty much the same is true in the Arab world. It’s interesting. I’ll return to that in a moment.
There is a basic reason for the concern. It was expressed succinctly by General Lee Butler. He’s the former head of the U.S. Strategic Command, which controls nuclear weapons and nuclear strategy. He wrote that “It is dangerous in the extreme that in the cauldron of animosities that we call the Middle East,” one nation should arm itself with nuclear weapons, which may inspire other nations to do so. General Butler, however, was not referring to Iran; he was referring to Israel, the country that ranks highest in European polls as the most dangerous country in the world—right above Iran—and, not incidentally, in the Arab world, where the public regard the United States as the second most dangerous country, right after Israel. In the Arab world, Iran, though disliked, ranks far lower as a threat—among the populations, that is, not the dictatorships.
With regard to Iranian nuclear weapons, nobody wants them to have them, but in many polls, majorities, sometimes considerable majorities, have said that the region would be more secure if Iran had nuclear weapons, to balance those of their major threats. Now, there’s a lot of commentary in the Western media, in journals, about Arab attitudes towards Iran. And what you read, commonly, is that the Arabs want decisive action against Iran, which is true of the dictators. It’s not true of the populations. But who cares about the populations, what are called, disparagingly, the Arab street? We don’t care about them. Now that’s a reflection of the extremely deep contempt for democracy among Western elites—I mean, so deep that it can’t be perceived. You know, it’s just kind of like reflexive. The study of popular attitudes in the Arab world—and there is very extensive study by Western polling agencies—it reveals very quickly why the U.S. and its allies are so concerned about the threat of democracy and are doing what they can to prevent it. Just take—they certainly don’t want attitudes like those I just indicated to become policy, while of course issuing rousing statements about our passionate dedication to democracy. Those are relayed obediently by reporters and commentators.
Well, unlike Iran, Israel refuses to allow inspections at all, refuses to join the Non-Proliferation Treaty, has hundreds of nuclear weapons, has advanced delivery systems. Also, it has a long record of violence and repression. It has annexed and settled conquered territories illegally, in violation of Security Council orders, and many acts of aggression—five times against Lebanon alone, no credible pretext. In the New York Times yesterday, you can read that the Golan Heights are disputed territory, the Syrian Golan Heights. There is a U.N. Security Council resolution, 497, which is unanimous, declaring Israel’s annexation of the Golan Heights illegal and demanding that it be rescinded. And in fact, it’s disputed only in Israel and in the New York Times, which in fact is reflecting actual U.S. policy, not formal U.S. policy.
Iran has a record of aggression, too. In the last several hundred years, it has invaded and conquered a couple of Arab islands. Now that was under the Shah, the U.S.-imposed dictator, with U.S. support. That’s actually the only case in several hundred years.
Meanwhile, the severe threats of attack continue—you’ve just been hearing them at the U.N.—from the United States, but particularly Israel. Now there is a reaction to this at the highest level in the United States. Leon Panetta, secretary of defense, he said that we don’t want to attack Iran, we hope that Israel won’t attack Iran, but Israel is a sovereign country, and they have to make their own decisions about what they’ll do. You might ask what the reaction would be if you reverse the cast of characters. And those of you who have antiquarian interests might remember that there’s a document called the United Nations Charter, the foundation of modern international law, which bars the threat or use of force in international affairs. Now, there are two rogue states—United States and Israel—for whom—which regard the Charter and international law as just a boring irrelevance, so, do what they like. And that’s accepted.
Well, these are not just words; there is an ongoing war, includes terrorism, assassination of nuclear scientists, includes economic war. U.S. threats—not international ones—U.S. threats have cut Iran out of the international financial system. Western military analysts identify what they call “weapons of finance” as acts of war that justify violent response—when they’re directed against us, that is. Cutting Iran out of global financial markets is different.
The United States is openly carrying out extensive cyber war against Iran. That’s praised. The Pentagon regards cyber war as an equivalent to an armed attack, which justifies military response, but that’s of course when it’s directed against us. The leading liberal figure in the State Department, Harold Koh—he’s the top State Department legal adviser—he says that cyber war is an act of war if it results in significant destruction—like the attacks against Iranian nuclear facilities. And such acts, he says, justify force in self-defense. But, of course, he means only attacks against the United States or its clients.
Well, Israel’s lethal armory, which is enormous, includes advanced submarines, recently provided by Germany. These are capable of carrying Israel’s nuclear-tipped missiles, and these are sure to be deployed in the Persian Gulf or nearby if Israel proceeds with its plans to bomb Iran or, more likely, I suspect, to try to set up conditions in which the United States will do so. And the United States, of course, has a vast array of nuclear weapons all over the world, but surrounding the region, from the Mediterranean to the Indian Ocean, including enough firepower in the Persian Gulf to destroy most of the world.
Another story that’s in the news right now is the Israeli bombing of the Iraqi reactor in Osirak, which is suggested as a model for Israeli bombing of Iran. It’s rarely mentioned, however, that the bombing of the Osirak reactor didn’t end Saddam Hussein’s nuclear weapons program. It initiated it. There was no program before it. And the Osirak reactor was not capable of producing uranium for nuclear weapons. But, of course, after the bombings, Saddam immediately turned to developing a nuclear weapons program. And if Iran is bombed, it’s almost certain to proceed just as Saddam Hussein did after the Osirak bombing.
AMY GOODMAN: MIT professor and author, Noam Chomsky, continues in a moment. If you’d like a copy of today’s show, you can go to our website at democracynow.org. Professor Chomsky will next look at the nuclear weapons race, as this week marks the 50th anniversary of the Cuban missile crisis, often referred to as “the most dangerous moment in human history.” Back in a moment.
AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. We’re on a 100-city tour, today in Portland, Oregon. I’m Amy Goodman, as we continue our hour today with world-renowned political dissident, linguist, author, and professor emeritus at the Massachusetts Institute of Technology, Noam Chomsky. His recent talk entitled “Who Owns the World?”
NOAM CHOMSKY: In a few weeks, we’ll be commemorating the 50th anniversary of “the most dangerous moment in human history.” Now, those are the words of historian, Kennedy adviser, Arthur Schlesinger. He was referring, of course, to the October 1962 missile crisis, “the most dangerous moment in human history.” Others agree. Now, at that time, Kennedy raised the nuclear alert to the second-highest level, just short of launching weapons. He authorized NATO aircraft, with Turkish or other pilots, to take off, fly to Moscow and drop bombs, setting off a likely nuclear conflagration.
At the peak of the missile crisis, Kennedy estimated the probability of nuclear war at perhaps 50 percent. It’s a war that would destroy the Northern Hemisphere, President Eisenhower had warned. And facing that risk, Kennedy refused to agree publicly to an offer by Khrushchev to end the crisis by simultaneous withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. These were obsolete missiles. They were already being replaced by invulnerable Polaris submarines. But it was felt necessary to firmly establish the principle that Russia has no right to have any offensive weapons anywhere beyond the borders of the U.S.S.R., even to defend an ally against U.S. attack. That’s now recognized to be the prime reason for deploying missiles there, and actually a plausible one. Meanwhile, the United States must retain the right to have them all over the world, targeting Russia or China or any other enemy. In fact, in 1962, the United—we just recently learned, the United States had just secretly deployed nuclear missiles to Okinawa aimed at China. That was a moment of elevated regional tensions. All of that is very consistent with grand area conceptions, the ones I mentioned that were developed by Roosevelt’s planners.
Well, fortunately, in 1962, Khrushchev backed down. But the world can’t be assured of such sanity forever. And particularly threatening, in my view, is that intellectual opinion, and even scholarship, hail Kennedy’s behavior as his finest hour. My own view is it’s one of the worst moments in history. Inability to face the truth about ourselves, all too common a feature of the intellectual culture and of personal life, has ominous implications.
Well, 10 years later, in 1973, during the Israel-Arab War, Henry Kissinger called a high-level nuclear alert. The purpose was to warn the Russians to keep hands off while he was—so we’ve recently learned—secretly informing Israel that they were authorized to violate the ceasefire that had been imposed jointly by the U.S. and Russia. When Reagan came into office some years later, the United States launched operations probing Russian defenses, flying into Russia to probe defenses, and simulating air and naval attacks, meanwhile placing Pershing missiles in Germany that had a five-minute flight time to Russian targets. They were providing what the CIA called a “super-sudden first strike” capability. The Russians, not surprisingly, were deeply concerned. Actually, that led to a major war scare in 1983. There have been hundreds of cases when human intervention aborted a first strike just minutes before launch. Now, that’s after automated systems gave false alarms. We don’t have Russian records, but there’s no doubt that their systems are far more accident-prone. Actually, it’s a near miracle that nuclear war has been avoided so far.
Meanwhile, India and Pakistan have come close to nuclear war several times, and the crises that led to that, especially Kashmir, remain. Both India and Pakistan have refused to sign the Non-Proliferation Treaty, along with Israel, and both of them have received U.S. support for development of their nuclear weapons programs, actually, until today, in the case of India, which is now a U.S. ally.
War threats in the Middle East, which could become reality very soon, once again escalate the dangers. Well, fortunately, there’s a way out of this, a simple way. There’s a way to mitigate, maybe end, whatever threat Iran is alleged to pose. Very simple: move towards establishing a nuclear-weapons-free zone in the Middle East. Now, the opportunity is coming again this December. There’s an international conference scheduled to deal with this proposal. It has overwhelming international support, including, incidentally, a majority of the population in Israel. That’s the fortunate part. Unfortunately, it’s blocked by the United States and Israel. A couple of days ago, Israel announced that it’s not going to participate, and it won’t consider the matter until there’s a general regional peace. Obama takes the same stand. He also insists that any agreement must exclude Israel and even must exclude calls for other nations—meaning the U.S.—to provide information about Israeli nuclear activities.
The United States and Israel can delay regional peace indefinitely. They’ve been doing that for 35 years on Israel-Palestine, in virtual international isolation. It’s a long, important story that I don’t have time to go into here. So, therefore, there’s no hope for an easy way to end what the West regards as the most severe current crisis—no way unless there’s large-scale public pressure. But there can’t be large-scale public pressure unless people at least know about it. And the media have done a stellar job in averting that danger: nothing reported about the conference or about any of the background, no discussion, apart from specialist arms control journals where you can read about it. So, that blocks the easy way to end the worst existing crisis, unless people somehow find a way to break through this.
AMY GOODMAN: MIT Professor Noam Chomsky spoke on September 27th of this year at the University of Massachusetts in Amherst. His talk was entitled “Who Owns the World?” If you’d like to get a copy of today’s broadcast, you can go to our website at democracynow.org. And I’ll be speaking along with Professor Chomsky and Juan Cole of the University of Michigan in Princeton, New Jersey, on November 11th at 1:30. You can go to our website at democracynow.org for details.
Democracy Now! has an immediate opening for a Linux systems administrator. You can see democracynow.org for more information.
We continue our 100-city Silenced Majority Election 2012 tour today in Washington state. At noon, I’ll be at Olympia at the Longhouse at Evergreen State College. Tonight at 7:30 p.m., we’ll be in Seattle at Town Hall. On Saturday, we’re in Everett at Everett Community College at 1:30 p.m., and then in Spokane, Washington, at Spokane Falls Community College at 7:00 p.m. On Sunday, we’re in Bend, Oregon, at the Greenwood Playhouse at noon, and then in Ashland, Oregon, at the Mountain Avenue Theatre at 7:00 p.m. On Monday, we move to Salt Lake City at the Rose Wagner Performing Arts Center, 138 West Broadway, 7:00 p.m.; Tuesday in Peoria, Illinois, at Bradley University at the Michel Student Center Ballroom, 915 North Elmwood Avenue. On Halloween, Wednesday, we’ll be in St. Louis, Missouri, at Left Bank Books Downtown, 321 North 10th Street, at 7:00 p.m. On Thursday, November 1st, in Kansas City, Missouri, at IBEW Local 124,
301 East 103rd Terrace, followed by Houston on Friday, November 2nd, at the Emerson Unitarian Universalist Church, 1900 Bering Drive, at 7:00 p.m. On Monday, November 5th, on the eve of the election, we’ll be back in New York City at Barnes & Noble Tribeca, 97 Warren Street, at 6:00 p.m. Then, post-election, on Thursday, November 8th, in Chicago; Saturday at Green Fest in San Francisco. And you can go to our website at democracynow.org for details.