Hitting the Books: Why a Dartmouth Professor Coined the Term “Artificial Intelligence” | Engadget

If the Wu-Tang Clan had produced it in ’23 instead of ’93, they would have called it DREAM, because data rules everything around me. Where our society once brokered power based on the strength of our arms and purse strings, the modern world is driven by data and the algorithms that wield it to select, isolate, and sell us out. These black-box oracles of imperious, inscrutable decision-making decide who gets home loans, who makes bail, who finds love, and who gets their kids taken away by the state.

In their new book, How Data Happened: A History from the Age of Reason to the Age of Algorithms, which builds on their existing curriculum, Columbia University professors Chris Wiggins and Matthew L. Jones examine how data is curated into actionable information and used to shape everything from our political views and social mores to our military responses and economic activities. In the excerpt below, Wiggins and Jones look at the work of mathematician John McCarthy, the young Dartmouth professor who single-handedly coined the term “artificial intelligence”… as part of his ploy to secure summer research funding.

[Book cover: multicolored blocks fall from above, Tetris-like, against a white background. Credit: W. W. Norton]

Adapted from How Data Happened: A History from the Age of Reason to the Age of Algorithms by Chris Wiggins and Matthew L. Jones. Published by W. W. Norton. Copyright 2023 by Chris Wiggins and Matthew L. Jones. All rights reserved.


Packaging Artificial Intelligence

An avid proponent of symbolic approaches, mathematician John McCarthy is often credited with coining the term artificial intelligence, a credit he claimed himself: “I invented the term artificial intelligence,” he explained, “when we were trying to get money for a summer study” aimed at “the long-term goal of achieving human-level intelligence.” The summer study in question was titled The Dartmouth Summer Research Project on Artificial Intelligence, and the requested funding came from the Rockefeller Foundation. A young mathematics professor at Dartmouth at the time, McCarthy was aided in his pitch to Rockefeller by his former mentor Claude Shannon. Recalling how the term was positioned, McCarthy noted that Shannon thought “artificial intelligence” was too flashy a term and might attract unfavorable attention. McCarthy, however, wanted to avoid overlap with the existing field of automata studies (which included nerve nets and Turing machines) and took a stand to declare a new field: “So I decided not to fly any more false flags.” The ambition was huge; the 1955 proposal stated that any aspect of learning, or any other characteristic of intelligence, could in principle be described so precisely that a machine could be built to simulate it. McCarthy ended up with more brain modelers than the axiomatic mathematicians he had wanted at the 1956 meeting, which became known as the Dartmouth Workshop. The event saw a variety of often contradictory efforts to get digital computers to perform tasks considered intelligent; however, as artificial intelligence historian Jonnie Penn argues, the absence of psychological expertise at the workshop meant that the account of intelligence on offer was informed primarily by a set of specialists working outside the humanities. Each participant saw the roots of the enterprise differently. McCarthy recalled that “whoever was there was quite stubborn in pursuing the ideas he had before coming, nor was there, as far as I could see, any real exchange of ideas.”

Like Turing’s 1950 paper, the 1955 proposal for a summer seminar on artificial intelligence seems incredibly prescient in retrospect. The seven problems that McCarthy, Shannon, and their collaborators set out to study have become major pillars of computer science and the field of artificial intelligence:

  1. Automatic computers (programming languages)

  2. How can a computer be programmed to use a language (natural language processing)

  3. Neuron Nets (neural networks and deep learning)

  4. Theory of the size of a computation (computational complexity)

  5. Self-improvement (machine learning)

  6. Abstractions (feature engineering)

  7. Randomness and creativity (Monte Carlo methods including stochastic learning).

The term artificial intelligence, in 1955, named an aspiration rather than a commitment to any one method. AI, in this broad sense, has involved both discovering what human intelligence comprises by attempting to create machine intelligence, and a less philosophically charged effort simply to get computers to perform difficult tasks that a human being might try.

Only a few of these aspirations fueled efforts that, in current usage, have become synonymous with artificial intelligence: the idea that machines can learn from data. Among computer scientists, learning from data would be overlooked for generations.

Most of the first half-century of artificial intelligence focused on combining logic with knowledge encoded in machines. The data gleaned from daily activities was hardly the focus; it paled in prestige next to logic. Over the past five years or so, artificial intelligence and machine learning have come to be used almost interchangeably; it’s a powerful thought exercise to remember that it didn’t have to be this way. For the first few decades in the life of artificial intelligence, learning from data seemed to be the wrong approach, an unscientific approach, used by those unwilling to simply program knowledge into the computer. Before data ruled, rules did.
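
The contrast here, between knowledge typed into the machine by an expert and knowledge estimated from examples, can be made concrete with a small sketch. The toy message filter below is purely illustrative and appears nowhere in the book; the task, function names, and training examples are all invented, and the Python is only a minimal stand-in for the two mindsets.

```python
# Toy illustration of "rules before data" versus "learning from data".
# Everything here (the task, names, and examples) is invented for this sketch.

def flag_by_rule(subject: str) -> bool:
    """Knowledge typed in by hand, in the spirit of early symbolic AI."""
    keywords = {"prize", "winner", "free"}  # an expert wrote these down
    return any(word in keywords for word in subject.lower().split())

def learn_keywords(examples: list[tuple[str, bool]], top_k: int = 3) -> set[str]:
    """Knowledge estimated from labeled examples, in the spirit of machine learning."""
    counts: dict[str, int] = {}
    for subject, flagged in examples:
        if flagged:
            for word in subject.lower().split():
                counts[word] = counts.get(word, 0) + 1
    # Keep the words that appeared most often in flagged examples.
    return {w for w, _ in sorted(counts.items(), key=lambda kv: -kv[1])[:top_k]}

def flag_by_data(subject: str, learned: set[str]) -> bool:
    """Apply whatever 'rule' the data happened to teach us."""
    return any(word in learned for word in subject.lower().split())

if __name__ == "__main__":
    training = [
        ("claim your free prize today", True),
        ("prize winner notification", True),
        ("minutes from monday's meeting", False),
    ]
    learned = learn_keywords(training)
    print(flag_by_rule("You are a winner"))               # rule written by a person
    print(flag_by_data("Claim your prize now", learned))  # rule inferred from data
```

The first function is brittle but transparent, which is roughly the trade early AI preferred; the second knows only what its examples happened to contain, which is part of why, for decades, it struck many researchers as unscientific.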

For all their enthusiasm, most of the Dartmouth seminar attendees brought few concrete results with them. One group was different. A team from the RAND Corporation, led by Herbert Simon and Allen Newell, had brought the goods, in the form of an automated theorem prover. This algorithm could produce proofs of basic arithmetic and logic theorems. But mathematics was just a testing ground for them. As historian Hunter Heyck has pointed out, that group started less with computer science or mathematics than with the study of how to understand large bureaucratic organizations and the psychology of the people who solve problems within them. For Simon and Newell, the human brain and computers were problem solvers of the same kind.

Our position is that the appropriate way to describe problem-solving behavior is in terms of a program: a specification of what the organism will do under varying environmental circumstances in terms of certain elementary information processes it is capable of executing… Digital computers come into the picture only because they can, by proper programming, be made to perform the same sequences of information processes that humans do when solving problems. Thus, as we will see, these programs describe the solution of both human and mechanical problems at the level of information processes.

While they provided many of the first great successes of early artificial intelligence, Simon and Newell stayed focused on a practical investigation of human organizations. They were interested in human problem solving, approached through what Jonnie Penn calls “a composite of early twentieth-century British symbolic logic and the American administrative logic of a hyper-rationalized organization.” Before adopting the moniker of artificial intelligence, they positioned their work as the study of information processing systems encompassing humans and machines alike, one that drew on the best understanding of human reasoning at the time.

Simon and his collaborators were deeply involved in debates about the nature of humans as rational animals. Simon later received the Nobel Prize in economics for his work on the limits of human rationality. He was concerned, along with a bevy of postwar intellectuals, with disproving the idea that human psychology should be understood as an animal reaction to positive and negative stimuli. Like others, he rejected the behaviorist view of the human as guided, almost automatically, by reflexes, and the idea that learning was primarily the accumulation of facts acquired through such experience. Great human abilities, like speaking a natural language or doing advanced mathematics, could never emerge from experience alone: they required so much more. To focus only on data was to misunderstand human spontaneity and intelligence. This generation of intellectuals, central to the development of cognitive science, emphasized abstraction and creativity over the analysis of data, sensory or otherwise. Historian Jamie Cohen-Cole explains: “Learning was not so much a process of acquiring facts about the world as developing a skill or acquiring proficiency with a conceptual tool that could then be used creatively.” This emphasis on the conceptual was central to Simon and Newell’s Logic Theorist program, which didn’t just grind through logical processes but used human-like heuristics to accelerate the search for means to ends. Scholars such as George Pólya, who investigated how mathematicians solved problems, had emphasized the creativity involved in using heuristics to work through math problems. Mathematics, in this view, wasn’t rote labor; it wasn’t like doing a lot of long division or reducing large amounts of data. It was a creative activity and, in the eyes of its creators, a bulwark against totalitarian views of the human being, both left and right. (And so even life in a bureaucratic organization needn’t be rote in this picture; it could be a place for creativity. Just don’t tell its employees.)
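
The point about Logic Theorist, that it did not grind through every logical possibility but used heuristics to steer its search, can be felt in miniature with a generic example. The sketch below is not Logic Theorist (which searched over logical proofs, not grids); it is an invented toy comparing a blind breadth-first search with a greedy best-first search guided by a crude distance heuristic, counting how many states each touches before reaching the goal. The grid, the Manhattan-distance heuristic, and the function names are all assumptions made for illustration.

```python
import heapq
from collections import deque
from itertools import count

# A deliberately simple stand-in for "search with and without heuristics":
# find a path from one corner of a grid to the other.
N = 30                      # grid is (N+1) x (N+1)
START, GOAL = (0, 0), (N, N)

def neighbors(cell):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx <= N and 0 <= ny <= N:
            yield nx, ny

def blind_search():
    """Breadth-first search: no guidance, expands cells in rippling waves."""
    queue, seen, expanded = deque([START]), {START}, 0
    while queue:
        cell = queue.popleft()
        expanded += 1
        if cell == GOAL:
            return expanded
        for nxt in neighbors(cell):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

def heuristic_search():
    """Greedy best-first search: always expand the cell that *looks* closest
    to the goal (Manhattan distance), so the frontier heads straight for it."""
    def h(cell):
        return abs(GOAL[0] - cell[0]) + abs(GOAL[1] - cell[1])
    tie = count()  # tie-breaker so the heap never compares cells directly
    heap, seen, expanded = [(h(START), next(tie), START)], {START}, 0
    while heap:
        _, _, cell = heapq.heappop(heap)
        expanded += 1
        if cell == GOAL:
            return expanded
        for nxt in neighbors(cell):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (h(nxt), next(tie), nxt))

if __name__ == "__main__":
    print("cells expanded without a heuristic:", blind_search())
    print("cells expanded with a heuristic:   ", heuristic_search())
```

On this 31-by-31 grid the blind search expands nearly every cell before it reaches the far corner, while the guided search expands roughly one path’s worth. Giving up exhaustiveness in exchange for speed, guided by rules of thumb, is the same bet Simon and Newell were making about how people actually solve problems.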

