The Seven-Decade Boom/Bust Trend of the Term “Artificial Intelligence”: An Intimate Look at Its Very Human Origin



Given the rising interest in, and concern about, artificial intelligence, it is worth understanding where the term came from and why the buzz around it keeps recurring. "Artificial intelligence" has been with us for roughly 70 years, and it was coined for the most exclusively human of reasons: ego and rivalry.

It happened in New England in the summer of 1955. Computer scientist John McCarthy of Dartmouth was drafting a research proposal in collaboration with Marvin Minsky (Harvard), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). At the time, their research would ordinarily have been filed under "cybernetics," the study of automated control systems. That created a problem.

Norbert Wiener, a professor at nearby MIT, had founded the field of cybernetics and named it in his hugely influential book of the same title. His early work focused on the automation and control of systems; indeed, while we rightly worry about AI-guided weapons today, it was Wiener who helped develop one of the first automated weapon systems during World War II.

As the Cold War escalated, Wiener became a dominant presence in any discussion of the future of computing. Often quite literally: he had a reputation for showing up in person and delivering a lengthy lecture at any cybernetics conference planned near his home base in Cambridge.

How do you hold an academic computing conference without Wiener attending?

Simple: coin a new term and promote the conference under it.

McCarthy once put it plainly: "one reason for inventing the term [AI] was to escape association with cybernetics." By his own account, he did not want to have to debate Norbert Wiener or accept him as a guru, to put it mildly.

In this sense, AI began as a term of confusion rather than clarity. Because it was coined so broadly, precisely to distinguish itself from cybernetics, virtually any automated computer system can be called artificial intelligence.

That breadth explains why "AI" has endured as a potent marketing word. Given the meteoric rise of ChatGPT since 2022, it is easy to forget that another class of AI products rode a similar wave of hype less than a decade earlier.

But it's true: in 2014, voice-activated personal assistants dominated the AI hoopla, with Apple's Siri, Amazon's Alexa, and Microsoft's Cortana leading the way. Hollywood amplified the excitement, as it has with other "truthy" technologies, through the Scarlett Johansson-voiced AI in Her and Jarvis in the Iron Man movies. Nearly a decade before this op-ed, the big proclamation of the moment was "Personal assistant AI is going to change everything!"

Around that time, I started receiving pitches from big companies that wanted systems resembling magic, with a dash of "AI" on top. A little probing, however, revealed that the executives behind these pitches cared more about making their product offerings look innovative and competitive than about artificial intelligence itself. They had read about AI in a variety of popular tech publications but had done no deeper research, and they quickly lost patience when I tried to walk them through the details of actual implementations.

This was soon after my TED talk on cyborg anthropology, and I was aggressively urged to ride the era's AI hype wave.

It always follows the same pattern. New AI applications and playthings periodically emerge from innovations at research universities. These excite people on the periphery, especially marketers and well-paid evangelists, who inflate expectations around them. Company leaders then push engineers to deliver on those expectations, even though they far exceed what is technically feasible for these systems. And the AI programs go on performing exactly the geeky automation chores they excel at. Once more, the public feels let down.

If there is any hope of breaking this tiresome loop, it may lie in clearing up the confusion John McCarthy unintentionally created back in the 1950s. The very phrase "artificial intelligence" invites us to ponder mortality and the human soul, and it amplifies whatever anxieties about automation we already have. People panic before they investigate.

What word would be preferable to "AI"? Perhaps "alongside technologies." The phrase is less memorable, and maybe that is the point: this stuff shouldn't be so catchy. It also underlines that these systems need human guidance to function effectively.

The delegates at McCarthy's inaugural AI conference in 1956 attracted millions of dollars in grants toward a simple objective: build a computer as intelligent as a person. They expected to succeed within a generation.

Two generations and multiple hype cycles later, it should be obvious that the finest automations focus on enhancing the best aspects of both people and machines, and that attempts to make machines more "human" will always fall short.
