How I Learned To Stop Worrying and Love the A.I.
We have been reminded recently that we often rely on scenes from movies to make the challenges of real life a bit more relatable. This is an accurate observation, and we plead guilty as charged. Great movies often capture life’s challenges in the most memorable ways, and there is something truly magical about experiencing them on a big screen with a cinematic soundtrack.
In 1964, director Stanley Kubrick released the satirical dark comedy “Dr. Strangelove,” a farce set at the height of the Cold War that served as a foil to the terrifying thriller “Fail Safe,” which came out the same year. “Fail Safe” was based on a bestselling book that began as a serial in the Saturday Evening Post, describing a terrifying scenario: what could happen if a mistake accidentally sent US nuclear bombers to attack targets in the Soviet Union.
In other words, what might happen if the “fail-safe” mechanisms, designed to prevent such an accident, actually failed?
The fact that the series in the Saturday Evening Post came out during 1962’s Cuban Missile Crisis made the fictional scenario all the more plausible. Of course, it also helped sales of the book quite a bit. The movie version, featuring a fantastic performance by Henry Fonda as the President of the United States, is a must-watch. (It is streaming on the Tubi service and available to rent on Amazon Prime Video.)
Then, "Dr. Strangelove" took the Cold War scenario and turned it on its head. Kubrick’s film is actually titled “Dr Strangelove—Or How I Learned to Stop Worrying and Love the Bomb.” It satirized the fear that had become a hallmark of the Cold War era in brilliant fashion.
It is our position that every generation has something it fears. For the children of “The Greatest Generation,” who lived through World War II, the Cold War era followed. The reality of a war that ended with the use of the atomic bomb was fresh in the minds of those who had witnessed what the United States did to bring about victory against Japan in the Pacific. When the USSR proved it had a nuclear capability in 1949, the standoff between the world’s superpowers came to be called the Cold War, and it would last until 1991.
Forgive the lengthy preamble, but we’d suggest that much of the same fear that once gripped the nation is with us again. This time, its name is Artificial Intelligence, better known by its ubiquitous abbreviation: AI.
Yesterday, at the NAB Show New York, in a session titled “The Future of News: AI, New Revenues and Risks, and the Policy Response” (a title almost as long as Dr. Strangelove’s), that fear was quantified in the way the media industry relates to best: an opinion poll.
The poll, conducted by OnMessage, Inc., was presented at the session by the firm’s vice president, Tommy Binton. Binton described how 1,000 likely voters were surveyed and explained the methodology behind the results, which carried a reported margin of error of ±3.1 percent.
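As a quick back-of-the-envelope check (ours, not the pollster’s published methodology), that ±3.1 percent figure is what the standard worst-case formula yields for a simple random sample of 1,000 respondents at 95 percent confidence:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a polled proportion.

    Assumes a simple random sample. p = 0.5 maximizes p * (1 - p),
    the convention when a poll reports a single overall MOE;
    z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # prints 3.1%
```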
Of the 1,000 people responding, 46% said that they use AI in their personal life or career. But 50% said they don’t, and 4% said they either don’t know or have no opinion about the question. (We’ll bet those people would have been building backyard bomb shelters during the Cold War.)
All in all, a pretty even split of the nation. Of course, you probably won’t be surprised to learn that Democrats were more likely to use AI than Republicans (52% to 38%), and people under 55 were far more likely to use it than those over that age (59% to 37%).
But the headline from the poll jumped out: by an overwhelming majority (82% to 16%), respondents said they were “Concerned” about the development of artificial intelligence rather than “Not Concerned.” To be precise, the 82% combined those saying they were “Very Concerned” (40%) or “Somewhat Concerned” (42%), as opposed to those saying they were “Not So Concerned” (12%) or “Not At All Concerned” (4%).
Another interesting question asked in the poll: “Thinking about the development of AI, which of the following statements comes closest to your opinion?” The choices were: “The Federal Government needs to step in and place guardrails on the development of AI to protect users from potential risks” or “The Federal Government needs to allow American businesses to experiment with AI with little regulation so that America can become the global leader in AI technology.”
On this question, 72% favored the Federal Government stepping in with guardrails, compared to only 14% who preferred letting American businesses experiment without government oversight; 15% either didn’t know or had no opinion. Large majorities across every demographic group and political affiliation supported the government stepping in.
A substantial majority of poll respondents also said they would support “Congress passing a law that made it illegal for AI to steal or reproduce journalism and local news stories that are published online without compensation” (to the news organization originating those stories). 77% would strongly or somewhat support such a law, while 11% would oppose it (12% didn’t know or had no opinion).
When asked, “How would you describe the level of trust you have that information provided by AI services and chatbots is accurate and unbiased?” only 26% chose “Trustworthy,” while 68% said such information was “Untrustworthy.”
And then there is this result. When asked, “How concerned are you that AI will eventually replace your job?” 15% of respondents said they were “very concerned” and 17% “somewhat concerned,” versus 24% who were “not so concerned” and 37% “not at all concerned.” Lumped together, that’s 32% saying they are concerned that AI is going to take their job, compared to 61% who aren’t (7% had no opinion or didn’t know).
No word on whether any of these poll respondents were working in newsrooms of any kind.
At this point, we turn to the wisdom of the fictional Dr. Strangelove from the movie when he delivers this explanation for why he considered and then rejected the notion of creating a “doomsday machine.”
"Based on the findings of the report, my conclusion was that this idea was not a practical deterrent for reasons which, at this moment, must be all too obvious."
We’ll have more to report from the NAB Show in New York in the days to come.
-30-