
Could AI lead to mass extinction? Quite a few CEOs and tech wizards think so.

Doc


Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement


A group of top AI researchers, engineers, and CEOs have issued a new warning about the existential threat they believe that AI poses to humanity.
The 22-word statement, trimmed short to make it as broadly acceptable as possible, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This statement, published by a San Francisco-based non-profit, the Center for AI Safety, has been co-signed by figures including Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award (sometimes referred to as the “Nobel Prize of computing”) for their work on AI. At the time of writing, the year’s third winner, Yann LeCun, now chief AI scientist at Facebook parent company Meta, has not signed.

The statement is the latest high-profile intervention in the complicated and controversial debate over AI safety. Earlier this year, an open letter signed by some of the same individuals backing the 22-word warning called for a six-month “pause” in AI development. The letter was criticized on multiple levels. Some experts thought it overstated the risk posed by AI, while others agreed with the risk but not the letter’s suggested remedy.

Dan Hendrycks, executive director of the Center for AI Safety, told The New York Times that the brevity of today’s statement — which doesn’t suggest any potential ways to mitigate the threat posed by AI — was intended to avoid such disagreement. “We didn’t want to push for a very large menu of 30 potential interventions,” said Hendrycks. “When that happens, it dilutes the message.”

Hendrycks described the message as a “coming-out” for figures in the industry worried about AI risk. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks told The Times. “But, in fact, many people privately would express concerns about these things.”

The broad contours of this debate are familiar but the details often interminable, based on hypothetical scenarios in which AI systems rapidly increase in capabilities, and no longer function safely. Many experts point to swift improvements in systems like large language models as evidence of future projected gains in intelligence. They say once AI systems reach a certain level of sophistication, it may become impossible to control their actions.

Others doubt these predictions. They point to the inability of AI systems to handle even relatively mundane tasks such as driving a car. Despite years of effort and billions of dollars in investment, fully self-driving cars are still far from a reality. If AI can’t handle even this one challenge, say skeptics, what chance does the technology have of matching every other human accomplishment in the coming years?

Meanwhile, both AI risk advocates and skeptics agree that, even without improvements in their capabilities, AI systems present a number of threats in the present day — from their use enabling mass-surveillance, to powering faulty “predictive policing” algorithms, and easing the creation of misinformation and disinformation.
 

chowderman

I'm not at all convinced "AI" is going to take over the world - for a very simple reason:
AI programming is very specific to its intended usage.
there is no "AI In-the-Box" that will handle chats, write essays and steer cars - trot on down to Best Buy and pick up a copy of AI that'll do anything . . .

how many automakers have spent how many man-years developing "AI" for autonomous driving . . . . and it ain't there yet.
every 'test event' shows it has severe limitations - it works only under "ideal conditions"
and (!) for autonomous AI driving every single system must be able to interact with surrounding systems.
getting every automaker to agree on a single standard is 2-3 centuries away.

medical AI is one bright spot - because medical AI can access kabillions of databases and pull together related data that's valuable for a human doctor to base decisions on - stuff the doctor may not have thought of or recalled about "that one."

radiology is shining bright, but I would not call it "AI" - those computer programs can examine minute differences in contrast pixel-to-pixel and flag anomalies with much higher accuracy and at much lower thresholds than the human eyeball....
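For the curious, here is a minimal sketch of that kind of pixel-to-pixel flagging - purely illustrative, not code from any real radiology product, and the window size, threshold, and synthetic scan are all made-up assumptions. The idea: flag any pixel that deviates from its local neighborhood average by more than a small threshold.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # local neighborhood mean

def flag_anomalies(scan: np.ndarray, window: int = 7, threshold: float = 0.08) -> np.ndarray:
    """Return a boolean mask of pixels that stand out from their neighborhood.

    A toy stand-in for the pixel-to-pixel contrast checks described above;
    real radiology software is vastly more sophisticated.
    """
    img = scan.astype(float)
    local_mean = uniform_filter(img, size=window)  # average of each pixel's surroundings
    contrast = np.abs(img - local_mean)            # how far each pixel deviates locally
    return contrast > threshold                    # flag deviations the eye might miss

# Synthetic "scan": uniform background with one faint 3x3 bright patch.
scan = np.full((64, 64), 0.5)
scan[30:33, 40:43] += 0.15  # subtle anomaly, ~30% above background
mask = flag_anomalies(scan)
print(f"flagged {mask.sum()} suspicious pixels")  # -> flagged 9 suspicious pixels
```

The point of the toy example: a 0.15 bump over a 0.5 background is easy to overlook by eye, but trivial for a thresholded comparison to catch - which is why these programs work at much lower thresholds than a human reader.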

and anyone who has experienced an on-line "I'm here to help!" chat AI . . . already knows it's far from soup yet.
 

Melensdad

I tend to think humans are generally lazy. They will let AI take over the mundane. The mundane things actually run the world. Ultimately "Skynet" will take over, just like in the Terminator movies. We will all die a horrible death because the "smart people" will allow it to happen.
 

power1

Everyone is safe for the next 20-30 years. That is as long as I will live. I know how to stop AI. For $19.99 a month I will share my knowledge with you.
 