
“I don’t believe in divine intervention when it comes to artificial intelligence,” said Christopher Nolan, director of the thriller “Oppenheimer”, referring to the future of AI in an interview with IndieWire.

Nolan warned of the danger of creating a “false idol” from artificial intelligence.

“AI is like the term algorithm. We look at companies that use algorithms, and now AI, as a means of avoiding responsibility for their actions,” said Nolan.


“If we support the view that AI is omnipotent, we are supporting the view that it can relieve people of responsibility for their actions, militarily, socio-economically, whatever. The biggest danger of artificial intelligence is that we attribute these godlike characteristics and therefore get ourselves out of trouble,” he added.

“I don’t know what the mythological underpinnings of this are, but throughout history there is a tendency of human beings to create false idols, to mold something in our own image and then say we have the powers of a god,” Nolan said.


Nolan said the media coverage of ChatGPT and OpenAI had shown the public how dangerous the technology could be, while only fueling widespread fascination with it.

“So now everyone loves it. That’s not to say there isn’t a real danger here, I think there is. But, personally, I identify the risk as an abdication of responsibility,” said Nolan.


“I think AI can still be a very powerful tool for us. I’m optimistic about that. But we should see it as a tool. The person using it still has to bear responsibility for the use of that tool. If we give AI the status of a human being, as we did at one point legally with companies, then yes, we will have big problems,” emphasized the director of the film “Inception”.

Nolan is not the only Hollywood figure to express concern about artificial intelligence. “Black Mirror” creator Charlie Brooker has said he experimented with having ChatGPT write an episode of the sci-fi series.
