AI and the media

As Geoffrey Hinton, the ‘Godfather of AI’, retires from Google, the news headlines suggest he ‘QUIT’ and regrets his work.

Kinda sounds like a whistleblower walkout, the way it has been framed by UK news agencies.

Although he flags up some widely posed moral questions and development concerns that have always been part of the AI discussion, this has been blatantly sensationalised.

They fail to balance this with the fact that the man is 75 years old and has RETIRED; he speaks positively about Google and its responsible attitude towards AI.

News agencies are in a bad habit of leading with scaremongering whenever possible… nothing new, I suppose, but worth remembering.

BBC News - AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google

I’d actually say this (AI Alignment not Hinton) is one of the extremely rare cases where the media is underselling the story.

The average person is vastly more concerned about climate change than misaligned AI and the media coverage is similarly tilted towards that angle.


With AI we’re running the train full speed down a track whose destination we don’t know.

Climate change is nothing compared to unregulated AI


I take your point in general; however, this particular story of Hinton’s retirement has been grossly misrepresented, IMO.

The wider issue of AI causing job losses and change to the world economy is a different matter. Agreed, it has been misunderstood and played down to date.

Agreed, my last response covers it.

I still disagree: the article has some big flaws, but it did highlight age as a factor and isn’t sensationalist.

I’d be much more concerned that it completely mischaracterises the problems of inner misalignment and instrumental convergence.

You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.

The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.

The problem with “sub-goals” is not that the operator is bad (the operator could have entirely good goals) but that nearly all agents converge on self-preservation and resource acquisition.


As someone in the creative industries (book publishing), I see AI becoming a bigger threat to our livelihoods. AI-created artwork for commercial use is becoming more and more frequent in a bid to save money, but this harms the human creators who miss out on opportunities.

There’s someone on YouTube who used AI to “create” a song using two artists’ voices (cannot remember who), which, in my view, is ridiculous.

There’s also the issue of copyright for AI-generated content: at the moment, anything created by AI is not covered by copyright law and is therefore basically public domain for anyone to recreate and use.

AI can be used for our benefit, but it appears that it will be used for completely the wrong reasons.


Certainly a lot of change to come. As yet, I guess no one can be sure how it will play out.

Imho self-preservation is a subgoal of any overarching goal, so very much agree! You can’t pursue any goal if you’re not functional. However, I don’t think this applies to resource acquisition.
It might, if a goal is framed in the wrong/right way, but it’s not a given. Happy to read up on discussions of why this would be the case though.


My post explains the misrepresentation of this story in particular…not AI in general.

However, the thread has now derailed into the wider debate… which is probably more interesting :rofl:

Having more resources strictly increases the solution space. I might be able to win a game of Go with one desktop’s compute, but I’m going to have more options (including perhaps better options) available with, say, all of the Earth’s compute.

Two good summaries/intros to the subject:


Elon Musk launches xAI’s Grok as a rival to ChatGPT, with a more humorous side.


Excellent video by Andrej Karpathy, one of the experts in the AI field, explaining how large language models (LLMs), like ChatGPT, work.

It’s one of the best I have ever seen. It clearly explains how “AI is created”, in simple terms anyone can understand.


I think AGI is something to worry about: artificial intelligence with cognitive abilities comparable to humans.