As Geoffrey Hinton, the "Godfather of AI", retires from Google, the news headlines suggest he "QUIT" and regrets his work.
Kinda sounds like a whistleblower walkout the way it has been framed by UK news agencies.
Although he flags up some widely posed moral questions and future development concerns that have always been part of the AI discussion, this has been blatantly sensationalised.
They fail to balance this with the fact that the man is 75 years old and has RETIRED; he speaks positively about Google and its responsible attitude towards AI.
News agencies are in a bad habit of leading with scaremongering whenever possible… nothing new I suppose, but worth remembering.
BBC News - AI "godfather" Geoffrey Hinton warns of dangers as he quits Google
I still disagree; the article has some big flaws, but it did highlight age as a factor and isn't sensationalist.
I'd be much more concerned that it completely mis-characterizes the problems of inner misalignment and instrumental convergence:
"You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals."
The scientist warned that this eventually might "create sub-goals like 'I need to get more power'".
The problem with "sub-goals" is not that the operator is bad (the operator could have entirely good goals) but that all agents seek self-preservation and resource acquisition.
As someone in the creative industries (book publishing), AI is becoming a bigger threat to our livelihoods. AI-created artwork for commercial use is becoming more and more frequent in a bid to save money, but this is harming the human creators who miss out on opportunities.
There's someone on YouTube who used AI to "create" a song using two artists' voices (cannot remember who), which, in my view, is ridiculous.
There's also an issue with AI-generated content and copyright: at the moment, anything created by AI is not covered by copyright laws and is therefore basically public domain for anyone to recreate and use.
AI can be used for our benefit, but it appears that it will be used for completely the wrong reasons.
Imho self-preservation is a subgoal of any overarching goal - very much agree! Since you can't follow any goal if you're not functional. However, I don't think this applies to resource acquisition.
It might be if a goal is framed in the wrong/right way, but it's not a given. Happy to read up on discussions of why this would be the case though.
Having more resources strictly increases the solution space: I might be able to win a game of Go with one desktop's compute, but I'm going to have more options (including perhaps better options) available with, say, all of the Earth's compute.