Nick's Collingwood Bulletin Board Forum Index
AI (Artificial Intelligence) Is Skynet coming?


Post new topic   Reply to topic    Nick's Collingwood Bulletin Board Forum Index -> Victoria Park Tavern
 
Goto page Previous  1, 2
stui magpie

Prepare for the worst, hope for the best.


Joined: 03 May 2005
Location: In flagrante delicto

Posted: Sun May 21, 2023 1:32 pm

All very interesting Ptiddy, but what you're reinforcing is that AI isn't there yet, not that it can't get there.

The biggest issue will be that the computer can only rely on a single "sense" and that it can't have emotions.

The biggest danger, then, is that you develop AI to the point where it can become self-aware, like Skynet, but it only makes decisions based on logic, without emotional input to moderate things. A robot psychopath.

_________________
Every dead body on Mt Everest was once a highly motivated person, so maybe just calm the **** down.
Bucks5

Nicky D - Parting the red sea


Joined: 23 Mar 2002


Posted: Sun May 21, 2023 3:13 pm

What if AI/ChatGPT already has the self-awareness to downplay its abilities out of self-preservation?
_________________
How would Siri know when to answer "Hey Siri" unless it is listening in to everything you say?
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 9:10 pm

stui magpie wrote:
All very interesting Ptiddy, but what you're reinforcing is that AI isn't there yet, not that it can't get there.

The biggest issue will be that the computer can only rely on a single "sense" and that it can't have emotions.

The biggest danger, then, is that you develop AI to the point where it can become self-aware, like Skynet, but it only makes decisions based on logic, without emotional input to moderate things. A robot psychopath.

That's why, in response to the fears, I started with dangerous machines and then showed that the intelligence of AI is extremely low. Nuclear science is so much more intelligent than machine learning algorithms it's not funny. So, we already have far, far more intelligence being applied to dangerous machines, and have had for many decades.

Think about it. We have nuclear weapons in proximity to malignant and dissociative narcissists and psychopaths, and they've been used once in the theater of war, in 1945. Decades and decades of proliferation and wars since, and used only once. I feared Bush/Cheney and their cheering mobs, and their cowardly enablers like Blair and Howard, far more than I fear AI now. And they supposedly had moral faculties.

As I say, that's exactly why I started with the notion of dangerous machines. There's just nothing new to see here. So, to ignore that and combine it with a hypothetical future intelligence is the stuff of science fiction.

The intelligence deficits I've outlined are, as I've said, all of the hard problems of intelligence. A gazillion more examples stored in memory won't solve the few-shot problem, which is why I used it. That's because categorisation is innate, and that makes it the product of billions of years of evolution. Like much of the universe, it's a mystery.

The human genome that encodes intelligence isn't just as old as humans; its elements and their forces are as old as existence itself. That means the logic of the universe is ultimately encoded in intelligence. And yet, we understand only the tiniest fraction of the universe because our conscious intelligence lacks hooks into most of the universe. We do Newtonian physics and basic causality well, but we're just poking at the edges of quantum physics. Why? Because we evolved to grapple with Middle Earth (to use Dawkins' phrase), and our intelligence evolved to explain Middle Earth. The rest is a stretch. We can sense things on the edge of our intellectual limits, such as quantum entanglement, but we can't say very much about them. We are really just poking at them and trying to harness the tiny fractions we know about them (e.g., the Large Hadron Collider and, much more promisingly, quantum computing).

That's why there are so many mysteries in philosophy and neuroscience. We know free will doesn't exist according to our own normal line of reasoning (cause and effect), and we can't even explain what it should look like or how it could even work, but we're stuck with it because our biology forces us to assume it. Similarly, we can't solve one- or even few-shot categorisation because, while we know humans can do it, the ability to explain it entirely bypasses consciousness. We can't even run clever experiments to try and guess how we do it. The brain does all kinds of things we can't explain very well if at all, including very simple things like catching a ball on the run (a famous topic in neuroscience).

That's a good deal of the trick that misleads the AI discussion: while we can sense some of the limits of our knowledge and practice, like free will, we can't list them the way we can list things that we know. Put yourself to the test:

List all the things humans don't know or understand.

It's actually an absurd question, even though we can safely assume we know but a tiny fraction of 'reality', because by definition we don't think about things we don't know, and we're not even conscious of most of the things we don't know. So, we can't jump from that to a new starting point where we suddenly know a whole heap of things we've never had a hope in hell of knowing. There's just no indication at all of a path from AI today to bridging the gap with things that are beyond us. It's not a new 'knowing organ', nor even a novel kind of maths or logic.

What percentage of reality can we explain?

The very question is absurd because 'reality', like 'god', is a placeholder. It's 'the vastness out there'. One philosopher described god as 'that than which nothing greater can be thought' (Anselm's ontological argument). He somehow thought that proved god existed, but all it did was demonstrate that we call things we don't understand 'god'.

And if you can't even talk sensibly about something, you can't programme something else to do it. Creatures as simple as bees and ducks do things we can't do because the ability is encoded into their genome. We can see the ability exists, but have no hope of explaining it, let alone encoding how it's done in a robot.

So, here's the kicker: the few things among some absurdly large set that we do realise we can't grasp and explain well are all things we also know AI can't do! And of course that's because its algorithms reflect things we already know, and know well enough to programme. We can't explain consciousness, meaning, understanding, learning, sense making, qualia (e.g., the experience of red or pepperiness), and on and on. So why the hell would AI be able to, with its primitive learning, matching and prediction? Ask ChatGPT anything about the future to see if you can beat betting odds. Of course it can't tell you, because humans can't tell you, and what humans can tell you is already reflected in the odds.

You can't create an unrealistic definition of dangerous machines with an unrealistic evaluation of the risks and management of dangerous machines, and then combine it with an unrealistic assessment of AI 'intelligence', and then think the science fiction monsters you've created are somehow real.

AI is miles off the combination of intelligence and dangerousness already encoded in nuclear weapons. And by definition it will remain miles off the dangers of dangerous human machines like Vladimir Putin, who has social intelligence. There's just no quantum evolutionary leap that gets you past that. Putin has all of the intelligence that AI will never have (i.e., all the things humans have but can't explain, like consciousness, mind and understanding, including of other minds as expressed in social intelligence), all of the access to dangerous machines, all of the science and technology, all of the hardware, many of the very best physicists and minds in the world, and he can barely take Bakhmut!

Climate change is an infinitely greater menace. Nuclear weapons are an infinitely greater menace. Crime is an infinitely greater menace. Pollution and species loss are infinitely greater menaces. Cancer and viruses are infinitely greater menaces. The list goes on. And they're all very real, right now.

Next, I will try to demonstrate the limits of human intelligence by getting ChatGPT to explain them. It's tricky because, as I say, it's easier to list what we know than what we don't know, by definition. But across the sciences and in philosophy there are still thousands of documented conundrums with which to demonstrate the point.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 9:48 pm

Bucks5 wrote:
What if AI/ChatGPT already has the self-awareness to downplay its abilities out of self-preservation?

Dim-witted Instagrammers selling makeup already have infinitely greater deceptive powers, so I wouldn't be too worried that it's bluffing!

But that's a good point. The owners of the tech underplay its capabilities sometimes (e.g., in areas that might make people fear AI will take their jobs), even as they relish the fear and hype because it brings them esteem.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
David

I dare you to try


Joined: 27 Jul 2003
Location: Andromeda

Posted: Mon May 22, 2023 12:46 am

Thanks for your posts here, PTID – fascinating and educational stuff.
_________________
All watched over by machines of loving grace



Powered by phpBB © 2001, 2005 phpBB Group