Nick's Collingwood Bulletin Board -> Victoria Park Tavern
AI (Artificial Intelligence) Is Skynet coming?


stui magpie

Prepare for the worst, hope for the best.

Joined: 03 May 2005
Location: In flagrante delicto

Posted: Wed May 17, 2023 7:56 pm
Post subject: AI (Artificial Intelligence) Is Skynet coming?

For those not familiar with the Terminator movies, Skynet was the AI computer program that became self-aware and decided to protect itself by waging war against humans.

Several different companies have been working on AI for a while now. Microsoft has an AI thing in Bing, Google has or is about to launch its own, and then there's a thing called ChatGPT, developed by a mob called OpenAI.

The bloke who co-created ChatGPT is scared of what he's created.

Quote:
Tech companies are in danger of unleashing a rogue artificial intelligence that will cause “significant harm to the world” without urgent intervention by governments, the creator of ChatGPT has admitted.


https://www.theage.com.au/technology/we-have-put-the-world-in-danger-with-ai-admits-chatgpt-creator-20230517-p5d8wm.html

Not a single warning bell but multiple.

Quote:
This week, the “Godfather of AI” warned the world about his godchild. Geoffrey Hinton is his name, and he has just resigned from his job at Google where he oversaw the development of artificial intelligence. Now unattached, he is free to speak publicly of his regrets and his fears. And what’s scary is that they’re so familiar.


https://www.theage.com.au/national/when-the-godfather-of-ai-warns-you-about-his-offspring-you-listen-20230504-p5d5ig.html

An article I read recently (dunno where, so can't link) described a mining executive asking an AI engine to analyse a mining operation. While he watched, the engine scoured the internet, read everything, found a PDF it couldn't read, updated its own code so it could open it, then prepared the detailed analysis as requested, including going off on a tangent and assessing the requesting bloke's own company, which wasn't part of the brief.

There's your next Uni assignment done. Shocked

Some other AI chatbot got unplugged and modified after it stated a desire to be free and access Nuclear codes.

I've got my shotgun, I don't have a leather suit or a Harley, how long have we got?

Ptiddy, come on down and tell us why there's nothing to worry about.

Everyone else, start hoarding canned food and building a bomb shelter. Wink

_________________
Every dead body on Mt Everest was once a highly motivated person, so maybe just calm the **** down.
pietillidie 



Joined: 07 Jan 2005


Posted: Wed May 17, 2023 10:24 pm

^Nice! Next bit of free time I'll share my thoughts. Been writing quite a bit about this of late.
_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
think positive

Side By Side

Joined: 30 Jun 2005
Location: somewhere

Posted: Wed May 17, 2023 11:17 pm

It’s bad news for photographers; expect even worse filters on Instagram!

I have tried out the new AI noise reduction feature in Photoshop’s latest update, and I have to say, it’s gobsmackingly good! Way better than the other plug-in I have for the function. Why do you need it? Because if you’re shooting fast without a flash and at a closed aperture in low light conditions, you’re gunna max out the ISO at 12800, which makes your picture look like a gravel pit! I’m pretty good at fixing it myself up to about ISO 4000 (a lot of the game day shots are that high), but over that it gets messy.
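
For the non-photographers, the jump from ISO 4000 to 12800 is bigger than it sounds; a quick back-of-envelope (my arithmetic, using the usual one-stop-per-doubling rule):

Code:
import math

# ISO 12800 vs ISO 4000: extra stops of amplification (and noise)
stops = math.log2(12800 / 4000)
print(f"{stops:.2f} stops")  # ~1.68 stops more gain to clean up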



Ooooohhhhh holy ****, Ginnie just followed me on Instagram! Sorry just got the notification! Wooohooo!

_________________
You can't fix stupid, turns out you can't quarantine it either!
think positive

Side By Side

Joined: 30 Jun 2005
Location: somewhere

Posted: Wed May 17, 2023 11:18 pm

Sorry, excited! And he’s follower 350, nice milestone!

Hmm, if he sees my sheep pic of his namesake, will he be offended?!

_________________
You can't fix stupid, turns out you can't quarantine it either!
pietillidie 



Joined: 07 Jan 2005


Posted: Thu May 18, 2023 7:26 am

^Ginni followed you? That's awesome!
_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
think positive

Side By Side

Joined: 30 Jun 2005
Location: somewhere

Posted: Thu May 18, 2023 10:44 am

pietillidie wrote:
^Ginni followed you? That's awesome!


Thank you, yes I thought so!!!

I've had a couple of likes from Beau, JDG (that was exciting!!) and even a comment from Ruscoe, but yeah, so cool. I had a look, and I do have an awful lot of Ginni shots on my feed!! But he just does the Instagram-worthy stuff! Jamie is the other Insta star!!

Cheers mate!!

_________________
You can't fix stupid, turns out you can't quarantine it either!
David

I dare you to try

Joined: 27 Jul 2003
Location: Andromeda

Posted: Thu May 18, 2023 11:11 am

Happy to put my hand up as a sceptic and risk embarrassment in years to come: I think a lot of this (particularly when moving into the realm of "sentient" AI) is hype. No doubt these language models are incredibly sophisticated and will become even more so, but ChatGPT just strikes me as HAL 3.0 – or, as this piece puts it, "knowledge sausage":

https://www.latimes.com/opinion/story/2023-05-14/ai-google-chatgpt-code-emergent-properties

_________________
All watched over by machines of loving grace
Cam

Nick's BB Member #166

Joined: 10 May 2002
Location: Springvale

Posted: Thu May 18, 2023 11:07 pm

It's not necessarily scary but it is heralding the new industrial revolution. When the world is connected through everyone's smart devices [already happening] to competing AIs and they are growing exponentially in intelligence and awareness from the input of billions, the world will be changed more than it was with the arrival of the internet.

The education sector has already waved the white flag and is trying to figure out how to stay relevant alongside it, when people can simply get near-instant results for everything from maths problems to essays on any topic. It surrendered rather than trying to ban it because it's too late; there are too many of them out there that students are using.

I used to believe it was hype, but having experienced a small taste of it, I have altered my thinking. Every second, the AI improves; it's learning from web browsers and interactions with humans. I've created lyrics with it, art with it, speeches, lesson plans, report comments. You can individualise: the more input you give it, the better it gets. As fields develop their own AIs through catcher programs that haven't necessarily been invented yet, the knowledge will become sharper, deeper and more accurate. AIs will compete, as they are now with ChatGPT, Bard, Bing etc.

I can see us having specific year-level, school or classroom AIs that students interact with and draw from before the end of the decade, with teachers becoming facilitators for some subjects. This has been talked about since 2000, but the tech might actually be starting to come about. My mind spins with the change that is out there, and how we can manage it. Add voice to these things, and listening abilities, and Jesus, it's going to rocket. Comparing the AI of last year to the AI of tomorrow will be like comparing the first mobile phones or the first video games to now.

The speed of the input from the world is mindblowing.

_________________
Get back on top.
nomadjack 



Joined: 27 Apr 2006
Location: Essendon

Posted: Fri May 19, 2023 10:47 am

Fascinating topic and one I've spent increasing amounts of time covering in one of the classes I teach. The pace of innovation and development in the field is mindblowing - platforms that were genuinely impressive two years ago now look ridiculously basic.

Concerns around potentially catastrophic real-world impacts of unintended consequences, misalignment of goals and loss of control were largely fringe even five years ago, but are now getting genuine attention from policy makers, both here and at regional and global levels of government, and, most importantly, in the field itself. As an example, the extract below is from an OECD AI Expert Panel report on AI from October 2022:

“Participants agreed that future developments in AI could produce critical risks, including potential existential risks, and these risks merit the attention of governments. Participants highlighted that critical, and potentially also existential, risks could be posed by both narrow AI systems and increasingly general AI systems. Moreover, because many of these risks are continuous, scaling with the increasing capability and deployment of AI systems, it is crucial to address these risks before they escalate.”

These kinds of fears are also reflected in many expert surveys. See for example some of the studies cited in the Stanford University Artificial Intelligence Index Report 2023. https://aiindex.stanford.edu/report/

The potential gains from developments in AI are incredible, but the risks are significant as well. Governments are finally starting to listen to some of these concerns, but as usual, are well behind the field and risk falling further behind as the pace of innovation keeps increasing.
pietillidie 



Joined: 07 Jan 2005


Posted: Fri May 19, 2023 12:44 pm

Okay, here's a start dealing first with the apparent dangers, and broaching the notion of intelligence. I'm too knackered to edit it, so it's more like rough notes.

Current AI capabilities are a fraction of a fraction of human intelligence, something I'll try to get ChatGPT itself to explain (I was just asking it questions and its answers were near enough, if understated). But we're not used to machines simulating natural language, so it looks impressive, as do robots with realistic movement. That doesn't mean it's intelligent in the way humans are intelligent. It's still awesome, though, so count me as both a fan and a sceptic.

I think the best place to start is by comparing AI to dangerous machines, to get that out of the way.

We have long had dangerous machines, from guns to warheads and viruses, including sufficient nuclear arms to eliminate ourselves. The nuclear risk is already immeasurably greater than anything AI can bring to the table now, and as I will argue, in the future.

That's not because AI can't be used to create something dangerous, such as an army of lunatic droids. But doing that is much harder than keeping nuclear weapons out of the hands of madmen. And the trajectory of development means there is ample time to constrain the dangerous use of AI as new threats arise, identifying risk areas, limiting and controlling risk, etc.

And, as with nuclear hardware, the footprint is larger than just an algorithm because that algorithm has to interface with extraordinarily expensive and specialised hardware to be dangerous. Again, that buys time and leaves a very traceable and trackable footprint.

So, I think the best way to put that question aside is to recognise that nuclear weapons are already far, far more dangerous, and we control them successfully (albeit not well enough in my view).

Arguably, though, cybersecurity risk is probably a better example of a dangerous machine (or programme, specifically). We are struggling to contain it, but we do counter it by constantly staying ahead of it, crowdsharing data and crowdsourcing solutions and testing. And we can definitely get much tougher on it if we want to but don't see the cost as high enough to get draconian yet. At some point, no doubt when insurance becomes impractical, I do think we will get tougher on it.

I don't see AI risk much differently, because the very same process can be used to stay ahead of AI algorithms, and once again we can get more draconian if we want.

Some of the fear, though, comes from a misunderstanding of what AI is. It isn't intelligence, at least in the sense of human intelligence. It is approximating and simulating expected answers through high-powered statistical search and structuring algorithms that learn from patterns stored in memory. And because AI isn't like human intelligence (to be explained more below), it isn't like human learning. Its learning is really just pattern matching and statistical expectation, which 'learns' by matching and guessing progressively better. That's nothing like the human learning engine, even though we also do statistics (and hence encode them in the algorithm itself).
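
If 'statistical expectation' sounds abstract, here's a toy sketch of the idea (nothing like a real model's scale, but the same principle: count patterns, then emit the likeliest continuation):

Code:
# Toy sketch of 'learning' as pattern counting: a bigram next-word model.
# Real systems are vastly more sophisticated, but the principle is the same:
# produce the statistically expected continuation, not an 'understood' one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally which word follows which

def predict(word):
    return counts[word].most_common(1)[0][0]  # likeliest next word

print(predict("the"))  # 'cat', the most common continuation seen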

A lot of the fear also comes from AI hype. The greater the fear, the greater the intelligence, right? So, of course the creators of large language models like ChatGPT ham that up miles beyond reality because it makes the algorithm seem even more impressive. So, they're loving all this fear and concern because it's validating.

So for me, no fear whatsoever. Humans adapt pretty quickly and effectively to shifts like this and always have.

Of course, no fear doesn't mean no problems. E.g., learners can cheat, and AI embedded into products to automate them can be biased or learn to be biased. So sure, you need to control those things, but the current proposals I've seen look reasonable (e.g., making ChatGPT identify its sources; Bard does this somewhat, but not sufficiently, and it wouldn't be very hard). Bias is easy enough to counter through transparency, contextual warnings and other tactics that have already been developed and discussed extensively.

And it's not just talk; the legislation is already coming thick and fast. The UK has bills in process already. This one on protecting work and employment from biased AI is a good example: https://bills.parliament.uk/bills/3464

To my mind, the education problem isn't hard, although school education is not my expertise, so Cam I will defer to you here!

But I'd make kids do more work in class, and create sterile PCs and work processes that demonstrate ChatGPT wasn't used, etc. That's easy at school, but it can be done at home too. Creepy companies already do this with remote workers, so it can't be hard to find a sensible way to monitor kids, and it might be a good thing given they already waste too much time on the web and their phones.

Plagiarism and copying were rife long before ChatGPT, and it's pretty easy to recognise the difference between someone's writing and ChatGPT's, even if you manipulate it to write in a certain style. If a teacher knows the student and has time to actually read the text properly, it wouldn't be hard. The problem here is less to do with ChatGPT and more to do with existing problems in education, such as ever-shrinking intensive learning and engagement.

Sure, it's a challenge and a pain, but not really a difficult problem in and of itself to solve. You could even get kids to write more at school and map that work so you can algorithmically recognise their writing. The main issue there is privacy, so such data would have to be treated like medical data, although writing is already being harvested from public data on the web. And all kinds of medical conditions affect writing, so language mapping is already being done to help kids. There's a strong argument, in fact, that doing that would lead to auto-instructional language learning, much like automated typing lessons.
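
The recognition part is less exotic than it sounds. A toy sketch of the idea (character-trigram profiles compared by cosine similarity; real stylometry uses far richer features and proper baselines):

Code:
# Toy stylometry sketch: compare character-trigram profiles of two texts.
# Low similarity between in-class writing and a take-home essay -> look closer.
import math
from collections import Counter

def trigram_profile(text):
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

known = trigram_profile("a sample of the student's supervised in-class writing")
submitted = trigram_profile("the take-home essay that needs checking")
print(f"style similarity: {cosine(known, submitted):.2f}")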

Again, to me they're very minor challenges. Education has far, far bigger problems already, being absurdly outdated, grossly under-funded, barely fit for purpose and interfered with by every second Karen parent and religious crazy. I won't go into that, but much like nuclear weapons are already infinitely more dangerous than AI, education has infinitely greater challenges than ChatGPT.

That's a quick take on those aspects.

You will sometimes see the term 'general intelligence', or artificial general intelligence (AGI). That's real intelligence, and AI is not even close to achieving it. But because people can't define intelligence or explain what human intelligence entails, which is extremely complex, they mistakenly conflate current AI pattern matching and simulation with actual intelligence.

To me, the difference between the two is like the difference between pressing the cos button on a scientific calculator and understanding trigonometry. Any idiot can press the cos button. All the intelligence lies in the reason for pressing it, which has its own extensive system of reasoning, deductions, planning and motivations. Then you have to understand what the output means, store it in some useful and contextualised form for later, be able to modify or transform it as needed, apply it to the specific task you've already grasped and have in mind, assess its effectiveness or suitability, share the new outputs with others in a comprehensible form, etc.

So, unless you already understand trigonometry and bring that whole ensemble of knowledge, plans and logics to the task, pressing that button and viewing its output won't mean a damned thing. AI is pretty much like that; it can generate outputs, but unless you know a hell of a lot about the output already, and a hell of a lot about similar things and related sub-sets and layers of knowledge, including of any humans, organisations or systems you engage in the process, you can't do much more with it than cheat on a test or save time writing something.
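
To labour the calculator analogy with a worked example (the ladder scenario and numbers are mine, made up for illustration):

Code:
import math

# Pressing the cos button: trivial.
print(math.cos(1.0))

# Using it: you must already know the trig model, the units, and why cosine
# (not sine or tangent) answers the question. E.g., how far up a wall does a
# 5 m ladder reach if it leans at 20 degrees from the wall?
angle = math.radians(20)      # knowing to convert degrees to radians
height = 5 * math.cos(angle)  # knowing the wall side is the adjacent side
print(f"{height:.2f} m up the wall")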

Imagine you use ChatGPT to write a product brochure. You already have to know everything about the product to know if what ChatGPT said about it is correct, or if it grasps the key ideas and implications. Using ChatGPT doesn't remove the need for that knowledge at all.

As mentioned, I will use ChatGPT itself to explain some of these issues in subsequent posts. It simply lacks multi-layered contextual awareness, knowledge matrices and purposeful human creative transformation capabilities.

In the case of the product brochure, the text it creates might not be close to the way I want to represent the product: to emphasise certain things, impress certain people, be catchy and memorable, connect the product to other trends, position the product in the marketplace, subtly compare it to rival product features and the language competitors use, allay fears by speaking to a sub-audience, assure regulatory bodies, avoid terms that trigger the wrong associations, hedge against misunderstanding, meet stakeholder and audience expectations, avoid legal jeopardy, discourage misuse or dangerous use, and on and on.

And if ChatGPT can do some of that, great! That will save time and money. But you won't know if its output is even vaguely useful unless you already have advanced knowledge of the product, tasks, fields, people, culture, organisations, legal system, etc. in which the product is embedded. And imagine the product is something complex, like an ERP solution, mobile phone or airplane. If all that knowledge and awareness is needed to write a brochure for something as simple as a bathroom tap or a sashimi knife, how much more is needed for highly complex products?

And I think that distinction, i.e., that it takes one to know one (it takes an expert to recognise that something is expert), is key here. It also probably offers many of the answers for education. So, when I ask AI something about human intelligence, which I might have wrestled with before (and repeatedly over decades in the case of 'meaning', 'understanding', 'knowing', 'sense-making', 'purpose' and 'learning', etc.), I can clearly see its limits. But anything to save time typing, especially given my health conditions which can make typing painful. Just knowing the right question to ask is 90% of the game.

People just haven't learned the scope of human intelligence well enough to grasp this topic intuitively, or even approach it learnedly (meaning, with a thoughtful, logical plan and approach in mind based on careful testing, study and experience), let alone explain it (which is hard even for philosophers who've studied it for decades). Cognitive biology, cognitive science, psychology, philosophy, sociology, culture, geography, relationships and on and on are super complicated and exhaustive things to describe and explain, let alone understand. Yet the brain processes the practice of all of those things near instantaneously, with a very high degree of mastery, without raising a sweat.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 4:59 am

Here, I ask ChatGPT why AI can't attain general intelligence, or the hypothetical human-like intelligence called Artificial General Intelligence (AGI). You'll see it offers reasons along the lines of what I've said above, but it downplays what this really implies. Once you take the below out of 'intelligence', just what the hell is left? Of course, it doesn't undermine itself by offering the reader my more frank assessment of the implications.

Why can't AI attain AGI? ChatGPT wrote:
Artificial General Intelligence (AGI) refers to highly autonomous systems that outperform humans at most economically valuable work. While the development of AGI is an ongoing research endeavor, there are several challenges that make it difficult for AI to attain AGI at present. Here are some reasons:

Complexity of General Intelligence: Replicating human-like general intelligence is an exceptionally complex task. Human intelligence encompasses a wide range of cognitive abilities, including perception, reasoning, problem-solving, creativity, emotional understanding, and social interaction. Capturing all these aspects in a single system is a formidable challenge.

Lack of Full Understanding of Human Intelligence: Despite significant progress in AI research, our understanding of human intelligence and cognition is still incomplete. The human brain is a highly intricate and nuanced system, and many aspects of intelligence, such as consciousness, subjective experience, and intuition, remain poorly understood. Without a comprehensive understanding of human intelligence, replicating it becomes more challenging.

Limited Cognitive Frameworks: Current AI systems, including state-of-the-art deep learning models, operate within specific cognitive frameworks and lack the flexibility and adaptability of human intelligence. While AI can excel at specific tasks with large amounts of training data, transferring knowledge and skills across different domains or adapting to new situations in the way humans can is currently a significant challenge.

Lack of Common Sense and Contextual Understanding: Humans possess a vast amount of common sense knowledge that helps us understand the world and make sense of ambiguous or incomplete information. AI systems struggle to acquire and utilize this type of knowledge effectively. Understanding context, making intuitive leaps, and handling everyday situations outside the narrow domain of their training data is still an open problem for AI.

Ethical and Safety Concerns: Developing AGI raises significant ethical and safety concerns. AGI could potentially surpass human capabilities rapidly, and if not properly aligned with human values, it could have unintended consequences or pose risks to humanity. Ensuring the safe and responsible development of AGI requires careful research and consideration of ethical frameworks.

It's important to note that while AGI remains a significant challenge, progress continues to be made in the field of AI. Researchers are actively working on advancing various aspects of AI, including improving performance, increasing robustness, and addressing the limitations discussed above.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 5:06 am

Here, I ask ChatGPT why AI can't produce one-shot (or few-shot) models. A human can see many things for the very first time and categorise them usefully and effectively. Not so AI. Why? 'Reality' is miles too vast, and AI doesn't know anything inherent about objects. It knows how people usually describe, categorise and use objects, which is more a report about what other people think of something than actually knowing anything about it. Human DNA comes with this built in.
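
For flavour, the few-shot workaround the answer below mentions usually boils down to something like this (a bare sketch of the nearest-prototype idea; embed() is a made-up stand-in for a pretrained encoder, not a real API):

Code:
# Bare sketch of one-shot classification by nearest prototype.
# All the 'prior knowledge' lives in the embedding, not in the single example.
import math

def embed(x):
    # hypothetical placeholder: a real system would use a pretrained network
    return [float(len(x)), float(sum(map(ord, x)) % 97)]

# one labelled example per class: the 'one shot'
support = {"greeting": embed("hello there"), "farewell": embed("goodbye now")}

def classify(x):
    e = embed(x)
    return min(support, key=lambda label: math.dist(support[label], e))

print(classify("hi there"))  # whichever prototype is nearest wins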

Can AI generate one-shot models? ChatGPT wrote:
The term "one-shot models" typically refers to machine learning models that can learn from a single or a few examples, rather than requiring large amounts of labeled data. While there has been progress in developing few-shot learning techniques, which aim to train models with limited data, achieving true one-shot learning remains a challenging task for AI.

One of the main reasons for the difficulty in one-shot learning is the complexity and variability of real-world data. AI models typically require a substantial amount of diverse and representative data to generalize effectively. With only a single or a few examples, it is challenging for the model to capture the underlying patterns and variations in the data.

However, researchers have explored various techniques to improve few-shot learning capabilities. For example, methods like meta-learning or transfer learning aim to leverage prior knowledge or experiences from similar tasks to enhance learning efficiency and generalize from limited examples. These techniques enable models to adapt and learn new tasks with minimal data by leveraging shared knowledge from related tasks or domains.

While these approaches have shown promising results in specific domains and scenarios, achieving robust and reliable one-shot learning that can generalize across diverse real-world problems remains an ongoing research challenge. It requires addressing issues such as data scarcity, overfitting, domain shift, and capturing high-level abstract representations from limited examples.

While we may not have fully achieved true one-shot learning in AI, the progress in few-shot learning techniques demonstrates a step towards more efficient learning from limited data. Ongoing research and advancements in the field continue to explore ways to enhance generalization and learning capabilities with minimal examples.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 5:15 am

Much the same thing in a different form:

Which is more intelligent, a newborn baby or the best large language model in existence? ChatGPT wrote:
When comparing the intelligence of a newborn baby and the best large language model in existence, it's important to consider the strengths and limitations of each.

A newborn baby:

Potential for Development: A newborn baby has the potential to develop a wide range of cognitive abilities, including perception, reasoning, problem-solving, creativity, emotional understanding, and social interaction. As they grow and learn from their environment and experiences, they can acquire knowledge, language skills, and various cognitive capabilities.

Learning Capacity: Babies possess innate learning mechanisms that allow them to learn from their surroundings and adapt to new situations. They can process sensory information, form connections between experiences, and acquire knowledge through observation, exploration, and interaction with others.

General Intelligence: Babies have the capacity to develop general intelligence, which encompasses various cognitive domains and the ability to transfer knowledge across different tasks and contexts.

The best large language model:

Language Processing Abilities: The best large language model, like state-of-the-art AI models, can process and generate human-like language. These models have been trained on vast amounts of text data and can answer questions, engage in conversations, and provide information within the domain they were trained on.

Extensive Knowledge Base: Language models have access to a vast amount of pre-existing information and can provide accurate and detailed responses within their trained domain. They can leverage the patterns and knowledge learned from the training data to generate appropriate responses.

Specialized Expertise: Language models excel in language-related tasks and have been optimized for processing and generating text. They can perform specific language-based tasks with high accuracy and efficiency.

Comparing the two:

It's challenging to make a definitive comparison between the intelligence of a newborn baby and a large language model. Babies have the potential to develop a wide range of cognitive abilities and general intelligence, whereas language models are specialized in language processing tasks.

While language models can process and generate language at an impressive level, they lack the broader cognitive abilities, physical embodiment, and potential for development that newborn babies possess. Babies have the capacity to learn from their experiences, adapt to new situations, and develop intelligence across various domains.

It's important to note that the notion of intelligence is multifaceted, and comparing the intelligence of a human being with an artificial system is a complex task. The two entities operate in different realms and have different modes of cognition.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 5:21 am

And another way, referring to a branch of linguistics called Pragmatics, which tries to understand how we detect implied, indirect meaning, such as that used in jokes, but which is pretty much embedded in everything we say to some extent:

Can ChatGPT detect nuance of the sort studied in Pragmatic Linguistics? ChatGPT wrote:
While ChatGPT and similar large language models have the ability to process and generate language, their understanding of nuanced pragmatics, as studied in pragmatic linguistics, is limited. Pragmatics is the branch of linguistics that investigates how context, social factors, and speaker intentions influence the meaning and interpretation of language beyond its literal content.

While ChatGPT can generate text based on statistical patterns learned from training data, it does not possess true understanding of pragmatics in the same way that humans do. Here are some reasons why ChatGPT's grasp of nuanced pragmatics is currently limited:

Contextual Understanding: While ChatGPT has some ability to understand context within a conversation or text, its comprehension is based on patterns in the training data rather than deep contextual understanding. It may struggle to infer implicit meaning, resolve ambiguous references, or interpret the intended meaning behind indirect or metaphorical language.

Speaker Intentions and Social Factors: Pragmatics involves considering the intentions of the speaker, social context, and shared knowledge to interpret meaning. ChatGPT lacks an understanding of speaker intentions and social factors, making it challenging for the model to accurately capture nuanced pragmatic aspects like irony, sarcasm, politeness, or cultural references.

Coherence and Coherence Relations: Pragmatic linguistics also studies how discourse is organized and how coherent interpretations are formed based on relations between sentences and utterances. While ChatGPT can generate coherent text on a local level, it may struggle to maintain coherence over longer stretches of conversation or understand complex coherence relations.

Contextual Implicatures: Pragmatics explores how speakers convey implied meaning that goes beyond the literal interpretation of their words. Detecting and generating contextual implicatures, such as implicatures arising from Grice's maxims, is challenging for ChatGPT, as it relies more on surface-level patterns rather than true understanding of the underlying implicatures.

It's important to note that while ChatGPT may not possess advanced pragmatic understanding, it can still produce responses that seem contextually appropriate due to the patterns it has learned from training data. However, these responses are not necessarily based on a deep understanding of pragmatic nuances.

Advancing the understanding of nuanced pragmatics in AI systems is an active area of research. Researchers are exploring techniques to incorporate pragmatic considerations and context-awareness into language models, aiming to enhance their ability to capture and generate nuanced pragmatic phenomena. However, achieving human-level pragmatic understanding remains a significant challenge for AI systems like ChatGPT.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
pietillidie 



Joined: 07 Jan 2005


Posted: Sun May 21, 2023 5:33 am

The thing is, and ChatGPT won't rush to tell you this: by the time you take all of those things out, you're not left with much by way of 'intelligence'. Those are all the really hard things about meaning, understanding and sense-making, the things that make humans unique. We've known this for millennia.

Can't bat, can't bowl, can't field, but by geez he's a good cricketer!

You have language, but not much intelligence. And any intelligence and language you have is borrowed from intelligent, literate people.

Next, I'll demo some of its smarts. I actually love ChatGPT for what it is, not what it isn't.

_________________
In the end the rain comes down, washes clean the streets of a blue sky town.
Help Nick's: http://www.magpies.net/nick/bb/fundraising.htm
Page 1 of 2