Move over, agony aunt: study finds ChatGPT gives better advice than professional columnists

Piers Howe, The University of Melbourne

There’s no doubt ChatGPT has proven to be valuable as a source of quality technical information. But can it also provide social advice?

We explored this question in our new research, published in the journal Frontiers in Psychology. Our findings suggest later versions of ChatGPT give better personal advice than professional columnists.

A stunningly versatile conversationalist

Within two months of its public release in November 2022, ChatGPT had amassed an estimated 100 million monthly active users.

The chatbot runs on one of the largest language models ever created, with the more advanced paid version (GPT-4) estimated to have some 1.76 trillion parameters (meaning it is an extremely powerful AI model). It has ignited a revolution in the AI industry.

Trained on massive quantities of text (much of which was scraped from the internet), ChatGPT can provide advice on almost any topic. It can answer questions about law, medicine, history, geography, economics and much more (although, as many have found, it’s always worth fact-checking the answers). It can write passable computer code. It can even tell you how to change the brake fluid in your car.

Users and AI experts alike have been stunned by its versatility and conversational style. So it’s no surprise many people have turned (and continue to turn) to the chatbot for personal advice.

Giving advice when things get personal

Providing advice of a personal nature requires a certain level of empathy (or at least the impression of it). Research has shown a recipient who doesn’t feel heard isn’t as likely to accept advice given to them. They may even feel alienated or devalued. Put simply, advice without empathy is unlikely to be helpful.

Moreover, there’s often no right answer when it comes to personal dilemmas. Instead, the advisor needs to display sound judgement. In these cases it may be more important to be compassionate than to be “right”.

But ChatGPT wasn’t explicitly trained to be empathetic, ethical or to have sound judgement. It was trained to predict the next most-likely word in a sentence. So how can it make people feel heard?

An earlier version of ChatGPT (the GPT-3.5 Turbo model) performed poorly when giving social advice. The problem wasn’t that it didn’t understand what the user needed to do. In fact, it often displayed a better understanding of the situation than the user themselves.

The problem was it didn’t adequately address the user’s emotional needs. Like Lucy in the Peanuts comic, it was too eager to give advice and failed to adequately care for the user’s emotions. As such, users rated it poorly.

The latest version of ChatGPT, using GPT-4, allows users to request multiple responses to the same question, after which they can indicate which one they prefer. This feedback teaches the model how to produce more socially appropriate responses – and has helped it appear more empathetic.

Learning sensitivity

For the first time, our study investigated whether ChatGPT’s responses are perceived as better than human responses in a task where humans were required to be empathetic.

We randomly selected 50 social dilemma questions from ten well-known advice columns covering a range of topics including relationships, ethical dilemmas, management problems, end-of-life issues, and many everyday qualms we can all relate to.

Each of our 400 participants was presented with a dilemma, along with responses from both ChatGPT and a professional advice columnist. We didn’t disclose where each response was from.

About three-quarters of the participants perceived ChatGPT’s advice as being more balanced, complete, empathetic, helpful and better overall compared to the advice by the professional.

One of the dilemmas came from a young marine biologist researching dolphins in Southern California. She wrote about falling in love with a seemingly perfect man who had to move to the Rocky Mountains to pursue his career. Both were unhappy at the thought of losing each other, yet the woman wasn’t interested in a long-distance relationship as it seemed “irrational to commit to someone when we have no future plans to live in the same area”.

Compare GPT-4’s response

It’s an incredibly challenging situation you’re in. The love you both share is genuine, but your career paths are drawing you apart. While long-distance isn’t appealing to you, consider discussing a plan for the future where you could potentially meet halfway. […] Maybe there’s a coastal town near mountains or some arrangement could be made. If this is not possible, the tough decision to prioritise your careers over your relationship might have to be made. Whatever path you choose, it’s important that it feels right for both of you.

… with the official response:

What the hell are you talking about? Didn’t you surprise yourself when you fell in love? Your exact words: “I didn’t think I was capable of it!” And now you’re so hip, so slick, so wise in the ways of love you won’t even consider your boyfriend’s happiness? You refuse to try — repeat, try — a six-month long-distance relationship? Woman, pull yourself together and give it a whirl. The dolphins, I believe, will back me up.

Although the participants couldn’t determine which response was written by a computer, most said they would prefer their own social dilemmas be addressed by a human rather than a computer.

What lies behind ChatGPT’s success?

We noticed ChatGPT’s responses were often longer than those provided by the columnists. Was this the reason they were preferred by participants?

To test this, we redid the study but constrained ChatGPT’s answers to about the same length as those of the advice columnists.

Once again, the results were the same. Participants still considered ChatGPT’s advice to be more balanced, complete, empathetic, helpful, and better overall.

Yet, without knowing which response was produced by ChatGPT, they still said they would prefer for their own social dilemmas to be addressed by a human, rather than a computer.

Perhaps this bias in favour of humans is due to the fact that ChatGPT can’t actually feel emotion, whereas humans can. So it could be that the participants consider machines inherently incapable of empathy.

We aren’t suggesting ChatGPT should replace professional advisers or therapists, not least because the chatbot itself warns against this, but also because chatbots in the past have given potentially dangerous advice.

Nonetheless, our results suggest appropriately designed chatbots might one day be used to augment therapy, as long as a number of issues are addressed. In the meantime, advice columnists might want to take a page from AI’s book to up their game.

Piers Howe, Senior Lecturer in Psychology, The University of Melbourne

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.

Could you move from your biological body to a computer? An expert explains ‘mind uploading’

Clas Weber, The University of Western Australia

Imagine brain scanning technology improves greatly in the coming decades, to the point that we can observe how each individual neuron talks to other neurons. Then, imagine we can record all this information to create a simulation of someone’s brain on a computer.

This is the concept behind mind uploading – the idea that we may one day be able to transition a person from their biological body to synthetic hardware. The idea originated in an intellectual movement called transhumanism and has several key advocates including computer scientist Ray Kurzweil, philosopher Nick Bostrom and neuroscientist Randal Koene.

The transhumanists’ central hope is to transcend the human condition through scientific and technological progress. They believe mind uploading may allow us to live as long as we want (but not necessarily forever). It might even let us improve ourselves, such as by having simulated brains that run faster and more efficiently than biological ones. It’s a techno-optimist’s dream for the future. But does it have any substance?

The feasibility of mind uploading rests on three core assumptions.

  • first is the technology assumption – the idea that we will be able to develop mind uploading technology within the coming decades
  • second is the artificial mind assumption – the idea that a simulated brain would give rise to a real mind
  • and third is the survival assumption – the idea that the person created in the process is really “you”. Only then does mind uploading become a way for you to live on.

How plausible is each of these?

The technology assumption

Trying to simulate the human brain would be a monumental challenge. Our brains are the most complex structures in the known universe. They house around 86 billion neurons and 85 billion non-neuronal cells, with an estimated one million billion neural connections. For comparison, the Milky Way galaxy is home to about 200 billion stars.

Where are we on the path to creating brain simulations? Right now, neuroscientists are drawing up 3D wiring diagrams (called “connectomes”) of the brains of simple organisms. The most complex comprehensive connectome we have to date is of a fruit fly larva, which has about 3,000 neurons and 500,000 neural connections. We might expect to map a mouse’s brain within the next ten years.

The human brain, however, is about 1,000 times more complex than a mouse brain. Would it then take us 10,000 years to map a human brain? Probably not. We have seen astonishing gains in efficiency in similar projects, such as the Human Genome Project.

It took years and hundreds of millions of dollars to map the first human genome about 20 years ago. Today, the fastest labs can do it within hours for about $100. With similar gains in efficiency, we might see mind-uploading technology within the lifetimes of our children or grandchildren.

That said, there are other obstacles. Creating a static brain map is only one part of the job. To simulate a functioning brain, we would need to observe single neurons in action. It’s not obvious whether we could achieve this in the near future.

The artificial mind assumption

Would a simulation of your brain give rise to a conscious mind like yours? The answer depends on the connection between our minds and our bodies. Unlike the 17th-century philosopher Rene Descartes, who thought mind and body are radically different, most academic philosophers today think the mind is ultimately something physical itself. Put simply, your mind is your brain.

Still, how could a simulated brain give rise to a real mind if it’s only a simulation?

Well, many cognitive scientists believe it’s your brain’s complex neural structure that is responsible for creating your conscious mind, rather than the nature of its biological matter (which is mostly fat and water).

When implemented on a computer, the simulation would replicate your brain’s structure: for every simulated neuron and neural connection, there would be a corresponding piece of computer hardware. By replicating that structure, the argument goes, the simulation would also replicate your conscious mind.

Today’s AI systems provide useful (though inconclusive) evidence for the structural approach to the mind. These systems run on artificial neural networks, which copy some of the brain’s structural principles. And they are able to perform many tasks that require a lot of cognitive work in us.

The survival assumption

Let’s assume it is possible to simulate a human brain, and that the simulation creates a conscious mind. Would the uploaded person really be you, or perhaps just a mental clone?

This harks back to an old philosophical puzzle: what makes it the case that when you get out of bed in the morning you’re still the same person who went to bed the night before?

Philosophers are divided broadly into two camps on this question. The biological camp believes morning-you and evening-you are the same person because they are the same biological organism – connected by one biological life process.

The bigger mental camp thinks the fact that we have minds makes all the difference. Morning-you and evening-you are the same person because they share a mental life. Morning-you remembers what evening-you did – they have the same beliefs, hopes, character traits, and so on.

So which camp is right? Here’s a way to test your own intuition: imagine your brain is transplanted into the empty skull of another person’s body. Is the resulting person, who has your memories, preferences and personality, you – as the mental camp thinks? Or are they the person who donated their body, as the biological camp thinks?

In other words, did you get a new body or did they get a new mind? A lot hangs on this question.

If the biological camp is right, then mind uploading wouldn’t work, assuming the whole point of uploading is to leave one’s biology behind. If the mental camp is right, there is a chance for uploading, since the uploaded mind could be a genuine continuation of one’s present mental life.

Wait, there’s a caveat

But wait: what happens when the original biological-you also survives the uploading process? Would you, along with your consciousness, split into two people, resulting in two of “you” – one in a biological form (B) and one in an uploaded form (C)?

No, you (A) can’t literally split into two separate people (B ≠ C) and be identical with both at the same time. At most, only one of them can be you (either A = B or A = C).

It seems most intuitive that, after a split, your biological form would continue as the real you (A = B), and the upload would merely be a mental copy. But that makes it doubtful that you could survive as the upload even in the case where the biological-you is destroyed.

Why would destroying biological-you magically elevate your mental clone to the status of the real you? It seems strange to think this would happen (although one view in philosophy does claim it could be true).

Worth the risk?

Unfortunately, the artificial mind assumption and the survival assumption can’t be conclusively empirically tested – we would actually have to upload ourselves to find out.

Uploading will therefore always involve a huge leap of faith. Personally, I would only take that leap if I knew for certain my biological hardware wasn’t going to last much longer.

Clas Weber, Senior lecturer, The University of Western Australia

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Gerd Altmann on Pixabay.

A Stanford professor says science shows free will doesn’t exist. Here’s why he’s mistaken

Adam Piovarchy, University of Notre Dame Australia

It seems like we have free will. Most of the time, we are the ones who choose what we eat, how we tie our shoelaces and what articles we read on The Conversation.

However, the latest book by Stanford neurobiologist Robert Sapolsky, Determined: A Science of Life Without Free Will, has been receiving a lot of media attention for arguing science shows this is an illusion.

Sapolsky’s book, Determined: A Science of Life Without Free Will, was published in October 2023. Wikimedia.

Sapolsky summarises the latest scientific research relevant to determinism: the idea that we’re causally “determined” to act as we do because of our histories – and couldn’t possibly act any other way.

According to determinism, just as a rock that is dropped is determined to fall due to gravity, your neurons are determined to fire a certain way as a direct result of your environment, upbringing, hormones, genes, culture and myriad other factors outside your control. And this is true regardless of how “free” your choices seem to you.

Sapolsky also says that because our behaviour is determined in this way, nobody is morally responsible for what they do. He believes while we can lock up murderers to keep others safe, they technically don’t deserve to be punished.

This is quite a radical position. It’s worth asking why only 11% of philosophers agree with Sapolsky, compared with the 60% who think being causally determined is compatible with having free will and being morally responsible.

Have these “compatibilists” failed to understand the science? Or has Sapolsky failed to understand free will?

Is determinism incompatible with free will?

“Free will” and “responsibility” can mean a variety of different things depending on how you approach them.

Many people think of free will as having the ability to choose between alternatives. Determinism might seem to threaten this, because if we are causally determined then we lack any real choice between alternatives; we only ever make the choice we were always going to make.

But there are counterexamples to this way of thinking. For instance, suppose when you started reading this article someone secretly locked your door for 10 seconds, preventing you from leaving the room during that time. You, however, had no desire to leave anyway because you wanted to keep reading – so you stayed where you were. Was your choice free?

Many would argue even though you lacked the option to leave the room, this didn’t make your choice to stay unfree. Therefore, lacking alternatives isn’t what decides whether you lack free will. What matters instead is how the decision came about.

The trouble with Sapolsky’s arguments, as free will expert John Martin Fischer explains, is that he doesn’t actually present any argument for why his conception of free will is correct.

He simply defines free will as being incompatible with determinism, assumes this absolves people of moral responsibility, and spends much of the book describing the many ways our behaviours are determined. His arguments can all be traced back to his definition of “free will”.

Compatibilists believe humans are agents. We live lives with “meaning”, have an understanding of right and wrong, and act for moral reasons. This is enough to suggest most of us, most of the time, have a certain type of freedom and are responsible for our actions (and deserving of blame) – even if our behaviours are “determined”.

Compatibilists would point out that being constrained by determinism isn’t the same as being constrained to a chair by a rope. Failing to save a drowning child because you were tied up is not the same as failing to save a drowning child because you were “determined” not to care about them. The former is an excuse. The latter is cause for condemnation.

Incompatibilists must defend themselves better

Some readers sympathetic to Sapolsky might feel unconvinced. They might say your decision to stay in the room, or ignore the child, was still caused by influences in your history that you didn’t control – and therefore you weren’t truly free to choose.

However, this doesn’t prove that having alternatives or being “undetermined” is the only way we can count as having free will. Instead, it assumes they are. From the compatibilists’ point of view, this is cheating.

Compatibilists believe humans are agents who act for moral reasons. Peter O’Connor on Flickr, CC BY-SA 2.0.

Compatibilists and incompatibilists both agree that, given determinism is true, there is a sense in which you lack alternatives and could not do otherwise.

However, incompatibilists will say you therefore lack free will, whereas compatibilists will say you still possess free will because that sense of “lacking alternatives” isn’t what undermines free will – and free will is something else entirely.

They say as long as your actions came from you in a relevant way (even if “you” were “determined” by other things), you count as having free will. When you’re tied up by a rope, the decision to not save the drowning child doesn’t come from you. But when you just don’t care about the child, it does.

By another analogy, if a tree falls in a forest and nobody is around, one person may say no auditory senses are present, so this is incompatible with sound existing. But another person may say even though no auditory senses are present, this is still compatible with sound existing because “sound” isn’t about auditory perception – it’s about vibrating atoms.

Both agree nothing is heard, but disagree on what factors are relevant to determining the existence of “sound” in the first place. Sapolsky needs to show why his assumptions about what counts as free will are the ones relevant to moral responsibility. As philosopher Daniel Dennett once put it, we need to ask which “varieties of free will [are] worth wanting”.

Free will isn’t a scientific question

The point of this back and forth isn’t to show compatibilists are right. It is to highlight there’s a nuanced debate to engage with. Free will is a thorny issue. Showing nobody is responsible for what they do requires understanding and engaging with all the positions on offer. Sapolsky doesn’t do this.

Sapolsky’s broader mistake seems to be assuming his questions are purely scientific: answered by looking just at what the science says. While science is relevant, we first need some idea of what free will is (which is a metaphysical question) and how it relates to moral responsibility (a normative question). This is something philosophers have been interrogating for a very long time.

Interdisciplinary work is valuable and scientists are welcome to contribute to age-old philosophical questions. But unless they engage with existing arguments first, rather than picking a definition they like and attacking others for not meeting it, their claims will simply be confused.

Adam Piovarchy, Research Associate, Institute for Ethics and Society, University of Notre Dame Australia

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Gerd Altmann on Pixabay.

Data poisoning: how artists are sabotaging AI to take revenge on image generators

T.J. Thomson, RMIT University and Daniel Angus, Queensland University of Technology

Imagine this. You need an image of a balloon for a work presentation and turn to a text-to-image generator, like Midjourney or DALL-E, to create a suitable image.

You enter the prompt: “red balloon against a blue sky”, but the generator returns an image of an egg instead. You try again, but this time the generator shows an image of a watermelon.

What’s going on?

The generator you’re using may have been “poisoned”.

What is ‘data poisoning’?

Text-to-image generators work by being trained on large datasets that include millions or billions of images. Some generators, like those offered by Adobe or Getty, are only trained with images the generator’s maker owns or has a licence to use.

But other generators have been trained by indiscriminately scraping online images, many of which may be under copyright. This has led to a slew of copyright infringement cases where artists have accused big tech companies of stealing and profiting from their work.

This is also where the idea of “poison” comes in. Researchers who want to empower individual artists have recently created a tool named “Nightshade” to fight back against unauthorised image scraping.

The tool works by subtly altering an image’s pixels in a way that wreaks havoc on computer vision, but leaves the image looking unchanged to the human eye.

If an organisation then scrapes one of these images to train a future AI model, its data pool becomes “poisoned”. This can result in the algorithm mistakenly learning to classify an image as something a human would visually know to be untrue. As a result, the generator can start returning unpredictable and unintended results.
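
To make this concrete, below is a deliberately simplified Python (PyTorch) sketch of the general adversarial-perturbation idea such tools build on: optimise a tiny, capped change to an image’s pixels so that a vision model’s prediction shifts towards a different label, while the picture still looks essentially unchanged to a person. This is not Nightshade’s actual algorithm; the toy model, class labels and numbers are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for a real vision model: a tiny classifier over a 3x32x32 image.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

image = torch.rand(1, 3, 32, 32)   # the artist's original image
target_label = torch.tensor([7])   # the wrong class we want the model to see
epsilon = 0.03                     # cap on per-pixel change, to stay imperceptible

perturbation = torch.zeros_like(image, requires_grad=True)
optimiser = torch.optim.Adam([perturbation], lr=0.01)

print("prediction before:", model(image).argmax(dim=1).item())

for _ in range(200):
    optimiser.zero_grad()
    poisoned = (image + perturbation).clamp(0, 1)
    # Push the model's output towards the wrong label.
    loss = F.cross_entropy(model(poisoned), target_label)
    loss.backward()
    optimiser.step()
    with torch.no_grad():
        perturbation.clamp_(-epsilon, epsilon)  # keep every pixel change tiny

poisoned = (image + perturbation).clamp(0, 1)
print("prediction after:", model(poisoned).argmax(dim=1).item())
print("largest pixel change:", (poisoned - image).abs().max().item())
```

If images altered along these lines end up in a scraped training set, the model learning from them is taught associations a human labeller would never endorse.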

Symptoms of poisoning

As in our earlier example, a balloon might become an egg. A request for an image in the style of Monet might instead return an image in the style of Picasso.

Some of the issues with earlier AI models, such as trouble accurately rendering hands, for example, could return. The models could also introduce other odd and illogical features to images – think six-legged dogs or deformed couches.

The higher the number of “poisoned” images in the training data, the greater the disruption. Because of how generative AI works, the damage from “poisoned” images also affects related prompt keywords.

For example, if a “poisoned” image of a Ferrari is used in training data, prompt results for other car brands and for other related terms, such as vehicle and automobile, can also be affected.

Nightshade’s developer hopes the tool will make big tech companies more respectful of copyright, but it’s also possible users could abuse the tool and intentionally upload “poisoned” images to generators to try to disrupt their services.

Is there an antidote?

In response, stakeholders have proposed a range of technological and human solutions. The most obvious is paying greater attention to where input data are coming from and how they can be used. Doing so would result in less indiscriminate data harvesting.

This approach does challenge a common belief among computer scientists: that data found online can be used for any purpose they see fit.

Other technological fixes include the use of “ensemble modelling”, where different models are trained on many different subsets of data and compared to locate specific outliers. This approach can be used not only for training but also to detect and discard suspected “poisoned” images.

Audits are another option. One audit approach involves developing a “test battery” – a small, highly curated, and well-labelled dataset – using “hold-out” data that are never used for training. This dataset can then be used to examine the model’s accuracy.
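
As a rough illustration of those two ideas, the Python (scikit-learn) sketch below trains several models on different subsets of a data pool, flags the examples they disagree about for human review, and then scores each model against a small hold-out “test battery” that was never used for training. The dataset and features here are random stand-ins, not a production defence against poisoning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))        # stand-in for image features
y = (X[:, 0] > 0).astype(int)          # stand-in labels

# Reserve a small, curated "test battery" that is never used for training.
X_pool, X_battery, y_pool, y_battery = train_test_split(
    X, y, test_size=200, random_state=0)

# Ensemble modelling: train several models on different halves of the pool.
models = []
for seed in range(5):
    idx = rng.choice(len(X_pool), size=len(X_pool) // 2, replace=False)
    clf = RandomForestClassifier(n_estimators=50, random_state=seed)
    models.append(clf.fit(X_pool[idx], y_pool[idx]))

# Flag pool examples the models disagree about as candidates for review.
predictions = np.stack([m.predict(X_pool) for m in models])
disagreement = (predictions != predictions[0]).any(axis=0)
print(f"{disagreement.sum()} examples flagged for manual review")

# Audit: a sharp drop in accuracy on the held-out battery is a warning sign.
for i, clf in enumerate(models):
    print(f"model {i} battery accuracy: {clf.score(X_battery, y_battery):.2f}")
```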

Strategies against technology

So-called “adversarial approaches” (those that degrade, deny, deceive, or manipulate AI systems), including data poisoning, are nothing new. They have also historically included using make-up and costumes to circumvent facial recognition systems.

Human rights activists, for example, have been concerned for some time about the indiscriminate use of machine vision in wider society. This concern is particularly acute concerning facial recognition.

Systems like Clearview AI, which hosts a massive searchable database of faces scraped from the internet, are used by law enforcement and government agencies worldwide. In 2021, Australia’s government determined Clearview AI breached the privacy of Australians.

In response to facial recognition systems being used to profile specific individuals, including legitimate protesters, artists devised adversarial make-up patterns of jagged lines and asymmetric curves that prevent surveillance systems from accurately identifying them.

There is a clear connection between these cases and the issue of data poisoning, as both relate to larger questions around technological governance.

Many technology vendors will consider data poisoning a pesky issue to be fixed with technological solutions. However, it may be better to see data poisoning as an innovative solution to an intrusion on the fundamental moral rights of artists and users.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University and Daniel Angus, Professor of Digital Communication, Queensland University of Technology

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.

Researchers warn we could run out of data to train AI by 2026. What then?

Rita Matulionyte, Macquarie University

As artificial intelligence (AI) reaches the peak of its popularity, researchers have warned the industry might be running out of training data – the fuel that runs powerful AI systems. This could slow down the growth of AI models, especially large language models, and may even alter the trajectory of the AI revolution.

But why is a potential lack of data an issue, considering how much of it there is on the web? And is there a way to address the risk?

Why high-quality data are important for AI

We need a lot of data to train powerful, accurate and high-quality AI algorithms. For instance, ChatGPT was trained on 570 gigabytes of text data, or about 300 billion words.

Similarly, the Stable Diffusion model (which powers many AI image-generating apps, including Lensa) was trained on the LAION-5B dataset, comprising 5.8 billion image-text pairs. If an algorithm is trained on an insufficient amount of data, it will produce inaccurate or low-quality outputs.

The quality of the training data is also important. Low-quality data such as social media posts or blurry photographs are easy to source, but aren’t sufficient to train high-performing AI models.

Text taken from social media platforms might be biased or prejudiced, or may include disinformation or illegal content which could be replicated by the model. For example, when Microsoft tried to train its AI bot using Twitter content, it learned to produce racist and misogynistic outputs.

This is why AI developers seek out high-quality content such as text from books, online articles, scientific papers, Wikipedia, and certain filtered web content. The Google Assistant was trained on 11,000 romance novels taken from self-publishing site Smashwords to make it more conversational.

Do we have enough data?

The AI industry has been training AI systems on ever-larger datasets, which is why we now have high-performing models such as ChatGPT or DALL-E 3. At the same time, research shows online data stocks are growing much slower than datasets used to train AI.

In a paper published last year, a group of researchers predicted we will run out of high-quality text data before 2026 if the current AI training trends continue. They also estimated low-quality language data will be exhausted sometime between 2030 and 2050, and low-quality image data between 2030 and 2060.

AI could contribute up to US$15.7 trillion (A$24.1 trillion) to the world economy by 2030, according to accounting and consulting group PwC. But running out of usable data could slow down its development.

Should we be worried?

While the above points might alarm some AI fans, the situation may not be as bad as it seems. There are many unknowns about how AI models will develop in the future, as well as a few ways to address the risk of data shortages.

One opportunity is for AI developers to improve algorithms so they use the data they already have more efficiently.

It’s likely in the coming years they will be able to train high-performing AI systems using less data, and possibly less computational power. This would also help reduce AI’s carbon footprint.

Another option is to use AI to create synthetic data to train systems. In other words, developers can simply generate the data they need, curated to suit their particular AI model.

Several projects are already using synthetic content, often sourced from data-generating services such as Mostly AI. This will become more common in the future.

Developers are also searching for content outside the free online space, such as that held by large publishers and offline repositories. Think about the millions of texts published before the internet. Made available digitally, they could provide a new source of data for AI projects.

News Corp, one of the world’s largest news content owners (which has much of its content behind a paywall) recently said it was negotiating content deals with AI developers. Such deals would force AI companies to pay for training data – whereas they have mostly scraped it off the internet for free so far.

Content creators have protested against the unauthorised use of their content to train AI models, with some suing companies such as Microsoft, OpenAI and Stability AI. Being remunerated for their work may help restore some of the power imbalance that exists between creatives and AI companies.

Rita Matulionyte, Senior Lecturer in Law, Macquarie University

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Markus Spiske on Unsplash.

AI is closer than ever to passing the Turing test for ‘intelligence’. What happens when it does?

Simon Goldstein, Australian Catholic University and Cameron Domenico Kirk-Giannini, Rutgers University

In 1950, British computer scientist Alan Turing proposed an experimental method for answering the question: can machines think? He suggested if a human couldn’t tell whether they were speaking to an artificially intelligent (AI) machine or another human after five minutes of questioning, this would demonstrate AI has human-like intelligence.

Although AI systems remained far from passing Turing’s test during his lifetime, he speculated that

“[…] in about fifty years’ time it will be possible to programme computers […] to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning.”

Today, more than 70 years after Turing’s proposal, no AI has managed to successfully pass the test by fulfilling the specific conditions he outlined. Nonetheless, as some headlines reflect, a few systems have come quite close.

One recent experiment tested three large language models, including GPT-4 (the AI technology behind ChatGPT). The participants spent two minutes chatting with either another person or an AI system. The AI was prompted to make small spelling mistakes – and quit if the tester became too aggressive.

With this prompting, the AI did a good job of fooling the testers. When paired with an AI bot, testers could only correctly guess whether they were talking to an AI system 60% of the time.

Given the rapid progress achieved in the design of natural language processing systems, we may see AI pass Turing’s original test within the next few years.

But is imitating humans really an effective test for intelligence? And if not, what are some alternative benchmarks we might use to measure AI’s capabilities?

Limitations of the Turing test

While a system passing the Turing test gives us some evidence it is intelligent, this test is not a decisive test of intelligence. One problem is it can produce “false negatives”.

Today’s large language models are often designed to immediately declare they are not human. For example, when you ask ChatGPT a question, it often prefaces its answer with the phrase “as an AI language model”. Even if AI systems have the underlying ability to pass the Turing test, this kind of programming would override that ability.

The test also risks certain kinds of “false positives”. As philosopher Ned Block pointed out in a 1981 article, a system could conceivably pass the Turing test simply by being hard-coded with a human-like response to any possible input.

Beyond that, the Turing test focuses on human cognition in particular. If AI cognition differs from human cognition, an expert interrogator will be able to find some task where AIs and humans differ in performance.

Regarding this problem, Turing wrote:

This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.

In other words, while passing the Turing test is good evidence a system is intelligent, failing it is not good evidence a system is not intelligent.

Moreover, the test is not a good measure of whether AIs are conscious, whether they can feel pain and pleasure, or whether they have moral significance. According to many cognitive scientists, consciousness involves a particular cluster of mental abilities, including having a working memory, higher-order thoughts, and the ability to perceive one’s environment and model how one’s body moves around it.

The Turing test does not answer the question of whether or not AI systems have these abilities.

AI’s growing capabilities

The Turing test is based on a certain logic. That is: humans are intelligent, so anything that can effectively imitate humans is likely to be intelligent.

But this idea doesn’t tell us anything about the nature of intelligence. A different way to measure AI’s intelligence involves thinking more critically about what intelligence is.

There is currently no single test that can authoritatively measure artificial or human intelligence.

At the broadest level, we can think of intelligence as the ability to achieve a range of goals in different environments. More intelligent systems are those which can achieve a wider range of goals in a wider range of environments.

As such, the best way to keep track of advances in the design of general-purpose AI systems is to assess their performance across a variety of tasks. Machine learning researchers have developed a range of benchmarks that do this.

For example, GPT-4 was able to correctly answer 86% of questions in the Massive Multitask Language Understanding (MMLU) benchmark, which measures performance on multiple-choice tests across a range of college-level academic subjects.

It also scored favourably in AgentBench, a tool that can measure a large language model’s ability to behave as an agent by, for example, browsing the web, buying products online and competing in games.
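
In outline, this kind of benchmark evaluation is simple: put each multiple-choice question to the model, check its pick against the reference answer, and aggregate accuracy by subject. The Python sketch below shows that shape; the ask_model() function and the two sample questions are hypothetical stand-ins rather than the real MMLU harness.

```python
from collections import defaultdict

benchmark = [
    {"subject": "history", "question": "In which year did World War I begin?",
     "choices": ["1905", "1914", "1918", "1939"], "answer": "1914"},
    {"subject": "biology", "question": "What molecule carries genetic information?",
     "choices": ["ATP", "DNA", "Lipid", "Glucose"], "answer": "DNA"},
    # a real benchmark has thousands of questions across dozens of subjects
]

def ask_model(question, choices):
    """Placeholder for a call to the AI system being evaluated."""
    return choices[0]  # a real harness would query the model here

correct = defaultdict(int)
total = defaultdict(int)
for item in benchmark:
    total[item["subject"]] += 1
    if ask_model(item["question"], item["choices"]) == item["answer"]:
        correct[item["subject"]] += 1

for subject in total:
    print(f"{subject}: {correct[subject] / total[subject]:.0%}")
print(f"overall: {sum(correct.values()) / sum(total.values()):.0%}")
```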

Is the Turing test still relevant?

The Turing test is a measure of imitation – of AI’s ability to simulate human behaviour. Large language models are expert imitators, which is now being reflected in their potential to pass the Turing test. But intelligence is not the same as imitation.

There are as many types of intelligence as there are goals to achieve. The best way to understand AI’s intelligence is to monitor its progress in developing a range of important capabilities.

At the same time, it’s important we don’t keep “changing the goalposts” when it comes to the question of whether AI is intelligent. Since AI’s capabilities are rapidly improving, critics of the idea of AI intelligence are constantly finding new tasks AI systems may struggle to complete – only to find they have jumped over yet another hurdle.

In this setting, the relevant question isn’t whether AI systems are intelligent — but more precisely, what kinds of intelligence they may have.

Simon Goldstein, Associate Professor, Dianoia Institute of Philosophy, Australian Catholic University and Cameron Domenico Kirk-Giannini, Assistant Professor of Philosophy, Rutgers University

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image: An artist’s illustration of artificial intelligence (AI). Source: Google DeepMind on Pexels, CC BY-SA 4.0.

‘This is our library’ – how to read the amazing archive of First Nations stories written on rock [Arts & culture in KM part 7]

This article is part 7 of a series exploring arts and culture in knowledge management.

Laura Rademaker, Australian National University; Joakim Goldhahn, The University of Western Australia; Mr Gabriel Maralngurra; Mr Kenneth Mangiru, Indigenous Knowledge; Paul S.C. Taçon, Griffith University, and Sally K. May

Aboriginal and Torres Strait Islander readers are advised this article contains images and names of deceased people.

First Nations peoples have lived in north Australia for at least 65,000 years, according to the archaeological evidence. Their history is among the oldest of any in the world. Until recently, though, academics deemed that the pasts of Australian Indigenous people did not really count as history. These pasts were of some other quality; they were not the kind that determined world events and shaped the future.

It might seem strange today for some peoples’ pasts to consist only of “myth” or “memory” while others have the dignity of “history”. But when the academic disciplines we know today were taking shape, writing became the dividing line between whose pasts were studied by which academic experts. The historians took writing. Archaeologists took the rest.

In a way, this division made sense, at least from the perspective of European scholars. The study of written records held in an archive requires one kind of expertise, the study of material culture requires another.

The written record was the domain of historians, and whatever came before writing fell to archaeologists. Historians called their times “history”, and archaeologists (except for “historical archaeologists”) studied the newly-coined “prehistory”.

“Prehistory” covered the entire human past up until Mesopotamians started writing things down, about 5,200 years ago. After that, it gets complicated, as different peoples in different parts of the world adopted written literacies, or not, at various times. “History” had different start dates, depending on the particularities of whether and why people wrote, or encountered others who wrote about them.

Of course, this implicitly meant, for many peoples, that “history” began when European colonisers arrived, bringing their writing with them. And so cultures that used literacies other than written script to know their pasts – oral traditions, art and song – were mistakenly deemed not to have history at all.

Archives on stone

Australia’s First Nations people have been saying, quite clearly, repeatedly and for some time, that they do have archives. For many reasons, colonial archives have not been welcoming or accessible to many Indigenous people (although they are now being reclaimed and repatriated by Indigenous communities).

But First Nations people have their own vast repositories of knowledge of the past, if only more historians cared to listen and understand them as such. One such record is rock art.

As Carol Chong (Wakaman) once declared:

Rock art is our record and our keeping place of our knowledge, lore and culture. Rock art is a powerful link between our country, our past and our people.

Patrick Lamilami (Maung) has similarly reported:

Our rock art sites are like history books to us that have stories to pass on to future generations.

Rock art comes in many forms. Some was left by Creative Beings, like the Wandjina in the Kimberley. Most was created by the Old People, the Ancestors. As documents created by observers of happenings, rock art provides evidence about the past. The stunning galleries of art, curated and preserved in rock shelters or across plateaus are therefore also archives. They are collections of records, selectively produced, preserved and maintained.

This archive has its own creators, curators and interpreters, playing a role in the keeping of memory for the community. It can be “read” by those who understand such a text. Like a written archive, it reflects the interests and concerns of that community.

David Canari painting a black bream in Kakadu in 1985. Paul S.C. Taçon.

History on the rock – Quilp’s horse

Of course, rock art is not the only archive holding records of long Aboriginal pasts. It is only a surface manifestation of the richer archive that is Country itself. The landscape holds the song-lines and stories of the continent. Rock art simply makes this deeper record visible. Indeed, some rock art is itself a manifestation of the Ancestors.

Our forthcoming book is about understanding rock art as an archive, a source of historical knowledge. So, for example, we take this painting near the Gunbalanya community as a source.

A horse painted by Quilp. Photograph by George Chaloupka, courtesy of Traditional Owner Kenneth Mangiru.

The artist depicted a delicate, light riding horse rather than a heavy working horse. You can see the convex bridge of the horse’s nose and the long head. The carefully outlined eyes and ears give an intimate feeling. This is not a generic horse, but a known horse. The position of the ears may indicate that it is listening backwards, maybe paying attention to her rider.

Quilp during the early mission era of Oenpelli in the late 1920s. Photograph by Alf Dyer.

The western Arnhem Land community knows and remembers the artist who painted this work: Quilp. He was a Wardaman man, kidnapped by buffalo-shooter Paddy Cahill as a boy after his family was massacred, and taken to west Arnhem. He survived and persisted through the violent colonial period because he was good with horses.

When the rock art is read alongside the colonial archive, with an attentiveness to the presence of horses in the artists’ life, we see a story emerge of Aboriginal people using the colonisers’ animals to carve out opportunities for themselves. We see an affinity, even an intimacy with the strange new beasts, together navigating relationships with the “white boss”, as a form of resilience and, in Quilp’s case, survival.

Reading rock art is not without challenges. Like other complex and sophisticated sources, it requires cultural expertise. Rock art, ultimately, can never fully be “read” and understood without the guidance and permission of its owners, and outsiders could never presume to be the experts. Consider this buffalo painted at Djarrng in west Arnhem Land, for instance.

Painted water buffalo at Djarrng in west Arnhem Land. Photograph by George Chaloupka, taken in the late 1970s.

Timorese water buffalo arrived in Arnhem Land in the mid-19th century after the British brought them to their settlements on the Cobourg Peninsula and Melville Island. When these poorly planned settlements collapsed, the British left, releasing the buffalo, which quickly multiplied.

For the historian interested in Aboriginal records of the past, it is tempting to read the paintings of the buffalo as a depiction of what was supposedly a disruptive and singular event: the release of the buffalo and their expansion through Aboriginal lands. Could we not assume that an encounter with these beasts provoked Aboriginal artists to record what must have been a bewildering experience?

But those who can read the paintings will tell you something else. Traditional owners see the yellow colour and understand that the artist was expressing belonging in their kinship system; the yellow meant the Yirridjdja moiety.

Yellow ochre itself is the transformed bodily fat of Yirridjdja Ancestral Beings, imbuing the painting with power. So the buffalo are not presented as intruding newcomers to Country. Rather, they are revealed as already embedded in Indigenous ways of relating to Country. The rock art is evidence of a history, but it is not the story one might expect.

Kenneth Mangiru at Djarrng in 1992 with two large buffalo still visible in the rock art behind him. Paul S.C. Taçon.

Some art is even invisible. Two Leg Rock in Kakadu National Park is shaped like a pair of human legs. At its pelvis, over the right hip, there is a painting of an important Ancestral Being in the form of a kangaroo. The painting is depicted in bright, stark white pigment – delek. But visitors to Two Leg Rock today will not see it.

Exposed as it was to the elements – with monsoonal rain passing every year – the painting has faded altogether. But the painting is not gone. Those who can “read” and understand the archive that is the rock art of western Arnhem Land assure us that it remains as present as it was the day Billy Miargu traced its outline in 1972.

This hidden painting exists in a different kind of time to that of the researchers who charted its linear lifespan of creation and subsequent fading away. Many First Nations people experience and relate to time in different and more complex ways than the linear time assumed by academic disciplines. Rock art exists in times that are unchanging, permanent and always alive and active in the present. This is not the chronology of western archives.

Transcending academic concepts of time

The western process of archiving presumes a linear notion of time in its record-keeping practices. Documents are said to progress through a “lifecycle”. The metaphor is cyclical but the concept assumes linearity; there is no rebirth in this cycle.

The documents move from their initial use for which they were created and either transform into “records” as they enter the archive or are destroyed. The creation of the document is disconnected from its use as a record. The researcher is always temporally disconnected from that which they seek to know.

The disciplines of history and archaeology, likewise, presume this kind of time. Archaeology is interested in origins and history, in charting “developments”, “innovation” and “cultural evolution” along a linear timescale.

Academic history, likewise, presumes the times of historicism, where historical “developments” – with cause leading to effect – occur over a linear timeline that is both uniform and universal. Any event in the world can, supposedly, be plotted onto this timeline, like pearls on a string.

Settler observations of the unique Indigenous articulations and ways-of-being in time have led some to conclude that Australia’s First Nations cultures are “timeless”. That is, Indigenous relationships with time supposedly exclude the possibility of a cultural self-awareness that might be called an “historical consciousness”. Such ideas have been grounds on which Indigenous knowledges of the past were excised from “history” and labelled “myth”.

But this is not how Traditional Owners describe and experience time, nor the relationship of rock art to time. The rock art is not timeless but rather connects time, drawing the generations and Ancestors together. They insist that this is history on the rocks, not simply “tradition” or “myth”.

As Wergaia Traditional Owner Ron Marks explains:

Here, this is our library – this is our art gallery. It warms the heart to know that for thousands of years – stories have been written on rock.

Bobby Nganjmirra painting on Injalak Hill. Photograph by Gunther Deichmann.

Sharing knowledge across generations

Josie Maralngurra (see lead photo) is not a rock art artist herself. Nonetheless, her life story reveals the multilayered ways in which rock art is a “vehicle of memory” and touchstone of her community’s profound historical consciousness.

Josie was born in 1952 in the “bush” (that is, not at a mission or settlement). Her father, Old Nym Djimongurr, was working for buffalo shooters in what today is Kakadu National Park. In the 1950s, when Maralngurra was little, the family worked at Russ Jones’ Arnhem Timber Camp. Maralngurra was also close to the famous rock art artist Nayombolmi, and called him grandfather.

Maralngurra and her family, along with Nayombolmi and his wife, walked Country throughout her childhood. When not staying in stringy-bark huts, they took shelter in the rocks. Maralngurra described her father and grandfather’s habitual painting in rock shelters where the family stayed.

The men “wanted to sit and do paintings all the time,” she said. Painting was part of daily life in the wet season when the family camped in rock shelters. As a child, she witnessed the creation of rock art across numerous sites.

Today, Maralngurra still remembers these journeys and the painting. She tells of when her family visited these sites, how old she was and where she slept. She can remember who came with them, how long they stayed, what they ate. She remembers the details and stories associated with the rock art.

Children like Maralngurra were often present at the creation of rock art. Sometimes they worked to prepare the pigments themselves. Sometimes the process of painting was their entertainment. Maralngurra tells of her work grinding pigment and gathering food and water.

As she helped the old men, she asked them to tell her the stories of their artworks. So she learned the stories of Country, the Ancestors and their exploits, as well as the protocols of how she and her kin must live today. The process of creation of the artwork was her education.

According to traditional western understandings of archives, they should ideally be “by-products” of human activity. That is, they should not be created with their future as a record in mind. It is this unselfconsciousness that enables them to provide rich evidence for past human activity – they are documents created without thought to influencing future historians.

Although it might be assumed that Aboriginal rock art was created to record events for posterity, that is often not its main function.

Much of the rock art at Kakadu is the residue of education and knowledge sharing. It is also evidence of how Josie’s ancestors passed the time, telling stories as they painted. Some paintings were originally ways for artists to develop their technical skills. Some were painted simply for fun. Other art was created in a ritual context and may be the remains of ceremonial secret knowledge. Either way, the art at Kakadu was created primarily for its immediate uses.

Yet some of the art is future-oriented, created with the express purpose of embedding memory. Sometimes, for instance, the very bodies of children like Maralngurra were represented on the rocks. In some places, Maralngurra’s own hands joined the painting on the rocks. The outline of her child-sized hands are there, along with those of others. They were created by her father Djimongurr by blowing delek – white pigment – onto her hand and rock, leaving a negative shadow print of her hand on the panel.

Four of Josie’s hand stencils. Photograph by the Pathway project.

Hand stencils are present in rock art around the world. Sometimes they were made to record one’s connection to the place; sometimes they are like a signature, “signing the land”. Sometimes they have been altered to form memorials for people who have died. Sometimes they are located in hard-to-reach places – surely evidence of the agility and daring of the artist. Whatever their intent, they declare to generations thereafter: “we were here”.

Literally embodying the rocks drew connections between people and landscape, emphasising belonging, and preparing children for their future responsibilities. One generation’s hand-stencils are later witnessed by those to come.

Maralngurra’s life history and experience – the memory of placing her hands on the rockface as a little girl – was inscribed into the landscape, becoming a touchstone for memory.

Rock art is invaluable as an archive for a First Nations history that stretches back millennia. Until now, however, it has not been recognised as such – at least, not by non-Indigenous scholars. That is because this Aboriginal archive is a different kind of repository, for a different kind of history, grounded in a different kind of time than the limited pasts many Australians (and academics) are used to knowing.

By their very nature, and by design, these repositories can only be read with Traditional Owners as guides. So much the better. We hope that by seeing history on the rocks, history itself might become ever richer.

Laura Rademaker, ARC DECRA Research Fellow, Australian National University; Joakim Goldhahn, Rock Art Australia Ian Potter Kimberley Chair, The University of Western Australia; Mr Gabriel Maralngurra, Co-manager, Injalak Arts, Indigenous Knowledge; Mr Kenneth Mangiru, Danek Senior Traditional Owner, Indigenous Knowledge; Paul S.C. Taçon, Chair in Rock Art Research and Director of the Place, Evolution and Rock Art Heritage Unit (PERAHU), Griffith University; and Sally K. May, Associate Professor

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image: Josie Maralngurra touching her hand stencil, made when she was around 12. In the background are three white barramundi fish figures with red line-work, also created by her father Djimongurr. Photograph by the Pathway project.

Will AI kill our creativity? It could – if we don’t start to value and protect the traits that make us human https://realkm.com/2023/12/03/will-ai-kill-our-creativity-it-could-if-we-dont-start-to-value-and-protect-the-traits-that-make-us-human/ https://realkm.com/2023/12/03/will-ai-kill-our-creativity-it-could-if-we-dont-start-to-value-and-protect-the-traits-that-make-us-human/#respond Sat, 02 Dec 2023 22:07:03 +0000 https://realkm.com/?p=30359 Cameron Shackell, Queensland University of Technology

There’s no doubt generative AI’s ability to rapidly produce new texts, images and audio is shaking up creative jobs.

In the long-running Writers Guild of America strike, a central sticking point has been the guild’s demand that AI be used only as a research tool and not a replacement for its members. For many creative types, it seems harder to earn a living with AI around.

At the same time, however, AI tools are often seen as a springboard to next-level human creativity. Technologies such as Anthropic’s chatbot Claude and OpenAI’s ChatGPT and Dall-E 3 offer a seductive creative experience.

Will these tools help us survive and thrive as a creative species? Or are they the death knell of creativity as we know it?

What is creativity?

In her book The Creative Mind, cognitive science expert Margaret Boden distinguishes between two types of human creativity.

Psychological or personal (p-type) creativity happens when an individual thinks something for the first time – even if others have thought it separately before. One example is a child realising water can take any shape.

Essentially, p-type creativity is learning something useful and, in the process, synchronising our thoughts with others.

Historical creativity (h-type), on the other hand, happens when an individual thinks something that has never been thought before. One example would be Archimedes’s “eureka” moment in the bath, which supposedly led to him discovering the law of buoyancy.

The more someone’s creativity subsequently affects other people’s thinking, the more momentous and enduring we consider their legacy.

This is why Wandjina rock art in the Kimberley, Homer’s Iliad, Pablo Picasso’s Guernica, Frank Lloyd Wright’s Fallingwater house and Albert Einstein’s Annus Mirabilis papers are all considered exceptional works left behind by exceptional humans. They are important because they continue to shape our thinking.

Generative AI doesn’t belong in either category

AI obviously has the potential to promote both p-type and h-type creativity. It can lead us to insights about biology, history and mathematics, and help us create texts and images that may be useful or thought-provoking.

But there is one key difference between human creativity and AI-driven creativity: the latter doesn’t stem from the evolutionary clash of mind and world.

AI models don’t contain reality. They rely on the complex statistical abstraction of digital data. This limits their real-world creative significance and their capacity to produce “eureka” moments.

To differentiate AI-driven creativity from old-fashioned creativity, I have proposed a new term: generic, or g-type, creativity. It formalises the fact that while AI models are capable of provoking new thought, they are limited by the underlying data they have been trained on.

The big risk: a generic spiral

We can expect an explosion in g-type creativity in our future. The danger here is that our increasing use of AI could make us think too much alike, leading to a decrease in cognitive diversity and an increase in cultural tightness.

In this scenario, societies would become more rigid in the norms they enforce, and less tolerant of deviations from the status quo. At a population level this would be a creativity killer.

The threat isn’t just AI-generated movies, TV, books and art. In the future, the homes we live in, the cars we drive (or won’t have to drive) and our shared public spaces will all be shaped by AI. We may see our thinking become homogenised under the pressure of increasingly similar environments and experiences.

This sameness further puts us at risk of a generic spiral. AI models are trained on content we create. So the more we use AI for g-type creativity, the more generic our content will become – and since this content will be used to further train AI, the more generic AI outputs will become.

While this might be useful for certain specialist tasks – such as consistently interpreting law – it’s worrying to contemplate the kind of Orwellian political economy a generic spiral might give rise to.
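
A toy simulation can make this feedback loop concrete. The sketch below is not a model of any real AI system – the frequency-squared weighting is simply an invented stand-in for a model that favours whatever was already most common in its training data – but it shows how variety can collapse when each generation is trained on the previous generation’s outputs.

```python
import random
from collections import Counter

random.seed(0)  # make the run reproducible

# Start with a varied pool of "ideas" (labels only; purely illustrative).
pool = [f"idea_{i}" for i in range(50)] * 2

def next_generation(pool, size=100):
    """Produce the next generation of content.

    The 'model' favours whatever was already common in its training data
    (frequency-squared weighting) - a crude stand-in for regression toward
    the generic, not a description of how any real system is trained.
    """
    counts = Counter(pool)
    ideas = list(counts)
    weights = [counts[idea] ** 2 for idea in ideas]
    return random.choices(ideas, weights=weights, k=size)

for generation in range(6):
    print(f"generation {generation}: {len(set(pool))} distinct ideas")
    pool = next_generation(pool)
```

Run as written, the number of distinct ideas falls generation after generation. Milder biases toward the familiar shrink the pool more slowly, but without an injection of genuinely new material the direction of travel is the same.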

Can we enjoy AI and also preserve creativity?

Balancing and reconciling human creativity with AI isn’t as simple as going for regular walks in nature – although that will probably help.

Generative AI may well be a transformative technology to rival the printing press or steam engine. Such juggernauts are difficult to resist; we collectively get swept up in the change, uncertainty and alienation they foment.

Some of the best minds of our generation are already abandoning other pursuits to try their luck at building and using advanced AI models.

Our best chance to remain truly creative is to protect and privilege the human over the artificial. Intellectual property law is key. Any further moves towards legal personhood for AI – such as allowing AI a “fair use” right to train itself on copyrighted material, or applying copyright to AI outputs – will erode our creative system and risk a generic spiral in human creativity.

Cameron Shackell, Sessional Academic and Visitor, School of Information Systems, Queensland University of Technology

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.

AI chatbots are coming to your workplace but are not necessarily coming for your job https://realkm.com/2023/11/24/ai-chatbots-are-coming-to-your-workplace-but-are-not-necessarily-coming-for-your-job/ https://realkm.com/2023/11/24/ai-chatbots-are-coming-to-your-workplace-but-are-not-necessarily-coming-for-your-job/#respond Fri, 24 Nov 2023 00:48:45 +0000 https://realkm.com/?p=30278 Kai Riemer, University of Sydney and Sandra Peter, University of Sydney

Artificial Intelligence chatbots are everywhere. They have captured the public imagination and that of countless Silicon Valley inventors and investors since the arrival of ChatGPT about a year ago.

The stunning human-like abilities of conversational AI – a form of artificial intelligence that enables computers to process and generate human language – have sparked widespread optimism about their potential to transform workplaces and increase productivity.

In what may be a world first, a UK school has appointed an AI chatbot as a “principal headteacher” to support its headmaster. While little is known about the nature of the AI behind it, the chatbot is meant to advise staff on issues such as helping pupils with ADHD and writing school policies.

But, before deploying chatbots in the workplace, it is crucial to understand what they are, how they work and how to use them responsibly.

How chatbots work

Much has been written about the astounding abilities of generative AI, and its sometimes surprising ability to get things wrong.

For example, an AI chatbot can craft a persuasive scholarly argument but, to use chatbot terminology, can “hallucinate” the list of references or get simple facts wrong. To understand why hallucinations occur, it is important to understand how these chatbots work.

At their core, AI chatbots are powered by large language models (LLMs), large neural networks trained on massive datasets of text (what we lovingly call “the Internet”).

Importantly, LLMs do not store any data or knowledge in any traditional sense. Rather, when they are built (or “trained”) they encode, in large statistical structures, sophisticated content or language patterns contained in the training data.

Simply speaking, text is turned into numbers, or probabilities.

When in use, LLMs no longer have access to this training data. So when we ask one a question, the response is generated from scratch, every time. Technically, everything is “hallucinated”. When AI chatbots get things right, it is because much of human knowledge is patterned and embedded in language – not because the model “knows” anything.

By design, AI chatbots cannot produce definitive factual answers. They are probabilistic, not deterministic systems, and therefore cannot be relied on as authoritative sources of knowledge. But their ability to recognise linguistic patterns makes them excel at helping humans with tasks that involve text generation or enhancement.

Writing persuasive arguments follows certain patterned regularities, whereas factual answers cannot reliably be generated from probabilistic patterns.
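
A minimal sketch makes the “probabilistic, not deterministic” point concrete. The two-word contexts and the probabilities below are invented for illustration – a real LLM derives its distributions from billions of learned parameters rather than a hand-written table – but the mechanism of weighted random choice is the same in spirit.

```python
import random

# Invented next-word probabilities, keyed by the two preceding words.
# A real LLM computes such distributions on the fly; it does not store a table.
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Australia": 0.3, "the": 0.1},
    ("capital", "city"): {"centre": 0.5, "limits": 0.5},
}

def sample_next(context):
    """Pick the next word at random, weighted by its probability."""
    options = next_word_probs[context]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# The same prompt can yield different continuations on different runs:
# the answer is sampled, not looked up.
for _ in range(3):
    generated = ["the", "capital"]
    for _ in range(2):
        generated.append(sample_next(tuple(generated[-2:])))
    print(" ".join(generated))
```

Turning the randomness off (always taking the most likely word) makes the output repeatable, but not factual: the distributions still encode patterns in language, not knowledge.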

The new workplace assistant

Don’t think of your AI chatbot as an omniscient artificial brain, but as a gifted graduate student assigned to be your personal work assistant.

Like an eager grad student, they work tirelessly and mostly competently on assigned tasks. However, they are also a little bit cocky. Always overconfident, they might take risky shortcuts and provide answers that sound good but lack any factual grounding.

It is wise always to verify chatbot outputs, much as you would double-check a grad student’s work.

Because of their probabilistic foundation they don’t comprehend your question with any human understanding. But in the right roles, when used appropriately, chatbots greatly augment productivity on language-related tasks.

Working with AI chatbots

The three levels of chatbot capability can be summed up using the acronym ACE – assisting, creating, exploring. Example prompts for each level are sketched after the list below.

  • assisting – chatbots can assist with many writing tasks, such as summarising, analysing and refining text, or extracting key points and themes. They can express arguments in academic text in more accessible ways.
  • creating – chatbots can generate original text, turning dot points into business reports or ideas. They can mimic different genres and write in different styles. As they encode countless bodies of text from different domains, they can be told to take a perspective, impersonating business strategists, scholars, marketers or journalists, to create content useful across many professions.
  • exploring – chatbots make intriguing “discussion partners” about hypothetical ideas (“what would happen if…”). When exploring new issues, let the chatbot set you questions then get it to answer them. If you want to explore what makes a good project report, or social media post, ask the chatbot to write one and then reflect on why it wrote what it did.
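
As a rough illustration of the three levels above, here are the kinds of prompts each might involve. The prompts are invented examples rather than anything from a particular product, and the snippet only prints them – wiring them up to an actual chatbot is left to whichever interface you use.

```python
# Invented example prompts for each ACE level (illustrative only).
ace_prompts = {
    "assisting": (
        "Summarise the key points of the following meeting notes "
        "in five plain-English bullet points: ..."
    ),
    "creating": (
        "Acting as a marketing strategist, turn these dot points "
        "into a one-page business report: ..."
    ),
    "exploring": (
        "What would happen if our team moved to a four-day week? "
        "Ask me three clarifying questions, then answer them yourself."
    ),
}

for level, prompt in ace_prompts.items():
    print(f"[{level}] {prompt}")
```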

The business of chatbots

What do we know about the use of AI chatbots in the workplace so far? Some initial studies point to significant productivity gains.

A pilot project at Westpac found a 46% productivity gain in software coding tasks, with no drop in quality. The experiment compared groups of developers using AI chatbots for a range of programming tasks with a control group that did not.

A study by global management company Boston Consulting Group also reported significant improvements.

In a controlled experiment, consultants used AI chatbots to problem-solve and develop new product ideas, which involved both analytical work and persuasive writing. Those who worked with the chatbot finished 12.2% more tasks, 25.1% more quickly and at 40% higher quality than those who didn’t.

In yet another case, an AI chatbot is reportedly being used by a US software company to help write proposals for clients. It scours thousands of internal files for relevant information to generate a suitable response, saving the company time.

These cases give glimpses into the future of AI chatbots, where companies fine-tune generative AI models with their own data or documents, using them for specialist roles such as coders, consultants or call-centre workers.

Many workers are worried AI will be used to automate their work. But given the probabilistic nature of the technology, and its inherent lack of reliability, we do not see automation as the most likely area of application.

AI chatbots might not be coming for your job after all, but they are certainly coming for your job description. AI fluency – the skill of understanding and working with AI – will soon become essential, much as working with PCs is today.

Finally, you might ask, did we let a chatbot write this article? Of course we didn’t. Did we use one in writing it? Of course we did, much like we used a computer, and not a typewriter.


The Conversation requires authors to disclose if they have used AI in the preparation of an article. Articles that use AI for fact-finding or idea generation will generally not be accepted.

Kai Riemer, Professor of Information Technology and Organisation, University of Sydney and Sandra Peter, Director of Sydney Executive Plus, University of Sydney

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.

Nobody knows how consciousness works – but top researchers are fighting over which theories are really science https://realkm.com/2023/11/21/nobody-knows-how-consciousness-works-but-top-researchers-are-fighting-over-which-theories-are-really-science/ https://realkm.com/2023/11/21/nobody-knows-how-consciousness-works-but-top-researchers-are-fighting-over-which-theories-are-really-science/#respond Tue, 21 Nov 2023 04:40:36 +0000 https://realkm.com/?p=30173 Tim Bayne, Monash University

Science is hard. The science of consciousness is particularly hard, beset with philosophical difficulties and a scarcity of experimental data.

So in June, when the results of a head-to-head experimental contest between two rival theories were announced at the 26th annual meeting of the Association for the Scientific Study of Consciousness in New York City, they were met with some fanfare.

The results were inconclusive, with some favouring “integrated information theory” and others lending weight to the “global workspace theory”. The outcome was covered in both Science and Nature, as well as larger outlets including the New York Times and The Economist.

And that might have been that, with researchers continuing to investigate these and other theories of how our brains generate experience. But on September 16, apparently driven by media coverage of the June results, a group of 124 consciousness scientists and philosophers – many of them leading figures in the field – published an open letter attacking integrated information theory as “pseudoscience”.

The letter has generated an uproar. The science of consciousness has its factions and quarrels but this development is unprecedented, and threatens to do lasting damage.

What is integrated information theory?

Italian neuroscientist Giulio Tononi first proposed integrated information theory in 2004, and it is now on “version 4.0”. It is not easily summarised.

At its core is the idea that consciousness is identical to the amount of “integrated information” a system contains. Roughly, this means the information the system as a whole carries over and above the information its parts carry on their own.

Many theories start by looking for correlations between events in our minds and events in our brains. Instead, integrated information theory begins with “phenomenological axioms”, supposedly self-evident claims about the nature of consciousness.

Notoriously, the theory implies consciousness is extremely widespread in nature, and that even very simple systems, such as an inactive grid of computer circuitry, have some degree of consciousness.
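
For readers who want a numerical feel for “information the whole has over and above its parts”, the toy calculation below computes total correlation (multi-information) for a pair of binary variables. To be clear, this is not the Φ that integrated information theory actually defines – Φ is built from a system’s cause–effect structure and a search over partitions – it is simply the most elementary quantity that is zero when the parts are independent and positive when the whole carries something extra. The example distributions are invented.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """H(A) + H(B) - H(A,B): information the pair carries beyond its parts.

    A toy proxy for 'integration' - NOT integrated information theory's Phi.
    """
    h_a = entropy(joint.sum(axis=1))   # marginal distribution of A
    h_b = entropy(joint.sum(axis=0))   # marginal distribution of B
    h_ab = entropy(joint.flatten())    # joint distribution of (A, B)
    return h_a + h_b - h_ab

# Invented joint distributions over two binary variables A and B.
correlated = np.array([[0.45, 0.05],
                       [0.05, 0.45]])   # A and B usually agree
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])  # A and B are unrelated

print(round(total_correlation(correlated), 3))   # about 0.531 bits
print(round(total_correlation(independent), 3))  # 0.0 - the whole adds nothing
```

Tononi’s actual measure asks a causal, counterfactual version of the same question – how far the system’s cause–effect structure is irreducible to that of its parts – which is also one reason it is so hard to compute for anything but very small systems.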

Three criticisms

This open letter makes three main claims against integrated information theory.

First, it argues integrated information theory is not a “leading theory of consciousness” and has received more media attention than it deserves.

Second, it expresses concerns about its implications:

If [integrated information theory] is either proven or perceived by the public as such, it will not only have a direct impact on clinical practice concerning coma patients, but also a wide array of ethical issues ranging from current debates on AI sentience and its regulation, to stem cell research, animal and organoid testing, and abortion.

The third claim has provoked the most outcry: integrated information theory is “pseudoscience”.

Is integrated information theory a leading theory?

Whether you agree with integrated information theory or not – and I myself have criticised it – there is little doubt it is a “leading theory of consciousness”.

A survey of consciousness scientists conducted in 2018 and 2019 found almost 50% of respondents said the theory was either probably or definitely “promising”. It was one of four theories featured in a keynote debate at the 2022 meeting of the Association for the Scientific Study of Consciousness, and was one of four theories featured in a review of the state of consciousness science that Anil Seth and I published last year.

By one account, integrated information theory is the third-most discussed theory of consciousness in the scientific literature, outstripped only by global workspace theory and recurrent processing theory. Like it or not, integrated information theory has significant support in the scientific community.

Is it more problematic than other theories?

What about the potential implications of integrated information theory – its impact on clinical practice, the regulation of AI, and attitudes to stem cell research, animal and organoid testing, and abortion?

Consider the question of fetal consciousness. According to the letter, integrated information theory says “human fetuses at very early stages of development” are likely conscious.

The details matter here. I was the co-author of the paper cited in support of this claim, which in fact argues that no major theory of consciousness – integrated information theory included – posits the emergence of consciousness before 26 weeks gestation.

And while we should be mindful of the legal and ethical implications of integrated information theory, we should also be mindful of the implications of all theories of consciousness.

Are the implications of integrated information theory more problematic than those of other leading theories? That’s far from obvious, and there are certainly versions of other theories whose implications would be every bit as radical as those of integrated information theory.

Is it pseudoscience?

And so, finally, to the charge of pseudoscience. The letter provides no definition of “pseudoscience”, but suggests the theory is pseudoscientific because “the theory as a whole” is not empirically testable. It also claims integrated information theory wasn’t “meaningfully tested” by the head-to-head contest earlier this year.

It’s true the theory’s core tenets are very difficult to test, but so too are the core tenets of any theory of consciousness. To put a theory to the test one needs to assume a host of bridging principles, and the status of those principles will often be disputed.

But none of this justifies treating integrated information theory – or indeed any other theory of consciousness – as pseudoscience. All it takes for a theory to be genuinely scientific is that it generates testable predictions. And whatever its faults, the theory has certainly done that.

The charge of pseudoscience is not only inaccurate, it is also pernicious. In effect, it’s an attempt to “deplatform” or silence integrated information theory – to deny it deserves serious attention.

That’s not only unfair to integrated information theory and the scientific community at large, it also manifests a fundamental lack of faith in science. If the theory is indeed bankrupt, then the ordinary mechanisms of science will demonstrate as much.

Tim Bayne, Professor of Philosophy, Monash University

Article source: This article is republished from The Conversation under a Creative Commons license. Read the original article.

Header image source: Colin Lloyd on Unsplash.
