Ideas for a brighter future for all

For Griffith University's A Better Future for All series, in partnership with HOTA, Home of the Arts, Kerry O'Brien welcomed Toby Walsh

Artificial intelligence has the capacity to completely transform the world, changing everything from interplanetary travel to cooking perfect pasta. But what are the costs? If everything changes through AI, how will we deal with the downsides?

The latest instalment of Griffith University’s A Better Future for All series sees journalist Kerry O’Brien exploring the future and impact of AI with leading global thinker Professor Toby Walsh. His work not only explores the detailed technology of AI, but also raises a host of critical questions about its impact and morality.

There are few topics more important to the way we live than trying to understand the scope and the consequences of AI. Are we, as Elon Musk warned, “summoning the demon”? Is the potential for danger greater than the promises of instant, encyclopaedic understanding? Artificial intelligence is challenging our understanding of knowledge, learning and human capability.

Don’t miss this vital conversation examining the promise and pitfalls of AI, and what it means for humanity’s future.

Professor Toby Walsh

Toby Walsh is an ARC Laureate Fellow and Scientia Professor of AI at University of NSW (UNSW) and CSIRO Data61. Chief Scientist of UNSW.AI, UNSW’s new AI Institute, Toby is a strong advocate for limits to ensure AI is used to improve our lives.

He is recognised as a thought leader on AI and has spoken at the UN, and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being “banned indefinitely” from Russia. He is a Fellow of the Australian Academy of Science and was named on the international Who’s Who in AI list of influencers. He has written three books on AI for a general audience, the most recent being Machines Behaving Badly: The Morality of AI.


Mr Mik Auckland

Good evening, everyone. My name is Mik Auckland. I’m the interim CEO at HOTA, and welcome to A Better Future for All. Before we begin, HOTA and Griffith University proudly acknowledge the traditional custodians of the land on which we’re situated, the Kombumerri families of the Yugambeh Language region. We pay our respects to elders past and present, and recognise their continuing connections to the lands, the waters, and their extended communities throughout Southeast Queensland. I’d also like to acknowledge Mr. Rob Molhoek MP, Shadow Minister for Mental Health, Shadow Minister for Drug and Alcohol Treatment and Shadow Minister for Families and Seniors, representing the Leader of the Opposition, Mr. David Crisafulli MP, and Councillor William Owen-Jones of the Gold Coast City Council. Thank you all for being here for tonight’s conversation, hosted, as always, by the inimitable Kerry O’Brien.

If you’re anything like me, you probably have a distinct interest in this evening’s topic. You’re all here, as is our guest, for the expert insights to be shared with us about what the rise of AI means for researchers, policymakers and global citizens in general. The field of artificial intelligence has been growing for decades, but it seems, in recent months, to have experienced exponential growth in its mainstream awareness, advancement and potential application. With the arrival and expansion of now seemingly ubiquitous platforms such as ChatGPT, DALL·E and Midjourney, modern AI is either already revolutionising, or about to start revolutionising, countless industries, education included. With its increasing ability to generate prose and artwork nearly indistinguishable from that created by the human hand, to reimagine complex processes, and to create social impact in ways we may not yet truly fathom, the AI revolution is rife with opportunities and challenges. For example, search platforms such as Bing and Google are exploring ways to integrate AI into their engines, which could fundamentally change the way we move about online. So too can it be used for positive ends in a range of fields from healthcare to entertainment, as well as more menially in our everyday lives.

At the same time, we know enterprising school and university students are already using AI to provide heavy assistance with their assessment. It can pass entry exams for law and medicine, and even generate convincing photos of events that haven’t actually happened. All of these realities carry significant implications for us as a society. So while there’s no question that AI has the capacity to do immeasurable good, as with everything we create, this human-made, human-educated technology also has limits and risks. There is a pressing need for parity between our understanding of the power at our fingertips and our ability to wield it responsibly for everyone’s benefit. Evidently, the ethical issues that underscore the application of artificial intelligence at a societal level are exceptionally complex. Fortunately for us, there are few more qualified to talk about all things artificial intelligence than tonight’s guest, Professor Toby Walsh, one of the world’s leading thinkers exploring the technological and philosophical impacts of AI, an ARC Laureate Fellow and Scientia Professor of AI at the University of New South Wales and CSIRO Data61. Toby has spent more than 20 years working around the world in the fields of machine learning and robotics. Adding to his considerable and acclaimed body of work and advocacy, which I’ll let Kerry tell you more about, Toby has most recently contributed a lead article to Creation Stories, the upcoming 20th anniversary edition of the Griffith Review, out in early May. We’re delighted to have such an esteemed thinker and expert in one of the world’s most dynamic and rapidly evolving fields here with us this evening. So without further ado, please join me in welcoming Professor Toby Walsh and Kerry O’Brien to the stage for tonight’s instalment of A Better Future for All.

Kerry O’Brien  

Toby. Kerry. We can get into silly arguments over whether the industrial age had a greater impact on civilisation than the digital age. But I don’t think there’s any argument that nothing in human history has brought about more rapid, more intense and more frequent change than digitalisation, nor so challenged human capacity to keep up with that change, let alone anticipate it and plan ahead for the social, economic and political impacts. And that’s the context in which we come to talk about artificial intelligence tonight. I think it’s important to start by defining the nature of human intelligence, even if we all do think we know what that is.

Professor Toby Walsh

Well, I’m not sure we do know what human intelligence is. And that’s the problem, the fundamental challenge of working in a field where intelligence is so poorly defined. How can we build artificial intelligence when we so little understand what intelligence is itself? I mean, there are some broad characteristics that we can talk about. It’s about perceiving the world: our ability to see the world, hear the world, those are things that we’re trying to give computers the ability to do. Our ability to reason about the world, make decisions, those are things that we try to get computers to do. And then acting in the world, which is why we end up in the land of robotics, where we’ve actually embodied those algorithms into machines that go out. And then the other fundamental component, one that you hear about so much today, of course, is that most of your intelligence consists of things that you weren’t born with. You couldn’t read or write or do most of the things that you can do now; they were things that you learned. And so a significant component of artificial intelligence is the field of machine learning, where we’re trying to teach computers to do things, just like we learn to do things. But it’s a false friend to think of it like human learning, because human learning is quite different. If I learn how to ride a bicycle, there’s not much I can really help you learn about it. You’re going to have to fall off the bicycle and hurt yourself probably as many times as I did. I can give you a few tips, but you’re going to have to learn that skill for yourself. And that’s one of the fundamental advantages that machines have: they don’t have to learn everything from scratch. I can take the code from one machine and load it onto another machine, and now it’s learned all the things that the other machine knows. And you actually see that.
I mean, overnight, for example, if you drive a Tesla: all Teslas are learning from every other Tesla. It’s a planet-wide process of learning. So if a Tesla experiences a strange circumstance, a shopping trolley running down the road, that Tesla will see that, it will store that information, and that will be uploaded to all the Teslas on the planet. And we’re not used to that. We’re used to learning everything painfully for ourselves. If you’ve learned German, it doesn’t help me learn German; I’ve got to do that by myself. Whereas we’re going to be surprised, I think, by the speed with which machines learn, because it’s not like humans learning; it actually has a distinct advantage.
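The fleet-learning idea Walsh describes, one machine learns and every machine benefits, can be sketched in a few lines. Everything here is illustrative (a toy perceptron-style update, made-up car names), not Tesla’s actual pipeline; it only shows the principle that learned parameters can simply be copied.

```python
# Toy illustration: machine learning "transfers" by copying parameters.
# One model learns from an unusual example; every other model gets the
# update for free. (Purely illustrative, not Tesla's actual system.)

def train_step(weights, example, label, lr=0.1):
    """One perceptron-style update of a trivial linear model."""
    prediction = sum(w * x for w, x in zip(weights, example))
    error = label - prediction
    return [w + lr * error * x for w, x in zip(weights, example)]

# A fleet of identical models.
fleet = {name: [0.0, 0.0] for name in ("car_a", "car_b", "car_c")}

# car_a encounters a strange event (say, a runaway shopping trolley)
# and learns from it.
fleet["car_a"] = train_step(fleet["car_a"], example=[1.0, 0.5], label=1.0)

# Overnight, the learned weights are copied to every car in the fleet.
for name in fleet:
    fleet[name] = list(fleet["car_a"])

# Every car now behaves identically on the new situation.
assert fleet["car_b"] == fleet["car_a"] == fleet["car_c"]
```

Unlike a person teaching another person to ride a bicycle, the copy step costs nothing and loses nothing, which is the advantage Walsh is pointing at.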

Kerry O’Brien  

But by the same token, you know, my youngest daughter has delivered my seventh grandchild. (Congratulations.) And I’ve been watching in awe, possibly for the first time with a real kind of sense of learning myself, watching this infant, this baby, taking everything in around him. And I’m sitting there thinking, this is a blank slate. There’s some genetic stuff there, but in terms of what he is going to learn, it’s a blank slate. And what is going to be laid in over the years, with his own computer network, is human feeling, is emotion, is love, anger. I mean, I don’t know how you define wisdom and all of that. But as the knowledge builds, you have these perceptions that kick in around that?

Professor Toby Walsh

Well, I’d push back a little: we’re not complete blank slates. There’s a genetic map there. I suspect your grandchild has a better chance of having, you know, a cutting investigative reporter’s mind; there might be some genetics behind that. So we’re not complete blank slates. And if we look at language, different languages around the world share some common structures, so the brain seems tuned to learn particular things. But you’re right, we are also very much a product of the environment that we’re in. Which is also the troubling aspect when you see systems like ChatGPT, where they have literally poured the contents of the internet into them, and you think, well, wait a second, there’s a lot of dubious content on the internet. There’s a lot of offensive content. There’s a lot of distasteful content. There’s a lot of racism, sexism. And if we’re not careful, we’re starting to see that being reflected back on us. And that’s not so much a reflection on the machines as a reflection on us. It’s reflecting back human culture, at scale.

Kerry O’Brien  

So when you talk about the fundamental deceit at the heart of artificial intelligence, what do you mean exactly?

Professor Toby Walsh

Well, it goes to that word at the start, artificial, which is that we’re trying to build machines that mimic, in some sense, human intelligence, that do the things that we say are intelligent. I call it a cardinal sin, and it goes back to the very beginning of the field, to 1956, when the field started. Very few scientific fields actually have a data point in time where you can say the field began. This one began in 1956 because John McCarthy, whom I had the pleasure to know, came up with the term artificial intelligence and held a conference where he brought together some like-minded individuals. Computers had just started to become available in the 1950s, and the idea was: let’s see what we can do with this; let’s see if we can actually build human-like intelligence in it. And that’s when the field started. But when we tried to do that, we started to try to mimic human intelligence. Indeed, people might have heard of Alan Turing, the father of the computer, who wrote what is generally considered to be the first scientific paper about artificial intelligence, and posed what has now become the Turing test, the measure by which maybe we could say we’ve succeeded. Since intelligence, to go back to the first question, is such a difficult thing to define, how will we know when we’ve succeeded at artificial intelligence? Well, Alan Turing put forward this very interesting, very functional idea: if we can sit down and quiz the computer, and we can’t tell the computer apart from a person, then for all intents and purposes we might as well say it’s thinking. That’s what we now know as the Turing test. But at its heart is a deceit: can the computer fool us that it is human? And indeed, if you look at the questions that Alan Turing posed in the first Turing test, they’re all questions of deceit. They’re all questions of whether the machine can fool us.

Kerry O’Brien  

And how would a computer respond to those questions today?

Professor Toby Walsh

Unfortunately, a computer can now answer those questions successfully, deceitfully; it can pass for a human. So that test is now, in some sense, passed. Machines can easily fool us that they’re human, and that’s the worrying thing that I see today. And that’s what that Griffith Review essay that I wrote is about: how we’re now increasingly being fooled by the machines. It’s deep fakes, where the computer is making pictures. The picture of the Pope, people might not realise this, the picture of the Pope in the puffer jacket was a deep fake; it was generated by Stable Diffusion, generated by an algorithm. The Pope has never owned a puffer jacket, although, and I did tweet this actually, he might want one now. If he had any sense of, you know, winning the youth vote, he should go out and buy himself a puffer jacket. It would be such a hit. Look, he’s missing a trick there. So it’s deep fake images, fake video, you can clone people’s voices. I’ve cloned my voice, and I can just type away and you could hear me speak. It’s not me, it’s a computer. And now, of course, tools like ChatGPT, which write human-like text that fools us. And the worrying thought there is that these are tools of mass misinformation, mass persuasion.

Kerry O’Brien  

Well, let’s talk more about that. Because when you talk about the rapidity of development of these large language models, you’ve touched a little bit on their significance now. But where are they headed?

Professor Toby Walsh

I do think there will come a time where, unless you have the pleasure to be in the room with us, to hear with your own ears and see with your own eyes, you have to entertain the idea that it’s fake, it’s synthetic, because there is no way that we can distinguish the synthetic media that we can make from the real stuff. And that is very troubling, and we have to be very aware of it. I’m somewhat concerned that the upcoming US presidential election next year is going to be, just as the Trump election was potentially swung by the misuse of social media, swung by the misuse of this latest technology, in a way that’s going to influence people. And unfortunately, we’ve already seen this. As an example, there is, as far as we know, true video of Trump saying some really distasteful things about women. As far as I believe it’s known, this is real, honest video of Trump saying these things, and he’s simply dismissed it: it’s a deep fake. And I suspect some of his supporters believe him that it is a deep fake, so he’s even less accountable than he would have been if these deep fakes didn’t exist.

Kerry O’Brien  

And the other side of the coin, so that’s one manipulation, one deception, the other is the kind of filling the zone with shit, which is the phrase one of his acolytes coined: throwing so much misinformation onto the internet that the mainstream media was essentially crippled and could not keep up, to the extent that people are then bamboozled. That was in 2018, I think, when he said that. In 2023, I suppose you could say that in spades? How are we going to be going in 2026, 2030, in terms of how computers interact with, and enhance, the capacity for unethical, power-driven people to manipulate us?

Professor Toby Walsh

Yeah, I mean, you’ve put your finger on something that is really quite troubling. And I do worry that we’re going to recreate the harms that we did with social media, with this new technology. And to go back to what you said at the start, the problem is one of scale: we can scale this. That’s the great thing about computers; they do something once, you put a loop around it, and they can do it 100 times, 1,000 times. And also now we can personalise it. We can take what we learned about you from social media, and we can get ChatGPT to write the really personalised content that’s going to appeal to you. If I learn on social media that you’re a Wordle expert, I can send you something that’s going to talk about Wordle. If it’s golf, I can talk about golf. I can get the chatbot to do it. (I’d rather you didn’t.) And just imagine this; this is something that you could build with today’s technology, and it wouldn’t cost a lot of money. You could have a Trump bot. And indeed, there already is a Trump bot, trained on Trump’s speeches, his tweets. It’s not actually a very high bar to meet, but it will speak like Trump; it will say all the things that push all the buttons of Trump supporters. Now you can clone Trump’s voice; there are lots of clones of Trump’s voice. So you can connect the Trump bot to the cloned Trump voice. Now you can ring up every voter in the United States and have a two-way conversation. Trump can speak to every one of his voters individually.

Kerry O’Brien  

Presumably, then, if the other side was equally unethical, they could have Trump ringing people up and abusing them.

Professor Toby Walsh

Yes, they could. Although sometimes you wonder what Trump would have to say to stifle support for him. But yes, you could. And so this does take us to a world, and sadly we already see this, and this is something you should be aware of, because the only defence is your education here: people being rung up by the voice of a loved one, who says, I’ve been in an accident, I need some lawyer’s fees, you’ve got to wire me $1,000 immediately. And it turns out, this was in Canada, it turns out this was a fake, it was a clone. And you’re going to see crime like this. It used to be SMSs you received. Now it’s going to be the voices of your loved ones.

Kerry O’Brien  

So how effectively can AI be programmed now to recognise fact from mistruth, the ethical from the unethical?

Professor Toby Walsh

Well, it can’t. I mean, it doesn’t understand the world like we understand the world. This is one of these deceits again: we’re deceived because, at the surface, it looks like it’s doing a good job of saying the right things. But it doesn’t have the deep understanding, the model of the world, that we have. It’s saying what’s probable, not what’s true. These chatbots are more like autocomplete on your phone: they know the probability of what the next word is going to be; they say the sorts of things that you hear on the internet. And so we’re easily lulled into a false sense of security that they really understand what they’re saying. And when people say, you know, the chatbot lied to me, I say, no, it’s not lying to you, because lying would require it to know what was true; it would require some deliberation. It doesn’t know anything about what is true or what is false. It is saying what is probable, the sorts of things it saw on the internet. And a lot of the time, that’s actually quite convincing. But it’s not understanding the way that you and I understand the world. We’ve still got a huge great distance to go, which is why this is only the beginning of the AI journey; we’ve only got a little vision of the future that we’re going to see.
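The autocomplete point, saying what’s probable rather than what’s true, can be illustrated with a toy bigram model. This is a deliberately tiny stand-in with a made-up corpus; real chatbots use vastly larger neural networks, but the principle of emitting a probable next word, with no notion of truth anywhere in the process, is the same.

```python
from collections import Counter, defaultdict

# Minimal bigram "autocomplete": count which word follows which in some
# text, then always emit the most frequent continuation. Nothing here
# models truth, only frequency.

corpus = ("the pope wears a hat . the pope wears a coat . "
          "the pope blesses a crowd .").split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def autocomplete(word, length=4):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most frequent next word
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # a probable sentence, not a verified fact
```

Whatever it emits is merely the statistically likeliest continuation of its training text, which is exactly why calling its mistakes “lies” misdescribes what is happening.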

Kerry O’Brien  

Yeah. So the thing is, though, isn’t it, that from 50 years ago to 40 years ago there was a certain development, then to 30, then to 20. It’s exponential, isn’t it? It’s like measuring an earthquake on the Richter scale; the pace is just going to keep picking up and keep picking up. (It is.) And I wonder about our capacity; I don’t think it has been great at staying abreast of digital developments so far anyway, in terms of our social needs and our social capacity to live a civil life. That’s the big question about our future, isn’t it?

Professor Toby Walsh

It is; it is about choosing the future we want. And you’re right, I mean, that is the fundamental difference. You started by talking about the industrial age, and there’s an argument, I think a good argument to be made, that this is going to be more transformational because it will happen quicker. It’s no surprise that ChatGPT was the fastest-growing app ever. At the end of five days, it was in the hands of a million people. At the end of the first month, it was in the hands of 100 million people. And today, because Microsoft are now embedding it in their products, it’s potentially in the hands of billions of people. We’ve never had a technology before where you can scale it into the millions, into the billions, so quickly.

Kerry O’Brien  

But now we come to who controls the technology. Who controls the technology? What is their motivation? How are they regulated now? How are they going to be regulated? If they are part of the fundamental process of capitalism, then they are driven by the profit motive; in some cases, you’d have to say greed. So tell me about the diversity, or the kind of profile, of the people who have driven the digital industries, to the extent that Silicon Valley has led the way. What’s the kind of profile of those people?

Professor Toby Walsh

You’re right, I mean, it’s not a very diverse set of people. And again, it’s not only the scale: there are 10,000 people on the planet today who are like me, who have a PhD in AI. It’s hard to think of a revolution where 10,000 people are going to change the planet in such a significant way. And they’re largely white, male people like myself, with a particular mindset,

Kerry O’Brien  

Well, I hope they’re like yourself, because you concern yourself with ethics, amongst other things.

Professor Toby Walsh

Some of them may be a bit more driven by avarice than me, yes. And you go to Silicon Valley, I go to Silicon Valley, and I come away thinking it’s a pretty strange Kool-Aid they’re drinking out there. As an example, there’s a story I tell in one of my books about the big homelessness problem in San Francisco. It’s a great tragedy. I mean, you think property prices are expensive in Sydney; you go to San Francisco and you discover otherwise, driven, of course, by the inflated salaries, by all the wealth that technology is driving. And on the doorstep of all that wealth is this terrible homelessness problem, drug problem. And so there are various charities that have been set up to deal with that. And one of the charities is trying to teach the homeless people coding. As though their problems would be solved if they could only code like the rest of us. You know, we know what the solution to homelessness is. Finland is a fantastic example; they’ve really cracked homelessness. And you know how you crack homelessness? You give people homes. It’s amazing how simple it is. You give people homes, no strings attached; they find their feet again, they can get employment, they can go off and start their lives again. It starts not with teaching them coding; it starts with giving them a home. But the people in Silicon Valley think it starts with teaching them coding, with becoming like the tech bros.

Kerry O’Brien  

Before we plunge further into the dark side, the big challenging questions of all of this, can we just focus for a moment on the areas where AI can reasonably be expected to serve us well, where it’s serving us well now, and how that is going to burgeon?

Professor Toby Walsh

Yeah, I mean, amongst all of the doom and gloom, and it’s easy to get lost amongst all the concerns and worries, we should realise, of course, that it’s going to transform our lives, and in many respects the possibilities, the opportunities, are fantastic. So pick an area like medicine: the opportunities are fantastic. We are running out of antibiotics. We’re over-prescribing antibiotic drugs; we’re still prescribing penicillin, the first one we ever discovered. And we’re getting bacteria resistant to the antibiotics we have, we’re not discovering new antibiotic drugs quickly enough, and it’s costing us more: it costs $2 billion to develop a new drug. It’s a prohibitive barrier. The latest antibiotic has been discovered by machine learning. It’s an antibiotic that was developed at MIT. They gave a machine learning programme a big catalogue of drugs and said, go off and find something interesting that might be an antibiotic. And it came up with a drug that’s now being called halicin: like penicillin, but named after HAL, the AI computer in 2001: A Space Odyssey. And the biochemists are really excited. This is a really exciting new antibiotic, not only because it’s new and we’re running out of antibiotics, but because it works in a completely different way from any of the existing, human-discovered antibiotics: it disrupts the way the cell can access energy. And therefore the expectation, as it goes through clinical trials today, is that it’s going to be effective against all these drug-resistant bacteria.

Kerry O’Brien

So do you know enough of that to tell us how much of that outcome was driven by humans using the machine, and how much was driven by the machine?

Professor Toby Walsh

So it was a symbiosis. The humans were there, looking over its shoulder. But it was something humans couldn’t have done: they gave it a huge great catalogue of drugs, bigger than a human would have the patience to look through, and it spotted this possible idea that turned out to be the useful antibiotic. The machine said, here are some candidates, a dozen candidates, I think they said; go and test these clinically, and one of them turned out to be good. So it was a combination of human intelligence and machine intelligence, but it played, as usual, to the strengths of machines: you can throw a big drug catalogue at the machine, much bigger than a human would have the patience to look at, or to spot these strange correlations in.
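The division of labour Walsh describes, a machine ranking an enormous catalogue so humans only lab-test a short list, might be sketched like this. The scoring function here is a hypothetical stand-in (random numbers in place of a trained model’s predictions), not MIT’s actual network; only the workflow shape is the point.

```python
import random

# Toy sketch of the screening workflow: score a drug catalogue far
# larger than a human could read, then hand back a short list of
# candidates for humans to test in the lab.

random.seed(0)
catalogue = [f"compound_{i}" for i in range(100_000)]

def predicted_antibiotic_score(compound):
    """Stand-in for a trained model's score in [0, 1]."""
    return random.random()

scores = {c: predicted_antibiotic_score(c) for c in catalogue}

# Machine's job: rank everything. Humans' job: lab-test the top dozen.
shortlist = sorted(catalogue, key=lambda c: scores[c], reverse=True)[:12]
print(len(shortlist), "candidates for clinical testing")
```

The machine never decides what is a good antibiotic; it only compresses 100,000 possibilities into a dozen, which is exactly the symbiosis described above.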

Kerry O’Brien  

Doctors tell me, and we hear and read, that there are enormous strides being made in imaging, for instance, and that, increasingly, computers will be taking over the diagnosis of illnesses; that, as a tool, they will get the diagnosis right more frequently than humans will. But of course, there are always those moments where the symptoms are saying one thing, and the tests don’t back up the symptoms, and the doctor falls back on their instinct, on their gut feeling, on the practical things they’ve seen and the way their brain works. So you’d say in that sense there’s going to be a symbiosis too, in the way it’s applied. But what is the next stage of that?

Professor Toby Walsh

Well, I mean, the good news, if there are any doctors in the house: I don’t think doctors are going to suffer any unemployment. I don’t think we’re ever going to have fewer doctors; we’re only going to have more doctors, because the basic fact of life is we all want to live longer, and none of us wants to suffer.

Kerry O’Brien  

But are doctors going to be using machines more and more?

Professor Toby Walsh

But increasingly, doctors will be able to, just as they consult their colleagues, consult the world’s best experts in a particular type of blood disease.

Kerry O’Brien  

Now, what about mental health, where our governments simply seem unable to find the funds to begin to properly treat mental health or mental illness? And you’ve got psychologists and psychiatrists endeavouring to make sense of the symptoms before them, which can be incredibly difficult too.

Professor Toby Walsh

You have, Kerry. But I think you’ve picked the worst example, which is,

Kerry O’Brien  

A shame. I can’t help it.

Professor Toby Walsh

You know, the part of medicine which is most about human connection is psychology. It’s about understanding the person in front of you.

Kerry O’Brien  

There will be a lot of psychologists who would want to hear you say they’re not going to be replaced.

Professor Toby Walsh

Yeah, and I think so. If you’re working in a particular type of imaging, and it’s about seeing strange structures, you could teach a machine to see those strange structures. But if it’s about human empathy, that’s one of the characteristics that machines don’t have. Machines don’t have empathy; they don’t have our emotional intelligence, they don’t have our social intelligence, to be able to understand. And for a good reason: they’re not human, they don’t share those experiences. A machine is never going to fall in love, it’s never going to lose a loved one, and it’s never going to have to face up to its mortality. But your psychologist will, and will be able to relate to those things, because those are human experiences. And so the psychologists, I think, are perhaps going to be in one of the safest professions, because machines are always going to struggle with those sorts of ideas.

Kerry O’Brien  

So I’d like to get some sense of the extent, let’s try to identify the areas, where AI is going to be most disruptive of human existence, of how we live, work, play.

Professor Toby Walsh

Well, the four D’s: the dirty, the dull, the difficult and the dangerous. So when people say to me, oh, I just saw that there’s a new AI programme that’s starting to do something interesting, I say, well, we should celebrate. We should never have got humans to do that in the first place. If it’s a dull, repetitive task, we should celebrate that humans have now been liberated, and can focus on the more interesting human things that we actually enjoy doing.

Kerry O’Brien  

Yeah, well, there’s a big question as well. Because to do that you need societies led by governments, and the thinkers of society, planning for an era when many of those boring, dull jobs have gone from humans: how we spread the load, how we spread the work, how we spread the leisure, and how we spread the wealth.

Professor Toby Walsh

Right. And I think you’re putting your finger on it: these are structural changes to society. The technology is going to require us to think carefully about how we change the structure of our society. And remember, this is not the first time we’ve been down this road. With the industrial revolution we went through a similar, slightly different but similar, structural change. It used to be that all of us went out into the fields and farmed while it was light, and when it got dark we went home and rested, and then got up at dawn and did the same again. Then we changed the nature of work: we invented factories and offices. And we had to change our society to go along with that. There were predictions of doom at the time, Marx and people like that, predicting that this was going to break the fabric of society. But we worked out a way. It wasn’t without pain: there was the Great Depression and two world wars, quite a lot of disruption. But we got through it by sharing some of that wealth, and by introducing some really significant changes to the way we ran society, to support everyone through that. We introduced universal education, so people were educated for those new jobs. We introduced the welfare state in most countries, so that if you were unemployed you weren’t in the poorhouse; you actually had some support to get yourself back on your feet again. We introduced the idea of a universal pension, so that at the end of your working life, when you were tired from having worked all those years, you could actually stop and rest.

Kerry O’Brien  

There are big, big questions involved. The thing that disturbs me about it is that I don’t see much evidence that those discussions are really being held in an inclusive or systematic way. No, we don’t. I think there are pockets of it.

Professor Toby Walsh

And then there was a distribution question. Yes. Which is that we made sure that it wasn’t just the Carnegies, the robber barons, who got all the wealth; we actually introduced taxation reforms and things that spread a bit of that wealth around. Good luck with that.

Kerry O’Brien  

I don’t want to divert too much into this field, because we’d leave with another sense of hopelessness. But that’s a purely subjective judgement, of course, based on a very long, decades-long study of human nature. Can we talk a little bit,

Professor Toby Walsh

So Kerry, let me ask you the question though. We did manage to do that through the Victorian period; perhaps it was enlightened Victorian gentlemen who were partly responsible. But we did, there was,

Kerry O’Brien  

We can continue to hope. There was. And I’d hate to see the day we stop hoping. I think one of the great issues and problems with climate change, for instance, which plays into the hands of those members of the fossil fuel industry who want to prolong their profits for as long as possible, is that society develops a sense of hopelessness about whether we can actually deal with it. That becomes a serious issue for a society, when it loses hope.

Professor Toby Walsh

So I think you’ve brought this conversation back to a really interesting and important point, which is that, what is it, 70 companies are responsible for 50 per cent of carbon emissions. Right? And again, we’re seeing this with the tech companies as well: it’s actually only a very small number of companies that are misbehaving. And people forget, the modern corporation was an invention of the last industrial revolution, as a way of profiting from that technology, of sharing the risk. And so maybe we have to reinvent the modern corporation, so that it is better aligned with public goods.

Kerry O’Brien  

We have reached a point where sovereign government is much less sovereign, I think, than it has been, certainly in modern times. I mean, we can talk about the threat of China, or some outside military threat from another country, but there is a kind of corrosion, an erosion, of that other form of national sovereignty, the thing around which we build physical borders: where information and money are moved across borders at lightning speed, and where governments, I would suggest, find themselves increasingly helpless to really stay ahead of the developments we’re talking about through regulation, in any kind of concerted way.

Professor Toby Walsh

But at the end of the day, they are human. Corporations are human institutions. And we do get to set the rules. So maybe we have to demand more from our political masters: that we have stricter rules, to share some of the benefits, and to ensure that they behave appropriately. And

Kerry O’Brien  

I didn’t intend to spend quite this much time on this part of it, and we might even come back to it later. I want to talk about those areas of human endeavour where we’re driven by imagination, where we’re driven by creativity: the importance of music in our lives, the importance of culture, of literature, of brilliant paintings, and so on. Are we going to reach a day where you simply feed into a computer all of the world’s great masterpieces in whatever genre, and the computer simply reproduces them? Could it build on them?

Professor Toby Walsh

I don’t think so. I mean, it’s an interesting question. We don’t really know the answer, because, just as we don’t really know what intelligence is, we don’t really know what creativity is. One of the great strengths of humanity is not just our intelligence, it’s our creativity: our ability to invent things, to mould the world, to use tools. And that’s a fundamental question that has haunted my field, artificial intelligence, since, indeed, before it began. You can trace it back to Ada Lovelace, who was working with Charles Babbage in the 19th century. Babbage was trying to build the first mechanical computer. Ada Lovelace, the first computer programmer, a brilliant mathematician, daughter of Lord Byron, wrote the first computer programme, and also wrote a very interesting tract at the time which put this question: would machines ever be intelligent, or are they just following their instructions? And certainly we’ve seen examples of computers doing things that pass for a decent poem, or pass for a painting, or win photographic competitions, as we saw a few weeks ago. But I don’t think they’re ever going to speak to us. It comes back to the psychologist, it comes back to our humanity: I don’t think they’re ever going to speak to us in the way that human art speaks to us, because human art speaks about the big questions, about life and love and loss, all those human experiences, and tries to help us put some understanding upon those matters of existence. But they can be copied. They can be copied, and on the surface they may appeal to us. Yeah, like pop music. Pop music, or modern art, or whatever the,

Kerry O’Brien  

What I was fascinated to read about was Ed Sheeran’s case. He’s been defending himself against charges of plagiarism over a particular song in America, and he was in the witness box explaining that you could lay template on template on template of the one song; you could build a whole raft of songs on top of that template. And so if you fed in every song of the last, you know, everything from the genre of pop music, or rock music, or whatever, a computer would replicate them substantially, successfully, would it not? A kind of synthesis of all of those pieces of data.

Professor Toby Walsh

It’s a synthesis, but is it just a pastiche, right? I mean, if you look at the truly great artists: you look at Shakespeare, who took language to places that language had never been. He was not copying, or bringing together language in a way that had ever happened before. You’ve got a great painter like Picasso, who reinvented himself half a dozen times and created completely new styles that had never been seen before. Those are artists who take art to places that just copying, synthesising together all existing art, would not have taken it.

Kerry O’Brien  

Whereas AI, as it is now, having taken every one of Picasso’s paintings and gone through every period of Picasso’s incredible lifetime, could replicate, but could not advance it.

Professor Toby Walsh

We’ve yet to see them advancing it. Right. So it’s an interesting question. How will we know? It’s an interesting challenge for AI, to know whether it would ever get there. But certainly, where we are today is just pastiching what Picasso did. It’s not taking us anywhere particularly new.

Kerry O’Brien  

Yeah. So come back to regulation now. I do want to talk about autonomous AI next, and I know you’ve just made a submission to the House of Lords committee on autonomous weapons, which is investigating ethical frameworks for autonomous weapons. You might tell us a little bit about that. But the issue of autonomy, or autonomous AI, is a loaded issue, is it not, in terms of ethics?

Professor Toby Walsh

It is. I mean, in terms of thinking about the ethical challenges that AI poses, I think intelligence itself is not a problem; the smarter you are, hopefully, the more thoughtful, the better you’re going to be. The challenge is the other word: autonomy. The fact that we’re giving machines, robots, cars, the ability to act autonomously, on their own, with little or no human oversight. That, in some sense, I think, is the only fresh ethical challenge that AI throws up: the fact that we’ve got this new actor in our lives, our autonomous car, autonomous robots on the battlefield, that is given some independence to act. And then you run into the fundamental problem, because machines are not conscious beings. They don’t have emotions, they don’t have feelings, they can’t be punished. Who are you going to hold accountable when mistakes happen, or when the thing does the wrong thing? Our legal system, our moral system, requires a conscious, sentient being to be held accountable. Now we’ve got a machine. That leaves an accountability gap. And in some sense, the only new philosophical place that AI takes us to is that one.

Kerry O’Brien  

So how do we fill that accountability gap?

Professor Toby Walsh

By making sure that we only give autonomy to machines in limited circumstances, where there’s a clear line of sight to the people who are going to be held to account for those machines.

Kerry O’Brien  

And how do we do that?

Professor Toby Walsh

So in some places we are going to do that, because it’s going to be a great benefit to our lives. Your autonomous car is going to bring a great benefit to your life. A thousand people will die in the next year in Australia in road traffic accidents, almost all of them caused by some idiot driving a car. They’re not caused by mechanical failure; they’re caused by human fallibility. And that will go to almost zero. In the next 20 or 30 years we will get fully autonomous cars, and all of those errors will stop: they won’t drive texting, they won’t drive when they’re tired, they won’t drive when they’re drunk, they won’t drive when they’re distracted. All of those mistakes that humans make will stop. If you survive birth in Australia, your most probable cause of death until the age of 30 is a road traffic accident; we’ll suddenly realise that goes away. All of us have had our lives, or the lives of our families or our friends, touched in some way by one of those accidents. And that will stop. We’ll look back and we’ll think, oh, it was like the Wild West. We put up with people dying on the roads because we didn’t have an alternative. Well, ladies and gentlemen, we’re going to have an alternative. We’ll have autonomous cars, and a thousand road deaths in Australia will just stop. So that’s the positive side. But then the negative side, coming to the House of Lords, is that equally we’re going to give that autonomy to machines that are designed to kill.

Kerry O’Brien  

And you’ve said in your submission that autonomous weapons systems will redefine how we fight war. Dramatically so, right?

Professor Toby Walsh

They’ve been called the third revolution in warfare. The first revolution was the invention of gunpowder by the Chinese, which gave us guns and bullets and explosives. The second revolution was, eventually, nuclear weapons, which again was a step change in how we could fight war: now we could destroy the planet. And this is the third step change, a way that we could industrialise war. These weapons will fight 24/7, they have inhuman accuracy, they will never tire. Previously, if you wanted to do harm, you needed an army; you needed to equip them, train them and persuade them to do your evil. Now you won’t need that. You’ll need one programmer. And you can tell the robots to do anything, however distasteful.

Kerry O’Brien  

So when you have a country like the UK, which is a huge arms exporter. Yes. And a country like Australia, which is becoming much more of an arms exporter than it once was. There’s a vested interest. There is. If they are going to be relied on, individually and collectively, to come up with a proper regulatory framework, somehow it seems a contradiction in terms to have an ethical framework for autonomous weapons in warfare, when the countries putting those regulations together are dealing in those very items.

Professor Toby Walsh

Yeah, you’ve put your finger on exactly one of the fundamental pushbacks here, which is that there’s a lot of money to be made. Yes. Arms manufacturing is a major business. The UK, one of the major players in this space, is the second largest arms exporter on the planet. We are a player ourselves, and we are developing some of these weapons ourselves. Our government, in our name, is investing: there’s $100 million being invested in the Trusted Autonomous Systems Defence Cooperative Research Centre. We’re building the Loyal Wingman; we’ve got the autonomous submarines now that are going to go alongside our nuclear submarines.

Kerry O’Brien  

That’ll take a while.

Professor Toby Walsh

Yeah, we have a responsibility to ensure that this happens. I mean, I’ll disagree with you slightly: there are ethical rules for war, even though war is a distasteful thing. Yes, well, there is the Geneva Convention. The Geneva Convention. We push back against the more extreme things, whether it be dumdum bullets or anti-personnel mines, chemical weapons, biological weapons; the world has pushed back against some technologies, and decided they’re just too distasteful to use for fighting war. And that’s why I wrote that submission to the House of Lords. That’s why I’ve been vocal. That’s why I’ve actually been banned from Russia now, for life, for speaking out. Congratulations. Thank you. Because I’m confident that at some point we will just find this sufficiently distasteful, like we found chemical weapons sufficiently distasteful. But the stakes are big, the stakes are huge. And the thing that really worries me is that in most of those examples I’ve given you, it was only after we saw them being used in anger: we saw the terrible scenes in World War One of the misuse of chemical weapons, we saw nuclear weapons being used in the Second World War, we had to have a princess remind us about anti-personnel mines. It was only when we saw them on our own screens, when we saw them for ourselves, that we found the conviction and courage and pressure to regulate them. And that’s what worries me about autonomous weapons. I’m pretty confident that at some point we’re going to look at it and say, you know what, this looks like some terrible Hollywood movie, and we don’t need to do that. We’ve got plenty of ways of defending ourselves. We don’t need to make warfare more terrible; like with chemical weapons or biological weapons, we can add it to the list of things that we’ve decided to regulate. But to do so, we’ll have to see them being used against women and children.

Kerry O’Brien  

Right. That’s a stopper. Aligning the behaviour of high-tech companies with the public good is one of the huge challenges, right? Yes. Let’s talk more practically about how sovereign governments can, individually and collectively, properly and reasonably regulate the high-tech companies, to ensure that there is some measure of control over the process. When we look at how information technology generally has exploded, this century particularly, and how it’s been overwhelmingly commercialised, with the extraordinary incursions of the Googles, the Amazons, the Twitters, the Apples and so many other big corporations into our lives, ruthlessly monetising our personal data, what does that tell us about how the big players are likely to develop and cash in on AI? Won’t it be the same story? Different dressing. Yep. But the same fundamental story: how do we monetise the crap out of this, and don’t get in our way?

Professor Toby Walsh

You’re 100 per cent right. I mean, the track record is not particularly good. We look at what happened with social media; we look at what happened with our data privacy. You know, we were the product. The behaviours were ones that disrupted our electoral systems, that potentially resulted in people like Trump being elected, potentially resulted in the Brexit referendum going the way it did. All the harms that we’ve seen, the violence that’s been perpetrated in many countries, incited by social media. So I think the tech companies haven’t really lived up to their promise to regulate themselves very well.

Kerry O’Brien  

What a surprise. I mean, Bill Gates has been this person dispensing massive amounts of his personal wealth around the poorest parts of the world, taking on battles with AIDS and various other things, at the same time as Microsoft, I’m not saying now, but at the same time as Microsoft was fighting antitrust suits in Washington. Yeah. Where it was continuing to fight like beggary to retain monopolies. I mean, that’s just one tiny example, isn’t it?

Professor Toby Walsh

It is. So I was in a meeting with a member of the government, a member of the cabinet, yesterday, and someone pointed out: you know, Minister, part of the problem is that we’ve been trying to take Facebook to court for five years over the misuse of our data, and to this day Facebook’s defence is that they don’t operate in Australia. Well, excuse me, but.

Kerry O’Brien  

So where’s the sovereignty in that? Yes. Where’s our national sovereignty in that example? It’s classic.

Professor Toby Walsh

Yeah. Unfortunately, from the beginning of the tech revolution there was this feeling that you couldn’t and you shouldn’t regulate the tech space. You couldn’t, because somehow it was different: it was not physical, and it crossed national boundaries. And you shouldn’t, because that was going to stifle innovation. That might have been true through the 80s and the 90s, but I think as soon as we crossed into the new millennium, that stopped being true. And now we’re discovering you can. It is entirely possible to regulate the tech space; there are a number of examples, such as the GDPR for data protection in Europe. Even here in Australia I can give some examples of where we’ve pushed back. After the terrible incident that happened in Christchurch, we enacted laws here in Australia to hold the platforms responsible if they don’t take down the sort of content that incites that quickly enough. And it’s not been perfect, but that has had a positive effect: you can actually measure how quickly that content gets taken down, and it gets taken down much more quickly now that we’re holding the officers of those companies criminally responsible for taking that content down. So there are things you can do.

Kerry O’Brien  

So you’re relying on individual countries to act fast when they’re confronted with something, because if you’re relying on the nations of the world to come to terms with it, look at the way they’ve failed to come to terms with climate change.

Professor Toby Walsh

Yeah, there’s absolutely no hope. I mean, the rules-based order is under severe stress, as we see today: we see great divisions between the West and its allies, and Russia and China; we see very little consensus, very little possibility for action. But equally, you see national regulation quite successfully being quite viral. Take data protection: our data privacy was abused, and we haven’t solved that problem, but we’ve got it slightly better. Europe enacted the GDPR, and there are now 17 different countries outside of Europe that have regulation much the same as the GDPR. So this regulation tends to be quite viral. So I’m quite hopeful. I spend a lot of time talking to politicians about what they should be doing in this space, and politicians are very aware. The minister was saying to me yesterday, my colleagues are breathing down the back of my neck; I need to do something. We’re seeing an AI Act being enacted as we speak in Europe, and there’s an AI Bill of Rights being discussed.

Kerry O’Brien  

Tell me about the Bill of Rights just quickly?

Professor Toby Walsh

Well, it’s still very putative. And, you know, like the US Constitution, it’s a lot of high-sounding words. Of course the challenge is how you turn that into practice: are you going to give the regulator sufficient teeth? I mean, the good thing is,

Kerry O’Brien  

Well, that in itself is a huge challenge, because of the amounts of money that these corporations can bring to bear. It’s just ugly money you’re up against.

Professor Toby Walsh

It is, but the regulators have discovered a good trick, which most of the regulation being proposed and enacted these days uses: you fine them a fraction of their global turnover; you fine them, say, 3 per cent of their global turnover. Okay. That tends to get their attention. That would. Because you’re right, they employ lots of lawyers, but equally their turnover is especially large, so fining 3 per cent of it does hurt them.

Kerry O’Brien  

Because if you throw in a mercurial and contrarian character like Elon Musk, who one minute is calling for a pause in AI development, at least until it can be regulated, and the next is talking about his own chatbot, which he’s calling TruthGPT. I don’t know about you, but characters like Musk with their hands on the levers do make me nervous.

Professor Toby Walsh

Yeah, I’ve had the pleasure of meeting Musk, and he’s an interesting character. I mean, he is a great engineer. But I wouldn’t put him in charge of Twitter; that seems to me a fundamental failure. Twitter is, in some sense, our most important town square; it’s a really important place. And I don’t think billionaires, Musk or any other billionaire, have any greater insight into the challenges of freedom of speech and maintaining decorum on our most important public square than anyone else. So the fact that he had $44 billion, or could borrow $44 billion, to take that on, I don’t think speaks well for democracy. The regulators should have just stepped in and said, maybe this is not a plaything for a billionaire.

Kerry O’Brien  

You talk a lot about using the right tools with AI. What’s in your toolkit?

Professor Toby Walsh

Well, I think the most important tool is education. At the end of the day, it’s about making sure that more people are educated in how to use the tools, so that we can take advantage of them. And that means all of us, right? This is technology that’s going to touch all of us, so all of us need to understand it; all of us need to be literate. I can’t understand why we still teach calculus. We used to live in a mechanical world, and calculus is the mathematics of a mechanical world of movement. Well, we now live in a digital world, and we should be teaching everyone the fundamentals of the digital world. Not that everyone should program; the dirty secret is that we need fewer and fewer programmers, because machine learning is computers learning to program themselves. But if we want to be active players in this increasingly digital, increasingly virtual world, we need to understand it. If it seems like magic to us, we will be taken advantage of again.

Kerry O’Brien  

Yeah, well, you talked about education being a key tool. How are our universities and our schools equipped, right now, to deal with what’s happening with AI? To what extent is our tertiary sector effectively preparing itself for what is in the pipeline five years from now? Ten years from now?

Professor Toby Walsh

Well, unfortunately, it’s not. I mean, education is very conservative. A quick advertisement: later in May there’s a thing called The Day of AI. I’m part of a not-for-profit that’s giving school kids a taste of AI for one day. It’s baked into the national curriculum; we’re teaching them about ethical AI, about the opportunities and the risks of AI. So if you’ve got kids at home, or you’ve got grandkids, mention it to them; they should sign up and get a taste of the future. Because what’s interesting to think about: I was talking to the Secretary of the Department of Education, and the Secretary was saying, well, there are kids entering kindergarten today who are going to spend most of their working lives, the second half of this century, using technologies that we have not even invented yet. So what should I be teaching them, Toby? And the interesting thing is, most of those skills are the old-fashioned skills, I think: the critical thinking skills, the human skills, the social intelligence, the emotional intelligence, the creativity, the adaptability. The things that, ironically, the humanities taught us very well, and that we seem to be turning against. You don’t need more people to program; I don’t think teaching everyone programming is a good idea.

Kerry O’Brien  

What about equity in education? I mean, here’s a country like Australia, a prosperous country, with 16 per cent of people in embedded poverty, at or below the poverty line. Education and digitisation are two absolute fundamentals for the future of Australia, if it’s to craft itself as a more equitable society. So what are the tools you apply there? I mean,

Professor Toby Walsh

You’ve put your finger on an absolutely fundamental challenge. And we saw this through the COVID pandemic. What happened then was we switched to online education, because we had to stay home: the schools were closed and kids had to be educated at home. And you saw immediately the digital divide. Yes. You saw the large fraction of kids who had no digital device in their lives. And so whilst AI can provide personal tutoring to those kids, they don’t have a device; there’s nothing for the AI to run on. Yes. That fundamental divide has to be tackled before we can begin to say, well, okay, ChatGPT is actually an excellent personal tutor. Not only can it cheat on your homework, it can also sit there and answer any questions you have, however repetitive, however trivial. It can teach you French, it can teach you how to do your algebra, it can teach you how to program Python. It’s a perfect personal tutor. But unless you have a device, there’s no hope.

Kerry O’Brien  

So you must think about this particular aspect a lot: education in your own sector. You must have your own pictures of what it’s going to look like 10 years from now, compared to now. You’d have to start by saying what kind of tick you’d give the quality of our tertiary sector now. It’s had its massive disruptions, really, if you go back to the Dawkins period, and you look at how it’s functioning now, how well equipped its academic staff are now, broadly; you look at the corporatisation of universities. And then on top of that, you look at what’s coming down the pipeline. So tell me about 10 years from now. Can you do that?

Professor Toby Walsh

Well, I think it comes down to who we vote for: are we prepared to invest in our future? Education is the greatest leveller in our societies. I come from a family where no one had ever gone to university before, and I had the luxury of an education that has allowed me to achieve the things I’ve been able to achieve. Education is the greatest enabler, and we are not investing in our future. So the question is, next time you go to the ballot box, remember that we are voting for that future. We need to invest more in that future.

Kerry O’Brien  

So, you’ve been actively participating in this field for decades now. And we are coming to the end, but you’ve got a good couple of minutes at least to reflect on this, because it’s important. What is the level of your optimism versus your pessimism about our capacity to manage this, and manage it well, and manage it for the common good?

Professor Toby Walsh

So, I am optimistic. I think you have to be optimistic, or else why would you get up in the morning? But I’m most optimistic in the long term. I do feel that we live much better lives today than our grandparents. Our grandparents lived much,

Kerry O’Brien  

But with a great deal more anxiety, I think, than they had. Perhaps. Except perhaps when there was a war going on.

Professor Toby Walsh

Well, yeah, they had to live through the Second World War. I think that was a pretty anxious time, I suspect. I wasn’t there. But how did that come about? It was because we embraced technology. We embraced much better sanitation, much better medicine, all the computers that have come into our lives. We live like kings and queens; we forget that. I mean, it’s easy to point out the problems in our society, but we have labour-saving devices that previously only kings and queens used to have. They were called servants, you know. We have things that wash our dishes, wash our clothes. People forget that even in industrialised countries like Australia, life expectancy has nearly doubled since the industrial revolution. We have used technology and, you know, constructed a society where you could have a fair go. So it’s not just technology; it was also the societal structure, the politics that supported it, that have allowed us to live, I think, better lives. And I’m optimistic that we can try and do that again.

Kerry O’Brien

But I would say that I wish you hadn’t said try.

Professor Toby Walsh

I’m pessimistic that it’s going to be an incredibly bumpy road to get there. Not only because of the disruption that technology is bringing into our lives, which it is: it’s going to disrupt our lives in very severe ways, throw some people out of work, change the nature of work, change the nature of our society in profound ways. But also because of the other tsunami of shit coming down the pipe. Climate change. The pandemic, which we’re actually not out of: people are still dying here in Australia, every day. The increasing inequality we see within our society. The fractured geopolitical situation, where we’re back at war in Europe.

We see tensions with China, we see so many troubles in so many different places, that, you know, I think we have to apologise to young people and say: I inherited a better world from my parents, and I’m very sorry, you’re probably going to have a worse world. There are no two ways about it. It’s too late to fix most of these things; you’re going to have to deal with this terrible set of problems. But here’s the good news: I have been working on some AI which might be a little bit useful, along with all the other things.

Kerry O’Brien  

So just quickly: in my mind, you brought me back to regulation when you were talking there. China is an interesting case right now, in the way China is trying to regulate as a command economy, backed up by authoritarian weight and ruthlessness. How’s that going?

Professor Toby Walsh

Well, China technically has much better regulation of chatbots than we do. China…

Kerry O’Brien  

They certainly want to regulate what it says.

Professor Toby Walsh

Yes. Because obviously they don’t want them to talk about Tiananmen Square and all the other truths; a TruthGPT would be a terrible problem in China, if it actually told the truth. So you can see it’s a significant threat. But equally, China has announced, very publicly, over the last couple of years, an ambition to seek economic and military dominance through the use of artificial intelligence. And they’re going about it in short order. Ten years ago, you would turn up at an AI conference and you would barely see a Chinese person. Now, by various measures, they are neck and neck with the US as the leading nation: by the number of AI papers, the number of AI patents, the amount of money being invested in the field, and also its use in the military. So you see China embracing this. That’s the sad news, right? You know, Orwell, with all respect, wrote some fantastic works of warning for us, but he got one thing wrong. It’s not Big Brother; it’s not people watching people, it’s computers watching people. If you’re an authoritarian state, the computer is the perfect tool, right? East Germany demonstrated the limit of what you could do having people watching people: at one point, I think, a third of the population was watching the other two thirds of the population. With computers, you can watch China at scale. China has a system for facial recognition that can scan a billion faces in a minute, so the whole population in a minute. And just in case you’ve got any misconceptions as to their intentions, they helpfully named it Skynet, which is the AI computer in the Terminator series.

Kerry O’Brien  

So, I’m trying to look for an optimistic way to end this. I should have asked about your pessimism first. I mean, that does sound like your worst nightmare. But the one thing it does say to me, apart from anything else, is that when you look at the democracy we have, and look at the erosions and potential erosions that are taking place, what it says is: value what we have and protect it. Does that make sense to you, in the context of what we’re talking about?

Professor Toby Walsh

100%. I think, you know, I’m very proud to live here in Australia. I think it’s a fantastic country, where the motto of a fair go means that everyone should be given a fair chance. And these are technologies that could, if deployed in the right way, give more people a fair go. It’s the usual point: technology is not destiny. It’s about making the right choices, which is why I’m very happy we can have this conversation, and try to promote the conversation amongst the wider public, about choosing the right future. The future is up for grabs. People so often ask me: well, what’s in the future? What’s the technology you’re going to give us? I say, it’s up to us. It’s the choices we make today that are going to give us that future. There are good choices to be made, and there are poor choices, or there’s sitting on our hands, which is typically a poor choice. But if we make the right choices, there is a very bright future: a future in which the robots take the sweat and we can sit back and enjoy the finer things in life. We perhaps will live in the world that John Maynard Keynes wrote about, where there is much less work, and we can actually sit back and enjoy the finer things. Work is the only truly obscene four-letter word, and robots could do it for us.

Kerry O’Brien  

Well, I think that’s the note to end on. Toby Walsh, thank you very much for talking with us tonight. Thank you.

Professor Shaun Ewen

Colleagues, my name is Shaun Ewen and I’m the Deputy Vice Chancellor (Education) at Griffith University. LinkedIn tells me, via its photos and its text, that the Vice Chancellor is in India. But given what we’ve heard tonight, she may well be at the theatres of London’s West End, and we’ll see what trinkets she brings back to get a sense of where she’s actually been. Artificial intelligence has been central to our deliberations at Griffith as this year has unwound. Our immediate focus was on student assignments, student learning and the student experience: how should we think about plagiarism, how do we detect cheating, and so on. But it’s moved very quickly to ethics. What are the ethics around the use of artificial intelligence? And for a university, or universities, in the game of knowledge: whose knowledge does it value? For me, one of the challenges is that the data it draws on has a history and a bias, often gendered, often racialised. And if it also relies on written knowledge, written text, how do we think about the knowledge systems that historically haven’t been written and haven’t got the depth of texts and language? So, in an Indigenous context, how might we be cautious about another form of colonisation? Just one other point before we close, and I can’t remember her name, I’m sorry Toby: one of your colleagues at UNSW made the point on a video that the more we understand what artificial intelligence can do, the more we will value what humans can do. Kerry and Toby touched on it in their conversation today. So can you join with me, please, in offering a vote of thanks and a round of applause, on behalf of Griffith and Home of the Arts, to both Toby Walsh and Kerry O’Brien for a fabulous conversation.

As you leave, there are copies of Toby’s books available for sale, as well as the Griffith Review Creation Stories 20th anniversary edition, in which Toby has an essay. And I understand that Toby will also be available to sign books. The next event for A Better Future For All sees Kerry in conversation with the Queensland Opposition Leader, David Crisafulli, on Wednesday the 31st of May. Together, they will explore the challenges and aspirations of contemporary politics as the Queensland centre right rethinks and reshapes itself for the future. Thanks for coming tonight, and have a great evening.