Beth Singler interview: The dangers of treating AI like a god

Beth Singler, anthropologist of AI at the University of Cambridge

Dave Stock

We are growing used to the idea of artificial intelligence influencing our daily lives, but for some people, it is an opaque technology that poses more of a threat than an opportunity. Anthropologist Beth Singler studies our relationship with AI and robotics and suggests that the lack of transparency behind it leads some people to elevate AI to a mysterious deity-like figure.

“People might talk about being blessed by the algorithm,” says Singler, but in reality it probably comes down to a very distinct decision being made at a corporate level. Rather than fearing a robot rebellion or a deity version of AI judging us, she tells Emily Bates, we should identify and be critical of those making decisions about how AI is used. “When fear becomes too all-consuming and it distracts you from those questions, I think that’s a concern,” says Singler.

Emily Bates: You describe yourself as someone who “thinks about what we humans think about machines that think”. Can you explain what you mean by that?
Beth Singler: I’m an anthropologist of AI, so I study humans, trying to understand how [AI] impacts their lives. As a field, it involves so many different aspects of technology, from facial recognition to diagnostic systems and recommendation systems, and for the general public, all those kinds of details get subsumed under this title of AI. [And] it’s not just the simple uses of AI, but how it’s imagined to be, which also reflects what we think of as the future of AI in terms of things like robots, robot rebellions, intelligent machines taking over.

Is there anything we should be concerned about in AI? Are there entrenched biases?
Everything that we create as humans reflects our contexts, our culture, our interests, and when it comes to algorithmic decision-making systems, the data that we choose to input into them leads to certain outcomes. We can say rubbish in, rubbish out. But we also want to be very careful that we don’t personify AI so much that we decide it has agency it doesn’t really have. We’ve got to be very clear that there are always humans in the loop.

Why do we think that if an AI becomes powerful, it will want to rule us and put us under the boot?
A lot of the time, it comes from our assumptions about how we treat others. When [Western civilisations] spread around the globe and encountered Indigenous cultures, we didn’t necessarily treat them particularly well, and we can see this same sort of thing scaling up. If we consider ourselves to be the most intelligent species on the planet, and we’ve destroyed environments and made animals endangered, then why wouldn’t an intelligence greater than ours do the same thing to us? Or if we see the distance between ourselves and ants as equivalent to the distance between a superintelligence and ourselves, then maybe it simply doesn’t care. It’s going to react in a particular way that is beneficial to itself, but that might not be so beneficial for humans.

Do you think a fear of AI is well-founded? Are the robots going to take over?
I think it is actually more interesting that we keep asking that question, that we keep returning to this fear narrative, that we’re concerned about the things that we’re creating: how they might be like us in terms of intelligence, but also how they might be like us in terms of our bad traits, like our rebelliousness or our selfishness or violence. I think a lot of the time, it comes from our feeling that there should only be minds in certain places. So we get very concerned when we see something that’s acting like a human, having the same level of intelligence, or even the apprehension of sentience, in a place where we’re not expecting it.

How far away are we, and how will we know when we’ve made a machine that has the same level of intelligence as we do?
It really comes down to what we conceive of as intelligence and how we describe success in AI. For a long time, since the very conception of the term artificial intelligence, it has been about being very good at doing simple, bounded tasks in a very simplistic domain. Over time, those domains have become more complicated, but still, it’s about being successful. So the whole history of playing computer games, for instance, all the way from the simple boards of tic-tac-toe and chess up to Go and StarCraft II, is developmental, but it’s still framed around success and failure. And we need to ask, is that actually what we think intelligence is? Is intelligence being good at games of that nature?

Is there a game that you think, when a computer can play it successfully, it might have reached human level intelligence?
I am a massive and unrepentant geek, and I really enjoy playing Dungeons & Dragons. And I think what’s really valuable about that form of game playing is that it’s collaborative storytelling. It’s much more about the experience of playing together than about success or failure. And I think if you could have an AI that could understand that collaboration, then you’d be much, much closer to something that we might think of as embodied intelligence or communal human intelligence. The problem is that being able to do that might actually require the leap to artificial general intelligence.

South Korean Go player Lee Sedol plays DeepMind’s AlphaGo

Lee Jin-Man/AP/Shutterstock

What is artificial general intelligence?
The idea is that if AI could reach human-level intelligence, it might then surpass it into superintelligence and perhaps go into an area that we don’t even completely understand, where its intelligence is so far beyond our conception that it would be the equivalent of a cosmological singularity that you have in a black hole or at the beginning of the universe. We don’t know how to really conceive or describe that, [although] science fiction tries to go there with ideas about intelligent, sentient machines that react in particular ways to humans, sometimes quite negatively. Or it could be that we ourselves are swept up into this intelligence and it becomes some sort of secular rapture moment.

You talk about this technological singularity becoming an almost deity-like figure. What do you mean by that?
For some people, it might be a literal transformation, that this is as close to the existence of a god as we could get, and for others, it’s more a metaphorical relationship between our idea of what a god should be and how powerful this potential singularity AI would be. We’ve had ways of characterising monotheistic deities as being omnipresent, omniscient, hopefully omnibenevolent as well. And when you start talking about this very powerful singularity, it seems to have some of those attributes, at least in this theorising about it.

What could the consequences of that be?
For some groups that are generally described as transhumanist, this might be a very positive outcome. They think this might be a route towards some form of immortality: that we might escape our physical bodies and become minds uploaded into the singularity space, and therefore may live forever, be able to explore the universe and experience everything that it is possible to experience. Others are concerned that a technological singularity might lead to negative consequences, an exponential version of the robot rebellion where this deity version of the AI judges us, doesn’t like us, gets rid of us for various reasons. So there are both sorts of interpretations.

Is there a danger, then, of us starting to treat AI too reverently, almost in a god-like fashion?
Some of my research looks specifically at where we don’t just personify AI but, in some ways, actually deify it, and start thinking about the algorithms behind the scenes making decisions as actually blessing us in some way. If you go on Spotify and hear a particularly useful or relevant song, or if you’re a content producer and you put something up on YouTube and it does very well because the algorithm highlights it in particular ways, then, because of the lack of transparency about how AI is being employed and what kind of values are being imported into it by corporations, it seems like it is acting in mysterious ways. We then draw on our existing language, narratives and tropes from existing cultural contexts, such as religious ideas. And therefore we end up talking about AI as some form of god having oversight over us.

What are the chances of us being creations of something with a level of intelligence that supersedes ours?
Again, this falls into those theological, theistic patterns of ideas about a creator god, a superior entity that’s created us. It also draws on ideas about whether we live in a simulated universe, sort of in the style of The Matrix, because the argument goes that if we can create games of a certain sophistication, what’s to say that a greater entity didn’t basically create that simulation for us to live in? Those sorts of questions keep some people up late at night. But I think it can act sometimes as a bit of a distraction, and actually some of the people who are espousing these sorts of narratives are also using AI in very limited, practical ways that are affecting our lives already. So the simulation we live in is already the simulation of certain corporations and billionaires that we actually need to be very critical about. People might talk about being blessed by the algorithm. But actually, it’s a very distinct decision being made at a corporate level that certain stories are highlighted over others.

Do you think we fear AI too much?
I think there’s a certain healthy level of fear when it comes to the applications of AI, one that could lead to understanding what’s going on with it, being critical of it, trying to push back against this non-transparency, identifying who’s behind the scenes making decisions about how AI is being used. When fear becomes too all-consuming and it distracts you from those questions, I think that’s a concern.

What is your hope for the future of AI?
I would like to see the technology used in appropriate, fair and responsible ways, and I think that’s quite a common desire; we’re seeing more and more pushes towards that. My concerns are more about human involvement in making the decisions about how AI is used than about AI running away and becoming this disastrous thing in itself.

New Scientist video
Watch the video accompanying this feature and many other articles at youtube.com/newscientist
