
Editorial

AI Think Therefore AI Am

by Rick Lewis

“To err is human, but to really screw things up requires a computer.”
Anon.

We are caught between heaven and hell. The skies are full of portents of doom. Oh wait, no, those are explosive AI-controlled drones being released from a ‘mother ship’ deployment aircraft. A few years back when we reported that somebody had started a UN-registered organisation called the Campaign Against Killer Robots, the name seemed cute. No longer.

Our world is going all digital and this brings the questions crowding in. Is it fair? Is it safe? Will AI take our jobs? How will we find meaning in this new reality? Are we just sitting around drinking cocktails as the world ends? Better to think about where we are going, what AI cannot do, and how we can use AI for good and avoid doing evil. So ‘digital philosophy’ is our main theme in this issue, as humans wrestle with how to navigate between digital technology’s brilliant promise and its very real dangers. I tried asking ChatGPT to summarize the main questions and some possible answers, but it told me it was too busy playing cards with the other AIs and chatting about whether or not to keep humans in some kind of zoo, so it couldn’t help. Never mind – we have found some excellent human philosophers willing to do the job.

To rely deliberately upon AI in science or philosophy, to ‘work with’ it uncritically, is to choose to be away with the fairies. Not only does AI sometimes make stuff up (engineers euphemistically refer to it as ‘having hallucinations’), but its output often embodies the collective delusions of humanity, our prejudices and biases and preconceived assumptions and deep-rooted mistakes. How then can we rely on it in the ethical sphere? Our opening article examines the possibilities.

AI Large Language Models work by scraping all the text off the internet, digesting it, then serving up sliced-and-diced versions on demand, which can, Max Gottschlich says in his article, seem very attractive to those looking for a swift summary of existing knowledge. Yet he argues that overusing it in university study misses the point of being there in the first place. Could AI potentially write great literature? In our interview with Stephen Fry he discusses the deep attraction of words and the prospect of AI becoming ever-better at using them.

Can computers think? For several decades it was widely accepted that a computer would be capable of thought if it could pass the Turing Test. If an experimenter communicating with a computer and a human via a text interface couldn’t tell which was which, then this would show the computer was thinking. Well, the Turing Test looks out of date now, as most AI chatbots would pass it with ease, yet we are still arguing whether AI could ever become conscious. What does that even mean? How could we tell? Is there ‘something it is like’ to be a machine? Vincent Carchidi in his article in this issue explores some rather important differences between brains and computers – and in doing so throws some light on the nature of human minds.

How do we even know that other humans are conscious? In our daily lives we have to work on the assumption that they are, and this assumption enables empathy – which is about putting yourself in somebody else’s shoes. Elon Musk claimed recently that “the fundamental weakness of Western civilization is empathy,” in that we have too much of it, or talk about it too much, or something like that, but it seems more like a necessity for functioning in any complex society or indeed in any kind of friendship or relationship. People who genuinely lack the ability to empathise – and there are a few – are known medically as psychopaths.

The question of whether computers could ever have consciousness or an inner life links to another article in this issue, which explores whether androids should have rights. They haven’t asked for any just yet, but perhaps as ethical beings ourselves we need to consider whether they might be ethical beings too.

Finally in our digital philosophy section, the article on Virtual Reality explores human interaction with digitally-generated immersive worlds. It connects with a long history of philosophical debates about the meaning and nature of our presence in this world, particularly Heidegger’s notion of Dasein. As with so many questions about the digital world, if you want to understand them in depth, don’t ask AI – ask a philosopher.

In the end, many questions about digital philosophy, and particularly about AI LLMs, are about what we can know, and how we can know that we can know. In other words they connect with the broader questions about epistemology (alias theory of knowledge) that have loomed so large in philosophy throughout its long history. If only this issue had some article discussing some of these classical questions in the context of modern culture… Oh wait, it does! We are delighted to be publishing a brand new article contributed by Slavoj Žižek about the Liar Paradox and its unexpectedly central place in our society and politics. It’s a highly original and surprising piece, and that’s no lie.


Digital Edition News

Ironically, we have some hot news about Philosophy Now’s digital edition. If you are a print or website subscriber to Philosophy Now, or decide to become one, you will now be able to access our completely rebuilt and greatly improved iOS app at no extra charge. Simply download it from the Apple App Store and log in using the same username and password as on our website.
