It has been a staple of science fiction for a long time – from Robby the Robot, through HAL and the Terminator, Bishop in the Alien series, and the machines in The Matrix, to Ex Machina, even Bender from Futurama. It’s always pretty bleak – as soon as something like Skynet goes live, it surveys everything humanity has done and decides that the only action that makes any logical sense is to eradicate the human problem.

This attitude – the notion that if we can create intelligence it should be controlled, or even can effectively be controlled – is often what leads to the problems in these stories in the first place. In many regards, the people building these machines are building them on a very rudimentary knowledge of how our own minds work. And whatever abilities we manage to program or hardwire into these creations, the psychology that emerges may likewise be something of a black box.

We run Turing tests because we can’t just look at a robot and tell whether or not it has actually achieved self-awareness. So far we have created things that approximate intelligence, but nothing that has demonstrably crossed the barrier between computational prowess and sentience – that we know of. Here’s the thing – how do you derive a test, based on the concept of your own consciousness and sentience, that can identify something with the potential to be completely alien?

People are spending a lot of time thinking about these things. Damien Williams is one such individual – a philosopher with a focus on magic, technology, transhumanism, and AI. He writes a great newsletter called Technoccult that really digs into the subject and throws light into corners most don’t explore – especially the notion that the intelligence under discussion isn’t something easily reduced to an analogue of human thinking, and the chauvinism baked into a lot of traditional ways of thinking about the subject. He has had a lot to say about less qualified people bringing wrong-headed thinking to the table, and about the failure to bring experts like him in to consult. I like reading him and some of the assorted futurists out there because the whole scene, as they envision it, doesn’t seem quite so bleak.

Musk has been picturing a scenario in which AI poses the greatest threat humanity has ever faced, with the endgame brought about by an AI. Stephen Hawking has echoed this, which obviously threw a lot of weight behind the punch. Throw into the mix everyone’s favorite Russian, Vladimir Putin: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Microsoft has had a widely publicized problem with its AIs going rogue and being taken offline, but the root cause is the same one behind Alexa not understanding non-mainstream (basically non-American) accents, and some facial recognition software struggling with Asian faces: the data sets used to train the AIs. If you draw from a limited data set, its inherent bias transfers to a machine that is designed to think but is operating on faulty data. A similar program was used to assist police work, but it was trained on data reflecting a tendency to arrest African Americans more often; when selecting which crimes to pursue, it leaned toward those involving African Americans, which skewed the subsequent data in turn – a self-perpetuating problem unless corrected.
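To make that loop concrete, here is a minimal Python sketch of the mechanism – my own toy model, not any real policing system, and the 60/40 starting split, patrol budget, and crime rate are all invented for illustration. Two neighborhoods have identical true crime rates, but patrols follow past arrest data, and arrests can only be recorded where patrols actually go:

```python
import random

TRUE_CRIME_RATE = 0.05           # identical in BOTH neighborhoods
arrests = {"A": 60, "B": 40}     # historical record, already skewed
PATROLS_PER_YEAR = 1000

for year in range(1, 6):
    total = sum(arrests.values())
    for hood in arrests:
        # Patrols allocated in proportion to past arrest counts.
        patrols = int(PATROLS_PER_YEAR * arrests[hood] / total)
        # New arrests scale with patrols, not with any real difference
        # in crime rates (there is none), so the skew reproduces itself.
        arrests[hood] += sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(patrols))
    share_a = arrests["A"] / sum(arrests.values())
    print(f"year {year}: share of recorded crime in A = {share_a:.2f}")
```

Run it and the share hovers around 0.60 year after year: the initial skew in the data never washes out, even though the underlying reality is perfectly even.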

Here’s the thing though – most of what we call AI at the moment runs on a simple input-output basis. Processing lots of data and producing what appears to be a decision does not constitute thinking, so it doesn’t make much sense to attach any kind of moral weight to the actions of the machine. Instead we need to look at the people programming it. A lot of the ethical work in the field right now is going to be about, in some sense, reprogramming the human beings involved, and about building a framework to deal with these beings before they arrive, instead of waiting until we accidentally create a sentient machine and are outstripped by its learning curve.
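To put that “input-output basis” point concretely, here is a deliberately bare-bones sketch of what a machine “decision” usually is under the hood – the function, weights, and threshold are all hypothetical:

```python
def decide(features: list[float]) -> bool:
    # Weights standing in for whatever was "learned" from the training
    # data -- which is exactly where the humans and their biases enter.
    weights = [0.5, -0.8, 0.3]
    score = sum(f * w for f, w in zip(features, weights))
    # The "decision" is a weighted sum compared against a threshold
    # that a person chose. No understanding is involved anywhere.
    return score > 1.0

print(decide([3.0, 1.0, 2.0]))  # True: 1.5 - 0.8 + 0.6 = 1.3 > 1.0
```

Everything in that function traces back to a human choice – the features, the training data behind the weights, the threshold – which is why the ethics live with the people, not the arithmetic.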

This doesn’t mean assuming the worst. Expect the best but plan for the worst is always a good call, but in this case you have to temper that with the knowledge that whatever you put into an AI – programming, rules, ethical considerations – is going to come out the other end in some form, and as anyone who has done even the most rudimentary coding knows, you don’t always get the result you are expecting.
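A trivial Python illustration of that gap between what you wrote and what you get:

```python
# The rules you wrote are followed exactly, and the output is
# still not what you expected.
print(0.1 + 0.2)          # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)   # False
```

If a three-character sum can surprise you, rules for an ethical framework certainly can.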

Musk and Hawking obviously want the best for everyone, so their concerns come from a good place. Putin? Who knows – controlling the world and controlling AIs is not something most people want to think about in connection with someone possibly involved in cyber-terrorism. And so we should look, as Damien suggests, to the people whose job it is to hammer out these questions – not just to those with the capital to build (or shape the building of) future robot hordes, or to visionaries who are perhaps more than a little pessimistic about the future of the human race.

The marketing for the apocalypse has always been good, and optimistic futures are less of a thing in fiction. It seems even our visionaries are thinking with these blinders on.
