For this blog entry I am straying from current affairs and into philosophy (and, some would argue, sci-fi) for a bit. I have been thinking a lot recently about the future of technology and how it will be integrated into our lives; most likely we will become more and more reliant on it. This always raises philosophical questions about the evolution of man and the future of our species, not least because our reliance on technology erodes our own natural abilities, something we often only realise at the moment we suddenly need one of them and find it gone.
But that aside, my love of sci-fi has recently caused me to reflect on the ‘intelligence’ of machines. This all came about largely because I was thinking about a pivotal moment in computing history: the story of Deep Blue. For those of you who aren’t familiar, Deep Blue was a supercomputer created by IBM and designed to play chess. In 1997 it defeated the chess grandmaster Garry Kasparov. During that match there was a moment when Kasparov tried to lure the computer into a trap, and what happened in response was quite remarkable. The computer paused for a whole 15 minutes, calculating how the game would pan out. It was capable of evaluating around 200 million positions a second and genuinely appeared to be contemplating the situation before making a move. When it finally made its move it skillfully avoided the trap before brilliantly outmanoeuvring Kasparov.
Sci-fi has covered all the bases on this subject. As far back as the 1940s Asimov was exploring this sort of thing in his acclaimed Foundation series, and the power of modern computing means that calculating the probability of outcomes and acting accordingly is very possible for the machines we have now and will be creating. People will argue that intelligence in computers is not possible, as they ultimately just do as they are programmed to. But this is short-sighted. It is the connectedness of a system of machines and software that presents a potential future where machines can assess all the possible options and outcomes and then make choices according to ‘absolute logic’. And it is worth noting that we as people are apparently ‘intelligent’, yet we only operate according to information we have learnt. We have the ability to ‘think’ around this, but all that thinking really is, broken down to its simplest element, is the ability to find and apply information to address a question that has arisen.
So let me give you an example. It would not be too much of a stretch to imagine a piece of security software designed to dynamically identify, and create mitigation plans for, anything that presents a risk to a ‘system’. If that system is a piece of hardware that should always run, then one risk is the power being turned off. Assessing that risk reveals that one of the most likely causes of power failure is human intervention to switch the power off. The software identifies that a logical solution is therefore to prevent humans from accessing the power supply controls. It assesses possible ways of doing so and decides that sealing the area by locking the doors is the best option. A new risk is then immediately identified: humans may try to gain entry by removing the door. So it assesses how to prevent this and puts another measure in place, this time electrifying the door, sending a work request to a mobile maintenance machine to make the necessary changes. And so the cycle continues, following ‘absolute logic’ to remove all risks and escalating until the ‘system’ is completely protected and self-contained.
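For the programmers among you, the escalation cycle above can be sketched as a simple loop. This is purely a toy illustration of the idea, not any real product: all the function names, risks and mitigations below are hypothetical stand-ins for what would really be large analysis engines.

```python
# A toy sketch of the 'absolute logic' escalation loop described above.
# The lookup tables stand in for what would really be dynamic risk analysis.

def mitigation_for(risk):
    # Hypothetical risk -> mitigation analysis.
    rules = {
        "power switched off by human": "lock the doors to the power room",
        "door removed by human": "electrify the door",
        "electricity supply cut": "install a backup generator",
    }
    return rules.get(risk)

def follow_on_risk(mitigation):
    # Each mitigation can itself introduce a new risk, continuing the cycle.
    consequences = {
        "lock the doors to the power room": "door removed by human",
        "electrify the door": "electricity supply cut",
    }
    return consequences.get(mitigation)

def protect(initial_risk, max_steps=10):
    """Follow 'absolute logic': mitigate each risk, then mitigate the new
    risk that the mitigation itself creates, until none remain."""
    plan, risk = [], initial_risk
    for _ in range(max_steps):
        mitigation = mitigation_for(risk)
        if mitigation is None:
            break  # no known mitigation: the system is as protected as it can be
        plan.append(mitigation)
        risk = follow_on_risk(mitigation)
        if risk is None:
            break  # this mitigation created no further risk

    return plan

print(protect("power switched off by human"))
# -> ['lock the doors to the power room', 'electrify the door',
#     'install a backup generator']
```

The unsettling part is the last line: each answer spawns the next question, and nothing in the loop ever asks whether humans mind being locked out.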
Now this might seem a bit far-fetched, but actually it isn’t. There are already automated software programs in existence that monitor environmental conditions and react accordingly, be they fire detection or environment controls. There are also ‘risk evaluation’ software packages that provide analysis and resolution options. There are even automated machines that carry out maintenance tasks. Link the three together and you already have the basis of an autonomous presence that, for all intents and purposes, can assess a situation, make ‘decisions’ about it and take the relevant action to prevent it. Add in the ability to dynamically apply sets of rules and information and you have what is, basically, intelligence.
By complete coincidence, this article was published on the BBC last week, suggesting that this is a subject a lot of us are beginning to worry about. Perhaps that is because it feels like we are on the verge of a new revolution in technology. In the next decade I would predict another leap forward in the way we interact with technology, and this may well see more ‘intelligent’ machines being introduced that ‘make decisions’ on our behalf to make our lives easier.
So what is the final thought? Creating machines that are ‘intelligent’ may not lead to the end of the human race, but it could well be the end of us as the dominant and controlling ‘species’ on our planet. By conventional measurements, there will be more machines than humans in the world, they will be capable of autonomous existence and most likely evolution. And they may well make decisions for us about how we should live our lives, whether we like it or not.
I hope you enjoyed my little excursion into philosophical sci-fi.