Artificial Intelligence | fears and future

Lizzie Riach summarises a talk in the Royal Society's series 'Science Matters' on where artificial intelligence is heading, and why there's no need to be scared of it taking over the world...yet.

Last Wednesday I attended a talk featuring five leading experts in the fields of machine learning and artificial intelligence (AI), moderated by the calming voice of Brian Cox. It was an enlightening evening. I hadn't really thought too much about AI, and what struck me was how much of what I did know was based on science-fiction films. What soon became apparent was that, obviously, we are nowhere near the likes of the humanoid robot seen in Ex Machina, a film for which one of the panellists, Murray Shanahan, was the scientific advisor. Sure, a machine (DeepMind's AlphaGo) had beaten the world's number one Go player at the last great strategy game in which humans had, until then, remained unbeaten. But that machine was built specifically for playing Go. It had no other functions outside the realm of game-play – it certainly wasn't going to take over the world.

The first question was: what exactly is the difference between machine learning and AI? The general consensus was that machine learning serves a narrow function within the broader spectrum of AI. In machine learning, a computer is given an algorithm that lets it learn its own solutions from data. This is already extremely widespread: Facebook uses it to tag photos via facial recognition – the more photos of a person you tag, the more data the algorithm has about what that person looks like. Similarly, Siri uses voice recognition to understand commands. These algorithms decide what you might like on Netflix or Spotify, and you may also notice that adverts on Facebook tend to be customised to your likes, searches, and events. In my case, UV rave make-up ads haunt my newsfeed – perhaps a hint that I should stop clicking 'attending' on RA events and actually get on with my Masters. The point is, machine learning is becoming integral to advertising, as well as a basic interface between people and their devices. The Amazon Echo and Alexa Voice Service were among the biggest Christmas bestsellers thanks to their voice-recognition technology.
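
To make that tagging loop concrete, here is a toy sketch in Python. Everything in it – the names, the eight-number 'face' vectors, and the nearest-average matching rule – is an illustrative assumption, not Facebook's actual system, which computes face features with deep neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each tagged photo is reduced to a vector of numbers describing the face.
# (Random numbers stand in here for features a real system would compute.)
tagged_photos = {
    "alice": [rng.normal(loc=0.0, size=8) for _ in range(5)],
    "bob":   [rng.normal(loc=3.0, size=8) for _ in range(5)],
}

def tag(photo):
    """Guess whose face this is: the name whose tagged photos it most
    closely resembles on average."""
    centroids = {name: np.mean(photos, axis=0)
                 for name, photos in tagged_photos.items()}
    return min(centroids, key=lambda name: np.linalg.norm(photo - centroids[name]))

new_photo = rng.normal(loc=3.0, size=8)   # a face that resembles "bob"
print(tag(new_photo))                     # -> bob

# When the user confirms the tag, the photo joins the labelled data,
# so every future guess draws on more information.
tagged_photos["bob"].append(new_photo)
```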

The same approach can be employed in healthcare; identifying cancer cells is one example. At first, a machine scanning images of healthy and cancerous cells would have no data on either, so it would simply guess which is which. Each time a doctor verifies the answer, that verified label is logged. Thousands of cases later, the machine will have found patterns in what makes certain cells look cancerous, and in time it may make more accurate diagnoses than humans. So this is machine learning: a fairly narrow part of AI, but the one that has garnered the most attention in the past few years. AI uses machine learning, integrating it into a more general framework to become 'intelligent' by doing 'the right thing at the right moment' in an ever-changing world.
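
Here is a minimal sketch of that doctor-in-the-loop cycle, with scikit-learn's SGDClassifier standing in for the learner and random numbers standing in for real image features (both are assumptions for illustration; the panel named no specific tools). The printed accuracy climbs as verified examples accumulate.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)

def scan_cell():
    """Simulate scanning one cell: cancerous cells (label 1) have feature
    values shifted upward relative to healthy ones (label 0)."""
    truth = int(rng.integers(0, 2))
    features = rng.normal(loc=1.5 * truth, size=10)
    return features, truth

model = SGDClassifier()
correct = []
for case in range(2000):
    features, truth = scan_cell()
    if case == 0:
        guess = int(rng.integers(0, 2))        # no data yet: a pure guess
    else:
        guess = int(model.predict([features])[0])
    correct.append(guess == truth)
    # The doctor's verified label is logged as one more training example.
    model.partial_fit([features], [truth], classes=[0, 1])

print("accuracy over the first 100 cases:", np.mean(correct[:100]))
print("accuracy over the last 100 cases: ", np.mean(correct[-100:]))
```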

But how do we keep track of what exactly AI is learning or doing? Even the experts confessed that, a lot of the time, the most complex and powerful computers used for machine learning give them next to no information on how they do it. This is because within these 'black boxes', inspired by the neural networks of our own brains, the algorithms become too complex to interpret. However, the scientists reiterated that the very architecture of these machines limits them to the tasks they are given.
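
That 'black box' point is easy to demonstrate on a toy scale. In the hypothetical sketch below, a small neural network (scikit-learn's MLPClassifier, chosen purely for illustration) learns a hidden rule, and although we can print every one of its learned weights, the raw numbers explain almost nothing about how it decides.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # an arbitrary hidden rule

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))

# We have total access to the trained parameters...
for layer, weights in enumerate(net.coefs_):
    print(f"layer {layer} weights, shape {weights.shape}:")
    print(weights.round(2))
# ...yet nothing in this printout reveals which inputs matter or why.
```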

So what about the future? Predictions all pointed to machine learning becoming extremely widespread behind the scenes: in healthcare, helping to diagnose and treat patients, and in public services, identifying the houses most at risk of fire or the young people most at risk of abuse. Self-driving cars will be everywhere, and buildings designed by machines to achieve maximum strength at minimal cost will prove extremely helpful to engineers and architects. Crime-pattern analysis, financial prediction, pharmaceutical drug synthesis, and genetic fingerprinting could all be aided by AI. Speech interfaces between humans and devices will be better equipped to handle complex speech and questions. "So which jobs are 'tech-proof'?" a concerned audience member asked. The speakers stressed that although AI may make some jobs redundant, in many others it will simply take over the menial, boring tasks, giving us more time for the fun and creative ones. The 'safest' jobs were apparently those that retain the need for manual dexterity, such as plumbing and electrical work, along with care-giving and roles in the creative and entertainment sectors.

Our very own Director of the Aerial Robotics Laboratory, Dr. Mirko Kovac, graced the stage at this point to talk about where drone technology could be heading. He said that photography and delivery were the obvious applications of drones, and that we should be thinking more creatively about their wider uses. His bio-inspired ideas centred on the 'Build Drone', which attaches itself to pillars by building 'webs' when it is running low on flying power. Once attached, it can recharge and sense its surrounding environment, much like a spider waiting for prey in its web. By drawing inspiration from animal behaviours, Dr. Kovac and his laboratory have found ways to let their drones act autonomously.

This talk cleared up my own unfounded prejudices against AI and opened my eyes to the possibilities it may hold. Despite my amazement, though, I couldn't help but feel that AI may come to know our habits and living patterns better than we do. Who will have access to this information? The biggest global tech companies are only going to grow in power with this technology; I just hope it's in a positive direction.