Artificial Intelligence - on the verge of a revolution?
Artificial Intelligence (AI) promises to take humanity to the next level of productivity, intelligence, comfort and luxury. Breakthroughs in health care, automation and decision-making are a few examples. Large industries, SMEs and individuals can benefit, and already are benefiting, from AI in various forms: from taking over tedious tasks that require little intelligence (such as sorting parcels), to our day-to-day activities (such as filling our cars with petrol), to personalised advice on the best daily diet.
Our smartphones have already begun the process of AI penetration, but this is only the tip of the iceberg. In every arena where technology is useful, AI can make that technology more useful, more intelligent and less prone to human error or human moods. The trend will continue because of the clear benefits for business.
AI’s recent surge of interest is fed by two main areas: Deep Learning (together with Transfer Learning) and Reinforcement Learning. Deep Learning is the main reason for the current excitement around AI: it lets us mimic human intelligence in specific decision-making scenarios using brain-like simple processing units (neurons), formed into chains that perform collaborative and parallel calculations beyond what shallow neural networks could previously achieve.
The idea is to apply an algorithm called backpropagation level by level (layer by layer), and often more locally within each level (as in Convolutional Networks), rather than across the whole structure as in shallow neural networks. This allows us to stack layer upon layer of those networks and successfully learn more complex tasks. And this is the key point here.
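To make this concrete, here is a minimal pure-Python sketch (a toy of our own construction, not any library's implementation) of a small network with one hidden layer learning XOR, a task a single shallow layer cannot solve. The error signal is propagated backwards through the layers, one layer at a time; all names, sizes and learning rates below are illustrative choices only.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: inputs and ready-made desired answers (supervised data).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden neurons (arbitrary toy choice)
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H hidden + bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(sum(w2[i] * h[i] for i in range(H)) + w2[H])
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

initial_loss = mse()
lr = 0.5
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Backpropagation: the error flows backwards, layer by layer.
        d_o = (o - y) * o * (1 - o)                                # output-layer delta
        d_h = [d_o * w2[i] * h[i] * (1 - h[i]) for i in range(H)]  # hidden-layer deltas
        for i in range(H):                                         # update output layer
            w2[i] -= lr * d_o * h[i]
        w2[H] -= lr * d_o
        for i in range(H):                                         # update hidden layer
            w1[i][0] -= lr * d_h[i] * x[0]
            w1[i][1] -= lr * d_h[i] * x[1]
            w1[i][2] -= lr * d_h[i]

final_loss = mse()
print(round(initial_loss, 3), round(final_loss, 3))
```

The same local update rule scales to many stacked layers; deep learning frameworks automate exactly this chain of layer-wise gradient computations.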
Learning or adaptation?
Learning or adaptation in general, albeit varied in complexity, is perhaps the most prominent skill of living creatures, and it is the main ability that has allowed humans to prevail over all other species so far.
We seem to be on the verge of mimicking this process in machines, as we did with aeroplanes and birds, albeit in an early form. At the moment we can let a processing architecture (an artificial neural network, ANN) learn to solve a specific task from inputs and ready-made desired answers. This is called supervised learning.
There is also unsupervised learning, where we let the machine discover for itself what is interesting in a specific set of data.
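As a rough illustration of unsupervised learning, here is a toy k-means clustering sketch in plain Python (our own example; the data and all names are invented). No desired answers are supplied; the algorithm discovers the two groups hidden in the data by itself.

```python
import random

random.seed(1)
# Two unlabelled groups of 1-D points; no "right answers" are given.
data = [random.gauss(0.0, 0.3) for _ in range(50)] + \
       [random.gauss(5.0, 0.3) for _ in range(50)]

# k-means with k=2: the machine finds the grouping on its own.
centres = [data[0], data[1]]          # naive initialisation
for _ in range(20):
    clusters = [[], []]
    for x in data:
        nearest = min((0, 1), key=lambda i: abs(x - centres[i]))
        clusters[nearest].append(x)   # assign each point to its nearest centre
    centres = [sum(c) / len(c) if c else centres[i]
               for i, c in enumerate(clusters)]  # move centres to cluster means

print(sorted(round(c, 1) for c in centres))
```

Despite never being told there were two groups centred near 0 and 5, the centres settle on them, which is the essence of letting the machine decide what structure in the data is interesting.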
This brings us to the lifeblood of AI: data, coming from our daily activities through sensors such as our mobile phones and fridges. Data and sensors are crucial to the process, and their connectivity in an Internet of Things is vital. More data to process means more practice for the network, and machines, which do not get bored like humans, have fixed motivation and guaranteed dedication.
Therefore, the longer they are trained with more data, the better. At least in theory.
In practice, however, they can face what we call overfitting, where an ANN becomes very good at giving the right answer for the presented examples but falls short when faced with novel, unseen examples. Then again, more data with less training can be effective! You might think, “so they are able to answer new questions!” The answer is yes, in general, as long as their generalisation capabilities are good and the questions relate to the task. Overfitting can be prevented with various techniques; Dropout, for example, puts some neurons on hold every now and then, to stop them from overly mastering the training set.
Forgetting is another ability that is proving important, but more work is needed to put it into application.
This in turn brings us to Transfer Learning. Here, a neural network already trained to perform one task can later learn a new task without starting from scratch: only some extra stacked layers and a shorter learning process are needed to master the new task.
All of the above seems good, but it does not involve taking actions. It involves recognising patterns in data and being able to differentiate between them.
This brings us to the second reason for the surge of excitement around AI - Reinforcement Learning (RL). This form of learning is where a machine learns a suitable policy that allows an agent with actuators (a robot or a car, for example) to execute actions in specific situations (states) so as to optimise the achievement of a specific task.
The designer may not know the best behaviour (the solution, given the agent's mechanics) in advance, and the machine is genuinely left to find a solution by interacting with the environment, either discovering the environment's dynamics along the way (model-based algorithms) or mastering the task only (model-free algorithms).
We can use these algorithms, actor-critic methods for example, to train neural networks to actuate an agent to achieve a task based on feedback/rewards. This feedback can be external, coming from the agent's environment, or internal, based on some motivational criteria.
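A minimal tabular actor-critic sketch (a toy of our own, with tables standing in for the neural networks) makes the loop concrete. The agent walks a short corridor towards a goal; the only feedback is a fixed cost of -1 per time step, so the learned policy is the one that wastes the fewest steps. The corridor, the learning rates and all names below are illustrative assumptions.

```python
import math, random

random.seed(0)

N = 5                  # corridor states 0..4; reaching state 4 ends the episode
V = [0.0] * N          # critic: estimated value of each state
pref = [[0.0, 0.0] for _ in range(N)]   # actor: preferences for (left, right)
gamma, a_critic, a_actor = 0.95, 0.1, 0.1

def policy(s):                           # softmax over action preferences
    exps = [math.exp(p) for p in pref[s]]
    z = sum(exps)
    return [e / z for e in exps]

def choose(s):
    return 0 if random.random() < policy(s)[0] else 1

for _ in range(2000):                    # episodes
    s = 0
    for _ in range(100):                 # step limit per episode
        a = choose(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        done = (s2 == N - 1)
        r = -1.0                         # fixed cost per wasted time step
        target = r + (0.0 if done else gamma * V[s2])
        delta = target - V[s]            # TD error: the critic's "surprise"
        V[s] += a_critic * delta         # critic update
        pi = policy(s)                   # actor update (softmax policy gradient)
        for act in (0, 1):
            grad = (1.0 - pi[act]) if act == a else -pi[act]
            pref[s][act] += a_actor * delta * grad
        s = s2
        if done:
            break

print([('left', 'right')[p.index(max(p))] for p in map(policy, range(N - 1))])
```

The critic scores states, the actor adjusts its action preferences in the direction of the critic's surprise, and after enough episodes the policy heads straight for the goal. Replacing the two tables with neural networks gives the deep actor-critic methods used in practice.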
Yes, we can generate emotions in machines and use them to learn a task. In its simplest form, one can set a fixed cost for each time step wasted before the final goal is achieved. Alternatively, the feedback can take the form of emotions arising from grounded behaviour related to a survival instinct or to being affable.
The possibilities here are huge and the applications can be revolutionary. Recently, one unified architecture was able to exceed human performance in a whole host of Atari games.
But, of course, there are challenges ahead. One is certainly technical: Deep Learning and RL both normally take a long time to train and need a lot of data or a lot of repetitions, especially for end-to-end learning (where knowledge of the task is not embedded in the ANN design).
So, DL and RL may not suit every situation, for example when the task allows no training phase beforehand. Of course, we need to distinguish between executing a task (which takes a short time) and learning a task (which takes a long time). Long-term commitment, "raising up" a machine, or training machines just in case they are needed, is a key issue. These concepts are still in their early stages and more work is needed in those areas.
Another challenge is the ethical aspect of AI, which depends on how fast it penetrates our society. Our society's resilience is crucial here, and a mixture of policies and practices will soon be needed to face these challenges. The industrial revolution, with all its pros and cons, is an example whose consequences we still see in our lives today; the hope is that we can handle this coming revolution a little better than before.