A contribution from Jasmine Morgan.
Schools of Thought
Since the era of mainframe computers, people have envisioned the potential of machines to help them perform tasks faster, more accurately, and more cheaply. Until recently, this was done through software written in a Boolean, logical, and transparent way. Conditional statements (if/then/else) and loops (for/do) were the cornerstones of software as we know it. Understanding and changing the code was just a matter of training.
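That transparency can be seen in a few lines of code. In this invented loan-screening sketch (the thresholds and data are made-up illustrations, not a real policy), every decision traces back to an explicit, human-readable rule:

```python
# Classical, transparent software: every decision is an explicit rule.
def loan_decision(income, debt):
    """Approve a loan using hand-written rules (illustrative thresholds)."""
    if income <= 0:
        return "reject"        # no income, no loan
    ratio = debt / income
    if ratio < 0.3:            # low debt burden
        return "approve"
    elif ratio < 0.6:          # borderline case, a human should look
        return "review"
    else:
        return "reject"

# A loop applies the same auditable rule to every applicant.
applicants = [(50000, 10000), (40000, 20000), (30000, 25000)]
decisions = [loan_decision(inc, debt) for inc, debt in applicants]
print(decisions)  # ['approve', 'review', 'reject']
```

If such a program misbehaves, a trained programmer can point at the exact line responsible and change it.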
Everything changed with the introduction of neural networks and deep learning. Even the best specialists have a hard time understanding how the machine comes up with brilliant answers after digesting large quantities of data. This is closer to the way the human brain works, relying on intuition more than logical deduction. Although impressive, it can also be frightening, and it raises the question: should we allow such machines more autonomy?
Humans and AI: A Winning Team
So far, Level 4 AI autonomy remains out of reach, and we should not worry about bots taking over the world anytime soon. Yet the advances are remarkable and should be put to good use, even if the intelligence still needs help and supervision from humans.
Right now, there are three viable options for making use of AI: letting the machine handle simple, repetitive tasks on its own; using the AI as a trustworthy assistant to a human specialist; or having a person ready to take over at any moment while encouraging the AI to keep learning and developing.
The last two cases are the most interesting and the most worth developing, since most of the cases in the first category are merely examples of automation, not real AI.
AI-assisted humans are usually specialists who could use an extra pair of eyes or a second opinion in their job. Imagine a radiologist looking at something that could be a malignant tumor. Before giving such a diagnosis, it would help to compare the sample against a large set of known examples, and that is precisely where AI comes into play. The results obtained by the human-machine team exceed the individual performance of either.
A further step is letting the deep learning algorithm make the decisions, with a human overseeing it and intervening only when the AI makes glaring mistakes. This is what a self-driving car with a person behind the steering wheel is currently doing. For safety reasons, it is not yet wise to give full control to the car, although it performs well enough to be compared to a boring, predictable, and careful driver. The same logic applies to corporate chatbots handling client relationships that can escalate a conversation to a human customer service representative.
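The escalation pattern behind such chatbots can be sketched in a few lines (the intents, canned replies, and confidence threshold here are invented for illustration): the bot answers only when it is confident, and hands everything else to a person.

```python
# Human-in-the-loop escalation: the bot answers only when confident.
CANNED_ANSWERS = {  # invented intents for illustration
    "reset_password": "You can reset your password from the login page.",
    "opening_hours": "We are open 9am-5pm, Monday to Friday.",
}

def handle(intent, confidence, threshold=0.8):
    """Return the bot's reply, or escalate to a human representative."""
    if confidence >= threshold and intent in CANNED_ANSWERS:
        return CANNED_ANSWERS[intent]
    return "ESCALATE: routing you to a human representative."

print(handle("reset_password", 0.95))  # confident -> bot answers
print(handle("refund_status", 0.4))    # unsure -> human takes over
```

The threshold is the dial that decides how much autonomy the machine gets: raise it and humans do more work; lower it and the bot answers more often, mistakes included.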
The Black Box
The greatest challenge posed by AI development is that the algorithm is not explicit about how it makes decisions. This translates into an inability to tweak it quickly when it makes an error. The only way to teach it is much like teaching a child right from wrong: by giving it numerous (as in thousands or millions of) good examples and hoping it will perform accordingly the next time.
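A minimal sketch of that example-driven teaching is a toy perceptron, shown below in plain Python (the task and training loop are deliberately simplified illustrations, far smaller than any real deep network). Notice that nobody edits a rule; the only lever is showing the model labeled examples and nudging its weights when it is wrong.

```python
# Teaching by example: a toy perceptron learns the logical AND function.
# There is no if/then rule to edit; the behavior lives entirely in the
# weights, which get nudged whenever a prediction is wrong.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]  # weights: adjusted by correction, never edited by hand
b = 0       # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(10):                  # repeated passes over the examples
    for x, target in examples:
        error = target - predict(x)  # +1, 0, or -1 correction signal
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # learns AND: [0, 0, 0, 1]
```

Even in this tiny model, the final weights do not "explain" the behavior the way an if/then rule would; scale this up to millions of weights across many layers and the opacity of real neural networks becomes apparent.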
While in conditional algorithms it is straightforward to isolate the code that causes an error and correct it, here the computer programs itself. Pinpointing the element in the training data that caused the misbehavior is almost impossible, since the information entering the first neural layer is processed, passed on to the next layer, and subject to feedback mechanisms along the way. Giving autonomy to AI-powered devices would require trusting them not to make mistakes.
As described, to become more autonomous, the AI should be able to correct its own mistakes instead of propagating them further. Imagine a car that fails to stop at a red traffic light on one occasion. Without correction, the chances that it will repeat this mistake are high; it can even come to believe that this is the norm (remember Microsoft's Tay chatbot?). This type of learning, also used by humans, involves a lot of trial and error. AI is still in the assisted stage, where a team of experts monitors outcomes and corrects the machine by providing even more "right" data sets.
One of the biggest problems of AI is its tendency to yield false positives. In some situations this is harmless, such as a self-driving car slowing down before something that might be an obstacle. In others it is life-threatening, such as offering treatment for a condition whose symptoms the patient exhibits but which they may not actually have.
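The trade-off can be made concrete with a small sketch (the confidence scores and labels below are invented for illustration): lowering the decision threshold catches more true cases, at the cost of flagging healthy patients.

```python
# False positives vs. false negatives on invented diagnostic scores.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.5]  # model confidence (made up)
truth  = [0,   0,   1,    1,   1,    0,   1,   0]    # 1 = condition present

def confusion(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, t in zip(scores, truth) if s >= threshold and t == 0)
    fn = sum(1 for s, t in zip(scores, truth) if s < threshold and t == 1)
    return fp, fn

print(confusion(0.6))  # strict threshold:  (0, 1) - one true case missed
print(confusion(0.3))  # lenient threshold: (2, 0) - two healthy patients flagged
```

Which error is worse depends entirely on the domain: for an obstacle in front of a car, the false positive is cheap; for a misdiagnosed treatment, it can be the dangerous one.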
Using neural networks is recommended for projects that involve learning patterns, customs, and best practices. A recent article by InData Labs suggests three immediate applications for AI in a mobile environment. Recommendation services are the first that come to mind, inspired by Amazon's work, and these are innocent enough to be fully autonomous. Another example is learning behavioral patterns, useful for anything from personal concierge services to a personal assistant bot or even a robot accountant, which could require some human intervention. The last and most important step would be apps able to think and decide on the best course of action, such as a personal legal adviser, a personal doctor, or the elusive driverless car; however, we are not there yet.
For now, it is safe to assume that AI will continue to grow with proper human intervention. The next step is to teach it to correct its own mistakes by running numerous cycles of good examples and developing "mentor" algorithms that will gradually replace humans. The most important feature of AI is the possibility of transfer learning: once a machine has learned something, the result can simply be copied to another machine, so training does not have to start from zero on every computer but can continue from a previous level. This feature gives AI the possibility of evolving to the human level in just a few years. Will humanity be ready for this?
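In its simplest form, that copying works because the learned behavior lives entirely in the model's parameters. A deliberately simplified sketch (the weights and model here are invented, toy-sized illustrations):

```python
import copy

# Learned behavior lives entirely in the parameters, so "training" a second
# machine can be as simple as copying them over (a deliberately tiny sketch).
trained = {"weights": [2, 1], "bias": -2}  # parameters learned elsewhere

def predict(params, x):
    s = sum(w * xi for w, xi in zip(params["weights"], x)) + params["bias"]
    return 1 if s > 0 else 0

clone = copy.deepcopy(trained)  # the second machine starts here, not at zero
print(predict(clone, (1, 1)))   # behaves exactly like the original: 1
```

Real systems ship the same idea at scale: a network trained once can be distributed to any number of devices, each of which can then keep learning from where the original left off.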
Jasmine Morgan, the author of this piece, can be emailed at firstname.lastname@example.org