Many of us witnessed the historic moment when Google CEO Sundar Pichai played the recording of an AI-driven conversation at I/O ’18. For those who missed that history-defining moment, here’s a quick ~3 min video below. The Google Assistant’s conversation gives us a clear sense of where Artificial Intelligence (AI) / Machine Learning (ML) stands today, as well as what could potentially unfold in the near future.
AI did face challenges early on with many unrealistic expectations, but it bloomed once technologists began to realize its true potential.
Artificial Intelligence and Machine Learning are concepts in the field of computer science where a system can evolve on its own. Traditionally, we write business rules in a programming language to perform specific functions. An AI system, by contrast, relies on self-learning: it figures out how to perform the same function through training and observation.
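To make the contrast concrete, here is a minimal sketch in Python using scikit-learn. The threshold, feature values and labels are purely illustrative assumptions, not taken from any real system: the first function encodes a hand-written business rule, while the second lets a model learn an equivalent rule from labelled examples.

```python
# A hand-written rule vs. a rule learned from data (illustrative values only).
from sklearn.tree import DecisionTreeClassifier

# Traditional approach: a developer writes the business rule explicitly.
def flag_for_review_rule(transaction_amount):
    return transaction_amount > 1000  # threshold chosen by a human

# ML approach: the system learns the rule from labelled examples (training data).
examples = [[120], [980], [1050], [2500], [300], [1600]]  # transaction amounts
labels   = [0,     0,     1,      1,      0,     1]       # 1 = flag for review
model = DecisionTreeClassifier().fit(examples, labels)

print(flag_for_review_rule(1800))   # True, because the hand-written rule says so
print(model.predict([[1800]])[0])   # 1, because the training data implied it
```

The point is not the algorithm itself but where the decision logic comes from: in the first case a human wrote it, in the second the system derived it from observations.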
There are many categories of AI and they differ in the way their system learns. The above example of Google Assistant relies heavily on deep neural networks with reinforcement learning. Going into the specific technical details of these categories is a much larger discussion, which we will park for another time.
This brings us to the first of 5 myths.
Myth 1: AI and ML are the same
Artificial Intelligence and Machine Learning are related but not quite the same. ML is a subset of AI, yet the two terms are frequently used interchangeably. AI solutions depend on a pre-trained model, which is the brain of the AI system. ML helps the system go beyond its initial training and learn new tasks/processes on the job: an ML system keeps improving the pre-trained models as new data becomes available. But how do we know whether the new learning was correct? That question leads us to the next myth.
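As a minimal sketch of what "keeps improving as new data becomes available" can look like in practice, here is a hypothetical Python example using scikit-learn's SGDClassifier, which supports incremental updates. The data is synthetic and the daily-batch setup is an assumption for illustration.

```python
# A model improving as new batches of data arrive, without retraining from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()

def new_batch(n=200):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the pattern the model must discover
    return X, y

X_test, y_test = new_batch(1000)               # held-out data to measure progress

# Each day's data nudges the same model a little further.
for day in range(1, 6):
    X, y = new_batch()
    model.partial_fit(X, y, classes=[0, 1])
    print(f"after day {day}: accuracy = {model.score(X_test, y_test):.2f}")
```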
Myth 2: ML systems are completely self-sufficient
As you might have already guessed, the answer is no. Every now and then, any new learning needs to be validated. If this validation is done through human intervention, it’s called supervised learning*. There are also other ways to validate the data without manual inspection and to keep the system from overfitting.
What exactly is overfitting? Let’s look at a quick example.
Example: A supermarket is equipped with an ML system. The ML system is fed an array of customer data (features): name, purchase date, purchase value, contact details, date of birth, payment type (credit card/debit card/cash, etc.) and address. The ML system needs to identify ideal customers who:
- have spent more than, say, $500 in a single transaction, multiple times.
- are qualified leads to be contacted by the marketing team for upselling.
- may have a high conversion rate.
One interesting fact about the ML system is that it’s kind of a black box in decision making.
We don’t get to see the logic behind how or why it arrived at its output. While the algorithm learns from the labelled dataset, its decision making can become dominated by one particular feature that may not actually matter in the real world, producing erratic output. In this example, say that 60% of the high-spending customers happen to share a month of birth, e.g. May. This coincidence becomes a crucial decision-making factor for the ML system, and it ends up excluding qualified leads born in other months from promotional campaigns, which may result in lost revenue. Here the ML system is said to be overfit to the training data.
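To make this concrete, here is a minimal hypothetical sketch in Python using scikit-learn. The customer data is synthetic and the field names are assumptions for illustration: in the training data, high spenders coincidentally share a birth month, but new customers don't follow that pattern, so a model that latches onto birth month scores well on the data it was trained on and poorly on customers it has never seen.

```python
# Overfitting to a spurious feature (synthetic supermarket-style data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_customers(n, month_correlates_with_spend):
    spend = rng.uniform(0, 1000, n)              # largest single-transaction spend
    ideal = (spend > 500).astype(int)            # the label we actually care about
    if month_correlates_with_spend:
        # Spurious pattern: most high spenders share a birth month (May = 5).
        month = np.where(ideal == 1, 5, rng.integers(1, 13, n))
    else:
        month = rng.integers(1, 13, n)           # no real relationship
    noise = rng.uniform(0, 1, n)                 # stand-in for irrelevant fields
    # Spend (the true signal) is deliberately left out of the features, to mimic
    # a model that ends up relying on the wrong field.
    return np.column_stack([month, noise]), ideal

X_train, y_train = make_customers(500, month_correlates_with_spend=True)
X_new, y_new = make_customers(500, month_correlates_with_spend=False)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("accuracy on training data:", model.score(X_train, y_train))  # looks great
print("accuracy on new customers:", model.score(X_new, y_new))      # falls apart
```

The large gap between the two scores is the signature of overfitting: the model memorised coincidences in its training data instead of learning a rule that generalises, which is why a held-out validation set (or human review) is needed before trusting the new learning.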
Myth 3: AI is still an exclusive technology and not generally available for business use.
It’s true that most large IT solution providers / OEMs have developed their own AI products, from Apple’s Siri and Amazon’s personalized shopping recommendations to Android Pie’s adaptive behavior. But for small businesses and services-based industries, there are many ways to incorporate AI into day-to-day business. This can be made possible through cloud services from Google, Amazon or Microsoft, and solutions can be on-premise or device-based, integrated through ready-to-use SDKs. AI can help improve many business operations and can be adapted to user or business needs.
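As a small illustration of the "ready-to-use SDK" route, here is a hedged Python sketch using Google's Cloud Vision client library to label a product photo. It assumes the google-cloud-vision package is installed and Google Cloud credentials are configured; the file name is a placeholder.

```python
# A minimal sketch of calling a pre-trained cloud AI service (Google Cloud Vision).
# Assumes: `pip install google-cloud-vision` and GCP credentials configured locally.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("product_photo.jpg", "rb") as f:        # placeholder file name
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)    # pre-trained labelling model
for label in response.label_annotations:
    print(label.description, round(label.score, 2))  # label name and confidence
```

A few lines like these are often enough for a small business to add image labelling, text extraction or sentiment analysis to an existing workflow without building or training any model in-house.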
Note that AI/ML does not have magical capabilities. Some tasks are still a long way from being fully automated or self-intelligent.
Myth 4: Singularity: AI will take over and dominate humanity
This theory is a popular theme in sci-fi movies. The singularity is the hypothetical point at which artificial superintelligence qualitatively surpasses human intelligence. It is the point where machines can upgrade themselves and become fully aware of their own capabilities, leading to an intelligence explosion. Such an intelligence would not necessarily be a hostile entity, but in pursuing its own roadmap it could prove dangerous.
The answer: not possible with today’s technology. There are many factors behind this. Just as there is no one shoe that fits all, there is no machine learning algorithm that can solve every problem. Each AI solution has a very particular and narrow use case, so in many cases a combination of multiple algorithms is used, from Tesla’s self-driving cars to Boston Dynamics’ self-balancing robots. But again, the combination is very use-case specific. To reach singularity with today’s AI, every possible action and related dataset would have to be created, and the machines would need the motivation to advance towards it. AI/ML mimics human behavior and patterns, then performs an enormous number of permutations to find the best match. It simply can’t do something it’s not trained for. Take, for example, Microsoft’s Tay or Facebook’s AI bots* and the reasons for their suspension/shutdown. A certain amount of control needs to be retained by humans.
Myth 5: “In the future, robots will take our jobs”
It’s an undeniable fact that machines are automating a lot of repetitive, process-oriented human jobs. However, when it comes to creativity, emotional intelligence, vision, and foresight, humans still score higher. As Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, says, “The future is not preordained by machines. It’s created by humans.”
There is simply no complete dataset or digital representation of everything a human can do, and there never will be (from emotions to economic and social value-based activities).
There will certainly be a wave of role transitions. We have already seen this occur when cloud computing impacted server-admin responsibilities and automation impacted testing roles. But those skilled people were then presented with new and strategic opportunities, training and roles.
In a broad way, current job requirements can be classified as:
- Innovation that needs creativity or strategy, e.g. painters and scientists.
- Existing tasks that are recurring or need maintenance/optimization, e.g. remote teachers and nurses.
- Roles where human emotions and strategic decisions are an integral part, e.g. teachers and CEOs.
- Roles where being practical is more important than compassion, e.g. security guards and drivers.
Above is the job classification from Kai-Fu Lee’s TED talk. As he famously said, “AI might be taking away a lot of routine jobs, but routine jobs are not what we’re about.
We should differentiate ourselves from our jobs as humans in the age of AI. There is an axis of creativity, compassion, love, empathy and social work. Those are things that AI cannot do.”
Following is one possible way jobs can be distributed between humans and AI:
To conclude, Artificial Intelligence is a force for a better world rather than a threat. It is a technology evolution that will revolutionize industries like healthcare, supply chain, finance and science.
Do you have a different take on any of the myths we have debunked above? Share your thoughts below in the comments.
Looking to understand how you can utilize AI/ML in your organization? Contact our team to get started.