A Pessimist’s View on AI
Note that I use the term “AI” throughout, treating it as interchangeable with “machine learning” for simplicity.
I’m usually an optimist, but I’m not as positive on AI as the press seems to be. In many ways, we’ve made a lot of progress. We have robots that can do parkour, AI assistants that can make appointments for you, and of course, self-driving cars. On the content front, Netflix, Spotify, and the like know you so well that they can predict exactly what you want, when you want it, right? The reality is, the Roomba is the closest thing to a consumer-facing robot we have, I still call my barber to make a haircut appointment (he manages his calendar with pen and paper), and it STILL takes me 30 minutes to find something good to watch on Netflix. So how far along can we really be on the trajectory of AI development?
Now, let me clarify by saying that I do genuinely believe that AI will be one of the most transformative technologies of this generation. However, we have a long way to go before we, as consumers, live in a world where AI is ubiquitous and meaningfully impacting our day-to-day. While we have made some eyebrow-raising advancements on the research front, we have yet to see much of this research implemented into the actual products that we use.
I think our barometer for progress should not be tied to the state of our research but rather to the state of our products.
So let’s talk about where we are in terms of the products we actually use today.
Natural Language Understanding
- We have pretty good speech-to-text technology that powers text dictation and voice queries made from our smartphones and smart speakers. Note that these technologies typically require an internet connection
- Voice assistants like Google Assistant, Siri, and Alexa are able to parse out relatively sophisticated voice commands. However, they aren’t good at retaining context over time as a human would, and much of the logic is rules-based and not “true” AI
- Gmail now uses AI to autocomplete sentences for you, a feature added earlier this year. Gmail also uses AI to auto-categorize emails by topic and to identify spam
- iOS has auto-suggest text responses, which includes predicting semantic meanings of emojis
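To make the autocomplete idea concrete, here’s a toy next-word suggester of my own; real products use neural language models, not simple bigram counts like this, and the tiny corpus below is made up:

```python
from collections import Counter, defaultdict

# Toy next-word suggestion: count word bigrams in a tiny corpus, then
# suggest the most frequent follower. This is an illustrative sketch,
# not how Gmail or iOS actually implement their features.
corpus = (
    "thanks for the update "
    "thanks for the help "
    "thanks for your time "
    "see you at the meeting"
).split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def suggest(word):
    # Most common word observed after `word`, or None if unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("for"))  # "the" follows "for" most often in this corpus
```

Even this crude version captures the core loop: learn statistics from text you’ve seen, then predict the likeliest continuation of text you’re typing.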
Image Recognition
- Face filters in Snapchat, Instagram, and iOS use AI to map effects onto the live video feed of your face
- For years, Facebook has suggested friends to tag in photos based on faces that their AI detects
Recommendations
- Amazon’s product suggestions are a classic example of AI-powered product recommendations
- If you’ve ever found yourself watching YouTube video after video for minutes or hours on end, you know that YouTube’s “Up Next” algorithm is scary good. It’s a neural network-based recommendation system that uses signals from a user’s watch history and search history
- Spotify’s Discover Weekly is a popular illustration of AI-driven content recommendations, as is Netflix’s film scoring and ranking algorithm
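As a rough sketch of how these recommendation systems rank content, here’s a minimal embedding-similarity example. The vectors below are random placeholders standing in for embeddings a neural network would learn from signals like watch and search history:

```python
import random

random.seed(0)

# Toy embedding-based ranking. In a real system the vectors are learned
# by a neural network; here they are random placeholders just to show
# the scoring-and-ranking step.
DIM = 8

def rand_vec():
    return [random.uniform(-1, 1) for _ in range(DIM)]

video_embeddings = {f"video_{i}": rand_vec() for i in range(100)}
user_embedding = rand_vec()

def score(user, video):
    # Dot-product similarity between the user's taste vector and a video.
    return sum(u * v for u, v in zip(user, video))

# Rank every candidate and surface the top 5 as "Up Next" suggestions.
ranked = sorted(video_embeddings,
                key=lambda vid: score(user_embedding, video_embeddings[vid]),
                reverse=True)
print(ranked[:5])
```

The hard part in practice isn’t this ranking step, it’s learning embeddings that actually reflect taste, which is exactly where the domain expertise discussed below comes in.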
Prediction
- Airbnb uses AI for rental suggestions, dynamic pricing, and fraud detection
- Uber uses AI to predict spikes in rider demand to manage its supply of drivers in certain areas
Fraud Detection
- Most financial firms (e.g., credit card issuers) use some kind of AI to detect fraudulent transactions
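A bare-bones illustration of the idea, using a made-up transaction history and a simple outlier check (real fraud models use far richer features and techniques than this):

```python
import statistics

# Toy fraud flagging: mark transactions that sit far from a customer's
# typical spending. Purely illustrative; production systems model many
# signals beyond the dollar amount.
history = [12.5, 40.0, 23.0, 8.75, 31.0, 18.0, 25.5, 15.0]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def looks_fraudulent(amount, threshold=3.0):
    # Flag amounts more than `threshold` standard deviations from the mean.
    return abs(amount - mean) / stdev > threshold

print(looks_fraudulent(20.0))   # typical amount: False
print(looks_fraudulent(950.0))  # huge outlier: True
```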
Ok, so it’s clear that AI is already making an impact on our lives, but these products are a far cry from human levels of intelligence. Let’s talk about some of the constraints that are keeping us from having smarter systems today.
Some Current Constraints
- Architecting a good AI system is still intuition-based. In most cases, the AI in the products we use consists of fine-tuned algorithms architected by domain experts. That means a large constraint on the effectiveness of these algorithms is the intuition, experience, and creativity of the data scientists and engineers producing them. Iterative improvements to AI systems are still largely made by trial and error.
- Improving a system still requires human intervention. As noted above, system improvements happen largely by trial and error. They also mostly happen offline, meaning that systems are evaluated retroactively and improved based on historical performance. Another way of thinking about it: the effectiveness of an AI system is constrained by human cognitive ability.
- Training complex models is expensive. We are making great progress on computing power, but it is still expensive (both in terms of time and money) to train and serve AI at scale, especially if it requires training at the user level.
- Few companies have the requisite data. Training a good AI system requires a lot of user data, a valuable asset few companies have. It’s very hard for startups, typically the drivers of innovation, to build new AI products without access to massive amounts of data.
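To make the trial-and-error point concrete, here’s what that tuning loop often boils down to, sketched with a made-up validation-error function standing in for a full, expensive training run:

```python
import itertools

# Stand-in for "train the model with these settings, then measure how
# it does on held-out data". In practice each call is hours of compute.
def validation_error(learning_rate, num_layers):
    return abs(learning_rate - 0.01) * 100 + abs(num_layers - 4)

# The trial-and-error loop: a human picks a handful of candidate
# settings based on intuition, tries each one, and keeps the best.
candidates = itertools.product([0.1, 0.01, 0.001], [2, 4, 8])
best = min(candidates, key=lambda c: validation_error(*c))
print(best)  # prints (0.01, 4), the pair with the lowest error
```

Note that nothing here improves itself: the candidate grid, the error metric, and the decision to stop are all human choices, which is exactly the bottleneck.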
The Path Forward
The made-up graph above is my guess at where we are on the trajectory of AI progress. Here are some things that I think need to happen before we accelerate even further:
- We need AI systems that can both design themselves and self-improve over time. We need more tools that can automatically design optimal AI systems for a given problem statement, removing the constraint of human cognitive ability. One promising subfield moving us in this direction is reinforcement learning, in which a system learns on its own: it operates in an environment and takes actions within that environment to maximize some reward. Typically, the reward is predetermined, but an interesting next step would be systems that can dynamically change their reward metric over time if need be. Facebook just released an open-source reinforcement learning platform called Horizon that may help move things along.
- We need to figure out how to get more data into the hands of innovative startups. Large companies (not just tech companies) have a monopoly on data these days. The pace of innovation in AI will be stifled until this issue is addressed.
- Cloud costs need to come down. It’s still expensive (sometimes prohibitively) to implement large scale AI systems in the cloud. Having these costs come down will increase the amount of experimentation and enable business models that are not sustainable at current prices.
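To give a feel for the reinforcement learning approach mentioned above, here’s a toy Q-learning agent of my own that learns, from a reward signal alone, to walk toward a goal. It’s an illustrative sketch, unrelated to Horizon:

```python
import random

random.seed(0)

# Toy Q-learning: an agent on a line of 6 cells learns, purely from a
# reward signal, to step right toward the goal at cell 5. No labels,
# no human-designed rules beyond the reward itself.
N_STATES, GOAL = 6, 5
ACTIONS = (1, -1)  # step right or step left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: usually exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0  # the predetermined reward
        # Standard Q-learning update toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy: the best action in each non-goal state,
# which after training should be +1 (step right) everywhere.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The interesting thing is what’s *not* in this code: nobody told the agent to walk right. The reward signal, not a human, shaped the behavior, which is why I find this subfield promising as a path past the human-intuition bottleneck.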