What do we mean by AI?
As Hugues Talbot pointed out, artificial intelligence, defined as a set of methods supporting human decision-making in a replicable way, has actually been around for a long time, since the 19th century. AI systems evolved throughout the 20th century, with the development of theoretical computation by Alan Turing and the 1956 Dartmouth Summer Research Project on Artificial Intelligence, a conference widely regarded as the founding event of the field. But the real turning point for public awareness of AI came in the late 1990s, with the defeat of world chess champion Garry Kasparov by IBM's Deep Blue machine. In the last decade, the significant progress of AI technologies has been enabled by the breakthrough of artificial neural networks, learning techniques inspired by the way the human brain works. As Hugues Talbot puts it, AI has been around for a while and has already proven it can have invaluable benefits, not least in the healthcare sector.
AI as a means, not an end
According to Cyril Bertrand, early-stage companies need to be careful about labelling themselves as “AI startups”. Because so many startups claim to use AI technology when their core business actually relies on basic software, investors are increasingly skeptical of the term “AI”. Many CEOs of such startups are non-technical and do not have a clear idea of how these technologies actually work, which is likely to undermine their credibility. The key for investors is to understand whether the startup is actually solving a problem, and whether the use of a particular AI technology makes sense for that problem, usually because it allows the startup to do things that were not possible before.
As such, AI should not be the end, but rather a tool to build a solution around a problem that customers are facing. Hugues Talbot reiterates that AI is good at boring things, not creative things, and that any boring problem probably has an AI solution. Many industries require the automation of data-intensive processes that are complex and inefficient for human beings to manage. As Cyril Bertrand points out, there is probably more potential for AI startups in B2B (business-to-business) than in B2C (business-to-consumer) sectors. The only two legitimate AI companies listed in La French Tech’s Next 40 (the list of 40 French startups with unicorn potential), Dataiku and Shift, are B2B companies.
Nicolas Rasamimanana’s company Extrality is a good example of an appropriate use of AI technologies to solve a real issue. Numerical simulations for complex, highly engineered products such as aircraft are currently performed by solving equations numerically, a burdensome and time-consuming process. Extrality’s technology generates outputs similar to those of traditional methods, but 30x faster.
The importance of explainability and interpretability
Explainability and interpretability were raised as key emerging concerns by our three experts. Explainability refers to understanding the steps and models involved in the AI decision-making process, so that the results of the solution can be understood by human experts. It contrasts with the concept of a “black box”, in which human experts cannot explain why the algorithm reached a particular result. When asked what type of AI company he would be interested in funding, Cyril Bertrand mentions that explainability is a topic of interest to him, as current methods are not satisfactory.
Interpretability, by contrast, refers to the extent to which engineers can discern the mechanics of a model and predict what outcome it will produce if an input or an algorithmic parameter changes. For Nicolas Rasamimanana, managing the expectations of Extrality’s customers is key, and this comes with clear interpretability. To trust the solution, customers need to be able to compare the output of Extrality’s tool with that of the traditional methods they are used to, and to reliably predict what the new tool will produce.
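As an illustration (not taken from the discussion), a linear model is a textbook example of an interpretable model: its coefficients tell an engineer exactly how the output will respond to a change in any input. The variable names and numbers below are hypothetical, chosen only to make the point concrete.

```python
def predict(weights, bias, inputs):
    """A simple linear model: output = sum(w_i * x_i) + bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# Hypothetical model of some quantity driven by two inputs.
weights = [2.0, 0.5]  # each weight states the effect of one input on the output
bias = 1.0

baseline = predict(weights, bias, [10.0, 4.0])   # 2*10 + 0.5*4 + 1 = 23.0
perturbed = predict(weights, bias, [11.0, 4.0])  # first input raised by 1

# Interpretability in action: the output shift is predictable from the
# weights alone, before ever running the model.
assert perturbed - baseline == weights[0]
```

Deep neural networks generally lack this property: changing one input can shift the output in ways that cannot be read off the parameters, which is precisely the concern the experts raise.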
As we increase our dependence on AI to make decisions, we need to get better at understanding exactly how a machine has reached a certain conclusion. According to Hugues Talbot, autonomous vehicles are a good example, as they present new challenges for explainability and interpretability. If an autonomous vehicle finds itself in a position where an accident is inevitable, what decisions should it take, and whom should it protect? We need to know in advance what decisions it will take, to make sure they are aligned with our ethical and moral priorities.
AI’s next challenge: creating value with less data
As Cyril Bertrand reflects, the challenge for AI is now to reach the same level of performance with less data. Current systems can achieve a high level of performance but are very data-hungry. When an investor values an early-stage startup, the data is rarely, if ever, proprietary to the startup, which can be considered risky from an investor’s point of view. What is truly valuable is a startup that actually sits on the source of the data it needs. Nicolas Rasamimanana explains that Extrality can produce its own data to feed its algorithms, rather than relying only on customers or third parties to provide it.
But as Cyril Bertrand puts it, if you want to achieve the same outcome with less data, you need to improve the algorithms. Machines will need to rely less on bottom-up data and instead develop a better conceptual understanding of things, so they can replicate what a human being would do in the face of uncertainty, with fewer data points. Hugues Talbot adds that another interesting field of current AI research is the development of emotional intelligence (or even stupidity!), to replicate, recognize and understand emotions with increased granularity. Human beings are not purely rational and devote a lot of time to non-productive tasks such as imagination or creation. Since the human brain remains a physical system, there is no reason we couldn’t fully study and replicate it. Whether AI could be capable of consciousness is another debate.
- Artificial intelligence, defined as a set of methods supporting human decision-making, has been around since the 19th century but made breakthrough progress in the last decade.
- For tech startups, AI should not be an end in itself, but rather a tool to build a solution around a problem that customers are facing.
- Explainability and interpretability are emerging concerns that are key to better understanding and predicting AI’s output.
- AI’s next challenge will be to create value with less data, through more efficient reasoning.