Until the recent escalation of hostilities between the U.S. and Israel on the one hand and Iran on the other, the issue causing the greatest concern to the most people was probably the march of AI. So familiar has the term become in the three years or so since OpenAI launched ChatGPT that we no longer say “artificial intelligence”; the initials alone will do. And AI is being blamed for everything from the slump in graduate-entry jobs to perceived declines in the standards of popular culture.
Somewhat perversely, a lot of the antagonism towards AI stems from statements by executives at some of the biggest AI companies themselves. For instance, earlier this month, Sam Altman, CEO of OpenAI, acknowledged that the balance between capital and labor was shifting drastically, while others have suggested that there could eventually be an AI-led renaissance — but only after intense disruption of the job market. Perhaps even more oddly, given that mass unemployment would pose a major headache for governments, politicians appear to share the companies’ enthusiasm for the technology. The lure here, of course, is that AI promises to produce the hikes in productivity that have been so elusive of late.
But is AI really delivering the gains advertised? Proponents would argue, of course, that it is still early days. But if the technology is so transformative, there should at least be some initial results beyond those irritating chatbots that seem to appear any time you visit a corporate website. And this appears to be a growing view among board members watching their companies spend billions of dollars on the technology. According to research published last month by Dataiku, a platform that draws on “human expertise and AI reasoning” to provide intelligence for many of the world’s biggest companies, nearly three-quarters of chief information officers say their role will be at risk if their company does not deliver measurable business gains from AI within the next two years. In an interview with me last month, the company’s co-founder and CEO, Florian Douetteau, explained that there was “huge pressure” on companies to use AI, but at the same time boards were “worrying about new risks.” In particular, with 50% of leaders believing that half of their workers were using AI tools, there was the practical risk of data leaks or cyber attacks.
Given the uncertainty about what might come next with AI, employee suspicion was not unfounded, he said. But he added that the situation was “nuanced,” with most of the friction coming not from employees but from leaders, who “need to have a good idea of what you can do with AI.”
The “ecosystem of AI” was moving very quickly, with a rapid increase in the number of Dataiku’s customers building their own AI agents, for example. As a result, Douetteau had a “strong intuition” that return on investment would become less of an issue as this year progressed. Instead, the big questions would surround sovereignty and control of systems. In particular, should a company build its own system, with the associated costs and risks, or should it continue to rely on the public cloud, which was cheaper but brought the danger of disruption or of reliance on a single player? It was a reminder that resilience is a key concern for modern boards.
There also needs to be an acknowledgement that, for all the benefits AI brings in such areas as data analysis, it is not on its own going to transform organizations. Graph technology, for example, can enhance AI applications by subjecting all the data they amass and analyze to much closer investigation. Emil Eifrem, CEO and co-founder of Neo4j, a company specializing in this area, sees AI as “a brilliant PhD student” who goes to work every day as if it were his first day. “There is no context,” he said in an interview with me last month. In contrast, a business using graph technology enables the PhD student to “learn from the first day.”
The power of graph technology comes from the fact that it manages data in the form of networks rather than tables, as is the traditional approach. By showing the connections and relationships between different items of data, it more accurately mirrors how humans make conceptual associations, albeit much more quickly. As such, it has proved highly useful in investigative journalism — Neo4j played a key role in analyzing the leaked Panama Papers in 2016 — as well as in fraud inquiries and supply chain analysis.
As Eifrem explained, context is effectively institutional memory. It is “a broad umbrella to make sure the data you look at fits together. That’s the sweet spot.” With fraud inquiries, the key advantage of the technology is “to connect the dots in ways that humans can’t do.”
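The “connect the dots” idea can be sketched in a few lines of Python. This is a toy illustration with invented account and attribute names, not Neo4j’s actual engine — in practice a graph database and a query language such as Cypher would do this work at scale — but it shows how storing data as a network rather than a table lets a simple traversal surface a chain of links between seemingly unrelated accounts:

```python
from collections import deque

# Hypothetical toy data: accounts linked to shared attributes
# (a phone number, an address). These names are invented for
# illustration and do not come from the article or from Neo4j.
edges = [
    ("account:A", "phone:555-0100"),
    ("account:B", "phone:555-0100"),
    ("account:B", "address:12 Elm St"),
    ("account:C", "address:12 Elm St"),
]

# Build an undirected adjacency map: the "network rather than
# table" view of the same data.
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def connect_the_dots(start, goal):
    """Breadth-first search: return the shortest chain of shared
    attributes linking two accounts, or None if they are unrelated."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in sorted(graph.get(path[-1], ())):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(connect_the_dots("account:A", "account:C"))
# ['account:A', 'phone:555-0100', 'account:B', 'address:12 Elm St', 'account:C']
```

In a table-based system, finding that chain would mean repeated self-joins; in the graph view it is a direct traversal, which is the advantage Eifrem describes.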
But he also stresses that — at least for the foreseeable future — humans still need to be involved in the process. For example, it makes complete sense that much of the process of a mortgage application is conducted by AI because that is quicker and more effective. “But in the end it needs to be a human decision.”
His view is that AI is creating a lot of stress because humans, who for most of their existence as a species have experienced little real innovation, are not equipped to deal with all the change occurring at the moment. While optimistic that people will be better off overall as a result of the technology, he acknowledges that there “will be pockets that are left behind.” Transparency and explainability will be key to winning acceptance, as will some sign that the technology is benefiting the wider population beyond the technology giants.
Crunch Time Is Coming For AI’s Big Spenders