AI Agents are the Future
Facing a cancelled Sydney to Melbourne flight—one of the world's busiest routes—I was saved by my assistant. Awoken at 5 am, my assistant swiftly rebooked me, rescheduled my meetings, informed all parties, and even prepared a summary with key readings for my journey, all before I got out of bed.
Of course, all of the above was done by ‘JamesAI’, my personal AI assistant, who is always on, fine-tuned for me, and supports my day-to-day chores, work and play.
This AI use-case, one where AI will be able to independently reason and perform complex, disparate tasks, is fast becoming a reality. It is being built by many startups, including companies we’ve invested in at Galileo Ventures.
In theory, this ‘AI workforce’ will be able to support everyone on the planet, every day with tasks that range from the mundane to the complex such as reasoning, creativity, scientific discovery, and much more. It is a compelling individual use-case.
I believe the personal AI use case, in both work and daily life, is in fact the most valuable one. As Bill Gates recently said, “Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again.”
Arguably the individual AI use-case is already the early winner in generative AI products. ChatGPT skyrocketed to 100m users in just two months, hitting a remarkable 100m weekly active users by November 2023 and generating an estimated $1.3 billion in annualised revenues – a 4,500% increase from the previous year. Similarly, Anthropic’s Claude AI assistant is reportedly earning $100 million annually.
These AI chatbots have not only achieved a new level of product-market fit but have also recorded some of the fastest user adoption in tech history.
What are agentic systems?
Chatbots powered by large language models (LLMs) like ChatGPT have captured everyone’s attention, but not everything can be solved through a chat interface. AI chatbots suffer from the ‘blank canvas’ problem. What do I use it for? If it can do anything, what should I do? And, importantly, what do I even ask?
LLMs are, at their core, prediction machines that give you the most probable outcome. They turn a logic problem into a statistics problem. A search query becomes a probable answer rather than a database lookup or a list of links. This introduces new product questions.
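As a toy illustration of turning a logic problem into a statistics problem, the snippet below predicts the next word purely from counts in a tiny made-up corpus. This is, at vastly larger scale and over tokens rather than words, the same principle an LLM applies; everything here is an assumption for illustration, not how any particular model is built.

```python
from collections import Counter

# A tiny made-up corpus for the illustration.
corpus = ("the flight was cancelled the meeting was cancelled "
          "the flight was rebooked").split()

# Count which words follow 'was' anywhere in the corpus.
following = Counter(nxt for word, nxt in zip(corpus, corpus[1:])
                    if word == "was")

# The 'answer' is simply the most probable continuation,
# not a logical deduction.
most_probable = following.most_common(1)[0][0]
print(most_probable)
```

Here `most_probable` is `"cancelled"` only because it occurs most often after ‘was’; change the counts and the ‘answer’ changes, which is exactly the product question statistics-based answers introduce.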
Beyond just asking a question and getting an answer (one-shot queries), users and engineers started to develop ‘chain of thought’ approaches, in which multiple nested queries are built in, more like how we work through problems in our day-to-day. This allows richer, more complex (i.e. useful) tasks and reasoning – an important step towards AGI.
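The difference between a one-shot query and a nested, multi-step query can be sketched in a few lines. Here `call_llm` is a hypothetical stand-in for any LLM API call, stubbed so the example runs offline; the step names are illustrative assumptions.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real version would call a hosted model over an API.
    return f"answer({prompt})"

def one_shot(question: str) -> str:
    # Ask once, get one probable answer back.
    return call_llm(question)

def chained(question: str, steps: list[str]) -> str:
    # Each step's output becomes context for the next, nesting the queries.
    context = question
    for step in steps:
        context = call_llm(f"{step} | context: {context}")
    return context
```

For example, `chained("rebook my cancelled flight", ["list alternatives", "pick the best option"])` makes two nested calls, each building on the last, rather than hoping a single prompt covers everything.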
From this, the concept of ‘AI agents’ has emerged. Agents are built from individual system functions, or ‘tools’, such as searching the web or querying website data. These tools are combined into an ‘agent’ – for example, a customer service agent that can hold conversations on a website and look up a knowledge base – which has set objectives and the autonomy to perform complex tasks. LLMs have made all of these functions much more effective.
There is no set definition of agents – the term is still evolving – but researchers and developers often draw analogies to the human brain: an agent can have set objectives, adapt to new information, store memory, and interact with the world to execute various tasks.
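One minimal way to sketch that definition in code: an agent as an objective, a toolbox, and memory. The tool names and the keyword-based routing rule below are assumptions for the example; a real agent would let an LLM choose the tool and its arguments.

```python
from dataclasses import dataclass, field
from typing import Callable

def search_web(query: str) -> str:
    return f"web results for '{query}'"   # stub tool

def query_knowledge_base(query: str) -> str:
    return f"kb article about '{query}'"  # stub tool

@dataclass
class Agent:
    objective: str
    tools: dict[str, Callable[[str], str]]
    memory: list[str] = field(default_factory=list)

    def handle(self, task: str) -> str:
        # Naive routing: use the knowledge base for refund-style queries,
        # otherwise fall back to web search.
        tool = self.tools["kb"] if "refund" in task else self.tools["web"]
        result = tool(task)
        self.memory.append(result)  # store the outcome for later recall
        return result
```

A hypothetical customer service agent would then be `Agent("resolve customer queries", {"web": search_web, "kb": query_knowledge_base})`, combining several tools under one objective while accumulating memory across tasks.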
‘Agentic systems’ are on the rise.
As Sequoia recently wrote, “Agentic systems in Generative AI applications are increasingly not just autocomplete or first drafts for human review; they now have the autonomy to problem-solve, access external tools, and solve problems end-to-end on our behalf. We are steadily progressing from level 0 to level 5 autonomy.”
The AI agent landscape is evolving fast. There are over 70 developer projects, including AgentGPT, AutoGPT, BambooAI, and BabyAGI. These projects cover a range of functionalities from autonomous language processing to task management and code generation, all leveraging the capabilities of LLMs.
Some of the more recently funded startups pioneering agentic approaches include Adept AI ($400m+ raise), Relevance AI ($18m, backed by Galileo), Fixie AI ($17m), and Langchain ($10m).
This market landscape breaks down the core components of what makes up an AI agent.
Key capabilities: Chatbots vs co-pilots vs agents
The AI landscape is changing every week. ChatGPT popularised the chat interface but over the past 12 months we’ve seen chatbots, copilots, and agent-style product launches with some distinctions between them.
Microsoft jumped on the copilot term for their suite of AI tooling whereas ChatGPT is squarely just a ‘chatbot’. The reality is the differences depend on the outcome you’re trying to achieve and the lines will get blurrier as the technology evolves.
- AI Chatbots (manual): Specific use-cases for natural language conversations in question and answer contexts. LLMs have enabled this use-case. For example, education, therapy or querying a knowledge base works very well as a conversational product.
- AI CoPilots (semi-manual): AI-enhanced, human-assist for specific use cases. For example, coding or copywriting.
- AI Agents (autopilot): Combination of the tools above but operates with substantial independence, solving repetitive and mundane tasks with potentially no human input required.
Galileo’s investment in Relevance AI is arguably at the forefront of AI agents. Their thesis is that all human experts, not just engineers and software developers, will need access to this AI infrastructure to create and manage their own AI agents. In doing so they are pioneering a concept they call the AI Workforce – AI-employees that support real employees with everyday tasks, in turn making everyone more productive.
Relevance AI co-founder, Daniel Vassilev describes in his latest post, “Generative AI is different: it is going to shift the world to an entirely new paradigm. The next great leap in progress is autopilot - the ability of AI to operate autonomously and iteratively without continuous human interaction. This new technology for the first time frees teams from the constraints on productivity imposed by headcount, making access to compute the only limitation on their output.”
They believe that, “every team will have hired at least one AI agent by 2025, and by 2030 have a full-fledged AI team supporting them”.
According to Relevance AI, agents need seven key capabilities:
1. Perception: They need a ‘sensory system’ to ingest data from the world around them. ‘Multi-modal’ LLMs or AI systems will become the norm, drawing on inputs such as cameras, microphones or images.
2. Reasoning & Decision Making: The agent needs some ability to understand and act on that data, using logic and probability to draw conclusions and decide what’s next.
3. Learning & Adaptation: AI systems need to learn and adapt. Machine learning algorithms make this possible, although the ability to do it in real time and efficiently is still evolving.
4. Communication: AI will need to clearly communicate and engage through natural language or standard protocols. An interesting question is if humans are ‘on the loop’ will agents communicate with each other in their own language?
5. Task Execution and Automation: This is where software shines: agents will take the load off you, handling repetitive tasks and automating where possible.
6. Planning & Navigation: AI should have the ability to sequence tasks and exhibit planning capabilities, although we have seen some limitations with LLMs on these tasks. This usually involves predictive algorithms and optimal route selection.
7. Memory & Knowledge Representation: Finally, AI needs longitudinal memory. Agents will need some ability to recall and represent knowledge. See newer techniques like retrieval-augmented generation (RAG), which support these activities without retraining LLMs.
The above was partly written by a ‘Relevance AI marketing agent’, combining some of the latest information and concepts online into a blog post explaining AI agents. The future is already here, it's just not evenly distributed, as the saying goes.
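The seven capabilities can be made concrete with a toy agent that gives each one a trivial implementation. Every name and behaviour below is an illustrative assumption of mine, not Relevance AI’s actual framework.

```python
class MinimalAgent:
    def __init__(self):
        self.memory: list[str] = []        # 7. memory & knowledge representation
        self.vocabulary: set[str] = set()  # 3. crude store for 'learning'

    def perceive(self, text: str) -> str:  # 1. perception (text-only here)
        return text.strip().lower()

    def learn(self, observation: str) -> None:  # 3. learning & adaptation
        self.vocabulary.update(observation.split())

    def plan(self, goal: str) -> list[str]:  # 6. planning & navigation
        return [f"research {goal}", f"summarise {goal}"]

    def execute(self, step: str) -> str:  # 5. task execution & automation
        result = f"done: {step}"
        self.memory.append(result)
        return result

    def communicate(self) -> str:  # 4. communication
        return "; ".join(self.memory)

    def run(self, goal: str) -> str:  # 2. reasoning: perceive, plan, execute
        observation = self.perceive(goal)
        self.learn(observation)
        for step in self.plan(observation):
            self.execute(step)
        return self.communicate()
```

Real agents replace each stub with something far richer – multi-modal perception, LLM-driven planning, vector-store memory – but the division of labour is the same.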
First Examples of Agentic Systems
Towards Agents: OpenAI’s ‘GPTs’ are the building blocks of agents
Below I’ve included a screenshot of OpenAI’s ‘GPTs’ – their first step towards agents – from their creative writing coach which is only a couple of steps away from writing or editing my articles automatically. In this case it's reviewing content, originally created by an AI agent, reviewed by another AI agent… we’re getting deep now!
In theory these GPTs or ‘tools’ can then be combined with automation and other systems to make them more agent-like and I believe OpenAI will release these features themselves soon to some degree.
Beyond chatbots: Towards an AI Workforce
Relevance AI, who recently announced their US$10m Series A, has coined the idea of an ‘AI workforce’, one where an individual or organisation can ‘hire’ AI agents for their specific needs. At first, these programs will be able to do basic reasoning and perform mundane tasks such as booking sales meetings for your company, answering customer support queries, or doing research, but in the future, they will take on ever more complex tasks.
The platform offers some of these agents off the shelf, ready to deploy within your workflows. Below you can see some of their templates, as well as configuration options for their ‘world-class researcher’ agent, including description (persona), decision-making flow, and tooling.
Single Use Software
The ability to use natural language to create software that can complete tasks is profound. It may alter the software market entirely.
What does it mean to create an application if it only has a single purpose? How will customers think of software if it's essentially free to create and temporary?
I believe this leads to the creation of ‘single use software’ where you can issue a command, spin up an instance, complete the hyper specific task, and potentially throw away the application that did it (or at least not worry about it again). But eventually this library of applications will evolve and grow over time, customised to each user and how they work.
Single use software was not possible before: teams of engineers needed to create software that could scale, serve most users, and handle edge cases. The properties of single use products – limited durability, low cost, and convenience – all change how we can think about them in a software context.
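The single-use pattern can be sketched as: generate a tiny script for one hyper-specific task, run it once, then throw it away. `generate_code` below is a stub standing in for an LLM code-generation call; the whole flow is an assumption for illustration.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

def generate_code(task: str) -> str:
    # Stub: pretend a model wrote a bespoke script for this one task.
    return textwrap.dedent(f"""\
        print("completed: {task}")
    """)

def run_single_use(task: str) -> str:
    code = generate_code(task)
    # Write the 'application' to a temporary file...
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        # ...run it exactly once...
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()
    finally:
        os.unlink(path)  # ...and discard it when done.
```

Nothing here worries about scale, other users, or edge cases: the program exists only as long as its one task does, which is the point.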
Is Vertical SaaS dead?
If any expert user can create software tools bundled with AI to create agentic systems, what does this mean for vertical SaaS? For example, if you can ‘spin up a dashboard’ of your business using natural language, and query it with LLMs, why pay for a generalised tool?
Over the last 20 years, specialised software has proliferated, spurred on by cloud computing and SaaS: software to manage your customers (Salesforce), software to manage your accounts (Xero), and software to manage your software (Atlassian).
Going forward will we see more bundling of software products not by vertical or sector but by outcome, especially in areas where the complexity of tasks is lower or more generalised?
The innovation cycle of bundling and unbundling is an observation in successful businesses, where to make money products are combined or bundled together before being unbundled and sold as separate specialist products. For example, you used to have cable TV networks that bundled many different channels together, but now you have many separate streaming services, each offering their own library of content.
This cycle can be spurred by the introduction of new platform technology, such as online streaming. If AI and agents change how we create software, will bundling software features in vertical-specific applications still make sense? Do we really need an expensive CRM if we can spin up an agent-powered CRM application in seconds?
It is still too early to say, and it’s unclear to me who are the winners and losers, but as natural-language software creation becomes more powerful and accessible the need for vertical-specific software will change.
From ‘UX’ to ‘AX’
LLMs are unlocking individualism in software. If agents proliferate and start managing individual-specific tasks for consumers and business users, the way software is created will change.
Agentic systems will be able to combine multiple tools and functions, complete complex tasks, and interact with users and customers, potentially ushering in the idea of an AI workforce.
‘User experience’ will give way to an ‘agent experience’.
As a result, we’ll likely see agent marketplaces pop up soon as businesses try to stay on top of agent creation. You could easily see Salesforce creating a ‘sales agent marketplace’ or Adobe offering ‘creative agents’ that businesses can ‘hire’ to do design work. Whether this will be a winner-take-all market remains to be seen.
While this is an intriguing concept it introduces new questions for policy and work:
- What does it mean to ‘hire’ an AI employee?
- How many AI agents will businesses use in the next five years, and how much will that cost to run?
- What type of decisions will we let AI agents make?
In the end, LLMs and AI-enabled natural language software creation are not the end but the beginning of a new software paradigm and market.