ShipItCon 2025 Keynote - Survival in the Age of Agents
I kicked off ShipItCon 2025 with a keynote titled Survival in the Age of Agents. Boy, did I try to pack a lot in - lessons we’ve learned in Jentic from the middle of the AI agents frontier over the last year, what an AI native company is, what it means for the future of employment and economies, what small countries can do, and what Europe needs to do to try to get in the room where it happens.
More about Jentic.
I’m not sure if there will be a video posted of the actual keynote, so in the meantime I re-recorded a clean take while I had it fresh in my head.
Video here, and transcript below.
Introduction and Background
Hi everyone. This is going to be a bit of a tour of what I’m thinking at the moment, as we stand at the cusp of the age of AI.
I’m CEO and co-founder of Jentic, building infrastructure for agents to get real work done. I’m also on the government’s AI Advisory Council, but mostly I’m someone who’s been building through multiple technology waves.
I was a super-early co-founder and CTO of Phorest, back when SaaS wasn’t even a word. Then I co-founded Demonware in 2003, building infrastructure for online games just before that became a huge thing and Call of Duty became one of the biggest games of all time. Later, lots more SaaS, then media and ad tech, and now AI agents. So I’ve seen these tech cycles come and go.
Here’s what I’ve learned: Just because something’s hype doesn’t mean it’s not real.
The Transformation Happening Now
At Jentic, our engineers don’t code anymore. They drop into code only for debugging and review. The coding is the machine’s job now, and the humans are all doing architecture and systems design.
The bleeding edge is figuring out how to combine AI with agile engineering best practices (the latest is the “BMAD” method), and coming up with multi-agent architectures that practice software engineering. And in a surprise turn of events, waterfall software development is back, just folded into faster AI-speed iterations. When AI generates complete implementations from specs in minutes, agile begins to feel very human-paced.
And the jobs market is transformed. Computer science graduates can’t find jobs. Law firms are cutting graduate intake. Customer service is another early casualty. Tech giants are talking openly about it. We’re not even three years into this yet. This is not business as usual. This is the ground shifting beneath our feet.
The Journey to Understanding Agents
Soon after ChatGPT launched, I started a stint in Dogpatch Labs as an entrepreneur in residence, helping and advising other startups while trying to figure out my own next move. I spent a year working with startups while simultaneously playing with AI in the evenings and pondering the consequences of it all. And I realized that it’s not just going to be about co-pilots helping people write code and emails, but it’s going to be about agents that do the work itself.
I kept pulling at that thread. These agents displace humans one task, one workflow at a time. And as they do so, they make the software that people used to use redundant, one SaaS app at a time. Because agents don’t need a user interface to help them use a database—they just use the database directly. And hence, SaaS is dead.
The Existential Crisis for Software Entrepreneurs
It’s a series of existential crises if you’re a software entrepreneur thinking about this. First was the realization that AI is going to write all the software, possibly even on the fly, maybe even single-use software. But it gets worse when you realize that agents themselves are going to replace software—that you don’t even need the software in the first place.
What’s a software entrepreneur to do? In my case, you start an AI startup called Jentic, and you dive in headfirst.
Jentic’s Mission and Discoveries
At Jentic, our goal is to help AI agents operate across APIs, which is where we think the future of agents in business is going to be. And this is mostly an old-fashioned kind of problem—an integration layer problem. But in the age of AI, it has a new kind of solution. Integration used to be solved with code. Now with AI, it’s better to solve it with knowledge. Basically, all an LLM really needs to use an API is a copy of the docs.
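To make the “all an LLM really needs is a copy of the docs” point concrete, here’s a minimal sketch of the idea: flatten an API description (a toy OpenAPI-style dict here; the endpoint and helper name are hypothetical, not Jentic’s actual implementation) into plain text that can be dropped into an LLM’s context, instead of writing integration code per API.

```python
def spec_to_prompt(openapi_spec: dict) -> str:
    """Render the operations in an OpenAPI-style spec as plain-text
    'documentation' an LLM can read to decide which call to make."""
    lines = []
    for path, methods in openapi_spec.get("paths", {}).items():
        for method, op in methods.items():
            summary = op.get("summary", "")
            params = ", ".join(p["name"] for p in op.get("parameters", []))
            lines.append(f"{method.upper()} {path} - {summary} (params: {params})")
    return "\n".join(lines)

# A toy spec standing in for real API docs (hypothetical endpoint).
spec = {
    "paths": {
        "/rooms/{id}/book": {
            "post": {
                "summary": "Book a meeting room",
                "parameters": [{"name": "id", "in": "path"}],
            }
        }
    }
}

print(spec_to_prompt(spec))
# The resulting text goes straight into the model's context; the
# "integration layer" is knowledge, not code.
```

The point of the sketch is the shape of the solution: the per-API work collapses into curating good docs, and the LLM does the rest.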
So I assembled a kick-ass experienced founding team, raised a ton of money, and then we hired a bunch of great people. We’ve spent over a year deep in the AI agent space. We’ve realized some things that surprised even us, and they’re still actually a bit controversial.
The Future Beyond “Agents as a Service”
First, many people think that if SaaS is dead, that’s going to be okay because we’re going to have “agents as a service” instead—a future full of AaaS, if you will. Well, I actually think it’s going to look pretty different.
Agents themselves are problem solvers. Coder agents, for example, come out of the box tooled up for coding with grep and sed and things like that. But you can get them to do other stuff. You can ask Cursor to write a legal document. You can get it to analyze a budget, shareholder agreements, and come up with an addendum. You can get it to write blog posts in markdown. And if you start adding MCP tools to them, or maybe just tell them to use curl, they start turning into fully fledged generic agents that will try to do anything you ask using APIs.
Apart from coder agents, other generic agents like Manus exist, and there’s a whole new group of browser agents — basically self-driving web browsers that go around the web for you. These are all generic pieces of software, and they’re simpler than you think.
The Simplicity and Power of Generic Agents
The kind of agent that I mostly talk about—the kind that uses APIs—is really just a bit of glue between LLMs and APIs. And if you have suitable API management, e.g., thanks to us, you can write one of these in a few hundred lines of code. And that code can replace nearly any other piece of software.
For example, maybe you’re a company that needs an internal rostering system. You previously would have had to buy some kind of SaaS solution or build something bespoke. But one of these agents can just take care of it. Get one of these generic agents off the shelf. Give it access to Slack or email, Google Drives and spreadsheets, maybe a database. And it’ll sit there, taking care of requests for time off or who’s going to be on what shift. It’ll work it all out and put it in the database or the spreadsheet and email all the relevant people. No new software required.
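The “bit of glue” loop itself really is small. Here’s a toy version, with a stubbed-out function standing in for a real LLM call and a fake email tool (every name here is hypothetical, for illustration only):

```python
def fake_llm(messages):
    """Stand-in for a real model call. A real agent would send the
    conversation to an LLM API here; this stub asks to invoke one
    tool, then declares the task finished."""
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "call_api", "name": "send_email",
                "args": {"to": "team@example.com", "body": "Shift rota updated"}}
    return {"action": "finish", "answer": "Rota updated and team notified."}

def send_email(to, body):
    # Placeholder side effect; a real agent would hit an email API.
    return f"sent to {to}"

TOOLS = {"send_email": send_email}

def run_agent(task: str) -> str:
    """The core glue loop: feed the task and tool results back to the
    LLM until it says it's done."""
    messages = [{"role": "user", "content": task}]
    while True:
        decision = fake_llm(messages)
        if decision["action"] == "finish":
            return decision["answer"]
        result = TOOLS[decision["name"]](**decision["args"])
        messages.append({"role": "tool", "content": result})

print(run_agent("Update the shift rota for next week"))
```

Swap the stub for a real model and the fake tool for real APIs, and this is the skeleton of the rostering agent described above: no bespoke software, just a loop, a model, and API access.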
So we’re looking at a future in which a few general-purpose agents (probably open-source) can replace huge swathes of workflows, employees, and the software they use.
AI Native Businesses: The Next Evolution
But maybe they’ll also largely replace business as we know it. I mean, what is a business but a collection of data infrastructure with workflows on top of it, and a bank account? If all the workflows in a business can be properly documented into a declarative format and then given to agents to perform—well, what exactly is left in the business?
This brings us to the notion of what an AI native business is. I struggled with this for some time late last year and early this year. I knew that Jentic had to be an AI native company or would ultimately get wiped out by one. The problem was I didn’t know what an AI native company was. But I think I’ve got it now.
An AI native business is going to be itself an AI system—a collection of agents, a multi-agent system if you like, stably working together to perform the business of the company, spanning sales, engineering, support, finance, fundraising, pricing, what have you. And yes, there will still be humans in that business, but they’re going to be the robot maintenance crew. They’re there to build and optimize that AI system, but never to do the actual work.
And this kind of company has very different dynamics. It’s going to have far fewer people in it, for example, because the workforce won’t need to scale to meet demand. And it makes far more sense in that model for employee ownership to be high. The robot maintenance crew are all shareholders in the robot. After all, in the future where labor gets compressed and all that’s left is ownership, if you’ve still got a job maintaining a robot, you really want to own a piece of it.
Who gets on the robot crew?
This gets us to the question of who would be most effective on such a team. Basically, how can a human maintain an edge over AI?
Maybe some of you are old enough to remember when IBM’s Deep Blue beat Kasparov. And humanity decided, after some convulsions, that chess wasn’t such a big hallmark of intelligence after all. And we redrew the borders again when Watson won Jeopardy, then later AlphaGo beat Lee Sedol. And then, one day, AI started writing sonnets and generating art and music, and we’ve gerrymandered the definition of intelligence again.
There’s no shortage of proclamations out there that next-token-prediction is just a statistical trick that can never match the ineffable quality of true human intelligence. Well, I find this position pretty sus. Generating the next token actually requires a whole lot of deep understanding about the world and the universe. And ineffable is just another word for unfalsifiable, so we should be on high alert for bullshit.
I do actually think there are things that we humans can do that LLMs can’t. But I also think a lot of people so far have been coasting for a long time, doing nothing more than putting one plausible word after another. And AI is really showing those people up.
If we want to earn a place on the robot building or shareholder crew, then it becomes very important for us to figure out exactly what this ineffable human intelligence actually is, because that’s the value we can add that the machine can’t. We need to be people who can do more than reason linearly by stringing words together, because from now on the machine’s always going to beat us at that level.
I think this is bad news for people who live by frameworks they learned in books or their MBA. AI knows all the frameworks, and writes faster than them. We need to be thinking in multiple dimensions about the big picture, about the architecture. Everything now for humans has to be big picture stuff.
Where Do Humans Still Have an Edge?
In particular, AI cannot reason visually or spatially, at least not quickly. It’s fundamentally trained on 1D data—strings of tokens predicting the next token. Even tiny animals can do more than that, adeptly navigating complex 3D environments far better than the biggest, most advanced frontier model.
Both LLMs and humans are neural nets, but we animals have a complex evolved architecture—more like a network of neural nets that we carry around in our heads. And this multi-dimensional spatial reasoning capability that we have evolved bleeds over into other things, like our ability to make complex abstract plans. We feel like we can see the architecture of software and intuit how one component will have a knock-on effect on another, and then indirectly on a component at a distance.
Machines may eventually catch up to us as they get trained better for robotics, for example, but they seem to have a long way to go. And in fact, it might require some new fundamental architectural advancement beyond the transformer model, not just more scale.
System 3 Thinking: The Human Advantage
Another thing that AI doesn’t have is flashes of insight or eureka moments. My friend, the entrepreneur and thinker Mark Cummins, has proposed that we might call this System 3 thinking.
For those who missed the famous book “Thinking, Fast and Slow” by Daniel Kahneman, System 1 thinking is the reflexive and instant response - I say “cloud”, you say “rain”. And System 2 thinking is the deliberate and explicit step-by-step logical thought that we can do - what’s 87 multiplied by 3? LLMs are like pure System 1. You ask them a question, they give you an immediate hot take. And thinking models, which use chain of thought reasoning, are an analogue for System 2.
But System 3 is where you can make this non-linear jump, sometimes after incubating a problem in your head for a very long time. Perhaps we are somehow fine-tuning our own neural nets over a month of pondering and sleeping on it. But whatever it is, I think we can all agree that if LLMs existed 110 years ago when Einstein was devising his theories, they would never have come up with the idea that what we perceive as gravity is really just shortest-path motion in a space-time field warped by mass. Only Einstein could look at it that way. LLMs would never have got there.
So to earn our place on the robot maintenance crew, we need to find a way to operate at that higher level—not quite at Einstein level, but definitely above what ChatGPT can do by putting one word after another.
Attention Is All We Have
The original transformer paper was named “Attention Is All You Need.” To turn that around, as humans, I think attention is all we have.
We’ve learned from using LLMs that good results come from context engineering, carefully curating the diet of information you feed into the LLM, both to make sure it’s properly informed, but also to focus its attention on where you want it to apply its intelligence. We should do ourselves the same favor.
We need to curate our inputs, design our days so our best energy goes on the problems that we think are worth solving. Build an information diet that feeds leaps, not doomscrolling. What you read, watch, discuss, and try to do changes the quality of your thinking.
You need to protect your attention, budget your energy, and direct your creativity purposefully to where you want it to be. You need to avoid cognitively thrashing around multiple topics, and find space to do some high leverage thinking.
The Problem of Information Pollution
Mark Twain, or someone I guess, said, “Don’t argue with idiots. They’ll drag you down to their level and beat you with experience.” Similarly on the internet now, we all know how not to engage with the trolls. But with AI, we’ve got to be careful, not just about what we post, but about what we read.
The open web has always been full of junk and ad-fueled clickbait. But now it’s—well, the technical term is “enshittified”. Enshittified with AI slop. It’s not just agents spamming blogs and forums with infinite amounts of bland content with no new information in them. Humans are in on it as well. LinkedIn is full of humans with fake plastic AI-generated posts celebrating their latest workiversaries. You can see them a mile off if you squint—you can just see those rocket emojis and em-dashes.
YouTube is filling up with AI-generated voices reading AI-generated scripts in front of AI-generated videos. The same with TikTok. And Pinterest is long gone.
It’s not just a giant tsunami of low-grade content hitting us from all sides. It’s the feed algorithms. The creators, part-human/part-machine now, are just cogs in this algorithm, responding like Pavlov’s dogs to its little treats. Engaging with any of this can suck you in too. Those ML algorithms are hard to beat. These things are fine-tuned at massive scale to hijack your amygdala. And once they’ve got you, they’ll pull you into the world of hundreds of millions of polarized humans dancing to the algorithm’s tune.
It’s all just one big mind trap for humans. We need to take control and exert some agency over where we place our attention if we want to rise above the machines.
Anyway, it’s frankly degrading to be leaf-blowing in this wind of ML-optimized, AI-fueled, 24/7 online melodrama, which probably wouldn’t exist if it wasn’t for advertising.
The Bigger Picture: Geopolitics and National Strategy
Let’s zoom out for a bit. I’ve been talking a lot about our future, trying to be relevant as employees and hopefully significant shareholders, members of that AI maintenance crew. But there’s a bigger picture. Our companies exist in countries. Let’s talk about that—about Dublin, Ireland, EU, geopolitics.
AI is not just another neutral technology. It is politics. It’s power. It’s economics. It’s hybrid warfare. With drones, it has inverted traditional physical warfare. It’s the new Manhattan Project.
Let’s conservatively imagine that AI transforms society at the same rate that the web did. Let’s say that’s about 15 years, from the Mosaic browser to people shopping on their iPhones for stuff they saw on Facebook. So what will our economy look like in 2037, 15 years from the birth of ChatGPT?
Economic Implications and Wealth Distribution
When AI displaces human labor, it reduces the redistribution of wealth through employment, while simultaneously draining capital out of the economy in the form of API fees paid to AI companies. It’s reasonably fair to expect a massive economic productivity boost from AI all around, but there’s little reason to think it’s going to be shared.
The shareholders in some companies might do very well, especially the shareholders of successful AI companies. But most countries won’t have successful AI companies, and what industry they do have might amount to little more than local distribution channels for American AI.
Europe is already bleeding hundreds of billions of dollars a year to foreign tech providers who supply us the digital underpinnings of business and government and, in fact, sometimes take it away. AI is going to 10x this problem.
If AI replaces human labor, what will fund social welfare to keep all of our people afloat? Not PAYE taxes, given that it’s the highest paid white-collar workers who are being made redundant this time. Probably not taxing corporate profits, at least not from regular domestic companies, who will have replaced labor costs with API fees.
Ireland’s Opportunity in the AI Economy
It’s vital that we find some way to participate in the future global AI economy and ideally even get more than our fair share.
Luckily in Ireland, we have some cards to play. Many of the tech giants that are about to grow even larger have significant investment here. We already host an outsized chunk of the internet in a nicely temperate climate. We have wind. We have supply chains that can build and operate data centers. We should be strongly positioned to host AI inference infrastructure.
It’s our chance to be a producer in the AI economy, not just a consumer.
But AI is all about energy, and we’re held back by an inadequate national grid, no clear vision, poor coordination, an outdated opposition to nuclear energy, an intractable planning quagmire, and no actual plan for how to decarbonize. Even as wind farms discard excess energy, $300 million data centers on the M50 are being cancelled because they can’t get an electricity connection.
Ireland should participate in this future by feeding AI with good, clean Irish energy.
Regulation and Competition
The other thing that Ireland can excel at is being friendly to business—a pragmatic approach to regulation. Let’s be realistic. Regulation is necessary, but it’s a pain in the ass. No one wants poisonous food or another financial crash, but there’s nuance.
Regulation is often a practical political tool, a non-tariff trade barrier, a below-the-belt punch in the dirty bare-knuckle fight of international diplomacy. Regulatory capture is very hard to avoid, with the large incumbent companies leading the call for regulation while actually just erecting competitive barriers against newcomers.
Even an idealistic regulation like GDPR was widely perceived as a non-tariff trade barrier against the likes of Google and Meta, but ironically it ended up giving them more unfair advantages over smaller local companies. International corporate giants can always staff up against any number of regulators and happily play for stalemate, while small indigenous companies get ground down or scared off.
Even in the most clear-cut case, like central bank regulation to avoid future financial crashes, we should be clear-eyed about the global situation. America’s government is basically now owned by corporates, while China’s corporates are basically owned by the government. In each superpower, technology and finance are tools for political and economic power. This is blatant with stablecoins right now.
The world has changed rapidly since the financial crash, and it might be necessary for Europe to read the room, and allow politics to lead over regulation just for a little bit.
The Draghi Report and Europe’s Crossroads
One year ago, the Draghi report came out—a report commissioned by the EU on EU competitiveness, led by an EU insider. But it was scathing. It eviscerated the EU and painted a clear picture of the past and of Europe at a crossroads.
In the mid-1990s, the US and Europe had about the same GDP. And then the US stormed ahead. The US embraced the internet while Europe held back, leaning in only when we could buy it from the new US tech giants. Now Europe’s digital infrastructure is American, and we stand at a new fork in the road.
Down one path is more of the same: America innovating, Europe regulating. This is the path that has landed us with an intractable, inscrutable, and punitive EU AI Act at the dawn of a new revolution.
Down the other path, we recognize that the multilateral global order, the Pax Americana, is gone. And we actually now need to fight for the EU to even have a right to exist. We have no real energy policy, no coherent defense policy, and we have such regulatory complexity in Europe that we prefer to trade with America than with each other.
America and China are in the lead, and Europe wants to be at the table, but probably isn’t even in the room.
Living in Interesting Times
There’s a famous curse that says, “May you live in interesting times.” We certainly do. But they are exciting times. There will be new winners and losers, and what we can do is be clear-eyed about the reality that this new alien intelligence has arrived on planet Earth and joined our companies. It’s simultaneously superhuman and thick as shit. But we can do previously unimaginable things if we learn how to work with it properly.
Not only will this be a new information revolution, but AI might help solve climate change, nuclear fusion, food production, and aging. Perhaps we can even put it in charge and end war and pestilence and genocide.
To get there, we need to start with understanding the situation and choose to lead, not to follow, and to extend that right up through our companies, up to national politics, and perhaps even up to European reform.
Key Takeaways
Here are the key takeaways for today:
- Agents are going to absorb software, and possibly companies themselves.
- AI native companies are going to out-compete everything else.
- We, each of us, need to try to get ourselves onto a robot maintenance crew and make sure that we get some shares while we’re at it.
- We should all support leaders locally, nationally, and in Europe who have their eyes open to what’s happening, are realistic about where Ireland sits within Europe and where Europe stands in the world, and are willing to do something about it.
The ground is shifting. The question for everyone here, for Dublin, for Ireland, for Europe, is: Are we going to be architects of this transformation, or its casualties?
The time to choose is now. The train is leaving.