Jane & Marie on Mt Hector in the Tararuas not thinking about AI
This week I’ve been thinking about two uses of AI that will, supposedly, help people – AI agents attending meetings and AI frontline combat robots. I’ve been working on an agentic AI funding proposal, on which a senior researcher happily said, “In the near future, we’ll be able to send our agents to meetings rather than going ourselves.”
As an independent contractor, I’m all for fewer meetings. I can be highly efficient in my own work but meeting time can’t be compressed. I’m concerned when clients propose lots of meetings because that makes for an expensive contract. However, alarm bells rang in my brain at the suggestion of meetings for AI agents.
The only purpose for meetings that makes sense to me is for human beings to compare and align their thoughts and actions, and to connect as human beings. If a meeting is purely to provide information, with no inter-person transactions, there are far more efficient ways of communicating, like writing, or recording a person talking (or getting an AI system to automatically create a video of a person speaking).
I recently listened to an online meeting about the future of the science system while washing windows. This meeting was very much of the ‘they talk at you’ sort. ‘They’ weren’t looking at hundreds of little rectangles with postage stamp faces. They were presenting to an ether in which I sat. I’d much rather the session had been recorded so I could have played it at 1.5x when it suited me. Simultaneous window washing was the next best option.
What I could do more of in time freed up by agentic AI attending my meetings




I could also have sent an agent to the meeting, which could have summarised the information for me. This I’m not so sure about. I don’t necessarily trust summaries from other entities, digital or human; I’d rather my brain did the summarising of the information presented. It also seems wasteful – lots of people sending agents to a meeting (requiring large amounts of energy) to create multiple summaries of a meeting they don’t want to attend. It would be much more efficient for the presenters to provide a summary, if that’s all their attendees want.
When I expressed scepticism about sending agents to meetings, the researcher who is fond of agents suggested agents can interact at meetings, informing each other. However, agents can take in the same information from a video or recording far faster than ‘being present’ for an hour. Inefficient, again. The agents could, though, do what people have typically done at meetings – create and build relationships. The researcher sees agents as enhancing extensions of himself – like employees. So his agent attending a meeting would be like his team doing work for him, making him more productive, because it’s work he doesn’t have to do and relationships he doesn’t have to build.
Here I get conceptually stuck. Do we want communities of digital assistants who operate (largely) autonomously, theoretically enhancing the people for whom they work? Or maybe the agents will do their own thing, like people do (despite instructions)? Is this the world we want to create, where people hand off tasks they find hard or boring or time-consuming to digital assistants who form their own communities, while humans become ever more fragmented as they avoid dealing with each other? Anyone see a risk there? And not just in lost jobs!
Moving right along, what about AI fighting on the frontline? At face value this sounds like a better proposition than avoiding meetings – avoiding fighting. War can be a game (the US appears to already think it is) where technology engages on the battlefield and no one has to die. The contender with the best tech wins!
Robotic AI wars would be particularly appealing if they were conducted in a designated battleground containing nothing of interest to humans, i.e. no collateral damage, just identification of a winner. NZ could rent out battleground space, like the Central Otago wastelands Shane Jones would like to see properly used for mining. The tech could duke it out, declare a winner and then… what?
Would countries put up stakes, like a bet? If my tech wins the battle I get your city? Your oil? Your rare earth minerals I now need to replace all the tech burned up in the fight? If your tech wins I give you control of my computer chip plant?
And how would warring parties decide what would be wagered, and whether they considered the stakes fair? Negotiations could be lengthy, and boring, and risky if humans got angry. At least that question is easy to answer. There’s no need for humans to be dead or bored. Send in the AI.

