Quo vadis? What we saw at Google Next 25

nPlan's VP of Product, Leonie Mueck, was part of the company's contingent at Google Cloud Next '25. Here's her take on what was announced, and what it means for nPlan's customers.

Written by
Leonie Mueck
Leonie loves turning cutting-edge technologies into products. Before joining nPlan, she was Chief Product Officer at quantum computing scale-up Riverlane. Prior to that, she pursued a career in scientific publishing, first as physics editor at the journal Nature and later at Open Access publisher PLOS, where she built a physical sciences and engineering division. She holds a PhD in quantum chemistry.

Penn and Teller have forever been suspected of using stooges. Night after night, they show Las Vegas’s captive audiences magic tricks that many deem too elaborate and difficult to pull off without briefed extras clandestinely posing as unassuming audience members. 

Taking place on Penn and Teller's home turf in Las Vegas, Google Next 2025 came close to inducing a similar amount of awe and disbelief. While lacking in Penn's wonderfully self-deprecating humour, Google's demos showed machines with abilities so human-like that it was hard not to suspect foul play.

Picture this: at the opening keynote, a Google Cloud product manager rocks up on stage with a crate full of flower pots, claiming he needs help with gardening. Browsing a garden centre site, he contacts customer service. But rather than being connected with a person, he soon finds himself talking to an agent, an invisible robot that harnesses the power of large language models (LLMs) to answer his questions. This assistant guides our novice gardener through his choices in a completely natural flow of conversation and tone, at one point escalating a request for a discount to a human manager. Without knowledge of the nature of the demonstration, it would have been hard to identify this interaction as mediated entirely by an in-silico assistant.

The age of the agent

This and many other impressive demonstrations were made possible by technological step changes relating to agents. Borrowing a definition from open-source company LangChain, an agent is "a system that uses an LLM to decide the control flow of an application." If that sounds a bit too abstract, never fear. nPlan's very own project controls chatbot Barry has recently been re-architected, giving it powerful agentic abilities, so let's use Barry to illustrate this definition.

Report writing is a task as recurring as it is unloved among project controls professionals. Agents like Barry can easily help with this task. First, an LLM must reason through a request such as "help me write a monthly progress report for my project". The LLM then decides on the "control flow" needed to successfully complete the task. In other words, it decides which steps it needs to take and which tools it needs to call. These tools, crucially, have access to nPlan's data and knowledge base. The LLM itself was trained on publicly available data and has to be fed information about the project and forecasts to be of any use for report writing. After all steps are finished and the LLM has checked for completion, it presents the completed report to our project controls professional.
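To make that loop concrete, here is a minimal sketch of the pattern in Python. Everything in it is a hypothetical stand-in: the tool names, the project data, and the hard-coded "planning" function that a real agent like Barry would delegate to an actual LLM.

```python
# Illustrative agent control-flow loop. Tool names and data are invented
# stand-ins, not nPlan's real API; the LLM is faked with a keyword check.

def get_progress_summary(project_id):
    """Stand-in tool: would fetch recorded progress data for the project."""
    return {"percent_complete": 42}

def get_milestone_forecasts(project_id):
    """Stand-in tool: would fetch probabilistic completion forecasts."""
    return {"completion": "2026-03-01", "confidence": 0.8}

TOOLS = {
    "get_progress_summary": get_progress_summary,
    "get_milestone_forecasts": get_milestone_forecasts,
}

def fake_llm_plan(request):
    """Stand-in for the LLM deciding the control flow: which tools to call."""
    if "progress report" in request:
        return ["get_progress_summary", "get_milestone_forecasts"]
    return []

def run_agent(request, project_id):
    # 1. The LLM reasons through the request and chooses the steps.
    steps = fake_llm_plan(request)
    gathered = {}
    # 2. Each chosen tool is called; the tools hold the project-specific data
    #    the LLM was never trained on.
    for tool_name in steps:
        gathered[tool_name] = TOOLS[tool_name](project_id)
    # 3. Check for completion, then assemble the result (a real agent would
    #    ask the LLM to draft the report prose from the gathered data).
    if steps and len(gathered) == len(steps):
        return f"Monthly report for {project_id}: {gathered}"
    return "Could not complete the report."

print(run_agent("help me write a monthly progress report", "project-123"))
```

The essential point is the division of labour: the LLM supplies the reasoning and the control flow, while ordinary functions supply the private data.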

Google’s garden centre service assistant will have followed roughly the same architecture, with access to many tools and sophisticated loops to call and check on tools. It will be obvious to the attentive reader that some of the tools called by the LLM may be agents themselves, resulting in networks of Russian agent dolls with ever more specialised capabilities. In addition, the garden centre service assistant used voice input and output. Multiple modalities as input and output — voice, images, video — is not new per se but Google recently made it easier to use voice input and output with its Gemini LLMs.

BYOA — build your own agent

Agents and LLM-based assistants have been around for a while. What was new at Google Next, however, was their ubiquity and power, fuelled by ever more sophisticated frameworks and tools that let us build and deploy agents more easily and more cheaply. For developers, Google's Agent Development Kit, presented at Google Next, or LangChain's LangGraph library, make it easy to reliably define complex agents with many tools and access to many different knowledge bases.

Most excitingly, building agents is about to stop being the domain of nerdy engineers. Many tech companies want everyone to build their own agents for hyper-personalised workflow automation. Google Cloud launched AgentSpace, an enterprise product where anyone can put together their own assistant with just a few clicks, feeding it a knowledge base of emails and PDF documents to help prepare for a meeting, for example. If these no-code agents turn out to be sufficiently reliable and powerful, the opportunity for productivity gains in the enterprise seems enormous: automating highly specialised and personalised workflows, like trawling through contracts to extract renewal dates into a spreadsheet or forecasting financial performance for the next quarter, will become as easy as pie.

The ultimate ambition is to create ecosystems of agents in the same vein as apps in an app store. The vision for Google's AgentSpace is that anyone capable and creative can build an agent and make it available, for a fee, to others in the same ecosystem. Making all of this work smoothly will require protocols that guarantee interoperability. One such protocol that has emerged as the front runner is Anthropic's Model Context Protocol (MCP), which standardises how LLMs, tools and databases talk to each other.
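The core idea behind such protocols is that every tool describes itself in a machine-readable way, so any compliant agent can discover it and know how to call it. The sketch below is purely illustrative, in the spirit of MCP's tool discovery; it is not the actual MCP wire format, and the tool name and schema are invented.

```python
import json

# Illustrative self-describing tool manifest. NOT the real MCP format;
# the name and schema below are hypothetical examples.
tool_manifest = {
    "name": "get_contract_renewal_dates",
    "description": "Extract renewal dates from uploaded contract documents.",
    "input_schema": {
        "type": "object",
        "properties": {
            "contract_ids": {"type": "array", "items": {"type": "string"}}
        },
        "required": ["contract_ids"],
    },
}

# Because the manifest is plain, structured data, any agent framework can
# list available tools and present them to its LLM in a uniform way.
print(json.dumps(tool_manifest, indent=2))
```

A shared description format like this is what lets an agent built by one vendor call a tool published by another without bespoke integration work.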

What does this mean for nPlan’s products?

At nPlan, we have always prided ourselves on making the latest cutting-edge technology useful to project managers and project controls professionals. Many exciting agent-based developments and features on our roadmap will make the interaction with our forecasts and data more flexible and bespoke. For example, we will be delivering an improved data science agent that allows our clients to slice and dice our forecast and progress data for their bespoke problems using only natural language as an input. Matching cost and schedule data using agents is also firmly on our roadmap. Last, but certainly not least, Schedule Studio, our new product that is capable of generating schedules based on scope documents and natural language prompts, would not have happened without the recent progress in agents and LLMs. However, Google Next has sparked a further ambition in us: we want to give people around the world toiling to finish their construction megaprojects on time and on budget the power to build their own agents.