Tanya Kohen: Welcome back to the Invention Mode podcast. This show is all about what happens when corporate professionals and companies choose to invent — to rethink how work gets done. AI agents are one of the clearest expressions of that impulse today. The moment leaders see what an agent can automate, there is a rush to build more. It's Invention Mode in its purest form: fast, creative, full of potential.
But invention without direction brings its own risks. When every department builds its own agent with its own data, logic, and tools, organizations suddenly find themselves with a zoo of disconnected systems. And instead of scaling, companies slow down. So in this episode, we're going practical: a case-based conversation on how to invent responsibly and how to build your first AI agent the right way.
What you'll hear is how to select the right first use case, how to build something scalable and not disposable, how to choose between low code and custom development, what metrics actually prove value, and how to avoid ending up with a zoo of agents that don't work together. To walk through this step by step, I'm joined by Paul Chayka, AI solutions expert at WaveAccess. Paul, welcome to the show.
Paul Chayka: Thanks for having me.
Tanya Kohen: Paul specializes in designing and implementing AI-driven workflows that solve real operational challenges. His expertise spans generative AI, complex system integration, and architecture that allows AI agents to operate reliably inside large organizations. Paul's insights are grounded in WaveAccess's expertise and experience delivering more than 80 AI projects. I've worked alongside Paul on several of these, and he has an exceptional ability to bring clarity to complex technical choices. All right, let's jump into the interview questions. Paul, it's clear from your track record that you've seen both the challenges and successes of AI implementation, particularly when it comes to AI agents.
Before we dive into the details, I want to start with a broader question that really resonates with many conversations I'm having with business leaders. Most of them have an ambitious vision for AI, but they are overwhelmed with options, having dozens of processes that seem like good candidates for automation.
How do you determine which use case is the right starting point for an AI agent to bring the greatest impact?
Paul Chayka: Honestly, it's where most projects either set themselves up for success or fail right from the start. When you're choosing the first use case, you want to look for something that's high-volume and repetitive, where inputs and outputs are pretty clear. Those processes are easy to model and they tend to deliver value quickly.
I also recommend paying attention to where there is real operational friction, like places where teams are spending hours or even days doing manual work, passing information back and forth and double-checking the same data. If people are burning time just to keep the process moving, that's a strong signal.
Another good filter is a process that affects multiple people and even departments. If an agent helps an entire team or a cross-functional workflow, the value becomes visible very quickly.
Then there is also the data question, of course. An agent is only as good as the data it receives, so it's smart to start where the data is already clean, accessible, and sitting in one place.
You also want the workflow to be stable. If the process changes every week, the agent will constantly need rework. So, the first win should be something well understood and documented.
And maybe the most important point — pick something measurable. Choose a task where success can be quantified, like time saved, costs reduced, or errors eliminated. So you can go back to leadership with a clear proof point for scaling.
And one more thing. Don't start with the mission-critical processes. The first agents should be meaningful for sure, but not so high-risk that even small inaccuracies create big consequences.
Tanya Kohen: All these are great pieces of advice for sure. What you are saying makes me think of one of the projects you shared during our prep call. This was the client who debated several automation ideas. Ultimately, the bottleneck was a reporting task — simple, repetitive, non-mission-critical, but definitely draining the team's time. Would you mind unpacking that story for our listeners?
Paul Chayka: Absolutely. It's one of those cases where the need for automation was very obvious. The company delivers due diligence reporting services for the property market, and their teams were overwhelmed as volume grew; they were spending hours creating document summaries. So the leadership realized that unless something changed, they would either fall behind or have to scale their team unnecessarily.
In our case, the challenge wasn't “we just need AI”. It was “our analysts are stuck doing repetitive work”. The process behind it was predictable, repetitive, and low risk. As you can see, it's an ideal first use case.
Here, we built an LLM-based AI agent that processes those documents, extracts the required information, and generates structured summaries. This drastically reduced manual effort and sped up the whole report generation process.
This solution was a low budget project that delivered an immediate impact. And it showed that even starting small with a clear repetitive task is often the best way to begin the process of integrating AI.
Tanya Kohen: That sounds like a perfect first step, right? Get that initial momentum and prove that AI can deliver results quickly. However, identifying the opportunities is only half the decision, right? The other half is choosing the right approach to actually deliver it.
So, how should leaders weigh the trade-offs between going with a quick Low-Code build or investing in a custom solution with more control and integration?
Paul Chayka: I see Low-Code solutions as a fantastic tool when you're trying to move quickly and validate the whole idea. They let you get an agent into production fast, test it on real data, and actually see whether the use case delivers enough value to justify going further. You also avoid the upfront technical overhead, because the platform handles a lot of the basics for you, like hosting and orchestration.
But at the same time, Low-Code has its limits. It works really well for contained workflows, but as soon as you need deeper customization, more complex business logic, or integration across several internal systems, it starts to get restrictive. That's usually the point where custom development becomes the better option.
I think, first of all, custom builds give you control over integration, security, data flow, scalability — all the things that matter once the agent becomes part of the core workflow. And for companies in regulated industries, the transparency you get with a custom solution can actually be a major deciding factor.
Another thing to keep in mind is the cost structure. Low-Code is inexpensive at the beginning, but it can become costly as you grow or as you need more advanced capabilities. On the other hand, custom development requires more investment upfront, but it often pays off long-term because you aren't tied to a platform's pricing or limitations.
Overall, I see it like this: Low-Code is perfect for early validation, and custom development becomes essential when you're ready to scale and integrate. They're not competing approaches; they're different stages of the same journey.
Tanya Kohen: I really like this framing, because you need to start somewhere and then scale as you understand more about how these things work. It makes perfect sense, and this phased approach sounds a lot more strategic, so to speak.
Let's move to the next big piece leaders care about: how to actually measure success. Once you've got an AI agent up and running, it's easy to report things like tasks automated or hours saved, but counting operations doesn't tell the full story. For business leaders, those are just numbers, and they want to see real impact. So, what are the key metrics that actually convince business leaders that the agent is worth scaling?
Paul Chayka: Here, when we talk about measuring success, the focus really has to shift from activity to outcomes that tie directly to business KPIs — essentially, outcomes that demonstrate real Return on Investment. And I don't mean “how many tasks the agent completed”. I mean: what actually changed for the business because of it? For example, did we shorten the time it takes to make a decision? Maybe a process that used to take three days now takes just a couple of hours. Or did we reduce costs, either within a specific process or across the entire department? Maybe we managed to cut down on errors, like eliminating manual invoicing mistakes or improving compliance. That has real financial and reputational value. Or, for instance, did we drive revenue in some way, like through more accurate pricing or sharper forecasting?
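To make this outcome framing concrete, here is a minimal back-of-the-envelope ROI sketch. All figures and function names are illustrative assumptions, not numbers from the projects discussed.

```python
# Hypothetical ROI check for a pilot agent. The inputs (task volume,
# minutes saved, hourly rate, build cost) are made-up illustrative values.

def monthly_savings(tasks_per_month, minutes_saved_per_task, hourly_rate):
    """Estimate money saved per month from the time the agent frees up."""
    hours_saved = tasks_per_month * minutes_saved_per_task / 60
    return hours_saved * hourly_rate

def payback_months(build_cost, monthly_saving):
    """How many months until the agent pays for itself."""
    return build_cost / monthly_saving

saving = monthly_savings(tasks_per_month=1200, minutes_saved_per_task=15, hourly_rate=40)
print(round(saving))  # 12000 per month in freed-up time
print(round(payback_months(build_cost=60000, monthly_saving=saving), 1))  # 5.0 months
```

The point of a sketch like this is exactly what Paul describes: leadership sees a payback period and a cost line, not a count of completed tasks.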
I think that one of the clearest signs that an agent is working is when other departments start asking for the same thing. It tells you that the value is real and the solution is definitely worth scaling.
Tanya Kohen: I think this is the area where cross-functional work shines, because that's exactly how scalability is achieved: working with the same agent from different standpoints and getting different insights from it. And I think the beauty of this is that it's not just about cutting costs, which has been the conversation around AI forever — are we all going to lose our jobs, are we going to cut costs so much that people aren't needed anymore? Right now, when we're talking about agents and more meaningful decision making, agents are actually going to help people do their work. That's a very interesting mindset shift: how can my agent give me more insight, how can I work better with it, and how can I make better decisions? It moves us toward business intelligence that actually drives better decisions.
First of all, that was a very helpful breakdown. Could you share a story where the success became obvious only after aligning the metrics with business outcomes?
Paul Chayka: Absolutely. A great example of this is a project that we did for one of our clients in the treasury space. Our client relied heavily on dashboards for working capital management. But as data grew, the dashboards became harder to extend and adapt, and extracting insights became a real challenge. So, we introduced an AI-powered conversational agent integrated into their Business Intelligence platform, into their dashboards. The agent allows users to ask questions in natural language and get answers, insights, and even data visualizations instantly. Instead of tracking queries processed, we looked at business outcomes: time-to-insight was reduced from days to seconds, the team no longer needs to create new dashboards manually, and operational costs in the BI department were reduced by 20%.
This shows that focusing on metrics that align with business goals like time saved and cost reduced can turn a simple pilot into a scalable solution.
Tanya Kohen: Right, and that's a great example. Talking to your dashboard is something that I'm sure many finance and treasury teams are really excited about. So now, when we're talking about time-to-insight, cost savings, and better decision making, this is not just vague efficiency, and I really like that. Again, it's a real impact on our daily work and what we do.
You know, I had several conversations with clients who were excited about experimenting with AI agents, but they weren't sure if they were ready. Some of them wondered whether they need to start with more traditional AI applications like predictive analytics, machine learning models, and only then move towards agents. And others were hoping that maybe the agent itself could be the entry point into AI without having to build the whole AI stack first.
So, I'm curious, can an agent be the first step into AI or does a company need a certain level of readiness before?
Paul Chayka: The short answer is: you can definitely start with an AI agent early, but you can't scale it without a solid foundation. Agents rely on a strong operational base. That means clear, structured data, systems that can talk to each other, and well-designed processes. So, if your data is scattered or the underlying workflow isn't clear, the agent will either struggle or require constant manual correction.
One thing I always emphasize is that agentic AI isn't a higher level of AI. It's not a phase two, or a final phase of maturity. It's just a layer, an orchestration layer that sits on top of whatever systems and data you already have. But that layer only works if what's underneath is coherent.
So, yes, you can absolutely start experimenting with an agent as your first AI initiative. In fact, for some companies, it's the fastest way to prove value. But if the goal is to build more agents later or connect them into some kind of multi-agent workflow, you really need that foundation to be solid. Otherwise your first agent becomes isolated and very hard to extend.
Tanya Kohen: Yeah, and you may end up just repeating the same developments multiple times and wasting time and effort on that. Again, this brings us to the layered approach to AI implementation you and I have been talking about so often in our meetings and in webinars as well.
The key is building the right layer in the right sequence so that the agent can actually grow with the organization instead of becoming a one-off experiment. Right?
Now that we understand the importance of a strong foundation, let's talk about scaling because once a company sees success with one AI agent, the question is — what's next? How do you move from a single automation to something more powerful? What does real multi-agent collaboration look like? And at what point does the company actually need it?
Paul Chayka: Usually, a single agent is focused on one very specific task: answering questions, generating content, or routing requests. For some companies, one well-built agent already creates noticeable value.
But eventually you reach a point where automating a single task isn't enough; the real efficiency comes from automating the entire workflow. And that's where a multi-agent system makes sense.
I think there are a few clear signals that a company truly needs multiple agents. First, it's when the process has distinct steps owned by different roles. For example, if a workflow involves review, integration, data enrichment, validation, routing, and reporting, no single agent can do it all, as each step requires different logic and context. In this case, multiple agents are required.
Another signal is when a task depends on different systems or data sources. One agent can specialize in the knowledge base, another in ticketing, another in analytics. When one task requires several tools, distributing the work across agents is more stable.
Next, it’s a case when volume increases, and the business needs some kind of parallelization. A single agent becomes a bottleneck here, while multiple specialized agents can divide the workload.
Another signal is when the business wants traceability. Multiple agents create clear responsibility boundaries like “this agent enriches”, “this one validates”, “this one integrates" and so on.
Finally, it's when you move from answering to orchestrating. That's the shift from task automation to process automation.
In a multi-agent setup, each agent does what it is best at. And they communicate, pass context, and call each other when needed, almost like a real department with real people. The real value comes from this coordination.
I can also share an example of multi-agent collaboration from the IT support department of a large bank. They needed to streamline support operations while improving quality and efficiency. For that, we created a system where multiple agents work together.
So, there were three agents. One agent monitors unanswered customer questions, ensures they're routed to specialists, collects their responses, and updates the knowledge base to keep it up to date. Another agent searches the knowledge base and provides accurate answers in chat format. And the third agent automatically creates tickets when no relevant answer is found, ensuring that no request is lost.
It was all a coordinated workflow. The agents functioned almost like a digital support team, improving response time, accuracy, and overall efficiency.
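The coordination Paul describes can be sketched in a few lines. This is a toy illustration only: the function names, the in-memory knowledge base, and the routing logic are assumptions for the sake of the example, not the bank's actual implementation.

```python
# Toy sketch of the three-agent support workflow: a knowledge-base answer
# agent, a ticketing fallback agent, and a curation agent that feeds
# specialist answers back into the knowledge base.

knowledge_base = {"how do i reset my password?": "Use the self-service portal."}
tickets = []

def answer_agent(question):
    """Searches the knowledge base and answers in chat format (None if no match)."""
    return knowledge_base.get(question.lower())

def ticket_agent(question):
    """Creates a ticket when no relevant answer is found, so no request is lost."""
    tickets.append({"question": question, "status": "open"})
    return "A specialist will get back to you."

def curation_agent(question, specialist_answer):
    """Collects the specialist's response and keeps the knowledge base up to date."""
    knowledge_base[question.lower()] = specialist_answer

def handle(question):
    """Coordinator: try the knowledge base first, otherwise escalate to ticketing."""
    return answer_agent(question) or ticket_agent(question)

print(handle("How do I reset my password?"))  # Use the self-service portal.
print(handle("Why is the VPN down?"))         # A specialist will get back to you.
curation_agent("Why is the VPN down?", "Maintenance window until 6 pm.")
print(handle("Why is the VPN down?"))         # Maintenance window until 6 pm.
```

The last three lines show the loop closing: an unanswered question becomes a ticket, the specialist's answer is curated back in, and the next identical question is answered instantly.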
Tanya Kohen: I really like this analogy, digital support team. So it does seem like it takes a village, right? Not only from the human team standpoint, but even a digital team is something that works best when it's specialized on certain tasks. And I think, as you were outlining the structure that we need to keep in mind, how do we move from automating a task to automating a full workflow? I think that's the key. Seeing the full picture, and the full workflow, and the important pieces of it is what makes the whole system work better. So, thanks for this example.
I have a technical question here. How do you keep these agents aligned and communicating smoothly as the system grows?
Paul Chayka: The short answer is — the MCP standard. It all comes down to a common standard, which is what MCP provides. It's becoming a widely discussed, almost hyped, term in the AI space, and for good reason. In practical terms, it's a very effective way to organize and manage how multiple agents work together and communicate with each other. In a nutshell, MCP, the Model Context Protocol, is a standard that lets several AI agents operate inside one environment, each with access only to the data and tools they are supposed to use. It doesn't replace existing platforms; it just connects the agents so they can collaborate smoothly. And in this particular project, MCP allowed the three agents to function in one unified single-chat interface, which made the whole system easier to manage, more secure, and much more scalable as the support workflow grows.
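The access-scoping idea at the heart of Paul's description, each agent seeing only the tools it is allowed to use, can be illustrated with a minimal dispatcher. This is a conceptual sketch, not the actual Model Context Protocol wire format (which is JSON-RPC based); the agent and tool names are hypothetical.

```python
# Conceptual illustration of MCP-style scoping: a central dispatcher
# exposes tools, and each agent may only call the tools in its scope.

TOOLS = {
    "search_kb": lambda q: f"results for {q!r}",
    "create_ticket": lambda q: f"ticket opened for {q!r}",
    "update_kb": lambda q: f"kb updated with {q!r}",
}

# Per-agent scopes: which tools each agent is supposed to use.
SCOPES = {
    "answer_agent": {"search_kb"},
    "ticket_agent": {"create_ticket"},
    "curation_agent": {"search_kb", "update_kb"},
}

def call_tool(agent, tool, arg):
    """Route a tool call, enforcing the calling agent's scope."""
    if tool not in SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    return TOOLS[tool](arg)

print(call_tool("answer_agent", "search_kb", "vpn outage"))
# call_tool("answer_agent", "create_ticket", "x") would raise PermissionError
```

The security property Paul mentions falls out of this structure: the dispatcher, not the agents, decides who can touch which tool or data source.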
Tanya Kohen: You've just described a perfectly synchronized team, but that coordination is hard to maintain, isn't it? This whole idea of having multiple agents on board brings up a concern I hear from a lot of leaders. How do you scale this without descending into chaos?
Again, once teams see what a well-designed agent can do, enthusiasm skyrockets. Every department wants its own agent, often built quickly, independently, and without a shared blueprint. One successful agent becomes three, then ten, and they can all be built differently, pulling from different data and unable to talk to each other. Suddenly, the organization has a collection of agents rather than a real ecosystem.
To wrap up, let me ask you this. How can leaders encourage invention and experimentation without ending up with a disconnected zoo of agents?
Paul Chayka: The key point is that the solution isn't to restrict the invention, it's to guide it with proper governance. One of the most effective approaches is establishing some kind of center of excellence. It doesn't have to be a big team, but it sets the standards for the entire organization. It can review new agent proposals, maintain a registry of existing agents, and make sure everything aligns with the company's architecture, data governance, and security policies.
The second piece is creating very simple early rules: things like which data model to use, what API standards agents should follow, how authentication works, and where logs and monitoring information should be centralized. Trust me, these don't slow teams down. They actually speed things up, because everyone is building on the same foundation instead of reinventing it every time.
And finally, I always encourage what I call a citizenship mindset. Instead of asking "what can I automate?", the better question is "could this agent eventually collaborate with other agents?" If every agent is built with the expectation that it might work with others in the future, you naturally get better alignment, better integration, and a much clearer path to scale into a true multi-agent ecosystem. Overall, to sum up: encourage invention, absolutely, but make sure there's a shared backbone that keeps every new agent from becoming an isolated experiment.
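One way a center of excellence can enforce such early rules is with a simple registry gate for new agent proposals. The field names and rules below are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical registry gate: a new agent proposal must satisfy the shared
# early rules (data model, API standard, auth, centralized logging) before
# it is recorded. All field names and allowed values are made up.

REQUIRED_FIELDS = {"name", "owner", "data_model", "api_standard", "auth", "log_sink"}
ALLOWED_API_STANDARDS = {"rest-v2", "mcp"}

registry = []

def register_agent(proposal):
    """Validate a proposal against the shared rules, then record it."""
    missing = REQUIRED_FIELDS - proposal.keys()
    if missing:
        return f"rejected: missing {sorted(missing)}"
    if proposal["api_standard"] not in ALLOWED_API_STANDARDS:
        return "rejected: non-standard API"
    registry.append(proposal)
    return "approved"

print(register_agent({"name": "invoice-bot", "owner": "finance"}))
# rejected: missing ['api_standard', 'auth', 'data_model', 'log_sink']
print(register_agent({
    "name": "invoice-bot", "owner": "finance", "data_model": "erp-v3",
    "api_standard": "mcp", "auth": "oauth2", "log_sink": "central-elk",
}))  # approved
```

A lightweight check like this is the "shared backbone" in miniature: it doesn't stop departments from building, it just makes sure every new agent lands in the same registry following the same rules.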
Tanya Kohen: Thanks, Paul. You gave us a few really important insights today, and I like the way you structured it. Now I think our listeners have a good roadmap in front of them and a good way of thinking about AI agents and how to scale them within their organization. And for our listeners, I hope you enjoyed our conversation today and do reach out with any questions.