Tanya Kohen: Daniel, while preparing for this episode, I was looking at some of the projects you share online, like the AI for film recommendations or the Sherlock tool that sifts through headlines. It got me wondering, how did you first get into building these kinds of things? Is this a side passion or did it all start from a business need?
Daniel Huszar: It started from a business need back in my previous role. I had to be a part-time engineer, because we were a small team and we needed to get those demos on track. Even before LLMs, when this took a bit longer than it does now, you had to show the client what they want to see so they know that you understand.
The software is proof that you understand the business process behind it. I was always a tool builder, always thinking about the product I was selling back then. These days it's like this: I mostly share the tools that I build for myself and for our company. I'm not allowed to show many of the tools that I've built for clients. That is one thing.
The way it started really was a true business need, because I publish a lot of content. If I didn't have an AI writing 70-80% of it for me, I couldn't do it. The same goes for research agents and things like that; it's just not possible without them. I tailored it to my writing persona and to what interests me, so the results will only contain things that are at least somewhat interesting to me, output in those patterns. To close this out: a lot of my clients don't even care about AI. They care about the results that I achieve, like a sales strategy, refining a pitch, or building a prototype, things like that.
But some of them asked — especially when LLMs came around — “How did you do that? Can we use the same tools? How did you write that? How did you actually research that? We suspect it's AI, but we can't prove it.” So it's kind of half-and-half: a real business need to be efficient, but also a way to make clients better, I would say.
Tanya Kohen: You also mentioned something very interesting to me. You said that if you're personally interested in something, it just helps. In our lives, when we try to separate “these are my professional interests, these are my personal interests,” sometimes we go too far with it. You're the same person, so you'll do better with pretty much anything that's of interest to you, if you can blend those interests and find applications, or parallels, in your professional life as well. Speaking for myself: some content I see just sparks certain thoughts in my professional line of work. Don't you think it's a good blend?
Daniel Huszar: That is such a great thought. I would ask you, do you think AI would exist without science fiction? Maybe not, right? It bridges the gap between what seems impossible, what could be possible, and where we are now. When I was thinking about machine learning and applying it to working capital, to how we could see more patterns and things like that, that was exciting for me. It still is. But if you had told me 5-10 years ago, “You can talk to Aristotle, you can just talk to a large language model, to an AI” — that would have been insane. Or, like Jarvis in Iron Man, “you can build this.”
Tanya Kohen: Even having a video conference like the one we're having right now was unimaginable, or at least not deemed realistic, 20-25-30 years ago, but here we are — it's so common these days. It's fascinating, and it's the one world we're immersed in now. So I think the more creative people get with their thinking these days, maybe the better they are professionally in their jobs too, because it's all kind of one now, in a sense.
It sounds like that hands-on building is giving you a very concrete sense of what's possible. Is that what led you to the bigger idea you've written about in your article, “The End of the Dashboard”: that working with complex data doesn't require something pre-built, but something with an intelligent interface? Can you talk a little bit about this and what drove you to write the article?
Daniel Huszar: Absolutely. It was partially the tool building, using LLMs for hours every day, because they're very useful for my work in general. That, and frustration with the kind of data and insights that we have in working capital, which are in a way extremely great: with granular data, we're at the pulse of the company, with the invoices, with the transaction flows. Then I thought, what if we add another data layer on top of that? What if we could talk to that data? That's what led us, many years back, to explore machine learning to find new patterns. Then I thought — now that we have natural language, can't we just ask the data to tell us about these correlations, or say “I suspect something about this client here: they're growing very fast, and I know this guy, he's been a client for 10 years, they're a supplier to the automotive sector in Germany, which is not doing so great — how does that make sense?”
And I can ask these kinds of data-synthesis questions in natural language. So it was a little bit of frustration and a lot of curiosity about LLMs. And I think the title was provocative, which works on LinkedIn sometimes, right? Because we do need dashboards: there's an initial visual cue, and then you can filter, click around, drill down and all these kinds of things. But what if we made sense of these numbers in a different way? Even for a junior — imagine someone going into risk, in working capital, in treasury: “Hey, look at this stuff, it's hard to figure out.” They could have a small agent telling them “This is what this means in treasury, this is what these two numbers mean combined.”
I'm not a treasury guy. It's like the music evolution that I described in that article. You had the messy MP3 collection, like everyone else who was pirating. Some people organized it, some people didn't. Then you had iTunes, with all the labeling and so on. And now I don't need labeling anymore: I have Spotify sitting on top of that layer, organizing everything. Nothing is messy anymore. I can search for anything, and everything will be of good quality. This is a new abstraction layer that sits on top of the dashboard, on top of our data. And it just so happens that we can now use natural language to interact with our data, which I think is completely insane. And it's not even that hard to start with smaller, more focused use cases, because the models are so good as well.
Tanya Kohen: It does seem like a sort of evolution of the structures we use in our workplaces. The music analogy you're using describes an evolution in the structure of how you interact with your listening library, so to speak. The same pattern led me to very similar thinking about dashboards. I agree with you: we don't have to stop using what we've been using for such a long time; we can continue with the processes that work well. But on the other side, we don't have to stick to this flow anymore. We don't even have to rely on reporting and dashboards as much as we had to, because now we have additional options. It's all about optionality, and about the ways you can quickly retrieve certain numbers.
I always give this example. I've spent 20 years as a treasurer in large corporations and smaller — and even nonprofit — companies. The one thing I keep hearing everywhere I work is the same phrase: “We know we have this data somewhere in our systems, we just don't know how to get to it.” I think this is the time when we can finally streamline this and build appropriate data layers, and companies really should be thinking about getting into this right now, making sure these layers are working and that they have a roadmap, because this is the important part. The data layer, having it all in the appropriate places with appropriate access, is key to really being able to work and to having these insights at your fingertips — for a CFO, for finance professionals — because that's how you can access it more quickly.
Daniel Huszar: It's like a universal translator for all your data. We can't just mindlessly dump it in, but it's much easier to make a structured request for your data. What I would add is that we can now ask questions we simply couldn't ask before LLMs. They're not 100% reliable in a lot of cases, but they still work very well. If you have a senior expert like yourself sitting in front of a treasury dashboard, talking about risk or liquidity planning and these kinds of things, you can ask questions and do this kind of predictive analysis: “What if the business does X? Give me three scenarios: very bad, medium, very good.” And then the dashboard can draw you the graph if you want. To me, this is completely insane, because this used to take many, many steps of iterating thoughts in your head without seeing anything on the screen. Now we can do it much faster, with different insights. And we're actually not sacrificing that much in this case, because you're an expert: you can tell me if the output is complete nonsense, or you can say “Let's think about that, let's sleep on it, let's try and formulate a strategy tomorrow.” We couldn't do that before. That's fascinating to me.
Tanya Kohen: So much stronger decision support is available now. Let's get very practical and let's talk about this: if a treasury team is intrigued by this idea of a conversational interface, where is the best place to start preparing for this conversational future in your opinion?
Daniel Huszar: There are three things that come into play here. First, I would ask: “What kinds of questions would you want an LLM to answer in a perfect world?” Maybe start there, because that's the fun part, right? What would I be excited about? It doesn't matter yet whether it works or not; a lot of things might work. Second, you need to think about the data and the data curation you're doing, because I don't recommend just dumping everything into an LLM. You can do that in some cases, with gigantic prompts, and it will still be usable, especially if the model is really smart. But in business contexts, we should look at the data, curate it, and maybe build a small assistant on top of it. Third, where should they start? Put it in front of the senior users, and they will figure out whether the dream was achieved, whether we got somewhere with this. You need to be very agile, because this technology is evolving so fast. Also, the actual dashboard is not that important; that's the funny thing about this. It's more about which question should ideally be answered, but which we can't answer quickly with the dashboard itself.
Tanya Kohen: This is not as obvious as it sounds. What is the right question? It's a skill of its own, and I think developing this skill is going to be beneficial for everyone across the organization. “Why do I want to run this report, and what action do I want to propose based on it?” This is what people should be thinking about more than anything else. It's actually a very good skill to build.
Daniel Huszar: This is like business sense applied to prompting — translating that sense into a prompt. If you can do that, that's great for a team: you're interacting with the technology, upping the skill set. And then you can get to the questions, like “I've always wanted to know this about our data; is there any way we can answer this question?” Because that would unlock X new clients, or X new liquidity, or X new limit in factoring, because we could trust the client more, the business case more, our system more. And that is real ROI that isn't even measurable, because it's infinity in a way: I simply couldn't do it before LLMs.
Tanya Kohen: I'm very excited about this development, and not only from the business-value standpoint but for the value to the team: the work becomes more interesting, which is also an important motivation, right? Not being bored, and feeling that you bring value, that it actually matters what you think and what you do with this data, how you interpret it, how you propose the next steps. I think it's a very important development, and it's the part many people underestimate when they talk about the threat of AI to jobs. There are certain threats — no way to deny that. But there is also room to build more skills that can potentially lead to more meaningful work.
Going back to the ROI topic and what you've written about — a concept you call “From shadow AI to ROI.” Could you walk us through it? How can an organization take that organic — often unsanctioned — experimentation with AI and actually channel it into a strategy that delivers real return?
Daniel Huszar: We did a study in the summer with 143 executives, plus a few more people who answered. We tried to find out how people are using AI: why are you not using it more, what are you using it for, what would you like to learn? I wanted to see how leaders use it: about half of them said “this is useful,” another 25% said “we could see this being useful,” and the rest were not impressed. Then the MIT NANDA study came out just a month later with very similar numbers, and it said that company-level ROI is hardly measurable. We think there's individual ROI, but company ROI shows up in just 5% of cases; 95% are considered a failure in terms of ROI. What we're seeing is basically that this is very hard to measure at enterprise scale, and I think the studies hint at that. Of course, that doesn't mean there isn't extreme ROI in individual cases. If I can publish on LinkedIn twice as fast, three times as fast, and it's actually good because I edit the content, then there's real ROI in there. Then there's the question of “shadow AI” use: a lot of people in the organization use LLMs. Should they, or shouldn't they? Sometimes it's mandated, sometimes it's not, and there's a gray area in between. The MIT study found that only 40% of the 300 enterprises they questioned have official contracts: they buy API access flat-rate for all the tokens they want and give it to their employees. But what about the other 60%? What are they doing? Nobody can tell me. If I look at LinkedIn, I see so many AI-isms, LLM-isms — a lot of people are using it.
Coming back to your question: how do we leverage that? We can leverage it by actually making it legal to use AI, to experiment within certain hard guardrails, and to allow some breathing room. I don't see why official marketing data from the website shouldn't be sent to an LLM. Why not? It has scraped it already.
We also found this in workshops, and we've seen Fujitsu doing it; Politecnico di Milano just released a study on it. What they did is a platform approach: they put a platform in between the users and leadership, everybody participated, and they built custom GPTs — custom assistants that have some data in them. They have very specific instructions, sometimes very long prompts inside them, and they behave a certain way. Maybe one makes you a movie recommendation, maybe one writes like you do, maybe one researches things that interest you. What this “platform thinking” says is: “We want everybody to be happy with the platform, because everybody is a customer of that platform.” Just like with Airbnb, in theory (maybe not anymore, because it gets kind of expensive): if you have an apartment and you're renting it out, you're a customer of Airbnb; if you're the one renting it, you're also a customer of Airbnb, and kind of everybody is happy.
What I would recommend is that everybody builds their own use cases. Maybe you run hackathons (that's what we did with some clients as well) to see what is actually working right now: what kinds of use cases we can see working for individuals. Leadership is the same: they put in their use cases and then just share them, like “this is for my personal research,” or “this writes drafts for these X clients.” And then people got together, experimented together, and they all became builders.
This is a no-code, or very low-code, approach. Of course you can code, but usually it's more about prompting and giving the LLM direction. I think this is a genius approach. And the MIT study comes to the same conclusion in a way: it's the people who experimented, the people who are really passionate about it, which is what you were hinting at in your previous answer.
Then something happens, because we're engaging with a technology that is not deterministic like software but probabilistic. It can be a bit frustrating sometimes, with the hallucinations, but we mold it into something we can use. That can start with experimenting on cooking recipes, then taking the same approach to work and making something else: taking a tiny bit of data, ingesting it into a GPT, and showing it to a colleague, like “Look, this is really cool, you never have to write that draft again, and maybe it doesn't have to be Shakespeare this time.” We found that this works extremely well, even prompting together, and that everybody ends up with at least one good tool, and you don't have to be technical for this. That's the best way. It's a bottom-up approach. You can't just say to people “Here's ChatGPT, do whatever you want” without any training, but this is the ultimate training, because it makes you a practitioner. You can start with prompting, then more complex prompts, then building assistants, and there will be ROI. There will be: I've seen it, I believe it, and I believe the studies.
Tanya Kohen: Right, and it seems like an approach built on collaborating more. I think this is one of the silver linings of what I'm hearing: collaborate, work on this together, and while the strategy can be unique in every scenario, who can build a strategy if not the people within? And I do like the bottom-up approach, in the sense that that's where the details come from. Because details matter, especially with a technology like this, right?
I've been a fan of working cross-functionally forever. I believe that's where invention happens: when you work together. You know something, you've already applied it in your line of work, but then someone else comes along and you think “This would fit perfectly with what you do,” and that other person may not even know about it. It reminds me of an initiative from way back, maybe 10 years ago. No AI was involved at all. I was at a large corporation then, and they ran an initiative of exactly that kind, which I think is very beneficial. They said, “All departments: if you want, submit your successes, like high-ROI use cases, and share them, and we'll have all the other departments look at them and just talk about them.” Not for the sake of showcasing “this is what we've done,” but rather “this is what we've done — maybe you can use this too.” I think it's very powerful, and I actually remember a couple of projects emerging from that experiment. So again: don't be afraid to talk to each other, experiment more, and be open to this.
Daniel Huszar: What you just said — showing each other things. A lot of companies have these kinds of fixtures, like a weekly slot where somebody presents something they've done, which could be a lot of things, and you can demo these things to each other very easily. The thing is, you could build a much better Treasury GPT than I could, even though I'm the technical one. You could do it because you have the domain knowledge, and I could support you: “Okay, maybe you phrase it this way, maybe we pull the data from there to make it more deterministic in some ways.” And that's the crazy thing that is happening now.
So the approach you just talked about gets supercharged by LLMs, simply because we can use natural language. Finally, you — the domain experts — can have the full deck of cards in your hands. You can even code with natural language, and you absolutely could, because you think very logically and you're a domain expert. That is something really remarkable, I think. I wrote in a recent post that maybe most internal applications will no longer come from your IT department. All these little assistants flying around will come from senior experts, from accounting, or wherever.
Tanya Kohen: That's exactly why I left my full-time treasury job and started pursuing project work, precisely because I feel there's a need in the market to help business leaders start creating new structures, new systems, new things, because now they can; now anyone can be an inventor. And that's basically the reason for all of this. On a different note: let's talk about your podcast, “Discover the Ordinary.” Would you like to share a little with the audience about it, the story behind the title, and what you hope your listeners take away from your conversations there?
Daniel Huszar: This podcast originated from my art background, because I used to be a filmmaker, a designer, and so on. It's very much ingrained in my previous life, my very young years. I still go out and do street photography a lot, and that means capturing ordinary scenes: you find something and make it extraordinary in a way. You discover the extraordinary moments in the ordinary, and that's what it's all about. It started as a strategy podcast in a way, also about sales strategy, because that's how I spent a lot of my career. But it became an AI podcast just because of my work focus and my private focus. I mean, I'm immersed in this 24/7.
You see, it applies even here. I built this little assistant that helps me do X much better: write much better, or whatever. And it's so ordinary. Sometimes it's the smallest applications, the smallest use cases, and then you can scale them across your whole company.
And that was one point I forgot earlier: when you have this marketplace with all these people making assistants, you look at it in a structured way, and then you scale the ones that are really good, the ones that not just one person can benefit from, but maybe a department, or even the whole organization.
Tanya Kohen: And that's why I talk a lot about innovation versus invention, and something resonates with me when you say you can take a small thing and scale it: you see something working, and you can use it in another department. That is exactly the spirit of inventing new things. So I think it's a new era in a way. Being innovative is good, because you have to improve things, but coming up with totally new things, doing something you've never done before: that's invention to me, and that's what I think is ultimately important in this day and age.
Thank you so much, Daniel, and thank you for coming and for such a thought-provoking conversation. I really enjoyed our talk, and I'm sure our audience did as well. We'll talk more in a bit and just see how things are developing, because this field of AI is developing so quickly, so I'm excited for it.