
Why Being Human-First Is the Way to Win with AI - When It Works and When It Doesn’t

 

👫 We build software for humans, and yes, we use AI to do that faster than before. But at the end of the day, if we aren’t serving you (our human customers), then we aren’t doing it right, which is why we put you first, not AI. This article outlines our current approach to AI; if it resonates, get in touch.

“People don’t take trips… trips take people.”

– John Steinbeck

I joined a Women Travel Leaders webinar hosted by Jillian Dickens last month. It was a beautiful example of the power of AI to help us achieve our goals. Jillian demonstrated a research project in which Claude enabled her to collect information far more quickly than she could have on her own, and perhaps could not have collected at all, without the assistance of the current generation of AI large language models (LLMs). In this article I use “LLM” in place of specific products like Claude, Gemini, ChatGPT, Grok, and Perplexity.

Why was this such a great use case?

  • The question she asked required a vast amount of inconsistently indexed information to be reviewed and collated quickly - a perfect problem for a large language model (LLM)
  • The resulting summary provided by Claude was verifiable - since Jillian’s next step was to reach out directly to the list of companies provided - eliminating the risk of hallucinations impacting the results of the exercise

Jillian Dickens is the founder of Gliderfox, where she is helping experiential travel companies leverage AI innovation and intentional experience design.

On that call, other members asked some great questions that highlight for me why AI is not the solution to every problem we have and certainly shouldn’t be the only hammer in our toolbox.

Generative & Hallucinating

“Why does ChatGPT give me a different answer every time I ask it the same question?”

By design, large language models (LLMs) will give you a different answer each time you engage, even within the same context, because they are generative. The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer”.

That is just a geeky way to say that, based on the prompt (and what the model has been “pre-trained” on), they make a guess about the right next word and put it out on the screen (they generate a response). It’s amazing how good the guesses are, until one is wrong in a way that can feel very bizarre.
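If you want to see what that “geeky way” looks like in practice, here is a toy Python sketch of the guess-the-next-word loop. The word probabilities below are completely made up (a real model derives them from billions of learned parameters), but the sampling step is exactly why you get a different answer each time:

```python
import random

# Toy next-word probabilities. These numbers are invented for
# illustration - a real model computes them from its training.
next_word_probs = {
    "the trip": {"was": 0.5, "takes": 0.3, "includes": 0.2},
    "was": {"amazing": 0.6, "long": 0.4},
    "takes": {"people": 0.9, "time": 0.1},
}

def generate(prompt, steps=2, seed=None):
    rng = random.Random(seed)
    words = [prompt]
    for _ in range(steps):
        probs = next_word_probs.get(words[-1])
        if probs is None:
            break  # no known continuations - stop generating
        # Sample the next word according to its probability. This
        # sampling step is why the same prompt gives different answers.
        next_word = rng.choices(list(probs), weights=list(probs.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the trip"))          # likely differs run to run
print(generate("the trip", seed=0))  # a fixed seed makes it repeatable
```

Run it a few times without a seed and you will likely see different sentences; that variability is the “generative” part working as designed, not a bug.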


This is excellent for creative ideation and exploration - a great place to start and bounce ideas around.

But if you’re trying to create consistency, it’s definitely not a feature. An LLM can be great for generating the idea for a new consistent process, but the decision about whether that process is right for you is a human-led one. The process itself should then be implemented in a system that assures consistency (like YouLi!).

You may have noticed you need to keep reminding your AI chat buddy that you want certain outcomes, or even that the answer it provided is not accurate. Sometimes it even identifies that it’s wrong and still doesn’t correct the issue. There are ways to help with this, but again, the model is generative by design, which means it makes stuff up; sometimes that stuff is true, and sometimes it isn’t. That’s the nature of the algorithm, and it has been proven mathematically that this is not something these models will ever be able to 100% eliminate. [see Journal of Web Semantics]

So we just have to be aware of this when using an LLM as a tool. If being creative and maybe a bit wrong is what you need, perfect; if not, perhaps an LLM is not what you need.

If you're not sure what you need, that's why you pick a tech partner that supports your goals, and finds the best solution for you, so you don't have to become an AI expert. 👋

This is also why the push for “agentic engineering” is going to cause a lot of damage (in addition to producing some great new tools), because every agent will, at some point, hallucinate. Maybe that’s fine when it’s just giving a busy mum the best recipe for a Thursday night, but if the software is responsible for handling payroll, I’m guessing (see what I did there?) you’d rather it not hallucinate. Unless it causes a bank error in your favor! ;-)

💡TIP: LLMs are notorious for not being great at math; they generate content probabilistically, which means they can’t be trusted to add numbers. So I recommend using an LLM to generate a spreadsheet that does the math for you - and then checking the numbers in the spreadsheet. (Some systems now do this for you automatically, but be sure to check your numbers before banking on them.)
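To make that tip concrete, here is a tiny Python sketch of the “check it yourself” step. The line items and the quoted total are made-up examples, not real figures; the point is that arithmetic done by a program is deterministic, while a total “guessed” by an LLM is not:

```python
# Hypothetical line items from an LLM-generated itinerary quote.
line_items = {"flights": 1250.00, "hotel": 890.50, "tours": 310.25}

# The total the LLM quoted (an invented example of a bad guess).
llm_quoted_total = 2451.75

# Recompute the total yourself - this is what a spreadsheet does for you.
actual_total = round(sum(line_items.values()), 2)

if actual_total != llm_quoted_total:
    print(f"Mismatch! The LLM said {llm_quoted_total}, "
          f"but the real total is {actual_total}")
```

The same habit applies whether the check lives in a spreadsheet formula, a script, or a system that does it automatically: the deterministic tool does the math, the LLM does the drafting.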

The main point here is that AI in its current form (LLMs) is an amazing tool for some things, but not all things. So it is important to know when to use it and when to avoid wasting your time or putting your project or business at risk by deploying it.

You’ll know it’s time to tackle your problem with other tools when you find yourself arguing with the AI as if it understood you. After the third time it apologizes for the mistake and then gives you the same wrong answer, find a different tool (or book in a demo to learn how we can help you automate your process so you don't have to fight with the AI).

Is the cost of AI worth it?

Another great question from that webinar:

“We’ve seen customers come with AI generated itineraries for us to then refine. Should we make it possible for them to do that on our website directly?”

That is a great insight into an emerging trend with consumers shopping for travel experiences. It highlights two key points:

  1. Notice that the traveler is still coming to an agent to make that trip a reality - because they know the AI is not 100% right, or because having an agent book it provides them with support if something goes wrong in the real world (where the AI buddy is not, yet). 🤖
  2. The traveler is using a free tool to do all the research and bring a partial itinerary to the conversation, potentially saving the agent a lot of time and money on back and forth. This is just the most recent iteration of this behavior. That’s why, on that call, I recommended against bringing that AI interaction into the agent’s website (at the time of this writing). If they put it on their site, they have to pay for the tokens, bringing the cost of that "back and forth" back into the travel business. 💸

Right now, the AI boom is fuelled by venture capital and debt as companies race to own the market. We don’t know which ones will survive, and we don’t know what they will charge for their product when it goes live. That’s also why we’re deliberate about where AI is applied within YouLi, ensuring every use case has a clear return rather than introducing cost without measurable value.

Even at $200/month, Claude reportedly loses money on the queries it handles. At some point, all of these companies will start charging what it actually costs. (Claude changed its pricing model to be usage based during the writing of this article.)

💡TIP: Don’t build a key business function on a free model. Feel free to play and explore, but be ready to let it go or pay a significant price to keep it.

YouLi Case Study: Generating measurable ROI on AI

Scout 208 Release

We have adopted AI in our support channel as a way to provide real-time responses in a very interactive and timely way. Scout, our St Bernard, is the avatar we gave our AI chatbot, because he’s very helpful - but at the end of the day, he’s not a human and can’t help with all the queries we get.

But he’s actually better than some humans at pulling just the right info out of our support articles. I love using him, and he has allowed us to focus our support team on the more sophisticated problems rather than the basic queries.

Scout costs us credits for every interaction. While tokens are the basic unit of measurement for the work an AI does, our supplier has converted that to “credits”, a pattern we are seeing more and more among AI service providers.
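To give a feel for the kind of math behind a decision like this, here is a back-of-the-envelope Python sketch. Every number in it (credits per chat, price per credit, human cost per ticket, deflection rate) is a made-up illustration, not our actual cost model:

```python
# Hypothetical cost model: AI support bot vs. human first response.
credits_per_chat = 3          # assumed credits consumed per interaction
price_per_credit = 0.04       # assumed USD per credit
human_cost_per_ticket = 4.50  # assumed loaded cost of a human first response

ai_cost_per_chat = credits_per_chat * price_per_credit

def monthly_savings(chats_per_month, deflection_rate=0.6):
    """Estimated savings when the bot fully resolves a share of chats."""
    deflected = chats_per_month * deflection_rate
    return deflected * (human_cost_per_ticket - ai_cost_per_chat)

print(f"AI cost per chat: ${ai_cost_per_chat:.2f}")
print(f"Estimated savings at 1,000 chats/month: ${monthly_savings(1000):,.2f}")
```

The useful part is not the invented numbers but the habit: once you know the credit price and your human cost per ticket, the ROI question becomes a formula you can re-run whenever your supplier changes pricing.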

We have chosen to only roll out customer-facing AI when we:

  1. Know the cost model (not counting on free forever)

  2. Have confidence the product is not going to vanish next quarter

  3. Can properly measure the ROI

We don’t plan to be one of the 95% of companies whose AI projects fail to deliver ROI as documented by a recent MIT study.

Just 5% of integrated AI pilots are extracting…value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.

So far we are in the 5% who extract value from these new tools, because we are taking a measured approach.

We rolled out our first customer-facing project in a staged manner, with clear measures of ROI at each stage to progress to the next. We took the time to review how it handled chat interactions. We refined our support articles to ensure Scout provided better answers, because AI is only as good as the data it is using. We checked on satisfaction metrics. We measured how expensive it was relative to the cost of having a person answer the same question. After 6 months, we released it to handle the first-line response on all our support enquiries. As we grew, we knew that we’d either need to get Scout working or hire another person. Training up another human would have taken a lot longer and only covered one timezone.

This way, our first line support can be on all the time and we can focus our human effort on improving the product and documentation, rather than just directing users to the right article.

So far, we’ve decided this is an effective use of AI in the business, because there are humans checking the results: the customer receiving the response, and our support team regularly reviewing Scout’s responses and handling escalations. We think we’ve been able to provide the best balance of real-time responses 24/7/365, backed by humans who can handle what isn’t documented in our support articles. We hope that continues to be true, and we’ll keep monitoring and seeking feedback. (While writing this article, our supplier changed the pricing model on this service; that’s how fast this is moving.)

If you’re considering AI adoption, consider whether there are documented examples of your use case being solved well by AI. Also consider the long-term maintenance costs.

Then be very aware that if the cost presented to you is unlimited (a flat fee or no fee), it will likely switch to usage pricing very soon. Consider whether you’d roll out that solution if you had to pay for every interaction.

I have not yet delved into the environmental costs, but of course the “cost” I speak of includes increasing water and energy consumption, which should only be spent if it is worth it, not frivolously. People argue about whether AI really is such a big water consumer (compared to other industries), but the problem is not the exact current usage; it is that AI is a rapidly growing consumer of a limited resource (water). At a time when we need to be conserving, not consuming more, that’s worth being aware of. One thing is not up for debate: each query you send to a free LLM right now consumes more resources than you are paying for. That means you don’t see the cost, so you’re not making balanced tradeoff decisions.

This is not a sustainable approach, economically or environmentally.

Keep that in mind when you ask an LLM for a recipe you know you have in a book on your shelf.

Wisdom vs. Information

“One's destination is never a place, but a new way of seeing things.”

– Henry Miller

This leads me to an excellent observation by our Angel Investor, Dr. Paul Boxer (who happens to have a PhD in Artificial Intelligence). Artificial Intelligence has access to effectively the entire digitized world of information that humans have amassed. It’s able to aggregate and synthesize this information into knowledge that we can use. But what it cannot do is apply wisdom to know when something is right (in all senses of the word).

We all know that teenagers armed with a little information can be dangerous. Why is that? Because they lack the wisdom to know what to do with that information.

AI in its current form is like a teenager with access to the internet and, these days, probably your internal documents. Sure, it sounds smart, can be charismatic, can get things right, and can surprise you with its creativity. Yet it has no wisdom - nor the capacity to develop it.

We work in travel because we are passionate travelers, and we have seen the power of travel to educate and teach us lessons that could never be acquired from the internet. What we acquire is not knowledge, but perspective and ultimately (we hope) wisdom. Travel is powerful and those of us lucky enough to experience the world through travel know that an itinerary is not the full story.

We should use the tools that let us become more human, not delegate critical decision-making to entities that do not understand why they are even making the requested itinerary. We should certainly leverage tools to make us more efficient, but not give up our autonomy to entities that aren’t really thinking (despite what the little prompt tells you).

Those of us with the hard-earned wisdom of living in this world should be the ones deciding when these tools are used and how, because we know things from our experience, while AI can only generate variations on the knowledge it has been given access to (pre-trained on) without truly knowing the point. That’s why so much of what it makes is slop.

A developer recently quipped to me, “we are going to stop being Software Engineers and become Sanitation Engineers if we keep this up”. We all know that “haste makes waste” and AI is really good at haste. We need to be the ones guiding that so we don’t end up like Mickey Mouse in the Sorcerer’s Apprentice; snoozing in a chair while our creations wreak havoc.


💡TIP: Setting up an automation is easier than ever, but...

- What about all the data scenarios you didn't think about? 
- What about when you need to update because of a security issue?

At YouLi, we have systems and people in place watching your automations.
With full service solutions, we will find and correct errors before you know it failed.

 

Human-First, not AI-First

“Exploration is really the essence of the human spirit.”

– Frank Borman

We’ve seen a lot of companies suddenly become “AI-first”, even though most of their (very successful) software is not based on AI technology. I get it, it’s an aspiration and a way to raise a large amount of capital during a hype cycle. The leaders in AI have made sure the narrative is “embrace AI 100% or get left behind”.

I’ve lived through a lot of Hype and Gloom cycles in technology. I was ghosted by a guy in a chat room in 1995, long before we had a word for it. I graduated with a Computer Science degree in 1999, just in time to see the first .com bubble burst. Each wave of technology has changed things, and this wave will too. I don’t pretend to know how this one will play out in specifics, but in general I know that no new technology is ever as world-changing as the creators hype it to be, and usually not as awful as the doomers predict. It’s always a balance of good and bad. The key is to be focused on the creation of the good that it can enable, not on the technology itself.

Whether you’re swinging towards the hype or the doom side of the prediction curve, remember: AI is just a tool - it is only our master if we let it be.

That’s why at YouLi, we are Human-first, not AI-first. Because we are the masters of this tool and we only give it autonomy in narrow spaces with a lot of review and oversight.

We build software for humans, and yes, we use AI to do that faster than before. Yet, at the end of the day, if we aren’t serving you (our human customers) then we aren’t doing it right.

That philosophy is core to how we’re evolving YouLi — not as an AI-first platform, but as a human-first system where AI is applied carefully to support, not replace, the people running complex travel operations.

And because we are humans, we care about getting it right.

Book in a demo to learn more about how we power unique group travel companies through automation.

Footnote:
- I only had humans provide feedback on this article 
- But the audio version is AI generated. Please check the written version if any of it sounds wrong.