
You're Not Behind on AI. You're Where You Need to Be to Get It Right.
Post 1 of 3 in a series on responsible AI adoption for destination marketing organizations (DMOs) and tourism leaders.
I'm getting versions of the same question from DMO leaders: How do we adopt AI in a way that actually works? And then, usually in the same conversation: but how do we do it responsibly? They've seen the headlines: the water use, the emissions, the labor implications. They don't want to be the ones approving something connected to these impacts.

Here's what I see when I actually look at their organizations. AI adoption is already happening. Just not strategically. And usually without anyone tracking the unintended consequences. Someone in marketing is running campaigns through a personal ChatGPT account. Someone in visitor services has a Claude subscription they use as an informal knowledge base. Someone else is using Gemini to draft emails.
The pattern has a name: shadow AI. About three-quarters of knowledge workers are using AI for their jobs, and roughly half are using tools their employer didn't issue. Most companies don't have a formal AI policy, and DMOs are no exception. Adopting AI this way creates real risks. It also leaves significant opportunities unrealized.
What I hear from leadership is that they feel stuck. They can see the need to do something. They can also see the risks of doing the wrong thing. So they don't do much at all. And while they're not deciding, the organization moves ahead anyway, without guardrails or coordination.
Being behind might be an advantage
Some DMOs are behind on AI adoption, and most are behind on intentional use of the tools. AI is being described as a transformational force more powerful than the industrial revolution, unfolding many times faster. If that's even partially true, AI deserves far more care and attention than most organizations are giving it. Many key decisions haven't been formally made yet: vendor choices, data-sharing practices, oversight policies, workflow redesigns. If that's true for you, it may actually be an advantage.
The organizations that raced ahead on AI are dealing with their own set of challenges, including data security lapses, quality-control failures, and incidents that have cost them customer trust. DMOs that paused, even accidentally, can do things right the first time. But it's an advantage only if you use the window to make those choices deliberately, before your organization makes them for you by default.
A key part of making responsible implementation choices is understanding the impacts of AI and being accountable. We can think about this in three main categories.
Environmental: AI use has a real environmental footprint across water, energy, and emissions. Data centers consume water for cooling, draw power from regional grids, and generate emissions that vary based on the type of energy being used. AI companies vary widely in both their environmental footprint and how transparently they report it.
Social: AI use affects your workforce, the communities that call your destination home, and the cultures you represent. Workforce exposure to AI displacement is highest among the demographics that overlap most with tourism labor. Data centers are increasingly being built in water-stressed regions, where they compete with residents for water. AI-generated content that depicts marginalized communities without their consent accelerates a problem the sector already had.
Governance: AI governance is the work of deciding what's in bounds, naming who is accountable when AI causes harm, and staying current as tools and regulations evolve. The EU AI Act is already partially in force, with transparency requirements for AI-generated content, deepfakes, and AI chatbots taking effect on August 2, 2026. These rules apply to anyone marketing into the EU, not just EU-based organizations. Canada's attempt at federal AI legislation collapsed in January 2025, and the US has yet to pass any. The regulatory landscape is unsettled, which is part of what governance has to navigate.
Each of these deserves more than a paragraph. The next two posts in this series will go deeper. For now, the point is that AI has impacts we need to acknowledge and address.
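To make the environmental category concrete, the basic arithmetic behind tools like an AI impact calculator is simple: queries times energy per query times the carbon intensity of the local grid. The sketch below is illustrative only; the per-query energy figure and grid intensity are assumptions, not measured values, and real numbers vary widely by model, provider, and region.

```python
# Rough back-of-envelope estimate of an organization's AI usage footprint.
# All numeric inputs below are illustrative assumptions, not measured values.

def estimate_monthly_footprint(queries_per_day, wh_per_query,
                               grid_kg_co2_per_kwh, days=30):
    """Return (kWh, kg CO2e) for a month of AI usage."""
    kwh = queries_per_day * days * wh_per_query / 1000  # Wh -> kWh
    return kwh, kwh * grid_kg_co2_per_kwh

# Example: a 20-person team, assumed averages only
kwh, kg = estimate_monthly_footprint(
    queries_per_day=20 * 25,      # 20 staff, ~25 queries each per day
    wh_per_query=3.0,             # assumed average energy per query
    grid_kg_co2_per_kwh=0.4,      # assumed regional grid carbon intensity
)
print(f"~{kwh:.0f} kWh, ~{kg:.1f} kg CO2e per month")
```

The point of running numbers like these isn't precision; it's that the grid-intensity term dominates, which is why where a vendor's data centers draw their power matters as much as how much you use the tools.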
I've had this conversation before, about flights
I keep getting déjà vu in meetings about AI. Not because the technology is familiar, but because the conversation is.
Throughout my career, I've had similar conversations about air travel: destinations that built their entire tourism economies around long-haul flights, realizing late that the carbon math would catch up with them. Not because anyone acted maliciously, but because once the tool became cheap and scalable, everyone rushed to use it before the full costs were priced in.
Air travel and AI are not the same thing. But both conversations have a similar structural shape. Both tools are powerful. Both can be used frivolously or for meaningful purposes. Both have individual best practices for using them more responsibly. Both have system-level challenges that require collective advocacy and action.
In aviation, roughly twenty to thirty years passed between long-haul flight becoming cheap enough to scale and the sector seriously measuring its emissions. During those decades, destinations built whole economies around exactly the thing we now need them to reduce, a fuel source whose costs hadn't yet been priced in. AI is advancing much faster than aviation did, so we don't have decades to get this right.
AI is at the moment right before structural dependencies lock in. Which vendors should we engage with? What type of energy should the data centers run on? How fast should we roll out the technology? What types of policies and guardrails should be in place? How do we support people through this transition? These are questions we need to answer as a collective. What gets decided in the next year will shape the years to come.
Where I am with this
A note on my own practice, since I'm asking you to look at yours.
I don't have all of this figured out. As a society, we're still uncovering what AI actually does to us, to our work, and to the systems it touches. Because the technology continues to evolve rapidly, we won't know the full picture for some time. That's not a reason to wait. It's a reason to be proactive and proceed with care.
AI is a genuinely powerful tool. It can be used to solve serious problems and make a real positive impact. It can also be used to do significant harm, sometimes on purpose, more often as a side effect of how it's developed and deployed.
Here's where I am personally. I use AI daily for research, coding, and intellectual sparring. I've stopped using some tools after running their numbers in the AI Environmental Impact Calculator I built. I have a working policy about what I will and won't use AI for. I've revised it a few times and I expect to keep revising it as I learn more and as the technology evolves.
A good starting point for adopting AI responsibly is knowing what you're working with. Here's a question for you to place yourself in your own journey:
If your board asked how your organization uses AI and what its impacts are, what would you say?
If the answer is "we'd have to find out," you're in good company. Many DMOs are in the same holding pattern. None of this is locked in yet, which means you can still make clear decisions about where your organization lands. The question is whether you make those decisions or whether they get made for you.
The next post walks through the impacts of AI in more detail and shares a tool to help you quantify them.
- TYLER ROBINSON
Tyler Robinson is the Founder and Principal Consultant of Tydal Consulting, where he helps organizations adopt AI responsibly and effectively. He brings over a decade of experience as a climate and sustainability strategist, advising places and organizations on how to operate more sustainably. He built the AI Environmental Impact Calculator to help organizations measure the impact of their AI usage.


