Mark Simpson, Engineering Director
26 June 2025 · 6 minutes
In Part 1, we explored the foundations of Generative AI adoption: how context, thinking time, models and applications shape your success with AI. Now, we’re moving deeper into the mechanics that make AI useful in the real world, for your organisation, your data and your people.
As we said before, AI isn’t an IT project. It’s a whole-business change initiative, and it needs to be approached with that same breadth of perspective. But as the IT leader, you’re the one who’s going to bring the vision to life, which means understanding how it works, where its strengths and weaknesses lie, how to adapt it to your organisation, and how to guide others to get the most from the tools available.
This blog dives into four more concepts you need to understand to make generative AI work in your organisation: knowledge assets, business data, guardrails and agents.
Every enterprise has a vast, often underutilised wealth of internal knowledge, hidden across thousands of documents, PDFs, videos, slide decks, meeting transcripts, emails, and image libraries. This is unstructured content: information that’s valuable, but not in a form a traditional system can easily use.
This is where generative AI really starts to show value.
Modern AI applications can be configured to search and process unstructured data. That means you can surface hidden insights, accelerate onboarding, support customer service teams, or make historical project knowledge available on demand, simply by enabling your AI to query this material. That information then feeds back into your model, providing it with more business context and understanding.
For example, not only can AI generate unstructured responses (text, audio, visuals), but it can also cite the internal content it draws from, giving you transparency and auditability.
Tip: Start by indexing one high-value dataset, like your support knowledge base, technical documentation or customer onboarding materials. Focus on a business-critical area with high friction or repeated effort, and give your teams (or customers) an easy, searchable way to access that knowledge.
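To make that first step concrete, here is a minimal sketch of a searchable knowledge index. It ranks a handful of hypothetical support documents against a query using simple term weighting; a production system would typically use embeddings and a vector store instead, but the shape of the idea is the same.

```python
import math

def tokenize(text):
    return [t.strip(".,!?").lower() for t in text.split()]

# Hypothetical snippets standing in for an indexed knowledge base
docs = {
    "onboarding.md": "New starters request a laptop and accounts via the service desk portal",
    "vpn-guide.md": "To reset the VPN client, reinstall the profile and re-enter your token",
    "billing-faq.md": "Invoices are issued monthly and payment terms are thirty days",
}

tokenized = {name: set(tokenize(text)) for name, text in docs.items()}
n = len(docs)
vocab = set().union(*tokenized.values())
# Rarer terms carry more weight (inverse document frequency)
idf = {t: math.log(n / sum(t in d for d in tokenized.values())) for t in vocab}

def search(query):
    """Return the document whose terms best match the query."""
    q = set(tokenize(query))
    return max(tokenized, key=lambda name: sum(idf[t] for t in q & tokenized[name]))

print(search("how do I reset the vpn"))  # vpn-guide.md
```

The document names and snippets are invented for illustration; the point is that once content is indexed, "an easy, searchable way to access that knowledge" can start very small.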
Here’s where the game really changes. When you combine generative AI with your own structured data, the value goes up exponentially.
This isn’t about plugging your confidential databases directly into the model. In fact, the model doesn’t need access to your private data at all. Instead, your application acts as an intermediary. The model makes a request (“I need these five sales figures for this region”), and your application securely retrieves that data before the final response is generated.
We touched on this with applications last time, but think of it like this: the model asks, and your application answers.
This architecture is critical for enterprise environments. It lets you generate highly personalised, context-rich outputs (like generating a management report from real-time sales data) while still maintaining full control over who accesses what.
Tip: The AI doesn’t need to see your data. It needs to know what kinds of data or functions it can ask for and trust that your application will safely handle the request. This is essential to communicate with more cautious stakeholders.
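This intermediary pattern can be sketched in a few lines. In the hypothetical example below, the model emits a structured request naming a function; the application checks it against an allow-list and fetches the data itself, so the model never touches the underlying store.

```python
# A hypothetical in-house sales store the model never accesses directly
SALES = {("north", "2024"): 125_000, ("south", "2024"): 98_000}

def get_regional_sales(region, year):
    return SALES.get((region, year))

# Explicit allow-list: the model may only ask for these functions
ALLOWED = {"get_regional_sales": get_regional_sales}

def handle_model_request(request):
    """Dispatch a structured request emitted by the model to a vetted handler."""
    name = request["function"]
    if name not in ALLOWED:
        raise PermissionError(f"Function not permitted: {name}")
    return ALLOWED[name](**request["arguments"])

# The model asks for data by name; the application retrieves it securely
request = {"function": "get_regional_sales",
           "arguments": {"region": "north", "year": "2024"}}
print(handle_model_request(request))  # 125000
```

The function and field names here are assumptions for illustration, but the control point is the real lesson: access decisions live in your application code, not in the model.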
Let’s talk about trust. AI projects often launch with excitement, then hit resistance when issues like accuracy, bias, security or reputational risk come into play. It’s not unreasonable: generative AI can sometimes go off-piste, make up facts, or produce unexpected results.
But that’s why guardrails exist.
Guardrails are policies, filters and controls that govern how your AI is used. These can include content filters, input validation, access restrictions and human review steps.
This isn’t just about safety, it’s also about improving quality over time. Responses can be reviewed against company guidelines, and if needed, fed back into the system to iteratively refine future answers.
Tip: Build guardrails from the start, not as a bolt-on. It’s easier to win over internal stakeholders when you can clearly demonstrate that AI is being used responsibly. You can adapt over time, but start with some key concerns and work from there.
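Even a simple guardrail is better than none. This sketch checks a draft response against a hypothetical policy before it is released: a blocked-terms list and a crude pattern standing in for a personal-data check. A real deployment would layer on more sophisticated classifiers and human review.

```python
import re

# Hypothetical policy: phrases that must never appear in customer-facing output
BLOCKED_TERMS = {"guaranteed return", "internal use only"}
# A crude stand-in for a PII check (UK-style phone numbers)
PHONE_PATTERN = re.compile(r"\b0\d{9,10}\b")

def apply_guardrails(response):
    """Return (approved, reasons); a real system might rewrite or escalate instead."""
    reasons = []
    lowered = response.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            reasons.append(f"blocked term: {term}")
    if PHONE_PATTERN.search(response):
        reasons.append("possible personal phone number")
    return (len(reasons) == 0, reasons)

ok, why = apply_guardrails("This plan offers a guaranteed return, call 01234567890.")
print(ok, why)  # False, with two reasons flagged
```

The flagged reasons are exactly the kind of feedback that can be reviewed against company guidelines and fed back to refine future answers.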
If a model is the brain, and the application is the vehicle, then agents are the specialist co-pilots. They’re a powerful evolution in AI application design, allowing you to create role-specific digital assistants that go beyond one-off answers.
Agents can hold a defined role and context, call the tools and data sources you expose to them, and carry out multi-step tasks rather than returning single answers.
In practice, this means you can build AI assistants that mirror the behaviour of expert employees – surfacing answers, completing tasks, and even escalating when human judgement or action is required.
Tip: Just as with any software development, start small. Choose a specific, high-value use case where a specialist assistant could save time or improve consistency, then build out more roles as the business gains confidence.
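As a sketch of that "start small" advice, here is a deliberately simple, rule-based stand-in for a support agent. It routes each request to one of two hypothetical tools: look up an order, or escalate to a human when the request falls outside its remit. A real agent would let the model choose the tool and extract the parameters itself.

```python
# Hypothetical tools exposed to a support-role agent
def look_up_order(order_id):
    return f"Order {order_id} shipped on Tuesday"

def escalate_to_human(summary):
    return f"ESCALATED: {summary}"

def support_agent(message):
    """Minimal routing sketch: answer what the agent can, escalate the rest."""
    if "order" in message.lower():
        # In a real agent, the model would extract the order id itself
        order_id = "".join(ch for ch in message if ch.isdigit())
        return look_up_order(order_id)
    # Anything outside the agent's remit goes to a person, not a guess
    return escalate_to_human(message)

print(support_agent("Where is order 1042?"))        # Order 1042 shipped on Tuesday
print(support_agent("I want to dispute a charge"))  # ESCALATED: ...
```

The escalation branch is the important design choice: the agent mirrors an expert employee precisely because it knows when to hand over to human judgement.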
Understanding these next four concepts – knowledge assets, business data, guardrails and agents – equips you to think far more strategically about how and where generative AI can add value to your organisation.
You don’t need to deploy everything at once, but the more you understand these building blocks, the more confidently you’ll be able to lead the conversation, bring others with you, and ensure AI delivers real-world results in your organisation.
In the meantime, if you’d like to get hands-on with AI in your organisation, our commercial AI workshops are a practical, collaborative way to identify real use cases – and unlock real value.