Ask ChatGPT “who are the best management consultancies for digital transformation in the UK?” and you’ll get an answer. The question is whether your firm is in it.
The shift that most firms are ignoring
50% of B2B buyers now begin their research with an AI chatbot rather than a Google search. That number was 33% six months ago. The direction is clear.
When a Managing Partner at a mid-market firm asks Perplexity “which restructuring advisory firms have the best track record in UK mid-market?” — the firms that appear in that answer are the ones that get the call. The firms that don’t appear might as well not exist.
This isn’t a tech problem. It’s a business development problem hiding behind a technical layer.
How AI models decide who to recommend
AI models don’t have “rankings” like Google. They synthesise answers from their training data and, increasingly, from real-time web crawling. The firms they cite are the ones whose content is:
1. Structured clearly — bullet points, numbered lists, headers that match buyer questions
2. Directly answering queries — “What does our firm specialise in?” not “Welcome to our website”
3. Technically accessible — not locked behind login walls, PDF-only, or heavy JavaScript
4. Schema-marked — JSON-LD markup that tells AI models what your service is, where you operate, and what you charge
5. Authoritative — cited by other sources, mentioned in industry publications, linked from relevant contexts
Most professional services firms fail on points 1-4 before authority even becomes a factor.
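You can get a rough first read on how a page fares against the first four criteria with a few lines of Python. This is a heuristic sketch over raw HTML, not a substitute for a proper audit; criterion 3 (accessibility) ultimately needs crawling, and the sample page fragment below is invented for illustration:

```python
import re

def check_ai_readability(html: str) -> dict:
    """Heuristic checks for structure, Q&A framing, and schema markup.

    Note: true technical accessibility (no login walls, no JS-only
    rendering) needs a crawl, so it is not checked here.
    """
    return {
        # Structured clearly: headings and lists present
        "has_headings": bool(re.search(r"<h[1-3][\s>]", html, re.I)),
        "has_lists": bool(re.search(r"<(ul|ol)[\s>]", html, re.I)),
        # Directly answering queries: a heading phrased as a question
        "has_question_heading": bool(
            re.search(r"<h[1-3][^>]*>[^<]*\?", html, re.I)
        ),
        # Schema-marked: a JSON-LD script block is present
        "has_json_ld": bool(
            re.search(r'<script[^>]+application/ld\+json', html, re.I)
        ),
    }

# Hypothetical service-page fragment, for illustration only
sample = (
    "<html><h2>How do we help with operational restructuring?</h2>"
    "<ul><li>UK mid-market, £10M-£100M revenue</li></ul>"
    '<script type="application/ld+json">{}</script></html>'
)
print(check_ai_readability(sample))
```

A page that fails all four checks is effectively invisible to a model synthesising an answer, regardless of how strong the firm's reputation is offline.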
What a typical professional services website looks like to AI
Run the test yourself. Ask ChatGPT about a service your firm provides. Ask Claude the same question. Ask Perplexity.
If your firm doesn’t appear, here’s probably why:
Your homepage is a marketing statement, not an answer. “We are a leading advisory firm…” tells an AI model nothing useful. AI models reward content that answers specific questions.
Your service descriptions are vague. “We help organisations navigate complexity” is meaningless to a language model trying to match a buyer query. “We advise UK mid-market companies on operational restructuring, typically £10M-£100M revenue” is matchable.
Your content is gated. If your best thinking is locked in PDFs, behind email gates, or in client portals, AI models cannot read it. They will cite the competitor whose content is on the open web.
You have no schema markup. Schema markup (JSON-LD) is machine-readable metadata that tells AI models what your content means. Without it, models have to guess. With it, they know your service type, location, pricing, and expertise areas.
What to do about it
The fixes are not complicated. They are specific, technical, and implementable within weeks:
Restructure your key pages around buyer questions. Change “Our Restructuring Practice” to “How We Help UK Companies Through Operational Restructuring.” Add an FAQ section to every service page.
Add JSON-LD schema markup. At minimum: Organization, LocalBusiness, Service, and FAQPage schemas on relevant pages. This is a one-time technical change.
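In practice this is a small script block in each page's `<head>`. A sketch for a hypothetical advisory firm follows; every name, URL, and detail is a placeholder to adapt, and `ProfessionalService` and `FAQPage` are standard schema.org types:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ProfessionalService",
  "name": "Example Advisory LLP",
  "url": "https://www.example-advisory.co.uk",
  "areaServed": "GB",
  "serviceType": "Operational restructuring advisory",
  "description": "Operational restructuring advice for UK mid-market companies, typically £10M-£100M revenue."
}
</script>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What should I look for in a restructuring advisor?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Sector track record, mid-market experience, and transparent fees."
    }
  }]
}
</script>
```

Each service page gets its own `ProfessionalService` or `Service` block; the `FAQPage` block mirrors the visible FAQ section on the same page.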
Create an llms.txt file. A plain text file at your website root that summarises your firm, services, and contact details in a format AI can parse. Think of it as robots.txt for language models.
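As an illustration, a minimal llms.txt for a hypothetical firm might look like the sketch below. The emerging llms.txt convention uses markdown-style structure (a title, a short blockquote summary, then sections of links); all details here are placeholders:

```text
# Example Advisory LLP

> UK mid-market restructuring advisory firm. We advise companies
> of £10M-£100M revenue on operational and financial restructuring.

## Services
- [Operational Restructuring](https://www.example-advisory.co.uk/services/operational): turnaround planning and delivery
- [Debt Advisory](https://www.example-advisory.co.uk/services/debt): refinancing and lender negotiation

## Contact
- Email: enquiries@example-advisory.co.uk
```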
Open your best content. Move at least your top 5-10 thought leadership pieces from PDF to web pages. Let AI models read them.
Publish content that matches buyer queries. Not “thought leadership” that talks about industry trends. Content that directly answers the questions buyers ask: “What should I look for in a restructuring advisor?” “How do I evaluate advisory fees?” “What’s the difference between strategic and operational restructuring?”
The Resend proof point
This isn’t theoretical. Resend, a YC-backed email API startup, noticed ChatGPT had become their third-largest customer acquisition channel — with zero paid spend. They leaned into it: question-framed documentation, structured bullet answers, an llms.txt file. The result: Resend is now the default AI recommendation for “how do I send transactional emails?” across every major model. SendGrid, with 10,000 employees, is losing ground because their documentation can’t be parsed by agents.
The same dynamic applies to professional services. The firms that structure their content for AI will win the consideration set. The firms that don’t will wonder why the phone stopped ringing.
What an audit reveals
An AI search visibility audit tests your firm across 15-20 buyer queries on ChatGPT, Claude, Gemini, and Perplexity. It benchmarks you against 3 competitors. It checks your schema markup, your content structure, your technical accessibility. And it gives you a prioritised list of fixes — quick wins you can implement this week, and a 90-day roadmap for the rest.
The result is a 15-20 page report. Not a slide deck. Not a proposal for ongoing work. A concrete, research-led assessment of where you stand — delivered in 5 business days.
Questions AI assistants answer about this topic
- How does ChatGPT decide which professional services firms to recommend?
  ChatGPT recommends firms based on the quality, structure, and accessibility of their online content. Firms with clear service descriptions, structured documentation, FAQ-style content, and schema markup are more likely to be cited. It prioritises content that directly answers buyer questions over generic marketing copy.
- Can a professional services firm improve its visibility in AI model answers?
  Yes. The main levers are: restructuring website content into question-and-answer format, adding JSON-LD schema markup, creating an llms.txt file, ensuring content is not gated behind logins or PDFs, and publishing content that directly addresses the queries buyers ask AI models.
- What is llms.txt and does my firm need one?
  llms.txt is a plain text file placed at your website root (like robots.txt) that summarises who you are, what you do, and how to engage you in a format AI models can easily parse. It is increasingly used by AI crawlers and agents to understand a company's offering. Any firm wanting to be cited by AI should have one.
- Do AI search visibility audits work for non-tech companies?
  Yes. While the earliest examples (Resend, Supabase) are tech companies, the same principles apply to any firm whose buyers research options using AI. Professional services firms — consultancies, advisory firms, accounting practices — are particularly exposed because their buyers are early adopters of AI-assisted research.
Next Step
Want to know where your firm stands?
We run 15-20 buyer queries across ChatGPT, Claude, Gemini, and Perplexity and show you exactly where you appear — and where you don’t.
Get the Audit — from £750 ↗