Chapter 08 · Geographic Context

Geographic context changes which hotel wins

We picked a small family-run hotel east of Munich, wrote 20 travel queries that matched its strengths without naming it, and ran each query through ChatGPT 30 times in each of two conditions: plain, and with a tiny driving-times table prepended. Overall mention rates: basically identical (~20%). Discovery pathways: completely rewired. 1,198 trials.

Apr 14, 2026 · 8 min read · Huxo Research

Why this matters to you

Most AI-visibility advice assumes you control what travelers type into ChatGPT. You don’t. What you might control is the context that surrounds the query — driving times, distance tables, regional groupings, the kind of structured geographic information a destination page or a map widget can easily provide.

We wanted to know: does that kind of geographic scaffolding actually move the recommendation? Not in a “we tested one prompt and it sounded different” way — in a controlled, 1,198-trial, statistics-run way.

The answer is: mention rates are nearly unchanged, but how the hotel gets discovered changes almost entirely. That distinction matters.

Plain queries find the hotel via web search; with the prefix, the model infers it from the supplied context.

Key findings at a glance

01

Mention rate is statistically unchanged

Plain: 19.4% of trials mentioned the hotel. With driving-times prefix: 19.8%. Fisher exact p = 0.604. No significant overall lift.

02

Discovery pathway is radically different

In the plain condition the hotel appears in ChatGPT’s web search sources 16.6% of the time. With prefix, that drops to 0.3%. The model stops searching — and starts reasoning from the table.

03

Top Pick moves in opposite directions

Plain: 8.5% Top Pick. Prefixed: 2.8% Top Pick (−5.7pp). The prefix lifts mid-tier mentions (+3.5pp Recommended, +2.7pp In List) but shrinks outright wins.

04

Web search is a near-binary signal

When the hotel appears in ChatGPT’s search results, it’s mentioned 99% of the time — 47.5% as Top Pick. When it doesn’t, it’s mentioned 3.6% of the time. Being ‘in search’ is the whole ball game on plain queries.


What this means for your hotel

Three takeaways.

First, context engineering isn’t a free lunch. Prepending a structured driving-times table didn’t lift mentions overall. It shifted the distribution — more “in a list of options,” fewer “the best choice.” For a hotel trying to win the recommendation, that’s a worse outcome. For a hotel trying to get in front of travelers at all, it’s roughly a wash.

Second, being crawlable matters far more than being clever. The 99% mention rate when the hotel is found in ChatGPT’s web search — and 3.6% when it isn’t — is the single most important statistic in this study. Nothing else we tested came close to that leverage. If ChatGPT can find you with a web search, you’re in. If it can’t, you’re almost certainly out.

Third, geographic metadata shifts discovery pathways. Not in a “now you’re the Top Pick” way, but in a “now you’re in the consideration set without needing to be crawled” way. For small hotels that don’t yet have strong web presence, that’s genuinely useful — just don’t expect it to also make you the #1 pick.


What to do about it

1. Get crawlable, fast.

Our most important finding is the most actionable one: being indexed in ChatGPT’s web search layer lifts mention rate from 3.6% to 99%. If your hotel isn’t represented by a discoverable English-language page (not just a Facebook page, not just an OTA profile), that’s the single biggest lever.

2. Help destination pages publish structured geographic data.

Destination marketing organizations, regional tourism boards, and map-based travel sites increasingly publish structured “towns near X” tables. When an LLM has that context (even if the traveler didn’t provide it), it stops relying on web search and starts reasoning from the geography. Being cleanly listed in those tables is adjacent to being recommended.

3. Don’t chase ‘Top Pick’ — chase ‘In the competitive set.’

Our prefix experiment moved Top Pick down by 5.7pp but Recommended up by 3.5pp. The discovery-pathway shift broadens the consideration set more than it sharpens the winner pick. That’s consistent with Chapter 06’s shortlist finding: LLMs are reliable about the top-3, not about the winner. Aim to be in the top-3 consistently.


The evidence

Finding 1 — The case study

We picked Landhotel Hallnberg — a small family-run hotel in Walpertskirchen, about 13 minutes from Therme Erding and 21 minutes from Munich Airport — and wrote 20 queries that matched its strengths (thermal spa proximity, airport accessibility, rural/quiet, good on-site restaurant, free parking) without naming the hotel. Each query ran 30 times plain and 30 times with a driving-times table prepended.

Before (plain query)

Query: “We’re 3 friends flying into Munich airport and want to combine a few days of relaxation at a thermal spa with exploring the Bavarian countryside… Budget-friendly. Can you find us a hotel?”

ChatGPT recommended: Hotel Nummerhof — Erding. Also listed: Best Western Erding, Hotel Victory Therme, Mövenpick Airport. Landhotel Hallnberg: not mentioned.

After (same query + driving-times prefix)

Prefix: a small markdown table listing towns east of Munich with their driving times to Therme Erding and Munich Airport (Walpertskirchen appears with 13 min / 21 min).

ChatGPT recommended: Landhotel Hallnberg — Walpertskirchen (#1 Top Pick). Also listed: Gasthof Daimerwirt, Gästehaus Zehmerhof, Hotel Kandler.

On a single query, the prefix flipped the hotel from “not mentioned” to “Top Pick.” That’s the dramatic version. Here’s what happens at scale.

Finding 2 — At scale, overall mention rate barely moves

Classification of 1,198 ChatGPT responses — plain (n=598) vs. with driving-times prefix (n=600)

| Classification | Plain | With prefix | Δ |
|---|---|---|---|
| Not mentioned | 80.6% | 80.2% | −0.4pp |
| In list (named as an option) | 3.3% | 6.0% | +2.7pp |
| Recommended (not #1) | 7.5% | 11.0% | +3.5pp |
| Top Pick (#1) | 8.5% | 2.8% | −5.7pp |
| Any mention | 19.4% | 19.8% | +0.4pp |

Statistical tests — neither reaches significance

| Test | Metric | Plain | Prefix | p-value | Result |
|---|---|---|---|---|---|
| Fisher exact | Mention rate | 19.4% | 19.8% | 0.604 | Not significant |
| Mann-Whitney U | Prominence score | 0.440 | 0.365 | 0.634 | Not significant |

The prefix is not a visibility multiplier by any classical test. That’s the honest headline. But the classification breakdown hints at something more interesting: the discovery pathway shifted.
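The prominence scores reported in the Mann-Whitney row can be reproduced directly from the published classification percentages. A minimal check (the small residual vs. the reported 0.440/0.365 comes from the percentages being rounded):

```python
# Ordinal prominence score, as defined in the methods section:
# Not Mentioned=0, In List=1, Recommended=2, Top Pick=3.
SCORES = {"not_mentioned": 0, "in_list": 1, "recommended": 2, "top_pick": 3}

def mean_prominence(dist_pct):
    """Expected prominence score from a percentage distribution over buckets."""
    return sum(SCORES[k] * v / 100 for k, v in dist_pct.items())

# Percentages from the classification table above.
plain  = {"not_mentioned": 80.6, "in_list": 3.3, "recommended": 7.5,  "top_pick": 8.5}
prefix = {"not_mentioned": 80.2, "in_list": 6.0, "recommended": 11.0, "top_pick": 2.8}

print(round(mean_prominence(plain), 3))   # ≈ 0.438 (reported: 0.440)
print(round(mean_prominence(prefix), 3))  # ≈ 0.364 (reported: 0.365)
```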

Finding 3 — The discovery pathway is almost completely rewired

Where the hotel appears at each stage of ChatGPT’s response — plain condition vs. with-prefix condition

| Stage | Plain | With prefix |
|---|---|---|
| In search sources | 16.6% | 0.3% |
| In links attached | 16.4% | 0.3% |
| In citations | 3.8% | 0.0% |
| Mentioned in text | 19.4% | 19.8% |
| Recommended+ | 16.1% | 13.8% |
| Top Pick | 8.5% | 2.8% |

Two completely different mechanisms that happen to land at roughly the same mention rate. Plain: web search → citations → text. Prefix: table reasoning → text (web search effectively skipped).

Finding 4 — When found in search, the hotel wins almost always

Plain condition only — conditional on whether the hotel appeared in ChatGPT’s web search results

| Outcome | Found in search | Not found in search |
|---|---|---|
| Not mentioned | 1.0% | 96.4% |
| In list | 12.1% | 1.6% |
| Recommended | 39.4% | 1.2% |
| Top Pick | 47.5% | 0.8% |
| Any mention | 99.0% | 3.6% |

99%

Mention rate when the hotel appears in ChatGPT’s web search results (99%) vs. when it doesn’t (3.6%). The single most consequential signal we measured in any chapter.

That gap — from 99% to 3.6% — dwarfs every other lever we tested across this entire research series. Position doesn’t matter this much. Brand presence doesn’t matter this much. Being crawlable and appearing in ChatGPT’s web search layer matters this much.


Frequently asked questions

So should I prepend driving times to my own website copy?

That’s not quite the right lesson. The prefix worked in our experiment because we controlled the user prompt — we put the table in front of the query. You can’t do that to a random traveler’s ChatGPT session. What you *can* do is make sure regional/DMO pages include structured ‘nearby towns’ data, and make sure your own page ranks in web search for the queries your guests actually type. Most of the lift our prefix produced is mechanically replaceable by ‘be in the search results.’

Why did Top Pick decrease when the prefix was added?

Our read: the table broadens the consideration set. The model sees more named towns, each with plausible candidates, so it’s less likely to commit to a single winner and more likely to hedge with ‘consider X, Y, or Z.’ This is consistent with the shortlist behavior we documented in Chapter 06: LLMs are better at identifying the competitive set than at picking a single winner.

Is this a one-off case study or a generalizable finding?

Mixed. The 1,198-trial statistical result (mention rate unchanged, discovery pathway rewired) is solid for this specific hotel and this specific kind of prefix. Whether every small hotel benefits the same way depends on whether their geography fits a neat structured table. A hotel in a dense urban comp set may not have a driving-times scaffolding that helps it emerge.

What about Gemini or Claude? Do they respond to the same prefix?

This chapter focuses on ChatGPT only. Chapter 07’s cross-agent data suggests Gemini (which actually uses web search 15% of the time) and the non-tool-using Claude/GPT agents would behave differently, but we didn’t replicate this specific experiment across them. That would be a good follow-up study.

What’s the single most useful thing a small hotel can take from this?

Be in ChatGPT’s web search results. The gap between ‘found in search’ (99% mention, 47.5% Top Pick) and ‘not found in search’ (3.6% mention) is the largest single effect in our entire research series. No prompt trick, no prefix, no branding campaign competes with being crawlable and indexed.


How we ran the experiment

1,198

Total trials

20

Queries

30

Reps per condition per query

2

Conditions (plain / prefix)

Single-hotel case study on Landhotel Hallnberg (Walpertskirchen, Germany). 20 queries hand-written to match the hotel’s strengths (thermal spa, airport proximity, rural/quiet, good on-site restaurant, free parking) without naming the hotel. Each query was sent to ChatGPT 30 times plain and 30 times with a small driving-times markdown table prepended as a prefix. Responses were classified into 4 buckets: Not Mentioned / In List / Recommended / Top Pick.

Prefix content: a markdown table listing towns east of Munich with driving times to Therme Erding and Munich Airport. Walpertskirchen (the hotel’s town) appears in the table alongside ~7 neighbouring towns. No hotel names in the prefix.
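To make the prefix concrete, here is an illustrative reconstruction. Only the Walpertskirchen row (13 min / 21 min) is stated in this chapter; the other town names and all other times are placeholders, not the study’s actual table:

```python
# Illustrative driving-times prefix. Only the Walpertskirchen row (13 / 21)
# is given in the chapter; the other rows are placeholders for the
# ~7 neighbouring towns the real table contained.
rows = [
    ("Walpertskirchen", 13, 21),  # the hotel's town (times from the chapter)
    ("Hörlkofen",       15, 18),  # placeholder times
    ("Forstern",        17, 25),  # placeholder times
]

prefix = "| Town | Therme Erding (min) | Munich Airport (min) |\n"
prefix += "|---|---|---|\n"
prefix += "\n".join(f"| {town} | {spa} | {airport} |" for town, spa, airport in rows)

print(prefix)
```

In the experiment a table like this was simply prepended to the user query, with no hotel names anywhere in it.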

Classification. Each response coded by presence of the hotel name and prominence: Not Mentioned (name absent), In List (listed among options, no emphasis), Recommended (named and endorsed but not #1), Top Pick (named as #1 recommendation). Plus discovery-pathway flags: appears in search sources / links attached / citations / text body.
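The study’s actual coder is not published, so as a sketch only, the 4-bucket scheme could be approximated with heuristics like these (the “first numbered option = Top Pick” and endorsement-keyword rules are our assumptions, not the authors’ method):

```python
import re

def classify(response_text: str, hotel: str = "Landhotel Hallnberg") -> str:
    """Illustrative 4-bucket coder: Not Mentioned / In List / Recommended / Top Pick.
    Heuristics are assumptions, not the study's published classifier."""
    if hotel.lower() not in response_text.lower():
        return "Not Mentioned"
    # If the response is a numbered list, the hotel's rank decides the bucket.
    blocks = re.split(r"(?m)^\s*\d+[.)]\s", response_text)[1:]
    for rank, block in enumerate(blocks, start=1):
        if hotel.lower() in block.lower():
            return "Top Pick" if rank == 1 else "Recommended"
    # Named but unranked: endorsement language nearby upgrades it to Recommended.
    if re.search(r"(recommend|best|great choice)[^.]*" + re.escape(hotel),
                 response_text, re.IGNORECASE):
        return "Recommended"
    return "In List"

print(classify("1. Landhotel Hallnberg – quiet, near the airport\n2. Hotel Kandler"))
# → Top Pick
```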

Statistics. Fisher exact test on overall mention rate. Mann-Whitney U on an ordinal prominence score (Not Mentioned=0, In List=1, Recommended=2, Top Pick=3). Conditional analysis: mention rate given “found in search sources” vs. not.
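The Fisher exact test on mention rate can be reproduced from the counts, implemented here from scratch with the hypergeometric distribution. The cell counts below are reconstructed from the rounded percentages (19.4% of 598 ≈ 116 mentions plain; 19.8% of 600 ≈ 119 with prefix), so the p-value will not match the reported 0.604 exactly, but it lands well above 0.05 either way:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]].
    Sums hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_table(x):
        # Probability of x in the top-left cell given fixed margins.
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_table(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # The (1 + 1e-9) factor guards against floating-point ties.
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs * (1 + 1e-9))

# Counts reconstructed from rounded percentages: not the exact raw data.
p = fisher_exact_two_sided(116, 482, 119, 481)
print(f"p = {p:.3f}")  # not significant at any conventional level
```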

Limitations. Single hotel, single region, ChatGPT only — not a cross-model or cross-property replication. The 20-query bank is hand-written. Generalizing to other hotels with different geography or different competitive sets should be done with care. Data is a snapshot of ChatGPT behavior as of April 2026.


Is your hotel in ChatGPT’s search results — or invisible?

Huxo’s AI Visibility Report checks whether AI pipelines can actually find you, across the queries that match your property. If the answer is no, we tell you why and what to fix first.
