
ChatGPT missed my renovation company for 2 years. Here's what fixed it.

In February 2024 I typed "best kitchen renovation company in Markham" into ChatGPT. It named three of my competitors. It did not name Yellow Pencil, which had 11 active projects and a 4.9-star Google profile at the time.

That stung. Not because ChatGPT was wrong — the competitors it named are real businesses doing real work — but because I couldn't figure out the rule. Our Google ranking on the same query was #4. Our reviews were better than two of the three. Our site had been up since 2019. What was the model looking at that didn't include us?

This post is what I eventually figured out and how I fixed it. It took 90 days. The fixes were smaller than I expected. The fact that I had to figure them out with nobody telling me they mattered is why I ended up building RankingLocal.ai.

What I was wrong about

My assumption for the first month was that AI engines are basically better Google — if your Google ranking is good, AI citations follow. That's half right and mostly misleading.

The right framing, which took me embarrassingly long to land on, is: AI engines read pages. They don't rank links.

When ChatGPT answers "best kitchen renovation in Markham," it isn't pulling a top-10 list and picking three. It's assembling an answer from chunks of text it can parse, attribute, and trust. A page that ranks #4 on Google but reads like a JavaScript-rendered marketing slide deck to the scraper is invisible to that process. Yellow Pencil's homepage was exactly that — a beautiful interactive slideshow that didn't emit much meaningful HTML before JS ran.
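To make that concrete, here's a hypothetical sketch of what a non-executing crawler sees on a JS-only page versus a server-rendered one. The markup is illustrative, not Yellow Pencil's actual source:

```html
<!-- What the crawler fetches from a JS-rendered page: an empty shell. -->
<!DOCTYPE html>
<html>
  <head><title>Yellow Pencil</title></head>
  <body>
    <div id="root"></div>  <!-- all content arrives later, via JS the crawler never runs -->
    <script src="/bundle.js"></script>
  </body>
</html>

<!-- The same page server-rendered: the text exists before any JS runs. -->
<!DOCTYPE html>
<html>
  <head><title>Yellow Pencil | Kitchen Renovation in Markham</title></head>
  <body>
    <h1>Kitchen renovation in Markham, ON</h1>
    <p>Full-service kitchen, bathroom, and basement renovations
       serving Markham, Richmond Hill, and Thornhill.</p>
  </body>
</html>
```

The first version gives a scraper nothing to cite. The second gives it a sentence it can attribute to you.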

Once I understood that, the fixes became obvious. They also became small.

Fix 1: LocalBusiness schema on the homepage

We had no structured data. Zero. Our homepage HTML said "Yellow Pencil — we build kitchens" in a hero banner, and that was pretty much it for an AI crawler. The first fix was a basic LocalBusiness JSON-LD block in the <head>:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Yellow Pencil",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "…",
    "addressLocality": "Markham",
    "addressRegion": "ON",
    "postalCode": "L3R …"
  },
  "telephone": "+1-416-…",
  "url": "https://yellowpencil.co",
  "priceRange": "$$$",
  "image": "https://yellowpencil.co/og.jpg",
  "areaServed": ["Markham", "Richmond Hill", "Scarborough", "Thornhill"],
  "sameAs": ["https://instagram.com/yellowpencilco", "…"]
}
</script>

That's it. No rewrites, no redesign, no content change. 15 minutes of work.

Two weeks later, Perplexity started citing us for queries like "kitchen contractor Markham." Not ChatGPT yet — Perplexity crawls more often — but the signal was there.

What it looks like from the outside

Before: "Yellow Pencil" appears as a business name in search engines, but without structured data the entity is ambiguous. After: the page tells crawlers exactly what kind of business this is, where it serves, and what it costs.

Fix 2: FAQPage schema on the service pages

The second fix was even smaller. On each service page (kitchens, bathrooms, basements, additions) I added a FAQPage JSON-LD block with three questions the customer actually asks:

  1. "How long does a kitchen renovation take?"
  2. "Do you handle permits in Markham?"
  3. "What does a full kitchen renovation cost in the GTA?"

Answers were 30-60 words each. Same text already appeared in the page body; the schema block just made it machine-addressable.
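For reference, the block looked roughly like this. Answer text is elided here; the real answers were the 30-60 word versions already in the page body, and the structure is what matters:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does a kitchen renovation take?",
      "acceptedAnswer": { "@type": "Answer", "text": "…" }
    },
    {
      "@type": "Question",
      "name": "Do you handle permits in Markham?",
      "acceptedAnswer": { "@type": "Answer", "text": "…" }
    },
    {
      "@type": "Question",
      "name": "What does a full kitchen renovation cost in the GTA?",
      "acceptedAnswer": { "@type": "Answer", "text": "…" }
    }
  ]
}
</script>
```

One block per service page, with the questions and answers matching what's visibly on that page.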

This is the fix I saw move the needle most on ChatGPT specifically. FAQPage is one of the formats ChatGPT's training and retrieval pipelines treat as high-trust — the Q&A shape is already their output shape, so the model will happily reproduce it almost verbatim.

Fix 3: A Reddit answer in r/askTO

This one surprised me the most. I went on r/askTO (78K members), found three threads where people asked for kitchen renovation recommendations in Markham/Scarborough, and wrote substantive answers with specifics — rough cost per linear foot, which permits require a structural engineer, when to avoid a GC and hire direct trades.

I linked to our site once in each answer, in context. I didn't mention it was my company. Top of each thread a week later.

Six weeks after that, "yellow pencil markham" — my branded query — started appearing in ChatGPT with Reddit as the citation source. Then broader queries followed. The Reddit content became a secondary landing the model trusted more than my own homepage, which was the unlock.

The lesson I took: ChatGPT weighs "this site vouches for this business" above "this business vouches for itself." That's obvious in retrospect and not obvious from inside the marketing department.

What the score did

I didn't have a scoring system when I started. When I later built RankingLocal.ai and backfilled historical data on Yellow Pencil, the GEO Score timeline came out like this:

The three fixes together took maybe 6 hours of work. The delay between each fix and the model picking it up was 2-6 weeks — a product of crawl cadence, not effort.

What this doesn't mean

I want to be careful about what I'm not saying.

I'm not saying these three fixes are what every local business needs. They're what Yellow Pencil needed, which was a specific set of gaps. A dental clinic might be fine on schema but missing from Foursquare. A law firm might have schema but a Cloudflare policy that blocks AI crawlers. The method is universal; the recipe is site-specific.
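On that last gap: whether AI crawlers can reach you at all is often decided in robots.txt (and in WAF rules robots.txt can't show you). A robots.txt that explicitly allows the major AI crawlers looks like this; the user-agent names below are the ones each vendor publishes, though the list shifts over time:

```
# Allow the major AI crawlers explicitly.
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /
```

Note that a firewall or bot-protection rule can still block these crawlers even when robots.txt allows them, so check both layers.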

I'm not saying this is a substitute for a working business. If your reviews are bad and your service is bad, no amount of schema will get ChatGPT to recommend you. The model rewards consensus, and consensus is what your customers already say about you.

I'm not saying any of this is permanent. The models change every few months. What worked for me in Q2 2026 may need to be re-run in Q4. That's why the RankingLocal.ai product is built around weekly re-scoring instead of a one-time audit.

What to do if you're reading this

Run your own site through the free AI Visibility Checker. It'll tell you where your structural gaps are — not from guessing, from actually asking ChatGPT, Perplexity, Google AI, and four more engines whether they cite you for your core queries. Takes about 60 seconds.

If the number is under 40, start with LocalBusiness schema. If it's between 40 and 70, the FAQPage fix on service pages almost always moves it. Above 70, the work shifts to off-page signals — which is a harder post I haven't written yet.

Questions, corrections, things that didn't work for you — hello@rankinglocal.ai is read by me directly.