Live retrieval test against ChatGPT, Perplexity, Claude and Gemini — both with web search active and from training data alone. It tells you what each model knows about your book: title-search visibility, author recognition, listicle inclusion, structured data (Wikipedia / Wikidata / Goodreads) and Reddit citations. Multi-pass aggregation keeps the score stable run-to-run.
In 2026 readers don't just search Amazon and Google. They ask ChatGPT for a recommendation, browse Perplexity's lists, ask Claude what's worth reading. If those models don't know your book, you've already lost the discovery layer.
This audit runs the same queries a curious reader would type — "what's a good book on X?", "books like Y", "best UK self-published Z" — across the four major models, with and without web search active, in multiple passes. The score reflects how reliably your book surfaces. It does NOT score your Amazon ranking — that's the Advertising Readiness Score.
"Can ChatGPT, Perplexity, Claude and Gemini find my book?"
Your book title and author name (the tool checks both directly), plus your Amazon listing URL if you want the most thorough result. The multi-pass run takes about 90 seconds across the four models.
For each of ChatGPT, Perplexity, Claude and Gemini, we run:
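The exact per-model query set isn't reproduced here, but as an illustration, reader-style templates like the examples quoted earlier ("what's a good book on X?", "books like Y") could be instantiated per book like this — the template wording and function name are hypothetical:

```python
# Hypothetical query templates modelled on the examples quoted
# above; the audit's actual query set is not published.
TEMPLATES = [
    "what's a good book on {topic}?",
    "books like {comp_title}",
    "best UK self-published {genre}",
]

def build_queries(topic, comp_title, genre):
    """Fill each template with the book's topic, a comparable
    title, and its genre, yielding one query list that is then
    sent to every model in every pass."""
    values = {"topic": topic, "comp_title": comp_title, "genre": genre}
    return [t.format(**values) for t in TEMPLATES]
```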
ChatGPT alone runs ~3 billion searches per week as of late 2025. Perplexity is the default search engine for Comet browser users. Claude's web search launched in early 2025. These models are increasingly the front door to book discovery — especially for non-fiction and learning-oriented buyers. If you're invisible there, your funnel has a hole that no Amazon Ads spend can plug.
Each maps to a different stage of self-publishing on Amazon.
Will your manuscript pass KDP review? 30+ checks against Amazon's spec — margins, gutter, DPI, fonts, bleed, EPUB validity.
Run the score →
Will your Amazon listing convert? Cover at mobile thumbnail size, title block on search, blurb opener, review base.
Run the score →