“Crawled – currently not indexed” means Google successfully fetched your URL, but chose not to store it in the search index. The usual drivers are low unique value, duplication/canonical conflicts, weak internal signals, soft-404 signals, or index bloat from too many low-value URLs.

What this status actually means in Google Search Console

Google is telling you two things:

  1. Crawled: Googlebot could access the page.
  2. Not indexed: Google’s indexing systems decided, “Not worth keeping right now.”

Google’s own guidance is blunt: this status doesn’t automatically mean there’s a technical problem, and even “good” pages may not be indexed.

Crawled vs Discovered: why the difference matters

Don’t treat these the same.

  • Discovered – currently not indexed: Google found the URL but hasn’t crawled it.
  • Crawled – currently not indexed: Google crawled it and still passed on indexing it.

That’s why “request indexing” alone rarely fixes this status. You’re not stuck at discovery. You’re failing the keep-or-drop decision.

Step 1: Decide if the page should be indexed (triage)

A lot of “crawled, not indexed” URLs are pages you should not want indexed.

Usually not worth indexing

  • Internal search result pages
  • Tag pages with little unique content
  • Parameter/facet URLs (filter/sort combos)
  • Paginated pages with no standalone value

If a page should not appear in search, use noindex (meta robots or X-Robots-Tag). Google documents both methods and how they behave.
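As a minimal sketch of the two documented noindex methods, here is a hypothetical checker (not a Google tool) that detects a noindex directive from either a `<meta name="robots">` tag or an `X-Robots-Tag` response header:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Detects <meta name="robots" content="...noindex..."> in an HTML document."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            if "noindex" in attrs.get("content", "").lower():
                self.noindex = True

def is_noindexed(html: str, headers: dict) -> bool:
    # Header form — X-Robots-Tag: noindex — works for non-HTML files (PDFs, images)
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Meta tag form — must appear in the page's <head>
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex

print(is_noindexed('<meta name="robots" content="noindex, follow">', {}))  # True
print(is_noindexed('<p>ok</p>', {"X-Robots-Tag": "noindex"}))              # True
print(is_noindexed('<p>ok</p>', {}))                                       # False
```

Either form works; the header is the only option for non-HTML resources.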

Important nuance: robots.txt is not noindex

Robots.txt controls crawling. It does not reliably prevent a URL from appearing in search. Google explicitly warns about this and recommends noindex or removal for blocking indexing.
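The distinction is easy to demonstrate with Python's standard-library robots.txt parser. A disallowed URL is merely uncrawlable; if it's linked from elsewhere, Google can still index it (URL-only), and Google can never see a noindex tag on a page it isn't allowed to fetch:

```python
from urllib.robotparser import RobotFileParser

# Parse a tiny robots.txt that blocks a directory from crawling.
rp = RobotFileParser()
rp.parse("User-agent: *\nDisallow: /private/".splitlines())

# Crawling is blocked — but this says nothing about indexing.
blocked = not rp.can_fetch("*", "https://example.com/private/page")
print(blocked)  # True

# Unblocked URLs are fetchable, so a noindex tag on them CAN be seen and honored.
print(rp.can_fetch("*", "https://example.com/public/page"))  # True
```

If you want a page out of the index, it must be crawlable so Google can read the noindex.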

Step 2: Confirm what Google sees (the 5-minute inspection)

Open GSC → URL Inspection for one affected URL and check:

  • Indexing allowed? (no accidental noindex)
  • User-declared canonical vs Google-selected canonical
  • Live test: does Google render real main content?
  • HTTP status: clean 200 for a page, clean 301 if redirecting
  • Last crawl date: did Google reprocess after your changes?

If Google-selected canonical differs, treat that as a primary clue, not a side note. Google says it may choose a different canonical for multiple reasons, including content quality.
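The five checks above can be folded into a simple triage routine. This is a hypothetical helper with illustrative field names — URL Inspection has no API shaped like this — but it captures the order in which the clues should be read:

```python
def next_action(inspection: dict) -> str:
    """Map URL Inspection findings to the most urgent next step."""
    if not inspection.get("indexing_allowed", True):
        return "Remove the accidental noindex, then re-inspect."
    if inspection.get("google_canonical") not in (None, inspection.get("declared_canonical")):
        return "Investigate duplication: Google chose a different canonical."
    if inspection.get("http_status") not in (200, 301):
        return "Fix the HTTP status before anything else."
    if not inspection.get("renders_main_content", True):
        return "Debug rendering: main content missing in the live test."
    return "Page passes the basics: focus on content value and internal links."

print(next_action({"indexing_allowed": False}))
print(next_action({"google_canonical": "https://a/", "declared_canonical": "https://b/"}))
print(next_action({"http_status": 200}))
```

The ordering matters: a noindex or a canonical mismatch makes the later checks moot.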

The real reasons this happens (and the fixes that work)

1) The page is thin, repetitive, or “not better than what’s already indexed”

This is the most common cause.

Google’s community guidance around this status points to perceived value: Google may crawl fine and still decide it’s not valuable enough to include.

Fix that works

Make the page harder to ignore. Add unique blocks that competitors and your own templates don’t have:

  • A short “How to diagnose” section with exact checks
  • A decision tree (what to do based on what GSC shows)
  • Real examples: “If you see X in URL Inspection, do Y”
  • A mini checklist at the end

AI Overviews also prefer tight, unambiguous blocks. That’s why “definition + top causes + steps” near the top helps.

2) Google thinks it’s a duplicate (and picks another canonical)

If your site has many similar URLs, Google may index one version and exclude the rest.

Google’s canonical troubleshooting doc is explicit: even if you declare a canonical, Google may pick a different one, and content quality can be a factor.

Common duplicate patterns

  • http vs https, www vs non-www
  • Trailing slash vs non-trailing slash
  • URL parameters (UTM, filter, sort)
  • Printer-friendly versions
  • Near-identical location pages (“city swap” pages)

Fix that works

  • Pick the one URL that should rank.
  • Make it the best version (most complete).
  • Ensure internal links point to that URL.
  • Remove conflicting canonicals and mixed signals.

If Google keeps selecting a different canonical, improve the preferred URL’s usefulness and consistency until it becomes the obvious choice.
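The duplicate patterns listed above (protocol, www, trailing slash, tracking parameters) can usually be collapsed to one canonical form. A sketch, assuming you've picked https + non-www + no trailing slash as your preferred shape:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Tracking parameters that never change page content and should be dropped.
TRACKING = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"}

def normalize(url: str) -> str:
    parts = urlsplit(url)
    scheme = "https"                                   # http -> https
    host = parts.netloc.lower().removeprefix("www.")   # www -> non-www (pick ONE form)
    path = parts.path.rstrip("/") or "/"               # drop trailing slash
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING])
    return urlunsplit((scheme, host, path, query, ""))

print(normalize("http://www.example.com/page/?utm_source=x"))
# https://example.com/page
```

Whatever form you choose, apply it everywhere: canonicals, internal links, sitemaps, and redirects should all agree.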

3) Weak internal linking (low importance signals)

A page can be crawlable and still look unimportant.

If a URL is:

  • orphaned (no internal links),
  • buried deep,
  • only present in a sitemap,

…Google may crawl it but decide not to store it.

Fix that works

Add contextual links from pages that already perform:

  • link from the most relevant guide
  • link from a hub/category page
  • and link from 2–3 related articles using descriptive anchor text

Then update the sitemap so it contains only pages you truly want indexed.
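Building the sitemap from an indexability flag (rather than dumping every URL) keeps it honest. A sketch with the standard library and a made-up page list:

```python
import xml.etree.ElementTree as ET

# Hypothetical page inventory — only indexable URLs belong in the sitemap.
pages = [
    {"url": "https://example.com/guide", "indexable": True},
    {"url": "https://example.com/tag/misc", "indexable": False},  # thin tag page
]

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)
for page in pages:
    if page["indexable"]:
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = page["url"]

xml = ET.tostring(urlset, encoding="unicode")
print(xml)
```

A sitemap full of noindexed or duplicate URLs sends Google mixed signals about what you actually value.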

4) Soft 404 signals (the page looks like “nothing here”)

A “soft 404” is when a page returns a normal 200 response but looks like a “not found” or empty page to Google.

Google explains this concept in Search Console help content and recommends returning true 404s for truly missing pages, or adding meaningful content so it’s not mistaken for a soft 404.

Fix that works

  • If the page is truly gone: return a real 404/410.
  • If it’s valid: add substance, alternatives, navigation, and clear intent.

This is huge for:

  • empty category pages,
  • “no results” pages,
  • out-of-stock product pages with no substitutes.
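You can screen your own templates for soft-404 risk before Google does. This is a rough heuristic sketch, not Google's classifier: a 200 response with almost no text, or with "nothing here" phrasing, is the risky combination:

```python
import re

# "Nothing here" phrases that make a 200 page look like a missing page.
PHRASES = ("not found", "no results", "0 results", "out of stock")

def looks_like_soft_404(status: int, html: str) -> bool:
    if status != 200:
        return False  # a real 404/410 is the correct signal, not a soft 404
    text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping
    return len(text.split()) < 50 or any(p in text for p in PHRASES)

print(looks_like_soft_404(200, "<h1>No results found</h1>"))  # True
print(looks_like_soft_404(404, "<h1>Not found</h1>"))         # False
```

Run something like this over empty categories and "no results" templates to find pages that need either real content or a real 404.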

5) Faceted navigation and parameter explosion (index bloat)

If your site generates endless filter/sort URLs, Google can spend crawl resources in the wrong places.

Google’s faceted navigation guidance warns about large combinations and recommends either preventing crawling of those URLs or following best practices if you actually want some indexed.

Fix that works (choose a strategy)

Strategy A: Don’t index facets

  • Prevent crawling of low-value facet URLs.
  • Noindex the ones that get discovered anyway.

Strategy B: Index a small, intentional set

  • Only allow a limited set of facet URLs that match real search demand.
  • Add unique content to those pages (not just filtered lists).
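Strategy B is easiest to enforce as an explicit allowlist of facet combinations that match real search demand. A sketch with hypothetical parameter names:

```python
from urllib.parse import urlsplit, parse_qsl

# Only these exact facet combinations earn an indexable URL; everything else
# gets noindex or is kept out of crawl paths. Parameter names are illustrative.
ALLOWED_FACETS = {
    frozenset(),                         # the unfiltered category page
    frozenset({("color", "black")}),     # demand exists for "black shoes"
    frozenset({("size", "xl")}),
}

def facet_indexable(url: str) -> bool:
    params = frozenset(parse_qsl(urlsplit(url).query))
    return params in ALLOWED_FACETS

print(facet_indexable("https://shop.example.com/shoes?color=black"))             # True
print(facet_indexable("https://shop.example.com/shoes?color=black&sort=price"))  # False
```

Using exact combinations (rather than per-parameter rules) is what stops the combinatorial explosion: adding any extra parameter immediately disqualifies the URL.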

6) Crawl budget and crawl prioritization (especially on large sites)

On bigger sites, crawl and indexing are constrained by prioritization.

Google’s crawl budget documentation explains that crawl budget matters mostly for very large or frequently updated sites and provides guidance to optimize crawling efficiency.

Fix that works

  • Reduce low-value URLs (filters, duplicates, thin tags).
  • Fix internal link structures so important pages are easier to reach.
  • Keep sitemaps clean and current.

Even on smaller sites, the same principles help. Less noise. More clarity.

7) JavaScript rendering issues (Google crawls, but content doesn’t show)

Sometimes Google can fetch the URL, but the main content isn’t reliably available at render time.

Google notes that dynamic rendering was a workaround and recommends server-side rendering, static rendering, or hydration as solutions for JS-heavy sites.

Fix that works

  • Confirm “View tested page” in URL Inspection shows your primary content.
  • Ensure critical text is present in the initial HTML when possible.
  • Avoid loading core content only after user interactions.

If Google can’t consistently render the content, indexing becomes less likely.
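A cheap pre-check for the second bullet above: confirm critical copy exists in the raw HTML your server returns, before any JavaScript runs. This sketch just does a string scan of the initial payload (fetch it however you like):

```python
def content_in_initial_html(html: str, critical_phrases: list[str]) -> list[str]:
    """Return the phrases MISSING from the server-rendered HTML."""
    lowered = html.lower()
    return [p for p in critical_phrases if p.lower() not in lowered]

# A typical client-side-rendered shell: an empty root div, no content.
initial_html = "<html><body><div id='root'></div></body></html>"
missing = content_in_initial_html(initial_html, ["Crawled – currently not indexed"])
print(missing)  # the phrase only appears after hydration — a rendering risk
```

If your headline or first paragraph shows up in this "missing" list, indexing depends entirely on Google's rendering pipeline, which is exactly the dependency you want to remove.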

A practical fix plan you can follow every time

Use this sequence. Don’t skip steps.

  1. Triage: should it be indexed? If no, noindex it.
  2. Inspect: check canonical choice and indexing allowed.
  3. Add uniqueness: improve main content so it’s clearly worth storing.
  4. Consolidate: merge/redirect duplicates and align internal links.
  5. Control facets: stop infinite URL growth.
  6. Request indexing once, after changes.

Conclusion

“Crawled – currently not indexed” is Google saying: we visited the page, but we’re not saving it for search right now. That’s usually a selection decision, not a single “broken” setting. Google’s own guidance notes there’s no magic bullet, and even functional pages can be skipped if Google doesn’t see enough value.

The most reliable path is to earn indexing by sending clearer signals:

  • Reduce index bloat (filters, thin archives, duplicates).
  • Improve uniqueness and usefulness (so the page is worth storing).
  • Fix canonical conflicts, so Google doesn’t pick a different URL.
  • Use “Request indexing” only after real improvements, because there’s a quota and repeated requests won’t speed things up.

If you do those four things consistently, you’ll see fewer “crawled, not indexed” URLs, stronger indexing rates on your best pages, and cleaner Search Console reports over time.

Key takeaways

  • Crawled ≠ indexed. Google fetched the URL and still decided not to keep it.
  • Triage first. If a URL shouldn’t rank (filters, internal search, thin tags), noindex it instead of fighting for indexing.
  • Make the page index-worthy. Add unique, hard-to-copy value (diagnosis steps, examples, decision rules). This is the #1 lever.
  • Canonical clarity matters. Even with a declared canonical, Google may choose a different canonical—often tied to signals like content quality and consistency.
  • Control faceted navigation. Infinite filter/sort URLs can drain crawl attention and suppress the indexing of important pages.
  • Request indexing is the final step, not the first. It’s quota-limited, and requesting repeatedly won’t get a URL crawled faster.

Reference Links:

  1. Google Search Console Help (Community Guide): “Seeing ‘Crawled – currently not indexed’ in Search Console?” (Google Help)
  2. Search Central: Ask Google to recrawl your URLs (URL Inspection “Request indexing,” quotas, best practices).
  3. Search Central: Block indexing with noindex (meta robots / headers guidance).
  4. Crawling Infrastructure: Managing crawling of faceted navigation URLs (prevent crawl traps / parameter control).
  5. Search Central: Canonicalization troubleshooting (Google-selected canonical vs user-declared). (Google for Developers)
  6. MDN Web Docs: X-Robots-Tag header (how crawlers interpret indexing directives in HTTP headers). (MDN Web Docs)
  7. Ahrefs: Explanation of “Crawled – currently not indexed” and practical fixes. (Ahrefs)
  8. SEOTesting: Additional causes + troubleshooting patterns for the status. (SEOTesting.com)