The 4 stages of search all SEOs need to know

“What’s the difference between crawling, rendering, indexing and ranking?”

Lily Ray recently shared that she asks this question of prospective employees when hiring for the Amsive Digital SEO team. Google’s Danny Sullivan thinks it’s a wonderful one.

As foundational as it might seem, it isn’t uncommon for some practitioners to confuse the basic stages of search and conflate the process entirely.

In this article, we’ll get a refresher on how search engines work and go over each stage of the process.

Why knowing the difference matters

I recently worked as an expert witness on a trademark infringement case where the opposing witness got the stages of search wrong.

Two small companies each declared they had the right to use similar brand names.

The opposing party’s “expert” erroneously concluded that my client performed improper or hostile SEO to outrank the plaintiff’s website.

He also made several significant errors in describing Google’s processes in his expert report, where he asserted that:

  • Indexing was web crawling.
  • The search bots would instruct the search engine how to rank pages in search results.
  • The search bots could be “trained” to index pages for certain keywords.

An important defense in litigation is to attempt to exclude a testifying expert’s findings – which can happen if one can demonstrate to the court that they lack the basic qualifications necessary to be taken seriously.

As their expert was clearly not qualified to testify on SEO matters at all, I presented his inaccurate descriptions of Google’s process as evidence supporting the contention that he lacked proper qualifications.

This might sound harsh, but this unqualified expert made many elementary and glaring errors in presenting information to the court. He falsely presented my client as somehow conducting unfair trade practices via SEO, while ignoring questionable behavior on the part of the plaintiff (who was blatantly using black hat SEO, while my client was not).

The opposing expert in my legal case is not alone in this misapprehension of the stages of search used by the major search engines.

There are prominent search marketers who have likewise conflated the stages of search engine processes, leading to incorrect diagnoses of underperformance in the SERPs.

I’ve heard some state, “I think Google has penalized us, so we can’t be in search results!” – when in fact they had missed a key setting on their web servers that made their site content inaccessible to Google.

Automated penalizations might be categorized as part of the ranking stage. In reality, those websites had issues in the crawling and rendering stages that made indexing and ranking problematic.

When there are no notifications in Google Search Console of a manual action, one should first focus on common issues in each of the four stages that determine how search works.

It’s not just semantics

Not everyone agreed with Ray and Sullivan’s emphasis on the importance of understanding the differences between crawling, rendering, indexing and ranking.

I noticed some practitioners consider such concerns to be mere semantics or needless “gatekeeping” by elitist SEOs.

To a degree, some SEO veterans may indeed have very loosely conflated the meanings of these terms. That can happen in all disciplines when those steeped in the knowledge are bandying jargon around with a shared understanding of what they’re referring to. There is nothing inherently wrong with that.

We also tend to anthropomorphize search engines and their processes because interpreting things by describing them as having familiar characteristics makes comprehension easier. There is nothing wrong with that either.

But this imprecision when talking about technical processes can be confusing and makes it more challenging for those trying to learn about the discipline of SEO.

One can use the terms casually and imprecisely only to a degree, or as shorthand in conversation. That said, it’s always best to know and understand the precise definitions of the stages of search engine technology.

Many different processes are involved in bringing the web’s content into your search results. In some ways, it can be a gross oversimplification to say there are only a handful of discrete stages that make it happen.

Each of the four stages I cover here has multiple subprocesses that can occur within them.

Even beyond that, there are significant processes that can be asynchronous to these, such as:

  • Types of spam policing.
  • Incorporation of elements into the Knowledge Graph and updating of knowledge panels with the information.
  • Processing of optical character recognition in images.
  • Audio-to-text processing in audio and video files.
  • Assessing and application of PageSpeed data.
  • And more.

What follows are the primary stages of search required for getting webpages to appear in the search results.


Crawling

Crawling happens when a search engine requests webpages from websites’ servers.

Imagine that Google and Microsoft Bing are sitting at a computer, typing in or clicking on a link to a webpage in their browser window.

Thus, the search engines’ machines visit webpages similarly to how you do. Each time the search engine visits a webpage, it collects a copy of that page and notes all the links found on it. After the search engine collects that webpage, it will visit the next link in its list of links yet to be visited.

This is referred to as “crawling” or “spidering,” which is apt since the web is metaphorically a giant, virtual web of interconnected links.

The data-gathering programs used by search engines are called “spiders,” “bots” or “crawlers.”
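The crawl loop described above – fetch a page, store a copy, collect its links, then visit the next unvisited link – can be sketched in a few lines of Python. The miniature “web” below is a made-up dictionary standing in for real HTTP fetches:

```python
from collections import deque
from html.parser import HTMLParser

# A toy "web": URL -> HTML body. A real crawler would issue HTTP
# requests; the structure of the loop is the same.
PAGES = {
    "/": '<a href="/about">About</a> <a href="/blog">Blog</a>',
    "/about": '<a href="/">Home</a>',
    "/blog": '<a href="/blog/post-1">Post 1</a>',
    "/blog/post-1": '<a href="/">Home</a>',
}

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

def crawl(seed):
    """Breadth-first crawl: fetch a page, store a copy, queue its links."""
    frontier = deque([seed])  # links yet to be visited
    copies = {}               # stored copies of pages already visited
    while frontier:
        url = frontier.popleft()
        if url in copies or url not in PAGES:
            continue
        html = PAGES[url]     # "fetch" the page
        copies[url] = html    # keep a copy, as search engines do
        collector = LinkCollector()
        collector.feed(html)
        frontier.extend(collector.links)
    return copies

print(sorted(crawl("/")))  # ['/', '/about', '/blog', '/blog/post-1']
```

Starting from the home page, the crawler eventually discovers every page that is reachable by links – and, just as importantly, it would never find a page that nothing links to.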

Google’s primary crawling program is “Googlebot,” while Microsoft Bing has “Bingbot.” Each has additional specialized bots for visiting ads (i.e., GoogleAdsBot and AdIdxBot), mobile pages and more.

This stage of the search engines’ processing of webpages seems straightforward, but there is a lot of complexity in what goes on, in this stage alone.

Think about how many web server systems there can be, running different operating systems of different versions, along with varying content management systems (i.e., WordPress, Wix, Squarespace), and then each website’s unique customizations.

Many issues can keep search engines’ crawlers from crawling pages, which is an excellent reason to study the details involved in this stage.

First, the search engine must find a link to the page at some point before it can request the page and visit it. (Under certain configurations, the search engines have been known to suspect there could be other, undisclosed links, such as one step up in the link hierarchy at a subdirectory level, or via some limited website internal search forms.)

Search engines can discover webpages’ links through the following methods:

  • When a website operator submits the link directly or discloses a sitemap to the search engine.
  • When other websites link to the page.
  • Through links to the page from within its own website, assuming the website already has some pages indexed.
  • Social media posts.
  • Links found in documents.
  • URLs found in written text and not hyperlinked.
  • Via the metadata of various kinds of files.
  • And more.

In some cases, a website will instruct the search engines not to crawl one or more webpages via its robots.txt file, which is located at the base level of the domain and web server.

Robots.txt files can contain multiple directives within them, instructing search engines that the website disallows crawling of specific pages, subdirectories or the entire website.

Instructing search engines not to crawl a page or section of a website does not mean that those pages cannot appear in search results. However, keeping them from being crawled in this way can severely impact their ability to rank well for their keywords.
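Python’s standard library includes a robots.txt parser, which makes it easy to test how these directives are interpreted. The file contents and paths below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: the directives are standard, the paths are made up.
# The Googlebot-specific group overrides the wildcard group for that bot.
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/

User-agent: Googlebot
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot falls under its own group, which allows /private/:
print(parser.can_fetch("Googlebot", "/private/report.html"))     # True
print(parser.can_fetch("Googlebot", "/tmp/cache.html"))          # False
# Any other bot falls under the * group, which disallows /private/:
print(parser.can_fetch("SomeOtherBot", "/private/report.html"))  # False
```

Note the behavior shown here: once a robots.txt file names a specific user agent, that bot follows only its own group of directives, not the wildcard group – a detail that regularly trips up site owners.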

In yet other instances, search engines can struggle to crawl a website if the site automatically blocks the bots. This can happen when the website’s systems have detected that:

  • The bot is requesting more pages within a time period than a human could.
  • The bot requests multiple pages simultaneously.
  • A bot’s server IP address is geolocated within a zone that the website has been configured to exclude.
  • The bot’s requests and/or other users’ requests for pages overwhelm the server’s resources, causing the serving of pages to slow down or error out.

However, search engine bots are programmed to automatically change delay rates between requests when they detect that the server is struggling to keep up with demand.

For larger websites, and websites with frequently changing content on their pages, “crawl budget” can become a factor in whether search bots will get around to crawling all of the pages.

Essentially, the web is something of an infinite space of webpages with varying update frequency. The search engines might not get around to visiting every single page out there, so they prioritize the pages they will crawl.

Websites with huge numbers of pages, or that are slower to respond, might use up their available crawl budget before having all of their pages crawled if they have comparatively lower ranking weight compared with other websites.

It is useful to mention that search engines also request all the files that go into composing the webpage, such as images, CSS and JavaScript.

Just as with the webpage itself, if the additional resources that contribute to composing the webpage are inaccessible to the search engine, it can affect how the search engine interprets the webpage.


Rendering

When the search engine crawls a webpage, it will then “render” the page. This involves taking the HTML, JavaScript and cascading stylesheet (CSS) information to generate how the page will appear to desktop and/or mobile users.

This is important in order for the search engine to be able to understand how the webpage content is displayed in context. Processing the JavaScript helps ensure they have all the content that a human user would see when visiting the page.

The search engines categorize the rendering step as a subprocess within the crawling stage. I listed it here as a separate step in the process because fetching a webpage and then parsing the content in order to understand how it would appear composed in a browser are two distinct processes.

Google uses the same rendering engine used by the Google Chrome browser, called “Rendertron,” which is built off the open-source Chromium browser system.

Bingbot uses Microsoft Edge as its engine to run JavaScript and render webpages. It is also now built upon the Chromium-based browser, so it essentially renders webpages very much the same way that Googlebot does.

Google stores copies of the pages in its repository in a compressed format. It seems likely that Microsoft Bing does so as well (though I have not found documentation confirming this). Some search engines may store a shorthand version of webpages in terms of just the visible text, stripped of all the formatting.

Rendering mostly becomes an issue in SEO for pages that have key parts of their content dependent upon JavaScript/AJAX.

Both Google and Microsoft Bing will execute JavaScript in order to see all the content on the page, and more complex JavaScript constructs can be challenging for the search engines to operate.

I have seen JavaScript-constructed webpages that were essentially invisible to the search engines, resulting in severely nonoptimal webpages that would not be able to rank for their search terms.

I have also seen instances where infinite-scrolling category pages on ecommerce websites did not perform well in search engines because the search engine could not see as many of the products’ links.
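A quick way to see the problem is to run a non-rendering link extractor over a page whose links are injected by JavaScript. This is a simplified sketch – the page and URLs are invented – but it shows why a crawler that does not execute scripts never discovers the product links:

```python
from html.parser import HTMLParser

# A page where product links are injected client-side by JavaScript.
# A crawler that does not render the page sees only the raw HTML below,
# so the script-generated links never enter its list of discovered URLs.
html = """
<html><body>
  <a href="/category">All products</a>
  <div id="products"></div>
  <script>
    // Inserted after page load; invisible without rendering:
    document.getElementById('products').innerHTML =
      '<a href="/product-1">Widget</a><a href="/product-2">Gadget</a>';
  </script>
</body></html>
"""

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags in the raw (unrendered) HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

collector = LinkCollector()
collector.feed(html)
# Script contents are treated as raw text, so the injected anchors are missed:
print(collector.links)  # ['/category']
```

Only after rendering – executing the script and re-parsing the resulting DOM – would the two product links become visible to the crawler.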

Other conditions can also interfere with rendering. For instance, when one or more JavaScript or CSS files are inaccessible to the search engine bots because they sit in subdirectories disallowed by robots.txt, it will be impossible to fully process the page.

Googlebot and Bingbot largely will not index pages that require cookies. Pages that conditionally deliver some key elements based on cookies may also not get rendered fully or properly.


Indexing

Once a page has been crawled and rendered, the search engines further process the page to determine if it will be stored in the index or not, and to understand what the page is about.

The search engine index is functionally similar to an index of words found at the end of a book.

A book’s index lists all the important words and topics found in the book, itemizing each word alphabetically, along with a list of the page numbers where the words/topics can be found.

A search engine index contains many keywords and keyword sequences, associated with a list of all the webpages where those keywords are found.

The index bears some conceptual resemblance to a database lookup table, which may have initially been the structure used for search engines. But the major search engines likely now use something a couple of generations more sophisticated to accomplish the goal of looking up a keyword and returning all the URLs relevant to the term.

The use of functionality to look up all pages associated with a keyword is a time-saving architecture, as it would require unworkably excessive amounts of time to search all webpages for a keyword in real time, every time someone searches for it.
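A toy inverted index illustrates why this lookup architecture is fast: the expensive work of scanning pages happens once, at indexing time, and each query then becomes a dictionary lookup plus (for multi-word queries) a set intersection. The pages and terms below are made up:

```python
from collections import defaultdict

# Crawled page text, keyed by URL (all values here are invented examples).
pages = {
    "page-1": "coffee roasting at home",
    "page-2": "cold brew coffee recipe",
    "page-3": "home espresso setup",
}

# Build the inverted index once: each word maps to the set of pages
# containing it, much like a book index maps topics to page numbers.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

# Single-word query: one dictionary access, no scanning of pages.
print(sorted(index["coffee"]))              # ['page-1', 'page-2']

# Multi-word query: intersect the posting sets for each word.
print(sorted(index["coffee"] & index["home"]))  # ['page-1']
```

Production search indexes add ranked posting lists, positional data, compression and sharding on top of this idea, but the core trade – index once, look up cheaply forever after – is the same.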

Not all crawled pages will be kept in the search index, for various reasons. For instance, if a page includes a robots meta tag with a “noindex” directive, it instructs the search engine not to include the page in the index.

Similarly, a webpage may include an X-Robots-Tag in its HTTP header that instructs the search engines not to index the page.
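Both signals are straightforward to check for programmatically. The sketch below – with hypothetical headers and markup – looks for “noindex” in an X-Robots-Tag HTTP header and in a robots meta tag (using a simplified regex rather than a full HTML parser):

```python
import re

def noindex_directives(http_headers, html):
    """Return which noindex signals are present: HTTP header and/or meta tag."""
    signals = []
    # X-Robots-Tag response header, e.g.  X-Robots-Tag: noindex, nofollow
    for name, value in http_headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            signals.append("x-robots-tag")
    # <meta name="robots" content="noindex"> in the page's HTML.
    # Simplified pattern: assumes name comes before content in the tag.
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if meta and "noindex" in meta.group(1).lower():
        signals.append("meta-robots")
    return signals

headers = {"Content-Type": "text/html", "X-Robots-Tag": "noindex, nofollow"}
html = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(noindex_directives(headers, html))  # ['x-robots-tag', 'meta-robots']
print(noindex_directives({}, "<html></html>"))  # []
```

Checking both locations matters in audits: a page can carry a noindex in its HTTP response even when its HTML looks perfectly indexable.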

In yet other instances, a webpage’s canonical tag may inform a search engine that a different page from the present one is to be considered the main version of the page, resulting in other, non-canonical versions of the page being dropped from the index.

Google has also stated that webpages may not be kept in the index if they are of low quality (duplicate content pages, thin content pages, and pages containing all or too much irrelevant content).

There has also been a long history suggesting that websites with insufficient collective PageRank may not have all of their webpages indexed – meaning that larger websites with insufficient external links may not get indexed thoroughly.

Insufficient crawl budget may also result in a website not having all of its pages indexed.

A major component of SEO is diagnosing and correcting when pages do not get indexed. Because of this, it is a good idea to thoroughly study all the various issues that can impair the indexing of webpages.


Ranking

Ranking of webpages is the stage of search engine processing that is probably the most focused upon.

Once a search engine has a list of all the webpages associated with a particular keyword or keyword phrase, it must then determine how it will order those pages when a search is performed for the keyword.

If you work in the SEO industry, you will likely already be quite familiar with some of what the ranking process involves. The search engine’s ranking process is also referred to as an “algorithm.”

The complexity involved with the ranking stage of search is so huge that it alone merits multiple articles and books to describe.

There are a great many criteria that can affect a webpage’s rank in the search results. Google has said there are more than 200 ranking factors used by its algorithm.

Within many of those factors, there can also be up to 50 “vectors” – things that can influence a single ranking signal’s impact on rankings.

PageRank is Google’s earliest version of its ranking algorithm, invented in 1996. It was built off the concept that links to a webpage – and the relative importance of the sources of the links pointing to that webpage – could be calculated to determine the page’s ranking strength relative to all other pages.

A metaphor for this is that links are somewhat treated as votes, and the pages with the most votes will win out in ranking higher than other pages with fewer links/votes.

Fast forward to 2022, and a lot of the old PageRank algorithm’s DNA is still embedded in Google’s ranking algorithm. That link analysis algorithm also influenced many other search engines that developed similar kinds of methods.

The old Google algorithm method had to process the links of the web iteratively, passing the PageRank value around among pages dozens of times before the ranking process was complete. This iterative calculation sequence across many millions of pages could take nearly a month to complete.
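That iterative calculation can be demonstrated on a toy link graph. This is the textbook form of PageRank with a standard 0.85 damping factor – a sketch of the published concept, not Google’s production algorithm – where each iteration, every page passes a share of its score to the pages it links to:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank: each page splits its score among the pages it links to."""
    n = len(links)
    rank = {page: 1.0 / n for page in links}       # start with equal scores
    for _ in range(iterations):
        # Base score every page receives regardless of links.
        new_rank = {page: (1 - damping) / n for page in links}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share          # pass votes along each link
        rank = new_rank
    return rank

# A tiny link graph: A and C both link to B, so B accumulates the most "votes".
graph = {
    "A": ["B"],
    "B": ["C"],
    "C": ["A", "B"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # B
```

Running this for a few dozen iterations converges on stable scores, which is exactly the property that made the original whole-web computation so slow at scale.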

Nowadays, new page links are introduced every day, and Google calculates rankings in a sort of drip method – allowing pages and changes to be factored in much more rapidly without necessitating a month-long link calculation process.

Additionally, links are assessed in a sophisticated manner – revoking or reducing the ranking power of paid links, traded links, spammed links, non-editorially endorsed links and more.

Broad categories of factors beyond links influence the rankings as well.


Understanding the key stages of search is a table-stakes item for becoming a professional in the SEO industry.

Some personalities on social media thought that not hiring a candidate just because they did not know the differences between crawling, rendering, indexing and ranking was “going too far” or “gatekeeping.”

It is a good idea to know the distinctions between these processes. However, I would not consider having a blurry understanding of such terms to be a deal-breaker.

SEO professionals come from a variety of backgrounds and experience levels. What’s important is that they are trainable enough to learn and reach a foundational level of understanding.

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.
