Googlebot is Google's web crawling bot (sometimes also called a “spider”). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. (From Google: Googlebot)
Googlebot crawls and fetches your pages differently than a web browser does. Just because Google can handle scripting does not mean it will always succeed. And just because you test a redirect in your browser and it works doesn't mean that Googlebot is properly redirecting that traffic. It took some dialogue between our team and the hosting company before we figured out what they were doing… and the key to finding out was using the Fetch as Google tool in Webmaster Tools.
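The mismatch above can be reproduced locally. The sketch below is a hypothetical example, not the hosting company's actual configuration: a small server that sends a 301 redirect only when the User-Agent looks like a desktop browser, so a browser test passes while a Googlebot-style request is served the old page with no redirect. All names and the redirect target are illustrative.

```python
# Sketch: a server whose redirect depends on User-Agent, so a redirect that
# "works in the browser" silently fails for Googlebot. Illustrative only.
import http.server
import threading
import urllib.error
import urllib.request

class UserAgentRedirectHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if "Mozilla" in ua and "Googlebot" not in ua:
            # Browsers get the 301 the site owner expects...
            self.send_response(301)
            self.send_header("Location", "https://example.com/new-page")
            self.end_headers()
        else:
            # ...but crawler traffic is served the old page, no redirect.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"old page content")

    def log_message(self, *args):
        pass  # keep the demo quiet

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Decline to follow redirects so we can observe the raw status code."""
    def redirect_request(self, *args, **kwargs):
        return None

server = http.server.HTTPServer(("127.0.0.1", 0), UserAgentRedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/old-page"

def status_for(user_agent):
    """Fetch the test URL with the given User-Agent; return the HTTP status."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    opener = urllib.request.build_opener(NoRedirect)
    try:
        with opener.open(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # a declined redirect surfaces as an HTTPError

browser_status = status_for("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
crawler_status = status_for(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
)
print(browser_status, crawler_status)  # browser sees 301, crawler sees 200
server.shutdown()
```

This is exactly the kind of discrepancy a browser test cannot reveal, which is why checking the path with Fetch as Google matters.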
The Fetch as Google tool lets you enter a path within your site, see whether Google was able to crawl it, and view the crawled content exactly as Google sees it. For our first client, we were able to show that Google was not reading the script as they had hoped. For our second client, we were able to use a different method to redirect Googlebot.
If you see Crawl Errors in Webmaster Tools (in the Health section), use Fetch as Google to test your redirects and see exactly what Google is fetching.