Google Search Console works like a mirror of your website, and one of its most important reports is Coverage. Coverage issues can be worrying, so today we will learn how to fix them.
How to fix Coverage issues in Google Search Console?
Google Search Console is Google's own tool. By adding your website to it, you can analyze all of your website's reports. You can submit an XML sitemap, request indexing of pages, and review reports for every page. If any page has an issue, you can inspect that page individually and fix it.
What is Coverage?
The Google Search Console dashboard has a Coverage report. When you click on Coverage, four boxes are displayed: Error, Valid with warnings (Warning), Valid, and Excluded. Here you can see full reports on your website: which pages are indexed and which are not, which pages have been crawled and which have not. You can also see whether your website has an error, and which page is showing it.
Clicking on each box shows the reports behind the number displayed in it. For example, if you want to see the reports for an error, click on it: the reason for the error and the affected URLs will be shown. In this way, you can open each box, review its separate reports, and fix the issues.
Error
Pages showing an error have not been indexed. Google mainly reports two kinds of errors here: server errors and redirect errors.
Server error
A server error means Googlebot could not access the URL: the request timed out, or your website was too busy to serve the request, so Google abandoned it. When your page is not indexed for this reason, your server returned a 5xx response, commonly called error 500.
Redirect error
This error normally takes one of a few forms: a redirect chain, a redirect loop, a redirect URL that exceeds the maximum URL length, or a bad or empty URL somewhere in the chain. Related errors come in many kinds: unauthorized request (401), access requires authorization (403), non-existent URL (404), submitted URL blocked (400), and soft 404.
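As an illustration of how a redirect loop can arise, here is a hypothetical Apache .htaccess sketch (the paths are made up): two rules that send each page to the other, which Googlebot would report as a redirect error.

```apache
# Hypothetical example: these two rules form a redirect loop.
# Googlebot follows /page-a -> /page-b -> /page-a and gives up.
Redirect 301 /page-a /page-b
Redirect 301 /page-b /page-a
```

Removing one of the rules (or pointing both at a single final URL) breaks the loop.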
Warning
Warning pages are meant to draw your attention. Whether these pages should be indexed depends on the result you intend. You get two types of warnings:
- Indexed, though blocked by robots.txt. This means a page was indexed even though robots.txt blocks it. Googlebot always respects the robots.txt file, but it does not stop indexing a blocked page if some of your other pages link to it: Google accepts the URL into the index without crawling its content. If you want the page indexed properly, allow crawling so Google can use the information on your blog. Because of the robots.txt rule, any snippet shown in the search results for such a page is very limited.
- Page indexed without content. The page appears in Google's index, but Googlebot could not read its content for some reason. One possible reason is that the content is in a format Googlebot cannot process.
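For reference, a typical robots.txt rule that blocks crawling (but, as described above, does not by itself prevent indexing) might look like this; the path is hypothetical:

```text
# Hypothetical robots.txt: blocks crawling of /private/.
# Pages there can still end up indexed (without a snippet)
# if other sites link to them.
User-agent: *
Disallow: /private/
```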
Valid
The pages that are indexed appear here. The pages you submitted for indexing have been crawled and indexed.
Excluded
Every page shown here has been excluded from the index. These pages are not all problems: a page may be a duplicate of an indexed page, or blocked from indexing by a mechanism on your site. But it can also happen that a page is excluded for a reason that is actually a mistake. The following exclusions come in many kinds; let's see what they are and why they occur.
Excluded by 'noindex' tag
When Googlebot tries to index a page and finds a noindex directive, it does not index the page. If you deliberately stopped this page from being indexed, this behavior is correct. If you want the page to be indexed, you have to remove the noindex directive.
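The noindex directive is usually a meta tag in the page's head (it can also be sent as an X-Robots-Tag HTTP header). A typical tag looks like this; remove it to allow indexing again:

```html
<!-- Remove this tag (or the matching X-Robots-Tag header)
     if you want the page to be indexed -->
<meta name="robots" content="noindex">
```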
Blocked by robots.txt
Googlebot has been blocked by the robots.txt file. You can verify this through the robots.txt tester. Keep in mind that robots.txt blocks crawling, not indexing: if Googlebot discovers the page's URL from elsewhere, the page can still be indexed without being loaded. To be sure Googlebot does not index your page, remove the robots.txt block and use a noindex directive instead.
Discovered - currently not indexed
This page has been found by Googlebot but not crawled yet. Typically, Googlebot wanted to crawl the URL but failed because the site was overloaded, so it postponed the crawl for the time being. For this reason, the last-crawl field in the report shows empty.
Alternate page with proper canonical tag
A canonical tag points to the master version of a page. This page's URL resembles the URL of another page, and because the canonical (master) URL is declared on your website, Googlebot crawls that URL instead of this one. The page correctly points to its canonical version, so you don't have to do anything: it simply means some of your page URLs are similar and you have added the canonical tag on your website.
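A canonical tag is declared in the head of the duplicate page. This hypothetical sketch (the URL is made up) shows the form it takes:

```html
<!-- On the duplicate page, point to the master (canonical) URL -->
<link rel="canonical" href="https://example.com/master-page/">
```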
Not found-(404)
When a page is requested and the server returns an error instead of showing results, this is called a 404 error. Google can discover such a URL without any request from you, for example as a link from another site, even when the page has been deleted. There is no way to command Googlebot to forget a URL, so Google keeps trying the URL from time to time, and because the page is missing, the 404 error keeps appearing. A 404 response is not a problem in itself. (A missing page that returns a "success" status instead of a real 404 is called a soft 404.) If the page has moved, use a 301 redirect to the new location.
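A 301 redirect for a moved page can be set up in the server configuration. This hypothetical Apache .htaccess sketch (paths and domain are made up) shows the idea:

```apache
# Hypothetical rule: permanently redirect a moved page,
# so visitors and Googlebot reach the new location.
Redirect 301 /old-page/ https://example.com/new-page/
```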
Conclusion
I hope this article helps you improve your website.
If you have suggestions for anything that should be added to the article, leave a comment. I will surely get back to you.