Sun 26 June 2022:
The tech giant Google has clarified how its Googlebot crawler handles large pages, stating that it will only crawl the first 15 MB of an HTML file and that anything beyond that point will not be taken into account when determining rankings.
“After the first 15 MB of the file, Googlebot stops crawling and only considers the first 15 MB of the file for indexing,” Google said, adding that “the file size limit is applied on the uncompressed data.”
The limit covers only the HTML file itself: “any resources referenced in the HTML such as images, videos, CSS and JavaScript are fetched separately,” according to Google’s help documentation.
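As a rough illustration only (not an official Google tool), a short Python sketch can download a page’s raw HTML and compare its uncompressed size to the documented 15 MB limit; the URL, function name and constant below are placeholders.
```python
# Rough, unofficial sketch: download a page's raw HTML and compare its
# uncompressed size to the 15 MB limit described in Google's documentation.
# The URL, function name and constant are placeholders, not Google tooling.
import urllib.request

GOOGLEBOT_HTML_LIMIT = 15 * 1024 * 1024  # 15 MB, applied to uncompressed data

def html_size_within_limit(url: str) -> bool:
    """Fetch the HTML document (urllib does not request compression by default)
    and report whether its size stays under the 15 MB limit."""
    with urllib.request.urlopen(url) as response:
        body = response.read()  # bytes of the HTML file only; images, CSS and JS are separate fetches
    print(f"{url}: {len(body) / (1024 * 1024):.2f} MB of HTML")
    return len(body) <= GOOGLEBOT_HTML_LIMIT

if __name__ == "__main__":
    html_size_within_limit("https://example.com/")
```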
This left some in the SEO community wondering whether text placed below images in an HTML file would be pushed past the cutoff and disregarded entirely.
A Google representative clarified that embedded images do not count toward the limit: “Embedded resources/content pulled in with IMG tags is not a part of the HTML file.”
Important material must therefore be placed near the top of web pages in order to be weighted by Googlebot.
In practice, this means organizing the code so that the first 15 MB of any HTML or other text-based file contains the SEO-relevant content; a rough check is sketched below.
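The following hypothetical helper reports the byte offset at which a chosen phrase first appears in an HTML file, to confirm it falls inside the first 15 MB; the file path, phrase and function name are illustrative only.
```python
# Hypothetical helper: report the byte offset at which a key phrase first
# appears in an HTML file, to confirm it falls inside the first 15 MB.
# The file path, phrase and function name are illustrative only.
GOOGLEBOT_HTML_LIMIT = 15 * 1024 * 1024  # 15 MB

def key_content_offset(html_path: str, phrase: str) -> int:
    """Return the byte offset of `phrase` in the file, or -1 if it is absent."""
    with open(html_path, "rb") as f:
        data = f.read()
    offset = data.find(phrase.encode("utf-8"))
    if offset == -1:
        print(f"'{phrase}' not found in {html_path}")
    elif offset > GOOGLEBOT_HTML_LIMIT:
        print(f"'{phrase}' starts at byte {offset:,}, beyond the 15 MB cutoff")
    else:
        print(f"'{phrase}' starts at byte {offset:,}, within the first 15 MB")
    return offset

# Example usage (placeholder path and phrase):
# key_content_offset("index.html", "our flagship product")
```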
Additionally, whenever possible, images and videos should be compressed and referenced as separate files rather than encoded directly into the HTML; a simple way to spot inlined images is sketched after this paragraph.
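As another unofficial sketch, the snippet below flags <img> tags whose images are base64-encoded straight into the HTML as data: URIs, since those bytes count toward the HTML file’s size, unlike ordinary image references that are fetched separately; the regular expression and sample markup are assumptions for illustration.
```python
# Hypothetical sketch: flag <img> tags whose images are base64-encoded straight
# into the HTML as data: URIs, since those bytes count toward the HTML file's
# size, unlike ordinary src="/path.png" references that are fetched separately.
import re

DATA_URI_IMG = re.compile(r'<img[^>]+src=["\']data:image/[^"\']+["\']', re.IGNORECASE)

def find_inlined_images(html: str) -> list:
    """Return every <img> tag (as matched text) whose src is an inline data: URI."""
    return DATA_URI_IMG.findall(html)

sample = '<img src="data:image/png;base64,iVBORw0KGgo..."><img src="/logo.png">'
print(find_inlined_images(sample))  # only the data: URI image is reported
```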
SOURCE: INDEPENDENT PRESS AND NEWS AGENCIES