{"id":8413,"date":"2018-03-07T16:20:20","date_gmt":"2018-03-07T16:20:20","guid":{"rendered":"https:\/\/blog.mageworx.com\/?p=8413"},"modified":"2021-06-02T09:17:44","modified_gmt":"2021-06-02T09:17:44","slug":"debunking-3-common-myths-behind-site-crawling-indexation-and-xml-sitemaps","status":"publish","type":"post","link":"https:\/\/www.mageworx.com\/blog\/common-myths-behind-site-crawling-indexation-sitemaps","title":{"rendered":"Debunking 3 Common Myths Behind Site Crawling, Indexation, and XML Sitemaps"},"content":{"rendered":"\n<span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 10<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span><p>Many of us erroneously believe that launching a website equipped with an XML sitemap will automatically get all its pages crawled and indexed.<\/p>\n<p>As a result, some myths and misconceptions have built up. The most common ones are:<\/p>\n<ul>\n<li>Google automatically crawls all sites and does it fast.<\/li>\n<li>When crawling a website, Google follows all links, visits all its pages, and includes them all in the Index straight away.<\/li>\n<li>Adding an XML sitemap is the best way to get all site pages crawled and indexed.<\/li>\n<\/ul>\n<p>Sadly, getting your website into Google\u2019s index is a bit more complicated than that. 
Read on to get a better idea of how the process of crawling and indexation works, and what role an XML sitemap plays in it.<\/p>\n<p>Before we get down to debunking the above-mentioned myths, let\u2019s learn some essential SEO notions:<br \/>\n<!--more--><\/p>\n<p><strong>Crawling<\/strong> is the process search engines use to discover and gather URLs from all over the Web.<\/p>\n<p><strong>Indexation<\/strong> is the process that follows crawling. Basically, it is about parsing and storing Web data that is later used when serving results for search engine queries. The Search Engine Index is the place where all the collected Web data is stored for further usage.<\/p>\n<p><strong>Crawl Rank<\/strong> is the value Google assigns to your site and its pages. It\u2019s still unknown how this metric is calculated by the search engine. Google confirmed multiple times that <a href=\"http:\/\/www.thesempost.com\/higher-crawl-rates-do-not-equal-higher-rankings-in-google\/\">indexing frequency is not related to ranking<\/a>, so there is no direct correlation between a website\u2019s ranking authority and its crawl rank.<\/p>\n<p>News websites, sites with valuable content, and sites that are updated on a regular basis have higher chances of getting crawled on a regular basis.<\/p>\n<p><strong>Crawl Budget<\/strong> is the amount of crawling resources the search engine allocates to a website. Usually, Google calculates this amount based on your site\u2019s Crawl Rank.<\/p>\n<p><strong>Crawl Depth<\/strong> is the extent to which Google drills down into a website\u2019s levels when exploring it.<\/p>\n<p><strong>Crawl Priority<\/strong> is an ordinal number assigned to a site page that signifies its importance in relation to crawling.<\/p>\n<p>Now, knowing all the basics of the process, let\u2019s get those 3 myths behind XML sitemaps, crawling and indexation busted!<\/p>\n<p>&nbsp;<\/p>\n<h2>Myth 1. 
Google automatically crawls all sites and does it fast.<\/h2>\n<p>Google claims that when it comes to collecting Web data, it is agile and flexible.<\/p>\n<p>But truth be told, with trillions of pages on the Web, the search engine technically can\u2019t crawl them all quickly.<\/p>\n<p><em><strong>Selecting Websites to Allocate Crawl Budget for<\/strong><\/em><\/p>\n<p>The smart Google algorithm (aka Crawl Budget) distributes the search engine\u2019s resources and decides which sites are worth crawling and which ones aren\u2019t.<\/p>\n<p>Usually, Google prioritizes trusted websites that correspond to the <a href=\"https:\/\/support.google.com\/webmasters\/answer\/35769\">set requirements<\/a> and serve as the basis for defining how other sites measure up.<\/p>\n<p>So if you have a just-out-of-the-oven website, or a website with scraped, duplicate or thin content, the chances of it being properly crawled are pretty small.<\/p>\n<p>Other important factors that may influence crawl budget allocation are:<\/p>\n<ul>\n<li>website size,<\/li>\n<li>its general health (this set of metrics is determined by the number of errors you may have on each page),<\/li>\n<li>and the number of inbound and internal links.<\/li>\n<\/ul>\n<p>To increase your chances of getting a bigger crawl budget, make sure your site meets all the Google requirements mentioned above, and optimize its crawl efficiency (see the next section of the article).<\/p>\n<p><em><strong>Predicting Crawling Schedule<\/strong><\/em><\/p>\n<p>Google doesn\u2019t announce its plans for crawling Web URLs. 
Also, it\u2019s hard to guess how often the search engine will visit a particular site.<\/p>\n<p>One site may get crawled at least once per day, while another gets visited once per month or even less frequently.<\/p>\n<p>The periodicity of crawls depends on:<\/p>\n<ul>\n<li>the quality of the site content,<\/li>\n<li>the newness and relevance of the information a website delivers,<\/li>\n<li>and how important or popular the search engine thinks the site URLs are.<\/li>\n<\/ul>\n<p>Taking these factors into account, you can try to predict how often Google will visit your website.<\/p>\n<p><em><strong>The role of external\/internal links and XML sitemaps<\/strong> <\/em><\/p>\n<p>As pathways, Googlebots use the links that connect site pages and websites with each other. Thus, the search engine reaches trillions of interconnected pages that exist on the Web.<\/p>\n<p>The search engine can start scanning your website from any page, not necessarily the homepage. The selection of the crawl entry point depends on the source of an inbound link. Say, some of your product pages have a lot of links coming from various websites. Google connects the dots and visits such popular pages first.<\/p>\n<p>An <a href=\"https:\/\/www.mageworx.com\/magento-2-sitemap-extension.html\">XML sitemap<\/a> is a great tool to build a well-thought-out site structure. In addition, it can make the process of site crawling more targeted and intelligent.<\/p>\n<p>Basically, the sitemap is a hub with all the site links. 
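<\/p>\n<p><em>To illustrate, a single sitemap entry (the URL below is hypothetical) might look like this:<\/em><\/p>\n<pre><code>&lt;url&gt;\n  &lt;loc&gt;https:\/\/example.com\/category\/shoes&lt;\/loc&gt;\n  &lt;lastmod&gt;2018-03-01&lt;\/lastmod&gt;\n  &lt;changefreq&gt;weekly&lt;\/changefreq&gt;\n  &lt;priority&gt;0.8&lt;\/priority&gt;\n&lt;\/url&gt;<\/code><\/pre>\n<p>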
Each link included in it can be equipped with some extra info: the last update date, the update frequency, its relation to other URLs on the site, etc.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-8427\" src=\"https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemap1.png\" alt=\"\" width=\"907\" height=\"565\" srcset=\"https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemap1.png 907w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemap1-600x374.png 600w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemap1-768x478.png 768w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemap1-320x200.png 320w\" sizes=\"auto, (max-width: 907px) 100vw, 907px\" \/>All that provides Googlebots with a detailed website crawling roadmap and makes crawling more informed. Also, all the main search engines give priority to URLs that are listed in a sitemap.<\/p>\n<p>Summing up, to get your site pages on Googlebot\u2019s radar, you need to build a website with great content and optimize its internal linking structure.<\/p>\n<hr>\n<h2><em><strong>Takeaways<\/strong><\/em><\/h2>\n<p>\u2022 Google doesn\u2019t automatically crawl every website.<br \/>\n\u2022 The periodicity of site crawling depends on how important or popular a site and its pages are.<br \/>\n\u2022 Updating content makes Google visit a website more frequently.<br \/>\n\u2022 Websites that don\u2019t correspond to the search engine\u2019s requirements are unlikely to get crawled properly.<br \/>\n\u2022 Websites and site pages that don\u2019t have internal\/external links are usually ignored by the search engine bots.<br \/>\n\u2022 Adding an XML sitemap can improve the website crawling process and make it more intelligent.<\/p>\n<hr>\n<h2><strong>Myth 2. 
Adding an XML sitemap is the best way to get all the site pages crawled and indexed.<\/strong><\/h2>\n<p>Every website owner wants Googlebot to visit all the important site pages (except for those hidden from indexation), as well as instantly explore new and updated content.<\/p>\n<p>However, the search engine has its own vision of site crawling priorities.<\/p>\n<p>When it comes to checking a website and its content, Google uses a set of algorithms called crawl budget. Basically, it allows the search engine to scan site pages while using its own resources wisely.<\/p>\n<p><em><strong>Checking a website crawl budget<\/strong> <\/em><\/p>\n<p>It\u2019s quite easy to figure out how your site is being crawled and whether you have any crawl budget issues.<\/p>\n<p>You just need to:<\/p>\n<ul>\n<li>count the number of pages on your site and in your XML sitemap,<\/li>\n<li>visit Google Search Console, jump to the Crawl -&gt; Crawl Stats section, and check how many pages are crawled on your site daily,<\/li>\n<li>divide the total number of your site pages by the number of pages that are crawled per day.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-8424\" src=\"https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/slack-imgs.com_.png\" alt=\"\" width=\"1241\" height=\"679\" srcset=\"https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/slack-imgs.com_.png 1241w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/slack-imgs.com_-600x328.png 600w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/slack-imgs.com_-1200x657.png 1200w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/slack-imgs.com_-768x420.png 768w\" sizes=\"auto, (max-width: 1241px) 100vw, 1241px\" \/>If the number you get is bigger than 10 (there are 10x more pages on your site than what Google crawls daily; e.g., a 5,000-page site with only 400 pages crawled per day gives a ratio of 12.5), we have bad news for you: your website has crawling 
issues.<\/p>\n<p>But before you learn how to fix them, you need to understand another notion, that is\u2026<\/p>\n<p><em><strong>Crawl depth<\/strong> <\/em><\/p>\n<p>The depth of crawling is the extent to which Google keeps exploring a website down to a certain level.<\/p>\n<p>Generally, the homepage is considered level 1, a page that is 1 click away is level 2, etc.<\/p>\n<p>Deep-level pages have a lower PageRank (or don\u2019t have it at all) and are less likely to be crawled by Googlebot. Usually, the search engine doesn\u2019t dig down deeper than level 4.<\/p>\n<p>In the ideal scenario, a specific page should be 1-4 clicks away from the homepage or the main site categories. The longer the path to that page is, the more resources the search engines need to allocate to reach it.<\/p>\n<p>If, while on a website, Google estimates that the path is way too long, it stops crawling further.<\/p>\n<p><em><strong>Optimizing crawl depth and budget<\/strong><\/em><\/p>\n<p>To prevent Googlebot from slowing down and to optimize your website\u2019s crawl budget and depth, you need to:<\/p>\n<ul>\n<li><em>fix all 404, JS and other page errors;<\/em><\/li>\n<\/ul>\n<p>An excessive amount of page errors can significantly slow down Google\u2019s crawler. 
To find all the main site errors, log into your Google (Bing, Yandex) Webmaster Tools panel and follow the instructions given <a href=\"https:\/\/analytics.googleblog.com\/2013\/09\/monitoring-analyzing-error-pages-404s.html\">here<\/a>.<\/p>\n<ul>\n<li><em>optimize pagination;<\/em><\/li>\n<\/ul>\n<p>If your pagination lists are too long, or your pagination scheme doesn\u2019t allow clicking further than a couple of pages down the list, the search engine crawler is likely to stop digging down such a pile of pages.<\/p>\n<p>Also, if there are few items per such page, it can be considered a thin-content page and won\u2019t be crawled through.<\/p>\n<ul>\n<li><em>check navigation filters;<\/em><\/li>\n<\/ul>\n<p>Some navigation schemes may come with multiple filters that generate new pages (e.g. pages filtered by layered navigation). Although such pages may <a href=\"https:\/\/www.mageworx.com\/blog\/2017\/01\/magento-seo-a-fresh-look-at-optimizing-layered-navigation-pages\/\">have organic traffic potential<\/a>, they can also create unwanted load on the search engine crawlers.<\/p>\n<p>The best way to solve this is to limit systematic links to the filtered lists. Ideally, you should use 1-2 filters maximum. E.g. if you have a store with 3 layered navigation (LN) filters (color\/size\/gender), you should allow systematic combinations of only 2 filters (e.g., color-size, gender-size). If you need to add combinations of more filters, you should add links to them manually.<\/p>\n<ul>\n<li><em>optimize tracking parameters in URLs;<\/em><\/li>\n<\/ul>\n<p>Various URL tracking parameters (e.g. \u2018?source=thispage\u2019) can create traps for the crawlers, as they generate a massive amount of new URLs. 
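<\/p>\n<p><em>For instance (the URLs below are hypothetical), one product page can spawn multiple crawlable duplicates, while moving the tracking info behind a \u2018#\u2019 keeps a single crawlable URL:<\/em><\/p>\n<pre><code>https:\/\/example.com\/product\/blue-shirt?source=related\nhttps:\/\/example.com\/product\/blue-shirt?source=similar\nhttps:\/\/example.com\/product\/blue-shirt?source=homepage\n\nhttps:\/\/example.com\/product\/blue-shirt#source=related<\/code><\/pre>\n<p>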
This issue is typical for pages with \u201csimilar products\u201d or \u201crelated stories\u201d blocks, where these parameters are used to track users\u2019 behavior.<\/p>\n<p>To optimize crawling efficiency in this case, it\u2019s advised to pass the tracking information behind a \u2018#\u2019 at the end of the URL. This way, the crawlable URL itself remains unchanged. Additionally, it\u2019s also possible to redirect URLs with tracking parameters to the same URLs without tracking.<\/p>\n<ul>\n<li><em>remove excessive 301 redirects;<\/em><\/li>\n<\/ul>\n<p>Say, you have a big chunk of URLs that are linked to without a trailing slash. When the search engine bot visits such pages, it gets redirected to the version with a slash.<\/p>\n<p>Thus, the bot has to do twice as much work as it\u2019s supposed to, and eventually it can give up and stop crawling. To avoid this, just try to update all the links within your site whenever you change URLs.<\/p>\n<p><em><strong>Crawl priority<\/strong> <\/em><\/p>\n<p>As said above, Google prioritizes websites to crawl. So it\u2019s no wonder it does the same thing with pages within a crawled website.<\/p>\n<p>For the majority of websites, the page with the highest crawl priority is the homepage.<\/p>\n<p>However, as said before, in some cases that can also be the most popular category or the most visited product page. To find the pages that Googlebot crawls most often, just look at your server logs.<\/p>\n<p>Although Google doesn\u2019t officially confirm this, the factors that presumably influence the crawl priority of a site page are:<\/p>\n<ul>\n<li>inclusion in an XML sitemap (with Priority tags for the most important pages),<\/li>\n<li>the number of inbound links,<\/li>\n<li>the number of internal links,<\/li>\n<li>page popularity (# of visits),<\/li>\n<li>PageRank.<\/li>\n<\/ul>\n<p>But even after you\u2019ve cleared the way for the search engine bots to crawl your website, they may still ignore it. 
Read on to learn why.<\/p>\n<p>To better understand how crawl priority works, watch this <a href=\"https:\/\/www.youtube.com\/watch?v=GVKcMU7YNOQ\">virtual keynote<\/a> by Gary Illyes.<\/p>\n<p>Talking about the Priority tags in an XML sitemap, they can either be added manually, or with the help of the built-in functionality of the platform your site is based on. Also, some platforms support third-party <a href=\"https:\/\/www.mageworx.com\/magento-2-sitemap-extension.html\">XML sitemap <\/a>extensions \/ apps that simplify the process.<br \/>\nUsing the XML sitemap Priority tag, you can assign the following values to different categories of site pages:<\/p>\n<ul>\n<li>0.0-0.3 to utility pages, outdated content and any pages of minor importance,<\/li>\n<li>0.4-0.7 to your blog articles, FAQs and knowledge base pages, category and subcategory pages of secondary importance, and<\/li>\n<li>0.8-1.0 to your main site categories, key landing pages and the Homepage.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<hr>\n<p>&nbsp;<\/p>\n<h2>Takeaways<\/h2>\n<p>\u2022 Google has its own vision of the priorities of the crawling process.<br \/>\n\u2022 A page that is supposed to get into the search engine Index should be 1-4 clicks away from the homepage, the main site categories, or the most popular site pages.<br \/>\n\u2022 To prevent Googlebot from slowing down and to optimize your website\u2019s crawl budget and crawl depth, you should find and fix 404, JS and other page errors, optimize site pagination and navigation filters, remove excessive 301 redirects, and optimize tracking parameters in URLs.<br \/>\n\u2022 To enhance the crawl priority of important site pages, make sure they are included in an XML sitemap (with Priority tags), are well linked with other site pages, and have links coming from relevant and authoritative websites.<\/p>\n<hr>\n<p>&nbsp;<\/p>\n<h2>Myth 3. 
An XML sitemap can solve all crawling and indexation issues.<\/h2>\n<p>While being a good communication tool that alerts Google about your site URLs and the ways to reach them, an XML sitemap gives NO guarantee that your site will be visited by the search engine bots (to say nothing of getting all site pages into the Index).<\/p>\n<p>Also, you should understand that sitemaps won\u2019t help you improve your site rankings. Even if a page gets crawled and included in the search engine Index, its ranking performance depends on tons of other factors (internal and external links, content, site quality, etc.).<\/p>\n<p>However, when used right, an XML sitemap can significantly improve your site\u2019s crawling efficiency. Below are some pieces of advice on how to maximize the SEO potential of this tool.<\/p>\n<p><em><strong>Be consistent<\/strong><\/em><\/p>\n<p>When creating a sitemap, remember that it will be used as a roadmap for Google crawlers. Hence, it\u2019s important not to mislead the search engine by providing the wrong directions.<\/p>\n<p>For instance, you may accidentally include in your XML sitemap some utility pages (<em>Contact Us or TOS pages, login pages, the lost password page, content sharing pages<\/em>, etc.).<\/p>\n<p>These pages are usually hidden from indexation with noindex robots meta tags or disallowed in the robots.txt file.<\/p>\n<p>So, including them in an XML sitemap will only confuse Googlebots, which may negatively influence the process of collecting the info about your website.<\/p>\n<p><em><strong>Update regularly<\/strong><\/em><\/p>\n<p>Most websites on the Web change nearly every day. 
This is especially true for eCommerce websites, with products and categories regularly shuffling on and off the site.<\/p>\n<p>To keep Google well-informed, you need to keep your XML sitemap up-to-date.<\/p>\n<p>Some platforms (Magento, Shopify) either have built-in functionality that allows you to periodically update your XML sitemaps, or support third-party solutions that are capable of doing this task.<\/p>\n<p>For example, in Magento 2, you can set the periodicity of sitemap update cycles. When you define it in the platform\u2019s configuration settings, you signal the crawler that your site pages get updated at a certain time interval (hourly, weekly, monthly), and your site needs another crawl.<\/p>\n<p><a href=\"https:\/\/www.mageworx.com\/wiki\/magento-2-sitemap\/\">Click here<\/a> to learn more about it.<\/p>\n<p>But remember that although setting the priority and frequency of sitemap updates helps, they may not always catch up with the real changes and give a true picture.<\/p>\n<p>That is why you should make sure your sitemap reflects all recently made changes.<\/p>\n<p><em><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-8425\" src=\"https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemaps.png\" alt=\"\" width=\"1049\" height=\"516\" srcset=\"https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemaps.png 1049w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemaps-600x295.png 600w, https:\/\/www.mageworx.com\/blog\/wp-content\/uploads\/2018\/03\/Sitemaps-768x378.png 768w\" sizes=\"auto, (max-width: 1049px) 100vw, 1049px\" \/>Segment site content and set the right crawling priorities<\/strong><\/em><\/p>\n<p>Google is working hard to measure the overall site quality and surface only the best and most relevant websites.<\/p>\n<p>But as it often happens, not all sites are created equal or capable of delivering real value.<\/p>\n<p>Say, a website may consist of 1,000 pages, and only 
50 of them are \u00abA\u00bb grade. The others are either purely functional, have outdated content, or have no content at all.<\/p>\n<p>If Google starts exploring such a website, it will probably decide that it is quite trashy due to the high percentage of low-value, spammy or outdated pages.<\/p>\n<p>That\u2019s why when creating an XML sitemap, it\u2019s advised to segment website content and guide the search engine bots only to the worthy site areas.<\/p>\n<p>And as you may remember, the Priority tags assigned to the most important site pages in your XML sitemap can also be of great help.<\/p>\n<hr>\n<p>&nbsp;<\/p>\n<h2>Takeaways<\/h2>\n<p>\u2022 When creating a sitemap, make sure you don\u2019t include pages hidden from indexation with noindex robots meta tags or disallowed in the robots.txt file.<br \/>\n\u2022 Update XML sitemaps (manually or automatically) right after you make changes to the website structure and content.<br \/>\n\u2022 Segment your site content to include only \u00abA\u00bb grade pages in the sitemap.<br \/>\n\u2022 Set crawling priority for different page types.<\/p>\n<hr>\n<p>&nbsp;<\/p>\n<p>That\u2019s basically it.<\/p>\n<p>Have something to say on the topic? Feel free to share your opinion about crawling, indexation or sitemaps in the comments section below.<\/p>\n","protected":false},"excerpt":{"rendered":"<p><span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 10<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>Many of us erroneously believe that launching a website equipped with an XML sitemap will automatically get all its pages crawled and indexed. As a result, some myths and misconceptions have built up. The most common ones are: Google automatically crawls all sites and does it fast. 
When crawling a website, Google follows all links and [&hellip;]<\/p>\n","protected":false},"author":5,"featured_media":13189,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[255,432],"tags":[56,169],"class_list":{"0":"post-8413","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-magento-2","8":"category-seo","9":"tag-seo","10":"tag-seo-suite-ultimate"},"_links":{"self":[{"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/posts\/8413","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/comments?post=8413"}],"version-history":[{"count":12,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/posts\/8413\/revisions"}],"predecessor-version":[{"id":10438,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/posts\/8413\/revisions\/10438"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/media\/13189"}],"wp:attachment":[{"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/media?parent=8413"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/categories?post=8413"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.mageworx.com\/blog\/wp-json\/wp\/v2\/tags?post=8413"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}