The Screaming Frog & BrightonSEO Charity Five-a-Side Tournament
Published Mon, 29 Nov 2021 | https://www.screamingfrog.co.uk/charity-five-a-side/

We’re delighted to announce that Screaming Frog & BrightonSEO are coming together to host a charity five-a-side football tournament next summer in London!


2022 will mark 4 years since the last BrightonSEO 5-a-side, and just like the FIFA World Cup, it’s once again time for the footballing giants of the SEO and digital world to come together, break bread, and compete for one of the least coveted trophies in all of search. We had a blast last time around –


Relive the magic of the last tournament in the video of our famous 9-2 win in #ElCrawlico ;)

Not only are we pleased to help host a fun social event for the SEO community, but raising money and awareness for charities is also very important to us. For instance, we recently raised £8,835 in our charity auction for SeeSaw, and our regular bake off events for Macmillan Cancer Support are a great cause and always very popular internally.

The key bits of info for the Screaming Frog & BrightonSEO charity 5-a-side tournament are –

  • When: Thursday 14th July 2022
  • Time: 1.30pm to 5.30pm
  • Where: Powerleague Shoreditch, Braithwaite Street, Off Bethnal Green Road, Shoreditch, London, E1 6GJ (view on Google maps)
  • How much: £100 (+VAT) per team to enter. 100% of all the collected entry fees will be donated to a charity of the winning team’s choice.
  • Format: 16 teams beginning in a group stage format (4 groups of 4 teams), before moving to knock out fixtures to determine a winner!
  • Kits: We strongly encourage teams to create their own custom, creative football kits, but the venue has bibs if required!
  • Winners: Trophies for the winners and runners up, plus a case of beer for the winning team!
  • Teams: Squads of up to 7 players per team, with 5 on the pitch at any one time (rolling substitutions). This is an inclusive event so all genders and mixed teams are welcome to participate. Agencies and in-house both welcome!
  • Supporters: Supporters and fans are very welcome, there’s plenty of space to watch and cheer your team on. We encourage songs, banners and tifo!
  • After: BBQ catering and post-match drinks.
  • After after: Plenty of pubs nearby for further post-match analysis!
  • How to enter: Register here for the Screaming Frog & BrightonSEO charity five-a-side! First come, first served – we expect this to be popular so please get stuck in ASAP.

The event also includes FA qualified referees, all pitch hire, ball hire and bib hire, though we do encourage teams to make their own bespoke football kits! We have full use of changing rooms and shower facilities, as well as bar and hospitality facilities too. Any other questions or things we might have missed, please do slide into my DMs on Twitter.

We’re excited and honoured to co-host the event with our friends at BrightonSEO, and look forward to the early exchanges of inter-agency banter about the tournament at the next BrightonSEO in April 2022. Make sure to register your tickets for that now.

One last plug – register here to participate in the Screaming Frog & BrightonSEO five-a-side :)

Screaming Frog Scoops Three Trophies at the UK Search Awards
Published Wed, 17 Nov 2021 | https://www.screamingfrog.co.uk/screaming-frog-search-awards-hattrick/

Unbelievably, it’s been 2 years since our last trip to the UK Search Awards, so you can imagine how excited the team were to be in attendance.

The ceremony celebrates the very best achievements in the search industry and we were delighted to win three awards on the night.

We kicked things off with the ‘Best Use of Search – Retail / Ecommerce’ award, thanks to our work with StressNoMore.

Next up, we won ‘Best Use of Search – Travel / Leisure’ with our client InsureMyTrip.

Finally, with a brace under our belts we followed up with a hat-trick, winning ‘Best Low Budget Campaign (SEO): Large’, once again with our client StressNoMore.

Special mention to our Head of SEO, Patrick, who saw his opportunity for a great pose and grabbed it with both hands.

We also received a silver award for ‘Best Use of Search – Health’ with StressNoMore. Congratulations to PlatformGSK and Digitas UK for their win in this category!

StressNoMore were just as thrilled as us:

“At StressNoMore we’re extremely delighted with the work Screaming Frog has done for us over the past year and we’ve seen huge SEO gains off the back of it. Each month they amaze us with the amount of coverage we consistently receive on a low budget, which is a testament to their reactive and creative approach to content. We have complete trust in Screaming Frog, they’re like an extension of our small team here in Hull.”

It was a fantastic evening and great to catch up with people at a real-life event once again. Big congratulations to all the winners and nominees, and we hope to be at the next one!

Conducting an SEO Content Audit: 6 Things You Might Be Overlooking
Published Mon, 15 Nov 2021 | https://www.screamingfrog.co.uk/conducting-an-seo-content-audit/

The process of conducting an SEO content audit can take many shapes. It can be an interpretive review of a site’s top organic landing pages. Or it can take the form of a data-driven inventory that documents an entire site’s URLs.

There are many ways to assess content through an SEO lens. But the overarching purpose is generally unified – to pinpoint disruptive bottlenecks and uncover new opportunities that can effectively grow search (and brand) visibility.

This article is a guest contribution from Tory Gray and Tyler Tafelsky.


What’s an SEO Content Audit? And What’s the Point?

In a general sense, an SEO content audit is an actionable inventory of indexable content that’s organized in ways to better determine what to consolidate (or remove!), what to improve upon, and what to leave as-is.

In many cases, a specific intention is set to find duplicate content, cannibalization issues, index bloat, or mysterious problems – like persistent layout shifts or JavaScript SEO issues. In other cases, it can be a broader exercise in identifying untapped ideas and new topics to help grow a site’s online footprint.

The art of performing an SEO content audit is a balance between both the right and left brain. Depending on the project, the process involves a combination of perspectives, tools, and procedures to methodically peel the onion on the many layers that impact SEO.

The underlying point of employing a regular content audit (especially for large websites) is to investigate issues that may be “slowing its SEO roll”. The fact is, you could be producing the most engaging and valued content, but it may struggle to rank if there are technical disruptions holding it back. And that’s what a majority of this post seeks to outline.

Alternatively, an external analysis of search trends, predictions, keyword data, and competitor strategies can inspire new opportunities to further grow and differentiate. While not directly focused on the site itself, these off-site insights can be of tremendous use for SEO content creation.


What Should I Look For in a Content Audit?

There’s no shortage of step-by-step, how-to guides on how to perform a content audit with SEO in mind. No doubt, they’re useful resources for those getting started, especially for learning the basics, like what metrics to include in a crawl report, how to organize your spreadsheets, and how to identify duplicate or redundant content.

Many great posts teach these systematic fundamentals and what you should look for when performing a content audit. But most guides of this type leave out advanced technical SEO issues and high-level strategic perspectives.

With that in mind, this post seeks to broaden your horizons by highlighting specific matters that often get overlooked when SEO-auditing a site’s content.

So let’s dive into the good stuff!


1. Ensure Content Is Being Fully Accessed and Rendered (Hello, JavaScript SEO)

As web content has evolved from static HTML pages to interactive graphics, videos, and more dynamic forms, the programming languages to support such elements have also evolved. Among the most powerful and popular of those languages is JavaScript.

JavaScript can power many forms of content, ranging from collapsible accordions and product sliders to animations and dynamic navigations, to name just a few. While JavaScript provides tremendous versatility in delivering rich and engaging content, it can also come with a fair share of limitations and challenges for SEO.

Unlike simple web content that search engine crawlers like Googlebot can easily crawl and index, the issue with more advanced JavaScript-driven content is that it can be difficult and potentially time/resource-intensive to process, especially if a site relies on client-side rendering in the browser versus server-side rendering or pre-rendering.

The problem is that Googlebot isn’t always going to do the heavy lifting to process JavaScript content. As a result, it’s important to proactively audit for JavaScript errors and accessibility issues to ensure your site’s content is being fully rendered and indexed.

There are a couple of underlying issues worth investigating to ensure JavaScript sites retain and maximize the SEO potential of their content. Thanks to ground-breaking advancements with its SEO Spider software, Screaming Frog now makes it easy to crawl JavaScript rendered content and audit these potential issues.

Can Search Engines Discover All Content on Your Pages?

One of the most common JavaScript issues is when search engine bots are unable to identify and crawl critical content on your pages. This can result from general coding mistakes or because the content is not made readily available for rendering and indexation.

To audit this, it’s important to be able to view the rendered DOM after JavaScript has been executed and pinpoint discrepancies with the original response HTML. Essentially, we’re comparing the raw HTML against the rendered HTML, and the visible content on the page.

While there are a few ways to do this, SEO Spider can streamline the comparison process by enabling you to view the original HTML and rendered HTML when using JavaScript.

Within the tool, you can set this up by clicking into Configuration > Spider > Extraction and selecting the appropriate options to both “store HTML” and “store rendered HTML”.

In the View Source pane, this will display the original source HTML and the rendered HTML side-by-side, allowing you to compare differences and assess whether or not critical content and links are actually interpreted in the rendered DOM. The check box to Show Differences above the original HTML window makes this comparison process even more efficient.

This feature is extremely helpful, not only for SEO content audits but for a number of debugging scenarios. What’s nice about the SEO Spider is that you can also choose to store and crawl JavaScript files independently. Once this has been enabled, you can switch the ‘HTML’ filter to ‘Visible Content’ to identify exactly which text content appears only in the rendered HTML.
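
If you want to sanity-check a handful of pages outside of a crawler, the same raw-versus-rendered comparison can be scripted. Below is a minimal, hedged sketch in Python, assuming the requests, beautifulsoup4 and playwright packages are installed (and a browser downloaded via `playwright install chromium`); the URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

URL = "https://example.com/"  # hypothetical page to check


def visible_words(html: str) -> set[str]:
    """Strip scripts/styles and return the set of visible words."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return set(soup.get_text(separator=" ").split())


# Raw response HTML, as a non-rendering crawler would see it.
raw_html = requests.get(URL, timeout=30).text

# Rendered HTML after JavaScript execution, via headless Chromium.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered_html = page.content()
    browser.close()

# Words that only appear once JavaScript has run, i.e. content a
# non-rendering crawler would miss.
js_only = visible_words(rendered_html) - visible_words(raw_html)
print(f"{len(js_only)} words appear only in the rendered HTML")
```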

Are URL Accessibility Issues Present on the Site?

Another common JavaScript-related issue – which is typically more straightforward and easier to troubleshoot – is found within your site’s URLs.

By default, many JavaScript frameworks do not generate unique URLs that can be accessed individually, which is especially the case with single-page applications (SPAs) and certain web apps. Rather, the contents of a page change dynamically for the user and the URL remains exactly the same.

As you can imagine, when search engines can’t access all of the URLs on a site, it creates a lot of problems for SEO. To check if your URLs are available and indexable, use Screaming Frog to crawl the site, and assemble a dashboard of URLs and the data associated with them.

This simple process will let you know which URLs are in fact indexable, which are not, and why that’s the case.

If you have certain URLs on your site that should be indexed but aren’t being found, there are problems that need to be addressed. Some of the most common issues to consider are pages:

  • with blocked resources
  • that contain a noindex
  • that contain a nofollow
  • that contain a different canonical link
  • with redirects handled at the page level instead of at the server level
  • that utilize a fragment URL (or hash URL)

SEO Spider’s JavaScript tab is equipped with 15 different filters that highlight common JavaScript issues, making it easier to find disruptive accessibility problems hindering SEO performance.
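
As a rough illustration of the issues listed above, the sketch below fetches a couple of placeholder URLs and looks for a few of those signals: non-200 responses, noindex/nofollow in the meta robots tag or X-Robots-Tag header, and a canonical pointing elsewhere. It’s a simplified spot-check rather than a substitute for a rendered crawl, and assumes requests and beautifulsoup4 are installed.

```python
import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/", "https://example.com/some-page"]  # placeholders

for url in urls:
    resp = requests.get(url, timeout=30, allow_redirects=False)
    soup = BeautifulSoup(resp.text, "html.parser")

    robots_meta = soup.find("meta", attrs={"name": "robots"})
    robots_value = robots_meta.get("content", "").lower() if robots_meta else ""
    x_robots = resp.headers.get("X-Robots-Tag", "").lower()
    canonical = soup.find("link", rel="canonical")
    canonical_href = canonical.get("href") if canonical else None

    issues = []
    if resp.status_code != 200:
        issues.append(f"non-200 status ({resp.status_code})")
    if "noindex" in robots_value or "noindex" in x_robots:
        issues.append("noindex")
    if "nofollow" in robots_value or "nofollow" in x_robots:
        issues.append("nofollow")
    if canonical_href and canonical_href.rstrip("/") != url.rstrip("/"):
        issues.append(f"canonical points elsewhere ({canonical_href})")

    print(url, "->", ", ".join(issues) if issues else "no obvious blockers")
```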


2. Audit Core Web Vitals and Optimize for Page Speed

While SEO and user behaviour metrics are most commonly used when conducting a content audit, you can potentially move the needle by bringing Core Web Vitals into the mix.

While a fast website won’t make up for low-quality content that doesn’t meet the needs of real users, optimizing for page speed can be a differentiator that helps you edge ahead in the “competitive race” that is SEO.

Simply put, these metrics are designed to measure both page speed and user experience variables. The primary three Core Web Vitals include:

  • Largest Contentful Paint (LCP) – measures the time it takes for the primary content on a page to become visible to users. Google recommends an LCP of fewer than 2.5 seconds.
  • First Input Delay (FID) – measures a page’s response time when a user can interact with the page, such as clicking a link or interacting with JavaScript elements. Google recommends an FID of 100 milliseconds or less.
  • Cumulative Layout Shift (CLS) – measures the layout shifts that reposition a page’s primary content, which ultimately affects a user’s ability to engage with that content. Google recommends a CLS score of 0.1 or less.

Since Google has added Core Web Vitals to its search algorithm, it has become more widely known that improving LCP, FID, and CLS can positively affect a page’s SEO potential.
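
As a quick worked example of those “good” thresholds, the sketch below flags whether a set of made-up metric values would pass Google’s recommended bars; in practice the numbers would come from field data rather than being hard-coded.

```python
# Google's recommended "good" thresholds, as quoted above.
GOOD_THRESHOLDS = {"LCP": 2.5, "FID": 100, "CLS": 0.1}  # seconds, milliseconds, unitless score


def assess_core_web_vitals(lcp_s: float, fid_ms: float, cls: float) -> dict:
    """Return a simple pass/fail flag for each Core Web Vital."""
    return {
        "LCP": "good" if lcp_s <= GOOD_THRESHOLDS["LCP"] else "needs improvement",
        "FID": "good" if fid_ms <= GOOD_THRESHOLDS["FID"] else "needs improvement",
        "CLS": "good" if cls <= GOOD_THRESHOLDS["CLS"] else "needs improvement",
    }


# Example with made-up values: a slow LCP, but acceptable FID and CLS.
print(assess_core_web_vitals(lcp_s=3.1, fid_ms=80, cls=0.05))
# {'LCP': 'needs improvement', 'FID': 'good', 'CLS': 'good'}
```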

Measuring Core Web Vitals

It’s now easy to measure Core Web Vitals through several commonly used tools. The foundational tool is the Chrome User Experience Report, which collects real user data and shares it via tools like PageSpeed Insights and Google Search Console.

  • Within Google Search Console, the Core Web Vitals feature utilizes the data collected from the Chrome User Experience Report to reveal any issues throughout your website.
  • PageSpeed Insights, which can be used as a standalone audit tool, analyses the performance of your pages and makes suggestions on how to improve, both on mobile and desktop browsers. The field data refers to the actual user data pulled from the Chrome User Experience Report.
  • The Chrome Web Vitals Extension is a handy way to see Core Web Vitals while browsing the web or making changes to your site. Not only is this extension great for internal audits, but it can be quite useful in measuring up against your competitors.

Each of these tools enables you to effectively measure LCP, FID, and CLS for a site, which in turn can lead to actionable improvements. Google also shares a collection of tools and workflow considerations to measure and improve Core Web Vitals at Web.Dev.
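
If you’d rather pull these numbers programmatically, for example to add them as columns in a content audit spreadsheet, the public PageSpeed Insights API returns both lab and field data. The sketch below is hedged: the endpoint is real, but the exact response field names are quoted from memory, so check them against the API documentation before relying on them.

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://example.com/", "strategy": "mobile"}  # placeholder URL

data = requests.get(PSI_ENDPOINT, params=params, timeout=60).json()

# Field data (Chrome User Experience Report) is expected under "loadingExperience".
field_metrics = data.get("loadingExperience", {}).get("metrics", {})
for metric, details in field_metrics.items():
    # e.g. LARGEST_CONTENTFUL_PAINT_MS -> percentile and a category such as "FAST"
    print(metric, details.get("percentile"), details.get("category"))
```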


3. Audit Index Bloat and Keyword Redundancies, and Prune Mindfully

Pages with low-quality content, duplicate content, cannibalizing content, or no content at all should be kept out of the search results. These low-value pages contribute to wasted crawl budget, keyword dilution, and index bloat. As such, auditing index bloat is a powerful exercise that’s intended to correct this very problem.

What Causes Index Bloat?

In simple terms, index bloat is when a site has too many URLs being indexed – that likely shouldn’t be indexed. This occurs when search engines find and index a considerable excess of URLs – more than what’s desired or specified in the sitemap. It’s a particularly common scenario with very large websites, such as eComm stores with thousands of pages.

Most often, index bloat is an ominous occurrence that stems from:

  • Dynamically generated URLs (unique and indexable pages created by functions like filters, search results, pagination, tracking parameters, categorization, or tagging)
  • User-generated content (UGC)
  • Coding mistakes (e.g. broken URL paths in a site’s footer)
  • Subdomains (thin or low-search-value pages on subdomains you aren’t paying attention to)

Orphan pages are also a common source of index bloat worth investigating. These are pages that exist but aren’t being linked to on the site in a crawl. That doesn’t necessarily mean they’re low-quality pages, but that they’re not easily accessible and have no internal links pointing to them.

The SEO Spider makes it easy to identify orphan pages and review them as part of a comprehensive SEO content audit. While some of these URLs may be worthy of keeping and restoring (building internal links to), in many cases, they can add bloat and should be pruned.

Lastly, a more intentional (if mistaken) form of index bloat occurs when a site has too many pages covering the same topic or keyword theme (e.g. the classic SEO mistake of pumping out blog posts targeting the same keyword). Such content overlap and keyword redundancy dilutes SEO by confusing search engines as to which URL should be prioritized for ranking.

How to Identify Index Bloat

There are a few different techniques we like to use to identify index bloat in all of its forms. For deep websites showing signs of significant bloat, we recommend a combination of the first techniques mentioned below, as these practices are more comprehensive and thorough.

Do a Cannibalisation Analysis

If you have more than one page aiming to rank for the same keyword (on purpose or accidentally), you may encounter cannibalization issues. Cannibalization issues aren’t inherently bad; for example, if you are able to rank in positions 1 & 2, or 2 & 3, for the same keyword, that’s a GREAT result.

But in many cases, your content ends up competing against itself, and suddenly you are ranking on page 3 and 5 with little to no organic visibility.

Pull data on potential cannibalization issues, identify which might be problematic, and put together a plan to:

  • Update one page’s content (and internal anchor links) so it’s no longer competing
  • Retire and redirect the lower performing pages.
  • Consolidate the pages into one, leveraging the best content from both to cover the subject matter better and more comprehensively. Use the better performing URL (if possible), and redirect the other to the new page.

Compare Index Coverage With Sitemaps in Google Search Console

First, pin down exactly what property you’re investigating in Google Search Console. This could involve auditing the HTTPS vs HTTP properties of a domain, including any additional subdomain variations. Alternatively, it’s often easier to pull data from one Domain property and access URLs across all associated subdomains.

This distinction is important, especially for sites that use subdomains. All too often, SEOs overlook subdomain-induced index bloat purely because it’s a different property than the ‘main’ site they’re focusing on. Out of sight, out of mind, and the result can be substantial bloat left on the table.

Whether you’re cross-auditing different properties or using one Domain property (the preferred method), the next step is to get your XML sitemap right. Ideally, the sitemap should only include links to the pages you value most and want indexed. But with XML sitemaps, that’s not always the case.

Sometimes a cursory glance at the XML sitemap reveals obvious signs of index bloat, like unwanted tag pages and pagination URLs. This can especially be the case with sites on platforms that automatically generate XML sitemaps in a disorganized, lackadaisical fashion. Ultimately, you don’t want to submit a bloated XML sitemap to Google Search Console.

Google Search Console itself offers helpful tools to help you identify and clean up index bloat. Under the Index option, you can use the Coverage feature to see “All known pages” that Google has indexed.

While this tool is generally used to pinpoint pages with error and warning statuses by default (e.g. submitted pages that are broken), you can view all valid pages that Google is picking up as indexable.

The true insights from this report are found just below the graph under Details, specifically under valid pages that are “Indexed, not submitted in sitemap.” These URLs are not specified in the sitemap but are still being discovered and indexed by Google, making them worth auditing for potential index bloat issues.

These reporting tools in Google Search Console are handy for this audit process, but they do come with limitations. For instance, the list of URLs in the report is based on the last crawl, which may not be the most up-to-date. Additionally, the report is limited to a maximum of 1,000 URLs, so this approach alone may not suffice for very large sites with considerable bloat.

Run a Crawl Report with SEO Spider

Another more comprehensive way to audit index bloat is to run a crawl report using SEO Spider along with API data (‘Configuration > API Access’) from Google Analytics and Search Console. This will enable you to see URLs from these tools that might not be accessible via a crawl, and whether or not they’re indexable.

After running the crawl, export the Internal HTML report into a spreadsheet and extract all URLs that are not indexable into a separate sheet. This provides you with a holistic view of all pages that can be discovered and indexed.

Next, by crawling and/or organizing all URLs listed in the XML sitemap, you can compare these two lists to determine any outliers that should not be discoverable or indexed, and are otherwise adding bloat to the site. This slightly more manual approach works well for identifying URLs that should be removed, redirected, or attributed with noindex, nofollow, or canonical tags.

This approach, combined with using a Domain property as the Google Search Console property, provides a complete look into all potential index bloat that may be hindering a site’s SEO performance.
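
A hedged sketch of that comparison is below: it gathers the URLs listed in an XML sitemap, loads the indexable URLs from a crawl export, and prints the outliers. The sitemap location, export file name and column headers (‘Address’, ‘Indexability’) are assumptions, so adjust them to match your own site and export; it assumes requests and pandas are installed.

```python
import xml.etree.ElementTree as ET

import pandas as pd
import requests

# 1. URLs listed in the XML sitemap (placeholder location).
sitemap_xml = requests.get("https://example.com/sitemap.xml", timeout=30).text
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_urls = {
    loc.text.strip() for loc in ET.fromstring(sitemap_xml).findall(".//sm:loc", ns)
}

# 2. Indexable URLs from the crawl export (hypothetical file and column names).
crawl = pd.read_csv("internal_html.csv")
indexable = set(crawl.loc[crawl["Indexability"] == "Indexable", "Address"])

# 3. Outliers: indexable via a crawl but missing from the sitemap, so worth reviewing for bloat.
not_in_sitemap = sorted(indexable - sitemap_urls)
print(f"{len(not_in_sitemap)} indexable URLs are not in the sitemap")
for url in not_in_sitemap[:20]:
    print(url)
```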

Leverage Crawl Analysis to Filter Relevant URL Data

Screaming Frog has streamlined this process with its Crawl Analysis feature, which enables you to leverage certain filters post-crawl that are invaluable for the auditing process. When integrated with the Google Analytics or Google Search Console API, Crawl Analysis offers crucial insights when matching a site’s crawl data against the URLs on these platforms, particularly the “Indexed, not submitted in sitemap” report.

At the end of a crawl (or when a crawl has been paused), click into Crawl Analysis > Configure to activate certain data points of interest. In this case, Sitemaps and Content are the most useful when troubleshooting index bloat; however, all of these options have merit when conducting an SEO content audit.

Once the desired data points have been selected, you’ll need to start the actual Crawl Analysis to gather data for these filters. You can also tick the checkbox to “Auto-analyse at End of Crawl” to have this done automatically.

When the Crawl Analysis has been completed, the insightful data you’re looking for will be available and filterable from the right-hand ‘overview’ window pane. In the Sitemaps drop-down, you can filter to see data like:

  • URLs in Sitemap
  • URLs not in Sitemap (most useful for auditing unwanted index bloat)
  • Non-indexable URLs in Sitemap
  • Orphan pages

In addition to these data points, the Content filtering option can also help you find duplicate content or near-duplicates worth reviewing for keyword redundancies/dilution. SEO Spider’s configurations enable you to set the percentage of duplicate similarity threshold, an algorithm that’s designed to pinpoint similarities in page text.

By default, the similarity threshold is set at 90%. For large websites, this can still surface a significant amount of content that requires auditing. Alternatively, you can adjust to a lower similarity threshold to widen the net and find duplicates that may be more bloat-indicative and SEO-damaging.
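
To make the idea of a similarity threshold concrete, here’s a simplified illustration using Python’s built-in difflib. It isn’t the SEO Spider’s own algorithm, just a way to see how two blocks of page text compare against a chosen percentage.

```python
from difflib import SequenceMatcher

THRESHOLD = 0.90  # i.e. a 90% similarity threshold

page_a = "Buy red widgets online. Free delivery on all red widgets in the UK."
page_b = "Buy blue widgets online. Free delivery on all blue widgets in the UK."

similarity = SequenceMatcher(None, page_a, page_b).ratio()
if similarity >= THRESHOLD:
    print(f"Near-duplicates ({similarity:.0%} similar): review for consolidation")
else:
    print(f"Below threshold ({similarity:.0%} similar)")
```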

Together, these features available in SEO Spider provide a comprehensive view into what URLs should and should not be indexed. This video provides a nice overview of the Crawl Analysis features that you can leverage.

Pruning Index Bloat and Problematic URLs

Once you’ve determined which URLs fall into the category of index bloat, or appear redundant and/or duplicate in nature, the next step is to prune them. This is where having an organized spreadsheet with relevant metrics, notes, and action items is very helpful.

Before removing them all from the index at once, it’s important to make a mindful assessment of these URLs and how they should best be pruned. For instance, some URLs might be earning organic traffic or backlinks. Removing them entirely (instead of 301 redirecting them) might result in losing SEO value that could otherwise be retained and allocated to other pages.

Here are a few ways to assess the value of URLs, which can then help you determine how they should be pruned.

  • Review organic metrics in Google Analytics, such as organic search traffic, conversions, user behaviour, and engagement to better gauge how much SEO value a URL has.
  • Review All Users segment metrics as well, so you don’t accidentally prune content that’s driving business value. More about this below.
  • In Google Search Console, use Performance > Search Results to see how certain pages perform across different queries. Near the top are filter options (Search type: Web and Date: Last 3 months will be activated by default; we prefer to review at least 12 months of data at a time to account for seasonality). Add a Page filter to show the search performance of specific URLs of interest. In addition to impressions and clicks from search, you can click into each URL to see if they rank for any specific queries.
  • Use the Link Score metric (a value range of 1-100) from the SEO Spider Crawl Analysis. URLs that have a very low Link Score typically indicate a low-value page that could perhaps be pruned via redirect or noindex/removal.
  • Additional tools like Ahrefs can help determine if a URL has any backlinks pointing to it. You can also utilize certain metrics that indicate how well a page performs organically, such as organic keywords and (estimated) organic traffic.

It helps to have some SEO experience and a keen eye during this evaluation process to effectively identify URLs that do contain some degree of value, as well as others that are purely bloat. In any case, taking measures to prune, and to prune mindfully, helps search engines cut through the clutter and prioritize the real content you want driving your SEO strategy.

The actual pruning process will more than likely involve a combination of these approaches, depending on the evaluation conducted above. For instance, a page with thin or duplicate content but a few high-quality backlinks will be pruned differently (e.g. 301 redirected) versus a page with just thin or duplicate content.

  • Remove & Redirect – In most cases, the URLs you’d like to prune from index can be removed and redirected to the next most topically relevant URL you wish to prioritize for SEO. This is our preferred method for pruning index bloat, where historical SEO value can be appropriately allocated with the proper 301 redirect.

In cases when strategic business reasons take priority (and remove & redirect is not an option), the next best alternatives include:

  • Meta Robots Tags – Depending on the nature of the page, you can set a URL as “noindex,nofollow” or “noindex,follow” using the meta robots tag.
    • “Noindex,nofollow” prevents search engines from indexing as well as following any internal links on the page (commonly used for pages you want to be kept entirely private, like sponsored pages, PPC landing pages, or advertorials). You are also welcome to use “Noindex, follow” if preferred, but keep in mind that this follow will eventually be treated as a nofollow.
  • Disallow via Robots.txt – In cases that involve tons of pages that need to be entirely omitted from crawling (e.g. complete URL paths, like all tag pages), the “disallow” directive in the robots.txt file is the machete in your pruning toolkit (a quick way to test disallow rules is sketched after this list). Using robots.txt prevents crawling, which is useful for larger sites in preserving crawl budget. But it’s critical to fix indexation issues FIRST and foremost (via removing URLs in Search Console, the meta robots noindex tag, and other pruning methods).
  • Canonicalization – Not recommended as an end-all solution to fixing index bloat, the canonical tag is a handy tool that tells search engines the target URL you wish to prioritize indexing. The canonical tag is especially vital to ensure proper indexation of pages that are similar or duplicative in nature, like syndicated content or redundant pages that are necessary to keep for business, UX, or other purposes.

These are the fundamental pruning options in a nutshell. Before deploying significant changes in removing/redirecting/blocking URLs, be sure to take a performance benchmark so that you can clearly see the impact once the index bloat has been pruned.
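
For the robots.txt route in particular, it’s easy to check which URLs a disallow rule would actually block before you deploy it. The sketch below uses only the Python standard library; the robots.txt location and test URLs are placeholders. Remember that robots.txt only prevents crawling, so indexation issues still need fixing separately, as noted above.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

test_urls = [
    "https://example.com/tag/widgets/",   # e.g. a tag page you intend to block
    "https://example.com/blog/keep-me/",  # a page that should remain crawlable
]

for url in test_urls:
    if rp.can_fetch("Googlebot", url):
        print(f"{url} -> crawlable")
    else:
        print(f"{url} -> blocked by robots.txt")
```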


4. Determine (And Improve) Pages Where You’re Losing Users

A good auditor’s mindset is to embrace performance pitfalls and weaknesses as room for improvement. Identifying pages where users are exiting the site (without converting) is a great place to prioritize such opportunities.

Pinpointing pages with high exit rates is not the same as pinpointing pages with high bounce rates; however, many of the same improvements can apply to both. Unlike bounce rate, which is specifically associated with entrance pages (the user leaves the site from the first page they enter on), exit rate indicates how often users exit from a page after visiting any number of pages on a site. Exit rate is calculated as a percentage (number of exits / number of pageviews).
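
As a quick worked example of that formula, with made-up numbers:

```python
# Exit rate = number of exits / number of pageviews, per page.
pages = {
    "/pricing/": {"exits": 180, "pageviews": 1200},
    "/blog/widget-guide/": {"exits": 640, "pageviews": 800},
}

for path, stats in pages.items():
    exit_rate = stats["exits"] / stats["pageviews"]
    print(f"{path}: {exit_rate:.1%} exit rate")

# /pricing/: 15.0% exit rate
# /blog/widget-guide/: 80.0% exit rate
```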

Pinpointing Pages with High Exit Rates

In Google Analytics, you can view pages with high exit rates by navigating to Behaviour > Site Content. Under All Pages, there’s a % Exit column that shows exit rate metrics for all pages of the site, as well as other general performance metrics. You can drill deeper via Behaviour > Site Content > Exit Pages, which sorts pages by the highest volume of exits and the associated exit rate for each page.

The Exit Pages report in Google Analytics is a great place to start in assessing which URLs users are dropping off the most. Keep in mind it’s best to set the date range of this report to include a significant sample size of data, such as the last 6-12 months.

Depending on the size of the site, you may want to export the data to include more than just the top ten URLs shown in the report by default. For instance, there may be several pages of importance that don’t quite have as high of a volume of exits but still have an enormously high exit rate worth troubleshooting.

How to Improve Exit Pages and Reduce Drop Offs

Finding pages with high exit rates is one thing. Implementing changes to reduce drop offs is another. While some problems might be obvious (slow loading pages, broken links, media not rendering, etc.), deciding where and how to improve content and user experience takes some investigation.

Here is a series of tips to help you reduce exit rates, improve engagement, and boost conversions:

  • Enhance UX & Content Readability – Ensure your content is easy to digest by keeping it organized and easy to navigate. Avoid lengthy paragraphs, complex sentences, and convoluted wording. Structure pages with a table of contents, subheadings, lists, tables, images, and visual media whenever possible.
  • Speed Up Slow-Loading Pages – Slow page load times are a major contributor to high exit rates. In fact, over 50% of users abandon a site if it takes more than three seconds to load. Prioritize page speed by improving Core Web Vitals, leveraging Accelerated Mobile Pages, optimizing images, minifying code, etc.
  • Test Cross-Browser Compatibility – It’s critical to ensure that proper testing is done across all web browsers, especially when most users are using mobile devices, to ensure content renders appropriately without any technical issues. Pages with content formatting problems can result in immediate drop-offs.
  • Propel the Next Step with CTAs – One of the biggest reasons users drop off is the failure to include calls to action that invoke the next step in the conversion funnel. Use graphics, buttons, and other obvious elements to grab users’ attention and compel the desired action you want them to take.
  • Improve Internal Linking – Similar to CTAs, page copy should include internal links to other useful and related pages of a site. Not only is this SEO best practice, but it logically guides users to relevant content that they might be interested in, keeping them on the site longer while aiding conversion.
  • Include Related Posts/Content – Either between paragraphs of copy or at the end of the page, another best practice to minimize exits is to include links to related content, such as blog posts, resources, and other pages that might be of interest to users. These typically work best when placed naturally within the content, with thumbnails, spanning the width of the page.
  • Avoid Intrusive Pop-ups & Too Many Ads – Pages that are riddled with intrusive pop-ups and/or too many banner ads are notorious for having high exit rates. When tastefully implemented, pop-ups and ads can serve a good purpose, but in many cases they lead to a frustrating user experience when overdone.

For deeper investigations, it can be helpful to collect additional user behavior data and feedback with tools like session recordings, heatmaps, and on-site surveys. While these efforts can be more labour and resource-intensive, they can often yield valuable insights for improvement.


5. Leverage Content Gaps Inspired by Competitors and Keyword Data

A fundamental step in any SEO content audit is conducting a content gap analysis to identify untapped topics and new ideas for content. Some of the best insights can be found by analysing close competitors and scoping out what type of content they’re publishing (that you’re not).

We find it useful to support competitive content audits with good ole’ fashioned keyword analysis and research. Data validates the SEO potential of new ideas and helps guide the strategy. Together, these approaches can help you discover and leverage content gaps as new growth opportunities.

Peeling the Onion on Your Competitors’ SEO Content Strategies

While large sites like Amazon, eBay, and Wikipedia might be top-ranking SEO competitors, they probably won’t offer much insight for this exercise. Instead, target closely related competitors who are concentrated in your niche.

You can gain a lot of insight by manually auditing competitors’ sites to see what topics they’re covering and what strategies they’re using. It’s also worth delving into the content of close competitors to see the effort and quality they’ve invested and how they’ve positioned themselves.

While your analytical eye is among the best of tools, it’s crucial to support your findings with tangible data. Using tools like SEMrush, Ahrefs, SpyFu, and Sistrix for content gap analysis is extraordinarily helpful in pulling the curtain on valuable gaps worth exploring.

What’s especially handy about some of these SEO tools is that they allow you to see your competitors’ top organic content, what keywords it ranks for, and other interesting metrics.

As you can imagine, it’s easy to go down a rabbit hole in more ways than one. Not only can you immerse yourself in a library of competitor content – from blogs and resources to core pages of a site’s hierarchy – but you can go off the rails exploring new keyword themes you may have never even considered.

Scoping competitors is one of the best ways to find gaps in your existing SEO strategy. This approach also helps inspire new directions when conducting a keyword refresh or pulling data to back ideas.

Certainly having access to SEO tools like Ahrefs and Semrush helps peel the onion, as these tools provide unbeatable features when performing any type of SEO competitor analysis. But even a manual audit combined with cursory keyword research via Google Keyword Planner can be enough to guide your future SEO roadmap strategy.


6. Consider non-SEO Segments and Overall Conversion Value

When gathering data to support your audit and pruning process, it’s common best practice to pull and evaluate user data across different user segments, particularly when using the SEO Spider tool integrated with Google Analytics’ API.

It’s a natural mistake to prioritize only the Organic user segment when pulling this data. But the fact is, fixating on just Organic users likely means you’re overlooking important data from other channels/segments. Neglecting and writing off all value from non-SEO segments is a huge miss. Not only can these pages be mistakenly pruned and wastefully retired, but in most cases, the content on these pages goes neglected and underutilized.

As a more holistic alternative, use All Users as a separate segment and apply a VLOOKUP to get important metrics – like sessions, user behavior, and conversions – from BOTH segments. Cross-referencing more than just Organic segments into a single dashboard can help you avoid missed opportunities by discounting content that might have more value than expected (especially if certain pages do have content, links, and sessions.)
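
If you prefer scripting to a spreadsheet VLOOKUP, the same cross-reference can be done with a simple merge. The sketch below is a hedged example: the CSV file names and column headers are assumptions standing in for your own Google Analytics exports, and it assumes pandas is installed.

```python
import pandas as pd

# Hypothetical exports: one per segment, each with Page, Sessions and Conversions columns.
organic = pd.read_csv("organic_segment.csv")
all_users = pd.read_csv("all_users_segment.csv")

merged = organic.merge(
    all_users,
    on="Page",
    how="outer",
    suffixes=("_organic", "_all_users"),
).fillna(0)

# Pages with little organic value but meaningful traffic from other channels,
# i.e. content you would NOT want to prune on organic data alone.
keep = merged[(merged["Sessions_organic"] < 10) & (merged["Sessions_all_users"] >= 100)]
print(keep[["Page", "Sessions_organic", "Sessions_all_users"]])
```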

Equipping your arsenal with greater depth and support from your data helps steer the story and conversation, especially with the clients or senior team members who are alarmed by the idea of pruning content. Being able to clearly articulate that a page contains little-to-no value – for any marketing channel! – over a year-long period, for instance, can help alleviate their concerns and get ahead of the issue.

Dashboarding Your Data and Bringing Action to the Audit

As any good audit requires, we recommend assembling your findings into a dashboard (or a master spreadsheet), not just for collecting data but for encouraging a more collaborative, hands-on approach to defining actionable next steps.

Beyond documenting metrics, it’s valuable to conduct a more manual review of target pages, which is often facilitated by the blog editor, copywriter, or content strategist (or those that are more intimately involved in developing the content). Columns can be added to the dashboard to annotate specific issues and opportunities, like:

  • Old/outdated content – If the page is performing for SEO, then note what can be done to update and refresh the content.
  • Off-brand tone – Similarly, if there’s SEO value, then reframe the content or remove and redirect to a better page.
  • Wrong tone or target audience – If not completely low-value or obvious index bloat, then update the content to better align the voice and context.

Content can be pruned, or “retired” even if it provides SEO value – if that’s the right move for the business as a whole. This analysis will help you determine the relative value for SEO, and therefore the right next steps for HOW to prune or re-frame that content appropriately.

A dashboard is pivotal in helping set priorities and keeping communication straight among teams. Other filterable data points that will help with analyzing performance include:

  • Author – If you have a multi-author blog, it’s likely some authors might produce better results on average versus others.
  • Content category – Which categories are over or underperforming, and why? Are there ways to breathe life into priority categories?
  • Business alignment – Should certain pieces of content be emphasized/prioritized to support product features?
  • Customer funnel stage – Outline the target funnel stage for individual pages (e.g. a blog might target Awareness or Consideration stages, or be targeted at making Advocates of existing customers); then specify different goals per stage to move them along the funnel, and/or identify gaps where you’re not meeting the needs of your customers at a particular stage (e.g. not enough pages targeted at a specific stage.)
  • Other notes & action items – Other specific notes, edits, or general enhancements can provide a subjective view for improvement or discussion.

The point is to assemble insights at scale and cultivate hypotheses on the “why” behind each data point while noting actionable improvements that can be made. This can include improving the content of less authoritative authors, or updating content to better suit patterns and trends that can be leveraged.

These insights, which can include a more analytical and subjective review, are invaluable to a great content audit update strategy. Not only does it bring actionable creativity to the mix, but a collaborative dashboard helps clients feel more involved with the audit process, oftentimes granting them more say in what stays, what goes, and what gets improved (which usually makes the whole project easier).


Embarking on Your SEO Content Audit

There are many ways to embrace an SEO content audit, and the manner in which you collect, organize, and share your findings can take many forms.

The real value-add is knowing how to use your toolkit and peeling the onion in all the right places.

With the basic how-to guides in abundance, hopefully these ideas bring a refreshing point of view on a popular topic. How do you approach an SEO Content audit?

What You Need to Know About Google’s New Enforcement System
Published Wed, 27 Oct 2021 | https://www.screamingfrog.co.uk/googles-new-enforcement-system/

Everybody watch out the Google police are about! 👮‍♀️

The time has come, and Google has finally acted on their newly planned ‘strike’ based system which was announced back in late July to give us advertisers a chance to amend any wrongdoings (if you have no idea what I’m on about then please click here to see the full announcement for yourself).

It’s hard to believe that the time has gone so quickly, but here we are, and Google’s initial announcement has now become a legitimate enforcement system, which came into effect on Tuesday 21st September. This was done for the sole reason of stopping those naughty advertisers from repeatedly violating their policies.

So I guess you’re thinking, what policies does this apply to, right? Well, good question. The lovely people at Google have been kind enough to display a list of policies that this will apply to, and have even mentioned their plans to extend this strike-based system to additional policies over time (as well as having a gradual ramp up over a period of 3 months for the policies already announced). Please see my easy to read list below (I hope I don’t have to update this too much!):

  • Enabling dishonest behaviour
  • Unapproved substances
  • Guns
  • Gun parts & related products
  • Explosives
  • Other Weapons
  • Tobacco

It might also be worth noting that this update does not apply to Google Ads’ egregious policy violations, as these result in an immediate suspension instead (the red card of football, if you will). The policies that receive this ‘red card’ are:

  • Circumventing Systems
  • Coordinated deceptive practices
  • Counterfeit
  • Promotion of unauthorised pharmacies
  • Unacceptable business practices
  • Trade Sanctions violation

So What’s Actually Changing?

The people at Google have decided the best way to stop these repeated violations is to implement a new strike-based system where advertisers will receive a warning followed by up to 3 strikes if the violations continue. These warnings/strikes will also be accompanied by an email/in-account notification which will show us advertisers where we’ve inevitably gone wrong and how we can fix the issue.

What’s more, the penalties for the strikes themselves will differ as strikes 1 & 2 will result in your account being temporarily put on hold, whereas strike number 3 will result in a full account suspension (strike 3, you’re out!).

However, Google also offers a solution, so you can re-activate your account quickly and go back to being the busy advertiser you are. For strikes 1 & 2, you will be required to remedy your mistakes and submit an acknowledgement form for your account to be released in either 3 days at strike 1, or 7 days at strike 2. If no action is taken the Google Ads account will remain on temporary hold and the strike will remain on the account for 90 days unless successfully appealed.

Strike 3 on the other hand is a little more severe, as Google states ‘you will not be able to run any ads or create any new content unless the suspension is successfully appealed’ (ouch).

But My Ads Aren’t in Violation, How Do I Get My Ads Back Up and Running Without Making Changes?

Yes that’s a good question, as I’m sure this has happened to all of us at least once in our career. If you’re feeling brave enough to take on the guardians of Google then you can issue an appeal to justify why you believe your ads aren’t in error (as mentioned before, this is compulsory for strike 3).

To do this you’ll need to hover your mouse over the status of the ad (it should say disapproved) which will then give you the option to either edit the ad or appeal the ad. When you click appeal you will be able to choose between two reasons to appeal the ad, as well as being able to appeal for the ad group, campaign, or whole account (as shown by the image below).

However just like the difference in penalties, it is worth knowing that the service of your ads after an appeal will differ depending on the type of strike you’re on too. Please see the different types of actions you can take depending on the strike issued to you:

  • 1st or 2nd Strike – You will be able to serve your ads immediately after appealing the strike if you believe it was issued in error, which will save you waiting either 3 or 7 days (however, it might be best to be 100% certain you’re not in violation, as it’s quite a risky game to play). As with acknowledging the strike, if no action is taken then the Google Ads account will remain on temporary hold and strikes will remain on the account for 90 days unless successfully appealed.
  • 3rd Strike – You will have to successfully complete Google’s appeal form which can be found on the right hand side of the Google Ads account after you click on the ‘Contact Us’ link (which is found in the help section). Once your appeal is reviewed, you will receive an email with the outcome (fingers crossed you’re successful!). If your appeal is rejected, you can re-appeal following the same process that’s outlined above.

So there we have it, the newest set of rules to keep us on our toes (whoop!). Hopefully this blog has given a bit more insight into Google’s new enforcement system and how to overcome the tricky hurdles that might come your way.

Let us know below what you think of the new strike policy!

How To Find Broken Links Using The SEO Spider
Published Fri, 22 Oct 2021 | https://www.screamingfrog.co.uk/broken-link-checker/

You can use the Screaming Frog SEO Spider for free (and paid) to check for broken links (the http response ‘404 not found error’) on your website.

Below is a very quick and easy tutorial on how to use the tool as a broken link checker. First of all, you’ll need to download the SEO Spider which is free for crawling up to 500 URLs. You can download via the green button in the right hand side bar.

You can crawl more than 500 URLs with the paid version. The next steps to find broken links within your website can be viewed in our video, and tutorial below.

1) Crawl The Website

Open up the SEO Spider, type or copy in the website you wish to crawl in the ‘Enter URL to spider’ box and hit ‘Start’.

Find Broken Links

2) Click The ‘Response Codes’ tab & ‘Client Error (4XX)’ Filter To View Broken Links

You can wait until the crawl finishes and reaches 100%, or you can just view 404 broken links while crawling by navigating to the ‘Response Codes’ tab and using the filter for ‘Client Error 4XX’.

There are two ways to do this. You can simply click on the ‘tab’ at the top and use the drop-down filter –

View Broken Links

Alternatively you can use the right-hand window crawl overview pane and just click directly on ‘Client Error (4xx)’ tree view under the ‘Response Codes’ folder. They both show the same results, regardless of which way you navigate.

404 Errors Via Right Hand Window

This crawl overview pane updates while crawling, so you can see the number of client error 4XX links you have at a glance. In the instance above, there are 9 client errors, which is 0.18% of the links discovered in the crawl.

3) View The Source Of The Broken Links By Clicking The ‘Inlinks’ Tab

Obviously you’ll want to know the source of the broken links discovered (which URLs on the website link to these broken links), so they can be fixed. To do this, simply click on a URL in the top window pane and then click on the ‘Inlinks’ tab at the bottom to populate the lower window pane.

View Broken Links Source Pages

You can click on the above to view a larger image. As you can see in this example, there is a broken link to the BrightonSEO website (https://www.brightonseo.com/people/oliver-brett/), which is linked to from this page – https://www.screamingfrog.co.uk/2018-a-year-in-review/.

Here’s a closer view of the lower window pane which details the ‘inlinks’ data –

‘From’ is the source where the 404 broken link can be found, while ‘To’ is the broken link. You can also see the anchor text, alt text (if it’s an image which is hyperlinked) and whether the link is followed (true) or nofollow (false).

It looks like the only broken links on our website are external links (sites we link out to), but obviously the SEO Spider will discover any internal broken links if you have any.

4) Use The ‘Bulk Export > Response Codes > Client Error (4XX) Inlinks’ Export

If you’d rather view the data in a spreadsheet you can export both the ‘source’ URLs and ‘broken links’ by using the ‘Bulk Export’, ‘Response Codes’ and ‘Client Error (4XX) Inlinks’ option in the top level menu.

Bulk Export Broken Links & Source Pages

This should cover the majority of cases for finding broken links on a website.

However, the ‘source’ URL is the very last page to link to the 404 error page. So there might be times that the ‘source’ is a redirect (and possibly in a chain of redirects). You can see if the ‘source’ is a redirect as the ‘Type’ column will say ‘HTTP Redirect’ for example.

To quickly find the original source page of these errors, we recommend using the ‘All Redirects‘ export under ‘Reports > Redirects > All Redirects’. Open up the report and filter the ‘final status code’ column to ‘404’. The ‘Source’ is the original source page, the ‘address’ is the last source, and the ‘final address’ is the 404 URL.

There’s a number of ways you can export data from the SEO Spider, so please read our user guide on exporting.

Crawling A List Of URLs For Broken Links

Finally, if you have a list of URLs you’d like to check for broken links instead of crawling a website, then you can simply upload them in list mode.

To switch to ‘list’ mode, simply click on ‘mode > list’ in the top level navigation and you’ll then be able to choose to paste in the URLs or upload via a file.

Find broken Links in list mode
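
If you only need a quick scripted spot-check of a short URL list outside the SEO Spider, a few lines of Python can flag 4XX responses too. This is a rough complement to list mode rather than a replacement (it won’t crawl, render or report inlinks); the URLs below are placeholders and it assumes the requests package is installed.

```python
import requests

urls = [
    "https://www.screamingfrog.co.uk/",
    "https://example.com/this-page-does-not-exist",
]

for url in urls:
    try:
        # HEAD keeps things light; some servers reject it, so fall back to GET.
        resp = requests.head(url, allow_redirects=True, timeout=30)
        if resp.status_code == 405:
            resp = requests.get(url, allow_redirects=True, timeout=30)
        status = resp.status_code
        flag = "  <-- broken (client error)" if 400 <= status < 500 else ""
        print(f"{status}  {url}{flag}")
    except requests.RequestException as exc:
        print(f"ERROR ({exc.__class__.__name__})  {url}")
```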

Finding Broken Jump Links (Bookmarks)

Jump links are a useful way to link users to a specific part of a webpage using named anchors on a link, also referred to as ‘bookmarks’, ‘named anchors’ and ‘skip links’.

These broken bookmarks can’t be found in the same way as above, as they don’t respond with a 404 status code, and they often go unnoticed.

You’ll need to enable ‘Crawl Fragment Identifiers’ located in ‘Config > Spider > Advanced’, crawl the website and then view them under the ‘URL’ tab and ‘Broken Bookmark’ filter.

Broken Bookmarks, AKA Jump Links or Anchor Links

Read our tutorial on finding broken jump links for a full walk-through.

Hopefully the above guide helps illustrate how to use the SEO Spider tool to check for broken links efficiently.

Please also read our Screaming Frog SEO spider FAQs and full user guide for more information.

How To Manage Your Inner Chimp In The PR Jungle
Published Tue, 19 Oct 2021 | https://www.screamingfrog.co.uk/how-to-manage-your-inner-chimp-in-the-pr-jungle/

The post How To Manage Your Inner Chimp In The PR Jungle appeared first on Screaming Frog.

]]>
If you find yourself feeling gut-wrenchingly anxious, bubbling with anger or out of your depth in the daily PR grind, it could be because your inner chimp is taking over.

Everyone has an inner chimp, whether they like to admit it or not. It’s the primitive, instinctual part of the brain that thinks and acts for us without our permission, helping us navigate jungle life.

While we don’t live in the jungle anymore, the world of PR can certainly feel like it at times.

But by learning simple mind management tricks, you’ll find you’re not just surviving but thriving.

Right, so I Have a Chimp Controlling My Mind?

Yes. Well, sometimes.

Here’s a brief lesson in neuroscience (I promise).

In 2012, Professor Steve Peters, respected psychiatrist and coach to Olympic athletes, wrote a book called “The Chimp Paradox”.

It explains there are three parts of the brain, each with a different function:

  • The human: the part of your brain that’s really “you”, powered by logic, reason and compassion
  • The computer: your memory bank, holding past experiences and lessons on file
  • And the chimp: the fast-acting defence mechanism that controls your primitive fight, flight or freeze response.

The human and chimp parts of your brain are in a constant tug of war. And it’s up to you who wins.


PR – It’s a Jungle Out There

As PRs, our job involves creating campaigns that appeal to people’s inner chimp, focusing on emotive topics like love, power, money and danger to get media exposure for our clients.

But if you’re unable to control your own chimp, you’ll find it hard to enjoy what you do and do it well.

Among other things, your inner chimp craves instant reward, to be liked and protected from danger. And if it doesn’t get these things, it will act out.

In the fast-paced, high-pressure PR environment – where risk-taking, gut feelings and public acceptance are key to success – your chimp will be fully satisfied if things go your way, but will throw a full-on tantrum if they don’t.

But by learning some simple mind management techniques, you’ll not only survive but thrive in the PR jungle…

1. Don’t fight your chimp

If you think you’ll be able to kill or lock your chimp away, you’re wrong.

You and your chimp are in it for the long haul, so you need to learn to live together. This means understanding it and building a relationship with it. Naming it can help.

Appreciate the positives your chimp offers. It gives you that gut feeling that can be so valuable in planning and launching PR campaigns. It lets you know what ideas hit the right emotive buttons. It can give you the power to keep going when the chips are down and turn things around.

So, never ignore it, learn to harness it.

2. Let your chimp offload

Every so often you’ll need to vent about something that’s wound you up. If you keep this inside, it’ll only build up and the bang will become an explosion.

When you complain about something out loud or to another person, it instantly makes you feel better. That’s because your human brain is the one listening and reason kicks in. You’ll find your chimp will soon get exhausted too and give up.

3. Take a pause

Ruled by emotion, your chimp will be the first to react to something. But by taking a pause before you decide what to do, you’ll give your rational and reasonable human brain the chance to catch up and make an informed decision, taking your chimp’s feelings into account.

You can also take five or ten minutes a day to assess how it’s going and reflect on the decisions you’re making. This awareness can help keep your chimp out of the driving seat.

4. Control your own self-esteem

A lot of what we do in PR is measured by KPIs, client satisfaction and comparing ourselves to others.

But for your chimp, no amount of success will ever be enough. It will chase success, but once it gets it, it will simply redefine it. This can make your confidence drop and leave you doubting your ability to do your job well.

Instead, make sure you base your success and self-worth on more than that. Appreciate your strengths. Are you a positive person who motivates your team? Do you care about what you do? Do you produce creative and thought-provoking content? Are you a good communicator? Are you honest?

If you’re measuring success by your strengths and values, building self-esteem is in your own hands…not your chimp’s.


Other Recommended Reading

I asked the PR Twitter community to recommend the best self-help books they’ve read. Here’s what they said:

  • Thinking, Fast and Slow by Daniel Kahneman
  • Black Box Thinking: Marginal Gains and the Secrets of High Performance by Matthew Syed
  • Know Your Worth: How to build your self-esteem, grow in confidence and worry less about what people think by Anna Mathur
  • Atomic Habits by James Clear
  • How to Win Friends and Influence People by Dale Carnegie
  • Joy at Work by Marie Kondo and Scott Sonenshein
  • The Subtle Art of Not Giving A F*ck: A Counterintuitive Approach to Living a Good Life by Mark Manson
  • The Practice by Seth Godin

Comment below with any other recommendations!

The post How To Manage Your Inner Chimp In The PR Jungle appeared first on Screaming Frog.

]]>
https://www.screamingfrog.co.uk/how-to-manage-your-inner-chimp-in-the-pr-jungle/feed/ 1
How to Use Content & PR to Build Deep Links https://www.screamingfrog.co.uk/how-to-use-content-and-pr-to-build-deep-links/ https://www.screamingfrog.co.uk/how-to-use-content-and-pr-to-build-deep-links/#comments Mon, 04 Oct 2021 11:00:51 +0000 https://www.screamingfrog.co.uk/?p=173508 “Links to www.example.com/path/page are going to be a key metric for campaign success. We have always struggled to achieve money links.” More clients are coming to us wanting deep links to pages that target their core keywords. And the reality is, they are right to want deep links…to an extent....

The post How to Use Content & PR to Build Deep Links appeared first on Screaming Frog.

]]>
“Links to www.example.com/path/page are going to be a key metric for campaign success. We have always struggled to achieve money links.”

More clients are coming to us wanting deep links to pages that target their core keywords.

And the reality is, they are right to want deep links…to an extent.

Deep Links Matter

Deep links can drastically improve organic performance in the long term. You can see below the increase in clicks for a page on travel restrictions by state after we helped build over 220 referring domains to it:

Clicks to Travel Insurance Page After 220 Deep Links

Having said that, we ran a quick survey on Twitter to see how much time link builders spend building deep links to pages with commercial/ranking opportunity:

Most respondents (44.8%) admitted to not bothering to build links to core keyword pages at all. And just 8.6% are spending most of their time building these links.

But why are we ignoring them?

  1. Well, they are hard to get. Media hooks for existing content can be limited, and as creatives we want to make our own content from scratch.
  2. On top of that, creating large campaigns is a better way of hitting overall link KPIs.
  3. They aren’t always needed. Raising the overall authority of a site will help all pages rank better too.
  4. It isn’t natural to link to commercial pages excessively, and we can’t influence where publishers link to.

While all are true, the reality is that a client’s only end goal is increasing sales. And building deep links can have more benefits than just boosting PageRank, as they help bring in qualified referrals and boost commercial brand awareness.

How Can I Build Deep Links?

First, make sure it’s worth investing in deep links to your page, then consider which of the following tactics could work for you:

Themed Gift Guides

This is incredibly valuable for eCommerce sites. You should be creating product mailshots at key times of the year.

These should be a staple in your plan at Christmas, Mother’s Day, Father’s Day, Valentine’s Day (and Black Friday if your brand offers discounts). Make sure you are including products which are under-linked but well targeted and in stock.

You’ll likely have to give free samples to journalists to make the cut, but from that point, deep links are guaranteed.

Competitions

If you can offer an enticing product or service, competitions are a fantastic strategy. They work particularly well for mainstream industries, e.g. travel stays, tech gadgets, health and beauty products etc.

You’ll need to create a new competition landing page with T&Cs, but be sure to link to both this and your commercial page in your press release. Journalists will usually do the same in their articles.

In fact, many news sites have their own /competitions/ subfolder where they promote them. See the below example from Female First:

Screenshot of Female First's Competition Subfolder

Be aware that competitions can be seen as participating in a ‘link scheme‘ – meaning publishers may indicate the link is sponsored by adding a qualifying attribute to the <a> tag (e.g. rel="sponsored").

As a result, these deep links won’t be beneficial from a PageRank perspective, but they will help drive brand awareness and referrals.
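If you want to check how a publisher has actually marked up your link, you can inspect the rel attribute on the <a> tag in the published article. Here’s a minimal sketch using the requests and BeautifulSoup libraries – the article URL and domain are placeholders, so swap in your own.

    import requests
    from bs4 import BeautifulSoup

    article_url = "https://www.example-publisher.com/competition-article/"  # placeholder
    your_domain = "example.com"  # placeholder

    html = requests.get(article_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    for link in soup.find_all("a", href=True):
        if your_domain in link["href"]:
            # BeautifulSoup returns rel as a list, e.g. ['sponsored', 'nofollow'], or None.
            rel = link.get("rel") or ["(no rel attribute - a standard followed link)"]
            print(link["href"], "->", " ".join(rel))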

Giveaways

Similarly, free giveaways are a great tactic for less expensive products. To maximise results, be reactive to events in the media.

For example, back in October 2020 Tesco came under fire for labelling sanitary products as ‘non-essential’. One of our clients happened to specialise in menstrual cups, so we created a giveaway for the first 100 buyers struggling to get their hands on them.

Just be sure to set a limit on how many products you or your client is comfortable letting go!

Again, this is another tactic that can be seen as participating in a ‘link scheme‘. But the benefits of brand awareness and referral traffic are still there for the taking.

Directories & Citations

Not the most glamorous of jobs, and not specifically content & PR, but this is useful for business services.

Utilise tools such as Moz, Bright Local and SEMrush to identify new directory and citation opportunities. It’s also worth checking their current citations, to make sure they show up-to-date addresses and contact information.

If your website has lots of locations, this can work well. On top of using the above tools, it’s worth looking into local newspaper directories too.

Directories also have the benefit of bringing in qualified referral traffic as well as deep links.

Embed Assets

If your content is informational, you can add new assets that have outreach value.

We do this for clients who want to improve traffic for informational-led keywords. We’ll consider whether there are relevant ways we can add a new section, under its own heading, that has more mainstream appeal.

For example, if your client has a page on ‘what is travel insurance’, you could still incorporate any campaigns you have on ‘travel insurance’. Just be sure to keep the copy as concise as possible, to avoid diluting the page’s relevance for its core keyword.

See below both our client’s target keyword page and sections we embedded to increase media appeal:

Thought Leadership

Thought leadership ideas shouldn’t be directly about your products or services, but larger stories about your industry, utilising an expert for commentary.

Try to use informational core keyword pages as your starting point for ideation. Ask yourself ‘what’s relevant to this that might be interesting to the media?’.

In your commissioned article, you should hyperlink to a relevant core keyword page either using your brand as anchor text or naturally within your copy. Journalists often copy this over. These links shouldn’t look out of place, as thought leadership should always relate to your brand.

Which Strategies Work For Me?

Remember not all the above will work all the time. Strategies depend on page type and your website’s offering.

You can’t use themed gift guides for a software company, just like you can’t embed assets onto a commercial page.

We’ve created this checklist so you can have a look at what might work for you:

Building Deep Links Checklist

Set The Right Expectations

It’s important to manage expectations. Building deep links is hard.

Clients will want to build deep links to pages that have the highest lead generation or eCommerce revenue opportunity. But you need to balance this with what is newsworthy.

You also can’t expect the same volume of links you’d get from a large content campaign. But the links you do get will be highly relevant and important for improving your rankings.

Create A Balanced Approach

It’d also be naïve to forgo backlinks that point to pages which aren’t keyword targeted.

With large campaigns, be sure to internally link to core keyword pages – this will help pass link equity. The trick is then to balance building generic links with large campaigns (to achieve overall link KPIs) and deep links to lead gen pages (to help with core keyword performance).

A healthy mix is a sure way to keep everyone happy. Hopefully, as an industry, we can make spending 25-50% of our time building deep links the norm.

 

The post How to Use Content & PR to Build Deep Links appeared first on Screaming Frog.

]]>
https://www.screamingfrog.co.uk/how-to-use-content-and-pr-to-build-deep-links/feed/ 6
10 Mistakes in 10 Years in SEO (BrightonSEO – Sept 2021) https://www.screamingfrog.co.uk/10-mistakes-in-10-years-in-seo-brightonseo-sept-2021/ https://www.screamingfrog.co.uk/10-mistakes-in-10-years-in-seo-brightonseo-sept-2021/#comments Thu, 23 Sep 2021 08:02:07 +0000 https://www.screamingfrog.co.uk/?p=171560 I started working in SEO in June 2011. On my first day at Screaming Frog, this is very first email I sent – It’s just a blank email to my dad – I think I was just proud to have my own email signature. I share this because I’ve been...

The post 10 Mistakes in 10 Years in SEO (BrightonSEO – Sept 2021) appeared first on Screaming Frog.

]]>
I started working in SEO in June 2011. On my first day at Screaming Frog, this is the very first email I sent –

10 years in seo

It’s just a blank email to my dad – I think I was just proud to have my own email signature.

I share this because I’ve been reflecting over the fact that I’ve recently passed the milestone of working in the SEO industry for 10 years. Obviously that is a significant period of time, but it’s also plenty long enough to have made some mistakes.

So, a few months ago, when I saw that Kelvin Newman, founder of my favourite SEO conference BrightonSEO, was looking for speakers for the September 2021 edition, an idea popped into my head. I’d tentatively pitched to speak at BrightonSEO before, but in hindsight I’m not sure I ever had a strong enough idea, and was probably at least partly pitching because I thought I ought to, or that it was ‘about time’ I spoke at Brighton. I’m glad I hadn’t, because it made doing so this year all the more special.

My idea was to openly share some of the mistakes I’d made over that 10 year period. I wanted to show, warts and all, the realities of managing clients, managing people, trying to help grow an agency, and to share some of the challenges I’ve faced and mistakes I’ve made along the way.

Kelvin and the team kindly accepted my pitch and I was officially a confirmed speaker at BrightonSEO number 24 – I’m still waiting for some sort of public recognition that I’ve attended all but the first three of them

patrick langridge screaming frog

I sit here typing this the week after the in-person version of the event (at the time of writing there is still time to register for the free online version on 23rd & 24th September), and can say without hesitation that it was a tremendous success, both for me personally and in the wider industry sense too. I delivered my talk just how I wanted it and have had some lovely feedback from the audience since. The whole event was magic; the overall quality of talks was as high as ever (ahem), it was delightful to meet and talk to other SEOs face to face again, and the many seafront beers did not fail to deliver (or did, depending on the perspective of your hangover). In short: in-person events hit different.

So without further ado, I present in blog post format, 10 Mistakes in 10 Years in SEO.

patrick langridge seo

 

1. Thinking I’ve completed it, mate.

 

seo completed it mate

The work that my colleagues and I have undertaken has changed vastly over 10 years. Perhaps this is true of any job, but especially so in SEO. I started out by building links on random resource pages across the web (sometimes exchanging money for those links…). I did a lot of product reviews and giveaways in exchange for links. I’ve done my fair share of infographics, of interactive content, of data-driven campaigns, of digital PR stories, and so on and so on…

What am I getting at? Well those examples demonstrate how quickly and how often things change in SEO, and they represent just the tip of the iceberg. They also focus only on link building, one small component of SEO. So I go back to the mistake I’ve made. Thinking I’ve completed SEO, thinking I have it all figured out.

If I’ve learned one overarching lesson in those 10 years, it’s that no-one’s really got it all figured out. If they think or say they do, they’ve probably already fallen behind the curve. The fact that the SEO industry iterates in the way that it does is precisely why it’s so exciting and why I love it so much. So please, don’t make the mistake I and many others have probably made in the past, of thinking you’ve completed it.

 

2. Failing to set expectations.

 

Setting expectations is pretty much an occupational hazard of working at an SEO agency – it takes many forms and is almost a daily occurrence. One important lesson I’ve learned in my time at Screaming Frog is that failing to set expectations is almost always a big mistake.

Particularly with clients, if they don’t have fair and realistic expectations, or don’t fully understand the scope of your role or remit of your responsibilities, at some point that is going to catch up with you. This is certainly a mistake I’ve made, and something I wish I’d understood sooner in my SEO career. Ensuring you set out very clear expectations with a client should minimise any confusion or disappointment down the line.

 

3. Talking SEO instead of normal language.

 

seo buzzwords

Now, I regularly use all of the above SEO buzzwords and terminology, and of course there’s a time and a place to use them all, but I also think there’s a danger of getting wrapped up in that sort of language and forgetting how to actually communicate SEO with a non-SEO audience.

In fact, many business owners and key stakeholders I regularly communicate with have little knowledge of, or interest in, canonicals, hreflang and so on, so going in heavy by name-dropping the latest SEO tool metric or creatively coined algo update is at best going to confuse them, and at worst alienate them completely. It’s easy to have the blinkers on and be focused on our little SEO world, but I would encourage people not to make the mistake of forgetting who you’re talking to, and how best to frame and position certain things.

If you’re interested in this subject I recommend checking out Tom Critchlow’s SEO MBA – it’s a free newsletter with loads of great advice on things like getting stakeholder buy in, how to present in a more compelling way etc. And also Chris Green is well worth a follow on Twitter – he and I have discussed some of this sort of thing at great length before, and his article on skills required to become a senior agency SEO is excellent.

 

4. Listening to SEO Twitter.

 

Ahh SEO Twitter. It is a complicated and confusing arena at the best of times. I’ll preface this point by stating that there are lots of lovely people on SEO Twitter, and at times it can feel like a very welcoming and encouraging place –

seo twitter

But sometimes, it can be a cesspool of arrogance, ignorance, disrespect and plain nonsense. Unfortunately, you still see a lot of this –

seo twitter

You see a lot of Twitter feedback like “this is nothing new” or “this is so basic and has been covered before”.

I can’t emphasise strongly enough what complete rubbish this is. No keyboard troll can tell you what you are or what you aren’t, and this type of gaslighting is just the tip of the iceberg of the rubbish often spouted on SEO Twitter. These people have no context of who you are or what you are putting out into the world, and the attitude of constantly looking down on people new to the industry, or those who are less experienced, honestly makes my blood boil.

So please, do not make the mistake of taking SEO Twitter too seriously!

 

5. Assuming a client won’t noindex their entire website.

 

This mistake does what it says on the tin. Every agency SEO has a story or two of a horror show mistake on a client website, and I am no different.

One morning in 2015 I was scrolling through my daily client ranking report, and noticed one of our clients had suddenly lost positions for all their keywords overnight. I crawled their site and found the whole thing had been noindexed with an X-Robots-Tag

x-robots-tag

Clearly this wasn’t good news. A new release of the site, which had been in staging, got pushed live with the noindex intact, which was obviously a mistake. We spotted it quickly, the client got it fixed even faster, so no long-term damage was done, but it served as a reminder that these kinds of mistakes can and do (and will!) happen. I’ve learned there’s no end to the kinds of mistakes that might be made on client websites, so this particular incident served as a sobering reminder to always remain vigilant and be as on top of client website performance as you can be, to minimise the damage when these mistakes occur.
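One way to stay vigilant between crawls is a simple spot-check of key pages for an unexpected noindex in the X-Robots-Tag header (the culprit in the story above). This is just a hedged sketch using the requests library, with placeholder URLs – it’s no substitute for regular crawls or proper monitoring, but it catches this particular horror show quickly.

    import requests

    key_pages = [
        "https://www.example-client.com/",               # placeholder URLs
        "https://www.example-client.com/key-category/",
    ]

    for url in key_pages:
        response = requests.get(url, timeout=10)
        robots_header = response.headers.get("X-Robots-Tag", "")
        if "noindex" in robots_header.lower():
            print(f"WARNING: {url} is serving 'X-Robots-Tag: {robots_header}'")
        else:
            print(f"OK: {url} ({response.status_code})")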

 

6. Blaming Google for everything.

 

Like it or not, Google are a massive component of the SEO industry. As SEO professionals we need to pay attention to what they do and the changes they make, and try to make educated guesses as to where they may go in the future.

We know that a lot of the changes they’ve made to the search results in the last 10 years have dramatically changed the face of SEO and what most of us do day to day. We also know that some (or even most) of these changes have been, or continue to be, a threat to organic traffic. It can be easy to get frustrated by the changes Google make, and I’ve certainly felt that way myself.

blaming google

The thing is, dealing with what Google does is my job. I have to deal with it. I have to think of new approaches. I have to restrategise. I have to find new opportunities, new traffic sources, new ways of doing things. That’s my job.

If a client reminds me they are paying me a lot of money and want to know where their traffic, conversions & revenue are, I can’t just say: sorry, it’s all Google’s fault. Even when big algo updates are announced, the industry becomes rife with debate about them and people start tweeting Googlers to complain about things being unfair or wrong.

My view is that, honestly, most of this is just a waste of time and energy. Complaining about Google or blaming them for all your troubles doesn’t actually achieve anything, and it doesn’t give you practical solutions as to what to do about it, so I would encourage people not to make the mistake I’ve certainly made in the past of blaming Google for everything. That time and energy could be much more effectively used analysing and strategising what you might do about Google’s changes.

 

7. Being afraid to challenge clients.

 

being afraid to challenge clients

The above visualisation will be familiar to anyone who works in an agency (and I’m sure it’s applicable in different ways in-house, too): clients are basically in charge. They are in charge of the budgets, they pay the invoices, so broadly speaking, what they say goes. While this is inevitable in a client/agency relationship, it can also foster an unhealthy dynamic where the agency feels afraid to push back or challenge clients, even when they might be right to do so.

From my experience though, the best client/agency relationships are those which have push & pull, back & forth, challenge & counter – use your analogy of choice. Not only is this healthy in any relationship, it can help to reinforce your expertise and knowledge, as well as ensure you set clear boundaries and expectations when working together.

If you are going to challenge or even say no to a client though, there are certain rules of engagement. Hopefully these tips are useful when you face such challenging situations –

  • Be nice and polite! – manners cost nothing, so simply being polite and empathetic in how you phrase certain things can go a long way in getting a client to understand your position.
  • Offer an alternative – if you’re rejecting a request or challenging a client, offering some sort of alternative is very effective. Think about it – if someone asks something of you and you just flat out refuse, that isn’t a very constructive or helpful response, so offering an alternative approach or suggesting a different way of doing something can strengthen your position significantly.
  • Back up with evidence – if you are saying no to a client and you can back that up with some sort of evidence, whether that be some data, a fancy upwards SEO graph, or citing previous conversations about the topic, that can make your position a lot more compelling.
  • Be clear you are saying no – learn from a mistake I’ve made: it’s important that the client walks away from the conversation understanding that you’ve said no. This not only makes it clear that the request has been rejected, but they should also understand what it might take for you to accept a request.
  • And finally – being well prepared, being honest and speaking confidently, for me are the three key pillars to having what can sometimes be tricky conversations with clients.

 

8. Form over function.

 

So, if you’re planning on moving house anytime soon, you might be interested in this infographic –

bad infographic

It’s an ESSENTIAL checklist for changing your address, with some very useful tick boxes of water, gas, electricity…

Now, I don’t mean to pick on this example, but is this really the best way this topic could be tackled or the best way this information could be visualised? For instance could the content be interactive and have links to where you can inform your energy provider of your change of address? Could the list be segmented by the different types of companies you need to contact? Could it be in timeline form to show you when you need to notify different companies?

Those are just a handful of ideas. I suspect this infographic exists in this form because some people still think that ‘SEO strategy’ means “let’s make an infographic / interactive / a dream job linkbait / a most Instagrammed campaign / a most Googled data piece” etc. And this is a mistake I’ve made myself, probably more times than I care to admit.

For the record, none of those formats are wrong if they are the best way of articulating what you want to say, telling your story and getting your key hooks across to readers – but the format shouldn’t ever really be the starting point when ideating or coming up with a new client campaign. That’s how you end up with content like the ESSENTIAL checklist for changing your address.

My colleagues Tom & James have previously covered how to ideate content marketing campaigns in much better detail than I ever could, and our Head of Marketing Mark Porter also gave some great tips on how to avoid content marketing mistakes on a recent Fractl podcast. I recommend checking both of those out.

 

9. Taking things personally.

 

If I’ve learned anything from my 10 years at Screaming Frog, it’s that to survive in an agency environment, you need thick skin. In my early years I remember taking every email I ever received quite personally; I read a lot into things which probably weren’t there, and honestly spent a lot of time stressing over every bit of communication that came my way.

While you might hope that everyone you end up communicating with doing this job will be fair, measured and respectful, unfortunately that isn’t always the case. In fact, one of the less enjoyable parts of the job is having to deal with the odd arsey email, ungrateful clients, and sometimes just being spoken to in ways which are uncalled for.

This can range from the quite amusing blunt feedback –

client feedback

To emails which are actually quite nasty and personal. It’s easy to take these emails personally, and sometimes they can do a lot of damage, whether that be to people’s mental health, their confidence, or even their career progression.

What I’ve learned is to try not to make the mistake of taking things too personally, and not to sweat every email you receive. If you can get up and walk away from your computer at the end of the day knowing you’ve done the best you can, you have to accept that you can’t keep everyone happy all the time. That’s life (especially agency life!).

 

10. Bullet points (apparently).

 

My last mistake is rather tongue-in-cheek (you can tell I was getting tight on the 20 minute BrightonSEO talk limit!) – apparently people don’t like bullet points?! I’m not really sure why. I think in bullet points, I talk in bullet points, I just think they are very effective in structuring my thoughts.

If you’ve ever seen Greg Gifford talk at a conference, he has a theory that bullet points kill kittens. Well, my little girl Bella is alive and well –

bella

So to leave you with one key takeaway –

  • I
  • Love
  • Bullet
  • Points

Thank you for reading!

The post 10 Mistakes in 10 Years in SEO (BrightonSEO – Sept 2021) appeared first on Screaming Frog.

]]>
https://www.screamingfrog.co.uk/10-mistakes-in-10-years-in-seo-brightonseo-sept-2021/feed/ 8
Screaming Frog SEO Spider Update – Version 16.0 https://www.screamingfrog.co.uk/seo-spider-16/ https://www.screamingfrog.co.uk/seo-spider-16/#comments Wed, 22 Sep 2021 07:45:08 +0000 https://www.screamingfrog.co.uk/?p=168654 We’re excited to announce Screaming Frog SEO Spider version 16.0, codenamed internally as ‘marshmallow’. Since the launch of crawl comparison in version 15, we’ve been busy working on the next round of prioritised features and enhancements. Here’s what’s new in our latest update. 1) Improved JavaScript Crawling 5 years ago...

The post Screaming Frog SEO Spider Update – Version 16.0 appeared first on Screaming Frog.

]]>
We’re excited to announce Screaming Frog SEO Spider version 16.0, codenamed internally as ‘marshmallow’.

Since the launch of crawl comparison in version 15, we’ve been busy working on the next round of prioritised features and enhancements.

Here’s what’s new in our latest update.


1) Improved JavaScript Crawling

5 years ago we launched JavaScript rendering, as the first crawler in the industry to render web pages, using Chromium (before headless Chrome existed) to crawl content and links populated client-side using JavaScript.

As Google, technology and our understanding as an industry have evolved, we’ve updated our integration with headless Chrome to improve efficiency, mimic the crawl behaviour of Google more closely, and alert users to more common JavaScript-related issues.

JavaScript Tab & Filters

The old ‘AJAX’ tab has been updated to ‘JavaScript’, and it now contains a comprehensive list of filters around common issues related to auditing websites using client-side JavaScript.

JavaScript Tab & Filters

This will only populate in JavaScript rendering mode, which can be enabled via ‘Config > Spider > Rendering’.

Crawl Original & Rendered HTML

One of the fundamental changes in this update is that the SEO Spider will now crawl both the original and rendered HTML to identify pages that have content or links only available client-side and report other key differences.

Crawl raw and rendered HTML

This is more in line with how Google crawls and can help identify JavaScript dependencies, as well as other issues that can occur with this two-phase approach.

Identify JavaScript Content & Links

You’re able to clearly see which pages have JavaScript content only available in the rendered HTML post JavaScript execution.

For example, our homepage apparently has 4 additional words in the rendered HTML, which was new to us.

Screaming Frog word count diff

By storing the HTML and using the lower window ‘View Source’ tab, you can also switch the filter to ‘Visible Text’ and tick ‘Show Differences’, to highlight which text is being populated by JavaScript in the rendered HTML.

Visible Content Diff

Aha! There are the 4 words. Thanks, Highcharts.

Pages that have JavaScript links are reported and the counts are shown in columns within the tab.

Identify JavaScript Links

There’s a new ‘link origin’ column and filter in the lower window ‘Outlinks’ (and inlinks) tab to help you find exactly which links are only in the rendered HTML of a page due to JavaScript. For example, products loaded on a category page using JavaScript will only be in the ‘rendered HTML’.

View JavaScript links

You can bulk export all links that rely on JavaScript via ‘Bulk Export > JavaScript > Contains JavaScript Links’.

Compare HTML Vs Rendered HTML

The updated tab will tell you if page titles, descriptions, headings, meta robots or canonicals depend upon or have been updated by JavaScript. Both the original and rendered HTML versions can be viewed simultaneously.

JavaScript updating titles and descriptions

This can be useful when determining whether all elements are only in the rendered HTML, or if JavaScript is used on selective elements.

The two-phase approach of crawling the raw and rendered HTML can help pick up on easy to miss problematic scenarios, such as the original HTML having a noindex meta tag, but the rendered HTML not having one.

Previously, by just crawling the rendered HTML, the page would be deemed indexable – when in reality Google will see the noindex in the original HTML first and subsequently skip rendering, meaning the removal of the noindex won’t be seen and the page won’t be indexed.
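To illustrate the two-phase problem outside of the SEO Spider, here’s a rough sketch that compares the meta robots tag in the original HTML (a plain HTTP request) with the rendered HTML (headless Chromium via Playwright). Both libraries and the URL are my own assumptions for the example – in practice the SEO Spider does this comparison for you in JavaScript rendering mode.

    import requests
    from bs4 import BeautifulSoup
    from playwright.sync_api import sync_playwright

    def meta_robots(html: str) -> str:
        tag = BeautifulSoup(html, "html.parser").find("meta", attrs={"name": "robots"})
        return tag.get("content", "") if tag else "(no meta robots)"

    url = "https://www.example.com/"  # placeholder

    raw_html = requests.get(url, timeout=10).text

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        rendered_html = page.content()
        browser.close()

    # If the original HTML says 'noindex' but the rendered HTML doesn't, Google may never
    # see the 'fix', because it can skip rendering once it has read the noindex.
    print("Original HTML:", meta_robots(raw_html))
    print("Rendered HTML:", meta_robots(rendered_html))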

Shadow DOM & iFrames

Another enhancement we’ve wanted to make is to improve our rendering to better match Google’s own behaviour. Giacomo Zecchini’s recent ‘Challenges of building a search engine like web rendering service‘ talk at SMX Advanced provides an excellent summary of some of the challenges and edge cases.

Google is able to flatten and index Shadow DOM content, and will inline iframes into a div in the rendered HTML of a parent page, under specific conditions (some of which I shared in a tweet).

After research and testing, both of these are now supported in the SEO Spider, as we try to mimic Google’s web rendering service as closely as possible.

Flatten Shadow DOM & iframes

They are enabled by default, but can be disabled when required via ‘Config > Spider > Rendering’. There are further improvements we’d like to make in this area, and if you spot any interesting edge cases then drop us an email.


2) Automated Crawl Reports For Data Studio

Data Studio is commonly the tool of choice for SEO reporting today, whether that’s for your own reports, clients or the boss. To help automate this process to include crawl report data, we’ve introduced a new Data Studio friendly custom crawl overview export available in scheduling.

Data Studio Crawl Export

This has been purpose-built to allow users to select crawl overview data to be exported as a single summary row to Google Sheets. It will automatically append new scheduled exports to a new row in the same sheet in a time series.

Custom Crawl Summary Report In Google Sheets
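Incidentally, if you ever want to sanity-check that time series outside of Data Studio, the same sheet can be pulled straight into pandas. A sketch only, with assumptions: the sheet ID below is a placeholder, and the CSV export URL will only work if the sheet is shared so that anyone with the link can view it.

    import pandas as pd

    # Placeholder ID - replace with the Google Sheet created by your scheduled export.
    sheet_id = "YOUR_SHEET_ID"
    csv_url = f"https://docs.google.com/spreadsheets/d/{sheet_id}/export?format=csv"

    crawl_history = pd.read_csv(csv_url)

    # Each scheduled crawl appends one summary row, so any overview metric can be trended.
    print(crawl_history.tail())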

The new crawl overview summary in Google Sheets can then be connected to Data Studio to be used for a fully automated Google Data Studio crawl report. You’re able to copy our very own Screaming Frog Data Studio crawl report template, or create your own better versions!

Screaming Frog Data Studio Crawl Report

This allows you or a team to monitor site health and be alerted to issues without having to even open the app. It also allows you to share progress with non-technical stakeholders visually.

Please read our tutorial on ‘How To Automate Crawl Reports In Data Studio‘ to set this up.

We’re excited to see alternative Screaming Frog Data Studio report templates, so if you’re a Data Studio whizz and have one you’d like to share with the community, let us know and we will include it in our tutorial.


3) Advanced Search & Filtering

The inbuilt search function has been improved. It defaults to regular text search, but allows you to switch to regex, choose from a variety of predefined filters (including a ‘does not match regex’ option) and combine rules (and/or).

Advanced search and filtering in the GUI

The search bar displays the syntax used by the search and filter system, so this can be formulated by power users to build common searches and filters quickly, without having to click the buttons to run searches.

Advanced search box

The syntax can just be pasted or written directly into the search box to run searches.


4) Translated UI

Alongside English, the GUI is now available in Spanish, German, French and Italian to further support our global users. It will detect the language used on your machine on startup, and default to using it.

Translated GUI

Language can also be set within the tool via ‘Config > System > Language’.

A big shoutout and thank you to the awesome MJ Cachón, Riccardo Mares, Jens Umland and Benjamin Thiers at Digimood for their time and amazing help with the translations. We truly appreciate it. You all rock.

Technical SEO jargon alongside the complexity and subtleties in language makes translations difficult, and while we’ve worked hard to get this right with amazing native speaking SEOs, you’re welcome to drop us an email if you have any suggestions to improve further.

We may support additional languages in the future as well.


Other Updates

Version 16.0 also includes a number of smaller updates and bug fixes, outlined below.

  • The PageSpeed Insights integration has been updated to include ‘Image Elements Do Not Have Explicit Width & Height’ and ‘Avoid Large Layout Shifts’ diagnostics, which can both improve CLS. ‘Avoid Serving Legacy JavaScript’ opportunity has also been included.
  • ‘Total Internal Indexable URLs’ and ‘Total Internal Non-Indexable URLs’ have been added to the ‘Overview’ tab and report.
  • You’re now able to open saved crawls via the command line and export any data and reports.
  • The include and exclude have both been changed to partial regex matching by default. This means you can just type in ‘blog’ rather than say .*blog.* etc.
  • The HTTP refresh header is now supported and reported!
  • Scheduling now includes a ‘Duplicate’ option to improve efficiency. This is super useful for custom Data Studio exports, where it saves time selecting the same metrics for each scheduled crawl.
  • Alternative images in the picture element are now supported when the ‘Extract Images from srcset Attribute’ config is enabled. A bug where alternative images could be flagged with missing alt text has been fixed.
  • The Google Analytics integration now has a search function to help find properties.
  • The ‘Max Links per URL to Crawl’ limit has been increased to 50k.
  • The default ‘Max Redirects to Follow’ limit has been adjusted to 10 before a redirect error is shown, in line with Googlebot.
  • PSI requests are now 5x faster, as we realised Google increased their quotas!
  • Updated a tonne of Google rich result feature changes for structured data validation.
  • Improved forms based authentication further to work in more scenarios.
  • Fix macOS launcher to trigger Rosetta install automatically when required.
  • Ate plenty of bugs.

That’s everything! As always, thanks to everyone for their continued feedback, suggestions and support. If you have any problems with the latest version, do just let us know via support and we will help.

Now, download version 16.0 of the Screaming Frog SEO Spider and let us know what you think in the comments.


Small Update – Version 16.1 Released 27th September 2021

We have just released a small update to version 16.1 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • Updated some Spanish translations based on feedback.
  • Updated SERP Snippet preview to be more in sync with current SERPs.
  • Fix issue preventing the Custom Crawl Overview report for Data Studio working in languages other than English.
  • Fix crash resuming crawls with saved Internal URL configuration.
  • Fix crash caused by highlighting a selection then clicking another cell in both list and tree views.
  • Fix crash duplicating a scheduled crawl.
  • Fix crash during JavaScript crawl.

Small Update – Version 16.2 Released 18th October 2021

We have just released a small update to version 16.2 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • Fix issue with corrupt fonts for some users.
  • Fix bug in the UI that allowed you to schedule a crawl without a crawl seed in Spider Mode.
  • Fix stall opening saved crawls.
  • Fix issues with upgrades of database crawls using excessive disk space.
  • Fix issue with exported HTML visualisations missing pop up help.
  • Fix issue with PSI going too fast.
  • Fix issue with Chromium requesting webcam access.
  • Fix crash when cancelling an export.
  • Fix crash during JavaScript crawling.
  • Fix crash accessing visualisations configuration using languages other than English.

Small Update – Version 16.3 Released 4th November 2021

We have just released a small update to version 16.3 of the SEO Spider. This release is mainly bug fixes and small improvements –

  • The Google Search Console integration now has new filters for search type (Discover, Google News, Web etc) and supports regex as per the recent Search Analytics API update.
  • Fix issue with Shopify and CloudFront sites loading in Forms Based authentication browser.
  • Fix issue with cookies not being displayed in some cases.
  • Give unique names to Google Rich Features and Google Rich Features Summary report file names.
  • Set timestamp on URLs loaded as part of JavaScript rendering.
  • Fix crash running on macOS Monterey.
  • Fix right click focus in visualisations.
  • Fix crash in Spelling and Grammar UI.
  • Fix crash when exporting invalid custom extraction tabs on the CLI.
  • Fix crash when flattening shadow DOM.
  • Fix crash generating a crawl diff.
  • Fix crash when Chromium can’t be initialised.

Small Update – Version 16.4 Released 14th December 2021

We have just released a small update to version 16.4 of the SEO Spider. This release includes a security patch, as well as bug fixes and small improvements –

  • Update to Apache log4j 2.15.0 to fix CVE-2021-44228 vulnerability.
  • Added scheduling history feature under ‘File > Scheduling’.
  • Added validation of scheduled tasks to list view to catch issues like removing config files after setting up crawls.
  • Allow double click to edit scheduled crawls.
  • Rate limit Google Sheets exports to prevent export failures.
  • Renaming a custom search/extraction no longer clears the filter.
  • Update failed to find GA account details to list account names and IDs.
  • Add Crawl Timestamp to URL Details tab.
  • Fix crash changing custom search mid crawl.
  • Fix JavaScript crawling bug with pages that send POST/HEAD requests.
  • Fix memory leak during JavaScript Crawling.
  • Fix crash on startup with corrupt tab config file.
  • Fix issue with scheduled crawls hanging if APIs don’t connect.
  • Fix command line crawl issue where Google Sheets limits causes subsequent exports to fail randomly.
  • Fix bug with HTTP Canonicals not being spotted when deriving indexability.
  • Fix crash extracting Chrome on start up.
  • Fix bug parsing robots.txt for User-Agents that already have rules.
  • Fix bug in hreflang filters around sitemap hreflangs and crawl order.
  • Fix crash doing hreflang validation when a sitemap is removed.
  • Fix duplicated cookies stored against a URL.
  • Fix various issues with the Forms Based authentication.
  • Fix crash in GSC.
  • Fix crash selecting items in overview table.

Small Update – Version 16.5 Released 21st December 2021

We have just released a small update to version 16.5 of the SEO Spider. This release includes a security patch, as well as bug fixes and small improvements –

  • Update to Apache log4j 2.17.0 to fix CVE-2021-45046 and CVE-2021-45105.
  • Show more detailed crawl analysis progress in the bottom status bar when active.
  • Fix JavaScript rendering issues with POST data.
  • Improve Google Sheets exporting when Google responds with 403s and 502s.
  • Be more tolerant of leading/trailing spaces for all tab and filter names when using the CLI.
  • Add auto naming for GSC accounts, to avoid tasks clashing.
  • Fix crash running link score on crawls with URLs that have a status of “Rendering Failed”.


The post Screaming Frog SEO Spider Update – Version 16.0 appeared first on Screaming Frog.

]]>
https://www.screamingfrog.co.uk/seo-spider-16/feed/ 42
Screaming Frog Charity Auction https://www.screamingfrog.co.uk/screaming-frog-charity-auction/ https://www.screamingfrog.co.uk/screaming-frog-charity-auction/#comments Thu, 02 Sep 2021 08:59:07 +0000 https://www.screamingfrog.co.uk/?p=170293 Update: This Auction Is Now Closed! Take a look at the totals here. Believe it or not, one of the most common questions we get asked isn’t related to our services or the SEO Spider, it’s related to our merch. Whether it’s via email or Twitter DM, people regularly get in...

The post Screaming Frog Charity Auction appeared first on Screaming Frog.

]]>
Update: This Auction Is Now Closed! Take a look at the totals here.


Believe it or not, one of the most common questions we get asked isn’t related to our services or the SEO Spider – it’s related to our merch. Whether it’s via email or Twitter DM, people regularly get in touch to find out how they can get their hands on one of our highly coveted hoodies or a simple but timeless branded T-shirt.

Screaming Frog merch is strictly only available through giveaways, competitions and conferences: until now. For the first time ever, we’re offering people the chance to guarantee themselves a hoody or T-shirt through a charity auction, with 100% of the proceeds going to SeeSaw.

SeeSaw is a charity that’s local to us, providing grief support for children, young people and their families in Oxfordshire. They help to reduce the emotional, psychological and mental health consequences of bereavement. You can find out more information about SeeSaw and how they help here.


SF Merch

The Screaming Frog hoodies are usually the most popular of all our merch, and only a select few people have managed to snag one over the years. However, for this auction we have 20 up for grabs, available in all sizes (Unisex – Small through to XXL).

Screaming Frog OG Hoodies

As well as this, we also have 20 of Fruit of the Loom’s finest Screaming Frog branded T-shirts, available in a range of colours, kindly modelled by our very own SEO Rockstar, Oliver Brett:


Tweet Us Your Bid

If you’ve ever just wanted to ‘buy one’, now is your chance – the only cost is a generous donation to charity.

The auction will run until 16th September, and to be in with a chance of securing a hoody or T-shirt, all you have to do is tweet us your bid using the hashtag #sfcharityauction.

All bids will be closely monitored by us, and once the auction has finished we’ll get in touch with the top 40 bidders. The top 20 highest bidders will receive a hoody, and the next 20 bidders will receive a T-shirt.

If you’d rather bid confidentially, that’s fine too. You can email us at support@screamingfrog.co.uk and your bid will still be in contention.

Once the auction has finished, we’ll be in touch to collect the details of the lucky bidders, as well as to advise on how to make your charity donation.

On top of all of this, Screaming Frog will match winning donations up to a total of £2,500. The cost of merchandise, postage etc. will also be covered by us (obviously!).

A few things to note –

  • In the event that there are multiple bids of the same amount, winners will be chosen at random.
  • Bids can be placed in GBP or USD, but USD will be converted to GBP to compare bids, based upon the exchange rate when we analyse winners.
  • Bids should be whole pounds or dollars (or will be rounded by us).
  • If winning bidders fail to donate, then the next highest bidder will be chosen.
  • Screaming Frog’s decision on winners will be final.

Don’t miss your chance to secure yourself some Screaming Frog merch, and help a great cause in the process.


Update

The bids are in and finalised, and we’re in the process of getting in touch with the lucky winners! If you were in contention, please do keep an eye on your Twitter DMs/emails.

We’re genuinely blown away by people’s generosity – the bids greatly surpassed all our expectations.

In total, the top 40 bids amount to over £6,300, and with our £2,500 matched donation on top, the combined total is an amazing £8,835!

The highest bid for a hoody came in at a whopping £1,000 from Ash Young of CarMats.co.uk. Huge thanks, Ash!

The highest bid for a T-shirt came in at £145 from Martin MacDonald, who also bagged himself a hoody with a separate, equally generous bid.

As mentioned, all proceeds are going directly to SeeSaw, and we thank everyone involved for contributing to such a great cause.

The post Screaming Frog Charity Auction appeared first on Screaming Frog.

]]>
https://www.screamingfrog.co.uk/screaming-frog-charity-auction/feed/ 4