
Case Study 1: NFL Betting

by Karl Adamsas

At this point we've covered most of the strategy you'll need to start building links like an SEO agency, so I want to show you a case study. We'll bring together everything we've learned into one video, and you can see our thought process from start to finish.

We're going to take a real-life client of ours whose website focuses on NFL betting.

Guest posting still works in shady niches like gambling, but you aren't going to get as good a response rate from the bloggers. Some will refuse to work with a gambling site, but you can still keep them in your database for when you find a non-gambling football client.

What kind of blogs do we want to get links from?

Since the client is involved in NFL betting, I want to focus on sports and football blogs.

Now we need to decide which of our ScrapeBox strategies we're going to use. So far I've taught you six different strategies:

For the keyword footprint, we've got:

  • The SEMrush Method
  • Brands, Models & People
  • The ScrapeBox Keyword Tool

For the relevance/quality footprint, we've got:

  • The WordPress Method 
  • The Comment Method 
  • The Monetisation Modifier

We can now mix and match these footprints to build a super-relevant list of blogs to contact.

Now, I've worked in this niche before, so I know some good footprints to use are "Brands, Models & People" and "The Comment Method".

Depending on how your list performs, we can always go back and try some different strategies to build a better, more responsive list.

We're going to start with "Brands, Models & People" to create a list of relevant keywords. These are the keywords that we want to see on blogs as an indication of relevance.

Brands

For brands, we're going to focus on the team names.

We'll do a quick Google search and find a list. NFL.com has a list of all the current teams, so just grab those team names and paste them into Excel.

Now, the team names are still very broad, and one of these names alone doesn't guarantee that a site is strictly football related. You're going to scrape up a lot of personal-interest pages and a lot of news sites, so we're going to use the advanced search operator "intitle:".

What this does is scrape up blogs that have the team names in the title of the page, not just a mention in the content. This really helps add that extra bit of relevance to the search.
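If you'd rather script this step than paste queries by hand, here's a minimal Python sketch of the idea. It isn't ScrapeBox itself, and teams.txt is a hypothetical file you'd export from Excel:

    # build_team_queries.py: a rough sketch, not ScrapeBox itself.
    # Reads team names (one per line) and wraps each in Google's
    # intitle: operator, so only pages with the team name in the
    # page title are matched.

    with open("teams.txt") as f:          # hypothetical export from Excel
        teams = [line.strip() for line in f if line.strip()]

    queries = [f'intitle:"{team}"' for team in teams]
    # e.g. intitle:"Philadelphia Eagles"

    with open("queries.txt", "w") as f:
        f.write("\n".join(queries))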

Models

Models is a bit of a tough one for the NFL. I'd be inclined to skip it, but we could do something like list all the different positions on a football team: quarterback, running back, defensive end, etc.

So we'll search Google and find a list.

You'd have to go through these one by one, since some of them are too general. Names like quarterback and center are obviously not going to work, but wide receiver, tight end, running back, defensive tackle, and defensive end would all be great keywords.

People

People is where the gold is going to be on our list.

We're going to list the names of current NFL players. A lot of these players are pretty obscure, and only super-relevant NFL blogs are going to be writing about them.

You are going to get a lot of results doing this, so it's probably best to search one team at a time, rather than try to list every player in the NFL.

We'll start with the Eagles.

On Wikipedia, there's a list of all past and current Eagles players. We can definitely use this, but for the sake of this example, I want a shorter list, so we'll just grab the current team from the Eagles' website.

Just add these to Excel, and we now have 71 current Eagles players. Now we're going to add what we've got to ScrapeBox.

We'll just paste in the player names and use "Wrap keywords in quotes". Because these guys' names aren't unique, I want to throw another modifier in here, just the word "Eagles", so that any blog we scrape up is going to have a reference to the Eagles along with the player name.

So I've got a Notepad doc with the term "Eagles" written in it. Now we've got blogs with the word "Eagles" and the player names.

That's our keyword footprint taken care of.

The Quality Footprint

Moving over to the quality footprint, we decided that we were going to use "The Comment Method".

So we have our Notepad doc with numbered comments one to 40, which we just go ahead and import into ScrapeBox, and that's all our search queries ready to go.

We'll only scrape the first 20 results for each query, and we've now got 2,840 search queries: 71 player names multiplied by 40 comment footprints.
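If you want to sanity-check that merge outside ScrapeBox, here's a minimal Python sketch of the same cross product. players.txt and comments.txt are hypothetical files holding the 71 player names and the 40 numbered-comment footprints:

    # merge_footprints.py: a sketch of what the keyword merge produces.
    from itertools import product

    with open("players.txt") as f:        # 71 current Eagles players
        players = [p.strip() for p in f if p.strip()]
    with open("comments.txt") as f:       # "1 comments" through "40 comments"
        comments = [c.strip() for c in f if c.strip()]

    # Each query pairs a quoted player name with the "Eagles" modifier
    # and one numbered-comment footprint.
    queries = [f'"{p}" "Eagles" "{c}"' for p, c in product(players, comments)]
    print(len(queries))                   # 71 * 40 = 2,840 queries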

5 hours later...

It's been about four or five hours, and ScrapeBox has finally finished searching all those queries. Now we're going to clean up this list and shave it down to a more targeted list of prospects.

We currently have over 56,000 URLs to work with.

We'll start by removing the duplicate URLs, which takes us down to 19,000.

The next step is to remove all the unsuitable sites that we always scrape up: the social media platforms and the free blogging platforms, things like Reddit, Facebook, Pinterest, Blogspot, and Weebly.

That takes us down to just over 10,000 results.

Another step I'm going to take here is to remove anything that is not a ".com". For this particular project, the client only wants US sites.

That takes us down to 9,400.

Next, I'm going to apply my own blacklist against this. Our blacklist is the database of bloggers that already work with us. This step removes any blog we're currently working with so that we don't contact them again, which takes us down to 9,360.

Now we're going to remove duplicate domains, which takes us way down to 744 sites.

Now I want the top-level URLs so I can run them through Majestic, so we'll:

  • trim to root
  • remove the sub-domains
  • remove duplicate domains again

We're now at 577 sites.
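Every one of those cleanup steps is a couple of clicks in ScrapeBox, but to make the logic concrete, here's a hedged Python sketch of the same pipeline. The file names are placeholders, and the trim-to-root trick is a naive heuristic that's good enough for plain .com hosts (Python 3.9+):

    # clean_urls.py: mirrors the cleanup steps described above.
    from urllib.parse import urlparse

    BLOCKED = {"reddit.com", "facebook.com", "pinterest.com",
               "blogspot.com", "weebly.com"}      # extend with your own blacklist

    with open("scraped_urls.txt") as f:
        urls = {line.strip() for line in f if line.strip()}   # drops duplicate URLs

    roots = set()
    for url in urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        root = ".".join(host.split(".")[-2:])     # trim to root / drop sub-domains
        if root.endswith(".com") and root not in BLOCKED:
            roots.add(root)                       # drops duplicate domains

    with open("prospects.txt", "w") as f:
        f.write("\n".join(f"http://{r}" for r in sorted(roots)))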

Majestic

Copy these sites to your clipboard and paste them into Majestic.

We'll take out anything with a Trust Flow of less than five.

Ahrefs

Now we want to take these sites and place them into Ahrefs.

We're still left with some really high-quality sites here, which probably won't work with us.

This is why we have a real person go through the sites and do one last visual check. Our virtual assistant can just skip over these sites without bothering with them.

We eliminate any blog with traffic below 100.
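If you export the metrics instead of eyeballing them, both thresholds can be applied in one pass. This sketch assumes a merged metrics.csv with hypothetical Domain, TrustFlow, and Traffic columns; adjust the names to whatever your Majestic and Ahrefs exports actually use:

    # filter_metrics.py: drop sites below the quality thresholds.
    import csv

    MIN_TRUST_FLOW = 5      # the Majestic cut-off used above
    MIN_TRAFFIC = 100       # the Ahrefs cut-off used above

    with open("metrics.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    keepers = [row["Domain"] for row in rows
               if int(row["TrustFlow"]) >= MIN_TRUST_FLOW
               and int(row["Traffic"]) >= MIN_TRAFFIC]

    with open("qualified.txt", "w") as f:
        f.write("\n".join(keepers))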

What we're left with now is 169 sites. That's not a big enough list, so we're going to go back to the scraping stage, add to our search queries, and run ScrapeBox all over again.

We have a couple of options here:

We can go back to our quality footprint and try adding more comments in. We only went up to 40 comments, so we could extend the list to 50, 60, 100, whatever we want, but this is most likely not going to give us the huge boost in results that we need.

We can also try scraping up more results from Google.

I configured ScrapeBox to only search the first 20 results in Google for each search query; that's only the first two pages.

We could try setting this much higher to maybe the first 100 results.

I don't really want to test that at this stage of the scrape, because this particular client wants higher-quality sites, so I really want to limit our search to the first couple of pages of Google.

Our best bet is to go back to the "Brands, Models & People" footprint and expand upon it.

In this particular scrape, we've just been focusing on the people element, so I want to expand upon that.

So far we've only included Eagles players, so we can try adding some other teams. We could find a list of current players for the Cowboys, the Giants, the Seahawks, and so on, but I don't really want to do that either; I want to stick with the Eagles for now.

In our research at the start of the video, we found a huge list of past and present Eagles players on Wikipedia. I want to test adding some of these in, so we'll add players "A" through "E". This gives us another 400 Eagles players for our scrape.
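If you've pasted the full Wikipedia list into a text file, pulling out the "A" through "E" players is a one-liner. This sketch assumes one "First Last" name per line in a hypothetical all_eagles_players.txt and filters on the first letter of the surname:

    # subset_players.py: keep only surnames starting with A through E.
    with open("all_eagles_players.txt") as f:
        names = [n.strip() for n in f if n.strip()]

    subset = [n for n in names if n.split()[-1][0].upper() in "ABCDE"]

    with open("players_a_to_e.txt", "w") as f:
        f.write("\n".join(subset))

    print(len(subset))      # around 400 names in this case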

Now that we've got this new list of names, we just repeat the same steps from the beginning of the tutorial.

The next day

I left that scrape overnight; it was 16,000 keywords, so it did take some time.

You can see we've scraped up almost 117,000 URLs.

Okay, so now we've cleaned up that list and run it through Ahrefs, and we're left with 744 sites.

We'll grab the original list of 169 blogs and add it to our new list.

Then we'll do a quick de-dupe, and we have 848 sites to contact.

From here, we'll break this list into two or three parts and give it to a few different virtual assistants.

Their job is to visit each one of these URLs and do one final visual assessment; this is our last line of defence. We'll have a human decide whether or not each site is relevant enough to contact, and if it is, they'll send the template that I showed you earlier and manage all follow-ups and negotiations.

2 weeks later

It's been a couple of weeks and my virtual assistants are finished contacting this list and negotiating with the bloggers.

At this point, I'm going to have to blur these URLs, since it would be unfair for me to share their details with you guys. If you're interested in seeing some of the blogs that responded, drop me an email and I might be willing to share some of them privately.

This is a summary of our results.

  • We scraped up 848 blogs.
  • 320 of those were unrelated, had no contact details, or had some other roadblock that stopped us from contacting them.
  • 215 blogs responded to our outreach; that's a 25.35% response rate (215 of 848).
  • 32 blogs agreed to let us guest post; that's a 3.77% success rate (32 of 848). Not huge, but not too bad considering the quality of these blogs.

So what can we do from here to improve these results?

  • We can concentrate on the bloggers who responded, but didn't agree to let us guest post.
  • This could actually be a good task for Mailshake, BuzzStream, or one of the other automated mail services we covered in video 3.
  • We could follow up with the non-responders again. I usually follow up a maximum of twice with a list of non-responders.
  • We could have tested using the page scanner. I showed you the page scanner add-on in video 2. The page scanner searches our list for footprints that indicate a site is looking for advertisers. I only really use this tool on much larger lists. It's a great tool, but it's not foolproof; you will lose some great blogs in the process, so I generally don't use it unless I have a list in the thousands.
  • Most obviously, we could test different ScrapeBox footprints, perhaps "The ScrapeBox Keyword Tool" or "The WordPress Method".

Conclusion

There is no one way to approach outreach like this; you're going to have to test out some different strategies. No two niches are the same, so you're going to have to change your approach depending on the project.

The whole point of this strategy is that you can outsource the bulk of your workload to virtual assistants who will deliver you a list of link opportunities each week that is ready to go and requires very little input from you.

Hopefully by now you have a better understanding of how this can work for your link building, and you have everything you need to start scraping up your own link opportunities.

FEEDBACK

Now I’d like to hear from you.

Perhaps you have a question about something you read?

Let me know by leaving a comment below…
