9. Case Study 2: Maternity Clothes

by Karl Adamsas

In this second case study, we're looking at a client who sells maternity clothes.

It might be helpful to work through another real-world example together, another project from start to finish.

What kind of blogs do we want to get links from?

We're going to focus on mommy bloggers: bloggers with young children, new moms, anyone talking about baby-related topics would be a great fit.

The next thing we want to decide is which of our ScrapeBox strategies we're going to use:

For the keyword footprint, we've got:

  • The SEMrush Method
  • Brands, Models & People
  • The ScrapeBox Keyword Tool

For the relevance/quality footprint, we've got:

  • The WordPress Method 
  • The Comment Method 
  • The Monetisation Modifier

This time around, we're going to use "The SEMrush Method" and "The Monetisation Modifier".

I've chosen "The SEMrush Method" because it's an industry that I'm not too familiar with and I'm sure that an industry of this size has a tonne of long tail keywords that we can borrow from an established blog.

I'm going to use "The Monetisation Modifier" because a lot of mommy bloggers operate in a close knit community.

They try to keep their blogs legit by playing by the rules laid out by the FTC, things like labelling their affiliate deals. You might be concerned that this means they won't offer paid links, or that they'll want to label all our links as sponsored, but this isn't necessarily true.

Remember, for the right price, these bloggers will bend their own rules all the time.

The SEMrush Method 

We're going to start with "The SEMrush Method". We want to find a relevant Top 10 blog that we can enter into SEMrush and get all the keywords that they rank for.

We'll just search a broad keyword and see what pops up.

This site looks perfect for us, they've got a tonne of organic traffic and they're ranking for over 16,000 keywords.

Now, they rank for over 16,000 keywords but the most we can export is 10,000 which is still plenty for us.

So just copy them and paste them straight into ScrapeBox.

Wrap keywords in quotes. 

Now you should have your list of monetisation modifiers saved to your desktop, so just bring them into ScrapeBox like you normally would. You can see that we've got 40,000 keywords (10,000 SEMrush keywords x 4 monetisation modifiers).

Scrape up the first 20 results and then just start harvesting as normal.

THE NEXT DAY

ScrapeBox has found almost 97,000 URLs.

Our next step is to clean up this list.

We'll start by removing the duplicate URLs which takes us down to about 60,000.

The next thing I'll do is take out all the useless sites: the social platforms and the free blogging platforms.

That takes us down to about 28,000.

Now, I'm going to apply our blacklist which is the list of sites already in our database. They're already working with us and we don't want to contact them twice.
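
If your database can be exported as a plain list of domains, the blacklist step is easy to script outside ScrapeBox too. A minimal Python sketch; both file names are placeholders:

# Sketch of the blacklist step: drop any harvested URL whose domain is
# already in our database of bloggers we work with.
# Both file names are placeholders.
with open("blacklist_domains.txt", encoding="utf-8") as f:
    blacklist = {line.strip().lower() for line in f if line.strip()}

with open("cleaned_urls.txt", encoding="utf-8") as f:
    urls = [u.strip() for u in f if u.strip()]

fresh = [u for u in urls if not any(domain in u.lower() for domain in blacklist)]
print(f"{len(fresh)} URLs left after applying the blacklist")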

THE PAGE SCANNER

So now we have 27,311 URLs. That's quite a decent-sized list, so I'm going to use the page scanner.

We have our advertising footprints but I need to remove the word "affiliate".

Because "The Monetisation Modifier" is based around the affiliate keyword, every URL has that word on it. So we need to remove it from this list of footprints, otherwise, it's just going to trigger every single site. 

Now we're going to run the page scanner.

20 minutes later

It's been about 20 minutes and the page scanner has finished checking all those URLs.

It will flag a URL with the word "advertise" in the middle column if it finds one of our footprints on the site, and you can see we got a lot of positive results here.

The page scanner will give us three different files:

  • Advertise
  • Error
  • Not Found

 "Advertise" is all the positive hits where it found our footprint. 

"Error" is obviously errors. 

"Not Found" are URLs where it did not find our footprint. 

We're still going to contact all of these URLs, but I want to prioritise the "Advertise" list.

We do this because the tool isn't perfect and I don't want to miss out on any opportunities after doing all this leg work.

This is our main list, so we're going to paste this back into ScrapeBox and we're going to remove duplicate domains at this point which takes us down to 4,293 blogs.

That's quite a lot of sites, and that's just our good list.

So that's about 4,300 sites that are looking for advertisers and are bloggers talking about maternity clothes or babies.

This is currently our best list of prospects.

MAJESTIC

I want to run these through Majestic and eliminate any low quality sites. 

So we'll trim to root, remove the sub-domains and remove duplicate domains again, which takes us down to just under 4,000.

We'll copy these to a notepad and run them through Majestic. 

Now we have our three lists with the trust flow and citation flow for each site, and we're going to remove anything with a trust flow of less than five.

We've sorted by trust flow, highest to lowest and removed anything with a trust flow that's too high.

We now have 2,269 URLs to contact.

Next we'll filter by domain extension so we can get this down even further.

We've left anything that is a .com, .co.uk, .net or a .org.

That leaves us with 1,940 sites.

AHREFS

I'm not going to run all of these sites through Ahrefs to get the organic traffic estimates. 

The list is too big.

Remember, we can only do 200 at a time in the batch analysis, so we'd have to run the tool 10 times for this entire list, which I don't want to do.

For a situation like this, my virtual assistants share an Ahrefs login and are briefed on how to get the traffic stats themselves.

They can flag any blog without enough traffic before the outreach stage.
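
If you do end up preparing the batches yourself, splitting the list into 200-site files only takes a few lines of Python. A quick sketch; the file names are placeholders:

# Split the final list into batches of 200 for Ahrefs' batch analysis tool.
def batches(items, size=200):
    for i in range(0, len(items), size):
        yield items[i:i + size]

with open("final_list.txt", encoding="utf-8") as f:
    sites = [line.strip() for line in f if line.strip()]

for n, batch in enumerate(batches(sites), start=1):
    with open(f"ahrefs_batch_{n}.txt", "w", encoding="utf-8") as out:
        out.write("\n".join(batch))
# 1,940 sites -> 10 files of up to 200 domains each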

So what we're going to do from here is split this list up between a team of virtual assistants. They can start contacting blogs right away with our template, and we can see how this list performs.

2 weeks later

Our virtual assistants have finally finished contacting this list and negotiating with the bloggers.

We've only done one round of outreach at this point. I wanted to wrap this tutorial up instead of dragging it out for weeks.

If you're interested in seeing some of the blogs who responded, drop me an email and I might be able to share some of them privately.

So this is a summary of our results:

  • We scraped up 1,940 blogs 
  • 852 of those were unrelated, had no contact details or some other roadblock that stopped us from contacting them 
  • 395 blogs responded to our outreach. That's a 20% response rate 
  • 95 blogs agreed to let us guest post which is a 5% success rate

Not a massive result but we would most likely see an increase over the coming weeks as we follow up with this list. 

So what can we do from here to improve these results?

  • We can follow up with the 300 bloggers who responded but didn't agree to guest post. There are plenty of reasons why a blogger wouldn't have agreed. Another email might be all they need to push them over the line. 
  • We could follow up with the non-responders again. We haven't followed up with this list at all at this point.
  • We could have tested a different set of footprints in the page scanner. The footprints that we used have been refined over many scrapes but they do need to be mixed up from time to time, depending on the niche you're working in.
  • We could test different ScrapeBox footprints. Perhaps try "Brands, Models & People" or "The WordPress Method".

CONCLUSION

So hopefully, you took something away from this case study. There are so many different ways to approach outreach like this and the most important thing is that you take the first step and start testing any of the strategies that I've shown you.

Once you get the hang of it, you'll be able to churn out custom links on a massive scale, just like an SEO agency.

8. Case Study 1: NFL Betting

by Karl Adamsas

At this point we've covered most of the strategy you're going to need to start building links like an SEO agency, so I wanted to show you a case study. We'll bring together everything we've learned into one video, and you can see our thought process from start to finish.

We're going to take a real life client of ours, a client who has a website that focuses on NFL betting.

Guest posting still works in shady niches like gambling, but you aren't going to get as good a response rate from the bloggers. Some will refuse to work with a gambling site, but you can still keep them in your database for when you find a non-gambling football client.

What kind of blogs do we want to get links from?

Since the client is involved in NFL betting, I want to focus on sports and football blogs.

Now we need to decide which of our ScrapeBox strategies we're going to use. So far I've taught you six different strategies:

For the keyword footprint, we've got:

  • The SEMrush Method
  • Brands, Models & People
  • The ScrapeBox Keyword Tool

For the relevance/quality footprint, we've got:

  • The WordPress Method 
  • The Comment Method 
  • The Monetisation Modifier

We can now mix and match these footprints to build a super-relevant list of blogs to contact.

Now, I've worked in this niche before, so I know some good footprints to use are "Brands, Models & People" and "The Comment Method".

Depending on how your list performs, we can always go back and try some different strategies to build a better, more responsive list.

We're going to start with "Brands, Models & People", to create a list of relevant keywords. These are the keywords that we want to see on blogs as an indication of relevance.

Brands

For brands, we're going to focus on the team names.

We'll do a quick Google search and find a list. NFL.com has a list of all the current teams, so just grab those team names and paste them into Excel.

Now the team names are still very broad, and one of these names alone does not guarantee that a site is strictly football related. You're going to scrape up a lot of personal interest pages and a lot of news sites, so we're going to use the advanced search operator "intitle:".

What this will do is scrape up blogs that have the team names in the title of the page, not just a mention in the content. This really helps add that extra bit of relevance to the search.

Models

Models is a bit of a tough one for NFL. I'd be inclined to skip it, but we could do something like list all the different positions on a football team: quarterback, running back, defensive end, etc.

So we'll search  Google and find a list. 

You'd have to go through these one by one; some of them would be too general. Names like quarterback and center are obviously not going to work, but wide receiver, tight end, running back, defensive tackle and defensive end would all be great keywords.

People

People is where the gold is going to be on our list.

We're going to list the names of current NFL players. A lot of these players are going to be pretty obscure, and only super-relevant NFL blogs are going to be writing about them.

You are going to get a lot of results doing this, so it's probably best to search one team at a time, rather than try to list every player in the NFL.

We'll start with the Eagles.

Wikipedia has a list of all past and current Eagles players. We can definitely use this, but for the sake of this example, I want a shorter list, so we'll just grab the current roster from the Eagles' website.

Just add these to Excel, and we now have 71 current Eagles' players. Now we're going to add what we've got to ScrapeBox. 

We'll just paste in the player names and wrap the keywords in quotes. Because these guys' names aren't unique, I want to throw another modifier in here, just the word "Eagles", so that any blog we scrape up is going to have a reference to the Eagles along with the player name.

So I've got a notepad doc with just the term "Eagles" in it. Now every query combines the word "Eagles" with a player name.

That's our keyword footprint taken care of.

The Quality Footprint

Moving over to the quality footprint, we decided that we were going to use "The Comment Method".

So we have our notepad doc with numbered comments one to 40, which we just go ahead and import into ScrapeBox, and that's all our search queries ready to go.

We'll only scrape up the first 20 results, and now we've got 2,840 keywords (71 player names x 40 comment footprints).

5 hours later...

It's been about four or five hours, and ScrapeBox has finally finished searching all those queries. Now we're going to clean up this list and shave it down to a more targeted list of prospects.

We currently have over 56,000 URLs to work with.

We'll start by removing the duplicate URLs, which takes us down to 19,000.

The next step is to remove all the unsuitable sites that we always scrape up: all your social media platforms and all the free blogging platforms, things like Reddit, Facebook, Pinterest, Blogspot and Weebly.

That takes us down to just over 10,000 results.

Another step I'm going to take here is to remove anything that is not a ".com". For this particular project, the client only wants US sites.

That takes us down to 9,400.

I'm going to apply my own blacklist against this. Our blacklist is the database of bloggers that already work with us. This step will remove any blog that we're already currently working with so that we don't contact them again, which takes us down to 9,360.

Now we're going to remove duplicate domains, which takes us way down to 744 sites.

Now I want the top level URLs, so I can run it through Majestic, so we'll:

  • trim to root
  • remove the sub-domains
  • remove duplicate domains again

We're now at 577 sites.
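
Those three steps boil down to reducing every URL to its bare domain and de-duping. A rough Python sketch of the same idea; note the naive two-label split mishandles some country extensions like .co.uk, which doesn't matter here because this project only keeps .coms:

# Sketch of "trim to root" + "remove the sub-domains": reduce each URL
# to its bare domain, then remove duplicate domains.
from urllib.parse import urlparse

def to_root(url):
    if "://" not in url:
        url = "http://" + url  # urlparse needs a scheme to find the host
    host = urlparse(url).netloc.lower()
    return ".".join(host.split(".")[-2:])  # keep the last two labels

with open("cleaned_urls.txt", encoding="utf-8") as f:  # placeholder file name
    urls = [line.strip() for line in f if line.strip()]

roots = list(dict.fromkeys(to_root(u) for u in urls))  # de-dupe, keep order
print(f"{len(roots)} unique root domains")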

MAJESTIC

Copy these sites to your clipboard and paste them into Majestic.

So we'll take out anything with a Trust Flow of less than five.

AHREFS

Now we want to take these sites and place them into Ahrefs.

We're still left with some really high quality sites here, which probably won't work with us.

This is why we have a real person go through the sites and do one last visual check. Our virtual assistant can just skip over these sites without bothering with them.

We eliminate any blog with an organic traffic estimate under 100.

What we're left with now is 169 sites. That's not a big enough list, so we're going to go back to the scraping stage, add to our search queries, and run ScrapeBox all over again.

We have a couple of options here:

We can go back to our quality footprint and try adding more comments.

We only went up to 40 comments; we could go up to 50, 60, 100, whatever we want, but this is most likely not going to give us the huge boost in results that we need.

We can also try scraping up more results from Google.

I configured ScrapeBox to only search the first 20 results in Google for each search query; that's only the first two pages.

We could try setting this much higher to maybe the first 100 results.

I don't really want to test that at this stage in the scrape because this particular client wants higher quality sites, so I really want to limit our search to the first couple of pages of Google.

Our best bet is to go back to our "Brands, Models & People" footprint.

In this particular scrape, we've just been focusing on the people element, so I want to expand upon that.

So far we've only included Eagles players, so we could try adding some other teams: a list of current players for the Cowboys, the Giants, the Seahawks and so on. But I don't really want to do that either; I want to stick with the Eagles for now.

In our research at the start of the video, we found a huge list of past and present Eagles players on Wikipedia. I want to test adding some of these in, so we'll add players "A" through "E". This gives us another 400 Eagles players for our scrape.

Now that we've got this new list of names, we just repeat the same steps from the beginning of the tutorial.

The next day

I left that scrape overnight; it was 16,000 keywords (400 names x 40 comment footprints), so it did take some time.

You can see we've scraped up almost 117,000 URLs.

Okay, so now we've cleaned up that list and run it through Ahrefs, and we're left with 744 sites.

We'll grab the original list of 169 blogs and we'll add it to our new list.

Then we'll do a quick de-dupe and we have 848 sites to contact.

We have roughly 840 sites to work with from here, so we'll break this list into two or three parts and give it to a few different virtual assistants.

Their job is to visit each one of these URLs and do one final visual assessment, this is our last line of defence. We'll have a human decide whether or not each site is relevant enough to contact, and if it is, they'll send the template that I showed you earlier, and manage all follow-ups and negotiations.

2 weeks later

It's been a couple of weeks and my virtual assistants are finished contacting this list and negotiating with the bloggers.

At this point, I'm going to have to blur these URLs, since it would be unfair for me to share their details with you guys. If you're interested in seeing some of the blogs who responded, drop me an email and I might be willing to share some of them privately.

This is a summary of our results.

  • We scraped up 848 blogs.
  • 320 of those were unrelated, had no contact details, or some other roadblock that stopped us from contacting them.
  • 215 blogs responded to our outreach; that's a 25.35% response rate.
  • 32 blogs agreed to let us guest post; that's a 3.77% success rate. Not a huge success rate, but not too bad considering the quality of these blogs.

So what can we do from here to improve these results?

  • We can concentrate on the bloggers who responded, but didn't agree to let us guest post.
  • This could actually be a good task for Mailshake, or Buzzstream, or one of the other automated mail services we covered in video 3.
  • We could follow-up with the non-responders again. I usually follow-up a maximum of twice with a list of non-responders.
  • We could have tested using the page scanner. I showed you the page scanner add-on in video 2. The page scanner searches our list for footprints that indicate a site is looking for advertisers. It's a great tool, but it's not foolproof; you will lose some great blogs in the process, so I generally don't use it unless I have a list in the thousands.
  • Most obviously we could test different ScrapeBox footprints, perhaps try "The ScrapeBox Keyword Tool," or "The WordPress Method".

Conclusion

There is no one way to approach outreach like this; you're going to have to test out some different strategies. No two niches are the same, so you're going to have to change your approach depending on the project.

The whole point of this strategy is that you can outsource the bulk of your workload to virtual assistants who will deliver you a list of link opportunities each week that are ready to go, with very little input from you.

Hopefully by now you have a better understanding of how this can work for your link building, and you have everything you need to start scraping up your own link opportunities.

If you have any questions or feedback, please add it below.

7. Keyword Footprint: The ScrapeBox Keyword Tool

by Karl Adamsas

The ScrapeBox Keyword Tool is a tool inside ScrapeBox that collects the most popular keywords being searched in Google by scraping up the auto-suggest results for any keyword you give it.

You'll need to play around with it to get the best results for your specific niche.

Give it a broad keyword and it will scrape up the top 10 search phrases related to your keyword.

You can see here where it pulls it from.

Your ScrapeBox results might look a little bit different than Google's, depending on where you're located and which country you set ScrapeBox to scrape in.

We obviously want a much bigger list, so from here, we'll transfer our results to the left side and then just start the search again.

Now we have 97 unique keywords, probably still not enough.

You just keep transferring your results to the left hand side and the tool will grab each one of those keywords and scrape up all the auto-suggest terms people are searching in Google.

We can export these straight into ScrapeBox.

We now have 409 relevant keywords that people are searching for in Google, which should trigger relevant blogs.

For a list like this, I would not wrap the keywords in quotes. It's unlikely that these keywords are appearing on blogs exactly as is, but more likely as some sort of a variation. Definitely worth a test though.
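
If you ever want to replicate this outside ScrapeBox, you can query Google's auto-suggest endpoint directly. A rough Python sketch; the endpoint is unofficial (it may change or rate-limit you) and the seed keyword is just an example:

# Pull Google's auto-suggest results for a seed keyword, then feed each
# suggestion back in, the same way the Keyword Tool transfers results.
import requests

def suggestions(keyword):
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": keyword},
        timeout=10,
    )
    return resp.json()[1]  # response looks like [query, [suggestions...]]

seen = set()
queue = ["maternity clothes"]  # example seed keyword
for _ in range(2):  # two rounds of "transfer results and search again"
    queue = [s for kw in queue for s in suggestions(kw) if s not in seen]
    seen.update(queue)

print(len(seen), "keywords collected")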

From here, you would use either "The WordPress Method", "The Comment Method" or "The Monetisation Modifier" to build out your search queries and start scraping.

Any questions or comments, leave them below. 

6. The Monetisation Modifier

by Karl Adamsas

In our last video, we looked at using Brands, Models & People for the Keyword Footprint. Now we're going to look at another option for the Quality Footprint: we're calling this one "The Monetisation Modifier".

The idea behind this strategy is to identify bloggers who are trying to monetise their blogs through affiliate programmes. 

The reasoning is that bloggers who are trying to make money from their blogs might be open to other avenues of monetisation. Things like guest posts or link inclusion. Just because they're not currently offering guest posts doesn't mean they wouldn't consider it if we pitched it to them.

You just use the list below as your relevance footprint in ScrapeBox:

  • "This post contains affiliate links"
  • "contain affiliate links"
  • "contains affiliate links"
  • "affiliate disclosure"

This is a common list of footprints that bloggers use to warn their readers that their posts might contain affiliate links.

Not all bloggers are going to take this step, but the ones that do generally have a big enough audience where they need to worry about keeping everything above board and not upsetting their fan base.

You should know how to do this by now: just save this list to your desktop, import it into ScrapeBox like all the other footprints, and start ScrapeBox as you normally would.

Any questions or comments, please leave them below.


5. Brands, Models & People

by Karl Adamsas

This tutorial focuses on a different way to generate a list of relevant keywords for "The Keyword Footprint".

We've already gone through "The SEMrush Method" to build a list of keywords. This strategy is going to take a little bit more brainstorming; it's called "Brands, Models & People".

You would use it instead of "The SEMrush Method" and you would use it in conjunction with "The WordPress Method" or "The Comment Method".

We obviously want relevant blogs to post on, so we're going to come up with a list of targeted keywords our ideal bloggers would be writing about: any brands, models or people that are related to your niche.

  • Brands of products
  • Models of specific items within those brands
  • People who are well-known personalities in that niche

For example, I had a client selling poker chips online.

BRANDS

For brands, we looked at brick-and-mortar casinos and online poker sites.

Wikipedia has a list of thousands of casinos; these are just the US ones. It's not hard to find a list of online poker rooms: PokerStars, partypoker, BetOnline, Everest Poker. These are all great keywords.

MODELS

For models we were looking at specific models of poker sets, poker chips, poker tables, etc

We went to a competing eCommerce store that was already selling poker chips, and you can see all of the different set names we could borrow.

PEOPLE

And for people, we looked at famous poker players, poker personalities, anyone who's won the World Series of Poker. A quick search in Google and it's not hard to find a list of famous poker personalities: Daniel Negreanu, Phil Ivey, Tom Dwan.

Another client we worked with recently was a plumber, which is a bit more difficult than poker chips.

BRANDS

For brands, we looked at white goods: fridges, sinks, toilets.

There are plenty of appliance websites that will list hundreds of different brands. Some of them are too general. Some of them won't have any volume whatsoever, but there's definitely some gold in there.

MODELS

For models, we were looking at specific model numbers of the different fridges, toilets and dishwashers. Any eCommerce store selling white goods will have plenty of model numbers for you to grab.

We left people out because there were no famous plumbers that we could think of for this project.

Although, looking back at this, what we could have done for this category is brainstorm a list of famous handymen: do-it-yourselfers, personalities from reality TV or YouTube, guys who teach you how to fix your drywall, how to fix a broken washing machine, what to do if your toilet's leaking, etc.

So now you have a list of keywords that relevant blogs should be mentioning. You can couple this with "The Comment Method" or "The WordPress Method" to build out your search queries for ScrapeBox.

Any comments or questions, please add them below.

4. The Quality Indicator: Getting More Out of Your Scrapes

by Karl Adamsas

There can be a lot of trial and error when building a responsive list for outreach and it might take you a few different strategies to find out where the gold is.

In video two we used two elements to create a search query for ScrapeBox: "The Keyword Footprint" and "The Relevance Footprint".

I'm going to show you how we can use another strategy to get different results, focusing specifically on the second element. This time we're going to swap out the relevance footprint for a quality footprint.

Now, I've already shown you "The WordPress Method": a list of footprints common to WordPress blogs, designed to scrape up small to medium sized bloggers.

In video two we used "The SEMrush Method" for our keyword footprint and the WordPress method for our relevance footprint.

Instead of a relevance footprint, we're going to use a quality footprint this time.

We're going to call this strategy "The Comment Method" because we're specifically looking for pages with comments on them. So your scrape would consist of "a Keyword Footprint" and "The Comment Method".

Comments on a page as an indication of quality

In our very first video we listed interaction with a blog as an indication of quality.

The fact that people are commenting on a piece of content, tells us that the blog has an audience who are engaged.

This blog is ranking for terms in Google and has real traffic: people reading blog posts and sharing their opinions.

Fake blogs do not have this level of interaction. They might have fake comments, but these are very easy to spot when your virtual assistant does their final assessment of a blog.

Just grab a Notepad doc with a list of numbered comments.

"1 comment"
"2 comments"
"3 comments"
"4 comments"
"5 comments"
"6 comments"
"7 comments"
"8 comments"
"9 comments"
"10 comments"

"11 comments"
"12 comments"
"13 comments"
"14 comments"
"15 comments"
"16 comments"
"17 comments"
"18 comments"
"19 comments"
"20 comments"

"21 comments"
"22 comments"
"23 comments"
"24 comments"
"25 comments"
"26 comments"
"27 comments"
"28 comments"
"29 comments"
"30 comments"

"31 comments"
"32 comments"
"33 comments"
"34 comments"
"35 comments"
"36 comments"
"37 comments"
"38 comments"
"39 comments"
"40 comments"

I've gone up to 40 comments, but you can take this as far as you like. I've seen plenty of articles with over a hundred comments on them. It really depends on what niche you're working in.
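
Rather than typing the list out by hand, you can generate the doc with a couple of lines of Python (adjust the range to go as high as you like):

# Write the numbered-comment footprints, "1 comment" through "40 comments".
with open("comments.txt", "w", encoding="utf-8") as f:
    f.write('"1 comment"\n')
    f.writelines(f'"{n} comments"\n' for n in range(2, 41))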

Save this comment doc to your desktop and then import it into ScrapeBox along with your keyword list.

Depending on how many numbered comments you use and how big your keyword list is, you should end up with a massive list of URLs to clean up.

Any questions or comments, please add them below.

3. How to Contact 1,000+ Blogs a Week to Take Your Links

by Karl Adamsas

In this tutorial we're going to show you how we contact a massive list of blogs and negotiate guest posts.

The list of blogs that you're left with by this stage will generally be in the thousands, sometimes in the tens of thousands. Contacting all of these sites is going to be very time consuming if we don't take some shortcuts.

There are many different ways that we can approach this step

A lot of people like to rely on automated tools like Hunter, Mailshake, or Voila Norbert.

These services are all a little different but have a similar purpose: to automate your outreach and all of your follow-ups.

Mailshake needs you to supply an email address for each site you want to contact.

Hunter and Voila Norbert will try to find all the email addresses associated with a website or a person and perform all your cold outreach for you.

These services sound great in theory, but in our experience they're not the most effective way to perform outreach at scale. They won't find an email address for most of your list, because most bloggers are very protective of this information these days and hide behind onsite forms.

Any publicly listed email accounts have usually been spammed to death and any unsolicited emails are ignored. I'm not saying that all automated outreach services are completely useless, they're just not suited to the type of outreach that we are doing. They can work well on smaller projects and specific types of blogs.

Now, there is a way that you can contact all of these blogs without actually doing any of the outreach yourself.

The best way to contact a large list of blogs is to have a human do it for you

There are quite a few reasons why we use humans to handle all of our outreach.

  • Accuracy: we make sure that we don't waste any leads. A human can get around hurdles that an automated system cannot. Some of our blogs are looking for advertisers like us, and we want to make sure we go through the correct channels so we don't get ignored.
  • Being Thorough: 95% of blogs are only contactable via an onsite form and you're going to have to contact these sites manually anyway. So you're just adding in an extra step using an automated service.
  • Personalisation: A virtual assistant can properly personalise an email with the correct lead's name from the website. This will always improve your response rate.
  • Automation: People hate automation. If your lead can tell that they've been bulk emailed, you're less likely to get a response.
  • Negotiation: A virtual assistant can handle all of the responses and negotiations for you. Even with an automated outreach service, you're going to have to deal with all the responses and the questions and the negotiations that come with contacting 1,000 blogs in a week.
  • Quality Control: Final quality control on all blogs. Even with a super targeted list, you're still going to want to do a final manual review of each blog to make sure they're relevant and they're suitable. It's very easy to train a virtual assistant on what to look for, so that they can flag any unsuitable blogs.

We've tested a lot of automated outreach services, and while they do have their place under certain circumstances, we have yet to find one that is faster, more accurate and more reliable than a human.

UPWORK.COM 

We usually find all of our virtual assistants on Upwork.

The trick to finding the best assistant is to find fluent English speakers and then train them how to do the task.

Don't look for freelancers who pitch themselves as SEO superstars or outreach experts. You can use a program like Camtasia to record your desktop and teach them how you want the outreach performed, then share this video via YouTube with anyone you hire for future training.

This makes building an outreach team very quick and very easy.

An average virtual assistant should be able to contact one site every four minutes.

That's about 15 sites per hour.

If you've got a full-time virtual assistant at eight hours per day, they should be able to contact around 120 blogs per day. Keep in mind, that's just the outreach and doesn't include answering responses from bloggers and negotiating terms and rates.

A team of three assistants should be able to contact 1,000 blogs in a week and handle all responses

In terms of what kind of a response rate you can expect, this is a very difficult thing to quote since it varies so massively between niche and list quality. We regularly see response rates as low as 5% and as high as 30%.

You should have your assistant keep track of the response rate across the whole project so that you can abandon a bad list and have another go at scraping up better prospects.

The outreach template that we use in-house has not changed in years. We've tried plenty of different variations, but this one continues to out-perform all others.

Simplicity is best, just get straight to the point and don't waste anyone's time.

Subject: Advertising on ___________

Hello ___________,

We're interested in advertising on ___________.
Could you please let me know what advertising options you offer? 

Thanks for your time.

___________

You don't even need to mention the guest post at this stage. You can negotiate that once you've got a dialogue going and you have a bit of a relationship with the blogger.
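
If you keep your prospects in a spreadsheet, the template is trivial to merge programmatically too. A minimal sketch; the field names and the example prospect are hypothetical:

# Fill in the blanks of the outreach template for each prospect.
TEMPLATE = """Subject: Advertising on {site}

Hello {name},

We're interested in advertising on {site}.
Could you please let me know what advertising options you offer?

Thanks for your time.

{sender}"""

prospects = [{"site": "example.com", "name": "Jane", "sender": "Karl"}]
for p in prospects:
    print(TEMPLATE.format(**p))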

Don't use Gmail for your outreach; it's best not to involve Google in something that's going to violate their terms of service.

I use a private email service and have all my assistants manage their email through Thunderbird, which is like Outlook, but it's free.

Any questions or feedback, please add them below...


1. How to Build Links Like an SEO Agency

by Karl Adamsas,  Last updated 

We are the Link Building Association. We work as outsourced link builders for SEOs, online marketing agencies, brands, and a few individuals. We build custom links to our clients' specifications on a massive scale.

Because we build so many links, we need a reliable way of creating them every day. Super white hat methods are great for those who have the time, but we need something faster, something more reliable, and something more consistent.

All of our links are paid links.

Guest posts, outsourcing, whatever you want to call it: we find suitable bloggers and offer them money to include our link on their site.

Paid linking gives us total control over the whole process

  • the anchor text used
  • the quality of the link
  • the number of links that we can produce each week

Paid links are completely safe and undetectable as paid if you go about it the right way.

So do paid links still work?

Yes, of course they do. I'm going to show you how we build paid links that are 100% Google safe and completely effective at improving your rankings.

If you take the time to do outreach correctly and you target the right type of blog, you can buy links that are indistinguishable from natural links.

Can Google detect paid links?

I'm sure they can find the bad ones easily enough. Most paid links are pretty easy to spot, and Google no doubt has an algorithm that can find them automatically, just as easily as you can.

The good news is that it's not too difficult to make paid links look natural. The trick is to raise your standards and only approach legitimate blogs.

You need to integrate your link on relevant pages where there are other natural links.

If the post is not labelled as sponsored, your link is included where it genuinely makes sense and surrounded by legitimate links, then your link will be treated like every other legitimate link on that page.

So what type of blogs do we want to get links from?

The foundations of outreach haven't changed too much, but there have been some major changes in the type of blogs that we want to target, how we find them, how we automate the process, and how we scale it up.

Forget Domain Authority, Trust Flow, URL Rating, etc; there's only one reliable metric that we really use to determine if a blog is safe to link on. I'll explain that further in a minute.

Let's start by looking at a bad site first.

This is Redpearloflove.com.

This is obviously a pretty extreme example of a bad link. It's clearly a PBN.

The English is shocking: irrelevant content, badly written, no images, no advertising.

This site exists to sell links.

But the interesting thing is, if we run it through Moz, we have a domain authority of 28.

If we run it through Majestic, we have a trust flow of 20 and a citation flow of 31.

Ahrefs shows a URL rating of 32 and a domain rating of 10.

On paper, these metrics are pretty good, but looking at the site, you can see it's absolute garbage.

What we use as a quality indicator, without having seen the site, is traffic.

Redpearloflove.com's organic traffic is zero.

If it was a quality site, it would be ranking for keywords, there would be people on site, but this site is just a black hole on the Internet. There's nothing going on here.

They have slammed it with links to artificially inflate the domain authority, the trust flow, the URL rating and unfortunately, all those metrics are pretty useless when it comes to judging a site's quality.

What these quality metrics are good for is eliminating a site.

A site with a trust flow of 20 and a domain authority of 28 might not be good, but a site with a domain authority of five and a trust flow of zero is probably going to be bad. So you can use these metrics to eliminate sites, but not really to approve them.


This is mamanatural.com - This is a great example of a quality blog.

I want to start off by saying that I do not work with this blog. They're not selling links to me. I don't know what their advertising status is.

I'm just using them as an example of a quality blog.

Now, if you have a look at one of their articles, you can see that they're linking out, with plenty of resources to back up their claims. They have very in-depth content, lots of pictures, advertising, plenty of social shares and a tonne of comments - 310 on one article alone.

Now, if we check their quality metrics:

  • domain authority of 53,
  • A trust flow of 38 & a citation Flow 46.
  • URL rating 49 & Domain Rating 69

Just looking at their traffic graph, there's been no dips. There's been no penalties. It's steadily growing.

This is a best-case scenario site.

If you could contact this blogger and ask her to include your link in this article, it would be hidden among all these natural links, and it would be treated exactly the same.

Once again, do not contact this blogger. I don't work with her. Generally, sites this big don't work with people like us. I will show you how to find sites that will work with you, in an upcoming video.

I just wanted to quickly cover what we look for in a blog when we pay for links. Paid linking is still very much alive and outreach still definitely works; just make sure the blogs you're getting links from have organic traffic.


2. How to Find Powerful Blogs for Link Building

by Karl Adamsas

Today you're going to learn how to automatically create a list of potential link partners: how to identify the type of blogs we want to link on, and how to find them at scale.

We defined a legit blog as one with organic traffic, but we need to drill down further. 

What does our ideal site look like and what type of site is more likely to work with us?

We don't want to build links on:

  • Private blog networks
  • Free blogging platforms
  • Social platforms

These are too easy to get. We're looking for quality, not quantity.

Attributes we do want to see in a potential link partner: 

  • Evidence of their visitors interacting with this site (ie: comments and social signals)
  • Bloggers who are trying to monetise their sites through advertising (ie: affiliate deals)
  • Bloggers who are open to working with us (ie: offers a media kit)
  • WordPress platform
  • Relevance

What Not To Do

Most outreach tutorials will teach you to search for phrases like:

  • keyword + “submit a guest post”
  • keyword + “guest post”
  • keyword + “guest post by”
  • keyword + “accepting guest posts”
  • keyword + “guest post guidelines”

This will definitely get you results and this strategy has a place in link building, but it's a bit dated now.

You're going to get the same results as your competitors and all your guest posts will be labelled as sponsored.

How We Approach Outreach

The best way to approach outreach coming into 2019 is to find relevant, high quality blogs and offer to buy advertising from them. We focus on finding suitable blogs first, then try to convince them to take our client's link on our terms.

This strategy works on the premise that most blogs are trying to monetise their traffic and a lot of the time these bloggers are open to discussing new advertising options, or even bending their own rules for the right price.

Searching for "keyword" + "guest post" is still a valid strategy for finding the low hanging fruit, but you're going to need to go the extra mile if you want to push ahead of your competitors.

Our strategy is more time consuming and your conversion rate will be lower, but you're going to find more unique and more powerful links this way. We just have to scale it much further to find decent numbers of links.

What we want to do is identify footprints of the aforementioned qualities so that we can find them in Google.

To find suitable blogs on scale, we're going to use software called ScrapeBox.

ScrapeBox is a piece of software that performs searches in Google and records all the results for us.

We feed it key phrases and it automatically combines them to make massive lists of queries and searches them in Google for us. We use it to find link opportunities on a massive scale.

We're going to need a couple of specific elements to make up our search to get the best results.

Instead of using "keyword" + "guest post", the elements that we're going to feed into ScrapeBox are:

  • the Keyword footprint
  • the relevance footprint

The Keyword Footprint 

We obviously want keywords that are relevant for SEO; we want blogs that are relevant to your industry.

The best way to do this is with long tail phrases. Head terms are just too general, and long tail phrases will give us a much stronger indication that a blog is specialised and relevant.

There are many different ways that we can come up with a reliable list of keywords for this step, and I'll show you some more in a different lesson, but for now, I'm just going to show you what we call the SEMrush Method.

The SEMrush Method

We're going to use SEMrush to find all the long tail phrases that a site ranks for. 

Go to Google, grab any of the top 10 blogs, enter it into SEMrush and then just grab all the keywords that it ranks for.

From here, we want to eliminate any of the keywords which are too short.

Depending on your definition of long tail, use the following formula to count the number of words in the cell next to each keyword. From there we can eliminate anything with one, two or three words, whatever you feel is too short to be long tail (or too long).

=IF(LEN(TRIM(A1))=0,0,LEN(TRIM(A1))-LEN(SUBSTITUTE(A1," ",""))+1)

Now that we have a decent list of keywords, we need to move on to what we call the relevance footprint. 

The relevance footprint

The relevance footprint is just that. It's an indication of relevance or quality.

In this tutorial, we're going to use what I call the WordPress method.

In this strategy, we're specifically looking for blogs on blogging platforms, mainly WordPress. The bloggers running WordPress blogs are our best targets and that's who we're trying to find, but we'll still turn up a lot of non-WordPress sites with this method.

WordPress is super common and it's used by the vast majority of small to medium sized bloggers. These are the guys who are going to be willing to work with us, who are going to have the most specialised blogs, and who are going to give you the best prices.

When you start talking to full-time bloggers or blogs that are run by teams of people, you're going to pay top dollar for your links.

The smaller bloggers are generally more responsive to our emails and will be the most lenient about our requirements, such as not labelling a post as sponsored.

This list of footprints is designed to scrape up blogging platforms:

  • "Leave a Reply"
  • "leave a comment"
  • "add comment"
  • "comment here"
  • "all fields are required"
  • "notify me of new comments via email"
  • "fields with” “are required"
  • "This site uses Akismet to reduce spam"
  • "Blog Archive"
  • "Filed Under"
  • "tagged with"
  • "Save my name, email, and website in this browser for the next time I comment"
  • "No content on this site may be reused in any fashion without written permission"
  • "By using this form you agree with the storage and handling of your data by this website"

We now have our keyword footprints and our relevance footprints. 

We scraped up a massive list of long tail keywords using SEMrush, and we have our list of WordPress footprints designed to help us find WordPress blogs.

So now we want to combine all this together in ScrapeBox.

First of all, grab your list of long tail keywords, then just paste it in the top left window of ScrapeBox.

I normally wrap my keywords in quotes (it's always worth experimenting with leaving this step out).

Now grab a notepad doc, put all your WordPress footprints in it and save it to your desktop.

We have 7,091 keywords from SEMrush and here we have our list of WordPress footprints.

What ScrapeBox is going to do is it's going to take this list of WordPress footprints, and it's going to combine each of those footprints with all 7,000 of these SEMrush keywords. It's going to combine every WordPress footprint with every SEMrush keyword, and it's going to give us a huge list of keywords.

14 WordPress Footprints x 7,091 SEMrush Keywords = 99,274 Keywords

So, if we just press this button here and then navigate to where our WordPress footprints are, you can see that it's jumped up to 99,274 keywords; it's combined our two lists together.
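
If you want to sanity-check that merge outside of ScrapeBox, it's just a Cartesian product of the two lists. A minimal Python sketch, assuming both lists are saved as plain text files (the file names are placeholders):

# Pair every WordPress footprint with every SEMrush keyword, the same
# merge ScrapeBox performs. The footprint file already contains the quotes.
def load_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

keywords = load_lines("semrush_keywords.txt")        # 7,091 long tail phrases
footprints = load_lines("wordpress_footprints.txt")  # 14 WordPress footprints

# Wrap each keyword in quotes and pair it with every footprint.
queries = [f'"{kw}" {fp}' for kw in keywords for fp in footprints]
print(len(queries))  # 7,091 x 14 = 99,274 search queries

with open("queries.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(queries))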

In this box here we can decide how many results we're going to scrape per search. I usually get the first 20 results in Google. You can scrape up to 1,000 if you wish, but that's really not necessary; you get a lot of junk after the first few pages.

Then you just hit start harvesting. Make sure Google is selected then hit start.

Now because this is almost 100,000 keywords, it's going to take a really long time, I would normally leave this overnight. For the sake of this tutorial, we'll just let it run for a few hours, we'll collect a few results and I'll show you how we clean up the list.

I let ScrapeBox run for a couple of hours and we have just over 37,000 results.

Now we want to get this list down as much as we can and clean it up.

First step, remove the duplicate URLs, which takes us down to 24,000.

The next thing we want to take out is all the free blogging platforms and all your social networks (ie: Blogspot, Reddit, Facebook, Weebly, etc). If you click on the Remove button and choose "remove URLs containing...", you can give it keywords and it will delete any URL that contains that word.

This is a list of words we usually remove from most projects. If you give your scrape a quick scan, you should see anything else that might need removing.

  • BlogSpot
  • WordPress.com (ie: yoursite.wordpress.com)
  • Weebly
  • Pinterest
  • Facebook
  • Reddit
  • .gov
  • .edu

For this project we're only interested in .coms, so we'll choose "remove URLs not containing..." and give it ".com", which takes us down to 7,000 results.
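
If you're curious what those cleanup steps amount to, here's an illustrative Python version of the same three filters (the file name and removal list are just examples):

# Re-implementation of "remove duplicate URLs" and the two
# "remove URLs containing / not containing" filters.
REMOVE_IF_CONTAINS = ["blogspot", ".wordpress.com", "weebly",
                      "pinterest", "facebook", "reddit", ".gov", ".edu"]

with open("harvested_urls.txt", encoding="utf-8") as f:
    urls = [u.strip() for u in f if u.strip()]

urls = list(dict.fromkeys(urls))  # remove duplicate URLs, keep order
urls = [u for u in urls if not any(bad in u.lower() for bad in REMOVE_IF_CONTAINS)]
urls = [u for u in urls if ".com" in u.lower()]  # remove URLs not containing .com

print(f"{len(urls)} URLs left after cleanup")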

The Page Scanner 

This step is optional. I would usually use this on a much bigger list of sites, but it can be very helpful. We're going to use an add-on called the Page Scanner.

The Page Scanner scans each one of the URLs that we've scraped up and looks for any of these keywords.

This is a list of words that tells us a site is actively looking for advertisers. It would give us a list of quick wins: sites that are looking for advertisers, which we'd contact first. Just start the Page Scanner and leave it for a few minutes (there's a rough sketch of what it's checking after the list below).

  • guest post
  • sponsored
  • advertise
  • advertising
  • promoted
  • promoted content
  • media kit
  • affiliate
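
Under the hood, the check is simple: fetch each page and flag it if any footprint appears in the HTML. A rough sketch of the idea, not the actual add-on; it assumes the requests package:

# Flag a URL if any advertiser footprint appears in its HTML.
import requests

FOOTPRINTS = ["guest post", "sponsored", "advertise", "advertising",
              "promoted", "promoted content", "media kit", "affiliate"]

def looks_for_advertisers(url):
    try:
        html = requests.get(url, timeout=10).text.lower()
    except requests.RequestException:
        return None  # the scanner's "Error" bucket
    return any(fp in html for fp in FOOTPRINTS)  # True = "Advertise" bucket

print(looks_for_advertisers("https://example.com"))  # placeholder URL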

Once the Page Scanner has finished, just export the results to your desktop. Now we'll grab that list of sites that are looking for advertisers and we'll put them back into ScrapeBox.

From here, remove duplicate domains and we're down to 735 suitable sites.

Trust Flow with Majestic 

Now grab your list of URLs and export it to your desktop.

Go to majestic.com and run your list through their bulk URL checker. All we're interested in here is the trust flow and the citation flow.

The reason we use Majestic for this step is because we can get the results very quickly. We can get the Domain Authority with ScrapeBox, but it takes a very long time.

In our first tutorial, we said that these quality metrics are pretty useless for judging a site's quality.

What I said was, sites with a high Trust Flow aren't necessarily good, but sites with a low Trust Flow are usually very bad. 

We're going to use Majestic to eliminate all the low Trust Flow sites. We're also going to get rid of some of the super high Trust Flow sites, because those sites are very unlikely to respond to us.

We'll filter out anything with a Trust Flow less than five and sort by Trust Flow, highest to lowest.

The sites with the highest Trust Flow (Amazon, TripAdvisor, YouTube, eBay, etc) can be removed because they're never going to respond to us. We'll then remove anything with a Trust Flow above 35.
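
If you export the Majestic results to CSV, the same band filter is easy to script. A sketch; the column names are assumptions, so match them to your actual export:

# Keep only sites with 5 <= Trust Flow <= 35, sorted highest first.
import csv

keep = []
with open("majestic_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        tf = int(float(row["TrustFlow"]))  # assumed column name
        if 5 <= tf <= 35:
            keep.append((row["Domain"], tf))  # assumed column name

keep.sort(key=lambda pair: pair[1], reverse=True)
print(f"{len(keep)} sites in the Trust Flow band")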

Check THEIR organic traffic

Now we're left with a list of 462 sites, but we want to make sure that these sites all have traffic.

We're going to jump over to Ahrefs and check all of our URLs in the batch analysis tool. This will give us an estimate of their current organic traffic. We can only check 200 sites at a time, so if you have a really big list, it's the perfect job for a virtual assistant.

Now that we've run that list through Ahrefs, we can do a little bit more cleaning up and eliminate anything without enough traffic.

We're left with a list of 420 suitable sites.

So, each one of the blogs on our list:

  • contains one of our long tail keywords
  • is on a blogging platform like WordPress
  • is looking for advertisers
  • has traffic

There are plenty of steps that I could have scaled up to get a much bigger list of blogs, but that's a pretty decent start.

Any questions or comments, please add them below.
