Posted by Jeff_Baker
In January of 2018, Brafton began a massive organic keyword targeting campaign, publishing over 90,000 words of blog content.
Did it work?
Well, yeah. We doubled the number of total keywords we rank for in less than six months. By using our advanced keyword research and topic writing process published earlier this year, we also increased our organic traffic by 45% and the number of keywords ranking in the top ten results by 130%.
But we got a whole lot more than just traffic.
From planning to execution and performance tracking, we meticulously logged every aspect of the project. I’m talking blog word count, MarketMuse performance scores, on-page SEO scores, days indexed on Google. You name it, we recorded it.
As a byproduct of this nerdery, we were able to draw juicy correlations between our target keyword rankings and variables that can affect and predict those rankings. But specifically for this piece...
How well keyword research tools can predict where you will rank.
A little background

We created a list of keywords we wanted to target in blogs based on optimal combinations of search volume, organic keyword difficulty scores, SERP crowding, and searcher intent.
We then wrote a blog post targeting each individual keyword. We intended for each new piece of blog content to rank for the target keyword on its own.
With our keyword list in hand, my colleague and I manually created content briefs explaining how we would like each blog post written to maximize the likelihood of ranking for the target keyword. Here’s an example of a typical brief we would give to a writer:
While Moz wins the title of top-performing keyword research tool, note that any keyword research tool with organic difficulty functionality will give you an advantage over flipping a coin (or using the Google Keyword Planner Tool).
As you will see in the following paragraphs, we have run each tool through a battery of statistical tests to ensure that we painted a fair and accurate representation of its performance. I’ll even provide the raw data for you to inspect for yourself.
Let’s dig in!
The Pearson Correlation Coefficient

Yes, statistics! For those of you currently feeling panicked and lobbing obscenities at your screen, don’t worry — we’re going to walk through this together.
In order to understand the relationship between two variables, our first step is to create a scatter plot chart.
Below is the scatter plot for our 50 keyword rankings compared to their corresponding Moz organic difficulty scores.
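If you want to recreate this kind of chart from your own ranking data, here’s a minimal sketch using pandas and matplotlib. The file name and column names (`difficulty`, `rank`) are hypothetical stand-ins, not names from our study:

```python
# Minimal sketch: difficulty score vs. actual Google ranking position.
# "keyword_rankings.csv", "difficulty", and "rank" are hypothetical names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("keyword_rankings.csv")  # one row per target keyword

plt.scatter(df["difficulty"], df["rank"])
plt.xlabel("Organic difficulty score")
plt.ylabel("Google ranking position")
plt.gca().invert_yaxis()  # rank 1 is best, so plot it at the top
plt.title("Keyword difficulty vs. actual ranking (n=50)")
plt.show()
```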
Phew. Still with me?

The Pearson Correlation Coefficient (PCC) measures the strength of the linear relationship between two variables on a scale from -1 to +1. Each of these scatter plots will have a corresponding PCC score that tells us how well each tool predicted where we would rank, based on its keyword difficulty score.
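Computing the PCC is a one-liner with SciPy. A minimal sketch, reusing the hypothetical file from the chart above; it also returns the p-value (P-val) you’ll see in the tables below:

```python
# Minimal sketch: compute the PCC and its p-value for one tool's scores.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("keyword_rankings.csv")  # hypothetical file from the chart above
pcc, p_value = pearsonr(df["difficulty"], df["rank"])
print(f"PCC: {pcc:.3f} (p = {p_value:.3f})")
```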
We will use the following table from statisticshowto.com to interpret the PCC score for each tool:
| Correlation Coefficient (R) Score | Key |
|---|---|
| +.70 or higher | Very strong positive relationship |
| +.40 to +.69 | Strong positive relationship |
| +.30 to +.39 | Moderate positive relationship |
| +.20 to +.29 | Weak positive relationship |
| +.01 to +.19 | No or negligible relationship |
| 0 | No relationship [zero correlation] |
| -.01 to -.19 | No or negligible relationship |
| -.20 to -.29 | Weak negative relationship |
| -.30 to -.39 | Moderate negative relationship |
| -.40 to -.69 | Strong negative relationship |
| -.70 or lower | Very strong negative relationship |
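If you’d rather not eyeball the table every time, the mapping is simple to encode. A minimal sketch:

```python
# Minimal sketch: translate an R score into the labels from the table above.
def relationship_label(r: float) -> str:
    strength = abs(r)
    if strength >= 0.70:
        label = "Very strong"
    elif strength >= 0.40:
        label = "Strong"
    elif strength >= 0.30:
        label = "Moderate"
    elif strength >= 0.20:
        label = "Weak"
    elif strength >= 0.01:
        return "No or negligible relationship"
    else:
        return "No relationship [zero correlation]"
    direction = "positive" if r > 0 else "negative"
    return f"{label} {direction} relationship"

print(relationship_label(0.405))  # "Strong positive relationship" (SpyFu's PCC)
```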
In order to visually understand what some of these relationships would look like on a scatter plot, check out these sample charts from Laerd Statistics.
The closer the numbers cluster towards the regression line in either a positive or negative slope, the stronger the relationship.
That was the tough part - you still with me? Great, now let’s look at each tool’s results.
Test 1: The Pearson Correlation Coefficient

Now that we’ve all had our statistics refresher course, we will take a look at the results. We will evaluate each tool’s PCC score, the statistical significance of the data (P-val), the strength of the relationship, and the percentage of keywords the tool was able to find and report keyword difficulty values for.
In order of performance:
#1: Moz

#2: SpyFu

Visually, SpyFu shows a fairly tight clustering among low-difficulty keywords, and a couple of moderate outliers among the higher-difficulty keywords.
| SpyFu Organic Difficulty Predictability | |
|---|---|
| PCC | 0.405 |
| P-val | .01 (P<0.05) |
| Relationship | Strong |
| % Keywords Matched | 80.00% |
SpyFu came in right under Moz with a 1.7% weaker PCC (.405). However, the tool ran into the largest issue with keyword matching: only 40 of the 50 keywords produced keyword difficulty scores.
#3: SEMrush

#4: KW Finder

#5: Ahrefs

Ahrefs comes in fifth by a large margin at .316, barely clearing the .30 threshold into “moderate relationship” territory.
| Ahrefs Organic Difficulty Predictability | |
|---|---|
| PCC | 0.316 |
| P-val | .03 (P<0.05) |
| Relationship | Moderate |
| % Keywords Matched | 100% |
On a positive note, the tool seems to be very reliable at the low end (notice the tight clustering of low difficulty scores), and it matched all 50 keywords.
#6: Google Keyword Planner Tool

And the resulting scores are as follows:

| Tool | PCC Test |
|---|---|
| Moz | 10 |
| SpyFu | 9.8 |
| SEMrush | 8.8 |
| KW Finder | 8.7 |
| Ahrefs | 7.7 |
| KPT | 1.1 |
Moz takes the top position for the first test, followed closely by SpyFu (with an 80% match rate caveat).
Test 2: Adjusted Pearson Correlation Coefficient

Let’s call this the “Mulligan Round.” In this round, assuming sometimes things just go haywire and a tool flat-out misses, we will remove the three most egregious outliers from each tool’s results.
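One reasonable way to implement this (the exact criterion for “most egregious” is my assumption here) is to drop the three points that sit farthest from a least-squares fit, then recompute the PCC:

```python
# Hedged sketch of the "Mulligan Round": drop the three points with the
# largest absolute residuals from a least-squares fit, then recompute the PCC.
import numpy as np
from scipy.stats import pearsonr

def adjusted_pcc(difficulty, rank, n_outliers=3):
    difficulty = np.asarray(difficulty, dtype=float)
    rank = np.asarray(rank, dtype=float)
    slope, intercept = np.polyfit(difficulty, rank, 1)
    residuals = np.abs(rank - (slope * difficulty + intercept))
    keep = np.argsort(residuals)[:-n_outliers]  # drop the n largest residuals
    return pearsonr(difficulty[keep], rank[keep])[0]
```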
Here are the adjusted results for the handicap round:
| Adjusted Scores (3 outliers removed) | PCC | Difference (+/-) |
|---|---|---|
| SpyFu | 0.527 | +0.122 |
| SEMrush | 0.515 | +0.150 |
| Moz | 0.514 | +0.101 |
| Ahrefs | 0.478 | +0.162 |
| KW Finder | 0.470 | +0.110 |
| Keyword Planner Tool | 0.189 | +0.144 |
As noted in the original PCC test, some of these tools really took a big hit from major outliers. Ahrefs and SEMrush benefited the most from having their outliers removed, gaining .162 and .150 respectively, while Moz benefited the least from the adjustments.
For those of you crying out, “But this is real life, you don’t get mulligans with SEO!”, never fear, we will make adjustments for reliability at the end.
Here are the updated scores at the end of round two:
| Tool | PCC Test | Adjusted PCC | Total |
|---|---|---|---|
| SpyFu | 9.8 | 10 | 19.8 |
| Moz | 10 | 9.7 | 19.7 |
| SEMrush | 8.8 | 9.8 | 18.6 |
| KW Finder | 8.7 | 8.9 | 17.6 |
| Ahrefs | 7.7 | 9.1 | 16.8 |
| KPT | 1.1 | 3.6 | 4.7 |
SpyFu takes the lead! Now let’s jump into the final round of statistical tests.
Test 3: Resampling

Being that there has never been a study performed on keyword research tools at this scale, we wanted to ensure that we explored multiple ways of looking at the data.
Big thanks to Russ Jones, who put together an entirely different model that answers the question: "What is the likelihood that the keyword difficulty of two randomly selected keywords will correctly predict the relative position of rankings?"
He randomly selected 2 keywords from the list and their associated difficulty scores.
Let’s assume one tool says that the difficulties are 30 and 60, respectively. What is the likelihood that the article targeting the keyword scored 30 ranks higher than the article targeting the keyword scored 60? He then repeated the test 1,000 times.
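A minimal sketch of that simulation, assuming two parallel lists of difficulty scores and actual rankings (with None where a tool had no data for a keyword):

```python
# Hedged sketch of Russ Jones's resampling test; data structures are assumed.
import random

def resample_accuracy(difficulty, rank, trials=1000, seed=42):
    # Pair scores with rankings, dropping keywords the tool had no data for.
    pairs = [p for p in zip(difficulty, rank) if None not in p]
    rng = random.Random(seed)
    hits = done = 0
    while done < trials:
        (d1, r1), (d2, r2) = rng.sample(pairs, 2)
        if d1 == d2 or r1 == r2:  # throw out tied scores or tied rankings
            continue
        done += 1
        # A lower difficulty score should predict a better (numerically lower) rank.
        hits += (d1 < d2) == (r1 < r2)
    return hits / trials
```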
He also threw out examples where the two randomly selected keywords shared the same rankings, or data points were missing. Here was the outcome:
| Resampling | % Guessed correctly |
|---|---|
| Moz | 62.2% |
| Ahrefs | 61.2% |
| SEMrush | 60.3% |
| KW Finder | 58.9% |
| SpyFu | 54.3% |
| KPT | 45.9% |
As you can see, this test was particularly critical of each of the tools. As we are starting to see, no one tool is a silver bullet, so it is our job to see how much each tool helps us make more educated decisions than guessing.
Most tools stayed pretty consistent with their levels of performance from the previous tests, except SpyFu, which struggled mightily with this test.
In order to score this test, we use 50% as the baseline (the equivalent of a coin flip, worth zero points) and scale each tool relative to how much better it performed than a coin flip, with the top scorer receiving ten points.
For example, Ahrefs scored 11.2 percentage points better than a coin flip, which is 8.2% less (in relative terms) than Moz’s 12.2 points, giving Ahrefs a score of 9.2 (11.2 / 12.2 × 10).
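In other words, each tool’s margin over the 50% baseline is scaled against the top performer’s margin. A quick sketch of the arithmetic:

```python
# Minimal sketch: a coin flip (50%) is worth 0 points; the top performer's
# margin over 50% (Moz's 62.2%) is worth 10.
def resampling_points(accuracy_pct, top_accuracy_pct=62.2):
    return 10 * (accuracy_pct - 50) / (top_accuracy_pct - 50)

print(round(resampling_points(61.2), 1))  # Ahrefs -> 9.2
print(round(resampling_points(54.3), 1))  # SpyFu -> 3.5
```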
The updated scores are as follows:
| Tool | PCC Test | Adjusted PCC | Resampling | Total |
|---|---|---|---|---|
| Moz | 10 | 9.7 | 10 | 29.7 |
| SEMrush | 8.8 | 9.8 | 8.4 | 27 |
| Ahrefs | 7.7 | 9.1 | 9.2 | 26 |
| KW Finder | 8.7 | 8.9 | 7.3 | 24.9 |
| SpyFu | 9.8 | 10 | 3.5 | 23.3 |
| KPT | 1.1 | 3.6 | -4 | .7 |
So after the last statistical accuracy test, we have Moz consistently performing alone in the top tier. SEMrush, Ahrefs, and KW Finder all turn in respectable scores in the second tier, followed by the unique case of SpyFu, which was outstanding in the first two tests (albeit only returning results for 80% of the tested keywords), then fell flat on the final test.
Finally, we need to make some usability adjustments.
Usability Adjustment 1: Keyword Matching

A keyword research tool doesn’t do you much good if it can’t provide results for the keywords you are researching. Plain and simple, we can’t treat two tools as equals if they don’t have the same level of practical functionality.
To explain in practical terms, if a tool doesn’t have data on a particular keyword, one of two things will happen:
- You have to use another tool to get the data, which defeats the purpose of using the original tool.
- You miss an opportunity to rank for a high-value keyword.
Neither scenario is good, so we developed a penalty system: for each 10% of keywords below a 100% match rate, we deducted one point from the final score, with a maximum deduction of five points. For example, if a tool matched 92% of the keywords, we deducted .8 points from the final score.
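The deduction is simple arithmetic. A minimal sketch:

```python
# Minimal sketch: one point deducted per 10% below a 100% match rate, capped at 5.
def match_penalty(match_rate_pct):
    return min(5.0, (100 - match_rate_pct) / 10)

print(match_penalty(92))  # 0.8, as in the example above
print(match_penalty(80))  # SpyFu: 2.0
```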
One may argue that this penalty is actually too lenient considering the significance of the two less-than-ideal scenarios outlined above.
The penalties are as follows:
| Tool | Match Rate | Penalty |
|---|---|---|
| KW Finder | 100% | 0 |
| Ahrefs | 100% | 0 |
| Moz | 100% | 0 |
| SEMrush | 92% | -.8 |
| Keyword Planner Tool | 88% | -1.2 |
| SpyFu | 80% | -2 |
Please note we gave SEMrush a lot of leniency, in that technically, many of the keywords evaluated were…