
What is SEObserver?

Here we are finally.

The first contractions have started.

The SEObserver project, which has taken months to grow, will finally see the light of day. What will it look like? And above all, how will people respond to it? A lot of unresolved questions.

But the first and most important question is: What is SEObserver?

The frustration of every SEO

Like most projects, SEObserver was born out of frustration. The frustration of having to fight against a giant, Google, without knowing its weaknesses precisely and in real time.

In addition to that, the frustration of seeing some websites reach the top of the rankings with seemingly little effort, and of having to spend hours analyzing their strategy.

And finally, the unbearable frustration: having to endure the useless and almost insulting advice of Google’s Search Quality team, people who have never done SEO or lived off the earnings of their own websites, and who do not truly know what works. “Create quality content”, “Don’t be evil”. Oh, yeah…

The ideal SEO tool

I went on a quest to find the perfect tool to study, spy on and finally control Google.

From a technical point of view, the ideal tool to actually verify what works and what doesn’t is, in my opinion, one that:

  • tracks the rankings of all the websites on the Internet, or a representative sample of them,
  • tracks all the on-site/off-site changes to identify the elements that have had an impact.

Finally, the tool that was sorely missing until now is a control tower that detects every significant change within the SERPs and provides the right information at the right time.

However, note that I am not talking about trivial information such as “a competitor has moved from 4th place to 3rd place on ‘cheap chocolate pizza new york’”. That is the classic rank tracking we all know.

I am talking about really, really relevant information, such as:

  • this website has been penalized on this keyword that yields $2,000/day,
  • this competitor created a backlink in this very place last night,
  • here is the list of all the recent backlinks created by the top 20 websites ranking on “car insurance”,
  • this competitor has leaped forward in the SERPs, which can be explained by a title change 3 weeks ago and a backlink campaign, started 2 months ago, based on comments posted on WordPress websites.

Now do you get what I’m talking about?
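
To make that concrete, here is a toy sketch of the kind of alert rule I mean. The data shapes, names and the threshold are deliberately simplified assumptions for illustration, not the real pipeline:

```python
# Toy alert rule: compare two dated ranking snapshots and flag any site
# that lost many positions at once (a likely penalty). Shapes, names and
# the threshold are simplified assumptions.
yesterday = {"car insurance": {"example.com": 3, "rival.com": 5}}
today     = {"car insurance": {"example.com": 14, "rival.com": 4}}

PENALTY_DROP = 10  # assumed: losing 10+ positions overnight smells like a penalty

def detect_alerts(before, after, drop_threshold=PENALTY_DROP):
    alerts = []
    for keyword, old_ranks in before.items():
        new_ranks = after.get(keyword, {})
        for domain, old_pos in old_ranks.items():
            new_pos = new_ranks.get(domain)
            if new_pos is None:
                alerts.append((keyword, domain, old_pos, "dropped out of the tracked SERP"))
            elif new_pos - old_pos >= drop_threshold:
                alerts.append((keyword, domain, old_pos, f"fell to position {new_pos}"))
    return alerts

for keyword, domain, old_pos, event in detect_alerts(yesterday, today):
    print(f"[{keyword}] {domain} (was #{old_pos}): {event}")
```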

In fact, the idea is really to adopt the mindset of a hacker towards his prey: study, analyze and “crack” the code to be able to do what you want.

Google spies on us so much every day that spying on the spy for once is only fair game, isn’t it?

Here is SEObserver

A picture is worth a thousand words, so here is my vision of SEO monitoring tools.

I had in mind to monitor not a dozen, not a hundred, but several tens of thousands of keywords. That way, I would have a representative sample of the most important niches SEO-wise.

I started to scrape, or should I rather say “monitor”, Google, as start-up owners like to put it.

Then, once a particular website is targeted, I decided to focus on the two essential factors in SEO: on-site and off-site.

The off-site part was rather easy: all that was needed was a major data provider (whose data is very, very expensive!) to extract a list of backlinks, which then had to be sorted to go further:

  • analyze the CMS,
  • retrieve the PageRank,
  • check if the page is indexed or not,
  • check the number of outbound links,
  • retrieve the number of shares on social networks.

With an option to sort manually, then semi-automatically, to identify the type of backlink (organic, paid, comment/forum spam, etc.).
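
To give an idea of that enrichment step, here is a simplified sketch using requests and BeautifulSoup: it infers the CMS from the generator meta tag and counts outbound links. The PageRank, indexation and social-share lookups depend on external services, so they are left as placeholders here:

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def enrich_backlink(url: str) -> dict:
    """Fetch a backlink page and extract the easy on-page signals."""
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    page_host = urlparse(url).netloc

    # CMS hint: many CMSs advertise themselves in a <meta name="generator"> tag.
    generator = soup.find("meta", attrs={"name": "generator"})
    cms = generator.get("content", "unknown") if generator else "unknown"

    # Outbound links: anchors whose host differs from the page's own host.
    outbound = [
        a["href"] for a in soup.find_all("a", href=True)
        if urlparse(a["href"]).netloc not in ("", page_host)
    ]

    return {
        "url": url,
        "cms": cms,
        "outbound_links": len(outbound),
        # The remaining signals need external services, so they stay placeholders:
        "pagerank": None,       # toolbar PageRank lookup
        "indexed": None,        # e.g. checking the URL against Google's index
        "social_shares": None,  # share counts from the social networks' APIs
    }

print(enrich_backlink("https://example.com/"))
```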

I had a harder time with the on-site part. It is really difficult to extract on-site SEO information that can be easily sorted, so I decided to focus on the simplest and most important elements:

  • changes in the title tag
  • evolution in the number of words per page (added or deleted content)
  • number of indexed pages
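
Here again, a simplified sketch of the idea: take a dated snapshot of a page’s title and word count, and diff it against the previous one (the indexed-page count requires querying Google, so it is out of scope here; the names and snapshot shape are illustrative assumptions):

```python
import re

import requests
from bs4 import BeautifulSoup

def snapshot(url: str) -> dict:
    """Capture the on-site elements we track: title tag and word count."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    words = len(re.findall(r"\w+", soup.get_text(" ")))
    return {"title": title, "words": words}

def diff(old: dict, new: dict) -> list:
    """Human-readable list of on-site changes between two snapshots."""
    changes = []
    if old["title"] != new["title"]:
        changes.append(f"title changed: {old['title']!r} -> {new['title']!r}")
    if old["words"] != new["words"]:
        delta = new["words"] - old["words"]
        changes.append(f"word count: {old['words']} -> {new['words']} ({delta:+d})")
    return changes

# Compare today's crawl with a previously stored snapshot (made-up values):
previous = {"title": "Cheap Chocolate Pizza | NYC", "words": 540}
for change in diff(previous, snapshot("https://example.com/")):
    print(change)
```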

Unity is strength

The big challenge in everything I have just described is the cost.

If scraping Google is extremely expensive, costing an arm (mainly because several hundred IPs are needed to scrape tens of thousands of keywords every day), access to backlink data from one of the market leaders costs a leg.

Even so, a scrape is a scrape, and once the job is done, it does not cost much more to share the access with 1 or 100 people.

The idea here was to strike a decisive blow and create a sort of “Groupon” of rank tracking: instead of a “1 project = 1 euro” model, I preferred to scrape the top 50,000 (and soon the top 100,000) most relevant keywords according to AdWords, with one radically different element: every client who subscribes can access the complete database and history.

And if a client wants to track keywords that are not yet in the database, he/she can simply add them.

Basically, where does that lead?

The tool, as it stands today, includes three modules:

An advanced keyword database, with the option to download the whole database in CSV format. This data alone is worth several hundred dollars.

A screen with a time-travel option within the SERPs. You can view the SERPs on two specific dates, and when you hover over a particular website, you can view its evolution.
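
To illustrate the principle, a toy sketch of the underlying data shape: dated SERP snapshots from which a domain’s evolution series is derived on demand (the real storage is obviously more involved; dates and domains below are made up):

```python
# Toy example: SERPs stored per date, as ordered lists of domains.
snapshots = {
    "2014-03-01": ["rival.com", "example.com", "other.com"],
    "2014-03-15": ["example.com", "rival.com", "other.com"],
    "2014-04-01": ["example.com", "other.com", "rival.com"],
}

def evolution(domain):
    """1-based position of `domain` on each snapshot date (None if absent)."""
    series = []
    for date in sorted(snapshots):
        serp = snapshots[date]
        position = serp.index(domain) + 1 if domain in serp else None
        series.append((date, position))
    return series

# Hovering over rival.com would reveal its slide: 1 -> 2 -> 3
print(evolution("rival.com"))
```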

A detail screen with a graph of a website’s rankings and a list of the changes it has made, with several filters available.

And of course, a screen listing the latest backlinks created by all the monitored websites (plus a widget that will be added to the homepage).

All of this without any lookup restrictions, at least for the ranking data, within reasonable limits.

OK, ok, but when can I access it?

There are two details that need to be settled before registrations can properly open:

  1. The completion of the deal with the backlink data provider, i.e. MajesticSEO. Most tools offer to connect your Ahrefs or MajesticSEO account to their tool, which ends up making you pay for several subscriptions at the same time. I want to proceed differently and provide you this data directly, which is rather complicated because it puts me in the position of a “data reseller”, which has a lot of business implications for the providers, who generally do not like the sound of that. They naturally prefer to have a “direct link” with the final client rather than going through me. Well, the negotiations were tight, but a deal has been reached. I am currently validating several details that are taking a bit longer than expected.
  2. The completion of the drafting and approval of my Terms and Conditions by my lawyer. A time-consuming and thankless task that is absolutely necessary to launch a tool. I want to be able to protect the users who might want to spend several hours on the tool doing proper research and development, and to protect myself from scrapers, both technically and legally.

Registrations will open progressively, in batches of 10-20 users, so as not to overload the server and to assess scalability. First come, first served.

To get priority access at the launch of SEObserver, enter your email below.