What Google Doesn’t Want You to Know About Panda
Despite all the conversations about what Google is looking for when indexing pages, there is still one thing they focus on heavily: incoming links from other sites. In fact, from what I’ve seen in my own SEO studies, incoming links are still one of the main factors that determine the rank of any site – but not in the way they used to be. I believe the Panda update everyone is still talking about was really just a revamp of this strategy.
Not all links are the same, and here are some thoughts and ideas I have about what works and what does not. A lot of this comes from testing, but also from conversations with Gail Gardner of Growmap.
While this has been pointed out many times before, there seems to have been a substantial change to Google’s algorithm. Link-building strategies built around anchor-text links on random sites don’t have much effect anymore, especially when set against links from relevant content.
What does this mean exactly? If a site is about a topic – let’s say Cats and Dogs – and has a lot of pages about Cats and Dogs, any link coming from that site is worth more than, say, a link from a page about Cats and Dogs on a site about Cars.
Google’s algorithm somehow rates an entire site’s relevancy, so that a link from that site to something similarly relevant carries more juice than any random link. I’ve tested this with link building and found that about 200 random links are worth roughly one link from a similarly relevant PR4 site. That seems crazy, but it also means that if you get just a few people linking to you from really good sites with good keywords, you’ll do a lot better than the guy who paid someone offshore to spam blog comments.
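To make that 200-to-1 observation concrete, here is a toy weighting sketch. Every number, name, and scale below is my own invention for illustration – Google publishes nothing like this, and the linear PageRank scaling is purely an assumption:

```python
# Toy model of relative link value, based on my rough 200:1 observation.
# All weights are guesses for illustration; nothing here is Google's math.

def link_value(relevant: bool, pagerank: int) -> float:
    """Estimate the relative worth of a single incoming link."""
    base = 1.0  # a random, non-relevant link counts as 1 unit
    if relevant:
        # One relevant link from a PR4 site seemed worth ~200 random links,
        # so scale relevance by PageRank (PR4 -> 200, linear in this toy).
        return base * 50 * pagerank
    return base

# 200 random links vs. one relevant PR4 link come out roughly even:
random_links = 200 * link_value(relevant=False, pagerank=2)
relevant_link = link_value(relevant=True, pagerank=4)
print(random_links, relevant_link)  # 200.0 200.0
```

The point of the sketch is only the ratio: under these made-up weights, a pile of random links and a single relevant, well-ranked link land in the same place.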
First Unique Content Goes First.
I believe this is probably the least understood part of the entire system. There was so much talk about article sites losing their SERP rankings because they had so much content. It makes more sense that they lost their rankings because their content not only wasn’t unique, it wasn’t the first appearance of that content. Duplicate content isn’t the issue here. I think all those people saying “Don’t allow someone to copy your content” are lost, because you can’t prevent people from stealing or republishing content. And if duplication itself were penalized, you could hurt other people’s SERPs just by copying their content everywhere.
There is a crazy theory that perhaps the first instance of content actually gains rankings based on the other versions of the content, even when not linked!
I know this doesn’t particularly make sense, but I ran several tests of this myself. I put some content on a friend’s very small blog, and then two weeks later duplicated that exact same content on one of the top technology sites in the world. At first the copy on the big site showed up on the first page of a search, but then it dropped off, and the original content on the small blog turned up at a higher ranking than the bigger site had reached. The small blog was originally only on page 4 of the rankings, and suddenly went to page 1 with no links in.
I also tested this several other times with similar content, duplicating it on other major sites I have access to, and found the same results. In every instance, the original content eventually gained ranking – but only when the duplicate content was placed on a secondary, very highly ranked site.
It is my belief that the engineers at Google realize that a lot of content is copied from a variety of places, often without permission, or as part of a feed or news service. Penalizing the original source because of duplication makes no sense whatsoever! Why would it?
In theory, getting links from that duplicate content should enhance your listings too – something which I am going to test more and more in the coming months.
Age of Site &amp; Links Is of Utmost Importance.
Google has gone nuts with the whole refreshing of content – often showing new content with new links on the first page of searches, then having it quickly drop off within a week to page 3 or 4. I believe this is part of Google’s new strategy to change things up a bit: to give new content a “day in the sun,” but also to emphasize that the age of a site brings more relevancy to the table.
I’m pretty sure they’ve created a system in which a site must be around for a certain amount of time to get any juice – and combined with the fact that older sites generally have more content about a given topic, a site’s age has become a significant factor.
Basically, if you have links coming from an older site that has been around for a long time, pointing to your site that has also been around for a long time, you will see immediate jumps in keyword rankings compared with links from younger sites of any type. New links aren’t as powerful as old links.
It seems to me that even newer content on older sites has less value than older content. It causes a temporary jump, but long term, unless it consistently builds links from other sites, it’s not as valuable.
If I had to find a term for this, it would be a “gravitas” scale of some sort. Basically, Google wants reference and fact sites to rank higher than sites that are opinion, news, or promotional. The proof of a site’s authenticity or gravitas is people consistently linking to it over a long period of time. People who link to a site or article only over a short period are usually just linking to an interesting news story, a fad, or doing link building.
Links over a long time show that a site is a valuable “resource.”
That’s why Wikipedia scores so highly on so many factors (including PR): it’s a stable resource that people link to all the time when making references.
Here’s the basic formula, for those who don’t want to read my entire article:
First + Relevant Content + Age + Links
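As a purely illustrative sketch, that formula could be written as a toy scoring function. Every weight, name, and cutoff below is my own assumption – this is a way to picture how the four factors might combine, not Google’s actual math:

```python
# Toy ranking score combining the four factors: First + Relevant Content
# + Age + Links. All weights are invented for illustration only.

def page_score(is_first_source: bool, relevance: float,
               site_age_years: float, long_term_links: int) -> float:
    """Combine the four factors into one toy number.

    relevance: 0.0-1.0 topical match between the page and its site.
    long_term_links: links accumulated steadily over time, not in a burst.
    """
    first_bonus = 2.0 if is_first_source else 1.0  # original source wins
    age_factor = min(site_age_years, 10) / 10      # saturates at 10 years
    return first_bonus * relevance * (1 + age_factor) * long_term_links

# An old, original, on-topic page with steady links beats a fresh copy
# of the same content on a one-year-old site:
old_original = page_score(True, 0.9, 8, 50)
new_copy = page_score(False, 0.9, 1, 50)
print(old_original > new_copy)  # True
```

Notice how the sketch mirrors the observations above: being first doubles the score, relevance multiplies everything else, and age and steady links compound rather than add.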