For many years, top search engines have served the vital information needs of the internet-using public. Search engines like Google routinely point users toward the best resources for their queries, surfacing the most relevant results based on what users type into the search box. Over the years, the concept of ranking with Google has also changed considerably: factors that were once crucial have lost their weight, while new ones have taken center stage.
Google has implemented a number of important algorithms that assess websites on the basis of these factors and assign rankings, and businesses have had to keep up with these changes to hold on to their hard-won positions. Some have succeeded, while others have taken major hits. This raises the question of whether Google, the world's most popular search engine by a long shot, might actually be tipping the scales away from fair search engine rankings.
When it comes to fairness in search engine rankings, a number of significant questions about Google's modus operandi have recently come to the fore. While its underlying philosophy is one every effective search engine should adopt, the processes surrounding it, and the transparency involved, can leave something to be desired. There is also the question of whether Google itself is becoming a scraper site, especially in light of its recent crackdown on scraper sites, which have seen a growth spurt and often spam search engines and create problems at multiple levels. These questions merit a clear, in-depth discussion and careful deliberation.
Google and the Concept of Search Rankings
For a long time, Google has invited website owners to submit their sites to its search index. Once a website is submitted, it is crawled and indexed to gauge its quality and relevance, and the engine keeps re-indexing the site at intervals to log any changes in structure or content. Indexing also puts the website through various checks, validations, and algorithmic operations, and this is what ultimately shapes the site's ranking potential. Previously, the focus was largely on the presence and density of specific keywords in the content and elsewhere on the page. Today, the focus is far more on relevance, uniqueness, and added value, as well as on design quality and mobile readiness.
Now, depending on the quality of the search engine optimization the website owner has put to use, and how it fits into Google's scheme of things, the website will rank either low or high. This seems a fairly straightforward and intuitive way of handling rankings. There are, however, certain issues that can make one question Google's role in ensuring full fairness in rankings, and these stem from the policies, processes, and methods Google adopts at specific times.
In All Fairness – Examining Transparency and Behavior
Imagine a case like this: a company that has invested heavily in its business website, using professional designers, information specialists, and search engine optimization experts, suddenly starts sliding in the search rankings. There can be many reasons for this; most likely, the website has somehow fallen foul of Google's famous algorithms. However beautifully built, responsive, and navigable a website may be, practices such as buying up multiple keyword-rich top-level domains or carrying plagiarized or duplicate content can be reason enough to face penalties from Google.
Another prime candidate for a ranking hit is the website that uses web-crawling bots to source its content, whether to earn hefty advertising revenues from high-impact keywords or to spam search engines. Google has vowed to come down hard on websites like these, but some of its own practices cast a shadow on that intent. There are two main issues here: transparency, and Google's own behavior, which can sometimes mimic that of a scraper site.
Examining the Transparency Issue
A common complaint among those penalized by Google for shortcomings in their search engine optimization concerns the lack of transparency involved. If your website has recently fallen in the rankings, you can see it reflected in your position among Google's search results and, consequently, in the dip in traffic that is sure to follow. However, this is the only indication that you might have made some SEO mistakes. Google does not tell you exactly which of your actions earned the penalty, which SEO best practices you failed to conform to, which algorithmic system flagged your website, or what you can do to correct the mistakes.
This lack of transparency is often why website owners, despite suffering significant falls in ranking, fail to turn things around and repair the damage. Providing penalized website owners with basic indicators of their mistakes and suggesting a way out is surely a small ask of the giant of the search engine realm, and we can only hope that Google addresses this issue in the near future.
Is Google Indeed a Scraper Site?
Of late, the notion has surfaced that Google may have overstepped the line a little in providing tailored, relevant search results in the form of direct answers, definitions, and knowledge-graph boxes. A useful initiative in itself, Google's use of these kinds of search results is fast becoming the norm. The concept, however, has one important problem. Displaying results like this involves taking content from a high-authority source and showing it directly on the Google search results page. While Google does cite the source and credit the original website, the excerpt still sits above the very high-authority website the content actually comes from.
Usually, a Google search yields enough information to help the average internet user click through to the resource they deem best suited to their needs. Increasingly, though, Google search results attempt to answer questions directly on the results page, robbing the high-authority web location of crucial click-through traffic. For some kinds of queries this is the right approach and genuinely helps the user, but in many areas this balancing act becomes skewed and can even be called unfair. Indexing arrangements give Google the ability to take content from indexed locations and use it for its own benefit as long as there is a reasonably fair exchange of traffic. With these kinds of search results, that notion of fairness is being stretched too far.
Answering the Vital Question
A scraper site is defined as a website that scrapes content from other web locations and then uses that content for its own advantage. Technically speaking, search engines are not meant to be scraper sites. They are meant to act as indexing tools, tracking different web resources in depth so that they can be matched to users with the right queries. With this new penchant for direct answers in search results, it has to be admitted that Google is, inadvertently or not, somewhat mimicking the behavior of scraper sites: borrowing content from other web locations and giving that content a better place on its results pages than the page the content originally came from.
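To make the definition above concrete, here is a minimal, illustrative Python sketch of the core act of scraping: pulling the body text out of someone else's HTML for reuse. The class name and sample HTML are hypothetical, and a real scraper would first fetch the page over the network (e.g. with `urllib.request`) before republishing what it extracts; this sketch only shows the extraction step.

```python
from html.parser import HTMLParser

class ParagraphScraper(HTMLParser):
    """Collects the text of every <p> element from an HTML document."""

    def __init__(self):
        super().__init__()
        self._in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paragraphs.append("")  # start a new paragraph buffer

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paragraphs[-1] += data

# Hypothetical page content; a scraper site would republish this text
# (ideally with attribution, often without) to capture the traffic.
source_html = "<html><body><h1>Guide</h1><p>Step one.</p><p>Step two.</p></body></html>"
scraper = ParagraphScraper()
scraper.feed(source_html)
print(scraper.paragraphs)  # → ['Step one.', 'Step two.']
```

The point of the sketch is how little separates this from what a search engine's crawler does: the difference lies not in the fetching and parsing, but in whether the extracted content is used to route users to the source or to replace it.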
While this is not sufficient grounds to call Google an out-and-out scraper site, it has to be said that in certain ways Google does operate like one at times, and robs deserving websites of at least part of their traffic. Over time, this loss of traffic and utility can build up enormously and genuinely hurt a company's business interests. We can only hope that this trend evolves into a more meaningful way of giving people easy answers without diverting important traffic from high-authority websites that have earned their standing through a great deal of effort. This is the kind of dilemma that can leave even the most seasoned search engine optimization experts bamboozled and wondering what their next move should be.
In conclusion, while Google is not fully a scraper site, it does share some characteristics with one. Since Google provides immense utility to millions of people, it is to be hoped that it succeeds in dispelling this notion in the long term.