Information restriction - Dangerous search engine tactics
If I said to you, "Sorry, but nobody else can republish your knowledge," what would you read into that?
A problem looms over our evolved method of information gathering, one that a tiny few big-money corporations are straying into dictating control of: the business of search engine censorship by means of duplicate-content filtering. The only logical outcome of today's heavy use of search engines is that they will have, and indeed already have, the power to filter what we see and know according to mathematical algorithms.
This is in fact a frightening concept, because it is well known that some search engines are prepared to penalise websites (information sites) that repeat content. In effect, a form of dictatorship has been adopted for the convenience of internet search scientists who know of no better method of producing good search results than to impose censorship on information sources because their systems have found similar or identical information elsewhere. This in turn disables competition among information sources, and you, dear reader, will have, and already do have, a limited view of the world when using internet search engines, because what is displayed in answer to your search is pre-determined by the mathematics of a comparative few individuals, and shows you an entirely different picture of results than is actually represented by the websites in existence.
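The filtering described above is usually implemented as near-duplicate detection. The sketch below is a minimal illustration only, assuming a simple word-shingle and Jaccard-similarity approach with an arbitrary threshold; the actual algorithms used by commercial search engines are proprietary and far more elaborate.

```python
def shingles(text, k=4):
    """Break text into overlapping k-word 'shingles'."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def looks_duplicated(page_a, page_b, threshold=0.8):
    """Flag two pages as near-duplicates when their shingle sets
    overlap beyond the chosen threshold (0.8 is an assumption here)."""
    return jaccard(shingles(page_a), shingles(page_b)) >= threshold
```

A filter like this cannot tell a plagiarised copy from an authorised reprint: any republished article scores as a near-duplicate, which is precisely the objection this article raises.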
Let's say, for example, that this very article was adopted by many websites. The likelihood is that those same websites would be penalised by search engines for containing information displayed elsewhere; hence the many references you might read to a 'duplicate content filter' or to 'website de-indexing due to duplicate content'. The problem is that, as in any school or library, information has to be duplicated in order that it may reach as many people as possible.
This is without doubt a terrible precedent and a horrible public power exercised by a limited number of individuals. It effectively ensures profitable gain, in addition to information control, for the corporations for which they work, but it does not and cannot reflect the reality of the public's vote, or the public's capability to repeat internet information whenever it chooses to do so. If we continue with algorithms that eliminate websites from being viewed by normal, publicly accepted means (increasingly via search engines), we will effectively create an ethos whereby search engines control the material that the planet's population can or will read while exploring websites and surfing the internet.
You would not expect only one library in the world to hold the complete works of Shakespeare published on the internet. Nor should you accept that articles cannot be republished for fear of websites being starved of visitors by the filtering imposed by the supposed original-content filters of certain search engines. In my view, you should be able to get whatever information is publicly available from as many sources as you choose, at your own convenience, and not be restricted to a single source, unless of course nobody else happens to want to republish or endorse that piece of information.
I believe we have to make public vehicles of information as accountable as your local bus service or electricity supply company. Information is important to all of us and cannot simply be filtered and restricted because it's hard for search engines to share traffic fairly among websites displaying the 'same' article or piece of information. The moment we allow that, we revert to the information stone age. Well... welcome to the information stone age. Look at the results from your favourite search engine and then decide how fresh, common or recurrent they are. Unless you happen to be viewing an RSS news feed, the likelihood is that the static information you are allowed to see has waited a long time to surface on the internet. This defeats the object of cutting-edge freedom of information via the internet, doesn't it?
Take a look for yourself. If there really are more than 8 billion pages, how is it that the same web pages always come out on top of the search results for the same query? Surely every website with useful information pertinent to your search deserves a viewing, at least occasionally?
We, the public users, must regain authority and ensure that publicly available information is viewable and searchable with 'our' most popular, publicly predominant service tools, such as search engines. There is a fine line between corporate profit-making and public service. Once that line is crossed, as it has been by some notable search engines, they become empowered, and therefore responsible, by the very mechanisms that achieved their success and dominance, to deliver ever-greater fair play and responsibility. We must not allow search engines to suggest for a moment longer that human beings cannot repeat or republish information as we see fit, for fear of penalties capable of closing websites, purely because content was republished with permission as originally intended by publicists, writers, scientists, news outlets and all others.
Duplicate content is as necessary today as it ever was in any classroom exercise book in any school anywhere in the world. Websites must be allowed to republish useful information. That is a fact.
About the Author: Ronnie Roberts - Author is the owner of education relevant sites www.learn-anything.com | www.elearn-university.org | www.elearn-university.com
Feel free to republish this article in its entirety including author's Bio and relevant site details.