If Google researchers have their way, we may soon look back and laugh at the time when search engines ranked web pages based on link-driven popularity instead of factual content.
According to New Scientist, a team of Google researchers is currently working toward a future where search engines judge websites not on the number of other sites that trust them enough to link to them, but by the accuracy of their content.
Exogenous vs. Endogenous Credibility
Google researchers are brilliant, so they use words like "exogenous" to describe signals that come from outside a web page, such as hyperlink structure. They use words like "endogenous" to describe signals that come from within a web page, such as factual accuracy.

Since web pages can be littered with factual inaccuracies and still appear credible because of a high number of quality links, the Google team is pursuing a future where endogenous signals carry far more weight than exogenous signals.
In short, Google may soon be more concerned with the information your website contains than the level of trust people have in your website. New websites could immediately be ranked higher than established competitor sites just by hosting content that is more factually accurate than theirs.
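To make that distinction concrete, here's a toy Python sketch of how a ranking score might blend an exogenous signal with an endogenous one. The function name, the weights, and the example values are purely illustrative assumptions - Google has not published any such formula.

```python
# Illustrative only: a toy blend of an exogenous (link-based) signal and an
# endogenous (accuracy-based) signal. The weights are hypothetical.

def page_score(link_score: float, accuracy_score: float,
               endogenous_weight: float = 0.7) -> float:
    """Combine two signals in [0, 1] into one ranking score.

    link_score        -- exogenous: derived from inbound links
    accuracy_score    -- endogenous: share of the page's claims judged correct
    endogenous_weight -- how much the accuracy signal counts (assumed value)
    """
    exogenous_weight = 1.0 - endogenous_weight
    return endogenous_weight * accuracy_score + exogenous_weight * link_score

# A new, accurate page can outrank an established but inaccurate one:
print(page_score(link_score=0.2, accuracy_score=0.95))  # new site, few links -> 0.725
print(page_score(link_score=0.9, accuracy_score=0.40))  # popular site, many errors -> 0.55
```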
"Knowledge Vault": The Storage Room for Humanity's Collective Information
So where do Google's bots go to check the facts found in the web pages they crawl?

Google has quietly been building a database that contains the accumulated knowledge of the entire human race. This enormous cache of facts and information is readable by both machines and humans. Called the Knowledge Vault, this information super warehouse is locked in a cycle of self-perpetuating improvement - the more information it gathers, the more information it is able to collect. Bots scan text on web pages and then double-check what they find against the information stored in Knowledge Vault. Those same bots can then deposit new information that they "learn" from those web pages into the vault.
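That crawl-check-learn cycle is easy to picture with a toy model. The sketch below is an assumption for illustration only, not Google's code: facts are stored as (subject, predicate, object) triples, each extracted triple is checked against the store, and anything new is added back in.

```python
# Toy model of the crawl/check/learn cycle described above.
# Facts are (subject, predicate, object) triples; the "vault" is just a set.

vault = {
    ("Golden Gate Bridge", "located_in", "San Francisco"),
    ("Golden Gate Bridge", "opened", "1937"),
}

def process_page(extracted_triples):
    """Check each extracted triple against the vault; learn the new ones."""
    for triple in extracted_triples:
        if triple in vault:
            print("confirmed:", triple)
        else:
            print("learned:  ", triple)
            vault.add(triple)  # self-perpetuating: new facts grow the store

# Triples a bot might extract from a crawled page (made-up example data):
process_page([
    ("Golden Gate Bridge", "opened", "1937"),
    ("Golden Gate Bridge", "length_m", "2737"),
])
```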
Researchers believe the very near future will include machines that recognize objects. When a person wearing a heads-up display looks at a bridge, for example, the device will recognize it and request information from Knowledge Vault. Knowledge Vault will instantly beam facts about the bridge back to the wearer.
For now, Knowledge Vault is just the world's greatest fact checker - the brains behind Google's pursuit of being able to judge web pages on their endogenous information, not their exogenous links.
Early Tests Provide Promising Results
Google tested 2.8 billion triples, which are facts discovered on and extracted from web pages. Using those triples, Google researchers were able to "reliably predict the trustworthiness of 119 million web pages and 5.6 million websites."

Although this system is not yet ready to be applied Internet-wide, it could certainly supplement the signals that are currently used to evaluate a website's quality.
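To get a rough feel for what those numbers mean, you can think of a page's trustworthiness as the share of its extracted triples that agree with a reference fact store. The sketch below is a deliberately simplified assumption with made-up data - the researchers' actual model is probabilistic and far more involved.

```python
# Simplified illustration: score a page by the share of its extracted
# triples that match a reference fact store. Not the researchers' model.

reference_facts = {
    ("Eiffel Tower", "located_in", "Paris"),
    ("Eiffel Tower", "height_m", "330"),
    ("Eiffel Tower", "completed", "1889"),
}

def trust_score(page_triples):
    """Fraction of a page's claims that agree with the reference facts."""
    if not page_triples:
        return 0.0
    correct = sum(1 for t in page_triples if t in reference_facts)
    return correct / len(page_triples)

# Example pages (made-up extractions):
accurate_page = [("Eiffel Tower", "located_in", "Paris"),
                 ("Eiffel Tower", "completed", "1889")]
sloppy_page   = [("Eiffel Tower", "located_in", "Berlin"),
                 ("Eiffel Tower", "completed", "1889")]

print(trust_score(accurate_page))  # 1.0
print(trust_score(sloppy_page))    # 0.5
```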
In 2011 and 2012, Panda and Penguin changed the relationship between SEO and search rankings. The impact felt by those algorithm updates, however, could be dwarfed by Google's current quest to judge websites by their factual accuracy and truthfulness, as opposed to ranking pages based on links.
That future isn't here yet, but it appears to be close. If Google's early tests are accurate, web pages may soon be ranked by the facts they contain, not the links they receive.
Source: searchenginewatch