META Tags, Robots.txt And Sitemaps

It's no mystery that people in the SEO world have strong feelings about cloaking. The very mention of an off-white hat tactic in one of my YOUmoz posts earned me multiple thumbs down and accusations of cloaking. One comment in particular interested me, though: someone suggested that they consider cloaking to be virtually any tactic that treats search engines differently from end-users. As a usability specialist, this really got me thinking, and I began to realize just how many mechanisms have been specifically created (and sanctioned by search engines) to separate bots from users, and, more importantly, how these tactics can actually improve usability.
As a group, I'll call these tactics "soft cloaking." Before anyone objects, I know they aren't cloaking in the traditional sense, but they are tools and tactics that deliberately treat search engines and users differently, and, in essence, deliver different content to each of the two groups.

1. META Tags

Some of the earliest examples of soft cloaking were the META tags, adopted by the major search engines over a decade ago. The most prominent META tags, including the keywords and description fields, were almost completely hidden from end-users and were created specifically to provide search engines with information they could use to classify and prioritize content. Although abuse by the black-hat community eventually gave META tags a bad name, I'd still argue that they were intended to improve usability. They allowed search engines to retrieve relevant information in a machine-friendly way, without burdening users with that additional content (such as keyword lists plastered across pages). While the original dream of META tags (and the adoption of new tags to categorize geographic location, industry, etc.) never really materialized, they were one of the first steps in helping search engines deliver better, more targeted content.
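
To make this concrete, here is a minimal sketch of what those tags look like in a page's head (the store name, copy, and keyword list are invented for illustration). Spiders read all of it; visitors see none of it rendered on the page:

    <head>
      <title>Handmade Leather Wallets | Example Store</title>
      <!-- Description: often reused by engines as the result snippet, but never displayed on the page itself -->
      <meta name="description" content="Hand-stitched leather wallets, made to order and shipped worldwide.">
      <!-- Keywords: the classic classification hint for spiders, largely devalued after years of abuse -->
      <meta name="keywords" content="leather wallets, handmade wallets, bifold wallet">
    </head>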

2. Robots.txt

The increasing prevalence of search spiders and bots, as well as their tendency to soak up bandwidth, finally forced the search engines to give webmasters a standard way to block bot activity. While robots.txt was originally intended simply to slow spiders down and keep them out of sensitive territory, the concept eventually evolved to allow fairly complex, selective blocking. Over time, more and more people have used their robots.txt file to help search engines zero in on relevant, search-friendly content. By helping spiders avoid duplicate pages, for example, blocking cleans up superfluous search results and can ultimately save search users time and frustration.
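
As a rough sketch (the paths and the bot name are hypothetical), a robots.txt file that practices this kind of selective blocking might look like:

    # robots.txt -- hypothetical example of selective blocking
    User-agent: *
    # Keep printer-friendly duplicates of articles from cluttering the index
    Disallow: /print/
    # Keep spiders (and their bandwidth appetite) out of areas only meant for logged-in users
    Disallow: /admin/
    Disallow: /checkout/

    # Slow down an especially aggressive bot (a non-standard directive honored by some engines)
    User-agent: SomeAggressiveBot
    Crawl-delay: 10
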
This brings up a broader and very important aspect of soft cloaking. By treating search engines and end-users differently, we ultimately also recognize that search visitors are different from direct visitors. A case in point is Google's recent discussion about cracking down on search results pages appearing within their own search results. The logic is simple: showing a visitor a targeted piece of information is generally more valuable than forcing them to jump from one set of search results to another set of results in a different format. My own experience has shown just how valuable it can be to help Google arrive directly at specific information. When my clients' sites serve up specific product pages instead of internal search results, search visitors land 2-3 levels deeper in the site, improving both usability and conversion rates, and also helping Google provide a better user experience.
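
As an illustration of that last point, assuming the site's internal search results live under a /search/ path (a hypothetical layout), a single robots.txt rule keeps spiders out of those result pages while the product pages themselves stay fully crawlable:

    # Hypothetical layout: internal search results live under /search/
    User-agent: *
    Disallow: /search/
    # Product pages such as /products/leather-wallet stay open to spiders,
    # so searchers land on the product itself rather than on another page of results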

3. The Sitemap Protocol

More recently, some of the major search engines agreed to adopt the XML-based Sitemaps protocol. Unlike traditional sitemaps, which are targeted at end-users, these are machine-readable maps designed specifically for search engines. In some cases, a sitemap file may contain thousands of links not normally (or easily) available to users. While such a file should never be a replacement for good navigation, a well-built sitemap file can help spiders better understand the structure of a complex site in a way that humans would find either unreadable or simply overwhelming (an HTML sitemap with hundreds, let alone thousands, of links would be useless from a human usability standpoint). Done correctly, providing the spiders with content that may be very different from what human users see can actually improve their ability to understand a site's structure, consequently improving search usability.
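
A minimal sitemap file looks something like the sketch below (the URLs and dates are invented); real files can run to thousands of url entries, exactly the kind of list no human visitor should ever be asked to read:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/products/leather-wallet</loc>
        <lastmod>2008-01-15</lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.8</priority>
      </url>
      <url>
        <loc>http://www.example.com/archive/2007/old-press-release</loc>
        <lastmod>2007-06-02</lastmod>
        <changefreq>yearly</changefreq>
        <priority>0.3</priority>
      </url>
    </urlset>

The file is then referenced from robots.txt with a Sitemap: line or submitted through the engines' webmaster tools, so only the spiders ever need to look at it.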

Is Soft Cloaking Good for Users?

Of course, any of these mechanisms can be abused, but there is nothing inherently wrong with presenting different information to search engines and humans. In many cases, soft cloaking allows spiders and end-users to each process information in the way that suits them best. Providing spiders with more digestible data is ultimately good for users, especially people who use search engines as a form of navigation. As more and more users reach websites through a diversity of paths (directly, through search engines, via RSS feeds, etc.), we have to not only be aware of the different challenges of these mediums, but also be open to delivering content in unique ways to suit each path and the technology that enables it. Real-world usability is never a one-size-fits-all equation.

