Alongside Google's PageRank, keyword density is one of the most touted metrics in Search Engine Optimization (SEO) and Search Engine Marketing. Christopher Heng of TheSiteWizard calls it "[o]ne of the simplest ways to improve your site's placement in the search engine results". Keyword density refers to the number of times a specific keyword appears on a webpage in relation to the total number of words on the page. Search Engine Marketers believe that a higher keyword density for carefully selected keywords will result in a web page being listed in a higher position for those keywords. This belief leads to the practice of 'keyword spamming', whereby a webmaster places a keyword on a page more often than would otherwise be necessary. Some webmasters go as far as creating entire pages that are nothing but collections of keywords (doorway pages) in an attempt to draw the attention of search engine spiders. Google considers the problem so important that it specifically noted it as a "Risk Factor" in its SEC filing in the lead-up to its imminent initial public offering (IPO) and listing on the NASDAQ.
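In concrete terms, keyword density is usually expressed as the number of occurrences of a keyword divided by the total word count of the page. A minimal Python sketch of that arithmetic (the tokenization and the sample page text are illustrative assumptions; real spiders strip markup and often apply stemming before counting):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of `keyword` divided by total words, as a percentage.

    A naive sketch: real crawlers normalise markup and case, and often
    stem words, before counting anything.
    """
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return 100.0 * hits / len(words)

page = "Cheap widgets! Buy cheap widgets here. Cheap cheap cheap."
print(f"{keyword_density(page, 'cheap'):.1f}%")  # 5 of 9 words -> 55.6%
```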
Wikipedia contains an extensive entry on keyword spamming (spamdexing) which details some of the approaches webmasters take to inflate their keyword density (hidden or invisible text, meta tag stuffing, hidden links, etc.) and discusses related problems and implications of the practice. WebProNews.com even includes "[h]idden text and keywords" in a list of "[o]bjectives to avoid" when "optimizing your website for search engines." Keyword spamming is hardly a new problem for search engines, and yet it remains an obstacle to presenting fairly weighted results for a user's search.
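To make one of those tricks concrete, here is a toy Python sketch of the hidden-text approach and a naive check for it. The sample markup and the colour-matching heuristic are illustrative assumptions only; real detection has to evaluate stylesheets, font sizes, off-screen positioning and much more:

```python
import re

# The hidden-text trick: keyword-stuffed text styled in the page's
# background colour, so visitors never see it but naive spiders count it.
HTML = """
<body bgcolor="#ffffff">
  <p>Welcome to our widget shop.</p>
  <font color="#ffffff">cheap widgets cheap widgets cheap widgets</font>
</body>
"""

def find_hidden_text(html: str) -> list[str]:
    """Flag <font> runs whose colour matches the body background.

    A toy heuristic only: it handles just this one flavour of hiding.
    """
    bg = re.search(r'bgcolor="(#[0-9a-f]{6})"', html, re.I)
    if not bg:
        return []
    pattern = rf'<font color="{bg.group(1)}">(.*?)</font>'
    return re.findall(pattern, html, re.I | re.S)

print(find_hidden_text(HTML))  # ['cheap widgets cheap widgets cheap widgets']
```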
Consider for a moment that you are a search engine spider: a piece of software sent out to scour the Web for new content and report back to your search engine on what you found and how it should be ranked against certain keywords. Without reading and understanding the content of a page, how could you tell that it shouldn't list a certain keyword as many times as it does? Assuming you determined that there were 'too many occurrences' of a keyword, what would the punishment be? And what if the author's writing style simply called for using a word that many times? Unless it can somehow determine that a page has used a keyword 'too many times', a search engine is likely to assume that the page is very heavily related to that term, and thus rank it well for searches containing that keyword.
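The spider's dilemma reduces to a few lines of code: any fixed cut-off is a guess. A sketch, assuming an arbitrary 5% threshold (no engine publishes its actual limits):

```python
SPAM_THRESHOLD = 5.0  # arbitrary illustrative cut-off; engines publish no such number

def looks_stuffed(density_pct: float) -> bool:
    """Naive filter: flag any page whose keyword density exceeds a fixed cut-off.

    The dilemma from the text, in code: a cookbook page that honestly
    repeats "butter" trips the same wire as a doorway page, while a
    spammer who learns the cut-off simply sits just beneath it.
    """
    return density_pct > SPAM_THRESHOLD

print(looks_stuffed(55.6))  # doorway page: True
print(looks_stuffed(4.9))   # carefully tuned spam page: False
```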
Evolving search ranking algorithms are getting better at detecting and punishing keyword spamming, but there will always have to be some acceptable level of keyword density, and no doubt people will figure out what it is. Webmasters eager for traffic will make sure their pages sit right on that line, avoiding punishment while maximizing benefit. This may be one of those problems that is simply part of the search engine scene, whether we like it or not.