By 2006, new digital data had grown to 161 billion gigabytes, reportedly three million times more information than is contained in all the books ever written. It is multiplying at an accelerating pace, more than doubling each year. By 2010 it is predicted to hit 1,000 billion gigabytes (a zettabyte). And this is new digital information, in addition to the vast amounts produced in each previous year.
With all this information, how does anyone find anything? It is not surprising that search engines have become massive businesses and that information architecture is the hot career choice of the decade.
Where your website is concerned, the quaint old notion that ‘if I build it, they will come’ doesn’t cut it any more. Your web pages are snowflakes in a blizzard of data. People need to know that you exist. More importantly, they need to see your web pages as compellingly different, so that they will choose to come to your site rather than to any one of the thousands of others out there.
If you want people to see you as better than their other options, you need to get the search engines to see you the same way.
There are basically two ways in which everything on the web is classified and indexed. The first, and oldest, is by directories. You submit a short description of your entire site to the directory, or editors write one for sites they review. Each site occupies only one slot, and the directory depends on human editors for its listings and for deciding where each site belongs.
The Open Directory Project (ODP), also known as DMOZ (from directory.mozilla.org, its original domain name), is a multilingual open-content directory of World Wide Web links. It is owned by Netscape, but it is constructed and maintained by a community of volunteer editors, and you find it at dmoz.org. DMOZ is the foundation directory of the web, and all the major search engines build off it. You have to submit your site to DMOZ, and then wait, sometimes for many weeks, for someone to get around to looking at it.
If you are not already in DMOZ, it can take Google a very long time to find you. Google is not a directory; it is a search engine, the second way in which information is sorted and indexed on the web. While directories are run by people, search engines are automated. Their fieldwork is done by small bits of code called bots or spiders.
This code scurries around the web, following hyperlinks. Every page it encounters is absorbed and taken back home, where it is deposited in a vast database that indexes web pages.
When a spider absorbs a web page, it takes the visible text, the hidden code, the names and addresses of pages and files that the page links to, and the details of pages that link to it from elsewhere. Spiders “crawl” the web, and then people search through what they have found.
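To make that crawl-and-store process concrete, here is a minimal sketch of a crawler in Python. The starting URL is just a placeholder, and real spiders add politeness rules, robots.txt handling, deduplication and enormous scale; this is an illustration of the idea, not how any production engine works.

```python
# A minimal illustration of how a crawler ("spider") follows hyperlinks and
# stores what it finds. Real search-engine spiders are far more sophisticated.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    """Breadth-first crawl from start_url, returning {url: html} for each page."""
    index = {}                       # the "vast database" in miniature
    queue = deque([start_url])
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                 # skip pages that cannot be fetched
        index[url] = html            # deposit the page in the index
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:    # follow the hyperlinks it encounters
            queue.append(urljoin(url, link))
    return index


if __name__ == "__main__":
    pages = crawl("https://example.com")  # placeholder start page
    print(f"Indexed {len(pages)} pages")
```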
When someone enters a search query, typically a word or phrase, the search engine retrieves all the pages that relate to the query. It processes them by means of an algorithm that looks at more than 100 different characteristics, and then ranks the pages according to how relevant they are to the query, how good the content is, and how important the page is relative to all the competing pages on the web that are about the same theme. If you want your page to get into the top two or three out of thousands for any given keyword, your site has to be one of the very best in its field.
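The exact weighting is secret, but the idea of combining many signals into a single ordering can be sketched in a few lines. The page names, scores and weights below are invented for illustration and are in no sense Google’s; a real engine weighs well over a hundred signals.

```python
# Toy illustration only: combine the three factors named above into one score.
from typing import NamedTuple


class Page(NamedTuple):
    url: str
    relevance: float   # how well the page matches the query
    quality: float     # how good the content is
    importance: float  # standing relative to competing pages on the same theme


WEIGHTS = (0.5, 0.2, 0.3)  # arbitrary example weights, not any engine's


def score(page: Page) -> float:
    w_rel, w_qual, w_imp = WEIGHTS
    return w_rel * page.relevance + w_qual * page.quality + w_imp * page.importance


def rank(pages: list[Page]) -> list[Page]:
    """Return pages ordered from most to least likely to appear first."""
    return sorted(pages, key=score, reverse=True)


candidates = [
    Page("https://site-a.example/widgets", 0.9, 0.6, 0.4),
    Page("https://site-b.example/widgets", 0.7, 0.9, 0.8),
]
for p in rank(candidates):
    print(p.url, round(score(p), 2))
```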
A keyword is a word or short phrase used to encapsulate the essence of a web page. Search engines use it to classify what a page is about, searchers use it as a search query to find pages that may solve their problems, and marketers use it to trigger advertisements that will lead searchers to their site. Search engines derive the keywords for a page from a number of places (the sketch after this list pulls a few of them out of a page’s code), including:
the content and context of the web page
the anchor text of inbound links to that page
the title, description and keyword meta-tags in the code of the page
the description of the page in web directories
the tags assigned to the page by social bookmarkers
the algorithms of the search engines themselves
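As flagged above, here is a minimal sketch that extracts a few of these sources from a single page’s code: the title, the description and keywords meta-tags, and the most frequent words in the visible content. The sample HTML and the parser class name are invented for illustration; no engine publishes how it actually weighs these sources.

```python
# Illustrative only: pull candidate keywords out of a page's code and content.
import re
from collections import Counter
from html.parser import HTMLParser


class KeywordSourceParser(HTMLParser):
    """Gathers the <title>, description/keywords meta-tags, and visible text."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}
        self.text_parts = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") in ("description", "keywords"):
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        else:
            self.text_parts.append(data)


html = """<html><head><title>Hand-made Widgets</title>
<meta name="description" content="Hand-made widgets, shipped worldwide">
<meta name="keywords" content="widgets, hand-made widgets">
</head><body><h1>Widgets</h1><p>Our hand-made widgets are built to last.</p></body></html>"""

parser = KeywordSourceParser()
parser.feed(html)
words = re.findall(r"[a-z]+", " ".join(parser.text_parts).lower())
print("title:", parser.title.strip())
print("meta-tags:", parser.meta)
print("most common content words:", Counter(words).most_common(3))
```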
The algorithms that different search engines use are closely guarded, to prevent people from deliberately creating pages to get better rankings, or at least to limit the degree to which they can do that. Each engine’s algorithm differs, which is why different search engines yield different results for the same terms. Google might determine that one page is the best result for a search term, and Ask might determine that same page is not even in the top 50. This all comes down to how they value inbound and outbound links, the density of the keywords they find important, how they value different placements of words, and any number of smaller factors.
The dominant force in search, at least for now, is Google. Google’s share of the search market varies from country to country and from survey to survey. Most reports give Google a US search-query share of around 64 percent. Yahoo runs a distant and receding second, at around 21 percent. MSN/Live, Microsoft’s search engine, is doing quite well at 8 percent. There is not much room left for anyone else, and since Microsoft abandoned its recent attempt to acquire Yahoo’s search business, the future of Google’s competitors is rather uncertain.