The History Of Search Engines

The History of Internet Search Engines

Just a little over ten years ago, a person who needed information had to go to the local library and spend hours entombed amongst shelves of books. Now that the internet is available in almost every home, finding information is easier than ever before. When someone needs information, all they have to do is boot up their computer and type their query into a search engine.

A search engine is an information retrieval system designed to help find information stored on a computer system.

In 1990 the very first search engine was created by students at McGill University in Montreal. The search engine was called Archie, and it was invented to index FTP archives, allowing people to quickly locate specific files. FTP (short for File Transfer Protocol) is used to transfer data from one computer to another over the internet, or through any network that supports the TCP/IP protocol. In its early days Archie contacted a list of FTP archives approximately once a month with a request for a listing. Each listing Archie received was stored in local files and could be searched using the UNIX grep command. Archie began as a local tool, but as the kinks were worked out and it became more efficient, it grew into a network-wide resource. Users could reach Archie's services through a variety of methods, including e-mail queries, telnetting directly to a server, and eventually through World Wide Web interfaces. Archie only ever indexed computer files.
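Archie's approach, as described above, amounted to keeping flat-file listings and searching them with grep. A minimal sketch of that idea (the file name and index entries here are made up for illustration, not taken from Archie itself):

```shell
# Hypothetical Archie-style index: one "host:path" entry per line,
# the kind of listing Archie collected from FTP servers.
printf 'ftp.mcgill.ca:/pub/archie/readme.txt\nftp.mit.edu:/pub/gnu/gcc.tar\n' > listings.txt

# A case-insensitive grep over the flat file is the entire "search":
grep -i 'gcc' listings.txt
```

Running the grep prints the single matching line, `ftp.mit.edu:/pub/gnu/gcc.tar`, which is essentially the user experience the early Archie offered.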

In 1991 a student at the University of Minnesota created a search engine that indexed plain text files. The program was named Gopher after the University of Minnesota's mascot.

In 1993 a student at MIT created Wandex, the first Web search engine.

Today, search engines match a user's keyword query against a list of potential websites that might contain the information the user is looking for. The search engine does this with a piece of software called a crawler, which probes web pages for matches to the user's keywords. Once the crawler has identified candidate pages, the search engine uses a variety of statistical techniques to establish each page's importance. Most search engines base the importance of a hit on the frequency and distribution of the query words. Once the search engine has finished scoring the pages, it returns a ranked list of websites to the user.
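The word-frequency scoring described above can be sketched in a few lines. This is a minimal illustration of the general technique, not any real engine's algorithm; the page texts and function name are invented for the example.

```python
def rank_pages(pages, query):
    """Rank crawled pages by how often the query terms appear.

    pages: dict mapping URL -> page text; query: keyword string.
    Returns URLs sorted by descending term frequency (normalized
    by page length, so long pages don't win automatically).
    """
    terms = query.lower().split()
    scores = {}
    for url, text in pages.items():
        words = text.lower().split()
        # Count occurrences of each query term, relative to page length.
        scores[url] = sum(words.count(t) for t in terms) / max(len(words), 1)
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "a.example": "search engines index pages and rank pages by keywords",
    "b.example": "a page about gardening with one mention of search",
}
print(rank_pages(pages, "search engines"))  # a.example ranks first
```

Real engines layer many more signals on top of raw frequency, which is exactly why webmasters study optimization techniques, but frequency-based scoring is the historical starting point.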

Today, when an internet user types a word into a search engine they are given a list of websites that might be able to provide the information they seek. The typical search engine displays ten potential hits per page, and the average internet user never looks further than the second page of results. Webmasters are therefore constantly forced to adopt new methods of search engine optimization to be ranked highly by the search engines.

In 2000, a study by Lawrence and Giles suggested that internet search engines were only able to index sixteen percent of all available webpages.

About The Author:

Peter Dobler is a veteran in the IT business. His passion for experimenting with new internet marketing strategies leads him to explore new niche markets.
Read more about his experience with search engine optimization; visit






More Search Engine Optimization Articles

How Google Uses Page Rank For SEO

... that supports the webpage. The more pages that link to the page, the more votes of support the webpage receives. If PageRank comes across a website that has absolutely no links connecting it to another webpage then it is not awarded any votes at all. Tests done with a model like PageRank have shown that ... 

Read Full Article  

How Title Tags And Meta Tags Are Used For Search Engine Optimization

... engines. Meta tags can also let website owners prevent having their website indexed at all. Meta tag keywords are a way to provide extra text for web crawler based search engines to index. While this is great in theory, several of the major search engines have crawlers that ignore the HTML and focus entirely ... 

Read Full Article  

Search Engine Marketing Different From Search Engine Optimization

... differs from search engine optimization, which is the art and science of making web pages attractive to internet search engines. Non-profit organizations, universities, political parties, and the government can all benefit from search engine marketing. Businesses that sell products and/or services ... 

Read Full Article  

Algorithms The Foundation Of Search Engine Optimization

... simple algorithms. Computers generally use programming languages that are intended for expressing algorithms. There are different ways to classify algorithms. The first is by the specific type of algorithm. Types of algorithms include recursive and iterative algorithms, deterministic and non-deterministic ... 

Read Full Article