Wednesday, November 14, 2007

Week 4

Practical 4 & 5

Search Engine

A general Web search engine is a program that searches documents on the World Wide Web for specified keywords and returns a list of results where the keywords were found. Without search engines, it would be almost impossible to locate anything on the Web without knowing the specific URL. There are three types of search engines: those powered by robots called crawlers (spiders), those powered by human submissions, and those that are a hybrid of the two.
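As a rough sketch of the keyword lookup described above, the snippet below builds a tiny inverted index over a few made-up documents and returns the documents in which a keyword was found. The page names and text are invented for illustration only.

# A minimal sketch of keyword search over an inverted index.
# The documents here are made-up examples, not real Web pages.

docs = {
    "page1.html": "search engines index documents on the web",
    "page2.html": "crawlers follow links between web pages",
    "page3.html": "human editors review sites for directories",
}

# Build an inverted index: word -> set of documents containing that word.
index = {}
for url, text in docs.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(keyword):
    """Return the list of documents in which the keyword was found."""
    return sorted(index.get(keyword.lower(), set()))

print(search("web"))       # ['page1.html', 'page2.html']
print(search("crawlers"))  # ['page2.html']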

Human-powered search engines rely on humans to submit information, which is subsequently indexed and catalogued. Only information that is submitted is put into the index.

Crawler-based search engines use crawlers to visit a web site, read all of the information found on the site, read the site's meta-tags (special HTML tags that provide information about a Web page), and follow the links that the site connects to, indexing the linked Web sites as well. The crawler then returns all of this information to the server, where the data is indexed. Crawlers periodically return to the sites to check for updated information.
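The steps above can be sketched as a very small crawler. The example below is only an assumption of how such a crawler might look in Python; it uses the standard urllib and html.parser modules, starts from a placeholder URL, and simply collects page text, meta-tag content, and outgoing links to follow.

# A minimal crawler sketch: fetch a page, read its text and meta-tags,
# collect the links it connects to, and follow them (breadth-first, with a limit).
# "http://example.com/" is a placeholder start URL, not from the original post.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []        # URLs this page connects to
        self.meta = {}         # meta-tag name -> content
        self.text_parts = []   # visible text, kept for indexing

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_data(self, data):
        self.text_parts.append(data)

def crawl(start_url, max_pages=10):
    """Visit pages starting from start_url and return a simple index."""
    index = {}                       # url -> (page text, meta-tags)
    queue, seen = [start_url], set()
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except OSError:
            continue                 # skip pages that fail to load
        parser = PageParser()
        parser.feed(html)
        index[url] = (" ".join(parser.text_parts), parser.meta)
        # Follow the links found on this page as well.
        queue.extend(urljoin(url, link) for link in parser.links)
    return index

if __name__ == "__main__":
    pages = crawl("http://example.com/")
    print(len(pages), "pages indexed")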

Meta-Search Engine

A meta-search engine is an engine that queries other search engines and then combines the results it receives. In other words, meta-search engines allow users to search several engines simultaneously. Some also expand the query with words of similar meaning to the keywords used in the search.
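The idea of querying several engines and combining their results can be sketched as below. The two "engines" are invented stand-ins that return made-up ranked lists; a real meta-search engine would call the actual search services instead.

# A minimal sketch of a meta-search engine: send one query to several
# underlying engines and merge the ranked lists they return.

def engine_a(query):
    return ["http://a.example/1", "http://shared.example/x", "http://a.example/2"]

def engine_b(query):
    return ["http://shared.example/x", "http://b.example/1"]

def meta_search(query, engines):
    """Query every engine and merge the results, scoring by rank position."""
    scores = {}
    for engine in engines:
        for rank, url in enumerate(engine(query)):
            # Results that appear early, or in several engines, score higher.
            scores[url] = scores.get(url, 0) + 1.0 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

print(meta_search("web crawlers", [engine_a, engine_b]))
# ['http://shared.example/x', 'http://a.example/1', 'http://b.example/1', ...]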

Subject Directories

A subject directory is a web directory that organizes web sites by subject. Directories are maintained by humans instead of software. Web directories are much smaller than search engine databases, since the sites are reviewed and catalogued by humans rather than robots (crawlers).
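In essence, a subject directory is a hand-maintained hierarchy of categories, each listing the sites an editor has filed under it. The structure below is a made-up illustration of that idea; the category names and URLs are not from any real directory.

# A made-up illustration of a subject directory: a small, human-maintained
# hierarchy of categories, each listing the sites filed under it.
directory = {
    "Computers": {
        "Internet": ["http://example-isp.test", "http://example-search.test"],
        "Programming": ["http://example-python.test"],
    },
    "Science": {
        "Astronomy": ["http://example-stars.test"],
    },
}

def list_sites(category_path):
    """Walk the category path (e.g. Computers > Internet) and list its sites."""
    node = directory
    for name in category_path:
        node = node[name]
    return node

print(list_sites(["Computers", "Internet"]))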
