A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. This process is called Web crawling or spidering. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or Web scutters.

A
Web crawler is one type of bot, or software agent. In general, it starts with a
list of URLs to visit, called the seeds. As the crawler visits these URLs, it
identifies all the hyperlinks in the page and adds them to the list of URLs to
visit, called the crawl frontier. URLs from the frontier are recursively
visited according to a set of policies.
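
A minimal sketch of this seed-and-frontier loop in Python, using only the standard library. The seed URL and the page limit are illustrative assumptions, and real crawlers would add policies such as robots.txt handling and politeness delays:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=50):
    """Breadth-first crawl: seed URLs feed the frontier, and each
    visited page's hyperlinks are added back to it."""
    frontier = deque(seeds)   # the crawl frontier
    visited = set()           # policy: never revisit a URL
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue          # skip unreachable pages
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)   # resolve relative links
            if urlparse(absolute).scheme in ("http", "https"):
                frontier.append(absolute)   # grow the frontier
    return visited

if __name__ == "__main__":
    # example.com is a placeholder seed
    print(crawl(["https://example.com/"], max_pages=5))
```

The deque gives a first-in, first-out frontier, so pages are visited in breadth-first order; swapping it for a priority queue would let the crawler rank URLs by other policies instead.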
Web Crawler's Functions
- Web crawlers are mainly used to create a copy of all visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches.
- They can also be used for automatic maintenance tasks on a Web site, such as checking links (see the sketch after this list) or validating HTML code.
- They can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
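
As one concrete maintenance task, a small link checker can request each URL and report those that fail. This is a sketch under the assumption that a HEAD request is acceptable to the server; the URLs in the usage example are placeholders:

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def check_links(urls):
    """Prints the HTTP status of each URL; broken links show an error."""
    for url in urls:
        try:
            # HEAD avoids downloading the page body where servers allow it
            request = Request(url, method="HEAD")
            with urlopen(request, timeout=10) as response:
                print(f"{response.status} OK      {url}")
        except HTTPError as error:    # server answered with a 4xx/5xx status
            print(f"{error.code} BROKEN  {url}")
        except URLError as error:     # DNS failure, refused connection, etc.
            print(f"ERROR ({error.reason})  {url}")

if __name__ == "__main__":
    check_links([
        "https://example.com/",
        "https://example.com/no-such-page",
    ])
```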