INTRODUCTION TO SEARCH ENGINE SPIDERS

The term search engine spider is used interchangeably with search engine crawler. A spider is a program a search engine uses to discover information on the World Wide Web and to index what it finds, so that results can be returned when a user enters a search query. The spider "reads" the text of a web page and records every hyperlink it encounters. It then follows those URLs, crawls the linked pages in turn, and saves copies of each page into the search engine's index for later retrieval by visitors.

Search engine spiders run continuously, sometimes indexing new web pages and sometimes revisiting pages that change frequently. The goal of a spider is to keep the search engine it serves supplied with the most up-to-date content possible.
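The crawl loop described above can be sketched in miniature. This is a simplified illustration, not how any real search engine works: the `PAGES` dictionary is a hypothetical stand-in for fetching pages over HTTP, and the "index" here is just a map of URL to raw text. The breadth-first pattern of reading a page, extracting its links, and queueing unseen URLs is the core idea.

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical in-memory "web" standing in for real HTTP fetches.
PAGES = {
    "http://example.com/": '<a href="http://example.com/a">A</a> home page text',
    "http://example.com/a": '<a href="http://example.com/">home</a> page A text',
}

class LinkExtractor(HTMLParser):
    """Records the href of every <a> tag encountered in the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed):
    """Breadth-first crawl: read a page, index its text, queue its links."""
    index = {}              # URL -> page text (the spider's "index")
    queue = deque([seed])
    seen = {seed}
    while queue:
        url = queue.popleft()
        html = PAGES.get(url)       # stands in for an HTTP GET
        if html is None:
            continue
        index[url] = html           # save a copy of the page
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:   # follow each hyperlink exactly once
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("http://example.com/")
```

Starting from the seed URL, the spider visits both pages once each; real crawlers add politeness delays, `robots.txt` handling, and revisit scheduling on top of this skeleton.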
