chapter02-part03-search-engine_搜索引擎.ppt (Search Engine)

Mining the Web (Chakrabarti and Ramakrishnan)
Search Engine
朱廷劭 (Zhu, Tingshao), Ph.D.

Examples of search engines
- Conventional (library catalog): search by keyword, title, author, etc.
- Text-based (Lexis-Nexis, Google, Yahoo!): search by keywords; limited search using queries in natural language.
- Multimedia (QBIC, WebSeek, SaFe): search by visual appearance (shapes, colors, ...).
- Question answering systems (Ask, NSIR, Answerbus): search in (restricted) natural language.
- Clustering systems (Vivisimo, Clusty).
- Research systems (Lemur, Nutch).

What does it take to build a search engine?
- Decide what to index.
- Collect it.
- Index it (efficiently).
- Keep the index up to date.
- Provide user-friendly query facilities.

What else?
- Understand the structure of the web for efficient crawling.
- Understand user information needs.
- Preprocess text and other unstructured data.
- Cluster data.
- Classify data.
- Evaluate performance.

How Search Engines Work
- Gather the contents of all web pages (using a program called a crawler or spider).
- Organize the contents of the pages in a way that allows efficient retrieval (indexing).
- Take in a query, determine which pages match, and show the results (ranking and display of results).
(A minimal index-and-query sketch appears after these notes.)

Standard Web Search Engine Architecture
[Architecture diagram slides; the figures are not preserved in this extract.]

Spiders (crawlers)
How to find web pages to visit and copy?
- Start with a list of domain names and visit the home pages there.
- Look at the hyperlinks on each home page and follow those links to more pages.
- Use HTTP commands to GET the pages.
- Keep a list of URLs visited, and of those still to be visited.
- Each time the program loads a new HTML page, add the links in that page to the list to be crawled.
(A minimal crawl-loop sketch appears after these notes.)

Four Laws of Crawling
- A crawler must show identification.
- A crawler must obey the robots exclusion standard (/wc/norobots.html).
- A crawler must not hog resources.
- A crawler must report errors.
(A sketch of a fetch routine following these laws appears after these notes.)

Example robots.txt file
- /robots.txt (just the first few lines)

Lots of tricky aspects
- Servers are often ...
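To make the gather/index/query pipeline from "How Search Engines Work" concrete, here is a minimal Python sketch of the indexing and matching steps. Everything in it (the tokenize/build_index/search helpers and the two sample pages) is invented for illustration; a real engine would add stemming, stop-word removal, HTML stripping, and ranking on top of this.

```python
# Minimal sketch of the index-then-query pipeline; all names here are
# illustrative, not part of any real engine.
from collections import defaultdict

def tokenize(text):
    """Lowercase and split on whitespace; real engines do far more."""
    return text.lower().split()

def build_index(pages):
    """Map each term to the set of page URLs containing it (an inverted index)."""
    index = defaultdict(set)
    for url, text in pages.items():
        for term in tokenize(text):
            index[term].add(url)
    return index

def search(index, query):
    """Boolean AND retrieval: pages containing every query term.
    Ranking (e.g. by term frequency or link analysis) would come next."""
    terms = tokenize(query)
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

pages = {
    "http://example.com/a": "web search engines crawl and index pages",
    "http://example.com/b": "library catalogs support search by author",
}
index = build_index(pages)
print(search(index, "search pages"))   # {'http://example.com/a'}
```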
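The crawl loop from the Spiders section, as a sketch: seed URLs go into a frontier of pages still to be visited, successfully fetched pages go into a visited set, and the links found on each page are appended to the frontier. Error handling and politeness are deliberately minimal here (see the next sketch), and the seed URL is a placeholder.

```python
# Sketch of the seed / frontier / visited crawl loop described above.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=10):
    frontier = deque(seeds)          # URLs still to be visited
    visited = set()                  # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                 # a real crawler would log the error
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)   # resolve relative links
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)
    return visited

print(crawl(["http://example.com/"]))
```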
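A sketch of a fetch routine that tries to honor the four laws of crawling: it identifies itself with a User-Agent string, consults robots.txt via Python's standard urllib.robotparser, sleeps between requests, and reports failures. The agent name and the one-second delay are arbitrary illustrative choices, not values from the slides.

```python
# Sketch of a fetch routine following the four laws: identify yourself,
# obey robots.txt, don't hog resources, report errors.
import time
from urllib import robotparser
from urllib.parse import urlparse
from urllib.request import Request, urlopen

AGENT = "example-course-crawler/0.1"   # law 1: show identification

def allowed_by_robots(url):
    """Law 2: check the site's robots.txt before fetching."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(url))
    rp = robotparser.RobotFileParser(root + "/robots.txt")
    try:
        rp.read()
    except Exception:
        return True    # robots.txt unreachable; policies vary on this
    return rp.can_fetch(AGENT, url)

def polite_fetch(url, delay=1.0):
    if not allowed_by_robots(url):
        print("robots.txt disallows", url)       # law 4: report what happened
        return None
    time.sleep(delay)                            # law 3: don't hog the server
    try:
        req = Request(url, headers={"User-Agent": AGENT})
        return urlopen(req, timeout=5).read()
    except Exception as err:
        print("error fetching", url, "-", err)   # law 4: report errors
        return None

polite_fetch("http://example.com/")
```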
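The "Example robots.txt file" slide refers to the first few lines of a live /robots.txt that were not preserved in this extract. Purely as an illustration of the robots exclusion format, a typical file begins like this (the paths are invented, and Crawl-delay is a common but nonstandard extension):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Crawl-delay: 10
```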
