Hdl Handle:
http://hdl.handle.net/2436/4012
Title:
Methodologies for crawler based Web surveys.
Authors:
Thelwall, Mike
Abstract:
There have been many attempts to study the content of the Web, through either human or automatic agents. This paper describes five different previously used Web survey methodologies, each justifiable in its own right, and presents a simple experiment that demonstrates concrete differences between them. The concept of crawling the Web also bears further inspection, including the scope of the pages to crawl, the method used to access and index each page, and the algorithm for the identification of duplicate pages. The issues involved will be well known to many computer scientists but, with the increasing use of crawlers and search engines in other disciplines, they now require public discussion in the wider research community. The paper concludes that any scientific attempt to crawl the Web must make available the parameters under which it operates so that researchers can, in principle, replicate experiments or be aware of, and take into account, differences between methodologies. It also introduces a new hybrid random page selection methodology.
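One of the crawling issues the abstract raises is the algorithm for identifying duplicate pages. As a minimal illustrative sketch only (not the method described in the paper), one common approach is to fingerprint each page by hashing its normalized content, so that trivially reformatted copies of the same page are flagged as duplicates:

```python
# Sketch: duplicate-page detection via content hashing.
# Assumption: pages differing only in case and whitespace count as duplicates;
# real crawlers may normalize more aggressively (strip markup, near-duplicate hashing).
import hashlib

def content_fingerprint(html: str) -> str:
    """Hash of the page after lowercasing and collapsing whitespace."""
    normalized = " ".join(html.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

seen_fingerprints: set[str] = set()

def is_duplicate(html: str) -> bool:
    """Return True if an equivalent page was already crawled."""
    fp = content_fingerprint(html)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False
```

The choice of normalization is exactly the kind of crawl parameter the paper argues should be published: two crawlers that normalize differently will report different duplicate counts and hence different survey results.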
Citation:
Internet Research, 12(2): 124-138
Publisher:
MCB UP Ltd
Issue Date:
2002
URI:
http://hdl.handle.net/2436/4012
DOI:
10.1108/10662240210422503
Additional Links:
http://www.emeraldinsight.com/10.1108/10662240210422503
Type:
Article
Language:
en
ISSN:
1066-2243
Appears in Collections:
Statistical Cybermetrics Research Group

Full metadata record

DC Field | Value | Language
dc.contributor.author | Thelwall, Mike | -
dc.date.accessioned | 2006-08-23T15:54:20Z | -
dc.date.available | 2006-08-23T15:54:20Z | -
dc.date.issued | 2002 | -
dc.identifier.citation | Internet Research, 12(2): 124-138 | en
dc.identifier.issn | 1066-2243 | -
dc.identifier.doi | 10.1108/10662240210422503 | -
dc.identifier.uri | http://hdl.handle.net/2436/4012 | -
dc.description.abstract | There have been many attempts to study the content of the Web, through either human or automatic agents. This paper describes five different previously used Web survey methodologies, each justifiable in its own right, and presents a simple experiment that demonstrates concrete differences between them. The concept of crawling the Web also bears further inspection, including the scope of the pages to crawl, the method used to access and index each page, and the algorithm for the identification of duplicate pages. The issues involved will be well known to many computer scientists but, with the increasing use of crawlers and search engines in other disciplines, they now require public discussion in the wider research community. The paper concludes that any scientific attempt to crawl the Web must make available the parameters under which it operates so that researchers can, in principle, replicate experiments or be aware of, and take into account, differences between methodologies. It also introduces a new hybrid random page selection methodology. | en
dc.format.extent | 305216 bytes | -
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en
dc.publisher | MCB UP Ltd | en
dc.relation.url | http://www.emeraldinsight.com/10.1108/10662240210422503 | en
dc.subject | Indexes | en
dc.subject | Internet | en
dc.subject | Surveys | en
dc.subject | Web crawlers | en
dc.title | Methodologies for crawler based Web surveys. | en
dc.type | Article | en
dc.format.dig | YES | -
All Items in WIRE are protected by copyright, with all rights reserved, unless otherwise indicated.