Show simple item record

dc.contributor.author     Thelwall, Mike
dc.date.accessioned       2006-08-23T15:54:20Z
dc.date.available         2006-08-23T15:54:20Z
dc.date.issued            2002
dc.identifier.citation    Thelwall, M. (2002), "Methodologies for crawler based Web surveys", Internet Research, Vol. 12 No. 2, pp. 124-138. https://doi.org/10.1108/10662240210422503
dc.identifier.issn        1066-2243
dc.identifier.doi         10.1108/10662240210422503
dc.identifier.uri         http://hdl.handle.net/2436/4012
dc.description            This is an accepted manuscript of an article published by MCB UP Ltd in Internet Research on 01/05/2002, available online: https://doi.org/10.1108/10662240210422503. The accepted version of the publication may differ from the final published version.
dc.description.abstract   There have been many attempts to study the content of the Web, either through human or automatic agents. Describes five different previously used Web survey methodologies, each justifiable in its own right, but presents a simple experiment that demonstrates concrete differences between them. The concept of crawling the Web also bears further inspection, including the scope of the pages to crawl, the method used to access and index each page, and the algorithm for the identification of duplicate pages. The issues involved here will be well-known to many computer scientists but, with the increasing use of crawlers and search engines in other disciplines, they now require a public discussion in the wider research community. Concludes that any scientific attempt to crawl the Web must make available the parameters under which it is operating so that researchers can, in principle, replicate experiments or be aware of and take into account differences between methodologies. Also introduces a new hybrid random page selection methodology.
dc.format                 application/pdf
dc.format.extent          305216 bytes
dc.format.mimetype        application/pdf
dc.language.iso           en
dc.publisher              MCB UP Ltd
dc.relation.url           http://www.emeraldinsight.com/10.1108/10662240210422503
dc.subject                Indexes
dc.subject                Internet
dc.subject                Surveys
dc.subject                Web crawlers
dc.title                  Methodologies for crawler based Web surveys.
dc.type                   Journal article
dc.identifier.journal     Internet Research
dc.format.dig             YES
rioxxterms.version        AM
dc.source.volume          12
dc.source.issue           2
dc.source.beginpage       124
dc.source.endpage         138
refterms.dateFCD          2020-06-30T12:40:59Z
refterms.versionFCD       AM
refterms.dateFOA          2018-08-21T11:56:08Z


Files in this item

Name:    2002_ Methodologies for Crawler ...
Size:    298.0Kb
Format:  PDF
