
Scrapy Start_urls

The script (below) from this tutorial contains two start_urls.

from scrapy.spider import Spider
from scrapy.selector import Selector
from dirbot.items import Website

class DmozS...

Solution 1:

The start_urls class attribute contains the start URLs, nothing more. If you have extracted URLs of other pages you want to scrape, yield the corresponding requests from the parse callback with [another] callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):

    name = 'my_spider'
    start_urls = [
        'http://www.domain.com/'
    ]
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse the main page and extract category links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse a category page and extract links to the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize how the start requests are created, override the BaseSpider.start_requests() method.
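For example, here is a minimal sketch of such an override; the URL and the parse_page callback are placeholders, not part of the original answer:

from scrapy.http import Request
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):
    name = 'my_spider'

    def start_requests(self):
        # Build the initial requests yourself instead of relying on
        # start_urls; this lets you set a custom callback per request.
        yield Request('http://www.domain.com/page?id=1',  # placeholder URL
                      callback=self.parse_page)

    def parse_page(self, response):
        ...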

Solution 2:

start_urls contains the links from which the spider starts crawling. If you want to crawl recursively, you should use CrawlSpider and define rules for it. See http://doc.scrapy.org/en/latest/topics/spiders.html for an example.
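A minimal sketch of such a spider, assuming the allow pattern and domain are placeholders for whatever links you actually want to follow:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MyCrawlSpider(CrawlSpider):
    name = 'my_crawl_spider'
    allowed_domains = ['domain.com']
    start_urls = ['http://www.domain.com/']

    rules = [
        # Follow every link matching the pattern and pass each
        # downloaded page to parse_item.
        Rule(SgmlLinkExtractor(allow=[r'/category/\w+']),
             callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        self.log('Scraping %s' % response.url)

Note that with CrawlSpider the callback must not be named parse, since CrawlSpider uses parse internally to apply the rules.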

Solution 3:

The BaseSpider class does not have a rules attribute; only CrawlSpider does. Have a look at http://readthedocs.org/docs/scrapy/en/latest/intro/overview.html and search for "rules" to find an example.

Solution 4:

If you use BaseSpider, inside the callback you have to extract the desired URLs yourself and return Request objects.

If you use CrawlSpider, link extraction is taken care of by the rules and the SgmlLinkExtractor associated with them.

Solution 5:

If you use a rule to follow links (this is already implemented in Scrapy), the spider will scrape them too. I hope this helps...

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector


class Spider(CrawlSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']
    # rules are only honored by CrawlSpider, not BaseSpider
    rules = [Rule(SgmlLinkExtractor(allow=[], deny=[]), follow=True)]

    ...
