Scrapy Data Table extract

I am trying to scrape https://www.expireddomains.net/deleted-com-domains/ for its list of expired domain data.
I always get empty item fields with the following spider:
class ExpiredSpider(BaseSpider):
    name = "expired"
    allowed_domains = ["example.com"]
    start_urls = ['https://www.expireddomains.net/deleted-com-domains/']

    def parse(self, response):
        log.msg('parse(%s)' % response.url, level=log.DEBUG)
        rows = response.xpath('//table[@class="base1"]/tbody/tr')
        for row in rows:
            item = DomainItem()
            item['domain'] = row.xpath('td[1]/text()').extract()
            item['bl'] = row.xpath('td[2]/text()').extract()
            yield item
Can somebody point out what is wrong? Thanks.

As a first note, you should use scrapy.Spider instead of BaseSpider, which is deprecated.
Secondly, the .extract() method returns a list rather than a single element.
This is how the item extraction should look:
    item['domain'] = row.xpath('td[1]/text()').extract_first()
    item['bl'] = row.xpath('td[2]/text()').extract_first()
Also, you should use the built-in Python logging library:
import logging
logging.debug("parse(%s)", response.url)
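Putting both fixes together, a minimal sketch of the corrected spider might look like this. It assumes a DomainItem with domain and bl fields importable from your project's items module (the myproject package name is hypothetical), and it also corrects allowed_domains to match the site being scraped:
import logging
import scrapy

from myproject.items import DomainItem  # hypothetical project/items module

class ExpiredSpider(scrapy.Spider):
    name = "expired"
    allowed_domains = ["expireddomains.net"]
    start_urls = ['https://www.expireddomains.net/deleted-com-domains/']

    def parse(self, response):
        logging.debug("parse(%s)", response.url)
        # each table row is one expired-domain record
        for row in response.xpath('//table[@class="base1"]/tbody/tr'):
            item = DomainItem()
            item['domain'] = row.xpath('td[1]/text()').extract_first()
            item['bl'] = row.xpath('td[2]/text()').extract_first()
            yield item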

Related

Duplication in data while scraping data using Scrapy

I am using Scrapy to scrape data from a website, where I want to scrape each graphics card's title, price, and whether it is in stock. The problem is my code is looping twice, so instead of 10 products I am getting 20.
import scrapy

class ThespiderSpider(scrapy.Spider):
    name = 'Thespider'
    start_urls = ['https://www.czone.com.pk/graphic-cards-pakistan-ppt.154.aspx?page=2']

    def parse(self, response):
        data = {}
        cards = response.css('div.row')
        for card in cards:
            for c in card.css('div.product'):
                data['Title'] = c.css('h4 a::text').getall()
                data['Price'] = c.css('div.price span::text').getall()
                data['Stock'] = c.css('div.product-stock span.product-data::text').getall()
                yield data
You're doing a nested for loop when one isn't necessary.
Each card can be captured by the CSS selector response.css('div.product')
Code Example
def parse(self, response):
    data = {}
    cards = response.css('div.product')
    for card in cards:
        data['Title'] = card.css('h4 a::text').getall()
        data['Price'] = card.css('div.price span::text').getall()
        data['Stock'] = card.css('div.product-stock span.product-data::text').getall()
        yield data
Additional Information
Use get() instead of getall(): getall() returns a list, and you'll probably want a string, which is what get() gives you.
If you're thinking about multiple pages, a Scrapy Item (defined in items.py) may be better than yielding a plain dictionary. Invariably there will be something you need to alter, and an Item gives you more flexibility to do that.
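Putting those notes together, a sketch of the same parse that uses get() and builds a fresh dictionary per product (reusing one data dict across yields makes every yielded reference point at the same object, which can bite later):
def parse(self, response):
    # one fresh dict per product card, single strings via get()
    for card in response.css('div.product'):
        yield {
            'Title': card.css('h4 a::text').get(),
            'Price': card.css('div.price span::text').get(),
            'Stock': card.css('div.product-stock span.product-data::text').get(),
        }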

Get structured output with Scrapy

I'm just starting to use Scrapy and this is one of my first few projects. I am trying to scrape some company metadata from https://www.baincapitalprivateequity.com/portfolio/ . I have figured out my selectors, but I'm unable to structure the output: I'm currently getting everything in one cell, but I want one row per company. If someone could help with where I'm going wrong, that would be really great.
import scrapy
from ..items import BainpeItem

class BainPeSpider(scrapy.Spider):
    name = 'Bain-PE'
    allowed_domains = ['baincapitalprivateequity.com']
    start_urls = ['https://www.baincapitalprivateequity.com/portfolio/']

    def parse(self, response):
        items = BainpeItem()
        all_cos = response.css('div.grid')
        for i in all_cos:
            company = i.css('ul li::text').extract()
            about = i.css('div.companyDetail p').extract()
            items['company'] = company
            items['about'] = about
            yield items
You can just yield each item in the for loop:
for i in all_cos:
    item = BainpeItem()
    company = i.css('ul li::text').extract()
    about = i.css('div.companyDetail p').extract()
    item['company'] = company
    item['about'] = about
    yield item
This way each item will arrive in the pipeline separately.
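One more detail: extract() returns a list of strings, so each field will still be a list. If you want about as a single block of text per company, a hedged variant joins the paragraph texts (this assumes the p nodes carry plain description text):
for i in all_cos:
    item = BainpeItem()
    item['company'] = i.css('ul li::text').extract()
    # ::text plus join turns the list of paragraphs into one description string
    item['about'] = ' '.join(t.strip() for t in i.css('div.companyDetail p::text').extract())
    yield item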

When using scrapy shell, I get no data from response.xpath

I am trying to scrape a betting site. However, when I check for the retrieved data in scrapy shell, I receive nothing.
The XPath to what I need is //*[@id="yui_3_5_0_1_1562259076537_31330"], and when I try it in the shell this is what I get:
In [18]: response.xpath('//*[@id="yui_3_5_0_1_1562259076537_31330"]')
Out[18]: []
The output is [], but I expected something from which I could extract the href.
When I use the "inspect" tool in Chrome while the site is still loading, this id is outlined in purple. Does this mean that the site is using JavaScript? And if so, is this the reason why Scrapy does not find the item and returns []?
I tried scraping the site using just Scrapy, and this is my result.
This is the items.py file:
import scrapy

class LifeMatchsItem(scrapy.Item):
    Event = scrapy.Field()  # Name of event
    Match = scrapy.Field()  # Team1 vs Team2
    Date = scrapy.Field()   # Date of match
This is my spider code:
import scrapy
from LifeMatchesProject.items import LifeMatchsItem

class LifeMatchesSpider(scrapy.Spider):
    name = 'life_matches'
    start_urls = ['http://www.betfair.com/sport/home#sscpl=ro/']
    custom_settings = {'FEED_EXPORT_ENCODING': 'utf-8'}

    def parse(self, response):
        for event in response.xpath('//div[contains(@class,"events-title")]'):
            for element in event.xpath('./following-sibling::ul[1]/li'):
                item = LifeMatchsItem()
                item['Event'] = event.xpath('./a/@title').get()
                item['Match'] = element.xpath('.//div[contains(@class,"event-name-info")]/a/@data-event').get()
                item['Date'] = element.xpath('normalize-space(.//div[contains(@class,"event-name-info")]/a//span[@class="date"]/text())').get()
                yield item
And this is the result (the output screenshot is not reproduced here).
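As for the JavaScript part of the question: auto-generated ids like yui_3_5_0_1_... usually exist only in the live DOM, which is why the XPath matches in Chrome's inspector but not in Scrapy. A quick way to check is to look at the HTML Scrapy actually downloaded, for example from scrapy shell:
fetch('http://www.betfair.com/sport/home')
view(response)  # opens the raw HTML Scrapy received in your browser
response.xpath('//*[@id="yui_3_5_0_1_1562259076537_31330"]')  # [] if the node is JS-generated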

Scraping data from wikipedia using Scrapy - why/when do errors occur due to processing URLs?

I am just starting to use Scrapy, and I am learning to use it as I go along. Please can someone explain why there is an error in my code, and what this error is? Is it related to an invalid URL I have provided, and/or is it connected with invalid XPath expressions?
Here is my code:
from scrapy.spider import Spider
from scrapy.selector import Selector

class CatswikiSpider(Spider):
    name = "catswiki"
    allowed_domains = ["http://en.wikipedia.org/wiki/Cat‎"]
    start_urls = [
        "http://en.wikipedia.org/wiki/Cat‎"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//body/div')
        for site in sites:
            title = ('//h1/span/text()').extract()
            subtitle = ('//h2/span/text()').extract()
            boldtext = ('//p/b').extract()
            links = ('//a/@href').extract()
            imagelinks = ('//img/@src').re(r'.*cat.*').extract()
            print title, subtitle, boldtext, links, imagelinks
            #filename = response.url.split("/")[-2]
            #open(filename, 'wb').write(response.body)
And here are some attachments showing the errors in the command prompt (screenshots not reproduced here).
You need a function call before all your extract lines. I'm not familiar with Scrapy, but it's probably something like:
    title = site.xpath('//h1/span/text()').extract()
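Applied to the rest of the fields, the loop might look like the sketch below. Note the leading ./ to scope each query to the current site node (a bare //... would search the whole document on every iteration), and that .re() already returns a list of strings, so no .extract() after it:
for site in sites:
    title = site.xpath('.//h1/span/text()').extract()
    subtitle = site.xpath('.//h2/span/text()').extract()
    boldtext = site.xpath('.//p/b').extract()
    links = site.xpath('.//a/@href').extract()
    imagelinks = site.xpath('.//img/@src').re(r'.*cat.*')
    print title, subtitle, boldtext, links, imagelinks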

How To Remove White Space in Scrapy Spider Data

I am writing my first spider in Scrapy and attempting to follow the documentation. I have implemented ItemLoaders. The spider extracts the data, but the data contains many line returns. I have tried many ways to remove them, but nothing seems to work. The replace_escape_chars utility is supposed to work, but I can't figure out how to use it with the ItemLoader. Also, some people use unicode.strip, but again, I can't seem to get it to work. Some people try to use these in items.py and others in the spider. How can I clean the data of these line returns (\r\n)? My items.py file only contains the item names and field(). The spider code is below:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.loader import XPathItemLoader
from scrapy.utils.markup import replace_escape_chars
from ccpstore.items import Greenhouse

class GreenhouseSpider(BaseSpider):
    name = "greenhouse"
    allowed_domains = ["domain.com"]
    start_urls = [
        "http://www.domain.com",
    ]

    def parse(self, response):
        items = []
        l = XPathItemLoader(item=Greenhouse(), response=response)
        l.add_xpath('name', '//div[@class="product_name"]')
        l.add_xpath('title', '//h1')
        l.add_xpath('usage', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl00_liItem"]')
        l.add_xpath('repeat', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl02_liItem"]')
        l.add_xpath('direction', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl03_liItem"]')
        items.append(l.load_item())
        return items
You can use the default_output_processor on the loader, and also other processors on individual fields; see the title field:
from scrapy.spider import BaseSpider
from scrapy.contrib.loader import XPathItemLoader
from scrapy.contrib.loader.processor import Compose, MapCompose
from w3lib.html import replace_escape_chars, remove_tags
from ccpstore.items import Greenhouse

class GreenhouseSpider(BaseSpider):
    name = "greenhouse"
    allowed_domains = ["domain.com"]
    start_urls = ["http://www.domain.com"]

    def parse(self, response):
        l = XPathItemLoader(Greenhouse(), response=response)
        l.default_output_processor = MapCompose(lambda v: v.strip(), replace_escape_chars)
        l.add_xpath('name', '//div[@class="product_name"]')
        l.add_xpath('title', '//h1', Compose(remove_tags))
        l.add_xpath('usage', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl00_liItem"]')
        l.add_xpath('repeat', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl02_liItem"]')
        l.add_xpath('direction', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl03_liItem"]')
        return l.load_item()
It turns out that there were also many blank spaces in the data, so combining Steven's answer with some more research allowed all tags, line returns and duplicate spaces to be removed. The working code is below. Note the addition of text() on the loader lines, which drops the tags, and the split and join processors, which remove the spaces and line returns.
def parse(self, response):
    items = []
    l = XPathItemLoader(item=Greenhouse(), response=response)
    # requires: from scrapy.contrib.loader.processor import Join, MapCompose
    l.default_input_processor = MapCompose(lambda v: v.split(), replace_escape_chars)
    l.default_output_processor = Join()
    l.add_xpath('title', '//h1/text()')
    l.add_xpath('usage', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl00_liItem"]/text()')
    l.add_xpath('repeat', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl02_liItem"]/text()')
    l.add_xpath('direction', '//li[@id="ctl18_ctl00_rptProductAttributes_ctl03_liItem"]/text()')
    items.append(l.load_item())
    return items
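For readers on current Scrapy versions, where BaseSpider, XPathItemLoader and scrapy.contrib are long gone: the same pattern is spelled with scrapy.loader.ItemLoader and the processors from the itemloaders package. A rough sketch of the equivalent parse (abbreviated to the title field):
from scrapy.loader import ItemLoader
from itemloaders.processors import Join, MapCompose
from w3lib.html import replace_escape_chars

def parse(self, response):
    l = ItemLoader(item=Greenhouse(), response=response)
    # split on whitespace, strip escape chars, then re-join with single spaces
    l.default_input_processor = MapCompose(lambda v: v.split(), replace_escape_chars)
    l.default_output_processor = Join()
    l.add_xpath('title', '//h1/text()')
    return l.load_item()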
