I've been at this one for a few days, and no matter what I try, I cannot get Scrapy to extract text that is in one particular element.
To spare you all the code, here are the important pieces. The setup does grab everything else off the page, just not this text.
from scrapy.selector import Selector
start_url = "https://www.tripadvisor.com/VacationRentalReview-g34416-d12428323-On_the_Beach_Wide_flat_beach_Sunsets_Gulf_view_Sharks_teeth_Shells_Fish-Manasota_Key_F.html"
#BASIC ITEM AND SPIDER YADA, SPARE YOU THE DETAILS
hxs = Selector(response)
response_css = response.css("body")
desc_data = hxs.xpath('//*[@id="DETAILS_TRUNC_TEXT"]//text()').extract()
desc_data2 = response_css.css('#DETAILS_TRUNC_TEXT::text').extract()
Both return empty lists. Yes, I found the XPath and the CSS selector via Chrome, but the rest of my selectors work just fine, as I'm able to find other data on the site. Please help me figure out why this one isn't working.
To get the data you need to use a browser simulator like Selenium so that it can capture the dynamically generated content. You also need to add some delay to let the webpage load its content fully. This is how you can go about it:
from selenium import webdriver
from scrapy import Selector
import time
driver = webdriver.Chrome()
URL = "https://www.tripadvisor.com/VacationRentalReview-g34416-d12428323-On_the_Beach_Wide_flat_beach_Sunsets_Gulf_view_Sharks_teeth_Shells_Fish-Manasota_Key_F.html"
driver.get(URL)
time.sleep(5)  # If you take out this line you won't get anything, because the content of that page takes some time to load.
sel = Selector(text=driver.page_source)
item = sel.css('#DETAILS_TRUNC_TEXT::text').extract()  # this works
item_ano = sel.xpath('//*[@id="DETAILS_TRUNC_TEXT"]//text()').extract()  # this works as well
print(item, item_ano)
driver.quit()
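If the fixed sleep ever proves flaky, an explicit wait is a more robust alternative. A minimal sketch, assuming the same URL as above (the ten-second timeout is an arbitrary choice):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from scrapy import Selector

driver = webdriver.Chrome()
driver.get(URL)  # same TripAdvisor URL as above
# Poll for up to 10 seconds until the element exists, instead of sleeping a fixed time.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "DETAILS_TRUNC_TEXT")))
sel = Selector(text=driver.page_source)
print(sel.css('#DETAILS_TRUNC_TEXT::text').extract())
driver.quit()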
I tried your XPath and CSS in scrapy shell, and got nothing either.
Then I used the view(response) command and found out that the site is dynamic.
Here is a screenshot of view(response): [screenshot omitted; the details under Overview are blank]
You can see that the details under Overview don't show up, and that's why no matter how you try, you still get nothing.
Solutions: try Selenium (check the solution that SIM provided in the other answer) or Splash.
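If you go the Splash route, the idea is the same: render first, then select. A rough sketch, assuming a Splash instance on localhost:8050 and the scrapy-splash middleware enabled in settings.py (the spider name is made up):

import scrapy
from scrapy_splash import SplashRequest

class RentalSpider(scrapy.Spider):
    name = "rental"

    def start_requests(self):
        url = "https://www.tripadvisor.com/VacationRentalReview-g34416-d12428323-On_the_Beach_Wide_flat_beach_Sunsets_Gulf_view_Sharks_teeth_Shells_Fish-Manasota_Key_F.html"
        # 'wait' gives the page time to render its JavaScript before the HTML is returned.
        yield SplashRequest(url, self.parse, endpoint='render.html', args={'wait': 5})

    def parse(self, response):
        yield {'desc': response.css('#DETAILS_TRUNC_TEXT::text').extract()}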
Good Luck. :)
I am trying to scrape https://www.lithia.com/new-inventory/index.htm, but it seems that Splash is unable to extract even simple elements on the page.
I tried to extract an element with the appropriate XPath using both a Scrapy project (Python) and the Splash site itself (http://0.0.0.0:8050/), but Splash is unable to extract it.
Code (I have simplified it so it is easier to convey and debug):
import scrapy
from scrapy_splash import SplashRequest
from time import sleep
class CarSpider(scrapy.Spider):
    name = 'car1'
    allowed_domains = ['lithia.com/']
    start_urls = ['https://www.lithia.com/baierl-auto-group/new-inventory.htm']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url=url,
                                callback=self.parse,
                                endpoint='render.html')

    def parse(self, response):
        sleep(5)
        year = response.xpath('//span[contains(@class, "facet-list-group-label") and contains(text(), "Year")]')
        sleep(5)
        yield {
            'year': year,
        }
It returns:
{'year': []}
meaning the element was not extracted.
I checked the Splash site (http://0.0.0.0:8050/) as well, and lots of elements are not displayed in the HTML output. It seems there is some rendering issue.
Following that, I came across this page (https://splash.readthedocs.io/en/stable/faq.html#website-is-not-rendered-correctly), which lists possible ways to debug rendering problems in Splash.
I have tried:
Turning off private mode
Tuning splash:wait()
Setting splash:set_viewport_full()
Adding splash:set_user_agent()
Enabling plugins via splash.plugins_enabled
Enabling HTML5 media via splash.html5_media_enabled
But so far, I am still unable to extract the element. In fact, lots of other elements cannot be extracted either; the one above is just an example. The sketch below shows roughly how I have been combining these settings.
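This is the kind of Lua script I have been passing through the execute endpoint (the wait time and user agent string are guesses):

import scrapy
from scrapy_splash import SplashRequest

LUA_SCRIPT = """
function main(splash, args)
    splash.private_mode_enabled = false
    splash.plugins_enabled = true
    splash.html5_media_enabled = true
    splash:set_user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
    assert(splash:go(args.url))
    assert(splash:wait(5))
    splash:set_viewport_full()
    return splash:html()
end
"""

class CarSpider(scrapy.Spider):
    name = 'car1'
    start_urls = ['https://www.lithia.com/baierl-auto-group/new-inventory.htm']

    def start_requests(self):
        for url in self.start_urls:
            # The execute endpoint runs the Lua script above instead of plain render.html.
            yield SplashRequest(url, self.parse, endpoint='execute',
                                args={'lua_source': LUA_SCRIPT})

    def parse(self, response):
        year = response.xpath('//span[contains(@class, "facet-list-group-label") '
                              'and contains(text(), "Year")]').extract()
        yield {'year': year}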
Please help.
I have written the following code and it worked fine. I really enjoyed it, as I am quite new to Python requests and even Python 3. But the following day I noticed that the price variable was not updated, and it has not updated any time I have run the code for a week (it stays at 709.49, if that matters). I don't think it's a secret, so I have pasted the whole code below along with the link to the website.
So I want to ask whether I wrote something the wrong way, or whether the web page is not that simple to make a request to. Could you tell me what happened?
Here is the original code:
import requests
import re
from bs4 import BeautifulSoup
pattern = r'\d+\.?\d*'  # matches numbers like 709.49
site_doc = requests.get('https://bitbay.net/pl/kurs-walut/kurs-ethereum-pln').text
soup = BeautifulSoup(site_doc, 'html.parser')
price = str(soup.select('title'))
price = re.findall(pattern, price)
print(price)
Thanks in advance!
The reason this doesn't work is that the content you are trying to get is rendered by JavaScript. For that, I'd recommend using Selenium to get the JavaScript-rendered content.
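A minimal sketch of that approach, reusing the original regex on the rendered page title (the five-second sleep is a guess at how long the price takes to load):

import re
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://bitbay.net/pl/kurs-walut/kurs-ethereum-pln')
time.sleep(5)  # give the JavaScript time to fill in the current price
# The price appears in the rendered <title>, so apply the same regex to it.
price = re.findall(r'\d+\.?\d*', driver.title)
print(price)
driver.quit()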
I was trying to scrape the first 9 pages of a website, but it looks like page 5 and page 7 are missing. This makes Python show an AttributeError. I think an if statement can solve this, but I'm unable to figure out the code for it.
Here is my code:
import requests
from bs4 import BeautifulSoup
base_url = "http://cbcs.fastvturesults.com/student/1sp15me00"
for page in range(1, 10, 1):
    r = requests.get(base_url + str(page))
    c = r.content
    soup = BeautifulSoup(c, "html.parser")
    items = soup.find(class_="text-muted")
    if ??????????:
        pass
    else:
        print("{}\n{}".format(items.previous_sibling, items.text))
The error occurs when you try to access attributes of items while items is None, which happens when BeautifulSoup cannot find anything with class_="text-muted".
The solution:
if not items:
    continue
Note that pass (from your solution) just passes over the current statement and moves on to the next line in the loop, whereas continue ends the current iteration and moves on to the next one.
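A quick illustration of the difference:

for i in range(3):
    if i == 1:
        pass  # does nothing; the print below still runs for i == 1
    print('pass', i)      # prints for 0, 1 and 2

for i in range(3):
    if i == 1:
        continue  # skips the rest of this iteration
    print('continue', i)  # prints for 0 and 2 only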
You don't need an else block here; checking that items is not None suffices. Try the approach below:
items = soup.find(class_="text-muted")
if items:
    print("{}\n{}".format(items.previous_sibling, items.text))
I am brand new to Scrapy, have worked my way through the tutorial, and am trying to figure out how to apply what I have learned to complete a seemingly basic task. I know very little Python so far and am using this as a learning experience, so if I ask a simple question, I apologize.
My goal for this program is to follow this link http://ucmwww.dnr.state.la.us/ucmsearch/FindDocuments.aspx?idx=xwellserialnumber&val=971683 and to extract the well serial number to a csv file. Eventually I want to run this spider on several thousand different well files and retrieve specific data. However, I am starting with the basics first.
Right now the spider doesn't crawl any web page that I enter. There are no errors when I run it; it just states that 0 pages were crawled. I can't quite figure out what I am doing wrong. I am positive the start URL is OK, as I have checked it. Do I need a specific type of spider to accomplish what I am trying to do?
import scrapy
from scrapy import Spider
from scrapy.selector import Selector
class Sonrisdataaccess(Spider):
    name = "serial"
    allowed_domains = ["sonris.com"]
    start_urls = [
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=972498"]

    def parse(self, response):
        questions = Selector(response).xpath('/html/body/table[1]/tbody/tr[2]/td[1]')
        for question in questions:
            item = SonrisdataaccessItem()
            item['serial'] = question.xpath('/html/body/table[1]/tbody/tr[2]/td[1]').extract()[0]
            yield item
Thank you for any help, I greatly appreciate it!
First of all, I do not understand what you are doing in your for loop: if you already have a selector, you should not select against the whole HTML again inside it.
Nevertheless, the interesting part is that the browser represents the table very differently from how it is downloaded by Scrapy. If you look at the response in your parse method, you will see that there is no tbody element in the first table. This is why your selection does not return anything.
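You can verify this in scrapy shell; a sketch (the first query coming back empty shows the tbody was added by the browser):

$ scrapy shell "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=972498"
>>> response.xpath('/html/body/table[1]/tbody').extract()
[]
>>> response.xpath('/html/body/table[1]/tr[2]/td[1]/text()').extract()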
So to get the first serial number (as it is in your XPath) change your parse function to this:
def parse(self, response):
    item = SonrisdataaccessItem()
    item['serial'] = response.xpath('/html/body/table[1]/tr[2]/td[1]/text()').extract()[0]
    yield item
For later changes you may have to alter the XPath expression to get more data.
In Python, I'm trying to read the values on http://utahcritseries.com/RawResults.aspx. How can I read years other than the default of 2002?
So far, using mechanize, I've been able to reference the SELECT and list all of its available options/values, but I am unsure how to change its value and resubmit the form.
I'm sure this is a common issue and is frequently asked, but I'm not sure what I should even be searching for.
So how about this:
from mechanize import Browser
year="2005"
br=Browser()
br.open("http://utahcritseries.com/RawResults.aspx")
br.select_form(name="aspnetForm")
control=br.form.find_control("ctl00$ContentPlaceHolder1$ddlSeries")
control.set_value_by_label((year,))
response2=br.submit()
print response2.read()
For problems relating to AJAX loading of pages, use Firebug!
Install and open Firebug (it's a Firefox plugin), go to the Net tab, and make sure "All" is selected. Open the URL, change the select box, and see what is sent to the server and what is received.
It seems the catchily-named field ctl00$ContentPlaceHolder1$ddlSeries is what is responsible. Does the following work?
import urllib

postdata = {'ctl00$ContentPlaceHolder1$ddlSeries': 9}
src = urllib.urlopen(
    "http://utahcritseries.com/RawResults.aspx",
    data=urllib.urlencode(postdata)
).read()
print src
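If that bare POST comes back with the default page, the form probably also expects the hidden ASP.NET state fields echoed back. A sketch of grabbing __VIEWSTATE first (field names as they commonly appear on ASP.NET pages; newer pages may also require __EVENTVALIDATION):

import re
import urllib

url = "http://utahcritseries.com/RawResults.aspx"
page = urllib.urlopen(url).read()

# Pull the hidden ASP.NET state field out of the page so it can be sent back.
viewstate = re.search(r'id="__VIEWSTATE" value="([^"]*)"', page).group(1)

postdata = {
    '__VIEWSTATE': viewstate,
    'ctl00$ContentPlaceHolder1$ddlSeries': 9,
}
src = urllib.urlopen(url, data=urllib.urlencode(postdata)).read()
print src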