This query returns 0 or 20 listings, seemingly at random, every time I run it. Yesterday, when I looped through the pages, I always got 20 and was able to scrape 20 listings across 15 pages. But now I can't run my code properly because the request sometimes returns 0 listings.
I tried adding headers to the requests.get call and a random time.sleep (5-10 s) before each request, but I'm still facing the same issue. I also tried connecting to a hotspot to change my IP, still the same issue. Does anyone understand why?
import time
from random import randint
from bs4 import BeautifulSoup
import requests  # to fetch the page
airbnb_url = 'https://www.airbnb.com/s/Mayrhofen--Austria/homes?tab_id=home_tab&refinement_paths%5B%5D=%2Fhomes&date_picker_type=calendar&query=Mayrhofen%2C%20Austria&place_id=ChIJbzLYLzjdd0cRDtGuTzM_vt4&checkin=2021-02-06&checkout=2021-02-13&adults=4&source=structured_search_input_header&search_type=autocomplete_click'
soup = BeautifulSoup(requests.get(airbnb_url).content, 'html.parser')
listings = soup.find_all('div', '_8s3ctt')
print(len(listings))
It seems Airbnb returns two versions of the page: one "normal" HTML version and another where the listings are stored inside a <script> tag. To parse the <script> version of the page you can use the next example:
import json
import requests
from bs4 import BeautifulSoup

def find_listing(d):
    # recursively walk the decoded JSON and yield every listing dict
    if isinstance(d, dict):
        if "__typename" in d and d["__typename"] == "DoraListingItem":
            yield d["listing"]
        else:
            for v in d.values():
                yield from find_listing(v)
    elif isinstance(d, list):
        for v in d:
            yield from find_listing(v)

airbnb_url = "https://www.airbnb.com/s/Mayrhofen--Austria/homes?tab_id=home_tab&refinement_paths%5B%5D=%2Fhomes&date_picker_type=calendar&query=Mayrhofen%2C%20Austria&place_id=ChIJbzLYLzjdd0cRDtGuTzM_vt4&checkin=2021-02-06&checkout=2021-02-13&adults=4&source=structured_search_input_header&search_type=autocomplete_click"

soup = BeautifulSoup(requests.get(airbnb_url).content, "html.parser")
listings = soup.find_all("div", "_8s3ctt")

if len(listings):
    # normal page:
    print(len(listings))
else:
    # page that has listings stored inside <script>:
    data = json.loads(soup.select_one("#data-deferred-state").contents[0])
    for i, l in enumerate(find_listing(data), 1):
        print(i, l["name"])
Prints (when the <script> version is returned):
1 Mariandl (MHO103) for 36 persons.
2 central and friendly! For Families and Friends
3 Sonnenheim for 5 persons.
4 MO's Apartments
5 MO's Apartments
6 Beautiful home in Mayrhofen with 3 Bedrooms
7 Quaint Apartment in Finkenberg near Ski Lift
8 Apartment 2 Villa Daringer (5 pax.)
9 Modern Apartment in Schwendau with Garden
10 Holiday flats Dornau, Mayrhofen
11 Maple View
12 Laubichl Lodge by Apart Hotel Therese
13 Haus Julia - Apartment Edelweiß Mayrhofen
14 Melcherhof,
15 Rest coke
16 Vacation home Traudl
17 Luxurious Apartment near Four Ski Lifts in Mayrhofen
18 Apartment 2 60m² for 2-4 persons "Binder"
19 Apart ZEMMGRUND, 4-9 persons in Mayrhofen/Tirol
20 Apartment Ahorn View
EDIT: to print lat and lng:

...
for i, l in enumerate(find_listing(data), 1):
    print(i, l["name"], l["lat"], l["lng"])
Prints:
1 Mariandl (MHO103) for 36 persons. 47.16522 11.85723
2 central and friendly! For Families and Friends 47.16209 11.859691
3 Sonnenheim for 5 persons. 47.16809 11.86694
4 MO's Apartments 47.166969 11.863186
...
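If you prefer a table over printed lines, here's a small follow-up sketch (my addition; it reuses the find_listing() generator and assumes the <script> variant was returned, so data exists):

import pandas as pd

# collect the same fields shown above into rows, one per listing
rows = [
    {"name": l["name"], "lat": l["lat"], "lng": l["lng"]}
    for l in find_listing(data)
]
print(pd.DataFrame(rows).head())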
I'm very new to web scraping in Python. I want to extract the movie name, release year, and rating from the IMDb database. This is the IMDb page with 250 movies and ratings: https://www.imdb.com/chart/moviemeter/?ref_=nv_mv_mpm. I use the BeautifulSoup and requests modules. Here is my code:
movies = bs.find('tbody',class_='lister-list').find_all('tr')
When I tried to extract the movie name, rating & year, I got the same attribute error for all of them.
<td class="titleColumn">
    Glass Onion: une histoire à couteaux tirés
    <span class="secondaryInfo">(2022)</span>
    <div class="velocity">1
        <span class="secondaryInfo">(
            <span class="global-sprite telemeter up"></span>
        1)</span>
    </div>
</td>
<td class="ratingColumn imdbRating">
    <strong title="7,3 based on 207 962 user ratings">7,3</strong>
</td>
title = movies.find('td',class_='titleColumn').a.text
rating = movies.find('td',class_='ratingColumn imdbRating').strong.text
year = movies.find('td',class_='titleColumn').span.text.strip('()')
AttributeError                            Traceback (most recent call last)
<ipython-input-9-2363bafd916b> in <module>
----> 1 title = movies.find('td',class_='titleColumn').a.text
      2 title

~\anaconda3\lib\site-packages\bs4\element.py in __getattr__(self, key)
   2287     def __getattr__(self, key):
   2288         """Raise a helpful exception to explain a common code fix."""
-> 2289         raise AttributeError(
   2290             "ResultSet object has no attribute '%s'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?" % key
   2291         )

AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?
Can someone help me to solve the problem? Thanks in advance!
To collect the results as a list of rows, you can try the next example.
from bs4 import BeautifulSoup
import requests
import pandas as pd

data = []
res = requests.get("https://www.imdb.com/chart/moviemeter/?ref_=nv_mv_mpm")
#print(res)
soup = BeautifulSoup(res.content, "html.parser")

for card in soup.select('.chart.full-width tbody tr'):
    data.append({
        "title": card.select_one('.titleColumn a').get_text(strip=True),
        "year": card.select_one('.titleColumn span').text,
        'rating': card.select_one('td[class="ratingColumn imdbRating"]').get_text(strip=True)
    })

df = pd.DataFrame(data)
print(df)
#df.to_csv('out.csv', index=False)
Output:
title year rating
0 Avatar: The Way of Water (2022) 7.9
1 Glass Onion (2022) 7.2
2 The Menu (2022) 7.3
3 White Noise (2022) 5.8
4 The Pale Blue Eye (2022) 6.7
.. ... ... ...
95 Zoolander (2001) 6.5
96 Once Upon a Time in Hollywood (2019) 7.6
97 The Lord of the Rings: The Fellowship of the Ring (2001) 8.8
98 New Year's Eve (2011) 5.6
99 Spider-Man: No Way Home (2021) 8.2
[100 rows x 3 columns]
Update: to extract the data using the find_all and find methods.
from bs4 import BeautifulSoup
import requests
import pandas as pd

headers = {'User-Agent': 'Mozilla/5.0'}
data = []
res = requests.get("https://www.imdb.com/chart/moviemeter/?ref_=nv_mv_mpm", headers=headers)
#print(res)
soup = BeautifulSoup(res.content, "html.parser")

for card in soup.table.tbody.find_all("tr"):
    data.append({
        "title": card.find("td", class_="titleColumn").a.get_text(strip=True),
        "year": card.find("td", class_="titleColumn").span.get_text(strip=True),
        'rating': card.find('td', class_="ratingColumn imdbRating").get_text(strip=True)
    })

df = pd.DataFrame(data)
print(df)
AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?
find_all returns a list-like ResultSet, meaning that movies is a collection of rows, not a single element. You need to iterate over it with for movie in movies:
for movie in movies:
    title = movie.find('td', class_='titleColumn').a.text
    rating = movie.find('td', class_='ratingColumn imdbRating').strong.text
    year = movie.find('td', class_='titleColumn').span.text.strip('()')
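Note that if some row lacks a rating, .strong will be None and reading .text would raise another AttributeError; a guarded variant (a sketch of mine, not part of the original answer) avoids that:

for movie in movies:
    title_cell = movie.find('td', class_='titleColumn')
    rating_cell = movie.find('td', class_='ratingColumn imdbRating')
    title = title_cell.a.text
    year = title_cell.span.text.strip('()')
    # <strong> may be missing on unrated rows, so check before reading .text
    rating = rating_cell.strong.text if rating_cell and rating_cell.strong else None
    print(title, year, rating)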
I want to scrape all the pages of Internshala and extract the Job ID, Job name, Company name, and the Last date to apply, and store everything in a CSV to later convert to a dataframe.
import requests
import scrapy
from bs4 import BeautifulSoup
from scrapy import Selector
from scrapy.crawler import CrawlerProcess
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
import string
import pandas as pd

url = 'https://internshala.com/fresher-jobs'
sel = Selector(text=BeautifulSoup(requests.get(url).content).prettify())
pages = sel.xpath('//span[@id="total_pages"]').xpath('normalize-space(./text())').extract()
pages[0] = int(pages[0])
print(pages[0])  # which gives -> 4

class jobMan(scrapy.Spider):
    name = 'job'
    to_remove = {0: ["\n ", "\n "],
                 1: ['\n ', '\n ']}

    def start_requests(self):
        urls = "https://internshala.com/fresher-jobs/page-1"
        yield scrapy.Request(url=urls, callback=self.parse)

    def parse(self, response):
        ID = response.xpath('//div[@class="container-fluid individual_internship visibilityTrackerItem"]/@internshipid').extract()
        Job_Post = response.xpath('//div[@class="heading_4_5 profile"]/a').xpath('normalize-space(./text())').extract()
        Company = response.xpath('//a[@class="link_display_like_text"]').xpath('normalize-space(./text())').extract()
        Apply_By = response.xpath('//div[@class="internship_other_details_container"]/div[@class="other_detail_item_row"][2]//div[@class="item_body"]').xpath('normalize-space(./text())').extract()
        for page in range(2, pages[0] + 1):
            yield scrapy.Request(url=f"https://internshala.com/fresher-jobs/page-{page}", callback=self.parse)
        yield {
            'ID': ID,
            'Job': Job_Post,
            'Company': Company,
            'Apply_By': Apply_By
        }

process = CrawlerProcess(settings={
    'FEED_URI': 'JOBSS.csv',
    'FEED_FORMAT': 'csv'
})
process.crawl(jobMan)
process.start()
And then, finally:
final=pd.read_csv('JOBSS.csv')
print(final)
Which gave me:
ID Job \
0 NaN Product Developer - Science,Salesforce Develop...
1 NaN Business Development Manager,Mobile App Develo...
2 NaN Software Engineer,Social Media Strategist And ...
3 NaN Reactjs Developer,Full Stack Developer,Busines...
Company \
0 Open Door Education,Aekot Consulting And Techn...
1 ISB Studienkolleg,TutorBin,Alphacore Technolog...
2 CrewKarma,Internshala,Mithi Software Technolog...
3 Startxlabs Technologies Private Limited,RavGin...
Apply_By
0 7 Aug' 21,7 Aug' 21,7 Aug' 21,7 Aug' 21,7 Aug'...
1 31 Jul' 21,30 Jul' 21,30 Jul' 21,31 Jul' 21,30...
2 24 Jul' 21,24 Jul' 21,23 Jul' 21,23 Jul' 21,23...
3 11 Jul' 21,11 Jul' 21,11 Jul' 21,11 Jul' 21,11...
Doubt 1: Why is it not printing the IDs? I tried scraping just the ID for the first page using the same XPath and I got the correct output, but not while crawling.
Doubt 2: I wanted a dataframe such that, for example, the Job column contains each job post's name as a new row, with all the pages merged, but instead I am getting one row per page.
How can I solve these issues? Please help.
Doubt 1: Why is it not printing the IDs? I tried scraping just the ID for the first page using the same XPath and I got the correct output, but not while crawling.
Because the element's class attribute doesn't exactly equal that string (it has extra whitespace around the class names), an exact @class match fails; use contains() instead:
ID = response.xpath('//div[contains(@class, "container-fluid individual_internship visibilityTrackerItem")]/@internshipid').extract()
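As for Doubt 2, one hedged sketch (my own, assuming the four extracted lists stay aligned on each page) is to zip the lists inside parse() and yield one item per listing instead of one list per page:

def parse(self, response):
    ids = response.xpath('//div[contains(@class, "container-fluid individual_internship visibilityTrackerItem")]/@internshipid').extract()
    jobs = response.xpath('//div[@class="heading_4_5 profile"]/a').xpath('normalize-space(./text())').extract()
    companies = response.xpath('//a[@class="link_display_like_text"]').xpath('normalize-space(./text())').extract()
    apply_by = response.xpath('//div[@class="internship_other_details_container"]/div[@class="other_detail_item_row"][2]//div[@class="item_body"]').xpath('normalize-space(./text())').extract()
    # one row per job instead of one row per page
    for ID, job, company, deadline in zip(ids, jobs, companies, apply_by):
        yield {'ID': ID, 'Job': job, 'Company': company, 'Apply_By': deadline}
    for page in range(2, pages[0] + 1):
        yield scrapy.Request(url=f"https://internshala.com/fresher-jobs/page-{page}", callback=self.parse)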
I have a list of >1,000 companies which I could invest in. I need the ticker symbols for all of these companies. I run into difficulties when I try to strip the output of the soup, and when I try to loop through all the company names.
Please see an example of the site: https://finance.yahoo.com/lookup?s=asml. The idea is to replace asml with each company name, i.e. 'https://finance.yahoo.com/lookup?s=' + company, so I can loop through all the companies.
companies=df
Company name
0 Abbott Laboratories
1 ABBVIE
2 Abercrombie
3 Abiomed
4 Accenture Plc
This is the code I have now, where the strip code doesn't work and the loop over all the companies isn't working either.
#Create a function to scrape the data
def scrape_stock_symbols():
    Companies = df
    url = 'https://finance.yahoo.com/lookup?s=' + Companies
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    Company_Symbol = Soup.find_all('td', attrs={'class': 'data-col0 Ta(start) Pstart(6px) Pend(15px)'})
    for i in company_symbol:
        try:
            row = i.find_all('td')
            company_symbol.append(row[0].text.strip())
        except Exception:
            if company not in company_symbol:
                next(Company)
    return (company_symbol)

#Loop through every company in companies to get all of the tickers from the website
for Company in companies:
    try:
        (temp_company_symbol) = scrape_stock_symbols(company)
    except Exception:
        if company not in companies:
            next(Company)
Another difficulty is that the symbol lookup from Yahoo Finance will retrieve many company names.
I will have to clean the data afterwards. I want to set the AMS exchange as the standard; hence, if a company is listed on multiple exchanges, I am only interested in the AMS ticker symbol. The final goal is to create a new dataframe:
Company name Company_symbol
0 Abbott Laboratories ABT
1 ABBVIE ABBV
2 Abercrombie ANF
Here's a solution that doesn't require any scraping. It uses a package called yahooquery (disclaimer: I'm the author), which utilizes an API endpoint that returns symbols for a user's query. You can do something like this:
import pandas as pd
import yahooquery as yq

def get_symbol(query, preferred_exchange='AMS'):
    try:
        data = yq.search(query)
    except ValueError:  # Will catch JSONDecodeError
        print(query)
    else:
        quotes = data['quotes']
        if len(quotes) == 0:
            return 'No Symbol Found'
        symbol = quotes[0]['symbol']
        for quote in quotes:
            if quote['exchange'] == preferred_exchange:
                symbol = quote['symbol']
                break
        return symbol

companies = ['Abbott Laboratories', 'ABBVIE', 'Abercrombie', 'Abiomed', 'Accenture Plc']

df = pd.DataFrame({'Company name': companies})
df['Company symbol'] = df.apply(lambda x: get_symbol(x['Company name']), axis=1)
Company name Company symbol
0 Abbott Laboratories ABT
1 ABBVIE ABBV
2 Abercrombie ANF
3 Abiomed ABMD
4 Accenture Plc ACN
I've been trying for the last 3 hours to scrape this website and get the rank, name, wins, and losses of each team.
When implementing this code:
import requests
from bs4 import BeautifulSoup
halo = requests.get("https://www.halowaypoint.com/en-us/esports/standings")
page = BeautifulSoup(halo.content, "html.parser")
final = page.encode('utf-8')
print(final.find_all("div"))
I keep getting this error
If anyone can help me out then it would be much appreciated!
Thanks!
You are calling the method on the wrong variable; use the BeautifulSoup object page, not the byte string final:
print(page.find_all("div"))
Getting the table data is pretty straightforward: all the data is inside the div matched by the CSS selector "div.table.table--hcs":
import requests
from bs4 import BeautifulSoup

halo = requests.get("https://www.halowaypoint.com/en-us/esports/standings")
page = BeautifulSoup(halo.content, "html.parser")
table = page.select_one("div.table.table--hcs")

# header row
print(",".join([td.text for td in table.select("header div.td")]))

# one row per team
for row in table.select("div.tr"):
    rank, team = row.select_one("span.numeric--medium.hcs-trend-neutral").text, row.select_one("div.td.hcs-title").span.a.text
    wins, losses = [div.span.text for div in row.select("div.td.em-7")]
    print(rank, team, wins, losses)
If we run the code, we can see that the data matches the table on the page:
In [4]: print(",".join([td.text for td in table.select("header div.td")]))
Rank,Team,Wins,Losses

In [5]: for row in table.select("div.tr"):
   ...:     rank, team = row.select_one("span.numeric--medium.hcs-trend-neutral").text, row.select_one("div.td.hcs-title").span.a.text
   ...:     wins, losses = [div.span.text for div in row.select("div.td.em-7")]
   ...:     print(rank, team, wins, losses)
   ...:
1 Counter Logic Gaming 10 1
2 Team EnVyUs 8 3
3 Enigma6 8 3
4 Renegades 6 5
5 Team Allegiance 5 6
6 Evil Geniuses 4 7
7 OpTic Gaming 2 9
8 Team Liquid 1 10
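If you'd rather have the standings in a DataFrame instead of printing them, here's a short sketch (my addition, reusing the selectors from the answer above):

import pandas as pd

rows = []
for row in table.select("div.tr"):
    rank = row.select_one("span.numeric--medium.hcs-trend-neutral").text
    team = row.select_one("div.td.hcs-title").span.a.text
    wins, losses = [div.span.text for div in row.select("div.td.em-7")]
    rows.append({"Rank": rank, "Team": team, "Wins": wins, "Losses": losses})

print(pd.DataFrame(rows))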
I am running Neo4j 1.8.2 on a remote Unix box, and I am using this jar: https://github.com/jexp/batch-import/downloads.
nodes.csv is the same as given in the example:
name age works_on
Michael 37 neo4j
Selina 14
Rana 6
Selma 4
rels.csv is like this:
start end type since counter:int
1 2 FATHER_OF 1998-07-10 1
1 3 FATHER_OF 2007-09-15 2
1 4 FATHER_OF 2008-05-03 3
3 4 SISTER_OF 2008-05-03 5
2 3 SISTER_OF 2007-09-15 7
But I am getting this exception:
Using Existing Configuration File
Total import time: 0 seconds
Exception in thread "main" java.util.NoSuchElementException
at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
at org.neo4j.batchimport.Importer$Data.split(Importer.java:156)
at org.neo4j.batchimport.Importer$Data.update(Importer.java:167)
at org.neo4j.batchimport.Importer.importNodes(Importer.java:226)
at org.neo4j.batchimport.Importer.main(Importer.java:83)
I am new to Neo4j and was checking whether this importer can save some coding effort.
It would be great if someone could point out the probable mistake.
Thanks for the help!
--Edit:--
My nodes.csv
name dob city state s_id balance desc mgr_primary mgr_secondary mgr_tertiary mgr_name mgr_status
John Von 8/11/1928 Denver CO 1114-010 7.5 RA 0023-0990 0100-0110 Doozman Keith Active
my rels.csv
start end type since status f_type f_num
2 1 address_of
1 3 has_account 5 Active
4 3 f_of Primary 0111-0230
Hi, I had some issues in the past with the batch import script.
The formatting of your file must be very rigorous, which means:
- no extra spaces where they're not expected, like the ones I see in the first line of your rels.csv before "start";
- no multiple spaces in place of a tab. If your files are exactly like what you've copied here, you have 4 spaces instead of one tab, and that is not going to work, as the script uses a tokenizer that looks for tabs!
I had this issue because I always convert tabs to 4 spaces, and once I understood that, I stopped doing it for my CSVs! A quick repair script is sketched below.
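If your editor has already converted the tabs to spaces, a tiny Python sketch like this one (my own, not part of the batch importer; adjust the pattern to your data) can turn runs of spaces back into single tabs before you re-run the import:

import re

def spaces_to_tabs(src, dst):
    # collapse every run of 2 or more spaces into a single tab
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            fout.write(re.sub(r" {2,}", "\t", line))

spaces_to_tabs("nodes.csv", "nodes_tabs.csv")
spaces_to_tabs("rels.csv", "rels_tabs.csv")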