I am logging in to the following website via a Splash Lua script but can't log in - web-scraping

I want to log in to the https://login.starcitygames.com/ website using Splash integration with a Lua script. I first test it on localhost.
After finding the form's CSS selectors and entering the login credentials, I still fail to log in.
The code is here:
function main(splash)
    -- spoof a desktop browser user-agent
    splash:set_custom_headers({
        ["user-agent"] = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.72 Safari/537.36",
    })
    local url = splash.args.url
    assert(splash:go(url))
    assert(splash:wait(10))
    splash:set_viewport_full()
    -- fill in the login form
    local search_input = splash:select('input[type=text]')
    search_input:send_text("(censored)#gmail.com")
    local search_input = splash:select('input[name=password]')
    search_input:send_text("(censored)")
    assert(splash:wait(5))
    local submit_button = splash:select('button[type=submit]')
    submit_button:click()
    assert(splash:wait(15))
    return {
        html = splash:html(),
        png = splash:png(),  -- was splash.png(); Splash methods are called with a colon
    }
end
When I run it on localhost at 'http://0.0.0.0:8050/', the following result comes back and I am not logged in.
Maybe the CSS selectors I use are incorrect, or something else is wrong.
I am new to Splash and Lua, so I don't understand it.
The output is:

Try replacing local search_input = splash:select('input[type=text]') with local search_input = splash:select('input[name=username]').
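For reference, here is a minimal way to exercise that corrected selector from Python by posting the script to Splash's /execute HTTP endpoint. This is only a sketch: it assumes a local Splash instance on port 8050, and the credentials are placeholders to be adapted to the actual login form.

import requests

# Hedged sketch: run a Lua script through a local Splash instance via its HTTP API.
# Assumes Splash is listening on localhost:8050; the credentials below are placeholders.
lua_script = """
function main(splash)
    assert(splash:go(splash.args.url))
    assert(splash:wait(10))
    -- select the username field by name instead of input[type=text]
    local username_input = splash:select('input[name=username]')
    username_input:send_text(splash.args.username)
    local password_input = splash:select('input[name=password]')
    password_input:send_text(splash.args.password)
    local submit_button = splash:select('button[type=submit]')
    submit_button:click()
    assert(splash:wait(15))
    return {html = splash:html()}
end
"""

resp = requests.post(
    "http://localhost:8050/execute",
    json={
        "lua_source": lua_script,
        "url": "https://login.starcitygames.com/",
        "username": "user@example.com",  # placeholder
        "password": "secret",            # placeholder
    },
)
print(resp.status_code)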

Related

Losing information when using BeautifulSoup

I am following the guide 'Automate the Boring Stuff with Python', practicing a project called 'Project: “I’m Feeling Lucky” Google Search', but the CSS selector returns nothing.
import requests, sys, webbrowser, bs4, pyperclip

if len(sys.argv) > 1:
    address = ' '.join(sys.argv[1:])
else:
    address = pyperclip.paste()

res = requests.get('http://google.com/search?q=' + str(address))
res.raise_for_status()
soup = bs4.BeautifulSoup(res.text, "html.parser")
linkElems = soup.select('.r a')
for i in range(5):
    webbrowser.open('http://google.com' + linkElems[i].get('href'))
I already tested the same code in the IDLE shell.
It seems that
linkElems = soup.select('.r')
returns nothing, and after I checked the value returned by Beautiful Soup in
soup = bs4.BeautifulSoup(res.text, "html.parser")
I found that all class='r' and class='rc' elements are gone for no reason.
But they were there in the raw HTML file.
Please tell me why this happens and how to avoid such problems.
To get the version of the HTML where class r is defined, it's necessary to set a User-Agent in the headers:
import requests
from bs4 import BeautifulSoup

address = 'linux'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0'}
res = requests.get('http://google.com/search?q=' + str(address), headers=headers)
res.raise_for_status()
soup = BeautifulSoup(res.text, "html.parser")
linkElems = soup.select('.r a')
for a in linkElems:
    if a.text.strip() == '':
        continue
    print(a.text)
Prints:
Linux.orghttps://www.linux.org/
Puhverdatud
Tõlgi see leht
Linux – Vikipeediahttps://et.wikipedia.org/wiki/Linux
Puhverdatud
Sarnased
Linux - Wikipediahttps://en.wikipedia.org/wiki/Linux
...and so on.
The reason Google blocks your request is that the default requests user-agent is python-requests. Google detects it, blocks the request, and returns completely different HTML with different elements and selectors. You can check what your user-agent is. Note that you can sometimes still receive different HTML, with different selectors, even when a user-agent is set.
Learn more about user-agent and HTTP request headers.
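As a quick check (a minimal sketch, assuming only that the requests library is installed), you can print the default user-agent that requests attaches to every request:

import requests

# the headers requests sends when you don't override them
print(requests.utils.default_headers()["User-Agent"])
# prints something like: python-requests/2.x.y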
Pass user-agent into request headers:
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
requests.get('YOUR_URL', headers=headers)
Also, try using the lxml parser instead of html.parser; it's faster.
Code and full example in the online IDE:
from bs4 import BeautifulSoup
import requests

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    "q": "My query goes here"
}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.tF2Cxc'):
    link = result.select_one('.yuRUbf a')['href']
    print(link)
-----
'''
https://dev.mysql.com/doc/refman/8.0/en/entering-queries.html
https://www.benlcollins.com/spreadsheets/google-sheets-query-sql/
https://www.exoscale.com/syslog/explaining-mysql-queries/
https://blog.hubspot.com/marketing/sql-tutorial-introduction
https://mode.com/sql-tutorial/sql-sub-queries/
https://www.mssqltips.com/sqlservertip/1255/getting-io-and-time-statistics-for-sql-server-queries/
https://stackoverflow.com/questions/2698401/how-to-store-mysql-query-results-in-another-table
https://www.khanacademy.org/computing/computer-programming/sql/relational-queries-in-sql/a/more-efficient-sql-with-query-planning-and-optimization
http://cidrdb.org/cidr2011/Papers/CIDR11_Paper7.pdf
https://www.sommarskog.se/query-plan-mysteries.html
'''
Alternatively, you can do the same thing by using the Google Organic Results API from SerpApi. It's a paid API with a free plan.
The difference in your case is that you only need to extract the data you want from a JSON string rather than figuring out how to extract the data, maintain the parser, or bypass blocks from Google.
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "My query goes here",
    "hl": "en",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results["organic_results"]:
    print(result['link'])
-------
'''
https://dev.mysql.com/doc/refman/8.0/en/entering-queries.html
https://www.benlcollins.com/spreadsheets/google-sheets-query-sql/
https://www.exoscale.com/syslog/explaining-mysql-queries/
https://blog.hubspot.com/marketing/sql-tutorial-introduction
https://mode.com/sql-tutorial/sql-sub-queries/
https://www.mssqltips.com/sqlservertip/1255/getting-io-and-time-statistics-for-sql-server-queries/
https://stackoverflow.com/questions/2698401/how-to-store-mysql-query-results-in-another-table
https://www.khanacademy.org/computing/computer-programming/sql/relational-queries-in-sql/a/more-efficient-sql-with-query-planning-and-optimization
http://cidrdb.org/cidr2011/Papers/CIDR11_Paper7.pdf
https://www.sommarskog.se/query-plan-mysteries.html
'''
Disclaimer: I work for SerpApi.

Goutte / Symfony DOM Crawler download file from form

There is a form on the remote page which, after being submitted, automatically downloads a specific file to your computer. How could I grab that file and store it on the server using Goutte or the native Symfony DOM Crawler?
Currently I have this code:
$client = new Client();
$client->setHeader('user-agent', "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36");
$crawler = $client->request('GET', 'ADDRESS');
$form = $crawler->selectButton('Get Results')->form();
$crawler = $client->submit($form);
If Goutte does not allow to do this, which technology would?
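One generic approach, shown here only as a hedged sketch (the form URL, field names, and output filename are placeholders, not taken from the original page), is to replay the form POST outside the browser abstraction and write the response bytes to disk, e.g. with Python requests:

import requests

# Placeholders: replace with the real form action URL and field names.
FORM_URL = "https://example.com/form-action"
payload = {"some_field": "some_value"}
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36"}

resp = requests.post(FORM_URL, data=payload, headers=headers)
resp.raise_for_status()

# Whatever file the server streams back is saved on the machine running this script.
with open("downloaded_file", "wb") as f:
    f.write(resp.content)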

Scrapy splash download file from js click event

I'm using Scrapy with the Splash plugin. I have a button which triggers a download event via AJAX, and I need to get the downloaded file, but I don't know how.
My Lua script is something like this:
function main(splash)
    splash:init_cookies(splash.args.cookies)
    assert(splash:go{
        splash.args.url,
        headers=splash.args.headers,
        http_method=splash.args.http_method,
        body=splash.args.body,
    })
    assert(splash:wait(0.5))
    local get_dimensions = splash:jsfunc([[
        function () {
            var rect = document.querySelector('a[aria-label="Download XML"]').getClientRects()[0];
            return {"x": rect.left, "y": rect.top}
        }
    ]])
    splash:set_viewport_full()
    splash:wait(0.1)
    local dimensions = get_dimensions()
    -- FIXME: button must be inside a viewport
    splash:mouse_click(dimensions.x, dimensions.y)
    splash:wait(0.1)
    return splash:html()
end
My request object from my spider:
yield SplashFormRequest(self.urls['url'],
                        formdata=FormBuilder.build_form(response, some_object[0]),
                        callback=self.parse_cuenta,
                        cache_args=['lua_source'],
                        endpoint='execute',
                        args={'lua_source': self.script_click_xml})
Thanks in advance
I just tried this with SplashFormRequest and it looks like Splash won't work for you. Instead, you can send the same AJAX request using Python requests.
Here is an example:
import requests
from urllib.parse import urlencode  # urllib.urlencode in Python 2

# form fields from the ASP.NET page (viewstate, viewstategen, eventvalid are
# extracted beforehand from the page's hidden inputs)
data = {'__EVENTTARGET': 'main_0$body_0$lnkDownloadBio',
        '__EVENTARGUMENT': '',
        '__VIEWSTATE': viewstate,
        '__VIEWSTATEGENERATOR': viewstategen,
        '__EVENTVALIDATION': eventvalid,
        'search': '',
        'filters': '',
        'score': ''}

HEADERS = {
    'Content-Type': 'application/x-www-form-urlencoded',
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
}

data = urlencode(data)
r = requests.post(submit_url, data=data, allow_redirects=False, headers=HEADERS)

# save the downloaded file
filename = 'name-%s.pdf' % item['first_name']
with open(filename, 'wb') as f:
    f.write(r.content)
Please make sure the data and headers you are sending are correct.

POST data into ASPX with requests. How to fix "remote machine error" message

I'm trying to scrape data from an ASPX page using requests with POST data.
On parsed html I'm getting an error "An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine."
I have been searching for solutions for a while, but frankly I'm new to Python and can't really figure out what's wrong.
The ASPX page has a JavaScript onclick function which opens a new window with the data in HTML.
The code I've created is below.
Any help or suggestions would be greatly welcomed. Thank you!
import requests
from bs4 import BeautifulSoup

session = requests.Session()
url = 'http://ws1.osfi-bsif.gc.ca/WebApps/FINDAT/Insurance.aspx?T=0&LANG=E'

# GET the page first to pick up the ASP.NET hidden fields
r = session.get(url)
soup = BeautifulSoup(r.content, 'lxml')
viewstate = soup.select("#__VIEWSTATE")[0]['value']
eventvalidation = soup.select("#__EVENTVALIDATION")[0]['value']

payload = {
    r'__EVENTTARGET': r'',
    r'__EVENTARGUMENT': r'',
    r'__LASTFOCUS': r'',
    r'__VIEWSTATE': viewstate,
    r'__VIEWSTATEGENERATOR': r'B2E4460D',
    r'__EVENTVALIDATION': eventvalidation,
    r'InsuranceWebPartManager$gwpinsuranceControl$insuranceControl$institutionType': r'radioButton1',
    r'InsuranceWebPartManager$gwpinsuranceControl$insuranceControl$institutionDropDownList': r'F018',
    r'InsuranceWebPartManager$gwpinsuranceControl$insuranceControl$reportTemplateDropDownList': r'C_LIFE-1',
    r'InsuranceWebPartManager$gwpinsuranceControl$insuranceControl$reportDateDropDownList': r'3+-+2015',
    r'InsuranceWebPartManager$gwpinsuranceControl$insuranceControl$submitButton': r'Submit'
}

HEADER = {
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Length": "11759",
    "Host": "ws1.osfi-bsif.gc.ca",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Cache-Control": "max-age=0",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "keep-alive",
}

df = session.post(url, data=payload, headers=HEADER)
print(df.text)
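One thing worth double-checking, offered only as a hedged observation rather than a confirmed fix: __VIEWSTATEGENERATOR is hard-coded above, but it can be read from the page the same way __VIEWSTATE and __EVENTVALIDATION already are:

# sketch: extract __VIEWSTATEGENERATOR dynamically instead of hard-coding 'B2E4460D'
viewstategenerator = soup.select("#__VIEWSTATEGENERATOR")[0]['value']
payload[r'__VIEWSTATEGENERATOR'] = viewstategenerator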

Check HTTP Status for jpg files using jsoup

I am getting http status codes for urls using jsoup as follows:
Connection.Response response = null
Document doc = Jsoup.connect(url).ignoreContentType(true).get()
response = Jsoup.connect(url)
        .userAgent("Mozilla/5.0 (X11 Linux x86_64) AppleWebKit/535.21 (KHTML, like Gecko) Chrome/19.0.1042.0 Safari/535.21")
        .timeout(10000)
        .execute()
int statusCode = response.statusCode()
if (statusCode == 200)
    urlExists = true
else
    urlExists = false
Basically, I want to check whether the specified URL returns a 200 status code or not, i.e. if it's an HTML page, does it exist; if it's a PDF file, does it exist; and so on. It does not work for URLs ending in .jpg because jpg files cannot be parsed by jsoup. I am using jsoup in conjunction with crawler4j. Is there any other way I can find the HTTP status code for all the URLs? My URLs end in the following extensions:
css
js
pdf
zip
rar
tar
png
gif
html
Can't you just use
int responseCode = new URL(url).openConnection().responseCode
