I'm trying to run a scraper of which the output log ends as follows:
2017-04-25 20:22:22 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <429 http://www.apkmirror.com/apk/instagram/instagram-instagram/instagram-instagram-9-0-0-34920-release/instagram-9-0-0-4-android-apk-download/>: HTTP status code is not handled or not allowed
2017-04-25 20:22:22 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-25 20:22:22 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 16048410,
'downloader/request_count': 32902,
'downloader/request_method_count/GET': 32902,
'downloader/response_bytes': 117633316,
'downloader/response_count': 32902,
'downloader/response_status_count/200': 121,
'downloader/response_status_count/429': 32781,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 4, 25, 18, 22, 22, 710446),
'log_count/DEBUG': 32903,
'log_count/INFO': 32815,
'request_depth_max': 2,
'response_received_count': 32902,
'scheduler/dequeued': 32902,
'scheduler/dequeued/memory': 32902,
'scheduler/enqueued': 32902,
'scheduler/enqueued/memory': 32902,
'start_time': datetime.datetime(2017, 4, 25, 17, 54, 36, 621481)}
2017-04-25 20:22:22 [scrapy.core.engine] INFO: Spider closed (finished)
In short, of the 32,902 requests, only 121 are successful (response code 200) whereas the remainder receives 429 for 'too many requests' (cf. https://httpstatuses.com/429).
Are there any recommended ways to get around this? To start with, I'd like to have a look at the details of the 429 response rather than just ignoring it, as it may contain a Retry-After header indicating how long to wait before making a new request.
Also, if the requests are made using Privoxy and Tor as described in http://blog.michaelyin.info/2014/02/19/scrapy-socket-proxy/, it may be possible to implement retry middleware which makes Tor change its IP address when this occurs. Are there any public examples of such code?
You can modify the retry middleware to pause when it receives a 429 error. Put the code below in middlewares.py:
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.utils.response import response_status_message

import time

class TooManyRequestsRetryMiddleware(RetryMiddleware):

    def __init__(self, crawler):
        super(TooManyRequestsRetryMiddleware, self).__init__(crawler.settings)
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response
        elif response.status == 429:
            self.crawler.engine.pause()
            time.sleep(60)  # If the rate limit is renewed in a minute, put 60 seconds, and so on.
            self.crawler.engine.unpause()
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response
        elif response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response
        return response
Add 429 to the retry codes in settings.py:
RETRY_HTTP_CODES = [429]
Then activate it in settings.py. Don't forget to deactivate the default retry middleware:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'flat.middlewares.TooManyRequestsRetryMiddleware': 543,
}
Wow, your scraper is going really fast, over 30,000 requests in 30 minutes. That's more than 10 requests per second.
Such a high volume will trigger rate limiting on bigger sites and will completely bring down smaller sites. Don't do that.
Also, this might even be too fast for Privoxy and Tor, so they might also be the source of some of those 429 replies.
Solutions:
Start slow. Reduce the concurrency settings and increase DOWNLOAD_DELAY so you do at most 1 request per second. Then increase these values step by step and see what happens. It might sound paradoxical, but you might be able to get more items and more 200 responses by going slower (see the settings sketch after this list).
If you are scraping a big site, try rotating proxies. The Tor network might be a bit heavy-handed for this in my experience, so you might try a proxy service, as Umair suggests.
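To make "start slow" concrete, here is a minimal settings.py sketch; the exact numbers are illustrative assumptions to tune per site:
# settings.py -- conservative starting point, values are illustrative
CONCURRENT_REQUESTS = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 1
DOWNLOAD_DELAY = 1  # roughly one request per second at most
# Optionally let Scrapy adapt the delay to the server's responses:
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 60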
Building upon Aminah Nuraini's answer, you can use Twisted's Deferreds to avoid the blocking time.sleep() call that breaks asynchrony:
from twisted.internet import reactor, defer
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.utils.response import response_status_message


async def async_sleep(delay, return_value=None):
    deferred = defer.Deferred()
    reactor.callLater(delay, deferred.callback, return_value)
    return await deferred


class TooManyRequestsRetryMiddleware(RetryMiddleware):
    """
    Modifies RetryMiddleware to delay retries on status 429.
    """

    DEFAULT_DELAY = 60  # Delay in seconds.
    MAX_DELAY = 600  # Sometimes, RETRY-AFTER has absurd values

    async def process_response(self, request, response, spider):
        """
        Like RetryMiddleware.process_response, but, if response status is 429,
        retry the request only after waiting at most self.MAX_DELAY seconds.
        Respect the Retry-After header if it's less than self.MAX_DELAY.
        If Retry-After is absent/invalid, wait only self.DEFAULT_DELAY seconds.
        """
        if request.meta.get('dont_retry', False):
            return response
        if response.status in self.retry_http_codes:
            if response.status == 429:
                retry_after = response.headers.get('retry-after')
                try:
                    retry_after = int(retry_after)
                except (ValueError, TypeError):
                    delay = self.DEFAULT_DELAY
                else:
                    delay = min(self.MAX_DELAY, retry_after)
                spider.logger.info(f'Retrying {request} in {delay} seconds.')
                spider.crawler.engine.pause()
                await async_sleep(delay)
                spider.crawler.engine.unpause()
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response
        return response
The line await async_sleep(delay) blocks process_response's execution until delay seconds have passed, but Scrapy is free to do other things in the meantime. This async/await coroutine syntax was introduced in Python 3.5, and support for it was added in Scrapy 2.0.
It's still necessary to modify settings.py as in the original answer.
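For completeness, a sketch of those settings.py changes, assuming the middleware above lives in yourproject/middlewares.py (the module path is an assumption; adjust it to your project):
# settings.py -- module path 'yourproject.middlewares' is an assumption
RETRY_HTTP_CODES = [429]

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,  # disable the default retry middleware
    'yourproject.middlewares.TooManyRequestsRetryMiddleware': 543,
}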
You can use HTTPERROR_ALLOWED_CODES = [404, 429]. I was getting the 429 HTTP code, and once I allowed it the problem was fixed. You can allow whichever HTTP code you are seeing in the terminal. This may solve your problem.
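Note that HTTPERROR_ALLOWED_CODES does not retry anything; it only lets those responses reach your spider callbacks instead of being dropped by HttpErrorMiddleware. A minimal sketch of handling an allowed 429 yourself (the callback logic is a placeholder):
# settings.py
HTTPERROR_ALLOWED_CODES = [429]

# in your spider -- placeholder handling
def parse(self, response):
    if response.status == 429:
        self.logger.warning('Rate limited; Retry-After=%s',
                            response.headers.get('Retry-After'))
        return  # or re-schedule the request after a delay
    # ... normal parsing goes here ...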
Here is a simple trick I found:
import scrapy
import time  # just add this line

BASE_URL = 'your any url'


class EthSpider(scrapy.Spider):
    name = 'eth'

    start_urls = [
        BASE_URL.format(1)
    ]
    pageNum = 2

    def parse(self, response):
        data = response.json()
        for i in range(len(data['data']['list'])):
            yield data['data']['list'][i]
        next_page = 'next page url'
        time.sleep(0.2)  # and add this line
        if EthSpider.pageNum <= data['data']['page']:
            EthSpider.pageNum += 1
            yield response.follow(next_page, callback=self.parse)
Related
This program is an API-based program that has been working for a few months and all of a sudden has gone days without pushing anything to Discord. The script looks fine in CMD, and no errors are being thrown. I was wondering if there is a way to rule out possible issues such as API instability or something obvious. The program is supposed to go to the site www.bitskins.com, pull skins based on the parameters set, and push them as an embed to a Discord channel every 10 minutes.
There are two files that run this program.
Here is the one that uses Bitskins API (bitskins.py):
import requests, json
from datetime import datetime, timedelta


class Item:

    def __init__(self, item):
        withdrawable_at = item['withdrawable_at']
        price = float(item['price'])
        self.available_in = withdrawable_at - datetime.timestamp(datetime.now())
        if self.available_in < 0:
            self.available = True
        else:
            self.available = False
        self.suggested_price = float(item['suggested_price'])
        self.price = price
        self.margin = round(self.suggested_price - self.price, 2)
        self.reduction = round((1 - (self.price / self.suggested_price)) * 100, 2)
        self.image = item['image']
        self.name = item['market_hash_name']
        self.item_id = item['item_id']

    def __str__(self):
        if self.available:
            return "Name: {}\nPrice: {}\nSuggested Price: {}\nReduction: {}%\nAvailable Now!\nLink: https://bitskins.com/view_item?app_id=730&item_id={}".format(self.name, self.price, self.suggested_price, self.reduction, self.item_id)
        else:
            return "Name: {}\nPrice: {}\nSuggested Price: {}\nReduction: {}%\nAvailable in: {}\nLink: https://bitskins.com/view_item?app_id=730&item_id={}".format(self.name, self.price, self.suggested_price, self.reduction, str(timedelta(seconds=self.available_in)), self.item_id)

    def __lt__(self, other):
        return self.reduction < other.reduction

    def __gt__(self, other):
        return self.reduction > other.reduction


def get_url(API_KEY, code):
    PER_PAGE = 30  # the number of items to retrieve. Either 30 or 480.
    return "https://bitskins.com/api/v1/get_inventory_on_sale/?api_key=" + API_KEY + "&code=" + code + "&per_page=" + str(PER_PAGE)


def get_data(url):
    r = requests.get(url)
    data = r.json()
    return data


def get_items(code, API_KEY):
    url = get_url(API_KEY, code)
    try:
        data = get_data(url)
        if data['status'] == "success":
            items = []
            items_dic = data['data']['items']
            for item in items_dic:
                tmp = Item(item)
                if tmp.reduction >= 25 and tmp.price <= 200:  # Minimum discount and maximum price to look for when grabbing items. Currently set at a minimum discount of 25% and a maximum price of $200.
                    items.append(tmp)
            return items
        else:
            raise Exception(data["data"]["error_message"])
    except:
        raise Exception("Couldn't connect to BitSkins.")

# my_token = pyotp.TOTP(my_secret)
# print(my_token.now())  # in python3
And here is the file with Discord's API (solution.py):
#!/bin/env python3.6
import bitskins
import discord
import pyotp, base64, asyncio
from datetime import timedelta, datetime

TOKEN = "Not input for obvious reasons"
API_KEY = "Not input for obvious reasons"
my_secret = 'Not input for obvious reasons'

client = discord.Client()


def get_embed(item):
    embed = discord.Embed(title=item.name, url="https://bitskins.com/view_item?app_id=730&item_id={}".format(item.item_id), color=0xA3FFE8)
    embed.set_author(name="Skin Bot", url="http://www.reactor.gg/", icon_url="https://pbs.twimg.com/profile_images/1050077525471158272/4_R8PsrC_400x400.jpg")
    embed.set_thumbnail(url=item.image)
    embed.add_field(name="Price:", value="${}".format(item.price))
    embed.add_field(name="Discount:", value="{}%".format(item.reduction), inline=True)
    if item.available:
        tmp = "Instantly Withdrawable"
    else:
        tmp = str(timedelta(seconds=item.available_in))
    embed.add_field(name="Availability:", value=tmp, inline=True)
    embed.add_field(name="Suggested Price:", value="${}".format(item.suggested_price), inline=True)
    embed.add_field(name="Profit:", value="${}".format(item.margin), inline=True)
    embed.set_footer(text="Made by Aqyl#0001 | {}".format(datetime.now()), icon_url="https://www.discordapp.com/assets/6debd47ed13483642cf09e832ed0bc1b.png")
    return embed


async def status_task(wait_time=60 * 5):
    while True:
        print("Updated on: {}".format(datetime.now()))
        code = pyotp.TOTP(my_secret)
        try:
            items = bitskins.get_items(code.now(), API_KEY)
            for item in items:
                await client.send_message(client.get_channel("656913641832185878"), embed=get_embed(item))
        except:
            pass
        await asyncio.sleep(wait_time)


@client.event
async def on_ready():
    wait_time = 60 * 10  # 10 mins in this case
    print('CSGO BitSkins Bot')
    print('Made by Aqyl#0001')
    print('Version 1.0.6')
    print('')
    print('Logged in as:')
    print(client.user.name)
    print('------------------------------------------')
    client.loop.create_task(status_task(wait_time))


try:
    client.run(TOKEN)
except:
    print("Couldn't connect to the Discord Server.")
You have a general except clause; this will lead to catching exceptions that you really don't want to catch.
try:
    items = bitskins.get_items(code.now(), API_KEY)
    for item in items:
        await client.send_message(client.get_channel("656913641832185878"), embed=get_embed(item))
except:
    pass
This is the same as catching any exception that can appear there (any exception that inherits from BaseException).
To avoid those problems, you should always catch specific exceptions (e.g. TypeError).
Example:
try:
    raise Exception("Example exc")
except Exception as e:
    print(f"Exception caught! {e}")
I'm trying to send data from my dronekit.io vehicle using flask-socket.io. Unfortunately, I got this log:
Starting copter simulator (SITL)
SITL already Downloaded and Extracted.
Ready to boot.
Connecting to vehicle on: tcp:127.0.0.1:5760
>>> APM:Copter V3.3 (d6053245)
>>> Frame: QUAD
>>> Calibrating barometer
>>> Initialising APM...
>>> barometer calibration complete
>>> GROUND START
* Restarting with stat
latitude -35.363261
>>> Exception in attribute handler for location.global_relative_frame
>>> Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
longitude 149.1652299
>>> Exception in attribute handler for location.global_relative_frame
>>> Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
Here is my code:
sample.py
from dronekit import connect, VehicleMode
from flask import Flask
from flask_socketio import SocketIO, emit
import dronekit_sitl
import time

sitl = dronekit_sitl.start_default()
connection_string = sitl.connection_string()

print("Connecting to vehicle on: %s" % (connection_string,))
vehicle = connect(connection_string, wait_ready=True)

def arm_and_takeoff(aTargetAltitude):
    print "Basic pre-arm checks"
    while not vehicle.is_armable:
        print " Waiting for vehicle to initialise..."
        time.sleep(1)
    print "Arming motors"
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:
        print " Waiting for arming..."
        time.sleep(1)
    print "Taking off!"
    vehicle.simple_takeoff(aTargetAltitude)
    while True:
        if vehicle.location.global_relative_frame.alt >= aTargetAltitude * 0.95:
            print "Reached target altitude"
            break
        time.sleep(1)

last_latitude = 0.0
last_longitude = 0.0
last_altitude = 0.0

@vehicle.on_attribute('location.global_relative_frame')
def location_callback(self, attr_name, value):
    global last_latitude
    global last_longitude
    global last_altitude
    if round(value.lat, 6) != round(last_latitude, 6):
        last_latitude = value.lat
        print "latitude ", value.lat, "\n"
        emit("latitude", value.lat)
    if round(value.lon, 6) != round(last_longitude, 6):
        last_longitude = value.lon
        print "longitude ", value.lon, "\n"
        emit("longitude", value.lon)
    if round(value.alt) != round(last_altitude):
        last_altitude = value.alt
        print "altitude ", value.alt, "\n"
        emit("altitude", value.alt)

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000, debug=True)
    arm_and_takeoff(20)
I know from the logs that I should not do any HTTP request inside the "vehicle.on_attribute" decorated method and that I should search for information on how to solve this problem, but I didn't find any info about the error.
Hope you could help me.
Thank you very much,
Raniel
The emit() function by default returns an event back to the active client. If you call this function outside of a request context there is no concept of active client, so you get this error.
You have a couple of options:
1. Indicate the recipient of the event and the namespace that you are using, so that there is no need to look them up in the context. You can do this by adding room and namespace arguments. Use '/' for the namespace if you are using the default namespace.
2. Emit to all clients by adding broadcast=True as an argument, plus the namespace as indicated in #1. A sketch of both options follows the list.
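A minimal sketch of what this looks like inside location_callback, assuming the default namespace and that every connected client should receive the updates:
# inside location_callback -- hedged sketch
# Option 2 (with the namespace from option 1): broadcast, so no request
# context is needed to determine the recipient.
emit("latitude", value.lat, namespace='/', broadcast=True)

# Alternative: call emit on the SocketIO server object from sample.py, which
# never depends on a request context and broadcasts by default.
socketio.emit("latitude", value.lat)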
I am attempting to use Python3 to send metrics to Hosted Graphite. The examples given on the site are Python2, and I have successfully ported the TCP and UDP examples to Python3 (despite my inexperience, and have submitted the examples so the docs may be updated), however I have been unable to get the HTTP method to work.
The Python2 example looks like this:
import urllib2, base64
url = "https://hostedgraphite.com/api/v1/sink"
api_key = "YOUR-API-KEY"
request = urllib2.Request(url, "foo 1.2")
request.add_header("Authorization", "Basic %s" % base64.encodestring(api_key).strip())
result = urllib2.urlopen(request)
This works successfully, returning a HTTP 200.
So far I have ported this much to Python 3, and while I was (finally) able to get it to make a valid HTTP request (i.e. no syntax errors), the request fails, returning HTTP 400:
import urllib.request, base64
url = "https://hostedgraphite.com/api/v1/sink"
api_key = b'YOUR-API-KEY'
metric = "testing.python3.http 1".encode('utf-8')
request = urllib.request.Request(url, metric)
request.add_header("Authorization", "Basic %s" % base64.encodestring(api_key).strip())
result = urllib.request.urlopen(request)
The full result is:
>>> result = urllib.request.urlopen(request)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python3/3.3.1/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py", line 160, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/Cellar/python3/3.3.1/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py", line 479, in open
response = meth(req, response)
File "/usr/local/Cellar/python3/3.3.1/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py", line 591, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/Cellar/python3/3.3.1/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py", line 517, in error
return self._call_chain(*args)
File "/usr/local/Cellar/python3/3.3.1/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py", line 451, in _call_chain
result = func(*args)
File "/usr/local/Cellar/python3/3.3.1/Frameworks/Python.framework/Versions/3.3/lib/python3.3/urllib/request.py", line 599, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
Is it obvious what I am doing wrong? Are there any suggestions on how I might capture and compare what the successful (python2) and failing (python3) requests are actually sending?
Don't mix Unicode strings and bytes:
>>> "abc %s" % b"def"
"abc b'def'"
You could construct the header as follows:
from base64 import b64encode
headers = {'Authorization': b'Basic ' + b64encode(api_key)}
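Putting that together, a hedged sketch of the full Python 3 version (base64.b64encode replaces the deprecated encodestring, and the header value is built entirely from bytes so no str/bytes mixing occurs):
import urllib.request
from base64 import b64encode

url = "https://hostedgraphite.com/api/v1/sink"
api_key = b"YOUR-API-KEY"
metric = "testing.python3.http 1".encode("utf-8")

request = urllib.request.Request(url, metric)
request.add_header("Authorization", b"Basic " + b64encode(api_key))
result = urllib.request.urlopen(request)
print(result.getcode())  # expect 200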
A quick way to see the request is to change the host in the url to localhost:8888 and run before making the request:
$ nc -l 8888
You could also use Wireshark to see the requests.
I want to scrape these pages shown below, but it needs an authentication.
I tried the code below, but it says 0 pages were scraped.
I am not able to understand what the issue is.
Can someone please help?
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import Rule

from kappaal.items import KappaalItem

class KappaalCrawler(InitSpider):
    name = "initkappaal"
    allowed_domains = ["http://www.kappaalphapsi1911.com/"]
    login_page = 'http://www.kappaalphapsi1911.com/login.aspx'
    #login_page = 'https://kap.site-ym.com/Login.aspx'
    start_urls = ["http://www.kappaalphapsi1911.com/search/newsearch.asp?cdlGroupID=102044"]

    rules = ( Rule(SgmlLinkExtractor(allow=r'-\w$'), callback='parseItems', follow=True), )
    #rules = ( Rule(SgmlLinkExtractor(allow=("*", ), restrict_xpaths=("//*[contains(@id, 'SearchResultsGrid')]",)), callback="parseItems", follow=True), )

    def init_request(self):
        """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(response,
                    formdata={'u': 'username', 'p': 'password'},
                    callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in.
        """
        if "Member Search Results" in response.body:
            self.log("Successfully logged in. Let's start crawling!")
            # Now the crawling can begin..
            self.initialized()
        else:
            self.log("Bad times :(")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parseItems(self, response):
        hxs = HtmlXPathSelector(response)
        members = hxs.select('/html/body/form/div[3]/div/table/tbody/tr/td/div/table[2]/tbody')
        print members
        items = []
        for member in members:
            item = KappaalItem()
            item['Name'] = member.select("//a/text()").extract()
            item['MemberLink'] = member.select("//a/@href").extract()
            #item['EmailID'] =
            #print item['Name'], item['MemberLink']
            items.append(item)
        return items
I got the response below after executing the scraper:
2013-01-23 07:08:23+0530 [scrapy] INFO: Scrapy 0.16.3 started (bot: kappaal)
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled downloader middlewares:HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Enabled item pipelines:
2013-01-23 07:08:23+0530 [initkappaal] INFO: Spider opened
2013-01-23 07:08:23+0530 [initkappaal] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-01-23 07:08:23+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-01-23 07:08:26+0530 [initkappaal] DEBUG: Crawled (200) <GET https://kap.site-ym.com/Login.aspx> (referer: None)
2013-01-23 07:08:26+0530 [initkappaal] DEBUG: Filtered offsite request to 'kap.site-ym.com': <GET https://kap.site-ym.com/search/all.asp?bst=Enter+search+criteria...&p=P%40ssw0rd&u=9900146>
2013-01-23 07:08:26+0530 [initkappaal] INFO: Closing spider (finished)
2013-01-23 07:08:26+0530 [initkappaal] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 231,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 23517,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 1, 23, 1, 38, 26, 194000),
'log_count/DEBUG': 8,
'log_count/INFO': 4,
'request_depth_max': 1,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2013, 1, 23, 1, 38, 23, 542000)}
2013-01-23 07:08:26+0530 [initkappaal] INFO: Spider closed (finished)
I do not understand why it is not authenticating and parsing the start url as mentioned.
Also, make sure you have cookies enabled so that the session stays logged in after you log in:
COOKIES_ENABLED = True
COOKIES_DEBUG = True
in your settings.py file
I fixed it like this:
def start_requests(self):
    return self.init_request()

def init_request(self):
    return [Request(url=self.login_page, callback=self.login)]

def login(self, response):
    return FormRequest.from_response(response, formdata={'username': 'username', 'password': 'password'}, callback=self.check_login_response)

def check_login_response(self, response):
    if "Logout" in response.body:
        for url in self.start_urls:
            yield self.make_requests_from_url(url)
    else:
        self.log("Could not log in...")
By overriding start_requests, you ensure that the login process completes correctly, and only then do you start scraping.
I'm using a CrawlSpider with this and it works perfectly! Hope it helps.
Okay, so there's a couple of problems that I can see. However, I can't test the code because of the username and password. Is there a dummy account that can be used for testing purposes?
InitSpider doesn't implement rules, so whilst it won't cause a problem, that attribute should be removed.
The check_login_response needs to return something.
To wit:
def check_login_response(self, response):
    """Check the response returned by a login request to see if we are
    successfully logged in.
    """
    if "Member Search Results" in response.body:
        self.log("Successfully logged in. Let's start crawling!")
        # Now the crawling can begin..
        return self.initialized()
    else:
        self.log("Bad times :(")
        # Something went wrong, we couldn't log in, so nothing happens.
        return
This may not be the response you're looking for, but I feel your pain...
I ran into this same issue and felt the Scrapy documentation was not sufficient. I ended up using mechanize to log in. If you figure it out with Scrapy, great; if not, mechanize is pretty straightforward.
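For reference, a minimal mechanize login sketch (Python 2 era, matching the Scrapy 0.16 code above; the form index and the 'u'/'p' field names are assumptions carried over from the login form used earlier):
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.open("http://www.kappaalphapsi1911.com/login.aspx")

# Fill in the first form on the page with the credentials and submit it.
br.select_form(nr=0)
br["u"] = "username"
br["p"] = "password"
br.submit()

# The browser keeps the session cookies, so authenticated pages can now be fetched.
response = br.open("http://www.kappaalphapsi1911.com/search/newsearch.asp?cdlGroupID=102044")
html = response.read()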
I am trying to use Tornado's sync-style 'gen' tool to run a simple echo function, in a non-blocking style:
import tornado.web
import tornado.gen
import logging

def echo(message):
    return message

@tornado.gen.engine
def runme():
    response = yield tornado.gen.Task(echo, 'this is a message')
    logging.warn(response)

runme()
As far as I can tell this code isn't significantly different from the demo code in the docs, minus the unnecessary request handler stuff. I'm not handling any HTTP requests; AFAICT that's orthogonal to running something asynchronously. Yet this always fails with:
Traceback (most recent call last):
File "./server.py", line 46, in <module>
runme()
TypeError: wrapper() takes at least 1 argument (0 given)
Exactly where am I missing the argument? How can I make Tornado run this function asynchronously?
Task doesn't actually create a callback for the function being run and fire it when the function returns, as I originally thought.
I need to accept a callback in the task being run myself, and invoke it, i.e.:
import tornado.web
import tornado.gen
import logging

def echo(message, callback=None):
    callback(message)

@tornado.gen.engine
def runme():
    response = yield tornado.gen.Task(echo, 'this is a message')
    logging.warn(response)

runme()