I need to provide a button; once the user clicks on it, it should download a PDF file or an image.
I have tried so many approaches out there, and none of them works,
and I couldn't find any tutorial on this topic.
I'll copy-paste working code from my own question (Django 1.7, Python 3.4):
Django 1.7: serve a pdf -file (UnicodeDecodeError)
From views.py:
from django.http import HttpResponse

def download(request, file_name):
    # Open the file in binary mode and return it as a PDF response
    file = open('path/to/file/{}'.format(file_name), 'rb')
    response = HttpResponse(file, content_type='application/pdf')
    # "attachment" makes the browser download the file instead of displaying it
    response['Content-Disposition'] = "attachment; filename={}".format(file_name)
    return response
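To hook this up to a button, the view also needs a URL pattern. A minimal sketch (the app name and URL prefix are illustrative; on Django 1.7 you may need to wrap the entries in patterns(''), and on Django 2.0+ you would use path() from django.urls instead):

# urls.py -- sketch with placeholder names
from django.conf.urls import url
from myapp import views

urlpatterns = [
    # e.g. /download/report.pdf/ calls views.download(request, file_name='report.pdf')
    url(r'^download/(?P<file_name>[^/]+)/$', views.download, name='download'),
]

The "button" itself can then simply be a link to that URL in the template, for example <a href="{% url 'download' 'report.pdf' %}">Download</a>; the Content-Disposition header above takes care of triggering the download.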
I am working on a web application project that allows the user to search by keywords and get a set of results scraped from the web based on the keywords entered. For this I use Scrapy to scrape results from a web search engine. I wrote some code to pass the keywords to the Scrapy file and display the Scrapy results on a webpage. However, I'm having trouble passing the keywords to Scrapy using FastAPI, because when I run my API code I always get a set of errors from Scrapy. Here is the gist containing the runtime terminal output. I don't understand the problem, since my Scrapy code was working perfectly before I connected it to the API. I'm a beginner at creating APIs, so I'm asking for your help. Here is the code for my web scraper:
import scrapy
import datetime
from requests_html import HTMLSession

class PagesearchSpider(scrapy.Spider):
    name = 'pageSearch'

    def start_requests(self, query):
        #queries = [ 'investissement']
        #for query in queries:
        url = f'https://www.ask.com/web?q={query}'
        s = HTMLSession()
        r = s.get
        print(r.status_code)
        qlist[""]
        yield scrapy.Request(url, callback=self.parse, meta={'pos': 0})

    def parse(self, response):
        print('url:', response.url)
        start_pos = response.meta['pos']
        print('start pos:', start_pos)
        dt = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        items = response.css('div.PartialSearchResults-item')
        for pos, result in enumerate(items, start_pos+1):
            yield {
                'title': result.css('a.PartialSearchResults-item-title-link.result-link::text').get().strip(),
                'snippet': result.css('p.PartialSearchResults-item-abstract::text').get().strip(),
                'link': result.css('a.PartialSearchResults-item-title-link.result-link').attrib.get('href'),
                'position': pos,
                'date': dt,
            }
        qlist.append(items)

        # --- after loop ---
        next_page = response.css('.PartialWebPagination-next a')
        if next_page:
            url = next_page.attrib.get('href')
            print('next_page:', url)  # relative URL
            # use `follow()` to add `https://www.ask.com/` to URL and create absolute URL
            yield response.follow(url, callback=self.parse, meta={'pos': pos+1})

# --- run without project, and save in file ---
from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    #'USER_AGENT': 'Mozilla/5.0',
    # save in file CSV, JSON or XML
    'FEEDS': {'test.json': {'format': 'json'}},
    #'ROBOTSTXT_OBEY': True, # this stop scraping
})
c.crawl(PagesearchSpider)
c.start()
Here is the code for my API:
from fastapi import FastAPI
from script import PagesearchSpider

app = FastAPI()
request = PagesearchSpider()

@app.get("/{cat}")
async def read_item(cat):
    return request.start_requests('cat')
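For reference, one common workaround (a sketch only, not necessarily the fix for the errors above) is to not call the spider directly from the endpoint, because Scrapy's Twisted reactor does not cooperate with the server's running event loop; instead the crawl is run in a child process and the feed file it writes is read back. This assumes a hypothetical script.py that reads the query from sys.argv and writes its results to test.json via the FEEDS setting:

# main.py -- hypothetical sketch, not the original code
import json
import subprocess
import sys

from fastapi import FastAPI

app = FastAPI()

@app.get("/{cat}")
def read_item(cat: str):
    # Run the Scrapy script in its own process so the Twisted reactor
    # does not clash with the server's event loop.
    subprocess.run([sys.executable, "script.py", cat], check=True)
    # Read back the feed file the spider wrote.
    with open("test.json", encoding="utf-8") as f:
        return json.load(f)

Declaring the route as a plain def (not async def) lets FastAPI run the blocking subprocess call in its worker thread pool instead of blocking the event loop.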
I changed part of my scraper code, viz:
def start_requests(self, query):
    #queries = [ 'investissement']
    #for query in queries:
    url = f'https://www.ask.com/web?q={query}'
    if __name__ == '__main__':
        s = HTMLSession()
        r = s.get
        print(r.status_code)
        qlist[""]
but I still get the same errors, unless I got the crawler code wrong; I'm not very good at Scrapy.
I also rewrote the code below my function:
# --- run without project, and save in file ---
if __name__ == "__main__":
    from scrapy.crawler import CrawlerProcess

    c = CrawlerProcess({
        #'USER_AGENT': 'Mozilla/5.0',
        # save in file CSV, JSON or XML
        #'FEEDS': {'test.json': {'format': 'json'}},
        #'ROBOTSTXT_OBEY': True, # this stop scraping
    })
    c.crawl(PagesearchSpider)
    c.start()
Then I executed the command python -m uvicorn main:app --reload to start the server and got the following output on the command line:
INFO:     Will watch for changes in these directories: ['C:\\Users\\user\\Documents\\AAprojects\\Whelpsgroups1\\searchApi\\apiFast']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [10956] using statreload
WARNING:  The --reload flag should not be used in production on Windows.
INFO:     Started server process [6724]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
WARNING:  StatReload detected file change in 'main.py'. Reloading...
WARNING:  The --reload flag should not be used in production on Windows.
INFO:     Started server process [2720]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
But when I click, from the command line, on the link to the address my server started at, it opens File Explorer on Windows 10, and when I manually type that link, i.e. http://127.0.0.0:8000/, into my browser's address bar, it says 127.0.0.0 took too long to respond. Yet I didn't change any of my files, just the command I was using in the console, so I don't know where this error comes from.
I looked at the Stack Overflow questions, but they weren't directly related to the problem of sharing data between an API and a web scraper, and I couldn't find any relevant answers on the internet either. So I hope you can help me; I look forward to your answers, thank you!
My Django webserver URL accepts query parameters, for example "mydomain.com?emp_name=abc&dept=admin".
I want to automate this by writing a test with pytest-django to check whether the URL accepts the query parameters or not. Please suggest.
There are two ways to achieve this.
My setup:
Django 3.2.12
django-test-curl 0.2.0
gunicorn 20.1.0
pytest 7.0.1
pytest-django 4.5.2
1) By using the pytest-curl-report plugin (https://pypi.org/project/pytest-curl-report/)
I was getting the error below after installing the pytest-curl-report 0.5.4 plugin.
Error:
`pluggy._manager.PluginValidationError: Plugin 'curl-report' for hook
'pytest_runtest_makereport'
hookimpl definition: pytest_runtest_makereport(multicall, item, call)
Argument(s) {'multicall'} are declared in the hookimpl but can not be found in the hookspec`
I tried installing/uninstalling multiple times, but it didn't help.
I also couldn't find any fix for this issue on Google, so I skipped this approach and decided to use the django-test-curl plugin instead. See point no. 2)
2) By using the django-test-curl plugin (https://github.com/crccheck/django-test-curl)
$ pip3 install django-test-curl
Usage
from django.test import TestCase
from django_test_curl import CurlClient

class SimpleTest(TestCase):
    def setUp(self):
        self.client = CurlClient()

    def test_details(self):
        response = self.client.curl("""
          curl http://localhost:8000/customer/details/
        """)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(len(response.context['customers']), 5)
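Since the original goal is query parameters, the plain Django test client that pytest-django exposes through its client fixture can pass them directly. A minimal sketch, with a hypothetical /employees/ path and expected behaviour:

# test_query_params.py -- sketch; the URL and the 200 response are assumptions
def test_url_accepts_query_parameters(client):
    # client.get() encodes the dict as ?emp_name=abc&dept=admin
    response = client.get('/employees/', {'emp_name': 'abc', 'dept': 'admin'})
    assert response.status_code == 200

The same query string could also be expressed through django-test-curl, e.g. curl 'http://localhost:8000/employees/?emp_name=abc&dept=admin'.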
is "chardetect.exe" executable required for a get request and content? Just wanted to know. If not needed, then is it okay to delete it so that I can store the chardet files in bitbucket/git without an executable?
Code to use:
import io
import requests

req = requests.get(url)
with io.BytesIO() as buf:
    buf.write(req.content)
    buf.seek(0)
Requests currently requires the chardet package but doesn't rely on any of the CLI tools.
https://github.com/psf/requests/issues/5548#issuecomment-668228215
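In other words, in the requests versions that depend on chardet, only the chardet package (the library code) is imported, for example when guessing an encoding via Response.apparent_encoding; the chardetect.exe console script is never executed, so removing it should not affect the snippet above. A small sketch of the library-level use (the URL is a placeholder):

import chardet
import requests

url = "https://example.com"           # placeholder URL
req = requests.get(url)
print(req.apparent_encoding)          # requests calls chardet.detect() internally here
print(chardet.detect(req.content))    # the same detection, called directly; no chardetect.exe involved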
I have Python code that uses mitmproxy to capture website traffic and generate a JSON file, and I am trying to integrate that code with Robot Framework using its Process library. If I run the Python file by itself and initiate the Robot tests from a different window, the JSON file is generated with no issues, but if I run the same file as part of my test setup in Robot (using the Process library), no file is generated. What am I doing wrong here?
Here is my Python code:
tracker.py
from mitmproxy import http, ctx
import json

match_url = ["https://something.com/"]  # Break Point URL portion to be matched

class Tracker:
    def __init__(self):
        self.flow = http.HTTPFlow

    def request(self, flow):
        for urls in match_url:
            if urls in flow.request.pretty_url:
                with open('out.json', 'a+', encoding='utf-8') as out:
                    json.dump(flow.request.content.decode(), out)

    def done(self):
        print("Bye Bye")
        ctx.master.shutdown()

addons = [
    Tracker()
]
keyword.robot
Start browser proxy process
    ${result} =    Start Process    mitmdump -s my_directory/tracker.py -p 9995 > in.txt    shell=True    alias=mitm

Stop browser proxy process
    Terminate Process    mitm
In Watson Studio I am writing code in a Jupyter Notebook to use a Watson Visual Recognition custom model.
It works ok with external images.
I haven't been able yet to refer to an image I have uploaded to the Assets of my project.
The URL of the asset leads to a full page, not just the image:
https://dataplatform.ibm.com/projects/2f4b89d9-b93a-4c98-a327-9b863a467b7c/data-assets/ed16c385-e09e-4bcb-bfab-67ee864538e4/?context=data
Thank you
In Watson Studio the asset files are saved in Cloud Object Storage (COS). You have to download the image from COS to the notebook server's file system, and then you can refer to it in your notebook as a regular local file.
I use the COS API to get the files: https://github.com/IBM/ibm-cos-sdk-python.
https://console.bluemix.net/docs/services/cloud-object-storage/libraries/python.html#using-python
First, find out what your credentials are by:
highlighting a notebook cell,
clicking the Data menu,
selecting Files,
and choosing "Insert to Code" > Credentials.
Then you can download the file to your local disk storage using the API. For example, to download a file from COS:
# The following code contains the credentials for a file in your IBM Cloud Object Storage.
# You might want to remove those credentials before you share your notebook.
credentials_1 = {
    'IBM_API_KEY_ID': '**************************************',
    'IAM_SERVICE_ID': 'iam-ServiceId-**************************',
    'ENDPOINT': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
    'IBM_AUTH_ENDPOINT': 'https://iam.ng.bluemix.net/oidc/token',
    'BUCKET': '********************************',
    'FILE': 'file.xlsx'
}

from ibm_botocore.client import Config
import ibm_boto3

def download_file_cos(credentials, local_file_name, key):
    cos = ibm_boto3.client(service_name='s3',
                           ibm_api_key_id=credentials['IBM_API_KEY_ID'],
                           ibm_service_instance_id=credentials['IAM_SERVICE_ID'],
                           ibm_auth_endpoint=credentials['IBM_AUTH_ENDPOINT'],
                           config=Config(signature_version='oauth'),
                           endpoint_url=credentials['ENDPOINT'])
    try:
        res = cos.download_file(Bucket=credentials['BUCKET'], Key=key, Filename=local_file_name)
    except Exception as e:
        print(Exception, e)
    else:
        print("Downloaded:", key, 'from IBM COS to local:', local_file_name)
In a notebook cell, list the directory contents:
%%script bash
ls -l
# to list all .png files in COS you can use a function like this:
def list_objects(credentials):
    cos = ibm_boto3.client(service_name='s3',
                           ibm_api_key_id=credentials['IBM_API_KEY_ID'],
                           ibm_service_instance_id=credentials['IAM_SERVICE_ID'],
                           ibm_auth_endpoint=credentials['IBM_AUTH_ENDPOINT'],
                           config=Config(signature_version='oauth'),
                           endpoint_url=credentials['ENDPOINT'])
    return cos.list_objects(Bucket=credentials['BUCKET'])

response = list_objects(credentials_1)
for c in response['Contents']:
    if c['Key'].endswith('.png'):
        print(c['Key'], "last modified:", c['LastModified'])