I have a valid HERE licence and have been using it for a long time. Suddenly I started getting the above response from the server.
The URL is: http://pde.api.here.com/2/matchroute.json?app_id=xxxxxxxxxxxxxxx&app_code=xxxxxxxxxxxx&routemode=car&file=&regions=WEU&release=LATEST
def connectPost(self, url, data, headers=None):
    return self.S.post(url, data=data, headers=headers, proxies=self.proxy_dict)
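For context, a minimal sketch of how this helper is presumably wired up and called is below. It assumes self.S is a requests.Session and self.proxy_dict is an optional proxy mapping, and it sends the trace as a CSV body; none of that is shown in the question, so treat it as an illustration only.

import requests

class HereClient:
    def __init__(self, proxy_dict=None):
        self.S = requests.Session()    # assumed: a shared requests session
        self.proxy_dict = proxy_dict   # assumed: e.g. {"https": "http://proxy:8080"}

    def connectPost(self, url, data, headers=None):
        return self.S.post(url, data=data, headers=headers, proxies=self.proxy_dict)

# Hypothetical usage mirroring the request in the question (credentials redacted)
url = ('http://pde.api.here.com/2/matchroute.json'
       '?app_id=xxxx&app_code=xxxx&routemode=car&regions=WEU&release=LATEST')
trace = 'LATITUDE,LONGITUDE\n50.875182,4.499943\n50.875240,4.499935'
resp = HereClient().connectPost(url, data=trace)
print(resp.status_code, resp.text)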
According to the licensing and delivery back-end system, the account has expired; that is why you are getting this response.
Cheers,
WaliedCheetos
Please try this Python code (it works properly for me):
# POST a CSV trace of GPS points to the HERE PDE matchroute endpoint
import requests
import json

data = bytes("\n".join([
    'LATITUDE,LONGITUDE',
    '50.875182,4.499943',
    '50.875240,4.499935',
    '50.875302,4.499920',
    '50.875375,4.499893',
    '50.875450,4.499854',
    '50.875527,4.499799',
    '50.875602,4.499728',
    '50.875670,4.499643'
]), 'utf-8')

url = 'http://pde.api.here.com/2/matchroute.json?app_id=xxxx&app_code=xxxxx&routemode=car&regions=WEU&release=LATEST'
resp = requests.post(url, data)
parsed = json.loads(resp.text)
print(json.dumps(parsed, indent=4, sort_keys=True))
This code also runs in an online Python playground.
I'm getting the error below when making a POST call with the requests library:
{'detail': [{'loc': ['body', 'files'], 'msg': 'field required', 'type': 'value_error.missing'}]}
I tried
response = requests.post("url",headers={mytoken},params=p,files=files)
files = { "file 1": open("sample.pdf",'rb'), "file 2":open("sample11.pdf",'rb')}
I want to get a 200 status but I'm getting a 422 validation error. Any idea why? It's for API testing purposes; I'm new to this and have been debugging it all day but still couldn't figure it out.
It is not clear from the question what kind of request the server is expecting, and the exact code snippet you are using is not clear either.
From the question, the snippet looks as follows:
response = requests.post("url",headers={mytoken},params=p,files=files)
files = { "file 1": open("sample.pdf",'rb'), "file 2":open("sample11.pdf",'rb')}
If so, that means you are opening the files after you have already sent the request, which may be why the server complained about the missing files field.
See the example below for how you can send two files to an endpoint that expects files.
import requests
import logging

logger = logging.getLogger(__name__)

post_url = "https://exampledomain.local/upload"

# Open both files *before* building and sending the request
file1 = open("sample1.pdf", "rb")
file2 = open("sample2.pdf", "rb")
files = {"file1": file1, "file2": file2}

headers = {"Authorization": "Bearer <your_token_here>"}
params = {"file_type": "pdf"}

response = requests.post(post_url, files=files, headers=headers, params=params)

file1.close()
file2.close()

logger.debug(response.text)
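As a side note, the same request can be written with a with block so the files are closed even if the call raises. This is just a sketch reusing the placeholder URL, token, and file names from the example above:

import requests

post_url = "https://exampledomain.local/upload"
headers = {"Authorization": "Bearer <your_token_here>"}
params = {"file_type": "pdf"}

# The files are opened just before the request and closed automatically afterwards
with open("sample1.pdf", "rb") as file1, open("sample2.pdf", "rb") as file2:
    files = {"file1": file1, "file2": file2}
    response = requests.post(post_url, files=files, headers=headers, params=params)

print(response.status_code, response.text)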
I'm using this API to search through books. I need to create a request with the given parameters. When I use the requests library and the params argument, it creates a bad URL, which gives me the wrong response. Let's look at the examples:
import requests
params = {'q': "", 'inauthor': 'keyes', 'intitle': 'algernon'}
r = requests.get('https://www.googleapis.com/books/v1/volumes?', params=params)
print('URL', r.url)
The URL is https://www.googleapis.com/books/v1/volumes?q=&inauthor=keyes&intitle=algernon
This works, but it gives a different response than the URL format shown in the Working volumes documentation.
Should be: https://www.googleapis.com/books/v1/volumes?q=inauthor:keyes+intitle:algernon
The requests documentation only describes params and separates them with &.
I'm looking for a library or any other solution; hopefully I don't have to build the query strings myself using e.g. f-strings.
You need to build a single query parameter to send in the URL; the way you are doing it now is not what you want.
In this code you are sending 3 separate query parameters, but that is not what you want. You actually want to send 1 parameter (q) whose value contains the whole search expression.
import requests
params = {'q': "", 'inauthor': 'keyes', 'intitle': 'algernon'}
r = requests.get('https://www.googleapis.com/books/v1/volumes?', params=params)
print('URL', r.url)
Try the code below instead, which does what you require:
import requests
params = {'inauthor': 'keyes', 'intitle': 'algernon'}
new_params = 'q='
new_params += '+'.join('{}:{}'.format(key, value) for key, value in params.items())
print(new_params)
r = requests.get('https://www.googleapis.com/books/v1/volumes?', params=new_params)
print('URL', r.url)
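If you'd rather not assemble the query string by hand, a minimal variant is to pass the whole search expression as the value of q and let requests encode it. This assumes the Books API treats percent-encoded colons in the q value the same as literal ones, which I have not verified for every query type:

import requests

# Assumption: 'inauthor:keyes intitle:algernon' works the same whether the
# colons are literal or percent-encoded by requests.
params = {'q': 'inauthor:keyes intitle:algernon'}
r = requests.get('https://www.googleapis.com/books/v1/volumes', params=params)
print('URL', r.url)  # e.g. ...?q=inauthor%3Akeyes+intitle%3Aalgernon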
I am trying to do local load testing with Locust. I got the test environment up and running, and a local build is also working. I am testing the responses of a local path, and the response I get in the terminal is correct. But the Locust UI, and the statistics after terminating the test, give me 100% failed results.
For the Locust code (I am pretty new to it) I used the Postman-generated snippet and adjusted it. This is the code for Locust:
from locust import HttpLocust, TaskSet, task, between
import requests

url = "http://localhost:8080/registry/downloadCounter"
payload = "[\n {\n \"appName\": \"test-app\",\n \"appVersion\": \"1.6.0\"\n }\n]"

class MyTaskSet(TaskSet):
    @task(2)
    def index(self):
        self.client.get("")
        headers = {
            'Content-Type': 'application/json',
            'Accept': 'application/json'
        }
        response = requests.request("POST", url, headers=headers, data=payload)
        print(response.text.encode('utf8'))

class MyLocust(HttpLocust):
    task_set = MyTaskSet
    wait_time = between(2.0, 4.0)
For the Locust swarm I used just basic numbers:
Number of total users to simulate: 1
Hatch Rate: 3
Host: http://localhost:8080/registry/downloadCounter
I do not get any results there; the table stays blank. I guess it has something to do with the JSON format, but I am not able to find the solution myself.
I also put a screenshot of the terminal response after termination in this post.
Thank you in advance for your help!
Best regards
This helped; the key change is sending the POST through self.client instead of the plain requests module, so Locust can track the request:
from locust import HttpLocust, TaskSet, task, between
import requests

url = "http://localhost:8080/registry/downloadCounter"
payload = "[\n {\n \"appName\": \"test-app\",\n \"appVersion\": \"1.6.0\"\n }\n]"
headers = {'Content-type': 'application/json', 'Accept': 'application/json'}

class MyTaskSet(TaskSet):
    @task(2)
    def index(self):
        response = self.client.post(url=url, data=payload, headers=headers)
        print(response.text.encode('utf8'))
        print(response.status_code)

class MyLocust(HttpLocust):
    task_set = MyTaskSet
    wait_time = between(2.0, 4.0)
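For reference, HttpLocust and TaskSet were replaced in Locust 1.0 by HttpUser and plain @task methods. A rough equivalent of the working test on a newer Locust release might look like this (payload and headers as defined above); the behaviour is the same, and going through self.client is what makes the request show up in the statistics table:

from locust import HttpUser, task, between

payload = "[\n {\n \"appName\": \"test-app\",\n \"appVersion\": \"1.6.0\"\n }\n]"
headers = {'Content-type': 'application/json', 'Accept': 'application/json'}

class MyUser(HttpUser):
    host = "http://localhost:8080"
    wait_time = between(2.0, 4.0)

    @task(2)
    def index(self):
        # self.client routes the call through Locust's session so it is recorded
        response = self.client.post("/registry/downloadCounter", data=payload, headers=headers)
        print(response.status_code)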
I'm relatively new to Splash. I'm trying to scrape a website that needs a login. I started off with the Splash API, with which I was able to log in perfectly. However, when I put my code in a Scrapy spider script using SplashRequest, it's not able to log in.
import scrapy
from scrapy_splash import SplashRequest

class Payer1Spider(scrapy.Spider):
    name = "payer1"
    start_url = "https://provider.wellcare.com/provider/claims/search"

    lua_script = """
    function main(splash, args)
        assert(splash:go(args.url))
        splash:wait(0.5)
        local search_input = splash:select('#Username')
        search_input:send_text('')
        local search_input = splash:select('#Password')
        search_input:send_text('')
        assert(splash:wait(0.5))
        local login_button = splash:select('#btnSubmit')
        login_button:mouse_click()
        assert(splash:wait(7))
        return {splash:html()}
    end
    """

    def start_requests(self):
        yield SplashRequest(self.start_url, self.parse_result, args={'lua_source': self.lua_script})

    def parse_result(self, response):
        yield {'doc_title': response.text}
The output HTML is the login page and not the one after logging in.
You have to add endpoint='execute' to your SplashRequest to execute the Lua script:
yield SplashRequest(self.start_url, self.parse_result, args={'lua_source': self.lua_script}, endpoint='execute')
I believe you don't actually need Splash to log in to this site. You can try the following:
Get https://provider.wellcare.com and then:
# Get the request verification token
token = response.css('input[name=__RequestVerificationToken]::attr(value)').get()

# Forge the POST request payload
data = [
    ('__RequestVerificationToken', token),
    ('Username', 'user'),
    ('Password', 'pass'),
    ('ReturnUrl', '/provider/claims/search'),
]

# Make a dict from the list of tuples
formdata = dict(data)

# And then execute the request
scrapy.FormRequest(
    url='https://provider.wellcare.com/api/sitecore/Login',
    formdata=formdata,
)
I'm not completely sure all of this will work, but you can try.
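Put together as a spider, the flow above would look roughly like this. The login endpoint, field names, and ReturnUrl are taken from the snippet above and are unverified, and the credentials are placeholders:

import scrapy

class Payer1LoginSpider(scrapy.Spider):
    name = "payer1_login"
    start_urls = ["https://provider.wellcare.com"]

    def parse(self, response):
        # Pull the anti-forgery token from the login page
        token = response.css('input[name=__RequestVerificationToken]::attr(value)').get()
        yield scrapy.FormRequest(
            url="https://provider.wellcare.com/api/sitecore/Login",
            formdata={
                "__RequestVerificationToken": token,
                "Username": "user",    # placeholder
                "Password": "pass",    # placeholder
                "ReturnUrl": "/provider/claims/search",
            },
            callback=self.after_login,
        )

    def after_login(self, response):
        # At this point the session cookies should be set; scrape as usual
        yield {"doc_title": response.text}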
I made a small program to test Microsoft Cognitive Services, but it always returns
{
"code":"InternalServerError",
"requestId":"6d6dd4ec-9840-4db3-9849-a6497094fa4c",
"message":"Internal server error."
}
The code I'm using is:
#!/usr/bin/env python
import httplib, urllib, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '53403359628e420ab85a516a79ba1bd0',
}

params = urllib.urlencode({
    # Request parameters
    'visualFeatures': 'Categories,Tags,Adult,Description,Faces',
    'details': '{string}',
})

try:
    conn = httplib.HTTPSConnection('api.projectoxford.ai')
    conn.request("POST", "/vision/v1.0/analyze?%s" % params,
                 '{"url":"http://static5.netshoes.net/Produtos/bola-umbro-neo-liga-futsal/28/D21-0232-028/D21-0232-028_zoom1.jpg?resize=54g:*"}',
                 headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
Am I doing something wrong, or is it a general server problem?
The problem is in the params variable. When defining which visual features you would like to extract, you can also request specific details from the image, as described in the documentation. The details field, if used, must be set to one of the valid string options (currently only "Celebrities" is supported, which identifies which celebrity is in the image). In this case, you initialized the details field with the literal placeholder from the documentation ('{string}'), which caused the service to return an internal error.
To correct that, you should try:
params = urllib.urlencode({
    # Request parameters
    'visualFeatures': 'Categories,Tags,Adult,Description,Faces',
    'details': 'Celebrities',
})
(PS: I have already reported this behavior to Microsoft Cognitive Services.)
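For comparison, here is a rough Python 3 equivalent of the corrected call using the requests library. The host is the legacy api.projectoxford.ai endpoint copied from the question (newer Computer Vision subscriptions use a regional *.api.cognitive.microsoft.com host instead), and the key and image URL are placeholders:

import requests

headers = {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '<your_subscription_key>',
}
params = {
    'visualFeatures': 'Categories,Tags,Adult,Description,Faces',
    'details': 'Celebrities',
}
body = {'url': '<publicly_reachable_image_url>'}

resp = requests.post(
    'https://api.projectoxford.ai/vision/v1.0/analyze',
    headers=headers,
    params=params,
    json=body,
)
print(resp.status_code)
print(resp.json())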