How to use Flurl to get the response body with query string parameters

How do I use Flurl to get the response body of a request with query string parameters? I tried the request in Postman and the results were as I expected, but I can't reproduce it with Flurl.
var strUrl = await "https://example.com/api/v2/xxx/yyy?need_personalize=true&promotionid=2007354722&sort_soldout=true"
    .WithHeaders(new
    {
        user_agent = "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
        content_type = "application/json",
        referer = "https://example.com/xxx",
        cookie = myCookies
    })
    .PostJsonAsync(new
    {
        need_personalize = true,
        promotionid = 2007354722,
        sort_soldout = true
    });
bodyUrl = await strUrl.GetStringAsync();
The result I got (HTTP status 200):
{
    "version": "6f1e0da21b667876ba3853b59a8bb812",
    "error_msg": null,
    "error": 10010
}
The response should instead look like this:
{
    "version": "6f1e0da21b667876ba3853b59a8bb812",
    "data": {
        "selling_out_item_brief_list": [
            {
                "itemid": 466536326,
                "from": null
            }
        ],
        "items": [],
        "mega_sale_items": [],
        "item_brief_list": [
            {
                "itemid": 6720842242,
                "from": null,
                "is_soldout": false
            }
        ],
        "promotionid": 2007354722
    },
    "error_msg": null,
    "error": 0
}
Can anyone help me?
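For what it's worth, here is a minimal sketch of how the query string can be built with Flurl's SetQueryParams instead of being hard-coded in the URL, and how the raw response body can be read. The URL, header values, cookies and body fields are copied from the question; whether the endpoint wants the parameters in the query string, the JSON body, or both is an assumption, and error 10010 may simply mean the API rejects the request for other reasons (for example missing or expired cookies).

// Sketch only: SetQueryParams builds the query string from an anonymous object,
// and GetStringAsync() reads the raw response body (Flurl.Http 3.x).
var response = await "https://example.com/api/v2/xxx/yyy"
    .SetQueryParams(new
    {
        need_personalize = true,
        promotionid = 2007354722,
        sort_soldout = true
    })
    .WithHeaders(new
    {
        user_agent = "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36",
        content_type = "application/json",
        referer = "https://example.com/xxx",
        cookie = myCookies          // cookie string from the question
    })
    .PostJsonAsync(new              // assumption: the body mirrors the query parameters, as in the question
    {
        need_personalize = true,
        promotionid = 2007354722,
        sort_soldout = true
    });

var body = await response.GetStringAsync();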

Related

Invalid patch error in Contentful CMA client

I am trying to populate an empty field using the patch method in Contentful. The following piece of code works in one cloned environment but does not work in another.
let patchData: OpPatch[] = [
    {
        op: 'replace',
        path: '/fields/keywords',
        value: entryKeyword,
    },
];
await cmaClient.entry.patch({ entryId: entryId }, patchData, { 'X-Contentful-Version': entryVersion });
When I try to execute this, I receive an 'Unprocessable Entity' error:
UnprocessableEntity: {
    "status": 422,
    "statusText": "Unprocessable Entity",
    "message": "Could not apply patch to entry: invalid patch",
    "details": {},
    "request": {
        "url": "/spaces/xyz/environments/abc/entries/123456789",
        "headers": {
            "Accept": "application/json, text/plain, */*",
            "Content-Type": "application/json-patch+json",
            "X-Contentful-User-Agent": "sdk contentful-management-plain.js/7.54.2;",
            "Authorization": "Bearer ...",
            "user-agent": "node.js/v14.19.2",
            "Accept-Encoding": "gzip",
            "X-Contentful-Version": 25,
            "Content-Length": 78
        },
        "method": "patch",
        "payloadData": "[{\"op\":\"replace\",\"path\":\"/fields/keywords\",\"value\":\"test keyword\"}]"
    },
    "requestId": "abcd-123456"
}
I have exactly the same access permissions in both environments. What am I missing?
I had the same issue - it turned out that when the entry doesn't have the field you're trying to patch, it will throw an error like the one above.
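A minimal sketch of one way around this, assuming the failing environment's entry simply has no keywords field yet: per RFC 6902, replace requires the target location to exist, while add creates it, so the op can be chosen based on whether the field is already present. cmaClient, entryId, entryKeyword, entryVersion and the OpPatch type are assumed to be set up exactly as in the question.

// Sketch: fetch the entry first and pick 'add' when the field is missing.
// 'replace' on a non-existent path is what produces the "invalid patch" error.
const entry = await cmaClient.entry.get({ entryId: entryId });
const hasKeywords = entry.fields && 'keywords' in entry.fields;

let patchData: OpPatch[] = [
    {
        op: hasKeywords ? 'replace' : 'add',
        path: '/fields/keywords',
        value: entryKeyword,
    },
];

await cmaClient.entry.patch({ entryId: entryId }, patchData, { 'X-Contentful-Version': entryVersion });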

Build an Authenticated GET API in R

I can't figure out how to set up an API request correctly. I have an example in Python and would like to understand how to reproduce it in R: how to correctly set the attributes and authenticate.
import requests
import json

url = "https://developer.junglescout.com/api/product_database_query?marketplace=us"

payload = json.dumps({
    "data": {
        "type": "product_database_query",
        "attributes": {
            "include_keywords": [
                "videogames"
            ],
            "categories": [
                "Video Games"
            ],
            "exclude_unavailable_products": True
        }
    }
})

headers = {
    'Content-Type': 'application/vnd.api+json',
    'Accept': 'application/vnd.junglescout.v1+json',
    'Authorization': 'KEY_NAME:MY_API_KEY'
}

response = requests.request("POST", url, headers=headers, data=payload)
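Since the question asks how to reproduce this in R, here is a minimal, untested sketch using the httr and jsonlite packages: POST() plus add_headers() mirror requests.request("POST", ...), and toJSON() plays the role of json.dumps(). The URL, headers and payload are taken from the Python example; 'KEY_NAME:MY_API_KEY' remains a placeholder for the real key name and API key.

library(httr)
library(jsonlite)

url <- "https://developer.junglescout.com/api/product_database_query?marketplace=us"

# Same payload as the Python example; R lists serialize to JSON objects/arrays.
payload <- toJSON(list(
    data = list(
        type = "product_database_query",
        attributes = list(
            include_keywords = list("videogames"),
            categories = list("Video Games"),
            exclude_unavailable_products = TRUE
        )
    )
), auto_unbox = TRUE)

response <- POST(
    url,
    add_headers(
        `Content-Type` = "application/vnd.api+json",
        Accept = "application/vnd.junglescout.v1+json",
        Authorization = "KEY_NAME:MY_API_KEY"
    ),
    body = payload
)

content(response, as = "text", encoding = "UTF-8")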

Need help scraping contents of this page with Scrapy

Can someone please tell me how to scrape the data (names and numbers) from this page using Scrapy? The data is loaded dynamically. If you check the Network tab you'll find a POST request to https://www.icab.es/rest/icab-api/collegiates, so I copied it as cURL and sent the request through Postman, but I am getting an error. Could someone please help me?
URL: https://www.icab.es/es/servicios-a-la-ciudadania/necesito-un-abogado/buscador-de-profesionales/?extraSearch=false&probono=false
This is a very good question! But maybe next time you'll want to add your code and format it a little better. See How to Ask.
Solution:
You need to recreate the request. I inspected it with Burp Suite and got the headers for the URL in start_urls, and both the headers and the body for the json_url.
If you try to request the json_url directly from start_requests you'll get a 401 error, so we first go to the start_urls URL and only then request the json_url.
The complete code:
import scrapy


class Temp(scrapy.Spider):
    name = "tempspider"
    allowed_domains = ['icab.es']
    start_urls = ['https://www.icab.es/es/servicios-a-la-ciudadania/necesito-un-abogado/buscador-de-profesionales']
    json_url = 'https://www.icab.es/rest/icab-api/collegiates'

    def start_requests(self):
        headers = {
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
            "Origin": "https://www.icab.es",
            "Accept-Encoding": "gzip, deflate, br",
            "Accept-Language": "en-US,en;q=0.5",
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
            "DNT": "1",
            "Host": "www.icab.es",
            "Pragma": "no-cache",
            "Sec-Fetch-Dest": "document",
            "Sec-Fetch-Mode": "navigate",
            "Sec-Fetch-Site": "none",
            "Sec-Fetch-User": "?1",
            "Sec-GPC": "1",
            "Upgrade-Insecure-Requests": "1",
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36",
        }
        yield scrapy.Request(url=self.start_urls[0], headers=headers, callback=self.parse)

    def parse(self, response):
        headers = {
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
            "DNT": "1",
            "Pragma": "no-cache",
            "Sec-GPC": "1",
            'Accept': 'application/json',
            'Accept-Encoding': 'gzip, deflate',
            'Accept-Language': 'en-US,en;q=0.9',
            'Content-Type': 'application/json',
            'Host': 'www.icab.es',
            'Sec-Ch-Ua': '"Chromium";v="91", " Not;A Brand";v="99"',
            'Sec-Ch-Ua-Mobile': '?0',
            'Origin': 'https://www.icab.es',
            'Referer': 'https://www.icab.es/es/servicios-a-la-ciudadania/necesito-un-abogado/buscador-de-profesionales',
            'Sec-Fetch-Site': 'same-origin',
            'Sec-Fetch-Mode': 'cors',
            'Sec-Fetch-Dest': 'empty',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36',
            "X-KL-Ajax-Request": "Ajax_Request",
        }
        body = '{"filters":{"keyword":"","name":"","surname":"","street":"","postalCode":"","collegiateNumber":"","dedication":"","language":"","paginationFirst":"1","paginationLast":"25","paginationOrder":"surname","paginationOrderAscDesc":"ASC"}}'
        yield scrapy.Request(url=self.json_url, headers=headers, body=body, method='POST', callback=self.parse_json)

    def parse_json(self, response):
        json_response = response.json()
        members = json_response['members']
        for member in members:
            yield {
                'randomPosition': member['randomPosition'],
                'collegiateNumber': member['collegiateNumber'],
                'surname': member['surname'],
                'name': member['name'],
                'gender': member['gender'],
            }
Output:
{'randomPosition': '27661107', 'collegiateNumber': '35080', 'surname': 'Abad Bamala', 'name': 'Ana', 'gender': 'M'}
{'randomPosition': '98668217', 'collegiateNumber': '14890', 'surname': 'Abad Calvo', 'name': 'Encarnacion', 'gender': 'M'}
{'randomPosition': '53180188', 'collegiateNumber': '29746', 'surname': 'Abad de Brocá', 'name': 'Laura', 'gender': 'M'}
{'randomPosition': '41073111', 'collegiateNumber': '31865', 'surname': 'Abad Esteve', 'name': 'Joan Domènec', 'gender': 'H'}
{'randomPosition': '63371735', 'collegiateNumber': '29647', 'surname': 'Abad Fernández', 'name': 'Dolors', 'gender': 'M'}
{'randomPosition': '30290704', 'collegiateNumber': '45016', 'surname': 'Abad Hernández', 'name': 'Laura', 'gender': 'M'}
{'randomPosition': '57510617', 'collegiateNumber': '16083', 'surname': 'Abad Mariné', 'name': 'Jose Antonio', 'gender': 'H'}
................
................
................

Filtering out login logs from Gitlab production_json.log with jq

I'm trying to filter out login events from the production_json.log of an Omnibus GitLab server.
The JSON elements I want to filter look like this:
{
    "method": "POST",
    "path": "/users/sign_in",
    "format": "html",
    "controller": "SessionsController",
    "action": "create",
    "status": 302,
    "duration": 146.22,
    "view": 0,
    "db": 16.64,
    "location": "https://maschm.ddnss.de/",
    "time": "2021-01-05T11:44:30.180Z",
    "params": [
        {
            "key": "utf8",
            "value": "✓"
        },
        {
            "key": "authenticity_token",
            "value": "[FILTERED]"
        },
        {
            "key": "user",
            "value": {
                "login": "root",
                "password": "[FILTERED]",
                "remember_me": "0"
            }
        }
    ],
    "remote_ip": "46.86.21.18",
    "user_id": 1,
    "username": "root",
    "ua": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15",
    "queue_duration": 7.3,
    "correlation_id": "JtnY93e2ti8"
}
I only want output for such elements.
jq is new to me. I'm using this command now:
sudo tail -f /var/log/gitlab/gitlab-rails/production_json.log |
jq --unbuffered '
    if .remote_ip != null and .method == "POST" and
       .path == "/users/sign_in" and .action == "create"
    then
        .ua + " " + .remote_ip
    else
        ""
    end
'
The output looks like this:
""
""
""
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.2 Safari/605.1.15 46.86.21.18"
""
""
""
""
""
""
I have two questions:
How can I avoid the "" output (there should be no output for other JSON elements)?
Is if the correct jq statement to use for filtering?
You could use empty instead of "" to solve the problem, but using select() to filter out unwanted stream elements is a cleaner solution.
jq --unbuffered '
    select(
        .remote_ip != null and
        .method == "POST" and
        .path == "/users/sign_in" and
        .action == "create"
    ) |
    .ua + " " + .remote_ip
'
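For completeness, the empty variant mentioned above keeps the original if/then/else and only replaces the "" branch; empty produces no output at all for non-matching log lines:

jq --unbuffered '
    if .remote_ip != null and .method == "POST" and
       .path == "/users/sign_in" and .action == "create"
    then
        .ua + " " + .remote_ip
    else
        empty
    end
'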

Importing Postman Collection Fails

I'm trying to import a Postman collection and I'm getting this error in an alert dialog:
Import Failed
TypeError: null is not an object (evaluating 'postmanBodyData.length')
And then this in the console:
JS Exception Line 54. TypeError: null is not an object (evaluating 'postmanBodyData.length')
Here's a sample of a collection that failed to import.
{
    "id": "5eb54264-f906-b6d7-9ee4-d045875c8ad4",
    "name": "SO Test",
    "order": [
        "ee9c4b31-f6b3-0799-5d9d-298d8257d6d0",
        "513b4473-f1c3-469e-ce67-edaf33faf2d0"
    ],
    "timestamp": 1448497158415,
    "requests": [
        {
            "id": "513b4473-f1c3-469e-ce67-edaf33faf2d0",
            "url": "http://stackoverflow.com/questions/33901145/importing-postman-collection-fails?noredirect=1#comment55564842_33901145",
            "method": "GET",
            "headers": "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\nUpgrade-Insecure-Requests: 1\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.69 Safari/537.36\nAccept-Encoding: gzip, deflate, sdch\nAccept-Language: en-US,en;q=0.8\nCookie: prov=41dcce2f-3878-4f81-b102-86ced2fc0edd; __qca=P0-107192378-1422497046148; gauthed=1; _ga=GA1.2.828174835.1422497046; __cfduid=df57f13c8f66daf4cca857b9bde72d0981447728327\n",
            "data": null,
            "dataMode": "params",
            "version": 2,
            "name": "http://stackoverflow.com/questions/33901145/importing-postman-collection-fails?noredirect=1#comment55564842_33901145",
            "description": "",
            "descriptionFormat": "html",
            "collectionId": "5eb54264-f906-b6d7-9ee4-d045875c8ad4"
        },
        {
            "id": "ee9c4b31-f6b3-0799-5d9d-298d8257d6d0",
            "url": "http://stackoverflow.com/posts/33901145/ivc/2e31?_=1448497117271",
            "method": "GET",
            "headers": "Accept: */*\nX-Requested-With: XMLHttpRequest\nUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.69 Safari/537.36\nReferer: http://stackoverflow.com/questions/33901145/importing-postman-collection-fails?noredirect=1\nAccept-Encoding: gzip, deflate, sdch\nAccept-Language: en-US,en;q=0.8\nCookie: prov=41dcce2f-3878-4f81-b102-86ced2fc0edd; __qca=P0-107192378-1422497046148; gauthed=1; _ga=GA1.2.828174835.1422497046; __cfduid=df57f13c8f66daf4cca857b9bde72d0981447728327\n",
            "data": null,
            "dataMode": "params",
            "version": 2,
            "name": "http://stackoverflow.com/posts/33901145/ivc/2e31?_=1448497117271",
            "description": "",
            "descriptionFormat": "html",
            "collectionId": "5eb54264-f906-b6d7-9ee4-d045875c8ad4"
        }
    ]
}
This bug was caused by an empty body being passed incorrectly:
"data": null,
"dataMode": "params",
It has been fixed in v1.1.2.
