Log entry to Splunk using Python - python-requests

In Splunk we have a URL, index, token, host, source, and sourcetype, and with those details I need to post data to Splunk using Python.
I was able to write code using requests with the URL, index, and token, and it works:
import requests

url = 'SPLUNK_URL'
headers = {'Authorization': 'Splunk ' + '1234567'}
payload = {"index": "xxx_yyy", "event": {"message": "Value"}}
r = requests.post(url, headers=headers, json=payload, verify=False)
But sometimes I get this error: ConnectionError: ('Connection aborted.', OSError("(10054, 'WSAECONNRESET')")). How can I avoid this error?

Assuming this is HEC (the HTTP Event Collector):
I would compare the times you receive this error against times the receiver has issues, such as high CPU utilization, or check its internal logs for connection drops. That could be your answer, as the receiver rejects/resets connections under load. Also, if you are sending directly to an indexer rather than through an intermediate instance, I believe there is a known issue with that. On the client side, retrying transient resets can also help; see the sketch below.
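A minimal retry sketch, assuming the same placeholder URL/token as the question and urllib3 ≥ 1.26 (verify=False is kept from the question, but is not recommended in production):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

url = 'SPLUNK_URL'  # placeholder from the question
headers = {'Authorization': 'Splunk ' + '1234567'}
payload = {"index": "xxx_yyy", "event": {"message": "Value"}}

session = requests.Session()
retries = Retry(
    total=3,
    backoff_factor=1,                       # exponential backoff between attempts
    status_forcelist=[500, 502, 503, 504],  # also retry transient server errors
    allowed_methods=frozenset(['POST']),    # POST is not retried by default
)
session.mount('https://', HTTPAdapter(max_retries=retries))
session.mount('http://', HTTPAdapter(max_retries=retries))

r = session.post(url, headers=headers, json=payload, verify=False, timeout=30)
r.raise_for_status()

This won't stop the receiver from resetting connections while it is overloaded, but it keeps occasional WSAECONNRESET errors from killing the sender.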

Related

httr authentication login/password in "xtb" API

I need to authenticate and get prices using this API.
I have no experience with APIs, so my attempt to log in gives an error:
login <- "vikov98261#jesdoit.com"
pass <- "QazQaz123"

library(httr)
resp <- POST("xapi.xtb.com",
             body = list(userId = login,
                         password = pass))
Error in curl::curl_fetch_memory(url, handle = handle) :
Failed to connect to xapi.xtb.com port 80: Timed out
Can someone show me how to do it right?
I would like an example of how the login request works,
and also an example of how to get the prices of any currency.
Their API documentation uses WebSocket syntax, so I assume xapi.xtb.com may only be used by their own clients. I, for one, only managed to get WebSocket to work.
In order to make this work in R you would need a WebSocket client library for R, such as websocket. Once you have that:
1. Define connection
library(websocket)
ws <- WebSocket$new("wss://ws.xtb.com/demo")
2. Log in
WebSocket clients work with events. The 'open' event is generated once the connection is established and the 'message' events are generated when messages are received. You need to write handlers for them to orchestrate the way you want to use the XTB API.
The first event will be 'open', so use that to send the login command.
ws$onOpen(function(event) {
  # Serialise the login command to JSON before sending;
  # a bare JSON literal is not valid R syntax.
  ws$send(jsonlite::toJSON(list(
    command = "login",
    arguments = list(
      userId = "1000",
      password = "PASSWORD",
      appId = "test",
      appName = "test"
    )
  ), auto_unbox = TRUE))
})
3. Your logic
The response to your login command will trigger a 'message' event, which you will need to handle in your code.
ws$onMessage(function(event) { <your-code-goes-here> })
The easiest way would probably be to send new commands based on the structure of the received message, although this can get really complicated with many commands; see the sketch below.
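For instance, a minimal handler sketch, assuming the jsonlite package; the status field and the getAllSymbols follow-up command are taken from my reading of XTB's docs, so verify them there:

ws$onMessage(function(event) {
  msg <- jsonlite::fromJSON(event$data)
  # Dispatch on the parsed message; e.g. once the login succeeds,
  # issue the next command.
  if (isTRUE(msg$status)) {
    ws$send(jsonlite::toJSON(list(command = "getAllSymbols"), auto_unbox = TRUE))
  }
})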
4. Connect
After all handlers have been defined, don't forget to connect.
ws$connect()

Sign in with requests leads to "SSO Request Failed, Session token is null"

I am trying to implement a solution for signing in and retrieving data, similar to this question. However, when I send my GET request and print the text, I get the following:
{
  "errorCode" : "5011",
  "errorMessage" : "SSO Request Failed, Session token is null.",
  "errorMessageDetail" : null
}
How do I fix this issue?
My code:
with requests.Session() as s:
    r1 = s.post(url1, data={'username': 'user_name', 'password': '1234',
                            'rememberme': 'true', 'userprofile': 'true'})
    r2 = s.get(url2, cookies=r1.cookies)
    print(r2.text)
It turned out I was using the wrong URL.
Perhaps I had run into one of the complex issues listed here: How to login using requests in Python?
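For reference, once the correct endpoints are known, the minimal working pattern looks like this (url1 and url2 are hypothetical placeholders; note that a requests.Session persists cookies across calls by itself, so cookies=r1.cookies was redundant):

import requests

url1 = 'https://example.com/login'  # hypothetical: the correct login endpoint
url2 = 'https://example.com/data'   # hypothetical: the protected data endpoint

with requests.Session() as s:
    # The session stores any cookies the login response sets.
    r1 = s.post(url1, data={'username': 'user_name', 'password': '1234',
                            'rememberme': 'true', 'userprofile': 'true'})
    r1.raise_for_status()
    r2 = s.get(url2)  # cookies are sent automatically
    print(r2.text)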

Handle POST request with Firebase function

I am using Fulcrum to collect data. Fulcrum has a webhook feature.
I have created a Firebase function and linked it to Fulcrum's webhook feature via the function's URL: https://us-central1-example.cloudfunctions.net/fulcrumHook
Here is my existing function:
exports.fulcrumHook = functions.https.onRequest((request, response) => {
  console.log(response.data.form_id)
  response.send(200)
})
Through hours of debugging, I can see in the logs that the data I want is coming through, but I am struggling to access it in the function itself.
When I log the request I get IncomingMessage { _readableState: ReadableState { objectMode: false,.....
When I log the response I get ServerResponse { domain: null, _events: [Object: null prototype] { finish: [ [.... as well as the body much further down, with the actual data I need in it.
I have searched for all the keywords I can think of about how to handle this data, but I am completely stumped.
Do I need to handle the response like a promise, with response.then(data => ...stuff)?
Do I need to establish a connection like a socket, with response.on('data', (data) => ...stuff)?
Everything you need is in the documentation for HTTP triggers.
The request and response are essentially Express Request and Response objects.
Used as arguments for onRequest(), the Request object gives you access to the properties of the HTTP request sent by the client, and the Response object gives you a way to send a response back to the client.
You can click through to those linked APIs to understand in detail how they work.
Data passed to the function can be found by reading values from the request. If it's a POST request, form values are read like this:
request.body.form_id
The response is sent using response.send(). Just pass it an object and it will be serialized as JSON automatically. Or use the linked Response API above to learn more about your options; a corrected version of the function is sketched below.
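Putting those points together, a minimal sketch of the corrected function (the form_id field name comes from the question; the acknowledgement body is arbitrary):

exports.fulcrumHook = functions.https.onRequest((request, response) => {
  // The webhook payload arrives on the request; Cloud Functions has
  // already parsed the JSON body for you.
  console.log(request.body.form_id);
  // Acknowledge receipt with an explicit status code.
  response.status(200).send('OK');
});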

SSE with Leshan LWM2M Demo Server

I am trying to build an HTTP API that interacts with a Leshan Demo Server.
I was trying to handle OBSERVE in LWM2M, but I need to handle the notifications over HTTP.
I discovered that Leshan notifies using SSE (Server-Sent Events), so I tried to implement an SSE client in Python using requests and sseclient.
This is my code:
response = requests.post(url_request, "format=TLV", stream=True)
client = sseclient.SSEClient(response)
for event in client.events():
    print(json.loads(event.data))
I tried to run my script, but it seems the stream does not open: it closes immediately without waiting for the server's answer, even though requests enables keep-alive for the underlying TCP connection by default and stream is True.
Does someone know why?
Reading the sseclient documentation, the correct way to use SSEClient seems to be:
from sseclient import SSEClient

messages = SSEClient('http://example.com/sse_stream/')
for msg in messages:
    do_something_useful(msg)
Reading the answer on the Leshan GitHub, the stream URL for the Leshan Server Demo seems to be http://your.leshan.server.org/event?ep=your_device_endpoint_name
So I tried this:
from sseclient import SSEClient

messages = SSEClient('http://localhost:8080/event?ep=my_device')
for msg in messages:
    print(msg.event, msg.data)
And it works for me 🎉! I get this kind of result when observing the temperature instance of the Leshan Client Demo:
(u'NOTIFICATION', u'{"ep":"my_device","res":"/3303/0","val":{"id":0,"resources":[{"id":5601,"value":-18.9},{"id":5602,"value":31.2},{"id":5700,"value":-18.4},{"id":5701,"value":"cel"}]}}')
(u'COAPLOG', u'{"timestamp":1592296453808,"incoming":true,"type":"CON","code":"POST","mId":29886,"token":"889372029F81C124","options":"Uri-Path: \\"rd\\", \\"reWfKIgPYD\\"","ep":"my_device"}')
(u'COAPLOG', u'{"timestamp":1592296453809,"incoming":false,"type":"ACK","code":"2.04","mId":29886,"token":"889372029F81C124","ep":"my_device"}')
(u'UPDATED', u'{"registration":{"endpoint":"my_device","registrationId":"reWfKIgPYD","registrationDate":"2020-06-16T10:02:25+02:00","lastUpdate":"2020-06-16T10:34:13+02:00","address":"127.0.0.1:44400","lwM2mVersion":"1.0","lifetime":300,"bindingMode":"U","rootPath":"/","objectLinks":[{"url":"/","attributes":{"rt":"\\"oma.lwm2m\\""}},{"url":"/1/0","attributes":{}},{"url":"/3/0","attributes":{}},{"url":"/6/0","attributes":{}},{"url":"/3303/0","attributes":{}}],"secure":false,"additionalRegistrationAttributes":{}},"update":{"registrationId":"reWfKIgPYD","identity":{"peerAddress":{}},"additionalAttributes":{}}}')
(u'COAPLOG', u'{"timestamp":1592296455150,"incoming":true,"type":"NON","code":"2.05","mId":29887,"token":"3998C5DE2588F835","options":"Content-Format: \\"application/vnd.oma.lwm2m+tlv\\" - Observe: 2979","payload":"Hex:e3164563656ce8164408c03199999999999ae815e108c032e66666666666e815e208403f333333333333","ep":"my_device"}')
If you are only interested in notifications, just add an if msg.event == 'NOTIFICATION': block, as in the sketch below.
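For example, building on the working snippet above (same placeholder endpoint name; the res and val fields appear in the NOTIFICATION payload shown above):

from sseclient import SSEClient
import json

messages = SSEClient('http://localhost:8080/event?ep=my_device')
for msg in messages:
    if msg.event == 'NOTIFICATION':
        # Parse only LWM2M notifications, ignoring COAPLOG, UPDATED, etc.
        notification = json.loads(msg.data)
        print(notification['res'], notification['val'])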

Wikidata Forbidden Access

I was trying to run some Wikidata queries with Python requests and multiprocessing (number_workers = 8), and now I'm getting code 403 (Access Forbidden). Are there any restrictions? I've seen here that I should limit myself to 5 concurrent queries, but now even a single query from Python returns nothing. It used to work.
Is this Access Forbidden temporary, or am I blacklisted forever? :(
I didn't see any restrictions in their docs, so I was not aware that I was doing something that would get me banned.
Does anyone know what the situation is?
wikidata_url = 'https://query.wikidata.org/sparql'
headers = {'User-Agent': 'Chrome/77.0.3865.90'}
# Note: headers is nested inside params here, so it is sent as a query
# parameter and the custom User-Agent header is never actually applied.
r = requests.get(wikidata_url, params={'format': 'json', 'query': query, 'headers': headers})
EDIT AFTER FIXES:
It turned out that I was temporarily banned from the server. I changed my user agent to follow the recommended template and waited for my ban to be lifted. The problem was that I was ignoring error 429, which tells me that I have exceeded my allowed limit and have to retry after some time (a few seconds). This led to the error 403.
I tried to correct my error, caused by inexperience, by writing the following code, which takes this into account. I am adding this edit because it may be useful for someone else.
import datetime
import time

def get_delay(date):
    # The Retry-After header can be an HTTP date or a number of seconds.
    try:
        # Compare in UTC, since the header date is given in GMT.
        date = datetime.datetime.strptime(date, '%a, %d %b %Y %H:%M:%S GMT')
        timeout = int((date - datetime.datetime.utcnow()).total_seconds())
    except ValueError:
        timeout = int(date)
    return timeout

def make_request(params):
    r = requests.get(wikidata_url, params)
    print(r.status_code)
    if r.status_code == 200:
        if r.json()['results']['bindings']:
            return r.json()
        return None
    if r.status_code in (500, 403):
        return None
    if r.status_code == 429:
        timeout = get_delay(r.headers['retry-after'])
        print('Timeout {} m {} s'.format(timeout // 60, timeout % 60))
        time.sleep(timeout)
        return make_request(params)  # return the retried result
The access limits were tightened up in 2019 to try and cope with overloading of the query servers. The generic python-request user agent was blocked as part of this (I don't know if/when this was reinstated).
Per the Query Service manual, the current rules seem to be:
One client (user agent + IP) is allowed 60 seconds of processing time each 60 seconds
One client is allowed 30 error queries per minute
Clients who don't comply with the User-Agent policy may be blocked completely
Access to the service is limited to 5 parallel queries per IP [this may change]
I would recommend trying again, running single queries with a more detailed user agent, to see if that works; for example:
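A minimal single-query sketch with a policy-compliant user agent (the bot name, URL, and contact address are hypothetical placeholders):

import requests

wikidata_url = 'https://query.wikidata.org/sparql'
# A descriptive User-Agent with contact details, per the Wikimedia User-Agent
# policy, passed via the headers argument rather than inside params.
headers = {'User-Agent': 'MyResearchBot/1.0 (https://example.org/bot; bot@example.org)'}
query = 'SELECT ?item WHERE { ?item wdt:P31 wd:Q146 } LIMIT 5'  # example query

r = requests.get(wikidata_url, params={'format': 'json', 'query': query}, headers=headers)
r.raise_for_status()
print(r.json()['results']['bindings'])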
