mitmproxy decode gRPC response

I want to intercept requests using mitmproxy, but the response is encoded with gRPC. This is my current code:
from mitmproxy import http
from mitmproxy import ctx


class TestProxy:
    following = "Search"

    def response(self, flow: http.HTTPFlow) -> None:
        if self.following in flow.request.pretty_url:
            print(flow.request.pretty_url)
            # bytes.fromhex(flow.response.content).decode('utf-8')
            pass


addons = [TestProxy()]
I want to decode flow.response.content. I tried bytes.fromhex and other approaches, without success.
How can I decode the gRPC response?
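As a starting point, gRPC frames each protobuf message with a 5-byte prefix (one compressed-flag byte plus a 4-byte big-endian length), so the payload has to be unframed before it can be parsed. Below is a minimal sketch of that unframing inside the response hook, using the same names as the code above; the protobuf message class in the comment is an assumption you would replace with one generated from the service's .proto file. Recent mitmproxy releases also ship a gRPC/Protobuf content view in the UI, which may already be enough for inspection.

import struct

def response(self, flow: http.HTTPFlow) -> None:
    if self.following in flow.request.pretty_url:
        data = flow.response.content
        while len(data) >= 5:
            # 1 byte compressed flag + 4 bytes big-endian message length
            compressed, length = struct.unpack("!BI", data[:5])
            message = data[5:5 + length]  # raw protobuf bytes (decompress first if compressed == 1)
            # parsed = MyResponse.FromString(message)  # hypothetical class generated from your .proto
            print(message.hex())
            data = data[5 + length:]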

Related

Adafruit_requests library functions differently than Python requests library

So I'm trying to run a POST request to a TD Ameritrade API on an Adafruit MagTag device using the adafruit_requests module. I have run the same code in Python using the requests module, which has proven to work fine.
My MagTag device has successfully connected to the internet and executed the test requests Adafruit supplied, but it will not work with my code and returns a "duplicate headers" error. Below is the complete code that I ran on my MagTag device, as well as the error message that was returned.
import ssl
import wifi
import socketpool
import adafruit_requests
import json
# Get wifi details from a secrets.py file
from secrets import secrets
# Connect to wifi
wifi.radio.connect(secrets["ssid"],secrets["password"])
pool = socketpool.SocketPool(wifi.radio)
requests = adafruit_requests.Session(pool, ssl.create_default_context())
# Get json data
refreshToken = ''
client_id = ''
with open("TDtokens.json","r") as tokens_file, open("TDsecrets.json","r") as secrets_file:
refreshToken = json.load(tokens_file)['refresh_token']
client_id = json.load(secrets_file)['client_id']
# Refresh token needs to be triggered every 30 minutes, as authToken expires.
url = r"https://api.tdameritrade.com/v1/oauth2/token"
headers = {'Content-Type':'application/x-www-form-urlencoded'}
data = {'grant_type': 'refresh_token','refresh_token': refreshToken,'client_id':client_id}
authReply = requests.post(url=url, headers=headers, data=data)
print(authReply.json())
{'fault': {'faultstring': 'Duplicate Header "Content-Type"', 'detail': {'errorcode': 'protocol.http.DuplicateHeader'}}}
I have done a lot of tweaking and I can't figure out where this error is coming from, whether it is on Adafruit's or TD Ameritrade's side. Reading the documentation for the adafruit_requests module, I do not see any reason why an error would be raised for this type of request.
If you have any experience with Adafruit or Python requests, any advice would be greatly appreciated.
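One thing worth testing (an assumption about adafruit_requests' behaviour, not something confirmed by its documentation) is that the session adds its own Content-Type header when it encodes a dict passed as data, which would collide with the header supplied explicitly. Two sketches of that idea:

# Sketch 1: drop the explicit header and let the library set Content-Type itself (assumption).
authReply = requests.post(url, data=data)

# Sketch 2: pre-encode the form body yourself and keep the explicit header.
# (Values containing special characters would still need URL-encoding.)
body = "&".join("{}={}".format(k, v) for k, v in data.items())
authReply = requests.post(url, headers=headers, data=body)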

FastAPI TestClient executing the internal API call

This is the first time I'm trying to write test cases.
I've got a simple FastAPI application and I'm trying to create tests with the unittest module.
My goal is to test how the app behaves in the success case.
I've got a simple route in my app:
from fastapi import APIRouter, Request
import requests

router = APIRouter()


@router.post("/demo_router")
async def demo_api(request: Request):
    # now calling some private APIs ('data' is built elsewhere in the real code)
    resp = requests.post("https://example.com", json=data)
    return {"resp_data": resp.json()}
Now in my unittest module I'm trying to patch the above API. I'm using unittest.mock, but I'm getting very strange behavior.
import unittest
from fastapi.testclient import TestClient
from unittest.mock import patch
from main import app


class DemoViewTestCase(unittest.TestCase):
    def test_demo_api(self):
        with patch('src.endpoints.demo_module.demo_api') as mocked_post:
            mocked_post.return_value.status_code = 200
            mocked_post.return_value.json = {
                "message": "request accepted",
                "success": True
            }
            url = router.url_path_for('demo_api')  # fetch the API route
            client = TestClient(app)
            response = client.post(url, json={"id": "BXBksk8920", "name": "Pjp"})
My problem is that TestClient is calling the API and executing it, so it is triggering the internal call to "https://example.com", which causes some execution in the pipelines. How can I overcome this?
The internal API shouldn't be triggered; should I mock that as well? Any solution for that?
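Note that patching the endpoint function itself has no effect here, because TestClient dispatches to the function reference FastAPI stored on the router at decoration time. What can be patched instead is the requests.post call inside the endpoint module. A minimal sketch (the module path mirrors the patch target used in the question and may need adjusting):

with patch('src.endpoints.demo_module.requests.post') as mocked_post:
    mocked_post.return_value.status_code = 200
    mocked_post.return_value.json.return_value = {"message": "request accepted", "success": True}

    client = TestClient(app)
    response = client.post("/demo_router", json={"id": "BXBksk8920", "name": "Pjp"})
    assert response.status_code == 200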
When testing, all the code will be executed, including the calls to external APIs. If you don't want this, you have to provide mock APIs (Postman, Mockoon and many others provide this).
Because you don't want the hassle of changing the URLs by hand whenever you test, you could look at automating this.
One way of doing this is to provide all URLs for external APIs using pydantic BaseSettings.
config.py:
from pydantic import BaseSettings


class Settings(BaseSettings):
    external_api_url: str = "https://api.example.com"
And use this in your code:
settings = Settings()  # scans the environment for any matching env settings!
...
resp = requests.post(settings.external_api_url, json=data)
In your tests you can override these settings:
settings = Settings(external_api_url="https://mockservice")
This is documented further in the Pydantic BaseSettings documentation.
There are more ways to enhance testing, covered in the FastAPI documentation:
Dependency Override: https://fastapi.tiangolo.com/advanced/testing-dependencies/
Use different databases for testing: https://fastapi.tiangolo.com/advanced/testing-database/
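For the dependency-override route mentioned above, a minimal sketch of how the outgoing call could be wrapped in a dependency and swapped out in tests; all names here are illustrative, not taken from the question.

# application side (illustrative names)
from fastapi import APIRouter, Depends, Request
import requests

router = APIRouter()

def post_to_external(data: dict) -> dict:
    # the real outgoing call, kept behind a small function so tests can replace it
    return requests.post("https://example.com", json=data).json()

def get_poster():
    return post_to_external

@router.post("/demo_router")
async def demo_api(request: Request, poster=Depends(get_poster)):
    data = await request.json()
    return {"resp_data": poster(data)}

# test side (get_poster would be imported from the endpoint module in a real project)
from fastapi.testclient import TestClient
from main import app

def fake_poster(data: dict) -> dict:
    return {"message": "request accepted", "success": True}

app.dependency_overrides[get_poster] = lambda: fake_poster
client = TestClient(app)
response = client.post("/demo_router", json={"id": "BXBksk8920", "name": "Pjp"})
assert response.json() == {"resp_data": {"message": "request accepted", "success": True}}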

SSE with Leshan LWM2M Demo Server

I am trying to build an HTTP API that interacts with a Leshan Demo Server.
I was trying to handle the OBSERVE operation in LWM2M, but I need to handle the notifications over HTTP.
I discovered that Leshan sends notifications using SSE, so I was trying to implement an SSE client in Python using requests and sseclient.
This is my code:
response = requests.post(url_request, "format=TLV", stream=True)
client = sseclient.SSEClient(response)
for event in client.events():
    print(json.loads(event.data))
I tried to run my script, but it seems like the stream is not opening: it closes immediately without waiting for the server's answer, even though requests implements keep-alive for the underlying TCP connection by default and stream is True.
Does anyone know why?
Reading the sseclient documentation, the correct way to use SSEClient seems to be:
from sseclient import SSEClient

messages = SSEClient('http://example.com/sse_stream/')
for msg in messages:
    do_something_useful(msg)
Reading the answer on the Leshan GitHub, the stream URL for the Leshan Server Demo seems to be http://your.leshan.server.org/event?ep=your_device_endpoint_name
So I tried that:
from sseclient import SSEClient

messages = SSEClient('http://localhost:8080/event?ep=my_device')
for msg in messages:
    print(msg.event, msg.data)
And it works for me 🎉! I get this kind of result when I observe the temperature instance of the Leshan Client Demo:
(u'NOTIFICATION', u'{"ep":"my_device","res":"/3303/0","val":{"id":0,"resources":[{"id":5601,"value":-18.9},{"id":5602,"value":31.2},{"id":5700,"value":-18.4},{"id":5701,"value":"cel"}]}}')
(u'COAPLOG', u'{"timestamp":1592296453808,"incoming":true,"type":"CON","code":"POST","mId":29886,"token":"889372029F81C124","options":"Uri-Path: \\"rd\\", \\"reWfKIgPYD\\"","ep":"my_device"}')
(u'COAPLOG', u'{"timestamp":1592296453809,"incoming":false,"type":"ACK","code":"2.04","mId":29886,"token":"889372029F81C124","ep":"my_device"}')
(u'UPDATED', u'{"registration":{"endpoint":"my_device","registrationId":"reWfKIgPYD","registrationDate":"2020-06-16T10:02:25+02:00","lastUpdate":"2020-06-16T10:34:13+02:00","address":"127.0.0.1:44400","lwM2mVersion":"1.0","lifetime":300,"bindingMode":"U","rootPath":"/","objectLinks":[{"url":"/","attributes":{"rt":"\\"oma.lwm2m\\""}},{"url":"/1/0","attributes":{}},{"url":"/3/0","attributes":{}},{"url":"/6/0","attributes":{}},{"url":"/3303/0","attributes":{}}],"secure":false,"additionalRegistrationAttributes":{}},"update":{"registrationId":"reWfKIgPYD","identity":{"peerAddress":{}},"additionalAttributes":{}}}')
(u'COAPLOG', u'{"timestamp":1592296455150,"incoming":true,"type":"NON","code":"2.05","mId":29887,"token":"3998C5DE2588F835","options":"Content-Format: \\"application/vnd.oma.lwm2m+tlv\\" - Observe: 2979","payload":"Hex:e3164563656ce8164408c03199999999999ae815e108c032e66666666666e815e208403f333333333333","ep":"my_device"}')
If you are interested in notifications only, just add an if msg.event == 'NOTIFICATION': check, as in the sketch below.
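A minimal sketch of that filter (same illustrative endpoint name and URL as above; the 'res' and 'val' keys match the NOTIFICATION payload shown in the output):

import json
from sseclient import SSEClient

messages = SSEClient('http://localhost:8080/event?ep=my_device')
for msg in messages:
    if msg.event == 'NOTIFICATION':
        payload = json.loads(msg.data)
        print(payload['res'], payload['val'])  # e.g. /3303/0 and its resource values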

How to force all calls to Python's requests.get to use a proxy by default?

I am using a third-party library in my code to get an access token (ADAL). This library has a lot of calls to requests.get and requests.post. How can I force all the calls to use user-provided proxies without having to modify each call to requests.get('http://example.com', proxies=proxies)?
I cannot do export HTTP_PROXY; I have to do it from within my script.
You could monkey patch requests.
At the very start of your script:
import requests
import functools
orig_get = requests.get
proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}
requests.get = functools.partial(orig_get, proxies=proxies)
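Since the library also calls requests.post, the same patch can be applied to that function; a sketch extending the answer above (with an alternative worth testing: requests consults the proxy environment variables at request time, so they can be set from inside the script instead of via export):

orig_post = requests.post
requests.post = functools.partial(orig_post, proxies=proxies)

# Alternative sketch: set the proxy environment variables from within the script
# before the library makes any calls (requests reads them per request unless a
# Session is created with trust_env=False).
import os
os.environ['HTTP_PROXY'] = proxies['http']
os.environ['HTTPS_PROXY'] = proxies['https']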

Set a proxy to hide my IP address when scraping a webpage using Scrapy

I am using Scrapy to crawl a website, and now I need to set a proxy to handle the requests being sent. Can anyone help me set a proxy in a Scrapy app? Please share a sample link too if you have one. I also need a way to find out which IP the requests are going out from.
You can do it with the code below, found here:
1 – Create a new file called middlewares.py in your Scrapy project and add the following code to it.
# Importing the base64 library because we'll need it ONLY
# in case the proxy we are going to use requires authentication
import base64


# Start your middleware class
class ProxyMiddleware(object):
    # overwrite process_request
    def process_request(self, request, spider):
        # Set the location of the proxy
        request.meta['proxy'] = "http://YOUR_PROXY_IP:PORT"

        # Use the following lines if your proxy requires authentication
        proxy_user_pass = "USERNAME:PASSWORD"
        # set up basic authentication for the proxy
        # (base64.encodestring was removed in Python 3; b64encode works in both 2 and 3)
        encoded_user_pass = base64.b64encode(proxy_user_pass.encode()).decode()
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
2 – Open your project’s configuration file (./project_name/settings.py) and add the following code
DOWNLOADER_MIDDLEWARES = {
    # on recent Scrapy versions the built-in middleware lives at
    # 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware'
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'project_name.middlewares.ProxyMiddleware': 100,
}
Also, you can use multiple proxies with Scrapy; more information can be found here.
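To check which IP the requests actually go out from (the other part of the question), here is a small sketch of a spider that fetches https://httpbin.org/ip through a proxy set in request.meta; the proxy address is a placeholder.

import json
import scrapy


class IpCheckSpider(scrapy.Spider):
    name = "ipcheck"

    def start_requests(self):
        # per-request meta also works without a custom middleware;
        # the proxy URL here is a placeholder
        yield scrapy.Request(
            "https://httpbin.org/ip",
            meta={"proxy": "http://YOUR_PROXY_IP:PORT"},
        )

    def parse(self, response):
        # httpbin echoes back the origin IP it saw
        self.logger.info("Outgoing IP: %s", json.loads(response.text)["origin"])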
