Secure websocket connection created with QWebSocket is refused - qt

I have a problem with a QWebSocket connection in C++:
QWebSocket *mWebSocket = new QWebSocket();
connect(mWebSocket, SIGNAL(connected()), this, SLOT(connected()));
connect(mWebSocket, SIGNAL(disconnected()), this, SLOT(disconnected()));
connect(mWebSocket, SIGNAL(error(QAbstractSocket::SocketError)), this, SLOT(error(QAbstractSocket::SocketError)));
QNetworkRequest lRequest(QUrl("wss://gateway-predix-data-services.run.aws-usw02-pr.ice.predix.io/v1/stream/messages"));
lRequest.setRawHeader("Predix-Zone-Id", <my unique id>);
lRequest.setRawHeader("Authorization", <some token>);
mWebSocket->open(lRequest);
I get three errors and then a disconnect; the connected() slot is never called:
called slot: error
QAbstractSocket::RemoteHostClosedError
called slot: error
QAbstractSocket::ConnectionRefusedError
called slot: error
QAbstractSocket::RemoteHostClosedError
called slot: disconnected
When I make a small typo in my token (to test whether authentication is the problem), I start receiving only the QAbstractSocket::ConnectionRefusedError error.
The most interesting part is that I have implemented the websocket connection in Python and it works very well, so the problem should not be on the websocket server side or in the request header setup:
import websocket
import thread
import time

def on_message(ws, message):
    print(message)

def on_error(ws, error):
    print(error)

def on_close(ws):
    print("### closed ###")

def on_open(ws):
    def run(*args):
        for i in range(3):
            time.sleep(1)
            #ws.send('{messageId: 1453338376222,body: [{name: Compressor-2017:CompressionRatio,datapoints: [[1453338376222,10,3],[1453338377222,10,1]],attributes: {host: server1,customer: Acme}}]}')
        time.sleep(1)
        #ws.close()
        print("thread terminating...")
    thread.start_new_thread(run, ())

if __name__ == "__main__":
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp("wss://gateway-predix-data-services.run.aws-usw02-pr.ice.predix.io/v1/stream/messages",
                                on_message = on_message,
                                on_error = on_error,
                                on_close = on_close,
                                header = {'Predix-Zone-Id:my unique id', 'Authorization:token'})
    ws.on_open = on_open
    ws.run_forever()
This websocket connection is part of my C++ SDK, so I need it implemented in C++. Do you have any ideas what I have missed in my C++ code?


Trace failed fastapi requests with opencensus

I'm using opencensus-python to track requests to my Python FastAPI application running in production, and exporting the information to Azure AppInsights using the opencensus exporters. I followed the Azure Monitor docs and was helped out by this issue post, which puts all the necessary bits in a useful middleware class.
I only realized later that requests which caused the app to crash, i.e. unhandled 5xx-type errors, would never be tracked, since the call that executes the request logic fails before any tracing happens. The Azure Monitor docs only talk about tracking exceptions through the logs, but this is separate from the tracing of requests, unless I'm missing something. I certainly wouldn't want to lose failed requests; these are super important to track! I'm accustomed to using the "Failures" tab in App Insights to monitor any failing requests.
I figured the way to track these requests is to explicitly handle any internal exceptions using try/catch and export the trace, manually setting the result code to 500. But I found it really odd that there seems to be no documentation of this, on opencensus or Azure.
The problem I have now is: this middleware function is expected to pass back a "response" object, which fastapi then uses as a callable object down the line (not sure why) - but in the case where I caught an exception in the underlying processing (i.e. at await call_next(request)) I don't have any response to return. I tried returning None but this just causes further exceptions down the line (None is not callable).
Here is my version of the middleware class. It's very similar to the issue post I linked, but I'm try/catching around await call_next(request) rather than just letting it fail unhandled. Scroll down to the final few lines of code to see that.
import logging

from fastapi import Request
from opencensus.trace import (
    attributes_helper,
    execution_context,
    samplers,
)
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace import span as span_module
from opencensus.trace import tracer as tracer_module
from opencensus.trace import utils
from opencensus.trace.propagation import trace_context_http_header_format
from opencensus.ext.azure.log_exporter import AzureLogHandler
from starlette.types import ASGIApp

from src.settings import settings

HTTP_HOST = attributes_helper.COMMON_ATTRIBUTES["HTTP_HOST"]
HTTP_METHOD = attributes_helper.COMMON_ATTRIBUTES["HTTP_METHOD"]
HTTP_PATH = attributes_helper.COMMON_ATTRIBUTES["HTTP_PATH"]
HTTP_ROUTE = attributes_helper.COMMON_ATTRIBUTES["HTTP_ROUTE"]
HTTP_URL = attributes_helper.COMMON_ATTRIBUTES["HTTP_URL"]
HTTP_STATUS_CODE = attributes_helper.COMMON_ATTRIBUTES["HTTP_STATUS_CODE"]

module_logger = logging.getLogger(__name__)
module_logger.addHandler(AzureLogHandler(
    connection_string=settings.appinsights_connection_string
))

class AppInsightsMiddleware:
    """
    Middleware class to handle tracing of fastapi requests and exporting the data to AppInsights.
    Most of the code here is copied from a github issue: https://github.com/census-instrumentation/opencensus-python/issues/1020
    """

    def __init__(
        self,
        app: ASGIApp,
        excludelist_paths=None,
        excludelist_hostnames=None,
        sampler=None,
        exporter=None,
        propagator=None,
    ) -> None:
        self.app = app
        self.excludelist_paths = excludelist_paths
        self.excludelist_hostnames = excludelist_hostnames
        self.sampler = sampler or samplers.AlwaysOnSampler()
        self.propagator = (
            propagator or trace_context_http_header_format.TraceContextPropagator()
        )
        self.exporter = exporter or AzureExporter(
            connection_string=settings.appinsights_connection_string
        )

    async def __call__(self, request: Request, call_next):
        # Do not trace if the url is in the exclude list
        if utils.disable_tracing_url(str(request.url), self.excludelist_paths):
            return await call_next(request)
        try:
            span_context = self.propagator.from_headers(request.headers)
            tracer = tracer_module.Tracer(
                span_context=span_context,
                sampler=self.sampler,
                exporter=self.exporter,
                propagator=self.propagator,
            )
        except Exception:
            module_logger.error("Failed to trace request", exc_info=True)
            return await call_next(request)
        try:
            span = tracer.start_span()
            span.span_kind = span_module.SpanKind.SERVER
            span.name = "[{}]{}".format(request.method, request.url)
            tracer.add_attribute_to_current_span(HTTP_HOST, request.url.hostname)
            tracer.add_attribute_to_current_span(HTTP_METHOD, request.method)
            tracer.add_attribute_to_current_span(HTTP_PATH, request.url.path)
            tracer.add_attribute_to_current_span(HTTP_URL, str(request.url))
            execution_context.set_opencensus_attr(
                "excludelist_hostnames", self.excludelist_hostnames
            )
        except Exception:  # pragma: NO COVER
            module_logger.error("Failed to trace request", exc_info=True)
        try:
            response = await call_next(request)
            tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
            tracer.end_span()
            return response
        # Explicitly handle any internal exception here, and set status code to 500
        except Exception as exception:
            module_logger.exception(exception)
            tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, 500)
            tracer.end_span()
            return None
I then register this middleware class in main.py like so:
app.middleware("http")(AppInsightsMiddleware(app, sampler=samplers.AlwaysOnSampler()))
Explicitly handle any exception that may occur while processing the API request. That allows you to finish tracing the request, setting the status code to 500. You can then re-raise the exception so that the application still raises it as expected.
try:
    response = await call_next(request)
    tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
    tracer.end_span()
    return response
# Explicitly handle any internal exception here, and set status code to 500
except Exception as exception:
    module_logger.exception(exception)
    tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, 500)
    tracer.end_span()
    raise exception
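Stripped of the opencensus and FastAPI specifics, the pattern in that answer is "record the outcome, then re-raise so the framework's own error handling still runs". A minimal sketch with hypothetical names (end_span and traced_call stand in for the tracer calls and the middleware body):

```python
recorded_status = []

def end_span(status_code):
    # Hypothetical stand-in for setting HTTP_STATUS_CODE and tracer.end_span()
    recorded_status.append(status_code)

def traced_call(request, call_next):
    try:
        response = call_next(request)
        end_span(200)
        return response
    except Exception:
        end_span(500)   # the failure is traced before it propagates
        raise           # the framework still sees the original exception

def crashing_handler(request):
    raise RuntimeError("boom")

# The trace records 500 and the exception still surfaces to the caller.
try:
    traced_call("req", crashing_handler)
except RuntimeError:
    pass
print(recorded_status)   # [500]
```

The bare `raise` (or `raise exception`) is the key design choice: returning None instead, as in the question, hands the ASGI stack a non-callable object and triggers the secondary errors described above.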

Waiting for specific message in gRPC stream of messages in Gatling load scripts

What I have: gRPC endpoint which sends a stream of message updates in response.
What I want to test: I want to create a load test that opens a stream and keeps receiving messages until a message with a specific field is captured (within a defined timeout), without an explicit call to the pause method and a fixed wait.
What I get from Gatling-gRPC:
After an explicit call to the pause() method, the reconciliation method returns the last captured message.
Example server: here
Protofile: here
Test code:
package load.api

import com.github.phisgr.gatling.generic.SessionCombiner
import com.github.phisgr.gatling.grpc.Predef._
import example.myapp.helloworld.grpc.hello_world.{GreeterServiceGrpc, HelloRequest}
import io.gatling.core.Predef._
import io.grpc.{CallOptions, Status}

import java.util.concurrent.TimeUnit
import scala.concurrent.duration.DurationInt
import scala.language.postfixOps

class StreamSim extends Simulation {

  val greetCall = grpc("Greet")
    .serverStream("greeter")

  val scn = scenario(s"Greet Flow")
    .exec(greetCall.start(GreeterServiceGrpc.METHOD_IT_KEEPS_REPLYING)
      (HelloRequest("THIS IS TEST MESSAGE"))
      .extract(_.some.get.message.some)(_ saveAs "message")
      .sessionCombiner(SessionCombiner.pick("message"))
      .endCheck(statusCode is Status.Code.OK)
    )
    .pause(10 seconds)
    .exec(greetCall.copy(requestName = "Reconciliate greet").reconciliate)
    .exec { session =>
      println(s"%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%> Last character of the message: ${session.attributes("message")}")
      session
    }

  setUp(scn.inject(rampUsers(1) during (10 seconds))
    .protocols(grpc(managedChannelBuilder(name = "127.0.0.1", port = 8080).usePlaintext()).shareChannel.disableWarmUp))
}

Python async function returning coroutine object

I am running a Python program to listen to Azure IoT Hub. The function returns a coroutine object instead of JSON. I read that this happens when an async function is called like a normal function, but I created an event loop and used run_until_complete. What am I missing here?
import asyncio

# Import added for completeness; the async client lives in azure.iot.device.aio
from azure.iot.device.aio import IoTHubModuleClient

async def main():
    try:
        client = IoTHubModuleClient.create_from_connection_string(connection_string)
        print(client)
        client.connect()
        while True:
            message = client.receive_message_on_input("input1")  # blocking call
            print("Message received on input1")
            print(type(message))
            print(message)
    except KeyboardInterrupt:
        print("IoTHubClient sample stopped")
    except:
        print("Unexpected error from IoTHub")
        return

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()
OUTPUT-
Message received on input1
<class 'coroutine'>
<coroutine object receive_message_on_input at 0x7f1439fe3a98>
Long story short: you just have to write await client.receive_message_on_input("input1"). Your main is a coroutine, but receive_message_on_input is a coroutine as well; you must await it for it to complete.
I could tell you the story, but it's too long really. :)
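The effect is easy to reproduce with plain asyncio; receive_message here is a hypothetical stand-in for client.receive_message_on_input("input1"):

```python
import asyncio

async def receive_message():
    # Hypothetical stand-in for the IoT Hub client's coroutine method
    await asyncio.sleep(0)
    return {"body": "hello"}

async def main():
    not_awaited = receive_message()    # calling a coroutine function returns
    print(type(not_awaited))           # a coroutine object: <class 'coroutine'>
    not_awaited.close()                # silence the "never awaited" warning

    message = await receive_message()  # awaiting it produces the actual value
    print(message)                     # {'body': 'hello'}

asyncio.run(main())
```

The same applies inside main in the question: every coroutine method of the async client (connect, receive_message_on_input) needs an await in front of it.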

How to disable "check_hostname" using Requests library and Python 3.8.5?

Using the latest Requests library and Python 3.8.5, I can't seem to disable certificate checking on my API call. I understand the reasons not to disable it, but I'd like this to work.
When I attempt to use verify=True, the servers I connect to throw this error:
(Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))
When I attempt to use verify=False, I get:
Error making PS request to [<redacted server name>] at URL https://<redacted server name/rest/v2/api_endpoint: Cannot set verify_mode to CERT_NONE when check_hostname is enabled.
I don't know how to also disable "check_hostname" as I haven't seen a way to do that with the requests library (which I plan to keep and use).
My code:
self.ps_server = server
self.ps_base_url = 'https://{}/rest/v2/'.format(self.ps_server)
url = self.ps_base_url + endpoint
response = None
try:
    if req_type == 'POST':
        response = requests.post(url, json=post_data, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        return json.loads(response.text)
    elif req_type == 'GET':
        response = requests.get(url, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        if response.status_code == 200:
            return json.loads(response.text)
        else:
            logging.error("Error making PS request to [{}] at URL {} [{}]".format(server, url, response.status_code))
            return {'status': 'error', 'trace': '{} - {}'.format(response.text, response.status_code)}
    elif req_type == 'DELETE':
        response = requests.delete(url, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        return response.text
    elif req_type == 'PUT':
        response = requests.put(url, json=post_data, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        return response.text
except Exception as e:
    logging.error("Error making PS request to [{}] at URL {}: {}".format(server, url, e))
    return {'status': 'error', 'trace': '{}'.format(e)}
Can someone shed some light on how I can disable check_hostname as well, so that I can test this without SSL checking?
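For context on where the second error comes from: in Python's ssl module, check_hostname must be turned off before verify_mode can be set to CERT_NONE; doing it in the wrong order raises exactly the ValueError quoted above. A minimal stdlib sketch:

```python
import ssl

ctx = ssl.create_default_context()
try:
    # Rejected while check_hostname is still True; this raises the
    # "Cannot set verify_mode to CERT_NONE when check_hostname is
    # enabled" ValueError from the question.
    ctx.verify_mode = ssl.CERT_NONE
except ValueError as err:
    print(err)

ctx.check_hostname = False        # must be cleared first
ctx.verify_mode = ssl.CERT_NONE   # now accepted
```

An SSLContext prepared this way can be handed to requests by overriding init_poolmanager in an HTTPAdapter subclass, the same hook the pip-system-certs snippet below patches.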
If you have pip-system-certs, it monkey-patches requests as well. Here's a link to the code: https://gitlab.com/alelec/pip-system-certs/-/blob/master/pip_system_certs/wrapt_requests.py
After digging through the requests and urllib3 source for a while, this is the culprit in pip-system-certs:
ssl_context = ssl.create_default_context()
ssl_context.load_default_certs()
kwargs['ssl_context'] = ssl_context
That dict entry is used later to grab an ssl_context from a urllib3 connection pool, but the context has .check_hostname set to True.
As far as replacing the utility of the pip-system-certs package goes, I think forking it and making it monkey-patch only pip would be the right way forward. That, or just adding --trusted-host args to any pip install commands.
EDIT:
Here's how it's normally initialized through requests (versions I'm using):
https://github.com/psf/requests/blob/v2.21.0/requests/adapters.py#L163
def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs):
    """Initializes a urllib3 PoolManager.

    This method should not be called from user code, and is only
    exposed for use when subclassing the
    :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

    :param connections: The number of urllib3 connection pools to cache.
    :param maxsize: The maximum number of connections to save in the pool.
    :param block: Block when no free connections are available.
    :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager.
    """
    # save these values for pickling
    self._pool_connections = connections
    self._pool_maxsize = maxsize
    self._pool_block = block

    # NOTE: pool_kwargs doesn't have ssl_context in it
    self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                   block=block, strict=True, **pool_kwargs)
And here's how it's monkey-patched:
def init_poolmanager(self, *args, **kwargs):
    import ssl
    ssl_context = ssl.create_default_context()
    ssl_context.load_default_certs()
    kwargs['ssl_context'] = ssl_context
    return super(SslContextHttpAdapter, self).init_poolmanager(*args, **kwargs)

Ocaml / Async socket issue

I am quite new to OCaml and I am working on a small TCP client utility, using Async/Core.
The connection is opened using
Tcp.with_connection (Tcp.Where_to_connect.of_host_and_port { host = "localhost"; port = myPort })
I need to be able to accept keyboard input as well as read input from the socket. I use Deferred.any for this purpose.
Calling Reader.read reader buf on the socket results in `Eof, which is OK, but when the method (containing the Deferred.any code) is called recursively, I get an exception:
"unhandled exception in Async scheduler"
("unhandled exception"
((monitor.ml.Error
("can not read from reader" (reason "in use")
.....
Reader.is_closed on the reader returns false.
How can I "monitor" the socket recursively without this exception?
Michael
