I want to stream gRPC stream data to a pub/sub system (NATS.io pub/sub), but I am confused about how to achieve it, as the client needs to send at least one request to the server to start the streaming. How can I achieve that?
One thing I can think of is to publish the data from the gRPC client to the pub/sub server:
gRPC Server ---> gRPC Client ---> publish to the pub/sub
But would it be a good approach?
I know it's an open-ended question, but any suggestions would be welcome.
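For reference, here is a minimal Kotlin sketch of the flow I have in mind, assuming a hypothetical server-streaming RPC (the EventServiceGrpc / SubscribeRequest / Event classes stand in for whatever your proto generates) and the jnats client: the client sends the single request that starts the stream, then republishes every message it receives to a NATS subject.

```kotlin
import io.grpc.ManagedChannelBuilder
import io.nats.client.Nats

// Hypothetical generated classes from a proto such as:
//   service EventService { rpc Subscribe (SubscribeRequest) returns (stream Event); }
fun main() {
    val channel = ManagedChannelBuilder.forAddress("localhost", 50051)
        .usePlaintext()
        .build()
    val stub = EventServiceGrpc.newBlockingStub(channel)

    // The one request the client must send to start the server-side stream
    val request = SubscribeRequest.newBuilder().build()

    val nats = Nats.connect("nats://localhost:4222")

    // Consume the gRPC stream and republish each message to NATS
    stub.subscribe(request).forEach { event ->
        nats.publish("grpc.events", event.toByteArray())
    }

    nats.close()
    channel.shutdown()
}
```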
I am planning to use the vaultTrack method to track changes in a state object. Once I capture the events at the client level, I am planning to store that data in an offline DB or invoke another API. Will there be any challenge in this implementation? As per my understanding, the RPC client library will be listening all the time for state changes and will also handle the incoming RPC calls from external parties. Will it slow down the performance? How exactly does the vaultTrack method work internally?
Hi, I don't see any challenge in your implementation.
In Corda we use Apache Artemis for RPC communication. The Corda-RPC library must be included on the client side in order to connect to the server.
Internally, this works like this:
At startup, Artemis instances are created on the RPC client (client side) and the RPC server (within the Corda node), client and server queues are created, and sessions are established between client and server. The Corda-RPC library contains a client proxy, which translates RPC client calls into low-level Artemis messages and sends them to the server's Artemis instance. These RPC requests are stored on the server side in Artemis queues. The server-side consumer retrieves these messages, the appropriate RPC calls are made, and an acknowledgement is sent to the client. Once the method completes, a reply is sent back to the client: the reply is wrapped in an Artemis message and sent by the server's Artemis instance to the client's Artemis instance. The client then consumes the reply from the client Artemis queue.
The client proxy within the Corda-RPC library abstracts the above process. From a client perspective, you only need to create the proxy instance and make the RPC calls.
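For example, creating the proxy and tracking the vault looks roughly like this in Kotlin (the host, port and credentials below are placeholders):

```kotlin
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.contracts.ContractState
import net.corda.core.utilities.NetworkHostAndPort

fun main() {
    // Connect to the node's RPC address and obtain the client proxy
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    val connection = client.start("rpcUser", "rpcPassword")
    val proxy = connection.proxy

    // vaultTrack returns a snapshot of the current states plus an Observable of future updates
    val feed = proxy.vaultTrack(ContractState::class.java)
    println("States currently in the vault: ${feed.snapshot.states.size}")

    feed.updates.subscribe { update ->
        // Fired on the RPC client whenever the vault changes;
        // persist to your offline DB or call your other API from here
        update.produced.forEach { stateAndRef ->
            println("New state recorded: ${stateAndRef.state.data}")
        }
    }

    // Keep the process alive so the subscription keeps receiving updates
    Thread.currentThread().join()
}
```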
I would urge you to use the Reconnecting Client. You can read more about this in a blog post I have written.
Also, please read the last part of the post, which covers how to handle reconnection/failover scenarios.
If I have Slack as a desktop app running on my local machine, is there a way for me to send it a message from another locally running process?
My goal is to use the regular Slack API to ping channels, etc., but instead of using a standard integration, I would do it from another local process. Maybe Slack is listening on localhost?
If the above concept doesn't work, is the only other way to do a Slack integration to send a payload to Slack's servers?
Note: I said "IPC" in the question, but most likely it would be HTTP/TCP to send a message from my process to the Slack process on my machine.
No. You cannot send your locally running Slack client a "direct" message from another locally running app. A Slack client is just a viewer for a Slack workspace that is running in the cloud. It does not listen for local IPC messages.
There are many ways to send a message with the Slack API. I would suggest starting with incoming webhooks. This only requires you to send an HTTP POST request to a URL provided by Slack.
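For example, a minimal Kotlin sketch using the JDK HTTP client; the webhook URL is a placeholder for the one Slack gives you when you create the incoming webhook:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Placeholder: Slack provides the real URL when you add an incoming webhook
    val webhookUrl = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

    val request = HttpRequest.newBuilder(URI.create(webhookUrl))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("""{"text": "Hello from a local process"}"""))
        .build()

    // Slack answers with HTTP 200 and the body "ok" when the message was accepted
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println("Slack responded with HTTP ${response.statusCode()}: ${response.body()}")
}
```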
Is it possible to send messages from the ROSbridge server to a connected client? I've connected an Android application using TCP, and I am able to send JSON messages from the app to the server. But is it possible to send messages in the other direction as well?
Thanks
As in the tutorials, you create a WebSocket by starting rosbridge. You would have to consume that socket. I'm not sure whether rosjava can consume it for your Android app.
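For example, here is a rough Kotlin sketch using the JDK WebSocket client, assuming the default rosbridge WebSocket transport on port 9090 and a /chatter topic. Once you send a subscribe operation over the socket, rosbridge pushes publish operations for that topic back over the same connection, i.e. messages do flow server → client.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.WebSocket
import java.util.concurrent.CompletionStage

fun main() {
    val listener = object : WebSocket.Listener {
        override fun onText(webSocket: WebSocket, data: CharSequence, last: Boolean): CompletionStage<*>? {
            // rosbridge pushes {"op":"publish","topic":"/chatter","msg":{...}} frames here
            println("Message from rosbridge: $data")
            webSocket.request(1)
            return null
        }
    }

    val ws = HttpClient.newHttpClient()
        .newWebSocketBuilder()
        .buildAsync(URI.create("ws://192.168.0.10:9090"), listener)   // rosbridge host is a placeholder
        .join()

    // Ask rosbridge to forward everything published on /chatter to this socket
    ws.sendText("""{"op": "subscribe", "topic": "/chatter", "type": "std_msgs/String"}""", true)

    Thread.sleep(60_000)   // keep the process alive long enough to see incoming messages
}
```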
In the SignalR (server) hub I want to do a license check. If the check is negative, I want to block the connection in the hub's OnConnected. The client should see the hub's start Task come back as canceled, with a message (no valid licence).
When I return a Task with an AggregateException in the OnConnected of the SignalR hub, the client ends up in a faulted state with a timeout exception.
How can I block the connection to the SignalR hub and give the client a message explaining why I have blocked the connection?
As far as I know, you can't just start or stop connections from the server; the client has to disconnect itself. If you want to use the hub for the licence check, you need to have the client connect, send the licence info, and let the server check it; if it is invalid, call $client.disconnect on the client.
The other option, as blorkfish mentions, is to allow them to connect, add them to a list, and check that list when they call methods on the server.
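As a rough sketch of that flow from the client side (Kotlin with the official SignalR Java client; the hub URL, the CheckLicense hub method and the LicenseRejected message are assumptions, not part of SignalR itself): the client connects, sends the licence for validation, and disconnects itself if the server says it is invalid.

```kotlin
import com.microsoft.signalr.HubConnectionBuilder

fun main() {
    val hubConnection = HubConnectionBuilder
        .create("http://localhost:5000/licenseHub")   // hypothetical hub URL
        .build()

    // The server can send this specific message instead of faulting the connection
    hubConnection.on("LicenseRejected", { reason: String ->
        println("Server rejected the licence: $reason")
    }, String::class.java)

    hubConnection.start().blockingAwait()

    // Ask the server to validate the licence ("CheckLicense" is a hypothetical hub method)
    val valid = hubConnection
        .invoke(Boolean::class.javaObjectType, "CheckLicense", "MY-LICENSE-KEY")
        .blockingGet()

    if (!valid) {
        // The server can't force-close the connection, so the client disconnects itself
        hubConnection.stop().blockingAwait()
    }
}
```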
I don't think that you should block the connection with an Exception. Your client would then not be able to tell if there was a genuine error in the SignalR connection.
Rather, send back a specific SignalR message saying that there is no license, and then manage the connection object on the server side.
Keep a list of licensed connections, and a list of unlicensed connections.
So instead of using Clients.All to broadcast, use Clients.Client("< client_connection_id >") to send only to specific connections.
Hope this helps.
We have a requirement wherein the server needs to push data to various clients, so we went ahead with SSE (Server-Sent Events). I went through the documentation but am still not clear on the concept. I have the following queries:
Scenario 1: Suppose there are 10 clients. All 10 clients send the initial request to the server, and 10 connections are established. When data arrives at the server, a message is pushed from the server to the clients.
Query 1: Will the server maintain the IP addresses of all the clients? If yes, is there an API to check it?
Query 2: What will happen if all 10 client windows are closed? Will the server abort all the connections after a period of time?
Query 3: What will happen if the server is unable to send messages to a client because the client is unavailable (e.g. the machine has shut down)? Will the server abort, after a period of time, the connections for those clients to whom it is unable to send messages?
Please clarify.
This depends on how you implement the server.
If you are using PHP as an Apache module, then each SSE connection creates a new PHP instance running in memory. Each "server" is only serving one client at a time. Q1: yes, but that is not your problem: you just echo messages to stdout. Q2/Q3: if the client closes the connection, for any reason, the PHP process will shut down when it detects this.
If you are using a multi-threaded server, e.g. the http module in node.js: Q1: the client IP is part of the socket abstraction, and you just send messages to the response object. Q2/Q3: as each client connection closes its socket, the request process that was handling it will end. Once all 10 have closed, your server will still be running, but it won't be sending data to any clients.
One key idea to realize with SSE is that each client has a dedicated socket. It is not a broadcast protocol where you push out one message and all clients get exactly the same message. Instead, you have to send the data to each client individually. But that also means you are free to send customized data to each client.
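To make that concrete, here is a minimal Kotlin sketch of an SSE endpoint using the JDK's built-in HttpServer (the port and payload are placeholders). Each connected client gets its own handler thread and its own response stream, which also answers the three queries: the connection itself knows the client's address, and the per-client loop ends as soon as a write to that client fails.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.util.concurrent.Executors

fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)

    // One handler thread per connection: each client gets its own dedicated stream
    server.executor = Executors.newCachedThreadPool()

    server.createContext("/events") { exchange ->
        // Query 1: the client's address is available on the connection itself
        println("SSE client connected from ${exchange.remoteAddress}")

        exchange.responseHeaders.add("Content-Type", "text/event-stream")
        exchange.responseHeaders.add("Cache-Control", "no-cache")
        exchange.sendResponseHeaders(200, 0)   // length 0 = keep the response open and stream

        val out = exchange.responseBody
        try {
            while (true) {
                // Each frame goes to this one client only; other clients have their own loop
                out.write("data: ${System.currentTimeMillis()}\n\n".toByteArray())
                out.flush()
                Thread.sleep(2000)
            }
        } catch (e: Exception) {
            // Queries 2/3: a closed window or unreachable machine makes the write fail,
            // and this client's connection is torn down here
            println("Client ${exchange.remoteAddress} is gone: ${e.message}")
        } finally {
            exchange.close()
        }
    }

    server.start()
}
```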