I understand that gRPC is designed for client-server architecture. A server provides remote services and clients obtain those services by calling the defined RPCs. But is it possible for a client to also define a service, so that other clients can request services from that client too?
For example, a server knows every client's location and can inform other clients about it. A client, upon receiving the other clients' locations from the server, could then directly call the services those clients provide.
Can gRPC do that? Thank you!
Yes, this is possible.
The terms "client" and "server" are overloaded in this context and are better thought of as (stub) caller and (implementation) receiver. It's possible for the client and server to be the same process, but then you don't need the complexity of gRPC.
There's no prohibition on some entity functioning as both a caller ("client") and a receiver ("server"). This situation arises commonly in peer-to-peer networks and in microservices, where some original client calls a service which (acting as a client) then calls various other services.
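For illustration, here's a minimal Java sketch of a peer doing both at once. `LocationServiceGrpc` and `LocationServiceImpl` are hypothetical names standing in for classes generated from a proto file you would define; only the `io.grpc` server/channel plumbing is real gRPC API.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Server;
import io.grpc.ServerBuilder;

// Hypothetical peer process: hosts its own gRPC service *and* calls other peers.
// LocationServiceGrpc / LocationServiceImpl stand in for classes generated from
// a proto file you would define; they are not part of gRPC itself.
public class Peer {
    public static void main(String[] args) throws Exception {
        // Act as a "server": expose this peer's own service implementation.
        Server server = ServerBuilder.forPort(50051)
                .addService(new LocationServiceImpl())
                .build()
                .start();

        // Act as a "client": open a channel to another peer (address learned
        // from the coordinating server) and call its RPCs through a stub.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("other-peer.example", 50051)
                .usePlaintext()
                .build();
        LocationServiceGrpc.LocationServiceBlockingStub stub =
                LocationServiceGrpc.newBlockingStub(channel);
        // stub.reportLocation(request);  // hypothetical RPC on the other peer

        server.awaitTermination();
    }
}
```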
I am planning to use the vaultTrack method to track changes in a state object. Once I capture the events at the client level, I plan to store that data in an offline DB or invoke another API. Will there be any challenges in this implementation? As I understand it, the RPC client library will be listening for state changes all the time while also handling incoming RPC calls from external parties. Will this slow down performance? How exactly does the vaultTrack method work internally?
Hi, I don't see any challenge in your implementation.
In Corda we use Apache Artemis for RPC communication. The Corda-RPC library must be included on the client side in order to connect to the server.
Internally, it works like this:
At startup, an Artemis instance is created on both the RPC client (client side) and the RPC server (within the Corda node); client and server queues are created, and sessions are established between them. The Corda-RPC library contains a client proxy, which translates RPC client calls into low-level Artemis messages and sends them to the server's Artemis instance. These RPC requests are stored on the server side in Artemis queues. The server-side consumer retrieves these messages, makes the appropriate RPC calls, and sends an acknowledgement to the client. Once the method completes, a reply is sent back to the client: the reply is wrapped in an Artemis message, sent across by the server-side Artemis to the client-side Artemis, and the client then consumes the reply from its Artemis queue.
The client proxy within the Corda-RPC library abstracts all of the above. From a client perspective, you only need to create the proxy instance and make RPC calls.
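As a rough sketch of that pattern (host, port, and credentials are placeholders for your node's RPC settings), the Java client code looks something like this:

```java
import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.contracts.ContractState;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.messaging.DataFeed;
import net.corda.core.node.services.Vault;
import net.corda.core.utilities.NetworkHostAndPort;

public class VaultTrackExample {
    public static void main(String[] args) {
        // Host, port and credentials are placeholders for your node's RPC settings.
        CordaRPCClient client = new CordaRPCClient(new NetworkHostAndPort("localhost", 10006));
        CordaRPCConnection connection = client.start("rpcUser", "rpcPassword");
        CordaRPCOps proxy = connection.getProxy();

        // vaultTrack returns a snapshot of the vault plus an Observable of future updates.
        DataFeed<Vault.Page<ContractState>, Vault.Update<ContractState>> feed =
                proxy.vaultTrack(ContractState.class);

        feed.getUpdates().subscribe(update ->
                // This is where you would write to your offline DB or call another API.
                update.getProduced().forEach(stateAndRef ->
                        System.out.println("New state: " + stateAndRef.getState())));
    }
}
```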
I would urge you to use the Reconnecting Client. You can read more about this in a blog post I have written.
Please also read the last part of the post, which discusses how to handle reconnection/failover scenarios.
On one server, I have two web applications. One of them is a Web API, and the other is a SignalR app. Both are hosted in IIS, under two different application pools.
What is the best way to communicate between those two web applications? Is using either SignalR or REST calls viable, for example?
You can go several ways:
1) A message queue system would work. Since your server is IIS, you can use MSMQ.
2) As an alternative to MSMQ, you can use RabbitMQ.
3) As you mentioned, you can use HTTP calls.
4) You already have SignalR, so you could use it for communication: write a Hub that both servers join.
Which option fits depends on your requirements. Backend servers mostly communicate through a message queue system, but HTTP calls are also acceptable.
The biggest difference between HTTP and a message queue is asynchrony. When an HTTP call tries to reach an endpoint, it waits for a response, and if the server is down you have to retry until it is up again. A message queue system, on the other hand, lets you fire and forget the data; the other side of the connection can pick it up whenever it is ready.
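To make the fire-and-forget point concrete, here's a minimal producer sketch. Your apps are .NET, where the RabbitMQ .NET client follows the same pattern; this sketch uses the RabbitMQ Java client purely for illustration, and the queue name and payload are made up.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class FireAndForget {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker address is a placeholder

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // A durable queue survives broker restarts; the other application
            // consumes from it whenever it is ready.
            channel.queueDeclare("app-events", true, false, false, null);
            channel.basicPublish("", "app-events", null,
                    "user-registered:42".getBytes(StandardCharsets.UTF_8));
            // No waiting on the other application: fire and forget.
        }
    }
}
```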
SignalR is too risky for this job: it does not guarantee message delivery, so a dropped connection can silently lose messages.
Short question: how can I host an MQTT broker on my remote Ubuntu 16 server while at the same time hosting an HTTP server that will use the MQTT data?
Real question: I want to build an IoT system that will be MONITORED and CONTROLLED through an ESP32, which will SEND FEEDBACK to and ACCEPT COMMANDS from a remote server (maybe LAMP?). I also want the user to log in to a website hosted on this remote server, where s/he can monitor sensor values or send commands (e.g. turning an LED on or off).
So what's the way to go here?
I was advised to go with MQTT, but then the above problem arose.
What I've found: using Mosquitto MQTT, I may be able to serve a website over WebSockets. But I would prefer a more scalable HTTPS approach; that is, I intend to have a database linked with my site and to run my PHP scripts.
I'm not that experienced, so please don't take anything for granted :)
MQTT uses a TCP connection and follows the publish/subscribe model, whereas the web (HTTP) follows the RESTful model (create, read, update, delete). If you want to stick with MQTT, you could use a SaaS offering such as HiveMQ's enterprise broker, which provides this integration; it charges a fee and in return gives you an account and a dashboard for all your devices. Otherwise, you can build your own middleware to integrate MQTT with web services.
Another thing I would recommend is CoAP, which is also an M2M protocol but follows the RESTful model over UDP. It has a direct forward proxy to convert CoAP packets to HTTP(S) packets and vice versa.
In MQTT you have a central server (the broker) to which the nodes send their data and from which they fetch the data they need through topic filters.
In CoAP, each device that has data to share becomes a server, and any device interested in that data becomes a client and sends a GET request to the respective server. Similarly, a PUT request with a payload from a client updates the value at the server.
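To make the MQTT side of that comparison concrete, here's a minimal subscriber sketch using the Eclipse Paho Java client. The broker URL and the `devices/+/temperature` topic layout are assumptions; the `+` wildcard is the kind of topic filter described above.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;

public class SensorSubscriber {
    public static void main(String[] args) throws Exception {
        // Broker URL and topic layout are assumptions for this sketch.
        MqttClient client = new MqttClient("tcp://localhost:1883", "backend-subscriber");
        client.connect();

        // '+' matches exactly one topic level, so this topic filter receives
        // the temperature of every device publishing under devices/<id>/temperature.
        client.subscribe("devices/+/temperature", (topic, message) ->
                System.out.println(topic + " -> " + new String(message.getPayload())));

        Thread.sleep(60_000); // keep the process alive long enough to receive messages
    }
}
```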
You really should not be looking to combine the MQTT broker with an HTTP server, especially if you intend the HTTP server to actually be an application server (running back-end logic, e.g. PHP). These are two totally separate systems, and there is nothing to stop your application logic from connecting to the broker as a client.
If you intend to use MQTT over WebSockets, you can use something like nginx to proxy the WebSockets connection to the broker, so it can sit behind the same logical HTTP/HTTPS address.
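As a sketch of that first point, here's back-end logic connecting to the broker as an ordinary client to publish a command. The question's back end is PHP; this uses the Paho Java client purely for illustration, and the topic and payload are invented.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class CommandPublisher {
    public static void main(String[] args) throws Exception {
        // The back end connects to the broker like any other MQTT client.
        MqttClient client = new MqttClient("tcp://localhost:1883", "web-backend");
        client.connect();

        MqttMessage cmd = new MqttMessage("on".getBytes());
        cmd.setQos(1); // at-least-once delivery for commands
        client.publish("devices/esp32-01/led", cmd); // topic and payload are made up

        client.disconnect();
    }
}
```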
I have a situation where messages are generated by an internal application, but the consumers for those messages are outside our enterprise network. Will either HTTP(S) transport or REST connectivity work in this scenario, with an HTTP reverse proxy in the DMZ? If not, is it safe to have a broker in the DMZ act as a gateway to outside consumers?
Well, the REST/HTTP approach to connecting to ActiveMQ is very limited, as it does not support true messaging semantics.
Exposing an ActiveMQ broker is no less secure than any other communication software, provided precautions are taken: TLS, default passwords changed, high-entropy passwords and/or mutual authentication used, recent patches applied, and the web console/Jolokia not exposed externally without safeguards.
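As a sketch of the TLS side of those precautions, here's a JMS client connecting to an exposed broker over SSL via `ActiveMQSslConnectionFactory`. The broker URL, trust store path, and credentials are placeholders.

```java
import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQSslConnectionFactory;

public class SecureClient {
    public static void main(String[] args) throws Exception {
        // Broker URL, trust store path and credentials are placeholders.
        ActiveMQSslConnectionFactory factory =
                new ActiveMQSslConnectionFactory("ssl://broker.example.com:61617");
        factory.setTrustStore("client.ts");            // validates the broker's certificate
        factory.setTrustStorePassword("changeit");
        factory.setUserName("appUser");
        factory.setPassword("a-long-random-password"); // no default passwords

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers on the session as usual ...
        connection.close();
    }
}
```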
In fact, you can buy managed ActiveMQ instances from Amazon, which suggests that at least they don't think it's such a bad idea to put them on the Internet.
I am working on an ASP.NET web-based peer-to-peer chat application. I am using UDP sockets for communication. As my application is P2P, I should avoid interactions with the server and let peers send and receive their messages directly.
Now my doubt is: where am I supposed to write the socket-related code? If I write it in controller classes, that code runs on the server side, right? Every time a user sends a message from the browser, it will call my controller class where my sockets are defined, which will then send the message to the remote peer. Will this kind of socket programming (sockets defined in controller classes) result in a peer-to-peer application?
In peer-to-peer communication, you do not pass any data via your server; the clients communicate with each other directly.
In web applications, true P2P is near impossible to achieve.
You could try to achieve something not entirely unlike peer-to-peer communication with JavaScript and HTML5 WebSockets on the clients.
In this scenario, you would use your ASP.NET server as a broker to set up the connections between your clients (since your server knows where to reach them).
Your JavaScript clients should, from that moment on, handle the rest of the communication business.
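To illustrate the broker flow described above: real clients here would be browser JavaScript, but the same handshake, sketched with the JDK's built-in `java.net.http.WebSocket` client purely for illustration (the URL and message format are made up), looks like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class ChatClient {
    public static void main(String[] args) throws Exception {
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://your-server.example/chat"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        // Messages relayed (or brokered) by the ASP.NET server arrive here.
                        System.out.println("received: " + data);
                        webSocket.request(1); // ask for the next message
                        return null;
                    }
                })
                .join();

        // Send a message; the server decides which peer to relay it to.
        ws.sendText("hello from peer A", true);
        Thread.sleep(60_000); // keep the process alive to keep receiving
    }
}
```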