I have a web application that uses websockets to receive real time updates. Since I don't use the websocket connection for anything else, and scaling websocket servers is a pain, I'm considering moving the messaging part to Firebase Cloud Messaging.
It seems like a pretty straightforward port for my web app.
However, the web app has a companion desktop app, used by some customers, that is written in Qt + Python. Is there any way to receive FCM messages in a client other than the official SDKs?
I have some questions regarding SignalR Core on the server side:
My server is written in ASP.NET Core, and it uses SignalR for sending notifications to users. The server uses Controllers with endpoints that clients interact with.
1) Can I host the entire thing in Azure App Service and add the SignalR service to it? Or would it be better to split the SignalR code out to its own server, which is called from the "main" server when needed?
2) The SignalR Service has a "Serverless" option, which according to the documentation doesn't support clients calling server RPCs while in that mode. Could I run in Serverless mode, since I'm only using the sockets for sending notifications to the clients? Or is it reserved for Azure Functions?
3) Is there a way to get the number of connections for a user in a SignalR hub? I would like to send a push message to the user if they don't have any connections to the server. If not - what is the recommended way of handling this? I was thinking of adding a singleton service that keeps count, but I'm unsure whether this would work at scale, especially with the SignalR Service.
Thanks.
1) Better to use Azure SignalR.
2) Use it with the hub.
3) If you use Azure SignalR, you can see the connection count in the portal. In code, whether or not you use Azure SignalR, you can save the user ID and count the connections yourself. If you have multiple hubs and servers, you need to do more (for example, if you're using a Redis backplane).
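Regarding 3), the singleton idea from the question works within a single server process. The sketch below shows the bookkeeping in Python for brevity; the `on_connected`/`on_disconnected` hooks are hypothetical stand-ins for overriding `OnConnectedAsync`/`OnDisconnectedAsync` in an ASP.NET Core hub. With multiple servers you'd need shared state (e.g. Redis) instead.

```python
from collections import defaultdict
from threading import Lock

class ConnectionCounter:
    """Singleton-style per-user connection counter.

    Mirrors the idea of hooking connect/disconnect events in a hub.
    Only valid for a single server process; with multiple servers
    the counts must live in shared storage such as Redis.
    """
    def __init__(self):
        self._counts = defaultdict(int)
        self._lock = Lock()

    def on_connected(self, user_id):
        with self._lock:
            self._counts[user_id] += 1

    def on_disconnected(self, user_id):
        with self._lock:
            self._counts[user_id] -= 1
            if self._counts[user_id] <= 0:
                del self._counts[user_id]

    def has_connections(self, user_id):
        # A user with zero live connections is a candidate for a push message.
        return self._counts.get(user_id, 0) > 0
```

The notification path then becomes: on each event, check `has_connections(user_id)` and send a push message only when it returns `False`.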
I have multiple traditional servers with thousands of users connecting to them. My server software is written in C++, listens to these users on TCP sockets, and speaks my own protocol defined on top of TCP. The server code is written such that it can handle client-to-client communication (e.g. instant messaging) no matter which client is connected to which server machine. It's a typical traditional server-farm scenario.
Now that I want to move this to the cloud, what changes do I need to make? I am new to the cloud, and all I know is that the cloud provider gives us APIs to communicate with cloud instances/DBs, and we no longer need to worry about the actual server instances running behind them (load balancing etc. is all taken care of by the cloud infrastructure).
Can a single cloud instance handle thousands (or even millions) of connections?
My server code is written in C++; when I move to the cloud, will it become obsolete? Do I need to develop my server from scratch using cloud APIs?
My server code is written in C++; when I move to the cloud, will it become obsolete? Do I need to develop my server from scratch using cloud APIs?
What you have is an application currently running on your in-house hardware. With the cloud, the hardware and OS infrastructure are provided by the cloud provider. You take your application to the cloud and run it as-is (almost). If, for example, you currently run your application on CentOS 7, you can create a CentOS 7 instance in the cloud, and your C++ application should run without issues. The cloud provider "facilitates" with their APIs; it does not force you to rewrite your application against them. So there is no need to develop from scratch.
Can a single cloud instance handle thousands (or even millions) of connections?
It depends on the dimensions (processor, memory, network throughput, etc.) of the instance that you choose from the cloud provider.
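Besides instance size, whether one instance can hold that many connections is mostly a question of the I/O model: an event-driven server keeps per-connection cost low, while a thread-per-connection design hits limits far earlier. A minimal sketch of the event-driven shape (Python asyncio here for brevity; the same idea applies to a C++ server built on epoll or kqueue):

```python
import asyncio

async def handle_client(reader, writer):
    # One lightweight coroutine per connection instead of one thread,
    # so a single instance can hold many thousands of mostly idle sockets.
    while data := await reader.read(1024):
        writer.write(data)  # echo back, standing in for your own protocol
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(host="0.0.0.0", port=9000):
    server = await asyncio.start_server(handle_client, host, port)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # start serving
```

The practical per-instance ceilings are file descriptor limits, memory per connection, and NIC throughput, which is exactly what the instance "dimensions" above determine.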
I've been developing a mobile app (iOS) that uses gRPC with Firebase auth(z). My server runs on GKE behind the NGINX-based proxy, and now I'm developing the web UI for the deeper configuration of a user account. I'd prefer not to fall back to REST APIs, so I was wondering: does Google Cloud Endpoints support websockets, and would it also prevent unauthorised app users from making requests? I know this is possible with websockets in general, but as I'm tied to gRPC with Cloud Endpoints, I'm just checking before I fall back to REST API calls (which I'd prefer not to!).
Summary: Does Google Cloud Endpoints support Websockets with JWT auth tokens from Firebase?
Thanks
It looks like ESP supports websockets now, using the "--enable_websocket" flag in the ESP config.
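If your ESP version has it, the flag would go wherever you pass the proxy's startup arguments; a hypothetical invocation (image tag, service name and backend address are placeholders — check your ESP version's flag list before relying on this):

```shell
docker run gcr.io/endpoints-release/endpoints-runtime:1 \
  --service=YOUR_SERVICE_NAME \
  --rollout_strategy=managed \
  --backend=grpc://127.0.0.1:8081 \
  --enable_websocket
```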
Currently, Cloud Endpoints doesn't support WebSockets at all.
By the way, what is your use case for WebSockets? WebSocket won't work with gRPC either. If you just want to talk to your gRPC service from the web UI, transcoding should work, and it works with JWTs from Firebase auth.
The Google Cloud Endpoints ESP doesn't support websockets.
However, Google Cloud Endpoints has open-sourced its Extensible Service Proxy implementation. Internally it's implemented as a custom NGINX module, and since NGINX supports websockets, it should be feasible to add support to their nginx:esp module.
But it's definitely out of scope for me. :-)
I am writing a Meteor application that has two components: a frontend Meteor app hosted on one server, and a chat app hosted on another. The chat application uses socket.io for the actual messaging (because I wanted to use Redis pub-sub, which isn't supported by Meteor yet), and sockjs for the rest.
I am hosting the two on Kubernetes. At their network IPs, websockets are working.
However, I want to use Cloudflare, where websockets won't work, so I have the DISABLE_WEBSOCKETS environment variable set to 1. Additionally, the transports for socket.io should have defaulted to XHR polling.
The only problem is this:
- When I get the conversations, the app hangs because the frontend web app is making a huge number of repeated XHR requests to the chat app.
- The chat app eventually responds and sends down the information after about 10 seconds, when it should have taken less than 0.5 seconds.
- There are a huge number of sockjs XHR requests being made, whereas the number of sockjs XHR requests to the normal frontend app is small.
- In development, this issue doesn't arise even with DISABLE_WEBSOCKETS set to 1.
On Cloudflare, I tried the following (from this page: https://modulus.desk.com/customer/portal/articles/1929796-cloudflare-configuration-with-meteor-xhr-polling):
- Set "Pseudo IPv4" to "Overwrite Headers"
Is there a special Meteor configuration I need to get XHR polling working with Cloudflare? Additionally, I have another service in the app as well, and it works completely fine. Could socket.io somehow be interfering with sockjs in the chat service?
I'm sure that was a confusing enough title.
I have a long-running Windows service dealing with things happening in the world. This service is my canonical source of truth for the rest of my system. Now I want to slap a web interface onto it so the clients can see what is actually going on. At first this would simply be an MVC5 application with some Web API stuff. Then I plan to use SignalR 2.0 and Ember.js to make the application more interactive and "realtime".
Clients communicate with the Windows service over named pipes using WCF. A client (such as a web app) could request an instance of, for example, IEventService, would be given a WCF proxy client, and could read about events through this interface. Simple enough.
However, a web application basically just exists in the sense that it responds to requests from the user. The way I understand it, this is not the optimal environment for a long-lived WCF client proxy to raise events in, and thus I wonder how to host my SignalR stuff. Keep in mind that a user would log in to the MVC5 site, but through the magic of SignalR, they would keep interacting with the service without necessarily making further requests to the website.
The way I see it, there are two options:
1) Host SignalR stuff as part of the web app. Find a way to keep it "long-running" while it has active clients, so that it can react to events on the WCF client proxy by passing information out to the connected web users.
2) Host SignalR stuff as part of my Windows service. This is already long-running, but I know nada about OWIN and what this would mean for my project. Also, the SignalR client will have to connect to a different port than the one the web app was served from, I assume.
Any advice on which is the right direction to go in? Keep in mind that in extreme cases, a web user would log in when they get to work in the morning and only have SignalR traffic going back and forth (i.e. no web requests) for a full work day before logging out. I need them to keep up with realtime events all that time.
Any takers? :)
The benefit of self-hosting as part of your Windows service is that you can integrate the calls to clients directly with your existing code and events. If you host the SignalR server separately, you'd have another layer of communication between your service and the SignalR server.
If you've already decided on using WCF named pipes for that, then it probably won't make a difference whether you self-host or host in IIS (as long as it's on the same machine). The SignalR server itself is always "long-running" in the sense that as long as a client is connected, it will receive updates. It doesn't require manual requests from the user.
In any case, you'll probably need a web server to serve the HTML, scripts and images.
Having clients connected for a day shouldn't be a problem either way, as far as I can see.
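Option 2) from the question, sketched: when the long-running service and the hub live in the same process, raising an event is a direct method call. Python is used here only to illustrate the shape; the names (`Hub`, `WorldService`, `broadcast`) are made up, and in the real project the broadcast would be SignalR's `Clients.All` inside the self-hosted (OWIN) hub.

```python
class Hub:
    """Minimal stand-in for a SignalR hub hosted inside the service."""
    def __init__(self):
        self._clients = set()

    def connect(self, client):
        # `client` is any callable that delivers a message to one web user.
        self._clients.add(client)

    def disconnect(self, client):
        self._clients.discard(client)

    def broadcast(self, message):
        # Equivalent in spirit to Clients.All in SignalR.
        for client in self._clients:
            client(message)

class WorldService:
    """The long-running service: the source of truth that raises events."""
    def __init__(self, hub):
        self._hub = hub

    def something_happened(self, event):
        # Because the hub lives in the same process, pushing an event to
        # connected web users is a direct call -- no extra WCF hop needed.
        self._hub.broadcast(event)
```

This is the "integrate the calls to clients directly with your existing code and events" benefit from the answer above: with the split-hosting option, `something_happened` would instead have to relay the event to a separate SignalR server over another communication layer.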