I'd like to know if there's a way to communicate directly between two (or more) Flask-SocketIO servers. I want to pass information between servers, and have clients connect to a single WebSocket server, which would have all the combined logic and data from the other servers.
I found this example for JavaScript, Socket.IO Server to Server, where the solution was to use socket.io-client to connect to another server.
I've looked through the Flask-SocketIO documentation, as well as other resources, but it doesn't appear that Flask-SocketIO has a client component.
Any suggestions or ideas?
Flask-SocketIO 2.0 can (maybe) do what you want. This is explained in the Using Multiple Workers section of the documentation.
Basically, the servers are configured to connect to a shared message queue service (redis, for example), and then a load balancer in front of them assigns clients to any of the servers in the pool using sticky sessions. Broadcasting operations are coordinated automatically among the servers by passing messages on the queue.
As an additional feature, if you use this setup, you can have any process connect to the message queue to post messages for clients, so for example, you can emit events to clients from a worker or other auxiliary process that is not a SocketIO server.
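For example, here is a minimal sketch of emitting from such an auxiliary process (the event name, payload, and namespace are made up for illustration):

from flask_socketio import SocketIO

# connect this auxiliary process to the same queue the servers use;
# no Flask app instance is needed here
external = SocketIO(message_queue='redis://')

# hypothetical event name and payload
external.emit('status_update', {'progress': 80}, namespace='/events')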
From your question it is unclear if you were looking to implement something like this, or if you wanted to have the servers communicate for a different reason. Sending custom messages on the queue is currently not supported, but your question gave me the idea; this might be useful for some scenarios.
As far as using a SocketIO client as in the question you referenced, that should also work. You can use this Python package: https://pypi.python.org/pypi/socketIO-client. If you go this route, you can have a server be a client, so it can receive events or join rooms.
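A hedged sketch of what the server-as-client side might look like with that package (the host, port, and event names below are assumptions, and the other server would need matching handlers):

from socketIO_client import SocketIO

def on_server_update(*args):
    # handle data pushed by the other server
    print('update received:', args)

io = SocketIO('other-server.example.com', 5000)
io.on('server_update', on_server_update)
io.emit('join', {'room': 'servers'})  # assumes the other server handles 'join'
io.wait(seconds=10)  # process incoming events for a while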
Related
I'm a bit of a noob when it comes to deploying web apps and wanted to make sure a little app I'm building will work with the tech I'm trying to use.
I have some experience with Flask, but have only ever used the test server. My understanding is that with nginx or Apache, if I write a Flask app, each user who visits my website could get a different instance of the Flask app; exactly how that works is a little confusing to me.
The app I want to make is similar to chatrooms or a game like "Among Us". When a user comes to the website, they join a big "lobby" and can either join a "room" that already exists, or launch a new room and generate a code/ID that they can pass to their friends so that their friends can join the same session (I think a socketio "room" can be used for this).
However, if each client is connected to their own Flask instance, will every server instance be able to see the "rooms" on the other instances? Suppose my app becomes really popular and I want to scale the lobby across multiple machines/AWS instances in the future; is there anything I can do now to ensure this works? Or is scaling across multiple machines equivalent to scaling across instances on a single machine, as far as the flask-socketio/nginx stack is concerned?
Basically, how do I ensure that the lobby part of the code is scalable? Is there anything I need to do to ensure every user has the ability to connect to rooms with other users, even if they get a different instance of the Flask app?
I will answer this question specifically with regards to the Socket.IO service. Other features of your application or third-party services that you use may need their own support for horizontal scaling.
With Flask-SocketIO, scaling from one to two or more instances requires an additional piece: a message queue, typically either Redis or RabbitMQ, although there are a few more options.
As you clearly stated in your question, when the whole server is in a single instance, data such as which room(s) each connected client is in are readily available in the memory of the single process hosting the application.
When you scale to two or more instances, your clients are going to be partitioned and randomly assigned to one of your servers. So you will likely end up having the participants that are in a room also spread across multiple servers.
To make things work, the server instances all connect to the message queue and use messages to coordinate complex actions such as broadcasts to a room.
So in short, to scale from one to more instances, all you need to do is deploy a message queue, and change the Flask-SocketIO server to indicate the location of the queue. For example, here is the single instance server instantiation:
from flask_socketio import SocketIO
socketio = SocketIO(app)
And here is the initialization with a Redis message queue running on localhost's default 6379 port:
from flask_socketio import SocketIO
socketio = SocketIO(app, message_queue='redis://')
The application code does not need to be changed; Flask-SocketIO does all the coordination between instances for you by posting messages on the queue.
Note that it does not really matter if the instances are hosted on the same server or on different ones. All that matters is that they connect to the same message queue so that they can communicate.
The SignalR documentation says that scaleout/backplane works well in the case of a server-broadcast type of load/implementation. However, I suspect that in the case of pure server broadcast it will cause duplicate messages to be sent to the clients. Consider the following scenario:
I have two instances of my hub sitting on two web servers behind a load balancer on my web farm.
The hub on each server implements a timer for database polling to fetch some updates and broadcast to clients in groups, grouped on a topic id.
The clients for a group/topic might be divided between the two servers.
Both the hub instances will fetch the same or overlapping updates from the database.
Now as each hub sends the updates to clients via the backplane, will it not result in duplicate updates sent to the clients?
Any suggestions?
The problem is not with SignalR, but with your database polling living inside your hubs. A backplane deals correctly with broadcast replication, but if you add another responsibility to your hubs then it's a different story. That's the part that is duplicating messages, not SignalR, because now you have N pollers doing broadcast across all server instances.
You could, for example, move that logic out of the hubs into a separate component, and let just one instance of your server application use this new piece to generate messages by polling, perhaps using a piece of configuration to decide which one. This way you would send messages only from there, and SignalR's backplane would take care of the replication. It's just a very basic suggestion and it could be done differently, but the key point is that your poller should not be replicated, and that is not directly related to SignalR.
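To make the single-poller idea concrete, here is a rough sketch of gating the poller by configuration so that exactly one instance runs it. It is written in Python purely for brevity; in a SignalR deployment this would be .NET code, and all the names below are hypothetical:

import os
import time

# only the instance designated in configuration runs the poller;
# all other instances just serve hub traffic
IS_POLLER = os.environ.get('RUN_POLLER') == '1'

def fetch_updates_since(last_id):
    # placeholder for the database polling query
    return [], last_id

def broadcast(topic_id, update):
    # placeholder: in SignalR this is where the hub context would send to
    # the group for topic_id, and the backplane would replicate it
    pass

def poll_loop():
    last_id = 0
    while True:
        updates, last_id = fetch_updates_since(last_id)
        for topic_id, update in updates:
            broadcast(topic_id, update)
        time.sleep(5)

if IS_POLLER:
    poll_loop()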
It's also true that polling might not be the best way to deal with your scenario, but IMO that would be answering a different question.
I would like to write an application to manage files, directories and processes on hundreds of remote PCs. There are measurement programs running on these machines, which are currently managed manually using TightVNC / RealVNC. Since the number of machines is large (and increasing) there is a need for automatic management. The plan is that our operators would get a scriptable client application, from which they could send queries and commands to server applications running on each remote PC.
For the communication, I would like to use a TCP-based custom protocol, but that is administratively complicated, and it would take a very long time to open pinholes in every firewall in the way. Fortunately, there is a program with a built-in TinyWeb-based custom web server running on every remote PC, and port 80 is open in every firewall. These web servers serve requests coming from a central server by starting a CGI program, which loads and sends back parts of the log files of the measurement programs.
So the plan is to write a CGI program and communicate with it from the clients through HTTP (using GET and POST). Although (most of) the remote PCs are inside the corporate intranet, they are scattered all over the country, so I would like to secure the communication. It would not be wise to send commands which manipulate files and processes in plain text. Unfortunately, the program which contains the web server cannot be touched, so I cannot simply prepare it for HTTPS. I can only implement the security layer in the client and in the CGI program. What should I do?
I have read all similar questions in SO, but I am still not sure what to do in this specific situation. Thank you for your help.
There are several webshells, but as far as I can see ( http://www-personal.umich.edu/~mressl/webshell/features.html ) they run on top of an existing SSL/TLS layer.
There is also S-HTTP.
There are several ways of authenticating to a server (username/password) in a protected way without SSL: http://www.switchonthecode.com/tutorials/secure-authentication-without-ssl-using-javascript . But these solutions are focused only on sending a username/password to the server.
Would it be possible to implement something like the message-level security in SOAP/WS-Security? I realise this might be a bit heavy duty and complicated to implement (a lighter-weight sketch of the same idea follows the list below), but at least it is:
standardised
definitely secure
possibly supported by some libraries or frameworks you could use
suitable for HTTP
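As that lighter-weight alternative in the same message-level spirit, you could encrypt and authenticate the message bodies themselves with a pre-shared key and pass the resulting opaque blobs over plain HTTP. This is not WS-Security, just symmetric authenticated encryption. A minimal sketch in Python, assuming the cryptography package on the client and a matching decrypt step in the CGI program (the URL, command, and form field are made up):

import requests
from cryptography.fernet import Fernet

# the key must be pre-shared with the CGI program out of band;
# generate it once with Fernet.generate_key() and store it securely
key = Fernet.generate_key()
f = Fernet(key)

command = b'list-processes'  # hypothetical command
token = f.encrypt(command)   # authenticated encryption (AES + HMAC)

# hypothetical endpoint and form field
resp = requests.post('http://remote-pc/cgi-bin/manage.cgi', data={'msg': token})
reply = f.decrypt(resp.content, ttl=60)  # reject replies older than 60 seconds

Fernet tokens carry a timestamp, so the CGI side can enforce a ttl on incoming commands as well, which gives you some replay protection on top of confidentiality and integrity.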
I need to build a Windows service in VB.NET under Visual Studio 2003. This Windows service should read a flat file (a huge file of about a million records) from a local folder and upload it to the corresponding database table. This should be done in rollback mode (a database transaction). While transferring data to the table, the service should also be listening for additional client requests. So if, in between, a client requests a cancel operation, the service should roll back the transaction and give feedback to the client. The Windows service also writes continuously to two log files with the status and error records.
My client is an ASPX page (a website).
Can somebody explain how to organize and achieve this functionality in a Windows service (processing and listening for additional client requests simultaneously, e.g. a cancellation request)?
Also, could you suggest the ideal way of achieving this (whether it is best to implement it as a web service, a Windows service, a remote object, or some other way)?
Thank you all for your help in advance!
You can architect your service to spawn "worker threads" that do the heavy lifting, while it simply listens for additional requests. Because future calls are likely to have to deal with the current worker, this may work better than, say, architecting it as a web service using IIS.
The way I would set it up: the service's main thread listens on a port or pipe for a communication. When it gets a call to process data, it spawns a worker thread, giving it a "status token" (which could be as simple as a reference to a boolean variable) that the worker checks at regular intervals to make sure it should still be running. The thread kicks off, and the service goes back to listening (network classes maintain a buffer of received data, so calls will only fail if they time out).
If the service receives a call to abort, it sets the token to a "cancel" value. The worker thread reads this value on its next poll, gets the message, rolls back the transaction, and dies.
This can be set up to have multiple workers processing multiple files at once, belonging to callers keyed by their IP or some unique "session" identifier you pass back and forth.
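A minimal sketch of that token pattern, using Python's threading.Event as the status token purely to illustrate the shape (a VB.NET version would use a shared flag, or a CancellationToken on newer frameworks; all names below are illustrative):

import threading
import time

def process_records(records, cancel_token):
    # a real implementation would open the database transaction here
    for record in records:
        if cancel_token.is_set():   # poll the token at regular intervals
            # roll back the transaction and report back to the client
            print('cancelled, rolling back')
            return
        time.sleep(0.01)            # placeholder for uploading one record
    # commit the transaction once every record is in
    print('finished, committing')

cancel_token = threading.Event()
worker = threading.Thread(target=process_records,
                          args=(range(1000), cancel_token))
worker.start()

# later, when a cancel request arrives from the ASPX client:
cancel_token.set()
worker.join()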
You can design your service the way FTP works. FTP uses two ports, one for commands and another for data transfer.
You can consider two classes, one for command parsing and another for data transfer, each running on a separate thread.
Use a communication channel (like a thread-safe queue) between the threads. You can use System.Collections.Concurrent if you move to .NET 4.0, along with more threading features like CancellationTokens...
WCF has advantages over a web service, but comparing it to a Windows service needs more details of your project. In general, WCF is easier to implement than a Windows service.
I'm building a client-server application and I am looking at adding failover to the client so that when a server is down it will try to connect to another available server. Are there any standards or specifications covering server failover? I'd rather adopt an existing standard than implement my own mechanism.
I don't think there is, or needs to be, any. It's pretty straightforward, and it all depends on how you connect to your server, but basically you need to keep sending pings/keepalives/heartbeats, whatever you want to call them, and when a failure occurs (or n failures in a row, if you want), flip a switch in your config.
Typically, the above would run as a separate service on the client machine. Alternatively, you could create a method execution handler which handles the execution of all the server calls you make and, on communication failure in your 'catch' block, flips the switch in your config.
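A minimal sketch of that ping-and-switch idea (hostnames, port, timeout, and failure threshold are all made-up values):

import socket

SERVERS = ['server-a.example.com', 'server-b.example.com']
PORT = 9000
MAX_FAILS = 3

def is_alive(host):
    # the "ping" here is just a TCP connect; use whatever your protocol offers
    try:
        with socket.create_connection((host, PORT), timeout=2):
            return True
    except OSError:
        return False

def next_server(active, fails):
    # returns the (possibly new) active server index and failure count
    if is_alive(SERVERS[active]):
        return active, 0
    fails += 1
    if fails >= MAX_FAILS:
        # n fails in a row: flip the switch to the next server in the pool
        return (active + 1) % len(SERVERS), 0
    return active, fails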
Your question is very general. Here are some general answers:
Google for Fault Tolerant Computing
Google for High Availability Solutions
This is usually handled at either the load balancer or the server level. This isn't something you normally do in code at the client.
Typically, you multihome the servers, each having its own IP plus one that is shared between all of them. They also communicate with each other over TCP with a heartbeat, to know which is the active node in an active/passive cluster.
I can't tell what type of servers you have, but most Windows servers can do this natively.
You might consider asking the question at serverfault to see how to properly configure your servers to support this.