Understanding the bokeh server

I am unable to find any mention of how many sessions the bokeh-server is capable of handling.
I would like to include some plots in my web app and would like an idea of how a single bokeh server will handle my traffic of ~100 users at any given time. Each user's page may have as many as 10 bokeh plots on it. I would use redis as the backend.
My stack is as follows (all on a single core VPS, 1G RAM):
nginx (webserver)
uwsgi (application server)
flask (web framework)
redis (in-memory data persistence)
How does the bokeh-server configuration option --multi-user play into my use case? I am having trouble understanding the scope of the bokeh session.

IMPORTANT: The question above and the answer below concern the old, long-gone first-generation Bokeh server, before Bokeh 0.11. For information about using the current Bokeh server, which is more stable and performant, as well as simpler to use and better documented, see:
http://docs.bokeh.org/en/latest/docs/user_guide/server.html
OBSOLETE:
A few thoughts:
Regarding load: unknown, but it's not so much about the number of users as about how large your data is, since most of the overhead is JSON serialization/deserialization. One user could swamp the bokeh server if the JSON content is gigantic, but under normal usage I would expect 100 users to be no problem.
Note: if you're only using one core, nginx isn't going to help much.
Regarding multi-user: it means different users can register with their own username and password, so users won't stomp on each other's documents. In the single-user case, the bokeh session always connects to the bokeh server as the user "defaultuser"; in the multi-user case, users must register and log in to the session using their credentials. Multi-user matters more when users are publishing content; since (IIUC) you are the only one pushing content to the server, it should not be an issue.

Related

accessing WordPress DB from remote server

Need some advice before starting to develop some things. I have 15 WordPress websites on different installs, and I have a remote server which gets data 24/7 from those websites.
I've reached the point where I want the server to modify the websites based on its calculated data.
The question is this:
Should I allow the server to access the WP DB remotely and modify things without WP in the loop?
Or use the WP REST API and supply some secured routes which provide data, accept data, and make those changes?
My instinct is to use the WP API, but after all it's PHP (nginx+apache), which has its limits (timeouts, for example), and I find it hard to run long, heavy processes on WP itself.
I can divide the tasks into different levels, for example:
fetching data (simple get)
make some process on the remote server
loop and modify in small batches to another route
My concern is that this loop requires a perfect match between the remote server and the WP API, and any change or fix on the WP side means a plugin update on all the websites, which is not much fun.
I'd appreciate any ideas and suggestions on how to move forward.
"use WP REST API and supply some secured routes which provide data and accept data and make those changes", indeed.
I don't see why timeouts or other limits should cause a problem; using the API is the best approach for this kind of case. You can avoid timeout problems with some adjustments on the web server side.
Or you can increase the memory and timeout limits exclusively for requests from your main server.
For example:
// raise PHP limits only for requests coming from your main server's IP
if ($_SERVER['REMOTE_ADDR'] == 'YOUR_MAIN_SERVER_IP') {
    ini_set('max_execution_time', 1000);
    ini_set('memory_limit', '1024M');
}
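On the remote-server side, the small-batches idea can stay equally simple. Here is a minimal TypeScript sketch, assuming Node 18+ (for the global fetch), a hypothetical secured route myplugin/v1/batch-update registered on the WP side, and an application-password Basic-auth user; the route name, payload shape, and credentials are all placeholders:

// push calculated changes to one WP site in small batches
const WP_BASE = 'https://site1.example.com/wp-json'; // one of the 15 installs
const AUTH = 'Basic ' + Buffer.from('svc-user:app-password').toString('base64');

async function pushBatch(items: unknown[]): Promise<void> {
  const res = await fetch(`${WP_BASE}/myplugin/v1/batch-update`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: AUTH },
    body: JSON.stringify({ items }),
  });
  if (!res.ok) throw new Error(`WP API error: ${res.status}`);
}

// loop and modify in small batches, as suggested above, so no single
// request runs long enough to hit PHP's execution-time limit
export async function run(allItems: unknown[], batchSize = 50): Promise<void> {
  for (let i = 0; i < allItems.length; i += batchSize) {
    await pushBatch(allItems.slice(i, i + batchSize));
  }
}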

Frequent Unexpected ASP.NET Session Drops Hosted on Azure

Since we moved to Azure, we have had numerous session-loss issues, only in production.
We have InProc, cookie-based, sticky sessions, a large timeout, no high traffic, and no high memory/process usage.
We use HAProxy as the load balancer.
I have done basic research and none of the following seems to be the cause:
session timeout
application pool settings/recycling
memory size and usage thresholds
no swallowed exceptions
no file-system changes that would cause a restart
I'm particularly suspicious about how the load balancer/SSL and the application work together and whether the HTTP headers are fine, but I don't know any tools to really monitor that.
I'm assigned to find a solution, yet I have no privileges to access the machines.
Logs (log4net) are all stored in a database, but they don't give a clear understanding of what is going on in the system, and I cannot follow a user session using them.
I'm allowed to find the problem by adding required logs to code or to develop some kind of monitoring module or to use profiling/debugging tools.
There is only one production deployment a month, so I'm trying to use the opportunity as well as possible.
Question:
Is there any monitoring/profiling tool that can give me a clear view of what is happening in the system by aggregating the information I need? For example, following a user/session across requests from login until the session drops, plus information about headers and other application parameters.
If there is no such tool out there, please give me your ideas for writing one.
This is a common issue in a load-balanced environment. As mentioned in this answer to a similar question:
InProc mode stores session state in memory on the web server. That means session data is kept inside your web server on a given VM and is not shared outside that VM. So when you have multiple servers behind a load balancer, the session state isn't shared between them. To solve this, you must store your session state externally to the web server.
Use Redis, a SQL database, or something else.
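In ASP.NET itself that fix is a sessionState configuration change rather than code, but the principle is easy to see in a runnable sketch. Below is a minimal TypeScript/Express illustration of an externalized session store, assuming express, express-session, connect-redis v7+, and node-redis v4+ (none of which are part of the stack in the question); once session data lives in Redis, any node behind the balancer can serve any request:

import express from 'express';
import session from 'express-session';
import RedisStore from 'connect-redis';
import { createClient } from 'redis';

const redisClient = createClient({ url: 'redis://localhost:6379' });
await redisClient.connect();

const app = express();
app.use(
  session({
    // session data lives in Redis, not in any one web node's memory
    store: new RedisStore({ client: redisClient }),
    secret: 'replace-me',
    resave: false,
    saveUninitialized: false,
  })
);

app.get('/', (req, res) => {
  // this counter survives a failover to another web node
  const s = req.session as session.Session & { views?: number };
  s.views = (s.views ?? 0) + 1;
  res.send(`views: ${s.views}`);
});

app.listen(3000);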

Single Page Application with signalR: performance testing

I need to evaluate the number of concurrent users our website can handle. The website is a Single Page Application built on the .NET Framework with Durandal.js on the frontend. We use SignalR (hubs) for real-time communication between server and client.
The only option I see is 'browser testing', where each test runs a browser instance (or uses PhantomJS, etc.) to keep a real-time connection with the server, as in real usage. Are there any other options besides tests that drive a browser instance to emulate user behaviour? What is the best way to emulate a load of, say, 1000 concurrent users?
I've found several cloud services that support such load testing, e.g. Load Impact and BlazeMeter. It would be great if someone could share their experience with such tools.
SignalR provides a tool called Crank, which can be used to test how many connections a given machine can handle.
More info: http://www.asp.net/signalr/overview/performance/signalr-connection-density-testing-with-crank
Make your own script to create virtual users; that is the most effective way to recreate real-world load/stress. Use the Akka actor model (for creating the virtual users) with the Java SignalR client. (If you want, you can use the Gatling tool as a framework and attach your script, written in Java or Scala, to Gatling's virtual users.)
Make the script dynamic by storing user info (authentication tokens or user credentials) in an XML document.
Please comment with questions; I can guide you end to end, as I have finished building and deploying such a tool.
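To make the browser-less approach concrete: the classic SignalR 2.x hubs in the question use the old jQuery-based JavaScript client, but the same pattern is easy to show with the modern @microsoft/signalr package. A sketch, assuming a hypothetical hub endpoint and hub method named Send, neither of which comes from the question:

import { HubConnectionBuilder, LogLevel } from '@microsoft/signalr';

const HUB_URL = 'https://example.com/chathub'; // hypothetical hub endpoint
const USERS = 1000;

async function startClient(id: number): Promise<void> {
  const connection = new HubConnectionBuilder()
    .withUrl(HUB_URL)
    .configureLogging(LogLevel.Error)
    .build();

  // count broadcasts here to measure delivery latency under load
  connection.on('broadcast', () => {});

  await connection.start();

  // emulate user behaviour: call a (hypothetical) hub method periodically
  setInterval(() => {
    connection.invoke('Send', `user-${id}`, 'ping').catch(() => {});
  }, 5000);
}

async function main(): Promise<void> {
  for (let i = 0; i < USERS; i++) {
    await startClient(i);
    await new Promise((r) => setTimeout(r, 10)); // stagger the ramp-up
  }
}

main().catch(console.error);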

IIS 7.5 Load Balancing--do Sessions stick to the originating server?

Apologies if there is an answer already out there, but I've looked at over two dozen threads and can't find the specific answer.
So, for our ASP.NET (2.0) application, our infrastructure team set up a load balancer machine that has two IIS 7.5 servers.
We have a network file server where the single copy of the application files reside. I know very little about the inner workings of load-balancing and even IIS in general.
My question is regarding sessions. I guess I'm wondering if the 'balancing' part is based on sessions or on individual page requests.
For example, when a user first logs in to the site, he's authenticated (forms), but then while he navigates around from page to page--does IIS 7.5 automatically "lock him in" to the particular server that first logged him in and authenticated him, or could his page requests alternate from one server to the next?
If the requests do indeed alternate, what problems might I face? I've read a bit about duplicating the MachineKey, but we have done nothing in web.config regarding the MachineKey; it does not exist there at all.
I will add that we are not experiencing any issues (that we know of, anyway) regarding authentication, session objects, etc. The site is working very well; the question is more academic, and I just want to make sure I'm not missing something that may bite me down the road.
Thanks,
Jim
while he navigates around from page to page--does IIS 7.5 automatically "lock him in" to the particular server that first logged him in and authenticated him
That depends on the configuration of the load balancer and is beyond the scope of IIS itself. Since you haven't provided any information on which balancer you actually use, I can only give general information: regardless of the balancer type (hardware or software), it can be configured for so-called "sticky sessions". In that mode, you are guaranteed that once a browser establishes a connection to your cluster, it will always hit the same server. There are two example techniques: in the first, the balancer creates a virtual mapping from source IP addresses to cluster node numbers (which means multiple requests from the same IP hit the same server); in the second, the balancer attaches an additional HTTP cookie/header that allows it to recognize the same client and direct it to the same node.
Note that this use of "session" has nothing to do with the server-side session, the per-user container. Session here means a client-side session: a single browser on a single operating system and a series of request/replies from it to your server.
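As a toy TypeScript illustration of the first technique (not what any real balancer literally runs), all "stickiness" requires is a mapping that is stable per client address:

// map a client IP to a cluster node index; the same IP always
// yields the same node, which is exactly what sticky sessions need
function pickNode(clientIp: string, nodeCount: number): number {
  let hash = 0;
  for (const ch of clientIp) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % nodeCount;
}

// every request from 203.0.113.7 lands on the same one of two servers
console.log(pickNode('203.0.113.7', 2));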
If the requests do indeed alternate, what problems might I face
Multiple issues. First, encryption, if it relies on the machine key, will not work. This means that even forms cookies would be rejected by cluster nodes other than the one that issued the cookie. The solution is to have the same machine key on all nodes.
Another common issue would be the InProc session provider: any data stored in the memory of one application server will not "magically" appear on other cluster nodes, making the session data unavailable. The solution is to configure the session to be stored out of process, for example in a SQL Server database.
I will add that we are not experiencing any issues (that we know of anyway) regarding authentication, session objects
Sounds like a positive coincidence, or the infrastructure team has already configured sticky sessions. The latter sounds likely; the configuration is usually obvious and easy.

Is there a way to change the MONGO_URL in code?

I'm searching for a way to change the way Meteor loads the Mongo database. Right now, I know I can set an environment variable when I launch Meteor (or export it), but I was hoping there was a way to do this in code. That way, I could dynamically connect to different instances based on conditions.
An example test case would be for the code to parse the URL 'testxx.site.com', look up a Mongo URL based on the 'testxx' subdomain, and then connect to that particular instance.
I've tried setting process.env.MONGO_URL in the server code, but when things execute on the client, it's not picking up the new values.
Any help would be greatly appreciated.
Meteor connects to Mongo right when it starts (using this code), so any changes to process.env.MONGO_URL after startup won't affect the database connection.
It sounds like you are trying to run one Meteor server on several domains and have it connect to several databases at the same time, depending on the client's request. This might be possible with traditional server-side scripting languages, but it's not possible with Meteor, because the server and database are tightly tied together: the server attaches to one main database when it starts up.
The *.meteor.com hosting is doing something similar to this right now, and in the future Meteor's Galaxy commercial product will allow you to do this, all by starting up separate Meteor servers per subdomain.
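That said, if what you need is specific collections backed by a different database, rather than swapping the main MONGO_URL per request, newer Meteor versions expose a server-side internal that can open extra connections. A sketch, assuming MongoInternals.RemoteCollectionDriver (undocumented and version-dependent) and a placeholder per-tenant URL you would resolve from your own config:

import { Mongo, MongoInternals } from 'meteor/mongo';

// open a second connection alongside the one created from MONGO_URL
const extraDriver = new MongoInternals.RemoteCollectionDriver(
  'mongodb://localhost:27017/testxx'
);

// collections built with this driver read and write the extra database,
// while every other collection stays on the main MONGO_URL database
export const TestxxItems = new Mongo.Collection('items', {
  _driver: extraDriver,
});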
