I enabled HTTP access to my SSAS 2008 project. I am able to connect to
the project and the cubes via http://<servername>/olap/msmdpump.dll.
I tried anonymous authentication and basic authentication. For
production use I will have to use basic authentication.
With both settings I can access the project and the cubes quickly, but
drawing the report takes 10 times longer with basic authentication, which is
far too slow for a production environment. Once the report is drawn, navigating
inside the cube (filtering, drilling) causes no performance problems.
In SQL Server Profiler I compared both authentication modes. The connection
string has the same settings in both cases; with basic authentication it just takes
10 times longer from "Discover Begin" to "Discover End" and from "Query Begin"
through "Query Subcube" to "Query End". NTUserName is the same in both
variants, as I used the same user for anonymous authentication.
I tried Windows Server 2003 and 2008. Same issue.
I tried changing several connection string settings.
In both variations the server does not use proactive caching.
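For reference, the clients connect over HTTP roughly like the sketch below (the catalog name, cube name and credentials are placeholders, not my real values):

    // Minimal sketch: connecting to SSAS over HTTP (msmdpump.dll) with basic authentication.
    // Requires a reference to Microsoft.AnalysisServices.AdomdClient; names and credentials are placeholders.
    using Microsoft.AnalysisServices.AdomdClient;

    class HttpCubeConnectionSketch
    {
        static void Main()
        {
            var connectionString =
                "Data Source=http://<servername>/olap/msmdpump.dll;" +
                "Initial Catalog=MyCatalog;" +
                "User ID=reportUser;Password=secret;";

            using (var connection = new AdomdConnection(connectionString))
            {
                connection.Open();

                // A trivial MDX query, just enough to generate Discover/Query events in Profiler.
                using (var command = new AdomdCommand("SELECT FROM [MyCube]", connection))
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read()) { /* consume the single cell */ }
                }
            }
        }
    }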
Any idea?
Thanks!
Related
I know this is a bit of a broad question, but please bear with me.
We have a standard Azure Windows VM with IIS and ASP.NET Webservices. All DB queries are run against Azure SQL.
We have a page that makes about 18 REST calls at the same time and all those REST calls internally query the Azure SQL DB.
This is how it looks locally (screenshot of the network request waterfall):
And against our live server (screenshot):
So on the live server, Chrome is obviously limiting the number of connections to 6.
At a quick glance, it just looks like the first few queries are very slow. They do take a little time, but never 5 seconds or more if you run them individually; maybe 0.5 seconds at most.
OK, so second idea: IIS or SQL Server is at its performance limit when I run all these calls at the same time.
But if I run any of the six calls individually from Postman while waiting for the website to load, they return in sub-second time. So it can't be IIS or SQL Server that is at its limit. It seems like these calls somehow block each other when they come from the same source.
Is there anything common in my setup that could cause this slowdown when I fire multiple calls from one client?
The SQL calls are just simple SELECTs, no updates or anything like that.
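In case it helps to reproduce this outside the browser, here is a small console sketch that fires the calls in parallel and logs the per-call timings (the base URL, paths and call count below are placeholders):

    // Minimal sketch: fire N requests against the same service concurrently and log per-call timing.
    // The base address and the "api/data/{i}" paths are placeholders for our actual REST endpoints.
    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ParallelCallTimingSketch
    {
        static async Task Main()
        {
            var client = new HttpClient { BaseAddress = new Uri("https://live-server.example.com/") };

            var tasks = Enumerable.Range(0, 18).Select(async i =>
            {
                var stopwatch = Stopwatch.StartNew();
                var response = await client.GetAsync($"api/data/{i}");
                stopwatch.Stop();
                Console.WriteLine($"Call {i}: {(int)response.StatusCode} in {stopwatch.ElapsedMilliseconds} ms");
            }).ToList();

            await Task.WhenAll(tasks);
        }
    }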
I am having trouble evaluating the number of concurrent users our website can handle. The website is a Single Page Application built on the .NET Framework with Durandal.js on the frontend. We use SignalR (hubs) for real-time communication between server and client.
The only option I see is "browser testing", so each test would run a browser instance (or use PhantomJS etc.) to keep a real-time connection with the server (as in real usage). Are there any other options besides tests that use a browser instance to emulate user behaviour? What is the best way to emulate a load of, say, 1000 concurrent users?
I've found several cloud services that support such load testing, e.g. Load Impact and BlazeMeter. It would be great if someone could share their experience with such tools.
SignalR provides a tool called Crank, which can be used to test how many connections a given machine can handle.
More info: http://www.asp.net/signalr/overview/performance/signalr-connection-density-testing-with-crank
Make your own script to create virtual users! That is the most effective way to recreate real-world load/stress. Use the Akka actor model (for creating the virtual users) with the Java SignalR client. If you want, you can use the Gatling tool as the framework and attach your script, written in Java or Scala, to Gatling's virtual users.
Make the script dynamic by storing user info (authentication tokens or user credentials) in an XML document.
Please comment with questions; I can guide you end to end, as I have finished building and deploying such a tool.
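If you'd rather stay in .NET, the same virtual-user idea can be sketched with the .NET SignalR client instead of the Java one (the URL, hub name, method names and connection count below are placeholders):

    // Minimal sketch: open many concurrent SignalR connections from one process to simulate virtual users.
    // Uses the Microsoft.AspNet.SignalR.Client package; the URL, hub and method names are placeholders.
    using System;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;

    class VirtualUserSketch
    {
        static async Task Main()
        {
            const int virtualUsers = 1000; // placeholder user count

            var tasks = Enumerable.Range(0, virtualUsers).Select(async i =>
            {
                var connection = new HubConnection("http://your-server/signalr");
                var hub = connection.CreateHubProxy("ChatHub");

                hub.On<string>("broadcastMessage", message => { /* count or inspect messages here */ });

                await connection.Start();
                await hub.Invoke("Send", $"hello from user {i}");
            }).ToList();

            await Task.WhenAll(tasks);
            Console.WriteLine("All virtual users connected.");
        }
    }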
We have a system running Windows Server 2008 R2 x64 and SQL Server 2008 R2 x64 with SSRS installed/configured. This is a shared reporting server used by a large number of people, with some fairly large, inefficient databases (400-500 GB of data, ish), and these users use the system to generate ad-hoc reports based off a reporting model that sits on top of the aforementioned databases. Note that the users are using NTLM to log on and identify themselves when running reports.
Most reports are quick, but if you are running a report for 1 or 2 years' worth of data, it can take a while to return (5 minutes ish). This is fine for most users; however, some of the users are stuck behind a proxy which has a connection timeout set at 2 minutes. As SSRS 2008 R2 does not seem to send back a "keep-alive" signal (confirmed via Wireshark), when running one of these long reports the proxy server thinks the connection has died, so it just gives up and kills the connection. This gives the user a 401 or 503 error and obviously cancels the report (the incorrect error is a known bug in SSRS which Microsoft refuses to fix).
We're getting a lot of flak from the users about this, even though it's not really our issue... so I am looking for a creative solution.
So far I have come up with:
1) Discovering some as yet unknown setting for SSRS that can make it keep the connection alive.
2) Installing our own proxy in between the users and our reports server, which WILL send a keep-alive back (not sure this will work and it's a bit hacky, just thinking creatively!)
3) Re-writing our reports databases to be more efficient (yes, this is the best solution, but it's also incredibly expensive)
4) Asking the experts :) :)
We have a call booked with Microsoft Support to see if they can help, but can any experts on Stack help out? I appreciate that this may be a better question for Server Fault (and I may post it there), but it's a development question too, really :)
Thanks!
A few things:
A. For the SSRS service overall:
I personally use a keep-alive service, as I believe the default recycle time for the SSRS server is 12 hours. I use a tool someone turned me onto called 'VisualCron' that can run many task processes automatically. You could also just make a call from a WCF service or similar. Basically, I know the first report a user runs in a day is generally slow; usually you need to hit http://<servername>/ReportServer to keep it alive.
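If you don't want a third-party scheduler, even a tiny console app run as a scheduled task can do the ping; a minimal sketch (the server name and the interval are placeholders):

    // Minimal sketch: periodically hit the Report Server URL so the service stays warm.
    // <servername> and the 10-minute interval are placeholders; run this as a scheduled task or service.
    using System;
    using System.Net;
    using System.Threading;

    class ReportServerKeepAlive
    {
        static void Main()
        {
            while (true)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create("http://<servername>/ReportServer");
                    request.UseDefaultCredentials = true; // NTLM, same as the report users

                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0}: keep-alive ping returned {1}", DateTime.Now, (int)response.StatusCode);
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("{0}: keep-alive ping failed: {1}", DateTime.Now, ex.Message);
                }

                Thread.Sleep(TimeSpan.FromMinutes(10));
            }
        }
    }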
B. For caching report-level items:
If that does not help, I would suggest caching datasets where possible. Some people need data that is up to the minute, but for a lot of people that is not the case. You can create a shared dataset in SSRS and then cache it on a schedule. So if you have domain-like tables that only need to be updated once in a blue moon, put them there; same with data that is loaded nightly or in batches. If you are a transactional shop that needs up-to-the-minute data this may not help, but for batch-based businesses it can help tremendously.
You can also cache the reports' data as a continuation of this. Under the 'Manage' drop-down for a report on the /Reports landing page, you can set the data to run under a specific schedule. You can also set up a snapshot, which is an extension of this: it executes on a schedule with some default parameters set and is a copy of the report as it was when it ran.
You mentioned ASP.NET, so I am not certain how much of this will work if you are doing all of this through a site you are setting up internally as a pass-through. But you could also email or save files on a schedule through SSRS's subscription service.
C. Change how you store your data for reporting.
You can create a report warehouse of selected item-level values from your queries. Create a small database that holds just a few recent years of data and only certain fields from certain tables, then index it to death and report off of that. In my experience this method flies in terms of performance, but it does take extra overhead to set up. Generally most companies will whine about this, but it often takes a single day to set up; then you create one SQL Server Agent job (or an SSIS package) that does it all nightly and you don't worry about it. I like this method as I know my data is not being reported off of production and is isolated.
I am looking for information on how to create an ASP.NET web farm - that is, how to make an ASP.NET application (initially designed to work on a single web server) work on 2, 3, 10, etc. servers?
We created a web application which works fine when, say, there are 500 users at the same time. But now we need to make it work for 10 000 users (working with the web app at the same time).
So we need to set up 20 web servers and make it so that 10,000 users can work with the web app by typing "www.MyWebApp.ru" in their web browsers, with their requests handled by the 20 web servers without their knowing it.
1) Is there special standard software to create an ASP.NET web farm?
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
I found very little information on ASP.NET web farms and scalability on the web: in most cases, articles on scalability explain how to optimize an ASP.NET app and make it run faster. But I found no example of a "Hello world"-like ASP.NET web app running on 2 web servers.
It would be great if someone could post a link to an article or, better, share their own experience of ASP.NET "web farming" and addressing scalability issues.
Thank you,
Mikhail.
1) Is there special standard software to create an ASP.NET web farm?
No.
2) Or should we create a web farm ourselves, by transferring requests between different web servers manually (using ASP.NET / C#)?
No.
To build a web farm, you will need some form of load balancing. For up to 8 servers or so, you can use Network Load Balancing (NLB), which is built in to Windows. For more than 8 servers, you should use a hardware load balancer.
However, load balancing is really just the tip of the iceberg. There are many other issues that you should address, including things like:
State management (cookies, ViewState, session state, etc)
Caching and cache invalidation
Database loading (managing round-trips, partitioning, disk subsystem, etc)
Application pool management (WSRM, pool resets, partitioning)
Deployment
Monitoring
In case it might be helpful, I cover many of these issues in my book: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
I'd say you should configure an NLB (Network Load Balancing) cluster, which basically splits all requests between the cluster nodes (and, as an added benefit, detects when nodes are down and stops sending them requests). There are features built into Windows for this, but they don't compare to a hardware device for performance or scalability. If you're using Windows 2008 it really is simple to set one up. If you do this, make sure you have a shared machine key, or you'll start getting exceptions about ViewState being invalid (when one server renders the form, it posts back to another, and they're using different keys to encode the data).
You can also use DNS round-robin, but with 20 servers presumably in one datacenter I wouldn't see the point in going to such crazy lengths. If you've got multiple datacenters, though, it's definitely worth considering (as NLB won't really work well between datacenters).
You'll also want to make sure that if a user swaps servers they don't lose their session. The simplest way is to use a session state database (configurable in web.config, or you can do it server-wide in IIS's configuration). If you don't use sessions, just turn them off in the pages directive of web.config and call it a day. You could also use a session state server, but I don't have any experience with that.
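For reference, the shared machine key and the out-of-process session state both live in web.config on every node; a minimal sketch (the keys and connection string are placeholders, so generate your own machineKey values):

    <!-- Minimal web.config sketch for every server in the farm; keys and connection string are placeholders. -->
    <system.web>
      <!-- Same machineKey on every node so ViewState/forms tickets written by one server validate on another. -->
      <machineKey validationKey="PLACEHOLDER-VALIDATION-KEY"
                  decryptionKey="PLACEHOLDER-DECRYPTION-KEY"
                  validation="SHA1"
                  decryption="AES" />

      <!-- Store session out of process so any node can serve the user without losing the session. -->
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=SessionDbServer;Integrated Security=SSPI;"
                    cookieless="false"
                    timeout="20" />
    </system.web>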
It may also be worth considering spending some time optimizing the code or adding caching directives to static content - it can be very cost-effective even if you only trim the need for a few of those servers.
Hope that helps.
If you keep your servers stateless, it is easy with a good router that implements a round-robin protocol (sending each call to the single published server IP on to a different web server).
If it is not stateless (e.g. if a login is required, or SSL is used), then you need to keep each session on the same server.
Here is some info about MS Application Request Routing - you will get everything there:
IIS Load balancing
I would not recommend #2. You will do much better off with a load balancer.
Pay attention to session state management. Unless you configure the load balancer to keep each user on the same web server, you will have to use the session state server or database.
Also, check your code's usage of Application and Cache variables. These will be different on every web server. If those values are static, you may not have a problem. But if they can change, you can end up with different values on each web server.
There used to be a problem with ViewState in 1.x, as explained here. I'm not sure if this problem still exists.
Then, there are some changes that you need to make to the Machine Key in web.config, as explained here.
I currently have an asp.net website hosted on two web servers that sit behind a Cisco load balancer. The two web servers reference a single MSSQL database server.
Since this database server is a single point of failure, I'm adding an additional MSSQL server for backup. I would like to setup my web.config files to write everything to both MSSQL servers, but only read from the "primary" database server unless it is unreachable for some reason, at which point the backup MSSQL server would be used.
Is this possible via a web.config file setting, or must this be done in code? Thanks in advance for any help.
New Information:
I just wanted to add further information on this topic after researching it for the past several days - Microsoft TechNet has a good article titled "Implementing Application Failover with Database Mirroring" (http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/implappfailover.mspx#EMD).
This specifically covers the database mirroring feature in Microsoft SQL Server 2005 and the new "Failover Partner" connection string keyword that allows you to specify two server/database instances in a single connection string.
The article is well worth a read if you're interested in implementing this type of feature.
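To make that concrete, here is a minimal sketch of what the change looks like from code (the server and database names are placeholders, and database mirroring must already be configured between the two servers):

    // Minimal sketch: a connection string with a Failover Partner, so connections go to the principal
    // and automatically fail over to the mirror if the principal is unreachable. Names are placeholders.
    using System.Data.SqlClient;

    class FailoverPartnerSketch
    {
        static void Main()
        {
            var connectionString =
                "Data Source=PrimaryServer;" +
                "Failover Partner=MirrorServer;" +
                "Initial Catalog=MyDatabase;" +
                "Integrated Security=True;";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open(); // connects to the mirror transparently if the principal is down
                // ... run commands as usual ...
            }
        }
    }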
What you want is called "failover": if one database fails, your queries are automatically redirected to the other. This is achieved at the database level, not in the application. There are a lot of walkthroughs for setting up failover clusters: here's one for SQL 2000, and another for SQL 2005. Basically, once you set it up, the primary database communicates all activity to the secondary one. If the primary fails, the secondary is (almost) up to date and takes over.
The servers form a cluster and look like a single unit, similar to the way your load-balanced web servers look to the outside world. The backup monitors the primary, and if the primary stops responding, the backup takes over receiving queries. If you're Googling, try also adding the keywords "database mirroring" and "quorum".
It's a bit more complex than that. Does your web page write to the databases, or just read?
If they write, then you'll have to worry about keeping the 2 databases synchronized, probably using mirroring or log shipping.
But what you are (in essence) talking about doing is setting up a SQL cluster.
I've written a blog entry that shows how to setup MSSQL Database mirroring as well as how to actually utilize it from a managed code perspective:
http://www.improve.dk/blog/2008/03/23/sql-server-mirroring-a-practical-approach
Nice answer from "Rick", but I just wanted to add my 2 cents. Normally, for a failover setup without a lot of expensive equipment, I would set it up like this:
You can have your two SQL Server boxes waiting for requests and a third low-end box running SQL Server 2005 Express acting as the "health monitoring" witness. What that saves you is $10K for the box and one SQL Server licence. SQL Server Express (as in free) can do the health monitoring between the two database servers without any issues.
That is my setup :)