New Azure Server - CSV Reader takes much longer - asp.net

We have an ASP.NET website that allows users to import data from a CSV file. Recently we moved from a dedicated server to an Azure Virtual Machine and the import is taking much longer. The hardware specs of the two systems are similar.
It used to take less than a minute for the data to import; now it can take 10-15 minutes. The original file upload speed is fine; it is looping through the data and organizing it in the SQL database that takes the time.
Why is the Azure VM with similar specs taking so much longer and what can I do to fix it?
Our database is using Microsoft SQL Server 2012 installed on the same VM as the website.

Very hard to make a comparison between the two environments. Was the previous environment virtualized? It might come down to the speed of the hard disks, the placement of the SQL Server files, or some other infrastructural setup (or simply the iron). I would recommend having a look at the performance of the machine under load (Resource Monitor). This kind of operation is usually both processor and I/O intensive. This operation should be done in parallel as well.
Hth
//Peter
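
If the import really does loop through the CSV and issue one INSERT per row, batching the writes is usually the biggest win regardless of the hardware. Below is a minimal, hypothetical sketch using SqlBulkCopy; the table name, column layout and CSV format are placeholders, not the asker's actual schema.

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    public static class CsvImporter
    {
        // Hypothetical import: stage the whole file in memory, then push it to
        // SQL Server in large batches instead of one INSERT per row.
        public static void Import(string csvPath, string connectionString)
        {
            var table = new DataTable();
            table.Columns.Add("Name", typeof(string));      // placeholder columns
            table.Columns.Add("Amount", typeof(decimal));

            foreach (var line in File.ReadLines(csvPath))
            {
                var fields = line.Split(',');
                table.Rows.Add(fields[0], decimal.Parse(fields[1]));
            }

            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "dbo.ImportStaging"; // placeholder table
                bulk.BatchSize = 5000;   // one round trip per 5000 rows, not per row
                bulk.WriteToServer(table);
            }
        }
    }

Even on a VM with slower disks, a handful of bulk batches tends to behave far better than tens of thousands of individual INSERTs.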

Related

Running SQL Server & IIS on same Windows server without making site go slow

I have a dedicated Windows 2008 Server with 32GB RAM, an Intel Xeon E3-1230 v2 processor and SQL Server 2008 (Standard Edition).
There's a heavy data import & cleaning process, i.e. importing data from CSV files, which runs daily; the whole process takes about 8-10 hours.
My problem is that the ASP.NET MVC website which is hosted on this same server gets slow for small periods during that import process. Most of the time it runs fine, but in between, the site will become unresponsive and slow. The import process doesn't touch the database which the site uses.
What are the options for me to ensure my site runs smoothly throughout?
Is it even possible to achieve, keeping in mind some very resource-heavy operations get performed during the data import on the same server?
The import process is a Windows application which uses SSIS packages to import data and afterwards runs SQL Server stored procedures.
Ankit, a quick solution for your problem is to have 2 VMs on your dedicated hosting server. On one VM you can host your website, and on the other you can host your database and import process.
From a performance perspective it is case by case; your problem is very generic and broad, so the community can't give a specific answer, but here are some tips you can still follow.
For your website, create a separate application pool.
You can assign min and max CPU and memory utilization to your processes like the import, IIS, etc. based on your need (a small sketch of doing this programmatically follows below).
Check the clock speed and whether it matches between your processor and memory. It doesn't matter how much virtual memory you have; you need your RAM in sync with your processor cores.
SQL Server Standard edition supports a maximum of 4 processor sockets, so check how many processors your dedicated server has.
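To illustrate the app-pool limits mentioned above: IIS can throttle a pool's CPU and recycle a worker process that grows too large, either from IIS Manager or from code. The sketch below uses Microsoft.Web.Administration; the pool name and thresholds are made-up examples, and the Throttle action requires IIS 8 or later (older versions only offer KillW3wp).

    using Microsoft.Web.Administration;

    class PoolLimits
    {
        static void Main()
        {
            // Run elevated on the web server; "MyWebsitePool" is a placeholder name.
            using (var manager = new ServerManager())
            {
                var pool = manager.ApplicationPools["MyWebsitePool"];

                // Cap the pool at 50% CPU (Limit is expressed in 1/1000ths of a percent).
                pool.Cpu.Limit = 50000;
                pool.Cpu.Action = ProcessorAction.Throttle; // IIS 8+ only

                // Recycle the worker process if private memory exceeds ~2 GB (value in KB).
                pool.Recycling.PeriodicRestart.PrivateMemory = 2 * 1024 * 1024;

                manager.CommitChanges();
            }
        }
    }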
I have got some really good ideas from the community for my problem. Having two virtual machines is a very good one, but it's restricted by my lack of knowledge on the topic. The solution proposed by Anil in the comments is also a good one: run two separate SQL Server instances and use Resource Governor to restrict resources, but for that we'd first have to upgrade to Enterprise edition, which is not feasible for us at the moment.
So, keeping the cost in mind, we have decided to try Varnish. The plan is to get a separate Linux VPS and set up Varnish there, with the Windows server as the backend for Varnish. Since our pages stay more or less static once the data is prepared by our import process, I think we'll do well with this.

Do websites in the same application pool share loaded libraries?

I have a Windows Server 2012 with IIS 8.0. It is hosting many small websites with a low user base which are not mission critical in any way. By small website I mean that the application code and memory footprint are quite low, but due to the loaded libraries, like Entity Framework, the memory consumption of the applications is about 140MB when freshly started and idle.
In general that's not a big deal for a full-blown web server, but I only have a VPS with 4GB of RAM which also runs several other applications (databases, BIND, hMail, etc.). I'm using it basically as a development server to play with many different technologies. Therefore, I'm running out of RAM quickly while serving dozens of ~140MB w3wp's.
Besides suspending them when idle, I'd like to reduce the memory consumption while still using any framework or library I'd like to use; that's the purpose of the whole thing, actually.
Long story short: as the applications not only share the same .NET version but also some libraries like EF or MVC, would it make more sense to run multiple sites in one app pool so that they can share the libs? Or would each site load its own copy anyway (due to different application domains, as discussed here)?
Bonus question: when considering a hardware upgrade, 1GB of RAM is $20/month but putting the whole server on SSDs is $10/month. While I do know that reading from the page file is always much slower than reading from RAM, I'm thinking about using a big pagefile on the SSD instead of buying 1GB of additional RAM for twice the price; again, speed of the websites isn't critical, they should just work. Would that make any sense at all?
Looking at a w3wp Process (hosting multiple sites) in Process Explorer shows that it hosts several different application domains with different instances of the same assemblies loaded into memory. So moving the sites into a single AppPool may not help much.
But there is another option. In IIS 8+ you can share common assemblies across AppPools. If certain assemblies are used by multiple AppPools, they are loaded into memory just once and then aliased by the different processes.
Have a look at this bit from asp.net and this TechNet blog post
You have to do a little bit of setup work, but then it seems to work quite well.
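
As a quick, hypothetical way of checking this yourself: each site can expose a small diagnostic endpoint that dumps what its own application domain has loaded, which makes it easy to compare the lists across sites and app pools.

    using System;
    using System.Linq;

    public static class LoadedAssemblies
    {
        // Returns one line per assembly currently loaded into this site's AppDomain.
        public static string Dump()
        {
            var names = AppDomain.CurrentDomain.GetAssemblies()
                                 .Select(a => a.FullName)
                                 .OrderBy(n => n);
            return string.Join(Environment.NewLine, names);
        }
    }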

Deploying two EARs in a single WebLogic server

I am trying to migrate an application to a WebLogic server which already has an application deployed on it. Please suggest whether having two EARs on the same WebLogic server is a feasible design.
It is perfectly feasible and standard; however, there are one or two reasons why you might not want to do this.
One is file descriptor exhaustion. If one of the applications (EARs) runs out of file descriptors, it will probably crash or render inoperable the entire process, i.e. the entire WebLogic server.
Another is heap memory exhaustion; much the same problem occurs if one of the applications exhausts the maximum available heap memory.
Application servers try to isolate applications from each other, but cannot completely succeed at this due to the limitations of the JVM. Operating systems and virtual machine hypervisors are actually able to do a better job of isolating applications from each other.

IIS Performance

We have the following setup:
Virtual server, Intel Xeon X5650 @ 2.67GHz (4 processors)
8GB RAM
Windows Server 2008 Standard 64-bit
SQL Server Express
IIS 7.5
Our database is only 200MB. We are running an ASP.NET app. We recently ran into some performance issues: ~200 concurrent connections were causing 100% CPU usage (mostly consumed by IIS) and bringing the response time to around 20 seconds! After some tweaks to our code we have been able to run a load test from loader.io with 1500 concurrent users over 1 minute; our response time at the end was around 5 seconds, CPU was around 95% (again consumed mainly by IIS), and memory was sitting at around 4GB. However, we are expecting bigger spikes than 1500, anywhere up to around 4000 users in a short amount of time.
My questions are the following:
1) Is this normal performance for our current setup? Our site is quite intensive on the database and we are using Entity Framework.
2) Would upgrading to SQL Server Web edition have any benefit seeing as our database is so small?
3) Do you think that this type of setup could handle 4000 users?
4) Any suggestions on what we could do to handle this load?
I know this is somewhat subjective, but any answers are much appreciated.
Is this normal performance for our current setup?
Depends on your code. Did you profile the code to make sure you don't have anything stupid in there?
Our site is quite intensive on the database and we are using Entity Framework.
Again, did you profile to figure out whether you spend a lot of time in Entity Framework? It is slow, but the question is what "intensive" means. This is what profilers are for.
Would upgrading to SQL Web edition have any benefit seeing as though our database is so small?
Help, my pizza arrives too late. Would upgrading to a larger car help? You say yourself that you spend the time in IIS, not SQL Server.
Do you think that this type of setup could handle 4000 users?
You think my car is big enough? Note I don't tell you what I need it for. Without looking at usage patterns and your code - no idea. THAT SAID: the server is pathetic compared to what you can buy today. As such, this is an irrelevant question - just upgrade if you have to.
Any suggestions on what we could do to handle this load?
Load test + profiler, optimize code. Get a bigger server. Realize that we don't have crystal balls to figure out how good / bad / stupid your code is.
The number one question arising here is: did you deploy RELEASE- or DEBUG-compiled binaries of your project?
Upgrading to Web edition will not solve any problem here, since the difference between the versions is very simple: Web edition is just throttled in the internal scheduler etc., so you will be just fine with the Standard edition.
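As a quick sanity check (a hypothetical snippet, not part of the original answer): ASP.NET exposes whether debug compilation is in effect at runtime, so a temporary diagnostic line can confirm which mode the deployed site is actually running in.

    using System.Web;

    public static class BuildModeCheck
    {
        // True when <compilation debug="true"> is in effect for this application.
        public static bool IsDebug()
        {
            return HttpContext.Current.IsDebuggingEnabled;
        }
    }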
My experience is that the most crucial aspect of concurrent request is the amount of server memory and the consumption of this memory by your code.
As the physical memory is consumed, the server starts to swap from physical to virtual memory which slows down processing dramatically and leads to symptoms you describe.
I would start by putting another 8GB of RAM into the server. In the meantime, try to optimize your code so that less data is processed during requests or less memory is used. Also, move SQL Server to a separate machine so that there is no competition between IIS and SQL Server when it comes to memory availability.
With your current machine, I doubt the problem is IIS itself; it is more likely related to the way your app is designed and/or utilizes frameworks. I personally learned just recently that IIS requests including multiple round trips to the database can be measured in hundreds of microseconds, not hundreds of milliseconds. A single locking bug or unbalanced queuing can limit your application's scalability regardless of your hardware specs [https://twitter.com/michaelzino/status/454512110165184512].
Entity Framework is known for validating your models against the database schema on the first few calls. I would suggest profiling your app layers, starting from the data access layer, or the intrinsic database calls, and going up.
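One common way to keep that first-call cost off a user request is to warm the context up once at application start. Below is a hypothetical sketch assuming EF6 code-first; AppDbContext and its entity are placeholder names, not the asker's model.

    using System.Data.Entity;

    // Placeholder context and entity, standing in for the real model.
    public class Customer { public int Id { get; set; } public string Name { get; set; } }
    public class AppDbContext : DbContext { public DbSet<Customer> Customers { get; set; } }

    public static class EfWarmUp
    {
        // Call from Application_Start so the first real request doesn't pay
        // the model-building and schema-check cost.
        public static void Run()
        {
            // Skip the model-compatibility check if the schema is managed elsewhere.
            Database.SetInitializer<AppDbContext>(null);

            using (var context = new AppDbContext())
            {
                context.Database.Initialize(force: false);
            }
        }
    }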

Build Server Hardware Configuration

So I've seen this question, but I'm looking for some more general advice: How do you spec out a build server? Specifically what steps should I take to decide exactly what processor, HD, RAM, etc. to use for a new build server. What factors should I consider to decide whether to use virtualization?
I'm looking for general steps I need to take to come to the decision of what hardware to buy. Steps that lead me to specific conclusions - think "I will need 4 gigs of ram" instead of "As much RAM as you can afford"
P.S. I'm deliberately not giving specifics because I'm looking for the teach-a-man-to-fish answer, not an answer that will only apply to my situation.
The answer comes down to what requirements the machine needs in order to "build" your code, and that is entirely dependent on the code you're talking about.
If it's a few thousand lines of code then just pull that old desktop out of the closet. If it's a few billion lines of code then speak to the bank manager about giving you a loan for a blade enclosure!
I think the best place to start with a build server, though, is to buy yourself a new developer machine and then rebuild your old one to be your build server.
I would start by collecting some performance metrics on the build on whatever system you currently use to build. I would specifically look at CPU and memory utilization, the amount of data read and written from disk, and the amount of network traffic (if any) generated. On Windows you can use perfmon to get all of this data; on Linux, you can use tools like vmstat, iostat and top. Figure out where the bottlenecks are -- is your build CPU bound? Disk bound? Starved for RAM? The answers to these questions will guide your purchase decision -- if your build hammers the CPU but generates relatively little data, putting in a screaming SCSI-based RAID disk is a waste of money.
You may want to try running your build with varying levels of parallelism as you collect these metrics as well. If you're using gnumake, run your build with -j 2, -j 4 and -j 8. This will help you see if the build is CPU or disk limited.
Also consider the possibility that the right build server for your needs might actually be a cluster of cheap systems rather than a single massive box -- there are lots of distributed build systems out there (gmake/distcc, pvmgmake, ElectricAccelerator, etc) that can help you leverage an array of cheap computers better than you could a single big system.
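
If the current build box is a Windows machine, a throwaway sampler along these lines logs the same counters perfmon would show (category and counter names assume an English install); run it alongside a build and line the spikes up against the build log.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class BuildMonitor
    {
        static void Main()
        {
            var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var disk = new PerformanceCounter("PhysicalDisk", "% Disk Time", "_Total");
            var mem  = new PerformanceCounter("Memory", "Available MBytes");

            // Sample once per second while the build runs; stop with Ctrl+C and
            // redirect the output to a file for later comparison.
            while (true)
            {
                Console.WriteLine("{0:T}  CPU {1,5:F1}%  Disk {2,5:F1}%  Free RAM {3,6:F0} MB",
                    DateTime.Now, cpu.NextValue(), disk.NextValue(), mem.NextValue());
                Thread.Sleep(1000);
            }
        }
    }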
Things to consider:
How many projects are going to be expected to build simultaneously? Is it acceptable for one project to wait while another finishes?
Are you going to do CI or scheduled builds?
How long do your builds normally take?
What build software are you using?
Most web projects are small enough (build times under 5 minutes) that buying a large server just doesn't make sense.
As an example,
We have about 20 devs actively working on 6 different projects. We are using a single TFS Build server running CI for all of the projects. They are set to build on every check in.
All of our projects build in under 3 minutes.
The build server is a single quad core with 4GB of RAM. The primary reason we use it is to perform dev and staging builds for QA. Once a build completes, that application is auto-deployed to the appropriate server(s). It is also responsible for running unit and web tests against those projects.
The type of build software you use is very important. TFS can take advantage of each core to parallel build projects within a solution. If your build software can't do that, then you might investigate having multiple build servers depending on your needs.
Our shop supports 16 products that range from a few thousand lines of code to hundreds of thousands of lines (maybe a million+ at this point). We use 3 HP servers (about 5 years old), dual quad core with 10GB of RAM. The disks are 7200 RPM SCSI drives. Everything is compiled via msbuild on the command line with parallel compilation enabled.
With that setup, our biggest bottleneck by far is the disk I/O. We will completely wipe our source code and re-checkout on every build, and the delete and checkout times are really slow. The compilation and publishing times are slow as well. The CPU and RAM are not remotely taxed.
I am in the process of refreshing these servers, so I am going the route of workstation-class machines, going with 4 instead of 3, and replacing the SCSI drives with the best/fastest SSDs I can afford. If you have a setup similar to this, then disk I/O should be a consideration.
