Of course I want to reach maximum performance. What can I do to achieve it?
Use bundles for CSS and JS files? OK.
What kind of storage should I use? Right now it's a SQL database, but the site and the database are hosted in different regions. The database won't be very large; 1 GB is enough. And how do I reduce query time? Right now it's too long.
Should I turn on the "Always On" feature for my site?
Is there anything else? Is there any article to read?
Thanks in advance.
There is only so much optimization you can do - if you really want "maximum performance" then you'd rewrite your site in C/C++ as a kext or driver-service and store all of your data in memcached, or maybe encode your entire website as a series of millions of individual high-frequency electronic logic-gates all etched into an integrated circuit and hooked-up directly to a network interface...
...now that we're on realistic terms ;) your posting has the main performance-issue culprit right there: your database and webserver are not local to each other, which is a problem: every webpage a user requests is going to trigger a database request, and if the database is more than a few milliseconds away then it's going to cause problems (MSSQL Server has a rather chatty network protocol too, which multiplies the latency effect considerably).
Ideally, total page generation time from request-sent to response-arrived should be under 100ms before users start to notice your site being "slow". Considering that a webserver might be 30ms or more from the client, that means you have approximately 50-60ms to generate the page, which means your database server has to be within 0-3ms of your webserver. Even 5ms of latency is too much, because something as innocuous as 3-4 database queries will incur a delay of at least 4 × (5ms + DB read time) - and DB read time can vary from 0ms (if the data is in memory) to 20ms or more if it's on a slow platter drive, or even longer depending on server load. That's how you can easily find a "simple" website taking over 100ms just to generate on the server, let alone send to the client.
In short: move your DB to a server on the same local network as your webserver to reduce the latency.
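If the database has to stay remote for a while, one stop-gap is to batch queries so each page pays the round-trip latency once rather than once per query. A minimal sketch, assuming plain ADO.NET; the Pages/PageItems tables, columns and parameters are invented for illustration:

```csharp
using System;
using System.Data.SqlClient;

// Hypothetical sketch: two queries sent in a single round trip so the
// cross-region latency is paid once instead of twice.
public static class PageData
{
    public static void LoadPage(string connectionString, int pageId)
    {
        const string sql =
            "SELECT Title FROM Pages WHERE Id = @id; " +
            "SELECT ItemText FROM PageItems WHERE PageId = @id;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", pageId);
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));   // first result set: page title

                reader.NextResult();                          // second result set, same round trip
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));   // page items
            }
        }
    }
}
```

The same idea applies to a stored procedure that returns several result sets in one call.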
The immediate and simplest thing to do in your situation is to move the database and the site into the same datacenter.
Later you may want to:
INSTRUMENT YOUR CODE
Add a cache (e.g. Azure Redis Cache)
Load-balance your web site (if it is under enough load)
And do everything around compacting/bundling/minifying your static assets (see the sketch below).
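For the bundling/minification item, a hedged sketch using ASP.NET's System.Web.Optimization; the bundle names and file paths are placeholders:

```csharp
using System.Web.Optimization;

// Combine and minify static assets so the browser makes one request per
// bundle instead of many small ones.
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One combined, minified script request.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        // Same idea for stylesheets.
        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Minification only runs with <compilation debug="false"> unless forced:
        BundleTable.EnableOptimizations = true;
    }
}
```

Register this from Application_Start via BundleConfig.RegisterBundles(BundleTable.Bundles) and reference the bundles with Scripts.Render / Styles.Render in the views.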
Hope it helps,
There is a need to find a performance bottleneck in a server application under heavy load. The application consists of a single web services instance (.asmx) and some files that are requested over HTTP from time to time. My plan to solve this problem is to 1) drive the server into an exceptional situation where it starts failing somehow, and 2) analyze performance counters and logs at that moment to deduce what kind of calls caused it.
To achieve this I've implemented a special client that issues both types of requests and repeats the respective cycles indefinitely, hoping that at some point I'll get errors during the WebMethod/GET URL requests (NB: existing standard solutions like JMeter and WAPT can't be used due to the complexity of the service usage scenario). So far what I am observing is increased response times in service calls and some network timeout exceptions during file loading (using HttpClient, which throws OperationCanceledException; this is considered a timeout according to this thread). By the way, that's strange, because the files are only a few KB in size, while the service methods return 5-10 MB of data per request. I thought the "larger" requests would be more likely to fail first.
Perfmon shows increased CPU load and absolutely no memory spikes/leaks. The Request Execution Time counters are pretty random and look irrelevant, and the Queue Lengths are always 0.
That said, it looks like IIS handles my improvised DDoS well, which at the same time makes the testing approach ineffective (increased response times mean more active requests held in memory on the test client, which causes a memory overflow at some point, even though I'm already flushing the data right after I receive it without doing anything with it).
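For reference, a minimal sketch of a load loop that caps the number of in-flight requests, so slow responses throttle the client instead of exhausting its memory; the URL, the 200-request cap and the timeout are placeholder values:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Bounded-concurrency load client: never more than 200 outstanding requests.
class LoadClient
{
    static readonly HttpClient Http = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(200);

    static async Task Main()
    {
        var url = "http://server/Service.asmx/SomeMethod";   // hypothetical endpoint

        while (true)
        {
            await Gate.WaitAsync();          // blocks once 200 requests are in flight
            _ = Task.Run(async () =>
            {
                try
                {
                    using (var resp = await Http.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
                    using (var body = await resp.Content.ReadAsStreamAsync())
                    {
                        await body.CopyToAsync(Stream.Null);   // drain without buffering
                    }
                }
                catch (OperationCanceledException) { /* count it as a timeout */ }
                finally { Gate.Release(); }
            });
        }
    }
}
```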
More details: the server machine has 4 × 3 GHz cores and 4 GB of RAM. I generate a load of 50-100 requests per second, which results in 10-20 MB/sec of bandwidth (the test clients run on a VM inside the server's datacenter, with a 4 Gbps NIC). A 30-minute testing session amounts to ~10-30 GB of pure data transfer between server and client.
How can I actually make Web Service/IIS go down?
Firstly, I wouldn't write my own load testing tool; there are plenty available. I've used JMeter (open source). You can use JMeter (and other similar tools) to send both POST and GET parameters, cookies and other HTTP headers - though admittedly, this does become challenging for complex cases.
Next, make sure your problem really is the server, and not the other infrastructure - network, routers, firewalls etc. all have maximum capabilities, and may be the root cause of the problem. Most of them have logging and reporting tools. For instance, I've seen tests report a throughput issue when they reached the maximum capacity of the firewall; the servers were not even close to breaking point. This happened because we had included a rather large binary file in the test cases, which normally would be served from a CDN.
Next, on the whole it's unlikely that serving static HTTP requests is the problem - IIS is really, really good at that. On the kind of hardware you mention, I'd expect it to handle many thousands of requests per second for static files.
In most situations, it's the dynamic pages that cause the problem - your .asmx. So, I'd ignore all the static files in the load testing, and focus on the .asmx. On the kind of hardware you mention, you probably need to generate many hundreds of requests per second if the asmxes are working properly.
Working on the assumption that your web server is tuned correctly, and the asmx scripts are reasonably performant, I'd expect to need at least twice the (CPU and memory) capacity from the test system as your server has to bring it to breaking point (this is based on my experience with JMeter, which is not as efficient as my web applications, but does make it easy to deploy multiple test clients). So in your case, I'd look for 2 machines matching your server specification.
With JMeter (and pretty much all the other load testing tools I've worked with), you can fairly easily use multiple machines as load test clients; I've also used Cloud-based load generation using JMeter.
I'm not totally sure why this rule of thumb is true - but I've observed it over multiple projects.
I'm running an ASP.NET website. The server both reads and writes data to a database, but it also stores some frequently accessed data directly in process memory as a cache. When new requests come in, they are processed based on data in the cache before it's written to the DB.
My hosting provider suddenly decided to put their servers behind a load balancer. This means that my caching system will go bananas as several servers randomly process the requests. So I have to rewrite a big chunk of my application only to get worse performance, since I now have to query the database instead of doing a lightning-fast in-memory variable check.
First, I don't really see the point of distributing the load across IIS servers, since in my experience DB queries are most often the bottleneck, and now the DB has to take even more of a beating. Second, it seems like these things would require careful planning, not just something a hosting provider would set up for all their clients and expect all applications to be written to suit.
Are these sorts of things common, or was I stupid to use process memory as a cache in the first place?
Should I start looking for a new hosting provider, or can I expect web farming to arrive sooner or later anywhere? Should I keep transitions like this in mind for all future apps I write and avoid in-process caching and similar designs completely?
(Please, I don't want to make this into a farming vs. not-farming battle; I'm just wondering if it's so common that I have to keep it in mind when developing.)
I am definitely more of a developer than a network/deployment guru. So while I have a reasonably good overall understanding of these concepts (and some firsthand experience with pitfalls/limitations), I'll rely on other SO'ers to more thoroughly vet my input. With that caveat...
First thing to be aware of: a "web farm" is different from a "web garden". A web farm is usually a series of (physical or virtual) machines, usually each with a unique IP address, behind some sort of load-balancer. Most load balancers support session-affinity, meaning a given user will get a random machine on their first hit to the site, but will get that same machine on every subsequent hit. Thus, your in-memory state-management should still work fine, and session affinity will make it very likely that a given session will use the same application cache throughout its lifespan.
My understanding is a "web garden" is specific to IIS, and is essentially "multiple instances" of the webserver running in parallel on the same machine. It serves the same primary purpose as a web farm (supporting a greater number of concurrent connections). However, to the best of my knowledge it does not support any sort of session affinity. That means each request could end up in a different logical application, and thus each could be working with a different application cache. It also means that you cannot use in-process session handling - you must go to an ASP Session State Service, or SQL-backed session configuration. Those were the big things that bit me when my client moved to a web-garden model.
"First i don't really see the point of distributing the load on the iis server as in my experience DB queries are most often the bottleneck". IIS has a finite number of worker threads available (configurable, but still finite), and can therefore only serve a finite number of simultaneous connections. Even if each request is a fairly quick operation, on busy websites, that finite ceiling can cause slow user experience. Web farms/gardens increases that number of simultaneous requests, even if it doesn't perfectly address leveling of CPU load.
"Are these sort of things common or was i stupid using the process memory as cache in the first place? " This isn't really an "or" question. Yes, in my experience, web farms are very common (web gardens less so, but that might just be the clients I've worked with). Regardless, there is nothing wrong with using memory caches - they're an integral part of ASP.NET. Of course, there's numerous ways to use them incorrectly and cause yourself problems - but that's a much larger discussion, and isn't really specific to whether or not your system will be deployed on a web farm.
IN MY OPINION, you should design your systems assuming:
they will have to run on a web farm/garden
you will have session-affinity
you will NOT have application-level-cache-affinity
This is certainly not an exhaustive guide to distributed deployment. But I hope it gets you a little closer to understanding some of the farm/garden landscape.
Does anyone have real world experience running a Sqlite database on an SMB share on a LAN (Windows or Linux)?
It's clear from the documentation that this is not really the fastest way to share a SQLite database.
The obvious caveats are that it may be slow, and that SQLite only supports a single thread writing to the DB at a time. So you become a lot less concurrent, because your DB updates will now block the DB for longer (the DB stays locked while data is in transit over the network).
For my application the amount of data that I would like to share is fairly small and writes are not too frequent (a few writes every few seconds at most).
What should I watch out for? Can this work?
I know this is not what SQLite was designed for. I am less interested in a Postgres/MySQL/SQL Server based solution, as I am trying to keep my app as light as possible with a minimal number of dependencies.
Related links:
From the SQLite mailing list - so I guess one big question is how unreliable the file-locking APIs are over SMB (Windows or Linux).
My experience of file based databases (i.e. those without a database server process), which goes back over twenty years, is that if you try to share them, they will inevitably eventually get corrupted. I'd strongly suggest you look at MySQL again.
And please note, I am not picking on SQLite - I use it myself, just not as a shared database.
You asked for real-world experience. Here's some:
SQLite locking is robust, ASSUMING the underlying (networked) file system is also robust. Historically, that's been a poor assumption. Recent operating systems get it much better.
If you play by the rules, your biggest problem will be cases where the database stays "locked" for many minutes at a stretch. For example, if the network drops an "unlock" request from a reader, you might be unable to write until the lock expires. If an "unlock" from a writer goes missing, you'll be unable to read. (To be fair, you can experience the same problems with ordinary documents.)
You'll get fewer problems on a good reliable network with "opportunistic locking" (client-level file caching) disabled for the database.
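As a hedged sketch of what "playing by the rules" can look like from code, assuming the System.Data.SQLite provider (the UNC path, timeout value and table are placeholders): widen the busy timeout so a writer waits for a lock held across the network instead of failing immediately, and keep transactions short so the file lock is released quickly.

```csharp
using System.Data.SQLite;

public static class SharedDb
{
    public static void SaveSetting(string name, string value)
    {
        using (var conn = new SQLiteConnection(@"Data Source=\\fileserver\share\app.db"))
        {
            conn.Open();

            using (var pragma = conn.CreateCommand())
            {
                pragma.CommandText = "PRAGMA busy_timeout = 5000;";   // wait up to 5s for a competing lock
                pragma.ExecuteNonQuery();
            }

            using (var tx = conn.BeginTransaction())
            using (var cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                cmd.CommandText = "UPDATE Settings SET Value = @v WHERE Name = @n;";
                cmd.Parameters.AddWithValue("@v", value);
                cmd.Parameters.AddWithValue("@n", name);
                cmd.ExecuteNonQuery();
                tx.Commit();   // release the write lock as soon as possible
            }
        }
    }
}
```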
Well, I am not a great SQLite expert, but I believe the locking of records/tables may not work correctly and may corrupt the database. Since there is no single server maintaining central locking, two SQLite DLL instances on different machines sharing the same file over the network may not work correctly at all. If the database is opened on the same machine, SQLite can use the file-level locking offered by the OS to maintain integrity, but I doubt that works correctly over a network share.
"If you have many client programs accessing a common database over a
network, you should consider using a client/server database engine
instead of SQLite. SQLite will work over a network filesystem, but
because of the latency associated with most network filesystems,
performance will not be great. Also, the file locking logic of many
network filesystems implementation contains bugs (on both Unix and
Windows). If file locking does not work like it should, it might be
possible for two or more client programs to modify the same part of
the same database at the same time, resulting in database corruption.
Because this problem results from bugs in the underlying filesystem
implementation, there is nothing SQLite can do to prevent it."
from https://www.sqlite.org/whentouse.html
That also applies to any kind of file-based database, like Microsoft Access.
While tracing the active connections on my DB, I found that sometimes the number of connections exceeds 100. Is that normal?
After a few minutes it goes back to 20 or 25 active connections.
More details about my problem:
Traffic on the site is around 200 visitors per day.
Why am I asking? Because the default MaxPool in the ASP.NET connection string is 100.
Also, I am using the connection from the website in IIS.
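For reference, a hedged example of where that ceiling is set - the "Max Pool Size" keyword in the connection string, which defaults to 100 when omitted (server and database names are placeholders):

```csharp
// Raising the connection pool ceiling from the default of 100 to 200.
const string ConnectionString =
    "Data Source=dbserver;Initial Catalog=MyDb;Integrated Security=True;" +
    "Max Pool Size=200;";
```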
That really depends on your site and your traffic. I've seen a site peak at over 350 active connections to SQL Server during its peak time. That was for roughly 7,000 concurrent web users, on two web servers, plus various backend processes.
Edit
Some additional information we need in order to give you a better answer:
How many web processes hit your SQL Server? For example, are you using web gardens? Do you have multiple servers, and if so, how many? This is important because you can then calculate how many connections you can have by figuring out how many worker threads per process you have configured. Assume the worst case: every thread is running, each of which would add a connection to the pool.
Are you using connection pooling? If so, you're going to see connections stick around after the user's request ends. It is enabled by default.
How many concurrent users do you have?
But I think you're going after this wrong: you're having an issue with no free connections available in your pool. The first thing I'd look for is any leaked connections (connections being held open for longer than they should be). For example, passing a data reader up to the web page could be a sign of this.
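A hedged sketch of the pattern that avoids such leaks, assuming plain ADO.NET (table and column names are invented): open late, close early, and hand the page a plain list instead of a live reader.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class CustomerData
{
    public static List<string> GetCustomerNames(string connectionString)
    {
        var names = new List<string>();

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Name FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0));
            }
        }   // connection goes back to the pool here, even if an exception was thrown

        return names;
    }
}
```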
The next thing would be to evaluate the default settings. Maybe you should run a web garden, which would give you more connections, or increase the number of connections available in the pool.
The last thing I would do is try to optimize queries, as in your last question. Let's say you cut those queries in half; all you've done is buy yourself more time until more users come onto the system and you're right back here, only this time you might not be able to optimize that query yet again.
You're leaving out some details, which makes it difficult to answer correctly, but...
It depends, really. If you're not using connection pooling then each time a page is hit that requires access to the database a new connection is going to be opened. So sure, it could be perfectly normal.
I would also look into caching. Cache pages, cache query results, etc. You might be surprised how many times you go back to the database to get a list of US States...
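As a hedged illustration of that kind of caching (ASP.NET MVC output caching assumed here, which may not match the asker's stack; the controller and data are invented):

```csharp
using System.Web.Mvc;

public class StatesController : Controller
{
    // Cache the rendered output for a minute so repeated hits for the same
    // list never reach the database.
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        // Placeholder for the real database query.
        var states = new[] { "Alabama", "Alaska", "Arizona" /* ... */ };
        return View(states);
    }
}
```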
What would you recommend for an ASP.NET web application with a not-so-large SQL Server database (around 10 GB)?
I was just wondering: is it a good idea to have an Amazon EC2 instance configured and ready to host your app in an emergency?
In this scenario, what would be the best approach to keeping the database updated (log shipping? manual backup/restore?), and what is the easiest and fastest way to change the DNS settings?
Edit: the acceptable downtime would be somewhere between 4 and 6 hours; that's why I considered the Amazon EC2 option, for its lower cost compared to renting a secondary server.
Update - I just saw your comment. Amazon EC2 with log shipping is definitely the way to go. Don't use mirroring, because that normally assumes the standby database is always available. Changing your DNS should not take more than half an hour if you set your TTL to that, which would give you time to apply any pending logs. You might turn the server on once a week or so just to apply pending logs (or less often, to avoid racking up hourly costs).
Your primary hosting location should have redundancy at all levels:
Multiple internet connections,
Multiple firewalls set to failover,
Multiple clustered web servers,
Multiple clustered database servers,
If you store files, use a SAN or Amazon S3,
Every server should have some form of RAID depending on the server's purpose,
Every server can have multiple PSUs connected to separate power sources/breakers,
External and internal server monitoring software,
Power generator that automatically turns on when the power goes out, and a backup generator for good measure.
That'll keep you running at your primary location in the event of most failure scenarios.
Then have a single server set up at a remote location that is kept updated using log shipping, and include it in your deployment script (after your normal production servers are updated...). A colocated server on the other side of the country does nicely for these purposes. To minimize the downtime involved in switching to the secondary location, keep the TTL on your DNS records as low as you are comfortable with.
Of course, that much hardware is going to be expensive, so you'll need to determine what it is worth to avoid being down for 1 second, 1 minute, 10 minutes, etc., and adjust accordingly.
It all depends on what your downtime requirements are. If you've got to be back up in seconds in order to not lose your multi-billion dollar business, then you'll do things a lot differently than if you've got a site that makes you maybe $1000/month and whose revenue won't be noticeably affected if it's down for a day.
I know that's not a particularly helpful answer, but this is a big area, with a lot of variables, and without more information it's almost impossible to recommend something that's actually going to work for your situation (since we don't really know what your situation is).
The starting point for a rock solid DR Strategy is to first work out what the true cost is to the business of your server/platform downtime.
The following article will get you started along the right lines.
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-1038783.html
If you require further guidelines good old Google can provide plenty more reading.
A project of this nature requires you to collaborate with your key business decision makers and you will need to communicate to them what the associated costs of downtime are and what the business impact would be. You will likely need to collaborate with several business units in order to gather the required information. Collectively you then need to come to a decision as to what is considered acceptable downtime for your business. Only then can you devise a DR strategy to accommodate these requirements.
You will also find that conducting this exercise may highlight shortcomings in your platforms current configuration with regard to high availability and this may also need to be reviewed as an aside project.
The key point to take away from all of this is that the decision as to what is an acceptable period of downtime is not for the DBA alone to decide but rather to provide the information and expert knowledge necessary so that a realistic decision can be reached. Your task is to implement a strategy that can meet the business requirements.
Don’t forget to test your DR strategy by conducting a test scenario in order to validate your recovery times and to practice the process. Should the time come when you need to implement your DR strategy you will likely be under pressure, your phone will be ringing frequently and people will be hovering around you like mosquitoes. Having already honed and practiced your DR response, you can be confident in taking control of the situation and implementing the recovery will be a smooth process.
Good luck with your project.
I haven't worked with many different third-party tools, but I have used CloudEndure, and as far as the replica you get, I can say it is a really high-end product. Replication happens at very short intervals, which makes the replica very reliable. But since you don't need your site back up within seconds, asking them for a price quote, or going with a different vendor, might be the way to go.