ASP.NET MVC site inaccessible when 200 users are online, with high CPU usage [closed] - asp.net

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
We are running an ASP.NET MVC 4 site on a server, and when 200 users are online and active on the site, it becomes inaccessible. When I look at CPU and RAM usage on the server, w3wp.exe and sqlservr.exe are using high CPU, and there is high disk I/O for SQL Server.
I downloaded Apache JMeter to run load tests against the application. I set up a load test with 200 users sending HTTP requests to home/index. Again there is high CPU usage from w3wp.exe (the IIS worker process), and I cannot access the site in a browser window.
But I don't really know how to identify the problem. How can I find out why the w3wp.exe process and SQL Server both show high CPU usage when 200 users are online?

There could be lots of issues. What have you tried so far?
I suggest you try some tools like:
Ants Profiler from Redgate - will help with memory leaks and the like
analyse your database queries and indexes for a start, using whichever tools suit your DB
download Glimpse - useful to have in your dev environment for tracking down issues
if you are using an ORM like Entity Framework or NHibernate, consider downloading a free trial of one of Ayende's profiling tools - they help identify many common issues people have with ORMs.
run the site through YSlow - more to do with client side issues but may help identify if there are too many requests happening per session and things like that
None of these will be the magic bullet, and there are other similar ones available, but these should help you get to the bottom of the issue. Good luck.
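A good first step on the SQL Server side of the advice above is to ask the server itself which queries burn the most CPU. A minimal sketch in Python - the DMV names (sys.dm_exec_query_stats, sys.dm_exec_sql_text) are standard SQL Server, but the connection details are placeholders for your environment:

```python
# Sketch: build a query that lists the most CPU-hungry statements
# using SQL Server's query-stats DMV. Run the resulting SQL via
# pyodbc, sqlcmd, or SSMS against your server.

def top_cpu_queries_sql(n=10):
    """SQL that returns the top-n statements by total worker (CPU) time.
    total_worker_time is reported in microseconds, hence the /1000."""
    return f"""
        SELECT TOP {n}
            qs.total_worker_time / 1000 AS total_cpu_ms,
            qs.execution_count,
            SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC
    """

if __name__ == "__main__":
    # e.g. pyodbc.connect(conn_str).cursor().execute(top_cpu_queries_sql(10))
    print(top_cpu_queries_sql(10))
```

If the same few statements dominate total_cpu_ms, missing indexes or an ORM issuing N+1 queries are the usual suspects.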

Related

Simple networked computer monitor [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 3 years ago.
My role's responsibilities include monitoring the status of around 300 TFS Build executions. As a result, I have set up an additional computer in an area of our office that runs Catlight.
However, this is only one half of the build executions. I would like to monitor the status of the underlying six Build Controllers for the TFS Build Definitions, as they occasionally become unresponsive and require a reboot.
On the second monitor of this additional computer, what would you recommend for a full-screen, light-weight, simple computer monitoring application that will ascertain whether a networked computer is unresponsive or offline?
To monitor the health of your build server, you can check the steps below:
While logged on to the build server, you can confirm Team Foundation Build Service is running, get information about the resources it is consuming, and confirm the general health of the build server.
Run Windows Task Manager (Task Manager on Windows 8).
On Windows 8, if the More details link appears, choose it.
Choose the Process tab.
On versions of Windows other than Windows 8, make sure Show processes from all users is selected.
On what version of Windows is your build server running?
Windows 8: Locate the Visual Studio Team Foundation Build Service Host process. It should be located in the Background processes section, or if your build server is running in interactive mode, in the Apps section. Observe the CPU, memory, disk, and network resources that the process is consuming.
Another Windows version: Locate the TFSBuildServiceHost.exe process.
Observe the CPU and memory resources that the process is consuming.
Use the other tabs in Task Manager to confirm the general health of the build server. For example, you can choose the Performance tab to confirm the computer has sufficient processor and memory resources. You can then choose Resource Monitor (on Windows 8, Open Resource Monitor).
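The "is the machine unresponsive or offline" half of the question can also be checked without extra software. A rough sketch that probes a TCP port on each controller - the host names are placeholders, and port 445 (SMB) is just a common always-open port on Windows servers; adjust both to your environment:

```python
# Sketch: probe whether a networked machine is reachable by trying a
# TCP connection with a short timeout.
import socket

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

if __name__ == "__main__":
    controllers = ["buildctl01", "buildctl02"]  # placeholder names
    for host in controllers:
        status = "up" if is_reachable(host, 445) else "DOWN/unresponsive"
        print(f"{host}: {status}")
```

Looping this on a timer and rendering the results full-screen gets you most of the way to the monitor described in the question.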
You can use AnyStatus to monitor both builds and health state of your servers to view the overall status in one place.
Disclaimer: I am the author of AnyStatus

What should I do with all this RAM? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The powers-that-be have decided to bestow memory upgrades on our developer team. We're all now in control of Mac Pros with 32GB of RAM. I would have preferred an SSD instead of half of that RAM, and I'm struggling to think of ways to make the most of it all. To date I have installed the x64 version of Windows 7 and also set up a 4GB RAM drive for temp files, browser cache etc., as well as code files for the various apps I'm working on.
Despite this, even in the middle of a heavy-duty debug session with a massively multi-project solution, I always seem to have what seem to me obscene amounts of free memory left, and I was wondering if there was anything else I could do to make the most of the available RAM. The only other thing I could think of was to run a virtual Windows server on my workstation for 'proper' (i.e. in a mirror of our production environment) local deployment/testing and so on, but any tools or tricks that could put the 4-6GB to good use in any developer- or user-friendly ways would be very welcome.
I work with ASP.Net and SQL Server and use VS2010/12 so any 'tricks' specific for this set-up are especially welcome. I was saddened to see that all that RAM has not made VS2010 any less prone to fits of unresponsiveness.
Some ideas:
use a RAMdisk, and put your dev environment on it... This will do wonders! Way quicker than the quickest SSD... But be careful: it is volatile! You could have a 16GB, or even 24GB, RAMdisk and still have enough room to play with. Project switching has never been quicker, not to mention all disk-based activities.
you can run multiple virtual machines. For example, if you use a DB for development, you could have a local copy and not have to rely on shared resources. This can have a lot of benefits, though it has some drawbacks too (replication of changes by other developers, etc.)
combine the above! Run your VMs and VS from a RAMdisk! This involves a lot of copying when starting work, but that is once a day... I think a coffee break and reading through the emails would be enough. Benefits: quick... Quicker than anything - once it has started.
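The RAMdisk claim is easy to sanity-check yourself. A crude toy benchmark, timing writes to an in-memory buffer versus writes forced to disk with fsync - the numbers vary wildly by machine, so treat it as an illustration rather than a measurement:

```python
# Crude illustration of why RAM-backed storage helps: time writing the
# same data to an in-memory buffer vs. a file flushed to disk.
import io
import os
import tempfile
import time

def time_writes(write_chunk, chunks=200, size=1 << 16):
    """Time writing `chunks` blocks of `size` bytes via write_chunk."""
    data = b"x" * size
    start = time.perf_counter()
    for _ in range(chunks):
        write_chunk(data)
    return time.perf_counter() - start

def bench():
    buf = io.BytesIO()
    mem_t = time_writes(buf.write)

    with tempfile.NamedTemporaryFile(delete=False) as f:
        def disk_write(data, f=f):
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to actually hit disk
        disk_t = time_writes(disk_write)
    os.unlink(f.name)
    return mem_t, disk_t

if __name__ == "__main__":
    mem_t, disk_t = bench()
    print(f"in-memory: {mem_t:.4f}s  fsync'd disk: {disk_t:.4f}s")
```

The gap you see is roughly the gap a RAMdisk buys you for build artefacts and temp files.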

Registration or licensing for an Adobe AIR application [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am building an Adobe AIR application which needs to work on Windows, Mac and Linux. One of the issues that has confused me is the registration/licensing process.
Basically, I want users to try out the full version of the software for a month and then buy it if they find it useful. What I am not able to figure out is how the licensing would work on all these platforms.
There is no registry on Mac and Linux where I can store the trial information.
If I somehow maintain things locally in a db, then post-trial, if the user simply uninstalls and reinstalls the software, the trial would start again for 30 days.
I don't want to store things in the filesystem, as that's not even close to actual authentication.
Doing an online activation of the software is a little resource-consuming and has a network dependency, so that option is also out of scope.
What way should I choose? What other options do I have? Does Adobe provide any support for this... any 3rd-party libraries that I can use for free?
I use LimeLM (https://wyday.com/limelm) to do licensing for my Adobe AIR app (Windows and Mac, no Linux). Like you, I have a 30-day trial; LimeLM has a trial feature which is tied to the hardware, so uninstalling/reinstalling won't give users another free trial.
LimeLM requires network activation, BUT you can allow for grace periods, so someone must connect to the network, say, once in 30 days of use to activate.
I agree with the above post that EncryptedLocalStore is a good idea as well.
Unfortunately, the licensing options for Adobe AIR are limited. LimeLM is functional and cheap (they don't take a cut of the purchase price). I looked at NitroLM, which is very expensive (I think they take 30% of the purchase price) and very complicated - I could never make sense of it. Zaqon is also out there; I didn't like the way their licensing interface looked to our users. LimeLM was the most flexible.
Have you tried EncryptedLocalStore? Data stored in ELS remains even after app uninstallation.
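The hardware-tied trial idea the answers describe can be sketched in a few lines: sign the trial-start date plus a machine identifier with an HMAC, so editing the stored record is detectable. This is an illustration, not a complete scheme - deleting the file still resets the trial (which is why LimeLM pairs it with a server check), and the embedded secret can be extracted by a determined attacker:

```python
# Sketch of a tamper-evident local trial record.
import hashlib
import hmac
import json
import time
import uuid

SECRET = b"replace-with-app-specific-secret"  # placeholder; obfuscate in the app

def machine_id():
    # uuid.getnode() returns the MAC address on most systems - a weak
    # but cross-platform machine identifier.
    return str(uuid.getnode())

def make_trial_record(start=None):
    """Create a signed record of when the trial started on this machine."""
    start = start if start is not None else int(time.time())
    payload = json.dumps({"start": start, "mid": machine_id()})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"payload": payload, "sig": sig})

def trial_days_left(record, days=30, now=None):
    """Remaining trial days, or None if the record was tampered with
    or came from another machine."""
    rec = json.loads(record)
    expected = hmac.new(SECRET, rec["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, rec["sig"]):
        return None
    payload = json.loads(rec["payload"])
    if payload["mid"] != machine_id():
        return None
    now = now if now is not None else int(time.time())
    return max(0, days - (now - payload["start"]) / 86400)
```

On AIR specifically, EncryptedLocalStore is the natural place to keep such a record, since it survives uninstallation as noted above.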

Have you ever interacted with a Nabaztag? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
The Nabaztag I ordered has arrived. I know there is an API to interact with the critter from your own software. I have also seen links to libraries in Perl and .NET among others, and have started work myself on a simple .NET Compact Framework 3.5 library for interacting with the bunny from my mobile phone.
I have seen at least one application claiming to interact with the Wifi bunny: the TFS Build Notification application by Rob Aquila. (Not related to this question, but this does look like a nice app to have running on a central monitor in a large TFS Team...)
I'm just curious about other people's experiences with the Nabaztag:
Have you ever used the Nabaztag API to interact with wireless rabbits?
What did you do? Is it freely available for me to try out on my bunny?
How did you like working with the API? Did you just use the HTTP API yourself or did you use a library? And if so, which library did you use?
Even if you did nothing with the API yourself, what applications and/or websites do you know of that can interact with a Nabaztag?
Any other tips?
This is a bit of a shameless plug for my employer, but someone wrote a quick and dirty Perl script to make a bunny read out log events from ZXTM (Zeus Extensible Traffic Manager).
The Perl script (and further up that page, how to plug it into ZXTM)
Video of the bunny
VMware image of ZXTM suitable for use on a desktop to try this out
I extended a Python API wrapper that others had started, and have a few apps (an ugly control panel, a personal weather and traffic reader, Google calendar events of the day). They are all available for download at www.mcgurrin.com/nabaztag.
I created a CruiseControl.NET plugin with it. I had some issues with the default API because it is not that well documented, so it needed a lot of experimenting. Furthermore, it is not that easy to develop against the default API.
So I made a .NET API (C#) which abstracts the Violet API away and gives you more help while developing, especially while creating choreographies (a pain in the butt they are, Yoda would say).
Currently neither is available to the general public, but I am in the process of releasing them.
Things that could be neat to implement on your bunny? I don't know - local traffic information (nice to have), new releases for music you like, interfacing with your phone (send commands from the phone to the bunny).
Hey peSHIr, congrats on getting a rabbit. Now that Violet got bought by Mindscape, it's sure it'll continue living...
I would like to develop fun stuff for the rabbit as well, but it seems like a big fuss and it's hard to get started - I checked out several APIs and proxies to get a grip on it and found many projects, but they were either useless or outdated. Although it's written in PHP, the OpenNab project seems to be one of the few active ones around. Maybe it's worth checking out?
http://opennab.sourceforge.net/
I hope Mindscape will provide a better API, or even better, open source the rabbit!
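For anyone wondering what "just use the HTTP API yourself" looked like: it was a single GET request with query parameters. A minimal sketch - the endpoint and parameter names (sn, token, tts) are from the historical Violet API and the original service is long gone, so treat them as assumptions; the URL-building pattern is the point:

```python
# Sketch of the Nabaztag plain-HTTP API pattern: one GET request per
# command, with the bunny's serial and API token as query parameters.
from urllib.parse import urlencode

API_BASE = "http://api.nabaztag.com/vl/FR/api.jsp"  # historical endpoint

def speak_url(serial, token, text):
    """Build the GET URL that asks the bunny to speak `text` (TTS)."""
    return API_BASE + "?" + urlencode(
        {"sn": serial, "token": token, "tts": text})

if __name__ == "__main__":
    url = speak_url("0013D3123456", "1234567890", "Hello from Python")
    print(url)
    # urllib.request.urlopen(url)  # would fire the request if the
    # service still existed; OpenNab emulates this protocol locally.
```

The same pattern covered ear positions and choreographies via other parameters, which is why thin wrappers in Perl, Python and .NET were so easy to write.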

Build Server Best Practices [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I heard Google has an automated process like this:
When you check in, your code is checked into a temporary location.
It is built.
Style checks run.
Tests run.
If there are no problems, the code goes to the actual repository.
You receive an e-mail containing test results, performance graphs, style check results and whether your code is checked in.
So if you want to learn whether you broke something or whether the great performance gain you expected occurred, you just check in and receive an e-mail telling you what you need to know.
What are your favorite build server best practices?
What you described for Google is what every basic build process does. Specific projects may have additional needs; for example, here is how we deploy web applications from staging to production:
Build start
Live site is taken offline (Apache redirects to different directory holding an "Under construction" page)
SVN update is run for the production server
Database schema deltas are run
Tests are run against the updated source and schema
In case of failure: a rollback is run (SVN revert and database schema UNDO)
Site gets back online
Build ends
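That deploy-with-rollback flow reduces to a small driver script. A sketch - every command below is a placeholder for your own SVN paths, delta scripts and test runner:

```python
# Sketch: run deploy steps in order; on the first failure, run the
# rollback steps. Either way the loop ends with a known state.
import subprocess

def run_steps(steps, rollback):
    """Run (name, shell-command) steps; on the first non-zero exit,
    run the rollback steps and return False."""
    for name, cmd in steps:
        print(f"==> {name}")
        if subprocess.call(cmd, shell=True) != 0:
            print(f"FAILED: {name} - rolling back")
            for rb_name, rb_cmd in rollback:
                print(f"<== {rb_name}")
                subprocess.call(rb_cmd, shell=True)
            return False
    return True

if __name__ == "__main__":
    ok = run_steps(
        steps=[
            ("take site offline",   "touch /var/www/under_construction"),
            ("svn update",          "svn update /var/www/production"),
            ("apply schema deltas", "./apply_deltas.sh"),  # placeholder
            ("run tests",           "./run_tests.sh"),     # placeholder
        ],
        rollback=[
            ("svn revert",  "svn revert -R /var/www/production"),
            ("schema undo", "./undo_deltas.sh"),           # placeholder
        ],
    )
    subprocess.call("rm -f /var/www/under_construction", shell=True)
    print("deploy", "succeeded" if ok else "failed")
```

The key property is that the "site back online" step runs on both the success and failure paths, matching the checklist above.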
On the Java platform I have tried every single major CI system there is. My tip: paying for a commercially supported solution has been the cheapest build system I've ever seen. These things take time to maintain, support and troubleshoot, especially with a heavy load of builds running all the time.
The example workflow you give is similar to the one proposed by TeamCity. The idea being:
Code
Check in to "pre-test"
CI server tests the "pre-commit"
If (and only if) tests pass, the CI server commits the code change to the main repo
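Stripped of the CI server, that pre-tested-commit gate is just "run the tests, push only on green". A sketch with placeholder commands:

```python
# Sketch: gate a commit on the test suite passing.
import subprocess

def gated_commit(test_cmd, push_cmd):
    """Run the tests; push to the shared repo only if they pass.
    Returns True when the push actually happened."""
    if subprocess.call(test_cmd, shell=True) != 0:
        print("tests failed - change NOT pushed")
        return False
    subprocess.call(push_cmd, shell=True)
    print("tests passed - change pushed")
    return True

if __name__ == "__main__":
    # placeholders for your own test runner and VCS push/commit command
    ok = gated_commit("./run_tests.sh", "svn commit -m 'gated change'")
    print("pushed" if ok else "rejected")
```

A CI server like TeamCity does the same thing centrally, on a clean machine, which is what the debate below is about.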
It's a religious war but I prefer:
Code - Test - Refactor (loop)
Commit
CI server also validates your commit
Every responsible programmer should run all the tests before committing.
The main argument for the first way is that it guarantees that there is no broken code in SCM. However, I would argue that:
You should trust your developers to test before committing
If the tests take too long, the problem is your slow tests, not the workflow
Developers are keen to keep tests fast
Relying on the CI server to run tests gives you a false sense of security