I have a smallish ASP.NET site. I want to instrument it, in particular logging the values of performance counters. Is there an external service which would do this? E.g. New Relic RPM, except I need the solution to work on a shared hosting provider where I don't have access to the hosting server.
Thanks
I have a SharePoint application that needs to integrate with very sensitive databases. The data required comes from multiple databases: almost 40 different databases on different servers.
The suggested design was to have a single web service to integrate with, which would then connect to the required database based on the business logic involved. However, the concern is that if someone somehow got access to the server hosting this web service, all the database connections would be exposed.
Another suggestion was to have a dedicated web service for each database. That way, even if someone got access to one web service, only one database connection would be exposed.
The question is: is there any known design that can work for this situation and add more security to the database connections?
The answer really depends on your specific requirements. An easy way of doing this is to use the Open Data Protocol (OData) and then secure it with Windows (Active Directory) authentication, or perhaps ASP.NET forms authentication.
Take a look at http://www.odata.org/ and http://msdn.microsoft.com/en-us/library/ff478141.aspx
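To make that concrete, here is a minimal sketch using WCF Data Services (the .NET OData implementation); ReportingEntities and the Orders entity set are hypothetical stand-ins for one of your databases:

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// A minimal, read-only OData endpoint over one database.
// "ReportingEntities" is a hypothetical Entity Framework context.
public class ReportingService : DataService<ReportingEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose only the entity sets you actually intend to share.
        config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```

You would then lock the endpoint down in IIS with Windows or forms authentication, and if you go with the dedicated-service design, deploy one such service per database so each service holds only one connection string.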
My organisation (a small non-profit) currently has an internal production .NET system with SQL Server database. The customers (all local to our area) submit requests manually that our office staff then input into the system.
We are now gearing up towards online public access, so that the customers will be able to see the status of their existing requests online, and in future also be able to create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or use an external hosting service provider.
Hosting externally would mean keeping a copy of the Requests database on the hosting provider's server. What would be the recommended way to then keep the requests data synced in real time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question I would have to ask you is: if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on site but you do have some web infrastructure available to you, you may want to consider creating a web service which provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the remotely hosted database behind your website the production database and use a web service to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
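For instance, the "well-defined methods" could be as small as this (a hypothetical WCF sketch; all names are made up):

```csharp
using System;
using System.ServiceModel;

// Both the public website and the office system call this service
// instead of each talking to its own copy of the database.
[ServiceContract]
public interface IRequestService
{
    [OperationContract]
    string GetRequestStatus(Guid requestId);

    [OperationContract]
    Guid SubmitRequest(string customerId, string details);
}
```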
If a web service is not practical (or you have concerns about availability) you may want to consider a queuing system for synchronization. Any change to the db (local or hosted) is also added to a messaging queue. Each side monitors the queue for changes that need to be made and then applies them. This also accounts for one of the databases being unavailable at any given time.
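A rough sketch of what each queued change might carry (all names here are hypothetical; the queue itself could be MSMQ, a hosted queue, or even a table):

```csharp
using System;

// The message both sides enqueue whenever a request row changes.
public class RequestChange
{
    public Guid RequestId { get; set; }
    public string Operation { get; set; }    // "Insert", "Update" or "Delete"
    public string PayloadJson { get; set; }  // the changed row, serialized
    public DateTime ChangedAtUtc { get; set; }
}
```

Each side then runs a small worker that dequeues these messages and applies them to its local database.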
That being said, I agree with @LeviBotelho: syncing two DBs is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same, customer-submitted data. Currently it is entered by your staff on the customers' behalf; ultimately it will be entered directly by the customers. I see no need to have two different databases with the same data. The replication errors, when they pop up (and they will), will be a headache for your team for nothing.
Our company is thinking about moving to the cloud. Would we still be able to meet all our current requirements (below)? We want to be able to easily scale in the future without high costs.
5 ASP.net 4.0 websites running (using sql databases, see below)
SQL Server 2008 Express (8 databases on this)
2 Scheduler services running (send nightly reports via email e.g. new orders in db)
MongoDB and Memcached are also installed on server
Currently the websites are on a separate server from the database server for security reasons.
We were thinking about Windows Azure and Amazon Web Services (AWS) as providers; which would best fit our requirements?
Are there any other factors we need to consider?
Re: SQL Databases: on Windows Azure this would map to SQL Azure. Costs start at $5/month for up to a 100 MB instance - and goes all the way up to 150 GB - and goes beyond that with Federations.
Re: 5 ASP.net 4.0 websites running: these map naturally into Windows Azure Web Roles. The "small" instance is $0.12/hour/instance, and you'll usually want two instances (to avoid single point of failure for a few scenarios). Depending on your load, you may be able to put all 5 sites on the same instances. If you have very low usage sites, consider the $0.05/hour/instance "extra small" instance.
Re: Currently the websites are on a separate server from the database server for security reasons: of course this is also doable.
Re: 2 Scheduler services running: Running Windows Services is no problem.
Re: send nightly reports via email e.g. new orders in db: No problem doing this. Email is not baked into Windows Azure directly, but there are many simple ways to send it (even for free, such as via SendGrid).
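For instance, a nightly report could go out through SendGrid's SMTP relay with nothing more than System.Net.Mail (the credentials, addresses, and BuildNightlyReport helper below are placeholders):

```csharp
using System.Net;
using System.Net.Mail;

// Send the nightly report through SendGrid's SMTP relay.
string reportBody = BuildNightlyReport(); // hypothetical helper that queries the db for new orders

var client = new SmtpClient("smtp.sendgrid.net", 587)
{
    Credentials = new NetworkCredential("your-sendgrid-user", "your-sendgrid-key")
};
client.Send("reports@example.com", "team@example.com", "Nightly report: new orders", reportBody);
```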
Re: We want to be able to easily scale in the future without high costs: you will need to do the math regarding your actual costs, but Windows Azure can surely scale.
Re: MongoDB and Memcached are also installed on server: These can both be run on Azure. Check out https://github.com/mongodb/mongo for MongoDB. The Azure Caching service is also available (managed for you).
Re: We were thinking about Azure and Amazon as providers, which would best fit our requirements: These are functionally very similar (in capability and cost), with a few noteworthy differences.
Windows Azure is Platform as a Service, meaning that you don't need to worry about Virtual Machines, but rather about Applications. In other words, you upload your (essentially zipped) app package to the cloud for execution. With Amazon, you will be dealing with the Virtual Machine yourself. In Azure, you get a copy of Windows Server 2008 which is managed for you, but you can also do admin things to it if you need to. This is far less of an advantage if your app is an old, messy install that isn't really clean (though such an app may not be a good high-value cloud candidate anyway).
Windows Azure has an emulator that works great: you can hit F5 right in Visual Studio to work with the storage system, the VMs, and the other more popular features.
Re: Are there any other factors we need to consider: Yes. With any cloud application, you need to be prepared to deal with scaling out (not up) and with transient retries (you may need to retry an operation against a cloud service - any cloud service). The benefits of this are much better (and more cost-effective) scalability and higher reliability (when you run across nodes, you don't have a single point of failure). Be sure to understand when/where storage on a VM is persistent vs. ephemeral. There are more considerations, but these are the primary ones.
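A bare-bones sketch of the transient-retry idea (real code would catch only the specific transient exceptions of the service being called; the Transient Fault Handling Application Block packages this pattern up for you):

```csharp
using System;
using System.Threading;

public static class Transient
{
    // Retry an operation a few times with a simple linear backoff.
    public static T WithRetries<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts) throw;
                Thread.Sleep(TimeSpan.FromMilliseconds(200 * attempt));
            }
        }
    }
}
```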
You may want to check out the Windows Azure Pricing calculator.
Good luck! And welcome to the cloud.
With the exception of the scaling question and the 2 physical servers, you can move this functionality into a hosted environment and you will technically be in "the cloud". This could be a dedicated server or a VPS (Virtual Private Server), or even a shared server if you are small.
Those can allow for growth over time...you just need to upgrade what you have with the provider.
You could also use a colo server with a hosting provider, which basically means you put your own hardware in the hosting provider's rack and use their electricity and bandwidth. They charge based on bandwidth usage.
Since you are using SQL Server 2008 Express, remember that each database is limited to 4 GB (10 GB in 2008 R2 Express). So that will limit your growth at some point, and would entail an upgrade from Express to a full SQL Server edition if you don't want to re-engineer anything.
Have you considered AppHarbor? It has Memcached, MongoDB, SQL Server and so on, and is quicker to deploy to than Azure. I like Azure, but there is quite a learning curve, and I have found the connection to SQL Azure to be pretty bad - which means re-engineering your DAL to use something like the Transient Fault Handling Application Block = a bit of a faff for existing projects.
AppHarbor does not have blob storage, so if you are uploading files you will need to use Azure Blob Storage, Amazon S3, or some equivalent as well.
Hope this helps.
Not an expert, but given that ASP.NET is a Microsoft product, it should be easier to migrate to Azure, although from what I have heard AWS shouldn't be difficult either. Another thing you may want to consider is cost. Last time I checked, AWS was significantly less costly unless you already pay for MSDN subscriptions.
None of the requirements you list is an issue to deploy on Windows Azure. You can find a lot of information on the internet on how to do this.
Keep in mind that if you want to deploy your services to Windows Azure, you'll need to review the code of your web applications to fix session state, output caching and so forth.
Since you want to scale out, and the instances sit behind a non-sticky round-robin load balancer, you will run into issues if session state is saved on the machine itself. You'll need to move session state to SQL Azure or to Windows Azure table storage, for example.
Installing MongoDB and Memcached in Azure is not an issue either; you'll find a lot of information on how to do it, but it will require some work to set up your role and the scripting.
codingoutloud has given a very detailed answer. I would add two very key considerations to think about when moving any application to Azure (or, indeed, many other cloud providers).
Local state
With normal Azure, they reserve the right to shut down any one instance of a role at any time in order to move or upgrade it. This means you always need at least two instances of any one role, and they will be transparently load balanced. If your websites are currently running on individual servers, then they may rely on session state, files in local directories, etc. Now, there are ways around this (like putting session state in SQL, using the cookie provider for temp data, using a shared drive for files, etc.) or, indeed, bypassing a lot of the benefits of Azure and using their "virtual server" concept, which means you don't get the scale benefits, etc.
But, sites that rely heavily on local state may be challenging to move to the cloud.
Time Zones
All Azure servers run on UTC time. If you are used to running on dedicated servers serving users from a single time zone, then chances are that you use things like DateTime.Now, which won't really correspond to what the user expects.
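The usual fix is to store and compute in UTC and convert only for display; a minimal sketch (the time zone id below is just an example):

```csharp
using System;

// Store DateTime.UtcNow; convert to the user's zone only when rendering.
DateTime utcNow = DateTime.UtcNow;
TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
DateTime localForUser = TimeZoneInfo.ConvertTimeFromUtc(utcNow, tz);
```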
I don't see any of the above as limitations of Azure, I find them very useful in forcing you to build global and scalable solutions from the start. However, when porting an existing application, the above may be quite a challenge to adapt to, even though there are workarounds.
As also mentioned elsewhere, there is a learning curve to Azure and somehow the documentation - plentiful as it is - just doesn't quite seem to help for some reason. Once you "get it", though, I find Azure really nice and there are a bunch of subtle features that will help you build scalable solutions, like the whole queuing infrastructure, the blob storage and the table storage. In some ways the learning is hampered by having too much choice.
Good luck!
I have been looking for a solution to allow us to monitor our web servers' performance counters through an ASP.NET website.
Is there an existing tool that I can make use of to accomplish this or will I need to roll my own?
The only solution I have found online is to use perfmon to connect to the remote server, which I need to avoid.
The only criteria we have are the ability to select or configure which counters are used, and a web interface to view these counters at a later date. We need a historical record of the servers' performance.
We are using asp.net websites on IIS.
Thanks
Using perfmon remotely is the standard way to monitor performance counters remotely. This is done by sys admins across the globe.
Why do you need to avoid this?
However, you will need to roll your own. I have done this in the past (for users who could not figure out perfmon...).
In terms of historical data: you will need to poll the performance counters yourself and record the data somehow (database, flat files, etc.).
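Something along these lines (a minimal sketch; SaveSample is a hypothetical method writing to whatever store you choose):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Sample one counter every 15 seconds and persist the readings.
var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
cpu.NextValue(); // the first read of a rate counter always returns 0, so prime it

while (true)
{
    Thread.Sleep(TimeSpan.FromSeconds(15));
    SaveSample(DateTime.UtcNow, "% Processor Time", cpu.NextValue());
}
```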
You can also set up a website to display current values and to control and configure the performance counters; the account the site runs under will require sufficient permissions, however.
What issues do I need to be aware of when I am deploying an ASP.NET application as a web farm?
All session state information would need to be replicated across servers. The simplest way is to use the SQL Server session state provider, as noted.
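The web.config change is small, assuming the session database has been created with aspnet_regsql.exe (the connection string below is a placeholder):

```xml
<!-- Every server in the farm points at the same session database. -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI;"
              cookieless="false"
              timeout="20" />
```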
Any disk access, such as dynamic files stored by users, would need to be in a location available to all servers, for example some form of network-attached storage. Script files, images, HTML and so on would just be replicated on each server.
Attempting to store any information in the application object or to load information on application startup would need to be reviewed. The events would fire each time the user hits a new machine in the farm.
Machine keys across the servers are a very big one, as other people have suggested. You may also have problems if you are using SSL against an IP address rather than a domain.
You'll also have to consider which load-balancing strategy you're going to use, as this could change your approach.
Session state is a big one: make sure you use SQL Server for managing sessions, and that all servers point to the same SQL Server instance.
One of the big ones I've run across is issues with different machineKeys spread across the different servers. ASP.NET uses the machineKey for various encryption operations such as ViewState and FormsAuthentication tickets. If you have different machineKeys you could end up with servers not understanding postbacks from other servers. Take a look here if you want more information: http://msdn.microsoft.com/en-us/library/ms998288.aspx
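The fix is to set the same explicit keys on every server rather than letting each one auto-generate its own (the values below are placeholders, not usable keys):

```xml
<machineKey validationKey="REPLACE-WITH-128-HEX-CHARACTERS"
            decryptionKey="REPLACE-WITH-64-HEX-CHARACTERS"
            validation="SHA1"
            decryption="AES" />
```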
1. Don't use sessions; use profiles instead. You can configure a SQL cluster to serve them. Sessions will query your session database way too often, while profiles just load themselves once, and that's it.
2. Use a distributed caching store like memcached for caching data, and the ASP.NET cache for things you'll need a lot.
3. Use a SAN or an EMC array to serve your static content.
4. Use S3 or something similar to have a fallback for 3.
5. Have a decent load balancer, so you can easily update server by server without ever needing to shut down the site.
HOW TO: Set Up Multi-Server ASP.NET Web Applications and Web Services
Log aggregation is easily overlooked: before processing HTTP logs, you might need to combine the logs from all servers into a single log covering requests across the whole farm.
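If the logs are in the default W3C format, even a crude merge works; a sketch, assuming the per-server logs have been copied under one folder and each data line starts with a "yyyy-MM-dd hh:mm:ss" timestamp:

```csharp
using System;
using System.IO;
using System.Linq;

// Merge the W3C logs from every server into a single time-ordered file.
var merged = Directory.EnumerateFiles(@"C:\logs", "*.log", SearchOption.AllDirectories)
    .SelectMany(File.ReadLines)
    .Where(line => !line.StartsWith("#") && line.Length > 19) // skip W3C header directives
    .OrderBy(line => line.Substring(0, 19));                  // order by the date/time prefix

File.WriteAllLines(@"C:\logs\combined.log", merged);
```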