I have moved my SQL Server 2008 db with Stored Procedures onto SQL Azure. Next I wanted to define an API endpoint to that server URL and with the right authorizations & inputs have the stored proc return JSON (or the like).
However, research seems to indicate I need to route through another application/webserver/calling mechanism. Frankly, I'm hoping to concentrate on learning only db code (i.e. outsource the middle and front end), so to some extent all I want to do at present is test. Can anyone help me get a better understanding of the process?
SQL Database (essentially SQL Server-as-a-Service) doesn't provide a built-in data-access API. The only APIs available for the SQL Database service are system-level ones, that is, for provisioning and scaling. Side note: years ago there was an experimental OData service offered for SQL Azure (the former name of the SQL Database service), but it was terminated.
You'd need to run a separate service to handle your API. How you do that is quite broad and subjective, so I'll avoid giving specific recommendations, but... assuming you'll run your API in Azure, you have many places to run it, such as App Service (the most straightforward, since you don't have to deal with any infrastructure), Cloud Services, and Virtual Machines. And then there's the Azure API Management service (since your question mentions authorization, this might be a good thing for you to look into). This is all documented at azure.com.
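To make the shape of that middle layer concrete, here is a minimal sketch (one option among many, not a recommendation) of an ASP.NET Web API controller that calls a stored procedure and returns the rows as JSON. The connection string and the stored procedure name (dbo.GetOrders) are placeholders for whatever your database actually contains.

```csharp
// Minimal sketch: expose a stored procedure as a JSON endpoint via ASP.NET Web API.
// "dbo.GetOrders" and the connection string are placeholders.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Web.Http;

public class OrdersController : ApiController
{
    private const string ConnectionString =
        "Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;" +
        "User ID=youruser;Password=yourpassword;Encrypt=True;";

    // GET api/orders  (default Web API routing)
    public IEnumerable<Dictionary<string, object>> Get()
    {
        var rows = new List<Dictionary<string, object>>();

        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("dbo.GetOrders", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    var row = new Dictionary<string, object>();
                    for (int i = 0; i < reader.FieldCount; i++)
                        row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                    rows.Add(row);
                }
            }
        }

        return rows; // Web API serializes this to JSON by default
    }
}
```

Any of the hosting options mentioned above (App Service, Cloud Services, a VM) can run something like this; the database itself stays unchanged.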
Related
I have a SharePoint application that needs to integrate with very sensitive databases. The data required is from multiple databases; almost 40 different databases on different servers.
The suggested design was to have a web service to integrate with, which will then connect to the required database based on the required business logic. However, the concern is that if someone somehow got access to the server hosting this web service, all the database connections would be exposed.
Another suggestion was to have a dedicated web service for each database. That way, even if someone got access to one web service, only one database connection would be exposed.
The question is, is there any known design that can work for this situation to add more security to the database connections?
The answer really depends on your specific requirements. An easy way of doing this is to use the Open Data Protocol (OData), and then secure it with Windows/Active Directory authentication, or perhaps ASP.NET forms authentication.
Take a look at http://www.odata.org/ and http://msdn.microsoft.com/en-us/library/ff478141.aspx
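As a rough illustration only (the entity type and data source below are stand-ins for your real model or Entity Framework context), a read-only OData endpoint that piggy-backs on ASP.NET authentication could look something like this with WCF Data Services:

```csharp
// Rough sketch of a read-only OData endpoint secured by ASP.NET
// authentication (WCF Data Services, reflection provider). "Order" and
// "OrdersSource" stand in for your real model or EF context.
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.Web;

[DataServiceKey("Id")]
public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public class OrdersSource
{
    // Each IQueryable property becomes an OData entity set.
    public IQueryable<Order> Orders
    {
        get { return new List<Order>().AsQueryable(); } // placeholder data access
    }
}

public class OrdersDataService : DataService<OrdersSource>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose only what you want, read-only.
        config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }

    protected override void OnStartProcessingRequest(ProcessRequestArgs args)
    {
        // Piggy-back on whatever ASP.NET authentication is configured
        // (Windows auth or forms/membership login).
        if (!HttpContext.Current.User.Identity.IsAuthenticated)
            throw new DataServiceException(401, "Unauthorized");
        base.OnStartProcessingRequest(args);
    }
}
```

Hosting one such small service per database keeps each service holding only one connection string, which matches the second suggestion in the question.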
I am starting to port an old single-tenant desktop application to the cloud and would like to hear your recommendations about the database for my cloud-based multi-tenant application.
My basic requirement is simple:
For each tenant, its data is separate from any other tenant's data. I can easily back up, restore, or export the data for one single tenant without affecting other tenants.
I don't really want to care about multi-tenancy in the business logic code. It should look like a single-tenant application behind the security layer, with no tenant ID passed around, etc.
Easy to query using some mature technology like LINQ.
Availability and scalability, of course, easy to set up replicas, fail-over and scaling up and down etc.
I have gone through some investigation of multi-tenant application development. I have noticed that SQL databases from Azure and AWS are both very expensive (the cost of just the SQL database instance is close to the license fee of the original application), so I definitely can't use a separate SQL database instance per tenant.
Now I'm reading the book Developing Multi-tenant Applications for the Cloud, 3rd Edition, which uses the Azure Storage Service to implement multi-tenancy. I haven't finished the book yet, but it seems you still have to handle multi-tenancy yourself, and the sample code is already out of date.
I have seen lots of SO questions comparing Azure Table Storage with MongoDB. MongoDB is very new to me, and I'm not sure whether it could easily fulfill my requirements.
I have seen RavenDB as well; it does support multi-tenancy out of the box, but I haven't seen any good sample code on how to use it in Azure app development.
Hope to hear some good advice from the awesome SO folks.
I would opt for RavenDB over MongoDB. Even though Raven is a newcomer to the game, it supports most of the features that a traditional SQL database supports.
The volume of data you are dealing with and the amount of traffic you are expecting are also key decision points.
Also keep in mind operational costs and development effort. HA and DR scenarios can be problematic when you use Raven or Mongo because you need to host them yourself. Azure Storage, on the other hand, protects you by default to a large extent by maintaining three copies of your data.
So I would suggest you weigh the trade-offs carefully and choose based on your business needs, cost, and development and operational effort.
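For what it's worth, RavenDB's multi-tenancy story is essentially one database per tenant. A minimal sketch using the older RavenDB client API (class and property names differ in later versions; the server URL and tenant id are placeholders):

```csharp
// Sketch of RavenDB's database-per-tenant approach, using the older
// RavenDB client API (names differ in RavenDB 4.x+). Server URL and
// tenant id are placeholders.
using Raven.Client;
using Raven.Client.Document;

public static class TenantStore
{
    // One document store per server; each tenant gets its own database.
    public static IDocumentStore Create()
    {
        var store = new DocumentStore { Url = "http://localhost:8080" };
        store.Initialize();
        return store;
    }
}

// Usage: open a session against the tenant's own database, so queries and
// business logic never carry a tenant ID around:
//
//   using (var session = store.OpenSession("tenant-" + tenantId))
//   {
//       // session.Query<Order>() ... behaves like a single-tenant app
//   }
```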
Having a single instance of your application for each tenant is a very expensive way to implement an application; however, I realise that if an application was developed with a single tenant in mind, the cost of changing over can be high.
First, can we start with why you have a desktop application connecting to a database at another location? The latency can really slow down an application. Ideally you would want a locally installed database that syncs with the cloud DB, or appropriate caching added to your application.
However the DB would still need to differentiate the clients.
Why do you need this to go to a cloud database? Is it for backup purposes, not installing a DB locally on a client's machine, accessing the same data from many machines, or something else?
Unless your application is extremely large, I would recommend rewriting it for multi-tenant to one SQL Azure database. The architecture chosen at the beginning of the project doesn't suit your requirements now. As you expand you will run into further issues.
My organisation (a small non-profit) currently has an internal production .NET system with SQL Server database. The customers (all local to our area) submit requests manually that our office staff then input into the system.
We are now gearing up towards online public access, so that customers will be able to see the status of their existing requests online, and in future also be able to create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or use an external hosting service provider.
Hosting externally would mean keeping a copy of Requests database on the hosting provider's server. What would be the recommended way to then keep the requests data synced real-time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question I would have to ask you is: if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on site but you do have some web infrastructure available to you, you may want to consider creating a web service which provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the database hosted remotely with your website your production database and use a webservice to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
If a web service is not practical (or you have concerns about availability), you may want to consider a queuing system for synchronization. Any change to the DB (local or hosted) is also added to a messaging queue. Each side monitors the queue for changes that need to be made and then applies them. This would account for one of the databases not being available at any given time.
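A rough sketch of that queue idea, using Azure queue storage with the classic WindowsAzure.Storage SDK (class names differ in newer SDKs; the queue name, connection string, and message shape are placeholders):

```csharp
// Rough sketch of queue-based synchronization using Azure queue storage
// (classic WindowsAzure.Storage SDK). Queue name, connection string and
// message shape are placeholders.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class ChangeSync
{
    private readonly CloudQueue queue;

    public ChangeSync(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        queue = account.CreateCloudQueueClient().GetQueueReference("request-changes");
        queue.CreateIfNotExists();
    }

    // Called by whichever side made a change.
    public void PublishChange(string requestJson)
    {
        queue.AddMessage(new CloudQueueMessage(requestJson));
    }

    // Polled by the other side: apply each change, then delete the message.
    public void ApplyPendingChanges(Action<string> applyChange)
    {
        CloudQueueMessage message;
        while ((message = queue.GetMessage()) != null)
        {
            applyChange(message.AsString);
            queue.DeleteMessage(message);
        }
    }
}
```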
That being said, I agree with @LeviBotelho: syncing two DBs is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same: customer-submitted data. Currently it is entered by customers through you; ultimately it will be entered directly by them. I see no need to have two different databases with the same data. The replication errors alone, when they pop up (and they will), will be a headache for your team for nothing.
Our company is thinking about moving to the cloud. Would we still be able to meet all our current requirements (below)? We want to be able to easily scale in the future without high costs.
5 ASP.net 4.0 websites running (using sql databases, see below)
SQL Server 2008 Express (8 databases on this)
2 Scheduler services running (send nightly reports via email e.g. new orders in db)
MongoDB and Memcached are also installed on server
Currently the websites are on a separate server from the database server for security reasons.
We were thinking about Windows Azure and Amazon Web Services (AWS) as providers, which would best fit our requirements?
Are there any other factors we need to consider?
Re: SQL Databases: on Windows Azure this would map to SQL Azure. Costs start at $5/month for up to a 100 MB instance, go all the way up to 150 GB, and go beyond that with Federations.
Re: 5 ASP.net 4.0 websites running: these map naturally into Windows Azure Web Roles. The "small" instance is $0.12/hour/instance, and you'll usually want two instances (to avoid single point of failure for a few scenarios). Depending on your load, you may be able to put all 5 sites on the same instances. If you have very low usage sites, consider the $0.05/hour/instance "extra small" instance.
Re: Currently the websites are on a separate server from the database server for security reasons: of course this is also doable.
Re: 2 Scheduler services running: Running Windows Services is no problem.
Re: send nightly reports via email e.g. new orders in db: No problem doing this; it is not baked into Windows Azure directly, but there are many simple ways to do it (even for free, such as via SendGrid).
Re: We want to be able to easily scale in the future without high costs: you will need to do the math regarding your actual costs, but Windows Azure can surely scale.
Re: MongoDB and Memcached are also installed on server: These can both be run on Azure. Check out https://github.com/mongodb/mongo for MongoDB. The Azure Caching service is also available (managed for you).
Re: We were thinking about Azure and Amazon as providers, which would best fit our requirements: These are functionally very similar (in capability and cost), with a few noteworthy differences.
Windows Azure is Platform as a Service, meaning that you don't need to worry about virtual machines, but rather applications. In other words, you upload your (basically) zipped app package to the cloud for execution. With Amazon, you will be dealing with the virtual machine yourself. In Azure, you get a copy of Windows Server 2008 which is managed for you, but you can also do admin things to it if you need to. This is far less of an advantage if your app is an old, messy install that isn't really clean (though such an app may not be a good high-value cloud candidate anyway).
Windows Azure has an emulator that works great: you can hit F5 right from Visual Studio to work with the storage system, VMs, and other popular features.
Re: Are there any other factors we need to consider: Yes. With any cloud application, you need to be prepared to deal with scaling out (not up), dealing with transient retries (you may need to retry an operation to a cloud service - any cloud service). The benefits of this are much better (and more cost-effective) scalability and higher reliability (when you run across nodes, you don't have a single point of failure). Be sure to understand when/where storage on a VM is persistent vs. ephemeral. There are more considerations, but these are primary ones.
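To illustrate the transient-retry point, here is a minimal hand-rolled sketch; in practice you would more likely use a ready-made library such as the Transient Fault Handling Application Block, and the retry count and delays below are arbitrary:

```csharp
// Minimal hand-rolled transient retry with a growing back-off.
// Retry count and delays are arbitrary; real code would only retry on
// errors known to be transient rather than on every exception.
using System;
using System.Threading;

public static class Transient
{
    public static T Retry<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;                                   // give up, surface the error
                Thread.Sleep(TimeSpan.FromSeconds(attempt)); // back off a little longer each time
            }
        }
    }
}

// Usage (LoadOrdersFromSqlAzure is a placeholder for your own data call):
//   var orders = Transient.Retry(() => LoadOrdersFromSqlAzure());
```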
You may want to check out the Windows Azure Pricing calculator.
Good luck! And welcome to the cloud.
With the exception of the scaling question and the two physical servers, you can move this functionality into a hosted environment and you will technically be in "the cloud". This could be a dedicated server or a VPS (Virtual Private Server), or even a shared server if you are small.
Those can allow for growth over time; you just need to upgrade what you have with the provider.
You could also use a colo server with a hosting provider, which basically means you put your hardware in the hosting provider's rack and use their electricity and bandwidth. They charge based on bandwidth usage.
Since you are using SQL Server Express, remember that each database is limited in size (4 GB in 2008 Express, 10 GB in 2008 R2 and later Express editions). So that will limit your growth at some point, and avoiding it would entail an upgrade from Express to regular SQL Server if you don't want to re-engineer anything.
Have you considered AppHarbor? It has Memcached, MongoDB, SQL Server and so on, and is quicker to deploy to than Azure. I like Azure, but there is quite a learning curve, and I have found the connection to SQL Azure to be pretty bad, which means re-engineering your DAL to use something like the SQL transient failure library, a bit of a faff for existing projects.
AppHarbor does not have blob storage, so if you are uploading files you will need to use Azure Blob Storage, Amazon S3, or some equivalent as well.
Hope this helps.
I'm not an expert, but since ASP.NET is a Microsoft product it should be easier to migrate to Azure, although from what I have heard AWS shouldn't be difficult either. Another thing you may want to consider is cost. Last time I checked, AWS was significantly less costly unless you already pay for MSDN subscriptions.
None of the requirements you sum up are an issue to deploy on Windows Azure. You can find a lot of information on the internet on how to do this.
Keep in mind, if you want to deploy your services to Windows Azure, you'll need to do some code review of your applications to fix session state, output cache and so forth in your web applications.
Since you want to scale them out and they will be sitting behind a non-sticky round-robin load balancer, you will run into issues with your session state if it is saved on the machine itself. You'll need to move session state to SQL Azure or to Windows Azure table storage, for example.
Installing MongoDB and Memcached in Azure is not an issue; you'll find a lot of information on how to do it, but it'll require some work to set up your roles and the scripting.
codingoutloud has given a very detailed answer. I would add two key considerations to think about when moving any application to Azure (or, indeed, many other cloud providers).
Local state
With normal Azure, they reserve the right to shut down any one instance of a role at any time in order to move or upgrade it. This means you always need at least two instances of any one role, and they will be transparently load balanced. If your websites are currently running on individual servers, they may rely on session state or files in local directories, etc. There are ways around this (like putting session state in SQL, using the cookie provider for temp data, using a shared drive for files, etc.), or, indeed, you can bypass a lot of the benefits of Azure and use their "virtual server" concepts, which means you don't get the scale benefits, etc.
But, sites that rely heavily on local state may be challenging to move to the cloud.
Time Zones
All Azure servers run on UTC time. If you are used to running on dedicated servers serving users from a single time zone, then chances are that you use things like DateTime.Now, which won't really correspond to what the user wants.
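The usual workaround is small: store and compute in UTC, and convert only when displaying to a user. A sketch (the time zone id is just an example):

```csharp
// Store and compute in UTC; convert only when showing times to a user.
// "Eastern Standard Time" is just an example time zone id.
using System;

public static class Clock
{
    public static DateTime ToUserLocalTime(DateTime utc, string timeZoneId)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
        return TimeZoneInfo.ConvertTimeFromUtc(utc, zone);
    }
}

// On an Azure instance the server clock is UTC, so be explicit:
//   var shownToUser = Clock.ToUserLocalTime(DateTime.UtcNow, "Eastern Standard Time");
```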
I don't see any of the above as limitations of Azure, I find them very useful in forcing you to build global and scalable solutions from the start. However, when porting an existing application, the above may be quite a challenge to adapt to, even though there are workarounds.
As also mentioned elsewhere, there is a learning curve to Azure and somehow the documentation - plentiful as it is - just doesn't quite seem to help for some reason. Once you "get it", though, I find Azure really nice and there are a bunch of subtle features that will help you build scalable solutions, like the whole queuing infrastructure, the blob storage and the table storage. In some ways the learning is hampered by having too much choice.
Good luck!
First of all, let me be clear that I am not from a web background, so if any of my understanding of how this works is incorrect, please feel free to correct me.
Let's say I have a website which I would like to host on cloud because
- I don't want to take care of hardware
- I want to scale my website as needed
Now I am a bit confused between role of SQL Server vs role of SQL Azure in this case.
Normal Web Hosting
When I think of a normal website, I know that I need a host/server on which my website will be hosted. The host should be able to support SQL Server. For scaling purposes I will have to host my website/ASP pages on multiple servers. Similarly, if I want to scale my SQL Server, I will have to host it on multiple servers and make sure data is up to date on all servers through some mechanism.
Cloud Based Hosting
Now I think I can setup similar structure on Cloud/Azure as well. If yes, would I be using true capabilities of Cloud in this case?
Or should I use SQL Azure instead of SQL Server? What benefit would I get in that case? Would I still be responsible for scaling and consistency of data? I know I can scale up the website by setting the number of VMs/instances, but what about scaling the database?
Edit
Thanks to Florin Dumitrescu: the terminology I wanted to use was Scaling Out, because I am more concerned about performance rather than how big my database is in terms of size. I am more concerned about how the database would scale across different servers/systems to accommodate the load and thus result in better performance.
SQL Azure, as Yossi mentioned, is a Database-as-a-Service. As such, you simply ask for it to be provisioned, magic happens, and you have a database that scales from 1GB to 5GB, 10GB, all the way to 50GB (soon to be 150GB as announced at SQL PASS). The nice thing about SQL Azure: you don't have to worry about any infrastructure, servers, licensing, etc. You simply connect with your connection string. SQL Azure is designed to be scalable to handle a considerable number of concurrent tenants, so you don't have to concern yourself with scaling.
SQL Azure also replicates its data in the data center, to provide "durable" storage. You still need to design a Disaster Recovery scheme, in case the data center becomes unavailable (and you can use the Data Sync service for that).
As far as your website itself: as you scale out to multiple instances, each instance runs the same code and uses the same resources. Taking this one step further, you can move your static (non-changing) web content, such as images and CSS, to Blob storage (a short upload sketch follows the list below). This has several advantages over storing them with the website itself:
Ability to enable the Content Delivery Network, a worldwide edge-caching service providing better performance for your end users
Less strain on your web server instances, as requests for those images will now be directed to Blob storage, a completely separate URL from your website
Ability to update an image or stylesheet without having to re-deploy your application - simply upload a new file to Blob storage.
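For illustration, a sketch of pushing a static asset to Blob storage with the classic WindowsAzure.Storage SDK (method names differ in newer SDKs; the container name, file path, and connection string are placeholders):

```csharp
// Sketch: upload a static asset (image/CSS) to Blob storage so the web
// roles no longer serve it themselves. Classic WindowsAzure.Storage SDK;
// container name, path and connection string are placeholders.
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class StaticContent
{
    public static void Upload(string connectionString, string localPath, string blobName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("static");
        container.CreateIfNotExists();

        // Make blobs publicly readable so the site (or the CDN) can serve them directly.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });

        var blob = container.GetBlockBlobReference(blobName);
        using (var stream = File.OpenRead(localPath))
        {
            blob.UploadFromStream(stream);
        }

        // The site then links to
        // http://<account>.blob.core.windows.net/static/<blobName>
        // (or to the CDN endpoint once the CDN is enabled).
    }
}
```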
I highly recommend the Windows Azure Platform Training Kit, as there are labs that take you through the fundamentals of all of this, with complete code samples as well. This is updated almost monthly, staying in sync with the latest Windows Azure SDK and tools.
If you're hosting your web site in the cloud and you need a database, then SQL Azure is almost certainly the best option.
SQL Azure is a database as a service, so you'll create your database and work against it from your code but not have to worry about the provisioning; there are no servers as such, it is all taken care of for you.
From an application point of view it looks and behaves pretty much like SQL Server, so initially all that changes is the connection string.
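For example, a typical SQL Azure connection string of that era looks like the one below (server, database, and credentials are placeholders); the ADO.NET or Entity Framework code on top of it stays the same:

```csharp
// Roughly all that changes on the application side: the connection string
// points at the SQL Azure server. Server, database and credentials below
// are placeholders.
using System.Data.SqlClient;

class ConnectionExample
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +
            "Database=yourdb;User ID=youruser@yourserver;" +
            "Password=yourpassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // the ADO.NET / Entity Framework code on top is unchanged
        }
    }
}
```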
As others noted, SQL Azure takes away your concerns about setting up and taking care of the infrastructure. This is part of the premise of Azure in general, which is to provide a platform rather than just infrastructure.
The price you pay for that is some limitations on capabilities (vs. regular SQL Server): a limit on database size (at least until Federations become available) and increased latency (since your database is not running on the same server as your app).
Microsoft has published a "SQL Azure Performance and Elasticity Guide" which you should probably take a look at.