How to sync data between a company's internal database and an externally hosted application's database - ASP.NET

My organisation (a small non-profit) currently has an internal production .NET system with a SQL Server database. The customers (all local to our area) submit requests manually, which our office staff then input into the system.
We are now gearing up towards online public access, so that customers will be able to see the status of their existing requests online and, in future, also be able to create new requests online. A new ASP.NET application will be developed for this purpose.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or use an external hosting service provider.
Hosting externally would mean keeping a copy of the Requests database on the hosting provider's server. What would be the recommended way to then keep the requests data synced in real time between the hosted database and our existing production database?

Trying to sync back and forth between two in-use databases will be a constant headache. The question I would have to ask you is: if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on-site but you do have some web infrastructure available to you, you may want to consider creating a web service which provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the database hosted remotely with your website your production database and use a web service to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
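For illustration, here is a minimal ASMX-style sketch of such a web service (all names are hypothetical, and the actual data access is stubbed out); the hosted site would call these methods instead of opening a database connection of its own:

```csharp
using System.Web.Services;

// A sketch of a web service exposing the Requests database through a few
// well-defined methods instead of a raw database connection.
[WebService(Namespace = "http://example.org/requests/")]
public class RequestService : WebService
{
    [WebMethod]
    public string GetRequestStatus(int requestId)
    {
        // Look up the request in the production database; the actual
        // ADO.NET query is omitted for brevity.
        return LookupStatus(requestId);
    }

    [WebMethod]
    public int CreateRequest(string customerId, string description)
    {
        // Insert the new request and return its generated id.
        return InsertRequest(customerId, description);
    }

    private string LookupStatus(int requestId) { return "Pending"; /* stub */ }
    private int InsertRequest(string customerId, string description) { return 0; /* stub */ }
}
```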

If a web service is not practical (or you have concerns about availability) you may want to consider a queuing system for synchronization. Any change to the DB (local or hosted) is also added to a message queue. Each side monitors the queue for changes that need to be made and then applies them. This would account for one of the databases not being available at any given time.
That being said, I agree with @LeviBotelho, syncing two DBs is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
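A rough sketch of the queue idea using MSMQ (the queue path, change type, and apply step are all assumptions for illustration). Each side enqueues its local changes; the other side drains the queue and applies them, so a temporarily unavailable database just means the messages wait:

```csharp
using System;
using System.Messaging;

// Hypothetical change message describing one update to a request.
[Serializable]
public class RequestChange
{
    public int RequestId { get; set; }
    public string NewStatus { get; set; }
}

public static class RequestSync
{
    private const string QueuePath = @".\Private$\RequestSync";

    // Called alongside every local database write.
    public static void Enqueue(RequestChange change)
    {
        using (var queue = new MessageQueue(QueuePath))
            queue.Send(change);
    }

    // Runs on the other side, e.g. in a Windows service.
    public static void DrainAndApply()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(RequestChange) });
            while (true)
            {
                var message = queue.Receive();       // blocks until a message arrives
                var change = (RequestChange)message.Body;
                ApplyChange(change);                 // hypothetical: UPDATE Requests SET ...
            }
        }
    }

    private static void ApplyChange(RequestChange change) { /* database write omitted */ }
}
```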

Ultimately the data is the same: customer-submitted data. Currently it is entered by the customers through you; ultimately it will be entered directly by them. I see no need for two different databases holding the same data. The replication errors alone, when they pop up (and they will), will be a headache for your team for nothing.

Related

Securing a database using a web service

I have a SharePoint application that needs to integrate with very sensitive databases. The data required is from multiple databases; almost 40 different databases on different servers.
The suggested design was to have a web service to integrate with, which would then connect to the required database based on the required business logic. However, the concern is that if someone somehow got access to the server hosting this web service, all the database connections would be exposed.
Another suggestion was to have a dedicated web service for each database. This way, even if someone got access to one web service, only one database connection would be exposed.
The question is, is there any known design that can work for this situation to add more security to the database connections?
The answer really depends on your specific requirements. An easy way of doing this is to use the Open Data Protocol (OData), and then secure it with a Windows directory login, or perhaps ASP.NET authentication.
Take a look at http://www.odata.org/ and http://msdn.microsoft.com/en-us/library/ff478141.aspx
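For a sense of what that looks like, here is a sketch of a read-only OData endpoint with WCF Data Services (the "MyEntities" Entity Framework context and "Customers" entity set are hypothetical); you would then lock it down with Windows or ASP.NET authentication at the host level:

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// Expose only selected entity sets, read-only, over OData.
public class SensitiveDataService : DataService<MyEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Whitelist individual entity sets rather than opening everything.
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
```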

Normal vs Cloud/Azure Hosting and role of SQL Azure vs SQL Server

First of all, let me clarify that I am not from a web background, so if any of my understanding about how this works is incorrect, please feel free to correct me.
Let's say I have a website which I would like to host on the cloud because:
- I don't want to take care of hardware
- I want to scale my website as needed
Now I am a bit confused about the role of SQL Server vs. the role of SQL Azure in this case.
Normal Web Hosting
When I think of a normal website, I know that I need a host/server on which my website will be hosted, and the host should be able to support SQL Server. For scaling purposes I will have to host my website/ASP pages on multiple servers. Similarly, if I want to scale my SQL Server, I will have to host it on multiple servers and make sure the data is up to date on all servers through some mechanism.
Cloud Based Hosting
Now I think I can set up a similar structure on Cloud/Azure as well. If so, would I be using the true capabilities of the cloud in this case?
Or should I use SQL Azure instead of SQL Server? What benefit would I get in that case? Would I still be responsible for scaling and for consistency of the data? I know I can scale up the website by setting the number of VMs/instances, but what about scaling of the database?
Edit
Thanks to Florin Dumitrescu, the terminology I wanted to use was Scaling Out, because I am more concerned about performance than about how big my database is in terms of size. I am more concerned about how the database would scale across different servers/systems to accommodate the load and hence result in better performance.
SQL Azure, as Yossi mentioned, is a Database-as-a-Service. As such, you simply ask for it to be provisioned, magic happens, and you have a database that scales from 1GB to 5GB, 10GB, all the way to 50GB (soon to be 150GB as announced at SQL PASS). The nice thing about SQL Azure: you don't have to worry about any infrastructure, servers, licensing, etc. You simply connect with your connection string. SQL Azure is designed to be scalable to handle a considerable number of concurrent tenants, so you don't have to concern yourself with scaling.
SQL Azure also replicates its data in the data center, to provide "durable" storage. You still need to design a Disaster Recovery scheme, in case the data center becomes unavailable (and you can use the Data Sync service for that).
As far as your website itself: as you scale out to multiple instances, each instance runs the same code and uses the same resources. Taking this one step further, you can move your static (non-changing) web content, such as images and CSS, to Blob storage. This has several advantages over storing them with the website itself (a short upload sketch follows this list):
- Ability to enable the Content Delivery Network, a worldwide edge-caching service providing better performance for your end users
- Less strain on your web server instances, as requests for those images will now be directed to Blob storage, a completely separate URL from your website
- Ability to update an image or stylesheet without having to re-deploy your application - simply upload a new file to Blob storage.
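As a rough illustration, using the StorageClient library from the Windows Azure SDK of that era (account name/key and container name are made up), moving a stylesheet to Blob storage is just an upload:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class StaticContentUploader
{
    // Upload a static asset to a public blob container so the website (and
    // optionally the CDN) can serve it directly.
    public static void UploadStylesheet()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var container = account.CreateCloudBlobClient().GetContainerReference("static");
        container.CreateIfNotExist();
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob  // anonymous read on blobs
        });
        container.GetBlobReference("css/site.css").UploadFile(@"css\site.css");
        // Now served from http://myaccount.blob.core.windows.net/static/css/site.css
    }
}
```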
I highly recommend the Windows Azure Platform Training Kit, as there are labs that take you through the fundamentals of all of this, with complete code samples as well. This is updated almost monthly, staying in sync with the latest Windows Azure SDK and tools.
If you're hosting your web site in the cloud and you need a database, then SQL Azure is almost certainly the best option.
SQL Azure is a database-as-a-service, so you'll create your database and work against it from your code, but not have to worry about the provisioning; there are no servers as such, it is all taken care of.
From an application point of view it looks and behaves pretty much like SQL Server, so initially all that changes is the connection string.
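For example (server, database, and user names here are made up), the data access code stays plain ADO.NET; only the connection string points at the SQL Azure server:

```csharp
using System;
using System.Data.SqlClient;

// Same ADO.NET code as against an on-premises SQL Server; only the
// connection string differs.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;Database=Requests;" +
    "User ID=myuser@myserver;Password=...;Encrypt=True;";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var command = new SqlCommand("SELECT COUNT(*) FROM Requests", connection))
        Console.WriteLine(command.ExecuteScalar());
}
```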
As others noted, SQL Azure takes away your concerns about setting up and taking care of the infrastructure. This is part of the premise of Azure in general, which is to provide a platform rather than just infrastructure.
The price you pay for that is some limitations on the capabilities (vs. regular SQL Server): a limit on database size (at least until Federations become available) and increased latency (since your database is not running on the same server as your app).
Microsoft TechEd has a "SQL Azure Performance and Elasticity Guide" which you should probably take a look at.

SaaS: one web app to one database vs. many web apps to many databases

I am planning to develop a fairly small SaaS service. Every business client will have an associated database (same schema among clients' databases, different data). In addition, they will have a unique domain pointing to the web app, and here I see these 2 options:
1. The domains will point to a unique web app, which will change the connection string to the proper client's database depending on the domain. (That is, I will need to deploy one web app only.)
2. The domains will point to their own web app, which is really the same web app replicated for every client but with the proper connection string to the client's database. (That is, I will need to deploy many web apps.)
This is for an ASP.NET 4.0 MVC 3.0 web app that will run on IIS 7.0. It will be fairly small, but I do require it to be scalable. Should I go with 1 or 2?
This MSDN article is a great resource that goes into detail about the advantages of three patterns:
Separated DB. Each app instance has its own DB instance. Easier, but can be difficult to administer from a database infrastructure standpoint.
Separated schema. Each app instance shares a DB but is partitioned via schemas. Requires a little more coding work, and mitigates some of the challenges of a totally separate DB, but still has difficulties if you need individual site backup/restore and things like that.
Shared schema. Your app is responsible for partitioning the data based on the app instance. This requires the most work, but is most flexible in terms of management of the data.
In terms of how your app handles it, the DB design will probably determine that. I have in the past done both separated DB and shared schema. In the separated DB approach, I usually separate the app instances as well. In the shared schema approach, it's the same app with logic to modify what data is available based on login and/or hostname.
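A minimal sketch of option 1 (host names and connection strings below are hypothetical): one deployed app that picks the connection string from the host name the request came in on.

```csharp
using System;
using System.Collections.Generic;
using System.Web;

// Resolve a tenant's connection string from the domain of the request.
public static class TenantResolver
{
    private static readonly Dictionary<string, string> TenantConnections =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "clienta.example.com", "Server=...;Database=ClientA;..." },
            { "clientb.example.com", "Server=...;Database=ClientB;..." },
        };

    public static string Resolve(HttpRequest request)
    {
        string connectionString;
        if (!TenantConnections.TryGetValue(request.Url.Host, out connectionString))
            throw new InvalidOperationException("Unknown tenant: " + request.Url.Host);
        return connectionString;
    }
}
```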
I'm not sure this is the answer you're looking for, but there is a third option:
Using a multi-tenant database design: a single database which supports all clients. Your tables would contain composite primary keys (the client's id plus the entity's own key).
Scale out when you need. If your service is small, I wouldn't see any benefit to multiple databases except for assured data security - meaning, you'll only bring back query results for the correct client. The costs will be much higher running multiple databases if you're planning on hosting with a cloud service.
If SalesForce can host their SaaS using a multitenant design, I would at least consider this as a viable option for a small service.
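To illustrate the composite-key idea (table and column names are made up): every table carries the client's id, and every query filters on it, which is what keeps one client's results away from another's.

```csharp
using System.Data.SqlClient;

// Query one tenant's row from a shared, multi-tenant table whose primary
// key is (TenantId, RequestId).
public static string GetRequestStatus(string connectionString, int tenantId, int requestId)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Status FROM Requests WHERE TenantId = @tenantId AND RequestId = @requestId",
        connection))
    {
        command.Parameters.AddWithValue("@tenantId", tenantId);
        command.Parameters.AddWithValue("@requestId", requestId);
        connection.Open();
        return (string)command.ExecuteScalar();
    }
}
```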

Designing a SQL Server database to be used in a shared hosting environment

I've always personally used dedicated servers and VPSes, so I have full control over my SQL Server (using 2008 R2). Now I'm working on an ASP.NET project that could be deployed in a shared hosting environment, which I have little experience with. My question is: are there limitations on the features of SQL Server I can use in a shared environment?
For example, if I design my database to use views, stored procedures, user-defined functions and triggers, will my end user be able to use them in shared hosting? Do hosts typically provide access to these, and are they difficult to use?
If so, I assume the host will give a user his login, and he can use tools like Management Studio to operate within his own DB as if it were his own server? If I provide scripts to install these, will they run under the user's credentials within his database?
All database objects are available. That includes tables, views, stored procedures, functions, keys, certificates...
Usually CLR and full-text search (FTS) are disabled.
Lastly, you will not be able to access most of the server-level objects (logins, server triggers, backup devices, linked servers, etc.).
SQL Mail and Reporting Services are often turned off too.
It depends on how the other users are authenticated to the database, if it is one shared database for all users.
If every user on the host will receive his own DB:
If your scripts are written in a generic way (not bound to fixed usernames, for example), other users will be able to execute them on their database and will have the same functionality. (Right-click the DB and choose Tasks -> Back Up, for example.)
You could also provide plain backup dumps of a freshly set-up database, so for other users the setup is only one click away. Also, from the beginning you should think about how to roll out changes that need to affect every user.
One possible approach is to always supply delta scripts, no matter whether you are patching errors away or adding new things; a sketch of such a runner follows.
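One way to sketch that (the SchemaVersion table, the numbered script files, and the single-batch assumption are all mine for illustration): record the highest applied version in each database and apply only newer scripts, so the same change set can be rolled out to every user's database.

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// Apply numbered delta scripts (001.sql, 002.sql, ...) exactly once per
// database, tracked in a SchemaVersion(Version int) table. Assumes each
// script is a single batch (no GO separators).
public static void ApplyDeltas(string connectionString, string scriptsFolder)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        int current;
        using (var cmd = new SqlCommand(
            "SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", connection))
            current = (int)cmd.ExecuteScalar();

        foreach (var file in Directory.GetFiles(scriptsFolder, "*.sql").OrderBy(f => f))
        {
            int version = int.Parse(Path.GetFileNameWithoutExtension(file));
            if (version <= current)
                continue;   // already applied to this database

            using (var tx = connection.BeginTransaction())
            {
                using (var apply = new SqlCommand(File.ReadAllText(file), connection, tx))
                    apply.ExecuteNonQuery();
                using (var record = new SqlCommand(
                    "INSERT INTO SchemaVersion (Version) VALUES (@v)", connection, tx))
                {
                    record.Parameters.AddWithValue("@v", version);
                    record.ExecuteNonQuery();
                }
                tx.Commit();
            }
        }
    }
}
```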

Access SSAS cube from across domains without direct database connection

I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS Cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, but neither organization should have access to the connection string.
I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or if this is the preferred way to do this. Is performance acceptable here?
http://technet.microsoft.com/en-us/library/cc917711.aspx
I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, and pull that data into a database on the organization's server, and then rebuild the cube. Then that cube would be queried without having to go and connect to the other organization server when viewing the cube.
Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or any other possible options? Thanks, and please let me know if you need more info, still unclear on how some of this works.
HTTP is probably the best option for what it sounds like you are trying to do. If they are two machines on the same network but not the same domain, using ipaddress\username on each (same user/pass) will work, like old-school Windows networking in workgroups.
You could also just back up, FTP, and download/restore the cube on the other machine; that might work for what you are doing.
As suggested by ScaleOvenStove, HTTP is the best solution for your case, but users need to be synced on both servers to get access via HTTP. Users across both organizations' networks can be synced with an AD sync tool. A user has to be created in the other organization's network with bare-minimum rights, and you can define role-based security for what they can access in the cube.
