I have a general question about database hosting in relation to WCF and ASP.NET. We are currently developing a new online web application in ASP.NET, which reads and writes data to our MSSQL database through a WCF service (a three-tier infrastructure).
Later in development we will be launching the website and hosting it with an external provider. We are unsure whether to keep the database for the website internally on our own servers, or to host it externally with our provider (they offer database hosting options as well).
If we hosted it externally, we would obviously back it up internally using batch scripts etc.
One major concern is the security of the database, as we are only a small business with not much experience in web security architecture. Due to this, we are leaning towards an external provider for both the website and database, who would obviously have experience and the equipment to manage such things.
Could you please offer some opinions on the matter?
Thanks!
There's always a risk associated with handing sensitive data off to an outside party, and trusting them to be as secure as you need.
There's no mystery here, someone at the provider will have enough access to look at your data if they really wanted to. So it all boils down to how sensitive is your data? Is there bank account info or social security numbers? For these reasons, our company cannot hand off such data to an outside party.
I'm a little confused though about one thing: if you could potentially host the database server when you go to production, why couldn't you host the website as well? Is it a matter of being able to handle high traffic?
Update in response to your comment:
It sounds like your data is somewhat sensitive, not highly sensitive. In which case, if we're not being totally bonkers pedantic here, you can reasonably assume a reputable hosting company will take the proper measures to secure your data, and from the sounds of it, they're probably more capable in this respect than your own company (not because you're careless or wet behind the ears, just because they would have considerable experience in this area where your company does not).
Now for the performance and hardware setup part of your comment: if you don't have the hardware or network infrastructure to meet your requirements, then you either a) upgrade your own infrastructure and hire the appropriate personnel to set it up and maintain it, or b) pay someone else to do it. Sounds like a no-brainer for you guys to go with option b.
Sorry, no code here, because I am looking for a better idea, or confirmation that I am on the right track.
I have two websites; let's call them A and B.
A is a website exposed to the internet, and only users with a valid account can access it.
B is an internal (intranet) website with Windows authentication using Active Directory. I want application B (intranet) to create users for application A.
Application A is using the built-in ASP.NET JWT token authentication.
My idea is to expose an API on the extranet website (A) and let (B) access this API. I can use CORS to make sure only (B) has access to the endpoint, but I am not sure if this is good enough protection. A third-party company will perform security penetration tests for us, so this approach might fail the security test?
Or
I could use Entity Framework to update the AspNetUsers table manually. No idea if this is feasible or the right way of doing things.
Any other solution?
In my opinion, don't expose your internal operations through externally facing solutions like APIs.
Just share the database so that it is accessible to B. That way, server administration is the only security concern, and nobody outside knows how you work. In addition, it doesn't matter how each application implements user authentication (Windows Authentication or JWT); each keeps an independent infrastructure.
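If you do share the database and A uses ASP.NET Identity (an assumption, based on the question's mention of the built-in authentication), B can point its Identity context at A's database and create users through the supported API rather than writing to AspNetUsers by hand. A minimal sketch:

    // Hedged sketch: assumes ASP.NET Core Identity, and that B's Identity
    // DbContext is configured with a connection string pointing at A's
    // user database.
    using Microsoft.AspNetCore.Identity;
    using System.Threading.Tasks;

    public class UserProvisioner
    {
        private readonly UserManager<IdentityUser> _userManager;

        public UserProvisioner(UserManager<IdentityUser> userManager)
            => _userManager = userManager;

        public async Task<IdentityResult> CreateForSiteA(
            string userName, string email, string tempPassword)
        {
            // UserManager handles password hashing and validation, so the
            // AspNetUsers table is never written to by hand.
            var user = new IdentityUser { UserName = userName, Email = email };
            return await _userManager.CreateAsync(user, tempPassword);
        }
    }

Going through UserManager rather than raw EF inserts keeps the password hashing and validation rules consistent with what A expects.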
There are multiple solutions to this one problem. In the end it really depends on your specific criteria.
You could go with:
The B (intranet) website reaching into the database and creating users as needed.
The A (internet) website exposing an API with the necessary endpoint to create users.
The A (internet) website running a data migration every now and then to insert users.
But they all come with their ups and downs; I'll try to break them down for you.
API solution
Ups:
Single responsibility: you have only one piece of code touching this database, which makes it easier to mitigate side effects.
It is "future-proof": you could easily have more services using this API.
Downs:
Increased attack surface: the API is on a public site, so third parties may try to play with it.
The API must be maintained as the database model changes (one more piece to maintain).
Not the fastest solution to implement.
Database direct access
Ups:
Attack surface minimal.
Very quick to develop
Downs:
Database model has to be maintained in two places
Migration and deployment have to be coordinated, which is hard to maintain
Makes the system more error-prone
Migration on release
Ups:
Cheapest to develop
Highest performance on inserts
Downs:
Not flexible
Very slow for users
Many deployments
Manual work (will be costly over time)
In my opinion, I suggest you go for the API and secure access to it with an OAuth mechanism. If OAuth is too time-consuming to put in place, maybe you can try some simpler auth protocols.
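As a rough illustration of that recommendation (the controller, DTO, and policy names are all hypothetical, not from the question), the endpoint on A might look like the following. Note the credential requirement on top of CORS, since CORS only constrains browsers, not arbitrary HTTP clients:

    // Hypothetical sketch of a user-creation endpoint on A, locked down with
    // an authorization policy (e.g. backed by an OAuth client-credentials
    // token issued only to B).
    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.Identity;
    using Microsoft.AspNetCore.Mvc;
    using System.Threading.Tasks;

    public record CreateUserDto(string UserName, string Email, string TemporaryPassword);

    [ApiController]
    [Route("internal/users")]
    [Authorize(Policy = "IntranetServiceOnly")] // hypothetical policy name
    public class InternalUsersController : ControllerBase
    {
        private readonly UserManager<IdentityUser> _users;

        public InternalUsersController(UserManager<IdentityUser> users) => _users = users;

        [HttpPost]
        public async Task<IActionResult> Create(CreateUserDto dto)
        {
            var user = new IdentityUser { UserName = dto.UserName, Email = dto.Email };
            var result = await _users.CreateAsync(user, dto.TemporaryPassword);
            if (result.Succeeded) return Ok();
            return BadRequest(result.Errors);
        }
    }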
I am starting to port an old single-tenant desktop application to the cloud, and I would like to hear your recommendations about databases for my cloud-based multi-tenant application.
My basic requirement is simple:
For each tenant, its data is separate from any other tenant's data. I can easily back up, restore, or export the data for one single tenant without affecting other tenants.
I don't really want to care about multi-tenancy in the business logic code. It should look like a single-tenant application behind the security layer, with no tenant ID passed around, etc.
Easy to query using some mature technology like LINQ.
Availability and scalability, of course: easy to set up replicas, fail-over, and scaling up and down, etc.
I have gone through some investigation of multi-tenant application development. I have noticed that SQL databases from Azure and AWS are both very expensive (the cost of just the SQL database instance is close to the license fee of the original application), so I definitely can't use a separate SQL database instance per tenant.
Now I'm reading the book Developing Multi-tenant Applications for the Cloud, 3rd Edition, which uses the Azure Storage Service to implement multi-tenancy. I haven't finished the book yet; it seems you still have to handle the multi-tenancy yourself, and the sample code is already out of date.
I have seen lots of SO questions comparing Azure Table Storage with MongoDB. MongoDB is very new to me, and I'm not sure whether it could easily fulfill my requirements.
And I have seen RavenDB as well; it supports multi-tenancy out of the box. But I haven't seen good sample code showing how to use it in Azure app development.
Hope to hear some good advice from the awesome SO crowd.
I would opt for RavenDB over MongoDB. Even though Raven is a newcomer to the game, it supports most of the features that traditional SQL databases support.
The volume of data you are dealing with is also a key decision pointer, as is the amount of traffic you are expecting.
Also keep in mind operational costs and development effort. HA and DR scenarios can be problematic when you use Raven or Mongo, because you need to host them yourself; Azure Storage, by contrast, protects you by default to a large extent by maintaining three copies of your data.
So I would suggest you carefully weigh the trade-offs and choose wisely based on your business needs, cost optimization, and development and operational effort.
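To make the RavenDB option concrete, here is a minimal sketch of its database-per-tenant model using the modern RavenDB client (tenant names invented; the client API has changed since the era of the question, so treat this as illustrative):

    // Sketch: one database per tenant, so business code stays tenant-unaware.
    using Raven.Client.Documents;

    var store = new DocumentStore
    {
        Urls = new[] { "http://localhost:8080" },
        Database = "TenantA" // default tenant database for this store
    };
    store.Initialize();

    // Sessions are already scoped to a tenant; no tenant ID in business logic.
    using (var session = store.OpenSession())          // TenantA
    using (var other = store.OpenSession("TenantB"))   // or override per call
    {
        // query and save as if this were a single-tenant application
    }

Backup, restore, and export then operate on a single tenant's database without touching the others, which lines up with the question's first requirement.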
Having a single instance of your application for each tenant is a very expensive way to implement an application, however I realise that if an application was developed with a single tenant in mind, then the costs of changing over can be high.
First, can we start with why you have a desktop application connecting to a database at another location? The latency can really slow down an application. Ideally you would want a locally installed database that syncs with the cloud DB, or appropriate caching added to your application.
However the DB would still need to differentiate the clients.
Why do you need this to go to a cloud database? Is it for backup purposes, not installing a DB locally on a clients machine, accessing the same data from many machines or something else?
Unless your application is extremely large, I would recommend rewriting it as a multi-tenant application against one SQL Azure database. The architecture chosen at the beginning of the project doesn't suit your requirements now, and as you expand you will run into further issues.
Is there any disadvantage to using shared hosting in general (specifically discountasp.net) for an ecommerce website? Security concerns or performance? The site is new and we don't expect many visitors right now; we have at least 30 products.
I am using my own shopping cart, user accounts (Membership provider), credit card processor (PayPal), and my own CMS, in C# ASP.NET 4.0 WebForms and SQL Server 2008.
I don't save credit card information in the database; my system only creates an account for users who buy something during the checkout process, and we only need processing power for some PayPal APIs during checkout (very low CPU usage, I guess).
My website is optimized client-side and server-side. I have ASP.NET's XSS protection enabled plus Microsoft's AntiXSS library on all inputs and outputs (forms, cookies, HTTP headers, query strings, and even web services), stored procedures and parameterized queries to avoid SQL injection, SSL connections, anti-spam, compiled and obfuscated DLLs, an encrypted web.config, etc.
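For concreteness, the parameterized queries mentioned above look roughly like this (table and column names are invented for illustration):

    // Parameters keep user input out of the SQL text itself, which is what
    // blocks injection; illustrative names only.
    using System.Data.SqlClient;

    public static class ProductQueries
    {
        public static void PrintByCategory(string connectionString, string userSuppliedCategory)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "SELECT Id, Name, Price FROM Products WHERE Category = @category", conn))
            {
                cmd.Parameters.AddWithValue("@category", userSuppliedCategory);
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // read columns, e.g. reader.GetString(1)
                    }
                }
            }
        }
    }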
Am I missing something? Thanks, and sorry for my bad English.
Just to give you a quick answer:
Yes, there are problems. Plenty of them.
Performance is almost always worse on shared hosting than on dedicated: someone else might be using all the I/O, leaving you bottlenecked.
Security: as you can't manage the server, you have no way of knowing if it's patched, hardened, etc. If one of the thousand other people on the same shared server manages to exploit it, you're done. However, one could argue that if you don't know how to secure a dedicated server, it might be better to rely on the shared hosting provider's experience.
Also, you have to trust the shared hosting provider not to steal your data etc.
One other thing is that if something crashes, you have to wait for the provider to fix it rather than just doing it yourself. Again, if you don't know how to fix it, it might be better to wait for the provider anyway.
All in all, for your site I would go with shared hosting in the beginning and move up to a VPS as soon as you start generating some money from the site.
I want to make a secure website using ASP.NET, but when I publish it, the domain administrator can see all the data stored in my database (SQL Server). I want to hide my data and code from the domain administrator too. Are there any procedures to do that? Please point me to a good hosting provider that will give me full administrative power over my website (where even the domain owner cannot access my databases and files). Thanks for your suggestions.
Have you looked at: SQL Server 2008 Transparent Data Encryption?
Also:
SQL Server 2008 Transparent Data Encryption
Understanding Transparent Data Encryption (TDE)
Have you considered using a Virtual Private Server? I believe with a VPS you should be able to have complete control over who has access to what at the operating system level.
You can encrypt data, but there's no way to protect code (especially not web-facing code). Frankly, though, the question doesn't make sense: if you have trust issues with someone you have an implicit trust relationship with, then you need to find a different provider.
If you don't trust anyone (personal psychology notwithstanding), you need to host it yourself.
Addendum: look at it from the other way around: why would you host something for someone without being able to inspect it for security and even legal concerns?
If you want total security there's quite a few things you need to implement:
As others have said, you need physical encryption of your database. Merely blocking admins from accessing the database is not enough, because they have access to the physical database files and can use tools on them to read the data directly.
You will want to use web.config encryption
Walkthrough: Encrypting Configuration Information Using Protected Configuration
How To: Encrypt Configuration Sections in ASP.NET 2.0 Using DPAPI
This is rather questionable security, however: since it requires a key container to be installed on the server, it would arguably be possible for a nefarious administrator to copy your key and then use it to manually decrypt your web.config. To protect yourself further than that, you would need to create a secured web service (secured both at the transport level, with SSL, and at the message level, with the content itself encrypted inside the SSL transport tunnel; see WCF services security) that your application constantly talks to for protected data such as the login users for the SQL Server database, and then apply rotating passwords so that if one password were intercepted, it might no longer be valid by the time it's used.
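To make the linked walkthroughs concrete, here is a minimal sketch of encrypting a configuration section programmatically with the built-in protected-configuration API (the same operation aspnet_regiis.exe performs from the command line); as noted above, the machine-scoped keys mean a server admin can still decrypt:

    // Sketch using System.Web.Configuration; run once, e.g. from an
    // admin-only page inside the site.
    using System.Web.Configuration;

    public static class ConfigProtector
    {
        public static void ProtectConnectionStrings()
        {
            var config = WebConfigurationManager.OpenWebConfiguration("~");
            var section = config.GetSection("connectionStrings");
            if (section != null && !section.SectionInformation.IsProtected)
            {
                // Machine-scoped DPAPI provider; a server administrator
                // can still decrypt, as discussed above.
                section.SectionInformation.ProtectSection(
                    "DataProtectionConfigurationProvider");
                config.Save();
            }
        }
    }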
After this point you will need source code protection that includes decompilation protection and code obfuscation. This adds a layer of protection by making it harder to read your application's source for information about how else you protect it (though it will only go so far against a sophisticated cracker).
All in all, at this point you've achieved nearly the highest level of code/data security you can inside a hosted environment, but this goes back to the core problem: if you believe the system operator is nefarious, then even all of these protections can still be beaten by an admin who is skilled enough and sufficiently motivated.
If you need protection above and beyond this, you would really want to look at colocation hosting, or at the very least dedicated server hosting, which would allow you to apply encryption at the operating system level. This protects you from the most effective brute-strength attacks, which involve simply ripping the hard drives out of a machine and spraying the RAM with an upside-down can of air duster to freeze it, then attempting to steal the encryption keys from the RAM itself once disconnected from the server.
Having security that makes you immune (or nearly immune) to this kind of attack basically requires using TrueCrypt for native encryption of your file system, configured not to cache the keys/key files in memory. At that point the last remaining piece of security is to host at a reputable data center like ThePlanet or Rackspace, with 24/7 electronic surveillance, so that it would be nearly impossible for a nefarious employee to compromise your server without video recordings of it occurring.
Remove the BUILTIN\Administrators group from the sysadmin role. Obviously this can only be done by a server admin, but in a properly managed environment it is possible for domain admins to maintain servers without being able to see the data.
In SQL Server 2008, the default is to not include this group.
As for code, you can obfuscate your DLLs, but there is no complete way to hide code from someone who can access the filesystem.
You won't be able to hide the source code, but you do have some options to make it less inviting to admins:
obfuscate - deter people from understanding what is happening syntactically. While they can still follow the code and eventually figure it out (if they want), it requires more effort. After all, with enough effort and know-how, anything can be cracked.
encrypt - because the web page needs to be decrypted by the server, the server needs to have a key to decrypt it. This key needs to be stored in a file that the server (and thus the admin) has access to. Using some obfuscation you can try to hide it (again), but anywhere there is symmetric encryption, a superuser has the ability to get at the key.
Note:
Any time something is encrypted, it will most likely require a decryption step to use or view it. That process will have a negative performance impact.
When things are encrypted, especially from an admin's perspective, it is essentially an invitation for alarm; it creates curiosity. If it's data, that's one thing, but code should not need to be encrypted where there is trust. It's like saying you have something to hide, generally meaning something "bad" that you don't want found out.
Using a web service is often an excellent architectural approach. And, with the advent of WCF in .Net, it's getting even better.
But, in my experience, some people seem to think that web services should always be used in the data access layer for calls to the database. I don't think that web services are the universal solution.
I am thinking of smaller intranet applications with a few dozen users. The web app and its web service are deployed to one web server, not a web farm. There isn't going to be another web app in the future that can use this particular web service. It seems to me that the cost of calling the web service unnecessarily increases the burden on the web server. There is a performance hit to inter-process calls. Maintaining and debugging the code for the web app and the web service is more complicated. So is deployment. I just don't see the advantages of using a web service here.
One could test this by creating two versions of the web app, with and without the web service, and do stress testing, but I haven't done it.
Do you have an opinion on using web services for small-scale web apps? Any other occasions when web services are not a good architectural choice?
Web Services are an absolutely horrible choice for data access. It's a ton of overhead and complexity for almost zero benefit.
If your app is going to run on one machine, why deny it the ability to do in-process data access calls? I'm not talking about directly accessing the database from your UI code, I'm talking about abstracting your repositories away but still including their assemblies in your running web site.
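A rough sketch of what that in-process abstraction might look like (the interface and class names here are invented for illustration):

    using System;

    public class Order { public int Id { get; set; } /* ... */ }

    // The UI depends on this interface; the implementation is referenced as
    // an assembly and runs in-process - no web service hop involved.
    public interface IOrderRepository
    {
        Order GetById(int id);
    }

    public class SqlOrderRepository : IOrderRepository
    {
        public Order GetById(int id)
        {
            // ADO.NET / ORM call elided for brevity
            throw new NotImplementedException();
        }
    }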
There are cases where I'd recommend web services (and I'm assuming you mean SOAP) but that's mostly for interoperability.
The granularity of the services is also in question here. A service in the SOA sense will encapsulate an operation or a business process. Data access methods are only part of that process.
In other words:

    someService.SaveOrder(order);      // <-- bad
    // ...followed by more code for shipping, charging, emailing, etc.

    someService.FulfillOrder(order);   // <-- better
    // the service encapsulates the entire process
Web services for the sake of web services is irresponsible programming.
Nick Harrison, a brilliant developer in Charlotte, suggested these scenarios where using a web service makes sense:
On a Web farm, where there are multiple web servers hosting website(s), all pointing to web service(s) running on another web server. This allows for distributing the load over multiple servers.
Client/server, where Windows forms apps can call a web service.
Cross platform
Passing through a firewall
Just because the tool generates a bunch of stubs doesn't mean it's a good use. WS-* excels in scenarios where you expose services to external parties. This means that each operation should be at the granularity of a business process, as opposed to data access.
The multitude of standards can be used to describe different facets of your contract in great detail, and a (hypothetical) fully compliant WS stack can take away a lot of pain from third-party developers and even allow the fabled point-and-click integration à la Yahoo Pipes. With good governance controls you can evolve your public interface and manage backward compatibility as needed.
All this is next to impossible to generate automatically. The C# stub generator knows only the physical interface of your class, but doesn't have any idea about the semantics involved. See this paper for a more detailed discussion.
If you are building a web site, then build a web site. If you want asynchronous messaging inside your application, use MSMQ. If you want to expose data to internal clients, use POX. If you need an efficient binary message format, check Google's Protocol Buffers; if you need RPC, check Hessian for C#, or DCOM.
Web services are a coarse grained integration solution. They are rigid, they are slower than alternatives, they take too much effort to do well (and when not done well are next to pointless).
To summarize: "When should a web service not be used?" Any time you can get away without it.
If you are just coding a tiny (fewer than 50 users) web application for your intranet, a web service seems like overkill, especially if its primary function (providing a single point of access to many services) won't be used.
I agree that the use of a web service in a small scale web app adds a layer of complexity that does not seem justified. Most of my solutions, internet and intranet, 10-50 users, do not employ web services. I am glad others feel the same...I thought I was the only one.
For a small-scale web app, I think using web services is often quite a good idea: you can use them to easily decouple the web server from the data tier. With the straightforward development requirements and great tooling, I don't see the problem.
However don't use web services in the following scenarios:
When you must use HTTP as the transport and XML serialization of your data, and you need lots of different bits of data, synchronously and often. Whether REST, SOAP, or WS-*, you're going to suffer performance issues: the more calls you make, the slower your system will be. If you want medium-sized chunks of data less frequently, asynchronously, and you can use straight TCP/IP (e.g. WCF's netTcpBinding), you'd be better off (see the sketch after this list).
When you need to query and join data from your web service with other data sources. In that case, rather motivate for a data warehouse that can be populated with properly consolidated and rationalized data from across the enterprise.
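A minimal self-hosting sketch of the netTcpBinding alternative mentioned above (the service contract and address are invented for illustration):

    // Binary framing over TCP instead of XML over HTTP.
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        void FulfillOrder(int orderId);
    }

    public class OrderService : IOrderService
    {
        public void FulfillOrder(int orderId) { /* business process elided */ }
    }

    class Program
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(OrderService),
                new Uri("net.tcp://localhost:9000/orders")))
            {
                host.AddServiceEndpoint(typeof(IOrderService),
                    new NetTcpBinding(), string.Empty);
                host.Open();
                Console.WriteLine("Listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }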
This is my experience, hope it helps.
For a small-scale web app (You have to ask the question, "Will it always remain small scale?" though) using web services, separate business layers, data layers, and so on and so forth can be overkill.
Before anyone shoots me, I do agree that separation of logic between layers along with unit tests, continuous integration, et al are bloody brilliant. In my current role I'd be utterly lost and rocking in the corner without them. However for a very small-scale web app being used to, for example, track contact numbers and addresses for a company of 36 employees, the cost/benefit analysis would suggest that all the "niceties" listed above would be overkill.
However... Remember to ask the question "Will it always remain small scale?" :-)