Do we really need the app server? [closed] - asp.net

I'm about to start writing a web app (ASP.NET/IIS 7) which will be accessible over the internet. It will be placed behind a firewall which accepts HTTP and HTTPS.
The previous system, which we are going to replace, doesn't let the web server talk directly to a database; instead it has the web server make highly specialized web service calls (through a second firewall which only allows this kind of call) to a separate app server, which then goes to the DB to operate on the data.
I have worked on many systems in my day, but this is the first one which has taken security this seriously. Is this a common setup? My first thought was to use Windows Authentication in the connection string on the web server and have that user be a crippled DB user (able only to view and update its own data), and then allow DB access through the inner firewall as well.
Am I naïve? It seems like I will have to do a lot of data mapping if we keep the current setup for the new system.
Edit: The domain of this app is online ordering of goods (business to business). Users (businesses) log in, input what they can deliver in any given time period, view previous transaction history, view projected demand for goods, etc. No actual money is exchanged through this system, but it provides the information on which goods are available for sale, which is data input to the ordering system.

This type of arrangement (DMZ with web server, communicating through a firewall with the app server, communicating through another firewall with the DB) is very common in certain types of environments, especially in large transactional systems (online corporate banking, for example).
There are very good security reasons for doing this, the main one being that it will slow down an attack on your systems. The traditional term for it is Defence in Depth (or Defense if you are over that side of the water).
Reasonable security assumption: your web server will be continually under attack.
So you stick it in a DMZ and limit the types of connection it can make by using a firewall. You also limit the web server to just being a web server - this reduces the number of possible attacks (the attack surface).
2nd reasonable security assumption: at some point a zero-day exploit will be found that will get to your web server and allow it to be compromised, which could lead to an attack on your user/customer database.
So you have a firewall limiting the number of connections to the application server.
3rd reasonable security assumption: zero-days will be found for the app server, but the odds of finding zero-days for the web and app servers at the same time are reduced dramatically if you patch regularly.
So if the value of your data/transactions is high enough, adding that extra layer could be essential to protect yourself.

We have an app that is configured similarly. The interface layer lives on a web server in the DMZ, the DAL is on a server inside the firewall, and a web service bridges the gap between them. In conjunction with this we have an authorization manager inside the firewall which exposes another web service that is used to control what users are allowed to see and do within the app. This app is one of our main client data tracking systems, and is accessible to our internal employees and outside contractors. It also deals with medical information, so it falls under the HIPAA rules. So while I don't think this setup is particularly common, it is not unheard of, particularly with highly sensitive data or in situations where you have to deal with audits by a regulatory body.
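As a rough sketch of what that bridging web service can look like (hypothetical contract and type names, assuming WCF since the question is ASP.NET): the DMZ web server only ever sees narrow service contracts, never the database itself.

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical contracts for illustration only: the DMZ web server calls these
    // over the inner firewall; only the internal app server talks to the database.
    [ServiceContract]
    public interface ICustomerDataService
    {
        [OperationContract]
        CustomerSummary GetCustomerSummary(int customerId);

        [OperationContract]
        void UpdateContactInfo(int customerId, string phone, string email);
    }

    [ServiceContract]
    public interface IAuthorizationService
    {
        [OperationContract]
        bool IsActionAllowed(string userName, string action);
    }

    [DataContract]
    public class CustomerSummary
    {
        [DataMember] public int CustomerId { get; set; }
        [DataMember] public string DisplayName { get; set; }
    }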

Any reasonably scalable, reasonably secure, conventional web application is going to abstract the database away from the web machine using one or more service and caching tiers. SQL injection is one of the leading vectors for penetration/hacking/cracking, and databases often tend to be one of the more complex, expensive pieces of the overall architecture/TCO. Using service tiers allows you to move logic out of the DB, to employ out-of-process caching, to shield the DB from injection attempts, and so on. You get better, cheaper, more secure performance this way. It also allows for greater flexibility when it comes to upgrades, redundancy or maintenance.
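For instance, the service tier gives you a single choke point where every query is parameterized, which is the standard defence against injection. A minimal sketch (table and column names are made up):

    using System.Data.SqlClient;

    public static class ProductQueries
    {
        // The service tier owns the connection string; the web tier never sees it.
        public static decimal? GetPrice(string connectionString, string productCode)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Price FROM Products WHERE ProductCode = @code", connection))
            {
                // User-supplied text goes in as a parameter, never concatenated into SQL.
                command.Parameters.AddWithValue("@code", productCode);
                connection.Open();
                object result = command.ExecuteScalar();
                return result == null ? (decimal?)null : (decimal)result;
            }
        }
    }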

Configuring the user's access rights seems like a more robust solution to me. Also, your data access layer should have some security built in too. Adding this additional layer could end up being a performance hit, but it really depends on what mechanism you're using to move data from "WebServer1" to "WebServer2". Without more specific information in that regard, it's not possible to give a more solid answer.


Localised IP assignment + DDoS prevention + Google Cloud billing

We are very new to Google Cloud and still learning.
I have two questions.
First:
Can I assign localised IP addresses to virtual instances? For example, one website served from a German IP range and another website assigned to an Italian IP range. Where is the best place to start, and is this possible on Google Cloud?
Second:
We had a DDoS attack on our cloud resources, and usage peaked while we were under attack. Will Google charge an extreme price for that peak period, or will billing be normal?
The second question brings up a third one:
We use Cloudflare for our domains. Is there a reliable way to prevent DDoS attacks on Google Cloud?
I appreciate your time and answers.
To your first point, are you after finding the shortest path between your users and wherever you serve your content? If that's the case, you can simply put a load balancer in front of your backend services within Google Cloud, with a global public forwarding IP address, and the service itself will take care of redirecting the traffic to the nearest group of machines available. Here is an example of an HTTP(S) Load Balancer setup.
Or is localization what you are trying to achieve? In that case I'd rely on more standard ways of handling the language of choice, like using browser settings (or user account settings if they exist) or the Accept-Language header. This is a valuable resource from LocalizeJS.
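A minimal sketch of the Accept-Language approach, written for ASP.NET since that is what most of this page uses (the supported-language list is just an example):

    using System;
    using System.Globalization;
    using System.Linq;
    using System.Threading;
    using System.Web;

    public static class LanguageSelector
    {
        // Pick the first language the browser advertises that we support,
        // falling back to English. Call this early in the request pipeline.
        public static void ApplyRequestLanguage(HttpRequest request)
        {
            string[] supported = { "en", "de", "it" };
            string choice = (request.UserLanguages ?? new string[0])
                .Select(l => l.Split(';')[0].Trim())
                .FirstOrDefault(l => supported.Any(s =>
                    l.StartsWith(s, StringComparison.OrdinalIgnoreCase)))
                ?? "en";

            CultureInfo culture = CultureInfo.CreateSpecificCulture(choice);
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = culture;
        }
    }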
Lastly, if you are determined to have multiple versions of your application deployed for the different languages you support, you could still have an intermediate service that determines the source of the request using IP-based lookups and redirects the user to the version of your choice. That said, my feeling is that this is a more traditional behaviour; in a world of client applications that are responsive and localized on the spot, the extra hop/redirect could annoy some users.
To your second point, there are a number of protections already built in to some services within Google Cloud to help you protect your applications and machines in different ways. On the DDoS front, you can benefit from policies and protections on the CDN side, where you get cache- and scaling-based preventive measures.
In addition to that, if you have a load balancer in front of your content, you can benefit from protections on layers 3, 4 and 7 of the OSI model. That includes typical HTTP floods, SYN floods, port exhaustion and NTP amplification attacks.
What this means is that in many of these situations your infrastructure will not even notice these potential attacks, as they will be mitigated before they reach you (and therefore you will not be billed for them). That said, I have heard of and experienced situations in which these protections did not act in a timely fashion, or were not triggered at all. In those scenarios your system may need to handle the extra load. However, especially when the attack was obviously malicious and documented as something Google Cloud is supposed to handle, there is a chance to make your case with Google in order to get some support on the topic.
A bit more on that here.
Hope this is helpful.

easy server and client communication

I want to create a program for my desktop and an app for my android. Both of them will do the same, just on those different devices. They will be something like personal assistants, so I want to put a lot of data into them ( for example contacts, notes and a huge lot of other stuff). All of this data should be saved on a server (at least for the beginning I will use my own Ubuntu server at home).
For the Android app I will obviously use Java, and the database on the server will be MySQL, because that's the database I have used for everything. The Windows program will most likely be written in one of these languages: Java, C#, or C++, as these are the languages I am able to use quite well.
Now to the problem/question: The server should have a good backend which will be communicating with the apps/programs and read/write data in the database, manage the users and all that stuff. But I am not sure how I should approach programming the backend and the "network communication" itself. I would really like to have some relatively easy way to send secured messages between server and clients, but I have no experience in that matter. I do have programming experience in general, but not with backend and network programming.
side notes:
I would like to "scale big". At first this system will only be used by me, but it may be opened to more people or even sold.
Also, I would really like to have a (partly) self-programmed backend on the server, because I could very well use it for a lot of other stuff, like some automation features in my house, which will be implemented.
EDIT: I would like to be able to scale big. I don't need support for hundreds of people at the beginning ;)
You need to research socket programming. Sockets provide a relatively easy way to do network communication, and you can layer security (e.g. TLS) on top. Essentially, you will create some sort of connection or socket listener on your server. The clients will create sockets, initialize them to connect to a certain IP address and port number, and then connect. Once the server receives these connections, the server creates a socket for that specific connection, and the two sockets can communicate back and forth.
If you want your server to be able to handle multiple clients, I suggest creating a new Thread every time the server receives a connection, and that Thread will be dedicated to that specific client connection. Having a multi-threaded server where each client has its own dedicated Thread is a good starting point for an efficient server.
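A minimal sketch of that thread-per-client pattern in C# (the port number and the echo behaviour are placeholders for your real protocol):

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class EchoServer
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 5000); // placeholder port
            listener.Start();
            Console.WriteLine("Listening on port 5000...");

            while (true)
            {
                TcpClient client = listener.AcceptTcpClient();
                // Dedicate one thread to this client connection.
                new Thread(() => HandleClient(client)) { IsBackground = true }.Start();
            }
        }

        static void HandleClient(TcpClient client)
        {
            using (client)
            using (var reader = new StreamReader(client.GetStream()))
            using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    writer.WriteLine("echo: " + line); // replace with real protocol handling
            }
        }
    }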
Here are some good C# examples of Socket clients and servers: https://msdn.microsoft.com/en-us/library/w89fhyex(v=vs.110).aspx
As a side note, you can also write Android apps in C# with Xamarin. If you did your desktop program and Android app both in C#, you'd be able to write most of the code once and share it between the two apps easily.
I suggest you start learning socket programming by creating very simple client and server applications in order to grasp how they will be communicating in your larger project. Once you can grasp the communication procedures well enough, start designing your larger project.
But I am not sure how I should approach programming the backend and the "network communication" itself.
Traditionally, a server for your case would be a web server exposing a REST API (JSON). All the clients need to do is make HTTP requests and render/parse JSON. The REST API is mapped to database calls and exposes some data model. If it were in Java, that would be the Jetty web server and the Jackson JSON parser.
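To give a feel for the shape of such an API, here is a bare-bones sketch in C# (chosen to match the rest of this page; the endpoint, port and hard-coded JSON are placeholders for real database-backed handlers):

    using System;
    using System.Net;
    using System.Text;

    class MiniRestServer
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/contacts/"); // placeholder endpoint
            listener.Start();
            Console.WriteLine("REST endpoint listening at /contacts/");

            while (true)
            {
                HttpListenerContext context = listener.GetContext();
                // In a real backend this JSON would come from a database query.
                byte[] body = Encoding.UTF8.GetBytes("[{\"name\":\"Alice\",\"phone\":\"12345\"}]");
                context.Response.ContentType = "application/json";
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }
    }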
I would really like to have some relatively easy way to send secured messages between server and clients,
Sending HTTP requests is probably the easiest way to communicate with a service. Making it secure is a matter of enabling HTTPS on the server side and implementing some user authentication and action authorization. Enabling HTTPS with Jetty for Java requires only a few lines of code. Authentication is usually done via OAuth2, and authorization could be based on ACLs. You may go beyond this and enable encryption of data at rest and employ other practices.
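On the client side that can be as small as the following sketch (the endpoint is a placeholder, and the token would come out of whatever OAuth2 flow you implement):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class SecureApiClient
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Placeholder token: in practice this comes from your OAuth2 flow.
                client.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", "access-token-from-oauth2");

                // HTTPS provides transport security; the token identifies the user.
                string json = await client.GetStringAsync("https://example.com/api/contacts");
                Console.WriteLine(json);
            }
        }
    }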
I would like to "scale big". At first this system will only be used by
me, but it may be opened to more people or even sold.
I would like to be able to scale big. I don't need support for
hundreds of people at the beginning
I anticipate scalability can become the main challenge. Depending on how far you want to scale, you may need to go to distributed (Big Data) databases and distributed serving and messaging layers.
Also, I would really like to have a (partly) self-programmed backend on the server, because I could very well use it for a lot of other stuff, like some automation features in my house, which will be implemented.
I am not sure what you mean by self-programmed. Usually a backend encapsulates some application-specific business logic.
It could be a piece of logic between your database and http transport layer.
In a more complicated scenario your logic can be put into an asynchronous service behind the backend, so the service can do its job without blocking clients' requests.
And in the (probably) most complicated scenario your backend may do machine learning (for example, if you would like your software stack to learn your habits around the house and automate the house according to your expectations without you actually coding that automation).
but I have no experience in that matter. I do have programming experience in general, but not with backend and network programming.
If you can code, writing a backend is not a very hard problem, and there are a lot of resources. However, you will need time (or money) to learn how and to do it, which may distract you from the development of your applications (or you may enjoy it).
The alternative to in-house development of a backend could be a Backend-as-a-Service (BaaS), in the cloud or on premises. There are a number of products in this market. A BaaS allows you to eliminate the development of the backend entirely (or close to it). At a minimum it should provide:
REST API to data storage with configurable data model,
security,
scalability,
custom business-logic
Disclaimer: I am a member of the webintrinsics.io team, which provides a Backend-as-a-Service. Check our website and contact us if you need to; we will be able to work with you and help you either with BaaS or with guiding you towards some useful resources.
Good luck with your work!

How common are web farms/gardens? Should I design my website for them? [closed]

I'm running an ASP.NET website; the server both reads and writes data to a database, but it also stores some frequently accessed data directly in process memory as a cache. When new requests come in, they are processed depending on data in the cache before it's written to the DB.
My hosting provider suddenly decided to put their servers behind a load balancer. This means that my caching system will go bananas, as several servers randomly process the requests. So I have to rewrite a big chunk of my application only to get worse performance, since I now have to query the database instead of doing a lightning-fast in-memory variable check.
First, I don't really see the point of distributing the load on the IIS server, as in my experience DB queries are most often the bottleneck, and now the DB has to take even more of a banging. Second, it seems like these things would require careful planning, not just something a hosting provider would set up for all their clients and expect all applications to be written to suit.
Are these sorts of things common, or was I stupid to use process memory as a cache in the first place?
Should I start looking for a new hosting provider, or can I expect web farming to arrive sooner or later anywhere? Should I keep transitions like this in mind for all future apps I write and avoid in-process caching and similar designs completely?
(Please, I don't want to make this into a farming vs. not-farming battle; I'm just wondering if it's so common that I have to keep it in mind when developing.)
I am definitely more of a developer than a network/deployment guru. So while I have a reasonably good overall understanding of these concepts (and some firsthand experience with pitfalls/limitations), I'll rely on other SO'ers to more thoroughly vet my input. With that caveat...
First thing to be aware of: a "web farm" is different from a "web garden". A web farm is usually a series of (physical or virtual) machines, usually each with a unique IP address, behind some sort of load-balancer. Most load balancers support session-affinity, meaning a given user will get a random machine on their first hit to the site, but will get that same machine on every subsequent hit. Thus, your in-memory state-management should still work fine, and session affinity will make it very likely that a given session will use the same application cache throughout its lifespan.
My understanding is that a "web garden" is specific to IIS, and is essentially "multiple instances" of the web server running in parallel on the same machine. It serves the same primary purpose as a web farm (supporting a greater number of concurrent connections). However, to the best of my knowledge it does not support any sort of session affinity. That means each request could end up in a different logical application, and thus each could be working with a different application cache. It also means that you cannot use in-process session handling - you must go to the ASP.NET Session State Service or a SQL-backed session configuration. Those were the big things that bit me when my client moved to a web-garden model.
"First, I don't really see the point of distributing the load on the IIS server, as in my experience DB queries are most often the bottleneck." IIS has a finite number of worker threads available (configurable, but still finite), and can therefore only serve a finite number of simultaneous connections. Even if each request is a fairly quick operation, on busy websites that finite ceiling can cause a slow user experience. Web farms/gardens increase the number of simultaneous requests you can serve, even if they don't perfectly address levelling of CPU load.
"Are these sorts of things common, or was I stupid to use process memory as a cache in the first place?" This isn't really an "or" question. Yes, in my experience web farms are very common (web gardens less so, but that might just be the clients I've worked with). Regardless, there is nothing wrong with using memory caches - they're an integral part of ASP.NET. Of course, there are numerous ways to use them incorrectly and cause yourself problems - but that's a much larger discussion, and isn't really specific to whether or not your system will be deployed on a web farm.
IN MY OPINION, you should design your systems assuming:
they will have to run on a web farm/garden
you will have session-affinity
you will NOT have application-level-cache-affinity
This is certainly not an exhaustive guide to distributed deployment. But I hope it gets you a little closer to understanding some of the farm/garden landscape.

web service that can withstand 1000 concurrent users with a response in 25 milliseconds

Our client's requirement is to develop a WCF service which can withstand 1-2k concurrent website users, with a response time of around 25 milliseconds.
The service reads a couple of columns from the database and will be consumed by different vendors.
Can you suggest an architecture, or any extra steps I need to take while developing? And how do we calculate the server hardware configuration needed to cope with this?
Thanks in advance.
Hardly possible. You need a network connection to the service, service activation, business-logic processing, a database connection (another network connection) and the database query itself. Because of the 2,000 concurrent users you need several application servers, so the network path also goes through a load balancer. I can't imagine a network and hardware infrastructure able to complete such an operation within 25 ms for 2,000 concurrent users. Such a requirement is not realistic.
I guess that if you simply try to run the database query from your computer against the remote DB, you will see that even such a simple task will not complete in 25 ms.
A few principles:
Test early, test often.
Successful systems get more traffic
Reliability is usually important
Caching is often a key to performance
To elaborate: build a simple system right now. Even if the business logic is very simplified, if it's a web service plus database access you can performance-test it. Test with one user. What do you see? Where does the time go? As you develop the system, adding in real code, keep doing that test. Reasons: a) right now you will know whether 25 ms is even achievable; b) you will spot any code changes that hurt performance immediately. Now test with lots of users: what degradation patterns do you hit? This starts to give you an indication of your platform's capabilities.
I suspect the outcome will be that a single machine won't cut it for you. And even if it does, if you're successful you will get more traffic. So plan to use more than one server.
And anyway, for reliability reasons you need more than one server. All sorts of interesting implementation details fall out when you can't assume a single server - e.g. you don't have Singletons any more ;-)
Most times we get good performance by using a cache. Will many users ask for the same data? Can you cache it? Are there updates to consider, in which case do you need a distributed cache with clustered invalidation? That multi-server case emerges again.
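As an illustration of that caching point, here is one possible shape for the service (a sketch only: the 30-second refresh and the key/value lookup are assumptions, and a distributed cache would replace the dictionary if you run several nodes):

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.Threading;

    [ServiceContract]
    public interface IVendorLookupService
    {
        [OperationContract]
        string GetValue(string key);
    }

    public class VendorLookupService : IVendorLookupService
    {
        // In-memory snapshot of the "couple of columns"; each request becomes a
        // dictionary lookup instead of a database round trip.
        static volatile Dictionary<string, string> snapshot = LoadFromDatabase();

        // Background refresh every 30 seconds (an assumed freshness requirement).
        static readonly Timer refreshTimer =
            new Timer(_ => snapshot = LoadFromDatabase(), null,
                      TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));

        public string GetValue(string key)
        {
            string value;
            return snapshot.TryGetValue(key, out value) ? value : null;
        }

        static Dictionary<string, string> LoadFromDatabase()
        {
            // Placeholder: run the real two-column query here.
            return new Dictionary<string, string> { { "sample-key", "sample-value" } };
        }
    }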
Why do you need WCF?
Could you shift as much of that service as possible into static serving and cache lookups?
If I understand your question, thousands of users will be hitting your website and executing queries on your DB. You should definitely be looking into connection pooling on your WCF connections, but your best bet will be to avoid doing DB lookups altogether and have your website return data from cache hits.
I'd also look into why you couldn't just connect directly to the database for your lookups, do you actually need a WCF service in the way first?
Look into Memcached.

What is the best architecture to bridge to XMPP? [closed]

If I have a separate system with its own concept of users and presence, what is the most appropriate architecture for creating a bridge to an XMPP server network? As far as I can tell there are three primary ways:
Act as a server. This creates one touchpoint, but I fear it has implications for compatibility, and potentially creates complexity in my system for emulating a server.
Act as clients. This seems to imply that I need one connection per user in my system, which just isn't going to scale well.
I've heard of an XMPP gateway protocol, but it's unclear if this is any better than the client solution. I also can't tell if this is standard or not.
Any suggestions or tradeoffs would be appreciated. For example, would any of these solutions require running code inside the target XMPP server (not likely something I can do).
The XMPP gateway protocol you've heard of is most likely to do with transports. A transport is a server that connects to both a XMPP server and a non-XMPP server. By running a transport, I can use my Jabber client to talk to someone using, say, MSN Messenger.
A transport typically connects once to the remote network for each JID that it sees as online. That is, it's your option 2 in reverse. This is because there is no special relationship between the transport and the non-XMPP network; the transport is simply acting as a bunch of regular clients. For this to work, XMPP clients must first register with the transport, giving login credentials for the remote network, and allowing the transport to view their presence.
The only reason this has a chance of scaling better is that there can be many transports for the same remote network. For example, my Jabber server could run a transport to MSN, another Jabber server could run another one, and so on, each one providing connections for a different subset of XMPP users. While this spreads out the load on the Jabber side, and load balancing on your system may spread out the load as well, it still requires many connections between the two systems.
In your case, because (I assume) the non-XMPP side of things is cooperating, putting an XMPP server interface on the non-XMPP server is likely your best bet. That server interface is best suited to managing the mapping between XMPP JIDs and how each JID will appear on its own network, rather than forcing XMPP users to register and so on.
In case you haven't seen these, you might find them useful:
http://www.jabber.org/jabber-for-geeks/technology-overview
http://www.xmpp.org/protocols/
http://www.xmpp.org/extensions/
Hope that helps.
I too am working on a similar system.
I am going with the gateway/component route. I have looked at several options and settled with this one.
The gateway is basically a component with the specific purpose of bridging Jabber/XMPP with another network. You will have to build most of the things you take for granted when using XMPP as a client. Stuff like roster control.
There is very little help online on the actual design and building of a component. Like the answer above, I found the XMPP protocols/extensions to be of help. The main ones being:
Basic Client 2008
Basic Server 2008
Intermediate Client 2008
Intermediate Server 2008
Reading through these will show you which XEPs you will be expected to handle. Ignore the stuff that will be handled by the server that your component will be attached to.
It's a shame that DJabberd has such poor documentation, as its "everything is a module" system raised the possibility that the backend of the server could interface directly with the other network. I made no headway on this.
There are basically two types of server to server (s2s) connections. The first is either called a gateway or a transport, but they're the same thing. This is probably the kind you're looking for. I couldn't find specific documentation for the non-XMPP side, but how XMPP thinks about doing translations to legacy servers is at http://xmpp.org/extensions/xep-0100.html. The second kind really isn't explained in any additional XEPs -- it's regular XMPP s2s connections. Look for "Server-to-Server Communication" in RFC 3920 or RFC 3920bis for the latest draft update.
Since you have your own users and presence on your server, and it's not XMPP, the concepts aren't going to map completely to the XMPP model. This is where the work of the transport comes in. You have to do the translation from your model to the XMPP model. While this is some work, you do get to make all the decisions.
Which brings us right to one of the key design choices -- you need to really decide which things you are going to map to XMPP from your service and what you aren't. These feature and use case descriptions will drive the overall structure. For example, is this like a transport to talk to AOL or MSN chat services? Then you'll need a way to map their equivalent of rosters, presence, and keep session information along with logins and passwords from your local users to the remote server. This is because your transport will need to pretend to be those users and will need to login for them.
Or, maybe you're just an s2s bridge to someone else's XMPP based chess game, so you don't need a login on the remote server, and can just act similarly to an email server and pass the information back and forth. (With normal s2s connections the only session that would be stored would be SASL authentication used with the remote server, but at the user level s2s just maintains the connection, and not the login session.)
Other factors are scalability and modularity on your end. You nailed some of the scalability concerns. Take a look at putting in multiple transports to balance the load. For modularity, see where you want to make decisions about what to do with each packet or action. For example, how do you handle and keep track of subscription data? You can put it on your transport, but then that makes using multiple transports harder. Or if you make that decision closer to your core server you can have simpler transports and use some common code if you need to talk to services other than XMPP. The trade off is a more complex core server with more vulnerability potential.
What architecture you should use depends on the non-XMPP system.
Do you operate the non-XMPP system? If yes, you should find a way to add an XMPP-S2S interface to that system, in other words, make it act as an XMPP server. AOL is using this approach for AIM. Unfortunately, they have restricted their gateway to GoogleTalk.
You don't operate the non-XMPP system, but it has a federation interface that you can use - i.e. your gateway can talk to the other system as a server and has a namespace of its own. In this case, you can build a gateway that acts as a federated server on both sides. I don't know of any example of a gateway that uses this approach, but you could use it if you wanted to build a public XMPP-to-SIP bridge.
If the non-XMPP system doesn't give you a federation interface, then you have no other option but acting as a bunch of clients. In the XMPP world, this is called a "transport". The differences between a transport and a normal server are basically:
the JIDs of the transport are mapped from another system (e.g. john.doe\40example.net@msngateway.example.org - really ugly! See the sketch after this list.)
XMPP users who want to use the transport need to create an account on the non-XMPP system and give the login credentials of that account to the transport service. The XMPP protocol even has a protocol extension that allows XMPP users to do transport registrations in-band.
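A tiny sketch of that JID mapping (simplified: real XEP-0106 escaping covers more characters, and the gateway domain here is a placeholder):

    using System;

    public static class GatewayJidMapper
    {
        const string GatewayDomain = "msngateway.example.org"; // placeholder component domain

        // Map a remote-network account such as "john.doe@example.net" onto a JID
        // served by the transport, escaping '@' as "\40".
        public static string ToTransportJid(string remoteAccount)
        {
            return remoteAccount.Replace("@", "\\40") + "@" + GatewayDomain;
        }

        public static string ToRemoteAccount(string transportJid)
        {
            string localPart = transportJid.Substring(0, transportJid.LastIndexOf('@'));
            return localPart.Replace("\\40", "@");
        }
    }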
One other approach is to work with your XMPP server vendor. Most have internal APIs that make injecting presence possible from third party applications. For example, Jabber XCP provides an API for this that's really easy to use.
(Disclosure: I work for Jabber, Inc, the company behind Jabber XCP)
