Best practices - When to use Server / Client code - asp.net

I was searching for information about this question of mine, but I couldn't find any. I'm working on an ASP.NET site and using AJAX to request data, and since I'm currently working on my own, I don't know web programming best practices.
I usually get all the information I need from the server and use JavaScript to display/modify it and AJAX to send it back to the server. A friend of mine uses PHP for most of the programming; he rarely uses any JavaScript, and he told me it's way faster this way, since it does not consume the client's resources.
The basic question actually is:
According to best practices, is it better for the server to provide just the data the application needs, or is it better to use the server for more than this?

That is going to depend on the expected amount of traffic for the site, the amount of content being generated, and the expectations of the end-user.
In a high-traffic site, it is actually "faster" for the end-user if you let JavaScript generate a portion of the content on the client side. Also, you can deliver a better user experience during long load times through client-side scripting than you can if the content is generated completely on the server.

In most cases you will need at least some backend code, e.g. when validating user input or when retrieving information from a real persistent database. And what about visitors who have JavaScript disabled in their user agent, users with a screen reader, or search engine crawlers?
IMHO you should at least (again, in most cases) have backend code that is able to do all the work and spit out a full webpage to the client. On top of this you can add JavaScript functionality to make the user interface "smoother", for example by validating user data before submitting it to the server (remember to ALWAYS also check on the server side) or by loading partial HTML (AJAX).
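For illustration, here is a minimal sketch of that server-side re-check in a Web Forms code-behind. The control names (txtEmail, lblError, btnSubmit) and the validation rule are assumptions; the point is only that the server repeats whatever the client-side script already checked.

    using System;
    using System.Text.RegularExpressions;

    public partial class ContactPage : System.Web.UI.Page
    {
        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // Hypothetical controls from the .aspx markup: txtEmail, lblError.
            string email = txtEmail.Text.Trim();

            // Re-validate on the server even though client-side JavaScript has
            // already checked the field - client-side checks can be bypassed.
            if (!Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
            {
                lblError.Text = "Please enter a valid e-mail address.";
                return;
            }

            // The input passed the server-side check; safe to continue.
        }
    }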
The point about it being faster or using fewer resources when done server-side doesn't make much sense, and even if it were true it wouldn't matter much (though I highly doubt the claim). If you use client-side scripting to load only the parts that are needed, it will if anything use fewer resources on both the client and the server.


How do I handle/use 100 Continue in a REST web service?

Some background
I am planning to write a REST service which helps facilitate collaboration between multiple client systems. Similar to how git or hg handle things, I want the client to perform all merging locally and the server to reject new changes unless they have been merged with existing changes.
How I want to handle it
I don't want clients to have to upload all of their change sets before being told they need to merge first. I would like to do this by performing a POST with the Expect: 100-Continue header. The server can then verify that it can accept the change sets based on the header information (not hard for me in this case) and either reject the request or send the 100 Continue status through to the client, which will then upload the changes.
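To make the intended flow concrete, here is a rough sketch of the client side using HttpClient. The /changesets URL and the X-Base-Revision header are made up for illustration; the relevant part is setting ExpectContinue on the request headers so the server gets a chance to reject the request before the body is sent (how strictly the upload is actually deferred depends on the HTTP stack in use).

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class ChangeSetClient
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                var request = new HttpRequestMessage(HttpMethod.Post,
                    "https://example.com/api/changesets");        // hypothetical endpoint

                // Ask the server to inspect the headers and answer with
                // 100 Continue (or a rejection) before the body is uploaded.
                request.Headers.ExpectContinue = true;
                request.Headers.Add("X-Base-Revision", "abc123");  // hypothetical header

                request.Content = new StringContent("<changesets>...</changesets>",
                    Encoding.UTF8, "application/xml");

                var response = await client.SendAsync(request);
                Console.WriteLine(response.StatusCode);
            }
        }
    }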
My problem
As far as I have been able to figure out so far, ASP.NET doesn't support this scenario: by the time you see the request in your controller actions, the POST body has normally already been completely uploaded. I've had a brief look at WCF REST, but I haven't been able to see a way to do it there either; their conditional PUT example has the full request body before rejecting the request.
I'm happy to use any alternative framework that runs on .net or can easily be made to run on Windows Azure.
I can't recommend WcfRestContrib enough. It's free, and it has a lot of abilities.
But I think you need to use OpenRasta instead of WCF in order to do what you're wanting. There's a lot of stuff out there on it, like the wiki, blog post 1, and blog post 2. It might be a lot to take in, but it's a .NET framework that's truly focused on being RESTful, rather than RPC-like as WCF is. And it has the ability to work with headers, like you asked about. It even has PipelineContributors, which have access to the whole context of a call and can halt execution, handle redirections, or even render something different than what was expected.
EDIT:
As far as I can tell, this isn't possible in OpenRasta after all, because "100 continue is usually handled by the hosting environment, not by OR, so there’s no support for it as such, because we don’t get a chance to respond in the asp.net pipeline"

Do I need extra XSS security for ASP.NET 4 websites?

From what I understand about what ASP.NET does, and from my own personal testing with various XSS attacks, I found that my ASP.NET 4 website does not require any XSS prevention.
Do you think that an ASP.NET 4.0 website needs any added XSS security beyond its default options? I cannot enter any JavaScript or any tags into my text fields that are then immediately printed onto the page.
Disclaimer - this is based on a very paranoid definition of what "trusted output" is, but when it comes to web security, I don't think you CAN be too paranoid.
Taken from the OWASP page linked to below: Untrusted data is most often data that comes from the HTTP request, in the form of URL parameters, form fields, headers, or cookies. But data that comes from databases, web services, and other sources is frequently untrusted from a security perspective. That is, it might not have been perfectly validated.
In most cases, you do need more protection if you are taking input from ANY source and outputting it to HTML. This includes data retrieved from files, databases, etc - much more than just your textboxes. You could have a website that is perfectly locked down and have someone go directly to the database via another tool and be able to insert malicious script.
Even if you're taking data from a database where only a trusted user is able to enter the data, you never know if that trusted user will inadvertently copy and paste in some malicious script from a website.
Unless you absolutely, positively trust every piece of data that will be output on your website, and there is no possible way for a script to inadvertently (or maliciously, in the case of an attacker or disgruntled employee) put dangerous data into the system, you should sanitize all output.
If you haven't already, familiarize yourself with the info here: https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
and go through the other known threats on the site as well.
In case you've missed it, the Microsoft AntiXSS library is a very good tool to have at your disposal. In addition to a better version of the HtmlEncode function, it also has nice features like GetSafeHtmlFragment() for when you WANT to include untrusted HTML in your output and have it sanitized. This article shows proper usage: http://msdn.microsoft.com/en-us/library/aa973813.aspx The article is old, but still relevant.
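As a rough sketch of the kind of usage that article describes (the exact class names vary between AntiXSS versions - older releases expose these on AntiXss, newer ones on Encoder and Sanitizer in the Microsoft.Security.Application namespace - so treat this as an assumption and check the version you install):

    using Microsoft.Security.Application;

    public static class OutputHelper
    {
        // Encode untrusted text before writing it into HTML output.
        public static string SafeText(string untrusted)
        {
            return Encoder.HtmlEncode(untrusted);
        }

        // When you deliberately want to allow some HTML (e.g. a rich-text
        // comment), sanitize it down to a safe fragment instead of encoding it.
        public static string SafeHtmlFragment(string untrustedHtml)
        {
            return Sanitizer.GetSafeHtmlFragment(untrustedHtml);
        }
    }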
Sorry Dexter, ASP.NET 4 sites do require XSS protection. You're probably thinking that the inbuilt request validation is sufficient and whilst it does an excellent job, it's not foolproof. It's still essential that you validate all input against a whitelist of acceptable values.
The other thing is that request validation is only any good for reflected XSS, that is, XSS which is embedded in the request. It won't help you at all with persistent XSS, so if you have other data sources where the input validation has not been as rigorous, you're at risk. As such, you always need to encode your output, and encode it for the correct markup context (HTML, JavaScript, CSS). AntiXSS is great for this.
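In other words, pick the encoder that matches where the value ends up. A small illustration using the AntiXSS-style encoders (method names may differ slightly depending on the library version you use):

    using Microsoft.Security.Application;

    public static class ContextEncoding
    {
        // Value written into the HTML body of the page.
        public static string ForHtmlBody(string value)
        {
            return Encoder.HtmlEncode(value);
        }

        // Value written into an HTML attribute.
        public static string ForHtmlAttribute(string value)
        {
            return Encoder.HtmlAttributeEncode(value);
        }

        // Value emitted inside a JavaScript block.
        public static string ForJavaScript(string value)
        {
            return Encoder.JavaScriptEncode(value);
        }

        // Value emitted inside a CSS declaration.
        public static string ForCss(string value)
        {
            return Encoder.CssEncode(value);
        }
    }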
There's lots more info specifically as it relates to ASP.NET in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS).

How do you push an update to a non-HTML5 browser?

We are considering a web application to provide users with frequent updates to system data. Initially, the data would be limited to system pressure, flow, etc. but the concept could apply to many areas of our business. The data is stored in SQL Server.
The question is: how do we force a table on a webpage to update when new data is inserted into the database? For example, a pump reports a new flow value. The updates to the database can be throttled, but realistically we're looking at a new update every minute or two for our purposes.
This seems like a case where push notification would be used, but what can we use with ASP.NET? HTML5 is out of the question, although we've watched some push demos with web sockets.
Is there a push technology we can use for ASP.NET?
If not, or if it's a better solution, should we poll the database with jQuery / AJAX? Any suggestions for samples we should look at?
Using plain HTTP the server can only send responses to client requests, so truly pushing content without web sockets is not possible.
The most common solutions are:
polling the server for changes and updating the table if there are any, or
refreshing the page on the client frequently and having the server regenerate it when there is new data.
The latter method is the closest to pushing content, as the client does not retrieve raw data; but if you want to manipulate the data client-side, it is better to retrieve only the data.
A bonus of the latter is that the server turns the data into a plain file that it can easily serve to many clients, instead of creating the page every time it's opened.
Polling via ajax is the best solution here.
Since you are using ASP.NET, some of the built in ajax controls can make this pretty simple:
http://ajax.net-tutorials.com/controls/timer-control/
If you want to do a better job of this, you might consider creating a web service and using raw JavaScript or jQuery to handle the AJAX request/update. I say this because ASP.NET AJAX sends the full page view state back to the server, which is inefficient and usually unnecessary.
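For example, the endpoint being polled could be a plain generic handler that returns only the data, with no page or view state involved. This is just a sketch: the PumpReading type and the ReadingRepository.GetLatest() call are placeholders for your own data access.

    using System.Collections.Generic;
    using System.Web;
    using System.Web.Script.Serialization;

    // LatestReadings.ashx - polled by the page (e.g. with jQuery's $.getJSON)
    // every minute or two; returns only the data, not a full page.
    public class LatestReadingsHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Placeholder data-access call returning the newest pressure/flow values.
            IList<PumpReading> readings = ReadingRepository.GetLatest();

            context.Response.ContentType = "application/json";
            context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            context.Response.Write(new JavaScriptSerializer().Serialize(readings));
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }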
"Comet" is the technology you're looking for. It's basically a handful of techniques people have come up with to do the sort of thing you're asking for. The simplest of these techniques involve causing the browser to make constant requests to the server for any updates it should know about. The most versatile (but complex) technique involves clever use of an embedded <script> tag which references a dynamic script resource.
You can use an ASP.NET Timer control coupled with an UpdatePanel to periodically check for new data and then refresh the UpdatePanel.
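Roughly, that looks like the sketch below; the markup is shown in a comment and the grid and data-access names are assumptions. Each Tick causes a partial postback that refreshes only the contents of the UpdatePanel rather than the whole page.

    // Markup sketch (in the .aspx page):
    //   <asp:ScriptManager ID="ScriptManager1" runat="server" />
    //   <asp:Timer ID="Timer1" runat="server" Interval="60000" OnTick="Timer1_Tick" />
    //   <asp:UpdatePanel ID="ReadingsPanel" runat="server">
    //     <Triggers>
    //       <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
    //     </Triggers>
    //     <ContentTemplate>
    //       <asp:GridView ID="ReadingsGrid" runat="server" />
    //     </ContentTemplate>
    //   </asp:UpdatePanel>

    using System;

    public partial class Dashboard : System.Web.UI.Page
    {
        protected void Timer1_Tick(object sender, EventArgs e)
        {
            // Re-query the database and rebind only the grid inside the panel.
            ReadingsGrid.DataSource = ReadingRepository.GetLatest(); // hypothetical DAL call
            ReadingsGrid.DataBind();
        }
    }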

Why shouldn't data be modified on an HTTP GET request?

I know that using non-GET methods (POST, PUT, DELETE) to modify server data is The Right Way to do things. I can find multiple resources claiming that GET requests should not change resources on the server.
However, if a client were to come up to me today and say "I don't care what The Right Way to do things is, it's easier for us to use your API if we can just call URLs and get some XML back - we don't want to have to build HTTP requests and POST/PUT XML," what business-conducive reasons could I give to convince them otherwise?
Are there caching implications? Security issues? I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
Edit:
Thanks for the answers so far regarding prefetching. I'm not as concerned with prefetching, since this mostly concerns internal network API use and not visitable HTML pages that would have links that could be prefetched by a browser.
Prefetch: A lot of web browsers use prefetching, which means they will load a page before you click on the link, anticipating that you will click on that link later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests should not change state and they should be idempotent: issuing the request once or issuing it multiple times gives the same result, i.e. there are no side effects. For this reason GET HTTP requests are tightly tied to caching.
HTTP standard says so: The HTTP standard says what each HTTP method is for. Several programs are built to use the HTTP standard, and they assume that you will use it the way you are supposed to. So you will get undefined behavior from a slew of random programs if you don't follow it.
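To make those points concrete, here is a minimal ASP.NET MVC sketch of the convention being argued for (the OrderService type is a placeholder): reads go through GET and can safely be prefetched, crawled and cached, while anything that changes state sits behind POST.

    using System.Web.Mvc;

    public class OrdersController : Controller
    {
        // Safe and idempotent: crawlers, prefetchers and caches may repeat
        // this request without causing any side effects.
        [HttpGet]
        public ActionResult Details(int id)
        {
            return View(OrderService.Get(id));   // placeholder service call
        }

        // Changes server state, so it must not be reachable from a plain link
        // or an <img src="..."> tag; it requires an explicit form post.
        [HttpPost]
        public ActionResult Cancel(int id)
        {
            OrderService.Cancel(id);             // placeholder service call
            return RedirectToAction("Details", new { id });
        }
    }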
How about Google finding a link to that page with all the GET parameters in the URL and revisiting it every now and then? That could lead to a disaster.
There's a funny article about this on The Daily WTF.
GETs can be forced on a user and result in Cross-site Request Forgery (CSRF). For instance, if you have a logout function at http://example.com/logout.php, which changes the server state of the user, a malicious person could place an image tag on any site that uses the above URL as its source: http://example.com/logout.php. Loading this code would cause the user to get logged out. Not a big deal in the example given, but if that was a command to transfer funds out of an account, it would be a big deal.
Good reasons to do it the right way...
They are industry standard, well documented, and easy to secure. While you fully support making life as easy as possible for the client, you don't want to implement something that's easier in the short term in preference to something that's not quite so easy for them but offers long-term benefits.
One of my favourite quotes
Quick and Dirty... long after the Quick has departed, the Dirty remains.
For you this one is a case of "A stitch in time saves nine" ;)
Security:
CSRF is so much easier in GET requests.
Using POST won't protect you by itself, but GET makes exploitation and mass exploitation easier, for example through forums and other places that accept image tags.
Depending on what you do server-side, using GET can help an attacker launch a DoS (Denial of Service) attack. An attacker can spam thousands of websites with your expensive GET request in an image tag, and every single visitor of those websites will carry out this expensive GET request against your web server, which will cost you a lot of CPU cycles.
I'm aware that some pages are heavy anyway and this is always a risk, but it's a bigger risk if you add 10 big records in every single GET request.
Security for one. What happens if a web crawler comes across a delete link, or a user is tricked into clicking a hyperlink? A user should know what they're doing before they actually do it.
I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
...
I don't care what The Right Way to do things is, it's easier for us
Tell them to think of the worst API they've ever used. Can they not imagine how that was caused by a quick hack that got extended?
It will be easier (and cheaper) in 2 months if you start with something that makes sense semantically. We call it the "Right Way" because it makes things easier, not because we want to torture you.

ASP.Net, Capture image/screenshot of client error

We currently have fairly robust error handling functionality in our ASP.Net application.
We log all errors in the database and in a text file on the server, and we also send automated emails containing the error details back to our support people.
This all happens on the server, of course.
We would like to capture (and retrieve) an image of the client browser at the time the error occurred, to provide additional info for troubleshooting.
Is this at all possible?
If so what would be an elegant approach to this problem?
This is not technically impossible, but it is so impractical for nearly all purposes that it might as well be impossible. You would need a plugin running on the client's machine which can receive instructions from your error page to take the screenshot, connect to the server and upload it.
If your client screens have complex data which affects the state surrounding the exception, you should revisit your design to ensure all of that is recorded before it's sent to the client, so you can keep all relevant state tracked with a given exception.
Saying something is "impractical" is usually easier than actually trying to solve something that is difficult, but not technically impossible.
I have done some more research and have come across an approach that allows one to get hold of the rendered HTML server side. Furthermore, there are ways to convert HTML to images. I will implement the solution using a combination of the two.
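A minimal sketch of the first half of that idea, assuming a common page base class: the rendered HTML is captured by overriding Page.Render into a buffer. The ErrorLog call is a placeholder, and converting the captured HTML to an image would be handled by a separate component.

    using System.IO;
    using System.Web.UI;

    public class HtmlCapturingPage : Page
    {
        protected override void Render(HtmlTextWriter writer)
        {
            using (var buffer = new StringWriter())
            using (var bufferedWriter = new HtmlTextWriter(buffer))
            {
                // Render into the buffer instead of directly to the response.
                base.Render(bufferedWriter);
                string renderedHtml = buffer.ToString();

                // Placeholder call: store the HTML alongside the logged error so
                // it can later be converted to an image for troubleshooting.
                ErrorLog.SaveRenderedHtml(renderedHtml);

                // Still send the original output to the client.
                writer.Write(renderedHtml);
            }
        }
    }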
Capturing a client browser screenshot is not possible for security and privacy reasons. What you can (and IMHO should) do is capture the URL and the browser version and try to reproduce the error in the same environment.
