fast-render vs server-side rendering - meteor

I tried to look it up, but there is no real comparison, just mentions that both improve performance. Does anyone know which of these approaches is better for client and server performance?
It seems like SSR puts all the hard work on the server, sending plain HTML to the client. Also, it attaches data to the whole template, so every time the data changes, is the whole template re-sent to the client?
One thing I think would be nice, but I'm not sure about: if the data is sent from the server, are publications still needed?
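For what it's worth, as I understand fast-render it does not replace publications: it runs your normal publications on the initial request and inlines their data into the first HTML response so the client cache is already filled when the page renders, whereas SSR renders markup on the server. A minimal sketch of the standard publication/subscription pair that either approach builds on (the "posts" collection and the limit are invented for illustration):

```typescript
import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";

export const Posts = new Mongo.Collection("posts");

if (Meteor.isServer) {
  // The publication is still the single source of truth for which documents
  // a client may see; fast-render only changes *when* that data arrives.
  Meteor.publish("posts", function () {
    return Posts.find({}, { limit: 20 });
  });
}

if (Meteor.isClient) {
  // Without fast-render this subscription fills the client cache over DDP
  // after page load; with fast-render the same data is inlined into the
  // initial HTML, so the first render isn't empty.
  Meteor.subscribe("posts");
}
```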

Streaming stdout to a web page

This seems like it should be a really simple thing to achieve; unfortunately, web development was never my strong point.
I have a bunch of scripts, and I would like to launch them from a web page and see the real-time stdout text on the page. Some of the scripts take a long time to run, so the normal single response isn't good enough (I have this working already).
As far as I can see, my options are:
stdout to a file, and periodically (every couple of seconds) send a request from the client and respond with the contents of this file.
Chunked HTTP responses? I'm not sure if this is what they are used for; I tried to implement this already, but I think I may be misunderstanding their purpose.
Websockets (I'm using a Luvit server, so this isn't an option).
... Something else?
I'm sure there must be a standard way of achieving this; I see other sites doing it all the time. TeamCity, for example. Or chat rooms (vanilla TCP sockets?).
Any pointers in the right direction are appreciated. The simplest method possible, please; if that's just sending lots of scheduled requests from the client, then so be it.
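For what it's worth, option 2 in the list above (chunked responses) is the usual fit for this: if you write the child process's stdout straight to the HTTP response without setting a Content-Length, the server streams it to the browser as it is produced. A minimal sketch in Node/TypeScript (the idea carries over to Luvit, whose HTTP API is modelled on Node's); ./myscript.sh and the /run path are placeholders:

```typescript
import { createServer } from "node:http";
import { spawn } from "node:child_process";

createServer((req, res) => {
  if (req.url !== "/run") {
    res.statusCode = 404;
    res.end();
    return;
  }
  // No Content-Length, so Node falls back to chunked transfer encoding and
  // each write() goes out to the client as soon as it is made.
  res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });

  const child = spawn("./myscript.sh");
  child.stdout.on("data", (chunk) => res.write(chunk));
  child.stderr.on("data", (chunk) => res.write(chunk));
  child.on("close", (code) => res.end(`\n[exited with code ${code}]\n`));
}).listen(8080);
```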
That heavily reminds me of the Common Gateway Interface (CGI).
Your ideas all sound like the right direction. As you are using a shell script, with some potentially nontrivial interactions with the web server, I feel it could make sense to point out where to dig for examples of this kind of code, which was common a long time ago and basically always very error-prone.
Practically, your script is a CGI script, doing typical things.
In the earlier days and years of the internet, that was the "normal way" to implement web pages that are not just static files (HTML or others).
The page is basically implemented as a shell script (or any other program reading from stdin and writing to stdout).
Part of what you are doing/proposing is very similar, and I think there are useful lessons to learn from old CGI code.
For example, getting the buffering right, from inside the script over stdout, through the web server, and onto the client's page, can of course be tricky.
So digging out old examples could help a lot.
(Much of this may be obvious to you, the OP, personally, so take the "you" as addressing a potential reader.)
The tricky part in general will be the buffering, I expect. If you are used to explicitly handling stdin/stdout buffering in shell for programs that do not support it themselves, you can imagine the kind of things to expect - but if you are not used to it, be warned: I remember CGI being worse, as you have to keep the buffering of the HTTP server in sync too (let's hope it is handled automatically) - so maybe start asking questions and digging for examples early.
The CGI-style way would be exactly what you have implemented now - and if the buffering is right, that should be as real-time as it can get. But I understand that you get timeouts because of the long runtime? Or do you have strongly varying runtimes?
In terms of getting it as real-time as possible, there is nothing better than writing stdout to the http stream.
(I assume we accept the overhead of going through an HTTP server.)
Also, I'm thinking of line buffering, so not flushing every character - is that good enough for the use case? (i.e. no animated progress indicator lines / ANSI escapes that you want to see in real time)
Then maybe the best approach is to work around issues like the timeouts but keep the concept. If real time is not that important, other ways may of course be better in many respects. One point is that other methods could be required for any kind of scalability.
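On the buffering point: many programs switch from line buffering to block buffering once stdout is a pipe instead of a terminal, so output arrives in big, delayed lumps. One way to force line buffering when spawning the child is coreutils' stdbuf (Linux; ./myscript.sh is again a placeholder):

```typescript
import { spawn } from "node:child_process";

// stdbuf -oL asks the child to line-buffer its stdout, so every completed
// line is flushed to the pipe immediately rather than waiting for a full
// block. Pipe it onward exactly as before.
const child = spawn("stdbuf", ["-oL", "./myscript.sh"]);
child.stdout.on("data", (chunk) => process.stdout.write(chunk));
```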

Best practices - When to use Server / Client code

I was searching for information about one of my doubts, but I couldn't find any. I'm working on an ASP.NET site and using AJAX to request data; since I'm currently working on my own, I don't know web programming best practices.
I usually get all the information I need from the server and use JavaScript to display/modify it, and AJAX to send it back to the server. A friend of mine uses PHP for most of the programming; he rarely uses any JavaScript, and he told me it's way faster this way, since it does not consume the client's resources.
The basic question actually is:
According to best practices, is it better for the server to provide just the data needed by the application, or is it better to use the server for more than this?
That is going to depend on the expected amount of traffic for the site, the amount of content being generated, and the expectations of the end user.
On a high-traffic site, it is actually "faster" for the end user if you let JavaScript generate a portion of the content on the client side. You can also deliver a better user experience around long load times through client-side scripting than you can if the content is generated completely on the server.
In most cases you would need at least some backend code, e.g. when validating user input or when retrieving information from a real persistent database. And what about somebody who has JavaScript disabled in their user agent, somebody using a screen reader, or search engine crawlers?
IMHO you should at least (again, in most cases) have backend code that is able to do all the work and spit out a full web page to the client. In addition to this you can add JavaScript functionality to make the user interface "smoother", for example by validating user data before submitting it to the server (remember to ALWAYS also check on the server side) or by loading partial HTML (AJAX).
The point about it being faster or using fewer resources when done server-side doesn't make much sense. Even if it were true, it wouldn't matter much (and I highly doubt the claim anyway). If you use client-side scripting to load only the parts that are needed, it will if anything use fewer resources on both the client and the server.
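To illustrate the "make it smoother on the client, but always re-check on the server" point, a rough sketch (Node/Express here purely for brevity; the same split applies to ASP.NET or PHP, and the /signup route and email rule are invented):

```typescript
import express from "express";

// This check can run in the browser for instant feedback before submitting...
function looksLikeEmail(value: string): boolean {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value);
}

// ...but the server must repeat it, because anything client-side can be bypassed.
const app = express();
app.use(express.json());

app.post("/signup", (req, res) => {
  const email = String(req.body?.email ?? "");
  if (!looksLikeEmail(email)) {
    res.status(400).json({ error: "Invalid email address" });
    return;
  }
  // ... create the account here ...
  res.status(201).json({ ok: true });
});

app.listen(3000);
```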

How do I handle/use 100 Continue in a REST web service?

Some background
I am planning to write a REST service which helps facilitate collaboration between multiple client systems. Similar to how git or hg handle things, I want the client to perform all merging locally, and the server to reject new changes unless they have been merged with existing changes.
How I want to handle it
I don't want clients to have to upload all of their change sets before being told they need to merge first. I would like to do this by performing a POST with the Expect: 100-continue header. The server can then verify that it can accept the change sets based on the header information (not hard for me in this case) and either reject the request or send the 100 Continue status through to the client, which will then upload the changes.
My problem
As far as I have been able to figure out so far, ASP.NET doesn't support this scenario: by the time you see the request in your controller actions, the POST body has normally already been completely uploaded. I've had a brief look at WCF REST, but I haven't been able to see a way to do it there either; their conditional PUT example has the full request body before rejecting the request.
I'm happy to use any alternative framework that runs on .NET or can easily be made to run on Windows Azure.
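For reference, this is the client-side shape of the handshake described above: the body is only written after the server answers with 100 Continue. A sketch using Node's raw HTTP client (the /changesets path and the X-Changeset-Ids header are invented for illustration):

```typescript
import http from "node:http";

const req = http.request(
  {
    method: "POST",
    host: "example.com",
    path: "/changesets",
    headers: {
      Expect: "100-continue",
      // Invented header: just enough metadata for the server to accept or
      // reject the upload before the (potentially large) body is sent.
      "X-Changeset-Ids": "abc123,def456",
      "Content-Type": "application/octet-stream",
    },
  },
  (res) => console.log("final status:", res.statusCode)
);

// Node emits "continue" only once the server has replied with 100 Continue;
// that is the point at which it is safe to start uploading the change sets.
req.on("continue", () => {
  req.write(Buffer.from("...change set payload...")); // placeholder body
  req.end();
});
```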
I can't recommend WcfRestContrib enough. It's free, and it has a lot of capabilities.
But I think you need to use OpenRasta instead of WCF in order to do what you want. There's a lot of material out there on it, like the wiki, blog post 1, and blog post 2. It might be a lot to take in, but it's a .NET framework that's truly focused on being RESTful, and not RPC-like as WCF is. It also has the ability to work with headers, as you asked about. It even has PipelineContributors, which have access to the whole context of a call and can halt execution, handle redirections, or even render something different than what was expected.
EDIT:
As far as I can tell, this isn't possible in OpenRasta after all, because "100 continue is usually handled by the hosting environment, not by OR, so there’s no support for it as such, because we don’t get a chance to respond in the asp.net pipeline"

Why shouldn't data be modified on an HTTP GET request?

I know that using non-GET methods (POST, PUT, DELETE) to modify server data is The Right Way to do things. I can find multiple resources claiming that GET requests should not change resources on the server.
However, if a client were to come up to me today and say, "I don't care what The Right Way to do things is; it's easier for us to use your API if we can just call URLs and get some XML back - we don't want to have to build HTTP requests and POST/PUT XML," what business-conducive reasons could I give to convince them otherwise?
Are there caching implications? Security issues? I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
Edit:
Thanks for the answers so far regarding prefetching. I'm not as concerned with prefetching, since this is mostly about internal network API use and not visitable HTML pages that would have links a browser could prefetch.
Prefetch: A lot of web browsers use prefetching, which means they will load a page before you click on the link, anticipating that you will click on it later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want something deleted by a GET request for this reason.
Caching: GET HTTP requests should not change state, and they should be idempotent. Idempotent means that issuing a request once, or issuing it multiple times, gives the same result, i.e. there are no side effects. For this reason GET HTTP requests are tightly tied to caching.
The HTTP standard says so: The HTTP standard says what each HTTP method is for. Several programs are built to use the HTTP standard, and they assume that you will use it the way you are supposed to. If you don't, you will get undefined behavior from a slew of random programs.
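To make the contrast concrete, a rough sketch (Express, with an invented /articles resource): reads go through GET and stay side-effect free, while anything that changes state goes through POST/PUT/DELETE.

```typescript
import express from "express";

const app = express();

// Safe and idempotent: repeating this request changes nothing, so browsers,
// proxies, crawlers and prefetchers may fetch or cache it freely.
app.get("/articles/:id", (req, res) => {
  res.json({ id: req.params.id, title: "..." });
});

// Not safe: this changes server state, so it must not be reachable via GET,
// where a prefetching browser or an indexing bot could trigger it by accident.
app.delete("/articles/:id", (req, res) => {
  // ... delete the article here ...
  res.status(204).end();
});

app.listen(3000);
```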
How about Google finding a link to that page with all the GET parameters in the URL and revisiting it every now and then? That could lead to a disaster.
There's a funny article about this on The Daily WTF.
GETs can be forced on a user and result in Cross-Site Request Forgery (CSRF). For instance, if you have a logout function at http://example.com/logout.php which changes the server state of the user, a malicious person could place an image tag on any site that uses that URL as its source. Loading the image would cause the user to get logged out. Not a big deal in the example given, but if that were a command to transfer funds out of an account, it would be a big deal.
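A sketch of the usual mitigation for that example: take logout off GET entirely and tie the POST to a per-session token the attacker cannot know (Express; the session/token handling is stubbed out for illustration):

```typescript
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Stub for illustration: in a real app the token is generated per session
// and stored server-side or in a signed cookie.
function expectedCsrfToken(req: express.Request): string {
  return "token-stored-in-this-user's-session";
}

// There is no GET route for /logout, so <img src="http://example.com/logout">
// on a third-party page does nothing. The state change requires a POST that
// carries a token only the legitimate page could have embedded in its form.
app.post("/logout", (req, res) => {
  if (req.body.csrfToken !== expectedCsrfToken(req)) {
    res.status(403).send("Bad CSRF token");
    return;
  }
  // ... destroy the session here ...
  res.redirect("/");
});

app.listen(3000);
```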
Good reasons to do it the right way...
They are industry standard, well documented, and easy to secure. While you fully support making life as easy as possible for the client, you don't want to implement something that's easier in the short term in preference to something that's not quite so easy for them but offers long-term benefits.
One of my favourite quotes
Quick and Dirty... long after the Quick has departed the Dirty remains.
For you this one is a "A stitch in time saves nine" ;)
Security:
CSRF is so much easier with GET requests.
Using POST won't protect you by itself, but GET makes exploitation easier, including mass exploitation via forums and other places which accept image tags.
Depending on what you do on the server side, using GET can help an attacker launch a DoS (Denial of Service) attack. An attacker can spam thousands of websites with your expensive GET request in an image tag, and every single visitor of those websites will carry out this expensive GET request against your web server, which will cost you a lot of CPU cycles.
I'm aware that some pages are heavy anyway and this is always a risk, but it's a bigger risk if you add 10 big records in every single GET request.
Security for one. What happens if a web crawler comes across a delete link, or a user is tricked into clicking a hyperlink? A user should know what they're doing before they actually do it.
I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
...
I don't care what The Right Way to do things is, it's easier for us
Tell them to think of the worst API they've ever used. Can they not imagine how that was caused by a quick hack that got extended?
It will be easier (and cheaper) in 2 months if you start with something that makes sense semantically. We call it the "Right Way" because it makes things easier, not because we want to torture you.

ASP.Net, Capture image/screenshot of client error

We currently have fairly robust error handling functionality in our ASP.Net application.
We log all errors in the database and in a text file on the server, and we also send automated emails containing the error details back to our support people.
This all happens on the server, of course.
We would like to capture (and retrieve) an image of the client's browser at the time the error occurred, to provide additional info for troubleshooting.
Is this at all possible?
If so what would be an elegant approach to this problem?
This is not technically impossible, but it is so impractical for nearly all purposes that it might as well be impossible. You would need a plugin running on the client's machine which can receive instructions from your error page to take the screenshot, connect to the server and upload it.
If your client screens have complex data which affects the state surrounding the exception, you should revisit your design to ensure all of that is recorded before it's sent to the client, so you can keep all relevant state tracked with a given exception.
Saying something is "impractical" is usually easier than actually trying to solve something that is difficult, but not technically impossible.
I have done some more research and have come across an approach that allows one to get hold of the rendered HTML server-side. Furthermore, there are ways to convert HTML to images, so I will implement the solution using a combination of the two.
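A rough sketch of the client half of that combination, assuming a hypothetical /api/client-error endpoint: on an unhandled error, post the rendered markup plus some context back to the server, which can then feed the HTML into whatever HTML-to-image converter you settle on.

```typescript
// Runs in the browser; /api/client-error is an invented endpoint name.
window.addEventListener("error", (event: ErrorEvent) => {
  const snapshot = {
    url: window.location.href,
    userAgent: navigator.userAgent,
    message: event.message,
    // The DOM as currently rendered, which the server can convert to an image.
    html: document.documentElement.outerHTML,
  };
  // sendBeacon keeps working even if the page is navigating away or failing.
  navigator.sendBeacon("/api/client-error", JSON.stringify(snapshot));
});
```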
Capturing a screenshot of the client's browser is not possible, for security and privacy reasons. What you can (and IMHO should) do is capture the URL and the browser version and try to reproduce the error in the same environment.
