Questions about maxJsonLength in ASP.NET

Recently, I ran into a problem with my application: the size of the JSON string returned from the server was exceeding the default maxJsonLength. I've done some research and implemented some fixes including a variation of paging. Everything looks great at the moment. However, I still have some questions unanswered.
First of all, the majority of the sources point to this article:
http://geekswithblogs.net/frankw/archive/2008/08/05/how-to-configure-maxjsonlength-in-asp.net-ajax-applications.aspx
1. Why 2,097,152 (2 MB)? 2 MB is way too much data to load for a web page. (Unless the user is downloading something, but that's a different story.) Even 1 MB is too much.
2. Then, the author goes on with an example of a maxJsonLength of 500,000. Why this number? Is it just an example of how to set the property? Some sources state that 500,000 is the limit. Well, it's not, because I tested my application with 2,097,152 (2 MB, roughly 4 times 500,000) and it worked.
3. Some other sources state that 4 MB is the limit... So, what is the limit? Is there a limit? Does it have something to do with the limit on the size of the response from the server?
4. Finally, I'd like a firm recommendation for the length of the JSON string received from the server: not the number maxJsonLength should be set to, but the actual length of the JSON string to strive for.
Thank you in advance.

There is no hard and fast rule here. Your JSON length is going to depend on your application and what information you are returning to the client.
If you really want a rule of thumb: make it as SMALL as possible while still communicating the data that you need.
As for maximum values, the true limitation will most likely come down to browser requirements, but I personally would never go above 2 MB for a JSON message, simply because of what it would take to send that much down the wire.

I understand that the total limit is determined by the lesser of the maxJsonLength that you have mentioned and HttpRuntimeSection.MaxRequestLength. I am currently testing this and will get back to you.
Of course, the bigger issue here is that it is seldom a good idea to return such large amounts of data. Whenever a response starts to exceed about 100 KB, I take another look at my overall design and find ways to serve out smaller chunks as they are needed. Even 100 KB is high for most pure-data scenarios, by which I mean textual data, not images or scripts.

Related

Why are the time-out values so small?

I'm not sure this is strictly a Stack Overflow question; it is related to general design practice, but since I cannot think of a more relevant Stack Exchange site, I'm posting it here.
In the general design practice of converting an async call into a sync one, we use a time-out and wait for the result. While this may not be good practice from the point of view of responsiveness, it definitely makes the implementation easier.
I have seen many such implementations, and I have often noticed that developers tend to give a very small time-out value. I can understand that people may have had the need for a responsive system in mind when they did this. But many of the applications I have seen are data-critical ones where the loss of data is very bad, so it is always better to wait longer and try to get the data than to time out early and give the user an error message. The situations where the server fails to return data, or the client cannot reach the server, are rare, and in those situations I would expect a large time-out for such waits. After all, a time-out does not mean the wait will always last until the given value; the time-out is only an upper limit. So I have always argued for higher values here. But I see low values used in more and more places, and now I'm wondering whether there is something in this practice that I don't understand.
So, my question is: are there any arguments, other than the need for responsiveness, for implementing a very small time-out when waiting?
As always, the right decision depends on the real-life data.
The timeout should be proportional to the time it usually takes to complete an operation successfully.
Sending a UDP message, for example, could take between 1 and 50 milliseconds, so a timeout of 100 milliseconds is more than reasonable; copying a file over the wire, however, could take minutes or more, so a 100-millisecond timeout is laughable.
There are pros and cons to both short and long timeouts so it's a tradeoff. Longer timeouts use more resources (tasks, threads, memory, etc.) for the same amount of work while short timeouts, as you mentioned, may result in loss of data.
In conclusion, you need to set a configurable timeout that sounds reasonable, then figure out in production whether you are timing out too many operations (or, conversely, waiting too long on doomed ones) and calibrate accordingly.
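To make the mechanics concrete, here is a minimal C++ sketch of such a bounded wait (the 500 ms value and the worker body are illustrative, not recommendations); note that the timeout is only an upper limit, so a fast result returns immediately:

    #include <chrono>
    #include <future>
    #include <iostream>
    #include <thread>

    int main() {
        const auto timeout = std::chrono::milliseconds(500); // make this configurable
        std::future<int> result = std::async(std::launch::async, [] {
            // Stand-in for the real async call.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            return 42;
        });
        // wait_for returns as soon as the result is ready; the timeout
        // only caps how long we are willing to block.
        if (result.wait_for(timeout) == std::future_status::ready)
            std::cout << "got " << result.get() << "\n";
        else
            std::cout << "timed out; the operation may still finish later\n";
    }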

ASP.NET guaranteed response time

Does anybody have any hints on how to approach writing an ASP.NET app that needs to have a guaranteed response time?
When under high load that would normally cause us to exceed our desired response time, we want to throw out an appropriate number of requests, so that the rest of the requests can return before the maximum response time. Throwing out requests based on exceeding a fixed requests-per-second rate is not viable, as other external factors that control response time cause the maximum rate we can safely support to drift and fluctuate fairly drastically over time.
It's OK if a few requests take a little too long, but we'd like the great majority of them to meet the required response-time window. We want to throw out the minimal, or near-minimal, number of requests so that we can process the rest within the allotted response time.
It should account for ASP.NET queuing time, and ideally network request time, though that is less important.
We'd also love to do adaptive work, like make a db call if we have plenty of time, but do some computations if we're shorter on time.
Thanks!
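(To illustrate the kind of shedding being asked about, here is a generic sketch: stamp each request on arrival, and when a worker finally picks it up, reject it if the remaining budget can no longer cover the typical processing cost. C++ and all names here are illustrative; this is the idea, not an ASP.NET mechanism.)

    #include <chrono>
    #include <iostream>

    using Clock = std::chrono::steady_clock;
    using Ms = std::chrono::milliseconds;

    struct Request { Clock::time_point arrived; };

    // Shed when the time already spent queued, plus the typical
    // processing cost, would blow the response-time budget.
    bool shouldShed(const Request& r, Ms budget, Ms typicalWork) {
        auto queued = std::chrono::duration_cast<Ms>(Clock::now() - r.arrived);
        return queued + typicalWork > budget;
    }

    int main() {
        // A request that has already waited 180 ms of a 200 ms budget,
        // with ~50 ms of typical work remaining, gets shed.
        Request r{Clock::now() - Ms(180)};
        std::cout << (shouldShed(r, Ms(200), Ms(50)) ? "shed\n" : "process\n");
    }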
SLAs with a guaranteed response time require a bit of work.
First off, you need to spend a lot of time profiling your application. You want to understand exactly how it behaves under various load scenarios: light, medium, heavy, crushing. When doing this profiling step, it is critical that it be done on the exact same hardware/software configuration that production uses. Results from one set of hardware have no bearing on results from even a slightly different set. This isn't just about the servers, either; I'm talking routers, switches, cable lengths, hard drives (make/model), everything, even BIOS revisions on the machines, RAID controllers and any other device in the loop.
While profiling, make sure the workloads represent an actual slice of what you are going to see. Obviously certain load mixes will execute faster than others.
I'm not entirely sure what you mean by "throw out an appropriate number of requests". That sounds like you want to drop those requests... which sounds wrong on a number of levels. Doing this usually kills an SLA as being an "outage".
Next, you are going to have to actively monitor your servers for load. If load levels get within a certain percentage of your max then you need to add more hardware to increase capacity.
Another thing: monitoring response times internally is only part of it. You'll need to monitor them from various external locations as well, depending on where your clients are.
And that's just about your application. There are other forces at work such as your connection to the Internet. You will need multiple providers with active failover in case one goes down... Or, if possible, go with a solid cloud provider.
Yes, in the last mvcConf one of the speakers compared the performance of various view engines for ASP.NET MVC. I think it was Steven Smith's presentation that did the comparison, but I'm not 100% sure.
You have to keep in mind, however, that ASP.NET will really only play a very minor role in the performance of your app; the DB is likely to be your biggest bottleneck.
Hope the video helps.

Can you sacrifice performance to get concurrency in SQLite on an NFS?

I need to write a client/server app stored on a network file system. I am quite aware that this is a no-no, but was wondering if I could sacrifice performance (Hermes: "And this time I mean really slash.") to prevent data corruption.
I'm thinking something along the lines of the following (see the sketch after this list):
Create a separate file in the system every time a write is called (I'm willing to do it for every connection if necessary).
Store the file name as the current millisecond timestamp.
Check whether a file with that time or earlier exists.
If one exists, wait a random time between 0 and 10 ms, and try again.
While my file has the earliest timestamp, do the work and delete the file lock; otherwise wait 10 ms and try again.
If a file persists for more than a minute, log it as an error and stop until a person determines that the data is not corrupted.
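A rough sketch of that scheme in C++ using POSIX calls (all names are illustrative; note that open() with O_EXCL is only atomic on NFSv3 and later, which is part of why schemes like this are fragile):

    #include <fcntl.h>
    #include <unistd.h>
    #include <dirent.h>
    #include <chrono>
    #include <cstdlib>
    #include <cstring>
    #include <string>

    // Try to take the lock; on false the caller backs off 0-10 ms and retries.
    bool acquireLock(const std::string& dir, std::string& lockPath) {
        long long now = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        lockPath = dir + "/" + std::to_string(now) + ".lock";
        int fd = open(lockPath.c_str(), O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0) return false;                 // same-millisecond collision
        close(fd);
        // We hold the lock only if no earlier-timestamped lock file exists.
        bool earliest = true;
        if (DIR* d = opendir(dir.c_str())) {
            while (dirent* e = readdir(d)) {
                if (!std::strstr(e->d_name, ".lock")) continue;
                long long ts = std::atoll(e->d_name); // parses the leading digits
                if (ts > 0 && ts < now) { earliest = false; break; }
            }
            closedir(d);
        }
        if (!earliest) unlink(lockPath.c_str());  // someone got there first
        return earliest;                          // on success: work, then unlink
    }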
The problem I see is trying to maintain the previous state if something locks up, or choosing to ignore the failure if the state change was actually successful.
Is there a better way of doing this, one that doesn't involve not doing it this way? Or has anyone written one of these with far fewer problems than the SQLite FAQ warns about? Will these mitigations even factor into preventing data corruption?
A couple of notes:
This must exist on an NFS; the why is not important, because it is not my decision to make (it seems I was not clear enough on that point).
The number of readers/writers on the system will be between 5 and 10, all reading and writing at the same time, but rarely to the same record.
There will only be clients and a shared storage space; there is no way to put a server on it or use a server-based RDBMS. If there were, obviously I would do it in a New York minute.
The amount of data will initially be about 70 MB (plain text, uncompressed), and it will grow continuously from there at a reasonable, but not tremendous, rate.
I will accept an answer of "No, you can't get reasonably guaranteed concurrency on an NFS by sacrificing performance" if it contains a detailed and reasonable explanation of why.
Yes, there is a better way. Don't use NFS to do this.
If you are willing to create a new file every time something changes, I expect that you have a small amount of data and/or very infrequent changes. If the data is small, why use SQLite at all? Why not just have files with node names and timestamps?
I think it would help if you described the real problem you are trying to solve a bit more. For example if you have many readers and one writer, there are other approaches.
What do you mean by "concurrency"? Do you actually mean "multiple readers/multiple writers", or can you get by with "multiple readers/one writer with limited latency"?

What is the best compression library for very small amounts of data (3-4 KiB)?

I am working on a game engine which is loosely descended from Quake 2, adding some things like scripted effects (allowing the server to specify special effects in detail to a client, instead of having only a limited number of hardcoded effects which the client is capable of). This is a tradeoff of network efficiency for flexibility.
I've hit an interesting barrier. See, the maximum packet size is 2800 bytes, and only one can go out per client per frame.
Here is the script to do a "sparks" effect (could be good for bullet impact sparks, electrical shocks, etc.)
http://pastebin.com/m7acdf519 (If you don't understand it, don't sweat it; it's a custom syntax I made and not relevant to the question I am asking.)
I have done everything possible to shrink the size of that script; I've even reduced the variable names to single letters. But the result is exactly 405 bytes, meaning you can fit at most 6 of these per frame. I also have in mind a few server-side changes which could shave off another 12 bytes, and a protocol change which might save another 6, although the savings would vary depending on the script you are working with.
However, of the roughly 387 bytes that would remain, I estimate that only 41 would be unique between multiple usages of the effect. In other words, this is a prime candidate for compression.
It just so happens that R1Q2 (a backward-compatible Quake 2 engine with an extended network protocol) has Zlib compression code. I could lift this code, or at least follow it closely as a reference.
But is Zlib necessarily the best choice here? I can think of at least one alternative, LZMA, and there could easily be more.
The requirements:
Must be very fast (it must have a very small performance hit if run over 100 times a second)
Must cram as much data as possible into 2800 bytes
Small metadata footprint
GPL compatible
Zlib is looking good, but is there anything better? Keep in mind, none of this code is being merged yet, so there's plenty of room for experimentation.
Thanks,
-Max
EDIT: Thanks to those who have suggested compiling the scripts into bytecode. I should have made this clear-- yes, I am doing this. If you like you can browse the relevant source code on my website, although it's still not "prettied up."
This is the server-side code:
Lua component: http://meliaserlow.dyndns.tv:8000/alienarena/lua_source/lua/scriptedfx.lua
C component: http://meliaserlow.dyndns.tv:8000/alienarena/lua_source/game/g_scriptedfx.c
For the specific example script I posted, this gets a 1172-byte source down to 405 bytes, which is still not small enough. (Keep in mind I want to fit as many of these as possible into 2800 bytes!)
EDIT2: There is no guarantee that any given packet will arrive. Each packet is supposed to contain "the state of the world," without relying on info communicated in previous packets. Generally, these scripts will be used to communicate "eye candy." If there's no room for one, it gets dropped from the packet and that's no big deal. But if too many get dropped, things start to look strange visually and this is undesirable.
LZO might be a good candidate for this.
FINAL UPDATE: The two libraries seem about equivalent. Zlib gives about 20% better compression, while LZO's decoding speed is about twice as fast, but the performance hit for either is very small, nearly negligible. That is my final answer. Thanks for all other answers and comments!
UPDATE: After implementing LZO compression and seeing only slightly better performance, it is clear that my own code is to blame for the performance hit (a massively increased number of scripted effects is possible per packet, so my effect "interpreter" is getting exercised a lot more). I would like to humbly apologize for scrambling to shift blame, and I hope there are no hard feelings. I will do some profiling, and then maybe I will be able to get some numbers that are more useful to someone else.
ORIGINAL POST:
OK, I finally got around to writing some code for this. I started out with Zlib; here are my first findings.
Zlib's compression is insanely good. It reliably reduces packets of, say, 8.5 KiB down to, say, 750 bytes or less, even when compressing with Z_BEST_SPEED (instead of Z_DEFAULT_COMPRESSION). The compression time is also pretty good.
However, I had no idea the decompression speed of anything could possibly be this bad. I don't have actual numbers, but it must be taking at least 1/8 of a second per packet! (Core2Duo T550 @ 1.83 GHz.) Totally unacceptable.
From what I've heard, LZMA trades worse performance for better compression when compared to Zlib. Since Zlib's compression is already overkill and its performance is already incredibly bad, LZMA is off the table sight unseen for now.
If LZO's decompression time is as good as it's claimed to be, then that is what I will be using. I think in the end the server will still be able to send Zlib packets in extreme cases, but clients can be configured to ignore them and that will be the default.
Zlib might be a good candidate: the license is very good, it works fast, and its authors say it has very little overhead, and overhead is what makes use on small amounts of data problematic.
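For what it's worth, a minimal sketch of zlib's one-shot API (the payload is a stand-in, and error handling is reduced to return-code checks):

    #include <zlib.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Stand-in payload; in practice this would be the packet body.
        const unsigned char src[] = "example packet payload, example packet payload";
        uLong srcLen = sizeof(src);

        uLongf dstLen = compressBound(srcLen);    // worst-case output size
        std::vector<unsigned char> dst(dstLen);
        if (compress2(dst.data(), &dstLen, src, srcLen, Z_BEST_SPEED) != Z_OK)
            return 1;

        std::vector<unsigned char> back(srcLen);  // receiver must know the original size
        uLongf backLen = srcLen;
        if (uncompress(back.data(), &backLen, dst.data(), dstLen) != Z_OK)
            return 1;

        std::printf("%lu -> %lu bytes and back\n",
                    (unsigned long)srcLen, (unsigned long)dstLen);
        return 0;
    }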
You should look at OpenTNL and adapt some of the techniques they use there, like the concept of Network Strings.
I would be inclined to use the most significant bit of each character, which is currently wasted: by shifting groups of 9 bytes leftwards, you can fit them into 8 bytes.
You could go further and map the characters into a smaller space. Can you get them down to 6 bits (i.e. only 64 valid characters), for example by not allowing capital letters and subtracting 0x20 from each character (so that space becomes value 0)?
You could go further still by mapping the frequency of each character and using Huffman-type compression to reduce the average number of bits per character.
I suspect that no algorithm will do much better than that in the general case, as there is essentially no redundancy left in the message after the changes you have already made.
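A sketch of that MSB-packing idea, written as generic 7-bit packing rather than fixed groups of 9 (the function name is illustrative):

    #include <cstdint>
    #include <string>
    #include <vector>

    // Pack 7-bit characters into a contiguous bit stream, dropping the
    // unused most significant bit of each byte (8 input bytes -> 7 output bytes).
    std::vector<uint8_t> pack7(const std::string& in) {
        std::vector<uint8_t> out;
        uint32_t acc = 0;  // bit accumulator
        int bits = 0;      // number of pending bits in acc
        for (unsigned char c : in) {
            acc = (acc << 7) | (c & 0x7F);  // append the low 7 bits
            bits += 7;
            while (bits >= 8) {             // flush whole bytes, high bits first
                bits -= 8;
                out.push_back(static_cast<uint8_t>(acc >> bits));
            }
        }
        if (bits > 0)                       // pad the final partial byte
            out.push_back(static_cast<uint8_t>(acc << (8 - bits)));
        return out;
    }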
How about sending a binary representation of your script?
I'm thinking along the lines of an Abstract Syntax Tree, with each procedure having an identifier.
This means a performance gain on the clients due to one-time parsing, and a decrease in size due to removing the method names.

Implementing a download manager that supports resuming

I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download).
From the info I've gathered so far, when sending the HTTP request I need to add a header field with the key "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets.
So roughly, what I have in mind is to split the file into the number of allowed connections and send an HTTP request per part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets have already been downloaded and simply not requesting those.
Is this the right way to do this?
What if the web server doesn't support resuming? (My guess is it will ignore the "Range" header and just send the entire file.)
When sending the HTTP requests, should I specify the entire part's size in the range? Or should I ask for smaller pieces, say 1024k per request?
When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it wise in terms of memory? What if I have several downloads running simultaneously?
If I'm not using a memory-mapped file, should I open the file once per allowed connection, or simply seek when a connection needs to write? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)
Note: I'll probably be using Qt, but this is a general question so I left code out of it.
Regarding the request/response:
For a ranged request, you can get three different responses:
206 Partial Content - resuming is supported and possible; check the Content-Range header for the size/range of the response
200 OK - byte ranges ("resuming") are not supported, and the whole resource ("file") follows
416 Requested Range Not Satisfiable - incorrect range (past EOF, etc.)
Content-Range usually looks like this: Content-Range: bytes 21010-47000/47022, that is, bytes start-end/total.
Check the HTTP spec for details, especially sections 14.5, 14.16 and 14.35.
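As an illustration, here is a minimal sketch using libcurl (an assumption on my part; the question is library-agnostic) to fetch one ranged chunk; note that CURLOPT_RANGE takes the start-end pair without the "bytes=" prefix:

    #include <curl/curl.h>
    #include <cstdio>

    // Write callback: append received body bytes to a file.
    static size_t onData(char* ptr, size_t size, size_t nmemb, void* userdata) {
        return fwrite(ptr, size, nmemb, static_cast<FILE*>(userdata));
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        FILE* out = std::fopen("part0.bin", "wb"); // placeholder names throughout

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file.bin");
        curl_easy_setopt(curl, CURLOPT_RANGE, "0-1048575"); // first 1 MB chunk
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, onData);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);

        CURLcode rc = curl_easy_perform(curl);
        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        std::printf("result=%d http=%ld\n", rc, status);    // 206 = range honored

        std::fclose(out);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }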
I am not an expert in C++; however, I once built a .NET application which needed similar functionality (download scheduling, resume support, prioritizing downloads).
I used the Microsoft BITS (Background Intelligent Transfer Service) component, which is developed in C. Windows Update uses BITS too. I went for this solution because I don't think I am a good enough programmer to write something of this level myself ;-)
Although I am not sure whether you can get the source code of BITS, I do think you should have a look at its documentation, which might help you understand how they implemented it: the architecture, interfaces, etc.
Here it is: http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx
I can't answer all your questions, but here is my take on two of them.
Chunk size
There are two things you should consider about chunk size:
The smaller they are, the more overhead you get from sending the HTTP requests.
With larger chunks you run the risk of downloading the same data twice if a download fails.
I'd recommend you go with smaller chunks of data. You'll have to do some tests to see what size is best for your purpose, though.
In memory vs. files
You should write the data chunks to an in-memory buffer and flush it to disk when it is full. If you are going to download large files, running out of RAM would be troublesome for your users. If I remember correctly, IIS stores requests smaller than 256 KB in memory and writes anything larger to disk; you may want to consider a similar approach.
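A small sketch of that buffer-then-flush approach (the 256 KB threshold echoes the IIS figure above; names are illustrative):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Accumulate downloaded chunks in memory and hit the disk only when
    // the buffer passes a threshold; the destructor flushes the remainder.
    class BufferedWriter {
        FILE* file_;
        std::vector<char> buf_;
        static constexpr std::size_t kFlushAt = 256 * 1024;
    public:
        explicit BufferedWriter(FILE* f) : file_(f) { buf_.reserve(kFlushAt); }
        ~BufferedWriter() { flush(); }
        void write(const char* data, std::size_t n) {
            buf_.insert(buf_.end(), data, data + n);
            if (buf_.size() >= kFlushAt) flush();
        }
        void flush() {
            if (buf_.empty()) return;
            std::fwrite(buf_.data(), 1, buf_.size(), file_);
            buf_.clear();
        }
    };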
Besides keeping track of the offsets marking the beginning of your segments and each segment's length (unless you want to compute that upon resume, which would involve sorting the offset list and calculating the distance between adjacent offsets), you will want to check the Accept-Ranges header of the HTTP response sent by the server to make sure it supports the Range header at all. The best way to specify a range is "Range: bytes=START_BYTE-END_BYTE"; the range you request includes both START_BYTE and END_BYTE, and thus consists of (END_BYTE - START_BYTE) + 1 bytes.
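To make the inclusive-range arithmetic concrete, a short sketch that splits a resource into ranges, using the question's 4 MB / 4 connections example:

    #include <cstdio>

    int main() {
        // Split a 4 MB resource across 4 connections into inclusive
        // "start-end" ranges of (end - start) + 1 bytes each.
        const long long total = 4LL * 1024 * 1024;
        const int parts = 4;
        for (int i = 0; i < parts; ++i) {
            long long start = total * i / parts;
            long long end = total * (i + 1) / parts - 1;
            std::printf("Range: bytes=%lld-%lld (%lld bytes)\n",
                        start, end, end - start + 1);
        }
        return 0;
    }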
Requesting micro chunks is something I'd advise against, as you might be blacklisted by a firewall rule meant to block HTTP floods. In general, I'd suggest you don't make chunks smaller than 1 MB and don't create more than 10 chunks.
Depending on what control you have over the download: if you've got socket-level control, consider writing only once every 32 KB at least, or writing the data asynchronously.
I can't comment on the memory-mapped file idea, but if the downloaded file is large it's not going to be a good idea, as you'll eat up a lot of RAM and eventually even cause the system to swap, which is not efficient.
As for handling the chunks, you could just create several files, one per segment, optionally preallocating the disk space by filling each file with as many \x00 bytes as the size of the chunk (preallocating might save you some time while you write during the download, but will make starting the download slower), and then finally writing all of the chunks sequentially into the final file.
One thing you should beware of is that several servers have a maximum concurrent connections limit, and you don't get to know it in advance, so you should be prepared to handle HTTP errors/timeouts and to change the size of the chunks, or to create a queue of chunks in case you created more chunks than the maximum number of connections.
Not really an answer to the original questions, but another thing worth mentioning is that a resumable downloader should also check the Last-Modified date on a resource before trying to grab the next chunk of something that may have changed.
It seems to me you would want to limit the size of each download chunk. Large chunks could force you to repeat the download of data if the connection aborts close to the end of a part. That is especially an issue on slower connections.
For pause/resume support, look at this simple example:
Simple download manager in Qt with pause/resume support
