I need to upload a file using WebDAV, and my issue is that the server limits uploads to a maximum of 100 MB. Is there a way to upload the file in chunks (i.e., as multiple requests) to get around this 100 MB limitation? The server does allow larger files to be stored, so I'm fine as long as I can get the file there; the limit applies only to the upload itself.
WebDAV PUT is the same as HTTP PUT. PUT, as specified, does not support resumable uploads. See http://greenbytes.de/tech/webdav/rfc7231.html#rfc.section.B.p.7.
Either the server needs to be fixed to allow larger PUT operations, or you'll have to check whether it supports a custom protocol for resumable uploads.
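If the server can't be fixed, one pragmatic workaround is to split the file into pieces below the 100 MB limit, PUT each piece as its own WebDAV resource, and reassemble the pieces afterwards (which still needs some cooperation on the server side, or a follow-up job that concatenates the parts). Below is a minimal sketch using .NET's HttpClient; the base URL, credentials, and the .partN naming scheme are placeholders for illustration, not a standard protocol.

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class ChunkedWebDavUpload
    {
        // Stay safely under the server's 100 MB request limit.
        const int ChunkSize = 90 * 1024 * 1024;

        static async Task Main()
        {
            var baseUrl = "https://example.com/webdav/upload/";   // placeholder URL
            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password")));

            using var input = File.OpenRead("bigfile.bin");
            var buffer = new byte[ChunkSize];
            int part = 0, read;

            // PUT each slice as its own resource: bigfile.bin.part0, .part1, ...
            while ((read = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                var content = new ByteArrayContent(buffer, 0, read);
                var response = await client.PutAsync($"{baseUrl}bigfile.bin.part{part}", content);
                response.EnsureSuccessStatusCode();
                part++;
            }
            Console.WriteLine($"Uploaded {part} parts; they still need to be reassembled server-side.");
        }
    }

Some WebDAV servers (Nextcloud/ownCloud, for instance) layer their own chunked-upload conventions on top of this idea, so it's worth checking whether yours does before rolling your own.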
We are building a BI application, and our customers send us data files daily. We exchange data using CSV files, because our customers are used to viewing data in Excel and are not ready to use an API on their side yet (maybe in a few years we will be able to use an XML/JSON web service, we hope).
Currently the data transfer is done over FTP (SFTP, in fact). Our customers upload files automatically to an FTP server, and we have a cron task that checks whether a file has been sent.
But this approach has several disadvantages:
We cannot reliably tell whether an upload is finished or still in progress (we asked our customers to upload under a temporary name and rename the file afterwards, but many of them still don't do that; see the sketch after this list)
So we can try to guess, and consider the upload done once enough time has passed. But the FTP protocol doesn't let us query the server time, and clocks can be out of sync. We could upload an empty file and read its timestamp to learn the server time, but we need write permission to do that...
The FTP protocol allows uploads to be paused...
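For reference, the temporary-name convention mentioned in the first disadvantage boils down to two FTP operations: upload under a temp name, then rename once the transfer has completed, so the watcher never picks up a half-written file. A rough sketch with .NET's FtpWebRequest, using plain FTP for simplicity (FtpWebRequest doesn't speak SFTP; an SFTP library would follow the same upload-then-rename pattern). The host, path, and credentials are placeholders.

    using System;
    using System.IO;
    using System.Net;

    class FtpSafeUpload
    {
        static void Main()
        {
            var credentials = new NetworkCredential("user", "password");   // placeholder

            // 1) Upload under a temporary name so the watcher never sees a half-written file.
            var upload = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/incoming/data.csv.tmp");
            upload.Method = WebRequestMethods.Ftp.UploadFile;
            upload.Credentials = credentials;
            using (var src = File.OpenRead("data.csv"))
            using (var dst = upload.GetRequestStream())
            {
                src.CopyTo(dst);
            }
            upload.GetResponse().Close();

            // 2) Rename to the final name only once the upload has fully completed.
            var rename = (FtpWebRequest)WebRequest.Create("ftp://ftp.example.com/incoming/data.csv.tmp");
            rename.Method = WebRequestMethods.Ftp.Rename;
            rename.Credentials = credentials;
            rename.RenameTo = "data.csv";
            rename.GetResponse().Close();

            Console.WriteLine("Upload finished and renamed.");
        }
    }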
So we are considering transferring files by asking our customers to upload them directly to our application over HTTPS. This is more reliable, but less convenient:
Our customers cannot check the content of a file after upload
We have to be careful with upload size limits and timeouts on our server
Files can be quite large (up to 300 MB), so it's better to zip them before upload (which can shrink them to about 10% of their original size)
This is more work for us than just running an FTP server (we need to build a UI, show upload progress, list files so customers can download them back, ...)
Are there other solutions? How do BI applications usually share data? Is HTTPS a good solution for us?
We found a solution: a WebDAV server. We are using Nextcloud, which provides a web interface as well as scripted access over the WebDAV protocol.
It's more reliable than FTP, because the file only appears once the upload is complete.
And it's better than an HTTP upload to our own application: we don't have to handle the file upload, build interfaces, ...
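For the record, the scripted side of this setup is just an authenticated PUT against the WebDAV endpoint. A minimal sketch with .NET's HttpClient; the Nextcloud URL, user name, and app password are placeholders.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class WebDavCsvUpload
    {
        static async Task Main()
        {
            // Placeholder endpoint: Nextcloud exposes each user's files under remote.php/dav.
            var url = "https://cloud.example.com/remote.php/dav/files/customer1/daily/data.csv";

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("customer1:app-password")));

            // Stream the CSV straight from disk; the file only becomes visible
            // on the server once the PUT has completed successfully.
            using var content = new StreamContent(System.IO.File.OpenRead("data.csv"));
            var response = await client.PutAsync(url, content);
            response.EnsureSuccessStatusCode();

            Console.WriteLine($"Upload finished: {(int)response.StatusCode}");
        }
    }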
I have an intriguing problem on an Azure Website. My website uses 4 script files and 3 style files, each minified. They are not that big; the biggest is around 200 KB. The website has already started, and Azure's Always On option is turned on. When I call the Web API for data, it returns in under 50 ms.
But when the app is reloaded, it takes 250 ms just to get the first byte of the smallest script, and the others take much longer. The initial HTML loads in 60 ms. The scripts/styles are cached, so they are not downloaded again, but the TTFB is killing performance. This happens on every single reload. The app doesn't have any sophisticated configuration, so it should run much faster than this.
What could be causing this?
Although your static files are cached, the browser still issues requests with an If-Modified-Since header (which results in a 304).
While it doesn't need to download the actual content, it still has to wait one RTT plus server think time before it can continue.
I would suggest two things:
Adding Cache-Control and Expires headers - this will help avoid the 304 round trip in some cases (pretty much unless you hit F5); see the sketch after this list
Using a proper CDN - such as Incapsula or others - which will minimize the RTT + think time. It can also be used to easily control cache settings for various resources.
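To illustrate the first suggestion, here is a rough sketch of setting far-future caching headers from ASP.NET for a resource served through the managed pipeline; for files served directly by IIS you would typically set the equivalent in web.config instead. The handler, file path, and one-week lifetime are made-up examples.

    using System;
    using System.Web;

    // Hypothetical handler demonstrating far-future cache headers on a script resource.
    public class CachedScriptHandler : IHttpHandler
    {
        public bool IsReusable => true;

        public void ProcessRequest(HttpContext context)
        {
            // Cache-Control: public, max-age=604800, plus an Expires header,
            // so browsers can reuse the file without even sending a 304 check.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetMaxAge(TimeSpan.FromDays(7));
            context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(7));

            context.Response.ContentType = "application/javascript";
            context.Response.TransmitFile(context.Server.MapPath("~/scripts/app.min.js"));
        }
    }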
More good stuff here.
Good Luck!
From here:
As you saw earlier, IIS 7 caches the compressed versions of static files. So, if a request arrives for a static file whose compressed version is already in the cache, it doesn't need to be compressed again.
But what if there is no compressed version in the cache? Will IIS 7 then compress the file right away and put it in the cache? The answer is yes, but only if the file is being requested frequently. By not compressing files that are only requested infrequently, IIS 7 saves CPU usage and cache space.
By default, a file is considered to be requested frequently if it is requested two or more times per 10 seconds.
So, the reason your users are being served an uncompressed version of the JavaScript file is that it didn't meet the default threshold for compression; in other words, the file was not requested twice within 10 seconds.
To control this, there is one attribute we must change on the <serverRuntime> element, which controls compression: frequentHitThreshold. In order for your file to be compressed when it is requested once, change your <serverRuntime> element to look like this:
<serverRuntime enabled="true" frequentHitThreshold="1" />
This will slightly impact CPU if you serve many JavaScript files and have frequent visitors, but if your traffic is high enough for compressing these files to affect CPU, then they are most likely already compressed and cached anyway!
My guess would be Azure's Always On.
If it works anything like the one CloudFlare provides, it essentially proxies the request and tries to cache it.
Depending on how Azure implements this cache, it might wait for the script's output to complete in order to cache/validate it before passing it on to the browser.
It might be worth checking the caching configuration and disabling Always On for your scripts, if possible.
The scripts and styles are static files and are compressed by default (you can check this via the "Content-Encoding: gzip" HTTP header) before being sent to the client. So the TTFB consists of network latency, browser HTTP channel scheduling, and the time the server spends compressing the static file.
On the other hand, your Web API data is dynamic and is not compressed by default, so it's possible that its TTFB is lower than the TTFB for the static files.
However, you shouldn't switch off static compression: TTFB would shrink, but content transfer time would grow. In fact, you don't need to worry much about TTFB; see: https://blog.cloudflare.com/ttfb-time-to-first-byte-considered-meaningles/
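If you want to verify the compression behaviour yourself, here is a small sketch that fetches one of the static files, prints the Content-Encoding header, and measures the time until the response headers arrive (a rough stand-in for TTFB). The URL is a placeholder.

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class CompressionCheck
    {
        static async Task Main()
        {
            var url = "https://example.azurewebsites.net/scripts/app.min.js";   // placeholder

            using var client = new HttpClient();
            // Advertise gzip support so the server is allowed to compress the response.
            client.DefaultRequestHeaders.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));

            var sw = Stopwatch.StartNew();
            // ResponseHeadersRead returns as soon as the headers arrive, roughly approximating TTFB.
            using var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
            sw.Stop();

            Console.WriteLine($"Status: {(int)response.StatusCode}");
            Console.WriteLine($"Content-Encoding: {string.Join(", ", response.Content.Headers.ContentEncoding)}");
            Console.WriteLine($"Headers received after: {sw.ElapsedMilliseconds} ms");
        }
    }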
I ended up storing the files in Azure Storage and serving them through the Azure CDN. It responds quickly and costs nothing. I push them to blob storage on every publish, in a pre-build event, using Gulp.
Well... there are two main problems with your site:
You are using Azure - a high-priced service with poor performance... don't ask me why people think it's a good service
You are storing client-side files alongside the server files. While server files have to live on a specific server, client-side files can practically be served from... anywhere
So please use a CDN (or any other server) for your client-side files (mainly CSS and JS; you may want to consider moving fonts and images as well).
I have an ASP.NET web application and I want my users to be able to upload large files. However, some files are very large and use too much memory.
In principle it should be possible to receive the request stream and write it directly to a FileWriter stream, removing any need to load the entire file into memory first.
I've tried accessing Request.InputStream and writing it directly to a file. It works, but testing with larger files reveals that Request.InputStream only becomes available after the entire request has already been loaded into memory.
Can someone tell me an approach I can use to receive a normal Request.InputStream in ASP.NET and directly write it to a file without first loading it into memory?
Note, the file is sent through a normal request in a browser by posting a form with a file field.
(I actually use the blueimp jQuery File Upload plugin, but I don't think it's relevant to this question.)
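For reference, the approach described in the question usually takes the shape of a handler that reads the request body in small chunks and writes each chunk to disk as it arrives; on ASP.NET 4.0 and later, HttpRequest.GetBufferlessInputStream exposes the body without the default buffering. A rough sketch follows (the handler, target path, and chunk size are placeholders, and a real implementation would still have to parse the multipart/form-data framing around the file).

    using System.IO;
    using System.Web;

    // Hypothetical handler: streams the raw request body to disk in 64 KB chunks
    // instead of letting ASP.NET buffer the whole upload first.
    public class StreamingUploadHandler : IHttpHandler
    {
        public bool IsReusable => true;

        public void ProcessRequest(HttpContext context)
        {
            var targetPath = context.Server.MapPath("~/App_Data/upload.bin");   // placeholder

            using (Stream input = context.Request.GetBufferlessInputStream())
            using (var output = File.Create(targetPath))
            {
                var buffer = new byte[64 * 1024];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read);
                }
            }

            context.Response.StatusCode = 200;
            context.Response.Write("upload stored");
        }
    }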
The process is called byte serving.
Byte Serving:
Byte serving is the process of sending only a portion of an HTTP/1.1 message from a server to a client. Byte serving begins when an HTTP server advertises its willingness to serve partial requests using the Accept-Ranges response header. A client then requests a specific part of a file from the server using the Range request header.
It seems that IIS and ASP.NET are capable of handling Range and Accept-Ranges headers. There is a Range controller sample in Microsoft's Git repositories.
Here is an article that may be useful in configuring IIS to handle these requests.
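To make the byte-serving idea concrete, here is a small sketch of a client asking for only the first 1 MB of a resource via the Range header; a range-aware server answers with 206 Partial Content. The URL is a placeholder.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class RangeRequestDemo
    {
        static async Task Main()
        {
            var url = "https://example.com/files/video.mp4";   // placeholder

            using var client = new HttpClient();
            var request = new HttpRequestMessage(HttpMethod.Get, url);
            // Ask only for bytes 0..1048575; a range-aware server replies with 206 Partial Content.
            request.Headers.Range = new RangeHeaderValue(0, 1024 * 1024 - 1);

            using var response = await client.SendAsync(request);
            Console.WriteLine($"Status: {(int)response.StatusCode} {response.ReasonPhrase}");
            Console.WriteLine($"Content-Range: {response.Content.Headers.ContentRange}");

            var bytes = await response.Content.ReadAsByteArrayAsync();
            Console.WriteLine($"Received {bytes.Length} bytes");
        }
    }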
I am using an Azure cloud storage solution, and as such each file has its own URL. I need to allow users to download files such as PDFs from our ASP.NET website. Currently, we are using Response.TransmitFile to send the file to the client's browser, but this requires that we fetch the data from cloud storage and then send it on to the client (which seems like an inefficient way to do it).
I'm wondering whether we couldn't just have a direct link to the file, and if so, how would this differ from the Response.TransmitFile approach? That is, without TransmitFile we cannot set the Content-Type header, etc. How does that affect anything?
Thanks
Usually I stay away from using Response.TransmitFile as it does require that you fetch the file and then stream it down to the client.
The only times I have used it was to protect files and only serve them to users that had permission to access them instead of just linking directly to the file.
If the files you are serving are public, then I would recommend just linking to them. If all you need is to set the Content-Type header, then simply make sure the .pdf extension is mapped to the correct MIME type (application/pdf).
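If you go the direct-link route with Azure Blob Storage, note that the Content-Type comes from the blob's own properties, so it has to be set when (or after) the blob is uploaded. A sketch using the classic WindowsAzure.Storage client library; the connection string, container, and file names are placeholders, and the newer Azure.Storage.Blobs SDK exposes the same idea through BlobHttpHeaders.

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class BlobContentTypeExample
    {
        static void Main()
        {
            // Placeholder connection string / names.
            var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
            var client = account.CreateCloudBlobClient();
            var container = client.GetContainerReference("documents");
            container.CreateIfNotExists();

            var blob = container.GetBlockBlobReference("report.pdf");

            // Set the MIME type before uploading so a direct link is served as a PDF.
            blob.Properties.ContentType = "application/pdf";

            using (var file = System.IO.File.OpenRead(@"C:\temp\report.pdf"))
            {
                blob.UploadFromStream(file);
            }

            System.Console.WriteLine(blob.Uri);   // link this URL directly from the site
        }
    }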
I'd like to understand what happens under the hood when you do a web upload.
I guess it's one of these:
The file is loaded into memory by the browser, sent to the web server's buffer memory, and then the app is notified so it can collect it.
The file is read by the browser and sent to the web server at the same time, and the server can start saving the bytes progressively.
I tried uploading a very large file and put a breakpoint on the first line of the method receiving the upload. I could see the browser spending a long time uploading... but the breakpoint was only hit after a while.
I want to understand this because, in the worst case, if I allow big uploads they could blow up the server's memory at some point.
What happens if I upload a 2 GB file (assuming the web server/app accepts that length)? Would it take up 2 GB of server memory?
Cheers.
The documentation for the HttpPostedFile class (which represents a file uploaded to the server in ASP.NET) specifies:
Files are uploaded in MIME multipart/form-data format. By default, all requests, including form fields and uploaded files, larger than 256 KB are buffered to disk, rather than held in server memory.
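In practice this means a posted file can be read through HttpPostedFile.InputStream and copied to its final location in small chunks, without the whole 2 GB ever sitting in memory; the request size limits themselves are raised in configuration (maxRequestLength for ASP.NET, maxAllowedContentLength for IIS request filtering). A rough sketch of the handler side, with placeholder names:

    using System.IO;
    using System.Web;

    // Hypothetical handler: copies the posted file to disk in 64 KB chunks.
    // Large uploads have already been spooled to a temporary file by ASP.NET
    // once they crossed the buffering threshold, so memory use stays small.
    public class SaveUploadHandler : IHttpHandler
    {
        public bool IsReusable => true;

        public void ProcessRequest(HttpContext context)
        {
            HttpPostedFile posted = context.Request.Files["file"];   // field name is a placeholder
            if (posted == null)
            {
                context.Response.StatusCode = 400;
                return;
            }

            var target = Path.Combine(context.Server.MapPath("~/App_Data"),
                                      Path.GetFileName(posted.FileName));

            using (var output = File.Create(target))
            {
                posted.InputStream.CopyTo(output, 64 * 1024);   // stream, don't load it all at once
            }

            context.Response.Write($"stored {posted.ContentLength} bytes");
        }
    }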