When submitting a screen to process an uploaded file, I've been having trouble with larger files.
Eventually when submitting I get this error:
An error occurred during the server request:
Timeout reached during screen submit background request. Please retry your action
How can I get around the timeout error?
One option could be to increase the AJAX timeout limit. In Aviarc Admin, go to Applications > your app > Variables and set the ajax-timeout-millis variable to a value (in milliseconds) long enough for the file to upload completely, for example 300000 for five minutes (if the variable doesn't exist you can create it).
How big is the file you're trying to upload? It sounds like you need to configure the maximum POST size for the servlet container.
If using the embedded Jetty, the default POST limit is 10MB - you can configure this in the startup scripts using the HTTP_MAX_POST_SIZE variable.
If using Tomcat, the default POST limit is 2MB; you will need to configure maxPostSize as described here: http://tomcat.apache.org/tomcat-6.0-doc/config/http.html
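For example, with Tomcat 6 that attribute goes on the HTTP <Connector> element in server.xml. A hedged sketch (the value here is roughly 100MB and is purely illustrative):

<!-- server.xml: raise the POST limit on the HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxPostSize="104857600" />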
Other containers will have their own configuration mechanisms for the max POST size.
What I want to achieve:
Send a >50MB file via HTTP to a Logic App
The Logic App to save the file to an SFTP server
An error I am getting in the SFTP-SSH 'Create file' action:
The provided file size '64065320' for the create or update operation exceeded the maximum allowed file size '52428800' using non-chunked transfer mode. Please enable chunked transfer mode to create or update large files.
Chunking on the SFTP-SSH 'Create file' action is enabled. Overriding chunk size doesn't help. Using the body of the 'Compose' action as an input for 'Create file' also doesn't help.
[Screenshots in the original post showed the current workflow, the SFTP-SSH 'Create file' action parameters, the action settings, and the error.]
Any ideas about the reason for this error?
P.S. I want to clarify the issue; it is about a very specific workflow: when a large file is sent to a Logic App via HTTP (the 'When a HTTP request is received' trigger), it needs to be saved to an SFTP server. Not transformed, just saved as it is. I know that when collecting (pulling) a large file from elsewhere (SFTP/blob/etc.) and saving it to SFTP, chunking works fine. But in this scenario (pushing the file to the Logic App) it doesn't. Although the Handle large messages with chunking in Azure Logic Apps article first says that "Logic App triggers don't support chunking" and "for actions that support and are enabled for chunking you can't use trigger bodies", it then gives a workaround: "Instead, use the Compose action. Specifically, you must create a body field by using the Compose action to store the data output from the trigger body. Then, to reference the data, in the chunking action, use @body('Compose')". Well, this workaround didn't work for me, as seen from the screenshots I provided. I'd appreciate it if someone could clarify how to overcome this issue.
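For reference, the article's workaround corresponds to roughly the following in the Logic App's code view. This is only a sketch: the action names match the designer defaults, and the remaining 'Create file' inputs (connection, folder path, file name) are omitted.

"Compose": {
    "type": "Compose",
    "inputs": "@triggerBody()",
    "runAfter": {}
},
"Create_file": {
    "type": "ApiConnection",
    "inputs": {
        "body": "@body('Compose')"
    },
    "runAfter": {
        "Compose": [ "Succeeded" ]
    },
    "runtimeConfiguration": {
        "contentTransfer": { "transferMode": "Chunked" }
    }
}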
According to this documentation, when the endpoint you are sending the request to supports it, the HTTP connector can use chunking to transfer the whole payload as a series of partial messages. To comply with this connector's limit, Logic Apps splits any message larger than 30 MB into smaller chunks. You can split up large content downloads and uploads over HTTP so that your logic app and an endpoint can exchange large messages.
You can also refer HERE, which discusses the same topic.
We have a page that allows users to upload documents (multiple). When the upload takes a long time - either due to the size of the files or due to slow upload speeds - we get an exception saying "Request timed out".
We found that the exception is thrown as soon as the upload is complete. So we have modified the executionTimeout config entry to 6000 secs. But this error still shows up consistently.
We are running IIS6, .net 3.5 sp1 (asp .net 2.0).
Update
I'm able to reproduce this issue with relatively small files (multiple files with a total size of 75MB).
I can't explain it any better than Jon Galloway has, so I won't try :)
Basically there are a lot of forces fighting against you when trying to upload large files via HTTP. The moral of the story is this:
Using regular upload methods is not adequate for large files. Instead you should be using a separate method that is designed specifically for large files.
Maybe you should set the form to accept multipart data.
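In plain HTML terms that means giving the form the multipart/form-data encoding. A minimal sketch (the action URL and field names are placeholders):

<form method="post" enctype="multipart/form-data" action="Upload.aspx">
    <input type="file" name="file" />
    <input type="submit" value="Upload" />
</form>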
By upload, I presume you mean through a .aspx page. You need to set the following:
Server.ScriptTimeout = 9000 'Time in seconds
Note that this value is server-wide, so you should store the old value somewhere and reset it back to its original value when the upload completes.
http://msdn.microsoft.com/en-us/library/ms524831(VS.90).aspx
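A minimal C# sketch of that save-and-restore pattern inside the upload page's handler (the 9000-second value is just the example above):

int oldTimeout = Server.ScriptTimeout;
Server.ScriptTimeout = 9000; // time in seconds
try
{
    // ... read and save the uploaded file here ...
}
finally
{
    Server.ScriptTimeout = oldTimeout; // restore the original value
}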
try this
<httpRuntime maxRequestLength="maximum size you want to upload, in KB"
             executionTimeout="number of seconds before execution times out" />
in web.config
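For example, to allow uploads of roughly 200MB with a one-hour execution timeout (maxRequestLength is in KB and executionTimeout in seconds; the numbers are only illustrative):

<system.web>
    <httpRuntime maxRequestLength="204800" executionTimeout="3600" />
</system.web>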
I have a dynamically generated RSS feed that is about 150MB in size (don't ask).
The problem is that it keeps crapping out sporadically and there is no way to monitor it without downloading the entire feed to get a 200 status. Pingdom times out on it and returns a 'down' error.
So my question is, how do I check that this thing is up and running?
What type of web server, and server side coding platform are you using (if any)? Is any of the content coming from a backend system/database to the web tier?
Are you sure the problem is not with the client code accessing the file? Most clients have timeouts and downloading large files over the internet can be a problem depending on how the server behaves. That is why file download utilities track progress and download in chunks.
It is also possible that other load on the web server or the number of users is impacting the server. If you have little memory available, certain servers may not be able to serve a file of that size to many users. You should review how the server is sending the file and make sure it is chunking it up.
I would recommend that you do a HEAD request to check that the URL is accessible and that the server is responding at minimum. The next step might be to setup your download test inside or very close to the data center hosting the file to monitor further. This may reduce cost and is going to reduce interference.
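If you want to roll your own check, a minimal C# sketch of a HEAD request (the feed URL is a placeholder):

using System;
using System.Net;

class FeedCheck
{
    static void Main()
    {
        // Placeholder URL - point this at the real feed.
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/feed.rss");
        request.Method = "HEAD";
        request.Timeout = 10000; // milliseconds

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // A 200 here means the server is responding without
            // having to download the whole 150MB body.
            Console.WriteLine((int)response.StatusCode);
        }
    }
}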
Found an online tool that does what I needed:
http://wasitup.com uses HEAD requests, so it doesn't time out waiting to download the whole 150MB file.
Thanks for the help BrianLy!
Looks like Pingdom does not support HEAD requests. I've put in a feature request, but who knows.
I hacked this capability into mon for now (mon is a nice compromise between paying someone else to monitor and doing everything yourself). I have switched entirely to HTTPS, so I modified the https monitor to do it. I did it the dead-simple way: I copied the https.monitor file and called it https.head.monitor. In the new monitor file I changed the line that calls get_https to call head_https instead (you might also want to update the function name and the place where it's called).
Now in mon.cf you can call a head request:
monitor https.head.monitor -u /path/to/file
Ok, so here's the problem: I'm reading the stream from a FileUpload control, reading in chunks of n bytes and writing the array in a loop until I reach the stream's end.
Now the reason I do this is that I need to check several things while the upload is still going on (rather than calling Save(), which does the whole thing in one go). When doing this from the local machine, I can see the file just fine as it's uploading and its size increases (I had to add a Sleep() call in the loop to actually see the file being written).
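For context, the loop looks roughly like this (a C# sketch; FileUpload1, the buffer size and the target path are placeholders, and it assumes using System.IO):

Stream input = FileUpload1.PostedFile.InputStream;
byte[] buffer = new byte[8192];

using (FileStream output = File.Create(@"C:\Uploads\incoming.dat"))
{
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
        // progress checks / token-file updates happen here
    }
}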
However, when I upload the file from a remote machine, I don't get to see it until the file has completed uploading. Also, I've added another call to write the progress to a text file as the upload is going on, and I get the same thing. Local: the file updates as the upload goes on; remote: the token file only appears after the upload's done (which is somewhat useless, since I need it while the upload's still happening).
Is there some sort of security setting in IIS (or ASP.NET) that saves files in a temporary location for remote machines, as opposed to the local machine, and then moves them to the specified destination? I would liken this to ASP.NET displaying detailed error messages when browsing from the local machine (even on the public hostname), as opposed to the generic compilation error page/generic exception page that is shown when browsing from a remote machine (when customErrors are not off).
Any clues on this?
Thanks in advance.
The FileUpload control renders as an <input type="file"> HTML element; this means your browser will open that file, read ALL of its content, then encode and send it.
Your ASP.NET request only starts after IIS has received all of the browser's data.
That's why you'd need to code a client component (Flash, Java applet, Silverlight) to send the file in small chunks and rebuild it on the server side.
EDIT: Some information on MSDN:
To control whether the file to upload is temporarily stored in memory or on the server while the request is being processed, set the requestLengthDiskThreshold attribute of the httpRuntime element. This attribute enables you to manage the size of the input stream buffer. The default is 256 bytes. The value that you specify should not exceed the value that you specify for the maxRequestLength attribute.
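A hedged sketch of where those attributes sit in web.config (the numbers are placeholders; check the httpRuntime documentation for the units that apply to your framework version):

<system.web>
    <httpRuntime maxRequestLength="102400" requestLengthDiskThreshold="8192" />
</system.web>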
I understand that you want to check the file which is being uploaded for its content.
If this is your requirement, then why not add a textbox and populate it while you are reading the file from HttpPostedFile?
I have an application that scans an input directory every five seconds, and when a job (i.e. a file) is placed in the directory, the application reads it, processes it and outputs one or more files to an output directory.
My question is: if I wanted to put a web-based front end on this application, how would I wait for the processing to be complete?
User submits job
job is placed in input directory
......What am I doing on the web page here?
processing occurs
output file is generated
......How do I know that the job is finished?
The two solutions I came up with were:
poll output directory every x seconds from the webpage
use AJAX to poll a webservice or webpage that reports back whether the output file is present (a sketch of such an endpoint is below)
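A minimal C# sketch of that second option: a tiny endpoint the page could poll via AJAX (the output directory, query parameter and file-naming scheme are all illustrative):

using System.IO;
using System.Web;

public class JobStatusHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string jobId = context.Request.QueryString["id"] ?? "";
        string outputFile = Path.Combine(@"C:\Jobs\Output", jobId + ".out");

        context.Response.ContentType = "text/plain";
        context.Response.Write(File.Exists(outputFile) ? "done" : "pending");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}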
Is there a better design for the server? In other words, how would TCP or Named Pipes help in this situation? (Cannot use remoting due to a DCOM object.)
A solution we have used commercially in the past is basically this: the daemon writes date/time-stamped entries about what it is doing to a log (typically a DB), and the web frontend just shows the latest X entries from the log, with a little toggle to hide all of the "Looked in directory, no file found" messages. It worked fairly well, and we later upgraded it with AJAX (a timer that reloaded every 20 seconds).
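A hedged C# sketch of the daemon-side logging half of that approach (the table name, columns and connection string are illustrative):

using System.Data.SqlClient;

static class JobLogger
{
    public static void Log(string connectionString, string message)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO JobLog (LoggedAt, Message) VALUES (GETDATE(), @msg)", conn))
        {
            cmd.Parameters.AddWithValue("@msg", message);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

The web frontend then just selects the latest X rows ordered by the timestamp column.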
I don't think Named Pipes are going to make it any easier to get the web client to poll automatically, but they might make the server better able to notify another process that the conversion has completed, and ultimately queue a message to the web browser.
You can try having the web client poll every few seconds to see if the file processing has completed; alternatively, you could have something like Juggernaut "push" commands out to the page. Juggernaut works by using Flash to open a socket in the web browser that continually feeds JavaScript from the server. It could be responsible for sending a command to alert the browser that the file has completed and then issuing a redirect.