TidHTTPServer "Out of memory" on large file upload - http

I'm using Delphi 10.3.1 and Indy's TIdHTTP / TIdHTTPServer.
I created a client/server application to archive files.
The client uses a TIdHTTP component; the code is something like this:
procedure TForm1.SendFileClick(Sender: TObject);
var
  Stream: TIdMultipartFormDataStream;
begin
  Stream := TIdMultipartFormDataStream.Create;
  try
    Stream.AddFormField('field1', 'hello world');
    Stream.AddFile('field2', 'c:\temp\gigafile.mp4');
    IdHTTP.Post('http://192.168.1.100:1717', Stream);
  finally
    Stream.Free;
  end;
end;
The server uses a TIdHTTPServer component.
Everything seemed to work perfectly until I uploaded very large video files (>= 1 GB), at which point I got an "Out of memory" error.
By debugging, I saw that the error occurs in the PreparePostStream function (line 1229 of the IdCustomHTTPServer unit) when it calls LIOHandler.ReadStream; the OnCommandGet event has not fired yet.
LIOHandler.ReadStream fails when it runs AdjustStreamSize (line 2013 of the IdIOHandler unit).
In my last test, with a large video file, the value of ASize in AdjustStreamSize was 1091918544, and I got the error during the execution of the line:
AStream.Size := ASize
I think the origin of the error is in the System.Classes unit, in the following procedure, at the SetPointer... line:
procedure TMemoryStream.SetCapacity(NewCapacity: NativeInt);
{$IF SizeOf(LongInt) = SizeOf(NativeInt)}
begin
  SetPointer(Realloc(LongInt(NewCapacity)), FSize);
  FCapacity := NewCapacity;
end;
I read many articles on the web, but I could not work out whether there is something wrong in my code.
How can I solve this, or is there a limit to the size of the files I can upload with TIdHTTPServer?

By default, TIdHTTPServer receives posted data using a TMemoryStream, which will obviously not work well for such large files. You can use the server's OnCreatePostStream event to provide an alternative TStream object to receive into, such as a TFileStream.
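A minimal sketch of such a handler, assuming Indy 10 (the temp-file naming below is illustrative only; you would also have to decide when to delete or move the file, e.g. in the server's OnDoneWithPostStream event):

procedure TForm1.IdHTTPServerCreatePostStream(AContext: TIdContext;
  AHeaders: TIdHeaderList; var VPostStream: TStream);
begin
  // Receive the request body into a temporary file instead of a TMemoryStream.
  // Assumes System.Classes and System.IOUtils (for TPath) are in the uses clause.
  VPostStream := TFileStream.Create(
    TPath.Combine(TPath.GetTempPath, TPath.GetGUIDFileName), fmCreate);
end;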

A 32-bit Delphi application is by default limited to 2 GB of address space. Adding these lines to the .DPR project file:
const
  IMAGE_FILE_LARGE_ADDRESS_AWARE = $0020;
{$SetPEFlags IMAGE_FILE_LARGE_ADDRESS_AWARE}
allows the application to use up to 2.5 GB of RAM on 32-bit versions of Windows and up to 3.5 GB on 64-bit versions (https://cc.embarcadero.com/item/24309).
Anyway, I think @RemyLebeau's solution is the best one.

Related

Strange ZQuery behavior

I'm using Zeos and a SQLite3 DB in Delphi:
ZQuery2.Close;
ZQuery2.SQL.Clear;
ZQuery2.SQL.Add('SELECT * FROM users WHERE un = ' + QuotedStr( UserName ) );
ZQuery2.Open;
OutputDebugString(PWideChar( ZQuery2.FieldDefList.CommaText )); // log : id,un,pw
OutputDebugString(PWideChar(ZQuery2.FieldByName('pw').AsString)); //causes error sometimes
The code works, but sometimes I get the following error message:
Exception class EDatabaseError with message 'ZQuery2: Field 'pw' not found'.
This is odd because a field of a dataset shouldn't just disappear while the app is in the middle of running, especially if other fields are still operating normally. So, I would suspect something like a memory overwrite being the cause.
Memory overwrites happen when something is written to the wrong place in memory, overwriting what is there, usually because of an incorrect pointer value or a so-called "buffer overrun", where the writing operation carries on beyond where it should stop. Usually, the pointer value is so wildly wrong that the OS can detect it and raise an AV, but sometimes it is less obvious.
Delphi's memory manager has a 'full debug mode' which adds special checks for this condition, see here.
I suggest you enable full debug mode as per the linked document and wait for the exception to occur.
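For reference, a sketch of how full debug mode is typically switched on with FastMM4 (assuming you have downloaded the full FastMM4 sources, defined FullDebugMode in FastMM4Options.inc or in the project's conditional defines, and placed FastMM_FullDebugMode.dll next to the executable; the project and unit names here are placeholders):

program MyApp;

uses
  FastMM4,  // must be the very first unit, so it installs its checks before any allocation
  Vcl.Forms,
  MainUnit in 'MainUnit.pas' {Form1};

begin
  Application.Initialize;
  Application.CreateForm(TForm1, Form1);
  Application.Run;
end.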

Blazor preview 9/mono-wasm memory access out of bounds: max string size for DotNet.invokeMethod?

Since .NET Core 3 preview 9, I am facing an issue when invoking a .NET method and passing it a large string from JavaScript.
Code is worth more than a thousand words, so the snippet below reproduces the issue. It works when length = 1 * mb but fails when length = 2 * mb.
#page "/repro"
<button onclick="const mb = 1024 * 1024; const length = 2 * mb;console.log(`Attempting length ${length}`); DotNet.invokeMethod('#GetType().Assembly.GetName().Name', 'ProcessString', 'a'.repeat(length));">Click Me</button>
#functions {
[JSInvokable] public static void ProcessString(string stringFromJavaScript) { }
}
The error message is:
Uncaught RuntimeError: memory access out of bounds
at wasm-function[2639]:18
at wasm-function[6239]:10
at Module._mono_wasm_string_from_js (http://localhost:52349/_framework/wasm/mono.js:1:202444)
at ccall (http://localhost:52349/_framework/wasm/mono.js:1:7888)
at http://localhost:52349/_framework/wasm/mono.js:1:8238
at Object.toDotNetString (http://localhost:52349/_framework/blazor.webassembly.js:1:39050)
at Object.invokeDotNetFromJS (http://localhost:52349/_framework/blazor.webassembly.js:1:37750)
at u (http://localhost:52349/_framework/blazor.webassembly.js:1:5228)
at Object.e.invokeMethod (http://localhost:52349/_framework/blazor.webassembly.js:1:6578)
at HTMLButtonElement.onclick (<anonymous>:2:98)
I need to process large strings, which represent the content of a file.
Is there a way to increase this limit?
Apart from breaking down the string into multiple segments and performing multiple calls, is there any other way to process a large string?
Is there any other approach for processing large files?
This used to work in preview 8.
Is there a way to increase this limit?
No (unless you modify and recompile blazor and mono/wasm that is).
Apart from breaking down the string into multiple segments and performing multiple calls, is there any other way to process a large string?
Yes, since you are on the client side, you can use shared-memory techniques: you basically map a .NET byte[] to a JavaScript ArrayBuffer. See this (disclaimer: my library) or this library for reference on how to do it. These examples use the binary content of actual JavaScript Files, but the approach is applicable to strings as well. There is no reference documentation for these APIs yet, mostly just examples and the Blazor source code.
Is there any other approach for processing large files?
See 2)
I recreated your issue in a netcore 3.2 Blazor app (somewhere between 1 and 2 MB of data kills it, just as you described). I updated the application to netcore 5.0 and the problem is fixed (it was still working when I threw 50 MB at it).

Send a large file with HTTP.jl

I would like to implement a server with HTTP.jl and Julia. After some computation, the server would return a "large" file (several hundred MB). I would like to avoid having to read the whole file into memory and then send it to the client.
Some frameworks have a specific function for this (e.g. Flask's send_file, http://flask.pocoo.org/docs/0.12/api/#flask.send_file) or allow streaming the content to the client (http://flask.pocoo.org/docs/0.12/patterns/streaming/).
Is either of these two options also available in HTTP.jl, or in any other Julia web package?
Here is some test code which reads the file testfile.txt; I want to avoid loading the complete file into memory.
import HTTP
using Sockets  # for the ip"" string macro on Julia >= 0.7

f = open("testfile.txt","w")
write(f,"test")
close(f)

router = HTTP.Router()

function testfun(req::HTTP.Request)
    f = open("testfile.txt")
    data = read(f)
    close(f)
    return HTTP.Response(200, data)
end

HTTP.register!(router, "GET", "/testfun", HTTP.HandlerFunction(testfun))
server = HTTP.Servers.Server(router)
task = @async HTTP.serve(server, ip"127.0.0.1", 8000; verbose=false)
sleep(1.0)

req = HTTP.request("GET", "http://127.0.0.1:8000/testfun/")

# end server
put!(server.in, HTTP.Servers.KILL)

@show String(req.body)
You can use memory-mapped IO like this:
using Mmap  # standard library, needed for Mmap.mmap

function testfun(req::HTTP.Request)
    data = Mmap.mmap(open("testfile.txt"), Array{UInt8,1})
    return HTTP.Response(200, data)
end
data now looks like a normal byte array to Julia, but is actually linked to the file, which might be exactly what you want. The file will be closed upon garbage collection; if you have many requests and no garbage collection is triggered, you might end up with a lot of open files. If your requests take quite long anyway, you might consider calling gc() at the beginning of the request.

System.IO.Directory::GetFiles() Polling from AX 2009, Only Seeing New Files Every 10s

I wrote code in AX 2009 to poll a directory on a network drive, every 1 second, waiting for a response file from another system. I noticed that using a file explorer window, I could see the file appear, yet my code was not seeing and processing the file for several seconds - up to 9 seconds (and 9 polls) after the file appeared!
The AX code calls System.IO.Directory::GetFiles() using ClrInterop:
interopPerm = new InteropPermission(InteropKind::ClrInterop);
interopPerm.assert();
files = System.IO.Directory::GetFiles(#POLLDIR,'*.csv');
// etc...
CodeAccessPermission::revertAssert();
After much experimentation, it emerges that the first call to ::GetFiles() in my program's lifetime starts a notional "ticking clock" with a period of 10 seconds. Only calls made on a 10-second tick find any new files that may have appeared, though all calls do still report files that were found on an earlier 10 s "tick" since the first call to ::GetFiles().
If, when I start the program, the file is not there, then all the other calls to ::GetFiles(), 1 second after the first call, 2 seconds after, etc., up to 9 seconds after, simply do not see the file, even though it may have been sitting there since 0.5 s after the first call!
Then, reliably, and repeatably, the call 10s after the first call, will find the file. Then no calls from 11s to 19s will see any new file that might have appeared, yet the call 20s after the first call, will reliably see any new files. And so on, every 10 seconds.
Further investigation revealed that if the polled directory is on the AX AOS machine, this does not happen, and the file is found immediately, as one would expect, on the call after the file appears in the directory.
But this figure of 10s is reliable and repeatable, no matter what network drive I poll, no matter what server it's on.
Our network certainly doesn't have 10s of latency to see files; as I said, a file explorer window on the polled directory sees the file immediately.
What is going on?
Sounds like your issue is due to SMB caching - from this technet page:
Name, type, and ID: Directory Cache [DWORD] DirectoryCacheLifetime
Registry key the cache setting is controlled by: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters
Description: This is a cache of recent directory enumerations performed by the client. Subsequent enumeration requests made by client applications, as well as metadata queries for files in the directory, can be satisfied from the cache. The client also uses the directory cache to determine the presence or absence of a file in the directory, and uses that information to prevent clients from repeatedly attempting to open files which are known not to exist on the server. This cache is likely to affect distributed applications running on multiple computers accessing a set of files on a server, where the applications use an out-of-band mechanism to signal each other about modification/addition/deletion of files on the server.
In short, try setting the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Lanmanworkstation\Parameters\DirectoryCacheLifetime to 0.
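For example, from an elevated command prompt (one possible way to apply it; the client may need to reconnect to the share, or the machine be rebooted, before the change takes effect):

reg add HKLM\System\CurrentControlSet\Services\Lanmanworkstation\Parameters /v DirectoryCacheLifetime /t REG_DWORD /d 0 /f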
Thanks to @Jan B. Kjeldsen, I have been able to solve my problem using FileSystemWatcher. Here is my implementation in X++:
class SelTestThreadDirPolling
{
}

public server static Container SetStaticFileWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    InteropPermission interopPerm;
    System.IO.FileSystemWatcher fw;
    System.IO.WatcherChangeTypes watcherChangeType;
    System.IO.WaitForChangedResult res;
    Container cont;
    str fileName;
    str oldFileName;
    str changeType;
    ;
    interopPerm = new InteropPermission(InteropKind::ClrInterop);
    interopPerm.assert();

    fw = new System.IO.FileSystemWatcher();
    fw.set_Path(_dirPath);
    fw.set_IncludeSubdirectories(false);
    fw.set_Filter(_filenamePattern);

    watcherChangeType = ClrInterop::parseClrEnum('System.IO.WatcherChangeTypes', 'Created');
    res = fw.WaitForChanged(watcherChangeType, _timeoutMs);
    if (res.get_TimedOut())
        return conNull();

    fileName = res.get_Name();
    // changeType can be: Created, Deleted, Renamed or Changed
    changeType = System.Enum::GetName(watcherChangeType.GetType(), res.get_ChangeType());
    fw.Dispose();
    CodeAccessPermission::revertAssert();

    if (changeType == 'Renamed')
        oldFileName = res.get_OldName();

    cont += fileName;
    cont += changeType;
    cont += oldFileName;
    return cont;
}

void waitFileSystemWatcher(str _dirPath, str _filenamePattern, int _timeoutMs)
{
    container cResult;
    str filename, changeType, oldFilename;
    ;
    cResult = SelTestThreadDirPolling::SetStaticFileWatcher(_dirPath, _filenamePattern, _timeoutMs);
    if (cResult)
    {
        [filename, changeType, oldFilename] = cResult;
        info(strfmt("filename=%1, changeType=%2, oldFilename=%3", filename, changeType, oldFilename));
    }
    else
    {
        info("TIMED OUT");
    }
}

void run()
{
    ;
    this.waitFileSystemWatcher(@'\\myserver\mydir', 'filepattern*.csv', 10000);
}
I should acknowledge the following for forming the basis of my X++ implementation:
https://blogs.msdn.microsoft.com/floditt/2008/09/01/how-to-implement-filesystemwatcher-with-x/
I would guess DAXaholic's answer is correct, but you could try other solutions like EnumerateFiles.
In your case I would rather wait for the files than poll for them.
With FileSystemWatcher there is minimal delay from file creation until your process wakes up. It is trickier to use, but avoiding polling is a good thing. I have never used it over a network.

Seeking not working in HTML5 audio tag

I have a lighttpd server running locally. If I load a static file on the server (through an html5 audio tag), it plays and seeks fine.
However, seeking doesn't work when running a dev server (web.py/CherryPy) or if I return the bytes via a defined action url instead of as a static file. It won't load the duration either.
According to the "HTTP byte range requests" section in this Opera Page it's something to do with support for byte range requests/partial content responses. The content is treated as streaming instead.
What I don't understand is:
1. If the browser has downloaded the whole file, surely it can display the duration, and surely it can seek.
2. What I need to do on the web server to enable byte range requests (for non-static URLs).
Any advice would be most gratefully received.
Here's some web.py code to get you started (just happened to need this as well and ran into your question):
## experimental partial content support
## perhaps this shouldn't be enabled by default
range = web.ctx.env.get('HTTP_RANGE')
if range is None:
    return result

total = len(result)
_, r = range.split("=")
partial_start, partial_end = r.split("-")
start = int(partial_start)
if not partial_end:
    end = total - 1
else:
    end = int(partial_end)
chunksize = (end - start) + 1

web.ctx.status = "206 Partial Content"
web.header("Content-Range", "bytes %d-%d/%d" % (start, end, total))
web.header("Accept-Ranges", "bytes")
web.header("Content-Length", chunksize)
return result[start:end+1]
Google tells me you have to use the staticFilter for byte ranges to work in CherryPy - but that is for static files only. Luckily this posting also includes pointers on how to do it for non-static data :-)
