Firebase Cloud Storage - Unity - PutStreamAsync memory crash with large files

I'm working on a Unity3D application connected to a Firebase backend.
We're using Auth, Firestore and Storage, and the connection to Firebase is working smoothly.
When trying to upload a large file on iOS, a video recording of 200 MB+, the app suddenly starts consuming all available memory and crashes.
I'm using the PutStreamAsync method as described in the docs.
https://firebase.google.com/docs/storage/unity/upload-files#upload_from_a_local_file
I also tried using PutFileAsync, but I'm getting an ErrorObjectNotFound, which makes no sense to me because it's an upload, not a download.
Error codes: https://firebase.google.com/docs/storage/unity/handle-errors
Here's our code. Just a simple use case, very similar to what's shown in the docs:
public async Task<string> UploadTestVideo(TestExecution testExecution)
{
    string videoPath = TestPathHelper.GetLocalVideoPath(testExecution);
    StorageReference folderRef = storageReference.Child($"{StoragePathTestExecutions}/{testExecution.UID}/{StoragePathVideoRecordings}");
    StorageReference fileRef = folderRef.Child("recording.mp4");
    TVLog.Debug(Tag, $"Upload video <{videoPath}> to CloudStorage at <{fileRef.Path}>");
    try
    {
        MetadataChange newMetadata = new MetadataChange();
        newMetadata.ContentType = "video/mp4";

        // Dispose the stream even if the upload throws
        using (FileStream stream = File.OpenRead(videoPath))
        {
            TVLog.Debug(Tag, "Opened file stream for uploading...");
            StorageMetadata metadata = await fileRef.PutStreamAsync(stream, newMetadata);
            TVLog.Debug(Tag, "Finished uploading video...");

            // Metadata contains file metadata such as size, content-type, and download URL.
            string md5Hash = metadata.Md5Hash;
            TVLog.Debug(Tag, "video md5 hash = " + md5Hash);
            return md5Hash;
        }
    }
    catch (StorageException e)
    {
        TVLog.Exception(Tag, $"Exception uploading video {e.HttpResultCode} - {e.ErrorCode} - {e.Message}");
        return null;
    }
}
Xcode shows memory spiking from 300 MB to 1.4 GB in a few seconds, and then the app crashes.
When trying with a 100 MB video, memory goes from 300 MB to 800 MB and the upload succeeds, which confirms that the code is working and doing what it's supposed to do.
But when we try with 200 MB files, memory goes way beyond that and the OS clearly kills the app.
This is what I see in the Xcode log when the crash happens:
WARNING -> applicationDidReceiveMemoryWarning()
2022-07-28 16:13:15.059747-0300 MyProject[84495:5466165] [xpc] <PKDaemonClient: 0x282907f00>: XPC error talking to pkd: Connection interrupted
WARNING -> applicationDidReceiveMemoryWarning()
2022-07-28 16:13:15.189228-0300 MyProject[84495:5466490] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
WARNING -> applicationDidReceiveMemoryWarning()
I'm using Unity 2020.3.18f1 LTS and Firebase SDK for Unity 8.10.1, installed through the package manager. We can't upgrade because the newer versions are compiled with a version of Xcode not yet supported by Unity Cloud Build (yeah, that's really sad).
Is there anything I could do on my side or is this clearly a bug in the SDK?
I'll be trying to find alternatives in the meantime.

In the end I managed to get PutFileAsync working.
The documentation is misleading. The parameter is named "filePath", and the Unity SDK docs show a sample that uses a plain file path like "folder/file.png".
It turns out it's not a path, it's a URI, and I had to prepend "file://" to make it work. After that, I could upload 500 MB files with no issues.
The API specification, under the "filePath" parameter, does explain that it is a URI.
https://firebase.google.com/docs/reference/unity/class/firebase/storage/storage-reference#class_firebase_1_1_storage_1_1_storage_reference_1a0b6fee3e69ca0b004e8675f7d867d925
I wonder why they don't name it differently or add a proper example to the docs. I lost hours to this. Anyway, happy it's solved.
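For reference, here's roughly what the working call looks like, a minimal sketch reusing fileRef, videoPath and the metadata from the snippet above:

// PutFileAsync's "filePath" parameter is parsed as a URI, not a plain
// path, so a local file needs the file:// scheme.
MetadataChange newMetadata = new MetadataChange();
newMetadata.ContentType = "video/mp4";

StorageMetadata metadata = await fileRef.PutFileAsync($"file://{videoPath}", newMetadata);
TVLog.Debug(Tag, "video md5 hash = " + metadata.Md5Hash);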

Related

Going offline with app and database syncing

I have a NativeScript (Angular) app that makes API calls to a server to get data. I want to implement bi-directional synchronization once a device comes online, but using the current API, no BaaS.
I could do a sort of caching: once in a while the app invalidates info in the database and fetches it again. I don't like this approach because there are big lists that may change, and they are fetched in batches, i.e. by page. One of them is a list of files downloaded to and stored on the device, so I have to keep those that are still in the list and delete those that are not. It sounds like a nightmare.
How would you solve such a problem?
I use the nativescript-couchbase plugin to store the data. We have the following services:
Connectivity
Data
API Service
Depending on whether connectivity is online or offline, we either fetch data from the remote API or from the Couchbase DB. Please note that the API service always returns data from Couchbase only.
So in online mode
API Call -> Write to DB -> Return latest data from Couchebase
Offline mode
Read DB -> Return latest data from Couchebase
Along with this, we maintain all API calls in a queue, so whenever connectivity returns, the API calls are processed in sequence. Another challenge you may face when coming back online from offline mode is token expiry. That problem can be solved by showing a small popup to the user after you come online.
I do this by serializing my data as a JSON string and saving it to the device's file system.
When the app loads/reloads I read it from the file.
i.e.:
const fileSystemModule = require("tns-core-modules/file-system");
const appSettings = require("tns-core-modules/application-settings");

// viewName, json_string and retFun come from the surrounding code
var siteid = appSettings.getNumber("siteid");
var fileName = viewName + ".json";
const documents = fileSystemModule.knownFolders.documents();
const site_folder = documents.getFolder("site");
const siteid_folder = site_folder.getFolder(siteid.toString());
const directoryPath = fileSystemModule.path.join(siteid_folder.path, fileName);
const directoryFile = fileSystemModule.File.fromPath(directoryPath);
directoryFile.writeText(json_string)
    .then((result) => {
        directoryFile.readText().then((res) => {
            retFun(res);
        });
    }).catch((err) => {
        console.log(err.stack);
    });

How to generate DownloadUrl from Google-Cloud storage (I came from firebase)

Just trying to figure out, in google-cloud, something that seemed trivial in Firebase.
It seems as though if you're making a Node.js app for HTML (I'm talking to it through Unity actually, but it's a desktop application) you can't use firebase-storage for some odd reason; you have to use google-cloud, and even the firebase-admin tools use Cloud Storage to do storage here.
Nevertheless, I got it working and I am uploading the files to Firebase storage. The problem is that in Firebase you could specify a file and then do storage().ref().child(filelocation).GetDownloadURL(): this would generate a unique URL, valid for some set time, that can be used publicly without having to give read access to all anonymous users.
I did some research, and it seems I need to use something called gsutil to generate my own special URLs, but it's so damn complicated (I'm a newbie to this whole server stuff) that I don't even know where to start to get this working in my Node server.
Any pointers? I'm really stuck here.
------- If anyone's interested, this is what I'm trying to do, high level -------
I'm sending 3D model data to the Node app from Unity.
The Node app publishes this model on Sketchfab.
Then it puts the model data into my own storage, along with some additional data made specially for my app.
After it gets saved to storage, it gets recorded in my Firebase DB, in my global model database,
to be accessed later by users, who then get the download URL of this storage file sent back to them in Unity.
I would just download the files into my Node app, but I want to reduce server load; it's supposed to be just a middleman between Unity and Firebase
(I would've done it straight from Unity, but apparently Firebase isn't for desktop Windows apps).
Figured it out:
var firebase_admin = require("firebase-admin");

var storage = firebase_admin.storage();
var bucket = storage.bucket();

// childSnapshot, expDate, finalData, resolve and reject come from the
// surrounding database iteration and enclosing Promise.
bucket.file(childSnapshot.val().modelLink).getSignedUrl({
    action: 'read',
    expires: expDate
}, function (err, url) {
    if (err) {
        reject(err);
    } else {
        finalData.ModelDownloadLink = url;
        console.log("Download model DL url: " + url);
        resolve();
    }
});

First Realm DB can be opened encrypted but the writeCopyToPath copy cannot

I'm using Realm 0.98.6 with Xcode 7.3 in an OS X app to create an encrypted Realm database and then make a clean copy to place in my bundle. The original database opens in the Realm Browser (after pasting in the key), but the copy does not.
Here is the code I use to create both databases. There are no nested writes in the routines called, just a mix of realm.add and append calls to create a collection of related objects:
let config = Realm.Configuration(path: realmTempFile, encryptionKey: key)
let realm = try! Realm(configuration: config)

try! realm.write {
    loadAuthors(authorFile, realm: realm)
    loadVolumes(volumesFile, realm: realm)
}

try! realm.writeCopyToPath(realmFile, encryptionKey: key)
If I remove the encryptionKey parameters from the config and the write-copy call, then both databases open fine in the Realm Browser.
In case it matters, I'm deleting the DB files (and the associated lock files) with the Finder before each attempt. I've also tried changing the names to rule out any temporary files hanging around. The only obvious difference between the two files is the size: 7.5 MB vs 6.9 MB for the original and the copy respectively (i.e., as expected, the copy is slightly smaller).
I'd love some suggestions! It's a pretty vanilla program of <400 lines that loads a DB for later use in an iOS & Android app. I can always ship the bigger file, but it's making me wary of what else I might not know... thanks in advance!

Using ffmpeg in asp.net

I needed an audio conversion library. After pulling my hair out, I've given up: there is no such audio library out there, or at least every library out there has some problem or other.
The only option left is ffmpeg, which is the best, but unfortunately you cannot use it in ASP.NET (not directly, I mean). Every user on the website that converts a file will launch an exe? I think I will hit the server's memory limit soon.
Bottom line: I will try using ffmpeg.exe and see how many users it can support simultaneously.
I went to the ffmpeg website, and in the Windows download section I found 3 different versions: static, shared and dev.
Does anyone know which would be best with respect to using it from ASP.NET? Everything packed into one exe (static), or the DLLs kept separate with a small exe (shared)?
PS: if anyone has a good library out there, it would be great if you could share.
Static builds provide one self-contained .exe file for each program (ffmpeg, ffprobe, ffplay).
Shared builds provide each library as a separate .dll file (avcodec, avdevice, avfilter, etc.), plus .exe files for each program that depend on those libraries.
Dev packages provide the headers and .lib/.dll.a files required to use the .dll files in other programs.
ffmpeg is the best library out there from what I have used, but I wouldn't recommend trying to call it directly from ASP.NET.
What I have done is accept the upload, store it on the server (or S3 in my case), then have a worker role (if using something like Azure) with a process that continuously monitors for new files to convert.
If you need a realtime-like solution, you can update flags in your database and have an AJAX solution poll the database to keep providing progress updates, then show a link to download once the conversion is complete.
Personally, my approach would be:
Azure Web Roles
Azure Worker Role
ServiceBus
The WorkerRole starts up and is monitoring the ServiceBus Queue for messages.
The ASP.NET site uploads and stores the file in S3 or Azure
The ASP.NET site then records information in your DB if needed and sends a message to the ServiceBus queue.
The WorkerRole picks this up and converts.
AJAX will be needed on the ASP.NET site if you want a realtime monitoring solution. Otherwise you could send an email when complete if needed.
Using a queuing process also helps you with load: when you are under heavy load, people just wait a little longer and it doesn't grind everything to a halt. You can also scale out your worker roles as needed to balance the load, should it ever become too much for one server.
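To make the enqueue step concrete, here is a minimal sketch. The original answer predates the current SDK, so the Azure.Messaging.ServiceBus package, the queue name and the helper method are my assumptions, not the author's code:

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class ConversionQueue
{
    // Hypothetical helper: after the upload is stored, enqueue a job
    // message that a worker picks up and converts with ffmpeg.
    public static async Task EnqueueJobAsync(string connectionString, string storedFilePath)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("video-conversion"); // assumed queue name

        // The message body just carries the location of the uploaded file.
        await sender.SendMessageAsync(new ServiceBusMessage(storedFilePath));
    }
}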
Here is how I run ffMpeg from C# (you will need to change the parameters for your requirements)
string arguments = string.Format("-i {0} -s 640x360 {1}", input.Path, "C:\\FilePath\\file.mp4");
RunProcess(arguments);

private string RunProcess(string parameters)
{
    // Configure the process so ffmpeg's output can be captured
    ProcessStartInfo oInfo = new ProcessStartInfo(this._ffExe, parameters);
    oInfo.UseShellExecute = false;
    oInfo.CreateNoWindow = true;
    oInfo.RedirectStandardOutput = true;
    oInfo.RedirectStandardError = true;

    StringBuilder output = new StringBuilder();
    try
    {
        using (Process proc = Process.Start(oInfo))
        {
            // ffmpeg writes its progress to stderr, so capture both streams
            proc.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
            proc.ErrorDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
            proc.BeginOutputReadLine();
            proc.BeginErrorReadLine();
            proc.WaitForExit();
        }
    }
    catch (Exception)
    {
        // Capture/log the error here
    }
    return output.ToString();
}

Adobe AIR HTTP Connection Limit

I'm working on an Adobe AIR application which can upload files to a web server, which is running Apache and PHP. Several files can be uploaded at the same time and the application also calls the web server for various API requests.
The problem I'm having is that if I start two file uploads, then while they are in progress any other HTTP requests will time out, which is a problem for the application and from a user's point of view.
Are Adobe AIR applications limited to 2 HTTP connections, or is something else probably the issue?
From searching about this issue I've not found much, but one article did indicate that it isn't limited to just two connections.
The file uploads are performed by calling the File class's upload() method, and the API calls are made using the HTTPService class. The development web server I am using is a WAMP server; when the application is released it will be talking to a LAMP server.
Thanks,
Grant
Here is the code I'm using to upload the file:
protected function btnAddFile_clickHandler(event:MouseEvent):void
{
    // Create a new File object and display the browse file dialog
    var uploadFile:File = new File();
    uploadFile.browseForOpen("Select File to Upload");
    uploadFile.addEventListener(Event.SELECT, uploadFile_SelectedHandler);
}

private function uploadFile_SelectedHandler(event:Event):void
{
    // Get the File object which was used to select the file
    var uploadFile:File = event.target as File;
    uploadFile.addEventListener(ProgressEvent.PROGRESS, file_progressHandler);
    uploadFile.addEventListener(IOErrorEvent.IO_ERROR, file_ioErrorHandler);
    uploadFile.addEventListener(Event.COMPLETE, file_completeHandler);

    // Create the request URL based on the download URL
    var requestURL:URLRequest = new URLRequest(AppEnvironment.instance.serverHostname + "upload.php");
    requestURL.method = URLRequestMethod.POST;

    // Set the post parameters
    var params:URLVariables = new URLVariables();
    params.name = "filename.ext";
    requestURL.data = params;

    // Start uploading the file to the server
    uploadFile.upload(requestURL, "file");
}
Here is the code for the API calls:
private function sendHTTPPost(apiFile:String, postParams:Object, resultCallback:Function, initialCallerResultCallback:Function):void
{
    var httpService:mx.rpc.http.HTTPService = new mx.rpc.http.HTTPService();
    httpService.url = AppEnvironment.instance.serverHostname + apiFile;
    httpService.method = "POST";
    httpService.requestTimeout = 10;
    httpService.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
    httpService.addEventListener("result", resultCallback);
    httpService.addEventListener("fault", httpFault);

    var token:AsyncToken = httpService.send(postParams);

    // Add the initial caller's result callback function to the token
    token.initialCallerResultCallback = initialCallerResultCallback;
}
If you are on a Windows system, Adobe AIR uses Microsoft's WinInet library to access the web. This library by default limits the number of concurrent connections to a single server to 2:
WinInet limits the number of simultaneous connections that it makes to a single HTTP server. If you exceed this limit, the requests block until one of the current connections has completed. This is by design and is in agreement with the HTTP specification and industry standards.
... Connections to a single HTTP 1.1 server are limited to two simultaneous connections
There is an API to change the value of this limit (sketched below for reference), but I don't know if it is accessible from AIR.
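For illustration only, this is what raising the limit looks like from native code via WinInet's InternetSetOption; whether any of this is reachable from AIR is an open question, and the wrapper class and method names here are mine:

using System;
using System.Runtime.InteropServices;

static class WinInetLimits
{
    // Option constant from wininet.h
    const uint INTERNET_OPTION_MAX_CONNS_PER_SERVER = 73;

    [DllImport("wininet.dll", SetLastError = true)]
    static extern bool InternetSetOption(IntPtr hInternet, uint dwOption, ref uint lpBuffer, uint dwBufferLength);

    // Passing a null handle applies the option process-wide.
    public static bool RaiseMaxConnsPerServer(uint maxConns)
    {
        return InternetSetOption(IntPtr.Zero, INTERNET_OPTION_MAX_CONNS_PER_SERVER, ref maxConns, sizeof(uint));
    }
}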
Since this limit also affects page-loading speed for web sites, some sites use multiple DNS names for assets such as images, JavaScript and stylesheets to allow a browser to open more parallel connections.
So if you are controlling the server part, a workaround could be to create DNS aliases like www.example.com for uploads and api.example.com for API requests.
So as I was looking into this, I came across this info about using File.upload() in the documentation:
Starts the upload of the file to a remote server. Although Flash Player has no restriction on the size of files you can upload or download, the player officially supports uploads or downloads of up to 100 MB. You must call the FileReference.browse() or FileReferenceList.browse() method before you call this method.
Listeners receive events to indicate the progress, success, or failure of the upload. Although you can use the FileReferenceList object to let users select multiple files for upload, you must upload the files one by one; to do so, iterate through the FileReferenceList.fileList array of FileReference objects.
The FileReference.upload() and FileReference.download() functions are
nonblocking. These functions return after they are called, before the
file transmission is complete. In addition, if the FileReference
object goes out of scope, any upload or download that is not yet
completed on that object is canceled upon leaving the scope. Be sure
that your FileReference object remains in scope for as long as the
upload or download is expected to continue.
I wonder if something there could be giving you issues with uploading multiple files. I see that you are using browseForOpen() instead of browse(). It seems like they probably do the same thing... but maybe not.
I also saw this in the File class documentation
Note that because of new functionality added to the Flash Player, when publishing to Flash Player 10, you can have only one of the following operations active at one time: FileReference.browse(), FileReference.upload(), FileReference.download(), FileReference.load(), FileReference.save(). Otherwise, Flash Player throws a runtime error (code 2174). Use FileReference.cancel() to stop an operation in progress. This restriction applies only to Flash Player 10. Previous versions of Flash Player are unaffected by this restriction on simultaneous multiple operations.
When you say that you let users upload multiple files, do you mean subsequent calls to browse() and upload(), or do you mean one call that includes multiple files? It seems that if you are trying to do multiple separate calls, that may be an issue.
Anyway, I don't know if this is much help. It definitely seems that what you are trying to do should be possible. I can only guess that what is going wrong is perhaps a problem with implementation. Good luck :)
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#upload()
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#browse()
Since I was looking into a very similar question because of an error in one of my own apps, I decided to write down the answer I found.
I instantiated 11 HTTPConnections and was wondering why my Flex 4 application stopped working and threw an HTTP error, although it had been working fine before with just 5 simultaneous HTTPConnections to the same server.
I tested this myself because I did not find anything regarding this in the Flex docs or on the internet.
I found that using more than 5 HTTPConnections was the reason for the Flex application to throw the runtime error.
I decided to instantiate the connections one after another as a temporary workaround: load the next one after the previous one has received its data, and so on.
That's only temporary, of course, since one of the next steps will be to alter the responding server code so that it answers a single request with the results of queries against more than one table in one response. The client application logic will need to be altered too.
