Flex: DeepCopy of FileReference

In my project, I let users pick pictures using the FileReference class. I then load these pictures into their .data properties using the load() function. After this I perform some local manipulation and send them to the server.
What I would like is to be able to iterate over the picked FileReferences again, load them into the .data properties, perform different manipulation, and send them to the server once more. I know that I should be able to do this from a user-invoked event; that is not the issue here.
The problem is that once a FileReference is loaded for the first time, I cannot unload it in any way, and I cannot keep the data for all the pictures in memory because they are huge.
So I guess there is only one thing I can do, which is performing a DeepCopy on the FileReference... Then I could load the first version, scrap it, and use the copy for the second 'run'.
I tried to use ObjectUtil.copy, but when I access e.g. the .name property of the copy, it fails with:
Error #2037: Functions called in incorrect sequence, or earlier call was unsuccessful.
at flash.net::FileReference/get name()
The relevant snippet:
registerClassAlias("FileReference", FileReference);
masterFileList.addItem(FileReference(ObjectUtil.copy(fr_load.fileList[i])));
trace(masterFileList[i].name);
Is it true that there are some protected properties of the FileReference class that prevent it from being copied? If so, can I sidestep this somehow? Or is there another solution to my overall problem?
I appreciate any hints/ideas!

I was trying to do almost exactly what you are doing, and I almost gave up after reading some of the answers, but I think I found a way to do it. I've found that if you have a FileReference object and call load() multiple times, it works; the main problem is that you're keeping the high-res bytes in memory after the first load, which, as you've mentioned, is a big no-no when the images are large.
The way to get around this is to call the cancel() method on the FileReference after your first load(). From my testing so far, that appears to clear out the bytes in the FileReference, and load() will still work if you call it again later. A word of caution: this isn't explicitly defined behavior in the API, so it is definitely subject to change, but it may help get you where you need to go in the meantime.
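A rough sketch of that sequence (untested beyond the observations above, and since the unload-on-cancel() behavior is undocumented, treat it as an assumption; processImage() stands in for your own manipulation code):

var fileRef:FileReference; // one entry from fileList after browse()

fileRef.addEventListener(Event.COMPLETE, function(e:Event):void {
    processImage(fileRef.data); // first-pass manipulation and upload
    fileRef.cancel();           // appears to release the loaded bytes
});
fileRef.load();

// Later, from another user-invoked event, the same reference can be reloaded:
// fileRef.load();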
Hope that helps.

You can't use ObjectUtil.copy. That method is designed for copying only data objects (VO classes).
You would have to create a new FileReference and copy the properties one by one. Create a function to do this.
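Note, though, that FileReference's properties (name, size, and so on) are read-only getters, so a field-by-field copy can only go into a plain object, not into a usable FileReference; load() and upload() still need the original. A minimal sketch (copyFileReferenceInfo is an invented name):

// Copies the readable metadata of a FileReference into a plain object.
// The result is not a working FileReference; it only preserves the info.
function copyFileReferenceInfo(fr:FileReference):Object {
    return {
        name: fr.name,
        size: fr.size,
        type: fr.type,
        creationDate: fr.creationDate,
        modificationDate: fr.modificationDate
    };
}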

Would copying it to a temporary file and then uploading the temporary file work? For example:
var fileRef:FileReference = new FileReference();
fileRef.browse();
// ... (after the file has been selected and loaded) ...
// Note: File and FileStream live in flash.filesystem, so this requires Adobe AIR.
var tmpFile:File = File.createTempFile();
try {
    var tmpFileStream:FileStream = new FileStream();
    tmpFileStream.open(tmpFile, FileMode.WRITE);
    trace("Opened file: " + tmpFile.nativePath);
    tmpFileStream.writeBytes(fileRef.data);
    tmpFileStream.close();
    trace("Copied file");
} catch (error:Error) {
    trace("Unable to open file " + tmpFile.nativePath + "\n");
    throw error;
}

I'm thinking that the operation is completely disallowed, for good reason. If you could duplicate a FileReference through ActionScript code, then you'd also be able to manufacture a FileReference object through ActionScript code. That would be a pretty bad security hole, since it could force the upload of an arbitrary file.
Keeping a copy of the data in memory really isn't that bad a solution. After all, it's temporary. The typical client computer should be able to manage a few hundred extra MB of data with no problem. It's certainly a better option than having the browser do two separate uploads, which is what your attempted solution would end up doing.
A completely different potential solution to this problem is to avoid image manipulation by Flex altogether. Flex could post the uploaded file directly to the server, and the server could do the image manipulation itself. Of course, if the manipulation is driven through user interactions, then that wouldn't work at all.

Related

Where did LoaderService go?

Upgrading AngleSharp from 0.9.6 to 0.9.9, I have this line of code that no longer compiles:
return configuration.With(LoaderService(new[] { requester }));
It complains that LoaderService does not exist in the current context. So what happened to LoaderService? Is there a replacement? Does it still exist but just somewhere else?
Good question. Sorry for being late to the party, but even though you may have solved your problem, someone else may be having a hard time figuring it out.
LoaderService was essentially just a helper to create a loader. But having a service for every little thing to be created would be overkill and would not scale well. Also, AngleSharp.Core would need to define all of these. So instead a generic mechanism was introduced, which allows registering such "creator services" via Func<IBrowsingContext, TService>.
However, to solve your piece of code I guess the following line would do the trick:
return configuration.WithDefaultLoader(requesters: new [] { requester });
This registers the default loader creator services (one for documents, one for resources inside documents) with the default options (the options involve some middleware, etc.).
Under the hood (besides some other things) the following is happening:
// just one example, config.Filter is created based on the passed in options
return configuration.With<IDocumentLoader>(ctx => new DocumentLoader(ctx, config.Filter));
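For context, end-to-end usage in 0.9.x then looks something like this (a minimal sketch; the URL is a placeholder):

// Build a configuration with the default loaders and open a document with it.
var config = Configuration.Default.WithDefaultLoader(requesters: new [] { requester });
var context = BrowsingContext.New(config);
var document = await context.OpenAsync("http://example.com/");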

Using the same File Output Stream in GUIs

I need to keep writing to the same file using a FileOutputStream; however, I'm implementing the program with a JavaFX GUI spread across different windows. But:
Since you cannot access the variable from the GUI callbacks without declaring the stream final, I have to make it final; however,
since I have to wrap the stream in a try-catch, the compiler does not consider the variable initialized.
I was previously initializing it in a method like this:
eW = outputFactory.createXMLEventWriter(new FileOutputStream("File.txt"));
However, that obviously overwrites the same file when you execute that statement again.
So essentially my question is: how can you set a final variable that needs to be surrounded by a try-catch?
final File file = new File("File.txt");
final FileOutputStream fileOS = new FileOutputStream(file); // won't compile on its own: FileNotFoundException must be handled
You don't need to declare a variable final to access it from different threads or objects. There are other ways to do it.
You can use a helper class, or even a helper thread. The second option is better in the (most common) case where you have more than one thread in your project. You can then use a normal try-catch block containing an (almost infinite) while loop that checks a write queue and writes something when needed, as sketched below.
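A minimal sketch of that helper-thread idea (the class name, queue, and file name are invented for illustration):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical helper: GUI code from any window calls enqueue(...).
public class FileWriterThread extends Thread {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void enqueue(String line) {
        queue.add(line);
    }

    @Override
    public void run() {
        // Open once in append mode; the stream lives as long as the thread.
        try (PrintStream out = new PrintStream(new FileOutputStream("File.txt", true))) {
            while (!Thread.currentThread().isInterrupted()) {
                out.println(queue.take()); // blocks until a line is queued
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shut down cleanly
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}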
If you only use one thread and only append to a file, you may want to do just that: open the file in append mode rather than overwrite mode. There's a boolean flag for this in the constructor.
You can take a look at the FileOutputStream(File file, boolean append) constructor, which allows you to append to the end of the file rather than overwrite it each time.
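For example (a minimal sketch; the file name is taken from the question):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class AppendDemo {
    public static void main(String[] args) throws IOException {
        final File file = new File("File.txt");
        // 'true' opens the stream in append mode instead of truncating the file.
        try (FileOutputStream fileOS = new FileOutputStream(file, true)) {
            fileOS.write("appended line\n".getBytes());
        }
    }
}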
I do not know how you implemented it, but if you are writing to the same file from multiple windows, it might be easier to have a helper class that exclusively handles writing to the file. This would centralize your file output code in one place, making it easier to debug and maintain.

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. That part is fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document; specifically, this gets troublesome when I want to update it. For the real docs I run RealDocs.update, but doing that on the bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement, and I'm not too fond of modifying the core code.
Right now I'm stuck at creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult, since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server-side collection, so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the EventedMind site), it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of on MyCollection directly; MyCollection.update() thus becomes MyCollection._collection.update(). So, using a simple wrapper, one can pass the usual arguments to an update call to update the collection as usual (which will try to call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to perform the update only in the client collection. Something like this should do it:
DocsUpdateWrapper = function() {
  var args = Array.prototype.slice.call(arguments); // arguments is not a real array
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // Update only the client-side (minimongo) collection; no server call.
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // Normal update: calls the server and runs your allow/deny rules.
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I haven't tried this function yet, but it should serve well as an example.)
The biggest benefit of this, in my opinion, is that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not, so we know how to update them (an improved DocsWrapper could check for that boolean, so we could even omit passing the 'local' argument).
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on the doc rather than a general one.
You can put the code from this.added into the transform process.
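A minimal sketch of that transform idea (assumes a local flag on the doc; the save method and collection names are invented):

// Attach a transform so every fetched doc knows how to update itself.
Docs = new Meteor.Collection('docs', {
  transform: function (doc) {
    doc.save = function (modifier) {
      if (doc.local) {
        // Bogus doc: update only the client-side minimongo collection.
        Docs._collection.update({_id: doc._id}, modifier);
      } else {
        // Real doc: normal update, runs allow/deny on the server.
        Docs.update({_id: doc._id}, modifier);
      }
    };
    return doc;
  }
});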
You can also set up a local minimongo collection and insert on a callback:
FoundAgents = new Meteor.Collection(null, Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert(item)
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
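For flavor, a client-only collection with an observer looks roughly like this (a generic sketch, not the pads' exact code; field names are invented):

// A client-only collection: passing null as the name means no server sync.
var LocalDocs = new Meteor.Collection(null);

// React to changes without any server round-trip.
LocalDocs.find().observe({
  added: function (doc) { console.log('added', doc._id); },
  changed: function (newDoc, oldDoc) { console.log('changed', newDoc._id); }
});

LocalDocs.insert({label: 'only lives in the browser'});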

FileReference.save() duplicates ByteArray

I've encountered a memory problem using FileReference.save(). My Flash application generates a lot of data in real time and needs to save it to a local file. As I understand it, Flash 10 (as opposed to AIR) does not support streaming to a file. What's even worse is that FileReference.save() duplicates all the data before saving it. Looking for a workaround to this doubled memory usage, I thought of the following approach:
What if I passed a custom subclass of ByteArray as the argument to FileReference.save(), where this ByteArray subclass overrides all the read*() methods? Each overridden read*() method would wait for a piece of data to be generated by my application, return it, and immediately remove it from memory. I know how much data will be generated, so I could also override the length/bytesAvailable members.
Would this be possible? Could you give me a hint on how to do it? I've created a subclass of ByteArray, registered an alias for it, and passed an instance of this subclass to FileReference.save(), but somehow FileReference.save() treats it as a plain ByteArray instance and never calls any of my overridden methods...
Thanks a lot for any help!
It's not something I've tried before, but could you send the data out to a PHP application that would handle saving the ByteArray to the server, much like saving an image to the server? You'd then use URLLoader.data instead, using something like this:
http://www.zedia.net/2008/sending-bytearray-and-variables-to-server-side-script-at-the-same-time/
It's an interesting idea. Perhaps to start you should just add traces in your extended ByteArray to see how FileReference#save() works internally.
If it internally has some kind of

while (originalByteArray.bytesAvailable)
    writeToSaveBuffer(originalByteArray.readByte());

functionality, the overrides could just truncate the original buffer on every read like you say, something like:
override public function readByte():int {
    var b:int = super.readByte();
    // Truncate the bytes already read (assuming bytesAvailable = length - removedBytes)
    length = length - bytesAvailable;
    return b;
}
On the other hand, if this works, I guess the original byte array would no longer be available in the application afterwards.
(I haven't tested this myself; truncating might require more work than the example shows.)

Need suggestion for ASP.Net in-memory queue

I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g. http://site/some.img, where some.img is an HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to handle a few hundred or a few thousand requests per second), and I should be able to unload this in-memory object into the SQL table using SqlBulkCopy:
List --> DataTable --> SqlBulkCopy
I thought of using the Cache object: create a generic List, save it in HttpContext.Cache, and insert a new item into it on every request. This does NOT work, as the CacheItemRemovedCallback fires right away when the HttpHandler tries to add a new item. I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load grows?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything here. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two: do you really need the queue? SQL Server will scale pretty well too if you just want to write directly into the DB. Use a shared connection object and/or connection pooling.
How about just using a generic List to store the requests and a different thread to do the SqlBulkCopy?
This way storing requests in the list won't block the response for too long, and the background thread can update SQL on its own schedule, say every 5 minutes.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback.
Just insert some object with a removal time of 5 minutes and reinsert it at the end of the processing work, as sketched below.
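A rough sketch of that cache-expiry "timer" idiom (the key name and class are invented; the callback signature is the standard CacheItemRemovedCallback delegate):

using System;
using System.Web;
using System.Web.Caching;

public static class FlushScheduler
{
    public static void ScheduleFlush()
    {
        HttpRuntime.Cache.Insert(
            "FlushTrigger",                 // hypothetical key
            new object(),
            null,
            DateTime.UtcNow.AddMinutes(5),  // callback fires in ~5 minutes
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnFlushTrigger);
    }

    private static void OnFlushTrigger(string key, object value, CacheItemRemovedReason reason)
    {
        if (reason == CacheItemRemovedReason.Expired)
        {
            // ... unload the request list via SqlBulkCopy here ...
            ScheduleFlush(); // reinsert so the callback fires again
        }
    }
}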
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (when the count should become 2), the CacheItemRemovedCallback fires because I'm replacing the current Cache object with a new one. Initially I also thought this was weird behavior, so I'll have to look deeper into it.
Also, for the second suggestion, I will try to insert the record (with the cached SqlConnection object) and see what performance I get under a stress test. I doubt I'll get fantastic numbers, as it's an I/O operation.
I'll keep digging for an optimal solution on my side in the meantime, along with your suggestions.
You can add a condition inside the callback to ensure you are working on a cache entry that was hit by expiration rather than by a remove/replace (in VB, since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
    Select Case removalReason
        Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
            ' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this excludes items that are replaced or removed explicitly (Cache.Remove) '
    End Select
End Sub
Edit: here it is in C# if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
    switch (removalReason)
    {
        case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
        case System.Web.Caching.CacheItemRemovedReason.Expired:
        case System.Web.Caching.CacheItemRemovedReason.Underused:
            // This excludes System.Web.Caching.CacheItemRemovedReason.Removed, which is triggered when you overwrite a cache item or remove it explicitly (e.g., HttpRuntime.Cache.Remove(key))
            break;
    }
}
To expand on my previous comment: I get the impression you are thinking about the Cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update to that Hashtable is persisted without you explicitly modifying the contents of the Cache. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and page-request updates happening simultaneously, then I suggest you simply keep TWO cached lists: one that is updated as page requests come in, and one for the bulk copy operation. When a bulk copy finishes, swap the lists and repeat, as sketched below. This is similar to double-buffering video RAM for video games or video apps.
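A rough sketch of that double-buffered list idea (all names, columns, and the string payload are invented for illustration):

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class RequestLogBuffer
{
    private static List<string> active = new List<string>();    // filled by page requests
    private static List<string> flushing = new List<string>();  // drained by the bulk copy
    private static readonly object swapLock = new object();

    // Called from the HttpHandler on every request.
    public static void Record(string url)
    {
        lock (swapLock) { active.Add(url); }
    }

    // Called periodically from a background thread or timer.
    public static void Flush(string connectionString)
    {
        lock (swapLock)
        {
            var tmp = active;   // swap buffers so incoming requests keep
            active = flushing;  // writing while we bulk-copy the other list
            flushing = tmp;
        }

        var table = new DataTable();
        table.Columns.Add("Url", typeof(string));
        table.Columns.Add("LoggedAt", typeof(DateTime));
        foreach (var url in flushing)
            table.Rows.Add(url, DateTime.UtcNow);

        using (var connection = new SqlConnection(connectionString))
        using (var bulk = new SqlBulkCopy(connection))
        {
            connection.Open();
            bulk.DestinationTableName = "RequestLog"; // hypothetical table
            bulk.WriteToServer(table);
        }
        flushing.Clear();
    }
}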
