I'm trying to deep copy my variable:
QVector<PetTeam*> petTeam;
This PetTeam class also contains another QVector of pets, and so on. I am looking to deep copy them all so I can use them as states in a minimax AI, but so far I have been unable to deep copy it correctly.
If anyone could provide any insight on the correct way to deep copy in this situation it would be much appreciated!
If you want to copy all PetTeam objects automatically, you need to declare your vector as QVector<PetTeam> petTeam. This requires PetTeam to have a constructor without parameters and a copy constructor. Both are generated automatically, but you should reimplement them if PetTeam contains any information that can't be copied trivially (e.g. file handles, pointers to manually allocated memory, etc.)
Another option is to copy your objects manually, i.e. iterate over the list and create a new object for each old object using new PetTeam(...) and then put them in the new list.
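The manual approach can be sketched as below. To keep the sketch Qt-independent it uses std::vector in place of QVector (the copy semantics are the same for this purpose), and the Pet members are made up for illustration:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical Pet with some state; the members are assumptions.
struct Pet {
    std::string name;
};

class PetTeam {
public:
    PetTeam() = default;

    // Deep copy: clone every Pet instead of copying the pointers.
    PetTeam(const PetTeam& other) {
        pets.reserve(other.pets.size());
        for (const Pet* p : other.pets)
            pets.push_back(new Pet(*p));
    }

    PetTeam& operator=(const PetTeam&) = delete; // omitted to keep the sketch short

    ~PetTeam() {
        for (Pet* p : pets) delete p;
    }

    std::vector<Pet*> pets;
};

// Deep copy of the whole team list (the "copy manually" option above):
std::vector<PetTeam*> deepCopy(const std::vector<PetTeam*>& teams) {
    std::vector<PetTeam*> copy;
    copy.reserve(teams.size());
    for (const PetTeam* t : teams)
        copy.push_back(new PetTeam(*t)); // invokes the deep copy constructor
    return copy;
}
```

For a minimax search this gives each state its own independent Pet objects, so mutating one state cannot corrupt another.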
Do I have to
1) load a data.frame from the physical RData to the memory,
2) make changes,
3) save it back to the physical RData,
4) remove it from the memory to avoid conflicts?
Is there any way I can skip the load/save steps and make permanent changes to the physical RData directly? Is there a way to work with a data.frame the way one works with a SQLite/MySQL database? Or should I just use SQLite/MySQL (instead of a data.frame) as the data storage?
More thoughts: I think the major difference is that to work with SQLite/MySQL you establish a connection to the database, whereas to work with a data.frame from an RData file you make a copy in memory. The latter approach can create conflicts in complex programs. To avoid potential conflicts you have to save the data.frame and immediately remove it from memory every time you change it.
Thanks!
Instead of using load, you may want to consider using attach. This attaches the saved data file to the search path without loading the objects in it into the global environment. The data frame is then available to use.
If you want to change the data frame then you would need to copy it to the global environment (will happen automatically for most editing) and then you would need to save it again (there would not be a simple way to save it into a .Rdata file that contains other objects).
When you are done you can use detach (but if you have made a copy in the global environment then you will still need to delete that copy).
If you don't like typing the load/save commands (or attach/detach) each time then you could write your own function that goes through all the steps for you (and if the copy is only in the environment of the function then you don't need to worry about deleting it).
You may also want to consider different ways of storing your data. The typical .Rdata file works well for an all or nothing approach. The saveRDS and readRDS functions will save and read a single object (and do not force you to use the same name when reading it back in). The interfacing with a database approach is probably the best if you are making frequent changes to tables and want them stored outside of R.
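A minimal sketch of the saveRDS/readRDS round trip, plus the kind of wrapper function suggested above (file paths and names are made up):

```r
# Save a single object; the name does not have to match when reading it back.
df <- data.frame(x = 1:3, y = c("a", "b", "c"))
saveRDS(df, "df.rds")
restored <- readRDS("df.rds")   # bind to any name you like

# Wrapper that loads, edits, and saves in one step. The copy lives only
# in the function's environment, so nothing is left behind in the global
# environment and there is nothing to remove afterwards.
update_rds <- function(path, edit) {
  obj <- readRDS(path)
  obj <- edit(obj)
  saveRDS(obj, path)
  invisible(obj)
}

update_rds("df.rds", function(d) { d$x <- d$x * 2; d })
```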
I have a Plone instance which contains some structures which I need to copy to a new Plone instance (but much more which should not be copied). Those structures are document trees ("books" of Archetypes folders and documents) which use resources (e.g. images and animations, by UID) outside those trees (in a separate structure which of course contains lots of resources not needed by the ones which need to be copied).
I tried already to copy the whole data and delete the unneeded parts, but this takes very (!) long, so I'm looking for a better way.
Thus, the idea is to traverse my little forest of document trees and transfer them and the resources they need (sparsely rebuilding that separate structure) to the new Plone instance. I have full access to both of them.
Is there a suggested way to accomplish this? Or should I export all of them, including the resources structure, and delete all unneeded stuff afterwards?
I have found that each time I do this type of migration by hand, I make mistakes that force me to do it again.
OTOH, if migration is automated, I can run it, find out what I did wrong, fix the migration, and do it all over again until I am satisfied.
In this context, to automate your migration, I advise you to look at collective.transmogrifier.
I recommend jsonmigrator - which is a twist on collective.transmogrifier mentioned by Godefroid. See my blog on it here
You can even use it to migrate from Archetypes to Dexterity types (you just need matching fieldnames (and matching types roughly speaking).
Trying to select the resources to import will be tricky though. Perhaps you can find a way to iterate through your document trees & "touch" (in a unix sense) any resource that you are using. Then copy across only resources whose "timestamp" indicates that they have been touched.
I'm trying to commit a single blob directly to a repository using jgit. I know how to insert the blob and get its sha1, however I'm having difficulties constructing a tree for this scenario. I can't seem to figure out how to correctly use jgit's tree abstractions (TreeWalk and the like) to recursively construct a tree, almost identical to the previous commits', with only different parent trees of the blob.
What is the idiomatic way to do this in JGit?
The reason I ask, is because I'm writing a program that's a kind of editor for documents living in git repositories. In my case, the whole point of using git is to be able to have multiple versions of the documents (aka branches) at the same time. Since it's an editor I have to be able to commit changes, however since I want to see multiple versions of the document at the same time, checking out, modifying a file and committing using JGit porcelain API is not possible, it has to work directly with git objects.
The low-level API you can use for this is TreeFormatter together with CommitBuilder.
An example of using this can be seen here. In this case, it constructs one new tree object with multiple subtrees.
In your case, you probably have to recursively walk the tree and create the new tree objects on the path to the changed file and insert them bottom up. For the rest of the tree, you can use the existing tree IDs and don't have to descend into them. I recommend looking into TreeWalk#setRecursive and TreeWalk#setPostOrderTraversal.
Another option would be to create an in-core DirCache, fill it with DirCacheEntries from the commit and your updated entry, and then call DirCache#writeTree.
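A rough sketch of the TreeFormatter/CommitBuilder approach for the simplest case, a blob at the repository root (the repository, parent IDs, file name, and author details are all assumptions; for a nested path you would add the recursive walk described above, copying the untouched entries of the parent tree into each formatter):

```java
import org.eclipse.jgit.lib.*;
import java.nio.charset.StandardCharsets;

// Sketch: commit one blob as "doc.txt" at the root of the repository.
ObjectId commitBlob(Repository repo, ObjectId parentCommit) throws Exception {
    try (ObjectInserter ins = repo.newObjectInserter()) {
        // 1. Insert the blob.
        ObjectId blobId = ins.insert(Constants.OBJ_BLOB,
                "new contents".getBytes(StandardCharsets.UTF_8));

        // 2. Build the root tree. A real implementation would also copy
        //    the other entries of the parent commit's tree here, e.g.
        //    via a non-recursive TreeWalk over that tree.
        TreeFormatter tree = new TreeFormatter();
        tree.append("doc.txt", FileMode.REGULAR_FILE, blobId);
        ObjectId treeId = ins.insert(tree);

        // 3. Build the commit pointing at the new tree.
        CommitBuilder commit = new CommitBuilder();
        commit.setTreeId(treeId);
        commit.setParentId(parentCommit);
        PersonIdent ident = new PersonIdent("Editor", "editor@example.com");
        commit.setAuthor(ident);
        commit.setCommitter(ident);
        commit.setMessage("Update doc.txt");
        ObjectId commitId = ins.insert(commit);
        ins.flush();
        return commitId;
    }
}
```

Note that TreeFormatter expects entries in git's sort order, which matters once you append more than one entry per tree.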
I've discovered recently that when using the Session or Application objects within an ASP.NET application, values are passed by reference.
This works great most of the time, but now I'm in a place where I truly need a copy of the data, and not a reference to it, since I will discard any changes to it when done processing.
I'm also aware of setting variables in the root of the application, effectively creating global application variables that have a bit better performance and eliminate all the casting craziness of the Session/Application objects.
Does anybody know if the application variables pass by reference or value? I can't seem to dig up any data on the topic and would like to eliminate the code I currently have to "copy" the data from the Application object into a new variable for processing.
This is not an ASP.NET phenomenon per se... it is inherent to the CLR across the whole .NET Framework. You should take a look at Jon Skeet's classic article on parameter passing.
Assuming you are dealing with reference types, if you want to clone an object rather than simply copying a reference to the object, you will need to manually clone it. Here's a good article that explains how to deep copy an object and why the MemberwiseClone method creates only a shallow copy of the object.
Actually, what happens by default (assuming you have a reference type rather than a value type) is that it passes a copy of the reference.
So you have a different variable but it's pointing at the same object.
An "Application" variable is just a regular member variable. It works like any other variable; it's only its scope that is larger than most.
If the value of the variable is copied or not depends on whether the variable is a value type or not.
If it's a value type, for example int, the value is always copied.
If it's a reference type, for example a List<int> you get a copy of the reference to the object. The object itself is not copied.
So, if what you need to copy is a reference type, you have to explicitly create the copy.
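The reference-versus-object distinction can be shown in a few lines. The question is about C#, but the sketch below uses Java, where assignment of reference types behaves the same way for this point:

```java
import java.util.ArrayList;
import java.util.List;

public class CopySemantics {
    public static void main(String[] args) {
        List<Integer> original = new ArrayList<>(List.of(1, 2, 3));

        // "Copying" a reference type copies the reference:
        // both variables point at the same object.
        List<Integer> alias = original;
        alias.add(4);
        System.out.println(original.size()); // 4 — the change is visible through both

        // An explicit copy: changes to it do not affect the original.
        List<Integer> copy = new ArrayList<>(original);
        copy.add(5);
        System.out.println(original.size()); // still 4
    }
}
```

Note that a copy like this is still shallow: the two lists share their element objects, which is why the deep-copy article mentioned above matters when the elements are themselves mutable.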
What I would like to do is capture an object that's in memory to disk for testing purposes. Since it takes many steps to get to this state, I would like to capture it once and skip the steps.
I realize that I could mock these objects up manually but I'd rather "record" and "replay" real objects because I think this would be faster.
Edit: The question is about the entire process (including the file operations), not just serializing the object, and my hope is that a tool exists to do this for standard objects.
I am interested in Actionscript specifically for this is application but...
Are there examples of this in other programming languages?
What is this process commonly called?
How would this be done in ActionScript?
Edit:
Are there tools that make serialization and file operations automatic (i.e. no special interfaces)?
Would anybody else find the proposed tool useful (if it doesn't exist)?
Use case of what I am thinking of:
ObjectSaver.save(objZombie,"zombie"); //save the object
var zombieClone:Zombie = ObjectSaver.get("zombie"); // get the object
and the disk location being configurable somewhere.
Converting objects to bytes (so that they can be saved to disk or transmitted over network etc.) is called serialization.
But in your case, I don't think that serialization is that useful for testing purposes. When the test creates all its test data every time that the test is run, then you can always trust that the test data is what you expect it to be, and that there are no side-effect leaking from previous test runs.
I asked the same question for Flex a few days ago. ActionScript specifically doesn't have much support for serialization, though the JSON libraries mentioned in one of the responses looked promising.
Serialize Flex Objects to Save Restore Application State
I think you are talking about "object serialization".
It's called Serialization
Perl uses the Storable module to do this, I'm not sure about Actionscript.
This used to be called "checkpointing" (although that usually means saving the state of the entire system). Have you considered serializing your object to some intermediate format, and then creating a constructor that can accept an object in that format and re-create the object based on that? That might be a more straightforward way to go.
What is this process commonly called?
Serializing / deserializing
Marshalling / unmarshalling
Deflating / inflating
Check out the flash.utils.IExternalizable interface. It can be used to serialize ActionScript objects into a ByteArray. The resulting data could easily be written to disk or used to clone objects.
Note that this is not "automatic". You have to manually implement the interface and write the readExternal() and writeExternal() functions for each class you want to serialize. You'll be hard pressed to find a way to serialize custom classes "automatically" because private members are only accessible within the class itself. You'll need to make everything that you need serialized public if you want to create an external serialization method.
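A minimal sketch of the IExternalizable approach, using the Zombie class from the question (the fields are made up; registerClassAlias is needed so readObject can reconstruct a typed instance rather than a plain Object):

```actionscript
package {
    import flash.utils.IDataInput;
    import flash.utils.IDataOutput;
    import flash.utils.IExternalizable;

    public class Zombie implements IExternalizable {
        public var name:String;
        public var health:int;

        public function writeExternal(output:IDataOutput):void {
            output.writeUTF(name);
            output.writeInt(health);
        }

        public function readExternal(input:IDataInput):void {
            name = input.readUTF();
            health = input.readInt();
        }
    }
}

// Usage: serialize to a ByteArray, then clone (or write the bytes to disk).
registerClassAlias("Zombie", Zombie);
var bytes:ByteArray = new ByteArray();
bytes.writeObject(objZombie);
bytes.position = 0;
var zombieClone:Zombie = bytes.readObject() as Zombie;
```

The same ByteArray could be written to disk with a FileStream in AIR, which would cover the save/get use case sketched in the question.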
The closest I've come to this is using the appcorelib ClassUtil to create XML from existing objects (saving the XML manually) and to create objects from that XML. For objects containing arrays of custom types, you need to configure the ArrayElementType metadata tags and compiler options correctly, as described in the docs.
ClassUtil.createXMLfromObject(obj);
CreateClassFromXMLObject(obj,targetClass);
If you're using AIR, you can store Objects in the included local database.
Here's a simple example using a local SQLite database on the Adobe site, and more info on how data is stored in the database.