Using ARC, how can I make sure a CTFrameRef property is not deallocated? I am currently using an assign property, which is obviously not doing what I want.
Edit: my solution, based on @KevinBallard's answer:
- (void)setCoreTextFrame:(CTFrameRef)coreTextFrame
{
    if (_coreTextFrame == coreTextFrame)
        return;
    if (_coreTextFrame)
        CFRelease(_coreTextFrame); // release the old frame before replacing it
    _coreTextFrame = coreTextFrame ? (CTFrameRef)CFRetain(coreTextFrame) : NULL;
}

- (void)dealloc
{
    if (_coreTextFrame)
        CFRelease(_coreTextFrame); // CFRelease(NULL) would crash
}
CTFrameRef isn't an obj-c object. It follows the CoreFoundation memory management rules, so you can retain it with CFRetain() and release it with CFRelease().
So basically, your code dealing with CTFrameRef will look identical to non-ARC code.
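For reference, a minimal sketch of the matching declaration (the class name is made up): the property is declared assign so ARC stays out of the way, and ownership is handled entirely by the custom setter and -dealloc shown above.
#import <UIKit/UIKit.h>
#import <CoreText/CoreText.h>

@interface MyTextView : UIView // hypothetical class
// 'assign' keeps ARC from trying to manage the CF object; CFRetain/CFRelease
// in the custom setter and in -dealloc do the memory management instead.
@property (nonatomic, assign) CTFrameRef coreTextFrame;
@end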
I'm trying to modify an existing project: I would like to set a fallback texture when I load a glTF file.
BABYLON.GLTFFileLoader.IncrementalLoading = false;
BABYLON.SceneLoader.AppendAsync(rootPath, 'data:' + gltfContent, scene, undefined, '.gltf').then(function () {
    scene.createDefaultCameraOrLight(true);
    scene.activeCamera.attachControl(canvas);
    scene.activeCamera.wheelDeltaPercentage = 0.005;
});
I don't know exactly how to do it.
What's the best way to proceed? Should I read the glTF and modify the URI?
I think using some callback would be a better solution.
Is anyone an expert with Babylon.js?
Thanks
Babylon's glTF 2.0 loader has an extension system that can be seen as a promise-based plugin for each step of the loading process.
You can see a few examples for the extensions here:
https://github.com/BabylonJS/Babylon.js/tree/master/loaders/src/glTF/2.0/Extensions
What can be very interesting in your case is here:
https://github.com/BabylonJS/Babylon.js/blob/master/loaders/src/glTF/2.0/Extensions/KHR_texture_transform.ts
As you can see, this extension contains a function called loadTextureInfoAsync that receives the texture info about to be loaded and returns a promise that resolves when loading is done.
Depending on your use case you can completely replace the load texture functionality or extend it (like in the example above). To override it you should just implement the function yourself:
public loadTextureInfoAsync(context: string, textureInfo: ITextureInfo,
        assign: (babylonTexture: BaseTexture) => void): Nullable<Promise<BaseTexture>> {
    // Placeholder: load the texture yourself, substituting a fallback when needed.
    const texture = loadTheTextureYourselfWithFallback();
    assign(texture); // hand the resulting texture back to the loader
    return Promise.resolve(texture);
}
Of course, the load function needs to be implemented according to your logic.
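For the wiring, here is a rough sketch rather than a drop-in implementation: the extension name, class name, and fallback URL are made up, and import paths and interface details vary between Babylon versions. It delegates to the loader's default texture loading and only substitutes a fallback when that fails:
import { BaseTexture, Nullable, Texture } from "@babylonjs/core";
import { GLTFLoader, IGLTFLoaderExtension, ITextureInfo } from "@babylonjs/loaders/glTF/2.0";

class FallbackTextureExtension implements IGLTFLoaderExtension {
    public readonly name = "FallbackTexture"; // made-up name
    public enabled = true;

    constructor(private _loader: GLTFLoader) {}

    public dispose(): void {}

    public loadTextureInfoAsync(context: string, textureInfo: ITextureInfo,
            assign: (babylonTexture: BaseTexture) => void): Nullable<Promise<BaseTexture>> {
        // Try the default loading first; on failure, assign a fallback texture.
        return this._loader.loadTextureInfoAsync(context, textureInfo, assign)
            .catch(() => {
                const fallback = new Texture("textures/fallback.png", // hypothetical URL
                    this._loader.babylonScene);
                assign(fallback);
                return fallback;
            });
    }
}

GLTFLoader.RegisterExtension("FallbackTexture",
    (loader) => new FallbackTextureExtension(loader));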
If I have the following code:
// Objective-C++ code (.mm)
id<MTLTexture> texture = ...;
void* ptr = (void*)CFBridgingRetain(texture); // +1: ownership is handed to the native side
share_ptr_with_native(ptr);
[texture do_stuff]; // is this valid?

// native code (.cpp)
void share_ptr_with_native(void* ptr)
{
    ptr->do_stuff(); // pseudocode: a void* cannot be dereferenced like this
    CFBridgingRelease(ptr);
}
Will texture be valid and retained by ARC again after the call to share_ptr_with_native()?
Other than various errors in your code snippet, yes, the line in question is valid. ARC continues to maintain its own strong reference to the object while it's still in use in the top code, in addition to the one that you become responsible for. CFBridgingRetain() has a +1 effect on the retain count of the object, hence the "retain" in its name.
Even though everything said above is right, it would be better to change your
CFBridgingRelease(ptr);
to
CFRelease(ptr);
CFBridgingRelease() transfers ownership back to ARC and returns an Objective-C object pointer, which has no meaning in a pure C++ file; CFRelease() simply balances the earlier retain.
__bridge_retained or CFBridgingRetain casts an Objective-C pointer to a Core Foundation pointer and also transfers ownership to you.
You are responsible for calling CFRelease or a related function to relinquish ownership of the object.
Taken from https://developer.apple.com/library/content/documentation/CoreFoundation/Conceptual/CFDesignConcepts/Articles/tollFreeBridgedTypes.html.
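Putting the advice together, the native side might look like this. This is a sketch: use_texture_ptr() stands in for whatever C++ interface actually consumes the pointer.
// native code (.cpp) -- plain C++, no Objective-C constructs required
#include <CoreFoundation/CoreFoundation.h>

void share_ptr_with_native(void* ptr)
{
    use_texture_ptr(ptr); // hypothetical consumer of the opaque pointer
    CFRelease(ptr);       // balances the +1 from CFBridgingRetain
}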
I have the following problem with my EMF based Eclipse application:
Undo works fine. Validation works fine. But when there is a validation error for the data in a GUI field, this blocks the use of the undo action. For example, it is not possible to undo to get back to a valid state for that field.
In the state pictured in the original post (a field showing a validation error), it is not possible to use undo.
Tools that are used in the application:
Eclipse data binding
UpdateValueStrategy objects on the bindings for validation
Undo implemented using the standard UndoAction that calls CommandStack.undo
A MessageManagerSupport class that connects the validation framework to the Eclipse Forms-based GUI.
The data bindings look like this:
dataBindingContext.bindValue(WidgetProperties.text(...),
EMFEditProperties.value(...), validatingUpdateStrategy, null);
The problem is this:
The undo system works on the commands that change the model.
The validation system stops updates from reaching the model when there are validation errors.
To make undo work when there are validation errors, I think I could do one of these things:
Make the undo system work on the GUI layer. (This would be a huge change; it's probably not possible to use EMF for this at all.)
Make the invalid data in the GUI trigger commands that change the model data, in the same way as valid data does. (This would be okay as long as the data cannot be saved to disk. But I can't find a way to do this.)
Make the validation work directly on the model, maybe triggered by a content listener on the Resource. (This is a big change of validation strategy. It doesn't seem possible to track the source GUI control at this stage.)
These solutions either seem impossible or have severe disadvantages.
What is the best way to make undo work even when there are validation errors?
NOTE: I accepted Mad Matt's answer because their suggestions led me to my solution. But I'm not really satisfied with that and I wish there was a better one.
If someone at some time finds a better solution I'd be happy to consider accepting it instead of the current one!
It makes sense that the Validator protects your Target value from invalid values.
Therefore the target command stack remains untouched in case of an invalid value.
Why would you like to force invalid values to be set? Isn't Ctrl+Z in the GUI enough to restore the last valid state?
If you still want to set these values to your actual Target model, you can play around with the UpdateValueStrategy.
The update phases are:
Validate after get - validateAfterGet(Object)
Conversion - convert(Object)
Validate after conversion - validateAfterConvert(Object)
Validate before set - validateBeforeSet(Object)
Value set - doSet(IObservableValue, Object)
I'm not sure where the validation error (Status.ERROR) occurs exactly, but you could check where and then force a SetCommand manually.
You can set a custom IValidator for each step of your UpdateValueStrategy to do that, as sketched below.
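For illustration, a small sketch of hooking validators into those phases (isAcceptable() is a hypothetical domain check):
import org.eclipse.core.databinding.UpdateValueStrategy;
import org.eclipse.core.databinding.validation.ValidationStatus;

UpdateValueStrategy strategy = new UpdateValueStrategy();
// Each setter corresponds to one of the phases listed above.
strategy.setAfterGetValidator(value -> ValidationStatus.ok());
strategy.setAfterConvertValidator(value -> isAcceptable(value)
        ? ValidationStatus.ok()
        : ValidationStatus.error("invalid value")); // isAcceptable() is hypothetical
strategy.setBeforeSetValidator(value -> ValidationStatus.ok());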
NOTE: This is the solution I ended up using in my application. I'm not really satisfied with it. I think it is a little bit of a hack.
I accepted Mad Matt's answer because their suggestions led me to this solution.
If someone at some time finds a better solution I'd be happy to consider accepting it instead of the current one!
I ended up creating an UpdateValueStrategy subclass which runs a validator after a value has been set on the model object. This seems to be working fine.
I created this answer to post the code I ended up using. Here it is:
import org.eclipse.core.databinding.UpdateValueStrategy;
import org.eclipse.core.databinding.observable.value.IObservableValue;
import org.eclipse.core.databinding.validation.IValidator;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.MultiStatus;

/**
 * An {@link UpdateValueStrategy} that can perform validation AFTER a value is set
 * in the model. This is used because undo doesn't work if no model change is made.
 */
public class LateValidationUpdateValueStrategy extends UpdateValueStrategy {

    private IValidator afterSetValidator;

    public void setAfterSetValidator(IValidator afterSetValidator) {
        this.afterSetValidator = afterSetValidator;
    }

    @Override
    protected IStatus doSet(IObservableValue observableValue, Object value) {
        IStatus setStatus = super.doSet(observableValue, value);
        if (setStatus.getSeverity() >= IStatus.ERROR || afterSetValidator == null) {
            return setStatus;
        }

        // I used a validator here that calls the EMF-generated model validator.
        // That way I can specify the validation on the model.
        IStatus validStatus = afterSetValidator.validate(value);

        // Merge the two statuses
        if (setStatus.isOK() && validStatus.isOK()) {
            return validStatus;
        } else if (!setStatus.isOK() && validStatus.isOK()) {
            return setStatus;
        } else if (setStatus.isOK() && !validStatus.isOK()) {
            return validStatus;
        } else {
            return new MultiStatus(Activator.PLUGIN_ID, -1,
                    new IStatus[] { setStatus, validStatus },
                    setStatus.getMessage() + "; " + validStatus.getMessage(), null);
        }
    }
}
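And for context, a sketch of how this strategy might be plugged into a binding like the one from the question (the validator and the observables here are assumptions):
LateValidationUpdateValueStrategy modelStrategy = new LateValidationUpdateValueStrategy();
// Hypothetical: delegate to whatever validates the EMF model object.
modelStrategy.setAfterSetValidator(value -> myModelValidator.validate(value));

dataBindingContext.bindValue(
        WidgetProperties.text(SWT.Modify).observe(textField),
        EMFEditProperties.value(editingDomain, featurePath).observe(modelObject),
        modelStrategy, null);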
I know that to create a new path in Qt from a given absolute path, you use QDir::mkpath(), as in dir.mkpath(path), as suggested in this question. I do not have any trouble using it and it works fine. My question is rather why the developers would not provide a static function that could be called like QDir::mkpath("/Users/me/somepath/"). Needing to create a new QDir instance seems unnecessary to me.
I can only think of two possible reasons:
1. The developers were "lazy" or did not have time so they did not add one as it is not absolutely necessary.
2. The instance of QDir on which mkpath(path) is called will be set to path as well, so it would be convenient for further usage - but I cannot find any hint in the docs that this is the actual behaviour.
I know I repeat myself, but again, I do not need help with how to do it; I am much more interested in why one has to do it that way.
Thanks for pointing out any reason I might have missed.
Let's have a look at the code of said method:
bool QDir::mkdir(const QString &dirName) const
{
    const QDirPrivate* d = d_ptr.constData();

    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }

    QString fn = filePath(dirName);
    if (d->fileEngine.isNull())
        return QFileSystemEngine::createDirectory(QFileSystemEntry(fn), false);
    return d->fileEngine->mkdir(fn, false);
}
Source: http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n1381
As we can see, a static version would be simple to implement:
// hypothetical static version ('static' would appear only in the class
// declaration; note that a static member function cannot be const-qualified)
bool QDir::mkdir(const QString &dirName)
{
    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }
    return QFileSystemEngine::createDirectory(QFileSystemEntry(dirName), false);
}

(see also http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n681)
(see also http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n681)
First, the non-static method comes with a few advantages. Obviously there is something to using the object's existing file engine. But also, I would imagine the use case of creating several directories under a specific directory (one that the QDir already points to) - see the sketch after the verdict below.
So why not provide both?
Verdict (tl;dr): I think the reason is simple code hygiene. When you use the API, the difference between QDir::mkpath(path); and QDir().mkpath(path); is slim. The performance hit of creating the object is also negligible, as you would reuse the same object if you happen to perform the operation more often. But on the side of the code maintainers, it is arguably much more convenient (less work and less error-prone) not to maintain two versions of the same method.
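To illustrate both points, a small sketch (the paths are made up):
#include <QDir>

int main()
{
    // One-off: a temporary QDir makes the missing static call a one-liner anyway.
    QDir().mkpath("/Users/me/somepath");

    // Reuse: create several directories relative to the one the QDir points to.
    QDir base("/Users/me/project");
    base.mkpath("build/debug");   // creates /Users/me/project/build/debug
    base.mkpath("logs");
    return 0;
}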
I have run into a problem working with Realm migration blocks and the strategy for migrating realms.
Given an object MyObject with a number of properties:
In version 1 we have the property myProperty
In version 2 we change the property to myPropertyMk2
In version 3 we change the property to myPropertyMk3
Given following migration block:
private class func getMigrationBlock(realmPath: String) -> RLMMigrationBlock {
    return { migration, oldSchemaVersion in
        if (oldSchemaVersion == RLMNotVersioned) {
            NSLog("No database found when migrating.")
            return
        } else {
            NSLog("Migrating \(realmPath) from version \(oldSchemaVersion) to \(RealmMigrationHelper.CURRENT_DATABASE_VERSION)")
        }

        NSLog("Upgrading MyObject from version %d to %d", oldSchemaVersion, CURRENT_DATABASE_VERSION)
        if (oldSchemaVersion < 2) {
            migration.enumerateObjects(MyObject.className(), block: {
                oldObject, newObject in
                newObject["myPropertyMk2"] = oldObject["myProperty"]
            })
        }
        if (oldSchemaVersion < 3) {
            migration.enumerateObjects(MyObject.className(), block: {
                oldObject, newObject in
                newObject["myPropertyMk3"] = oldObject["myPropertyMk2"]
            })
        }
        NSLog("Migration complete.")
    }
}
When I was on version 2 of the DB this worked just fine (obviously without the oldSchemaVersion < 3 block), but when I introduced version 3 I started getting problems, because it does not recognise newObject["myPropertyMk2"] in the oldSchemaVersion < 2 block. If I change it to newObject["myPropertyMk3"] it works just fine.
From reading the RLMMigration code this makes perfectly good sense, as we work with the old schema and the new schema, but based on the documentation on realm.io I did not expect this behaviour: I would have expected the migration to be schema-less.
I have an idea about making a schema-less migration within the block by simply using a dictionary and then finally applying this dictionary to the newObject.
Are there any thoughts on a migration strategy for Realm that deals with this? It is mentioned on Realm's website, but only very briefly.
Thank you :)
Thanks for your question and the report of your issue.
From reading the RLMMigration code this makes perfectly good sense, as we work with the old schema and the new schema, but based on the documentation on realm.io I did not expect this behaviour: I would have expected the migration to be schema-less.
As you correctly recognized from the code in RLMMigration, migrations are not schema-less. The migration closure which you provide should handle migrations from any version in the past to the current version. If your user didn't update your app in between and so skipped a version, there is no chance that Realm could have been aware of your intermediate schema version, as the schema is reflected at runtime. You're generally free to break backwards compatibility with existing old versions deliberately, but you would need to take care to reset the configuration to a defined state.
You're certainly right that this could be better documented. I have created an internal ticket about that.
I have an idea about making a schema-less migration within the block by simply using a dictionary and then finally applying this dictionary to the newObject.
Are there any thoughts on a migration strategy for Realm that deals with this? It is mentioned on Realm's website, but only very briefly.
Depending on your schema and the amount of data you have, you can reorganize it object-wise in memory via a dictionary and then apply it to newObject as you describe. The current API makes relatively few assumptions and allows an approach like this. But it wouldn't work equally well for everyone, e.g. if you have large lists of related objects.
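A minimal sketch of that dictionary idea, reusing the property names and code style from the question (whether it pays off depends on how many properties you migrate):
if (oldSchemaVersion < 3) {
    migration.enumerateObjects(MyObject.className(), block: {
        oldObject, newObject in
        // Collect values keyed by the *current* property names, reading from
        // whichever property name the old schema actually used.
        var values = [String: AnyObject]()
        if (oldSchemaVersion < 2) {
            values["myPropertyMk3"] = oldObject["myProperty"]
        } else {
            values["myPropertyMk3"] = oldObject["myPropertyMk2"]
        }
        // Apply everything to the new object in one place.
        for (key, value) in values {
            newObject[key] = value
        }
    })
}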