I know that to create a new path in Qt from a given absolute path, you use QDir::mkpath(), called as dir.mkpath(path), as suggested in this question. I have no trouble using it and it works fine. My question is why the developers did not provide a static function that could be called like QDir::mkpath("/Users/me/somepath/"); needing to create a new QDir instance seems unnecessary to me.
I can only think of two possible reasons:
1. The developers were "lazy" or did not have time, so they did not add one, as it is not absolutely necessary.
2. The instance of QDir on which mkpath(path) is called will be set to path as well, so it would be convenient for further usage, but I cannot find any hint in the docs that this is the actual behaviour.
I know I am repeating myself, but again: I do not need help with how to do it; I am interested in why one has to do it that way.
Thanks for any reason I might have missed.
Let's have a look at the code of said method:
bool QDir::mkdir(const QString &dirName) const
{
    const QDirPrivate *d = d_ptr.constData();

    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }

    QString fn = filePath(dirName);
    if (d->fileEngine.isNull())
        return QFileSystemEngine::createDirectory(QFileSystemEntry(fn), false);
    return d->fileEngine->mkdir(fn, false);
}
Source: http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n1381
As we can see, a static version would be simple to implement:
// hypothetical static variant; note that a static member function cannot be const
bool QDir::mkdir(const QString &dirName) // declared static in the header
{
    if (dirName.isEmpty()) {
        qWarning("QDir::mkdir: Empty or null file name");
        return false;
    }
    return QFileSystemEngine::createDirectory(QFileSystemEntry(dirName), false);
}
(see also http://code.qt.io/cgit/qt/qtbase.git/tree/src/corelib/io/qdir.cpp#n681)
First, the non-static method comes with a few advantages. Obviously there is something to be gained from using the object's existing file engine. But also, consider the use-case of creating several directories under a specific directory that the QDir already points to, as in the sketch below.
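For illustration, a minimal sketch of that use-case (the path and directory names are mine, not from the question):

#include <QDir>

void createAppDirs()
{
    QDir base("/tmp/myapp");   // path is illustrative
    base.mkpath(".");          // make sure the base directory itself exists
    base.mkdir("cache");       // creates /tmp/myapp/cache
    base.mkdir("logs");        // creates /tmp/myapp/logs
    base.mkdir("plugins");     // creates /tmp/myapp/plugins
}

One QDir object, one file engine, several relative mkdir() calls; a static version would have to be handed the full path on every call.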
So why not provide both?
Verdict (tl;dr): I think the reason is simple code hygiene. When you use the API, the difference between QDir::mkpath(path); and QDir().mkpath(path); is slim. The performance hit of creating the object is also negligible, as you would reuse the same object if you happened to perform the operation more often. But on the side of the code maintainers, it is arguably much more convenient (less work and less error-prone) not to maintain two versions of the same method.
I am writing unit tests for some async sections of my code (returning Futures) that also involve the need to mock a Scala object.
Following these docs, I can successfully mock the object's functions. My question stems from the fact that withObjectMocked[FooObject.type] returns Unit, where async tests in scalatest require either an Assertion or Future[Assertion] to be returned. To get around this, I'm creating vars in my tests that I reassign within the function sent to withObjectMocked[FooObject.type], which ends up looking something like this:
class SomeTest extends AsyncWordSpec with Matchers with AsyncMockitoSugar with ResetMocksAfterEachAsyncTest {
  "wish i didn't need a temp var" in {
    var ret: Future[Assertion] = Future.failed(new Exception("this should be something")) // <-- note the need to create the temp var
    withObjectMocked[SomeObject.type] {
      when(SomeObject.someFunction(any)) thenReturn Left(Error("not found"))
      val mockDependency = mock[SomeDependency]
      val testClass = ClassBeingTested(mockDependency)
      ret = testClass.giveMeAFuture("test_id") map { r =>
        r should equal(Error("not found"))
      } // <-- set the real Future[Assertion] value here
    }
    ret // <-- finally, explicitly return the Future
  }
}
My question then is, is there a better/cleaner/more idiomatic way to write async tests that mock objects without the need to jump through this bit of a hoop? For some reason, I figured using AsyncMockitoSugar instead of MockitoSugar would have solved that for me, but withObjectMocked still returns Unit. Is this maybe a bug and/or a candidate for a feature request (the async version of withObjectMocked returning the value of the function block rather than Unit)? Or am I missing how to accomplish this sort of task?
You should refrain from using withObjectMocked in a multi-threaded environment, as it doesn't play well with it.
This is because the object code is stored as a singleton instance, so it's effectively global.
When you use withObjectMocked you're effectively forcefully overriding this var (the code takes care of restoring the original, hence the syntax of using it as a "resource", if you want).
Because this var is global/shared, if you have multi-threaded tests you'll end up with random behaviour; this is the main reason why no async API is provided.
In any case, this is a last-resort tool. Every time you find yourself using it, you should stop and ask yourself whether there isn't something wrong with your code first. There are quite a few patterns to help you out here (like injecting the dependency), so you should rarely have to do this.
Let's say you have a function that sets an index and then updates a few variables based on the value stored in the array element the index points to. Do you check the index to make sure it is in range? (This is in an embedded-system environment, specifically Arduino.)
So far I have made a safe and an unsafe version of every function; is that a good idea? In some of my other code I noticed that having only safe functions results in checking the same conditions multiple times as the libraries grow, so I started to develop both. The safe function checks the condition and calls the unsafe function, as shown in the example below for the case described above.
Safe version:
bool RcChannelModule::setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex)
{
    if (factorIndex < N_FACTORS)
    {
        setFactorIndexAndUpdateBoundariesUnsafe(factorIndex);
        return true;
    }
    return false;
}
Unsafe version:
void RcChannelModule::setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex)
{
    setCurrentFactorIndexUnsafe(factorIndex);
    updateOutputBoundaries();
}
If I am doing something fundamentally wrong, please let me know why and how I could avoid it. I would also like to know: generally, when you program, do you consider the future user to be a fool, or do you expect them to follow the minimal documentation provided? (The reason I say minimal is that I do not have the time to write proper documentation.)
void RcChannelModule::setCurrentFactorIndexUnsafe(const factorIndex_T factorIndex)
{
    currentFactorIndex_ = factorIndex;
}
Safety checks, such as array index range checks, null checks, and so on, are intended to catch programming errors. When these checks fail, there is no graceful recovery: the best the program can do is to log what happened, and restart.
Therefore, the only time when these checks become useful is during debugging and testing of your code. C++ provides built-in functionality for dealing with this through asserts, which are kept in the debug versions of the code, but compiled out from the release version:
#include <cassert>

void RcChannelModule::setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex) {
    assert(factorIndex < N_FACTORS); // fails loudly in debug builds, compiled out when NDEBUG is defined
    setCurrentFactorIndexUnsafe(factorIndex);
    updateOutputBoundaries();
}
Note: [When you make a library for external use] an argument-checking version of each external function perhaps makes sense, with non-argument-checking implementations of those and all internal-only functions. If you perform argument checking then do it (only) at the boundary between your library and the client code. But it's pointless to offer a choice to your users, for if you want to protect them from usage errors then you cannot rely on them to choose the "safe" versions of your functions. (John Bollinger)
Do you make safe and unsafe versions of your functions, or just stick to the safe version?
For higher level code, I recommend one version, a safe one.
In high-level code, with a large set of related functions and data, the combinations of interactions between data and code cannot be fully checked at development time. When an error is detected, the data should be set to indicate an error state, and subsequent use of the data within these functions would be aware of that error state. A minimal sketch of the idea follows.
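This sketch is mine, not from the answer; factorIndex_T and N_FACTORS are assumed to be defined as in the question.

// Once a bad index is seen, the module latches an error state that later
// operations can consult instead of re-validating everywhere.
class RcChannelModule {
public:
    void setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex) {
        if (factorIndex >= N_FACTORS) {
            errorState_ = true;   // record the error, leave the data unchanged
            return;
        }
        currentFactorIndex_ = factorIndex;
        updateOutputBoundaries();
    }
    bool hasError() const { return errorState_; }

private:
    void updateOutputBoundaries() { /* ... */ }
    factorIndex_T currentFactorIndex_ = 0;
    bool errorState_ = false;
};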
For the lowest-level, time-critical routines, I'd go with @dasblinkenlight's answer: create one source file that compiles two ways, for the debug and release builds.
Yet keep @Pete Becker's comment in mind: is a check like this really likely to be a performance bottleneck?
With floating-point routines, use NaN to help keep track of an unrecoverable error, as in the sketch below.
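A small illustration of the NaN approach (the helper and its parameters are hypothetical):

#include <cmath>

// A failing computation returns NaN; because NaN propagates through later
// arithmetic, the caller can check once at the end instead of after each step.
double normalizedFactor(double raw, double range) {
    if (range <= 0.0)
        return std::nan("");   // unrecoverable error: poison the result
    return raw / range;        // a NaN input also yields NaN here
}

// Usage:
//   double f = normalizedFactor(x, r) * 100.0;
//   if (std::isnan(f)) { /* handle the error once */ }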
Lastly, where possible, create functions that cannot fail and avoid the issue entirely. With many functions, though not all, this only requires small code additions. It often adds only a constant-time performance penalty, not an O(n) one.
Example: Consider a function to lop off the first character of a string - in place.
// This works fine as long as s[0] != 0
char *slop_1(char *s) {
    size_t len = strlen(s);        // most work is here
    return memmove(s, s + 1, len); // and here
}
Instead, define the function, and code it, to do nothing when s[0] == 0:
char *slop_2(char *s) {
    size_t len = strlen(s);
    if (len > 0) { // negligible additional work
        memmove(s, s + 1, len);
    }
    return s;
}
Similar code can be applied to OP's example. Note that it is "safe", at least within the function. The assert() scheme can still be used to discover development issues, yet the released code, without the assert(), still checks the range.
void RcChannelModule::setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex)
{
    if (factorIndex < N_FACTORS) {
        setFactorIndexAndUpdateBoundariesUnsafe(factorIndex);
    } else {
        assert(0); // out-of-range index: fail loudly in debug builds
    }
}
Since you tagged this Arduino and embedded, you have a very resource-constrained system, one of the crappiest processors still manufactured.
On such a system you cannot afford extra error handling. It is better to properly document what values the parameters passed to the function must have, then leave the checking of this to the caller.
The caller can then either check this at run-time, if needed, or at compile-time with a static assert. Your function, however, cannot implement the check as a static assert itself, as it can't know whether factorIndex is a run-time variable or a compile-time constant. A caller-side sketch follows.
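Both variants, sketched with hypothetical names (module and runtimeIndex are mine; factorIndex_T and N_FACTORS are from the question):

// Compile-time check when the index is a constant:
constexpr factorIndex_T kFactor = 2;
static_assert(kFactor < N_FACTORS, "factor index out of range");
module.setFactorIndexAndUpdateBoundariesUnsafe(kFactor);

// Run-time check when it is not:
if (runtimeIndex < N_FACTORS) {
    module.setFactorIndexAndUpdateBoundariesUnsafe(runtimeIndex);
}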
As for "I have no time to write proper documentation", that's nonsense. It takes far less time to document this function than to post this SO question. You don't necessarily have to write an essay in some Word file. You don't necessarily have to use Doxygen or similar.
But you do need to write the bare minimum of documentation: In the header file, document the purpose and expected values of all function parameters in the form of comments. Preferably you should have a coding standard for how to document such functions. A minimal documentation of public API functions in the form of comments is part of your job as programmer. The code is not complete until this is written.
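As an example, a bare-minimum header comment for the function from the question might look like this (the wording is mine):

/**
 * Sets the current factor index and updates the output boundaries.
 *
 * @param factorIndex  Must be less than N_FACTORS. The value is not
 *                     checked here; the caller is responsible for
 *                     validating it.
 */
void setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex);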
I suspect a problem with an old version of the ObjectBuilder, which was once part of the WCSF Extension project and has meanwhile moved into Unity. I am not sure whether I am on the right track, so I hope someone out there with more competent thread-safety skills can explain whether this could be an issue or not.
I use this (outdated) ObjectBuilder implementation in an ASP.NET WCSF web app, and rarely I can see in the logs that the ObjectBuilder complains that a particular property of a class cannot be injected for some reason; the problem is always that this property should never have been injected at all. The property and the class in question change constantly. I traced the code down to a method where a dictionary is used to hold the information whether a property is handled by the ObjectBuilder or not.
My question basically comes down to: Is there a thread-safety issue in the following code which could cause the ObjectBuilder to get inconsistent data from its dictionary?
The class which holds this code (ReflectionStrategy.cs) is created as a singleton, so all requests to my web application use this class to create their view/page objects. Its dictionary is a private field, only used in this method, and declared like this:
private Dictionary<int, bool> _memberRequiresProcessingCache = new Dictionary<int, bool>();
private bool InnerMemberRequiresProcessing(IReflectionMemberInfo<TMemberInfo> member)
{
    bool requires;

    lock (_readLockerMrp)
    {
        if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo.GetHashCode(), out requires))
        {
            lock (_writeLockerMrp)
            {
                if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo.GetHashCode(), out requires))
                {
                    requires = MemberRequiresProcessing(member);
                    _memberRequiresProcessingCache.Add(member.MemberInfo.GetHashCode(), requires);
                }
            }
        }
    }
    return requires;
}
The code above is not the latest version you can find on CodePlex, but I still want to know whether it might be the cause of my ObjectBuilder exceptions. As we speak, I am working on an update to replace this old code with the latest version, which follows. Unfortunately I cannot find any information on why it was changed; it might have been for a bug, it might have been for performance...
private bool InnerMemberRequiresProcessing(IReflectionMemberInfo<TMemberInfo> member)
{
    bool requires;

    if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo, out requires))
    {
        lock (_writeLockerMrp)
        {
            if (!_memberRequiresProcessingCache.TryGetValue(member.MemberInfo, out requires))
            {
                Dictionary<TMemberInfo, bool> tempMemberRequiresProcessingCache =
                    new Dictionary<TMemberInfo, bool>(_memberRequiresProcessingCache);

                requires = MemberRequiresProcessing(member);
                tempMemberRequiresProcessingCache.Add(member.MemberInfo, requires);
                _memberRequiresProcessingCache = tempMemberRequiresProcessingCache;
            }
        }
    }
    return requires;
}
The use of the hash code as the dictionary key looks problematic if you have a very large number of classes/members, as can happen with the singleton approach you mentioned: hash codes are not guaranteed to be unique, so two different members can collide on the same key, and one member would then silently read the cached answer of another. That would match your symptom of a property being injected that should never be injected at all.
The double lock in the old version was quite odd: the outer lock already serializes the whole section, so only one thread ever enters it and the inner lock adds nothing. Note that taking a lock as the very first thing certainly hurts performance. It is a trade-off; notice that the new version instead reads without a lock and creates a copy of the dictionary on every write, so the dictionary that readers see is never modified. A sketch of that pattern follows.
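To make the trade-off concrete, here is a minimal copy-on-write cache written in C++ for illustration (the sketch and all names are mine; it mirrors the newer implementation's idea, not the ObjectBuilder source): readers take no lock at all, while writers copy the map, add the entry, and atomically publish the copy.

#include <map>
#include <memory>
#include <mutex>

class MemberCache {
public:
    using Map = std::map<int, bool>;

    bool requiresProcessing(int memberKey) {
        std::shared_ptr<Map> snapshot = std::atomic_load(&cache_); // lock-free read
        auto it = snapshot->find(memberKey);
        if (it != snapshot->end())
            return it->second;

        std::lock_guard<std::mutex> guard(writeLock_); // serialize writers only
        snapshot = std::atomic_load(&cache_);          // re-check under the lock
        it = snapshot->find(memberKey);
        if (it != snapshot->end())
            return it->second;

        bool result = computeRequiresProcessing(memberKey);
        auto copy = std::make_shared<Map>(*snapshot);  // copy; never mutate a published map
        copy->emplace(memberKey, result);
        std::atomic_store(&cache_, copy);              // atomically publish the new map
        return result;
    }

private:
    bool computeRequiresProcessing(int) const { return true; } // stand-in for the real check
    std::shared_ptr<Map> cache_ = std::make_shared<Map>();
    std::mutex writeLock_;
};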
[Edit]
The main question here loosely translates as "is Flex multi-threaded?" I have since found out that it is not, so I won't have data mysteriously changing halfway through an operation. The code below worked, but made things awkward and confusing. I eventually fixed the problem with an architecture change, eliminating the need to suppress events, as the first commenter suggested.
Infinite loops were eliminated by changing the way events were listened to and performing certain actions explicitly rather than via events.
Collating events was made easier using a command pattern.
Basically, do not use the code below if you come across this page!
[/Edit]
I'm building some Flex applications using a simple, lightweight MVC pattern. Models extend or encapsulate a dispatcher and fire events when updated. I'm stuck with Flex 3.5.
In some situations, I'll want to suppress these events to avoid infinite loops or help collate multiple actions into a single event.
My first stab at a solution that doesn't litter the models with unnecessary and confusing code is this:
private var _suppressEvents:Boolean = false;

public function suppressEvents(callback:Function):void
{
    // In case of error, ensure the suppression is turned off, then re-throw
    var err:Error = null;
    _suppressEvents = true;
    try
    {
        callback();
    }
    catch (e:Error)
    {
        err = e;
    }
    _suppressEvents = false;
    if (err)
    {
        throw err;
    }
}

public function dispatch(type:String, data:*):void
{
    // Suppress if called from a suppress callback.
    if (!_suppressEvents)
    {
        _dispatcher.dispatchEvent(new DataEvent(type, data));
    }
}
Obviously I call 'suppressEvents' with a function containing the model code I wish to run.
My questions:
1: Is there a chance I could accidentally lose events using this technique?
2: Do I need to think about any other error edge cases when it comes to ensuring I don't accidentally end up in a suppressed state after a call?
3: Is there a cleaner way I'm missing?
Thanks!
I am trying to duplicate a flex component at run time.
For example, if I have this:
<mx:Button label="btn" id="btn" click="handleClick(event)"/>
I should be able to call a function called DuplicateComponent() and it should return a UI component that's exactly the same as the above button, including its event listeners.
Can someone help me, please?
Thanks in advance
Do a Byte Array Copy. This code segment should do it for you:
// ActionScript file
import flash.utils.ByteArray;

private function clone(source:Object):*
{
    var myBA:ByteArray = new ByteArray();
    myBA.writeObject(source);
    myBA.position = 0;
    return myBA.readObject();
}
One note, I did not write this code myself, I'm pretty sure I got it from a post on the Flex Coder's list.
To solve that problem you should use ActionScript and create the buttons dynamically.
Let's say you want the button(s) to go in a VBox called 'someVbox':
for (var i:uint = 0; i < 10; i++) {
    var but:Button = new Button();
    but.label = 'some_id_' + i;
    but.id = 'some_id_' + i;
    but.addEventListener(MouseEvent.CLICK, handleClick); // pass the listener function, not a string
    someVbox.addChild(but);
}
I haven't tested it, but with a bit of luck that should add 10 buttons to the VBox.
You can't take a deep copy of UIComponents natively. Your best bet would be to create a new one and analyse the one you have to set up a duplicate. To be honest, this does sound like a bit of a code smell; I wonder if there may be a better solution to the problem with a bit of a rethink.
Same question as: http://www.flexforum.org/viewtopic.php?f=4&t=1421
It shows up in a Google search for the same thing. So you've cut and pasted the same question a month later. No luck, eh?
There is no easy way to do this that I know of. Many of a component's settings are dependent on the container/context/etc... and get instantiated during the creation process, so there's no reason to clone from that perspective.
You can clone key settings in actionscript and use those when creating new elements.
For instance, assuming you only care about properties, you might have an array ["styleName","width","height",...], and you can maybe use the array like this:
var newUI:UIComponent = new UIComponent();
for each (var s:String in propArray) {
    newUI[s] = clonedUI[s];
}
If you want more bites on your question (rather than waiting a month), tell us what you are trying to achieve.
mx.utils.ObjectUtil often comes in handy; however, for complex object types, it's typically good practice to implement an interface that requires a clone() method, similar to how Events are cloned.
For example:
class MyClass implements ICanvasObject
{
    ...

    public function clone():ICanvasObject
    {
        var obj:MyClass = new MyClass(parameters...);
        return obj;
    }
}
This gives your code more clarity and properly encapsulates concerns in the context of how the object is being used / cloned.
You are right, but as I understand it, UIComponents are not cloned by mx.utils.ObjectUtil.
From: http://livedocs.adobe.com/flex/201/langref/mx/utils/ObjectUtil.html#copy()

copy() method

public static function copy(value:Object):Object

Copies the specified Object and returns a reference to the copy. The copy is made using a native serialization technique. This means that custom serialization will be respected during the copy.

This method is designed for copying data objects, such as elements of a collection. It is not intended for copying a UIComponent object, such as a TextInput control. If you want to create copies of specific UIComponent objects, you can create a subclass of the component and implement a clone() method, or other method to perform the copy.

Parameters:
value:Object — Object that should be copied.

Returns:
Object — Copy of the specified Object.