Timer vs setTimeout - apache-flex

The docs for flash.utils.setTimeout() state:
Instead of using this method, consider creating a Timer object, with the specified interval, using 1 as the repeatCount parameter (which sets the timer to run only once).
Does anyone know if there is a (significant) advantage in doing so? Using setTimeout is a lot easier when you only need to delay a single call.

setTimeout actually uses a Timer subclass internally, SetIntervalTimer, which is an internal class. You can verify this by running setTimeout(function ():void { throw "booom"; }, 1); and looking at the stack trace.
As such, I can't really see a big disadvantage. The only difference is that you have two anonymous calls instead of one. On the other hand, in performance-critical situations you shouldn't be using either (keep one internal Timer instead) to avoid frequent instantiation of TimerEvent objects.
Basically, I think it's a matter of taste. Adobe decided the AS3 event system is the shizzle, so they promote it.
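For reference, here is a minimal sketch of the two approaches the docs compare (doSomething and the 1000 ms delay are placeholders):
// Minimal sketch -- delaying a single call both ways.
import flash.utils.Timer;
import flash.utils.setTimeout;
import flash.events.TimerEvent;

function doSomething():void {
    trace("one second later");
}

// Option 1: setTimeout
setTimeout(doSomething, 1000);

// Option 2: Timer with repeatCount = 1, as the docs suggest
var timer:Timer = new Timer(1000, 1);
timer.addEventListener(TimerEvent.TIMER, function (e:TimerEvent):void {
    doSomething();
});
timer.start();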

Timer:
- Gives you more control, as you can register several event listeners for the event rather than the single callback you get with setTimeout
- You can control the start time and the number of repetitions (not very useful against setTimeout, as this has to run just once, after a delay from the moment it was called)
- More lines to write, even more if you need to pass parameters along (you need a custom event class for that)
- Uses event listeners, which is standard practice in AS3
- Cleaner look
setTimeout:
- Easier to use
- Less code to write
- Parameters can be passed along easily (see the sketch below)
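A quick sketch of the parameter passing mentioned above (greet and its arguments are placeholders):
import flash.utils.setTimeout;

function greet(name:String, count:int):void {
    trace("hello", name, count);
}

// extra arguments after the delay are forwarded to the closure
setTimeout(greet, 500, "Alice", 3);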
I prefer the Timer class, but I've seen setTimeout used frequently by programmers.
Also, if you are using tweening libraries, some of them support delayed calls.
For example, TweenMax:
TweenMax.delayedCall(2, myFunction, ["myParam"]);
For all those who say that setTimeout is deprecated: this is nonsense.
http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/utils/package.html#setTimeout%28%29
You won't find any "deprecated" note next to setTimeout there.

setTimeout is working perfectly well in external .as files.
Just use this in the class:
import flash.utils.*;
import flash.events.TimerEvent;

It's my understanding that setTimeout is deprecated in AS3. I'm having a bit of trouble finding the source of the setTimeout code, but I also believe it's easier to clean up any references to a Timer object than to a setTimeout call (if I remember correctly from AS2).

Usually something becomes deprecated when there is a newer, more powerful way to achieve the same thing.
Yes, setTimeout is much easier to set up in some cases, but it is much more limited in others.
I would use the Timer class, because when something is deprecated it usually means support for it may be removed sometime in the future, and then your code won't work.

The problem is that the Timer object is not at all accurate and is subject to framerate fluctuations. Read http://forums.adobe.com/message/892631. I created my own timer (called RealTimer) using the Date object, and it is much more accurate. I recommend doing the same.
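I can't reproduce the author's RealTimer here, but a minimal sketch of the general idea (using getTimer() instead of Date, which serves the same purpose) could look like this:
// Tick often, but decide when to fire from real elapsed time,
// not from the number of ticks.
import flash.utils.Timer;
import flash.utils.getTimer;
import flash.events.TimerEvent;

var started:int = getTimer();      // milliseconds since the runtime started
var poll:Timer = new Timer(50);    // poll more often than the target delay
poll.addEventListener(TimerEvent.TIMER, function (e:TimerEvent):void {
    if (getTimer() - started >= 1000) {
        poll.stop();
        trace("roughly one real second elapsed");
    }
});
poll.start();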

setTimeout is not working in external .as files.

Related

Why should nesting of QEventLoops be avoided?

In his Qt event loop, networking and I/O API talk, Thiago Macieira mentions that nesting of QEventLoops should be avoided:
QEventLoop is for nesting event Loops... Avoid it if you can because it creates a number of problems: things might reenter, new activations of sockets or timers that you were not expecting.
Can anybody expand on what he is referring to? I maintain a lot of code that uses modal dialogs, which internally nest a new event loop when exec() is called, so I'm very interested in knowing what kind of problems this may lead to.
A nested event loop costs you 1-2kb of stack. It takes up 5% of the L1 data cache on typical 32kb L1 cache CPUs, give-or-take.
It has the capacity to reenter any code already on the call stack. There are no guarantees that any of that code was designed to be reentrant. I'm talking about your code, not Qt's code. It can reenter code that has started this event loop, and unless you explicitly control this recursion, there are no guarantees that you won't eventually run out of stack space.
In current Qt, there are two places where, due to long-standing API bugs or platform inadequacies, you have to use nested exec: QDrag and platform file dialogs (on some platforms). You simply don't need it anywhere else. You do not need a nested event loop for non-platform modal dialogs.
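For illustration, a minimal sketch of the non-nested alternative for a modal dialog (a plain QDialog stands in for your own dialog class):
#include <QDialog>
#include <QObject>

void askUser(QWidget *parent)
{
    auto dlg = new QDialog(parent);
    dlg->setAttribute(Qt::WA_DeleteOnClose);
    QObject::connect(dlg, &QDialog::finished, parent, [](int result) {
        if (result == QDialog::Accepted) {
            // continue the work that used to follow exec()
        }
    });
    dlg->open();   // window-modal, returns immediately -- no nested event loop
}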
Reentering the event loop is usually caused by writing pseudo-synchronous code where one laments the supposed lack of yield() (co_yield and co_await have landed in C++ now!), hides one's head in the sand and uses exec() instead. Such code typically ends up being barely palatable spaghetti and is unnecessary.
For modern C++, using the C++20 coroutines is worthwhile; there are some Qt-based experiments around, easy to build on.
There are Qt-native implementations of stackful coroutines: Skycoder42/QtCoroutings - a recent project, and the older ckamm/qt-coroutine. I'm not sure how fresh the latter code is. It looks like it all worked at some point.
Writing asynchronous code cleanly without coroutines is usually accomplished through state machines, see this answer for an example, and QP framework for an implementation different from QStateMachine.
Personal anecdote: I couldn't wait for C++ coroutines to become production-ready, and I now write asynchronous communication code in golang, and statically link that into a Qt application. Works great, the garbage collector is unnoticeable, and the code is way easier to read and write than C++ with coroutines. I had a lot of code written using C++ coroutines TS, but moved it all to golang and I don't regret it.
A nested event loop will lead to ordering inversion (at least on Qt 4).
Let's say you have the following sequence of things happening:
enqueued in outer loop: 1,2,3
processing 1 => spawn inner loop
enqueue 4 in inner loop
processing 4
exit inner loop
processing 2
So you see the processing order was: 1,4,2,3.
I speak from experience and this usually resulted in a crash in my code.

Delegate execution in Unity3d using c#

I have callbacks implemented in my Unity3D game in such a way that they are all nested, i.e. one callback leads to another call, whose callback leads to another call, and so on, up to 5 times. But the last two callbacks are losing their order: before the second-to-last delegate finishes execution, the last one gets executed! I am using delegates as the means of message transfer (the other way would be to implement interfaces). Do delegates in C# behave asynchronously by any chance? Implementing callbacks using delegates and using interfaces should yield the same results every time, correct? And both are synchronous? Any lead on the issue would be greatly helpful.
Thanks
Depending on how you're using delegates, the invocation order may not be specified.
See: What is the difference between nested method call and delegates?
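For context, a minimal sketch (outside Unity, with placeholder methods) showing that plain delegate callbacks run synchronously; if the order breaks, something in the chain is deferring work (a coroutine, a thread, something queued for a later frame):
using System;

class CallbackDemo
{
    static void Main()
    {
        Step1(() => Console.WriteLine("step 1 callback"));
        Console.WriteLine("after the whole chain");   // printed last
    }

    static void Step1(Action done)
    {
        Step2(() => {
            Console.WriteLine("step 2 callback");
            done();                                   // still synchronous
        });
    }

    static void Step2(Action done)
    {
        Console.WriteLine("step 2 work");
        done();   // runs before Step2 returns
    }
}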

Replacing WaitForMultipleObjects in Qt

I am not familiar with WINAPI, and I am looking for a way to replace WaitForMultipleObjects, used in one example I'm porting to Qt, with something using Qt only. Is it possible?
EDIT: (Providing more information as requested in comments)
A 3rd party API provides an array of events:
HANDLE m_hEv[MAX_EV];
In an endless loop of a thread, the program waits for the events like this:
WaitForMultipleObjects(m_EvMax, m_hEv, FALSE ,INFINITE )
The HANDLE type seems to be void*.
So I wonder, if any Qt class could observe m_hEv for changes and unlock thread execution.
There is no simple way of porting WaitForMultipleObjects outside WinAPI. WinAPI has the "advantage" that all waitable resources (sockets, files, processes) are represented by the same generic, non-typesafe HANDLE, which is your void*. Unlike other platforms, which have different ways of locking and signalling per type of resource, the event handling in WinAPI is largely independent of the resource. That is why a generic function like WaitForMultipleObjects can exist: it doesn't need to care who produced the HANDLEs. So you'll have to understand what the code is trying to do and mimic it differently per scenario.
The biggest difference is in WaitForMultipleObjects' third parameter, which is FALSE in your case. That means it will stop waiting as soon as any single event in the waiting array happens. This is the easier scenario and can be replaced with a QWaitCondition (see the sketch after the mapping below).
Instead of m_hEv, you will pass a QWaitCondition* into the code which signals the event (most probably via WinAPI SetEvent(m_hEv[x]))
Instead of WaitForMultipleObjects, do QWaitCondition::wait().
Instead of SetEvent(), do QWaitCondition::wakeOne().
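A minimal sketch of that mapping (eventArrived and the function names are placeholders, not from the original code):
#include <QMutex>
#include <QWaitCondition>

QMutex mutex;
QWaitCondition condition;
bool eventArrived = false;

// Replaces the WaitForMultipleObjects(..., FALSE, INFINITE) call.
void waitForAnyEvent()
{
    QMutexLocker locker(&mutex);
    while (!eventArrived)           // guard against spurious wake-ups
        condition.wait(&mutex);
    eventArrived = false;           // consume the notification
}

// Replaces SetEvent(m_hEv[x]) on the signalling side.
void signalEvent()
{
    QMutexLocker locker(&mutex);
    eventArrived = true;
    condition.wakeOne();
}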
If the third parameter were TRUE, the WinAPI code would wait until ALL m_hEv events are signalled. The established name for such functionality is a synchronization barrier, and it can be simulated with QWaitCondition too, but it does not come out of the Qt box. I never needed to do it myself, but SO has some ideas how to do it:
Qt synchronization barrier?
WaitForMultipleObjects is a kind of generic function that works with many things: threads, processes, mutexes, etc. Qt is an OOP library where every class exposes the operations it supports. So the equivalent operation in Qt depends on what class you're using. For example, with threads, use QThread::wait. With mutexes, use QMutex::lock.

API design: is "fault tolerance" a good thing?

I've consolidated many of the useful answers and came up with my own answer below
For example, I am writing an API Foo which needs explicit initialization and termination. (This should be language agnostic, but I'm using C++ here.)
class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime);
    static void TermLibrary(int someOtherInput);
};
Apparently, our library doesn't care about multi-threading, reentrancy or whatnot. Let's suppose our Init function should only be called once; calling it again with any other input would wreak havoc.
What's the best way to communicate this to my caller? I can think of two ways:
Inside InitLibrary, I assert on some static variable, which will blame my caller for init'ing twice.
Inside InitLibrary, I check some static variable and silently abort if my lib has already been initialized.
Method #1 obviously is explicit, while method #2 makes it more user friendly. I am thinking that method #2 probably has the disadvantage that my caller wouldn't be aware of the fact that InitLibrary shouldn't be called twice.
What would be the pros/cons of each approach? Is there a cleverer way to subvert all these?
Edit
I know that the example here is very contrived. As #daemon pointed out, I should initialize myself and not bother the caller. Practically however, there are places where I need more information to properly initialize myself (note the use of my variable name someMagicInputRequiredAtRuntime). This is not restricted to initialization/termination but applies to other instances where the dilemma exists whether I should choose to be quote-unquote "fault tolerant" or fail loudly.
I would definitely go for approach 1, along with an easy-to-understand exception and good documentation that explains why this fails. This will force the caller to be aware that this can happen, and the calling class can easily wrap the call in a try-catch statement if needed.
Failing silently, on the other hand, will lead your users to believe that the second call was successful (no error message, no exception) and thus they will expect that the new values are set. So when they try to do something else with Foo, they don't get the expected results. And it's darn near impossible to figure out why if they don't have access to your source code.
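A minimal sketch of what approach #1 could look like (std::logic_error is just one reasonable choice of exception):
#include <stdexcept>

class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime)
    {
        if (initialized)
            throw std::logic_error("Foo::InitLibrary called twice");
        // ... real initialization using someMagicInputRequiredAtRuntime ...
        initialized = true;
    }

    static void TermLibrary(int someOtherInput)
    {
        // ... real teardown using someOtherInput ...
        initialized = false;
    }

private:
    static inline bool initialized = false;   // C++17 inline static for brevity
};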
Serenity Prayer (modified for interfaces)
SA, grant me the assertions
to accept the things devs cannot change
the code to except the things they can,
and the conditionals to detect the difference
If the fault is in the environment, then you should try and make your code deal with it. If it is something that the developer can prevent by fixing their code, it should generate an exception.
A good approach would be to have a factory that creates an initialized library object (this would require you to wrap your library in a class). Multiple create-calls to the factory would create different objects. This way, the initialize-method would not be part of the public interface of the library, and the factory would manage initialization.
If there can be only one instance of the library active, make the factory check for existing instances. This would effectively make your library-object a singleton.
I would suggest that you should flag an exception if your routine cannot achieve the expected post-condition. If someone calls your init routine twice, and the system state after calling it the second time would be the same as if it had just been called once, then it is probably not necessary to throw an exception. If the system state after the second call would not match the caller's expectation, then an exception should be thrown.
In general, I think it's more helpful to think in terms of state than in terms of action. To use an analogy, an attempt to open as "write new" a file that is already open should either fail or result in a close-erase-reopen. It should not simply perform a no-op, since the program will be expecting to be writing into an empty file whose creation time matches the current time. On the other hand, trying to close a file that's already closed should generally not be considered an error, because the desire is that the file be closed.
BTW, it's often helpful to have available a "Try" version of a method that might throw an exception. It would be nice, for example, to have a Control.TryBeginInvoke available for things like update routines (if a thread-safe control property changes, the property handler would like the control to be updated if it still exists, but won't really mind if the control gets disposed; it's a little irksome not being able to avoid a first-chance exception if a control gets closed when its property is being updated).
Have a private static counter variable in your class. If it is 0 then do the logic in Init and increment the counter, If it is more than 0 then simply increment the counter. In Term do the opposite, decrement until it is 0 then do the logic.
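Sketched out, the counter idea looks roughly like this (again using the hypothetical Foo from the question):
class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime)
    {
        if (refCount++ == 0) {
            // ... real one-time initialization ...
        }
    }

    static void TermLibrary(int someOtherInput)
    {
        if (--refCount == 0) {
            // ... real teardown ...
        }
    }

private:
    static inline int refCount = 0;   // only the first Init and last Term do real work
};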
Another way is to use a Singleton pattern, here is a sample in C++.
I guess one way to subvert this dilemma is to satisfy both camps. Ruby has the -w warning switch, it is customary for gcc users to pass -Wall or even -Weffc++, and Perl has taint mode. By default, these "just work," but the more careful programmer can turn on these strict settings themselves.
One example against the "always complain the slightest error" approach is HTML. Imagine how frustrated the world would be if all browsers would bark at any CSS hacks (such as drawing elements at negative coordinates).
After considering many excellent answers, I've come to this conclusion for myself: when someone sits down, my API should ideally "just work." Of course, for anyone to be involved in any domain, he needs to work at one or two levels of abstraction lower than the problem he is trying to solve, which means my user must learn about my internals sooner or later. If he uses my API for long enough, he will begin to stretch its limits, and too much effort to "hide" or "encapsulate" the inner workings will only become a nuisance.
I guess fault tolerance is most of the time a good thing, it's just that it's difficult to get right when the API user is stretching corner cases. I could say the best of both worlds is to provide some kind of "strict mode" so that when things don't "just work," the user can easily dissect the problem.
Of course, doing this is a lot of extra work, so I may be just talking ideals here. Practically it all comes down to the specific case and the programmer's decision.
If your language doesn't allow this error to surface statically, chances are good the error will surface only at runtime. Depending on the use of your library, this means the error won't surface until much later in development. Possibly only when shipped (again, it depends on a lot).
If there's no danger in silently eating an error (which isn't a real error anyway, since you catch it before anything dangerous happens), then I'd say you should silently eat it. This makes it more user friendly.
If however someMagicInputRequiredAtRuntime varies from call to call, I'd raise the error whenever possible, or presumably the library will not function as expected ("I init'ed the lib with value 42, but it's behaving as if I initted it with 11!?").
If this Library is a static class, (a library type with no state), why not put the call to Init in the type initializer? If it is an instantiatable type, then put the call in the constructor, or in the factory method that handles instantiation.
Don't allow public access to the Init function at all.
I think your interface is a bit too technical. No programmer wants to learn what concept you used while designing the API. Programmers want solutions for their actual problems and don't want to learn how to use an API. Nobody wants to init your API; that is something the API should handle in the background as far as possible. Find a good abstraction that shields the developer from as much low-level technical stuff as possible. That implies that the API should be fault tolerant.

How can I find out when a PyQt-application is idle?

I'd like to know when my application is idle so that I can preload some content. Is there an event or something similar implemented in PyQt?
(I could also do it with threads, but this feels like being too complicated.)
You have at least two different options: you can use a thread or use a timer. Qt's QThread class lets you start a thread with QThread::IdlePriority, which is scheduled only when no other threads are running, including the GUI thread. The other option is a single-shot timer. A QTimer with a timeout of 0 milliseconds puts an event on the back of the event queue, so that all events and synchronous functions already active or scheduled will be processed first.
In code, the two options would look like the following:
// (1) use idle thread processing
MyQThreadSubclass idleThread;
idleThread.start(QThread::IdlePriority);   // start(), not run(), launches the thread
// (2) use QTimer::singleShot
QTimer::singleShot(0, receiver, SLOT(doIdleProcessingChunk()));
If you go with the single-shot QTimer, be careful how much processing you do, as you can still block the GUI. You'd likely want to break the work into chunks so that the GUI won't start to lag:
// slot
void doIdleProcessingChunk() {
    /* ... main processing here ... */
    if (chunksRemain())
        QTimer::singleShot(0, this, SLOT(doIdleProcessingChunk()));  // schedule the next chunk
}
Obviously, the above is C++ syntax, but to answer with respect to PyQt, use the single shot timer. In Python, the global interpreter lock is basically going to render much of your concurrency pointless if the implementation being called is performed within Python.
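A rough PyQt equivalent of the chunked single-shot approach (PyQt5 assumed; chunks_remain is a placeholder for your own bookkeeping):
from PyQt5.QtCore import QTimer

def do_idle_processing_chunk():
    # ... load one small piece of content here ...
    if chunks_remain():
        QTimer.singleShot(0, do_idle_processing_chunk)

# Kick it off once the event loop is running:
QTimer.singleShot(0, do_idle_processing_chunk)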
You also then have the choice of using Python threads or Qt threads, both are good for different reasons.
Have a look at QAbstractEventDispatcher. But... I still suggest using a thread. Reasons:
- It will be portable.
- If you make a mistake in your code, the event loop will be broken -> your app might hang, exit all of a sudden, etc.
- While the preloading happens, your app hangs. No events will be processed, unless you can preload the content one piece at a time, the pieces are all very small, loading takes only a few milliseconds, etc.
Use a thread and send a signal to the main thread when the content is ready. It's so much simpler.
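A minimal sketch of that thread-plus-signal suggestion (PyQt5 assumed; load_content_somehow and handle_content are placeholders):
from PyQt5.QtCore import QThread, pyqtSignal

class PreloadThread(QThread):
    content_ready = pyqtSignal(object)

    def run(self):
        data = load_content_somehow()      # runs in the worker thread
        self.content_ready.emit(data)

# In the GUI code:
# thread = PreloadThread()
# thread.content_ready.connect(handle_content)   # delivered in the main thread
# thread.start(QThread.LowestPriority)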
