Fragment lifecycle from onPause to onResume - android-fragments

In the Activity lifecycle we can go from onPause to onResume directly (this can occur when our activity leaves the foreground but is still visible, e.g. a dialog pops up). Checking the fragment lifecycle diagram: http://developer.android.com/guide/components/fragments.html
When the activity is paused, each fragment's respective onPause is called. But when the activity then calls onResume, what state is the fragment in? What lifecycle callback gets called?

The Fragment lifecycle is tied to the Activity lifecycle. If the Activity changes state, the Fragment does as well. Because of that, a Fragment has the same major lifecycle callbacks as an Activity, like onCreate(), onResume() and so on. In addition to these, there are some fragment-specific ones like onAttach(), onDetach(), onActivityCreated(), etc.
A Fragment is able to draw a UI and is controlled by the Activity. If that weren't the case, strange things could happen, like the Activity going into the background while the Fragment is still visible. That's why these two components have to keep their states in sync.

The fragment's onResume() is called as well; check the official documentation: http://developer.android.com/reference/android/app/Fragment.html#onResume()
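A minimal sketch to observe this (the class name and log tag are placeholders): a fragment that logs its own onPause()/onResume(), which you can watch fire alongside the hosting Activity's callbacks when, for example, a dialog briefly sends the activity to onPause and dismissing it brings it back to onResume.

import android.app.Fragment;
import android.util.Log;

// Sketch only: log the fragment's pause/resume callbacks to observe that they
// track the hosting Activity's state changes.
public class LifecycleLoggingFragment extends Fragment {
    private static final String TAG = "LifecycleFragment";

    @Override
    public void onPause() {
        super.onPause();              // always call through to the superclass
        Log.d(TAG, "onPause");        // fired when the hosting Activity pauses
    }

    @Override
    public void onResume() {
        super.onResume();
        Log.d(TAG, "onResume");       // fired again when the Activity resumes
    }
}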

Related

What is the lifecycle of a Watch App?

There are two subclasses of WKInterfaceController in my Apple Watch app.
The first one is the entry point to the other; their relationship is set up as nextPage in Interface Builder.
I added print statements to the awakeWithContext, willActivate and didDeactivate methods in each InterfaceController and logged them while the watch app was launching.
And I got this output:
awakeWithContext -> First
awakeWithContext -> Second
willActivate -> First
willActivate -> Second
didDeactivate -> Second
Then I swipe to the next InterfaceController:
willActivate -> Second
didDeactivate -> First
So now the question is:
Will the awakeWithContext method of all InterfaceControllers in a Watch App be fired as soon as the app is launched?
What about the willActivate method?
The lifecycle of watchOS apps is described below.
awakeWithContext
When initializing the page, awakeWithContext will be called. This is the first method to be called, and no UI is displayed yet.
You should do things like updating model arrays for table views, setting properties, etc. in awakeWithContext. This method has a job very similar to initializers (init()) in ordinary classes, but for WKInterfaceControllers.
The answer to your first question:
awakeWithContext will be called on ALL PAGES as soon as the watchOS app launches.
willActivate
When the interface controller is about to be displayed, willActivate will be called.
You should update label values, actions and anything else related to view elements there.
The answer to your second question:
willActivate will be called on ALL PAGES as soon as the watchOS app launches, but in contrast with awakeWithContext, it will be called again whenever you view the controller (in other words, when you navigate to the desired interface).
The first time you launch the app, didDeactivate will be called on all controllers except the current one, and when you navigate to another controller, its willActivate will be called before didDeactivate is called on the first one.
So the lifecycle is:
1- awakeWithContext of all views
2- willActivate of all views
3- didDeactivate of all views, except the first one (the current one)
And when swiping to the second:
1- willActivate of the second view
2- didDeactivate of the first view
awakeWithContext is called on initialization. This method will be called on all your pages in your watch app on launch.
willActivate is called when the interface controller is about to be displayed. The reason your second interface controller's willActivate followed by didDeactivate is called is that it is the next page that can come onscreen. This happens to help preload the next interface controller with relevant data, since it may come on screen soon.
Therefore, if you had a third page interface controller, its willActivate followed by didDeactivate would be called when the second interface controller is onscreen.
The Apple docs on willActivate and on page-based navigation on the watch may not say this explicitly, but they are always worth reading.

How to catch all and only button release events in Qt?

My team is developing a UI for an apparatus with a touch screen, and we would like it to emit a sound (from a buzzer) each time the user correctly presses a button (so using the release event). Note that I don't want to play the sound after each click on the interface, but only when the click is over a button.
We use many types of buttons, sometimes QPushButton and most of the time custom buttons derived from QAbstractButton. In most cases these buttons get an objectName.
So I supposed that in order to do this, I would have to catch the MouseButtonRelease event, and since I'm already working with a subclass of QApplication to handle exceptions, I decided to do it in the notify function.
I then tried several methods to recognize when the MouseButtonRelease was related to a button, but none of them were successful. The best one, verifying the receiver's objectName, was still not good enough, not only because not all buttons had an objectName (which, of course, can be handled), but especially because the event was not always caught for buttons whose names were set. In other words, sometimes I would click on a button and the event would be recognized, and sometimes I would click on the same button and the event would not be recognized.
I did some research, and another method I found was to set an event filter in the MainWindow, but not all widgets have the MainWindow as their parent, which means I would have to Ctrl+C / Ctrl+V the same code time after time, when I obviously want something kept in a single place.
So why does notify not always handle the events? And how could I do this? Any suggestions are appreciated, especially ones less heavy than handling the events globally.
For the record, the other two ways I tried to catch the events inside notify, with similar or even worse results, were receiver->inherits("...") and qobject_cast<QAbstractButton*>(receiver).

Filter mouse move events and send them again

My program uses Qt, and I have a problem I could not find an answer to on this website.
Our product needs to update an image while the user moves the mouse, but updating the image is very time-consuming. If the user moves the mouse quickly, the system generates a lot of mouse move events, eventually clogging up the background process. Therefore, we need to filter out some of the events.
I filter mouse move events by installing an event filter on QApplication:
qApp->installEventFilter(this);
Once I catch a mouse move event, I store the QMouseEvent and the QObject pointer and start a QTimer. Other mouse move events can overwrite them before the timeout. After the timeout, the last event is posted.
I can't use:
QApplication::sendEvent(XX) or postEvent(xx)
because it will be caught by my event filter again.
How can I make it work?
Don't filter the events. Instead, change the background worker that is responsible for producing the data so that it does not spend time on work whose results you won't need.
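A framework-agnostic sketch of that idea in plain Java (the class and method names are made up for illustration): the producer simply overwrites a single "latest request" slot on every mouse move, and the worker always renders only the most recent value, so a burst of moves costs at most one extra render instead of a growing backlog.

import java.util.concurrent.atomic.AtomicReference;

// Illustration only (names are hypothetical): the producer overwrites the latest
// request, and the worker renders whatever is newest when it gets to it, so stale
// mouse positions are silently dropped instead of queued.
public class CoalescingWorker {
    private final AtomicReference<double[]> latest = new AtomicReference<>();

    // Called from the UI thread on every mouse move; cheap and never blocks.
    public void submit(double x, double y) {
        latest.set(new double[] {x, y});
    }

    public void start() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                double[] request = latest.getAndSet(null);    // take the newest request, if any
                if (request == null) {
                    try { Thread.sleep(5); } catch (InterruptedException e) { return; }
                    continue;
                }
                renderExpensiveImage(request[0], request[1]); // only the latest position is rendered
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private void renderExpensiveImage(double x, double y) {
        // stand-in for the slow image update
        System.out.printf("rendering image for (%.1f, %.1f)%n", x, y);
    }
}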

PlayN and multitouch?

I've made this Button class to catch pointer events:
public class Button implements Pointer.Listener {
public void initLayer(Image defaultImage) {
layer = parent.createImageLayer(this.defaultImage);
layer.addListener(this);
}
...
If I touch one of the instantiated buttons, I get the onPointerStart and onPointerEnd events. But if one of my buttons is already touched and I start to touch another, I don't get the onPointerStart event for the second button.
Is there a way to get these multi-touch events with PlayN?
The Pointer service is meant to abstract over either a simple touch interaction or a mouse interaction. Thus it does not support any sort of multi-touch interactions. You will not receive notifications about any touches other than the first via the Pointer service.
If you want to handle multiple touches, you have to use the Touch service, and there is currently no way to register Touch listeners directly on layers. So you'll have to register a global listener and do your own hit testing, and map touch movements to the layer that was first hit by that touch, etc.
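A rough sketch of that approach, assuming the PlayN 1.x Touch API (PlayN.touch(), Touch.Adapter, and Touch.Event's x()/y()/id(); double-check these against the PlayN version you use). The contains(), onPressed() and onReleased() helpers on Button are hypothetical additions you would implement against your own layer bounds.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import playn.core.PlayN;
import playn.core.Touch;

// Sketch only: one global Touch listener that does its own hit testing and routes
// each touch id to the button it started on, so a second finger gets its own events.
public class ButtonTouchDispatcher extends Touch.Adapter {
    private final List<Button> buttons = new ArrayList<Button>();
    private final Map<Integer, Button> activeTouches = new HashMap<Integer, Button>();

    public void register(Button button) {
        buttons.add(button);
    }

    public void install() {
        PlayN.touch().setListener(this);    // a single listener for every finger
    }

    @Override
    public void onTouchStart(Touch.Event[] touches) {
        for (Touch.Event touch : touches) {
            for (Button button : buttons) {
                if (button.contains(touch.x(), touch.y())) {   // our own hit test (hypothetical helper)
                    activeTouches.put((int) touch.id(), button);
                    button.onPressed();                        // hypothetical helper
                    break;
                }
            }
        }
    }

    @Override
    public void onTouchEnd(Touch.Event[] touches) {
        for (Touch.Event touch : touches) {
            Button button = activeTouches.remove((int) touch.id());
            if (button != null) {
                button.onReleased();    // release is delivered to the button first hit by this touch
            }
        }
    }
}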

Recognize a holding/long touch on a list component in Flex Mobile

I'm trying to create my own custom list component in a Flex Mobile project which fires an event when the user touches a list item and holds the finger down for a given time.
Some kind of "longTouch" event, like the one implemented on native Android list items to edit an entry, for example.
I tried listening for the MOUSE_DOWN event to start a timer and dispatch an event when the timer finished. But this approach failed because I can't get the list item that was pressed by the user: the List component updates the "selectedItem" property only after the user lifts their finger from the list.
There is no longTouch (or longPress) event exposed through the Flash Player Native APIs.
One option is to roll your own using TOUCH_BEGIN, TOUCH_END, and a timer.
Basically:
When the user starts the touch, start the timer.
When the TOUCH_END event fires, check the timer to see how long it has been running using currentCount. If it has run long enough to be considered a "long touch", dispatch your custom longPress event. If not, stop the timer and ignore it.
This could all happen inside the item renderer, so you'd know exactly which item was pressed.
I expect this would be more solid than using mouse events, which seem to be inconsistent on touch-based devices.
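The timing pattern itself is framework-agnostic; here is the same idea as a tiny plain-Java sketch (the class and method names are made up), measuring the elapsed time between the begin and end of a touch and only reporting a long press when it exceeds a threshold. In the Flex renderer the two methods would be driven by the TOUCH_BEGIN and TOUCH_END handlers.

import java.util.concurrent.TimeUnit;

// Illustration only: remember when a touch starts and decide on release whether
// it was held long enough to count as a "long press".
public class LongPressDetector {
    private final long thresholdNanos;
    private long touchStartNanos = -1;

    public LongPressDetector(long thresholdMillis) {
        this.thresholdNanos = TimeUnit.MILLISECONDS.toNanos(thresholdMillis);
    }

    public void onTouchBegin() {
        touchStartNanos = System.nanoTime();    // remember when the press started
    }

    // Returns true when the press lasted long enough to be a long press.
    public boolean onTouchEnd() {
        if (touchStartNanos < 0) {
            return false;                       // no matching begin; ignore
        }
        long held = System.nanoTime() - touchStartNanos;
        touchStartNanos = -1;
        return held >= thresholdNanos;
    }

    public static void main(String[] args) throws InterruptedException {
        LongPressDetector detector = new LongPressDetector(500);
        detector.onTouchBegin();
        Thread.sleep(600);                      // simulate holding the finger down
        System.out.println("long press? " + detector.onTouchEnd());   // prints: long press? true
    }
}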
