A new behavior in iOS 5.1 related to UISplitViewController apps seems to be intercepting UISlider motion with undesired results. This might also apply to UISegmentedControl and any other control surface that handles left-to-right gestures.
With a UISplitViewController in portrait orientation, the Master view is normally hidden. Starting in iOS 5.1, a right swipe on screen brings up the Master view on the left side of the device. The problem is that sliding the thumb of my UISlider control is misinterpreted as a screen swipe: if I give the UISlider thumb a sharp push to the right, the Master panel pops up.
In my app, bringing up the Master view has undesired side effects (and performance issues).
I consider this behavior an Apple bug. Any ideas how to work around it? Can I somehow have the UISlider capture the gesture and process it, without passing it up the responder chain?
Thanks in advance for any insight!
Apple confirmed the issue as a duplicate of a previously reported bug that is currently under investigation (Bug ID# 10170209).
The workaround seems to be functioning fine for now.
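The workaround itself isn't spelled out here, but one mitigation that should apply (assuming you can live without the built-in swipe) is UISplitViewController's presentsWithGesture property, added in iOS 5.1, which turns the system gesture off entirely so a horizontal drag on a UISlider is no longer hijacked. A minimal Swift sketch:

import UIKit

// Minimal sketch; `split` is assumed to be your app's UISplitViewController.
// Turning off presentsWithGesture disables the system swipe that reveals the
// hidden Master pane, so horizontal drags on a UISlider stay with the slider.
func disableMasterRevealSwipe(on split: UISplitViewController) {
    split.presentsWithGesture = false   // available since iOS 5.1
}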
I have an iPad app (in C#) with a custom UIView that accepts input via finger touches and Apple Pencil touches. I am trying to integrate support for indirect trackpad/mouse input (the cursor, or "pointer" as Apple calls it).
I got hover working using a HoverGestureRecognizer. I got right-click and control-click working using the normal touch Began/Moved/Ended/Cancelled events: I check for .type == .indirectPointer, then check whether the control key modifier is set in event.ModifierFlags or whether event.ButtonMask == secondary.
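For anyone working in Swift rather than C#, a rough sketch of the same checks (the view class name is hypothetical; the UIKit equivalents of the Xamarin properties are modifierFlags and buttonMask, both available from iOS 13.4):

import UIKit

class CanvasView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }

        // Only handle trackpad/mouse input specially (iOS 13.4+).
        if touch.type == .indirectPointer, let event = event {
            let isControlClick = event.modifierFlags.contains(.control)
            let isRightClick = event.buttonMask == .secondary
            if isControlClick || isRightClick {
                // Secondary-click behavior (e.g. show a context menu) goes here.
                return
            }
        }
        // Otherwise fall through to the normal finger / Apple Pencil handling.
        super.touchesBegan(touches, with: event)
    }
}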
I have spent a lot of time searching through the documentation on the Apple Developer website, starting here and branching out:
UIApplicationSupportsIndirectInputEvents
Somehow I cannot find the API that the system calls in my code when a two-finger trackpad scroll (or mouse scrollwheel scroll) occurs. (On another view that is a scrollview, I can get the scrollview's scroll event when I do a two-finger scroll, since this is built-in to iPadOS 13.4+ for scroll views, but my custom view is not a scroll view, it just has some scrollable areas inside of it.)
Things I tried:
UISwipeGestureRecognizer. Nothing was called for two-finger trackpad scroll gesture.
UIPanGestureRecognizer. Nothing.
Subclassing UIScrollView and adding a UIScrollViewDelegate, just to see if it would work... Nothing.
Subclassing UIGestureRecognizer and adding that, then overriding ShouldReceive(UIEvent evt), but that was never called.
What does iPadOS 13.4+ convert the trackpad two-finger scroll gesture into? Can I get it as some sort of event? The documentation linked above is disappointingly barebones: it mentions UIEvent.EventType.scroll, but not how, when, or where the system will call any of my methods with an event of that type. Pretty infuriating; they should just spell this out more clearly.
Answers in Swift or C# are welcomed.
OK, strangely, I thought I had tried PanGestureRecognizer, but I must have set it up wrong. Apple's example code project, Integrating Pointer Interactions into Your iPad App, had the answer (C# code):
panRecognizer = new UIPanGestureRecognizer(() => {
    Console.WriteLine("panned -- " + panRecognizer.VelocityInView(this));
});
panRecognizer.AllowedScrollTypesMask = UIScrollTypeMask.Continuous;
AddGestureRecognizer(panRecognizer);
Glad I figured this out!
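For completeness, here is the same setup in Swift (the question welcomed either language); the view class and handler names are just placeholders:

import UIKit

class TrackpadAwareView: UIView {
    private var panRecognizer: UIPanGestureRecognizer!

    override init(frame: CGRect) {
        super.init(frame: frame)
        panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        // Opt in to pans generated by trackpad two-finger scrolls and mouse
        // scroll wheels (iOS 13.4+); without this, only direct touches arrive.
        panRecognizer.allowedScrollTypesMask = .continuous
        addGestureRecognizer(panRecognizer)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    @objc private func handlePan(_ recognizer: UIPanGestureRecognizer) {
        print("panned -- \(recognizer.velocity(in: self))")
    }
}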
Something seems to have changed in Qt 5: you can't get a drop or move event if you don't move at least one pixel from the start point where you were when QDrag::exec() was called. Try putting a breakpoint in the dropEvent of the Draggable Icons Sample, then click a boat and release it without moving the mouse. That generates an "ignore" without any drop signal.
(This is on Kubuntu 13.10 with Qt 5.1.)
When teaching how to start a drag operation, the documentation suggests you might use manhattanLength() to determine whether the mouse has moved far enough to really qualify as "the user intending to start a drag". But you don't have to use that; you can start a QDrag on the click itself.
Anyone know of a workaround to have that same kind of choice on the drop side, or is that choice gone completely? :-/
Why I care: I've long had frustrations trying to get a tight control on mouse behavior in GUI apps—Qt included. There seems to be no trustworthy state transition diagram you can draw of the invariants. It's a house of cards you can disprove very easily with simple tests like:
virtual void enterEvent(QEvent * event) {
    Q_ASSERT(!_entered);
    _entered = true;
}

virtual void leaveEvent(QEvent * event) {
    Q_ASSERT(_entered);
    _entered = false;
}
This breaks all kinds of ways, and how it breaks depends on the platform. (For the moment I'll talk about Kubuntu 13.10 with Qt 5.1.) If you press the mouse button and drag out of the widget, you'll receive a leaveEvent when you cross the boundary...and then another leaveEvent when the button is released. If you leave the window and activate another app in a window on screen and then click inside the widget to reactivate the Qt app, you'll get two consecutive enterEvents.
Repeat this pattern for every mouse event and try to get a solid hold on the invariants... good luck! Nailing these down into a bulletproof app that "knows" its state and doesn't fall apart (especially in the face of wild clicking and alt-Tabbing) is a bit of a lost cause.
This isn't good if your program does allocations and has heavy processing, and doesn't want to do a lot of sweeping under the rug (e.g. "Oh, I was doing some processing in response to being entered... but I just got entered again without a leave. Hm, I guess that happens! Throw the current calculations away and start again...")
In the past what I've done is to handle all my mouse operations (even simple clicking) with drag & drop. Getting the OS drag & drop facility involved in the operation tended to produce a more robust experience. I can only presume this is because the testers actually had to consider things like task switching with alt-Tab, etc. and not cause multiple drop operations or just forget that an operation had been started.
But the "baked in at a level deeper than the framework" aspect actually makes this one-pixel-move requirement impossible to change. I tried to hack around it by setting a timer event, then faking a QMouseEvent to bump the cursor to a new position once the drag was in effect. However, I surmise that the drag and drop is hooked in at the platform level, and doesn't consult the ordinary Qt event queue: src/plugins/platforms/xcb/qxcbdrag.cpp
As of 1-May-2014, the issue has been acknowledged as a bug by the Qt team:
https://bugreports.qt-project.org/browse/QTBUG-34331
It seems that my bountying it here finally brought it to their attention, though it did not generate any SO answers I could accept to finalize the issue. So I'm writing and accepting my own. Good work, me. (?) Sorry for not having a better answer. :-/
There is another unfortunate side effect of the Qt 5 change, pointed out by Dmitry Mordvinov:
Same problem here. Additionally, app events are not handled until the first mouse event after the drag has started, and this is a really nasty bug. For example, all app animations are suspended during that moment, or the application hangs when you try to drag with a touch monitor.
@dvvrd had to work around it, but felt the workaround was too ugly to share. So it seems that if you're affected by the problem, the right thing to do is to go weigh in and add your voice on the issue tracker, to perhaps raise the priority of a solution.
(Or even better: patch it and submit the patch. 'tis open source, after all...)
I've been looking through the iOS 7 / UIKit framework, and although it looks quite different aesthetically, it's really the same SDK underneath from what I can see.
My question: is there any extra code that needs to be included to get the draggable behaviour between pushed table views/views?
When you push a view controller onto a UINavigationController, you can now drag back to the previous controller from the side of the screen rather than pressing the back button.
This behavior can be seen in Mail.
How is this achieved, do I need to add any code to add it to my app?
This has nothing to do with UITableView or UITableViewController, but with UINavigationController. And yes, you get this behavior for free as long as the back button is visible.
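For reference, the gesture is exposed on UINavigationController as interactivePopGestureRecognizer (iOS 7+). You normally write no code at all; the sketch below only covers the cases where you might touch it (for example, if a custom leftBarButtonItem has suppressed it), and conforming to UIGestureRecognizerDelegate here is an assumption about your setup:

import UIKit

class DetailViewController: UIViewController, UIGestureRecognizerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // The edge-swipe back gesture is on by default whenever the system
        // back button is visible. Re-enable it (and supply a delegate) only
        // if a custom back button has disabled it.
        navigationController?.interactivePopGestureRecognizer?.isEnabled = true
        navigationController?.interactivePopGestureRecognizer?.delegate = self
    }
}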
Ever since iOS 5, I have had a problem where, when I present and then dismiss a modal view, my navigation controller's bar is hidden underneath the status bar. I have read the forums and tried many things, but I cannot find the fix for this behavior.
Also, I get this behavior when presenting any modal view controller, so it does not appear to be specific to the view controller I am presenting. At first I thought it was a problem with ZXing, but this seems to be generic to the iOS 5 update.
Additionally, if I select a UITextField after dismissing the modal while my navigation bar is hidden under the status bar, the keyboard comes up misplaced in my window. Again, if I rotate back and forth, the navigation controller bar and the keyboard work just fine.
Any ideas would be appreciated.
RESOLVED
OK. I finally found the problem here. Again, this only appeared in iOS 5. When my RootViewController launches, it holds off on rotations until the launch animation is done; once the animation finishes, it allows rotations again. The problem was that it was returning NO for all orientations (including portrait). The view showed fine, but when I would present a modal and return, the view geometry was mangled. Once I changed it to return YES for portrait even during the animation, the problem went away.
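In modern terms the fix looks roughly like the sketch below; the original iOS 5 code would have overridden shouldAutorotateToInterfaceOrientation:, which is long deprecated, and isLaunchAnimationRunning is a hypothetical flag tracked by the controller:

import UIKit

class RootViewController: UIViewController {
    // Hypothetical flag set while the launch animation runs.
    var isLaunchAnimationRunning = false

    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        // Never report that nothing is supported: always allow at least
        // portrait, even mid-animation, or the view geometry can be mangled
        // after a modal view controller is presented and dismissed.
        return isLaunchAnimationRunning ? .portrait : .all
    }
}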
I have two overlapping images. I touch the topmost image and start moving (touchmove) finger around. All subsequent touchmove events are received by that image. In the middle of this interaction I want the events to go to the image underneath, so that I can move it around instead.
How do I change the event source to the image underneath? That is, once an object has started receiving touch events, how do I change the target of those events?
I suspect that Joe Blow is talking about handling touch events in the context of a native iPhone app built with Objective-C.
Eric's question is about handling touch events in Mobile Safari with JavaScript.
I could be confused though...