I have added 3D view objects using UrhoSharp in my Xamarin UWP/iOS/Android project. The only event that works is the touch event, but I also want to use drag and drop so that the objects can be moved to different locations within the 3D view. Any suggestions?
https://us.v-cdn.net/5019960/uploads/editor/ni/u16pg79v2m62.png
I haven't used UrhoSharp yet, but here are some suggestions about using drag and drop; I'm not sure whether they will help in your case.
urhosharp: Basic Actions
The UrhoSharp documentation explains some basic actions, but drag and drop is not among them. You may be able to achieve it by combining actions with each platform's drag methods, though that will take some experimentation.
UWP: reference link here
Here's an overview of what you need to do to enable drag and drop in your app:
Enable dragging on an element by setting its CanDrag property to true.
Build the data package. The system handles images and text automatically, but for other content, you'll need to handle the DragStarted and DragCompleted events and use them to construct your own data package.
Enable dropping by setting the AllowDrop property to true on all the elements that can receive dropped content.
Handle the DragOver event to let the system know what type of drag operations the element can receive.
Process the Drop event to receive the dropped content.
Code example:
<Grid AllowDrop="True" DragOver="Grid_DragOver" Drop="Grid_Drop"
Background="LightBlue" Margin="10,10,10,353">
<TextBlock>Drop anywhere in the blue area</TextBlock>
</Grid>
private void Grid_DragOver(object sender, DragEventArgs e)
{
    // Tell the system this element accepts copy operations.
    e.AcceptedOperation = DataPackageOperation.Copy;
}
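The XAML above also wires up Drop="Grid_Drop". A minimal handler for plain-text content might look like this (a sketch; it assumes the dragged data is text and requires using Windows.ApplicationModel.DataTransfer;):
private async void Grid_Drop(object sender, DragEventArgs e)
{
    // Only read the payload if the data package actually contains text.
    if (e.DataView.Contains(StandardDataFormats.Text))
    {
        string text = await e.DataView.GetTextAsync();
        // Use the dropped text here, e.g. display it in the TextBlock.
    }
}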
iOS: reference link here
With drag and drop in iOS, users can drag items from one onscreen location to another using continuous gestures. A drag-and-drop activity can take place in a single app, or it can start in one app and end in another.
Use drag items to convey data representation promises between a source app and a destination app.
Adopt drag interaction APIs to provide items for dragging.
Adopt drop interaction APIs to selectively consume dragged content.
The following example demonstrates how to enable drag and drop for a view, such as a UIImageView instance.
Example code:
func customEnableDragging(on view: UIView, dragInteractionDelegate: UIDragInteractionDelegate) {
let dragInteraction = UIDragInteraction(delegate: dragInteractionDelegate)
view.addInteraction(dragInteraction)
}
// UIDragInteractionDelegate method: supply the items for the drag session.
func dragInteraction(_ interaction: UIDragInteraction, itemsForBeginning session: UIDragSession) -> [UIDragItem] {
// Cast to NSString is required for NSItemProviderWriting support.
let stringItemProvider = NSItemProvider(object: "Hello World" as NSString)
return [
UIDragItem(itemProvider: stringItemProvider)
]
}
Here is a sample for Xamarin.iOS.
Alternatively, you can use a UIPanGestureRecognizer on iOS to move the view; see Walkthrough: Using Touch in Xamarin.iOS. All you need to do is update view.Center to follow the pan gesture, as in the sketch below.
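A minimal C# sketch of that idea for Xamarin.iOS (view is assumed to be the object you want to move):
using CoreGraphics;
using UIKit;

var pan = new UIPanGestureRecognizer(g =>
{
    // How far the finger has moved since the gesture began (or the last reset).
    var t = g.TranslationInView(view.Superview);
    view.Center = new CGPoint(view.Center.X + t.X, view.Center.Y + t.Y);
    // Reset so the next callback reports only the incremental movement.
    g.SetTranslation(CGPoint.Empty, view.Superview);
});
view.UserInteractionEnabled = true;
view.AddGestureRecognizer(pan);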
Android: reference link here
With the Android drag/drop framework, you can allow your users to move data from one View to another using a graphical drag and drop gesture. The framework includes a drag event class, drag listeners, and helper methods and classes.
There are basically four steps or states in the drag and drop process:
Started: In response to the user's gesture to begin a drag, your application calls startDrag() to tell the system to start a drag.
Continuing: The user continues the drag.
Dropped: The user releases the drag shadow within the bounding box of a View that can accept the data.
Ended: After the user releases the drag shadow, and after the system sends out (if necessary) a drag event with action type ACTION_DROP, the system sends out a drag event with action type ACTION_DRAG_ENDED to indicate that the drag operation is over.
The DragEvent action types are ACTION_DRAG_STARTED, ACTION_DRAG_ENTERED, ACTION_DRAG_LOCATION, ACTION_DRAG_EXITED, ACTION_DROP, and ACTION_DRAG_ENDED (see the DragEvent reference for the full table).
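Those steps translate fairly directly to Xamarin.Android. A rough C# sketch (sourceView and targetView are illustrative names, not from the original):
using Android.Content;
using Android.Views;

// Start a drag when the user long-presses the source view.
sourceView.LongClick += (s, e) =>
{
    var data = ClipData.NewPlainText("label", "payload");
    sourceView.StartDrag(data, new View.DragShadowBuilder(sourceView), null, 0);
};

// React to the drag event stream on the view that accepts drops.
targetView.Drag += (s, e) =>
{
    switch (e.Event.Action)
    {
        case DragAction.Started:
            e.Handled = true; // declare interest in this drag
            break;
        case DragAction.Drop:
            // Read back the payload packed into the ClipData above.
            string payload = e.Event.ClipData.GetItemAt(0).Text;
            e.Handled = true;
            break;
        case DragAction.Ended:
            e.Handled = true;
            break;
    }
};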
Alternatively, on Android you can use onTouchEvent to move the view; you have to calculate the view's position yourself. See Walkthrough - Using Touch in Android.
The main thing is to handle the press and move events by overriding onTouchEvent. The math is a simple translation: record the coordinates on ACTION_DOWN, then on ACTION_MOVE compute the offset between the current position and the press position, and move the view by that offset so it redraws in the new place. A minimal sketch follows.
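A C# (Xamarin.Android) sketch of that approach, as a custom view that drags itself (names are illustrative):
using Android.Content;
using Android.Views;

public class DraggableView : View
{
    float lastX, lastY;

    public DraggableView(Context context) : base(context) { }

    public override bool OnTouchEvent(MotionEvent e)
    {
        switch (e.Action)
        {
            case MotionEventActions.Down:
                // Record the screen coordinates at the moment of the press.
                lastX = e.RawX;
                lastY = e.RawY;
                return true;
            case MotionEventActions.Move:
                // Translate the view by the amount the finger has moved.
                SetX(GetX() + (e.RawX - lastX));
                SetY(GetY() + (e.RawY - lastY));
                lastX = e.RawX;
                lastY = e.RawY;
                return true;
        }
        return base.OnTouchEvent(e);
    }
}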
There is also a discussion about drag and drop in Xamarin.Forms that may be helpful.
Related
Is it possible to receive a mouse click event in a Qt application, evaluate it, and, if necessary, let it fall through to whatever might happen to be below the Qt application window?
Note that Qt::WA_TransparentForMouseEvents doesn't facilitate evaluating the click before passing it through.
And since the click evaluation involves some dynamic logic, setting a static mask is not an option either, quite apart from its visual impact.
Ideally, I would like a way to selectively allow the mouse click to pass through the application window in a platform portable way, ideally from QML and without bringing in the widgets module, or at the very least, without involving digging into private C++ internal APIs.
Qt::WA_TransparentForMouseEvents is used to filter mouse events out. The name is a bit of a misnomer: it allows widgets that would otherwise consume mouse events, not to consume them. E.g. you could make a button not notice any mouse events. If you're writing a custom widget, there's never any need for this attribute, since it's up to you to inspect the mouse events and simply not handle them: they are automatically passed to the parent widget.
But all of this doesn't matter much, since the WA_ attributes are for widgets, and do nothing for windows. You want something else entirely: to make the window itself transparent for input. Thus, in QML:
window.flags = window.flags | Qt.WindowTransparentForInput
Use QWidget.setMask to set the area of the window that responds to input:
def resizeEvent(self, event):
    # Recompute the mask whenever the window is resized.
    geo = self.frameGeometry()
    path = QtGui.QPainterPath()
    path.addEllipse(0, 0, 300, 200)
    # Convert the path to a region; only this region will receive input.
    poly = path.toFillPolygon().toPolygon()
    reg = QtGui.QRegion(poly)
    print('resize event', geo, poly)
    self.setMask(reg)
I'm implementing a basic shape drawing tool using a custom subclass of Qt's QGraphicsScene and several QGraphicsItem. Now there are several situations where I don't want any "global" actions to be executed:
For example, while dragging items around, the user should not be allowed to create a new file or to undo the last action (by pressing Ctrl-Z for example) since this would lead to several problems that would have to be handled separately (if the user is currently drawing an edge between two nodes, what should happen if he presses Ctrl-Z with the last recorded action being the creation of the first node?)
I noticed that several commercial applications like Microsoft Word and Adobe Photoshop just seem to ignore any usual keyboard shortcuts while being in such an "intermediate" state. Furthermore, when dragging items out of the viewport, these tools display a "forbidden" cursor and do not allow any mouse press events to reach the outer window (like a right click on the toolbar, for example).
How should I implement this in my case, when using QGraphicsScene? I already tried to add the following override:
void MyGraphicsScene::keyPressEvent(QKeyEvent* keyEvent)
{
keyEvent->accept();
}
But any pressed keys were still delivered to the main window. In addition to that, I'm not sure if filtering just keyboard events is safe enough, since there might be other input events that could trigger forbidden actions.
Is there any generic approach to this problem that I could use in my software?
Generally we use UIImagePickerControllerDelegate to get the selected image from the user. I need the image right after the user captures it, without the Use Photo / Cancel prompt. Is that possible?
Step 1: Remove the default camera controls:
[pickerController setShowsCameraControls:NO];
Step 2: Create an overlay view and put a button on it to capture the image. Set that view as the picker controller's overlay view:
[pickerController setCameraOverlayView:camOverlay];
In the button's action, call this method:
[pickerController takePicture];
This automatically calls the delegate method, and we get the image directly without the Use Photo / Cancel step.
NOTE: The picker controller's source type must be camera for the above to work.
pickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
Also set the picker controller's delegate to self.
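For completeness, a rough Xamarin.iOS equivalent of these steps (camOverlay and captureButton stand for your own overlay view and button):
using UIKit;

var picker = new UIImagePickerController
{
    // The source type must be the camera for the overlay/TakePicture approach.
    SourceType = UIImagePickerControllerSourceType.Camera,
    ShowsCameraControls = false,
    CameraOverlayView = camOverlay
};
captureButton.TouchUpInside += (s, e) => picker.TakePicture();
picker.FinishedPickingMedia += (s, e) =>
{
    // The captured photo arrives here directly, with no Use Photo/Cancel step.
    UIImage image = e.OriginalImage;
    picker.DismissViewController(true, null);
};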
Functionally:
One of my application's components has an edit/lock system: when a user starts editing, they lock the file so other users cannot edit it.
Problem scenario: when the user activates edit mode and then leaves the screen, I would like to show an alert with two options: save changes or discard changes.
There are different ways to exit the screen:
There is a list on the left side containing other editable data. A click changes the data in my component.
There is a menubar on top leading to other screens.
The editing component is embedded in a tab navigator. When changing tabs, the alert has to show.
Closing the browser.
Do I have to catch all of these events and hook in at all of those places?
Is there any kind of focus-out mechanism?
The answer to the first question is: YES.
You need to watch all possible exit events that could harm the currently edited data.
Well, the problem is now how to manage this properly. Using an MVC framework you would trigger the appropriate commands from your components:
CHANGE_LIST_ITEM (new item)
CHANGE_TAB (new tab)
CHANGE_SCREEN (new screen)
Each command then checks whether the currently edited data has been saved. If not, it displays the alert; if there are no changes, it lets the list, the screen chooser, or the tab bar continue.
So your components (list, screens, tabs) need to implement some kind of rollback or preventDefault mechanism. Generally, changing their state must be approved by a central validator (in MVC, the command); a small sketch of that idea follows below.
In the case of the list: I would suggest that the list is not selectable by mouse click but only programmatically. Set a listener on the list item click event; if the command allows setting the new item, it notifies the list, in MVC usually by sending an async message that is received by the list's mediator. [[To be even more correct: the command would set some model properties (e.g. currentListItem) and the model then sends an async message.]]
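The original context is Flex, but the guard pattern itself is language-agnostic. A C# sketch of such a central validator (all names hypothetical):
using System;

// Every state change funnels through one guard before a component may proceed.
public class EditGuard
{
    public bool HasUnsavedChanges { get; set; }

    // Components call this instead of changing state directly; 'proceed' runs
    // immediately when nothing needs saving, or after the user has answered
    // the save/discard alert.
    public void RequestStateChange(Action proceed)
    {
        if (!HasUnsavedChanges)
        {
            proceed();
            return;
        }
        ShowSaveDiscardAlert(onResolved: proceed);
    }

    void ShowSaveDiscardAlert(Action onResolved)
    {
        // UI-framework specific: show the alert, save or discard the changes,
        // then invoke onResolved().
    }
}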
Edit: for the browser close event you will need JavaScript; the browser's onbeforeunload hook is the usual place.
Looking through the documentation, it seems that the new advanced gestures API doesn't determine the direction of a swipe beyond the basic { left, right, up, down }.
I need the start point of the swipe and the direction.
Is there any way to retrieve this other than coding my own advanced gesture library from scratch on top of the basic gestures?
And if this is my only option, could anyone point me to some open source code that does this?
Got it! Documentation is here, under 'Creating Custom Gesture Recognizers' at the bottom.
Basically the six gestures Apple provides all derive from UIGestureRecognizer, and you can make your own gesture recogniser in the same way.
Then, inside your view's init, you hook up your recogniser, and just the act of hooking it up automatically reroutes incoming touch events.
Actually, the default behaviour is to make your recogniser an Observer of these events. Which means your view gets them as it used to, and in addition if your recogniser spots a gesture it will trigger your myCustomEventHandler method inside your view (you passed its selector when you hooked up your recogniser).
But sometimes you want to prevent the original touch events from reaching the view, and you can fiddle around in your recogniser to do that. So it's a bit misleading to think of it as an 'observer'.
There is one other scenario, where one gesture needs to eat another. For example, you can't just send back a single click if your view is also primed to receive double clicks: you have to wait for the double-click recogniser to report failure, and if it is successful, you need to fail the single click -- obviously you don't want to send both back!
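For reference, a bare-bones custom recognizer in Xamarin.iOS might look like this (a sketch; it merely exposes the start point so a handler can compute an arbitrary direction from it):
using CoreGraphics;
using Foundation;
using UIKit;

public class DirectionalSwipeRecognizer : UIGestureRecognizer
{
    public CGPoint StartPoint { get; private set; }

    public override void TouchesBegan(NSSet touches, UIEvent evt)
    {
        base.TouchesBegan(touches, evt);
        // Remember where the swipe started, in the attached view's coordinates.
        StartPoint = ((UITouch)touches.AnyObject).LocationInView(View);
        State = UIGestureRecognizerState.Began;
    }

    public override void TouchesMoved(NSSet touches, UIEvent evt)
    {
        base.TouchesMoved(touches, evt);
        // The current location minus StartPoint is the live swipe vector, so a
        // handler gets a real direction rather than just left/right/up/down.
        State = UIGestureRecognizerState.Changed;
    }

    public override void TouchesEnded(NSSet touches, UIEvent evt)
    {
        base.TouchesEnded(touches, evt);
        State = UIGestureRecognizerState.Ended;
    }
}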