How do I create a platform-specific custom renderer within a Xamarin.Forms class library?
I'm creating a Xamarin.Forms class library that allows me to expand a picker's list with a tap anywhere on the control. That means I need to open the picker's list programmatically within the control's Tapped event handler.
This is trivial in the Android and iOS picker implementations - just a call to the picker's .Focus() method. However, the .Focus() method of the UWP picker doesn't respond the same way - mainly because UWP must also handle mouse events, not just touch events.
Conceptually no problem, though, since I should be able to create a custom renderer for UWP that sets the ComboBox's IsDropDownOpen property within the Tapped event handler.
Except...I'm not sure how to create a platform-specific custom renderer within a class library, since - unlike a normal Xamarin.Forms project - there are no platform-specific projects in which to implement custom renderers. Unfortunately, I haven't been able to find information anywhere about how to do this.
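One way this can be done (a sketch, not a verified solution): a plain .NET Standard class library can't contain UWP code at all, so the renderer has to live in a target that actually compiles against UWP - either a multi-targeted SDK-style library or a shared project consumed by the platform heads - with the platform code guarded by a conditional-compilation symbol. Assuming a hypothetical TappablePicker subclass in the shared code, the UWP renderer might look like this:

#if WINDOWS_UWP
using Xamarin.Forms;
using Xamarin.Forms.Platform.UWP;

[assembly: ExportRenderer(typeof(MyControls.TappablePicker), typeof(MyControls.UWP.TappablePickerRenderer))]

namespace MyControls.UWP
{
    public class TappablePickerRenderer : PickerRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Picker> e)
        {
            base.OnElementChanged(e);

            if (Control != null)
            {
                // The native control is a ComboBox; open its list on tap
                // instead of relying on Focus(), which UWP ignores here.
                Control.Tapped += (sender, args) => Control.IsDropDownOpen = true;
            }
        }
    }
}
#endif

The WINDOWS_UWP symbol is defined by UWP project heads; in a multi-targeted library you would define an equivalent symbol for the UWP target yourself.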
I want to handle the table view's didSelectRowAt delegate method in a Siri Shortcuts Custom IntentUI.
I also want to handle a UIButton action in the same Custom IntentUI.
Is either case possible?
Please share a solution (if possible).
#SiriShortcuts
#iOS12
#Custom IntentUI
You can't handle interactions in a Custom Intent UI, because the intent view controller does not receive touch interactions. This is from the official documentation (check the section on requirements and limitations):
You can update your view controllers as needed using timers or other programmatic means. Your view controllers also receive the normal callbacks when they are loaded, shown, and hidden. However, your view controllers do not receive touch events or any other events while they are onscreen, and you cannot add gesture recognizers to them. Therefore, never create an interface with controls or views that require user interactions.
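For illustration, here is a minimal sketch of those "programmatic means" (written in Xamarin.iOS C# to match the rest of this page; the controller and label names are illustrative, not from any template):

using System;
using CoreGraphics;
using Foundation;
using UIKit;

public class IntentStatusViewController : UIViewController
{
    UILabel statusLabel;
    NSTimer refreshTimer;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        statusLabel = new UILabel(new CGRect(0, 0, 320, 44));
        View.AddSubview(statusLabel);
    }

    public override void ViewDidAppear(bool animated)
    {
        base.ViewDidAppear(animated);
        // No touch events ever reach this controller, so drive UI
        // updates with a timer instead of user interaction.
        refreshTimer = NSTimer.CreateRepeatingScheduledTimer(1.0, timer =>
            statusLabel.Text = DateTime.Now.ToLongTimeString());
    }

    public override void ViewDidDisappear(bool animated)
    {
        base.ViewDidDisappear(animated);
        refreshTimer?.Invalidate();
    }
}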
I am attempting to integrate the Daydream keyboard into an app built in Unity and cannot get it to work. I have added the keyboard prefab as a sibling of the main camera and added two input fields with the OnPointerClick handler added as instructed. However, I get a NullReferenceException, which I assume is because the Daydream keyboard's delegate field is blank. The example scene in the SDK shows the Daydream delegate example prefab, but I am unsure how to implement this for two input fields. Also, does the keyboard render in the Unity Editor, or must it be built and run on a phone?
This is an old question and has probably already been answered, but I figured I'd post my answer anyway.
For those reading, if you haven't checked out the Keyboard Demo scene that can be found within the Demos folder of the Google VR Unity package, I would highly recommend doing so. Following this object hierarchy has worked for me in the past.
To answer your first question, it seems that they have included a KeyboardDelegateExample object within the scene's hierarchy, and then used this object as the Keyboard Delegate in the GVRKeyboardManager.
They manage to fake an Input Field by creating a background and overlaying a Text object on top. If this method does not suffice and using an Input Field is crucial in your particular case, then drop your Input Fields into two separate GVRKeyboardCanvas objects.
Clicking on either canvas will activate the GVR keyboard. You may have to add a small script, like the sketch below, to manage switching between the input fields.
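As a starting point, a small router script along these lines could track which field was tapped last and receive text from your keyboard delegate (the GVR delegate wiring itself is assumed; all names here are mine):

using UnityEngine;
using UnityEngine.UI;

public class KeyboardInputRouter : MonoBehaviour
{
    public InputField firstField;
    public InputField secondField;

    InputField activeField;

    // Hook these up to the OnPointerClick handlers of the two fields
    // (or to the GVRKeyboardCanvas objects wrapping them).
    public void ActivateFirst()  { activeField = firstField; }
    public void ActivateSecond() { activeField = secondField; }

    // Call this from your keyboard delegate whenever the edited text changes.
    public void OnKeyboardTextUpdated(string text)
    {
        if (activeField != null)
            activeField.text = text;
    }
}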
Lastly, no, the GVR keyboard does not render in the Unity Editor; it only appears while running a build on a device. Hopefully this will be addressed in later releases. There are also keyboard plugins on the Asset Store that you may find useful.
I have a very common problem, but I couldn't find any working solution.
I want to create a Button that can contain more than a simple Label or Image. The Xamarin.Forms Button exposes only Text and Image properties, but in my case I want to host a more flexible set of controls (e.g. a StackPanel with a list of controls).
I implemented a ContentView acting as a Button by adding a TapGestureRecognizer, and it works from a purely functional point of view. What I don't like is that all of a Button's visual states are missing.
Therefore, I was thinking about how to implement a custom renderer for a Button. I would like to expose a ContentPresenter BindableProperty and then set that property as the Button.Content (speaking in UWP terms) in the renderer class. I think this could be a solution; the problem is that I don't know how to "cast" a Xamarin ContentPresenter to a UWP ContentPresenter. Do you have any idea how to implement a Button that can contain any generic content?
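Not a full answer, but a sketch of one way the UWP side could look: rather than casting the Forms ContentPresenter to the UWP one, render the Forms view with its own renderer and hand the resulting native element to the native button's Content. ContentButton and its ExtraContent property are hypothetical names for your Button subclass and its bindable content view:

using Xamarin.Forms;
using Xamarin.Forms.Platform.UWP;

[assembly: ExportRenderer(typeof(MyApp.ContentButton), typeof(MyApp.UWP.ContentButtonRenderer))]

namespace MyApp.UWP
{
    public class ContentButtonRenderer : ButtonRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Button> e)
        {
            base.OnElementChanged(e);

            var button = e.NewElement as MyApp.ContentButton;
            if (Control != null && button?.ExtraContent != null)
            {
                // Create a platform renderer for the Forms view and host
                // its native element inside the UWP Button.
                var renderer = Platform.CreateRenderer(button.ExtraContent);
                Control.Content = renderer.ContainerElement;
            }
        }
    }
}

You would likely still need to measure and lay out the hosted view yourself; this only shows the Content handoff.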
Greetings! I have a Flex 4.5 Mobile project rolling, and I've hit a pretty crazy snag. I'm using a StageWebView object to render web pages, embedded within the rest of my Spark layouts. I'm trying to add a gesture event to the component that contains the StageWebView, but since the StageWebView object doesn't belong to the Flex stack (it inherits from EventDispatcher, not UIComponent), all of my events seem to be getting eaten. Any mouse-based event (click, gesture, etc.) doesn't seem to register, and I'm not sure how to get around it. The gesture events work if I use the area where the browser is not rendered. How can I get the gesture event from the outer SkinnableContainer?
StageWebView Reference:
http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/flash/media/StageWebView.html
UIComponent Wrapped StageWebView:
http://soenkerohde.com/2010/11/air-mobile-stagewebview-uicomponent/
Thanks!
I guess you might have to wire up the gesture events yourself. Doing some quick digging in UIComponent.as, I found these:
[Event(name="touchInteractionStarting", type="mx.events.TouchInteractionEvent")]
[Event(name="touchInteractionStart", type="mx.events.TouchInteractionEvent")]
[Event(name="touchInteractionEnd", type="mx.events.TouchInteractionEvent")]
It's not a bug. From what I understand, any mouse interaction over a StageWebView is an interaction with the HTML currently loaded inside it. You should capture the events there and relay them back to the SWF.
Surely there are some jQuery plugins or similar gesture libraries that can help achieve that.
It's a bit of a bummer that you can't overlay stuff on top of them, though.
I am new to the Flex framework.
I have created an application using Flex framework 4.1 that has various components shown to the end user as popup windows using <mx:TitleWindow>.
A TitleWindow is closed either by clicking the close button (displayed in its title bar) or by pressing the Escape key on the keyboard.
I coded this functionality so that the current TitleWindow closes whenever the Escape key is pressed.
Here is what I did. On the keyDown event of the TitleWindow I called this function:
// Compare keyCode rather than charCode: Keyboard.ESCAPE is a key code.
private function detectescapekeypress(event:KeyboardEvent):void
{
    if (event.keyCode == Keyboard.ESCAPE)
        PopUpManager.removePopUp(this);
}
But this function does not work when I define it in the main home screen of my application and call it via parentApplication.detectescapekeypress(event) on the keyDown event of each TitleWindow.
So I had to repeat this code for every TitleWindow used in the project.
How can I write this functionality only once and reuse it across the various TitleWindows and other components, so that the same code is not repeated everywhere?
Note: Every TitleWindow that I am using has different code, scripts and layout in it.
Thanks
Why don't you just extend the TitleWindow component and add that functionality to your new custom component? Then use it everywhere instead of the original TitleWindow.
I assume you're using at least SDK 4.1.
Create a new MXML file called, for example, CustomTitleWindow.mxml, and paste the following:
http://www.copypastecode.com/68211/
Then change all your title windows to CustomTitleWindow.
P.S. Note that in order for the key event to be dispatched, the component must have focus.
Cheers!