CarPlay automation with Appium (XCUITest driver): unable to click on element - appium-ios

I am looking for a solution for automating a native iOS application with a CarPlay component. The whole CarPlay context is implemented in Swift and consists of XCUIElements.
When my test device is connected to the head unit and the CarPlay application under test is launched, I can access the native context of the application using the Appium iOS driver. The phone and CarPlay native contexts are mixed together and available for interaction (getAttributes(), getText(), isDisplayed(), etc.). The only problem is clicks.
When I try to click an element, the driver clicks in a different place, and I have not figured out why. I assumed the deviation would be similar on each click, but I was wrong; I could not find any consistent pattern.
Any ideas how to avoid coordinate-based clicking? I am planning to test on different head-unit screen resolutions.
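For illustration, here is the kind of click I am doing today, plus the resolution-independent fallback I have in mind: computing the tap point from the element's rect at runtime instead of hardcoding coordinates. This is only a sketch using the Appium Java client and W3C pointer actions; the "Play" accessibility id is a made-up placeholder.

import java.time.Duration;
import java.util.Collections;

import io.appium.java_client.AppiumBy;
import io.appium.java_client.ios.IOSDriver;
import org.openqa.selenium.Rectangle;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.PointerInput;
import org.openqa.selenium.interactions.Sequence;

public class CarPlayTap {
    // Taps the center of an element using coordinates computed at runtime,
    // so the same test adapts to the current screen resolution.
    static void tapElementCenter(IOSDriver driver, WebElement element) {
        Rectangle rect = element.getRect();
        int centerX = rect.getX() + rect.getWidth() / 2;
        int centerY = rect.getY() + rect.getHeight() / 2;

        PointerInput finger = new PointerInput(PointerInput.Kind.TOUCH, "finger");
        Sequence tap = new Sequence(finger, 1);
        tap.addAction(finger.createPointerMove(Duration.ZERO,
                PointerInput.Origin.viewport(), centerX, centerY));
        tap.addAction(finger.createPointerDown(PointerInput.MouseButton.LEFT.asArg()));
        tap.addAction(finger.createPointerUp(PointerInput.MouseButton.LEFT.asArg()));
        driver.perform(Collections.singletonList(tap));
    }

    static void example(IOSDriver driver) {
        // "Play" is a hypothetical accessibility id, for illustration only.
        WebElement playButton = driver.findElement(AppiumBy.accessibilityId("Play"));
        playButton.click();                   // what I do today; lands in the wrong place
        tapElementCenter(driver, playButton); // resolution-independent fallback
    }
}

Because the tap point is derived from getRect() at runtime, this should land correctly on head units with different resolutions, assuming the rect reported by the driver is itself accurate (which may be exactly what is broken here).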

Related

Will ASP.NET Form Get value from Barcode

ASP.NET form. If running a form in a browser on a small (Android) device with a barcode scanner, will the scanned barcode go into the ASP.NET textbox? Or do I need to add something to the application?
Well, it is going to depend on which of the 150+ barcode scanner apps you decide to grab from Google Play.
The answer is yes or no: it depends on the kind of scanner.
If you download just a scanning application (software based, not a built-in scanner), then as a rule the answer is no.
The reason is that Android (and even iOS) does not allow one application to set focus on, or grab data from, another application, nor the reverse. If that were possible, an app could also grab values while you are running, say, your online banking application.
So Android does not let a scanning app feed input to another application that has focus. Now, if the scanner is factory-supplied hardware on the phone? Then yes, it works like a desktop keyboard "wedge": the receiving program does not know whether you are typing on the keyboard or the input comes from the scanner (hence the name keyboard wedge). These will work with a web form.
However, we are now seeing the rise of software-based keyboard wedges: the software scanner is installed on Android as a custom keyboard. In that case, once again, it will work in a web form.
So, for devices with a built-in scanner? Yes, that will work in all applications. For software-only scanning (using the built-in camera), it is again possible if the software in question works as a keyboard-wedge scanner.
If you are going to adopt Android scanning, use a purpose-built Android scanner.
Another possibility if you want to use a software scanner? Write a small Android application and have it talk to your web site (sketched below). This, I think, is the best solution, but it of course means you have to adopt some Android dev tools.
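To make the "talk to your web site" part concrete, here is a minimal sketch in plain Java (it also runs on Android). It assumes you have already obtained the barcode text from whatever scanner software you choose; the endpoint URL and the form-field name are made-up placeholders for your ASP.NET handler.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class BarcodeUploader {
    // Posts a scanned barcode to the web site as a form field.
    // The URL and the "barcode" parameter name are hypothetical; adapt them
    // to whatever your ASP.NET page expects.
    public static int sendBarcode(String barcode) throws Exception {
        URL url = new URL("https://example.com/scan-handler.aspx"); // hypothetical endpoint
        String body = "barcode=" + URLEncoder.encode(barcode, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        return conn.getResponseCode(); // 200 means the site accepted the scan
    }
}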
So how this works will depend on whether the Android device has a built-in scanner or a software-plus-camera scanner. However, it would seem that even installable software-based scanners can, in theory, be made to work with any application, since the scanner runs and behaves as a user-installed keyboard.
So you have to check the particular device. The answer is not yes in all cases; it depends on whether you are using an Android device with a built-in scanner or looking to use any ordinary Android phone as the scanner.

QTP Writing test on Win32 app ObjectSpy not finding object id

I am experienced writing automation tests for web apps using Selenium.
However I now have to automate a Windows Desktop app which I'm new to.
I'm using QTP 11 (an old version) and I can get QTP to log in, typing the username/password into the desktop app. However, when the app loads, there are icons like on a Windows desktop. I tried using ObjectSpy on the Actions folder icon, but it can't find the object ID and identifies the icon as a WinObject("COMPOSITE").
I also tried the QTP Record feature, but the code it generates uses hardcoded x and y values. I don't want to use x,y values: if the Actions icon moves 3 cm left or right in the future, the test will fail.
e.g.
Window("Loan IQ").WinObject("COMPOSITE").Click 369,33
I need help finding the object ID in a Win32 app. Thanks.
First of all, you should make sure that UFT is configured to test your application. In the Record and Run Settings dialog, make sure that either "any Windows application" is selected or your app is explicitly listed.
If this doesn't improve the situation you can try using image based testing (aka Insight).
Win32 apps can be a nightmare to automate, especially with QTP 11, which is a rather outdated version. If you want stable automation, I propose the following:
Upgrade to a newer version of UFT (14+)
This alone will most probably not help you identify the objects, but newer versions support a number of new technologies that may help you, as described in the following steps.
Use Image Based Recognition
Even if your screen resolution changes, UFT is still able to identify pictures. It does not compare bitmaps by absolute vectors but uses a different technique that I won't describe in detail (long story short, screen resolution changes are okay).
Provide support for your Widgets
Microsoft has two frameworks that provide UI Automation capabilities (initially built for accessibility, but now also used for RPA and GUI testing). UFT supports both of them, MSAA and UIA, so if your company is ready to implement support for your UI widgets via one of these technologies, you are on your way to a smooth test-automation experience. Please note: this is usually a huge investment, so if the tool is internal and not planned for long-term usage, go with image-based recognition. A hedged sketch of what such widget support can look like follows.
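What "providing support" means in practice depends entirely on the toolkit the widgets are built with; for a classic Win32/C++ app it means implementing the MSAA/UIA provider interfaces in that language. Purely as an illustration, and assuming hypothetically that the custom composite controls come from a Java Swing UI (which Windows tools reach through the Java Access Bridge), giving each widget a stable accessible name would look like this:

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class AccessibleNamesDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JButton actions = new JButton(); // imagine an icon-only "Actions" button

            // Expose a stable, human-readable identity through the accessibility
            // layer so automation tools can locate the widget by name instead of
            // by pixel coordinates.
            actions.getAccessibleContext().setAccessibleName("Actions");
            actions.getAccessibleContext().setAccessibleDescription(
                    "Opens the Actions folder");

            frame.add(actions);
            frame.pack();
            frame.setVisible(true);
        });
    }
}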

Possible to cross-platform develop Watch/Wearable applications?

Since I am new to the world of developing apps for watches, and given that the following frameworks exist for smartphones:
Xamarin
PhoneGap
appcelerator
kony
Cordova
...
I wonder whether similar frameworks exist for watch apps, so that you code once but run everywhere.
Thanks
Edit 1:
As of today (12.05.2015), going by the answer of a NativeScript maintainer here, I will go with NativeScript to start writing apps for wearables.
Cordova/PhoneGap apps don't work directly on wearable devices/watches. Cordova/PhoneGap is basically a JavaScript API that runs in a WebKit/WebView on all the mobile OSes, but Android Wear and Apple Watch don't support WebKit, so apps developed with Cordova don't run directly on watch devices. If you want to extend some features of an existing Cordova app to a wearable, you need to create the extension app in the native language, and that extension must be able to communicate with the paired app on the mobile device. The extension on the watch holds only the UI; the business logic etc. runs in the Cordova app on the phone. It is possible to establish communication between these apps, which then drives the display on the watch (see the sketch below).
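To make the communication part concrete on the Android side, here is a hedged sketch using the Google Play services Wearable Data Layer; the message path and payload are made up for illustration. This phone-side class, living next to the Cordova app, pushes a piece of display data to every connected watch node, where the native extension renders it.

import android.content.Context;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class WatchBridge {
    // Hypothetical message path agreed on by the phone app and the watch extension.
    private static final String DISPLAY_PATH = "/cordova/display";

    // Sends a string for the watch UI to display to every currently connected node.
    public static void pushToWatch(Context context, String text) {
        Wearable.getNodeClient(context).getConnectedNodes()
                .addOnSuccessListener((List<Node> nodes) -> {
                    byte[] payload = text.getBytes(StandardCharsets.UTF_8);
                    for (Node node : nodes) {
                        Wearable.getMessageClient(context)
                                .sendMessage(node.getId(), DISPLAY_PATH, payload);
                    }
                });
    }
}

The watch-side extension would register a MessageClient listener for the same path and update its UI from the payload; the reverse direction (watch requesting data from the phone) works the same way with the roles swapped.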
I am not sure how well the other frameworks you listed support wearable devices.
As #kiran and #NRimer have mentioned, these cross-platform frameworks rely on WebKit/WebView, the almost universally supported layer on mobile devices. They don't run directly on the device; the device runs a WebKit platform that in turn runs these cross-platform apps. Comparing the capabilities of a native app with a cross-platform app, the native app can do more, because it has hands-on access to device hardware features. What is particular to smartwatches is that they mostly rely on a paired smartphone and use its hardware-specific communication protocols, which WebKit cannot touch.
It depends on what you're looking to do with the framework. Watch apps build on data provided by their containing app. For example, if you want to provide custom notifications on the watch, the containing app (or a server, for remote notifications) constructs them. When your watch app needs information, it makes a request to the containing app. Say you have a group of apps that should provide the same notifications or functions in each of their watch apps: you could build a framework that handles these functions for the containing apps. As for the watch portion, think of it more as a display of the information provided. Unfortunately, I don't think there's a way to build shared frameworks for watch apps yet. If you're looking to put a lot of code inside the watch app, this might be more difficult, but for simple display of information you should be all right.

Capturing keyboard events in Adobe AIR in the background

I am struggling to find a way to capture keyboard events in Adobe AIR even when the application is in background mode and sitting in the system tray on Windows.
Basically, I want it so that if a user presses a certain combination of keys, the Adobe AIR app detects it and performs a task, all while the AIR desktop application is in the background and does not have focus.
I found an extension to capture native mouse movements, but was not able to find any extension for capturing keyboard events.
Please suggest.
Thanks
As the answers to this question state, you'll need to write your own native extension (or external app, invoked with NativeProcess) to globally capture keyboard events.
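For the external-app route, here is a hedged sketch of a helper that the AIR app could launch with NativeProcess. It assumes the open-source JNativeHook library is on the classpath (package names differ between versions); the helper registers a system-wide keyboard hook and, when the example combination Ctrl+Shift+K fires, writes a line to stdout, which the AIR side can read from the process's standard output.

import com.github.kwhat.jnativehook.GlobalScreen;
import com.github.kwhat.jnativehook.NativeHookException;
import com.github.kwhat.jnativehook.keyboard.NativeKeyEvent;
import com.github.kwhat.jnativehook.keyboard.NativeKeyListener;

public class HotkeyHelper implements NativeKeyListener {

    @Override
    public void nativeKeyPressed(NativeKeyEvent e) {
        // Fire on Ctrl+Shift+K (an arbitrary example combination).
        boolean ctrl = (e.getModifiers() & NativeKeyEvent.CTRL_MASK) != 0;
        boolean shift = (e.getModifiers() & NativeKeyEvent.SHIFT_MASK) != 0;
        if (ctrl && shift && e.getKeyCode() == NativeKeyEvent.VC_K) {
            // The AIR app watches the process's stdout for this marker line.
            System.out.println("HOTKEY");
            System.out.flush();
        }
    }

    public static void main(String[] args) throws NativeHookException {
        GlobalScreen.registerNativeHook();            // system-wide keyboard hook
        GlobalScreen.addNativeKeyListener(new HotkeyHelper());
        // The helper keeps running in the background until the AIR app kills it.
    }
}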

How to port my existing Flex 3.6 application to mobile iOS and Android platforms

I have a Flex web application that uses the Flex 3.6 SDK.
What are all the ways to port this application to iOS and Android devices?
Before the release of Flash Builder 4, I converted my Flex 3.6 project to AIR 2.0 (which required very minimal code change) and used some command-line tools to package it as .ipa and .apk. I successfully deployed it on an iPad, and the application worked as expected.
That is all I remember; I have forgotten the exact procedure I followed, as it was two years ago.
Now in Flash Builder 4 there is an option to create a "Mobile Project", which exports the application for different mobile platforms simply via right-click on the project - Export - Release Build.
But this page says:
"Except for the MX charting controls and the MX Spacer control, mobile applications do not support the MX component set defined in the mx.* packages."
Now I am really confused about which approach to follow.
Can someone please clarify these points:
What are all the ways to port a Flex 3.6 web application to iOS and Android devices?
Do I need to convert my Flex 3.6 project to a Flex 4 project, with all MX components changed to Spark components (a major change in my project), to get mobile platform support?
Are there any other ways to port my existing Flex 3.6 application to mobile with very minimal code change?
(I understand that changes like UI sizing etc. need to be taken care of.)
Thanks.
First of all, I don't think reusing your web application as a mobile app is a very good idea from a user-experience point of view, unless the interface is extremely simple (which wouldn't warrant a full-blown Flex application in the first place):
the screen size is so different that it would probably be unusable, or at least uncomfortable
you have no touch interactions defined
But to answer your questions: the mobile components are a completely different component set. They are more lightweight and optimized for mobile interactions, and to achieve this gain in performance they were based on the Spark architecture. This means that:
There is no way to port your Flex 3 code
Yes, you'll have to convert
I can't think of any other way to port your application; minimal code change is out of the question
Conclusion
Both the fact that it is technically impossible to automatically port the application from web (Flex 3) to mobile, and the fact that, even if it were possible, it wouldn't be recommended because of UX concerns, lead me to this suggestion: rebuild it from the ground up, taking care to write clean, optimized code for mobile and to design a UI that is appropriate for the targeted platform.
