Android - different App Info activities when using intents (Android 13)

When I open the App Info activity manually, there are 3 dots in the upper-right corner:
which opens the Restricted Settings menu (Android 13):
But opening the activity with the following code:
Intent intent = new Intent();
intent.setAction(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
Uri uri = Uri.fromParts("package", sApp.getApplicationContext().getPackageName(), null);
intent.setData(uri);
startActivity(intent);
doesn't include the side menu.

Related

watchOS Background URLSession handle(_:) not called when app is not closed (lowered wrist)

I'm trying to create a watch app where I download data (around 30 MB).
To do that, I create a URLSession with a background configuration like so:
let config = URLSessionConfiguration.background(withIdentifier: "<some-id>")
self.session = URLSession(configuration: config, delegate: self, delegateQueue: .main)
and then start the download:
let request = URLRequest(url: URL(string: "<some-url>")!) // GET is the default method
self.task = session.downloadTask(with: request)
self.task.resume()
Since the user isn't going to stare at the watch for 5 minutes, I want to notify them with some haptic feedback when the download finishes.
For that I wanted to use the handle(_:) function of the ExtensionDelegate. According to the documentation, this should be called with a WKURLSessionRefreshBackgroundTask when the download finishes:
The system creates a background URLSession task when any of the following events occur:
Authentication is required to complete a background transfer.
All background transfers associated with a session identifier have completed (either successfully or unsuccessfully).
https://developer.apple.com/documentation/watchkit/wkurlsessionrefreshbackgroundtask
This handle(_:) method is only called if I close my app (pressing the crown), though, not when I lower my wrist to lock the screen.
I also noticed that if I look at the watch again, the urlSession(.. didWriteData ..) callbacks stop firing.
Am I missing something, or is this expected behaviour? I'm testing on a real Apple Watch Series 4 with watchOS 5 installed.
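For reference, here is a minimal sketch of how handle(_:) is usually wired up so that the pending delegate callbacks get delivered once the system wakes the extension. The BackgroundDownloadDelegate singleton is an assumed name used only for illustration, and completing the task immediately is a simplification; a production app would keep the WKURLSessionRefreshBackgroundTask around and complete it after all session events have been delivered.
import WatchKit

// Minimal session delegate used only for this sketch (the class name is assumed).
final class BackgroundDownloadDelegate: NSObject, URLSessionDownloadDelegate {
    static let shared = BackgroundDownloadDelegate()

    func urlSession(_ session: URLSession,
                    downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move/process the downloaded file here, then give haptic feedback.
        WKInterfaceDevice.current().play(.success)
    }
}

class ExtensionDelegate: NSObject, WKExtensionDelegate {
    // Keep the rejoined session alive while its callbacks are delivered.
    var backgroundSession: URLSession?

    func handle(_ backgroundTasks: Set<WKRefreshBackgroundTask>) {
        for task in backgroundTasks {
            if let sessionTask = task as? WKURLSessionRefreshBackgroundTask {
                // Rejoin the background session by its identifier so the pending
                // delegate callbacks (didWriteData, didFinishDownloadingTo, ...) arrive.
                let config = URLSessionConfiguration.background(
                    withIdentifier: sessionTask.sessionIdentifier)
                backgroundSession = URLSession(configuration: config,
                                               delegate: BackgroundDownloadDelegate.shared,
                                               delegateQueue: .main)
                // Simplification: complete right away; a real app would hold on to
                // sessionTask and complete it after the session finishes its events.
                sessionTask.setTaskCompletedWithSnapshot(false)
            } else {
                task.setTaskCompletedWithSnapshot(false)
            }
        }
    }
}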

How to find which TVs are currently using my app?

I am developing an app for Android TV. Is it possible to know which TVs my app is running on, or what actions are performed in it?
Thank you.
To review your app's supported devices:
1. Sign in to your Play Console.
2. Select an app.
3. On the left menu, click Release management > Device catalog. If you haven't already, review and accept the Terms of Service.
4. Select the All, Supported, or Excluded tab. If you want to download a list of devices as a CSV file, click Download device list near the right side of the page.
For more info: https://support.google.com/googleplay/android-developer/answer/7353455
To track actions performed in your app, you can use Fabric's Answers plugin.
Here is sample code you can add to track events in Answers:
public void onKeyMetric() {
    // TODO: Use your own string attributes to track common values over time
    // TODO: Use your own number attributes to track median value over time
    Answers.getInstance().logCustom(new CustomEvent("Video Played")
            .putCustomAttribute("Category", "Comedy")
            .putCustomAttribute("Length", 350));
}
For more info:
https://fabric.io/kits/android/answers

"Iteration ID" for a CustomVision Project (for use in MSFlow action)?

I'm building an MSFlow which sends a picture from a SharePoint picture library to a just-trained CustomVision classifier, which then sends back a label (e.g. "Green", "Red", etc.).
Challenges:
My MSFlow "CustomVision" action is failing, stating "there's no default iteration for this project. please provide an Iteration ID".
Nothing on the CustomVision project's settings page displays this IterationID!
How / where do I find this iteration ID (it appears to be a GUID)?
Turns out the IterationID can be found as follows:
Browse to your custom vision projects page URL
(e.g. https://www.customvision.ai/projects)
=> browser will display a set of "tiles" - one for each of your existing projects;
Navigate (click) on your particular project for which you seek the IterationID;
=> browser will redirect to the "manage" page (note: defaults to Training Images page) for your project;
It will look something like this:
https://www.customvision.ai/projects/<project GUID here>#/manage
Navigate (click) on the Performance tab of this project
=> browser will direct to the "performance" page, something like this:
https://www.customvision.ai/projects/<project GUID here>#/performance
Note: all of the "iterations" (i.e. training iterations) are listed as tabs along the left side
Select the (training) iteration you wish to use as the "web service" for actually classifying incoming images;
=> browser will display details/metrics for that (training) iteration
Click on the "PredictionURL" tab in the upper left region of the page
=> a pop-up window will display all the settings-related data you'll need to consume the underlying web service ("API") wrapped around this classifier!
In particular, you'll see 2 different URLs:
For ImageURL-as-input:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/<projectGUIDhere>/url?iterationId=g9fc4e82-3f95-4ec1-acf2-9b12bba2b409
For ImageFILE-as-input:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/<projectGUIDhere>/image?iterationId=g9fc4e82-3f95-4ec1-acf2-9b12bba2b409
No matter which URL you inspect, you'll see the same value for IterationID - and there you have it!
Copy & paste this IterationID GUID into your MSFlow CustomVision Action, and it should work!
In the Custom Vision portal home, select the project you are using, then select the Performance tab. On the left side of the page you will see the iterations. Select the iteration you want and select Prediction URL. This opens a dialog that gives the URLs for image URL and image file input. In those URLs the iteration ID is passed as a parameter; copy the ID and use it in your application.
If you mark an iteration as the default, the iteration ID is not required in the prediction URL.
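For completeness, here is a rough sketch of calling that prediction URL directly with the iterationId query parameter, outside of Flow. The GUIDs and prediction key are placeholders you substitute yourself, and the Prediction-Key header plus the {"Url": ...} JSON body reflect the v2.0 prediction API shown in the URLs above, so treat this as an illustration rather than a definitive client.
import Foundation

// Placeholders - substitute your own project GUID, iteration GUID and prediction key.
let endpoint = "https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/<projectGUIDhere>/url"
let iterationId = "<iterationGUIDhere>"
let predictionKey = "<predictionKeyHere>"

var components = URLComponents(string: endpoint)
components?.queryItems = [URLQueryItem(name: "iterationId", value: iterationId)]

if let url = components?.url {
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue(predictionKey, forHTTPHeaderField: "Prediction-Key")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    // The .../url endpoint takes a JSON body pointing at the image to classify.
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["Url": "https://example.com/picture.jpg"])

    URLSession.shared.dataTask(with: request) { data, _, error in
        if let data = data, error == nil {
            // The response JSON contains the predicted tags ("Green", "Red", ...).
            print(String(data: data, encoding: .utf8) ?? "")
        }
    }.resume()
}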

Display direction route on WKInterfaceMap in WatchKit

In one of my WatchKit applications I need to show the route between two locations on a WKInterfaceMap. I searched different links, including the Apple Developer documentation, but I didn't find any way to display a route on the WKInterfaceMap.
So how does the Uber WatchKit app display the route on the WKInterfaceMap? Is it the map itself, or a WKInterfaceImage/UIImage?
Below is a screenshot of the Uber app.
My guess is that they are finding the route in the iOS app and then passing it on to the watch in the form of a WKInterfaceImage.
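If that is the approach, a rough sketch of the iPhone-side work might look like the following; this is my own illustration, not Uber's actual code. MKMapSnapshotter produces the base map image, the route polyline is drawn on top manually, and the result is sent to the watch over WatchConnectivity to be displayed in a WKInterfaceImage.
import MapKit
import UIKit
import WatchConnectivity

// Render an MKRoute into an image on the iPhone and send it to the watch.
func sendRouteImage(for route: MKRoute) {
    let options = MKMapSnapshotter.Options()
    options.region = MKCoordinateRegion(route.polyline.boundingMapRect)
    options.size = CGSize(width: 312, height: 390) // roughly watch-screen sized

    MKMapSnapshotter(options: options).start { snapshot, error in
        guard let snapshot = snapshot, error == nil else { return }

        // Draw the route polyline on top of the snapshot image.
        let image = UIGraphicsImageRenderer(size: options.size).image { _ in
            snapshot.image.draw(at: .zero)
            let path = UIBezierPath()
            let points = route.polyline.points()
            for i in 0..<route.polyline.pointCount {
                let point = snapshot.point(for: points[i].coordinate)
                if i == 0 { path.move(to: point) } else { path.addLine(to: point) }
            }
            UIColor.blue.setStroke()
            path.lineWidth = 4
            path.stroke()
        }

        // Ship the rendered image to the watch app, where it can be set on a WKInterfaceImage.
        if let data = image.pngData(), WCSession.default.isReachable {
            WCSession.default.sendMessageData(data, replyHandler: nil, errorHandler: nil)
        }
    }
}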
WKInterfaceMap is a non-interactive map. If you just want to give the user driving directions, use the code below, where you can specify the destination latitude and longitude and open the Maps application on the Apple Watch. It will launch the Maps application on the iPhone, and from there watchOS will receive the routing information.
let coordinate = CLLocationCoordinate2DMake(<DESTINATION LATITUDE>, <DESTINATION LONGITUDE>)
let mapItem = MKMapItem(placemark: MKPlacemark(coordinate: coordinate, addressDictionary: nil))
mapItem.name = <DESTINATION NAME>
mapItem.openInMaps(launchOptions: [MKLaunchOptionsDirectionsModeKey: MKLaunchOptionsDirectionsModeDriving])

How to get the desired format for a Windows Phone 8 push notification sent from ASP.NET

I followed this tutorial:
https://msdn.microsoft.com/en-us/library/windows/apps/hh202967(v=vs.105).aspx
It works, but the toast that appears on the phone screen contains all of this:
Received Toast 4:05 PM:
wp:Text1: Please
wp:Text2: Help!
wp.Param: /Page2.xaml?
NavigatedFrom=Toast Notification
I would like the toast to contain only Text1 and Text2. In this instance I only want "Please Help!" to appear. I've looked at everything on MSDN and everywhere else on Google and there is nothing on it.
You should see what you want if the demo app isn't running when the toast arrives: an alert with "Please Help!" will show at the top of the screen.
If the app is running then the app's ShellToastNotificationReceived event fires instead of the toast appearing on the phone. This lets the app decide what to show. The demo code parses the received data and explicitly adds each key and value to a string and shows it in a MessageBox. This is purely for demonstration. A real app would never do that.
Typically a real app would find the interesting information and display it in-line rather than in a MessageBox, but the details will depend on the app.
If you want to display the contents of wp:Text1 and wp:Text2 in a TextBlock, you can build the string something like this:
StringBuilder message = new StringBuilder();
message.AppendFormat("{0} {1}", e.Collection["wp:Text1"], e.Collection["wp:Text2"]);
MyTextBlock.Text = message.ToString();
In production you'd probably want to verify that wp:Text1 and wp:Text2 existed, etc.
