watchOS AudioRecorder has no input, doesn't ask for permission - watchkit

I am using this code to show the AudioRecorder on the Apple Watch (taken from https://www.raywenderlich.com/345-audio-recording-in-watchos-tutorial)
let outputURL = chatMasterController.newOutputURL()
let preset = WKAudioRecorderPreset.narrowBandSpeech
let options: [String: Any] =
    [WKAudioRecorderControllerOptionsMaximumDurationKey: 30]
presentAudioRecorderController(
    withOutputURL: outputURL,
    preset: preset,
    options: options) { [weak self] (didSave: Bool, error: Error?) in
        guard didSave else { return }
        print("finished audio to \(chatID) at \(outputURL)")
        print(outputURL)
}
The recorder pops up, but it doesn't seem to take any input. The waveforms don't rise while speaking, and trying to play the recording afterwards leaves me with 0.2 seconds of silence no matter how long the recording is.
I've tried another app that makes use of the microphone, and that app did ask me for permission to record audio. I feared I had dismissed the permission prompt before, so I reinstalled my app, which however didn't change anything: no permission being asked for, no input being recorded.
Is there something I've missed, e.g. importing a library?

I've now figured it out. You don't just need the Privacy - Microphone Usage Description string in your Watch app's plist - you also need to set it in the iPhone app's plist.
Setting it only on the Watch does nothing, and setting it only on the iPhone doesn't let you grant the permission on the Watch directly. So you need it in both.
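For reference, this is the raw key (shown as Privacy - Microphone Usage Description in Xcode) that has to appear in both Info.plist files; the description string below is just an example:
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record voice messages.</string>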
No idea why this isn't documented anywhere, but it fits Apple's "we are going downhill" movement :)


Custom FCM notification sound for unity android

In my Unity project I want to play a custom sound when I get a Firebase Cloud Messaging notification, not the system default sound.
After following other answers, my message looks like this:
{
  "to": "some_key",
  "notification": {
    "title": "Title",
    "android_channel_id": "2",
    "body": "Body",
    "sound": "custom_sound.wav"
  }
}
and I placed custom_sound.wav in Assets/Plugins/Android/res/raw. When I unzip my .apk, I can see that my sound file is in the right location.
But it keeps playing the system default sound, even after I remove the sound field. Is there anything else I should check?
First: a quick tip for debugging. If you select "Export Project", you can open the generated Gradle project with Android Studio.
Occasionally you have to update the Gradle wrapper, but it helps a ton when debugging things like "is my sound file in res/raw?" without having to decompress your APK and poke around.
I think the issue you're running into is that sounds are now associated with NotificationChannels (as of Android O) rather than with individual notifications, as noted by this StackOverflow post describing a similar issue - and this isn't exposed via the Unity SDK.
Fortunately, you can add a channel with Unity.Notifications.Android.
It should be as simple as creating a new
AndroidNotificationChannel(string id, string name, string description, Importance importance)
with your id set to "2" (to match your sample notification above; since this is a string, I would recommend giving it a better name :D).
Then you can call RegisterNotificationChannel with the channel you created as its parameter.
For example, to get your notification above to work, I believe you can write:
var notificationChannel = new AndroidNotificationChannel("2", "Channel 2 (working title)", "This is the 2nd channel", Importance.Default);
AndroidNotificationCenter.RegisterNotificationChannel(notificationChannel);
Let me know if this helps!
--Patrick

A-Frame Daydream control?

Just started playing with A-Frame, and I can see vive-controls and oculus-touch-controls, but nothing for Google Daydream.
I've looked at the component repo and don't see anything that looks like it'll do the job. The closest thing to investigate next would be the Gamepad API, but I'm amazed I can't find anything.
I've got a Pixel XL and Daydream and would like to incorporate the controller rather than just head tracking and gaze-based control. Can someone point me in the right direction please?
Thanks
UPDATE - I've got the Daydream controller working for clicks! The 360-image-gallery example (https://aframe.io/examples/showcase/360-image-gallery/) accepts clicks from the Daydream controller. I guess it had timed out on my previous attempts, or I hadn't paired it properly! I'll keep playing!
Working on setting up a Daydream remote in an A-Frame project. There are no components for the Daydream remote yet, but I'm hoping to complete one soon – and it sounds like they are going to mainline support in an upcoming A-Frame release.
But you can hand-roll support, no problem.
First, there are a few things you'll need to do in preparation:
Download Chrome Beta 56 on your Pixel: https://www.google.com/chrome/browser/beta.html
Open Chrome Beta, navigate to chrome://flags and enable the WebVR and Gamepad flags.
Now you will be able to launch experiences built with A-Frame v0.4 or higher in true WebVR. You'll be prompted with the usual Daydream screens (place your phone in the headset, connect the remote). If you are connecting to a local development environment, you'll see a secure connection warning, but this, while annoying, won't stop you from working.
Second, now that you are running true WebVR, you need to leverage the Gamepad API to get information from your Daydream remote. Let's start by just logging that it is connected.
window.addEventListener('gamepadconnected', function (evt) {
  console.log("Gamepad connected at index %d: %s. %d buttons, %d axes.",
    evt.gamepad.index, evt.gamepad.id,
    evt.gamepad.buttons.length, evt.gamepad.axes.length);
});
Third, now that you are logging a connection, you will need to setup an update loop to get the current state of the Gamepad. You can do this with requestAnimationFrame. Follow the tutorial here: https://developer.mozilla.org/en-US/docs/Web/API/Gamepad_API/Using_the_Gamepad_API
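For example, a minimal polling sketch (assuming the remote is the gamepad you saw in the connection event above) could look like this:
var remoteIndex = null;

window.addEventListener('gamepadconnected', function (evt) {
  remoteIndex = evt.gamepad.index;
});

function pollRemote() {
  // navigator.getGamepads() returns a fresh snapshot on each call,
  // so re-read the gamepad by index on every frame.
  var pads = navigator.getGamepads();
  var pad = (remoteIndex !== null) ? pads[remoteIndex] : null;
  if (pad && pad.buttons[0] && pad.buttons[0].pressed) {
    console.log('Trackpad pressed at', pad.axes[0], pad.axes[1]);
  }
  window.requestAnimationFrame(pollRemote);
}
window.requestAnimationFrame(pollRemote);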
Once I've published a basic dayframe-remote component, I'll post a link here. Hope this helps get you started!
EDIT: Looks like the suggestion below works great. Just pass "Daydream Controller" as the id for tracked controls: tracked-controls="id: Daydream Controller".
Here is a sample Daydream controller output. At the moment, only the trackpad button appears to be exposed – not the app or home buttons.
{
  axes: [0, 1],
  buttons: [{
    pressed: false,
    touched: false,
    value: 0
  }],
  connected: true,
  displayId: 16,
  hand: "left",
  id: "Daydream Controller",
  index: 0,
  mapping: "",
  pose: {
    angularAcceleration: null,
    angularVelocity: [0, 0, 0],
    hasOrientation: true,
    hasPosition: false,
    linearAcceleration: [0, 0, 0],
    orientation: [0, 0, 0, 1],
    position: null
  },
  timestamp: 1234567890123
}
Something for you to try... the way the current A-Frame 0.4.0 support in tracked-controls should work:
if you specify that it should only match an ID value of the empty string '', then it should match any gamepad with a pose. So you can try something like
<a-entity tracked-controls="id:"></a-entity>
and see if that gets events etc.
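To check whether events arrive, here is a small sketch of a logging component; the event names (buttondown, buttonup, axismove) are what tracked-controls emits as far as I know, so treat them as an assumption:
// Minimal component that logs events emitted by tracked-controls.
AFRAME.registerComponent('log-controller-events', {
  init: function () {
    var el = this.el;
    ['buttondown', 'buttonup', 'axismove'].forEach(function (name) {
      el.addEventListener(name, function (evt) {
        console.log(name, evt.detail);
      });
    });
  }
});
Attach it to the same entity: <a-entity tracked-controls="id:" log-controller-events></a-entity>.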
A-Frame master branch now contains a daydream controller component: https://aframe.io/docs/master/components/daydream-controls.html

Alexa Echo Dot - ASK skill problems

I'm trying to make a simple test custom Alexa Skill, but I'm stuck and I'm not sure what the problem is. Maybe someone more experienced knows what I'm missing?
Invocation Name
home system
Intent Schema
{
  "intents": [
    {
      "intent": "AMAZON.HelpIntent",
      "slots": []
    },
    {
      "intent": "TestIntent",
      "slots": [
        {"name": "test", "type": "AMAZON.NUMBER"}
      ]
    }
  ]
}
Sample Utterances
TestIntent set state {test}
TestIntent add state
I have written my own little Python server on my own self-hosted machine; I already have a working news flash skill on the same system. I have spent plenty of time looking at the documentation and reading tutorials, and it looks like I have done what I'm supposed to do.
The result I get is this:
A LaunchRequest works, both in the Service Simulator and on the Echo. It triggers an HTTP POST with the expected JSON, and I get the expected voice reply.
But the IntentRequest only works from the Service Simulator; it never works on the Echo. I say, for example, "Alexa, home system set state eight": no requests are made to my server, the Echo just makes a sound and that's all.
I have no idea how to debug this. The skill is a US skill and my Echo is in US mode. I have tried setting the endpoint to both Europe and North America, tried different trigger words, different slots, no slots... and I have of course checked under Settings -> History to make sure that the device understood me correctly.
Any idea what to try next? How to debug this?
I found the problem, it was a classic PEBCAK (Problem Exists Between Chair And Keyboard) problem.
I had missed that I have to be much more precise in how I invoke an intent (a single sentence that contains both the trigger word and the intent in one go). Examples of valid and working invocations are:
Alexa, ask home system to set state nine
Alexa, set state twelve using home system
Alexa, tell home system set state one
I realised this when I used the alternative two-step invocation and noticed that it worked. So it had to be the way I invoked the skill, not the backend:
Alexa, open home system
(Alexa responds, and listens for the command)
Set state to eight
(Intent triggered, Alexa responds)
The first request above is the LaunchRequest.
The LaunchRequest must respond with shouldEndSession: false, otherwise the session will end. That maps to question(...) in my code.
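For reference, a minimal sketch of the JSON that question(...) sends back (the output speech text is just an example):
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Welcome to Foo"
    },
    "shouldEndSession": false
  }
}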
There are plenty more ways to trigger skills; for a full list see this page: https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/supported-phrases-to-begin-a-conversation (scroll down to the tables).
Finally, thank you u-gen for the feedback; bst was an interesting project (never tried it), I guess it can be really useful if you use a hosted solution like Lambda. But thanks to the docs I found flask-ask, a project that simplified my code.
Here is the Python part of my test project, if someone else would like to try it out.
#!/usr/bin/env python
from flask import Flask, render_template
from flask_ask import Ask
from flask_ask import statement, question, convert_errors

app = Flask(__name__)
ask = Ask(app, '/ask/')

@app.route('/')
def hello_world():
    return 'Hello, World!'

@ask.launch
def launched():
    return question('Welcome to Foo')

@ask.intent('TestIntent')
def hello():
    return statement('Hello, world')

@ask.session_ended
def session_ended():
    return "", 200

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0", threaded=True)

Meteor: send message to user at hot code push

How can I let the user know when they are getting a hot code push?
At the moment the screen goes blank during the push, and the user will find it rather weird. I want to reassure them that the app is updating.
Is there a hook or something which I can use?
Here's the shortest solution I've found so far that doesn't require external packages:
var ALERT_DELAY = 3000;
var needToShowAlert = true;

Reload._onMigrate(function (retry) {
  if (needToShowAlert) {
    console.log('going to reload in 3 seconds...');
    needToShowAlert = false;
    _.delay(retry, ALERT_DELAY);
    return [false];
  } else {
    return [true];
  }
});
You can just copy that into the client code of your app and change two things:
Replace the console.log with an alert modal or something informing the user that the screen is about to reload.
Replace ALERT_DELAY with the number of milliseconds that you think is appropriate for the user to read the modal from (1).
Other notes
I'd recommend watching this video on Evented Mind, which explains what's going on in a little more detail.
You can also read the comments in the reload source for further enlightenment.
I can imagine more complex reload logic, especially around deciding when to allow a reload. Also see this package for one possible implementation.
You could send something on Meteor.startup() in your client-side code. I personally use Bert to toast messages.
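A minimal sketch of that idea (assuming the themeteorchef:bert package, where Bert.alert(message, type) shows a toast; the message text is just an example):
// Client-side code. Meteor.startup runs on every client load,
// including right after a hot code push has swapped in the new bundle.
Meteor.startup(function () {
  // Note: this also fires on a plain page refresh, so you may want to
  // gate it with a flag (e.g. in Session or localStorage) set from
  // Reload._onMigrate above.
  Bert.alert('App updated to the latest version', 'success');
});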

When listening for keypress in Flash Lite should I be listening for Key.Down or the numeric code for this key?

The Adobe documentation says that when listening for a keypress event from a phone you should listen for Key.DOWN, however when I trace Key.getCode() for keypresses I see a number, not the string "Key.DOWN". I am testing this locally in Device Central and do not have a phone to test with at present. Here is my code -
keyListener = new Object();
keyListener.onKeyDown = function () {
    trace(Key.getCode()); // outputs 40
    switch (Key.getCode()) {
        case Key.DOWN: // according to the docs
            pressDown();
            break;
    }
};
Key.addListener(keyListener); // required so the listener receives key events
My question is - is this simply because I'm testing in Device Central, and when I run it on the phone I will need to listen for Key.DOWN? Or is the documentation wrong? Also, is the numeric code (40) consistent across all devices? What gives, Adobe?
thanks all
Key.DOWN is equal to 40, so the switch will treat them as the same. You can use whichever one you prefer; however, I would recommend Key.DOWN because it is easily recognizable for those who don't have key codes memorized (most of us).
These are the key code values for JavaScript; however, I think they are pretty much universal.
