I am about to develop my first custom skill for Alexa. I do not have an Echo device.
What I did was create and test a basic skill with the Amazon developer console (Alexa Skill + Lambda).
Now I have some general (noob) questions:
1) Is this really the way you have to develop and test your custom skills? It is not the real user experience that gets tested: you have to enter text and analyze the JSON requests/responses. So no realistic end-to-end testing is possible?
2) What happens when you finish the development phase in the Amazon developer console? I'm currently at the Testing step, but I can see that the next steps are about publishing information (images, texts, etc.), and there is a "Submit for Certification" button. So it seems that my custom skill gets published on some kind of marketplace to other Alexa users? Is this correct? Is there a way to use this skill just for my personal use, like an APK file for an Android app?
3) I'm developing a custom skill that needs some kind of user authorization. I see there is a long article about it, and it seems that some action is needed in the Alexa app on the smartphone. My question here is: how do I test this without having a real device? Is it actually possible?
I'd suggest: first test locally, then use the test console, and finally use https://echosim.io, which will provide you a very close test bed to what you get when interacting with the Echo (more precisely, the Echo Tap: you have to tap the button for it to listen).
If you just want the skill for yourself, forget about anything past the testing step. That extra information is only for the "store", as you guessed.
If you only need to identify individual users, then you DO NOT need to use the user authentication stuff. There is a unique user identifier provided in every request. If you want to authenticate users with a third-party OAuth-like scheme, then read that document.
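To make this concrete, here is a minimal sketch of reading that identifier in a Python Lambda handler (the handler body and reply text are my own illustration; the request/response shapes are the standard Alexa JSON envelopes):

```python
# Minimal sketch: read the per-user identifier from an incoming
# Alexa request inside an AWS Lambda handler.
def lambda_handler(event, context):
    # Every Alexa request carries a stable, skill-scoped user ID.
    user_id = event["session"]["user"]["userId"]

    # Key any per-user state (e.g., a DynamoDB item) off user_id;
    # no account linking is needed for this.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Welcome back!"},
            "shouldEndSession": True,
        },
    }
```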
There's a pretty useful series by Big Nerd Ranch about developing Alexa skills locally using Node.js: https://www.bignerdranch.com/blog/developing-alexa-skills-locally-with-nodejs-setting-up-your-local-environment/. They use alexa-app, mocha, chai, and alexa-app-server.
I am writing an application that should interact with DocuSign to create envelopes and then download the signed document when all the signatories have signed.
There are several other use cases, but that does not matter for this question.
I am wondering what is the best way to write automated integration tests.
Do I need to automate the interaction of the signatories with DocuSign? This would mean that I have to receive the email, click the link, etc...
Even if that seems possible, it does not seem ideal. Is there a way to "simulate" the actions of the signatories in a dev environment?
There has been a lot of talk about whether a document can be signed without viewing it, and the conclusion was that NO, a user cannot sign a document without viewing it. The user has to see what is to be signed. So that part needs to be automated using Selenium, one of its "flavors", or pretty much any UI automation you are comfortable with. And yes, that involves receiving the email, clicking the link, opening the document and signing it. You can use Mailinator or any other email service whose API you can leverage to facilitate things for you.
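As a rough illustration of the UI-automation part (not a drop-in script: DocuSign's signing-page markup is not stable, so every locator below is a placeholder, and the signing URL is assumed to have been extracted from the notification email already):

```python
# Hypothetical Selenium sketch of the recipient-side signing ceremony.
# All element locators are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def sign_envelope(signing_url: str) -> None:
    driver = webdriver.Chrome()
    try:
        driver.get(signing_url)
        wait = WebDriverWait(driver, 30)
        # Accept the electronic-signing disclosure, view the document,
        # click the signature tab, then finish. IDs are placeholders.
        wait.until(EC.element_to_be_clickable((By.ID, "disclosure-accept"))).click()
        wait.until(EC.element_to_be_clickable((By.ID, "btn-continue"))).click()
        wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".signature-tab"))).click()
        wait.until(EC.element_to_be_clickable((By.ID, "btn-finish"))).click()
    finally:
        driver.quit()
```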
As for the other parts of the DocuSign integration automation, it is encouraged to use the API (it makes things more stable).
So a very simple workflow would look like this (a sketch of the API-driven steps follows the list):
Use the API to prepare the environment and set variables and values (in your product and in DocuSign)
Send the envelope for signing using the DocuSign API
Get the link to the document
Sign using UI automation
Verify the envelope status (and more) using the DocuSign API
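Here is a hedged Python sketch of the API-driven steps (sending the envelope and verifying its status), calling the eSignature REST API v2.1 directly with requests; the base URI, account ID, and access token are placeholders you would obtain through DocuSign's OAuth flow:

```python
# Sketch of envelope send + status check against the DocuSign
# eSignature REST API v2.1. ACCOUNT_ID and the token are placeholders.
import base64
import requests

BASE_URI = "https://demo.docusign.net/restapi/v2.1"      # sandbox host
ACCOUNT_ID = "your-account-id"                           # placeholder
HEADERS = {"Authorization": "Bearer your-access-token"}  # placeholder

def send_envelope(pdf_path: str, signer_email: str, signer_name: str) -> str:
    """Send a one-document envelope for signing; returns the envelope ID."""
    with open(pdf_path, "rb") as f:
        doc_b64 = base64.b64encode(f.read()).decode("ascii")
    envelope = {
        "emailSubject": "Please sign this document",
        "documents": [{
            "documentBase64": doc_b64,
            "name": "Agreement",
            "fileExtension": "pdf",
            "documentId": "1",
        }],
        "recipients": {"signers": [{
            "email": signer_email,
            "name": signer_name,
            "recipientId": "1",
            "routingOrder": "1",
        }]},
        "status": "sent",  # "sent" emails the recipient immediately
    }
    r = requests.post(f"{BASE_URI}/accounts/{ACCOUNT_ID}/envelopes",
                      json=envelope, headers=HEADERS)
    r.raise_for_status()
    return r.json()["envelopeId"]

def envelope_status(envelope_id: str) -> str:
    """Check the envelope after UI signing: 'sent', 'delivered', 'completed'."""
    r = requests.get(f"{BASE_URI}/accounts/{ACCOUNT_ID}/envelopes/{envelope_id}",
                     headers=HEADERS)
    r.raise_for_status()
    return r.json()["status"]
```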
I would like to develop a WordPress plugin that will allow users to voice-interact with a WordPress website. I want it to be based on an Alexa Skill.
What would be the architecture for this task?
If you think your use case is relatively standard, you can take a look at VoiceWP, which was built to allow for management of an Alexa skill mostly from within WordPress.
If you need something more custom, you can use the WordPress REST API to provide Alexa with the data you need. With this architecture, your plugin on the WordPress side would just be setting up and managing all the REST API endpoints.
From the top down: the user speaks to Alexa, the Alexa Skill hands the request to an AWS Lambda function, and the Lambda function calls the WordPress REST API for its data. This leaves you with 3 pieces to build:
Set up the Alexa Skill
First, you have to set up the skill with the Alexa Skills Kit. This involves setting up things like the name of your skill, the icon, and most importantly, where the skill should look to get its functionality. In our example, we'll point the skill to an AWS Lambda function.
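For illustration, the skill's voice interface is defined as a JSON interaction model in the developer console. A hedged minimal example (the invocation name and LatestPostIntent are hypothetical and would match whatever your Lambda function expects):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "my word press site",
      "intents": [
        {
          "name": "LatestPostIntent",
          "samples": [
            "what is the latest post",
            "read me the newest article"
          ],
          "slots": []
        }
      ]
    }
  }
}
```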
Set up the Lambda function to fulfill the Alexa input
Once the Skill knows to look to the Lambda function for its functionality, we actually need to code the Lambda function. This can be done in Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core) or Go. What the Lambda function needs to do is parse the JSON that comes from Alexa and determine which endpoint to call or which parameters to pass to that endpoint. For an example of this in Python, you can check out my example on GitHub.
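As a hedged sketch of that routing idea (not the GitHub example itself: the intent name, site URL, and endpoint mapping are all assumptions, and only the core posts endpoint of a stock WordPress site is used):

```python
# Sketch: route an Alexa intent to a WordPress REST endpoint and wrap
# the result in the standard Alexa response envelope.
import json
from urllib.request import urlopen

SITE = "https://example.com"  # placeholder WordPress site

# Map incoming intent names to REST routes; extend as the skill grows.
INTENT_ROUTES = {
    "LatestPostIntent": "/wp-json/wp/v2/posts?per_page=1",
}

def lambda_handler(event, context):
    # Assumes an IntentRequest; a full handler would also cover
    # LaunchRequest and SessionEndedRequest.
    intent = event["request"]["intent"]["name"]
    route = INTENT_ROUTES.get(intent)
    if route is None:
        return speak("Sorry, I don't know how to do that yet.")
    with urlopen(SITE + route) as resp:
        posts = json.load(resp)
    return speak(f"The latest post is {posts[0]['title']['rendered']}.")

def speak(text):
    # Standard Alexa JSON response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```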
Set up WordPress endpoints to provide data
Once you have the Lambda function parsing the user's intent and pushing the request to the specific endpoints, you need to write the code from within WordPress to make sure all the endpoints you need are available. This is the part that I'm able to give the least input on because the specific endpoints that you will need are based on your use case, which I don't really know at this point. But for an example of how we created a settings field and returned that value through a custom REST API endpoint, you can see this example on GitHub.
Wrapping up and Extending it Further
So once the data is returned from WordPress, formatted by the Lambda function and returned to Alexa, the user will hear the results of their query.
This can be customized and further functionality added by adding more endpoints to WordPress and more routing to the Lambda function based on new Alexa voice inputs.
Further Reading/Watching
If you're interested in learning more, I've given a couple talks about this:
WP REST API as the Foundation of the Open Web - voice stuff starts at 11:06
Voice Is The New Keyboard: Voice Interfaces In 2018 And Beyond - This uses Google Home for the custom skill, but the ideas presented here are the same.
I am trying to create a skill that will reach out to an application that uses Basic authentication for its APIs (albeit I know this is bad practice). I wanted to go down a route similar to account linking, however it seems they enforce the usage of OAuth 2.0.
Is there an alternative to this, or am I forced to use OAuth 2.0 in order to make API requests to a 3rd-party application?
My wanted workflow:
customer enables skill
Skill card requests a username/password combo
after setup, the skill can be utilized fully
Not sure if it's helpful, but I'm using Lambda to run my skill's source code.
That is a terrible practice.
First of all, what if your user's password includes case-sensitive letters, numbers, and possibly other characters?
You can use literal slots, but they are not case-sensitive and probably won't return a number-word combination either. For example, if your user's password is Word123, a literal slot may return word one two three.
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference#literal-slot-type-reference
I am not sure if you can force the user to spell out the password's characters so that you can then try to detect the password... Again, this sounds like a terrible practice.
So, as you mentioned: "Users link their accounts using the Amazon Alexa app. Note that users must use the app. There is no support for establishing the link solely by voice."
I guess you have to do the linking the way Amazon requires:
https://developer.amazon.com/blogs/post/Tx3CX1ETRZZ2NPC/alexa-account-linking-5-steps-to-seamlessly-link-your-alexa-skill-with-login-with-amazon
On their Overview page one of the bullet points under "What it does" is:
Multiple users can collaborate with the same data at the same time
However, there's nothing in the documentation to suggest how this can be done; all the real-time syncing happens only between devices logged in with the same user. Their own Simplenote app, which is built on the platform, does allow multi-user collaboration, but this appears to use a private API that is not available to normal Simperium clients.
Is there something I've missed? Is it a feature that will be added in the future? If so, when?
We haven't released the collaboration feature yet, but if you'd like to test it, please mail us and we'll get in touch with you: jorge.perez@automattic.com
Thank you!
I need a custom multi-user multi-chatroom app to extend an existing Flex app that I have.
I obviously wouldn't like to develop it from scratch, but rather focus only on the customizations and integration.
Are there any products (free or commercial) that provide multi-chatroom functionality from which I could start?
http://www.adobe.com/devnet/flashplatform/services/collaboration.html
Have a look at Union Platform chat tutorial:
http://www.unionplatform.com/?page_id=1216
You can also check BlazeDS chat example:
http://livedocs.adobe.com/blazeds/1/blazeds_devguide/help.html?content=build_apps_3.html
I wrote an AS3 chat application that makes use of Player.io's free server package (20 GB of data transfer, among other small limitations). The app is open source, and you can find the source code on GitHub.
The chat itself only uses one room, since it averages only around 10-15 users at any time and is specialized for helping Flash game developers, meaning it has a code storage area (simple database interaction), developer links, ActionScript help, etc. But it does have some basic features if you want to see how I code them.
The chat itself has a few features you might be interested in checking out even if you don't use the source code, such as:
Support for authentication on the server side
Different types of users (currently overlord admin, admin, mod, developer, regular users)
Editable individual user data (currently saves how long each user has spent on the app)
Server-side silencing and banning of individual users
Support for tags near usernames
Sound settings on message received
Code box for users to share large amounts of text without spamming the chat
Support for multiple rooms (currently uses 1 public room + 1 hidden room for select users)
The server side is written in C#, hosted on playerio.com, and is meant to be an authoritative server (meaning it checks all the client data and makes sure it's valid before doing anything). The server code is also included on GitHub.
If you're interested, you can comment and I will answer any questions.