When to use alexa-smapi vs. alexa-avs for automated tests

I need recommendations on when to use each of the connectors mentioned above. Is it possible to use both of them?
Also, botium.json's CONTAINERMODE accepts only one of these, so how do I figure out which one to choose, and when?
Below is my botium.json file:
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "Alexa Conversation Sample",
      "SCRIPTING_FORMAT": "xlsx",
      "SCRIPTING_XLSX_STARTROW": 2,
      "SCRIPTING_XLSX_STARTCOL": 1,
      "CONTAINERMODE": "alexa-avs"
    }
  }
}
I want to know if I can use alexa-smapi or alexa-avs.
For example, with Watson we use something like the following in botium.json:
"CONTAINERMODE": "watson",
"WATSON_USER": "0274cb6f-3680-4cf7-bd6b-71c7f447542d",
"WATSON_PASSWORD": "ZWDE5xo02sby",
"WATSON_WORKSPACE_ID": "97513bc0-c581-4bec-ac9f-ea6a8ec308a9"
In the same sense, what do I have to do to run an Alexa skill?

The Botium Connector for Amazon Alexa Skills API is for testing the Skill at a very low API level. You can test intent resolution (with the Skill Simulation API) and the backend response (with the Skill Invocation API). The recommendation is to start testing at the API level with this connector.
The Botium Connector for Amazon Alexa with AVS simulates a virtual Alexa-enabled device connected to the Alexa services to test at an end-to-end level: it uses speech-to-text and text-to-speech technologies to connect to your Alexa skill like a real user would, and navigates through the conversation with voice commands.
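To answer the configuration part of the question: a minimal botium.json for the SMAPI connector could look like the sketch below. The capability names follow my reading of the botium-connector-alexa-smapi README and the skill ID is a placeholder, so verify the exact names and the authentication setup (an OAuth refresh token or ASK CLI profile) against the current connector documentation:
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "Alexa Conversation Sample",
      "SCRIPTING_FORMAT": "xlsx",
      "SCRIPTING_XLSX_STARTROW": 2,
      "SCRIPTING_XLSX_STARTCOL": 1,
      "CONTAINERMODE": "alexa-smapi",
      "ALEXA_SMAPI_API": "simulation",
      "ALEXA_SMAPI_SKILLID": "your-skill-id-here",
      "ALEXA_SMAPI_LOCALE": "en-US"
    }
  }
}
Switching ALEXA_SMAPI_API to "invocation" would exercise the Skill Invocation API instead, while the alexa-avs connector is configured separately with its own AVS credentials.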

Related

Can I use Apple and Google's Contact Tracing Spec?

I want to use Apple and Google's new APIs to support COVID contact tracing as described in this API document. But when I try to use these APIs in Xcode, the classes are not found:
let request = CTSelfTracingInfoRequest()
How do I enable these APIs?
The APIs for iOS are restricted. While you can write code against the ExposureNotification framework using Xcode 11.5 and iOS 13.5, you can't run the code even in a simulator without Apple granting you a provisioning profile with the com.apple.developer.exposure-notification entitlement. And Apple is only giving that entitlement to developers associated with government health agencies, after a manual approval process.
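For reference, if Apple did grant you the entitlement, it would be declared in your app's .entitlements property list roughly like this (a sketch; the key name comes from the paragraph above, and it has no effect unless Apple approves your provisioning profile):
<key>com.apple.developer.exposure-notification</key>
<true/>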
Below is more information on what you can do without special permission from Apple.
iOS releases prior to 13.5 disallowed transmitting the Exposure Notification Service beacon Bluetooth advertising format described in the specification. Starting with 13.5, advertising is possible only by the operating system -- 3rd party apps cannot emit that advertisement without using the higher-level APIs.
Starting with iOS 13.5, Apple also blocks direct detection of this beacon format by third party apps, forcing them to use higher-level APIs. Earlier versions of iOS do allow detection of this beacon format.
Android, however, is another story.
While Google has similarly restricted use of these APIs in Google Play Services to API keys with special permissions granted by Google, Android versions 5.0+ allow third-party apps to both send and detect the Exposure Notification Service beacon advertisement that the Bluetooth specification envisions:
Using the free and open-source Android Beacon Library 2.17+, you can transmit this beacon like this:
String uuidString = "01020304-0506-0708-090a-0b0c0d0e0f10";
Beacon beacon = new Beacon.Builder()
.setId1(uuidString)
.build();
// This beacon layout is for the Exposure Notification Service Bluetooth Spec
BeaconParser contactDetectionBeaconParser = new BeaconParser()
.setBeaconLayout("s:0-1=fd6f,p:-:-59,i:2-17");
BeaconTransmitter beaconTransmitter = new BeaconTransmitter(getApplicationContext(), contactDetectionBeaconParser);
beaconTransmitter.startAdvertising(beacon);
And scan for it like this:
BeaconManager beaconManager = BeaconManager.getInstanceForApplication(this);
beaconManager.getBeaconParsers().add(new BeaconParser().setBeaconLayout("s:0-1=fd6f,p:-:-59,i:2-17"));
...
beaconManager.startRangingBeaconsInRegion(new Region("All Exposure Notification Service beacons", null));
...
@Override
public void didRangeBeaconsInRegion(Collection<Beacon> beacons, Region region) {
for (Beacon beacon: beacons) {
Log.i(TAG, "I see an Exposure Notification Service beacon with rolling proximity identifier "+beacon.getId1());
}
}
On Android, the above transmission and detection is possible even in the background. See library documentation for details.
The ability to transmit and receive Exposure Notification Service beacons is built into the BeaconScope Android app. You can use this as a tool to help test any apps you build.
You can read more in my blog post which shows you how to build your own app to do this.
As for iOS, while transmission is impossible as of this writing, you can scan for these beacons on iOS 13.4.x and earlier with code like this:
let exposureNotificationServiceUuid = CBUUID(string: "FD6F")
centralManager?.scanForPeripherals(withServices: [exposureNotificationServiceUuid], options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])
...
func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral, advertisementData: [String : Any], rssi RSSI: NSNumber) {
if let advDatas = advertisementData[CBAdvertisementDataServiceDataKey] as? NSDictionary {
if let advData = advDatas.object(forKey: CBUUID(string: "FD6F")) as? Data {
let hexString = advData.map { String(format: "%02hhx", $0) }.joined()
let proximityId = String(hexString.prefix(32))
let metadata = hexString.suffix(8)
NSLog("Discovered Exposure Notification Service Beacon with Proximity ID\(proximityId), metadata \(metadata) and RSSI \(RSSI)")
}
}
}
Beware, however, that Apple blocked this from working as of iOS 13.5 beta 2. The didDiscover method above is never called for advertisements with the Exposure Notification Service UUID.
Full Disclosure: I am the lead developer on the Android Beacon Library open source project and the author of the BeaconScope app built on this library.
EDIT April 26, 2020: Updated answer above to link to the revised 1.1 version of the Exposure Notification Service Bluetooth Spec, to update naming conventions from that change, and to revise code samples to show the metadata.
EDIT April 30, 2020: Updated answer based on Apple's release of iOS 13.5 beta 2 and XCode 11.5 beta, and the fact that Apple now blocks 3rd party apps from detecting the Exposure Notification Service beacon.
EDIT June 2, 2020: Updated answer based on Apple's final release of iOS 13.5 and Google's release of Google Play Services.
You can also use other open-source contact tracing protocols like Apple/Google's.
For instance, OpenCovidTrace is an open-source implementation of the Google/Apple protocol with minor changes, and DP-3T is a protocol proposed by the European research community.

Unable to create knowledge base for Azure Cognitive Service (Error: "No Endpoint keys found.")

I am creating a new knowledge base and connecting it to an already existing Azure Cognitive Service, but I am getting the error "No Endpoint keys found." when I click "Create KB".
(Screenshots of the error and of my QnA Maker service's endpoint omitted.)
It seems there is sometimes a problem where the endpoint keys can only be found if the Resource Group holding all resources for the QnA Maker service (like the App Service, Application Insights, the Search Service and the App Service Plan) is hosted in the same region as the QnA Maker service itself.
Since the QnA Maker service can only be hosted in West US (as far as I know and was able to find: https://westus.dev.cognitive.microsoft.com/docs/services?page=2), the current workaround for this case is to create a new QnA Maker service with the resource group hosted in the West US region. Then the creation of a knowledge base should work as always.
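Before recreating anything, you can verify which region each resource actually lives in by listing the resource group's contents with the Azure CLI (a sketch; my-qna-rg is a placeholder resource group name):
az resource list --resource-group my-qna-rg --query "[].{name:name, type:type, location:location}" --output table
Every row should show the same region (West US) before you attempt to create the knowledge base again.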
PS: it seems this issue was already reported, but the problem still occurs for me from time to time (https://github.com/OfficeDev/microsoft-teams-faqplusplus-app/issues/71).
My resources and resource group were all in West US but I still got the same "No Endpoint keys found." error.
Eventually I figured out that the issue was related to my subscription levels. Make sure that they are all the same for all your created resources.
If you are using the deploy.ps1 script in the Virtual Assistant VS template, open the file at .\Deployment\Resources\template.json
That is a template for the resource creation. You can look through it to see exactly which resources will be created and what parameters are sent to Azure for each of the resources.
I am using a Visual Studio subscription, so it is registered as a free tier in Azure. What worked for me is that I had to update all the "standard" subscriptions to free in the Parameters JSON array. I didn't update anything lower down for fear that it might interfere with the creation process too much.
An example is the appServicePlanSku parameter. It was set to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Standard",
"name": "S1"
}
}
I updated it to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Free",
"name": "F0"
}
}
I made multiple of these updates in the parameters array. After those changes, deleting the resource group for the 100th time and running the deployment script again, it worked.

Roxford: how to change the endpoint and connect to the Azure Cognitive Services API?

I am trying to connect to Azure Cognitive Services using the Roxford package. I get an error, probably due to a wrong endpoint (after including the Oxford project into Azure services, there are several region-specific endpoints).
I got the key from personal account in Azure Cognitive Service project:
library(Roxford)
library(plyr)
library(rjson)
facekey <- "xxx" #look it up on your subscription site
getFaceResponseURL("http://getwallpapers.com/wallpaper/full/5/6/4/1147292-new-women-faces-wallpaper-2880x1800-for-phone.jpg",key= facekey)
#I got error
# {"error":{"code":"Unspecified","message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."}}
How can I change the endpoint to "https://westcentralus.api.cognitive.microsoft.com/face/v1.0"?
If your Roxford lib is the one here: https://github.com/flovv/Roxford/blob/master/R/videoAnalysis_LIB.R#L182
Then you can add the region when you call the method. Cognitive Services keys are dedicated to an Azure region, so you should use the same region when you use the key. If you don't remember which region you chose when you generated the key, it's written in the overview in the Azure portal.
Then, when you use getFaceResponseURL:
getFaceResponseURL <- function(img.url, key, region="westus")
Pass the region:
getFaceResponseURL("http://getwallpapers.com/wallpaper/full/5/6/4/1147292-new-women-faces-wallpaper-2880x1800-for-phone.jpg", key=facekey, region="theAzureRegionOfYourKey")

Processing Custom Alexa Skill Cards in AVS-SDK

I am building some simple custom skills with cards, like this:
this.response.cardRenderer(skillName,textCard);
this.emit(':responseReady');
I have the AVS SDK installed on a Raspberry Pi, but I can't see where the card info ends up. The service simulator has the following info in the service response:
"card": {
"content": "hum, I can't sense any fan here.",
"title": "FanControlIntent on"
},
Is there any way I can extract the card info so I can process it in the SDK on my Raspberry Pi? My first guess was that it would be in the payload of the directive, but it is not.
Based on the Amazon developer forums, card data is not exposed in any API at the moment. See the following references:
https://forums.developer.amazon.com/questions/67919/how-do-i-access-the-alexa-card-responses-on-my-ras.html
https://forums.developer.amazon.com/questions/63071/display-alexa-card-information.html

Programming a Web Portal for Microsoft Dynamics CRM

I'm working on a web portal for customers that will connect to Microsoft Dynamics. I don't want to make Dynamics CRM directly an internet-facing deployment (IFD), so I'd like to use a separate database that the web interface interacts with, and then use web services to move the data between the web portal database and Dynamics CRM.
I'm just looking for thoughts on whether this is the best way to proceed and whether there are any good code examples, etc. that I can look at for implementing this?
I saw Microsoft has a Customer Portal but it looks like it requires (at a cursory glance) an IFD deployment - which I don't want.
First, after creating your ASP.NET project (WebForms or MVC 3), add the following references:
Microsoft.Crm.Sdk.Proxy
Microsoft.Xrm.Sdk
System.Runtime.Serialization
System.ServiceModel
In your code-behind, create a class and add the following code:
private IOrganizationService GetCrmService(string userName, string password, string domain, Uri serviceUri)
{
    ClientCredentials credentials = new ClientCredentials();
    credentials.Windows.ClientCredential = new System.Net.NetworkCredential(userName, password, domain);
    //credentials.UserName.UserName = userName; // uncomment in case you want to impersonate
    //credentials.UserName.Password = password;
    ClientCredentials deviceCredentials = new ClientCredentials();
    // Note: do not dispose the proxy (e.g. with a using block) before returning it,
    // or the caller would receive an already-disposed, unusable service.
    // The caller is responsible for disposing it when done.
    OrganizationServiceProxy serviceProxy = new OrganizationServiceProxy(serviceUri, null, credentials, deviceCredentials);
    serviceProxy.ServiceConfiguration.CurrentServiceEndpoint.Behaviors.Add(new ProxyTypesBehavior());
    return (IOrganizationService)serviceProxy;
}
If you want to retrieve multiple records:
string fetch = @"My Fetch goes here";
EntityCollection records = GetCrmService(userName, password, domain, serviceUri).RetrieveMultiple(new FetchExpression(fetch));
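The fetch string above is just a placeholder; in case it helps, a simple FetchXML query could look like the following sketch, which retrieves the full name of active contacts (attribute names depend on your organization's schema):
<fetch version="1.0" output-format="xml-platform" mapping="logical">
  <entity name="contact">
    <attribute name="fullname" />
    <filter>
      <condition attribute="statecode" operator="eq" value="0" />
    </filter>
  </entity>
</fetch>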
I highly recommend downloading the SDK. You'll find many samples and walkthroughs in it that will help you build good portals.
I think it's a good strategy because:
It allows you to asynchronously put the data entered on the website into the CRM. This decoupling ensures neither the CRM nor the website will become each other's bottleneck.
Only the intermediate service layer is internet facing, so you'll be in control over what CRM information would be disclosed/open for alteration if this service layer is compromised.
The architecture you're after is reminiscent of the way the CRM Asynchronous Service works (asynchronous plugins and workflows work this way):
A job is put in a queue (table) in the CRM DB.
A scheduled service awakes every x seconds and fetches the latest y records from the queue table.
The service performs each job and writes the result (success, error message log) back to the queue table's records.
So the thing that is probably hardest is writing a good scheduled service that never throws an exception (but always digests it) and properly logs the results back to the DB.
To learn more about the Dynamics CRM's "Asynchronous Service Architecture", refer to the following: http://msdn.microsoft.com/en-us/library/gg334554.aspx
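As an illustration of that pattern, below is a minimal sketch of such a polling worker in C#. All of the names (QueueJob, FetchPendingJobs, ProcessJob, MarkCompleted, MarkFailed) are hypothetical placeholders for your own queue-table access code, not CRM SDK calls:
using System;
using System.Collections.Generic;
using System.Threading;

public class QueueJob { public Guid Id; }

public class QueueWorker
{
    private volatile bool _stopRequested;

    public void Run()
    {
        while (!_stopRequested)
        {
            // Fetch the latest y pending jobs from the queue table.
            foreach (QueueJob job in FetchPendingJobs(10))
            {
                try
                {
                    ProcessJob(job);                // the actual CRM call(s)
                    MarkCompleted(job);             // write success back to the queue record
                }
                catch (Exception ex)
                {
                    MarkFailed(job, ex.ToString()); // digest the exception and log it to the DB
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(30)); // wake every x seconds
        }
    }

    public void Stop() { _stopRequested = true; }

    // Placeholders for your own data access and business logic:
    private IEnumerable<QueueJob> FetchPendingJobs(int batchSize) { yield break; }
    private void ProcessJob(QueueJob job) { }
    private void MarkCompleted(QueueJob job) { }
    private void MarkFailed(QueueJob job, string error) { }
}
The hard part mentioned above is exactly the try/catch around each job: the worker must never let an exception escape, and must always write the outcome back to the queue table.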
It looks like a good approach.
It will improve the performance of both the portal and CRM.
The data shown on the portal is NEARLY real-time, i.e. it is NOT real-time.
Throughout development, keep checking that there is not TOO MUCH async processing keeping the CRM server busy all the time.
I don't think the accelerators/portals REQUIRE CRM to be an IFD instance; I guess only the portal part needs to be internet-facing (of course, to make it usable for its purpose!).
Anwar is right, the SDK is a good launchpad for such research.
The Customer Portal does not require an IFD deployment. And if you do not like the Customer Portal, you can always use the SDK extensions for portal development (Microsoft.Xrm.Client.dll & Microsoft.Xrm.Portal.dll and the PortalBase solution), which are all included in the SDK.
There is a great resource on how to build a portal using the SDK portal extension:
Dynamics CRM 2011 Portal Development
