I am building some simple custom skills that return a card, like this:
this.response.cardRenderer(skillName,textCard);
this.emit(':responseReady');
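For context, here is a minimal sketch of how that snippet typically sits inside an alexa-sdk (v1) Node.js handler; the intent name and card strings are taken from the simulator response below, and the rest of the surrounding skill code is an assumption:
const Alexa = require('alexa-sdk');

const handlers = {
  // Hypothetical handler; name and strings match the simulator response below
  'FanControlIntent': function () {
    const speech = "hum, I can't sense any fan here.";
    this.response.speak(speech).cardRenderer('FanControlIntent on', speech);
    this.emit(':responseReady');
  }
};

exports.handler = function (event, context, callback) {
  const alexa = Alexa.handler(event, context, callback);
  alexa.registerHandlers(handlers);
  alexa.execute();
};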
I have the AVS SDK installed on a Raspberry Pi, but I can't see where the card info ends up. The service simulator shows the following in the service response:
"card": {
"content": "hum, I can't sense any fan here.",
"title": "FanControlIntent on"
},
Is there any way I can extract the card info so I can process it in the SDK on my Raspberry Pi? My first guess was that it would be in the payload of a directive, but it is not.
According to the Amazon developer forums, card data is not exposed in any API at the moment. See the following references:
https://forums.developer.amazon.com/questions/67919/how-do-i-access-the-alexa-card-responses-on-my-ras.html
https://forums.developer.amazon.com/questions/63071/display-alexa-card-information.html
I am new to biometrics. I bought a new DigitalPersona U.are.U 4500 device and SDK from a vendor. The SDK has some samples (as expected). All of the samples run smoothly except the WebSample: it does not detect my device, and it also gives an error in the console.
Can anyone please help me fix this issue and explain why I am facing this problem? Is it something related to my wss://localhost?
Update
Digging further into the program, I found the URL https://127.0.0.1:52181/get_connection in websdk.client.bundle.min.js. When I open that link, it says:
{
  "code": -2147024894,
  "message": "The system cannot find the file specified."
}
Am I missing some file?
I don't have it in front of me now, because I switched back to the U.are.U 2.2.3 SDK, which does not have this feature.
But it sounds like you possibly have not installed the DigitalPersona Lite Client component. This runs a separate WebSocket service on port 9001 (IIRC), through which the JavaScript client then communicates.
It is described here: https://hidglobal.github.io/digitalpersona-devices/tutorial.html
After installation, you will need to restart.
The call to https://127.0.0.1:52181/get_connection should then respond with details of the WebSocket service, to which the JavaScript client will connect.
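As a quick sanity check, you can probe that endpoint from the browser console; a minimal sketch (the URL is taken from the question above, and the exact response shape is an assumption):
// With the Lite Client installed, this should log the WebSocket
// connection details instead of the "file not found" error above.
fetch('https://127.0.0.1:52181/get_connection')
  .then(response => response.json())
  .then(data => console.log('Agent connection info:', data))
  .catch(err => console.error('Agent not reachable:', err));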
NOTE: The WebSdk library requires a DigitalPersona Agent running on the client machine. This agent provides a secure communication channel between a browser and a fingerprint or card device driver. The DigitalPersona Agent is a part of a HID DigitalPersona Workstation. It can also be installed with a DigitalPersona Lite Client. If you expect your users do not use HID DigitalPersona Workstation, you may need to provide your users with a link to the Lite Client download, which you should show on a reader communication error:
A link is provided there to download the Lite client from here: https://www.crossmatch.com/AltusFiles/AltusLite/digitalPersonaClient.Setup64.exe
Just add a crossorigin attribute to the script tags that load the SDK. It will look like this:
<script src="scripts/websdk.client.bundle.min.js" crossorigin="*"></script>
<script src="scripts/fingerprint.sdk.min.js" crossorigin="*"></script>
I want to use Apple and Google's new APIs to support COVID contact tracing, as described in this API document. But when I try to use these APIs in Xcode, the classes are not found:
let request = CTSelfTracingInfoRequest()
How do I enable these APIs?
The APIs for iOS are restricted. While you can write code against the ExposureNotification framework using Xcode 11.5 and iOS 13.5, you can't run the code even in a simulator without Apple granting you a provisioning profile with the com.apple.developer.exposure-notification entitlement. And Apple is only giving that entitlement to developers associated with government health agencies after a manual approval process.
Below is more information on what you can do without special permission from Apple.
iOS releases prior to 13.5 disallowed transmitting the Exposure Notification Service Bluetooth advertising format described in the specification. Starting with 13.5, only the operating system can send that advertisement -- third-party apps cannot emit it without using the higher-level APIs.
Starting with iOS 13.5, Apple also blocks direct detection of this beacon format by third-party apps, forcing them to use the higher-level APIs. Earlier versions of iOS do allow detection of this beacon format.
Android, however, is another story.
While Google has similarly restricted use of these APIs in Google Play Services to API keys with special permissions granted by Google, Android versions 5.0+ allow third-party apps to both send and detect the Exposure Notification Service beacon advertisement that the Bluetooth specification envisions:
Using the free and open-source Android Beacon Library 2.17+, you can transmit this beacon like this:
String uuidString = "01020304-0506-0708-090a-0b0c0d0e0f10";
Beacon beacon = new Beacon.Builder()
        .setId1(uuidString)
        .build();
// This beacon layout is for the Exposure Notification Service Bluetooth Spec
BeaconParser contactDetectionBeaconParser = new BeaconParser()
        .setBeaconLayout("s:0-1=fd6f,p:-:-59,i:2-17");
BeaconTransmitter beaconTransmitter = new BeaconTransmitter(getApplicationContext(), contactDetectionBeaconParser);
beaconTransmitter.startAdvertising(beacon);
And scan for it like this:
BeaconManager beaconManager = BeaconManager.getInstanceForApplication(this);
beaconManager.getBeaconParsers().add(new BeaconParser().setBeaconLayout("s:0-1=fd6f,p:-:-59,i:2-17"));
...
beaconManager.startRangingBeaconsInRegion(new Region("All Exposure Notification Service beacons", null, null, null));
...
@Override
public void didRangeBeaconsInRegion(Collection<Beacon> beacons, Region region) {
    for (Beacon beacon: beacons) {
        Log.i(TAG, "I see an Exposure Notification Service beacon with rolling proximity identifier "+beacon.getId1());
    }
}
On Android, the above transmission and detection is possible even in the background. See library documentation for details.
The ability to transmit and receive Exposure Notification Service beacons is built into the BeaconScope Android app. You can use this as a tool to help test any apps you build.
You can read more in my blog post which shows you how to build your own app to do this.
As for iOS, while transmission is impossible as of this writing, you can scan for these beacons on iOS 13.4.x and earlier with code like this:
let exposureNotificationServiceUuid = CBUUID(string: "FD6F")
centralManager?.scanForPeripherals(withServices: [exposureNotificationServiceUuid], options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])
...
func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral, advertisementData: [String : Any], rssi RSSI: NSNumber) {
    if let advDatas = advertisementData[CBAdvertisementDataServiceDataKey] as? NSDictionary {
        if let advData = advDatas.object(forKey: CBUUID(string: "FD6F")) as? Data {
            let hexString = advData.map { String(format: "%02hhx", $0) }.joined()
            let proximityId = String(hexString.prefix(32))
            let metadata = hexString.suffix(8)
            NSLog("Discovered Exposure Notification Service Beacon with Proximity ID \(proximityId), metadata \(metadata) and RSSI \(RSSI)")
        }
    }
}
Beware, however, that Apple blocked this from working as of iOS 13.5 beta 2. The didDiscover method above is never called for advertisements with the Exposure Notification Service UUID.
Full Disclosure: I am the lead developer on the Android Beacon Library open source project and the author of the BeaconScope app built on this library.
EDIT April 26, 2020: Updated answer above to link to the revised 1.1 version of the Exposure Notification Service Bluetooth Spec, to update naming conventions from that change, and to revise code samples to show the metadata.
EDIT April 30, 2020: Updated answer based on Apple's release of iOS 13.5 beta 2 and Xcode 11.5 beta, and the fact that Apple now blocks 3rd party apps from detecting the Exposure Notification Service beacon.
EDIT June 2, 2020: Updated answer based on Apple's final release of iOS 13.5 and Google's release of Google Play Services.
You may also use other open-source contact tracing protocols similar to Apple/Google's.
For instance, OpenCovidTrace is an open-source implementation of the Google/Apple protocol with minor changes, and DP-3T is a protocol proposed by the European science community.
I am creating a new knowledge base and connecting it to an existing Azure Cognitive Services resource, but I am getting the error "No Endpoint keys found." when I click "Create KB".
It seems that the endpoint keys can sometimes only be found if the resource group holding all resources for the QnA Maker service (like the App Service, Application Insights, Search Service, and App Service Plan) is hosted in the same region as the QnA Maker service itself.
Since the QnA Maker service can only be hosted in West US (as far as I know and was able to find: https://westus.dev.cognitive.microsoft.com/docs/services?page=2), the current workaround for this case is to create a new QnA Maker service with the resource group hosted in the West US region. Then the creation of a knowledge base should work as always.
PS: It seems this issue was already reported, but the problem still occurs for me from time to time (https://github.com/OfficeDev/microsoft-teams-faqplusplus-app/issues/71).
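If you want to check whether endpoint keys are actually retrievable for your resource, you can query the QnA Maker management REST API directly. A minimal Node.js sketch (the resource name and subscription key are placeholders; the /qnamaker/v4.0/endpointkeys route is my understanding of the v4.0 API, so verify it against the current docs):
const https = require('https');

const options = {
  hostname: '<your-resource-name>.cognitiveservices.azure.com',
  path: '/qnamaker/v4.0/endpointkeys',
  headers: { 'Ocp-Apim-Subscription-Key': '<your-qna-maker-key>' }
};

https.get(options, res => {
  let body = '';
  res.on('data', chunk => (body += chunk));
  // A 200 response containing primaryEndpointKey means the keys exist;
  // an error here points at the resource/region mismatch described above.
  res.on('end', () => console.log(res.statusCode, body));
});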
My resources and resource group were all in West US but I still got the same "No Endpoint keys found." error.
Eventually I figured out that the issue was related to the pricing tiers of my resources. Make sure they are the same for all your created resources.
If you are using the deploy.ps1 script in the Virtual Assistant VS template, open the file at .\Deployment\Resources\template.json
That is a template for the resource creation. You can look through it to see exactly which resources will be created and what parameters are sent to Azure for each of the resources.
I am using a My Visual Studio subscription, so it is registered as a free tier in Azure. What worked for me was updating all the "Standard" SKUs to free tiers in the parameters JSON array. I didn't update anything lower down for fear that it might interfere with the creation process too much.
An example is the appServicePlanSku parameter. It was set to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Standard",
"name": "S1"
}
}
I updated it to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Free",
"name": "F0"
}
}
I made multiple of these updates in the parameters array. After those changes, deleting the resource group for the 100th time and running the deployment script again, it worked.
I need recommendations on when to use each of the above. Can I use both of them? Is that possible?
Also, in botium.json's CONTAINERMODE, only one of these can be specified. How do I decide which one to choose, and when?
Below is my botium.json file
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "Alexa Conversation Sample",
      "SCRIPTING_FORMAT": "xlsx",
      "SCRIPTING_XLSX_STARTROW": 2,
      "SCRIPTING_XLSX_STARTCOL": 1,
      "CONTAINERMODE": "alexa-avs"
    }
  }
}
I want to know whether I can use alexa-smapi or alexa-avs.
For example, in Watson we use something like the below in botium.json:
"CONTAINERMODE": "watson",
"WATSON_USER": "0274cb6f-3680-4cf7-bd6b-71c7f447542d",
"WATSON_PASSWORD": "ZWDE5xo02sby",
"WATSON_WORKSPACE_ID": "97513bc0-c581-4bec-ac9f-ea6a8ec308a9"
In the same sense, what should I do to run an Alexa skill?
The Botium Connector for Amazon Alexa Skills API is for testing the skill on a very low API level. You can test the intent resolution (with the Skill Simulation API) and the backend response (with the Skill Invocation API). The recommendation is to start testing at the API level with this connector.
The Botium Connector for Amazon Alexa with AVS simulates a virtual Alexa-enabled device connected to the Alexa services to test at the end-to-end level: it uses speech-to-text and text-to-speech technologies to connect to your Alexa skill like a real user would, and navigates through the conversation with voice commands.
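For the SMAPI connector, the capabilities in botium.json follow the same pattern as the Watson example above. A sketch (the ALEXA_SMAPI_* capability names and values are from memory, so check them against the botium-connector-alexa-smapi documentation; the skill ID is a placeholder):
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "Alexa Conversation Sample",
      "CONTAINERMODE": "alexa-smapi",
      "ALEXA_SMAPI_API": "simulation",
      "ALEXA_SMAPI_SKILLID": "<your-skill-id>",
      "ALEXA_SMAPI_LOCALE": "en-US"
    }
  }
}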
I followed the steps below:
Built the custom speech service (https://github.com/Microsoft/Cognitive-Custom-Speech-Service).
After building the custom speech service, I used the sample code (https://github.com/Azure-Samples/Cognitive-Speech-STT-Windows) to test the custom speech model. While testing, I got a "Failed with Login error" exception with the detail message "Transport error".
Is there a way to test the custom speech model in a windows app?
I would recommend using the new Speech SDK, which works with the Custom Speech service. You can find samples for using the Custom Speech service here. You can also use the new Speech SDK in UWP (see details).
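To point the new SDK at a custom model, you set the endpoint ID on the speech config. A minimal sketch using the JavaScript flavour of the Speech SDK to show the shape of the API (key, region, endpoint ID, and file name are placeholders; the same SpeechConfig/endpointId pattern exists in the C#/UWP SDK):
// npm install microsoft-cognitiveservices-speech-sdk
const fs = require('fs');
const sdk = require('microsoft-cognitiveservices-speech-sdk');

const speechConfig = sdk.SpeechConfig.fromSubscription('<your-speech-key>', '<your-region>');
// This is what binds the recognizer to your custom speech model.
speechConfig.endpointId = '<your-custom-endpoint-id>';

const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync('test.wav'));
const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

recognizer.recognizeOnceAsync(result => {
  console.log('Recognized:', result.text);
  recognizer.close();
});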