I am trying to build an application with the Microsoft Cognitive Speaker Identification Service, but when I check it using its API, some audio is not recognized correctly. I would like to know what the accuracy level of the service is. Is there any way to improve it?
There are various things that can affect the accuracy of the identification, e.g. noise level, microphone quality, echo, etc.
To improve the performance in your conditions, you can make sure the enrollment audio is recorded in the same conditions as the test audio (e.g. same microphone), and try to ensure that recording is done in a quiet environment.
It does work across multiple users and has been tried on different PCs/microphones.
I'd make sure that:
It is in a quiet room/environment
You are sending the audio correctly (it is just byte array data, no additional encoding).
Also check the header (MediaTypeHeaderValue/content type); all requests seem to be 'application/json' even though we send WAV files.
Take care when mapping your users to the Azure GUIDs, and make sure you are using the correct ones. If you are using the SDK rather than the API for profile creation and enrollment, there is no retrieval of a profile by ID at the moment. I have done a workaround: recreate the profile and update the ID in a database just before enrollment. (The API doesn't need this, though.)
Also make sure you are using the latest API (URLs ending .../speaker/verification/v2.0/ etc.). Some of the text-independent features in the SDK are V2 only, and verification can fail because V2 stores profiles in 3 separate locations depending on the verification method.
Also check that the profile was created/enrolled using the same verification method you are using to verify. Try with a new profile if unsure.
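For reference, here is a minimal sketch of posting the raw WAV bytes with HttpClient. The region, path, and content type shown are assumptions based on the v2.0 endpoints mentioned above, so adjust them to your own resource and profile:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class SpeakerApiSample
{
    // Placeholder endpoint - replace the region, profile ID and key with your own.
    const string EnrollUrl =
        "https://westus.api.cognitive.microsoft.com/speaker/verification/v2.0/" +
        "text-independent/profiles/{profileId}/enrollments";

    public static async Task<string> EnrollAsync(string wavPath, string subscriptionKey)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

            // Send the WAV file as a raw byte array - no extra encoding.
            var content = new ByteArrayContent(File.ReadAllBytes(wavPath));
            content.Headers.ContentType = new MediaTypeHeaderValue("audio/wav");

            var response = await client.PostAsync(EnrollUrl, content);
            Console.WriteLine(response.StatusCode);
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Logging the response body makes it easier to see whether a rejection comes from the audio itself or from the profile/enrollment state.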
My case is that I want the data protected even from people who have access to the back end (the key store), so that they can't read it without the assistance of the user (represented by the client app, in my case the browser).
One option is to have the decryption keys stored on the client and passed with each request, which sounds pretty messy to me, and I'm not sure I want my keys to wander around the net like this. What I imagine instead is that the client keeps some token (it might be a password the user knows) and the decryption can't happen without it.
I thought about using the purpose string for this, but I have the feeling it is not a good idea, since its main purpose is isolation. On the other hand, it is part of the additional authenticated data used for subkey derivation (based on this article: https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/implementation/subkeyderivation?view=aspnetcore-2.1#additional-authenticated-data-and-subkey-derivation).
I came across some examples that build their own symmetric encryption with lower-level classes (like this post: Encrypt and decrypt a string in C#?). Since I'm not an expert in this area, I would like to use as many built-in classes as possible.
What is the recommended way to achieve what I need with the classes from the Data Protection API? (I'm using .NET Core 1.1 on Ubuntu.)
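For discussion's sake, here is a minimal sketch of the idea floated above: folding a user-held token into the purpose chain, and therefore into the additional authenticated data. The key-ring directory and token handling are assumptions, and note that anyone who holds both the key ring and the token can still decrypt:

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.DataProtection;

class UserScopedProtectionSample
{
    static void Main()
    {
        // Key ring stored on disk; the path is just an example.
        var provider = DataProtectionProvider.Create(
            new DirectoryInfo("/var/keys/myapp"));

        // The user-held token becomes part of the purpose chain and therefore
        // part of the additional authenticated data used for subkey derivation.
        string userToken = "token-the-client-sends-with-the-request";
        var protector = provider.CreateProtector("MyApp.UserData", userToken);

        string ciphertext = protector.Protect("sensitive payload");
        string plaintext  = protector.Unprotect(ciphertext);

        Console.WriteLine(ciphertext);
        Console.WriteLine(plaintext);
    }
}
```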
On the client device, a synced Realm can be set up with an encryption key that's unique to the user and stored in the device keychain, so data is stored encrypted on the client.
(related question: Can "data at rest" in the Realm Mobile Platform be encrypted?)
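(For illustration, a rough sketch of that client-side piece with the Realm .NET SDK; a local configuration is shown and the key generation/storage details are assumptions, but the synced configuration exposes the same EncryptionKey property.)

```csharp
using System.Security.Cryptography;
using Realms;

class EncryptedRealmSample
{
    static Realm OpenEncryptedRealm(byte[] key)
    {
        // Realm expects a 64-byte encryption key.
        var config = new RealmConfiguration("user.realm")
        {
            EncryptionKey = key
        };
        return Realm.GetInstance(config);
    }

    static byte[] GenerateKey()
    {
        // Generate once per user and store it in the device keychain/keystore.
        var key = new byte[64];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(key);
        }
        return key;
    }
}
```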
Realm Object Server and the clients can communicate via TLS, so data is encrypted in transit.
But the Realm Object Server does not appear to store data using encryption, since an admin user is able to access all the database contents via Realm Browser (https://realm.io/docs/realm-object-server/#data-browser).
Is it possible to set up the Realm Mobile Platform so that user data is encrypted end to end, such that no one but the user (not even server admins) has access to the decryption key?
Due to the way we handle conflict resolution, we currently are unable to provide end-to-end encryption, as you correctly deduced. Let's go a tiny bit into detail with regards to the conflict resolution.
In order to handle conflicts the way we do, we use something called operational transformation. This means that instead of sending the data over directly, the client tells the server the intent of the change rather than the result. For example, when two users edit a text field, we would tell the server insert(data='new text', offset=0) because the first user prepended data at the beginning of the text field, and insert(data='some more stuff', offset=10) because the second user added data in the middle of the field. These two separate operations allow the server to resolve unambiguously what happened and merge the two writes without conflict.
This also means that if we encrypt everything, the server would be unable to handle this conflict resolution.
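To make the mechanics concrete, here is a toy sketch (not Realm's actual implementation) of why the server needs the operations in the clear: merging the two inserts requires shifting the second offset by the length of the first insert, which cannot be done on ciphertext.

```csharp
using System;

class OtSketch
{
    // A minimal "insert" operation: put Data into the string at Offset.
    class Insert
    {
        public string Data;
        public int Offset;
        public Insert(string data, int offset) { Data = data; Offset = offset; }
    }

    // Adjust opB so it still means the same thing after opA has been applied.
    static Insert Transform(Insert opB, Insert opA) =>
        opA.Offset <= opB.Offset
            ? new Insert(opB.Data, opB.Offset + opA.Data.Length)
            : opB;

    static string Apply(string text, Insert op) => text.Insert(op.Offset, op.Data);

    static void Main()
    {
        var text = "the field";
        var opA = new Insert("new text ", 0); // user 1 prepends at the start
        var opB = new Insert("more ", 4);     // user 2 inserts in the middle

        // The server applies opA as-is, then the transformed opB.
        var merged = Apply(Apply(text, opA), Transform(opB, opA));
        Console.WriteLine(merged); // "new text the more field"
    }
}
```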
That said, this is for the current version. We do have a number of thoughts on how we could handle this in the future while providing (some degree of) encryption. Mainly this would mean more work on the client, and maybe finding a new algorithm that would allow us to tell the client the intent and let the client figure out how to merge everything. This is a quadratic problem, though, so we're reluctant to put too much work on the client side, as it could really drain the battery.
That might be acceptable for some users, which is why we're looking into it. Basically, there will be a trade-off. As the old adage goes: fast, secure, convenient: pick two. We just have to figure out how to handle this properly.
I just opened a feature request around possibly using Tresorit's ZeroKit to solve the end-to-end encryption question posed here. It sounds like the conflict resolution implementation will still be an issue, but maybe there is a different conflict-resolution level that could be applied for those who don't need real-time dynamic editing of individual data fields (like patient health data, where only a single clinician ever really edits a record at any given time).
https://github.com/realm/realm-mobile-platform/issues/96
I've nearly finished my Lambda service for my smart home skill, and everything works great. The Echo is receiving my confirmations and correctly relaying their information. I'm now trying to build in error handling.
From the SHS API reference, there are a bunch of error messages listed that correspond to different circumstances. Are these errors supposed to change what Alexa says? Regardless of which one, if any, I use, Alexa just responds that the command doesn't work on that device. Right now I'm literally just using callback(err) and returning the object copied and pasted from the API reference, and Alexa still responds with the generic error.
It's easy to put in a bunch of constants to define error returns. It's harder to wire all of that into a firmware patch for a hardware device. Also, they only release an update to the SDK a few times a year, while they patch the hardware every couple of weeks.
Given that, I suspect they put those error returns into the SDK to meet the SDK's ship date, more as placeholders than as specific functionality. Over time, and if there is increased adoption of smart home skills, they will roll out updates to the hardware that take advantage of those returns.
My advice would be to use them, but not to expect a difference right now, and not to mention differences in your documentation. If there is another place where you can surface diagnostic information, you might want to do that so your customers can fix their problems.
For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the Session_Start event, we would determine which database to use (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every Page_Init, we'd dynamically set each data object's connection string. In code-behind, we'd do the same thing with the connection string.
Is there a better approach to doing this, and will setting the connection string in Page_Init work? Is using a session variable wise? I've tended never to use them except when no other solution was possible.
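To make the question concrete, a rough sketch of what I have in mind for Session_Start (Web Forms assumed; the connection-string names and the TenantConnectionString session key are just illustrative):

```csharp
// Global.asax.cs - pick the tenant database when the session starts.
using System;
using System.Configuration;
using System.Web;

public class Global : HttpApplication
{
    protected void Session_Start(object sender, EventArgs e)
    {
        // e.g. "client1" from "client1.example.com"
        string subdomain = Request.Url.Host.Split('.')[0];

        // Connection strings named after the subdomain in web.config.
        var cs = ConfigurationManager.ConnectionStrings[subdomain];
        if (cs == null)
            throw new InvalidOperationException("Unknown tenant: " + subdomain);

        Session["TenantConnectionString"] = cs.ConnectionString;
    }
}
```

Then in Page_Init/code-behind we would read Session["TenantConnectionString"] when constructing the SqlConnection or data-source objects.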
The problem is that this model is really complex and can leave you with errors, especially when it comes to changes in the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, that means updating 100 different databases. When you factor in downtime, things get even worse.
I would approach this slightly differently: abstract your database layer and create one API that calls the database. From the website you always call the API, passing the domain whose data you want.
You may ask what advantage this gives you. The biggest one shows up during upgrades and maintenance: having one API per client is a lot easier to reason about than one database per client. And if you really want just one API (I would still recommend one per client, deployed automatically), you can switch based on parameters passed to the API (for example the subdomain in a header) to choose which database to connect to; a sketch of this switch is shown after the example scenario below.
Let me give you a sample scenario and how I would suggest approaching it (this holds for a database or an API).
I want to include a new data field. The first thing is to add this field on the backend (API or database) and deploy it. If it is an API, you can even test it by calling the API and seeing that the new field is now returned; that is not a problem for your UI because it is just a field it does not use yet. After that you change the UI to actually use the field and deploy that to production.
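If you go with the single-API switch, a hedged sketch of what that could look like (ASP.NET Web API assumed; the X-Tenant header name is invented for illustration):

```csharp
using System.Configuration;
using System.Linq;
using System.Web.Http;

public class CustomersController : ApiController
{
    [HttpGet]
    public IHttpActionResult Get()
    {
        // The caller (your website) passes the subdomain it is serving.
        if (!Request.Headers.TryGetValues("X-Tenant", out var values))
            return BadRequest("Missing X-Tenant header");

        var tenant = values.First();
        var cs = ConfigurationManager.ConnectionStrings[tenant];
        if (cs == null)
            return BadRequest("Unknown tenant: " + tenant);

        // Open the connection with cs.ConnectionString and query from here.
        return Ok(new { tenant });
    }
}
```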
This is a theoretical question.
Imagine an ASP.NET website. By clicking a button, the site sends mail. Now:
I can send mail asynchronously in code
I can send mail using QueueBackgroundWorkItem
I can call a ONEWAY web service located in the same website
I can call a ONEWAY web service located in ANOTHER website (or another subdomain)
None of the above solutions waits for the mail operation to complete, so they are all fine in that respect.
My question is: why should I use the service solution instead of the others? Is there an advantage?
The 4th solution adds additional TCP/IP traffic to use the service, so it's not efficient, right?
If so, using a service under the same website (the 3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great; I really need to get opinions.
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails into a service, either in the same or another web application, will make it available to other applications and to client-side code. It also adds some complexity to the code calling the service, as it will need to deal with the case where the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails is really large, as it allows you to offload the work to one or more servers if needed. Given the use case described (a user clicks a button), this seems rather unlikely unless the website will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, both initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
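For the in-application route, here is a minimal sketch of option 2 using HostingEnvironment.QueueBackgroundWorkItem; the SMTP settings are assumed to come from web.config and the sender address is a placeholder:

```csharp
using System.Net.Mail;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class MailSender
{
    // Queue the send so the request returns immediately; ASP.NET keeps the
    // app domain alive briefly so queued work items can finish.
    public static void QueueMail(string to, string subject, string body)
    {
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken ct) =>
        {
            using (var client = new SmtpClient())            // host/port from web.config
            using (var message = new MailMessage("noreply@example.com", to, subject, body))
            {
                await client.SendMailAsync(message);
            }
        });
    }
}
```

This is why QueueBackgroundWorkItem is usually preferred over a plain fire-and-forget Task for options 1 and 2: the runtime is at least aware of the pending work when the application shuts down.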
An important concern to pay significant attention to when implementing an email-sending function is robustness. Robustness can be achieved with any of the possible architectures and is a somewhat different concern from the one emphasized in the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full, non-existent account, rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...).