Do we need to build separately for each language when using the Alexa dialog model? - alexa-skills-kit

I changed the interaction model in two languages and clicked Save for each. Does clicking Build Model build both of them, or just the language that is currently selected?

Yes, you have to build the interaction model separately for each language, but you can point all languages to the same Lambda function or HTTP endpoint.
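If every locale points at one endpoint, the handler can branch on the locale carried in each request. Below is a minimal sketch assuming a .NET Lambda with the Alexa.NET helper library; the library choice and the greeting strings are illustrative, not part of the original answer.

using Alexa.NET;
using Alexa.NET.Request;
using Alexa.NET.Response;
using Amazon.Lambda.Core;

public class Function
{
    // One Lambda handler shared by every language model.
    public SkillResponse FunctionHandler(SkillRequest input, ILambdaContext context)
    {
        // Locale of whichever interaction model matched the utterance, e.g. "en-US" or "de-DE".
        string locale = input.Request.Locale ?? "en-US";

        string greeting = locale.StartsWith("de") ? "Hallo!" : "Hello!";
        return ResponseBuilder.Tell(greeting);
    }
}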

Related

Firebase TensorFlow Lite Model

I am building a TensorFlow model to predict a certain feature for each user, so I am making a model for each user of the application. How do I upload the TensorFlow Lite models to Firebase in such a way that each user can access the model specific to them? For example, say there are two users, A and B. When A makes a call to Firebase, I want to make sure A's model is used, and when B uses the app, I want to make sure B's model is used.
I hope that makes sense. Could someone please tell me if there is a way to do this? Thank you!
A separate model for each user won't scale well if you have a large number of users.
Instead, you can use properties of the user as input to your model, or encode them into an embedding space and use that embedding as part of the input. You can find more information about this here:
https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture
Then you can have a single model that works for all users.

Azure Form Recognizer - model versioning

A couple of questions about Form Recognizer (FR) model management:
Background:
I'm using the FR labeling tool to train models and a C# Function app to interact with the FR service and analyze forms.
Each time a model is trained, a new instance is created. The new version does not hold any reference to previous versions, and there is no way of selecting a model by name in code. The latest model can be queried using the TrainingCompletedOn property, but that is not failproof and cannot be used if the FR resource holds more than one project.
1. If development is continuous and the model is constantly improved, is there a way (or best practice) to manage which model is targeted?
2. In connection to the first question: since FR always creates a new model, you end up with a long list of unused models that are still active, and since there is no link between them there is no safe way of performing a cleanup. Old models can be removed using the API, but it's a manual process. Any recommendations on how old model versions can be managed?
3. Can a model be exported, added to version control, and deployed to other environments from version control? There is an API endpoint to copy models between FR instances, but I would like to keep models in version control and deploy to environments from there.
4. In connection to Q3: what is the recommended practice for managing an FR project in DevOps? How can work be versioned and deployed across different environments?
Thank you
1. Each model is unique and independent; it's immutable. You need to pick the model with the best accuracy based on your test data set.
2. You can call the DELETE API to delete a model (a sketch of listing and deleting models from C# follows below).
3. No such support at this point. As each model can't be changed after it's created, I don't think there is much value in version control for a model.
4. Please see #1: you could use a test data set to measure model performance. If the model does poorly on one test file, you could label that test file, add it to the training set, and train a new (better) model.
-xin (MS Form Recognizer Team)
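For points 1 and 2, a minimal sketch of targeting the newest Ready model and cleaning up older ones with the Azure.AI.FormRecognizer C# SDK (v3.x) could look like the following. As the question notes, ordering by TrainingCompletedOn only works if the resource holds a single project, and the number of models to keep is an arbitrary choice here.

using System;
using System.Linq;
using Azure;
using Azure.AI.FormRecognizer.Training;

var client = new FormTrainingClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// List all models, keep only those that trained successfully, newest first.
var readyModels = client.GetCustomModels()
    .Where(m => m.Status == CustomFormModelStatus.Ready)
    .OrderByDescending(m => m.TrainingCompletedOn)
    .ToList();

var latest = readyModels.FirstOrDefault();
Console.WriteLine($"Targeting model {latest?.ModelId}, trained {latest?.TrainingCompletedOn}");

// Delete everything except the three most recent models (the keep-count is arbitrary).
foreach (var old in readyModels.Skip(3))
{
    client.DeleteModel(old.ModelId);
}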

Is there any way to provide your own speech files to improve custom keywords?

I have been trying to use a custom keyword with the Speech Devices SDK, but I have had problems when I use my own custom keyword and deploy it to an Android phone (the standard keywords are better, but still not as good as I need or would expect in a commercial application). The screenshots on the linked page imply that you can "Add training data to train keyword model", however that option doesn't appear when I use Speech Studio.
My suspicion is that the speech files automatically generated by Speech Studio are not good enough to train the model for users with accents (like myself).
We have not yet widely enabled KWS model adaptation.
The custom keyword generated from the portal aims to be sufficient for an initial trial; it is not currently at the level required for a commercial application.
We are enabling the ability to upload data to adapt the model; this is being trialed with customers before a wider roll-out. It is an upload on the Custom Keyword page, not the Custom Speech page.
Thank you for using the speech SDK!
Did you follow the instructions here:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-devices-sdk-create-kws
And here (for how to prepare the data):
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-speech-test-data#upload-data

Multivariate Testing for Sublayouts in Sitecore

Having toyed with the concept in the past, I am interested in using multivariate testing on my company's Sitecore website. There are a number of places where I feel we can definitely improve sales through the use of A/B testing:
Running two entirely different templates to see what layouts work better for users
Running a number of different Sublayouts (forms) on the site to see which ones people are more likely to fill out
Trialling different content - Running two different sets of copy to see if users are more likely to stay on the page
I want to use the Marketing Suite within Sitecore, and I want to be able to measure which pages are visited more and, out of two or more sublayout forms, count which form is used the most. Sadly, I have no experience with the OMS and am struggling to see how one actually implements these things.
Let's say I have a content item with a bunch of sublayouts attached to it within its template. Can someone help guide me towards a way of achieving the three things I want to run multivariate testing on?
EDIT: On the subject of the two sublayouts I want to test on a template: I have two sublayouts, which are both simple ASP.NET email forms. Once a user fills in the form, its contents are written to a database and an email is sent (using Sitecore.Context.Item to get an "Email From" field from the content item that runs the form).
This is where I get stuck. A number of the sublayouts I have don't seem to have any "content" that needs pulling from a data source. The only content I can see in the case of the two forms I want to test is the "Email To" field. So, if I were to abstract those fields away into their own data templates and then add those as data sources, I assume I would then have to change my code to stop using Sitecore.Context.Item?
The point where I get stuck is with the data sources for the Multivariate Test Variables and the data sources for the Sublayouts. If I have two data templates containing the Email fields for each, two sublayouts that contain the forms that need testing and two multivariate variables, what goes where?
I believe you can read about it in the Analytics Configuration Reference (PDF link) under section 2.2.
You essentially create a MV test that wraps over potential data sources of a sublayout. The test then randomly assigns a DataSource, so your sublayouts need to be written to work with a DataSource.
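In practice that usually means the sublayout code-behind resolves its item from the sublayout's DataSource property and only falls back to Sitecore.Context.Item when no data source is set. A sketch of the pattern, with class and field names borrowed from the forms described in the question rather than from any real project:

using System;
using Sitecore.Data.Items;
using Sitecore.Web.UI.WebControls;

public partial class EmailFormSublayout : System.Web.UI.UserControl
{
    // Prefer the data source assigned by the multivariate test; fall back to the context item.
    protected Item DataSourceItem
    {
        get
        {
            var sublayout = Parent as Sublayout;
            if (sublayout != null && !string.IsNullOrEmpty(sublayout.DataSource))
            {
                return Sitecore.Context.Database.GetItem(sublayout.DataSource);
            }
            return Sitecore.Context.Item;
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // "Email From" is the field mentioned in the question; read it from the bound item.
        string emailFrom = DataSourceItem["Email From"];
        // ... build and send the email using that value ...
    }
}

With this in place, each multivariate variable simply points at a different data source item carrying its own email field values, and the sublayout code never needs to know which variant was served.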
With Sitecore 8 released Multivariate Testing is now supported out of the box as well as AB Testing.
You can run two entirely different templates to see which layout works best for the user with a Page Test in Sitecore's Optimization Tool on the Launch Pad. When creating a Page Test you can select the current version of the item, then create a new version of the item with the different layout. This can also be done for content on the page.
After that you need to decide how a winner is chosen, e.g. most goals completed by users, registrations, etc. Sitecore will then run the test for you automatically, showing A and B to various users, and ultimately choose a winner based on the test objective. You can choose a winner manually or let Sitecore choose automatically after a set duration.
Creating a multivariate test over a number of different sublayouts, as well as imagery, personalisation, content, etc., is a little more interesting. Multivariate tests are created via workflow actions; I've recently posted a blog on how to add multivariate testing to a workflow.
Approving with a test will prompt Sitecore to create a multivariate test for all variables (sublayouts, content, personalisation, etc.). It creates an 'Experience' for every possible combination of these variables and tests them against each other.
For a more in-depth explanation and guide, I have recently posted a tutorial on creating a multivariate test in Sitecore.
There are two trainings that you (and a developer on your team) should really consider attending: OMS Certified Marketer and OMS .NET Developer.
Working with a Sitecore Certified OMS .NET Developer, you will be able to accomplish your marketing objectives. This is what Sitecore Training is for!
Please see the following and register for the next available trainings:
http://www.sitecore.net/Training/Course-Overview/OMS-11-Certified-Marketer.aspx
http://www.sitecore.net/Training/Course-Overview/OMS-11-NET-Developer.aspx

Best practice for developing an ASP.NET website in 30 languages?

We are going to develop an ASP.NET website in 30 languages.
What is the best solution for developing that site? Which architecture should be used?
I suggest storing localizable UI text in resource files (.resx) and setting the CurrentUICulture to the specific language for each request:
<globalization culture="auto" uiCulture="auto" />
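With uiCulture="auto", the CurrentUICulture is taken from each request's Accept-Language header, so resource lookups resolve to the matching .resx automatically. A minimal code-behind sketch; the resource class name Labels, the key WelcomeMessage, and the lblWelcome control are placeholders:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class _Default : Page
{
    // Declared in the matching .aspx markup in a real project (placeholder here).
    protected Label lblWelcome;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Resolves against Labels.resx, Labels.fr.resx, Labels.de.resx, ...
        // depending on the CurrentUICulture of the request.
        lblWelcome.Text = (string)GetGlobalResourceObject("Labels", "WelcomeMessage");
    }
}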
If your website is mostly content-oriented rather than a business-oriented application that differs heavily based on the language, you might want to consider building a separate set of pages for each language and redirecting the user based on a cookie, a profile property, or Request.UserLanguages. It's not possible to give a general prescription for the globalization problem; the best architecture differs significantly based on the nature of each individual project.
NLS is a recurring requirement, and often the people asking about NLS functionality are not aware of the complexity. NLS typically splits into (at least) two areas:
NLS in the UI
NLS in the data
In your case, a content-based website, you may even split the second point into data generated by the website provider and user-generated data.
For UI NLS you can use the .resx mechanism mentioned by Mehrdad, but you should be aware that any localization work then always requires editing source code (i.e. the .resx files).
When I had to develop a multi-language web app, I therefore chose to handle the NLS requirement in my own code and created a couple of NLS-specific tables that mirror the UI (by the way, this was the motivation for writing graspx: extract all visible texts, such as Label.Text, from the aspx source). A separate application uploads the UI definition and lets translators do their work; the main application has an import function for the translated texts.
The data model looks like this: Page - PageItems - PageItemTexts (with a reference to a Language), so it's quite simple.
The same model can be applied to the content: instead of Page and PageItems, you simply have ContentItems, which hold only a PK and an identifier, plus a table holding the text of each ContentItem associated with a language.
Additionally, you may define some sort of language fallback chain, so that a text which is not yet translated is displayed in the original language, or some other (closely related) language.
The displayed language can be selected based on the language provided by the browser (HTTP_ACCEPT_LANGUAGE), but the user should be allowed to override it (e.g. via a combo box). The selected language should be stored in a session variable, in a cookie, or in the database (for registered users).
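A minimal sketch of the fallback-chain lookup against such a table; the resolver class and the in-memory dictionary are hypothetical stand-ins for whatever data access layer sits in front of PageItemTexts:

using System.Collections.Generic;

public class NlsTextResolver
{
    // Key: (page item id, language code) -> translated text, loaded from PageItemTexts.
    private readonly IDictionary<(int PageItemId, string Language), string> _texts;

    public NlsTextResolver(IDictionary<(int PageItemId, string Language), string> texts)
    {
        _texts = texts;
    }

    // Try the requested language first, then each fallback in turn,
    // e.g. Resolve(42, "pt-BR", "pt", "en").
    public string Resolve(int pageItemId, params string[] languageChain)
    {
        foreach (var language in languageChain)
        {
            if (_texts.TryGetValue((pageItemId, language), out var text))
            {
                return text;
            }
        }
        return $"[missing text {pageItemId}]";
    }
}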
