Why are the Testing results and the ASR output after deployment different in Microsoft Custom Speech? - microsoft-cognitive

I am training an ASR model using Microsoft Custom Speech. When I test the model through the Testing tab, the output is recognized correctly, but after deploying it to an endpoint and testing it via Check endpoint with the same input audio file, the recognized text is different.
Does anyone have an idea what makes the two results different, and how I can fix it?
Basically, it is the same input file and the same model, but the ASR result differs between Testing and Deployment -> Check endpoint.
Thanks in advance.

Related

Postman-Jenkins-Jira Integration

I have a Postman collection which is integrated with Jenkins via Newman.
I need to integrate my Jenkins results with Jira via the Xray plugin.
I tried using the newman junitxray reporter, but this report treats each request as a test case.
In my collection I always need to run a series of requests before running the actual request that contains pm.test.
But the junitxray report also treats those preliminary requests as test cases.
I want only a specific request to be taken as a test case.
Can someone please help me with this?
You can use different reporters with Newman; depending on which one you choose, the contents of the JUnit XML report will differ. Some time ago I prepared a tutorial showing the differences.
If you're using Xray on Jira Cloud, please check this tutorial and the related code.
If you're using Xray on Jira Server/Data Center, please check this tutorial and the related code instead.
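If none of the reporters gives you exactly what you want, one workaround (a sketch, not a feature of any Newman reporter) is to post-process the JUnit XML before importing it into Xray, keeping only the requests whose names follow a convention you adopt in the collection — here a hypothetical "TC:" name prefix for the real test requests:

```python
# Filter a JUnit report so only chosen requests count as test cases.
# Assumption: real test requests are named with a "TC:" prefix -- a naming
# convention you would adopt in your Postman collection, not a Newman feature.
import xml.etree.ElementTree as ET

def keep_only_test_requests(junit_xml: str, prefix: str = "TC:") -> str:
    root = ET.fromstring(junit_xml)
    for suite in root.iter("testsuite"):
        for case in list(suite.findall("testcase")):
            # Drop setup/helper requests that were reported as test cases.
            if not case.get("name", "").startswith(prefix):
                suite.remove(case)
        # Keep the suite's "tests" counter consistent with what remains.
        suite.set("tests", str(len(suite.findall("testcase"))))
    return ET.tostring(root, encoding="unicode")

report = """<testsuites>
  <testsuite name="collection" tests="3">
    <testcase name="Login (setup)"/>
    <testcase name="Get token (setup)"/>
    <testcase name="TC: Create order"/>
  </testsuite>
</testsuites>"""

print(keep_only_test_requests(report))
```

Running this step between `newman run` and the Xray import means only the "TC:" request reaches Jira as a test case.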

Import the documentation field of Robot test cases into Tests in Xray

I'm currently working on a project where I'm importing the output.xml from my test suite into Jira via Xray. This creates the Tests (in case they don't already exist) and the Test Execution (with the results).
My problem is that right now the Tests are created without any description or documentation. Is there a way I can map the [Documentation] field of the test cases in Robot to the Jira issue (Test) via Xray? I can't find anything in the Xray API docs.
Maybe my only option is to parse all the [Documentation] fields of the test cases and then import them into the Tests in Jira without Xray?
Currently this is not yet possible, but it makes sense from my perspective.
You can vote for and follow this suggestion so that the Xray team can be aware of your interest in it.
You can implement your own script that goes over the Tests and updates the Description field (using Jira's REST API), but you would need to know their issue keys in advance; if that's the case, then fine.
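A sketch of that "own script" approach: pull each test's [Documentation] text out of Robot Framework's output.xml, keyed by a Jira issue key carried as a Robot tag, then push each description through Jira's edit-issue REST endpoint. The tag-as-issue-key convention, the base URL, and the token are assumptions for illustration:

```python
# Map Robot [Documentation] text to Jira issue keys taken from test tags,
# then build Jira "edit issue" requests. Sketch only: the tag convention
# (tags like "PD-61083"), URL, and auth token are placeholders.
import json
import urllib.request
import xml.etree.ElementTree as ET

def docs_by_issue_key(output_xml: str) -> dict:
    """Return {issue key from a test tag: [Documentation] text}."""
    root = ET.fromstring(output_xml)
    result = {}
    for test in root.iter("test"):
        doc = test.findtext("doc", default="").strip()
        for tag in test.iter("tag"):
            key = (tag.text or "").strip()
            if "-" in key and doc:  # crude "looks like an issue key" check
                result[key] = doc
    return result

def build_update(base_url: str, key: str, description: str, token: str):
    """Build a Jira 'edit issue' PUT request (not sent here)."""
    body = json.dumps({"fields": {"description": description}}).encode()
    return urllib.request.Request(
        f"{base_url}/rest/api/2/issue/{key}",
        data=body, method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"})

sample = """<robot><suite name="S">
  <test name="Add Records">
    <doc>Adds multiple records to the timesheet.</doc>
    <tag>PD-61083</tag>
  </test>
</suite></robot>"""

print(docs_by_issue_key(sample))
```

Sending each request with `urllib.request.urlopen(...)` would perform the actual updates; as noted above, this only works when you already know the issue keys.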

How to implement Jira Xray + Robot Framework?

Hello, I'm a new junior software tester and I've been asked to study Xray and Robot Framework and how to integrate the two.
I've created a few test cases in Xray and then started to learn Robot Framework, and up to that point all was good.
Now I've been trying to import the results of the test cases I wrote in Robot into a Test Execution in Xray, but every time I try to import the output.xml from Robot into Xray, instead of synchronizing these tests, Xray creates new Tests with the Robot results.
Has anyone here done this before and could help me? I've tried using tags in Robot and even using the same test names (in Xray and in Robot), but it didn't work. Thanks in advance.
I recommend using Jenkins with the Xray for Jira plugin to sync the results of automated tests into Xray Test items.
You would use a tag in Robot to link a test case to an Xray Test item; if you don't specify an ID, the plugin will create a new Test item and keep it updated based on the name:
*** Test Cases ***
Add Multiple Records To Timesheet By Multi Add Generator
    [Tags]    PD-61083
Check this link for details on how to configure the integration:
https://docs.getxray.app/display/XRAY/Integration+with+Jenkins
The plugin can keep track of the execution in a specific Test Execution item or create one per run, but it should keep referring to the same Test item.
When you upload the RF results, Xray will auto-provision Test issues, one for each Robot Framework test case. This is the typical behavior, which you may override if you want to report results against an existing Test issue. In that case, you would have a Test in Jira and you would add a tag to the RF test case entry with the issue key of the existing Test issue.
However, taking advantage of the auto-provisioning of Tests is easier and is probably the most common use case. Xray will only provision/create Test issues if they don't exist; for this, Xray tries to figure out whether a Generic Test already exists with the same definition (i.e. the names of the RF test suites plus the test case name). If it finds one, it will just report results (i.e. create a Test Run) against the existing Test issue.
If Test issues are always being created each time you submit the test results, that's unexpected behavior and needs to be analyzed in more detail.
There is another entity to keep in mind: the Test Execution.
Your results will be part of a Test Execution. Every time you submit test results, a new Test Execution issue is created, unless you specify otherwise. In the REST API request (or in the Jenkins plugin) you may specify an existing Test Execution by its issue key. If you do so, the results will be overwritten on that Test Execution and no new Test Execution issue will be created. Think of it as reusing a given Test Execution.
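For Xray on Jira server/DC, reusing a Test Execution comes down to adding the `testExecKey` parameter to the Robot import endpoint. A minimal sketch that builds such a request (the host, project key, and execution key are placeholders; the endpoint path follows Xray's documented server/DC "import execution (Robot)" route):

```python
# Build (without sending) a request that imports Robot results into Xray
# server/DC, reusing an existing Test Execution via testExecKey.
# Host, project key, and issue key below are placeholders.
import urllib.parse
import urllib.request

def build_import_request(base_url: str, project_key: str,
                         results_xml: bytes, test_exec_key: str = None):
    params = {"projectKey": project_key}
    if test_exec_key:
        # Reuse this Test Execution instead of creating a new one per upload.
        params["testExecKey"] = test_exec_key
    url = (f"{base_url}/rest/raven/1.0/import/execution/robot?"
           + urllib.parse.urlencode(params))
    return urllib.request.Request(url, data=results_xml, method="POST",
                                  headers={"Content-Type": "application/xml"})

req = build_import_request("https://jira.example.com", "CALC",
                           b"<robot/>", test_exec_key="CALC-123")
print(req.full_url)
```

Sending the request (with authentication headers added) via `urllib.request.urlopen(req)` would overwrite the results on CALC-123 rather than create a new Test Execution; omitting `test_exec_key` gives the default create-one-per-run behavior.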
How the integration works and the available capabilities are described in some detail in the documentation.
As an additional reference, let me also share this RF tutorial, as it may be useful to you.

Using Microsoft custom translation (free tier), we could build a customized model, but can we test the model?

We are trying out Microsoft custom translation.
We followed the quick start document and succeeded in building a model.
However, it seems we can train the model but not deploy it using the free plan.
In this case, how can we use the trained model? Is it possible to download it and try it locally?
Edit:
I am using a dictionary with only one word, and I don't see the system test results for the model. Is that expected?
The free tier is for training and viewing the results; by doing so you can check what the output of a deployed model would be, to see whether it would be effective, and then decide whether you want to proceed with deploying it.
You can view the System Test Results by selecting a project, then selecting the Models tab of that project, locating the model you want to use, and finally selecting the Test tab, as stated in the documentation.
This allows you to try the service and test its effectiveness without deploying it, but if you want to deploy the model you will have to switch to a paid tier.

PHPUnit Code coverage analysis for code called over HTTP

I am trying to find a reasonable approach to getting a code coverage report for code that is called from within a test via HTTP. Basically, I am testing my own API the way it is supposed to be called, but because of that, PHPUnit/Xdebug are unaware of the execution of the code within the same codebase.
What I want to achieve is essentially what the PHPUnit Selenium extension already does, but I don't run Selenium; I call the code through an OAuth2 client which in turn uses curl.
Would it be possible to call my API with a GET parameter that triggers a code coverage report, and to have PHPUnit read that report and merge it with the other coverage data? Is there a project that already does this, or do I have to resort to writing my own PHPUnit extension?
The OP says the problem is that Xdebug-based code coverage collection won't/can't collect coverage data because Xdebug isn't enabled in all the (PHP) processes that execute the code.
There would seem to be only two ways out of this.
1) Find a way to enable Xdebug in all invoked processes. I don't know how to do this, but I would expect there to be some configuration parameter for the PHP interpreter that causes it. I also can't speak to whether separate Xdebug-based coverage reports can be merged into one. On the face of it, the raw coverage data is abstractly just a set of "this location got executed" signals, so merging should just be a set union. How these sets are collected and encoded may make this more problematic.
2) Find a coverage solution that doesn't involve Xdebug, so that whether Xdebug is enabled or not is irrelevant. My company's (see bio) PHP test coverage tool does not use Xdebug, so it can collect the test coverage data you want without an issue. You can download it and try it; there's a built-in example of test coverage collection triggered exactly by HTTP requests. The tool has a built-in ability to merge separate test-coverage runs into an integrated result. (I'd provide a direct link, but some SO people are virulently against this.)
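The set-union intuition from option 1 can be sketched over a toy coverage format, per-file sets of executed line numbers from two separate runs (the file names and line numbers below are made up for illustration):

```python
# Merge coverage data from multiple runs by set union.
# Each run is a dict mapping file path -> set of executed line numbers.
def merge_coverage(*runs):
    merged = {}
    for run in runs:
        for path, lines in run.items():
            merged.setdefault(path, set()).update(lines)
    return merged

# E.g. one run driven directly by PHPUnit, one driven over HTTP.
unit_run = {"src/Api.php": {3, 4, 10}}
http_run = {"src/Api.php": {4, 11}, "src/Auth.php": {7}}

print(merge_coverage(unit_run, http_run))
```

Real Xdebug/php-code-coverage output is more structured than this (it distinguishes executed, executable, and dead lines), which is exactly the "how these sets are collected and encoded" caveat above.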
