I am trying to integrate TestRail with TestCafe in order to update the test script execution status on TestRail. I followed the link below, but did not succeed:
https://www.npmjs.com/package/testcafe-reporter-html-testrail
I also tried the approach below, but it did not work either:
test('<< Group Name>> | << Test Name >> | << Testrail Case_ID >> ', async t => { .... });
Can you please help me with this?
Note: my question is the same as https://testcafe-discuss.devexpress.com/t/is-there-anyway-to-post-test-results-from-a-run/377
As I understand from the https://testcafe-discuss.devexpress.com/t/is-there-anyway-to-post-test-results-from-a-run/377 thread, it should be enough to send some requests to the TestRail API to solve the issue.
I took a look at testcafe-reporter-html-testrail and, at first glance, it should send such requests.
I would recommend reviewing the code of testcafe-reporter-html-testrail and debugging it to find out why it does not work. I suggest starting with these reporter methods:
https://devexpress.github.io/testcafe/documentation/extending-testcafe/reporter-plugin/reporter-methods.html.
Since testcafe-reporter-html-testrail is not an official TestCafe reporter, we cannot provide any detailed information about it. I cannot find the repository of testcafe-reporter-html-testrail on GitHub, but it still exists on npm, so you can probably contact the author of this module.
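If you end up sending those requests to the TestRail API yourself, here is a rough sketch of what posting a single result to TestRail's add_result_for_case endpoint could look like, using Python's requests library. The host, credentials, run ID, and case ID are placeholders; status IDs 1 (Passed) and 5 (Failed) are TestRail's defaults.
# Rough sketch: report one test result to TestRail via its API v2.
# Host, credentials, and IDs below are placeholders.
import requests

TESTRAIL_HOST = "https://example.testrail.io"
TESTRAIL_USER = "user@example.com"
TESTRAIL_PASS = "api-key-or-password"

def report_result(run_id, case_id, passed, comment=""):
    # status_id 1 = Passed, 5 = Failed (TestRail defaults)
    payload = {"status_id": 1 if passed else 5, "comment": comment}
    url = f"{TESTRAIL_HOST}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}"
    response = requests.post(
        url,
        json=payload,
        auth=(TESTRAIL_USER, TESTRAIL_PASS),
        headers={"Content-Type": "application/json"},
    )
    response.raise_for_status()
    return response.json()

# Example: mark case 123 in run 45 as passed after a TestCafe run
# report_result(45, 123, passed=True, comment="Executed by TestCafe")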
If you are running tests using the TestCafe CLI, this is how you would pass the TestRail environment variables that the testcafe-reporter-html-testrail plugin needs in order to work:
TESTRAIL_ENABLE=true TESTRAIL_HOST=http://example.net/ TESTRAIL_USER=abc@example.net TESTRAIL_PASS=password PROJECT_NAME='ABC' testcafe chrome test.js
I am orchestrating a Dataflow Template job via Composer, using DataflowTemplatedJobStartOperator and DataflowJobStatusSensor to run the job. I am getting the following error with the sensor operator.
Failure log of DataflowJobStatusSensor
job_status = job["currentState"]
KeyError: 'currentState'
Error
The Dataflow Template job runs successfully, but DataflowJobStatusSensor always fails with the error above. I have attached a screenshot of the whole orchestration.
[2022-02-11 04:18:11,057] {dataflow.py:100} INFO - Waiting for job to be in one of the states: JOB_STATE_DONE.
[2022-02-11 04:18:11,109] {credentials_provider.py:300} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2022-02-11 04:18:11,776] {taskinstance.py:1152} ERROR - 'currentState'
Traceback (most recent call last):
Code
wait_for_job = DataflowJobStatusSensor(
task_id="wait_for_job",
job_id="{{task_instance.xcom_pull('start_x_job')['dataflow_job_id']}}",
expected_statuses={DataflowJobStatus.JOB_STATE_DONE},
location=gce_region
)
Xcom value -
return_value
{"id": "2022-02-12_02_35_39-14489165686319399318", "projectId": "xx38", "name": "start-x-job-0b4921", "type": "JOB_TYPE_BATCH", "currentStateTime": "1970-01-01T00:00:00Z", "createTime": "2022-02-12T10:35:40.423475Z", "location": "us-xxx", "startTime": "2022-02-12T10:35:40.423475Z"}
Any clue why I am getting the 'currentState' error?
Thanks
After checking the documentation for version 1.10.15: it gives you the option to run Airflow providers (from version 2.0.*) on Airflow 1.10. So you shouldn't have issues; as described in my comments, you should be able to run example_dataflow, although you might need to update the code to reflect your version.
From what I see in your error message, have you also checked your credentials as described on the Google Cloud Connection page? Use the example, or a small DAG run using the operators, to test your connection (see the sketch after this paragraph). You can find video guides, like this video. Remember that the credentials must be within reach of your Airflow application.
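For example, a minimal DAG whose only purpose is to verify that the Google Cloud connection works could look like the sketch below; the bucket name and connection ID are placeholders, and the import paths assume the google provider package is installed.
# Rough sketch: a tiny DAG that only checks the Google Cloud credentials by
# listing a bucket. Bucket and conn_id are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.gcs import GCSListObjectsOperator

with DAG(
    dag_id="gcp_connection_check",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    list_bucket = GCSListObjectsOperator(
        task_id="list_bucket",
        bucket="my-test-bucket",            # placeholder bucket
        gcp_conn_id="google_cloud_default",
    )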
Also, if you are using google-dataflow-composer, you should be able to set up your credentials as shown in the DataflowTemplateOperator configuration.
As a final note, if you find it messy to move forward with the Airflow migration and the latest updates, your best approach is to use the Kubernetes operator. In the short term, this allows you to build an image with the latest updates; you only have to pass the credential info to the image, and you can keep updating your Docker image to the latest versions and it will keep working regardless of the Airflow version you are using. It's a short-term solution; you should still consider migrating to 2.0.* (see the sketch below).
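This is a rough, illustrative sketch only: the image, namespace, and command are placeholders, and the KubernetesPodOperator import path can differ between provider versions.
# Rough sketch: run the Dataflow launch step inside a container, so the code
# in the image stays decoupled from the Airflow version. All names are placeholders.
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

launch_dataflow_job = KubernetesPodOperator(
    task_id="launch_dataflow_job",
    name="launch-dataflow-job",
    namespace="default",
    image="gcr.io/my-project/dataflow-launcher:latest",  # hypothetical image with your launch code
    cmds=["python", "launch_job.py"],                     # hypothetical entrypoint
    get_logs=True,
    is_delete_operator_pod=True,
)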
I've configured Firebase A/B Testing. Everything works fine, except that no experiment users show up in the console.
In fact, I can see in the UI and the logs that the A/B test is applied.
Moreover, following another Stack Overflow topic, activateFetched is also invoked after the fetch succeeds.
I've also referenced:
Firebase Remote Config A/B testing shows no results after 24 hours
Firebase Remote Config results on initial request
Remote Config A/B Test does not provide results on iOS
But those did not work in my case.
Is there anything I'm missing, or anything else I need to check, so that the client can report the A/B testing result to the Firebase console?
Thanks in advance for your help.
Code snippet:
[FIRApp configure];
FIRRemoteConfigSettings *configSettings = [[FIRRemoteConfigSettings alloc] initWithDeveloperModeEnabled:YES];
[[FIRRemoteConfig remoteConfig] setConfigSettings:configSettings];
[[FIRRemoteConfig remoteConfig] fetchWithExpirationDuration:duration completionHandler:^(FIRRemoteConfigFetchStatus status, NSError *error) {
    if (status == FIRRemoteConfigFetchStatusSuccess) {
        // Activate the fetched config so the experiment values take effect
        BOOL configFound = [[FIRRemoteConfig remoteConfig] activateFetched];
    }
}];
A couple things to check or take note of:
Make sure you're using and have deployed the latest Remote Config SDK. Earlier versions don't work with A/B test experiments.
Be sure to verify your experiment on a test device by following the documentation here
It can take a couple days for data to come in for your experiment.
Please call the functions in the following order:
1. fetch()
2. Call activateFetched() in the completion handler of fetch().
3. Fire the activation event. If you need to fire the activation event immediately after activateFetched(), add a time delay of a few seconds. This is because activateFetched() processes asynchronously, so it may not have finished executing before the activation event is fired.
Once done, test a running experiment on a test device. In the debug logs, search for the string "exp_X", where 'X' is the experiment ID. You will find the experiment ID in the URL of the experiment. If you find the experiment ID in the debug logs while executing the code on the test device, it means the device was included in the experiment.
Also, if the experiment setup is correct, the running experiment will show 1 active experiment user in the console.
I am getting this error every so many runs with my HTTP Firebase Cloud Function:
Function execution took ****ms, finished with status: 'connection error'
It happens inconsistently, and I can't quite narrow down what the problem is. I don't believe the error is in my app, as it's not showing an error printout, and my own connection to Firebase while running this Cloud Function isn't cutting out.
Any ideas why Firebase randomly fails cloud function executions with "connection error"?
Function execution took ****ms, finished with status: 'connection error' or ECONNRESET usually happens when a function doesn’t know whether a promise resolved or not.
Every promise must be returned, as mentioned in the docs here. There is also a blog post (with helpful video!) about this.
Here are a few examples of unreturned promises:
exports.someFunc = functions.database.ref('/some/path').onCreate(event => {
let db = admin.database();
// UNRETURNED PROMISE
db.ref('/some/path').remove();
return db.ref('/some/other/path').set(event.data.val());
});
exports.makeUppercase = functions.database.ref('/hello/{pushId}').onWrite(event => {
return event.data.ref.set('world').then(snap => {
// UNRETURNED PROMISE
admin.database().ref('lastwrite').set(admin.database.ServerValue.TIMESTAMP);
});
});
exports.makeUppercase = functions.database.ref('/hello/{pushId}').onWrite(event => {
// UNRETURNED PROMISE
event.data.ref.set('world').then(snap => {
return admin.database().ref('lastwrite').set(admin.database.ServerValue.TIMESTAMP);
});
});
To help catch this mistake before deploying code, check out this eslint rule.
For an in-depth look at promises, here are some helpful resources:
Mozilla docs
Ponyfoo promises deep dive
Links to the ECMA standard
Egghead.io course
Even though this question has an approved answer, you may have followed the steps in that answer and still found the error occurring.
In that case, we were informed by GCP that there's a known issue with Node 8 Cloud Functions and this connection error, for which the workaround is to update the Node version to 10.
Related github issue: https://github.com/firebase/firebase-functions/issues/429
Specific comment: https://github.com/firebase/firebase-functions/issues/429#issuecomment-577324193
I think it might be too many simultaneous firebase database connections :/ https://groups.google.com/forum/#!topic/firebase-talk/4RjyYIDqMVQ
I faced the same issue while deploying an uninstallTracking event to Firebase for an Android device.
It turned out that the property I was trying to access was only available for some users, so when it couldn't find the property for the other users, it gave this error.
So first, check whether the property you are trying to access is actually there or not.
I've been getting this on an HTTP trigger that immediately calls response.end() with no other code!
I had a very complex function that was working great, then it stopped working due to this error. I tried for hours playing with my code until there was nothing left but a response.end(), and still the error persisted.
I found that deleting the trigger (deploying my triggers with the offending trigger commented out) and then deploying again with the trigger uncommented seems to have fixed it.
Perhaps there is a bug that works its way in and gets reset when you delete the trigger in the cloud.
Hope this saves somebody some frustration.
It could be outdated libraries.
Go to the terminal and, inside the functions folder, run the command:
npm outdated
This will show all the libraries that need to be updated.
To update the libraries, run the command:
npm update
Then deploy the Cloud Functions with:
firebase deploy --only functions
For debugging purposes, I did the following:
response.send(someArray.length)
...which resulted in the following:
response.send(218)
...which resulted in a bodyless response, just a "status code" (namely 218) being sent. To fix this, I did:
response.send("count: " + someArray.length)
I am using spring-boot-1.4.0. In my project I am using Sentry for logging, and sometimes log events are not reflected in Sentry. While browsing Google about this issue, I saw something called "Raven-Sentry", but it was written in Python. Is there a Raven-Sentry available for Spring Boot? I am using the following Raven callback, but I am still unsure how to curl or create a REST endpoint that would tell me the status of Sentry, i.e. whether it is up or down. Please let me know if any more details are needed; I am ready to provide code samples as well.
Your help would be appreciated.
As per Brett's comments, I have updated my question with a link to a Python Sentry connection test:
Python-sentry-test
In the above link, they run a test to find out whether the connection to Sentry is successful or not. Similarly, I want to check whether the connection to Sentry is successful via Spring Boot. I would also like to add the Sentry status to a health check, so that whenever my logging events are not reflected in Sentry, I can immediately flip the health of Sentry to down.
We are using Robot Framework and the RIDE tool for test case execution. We have 100+ test cases, and test execution takes more than 6 hours to complete.
The RF report and log HTML files are great for viewing results, but these two files are viewable only after test execution has completed.
Is there any plugin, tool, or mechanism to view test case result status during execution? The "Run" tab in the RIDE tool only shows pass:<> fail:<> counts, which is not very useful.
We need a real-time test case status report instead of waiting for completion.
You can use the listener interface. With it, you can have Robot Framework call a Python function each time a keyword, test case, or suite starts and finishes. For the cases where they finish, the data that is passed in will include the pass or fail status; a minimal sketch of such a listener is shown below.
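This is a rough sketch of a module-based listener using listener API version 2; the file name and the print format are just examples.
# StatusListener.py - rough sketch of a Robot Framework listener (API v2)
# that reports every test's status as soon as it finishes.
ROBOT_LISTENER_API_VERSION = 2

def end_test(name, attributes):
    # attributes['status'] is 'PASS' or 'FAIL'; 'message' holds the failure reason
    print(f"{name}: {attributes['status']}  {attributes.get('message', '')}")

def end_suite(name, attributes):
    # attributes['statistics'] is a summary string, e.g. "10 tests, 9 passed, 1 failed"
    print(f"Suite {name} finished: {attributes['statistics']}")
You would then start the run with something like robot --listener StatusListener.py tests/ (RIDE also lets you pass --listener through its run arguments field).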
Using the listener interface (as Bryan Oakley suggested) is surely the most flexible way to intercept test progression status. If you are looking for tools, Jenkins (with the Robot Framework plugin) gives you the opportunity to follow a test run in real time at test case granularity. Just start a job and switch to the (Jenkins) console to see the output drop in.