How to find the DataDog Synthetics test location for the current test run

We have the same test running concurrently in different locations. The tests currently re-use a single user, and the problem is that this user cannot be logged in from both locations at the same time. We would like to switch to an alternative user when the test is running from a non-default location.
So we are trying to find a way to identify, from inside the UI synthetic test, which location is executing it, so that we can swap the username/password variables at runtime.
Has anyone got any advice on how to achieve this? I was hoping there would be some standard run configuration settings populated that we could query, but I can't find any documentation about them.

Related

How to retry R testthat test on API error?

Some tests rely on external services (e.g. APIs). Sometimes these external services go down, which can cause tests to fail and, worse, continuous integration builds to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt 5 minutes after the first?
Ideally you would write your tests so that they don't call the API or the database.
Instead, you mock the API endpoints according to their specification, and you also write tests for the cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
If you are worried that your vendor might change the API, talk to them and get details on their API change management process.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw a line between API validation and testing of your own code.
You will need two separate processes:
Unit / acceptance tests that are executed against mocks of the API endpoints (see the sketch after this list). These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is largely redundant. In exceptional cases it can be useful, but only with a very bad vendor.
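To make the first point concrete, here is a minimal sketch of a unit test running against a mocked endpoint. It is written in Java with JUnit 5 rather than R/httptest, purely to illustrate the idea; ExchangeRateClient and InvoiceConverter are hypothetical names, not part of any real library.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical wrapper around the vendor API; production code would make HTTP calls here.
interface ExchangeRateClient {
    double usdToEur(double amount);
}

// The application logic under test is written against the interface, not the live endpoint.
class InvoiceConverter {
    private final ExchangeRateClient client;

    InvoiceConverter(ExchangeRateClient client) {
        this.client = client;
    }

    double convert(double usdTotal) {
        if (usdTotal < 0) {
            throw new IllegalArgumentException("total must be non-negative");
        }
        return client.usdToEur(usdTotal);
    }
}

class InvoiceConverterTest {

    @Test
    void convertsUsingMockedEndpoint() {
        // Stub the endpoint according to the API specification -- no network involved.
        ExchangeRateClient fake = amount -> amount * 0.9;
        assertEquals(90.0, new InvoiceConverter(fake).convert(100.0), 1e-9);
    }

    @Test
    void surfacesApiErrors() {
        // Simulate the vendor returning an error so the failure path is covered too.
        ExchangeRateClient failing = amount -> { throw new IllegalStateException("HTTP 503"); };
        assertThrows(IllegalStateException.class,
                () -> new InvoiceConverter(failing).convert(100.0));
    }
}
```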

Isolating database operations for integration tests

I am using NHibernate and ASP.NET/MVC. I use one session per request to handle database operations. For integration testing I am looking for a way to have each test run in an isolated mode that will not change the database and interfere with other tests running in parallel. Something like a transaction that can be rolled back at the end of the test. The main challenge is that each test can make multiple requests. If one request changes data the next request must be able to see these changes etc.
I tried binding the session to the auth cookie to create child sessions for the following requests of a test. But that does not work well, as neither sessions nor transactions are threadsafe in NHibernate. (it results in trying to open multiple DataReaders on the same connection)
I also checked whether TransactionScope could be a way, but could not figure out how to use it from multiple threads/requests.
What could be a good way to make this happen?
I typically do this by operating on different data.
For example, say I have an integration test that checks a basket total for an e-commerce website.
I would create a new user, activate it, add some items to a basket, calculate the total, assert on whatever I need, and then delete all the created data.
So the flow is: create the data you need, operate on it, check it, delete it. This way all the tests can run in parallel without interfering with each other, and the data is always cleaned up.
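A minimal, self-contained sketch of that create / operate / check / delete flow, written in Java with JUnit 5; TestShopApi and all of its methods are hypothetical stand-ins for whatever client drives the application under test (a real suite would hit a deployed instance rather than an in-memory map):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.jupiter.api.Test;

// Hypothetical test client; swap the in-memory maps for real HTTP calls in an actual suite.
class TestShopApi {
    private final Map<String, List<BigDecimal>> baskets = new HashMap<>();

    String createAndActivateUser(String name) {
        baskets.put(name, new ArrayList<>());
        return name;
    }

    void addItemToBasket(String userId, String sku, BigDecimal price) {
        baskets.get(userId).add(price);
    }

    BigDecimal basketTotal(String userId) {
        return baskets.get(userId).stream().reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    void deleteUserAndData(String userId) {
        baskets.remove(userId);
    }
}

class BasketTotalIntegrationTest {

    private final TestShopApi api = new TestShopApi();

    @Test
    void basketTotalReflectsAddedItems() {
        // 1. Create the data this test needs: a throwaway user unique to this run,
        //    so tests running in parallel never compete for the same account.
        String userId = api.createAndActivateUser("basket-test-" + System.nanoTime());
        try {
            // 2. Operate on it.
            api.addItemToBasket(userId, "SKU-1", new BigDecimal("10.00"));
            api.addItemToBasket(userId, "SKU-2", new BigDecimal("5.50"));

            // 3. Check it.
            assertEquals(new BigDecimal("15.50"), api.basketTotal(userId));
        } finally {
            // 4. Delete everything the test created, even when the assertion fails.
            api.deleteUserAndData(userId);
        }
    }
}
```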

How to verify JMeter Recorded Load Test Results

I have created a recorded test plan for my web application using JMeter. My web application basically creates a financial plan for new and existing customers, and I recorded all the steps required to create a financial plan for a new customer.
I am not sure how to validate that JMeter actually replays the recorded steps for every thread (user); at the moment I am only looking at a Graph Results listener and checking throughput at the end of the plan. Any suggestions would be appreciated. Thanks!
Add a View Results Tree listener to your test plan and execute the test with 1-2 virtual users. Inspect the "Response Data" tab of each request to ensure it does what it is supposed to do.
If you use any JMeter Variables and want to check their values, add Debug Sampler(s) to the test plan where needed; the variable values can then be inspected via the aforementioned View Results Tree listener.
See the How to Debug your Apache JMeter Script guide for more information on debugging JMeter tests.
Don't forget to remove or disable the View Results Tree listener for the actual load test, as it is too resource intensive. Also make sure you run JMeter in command-line (non-GUI) mode for the real load.
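For reference, a typical non-GUI run looks like the following (file and folder names are placeholders): -n disables the GUI, -t points at the test plan, -l writes the results file, and -e/-o generate the HTML report when the run finishes.

```
jmeter -n -t recorded-plan.jmx -l results.jtl -e -o report
```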

Retrieve value of root variables in state machine workflow, during resume

I have a State Machine workflow which I am self-hosting within a WorkflowApplication.
When I resume a persisted workflow, I need to be able to extract the value of a given root-level variable (hence it should be in scope) - out of the workflow and into my hosting environment - just to be clear, I need to access the variables from OUTSIDE of the workflow.
I have followed instructions here: HOW to get Context of WorkflowApplication? but I am not receiving any events (upon resumption) that would contain the values of the root-level variables.
I am starting to think that resuming a workflow does not fire any tracking events that would allow one to grab the root-level variables.
Does anybody have a clearer explanation, perhaps a test harness?
Thanks

When writing UI tests, how does one test one thing that lies at the end of a long sequence?

I just got started running UI tests against my ASP.NET MVC application using WatiN. It's a great tool and really intuitive, but I find myself wondering what belongs in an individual test.
I found a number of people suggesting that these tests should be treated like unit tests and as such there should be no expectations on order or side effects.
I run into problems when the user story assumes that the user has completed a series of steps before completing the activity I want to test.
Some examples...
A user must register, log out, and enter the wrong password 3 times to verify that the system won't let them log in again with the right password
A user must register, add some foos, add some bars, submit a form that allows them to select among their foos and bars, and see their submission on another page
With unit tests, I can use mocking to take care of the prerequisite tasks.
What are some good ways of handling this scenario, so that I can avoid repeating the same prerequisite steps in every individual test while still having tests that complete reliably every time?
Hey.
I would split integration tests from story acceptance tests.
Check the PageObject pattern - you create a LoginPage class with methods like loginAs(String username, String password) and loginAsExpectingError(String username, String password). You write other classes in the same way, which gives you an automation framework for your app. You can then use it as follows:
At the integration level you check that the application components work properly when you provide correct credentials (loginAs) and when you provide wrong credentials (loginAsExpectingError).
At the acceptance level you use LoginPage.loginAs() as the first step of your acceptance test. The second step could be something like MainPage.addSomeFoos(), then MainPage.addSomeBars(), then MainPage.logOut().
If your unit tests pass, run the integration tests; if those pass, run the acceptance tests.
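A rough illustration of the page-object idea, sketched with Selenium WebDriver in Java rather than WatiN; the element locators and the MainPage methods are assumptions about a hypothetical application, not the poster's actual code:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for the login screen; every locator here is an assumption about the app's markup.
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Happy path: returns the page the user lands on after a successful login.
    MainPage loginAs(String username, String password) {
        submitCredentials(username, password);
        return new MainPage(driver);
    }

    // Failure path: stays on the login page so the test can assert on the error message.
    LoginPage loginAsExpectingError(String username, String password) {
        submitCredentials(username, password);
        return this;
    }

    String errorMessage() {
        return driver.findElement(By.cssSelector(".login-error")).getText();
    }

    private void submitCredentials(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}

// Page object for the landing page; addFoo/logOut wrap whatever UI steps the real app needs.
class MainPage {
    private final WebDriver driver;

    MainPage(WebDriver driver) {
        this.driver = driver;
    }

    MainPage addFoo(String name) {
        driver.findElement(By.id("new-foo")).sendKeys(name);
        driver.findElement(By.id("add-foo")).click();
        return this;
    }

    LoginPage logOut() {
        driver.findElement(By.id("logout")).click();
        return new LoginPage(driver);
    }
}
```

An acceptance test then reads as a chain of steps, e.g. new LoginPage(driver).loginAs("alice", "secret").addFoo("foo-1").addFoo("foo-2").logOut(), while an integration-level test exercises loginAs / loginAsExpectingError directly.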

Resources