Is there any way to rerun the setup step every few minutes in a long duration test? - k6

I am starting with K6, and I have created my first test scenario.
The first thing I do in the setup() stage is get a token, and then in the test code I use that token to perform operations. However, the token expires after half an hour, so if my test runs longer than that, the token becomes invalid and the test fails.
Is there any way to rerun the setup() step after half an hour, to get the token again? I can think of a few approaches, but none seems good to me: in cloud testing the new token would not propagate, and some iterations would take longer because they would have to fetch a new token from another URL.
Thank you very much in advance!
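One pattern worth considering (not from the original thread) is to let each VU cache the token and refresh it itself when it nears expiry, rather than relying on setup() alone; this also behaves well in cloud runs, since every load generator refreshes independently. Below is a minimal sketch of just the caching logic, where fetchToken() is a hypothetical stand-in for the real HTTP call to the auth endpoint:

```javascript
// Sketch: each VU caches its token and refreshes it before the 30-minute expiry.
// fetchToken() is a made-up stand-in for the real HTTP call to the auth endpoint.
let fetchCount = 0;
function fetchToken() {
  fetchCount += 1;
  return `token-${fetchCount}`;
}

const TOKEN_TTL_MS = 25 * 60 * 1000; // refresh a bit before the 30-minute expiry

let cachedToken = null;
let fetchedAtMs = -Infinity;

function getToken(nowMs = Date.now()) {
  if (cachedToken === null || nowMs - fetchedAtMs > TOKEN_TTL_MS) {
    cachedToken = fetchToken();
    fetchedAtMs = nowMs;
  }
  return cachedToken;
}
```

In a real k6 script, fetchToken() would use http.post() against the token URL and getToken() would be called at the top of the default function, so only the iterations that cross the refresh threshold pay for the extra request.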

Related

LogicApp HTTP Handling non 200 response without the app failing

I have a logic app which grabs some info, iterates over the information and then makes some HTTP requests.
Some of these requests will succeed, and it is expected some will fail from time to time.
I would like the logic app to "not fail" just because sometimes a GET request will fail when we're okay for that to happen.
The reason this is important is that we're looping over an array of values.
Each GET request is partly formed from the data item we are iterating over.
We want the requests which are successful to continue working and to basically ignore the errors.
The loop is like this:
Get list of IDs
FOR EACH ID
    GET REQUEST
    IF FAIL CONTINUE
NEXT ID
At the end of the run, the app is marked as success and all IDs in the list have had a GET request.
Thanks in advance
Just found the answer.
On the step after the one which can fail (the HTTP action in this case), click the three vertical dots where the options menu lives, then click "Configure run after" and tick the statuses (such as "has failed") that should still let the step run.
Remember: this is configured on the step after the step which can fail.
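In the underlying workflow definition, the same setting shows up as the runAfter property on the follow-up action. A rough sketch of what the designer writes when the step is allowed to run even after a failure (the action names here are made up):

```json
"Handle_Response": {
  "type": "Compose",
  "inputs": "@outputs('HTTP_Request')",
  "runAfter": {
    "HTTP_Request": [ "Succeeded", "Failed", "TimedOut" ]
  }
}
```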

Firebase Timestamp Syncing

In my turn-based online game I have an in-game timer that ticks down from 24 hours to 0; when it reaches 0 for either player, that player has lost.
When a player makes their turn they write something like this to the database:
action: "not important"
timeStamp: 1670000000
What I want is for either of the two players to be able to get into the ongoing game at any time, read "timeStamp" and set the clock accordingly, showing how much time is left since the last action.
When writing to the database I am using ServerValue.TIMESTAMP (Android). I am aware of the ability to estimate the server time using ServerTimeOffset, described here:
https://firebase.google.com/docs/database/android/offline-capabilities#server-timestamps
But I feel it's not always accurate when testing, so I wanted to explore if there is any other way to do this. What I really want is to get the actual server timestamp when reading the node:
timeLeft = actionTimeStamp - currentServerTime + 24h
Is this possible to do in ONE call? I am using RTDB, but I am open to moving to Firestore if it is possible there somehow.
There's no way to get the server timestamp without writing it to the database, but you can of course have each client write it and then immediately read it back.
That said, it shouldn't differ much from using the initial start time that was written plus the serverTimeOffset value.
For a working example, have a look at How to implement a distributed countdown timer in Firebase
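The arithmetic from the question can be sketched as a pure function: estimate the server clock as the local clock plus the offset read from .info/serverTimeOffset, then subtract that from the 24-hour deadline. The function and parameter names below are made up for illustration:

```javascript
const TURN_LIMIT_MS = 24 * 60 * 60 * 1000; // the 24h countdown from the question

// actionTimestampMs:   the ServerValue.TIMESTAMP written with the last move
// localNowMs:          the client's current clock
// serverTimeOffsetMs:  the estimated offset from .info/serverTimeOffset
function timeLeftMs(actionTimestampMs, localNowMs, serverTimeOffsetMs) {
  const estimatedServerNowMs = localNowMs + serverTimeOffsetMs;
  return actionTimestampMs + TURN_LIMIT_MS - estimatedServerNowMs;
}
```

This mirrors the question's timeLeft = actionTimeStamp - currentServerTime + 24h; the write-then-read approach from the answer simply replaces the offset estimate with an actual server timestamp.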

Firebase: Is there a difference between client transactions and cloud function transactions?

As pointed out here I am trying to find a way to show trending posts with Firebase.
Since I see no other way to solve this, I've decided to go with redundancy:
-trendingToday
    -$date
        -$postId
            -numberOfLikes // negative number for descending order
-trendingMonth
    -$date
        -$postId
            -numberOfLikes // negative number for descending order
When a user likes a post, first trendingToday/$date/$postId/numberOfLikes gets decreased by 1 with a transaction. Then there should be a for-loop to decrease the number in trendingMonth/$date/$postId/numberOfLikes where $date loops through the next 30 days. This should also be performed with transactions.
Now the question is: Am I better off with doing this logic on the client or is it preferable to solve this with cloud functions?
If you choose to perform a lot of items of work on the client, there is a chance that the work may not all complete if the user kills the app or it loses connectivity or some other interruption.
A Cloud Function is highly unlikely to get interrupted during its course of execution, so there is a much better chance of all your transactions completing consistently.
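A minimal sketch of the fan-out the question describes, assuming the Admin SDK inside a Cloud Function. The runnable part below only generates the 30 UTC date keys; the per-key transaction is shown in a comment because it needs a live database (the ref paths come from the question, the helper name is made up):

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Generate "YYYY-MM-DD" keys for the next 30 days (UTC),
// one per trendingMonth/$date bucket.
function next30DateKeys(startMs) {
  const keys = [];
  for (let i = 0; i < 30; i++) {
    keys.push(new Date(startMs + i * DAY_MS).toISOString().slice(0, 10));
  }
  return keys;
}

// Inside a Cloud Function, each key would get its own transaction, e.g.:
// for (const key of next30DateKeys(Date.now())) {
//   await admin.database()
//     .ref(`trendingMonth/${key}/${postId}/numberOfLikes`)
//     .transaction(n => (n || 0) - 1); // negative counts sort descending
// }
```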

How to add pause between requests?

I have a set of requests I run together in a group. One of them starts some async operations on the server. I need to insert an n-second pause between this request and the next to give those async operations time to complete. How can I do this?
Unfortunately it isn't possible yet with Paw. Though we're going to bring a nice testing flow (with assertions, request flow, waits, etc.) in a future release.
As a temporary workaround, you could add a dummy request to a "sleep" page, like http://httpbin.org/delay/3 (which waits 3 seconds before responding).

Graphite Render URL API to Splunk - Track received events?

I'd like to set up a scripted input in Splunk to curl the render URL API for Graphite. I imagine I could configure this input to run every minute and retrieve the last minute's worth of events.
My concern with this is that some events might be missed, or duplicated.
Has anybody done something similar to this? How could I keep track of the events from Graphite that I have already read?
If you write a modular input you can use data checkpoints. See the docs for more info: http://docs.splunk.com/Documentation/Splunk/6.2.1/AdvancedDev/ModInputsCheckpoint
My concern with this is that some events might be missed, or duplicated.
Yes, events may go missing, in two cases:
1. If you're pushing your Graphite server to its limits, there is a lag between the point where a datapoint is received and its flushing to disk. With large queues, I have seen this go up to 20 minutes (IO is the constraint here). For example, with a 20-minute lag and data stored at 1-minute granularity, the latest 20 datapoints will show NULL against their timestamps; of course, they will fill in with the next flush. These lags are indeterminate, so only rely on the very latest datapoints if you have a zero-lag deployment.
2. Because of Graphite's flushing behavior, the latest datapoint may or may not be NULL at any given moment, even if nothing is throttling. You can use something like &from=-21m&to=-1m to make sure you never encounter this. Note: your monitoring now lags by a minute. :)
All said, Graphite is a great monitoring tool if your requirements aren't real-time.
