Deleting test executions in a test plan using the REST API - jira-xray

So I'm trying to figure out how to implement some sort of retention policy in Xray JIRA for test executions, i.e. deleting all test executions older than a certain date. The plan is to implement an external application that periodically (maybe once a day) checks for and deletes test executions matched by the policy.
However, I tried using the REST API for deleting test executions (https://confluence.xpand-it.com/display/public/XRAY/Test+Plans+-+REST), but it only disassociates the test execution from the test plan; the execution itself is not deleted. Is deleting a test execution entry not available in the Xray JIRA REST API?
I am open to suggestions regarding the retention policy if there is another way to do it, since I can't find any documentation on whether this is supported natively in Xray JIRA.

Found a solution: I used the native JIRA REST API for deleting issues.
DELETE /rest/api/2/issue/{testExecutionKey}
Reference: https://developer.atlassian.com/cloud/jira/platform/rest/v2/#api-rest-api-2-issue-issueIdOrKey-delete
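A minimal retention sweep along those lines might look like this (a sketch only, assuming basic auth against a standard JIRA Server base URL; the base URL, credentials, and JQL window are placeholders to adapt, and it's worth testing against a sandbox project first):

# Hypothetical retention sweep: find Test Execution issues older than 90 days
# via JQL, then delete each one with the native JIRA issue DELETE endpoint.
import requests

JIRA = "https://jira.example.com"   # assumption: your JIRA base URL
AUTH = ("bot-user", "api-token")    # assumption: a service account
JQL = 'issuetype = "Test Execution" AND created < -90d'

resp = requests.get(f"{JIRA}/rest/api/2/search",
                    params={"jql": JQL, "fields": "key", "maxResults": 100},
                    auth=AUTH)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    key = issue["key"]
    # Deletes the issue itself, not just its association with a test plan.
    requests.delete(f"{JIRA}/rest/api/2/issue/{key}", auth=AUTH).raise_for_status()
    print(f"Deleted {key}")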

Pact flow for Event Driven Applications

Although Pact supports testing of messages, I find that the recommended flow in the "Pact Nirvana" guide doesn't quite match the flow that I understand an Event Driven Application needs.
Let's say we have an Order management service and a Shipping management service.
The Shipping service emits ShippingPreparedEvents that are received by the Order service.
If we deleted a field inside the ShippingPreparedEvent, I'd expect first to make a change to the Order service so that it stops reading the old field. Deploy it. And then make the change in the Shipping service and deploy it.
That way, there wouldn't be any downtime on the services.
However, I believe Pact would expect to deploy the Shipping service first (it's the provider of the event) so that the contract can be verified before deploying the consumer. In this case, deploying the provider first will break my consumer.
Can this situation be avoided somehow? Am I missing anything?
Just to provide more context, we can see in this link that different changes require a different order of deployment: https://docs.confluent.io/current/schema-registry/avro.html#summary
I won't be using Kafka or Avro, but I believe my flow would be similar.
Thanks a lot.
"If we deleted a field inside the ShippingPreparedEvent, I'd expect first to make a change to the Order service so that it stops reading the old field. Deploy it. And then make the change in the Shipping service and deploy it. That way, there wouldn't be any downtime on the services."
I agree. What specifically in the Pact Nirvana guide gives you the impression this isn't the way to go? Pact (and the Pact Broker) don't actually care about the order of deployments.
In your case, removing the field would cause a can-i-deploy check to fail, because removing the field would break the Order Management Service. The only approach is to remove the field usage from the consumer, publish a new version of that contract, and deploy the consumer to production first.
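To make that concrete, here is a sketch of the consumer-first change (the event and field names here are hypothetical, borrowed from the question's scenario):

# Hypothetical Order service handler for ShippingPreparedEvent.
# Step 1, deployed first: the consumer stops reading the deprecated field,
# so the republished contract no longer mentions it and can-i-deploy passes.
def handle_shipping_prepared(event: dict) -> None:
    order_id = event["orderId"]        # still part of the contract
    # carrier = event["carrierCode"]   # removed: no longer read, so the
    #                                  # provider can safely drop it later
    mark_order_shipped(order_id)

def mark_order_shipped(order_id: str) -> None:
    print(f"order {order_id} marked as shipped")

Only after this version of the Order service is in production does the Shipping service stop emitting the field.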

How to atomically update and roll back a Firebase Hosting site + Cloud Run service?

Suppose we have a site on Google Firebase Hosting that routes some requests to a Google Cloud Run service. The service is considered entirely an implementation detail and its only client is the single website. The only reason for using a Cloud Run service is that it is the only suitable technical option within the Firebase platform.
Now, suppose that the API of the service may have a breaking change with every update, so the Firebase Hosting content must change too. How do you update or roll back both parts together so as to avoid incompatibilities?
Straightforwardly, we can update the service and the site content in separate steps, but that means some requests from the old revision of the site may reach the new revision of the service or the other way around, causing errors due to API mismatch. The same issues are present when rolling back the site content and the service at the same time.
One theoretical solution would be to deterministically route requests to different service revisions based on revision labels, but that is not supported on Cloud Run.
One realistic solution would be to create a new service for every update of the site content. However, that would result in unbounded accumulation of services which are not automatically deleted like service revisions are.
Another solution (proposed below) would be to maintain backwards compatibility in the service - it would support both the latest and the previous API version. However, this can be considered an unnecessary overhead. Since the two parts (static content and the service) have no real need to ever be updated independently, it would be very convenient to avoid the overhead of maintaining backwards compatibility in the service.
As far as I know, there is no way to make this update in a single transaction to avoid the behavior you mentioned, as Firebase Hosting and Cloud Run are different products.
Also, a good practice in API design is to allow for service evolution: updating the API should not break the apps consuming it, and new versions of the app should be able to consume the current API.
When a new API cannot stay backwards compatible, a common approach is to expose different endpoints; this is why some APIs have apiName/v1/method and apiName/v2/method. In this case, both versions of the API are deployed.
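A sketch of how that versioning could be wired up in firebase.json, routing both versioned paths to the Cloud Run service (the service ID and region are placeholders; each deployed site revision then only calls the API version it was built against):

{
  "hosting": {
    "rewrites": [
      { "source": "/api/v1/**", "run": { "serviceId": "backend", "region": "us-central1" } },
      { "source": "/api/v2/**", "run": { "serviceId": "backend", "region": "us-central1" } }
    ]
  }
}

With this layout, the old site revision keeps hitting /api/v1 while the new one uses /api/v2, so updating or rolling back the Hosting content never produces an API mismatch.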

Firebase remote config versus A/B testing features

I cannot find this anywhere, so I hope someone has already stumbled upon this and can give me an answer.
I have been playing for a long time with Firebase Remote Config. On some occasions I have set parameters to apply different values to a certain percentage of my user base.
Recently I became interested in proper A/B testing and saw that Firebase has a feature for this (in beta right now). In the description of the A/B testing feature they state that one of the use cases is setting parameters through Remote Config to alter the app's behaviour (makes sense, this is what I did until now).
My question, though, is whether the A/B testing feature does anything different from (or in addition to) what Remote Config does. In particular, I am very interested in knowing whether Remote Config ensures that when users open the app multiple times they will always get the same Remote Config values (maybe through tracking their device/user ID?), or is this only achieved if I use A/B testing?
My experiments on this are not conclusive, though it seems that Remote Config does not ensure the same values over time.
Firebase A/B Testing builds on top of Firebase Remote Config, and Google Analytics for Firebase (and some other Firebase products) to offer full A/B testing capabilities.
Once a user is part of a certain group in an A/B Testing experiment, they will remain part of that group for the duration of the experiment.
When you use Firebase Remote Config without A/B Testing, you are in control of the groups completely yourself. So in that case you determine what value(s) the user gets.

Creating Stackdriver alerting policies via Monitoring v3 API

Using the Stackdriver Monitoring v3 API, I can create monitoring groups, uptime checks (including uptime checks based on the created monitoring groups), alerting policies, and uptime-check alerting policies. The policies trigger as expected and I receive the configured notifications (notifications are configured manually via the console UI).
I am using a combination of the API Explorer for REST methods and scripted gcloud commands. The alerting policies are created from JSON files.
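For reference, a trimmed-down example of the kind of policy JSON being loaded (a sketch: the display names, check ID, and thresholds are placeholders, and the file is created with something like gcloud alpha monitoring policies create --policy-from-file):

{
  "displayName": "Uptime check alert (created via API)",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Uptime check failing",
      "conditionThreshold": {
        "filter": "metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\" AND resource.type=\"uptime_url\" AND metric.label.check_id=\"my-uptime-check\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 1,
        "duration": "300s",
        "aggregations": [
          {
            "alignmentPeriod": "1200s",
            "perSeriesAligner": "ALIGN_NEXT_OLDER",
            "crossSeriesReducer": "REDUCE_COUNT_FALSE",
            "groupByFields": ["resource.label.*"]
          }
        ]
      }
    }
  ]
}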
All is well except that when viewing Uptime Checks Overview > Policies (the little blue bell) in the Stackdriver console, every uptime check whose alerting policy was created via the API shows a grey bell, i.e. no associated alerting policy (although all the policies function as expected).
I’ve been at this for a while and I am out of ideas as to why. Has anyone observed this or possibly have any idea where the problem would be?
I tried to reproduce the issue again; however, I couldn't. The bell icon in the Uptime Checks UI seems to work as expected now, so it appears the issue has been fixed. You can follow its public tracker here.

How to host a daemon process that aperiodically updates a Firebase database?

I have so far been very impressed with the Firebase platform for hosting a client-side single-page app and for data storage. However, I have one component that I don't know where to host...
I want a background process that aperiodically updates the database. When an update is needed depends on an external source and, although the general timeframe in which updates become available is known, the exact timing is not. My thinking was to have a background task running that has some smarts to determine when an update is needed, and then triggers the update at that time.
I don't know where I would host something like this. I considered running it in a loop in a Firebase function, but since the pricing model is based on execution time, that would get very expensive, and functions are not suited to daemon-type processes. The actual "database update" would be suitable for a function, but not the triggering logic. I have also seen functions-cron, which does offload the triggering logic, but since my updates are not truly periodic, it doesn't seem exactly appropriate. I haven't looked much into App Engine and how it relates to the Firebase platform... so, basically, my question:
What are the options for "reasonably-priced" hosting of an always-running background task?
Google App Engine - Standard is something you want to look at more closely. It is reasonably priced, since what you are doing will likely fit into GAE-Std's free daily quota. In GAE-Std, you create a scheduled cron job: GAE will call your task as if it were an incoming web request.
See Firebase doc for integrating with GAE
See GAE doc for cron jobs
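A sketch of the cron definition (cron.yaml) this would use; the URL and schedule are placeholders, and since the updates are aperiodic the handler should check cheaply and exit when there is nothing to do:

# Hypothetical cron.yaml for GAE Standard: poll frequently, let the
# handler decide whether an update to the Firebase database is needed.
cron:
- description: "check external source for new data"
  url: /tasks/check-for-updates
  schedule: every 10 minutes

On each tick, GAE issues a GET request to /tasks/check-for-updates; the handler inspects the external source and only writes to the database when a real update is available.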
