I built an AWS CDK CodePipeline that triggers a build on every git commit, but I also want to trigger the build from a client function. Can anyone guide me on how to call the start-pipeline operation from the client side?
You can use the AWS CLI to start the pipeline manually; the same thing is possible via an API call.
From: Start a pipeline manually
aws codepipeline start-pipeline-execution --name MyFirstPipeline
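If you need to trigger it from client code, here is a minimal sketch using the AWS SDK for JavaScript v3 (the region and pipeline name are placeholders; the caller's credentials need the codepipeline:StartPipelineExecution permission):

import { CodePipelineClient, StartPipelineExecutionCommand } from "@aws-sdk/client-codepipeline";

// Placeholder region; in a browser the credentials would typically come from
// something like Cognito rather than being embedded in the client.
const client = new CodePipelineClient({ region: "us-east-1" });

async function startPipeline(): Promise<void> {
  const result = await client.send(
    new StartPipelineExecutionCommand({ name: "MyFirstPipeline" })
  );
  console.log("Started execution:", result.pipelineExecutionId);
}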
Alternatively, you can add a manual approval action within CodePipeline.
From: Manage approval actions in CodePipeline
In AWS CodePipeline, you can add an approval action to a stage in a
pipeline at the point where you want the pipeline execution to stop so
that someone with the required AWS Identity and Access Management
permissions can approve or reject the action.
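If you go the approval route, the approval itself can also be submitted programmatically. A sketch with the same SDK; all identifiers are placeholders, and the token comes from a GetPipelineState call for the pending approval action:

import { CodePipelineClient, PutApprovalResultCommand } from "@aws-sdk/client-codepipeline";

const client = new CodePipelineClient({ region: "us-east-1" }); // placeholder region

// Placeholder pipeline/stage/action names and token.
await client.send(
  new PutApprovalResultCommand({
    pipelineName: "MyFirstPipeline",
    stageName: "Approval",
    actionName: "ManualApproval",
    result: { summary: "Approved from the client app", status: "Approved" },
    token: "token-from-get-pipeline-state",
  })
);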
We are using AWS CodeDeploy and would like to receive a notification when a rollback occurs due to a failed deployment.
Do we set the events that trigger notifications to "Failed" in this case? I was expecting a rollback choice.
Rollback is not available as a notification type at the application level, but it is at the deployment group level. Inside the deployment group, go to "Triggers" and you will be able to choose "Deployment rollback" as an option under Events.
More information on this in the CodeDeploy docs.
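If you prefer to configure the trigger programmatically, here is a sketch with the AWS SDK for JavaScript v3; the application name, deployment group name, and SNS topic ARN are placeholders:

import { CodeDeployClient, UpdateDeploymentGroupCommand } from "@aws-sdk/client-codedeploy";

const client = new CodeDeployClient({ region: "us-east-1" }); // placeholder region

// "DeploymentRollback" is the trigger event that fires on rollbacks.
await client.send(
  new UpdateDeploymentGroupCommand({
    applicationName: "MyApplication",
    currentDeploymentGroupName: "MyDeploymentGroup",
    triggerConfigurations: [
      {
        triggerName: "rollback-alert",
        triggerTargetArn: "arn:aws:sns:us-east-1:123456789012:deploy-alerts",
        triggerEvents: ["DeploymentRollback"],
      },
    ],
  })
);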
I am using a deploy hook from my headless CMS (Prismic) to kick off a build and deploy of my Vercel-hosted, statically rendered Next.js site. The build time is variable, and I need to run a script after a successful build/deploy, which sends an email and push notifications to my registered users that new content is available. I don't want to do it before the build has finished; otherwise the users will get server-side rendered pages instead of static ones.
I haven't been able to find any way to receive a notification that the build finished. What are my options here?
I do see that I could create a webhook and subscribe to the deployment-ready notification. Does anyone have a working example of this they could share?
https://vercel.com/docs/api#integrations/webhooks
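For what it's worth, here is a minimal sketch of such a handler as a Next.js API route, assuming the webhook POSTs JSON with a type of "deployment-ready" as the linked docs describe; notifyUsers is a hypothetical stand-in for the email/push fan-out:

import type { NextApiRequest, NextApiResponse } from "next";

// Hypothetical stand-in for the email/push notifications to registered users.
async function notifyUsers(): Promise<void> {
  // ...
}

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Only react to the "deployment-ready" event; in production you would also
  // verify the x-vercel-signature header against your client secret.
  if (req.method === "POST" && req.body?.type === "deployment-ready") {
    await notifyUsers();
  }
  res.status(200).end();
}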
I have deployed a JVM application to Google Cloud Run that uses the Firebase Admin SDK to send notifications via Firebase Cloud Messaging.
Everything works fine locally using GOOGLE_APPLICATION_CREDENTIALS. The deployed app, however, throws errors as follows:
java.lang.IllegalArgumentException: Project ID is required to access messaging service. Use a service account credential or set the project ID explicitly via FirebaseOptions. Alternatively you can also set the project ID via the GOOGLE_CLOUD_PROJECT environment variable.
I create the FirebaseMessaging instance as:
FirebaseMessaging.getInstance(
    FirebaseApp.initializeApp(
        FirebaseOptions.Builder()
            .setCredentials(GoogleCredentials.getApplicationDefault())
            .build()
    )
)
I have deployed the Cloud Run instance with a service account which has admin permissions for Cloud Run.
My understanding is that an application deployed to any GCP service acquires application credentials automatically. Is there a difference for the combination of Cloud Run and Firebase Admin SDK?
Any help would be much appreciated.
You have two options:
Query the cloud metadata service to find the project ID where your Cloud Run instance is deployed (see the sketch after this list). This only works within Cloud Run; it won't work when you're testing locally.
Make the project ID available to Cloud Run in some way during deployment.
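For option #1, here is a sketch of the metadata lookup (shown in TypeScript for brevity; the endpoint and header are the documented ones, and the same HTTP call works from a JVM app):

// Ask the Cloud Run metadata server for the project ID.
// metadata.google.internal only resolves inside GCP, not locally.
async function lookupProjectId(): Promise<string> {
  const res = await fetch(
    "http://metadata.google.internal/computeMetadata/v1/project/project-id",
    { headers: { "Metadata-Flavor": "Google" } }
  );
  if (!res.ok) throw new Error(`Metadata server returned ${res.status}`);
  return res.text();
}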
An easy way to implement #2 is to put the project ID in the environment for the Admin SDK to automatically pick up. For example, a shell script:
service_id="your-service-id"
project_id="your-project-id"
gcloud run deploy "$service_id" \
--project "$project_id" \
--image "..." \
--platform managed \
--update-env-vars "GOOGLE_CLOUD_PROJECT=$project_id"
Note the --update-env-vars which populates GOOGLE_CLOUD_PROJECT.
During amplify init there is a question:
"Do you want to use AWS profile"
What is "AWS profile in this context"? When should i choose yes, and when no? What is the decision impact on the project?
After installing the AWS CLI, you can configure it using the aws configure command. You provide an access key, a secret access key, and a default region. Once you are done, this creates a default profile for your CLI. All your aws commands use credentials from this default profile, and your amplify init command refers to this profile.
You can have multiple AWS profiles for your CLI to use.
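For reference, named profiles live in ~/.aws/credentials and look something like this (the keys below are placeholders):

[default]
aws_access_key_id = AKIAEXAMPLEDEFAULT
aws_secret_access_key = example-default-secret

[work]
aws_access_key_id = AKIAEXAMPLEWORK
aws_secret_access_key = example-work-secret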
Coming to your question.
1) If your AWS default profile is configured for the same account where you want to deploy your Amplify project, you can say yes to that question.
2) If you are not sure what is in your default profile, you can opt for no and provide the access key, secret key, and other information on your own.
Hope this clears your doubt.
I've implemented real-time Remote Config updates via the documentation here.
In general, it works as expected, except when it comes to experiments via A/B Testing. Changes made through A/B Testing that affect Remote Config do not fire the onUpdate Cloud Function hook.
Does anyone know if it's possible to have the functions.remoteConfig.onUpdate Cloud Function hook trigger when a change to Remote Config is made via an A/B Testing experiment?
The only workaround I can think of is to have a dummy value in Remote Config itself that I change whenever an experiment is created or updated.
firebaser here
There is nothing built into Remote Config for that at the moment. But thanks to the integration between Cloud Functions and Remote Config, you can build it yourself.
One of our engineers actually just gave a demo for this last week. I recommend you check it out here: https://youtu.be/lIzQJC21uus?t=3351.
In this demo, there are a few steps:
1. You publish a change from the Remote Config console.
2. This change triggers Cloud Functions through a functions.remoteConfig.onUpdate event.
3. The Cloud Function sends an FCM message to all apps through a topic (sketched below).
4. When an app receives this message, it shows a prompt that the configuration is out of date.
5. When the user clicks the "fetch" button, the app fetches the new configuration data from Remote Config.
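A minimal sketch of steps 2 and 3, assuming the first-generation functions.remoteConfig.onUpdate trigger and a placeholder topic name:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Fires whenever a new Remote Config template version is published.
export const onConfigUpdate = functions.remoteConfig.onUpdate(async (version) => {
  // "config-updates" is a placeholder topic the client apps subscribe to.
  await admin.messaging().send({
    topic: "config-updates",
    data: { configVersion: String(version.versionNumber) },
  });
});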