I've been trying to merge a source branch with a target branch, but have consistently gotten the following error message on my failed job(s):
$ firebase use project_name --token $FIREBASE_TOKEN
Error: Invalid project selection, please verify project project-name exists and you have access.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
I have followed the advice from this thread and logged out/in from Firebase to use the project again, which unfortunately hasn't worked:
firebase logout
firebase login
firebase use project_name
I've triple-checked that I'm using the correct project name in Firebase, rather than the name of my GitLab repo.
Unsure if it's related, but when setting up the merge request, GitLab notes that "The source branch is 3 commits behind the target branch." I don't believe this is part of the issue, but it seems worth bringing up.
Merging branches has never been an issue until today, and this is the first time I'm seeing this particular error cause failed jobs. Any advice is appreciated.
EDIT:
I added a screenshot of the projects list, showing I've logged into the necessary project by project ID on Firebase. Everything should be connected, but I can't see what I'm missing that's causing the failed jobs.
UPDATE:
I've added firebase projects:list within the pipeline editor and get the following error message.
The issue I have here is that I cannot find a firebase-debug.log file. I've searched for ways to locate it, and tried to recreate it by commenting out # firebase-debug.log* in my .gitignore file and running firebase init to try solutions from posts like this. Any thoughts on the original merging issue, or on how to find firebase-debug.log to move closer to a solution, are greatly appreciated.
The setup looks incorrect for CI; e.g. you shouldn't need to pass $FIREBASE_TOKEN on every command. The issue might even stem from the fact that you use --token once and then not anymore.
Please refer to the user manual: https://github.com/firebase/firebase-tools/#general
Using a service account might be the least troublesome option.
There's also a login --reauth option, and login:ci for generating a CI token.
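For example, a minimal sketch of a CI script using a service account instead of a token (the variable name SERVICE_ACCOUNT_KEY_FILE is an assumption, e.g. a file-type CI/CD variable in GitLab; firebase-tools reads GOOGLE_APPLICATION_CREDENTIALS automatically):
# Hedged sketch: authenticate CI with a service-account key instead of --token.
export GOOGLE_APPLICATION_CREDENTIALS="$SERVICE_ACCOUNT_KEY_FILE"  # hypothetical variable
firebase use project-name        # must be the Firebase project ID, not the repo name
firebase deploy --only hosting   # no --token needed once credentials are set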
My question is related to "Deploy Firebase from Azure DevOps". I have a list of tasks in my CI pipeline which run fine, but the one firebase publish job consistently fails.
Error:
Also, I have put my token inside variables
install firebase tools:
firebase publish:
How can this be solved? What am I doing wrong here?
I think the issue comes from the usage of your variables.
See the documentation on variable usage:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#runtime-expression-syntax
You'll find that there is no usage like this:
$key
It should instead be:
$(key)
In your situation, the issue likely comes from $FIREBASE_TOKEN being treated as the literal string "$FIREBASE_TOKEN"; the actual value was never passed in.
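For concreteness, a hedged sketch of the corrected script line in the pipeline definition (assuming the variable you defined is named FIREBASE_TOKEN, as in the build step above):
# Macro syntax: Azure DevOps substitutes $(FIREBASE_TOKEN) before the shell runs.
firebase deploy --token $(FIREBASE_TOKEN)
# With plain $FIREBASE_TOKEN, the shell looks for an environment variable of that
# name, and secret pipeline variables are not mapped into the environment
# automatically, so the CLI would receive an empty or literal value instead.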
I've deployed hundreds of functions and this is the first time I've encountered this issue. Simply put, it stops the function deployment process, saying:
Unhandled error cleaning up build images. This could result in a small monthly bill if not corrected. You can attempt to delete these images by redeploying or you can delete them manually at https://console.cloud.google.com/gcr/images/[project-name]/us/gcf
The way I deploy is through Firebase CLI command: firebase deploy --only functions:nameOfFunction
My question is: what are those images I have to delete? Why do they exist? How can I solve this?
Cloud Functions uses another product called Cloud Build to build the server images that actually get deployed. Those images are stored in Cloud Storage, and that storage is billed to your account.
Read more about it:
https://github.com/firebase/firebase-tools/issues/3404
https://krasimirtsonev.com/blog/article/firebase-gcp-saving-money
Watch:
https://www.youtube.com/watch?v=aHaI0jZ5rwM
You should be able to locate and delete the files manually in the Google Cloud console. But it sounds like there is a bug here, with the files not being cleaned up automatically, so you should also contact Firebase support directly.
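If you want to attempt the cleanup from the command line rather than the console, something like the following should work. This is a sketch that assumes the default Container Registry layout for Cloud Functions build images (matching the console URL above); PROJECT_ID and IMAGE_NAME are placeholders, and the region subfolder may differ:
# List leftover Cloud Functions build images, then delete one by name.
gcloud container images list --repository=us.gcr.io/PROJECT_ID/gcf
gcloud container images delete us.gcr.io/PROJECT_ID/gcf/IMAGE_NAME --force-delete-tags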
For me, the issue appeared to be related to my GCF billing (https://console.cloud.google.com/billing).
I had to go to my billing account to see that my payment method was expired or something, and GCF had forecasted a monthly cost of $0.01, so deploying Cloud Functions was effectively locked until I updated the payment method. The deploy immediately worked again after updating it, and the build-cleanup console warning also disappeared.
The error I was seeing in my function logs in the Firebase console was "billing account is not available", for which I found almost zero results on Google, which is why I'm posting it here.
For me, the issue was caused by a silly typo.
The error:
Functions deploy had errors with the following functions:
        sendNotification(europe-west)
i  functions: cleaning up build files...
⚠  functions: Unhandled error cleaning up build images. This could result in a small monthly bill if not corrected. You can attempt to delete these images by redeploying or you can delete them manually at...
The fix was choosing the right region: "europe-west" is not a valid Cloud Functions region, while "europe-west3" is.
The incorrect region:
exports.sendNotification = functions
.region("europe-west")...
The correct region:
exports.sendNotification = functions
.region("europe-west3")...
Go to https://console.cloud.google.com/apis/api/artifactregistry.googleapis.com/overview
select your project
enable the Artifact Registry API
deploy your functions again
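If you prefer the CLI, enabling the API can presumably also be done with gcloud (PROJECT_ID is a placeholder):
# Enable the Artifact Registry API for the project, then redeploy.
gcloud services enable artifactregistry.googleapis.com --project=PROJECT_ID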
In my experience, going into Cloud Storage didn't solve the issue: there was no image there to be deleted.
I solved it by changing the Node version, moving from 18 to 14.0.0:
nvm install 14.0.0
nvm use 14.0.0
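If you pin the Node version this way, it is probably also worth keeping the declared functions runtime in sync, e.g. "engines": { "node": "14" } in functions/package.json, since the deployed runtime is taken from there rather than from your local nvm selection.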
Check your function logs, either by navigating to Functions in the Firebase console or through https://console.cloud.google.com/logs
For me, the logs indicated that one of my TypeScript files had compiled under the wrong name and it was messing up the build.
Check the logs!
General troubleshooting tips given the variety of answers here:
if build is failing, read for other errors further up in the build output
add --debug to your CLI call, i.e.:
firebase deploy --only functions --debug
With --debug I found this in my output: "Permission 'artifactregistry.packages.delete' denied on resource"
My solution was to add Artifact Registry Administrator role to the IAM user deploying the function (in addition to Firebase Admin & Service Account User roles).
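A sketch of granting that role from the CLI; the project ID and member are placeholders for your own values:
# Grant Artifact Registry Administrator to the account that runs the deploy.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:deployer@example.com" \
  --role="roles/artifactregistry.admin"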
After adding a second project to my code using the command $ firebase use --add second-project, I get the error
There was an issue deploying your functions. Verify that your project has a Google App Engine instance setup at https://console.cloud.google.com/appengine and try again. If this issue persists, please contact support.
Error: HTTP Error: 404, Could not find Application "second-project".
when I run $ firebase deploy.
I have added separate targets and a web app through Firebase console for the second project.
What should I be checking to get rid of this error?
I just ran into this same exact issue as well. Leaving this here for anyone that runs into this issue in the future. What caused this error for me was a permissions error, when Firebase tried to access specific resources in Google Cloud such as Cloud Functions without the necessary IAM/service accounts in place.
This happens when you create a new Firebase project without setting the Default GCP resource location under Settings > General in the Firebase Console, which is easy to miss if you don't do any additional setup. You can set this in the settings, or it is set for you when you follow the walk-through instructions for setting up services such as Firestore or Firebase Storage in the Firebase console.
Without this set, the <YOUR_FIREBASE_PROJECT_NAME>@appspot.gserviceaccount.com IAM/service account will not be created in Google Cloud (which is needed to create/access specific resources), therefore when you run firebase deploy, it will fail with the error you mentioned above.
You can also check why your firebase deploy is failing in the firebase-debug.log that is generated when running this command (that's how I found out the cause of this error). Though I think this file is deleted after the command finishes execution, so you'll have to pipe the output into a file or save it some other way.
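One way to save that output yourself, as a minimal sketch (the log file name is arbitrary):
# Capture the CLI's debug stream so you don't depend on firebase-debug.log surviving.
firebase deploy --debug 2>&1 | tee deploy-debug.log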
TL;DR: Set Default GCP resource location, one way this can be done is in the Firebase Console under Settings > General.
I'm setting up aws-amplify in my project. I am facing a problem with amplify push: when I configured it for the first time, it worked fine. Now I've changed the repository, since I had to create a sub-tree from the old repo.
Now when I do amplify push I get:
Resource is not in the state stackUpdateComplete
⠸ Updating resources in the cloud. This may take a few minutes...Error updating cloudformation stack
⠸ Updating resources in the cloud. This may take a few minutes...
Following resources failed
✖ An error occurred when pushing the resources to the cloud
Resource is not in the state stackUpdateComplete
An error occured during the push operation: Resource is not in the state stackUpdateComplete
Just to give some background about this error - what does Resource is not in the state stackUpdateComplete actually mean?
Well basically Amplify is telling you that one of the stacks in your app did not deploy correctly, but it doesn't know why (which is remarkably unhelpful, but in fairness it's deploying a lot of potentially complex resources).
This can make diagnosing and fixing the issue really problematic, so I've compiled this kind of mental checklist that I go through to fix it. Each of the techniques will work some of the time, but I don't think there are any that will work all of the time. This list is not intended to help you diagnose what causes this issue, it's literally just designed to get you back up and running.
The fast options (will solve most problems)
Try running amplify push --iterative-rollback. It's supposed to roll your environment back to the last successful deployment, but tbh it rarely works.
Try running amplify push --force. Although counter-intuitive, this is actually a rollback method. It basically does what you think --iterative-rollback will do, but works more frequently.
In the AWS console, go to the deployment bucket for your environment (the bucket will be named amplify-${project_name}-${environment_name}-${some_random_numbers}-deployment). If there is a file called deployment-state.json, delete it and try amplify push again from the CLI (see the CLI sketch after this list).
If you are working in a team of more than one developer, or have your environment in several different repos locally, or across multiple different machines, your amplify/team-provider-info.json file might be out of sync. Usually this is caused by the environment variable(s) in an Amplify Lambda function being set in one of the files but not in another. The resolution will depend on how out of sync these files are, but you can normally just copy the contents of the last working team-provider-info.json file across to the other repo (from where the deployment is failing) and run the deployment again. However, if you've got multiple devs/machines/repos, you might be better off diffing the files and checking where the differences are.
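For the deployment-state.json option above, a hedged sketch using the AWS CLI (the bucket name follows the pattern described; yours will differ):
# Delete the stale deployment-state file, then retry the push.
aws s3 rm s3://amplify-myproject-dev-123456-deployment/deployment-state.json
amplify push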
The slow option (production-friendly)
Hopefully you haven't got this far, but at this point I'd recommend you open a ticket in the amplify-cli GitHub with as much info as you can. They tend to respond in 1-2 working days.
If you're pre-production, or you're having issues with a non-production environment, you could also try cloning the backend environment in the Amplify console, and seeing if you can get the stack working from there. If so, then you can push the fixed deployment back to the previous env (if you want to) using amplify env checkout ${your_old_env_name} and then amplify push.
The complex option (solves more intricate problems with your stack)
If none of the above work (or you don't have time to wait for a response on a GitHub issue), head over to CloudFormation in the AWS console and search for the part of your stack that is erroring. There's a few different ways to do this:
Check the CLI output for your last push and find the item whose status is something other than UPDATE_COMPLETE. You can copy the name of the stack and search for it in CloudFormation.
Search CloudFormation for your environment name, click on any of the resulting stacks, click the link under Parent stack, repeat until you find a stack with no parent. You are now in the root stack of your deployment, there are two ways to find your erroring stack from here:
Click on the Resources tab and find one with something red in the status column. Select the stack from this row.
Click on the Events tab and find one with something red in the status column. Select the stack from this row.
Once you've found the broken stack, click the Stack actions button and select Detect drift from the dropdown menu.
Click the Stack actions button again and select View drift results from the dropdown menu.
In the Resource drift results page, you'll see a list of resources in the stack. If any of them show DRIFTED in the Drift status column, select the radio button to the left of that item and then click the View drift details button. The drift details will be displayed side by side, git-style, on the next page. You can also click the checkbox(es) in the list above to highlight the drift change(s). Keep the current page open, you'll need it later.
Fixing the drift will depend on what it is - it's usually something in an IAM policy that's changed, you can fix this directly in the console. Sometimes it's a missing environment variable on a Lambda function, which you're better off fixing in the CLI (in which case you would need to run amplify push again and wait for the build to complete in order for the fix to be deployed to your environment).
Once you've fixed the drift, you can click the orange Detect stack drift button at the top of the page and it will update. Hopefully you've solved the problem.
GraphQL bonus round (completely bananas DDB drift)
Another fun thing that Amplify does from time-to-time is to (seemingly spontaneously) change the server-side encryption setting on the definition of some or all of your DynamoDB tables without you even touching it. This is by far and away the most bizarre Amplify error I've encountered (and that's saying something)!
I have a sort-of fix for this, which is to open amplify/backend/api/${your_api_name}/parameters.json and change the DynamoDBEnableServerSideEncryption setting from false to true, save it, then run amplify push. This will fail. But it's fine, because then you just reverse the change (set it back to false), save it, push again and voila! I still cannot for the life of me understand how or why this happens.
I said it's a sort-of fix, and that's because you'll still see drift for the stacks that deploy the affected tables in CloudFormation. This goes away after a while. Again, I have no idea how or why.
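As a sketch, the whole toggle sequence looks like this (the API name in the path is a placeholder):
# 1) In amplify/backend/api/your-api-name/parameters.json, set
#    "DynamoDBEnableServerSideEncryption": true
amplify push    # expected to fail
# 2) Set the flag back to false, save, and push again
amplify push    # should now succeed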
The nuclear option (DO NOT USE IN PRODUCTION)
Obviously this one comes with a huge disclaimer: don't do this in production. If working with any kind of DB, you will lose the data.
You can make backups of everything and then start to remove the problematic resources one at a time, with an amplify push in between each one, until the stack builds successfully. Once it's built, you can start adding your resources back in.
Hopefully this helps someone, please feel free to suggest edits or other solutions.
This worked for me:
$ amplify update auth
Choose the option "Yes, use default configuration" (uses the Cognito Identity Pool).
Then:
$ amplify push
Another possible reason is this:
The issue is tied to the selection of this option: "Select the authentication/authorization services that you want to use: User Sign-Up & Sign-In only (Best used with a cloud API only)", which creates just the UserPool and not the IdentityPool which the root stack is looking for. It's a bug and we'll fix that.
To unblock, for just the first question, you could select "❯ User Sign-Up, Sign-In, connected with AWS IAM controls (Enables per-user Storage features for images or other content, Analytics, and more)", which would create a user pool as well as the identity pool, and then choose any of the other configurations that you've mentioned above.
I debugged my AWS Amplify CLI push error by doing the following:
Open CloudFormation
Find parent stack with name such as: amplify-companyName-envName-123456
Click Events tab
Scroll down until you find UPDATE_FAILED, which should give you a detailed description of why it failed. e.g. The following resource(s) failed to create: ...
Alternatively (to find parent stack):
Navigate to the environment in the AWS Amplify console, Overview tab
Click View in CloudFormation
Under Stack info tab, click link for Parent stack
On the parent page, click Events tab
You can try the following. First do:
amplify env checkout {environment}
and then:
amplify push
The solution is:
a. Go to the S3 bucket containing the project settings.
b. Locate the deployment-state.json file in the root folder and delete it.
c. amplify push
I got this after making some modifications to my GraphQL schema. I adjusted the way I was making @connection directives on a few tables. I was able to fix this by following these steps:
Make a backup copy of your new schema that you're trying to push
Run amplify pull to bring your local environment back in sync with your backend in the cloud.
Once that completes, amplify push should work without issues, because everything is synced and there are no pending updates.
Copy the new schema over the pulled schema and try running amplify push once more to see if it works.
If it doesn't work, undo the overwrite to the pulled schema and compare what is different between the pulled schema and the updated schema that you backed up. Do a line-by-line diff check and see what has changed, then try to push the changes one by one to see where it is failing (see the sketch after this list). It is wiser not to push too many schema changes at once; do it one by one so that you can troubleshoot more easily. If you do have other issues, they should be unrelated to the one highlighted in this question, because the pulling should solve this particular issue.
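For the line-by-line diff check in the last step, a simple sketch (both paths are placeholders for your API name and wherever you kept the backup):
# Compare the pulled schema against your backed-up schema.
diff -u amplify/backend/api/your-api-name/schema.graphql schema.graphql.backup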
In my case the issue was due to multiple @connection directives referring to a GSI, which were not getting removed and added correctly when I ran amplify push api.
I was able to resolve this with amplify pull, then commenting out the @connection and the GSI linked to it, then adding each new change manually. But there was trouble getting the GSI linked again: the local update considered the GSI already removed, but in the cloud it seemed to have been retained, so I got an error that a GSI being added already existed in the cloud. I then renamed the model, so new tables were created in DynamoDB, and afterwards reverted it back to the correct name. This is only suitable for a dev environment, where it has little impact.
It ate up most of my time, but it did fix my issue.
In my case it was an issue when switching between Amplify environments (amplify env checkout). The error was not clear, but this is what I did to fix it without having to "clear" the API and lose the whole database:
Delete the existing API key by setting "CreateAPIKey" to "0" in amplify/backend/api/<your-api-name>/parameters.json, then save the file and execute amplify push.
Once done, do the same process with "CreateAPIKey" set back to "1", then amplify push.
This fixed my issue.
This worked for me
amplify remove storage
And, then
amplify add storage
Then, again
amplify push
The cause: after amplify add storage I had mistakenly chosen Y to "Do you want to add a Lambda Trigger for your S3 Bucket?"
I didn't have any Lambda function, and I didn't have anything in my bucket either.
In my opinion, these kinds of problems are always related to third-party auth.
Run amplify update auth,
then update the auth flow with the ID and secret of the third-party provider.
Then push.
It will fix the problem.
It looks like a conflict between the backend and local state.
The only thing that worked for me was backing up the local schema and running the amplify pull command.
Then use the backed-up schema file and run amplify push.
In most cases, updates to the following file must be made manually (for Android):
app/src/main/res/raw/amplifyconfiguration.json
As mentioned by others in this thread - the issue comes from one of the resources that you updated locally.
Check which ones did you modify:
$ amplify status
Then remove and add it again, followed by a push. The API is known not to work with updates right now, so you must remove and re-add it if you've changed it locally:
$ amplify api remove YourAPIName
$ amplify api add
$ amplify push
I have been using Azure DevOps for a project for quite some time, but suddenly publishing to my own organisation/collection feed results in a 403.
I created a feed and I can select it on the nuget push build step, but it does not work. I created a new feed to publish the NuGet packages to and this works perfectly again. It seems to me like a token expired, but I never created one or used it to authenticate. I also do not want to change my NuGet feed to the new one, as I want to use older packages as well.
This is the build pipeline:
And this is the stack trace:
Active code page: 65001
SYSTEMVSSCONNECTION exists true
SYSTEMVSSCONNECTION exists true
SYSTEMVSSCONNECTION exists true
[warning]Could not create provenance session: {"statusCode":500,"result":{"$id":"1","innerException":null,"message":"User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks permission to complete this action. You need to have 'ReadPackages'.","typeName":"Microsoft.VisualStudio.Services.Feed.WebApi.FeedNeedsPermissionsException, Microsoft.VisualStudio.Services.Feed.WebApi","typeKey":"FeedNeedsPermissionsException","errorCode":0,"eventId":3000}}
Saving NuGet.config to a temporary config file.
Saving NuGet.config to a temporary config file.
[command]"C:\Program Files\dotnet\dotnet.exe" nuget push d:\a\1\a\Microwave.0.13.3.2019072215-beta.nupkg --source https://simonheiss87.pkgs.visualstudio.com/_packaging/5f0802e1-99c5-450f-b02d-6d5f1c946cff/nuget/v3/index.json --api-key VSTS
error: Unable to load the service index for source https://simonheiss87.pkgs.visualstudio.com/_packaging/5f0802e1-99c5-450f-b02d-6d5f1c946cff/nuget/v3/index.json.
error: Response status code does not indicate success: 403 (Forbidden - User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks permission to complete this action. You need to have 'ReadPackages'. (DevOps Activity ID: 2D81C262-96A3-457B-B792-0B73514AAB5E)).
[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1
[error]Packages failed to publish
[section]Finishing: dotnet push to own feed
Is there an option I am overlooking where I have to authenticate myself somehow? It is just so weird.
"message":"User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks
permission to complete this action. You need to have 'ReadPackages'.
According to this error message, the error you received is caused by the user (a831bb9f-aef5-4b63-91cd-4027b16710cf) not having access permission on your feed.
Also, as I checked from the backend, a831bb9f-aef5-4b63-91cd-4027b16710cf is the VSID of your Build Service account. So, please try adding this user (Micxxxave Build Service (sixxxxss87)) to your target feed, and assign it the Contributor role or higher permissions on the feed.
In addition, here is the doc you can refer to:
There is a new UI in the Feed Permissions:
To further expand on Merlin's solution & related links (specifically this one about scope), if your solution has only ONE project within it, Azure Pipelines seems to automatically restrict the scope of the job agent to the agent itself. As a result, it has no visibility of any services outside of it, including your own private NuGet repos held in Pipelines.
Solutions with multiple projects automatically have their scope unlocked, giving build agents visibility of your private NuGet feeds held in Pipelines.
I've found the easiest way to remove the scope restrictions on single project builds is to:
In the pipelines project, click the "Settings" cog at the bottom left of the screen.
Go to Pipelines > Settings
Uncheck "Limit job authorization scope to current project"
Hey presto, your 403 error during your builds involving private NuGet feeds should now disappear!
I want to add a bit more information just in case somebody ends up having the same kind of problem. All information shared by the other users is correct, there is one more caveat to keep into consideration.
The policy settings are superseded by the organization settings. If you find yourself unable to modify the settings, or they are greyed out, click the "Azure DevOps" logo at the top left of the screen.
Click on Organization Settings at the bottom left.
Go to Pipeline --> Settings and verify the current configuration.
When I created my organization it was limiting the scope at the organization level. It took me a while to realize it was superseding the project.
Still wondering where that "Limit job authorization scope to current project" setting is? It took me a while to find it; it's in the project settings, and the screenshot below should help.
It may not be immediately obvious or intuitive, but this error will also occur when the project your pipeline is running under is public, but the feed it is accessing is not. That might be the case, for instance, when accessing an organization-level feed.
In that scenario, there are three possible resolutions:
Make the feed public, in which case authentication isn't required; or
Make the project private, thus forcing the service to authenticate; or
Enable the Allow project-scoped builds setting under your feed permissions.
The instructions for the last option are included in @Merlin Liang - MSFT's excellent answer, but the other options might be preferable depending on your requirements.
At minimum, this hopefully provides additional insight into the types of circumstances that can lead to this error.
Another thing to check, if using a yaml file for the Pipelines, is if the feed name is correct.
I know this might seem like a moot point, but I spent a long time debugging the ..lacks permission to complete this action. You need to have 'AddPackage'. error only to find I had referenced the wrong feed in my azure-pipelines.yaml file.
If you don't want to, or cannot, change project-level settings as described above, you can set this per feed by clicking "Allow project-scoped builds" (greyed out for me, as it was already enabled).
This differs from the accepted answer in that you don't have to explicitly add the user and set its permissions.
Adding these two permissions solved my issue.
Project Collection Build Service (PROJECT_NAME)
[PROJECT_NAME]\Project Collection Build Service Accounts
https://learn.microsoft.com/en-us/answers/questions/723164/granting-read-privileges-to-azure-artifact-feed.html
If I clone an existing pipeline that works and modify it for a new project, the build works fine.
But if I try to create a new pipeline, I get the 403 forbidden error.
This may not be a solution, but I have tried everything else suggested here and elsewhere and still cannot get it to work.
Cloning worked for me.