I have been using Azure DevOps for a project for quite some time, but suddenly publishing to my own organisation/collection feed results in a 403.
I created the feed myself and I can select it in the NuGet push build step, but pushing to it does not work. When I create a new feed to publish the NuGet packages to, everything works perfectly again. It seems to me like a token expired, but I never created one or used one to authenticate. I also do not want to switch my NuGet feed to the new one, as I want to be able to use the older packages as well.
This is the build pipeline:
And this is the stack trace:
Active code page: 65001
SYSTEMVSSCONNECTION exists true
SYSTEMVSSCONNECTION exists true
SYSTEMVSSCONNECTION exists true
[warning]Could not create provenance session: {"statusCode":500,"result":{"$id":"1","innerException":null,"message":"User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks permission to complete this action. You need to have 'ReadPackages'.","typeName":"Microsoft.VisualStudio.Services.Feed.WebApi.FeedNeedsPermissionsException, Microsoft.VisualStudio.Services.Feed.WebApi","typeKey":"FeedNeedsPermissionsException","errorCode":0,"eventId":3000}}
Saving NuGet.config to a temporary config file.
Saving NuGet.config to a temporary config file.
[command]"C:\Program Files\dotnet\dotnet.exe" nuget push d:\a\1\a\Microwave.0.13.3.2019072215-beta.nupkg --source https://simonheiss87.pkgs.visualstudio.com/_packaging/5f0802e1-99c5-450f-b02d-6d5f1c946cff/nuget/v3/index.json --api-key VSTS
error: Unable to load the service index for source https://simonheiss87.pkgs.visualstudio.com/_packaging/5f0802e1-99c5-450f-b02d-6d5f1c946cff/nuget/v3/index.json.
error: Response status code does not indicate success: 403 (Forbidden - User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks permission to complete this action. You need to have 'ReadPackages'. (DevOps Activity ID: 2D81C262-96A3-457B-B792-0B73514AAB5E)).
[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1
[error]Packages failed to publish
[section]Finishing: dotnet push to own feed
Is there an option I am overlooking where I have to authenticate myself somehow? It is just so weird.
"message":"User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks
permission to complete this action. You need to have 'ReadPackages'.
According to this error message, the error you received caused by the user(a831bb9f-aef5-4b63-91cd-4027b16710cf) does not have the access permission to your feed.
Also, as I checked from the backend, a831bb9f-aef5-4b63-91cd-4027b16710cf is the VSID of your Build Service account. So please try adding this user (Micxxxave Build Service (sixxxxss87)) to your target feed, and assign it the Contributor role or higher permissions on the feed.
In addition, here is the documentation you can refer to:
There is a new UI in the Feed Permissions:
To further expand on Merlin's solution and the related links (specifically the one about scope): if your solution has only ONE project in it, Azure Pipelines seems to automatically restrict the authorization scope of the job agent to the project itself. As a result, the agent has no visibility of any services outside of it, including your own private NuGet feeds hosted in Azure DevOps.
Solutions with multiple projects automatically have their scope unlocked, giving build agents visibility of your private NuGet feeds.
I've found the easiest way to remove the scope restrictions on single project builds is to:
In the pipelines project, click the "Settings" cog at the bottom left of the screen.
Go to Pipelines > Settings
Uncheck "Limit job authorization scope to current project"
Hey presto, your 403 error during your builds involving private NuGet feeds should now disappear!
I want to add a bit more information in case somebody ends up having the same kind of problem. All the information shared by the other users is correct, but there is one more caveat to take into consideration.
The project-level settings are superseded by the organization settings. If you find yourself unable to modify the settings, or they are grayed out, click the "Azure DevOps" logo at the top left of the screen.
Click on Organization Settings at the bottom left.
Go to Pipelines > Settings and verify the current configuration.
When I created my organization, it was limiting the scope at the organization level. It took me a while to realize it was superseding the project setting.
If you are still wondering where that "Limit job authorization scope to current project" setting is: it took me a while to find it. It is in the project settings; the screenshot below should help.
It may not be immediately obvious or intuitive, but this error will also occur when the project your pipeline is running under is public, but the feed it is accessing is not. That might be the case, for instance, when accessing an organization-level feed.
In that scenario, there are three possible resolutions:
Make the feed public, in which case authentication isn't required; or
Make the project private, thus forcing the service to authenticate; or
Enable the "Allow project-scoped builds" option under your feed permissions.
The instructions for the last option are included in Merlin Liang - MSFT's excellent answer, but the other options might be preferable depending on your requirements.
At minimum, this hopefully provides additional insight into the types of circumstances that can lead to this error.
Another thing to check, if you are using a YAML file for the pipeline, is whether the feed name is correct.
I know this might seem like a moot point, but I spent a long time debugging the "...lacks permission to complete this action. You need to have 'AddPackage'." error, only to find I had referenced the wrong feed in my azure-pipelines.yaml file.
If you don't want to, or cannot, change the project-level settings as described here, you can set this per feed by enabling "Allow project-scoped builds" (greyed out for me because it is already enabled).
That is different from the accepted answer, as you don't have to explicitly add the user and set its permissions.
Adding permissions for these two accounts solved my issue:
Project Collection Build Service (PROJECT_NAME)
[PROJECT_NAME]\Project Collection Build Service Accounts
https://learn.microsoft.com/en-us/answers/questions/723164/granting-read-privileges-to-azure-artifact-feed.html
If I clone an existing pipeline that works and modify it for a new project, the build works fine. But if I try to create a new pipeline, I get the 403 Forbidden error.
This may not be a proper solution, but I have tried everything else suggested here and elsewhere and I still cannot get it to work.
Cloning worked for me.
I have a website on which I have published several of my applications.
Right now I have to update it each time one of the applications is updated.
The applications themselves check for updates so the user only visits the website if they don't have a previous version installed.
I would like to make it easier for myself by creating a single executable that, when downloaded and executed, checks with the database which version is the most recent, then downloads that version and runs its setup.
I could make a downloader for each application, but I'd rather make something more universal, with a parameter or argument as the only difference.
For the downloader to "know" which database to check for the most recent version, I need to pass that information to it.
My first thought was putting it in an XML file, so I would only have to generate a different XML file for each application, but then it wouldn't be a single executable anymore.
My second thought was using commandline arguments like: downloader.exe databasename
But how would I do that when the file is downloaded?
Would a link like: "https://my.website.com/downloader.exe databasename" work?
How could I best do this?
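To make the idea concrete, the downloader itself would only need to read that argument and look up the right database. Here is a rough sketch of the argument handling (shown as a plain C++ console program only for illustration; all names are placeholders and the actual download/setup steps are left out):

#include <iostream>
#include <string>

// Sketch of the command-line idea: the database name is the only thing
// that differs per application, so it is passed as the first argument,
// e.g.  downloader.exe MyAppDatabase
int main(int argc, char* argv[])
{
    if (argc < 2) {
        std::cerr << "Usage: downloader <databasename>\n";
        return 1;
    }

    const std::string databaseName = argv[1];

    // Placeholder steps: query the database for the latest version,
    // then download that installer and run it.
    std::cout << "Checking latest version in database: " << databaseName << "\n";
    // queryLatestVersion(databaseName);   // hypothetical helpers
    // downloadAndRunSetup(...);
    return 0;
}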
rg.
Eric
We have a folder of ELMAH error logs in XML format. There will be millions of these files, and each file might be up to 50 KB in size. We need to be able to search the files (e.g. what errors occurred, which system failed, etc.). Is there an open source system that will index the files and help us search through them using keywords? I have looked at Lucene.NET, but it seems I would have to code the application myself.
Please advise.
If you need to have the logs in a folder in XML, elmah-loganalyzer might be of use.
You can also use Microsoft's Log Parser to perform "SQL-like" queries over the XML files:
LogParser -i:XML "SELECT * FROM *.xml WHERE detail like '%something%'"
EDIT:
You could use a combination of Nutch + Solr or Logstash + Elasticsearch as an indexing solution.
http://wiki.apache.org/nutch/NutchTutorial
http://lucene.apache.org/solr/tutorial.html
http://blog.building-blocks.com/building-a-search-engine-with-nutch-and-solr-in-10-minutes
http://www.logstash.net/
http://www.elasticsearch.org/tutorials/using-elasticsearch-for-logs/
http://www.javacodegeeks.com/2013/02/your-logs-are-your-data-logstash-elasticsearch.html
We are a couple of developers behind the website http://elmah.io. elmah.io indexes all your errors (in Elasticsearch) and makes it possible to do funky searches, group errors, hide errors, time-filter errors and more. We are currently in beta, but you will get a link to the beta site if you sign up at http://elmah.io.
Unfortunately elmah.io doesn't import your existing error logs. We will open source an implementation of the ELMAH ErrorLog type, which indexes your errors in your own Elasticsearch (watch https://github.com/elmahio for the project). Again, this error logger will not index your existing error logs, but you could implement a parser which runs through your XML files and indexes everything using our open source error logger. Alternatively, you could import the errors directly into elmah.io through our API, if you don't want to implement a new UI on top of Elasticsearch.
Using Drupal/Search API module/Solr/Tika we are trying to index a large number of files.
I've set up the index and everything works fine until I include the Search API attachments module.
When we run cron, Tika is not being called. We know this because we put a snippet of PHP code that writes to the system log at the end of the Tika module, and that message never shows up. It does show up when running the indexing manually.
Additionally, the number of indexed items does not go up after a cron run.
We also noticed that if we run tika from the command line we get the following error at the top of the output:
INFO - unsupported/disabled operation: EI
Without the "index attachments" box checked, the index works as expected both on cron and when indexing manually.
Any idea what the problem might be?
Thanks!
Site Built On:
Drupal 7
Modules In Question:
Search API
Search API Attachments
Indexing with:
Apache Solr
Indexing Attachments using:
Tika Library
I have the same problem, but it does not seem to be a problem at all, because the document appears to get indexed anyway.
I guess it is a Tika problem, because some documents (PDF) work well and others do not. Maybe it depends on the PDF version. Try something simpler: for example, I wrote a sample text and used the print-to-PDF function on my Mac to get a simple PDF document. Or use a Word doc.
We also had to apply the real-path patch to get Tika working with the files, and the Transliteration module to get clean filenames.
For debugging search_api I use the dd() function from Devel. In search_api_solr/includes/solr_httptransport.inc, in performHttpRequest(), I call dd($url); dd($options); right before $response = drupal_http_request($url, $options); (line 92). Hopefully this helps.
We have a Qt app that, when it starts, tries to connect to a servlet to get the config parameters it needs to keep running.
The URL may change frequently because we have to test the application in several environments. Right now (as a temporary solution) the URL is a constant in the source code, but that is a little bit ugly.
Where is the best place to maintain this URL, so that we do not need to change the source code every time we want to change the target environment?
In a database table maybe (my application uses a SQLite DB), in a settings file, or in some other way?
Thank you for your replies.
You have a number of options:
Hard coded (like you have already)
Run-time user input
Command line arguments
QSettings
Read from a bespoke file as text.
I would think option 3 would be the simplest to implement without being intrusive, but it does depend on what kind of application you have.
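For example, here is a minimal sketch of option 3 using QCommandLineParser; the option name and default URL are just placeholders:

#include <QCoreApplication>
#include <QCommandLineOption>
#include <QCommandLineParser>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QCommandLineParser parser;
    parser.addHelpOption();

    // Placeholder option name and default; run e.g.
    //   ./myapp --config-url https://staging.example.com/config
    QCommandLineOption configUrlOption(
        "config-url",
        "URL of the servlet that provides the configuration.",
        "url",
        "https://test.example.com/config");
    parser.addOption(configUrlOption);
    parser.process(app);

    const QString configUrl = parser.value(configUrlOption);
    qDebug() << "Fetching configuration from" << configUrl;
    // ... connect to the servlet using configUrl ...
    return 0;
}

QSettings (option 4) works much the same way if you would rather persist the value between runs instead of passing it every time.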
I would keep the list of URLs in a document, e.g. an XML file, stored in a central, well-known place, e.g. a known web server, and hardcode only the URL of that known place in the app.
The list could then be edited externally without recompiling your app.
At startup the app would download and parse the list and point itself to the right servlet, based on an environment specified as a command-line parameter.
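As an illustration, here is a rough sketch of that startup step, assuming a simple XML list format and placeholder URLs (the Qt network module, QT += network, is required):

#include <QCoreApplication>
#include <QEventLoop>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>
#include <QXmlStreamReader>
#include <QDebug>

// Assumed list format (element names and URLs are placeholders):
// <environments>
//   <environment name="staging" url="https://staging.example.com/servlet"/>
//   <environment name="production" url="https://prod.example.com/servlet"/>
// </environments>
static QString servletUrlFor(const QString &environment)
{
    QNetworkAccessManager manager;
    QNetworkReply *reply = manager.get(
        QNetworkRequest(QUrl("https://config.example.com/environments.xml")));

    // Block until the download finishes; fine at startup, otherwise use signals.
    QEventLoop loop;
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();

    QString url;
    QXmlStreamReader xml(reply->readAll());
    while (!xml.atEnd()) {
        xml.readNext();
        if (xml.isStartElement()
            && xml.name() == QLatin1String("environment")
            && xml.attributes().value("name") == environment) {
            url = xml.attributes().value("url").toString();
            break;
        }
    }
    reply->deleteLater();
    return url;
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    // Environment name passed on the command line, e.g.  ./myapp staging
    const QString env = argc > 1 ? QString::fromLocal8Bit(argv[1])
                                 : QStringLiteral("production");
    qDebug() << "Servlet URL:" << servletUrlFor(env);
    return 0;
}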