Don't ask me how, but I'm in a situation where I have DCPs published that have component IDs that no longer exist in Tridion!
I know the GUI will stop you deleting a component if it's published, but somehow (perhaps unpublishing failed but the CM still deleted the component???) they've been removed from the CM, and now I've got a load of DCPs in the Broker that I can't get rid of!
Anyone ever experienced this?
Any way to rectify this other than manually updating the database?
This is a Tridion 2011 setup: single deployer, single Broker database.
The most common supported way to solve this is to manually create a transport package that removes the offending DCPs.
So:
1. Set Cleanup to False in your cd_deployer_conf.xml.
2. Unpublish any DCP.
3. Capture the transport zip file.
4. Open the instructions.xml in the zip.
5. Change it to point to the orphaned DCP (see the scripted sketch of steps 4 and 5 below).
6. Drop the updated zip file into your deployer's incoming folder.
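If you have more than one orphaned DCP to remove, steps 4 and 5 can be scripted. The sketch below is Python and makes some assumptions: the TCM URIs are placeholders, and it assumes that swapping the component (and, if needed, the component template) IDs inside instructions.xml is enough, so compare the result against the package you captured before dropping anything into the incoming folder.

    # Sketch: rewrite instructions.xml inside a captured unpublish transport package.
    # The TCM URIs below are placeholders; take the real structure and IDs from the
    # package captured in step 3 and only swap in the orphaned component's ID.
    import zipfile

    SOURCE_ZIP = "captured_unpublish.zip"   # package captured with Cleanup set to False
    TARGET_ZIP = "remove_orphaned_dcp.zip"  # file to drop into the deployer's incoming folder

    ORIGINAL_ID = "tcm:5-123-16"            # component ID in the captured package
    ORPHANED_ID = "tcm:5-999-16"            # component ID of the orphaned DCP

    # zipfile cannot edit an entry in place, so copy every entry into a new archive
    # and patch instructions.xml on the way through.
    with zipfile.ZipFile(SOURCE_ZIP) as src, \
         zipfile.ZipFile(TARGET_ZIP, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.infolist():
            data = src.read(entry.filename)
            if entry.filename.lower() == "instructions.xml":
                data = data.replace(ORIGINAL_ID.encode(), ORPHANED_ID.encode())
            dst.writestr(entry, data)

    print("Wrote " + TARGET_ZIP + "; drop it into the deployer's incoming folder.")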
Compliments to Puf for creativity. I checked his approach with Tridion Customer Support and, although they say it is a grey area, they will allow this.
The only alternative is to create a new Broker database and publish everything that was already published into it. Then you can swap it with the live database.
I have been using Azure DevOps for a project for quite some time, but suddenly publishing to my own organisation/collection feed results in a 403.
I created a feed and I can select it in the NuGet push build step, but it does not work. I created a new feed to publish the NuGet packages to, and that works perfectly again. It seems to me like a token expired, but I never created one or used one to authenticate. I also do not want to switch to the new feed, as I want to keep using the older packages as well.
This is the build pipeline:
And this is the stack trace:
Active code page: 65001
SYSTEMVSSCONNECTION exists true
[warning]Could not create provenance session: {"statusCode":500,"result":{"$id":"1","innerException":null,"message":"User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks permission to complete this action. You need to have 'ReadPackages'.","typeName":"Microsoft.VisualStudio.Services.Feed.WebApi.FeedNeedsPermissionsException, Microsoft.VisualStudio.Services.Feed.WebApi","typeKey":"FeedNeedsPermissionsException","errorCode":0,"eventId":3000}}
Saving NuGet.config to a temporary config file.
[command]"C:\Program Files\dotnet\dotnet.exe" nuget push d:\a\1\a\Microwave.0.13.3.2019072215-beta.nupkg --source https://simonheiss87.pkgs.visualstudio.com/_packaging/5f0802e1-99c5-450f-b02d-6d5f1c946cff/nuget/v3/index.json --api-key VSTS
error: Unable to load the service index for source https://simonheiss87.pkgs.visualstudio.com/_packaging/5f0802e1-99c5-450f-b02d-6d5f1c946cff/nuget/v3/index.json.
error: Response status code does not indicate success: 403 (Forbidden - User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks permission to complete this action. You need to have 'ReadPackages'. (DevOps Activity ID: 2D81C262-96A3-457B-B792-0B73514AAB5E)).
[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1
[error]Packages failed to publish
[section]Finishing: dotnet push to own feed
Is there an option I am overlooking where I have to authenticate myself somehow? It is just so weird.
"message":"User 'a831bb9f-aef5-4b63-91cd-4027b16710cf' lacks
permission to complete this action. You need to have 'ReadPackages'.
According to this error message, the error occurs because the user (a831bb9f-aef5-4b63-91cd-4027b16710cf) does not have access permission on your feed.
Also, as I checked from the backend, a831bb9f-aef5-4b63-91cd-4027b16710cf is the VSID of your Build Service account. So please try adding this user (Micxxxave Build Service (sixxxxss87)) to your target feed and assigning it the Contributor role, or higher permissions, on the feed.
In addition, here is the documentation you can refer to:
There is a new UI in the Feed Permissions:
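If you want to double-check from a script, rather than the UI, whether that Build Service identity is already on the feed, you can list the feed's permissions through the Feed Management REST API. This is only a sketch: the organization name, feed ID, PAT, and the exact api-version and response fields are assumptions, so adjust them against the Feed Management documentation.

    # Sketch: list the current permissions on an Azure Artifacts feed to see whether
    # the Build Service identity is present. ORG, FEED_ID and PAT are placeholders,
    # and the api-version / response shape may need adjusting for your setup.
    import base64
    import json
    import urllib.request

    ORG = "simonheiss87"                              # organization name
    FEED_ID = "5f0802e1-99c5-450f-b02d-6d5f1c946cff"  # feed id from the push URL
    PAT = "<personal access token with Packaging (Read) scope>"

    url = ("https://feeds.dev.azure.com/" + ORG +
           "/_apis/packaging/feeds/" + FEED_ID +
           "/permissions?api-version=6.0-preview.1")
    auth = base64.b64encode((":" + PAT).encode()).decode()
    request = urllib.request.Request(url, headers={"Authorization": "Basic " + auth})

    with urllib.request.urlopen(request) as response:
        for entry in json.load(response).get("value", []):
            print(entry.get("role"), entry.get("identityDescriptor"))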
To further expand on Merlin's solution and the related links (specifically the one about scope): if your solution has only ONE project within it, Azure Pipelines seems to automatically restrict the job's authorization scope. As a result, the build has no visibility of any services outside of it, including your own private NuGet feeds in Azure Artifacts.
Solutions with multiple projects automatically have their scope unlocked, giving build agents visibility of your private NuGet feeds.
I've found the easiest way to remove the scope restriction on single-project builds is to:
1. In the Pipelines project, click the "Settings" cog at the bottom left of the screen.
2. Go to Pipelines > Settings.
3. Uncheck "Limit job authorization scope to current project".
Hey presto, your 403 error during your builds involving private NuGet feeds should now disappear!
I want to add a bit more information, just in case somebody ends up having the same kind of problem. All the information shared by the other users is correct, but there is one more caveat to take into consideration.
The project-level settings are superseded by the organization settings. If you find yourself unable to modify the settings, or they are greyed out, click the "Azure DevOps" logo at the top left of the screen.
Click on Organization Settings at the bottom left.
Go to Pipelines -> Settings and verify the current configuration.
When I created my organization, the scope was being limited at the organization level. It took me a while to realize it was superseding the project setting.
Still wondering where that "Limit job authorization scope to current project" setting is? It took me a while to find it: it's in the project settings, under Pipelines > Settings.
It may not be immediately obvious or intuitive, but this error will also occur when the project your pipeline is running under is public, but the feed it is accessing is not. That might be the case, for instance, when accessing an organization-level feed.
In that scenario, there are three possible resolutions:
Make the feed public, in which case authentication isn't required; or
Make the project private, thus forcing the service to authenticate; or
Enable the Allow project-scoped builds option under your feed permissions.
The instructions for the last option are included in @Merlin Liang - MSFT's excellent answer, but the other options might be preferable depending on your requirements.
At minimum, this hopefully provides additional insight into the types of circumstances that can lead to this error.
Another thing to check, if you are using a YAML file for the pipeline, is whether the feed name is correct.
I know this might seem like a moot point, but I spent a long time debugging the "...lacks permission to complete this action. You need to have 'AddPackage'." error, only to find I had referenced the wrong feed in my azure-pipelines.yaml file.
If you don't want to, or cannot, change the project-level settings as described here,
you can set this per feed by ticking 'Allow project-scoped builds' (for me it was greyed out, as it was already enabled).
That's different from the accepted answer, as you don't have to explicitly add the user and set the permissions.
Adding these two identities to the feed's permissions solved my issue:
Project Collection Build Service (PROJECT_NAME)
[PROJECT_NAME]\Project Collection Build Service Accounts
https://learn.microsoft.com/en-us/answers/questions/723164/granting-read-privileges-to-azure-artifact-feed.html
If I clone an existing pipeline that works and modify it for a new project, the build works fine.
But if I try to create a new pipeline I get the 403 forbidden error.
This may not be a solution, but I have tried everything else suggested here and elsewhere, and I still cannot get it to work.
Cloning worked for me.
I've come across a really strange scenario that I'm hoping someone else has encountered (best case), or perhaps someone with a lot of CRM experience can point me in the direction of where to look for a possible solution. I can't seem to find anything online that's close to this issue.
We have 4 environments: DEV, DEV-IST, UAT and PreProd. We progress CRM deployments in the following order: DEV -> DEV-IST -> UAT -> PreProd. We haven't gone live with our development, hence I've not included Production.
We progress CRM deployments using the following steps:
1. Go to DEV.
2. Export any unmanaged solutions as managed solutions. We then copy those managed solutions into the 'PkgFolder' of the Package Deployer.
3. We then take a backup of the DEV-IST CRM (Organisation A) database and restore it to the DEV database server as Organisation A. We then run the Package Deployer on Organisation A so that it now has all the changes that we'd made in development. Before we do anything further, we log in to Organisation A and go to the following screen: Home -> Settings -> Customisations -> Customize the System. I can see all the custom entities under the Entities node.
4. I now backup Organisation A (which now includes my dev updates) and copy the backup file to the DEV-IST database server.
5. I first go to Deployment Manager on DEV-IST, disable the organisation, and then delete it.
6. Now I go to the SQL database for DEV-IST and restore the backup from step 4 above, overwriting the Organisation A database.
7. I go back into Deployment Manager on DEV-IST and use the import organisation option to re-import Organisation A, which is now updated with my dev updates. The above process ensures that we deploy updates with very little downtime.
However, my issue is that after step 7, when I log in to the organisation and go to Home -> Settings -> Customisations -> Customize the System... I can't see any of the custom entities!
Any ideas what might be causing this? Strangely, I've noticed that I can't see custom entities in DEV-IST and PreProd, but I can see them in the DEV and UAT environments.
PS. I've already tried clicking 'Publish all customizations' and it didn't have any effect.
Your deployment process is really complicated, and I can't see any reason why you are doing the customization transfer in such a complex way. You should simply be transferring solutions with your customizations from the DEV environment to the DEV-IST environment, and then on to UAT and PreProd.
Backing up your database in one environment and restoring it in another is certainly not a faster approach (you stated yourself that this process gives you a short downtime of the environment).
Also, if you insist on doing deployments this way, you should not only remove your organization from Deployment Manager, but also remove the database from the server (and then restore the backed-up database).
If you are simply customizing CRM for your client/company, you should not use managed solutions. Managed solutions are meant for reusable customizations that you want to share or sell to others. Please read this great post by Shan McArthur, which will help you better understand the problem:
https://community.adxstudio.com/blogs/shan/2014-01-17-converting-crm-solutions-from-managed-to-unmanaged/
And if you must use managed solutions, you should only do that on your final PROD environment, not on your staging environments (where you should have full control over customizations in order to provide hotfixes).
EDIT:
I would add this as a comment, but I cannot comment yet, so I have to edit my response instead. I replicated your process and I can see all the entities in all environments, so there must be something specific that you are doing which is causing this behaviour.
Have you checked that, after you restored the database, all the custom entities exist in the restored database? (Every entity is a separate table with the same name as the entity.) Also, I would compare all the solution-related tables in the database where you can see the custom entities vs the database where you cannot see them (so that would be the tables SolutionBase, SolutionComponentBase, MetadataSchema.Entity and all the other MetadataSchema.* tables).
The last experiment you can do is to save a copy of the database where you cannot see the custom entities, remove the managed solution, import the managed solution again (the standard way) and check if the entities are there. I bet that they would be, so the next step would be to compare the databases again (all the metadata/solution tables) with a Red Gate comparison tool; maybe that will point you in the right direction.
Basically, the metadata in CRM is built this way: take everything from the default solution, then apply all the managed solutions in the order they were imported, and build the resulting metadata. Your case looks like CRM somehow "forgot" to apply the managed solutions over the default solution (I believe this might be an unknown bug in Deployment Manager, which should set something during organization import). I hope you will discover this when you compare your database data between environments.
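If you'd rather do that comparison with a quick script than by eyeballing the tables, something along these lines could work. It is only a sketch: the connection strings are placeholders, and it assumes MetadataSchema.Entity exposes a Name column, so adapt it to what you actually see in your databases.

    # Sketch: compare entity metadata between the organization DB where the custom
    # entities are visible and the one where they are not. Connection strings and
    # the Name column are assumptions; adjust to your environment.
    import pyodbc

    GOOD_DB = "Driver={SQL Server};Server=DEVSQL;Database=OrgA_MSCRM;Trusted_Connection=yes;"
    BAD_DB = "Driver={SQL Server};Server=DEVISTSQL;Database=OrgA_MSCRM;Trusted_Connection=yes;"

    def entity_names(conn_str):
        """Return the set of entity names found in MetadataSchema.Entity."""
        with pyodbc.connect(conn_str) as conn:
            rows = conn.execute("SELECT Name FROM MetadataSchema.Entity").fetchall()
        return {row.Name for row in rows}

    good = entity_names(GOOD_DB)
    bad = entity_names(BAD_DB)
    print("Entities only in the working org:", sorted(good - bad))
    print("Entities only in the broken org:", sorted(bad - good))
    # Repeat the same idea for SolutionBase and SolutionComponentBase to see whether
    # the managed solution rows themselves survived the restore.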
I understand that this is not a valid response and normally I would not write it as an answer but as a comment, but I cannot comment yet, unfortunately :(
I finally got to the bottom of this issue and thought I'd update it here for anyone else who comes along with the same problem. 'Someone' had put a registry edit on the CRM server to make CRM return only the top 250 records on any query. As the custom entities fell outside those 250 records, they weren't being shown. I've since reverted this change and I'm able to see the custom entities now.
I have a legacy ASP application that I support. By support I mean that I haven't touched it since about 2005, because it has just worked.
However there were a couple of data issues in the Access database that the ASP application uses. So like a fool I opened the database directly over a fileshare (using MS Access 2007), fixed the data and saved it down (in Access 2000 format).
Now the application will retrieve and display the data OK, but any updates fail with error 3705: "Operation is not allowed when the object is open". I have not changed the code in any way; the only change was the data update and database save.
I've found plenty of examples of this error, but they all relate to fairly simple issues like ensuring the recordset is closed before opening it, changing the CursorLocation enum, etc. I've tried most of these in the vain hope that something will work, but nothing has.
Any ideas how I can fix this?
Thanks.
UPDATE
I've installed a web-based Access database management system and have tried to compact and repair the database. I received the error:
The Microsoft Jet database engine cannot open the file '<snip>'. It is already opened exclusively by another user, or you need permission to view its data. (-2147217911)
I have run the macro detailed here to determine who is logged into the database, and it just showed the admin user (which was me, while running it).
Those errors mean one thing: the database file is open in some other process and is therefore locked.
Most likely that "web-based Access database management system" is the culprit; try to find out how you can configure it not to lock the file, or get rid of it.
As a workaround, or a way to verify the real problem, you can copy the .mdb file to a different location and change the classic ASP connection string to check whether you can update the database in its new location.
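If you want to rule out the ASP code entirely, a small script against the copied file can confirm whether the .mdb itself accepts updates. This is just a sketch: the table and column names are made up, and it assumes the classic Jet ODBC driver ("Microsoft Access Driver (*.mdb)") is installed on the machine.

    # Sketch: try an update against the copied .mdb to separate file/locking problems
    # from the ASP code. "Customers" and "Notes" are made-up names; use a real table
    # and column from your own database.
    import pyodbc

    conn_str = r"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\temp\copy_of_db.mdb;"
    with pyodbc.connect(conn_str) as conn:
        conn.execute("UPDATE Customers SET Notes = ?", "write test")
        conn.commit()

    print("Update succeeded: the copied file is writable, so look at the original share/folder permissions.")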
@Remou's comment above about checking the file and folder permissions was correct.
I had our server admin check the permissions, and it seems that the write access had dropped off the folder (and the files also inherit their permissions from the folder). He said that this has happened before when saving directly over the fileshare.
(Accepting this in lieu of an answer from @Remou.)
There appears to be a slight problem with my LiveCycle server with regard to documents suddenly disappearing!
After looking around, it seems that the documents are not deleted completely, but moved to another folder ('livecycle\content\lccs_data\contentstore.deleted'). Thankfully these 'deleted' documents can simply be moved back to the content store. LiveCycle's Content Services uses Alfresco, and I found the following with regard to this automated deletion of documents:
http://wiki.alfresco.com/wiki/Content_Store_Configuration
For now, I will attempt to remove this functionality (removing the StoreCleaner's listener, so no action will be taken), but I'm unsure as to whether removing the detection of orphaned documents could be detrimental to the system.
Does anyone know exactly why the system is incorrectly flagging files as orphaned and then removing them? In the LiveCycle Admin UI I can still see the documents that have been deleted, so at least the UI is still holding references to them.
I'm using LiveCycle quite a bit and have no problems here. I'm doing custom Alfresco stuff all over the place and have no disappearing docs either.
I guess there might be some custom rules/JS or a LiveCycle process that flags the docs. So check all of these and try to analyze it in more depth, so we can help out.
Do they get deleted randomly, or just from specific folders? Try turning off the rules to check whether it still happens.
And if it still occurs, just log a call with Adobe; you never know what Adobe has built into it ;)
I recently started with BAM in BizTalk.
I created a simple orchestration.
I configured BAM for BizTalk, of course.
I used Excel to create a simple activity definition with only text fields.
I deployed this XML definition to the BizTalk Primary Import database using: bm deploy-all -DefinitionFile:myxml.xml.
Opened the TPE and opened the deployed activity definition.
Opened the orchestration, then opened the schema it uses and linked the schema fields to the BAM activity fields.
After this I applied the tracking profile.
I then put a file through BizTalk which uses the orchestration. The file was output as expected.
If I now check the Primary Import database, I can see that the activity instance is visible in the active table, but the completed field is set to false, and it doesn't change. Also, no data is filled in: only ActivityID and LastModified are populated, none of the columns which I specified myself are filled, and RecordID is null.
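For reference, this is roughly how I'm checking those tables. It's only a sketch: 'MyActivity', the server name, and the default bam_<ActivityName>_Active / bam_<ActivityName>_Completed table naming are stand-ins for my actual deployment.

    # Sketch: dump the active and completed instance tables for a BAM activity from
    # BAMPrimaryImport. "MyActivity" and the server name are placeholders.
    import pyodbc

    CONN = "Driver={SQL Server};Server=.;Database=BAMPrimaryImport;Trusted_Connection=yes;"
    ACTIVITY = "MyActivity"  # the activity name from the Excel definition

    with pyodbc.connect(CONN) as conn:
        for suffix in ("Active", "Completed"):
            table = "bam_" + ACTIVITY + "_" + suffix
            rows = conn.execute("SELECT * FROM [" + table + "]").fetchall()
            print(table + ": " + str(len(rows)) + " row(s)")
            for row in rows:
                columns = [column[0] for column in row.cursor_description]
                print(dict(zip(columns, row)))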
What am I doing wrong?
I thought I did all the necessary steps. I know it's all still pretty basic, but I need to get this working before I can do more, right?
Getting BAM to work can be tricky sometimes. First, did you restart your BizTalk hosts after deploying everything? Not doing so can cause issues.
Almost the first thing I do when I run into any issues with BAM is to turn on BAM tracing and either redirect it to a file or use DbgView to check for any errors BAM might be running into.
One of the crappy things about BAM is that it will sometimes fail silently, with the only information about the error being dumped into the BAM trace, so getting familiar with it is important.