To enable custom user preferences during publishing, we would like to determine in the resolve step (inside a custom resolver) who the publishing user is (not the user account configured for the Publisher service, but the user who initiated the publish operation).
To find the original publishing user we would need access to the PublishTransaction object (specifically its Creator property); we cannot use the User property of the Session inside the custom resolver, as that Session is created by the Publisher service (and would give us the service account).
To find the current PublishTransaction, Mihai has provided us with an excellent hack.
In essence: if we can get our hands on an Engine object, we can determine the context publish transaction.
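For context, the hack goes roughly along these lines (my paraphrase of Mihai's approach rather than code from this question; the exact property chain and the 66560 publish-transaction item type should be verified against your Tridion version): the binary storage path prepared for the current render contains the publish transaction's TCM URI, which can be parsed out and used to load the PublishTransaction, whose Creator is the publishing user.

using System.Text.RegularExpressions;
using Tridion.ContentManager;
using Tridion.ContentManager.Publishing;
using Tridion.ContentManager.Templating;

public static PublishTransaction GetContextPublishTransaction(Engine engine)
{
    // The binary storage path contains something like "...\tcm_0-123-66560"
    string binaryPath = engine.PublishingContext.PublishInstruction.RenderInstruction.BinaryStoragePath;
    Match match = Regex.Match(binaryPath, @"tcm_\d+-\d+-66560");
    if (!match.Success)
    {
        return null;
    }
    // Turn "tcm_0-123-66560" back into a proper TCM URI and load the transaction
    TcmUri transactionUri = new TcmUri(match.Value.Replace('_', ':'));
    return new PublishTransaction(transactionUri, engine.GetSession());
}

Given the transaction, transaction.Creator is the user who initiated the publish.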
In our custom resolver the Resolve method is called with four parameters:
public void Resolve(
    IdentifiableObject item,
    ResolveInstruction instruction,
    PublishContext context,
    ISet<ResolvedItem> resolvedItems)
{
}
The item can be used to provide us with a Session object, but neither an IdentifiableObject nor a Session holds a reference to the Engine.
The resolve instruction is just a set of data properties for the resolve.
The publish context (unfortunately not a PublishingContext) holds the publication and publication target only.
A ResolvedItem can give us access to the Session again but not the Engine.
My question (finally) would be two-fold:
1. Have I missed any potential places (other than the PublishTransaction) from which the context user account can be determined?
2. Have I missed any potential places from which the Engine can be determined, given the parameters the IResolver.Resolve() method is called with?
Edit: I realize I left out the broader picture of why we want to customize the publish activity with extra metadata (from user preferences), because it is a bit of a long story:
What I ultimately need is, in the component template, to activate a specific version of a component (by walking up the version list and finding a version that is linked to a dedicated marker component), but in order to do that I need to know what the marker component is. For this reason we publish the marker component (which resolves all linked components and ultimately pages), and the custom resolver allowed us to push the TCMURI of the marker component into the session cache (making it accessible in the CT).
Now we want to set a "preference" for a specific marker component at a user level to allow smaller batches of assets to be published within this marker context (as opposed to publishing everything linked to the marker at once).
Because the TBBs running inside the CT actually DO have an Engine object available, we can use Mihai's method there to determine the publishing user (as opposed to pushing the marker context from the resolver, which is what we initially did) and in this way bypass the issue completely.
I was wondering why there is such a difference in the information available between a resolve and a render operation; both are after all part of the same publishing context. I cannot help but feel I'm overlooking something very basic, but maybe I'm not, and accessing a publishing context or engine from the resolver is simply impossible.
Edit: as presumed by Dominic and confirmed by Nuno there is no "Engine" at the time of resolving; as such this half of my question has been answered.
That leaves
Have I missed any potential places (other than the PublishTransaction) from which the context user account can be determined?
I went down this road before in a project (trying to get the user in a Resolver extension) and it was a world of pain.
I moved from a Resolver extension to a Render Extension, and even considered a Transport extension, just to go back to the simplest approach possible: a TBB.
Granted, my use case was different than yours, as it looks like you may want to change the resolving behavior based on the user (some users don't like link propagation, right? - if they're afraid of changing content, then they shouldn't change it ;-) ) but I ended up not implementing it that way.
I think there was an Enhancement Request to include more info about the user triggering publishing actions, but if that is implemented in the product it will be for Tridion 2013, not sure you can wait that long.
I'd guess that you can't get your hands on an engine at this point. After all, you're resolving, not rendering. You mention that you want to "enable user preferences", but you don't tell us much more about your actual problem, other than the line of investigation you are currently following (which may or may not be a dead end).
What kind of user preferences are relevant to publishing, and why do you need to have them in the resolver?
I've got an application that I'm trying to convert from a uni-tenant (single client, separate database, separate website) solution to a partially multi-tenant one (single client, separate database, SHARED website).
I'm trying to figure out how to get my SimpleMembership setup to work in a situation where I can't have WebSecurity.InitializeDatabaseConnection tied to Application_Start. Instead, I want each session to set up its own 'web security context', independent of the application. The reason I need this behavior is that my user/membership data is held in the uni-tenant (separate database) portion of the system and, if initialized at Application_Start, only allows access to the FIRST tenant to visit the app. If I could, instead, make the WebSecurity system work within the session (or request) scope, this issue would be resolved.
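To make the limitation concrete, here is a minimal sketch of the standard wiring (the connection string and table/column names are placeholders, not from my actual setup). Because Application_Start runs once per application domain, whichever tenant's membership database is named here effectively wins for every later visitor:

using WebMatrix.WebData;

protected void Application_Start()
{
    // Can only be called once per app domain; a second call throws.
    WebSecurity.InitializeDatabaseConnection(
        "Tenant1Membership",                    // placeholder connection string name
        "UserProfile", "UserId", "UserName",    // membership table and key columns
        autoCreateTables: true);
}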
I've been unable to find any documentation that says what I'm trying to do CANNOT be done, but the indication is there all the same (in terms of lots of example posts saying that WebSecurity.InitializeDatabaseConnection should happen on Application_Start).
Short of completing the FULL transition to a multi-tenant database (which is on the schedule, but can't be completed in the allotted time), what can I do to work around this 'limitation'?
(My best idea so far is that all client websites on a single website node (or better yet, across ALL nodes) would have to share a special login database.)
Edit
2015-04-17
I found this post (ASP.NET - SimpleMembershipProvider initialized incorrectly, how do I reinitialize it?) which, though not definitive, suggests that once the WebSecurity database is 'wired up', there is no facility for re-initializing it later on.
In one of our customisations we have implemented permission checks with dynamic authorities in Alfresco. When migrating to SOLR, the search results for the nodes affected by our dynamic permissions became faulty. The reason seems to be that permission checks are done at query time, and our dynamic permissions are not taken into account :(
Here is a short explanation of how our dynamic authorities work:
Check if a node has an association to an authority; if the current user belongs to that authority (group) -> approve access. The node has a lot of different associations, and every one is checked and given READ or WRITE access depending on which association it belongs to.
Is there any way to tell the Search service to do permission checking on the returned nodes instead (like Lucene does)? One workaround I thought of would be to run the query as administrator, then iterate over the result and manually do the permission checks.
Could that be a way to solve it? Any other ideas you could share with me?
Alfresco will perform after-query permission checks on SOLR results when the security.anyDenyDenies property is set to true. This check will involve any dynamic authorities, i.e. it will be a standard check.
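For reference, this is normally a single line in alfresco-global.properties (assuming that is where your installation keeps its repository properties):

security.anyDenyDenies=true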
The main problem then would be to get the full results from SOLR without pre-filtering there. Other than setting the runAs user to System in a custom sub-class of org.alfresco.repo.search.impl.solr.SolrQueryLanguage (within / around the super.executeQuery method call; the relevant beans are search.lucene.alfresco, search.solr.alfresco, search.fts.alfresco.index and search.solr.cmis in solr-search-context.xml), I see no simpler way to achieve this.
Note: This applies to Alfresco 4.2d and later - I don't know when after-query permissions for SOLR have actually been introduced, but they weren't present when 4.0 came out AFAIK.
I'm a nascent coder creating a simple iOS app. I'm experimenting with coding push notifications for the first time, and I have a simple question regarding the Parse Installation object and a scenario where multiple users log on to the same device (let's say a loaner iPad at a library).
Based on the Parse documentation I've seen, when a user subscribes to a channel - let's say "The Giants" - it saves this info on the Installation Object. But if the user logs out and another user logs in, does Parse assume that we are to erase the previous channels? Should channels therefore be saved to the User class first, and only saved to Installation when a user logs in? And similarly how do we handle advanced targeting where I want to query Installation for a specific User objectId? Is the best practice to always leave the last user logged in listed as 'owner'/'user'?
If you find the library example impractical, also consider something like signing into your Spotify account on a friend's device in order to play a private playlist at a party. I know these are less common scenarios, but I want to make sure I know how to handle them.
I'm new to Push Notifications so I may be missing something fundamental here, but if any experienced developer can lend some advice as to how they handle this scenario, it would be greatly appreciated.
Store a reference to the PFUser when you save the installation. Add a field @"owner" and tag the PFUser to it.
After a user logs in, if they are not associated with the current installation, send an alert asking if they'd like to receive pushes on this device. If that's the case, resave and update the current installation. Otherwise leave it as is.
This is a tricky area, let me know what you come up with.
It's pretty rare that people will sign onto a service using someone else's phone, so I don't think it's a huge issue if you want to just "see what happens" and work it out if there's demand.
I have 3 iOS apps using a single Parse application which supports push notifications for all 3 apps. I have NDEBUG defined in the project for the Release configuration, and I use #ifndef NDEBUG to set a boolean value on the current installation. This makes it easy to identify which installations I can use for testing push notifications. I also use the appIdentifier value to filter to the application I am testing.
I also set other values as needed but these values are a good start.
#ifndef NDEBUG
BOOL debug = YES;   // debug builds: NDEBUG is not defined
#else
BOOL debug = NO;    // release builds define NDEBUG
#endif

if (debug) {
    [currentInstallation setObject:[NSNumber numberWithBool:YES] forKey:@"debug"];
}
else {
    [currentInstallation setObject:[NSNumber numberWithBool:NO] forKey:@"debug"];
}
I am currently running Tridion 2011 SP1.
I am writing some code that runs whenever a page is published. It loops through each component template in the page, gets the component and writes out various fields to an XML document. For pages with many component templates or components with many fields this process can take a while to run. If the process takes more than 30 seconds I get an error
The operation performed by thread "EventSystem0" timed out.
Component: Tridion.ContentManager
Errorcode: 0
User: NT AUTHORITY\NETWORK SERVICE
followed by another
Thread was being aborted.
Component: Tridion.ContentManager
Errorcode: 0
User: NT AUTHORITY\NETWORK SERVICE
StackTrace Information Details:
at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
at System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments)
at System.Delegate.DynamicInvokeImpl(Object[] args)
at Tridion.ContentManager.Extensibility.EventSubscription.DeliverEvent(IEnumerable`1 subjects, TcmEventArgs eventArgs, EventPhases phase)
I believe I have three options.
1. Increase the timeout
This seems like a lazy solution and only hides the problem. There is no guarantee that the timeout problem won't reoccur. I'm also not sure where the timeout value is stored (I've tried changing a few values in the Tridion Content Manager.msc snap-in but no luck).
2. Do less in the actual event handler routine and have a separate process do all the hard work
This doesn't seem like the correct solution either. I would really like to keep all my event handler code in one place. We have a solution like this in place for our live 5.3 installation and it is a nightmare to maintain (it is very old and poorly written).
3. Make my code more efficient
My components have many fields, and my code must delve deeper into each field if it is a Component Link. I guess that because the properties of Tridion objects are lazy loaded, there is one call to the API/database for each property I access. It takes on average 0.2 seconds to retrieve a property, which soon stacks up when accessing multiple properties. If there were a way to retrieve all properties in one call, that would be useful.
Any ideas?
Have you considered running your event asynchronously? You can do this by changing the following line:
EventSystem.Subscribe<IdentifiableObject, TcmEventArgs>(....)
to
EventSystem.SubscribeAsync<IdentifiableObject, TcmEventArgs>(....)
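As a minimal sketch of what the full asynchronous subscription could look like (the class name, handler name and event phase are illustrative, and I have kept the TcmEventArgs type from your snippet; check the exact event argument types for publish events in your version):

using Tridion.ContentManager;
using Tridion.ContentManager.Extensibility;
using Tridion.ContentManager.Extensibility.Events;

[TcmExtension("AsyncPublishHandler")]
public class AsyncPublishHandler : TcmExtension
{
    public AsyncPublishHandler()
    {
        // Asynchronous handlers run outside the synchronous event pipeline,
        // so long-running work no longer blocks it.
        EventSystem.SubscribeAsync<IdentifiableObject, TcmEventArgs>(
            OnEvent, EventPhases.TransactionCommitted);
    }

    private void OnEvent(IdentifiableObject subject, TcmEventArgs args, EventPhases phase)
    {
        // Build the XML document here.
    }
}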
One thing you might consider doing is using the Component's .ToXml() method and getting your values from the XML DOM instead of using the Tridion API. This is usually considerably faster, and you can use XSLT or Linq to "walk" through your fields.
If you are really only interested in fields, then just use the .Content (and .Metadata) properties and, again, use Linq or XSLT or whatever technology you want to parse the XML (except RegEx perhaps).
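For example, a rough sketch of that second approach (assuming TOM.NET, where Component.Content is an XmlElement; the field name "body" is purely illustrative):

using System.Linq;
using System.Xml.Linq;

// component is the Component being processed by the event handler
XElement content = XElement.Parse(component.Content.OuterXml);
string body = content.Elements()
    .Where(e => e.Name.LocalName == "body")   // match on local name, ignoring the schema namespace
    .Select(e => (string)e)
    .FirstOrDefault();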
You are simply doing a lot of processing and that takes time. Maybe there's a technical fix, but the first thing to do in this situation is to go back to Why and What? Publishing a page is fundamentally about rendering the HTML and binaries that you want to output for that page. How long should that take?
So please could you tell us why you are doing this? Perhaps part of the effort can be moved somewhere else without compromising on good design. If we know what the purpose is, perhaps we can help more.
SDL Customer Support have advised that I increase the timeout. While not a great solution, it's the only one that is available. To do this:
On the server where the Content Manager is installed, open Tridion.ContentManager.config, which should be located in the config/ subdirectory of the Content Manager root location (this defaults to C:\Program Files\Tridion\ or C:\Program Files (x86)\Tridion\)
Find the <eventSystem> node
Increase the threadtimeout value (this is in seconds) to something higher; I set it to 120 (see the snippet after these steps)
Save the Tridion.ContentManager.config and restart the Tridion Content Manager Service Host service
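For illustration, the edited node ends up looking something like this (attribute casing and any other attributes on the node depend on your installation, so edit the existing node rather than pasting this in):

<eventSystem threadtimeout="120" />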
Further documentation is available http://sdllivecontent.sdl.com/LiveContent/web/pub.xql?action=home&pub=SDL_Tridion_2011_SPONE&lang=en-US#addHistory=true&filename=ConfiguringEventSystem.xml&docid=concept_48C53F76CBFD45A783A3975CA72ECC49&inner_id=&tid=&query=&scope=&resource=&eventType=lcContent.loadDocconcept_48C53F76CBFD45A783A3975CA72ECC49. It does require a username and password to access.
If you really need the processing time then I think you should write a web service that performs the actions you need, which you can call from the event handler. This would not influence user experience (in the case of a synchronous event handler) as much either.
I have a use case where I need to add information about the user that created the current publish transaction (more than just their user name, I also need group memberships and some other details) and pass it on to a deployer extension.
When publishing, this is relatively easy to do with the following code:
engine.PublishingContext.RenderedItem.AddInstruction(
InstructionScope.Global, instruction);
As you may notice, this AddInstruction method is only available on a RenderedItem, but unpublish instructions do not render items, and therefore I cannot use the same technique.
Short of hacking the package manifest in the file system when generating it (for instance in a custom resolver) how would you tackle this requirement?
Do you have more info on what you need to do with this information in the Deployer? Would it be an option to capture the un-publish action after it happens with an event handler, and then create a second publish action which sends the message to the Deployer with the additional information? (I know that means 2 round trips, but I can't think of another approach at this point.) Un-publish actions have been a bit tricky ever since R4; back in R3 we actually had code which was executed by templates in the unpublish phase (although it was all Perl back then).
I wonder whether this is a missing extensibility point. After all, I can see why you would want to transmit extra data with an unpublish. So firstly, I'd suggest an enhancement request to have some functionality added to support this use case.
Getting to the point of your question... how to implement something without hacking the package. Perhaps you could make the information available through another mechanism. For example, you could write a web service that runs on the content manager and which serves the data when queried for a given publish transaction ID.
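As a rough sketch of that idea (not an official pattern; the class and method names are hypothetical, and hosting/securing the service is left out): a small service on the Content Manager loads the transaction by ID and returns whatever creator details the deployer extension needs.

using Tridion.ContentManager;
using Tridion.ContentManager.Publishing;

public class PublishTransactionInfoService
{
    // Called by the deployer extension with the publish transaction's TCM URI
    public string GetCreatorName(string publishTransactionId)
    {
        using (Session session = new Session())
        {
            PublishTransaction transaction =
                new PublishTransaction(new TcmUri(publishTransactionId), session);
            // Creator is the user who triggered the (un)publish; group memberships
            // and other details could be resolved and serialized here as well.
            return transaction.Creator.Title;
        }
    }
}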