I'm about to set up a new server that will be dedicated to ClearCase views. I'm wondering: is there any way to move the existing views to the new server?
In theory, yes: you can unregister a view (cleartool unregister + cleartool rmtag -view) and register it again on the new server.
See:
"Moving a view to a host with the same architecture or to a NAS device"
"Moving a view to a host with a different architecture": it involves a cleartool reformatview -dump/-load in addition of the unregister/register steps.
(both are listed after the more general page "About moving ClearCase servers")
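For the same-architecture case, the sequence boils down to something like this (a sketch; the tag, hosts, and storage paths are placeholders for your own):

    # On the old server: drop the registry entries for the view
    cleartool rmtag -view myview
    cleartool unregister -view /net/oldhost/views/myview.vws

    # Copy the view storage directory to the new server, then re-register it
    cleartool register -view /net/newhost/views/myview.vws
    cleartool mktag -view -tag myview /net/newhost/views/myview.vws

The different-architecture case adds the reformatview -dump before the copy and the -load after it.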
Honestly, in the past, I've just found it easier to throw away views and start over. We used a standard set of config specs that created task-specific branches per view. We worked with dynamic views (if you're working with snapshot views in ClearCase, I think you're using the wrong version control system), but we had our developers check in all of their changes (which by default were checked in against their feature branch). We'd then delete all of the views on the host being decommissioned and have developers re-create their views normally (which would automatically start them up on the new server). We naturally abstracted away a lot of the customized config specs and metadata setup, so they only needed to run a simple command to continue.
We were not using UCM, however.
Now that I think about it, we just had a small handful of scripts that were used to do this work; basically, they wrapped all of the dirty "view" details away from the developers (who, honestly, don't need to know about them in general).
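Something along these lines, if it helps; this is a from-memory sketch rather than our actual script, and the tag and path conventions are made up:

    # mkdevview <task-id>: hypothetical wrapper; creates a view with our
    # standard naming and applies the task's canned config spec
    TAG=${USER}_$1
    cleartool mkview -tag "$TAG" /net/viewhost/views/"$TAG".vws
    cleartool setcs -tag "$TAG" /config/specs/"$1".cs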
We have quite a large MVC4 application and we would like to have Selenium go through every page and make sure it loads - some sort of smoke test.
I can use reflection to go through the assembly, find all controllers and all actions, skip actions that are POST-only, and come up with parameters for actions that require them.
Then I'll feed this list to Selenium and check that everything I need on the pages is done appropriately.
But before I start playing with reflection, I'd like to check whether this has already been done, so I don't reinvent the wheel. I've googled for such a thing but could not find anything.
p.s. Writing the reflection code is not an issue. Selenium is covered as well. Just checking if this has already been done.
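For concreteness, the kind of reflection pass I have in mind is roughly this (a sketch; MyMvcApplication stands in for any type from our web assembly):

    // Rough sketch: enumerate GET-able, parameterless controller actions.
    using System;
    using System.Linq;
    using System.Reflection;
    using System.Web.Mvc;

    class SmokeTestUrlLister
    {
        static void Main()
        {
            // "MyMvcApplication" is a placeholder for a type in your web assembly
            Assembly webAssembly = typeof(MyMvcApplication).Assembly;

            var urls =
                from t in webAssembly.GetTypes()
                where typeof(Controller).IsAssignableFrom(t) && !t.IsAbstract
                // DeclaredOnly skips inherited Controller plumbing (Dispose, etc.)
                from m in t.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                where typeof(ActionResult).IsAssignableFrom(m.ReturnType)
                      && !m.IsDefined(typeof(HttpPostAttribute), false)
                      && m.GetParameters().Length == 0   // parameterized actions need the extra step above
                select "/" + t.Name.Replace("Controller", "") + "/" + m.Name;

            foreach (string url in urls.Distinct())
                Console.WriteLine(url);   // feed this list to Selenium
        }
    }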
The AttributeRouting project has a route debugger in place, which does work even if you don't use attribute routing inside your project.
You can see the class that handles displaying the routes over on GitHub, but I'm not sure it will display the routing information when the project isn't run locally. You may need to adapt that code so you can access it safely from your Selenium instance (and make it machine-readable, using JSON or something).
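If you go the adaptation route, a minimal sketch of such a route-dumping action might look like this (RouteDumpController is a hypothetical name, and you'd want to restrict access to it outside of test):

    // Hypothetical controller exposing the registered routes as JSON
    // so a Selenium run can fetch them.
    using System.Linq;
    using System.Web.Mvc;
    using System.Web.Routing;

    public class RouteDumpController : Controller
    {
        public JsonResult Index()
        {
            var routeUrls = RouteTable.Routes
                .OfType<Route>()          // ignore non-Route entries
                .Select(r => r.Url)       // e.g. "{controller}/{action}/{id}"
                .ToList();

            return Json(routeUrls, JsonRequestBehavior.AllowGet);
        }
    }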
We have a requirement that on a page publish, we need to:
Find a component presentation that has a component based upon a particular schema.
Extract certain field values from that component and store them in a custom database table that's available to our .NET application (on the Content Delivery side).
I think this is a good candidate for either a Deployer extension or a Storage extension, but I'm a little unclear on which and why, having never written either.
I've ruled out the Event System as this kind of code would be located on the CM, which seems like the wrong "side" to me - my focus is on extending what happens on the CD-side after a page is published.
I've read a few articles on Tridion World (this, this, this and this) and I think a storage extension would be the better choice?
Mihai's article seems to be very close to what we need, where he uses a new item type mapping:
    <ItemTypes defaultStorageId="brokerdb" cached="true">
        <Item typeMapping="PublishAction" cached="false" storageId="searchdb" />
    </ItemTypes>
But how does Tridion "know" to use this new item type when content is published? (It's not one of the defined TYPE_NAMEs, which is kind of the point.)
I should clarify I'm a .NET/C# dev not a Java dev so this is probably really obvious to Java people - apologies if it is!
Cheers
Tridion will not know by default how to deploy your new entity. My advice is to create a Deployer Module (your links should give you enough information about how to do that) that executes in the post-processing phase of the deployment process, processes all components from the deployment/transport package, extracts the needed information, and uses a custom Storage Extension to store it.
Be careful: you need to set up your new type in the configuration, but you also need to use it yourself from that Deployer Module.
Hope this helps.
I need your input on the following: is there a way in Tridion 2011 to publish or unpublish components/pages/templates from custom resolver code? I understand we can play with the list of resolved items (by adding a CP, etc.), but is there a way to push an item into the publishing queue from custom resolver code?
You can add or remove any number of items to be part of the existing package / transaction.
If you want it to be part of a new entry in the Publishing Queue instead, the event system seems more appropriate than a resolver. But the items you are publishing automatically won't show up in the "Items to Publish" screen if you Publish them separately, so you need to decide if that's a good thing or not.
Peter (and Nuno) have really answered your question in the best way: you should use a resolver to add the Pages or Component Presentations to the package rather than making new publish transactions. However, you can publish items using the Core Service, so there is no reason you could not call the Core Service from a resolver and initiate your new publish actions that way.
However, it does not sound like a good idea; perhaps you can update your question to explain why you need to do this.
I used to use the PublishEngine object in my templates to add items to the Publish Queue (see http://www.tridiondeveloper.com/the-story-of-sdl-tridion-2011-custom-resolver-and-the-allowwriteoperationsintemplates-attribute), but custom resolvers and other techniques are far superior.
The challenge is to determine whether ASP.NET is enabled within IIS7 in a reliable and correct way.
Enabling/Disabling is done in this case by going into:
Server Manager ->
Roles ->
Web Server (IIS) ->
Remove Role Services ->
Remove ASP.NET
The natural place to determine this should be the applicationHost.config file. However, with ASP.NET enabled or disabled, we still have the "ManagedEngine" module available, and we still have the ISAPI filter record in the <isapiFilters> tag.
The best I can find at the moment is to check whether the <isapiCgiRestriction> tag includes aspnet_isapi.dll, or whether the ASPNET trace provider is available.
However, these don't detect the presence of the ASP.NET config directly, just a side effect that could conceivably be reconfigured by the user.
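For reference, my current check looks roughly like this (a sketch using Microsoft.Web.Administration, assuming the code runs elevated on the web server):

    // Sketch: look for aspnet_isapi.dll in <isapiCgiRestriction>.
    using System;
    using Microsoft.Web.Administration;

    class AspNetDetection
    {
        public static bool AspNetIsapiRegistered()
        {
            using (var serverManager = new ServerManager())
            {
                var config = serverManager.GetApplicationHostConfiguration();
                var section = config.GetSection("system.webServer/security/isapiCgiRestriction");

                foreach (ConfigurationElement entry in section.GetCollection())
                {
                    var path = (string)entry["path"];
                    if (path != null &&
                        path.EndsWith("aspnet_isapi.dll", StringComparison.OrdinalIgnoreCase))
                        return true;   // a side effect of ASP.NET, not proof it's enabled
                }
                return false;
            }
        }
    }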
I'd rather do this by examining the IIS configuration/setup rather than the OS itself, if possible, although enumerating the Roles & Services on the server might be acceptable if we can guarantee that this technique will always work whenever IIS7 is used.
Update
Thanks for the responses. Clarifying exactly what I want to do, I'm pulling settings from a variety of places in the server's configuration into a single (readonly) view to show what the user needs to have configured to allow the software to work.
One of the settings I need to bring in is the ASP.NET entry in the Role Services list (the one highlighted in red in the screenshot).
I don't need to manipulate the setting, just reproduce it. I want to see whether the user checked the ASP.NET box when they added the IIS role to the server, as in this example they clearly didn't.
I'd like to do this by looking at something reliable in IIS rather than enumerating the role services because I don't want to add any platform specific dependencies on the check that I don't need. I don't know if it will ever be possible to install IIS7 on a server that doesn't have the Roles/Services infrastructure, but in preference, I'd rather not worry about it. I also have a load of libraries for scrubbing around IIS already.
However, I'm also having trouble finding out how to enumerate the Roles/Services at all, so if there's a solution that involves doing that, it would certainly be useful, and much better than checking the side effect of having the ASPNET trace provider lying around.
Unfortunately, if you don't check the ASP.NET box, you can still get the ManagedEngine module in the IIS applicationHost.config file, so that's not a reliable check. You can also have ASP.NET mapped as an ISAPI filter, so checking for those isn't enough either. These things are especially problematic in the case where ASP.NET was installed but has since been removed.
It looks like the best solution would be to examine the Role Services. However, API information on this is looking pretty rare, hence the cry for help.
The definitive way to know whether they checked that box or not is to look at the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\InetStp\Components
In there you should see the ASPNET, NetFxEnvironment, and NetFxExtensibility values set to 1. This registry key is the IIS setup key that contains all the components that have been enabled in IIS.
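A minimal sketch of reading that from C# (assuming .NET 4, so you can force the 64-bit registry view even from a 32-bit process):

    // Sketch: read the IIS setup key directly.
    using System;
    using Microsoft.Win32;

    class InetStpCheck
    {
        public static bool IsAspNetComponentEnabled()
        {
            // Use the 64-bit view in case this runs as a 32-bit process on x64
            using (RegistryKey hklm = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
            using (RegistryKey key = hklm.OpenSubKey(@"SOFTWARE\Microsoft\InetStp\Components"))
            {
                if (key == null) return false;   // IIS not installed at all
                return Convert.ToInt32(key.GetValue("ASPNET", 0)) == 1;
            }
        }
    }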
Determining whether ASP.NET is even an installed feature (a prerequisite for enabling it) can be done through PowerShell, which implies there is a .NET API out there for it if you dig hard enough. The PowerShell method:
    Import-Module servermanager
    Get-WindowsFeature web-asp-net
This will return an object of type Microsoft.Windows.ServerManager.Commands.Feature. Its Installed property is a boolean that indicates whether or not the feature is installed.
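If you'd rather not shell out, you can host PowerShell in-process from .NET; a sketch, assuming a reference to System.Management.Automation and a Server OS with the ServerManager module:

    // Sketch: invoke Get-WindowsFeature by hosting PowerShell in-process.
    using System.Linq;
    using System.Management.Automation;

    class FeatureCheck
    {
        public static bool IsWebAspNetInstalled()
        {
            using (PowerShell ps = PowerShell.Create())
            {
                ps.AddScript("Import-Module ServerManager; " +
                             "(Get-WindowsFeature Web-Asp-Net).Installed");
                // Empty result (feature unknown) falls through to false
                return ps.Invoke().Select(r => (bool)r.BaseObject).FirstOrDefault();
            }
        }
    }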
So do you want the easy way? Make a nice pretty .aspx page that displays as HTML, with an error block in a div saying "You need to install ASP.NET" that changes to "ASP.NET is installed" once ASP.NET is working. Then have the tool copy it to the directory identified in IIS as the *:80 site (or create the directory mapping in IIS programmatically by altering the XML, and remove it later) and launch the page in the default browser.
It may not be the most elegant approach, but it does ensure that testing shows which features are truly installed, versus what's in an XML file.
Because that will scream "do it the lazy, ignorant way", I'll remind you that the only way for me to know in JavaScript which features I can use is to test them before I try to use them, or assume they're there and watch it blow up. My point is, it doesn't matter what gets reported in a file; it matters what you can actually use. Just because C:\Windows\Microsoft.NET\Framework\v3.xxxxxxxx exists and has files doesn't mean the DLLs are registered in the GAC, does it?
I'm sure there's a simple explanation for this, but I haven't had much luck finding the answer yet, so I figured I'd put the word out to my colleagues, as I'm sure some of you have run into this one before.
In my (simple) dev environment, I'm working with a handful of WCF web services, imported into my FB3 project and targeting a local instance of the ASP.NET development web server. All good, no problems. But what I'd like to know now is: what's the right way to deploy this project to test, staging, and production environments? If my imported proxies all point, say, to http://localhost:1234/service.svc (from which their WSDLs were imported), and all I'm deploying is a compiled SWF, does Flex Builder expect me to "Manage Web Services > Delete", "> Add", recompile, and release every time I want to move my compiled Flex project from development to test, to staging, and ultimately into production? Is there a simpler workflow for this?
Thanks in advance -- hope my question was clear.
Cheers,
Chris
If you have path names that change depending on the environment, then you will likely need to recompile for each environment, since these are compiled into the SWF.
I typically use Ant scripts to handle my compile/deployment process when moving between development and production environments. This gives me the ability to change any path names dynamically during the compile. These build files can be integrated into Flex Builder, making the process very easy once you have everything set up; it can then be done with one click or on a schedule.
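For example, a stripped-down build file might look like the following; this is a sketch assuming the Flex SDK's Ant tasks, and FLEX_HOME, env.name, and services.url are placeholders for your own setup:

    <!-- Sketch of a per-environment Flex build -->
    <project name="myapp" default="compile">
        <!-- flexTasks.jar ships with the Flex SDK -->
        <taskdef resource="flexTasks.tasks"
                 classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/>

        <!-- e.g. ant -Denv.name=staging picks up staging.properties -->
        <property file="${env.name}.properties"/>

        <target name="compile">
            <mxmlc file="src/Main.mxml" output="bin/Main.swf">
                <!-- bake the environment's service URL in as a compile-time constant -->
                <define name="CONFIG::servicesUrl" value="'${services.url}'"/>
            </mxmlc>
        </target>
    </project>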
Thanks Brett. I've been meaning to dig into automating my build processes anyway, so now's probably as good a time as any. :)
You do not need to build a SWF for each environment. Here's a technique I use commonly:
1. Externalize your configuration properties into an XML file; in this case, it could be a URL for each service or a base URL used by all your services (see the sample file after this list)
2. When the application starts up, make an HTTPService call to load the XML file, parse it, and store your properties onto some bindable "configuration object"
3. Bind the values from that object against your objects that depend on the URLs
4. Dispatch an event that indicates your configuration is complete. If you have some kind of singleton event dispatcher used by some components in your app, use that, so that the notification is global
5. Now proceed with the rest of the initialization of your application
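The externalized file itself can be trivial; something like this, where the element names are made up:

    <!-- sample externalized config; swap the URL per environment -->
    <configuration>
        <services baseUrl="http://test.example.com/services/"/>
    </configuration>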
It takes a little work to orchestrate your app such that certain parts won't initialize until steps 1-5 have taken place. However, I think it's good practice to handle a lot of this initialization explicitly rather than in constructors or in the various initialize or creationComplete events of components. You may need to reinitialize things when a user logs out and a different user logs in; if your app is already set up so that initialization is something you control, then reinitialization will not be a problem.