We are using Tridion 2011 SP1. Content Delivery is in .Net.
We want to make only the Transport Package available to a search engine. This search engine will extract the required content/metadata from the transport package and index it. The search engine is installed on a different domain/server.
To achieve this we want to configure a Publication Target which will publish the content to the search engine's server but will not deploy the content. Only the Transport Package will be made available in some folder (incoming\success). Can we do this using HttpUpload.aspx and by disabling some settings in cd_storage_conf.xml or cd_deployer_conf.xml?
The standard approach for this would be to extend the CD storage process. You can find a good explanation of how to do this at http://www.sdltridionworld.com/articles/sdltridion2011/tutorials/extending-content-delivery-storage-sdltridion-2011-1.aspx
I would argue that Jeremy's approach is the correct answer, but if writing a storage mechanism in Java sounds too taxing, you could just allow the items to get published to the file system and schedule a simple script to delete all the files daily/hourly to save disk space.
This would require no integration effort with SDL Tridion.
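If you go the delete-script route, a minimal sketch of such a cleanup job is below; the folder path and the one-hour retention are assumptions, and you would schedule it with Windows Task Scheduler:

    using System;
    using System.IO;

    class CleanupTransportPackages
    {
        static void Main()
        {
            // Hypothetical folder where the deployer drops processed packages; adjust as needed.
            const string folder = @"D:\deploy\incoming\success";
            var cutoff = DateTime.UtcNow.AddHours(-1);

            foreach (var file in Directory.GetFiles(folder, "*.zip"))
            {
                // Only remove packages the search engine has had time to pick up.
                if (File.GetLastWriteTimeUtc(file) < cutoff)
                    File.Delete(file);
            }
        }
    }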
I'm working on a migration from Alfresco 4 to 5, and installing add-ons on Alfresco 4 for this purpose is not an option. The databases used by the two versions are different from each other. I have tried ACP files and it is very time consuming. Is there a size limitation on ACP files? What other methods can be used?
Use Standard Upgrade Procedure
What is your main intention? "Just" doing an upgrade from 4 to 5?
In that case the robust, easy way would be to:
Install the required modules containing custom models on your target system (or, if you customized models in the extension path, copy that configuration).
Back up and restore the Alfresco repository database to your new (5.x) system. If your target system uses a different DB product (not just a different version), you need to manage the DB migration using DB-specific migration tools; Alfresco export/import is not an alternative here.
Sync alf_data/contentstore to your new system (make sure the DB dump is always older than the contentstore sync, or do an offline sync).
During startup Alfresco recognizes that the repository needs to be upgraded and does everything automatically. Check catalina.out for any output during the migration.
If you only need a subset of your previous system, it is much easier to delete the unwanted content afterwards (don't forget to purge the trashcan, and configure the cleaner job not to wait 14 days).
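On that last point, the cleaner's grace period is controlled in alfresco-global.properties; something along these lines should work, but double-check the property names against your 5.x version:

    # Shorten the orphan protection window from the default 14 days
    system.content.orphanProtectDays=1
    # Optionally clean up orphaned content as soon as it is detected
    system.content.eagerOrphanCleanup=true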
Some words concerning ACP
It is a nice tool for exporting single directories, but unfortunately it is limited:
no support across Alfresco versions (exactly your case)
no support for site metadata / no site export/import (it may work after the changes in 4.x that put site metadata in nodes, but I suppose nobody has tested this)
it must run in one transaction, so the hard limits depend on your hardware/JVM configuration, but I wouldn't recommend exporting/importing more than a few thousand nodes at once.
If you really need to export/import a huge number of documents, you should run the export/import as a separate Java process, which means your Alfresco needs to be shut down. See https://wiki.alfresco.com/wiki/Export_and_Import#Export_Tool
ACP does have a file limit (I can't remember the actual number), but we've had problems with files below that limit too. We've given up on this approach in favor of the Alfresco bulk import tool.
One big advantage of this tool is that it can continue a failed import from the point of failure; there is no need to delete the partially imported batch and start all over again. It can also update files as needed, something the ACP method can't do (it would fail with DuplicateChildNameNotAllowed).
I'm trying to set up a deployment chain for some of our ASP.NET applications. The tool of choice is Web Deploy (msdeploy) - for now. Unfortunately I'm stuck on a problem.
A high level overview of the chain is thus:
A web developer creates the code and checks it into SVN;
The build server sees the update and builds the msdeploy .zip package of the website;
The .zip package is automatically put inside our installer and sent to various clients;
The clients run the installer on their web server(s);
The installer uses msdeploy internally to deploy the .zip package and create a new website or upgrade an existing one.
Msdeploy makes it easy to deploy a new instance, but I'm stumped about how to perform an "upgrade" install. The main problem is the web.config file. Each client will most certainly have made some customizations there to suit their specific environment. The installer itself offers to set some of the more critical parameters at first-time installation (achieved via msdeploy's parameter mechanism), but clients can change others by hand.
On the other hand, we developers also occasionally make changes to web.config, adding some new settings or removing obsolete ones. So I can't just tell msdeploy to ignore the file entirely. I need some kind of advanced XML modification mechanism. It could be a script that the developers maintain, but then it needs to be run ONLY at upgrades, not new installs.
I've no idea how to accomplish this.
Besides that, sometimes there's also some completely weird upgrade logic. For example, the application comes with our company logo, but some clients have replaced that .png file to show their own logo. Recently we needed to update the logo - but only for clients that hadn't replaced it with their own.
Similarly, there might be some cache folders that might need to be cleaned at SOME upgrades but not at others. Or folders with user content that may not be touched (but come with default content at the initial installation). Etc.
How do you normally achieve this dual behavior for msdeploy packages? Do I really need to create 2 distinct packages for every application?
Suggestion from personal experience:
Isolate customisations
Your customers should have the ability to customise their setup, and the best way is to provide them with something like an override file. That way you install the new package and follow by superimposing your customer's customisations on top of your standard setup. If it's a brand new install, there will be nothing to superimpose.
    top-level/
        standard files/           <- never touched or changed by the customer
            images
            settings.txt
        customer files/           <- the customer hacks this to their heart's content
            images
            settings.txt_override
Yes, this does mean that some kind of merging process needs to happen and there needs to be some script that does that (a minimal sketch follows the list below), but this approach has several advantages:
For settings that suddenly become redundant, just issue a warning to that effect.
If a customer has their own logo, provide the ability to specify this in the override file.
The message to customers is clear: stay off the standard files.
If customers request more customisable settings, write the default into the override file during upgrades if it does not already exist.
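A minimal sketch of that merge step, assuming simple key=value files with the names from the layout above (the file format and paths are assumptions, not something msdeploy gives you):

    using System;
    using System.Collections.Generic;
    using System.IO;

    class MergeSettings
    {
        static void Main()
        {
            // Paths follow the layout above; adjust to the actual install location.
            var standard = ReadSettings(@"standard files\settings.txt");
            var overrides = ReadSettings(@"customer files\settings.txt_override");

            foreach (var pair in overrides)
            {
                if (standard.ContainsKey(pair.Key))
                    standard[pair.Key] = pair.Value;   // the customer's value wins
                else
                    Console.WriteLine("Warning: setting '" + pair.Key + "' is no longer used.");
            }

            // Write the merged result to the file the application actually reads.
            var lines = new List<string>();
            foreach (var pair in standard)
                lines.Add(pair.Key + "=" + pair.Value);
            File.WriteAllLines(@"standard files\settings.txt", lines.ToArray());
        }

        static Dictionary<string, string> ReadSettings(string path)
        {
            var result = new Dictionary<string, string>();
            foreach (var line in File.ReadAllLines(path))
            {
                int eq = line.IndexOf('=');
                if (eq > 0)
                    result[line.Substring(0, eq).Trim()] = line.Substring(eq + 1).Trim();
            }
            return result;
        }
    }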
Vilx, in answer to your question, the logic for knowing whether it is an upgrade or not must be contained in the script itself.
To run an upgrade script before installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -preSync:runcommand="c:\UpgradeScript.bat"
Or to run an upgrade script after installation
msdeploy -verb:sync -source:contentPath="C:\Test1" -dest:contentPath="C:\Test2" -postSync:runcommand="c:\UpgradeScript.bat"
More info here
As to how you know it's an upgrade: your script could check for a text file called "version.txt" and, if it exists, run the upgrade steps. The version number would be contained within the text file. A bit basic, but it should work.
This also has the added advantage of letting you merge the customer's custom settings between versions more elegantly, as you know which properties could be overridden for that particular version.
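A minimal sketch of that check, assuming UpgradeScript.bat calls a small helper and that version.txt sits in the site root (the helper and its layout are hypothetical):

    using System;
    using System.IO;

    class UpgradeCheck
    {
        // Returns 0 for a fresh install and 1 for an upgrade, so the calling
        // .bat file can branch on %ERRORLEVEL%.
        static int Main(string[] args)
        {
            string siteRoot = args.Length > 0 ? args[0] : ".";
            string versionFile = Path.Combine(siteRoot, "version.txt");

            if (!File.Exists(versionFile))
            {
                Console.WriteLine("No version.txt found - treating this as a fresh install.");
                return 0;
            }

            string installedVersion = File.ReadAllText(versionFile).Trim();
            Console.WriteLine("Found installed version " + installedVersion + " - running upgrade steps.");
            // ...version-specific upgrade logic goes here (config merge, cache cleanup, etc.)...
            return 1;
        }
    }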
Here are some general suggestions (not specific to msdeploy); I hope they help:
I think you'll need to provide several installers anyway: for the initial setup and for each version-to-version upgrade.
I would suggest letting your clients merge the config files themselves. You could provide them with a detailed description of what was added/changed/removed, and/or include a utility that simplifies the merge. Maybe this and this will give you some pointers.
As for the replaced logos and other client customizations, I think the best approach would be to support branding in your application. I mean: move all branding details to a place that your new/upgrade installers won't touch.
As for the rest of the adjustments made by your clients, they make those at their own risk, so the only help you can provide is a detailed list of changes (maybe even the list of files changed since the previous version) and a how-to article about merging the sources with tools like Araxis Merge or similar.
Or you could create a utility, include it in the installer, and have it try to do all the tricky merging on the client's machine. I would not recommend this, as it requires a lot of effort/resources to maintain.
One more thing: you could focus on backing up the previous client copy before the upgrade, so that even if a client has trouble upgrading, it will always be possible to roll back. The only other thing is to provide a good feedback channel your clients can use to report their troubles. This feedback will let you figure out what problems your clients have and how to make their upgrade process more comfortable.
I would build on what the above have said, but I would do it with transformations, and strict documentation about who configures what. The way you have it now relies on customer intervention against a config that is mission critical to the app deploy process.
Create three config file areas. One for development, one for the "production generic" build, and one that is an empty template for the customer to edit.
The development instance should be self-explanatory. This is the transform that takes the production-generic template and creates a web.config for your development server. (It sounds like you are shooting for a CI-type process here.)
The "production generic" transform should set the app up for a hypothetically perfect instance of the app. This is what the install would look like if the architect had his way.
The customer transform is used by the customers to set up the web.config as required to meet their own needs. Write some documentation and see what happens. Edit the docs as you help customers through the process.
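To make that concrete, here is a minimal Web.config transform (XDT) sketch; the file name and the appSetting key are placeholders, not something your project already has:

    <?xml version="1.0"?>
    <!-- Web.Customer.config: applied on top of the production-generic Web.config -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- Changes the value of an existing key and leaves everything else alone -->
        <add key="LogoPath" value="~/customer/logo.png"
             xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>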
Is that what you were looking for? Thoughts?
I'm trying to use 51Degrees in a .NET project that I deploy to Azure. In August 2011, they released v1.2.1.3, marked as "Azure Compatible":
Foundation can now be deployed on to the Windows Azure Cloud service. See the release note for full details on requirements and how to set up. Azure related changes include:
Instead of a log file, log entries are written to a log table
Instead of a devices file, previous device requests are written to a device table
A new conditional compilation symbol - 'AZURE'. AZURE enabled builds will not work in traditional ASP.NET.
Since then there have been a dozen releases and they are up to v2.1.4.9. However, their documentation is super light on how to use it with Azure. In fact, there was originally a bug, because v1.2.1.3 stated:
To make use of the changes you must create a storage account called
‘fiftyonedegrees’. The foundation will then create two tables, one for
previous devices, and another for logs.
This isn't possible, because Azure storage account names need to be unique across all instances, so not everyone can create one named 'fiftyonedegrees'.
Their response was:
After rereading the blog it seems I've made an oversight in this
regard, and will update shortly.
The storage account that the Foundation looks for can be changed in
the Foundation source code. Go to Foundation/Properties/Constants.cs
and change the string 'AZURE_STORAGE_NAME' to the name of your storage
account.
However, I'm still at a loss as to how to utilize it within my project. Here are my issues:
I'm not clear whether v1.2.1.3 is the only Azure-compatible release, or whether every release after it is Azure compatible. Their documentation doesn't say.
When I install 51Degrees via NuGet, my project doesn't get an App_Data folder created, which contradicts their documentation. The web.config file even has entries in it that reference the App_Data folder, such as <log logFile="~/App_Data/Log.txt" logLevel="Info"/>.
Based on the response to the Azure storage account bug I quoted earlier, they are saying I need to edit the file Foundation/Properties/Constants.cs. However, since I'm installing via NuGet and it's a DLL, NuGet is presumably the wrong approach? Do I need to download the source, compile it myself, and wire it up to my project manually?
I'm generally new to .NET, NuGet, VS, etc so appreciate the help.
All versions are Azure compatible from 1.2.1.3 onwards. I'm assuming this is the blog post you were talking about. After you've created your Azure storage account, you'll have to edit the Constants.cs file in the source code and add in your account name. It's my understanding that this means you'll have to get access to the source code and edit it directly. Once you have done this, you'll need to recompile for the software to work correctly. I'm not sure if there is a way to perform the same task using NuGet, but I'll look into it. Hope this helps.
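For reference, the edit in Foundation/Properties/Constants.cs amounts to a one-line change along these lines; the exact declaration may differ between versions, so check the file in your copy of the source:

    // Foundation/Properties/Constants.cs - illustrative only
    public const string AZURE_STORAGE_NAME = "yourstorageaccountname";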
Currently we upload all build artifacts to a corporate FTP server. The login/password is hardcoded in the build scripts. Anyone can replace content on the FTP server, so any dependent project can end up with damaged libs...
I'm looking for software solutions which allow easy rights management and ensure data integrity.
Currently I have some suggestions:
Sign packages and have all dependent packages verify the signature (this is complicated: what tools should be used - GPG? And what about GNU Make/Ant support for signing/verifying?)
Allow uploads to the release storage only from the build machine (you force the build through a web interface).
Why not use a system like Maven? It has a good multi-version mechanism and all the features you want.
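For example, with Maven you would publish releases to a repository manager (Nexus, Artifactory, etc.), keep the deploy credentials out of the build scripts (in the build machine's settings.xml), and give write access only to the build account. A sketch of the pom.xml part, with placeholder URLs:

    <!-- pom.xml: where `mvn deploy` publishes artifacts (placeholder URLs) -->
    <distributionManagement>
      <repository>
        <id>company-releases</id>
        <url>https://repo.example.com/releases</url>
      </repository>
      <snapshotRepository>
        <id>company-snapshots</id>
        <url>https://repo.example.com/snapshots</url>
      </snapshotRepository>
    </distributionManagement>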
I have developed a web-based application in ASP.NET and C# through which users can upload files to the server. I want the application to scan the uploaded files for viruses before saving them on the server, just like when we attach files to an email in Yahoo. Please guide me on how I can achieve this functionality - any API which can be integrated into an ASP.NET application, or any other way you can suggest. We can purchase the licensed version of a product which can achieve this. I have googled but did not find specific results.
Thanks in advance!
First of all, the file must be saved onto the server before you can scan it. If you notice, Yahoo will upload the file first, but not allow the attachment to be sent until it has been scanned.
Then you can use an antivirus with a command-line interface or some other kind of API. Both of these can be called via C# and should provide the functionality you require. Perhaps write a wrapper class that takes a file and returns true or false depending on whether a virus was detected (a sketch of such a wrapper follows at the end of this answer).
Other applications that provide you with a command line interface:
Microsoft Security Essentials
ClamAV
I believe MS AV provides better results.
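A minimal sketch of such a wrapper around a command-line scanner - here ClamAV's clamscan, whose exit codes are 0 for clean, 1 for virus found, and 2 for an error; the install path is an assumption:

    using System.Diagnostics;

    public static class VirusScanner
    {
        // Returns true if the scanner reports the file as clean, false if an infection is found.
        public static bool IsClean(string filePath)
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"C:\ClamAV\clamscan.exe",                  // assumed install location
                Arguments = "--no-summary \"" + filePath + "\"",
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
                // clamscan exit codes: 0 = no virus found, 1 = virus found, 2 = error
                return process.ExitCode == 0;
            }
        }
    }

In the upload handler you would save the file to a quarantine folder first, call VirusScanner.IsClean(), and only move it to its final location if that returns true.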
Just purchase antivirus software that has a command-line interface (several popular packages include this). Once the file has been uploaded, run the scan.
I would think, in order to upload and scan at the same time, you might need to implement your own antivirus software as I'm not familiar with any package that would provide that sort of interface.
I run a shareware site. It doesn't work the way you described, but I download each file to my local computer and run a scan on it. You would be doing something similar.