Can I run code at Alfresco startup?

I have an Alfresco module that I would like to have do some cleanup when a new version of it is installed.
In the current situation, an older version of the module created a folder node with custom properties at the root of the repository. We've since decided to have multiple such nodes, and none of them at that location. I'd like to put into the next version of the module code that would run at Alfresco startup, check for the existence of the old node, copy its properties into the appropriate new nodes, and delete the old node.
Is such a thing possible? I've looked at the Bootstrap configuration file, but that appears to only allow one to add things to the repository, not modify or delete them.

My suggestion is that you write a patch, that is, a class that extends
org.alfresco.repo.admin.patch.AbstractPatch
Then you can do pretty much anything you want at bootstrap (except executing searches against Solr, since it won't be available yet).
Add some Spring configuration; take a look at the file patch-services-context.xml for inspiration.
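A minimal sketch of such a patch, assuming the usual AbstractPatch contract where applyInternal() is invoked once when the patch is applied (the class name, package, and migration logic below are made-up placeholders):

package com.example.module.patch;

import org.alfresco.repo.admin.patch.AbstractPatch;
import org.alfresco.service.cmr.repository.NodeService;

public class MigrateLegacyFolderPatch extends AbstractPatch
{
    // injected through the Spring bean definition of the patch
    private NodeService nodeService;

    public void setNodeService(NodeService nodeService)
    {
        this.nodeService = nodeService;
    }

    @Override
    protected String applyInternal() throws Exception
    {
        // find the old folder node at the repository root, copy its custom
        // properties onto the new nodes, then remove it, e.g. with
        // nodeService.deleteNode(oldFolderRef);
        return "Legacy root folder migrated";
    }
}

The patch class is then declared as a Spring bean alongside your other module beans; the existing definitions in patch-services-context.xml show which properties (id, description, schema numbers, service dependencies) to set.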

Yes, you can do that; you probably missed the relevant place in the documentation:
If you open the Import Strategy page, you'll find a section called Per BootstrapView. You should be using something like REPLACE_EXISTING or UPDATE_EXISTING for your ACP-packaged content (if you're using ACPs as your bootstrap import strategy).
Here is a more detailed description of the UUID Bindings values.
Hope that helps.

You can use patches.
When the Alfresco server starts, it applies patches and executes database updates, etc.
Definition:
A patch is a piece of Java code that executes once when Alfresco Content Services starts. Custom patches can be implemented.
Documentation Link

The transaction currently built is missing an attachment for class - Attempted to find a suitable attachment but could not find any in the storage

Full Error:
transactions.TransactionBuilder. - The transaction currently built is missing an attachment for class: com/gibtn/corda/printutilities/PrintLedgerTransaction. Attempted to find a suitable attachment but could not find any in the storage.
This has been asked here and here but I hope to get better clarification.
Problem:
I have built a set of libraries to perform common tasks in my Flows that I include in all my CorDapps. For now I just copy the JARs into each project, make some changes to the Gradle files, and everything works great.
I recently put together a small library for performing common tasks in Contracts and added the JAR the same way.
This works fine with MockNodes. But when I test with real nodes, I get this error in the CRaSH shell and the transaction fails with a NoClassDefFoundError exception.
Question:
Is what I am doing even possible? Or do I always have to keep my utility classes inside the Contracts module in IntelliJ so they are bundled together with the Contracts into a single JAR? That way when the node starts the JAR (containing the Contracts and any utilities) is added to Attachment storage as a single Attachment.
I found a way to solve this. It's a bit dirty, but initial testing seems to work. I just created a blank class in my utilities JAR that implements Contract. Its verify() method is empty. Now when the Corda node starts, it sees this Contract and adds the JAR to attachment storage. So from the CRaSH shell, if I run:
attachments trustInfo
...my utility JAR will be listed (it wasn't before). I can see that when I use one of the utility methods in a Contract, the utility JAR is included as a separate Attachment in the WireTransaction.
I'm not crazy about this solution and will probably stop using a utility JAR for Contracts. I'll go back to copying the classes into each project. Nevertheless, there is a way to do it. I would just need a more experienced Corda developer to give it their blessing before I'd go forward into production with it.
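For reference, here is a minimal sketch of the placeholder contract described above, written against Corda's Contract interface (the class name is just an example):

package com.gibtn.corda.printutilities;

import net.corda.core.contracts.Contract;
import net.corda.core.transactions.LedgerTransaction;

// An empty contract whose only purpose is to make the node treat the utility
// JAR as a contract JAR and load it into attachment storage at startup.
public class UtilityAttachmentAnchor implements Contract {
    @Override
    public void verify(LedgerTransaction tx) {
        // intentionally empty: this contract never verifies anything
    }
}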

How to develop a custom connector in SailPoint

I am a novice in the field of Identity and Access Management.
So far I know that SailPoint provides direct connectors to integrate known systems like LDAP, HR systems, OIM, and databases.
SailPoint also provides support for disconnected applications through custom connectors.
My question is: how do I develop a custom connector?
I do not have the JAR file provided by SailPoint which contains the "AbstractConnector" class,
so I cannot write my own class against it.
I also do not understand what to do with that class (if I have the JAR).
How will SailPoint refer to that class?
Do we need to deploy that class somewhere?
I am looking for the complete flow to develop and deploy a custom connector.
If anyone has worked on this, please help.
If you unzip your identityiq.war, you'll find a JAR file called WEB-INF/lib/connector-bundle.jar. This is the JAR where you'll find AbstractConnector. Once you've written your connector code, you will need to compile it and bundle it into a JAR file, which you will place into WEB-INF/lib.
Finally, you will need to update the ConnectorRegistry object (under Configuration on the debug screen) to reference the new class, which will make it available as an Application type. If it has custom connection parameters (as most do), you will also need an XHTML page that will be embedded into the SailPoint UI to prompt the user configuring the Application.
If you have Compass access, they have a whitepaper called Custom Connectors that you will find helpful.
All that said, I encourage you to try to find a way to use an out-of-box connector if possible.
Most of the time it is better to use the DelimitedFile connector: you can import a CSV of identity data and make it work within SailPoint's workflow. You will be able to map fields, correlate accounts, and create multi-valued group memberships rapidly. Of course, this means that SailPoint will not be connected directly to the application, and you will have to develop a workflow to extract the identities and upload them. But at least you can integrate without going the custom connector way.

Vaadin external JavaScript file location

I've got several JavaScript files.
I want to import them on my page, created using Vaadin.
I added the @JavaScript annotation to my UI:
@JavaScript({ "prettify.js", "vkbeautify.js", "additional.js" })
I put the files into VAADIN\themes\theme-name.
However, when I run it, I get:
WARNING: prettify.js published by com.folder.ui.AdminUi not found. Verify that the file com/folder/ui/prettify.js is available on the classpath.
Where should I put them?
It depends.
For Maven-based projects, the script files belong under the resources folder.
Example: src/main/resources/com/folder/ui
For Ivy/Eclipse-based projects, the scripts go in the same path as your class: src/main/java/com/folder/ui
Maven-based projects generally trip people up because the Vaadin docs are written for Ivy.
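For example, assuming a Maven layout and a Vaadin 7/8 style UI class matching the warning above (the class and package names are taken from that warning), the annotation and the resource location line up like this:

package com.folder.ui;

import com.vaadin.annotations.JavaScript;
import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.UI;

// The file names are resolved relative to this class's package on the classpath,
// so in a Maven project they belong in src/main/resources/com/folder/ui/.
@JavaScript({ "prettify.js", "vkbeautify.js", "additional.js" })
public class AdminUi extends UI {
    @Override
    protected void init(VaadinRequest request) {
        // build your components here
    }
}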
Hope this helps,
Malcolm

How to find the use of the default tables available in Drupal

How can I find the usage of the default tables available in Drupal?
Is there any documentation available?
For example, there is a table called node. I need to know what it is used for and how it behaves.
Any suggestions or answers will be appreciated.
Your question is not very clear (the term "usage" is quite ambiguous), but you could install the Devel module. After setting it up it will show, for every page loaded (home page included), which SQL queries are run.
Every module can add tables to the database. A default Drupal install uses core modules, either required ones or those installed as dependencies of the default installation profile. These modules install their own tables.
Each module declares its tables in its implementation of hook_schema. The Schema module uses the information from the implementations of this hook to provide schema documentation.
Most of the time, you shouldn't access the database directly but use the API provided by the modules managing the data. Tables are usually considered private to their modules. A new release of a module may change its schema in an incompatible way; using the API is much safer. Unfortunately, sometimes database access is the only option. In those cases, implementing a data access layer between your code and the database is advised.

A workspace with an iOS project and a related static library project

I am fighting with Xcode 4 workspaces. Currently, Xcode 4 wins. Thus, my situation:
I have a workspace with an iOS app project. There is also a static library project in this workspace that the iOS app depends on.
Solution #1
I try to configure like this:
the app project:
add the product (libmystaticlib.a) to the target's Build Phases > Link Binary With Libraries;
set USER_HEADER_SEARCH_PATHS to $(TARGET_BUILD_DIR)/usr/local/include $(DSTROOT)/usr/local/include;
the static library project:
add some header files to target's Build Phases > Copy Headers > Public;
set SKIP_INSTALL to YES.
And one important thing: both projects must have identically named configurations. Otherwise, if I have, e.g., a configuration named Distribution (Ad Hoc) for the app and Release for the static library, Xcode can't link the app with the library.
With this configuration, archiving produces an archive containing the application and the public headers from the static library project. Of course, I am not able to share an *.ipa in this case. :(
Solution #2
I have also tried another configuration:
Xcode preferences:
set a source tree for the static library, e.g., ADDITIONS_PROJECT;
the app project:
add the product (libmystaticlib.a) to the target's Build Phases > Link Binary With Libraries;
set USER_HEADER_SEARCH_PATHS to $(ADDITIONS_PROJECT)/**;
the static library project:
don't add any header files to Public!;
set SKIP_INSTALL to YES.
I still need to take care of the configuration names for both projects, but as a result I can build and archive successfully. I get an archive and I can share an *.ipa.
I don't like the second solution, because in this case I don't get any real advantage from the Xcode 4 workspace. I could get the same effect by adding the static lib project inside the app project. Therefore, I think something is wrong with my solution.
Any suggestions on a better way to link static libraries?
I also found a solution that works for both build and archive.
In your static library, set the Public Headers Folder Path to ../../Headers/YourLib
In your app config, set the Header Search Paths to $(BUILT_PRODUCTS_DIR)/../../Headers
In your app you will then be able to write #import <YourLib/YourFile.h>
Don't forget the Skip Install = YES option in your static lib.
We've found an answer, finally. Well, kind of. The problem occurred because Xcode 4 places public headers into the InstallationBuildProductsLocation folder during a build for archiving. Apparently, when archiving it sees the headers and tries to put them into the archive as well. Changing the Public Headers Folder Path of the lib to somewhere outside of InstallationBuildProductsLocation, for example to $(DSTROOT)/../public_folders, and adding this path to the Header Search Paths solves the problem.
This solution doesn't look very elegant, but for us it seems to be the only option. May be you'll find this useful.
Here is a solution I got from Apple DTS. I don't like it, because it suggests using an absolute path. But I'll still publish it here; maybe someone will feel it is right for them.
How to set up the static library:
Add a build configuration named "Archive" by copying the Release Configuration.
Move your headers to the Project group of the Copy Headers build phase.
Set the Per-configuration Build Products Path of the "Archive" configuration to $(BUILD_DIR)/MyLibBuildDir. Xcode will create the MyLibBuildDir folder inside the BuildProductsPath, then add your static library into that folder. You can use "MyLibBuildDir" or provide another name for the above folder.
Set Skip Install to YES for all configurations.
Set Installation Directory of "Archive" to $(TARGET_TEMP_DIR)/UninstalledProducts.
Edit its scheme, set the Build Configuration of its Archive action to "Archive."
How to set up the project linking against the library:
Add a build configuration named "Archive" by copying the Release Configuration.
Set the Library Search Paths of "Archive" to $(BUILD_DIR)/MyLibBuildDir.
Set the User Header Search Paths to the recursive absolute path of the root of your workspace directory for all configurations.
Set Always Search User Paths of "Archive" to YES.
Set Skip Install to NO for all configurations.
Edit its scheme, set the Build Configuration of its Archive action to "Archive."
I was not really happy with any of the other solutions provided, so I found another one that I prefer. Rather than having to use relative paths to put the /usr/local/include folder outside of the installation directory, I added a pre-action to the Archive step in my scheme. In the pre-action I provided a script that removes the usr directory prior to archiving.
rm -r "$OBJROOT/ArchiveIntermediates/MyAppName/InstallationBuildProductsLocation/usr"
This removes the usr directory before archiving so that it does not end up in the bundle and cause Xcode to think it has multiple modules.
So far I have also struggled with the same problem, but I did come to a solution with a minimal tradeoff:
This requires Derived Data to be your Build Location.
I set the Public Headers Folder Path to ../usr/local/include
This ensures that the headers will not be placed into the archive.
For the app, I set the Header Search Path to:
$(OBJROOT)/usr/local/include
$(SYMROOT)/usr/local/include
Two entries are necessary since the paths change slightly when building an archive, and I haven't figured out how to describe them with only one variable.
The nice thing here is that it doesn't break Code Sense. So except for having two entries rather than one, this works perfectly fine.
I'm struggling with the same problem at the moment. I haven't progressed much further than you. I can only add that in the second solution you can drag the headers you need from the library to the app project, instead of setting ADDITIONS_PROJECT and USER_HEADER_SEARCH_PATHS. This will make them visible in the app project. The value of the SKIP_INSTALL flag doesn't matter in this case.
Still, this solution isn't going to work for me, because I'm moving a rather big project, with dozens of libraries, from Xcode 3 to Xcode 4, and that means a lot of dragging and dropping to make my project build and archive correctly. Please let us know if you find a better way out of this situation.
I could use Core Plot as a static library and workspace sibling, with two build configurations:
Release:
in project, Header Search Path: "$(BUILT_PRODUCTS_DIR)"
in CorePlot-CocoaTouch, Public Headers Folder Path: /usr/local/include
AdHoc (build configuration for "Archive" step in Scheme, produces a shareable .ipa):
in project, Header Search Path: "$(BUILT_PRODUCTS_DIR)"/../../public_folders/**
in CorePlot-CocoaTouch, Public Headers Folder Path: ../../public_folders
Hope it will help someone to not waste a day on this.
