How to get organization for Artifacts in UpdateReport? - sbt

In my sbt plugin I am able to obtain instances of sbt.Artifact via UpdateReport; each comes with a File.
However, for jars, I need the organization that the jar belongs to. This is available in sbt.ModuleID.
How do I get the organization when I have the Artifact?

If you're getting the artifact from the update report, you should also be able to get the module ID. The update report is a nested structure, lining up with your Ivy configurations. Inside each configuration is a set of modules, and inside each of those is the set of artifacts included by that module (modules can have more than one artifact, a difference from Maven/Aether).
So, here's example code to grab ALL artifacts from ALL configurations together with their ModuleID:
for {
  conf <- update.value.configurations
  moduleReport <- conf.modules
  (artifact, file) <- moduleReport.artifacts
} yield (moduleReport.module, artifact, file)
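The organization is then available directly on the ModuleID, e.g. moduleReport.module.organization. As a hedged sketch, a task that pairs every resolved jar with its organization could look like this (the allJarsWithOrg key is illustrative, not part of sbt):

val allJarsWithOrg = taskKey[Seq[(String, File)]]("All resolved jars paired with their organization")

allJarsWithOrg := {
  for {
    conf <- update.value.configurations
    moduleReport <- conf.modules
    (artifact, file) <- moduleReport.artifacts
    if artifact.`type` == "jar"  // keep only jar artifacts
  } yield (moduleReport.module.organization, file)
}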

Related

How to check build info from a user plugin before downloading an artifact?

Using a user plugin in our on-prem Artifactory installation, we want to prevent downloading artifacts that have no build info. We are struggling to find a connection between the Request and the corresponding BuildInfo.
import org.artifactory.request.Request
import org.artifactory.repo.RepoPath

download {
    beforeDownloadRequest { Request request, RepoPath repoPath ->
        if (isRelease(repoPath.repoKey)) {
            log.warn "Is a release artifact"
            // How to verify build info here??
        }
    }
}

def isRelease(String repoKey) {
    return repoKey in ["libs-release-local"]
}
Using the Artifactory Query Language you can find builds based on an artifact; if the result is empty, then there is no such build: https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language
For example:
builds.find({"module.artifact.item.name": "artifactory.war"})
Also, artifacts linked to a build will have the properties "build.number" and "build.name", so that's one way to approach it.
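If it helps, the same query can also be issued against Artifactory's AQL REST endpoint, for example with curl (host and credentials below are placeholders):

curl -u admin:password \
  -H "Content-Type: text/plain" \
  -X POST "http://artifactory.example.com/artifactory/api/search/aql" \
  -d 'builds.find({"module.artifact.item.name": "artifactory.war"})'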
The proper solution would be to use JFrog Xray. You can then set up scans for your builds, so that all artifacts that are part of a build get scanned (plus you get security and license compliance checks there too), and then block the download of unscanned artifacts.
Lastly, when you create a build you can also promote it, for instance from "staging" to "release", and on that operation copy or move the artifacts to a repository that holds only released builds.
The properties "build.name" and "build.number" are likely the best way to do what you are trying to do.
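As a hedged sketch of the properties approach (the hasBuildInfo helper is hypothetical, and isRelease is reused from the question): in a user plugin it is the altResponse closure, not beforeDownloadRequest, that can veto a download by setting a status and message.

download {
    altResponse { Request request, RepoPath repoPath ->
        if (isRelease(repoPath.repoKey) && !hasBuildInfo(repoPath)) {
            status = 403
            message = 'Rejected: no build info associated with this artifact'
        }
    }
}

// Hypothetical helper: relies on the build.name/build.number properties
// that Artifactory attaches to artifacts deployed as part of a build.
def hasBuildInfo(RepoPath repoPath) {
    return repositories.hasProperty(repoPath, 'build.name') &&
           repositories.hasProperty(repoPath, 'build.number')
}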

Can I run code at Alfresco startup?

I have an Alfresco module that I would like to perform some cleanup when a new version of it is installed.
In the current situation, an older version of the module created a folder node with custom properties at the root of the repository. We've since decided to have multiple such nodes, and none of them at that location. I'd like the next version of the module to include code that runs at Alfresco startup, checks for the existence of the old node, copies its properties into the appropriate new nodes, and deletes the old node.
Is such a thing possible? I've looked at the Bootstrap configuration file, but that appears to only allow one to add things to the repository, not modify or delete them.
My suggestion is that you write a patch, that is, a class that extends
org.alfresco.repo.admin.patch.AbstractPatch
Then you can do pretty much anything you want on bootstrap (except executing searches against Solr, since it won't be available).
Add some Spring configuration; take a look at the file patch-services-context.xml for inspiration.
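A minimal sketch of such a patch, assuming the standard patch API (the class name, return message, and migration logic are placeholders):

import org.alfresco.repo.admin.patch.AbstractPatch;

public class MigrateLegacyFolderPatch extends AbstractPatch {

    @Override
    protected String applyInternal() throws Exception {
        // Use the services injected via the Spring bean definition (e.g. the
        // nodeService) to find the old root folder node, copy its custom
        // properties onto the new nodes, and then delete the old node.
        return "Legacy folder node migrated";
    }
}

The patch is then declared as a Spring bean with an id, a description, and the fixesFromSchema/fixesToSchema/targetSchema values that control when it runs; patch-services-context.xml shows the shape of such a definition.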
Yes, you can do that; you probably missed the right place in the documentation:
If you open Import Strategy you'll find a section Per BootstrapView; you should be using something like REPLACE_EXISTING or UPDATE_EXISTING for your ACP-packaged content (if you're using ACPs as your bootstrap importing strategy).
Here is a more detailed description of the UUID Bindings values.
Hope that helps.
You can use patches.
When the Alfresco server starts, it applies patches, executes database updates, and so on.
Definition:
A patch is a piece of Java code that executes once when Alfresco Content Services starts. Custom patches can be implemented.
Documentation Link

Editing configuration files in Pax Exam

I am using Pax Exam to perform integration tests on my OSGi application. I have a configuration factory in which I specify the Karaf feature of my application to be installed in the test container and then modify a property of a .cfg file installed as part of my feature.
public class TestConfigurationFactory implements ConfigurationFactory {
    @Override
    public Option[] createConfiguration() {
        return options(
            karafDistributionConfiguration()
                .frameworkUrl(
                    maven().groupId("org.apache.karaf")
                        .artifactId("apache-karaf")
                        .version("3.0.1").type("tar.gz"))
                .unpackDirectory(new File("target/exam"))
                .useDeployFolder(false),
            keepRuntimeFolder(),
            // Karaf (own) features.
            KarafDistributionOption.features(
                maven().groupId("org.apache.karaf.features")
                    .artifactId("standard").classifier("features")
                    .version("3.0.1").type("xml"), "scr"),
            // CXF features.
            KarafDistributionOption.features(maven()
                .groupId("org.apache.cxf.karaf")
                .artifactId("apache-cxf").version("2.7.9")
                .classifier("features").type("xml")),
            // Application features.
            KarafDistributionOption.features(
                maven().groupId("com.me.project")
                    .artifactId("my-karaf-features")
                    .version("1.0.0-SNAPSHOT")
                    .classifier("features").type("xml"), "my-feature"),
            KarafDistributionOption.editConfigurationFilePut(
                "etc/com.me.test.cfg", "key", "value"));
    }
}
The property I specify in editConfigurationFilePut is modified correctly; however, the rest of the .cfg file's properties are deleted. If I use the editConfigurationFilePut method to edit one of Karaf's configuration files, it works as expected (it just adds the new property without modifying the existing ones), so I am thinking that perhaps the problem is that Pax Exam attempts to modify the configuration before the .cfg file is installed by my feature, and therefore creates a new file to put the property in. If this is the case, is there some way to synchronise this process so that the .cfg file is edited only after the feature is properly installed?
There are two possible reasons for this.
1) The feature gets installed after the configfile has been "edited"
2) The feature only contains a config section and not a configfile section
I'd guess reason one is the most likely cause, since a running Karaf is needed to install a feature through Pax Exam. So to work around reason one, replace the config with a config file present in your test project.
For reason two, make sure the feature actually references a configfile instead of a configuration-admin config, or add your config to the configuration of the config-admin service. You can achieve the latter by injecting the ConfigAdmin service in your unit test and adding your properties to the configuration PID.
EDIT:
Combine both solutions.
Since, because of 1), it takes longer for the config file to actually be available, let the config-admin service do the rest.
Make sure your test retrieves the config-admin service, either by injecting it or by waiting for its availability.
Now, within a @Before method, make sure you wait until your config is complete and change it from there. This way you don't need to duplicate the config files.
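A minimal sketch of that approach, assuming the PID com.me.test matches the .cfg file name from the question:

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import javax.inject.Inject;

import org.junit.Before;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class MyFeatureIT {

    @Inject
    private ConfigurationAdmin configAdmin; // injected by Pax Exam

    @Before
    public void overrideProperty() throws IOException {
        // null location binds the config to whichever bundle uses it
        Configuration config = configAdmin.getConfiguration("com.me.test", null);
        Dictionary<String, Object> props = config.getProperties();
        if (props == null) {
            props = new Hashtable<>();
        }
        props.put("key", "value"); // add/override without wiping the rest
        config.update(props);
    }
}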

Access uniquely-named Maven snapshot using ivysettings.xml in sbt?

I use an ivysettings.xml file to configure the repositories to use for sbt, which uses Ivy.
However, it's not able to download a particular snapshot which uses unique naming (i.e. date-based naming). It only tries the patterns listed explicitly in my ivysettings.xml file (which makes sense), so it can't see the details in maven-metadata.xml which tell it the filename of the snapshot jar to download.
I tried specifying the version explicitly instead of as a snapshot in Build.scala:
"com.jolbox" % "bonecp" % "0.8.1-20131105.191813-1"
(which would be my ideal solution, because then it would be cached in our Maven repository and I'd be guaranteed to always use the same snapshot), but this generated the wrong URL (there should be a 0.8.1-SNAPSHOT in there, but of course there isn't):
http://maven/nexus/content/groups/softwaretools-snapshot-group/com/jolbox/bonecp/0.8.1-20131105.191813-1/bonecp-0.8.1-20131105.191813-1.pom
I then tried specifying the URL explicitly using from, but this didn't work.
I then tried using latest.integration as the version, but that didn't correctly identify the latest version - it thought it was 0.8.0-rc1, which is clearly wrong.
Download the dependency manually and add it to the lib directory of the project (create it if necessary); remove it from the Build.scala file.
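For reference, sbt picks up jars in the project's lib directory automatically as unmanaged dependencies, so nothing needs to be declared for them. A small sketch (the jar name comes from the question; the last line is only needed if you want a non-default directory):

// Place the jar at <project>/lib/bonecp-0.8.1-20131105.191813-1.jar
// and remove the corresponding dependency from Build.scala.
// lib/ is sbt's default unmanagedBase; to use another directory instead:
unmanagedBase := baseDirectory.value / "custom_lib"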

A workspace with an iOS app project and a related static library project

I am fighting with Xcode 4 workspaces. Currently Xcode 4 wins. Thus, my situation:
I have a workspace with an iOS app project. The workspace also contains a static library project that the iOS app depends on.
Solution #1
I try to configure like this:
the app project:
add the product (libmystaticlib.a) to the target's Build Phases > Link Binary With Libraries;
set USER_HEADER_SEARCH_PATHS to $(TARGET_BUILD_DIR)/usr/local/include $(DSTROOT)/usr/local/include;
the static library project:
add some header files to target's Build Phases > Copy Headers > Public;
set SKIP_INSTALL to YES.
And an important thing: both projects must have identically named configurations. Otherwise, if I have, e.g., a configuration named Distribution (Ad Hoc) for the app and Release for the static library, Xcode can't link the app with the library.
With this configuration, archiving produces an archive containing the application together with the public headers from the static library project. Of course, I am not able to share a *.ipa in this case. :(
Solution #2
I have also tried another configuration:
Xcode preferences:
set a source tree for the static library, e.g., ADDITIONS_PROJECT;
the app project:
add the product (libmystaticlib.a) to the target's Build Phases > Link Binary With Libraries;
set USER_HEADER_SEARCH_PATHS to $(ADDITIONS_PROJECT)/**;
the static library project:
don't add any header files to Public!;
set SKIP_INSTALL to YES.
I still need to take care that configuration names match in both projects. But with this setup I can build and archive successfully: I get an archive and I can share a *.ipa.
I don't like the second solution, because in this case I don't get any real advantage from the Xcode 4 workspace. I could get the same effect by adding the static lib project inside the app project. Therefore, I think something is wrong with my solution.
Any suggestions on a better way to link static libraries?
I also found a solution that works with build and with archive.
In your static library set the Public Headers Folder Path to ../../Headers/YourLib
In your app config set the Header Search Paths to $(BUILT_PRODUCTS_DIR)/../../Headers
In your app you will be able to code #import <YourLib/YourFile.h>
Don't forget the Skip Install = YES option in your static lib.
We've found an answer, finally. Well, kind of. The problem occurs because Xcode 4 places public headers into the InstallationBuildProductsLocation folder during a build for an archive. Apparently, when archiving, it sees the headers and tries to put them into the archive as well. Changing the Public Headers Folder Path of the lib to somewhere outside of InstallationBuildProductsLocation, for example to $(DSTROOT)/../public_folders, and adding this path to the Header Search Paths solves the problem.
This solution doesn't look very elegant, but for us it seems to be the only option. Maybe you'll find it useful.
Here is a solution I got from Apple DTS. I don't like it, because it suggests using an absolute path. But I'll still publish it here; maybe someone will feel it is right for them.
How to set up the static library:
Add a build configuration named "Archive" by copying the Release Configuration.
Move your headers to the Project group of the Copy Headers build phase.
Set the Per-configuration Build Products Path of the "Archive" configuration to $(BUILD_DIR)/MyLibBuildDir. Xcode will create the MyLibBuildDir folder inside the BuildProductsPath, then add your static library into that folder. You can use "MyLibBuildDir" or provide another name for the above folder.
Set Skip Install to YES for all configurations.
Set Installation Directory of "Archive" to $(TARGET_TEMP_DIR)/UninstalledProducts.
Edit its scheme, set the Build Configuration of its Archive action to "Archive."
How to set up the project linking against the library:
Add a build configuration named "Archive" by copying the Release Configuration.
Set the Library Search Paths of "Archive" to $(BUILD_DIR)/MyLibBuildDir.
Set the User Header Search Paths to the recursive absolute path of your root of your workspace directory for all configurations.
Set Always Search User Paths of "Archive" to YES.
Set Skip Install to NO for all configurations.
Edit its scheme, set the Build Configuration of its Archive action to "Archive."
I was not really happy with any of the other solutions that were provided, so I found another solution that I prefer. Rather than having to use relative paths to put the /usr/local/include folder outside of the installation directory, I added a pre-action to the Archive step in my scheme. In the pre-action I provided a script that removes the usr directory prior to archiving.
rm -r "$OBJROOT/ArchiveIntermediates/MyAppName/InstallationBuildProductsLocation/usr"
This removes the usr directory before archiving so that it does not end up in the bundle and cause Xcode to think it has multiple modules.
I also struggled with the same problem for a while, but came to a solution with a minimal tradeoff:
This requires Derived Data to be your build location.
I set the Public Headers Folder Path to ../usr/local/include
This ensures that the headers will not be placed into the archive.
For the app, I set the Header Search Path to:
$(OBJROOT)/usr/local/include
$(SYMROOT)/usr/local/include
Two entries are necessary since the paths change slightly when building an archive, and I haven't figured out how to describe it with only one variable.
The nice thing here is that it doesn't break Code Sense. So except for having two entries rather than one, this works perfectly fine.
I'm struggling with the same problem at the moment. I didn't progress much further than you. I can only add that in the second solution you can drag the headers you need from the library into the app project, instead of setting ADDITIONS_PROJECT and USER_HEADER_SEARCH_PATHS. This makes them visible in the app project. The value of the SKIP_INSTALL flag doesn't matter in this case.
Still, this solution isn't going to work for me, because I'm moving a rather big project, with dozens of libraries, from Xcode 3 to Xcode 4, and that means a lot of dragging and dropping to make my project build and archive correctly. Please let us know if you find a better way out of this situation.
I was able to use Core Plot as a static library and workspace sibling, with two build configurations:
Release:
in project, Header Search Path: "$(BUILT_PRODUCTS_DIR)"
in CorePlot-CocoaTouch, Public Headers Folder Path: /usr/local/include
AdHoc (build configuration for "Archive" step in Scheme, produces a shareable .ipa):
in project, Header Search Path: "$(BUILT_PRODUCTS_DIR)"/../../public_folders/**
in CorePlot-CocoaTouch, Public Headers Folder Path: ../../public_folders
Hope it helps someone avoid wasting a day on this.
