Resolve artifact with sbt programmatically

The following blog post proposes a way to fetch an artifact directly from Java using Ivy (http://developers-blog.org/blog/default/2010/11/08/Embed-Ivy-How-to-use-Ivy-with-Java):
import java.io.File;

import org.apache.ivy.Ivy;
import org.apache.ivy.core.module.descriptor.DefaultDependencyDescriptor;
import org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor;
import org.apache.ivy.core.module.id.ModuleRevisionId;
import org.apache.ivy.core.report.ResolveReport;
import org.apache.ivy.core.resolve.ResolveOptions;
import org.apache.ivy.core.settings.IvySettings;
import org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter;
import org.apache.ivy.plugins.resolver.URLResolver;

public class IvyArtifactResolver {
    public File resolveArtifact(String groupId, String artifactId, String version) throws Exception {
        // creates clear ivy settings
        IvySettings ivySettings = new IvySettings();
        // URL resolver for configuration of Maven repo
        URLResolver resolver = new URLResolver();
        resolver.setM2compatible(true);
        resolver.setName("central");
        // you can specify the URL resolution pattern strategy
        resolver.addArtifactPattern(
            "http://repo1.maven.org/maven2/"
                + "[organisation]/[module]/[revision]/[artifact](-[revision]).[ext]");
        // adding Maven repo resolver
        ivySettings.addResolver(resolver);
        // set to the default resolver
        ivySettings.setDefaultResolver(resolver.getName());
        // creates an Ivy instance with settings
        Ivy ivy = Ivy.newInstance(ivySettings);

        File ivyfile = File.createTempFile("ivy", ".xml");
        ivyfile.deleteOnExit();

        String[] dep = new String[]{groupId, artifactId, version};

        DefaultModuleDescriptor md = DefaultModuleDescriptor.newDefaultInstance(
            ModuleRevisionId.newInstance(dep[0], dep[1] + "-caller", "working"));
        DefaultDependencyDescriptor dd = new DefaultDependencyDescriptor(md,
            ModuleRevisionId.newInstance(dep[0], dep[1], dep[2]), false, false, true);
        md.addDependency(dd);

        // creates an ivy configuration file
        XmlModuleDescriptorWriter.write(md, ivyfile);

        String[] confs = new String[]{"default"};
        ResolveOptions resolveOptions = new ResolveOptions().setConfs(confs);

        // init resolve report
        ResolveReport report = ivy.resolve(ivyfile.toURI().toURL(), resolveOptions);

        // so you can get the jar library
        return report.getAllArtifactsReports()[0].getLocalFile();
    }
}
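For illustration, a hypothetical call to this class from Scala (the coordinates below are only placeholders) could look like:

// Hypothetical usage of the IvyArtifactResolver above;
// substitute any real group/artifact/version coordinates.
val jar: java.io.File =
  new IvyArtifactResolver().resolveArtifact("net.databinder", "dispatch-http_2.9.1", "0.8.8")
println(jar.getAbsolutePath)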
I'm wondering if sbt exposes this kind of interface, since it uses Ivy:
resolve :: ModuleId -> File

Scripts, REPL, and Dependencies
There's a document called Scripts, REPL, and Dependencies you might be interested in. The script runner, for example, lets you write something like this:
#!/usr/bin/env scalas
!#

/***
scalaVersion := "2.9.0-1"

libraryDependencies ++= Seq(
  "net.databinder" %% "dispatch-twitter" % "0.8.3",
  "net.databinder" %% "dispatch-http" % "0.8.3"
)
*/

import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
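(With the scalas launcher on your PATH, you mark the script executable and run it directly; the declared dependencies are fetched on the first run.)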
Driving sbt programmatically
You can also use any subpart of sbt as a library and drive it yourself. Because of the plugin ecosystem, sbt is pretty good about maintaining binary compatibility among point releases. The key task that grabs jars is update, so def updateTask (Defaults.scala#L1113) could be a good place to start. If you are driving sbt from client code, however, wouldn't you end up re-implementing the sbt shell or pulling in all of sbt's dependencies? You might as well have a separate sbt shell window or an sbt script section.
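If you do stay inside a build, the closest off-the-shelf approximation of resolve :: ModuleId -> File is the report produced by update. A minimal sketch in later sbt syntax (0.13+), with the key name invented for the example:

val resolvedJars = taskKey[Seq[File]]("jars resolved by update")

resolvedJars := {
  // update runs Ivy resolution and yields an UpdateReport
  val report = update.value
  // keep only the jar artifacts it resolved
  report.select(artifactFilter(`type` = "jar"))
}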
Custom Resolvers
sbt ships with a variety of customizable resolvers, so the first place to check should be the Resolvers page:
sbt provides an interface to the repository types available in Ivy: file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using patterns to configure repositories.
Construct a repository definition using the factory in sbt.Resolver for the desired type. This factory creates a Repository object that can be further configured. The following table contains links to the Ivy documentation for the repository type and the API documentation for the factory and repository class. The SSH and SFTP repositories are configured identically except for the name of the factory. Use Resolver.ssh for SSH and Resolver.sftp for SFTP.
For example, you can do:
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
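As a further sketch (untested), the Maven-central pattern from the Ivy code in the question could be expressed through the URL factory with explicit patterns:

// Hypothetical pattern-based resolver; note that a dotted organisation
// may additionally need a Maven-compatible layout.
resolvers += Resolver.url("central-by-pattern",
  url("http://repo1.maven.org/maven2/"))(Patterns(
    "[organisation]/[module]/[revision]/[artifact](-[revision]).[ext]"))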
RawRepository
But if you truly want a programmable resolver, there is RawRepository:
final class RawRepository(val resolver: DependencyResolver) extends Resolver {
  def name = resolver.getName
  override def toString = "Raw(" + resolver.toString + ")"
}
This is a thin wrapper around org.apache.ivy.plugins.resolver.DependencyResolver, which you should be able to implement by extending one of the resolvers Ivy already provides. (I haven't tried this myself.)
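For instance, an untested sketch of how a build definition could reuse the URLResolver from the question through RawRepository:

import org.apache.ivy.plugins.resolver.URLResolver

val rawCentral: Resolver = {
  val ivyResolver = new URLResolver()  // hand-built Ivy resolver
  ivyResolver.setName("raw-central")
  ivyResolver.setM2compatible(true)
  ivyResolver.addArtifactPattern(
    "http://repo1.maven.org/maven2/" +
      "[organisation]/[module]/[revision]/[artifact](-[revision]).[ext]")
  new RawRepository(ivyResolver)       // wrap it for sbt
}

resolvers += rawCentral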

Related

Corda - Failed to find a store at certificates\sslkeystore.jks

Corda open source on Linux, with node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this section of the docs: https://docs.corda.net/clientrpc.html#wire-security, which I detail for you below.
When you enable RPC SSL, you must run this command one time (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl = true
    ssl {
        keyStorePath = ${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword = password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;

@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(
        Paths.get(trustStorePath), trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
Above we added two extra attributes that you must now supply when starting the webserver; to do that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
         '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
         '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
         '--config.rpc.ssl.truststorepassword=password'
}
If you're planning to connect to the node with a standalone shell, you must do something similar, but it didn't work for me; I reported the following bug: https://github.com/corda/corda/issues/5955

XUnit Net Core Web API Integration Test: "The ConnectionString property has not been initialized."

I'm just trying to build an integration test project for a .NET Core Web API.
I've followed a few examples, including this one (https://dotnetcorecentral.com/blog/asp-net-core-web-api-integration-testing-with-xunit/), and naturally I run into issues. When I run the simple GET test I get an exception:
"System.InvalidOperationException : The ConnectionString property has not been initialized."
Any help would be appreciated.
For server = new TestServer(new WebHostBuilder().UseStartup<Startup>());, you need to manually configure the appsettings.json path, like this:
var server = new TestServer(WebHost.CreateDefaultBuilder()
    .UseContentRoot(@"D:\Edward\SourceCode\AspNetCore\Tests\IntegrationTestMVC")
    // This is the path for the project which needs to be tested
    .UseStartup<Startup>()
);
For a more convenient approach, I would suggest you try basic tests with the default WebApplicationFactory.
The WebApplicationFactory constructor infers the app content root path by searching for a WebApplicationFactoryContentRootAttribute on the assembly containing the integration tests with a key equal to the TEntryPoint assembly System.Reflection.Assembly.FullName. In case an attribute with the correct key isn't found, WebApplicationFactory falls back to searching for a solution file (*.sln) and appends the TEntryPoint assembly name to the solution directory. The app root directory (the content root path) is used to discover views and content files.
Reference: How the test infrastructure infers the app content root path
I had to override CreateHostBuilder in my derived WebApplicationFactory in order to add the configuration for the connection string (since it was read from user secrets).
public class CustomApplicationFactory : WebApplicationFactory<Sedab.MemberAuth.Startup>
{
    protected override IHostBuilder CreateHostBuilder()
    {
        var initialData = new List<KeyValuePair<string, string>> {
            new KeyValuePair<string, string>("ConnectionStrings:DefaultConnection", "test")
        };
        return base.CreateHostBuilder().ConfigureHostConfiguration(config => config.AddInMemoryCollection(initialData));
    }
}

Provider org.togglz.slf4j.Slf4jLogProvider not a subtype

I have a library, built with Maven, that uses Spring 4.0.3.RELEASE and Togglz 2.2.0.Final. I'm trying to write a JUnit 4.11 test of my Spring class and running into the following error on the first test that gets executed:
testCreateItem_throwsItemServiceBusinessException(impl.ItemServiceImplTest) Time elapsed: 1.771 sec <<< ERROR!
java.util.ServiceConfigurationError: org.togglz.core.spi.LogProvider:
Provider org.togglz.slf4j.Slf4jLogProvider not a subtype
Here is the relevant Java test snippet:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class, loader = AnnotationConfigContextLoader.class)
@PrepareForTest({ ItemServiceImpl.class })
public class ItemServiceImplTest {
    @Rule
    public TogglzRule togglzRule = TogglzRule.allDisabled(Features.class);

    @Rule
    public PowerMockRule powerMockRule = new PowerMockRule();

    @Test(expected = ItemServiceBusinessException.class)
    public void testCreateItem_throwsItemServiceBusinessException() throws Exception {
        PowerMockito.doReturn(mockMetricsData).when(serviceUnderTest, START_METRICS_METHOD_NAME, any(MetricsOperationName.class), any(RequestContext.class));
        when(mockDao.createItem(any(Item.class), any(RequestContext.class))).thenThrow(dataBusinessException);

        serviceUnderTest.createItem(item, context);

        verify(mockItemServiceValidator).validate(any(Item.class), any(RequestContext.class));
        PowerMockito.verifyPrivate(serviceUnderTest).invoke(START_METRICS_METHOD_NAME, any(MetricsOperationName.class), any(RequestContext.class));
        verify(mockDao).createItem(any(Item.class), any(RequestContext.class));
    }
}
Subsequent test calls get the following error:
java.lang.NoClassDefFoundError: Could not initialize class org.togglz.junit.TogglzRule
Here are some relevant dependencies I have:
org.mockito:mockito-all=org.mockito:mockito-all:jar:1.9.5:compile,
org.powermock:powermock-module-junit4=org.powermock:powermock-module-junit4:jar:1.5.6:test,
org.powermock:powermock-module-junit4-common=org.powermock:powermock-module-junit4-common:jar:1.5.6:test,
org.powermock:powermock-reflect=org.powermock:powermock-reflect:jar:1.5.6:test,
org.powermock:powermock-api-mockito=org.powermock:powermock-api-mockito:jar:1.5.6:test,
org.powermock:powermock-api-support=org.powermock:powermock-api-support:jar:1.5.6:test,
org.powermock:powermock-module-junit4-rule=org.powermock:powermock-module-junit4-rule:jar:1.5.6:test,
org.powermock:powermock-classloading-base=org.powermock:powermock-classloading-base:jar:1.5.6:test,
org.powermock:powermock-core=org.powermock:powermock-core:jar:1.5.6:test,
org.powermock:powermock-classloading-xstream=org.powermock:powermock-classloading-xstream:jar:1.5.6:test,
org.togglz:togglz-core=org.togglz:togglz-core:jar:2.2.0.Final:compile,
org.togglz:togglz-slf4j=org.togglz:togglz-slf4j:jar:2.2.0.Final:compile,
org.togglz:togglz-spring-core=org.togglz:togglz-spring-core:jar:2.2.0.Final:compile,
org.togglz:togglz-testing=org.togglz:togglz-testing:jar:2.2.0.Final:test,
org.togglz:togglz-junit=org.togglz:togglz-junit:jar:2.2.0.Final:test
And I have provided a LogProvider (org.togglz.slf4j.Slf4jLogProvider) via SPI, located at META-INF/services/org.togglz.core.spi.LogProvider.
This error is baffling, as Slf4jLogProvider clearly is a subtype of LogProvider. Sorry for the verbosity, but I wanted to show a complete picture. The class under test makes a call to check whether a single feature is enabled inside the create method.
First of all: you don't need to configure the log provider in your application. Including togglz-slf4j on your classpath is sufficient, because that jar contains the corresponding SPI file.
Could you please check whether there are multiple conflicting versions of the Togglz JAR files on your classpath? For example, using togglz-core-2.2.0.Final.jar together with togglz-slf4j-2.1.0.Final.jar could result in an error like this.
This can happen if you update Togglz and your IDE doesn't remove the old archives. Running a clean build and/or selecting "Update Maven Configuration" in Eclipse will fix this problem.

How to read data from file in task and use it to set another setting?

We are migrating our application to the Play Framework. We've been working with the Gradle build system and are facing a couple of issues with sbt.
We use jOOQ for our database, which means that our build file needs to contain the database username/password (since jOOQ generates code by reading the db schema). Since that isn't a good idea, all the sensitive data is stored in a protected file on each host the build might potentially run on, and the build system reads from the file and then configures itself accordingly. This was pretty straightforward in Gradle, but I have hit a dead end with sbt. This is what I have so far:
import org.json4s._
import org.json4s.native.JsonMethods._

val jsonBuildConfig = TaskKey[JValue]("json-build-config")

jsonBuildConfig := {
  val confLines = scala.io.Source.fromFile("/etc/application.conf").mkString
  parse(confLines)
}
jooqOptions := Seq(
  "jdbc.driver" -> "org.postgresql.Driver",
  "jdbc.url" -> "FIXME",
  "jdbc.user" -> "FIXME",
  "jdbc.password" -> "FIXME"
)
The problem is that the three configuration parameters, currently set to FIXME in jooqOptions, need to be picked up from the file.
Within jsonBuildConfig, I can do this:
val confLines = scala.io.Source.fromFile("/etc/application.conf").mkString
val jsonConf = parse(confLines)
(jsonConf \ "stagingdb" \ "url").values
But how do I set those values in the jooqOptions setting?
If I've understood your question correctly, you want the jooqOptions value to depend on the value of jsonBuildConfig. There's a section about that here:
http://www.scala-sbt.org/0.13.5/docs/Getting-Started/More-About-Settings.html
Basically, you would want to use <<= instead of := to set jooqOptions, e.g.
jooqOptions <<= jsonBuildConfig.apply { jsonConf =>
  val dbSettings = jsonConf \ "stagingdb"
  val dbUrl = dbSettings \ "url"
  val dbUser = ...
  ...
  Seq("jdbc.driver" -> "...", "jdbc.url" -> dbUrl, ...)
}
For newer versions of SBT, you can avoid the setting.apply{...} pattern by calling setting.value within a setting initializer block, e.g.
jooqOptions := {
  val dbSettings = jsonBuildConfig.value \ "stagingdb"
  ...
}
I linked to the docs for SBT 0.13.5, which does support the .value feature. Double-check which version of SBT you are using, and open the appropriate docs page to see if it supports that feature.
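Combining the two, a fuller sketch under the question's assumptions (a stagingdb object in /etc/application.conf with url, user and password fields; those field names are guesses):

jooqOptions := {
  val jsonConf = jsonBuildConfig.value
  // pull one string field out of the stagingdb object
  def dbSetting(key: String): String =
    (jsonConf \ "stagingdb" \ key).values.toString
  Seq(
    "jdbc.driver"   -> "org.postgresql.Driver",
    "jdbc.url"      -> dbSetting("url"),
    "jdbc.user"     -> dbSetting("user"),
    "jdbc.password" -> dbSetting("password")
  )
}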

groovy upload jar to nexus

I have a custom jar file which I need to publish to a Sonatype Nexus repository from a Groovy script.
The jar is located at some path on the machine where the Groovy script runs (for instance: c:\temp\module.jar).
My Nexus repo URL is http://:/nexus/content/repositories/
On this repo I have a folder structure like: folder1->folder2->folder3
When publishing my jar I need to, inside folder3:
Create a new directory named with the module's revision (my Groovy script knows this revision)
Upload the jar to this directory
Create pom, md5 and sha1 files for the uploaded jar
After several days of investigation I still have no idea how to create such a script, but this approach looks much cleaner than direct uploading.
I found http://groovy.codehaus.org/Using+Ant+Libraries+with+AntBuilder and some other material (a non-script Stack Overflow solution, for instance).
I understand how to create ivy.xml in my Groovy script, but I don't understand how to create build.xml and ivysettings.xml on the fly and set the whole system up to work.
Could you please help me understand the Groovy way?
UPDATE:
I found that the following command works fine for me:
curl -v -F r=thirdparty -F hasPom=false -F e=jar -F g=<my_groupId> -F a=<my_artifactId> -F v=<my_artifactVersion> -F p=jar -F file=@module.jar -u admin:admin123 http://<my_nexusServer>:8081/nexus/service/local/repositories
As I understand it, curl performs a POST request to the Nexus services. Am I correct?
Now I'm trying to build the HTTP POST request using Groovy's HTTPBuilder.
How should I translate the curl command parameters into an HTTPBuilder request?
I found a way to do this with the Groovy HTTPBuilder, based on info from Sonatype and a few other sources.
This works with http-builder version 0.7.2 (not with earlier versions) and also needs an extra dependency: 'org.apache.httpcomponents:httpmime:4.2.1'.
The example also uses basic auth against Nexus.
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import groovyx.net.http.ContentType
import groovyx.net.http.HttpResponseException
import org.apache.http.HttpRequest
import org.apache.http.HttpRequestInterceptor
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.FileBody
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.protocol.HttpContext

class NexusUpload {
    def uploadArtifact(Map artifact, File fileToUpload, String user, String password) {
        def path = "/service/local/artifact/maven/content"
        HTTPBuilder http = new HTTPBuilder("http://my-nexus.org/")
        String basicAuthString = "Basic " + "$user:$password".bytes.encodeBase64().toString()

        // add the basic auth header to every request
        http.client.addRequestInterceptor(new HttpRequestInterceptor() {
            void process(HttpRequest httpRequest, HttpContext httpContext) {
                httpRequest.addHeader('Authorization', basicAuthString)
            }
        })

        try {
            http.request(Method.POST, ContentType.ANY) { req ->
                uri.path = path
                MultipartEntity entity = new MultipartEntity()
                entity.addPart("hasPom", new StringBody("false"))
                entity.addPart("file", new FileBody(fileToUpload))
                entity.addPart("a", new StringBody("my-artifact-id"))
                entity.addPart("g", new StringBody("my-group-id"))
                entity.addPart("r", new StringBody("my-repository"))
                entity.addPart("v", new StringBody("my-version"))
                req.entity = entity

                response.success = { resp, reader ->
                    if (resp.status == 201) {
                        println "success!"
                    }
                }
            }
        } catch (HttpResponseException e) {
            e.printStackTrace()
        }
    }
}
Ivy is an open source library, so one approach would be to call its classes directly. The problem with that approach is that there are few examples of how to invoke Ivy programmatically.
Since Groovy has excellent support for generating XML, I favour the slightly dumber approach of creating the files I understand as an Ivy user.
The following example is designed to publish files into Nexus, generating both the ivy and ivysettings files:
import groovy.xml.NamespaceBuilder
import groovy.xml.MarkupBuilder

// Methods
// =======
def generateIvyFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        xml = new MarkupBuilder(writer)
        xml."ivy-module"(version: "2.0") {
            info(organisation: "org.dummy", module: "dummy")
            publications() {
                artifact(name: "dummy", type: "pom")
                artifact(name: "dummy", type: "jar")
            }
        }
    }
    return file
}

def generateSettingsFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        xml = new MarkupBuilder(writer)
        xml.ivysettings() {
            settings(defaultResolver: "central")
            credentials(host: "myrepo.com", realm: "Sonatype Nexus Repository Manager", username: "deployment", passwd: "deployment123")
            resolvers() {
                ibiblio(name: "central", m2compatible: true)
                ibiblio(name: "myrepo", root: "http://myrepo.com/nexus", m2compatible: true)
            }
        }
    }
    return file
}

// Main program
// ============
def ant = new AntBuilder()
def ivy = NamespaceBuilder.newInstance(ant, 'antlib:org.apache.ivy.ant')

generateSettingsFile("ivysettings.xml").deleteOnExit()
generateIvyFile("ivy.xml").deleteOnExit()

ivy.resolve()
ivy.publish(resolver: "myrepo", pubrevision: "1.0", publishivy: false) {
    artifacts(pattern: "build/poms/[artifact].[ext]")
    artifacts(pattern: "build/jars/[artifact].[ext]")
}
Notes:
More complex? Perhaps... however, if you're not generating the ivy file (that is, not using Ivy to manage your dependencies), you can easily call the makepom task to generate the Maven POM files prior to uploading into Nexus.
The REST APIs for Nexus work fine. I find them a little cryptic, and of course a solution that uses them cannot support more than one repository manager (Nexus is not the only repository manager technology available).
The deleteOnExit File method call ensures the working files are cleaned up properly.
