Creating a config that overrides some Docker settings while keeping docker:publish behavior - sbt

I'm trying to create an sbt build that can publish a Docker container either to DockerHub or to our internal Docker repository. I'm using sbt-native-packager 1.0.3 to build the Docker image.
Here's an excerpt from my build.sbt:
dockerRepository in Docker := Some("thomaso"),
packageName in Docker := "externalname",
sbt docker:publish now successfully publishes to thomaso/externalname on DockerHub.
To add the option to publish to our internal Docker repo I added a configuration called dockerInternal:
val dockerInternal = config("dockerInternal") extend Docker
I then added these two settings to override the defaults:
dockerRepository in dockerInternal := Some("docker.nrk.no/project"),
packageName in dockerInternal := "internalname",
My expectation was that sbt dockerInternal:publish should publish a Docker image to docker.nrk.no/project/internalname. Instead, I get this error message:
delivering ivy file to /home/n06944/repos/nrk.recommendations/api/target/scala-2.10/ivy-0.1-SNAPSHOT.xml
java.lang.RuntimeException: Repository for publishing is not specified.
It seems to me sbt tried to publish to Ivy, not to Docker: when I hardcode the internal repo values, publishing works fine and there is no mention of Ivy in the logs. The Docker configuration modifies the publish task, and I hoped that by letting dockerInternal extend Docker I would inherit the Docker-specific publish behavior. Is that an incorrect assumption? Am I missing some incantation, or is there another approach that would work better?

You forgot to import all the necessary tasks into your new config: extending a configuration does not copy its task definitions, so dockerInternal:publish falls back to the default Ivy publish. sbt-native-packager recommends creating submodules for different packaging configurations instead.
If you want to fiddle around with configuration scopes (which gets pretty messy very fast) here is another SO answer I gave.
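A minimal sketch of the recommended submodule approach, using the repository names from the question (module layout, paths, and plugin wiring are illustrative and untested):

```scala
// build.sbt sketch -- shared application code lives in `core`;
// each packaging module only overrides the Docker settings it needs.
lazy val core = (project in file("core"))

// Publishes to DockerHub as thomaso/externalname
lazy val dockerHub = (project in file("docker-hub"))
  .enablePlugins(JavaAppPackaging, DockerPlugin)
  .dependsOn(core)
  .settings(
    dockerRepository in Docker := Some("thomaso"),
    packageName in Docker := "externalname"
  )

// Publishes to the internal registry
lazy val dockerInternal = (project in file("docker-internal"))
  .enablePlugins(JavaAppPackaging, DockerPlugin)
  .dependsOn(core)
  .settings(
    dockerRepository in Docker := Some("docker.nrk.no/project"),
    packageName in Docker := "internalname"
  )
```

You then pick the target by picking the module, e.g. sbt "project dockerInternal" docker:publish for the internal registry and sbt "project dockerHub" docker:publish for DockerHub.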
cheers,
Muki

Related

How to use GitLab CI to test whether a Java Maven project can be built and run under multiple JDKs?

Some other CI services support testing against multiple JDKs; Travis, for example (see https://blog.travis-ci.com/support_for_multiple_jdks).
However, I'm not sure how to do this with GitLab CI.
Assume I have a Java project and I want to make sure it both builds and runs correctly under JDK 8 and JDK 11. How can I do this in GitLab CI?
Many thanks!
One way to do this would be to define pipeline jobs that use different images with the required dependencies. You can use any public images from Docker Hub. After a quick search I chose codenvy/jdk8_maven3_tomcat8 (JDK 8 + Maven) and appinair/jdk11-maven (JDK 11 + Maven) for the YAML example, though I'm not sure they will work for you.
tests_jdk8:
  image: codenvy/jdk8_maven3_tomcat8
  script:
    - <your mvn test/build script>

tests_jdk11:
  image: appinair/jdk11-maven
  script:
    - <your mvn test/build script>
If you can dockerize your environment, you can use your own custom images instead. Alternatively, you can write a before_script that installs your project requirements into any base Docker image; ideally this would mirror your existing production environment.
If you already have machines where you test your project, you can always connect them to GitLab as shell/Docker runners and run the tests on a custom runner.
It all depends on what resources you have and what you want to achieve.
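A sketch of the before_script variant (the base image and package names are assumptions, not something from the question, and are untested):

```yaml
# Hypothetical job: start from a plain Debian image and install the
# JDK and Maven in before_script instead of using a prebuilt image.
tests_jdk11_installed:
  image: debian:bullseye
  before_script:
    - apt-get update && apt-get install -y openjdk-11-jdk-headless maven
  script:
    - mvn test
```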

Docker: how to manage development and production settings?

I'm just getting started with Docker. With the official NGINX image on my OSX development machine (with Docker Machine as the Docker host) I ran up against the bug with sendfile and VirtualBox which means the server fails to show changes I make to files.
The workaround for this is to use a modified nginx.conf file that turns off sendfile. This guy's solution has an instruction in the Dockerfile to copy a customised conf file into the container. Alternatively, this guy maps the NGINX configuration to a new folder with modified conf file.
This kind of thing works OK locally. But what if I don't need this modification on my cloud host? How should I handle this and other differences when it comes to deployment?
You could mount your custom nginx.conf into the container in development via e.g. --volume ./nginx/nginx.conf:/etc/nginx/nginx.conf and simply omit this parameter to docker run in production.
If using docker-compose, the two options I would recommend are:
Employ the limited support for environment variable interpolation and add something like the following under volumes in your container definition: ./nginx/nginx.${APP_ENV}.conf:/etc/nginx/nginx.conf
Use a separate YAML file for production overrides.
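A sketch of the override-file option (file names and the service definition are illustrative): keep the shared base free of the dev-only mount, and add the sendfile-off config only in the development file.

```yaml
# docker-compose.yml -- shared base used in every environment
services:
  web:
    image: nginx
    ports:
      - "80:80"
```

And the development-only additions in a second file, docker-compose.dev.yml:

```yaml
# docker-compose.dev.yml -- mounts the modified nginx.conf
services:
  web:
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
```

In development you run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up; in production, plain docker-compose up uses only the base file.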

profile-refresh in Fuse 6.2 does not reload snapshot bundle

I am running JBoss Fuse 6.2.0.
I built a small camel application that just writes to the log every 5 seconds.
I built it and installed the SNAPSHOT bundle jar in my local Maven repository.
In the Karaf console I did the following:
fabric:profile-create --parent feature-camel logdemo
fabric:profile-edit --bundle mvn:com.company.project/logdemo logdemo
fabric:container-create-child --profile logdemo root child1
The camel application now worked as intended.
I then made a small change to the application, rebuilt it and installed the new SNAPSHOT bundle jar in my local Maven repo.
In the Karaf console I then did the following to get Karaf to load the new jar:
fabric:profile-refresh logdemo
But the loaded application is still the old version.
How do I get Karaf to look for the updated jar in my local maven repo? It seems like it has some internal cache it looks in instead.
Note: We're not using Maven to build the application, so all answers about using Maven plugins like the fabric8 plugin will be rejected.
You should use the fabric:watch * command for that. This will update all containers that run a snapshot version of an artifact that is updated in the local maven repo. If you want only a specific container to watch for updates use dev:watch * on the shell of that container.
See http://fabric8.io/gitbook/developer.html

Robolectric 2.x - dependent jars are downloading while running the tests

How do I download all the Robolectric dependency jars ahead of time, to avoid downloading them at runtime and make the tests work offline? I need to use Robolectric.buildActivity(), which is part of the 2.x.x versions.
Any ideas on this?
Starting with Robolectric 2.4, there are two system properties that tell the Robolectric test runner to use local copies of the dependencies. See the Configuring Robolectric page.
The settings are:
robolectric.offline - Set to true to disable runtime fetching of jars
robolectric.dependency.dir - When in offline mode, specifies a folder containing runtime dependencies
One way to figure out which files you need to copy to the dependencyDir, is to run gradlew testDebug -i (or maybe with -d) and watch the output to see which jars are being downloaded at runtime. Then copy them to a known location on your build machine. (Another way to see which files you need, is to look at SdkConfig.java and get the dependency jars mentioned there along with their dependencies.)
For the current Robolectric 3.0-rc2, these are the files it needs:
accessibility-test-framework-1.0.jar
android-all-5.0.0_r2-robolectric-1.jar
icu4j-53.1.jar
json-20080701.jar
robolectric-annotations-3.0-rc2.jar
robolectric-resources-3.0-rc2.jar
robolectric-utils-3.0-rc2.jar
shadows-core-3.0-rc2.jar
sqlite4java-0.282.jar
tagsoup-1.2.jar
vtd-xml-2.11.jar
Copy these files to a known location, like say /home/jenkins/robolectric-files/, and then edit your build.gradle with something like this:
afterEvaluate {
    project.tasks.withType(Test) {
        systemProperties.put('robolectric.offline', 'true')
        systemProperties.put('robolectric.dependency.dir', '/home/jenkins/robolectric-files/')
    }
}
Here is how I solved it for org.robolectric:robolectric:3.0:
https://gist.github.com/kotucz/60ae91767dc71ab7b444
It downloads the runtime dependencies into the build folder and configures the tests to use them - see how it sets the system properties.
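A rough build.gradle sketch of that idea (configuration name, coordinates, and paths are illustrative, not taken from the gist):

```groovy
// Hypothetical fragment: resolve the jars Robolectric needs at runtime
// into a folder inside the build directory, then point the offline
// properties at that folder.
configurations {
    robolectricRuntime
}

dependencies {
    // Illustrative coordinate; depending on your Robolectric version
    // you may need to list runtime-only jars (android-all,
    // shadows-core, ...) explicitly, as in the jar list above.
    robolectricRuntime 'org.robolectric:robolectric:3.0'
}

// Copy the resolved jars into build/robolectric-files
task fetchRobolectricDeps(type: Copy) {
    from configurations.robolectricRuntime
    into "$buildDir/robolectric-files"
}

afterEvaluate {
    project.tasks.withType(Test) {
        dependsOn fetchRobolectricDeps
        systemProperties.put('robolectric.offline', 'true')
        systemProperties.put('robolectric.dependency.dir',
                "$buildDir/robolectric-files")
    }
}
```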
I had this issue too, and found the cause to be org.robolectric.RobolectricTestRunner creating an org.robolectric.MavenCentral object, which declares a Maven repository using an Internet URL (Robolectric 2.3-release). Offline builds will not be able to access that URL.
In my case I'm required to use a Maven repository proxy, so I replaced the URL pointing to http://oss.sonatype.org with my local Maven repository proxy. That meant subclassing RobolectricTestRunner as org.robolectric.MyRobolectricTestRunner, creating a custom MavenCentral object for it to use, and overriding the methods where RobolectricTestRunner references its private MAVEN_CENTRAL object.
The source code for RobolectricTestRunner and MavenCentral are available on Robolectric's Github page.
I used Robolectric version 3.0, and the dependency jars were downloaded from my repository, instead of sonatype.

TideSDK | Bundle packaging

I have developed a TideSDK application and am now ready to package it, but I'm having problems with the network type installer.
It always gives me code 404 on the application's first run:
Could not query info: Invalid HTTP Status Code (404)
I presume the installer is having difficulty reaching the correct servers and downloading the needed runtime; I have run through most solutions on this forum, and none have worked.
So I tried bundle packaging, as it should include the runtime, but I must be doing something wrong, since it does not get bundled into the MSI.
The code I'm executing is as follows:
C:\TideSDK\sdk\win32\1.2.0.RC6d\tibuild.py -p --type=BUNDLE --os=win32 "C:\path_to_app\app_dir"
I also tried:
C:\TideSDK\sdk\win32\1.2.0.RC6d\tibuild.py -p -t bundle --os=win32 "C:\path_to_app\app_dir"
And all the uppercase/lowercase combinations. I also tried version 1.2.0.4, without success. Am I doing something wrong?
The network type installer is not available anymore, since Appcelerator has cancelled their services for Titanium Desktop.
So you can only do bundle packaging. Try the following command:
python tibuild.py --dest=. --type=bundle --package=. "c:\path\to\your\app\dir"
This should build and package your app and create an installer for it.
Change "dest" and "package" to the directories where you want the built app and the installation package.
You can omit the OS parameter, since the builder can only generate builds for the current OS.
