I want to create some kind of "branding" for a Scala application using sbt-native-packager.
The idea is to configure two profiles in sbt (aprofile and bprofile), so that calling a specific target selects the correct configuration file to be used when starting the application.
There are two configuration files:
conf/a.config
conf/b.config
I want to package the application using the following commands:
sbt aprofile:stage
sbt bprofile:stage
sbt-native-packager defines
bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/app.config""""
Now the question is how to create those two profiles in sbt so that each one sets the bashScriptExtraDefines property accordingly. I am trying to adapt https://stackoverflow.com/a/20573422/289043, but I don't know how to correctly override bashScriptExtraDefines.
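For illustration, here is a rough, untested sketch of one possible alternative: skip custom configurations entirely and model each "brand" as a thin packaging sub-project that shares the application code and differs only in bashScriptExtraDefines (the project names, directories and main class below are assumptions):

lazy val app = (project in file("app"))   // the actual application code

lazy val aprofile = (project in file("branding/a"))
  .enablePlugins(JavaAppPackaging)
  .dependsOn(app)
  .settings(
    Compile / mainClass := Some("com.example.Main"),   // assumed main class
    bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/a.config""""
  )

lazy val bprofile = (project in file("branding/b"))
  .enablePlugins(JavaAppPackaging)
  .dependsOn(app)
  .settings(
    Compile / mainClass := Some("com.example.Main"),
    bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/b.config""""
  )

With this layout the packaging commands would become sbt aprofile/stage and sbt bprofile/stage rather than aprofile:stage.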
Related
I built a console app with .NET Core 3.1. I have it building using YAML, leaning heavily on the learn.microsoft.com documentation. The release is pushing to the correct box, but I have an appsettings.json file with a connection string variable that differs between my TEST, QA and PROD regions. I knew how to do this with the XML file transforms in .NET and MVC, but I can't get this to work. Any help would be great, since I don't even know the term for what I am trying to do here.
How do you change the connection string in appsettings.json based on a variable, or do I have to create three branches, each with its own settings, and create three build and release pipelines?
Thank you.
In order to push to different environments you usually either:
Have separate release pipelines that trigger from different branches, or
Have one release pipeline with different stages that need pre-approval to move to the next stage (TEST -> QA -> PROD).
In both cases you will make use of stages.
In each stage you need to add a task named "File transform".
In the task's File format option, select JSON.
Now, any setting found in the appsettings.json file will be replaced by the matching variables you set in the pipeline.
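In a YAML pipeline, the equivalent step might look roughly like this (a sketch only; the folderPath and targetFiles values are assumptions about where your published package lands):

- task: FileTransform@1
  displayName: 'Substitute appsettings.json values'
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/**/*.zip'
    fileType: 'json'
    targetFiles: '**/appsettings.json'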
Be careful, because nested settings like
{
  "SerilogSettings": {
    "BatchSize": 100
  }
}
need to be set with a "." in the variable name instead, like
SerilogSettings.BatchSize
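For example, a stage-scoped pipeline variable for the nested setting above could be declared like this (the value 500 is purely illustrative; dotted names can also be set in the classic variables UI):

variables:
  SerilogSettings.BatchSize: '500'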
My .NET Core app has one appsettings.json per environment (appsettings.json and appsettings.Development.json, for example) and I would like to take advantage of this in my pipeline.
I see 2 options for the pipeline:
Build Artifact for Dev -> Deploy on Dev -> Build Artifact for Prod -> Deploy on Prod
or
Build Artifact -> Deploy on Dev -> Deploy on Prod
For the first option, I could set the environment as a parameter for the build.
For the second option, how could I build the app only once and set the environment according to the current deployment step, taking advantage of the multiple appsettings.json files I have?
And finally, are these approaches aligned with best practices? If not, what would be the best practice for pipelines with multiple environments?
Generally you can generate a single artifact, then deploy that artifact to the different environments and perform the environment-specific transformations within each environment's own release stage. That means you can change and override the settings defined in appsettings.json per release environment.
Please refer to the File transforms and variable substitution reference for how to do the transformation of .json files.
Besides that, you can install the Replace Tokens extension and then use its Replace Tokens task to load and change the settings defined in the appsettings.json file in each release environment/stage.
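As a sketch of that approach (the token pattern, paths and the task's major version are assumptions that depend on the extension version you install), appsettings.json would hold a token:

{
  "ConnectionStrings": {
    "Default": "#{DbConnectionString}#"
  }
}

and the release stage would run something like:

- task: replacetokens@3
  inputs:
    rootDirectory: '$(System.DefaultWorkingDirectory)'
    targetFiles: '**/appsettings.json'
    tokenPrefix: '#{'
    tokenSuffix: '}#'

with a DbConnectionString variable defined per stage.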
You can also transform the settings or use File Creator to create a new appsettings.json file that overwrites the existing one.
Below are some blog posts for reference:
Replace appsetting tokens in config files with Build & Release Management in VSTS (TFS)
Transform configurations in a .NET Core 2.2 Web API using Azure DevOps
Using custom appsettings.json with ASP.NET Core integration tests
You could go with Azure App Configuration and add it as an extra source for the configuration. This way your build/release process stays extremely simple.
See this documentation: https://learn.microsoft.com/en-us/azure/azure-app-configuration/enable-dynamic-configuration-dotnet-core
It's very powerful: you can select only part of the configuration (through filters), you can have feature flags, and you can have secrets (from linked key vaults).
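As a minimal sketch of what wiring that up could look like in a console app (assuming the Microsoft.Extensions.Configuration.AzureAppConfiguration package; the environment-variable name, label and key are illustrative):

using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Configuration.AzureAppConfiguration;

class Program
{
    static void Main()
    {
        IConfiguration config = new ConfigurationBuilder()
            .AddAzureAppConfiguration(options =>
                options.Connect(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
                       // pull all keys, filtered by a per-environment label (label value is illustrative)
                       .Select(KeyFilter.Any, "PROD"))
            .Build();

        Console.WriteLine(config["ConnectionStrings:Default"]);
    }
}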
I am working on an application for macOS, written in C++ using Qt 5.7.1, which is distributed both as a direct download from a website and through the App Store.
Now I have a new requirement to add self-update to the application. This would require building two different versions of the application: one for the App Store (without the self-update mechanism) and one for direct download (with self-update).
I have no problem implementing the self-update, but I am stuck on making a separate build configuration. So the question is: in Qt 5, is it possible to define two separate Release configurations (in a single .pro file), and if yes, then how? It also needs to work with qmake, since the builds are automated (with Jenkins).
You cannot really define 2 Release configurations in a .pro file.
However you can define different configuration options in your .pro file like this:
foo {
#something
DEFINES += FOO
} else {
#something else
DEFINES += NOT_FOO
}
bar {
#another thing
DEFINES += BAR=42
}
And then when you run qmake, add either CONFIG+=foo or CONFIG+=bar or both.
Finally, in Qt Creator you can define as many build profiles as you want and customize the call to qmake for each profile by adding (or not) the CONFIG+=foo options.
For more information check qmake's CONFIG documentation, especially the last example.
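To show how such a DEFINES flag is typically consumed on the C++ side (FOO comes from the sketch above; the update-check call itself is hypothetical):

#include <QDebug>

// Called at startup; which branch gets compiled in depends on CONFIG+=foo at qmake time.
void maybeCheckForUpdates()
{
#ifdef FOO
    qDebug() << "Direct-download build: running the self-update check";
    // ... trigger the self-update mechanism here ...
#else
    qDebug() << "App Store build: self-update disabled";
#endif
}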
I'm trying to leverage GruntJS to create a build process that is uniform across multiple teams and projects at my company. The idea here is that we have a config file for each application that only specifies the files that need to be processed and the bundles they need to be concatenated into at the end. The build process would be the same for all apps: pick up the config for the app, then process the files in each bundle using a uniform build process.
For Example:
asset.json config file specifies two bundles, "main" with 1.js + 2.js and "secondary" with 2.js and 3.js
Build process says for each bundle, preprocess, minify, then concatenate into a js file based on the bundle
Get output of "main.js" and "secondary.js"
The problem I'm running into is that Grunt takes a "static" configuration and executes it. I've already abstracted out the building of the configuration so that I can add chunks dynamically, but right now I don't see a better way forward than literally looping over each bundle, building a unique task for each section of the build process for each bundle, building up queues of tasks to execute, and then running each task in the queues during the build process. It's definitely possible, but it's a lot of manual work and seems prone to breaking. Is there a way to just execute each task in order as I loop over the bundles? Any better way to achieve the same net result of config + source in, N bundles out?
I want to be clear that I am fully aware that Grunt CAN build multiple files. What I'm trying to do is separate the specification of how many bundles there are from the build steps themselves. Grunt core bakes these two things together, which means each project would have to go in and alter its build steps rather than just an external configuration. As per the example above, I should be able to swap out the asset.json file specified in step 1 for any config file that has 1, 2, 3, ... N bundles with N files in each one (and potentially specifying a "type" like scripts or styles).
Edit 10/12/13: The Nitty Gritty posted an article yesterday that might offer another approach to tackling your issue.
This can be done by passing the module name you want to build as a command-line argument and loading the whole assets file in your Grunt config. Please note this is example code; I have not tested it, so you may need to adjust paths etc. for your case.
Start by converting the assets.json file to a plain JavaScript file and reshaping it like so:
module.exports = {
  main: ["1.js", "2.js"],
  secondary: ["2.js", "3.js"]
};
Next, you can pass a command line argument to Grunt, which should specify one of the module names in assets.js. Example:
grunt --bundle=main
Now, you'll need to load in the assets.js file in the Gruntfile:
var assets = require('./assets'); // assuming assets.js is on the same level as your Gruntfile
And then you can get the argument name by using:
var bundle = grunt.option("bundle");
Now you can use bundle as your output file name and assets[bundle] to get the array of files for that bundle.
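Tying those pieces together, a Gruntfile could look roughly like this (an untested sketch; it assumes the grunt-contrib-concat and grunt-contrib-uglify plugins and a build/ output folder):

var assets = require('./assets'); // assets.js next to the Gruntfile

module.exports = function (grunt) {
  // e.g. `grunt --bundle=secondary`; defaults to "main"
  var bundle = grunt.option('bundle') || 'main';

  grunt.initConfig({
    concat: {
      dist: {
        src: assets[bundle],
        dest: 'build/' + bundle + '.js'
      }
    },
    uglify: {
      dist: {
        src: 'build/' + bundle + '.js',
        dest: 'build/' + bundle + '.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('default', ['concat', 'uglify']);
};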
My application supports running on many DBMSs, and it requires the user to configure the DBMS connection settings and also provide the JDBC driver jar file.
Now the application is to be packaged as an OSGi bundle. There will be another main jar which launches the OSGi framework and starts the application as a bundle.
Can you please suggest how I can package the application as a bundle and let the user provide the JDBC jar file?
Will it require something like the main launcher jar specifying the JDBC driver packages via the FRAMEWORK_SYSTEMPACKAGES property?
Thanks in advance,
Aman
There are two ways of doing this:
1) Adding the driver.jar to the classpath of the main launcher and, like you say, exposing its packages via the framework by specifying that property (or, rather, you can use the FRAMEWORK_SYSTEMPACKAGES_EXTRA property to specify just the additional packages instead of listing all of them).
2) Manually wrapping the driver.jar as a bundle, or doing it dynamically at runtime. For example, you could wrap jars that are copied to a certain folder (similar to what Apache Felix File Install does) by using Pax URL or some other tool that can create a bundle out of an ordinary jar file for you (see http://team.ops4j.org/wiki/display/paxurl/Pax+URL).
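A minimal sketch of option 1 from the launcher's side (the PostgreSQL package names are just an example driver; the driver.jar must also be on the launcher's own classpath):

import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;
import org.osgi.framework.Constants;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class Launcher {
    public static void main(String[] args) throws Exception {
        Map<String, String> config = new HashMap<>();
        // Expose the driver's packages from the system bundle so application bundles can import them.
        config.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA,
                   "org.postgresql,org.postgresql.ds");

        FrameworkFactory factory = ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(config);
        framework.start();
        // ... install and start the application bundle here ...
        framework.waitForStop(0);
    }
}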