I'm trying to deploy Corda nodes on a Windows server. When saving the CorDapp jar in the plugins folder, which jar file should I save? Should I generate the jar using IntelliJ artifacts, or just copy the contents of the plugins folder from the respective nodes created by the gradlew deployNodes command?
If I've understood your question correctly, the easiest way is to grab the jar(s) created inside the respective node's plugins folder after you've run deployNodes.
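To illustrate, the copy can be scripted once deployNodes has finished. A minimal sketch, where the node name, jar name, and server path are all placeholders for your own setup:

```shell
# Illustrative layout only: deployNodes writes each node under build/nodes/,
# and the CorDapp jar(s) land in that node's plugins folder. The node name,
# jar name, and destination path below are placeholders, not real values.
mkdir -p build/nodes/PartyA/plugins
touch build/nodes/PartyA/plugins/cordapp-example-0.1.jar   # stand-in for the real jar

mkdir -p /tmp/corda-server/plugins                         # your server's plugins folder
cp build/nodes/PartyA/plugins/*.jar /tmp/corda-server/plugins/
```

On Windows the same copy would be done with Explorer or `copy`, but the source folder is the same: the generated node's plugins directory.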
In Cloud Composer (Airflow 1.10.10), is it possible to create an airflow_local_settings.py file? And if so, where should it be stored? I need this because I need an initContainer for my pod.
Add an airflow_local_settings.py file to your $PYTHONPATH or to the $AIRFLOW_HOME/config folder.
For me, the above statement is unclear for Cloud Composer, as this config folder in the environment bucket would probably not be synced to a worker.
Based on discussions in the Apache Airflow Community Slack, it is not supported yet.
airflow_local_settings.py can be placed in the dags folder of the Composer GCS bucket, because of the way Airflow initialization is implemented: the DAGs folder is added to sys.path before the configuration folder ($AIRFLOW_HOME/config). Keep in mind that by doing so you are overriding Composer's default policies.
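The sys.path ordering this relies on can be demonstrated locally. A minimal sketch, with made-up paths and a made-up SOURCE variable (this is not Airflow itself, only the module-shadowing behaviour the answer describes):

```shell
# Simulate Airflow's sys.path order: the DAGs folder is inserted in front of
# the config folder, so a module in dags/ shadows one of the same name in
# config/. Paths and file contents here are purely illustrative.
mkdir -p /tmp/af_demo/dags /tmp/af_demo/config
echo "SOURCE = 'dags folder'"   > /tmp/af_demo/dags/airflow_local_settings.py
echo "SOURCE = 'config folder'" > /tmp/af_demo/config/airflow_local_settings.py

python3 - <<'EOF'
import sys
# config first, then dags in front of it, mirroring the initialization order
sys.path.insert(0, '/tmp/af_demo/config')
sys.path.insert(0, '/tmp/af_demo/dags')
import airflow_local_settings
print(airflow_local_settings.SOURCE)   # the dags copy wins
EOF
```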
I am new to Flyway. I like it a lot and I'm about to integrate it into our Java workspace. However, I need a little "push" when planning the actual release procedure for these migrations. According to Flyway's documentation, I have the choice of
distributing a list of SQL files, or
distributing a list of both SQL and Java files packed into a jar archive.
I would like to try the second option because it gives us more flexibility, and it sounds like I could simply include the migration scripts as resources in the jar file of the executable. However, since we deliver database changes quite often in a continuous release process, I can see the jar file eventually being polluted with tons of script files. Moreover, when using Ant to create the jar file, Ant puts the name of every file into the manifest's classpath, which clutters the manifest.
With these concerns in mind, for those of you that use Flyway in production:
What is the recommended way of distributing migration scripts? Do you put all of them into a jar and pass it to Flyway on the server? Or do you copy all scripts to the server using a batch file every time you make a release? Thanks for your advice.
The war project is built with Maven.
The war project's src/main/resources/db/migration folder holds the migration scripts.
DB migration scripts are packaged when the war is built.
When the war is deployed, the Flyway db migration is executed as part of container startup.
The above approach is used for the local development environment; Test/Stage environments are continuously deployed by a Jenkins pipeline and DB migrations run automatically.
In case of failures in the continuous deployment pipeline, we run the db migration manually using the Flyway command line.
We also have the option of running Flyway from the command line for production deployment and for troubleshooting.
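For the command-line runs mentioned above, the connection details can live in a flyway.conf next to the scripts. A minimal sketch with placeholder values (the URL, credentials, and locations below are assumptions for illustration, not the answerer's actual setup):

```properties
# flyway.conf -- illustrative values only
flyway.url=jdbc:postgresql://localhost:5432/appdb
flyway.user=app_user
flyway.password=secret
# point Flyway at the migration scripts copied to the server
flyway.locations=filesystem:./db/migration
```

Running `flyway migrate` in that directory then applies any pending scripts against the configured database.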
You can set up a Jenkins job to automate the Flyway migration. You can do it in two ways:
By configuring the goals and options to run
compile flyway:migrate -Dflyway.user="xxxxx" ...(your other options)...
By installing a Flyway Runner Plugin on Jenkins
I would prefer the first option, as you won't have to install anything on your Jenkins box.
I am trying to deploy my first Firebase app. I am getting the message "* You are currently outside your home directory". I googled it and found this reply:
"commented on Dec 6, 2016
Just to make sure you're aware. If someone is experiencing the same problem with the command firebase init
Make the files .firebaserc and firebase.json manually and the deploy should work normally."
I do not know where to create them or what they should contain.
I have also gone to https://www.npmjs.com/package/firebase-tools to try to fix this problem.
If anyone can help with this problem, I would appreciate it.
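For reference, the two files the quoted comment mentions are small, and both go in the project root. A minimal sketch for a Hosting deploy; "your-project-id" and the "public" directory name are placeholders for your own project:

```json
{
  "projects": { "default": "your-project-id" }
}
```

saved as .firebaserc, and:

```json
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}
```

saved as firebase.json. Running firebase init normally generates both of these for you.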
If anyone else is worried about this, just keep going. I continued with the deployment and it deployed OK.
Download the Firebase CLI binary (in case you didn't download it yet; this is an .exe file if you use Windows).
Copy the downloaded file into your project's root folder (the folder which contains all the files and folders of your project).
Run firebase-tools-instant-win_2.exe (the Firebase CLI binary).
A command window will open.
Execute all your commands in there.
Initialize a Firebase project
Many common tasks performed using the CLI, such as deploying to a Firebase project, require a project directory. You establish a project directory using the firebase init command. A project directory is usually the same directory as your source control root, and after running firebase init, the directory contains a firebase.json configuration file.
To initialize a new Firebase project, run firebase init from within your app's directory.
I used firebase init project in my project's directory to get it to work
My current workflow when building a WordPress site is to create the project in a Dropbox folder (so it gets backed up). I then push to a remote repo.
My problem is that although I can gitignore the Gulp dependencies for the repo, I cannot do the same with Dropbox. Each project can generate 500,000 files (albeit tiny ones), and this really breaks Dropbox, for days!
Am I missing something? Possible solutions:
Create the project outside Dropbox and rely on the repo as a backup :(
Create the project outside Dropbox, then clone the repo back into Dropbox (without the Gulp dependencies)?
Install dependencies globally (npm install -g)? The problem is that having all projects use the same dependencies isn't always what I want.
Thanks in advance
We have an app with web and worker nodes; the code for both is in the same git repo but gets deployed to different autoscaling groups. The problem is that there is only one appspec file, yet the deployment scripts (AfterInstall, ApplicationStart, etc.) for the web and worker nodes are different. How would I go about setting up CodeDeploy to deploy both apps and execute different deployment scripts?
(Right now we have an appspec file that just invokes Chef recipes, which execute different actions based on the role of the node.)
I know this question is very old, but I had the same question/issue recently and found an easy way to make it work.
I added two appspec files to the same git repo: appspec-staging.yml and appspec-storybook.yml.
I also added two buildspec files, buildspec-staging.yml and buildspec-storybook.yml (AWS CodeBuild lets you specify which buildspec file to use).
The idea is that after the build is done, we copy and rename the specific appspec-xx.yml file to the final appspec.yml, so that in the CodeDeploy stage we have a proper appspec.yml to deploy. The command below is for a Linux environment.
post_build:
  commands:
    - mv appspec-staging.yml appspec.yml
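In context, the staging buildspec might look roughly like this. Only the post_build mv comes from the answer; the version number is CodeBuild's current schema, and the build command and artifact list are placeholders:

```yaml
# buildspec-staging.yml -- illustrative fragment
version: 0.2
phases:
  build:
    commands:
      - echo "your real build steps go here"   # placeholder
  post_build:
    commands:
      # promote the staging appspec so CodeDeploy finds a file named appspec.yml
      - mv appspec-staging.yml appspec.yml
artifacts:
  files:
    - appspec.yml
    - '**/*'
```

The storybook variant would be identical apart from moving appspec-storybook.yml instead.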
Update: according to an Amazon technical support representative, it is not possible.
They recommend having separate git repos for different environments (prod, staging, dev, etc.) and for different apps.
That makes it harder to share code, but is probably doable.
You can make use of environment variables exposed by the agent in your deployment scripts to identify which deployment group is being deployed.
Here's how you can use them: https://blogs.aws.amazon.com/application-management/post/Tx1PX2XMPLYPULD/Using-CodeDeploy-Environment-Variables
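As a sketch, a single lifecycle hook could branch on DEPLOYMENT_GROUP_NAME, which the CodeDeploy agent exposes to hook scripts. The group names "web-fleet" and "worker-fleet" below are placeholders for your own deployment group names, and the variable is simulated here so the sketch runs on its own:

```shell
#!/bin/bash
# One shared AfterInstall/ApplicationStart hook that branches on the
# deployment group. CodeDeploy sets DEPLOYMENT_GROUP_NAME for hook scripts;
# we set it manually below only to make the example self-contained.
run_hook() {
  case "$1" in
    web-fleet)    echo "web steps" ;;
    worker-fleet) echo "worker steps" ;;
    *)            echo "unknown group: $1" ;;
  esac
}

DEPLOYMENT_GROUP_NAME="web-fleet"          # simulated; the agent sets this for real
run_hook "$DEPLOYMENT_GROUP_NAME" > /tmp/hook_result.txt
cat /tmp/hook_result.txt
```

With this pattern, both autoscaling groups can share one appspec.yml pointing at the same hook scripts.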
Thanks,
Surya.
The way I got around this was to have an appspec.yml.web and an appspec.yml.worker in the root of the project. I then have two jobs in Jenkins, one each for the worker and the web deployments. Each job renames the appropriate file to appspec.yml and does the bundling to send to CodeDeploy.
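The rename step in each Jenkins job can be as simple as the following sketch. The file names come from the answer above; the working directory and the bundling/push step are placeholders:

```shell
# Simulate a project root containing both spec variants (stand-in files).
mkdir -p /tmp/appspec-demo && cd /tmp/appspec-demo
touch appspec.yml.web appspec.yml.worker

# In the "web" Jenkins job: promote the web variant before bundling.
cp appspec.yml.web appspec.yml
# ...then bundle and push the revision, e.g. with `aws deploy push`
# (not run here; the worker job does the same with appspec.yml.worker).
```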