I checked https://docs.corda.net/deploying-a-node.html for deploying on Windows Server. I can see that nodes are deployed using the NSSM service manager.
When I deploy the nodes, how will they access my application, which is placed as a jar at /opt/corda/CordaApp.jar?
Also, when I run the nssm.bat file under each node, my cmd window keeps running the first command and never finishes. Nothing proceeds after that.
There is a typo in the docs. Where it says:
Create a directory called plugins in /opt/corda and save your CorDapp jar file to it. Alternatively, download one of our sample CorDapps to the plugins directory
It should read instead:
Create a directory called plugins in C:\Corda\ and save your CorDapp jar file to it. Alternatively, download one of our sample CorDapps to the plugins directory
This was fixed by the following PR: https://github.com/corda/corda/pull/2607.
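As a minimal sketch, assuming the corrected paths above (the node folder and jar location are examples):
rem place the CorDapp where the node looks for it
mkdir C:\Corda\plugins
copy CordaApp.jar C:\Corda\plugins\
rem nssm.bat blocks the current console (as described in the question),
rem so launch each node's script in its own window:
start "node1" cmd /c C:\Corda\node1\nssm.bat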
I am following the documentation here: https://docs.corda.net/network-bootstrapper.html to bootstrap a test network. In the section "Providing CorDapps to the Network Bootstrapper", we are asked to place the CorDapp jar alongside the conf files.
I run the command below:
java -jar corda-network-bootstrapper-3.2-corda-executable.jar "."
Afterwards, I see the message Bootstrapping complete!
But when I go into the node folders, none of them have the CorDapp in their directories. How do I know that the CorDapp is installed?
Also, another issue with the command is that I can't provide the directory using the --dir parameter.
Can you please suggest a workaround for these issues?
Thanks.
Update
Below are the screenshots of the commands on Mac OS X (not reproduced here): my folder structure, and the error I get after executing the command.
Your jar should be present inside your node's root/cordapps directory. If it's not, then the bootstrapper was unable to identify your CorDapp. Try adding --verbose, and it should print something like "Found the following CorDapps: ".
Also, when you start the Corda nodes, the CLI will show all installed CorDapps.
For your --dir issue, please add the error message and your operating system here. It's working fine for me on Linux.
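A quick way to combine those checks (a sketch; the jar name is taken from the question, and --verbose is the flag suggested above):
java -jar corda-network-bootstrapper-3.2-corda-executable.jar "." --verbose
# after bootstrapping, each generated node folder should contain the jar:
ls */cordapps/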
I have some ClojureScript source files that output messages to the browser console on a timer. Eventually I would like to publish these files as a library on Clojars. So far I have created an uberjar using lein. All a user of this library would need to do is :require a namespace from the library, and messages should be emitted to the browser console. Seeing these messages is the "all working fine" test I want to perform.
In other words, how do I check that the jar file I have created works? Can I start with a fresh lein project, put the jar file in some special 'un-managed' directory, and :require the namespace? Actually, I don't think you can do such a thing with lein, hence the question.
Assuming you have a project.clj file already with the line
(defproject bigco/biglib "0.1.0-SNAPSHOT"
...
run
lein install
This will build the JAR and install it in your local Maven repo.
Then in your new project, add that dependency and run it.
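As a quick sanity check that the install landed in your local repository (a sketch; the coordinates come from the defproject line above):
lein install
ls ~/.m2/repository/bigco/biglib/0.1.0-SNAPSHOT/
# then, in the consuming project's project.clj:
#   :dependencies [[bigco/biglib "0.1.0-SNAPSHOT"]]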
If your jar (which of course includes an uberjar) does not come neatly from a lein project, then an alternative is to use Maven directly:
mvn install:install-file -Dfile=./my-deps.jar -DgroupId=my-deps -DartifactId=my-deps -Dversion=1.0.0 -Dpackaging=jar
Here mvn will store the jar in your local .m2 Maven repository. Once stored, you can use this jar in any lein project on your machine by referring to it in the :dependencies section:
[my-deps "1.0.0"]
See the Maven documentation for this.
I am working with a community-developed OpenShift cartridge for nginx. The cartridge's build script (without any modifications) works well; it starts the nginx server with the configuration file that I provide it. However, I am trying to modify the build script so that it first changes directory into my OpenShift repository, runs npm install and then grunt build to build an Angular application that I have created.
When I do this, I continuously get the error EACCES, mkdir '/var/lib/openshift/xxxxxxxxxx/.npm' when the script gets to npm install. Some OpenShift forum posts have attempted to solve the issue, but it appears as though a different solution is required (at least in my case).
Thus, I am interested in whether or not it is possible to use npm in this way, or if I need to create a cartridge that does all of this myself.
Since we do not typically have the access required to create ~/.npm, we have to find a way to move the npm cache (normally ~/.npm) and the npm user configuration (normally ~/.npmrc) to accessible folders. The following information comes partially from a bug report that I submitted to Red Hat on this matter.
We must begin by creating an environment variable to control the location of .npmrc. I created a file (with shell access to my application) called .env in $OPENSHIFT_DATA_DIR. Within this file, I have placed:
export NPM_CONFIG_USERCONFIG=$OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npmrc
This moves .npmrc to a place where we have read/write privileges. Naturally, I also have to create the build-dependencies directory in $OPENSHIFT_HOMEDIR/app-root/ (NPM_CONFIG_USERCONFIG names the .npmrc file inside it). Then, in my pre-start webhook/early in my build script, I have placed:
touch $OPENSHIFT_DATA_DIR/.env
This ensures that the environment variable configuring the location of .npmrc is set up each time we deploy/build. Now we can move the npm cache. Start by touching the .env file manually, and create the .npm directory in $OPENSHIFT_HOMEDIR/app-root/build-dependencies/. Run the following to complete the reconfiguration:
npm config set cache $OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npm
NPM should now be accessible each time we deploy, even if we are not using the NodeJS cartridge. The above directory choices may be changed as desired.
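Put together, the one-time reconfiguration looks roughly like this (run with shell access to the gear):
# create writable locations for the npm cache and user config
mkdir -p $OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npm
touch $OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npmrc
# persist the userconfig location so every build/deploy picks it up
echo 'export NPM_CONFIG_USERCONFIG=$OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npmrc' > $OPENSHIFT_DATA_DIR/.env
source $OPENSHIFT_DATA_DIR/.env
# point the npm cache at the writable directory
npm config set cache $OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npm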
You do not have write access to the ~/.npm directory in your gear. You might try reviewing how the native Node.js cartridge is set up (https://github.com/openshift/origin-server/tree/master/cartridges/openshift-origin-cartridge-nodejs) and see if you can apply it to your custom cartridge.
First, I am new to Nexus, so please bear with me if this is too basic a question. Let me first explain how our current build/deployment process works.
HOW WE DO IT AT PRESENT:
We have a Maven-based project. There is a parent pom.xml and two module pom.xmls; each child module produces a JAR file when built. Currently I do the builds/deployments manually: I check out code from SVN to my local machine, run mvn clean install, and use a bash script I created to bundle the two JAR files plus a few other resources (present only in the SVN repo and downloaded locally) into a tar.gz file. I then SCP this to the app server and run install scripts that deploy the tar.gz file. A sketch of this flow follows below.
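For reference, a minimal sketch of that manual flow (module names, resource paths, and the server address are placeholders):
svn checkout http://svn.example.com/myproject && cd myproject
mvn clean install
mkdir -p bundle
cp moduleA/target/moduleA-1.0.jar moduleB/target/moduleB-1.0.jar resources/* bundle/
tar -czf myapp.tar.gz -C bundle .
scp myapp.tar.gz user@appserver:/opt/deploy/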
HOW WE WANT TO DO IT:
We plan to automate the build in Bamboo (which I have already done). The built artifact then needs to be uploaded to a Nexus repository (due to security restrictions, the SCP task in Bamboo cannot establish SSH connectivity from the Bamboo server to the app server).
MY FIRST HURDLE:
I have created a Bash Script task in Bamboo which does the bundling (the two JARs from the child module POMs plus resources) into a tar.gz. This tar.gz is present at a path a/b/c/d on my Bamboo machine.
How do I upload this tar.gz to the Nexus repository?
MY CONFUSION:
I have read about uploading artifacts to Nexus, but only for the case where the build produces a single jar/ear/war file. We want the bundle. If I change settings.xml and pom.xml to configure the upload to Nexus, each JAR file will be uploaded to a separate path in Nexus, and I would then have to configure the upload of the resource files (which are not part of the build) separately. Is my understanding correct? Please let me know how to proceed.
Thanks in advance!
Use the Maven Assembly Plugin to create an assembly that contains your artifacts and resources, and then your regular maven deploy will deploy it into Nexus.
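Alternatively, if you just want to push the existing tar.gz as a single artifact without restructuring the build, mvn deploy:deploy-file can upload it directly. A sketch; the coordinates, repository URL, and repository ID are placeholders, and the repository ID must match a <server> entry in your settings.xml:
mvn deploy:deploy-file \
  -Dfile=a/b/c/d/myapp-bundle.tar.gz \
  -DgroupId=com.mycompany -DartifactId=myapp-bundle -Dversion=1.0.0 \
  -Dpackaging=tar.gz \
  -Durl=http://nexus.example.com/repository/releases/ \
  -DrepositoryId=releases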
I am having the same issue described in this post on the py2app mailing list.
I have a python application that uses a sqlite database. On my machine, which has all the dependencies installed, there are no issues. However, when I bundle the application with py2app, clicking a menu that causes the database to be accessed results in this error:
Database error: Driver not loaded Driver not loaded
For the Windows installer, the files in \Qt\version\plugins\sqldrivers\*.* can be copied to \myApp\sqldrivers\*
The same files on the Mac can be found in /opt/local/share/qt4/plugins/sqldrivers (installed via Macports).
However, copying the sqldrivers directory to my application's Resources or Frameworks directories still results in the same error.
How can I add sqlite support into my application that is built using py2app?
It turns out the PySide recipe does have a way to specify which Qt plugins you need:
setup(
    # ... other setup() arguments ...
    options=dict(py2app={
        'argv_emulation': True,
        'qt_plugins': 'sqldrivers',
    }),
)
This puts all the sqldrivers into the right directory and sets up qt.conf correctly.
Have you tried what was suggested in this post: py2app setup.py usage question? It mentions that you need to include sqlalchemy.dialects.sqlite as a package.
I managed to get this to work as follows:
After building with py2app, inside the application's Contents directory, make a new plugins directory.
Then copy sqldrivers/libqsqlite.dylib into this plugins directory.
Afterwards, install_name_tool has to be used to change the library links in libqsqlite.dylib to point to the Qt libraries in the application's Frameworks directory rather than the system Qt libraries.
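Roughly, that last step looks like this (a sketch; the exact library paths vary by Qt install, and otool -L shows what libqsqlite.dylib currently links against):
# inspect the current links
otool -L MyApp.app/Contents/plugins/libqsqlite.dylib
# rewrite each Qt dependency to the copy bundled in Frameworks
# (the /opt/local path is an example from a MacPorts install)
install_name_tool -change /opt/local/lib/libQtSql.4.dylib \
    @executable_path/../Frameworks/libQtSql.4.dylib \
    MyApp.app/Contents/plugins/libqsqlite.dylib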