How to manually generate a node's SignedNodeInfo file - corda

I am using Azure VMs to run my nodes on Corda V3. How do I manually generate the nodeInfo-... file for each node so I can distribute it across the network (without using the network-bootstrapper)?
There's some documentation about using --just-generate-node-info here, but I'm not sure how to run this from the command line on the node.

Go to the node folder and run the following:
java -jar corda.jar --just-generate-node-info
This will create the node-info file in the same folder, with the name nodeInfo-<IDENTIFIER>.
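As a sketch of how this fits into a manual distribution (the paths, hostnames, and the use of the additional-node-infos folder here are assumptions to verify against your own setup), the workflow on each VM might look like:
cd /opt/corda/node1                          # illustrative node base directory containing corda.jar and node.conf
java -jar corda.jar --just-generate-node-info
ls nodeInfo-*                                # the generated SignedNodeInfo file
# copy every node's nodeInfo-* file into the other nodes' additional-node-infos folders
scp nodeInfo-* corda@other-vm:/opt/corda/node2/additional-node-infos/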

Related

How to install and use mirthSync on MacOS?

Setup
I'm following the installation directions in the mirthSync readme, which are to clone the repo. The next indication of usage that I can see is in the Examples section, which via the CLI is to "pull Mirth Connect code from a Mirth Connect instance":
java -jar mirthsync.jar -s https://localhost:8443/api -u admin -p admin pull -t /home/user/
I'm assuming that after cloning the repo, one should cd into that directory and then run the java -jar... command with all the appropriate flag values (server, username, password, etc).
Error
After running the CLI command, I get this error:
Error: Unable to access jarfile mirthsync.jar
Question
Where is this mirthsync.jar file supposed to come from? Is there something I need to do in order to generate the mirthsync.jar file?
Generate it via lein uberjar (which creates target/uberjar/*-standalone.jar) or download it from a release.
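If you build it yourself, a minimal sketch (assuming you have Leiningen installed; the exact jar name depends on the mirthSync version) is:
cd mirthsync                                         # the cloned repo from the readme
lein uberjar                                         # produces target/uberjar/mirthsync-<version>-standalone.jar
java -jar target/uberjar/mirthsync-*-standalone.jar -s https://localhost:8443/api -u admin -p admin pull -t /home/user/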

I have hadoop 1.0.3, name node is not showing when I use jps command

I did all the steps to create a multi-node Hadoop 1.0.3 setup. When I start all services using start-all.sh, the namenode seems to start, but when I run the jps command, the namenode is not displayed in the list. Can anyone help me with this issue?
Please try to start the NameNode from the shell with "hadoop-daemon.sh start namenode" and check its status.
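For example (a sketch; paths assume a standard Hadoop 1.0.3 layout under $HADOOP_HOME):
$HADOOP_HOME/bin/hadoop-daemon.sh start namenode
jps                                              # the list should now include NameNode
# if it still does not appear, check the NameNode log for the real error
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log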

How can I run my MPI code with several nodes?

I have a ROCKS cluster with 1 frontend and 2 nodes (compute-0-1, compute-0-4). I can run my code only on the frontend; whenever I try to run it across the nodes (in several different ways), the output on the frontend console is always:
mpirun was unable to launch the specified application as it could not find an executable
The machine_file is located in the default path (I also tried putting it in my project's path) and contains:
compute-0-1
compute-0-4
What am I doing wrong?
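A common cause of this message is that the executable is not visible to the compute nodes under the path mpirun uses. As a minimal sketch (the shared path and flags here are assumptions, not taken from the question), building into a directory shared with the nodes and launching with an absolute path would look like:
mpicc my_code.c -o /share/apps/my_code           # build into an NFS-shared location visible to the nodes
mpirun -np 2 -machinefile machine_file /share/apps/my_code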

User in Unix not able to run hadoop command

I installed Hadoop, created a user named hduser, and changed the owner of the hadoop folder to hduser.
After installing Hadoop I tried to execute the hadoop command to check whether it was installed, but it gives "hadoop: command not found".
Then I gave hduser execute privileges on all the files inside the hadoop folder, including the bin folder,
but the output is still the same.
When I try the same hadoop command as the root user, it works fine.
I think it is a Unix permissions issue. Please help me give my user the ability to execute the hadoop command.
One more thing: if I switch to root, the hadoop commands work fine.
It is not a problem of privileges. You can still execute hadoop, if you type /usr/local/hadoop/bin/hadoop, right?
The problem is that $PATH is user-specific.
You have to add $HADOOP_HOME/bin to the $PATH as hduser, not as root. Log in as hduser first (or just type su hduser) and then export PATH=$PATH:$HADOOP_HOME/bin, as @iamkristian suggests, where $HADOOP_HOME is the directory in which you have placed hadoop (usually /usr/local/hadoop).
It sounds like hadoop isn't in your path. You can test that with:
which hadoop
If that gives you "command not found", then you probably just need to add it to your path. Depending on where you installed hadoop, you need to add this to your ~/.bashrc:
export PATH=$PATH:/usr/local/hadoop/bin/
And then reopen your terminal
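Putting both answers together, a minimal sketch (assuming hadoop lives in /usr/local/hadoop) is:
su - hduser                                                # do this as hduser, not root
echo 'export HADOOP_HOME=/usr/local/hadoop' >> ~/.bashrc
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> ~/.bashrc
source ~/.bashrc                                           # or reopen the terminal
which hadoop                                               # should now print /usr/local/hadoop/bin/hadoop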

Running hadoop job without creating a jar file

I have written a simple Hadoop job. Now I want to run it without creating a jar file, unlike what the many tutorials found on the net describe.
I am calling it from a shell script on an Ubuntu machine which runs the Cloudera CDH4 distribution of Hadoop (2.0.0+91).
I can't create a jar file for the job because it depends on several other third-party jars and configuration files which are already centrally deployed on my machine and are not accessible at the time of creating the jar. Hence I am looking for a way to include these custom jar files and configuration files.
I also can't use the -libjars and DistributedCache options because they only affect the map/reduce phase, but my driver class also uses these jars and configuration files. My job uses several in-house utility classes which internally use these third-party libraries and configuration files, to which I only have read access from a centrally deployed location.
Here is how I am calling it from a shell script:
sudo -u hdfs hadoop x.y.z.MyJob /input /output
It shows me a ClassNotFoundException:
Caused by: java.lang.ClassNotFoundException: x.y.z.MyJob
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
My calling shell script successfully sets the hadoop classpath, and it includes all my required third-party libraries and configuration files from a centrally deployed location.
I am sure that my class x.y.z.MyJob and all required libraries and configuration files are present in both the $CLASSPATH and $HADOOP_CLASSPATH environment variables, which I am setting before calling the hadoop job.
Why is my program not able to find the class when running the script?
Can't I run the job as a normal Java class? All my other normal Java programs use the same classpath and they can always find the classes and configuration files without any problem.
Please let me know how I can access centrally deployed Hadoop job code and execute it.
EDIT: Here is my code to set the classpath
CLASSES_DIR=$BASE_DIR/classes/current
BIN_DIR=$BASE_DIR/bin/current
LIB_DIR=$BASE_DIR/lib/current
CONFIG_DIR=$BASE_DIR/config/current
DATA_DIR=$BASE_DIR/data/current
CLASSPATH=./
CLASSPATH=$CLASSPATH:$CLASSES_DIR
CLASSPATH=$CLASSPATH:$BIN_DIR
CLASSPATH=$CLASSPATH:$CONFIG_DIR
CLASSPATH=$CLASSPATH:$DATA_DIR
LIBPATH=`$BIN_DIR/lib.sh $LIB_DIR`
CLASSPATH=$CLASSPATH:$LIBPATH
export HADOOP_CLASSPATH=$CLASSPATH
lib.sh is a script that concatenates all the third-party jar paths into a ':'-separated string, and CLASSES_DIR contains my job code, i.e. the x.y.z.MyJob class.
All my configuration files are under CONFIG_DIR.
When I print CLASSPATH and HADOOP_CLASSPATH they show the correct values. However, whenever I call hadoop classpath just before executing the job, it shows me the following output:
$ hadoop classpath
/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:myname:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/./:/usr/lib/hadoop-0.20-mapreduce/lib/*:/usr/lib/hadoop-0.20-mapreduce/.//*
$
It obviously does not have any of the previously set $CLASSPATH and $HADOOP_CLASSPATH values appended. Where did these environment variables go?
Inside my shell script I was running the hadoop jar command as Cloudera's hdfs user:
sudo -u hdfs hadoop jar x.y.z.MyJob /input /output
The script itself was being run by a regular Ubuntu user, and it was that user who set the CLASSPATH and HADOOP_CLASSPATH variables as shown above. At execution time, however, the hadoop jar command was not run as that same regular Ubuntu user: sudo -u hdfs starts a fresh environment for the hdfs user, so those exported variables were not visible to it. Hence the exception indicating that the class was not found.
So you have to run the job as the same user who actually sets the CLASSPATH and HADOOP_CLASSPATH environment variables.
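As a sketch of the two options (whether an environment variable survives sudo depends on your sudoers policy, so treat the second form as an assumption to verify):
# Option 1: run the job as the same user that exported the variables
export HADOOP_CLASSPATH=$CLASSPATH
hadoop x.y.z.MyJob /input /output
# Option 2: if it must run as hdfs, pass the variable explicitly through sudo
sudo -u hdfs HADOOP_CLASSPATH="$HADOOP_CLASSPATH" hadoop x.y.z.MyJob /input /output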
Thanks all for your time.
