How to install and use mirthSync on macOS? - jar

Setup
I'm following the installation directions in the mirthSync readme, which say to clone the repo. The next indication of usage I can see is in the Examples section, where the CLI example "pull[s] Mirth Connect code from a Mirth Connect instance":
java -jar mirthsync.jar -s https://localhost:8443/api -u admin -p admin pull -t /home/user/
I'm assuming that after cloning the repo, one should cd into that directory and then run the java -jar... command with all the appropriate flag values (server, username, password, etc.).
Error
After running the CLI command, I get this error:
Error: Unable to access jarfile mirthsync.jar
Question
Where is this mirthsync.jar file supposed to come from? Is there something I need to do in order to generate the mirthsync.jar file?

The jar is not committed to the repo. Generate it via lein uberjar (which creates target/uberjar/*-standalone.jar), or download a prebuilt jar from a release.
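A minimal build-and-run sequence might look like this (a sketch, assuming Leiningen is installed; the exact version number in the jar filename depends on the release you build):
cd mirthsync        # the repo cloned per the readme
lein uberjar        # writes target/uberjar/mirthsync-<version>-standalone.jar
java -jar target/uberjar/mirthsync-*-standalone.jar -s https://localhost:8443/api -u admin -p admin pull -t /home/user/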

Related

Unable to install spark 2.2 in Cloudera Quickstart VM (5.10)

I have followed the blog mentioned below, downloaded the parcel, and placed it as required:
(https://www.cloudera.com/documentation/spark2/latest/topics/spark2_installing.html)
/opt/cloudera/csd/SPARK2-2.1.0.cloudera2-1.cdh5.7.0.p0.171658-el5.parcel
But service cloudera-scm-server restart is not executing. I only get this message:
To use Cloudera Express (free), run:
sudo /home/cloudera/cloudera-manager --express
This requires at least 8 GB of RAM and at least 2 virtual CPUs.
Please let me know if anyone has installed this, and the steps.
SPARK 2.2 Installation Setup on Cloudera VM
Step 1: Download a quickstart_vm from the link:
Prefer the VMware platform as it is easy to use; in any case, all the options are viable.
The entire tar file is around 5.4 GB in size. You need to provide a business email address, as personal email addresses are not accepted.
Step 2: The virtual environment requires around 8 GB of RAM; please allocate sufficient memory to avoid performance glitches.
Step 3: Please open the terminal and switch to root user as:
su root
password: cloudera
Step 4: Cloudera provides Java version 1.7.0_67, which is old and does not match our needs. To avoid Java-related exceptions, please install Java with the following commands:
(a). Downloading Java:
wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
(b). Switch to the /usr/java/ directory with the “cd /usr/java/” command.
(c). cp the downloaded Java tar file to the /usr/java/ directory.
(d). Untar the archive with “tar -zxvf jdk-8u131-linux-x64.tar.gz”
(e). Open the profile file with the command “vi ~/.bash_profile”
(f). export JAVA_HOME to the new java directory.
“export JAVA_HOME=/usr/java/jdk1.8.0_131”
Save and Exit.
(g). In order for the above change to take effect, the following command needs to be executed in the shell:
source ~/.bash_profile
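Collected in one place, steps (a) through (g) amount to the following (a sketch; it assumes the JDK archive was downloaded to your home directory and unpacks to jdk1.8.0_131):
cd /usr/java/
cp ~/jdk-8u131-linux-x64.tar.gz .                            # copy the downloaded archive here
tar -zxvf jdk-8u131-linux-x64.tar.gz
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_131' >> ~/.bash_profile
source ~/.bash_profile                                       # reload the profile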
Step 5: The Cloudera VM provides Spark 1.6 by default. However, the 1.6 APIs are old and do not match production environments. Therefore, we need to download and manually install Spark 2.2.
(a). Switch to the /opt/ directory with the command:
“cd /opt/”
(b). Download spark with the command:
wget https://d3kbcqa49mib13.cloudfront.net/spark-2.2.0-bin-hadoop2.7.tgz
(c). Untar the spark tar with the following command:
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz
(d). We need to define some environment variables as default settings:
Please open a file with the following command:
vi /opt/spark-2.2.0-bin-hadoop2.7/conf/spark-env.sh
Paste the following configurations in the file:
SPARK_MASTER_IP=192.168.50.1
SPARK_EXECUTOR_MEMORY=512m
SPARK_DRIVER_MEMORY=512m
SPARK_WORKER_MEMORY=512m
SPARK_DAEMON_MEMORY=512m
Save and exit
(e). We need to start spark with the following command:
/opt/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh
Export SPARK_HOME:
export SPARK_HOME=/opt/spark-2.2.0-bin-hadoop2.7/
(f). Change the permissions of the directory:
chmod 777 -R /tmp/hive
(g). Try “spark-shell”; it should work.
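As a quick sanity check (assuming the paths above), launching the shell by its full path should report the new version, something like:
/opt/spark-2.2.0-bin-hadoop2.7/bin/spark-shell
scala> spark.version
res0: String = 2.2.0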
Please follow the video below; it has all the necessary steps required to install Spark2 in the Cloudera VM.
YouTube link - https://www.youtube.com/watch?v=lQxlO3coMxM
Also, for starting Cloudera Express (free), your VM should have at least 8 GB of RAM allocated; if you have the default 4 GB of RAM allocated, then you can forcefully start it using the command below and then follow the above video.
sudo /home/cloudera/cloudera-manager --force --express
Try this command
sudo /home/cloudera/cloudera-manager --express --force
I gave up on this; nothing works well with either the parcel or the non-parcel installation.
As soon as Cloudera Express is started there are numerous errors, and Java 7 instead of Java 8.
I got a MapR VM install with Spark 2.x instead. No issues; it worked the first time.
That works well. This is my advice #1.
If you want Kudu, then I would install CentOS and set things up yourself. This is advice #2. OK, you may miss Impala, but for pure research and development that is not so much of an issue.
With the following two commands my Spark 2.2 was automatically updated to Spark 2.4:
(i) sudo yum update
It might be that your Java home path is broken; in that case, export the Java home path in your bash profile:
(a) vi ~/.bash_profile
(b) export JAVA_HOME=/usr/java/jdk1.8.0_131 (adjust to your JDK directory)
(c) source ~/.bash_profile
Just download the right version of Spark that you need, say 'spark-2.2.0-bin-hadoop2.6'.
Open ~/.bash_profile in the vi editor (vi ~/.bash_profile) and paste the 2 lines below:
SPARK_HOME=/home/cloudera/Downloads/spark-2.2.0-bin-hadoop2.6
PATH=$PATH:$HOME/bin:$SPARK_HOME/bin
Save it.
Then run the command: source ~/.bash_profile
Now start spark-shell.
Note: Make sure you have JDK 1.8 installed.
Same answer as swapnil shashank above, with these small modifications: add SPARK_LOCAL_IP to the spark-env.sh configuration in step 5(d), and untar with
SPARK_LOCAL_IP=127.0.0.1
tar -xvzf spark-2.2.0-bin-hadoop2.7.tgz

Run a jar file on a server in the background after closing the PuTTY session

I have tried running a Spring Boot jar file using PuTTY, but the problem was that after I closed the PuTTY session, the service stopped.
Then I started the jar file with the following command, and it works fine:
nohup java -jar /web/server.jar &
You should avoid using nohup, as it merely disassociates your terminal from the process. Instead, use the following command to run your process as a service.
sudo ln -s /path/to/your-spring-boot-app.jar /etc/init.d/your-spring-boot-app
This command creates a symbolic link to your JAR file, which you can then run as a service using the command sudo service your-spring-boot-app start. This writes the console log to /var/log/your-spring-boot-app.log.
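Once the symlink is in place, the usual service commands apply (a sketch; substitute your actual app name, and note that Spring Boot's init.d support requires the jar to be built as a fully executable jar):
sudo service your-spring-boot-app start     # start the service
sudo service your-spring-boot-app status    # check that it is running
tail -f /var/log/your-spring-boot-app.log   # follow the console log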
Moreover, you can configure spring-boot/application.properties to write console logs to your specified location using logging.path=path-to-your-log-directory or logging.file=path-to-your-log-file.txt. Also, it may be worth noting that logging.file takes priority over logging.path.
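For example, in application.properties (a sketch; the paths are placeholders):
# write console output to a directory (Spring Boot creates spring.log inside it)
logging.path=/var/log/my-app
# or write to a specific file; logging.file wins if both are set
logging.file=/var/log/my-app/app.log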

How do I enable python35 from Software Collections at login?

I followed the Software Collections Quick Start and I now have Python 3.5 installed. How can I make it always enabled in my ~/.bashrc, so that I do not have to enable it manually with scl enable rh-python35 bash?
Use the scl_source feature.
Create a new file in /etc/profile.d/ to enable your collection automatically on start up:
$ cat /etc/profile.d/enablepython35.sh
#!/bin/bash
source scl_source enable rh-python35
See How can I make a Red Hat Software Collection persist after a reboot/logout? for background and details.
This answer would be helpful to those who have limited access on the server.
I had a similar problem with python3.5 on HostGator's shared hosting: python3.5 had to be enabled every single time after login. Here are my 10 steps for the resolution:
Enable Python through the scl script python_enable_3.5 or scl enable rh-python35 bash.
Verify that it's enabled by executing python3.5 --version. This should give you your python version.
Execute which python3.5 to get its path. In my case, it was /opt/rh/rh-python35/root/usr/bin/python3.5. You can use this path to get the version again (just to verify that this path is working for you.)
Awesome. Now exit out of the current scl shell.
Now, let's get the version again using the complete path: /opt/rh/rh-python35/root/usr/bin/python3.5 --version.
It won't give you the version but an error. In my case, it was:
/opt/rh/rh-python35/root/usr/bin/python3.5: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
As mentioned in Tamas' answer, we have to find that .so file. locate doesn't work in shared hosting, and you can't install it either.
Use the following command to find where that file is located:
find /opt/rh/rh-python35 -name "libpython3.5m.so.rh-python35-1.0"
The above command prints the complete path (the second line below) of the file once located. In my case, the output was:
find: `/opt/rh/rh-python35/root/root': Permission denied
/opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0
Here is the complete command for python3.5 to work in such shared hosting, which gives the version:
LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5 --version
Finally, for shorthand, append the following alias to your ~/.bashrc:
alias python351='LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5'
For verification, reload .bashrc with source ~/.bashrc and execute python351 --version.
Well, there you go: now whenever you log in again, python351 is there to welcome you.
This is not limited to python3.5; it can be helpful for other scl-installed software as well.

libsqlplus.so: cannot open shared object file: No such file or directory even though PATH contains the path

I downloaded Instant Oracle Client version 11.2.0.4.0 (basic, sqlplus, and devel .rpm files) from the Oracle website on Ubuntu.
After converting the .rpm files into .deb using alien, I installed them: basic first, then sqlplus, and devel last.
Then I tried to run sqlplus, but it says:
sqlplus64: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory
even though my PATH contains the path.
The output below shows my PATH and the location of libsqlplus.so:
A@ubuntu:~$ sudo find / -name libsqlplus.so
/usr/lib/oracle/11.2/client64/lib/libsqlplus.so
A@ubuntu:~$ echo $PATH
/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/sangmin/eclipse:/usr/lib/oracle/11.2/client64/lib:/usr/lib/oracle/11.2/client64
Test your Oracle client. Use either sqlplus or sqlplus64, depending on your platform. In my case, I used:
$ sqlplus64 username/password@//dbhost:1521/SID
If you get the following message, then you need to instruct sqlplus to use the proper library:
sqlplus64: error while loading shared libraries: libsqlplus.so: cannot open shared object file: No such file or directory.
To do so, first find the location of Oracle libraries. The path should be something like /usr/lib/oracle/<version>/client(64)/lib/. In my case (Ubuntu 14.04 LTS, Intel on 64-bit), it was /usr/lib/oracle/11.2/client64/lib/.
Now, add this path to the system library list. Create and edit a new file:
$ sudo nano /etc/ld.so.conf.d/oracle.conf
Add inside the path:
/usr/lib/oracle/11.2/client64/lib/
Now run the dynamic linker run-time bindings utility:
$ sudo ldconfig
If sqlplus complains about a missing libaio.so.1 file, run:
$ sudo apt-get install libaio1
For other errors when trying to run sqlplus, please consult the Ubuntu help page.
It might be worth checking for a permissions issue:
sqlplus: error while loading shared libraries
PERMISSIONS:
I want to stress the importance of permissions for sqlplus.
For any "Other" UNIX user (other than the Owner/Group) to be able to run sqlplus and access an Oracle database, read/execute (rx) permissions are required on these 4 directories, as sketched below:
$ORACLE_HOME/bin, $ORACLE_HOME/lib, $ORACLE_HOME/oracore, $ORACLE_HOME/sqlplus
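For example, a sketch of granting those permissions (assumes ORACLE_HOME is set and you have the rights to change the modes):
for d in bin lib oracore sqlplus; do
    chmod -R o+rx "$ORACLE_HOME/$d"    # give "Other" users read/execute
done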
Environment: set these variables properly:
A. ORACLE_HOME
(example: ORACLE_HOME=/u01/app/oranpgm/product/12.1.0/PRMNRDEV/)
B. LD_LIBRARY_PATH
(example: LD_LIBRARY_PATH=/u01/app/oranpgm/product/12.1.0/PRMNRDEV/lib)
C. ORACLE_SID
D. PATH
export PATH="$ORACLE_HOME/bin:$PATH"
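Put together, a profile snippet might look like this (a sketch reusing the example paths above; the SID value is a placeholder to replace with your own):
export ORACLE_HOME=/u01/app/oranpgm/product/12.1.0/PRMNRDEV
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=YOURSID            # placeholder SID
export PATH="$ORACLE_HOME/bin:$PATH"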

Varying Vagrant Vagrants switching wordpress-trunk to git

I am trying to convert Varying Vagrant Vagrants' wordpress-trunk (or development) site to be provisioned via git instead of svn.
There seems to be a script (I presume it is a script even though it has no file extension) as part of the VVV project that will make the switch after the machine has been provisioned:
https://github.com/Varying-Vagrant-Vagrants/VVV/blob/master/config/homebin/develop_git
And the author told me that running the following from command line should do it:
vagrant ssh -c "develop_git"
but when I run that I get the following error:
Unknown cipher type 'develop_git'
There appears to be some code in the provision script that mentions git, but I have no idea what I am looking at.
So, does anyone know how to run/implement that script? Or otherwise convert the www/wordpress-trunk folder to git? Are there options somewhere to direct VVV to provision the trunk folder from git in the first place?
Contrary to the Vagrant documentation for vagrant ssh, the -c option is passed on to the ssh command and is therefore interpreted as the cipher specification.
I would suggest trying vagrant ssh -- "develop_git", since everything after "-- (two hyphens) [is] passed directly into the ssh executable".
