Passing environment variables through to a config file packaged in the app's jar

I am currently trying out Docker links between my app and db containers. I've checked my app container, and the environment variables are set automatically when I link the containers together.
What I want is for my config file, which is packaged into a jar file, to receive those environment variables and set the required values from them. Any advice or help?
This is how I create the config file in my jar file to connect to the MySQL database:
database {
  url="jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}/mydb"
  driver="com.mysql.jdbc.Driver"
}

Updating the config file inside the jar would be overkill.
I think you have several choices:
read the environment variables directly in your program and either use them directly or generate the config file from them there
create a launch script (the details depend on the guest OS in your Docker image; sh/bash for Linux, etc.)
that script can generate a new config file from the environment and put it on the classpath before the jar, so your program sees it
EDIT: added example
You can save this kind of launcher script in the Docker image; it dynamically creates the configuration before launching the actual program.
#!/bin/bash
# some default values for testing even without links to other container
MYSQL_PORT_3306_TCP_ADDR=${MYSQL_PORT_3306_TCP_ADDR:-127.0.0.1}
MYSQL_PORT_3306_TCP_PORT=${MYSQL_PORT_3306_TCP_PORT:-3306}
cat << EOF > /opt/yourprogram/dbconfig.conf
database {
  url="jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}/mydb"
  driver="com.mysql.jdbc.Driver"
}
EOF
scala -classpath /opt/yourprogram YourProgram
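Alternatively, for the first option (reading the variables in the program itself): if the config format shown above is Typesafe Config / HOCON, which its syntax suggests but which is an assumption on my part, the library can substitute environment variables on its own, so the file inside the jar never needs to be rewritten. A minimal sketch:
database {
  # substitutions must sit outside the quoted strings; ${VAR} falls back to the
  # environment variable VAR when no key of that name is defined in the config
  url = "jdbc:mysql://"${MYSQL_PORT_3306_TCP_ADDR}":"${MYSQL_PORT_3306_TCP_PORT}"/mydb"
  driver = "com.mysql.jdbc.Driver"
}
With this, the program loads its configuration as usual and the linked container's variables are picked up at startup.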

What I did was write an sh file in my directory /tmp/restcore-1.0-SNAPSHOT/bin like this (the inner quotes need escaping so the generated file is valid):
#!/bin/bash
echo "database { url=\"jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:${MYSQL_PORT_3306_TCP_PORT}/mydb\" driver=\"com.mysql.jdbc.Driver\" }" > myconf.conf
jar uf /tmp/restcore-SNAPSHOT/lib/com.organization.restcore-1.0-SNAPSHOT.jar /tmp/restcore-1.0-SNAPSHOT/bin/myconf.conf
After building the Dockerfile and running the sh file in CMD, I used cat myconf.conf to check the config file and could see the environment variables filled in.

Related

Laravel Homestead is set up but does not sync hidden files

Hi, I've set up Homestead correctly, but when I SSH into my Vagrant instance I can see all my files, just not the hidden ones (.env, .git, .gitignore are missing). I'm trying to run webpack-dev-server within my instance and it needs my .env file to run. Is it normal that hidden files are not synced?
My .yaml file:
Hidden files don't show up with a plain ls command (which I guess is what you're doing). They are still there, however. Try something like nano .env and you'll see that you can edit your env from the console :)
Another command you could use is ls -a, which shows all files regardless of whether they're hidden or not.

Creating temp folder in Linux

I was using R installed on a Linux server over SSH. Everything was fine, but now I have been denied access to the temp folder, and when I load R it gives the error cannot create 'R_TempDir' because it can't create the temp directory.
Can you please tell me how to create my own local temp folder so that R can create its temporary directory there?
You can try to set one of these environment variables: TMPDIR, TMP, TEMP. From the R documentation:
Consulted (in that order) when setting the temporary directory for the session: see tempdir. TMPDIR is also used by some of the utilities: see the help for build.
You can do this, for instance, with:
export TMPDIR=/tmp
Hope this helps.
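If /tmp itself is not writable for you, a minimal sketch (assuming your home directory is writable) would be:
# create a private temp directory and point R at it
mkdir -p "$HOME/tmp"
export TMPDIR="$HOME/tmp"
R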
From what I understand, you could use the .bashrc file in your /home/username/ directory:
~# nano /home/username/.bashrc
You can put the command to create the folder inside this .bashrc file by adding the line mkdir /your/dir/path/yourDir
This file works like an autorun file that is executed every time you log in to your Linux server.
But this is a per-user setting.

SparkR: how to access files passed with --files in yarn-cluster mode

I am submitting a SparkR job to run on a YARN cluster in cluster mode with the ./bin/spark-submit script. I need to upload a file (an external dataset) with the --files option. This uploads the file to a temporary HDFS directory, but I need the path where the file lands so I can reference it directly in my SparkR code.
For Java and PySpark, files distributed using --files can be accessed via the SparkFiles.get(filename) method, which returns the absolute path of filename. Is there an equivalent in SparkR?
I know I can work around the problem in different ways:
put the files manually into HDFS
deploy the files on the worker nodes
But I want to use this option for convenience.
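For reference, this is the kind of submit command being described; the dataset and script names here are hypothetical:
./bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --files /local/path/dataset.csv \
  my_analysis.R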

How to save Robot framework test run logs in some folder with timestamp?

I am using Robot Framework to run 50 test cases. Every time, it creates the following three files as expected:
c:\users\<user>\appdata\local\output.xml
c:\users\<user>\appdata\local\log.html
c:\users\<user>\appdata\local\report.html
But when I run the same robot file again, these files are removed and new log files are created.
I want to keep all previous run logs for future reference. Log files should be saved in a folder with a timestamp value in its name.
NOTE: I am running the robot file from the command prompt (pybot test.robot), NOT from RIDE.
Could anyone guide me on this?
Using the built-in features of robot
The robot framework user guide has a section titled Timestamping output files which describes how to do this.
From the documentation:
All output files listed in this section can be automatically timestamped with the option --timestampoutputs (-T). When this option is used, a timestamp in the format YYYYMMDD-hhmmss is placed between the extension and the base name of each file. The example below would, for example, create such output files as output-20080604-163225.xml and mylog-20080604-163225.html:
robot --timestampoutputs --log mylog.html --report NONE tests.robot
To specify a folder, this too is documented in the user guide, in the section Output Directory, under Different Output Files:
...The default output directory is the directory where the execution is started from, but it can be altered with the --outputdir (-d) option. The path set with this option is, again, relative to the execution directory, but can naturally be given also as an absolute path...
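Putting the two options together, a command along these lines (names are illustrative) writes timestamped output files into a dedicated folder:
robot --outputdir results --timestampoutputs --log mylog.html --report myreport.html tests.robot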
Using a helper script
You can write a script (in python, bash, powershell, etc) that performs two duties:
launches pybot with all the options you want
renames the output files
You then just use this helper script instead of calling pybot directly.
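A minimal bash sketch of such a helper (the script name and paths are illustrative) that keeps the default output.xml/log.html/report.html names but puts each run in its own timestamped directory:
#!/bin/bash
# per-run directory such as results/20080604-163225, matching robot's timestamp format
RUN_DIR="results/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$RUN_DIR"
# default file names are kept; only the output directory changes per run
pybot --outputdir "$RUN_DIR" test.robot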
I'm having trouble working out how to create a timestamped directory at the end of the execution. This is my script; it timestamps the files, but I don't really want that, just the default file names inside a timestamped directory after each execution:
CALL "C:\Python27\Scripts\robot.bat" --variable BROWSER:IE --outputdir C:\robot\ --timestampoutputs --name "Robot Execution" Tests\test1.robot
You can create the output directory itself from a timestamp, as I explain in the RIDE FAQ.
In your case this would be:
-d ./%date:~-4,4%%date:~-10,2%%date:~-7,2%
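Plugged into the command above, and assuming a locale where %date% ends in MM/DD/YYYY, that would look like this (with --timestampoutputs dropped so the default file names are kept):
CALL "C:\Python27\Scripts\robot.bat" --variable BROWSER:IE --outputdir C:\robot\%date:~-4,4%%date:~-10,2%%date:~-7,2% --name "Robot Execution" Tests\test1.robot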
You can also change the default output folder of Robot Framework (for example when running from the PyCharm IDE) by updating the value for the 'OutputDir' key in the settings.py file found under:
..ProjectDirectory\venv\Lib\site-packages\robot\conf\settings.py
In class _BaseSettings(object), update the 'OutputDir' entry in the _cli_opts dictionary so the path is built from a timestamp, e.g. str(os.getcwd()) + "//Results//Report_" + datetime.datetime.now().strftime("%d%b%Y_%H%M%S"):
# settings.py also needs "import os" and "import datetime" at the top for this to work
_cli_opts = {
    # Update the abspath('.') to the required folder path.
    # 'OutputDir' : ('outputdir', abspath('.')),
    'OutputDir' : ('outputdir', str(os.getcwd()) + "//Results//Report_" + datetime.datetime.now().strftime("%d%b%Y_%H%M%S") + "//"),
    'Report'    : ('report', 'report.html'),
    # ... remaining options unchanged

Running jar file map reduce without Hdfs

I have bundled a jar from my Eclipse project. I would like to pass arguments to the jar, basically an input file. I would like to know how to give it an input file that is not in HDFS. I know that's not how Hadoop works, but this is for testing purposes. Eclipse has a feature for using local files. Is there a way to do this via the command line?
You can run hadoop in 'local' mode by overriding the job tracker and file system properties from the command line:
hadoop jar <jar-file> <main-class> -fs local -jt local <other-args..>
You need to be using the GenericOptionsParser (which is the norm if you're using ToolRunner to launch your jobs).
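For example, with a hypothetical jar and main class, and with -fs local in effect so plain paths resolve against the local file system, a test run could look like:
hadoop jar myjob.jar com.example.WordCount -fs local -jt local /home/me/input.txt /tmp/wordcount-out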
