I would like to build a container that shows its build datetime in its startup log. Is there a way to have that information injected from my build machine into the container?
The output of each RUN step during a build is the changes to the filesystem. So you can output the date to a file in your image. And the logs from the container are just the stdout from commands you run. So you can cat out the date inside your entrypoint.
In code, you'd have at the end of your Dockerfile:
RUN date >/build-date.txt
And inside an entrypoint script:
#!/bin/sh
#.... Initialization steps
echo "Image built: $(cat /build-date.txt)"
#.... More initialization steps
# run the command
exec "$#"
You could use an ARG to pass in the current build timestamp at build time. You'd have to docker build --build-arg build_date="$(date)" or something like that (the quoting matters, since the output of date contains spaces). Having done that, you can refer to the argument using something similar to shell variable syntax at build time.
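A minimal sketch of that pattern, assuming an argument named build_date (the name is arbitrary); the RUN date line from the first answer would then become:

ARG build_date=unknown
# persist the passed-in value so the entrypoint can read it at startup
RUN echo "$build_date" > /build-date.txt

built with:

docker build --build-arg build_date="$(date)" -t myimage .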
This is probably more useful if you have a significant build step in your Dockerfile (say, you're compiling a Go application) or if the metadata you're trying to embed is harder to generate programmatically (the source-control version stamp, the name of the person running the build command).
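For example, assuming your source lives in git (vcs_ref is just an illustrative name), stamping the commit in would look something like:

docker build --build-arg vcs_ref="$(git rev-parse --short HEAD)" -t myimage .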
Related
I'm trying to give special permissions to a file located inside a container but I'm getting a "No such file or directory" error message.
The Dockerfile basically runs an R script that generates an output.pptx file inside an output folder created in the container.
I want to send that output to an S3 bucket, but for some reason it isn't finding the file inside the container.
# Make the output directory
RUN mkdir output
# Process main file
CMD ["RScript", "Script.R"]
# install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install -i /usr/local/bin/aws -b /usr/local/bin
# run AWS CLI to push output file to s3 folder
RUN chmod 775 ./output/*.pptx
RUN aws s3 cp ./output/*.pptx s3://bucket
Could this be related to the path I'm using for the file?
I get the idea that there is a misunderstanding of how the image should be used. That is, a Dockerfile creates an image, and the CMD is not actually run when building the image.
Up front:
an image is really just a tarball of filesystem layers; the multiple "layers" reflect the steps of the build process (and can be squashed); an image has no "running" component, and no processes are active in an image; and
a container is an image in a running state. It might be running the CMD you specify, or something else (e.g., docker run -it --rm myimage /bin/bash to run a bash shell with the image as the filesystem/environment). When the running command finishes and exits, the container is stopped.
Typically, you create an image "once" (security updates and such notwithstanding), and then run it as needed (either manually or via some trigger, e.g., cron or CI triggers). That might look like
docker run --rm myimage # using the default `CMD`
docker run --rm myimage R Script.R # specifying the command manually
with a couple of assumptions:
the image has R installed and within the PATH ... though you could specify the full /path/to/bin/R instead; and
the default working directory (dockerfile's WORKDIR directive) contains Script.R ... or you can specify the full internal path to Script.R
With this workflow, I suggest a few changes:
from the Dockerfile, remove the lines after # run AWS CLI;
add to Script.R steps to copy the file to your S3 instance, either using the awscli you installed in the image, or by using the aws.s3 R package (which might preclude the need to install the awscli);
I don't use AWS S3, but I suspect that you need credentials to be able to access the bucket. There are many ways of dealing with images and "secrets" like S3 credentials; the most naïve approaches hard-code the credentials into the container, which is a security risk, while others involve "docker secrets" or environment variables. For instance,
docker run -it --rm -e "S3TOKEN=asldkfjlaskdf" myimage
though even that might be intercepted by neighboring users on the docker host.
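One slightly safer variant is to forward the standard AWS CLI credential variables from the host environment at run time, so the values never appear on the command line (the variable names below are the ones the AWS CLI itself reads; myimage is a placeholder):

# values are taken from the host's environment, not spelled out here
docker run --rm \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_DEFAULT_REGION \
  myimage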
I am following an example that publishes an ASP.NET application to a Docker container on a remote Linux server.
There is a small section about running a publish script.
Let me quote it:
Also be sure to explore the PublishProfiles folder that gets created
in your Visual Studio project under "Properties." A PowerShell script
and a Shell script get created in that folder that you can use to
publish your app from the command line. For example:
.\hanseldocker-Docker-publish.ps1 -packOutput $env:USERPROFILE\AppData\Local\Temp\PublishTemp -pubxmlFile .\hanseldocker-Docker.pubxml
I am not sure what location .\ refers to. I am actually using Bamboo to build it, so I have to place the script into the task body.
Do I need to modify the script because of the .\ prefix?
I found a solution: launch it using the regular old command line, and pass any PowerShell script as a parameter.
C:\Windows\System32\cmd.exe /c "echo Hesus | C:\WINDOWS\SysWOW64\WindowsPowerShell\v1.0\powershell.exe %*"
Put this in the Argument field:
scripts\RemoteWSPDeployment.ps1 -parameter1 value1 -parameter2 value2
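If I read this right, once Bamboo substitutes the Argument field for %*, the effective command is roughly (the paths and parameter names are just the ones from this example):

C:\Windows\System32\cmd.exe /c "echo Hesus | C:\WINDOWS\SysWOW64\WindowsPowerShell\v1.0\powershell.exe scripts\RemoteWSPDeployment.ps1 -parameter1 value1 -parameter2 value2"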
I want to use the Intel compiler for Qt, but using the Intel compiler implies running the script
$ source /opt/intel/bin/compilervars.sh intel64
Of course, I could add this to ~/.bashrc, but that would not apply inside Qt Creator, which still complains about a missing icpc. So I want it to be a part of the main mkspec qmake file.
How can I execute that full bash command in qmake?
Short Answer: Using QMAKE_EXTRA_TARGETS and PRE_TARGETDEPS, you can execute source /opt/intel/bin/compilervars.sh intel64, but simply sourcing it will not solve your issue.
Long Answer: The QMake file is converted into a Makefile. Make then executes the Makefile. The problem you will run into is that Make executes each command in its own shell. Thus, simply sourcing the script will only affect one command, the command that executes the script.
There are a couple of possible ways to make things work:
Execute the script before starting Qt Creator. I've actually done this for some projects where I needed a special environment set up. To make my life easier, I created a shell command that sets up the environment and then launches Qt Creator (see the sketch after this list).
Within Qt Creator, modify the Build Environment for the project. I've also used this trick. In your case, look at the environment set up by the script and change the "Build Environment" settings under the Projects tab for your project to match those set up by the script.
It might also be possible to modify QMake's compiler commands, but I am not sure you can make it execute two commands instead of one (source the script, then execute the compiler). Furthermore, this would make the project very hard to move to other systems.
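For the first option, the launcher can be a tiny wrapper like this sketch (assuming Qt Creator is on the PATH as qtcreator):

#!/bin/bash
# set up the Intel compiler environment, then start Qt Creator with it
source /opt/intel/bin/compilervars.sh intel64
exec qtcreator "$@"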
You can create a shell script that does more or less the following:
#!/usr/bin/env bash
# Remove the script's path from the PATH variable to avoid recursive calls
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
export PATH=${PATH/$SCRIPT_DIR:/}
# Set the environment up
source /opt/intel/bin/compilervars.sh intel64
# Call make with the given arguments
make "$#"
Save it into a file named "make" in an empty directory somewhere, make it executable, then change the build environment in Qt Creator to prepend PATH with the script's directory:
PATH=[dir-of-make-script]:${Env:PATH}
You can do this either in the project settings or once and for all in the kit settings.
This way, Qt Creator is fooled into calling your make script, which sets up the environment before calling the actual make binary. I use a similar technique under Windows with Linux cross-toolchains and it has been working well so far.
OS: Unix (Solaris), Oracle Application Server 10g
To run a shell script from Oracle Forms, I used host('/bin/bash /u01/compile.sh') and it works well.
Now I need to run a Unix command, something like
host('mv form1.fmx FORM1.FMX')
but it's not working.
I tried appending the command mv form1.fmx FORM1.FMX to the compile.sh shell script, but that doesn't work either, although the rest of the shell script runs fine.
The solution is simply to use the full path of the mv command, and it worked well, as follows:
/bin/mv /u01/oracle/runtime/test/form1.fmx /u01/oracle/runtime/test/FORM1.FMX
In case anyone else encounters the same problem, the cause is that the Forms process creates a subprocess to execute the host() command, and that subprocess inherits the environment variables of the parent process, which are derived from default.env (or another env file as defined in the server config). There is a PATH variable defined in that file, but it doesn't contain the usual /bin or /usr/bin, so commands will not execute unless the full path is specified.
The solution is to set the correct PATH variable either in the executed script (via export PATH=$PATH:...) or in default.env. I set it in the script, since, knowing Oracle, there's no guarantee that modifying default.env won't break something.
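A minimal sketch of that fix at the top of the executed script, assuming the usual /bin and /usr/bin directories are what's missing:

#!/bin/sh
# make standard commands like mv resolvable from host()
export PATH=$PATH:/bin:/usr/bin
mv /u01/oracle/runtime/test/form1.fmx /u01/oracle/runtime/test/FORM1.FMX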
I'm trying to call a script in Tcl with the command:
exec source <script path>
and I get the error
couldn't execute "source": no such file or directory
How can I call another script from tcl?
Edit: I am running a command I got from another person in my office. I was instructed to run it explicitly with source, as in "source <script path>". So, in other words, how would I run any command that would work in csh, from Tcl?
If the script you were given is a cshell script, you can exec it like this:
exec /bin/csh $path_to_script
In effect, this is what the source command does from within an interactive shell (not exactly, but close enough for this discussion). It's not clear whether this is really what you want to do or not.
The reason you can't exec the source command is that exec only works on executable files (hence the name "exec"). The source command isn't implemented as an executable file; it is a command built into the shell, so it can't be exec'd.
If you really feel the need to exec the source command or any other built-in command you can do something like this:
exec /bin/csh -c "source $path_to_script"
In the above example you are execing the C shell and asking it to run the command "source <script path>". For the specific case of the source command, this doesn't really make much sense.
However, I'm not sure any of this will really do what you expect. Usually, when someone says "here are some commands, just do 'source <script path>'", the script just defines some aliases and whatnot to be used from within an interactive shell. Those aliases won't work from within Tcl.
source in csh, like . in bash, executes a script without spawning a new process.
The effect is that any variable set in that script is available in the current csh session.
Actually, source is a built-in command of csh, and thus not available to Tcl's exec; and using exec without source would not give the specific effect of source.
There is no simple way to solve your problem.
source loads the script file and evaluates it in the current interpreter.
you should do:
source <script path>
If you want to execute it, then you need to call the main proc.
another option would be to do:
exec [info nameofexecutable] <script path>
Some confusion here. exec runs a separate program, possibly with arguments.
source is not a separate program; it is another Tcl command which reads a file of Tcl commands and executes them, but it does not pass arguments. If the other script you are trying to call is written to be run from the command line, it will expect to find its arguments as a list in the variable argv. You can fake this by setting argv to the list of arguments before running source, e.g.:
set argv {first_arg second_arg}
source script_path
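If the called script also reads argc or argv0 (tclsh normally sets all three), you can fake those as well; a small sketch:

set argv {first_arg second_arg}
set argc [llength $argv]   ;# number of arguments, as tclsh would set it
set argv0 script_path      ;# the script name, as tclsh would set it
source script_path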
Alternatively you could use exec to start a whole separate Tcl executable and pass it the script and arguments:
exec tclsh script_path first_arg second_arg
The error speaks for itself: make sure you give the correct path name (specify the full path if necessary), and make sure the file actually exists in that directory.
Recently I wanted to set some UNIX environment variables by sourcing a shell script and stumbled across the same problem. I found this simple solution that works perfectly for me:
Just use a little 3-line wrapper script that executes the source command in a UNIX shell before your Tcl script is started.
Example:
#!/bin/csh
source SetMyEnvironment.csh
tclsh MyScript.tcl
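Make the wrapper executable once, then run it instead of invoking tclsh directly (the file name is arbitrary):

chmod +x run_myscript.csh
./run_myscript.csh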