I cannot get bash scripts to run in the .profile.d folder. This seems like it should be very straightforward, but I'm not having any luck.
I have a .profile.d folder in the root of my application. In it I have one script which builds ffmpeg from source called:
ffmpeg.sh
This script executes fine when I run it in the container and builds and installs ffmpeg. I've tried chmod +x and adding a #!/bin/bash line to the top of the file.
I feel like I'm missing a basic understanding somewhere here, any tips?
OK, this was running correctly; it just didn't output its echo statements. I only found out by attaching to the running Docker container after the deploy had finished.
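If you run into the same thing, one way to confirm the script ran without attaching to the container afterwards is to have it append its output to a log file. A minimal sketch (the log path is just an example):

#!/bin/bash
# .profile.d/ffmpeg.sh -- append output to a file so the run can be verified later
LOG=/tmp/ffmpeg-build.log                    # example path; any writable location works
echo "ffmpeg.sh started: $(date)" >> "$LOG"
# ... build and install ffmpeg from source here, appending its output to "$LOG" ...
echo "ffmpeg.sh finished: $(date)" >> "$LOG"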
Hi, I've set up Homestead correctly, but when I SSH into my Vagrant instance I can see all my files except the hidden ones (.env, .git, and .gitignore are missing). I'm trying to run webpack-dev-server within my instance, and it needs my .env file to run. Is it normal that hidden files are not synced?
My .yaml file:
Hidden files don't show up when you use the ls command (which I guess is what you're doing). They are still there, however. You can try something like nano .env and you'll see that you can edit your env file from the console :)
Another command you could use is ls -a, which should show all files regardless of whether they're hidden or not.
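For example, from inside the synced project folder in the VM (the path and the listing are just illustrative):

$ cd ~/code/my-project
$ ls -a
.  ..  .env  .git  .gitignore  app  package.json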
OwinHost.exe seems to only want to run against an application that lives in a folder called "bin". If I try against another folder e.g. "bin2", OwinHost says that it can't find the startup attribute.
I've tried running from within the bin2 folder and from outside it, and I've tried the -d and -b arguments; nothing seems to work.
I'm new to Autosys and I'm using the WCC front end to run Autosys jobs. I also have access to the terminal, in case the answer only works from the terminal.
I was wondering how to change the working directory before running a job. At the moment, Autosys searches its root directory for the batch file, but I want it to point to C:/scripts. Is there any way to define this as the starting directory, or am I stuck leaving script files in Autosys's root directory?
Thank you for your help.
The full path to the script should go in the "command" attribute, e.g.:
insert_job: my_job job_type: c
command: C\:\Scripts\script1.bat
machine: appserver.domain.com
An alternative solution is to add C:\Scripts to the PATH variable on your app server.
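If you go the PATH route, one way to do it is from an elevated command prompt on the app server (note that setx writes back PATH as it is expanded at that moment, so double-check the result afterwards):

setx /M PATH "%PATH%;C:\Scripts"

After that, the job's command attribute can reference script1.bat without the full path.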
I've created an nginx server in a chroot at /srv/http with php-fpm. Both services run as the http user and it works fine. The problem comes when I try to run an exec command such as
echo shell_exec('/usr/bin/ls');
There is no output at all on the web page or in the error log. I've also tried
error_log(shell_exec('/usr/bin/ls'));
and still nothing.
Things I've Tried or Know:
safe mode off
exec enabled
user is http (using phpinfo())
display_errors = on
error_reporting = E_ALL
sudo /usr/bin/chroot --userspec=http:http /srv/http ls works fine
Can create a file and read from it using file_put_contents and fopen/fread
tried shell_exec, exec, system, and passthru; nothing worked
tried appending 2>&1 to the end of the command and nothing
I've copied all the executables and libraries necessary over
all libraries, binaries, and everything under /srv/http/www (where the webpages are) have executable and read permissions
doc_root is www
As far as I know, everything works in the chroot except shell commands through php-fpm. Anyone have any idea where I went wrong and how to fix it?
This may sound stupid, but you just need to copy /bin/sh (not /bin/bash!) into your chroot.
For example see this question: How do I change the shell for php's exec()
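A rough sketch of the copy itself, assuming the chroot lives at /srv/http as in the question and that sh is dynamically linked (the exact library paths come from the ldd output on your system):

$ mkdir -p /srv/http/bin
$ cp /bin/sh /srv/http/bin/
$ ldd /bin/sh                                                  # list the shared libraries sh needs
$ cp --parents $(ldd /bin/sh | grep -o '/[^ ]*') /srv/http/    # copy them, keeping their paths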
If you chroot into some directory, then that directory becomes the root for all your PHP scripts. That means that if you execute /usr/bin/ls from within PHP, it will try to execute /srv/http/usr/bin/ls instead.
You can copy the executable into that directory, but be aware of the security implications: if you copy critical system executables into the chrooted directory, you basically bypass the positive effects of the chroot.
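Concretely, for shell_exec('/usr/bin/ls') to work from inside the chroot, an ls binary (plus the libraries it needs) has to exist at the corresponding path under /srv/http; roughly:

$ mkdir -p /srv/http/usr/bin
$ cp /usr/bin/ls /srv/http/usr/bin/
$ ldd /usr/bin/ls                  # copy the libraries it lists under /srv/http as well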
I get no output for
echo shell_exec('/usr/bin/ls');
either. Presumably that's because ls isn't actually at /usr/bin/ls on my system (ls is an external binary, often /bin/ls, not a shell built-in), so the absolute path fails while a bare ls is found via PATH. Running:
echo shell_exec('ls');
outputs:
css demos favicon.ico images js path.php robots.txt routing.php test
which is the list of files in my root directory for the site.
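If you want to check where ls actually lives on your system (the output below is just an example), you can ask the shell:

$ command -v ls
/bin/ls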
I have a personal scripts folder: ~/google_drive/code/scripts. This folder is on my $path. $path is set in ~/.zshenv and not changed anywhere else (I've disabled the OS X path_helper and I don't touch $path in any other zsh startup file). In this scripts folder there is a subdirectory called alfred_workflows. From the command line, from any location, I am able to run scripts in that subdirectory with relative paths. This is expected:
$ alfred_workflows/test.sh
#=> test successful
But in a script, this does not work. It generates an error:
$ zsh -c "alfred_workflows/test.sh"
#=> zsh:1: no such file or directory: alfred_workflows/test.sh
Once again, the scripts directory that contains alfred_workflows is on $path, which is set in ~/.zshenv, and from a script I am able to run executables that live at the top level of that directory. The issue only seems to arise when I go through a subdirectory. What might be the problem?
Searching of $path is only done for names containing a slash if the path_dirs option is set. Apparently that option is set in your interactive shell, but it isn't set in the process that's executing the script.
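If that's the case, enabling the option for the non-interactive shell should make the relative call work, for example (a sketch; the option could also go in ~/.zshenv so that scripts pick it up):

$ zsh -c "setopt path_dirs; alfred_workflows/test.sh"
#=> test successful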