set `CC='clang'` with `export`, but `./configure` is still using `gcc` - r

Compiling R from the R-devel svn branch, I do
export CC='clang'
export CXX='clang++'
sudo ./configure
but the configuration script still tries to use gcc as the compiler. Why?

Because sudo runs the command with the root user's environment (and, by default, a sanitised one at that), while export only sets the variable in your own shell session.
To fix this, configure with sudo -E ./configure,
which preserves your user's environment variables when executing ./configure with elevated privileges. Also have a look at the sudo -H flag (see man sudo).
Or you can first sudo su into the root account and export CC='clang' from within that root shell.
(The root shell prompt will usually begin with # rather than $, and will be missing other niceties, e.g. colourisation, configured in /home/user/.bashrc.)
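A quick way to see the difference from a terminal (a minimal sketch, assuming a stock sudo configuration with env_reset enabled):
export CC='clang'
sudo sh -c 'echo $CC'       # prints an empty line: sudo resets the environment
sudo -E sh -c 'echo $CC'    # prints clang: -E preserves your exported variables
sudo -E ./configure         # configure now sees CC=clang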

Related

crontab not working: perhaps an error in notation?

In an Amazon EC2 terminal, I type `sudo nano crontab -e` to bring up the editor. I have the following (empty line at the end included):
@reboot echo "Running RMV scrape & R Shiny via: nano crontab -e"
@reboot nohup python /home/ec2-user/RMV/RMV_scrape.py &
@reboot nohup shiny-server &
@reboot service start httpd
@hourly cp -f /home/ec2-user/RMV/wait_times.csv /var/shiny-server/www/wait_times.csv
Here, I'm trying to run (a) my program, (b) apache, (c) R Shiny server and (d) a script that runs hourly to copy a file.
For some reason, this fails to run. pgrep cron does show that cron runs on startup. It shouldn't be a permissions issue because I ran crontab using sudo. I had one relative pathname in my .py script, but I changed it to an absolute pathname.
I've consulted:
https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work
http://www.unix.com/answers-to-frequently-asked-questions/13527-cron-crontab.html
Any ideas why this may not be working?
I think your problem is with the command you used to edit the crontab. sudo nano crontab -e does not edit the crontab; it opened a file named crontab in whatever directory you were working in, but the real crontab files live under /var and are not intended to be edited directly. For any given user, crontab -e edits that user's crontab using the editor specified in the EDITOR environment variable. So to edit root's crontab, the command is sudo crontab -e.
That said, adding entries to root's crontab is probably not what you want; you probably want the system crontab for something like this. In almost all cases the system crontab is /etc/crontab, which can be edited with sudo nano /etc/crontab. Note that in the system crontab you need to add the user to run the command as between the time and command fields, e.g.
@reboot root echo "Running RMV scrape & R Shiny via: nano crontab -e"
Also note that cron uses a very minimal PATH environment variable for security reasons. If a command you issue is not on that path, it will not execute. Remember to either add the paths you need to the PATH set in the particular crontab file, or use the full path to each executable from the filesystem root.
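Putting it together, the entries from the question would look roughly like this in /etc/crontab (a sketch only: the commands and paths are taken from the question, the PATH line is illustrative, everything is run as root here for simplicity, and note that the usual service syntax is service httpd start):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
@reboot root nohup python /home/ec2-user/RMV/RMV_scrape.py &
@reboot root nohup shiny-server &
@reboot root service httpd start
@hourly root cp -f /home/ec2-user/RMV/wait_times.csv /var/shiny-server/www/wait_times.csv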

Unable to automate script: sudo: no tty present and no askpass program specified

I would like to automate a build - for now, during my development, so no security stuff involved.
I have created a script that moves libs to /usr/local/lib and issues ldd command.
These things require sudo.
Running the script from the builder (Qt Creator), I am not prompted to enter my sudo password, and I get the error
sudo: no tty present and no askpass program specified
Sorry, try again.
I have found a few solutions to this, but they just did not work... What am I missing?
Exact code:
in myLib.pro
#temporary to make my life easier
QMAKE_POST_LINK = /home/me/move_libs_script
in move_libs_script:
#!/bin/bash
sudo cp $HOME/myLib/myLib.so.1 /usr/local/lib/
sudo ldconfig
I did as suggested by the answer above: edited the sudoers file and added the script... even added qmake...
sudo visudo
added at the end:
me ALL=NOPASSWD: /home/me/move_libs_script, /usr/bin/qmake-qt4
It saved the file as /etc/sudoers.tmp (running sudo visudo again showed that my changes were kept, so I am not sure what the .tmp is about).
Still same errors
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
Edit: after asking the question I found a suggested similar question:
https://stackoverflow.com/a/10668693/1217150
So I tried to add a custom step...
Result:
09:50:03: Running build steps for project myLib...
09:50:03: Could not start process "ssh-askpass Sudo Password | sudo -S bash ./move_libs_script"
Error while building project myLib (target: Desktop)
When executing build step 'Custom Process Step'
(if I run from terminal I get asked for password)
New edit: so I thought I could outsmart the system and call a script that calls my script...
myLib.pro
QMAKE_POST_LINK = /home/me/sudo_move_libs_script
sudo_move_libs_script:
#!/bin/bash
ssh-askpass Sudo Password | sudo -S bash $HOME/move_libs_script
Got it!!! I will post it as an answer, I guess.
New edit as answer to comment:
in mainExe.pro:
QMAKE_POST_LINK = ./link_lib
in link_lib:
#!/bin/sh
LD_LIBRARY_PATH=$HOME/myLib
export LD_LIBRARY_PATH
Result: the executable fails because the lib is not found (of course, before testing I removed the copy from /usr/local/lib).
Solution 1:
To run the command I needed with sudo from Qt, and be asked for password
myLib.pro
QMAKE_POST_LINK = /home/me/sudo_move_libs_script
sudo_move_libs_script:
#!/bin/bash
ssh-askpass Sudo Password | sudo -S bash $HOME/move_libs_script
move_libs_script:
#!/bin/bash
cp $HOME/myLib/myLib.so.1 /usr/local/lib/
ldconfig
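For reference, the same idea can be written using sudo's built-in askpass hook instead of piping the password on stdin (a sketch only, assuming ssh-askpass lives at /usr/bin/ssh-askpass; adjust the path for your system):
#!/bin/bash
# let sudo launch the graphical askpass helper itself via -A
export SUDO_ASKPASS=/usr/bin/ssh-askpass
sudo -A cp "$HOME/myLib/myLib.so.1" /usr/local/lib/
sudo -A ldconfig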
Solution 2
Avoid using sudo: run a custom executable from Qt Creator (just like I would when the project is deployed), so I can execute my program with the correct dependency. Unfortunately it does not work for debugging.
In the Run Configuration, place a script "dummy_executable" instead of the app executable:
dummy_executable:
#!/bin/sh
LD_LIBRARY_PATH=$HOME/myLib
export LD_LIBRARY_PATH
$HOME/myExec/myExec "$@"
This method is great for running the program... unfortunately gdb fails. I think it tries to debug the shell script? So it won't work if I try to step through my program.
Choosing Solution 1, because it has advantages: it allows me to build and debug the program, which is what I really need, and with the dependencies set I get the library built before running the program - which is perfect.
Thank you Etan Reisner

Error installing Meteor on linux x86_64 chrome os

I am trying to install Meteor on the HP14 Chromebook. It is a Linux x86_64 Chrome OS system.
Each time I try to install it I run into errors.
The first time I tried to install it the installer just downloaded the Meteor preengine but never downloaded the tarball or installed the actual meteor application structure.
So, I decided to try as sudo.
sudo curl https://install.meteor.com | /bin/sh
This definitely installed it, because you can see it with ls:
chronos@localhost ~/projects $ ls /home/chronos/user/.meteor/
Now when I try to run meteor --version or meteor create myapp without sudo I get the following error.
````
chronos@localhost ~/projects $ meteor create myapp
'/home/chronos/user/.meteor' exists, but '/home/chronos/user/.meteor/meteor' is not executable.
Remove it and try again.
````
When I try to run sudo meteor --version or sudo meteor create myapp I get this error.
chronos@localhost ~/projects $ sudo meteor create myapp
mkdir: cannot create directory ‘/root/.meteor-install-tmp’: Read-only file system
Any ideas? I am thinking I have to make that partition writable. I made partition 4 writable.
Put your Chromebook into dev mode.
http://www.chromium.org/chromium-os/developer-information-for-chrome-os-devices
Boot into dev mode.
Ctrl+Alt+T to open crosh
shell
sudo su -
cd /usr/share/vboot/bin/
./make_dev_ssd.sh --remove_rootfs_verification --partitions 4
reboot
After rebooting
sudo su -
mount -o remount,rw /
mount -o remount,exec /mnt/stateful_partition
Write yourself a read/write script
sudo vim /sbin/rw
#!/bin/bash
echo "Making FS Read/Write"
sudo mount -o remount,rw /
sudo mount -o remount,exec /mnt/stateful_partition
sudo mount -i -o remount,exec /home/chronos/user
echo "You should now have full Read/Write access"
exit
Change permissions on script
sudo chmod a+x /sbin/rw
Run to set read/write root
sudo rw
Install Meteor as indicated on www.meteor.com via curl and meteor create works!
Alternatively you can edit chromeos_startup, though that might not be the best idea. It is probably best to have read/write on demand as illustrated above.
cd /sbin
sudo vim chromeos_startup
Go to lines 51 and 58 and remove the noexec options from the mount command.
Down at the bottom of the script, above the note about ureadahead and below the if statement, add in:
mount -o remount,exec /mnt/stateful_partition
#uncomment this to mount root r/w on boot
mount -o remount,rw /
Again, editing chromeos_startup probably isn't the best idea unless you are so lazy you can't type sudo rw.
Enjoy.
This is super easy to fix!!
Just run this (or put it in .bashrc or .zshrc to make it permanent):
sudo mount -i -o remount,exec /home/chronos/user
Based on your question (you are using sudo) I assume you already have Dev Mode enabled, which is required for the above sudo command to work.
ChromeOS mounts the home folder using the noexec option by default, and this command remounts it with exec instead. And boom, Meteor will work just fine after that (and so will a bunch of other programs running out of your home folder).
Original tip: https://github.com/dnschneid/crouton/issues/928
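If you want to confirm the remount took effect, a rough check (the flag output varies between ChromeOS versions) is:
mount | grep /home/chronos/user     # the flags should include noexec before the remount
sudo mount -i -o remount,exec /home/chronos/user
mount | grep /home/chronos/user     # noexec should now be gone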

testlink installation failed: checking if /var/testlink/logs/ directory exists

Today, I installed TestLink. After I select 'New installation' and choose the 'I agree' option, it fails at the second step. The failure messages are as follows:
Read/write permissions
For security reason we suggest that directories tagged with [S] on following messages, will be made UNREACHEABLE from browser
Checking if C:\xampp\htdocs\testlink\gui\templates_c directory exists OK
Checking if C:\xampp\htdocs\testlink\gui\templates_c directory is writable (by user used to run webserver process) OK
Checking if /var/testlink/logs/ directory exists [S] Failed!
Checking if /var/testlink/upload_area/ directory exists [S] Failed!
So, can anyone give me a hand? Many thanks!
In C:\xampp\htdocs\testlink\config.inc.php file, change
$g_repositoryPath = 'C:\xampp\htdocs\testlink\upload_area';
$tlCfg->log_path = 'C:\xampp\htdocs\testlink\logs';
Worked for me. Make sure you don't have a slash at the end,
i.e. make sure that it is NOT:
$g_repositoryPath = 'C:\xampp\htdocs\testlink\upload_area\';
$tlCfg->log_path = 'C:\xampp\htdocs\testlink\logs\';
If you installed the XAMPP or testlink in another directories, change the paths above accordingly.
Go to config.inc.php and edit the log directory ($tlCfg->log_path) to C:\xampp\testlink\logs and the upload directory ($g_repositoryPath) to C:\xampp\testlink\upload_area.
In some cases, you would do it like this:
Go to C:\xampp\htdocs\testlink\config.inc.php
and log directory ($tlCfg->log_path) edit the path to C:\xampp\htdocs\testlink\logs
and upload directory ($g_repositoryPath) to C:\xampp\htdocs\testlink\upload_area
Then you have:
$g_repositoryPath = 'C:\xampp\htdocs\testlink\upload_area';
$tlCfg->log_path = 'C:\xampp\htdocs\testlink\logs';
I had the paths set correctly, and the user, group, and access permissions were also correct, but I still could not get rid of the issue. It took me very long to get to the root cause: the HTTP daemon does not have access to the files in question because of SELinux policies, so a simple chown or chmod (user and group access) would not help. For TestLink 1.16 I resolved it by re-installing as a sudo user, but for an upgrade the issue arose again even with a sudo user.
I resolved the issue by executing the following commands; I hope this helps. (Note: you might have to amend the attributes to run them successfully.)
$chcon -t httpd_sys_content_rw_t "<path_to_testlink_folder>/gui/templates_c/"
$chcon -t httpd_sys_content_rw_t "<path_to_testlink_folder>/upload_area/"
$chcon -t httpd_sys_content_rw_t "<path_to_testlink_folder>/logs"
$semanage fcontext -a -t httpd_sys_content_rw_t "<path_to_testlink_folder>(/.*)?"
$restorecon -R -v "<path_to_testlink_folder>"
Ubuntu 12.04 - All you have to do is chmod 777 these directories, and the Fails will become Pass.
~$ cd /var/www/testlink
~$ sudo chmod 777 ./gui/templates_c/
~$ sudo chmod 777 ./upload_area/
~$ sudo chmod 777 ./logs/
Whatever the instructions say is total BS. Making these directories unreachable from the browser is optional, and that created confusion. If you chmod 777 them, your Fails will turn into Pass and you'll be able to proceed to step 3 of your TestLink installation. Tested with TestLink version 1.9.5.
For Mac OS users, try this in version 1.9.19:
Make sure of your folder name.
In config.inc.php file:
$tlCfg->log_path = TL_ABS_PATH . 'logs' . DIRECTORY_SEPARATOR;
$g_repositoryPath= TL_ABS_PATH . 'upload_area' . DIRECTORY_SEPARATOR;
After this, if you still get a read/write permission failure:
Go to testlink -> logs / upload_area -> press Command+I -> under Permissions, enable Read & Write for everyone.
On Linux, ensure that $tlCfg->log_path is /var/www/html/testlink/logs/ and $g_repositoryPath is /var/www/html/testlink/upload_area/.
Valid for Ubuntu 16.04 LTS; add permissions.
Change:
$g_repositoryPath = '/var/www/html/testlink/upload_area'; //linux user
$tlCfg->log_path = '/var/www/html/testlink/logs';
~$ cd /var/www/testlink
~$ sudo chmod 777 ./gui/templates_c/
~$ sudo chmod 777 ./upload_area/
~$ sudo chmod 777 ./logs/
In CentOS go to /var/www/html/testlink-code-1.9.16 and edit the file custom_config.inc.php replace these two lines
// $tlCfg->log_path = '/var/testlink-ga-testlink-code/logs/'; /* unix example */
// $g_repositoryPath = '/var/testlink-ga-testlink-code/upload_area/'; /* unix example */
with
$tlCfg->log_path = '/var/www/html/testlink-code-1.9.16/logs/';
$g_repositoryPath = '/var/www/html/testlink-code-1.9.16/upload_area/';
Make sure you have disabled SELinux. If you have not, edit the file /etc/sysconfig/selinux, change the variable SELINUX to disabled, and reboot the machine. These errors should then be gone.
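For example, on CentOS/RHEL the check and the edit can be done from a shell roughly like this (a sketch only; a reboot is still required afterwards):
grep '^SELINUX=' /etc/sysconfig/selinux                               # show the current mode
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux  # switch it to disabled
sudo reboot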
On Ubuntu 18.04, you will need to run
apt-get remove apparmor
in order to install it.
To solve the problem with
'Checking if /var/www/html/testlink-1.9.16/gui/templates_c directory is writable (by user used to run webserver process)' on CentOS 7:
disable SELinux, and then restart your system.
You should no longer see this error message.

Sudo Path - not finding Node.js

I need to run node on my Ubuntu machine with sudo access. The node directory is in the sudo path, but when I try to run it I get a command not found. Explicitly calling node does work.
//works
node
>
which node
/root/local/node/bin/node
echo sudo $PATH
sudo /root/local/node/bin:/usr/bin/node:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
sudo node --version
sudo: node: command not found
//explicitly calling it works
sudo /root/local/node/bin/node
>
Um, I don't think there's such a thing as a "sudo path" - your second command there is just echoing "sudo" followed by your regular path. In any case, if you're running things with sudo you really, really should not depend on a path - you should give the explicit pathname for every command and file argument whenever possible, to minimize security risks. If sudo doesn't want to run something, you need to use visudo to add it to /etc/sudoers.
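Concretely, two ways to run node under sudo without relying on PATH lookup (assuming the binary really is at /root/local/node/bin/node, as which node reported above):
sudo /root/local/node/bin/node --version       # call the binary by its absolute path
sudo env "PATH=$PATH" node --version           # or pass your current PATH through for this one invocation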

Resources