Unable to access dotnet via sudo dotnet in CentOS - .net-core

I'm using a headless CentOS server on Azure, and I set up .NET Core there.
I'm able to run dotnet --info but unable to run sudo dotnet --info. Note that I can't access the root user.

Looking deeper into sudo here, I found out that when running sudo, many systems are configured to clear the environment of all non-whitelisted values and to reset the PATH variable to a sanitized value.
This was clearing the PATH entry for dotnet, so the command could not be found when executed with sudo.
For the solution: you will find Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin in /etc/sudoers. Removing that line from the sudoers file will resolve the issue.
You can edit the sudoers file with the visudo command.
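A less drastic alternative (my suggestion, not part of the original answer) is to keep secure_path and simply append the directory that contains dotnet, which varies by install method:
# find where dotnet lives for your user
command -v dotnet
# then, via visudo, append that directory instead of deleting the line, e.g.:
#   Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/path/to/dotnet-dir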

Related

AWS CodeDeploy deployment failed at event ApplicationStop

I am trying to set up auto-deployment from GitHub to AWS, using EC2.
I set up a Role with the CodeDeployServiceRole auto policy.
After following the Tutorial: Use AWS CodeDeploy to Deploy an Application from GitHub, my deployment fails at the ApplicationStop event after trying for a couple of minutes, with error code HEALTH_CONSTRAINT. I'm not sure how to troubleshoot the issue or where to look.
Here are a few hints on how to navigate your way:
Check the logs, as mentioned in the comments, in /var/log/aws/codedeploy-agent.
As AWS support recommends, you can add --ignore-application-stop-failures for one deployment, so that the step is skipped if it failed last time (see the example after these hints). The ApplicationStop lifecycle event uses the appspec file from the last successful build, so if that one is somehow corrupted, this step will keep failing in the following builds.
(Not recommended) You can delete the file that CodeDeploy uses to keep track of the previous successful deployment, located under /opt/codedeploy-agent/deployment-root/deployment-instructions/.
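As a rough sketch of that one-off deployment (the application name, deployment group, and revision details below are placeholders, not from the original answer):
aws deploy create-deployment \
  --application-name MyApp \
  --deployment-group-name MyDeploymentGroup \
  --github-location repository=myorg/myrepo,commitId=abc123 \
  --ignore-application-stop-failures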
Check the latest logs at /var/log/aws/codedeploy-agent
If your deployment is failing at the ApplicationStop event, then most likely the issue is that your EC2 instance does not have the necessary permissions to get the artifacts from the S3 bucket.
Your EC2 instance must have an IAM role attached which gives it enough permissions to download the artifacts from the S3 bucket.
Your EC2 instance must be started with an IAM role, so you may have to reboot the instance after attaching the role to it.
From your configuration, it looks like you have given CodeDeploy permission to perform certain actions on your EC2 instance. You may also want to check whether the EC2 instance itself has the necessary permissions to download packages from the S3 bucket.
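A quick way to sanity-check this from the instance itself (the bucket name and object key below are placeholders):
# check which IAM role (instance profile) is attached, via the instance metadata service
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# dry-run a download of the revision bundle from S3
aws s3 cp s3://your-codedeploy-bucket/your-app.zip /tmp/your-app.zip --dryrun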
Another reason for this error is that the CodeDeploy Service is not running on your machine. On Windows machines, Code Deploy Service terminates sometimes, and as a result, the deployment is not downloaded on the machine. Nothing appears in the logs either.
Run services.msc and check the code deploy agent service. If it is not running, start it and retry the deployment.
I had the same issue and solved it by fixing the codedeploy-agent, which wasn't working on my EC2 instance.
sudo service {httpd/apache2} status
Something might have caused the agent not to run properly.
Hope it will help.
I had the same issue. You also need to make sure that your EC2 instance has code-deploy-agent installed.
Follow the AWS guide below. It worked for me.
AWS guide to install code agent in linux server
Check if the codedeploy-agent is running.
sudo service codedeploy-agent status
If it is not running, use the command below to start it:
sudo service codedeploy-agent start
If you are using an AWS Windows server, check the logs at:
C:\ProgramData\Amazon\CodeDeploy\log\codedeploy-agent-log.txt
AWS Docs
To check whether codedeployagent is running on Windows, open a PowerShell window and run this command:
powershell.exe -Command Get-Service -Name codedeployagent
It is better to stop and start it again:
powershell.exe -Command Stop-Service -Name codedeployagent
powershell.exe -Command Start-Service -Name codedeployagent
Or restarting also works:
powershell.exe -Command Restart-Service -Name codedeployagent
In my case, I had to uninstall codedeployagent on Windows by uninstalling it and deleting the old CodeDeploy files.
Run the commands below in PowerShell one by one to uninstall:
wmic
product where name="CodeDeploy Host Agent" call uninstall /nointeractive
exit
After this, delete the CodeDeploy folder at this location:
C:\ProgramData\Amazon\CodeDeploy\
Now install codedeployagent on Windows again.
Start the codedeployagent again.
powershell.exe -Command Start-Service -Name codedeployagent

Missing value auth-url required for auth plugin password

I am trying to install OpenStack using DevStack on Ubuntu 16.04.
I followed this link:
https://docs.openstack.org/developer/devstack/guides/single-machine.html
When I run
sudo openstack service list
it prompts the following error:
Missing value auth-url required for auth plugin password
Make sure you are logged into Horizon and download the RC file (right corner). After that, do source admin-openrc.sh.
Note that you have to download the RC file of the project you are working on.
This should do it. The keystonerc_admin file is generated at the end of an OpenStack Packstack installation:
source keystonerc_admin
Always source the admin-openrc file before running any openstack command,
e.g.
$ source admin-openrc
then run whatever openstack command you want to run,
e.g.
$ openstack --debug server list
Take the Rocky version of OpenStack Keystone as an example: https://docs.openstack.org/keystone/rocky/install/keystone-openrc-rdo.html#using-the-scripts
You can create an admin-openrc.sh (if you are installing OpenStack Keystone for the first time), put the environment parameters in it, and source this file before you run any openstack command. This resolves the issue.
As a recommendation, I put admin-openrc.sh at /usr/share/keystone/admin-openrc.sh.
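For reference, a minimal admin-openrc.sh looks roughly like the sketch below; all values are placeholders in the style of the standard Keystone install docs, so substitute your own controller host, domains, project, and password. The OS_AUTH_URL line is the value the "Missing value auth-url" error is complaining about.
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2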

How to correctly install dokku - with or without sudo?

I'm learning dokku right now for simple web deployment. The official install instructions give this command:
wget -qO- https://raw.github.com/progrium/dokku/v0.3.12/bootstrap.sh | sudo DOKKU_TAG=v0.3.12 bash
I'm not a devop or admin, but as far as I understand this line, it performs all bootstrapping and installation under the root account, thanks to sudo. So dokku will be checked out into a directory with root access rights, and all additional directories like /var/lib/dokku/ will also have root access rights.
The problem is - all articles across the internet about dokku instructs to execute dokku command or do dokku-related actions without sudo. For example, instructions about this dokku database plugin, https://github.com/krisrang/dokku-mariadb, instructs to install it via:
cd /var/lib/dokku/plugins
git clone https://github.com/krisrang/dokku-mariadb mariadb
dokku plugins-install
This is not working, since /var/lib/dokku/plugins has root access rights and git clone fails with access denied. It's hard to be a non-admin nowadays, but maybe someone can hint at what I'm doing wrong? Do I need to install dokku some other way, or do all dokku-related tutorials across the internet assume that I'm executing them as root (which, by my limited admin knowledge, is highly discouraged for security reasons)?
You should run those three commands as root:
sudo su -
The dokku binary will run code as the dokku user even if you execute it as root, so it should be fine to run it as is. Once you are root, just run the install instructions listed in your question. Hope my answer helps! :)
I also contacted them, and they mentioned:
In the future, we'll have a method to install plugins directly with a
dokku command
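Until that exists, a sketch of the manual route as root (these are simply the commands from the question, run after switching to root):
sudo su -
cd /var/lib/dokku/plugins
git clone https://github.com/krisrang/dokku-mariadb mariadb
dokku plugins-install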
As far as I can tell, you need to run it as root. A traditional way to install a program without root privileges is to download the source and compile it, which can be done by running:
git clone https://github.com/progrium/dokku.git
make
make install
Dokku's makefile depends on apt-get, which requires root access to run.
I'm not familiar with dokku or dokku-mariadb, but I think the author of dokku-mariadb also assumes root access.
For people running into the question of whether it's fine to install through the root user (on freshly created VMs, as per the guide), check this GitHub issue:
https://github.com/dokku/dokku/issues/961
Since the commands related to dokku are prefixed with # rather than $, they are meant to be run as root, so it's not necessary to run them from a non-root user. It also makes writing sudo unnecessary (and, from my experience, counterproductive).

Error "could not delete" with Composer on Vagrant

I have a Vagrant running Linux and I'm trying to install Symfony.
After the command composer create-project symfony/framework-standard-edition ./ "2.5.*" I get the error:
[RuntimeException]
Could not delete ./.git/objects/pack/tmp_idx_llwUKb:
If I try to composer update another project, I always get this kind of Could not delete error.
Any ideas?
Edit: For a simple sudo composer update -vvv on another project:
- Installing sonata-project/admin-bundle (dev-master 8a022aa)
Failed to download sonata-project/admin-bundle from source: Could not delete /vagrant/crm_neo/vendor/sonata-project/admin-bundle/.git/objects/pack/tmp_idx_hchQhc:
Now trying to download from dist
- Installing sonata-project/admin-bundle (dev-master 8a022aa)
Failed: [RuntimeException] Could not delete /vagrant/crm_neo/vendor/sonata-project/admin-bundle/.git/objects/pack/tmp_idx_hchQhc:
[RuntimeException]
Could not delete /vagrant/crm_neo/vendor/sonata-project/admin-bundle/.git/o
bjects/pack/tmp_idx_hchQhc:
Exception trace:
() at phar:///usr/local/bin/composer/src/Composer/Util/Filesystem.php:193
Composer\Util\Filesystem->unlink() at phar:///usr/local/bin/composer/src/Composer/Util/Filesystem.php:151
Composer\Util\Filesystem->removeDirectoryPhp() at phar:///usr/local/bin/composer/src/Composer/Util/Filesystem.php:129
Composer\Util\Filesystem->removeDirectory() at phar:///usr/local/bin/composer/src/Composer/Util/Filesystem.php:35
Composer\Util\Filesystem->remove() at phar:///usr/local/bin/composer/src/Composer/Util/Filesystem.php:80
Composer\Util\Filesystem->emptyDirectory() at phar:///usr/local/bin/composer/src/Composer/Downloader/FileDownloader.php:108
Composer\Downloader\FileDownloader->doDownload() at phar:///usr/local/bin/composer/src/Composer/Downloader/FileDownloader.php:89
Composer\Downloader\FileDownloader->download() at phar:///usr/local/bin/composer/src/Composer/Downloader/ArchiveDownloader.php:35
Composer\Downloader\ArchiveDownloader->download() at phar:///usr/local/bin/composer/src/Composer/Downloader/DownloadManager.php:201
Composer\Downloader\DownloadManager->download() at phar:///usr/local/bin/composer/src/Composer/Installer/LibraryInstaller.php:156
Composer\Installer\LibraryInstaller->installCode() at phar:///usr/local/bin/composer/src/Composer/Installer/LibraryInstaller.php:87
Composer\Installer\LibraryInstaller->install() at phar:///usr/local/bin/composer/src/Composer/Installer/InstallationManager.php:152
Composer\Installer\InstallationManager->install() at phar:///usr/local/bin/composer/src/Composer/Installer/InstallationManager.php:139
Composer\Installer\InstallationManager->execute() at phar:///usr/local/bin/composer/src/Composer/Installer.php:548
Composer\Installer->doInstall() at phar:///usr/local/bin/composer/src/Composer/Installer.php:217
Composer\Installer->run() at phar:///usr/local/bin/composer/src/Composer/Command/UpdateCommand.php:128
Composer\Command\UpdateCommand->execute() at phar:///usr/local/bin/composer/vendor/symfony/console/Symfony/Component/Console/Command/Command.php:252
Symfony\Component\Console\Command\Command->run() at phar:///usr/local/bin/composer/vendor/symfony/console/Symfony/Component/Console/Application.php:889
Symfony\Component\Console\Application->doRunCommand() at phar:///usr/local/bin/composer/vendor/symfony/console/Symfony/Component/Console/Application.php:193
Symfony\Component\Console\Application->doRun() at phar:///usr/local/bin/composer/src/Composer/Console/Application.php:135
Composer\Console\Application->doRun() at phar:///usr/local/bin/composer/vendor/symfony/console/Symfony/Component/Console/Application.php:124
Symfony\Component\Console\Application->run() at phar:///usr/local/bin/composer/src/Composer/Console/Application.php:84
Composer\Console\Application->run() at phar:///usr/local/bin/composer/bin/composer:43
require() at /usr/local/bin/composer:15
This happened to me once, and it turned out that I was hitting Composer's timeout.
You could take the following measures to gain some speed:
Increase composer process-timeout (default 300) (not really needed if the following settings will help you gain speed, but can't hurt)
Set dist as preferred install type.
Enable the https protocol for GitHub, which is faster.
~/.composer/config.json
{
    "config": {
        "process-timeout": 600,
        "preferred-install": "dist",
        "github-protocols": ["https"]
    }
}
If you still have problems after that, you can also clear composer's cache:
rm -rf ~/.composer/cache
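If you prefer not to edit the JSON by hand, the same settings should be achievable with the composer config command (writing to your global config file, typically ~/.composer/config.json or ~/.config/composer/config.json depending on your setup):
composer config --global process-timeout 600
composer config --global preferred-install dist
composer config --global github-protocols https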
I was trying to update project dependencies (using composer update) during a Laravel Framework upgrade exercise in my local Homestead environment (having run vagrant ssh to log in as the default "vagrant" user), and none of the previous answers in this thread made any difference to the...
Could not delete /home/vagrant/projects/projectname/vendor/kylekatarnls/update-helper/src/UpdateHelper
...error message I repeatedly encountered.
The only thing that worked for me was to include a composer option as follows:
composer update --no-plugins
Plugins are used to alter or extend the functionality of Composer. The above command disables all installed plugins. Unfortunately, I'm not clear as to why this command worked for me, as I certainly haven't written any plugins myself. All I can conclude is that there was an erroneous Composer plugin installed that was causing this issue.
TL;DR Switch to Docker. It is the industry standard.
I came across this issue and spent quite some time doing research. I've tried every possible option to fix it but none of them worked for me. For me, the bug occurred on GNU/Linux host with Vagrant and VirtualBox provider.
It turns out it's a VirtualBox bug related to the file system layer and race conditions when creating/deleting files. It occurs only for VirtualBox shared folders, not for regular ones. The sad part is that it seems like it's not going to be fixed any time soon.
Some guys reported that they were able to solve the problem using the following tricks:
Downgrading to VirtualBox version 6.0.4.
Using nfs or rsync instead of shared folders.
Patching composer to add some pauses after certain operations.
Disabling plugin usage with --no-plugins option.
But all of this seemed dirty to me. I personally was able to use a workaround suggested on GitHub which is to configure composer to install packages from sources. That's a simple and kind of clean trick which should not have significant negative side effects on your workflow. Try putting the following config into your ~/.config/composer/config.json. Or instead you can edit your composer.json accordingly depending on your needs. Keep in mind that composer.json will override your global config.
{
    "config": {
        "preferred-install": "source"
    }
}
Just got the same issue.
I see the problem as being access to some local files. In my case the target directory was owned by root, and I'm not the root user.
Solution
Change permissions/owner of your files/directory.
Redefine owner:
sudo chown myuser:myuser -R /path/to
Maybe the group you are in lacks some permissions.
So, try to run:
sudo chmod g+rwX -R /path/to
Or you may run your command with sudo if that works for you (not recommended). :)
P.S. Never use 777. It's not secure.
UPD1
Another thing you may find useful for solving the root cause is to wrap your composer binary so that it always runs on behalf of a certain user.
$ cat /usr/local/bin/composer
#!/bin/bash
# run composer on behalf of the www-data user
set -o pipefail
set -o errexit
set -o nounset
#set -o xtrace
[[ "${DEBUG:-}" = "true" ]] && set -o xtrace || true
composer_debug=$([[ 'true' != "${COMPOSER_DEBUG:-}" ]] || echo '-vvv' )
sudo -u www-data -- /usr/bin/composer ${composer_debug:-} "$@"
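Assuming the wrapper is saved as /usr/local/bin/composer (the path from the listing above), make it executable and call Composer as usual; the real work is still done by /usr/bin/composer, just as www-data:
sudo chmod +x /usr/local/bin/composer
composer install
# verbose Composer output via the wrapper's debug switch
COMPOSER_DEBUG=true composer install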
I had this problem when provisioning the machine, which was bootstrapped to run composer install. I simply exited the VM and ran composer install on the code on my host machine and it worked.
So, if you're facing this problem while running Composer inside the VM, just try running Composer from outside the VM.
Update: As pointed out in the comments below, this can pose some problems, with different versions of packages being installed owing to the difference in system configurations between the local and Vagrant environments, so exercise appropriate caution when trying this.
We're running into issues too. Several people seem to have this issue, and a fix has not been provided. For more information you can look into the GitHub issues of vagrant-winnfsd.
In my case, I just used the NFS folder type instead of shared folders and it works:
folders:
    - map: ~/code/cs-cart-trial
      to: /home/code/cs-cart-trial
      type: "nfs"
Just run
sudo chmod -R 777 /folder/path
This will give you write access to the folder you are running composer in.
I know this is an old post but this works so I have to share it.
In my case I was trying composer update but I got
[RuntimeException]
Could not delete .../vendor/bin/php-parse:
Although I'm using the Laravel framework, this question was the first link on Google, so I decided to post an answer.
My solution was to grant ownership for vendor: sudo chown -R $USER:www-data vendor/ and
sudo chown -R $USER:www-data composer.json
Update: my host OS was Ubuntu 16.04.
I had the same issue with CakePHP 4.2.1.
Error:
Could not delete /var/www/vendor/cakephp/plugin-installer/src:
Solution:
Based on https://stackoverflow.com/a/63139337/1110760
After trying out several options mentioned above, for me this was the easiest way to solve it.
composer install --prefer-source
The --no-plugins argument worked as well, sort of: it skipped some packages, but my localhost seemed to work just fine. It is faster, but some packages are missing.
On AWS I got this error while deploying a Yii framework project. There was a
/var/app/current/vendor/
folder; I deleted everything inside it, went back to my document root, and ran composer update. It fetched all the repos again.
In my case, removing the plugin and re-creating the box solved the issue.
For me it was caused by Composer's timeout. I checked my internet speed and found it had dropped to 0.7M, which is nearly unusable. After I reconnected the Wi-Fi and got my connection speed back to normal, the errors were gone.
This has something to do with the synchronization of folders between the host and guest OSes; the folder might simply be temporarily locked by your host machine.
The solution is simply to delete the offending .git folder from your host OS, or reboot the machine, and launch composer install again.
Ideally each OS has its own dependencies and different binaries, so you should keep your /vendor folder out of the rsync/Vagrant folder share, just as you would do with /node_modules on a Node.js project.
Another thing to check: Composer needs to run in the context of a directory it has permissions on.
In my case I was trying to issue a create-project command from /var/www, aimed at /var/www/html. /var/www is owned by root, while /var/www/html is owned by the same user I executed Composer as (www-data). I got the following error: Could not delete /var/www/html/:
Issuing the same Composer command from within /var/www/html itself worked perfectly.
For me it helped to install a (new) version via the command line from the download page https://getcomposer.org/download/. I can rule out file permissions, since I was root and had run chmod -R 0777, though I was on a VirtualBox-mounted drive. Anyway, since the new version worked, it must have been the version, or running the new version via the php .phar, while the original binary belonged to root:
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('sha384', 'composer-setup.php') === '48e3236262b34d30969dca3c37281b3b4bbe3221bda826ac6a9a62d6444cdb0dcd0615698a5cbe587c3f0fe57a54d8f5') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"
I solved the problem by creating a mount:
In /home/vagrant, create a folder named vendor,
then run the command: mount --bind /home/vagrant/vendor /path/to/source/vendor
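If you want that bind mount to survive reboots, one option (my addition, not part of the original answer) is an /etc/fstab entry; adjust the target path to your project's vendor directory:
# /etc/fstab
/home/vagrant/vendor  /path/to/source/vendor  none  bind  0  0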
It's a bit unrelated to the question, but my case was with Docker. It was failing because Webpack was watching and didn't allow other files to be deleted.
It worked when I turned off Webpack.
I had the same problem trying composer install:
- Installing aws/aws-sdk-php (3.218.3): Extracting archive
Install of aws/aws-sdk-php failed
In Filesystem.php line 330:
Could not delete /home/vagrant/code/my-project/vendor/composer/cefa44c2/aws-aws-sdk-php-a1bd217/src:
What I did was comment out
type: "nfs"
in my Homestead.yaml
and do a fresh vagrant provision.
I'm using Oracle Virtual box 6.1 on Windows 10.
Turn off Dropbox or other file sync.
The best hack I found was to replace the unlink command with the one below. I am running Ubuntu.
sudo nano +219 /usr/share/php/Composer/Util/Filesystem.php
exec("sudo rm -rf $path");
return true;
For Windows users
Wow, I can't believe how long it took me to realize this, and sadly it has happened multiple times, and I'm finally writing this note so that I and others can quickly recover next time.
Just use Windows Explorer to delete the /vendor/whatever_project_name folder instead of trying to delete it from the Vagrant command line.
Then run composer update to reinstall the dependencies.

Symfony2 Composer Install

I am trying to install Symfony 2.1.3 (latest). I am running Composer and it installs everything okay. The only error that I get is:
Script Sensio\Bundle\DistributionBundle\Composer\ScriptHandler::clearCache
handling the post-install-cmd event terminated with an exception
[RuntimeException]
An error occurred when executing the "'cache:clear --no-warmup'" command.
It's being installed under the www folder. I am running nginx and followed the Composer approach. I read on the internet that Apache should be run manually, not as a service; however, I am using nginx instead. Does Apache still have any bearing on it? I'm using Debian Squeeze.
Edit: As per AdrienBrault's suggestion, the error was because the timezone was not set in php.ini. Only with --verbose could I see the warning. Thanks guys.
Apache is not related - PHP is called via command line.
Most likely it is the permissions on the cache folder: did you check whether the user that runs composer update can actually write to the cache folder?
Try to manually run rm -Rf app/cache/dev (for production environment replace dev with prod) and see if you get any permission error.
Also, you will get this error if the date.timezone setting is not configured in PHP when running in the CLI. To verify, just run
php --info | grep timezone
and check that the setting date.timezone is correctly configured.
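If the CLI value is missing, a quick way to find the php.ini that the CLI actually reads (often not the same file the web server uses) and fix it there:
php --ini
# then set the timezone in that file, for example (the value is just an example):
#   date.timezone = "Europe/Paris"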
On the security side, setting the folder to 777 is not the optimal solution - if you have ACL enabled, you could use that to correctly set up permissions for the cache and logs folders. Read more on the Symfony2 official installation page.
I had this same issue for a while and after hours of face-to-brick-wall pounding I realized... I have a .gitmodules file in my project, and on initial checkout these submodules are NOT initialized and as such are not there for Composer to update, which results in the above error.
Make sure you run the following
git submodule update --init src/Acme/Sadness/Bundle
of course replace src/Acme/Sadness/Bundle with YOUR project namespace.
Hope this helps someone not go through the same pain I just did.
If you already have a vendor folder, I would remove it and install Symfony 2.1.3 again via composer.phar install. The problem might be coming from an outdated version of Composer.
I had the same problem and I resolved it this way.
Execute this in the console,
and you should see something like this:
$ locate php.ini
/etc/php5/apache2/php.ini
/etc/php5/cli/php.ini
/etc/php5/fpm/php.ini
The first line is probably the php.ini that appears when you do phpinfo();.
The problem is that when you execute composer update, it does not check the same php.ini;
in my case it was the second line.
All my sites worked fine, but I always had problems; not anymore.
After editing the second file and putting in the same timezone that you set in the first one,
run
$ sudo service apache2 reload
and now
$ composer update
I hope this works for you like it worked for me.
Regards,
Emiliano
