I'm trying to install OpenStack Grizzly on a fresh Ubuntu 12.04 server.
The script runs fine until it reaches this point:
screen -S stack -p key -X stuff 'cd /opt/stack/keystone && /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d --debug || touch "/opt/stack/status/stack/key.failure"'
2013-07-16 17:33:03 + echo 'Waiting for keystone to start...'
2013-07-16 17:33:03 Waiting for keystone to start...
2013-07-16 17:33:03 + timeout 60 sh -c 'while ! http_proxy= curl -s http://192.168.20.69:5000/v2.0/ >/dev/null; do sleep 1; done'
2013-07-16 17:34:03 + die 311 'keystone did not start'
2013-07-16 17:34:03 + local exitcode=0
2013-07-16 17:34:03 + set +o xtrace
2013-07-16 17:34:03 [ERROR] ./stack.sh:311 keystone did not start
The log file:
File "/opt/stack/keystone/bin/keystone-all", line 112, in <module>
options = deploy.appconfig('config:%s' % paste_config)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 261, in appconfig
global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
global_conf=global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
return loader.get_context(object_type, name, global_conf)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 413, in get_context
defaults = self.parser.defaults()
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 68, in defaults
defaults[key] = self.get('DEFAULT', key) or val
File "/usr/lib/python2.7/ConfigParser.py", line 623, in get
return self._interpolate(section, option, value, d)
File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 75, in _interpolate
self, section, option, rawval, vars)
File "/usr/lib/python2.7/ConfigParser.py", line 669, in _interpolate
option, section, rawval, e.args[0])
ConfigParser.InterpolationMissingOptionError: Error in file /etc/keystone/keystone.conf:
Bad value substitution:
section: [DEFAULT]
option : admin_endpoint
key : admin_port
rawval : http://192.168.20.69:%(admin_port)s/
The parsing code:
https://github.com/openstack/keystone/blob/master/keystone/common/config.py
The ConfigParser.InterpolationMissingOptionError documentation says:
Exception raised when an option referenced from a value does not exist. Subclass of InterpolationError.
I actually don't understand which referenced option does not exist.
Thank you in advance for your help.
Damien
I had the same problem when I ran stack.sh. The localrc file at the time of running stack.sh was:
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
# enable_service q-lbass
disable_service mysql
enable_service postgresql
# enable_service swift
# SWIFT_HASH=devstack
#
LOGFILE=$DEST/logs/stack.log
SCREEN_LOGDIR=$DEST/logs/screens
#
SERVICE_TOKEN=devstack
SCHEDULER=nova.scheduler.chance.ChanceScheduler
# Repositories
GLANCE_BRANCK=stable/grizzly
HORIZON_BRANCH=stable/grizzly
KEYSTONE_BRANCH=stable/grizzly
NOVA_BRANCH=stable/grizzly
NEUTRON_BRANCH=stable/grizzly
CINDER_BRANCH=stable/grizzly
SWIFT_BRANCH=stable/grizzly
PBR_BRANCH=master
REQUIREMENTS_BRANCH=stable/grizzly
CEILOMETER_BRANCH=stable/grizzly
...
However, after I removed the repository definitions and let the defaults in stackrc take over (i.e. all branches pointed to 'master'), the problem went away.
Further, the contents of the /opt/stack/keystone/bin/keystone-all script differ between the stable/grizzly and master branches. I think the one in the 'master' branch works now with Neutron enabled.
This error happens because you ran "stack.sh" as root, or because you forgot to chmod your config in /etc/keystone/keystone.conf:
chmod 777 /etc/keystone/keystone.conf
Run unstack.sh and then re-run stack.sh.
Or simply:
visudo
Add stack as a user who can do the same as root:
stack ALL=(ALL:ALL) ALL
su stack
cp -r /root/devstack /home/stack/
cd /home/stack/devstack/
./stack.sh
Clean everything up first if necessary.
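If the stack user used in the steps above doesn't exist yet, you can create it first (a minimal sketch; DevStack also ships a tools/create-stack-user.sh helper for this):
# create a stack user with a home directory and bash shell (illustrative)
sudo useradd -s /bin/bash -d /home/stack -m stack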
Looks like a bug that has been filed for keystone https://bugs.launchpad.net/keystone/+bug/1201861 and it is still open.
Modify devstack/lib/keystone as follows:
iniset $KEYSTONE_CONF DEFAULT public_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:5000/"
iniset $KEYSTONE_CONF DEFAULT admin_endpoint "$KEYSTONE_SERVICE_PROTOCOL://$KEYSTONE_SERVICE_HOST:35357/"
I just ran into this myself. The problem is that DevStack is building a Keystone configuration file in /etc/keystone/keystone.conf in which the option "admin_port" is used before it's been set. And you can't just edit keystone.conf and re-run stack.sh, because your edited version will be overwritten. I'm still chasing down the code that borks the configuration file....
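For context, the failing value in the generated keystone.conf looks like this (copied from the traceback above); %(admin_port)s can only be substituted if admin_port is also defined in [DEFAULT]. The workaround lines below are my assumption about one manual fix, using the standard Keystone port defaults:
[DEFAULT]
# Broken: admin_port is referenced but never defined, so ConfigParser
# raises InterpolationMissingOptionError when loading the file.
admin_endpoint = http://192.168.20.69:%(admin_port)s/
# Possible manual workaround: define the ports so the substitution resolves.
admin_port = 35357
public_port = 5000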
I've got Kali Linux from the Microsoft Store.
I wanted to run ./ngrok authtoken <my_authtoken>
but got -bash: ./ngrok: cannot execute binary file: Exec format error
so I tried chmod +x ./ngrok authtoken <my_authtoken> and sudo chmod +x ./ngrok authtoken <my_authtoken>
but either way I get chmod: cannot access 'authtoken': No such file or directory chmod: cannot access '<my_authtoken>'
what should I do?
I really need to run ./ngrok authtoken <my_authtoken>
P.S.: I want to use blackeye, and when I chose the number it downloaded ngrok.
edit 1: I downloaded another version from https://ngrok.com/download, removed the previous ngrok in the blackeye directory, and unzipped the new one instead.
now I'm getting bash: ./ngrok: Permission denied
edit 2: It's been 12 days with no accurate answer. I guess I have to get the real Kali Linux; the problem is probably the Windows version.
Always Google and try to find an answer before you post a question.
Your first error (-bash: ./ngrok: cannot execute binary file: Exec format error) is probably because you're trying to run a program built for a different architecture such as x86 or ARM (see https://askubuntu.com/a/648558).
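If you want to confirm an architecture mismatch, you can compare the binary against your machine (a quick check, assuming the standard file utility is installed):
# show what architecture the binary was built for
file ./ngrok
# show your machine's architecture
uname -m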
Your second error (chmod: cannot access 'authtoken': No such file or directory chmod: cannot access '<my_authtoken>') is because you're passing the ngrok arguments to chmod; you have to chmod the file first and then run it.
Your third error (bash: ./ngrok: Permission denied) is because you need to chmod the file to be executable before you can run it, and there is no need for sudo unless chmod returns chmod: cannot access '<yourfile>': Permission denied, in which case you should use sudo.
What you should run is:
curl -L https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip -o ngrok.zip
unzip ngrok.zip
chmod +x ngrok
./ngrok authtoken <myauthtoken>
This was the only thing that worked for me:
curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null && echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list && sudo apt update && sudo apt install ngrok
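Once the apt install finishes, the agent should be on your PATH; assuming a recent ngrok release, you can verify with:
ngrok version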
I'm using GitLab on a Raspberry Pi 3 Model B. Following is some information about my setup (sudo gitlab-rake gitlab:env:info):
System information
System: Raspbian 8.0
Current User: git
Using RVM: no
Ruby Version: 2.3.6p384
Gem Version: 2.6.13
Bundler Version:1.13.7
Rake Version: 12.3.0
Redis Version: 3.2.11
Git Version: 2.14.3
Sidekiq Version:5.0.5
Go Version: go1.3.3 linux/arm
GitLab information
Version: 10.6.0-rc3
Revision: 52fa89e
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://gitlab.example.com
HTTP Clone URL: http://gitlab.example.com/some-group/some-project.git
SSH Clone URL: git@gitlab.example.com:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 6.0.3
Repository storage paths:
- default: /mnt/SeagateExpansion/GitLab/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks
Git: /opt/gitlab/embedded/bin/git
After the GitLab update to version 10.6.0 I needed to change the URL again, but when I make the necessary changes in /etc/gitlab/gitlab.rb (with sudo nano) and run sudo gitlab-ctl reconfigure, I get the following error messages:
========================================================================
Error executing action `run` on resource 'ruby_block[directory resource:
/mnt/SeagateExpansion/GitLab]'
========================================================================
and
============================================================================
Error executing action `create` on resource
'storage_directory[/mnt/SeagateExpansion/GitLab]'
============================================================================
The result message says:
There was an error running gitlab-ctl reconfigure:
storage_directory[/mnt/SeagateExpansion/GitLab] (gitlab::gitlab-rails line 42) had an error: Mixlib::ShellOut::ShellCommandFailed: ruby_block[directory resource: /mnt/SeagateExpansion/GitLab] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/package/resources/storage_directory.rb line 33) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of chmod 00700 /mnt/SeagateExpansion/GitLab ----
STDOUT:
STDERR: chmod: changing permissions of ‘/mnt/SeagateExpansion/GitLab’: Operation not permitted
---- End output of chmod 00700 /mnt/SeagateExpansion/GitLab ----
Ran chmod 00700 /mnt/SeagateExpansion/GitLab returned 1
So the problem seems to be that the run and create actions on the storage resource (the GitLab folder on the external HDD [HDD = SeagateExpansion]) expect the permissions to be 700, right?
Based on these errors I tried to change the permissions of the external HDD folder /mnt/SeagateExpansion/GitLab; see the ls -l output:
drwxrwxrwx 1 root GitLabUser 0 Jan 4 17:55 GitLab
With the help of this post I tried to change the permission with the command:
sudo find /mnt/SeagateExpansion/GitLab -type d -exec chmod 700 {} \;
to the required permission 700. But the changes don't take effect. I also tried chmod -R 700 /mnt/SeagateExpansion/GitLab and executed the commands as root, but the changes don't take effect, even after restarting the Raspberry Pi. What am I doing wrong?
I also tried to change the options settings/flags of the HDD in /etc/fstab to user, but this doesn't help either.
I'm thankful for every hint and answer :).
Best regards,
Bredjo
I finally figured it out. The solution is to change the mount settings in /etc/fstab, because if you have the wrong options settings (see: https://en.wikipedia.org/wiki/Fstab) you are not able to change the permissions, since it's an NTFS filesystem.
So my old fstab entry was this:
UUID=FE820568820526AD /mnt/SeagateExpansion ntfs defaults,gid=GitLabUser 0 0
And the new entry is this:
UUID=FE820568820526AD /mnt/SeagateExpansion ntfs-3g permissions 0 0
Note that you need to install ntfs-3g to use it in fstab. And the permissions options only comes with ntfs-3g. See: https://www.tuxera.com/community/ntfs-3g-advanced/ownership-and-permissions/
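If ntfs-3g isn't installed yet, something along these lines should install it and re-mount the drive with the new entry (the package name and remount step are assumptions based on a standard Raspbian setup):
sudo apt-get install ntfs-3g
# unmount, then re-mount everything from fstab so the new options apply
sudo umount /mnt/SeagateExpansion
sudo mount -a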
After this change I executed again:
sudo gitlab-ctl reconfigure
Now the error disappeared and the permission 700 of the folder /mnt/SeagateExpansion/GitLab could be set. I also noticed that the owner of the GitLab folder changed to the user git after the reconfiguration:
drwx------ 1 git root 0 Jan 4 17:55 GitLab
That's because I no longer need the option gid=GitLabUser.
Now everything works again :).
I followed this document http://docs.sulu.io/en/latest/book/getting-started.html and at the end of the installation process I got this error:
Target: cache
cache:clear ({"--no-optional-warmers":true,"--no-debug":true,"--no-interaction":true})
// Clearing the admin cache for the dev environment with debug true
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/home/vagrant/Code/sulu/var/cache/admin/de~/doctrine": .
sulu:build [-D|--nodeps] [--destroy] [-h|--help] [-q|--quiet] [-v|vv|vvv|--verbose] [-V|--version] [--ansi] [--no-ansi] [-n|--no-interaction] [-s|--shell] [--process-isolation] [-e|--env ENV] [--no-debug] [--] <command> [<target>]
Before that I was trying to set up file permissions with this:
HTTPDUSER=`ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX var/cache var/logs var/uploads var/uploads/* web/uploads web/uploads/* var/indexes var/sessions
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX var/cache var/logs var/uploads var/uploads/* web/uploads web/uploads/* var/indexes var/sessions
and also had these errors:
setfacl: web/uploads: Operation not supported
setfacl: web/uploads/media: Operation not supported
setfacl: web/uploads/media: Operation not supported
My host OS : Ubuntu 16.04
Vagrant : v.1.9.3
VirtualBox : v.5
Homestead: v.5.2.1
Has anyone successfully installed Sulu CMS with Homestead?
What are my options for solving these issues?
Sulu CMS looks very promising, but unfortunately I still could not install it locally after many attempts.
UPDATE
After Daniel's comment I tried another way to install Sulu, but again got an error at the very end of the installation:
Executing builders
==================
Target: cache
cache:clear ({"--no-optional-warmers":true,"--no-debug":true,"--no-interaction":true})
// Clearing the admin cache for the dev environment with debug true
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/home/vagrant/Code/sulu/app/cache/admin/de~/annotations": .
Opening test.app in the browser, I see this error:
Fatal error: Uncaught
Symfony\Component\Debug\Exception\FatalThrowableError: Call to a member function getLocale() on null in /home/vagrant/Code/sulu/vendor/sulu/sulu/src/Sulu/Bundle/WebsiteBundle/Twig/Content/ContentPathTwigExtension.php on line 70
Tried to delete cache folders manually - the same error.
All console commands work fine.
Any ideas?
P.S. I have good experience installing other Symfony-based applications under Homestead, and it all basically went smoothly (Sylius, eZ, etc.), so I am very surprised...
To get around this, you need to remove this folder from the host machine, because it's an artifact from the previous process of trying to set things up and it's in god-mode, created by a no-longer existing almighty entity.
vagrant destroy (needed to release filesystem locks)
Remove folder on host: rm -rf .../sulu/app/cache/admin
Set type: nfs on the folder binding in Homestead.yaml (see the sketch after this list)
vagrant up
It should work then; I encountered this same problem recently.
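For reference, an NFS folder binding in Homestead.yaml looks roughly like this (the paths are illustrative, not taken from the question; the type: "nfs" line is the relevant part):
folders:
    - map: ~/Code
      to: /home/vagrant/Code
      type: "nfs"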
If you're using Homestead Improved, use the built-in "sulu" project type for a decent Nginx auto-setup:
sites:
- map: homestead.app
to: /home/vagrant/Code/Project/web
type: sulu
I couldn't find existing posts related to my issue. On a Digital Ocean Droplet, mup setup went fine, but when I try to deploy, I get the following error. Any ideas? Thanks!
root@ts:~/ts-deploy# mup deploy
Meteor Up: Production Quality Meteor Deployments
Building Started: /root/TS/
Bundling Error: code=137, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
bash: line 1: 31217 Killed meteor build --directory /tmp/dc37af3e-eca0-4a19-bf1a-d6d38bb8f517
Below are the logs. node -v indicates I am using 0.10.31. How do I check which script is exiting with the error? Any other ideas? Thanks!
error: Forever detected script exited with code: 1
error: Script restart attempt #106
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #107
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #108
stepping down to gid: meteoruser
stepping down to uid: meteoruser
After I went back to an old backup of the DO Droplet and re-ran mup setup and mup deploy, I now get this in the command-line output:
Building Started: /root/TS
Bundling Error: code=134, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
FATAL ERROR: JS Allocation failed - process out of memory
bash: line 1: 1724 Aborted (core dumped) meteor build --directory /tmp/bfdbcb45-9c61-435f-9875-3fb304358996
and this in the logs:
>> stepping down to gid: meteoruser
>> stepping down to uid: meteoruser
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
The memory issue stems from using DigitalOcean's $5 Droplet. To solve the problem, I added swap to the server, as explained in detail below.
Create and enable the swap file using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Next, prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This swap will only stay active on the virtual private server until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness in the file should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 will cause swap to act as an emergency buffer, preventing out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
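To confirm the setting took effect, you can read the value back:
cat /proc/sys/vm/swappiness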
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
This only worked for me after increasing the swap space to 1 GB:
Turn all swap off:
sudo swapoff -a
Resize the swapfile
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
Make swapfile usable
sudo mkswap /swapfile
Turn swap back on:
sudo swapon /swapfile
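You can verify the new swap size afterwards with either of these:
swapon -s
free -m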
I have a path issue. I can't seem to figure out why I am getting this code 23 error. The complete error message is below; I am guessing that rsync can't write to my local /private/tmp directory.
Here is the output:
```
Do you really want to continue? (y/n): y
rsync: link_stat "/tmp/SGDU55.sql" failed: No such file or directory (2)
rsync error: some files could not be transferred (code 23) at /SourceCache/rsync/rsync-42/rsync/main.c(1400) [receiver=2.6.9]
Could not rsync from xxx@staging-5244.prod.xxx.com:/tmp/SGDU55.sql to [error]
/private/tmp/-to-drupal_db.sql.p0YIBu
```
Here is the abbreviated output of the drush simulate command.
```
$ drush sql-sync @aq6 @aqsolo --simulate
.....
Calling system(rsync -e 'ssh -i /Users/dave.ferrera/.vagrant.d/insecure_private_key' -akz --exclude=".git" --exclude=".gitignore" --exclude=".hg" --exclude=".hgignore" --exclude=".hgtags" --exclude=".bzr" --exclude=".bzrignore" --exclude=".bzrtags" --exclude=".svn" /private/tmp/-to-drupal_db.sql.iXOzSo vagrant@12.12.12.12:tmp/drupal_db.sql);
Calling system(ssh -i /Users/dave.ferrera/.vagrant.d/insecure_private_key vagrant@12.12.12.12 'mysql --database=drupal_db --host=localhost --user=root --password=password --silent < tmp/drupal_db.sql 2>&1');
$
```
Is there a way to change the /private/tmp path to something else?
I have already run chmod 1777 on /private and /private/tmp.
Since I was using Acquia, the problem seemed to be solved as soon as I changed to the correct %dump-dir path.
So now I have:
'%dump-dir' => '/mnt/tmp/',
If your alias root begins with 'root' => '/mnt/gfs.....', then it should be the same.
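For reference, %dump-dir normally lives under the path-aliases key of a Drush site alias; a minimal sketch (the alias name, root, and URI below are placeholders, not values from the question):
```
$aliases['example'] = array(
  'root' => '/mnt/gfs/example/docroot',  // placeholder docroot
  'uri'  => 'example.com',               // placeholder site URI
  'path-aliases' => array(
    '%dump-dir' => '/mnt/tmp/',          // writable dump directory for sql-sync
  ),
);
```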