Configure the administrative account - openstack

I'm deploying OpenStack Stein on Ubuntu Pro 18.04 LTS.
I came across these lines when configuring Keystone (the Identity service), as described in this article.
Would anybody please explain how to set the following configuration:
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:5000/v3
$ export OS_IDENTITY_API_VERSION=3
If I'm already running as root, is there any need for these environment variables?

Whether you are root or not has no meaning for the openstack command. The OpenStack admin user has nothing to do with the Linux root user.
You don't need the variables, but your command line becomes very long without them, for example openstack --os-username=admin --os-password=ADMIN_PASS --os-project-name=admin --os-user-domain-name=Default --os-project-domain-name=Default --os-auth-url=http://controller:5000/v3 --os-identity-api-version=3 server list. These variables are the most convenient way to tell the openstack command under which identity it should perform its actions.
How to set them? Type them on the command line, but the most common method is putting them in a file that you source. You can then have several such files for several different identities, such as the admin and demo identities in the linked document, which allows you to quickly switch from one identity to the other.

In short, put those commands in admin-openrc.sh, then source admin-openrc.sh whenever you need to use the OpenStack CLI with the administrative account.
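For example, a minimal admin-openrc.sh could look like the sketch below (ADMIN_PASS is the placeholder from the install guide; substitute the password you actually set for the admin user):
$ cat admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
$ . admin-openrc.sh
$ openstack token issue    # quick sanity check that authentication works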

Related

How to mount a bucket in GCE and make it available to RStudio Server

I have set up a Google Compute Engine (GCE) instance and I want to mount a Google Cloud bucket to it. Basically, I have uploaded my data to Google Cloud and I want to make it available for use in the RStudio Server I have installed on my instance. It seems my mounting was successful, but I cannot see the data in R (or in the shell).
I want the bucket to be mounted in /home/roberto/remote. I have run chmod 777 /home/roberto/remote and then gcsfuse my-project /home/roberto/remote. I got the following output:
2023/01/28 22:49:01.004683 Start gcsfuse/0.41.12 (Go version go1.18.4) for app "" using mount point: /home/roberto/remote
2023/01/28 22:49:01.022553 Opening GCS connection...
2023/01/28 22:49:01.172583 Mounting file system "my-project"...
2023/01/28 22:49:01.176837 File system has been successfully mounted.
However, I can't see anything inside /home/roberto/remote when I run ls or when I look inside it from RStudio Server. What should I do?
UPDATE: I had uploaded my folders to Google Cloud, but when I uploaded an individual file, it suddenly showed up! This makes me think the issue has something to do with implicit directories. Supposedly, running the same command as before with the --implicit-dirs flag should be enough (something like this: gcsfuse --implicit-dirs my-project /home/roberto/remote). However, this is returning an error message and I am not sure how to deal with it.
Error message:
2023/01/29 01:33:15.428752 Start gcsfuse/0.41.12 (Go version go1.18.4) for app "" using mount point: /home/roberto/remote
2023/01/29 01:33:15.446696 Opening GCS connection...
2023/01/29 01:33:15.548211 Mounting file system "my-project"...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running /usr/bin/fusermount3: exit status 1
Try editing the VM's Cloud API access scope for Storage to Full (a gcloud equivalent is sketched after these steps).
Follow the steps below:
Click/Select the VM instance
Stop the VM instance, then edit the VM instance.
Scroll down to Access scopes and select "Set access for each API"
Change the Storage from Read Only to Full.
Save and start your VM instance.
Then SSH to your VM instance and try to ls /home/roberto/remote
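If you prefer the command line, roughly the same change can be made with gcloud. This is only a sketch, assuming an instance named my-vm in zone us-central1-a (substitute your own names); note that --scopes replaces the whole scope list, so include any other scopes your VM needs, and add --service-account if the instance uses a non-default service account:
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-service-account my-vm --zone=us-central1-a --scopes=storage-full
gcloud compute instances start my-vm --zone=us-central1-a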
Other answers here could be useful depending on your issue. In my case, what solved it was indeed running the command gcsfuse --implicit-dirs my-project /home/roberto/remote. The error I was getting in the edit for my question was due to the fact that I had previously mounted the bucket and was trying to mount it again without unmounting it first (here is the official documentation on how to unmount the bucket). For more details on the importance of the --implicit-dirs flag take a look at the official documentation here. There are very similar solutions using, for instance, the /etc/fstab file. For that, take a look at this discussion in the official github page of gcsfuse.
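For reference, a minimal sketch of that remount sequence, using the bucket and mount point from the question (unmounting first avoids the fusermount exit status 1 error):
fusermount -u /home/roberto/remote                        # unmount the earlier mount first
gcsfuse --implicit-dirs my-project /home/roberto/remote   # remount with implicit directories enabled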
Try running the gcsfuse mount command with debug flags, which will help in knowing why the mount failed.
E.g.: gcsfuse --implicit-dirs --debug_fuse --debug_gcs --debug_http my-project /home/roberto/remote

Unable to connect Cassandra CQL shell to Azure-CosmosDB

I have entered the following command in my Windows CQL shell
set SSL_VERSION=TLSv1_2;
and got this error
No viable alternative at input 'set'([set]..)
Are you getting this error when you attempt to set the variable or upon launching the cqlsh command?
Please ensure you have the following two variables set, where SSL_CERTFILE references a trusted root CA bundle. This is the trusted root bundle from an OpenSSL install on Ubuntu.
export SSL_VERSION=TLSv1_2
export SSL_CERTFILE=/usr/lib/ssl/certs/ca-certificates.crt
Optionally, you can use: export SSL_VALIDATE=true should there be any concerns with the certificate.
On Windows, you use set instead of export.
Use export in place of set: export is used on Linux and set is used on Windows.
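Note that these are operating-system environment variables, so they need to be set in the terminal before launching cqlsh, not typed inside the CQL shell itself; the "No viable alternative at input 'set'" message looks like a CQL parser error, which suggests the command was typed at the cqlsh prompt. A sketch for a Windows command prompt (the certificate path is only an example, and the host/port placeholders follow the usual Cosmos DB Cassandra endpoint form; substitute your own values):
set SSL_VERSION=TLSv1_2
set SSL_CERTFILE=C:\certs\ca-certificates.crt
cqlsh <account-name>.cassandra.cosmos.azure.com 10350 -u <account-name> -p <primary-password> --ssl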

meteor build command failed with message "killedking"

meteor create cool
cd cool
meteor build /root/cool/production --directory --server=wdksw.com:3030
Killedking \
No production directory appears. How do I use this command?
I know it's 6 1/2 years later, but just in case it helps anyone...
It was "Killed" by the OS, but appeared over another console message being displayed at the time which ended "......king"; maybe something to do with linking.
It was probably killed because the OS ran out of memory; try setting the NODE_OPTIONS and/or TOOL_NODE_FLAGS environment variables, e.g. on Linux:
export NODE_OPTIONS=--max_old_space_size=16384
export TOOL_NODE_FLAGS=--max_old_space_size=16384
meteor build ...

Automounting riofs in ubuntu

I have several buckets mounted using the awesome riofs and they work great, however I'm at a loss trying to get them to mount after a reboot. I have tried entering in the following to my /etc/fstab with no luck:
riofs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,nonempty,config=/path/to/riofs.conf.xml 0 0
I have also tried adding a startup script to run the riofs commands to my rc.local file but that too fails to mount them.
Any ideas or recommendations?
Currently RioFS does not support fstab. In order to mount a remote bucket at startup time, consider adding the corresponding command line to your startup script (rc.local, as you mentioned).
If for some reason RioFS fails to start from the startup script, please feel free to contact the developers and/or file an issue report.
If you enter your access key and secret access key in the riofs config xml file, then you should be able to mount this via fstab or an init.d or rc.local script ..
See this thread
EDIT:
I tested this myself and this is what I find. Even with the AWS access details specified in the config file, there is no automatic mounting at boot. But to access the filesystem, all one needs to do is issue mount /mount/point/in-fstab and the fstab directive works and persists like a standard fstab-mounted filesystem.
So, it seems the riofs system is not ready at the stage of the boot process when filesystems are mounted. That's the only logical reason I can find so far. This can be solved with an rc.local or init.d script that just issues a mount command (at worst).
But riofs does work well, even if the documentation seems sparse. It is certainly more reliable and less buggy than s3fs.
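As a sketch, assuming the fstab entry from the question is already in place, the rc.local workaround can be as simple as:
#!/bin/sh -e
# /etc/rc.local - trigger the fstab-defined riofs mount once the system is up
mount /mnt/bucket-name
exit 0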
Thanks all,
I was able to get them auto-mounting from rc.local with syntax similar to:
sudo riofs --uid=33 --gid=33 --fmode=0777 --dmode=0777 -o "allow_other" -c ~/.config/riofs/riofs.conf.xml Bucket-Name /mnt/mountpoint
Thanks again!

openstack compute (nova) "error"

I'm trying to install OpenStack Compute (nova). When I run the command nova list,
I get the error ERROR: You must provide a username via either --os_username or env[OS_USERNAME]
How can I fix this?
If you used devstack (http://devstack.org/) to deploy OpenStack you can use the openrc trick:
$cd devstack/
$source openrc admin admin # for admin rights
or
$source openrc demo demo # for demo user
Otherwise you need to export OS variables manually:
$ export OS_USERNAME=admin
$ export OS_TENANT_NAME=<yourtenant>
$ export OS_PASSWORD=<yourpasswd>  # the password you used during deployment, etc.
Related question How to manage users/passwords in devstack?
If you want to install all the services manually, here's a handy manual: https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
I'd recommend installing it once following this manual to learn how it works, and then using the latest stable devstack whenever you need to set up a new environment, just to save time.
Regards
To remove this error you just need to execute the command
"source openrc"
where openrc is the file in which all the credentials are stored. Make sure you have that file in the folder. Your file might also be named something other than openrc, but it will end with rc; just change the command according to your file name.
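For example, with devstack the file usually lives in the devstack checkout; a sketch, assuming that layout (adjust the path and file name to your environment):
$ ls ~/devstack/*rc             # locate the credentials file (openrc, stackrc, ...)
$ source ~/devstack/openrc admin admin
$ nova list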
In my case I needed to call "source stackrc" and it fixed the problem.
I did the following to get rid of the error.
cd devstack
. openrc    # this will set up the environment
The command below is used to get the access rights of the "admin" user and use the "admin" project:
. openrc admin admin
