I'm new to OpenStack and I used DevStack to configure a multi-node dev environment, currently composed of a controller and two nodes.
I followed the official documentation and used the development version of DevStack from the official git repo. The controller was set up on a fresh Ubuntu Server 16.04.
I automated all the steps described in the docs using some scripts I made available here.
The issue is that my registered VM images don't appear on the Dashboard. The image page is just empty. When I install a single-node setup, everything works fine.
When I run openstack image list or glance image-list, the image registered during the installation process is listed as below, but it doesn't appear on the Dashboard.
+--------------------+--------------------------+--------+
| ID                 | Name                     | Status |
+--------------------+--------------------------+--------+
| f1db310f-56d6-4f38 | cirros-0.3.5-x86_64-disk | active |
+--------------------+--------------------------+--------+
openstack --version
openstack 3.16.1
glance --version
glance 2.12.1
I've googled a lot but got no clue.
Is there any special configuration needed to make images available in a multi-node setup?
Thanks.
UPDATE 1
I tried to set the image as shared using
glance image-update --visibility shared f1db310f-56d6-4f38-b5da-11a714203478, and then tried to add it to every listed project (openstack project list) using the command openstack image add project image_name project_name, but that doesn't work either.
UPDATE 2
I've included the command source /opt/stack/devstack/openrc admin admin inside my ~/.profile file so that all environment variables are set. It defines the username and project name as admin, but I've already tried to use the default demo project and demo username.
All env variables defined by the script are shown below.
declare -x OS_AUTH_TYPE="password"
declare -x OS_AUTH_URL="http://10.105.0.40/identity"
declare -x OS_AUTH_VERSION="3"
declare -x OS_CACERT=""
declare -x OS_DOMAIN_NAME="Default"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_PASSWORD="stack"
declare -x OS_PROJECT_DOMAIN_ID="default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_REGION_NAME="RegionOne"
declare -x OS_TENANT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_ID="default"
declare -x OS_USER_DOMAIN_NAME="Default"
declare -x OS_VOLUME_API_VERSION="3"
When I type openstack domain list I get the domain list below.
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+
As the env variables show, the domain is set as the default one.
After reviewing the whole installation process, I found that the issue was an incorrect floating IP range defined inside the local.conf file.
The FLOATING_RANGE variable in that file must be defined as a subnet of the node network. For instance, my controller IP is 10.105.0.40/24 while the floating IP range is 10.105.0.128/25.
I just forgot to change the FLOATING_RANGE variable (I was using the default value as shown here).
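For illustration, a minimal local.conf sketch consistent with that network looks like the following (only the relevant variables are shown; HOST_IP matches my controller, passwords and the rest of the file are omitted):
[[local|localrc]]
HOST_IP=10.105.0.40
# FLOATING_RANGE must be a subnet of the node network (10.105.0.0/24 here),
# not the stock default copied from the sample config
FLOATING_RANGE=10.105.0.128/25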
I have been struggling to use the vi editor in a WordPress container (on Kubernetes) to edit the wp-config.php file.
I am currently using this WordPress Helm chart from ArtifactHub: https://artifacthub.io/packages/helm/bitnami/wordpress
Image: docker.io/bitnami/wordpress:6.1.1-debian-11-r1
These are the errors I'm getting when trying to edit the wp-config.php inside the pod with either vi or vim
# vi wp-config.php
bash: vi: command not found
When I tried installing vi, I got this error:
apt-get install vi
# Error
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
Then I tried by first ssh-ing into the node hosting the WordPress pod, then exec into the container using docker with sudo privileges as shown below:
docker exec -it -u root <containerID> /bin/bash
I then tried installing the vi editor in the container, but I still got the same error.
The content I want to add to wp-config.php is the following. It's a plugin requirement so that I can store media files directly in my AWS S3 bucket:
define('SSU_PROVIDER', 'aws');
define('SSU_BUCKET', 'my-bucket');
define('SSU_FOLDER', 'my-folder');
Can I run the command like this:
helm install my-wordpress bitnami/wordpress \
--set mariadb.enabled=false \
--set externalDatabase.host=my-host \
--set externalDatabase.user=my-user \
--set externalDatabase.password=my-password \
--set externalDatabase.database=mydb \
--set wordpressExtraConfigContent="define('SSU_PROVIDER', 'aws');define('SSU_BUCKET', 'my-bucket');define('SSU_FOLDER', 'my-folder');"
In the chart documentation repository here, there are two possible ways to do it:
In your values file you could use the wordpressExtraConfigContent variable to append extra content, or use the wordpressConfiguration variable to supply a whole new wp-config.php.
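As a rough sketch of the first option, reusing the define() lines from the question and assuming the chart's wordpressExtraConfigContent parameter (the file name wp-values.yaml is arbitrary):
cat > wp-values.yaml <<'EOF'
wordpressExtraConfigContent: |
  define('SSU_PROVIDER', 'aws');
  define('SSU_BUCKET', 'my-bucket');
  define('SSU_FOLDER', 'my-folder');
EOF
helm upgrade --install my-wordpress bitnami/wordpress -f wp-values.yaml
The externalDatabase --set flags from the question can still be passed alongside -f.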
EDIT: You seem to be trying to define environment variables with PHP define(). In that case you can pass environment variables to the pods directly: use --set extraEnvVars, or (better) create a ConfigMap with the variables you want and pass --set extraEnvVarsCM=<your-configmap>, which exposes the ConfigMap entries as environment variables inside the WordPress container.
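A sketch of the ConfigMap route under the same assumptions (the ConfigMap name wp-s3-env is a placeholder):
kubectl create configmap wp-s3-env \
  --from-literal=SSU_PROVIDER=aws \
  --from-literal=SSU_BUCKET=my-bucket \
  --from-literal=SSU_FOLDER=my-folder
helm upgrade my-wordpress bitnami/wordpress --reuse-values --set extraEnvVarsCM=wp-s3-env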
The fix for me, after a marathon of different options, was simply to use a plugin to sync my media files with my AWS S3 bucket. There was no way for me to do anything inside the Bitnami WordPress container: I couldn't edit files or install any editor (vi/vim/nano). It was locked down, and I didn't want to modify and rebuild from their base image because we had running WordPress applications on a k8s cluster.
This is the plugin that I used: Media Cloud.
I am running this command:
aws ec2 describe-availability-zones --region ca-central-1 | jq '.AvailabilityZones[]|(.ZoneName)'
on two identical macOS machines and one Amazon Linux machine.
The macOS machine in question shows this error:
parse error: Invalid numeric literal at line 1, column 18
However, the Amazon Linux machine and the other macOS machine show the correct output.
Please help me! This is driving me crazy
This error message indicates that the input piped to jq wasn't valid JSON. Since this input comes directly from the output of the aws ec2 describe-availability-zones command, it looks like the aws command isn't emitting JSON, or it's emitting other text alongside the JSON.
The fastest way to diagnose this is going to be for you to find out what text the aws command is emitting.
One possible cause could be that you have it configured via an environment variable (AWS_DEFAULT_OUTPUT) or configuration file (e.g. ~/.aws/config) to output YAML or text or tables. (In fact, I consider this probable. I can reproduce the error message exactly down to the column number if I set it to output YAML.) You could rule this out by explicitly specifying --output json.
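For example, forcing JSON on the failing machine should make the parse error disappear if the output format is the cause (the same pipeline from the question, with --output json added):
aws ec2 describe-availability-zones --region ca-central-1 --output json | jq '.AvailabilityZones[]|(.ZoneName)'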
Beyond that, I suggest you compare these machines to each other. For example, try this on each machine and see what's different on the odd machine:
echo Versions:
aws --version
jq --version
echo Environment:
env | grep '^AWS_'
echo AWS configuration:
aws configure list
echo AWS config file:
cat ~/.aws/config
I am trying to use Flyway to set up a DB2 test/demo environment in a Docker container. I have an image of DB2 running in a docker container and now am trying to get flyway to create the database environment. I can connect to the DB2 docker container and create DB2 objects and load them with data, but am looking for a way for non-technical users to do this (i.e. clone a GitHub repo and issue a single docker run command).
The Flyway Docker site (https://github.com/flyway/flyway-docker) indicates that it supports the following volumes:
| Volume | Description |
|-------------------|--------------------------------------------------------|
| `/flyway/conf` | Directory containing a flyway.conf |
| `/flyway/drivers` | Directory containing the JDBC driver for your database |
| `/flyway/sql` | The SQL files that you want Flyway to use |
I created the conf, drivers, and sql directories. In the conf directory, I placed a flyway.conf file containing my Flyway URL, user name, and password:
flyway.url=jdbc:db2://localhost:50000/apidemo
flyway.user=DB2INST1
flyway.password=mY%tEst%pAsSwOrD
In the drivers directory, I added the DB2 JDBC Type 4 drivers (e.g. db2jcc4.jar, db2jcc_license_cisuz.jar).
In the sql directory I put a simple table creation statement (file name: V1__make_temp_table.sql):
CREATE TABLE EDS.REFT_TEMP_DIM (
    TEMP_ID INTEGER NOT NULL
  , TEMP_CD CHAR (8)
  , TEMP_NM VARCHAR (255)
)
DATA CAPTURE NONE
COMPRESS NO;
When I attempt the docker run with the flyway/flyway image as described in the GitHub README.md, Flyway does not pick up the flyway.conf file and complains that it does not know the url, user, and password.
docker run --rm -v sql:/flyway/sql -v conf:/flyway/conf -v drivers:/flyway/drivers flyway/flyway migrate
Flyway Community Edition 6.5.5 by Redgate
ERROR: Unable to connect to the database. Configure the url, user and password!
I then put the url, user, and password inline, and it could not find the JDBC driver.
docker run --rm -v sql:/flyway/sql -v drivers:/flyway/drivers flyway/flyway -url=jdbc:db2://localhost:50000/apidemo -user=DB2INST1 -password=mY%tEst%pAsSwOrD migrate
ERROR: Unable to instantiate JDBC driver: com.ibm.db2.jcc.DB2Driver => Check whether the jar file is present
Caused by: Unable to instantiate class com.ibm.db2.jcc.DB2Driver : com.ibm.db2.jcc.DB2Driver
Caused by: java.lang.ClassNotFoundException: com.ibm.db2.jcc.DB2Driver
Therefore, I believe the issue is in the way I am associating my local directories with the Flyway volumes. Does anyone have an idea of what I am doing wrong?
You need to supply absolute paths to your volumes for Docker to bind-mount them; a bare name like sql in -v sql:/flyway/sql is treated as a named Docker volume rather than your local directory.
Changing the relative paths to absolute paths fixed the volume mount issue.
docker run --rm \
-v /Users/steve/github-ibm/flyway-db-migration/sql:/flyway/sql \
-v /Users/steve/github-ibm/flyway-db-migration/conf:/flyway/conf \
-v /Users/steve/github-ibm/flyway-db-migration/drivers:/flyway/drivers \
flyway/flyway migrate
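A portable variant of the same fix, assuming the command is run from the repository root so that $(pwd) expands to the directory containing sql/, conf/, and drivers/:
docker run --rm \
  -v "$(pwd)/sql:/flyway/sql" \
  -v "$(pwd)/conf:/flyway/conf" \
  -v "$(pwd)/drivers:/flyway/drivers" \
  flyway/flyway migrate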
I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found this documentation on how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know this is not Cloud Run, but I believe the same command can be used to inject your credentials into the container.
Since links and their content can change, and the community often needs the actual steps and commands, I will copy the steps needed to inject the credentials:
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --env (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
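Trimmed down to the question's plain GCE + Node.js container (dropping the Cloud Run specific K_* variables; PROJECT_ID, IMAGE, and the key file name are placeholders), the run command would look something like:
docker run -d \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
  gcr.io/PROJECT_ID/IMAGE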
Hope this works for you.
I need to edit physical_interface_mappings. Per the instructions, this setting is in the file below. However, there is no linuxbridge folder in /etc/neutron/plugins/. So where should I edit physical_interface_mappings?
/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
If neutron-linuxbridge-agent is already running, run the following command to see which config files were used to start the service.
ps aux | grep neutron-linuxbridge-agent
Otherwise, it might be in /etc/neutron/plugins/ml2/.
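Once you locate the file (on many distributions the package installs it as /etc/neutron/plugins/ml2/linuxbridge_agent.ini), the option lives under the [linux_bridge] section; the provider network name and interface below are placeholders:
[linux_bridge]
physical_interface_mappings = provider:eth1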