I'm a newbie with Docker.
In the dashboard, I deployed WordPress and MariaDB in different layers, and in the WordPress container I set up a link to MariaDB.
Which variables should I edit in the WordPress container so that it is initialized with the MariaDB database?
The Links section is intended for establishing connections between your Docker containers (for that, they should be placed at different layers inside a single environment).
Once such a connection is set, a container is able to work with the environment variables of the linked template (the imported properties get a special prefix so they can be easily distinguished from the container's native ones).
To set a new link, click the Add button and fill in the fields that appear:
Node - select the layer with the required image from the drop-down list of those available within the current environment
Alias - type a connection alias (DB in our case). It will subsequently be used as a prefix for the chosen container's variables when they are imported into the container currently being configured.
After that, click Save to confirm the linking settings. You can link as many different nodes to a single container as you require.
You can always Edit or Remove an unnecessary link with the corresponding buttons at the top pane of the Docker layer settings frame.
After the new settings are applied, you can check the results by switching to the Variables section (where the newly imported parameters will be listed).
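As a purely illustrative example (the exact names and joining format depend on the variables the MariaDB image exposes; values here are placeholders), after linking the MariaDB layer with the DB alias, the Variables section of the WordPress container could show imported entries along the lines of:
DB_MYSQL_ROOT_PASSWORD=<root password>
DB_MYSQL_DATABASE=wordpress
DB_MYSQL_USER=wp_user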
Tip: Upon linking Docker containers, Jelastic also adds a corresponding DNS record (with a name identical to the used alias) to the Jelastic DB. This way, you can refer to a particular container from inside these two environment layers not just by its IP address or NodeID, but also by the assigned alias with a counter, i.e. {alias_name}_N.
For example, after linking with the DB alias (as shown above), you can ping specific containers at the appropriate layer as "db_1", "db_2", etc. while working with the Platform internal network via the Jelastic SSH Gateway. If you use the common layer alias instead (i.e. without a counter, "db" in our case), the system will use a Round-Robin algorithm to choose a container within the defined node group.
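For instance, from a shell opened through the Jelastic SSH Gateway you could check connectivity like this (container names follow the alias-plus-counter scheme described above):
ping db_1    # a specific container at the linked MariaDB layer
ping db      # common layer alias - Round-Robin across the node group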
https://docs.jelastic.com/docker-links
UPD1
In order to initialize the database, add these variables to the MariaDB container:
MYSQL_ROOT_PASSWORD
This variable is mandatory and specifies the password that will be set for the MariaDB root superuser account (e.g. my-secret-pw).
MYSQL_DATABASE
This variable is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (see below) then that user will be granted superuser access (corresponding to GRANT ALL) to this database.
MYSQL_USER, MYSQL_PASSWORD
These variables are optional, used in conjunction to create a new user and to set that user's password. This user will be granted superuser permissions (see above) for the database specified by the MYSQL_DATABASE variable. Both variables are required for a user to be created.
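As a concrete illustration (a sketch only - values are placeholders, and it assumes the official WordPress and MariaDB images plus the DB link alias from above), the two layers could be configured like this:
# MariaDB container - initializes the database on first start
MYSQL_ROOT_PASSWORD=<root password>
MYSQL_DATABASE=wordpress
MYSQL_USER=wp_user
MYSQL_PASSWORD=<user password>
# WordPress container - points the app at the linked MariaDB node
WORDPRESS_DB_HOST=db_1
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wp_user
WORDPRESS_DB_PASSWORD=<user password>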
We are using MariaDB v10.5.15 with Galera-4 v26.4.11 clustering.
The cluster is in the weighted quorum mode so that the primary site has more votes than the non-primary. During a network outage, the primary site with more votes prevails and continues service without the other peer.
The system needs to undergo regular disaster recovery exercises, including switching the primary role to the other peer. So we need to change the weight assignments, for the design reasons explained above.
Within the MySQL client, we can set pc.weight dynamically with a command like set global wsrep_provider_options="pc.weight=2";. However, this command only changes the in-memory configuration, so if the host is rebooted, MariaDB will start again with the old value written in the configuration file /etc/my.cnf.d/server.cnf.
To make the new weights non-volatile, we need to edit the part of the configuration file shown below. That edit is error-prone because the file packs many other items into the same wsrep_provider_options line, with pc.weight in the middle.
[galera]
...
wsrep_provider_options="socket.ssl=true; socket.ssl_key=/etc/pki/galera/server-key.pem; socket.ssl_cert=/etc/pki/galera/server-cert.pem; socket.ssl_ca=/etc/pki/galera/ca-cert.pem; pc.weight=2"
...
I am wondering:
Is it possible to set the pc.weight non-volatile without editing the configuration files?
Otherwise, is it possible to separate pc.weight into another .cnf file while keeping the other items of wsrep_provider_options in the original file?
We highly appreciate your help and suggestions.
No, pc.weight cannot be made non-volatile without editing a configuration file.
If you put it into another configuration file and start the server with --defaults-extra-file=/path/to/other/file.cnf, that would pick it up.
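A minimal sketch of that approach (the file path is a placeholder; keep in mind that a later option file overrides the whole wsrep_provider_options value rather than merging it, so the extra file has to repeat the full string with only pc.weight changed, otherwise the SSL items would be lost):
# /etc/galera-weight.cnf (hypothetical extra file, kept outside the auto-included my.cnf.d)
[galera]
wsrep_provider_options="socket.ssl=true; socket.ssl_key=/etc/pki/galera/server-key.pem; socket.ssl_cert=/etc/pki/galera/server-cert.pem; socket.ssl_ca=/etc/pki/galera/ca-cert.pem; pc.weight=2"
# start the server with (this option must come first on the command line):
mysqld --defaults-extra-file=/etc/galera-weight.cnf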
Another option is to start another node, even an arbitrator node, on the secondary site during DR/DR testing. That node might need a weight greater than 2.
How does the primary site know the weight of a node it hasn't seen? I'm not sure, so be careful.
Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosts its own copy of the app with a private database connection string
Background
In the non-Docker environment, each copy is a virtual directory under IIS and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the Docker environment (one container per client), each copy is deployed as the central root application, so the route becomes / here.
Challenge
Since the root image is going to be the same, how can the web.config be overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage at the ECS level that can provide client-specific values when the corresponding containers load.
Presenting the approach we used to solve this issue. Hope it may help others stuck in similar situations.
With the problem statement tied to having a single root image and any customization being applied at runtime, we knew there needed to be a transformation of web.config at the time the corresponding containers load.
The solution was to use a PowerShell script that reads the web.config and replaces the specific values whose keys carry a custom prefix. The values are passed in via custom environment variables within ECS, and the web.config was also updated so those keys include the prefix.
Now, since a Docker container can have only a single entry point, a new base image was created that instantiated an IIS server and ran a PowerShell script at startup. That startup script invoked the transformation script and then pointed ServiceMonitor at the IIS worker process (w3wp).
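A rough PowerShell sketch of such a transform plus startup step (the paths, the CLIENT_ prefix, and the ServiceMonitor call are assumptions for illustration, not the exact script used):
# transform.ps1 - overwrite prefixed web.config values from ECS environment variables
$configPath = 'C:\inetpub\wwwroot\web.config'
[xml]$config = Get-Content -Raw $configPath
foreach ($conn in $config.configuration.connectionStrings.add) {
    # look for an env variable named CLIENT_<connection name> injected by ECS
    $envValue = [Environment]::GetEnvironmentVariable('CLIENT_' + $conn.name)
    if ($envValue) { $conn.connectionString = $envValue }
}
$config.Save($configPath)
# hand control to ServiceMonitor, as the stock ASP.NET images do (it watches the w3svc service)
& 'C:\ServiceMonitor.exe' w3svc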
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
I would use environment variables, as the OP suggests, with a startup transform; however, I want to make the point that you do not want sensitive information like DB passwords in ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything although you can choose to only encrypt the sensitive information and leave the others clear.
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample ruby code for creating task definition:
require 'aws-sdk-ssm'   # SSM client for Parameter Store
ssm_client = Aws::SSM::Client.new
# one ECS "secret" per parameter: ENV name = last path segment, value pulled from the parameter's ARN
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
secrets = params.map { |p| { name: p.name.split('/')[-1], value_from: p.arn } }
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
Is it possible to scope OpenStack CLI output so that it lists networks only for a single project? I have tried multiple options like --os-project-id, --os-project-name, etc., but it seems to list all networks across multiple projects/tenants.
Currently, the command I am using is:
openstack network list --os-username XXX --os-password YYY --os-project-id ZZZ
Note: The credentials I am using here belong to an 'admin' account
Parameters set in the environment are :
OS_PROJECT_ID=XXX
OS_REGION_NAME=XXX
OS_TENANT_ID=XXX
OS_USER_DOMAIN_NAME=XXX
OS_PROJECT_NAME=XXX
OS_AUTH_VERSION=XXX
OS_IDENTITY_API_VERSION=XXX
OS_PASSWORD=XXX
OS_AUTH_URL=XXX
OS_USERNAME=XXX
OS_TENANT_NAME=XXX
OS_INTERFACE=XXX
OS_PROJECT_DOMAIN_NAME=XXX
Maybe your networks are shared by all tenants. If you only have a few networks, you can verify with neutron net-show Network-Name and review the shared attribute
BTW I use the env variable OS_PROJECT_NAME to switch between projects
Without any explicit filter specified in the parameters, Neutron's network API returns all networks that the user accessing the API has privileges to list. The recommended way to scope down the list of networks to a specific project is to explicitly specify that filter.
Via CLI, you can scope the list to a specific project "demo" using the following example:
openstack network list --project demo
You can see more filtering options via the help text:
openstack help network list
The issue was caused by an older version of the OpenStack CLI (v3.7.0).
Using OpenStack CLI v3.13.0, I was able to meet my requirement. By default, with the domain admin account, the CLI still dumped the entire network list, but with the --long flag the 'Project' field was populated this time, so I could filter the results for the specific project.
This was not the case with the previous CLI versions: with the --long flag, all values of 'Project' were None.
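For example, with the newer client the workaround looked roughly like this (ZZZ is the project ID placeholder from the question; the Project column is the one added by --long):
openstack network list --long -c ID -c Name -c Project | grep ZZZ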
I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (i.e. it won't allow me to change the instance type to a bigger one; I don't get why not, but there you go), so it looks like I need to find a way to migrate my WordPress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySql data off the first instance and re-import into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: Should I try to use Google Cloud SQL instead of a local MySQL installation?
In order to upgrade your VM:
access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name)
Scroll down to the "Disks" section, and un-check "Delete boot disk when instance is deleted"
Delete the VM in question. Take note that the disk, named after the instance, will remain.
Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from point 3 above, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware / performance.
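For reference, a hedged gcloud sketch of the same steps (instance name, disk name, zone, and machine type are placeholders, and flags may differ between gcloud versions):
# keep the boot disk while deleting the old, small VM
gcloud compute instances delete wordpress-vm --zone us-central1-a --keep-disks boot
# recreate on bigger hardware, booting from the surviving disk
gcloud compute instances create wordpress-vm --zone us-central1-a --machine-type n1-standard-2 --disk name=wordpress-vm,boot=yes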
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual use. A few considerations when setting up this kind of instance:
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database.
configure Cloud SQL to use SSL certificates.
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks that are attached to your instance:
The data disk contains /var/www/, which holds all of the WordPress files. It's mounted on the instance at /wordpress
The boot disk contains everything else, including the MySQL database that was created for the Wordpress installation.
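If you want to double-check that layout from inside the instance, a generic check is:
df -h /wordpress    # the data disk with the WordPress files (/var/www/)
df -h /             # the boot disk with everything else, including MySQL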
How do I create a database link in Oracle 11g to access tables?
You seem to have copied the example in the documentation without really understanding it.
The USING 'local' part of the statement creates a link to 'the local database', where local is the service name of a database. (The example is a bit confusing, to be fair.)
When the link is used it tries to interpret local as a service name, appending the current database's domain, as the docs say:
USING 'connect string'
Specify the service name of a remote database. If you specify only the database name, then Oracle Database implicitly appends the database domain to the connect string to create a complete service name. Therefore, if the database domain of the remote database is different from that of the current database, then you must specify the complete service name.
If you're trying to create a link back into the same database - which would be a bit odd, but I've seen it done in place of granting access across schemas, and that seems to be what the example is hinting at - then you can replace 'local' in the USING clause with the service name of your current database (e.g. USING 'orcl', or whatever).
You can also use a TNS alias; if your tnsnames.ora has an entry for SOME_DB which points to the SID or service name of another database, you can have USING 'some_db'. You should be able to use any connect string, I think; certainly Easy Connect is allowed. There's more in the net services admin guide.
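To make that concrete, a minimal sketch (the link name, user, password, alias, and table names are all placeholders):
-- 'some_db' is a tnsnames.ora alias, or an Easy Connect string like '//remotehost:1521/orcl'
CREATE DATABASE LINK my_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'some_db';
-- then reference remote tables through the link
SELECT * FROM some_table@my_link;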