pax jdbc datasource configuration variable - apache-karaf

I'm using a linked Docker MySQL instance with my Docker container, which contains a Karaf 4 instance configured with a Pax JDBC datasource.
My problem is that my JDBC URL depends on some environment variables set up by Docker (as the MySQL container IP is not always the same). The IP address variable is MYSQL_PORT_3306_TCP_ADDR.
I tried to start Karaf with -DMYSQL_PORT_3306_TCP_ADDR=XXX.XXX.XXX.XXX and to set up my datasource with a config file (etc/org.ops4j.datasource.mydb.cfg) which would contain:
url=jdbc:mysql://${mysql.port.3306.tcp.addr}:3306/mydb
but looking at the service:list in karaf I see:
url = jdbc:mysql://:3306/pandoradb
so the variable is clearly not used.
Is there a way to do what I want?
Best.

I finally found the solution!
I don't know why I tried the lower-case ${mysql.port.3306.tcp.addr} form in my configuration. Using the variable under its original name is interpreted properly:
url=jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:3306/mydb
Best.
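For reference, a complete etc/org.ops4j.datasource.mydb.cfg might look like the sketch below. The driver name, database name, and credentials here are assumptions; adjust them to your setup:

```
osgi.jdbc.driver.name = mysql
dataSourceName = mydb
url = jdbc:mysql://${MYSQL_PORT_3306_TCP_ADDR}:3306/mydb
user = dbuser
password = dbpass
```

With this in place, pax-jdbc-config registers the DataSource as an OSGi service, and the environment variable is substituted into the URL at configuration time.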

Related

what is the url for wp core install --url for kubernetes?

In Kubernetes, I have a WordPress container and a wp-cli Job.
To create the WordPress tables in the database using the URL, title, and default admin user details, I am running this command in the wp-cli Job:
wp core install --url=http://localhost:8087 --title=title --admin_user=user --admin_password=pass --admin_email=someone@email.com --skip-email
With this --url value, Minikube does not serve the WordPress site.
You should put the IP address of your service there in place of "localhost".
When I say service, I mean the service that exposes your deployment/pods (it's another Kubernetes object you have to create).
You can pass that IP address in using an environment variable. When a service exists, pods started afterwards inherit extra environment variables that Kubernetes places in them, through which you can access the service's IP address, port, and so on; check the documentation.
The second option is to use the name of your service (still the Kubernetes object you created to expose your deployment). It will ultimately be resolved to the IP address by the cluster's DNS (CoreDNS, as of today, is started along with Minikube).
Both options are covered in the documentation, in the section on discovering services.
It took me a while to understand that names like service-name.namespace.svc.cluster.local are URLs like any other (like subsubdomain.subdomain.stackoverflow.com), except that they are resolved within the cluster.
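As a sketch of the second option: assuming the Service exposing WordPress is named wordpress in the default namespace and listens on port 80 (both assumptions), it could look like:

```yaml
# Hypothetical Service exposing the WordPress deployment on port 80
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80
```

The wp-cli Job would then run wp core install --url=http://wordpress.default.svc.cluster.local --title=title ... with the other flags as before, and the name resolves inside the cluster.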

Wordpress container changing concurrent maximum clients

I need to change the number of clients that can connect concurrently to a WordPress Docker instance. As far as I know this is done through the MaxClients directive, but I don't know in which file I should change it. I also wonder whether there is any environment variable that can be set when launching the Docker instance to change this parameter.
MaxRequestWorkers was called MaxClients before Apache httpd version 2.3.13.
The MaxRequestWorkers directive can be configured inside the container in /etc/apache2/mods-available/mpm_prefork.conf.
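A minimal sketch of that file, assuming the prefork MPM is active (the numbers below are illustrative, not recommendations):

```
# /etc/apache2/mods-available/mpm_prefork.conf
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```

You can bake an edited copy into your image with a Dockerfile COPY, or bind-mount it at run time; as far as I know, the official wordpress image does not expose MaxRequestWorkers as an environment variable.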

How to communicate with Kafka server running inside a docker

I am using the Apache KafkaConsumer in my Scala app to talk to a Kafka server, where the Kafka and Zookeeper services are running in a Docker container on my VM (the Scala app also runs on this VM). I have set the KafkaConsumer's "bootstrap.servers" property to 127.0.0.1:9092.
The KafkaConsumer does log, "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code is setting the coordinator values based on the response it receives which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}} , that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve address: e7059f0f6580
The address e7059f0f6580 is the container ID of that docker container.
I have tested using telnet that my VM is not detecting this as a hostname.
What setting do I need to change so that the Kafka broker in my Docker container returns localhost/127.0.0.1 as the host in its response? Or is there something else that I am missing or doing incorrectly?
Update
advertised.host.name is deprecated, and --override should be avoided.
Add/edit advertised.listeners in the format
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that PORT is listed in the listeners property as well.
After investigating this problem for hours on end, I found that there is a way to set the hostname while starting up the Kafka server, as follows:
kafka-server-start.sh --override advertised.host.name=xxx (in my case: localhost)
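Following the non-deprecated approach from the update, the equivalent server.properties entries would look like the sketch below (the host name and listener name are assumptions for a broker that should be reachable from the Docker host):

```
# Listener the broker binds to inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# Address the broker hands back to clients in metadata responses
advertised.listeners=PLAINTEXT://localhost:9092
```

It is the advertised.listeners value, not the bind address, that ends up in the coordinator response the client resolves.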

Docker Swarm JDBC connection

Running a PostgreSQL DB on a Docker Swarm containing multiple nodes where the database can be deployed. Using Docker version 1.12+.
Using a Data container, the Postgresql failover is working nicely. Now I would like to have a Java client connect to the DB and also survive failover. How should the JDBC connections be managed here? Does the connection string change? Or should it be managed through something like an nginx container running elsewhere? Is there an example of this available for study anywhere? Conceptually, I think I get moving this off to another (nginx-like) container, but can't quite grok the details!
In swarm mode, you get service discovery by DNS name for services in the same overlay network; you don't need to add a proxy layer yourself. The swarm networking docs go into detail, but in essence:
docker network create -d overlay app-net
docker service create --name app-db --network app-net [etc.]
docker service create --name app-web --network app-net [etc.]
Your database server is available by DNS within the network as app-db, to any service in the same app-net network. So app-db is the server name you use in your JDBC connection string. You can have multiple replicas of the Postgres container, or a single container which moves around at failover - the service will always be available at that address.
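A JDBC connection string using the swarm DNS name could then look like this sketch (the database name, port, and credentials are assumptions):

```
jdbc:postgresql://app-db:5432/mydb?user=dbuser&password=dbpass
```

Nothing in the string changes at failover; the app-db name keeps resolving to wherever the service is currently running.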
But: I would be cautious about failover with your data container. You have a single container holding your database state; even if that state is in a volume, it won't move around the cluster. So if the node holding the data fails, your data container will start somewhere else, but the data won't go with it.

Error in configuring multiple networks using weave network driver plugin for docker

I am going through an article on the weave net driver and was trying my hand at it. I was able to use the default weavemesh driver for container-to-container communication on a single host. The issue comes when I try to create multiple networks using the weave network driver plugin. I get the following error:
[ankit#local-machine]$ docker network create -d weave netA
Error response from daemon: failed to parse pool request for address space "GlobalDefault" pool "" subpool "": cannot find address space GlobalDefault (most likely the backing datastore is not configured)
Now, as I understand from the Docker documentation at Getting Started with Docker Multi-host Networking, it needs a key-value store to be configured. I was wondering whether my understanding is correct. Is there any way to create multiple networks over weave to achieve network isolation? I want to be able to segregate the network traffic of one container from another container running on the same box.
There is a recent announcement of a weave 1.4+ plugin, "Docker networking without an external cluster store". How exactly does it work? It is not very clear whether it can be used to create multiple networks over weave.
This issue asked:
Did you start the docker daemon with --cluster-store?
You need to pass peer IPs to weave launch-router $peers when starting Docker with --cluster-store and --cluster-advertise.
The doc mentions:
The Weave plugin actually provides two network drivers to Docker
one named weavemesh that can operate without a cluster store and
one named weave that can only work with one (like Docker’s overlay driver).
Hence the need to Set up a key-value store first.
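As a sketch of that prerequisite for the weave driver, the Docker daemon would be started against a cluster store roughly like this (the Consul address and interface are assumptions; any supported key-value store works):

```
# Hypothetical Consul cluster store reachable at 192.168.1.10
dockerd --cluster-store=consul://192.168.1.10:8500 \
        --cluster-advertise=eth0:2376
```

Only then does docker network create -d weave netA have a datastore to allocate the "GlobalDefault" address space from.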
If you are using the weave plugin, your understanding is correct.
PR 1738 has more on the new weave 1.4+ ability to operate without a keystore with the weavemesh driver. Its doc does mention:
If you do create additional networks using the weavemesh driver, containers attached to them will be able to communicate with containers attached to weave; there is no isolation between those networks.
But PR 1742 is still open "Allow user to specify a subnet range for each docker host".
