WordPress container: changing the maximum number of concurrent clients

I need to change the number of clients that can connect concurrently to a WordPress Docker instance. As far as I know this is done through the MaxClients directive, but I don't know which file I should change it in. I also wonder if there is any environment variable that can be set when launching the Docker instance to change this parameter.

MaxRequestWorkers was called MaxClients before Apache 2.3.13.
The MaxRequestWorkers directive can be configured inside the container in /etc/apache2/mods-available/mpm_prefork.conf.
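As far as I know there is no built-in environment variable for this in the official wordpress image, so one option is to bake an override into your own image. A minimal sketch, assuming the official Apache-based wordpress image; the worker numbers below are only illustrative:

    # Dockerfile (sketch)
    FROM wordpress:latest
    # Replace the prefork MPM settings with our own copy
    COPY mpm_prefork.conf /etc/apache2/mods-available/mpm_prefork.conf

    # mpm_prefork.conf (sketch; tune the numbers to your memory budget)
    <IfModule mpm_prefork_module>
        StartServers             5
        MinSpareServers          5
        MaxSpareServers         10
        MaxRequestWorkers      150
        MaxConnectionsPerChild   0
    </IfModule>

Alternatively, you can edit the same file in a running container with docker exec and restart Apache, but that change is lost as soon as the container is recreated.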

Related

Multiple dokku apps, one domain

The behavior I want:
If the user goes to http://www.example.com/{anything-but-admin} one dokku app responds.
However, if the user goes to http://www.example.com/admin a different dokku app responds.
Does dokku provide a simple way to do this? I believe I would have to disable the proxy port mapping and add a custom nginx implementation, but even if I do that, the docs specify:
If a proxy is disabled, Dokku will bind your container's port to a random port on the host for every deploy, e.g. 0.0.0.0:32771->5000/tcp.
If this is the correct thing to do, how do I force a static port number, so I can add that port number to my custom nginx configuration?
You can deploy two apps and have one of the apps reference the other's upstream.
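A hedged sketch of how that can look: dokku's generated nginx config for an app typically includes any *.conf files from /home/dokku/<app>/nginx.conf.d/, so on the app serving www.example.com you could drop in a location block that points /admin at the other app's upstream. The app names and upstream name here are assumptions; check the generated nginx.conf files under /home/dokku/ for the real upstream name:

    # /home/dokku/main-site/nginx.conf.d/admin.conf (sketch; app names are hypothetical)
    location /admin {
        # "admin-5000" stands in for the upstream dokku generated for the admin app;
        # look inside /home/dokku/admin/nginx.conf for its actual name
        proxy_pass http://admin-5000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

Because dokku defines each app's upstream at the http level of nginx, a location block in one app's server block can reference another app's upstream, so you don't have to disable the proxy or pin a static port.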

Docker on Google Cloud

I have a CentOS VM instance in Google Cloud and have installed Docker on it. I have created a container with a web interface, but I am not able to access it when I try from outside (in another browser tab). What do I need to do to access it from outside the cloud?
There are several hops between your browser and your containerised web interface.
The first is from the public IP through the GCP firewall into the instance, and you might be getting stuck here. When you created the instance, did you select "Allow HTTP traffic" and "Allow HTTPS traffic" in the Firewall section?
If you click through to your instance details in the GCP dashboard, you can see under Firewalls whether this is selected. Under Network you can also see which network profile your instance is using; click the listed network to check whether it is set up to allow the traffic you are trying to send through.
If this all looks right and traffic is getting to the instance but not the web interface, it could be that the Docker port is not mapped to a port on the host. When you started the container, did you use the -p option to map the ports?
If this is also right, then it could be that the Docker image is not exposing its port internally. In the Dockerfile used to build the image for the container, is there a line starting with EXPOSE, or does it build FROM an image that does?
There are more possible points of failure in this chain, but I have tried to list some likely ones. If none of this helps, let me know in the comments and we can try to debug the issue.
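For reference, a quick checklist version of the above; the firewall rule name, ports and image name are placeholders rather than anything from your setup:

    # 1. Open the port in the GCP firewall (or tick the HTTP/HTTPS boxes in the console)
    gcloud compute firewall-rules create allow-http --allow tcp:80

    # 2. Publish the container port on the host when starting the container
    docker run -d -p 80:8080 my-web-image

    # 3. In the image's Dockerfile, make sure the application port is exposed
    EXPOSE 8080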

How to set up a DNS nameserver permanently in a Docker user-defined network?

I need to define a nameserver in /etc/resolv.conf for a user-defined network.
I know that the --dns option works in the Docker Engine's default bridge network.
But I have created a user-defined network.
There are two documentation pages describing the different modes.
For the bridge mode:
https://docs.docker.com/engine/userguide/networking/default_network/configure-dns/
For the user-defined network:
https://docs.docker.com/engine/userguide/networking/configure-dns/
The documentation for the user-defined network states:
The exact details of how Docker manages the DNS configurations inside the container can change from one Docker version to the next. So you should not assume the way the files such as /etc/hosts, /etc/resolv.conf are managed inside the containers and leave the files alone and use the following Docker options instead.
If you now look at the details of that documentation, you'll find that --dns also works in the user-defined network mode, but that there are more options available and certain default behavior applies.
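So in practice it can be as simple as the following sketch; the network name, image and DNS server here are just examples:

    # create the user-defined network and start a container with an explicit DNS server
    docker network create my-net
    docker run -d --network my-net --dns 8.8.8.8 --dns-search example.com my-image

    # inside the container, /etc/resolv.conf will point at Docker's embedded
    # resolver (127.0.0.11), which forwards external lookups to 8.8.8.8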

Automatically append Docker containers to the upstream config of an nginx load balancer

I'm running Docker Compose (v2) and have a Node service (the website) and a Python-based API deployed, with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers which Docker makes available. However, the problem is that I want the upstream nginx config to be dynamic, e.g. if I add another Docker container, it simply appends the address of that container to the list of IPs in the upstream block.
My idea was to create a script which will automatically append the upstream servers using env variables when the containers change, but I'm unsure where to start and can't find a good example.
There are a couple of ways to achieve this. What you are referring to is usually called service discovery, and it comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
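A rough Compose sketch of that first approach; the service names, image names and VIRTUAL_HOST values are placeholders:

    # docker-compose.yml (sketch)
    version: "2"
    services:
      proxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
        volumes:
          # lets the proxy watch containers starting and stopping
          - /var/run/docker.sock:/tmp/docker.sock:ro
      website:
        image: my-node-website            # placeholder
        environment:
          - VIRTUAL_HOST=www.example.com
      api:
        image: my-python-api              # placeholder
        environment:
          - VIRTUAL_HOST=api.example.com

With that in place, something like docker-compose scale website=3 should be enough for the proxy to regenerate its upstream block with the new containers.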
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
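For the second approach, a confd template for the upstream block might look roughly like this, assuming registrator writes the container addresses under /services/website/ in your registry; that key layout is an assumption:

    # /etc/confd/templates/upstream.tmpl (sketch)
    upstream website {
    {{range getvs "/services/website/*"}}
        server {{.}};
    {{end}}
    }

    # /etc/confd/conf.d/upstream.toml (sketch)
    [template]
    src        = "upstream.tmpl"
    dest       = "/etc/nginx/conf.d/upstream.conf"
    keys       = ["/services/website"]
    reload_cmd = "nginx -s reload"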

Modifying nginx config directly in memory?

This might be a very silly question but I'll still ask it.
Nginx reads the nginx.conf file and keeps the information in memory until you do an 'nginx -s reload'.
Is there a way I can modify the nginx configuration directly in memory? I need to reload multiple times per minute and the config file can be huge.
Basically, the problem I'm trying to solve is that I have multiple Docker containers coming up and down dynamically on a set of host machines. Every time a container comes up, it'll have a different IP and open port (application design constraint), and I'm thinking of using nginx as a reverse proxy. What should I do to solve this, considering that the final product might have 3000-5000 containers running on a cluster of hosts and the rate at which containers are launched/destroyed might be around 100 per second? I need a fast way to make sure routing happens properly.
Hmmm, probably not: nginx loads its config into multiple worker processes, so trying to change it on the fly in memory does not look like a good idea.
What is your goal? You seem to need to do some dynamic routing or another sort of treatment. You should instead look at:
nginx directives and modules such as eval
Lua scripting
nginx module development (in C/C++)
These would allow you to do more or less whatever you want: you can read some configuration from a database like Redis and change the behavior of your code according to the value stored there.
For example, you could do a lot just by reading a value from Redis and then using the if directive in your nginx config file. You can use "How can I get the value from Redis and put it in a variable in NGiNX?" to get a Redis value into nginx with the eval module.
UPDATE:
For dynamic IPs in nginx, you should look at "Dynamic proxy_pass to $var with nginx 1.0".
So I would suggest that you:
have a process that writes the IP addresses of your Docker containers into Redis
read them in nginx with the eval and Redis modules
use the value in proxy_pass
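To make that concrete, here is a hedged sketch assuming OpenResty (nginx with the Lua module plus lua-resty-redis) rather than the eval module, and assuming your registration process writes keys like backend:<hostname> -> "ip:port" into Redis; both of those are my assumptions rather than part of the answer above:

    # nginx.conf fragment (sketch)
    server {
        listen 80;

        location / {
            set $backend "";

            access_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(100)              -- ms, fail fast

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.exit(ngx.HTTP_BAD_GATEWAY)
                end

                -- assumed key layout: backend:<host> -> "172.17.0.5:32771"
                local addr = red:get("backend:" .. ngx.var.host)
                if not addr or addr == ngx.null then
                    ngx.exit(ngx.HTTP_NOT_FOUND)
                end

                ngx.var.backend = addr
            }

            proxy_pass http://$backend;
        }
    }

Because the lookup happens per request, updating the Redis key is enough to change the routing, with no nginx reload at all.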
