Can we create more than a 2-node Riak cluster? - riak

Can we set up a Riak cluster with only 2 nodes, like this:
node01
node02
Or can we add more nodes, or use fewer than 2? If we can, please let me know how to achieve that.

It depends on whether you are running your cluster under Docker or not.
If you are under Docker
In this case you can start a new Riak node with the command:
docker run -d -P -e COORDINATOR_NODE=172.17.0.3 --label cluster.name=<your main node name> basho/riak-kv
For more explanation of this line, see the Basho post: Running Riak in Docker
If you are not under a Docker container
As I haven't tried this case personally, I will only link the documentation for adding a new node to a Riak cluster: Running a Cluster
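For context, the flow documented there boils down to joining the new node to an existing member and committing the plan. A minimal sketch, assuming the existing node is named riak@node01 and the commands run on the new node (node names are illustrative, adjust to your setup):

```shell
# On the new node: ask to join the cluster via any existing member
riak-admin cluster join riak@node01

# Review the staged membership changes before applying anything
riak-admin cluster plan

# Apply the staged changes
riak-admin cluster commit
```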
Hope I understood the question correctly

Related

Cloudera Docker image - not able to access Hue & Cloudera manager

I've installed the Cloudera Docker image on a Mac (reference link - https://blog.cloudera.com/blog/2015/12/docker-is-the-new-quickstart-option-for-apache-hadoop-and-cloudera/)
Command used for starting Cloudera Docker image ->
docker run --privileged=true --hostname=quickstart.cloudera -t -i <image_hash> /usr/bin/docker-quickstart -p 80:80 -p 8888:8888 -p 7180:7180
I've restarted Hue (successfully) using the command:
service hue start
Also, I started Cloudera Manager (successfully) using the command:
/home/cloudera/cloudera-manager --express --force
However, when I try to access Cloudera Manager or Hue through the UI, nothing shows up
(URL cannot be found)
URLs I tried:
http://localhost:7180
http://localhost:8888
http://quickstart.cloudera:7180
http://quickstart.cloudera:8888
What do I need to do to access these?
Also, I was trying to check whether any other port is allocated by Docker
command ->
docker port quizzical_kowalevski // quizzical_kowalevski - name of the container
This shows nothing :(
Please note - this is on my local machine (Mac)
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b2d26270435 4239cd2958c6 "/usr/bin/docker-qui…" 3 minutes ago Up 3 minutes sharp_bohr
Error logs (for Hue):
[29/Nov/2018 01:42:20 ] supervisor ERROR Exception in supervisor main loop
Traceback (most recent call last):
File "/usr/lib/hue/desktop/core/src/desktop/supervisor.py", line 386, in main
wait_loop(sups, options)
File "/usr/lib/hue/desktop/core/src/desktop/supervisor.py", line 396, in wait_loop
time.sleep(1)
File "/usr/lib/hue/desktop/core/src/desktop/supervisor.py", line 218, in sig_handler
raise SystemExit("Signal %d received. Exiting" % signum)
SystemExit: Signal 15 received. Exiting
As per your input, the docker run command is malformed.
You shouldn't add additional switches (in this case, port-mapping switches) after the image identifier and the command that starts the containerized application. All additional arguments are passed as arguments to the containerized application (i.e. to /usr/bin/docker-quickstart) instead of being taken up by the Docker engine to configure the port mappings.
Your docker ps output shows that you have no port mappings defined because of this.
You can read more about docker run command here. The general form of the docker run command is:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
You should change the order of your switches to something like this:
docker run --hostname=quickstart.cloudera --restart unless-stopped --privileged=true -dti -p 8888:8888 -p 80:80 -p 7180:7180 cloudera/quickstart /usr/bin/docker-quickstart
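With the switches in the correct position, you can confirm the mappings took effect (the container name is whatever Docker assigned or you set; the exact output shape will vary):

```shell
# Show names and published ports of running containers
docker ps --format '{{.Names}}: {{.Ports}}'

# Or query one mapping directly by container name
docker port <container_name> 7180
```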

Docker and Rancher - Run multiple workers

I need to run 3 commands to run my application:
$ celery -A name worker
$ daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
$ python manage.py runworker
I need to do this with the same image, and I do not know whether it is viable to create a container for each command. What should I do?
Thanks for your help.
I realized that these are all services, so there must be a container for each one.
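A minimal sketch of that one-container-per-service approach, assuming the image is tagged myapp (image and container names here are illustrative; the three commands are the ones from the question):

```shell
# One container per long-running process, all from the same image
docker run -d --name app-celery myapp celery -A name worker
docker run -d --name app-daphne -p 8000:8000 myapp daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
docker run -d --name app-runworker myapp python manage.py runworker
```

Each process then gets its own lifecycle, logs, and restart policy, which is what an orchestrator like Rancher expects.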

NGINX service failure in Docker when limiting memory and CPU usage

I have one master and 5 worker nodes. I am using the following command while deploying the nginx service.
This fails -
docker service create --name foo -p 32799:80 -p 32800:443 nginx --limit-cpu 0.5 --limit-memory 512M
On the other hand, this works -
docker service create --name foo -p 32799:80 -p 32800:443 nginx
Please let me know how I can limit the CPU and restrict memory usage to 512M.
Change your command to the following and try again:
docker service create --limit-cpu 0.5 --limit-memory 512M --name foo -p 32799:80 -p 32800:443 nginx
Anything following the image name is treated as the COMMAND and its parameters.
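If the service was already created without limits, you don't have to remove and recreate it; docker service update accepts the same flags (the service name foo is taken from the question):

```shell
# Apply resource limits to the existing service in place
docker service update --limit-cpu 0.5 --limit-memory 512M foo

# Verify the limits were recorded
docker service inspect --format '{{.Spec.TaskTemplate.Resources.Limits}}' foo
```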

Access a docker container in docker multi-host network

I have created a Docker multi-host network using a Docker overlay network with 4 nodes: node0, node1, node2, and node3. Node0 acts as the key-value store which shares node information, while node1, node2, and node3 are bound to that key-value store.
Here are node1 networks:
user#node1$ docker network ls
NETWORK ID NAME DRIVER
04adb1ab4833 RED overlay
[ . . ]
As for node2 networks:
user#node2$ docker network ls
NETWORK ID NAME DRIVER
04adb1ab4833 RED overlay
[ . . ]
container1 is running on node1, that hosts the RED-named network.
user#node1$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9bacac3c01d ubuntu "/bin/bash" 3 hours ago Up 2 hours container1
Docker added an entry to /etc/hosts for each container that belongs to the RED overlay network.
user#node1$ docker exec container1 cat /etc/hosts
10.10.10.2 d82c36bc2659
127.0.0.1 localhost
[ . . ]
10.10.10.3 container2
10.10.10.3 container2.RED
From node2, I'm trying to access container1, which is running on node1. I tried to exec into container1 using the command below, but it returns an error.
user#node2$ docker exec -i -t container1 bash
Error response from daemon: no such id: container1
Any suggestion?
Thanks.
The network is shared only by the containers.
While the network is shared among the containers across the multi-host overlay, the Docker daemons cannot communicate with each other as-is.
The user#node2$ docker exec -i -t container1 bash does not work because, indeed, no container with id container1 is running on node2.
Accessing remote Docker daemon
Docker daemons communicate through a socket: a UNIX socket by default, but it is possible to add an option, --host, to specify other sockets the daemon should bind to.
See the docker daemon man page:
-H, --host=[unix:///var/run/docker.sock]: tcp://[host:port] to bind or unix://[/path/to/socket] to use.
The socket(s) to bind to in daemon mode specified using one or more
tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.
Thus, it is possible to access, from any node, a Docker daemon bound to a TCP socket.
The command user#node2$ docker -H tcp://node1:port exec -i -t container1 bash would then work.
Docker and Docker cluster (Swarm)
I do not know what you are trying to deploy; maybe you are just playing around with the tutorials, and that's great! You may be interested in looking into Swarm, which deploys a cluster of Docker engines. In short: you can use several nodes as if they were one powerful Docker daemon, accessed through a single node with the whole Docker API.
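A sketch of the remote-daemon setup described above, assuming port 2375 and no TLS (only acceptable on a trusted private network; host names taken from the question):

```shell
# On node1: bind the daemon to the local UNIX socket and a TCP socket
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# From node2: point the client at node1's daemon and exec into container1
docker -H tcp://node1:2375 exec -i -t container1 bash
```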

Is it safe to remove Docker containers listed with `docker ps -f status=created`?

I've already seen posts showing how to remove exited containers listed with docker ps -q -f status=exited, but I also want to clean up 'created' but not 'running' containers. Is it safe to remove containers with the 'created' status, or is there a downside to this?
Docker containers with created status are containers which have been created from an image but never started. Removing them has no impact, since no process has ever run inside the container to change its state (and any such change would, in that case, need to be committed). Creating containers ahead of time is generally done to speed up starting them, with all the configuration kept ready.
Refer to the Docker docs:
The docker create command creates a writeable container layer over the
specified image and prepares it for running the specified command. The
container ID is then printed to STDOUT. This is similar to docker run -d
except the container is never started. You can then use the docker start
command to start the container at any point.
This is useful when you want to set up a container configuration ahead
of time so that it is ready to start when you need it. The initial
status of the new container is created.
There are two possibilities for a container to be in the created status:
As explained by @askb, a container created from an image using the docker create command ends up in the created state.
A container created by the run command but unable to start. There are multiple possible causes here, but the easiest one to reproduce is a container with a port mapping to an already-bound port.
To answer the question: in both cases, removing them is safe.
A way to reproduce a container in the created state via the run command is:
docker pull loicmathieu/vsftpd
docker run -p 621:21 -d loicmathieu/vsftpd ftp
docker run -p 621:21 -d loicmathieu/vsftpd ftp
Then docker ps -a will give you something like
CONTAINER ID IMAGE COMMAND CREATED STATUS
e60dcd51e4e2 loicmathieu/vsftpd "/start.sh ftp" 6 seconds ago Created
7041c77cad53 loicmathieu/vsftpd "/start.sh ftp" 16 seconds ago Up 15 seconds
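Following the accepted answer, a sketch of cleaning up only the created containers, mirroring the status=exited pattern from the question:

```shell
# Collect the IDs of all containers in the 'created' state and remove them
docker ps -q -f status=created | xargs docker rm
```

(With GNU xargs you can add -r to skip the docker rm call entirely when no IDs match.)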
