Move metabase container to another host - metabase

I have Metabase installed on a web host, but I have to move the Metabase container to a new host while keeping the data from the current host.
For example:
this is the current container: 132.12.78.35:3000
and this will be the new container: 186.45.35.78:3000
Is there any way to do that? Thanks
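A rough sketch of one way to do this, assuming Metabase is using its default embedded H2 database and was started with its data directory on a bind mount (the paths, user, and container name below are assumptions):

    # On the current host (132.12.78.35): stop the container and pack up the data
    docker stop metabase
    tar czf metabase-data.tar.gz -C /path/to/metabase-data .
    scp metabase-data.tar.gz user@186.45.35.78:~

    # On the new host (186.45.35.78): unpack the data and start a fresh container on it
    mkdir -p /path/to/metabase-data
    tar xzf ~/metabase-data.tar.gz -C /path/to/metabase-data
    docker run -d --name metabase -p 3000:3000 \
      -v /path/to/metabase-data:/metabase-data \
      -e MB_DB_FILE=/metabase-data/metabase.db \
      metabase/metabase

If your Metabase uses an external application database (e.g. Postgres) instead of the embedded H2 file, migrate that database rather than the container's filesystem.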

Related

How to build multi tenant application using docker

I am pretty new to the Docker concept and know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker, where the containers will use the locally hosted database with different schemas. With nginx we can do a reverse proxy, but how can we achieve it, given that every container will be accessed via localhost:8080? How do we add the upstream and server parts?
It would be very helpful if someone could explain it to me.
If I understand correctly, you want processes in containers to connect to resources on the host.
From your container's perspective in bridge mode (the default), the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host, using docker inspect: docker inspect <container name or ID>. The gateway will be available under NetworkSettings.Networks.Gateway.
From the container, you can execute route | awk '/^default/ { print $2 }'
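A quick sketch of both approaches (the container name is a placeholder):

    # From the host: print the gateway of each network the container is attached to
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' <container>

    # From inside the container: parse the default route
    route | awk '/^default/ { print $2 }'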
One other possibility is to use --net=host when running your container.
This will run your processes on the same network as your host. Doing so will make your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.
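For example (the image name is a placeholder):

    # Share the host's network namespace (Linux only, per the note above);
    # the app can then reach the host's database on localhost
    docker run --net=host my-app-image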

register docker container to host network dns

Good Day
I want to know whether there is a way to dynamically add a Docker container to the host network's DNS server.
The issue is that I have an image I want to host multiple times for test and UAT purposes. I'm using Traefik to discover the containers dynamically within the Docker network.
All I need is to have them dynamically added to the DNS server, or have them picked up by the domain as hosts. When the dev team needs to access a machine, all they should have to do is type in the name of the server, e.g. app.uat.domain or app.develop.domain, and carry on, without me having to update the DNS records manually the whole time.
Thanks in advance
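One low-maintenance approach, sketched below, is to add a single wildcard DNS record (e.g. *.uat.domain) pointing at the Traefik host once, and then let Traefik route each container by hostname using labels; this assumes Traefik v2, and the network and image names are hypothetical:

    # Register the container with Traefik under a per-environment hostname;
    # the pre-existing wildcard record (*.uat.domain -> Traefik host) resolves it,
    # so no per-container DNS changes are needed
    docker run -d --network traefik-net \
      --label 'traefik.enable=true' \
      --label 'traefik.http.routers.app-uat.rule=Host(`app.uat.domain`)' \
      myimage:latest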

Receiving and serving static files in kubernetes

In the pre-k8s, pre-container world, I have a cloud VM that runs nginx and lets an authorized user scp new content into the webroot.
I'd like to build a similar setup in a k8s cluster to host static files, with the goal that:
An authorized user can scp new files in
These files are statically served on the web
These files are kept in a persistent volume so they don't disappear when things restart
I can't seem to figure out a viable combination of storage class + containers to make this work. I'd definitely appreciate any advice!
Update
What I didn't realize is that two containers running in the same pod can both have the same gcePersistentDisk mounted as read/write. So my solution in the end looks like one nginx container running in the same pod as an sshd container that can write to the nginx webroot. It's been working great so far.
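A minimal sketch of that pod, with hypothetical names for the disk and the sshd image:

    # Two containers in one pod sharing the same gcePersistentDisk read/write
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
    spec:
      volumes:
        - name: webroot
          gcePersistentDisk:
            pdName: static-files-disk   # hypothetical disk name
            fsType: ext4
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: webroot
              mountPath: /usr/share/nginx/html
        - name: sshd
          image: my-sshd-image          # hypothetical image running sshd
          ports:
            - containerPort: 22
          volumeMounts:
            - name: webroot
              mountPath: /home/uploader/webroot
    EOF

Since a gcePersistentDisk can only be attached read/write to a single node, this works precisely because both containers share one pod.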
I think you're trying to fit a square peg into a round hole here.
Essentially, you're building an FTP server (albeit with scp rather than FTP).
Kubernetes is designed to orchestrate containers.
The two don't really overlap at all.
Now, if you're really intent on doing this, you could hack something together by creating a Docker container running an SSH daemon plus nginx under supervisord. The layer you need to concentrate on is replicating your existing VM setup in a Docker container. You can then run it on Kubernetes and attach a persistent volume.
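A rough sketch of such an image (a Debian-based nginx image is assumed; this is a starting point, not a hardened setup):

    # supervisord runs both nginx and sshd in the foreground
    cat > supervisord.conf <<'EOF'
    [supervisord]
    nodaemon=true

    [program:nginx]
    command=nginx -g "daemon off;"

    [program:sshd]
    command=/usr/sbin/sshd -D
    EOF

    cat > Dockerfile <<'EOF'
    FROM nginx
    RUN apt-get update && apt-get install -y openssh-server supervisor \
        && mkdir -p /var/run/sshd
    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    CMD ["/usr/bin/supervisord", "-n"]
    EOF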

Automatically append docker container to upstream config of nginx load balancer

I'm running Docker Compose (v2) and have a Node service (website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers, which Docker makes available. However, the problem is that I want the upstream nginx config to be dynamic: e.g. if I add another Docker container, it simply appends the address of the container to the list of IPs in the upstream block.
My idea was to create a script that automatically appends the upstream servers using env variables when the containers change, but I'm unsure where to start and can't find a good example.
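For reference, the hardcoded upstream block described above looks something like this (the container IPs are examples):

    # nginx upstream with container addresses baked in; this is what
    # the discovery approaches below regenerate automatically
    cat > upstream.conf <<'EOF'
    upstream website {
        server 172.17.0.2:3000;
        server 172.17.0.3:3000;
    }
    EOF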
There are a couple of ways to achieve this. What you are referring to is usually called service discovery, and it comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine for single servers or only discovering containers locally on one server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
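Its basic usage looks like this (the VIRTUAL_HOST value is an example):

    # nginx-proxy watches the Docker socket and regenerates its config
    # whenever containers with a VIRTUAL_HOST env variable start or stop
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    docker run -d -e VIRTUAL_HOST=website.local my-website-image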
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
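As a sketch of that last step, a confd template for the upstream block might look like this (the key path under the registry is an assumption and depends on how registrator is configured):

    # confd renders the registered service addresses into an nginx upstream
    cat > nginx.tmpl <<'EOF'
    upstream website {
    {{range getvs "/services/website/*"}}    server {{.}};
    {{end}}}
    EOF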

Change IIS domain url

I have a web application deployed on my local IIS (7.0) and it is working perfectly. Since it's on my local machine, it is accessible via http://<>/webapp/index.aspx. Now, what I am trying to achieve is to access it via a custom URL, i.e. http://www.someuniqueweburl.com, while making sure that it doesn't exist on the internet and, of course, can only be accessed when you are on the same network as the local IIS web server. Is this achievable?
Thanks a bunch!
You need some kind of DNS service to achieve this.
You may:
register a DNS entry in your local network's DNS server, if you have one. Then every machine on your network should be able to resolve sample.custom.url.com to your IP address;
or add an entry line to your %windir%\System32\drivers\etc\hosts file:
127.0.0.1 sample.custom.url.com
but then you will be the only one able to resolve sample.custom.url.com. Other machines will need a similar entry in their hosts file (with your network IP address instead of 127.0.0.1).
Open IIS Manager. For information about opening IIS Manager, see Open IIS Manager (IIS 7).
In the Connections pane, expand the Sites node in the tree, and then select the site for which you want to configure a host header.
In the Actions pane, click Bindings.
In the Site Bindings dialog box, select the binding for which you want to add a host header and then click Edit or click Add to add a new binding with a host header.
In the Host name box, type a host header for the site, such as www.contoso.com.
Click OK.
To add an additional host header, create a new binding with the same IP address and port, and the new host header. Repeat for each host header that you want to use this IP address and port.
I got this from the Microsoft TechNet website: Configure a Host Header for a Web Site (IIS 7)
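If you prefer the command line, the same binding can be added with appcmd (the site name and host header below are examples):

    rem Add an http binding with a host header to an existing site
    %windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='http',bindingInformation='*:80:www.someuniqueweburl.com']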
