How to run hasura console on a different port when using docker - hasura

I am writing this question because I couldn't find a way to change the default Hasura console port when using Hasura docker image.
The page I'm referring to is this one.
There is no variable defined on that page for changing the default Hasura console port.
The reason I'm requesting this feature is to separate the query histories of my two Hasura projects. If I could manage to run these two consoles on two different ports, I would be able to save the query/mutation history separately.

I have not used the Hasura image before, but I have used Docker a lot to run MySQL instances etc. What I typically do depends on whether I am using docker-compose or plain docker run.
If using docker-compose, you can specify a port mapping for each container. For example, mapping container port 9695 to host port 9005 would look like this:
hasura:
  image: hasura
  ports:
    - 9005:9695
or, if using docker run, following these docs, e.g.
docker run -p 9005:9695 hasura ...

Hasura's Dockerfile has graphql-engine serve as its CMD. You can override this default command with your own and pass --server-port, or any other serve config, as per the reference page.
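For example, a minimal sketch (the image tag and port number are placeholders, not taken from the question):
# publish the chosen port on the host and tell graphql-engine to listen on it
docker run -p 9005:9005 hasura/graphql-engine:latest graphql-engine serve --server-port 9005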
Setting the port will change the Hasura API endpoint, not the Hasura console.
That's not entirely true. If HASURA_GRAPHQL_ENABLE_CONSOLE=true, Hasura will serve the console on the /console route of the server API endpoint. However, for production it's recommended to disable the console and run the Hasura CLI locally, connected to the production instance.
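If you go the CLI route, each project gets its own locally served console, which also keeps the query/mutation histories separate. A rough sketch (the endpoints and secret are placeholders; check hasura console --help for the flags available in your CLI version):
# project A, console on the default port
hasura console --endpoint https://hasura-a.example.com --admin-secret mysecret
# project B, console on a different port
hasura console --endpoint https://hasura-b.example.com --admin-secret mysecret --console-port 9700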

Related

What is the URL for wp core install --url for Kubernetes?

In kubernetes, I have a wordpress container and a wp-cli Job.
To create the WordPress tables in the database using the URL, title, and default admin user details, I am running this command in the wp-cli Job:
wp core install --url=http://localhost:8087 --title=title --admin_user=user --admin_password=pass --admin_email=someone@email.com --skip-email
The --url parameter prevents Minikube from serving the WordPress site.
You should put the IP address of your service there in place of "localhost".
When I say service, I'm talking about the Service that exposes your deployment/pods (it's another Kubernetes object you have to create).
You can pass the IP address using an environment variable. When a Service is created, pods started afterwards inherit extra environment variables that Kubernetes places in them, through which you can access its IP address, its port, etc.; check the documentation.
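For example, a sketch assuming the Service is named wordpress (the Service name is an assumption, not from the question); Kubernetes would then inject WORDPRESS_SERVICE_HOST and WORDPRESS_SERVICE_PORT into pods created after the Service:
# hypothetical: relies on a Service named "wordpress" existing before the Job's pod starts
wp core install --url=http://$WORDPRESS_SERVICE_HOST:$WORDPRESS_SERVICE_PORT --title=title --admin_user=user --admin_password=pass --admin_email=someone@email.com --skip-email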
The second option is to put the name of your Service there (still talking about the Kubernetes object you created to expose your deployment). It will eventually be resolved to the IP address by the cluster's DNS (CoreDNS, which nowadays is started along with Minikube).
Those two options are covered in the documentation, in the same section, called "Discovering services".
I had trouble understanding that names like service-name.namespace.svc.cluster.local are just like any URL (like subsubdomain.subdomain.stackoverflow.com), except that they are resolved within the cluster.
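So, as a sketch of the DNS-based variant (assuming the Service is called wordpress and lives in the default namespace; both are assumptions):
# hypothetical Service name and namespace
wp core install --url=http://wordpress.default.svc.cluster.local --title=title --admin_user=user --admin_password=pass --admin_email=someone@email.com --skip-email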

Transferring a Docker image with a saved IP address from Windows to Linux

I'm very new to Docker (in fact I've been only using it for one day) so maybe I'm misunderstanding some basic concept but I couldn't find a solution myself.
Here's the problem. I have an ASP.NET Core server application on a Windows machine. It uses MongoDB as a datastore. Everything works fine. I decided to pack all this stuff into Docker containers and put it on a Linux (Ubuntu Server 18.04) server. I've packed mongo into a container, so now its PUBLISHED IP:PORT value is 192.168.99.100:32772.
I've hardcoded this address into my ASP.NET server and also packed it into a container (IP 192.168.99.100:5000).
Now if I run my server and mongo containers together on my Windows machine, they work just fine. The server connects to a container with the database and can do whatever it needs.
But when I transfer both containers to Ubuntu and run them, the server cannot connect to the database because this IP address is not available there. I've been googling for a few hours to find a solution and I'm still struggling with it.
What is the correct way to handle these IP addresses? Is it possible to set an IP that will be the same for a container regardless of environment?
I recommend using docker-compose for the purpose you described above.
With docker-compose, you can access the database via a service name instead of an IP (which potentially is not available on another system). Here are two links to get started:
https://docs.docker.com/compose/gettingstarted/
https://docs.docker.com/compose/compose-file/
Updated answer (10.11.2019)
Here a concrete example for your asp.net app:
docker-compose.yaml
version: "3"
services:
  frontend:
    image: fqdn/aspnet:tag
    ports:
      - 8080:80
    links:
      - database
  database:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: "mydatabase"
      MONGO_INITDB_ROOT_USERNAME: "root"
      MONGO_INITDB_ROOT_PASSWORD: "example"
    volumes:
      - myMongoVolume:/data/db
volumes:
  myMongoVolume: {}
From the frontend container, you can reach the MongoDB container via the service name "database" (instead of an IP). Due to the links definition in the frontend service, the frontend service will start after the linked service (database).
Through the volume definition, the MongoDB data will be stored in a volume that persists independently of the container lifecycle.
Additionally, I assume you want to reach the ASP.NET application via the host IP. I do not know which port your application exposes, so I assume the default port 80. Via the ports section in the frontend service, we define that container port 80 is published as port 8080 on the host IP. So you can open your browser, enter your host IP and port 8080 (e.g. 127.0.0.1:8080 for localhost), and reach your application.
With docker-compose installed, you can start your app, which consists of your frontend and database service via
docker-compose up
Available command options for docker-compose can be found here
https://docs.docker.com/compose/reference/overview/
Install instructions for docker-compose
https://docs.docker.com/compose/install/
Updated answer (10.11.2019, v2)
From the comment section
Keep in mind that you need to connect via the service name (e.g. database) and the correct port. For MongoDB that port is 27017. That would translate to database:27017 in your frontend config.
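As a sketch, the resulting MongoDB connection string would look roughly like the following (username, password and database name come from the compose file above; authSource=admin applies because the root user is created in the admin database; where exactly you place this string depends on your ASP.NET configuration):
mongodb://root:example@database:27017/mydatabase?authSource=admin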
Q: will mongo also be available from the outside in this case?
A: No. Since the service does not contain any port definition, the database itself will not be directly reachable. From a security standpoint, this is preferable.
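If you did want to publish MongoDB on the host as well (usually not recommended), a sketch would be to add a ports mapping to the database service:
database:
  image: mongo
  ports:
    - 27017:27017   # publishes MongoDB on the host; omit this to keep it internal to the compose network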
Q: could you explain this
volumes:
  myMongoVolume: {}
A: In the service definition of the database service, we have specified a volume to store the database itself, to make the data independent of the container lifecycle. However, just by referencing a volume in the service section, the volume will not be created. Through the definition in the top-level volumes section, we create the volume myMongoVolume with the default settings (indicated by {}). If you would like to customize your volume, you can do so in this volumes section of your docker-compose.yaml. More information regarding volumes can be found here:
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
e.g. if you would like to use a specific storage driver for your volume or use external storage.
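For example, a sketch of a customized volume definition using the local driver to bind-mount a host directory (the host path is a made-up placeholder):
volumes:
  myMongoVolume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/mongo-data   # hypothetical host path that must already exist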

How can I use nginx as a dynamic load balancing proxy server on Bluemix?

I am using docker-compose to run an application on the bluemix container service. I am using nginx as a proxy webserver and load balancer.
I have found an image that uses docker events to automatically detect new web servers and adds those to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs to connect to a Docker socket. I am not very familiar with Docker and I don't know exactly what this does, but essentially it is necessary so that the image can listen to Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this in the container service, as it does not find the /var/run/docker.sock file on the host.
The bluemix documentation has a tutorial explaining how to do load balancing with nginx. But it requires a "hard coded" list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The containers service on Bluemix doesn't expose that docker socket (not surprising, it would be a security risk to the compute host). A couple of alternate ways to accomplish what you want:
- something like Amalgam8 or Consul, which is basically doing just that
- similar, but self-written: have a shared volume, and then each container on startup adds a file to that shared volume saying what it is, plus its private IP. The nginx container has a watch on the shared volume and reloads when those files change. (More work than Amalgam8 or Consul, but perhaps more control; see the sketch below.)
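A very rough sketch of the self-written variant, assuming a volume mounted at /shared in every container (all paths, file names and the upstream name are made up for illustration):
# register.sh - run in each web container at startup (hypothetical)
echo "server $(hostname -i):8080;" > /shared/$(hostname).conf

# watch.sh - run alongside nginx in the proxy container (hypothetical)
while true; do
  {
    echo "upstream backend {"
    cat /shared/*.conf
    echo "}"
  } > /tmp/upstream.conf.new
  if ! cmp -s /tmp/upstream.conf.new /etc/nginx/conf.d/upstream.conf; then
    cp /tmp/upstream.conf.new /etc/nginx/conf.d/upstream.conf
    nginx -s reload   # pick up the regenerated upstream list
  fi
  sleep 10
done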

How to handle configuration for Nginx on Kubernetes (GKE)?

Due to limitations of ingress resources (in my case I need more than 50 routes which is not supported in Google Container Engine) I'm considering using Nginx as a reverse proxy to other backend services. What I want to do is essentially the same as an ingress resource provides such as routing path "/x" to service x and "/y" to service y. I'd like to run more than one instance of Nginx for HA, probably behind a service. My question mainly concerns configuration where I have a couple of options:
1. Create a custom Docker image with nginx as the base image and copy our nginx configuration into this image. This would make it very easy to run this nginx-based image on Kubernetes. While this works, it would require rebuilding, publishing and storing a new custom nginx image every time the configuration changes. We already have pipelines set up for this, so it won't be a big problem operationally.
2. Use the vanilla nginx Docker image and create a GCE persistent disk (we're running on Google Container Engine) that is shared between all nginx pods in read-only mode. The problem I see with this approach is: how do we copy configuration updates to the disk in an easy manner?
Is there a better option? I've looked at config maps and/or secrets (which would solve the configuration update problem) but I don't think they can contain arbitrary data such as an nginx config file.
ConfigMaps containing text files should be no problem at all. Take a look at the --from-file option: http://kubernetes.io/docs/user-guide/configmap/.
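For the nginx use case, a sketch could look like this (the ConfigMap name, file path and pod name are placeholders; in practice you would put the pod template in a Deployment for HA):
kubectl create configmap nginx-conf --from-file=nginx.conf=/path/to/nginx.conf
Then mount it into the nginx pod, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf   # mount only this file, not the whole directory
          subPath: nginx.conf
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx-conf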
I'm unsure about binary files inside a ConfigMap. I'm able to add a JPEG, but trying to read the object results in an error, so this might not be intended (it may need additional base64 encoding or such).
$ kubectl create configmap test --from-file=foo1=/tmp/scudcloud_U0GQ27N0M.jpg
configmap "test" created
$ kubectl get configmap test -o yaml
error: error converting JSON to YAML: %!(EXTRA *errors.errorString=yaml: control characters are not allowed)

Specify an IP for a URL in a Jenkins job

I have the following situation.
The webapp in my company is deployed to several environments before reaching live. Every testing environment is called qa-X and has a different IP address. What I would like to do is specify, in the Jenkins job "test app in qa-X", the app's IP for that environment, so that my tests can start running knowing only the app's URL.
Jenkins itself is outside the qa-x environments.
I have been looking around for solutions, but all of them break the other qa-X tests, for instance changing /etc/hosts or changing the DNS server. What would be great is if I could specify the IP in that job as a config parameter only, and have that definition stay local to the job.
Any thoughts/ideas?
If I'm understanding your query correctly, you should look into creating a Parameterized build, which would expose an environment variable with the desired server IP that your test script could consume.
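As a sketch (the parameter name and test command are made up): enable "This project is parameterized", add a String Parameter such as APP_IP, and let the job's shell step use it, since Jenkins exposes build parameters as environment variables:
# Execute-shell build step (hypothetical test runner and parameter name)
./run-tests.sh --base-url "http://${APP_IP}/"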
