Docker / Azerothcore connection configuration, unable to connect - azerothcore

I cannot connect to the server running in Docker through the WoW client on the same computer, let alone over the network.
I tried changing worldserver.conf inside C:\Users\Seth\azerothcore-wotlk\docker\worldserver\etc to:
LoginDatabaseInfo = "127.0.0.1;3306;root;password;acore_auth"
WorldDatabaseInfo = "127.0.0.1;3306;root;password;acore_world"
CharacterDatabaseInfo = "127.0.0.1;3306;root;password;acore_characters"
I've also tried leaving it at the default.
I am able to connect through HeidiSQL at 127.0.0.1:3306, and I have changed the realmlist to 127.0.0.1.
When I type "docker ps" into Git Bash, I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba3bc132e076 azerothcore/worldserver "/azeroth-server/bin…" 24 hours ago Up 11 seconds 0.0.0.0:8085->8085/tcp azerothcore-wotlk_ac-worldserver_1
6b4d4d41f814 azerothcore/authserver "/azeroth-server/bin…" 24 hours ago Up 11 seconds 0.0.0.0:3724->3724/tcp azerothcore-wotlk_ac-authserver_1
8501ee8e2202 azerothcore/database "docker-entrypoint.s…" 24 hours ago Up 12 seconds 0.0.0.0:3306->3306/tcp, 33060/tcp azerothcore-wotlk_ac-database_1
I don't know if I am not doing something right with Docker or if it's the WoW 3.3.5a client I downloaded.

The problem was that the client I downloaded automatically patched the config.wtf file to connect to their server. I had to go into WoTLK\Data\enGB and change the realmlist there. Not sure if this is true for all clients.
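For reference, a 3.3.5a client's realmlist file (typically WoTLK\Data\enGB\realmlist.wtf, matching the path above) contains a single line along these lines:
set realmlist 127.0.0.1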

The IP address must refer to your database container's address, which is ac-database by default with the Docker setup. For example:
LoginDatabaseInfo = "ac-database;3306;root;password;acore_auth"
so you should NOT use 127.0.0.1 here.
Then you should set your realmlist as:
set realmlist localhost
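Putting the answer together, the database settings in worldserver.conf would look like this (using the same credentials as in the question; adjust to your own):
LoginDatabaseInfo     = "ac-database;3306;root;password;acore_auth"
WorldDatabaseInfo     = "ac-database;3306;root;password;acore_world"
CharacterDatabaseInfo = "ac-database;3306;root;password;acore_characters"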

Related

wp-env cannot connect to the server

I set up a wp-env environment a while ago to tinker with a website I was working on.
Today I went to work on it some more, and when I ran wp-env start I got the error shown in the image.
I have tried using the default "localhost" instead of 127.0.0.1, as well as including the port (127.0.0.1:8888), to no avail. I have run all the docker clean and destroy commands, and restarted and updated both the MySQL server and Docker itself. What are some next steps I can take? Thanks!
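One possible next step (not something from the original question) is to check whether wp-env's containers are actually up and whether anything answers on the port it uses, for example:
docker ps
curl -I http://localhost:8888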

Amazon EC2 Ubuntu 20 - DNS resolution doesn't work

I posted my solution too. I hope this saves someone else a lot of time.
I have an EC2 instance running Ubuntu 20. DNS resolution never works, or fails a lot.
My file /etc/resolv.conf has
nameserver 127.0.0.53
The file is not a symlink, and I can certainly edit it to use nameserver 8.8.8.8,
but the file periodically gets overwritten and the 127.0.0.53 (or something similar) comes back.
I just want DNS to work!
See my solution below.
Get your NIC's name from a config file:
cat /etc/netplan/50-cloud-init.yaml
On my system, Amazon sets the NIC name to ens5.
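For reference, that cloud-init file typically looks something like this (only the NIC name under ethernets matters here; the rest may differ on your instance):
network:
    version: 2
    ethernets:
        ens5:
            dhcp4: true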
As root, create a new file /etc/netplan/99-custom-dns.yaml with the following content, replacing ens5 with your NIC's name:
network:
    version: 2
    ethernets:
        ens5:
            nameservers:
                addresses: [8.8.8.8]
            dhcp4-overrides:
                use-dns: false
Reboot
sudo shutdown -r now
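If you would rather not reboot, applying the netplan configuration directly should also pick up the change (this is an alternative, not part of the original steps):
sudo netplan apply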
Verify: after the reboot, you can try pinging something by name
ping yahoo.com
or you can view the output of:
systemd-resolve --status
Done
Here's a link to the Amazon help doc, though it misses the nontrivial detail about your NIC's name:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-static-dns-ubuntu-debian/

How to access docker container from another machine on local network

I'm using Docker for Windows (not Docker Toolbox, which uses a VM), but I cannot see my container from another machine on the local network. On my host everything runs perfectly; however, I want other people to be able to use my container.
I posted the same question in Docker's forum, but no answer appeared there. I have also searched here, but the solutions I found are about setting up the bridge option in the virtual machine, and as I said, Docker for Windows does not use a virtual machine.
Output of docker version:
Client:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: windows/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 21:15:28 2016
OS/Arch: linux/amd64
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
789d7bf48025 gogs/gogs "docker/start.sh /bin" 5 days ago Up 42 minutes 0.0.0.0:10022->22/tcp, 0.0.0.0:5656->3000/tcp gogs
7fa7978996b8 mysql:5.7.14 "docker-entrypoint.sh" 5 days ago Up 56 minutes 0.0.0.0:8989->3306/tcp mysql
The container I want to use is gogs, which is running on port 5656.
When I use localhost:5656 or 127.0.0.1:5656 it works properly, but when I use my local network IP (192.168.0.127) from another machine, the container is unreachable.
Thanks in advance.
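For reference, the port mappings in the docker ps output above correspond to a run command roughly like this (a sketch; the original command isn't shown in the question):
docker run -d --name gogs -p 10022:22 -p 5656:3000 gogs/gogs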
Solution:
When I installed Docker for Windows, it created a network called vEthernet (DockerNAT), usually with the IP 10.0.75.1.
My local machine had a network called Local Area Connection with the IP 192.168.0.172 (this is the IP I was trying to reach from other PCs).
So my local machine had two network connections. I went to Control Panel > Network and Sharing Center > Change Adapter Settings, selected the two networks, right-clicked, and chose Add to Bridge. That created a third network called Ethernet.
At this point I didn't know the IP of the Ethernet network, so I ran ipconfig, which showed me the IP 192.168.0.17 (the settings of Local Area Connection and vEthernet (DockerNAT) disappeared, and the IPs 10.0.75.1 and 192.168.0.172 stopped working).
With this new IP (192.168.0.17), I tried from another machine on the network and could finally access the container (192.168.0.17:5656).
In Hyper-V settings, putting the "Docker NAT" network in "external" mode worked for me (I can access my container on my local network with my host's IP).
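If the published port is still unreachable from other machines, a Windows Firewall inbound rule for it may also be needed (an assumption on my part, not mentioned in the answers above):
netsh advfirewall firewall add rule name="gogs 5656" dir=in action=allow protocol=TCP localport=5656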

Hosting multiple meteor apps on one server

I have 2 Meteor apps running on one Ubuntu server on DO. I have also set up nginx for serving them.
Config files:
sailsadria.conf : http://pastebin.com/eCicpNxK
ytp.klancir.work.conf : http://pastebin.com/cNKtA0dV
Now...
http://sailsadria.com/, which is on port 3000, works smoothly as expected, while http://ytp.klancir.work/ lands on the nginx root. On the other hand, http://ytp.klancir.work:3010 goes to the right app running on that port (though I suppose any URL or IP I forward with the appended port will end up at the right location).
Symlinks are also set up
The domains are configured:
sailsadria: http://screencast.com/t/iqKUlQlDgj8
ytp.klancir.work: http://screencast.com/t/DJJdLfqna
I don't know how to set it up so that http://ytp.klancir.work/ goes directly to port 3010, in other words, directly to the app...
The SOLUTION: sudo service nginx restart....
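The actual configs are only available via the pastebin links, but a minimal nginx server block that proxies ytp.klancir.work to the app on port 3010 would look roughly like this (the Upgrade/Connection headers matter for Meteor's websockets):
server {
    listen 80;
    server_name ytp.klancir.work;

    location / {
        proxy_pass http://127.0.0.1:3010;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}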

Getting "connection refused" when trying to access etcd from within a Docker container

I am trying to access etcd from within a running Docker container. When I run
curl http://172.17.42.1:4001/v2/keys
I get
curl: (7) Failed to connect to 172.17.42.1 port 4001: Connection refused
I have four other hosts where this works fine, but every container on this machine has this problem. I'm really at a loss as to what's going on, and I don't know how to debug it.
My etcd environment variables are
ETCD_ADVERTISE_CLIENT_URLS=http://10.242.10.2:2379
ETCD_DISCOVERY=https://discovery.etcd.io/<token_removed>
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.242.10.2:2380
ETCD_LISTEN_CLIENT_URLS=http://10.242.10.2:2379,http://127.0.0.1:2379,http://0.0.0.0:4001
ETCD_LISTEN_PEER_URLS=http://10.242.10.2:2380
I can also access etcd from the host with
curl http://localhost:4001/v2/keys
So there seems to be some error when routing from the container out to the host. But I can't figure out what it is. Can anyone point me in the right direction?
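One quick way to see what etcd is actually bound to on the problem host (not from the original post, but a useful sanity check):
sudo ss -tlnp | grep -E '2379|4001'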
I observed that I had to use the --advertise-client-urls and --listen-client-urls flags, like so:
./etcd --advertise-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001' --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
Then I was able to successfully do
curl -L http://hostname:2379/version
from any machine that could reach that server and it worked.
It turns out etcd was only listening on localhost:4001 on that machine, which is why I couldn't access it from within a container. This is despite my having configured one of the listen client URLs as 0.0.0.0:4001.
It turns out that I had run sudo systemctl enable etcd2, which caused it to run before the cloud-config service ran. As such, etcd started with default configuration instead of the one that I had specified in my cloud config. Running sudo systemctl disable etcd2 fixed the issue.
