I just downloaded the Atom editor and installed the remote-ftp package, but I am not able to connect to my server.
Here is my .ftpconfig file:
{
"protocol": "ftp",
"host": "www.xxx.com",
"port": 21,
"user": "xxx",
"pass": "xxx",
"promptForPass": false,
"secure": false,
"secureOptions": null,
"connTimeout": 10000,
"pasvTimeout": 10000,
"keepalive": 10000,
"watch":[],
"watchTimeout":500
}
It just keeps trying to connect ("Remote FTP: Connecting..") and after 10-15 seconds shows "Remote FTP: Connection Closed".
I am trying to create a dictionary out of the servers stored in different env variables in Ansible.
What I currently have is:
env_loadbalancer_vservers2: "{{ hostvars[inventory_hostname] | dict2items | selectattr('key', 'match', 'env_.*_loadbalancer_vservers(?![_.])') | list | items2dict }}"
Which will:
get all variables in Ansible for the specific host,
convert the dict into a list of items,
select only the keys I want with the regex (key and value are easy to access at this point),
turn the result back into a list,
and convert it back into a dict.
The problem is that the output looks like this:
{
"env_decision_manager_loadbalancer_vservers": {
"decision_central": {
"ip_or_dns": "ip",
"port": "port",
"protocol": "SSL",
"ssl": true,
"timeout": 600,
}
},
"env_ftp_loadbalancer_vservers": {
"ftp_1": {
"ip_or_dns": "ip",
"port": "port",
"protocol": "FTP",
"ssl": false,
"timeout": 9010,
}
},
"env_jboss_loadbalancer_vservers": {
"jboss": {
"ip_or_dns": "ip",
"port": "port",
"protocol": "SSL",
"ssl": true,
"timeout": 600,
}
"jboss_adm": {
"ip_or_dns": "som_other_ip",
"port": "rando_number",
"protocol": "SSL",
"ssl": true,
"timeout": 86410,
}
}
My desired output should look like this:
{
"decision_central": {
"ip_or_dns": "ip",
"port": "port",
"protocol": "SSL",
"ssl": true,
"timeout": 600,
},
"ftp_1": {
"ip_or_dns": "ip",
"port": "port",
"protocol": "FTP",
"ssl": false,
"timeout": 9010,
},
"jboss": {
"ip_or_dns": "ip",
"port": "port",
"protocol": "SSL",
"ssl": true,
"timeout": 600,
},
"jboss_adm": {
"ip_or_dns": "som_other_ip",
"port": "rando_number",
"protocol": "SSL",
"ssl": true,
"timeout": 86410,
}
So practically I need to remove the top-level key tier and merge the values underneath. I've spent quite some time on this without any real progress, so I would be happy for any advice :)
PS: The solution should be "clean", without any custom modules or extra tasks; ideally it would just add some filters to the pipeline above so that it produces the dict in the correct format.
Thank you :)
Select the attribute value
regexp: 'env_.*_loadbalancer_vservers(?![_.])'
l1: "{{ hostvars[inventory_hostname]|
dict2items|
selectattr('key', 'match', regexp)|
map(attribute='value')|
list }}"
gives the list
l1:
- decision_central:
    ip_or_dns: ip
    port: port
    protocol: SSL
    ssl: true
    timeout: 600
- ftp_1:
    ip_or_dns: IP
    port: port
    protocol: FTP
    ssl: false
    timeout: 9010
- jboss:
    ip_or_dns: ip
    port: port
    protocol: SSL
    ssl: true
    timeout: 600
  jboss_adm:
    ip_or_dns: som_other_ip
    port: rando_number
    protocol: SSL
    ssl: true
    timeout: 86410
Combine the items of the list
d1: "{{ {}|combine(l1) }}"
gives the dictionary you're looking for
d1:
  decision_central:
    ip_or_dns: ip
    port: port
    protocol: SSL
    ssl: true
    timeout: 600
  ftp_1:
    ip_or_dns: ip
    port: port
    protocol: FTP
    ssl: false
    timeout: 9010
  jboss:
    ip_or_dns: ip
    port: port
    protocol: SSL
    ssl: true
    timeout: 600
  jboss_adm:
    ip_or_dns: som_other_ip
    port: rando_number
    protocol: SSL
    ssl: true
    timeout: 86410
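If you want to keep everything in the single filter pipeline from the question, the two steps can be folded into one expression. This is only a sketch built from the same filters shown above; the variable name env_loadbalancer_vservers2 and the regexp are taken from the question:
env_loadbalancer_vservers2: "{{ {}|combine(hostvars[inventory_hostname]|
                                 dict2items|
                                 selectattr('key', 'match', 'env_.*_loadbalancer_vservers(?![_.])')|
                                 map(attribute='value')|
                                 list) }}"
Here combine merges the list of per-variable dictionaries into a single dictionary, which is what removes the top-level key tier.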
I created three Ubuntu 20.04 VMs in Proxmox VE 7 for a Docker swarm. I followed https://documentation.portainer.io/v2.0/deploy/ceinstallswarm/ to set up Portainer on the swarm, but I can't browse to any of the Ubuntu VMs' IP addresses to reach the Portainer site and set up containers.
Something has gone wrong with the overlay network on my swarm; it looks like ingress is not enabled. Please see the network inspect output for portainer_agent_network below.
I also found that none of the swarm machines listen on port 4789: when I run sudo lsof -i:4789, it shows nothing.
Can anyone help me troubleshoot this? What is going wrong with my Docker swarm?
ubuntu@swarm01:~$ docker network inspect portainer_agent_network
[
{
"Name": "portainer_agent_network",
"Id": "tzm9sx2zifgaxhpmrd8xk7gti",
"Created": "2021-08-07T14:24:33.835202371Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "10.0.11.0/24",
"Gateway": "10.0.11.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"54a9491638f699fc6441961b04b91c8ca923bd8e4980dbe36651fa2618cdbe2c": {
"Name": "portainer_portainer.1.fd5m3wvccnxrl43iwst2imwti",
"EndpointID": "4537774ec3c146843b48ab89707df7b04a6a76880af85dbe025fcc4d7422262c",
"MacAddress": "02:42:0a:00:0b:0c",
"IPv4Address": "10.0.11.12/24",
"IPv6Address": ""
},
"83044215d796b649ee8fc78be2d1364c80646448db3a933ee9a48ff0b0b7fe24": {
"Name": "portainer_agent.idso1hec0iqiyvm1jhu1iaoq1.qidcsempp75po4znf1c7pj09r",
"EndpointID": "dfdd91e83969150ea70674b9ea998690b47a6abf113c9a644315d641c6b68e1c",
"MacAddress": "02:42:0a:00:0b:05",
"IPv4Address": "10.0.11.5/24",
"IPv6Address": ""
},
"lb-portainer_agent_network": {
"Name": "portainer_agent_network-endpoint",
"EndpointID": "be0b5a8bdda9ccae975314fad1424d96e3c57763b1c145f4a67e286f54300195",
"MacAddress": "02:42:0a:00:0b:08",
"IPv4Address": "10.0.11.8/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4107"
},
"Labels": {
"com.docker.stack.namespace": "portainer"
},
"Peers": [
{
"Name": "0589007b93f4",
"IP": "10.0.0.241"
},
{
"Name": "be83a3dd8fbd",
"IP": "10.0.0.242"
},
{
"Name": "f937ea4c2dbf",
"IP": "10.0.0.243"
}
]
}
]
ubuntu@swarm01:~$ sudo lsof -i:7946
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dockerd 451 root 30u IPv6 14558 0t0 TCP *:7946 (LISTEN)
dockerd 451 root 32u IPv6 14559 0t0 UDP *:7946
ubuntu@swarm01:~$ sudo lsof -i:4789
ubuntu@swarm01:~$
Thanks and best regards,
Patrick Lee
The overlay network is a virtual network that the nodes use to communicate with each other internally.
If you want any traffic from outside the swarm (including curl from the same VM) to reach your Portainer containers, you'll need to publish that port.
Using Docker CLI: https://docs.docker.com/engine/reference/commandline/service_create/#publish-service-ports-externally-to-the-swarm--p---publish
or Docker Compose: https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
Note: you want to expose these containers as services, not as individual containers.
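For example, in the stack file the Portainer service needs a ports section so the swarm's routing mesh publishes the UI port on every node. This is only a sketch; the service and image names are assumed from the linked guide, and 9000 is Portainer's default HTTP UI port:
version: "3.2"
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      # long syntax: publish container port 9000 on every swarm node via the ingress routing mesh
      - target: 9000
        published: 9000
        protocol: tcp
        mode: ingress
    deploy:
      placement:
        constraints: [node.role == manager]
After redeploying the stack with the port published, the UI should be reachable at http://<any-node-ip>:9000.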
I tried to deploy a TypeORM Express server to Cloud Functions for Firebase.
ormconfig:
{
type: "postgres",
username: [username],
password: [password],
database: [dbname],
extra: {
socketPath: `/cloudsql/[INSTANCE_CONNECTION_NAME]`,
},
synchronize: false,
dropSchema: false,
logging: true,
}
(values inside brackets are placeholders)
After deploying, the log complains about:
TypeORM connection error: Error: connect ECONNREFUSED 127.0.0.1:5432
I don't understand why the DB address turned into 127.0.0.1. Did I miss something?
host should be specified in your ormconfig.json, and if you did not use the default port for your DB instance, port is also required. Please also make sure your ormconfig.json is valid JSON, for example:
{
"type": "mysql",
"host": "localhost",
"port": 3306,
"username": "test",
"password": "test",
"database": "test",
"host": "localhost"
}
You also need to set the host value to /cloudsql/project:region:instance along with the socketPath
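Putting that together with the question's setup, a sketch of the Postgres ormconfig could look like this (placeholders as in the question; keeping both host and extra.socketPath, as suggested above, is an assumption worth verifying against your TypeORM version):
{
  type: "postgres",
  host: "/cloudsql/[INSTANCE_CONNECTION_NAME]",
  username: [username],
  password: [password],
  database: [dbname],
  extra: {
    socketPath: "/cloudsql/[INSTANCE_CONNECTION_NAME]"
  },
  synchronize: false,
  dropSchema: false,
  logging: true
}
With host pointing at the Cloud SQL unix socket, the driver should no longer fall back to 127.0.0.1:5432.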
I'm starting my Selenium node with the profile settings provided in nodeConfig.json, but ChromeDriver still doesn't use them.
java -Dwebdriver.chrome.driver=C:\Selenium_server\chromedriver.exe -jar C:\Selenium_server\selenium-server-standalone-3.141.59.jar -role node -nodeConfig nodeConfig.json
nodeConfig.json:
{
"capabilities": [
{
"browserName": "chrome",
"maxInstances": 5,
"version": "username",
"seleniumProtocol": "WebDriver",
"profile-directory": "Profile 2"
"user-data-dir": "C:\\Users\\username\\AppData\\Local\\Google\\Chrome\\User Data"
}
],
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"maxSession": 5,
"port": -1,
"register": true,
"registerCycle": 5000,
"hub": "<hub ip>",
"nodeStatusCheckTimeout": 5000,
"nodePolling": 5000,
"role": "node",
"unregisterIfStillDownAfter": 60000,
"downPollingLimit": 2,
"debug": false,
"servlets" : [],
"withoutServlets": [],
"custom": {}
}
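For reference, ChromeDriver generally ignores unknown top-level capability keys and reads Chrome-specific settings from the goog:chromeOptions capability instead. A sketch of the same capabilities entry in that form (assuming the grid node forwards the capability to ChromeDriver unchanged) would be:
"capabilities": [
  {
    "browserName": "chrome",
    "maxInstances": 5,
    "seleniumProtocol": "WebDriver",
    "goog:chromeOptions": {
      "args": [
        "--user-data-dir=C:\\Users\\username\\AppData\\Local\\Google\\Chrome\\User Data",
        "--profile-directory=Profile 2"
      ]
    }
  }
]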
I am trying to disconnect a Docker container (ContainerA1) connected to a network (NetworkA), but am unable to do so, even with the --force flag.
$ docker network disconnect NetworkA ContainerA1
I get an error response: container c5d345a09c6d is not connected to the network. (container IDs trimmed for brevity).
Oddly enough, I am able to disconnect other containers from NetworkA.
I inspected the network using docker network inspect NetworkA. I see:
[
{
"Name": "NetworkA",
"Id": "9e4895ee72a1648ad10f297357447529b277beb92fe21069a244a8265b8f7306",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1/16"
}
]
},
"Internal": false,
"Containers": {
"aded6369aef63b5237a7f543333f0b7fafbe2f01496efb2012bb7f5d67f14268": {
"Name": "ContainerA2",
"EndpointID": "c93b9dde46884181ca5acb63c03b2fb5fb3141e98416dda3e6cbc98b166b88ee",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"ep-0f7d832a8d0cd86d8655ea9e0c1f7bbf33f1102b7bbe6454aca1ab8a48a6e4cd": {
"Name": "ContainerA1",
"EndpointID": "0f7d832a8d0cd86d8655ea9e0c1f7bbf33f1102b7bbe6454aca1ab8a48a6e4cd",
"MacAddress": "02:42:ac:12:00:07",
"IPv4Address": "172.18.0.7/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Notice the ep- prefix for ContainerA1.
I tried removing the container, but I still see it in the list of containers when I run docker network inspect NetworkA. The entry is keyed by the "EndpointID" (with an ep- prefix) rather than by the container ID, but it has the same container name.
How can I remove these stale entries from the network NetworkA?