Cannot get Firebase Emulators running

Trying to run the Firebase Emulators with the command firebase emulators:start, I'm not able to start them. Checking the logs, I see this:
firebase-debug.log
[debug] [2022-10-13T17:03:11.665Z] ----------------------------------------------------------------------
[debug] [2022-10-13T17:03:11.667Z] Command: /usr/local/bin/node /usr/local/share/npm-global/bin/firebase emulators:exec --project=demo-project --ui ng serve
[debug] [2022-10-13T17:03:11.667Z] CLI Version: 11.14.2
[debug] [2022-10-13T17:03:11.667Z] Platform: linux
[debug] [2022-10-13T17:03:11.667Z] Node Version: v16.17.1
[debug] [2022-10-13T17:03:11.673Z] Time: Thu Oct 13 2022 17:03:11 GMT+0000 (Coordinated Universal Time)
[debug] [2022-10-13T17:03:11.674Z] ----------------------------------------------------------------------
[debug]
[debug] [2022-10-13T17:03:11.760Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[debug] [2022-10-13T17:03:12.144Z] openjdk version "11.0.16" 2022-07-19
[debug] [2022-10-13T17:03:12.145Z]
OpenJDK Runtime Environment (build 11.0.16+8-post-Debian-1deb11u1)
OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Debian-1deb11u1, mixed mode, sharing)
[debug] [2022-10-13T17:03:12.149Z] Parsed Java major version: 11
[info] i emulators: Starting emulators: auth, functions, firestore, hosting {"metadata":{"emulator":{"name":"hub"},"message":"Starting emulators: auth, functions, firestore, hosting"}}
[info] i emulators: Detected demo project ID "demo-project", emulated services will use a demo configuration and attempts to access non-emulated services for this project will fail. {"metadata":{"emulator":{"name":"hub"},"message":"Detected demo project ID \"demo-project\", emulated services will use a demo configuration and attempts to access non-emulated services for this project will fail."}}
[info] i emulators: Shutting down emulators. {"metadata":{"emulator":{"name":"hub"},"message":"Shutting down emulators."}}
[debug] [2022-10-13T17:03:12.160Z] Error: listen EADDRNOTAVAIL: address not available ::1:4400
at Server.setupListenHandle [as _listen2] (node:net:1415:21)
at listenInCluster (node:net:1480:12)
at doListen (node:net:1629:7)
at processTicksAndRejections (node:internal/process/task_queues:84:21)
[error]
[error] Error: An unexpected error has occurred.
My project runs in a Docker container, and if I run ifconfig this is what I get:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 251 bytes 123951 (121.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 238 bytes 36631 (35.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 7924 bytes 28381616 (27.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7924 bytes 28381616 (27.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I've tried to force the host within firebase.json like this (and also with 0.0.0.0), without luck:
"emulators": {
  "auth": {
    "port": 9099,
    "host": "127.0.0.1"
  },
  "functions": {
    "port": 5001,
    "host": "127.0.0.1"
  },
  "firestore": {
    "port": 8080,
    "host": "127.0.0.1"
  },
  "hosting": {
    "port": 5000,
    "host": "127.0.0.1"
  },
  "ui": {
    "enabled": true,
    "host": "127.0.0.1"
  },
  "singleProjectMode": true
}
Can you help me with this issue, please?

The project is in a Docker container and, for some weird reason, destroying the container and building it again worked. However, I'm interested in knowing more about what could have happened, so if anyone has some clues I'll be glad to read more.

Finally!
After checking this GitHub issue, it seems the best way to deal with this problem is to set the host and port for each emulator inside firebase.json:
"auth": {
  "port": 9099,
  "host": "0.0.0.0"
},
"functions": {
  "port": 5001,
  "host": "0.0.0.0"
},
"firestore": {
  "port": 8080,
  "host": "0.0.0.0"
},
"hosting": {
  "port": 5000,
  "host": "0.0.0.0"
},
"hub": {
  "host": "0.0.0.0",
  "port": 4400
},
"logging": {
  "host": "0.0.0.0",
  "port": 4500
},
"eventarc": {
  "host": "0.0.0.0",
  "port": 9299
},
"ui": {
  "enabled": true,
  "port": 4000,
  "host": "0.0.0.0"
},

Related

Do Firebase Scheduled Functions run automatically on the emulator?

I am trying to build a scheduled function on Firebase using the emulator. I've set things up as shown and have verified that PubSub is running on my emulator. However, nothing happens.
I have the following function:
exports.scheduledFunction = functions.pubsub.schedule('every 5 minutes').onRun((context) => {
  console.log('This will be run every 5 minutes!');
  return null;
});
I can verify the function works using firebase functions:shell:
firebase functions:shell
i functions: Loaded functions: createUserDocument, newSlackUserNotify, scheduledFunction
⚠ functions: The following emulators are not running, calls to these services will affect production: firestore, database, pubsub, storage, eventarc
firebase > scheduledFunction()
'Successfully invoked function.'
firebase > > This will be run every 5 minutes!
My firebase.json:
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "emulators": {
    "auth": {
      "port": 9099,
      "host": "0.0.0.0"
    },
    "firestore": {
      "port": 8080,
      "host": "0.0.0.0"
    },
    "functions": {
      "port": 5001,
      "host": "0.0.0.0"
    },
    "pubsub": {
      "port": 8085,
      "host": "0.0.0.0"
    },
    "ui": {
      "enabled": true
    },
    "singleProjectMode": true
  },
  "functions": [
    {
      "source": "apps/functions",
      "codebase": "default",
      "ignore": ["node_modules", ".git", "firebase-debug.log", "firebase-debug.*.log"]
    }
  ]
}
I can see here that someone suggests using a trigger, but shouldn't scheduled functions "just work"?
I have also verified that the scheduled function works when deployed, so this seems to be an emulator-specific thing.
The only possible clue is that SLF4J seems not to have loaded, per pubsub-debug.log:
This is the Google Pub/Sub fake.
Implementation may be incomplete or differ from the real system.
Jan 13, 2023 2:09:25 PM com.google.cloud.pubsub.testing.v1.Main main
INFO: IAM integration is disabled. IAM policy methods and ACL checks are not supported
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Jan 13, 2023 2:09:26 PM com.google.cloud.pubsub.testing.v1.Main main
INFO: Server started, listening on 8085
Jan 13, 2023 2:09:31 PM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
INFO: Detected HTTP/2 connection.
As Chris mentioned in the comment above, the emulator doesn't support scheduled functions yet. There is a feature request for supporting scheduled functions in the emulator on GitHub; you can also add your concerns there. For now, you need to trigger them manually using the firebase functions:shell command. You can run the scheduled functions on an interval:
firebase > setInterval(() => yourScheduledFunc(), 60000)
// This will run your function every 60 seconds.
You can also check these Stack Overflow threads: thread1 and thread2.
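To avoid hard-coding the interval in the shell, the simple "every N unit" schedule syntax can be converted to milliseconds with a small helper (everyToMs is a hypothetical name, not part of firebase-tools):

```javascript
// Sketch: translate the simple "every N seconds/minutes/hours" schedule
// syntax into milliseconds, so the functions:shell workaround can reuse
// the interval declared in the function itself, e.g.
//   firebase > setInterval(() => scheduledFunction(), 300000)
function everyToMs(schedule) {
  const m = /^every (\d+) (seconds?|minutes?|hours?)$/.exec(schedule.trim());
  if (!m) throw new Error(`unsupported schedule: ${schedule}`);
  // First letter of the unit picks the multiplier: s, m, or h.
  const unitMs = { s: 1000, m: 60000, h: 3600000 }[m[2][0]];
  return Number(m[1]) * unitMs;
}

console.log(everyToMs('every 5 minutes')); // 300000
```

Cron-style schedules ('*/5 * * * *') would need a real parser; this only covers the English shorthand.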

Firebase emulator is stopping with error: Pub/Sub Emulator has exited with code: 1

Firebase emulator is stopping with the following error
! pubsub: Fatal error occurred:
Pub/Sub Emulator has exited with code: 1,
stopping all running emulators
I don't know what is happening with the emulator, even though I installed Java and Node correctly, as described in the Firebase documentation.
Here is my firebase.json
{
  "functions": {
    "predeploy": [
      "npm --prefix \"$RESOURCE_DIR\" run lint"
    ],
    "source": "functions"
  },
  "emulators": {
    "auth": {
      "port": 9099
    },
    "functions": {
      "port": 5001
    },
    "firestore": {
      "port": 8080
    },
    "pubsub": {
      "port": 8085
    },
    "ui": {
      "enabled": true
    }
  }
}
I fixed it by updating the JAVA_HOME variable and restarting the computer.

SwiftUI and Firebase emulators not working on physical device / simulator

I am working on a SwiftUI app and just started using the Firebase Emulator Suite.
Everything is fine when I preview the app and write to a Firestore collection or authenticate a user. The problem is that whenever I want to test it on a simulator or an actual physical device, nothing happens. And I cannot find anything online about using the emulators with a physical device on your local LAN.
What could be stopping my physical device from making a connection with my MacBook, where the emulators are running?
See my firebase.json config below.
NOTE: I have tried to set "host": "192.168.x.x" (my MacBook's IP address) and also "0.0.0.0",
but with no result.
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "functions": {
    "predeploy": [
      "npm --prefix \"$RESOURCE_DIR\" run lint"
    ]
  },
  "storage": {
    "rules": "storage.rules"
  },
  "emulators": {
    "auth": {
      "port": 9099
    },
    "functions": {
      "port": 5001
    },
    "firestore": {
      "port": 8080
    },
    "storage": {
      "port": 9199
    },
    "ui": {
      "enabled": true
    }
  }
}
Code below FirebaseApp.configure(), where "localhost" is "192.168.x.x", my MacBook's IP:
let settings = Firestore.firestore().settings
settings.host = "localhost:8080"
settings.isPersistenceEnabled = false
settings.isSSLEnabled = false
Firestore.firestore().settings = settings
In Info.plist I added App Transport Security Settings:
Allows Local Networking = YES
What I see in debug.log (this might be unrelated):
Jul 04, 2021 10:20:53 AM io.gapi.emulators.netty.HttpVersionRoutingHandler channelRead
INFO: Detected non-HTTP/2 connection.
What could be the problem stopping my phone from connecting to the Firebase emulators hosted on my MacBook, which is on the same local network as my phone?
When I use the Firebase cloud version, everything works fine.
Could it be something in my firewall or on the MacBook itself?
That would explain why both the physical device and the simulator cannot read from or write to Firestore.
Hope someone can help me!
I would love to test functions and everything else locally first.
Greetings,
Dylan
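Not a confirmed answer, but one common gotcha: the emulators listen on localhost by default, which a phone on the LAN cannot reach, and binding to a specific 192.168.x.x address breaks if that address changes. A sketch of the emulators block that binds all interfaces, mirroring the config that worked in the Docker question above (ports are the defaults; adjust to your setup):

```json
"emulators": {
  "auth": { "port": 9099, "host": "0.0.0.0" },
  "functions": { "port": 5001, "host": "0.0.0.0" },
  "firestore": { "port": 8080, "host": "0.0.0.0" },
  "storage": { "port": 9199, "host": "0.0.0.0" },
  "ui": { "enabled": true, "host": "0.0.0.0" }
}
```

With this in place, the app would still point at the Mac's LAN IP (e.g. settings.host = "192.168.x.x:8080"), and the macOS firewall must allow incoming connections for the java and node processes.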

Firebase CLI: TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type undefined

On Firebase CLI version 7.11.0, I am getting this error when running firebase serve or firebase deploy.
// terminal error
owner#G700:~/PhpstormProjects/shopify/buyUsedServer$ firebase serve --debug
[2020-01-10T21:56:29.047Z] ----------------------------------------------------------------------
[2020-01-10T21:56:29.060Z] Command: /home/owner/.nvm/versions/node/v10.16.3/bin/node /usr/local/bin/firebase serve --debug
[2020-01-10T21:56:29.061Z] CLI Version: 7.11.0
[2020-01-10T21:56:29.061Z] Platform: linux
[2020-01-10T21:56:29.061Z] Node Version: v10.16.3
[2020-01-10T21:56:29.062Z] Time: Sat Jan 11 2020 04:56:29 GMT+0700 (Indochina Time)
[2020-01-10T21:56:29.064Z] ----------------------------------------------------------------------
[2020-01-10T21:56:29.064Z]
[2020-01-10T21:56:29.083Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[2020-01-10T21:56:29.083Z] > authorizing via signed-in user
[2020-01-10T21:56:29.084Z] [iam] checking project buyusedshopify for permissions ["firebase.projects.get"]
[2020-01-10T21:56:29.087Z] >>> HTTP REQUEST POST https://cloudresourcemanager.googleapis.com/v1/projects/buyusedshopify:testIamPermissions
permissions=[firebase.projects.get]
[2020-01-10T21:56:30.516Z] <<< HTTP RESPONSE 200 content-type=application/json; charset=UTF-8, vary=X-Origin, Referer, Origin,Accept-Encoding, date=Fri, 10 Jan 2020 21:56:30 GMT, server=ESF, cache-control=private, x-xss-protection=0, x-frame-options=SAMEORIGIN, x-content-type-options=nosniff, server-timing=gfet4t7; dur=1191, alt-svc=quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000, accept-ranges=none, transfer-encoding=chunked
[2020-01-10T21:56:30.519Z] >>> HTTP REQUEST GET https://firebase.googleapis.com/v1beta1/projects/buyusedshopify
[2020-01-10T21:56:31.188Z] <<< HTTP RESPONSE 200 content-type=application/json; charset=UTF-8, vary=X-Origin, Referer, Origin,Accept-Encoding, date=Fri, 10 Jan 2020 21:56:31 GMT, server=ESF, cache-control=private, x-xss-protection=0, x-frame-options=SAMEORIGIN, x-content-type-options=nosniff, alt-svc=quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000, accept-ranges=none, transfer-encoding=chunked
=== Serving from '/home/owner/PhpstormProjects/shopify/buyUsedServer'...
[2020-01-10T21:56:31.194Z] >>> HTTP REQUEST GET https://firebase.googleapis.com/v1beta1/projects/buyusedshopify/webApps/-/config
[2020-01-10T21:56:31.197Z] TypeError [ERR_INVALID_ARG_TYPE]: The "path" argument must be of type string. Received type undefined
at validateString (internal/validators.js:125:11)
at Object.join (path.js:1147:7)
at Object.<anonymous> (/usr/local/lib/node_modules/firebase-tools/lib/serve/functions.js:20:39)
at Generator.next (<anonymous>)
at /usr/local/lib/node_modules/firebase-tools/lib/serve/functions.js:7:71
at new Promise (<anonymous>)
at __awaiter (/usr/local/lib/node_modules/firebase-tools/lib/serve/functions.js:3:12)
at Object.start (/usr/local/lib/node_modules/firebase-tools/lib/serve/functions.js:18:16)
at /usr/local/lib/node_modules/firebase-tools/lib/serve/index.js:15:23
at arrayMap (/usr/local/lib/node_modules/firebase-tools/node_modules/lodash/lodash.js:639:23)
at Function.map (/usr/local/lib/node_modules/firebase-tools/node_modules/lodash/lodash.js:9554:14)
at _serve (/usr/local/lib/node_modules/firebase-tools/lib/serve/index.js:13:26)
at Command.module.exports.Command.description.option.option.option.option.before.action [as actionFn] (/usr/local/lib/node_modules/firebase-tools/lib/commands/serve.js:58:12)
at Command.<anonymous> (/usr/local/lib/node_modules/firebase-tools/lib/command.js:156:25)
at Generator.next (<anonymous>)
at fulfilled (/usr/local/lib/node_modules/firebase-tools/lib/command.js:4:58)
Error: An unexpected error has occurred.
Does anyone know what may cause this? Here is also my firebase.json:
// firebase.json
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "functions": {
    "predeploy": [
      "npm --prefix \"$RESOURCE_DIR\" run lint",
      "npm --prefix \"$RESOURCE_DIR\" run build"
    ]
  },
  "hosting": {
    "public": "public",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  },
  "emulators": {
    "functions": {
      "port": 5001
    },
    "firestore": {
      "port": 8080
    },
    "hosting": {
      "port": 5000
    }
  }
}
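One educated guess (not a confirmed fix): the stack trace dies in path.join inside lib/serve/functions.js, which is consistent with the CLI working from an undefined functions directory, and the functions block above has no "source" entry. Adding one explicitly may help:

```json
"functions": {
  "source": "functions",
  "predeploy": [
    "npm --prefix \"$RESOURCE_DIR\" run lint",
    "npm --prefix \"$RESOURCE_DIR\" run build"
  ]
}
```

If that changes nothing, reinstalling or upgrading firebase-tools would be the next thing to try, since the failing path is inside the CLI itself.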

Docker Swarm Mode: Not all VIPs for a service work. Getting timeouts for several VIPs

Description
I'm having issues with an overlay network using Docker swarm mode (IMPORTANT: swarm mode, not Swarm). I have an overlay network named "internal". I have a service named "datacollector" that is scaled to 12 instances. I docker exec into another service running in the same swarm (and on the same overlay network) and run curl http://datacollector 12 times. However, 4 of the requests result in a timeout. I then run dig tasks.datacollector and get a list of 12 IP addresses. Sure enough, 8 of the IP addresses work, but 4 time out every time.
I tried scaling the service down to 1 instance and then back up to 12, but got the same result.
I then used docker service ps datacollector to find each running instance of my service. I used docker kill xxxx on each node to manually kill all instances and let the swarm recreate them. I then checked dig again and verified that the list of IP addresses for the task was no longer the same. After this I ran curl http://datacollector 12 more times. Now only 3 requests work and the remaining 9 timeout!
This is the second time this has happened in the last 2 weeks or so. The previous time, I had to remove all services, remove the overlay network, recreate the overlay network, and re-create all of the services to resolve the issue. Obviously, this isn't a workable long-term solution :(
Output of docker service inspect datacollector:
[
{
"ID": "2uevc4ouakk6k3dirhgqxexz9",
"Version": {
"Index": 72152
},
"CreatedAt": "2016-11-12T20:38:51.137043037Z",
"UpdatedAt": "2016-11-17T15:22:34.402801678Z",
"Spec": {
"Name": "datacollector",
"TaskTemplate": {
"ContainerSpec": {
"Image": "507452836298.dkr.ecr.us-east-1.amazonaws.com/swarm/api:61d7931f583742cca91b368bc6d9e15314545093",
"Args": [
"node",
".",
"api/dataCollector"
],
"Env": [
"ENVIRONMENT=stage",
"MONGODB_URI=mongodb://mongodb:27017/liveearth",
"RABBITMQ_URL=amqp://rabbitmq",
"ELASTICSEARCH_URL=http://elasticsearch"
]
},
"Resources": {
"Limits": {},
"Reservations": {}
},
"RestartPolicy": {
"Condition": "any",
"MaxAttempts": 0
},
"Placement": {
"Constraints": [
"node.labels.role.api==true",
"node.labels.role.api==true",
"node.labels.role.api==true",
"node.labels.role.api==true",
"node.labels.role.api==true"
]
}
},
"Mode": {
"Replicated": {
"Replicas": 12
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause"
},
"Networks": [
{
"Target": "88e9fd9715o5v1hqu6dnkg3vp"
}
],
"EndpointSpec": {
"Mode": "vip"
}
},
"Endpoint": {
"Spec": {
"Mode": "vip"
},
"VirtualIPs": [
{
"NetworkID": "88e9fd9715o5v1hqu6dnkg3vp",
"Addr": "192.168.1.23/24"
}
]
},
"UpdateStatus": {
"State": "completed",
"StartedAt": "2016-11-17T15:19:34.471292948Z",
"CompletedAt": "2016-11-17T15:22:34.402794312Z",
"Message": "update completed"
}
}
]
Output of docker network inspect internal:
[
{
"Name": "internal",
"Id": "88e9fd9715o5v1hqu6dnkg3vp",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.1.0/24",
"Gateway": "192.168.1.1"
}
]
},
"Internal": false,
"Containers": {
"03ac1e71139ff2140f93c80d9e6b1d69abf442a0c2362610bee3e116e84ef434": {
"Name": "datacollector.5.cxmvk7p1hwznautresir94m3s",
"EndpointID": "22445be80ba55b67d7cfcfbc75f2c15586bace5f317be8ba9b59c5f9f338525c",
"MacAddress": "02:42:c0:a8:01:72",
"IPv4Address": "192.168.1.114/24",
"IPv6Address": ""
},
"08ae84c7cb6e57583baf12c2a9082c1d17f1e65261cfa93346aaa9bda1244875": {
"Name": "auth.10.aasw00k7teq4knxibctlrrj7e",
"EndpointID": "c3506c851f4c9f0d06d684a9f023e7ba529d0149d70fa7834180a87ad733c678",
"MacAddress": "02:42:c0:a8:01:44",
"IPv4Address": "192.168.1.68/24",
"IPv6Address": ""
},
"192203a127d6831c3f4a41eabdd8df5282e33c3e92b99c3baaf1f213042f5418": {
"Name": "parkingcollector.1.8yrm6d831wrfsrkzhal7cf2pm",
"EndpointID": "34de6e9621ef54f7d963db942a7a7b6e0013ac6db6c9f17b384de689b1f1b187",
"MacAddress": "02:42:c0:a8:01:9a",
"IPv4Address": "192.168.1.154/24",
"IPv6Address": ""
},
"24258109e16c1a5b15dcc84a41d99a4a6617bcadecc9b35279c721c0d2855141": {
"Name": "stream.8.38npsusmpa1pf8fbnmaux57rx",
"EndpointID": "b675991ffbd5c0d051a4b68790a33307b03b48582fd1b37ba531cf5e964af0ce",
"MacAddress": "02:42:c0:a8:01:74",
"IPv4Address": "192.168.1.116/24",
"IPv6Address": ""
},
"33063b988473b73be2cbc51e912e165112de3d01bc00ee2107aa635e30a36335": {
"Name": "billing.2.ca41k2h44zkn9wfbsif0lfupf",
"EndpointID": "77c576929d5e82f1075b4cc6fcb4128ce959281d4b9c1c22d9dcd1e42eed8b5e",
"MacAddress": "02:42:c0:a8:01:87",
"IPv4Address": "192.168.1.135/24",
"IPv6Address": ""
},
"8b0929e66e6c284206ea713f7c92f1207244667d3ff02815d4bab617c349b220": {
"Name": "shotspottercollector.2.328408tiyy8aryr0g1ipmm5xm",
"EndpointID": "f2a0558ec67745f5d1601375c2090f5cd141303bf0d54bec717e3463f26ed74d",
"MacAddress": "02:42:c0:a8:01:90",
"IPv4Address": "192.168.1.144/24",
"IPv6Address": ""
},
"938fe5f6f9bb893862e8c06becd76c1a7fe5f2d3b791fc55d7d8164e67ee3553": {
"Name": "inrixproxy.2.ed77crvat0waw41phjknhhm6v",
"EndpointID": "88f550fecd60f0bdb0dfc9d5bf0c74716a91d009bcc27dc4392b113ab1215038",
"MacAddress": "02:42:c0:a8:01:96",
"IPv4Address": "192.168.1.150/24",
"IPv6Address": ""
},
"970f9d4c6ae6cc4de54a1d501408720b7d95114c28a6615d8e4e650b7e69bc40": {
"Name": "rabbitmq.1.e7j721g6hfhs8r7p3phih4g9v",
"EndpointID": "c04a4a5650ee6e10b87884004aa2cb1ec6b1c7036af15c31579462b6621436a2",
"MacAddress": "02:42:c0:a8:01:1e",
"IPv4Address": "192.168.1.30/24",
"IPv6Address": ""
},
"b1f676e6d38eec026583943dc0abff1163d21e6be9c5901539c46288f8941638": {
"Name": "logspout.0.51j8juw8aj0rjjccp2am0rib5",
"EndpointID": "98a93153abd6897c58276340df2eeec5c0ceb77fbe17d1ce8c465febb06776c7",
"MacAddress": "02:42:c0:a8:01:10",
"IPv4Address": "192.168.1.16/24",
"IPv6Address": ""
},
"bab4d80be830fa3b3fefe501c66e3640907a2cbb2addc925a0eb6967a771a172": {
"Name": "auth.2.8fduvrn5ayk024b0lkhyz50of",
"EndpointID": "7e81d41fa04ec14263a2423d8ef003d6d431a8c3ff319963197f8a8d73b4e361",
"MacAddress": "02:42:c0:a8:01:3a",
"IPv4Address": "192.168.1.58/24",
"IPv6Address": ""
},
"bc3c75a7c2d8c078eb7cc1555833ff0d374d82045dd9fb24ccfc37868615bb5e": {
"Name": "reverseproxy.6.2g20zphn5j1r2feylzcplyorg",
"EndpointID": "6c2138966ebcd144b47229a94ee603d264f3954a96ccd024d9e96501b7ffd5c0",
"MacAddress": "02:42:c0:a8:01:6c",
"IPv4Address": "192.168.1.108/24",
"IPv6Address": ""
},
"cd59d61b16ac0325336121a8558e8215e42aa5300f75054df17a70bf1f3e6c0c": {
"Name": "usgscollector.1.0h0afyw8va8maoa4tjd5qz588",
"EndpointID": "952073efc6a567ebd3f80d26811222c675183e8c76005fbf12388725a97b1bee",
"MacAddress": "02:42:c0:a8:01:48",
"IPv4Address": "192.168.1.72/24",
"IPv6Address": ""
},
"d40476e56b91762b0609acd637a4f70e42c88d266f8ebb7d9511050a8fc1df17": {
"Name": "kibana.1.6hxu5b97hfykuqr5yb9i9sn5r",
"EndpointID": "08c5188076f9b8038d864d570e7084433a8d97d4c8809d27debf71cb5d652cd7",
"MacAddress": "02:42:c0:a8:01:06",
"IPv4Address": "192.168.1.6/24",
"IPv6Address": ""
},
"e29369ad8ee5b12fb0c6f9bcb899514ab092f7da291a7c05eea758b0c19bfb65": {
"Name": "weatherbugcollector.1.crpub0hf85cewxm0qt6annsra",
"EndpointID": "afa1ddbad8ab8fdab69505ddb5342ac89c0d17bc75a11e9ac0ac8829e5885997",
"MacAddress": "02:42:c0:a8:01:2e",
"IPv4Address": "192.168.1.46/24",
"IPv6Address": ""
},
"f1bf0a656ecb9d7ef9b837efa94a050d9c98586f7312435e48b9a129c5e92e46": {
"Name": "socratacollector.1.627icslq6kdb4syaha6tzkb19",
"EndpointID": "14bea0d9ec3f94b04b32f36b7172c60316ee703651d0d920126a49dd0fa99cf5",
"MacAddress": "02:42:c0:a8:01:1b",
"IPv4Address": "192.168.1.27/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "257"
},
"Labels": {}
}
]
Output of dig datacollector:
; <<>> DiG 9.9.5-9+deb8u8-Debian <<>> datacollector
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38227
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;datacollector. IN A
;; ANSWER SECTION:
datacollector. 600 IN A 192.168.1.23
;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Thu Nov 17 16:11:57 UTC 2016
;; MSG SIZE rcvd: 60
Output of dig tasks.datacollector:
; <<>> DiG 9.9.5-9+deb8u8-Debian <<>> tasks.datacollector
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9810
;; flags: qr rd ra; QUERY: 1, ANSWER: 12, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;tasks.datacollector. IN A
;; ANSWER SECTION:
tasks.datacollector. 600 IN A 192.168.1.115
tasks.datacollector. 600 IN A 192.168.1.66
tasks.datacollector. 600 IN A 192.168.1.22
tasks.datacollector. 600 IN A 192.168.1.114
tasks.datacollector. 600 IN A 192.168.1.37
tasks.datacollector. 600 IN A 192.168.1.139
tasks.datacollector. 600 IN A 192.168.1.148
tasks.datacollector. 600 IN A 192.168.1.110
tasks.datacollector. 600 IN A 192.168.1.112
tasks.datacollector. 600 IN A 192.168.1.100
tasks.datacollector. 600 IN A 192.168.1.39
tasks.datacollector. 600 IN A 192.168.1.106
;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Thu Nov 17 16:08:54 UTC 2016
;; MSG SIZE rcvd: 457
Output of docker version:
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 23:26:11 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 21:44:32 2016
OS/Arch: linux/amd64
Output of docker info:
Containers: 58
Running: 15
Paused: 0
Stopped: 43
Images: 123
Server Version: 1.12.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 430
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: host null overlay bridge
Swarm: active
NodeID: 8uxexr2uz3qpn5x1km9k4le9s
Is Manager: true
ClusterID: 2kd4md2qyu67szx4y6q2npnet
Managers: 3
Nodes: 8
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 10.10.44.201
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-91-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.676 GiB
Name: stage-0
ID: 76Z2:GN43:RQND:BBAJ:AGUU:S3F7:JWBC:CCCK:I4VH:PKYC:UHQT:IR2U
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: herbrandson
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
provider=generic
Insecure Registries:
127.0.0.0/8
Additional environment details:
Docker swarm mode (not swarm). All nodes are running on AWS. The swarm has 8 nodes (3 managers and 5 workers)
UPDATE:
Per the comments, here's a snippet from the Docker daemon logs on the swarm master:
time="2016-11-17T15:19:45.890158968Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=ch6w74b3cu78y8r2ugkmfmu8a
time="2016-11-17T15:19:48.929507277Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=exb6dfc067nxudzr8uo1eyj4e
time="2016-11-17T15:19:50.104962867Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=6mbbfkilj9gslfi33w7sursb9
time="2016-11-17T15:19:50.877223204Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=drd8o0yn1cg5t3k76frxgukaq
time="2016-11-17T15:19:54.680427504Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=9lwl5v0f2v6p52shg6gixs3j7
time="2016-11-17T15:19:54.949118806Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=51q1eeilfspsm4cx79nfkl4r0
time="2016-11-17T15:19:56.485909146Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=3vjzfjjdrjio2gx45q9c3j6qd
time="2016-11-17T15:19:56.934070026Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:00.000614497Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:00.163458802Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=4xa2ub5npxyxpyx3vd5n1gsuy
time="2016-11-17T15:20:01.463407652Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:01.949087337Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:02.942094926Z" level=error msg="Failed to create real server 192.168.1.150 for vip 192.168.1.32 fwmark 947 in sb 938fe5f6f9bb893862e8c06becd76c1a7fe5f2d3b791fc55d7d8164e67ee3553: no such process"
time="2016-11-17T15:20:03.319168359Z" level=error msg="Failed to delete a new service for vip 192.168.1.61 fwmark 2133: no such process"
time="2016-11-17T15:20:03.363775880Z" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/5de57ee133a5: reexec failed: exit status 5"
time="2016-11-17T15:20:05.772683092Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:06.059212643Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:07.335686642Z" level=error msg="Failed to delete a new service for vip 192.168.1.67 fwmark 2134: no such process"
time="2016-11-17T15:20:07.385135664Z" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/6699e7c03bbd: reexec failed: exit status 5"
time="2016-11-17T15:20:07.604064777Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:07.673852364Z" level=error msg="Failed to delete a new service for vip 192.168.1.75 fwmark 2097: no such process"
time="2016-11-17T15:20:07.766525370Z" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/6699e7c03bbd: reexec failed: exit status 5"
time="2016-11-17T15:20:09.080101131Z" level=error msg="Failed to create real server 192.168.1.155 for vip 192.168.1.35 fwmark 904 in sb 192203a127d6831c3f4a41eabdd8df5282e33c3e92b99c3baaf1f213042f5418: no such process"
time="2016-11-17T15:20:11.516338629Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:20:11.729274237Z" level=error msg="Failed to delete a new service for vip 192.168.1.83 fwmark 2124: no such process"
time="2016-11-17T15:20:11.887572806Z" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/5b810132057e: reexec failed: exit status 5"
time="2016-11-17T15:20:12.281481060Z" level=error msg="Failed to delete a new service for vip 192.168.1.73 fwmark 2136: no such process"
time="2016-11-17T15:20:12.395326864Z" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/5b810132057e: reexec failed: exit status 5"
time="2016-11-17T15:20:20.263565036Z" level=error msg="Failed to create real server 192.168.1.72 for vip 192.168.1.91 fwmark 2163 in sb cd59d61b16ac0325336121a8558e8215e42aa5300f75054df17a70bf1f3e6c0c: no such process"
time="2016-11-17T15:20:20.410996971Z" level=error msg="Failed to delete a new service for vip 192.168.1.95 fwmark 2144: no such process"
time="2016-11-17T15:20:20.456710211Z" level=error msg="Failed to add firewall mark rule in sbox /var/run/docker/netns/88d38a2bfb77: reexec failed: exit status 5"
time="2016-11-17T15:20:21.389253510Z" level=error msg="Failed to create real server 192.168.1.46 for vip 192.168.1.99 fwmark 2145 in sb cd59d61b16ac0325336121a8558e8215e42aa5300f75054df17a70bf1f3e6c0c: no such process"
time="2016-11-17T15:20:22.208965378Z" level=error msg="Failed to create real server 192.168.1.46 for vip 192.168.1.99 fwmark 2145 in sb e29369ad8ee5b12fb0c6f9bcb899514ab092f7da291a7c05eea758b0c19bfb65: no such process"
time="2016-11-17T15:20:23.334582312Z" level=error msg="Failed to create a new service for vip 192.168.1.97 fwmark 2166: file exists"
time="2016-11-17T15:20:23.495873232Z" level=error msg="Failed to create real server 192.168.1.48 for vip 192.168.1.17 fwmark 552 in sb e29369ad8ee5b12fb0c6f9bcb899514ab092f7da291a7c05eea758b0c19bfb65: no such process"
time="2016-11-17T15:20:25.831988014Z" level=error msg="Failed to create real server 192.168.1.116 for vip 192.168.1.41 fwmark 566 in sb 03ac1e71139ff2140f93c80d9e6b1d69abf442a0c2362610bee3e116e84ef434: no such process"
time="2016-11-17T15:20:25.850904011Z" level=error msg="Failed to create real server 192.168.1.116 for vip 192.168.1.41 fwmark 566 in sb 03ac1e71139ff2140f93c80d9e6b1d69abf442a0c2362610bee3e116e84ef434: no such process"
time="2016-11-17T15:20:37.159637665Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=6yhu3glre4tbz6d08lk2pq9eb
time="2016-11-17T15:20:48.229343512Z" level=error msg="Error closing logger: invalid argument"
time="2016-11-17T15:51:16.027686909Z" level=error msg="Error getting service internal: service internal not found"
time="2016-11-17T15:51:16.027708795Z" level=error msg="Handler for GET /v1.24/services/internal returned error: service internal not found"
time="2016-11-17T16:15:50.946921655Z" level=error msg="container status unavailable" error="context canceled" module=taskmanager task.id=cxmvk7p1hwznautresir94m3s
time="2016-11-17T16:16:01.994494784Z" level=error msg="Error closing logger: invalid argument"
UPDATE 2:
I tried removing the service and re-creating it and that did not resolve the issue.
UPDATE 3:
I went through and rebooted each node in the cluster one-by-one. After that things appear to be back to normal. However, I still don't know what caused this. More importantly, how do I keep this from happening again in the future?
