On GKE we are experiencing some random errors with the API.
For a while now we have been getting "Error dialing backend: EOF".
We use Jenkins on top of Kubernetes to manage our builds, and a while ago a job was killed with this error:
Executing shell script inside container [protobuf] of pod [kubernetes-bad0aa993add416e80bdc1e66d1b30fc-536045ac8bbe]
java.net.ProtocolException: Expected HTTP 101 response but was '500 Internal Server Error'
at com.squareup.okhttp.ws.WebSocketCall.createWebSocket(WebSocketCall.java:123)
at com.squareup.okhttp.ws.WebSocketCall.access$000(WebSocketCall.java:40)
at com.squareup.okhttp.ws.WebSocketCall$1.onResponse(WebSocketCall.java:98)
at com.squareup.okhttp.Call$AsyncCall.execute(Call.java:177)
at com.squareup.okhttp.internal.NamedRunnable.run(NamedRunnable.java:33)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This case looks a lot like: https://gitlab.com/gitlab-org/gitlab-runner/issues/3247
Many audit log entries show URLs like:
permission: "io.k8s.core.v1.pods.exec.create"
resource: "core/v1/namespaces/default/pods/pubsub-6132c0bc-2542-46a2-8041-c865f238698d-4ccc0-c1nkz-lqg5x/exec/pubsub-6132c0bc-2542-46a2-8041-c865f238698d-4ccc0-c1nkz-lqg5x"
and
permission: "io.k8s.core.v1.pods.exec.get"
resource: "core/v1/namespaces/default/pods/pubsub-a5a21f14-0bd1-4338-87b1-8658c3bbc7ad-9gm4n-8nz14/exec"
But I don't understand why this error occurs on Kubernetes...
Update:
These errors can be cross-checked against two kube-apiserver metrics:
- ssh_tunnel_open_count
- ssh_tunnel_open_fail_count
In my case, the number of failed SSH tunnel opens starts growing once more than 200 SSH tunnels are open.
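A quick way to read these counters (assuming your credentials can reach the apiserver's raw /metrics endpoint) is:
kubectl get --raw /metrics | grep ssh_tunnel_open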
For information, we ran some tests on GKE:
- switching from a zonal to a regional cluster
- switching to the new VPC-native networking (alias IPs)
But this did not solve the problem.
After disabling auto-scaling on the node pool, we no longer see the error.
I could fix this issue by deactivating the autoscaling profile optimize-utilization, i.e. resetting the profile back to the default balanced. optimize-utilization is in beta anyway.
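For reference, resetting the profile can be done with gcloud (the cluster name below is a placeholder, and the flag may require the beta track depending on your gcloud version):
gcloud beta container clusters update my-cluster --autoscaling-profile balanced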
This is a very strange situation that's driving me nuts, and I would really appreciate some help here.
I am using CDK to define the DynamoDB table and associated indices. To test them locally, I installed cdklocal and DynamoDB local using localstack. When the computer (Mac running Ventura 13.1) is restarted, everything works as expected. Here is the script I use to bootstrap and start the stack (this is in a file called startStack.sh):
docker-compose up -d
echo "Waiting for 5 seconds"
sleep 5
cd test-app
cdklocal bootstrap
echo "Waiting for 5 seconds"
sleep 5
cdklocal deploy TestAppStack
#cdklocal deploy TestAppStack/ops-table
DYNAMO_ENDPOINT="http://localhost:4566/" dynamodb-admin &
open http://0.0.0.0:8001
cd ..
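As an aside, the fixed sleep 5 is only a guess at LocalStack's startup time. A possibly more robust alternative is to poll LocalStack's health endpoint until it answers (the /_localstack/health path is an assumption that varies by LocalStack version; older releases expose /health):
until curl -sf http://localhost:4566/_localstack/health > /dev/null; do sleep 1; done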
The test-app directory contains a local copy of the DynamoDB (and associated indices) definition. I do not encounter any errors running the cdklocal (or cdk) deploy commands so I am assuming that the CDK definition is not an issue.
The docker-compose looks like this:
version: "3.8"
services:
  localstack:
    container_name: AWS-DEVELOPMENT-WITH-LOCALSTACK
    image: localstack/localstack:latest
    network_mode: bridge
    ports:
      - "127.0.0.1:53:53"
      - "127.0.0.1:53:53/udp"
      - "127.0.0.1:443:443"
      - "127.0.0.1:4566:4566"
      - "127.0.0.1:4571:4571"
      - "127.0.0.1:${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - DYNAMODB_SHARE_DB=1
      - DISABLE_CORS_CHECKS=1
      - SERVICES=s3,dynamodb,sns,sqs,firehose,kinesis,ses,sts,cloudformation
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - PORT_WEB_UI=8080
      - LAMBDA_EXECUTOR=local
      - KINESIS_ERROR_PROBABILITY=1.0
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=./.localstack
    volumes:
      - './.localstack:/var/lib/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
Everything works as expected when I first run the startStack.sh file - the dynamodb-admin window opens up correctly and other interfaces can interact with the local DynamoDB table. But after some time (and I have not been able to pinpoint the cause), all interactions with local DynamoDB start failing with the following error(s):
Bootstrapping environment aws://000000000000/us-west-2...
❌ Environment aws://000000000000/us-west-2 failed bootstrapping: UnknownEndpoint: Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region.
at Request.ENOTFOUND_ERROR (/usr/local/lib/node_modules/aws-sdk/lib/event_listeners.js:611:46)
at Request.callListeners (/usr/local/lib/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/usr/local/lib/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/usr/local/lib/node_modules/aws-sdk/lib/request.js:686:14)
at error2 (/usr/local/lib/node_modules/aws-sdk/lib/event_listeners.js:443:22)
at ClientRequest.<anonymous> (/usr/local/lib/node_modules/aws-sdk/lib/http/node.js:99:9)
at ClientRequest.emit (node:events:513:28)
at ClientRequest.emit (node:domain:489:12)
at Socket.socketErrorListener (node:_http_client:494:9)
at Socket.emit (node:events:513:28) {
code: 'UnknownEndpoint',
region: 'us-west-2',
hostname: 'localhost',
retryable: true,
originalError: [Error],
time: 2023-01-15T06:46:40.614Z
}
Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region.
The script hangs at the following message:
[16:52:01] Retrieved account ID 000000000000 from disk cache
[16:52:01] Assuming role 'arn:aws:iam::000000000000:role/cdk-hnb659fds-deploy-role-000000000000-us-west-2'.
[16:52:01] Assuming role failed: Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region.
[16:52:01] Could not assume role in target account using current credentials Inaccessible host: `localhost' at port `4566'. This service may not be available in the `us-west-2' region. . Please make sure that this role exists in the account. If it doesn't exist, (re)-bootstrap the environment with the right '--trust', using the latest version of the CDK CLI.
current credentials could not be used to assume 'arn:aws:iam::000000000000:role/cdk-hnb659fds-deploy-role-000000000000-us-west-2', but are for the right account. Proceeding anyway.
[16:52:01] Waiting for stack CDKToolkit to finish creating or updating...
Restarting the computer fixes it, but it's not clear what causes the issue in the first place. Restarting Docker does not help either.
Any thoughts on what could be causing the problem and how I can avoid it?
I'm adding this as an answer; although I do not have a definitive one, I thought I would try to help.
I believe your port is being occupied, and thus the process you are running is unable to bind to it, resulting in the error. Before running the job, check whether the port is occupied:
sudo lsof -i :4566
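If it is, find and stop the offending process before re-running the script. A minimal guard (just a sketch) you could put at the top of startStack.sh:
# abort early if something is already listening on LocalStack's port
if sudo lsof -t -i :4566 > /dev/null 2>&1; then
  echo "Port 4566 is in use by PID(s): $(sudo lsof -t -i :4566)" >&2
  exit 1
fi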
I am trying to run this Dockerfile: https://gitlab.com/snippets/1713665
[screenshot: two consoles]
I have a running Iroha container on port 50051, as you can see in the right console. But when I run the above Dockerfile for gRPC-web, you can see in the left console that it is unable to make a connection. I have also tried enabling and disabling the firewall, and opening port 50051 with sudo ufw allow 50051, but in the end I get the same result:
"Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused". Reconnecting... system=system"
I also posted this issue a month ago but no one gave me a response; that's why I am reposting with further elaboration.
Try running the gRPC web proxy with the backend address set to localhost, instead of whatever the default is in the GitLab post, e.g.:
./grpcwebproxy-v0.13.0-osx-x86_64 --backend_addr=localhost:50051 --run_tls_server=false
From the console logs, it looks like it is trying to connect to dev.localdomain:50051
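Note also that if the proxy itself runs inside a container, 127.0.0.1 refers to that container, not to the host where Iroha's port is published. On Linux you can rule this out by sharing the host network (the image name below is a placeholder):
docker run --rm --network=host my-grpcwebproxy-image --backend_addr=localhost:50051 --run_tls_server=false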
Background
We run a Kubernetes cluster that hosts several PHP/Lumen microservices. We started seeing the apps' php-fpm/nginx report a 499 status code in their logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the application logs a 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is that nginx logs the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something in front of the nginx/application layer is terminating the connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my suspects are either the ELB or the ingress, since the application is the one left with no socket to return to. So I started digging into the ingress logs...
Potential core problem?
While looking through the ingress logs, I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to the next one... but I can't seem to find any configmap value or annotation that would let me control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link some info on how much memory you should give the zone.
First, get the ingress controller version by checking the image version on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code, or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
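For example, to grep for it directly on a running controller pod (the namespace and pod name are placeholders):
kubectl -n ingress-nginx exec <ingress-controller-pod> -- grep vhost_traffic_status /etc/nginx/template/nginx.tmpl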
This gives us the config variable to look up in the code. Our next grep, for VtsStatusZoneSize, leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
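As a sketch, setting the key could look like this (the configmap name and namespace depend on your install, and 32m is just an example size):
kubectl -n ingress-nginx patch configmap ingress-nginx-ingress-controller --type merge -p '{"data":{"vts-status-zone-size":"32m"}}'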
When it comes to what size you may want to set the zone to, the docs suggest setting it to more than twice the used size:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.
I have an API that has been published in the WSO2 API gateway.
When I test the API, I get this error message in the console:
Exception in thread "pool-65-thread-1"
java.lang.NumberFormatException: For input string: "0:0:0:0:0:0:0:1"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.ipToLong_aroundBody512(APIUtil.java:7851)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.ipToLong(APIUtil.java:7847)
at org.wso2.carbon.apimgt.gateway.throttling.publisher.DataProcessAndPublishingAgent.run_aroundBody4(DataProcessAndPublishingAgent.java:155)
at org.wso2.carbon.apimgt.gateway.throttling.publisher.DataProcessAndPublishingAgent.run(DataProcessAndPublishingAgent.java:141)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
According to a blog post, it is necessary to disable IPv6. I disabled IPv6 via the registry and set the JVM to IPv4-only via JAVA_OPTS in the wso2server.bat file, as shown below.
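For reference, this is the kind of change I made (I am assuming -Djava.net.preferIPv4Stack=true is the standard switch for IPv4-only JVM operation):
rem in wso2server.bat, alongside the existing JAVA_OPTS entries
set JAVA_OPTS=%JAVA_OPTS% -Djava.net.preferIPv4Stack=true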
Then I restarted, but it still shows an IPv6 address:
[2020-01-15 14:40:21,171] INFO - PassThroughListeningIOReactorManager Pass-through HTTP Listener started on 0:0:0:0:0:0:0:0:8280
[2020-01-15 14:40:21,172] INFO - PassThroughHttpMultiSSLListener Starting Pass-through HTTPS Listener...
[2020-01-15 14:40:21,192] INFO - PassThroughListeningIOReactorManager Pass-through HTTPS Listener started on 0:0:0:0:0:0:0:0:8243
[2020-01-15 14:40:21,449] INFO - TaskServiceImpl Task service starting in STANDALONE mode...
[2020-01-15 14:40:21,509] INFO - RegistryEventingServiceComponent Successfully Initialized Eventing on Registry
[2020-01-15 14:40:21,652] INFO - JMXServerManager JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi
When I run it again, I get the same error.
Please help.
In the latest API Manager versions, we have provided IPv6 support for throttling use cases. If you take the latest pack or a WUM-updated pack of your current version, you should not get this issue.
Also, as a workaround, you could set an IPv4 address in the X-Forwarded-For header and invoke the API.
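For example (the gateway URL, resource path, and token below are placeholders):
curl -k -H "X-Forwarded-For: 10.0.0.1" -H "Authorization: Bearer <access-token>" https://localhost:8243/myapi/1.0/resource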
I am new to Flume, but I want to stream weather data from a website to my HDFS location, so I have created the sink, source, and channel as below:
weather.channels= memory-channel
weather.channels.memory-channel.capacity=10000
weather.channels.memory-channel.type = memory
weather.sinks = hdfs-write
weather.sinks.hdfs-write.channel=memory-channel
weather.sinks.hdfs-write.type = logger
weather.sinks.hdfs-write.hdfs.path = hdfs://localhost:8020/user/hadoop/flume
weather.sinks.hdfs-write.rollInterval = 1200
weather.sinks.hdfs-write.hdfs.writeFormat=Text
weather.sinks.hdfs-write.hdfs.fileType=DataStream
weather.sources= Weather
weather.sources.Weather.bind = api.openweathermap.org/data/2.5/forecast/city?id=524901&APPID=********************************
weather.sources.Weather.channels=memory-channel
weather.sources.Weather.type = netcat
weather.sources.Weather.port = 80
So I am using an API here to work with this.
What else can I use to stream in weather data? What online website can I use, or which API should I use to configure the source?
While executing the flume-ng command to start the agent, I am getting the following:
15/03/18 11:13:28 ERROR lifecycle.LifecycleSupervisor: Unable to start EventDrivenSourceRunner:{ source:org.apache.flume.source.http.HTTPSource{name:Weather,state:IDLE} } - Exception follows.
java.lang.IllegalStateException: Running HTTP Server found in source:Weather before I started one.Will not attempt to start.
at com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at org.apache.flume.source.http.HTTPSource.start(HTTPSource.java:189)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
15/03/18 11:13:31 INFO lifecycle.LifecycleSupervisor: Stopping lifecycle supervisor 10
15/03/18 11:13:31 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider stopping
15/03/18 11:13:31 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: memory-channel stopped
The "lyfecycle" error you see is the cause of a previous error trying to start the http server.
The original error is likely due to trying to bind to the priviledged 80 port with non root user. Change the port to >1024, e.g. 8080
However, it won't work as you are trying to use. A http or netcat source listens to calls, doesn't go an fetch the url you are setting in bind.
I see two options:
- Create a Linux daemon that runs wget or curl against that URL at regular intervals, saves the result to a file, and then configure Flume with the spooldir source (see the sketch after this list).
- Create your own Flume source that polls that URL at regular intervals.
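A minimal sketch of the first option (the spool directory, polling interval, and YOUR_API_KEY are all placeholders to adapt; it reuses the OpenWeatherMap URL from your config):
#!/bin/bash
# fetch the forecast periodically and drop complete files into a Flume spooldir
SPOOL_DIR=/var/spool/flume/weather
mkdir -p "$SPOOL_DIR"
while true; do
  # write to a temp file first so Flume never picks up a partially written file
  TMP=$(mktemp)
  if curl -s "http://api.openweathermap.org/data/2.5/forecast/city?id=524901&APPID=YOUR_API_KEY" -o "$TMP"; then
    mv "$TMP" "$SPOOL_DIR/weather-$(date +%s).json"
  else
    rm -f "$TMP"
  fi
  sleep 300
done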