Unable to connect to DynamoDB table - UnknownEndpoint: Inaccessible host

I'm new to DynamoDB. I created a table and am trying to insert data into it. This works fine from my home internet connection, but from my office network I get the error below. I suspect this is due to a proxy issue. Can you please help me resolve it? Thank you.
[UnknownEndpoint: Inaccessible host: `dynamodb.ap-southeast-2.amazonaws.com'. This service may not be available in the `ap-southeast-2' region.]
message: 'Inaccessible host: `dynamodb.ap-southeast-2.amazonaws.com\'. This service may not be available in the `ap-southeast-2\' region.',
code: 'UnknownEndpoint',
region: 'ap-southeast-2',
hostname: 'dynamodb.ap-southeast-2.amazonaws.com',
retryable: true,
originalError:
{ [NetworkingError: getaddrinfo ENOTFOUND dynamodb.ap-southeast-2.amazonaws.com dynamodb.ap-southeast-2.amazonaws.com:443]
message: 'getaddrinfo ENOTFOUND dynamodb.ap-southeast-2.amazonaws.com dynamodb.ap-southeast-2.amazonaws.com:443',
code: 'NetworkingError',
errno: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'dynamodb.ap-southeast-2.amazonaws.com',
host: 'dynamodb.ap-southeast-2.amazonaws.com',
port: 443,
region: 'ap-southeast-2',
retryable: true,
time: Mon Sep 21 2015 11:19:58 GMT+1000 (AUS Eastern Standard Time) },
time: Mon Sep 21 2015 11:19:58 GMT+1000 (AUS Eastern Standard Time) }

Thank you for the pointers. I managed to solve the issue using the code snippet below.
var AWS = require('aws-sdk');
var proxy = require('proxy-agent');

// Route all AWS SDK requests through the corporate proxy
AWS.config.update({
  httpOptions: {
    agent: proxy('http://{user_name}:{password}@<proxy>:<port>')
  }
});
This is documented on the AWS SDK for JavaScript configuration page: http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html

Related

Empty S3 remote log files in Airflow 2.3.2

I configured remote S3 logging with the following variables:
- name: AIRFLOW__LOGGING__REMOTE_LOGGING
  value: 'True'
- name: AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER
  value: 's3://my-airflow/airflow/logs'
- name: AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID
  value: 'my_s3'
- name: AIRFLOW__LOGGING__LOGGING_LEVEL
  value: 'ERROR'
- name: AIRFLOW__LOGGING__ENCRYPT_S3_LOGS
  value: 'False'
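For reference, a minimal sketch of one way the my_s3 connection could be supplied, using Airflow's AIRFLOW_CONN_<conn_id> environment-variable convention (this is an assumption for illustration only; the credentials are placeholders and the actual connection may be defined elsewhere):
- name: AIRFLOW_CONN_MY_S3          # hypothetical: URI form of the my_s3 connection referenced above
  value: 'aws://<access_key_id>:<secret_access_key>@'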
So far the log files are created under the DAG and task path, with names like attempt=1.log, but they are always 0 bytes in size (empty). When I try to view the logs from Airflow I get this message (I'm using the KubernetesExecutor):
*** Falling back to local log
*** Trying to get logs (last 100 lines) from worker pod ***
*** Unable to fetch logs from worker pod ***
(400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'f3e0dd67-c8f4-42fc-945f-95dc42e8c2b5', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Mon, 01 Aug 2022 13:07:07 GMT', 'Content-Length': '136'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"name must be provided","reason":"BadRequest","code":400}\n'
Why are my log files empty?

Send the trace data of a website to OpenSearch using Jaeger and OpenTelemetry

I'm working on the observability part of OpenSearch, so I'm trying to collect the trace data of a WordPress website and send it to OpenSearch.
I collect the trace data using the WordPress plugin DecaLog, which sends it to the Jaeger agent; from Jaeger the data goes to the OpenTelemetry Collector, then to Data Prepper, and finally to OpenSearch.
Jaeger agent service in docker-compose:
jaeger-agent:
  container_name: jaeger-agent
  image: jaegertracing/jaeger-agent:latest
  command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
  ports:
    - "5775:5775/udp"
    - "6831:6831/udp"
    - "6832:6832/udp"
    - "5778:5778/tcp"
  networks:
    - our-network
The "command" line got me this error: Err: connection error: desc = "transport: Error while dialing dial tcp: lookup otel-collector on 127.0.0.11:53: server misbehaving"","system":"grpc","grpc_log":true
So I changed otel-collector to the IP of the otel-collector container.
The OTel Collector and Data Prepper are installed using docker-compose:
data-prepper:
  restart: unless-stopped
  container_name: data-prepper
  image: opensearchproject/data-prepper:latest
  volumes:
    - ./data-prepper/examples/trace_analytics_no_ssl.yml:/usr/share/data-prepper/pipelines.yaml
    - ./data-prepper/examples/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    - ./data-prepper/examples/demo/root-ca.pem:/usr/share/data-prepper/root-ca.pem
  ports:
    - "21890:21890"
  networks:
    - our-network
  depends_on:
    - "opensearch"
otel-collector:
  container_name: otel-collector
  image: otel/opentelemetry-collector:0.54.0
  command: [ "--config=/etc/otel-collector-config.yml" ]
  working_dir: "/project"
  volumes:
    - ${PWD}/:/project
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    - ./data-prepper/examples/demo/demo-data-prepper.crt:/etc/demo-data-prepper.crt
  ports:
    - "4317:4317"
  depends_on:
    - data-prepper
  networks:
    - our-network
The configuration of otel.yaml (to send data from OpenTelemetry to OpenSearch):
receivers:
  jaeger:
    protocols:
      grpc:
exporters:
  otlp/2:
    endpoint: data-prepper:21890
    tls:
      insecure: true
      insecure_skip_verify: true
  logging:
service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlp/2]
The configuration for the Data Prepper pipeline:
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ "http://localhost:9200" ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_raw: true
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["http://localhost:9200"]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_service_map: true
As of now I'm getting the following errors:
Jaeger agent:
Err: connection error: desc = \"transport: Error while dialing dial tcp otel-collector-container-IP:14250: i/o timeout\"","system":"grpc","grpc_log":true}
OpenTelemetry Collector:
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:86 Starting processors...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:98 Starting receivers...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:102 Exporter is starting... {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info static/strategy_store.go:203 No sampling strategies provided or URL is unavailable, using defaults {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info pipelines/pipelines.go:106 Exporter started. {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info service/collector.go:220 Starting otelcol... {"Version": "0.54.0", "NumCPU": 2}
2022-08-04T15:31:32.683Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-08-04T15:31:32.684Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "data-prepper:21890",
"ServerName": "data-prepper:21890",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp data-prepper-container-ip:21890: connect: connection refused" {"grpc_log": true}
Data Prepper:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.amazon.dataprepper.DataPrepper]: Constructor threw exception; nested exception is java.lang.RuntimeException: No valid pipeline is available for execution, exiting
Followed by this at the end:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-04T15:23:22,803 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-08-04T15:23:22,806 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
OpenSearch needs a separate tool to support ingestion of OpenTelemetry data. It is called Data Prepper and is part of the OpenSearch project. There is a nice getting-started guide on how to set up trace analytics in OpenSearch.
Data Prepper works similarly to Fluentd or the OpenTelemetry Collector, but it has proper support for OpenSearch as a data sink: it pre-processes trace data appropriately for the OpenSearch Dashboards tracing plugin. Data Prepper also supports the OpenTelemetry metrics format.
Are you still having issues running Data Prepper? The configuration used in this example has been updated since the latest release and should now be up to date and working: https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl.yml
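For reference, a condensed sketch of the flow described above, reusing the pipeline keys from the question (illustrative only; the OpenSearch host and credentials are placeholders, and this is not the exact contents of the linked example):
entry-pipeline:
  source:
    otel_trace_source:               # receives OTLP trace data from the collector
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:        # shapes spans for the Dashboards tracing plugin
  sink:
    - opensearch:                    # Data Prepper's native OpenSearch sink
        hosts: ["https://opensearch:9200"]   # placeholder endpoint
        username: "admin"
        password: "admin"
        trace_analytics_raw: true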

Connecting to Snowflake using Okta from R Workbench

Hi, I am trying to connect to Snowflake from R Workbench. This is the error received while connecting with Okta:
con <- dbConnect(jdbcDriver, "jdbc:snowflake://company.snowflakecomputing.com/?authenticator=https://company.okta.com/", 'name@company.com', 'pass')
Sep 16, 2021 10:07:42 PM net.snowflake.client.core.SessionUtil handleFederatedFlowError
SEVERE: IOException when authenticating with https://company.okta.com/
java.net.MalformedURLException: no protocol: /login/cert
at java.net.URL.<init>(URL.java:611)
at java.net.URL.<init>(URL.java:508)
at java.net.URL.<init>(URL.java:457)
at net.snowflake.client.core.SessionUtil.isPrefixEqual(SessionUtil.java:1218)
at net.snowflake.client.core.SessionUtil.federatedFlowStep4(SessionUtil.java:999)
at net.snowflake.client.core.SessionUtil.getSamlResponseUsingOkta(SessionUtil.java:1206)
at net.snowflake.client.core.SessionUtil.newSession(SessionUtil.java:378)
at net.snowflake.client.core.SessionUtil.openSession(SessionUtil.java:284)
at net.snowflake.client.core.SFSession.open(SFSession.java:446)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initialize(DefaultSFConnectionHandler.java:104)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initializeConnection(DefaultSFConnectionHandler.java:79)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.initConnectionWithImpl(SnowflakeConnectionV1.java:116)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:96)
at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:164)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
Sep 16, 2021 10:07:43 PM net.snowflake.client.core.SessionUtil handleFederatedFlowError
SEVERE: IOException when authenticating with https://company.okta.com/
java.net.MalformedURLException: no protocol: /login/cert
at java.net.URL.<init>(URL.java:611)
at java.net.URL.<init>(URL.java:508)
at java.net.URL.<init>(URL.java:457)
at net.snowflake.client.core.SessionUtil.isPrefixEqual(SessionUtil.java:1218)
at net.snowflake.client.core.SessionUtil.federatedFlowStep4(SessionUtil.java:999)
at net.snowflake.client.core.SessionUtil.getSamlResponseUsingOkta(SessionUtil.java:1206)
at net.snowflake.client.core.SessionUtil.newSession(SessionUtil.java:378)
at net.snowflake.client.core.SessionUtil.openSession(SessionUtil.java:284)
at net.snowflake.client.core.SFSession.open(SFSession.java:446)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initialize(DefaultSFConnectionHandler.java:104)
at net.snowflake.client.jdbc.DefaultSFConnectionHandler.initializeConnection(DefaultSFConnectionHandler.java:79)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.initConnectionWithImpl(SnowflakeConnectionV1.java:116)
at net.snowflake.client.jdbc.SnowflakeConnectionV1.<init>(SnowflakeConnectionV1.java:96)
at net.snowflake.client.jdbc.SnowflakeDriver.connect(SnowflakeDriver.java:164)
Error in .jcall(drv@jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
net.snowflake.client.jdbc.SnowflakeSQLException: JDBC driver encountered communication error. Message: Exception encountered when opening connection: no protocol: /login/cert.
I presume that Okta is set up with MFA. If so, that is the cause of the error, since the Snowflake drivers do not support native Okta authentication when MFA is enabled.
You need to use externalbrowser as the authenticator option (i.e. authenticator=externalbrowser in the connection string, instead of the Okta URL) if the requirement is to use Okta + MFA.

Exception 504 when registering the consumer

I've been working with Symfony 2.7 and the RabbitMQBundle to handle some long processes asynchronously.
After facing the issue where the MySQL connection dies after a few minutes, I discovered rabbitmq-cli-consumer, a small app written in Go that takes care of consuming the queue and hands each message's content to a command.
In my case, I use it with this command: ./rabbitmq-cli-consumer -c configuration-stock.conf --include -V -e 'php app/console amqp:consume:stock --env=prod -vvv', with this configuration file:
[rabbitmq]
host = HOST
username = USERNAME
password = PASSWORD
vhost=/VHOST
port=PORT
queue=stock
compression=Off
[exchange]
name=exports
type=direct
durable=On
[queuesettings]
routingkey=stock
messagettl=10000
deadLetterExchange=exports.dl
deadLetterroutingkey=stock
priority=10
To handle errors, I intend to use RabbitMQ's x-dead-letter-exchange and x-dead-letter-routing-key configuration, to be able to retry the message later (in case something went temporarily wrong).
My issue is that, when I define my queues in RabbitMQBundle's configuration, rabbitmq-cli-consumer is unable to consume the queue, throwing this error:
2018/04/23 11:35:54 Connecting RabbitMQ...
2018/04/23 11:35:54 Connected.
2018/04/23 11:35:54 Opening channel...
2018/04/23 11:35:54 Done.
2018/04/23 11:35:54 Setting QoS...
2018/04/23 11:35:54 Succeeded setting QoS.
2018/04/23 11:35:54 Declaring queue "stock"...
2018/04/23 11:35:54 Registering consumer...
2018/04/23 11:35:54 failed to register a consumer: Exception (504) Reason: "channel/connection is not open"
Here is the configuration I use for RabbitMQBundle:
old_sound_rabbit_mq:
    producers:
        exports:
            connection: default
            exchange_options:
                name: 'exports'
                type: direct
        exports_dl:
            connection: default
            exchange_options:
                name: 'exports.dl'
                type: direct
    consumers:
        stock_dead_letter:
            connection: default
            exchange_options:
                name: exports.dl
                type: direct
            queue_options:
                name: stock.dl
                routing_keys:
                    - stock
                arguments:
                    x-dead-letter-exchange: ['S', 'exports']
                    x-dead-letter-routing-key: ['S', 'stock']
                    x-message-ttl: ['I', 60000]
            callback: amqp.consumers.exports.stock
    multiple_consumers:
        exports:
            connection: default
            exchange_options:
                name: 'exports'
                type: direct
            queues:
                stock:
                    name: stock
                    callback: amqp.consumers.exports.stock
                    routing_keys:
                        - stock
                    arguments:
                        x-dead-letter-exchange: ['S', 'exports.dl']
                        x-dead-letter-routing-key: ['S', 'stock']
Has anyone ever encountered something similar? And how did you solve it?

Error when Creating New Item in datasource with relationship

I have a ONE to MANY relationship set up between two datasources (longer explanation is here).
I am intermittently getting errors when trying to create or edit an item in the non-owner datasource.
Thu Oct 19 11:59:20 GMT-700 2017
Drive Table internal error. Please try again. Caused by: Execution Failed. More information: Internal error encountered.. (HTTP status code: unknown) Error: Drive Table internal error. Please try again.
Thu Oct 19 11:59:20 GMT-700 2017
Creating new record: (Error) : Drive Table internal error. Please try again.
at PORequests.Panel13.Panel18.Add_Item.onClick:1:19
Thu Oct 19 11:59:20 GMT-700 2017
Creating new record failed.
When I looked at the server logs I saw:
{
insertId: "13ovnchfxwxsqq"
labels: {
script.googleapis.com/process_id: "EAEA1GOwsdCh8YjD5KVVeBwT21xln0G3WFk4ULblcC0zX8_PML2UzR0o6WSPEhKBCgXa9SY_g6dxTqLJKMqQXk0UrIJW6en9rymo8OEBcw0X4tv4pLZeSxFNgRbInWHBJiegZM4WyjaPBqw1q"
script.googleapis.com/project_key: "MBTigw5qnbrHScnjNatBP5Rs3AwaeoTZn"
script.googleapis.com/user_key: "AEiMU4fv5flw4TblvkAxoOl6zKB5363j63bCyxicf/bJJXwtBFLZYGEBRLOkH+Z68eLnXiHPnR98"
}
logName: "projects/project-id-8108427532903699662/logs/script.googleapis.com%2Fconsole_logs"
receiveTimestamp: "2017-10-19T18:59:21.737166790Z"
resource: {
labels: {
function_name: "__appmakerGlobalScriptWrapper"
invocation_type: "web app"
project_id: "project-id-8108427532903699662"
}
type: "app_script_function"
}
severity: "ERROR"
textPayload: "Drive Table internal error. Please try again.
Caused by: Execution Failed. More information: Internal error encountered.. (HTTP status code: unknown)
Error: Drive Table internal error. Please try again. "
timestamp: "2017-10-19T18:59:20.729Z"
}
Anyone else seen this issue? Is it a bug or am I doing something wrong?
