Boto3 unable to connect to local DynamoDB running in Docker container - amazon-dynamodb

I'm at a complete loss. I have a Docker container running DynamoDB locally. From the terminal window, I run:
docker run -p 8010:8000 amazon/dynamodb-local
to start the container. It starts fine. I then run:
aws dynamodb list-tables --endpoint-url http://localhost:8010
to verify that the container and the local instance are working fine. I get:
{
"TableNames": []
}
That's exactly what I expect, and it tells me that the AWS CLI can connect to the local DB instance properly.
Now for the problem. I open a Python shell and type:
import boto3
db = boto3.client('dynamodb',
                  region_name='us-east-1',
                  endpoint_url='http://localhost:8010',
                  use_ssl=False,
                  aws_access_key_id='my_secret_key',
                  aws_secret_access_key='my_secret_access_key',
                  verify=False)
print(db.list_tables())
I get a ConnectionRefusedError. I have tried the connection with and without the secret keys, with and without use_ssl and verify, and nothing works. At this point I'm thinking it must be a bug with boto3. What am I missing?

Related

AWS credentials not found for celery-k8s deployment

I'm trying to run Dagster using celery-k8s, using examples/celery-k8s as a starting point. Upon running the pipeline from the playground I get:
Initialization of resources [s3, io_manager] failed.
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have configured the AWS credentials in env variables, as mentioned in the documentation:
deployments:
  - name: "user-code-deployment-test"
    image:
      repository: "somasays/dagster-usercode-example"
      tag: "0.5"
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/workspace/repo.py"
    port: 3030
    env:
      AWS_ACCESS_KEY_ID: AAAAAAAAAAAAAAAAAAAAAAAAA
      AWS_SECRET_ACCESS_KEY: qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
      AWS_DEFAULT_REGION: eu-central-1
I can also see that these values are set in the pod's environment variables, and after pip install awscli I can access the S3 location with aws s3 ls (see the screenshot below). The job pod, however, throws Unable to locate credentials.
Please help
The deployment configuration applies to the user code servers. Meanwhile, the celery executor runs your pipeline code in separate Kubernetes jobs. To provide your secrets there, you will want to configure the env_secrets field of the celery-k8s executor in your pipeline run config.
See https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-k8s/dagster_k8s/job.py#L321-L327 for details on the config.
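As a rough sketch, assuming the 0.x run-config schema for the celery-k8s executor, the run config entered in the playground could look something like this (dagster-aws-secret is a placeholder for a Kubernetes secret holding AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY):
execution:
  celery-k8s:
    config:
      env_secrets:
        - dagster-aws-secret
The keys in that secret are then exposed as environment variables inside each step job the executor launches.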

Azure Cosmos DB Emulator Not running - It Starts and then throws Error

I am trying to install and run the Cosmos DB Emulator on my machine, but it is not letting me connect to the Azure Cosmos DB Emulator. When I run the Azure Cosmos DB Emulator, it shows the "Started" notification and then opens a page in the browser, but the page does not load. I have tried everything I could find on Google.
Here is the Azure Cosmos DB Emulator Error:
And when the page is opened at https://localhost:8081/_explorer/index.html, Firefox shows "Unable to connect".
The strange thing is, when I installed it on another machine, it ran there without any issue.
After three hectic days, I have finally got the Cosmos DB Emulator running. Here is the workaround:
1. Shut down the Azure Cosmos DB Emulator.
2. I tried to run lodctr /R from an elevated ("Run as Administrator") command prompt; it threw an error: "Error: Unable to rebuild performance counter setting from system backup store, error code is 2".
3. The fix for that lodctr /R error is to run another command first:
"c:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i"
It installs ASP.NET (2.0.50727). Here is the link: https://www.stevefenton.co.uk/2016/04/unable-to-rebuild-performance-counter-setting-from-system-backup-store/
4. Then run lodctr /R again; this time it rebuilds the counters.
5. Start the Azure Cosmos DB Emulator and it will run.
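Condensed, the fix above boils down to two commands from an elevated command prompt (with the emulator shut down), followed by restarting the emulator:
c:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
lodctr /R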
I used to get this constantly; I realised I was shutting down my PC before stopping the emulator. Once I started stopping the emulator before shutdown, the issue went away for me.
There are 4 ports the emulator runs on out of the box: 10251, 10252, 10253, 10254. You can confirm this by typing:
.\CosmosDB.Emulator.exe -h
And you'll see:
/DirectPorts=    Comma-separated list of 4 ports to use for direct connectivity. Default is 10251,10252,10253,10254.
What tends to be my issue is that the emulator is still running on one of the above ports but .\CosmosDB.Emulator.exe /Shutdown doesn't effectively stop the process.
To combat this, I search to see which port is active:
PS C:\Program Files\Azure Cosmos DB Emulator> netstat -ano | findstr :10251
PS C:\Program Files\Azure Cosmos DB Emulator> netstat -ano | findstr :10252
PS C:\Program Files\Azure Cosmos DB Emulator> netstat -ano | findstr :10253
TCP 127.0.0.1:10253 0.0.0.0:0 LISTENING 27772
And then I stop that process:
stop-process 27772    # this is using PowerShell, but that's unimportant
Then I can start the emulator again without issue:
.\CosmosDB.Emulator.exe

Airflow - Failed to fetch log file from worker. 404 Client Error: NOT FOUND for url

I am running Airflowv1.9 with Celery Executor. I have 5 Airflow workers running in 5 different machines. Airflow scheduler is also running in one of these machines. I have copied the same airflow.cfg file across these 5 machines.
I have daily workflows setup in different queues like DEV, QA etc. (each worker runs with an individual queue name) which are running fine.
While scheduling a DAG on one of the workers (no other DAG had been set up for this worker/machine previously), I am seeing the error below in the 1st task, and as a result the downstream tasks are failing:
*** Log file isn't local.
*** Fetching here: http://<worker hostname>:8793/log/PDI_Incr_20190407_v2/checkBCWatermarkDt/2019-04-07T17:00:00/1.log
*** Failed to fetch log file from worker. 404 Client Error: NOT FOUND for url: http://<worker hostname>:8793/log/PDI_Incr_20190407_v2/checkBCWatermarkDt/2019-04-07T17:00:00/1.log
I have configured MySQL for storing the DAG metadata. When I checked task_instance table, I see proper hostnames are populated against the task.
I also checked the log location and found that the log is getting created.
airflow.cfg snippet:
base_log_folder = /var/log/airflow
base_url = http://<webserver ip>:8082
worker_log_server_port = 8793
api_client = airflow.api.client.local_client
endpoint_url = http://localhost:8080
What am I missing here? What configurations do I need to check additionally for resolving this issue?
Looks like the worker's hostname is not being correctly resolved.
Add a file hostname_resolver.py:
import os
import socket
import requests
def resolve():
    """
    Resolves Airflow external hostname for accessing logs on a worker
    """
    if 'AWS_REGION' in os.environ:
        # Return EC2 instance hostname:
        return requests.get(
            'http://169.254.169.254/latest/meta-data/local-ipv4').text
    # Use DNS request for finding out what's our external IP:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('1.1.1.1', 53))
    external_ip = s.getsockname()[0]
    s.close()
    return external_ip
And export: AIRFLOW__CORE__HOSTNAME_CALLABLE=airflow.hostname_resolver:resolve
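Equivalently, the same setting can go into airflow.cfg instead of an environment variable (assuming the same module path as above):
[core]
hostname_callable = airflow.hostname_resolver:resolve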
The webserver on the master needs to reach the worker to fetch the log and display it on the front-end page. To do that it has to resolve the worker's hostname. Since the hostname evidently cannot be resolved, add a hostname-to-IP mapping to /etc/hosts on the master.
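For example, the entry in the master's /etc/hosts might look like this (the address and hostname here are placeholders):
192.168.xxx.yyy    worker_hostname_0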
If this happens as part of a Docker Compose Airflow setup, the hostname resolution needs to be passed to the container hosting the webserver, e.g. through extra_hosts:
# docker-compose.yml
version: "3.9"
services:
  webserver:
    extra_hosts:
      - "worker_hostname_0:192.168.xxx.yyy"
      - "worker_hostname_1:192.168.xxx.zzz"
    ...
  ...
More details here.

Application is unable to connect to localstack SQS

As a test developer, I am trying to use localstack to mock SQS for integration tests.
Docker-compose:
localstack:
  image: localstack/localstack:0.8.7
  ports:
    - "4567-4583:4567-4583"
    - "9898:${PORT_WEB_UI-8080}"
  environment:
    - SERVICES=sqs
    - DOCKER_HOST=unix:///var/run/docker.sock
    - HOSTNAME=localstack
    - HOSTNAME_EXTERNAL=192.168.99.101
    - DEFAULT_REGION=us-east-1
  volumes:
    - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
After spinning up localstack SQS, I am able to connect, create a queue, and retrieve it via the AWS CLI. The Localstack dashboard also displays the created queue.
$ aws --endpoint-url=http://192.168.99.101:4576 --region=us-west-1 sqs create-queue --queue-name myqueue
{
"QueueUrl": "http://192.168.99.101:4576/queue/myqueue"
}
The application uses com.amazon.sqs.javamessaging.SQSConnectionFactory to connect to SQS. It also uses com.amazonaws.auth.DefaultAWSCredentialsProviderChain for the AWS credentials.
1) If I give "-e AWS_REGION=us-east-1 -e AWS_ACCESS_KEY_ID=foobar -e AWS_SECRET_ACCESS_KEY=foobar" while bringing up the application, I am getting
HTTPStatusCode: 403 AmazonErrorCode: InvalidClientTokenId
com.amazonaws.services.sqs.model.AmazonSQSException: The security token included in the request is invalid
2) If I give the ACCESS_KEY and SECRET_KEY of the AWS SQS, I am getting
HTTPStatusCode: 400 AmazonErrorCode: AWS.SimpleQueueService.NonExistentQueue
com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version.
Below is the application code. The first 2 log messages are printing the connection and session it obtained. The error is coming from "Queue publisherQueue = sqsSession.createQueue(sqsName);"
sqsConnection = (Connection) context.getBean("outputSQSConnection");
LOGGER.info("SQS connection Obtained " + sqsConnection);
sqsSession = sqsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
LOGGER.info("SQS Session Created " + sqsSession);
Queue publisherQueue = sqsSession.createQueue(sqsName);
I tried both "http://localstack:4576/queue/myqueue" and "http://192.168.99.101:4576/queue/myqueue". The results are the same.
Can you please help me to resolve the issue?
I ran into a similar issue a couple of weeks back. Looking at your config, I think you should just be able to use localhost. In my case, the services calling localstack were also running in Docker, and we ended up creating a Docker network so the containers could communicate.
I was able to figure out a solution by looking at the Localstack tests. The important thing to note here is that you need to set the endpoint configuration correctly.
private AmazonSQSClient getLocalStackConfiguredClient() {
    AmazonSQSClientBuilder clientBuilder = AmazonSQSClientBuilder.standard();
    clientBuilder.withEndpointConfiguration(getEndpointConfiguration(
            configuration.getLocalStackConfiguration().getSqsEndpoint(), awsRegion));
    return (AmazonSQSClient) clientBuilder.build();
}

private AwsClientBuilder.EndpointConfiguration getEndpointConfiguration(String endpoint, Regions awsRegion) {
    return new AwsClientBuilder.EndpointConfiguration(endpoint, awsRegion.getName());
}
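For instance, a concrete wiring against the localstack endpoint from the compose file above might look roughly like this (the endpoint URL, region, and dummy credentials are assumptions for illustration, not values taken from the original code):
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

// Build an SQS client pinned to the local localstack endpoint instead of the real AWS endpoint.
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "http://192.168.99.101:4576", "us-east-1"))       // assumed endpoint/region from the compose file
        .withCredentials(new AWSStaticCredentialsProvider(
                new BasicAWSCredentials("foobar", "foobar")))     // dummy credentials for local testing
        .build();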
Hopefully this helps you to resolve the issues.

Trying docker in docker getting TCP connection refused error

I am trying to run Docker in Docker and getting dial tcp 127.0.0.1:5000: connection refused. Can someone explain why this happens and how I can fix it?
Here is what I have tried:
docker run -it --privileged --name docker-server-test -d docker:1.7-dind
docker run --rm --link docker-server:docker docker:1.7 pull my-server:5000/qe/busybox
Unable to find image 'docker:1.7' locally
Trying to pull repository docker.io/library/docker ... 1.7: Pulling from library/docker
f4fddc471ec2: Already exists
da0daae25b21: Already exists
413668359dd0: Already exists
ab205815427f: Already exists
e8ace195c6b6: Already exists
2129588b76a3: Already exists
63f71ebd654b: Already exists
f3231b3888dd: Already exists
d449c5a1e017: Already exists
library/docker:1.7: The image you are pulling has been verified.
Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:c3666cc6458e02d780492c75acf1b0bf3424c8dd6882361438a9b93b46c2aa55
Status: Downloaded newer image for docker.io/docker:1.7
Pulling repository my-server:5000/qe/busybox
Get http://localhost:5000/v1/repositories/qe/busybox/tags: dial tcp 127.0.0.1:5000: connection refused
It looks like you're trying to pull an image from a registry running on your local machine -- in this case when you specify localhost as the place to pull the image from, it's trying to pull from localhost relative to the Docker daemon container (which isn't where your registry is listening). You probably want to instead pull from host-ip:5000/qe/busybox (likely something like 192.168.0.x:5000/qe/busybox).
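In other words, rerun the pull against the host's address instead of localhost, along these lines (the IP is a placeholder for your Docker host's address):
docker run --rm --link docker-server:docker docker:1.7 pull 192.168.0.x:5000/qe/busybox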
Do you have the environment variables set? Based on your OS, set the following Docker env variables if they are not already set:
DOCKER_HOST
DOCKER_CERT_PATH
DOCKER_TLS_VERIFY
You can get these details from your Docker client:
docker-machine env default
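On a Unix-like shell you can then apply them to the current session with:
eval $(docker-machine env default)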
