Is it possible to generate static Git credentials for HTTPS connections to AWS CodeCommit using only the CLI and not the AWS Console? - aws-codecommit

I want to generate a Git username/password for CodeCommit. I see how to do that from the AWS web console, but is there a way to do it via the AWS CLI?

You can use aws iam create-service-specific-credential, documented here:
aws iam create-service-specific-credential --service-name codecommit.amazonaws.com --user-name xxxxxx
{
    "ServiceSpecificCredential": {
        "CreateDate": "2022-10-28T12:19:19+00:00",
        "ServiceName": "codecommit.amazonaws.com",
        "ServiceUserName": "xxxxxx-at-yyyyyyyyyyyy",
        "ServicePassword": "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ",
        "ServiceSpecificCredentialId": "LLLLLLLLLLLLLLLLLLLLL",
        "UserName": "xxxxxx",
        "Status": "Active"
    }
}
The ServiceUserName can be used as the Git username and the ServicePassword as the Git password.
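The generated values plug into a standard CodeCommit HTTPS clone URL. A minimal sketch (the region and repository name are assumptions, and the echo only prints the command; Git prompts for the ServicePassword when you actually run it):

```shell
# GIT_USER is the ServiceUserName from the output above;
# region (us-east-1) and repo name (my-repo) are hypothetical
GIT_USER="xxxxxx-at-yyyyyyyyyyyy"
echo "git clone https://${GIT_USER}@git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo"
# → git clone https://xxxxxx-at-yyyyyyyyyyyy@git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
```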


Is there a way to view NATS messages?

I am trying to look at the messages in a NATS cluster on a certain topic.
My Google searches led to https://github.com/KualiCo/nats-streaming-console and https://hub.docker.com/r/fjudith/nats-streaming-console, but neither npm install nor yarn install worked. I am not sure if that's an issue with the image or with my system settings.
And since I am new here, I wasn't allowed to comment.
I have been running in circles for some time with this, so any pointers would be highly appreciated.
-Suresh
Use the nats CLI tool (https://github.com/nats-io/natscli/releases/):
To look at the messages being published on subject foo, for example, just use nats sub foo.
Note that NATS Streaming (aka STAN) is now deprecated and replaced by JetStream, which is built into the nats.io server.
You can use a web UI (not as good as RabbitMQ's, but still). It's a forked version of NATS-WebUI. I am not the maintainer, but it works, and it can also work with auth tokens.
Github: https://github.com/suisrc/NATS-WebUI
DockerHub: https://hub.docker.com/r/suisrc/nats-webui/tags
This may not be what you are looking for on a cluster (but it could be). In any case, for a single NATS instance, using the CLI tools:
General access:
nats --user username --password mypassword --server localhost:PORT
Listing the streams:
nats --user username --password mypassword --server localhost:PORT stream ls
Subscribing (from a shell) to a stream:
nats --user username --password mypassword --server localhost:PORT subscribe SUBJECT.STREAM
Publishing a JSON message onto a stream:
cat <PathToMessage.json> | nats --user username --password mypassword --server localhost:PORT publish SUBJECT.STREAM --force-stdin
Showing all messages published onto a stream:
nats --user username --password mypassword --server localhost:PORT stream view
Additionally, a sample JSON message (note: no trailing comma after the last property, or it is not valid JSON):
{
    "id": "4152c2e7-f2be-42d8-86fe-5b94f2ed3678",
    "form": "triangle",
    "wrappedData": {
        "Color": "green"
    }
}

Step timed out while step is verifying the SSM Agent availability on the target instance(s). SSM Agent on Instances: [i-xxxxxxx] are not functioning

I am working on "Patch an AMI and update an Auto Scaling group" and followed the AWS documentation to configure it, but I am stuck at "Task 3: Create a runbook, patch the AMI, and update the Auto Scaling group" with the error below. To fix it, I added user data while starting the instance (startInstances). As it accepts only base64, I converted the command and provided the base64 value (UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==).
I tried to execute with the user data below, but neither attempt works; I even tried adding a new step with the same commands, but it still failed to patch the AMI.
I tried the script below:
<powershell> powershell.exe -Command Start-Service -Name AmazonSSMAgent </powershell> <persist>true</persist>
I also tried starting and restarting the SSM agent:
Restart-Service AmazonSSMAgent
base64: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
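As a sanity check, that base64 value can be reproduced locally (a sketch; note that echo appends a trailing newline, which is included in the encoding):

```shell
# Encodes "Restart-Service AmazonSSMAgent" plus a trailing newline
echo 'Restart-Service AmazonSSMAgent' | base64
# → UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
```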
YAML sample:
mainSteps:
  - name: startInstances
    action: 'aws:runInstances'
    timeoutSeconds: 1200
    maxAttempts: 1
    onFailure: Abort
    inputs:
      ImageId: '{{ sourceAMIid }}'
      InstanceType: m3.large
      MinInstanceCount: 1
      MaxInstanceCount: 1
      SubnetId: '{{ subnetId }}'
      UserData: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
Still, I am seeing the below error.
Step timed out while step is verifying the SSM Agent availability on the target instance(s). SSM Agent on Instances: [i-xxxxxxxx] are not functioning. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
Your suggestion/solutions help me a lot. Thank you.
I troubleshot and fixed the issue.
The issue was that a security group was missing on the instance. For the SSM service's SendCommand API to communicate with the SSM agent, the instance needs a security group that allows HTTPS (port 443). Once I attached a security group allowing port 443, the SSM agent could communicate with the EC2 instance.
The EC2 instance's IAM role should also have an SSM full-access policy attached.
We might get the same issue when the SSM agent is not running on the EC2 instance; in that case, provide user data or add a new step to the YAML or JSON Systems Manager document.
If you are working on a Windows instance, use the script below to start the SSM agent. If it's a Linux server, use the equivalent Linux commands.
{
    "schemaVersion": "2.0",
    "description": "Start SSM agent on instance",
    "mainSteps": [
        {
            "action": "aws:runPowerShellScript",
            "name": "runPowerShellScript",
            "inputs": {
                "runCommand": [
                    "Start-Service AmazonSSMAgent"
                ]
            }
        }
    ]
}
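For a Linux instance, a corresponding step might look like this (a sketch; the service name amazon-ssm-agent is what systemd-based AMIs such as Amazon Linux 2 use, so adjust for your distribution):

```json
{
    "action": "aws:runShellScript",
    "name": "runShellScript",
    "inputs": {
        "runCommand": [
            "sudo systemctl start amazon-ssm-agent"
        ]
    }
}
```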

Sequelize-cli returns "Unknown Database" when doing migrations

I have been using Sequelize migrations all this while with no issues.
For example, on our development server:
"development": {
"username": "root",
"password": "password",
"database": "db",
"host": "127.0.0.1",
"dialect": "mysql"
}
running sequelize-cli works fine:
npx sequelize db:migrate
resulting in:
Sequelize CLI [Node: 12.16.1, CLI: 6.2.0, ORM: 6.3.5]
Loaded configuration file "config\config.json".
Using environment "development".
No migrations were executed, database schema was already up to date.
The same goes for our production server, where the DB is on a different server than the app:
"production": {
"username": "root",
"password": "password",
"database": "db",
"host": "172.xx.xx.11",
"dialect": "mysql"
}
Recently we upgraded our production setup to three DB servers running MariaDB in a Galera cluster, managed by a load balancer (MaxScale), using the same setup as before. So now it's something like:
server a: 172.xx.xx.11
server b: 172.xx.xx.12
server c: 172.xx.xx.13
load balancer: 172.xx.xx.10
Our new config is:
"production": {
"username": "root",
"password": "password",
"database": "db",
"host": "172.xx.xx.10",
"dialect": "mysql"
}
There is no firewall opening between the app server and the DB servers directly, only between the app server and the load balancer.
Testing the connection between the app server and the load balancer with Sequelize shows no issue: it connects when the username and password are correct, while a wrong username or password gives:
ERROR: Access denied for user 'root'@'172.xx.xx.10' (using password: YES)
So no issue there; I'm just noting that the connection works.
Then there is also no issue using:
npx sequelize db:drop
or
npx sequelize db:create
resulting in
Sequelize CLI [Node: 12.16.1, CLI: 6.2.0, ORM: 6.3.5]
Loaded configuration file "config\config.json".
Using environment "production".
Database db created.
I verified on all our DB servers that the database was indeed dropped and created.
But when I try to run migrations, this happens:
Sequelize CLI [Node: 12.16.1, CLI: 6.2.0, ORM: 6.3.5]
Loaded configuration file "config\config.json".
Using environment "production".
ERROR: Unknown database 'db'
I have verified that all our DB servers do have that 'db' database; it was even created by Sequelize based on the config, but somehow Sequelize can't seem to recognize or identify that 'db' database.
Please help if you have any experience like this before, and do let me know if you need more info.
Thanks.
You can enable the verbose log level in MaxScale by adding log_info=true under the [maxscale] section. This should help explain what is going on and why it is failing.
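For example, in the MaxScale configuration file (commonly /etc/maxscale.cnf; the path may differ on your system):

```ini
[maxscale]
log_info=true
```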
It is possible that Sequelize does something that assumes it's working with the same database server. For example, doing an INSERT and immediately reading the inserted value will always work on a single server but with a distributed setup, it's possible the values haven't replicated to all nodes.
If you can't find an explanation as to why it behaves like this or you think MaxScale is doing something wrong, please open a bug report on the MariaDB Jira under the MaxScale project.
It turns out the maxscale user didn't have enough privileges. Granting the SHOW DATABASES privilege to the maxscale user fixed my issue.
more info:
https://mariadb.com/kb/en/mariadb-maxscale-14/maxscale-configuration-usage-scenarios/#service
Related issue on MariaDB Jira
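For reference, the fix boils down to a grant like the following on each backend server (the 'maxscale'@'%' account name and host pattern are assumptions; use whatever your MaxScale service user actually is):

```sql
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
FLUSH PRIVILEGES;
```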

After changing passwords in vault.yml, deployment fails in Trellis

I had a WordPress site set up using Trellis. Initially I set up the server and deployed without encrypting vault.yml.
Once everything was working fine, I changed the passwords in vault.yml and encrypted the file. But my deployment fails now, and I get the following error:
TASK [deploy : WordPress Installed?]
**************************
System info:
Ansible 2.6.3; Darwin
Trellis version (per changelog): "Allow customizing Nginx `worker_connections`"
---------------------------------------------------
non-zero return code
Error: Error establishing a database connection. This either means that
the username and password information in your `wp-config.php` file is
incorrect or we can’t contact the database server at `localhost`. This
could mean your host’s database server is down.
fatal: [mysite.org]: FAILED! => {"changed": false,
"cmd": ["wp", "core", "is-installed", "--skip-plugins", "--skip-
themes", "--require=/srv/www/mysite.org/shared/tmp_multisite_constants.php"], "delta":
"0:00:00.224955", "end": "2019-01-04 16:59:01.531111",
"failed_when_result": true, "rc": 1, "start": "2019-01-04
16:59:01.306156", "stderr_lines": ["Error: Error establishing a
database connection. This either means that the username and password
information in your `wp-config.php` file is incorrect or we can’t
contact the database server at `localhost`. This could mean your host’s
database server is down."], "stdout": "", "stdout_lines": []}
to retry, use: --limit
#/Users/praneethavelamuri/Desktop/path/to/my/project/trellis/deploy.retry
Is there any step I missed? I followed these steps:
ansible-playbook server.yml -e env=staging
./bin/deploy.sh staging mysite.org
change passwords in staging/vault.yml
set vault password
inform ansible about password
encrypt the file
commit the file and push the repo
re-deploy, and then I get the error!
I got it solved. I had changed the sudo user's password in my vault too, so SSHing into the server, changing the sudo password to the password mentioned in the vault, then provisioning and deploying solved the issue.

Error on configuration of two object-store services on Keystone

Following the "OpenStack installation guide for Ubuntu 12.04 LTS", I have configured a network of virtual machines with Keystone and Swift services. I configured Swift to use the Keystone service for authentication, and all worked perfectly. The problem arose when trying to configure a second Swift service on Keystone. On Keystone I created two Swift services, called "swift" and "swift2", both with the type property set to "object-store". I set the endpoints for both services: "http://proxyserver1:8080/v1/AUTH_%(tenant_id)s" for the first Swift service and "http://proxyserver2:8080/v1/AUTH_%(tenant_id)s" for the second. For each Swift service I created a user with the "admin" role, both belonging to the same tenant, "service". When I try to authenticate a user on Keystone using:
curl -s -d '{"auth": {"tenantName": "service", "passwordCredentials": {"username": "swift", "password": "PASSWORD"}}}' -H 'Content-type: application/json' http://identity-manager:5000/v2.0/tokens
I receive a response with an incorrect serviceCatalog array. It contains only two entries: the endpoints for Keystone itself and one object-store service, so one object-store service endpoint is missing. Moreover, the endpoints of the one object-store service returned are wrong, because they are a mix of the properties of the two object-store services:
{
    "endpoints": [
        {
            "adminURL": "http://proxyserver2:8080",
            "region": "regionOne",
            "internalURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41",
            "id": "0d30428e6a2d4035baf1c84401c8ff1b",
            "publicURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41"
        }
    ],
    "endpoints_links": [],
    "type": "object-store",
    "name": "swift2"
}
My question is whether it is possible to configure two Swift clusters on Keystone. If the answer is yes, where could I have gone wrong?
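For context, with the legacy keystone CLI from that era, each service needs its own service entry and its own endpoint row tied to that service's ID. A sketch (the service ID is a placeholder you would substitute from the service-create output; URLs follow the question's setup):

```shell
# Create the second object-store service and note its ID
keystone service-create --name=swift2 --type=object-store --description="Second Object Storage"
# Attach the endpoint to that specific service ID
keystone endpoint-create --region regionOne \
  --service-id=<swift2-service-id> \
  --publicurl='http://proxyserver2:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl='http://proxyserver2:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl='http://proxyserver2:8080'
```

If an endpoint was created against the wrong service ID, the catalog can end up with mixed properties like the one shown above.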
