Following the "OpenStack installation guide for Ubuntu 12.04 LTS" I have configured a network of virtual machines with the keystone and swift services, and configured swift to use keystone for authentication. All works perfectly. The problem arose when I tried to configure a second swift service on keystone.
On keystone I created two swift services called "swift" and "swift2", both with the type property set to "object-store". I set the endpoints for both services: "http://proxyserver1:8080/v1/AUTH_%(tenant_id)s" for the first swift service and "http://proxyserver2:8080/v1/AUTH_%(tenant_id)s" for the second. For each swift service I created a user with the "admin" role, both belonging to the same tenant "service". When I try to authenticate a user on keystone using:
curl -s -d '{"auth": {"tenantName": "service", "passwordCredentials": {"username": "swift", "password": "PASSWORD"}}}' -H 'Content-type: application/json' http://identity-manager:5000/v2.0/tokens
I receive a response with an incorrect serviceCatalog array. It contains only two entries: the endpoints for keystone itself and one object-store service, so one object-store service endpoint is missing. Moreover, the endpoint of the only object-store service returned is wrong, because it is a mix of properties from the two object-store services:
{
  "endpoints": [
    {
      "adminURL": "http://proxyserver2:8080",
      "region": "regionOne",
      "internalURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41",
      "id": "0d30428e6a2d4035baf1c84401c8ff1b",
      "publicURL": "http://proxyserver1:8080/v1/AUTH_74eb7b8a36f64105a7d77fe00a2b6d41"
    }
  ],
  "endpoints_links": [],
  "type": "object-store",
  "name": "swift2"
}
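For comparison, this is roughly what I would expect a correct catalog to contain: two separate object-store entries, each with internally consistent URLs (IDs omitted and tenant id shown as a placeholder for illustration):

```json
[
  {
    "endpoints": [
      {
        "adminURL": "http://proxyserver1:8080",
        "region": "regionOne",
        "internalURL": "http://proxyserver1:8080/v1/AUTH_<tenant_id>",
        "publicURL": "http://proxyserver1:8080/v1/AUTH_<tenant_id>"
      }
    ],
    "endpoints_links": [],
    "type": "object-store",
    "name": "swift"
  },
  {
    "endpoints": [
      {
        "adminURL": "http://proxyserver2:8080",
        "region": "regionOne",
        "internalURL": "http://proxyserver2:8080/v1/AUTH_<tenant_id>",
        "publicURL": "http://proxyserver2:8080/v1/AUTH_<tenant_id>"
      }
    ],
    "endpoints_links": [],
    "type": "object-store",
    "name": "swift2"
  }
]
```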
My question is whether it is possible to configure two swift clusters on keystone. If the answer is yes, where could I have gone wrong?
I want to generate a git username/password for CodeCommit. I see how to do that from the AWS Web Console, but is there a way to do it via the AWS CLI?
You can use aws iam create-service-specific-credential, documented in the AWS CLI reference:
aws iam create-service-specific-credential --service-name codecommit.amazonaws.com --user-name xxxxxx
{
  "ServiceSpecificCredential": {
    "CreateDate": "2022-10-28T12:19:19+00:00",
    "ServiceName": "codecommit.amazonaws.com",
    "ServiceUserName": "xxxxxx-at-yyyyyyyyyyyy",
    "ServicePassword": "ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ",
    "ServiceSpecificCredentialId": "LLLLLLLLLLLLLLLLLLLLL",
    "UserName": "xxxxxx",
    "Status": "Active"
  }
}
The ServiceUserName can be used as the git user and the ServicePassword as the git password.
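As a sketch of how those credentials plug into a CodeCommit HTTPS remote (the region and repository name below are assumed placeholders, not from the output above):

```shell
# Placeholder values for illustration
user="xxxxxx-at-yyyyyyyyyyyy"   # ServiceUserName from the output above
region="us-east-1"              # assumed region
repo="my-repo"                  # assumed repository name

# CodeCommit HTTPS remotes follow this pattern; git prompts for the
# ServicePassword when you clone or push.
url="https://${user}@git-codecommit.${region}.amazonaws.com/v1/repos/${repo}"
echo "$url"
# → https://xxxxxx-at-yyyyyyyyyyyy@git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
```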
So I am using a Vault AppRole with Airflow as the secrets backend, and it keeps throwing a permission denied error on $VAULT_ADDR/v1/auth/approle/login. I tried using the AppRole from the CLI like:
vault write auth/approle/login role_id="$role_id" secret_id="$secret_id"
and it works fine.
But if I try it using the API:
curl --request POST --data @payload.json $VAULT_ADDR/v1/auth/approle/login
where payload.json contains the secret id and role id, it fails with permission denied.
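For reference, a minimal payload.json sketch (the role_id/secret_id values are hypothetical placeholders):

```shell
# Write a minimal AppRole login payload; the IDs below are fake
cat > payload.json <<'EOF'
{
  "role_id": "db02de05-fa39-4855-059b-67221c5c2f63",
  "secret_id": "6a174c20-f6de-a53c-74d2-6018fcceff64"
}
EOF

# The login call then reads the file with curl's @file syntax:
# curl --request POST --data @payload.json "$VAULT_ADDR/v1/auth/approle/login"
grep -q '"role_id"' payload.json && echo "payload.json written"
```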
Here is my policy:
vault policy write test-policy - <<EOF
path "kv/data/airflow/*" {
  capabilities = [ "read", "list" ]
}
EOF
It works fine for reading on this path.
and role:
vault write auth/approle/role/test-role token_ttl=4h token_max_ttl=5h token_policies="test-policy"
Don't know why it is failing with API.
An important thing to mention is that I am using cloud based HCP Vault.
The problem is with your AppRole authentication. You need to provide the admin namespace in your URL.
Change this:
curl --request POST --data @payload.json $VAULT_ADDR/v1/auth/approle/login
To this:
curl --request POST --data @payload.json $VAULT_ADDR/v1/admin/auth/approle/login
Furthermore, if you are accessing Vault from a third-party tool like Airflow, try adding "namespace=admin" to your config file.
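As a sketch of what that could look like in airflow.cfg (key names assumed from the provider's VaultBackend; the URL, role_id, and secret_id are placeholders):

```ini
[secrets]
backend = airflow.providers.hashicorp.secrets.vault.VaultBackend
backend_kwargs = {
    "url": "https://your-hcp-vault-cluster:8200",
    "auth_type": "approle",
    "role_id": "YOUR_ROLE_ID",
    "secret_id": "YOUR_SECRET_ID",
    "mount_point": "kv",
    "namespace": "admin"
    }
```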
Found the problem. HCP Vault uses a namespace (default = admin), which was needed in the URL:
$VAULT_ADDR/v1/admin/auth/approle/login
but the problem still exists in Airflow's HashiCorp provider. Changing auth_mount_point still concatenates it at the end as:
$VAULT_ADDR/v1/auth/{$auth_mount_point}
I am trying to look at the messages in a NATS cluster on a certain topic.
My Google searches led to https://github.com/KualiCo/nats-streaming-console and https://hub.docker.com/r/fjudith/nats-streaming-console but neither npm install nor yarn install worked. I am not sure if that's an issue with the image or with my system settings.
And since I am new here, I wasn't allowed to comment.
I have been running in circles for some time with this, so any pointers would be highly appreciated.
-Suresh
Use the nats CLI tool (https://github.com/nats-io/natscli/releases/):
To look at the messages being published on subject foo for example just use nats sub foo.
Note that NATS-Streaming (aka STAN) is now deprecated and replaced by JetStream which is built-in to the nats.io server.
You can use a web UI (not as good as RabbitMQ's, but still). It's a forked version of nats-webui. I am not the maintainer, but it works. It can also work with auth tokens.
Github: https://github.com/suisrc/NATS-WebUI
DockerHub: https://hub.docker.com/r/suisrc/nats-webui/tags
This may not be exactly what you are looking for on a cluster (but it could be). In the case of a single NATS instance, using the CLI tools:
General access:
nats --user username --password mypassword --server localhost:PORT
Listing the streams:
nats --user username --password mypassword --server localhost:PORT stream ls
Subscribing (a shell) to a stream:
nats --user username --password mypassword --server localhost:PORT subscribe SUBJECT.STREAM
Publishing a JSON message onto a stream:
cat <PathToMessage.json> | nats --user username --password mypassword --server localhost:PORT publish SUBJECT.STREAM --force-stdin
Show all messages published onto a Stream:
nats --user username --password mypassword --server localhost:PORT stream view
Additionally: Sample JSON Message:
{
  "id": "4152c2e7-f2be-42d8-86fe-5b94f2ed3678",
  "form": "triangle",
  "wrappedData": {
    "Color": "green"
  }
}
I am working on "Patch an AMI and update an Auto Scaling group" and followed the AWS documentation, but I am stuck at "Task 3: Create a runbook, patch the AMI, and update the Auto Scaling group" with the below error. To fix it I added user data when starting the instance (startInstances). Since it accepts only base64, I converted the command and provided the base64 string (UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==).
I tried to execute with the below user data but neither approach is working; I even tried adding a new step with the same commands, but still failed to patch the AMI.
Tried the below script:
<powershell> powershell.exe -Command Start-Service -Name AmazonSSMAgent </powershell> <persist>true</persist>
Tried to start and restart SSM agent.
Restart-Service AmazonSSMAgent
base64: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
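For reference, the base64 string used above can be reproduced from the plain-text PowerShell command like this:

```shell
# Encode the PowerShell command as base64 for the UserData field
printf 'Restart-Service AmazonSSMAgent\n' | base64
# → UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
```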
YAML sample:
mainSteps:
  - name: startInstances
    action: 'aws:runInstances'
    timeoutSeconds: 1200
    maxAttempts: 1
    onFailure: Abort
    inputs:
      ImageId: '{{ sourceAMIid }}'
      InstanceType: m3.large
      MinInstanceCount: 1
      MaxInstanceCount: 1
      SubnetId: '{{ subnetId }}'
      UserData: UmVzdGFydC1TZXJ2aWNlIEFtYXpvblNTTUFnZW50Cg==
Still, I am seeing the below error.
Step timed out while step is verifying the SSM Agent availability on the target instance(s). SSM Agent on Instances: [i-xxxxxxxx] are not functioning. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
Your suggestion/solutions help me a lot. Thank you.
I have troubleshot and fixed the issue.
The issue was that a security group was missing on the instance. For the SendCommand API of the SSM service to communicate with the SSM agent, the instance needs a security group that allows HTTPS (port 443). After I attached an SG allowing port 443, the SSM agent could communicate with the EC2 instance.
The EC2 instance's IAM role should also have the SSM full-access policy attached.
We might get the same issue when the SSM agent is not running on the EC2 instance; in that case we need to provide user data, or add a new step to the YAML or JSON Systems Manager document.
If you are working on a Windows instance, use the below script to start the SSM agent. If it's a Linux server, use the equivalent Linux commands.
{
  "schemaVersion": "2.0",
  "description": "Start SSM agent on instance",
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "runPowerShellScript",
      "inputs": {
        "runCommand": [
          "Start-Service AmazonSSMAgent"
        ]
      }
    }
  ]
}
Sorry, I don't know if I should ask the question here. I used the echo sample project and deployed it to Google Cloud Endpoints, and I want to configure it with Firebase auth instead of an API key. The following is the openapi.yaml:
paths:
  "/echo":
    post:
      description: "Echo back a given message."
      operationId: "echo"
      produces:
        - "application/json"
      responses:
        200:
          description: "Echo"
          schema:
            $ref: "#/definitions/echoMessage"
      parameters:
        - description: "Message to echo"
          in: body
          name: message
          required: true
          schema:
            $ref: "#/definitions/echoMessage"
      security:
        - firebase: []
And when I deploy it and access it with:
curl -d '{"message":"hello world"}' -H "content-type:application/json" "http://[IPADDRESS]:80/echo"
I get the error message.
"message": "Method doesn't allow unregistered callers (callers without
established identity). Please use API Key or other form of API consumer
identity to call this API.",
And if I add the API key:
curl -d '{"message":"hello world"}' -H "content-type:application/json" "http://35.194.225.89:80/echo?api_key=[API_KEY]"
I can get the correct result.
I am not sure how to configure the openapi.yaml; please help. Thank you very much.
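In case it helps frame the question: from the Cloud Endpoints docs, referencing a "firebase" security scheme seems to also require a matching securityDefinitions block, roughly like the sketch below (YOUR-PROJECT-ID is a placeholder for the Firebase project id):

```yaml
securityDefinitions:
  firebase:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://securetoken.google.com/YOUR-PROJECT-ID"
    x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    x-google-audiences: "YOUR-PROJECT-ID"
```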