Juju vault causing problems when deploying openstack/base on maas and charmed-kubernetes - openstack

I have deployed openstack/base on MaaS as indicated here. After trying to deploy charmed-kubernetes with an openstack-integrator and vault overlay, I cannot run openstack client commands on the MaaS node, and the images uploaded to the dashboard are not recognized, which means the ubuntu charms cannot be deployed. When I do, for example,
openstack catalog list
I get
Failed to discover available identity versions when contacting https://keystone_ip:5000/v3. Attempting to parse version from URL.
SSL exception connecting to https://keystone_ip:5000/v3/auth/tokens: HTTPSConnectionPool(host='keystone_ip', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')))
However, when I ssh into the keystone container, there is a keystone_juju_ca_cert.crt with
Issuer: CN = Vault Root Certificate Authority (charm-pki-local)
and
Subject: CN = Vault Root Certificate Authority (charm-pki-local)
I have also tried to reissue the certificates and refresh the secrets through actions in the vault application, but to no avail.
Can somebody help me here?

I don't know anything about juju or openstack, but it looks to me like the problem isn't on the keystone container, but on your local machine (or wherever you are running this openstack catalog list command). The local machine doesn't appear to have the charm-pki-local CA certificate installed, so it can't verify the connection to the keystone server.

You need to get the root CA from vault using juju and then reference that file in your openrc file via the OS_CACERT environment variable.
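For example, a minimal sketch (assuming the vault charm's get-root-ca action; the file path is just an illustration):
juju run-action --wait vault/0 get-root-ca
Save the certificate from the action output to a file, e.g. ~/vault_ca.crt, then in your openrc:
export OS_CACERT=~/vault_ca.crt
openstack catalog list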

Related

Error with Plumber API deployment in Digital Ocean via R | Authentication with ssh server failed

I am facing some issues with Plumber API deployment in Digital Ocean via R. Given below is the R code I executed on my desktop (in RStudio) after making the following configurations:
Created an SSH key on my desktop (e.g. via PuTTYgen)
Entered the public key content in my Digital Ocean account
library(plumber)
library(analogsea)
library(ssh)
myExistingDroplets <- analogsea::droplets()
do_provision(name="myNewDroplet")
Logs from console:
THIS ACTION COSTS YOU MONEY!
Provisioning a new server for which you will get a bill from DigitalOcean.
Using default ssh keys: PC
NB: This costs $0.00744 / hour until you droplet_delete() it
Waiting for create ...........
New server key: <key>
Error: Authentication with ssh server failed
The code line analogsea::droplets() opened a new window in my browser to facilitate authentication, and after authentication the code returned my existing droplet details from Digital Ocean. That's a small win!
The code line do_provision(name="myNewDroplet") prompted me to provide my Digital Ocean root password, then it created a new droplet in my Digital Ocean account! However, it didn't completely succeed (I assume), as it gave the error "Error: Authentication with ssh server failed", and I am unable to reach this new server/droplet from my browser (on port 8000, which plumber listens to).
One thing I am struggling to understand is that it printed out a New server key: <key> (SSH fingerprint) on the console which doesn't match the SSH key pair I configured on my PC + the Digital Ocean website (under Settings -> SSH keys). I am not sure why it would create a new SSH server key as part of the droplet creation and why that would cause an authentication failure. My understanding is that an SSH key needs to be created at the client (my PC) and the public key of that pair should be entered in Digital Ocean.
I would appreciate it if you could share any thoughts on the root cause of this error, or if you think I missed any steps in between.
Thanks in advance!

JFrog Xray Certificate Issue

Summary:
Xray helm chart needs the capability to receive a custom certificate used for Artifactory and apply that certificate to the router container.
Detail:
We have successfully installed Artifactory via helm. After installing, we configured TLS for the Artifactory web application using a custom certificate. When trying to deploy Xray via helm, the xray server pod’s router container will continually fail to connect to Artifactory with an error message of
Error: Get https://[url redacted]/access/api/v1/system/ping: x509: certificate signed by unknown authority
There does not appear to be any way to pass in a secret that would contain the custom certificate. It looks like at this time the only option is to customize the helm chart to install the certificate in the container, but that would put us out of sync with JFrog for receiving vulnerability database updates or any other updates Xray requests from JFrog.
Edit - Someone is having the same issue. Can Xray even do what we are trying to do then?
Update from Jfrog Support:
With regards to the query on adding/importing the CA cert chain to Xray, we already have a Jira for this and our team is currently working on it. As a workaround, I would request that you mount a custom volume with the SSL cert, then run the command to import the SSL cert into the cacerts file from the init container.
Workaround:
Create a Kubernetes ConfigMap containing the root and subordinate CA certificates, and mount it into the xray-server container at /usr/local/share/ca-certificates. Then log into the node, do a docker exec -it -u root into the xray-server container (since the container runs as a non-root user), and run update-ca-certificates to import the CA certs. Once you have done this, Xray will be able to talk to Artifactory.
The drawback of this workaround is that you need to run the above steps every time the container restarts.
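A minimal sketch of those steps (the ConfigMap name, file names, and the xray namespace are illustrative, not from the chart):
kubectl create configmap ca-certs -n xray --from-file=ca-root.crt --from-file=ca-sub.crt
Mount the ConfigMap into the xray-server pod at /usr/local/share/ca-certificates via the chart's volume options, then, on the node running the pod:
docker exec -it -u root <xray-server-container-id> update-ca-certificates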

API Management 2018.1 and DataPower 7.7

I am trying to add DataPower 7.7 into API Management 2018.1.
I need to configure API Connect Gateway Service in DataPower (new APIC 2018.1 doesn't work with XML Management Service).
After configuration I got an error:
8:07:19 mgmt notice 959 0x00350015 apic-gw-service (default):
Operational state down
8:07:19 apic-gw-service error 959 0x88e00001 apic-gw-service
(default): Unexpected queue error: Domain check failed! Please ensure that
the 'default' domain exists and is enabled. Also, please verify that the API
Gateway Service is configured with the correct domain and SOMA credentials.
8:07:19 apic-gw-service error 959 0x88e000a0 apic-gw-service
(default): Failed to initialize gateway environment: datapower
DP version is 7.7.
Please advise if you have any information or manuals.
Note: Domain exists, main services are enabled
It's hard to tell what exactly the problem is based on the log messages shown above.
Update to original answer:
See also the documentation that is now available in the IBM API Connect Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSMNED_2018/com.ibm.apic.install.doc/tapic_install_datapower_gateway.html
However, here are the basic steps for configuring a DataPower gateway to work with API Connect 2018.x.
You will need to ensure:
DataPower is running DP 7.7.0.0 or higher.
You have the AppOpt license installed. (Use the “show license” command in the DataPower CLI to confirm.)
You have a shared certificate and a private key for securing the communication between the API Connect management server and the gateway.
On DataPower, you need to:
Create an application domain. All of the subsequent configuration should be done in the application domain.
Enable statistics
Upload your private key and shared certificate to the cert:// directory in the application domain.
Create a crypto key object, a crypto certificate and a crypto identification credentials object using your key and certificate.
Create an SSL client profile and an SSL server profile that reference the crypto identification credential object.
Configure a gateway-peering object.
Configure and enable the API Connect Gateway Service in the application domain.
At that point, you should be able to configure the gateway in the API Connect cloud manager.
Here are the DataPower CLI commands to create a basic configuration. In the configuration below, IP address 1.1.1.1 represents a local IP address on your DataPower appliance. Traffic from the API Connect management server to the gateway will be sent to port 3000. API requests will go to port 9443 (but you can change it to the more standard port, 443, if you prefer.)
For a production environment, you will want to build on this configuration to ensure you are running with at least 3 gateways in the peer group, but this will get you started.
Create the application domain called apiconnect
top; configure terminal;
domain apiconnect; visible default; exit;
write mem
Use the Web GUI to upload your private key and shared certificate to the cert:// folder in the apiconnect domain
Then run these commands to create the configuration in the apiconnect domain
switch apiconnect
statistics
crypto
key gw_to_apic cert:///your-privkey.cer
certificate gw_to_apic cert:///your-sscert.cer
idcred gw_to_apic gw_to_apic gw_to_apic
ssl-client gwd_to_mgmt
idcred gw_to_apic
no validate-server-cert
exit
ssl-server gwd_to_mgmt
idcred gw_to_apic
no request-client-auth
validate-client-cert off
exit
exit
gateway-peering apic
admin-state enabled
local-address 1.1.1.1
local-port 15379
monitor-port 25379
priority 100
enable-ssl off
enable-peer-group off
persistence local
exit
apic-gw-service
admin-state enabled
local-address 0.0.0.0
local-port 3000
api-gw-address 0.0.0.0
api-gw-port 9443
v5-compatibility-mode on
gateway-peering apic
ssl-server gwd_to_mgmt
ssl-client gwd_to_mgmt
exit
write mem
The problem you are seeing is an issue with creating your API Connect service in the default domain. To work around it, just put your API Gateway Service in a domain other than default.

AWS CodeDeploy vs Windows 2016 in ASG

I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in AutoScaling Group.
It's working fine for Windows 2012 R2 with all Deployment configurations.
But for Windows 2016 it totally fails on an "OneAtATime" deployment;
during an "AllAtOnce" deployment only one or two instances deploy successfully, and all the others fail.
In the agent's log file, this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds and other settings are the same; I even tested this on a brand new AWS account.
Has anybody faced such behaviour?
I ran into the same problem, but during my investigation I found out that the server's route table had a wrong route for the 169.254.169.254 network (it specified the gateway from the network where my template was captured), so the instance couldn't read its metadata.
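To check this on a Windows instance (standard route commands; the correct gateway is whatever your current subnet uses):
route print 169.254.169.254
If the listed gateway belongs to the network where the source image was captured, delete the stale route and re-add it for the current subnet (or re-run the EC2Launch/EC2Config network initialization):
route delete 169.254.169.254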
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check whether the routing tables and other proxy-related settings are set up correctly. Also, if you have not done so already, you can turn on the debug log by setting :verbose to true in the agent config and restarting the agent. This will help debug the issue.
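A quick sketch of that, assuming the default Windows agent install (adjust the path if yours differs): edit C:\ProgramData\Amazon\CodeDeploy\conf.yml and set
:verbose: true
then restart the agent service, e.g. from PowerShell:
Restart-Service codedeployagent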

SQL Server can't see my certificate

I need to install a certificate for encryption (replication) between an external vendor and my company.
I cannot get a third-party certificate for the FQDN of my server because the domain part of it does not match a domain that we own (i.e. my FQDN is sqlservername.company.root.net, but we don't own a domain called company.root.net). We do own mycompany.com, so I got sqlserver.mycompany.com on the cert and have a DNS entry aliasing sqlserver.mycompany.com to sqlservername.company.root.net.
I cannot use a self-generated cert, since the vendor needs to trust the cert authority.
I have a cert that I have purchased and installed, but SQL Server won't see it since the FQDN doesn't match.
I tried installing it by putting the thumbprint of the cert into the registry directly, but then SQL Server won't start, with the following errors:
The server could not load the certificate it needs to initiate an SSL connection. It returned the following error: 0x8009030e. Check certificates to make sure they are valid.
Unable to load user-specified certificate [Cert Hash(sha1) "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"]. The server will not accept a connection. You should verify that the certificate is correctly installed. See "Configuring Certificate for Use by SSL" in Books Online.
(where the x's above match the thumbprint of the cert without spaces)
TDSSNIClient initialization failed with error 0x80092004, status code 0x80. Reason: Unable to initialize SSL support. Cannot find object or property.
What do I need to do differently to get this working?
You need to use MMC to install your certificate in the certificate store and then use the SQL Server Configuration Manager to link the certificate to your SQL Server service. See https://support.microsoft.com/en-us/help/316898/how-to-enable-ssl-encryption-for-an-instance-of-sql-server-by-using-mi
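If you prefer the command line, a rough equivalent of the MMC import step is the following sketch (the .pfx path and password are illustrative; run from an elevated prompt so the cert lands in the Local Machine store):
certutil -f -p <pfx-password> -importpfx C:\certs\sqlserver.pfx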
Then, make sure that the service account running your SQL Server service has full permission on the certificate. In MMC, right-click on the certificate, select Manage private key, and then grant full access to the service account running your SQL Server.
You should restart your SQL Server for the changes to take effect.
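To double-check from PowerShell that the certificate is present in the machine store together with its private key (a quick sketch, assuming it was imported into the Local Machine Personal store):
Get-ChildItem Cert:\LocalMachine\My | Format-List Subject, Thumbprint, HasPrivateKey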
Before anything else, you must install the certificate in the Windows certificate truststore.
Did you do that?
The error "You should verify that the certificate is correctly installed" seems to indicate you did not do this.
I was expecting the hostname verification to be configurable, but from here (SSL in MS-SQL 2008 R2) it seems to be an absolute requirement.
To be honest, I am not sure whether the trick you did with the DNS entry will work.
It seems that some tweaking works for cluster installations (SSL for cluster installations).
In your case, maybe you should have bought the certificate with the IP as the subject name and used DNS to resolve it to the FQDN you mention.
But of course this implies the use of a static IP, and most likely it would not be feasible anyway.
