API Management 2018.1 and DataPower 7.7

I am trying to add DataPower 7.7 to API Management 2018.1.
I need to configure the API Connect Gateway Service in DataPower (the new APIC 2018.1 doesn't work with the XML Management Service).
After configuration I get the following errors:
8:07:19 mgmt notice 959 0x00350015 apic-gw-service (default):
Operational state down
8:07:19 apic-gw-service error 959 0x88e00001 apic-gw-service
(default): Unexpected queue error: Domain check failed! Please ensure that
the 'default' domain exists and is enabled. Also, please verify that the API
Gateway Service is configured with the correct domain and SOMA credentials.
8:07:19 apic-gw-service error 959 0x88e000a0 apic-gw-service
(default): Failed to initialize gateway environment: datapower
The DataPower version is 7.7.
Please advise if you have any information or manuals.
Note: the domain exists and the main services are enabled.

It's hard to tell exactly what the problem is based on the log messages shown above.
Update to original answer:
See also the documentation that is now available in the IBM API Connect Knowledge Center: https://www.ibm.com/support/knowledgecenter/SSMNED_2018/com.ibm.apic.install.doc/tapic_install_datapower_gateway.html
However, here are the basic steps for configuring a DataPower gateway to work with API Connect 2018.x.
You will need to ensure:
DataPower is running DP 7.7.0.0 or higher.
You have the AppOpt license installed. (Use the “show license” command in the DataPower CLI to confirm.)
You have a shared certificate and a private key for securing the communication between the API Connect management server and the gateway.
On DataPower, you need to:
Create an application domain. All of the subsequent configuration should be done in the application domain.
Enable statistics.
Upload your private key and shared certificate to the cert:// directory in the application domain.
Create a crypto key object, a crypto certificate object, and a crypto identification credentials object using your key and certificate.
Create an SSL client profile and an SSL server profile that reference the crypto identification credentials object.
Configure a gateway-peering object.
Configure and enable the API Connect Gateway Service in the application domain.
At that point, you should be able to configure the gateway in the API Connect cloud manager.
Here are the DataPower CLI commands to create a basic configuration. In the configuration below, IP address 1.1.1.1 represents a local IP address on your DataPower appliance. Traffic from the API Connect management server to the gateway will be sent to port 3000. API requests will go to port 9443 (but you can change it to the more standard port 443 if you prefer).
For a production environment, you will want to build on this configuration to ensure you are running with at least 3 gateways in the peer group, but this will get you started.
Create the application domain called apiconnect
top; configure terminal;
domain apiconnect; visible default; exit;
write mem
Use the Web GUI to upload your private key and shared certificate to the cert:// folder in the apiconnect domain
Then run these commands to create the configuration in the apiconnect domain
switch apiconnect
statistics
crypto
key gw_to_apic cert:///your-privkey.cer
certificate gw_to_apic cert:///your-sscert.cer
idcred gw_to_apic gw_to_apic gw_to_apic
ssl-client gwd_to_mgmt
idcred gw_to_apic
no validate-server-cert
exit
ssl-server gwd_to_mgmt
idcred gw_to_apic
no request-client-auth
validate-client-cert off
exit
exit
gateway-peering apic
admin-state enabled
local-address 1.1.1.1
local-port 15379
monitor-port 25379
priority 100
enable-ssl off
enable-peer-group off
persistence local
exit
apic-gw-service
admin-state enabled
local-address 0.0.0.0
local-port 3000
api-gw-address 0.0.0.0
api-gw-port 9443
v5-compatibility-mode on
gateway-peering apic
ssl-server gwd_to_mgmt
ssl-client gwd_to_mgmt
exit
write mem
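For the three-gateway production setup mentioned above, the main change is in the gateway-peering object. Here is a rough sketch for one member of a three-appliance peer group; the addresses 1.1.1.2 and 1.1.1.3 are placeholders for the other two appliances, and you should verify the property names against the CLI reference for your firmware level (in production you would also typically turn enable-ssl back on):
gateway-peering apic
admin-state enabled
local-address 1.1.1.1
local-port 15379
monitor-port 25379
priority 100
enable-ssl off
enable-peer-group on
peer 1.1.1.2
peer 1.1.1.3
persistence local
exit
write mem
Repeat the equivalent configuration on the other two appliances, each with its own local-address and the remaining two addresses as peers.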

The problem you are seeing is an issue with creating your API Connect gateway service in the default domain. To work around it, just put your API Gateway Service in a domain other than default.

Related

Remote Server access with tunneling

I want to integrate a service on my website, but the service provider's requirement is that data transfer must be performed using tunneling. Could you tell me the detailed process for connecting to the remote server and sending requests there? I have all the credentials: the remote server IP, the ISAKMP key, and so on.
I tried configuring strongSwan on my VPS, but I was not able to complete the process due to some errors.
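No answer was posted here, but as a starting point, below is a minimal strongSwan sketch for a site-to-site IPsec tunnel authenticated with a pre-shared (ISAKMP) key. Every address, subnet, and algorithm below is a placeholder and must be replaced with the parameters your service provider gives you:
# /etc/ipsec.conf -- hypothetical example
conn provider-tunnel
    keyexchange=ikev1          # ISAKMP/IKEv1; use ikev2 if the provider supports it
    authby=secret              # authenticate with the pre-shared (ISAKMP) key
    left=%defaultroute         # this VPS
    leftsubnet=10.0.0.0/24     # placeholder local subnet
    right=203.0.113.10         # placeholder: remote server IP from the provider
    rightsubnet=192.168.1.0/24 # placeholder remote subnet
    ike=aes256-sha256-modp2048 # placeholder proposals; must match the provider's
    esp=aes256-sha256
    auto=start

# /etc/ipsec.secrets
%any 203.0.113.10 : PSK "the-isakmp-key-from-the-provider"
After ipsec restart, ipsec statusall shows whether the tunnel established and, if not, which negotiation step failed.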

Juju vault causing problems when deploying openstack/base on maas and charmed-kubernetes

I have deployed openstack/base on MaaS as indicated here. After trying to deploy charmed-kubernetes with an openstack-integrator and vault overlay, I can no longer run openstackclient commands on the MaaS node, and the images uploaded to the dashboard are not recognized, which means the Ubuntu charms cannot be deployed. When I run, for example,
openstack catalog list
I get
Failed to discover available identity versions when contacting https://keystone_ip:5000/v3. Attempting to parse version from URL.
SSL exception connecting to https://keystone_ip:5000/v3/auth/tokens: HTTPSConnectionPool(host='keystone_ip', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')))
However, when I ssh into the keystone container, there is a keystone_juju_ca_cert.crt with
Issuer: CN = Vault Root Certificate Authority (charm-pki-local)
and
Subject: CN = Vault Root Certificate Authority (charm-pki-local)
I have also tried to reissue the certificates and refresh the secrets through actions in the vault application, but to no avail.
Can somebody help me here?
I don't know anything about juju or openstack, but it looks to me like the problem isn't on the keystone container, but on your local machine (or wherever you are running the openstack catalog list command). The local machine doesn't appear to have the charm-pki-local CA certificate installed, so it can't verify the connection to the keystone server.
You need to get the root CA from Vault using juju and then reference that file in your openrc file via the OS_CACERT environment variable.
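As a rough sketch of those two steps (the get-root-ca action name is an assumption based on recent revisions of the vault charm; confirm it with juju actions vault, and on Juju 3.x use juju run instead of juju run-action --wait):
# Fetch the root CA from the vault charm (Juju 2.x syntax)
juju run-action --wait vault/leader get-root-ca
# Copy the PEM block from the action output into a file, e.g. ~/vault_ca.crt,
# then point the OpenStack clients at it in your openrc:
export OS_CACERT=~/vault_ca.crt
With OS_CACERT set, openstack catalog list should no longer fail certificate verification.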

Impala LDAPS Always Fails With Unknown CA

I am trying to use ldaps to verify connections to an Impala database with the following configuration in the Impala Command Line Argument Advanced Configuration Snippet configuration item in Cloudera Manager:
--enable_ldap=true
--ldap_uri=ldaps://testServ.domain.com
--ldap_ca_certificate="/home/impala/testServ.domain.pem"
where testServ.domain.pem is the LDAP server's certificate.
Using Wireshark, I can see that after receiving the certificate during SSL negotiation, Impala always responds with an Unknown CA alert.
I can successfully connect to Impala using unencrypted LDAP, and I can connect with a different LDAPS-enabled program using the provided certificate, so I doubt the issue is on the LDAP server.
Is there another configuration parameter I need, or a way to determine why Impala always rejects the LDAP server's certificate?
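No answer was posted for this question, but as a diagnostic sketch only (port 636 assumed for LDAPS), one way to narrow it down is to check outside of Impala whether that PEM file is actually enough to verify the server's chain, since an Unknown CA alert means the presented chain could not be validated against --ldap_ca_certificate:
# Verify the LDAPS server's chain against the PEM file given to Impala
openssl s_client -connect testServ.domain.com:636 \
  -CAfile /home/impala/testServ.domain.pem -showcerts < /dev/null
# "Verify return code: 0 (ok)" means the file is sufficient to build the chain.
# Anything else suggests the PEM holds only the server certificate rather than
# the issuing/root CA, in which case --ldap_ca_certificate should point at the
# CA chain instead.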

Configure MS DTC over VPN

I tried to configure MS DTC over our VPN, but when I try to open the connection it gives me the following error:
The MSDTC transaction manager was unable to push the transaction to the destination transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02A)
When I check with the network team they told me that the firewall is already configured to allow DTC.
To explain further: this communication is done via VPN. According to my network admin, the client machine can see the NetBIOS name of the server, but the server cannot see the client's NetBIOS name when the communication goes through a firewall/router. He says that to start a DTC communication, both machines should be able to resolve each other's NetBIOS names.
I tried DTCPing (same setup) and the DTCPing error is:
03-04, 10:18:33.918-->RPC server:NGSVR received following information:
Network Name: NGSVR
Source Port: 49179
Partner LOG: WS-PCSPOS76036.log
Partner CID: EBA77A41-C9F9-4162-B7A2-E10404719072
++++++++++++Start Reverse Bind Test+++++++++++++
Received Bind call from WS-PCSPOS7
Network Name: NGSVR
Source Port: 49179
Hosting Machine:NGSVR
03-04, 10:18:33.996-->Trying to Reverse Bind to WS-PCSPOS7...
Test Guid:EBA77A41-C9F9-4162-B7A2-E10404719072
gethostbyname can not resolve WS-PCSPOS7
Error(0xB7) at nameping.cpp #43
-->gethostbyname failure
-->183(Cannot create a file when that file already exists.)
Can not resolve WS-PCSPOS7
Error(0x6BA) at ServerManager.cpp #453
-->RPC reverse BIND failed
-->1722(The RPC server is unavailable.)
Reverse Binding to WS-PCSPOS7 Failed
In GUID
Out GUID
Reverse BIND FAILED
Session Down
I have tried to open a connection and perform a transaction in a non-VPN setup, and it was successful.
Can we configure MS DTC over a VPN?
If it is possible, is there any additional configuration that should be done for the VPN?
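No answer is shown here, but one low-tech workaround for the gethostbyname failure in the DTCPing log above (a sketch only; the address is a placeholder for the client's VPN IP) is to give each machine a static entry for the other in its hosts file so DTC can resolve the partner name across the VPN:
# On NGSVR: C:\Windows\System32\drivers\etc\hosts
# (placeholder VPN address for the client machine)
10.8.0.15    WS-PCSPOS7
# Add the corresponding entry for NGSVR on WS-PCSPOS7, then re-run DTCPing in
# both directions to confirm the reverse bind now succeeds.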

How to automatically install an SSL cert on an AWS ElasticBeanstalk running on Windows & .NET?

Is there a way to automatically deploy a .NET/Windows based Amazon Elastic Beanstalk instance with an SSL cert?
I already have the DNS for the domain in the SSL cert set up to point to the Beanstalk instance.
I can remote in and configure the server manually, but I was wondering if there is a way to make it part of the deployment package (similar to what Windows Azure has).
If this isn't built in to Elastic Beanstalk, are there any hooks to run PowerShell scripts after deployment (or update) of my instance?
The AWS Elastic Beanstalk Developer Guide explains how to enable an SSL certificate for your Elastic Beanstalk environment.
The relevant part is:
Controlling the HTTPS port
Elastic Load Balancing supports the HTTPS/TLS protocol to enable traffic encryption for client connections to the load balancer. Connections from the load balancer to the EC2 instances are done using plaintext. By default, the HTTPS port is turned off.
To turn on the HTTPS port
Create and upload a certificate and key to the AWS Identity and Access Management (AWS IAM) service. The IAM service will store the certificate and provide an Amazon Resource Name (ARN) for the SSL certificate you've uploaded. For more information on creating and uploading certificates, see the Managing Server Certificates section of Using AWS Identity and Access Management.
Specify the HTTPS port by selecting a port from the HTTPS Listener Port drop-down list.
In the SSL Certificate ID text box, enter the Amazon Resource Name (ARN) of your SSL certificate (e.g., arn:aws:iam::123456789012:server-certificate/abc/certs/build). Use the SSL certificate that you created and uploaded in step 1. For information on viewing the certificate's ARN, see the Verify the Certificate Object topic in the Creating and Uploading Server Certificates section of the Using IAM Guide.
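If you want this to be part of the deployment package rather than a console setting, a rough sketch using an .ebextensions configuration file is shown below. The file name is hypothetical, the option namespace applies to classic load balancer environments, and the ARN is the placeholder value from the quoted documentation:
# .ebextensions/https-listener.config (hypothetical file name)
option_settings:
  aws:elb:listener:443:
    ListenerProtocol: HTTPS
    SSLCertificateId: arn:aws:iam::123456789012:server-certificate/abc/certs/build
    InstancePort: 80
    InstanceProtocol: HTTP
Deploying the application bundle with this file in place configures the HTTPS listener without remoting into the instance.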
