We have a distributed environment as below and are trying to export a published API from it.
Identity Server as WSO2 Key Manager --> https://key-m:9443/carbon
WSO2 Traffic Manager and Publisher --> https://traffic:9443/carbon, https://publisher:9444/carbon
WSO2 Internal Gateway --> https://internal-gw:9443/carbon
WSO2 External Gateway --> https://external-gw:9443/carbon
WSO2 Store --> https://store:9443/carbon
We have deployed "api-import-export-2.6.0-v14.war" on https://external-gw:9443/carbon and the CLI tool on the same server.
apimcli add-env -n dev \
  --registration https://store:9443/client-registration/v0.14/register \
  --apim https://external-gw:9443 \
  --token https://key-m:9443/token \
  --import-export https://external-gw:9443/api-import-export-2.6.0-v10 \
  --admin https://external-gw:9443/api/am/admin/v0.14 \
  --api_list https://publisher:9444/api/am/publisher/v0.14/apis \
  --app_list https://store:9443/api/am/store/v0.14/applications
When we try to log in to the dev environment through the CLI tool, we get "403: Forbidden".
We suspect that, while creating the environment, we may have misconfigured the URLs for registration/apim/token/import-export/admin/api_list/app_list.
Any help will be highly appreciated.
You need to provide the --registration https://key-m:9443/client-registration/v0.14/register endpoint, pointing to the KM node.
The API Import/Export WAR should be deployed on the Publisher node, and --import-export https://publisher:9443/api-import-export-2.6.0-v10 should point to the Publisher node.
The api#am#admin#v0.14.war should also be deployed on the Publisher node, and the --admin https://publisher:9443/api/am/admin/v0.14 endpoint should point to the Publisher node.
Since you are providing the api_list and app_list flags, it does not matter what value you provide for apim; you can simply point it to the Publisher node.
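Putting those suggestions together, a corrected add-env command might look something like the sketch below. The hostnames, ports, and context paths are taken from the question and answer above (the answer assumes the default 9443 port for the Publisher; if your Publisher runs with a port offset as in the question, use 9444 instead), so adjust them to match your actual deployment:
apimcli add-env -n dev \
  --registration https://key-m:9443/client-registration/v0.14/register \
  --apim https://publisher:9443 \
  --token https://key-m:9443/token \
  --import-export https://publisher:9443/api-import-export-2.6.0-v10 \
  --admin https://publisher:9443/api/am/admin/v0.14 \
  --api_list https://publisher:9443/api/am/publisher/v0.14/apis \
  --app_list https://store:9443/api/am/store/v0.14/applications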
I'm working on a project which requires me to connect to an existing Kafka cluster using dotnet and the Confluent library. The Kafka cluster uses Kerberos/keytab authentication. Looking at some of the documentation, it looks like you can pass the keytab file using the JAAS configuration, but when I look at the properties for the ProducerConfig in Confluent I don't see anything about authentication. So how do I specify the keytab file so that I can authenticate against the Kafka cluster?
I think this section of the Confluent docs describes how to configure clients:
In your client.properties file you'd need the following configuration:
sasl.mechanism=GSSAPI
# Configure SASL_SSL if SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
security.protocol=SASL_SSL
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/etc/security/keytabs/kafka_client.keytab" \
principal="kafkaclient1@EXAMPLE.COM";
# Optionally, for kafka-console-consumer or kafka-console-producer, kinit can be used along with useTicketCache=true
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useTicketCache=true;
In order to pass client.properties to e.g. kafka-console-consumer, you need to provide the --consumer.config parameter as well:
For Linux:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config client.properties --from-beginning
For Windows:
bin/kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --consumer.config client.properties --from-beginning
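If you go with the useTicketCache=true variant, you first need a ticket in the cache before starting the client. A minimal sketch, assuming the same keytab and principal as in the example above:
kinit -kt /etc/security/keytabs/kafka_client.keytab kafkaclient1@EXAMPLE.COM
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --consumer.config client.properties --from-beginning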
The Airflow documentation states:
Airflow exposes an experimental Rest API. It is available through the webserver. Endpoints are available at /api/experimental/. Please note that we expect the endpoint definitions to change.
https://airflow.apache.org/api.html#experimental-rest-api
However, it doesn't state in which version the API appeared. We are running Airflow v1.8.0,
but whenever I browse to /api/ or /api/experimental/ I get a 404 and the spinning circles.
I tried curling the same URLs but that only confirmed the same, /api/ gives me a 404:
$ curl -I -L -s http://${AIRFLOWIP}:8080/admin/ | grep HTTP
HTTP/1.1 200 OK
$ curl -I -L -s http://${AIRFLOWIP}:8080/api/ | grep HTTP
HTTP/1.1 404 NOT FOUND
We have this setting in airflow.cfg:
[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default
I don't know whether the API is only available in a later version of Airflow or if we have stood it up incorrectly.
Can someone let me know in which version of Airflow we can find the experimental API?
The first experimental endpoints were added in 1.8.0, with a few more endpoints added in the following releases. There is no endpoint at the root paths /api/ and /api/experimental/ in any version, so those curls are not expected to work. However, there is a /api/experimental/test/ endpoint which you can hit to confirm that the API is available.
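For example, reusing the curl style from the question (the ${AIRFLOWIP} variable is assumed to be set as in the question), a 200 from the test endpoint confirms the experimental API is up:
$ curl -I -L -s http://${AIRFLOWIP}:8080/api/experimental/test/ | grep HTTP
HTTP/1.1 200 OK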
If you're going to be using the experimental API, I think the code is the best reference at the moment.
The Airflow API is no longer in the experimental phase.
The stable version is documented here: Airflow REST API.
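If you are on Airflow 2.x, the stable API lives under /api/v1/. A hedged example, assuming the basic_auth backend is enabled and that the credentials below exist in your installation:
curl -X GET "http://${AIRFLOWIP}:8080/api/v1/dags" \
  -H "Content-Type: application/json" \
  --user "admin:admin"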
I'm trying to figure out what Egress rules must be allowed for the Google Container Engine to function properly.
I am currently using the following rules:
gcloud beta compute firewall-rules create "outbound-deny" \
--action DENY --rules all --direction "EGRESS" --priority "65531" \
--network "secure-vpc" --destination-ranges "0.0.0.0/0"
gcloud beta compute firewall-rules create "outbound-internal" \
--allow all --direction "EGRESS" --priority "65530" \
--network "secure-vpc" --destination-ranges "10.0.0.0/8"
With these rules in place, creating a cluster in this network fails. It does create all the machines, network rules, etc., but the Kubernetes cluster never reports the nodes as alive.
I think you need to permit egress traffic to the K8s master. To retrieve its IP:
MASTER_IP=$(gcloud container clusters describe $CLUSTER --zone $ZONE --format="value(endpoint)")
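Based on that, a sketch of an additional allow rule (the rule name and priority are just examples; TCP 443 is the usual node-to-master port, and depending on your setup you may also need to allow other destinations such as the pod and service ranges):
gcloud beta compute firewall-rules create "outbound-k8s-master" \
  --allow tcp:443 --direction "EGRESS" --priority "65529" \
  --network "secure-vpc" --destination-ranges "${MASTER_IP}/32"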
I created a Kaa Sandbox instance on an AWS Linux host and am running into some issues:
I am not able to see the Management button on the Kaa Sandbox console.
I am not able to connect to AWS using SSH. I followed all the required steps to connect to the AWS Linux host, but had no luck connecting.
My problem is that I would like to change the host IP in the Sandbox settings to my AWS Linux host IP, so that my endpoint devices can connect to the host.
I am still struggling with the above points. Please advise.
Regards,
Prasad
That seems to be an issue with the Kaa 0.10.0 Sandbox for AWS. We created a bug for tracking this.
For now, you can use the following workaround:
echo "sudo sed -Ei 's/(gui_change_host_enabled=).*$/\1true/'" \
"/usr/lib/kaa-sandbox/conf/sandbox-server.properties;" \
"sudo service kaa-sandbox restart" | \
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host>
Note: this is a multi-line single command that works correctly in bash (should also work in sh and others, but that is not tested).
Note 2: don't forget to replace <your-private-aws-instance-key.pem> and <your-aws-instance-host> with the respective key file name and host name/IP address.
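To verify that the property was actually changed before reloading the Sandbox UI, something along these lines should work (same placeholder key and host as above):
ssh -i <your-private-aws-instance-key.pem> ubuntu@<your-aws-instance-host> \
  "grep gui_change_host_enabled /usr/lib/kaa-sandbox/conf/sandbox-server.properties"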
I have a local repository that resides on computer_1. I have set up my SVN server using the following command:
svnserve -d -r Path_to_Repository
computer_1 and computer_2 are connected to each other through a router and can communicate with the ssh username@IP command. Considering that computer_1 does not have a registered domain name (e.g. My_Domain.com), can I create a new working copy on computer_2? I would like to use the following command on computer_2:
svn checkout http://computer_1_IP_address A_folder_on_computer_2 -m A_log_message
However, using protocols other than http is fine, as long as I only need computer_1_IP_address.
You use svnserve, so in this case the URL should use the svn:// protocol, not http://.
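For example, assuming Path_to_Repository points directly at the repository root and svnserve is listening on its default port (3690), the checkout from computer_2 would look roughly like this (note that checkout does not take a -m log message; that flag is only for commands that create revisions, such as commit or import):
svn checkout svn://computer_1_IP_address/ A_folder_on_computer_2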
You should read the documentation before beginning to configure the server!