Zero integrations available while creating the Integration target proxy in Apigee

We have created a couple of integrations in the Integration portal but were not able to develop the Apigee Integration target proxy in Apigee X. It shows 0 integrations available while creating the Integration target proxy. For roles, I have assigned service-GCP_PROJECT_NUMBER@gcp-sa-apigee.iam.gserviceaccount.com the Apigee Integration Admin, Service Networking Service Agent, and Connector Admin roles, but it still shows 0 integrations available while creating the integration-enabled proxy in Apigee X.

Related

.Net Core querying records from different microservices

I'm learning how to design and implement microservices using serverless technologies. I'm trying to create autonomous microservices and am having difficulty understanding how to communicate data across/between microservices. I'm using .NET Core for my microservices and want each microservice to be an AWS Lambda function exposed via API Gateway.
I have the following scenario:
Institution microservice - returns a list of institutions within a radius (25 miles) of a zipcode.
ROI Calculator microservice - receives a zip code as input and calls the Institution microservice to receive a list of institutions. For each institution returned, it performs a series of calculations yielding an ROI value.
How should the ROI Calculator microservice make a call to the Institution microservice?
An ASP.NET Core Web API application can be published as-is to AWS Lambda as a serverless function. You get everything that a regular .NET Core application provides, like controllers, models, etc. The Amazon API Gateway proxy is integrated directly into the .NET Core API routing system, so your AWS Lambda function will serve your .NET Core Web API. You should watch these tutorials for starters to get a better understanding:
Create .NET Core AWS lambda function
.NET core AWS Lambda Microservices
If you go with the template provided by the AWS SDK (the ASP.NET Core Web API template) and publish the .NET Core Web API to AWS, it will configure everything for you, including the AWS Lambda function and the API Gateway. So if you create two .NET Core Web API projects, you will have two API Gateways. The problem is that ten microservices would mean ten API Gateways, so ideally we should have one API Gateway in front of multiple microservices.
I recently worked on a POC that has one API Gateway with all the microservice AWS Lambda functions behind it. Each microservice has a base path, e.g. shopping or users, set up in its Startup.cs that identifies it individually behind the API Gateway. So microservice 1 will be apigateway/shopping/{anything}, another microservice will be apigateway/users/{anything}, and both are configured behind the same API Gateway. API Gateway sends the request to the AWS Lambda function (the .NET Core Web API), and the request is resolved by the .NET Core routing system. Multiple controllers can even be used this way in a single Web API project without problems.
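A minimal sketch of that base-path setup (the shopping prefix, middleware order, and controller wiring are illustrative, not copied from the linked sample):

```csharp
// Startup.cs for a hypothetical "shopping" microservice.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        // API Gateway forwards /shopping/{anything} to this Lambda;
        // UsePathBase strips the prefix before routing resolves the controller.
        app.UsePathBase("/shopping");
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```

The users microservice would do the same with "/users", so one API Gateway can front both functions while each Web API keeps its normal controller routing.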
I have modified serverless.template so we only publish the AWS Lambda function and configure API Gateway separately. You can find a code sample and details on my GitHub blog here: .NET Core Web API AWS Lambda Microservices Sample.
There are two ways of doing this; which is best probably depends on how independent your microservices need to be:
Make an internal HTTP call from ROI -> Institution, which would be okay (see the sketch below). The problem with this is that if the Institution service is down, the data will not be available.
Store the data needed to make the calculation inside the ROI service as well. This seems strange, but once the data is created in, say, the Institution service, it could be sent via a message bus to the ROI service, which then uses the data when needed. (This may not suit your domain, though; it depends on what information it needs.)
However, it seems that the calculation and the storage of the institutions could live within the same microservice, thereby eliminating the problem.
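A minimal sketch of option 1 (the gateway URL, route, DTO shape, and ROI formula are assumptions for illustration, not taken from the question):

```csharp
// The ROI Calculator service calls the Institution service through the shared API Gateway.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record Institution(string Name, double DistanceMiles);

public class RoiCalculatorService
{
    private static readonly HttpClient Http = new HttpClient
    {
        // Assumed gateway address; in practice this comes from configuration.
        BaseAddress = new Uri("https://example.execute-api.us-east-1.amazonaws.com/")
    };

    public async Task<IReadOnlyList<double>> CalculateRoiAsync(string zipCode)
    {
        // Assumed route exposed by the Institution microservice behind the gateway.
        var institutions = await Http.GetFromJsonAsync<List<Institution>>(
            $"institution/institutions?zip={zipCode}") ?? new List<Institution>();

        var results = new List<double>();
        foreach (var institution in institutions)
        {
            results.Add(ComputeRoi(institution));
        }
        return results;
    }

    private static double ComputeRoi(Institution institution) =>
        // Dummy formula purely for illustration.
        100.0 / (1.0 + institution.DistanceMiles);
}
```

The trade-off from the list above still applies: this call fails if the Institution service is unavailable, whereas the message-bus option keeps a local copy of the data at the cost of eventual consistency.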

Best pattern to call AWS API from Elm SPA?

I'm developing an application that follows Feldman's Elm SPA example quite closely, with the API hosted on AWS API Gateway. My problem is the following:
I need to sign my API calls with AWS Signature Version 4. In Elm, this is a less trivial task than I initially thought:
There is no Elm AWS signature package, so I naturally looked at JS libraries to use via Ports.
Option 1: Use the AWS Amplify API, which does all the work => but then how do I process the result in the most Elm-esque way (ideally with RemoteData)?
Option 2: Use a third-party JS library just to sign the request forged by Elm's Http.request, and send/process the HTTP request via Elm => so far I have found only buggy implementations of AWS SigV4, and I would prefer an official implementation anyway.
In both cases, I'm stuck on the Main Parent / Page Children communication: I can send request 1) or 2) via a port from the child. But then how can the child receive the response to its request? All responses come back into Elm via the same port subscription. Do I need to 'tag' the outgoing requests and then dispatch the responses based on the tag? That would look horrible and wouldn't scale well.
Please note that it is a question about App pattern and architecture. It is not a basic question about Elm Ports (I already successfully call the API from Elm).
Any recommendations or pointers appreciated. Thanks!
Additional info about my setup (following the first comment)
I follow the AWS best practices (scenario #3 Access Resources with API Gateway and Lambda with a User Pool)
Front-end App users are managed by:
Cognito User Pool (signup, sign-in, etc...)
Cognito Identity Pool (maps users to an IAM role to access AWS resources, including the API Gateway)
Back-end is Serverless: API Gateway + Lambda functions
API Gateway: Lambda proxy integration + Authorization = IAM => this requires the AWS Signature
I don't use API keys because:
I don't want to provide any access to the back-end to unauthenticated users
I need to identify the user from the request headers
I don't want to rely on long-term secrets for authentication on the client side

ServiceDataPublisherAdmin not set in WSO2 API Manager gateway

I am setting up WSO2 API Manager 1.10.x with DAS 3.0.1 to publish API statistics using MySQL. My API Manager system is clustered, with the Gateway worker node on a separate VM. I followed the documentation to enable analytics for API Manager via the UI. I also followed this document to manually enable analytics for the Gateway worker node: http://blog.rukspot.com/2016/05/configure-wso2-apim-analytics-using-xml.html. After setup, I restarted all servers and everything seemed fine. But when I make a request to a published API, the Gateway does not publish any statistics to the DAS receiver. There is no data in the DAS summary tables either.
By debugging the WSO2 Gateway, I was able to narrow it down to the fact that
private static ServiceDataPublisherAdmin dataPublisherAdminService; inside org.wso2.carbon.apimgt.impl.internal.APIManagerComponent never gets set. Therefore APIMgtUsageHandler does not do anything.
Any idea on what could cause this to happen?
Thanks.
Figured it out myself.
The bundle org.wso2.carbon.statistics_4.4.8 and two other statistics bundles are necessary for the Gateway worker to publish statistics data to DAS, but the worker profile shipped in the WSO2 API Manager 1.10.0 package excludes them.
To work around it, start WSO2 on the worker node with -Dprofile=default.
You can use the OSGi console to confirm that these bundles are activated. Once the bundles are activated, the classes inside are instantiated, and the Gateway will start publishing statistics to DAS when you invoke a published API.

How WSO2 API Manager distributed setup works

How does the deployment of an API to the Gateway node happen after the API is published from the Publisher node in a distributed WSO2 APIM setup?
There is a section called <Environments> under <APIGateway> in the api-manager.xml configuration file. That is where the Gateway Environment section of an API in the Publisher webapp is populated from. When you select an environment there and publish from the Publisher webapp, it creates the Synapse artifact for the API and pushes it to the Gateway through an admin service call. <ServerURL> is used for that, so you need to define <ServerURL> correctly on the Publisher node so that it points to the Gateway node.

Google Cloud Endpoints: configure "service accounts" so the Google OAuth 2.0 endpoint supports server-to-server interactions

I am implementing Cloud Endpoints with a Python app. I need to expose the REST API securely over HTTPS (this is automatic). The consumer of this endpoint will be a Java application (not a web browser or an Android/iOS app), and my question is whether there is any way to limit consumption of these services to that application only.
I've seen "Service Account" OAuth, but I don't know if I can use it for this problem, and if it is possible, I don't know how to configure it.
Thanks a lot.
