How to route requests to the desired endpoint using environment variables in Apigee

I have a situation where I need to route requests to the desired endpoint based on the environment the request hits, for example QA to QA and Prod to Prod.
I have configured a proxy and defined a default target host during the initial configuration.
Then I use a JavaScript policy to decide the target host based on the environment the request comes in on:
var env = context.getVariable('environment.name');
if (env == "prod") {
    var host = 'https://prod.com';
}
if (env == "test") {
    var host = 'https://qa.com';
}
I attached this JavaScript file as a step in the default target endpoint's PreFlow.
I see that all requests are still sent to the default host that I configured during the initial setup.
Am I missing something here? Please help.
I have also read about the Target Server environment configuration. I have configured the hosts there, but how do I reference/use them in my proxy?
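Note that the JavaScript above computes host but never writes it back to a flow variable that the target connection actually reads, which would explain why every request still goes to the default host. A minimal corrected sketch follows; the chooseHost helper and the fallback value are illustrative, not from the original post:

```javascript
// Sketch: pick the backend host from the Apigee environment name.
// The prod/test mapping mirrors the question; the fallback is an assumption.
function chooseHost(env) {
  if (env === 'prod') return 'https://prod.com';
  if (env === 'test') return 'https://qa.com';
  return 'https://qa.com'; // default for any other environment
}

// Inside the Apigee JavaScript policy, the result must be written back
// to a flow variable the target connection reads, for example:
//   var host = chooseHost(context.getVariable('environment.name'));
//   context.setVariable('target.url', host + context.getVariable('proxy.pathsuffix'));
```

Without the context.setVariable call, the computed host stays local to the script and is never used for routing.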

I usually set the target endpoint (the equivalent of your host) in a Key Value Map under Apigee's Environment Configuration.
Then I assign it to a variable (for example, endpointUrl) in a Key Value Map Operations policy.
Finally, I use it in the target request message like below:
<AssignVariable>
  <Name>target.url</Name>
  <Ref>endpointUrl</Ref>
</AssignVariable>
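For completeness, the Key Value Map lookup mentioned in the second step might look like the following sketch; the policy name, the map identifier targetHosts, and the key host are assumptions for illustration, not from the original answer:

```xml
<KeyValueMapOperations name="KVM-GetTargetHost" mapIdentifier="targetHosts">
  <Scope>environment</Scope>
  <Get assignTo="endpointUrl">
    <Key>
      <Parameter>host</Parameter>
    </Key>
  </Get>
</KeyValueMapOperations>
```

The Get operation populates endpointUrl, which the AssignVariable step above then copies into target.url, so no host is hard-coded in the proxy.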
The advantage of this method is that if your host changes, you just edit the value in the Key Value Map; you do not have to edit your code or redeploy your API.
However, I am answering from my own work experience only.
You might also try the Apigee Community, where you may find a solution that suits you.

Related

Keycloak redirection issue behind proxy (Kong)

I'm trying to set up a Keycloak instance to handle the users of my webapp. This instance would be, like all the other microservices, hidden behind a reverse proxy (Kong, an nginx-based proxy).
On my local setup, Kong listens to https://localhost, and keycloak listens to http://localhost:8082/auth.
To achieve that, I used several environment variables on my Keycloak container :
ENV KC_HOSTNAME=localhost
ENV KC_HOSTNAME_PORT=8082
ENV KC_HOSTNAME_STRICT_HTTPS=false
ENV KC_PROXY=edge
ENV PROXY_ADDRESS_FORWARDING=true
ENV KC_HTTP_ENABLED=true
ENV KC_HTTP_PORT=8082
ENV KC_HTTP_RELATIVE_PATH=/auth
The Kong configuration looks fine, and the Keycloak endpoints that I need are exposed correctly through Kong (/realms, /js, /resources, /robots.txt, as the docs describe). Kong terminates the TLS connection and then speaks plain HTTP to all microservices, hence KC_PROXY=edge. /admin is not exposed; I thought I could access it locally using localhost:8082 on the right machine.
If I go to https://localhost/auth/realms/master/.well-known/openid-configuration, I get the configuration. However, Keycloak does not know it is behind Kong, so all endpoints contain localhost:8082. That seems normal, since that is how I set it up in the first place.
I tried to add a new realm with a different Frontend URL, setting it to https://myapp.com.
Now my OpenID configuration contains https://myapp.com:8082/... everywhere. All the workflows get wrong URLs.
What did I miss? I cannot remove the port that I set in the first place, otherwise I will not be able to access the admin console.
I thought I could do something with KC_HOSTNAME_ADMIN, but unfortunately there is no KC_HOSTNAME_ADMIN_PORT... or is there?
Thank you for reading :)
In case it's of interest to someone, the solution was actually quite simple: I should not have set KC_HOSTNAME and KC_HOSTNAME_PORT in the first place.
ENV KC_HOSTNAME_STRICT_HTTPS=false is mandatory, and I also needed to add a plugin to Kong to tweak the headers:
plugins:
  - name: post-function
    service: keycloak
    config:
      functions:
        - |
          return function()
            if ngx.var.upstream_x_forwarded_port == "8000" then
              ngx.var.upstream_x_forwarded_port = 80
            elseif ngx.var.upstream_x_forwarded_port == "8443" then
              ngx.var.upstream_x_forwarded_port = 443
            end
          end
Otherwise, Keycloak would have the wrong redirect URI in some cases.
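Putting the fix together with the original variables, the Keycloak container environment would reduce to something like this sketch (the same settings as in the question, minus the two hostname entries):

```dockerfile
# KC_HOSTNAME and KC_HOSTNAME_PORT intentionally not set:
# Keycloak derives the public hostname from the forwarded headers instead.
ENV KC_HOSTNAME_STRICT_HTTPS=false
ENV KC_PROXY=edge
ENV PROXY_ADDRESS_FORWARDING=true
ENV KC_HTTP_ENABLED=true
ENV KC_HTTP_PORT=8082
ENV KC_HTTP_RELATIVE_PATH=/auth
```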

Stuck with woocommerce_rest_authentication_error: Invalid signature - provided signature does not match

I originally posted the issue below on https://github.com/XiaoFaye/WooCommerce.NET/issues/414, but since it may not be related to WooCommerce.NET at all and may instead sit at a lower level in Apache/WordPress/WooCommerce itself, I am posting the same question here.
I am really stuck with the famous error:
WebException: {"code":"woocommerce_rest_authentication_error","message":"Invalid signature - provided signature does not match.","data":{"status":401}}
FYI:
I have two WordPress instances running: one on my local machine and one on a remote server. The remote server is, like my local machine, in our company's LAN.
I am running WAMP on both machines to run Apache and host WordPress on port 80.
The error ONLY occurs when calling the REST API on the remote server. Connecting to the local REST API, the REST API/WooCommerce.NET works like a charm :-)
From my local browser I can log in to the remote WooCommerce instance without any problem.
On the remote server I have defined WP_SITEURL as 'http://[ip address]/webshop/' and WP_HOME as 'http://[ip address]/webshop' in wp-config.php.
Calling the API URL (http://[ip address]/webshop/wp-json/wc/v3/) from my local browser works OK; I get the normal JSON response.
Authentication is done through the WooCommerce.NET wrapper, which only requires a consumer key, consumer secret, and the API URL. I am sure I am using the right consumer key and secret and the proper API URL http://[ip address]/webshop/wp-json/wc/v3/ (see the previous bullet).
I already played around with the authorizedHeader variable (true/false) when instantiating a WooCommerce RestAPI, but this has no effect.
Is there anybody who can point me in the direction of a solution?
Your help will be much appreciated!
In my case, the problem was in my URL: it had two slashes (//) before wp-json.
URL before the fix: http://localhost:8080/wordpress//wp-json/wc/v3/
URL now, which works OK: http://localhost:8080/wordpress/wp-json/wc/v3/
I use it with this code:
RestAPI rest = new RestAPI(cUrlApi, Funciones.CK, Funciones.CS, false);
WCObject wc = new WCObject(rest);
var lstWooCategorias = await wc.Category.GetAll();
I hope my answer helps you.
Had the same issue. My fault was defining my URL incorrectly: http:// instead of https://.
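Both fixes come down to the request URL not matching what the server signs against: OAuth 1.0a signatures are computed over the exact URL, so a doubled slash or a wrong scheme invalidates them. As a client-side sanity check, one could normalize the path before constructing the API client; a small sketch in plain JavaScript (the helper name is illustrative, and this is not part of WooCommerce.NET):

```javascript
// Collapse repeated slashes in the path of an API URL, leaving the
// scheme's "//" untouched. Illustrative helper, not a library API.
function normalizeApiUrl(url) {
  const u = new URL(url);
  u.pathname = u.pathname.replace(/\/{2,}/g, '/');
  return u.toString();
}

// normalizeApiUrl('http://localhost:8080/wordpress//wp-json/wc/v3/')
//   → 'http://localhost:8080/wordpress/wp-json/wc/v3/'
```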

Is it possible to setup a custom hostname for AWS Transfer SFTP via Terraform

I'm trying to set up an SFTP server with a custom hostname using AWS Transfer. I'm managing the resource using Terraform. I've currently got the resource up and running, and I've used Terraform to create a Route53 record to point to the SFTP server, but the custom hostname entry on the SFTP dashboard is reading as blank.
And of course, when I create the server manually through the AWS console and associate a Route53 record with it, it looks like what I would expect.
I've looked through the Terraform resource documentation, and I've tried to see how it might be done via the AWS CLI or CloudFormation, but I haven't had any luck.
My server resource looks like:
resource "aws_transfer_server" "sftp" {
  identity_provider_type = "SERVICE_MANAGED"
  logging_role           = "${aws_iam_role.logging.arn}"
  force_destroy          = "false"

  tags = {
    Name = "${local.product}-${terraform.workspace}"
  }
}
and my Route53 record looks like:
resource "aws_route53_record" "dns_record_cname" {
  zone_id = "${data.aws_route53_zone.sftp.zone_id}"
  name    = "${local.product}-${terraform.workspace}"
  type    = "CNAME"
  records = ["${aws_transfer_server.sftp.endpoint}"]
  ttl     = "300"
}
Functionally, I can move forward with what I have, I can connect to the server with my DNS, but I'm trying to understand the complete picture.
From the AWS documentation:
When you create a server using AWS Cloud Development Kit (AWS CDK) or through the CLI, you must add a tag if you want that server to have a custom hostname. When you create a Transfer Family server by using the console, the tagging is done automatically.
So you will need to add those tags using Terraform. In v4.35.0, the AWS provider added support for a new resource: aws_transfer_tag.
An example supplied in the GitHub issue (I haven't tested it personally yet):
resource "aws_transfer_server" "with_custom_domain" {
  # config here
}

resource "aws_transfer_tag" "with_custom_domain_route53_zone_id" {
  resource_arn = aws_transfer_server.with_custom_domain.arn
  key          = "aws:transfer:route53HostedZoneId"
  value        = "/hostedzone/ABCDE1111222233334444"
}

resource "aws_transfer_tag" "with_custom_domain_name" {
  resource_arn = aws_transfer_server.with_custom_domain.arn
  key          = "aws:transfer:customHostname"
  value        = "abc.example.com"
}

Symfony2 Using Amazon Load Balancers and SSL: Error on isSecure() Check

Hi, I'm running into an issue where Symfony2 doesn't recognize the load balancer headers from Amazon AWS, which are needed to determine whether a request is SSL using the requires_channel: https security configuration.
By default Symfony2 $request->isSecure() looks for "X_FORWARDED_PROTO" but there's apparently no standard for this, and Amazon AWS load balancers use "HTTP_X_FORWARDED_PROTO".
I see the cookbook article for setting trusted proxies in config, but that is geared around whitelisting specific IP addresses, which won't work with AWS since it generates dynamic IPs. Another option, setting trust_proxy_headers: true in the framework config, is deprecated. This breaks my app by forcing endless redirects on the pages that require SSL only.
You can now change the headers using setTrustedHeaderName(). This method allows you to change the four headers used throughout the file.
const HEADER_CLIENT_IP = 'client_ip'; // defaults 'X_FORWARDED_FOR'
const HEADER_CLIENT_HOST = 'client_host'; // defaults 'X_FORWARDED_HOST'
const HEADER_CLIENT_PROTO = 'client_proto'; // defaults 'X_FORWARDED_PROTO'
const HEADER_CLIENT_PORT = 'client_port'; // defaults 'X_FORWARDED_PORT'
The above, taken from the Request class, indicates the keys available for use with the aforementioned method.
// $request is instance of HttpFoundation\Request;
$request->setTrustedHeaderName('client_proto', 'HTTP_X_FORWARDED_PROTO');
That said, at the time of writing, using "symfony/http-foundation": "2.5.*" the below code correctly determines whether or not the request is secure whilst behind an AWS Load Balancer.
// All IPs (*)
// $proxies = [$request->getClientIp()];
// Array of CIDR pools from load balancer
// EC2 -> Network & Security -> Load Balancers
// -> X -> Instances (tab) -> Availability Zones
// -> Subnet (column)
$proxies = ['172.x.x.0/20'];
$request->setTrustedProxies($proxies);
var_dump($request->isSecure()); // bool(true)
You're right, the X_FORWARDED_PROTO header is hardcoded into HttpFoundation\Request, while, as far as I know, overriding the request class in Symfony is currently not possible.
There has been a discussion/RFC about this topic here, and there is an open pull request that solves this issue using a RequestFactory.

Creating a url in controller in asp.net mvc 4

I am trying to send an activation mail to the currently registered user. In the mail body, I need to send a link like http://example.com/account/activation?username=d&email=g. For debugging on my local machine, I manually write it as localhost:30995/account/activation?username=d&email=g. But when my port number changes, I need to rewrite it.
I tried another question on this website, but the compiler gives an error like "Url.Action does not exist".
Please give me a fresh solution, as I am confused by that one.
Use a Url.Action overload that takes a protocol parameter to generate your URLs:
Url.Action("Activation", "Account", new { username = "d", email = "g" }, "http")
This generates an absolute URL rather than a relative one. The protocol can be either "http" or "https". So this will return http://localhost:XXXXX/account/activation?username=d&email=g on your local machine, and http://example.com/account/activation?username=d&email=g on production.
In short, this will prepend whatever domain you're hosting your app on to your URL; you can then change your hostname, port number, or domain name as many times as you want. Your links will always point to the host they originated from. That should solve the problem you're facing.
Try using IIS / IIS Express instead of the Cassini web server that comes with Visual Studio.
You could add bindings to get the right URL (with host entries, of course).
This will avoid the port numbers in your links.
