Symfony2 Using Amazon Load Balancers and SSL: Error on isSecure() Check

Hi, I'm running into an issue where Symfony2 doesn't recognize the load balancer headers from Amazon AWS, which are needed to determine whether a request is SSL when using the requires_channel: https security configuration.
By default Symfony2 $request->isSecure() looks for "X_FORWARDED_PROTO" but there's apparently no standard for this, and Amazon AWS load balancers use "HTTP_X_FORWARDED_PROTO".
I see the cookbook article about setting trusted proxies in config, but that's geared toward whitelisting specific IP addresses and won't work with AWS, which assigns dynamic IPs. The other option, setting trust_proxy_headers: true in the framework config, is deprecated. This breaks my app by forcing endless redirects on the pages that require SSL.

You can now change the headers using setTrustedHeaderName(). This method lets you change the four proxy headers used throughout the Request class.
const HEADER_CLIENT_IP = 'client_ip'; // defaults 'X_FORWARDED_FOR'
const HEADER_CLIENT_HOST = 'client_host'; // defaults 'X_FORWARDED_HOST'
const HEADER_CLIENT_PROTO = 'client_proto'; // defaults 'X_FORWARDED_PROTO'
const HEADER_CLIENT_PORT = 'client_port'; // defaults 'X_FORWARDED_PORT'
The above constants, taken from the Request class, indicate the keys available for use with the aforementioned method.
// $request is an instance of HttpFoundation\Request
$request->setTrustedHeaderName('client_proto', 'HTTP_X_FORWARDED_PROTO');
That said, at the time of writing, using "symfony/http-foundation": "2.5.*", the code below correctly determines whether or not the request is secure whilst behind an AWS load balancer.
// All IPs (*)
// $proxies = [$request->getClientIp()];
// Array of CIDR pools from load balancer
// EC2 -> Network & Security -> Load Balancers
// -> X -> Instances (tab) -> Availability Zones
// -> Subnet (column)
$proxies = ['172.x.x.0/20'];
$request->setTrustedProxies($proxies);
var_dump($request->isSecure()); // bool(true)
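For completeness, here is a minimal sketch of where this could live in a Symfony 2.x front controller (web/app.php in the standard edition layout); the CIDR is the same placeholder as above, and both methods can equally be called statically on Request:
// web/app.php (sketch; assumes the standard Symfony 2.x front controller layout)
use Symfony\Component\HttpFoundation\Request;

require_once __DIR__.'/../app/bootstrap.php.cache';
require_once __DIR__.'/../app/AppKernel.php';

$request = Request::createFromGlobals();

// Trust the load balancer subnet(s) before the kernel handles the request,
// so that the X-Forwarded-* headers are honoured by isSecure().
Request::setTrustedProxies(array('172.x.x.0/20')); // placeholder CIDR

$kernel = new AppKernel('prod', false);
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);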

You're right: the X_FORWARDED_PROTO header is hardcoded into HttpFoundation\Request and, as far as I know, overriding the Request class in Symfony is currently not possible.
There has been a discussion/RFC about this topic here, and there is an open pull request that solves this issue using a RequestFactory.

Related

Apply security in gRPC client/server in Node.js

I am new to gRPC. I created one server and one client; now, how do I restrict other clients from connecting to my server?
I haven't tried any solution yet, because I don't know the best one and I have limited time to apply a solution, so I'm posting this question.
You can create a server with secure credentials as mentioned here.
There are different methods there; you can choose any of them.
I prefer the SSL method.
To create a connection with ssl_creds:
const fs = require('fs');
const grpc = require('@grpc/grpc-js'); // or require('grpc') on older setups
// `helloworld` is the package object loaded from your .proto definition
const root_cert = fs.readFileSync('path/to/root-cert');
const ssl_creds = grpc.credentials.createSsl(root_cert);
const stub = new helloworld.Greeter('myservice.example.com', ssl_creds);
For generating the SSL certificates I used a git repo and followed the instructions in its README.
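For the server side (which is what actually keeps unknown clients out), here is a rough sketch using grpc.ServerCredentials.createSsl with client-certificate verification turned on; the file paths are placeholders and it assumes the @grpc/grpc-js package:
const fs = require('fs');
const grpc = require('@grpc/grpc-js');

// Require and verify client certificates (mutual TLS); clients without a
// certificate signed by your CA are rejected at the TLS handshake.
const server_creds = grpc.ServerCredentials.createSsl(
  fs.readFileSync('path/to/root-cert'), // CA used to verify client certs
  [{
    private_key: fs.readFileSync('path/to/server-key.pem'),
    cert_chain: fs.readFileSync('path/to/server-cert.pem'),
  }],
  true // checkClientCertificate
);

const server = new grpc.Server();
// server.addService(...) for your Greeter implementation goes here
server.bindAsync('0.0.0.0:50051', server_creds, () => server.start());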

Keycloak starts with a new realm and some client configurations

I am trying to use Keycloak as the authentication service in my design. In my case, when Keycloak starts, I need one more realm besides the default master realm. Assume the new realm is called "demo".
So when Keycloak starts, it should have two realms (master and demo).
In addition, in the demo realm I need to configure the default client "admin-cli" to enable "Full Scope Allowed", and I also need to add some built-in mappers to this client.
Given this, I wonder whether I can use something like an initialization file that Keycloak can load when starting.
Or do I need to use the Keycloak client APIs for these operations (e.g., the Java Keycloak admin client)?
Thanks in advance.
You can try the following:
Create the Realm;
Set all the options that you want;
Go to Manage > Export;
Switch Export groups and roles to ON;
Switch Export clients to ON;
Export.
That will export a .json file with the configurations.
Then you can test it by deleting your demo realm and:
Go to Add Realm;
Choose the .json file that was exported;
Click Create.
Check whether the configurations that you changed are still present on the demo realm. If they are, it means you can use this file to import the realm from. Otherwise, for the options that were not persisted, you will have to create them via the Admin REST API.
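If you want Keycloak to pick the file up at startup rather than importing it through the Admin Console, one option (a sketch, assuming the WildFly-based distribution; the file path is a placeholder) is to pass the import system properties when starting the server:
bin/standalone.sh \
  -Dkeycloak.migration.action=import \
  -Dkeycloak.migration.provider=singleFile \
  -Dkeycloak.migration.file=/path/to/demo-realm.json \
  -Dkeycloak.migration.strategy=IGNORE_EXISTING
On the newer Quarkus-based distribution, the equivalent is to drop the exported file into the data/import directory and start Keycloak with the --import-realm option.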

Cloud Function returning 403 response

I changed the connection setting of one of my Cloud Functions to 'Allow internal traffic only'.
I have my Node.js app running in the same project and same region as my Cloud Function. I removed 'allUsers' access from my Cloud Function and added My-PROJECT-ID@appspot.gserviceaccount.com as Invoker of my Cloud Function.
But I am now getting a 403 error when I call the function from my Node.js app. What can I do to fix this?
I followed this as guidance: here
------------------UPDATE----------------
Many thanks for the explanation below. It has started making sense now. My setup is currently as follows:
Cloud function side:
I have added My-PROJECT-ID@appspot.gserviceaccount.com as function Invoker and removed 'allUsers' as an invoker.
Under variables, networking and advanced settings, I clicked 'Allow internal traffic only'. Then, under Egress settings, I added the connector I created earlier (with the IP range 10.8.0.0), in the format projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME, and selected Route all traffic through the VPC connector.
App Engine (NODE js) side:
When the function was publicly available, I called it using the given hostname. Now my POST request looks like the following:
const optionsCFS = {
    hostname: "10.8.0.0", // process.env.CLOUD_URL,
    port: 443, // (tried 28 as well)
    timeout: 5000,
    path: process.env.CLOUD_ORDER_SAVE_PATH, // remaining path
    method: 'POST',
    headers: {
        'Content-Type': 'application/application-json',
        'Content-Length': CFSdata.length,
        // 'charset': 'utf-8'
    }
};
console.log('Going to call CF ');
const orderReq = https.request(optionsCFS, resCFServer => {
    // Do something
});
I get Error 502 - Bad Gateway.
When you set the traffic to internal only, you say to the Cloud Function (or Cloud Run, it's the same behavior):
Hey, accept only the traffic that comes from the VPC.
However, you don't say:
Hey, make my service only reachable through a private IP and no longer through a public IP.
The difference is important, because even if you set your Cloud Function (or your Cloud Run service) to the ingress mode Allow internal traffic only, the service is still exposed on the internet and still reachable publicly, but the gateway in front of your service (GFE, I guess: Google Front End) performs an additional check: "Do you come from the VPC?"
This check is based on traffic metadata that is only present on the internal Google network (which also means the traffic has to stay on the Google Cloud backbone to keep that metadata).
So, to continue the explanation: when you attach a serverless VPC connector to App Engine, you can only route private traffic (RFC 1918 ranges) through the VPC connector.
However, as explained, Cloud Functions and Cloud Run services are reachable on the internet, not on a private IP in the RFC 1918 ranges. And thus, your App Engine traffic doesn't go through the serverless VPC connector, and can't be accepted as "internal" traffic during the ingress check.
With Cloud Functions and Cloud Run, you can set the vpc-egress value to private-ranges-only (similar to the default behavior of App Engine: route only the IPs in the RFC 1918 ranges) or all. It's this latter mode that you need to use to call an internal-only service from Cloud Functions or Cloud Run.
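For reference, the two settings discussed above map onto gcloud deploy flags roughly like this (a sketch, not from the original answer; the function and connector names are placeholders):
gcloud functions deploy FUNCTION_NAME \
  --vpc-connector projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME \
  --ingress-settings internal-only \
  --egress-settings all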

AspNet.Security.OpenIdConnect.Server (ASP.NET vNext) Authority Configuration in Mixed http/https Environments

I am using Visual Studio 2015 Enterprise and ASP.NET vNext Beta8 to build an endpoint that both issues and consumes JWT tokens as described in detail here. As explained in that article the endpoint uses AspNet.Security.OpenIdConnect.Server (AKA OIDC) to do the heavy lifting.
While standing this prototype up in our internal development environment we have encountered a problem using it with a load balancer. In particular, we think it has to do with the "Authority" setting on app.UseJwtBearerAuthentication and our peculiar mix of http/https. With our load balanced environment, any attempt to call a REST method using the token yields this exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
Consider the following steps to reproduce (this is for prototyping and should not be considered production worthy):
We created a beta8 prototype using OIDC as described here.
We deployed the project to 2 identically configured IIS 8.5 servers running on Server 2012 R2. The IIS servers host a beta8 site called "API" with bindings to port 80 and 443 for the host name "devapi.contoso.com" (sanitized for purposes of this post) on all available IP addresses.
Both IIS servers have a host entry that point to themselves:
127.0.0.1 devapi.contoso.com
Our network admin has bound a * certificate (*.contoso.com) with our Kemp load balancer and configured the DNS entry for https://devapi.contoso.com to resolve to the load balancer.
Now this is important, the load balancer has also been configured to proxy https traffic to the IIS servers using http (not, repeat, not on https). It has been explained to me that this is standard operating procedure for our company because they only have to install the certificate in one place. We're not sure why our network admin bound 443 in IIS since it, in theory, never receives any traffic on this port.
We make a secure POST via https to https://devapi.contoso.com/authorize/v1 to fetch a token, which works fine (the details of how to make this post are here):
{
"sub": "todo",
"iss": "https://devapi.contoso.com/",
"aud": "https://devapi.contoso.com/",
"exp": 1446158373,
"nbf": 1446154773
}
We then use this token in another secure get via https to https://devapi.contoso.com/values/v1/5.
OpenIdConnect.OpenIdConnectConfigurationRetriever throws the exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
We think this is happening because OIDC is attempting to consult the host specified in "options.Authority", which we set at startup time to https://devapi.contoso.com/. Further, we speculate that, because our environment has been configured to translate https traffic to non-https traffic between the load balancer and IIS, something is going wrong when the framework tries to resolve https://devapi.contoso.com/. We have tried many configuration changes, including even pointing the authority at non-secure http://devapi.contoso.com, to no avail.
Any assistance in helping us understand this problem would be greatly appreciated.
@Pinpoint was right. This exception was caused by the OIDC configuration code path that allows IdentityModel to initiate non-HTTPS calls. In particular, the code sample we were using was sensitive to a missing trailing slash in the authority URI. Here is a code fragment that uses the Uri class to combine paths in a reliable way, regardless of whether the Authority URI has a trailing slash:
public void Configure(IApplicationBuilder app, IOptions<AppSettings> appSettings)
{
    // ...

    // Add a new middleware validating access tokens issued by the OIDC server.
    app.UseJwtBearerAuthentication(options =>
    {
        options.AuthenticationScheme = JwtBearerDefaults.AuthenticationScheme;
        options.AutomaticAuthentication = false;
        options.Authority = new Uri(appSettings.Value.AuthAuthority).ToString();
        options.Audience = new Uri(appSettings.Value.AuthAuthority).ToString();
        // Allow IdentityModel to use HTTP
        options.ConfigurationManager =
            new ConfigurationManager<OpenIdConnectConfiguration>(
                metadataAddress: new Uri(new Uri(options.Authority), ".well-known/openid-configuration").ToString(),
                configRetriever: new OpenIdConnectConfigurationRetriever(),
                docRetriever: new HttpDocumentRetriever { RequireHttps = false }
            );
    });

    // ...
}
In this example we're pulling in the Authority URI from config.json via "appSettings.Value.AuthAuthority" and then sanitizing/combining it using the Uri class.
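As a quick illustration of why the Uri combination helps (a sketch, not from the original configuration): both forms of the authority resolve to the same metadata address, whereas plain string concatenation without the slash produces the broken 'devapi.contoso.com.well-known' host seen in the exception.
var withSlash    = new Uri(new Uri("https://devapi.contoso.com/"), ".well-known/openid-configuration");
var withoutSlash = new Uri(new Uri("https://devapi.contoso.com"), ".well-known/openid-configuration");
Console.WriteLine(withSlash);    // https://devapi.contoso.com/.well-known/openid-configuration
Console.WriteLine(withoutSlash); // https://devapi.contoso.com/.well-known/openid-configuration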

Redirect Heroku subdomain requests to another IP

I have an app in the App Store that interacts with a RESTful Rails app hosted on Heroku. I made the terrible mistake of hardcoding the API's base URL as myapp.heroku.com instead of using the top-level domain.
I'm now in the process of migrating to a new server, so I'm trying to see what my options are for making the transition seamless to my iPhone app users. I thought of creating a Rack app that redirects all the traffic from that Heroku subdomain to my new server address, but then I read that HTTP 1.1 doesn't allow POST redirection (my API having several POST endpoints). I could always redirect my POST requests as GET ones, but that definitely doesn't feel right.
Is there any other option I might be missing? Is there any way Heroku would accept to change the A records of my subdomain to point to my new server IP?
I ended up doing as John said and changing my API endpoint inside my app. To keep previous versions of the app working (which have the Heroku subdomain hardcoded in them), I wrote this quick Sinatra app and replaced my original Heroku app with it:
require 'sinatra'
require 'mechanize'
API_BASE_URL = "http://newdomain.com"
post '/*' do |path|
url = URI("#{API_BASE_URL}/#{path}")
agent = Mechanize.new
agent.user_agent = request.user_agent
headers = {'AUTHORIZATION' => request.env['HTTP_AUTHORIZATION']}
page = agent.post(url, params, headers)
content_type :json
page.body
end
get '/*' do |path|
url = URI("#{API_BASE_URL}/#{path}")
agent = Mechanize.new
agent.user_agent = request.user_agent
headers = {'AUTHORIZATION' => request.env['HTTP_AUTHORIZATION']}
page = agent.get(url, params, nil, headers)
content_type :json
page.body
end
(this code could probably be reduced down to a single method)
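For what it's worth, one way to collapse the two routes into a single handler (a sketch along the same lines, assuming the same API_BASE_URL constant) would be:
%w(get post).each do |verb|
  send(verb, '/*') do |path|
    url = URI("#{API_BASE_URL}/#{path}")
    agent = Mechanize.new
    agent.user_agent = request.user_agent
    headers = { 'AUTHORIZATION' => request.env['HTTP_AUTHORIZATION'] }
    # Forward the request with the same verb it arrived with
    page = verb == 'post' ? agent.post(url, params, headers) : agent.get(url, params, nil, headers)
    content_type :json
    page.body
  end
end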
They wouldn't update their DNS for you: *.heroku.com will be a wildcard DNS entry; they don't add a new subdomain each time a site is added.
It would seem the best solution is to fix it properly. Attach the new domain to your existing application on Heroku; it will still be accessible on app.heroku.com as well as on yourcustomdomain.com.
Then, update your iOS application to use the new custom domain for its endpoint. Once it's all working, reduce the TTL on the DNS entry and then repoint it at your new server.
