I changed the connection setting of one of my Cloud Functions to 'Allow internal traffic only'.
I have my Node.js app running in the same project and same region as my Cloud Function. I removed 'allUsers' access from my Cloud Function and added My-PROJECT-ID@appspot.gserviceaccount.com as Invoker of my Cloud Function.
But now I am getting a 403 error when I call the function from my Node.js app. What can I do to fix this?
I followed this as guidance: here
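As a side note on the invoker setup above: once 'allUsers' is removed, the caller has to present an identity token for the invoking service account, not just come from the same project. Below is a minimal sketch of such an authenticated call from the Node.js app, assuming the function is still reached through its public https URL (the URL is a placeholder) and that the google-auth-library package is available; on App Engine the library picks up the default App Engine service account automatically:

const { GoogleAuth } = require('google-auth-library');

async function callFunction(payload) {
  // Placeholder URL for the deployed function.
  const functionUrl = 'https://REGION-PROJECT_ID.cloudfunctions.net/FUNCTION_NAME';
  const auth = new GoogleAuth();
  // getIdTokenClient returns a client that attaches an ID token with the
  // function URL as the audience, which is what the IAM invoker check expects.
  const client = await auth.getIdTokenClient(functionUrl);
  const res = await client.request({
    url: functionUrl,
    method: 'POST',
    data: payload,
  });
  return res.data;
}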
------------------UPDATE----------------
Many thanks for the explanation below. It has started making sense now. My setup is currently as follows:
Cloud function side:
I have added My-PROJECT-ID@appspot.gserviceaccount.com as function invoker and removed 'allUsers' as an invoker.
Under 'Variables, networking and advanced settings' I clicked 'Allow internal traffic only', and under Egress settings I added the connector I created earlier with the IP range 10.8.0.0. I added my connector in the format projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME and selected 'Route all traffic through the VPC connector'.
App Engine (Node.js) side:
When I called the function while it was publicly available, I used the given hostname. Now my POST request looks like the following:
const optionsCFS = {
  hostname: "10.8.0.0", //process.env.CLOUD_URL,
  port: 443, //(tried 28 as well)
  timeout: 5000,
  path: process.env.CLOUD_ORDER_SAVE_PATH, // remaining path
  method: 'POST',
  headers: {
    'Content-Type': 'application/application-json',
    'Content-Length': CFSdata.length,
    //'charset': 'utf-8'
  }
}
console.log('Going to call CF ')
const orderReq = https.request(optionsCFS, resCFServer => {
  // Do something
})
I get Error 502 - Bad Gateway.
When you set the traffic to internal only, you say to the Cloud Function (or Cloud Run; it's the same behavior):
Hey, accept only the traffic that comes from the VPC.
However, you don't say:
Hey, make my service reachable only through a private IP and no longer through a public IP.
The difference is important, because even if you set your Cloud Function (or your Cloud Run service) with the ingress mode Allow internal traffic only, the service is still exposed on the internet and still reachable publicly, but the gateway in front of your service (GFE, I guess, the Google Front End) performs an additional check: "Do you come from the VPC?"
This check is based on traffic metadata that is only present inside the internal Google network (which also means the traffic has to stay on the Google Cloud backbone to keep this metadata).
So, to continue the explanation: when you attach a serverless VPC connector to App Engine, you can only route private traffic, i.e. the ranges defined by RFC 1918, through the VPC connector.
However, as explained, Cloud Functions and Cloud Run services are reachable on the internet, not on a private RFC 1918 IP. Thus, your App Engine traffic doesn't go through the serverless VPC connector and can't be accepted as "internal" traffic during the ingress check.
With Cloud Functions and Cloud Run, you can set the vpc-egress value to private-ranges-only (similar to the default behavior of App Engine: route only the IPs in the RFC 1918 ranges) or all. It's the latter mode that you need to use to call an internal-only service from Cloud Functions or Cloud Run.
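To make the "public URL, not private IP" point concrete, here is a small sketch (the hostname is a placeholder) that resolves a function's default URL and checks whether the result falls in the RFC 1918 ranges; a connector in private-ranges-only mode, or App Engine's built-in routing, would never carry traffic to such an address:

const dns = require('dns').promises;

// RFC 1918 ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
function isRfc1918(ip) {
  const [a, b] = ip.split('.').map(Number);
  return a === 10 || (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168);
}

async function main() {
  // Placeholder for the function's default hostname.
  const { address } = await dns.lookup('REGION-PROJECT_ID.cloudfunctions.net', { family: 4 });
  console.log(address, isRfc1918(address) ? '(private, RFC 1918)' : '(public, so it bypasses the connector)');
}

main().catch(console.error);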
I've been using @aws-sdk/client-dynamodb server-side (SvelteKit / Node.js), connecting to a localhost Docker container running an instance of amazon/dynamodb-local:latest, which works well. I used the AWS CLI to configure tables, etc. I've created the client using the simplest configuration:
const client = new DynamoDBClient({ endpoint: 'http://localhost:8000' });
This works server-side, but when the same is executed client-side along with a command, I get a message that the region is missing. I've tried passing region: 'none', but then I get a message that the credentials are missing. Adding dummy credentials enables the command to execute, but I don't get an expected response. For example, sending the ListTablesCommand returns an empty array. If I do the same from the AWS CLI, I get the correct response.
Does the DynamoDB client run client-side, i.e., in the browser? Or am I missing something else?
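For reference, a client configured along the lines described above (explicit endpoint, region, and dummy credentials; all values are placeholders) might look like the sketch below. One detail that may matter: DynamoDB Local namespaces its data by access key ID and region unless it is started with the -sharedDb flag, so a client using different dummy credentials than the AWS CLI profile can legitimately see an empty table list.

import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

// Placeholder values: DynamoDB Local accepts any credentials, but without -sharedDb
// it keys its tables by access key ID and region, so these should match the values
// the AWS CLI used when the tables were created.
const client = new DynamoDBClient({
  endpoint: 'http://localhost:8000',
  region: 'us-east-1',
  credentials: { accessKeyId: 'dummy', secretAccessKey: 'dummy' },
});

const { TableNames } = await client.send(new ListTablesCommand({}));
console.log(TableNames);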
No, it doesn't run in a browser. You will need API Gateway and some backend code to connect a browser to DynamoDB.
In the Cloud Console I am able to see a total of 7 resources, which does not match the result I get from the API call. With the API call I am getting 75 resources:
GCP Doc link
https://cloud.google.com/compute/docs/reference/rest/v1/addresses/list
Method GET:
https://compute.googleapis.com/compute/v1/projects/{project}/regions/{region}/addresses
Here, for us-east1, the UI console shows 1 entry and the API gives 4 records.
EDIT
For region us-east1 there are 4 records:
As discussed in the comment section, you see a mismatch between the Cloud Console (image 1) and the API request (image 2) because the UI shows you EXTERNAL IPs while the API also returns INTERNAL IPs.
To solve this issue you should follow the API documentation, Method: addresses.list, and filter on the items[].addressType field:
The type of address to reserve, either INTERNAL or EXTERNAL. If unspecified, defaults to EXTERNAL.
Furthermore, you can see EPHEMERAL IPs via the Cloud Console UI, but according to the API documentation, items[].address is:
The static IP address represented by this resource.
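For illustration, here is a sketch of listing only the EXTERNAL addresses for a region with the Node.js googleapis client; the filter expression and auth scope are my assumptions based on the Compute Engine list-filter syntax, not something stated in the answer above:

const { google } = require('googleapis');

async function listExternalAddresses(project, region) {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/compute.readonly'],
  });
  const compute = google.compute({ version: 'v1', auth });

  // Restrict the listing to the address type the Console page shows.
  const res = await compute.addresses.list({
    project,
    region,
    filter: 'addressType = "EXTERNAL"',
  });
  return res.data.items || [];
}

listExternalAddresses('PROJECT_ID', 'us-east1').then(items =>
  console.log(items.map(a => `${a.name} ${a.address} ${a.addressType}`))
);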
We are running a simple application that connects to Firebase and reads some data. It fails to connect with the following timeout error:
#firebase/database: FIREBASE WARNING: {"code":"app/invalid-credential",
"message":"Credential implementation provided to initializeApp()
via the \"credential\" property failed to fetch a valid Google OAuth2 access token
with the following error: \"Failed to parse access token response: Error: Error
while making request: connect ETIMEDOUT
We are behind a firewall/proxy, and it appears that it is blocking traffic to/from Firebase, hence the failed connection. My question is: what ports need to be opened, and to what destination URLs, to make this application work normally?
Any help will be much appreciated!
Finally, after struggling with the issue for several days, I got it working. I needed to contact the network team and request that they perform the following actions:
Open ports 5228, 5229, and 5230 for Firebase communication.
Open communication at the proxy level between the source server and the following URLs:
fcm.googleapis.com
gcm-http.googleapis.com
accounts.google.com
{project-name}.firebaseio.com
I then added the following code to my Node.js application:
var globalTunnel = require('global-tunnel-ng');
globalTunnel.initialize({
  host: '<proxy-url>',
  port: <proxy-port>,
  //proxyAuth: 'userId:password', // optional authentication
  sockets: 50 // optional pool size for each http and https
});
Installed module global-tunnel-ng:
npm install global-tunnel-ng
It solved my problem and I hope it can help others too. :-)
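To make the ordering explicit, here is a sketch of how the pieces above fit together; the proxy host/port, service account path, and database path are placeholders, and the key assumption is that global-tunnel-ng is initialized before firebase-admin makes its first request:

// Initialize the proxy tunnel first so that firebase-admin's token and database
// requests are routed through the corporate proxy.
const globalTunnel = require('global-tunnel-ng');
globalTunnel.initialize({
  host: 'proxy.example.com', // placeholder
  port: 3128,                // placeholder
});

const admin = require('firebase-admin');
const serviceAccount = require('./service-account.json'); // placeholder path

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: 'https://<project-name>.firebaseio.com',
});

// First read; this is the kind of call that previously failed with ETIMEDOUT.
admin.database().ref('/some/path').once('value')
  .then(snapshot => console.log(snapshot.val()))
  .catch(console.error);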
I used Wireshark to monitor a local install of a Node.js application using the Admin SDK for Firestore. I also referenced this list by Netify. This is what I found:
*.firebaseio.com
*.google.com
*.google-analytics.com
*.googleapis.com
*.firebase.com
*.firebaseapp.com
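If you want to verify the allow-list from the application side, a small reachability check like the sketch below can help; the host list is a placeholder to be filled in from the lists above (wildcard entries need a concrete hostname, for example your own <project-name>.firebaseio.com), and it assumes the global-tunnel-ng setup from the previous answer is already in place:

const https = require('https');

// Placeholder hosts taken from the lists above; adjust to your project.
const hosts = [
  'accounts.google.com',
  'fcm.googleapis.com',
  // '<project-name>.firebaseio.com',
];

for (const host of hosts) {
  https
    .get({ host, path: '/', timeout: 5000 }, res => {
      console.log(`${host}: HTTP ${res.statusCode}`);
    })
    .on('error', err => console.error(`${host}: ${err.message}`));
}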
I have recently moved a project over to another server. The domain name is the same; it has just been pointed to the new server. The URL is exactly the same. Since moving the project over, however, I get this error when the app tries to connect to Google's OAuth API.
{
"name": "Error calling GET https:\/\/www.googleapis.com\/analytics\/v3\/management\/accounts\/~all\/webproperties\/~all\/profiles?key=AIzaSyBKUP8JriiOnFnbJm_QYt_bHTMuHf-ilAI: (403) There is a per-IP or per-Referer restriction configured on your API key and the request does not match these restrictions. Please use the Google Developers Console to update your API key configuration if request from this IP or referer should be allowed.",
"url": "\/analytics\/statistics.json"
}
The obvious reason (based on the error message) would be that I haven't added the new server IP to the list of allowed IPs in the developers console under APIs & auth -> Credentials -> Key for server applications.
I have added the IP. I've checked that the domain has propagated by pinging it, and the new IP (the one entered in the console) comes up, so I'm struggling to work out why it doesn't work.
Has anybody come across this before that may be able to help me solve it?
Go to Project -> APIs & Auth -> Credentials -> API Key -> Create New Key -> Browser Key. It may take up to 5 minutes to reflect the changes.
And it worked for me.
After you added your new server IP, you need to generate a new API key from the Console. This message shows up when access is not properly configured. Look here and scroll down to "accessNotConfigured".
So, go to your developer console, Project -> APIs & Auth -> Credentials -> Public API Access -> Create New Key -> Server Key. Use this new key and you should be good to go.
I've had this problem for a while as well but finally solved it:
I noticed that when trying wget http://bot.whatismyipaddress.com/ from my server it would actually return an IPv6 address, while on the API key's config page I had entered the IPv4 address of my server. Once I added the IPv6 address, my requests were finally accepted.
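If you want to check this from Node.js rather than with wget, something along these lines works; ipify is a third-party service used here purely for illustration (api64.ipify.org returns IPv6 when available, api.ipify.org forces IPv4), so compare both results against the addresses allowed on the key:

const https = require('https');

for (const host of ['api64.ipify.org', 'api.ipify.org']) {
  https.get(`https://${host}/`, res => {
    let body = '';
    res.on('data', chunk => (body += chunk));
    res.on('end', () => console.log(`${host}: ${body}`));
  });
}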
Go to Project -> APIs & Auth -> Credentials -> Public API Access -> Create New Key -> Server Key -> 'Accept requests from these server IP addresses (Optional)' section,
then remove all the IP addresses and update first, then try it. Later you can add back the specific IP address, which, weirdly, is what worked for me.
Hi, I'm running into an issue where Symfony2 doesn't recognize the load balancer headers from Amazon AWS, which are needed to determine whether a request is SSL or not using the requires_channel: https security configuration.
By default, Symfony2's $request->isSecure() looks for "X_FORWARDED_PROTO", but there's apparently no standard for this, and Amazon AWS load balancers use "HTTP_X_FORWARDED_PROTO".
I see the cookbook article for setting trusted proxies in config, but that's geared around whitelisting specific IP addresses and won't work with AWS, which generates dynamic IPs. Another option, setting the framework config to include trust_proxy_headers: true, is deprecated. This breaks my app by forcing endless redirects on the pages that require SSL only.
You can now change the headers using setTrustedHeaderName(). This method allows you to change the four headers used throughout the file.
const HEADER_CLIENT_IP = 'client_ip'; // defaults 'X_FORWARDED_FOR'
const HEADER_CLIENT_HOST = 'client_host'; // defaults 'X_FORWARDED_HOST'
const HEADER_CLIENT_PROTO = 'client_proto'; // defaults 'X_FORWARDED_PROTO'
const HEADER_CLIENT_PORT = 'client_port'; // defaults 'X_FORWARDED_PORT'
The above, taken from the Request class, indicates the keys available for use with the aforementioned method.
// $request is instance of HttpFoundation\Request;
$request->setTrustedHeaderName('client_proto', 'HTTP_X_FORWARDED_PROTO');
That said, at the time of writing, using "symfony/http-foundation": "2.5.*" the below code correctly determines whether or not the request is secure whilst behind an AWS Load Balancer.
// All IPs (*)
// $proxies = [$request->getClientIp()];
// Array of CIDR pools from load balancer
// EC2 -> Network & Security -> Load Balancers
// -> X -> Instances (tab) -> Availability Zones
// -> Subnet (column)
$proxies = ['172.x.x.0/20'];
$request->setTrustedProxies($proxies);
var_dump($request->isSecure()); // bool(true)
You're right, the X_FORWARDED_PROTO header is hardcoded into HttpFoundation\Request while, as far as I know, overriding the request class in Symfony is currently not possible.
There has been a discussion/RFC about this topic here and there is an open pull-request that solves this issue using a RequestFactory.