I'm trying to put my Google Cloud Functions behind Cloud CDN.
I followed this tutorial: https://cloud.google.com/cdn/docs/setting-up-cdn-with-serverless
With a single function as the backend endpoint everything looks fine: when I open the load balancer's frontend IP I see the same result as when I open the function's URL directly, which suggests the function is now served through the CDN via that IP.
However, I have too many functions for that, so I'm trying to use a URL mask to cover all of the functions with one endpoint.
URL mask looks like this:
https://us-central1-my-real-project.cloudfunctions.net/<function>
The problem is that I can't figure out how to use the load balancer's frontend IP for an endpoint whose Cloud Functions are matched by a URL mask.
When I open the load balancer's IP I get:
Error: Not Found
The requested URL / was not found on this server.
P.S. The same happens if I open http://<load-balancing-frontend-ip>/my-function-name
UPDATE:
Screenshots of my configuration are in this Google Drive folder:
https://drive.google.com/drive/folders/1eI9tx_SQcJ_uJrlt-xzeZua9bwklszik?usp=sharing
(Sorry, I don't know another way to share the configuration, and I can't attach images directly in the question because of low reputation.)
As described in the documentation, the URL mask must be only /<function>, with no URL in front of it.
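For reference, creating the serverless NEG with that mask might look something like this with the gcloud CLI (a sketch; my-neg is a placeholder name, and the region should match your functions):

gcloud compute network-endpoint-groups create my-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-function-url-mask="/<function>"

With that in place, http://<load-balancing-frontend-ip>/my-function-name should route to the function named my-function-name.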
Related
I have a WordPress installation, but it's behind an IP whitelist firewall. I'd like to make the raw JSON data therein publicly accessible only via the WordPress API. The entire WP instance can't be made public, but I can whitelist an IP for a client/host proxy server.
Diagram attached.
I'd imagine some sort of Node or React setup, but am hoping for something more direct, like a reverse proxy setup using Apache/NGINX. This service won't have any sort of front-end at all. It's only for grabbing and returning JSON from WordPress.
Listen for requests, pass requests to WP, return JSON to requester.
I'm sure something like this has been solved for, I'm just having a hard time getting started.
Well, couldn't you:
Have a simple page on your server that calls the WordPress API and prints the JSON result;
Have a vhost configured so that WordPress and the JSON service use different IPs (see the Apache docs);
Whitelist the IP of the JSON service?
A minimal Node sketch of the first idea follows.
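This is only a sketch, assuming Node.js; the host and port are placeholders, and only GET requests to the REST API are forwarded:

const http = require('http');
const https = require('https');

const WP_HOST = 'wp.internal.example.com'; // placeholder: the whitelisted WordPress host

http.createServer((req, res) => {
  // Only expose the REST API, never the rest of the WP instance
  if (!req.url.startsWith('/wp-json/')) {
    res.writeHead(404, { 'Content-Type': 'application/json' });
    return res.end('{"error":"not found"}');
  }
  const upstream = https.request(
    { host: WP_HOST, path: req.url, method: 'GET' },
    (wpRes) => {
      res.writeHead(wpRes.statusCode, { 'Content-Type': 'application/json' });
      wpRes.pipe(res); // stream the JSON back unchanged
    }
  );
  upstream.on('error', () => {
    res.writeHead(502, { 'Content-Type': 'application/json' });
    res.end('{"error":"upstream unavailable"}');
  });
  upstream.end();
}).listen(8080); // placeholder port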
I'm not able to access the URL below through TestCafe. It loads a JS file:
https://assets.adobedtm.com/launch-xxxx.min.js
TestCafe accesses it through its proxy as:
http://localhost:1337/Kds4rFTmQ!s!utf-8/https://assets.adobedtm.com/launch-xxxx.min.js
But when I try to access the internal URL below, I can access it fine:
https://parentwebsite/xxxx.js
which TestCafe accesses as:
http://localhost:1337/Kds4rFTmQ!s!utf-8/https://parentwebsite/xxxx.js
Please let me know what I can do or where I'm going wrong.
On further investigation of the calls being made, I found they were failing at our firewall, which needed some changes on our end.
I have an Amazon AWS S3 bucket set up, with access to the files via a URL. For aesthetic purposes, I'd like to access these files using a cleaner URL rather than the Amazon-provided one. Something like this:
https://amazon-aws-url.com/bucket-name/filename.png -> https://subdomain.domain.com/filename.png
Can someone please point me in the right direction on how to configure my NGINX server to proxy these requests?
Any help is much appreciated.
If you name the bucket 'www.exampledomain.com', enable static website hosting on it, and then update the CNAME for 'www.exampledomain.com' to point to the bucket's website hosting URL, your custom domain will take you to the bucket.
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
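For example, the DNS record might look like this (assuming the bucket is in us-east-1; the website endpoint format varies by region):

www.exampledomain.com.  CNAME  www.exampledomain.com.s3-website-us-east-1.amazonaws.com.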
If you would like an HTTPS serverless implementation, I would recommend putting a CloudFront distribution in front of your S3 bucket.
https://medium.com/@sbuckpesch/setup-aws-s3-static-website-hosting-using-ssl-acm-34d41d32e394
Both of these solutions are far more cost-effective than setting up a server. However, if you would definitely like to set up an NGINX proxy server, there is an instructional article below, followed by a minimal config sketch.
https://medium.com/happy-cog/deploying-static-websites-to-aws-s3-behind-an-nginx-proxy-fd51cc0c53ec
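For completeness, a minimal sketch of such a proxy (bucket-name, the region, and subdomain.domain.com are placeholders, and it assumes the objects are publicly readable):

server {
    listen 443 ssl;
    server_name subdomain.domain.com;
    # ssl_certificate / ssl_certificate_key directives for your domain go here

    location / {
        # Forward every request path straight to the bucket
        proxy_set_header Host bucket-name.s3.amazonaws.com;
        proxy_pass https://bucket-name.s3.amazonaws.com;
    }
}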
I have dynamically generated URLs that I need to create for staging and production environments. I am doing a mix of Firebase Hosting and Functions, and I am using my Firebase config to route URLs to my Firebase Functions "app". When I try to get the Hosting URL inside a Firebase function with req.get('host'), I get the Functions URL instead. How can I get the Hosting URL, i.e. the URL that triggered the Firebase function?
If you examine the contents of req.headers, you'll find some attributes of interest:
host: The host of Cloud Functions, e.g. "us-central1-YOUR-PROJECT.cloudfunctions.net"
x-forwarded-host: Your Firebase Hosting host, e.g. "YOUR-PROJECT.firebaseapp.com"
x-forwarded-proto: The protocol of the original request, e.g. "https"
x-original-url: The URL path of the original request, e.g. "/test"
Concatenating the three that start with "x-" gives you the original URL, as sketched below.
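For example (a sketch only; as noted below, these headers are not guaranteed to be documented behavior):

const functions = require('firebase-functions');

exports.echoOriginalUrl = functions.https.onRequest((req, res) => {
  // Rebuild the URL the user actually requested from the forwarded headers
  const originalUrl =
    req.headers['x-forwarded-proto'] + '://' +
    req.headers['x-forwarded-host'] +
    req.headers['x-original-url'];
  res.send(originalUrl); // e.g. "https://YOUR-PROJECT.firebaseapp.com/test"
});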
I don't know if these headers are fully documented and supported.
If you are using callable functions, you can find the information in
context.rawRequest.headers.origin
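For example (a minimal sketch; whoCalledMe is a placeholder name):

const functions = require('firebase-functions');

exports.whoCalledMe = functions.https.onCall((data, context) => {
  // The raw Express request is available on the callable context
  return { origin: context.rawRequest.headers.origin };
});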
As far as I know, the original URL that the user typed is not available in the request you get in Cloud Functions. The rewrite happens on a different server, and that information is not passed along.
I'm trying to find a tool that will allow non-programmers to test files on a live server.
For example, they could modify an image on their computer, reload a webpage, then see the results of their work immediately.
I've tried finding a tool for this, because it seems obvious enough that someone must have thought of it, but a lot of the software I've seen doesn't quite fit. A tool called Fiddler does this (it calls it autoresponding), but it's Windows-only. I could change the hosts file to redirect to a local instance of nginx or something, but that seems difficult to maintain when all I really want is a simple tool that will do something like this...
http://someserver.com/css/(.*) -> /home/user/localcss/$1
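(For reference, the hosts-file approach I mentioned might look roughly like this with nginx, after pointing someserver.com at 127.0.0.1 in the hosts file; the upstream IP and paths are placeholders:)

server {
    listen 80;
    server_name someserver.com;

    # Serve the tester's local copies of the CSS
    location /css/ {
        alias /home/user/localcss/;
    }

    # Pass everything else through to the real server by IP
    # (DNS for someserver.com now points at localhost)
    location / {
        proxy_set_header Host someserver.com;
        proxy_pass http://203.0.113.10;  # the real server's IP (placeholder)
    }
}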
Does anybody have any recommendations?
Fiddler has this feature; just click the AutoResponder tab and map URLs to local files. Thousands of people do this every day.
See also video #5 here: http://www.fiddlerbook.com/fiddler/help/video/default.asp
I found Charles Proxy very useful for this
http://www.charlesproxy.com/documentation/tools/map-local/
Max's PAC solution was a life-saver, so I'm providing more details (I can't upvote yet).
To use a local version of, say, CSS files, create a file 'proxy.pac' containing this function:
function FindProxyForURL(url, host)
{
    // Use a regex to match requests ending with '.css'
    // and redirect them to the local proxy
    var regexpr = /\.css$/;
    if (regexpr.test(url))
    {
        // PAC "PROXY" entries take host:port; 8080 assumes a local
        // proxy listening on that port
        return "PROXY localhost:8080";
    }
    // Or else connect directly:
    return "DIRECT";
}
Save 'proxy.pac' and point your browser to this file. In Firefox this is in Options > Advanced > Connection > Settings > Automatic Proxy Configuration URL
For best practice, also add a MIME type to your web server: map '.pac' to type 'application/x-ns-proxy-autoconfig'.
All requests to .css files will now be routed to localhost. Don't forget to ensure the file structure is the same on the proxy server.
In the case of CSS, it may well be easier to override the CSS using the browser's local chrome. For example, in Firefox: chrome/userContent.css. See http://kb.mozillazine.org/UserContent.css
It's been a while since I asked this question, and I have a good technique that wasn't suggested.
PAC files are supported by all major browsers and allow you to write a script that can redirect any individual request to a proxy server. So, for example, the proxy server could serve the PAC file, have the PAC file redirect whitelisted URLs to the proxy server, and return the local versions of those files. It can even support HTTPS.
Beware of one gotcha: Internet Explorer. It helpfully "caches" the results of this script per host, so that if one URL on a domain is proxied, all URLs on that domain will be proxied. This feature can be disabled, however.
You can do this with the modify response rule in Requestly.
Using the local file option you can specify any file to be used as the response for the intercepted request.
According to their documentation, it also supports hot reloading, i.e., as long as the file path remains the same, the rule will pick up the changes you made.
As for dynamic URL matching, they support regex and wildcards in their source filters.
Note: This is currently only available in their desktop app.
If you want to implement this using their Chrome extension, which is what I personally did, you can use the Redirect rule paired with a mock server. Here is a page explaining this.
You can set up a mock server / mock file endpoint within Requestly instead of using something like nginx or a local server. But this works only for text-based content, not images.
This would also bypass any setup on the tester's local machine. They would only need to install the extension. All you would have to do is send them the endpoint for your mock server and the redirect rule that you created.
Actually, you can't do this, because browsers don't allow pages loaded over http:// to access files on the local machine (just think about it for a moment: what would happen if, for example, a malicious webpage could load private files from your computer?).
Some browsers (e.g. Safari) allow files loaded over file:// to access other file:// files; others don't. But no browser allows http:// to access file://.
Firefox has a feature called "signed scripts": scripts digitally signed with a trusted certificate can ask the user to grant them access to the local hard drive. Look at this: http://www.mozilla.org/projects/security/components/signed-scripts.html
Do you mean the Fiddler Web Proxy (www.fiddler2.com)? There is a commercial Java-based alternative named Charles Web Proxy that may fit your needs.