I've been trying for the past couple of weeks to get this Load Balancer + Cloud Storage + CDN combo to work. It just doesn't, at least for me.
I put some static files (two JPGs, plus an SVG and a CSS file just in case) into a multi-regional US bucket (I tried a regional one too) to test it out, but it just doesn't seem to want to cache at all.
Every time I check the response headers, all I get is the same old boring bucket metadata.
Cache-Control is set just fine; the v=2 you can see at the top is there because I keep trying to make it cache in different ways, and query strings were the last (equally unsuccessful) attempt. The load balancer works, because this IP resolves from it.
What the hell am I doing wrong?
You can check the links here:
http://35.227.213.66/style.css
http://35.227.213.66/logo.svg
http://35.227.213.66/1.jpg
http://35.227.213.66/2.jpg
I can see that you are using the correct metadata
Cache-Control: public, max-age=604800
It would be interesting to check how many requests have been answered by the CDN and how many by the bucket. You can use a query with 'gcloud beta logging' to check this:
From the CDN:
$ gcloud beta logging read 'resource.type="http_load_balancer" AND "logo.svg" AND httpRequest.cacheHit=true AND timestamp>="2017-12-04T07:23:00.054257251Z"' | wc -l
From your bucket:
$ gcloud beta logging read 'resource.type="http_load_balancer" AND "logo.svg" AND httpRequest.cacheHit=false AND timestamp>="2017-12-04T07:23:00.054257251Z"' | wc -l
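Another quick client-side check (assuming you have curl available): Cloud CDN adds an Age header to responses it serves from cache, so you can request one of the objects a few times and inspect the headers, for example:
$ curl -sI http://35.227.213.66/logo.svg | grep -iE 'cache-control|^age|via'
If an Age header never shows up even after repeated requests for the same object, the responses are still coming straight from the bucket.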
I have seen many references to this issue spanning several years, but 95% of them relate to Apache. I'm on NGINX, so I can't try the solutions involving the .htaccess file.
{"code":"woocommerce_rest_cannot_view","message":"Sorry, you cannot list resources.","data":{"status":401}}
Since nothing really covers NGINX for this problem, I thought I'd start a new thread.
The first time it happened was when I tried to link Woobotify, which automatically generates its own keys. The keys were created, but it reports a read/write error (despite the permissions being set up correctly).
So I created a new set of keys from within WP and made a direct call (while logged in as admin, of course),
as in ://site.com/wp-json/wc/v3/products/categories?consumer_key=ck_8a9b...etc, to see whether the problem was on the server side or on Woobotify's side, and I still got the error.
If you refer me to http://woocommerce.github.io/woocommerce-rest-api-docs/#rest-api-keys,
I am too much of a newbie to make use of that information. I either need a step-by-step guide, or I am willing to hire someone to make it work for me.
LEMP stack on a self-managed VPS.
Here is an example of how I solved it:
require "woocommerce_api"
woocommerce = WooCommerce::API.new(
"https://example.com",
"consumer_key",
"consumer_secret",
{
wp_json: true,
version: "wc/v3",
query_string_auth: true
}
)
Or, simply, for Postman:
https://example.com/wp-json/wc/v3/products?consumer_key={{csk}}&consumer_secret={{cs}}
The key is query_string_auth: true: under HTTPS you need to force the authentication credentials to be sent as query-string parameters.
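For reference, this is roughly what the two authentication styles look like from the command line (the keys here are hypothetical placeholders, and the endpoint is the standard WooCommerce products route):
# credentials sent as query-string parameters (what query_string_auth: true produces)
curl "https://example.com/wp-json/wc/v3/products?consumer_key=ck_xxx&consumer_secret=cs_xxx"
# credentials sent via HTTP Basic Auth, which WooCommerce also accepts over HTTPS
curl -u ck_xxx:cs_xxx https://example.com/wp-json/wc/v3/products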
I am looking into GoReplay to reproduce part of the production traffic from yesterday.
The traffic I want to reproduce has been recorded with nginx, and I can save it as a .log or .csv file.
From what I can tell from the "replay HTTP traffic" docs, it is possible to reproduce traffic using a command like:
sudo gor --input-file request.gor --output-http="http://localhost:3001"
but this requires a .gor file.
My question is, is the reproduction of traffic (using GoReplay) restricted to .gor files, or could I use nginx .log files to do so?
If this is not possible, and given that I don't have a .gor file describing yesterday's requests, would you recommend creating a file-conversion script to convert the log files into .gor files, or can you recommend a better approach?
After asking this question on the GoReplay GitHub page, I got the answer that:
* there is no way to reproduce traffic directly from logs;
* you must use .gor files to recreate the traffic;
Thus, the only way to replay this traffic is to create a .log to .gor file converter (a rough sketch of that idea is below).
link to official answer: https://github.com/buger/goreplay/issues/668
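In case it helps anyone, here is a very rough sketch of such a converter. It assumes the default combined/common nginx log format, bodyless GET requests, and the .gor payload layout as I remember it from the GoReplay wiki (a header line of the form "1 <id> <timestamp>", the raw HTTP request, then a 🐵🙈🙉 separator line); double-check that format against the wiki before relying on this.
# rough nginx access.log -> requests.gor converter sketch (see caveats above)
awk -v host="yourdomain.com" '{
  gsub(/"/, "", $6)              # $6 is the quoted request method, e.g. "GET
  print "1 " NR " " NR           # payload header: type 1 = request, fake id and timestamp
  print $6 " " $7 " HTTP/1.1"    # rebuild the request line from method and path
  print "Host: " host
  print ""                       # a blank line terminates the headers
  print "🐵🙈🙉"                  # payload separator expected by GoReplay
}' access.log > requests.gor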
I've found that I can use another package to replay the logs I have, as-is, locally. While that replay is running, you can have GoReplay listen for the traffic, capture it, and save it to .gor files. Then you can run GoReplay with those newly created files, updating the domain and whatever else you need; roughly, that capture-and-replay part looks like the commands below.
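These are the standard GoReplay flags for file capture and file replay; the port and the target URL are placeholders, so adjust them to your setup:
# capture the live (or locally replayed) traffic on port 80 and persist it as a .gor file
sudo gor --input-raw :80 --output-file requests.gor
# later, replay the captured file against the environment you want to test
sudo gor --input-file requests.gor --output-http "http://localhost:3001"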
Let me know if you want me to provide a step-by-step.
Does anyone know why Firebase Storage would be so ridiculously slow compared to Firebase Hosting?
Results
Time to download an image from Firebase Hosting: 16 ms
Time to download the same image from Firebase Storage: 2.23 s (2.22 s of which is TTFB)
Time to download the same image from Firebase Storage (Asia Pacific region): 1.72 s (1.70 s of which is TTFB)
(File size: 22.7 KB / JPEG / Firebase Storage has read access open to everyone)
This is repeated over and over in tests. Is there any way to speed this up to a decent time, or is Firebase Storage unusable for small files (images/thumbnails)?
For comparison:
S3 North Cal: approximately 500 ms
S3 Asia Pacific: approximately 30 ms
Cloudinary: approximately 20 ms
Extra info:
I am based in Australia.
Exact same files. Always images under 100 KB.
The slowdown is always in the TTFB, according to dev tools.
Hosting URL: https://.firebaseapp.com/images/thumb.jpg
Storage URL: https://firebasestorage.googleapis.com/v0/b/.appspot.com/o/thumb.jpg?alt=media&token=
I found the solution.
If your files are already uploaded to Storage, go to https://console.cloud.google.com/storage/browser?project=your_project > pick your bucket > select all the files you're interested in and click "Make public" (or something similar; I'm not a native English speaker).
To have all newly uploaded files public by default, you need to install the Google Cloud SDK (https://cloud.google.com/sdk/docs/) and run the following command for your bucket from your command line:
gsutil defacl set public-read gs://your_bucket
After that, all my current and new images are available at storage.googleapis.com/my_project.appspot.com/img/image_name.jpg,
and the download time is definitely shorter.
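If you'd rather not click through the console for the files that are already in the bucket, the same can be done from the command line (assuming the Cloud SDK from above is installed; -m just runs the changes in parallel):
gsutil -m acl ch -r -u AllUsers:R gs://your_bucket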
Hosting = Storage + CDN, so what you're really seeing is that you're hitting a CDN near you rather than going directly to the GCS or S3 bucket. The same is true for Cloudinary/Imgix. This is why performance is so much better for Hosting than for Storage.
Addressing the issue of TTFB being so different between AWS and GCP: unfortunately this is a known issue of GCS vs S3 (there is a great blog post with an in-depth performance analysis). I know the team is working to address this problem, but going the "stick a CDN in front of it" route will get you a faster solution (provided you don't need to restrict access, or your CDN can authorize requests).
Note: GCP has announced a Sydney region (announcement blog post) to be launched in 2017, which might help you.
In addition to Ziwi's answer:
I think it is also fine to change the rules directly in Firebase:
// Only a user can upload their profile picture, but anyone can view it
service firebase.storage {
  match /b/<bucket>/o {
    match /users/{userId}/profilePicture.png {
      allow read;
      allow write: if request.auth.uid == userId;
    }
  }
}
The source is https://firebase.googleblog.com/2016/07/5-tips-for-firebase-storage.html
I would like to host my web app on Firebase, since I've been using their services and features for a long time (since before Firebase was part of Google, and back when its static hosting service was named Divshot...).
But I've got a demo domain from Freenom (a .tk domain) and I was wondering how to connect it to Firebase.
In the DNS management panel I can only set this one parameter for a TXT record, so where should I define the required
google-site-verification=...
value?
Thank You to all!
PS: I've already seen
Firebase hosting custom domain error,
the related firebase-talk thread Dqmz9Iuio54,
and the question how-can-i-verify-my-custom-domain-using-domains-google-com/39020649#39020649,
but none of them seems to address my problem...
PS: I've come here from the Firebase support page, where Stack Overflow is listed as the first choice.
Thank you!
Leave the "Name" field blank and fill "Target" with the google-site-verification=... value. Once you've done so, things should go through. One way to check is to run:
dig yourdomain.tk TXT
If you've done it correctly, you should receive back the google-site-verification=... value. It may take some time to propagate before it starts showing up.
* DNS registrar / records host: delete the TXT records pointing to Firebase.
* Firebase console: delete the custom domain.
* Firebase console: add the custom domain again.
* Copy the two TXT records from Firebase to your DNS host.
Your DNS host should end up with two entries, one for yourdomain.tk and one for www.yourdomain.tk. Once yourdomain.tk is added, it will show with an empty name field, and the other entry will show as www.
Then copy the two A records given by Firebase to the Freenom DNS records. There will be four A records: two for yourdomain.tk and two for www.yourdomain.tk.
Wait 24-48 hours to see if the changes work.
If it doesn't work, contact Firebase support from the console; be sure to take screenshots of the DNS records and the Firebase console. These will help the Firebase support team troubleshoot the problem.
Meanwhile, you can check the DNS propagation yourself.
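For example, you can query a public resolver directly to see whether the new records are already visible outside your own network (8.8.8.8 is just one example of a public DNS server; substitute your own domain):
dig @8.8.8.8 -t txt +noall +answer yourdomain.tk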
The Firebase Hosting servers run what is essentially this command for verifying the TXT records for your domain:
dig -t txt +noall +answer yourdomain.tk
If you run this command right now, you might not get any results. If the Firebase servers see the same thing in their DNS query, they will not be able to continue.
That means that either you didn't save/apply your changes yet, or they haven't propagated everywhere yet. The longer it takes for the changes to show up, the more likely it becomes that you still need to take some action at your DNS provider.
I have the following setup: Riak 1.4.12, Riak CS 1.5.3, Stanchion 1.5.0.
I am able to list bucket contents, and authentication works (I get a response when listing a bucket, trying to remove one, or PUTting a file), but I get an AccessDenied error when trying to create a bucket.
I found this thread http://riak-users.197444.n3.nabble.com/RIAK-CS-Unable-to-create-bucket-using-s3cmd-AccessDenied-td4032375.html and tried adding signature_v2 = True to .s3cfg with no success, and I've also tried three versions of s3cmd (1.5.0, 1.5.0-alpha, 1.0.1). I also tried creating a bucket using the Python library boto, which also gives an access denied error.
I'm stumped :( Any suggestions on where I should look next would be greatly appreciated! I'm not sure where the logs for individual operations against Riak CS are; I've set the lager log level to debug and wasn't able to see anything in the logs.
Thanks!
Ambert
I posted the same question to the riak-users mailing list and got an answer!
In my case, I had to set the admin.key and admin.secret in /etc/stanchion/stanchion.conf.
After setting them, s3cmd mb succeeded.
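For anyone hitting the same thing, the relevant bits looked roughly like this (the values are placeholders; the key and secret must match the admin user configured for your Riak CS install, and Stanchion needs to be restarted after the change):
# /etc/stanchion/stanchion.conf
admin.key = YOUR_ADMIN_KEY
admin.secret = YOUR_ADMIN_SECRET
# after restarting Stanchion, retry the bucket creation
$ s3cmd mb s3://my-new-bucket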