Firebase Storage Indexing

My Firebase Storage-hosted images keep getting blocked from indexing by https://firebasestorage.googleapis.com/robots.txt. There is nothing private in these images, so is there a way to unblock them? I've tried uploading my own robots.txt to the bucket root, but that doesn't seem to work either.

I assume you're trying to use something like Twitterbot? I'd be interested to hear more about the use case.
The good news is that we just removed our robots.txt file and will deploy this change in the next backend release, so bots will be allowed to crawl your bucket soon. Happy to update this thread once it's in production :)

That's great news! In my case it's Twitterbot that's been unable to follow Firebase Storage image links, and therefore unable to display my CMS's preview images in shared Twitter Cards. I think you answered a question about that on the Twitter forums (I'm commenting here because that thread's been closed). Thanks also for saying you'll report back here when the change has been rolled out; any chance you can give a rough estimate, though? Is it likely to be months rather than weeks (or days!)?
Cheers.


Deploy to firebase hosting from a firebase function

Is it possible to deploy static assets from a Firebase Function to Firebase Hosting?
Use case: a blog with static HTML files. Blog content and meta info would be stored in the database (content as markdown). On publish or update, a Firebase Function is triggered, which parses the markdown, generates a static HTML file for the blog post, and deploys it to Firebase Hosting. After deployment, the function would store the live URL in the database.
Would this workflow be possible? I cannot find anything about deploying from functions in the current documentation.
As a workaround, I could imagine a setup with Travis CI: the function triggers a rebuild on Travis, and Travis builds the static assets and deploys them to Firebase Hosting. But this seems like a huge overhead.
I could also pull the markdown content from the database and build on the client, but I'd really like to try the static-file approach for initial-load-time reasons.
I have been wanting to do this for a long time, and it seems that with the newly unveiled Firebase Functions Hosting integration... well, we still can't do exactly what we want. But we can get close!
If you read the post above, you can see how we can now edit firebase.json to rewrite a URL (or URLs) to point to a Firebase Function, which can build the page from markdown stored in Firebase and serve it to the client. A minimal rewrite rule is sketched below.
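For illustration, a rewrite rule that sends blog URLs to a function might look like this in firebase.json (the function name blog is a placeholder of mine):

{
  "hosting": {
    "public": "public",
    "rewrites": [
      { "source": "/blog/**", "function": "blog" }
    ]
  }
}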
The thing is, this happens on every GET request for each page, which is wasteful for a largely static site like a typical blog. We want static pages that are instantly available, without waiting for a function to generate anything (even though that happens really fast). We can mitigate this by setting the Cache-Control header on the response object, as in
res.set('Cache-Control', 'public, max-age=600, s-maxage=31536000');
which tells the browser to cache the result for 10 minutes, but the CDN to cache it for a year. This almost solves the problem of wanting pre-rendered, instantly available pages, for all but the first hit, which still incurs the render cost. Note also that the CDN can evict your cached content if it determines there is not enough traffic to warrant storing it.
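Putting it together, a sketch of such a function might look like the following; the posts/ database path, the markdown field, and the use of showdown are my assumptions rather than anything prescribed by the integration post:

// functions/index.js, a minimal sketch of a markdown-rendering endpoint
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const showdown = require('showdown');

admin.initializeApp();
const converter = new showdown.Converter();

exports.blog = functions.https.onRequest(async (req, res) => {
  // Assumed layout: markdown lives at /posts/<slug> in the Realtime Database
  const slug = req.path.split('/').pop();
  const snap = await admin.database().ref(`posts/${slug}`).once('value');
  if (!snap.exists()) {
    res.status(404).send('Not found');
    return;
  }
  // Browser caches for 10 minutes, the CDN for a year
  res.set('Cache-Control', 'public, max-age=600, s-maxage=31536000');
  res.status(200).send(converter.makeHtml(snap.val().markdown));
});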
Getting closer.
But we aren't quite where we need to be. Say you publish your post and, a few days later, notice a typo? Well, I think you are pretty much hosed: your cached content will continue to be served for the rest of the year unless you do something like:
Change the URL of the post. This is probably a bad idea, as it will tank any SEO and break links to the page that are already in the wild.
There may be a way to force the CDN to update, perhaps by augmenting your 'publish blog post' process to include a JavaScript GET request with something odd in the request header, or maybe there is a way to do it with a Firebase Function any time the post gets updated. This is where I get stuck.
Firebase uses Google Cloud Platform's CDN, which includes a mechanism for cache invalidation, but I don't know that this is readily available from functions; and even if it is, it still doesn't solve getting evicted from the cache.
Personally, I will probably use the setup I described, with a CDN cache age of intermediate length. This beats my current approach of sending markdown to the client and rendering locally using (the excellent) showdown.js (sketched below), which is still really fast but does require client-side JavaScript and a few CPU cycles.
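For comparison, the client-side approach amounts to something like this, assuming the markdown has already been fetched into an md variable and the page has a post container:

// Render markdown in the browser with showdown.js
const converter = new showdown.Converter();
document.getElementById('post').innerHTML = converter.makeHtml(md);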
Hopefully someone will have a solution for this (or someone at Firebase can slip pushing to Hosting from functions into the next release :) ). I'll update my answer if I get it nailed down.
I haven't tried this yet, but I hope your Cloud Function could deploy new static files to Firebase Hosting with the Hosting REST API.
I'll update this answer with function code and a tutorial after some tests.
I haven’t fully investigated this yet but I wonder if this is what you’re looking for:
https://gist.github.com/puf/e00c34dd82b35c56e91adbc3a9b1c412
git clone https://gist.github.com/e00c34dd82b35c56e91adbc3a9b1c412.git firebase-hosting-deploy-file
cd firebase-hosting-deploy-file
npm install

# perform a dry run, make sure you're not doing something you'll regret
node deployFile.js contentsite /index.html

# do the deletion for real
node deployFile.js contentsite /index.html commit

Magento 1.9 styles (CSS and JS not found) break frequently

So, I have a couple of Magento sites (1.9.2+), and recently styles started breaking frequently. By styles breaking, I mean the home page is displayed as text only; all CSS, JS, and images come back not found.
I know how to fix this: I just clear the cache by removing var/cache, and everything is fine again.
My question is... why does this happen periodically, and now more often? I've even set up a cron job to delete the cache folder hourly, yet I'm still getting this error.
It's really annoying, and hard to explain to our clients when it's happening daily. "Oh, just another cache issue" isn't an acceptable answer for me.
All the answers I've found so far just fix it once, and never mention that it happens periodically.
Any ideas on how to prevent this from happening?
Thanks!
I would not set up a cron to clear the cache, so remove that ASAP. What third-party extensions did you install around the time you started having these issues? My guess is there must be a process or action occurring periodically that causes this change. Also check your error logs and exceptions under /magentoroot/var/log (see the snippet below).
A last possible reason (though a rather slim chance) is that you have a site-wide SSL setup that is only valid for certain URLs.
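To catch the culprit in the act, it may help to watch the logs while the styles break (assuming Magento 1's default log file names):

# Watch Magento 1.x logs as the breakage happens
tail -f var/log/system.log var/log/exception.log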

First Byte Time scores F

I recently purchased a new theme and installed WordPress on my GoDaddy hosting account for my portfolio. I am still working on it, but as of right now I sometimes get page load times of 10-20 seconds, and other times 2 seconds (usually after the page has been cached). I have done all that I believe I can (without breaking the site) to optimize performance: reducing image sizes, using a free CDN, using W3 Total Cache, etc.
It seems that my main issue is the 'TTFB' wait time I get whenever I go to a new page that hasn't been cached yet. How can I fix this? Is it the theme's fault? Do I NEED to switch hosting providers? I really don't want to go through the hassle of doing that and paying so much more, just to have less-than-optimal results. I am new to this.
My testing site:
http://test.ninamariephotography.com/
See my Web Page Results here:
http://www.webpagetest.org/result/161111_9W_WF0/
Thank you in advance to anyone for your help:)
Time To First Byte can depend on geography, so I don't think that's your problem. I reran your test and got a B.
I think the issue is that your hosting is a tiny shared instance and you're serving static files from it. Here are some ideas to speed things up.
Serve images using an image-serving service. Check out imgix, which is $3/month. It could help in unexpected ways: depending on the HTTP protocol version and browser version, serving images off an external domain changes how connections are shared.
Try lossy compression. You lose some image detail, but you also lose some file size. Check out compressor.io for an easy tool.
Concatenate and minify scripts. You have a number of little JavaScript files that load individually; consider joining them together and minifying them. I don't know the toolchain for WordPress, but perhaps there's a setting? (A hand-rolled sketch follows this list.)
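If there's no WordPress setting for it, a minimal Node sketch of the concatenate-and-minify step might look like this, using the terser package (my choice of tool; any minifier works):

// concat-minify.js: join several small scripts and minify the result
const fs = require('fs');
const { minify } = require('terser');

async function build(files, outFile) {
  // terser accepts a { filename: code } map and concatenates the inputs
  const sources = {};
  for (const f of files) {
    sources[f] = fs.readFileSync(f, 'utf8');
  }
  const result = await minify(sources);
  fs.writeFileSync(outFile, result.code);
}

// hypothetical file names; list your theme's scripts here
build(['a.js', 'b.js', 'c.js'], 'bundle.min.js');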
If none of that helps, you should experiment with a different hosting choice.

Is there a way to download the Netflix catalog?

http://api-public.netflix.com/catalog/titles/streaming
doesn't work anymore (it says the account is inactive). Is there a way to download the full catalog? I wanted to create an app for my own use.
Netflix shut down their developer program last year; it was the only way to get the credentials needed to sign a REST request URL and download the catalog. They are not going to be issuing any new developer keys either, so if you don't already have one, I'm afraid you're out of luck.
If it's saying "account inactive", you are not authenticating your call; you cannot just paste that URL into a browser. There is a lot of information about this on the Netflix developer site: http://developer.netflix.com/page
I have the same question. I'm able to begin downloading the entire catalog, but due to its size it crashes my browser.
I started reading through this link (http://developer.netflix.com/forum/read/157154), but to be honest I don't really understand it.
Anyone care to enlighten me on how to set up a request, add Accept-Encoding headers, and then parse the output? Sorry for the thread hijack...
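For what it's worth, the mechanics being asked about look roughly like this in Node; note that this sketch omits the OAuth request signing the API required, so against the now-retired endpoint it will not return data on its own:

// Request a large gzipped response and process it as a stream
const https = require('https');
const zlib = require('zlib');

const options = {
  hostname: 'api-public.netflix.com',
  path: '/catalog/titles/streaming',
  headers: { 'Accept-Encoding': 'gzip' }, // ask for a compressed response
};

https.get(options, (res) => {
  // Decompress on the fly instead of buffering the whole catalog in memory
  const stream = res.headers['content-encoding'] === 'gzip'
    ? res.pipe(zlib.createGunzip())
    : res;
  stream.on('data', (chunk) => {
    // Parse or write out each chunk here, rather than holding it all in memory
    process.stdout.write(chunk);
  });
});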
Having the same issue: 401 for what used to work fine. Seems to be a Netflix issue; not sure what we can do.

ASP.NET Browser Debug (support information) page

So, one of the many, many tasks I'm faced with daily as a developer is trying to get our support department as much information about the end user's environment as possible.
Browser version, current cookies, plugins, etc. It would be handy to point people to a specific page on our site and say "copy and paste this to support".
In the past I've always written these by hand and used third-party tools (such as BrowserHawk) to get as much info as possible.
How does everyone else deal with getting this information from end users? Is there a nice package I'm unaware of that gives a detailed dump of a user's environment without having to get the user to run an app?
Just to clarify, I'm not looking for elmah-style reporting (which is very helpful as well!); this is mainly for the client-side stuff.
Some months ago I saw that the Google Ads page has a nice report button. What this button does is capture the page as it is, using JavaScript, and send you the report with all the details, including an image of the actual page.
So I found this library, http://html2canvas.hertzen.com/, which does the same thing.
And here is an example page with this feedback:
http://hertzen.com/experiments/jsfeedback/
So I added this feedback option, and I ask users to point out the issue and send the feedback; that way, I have a very nice image of exactly what is going wrong on each page.
The next thing is that I log and check all errors, and fix them promptly.
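To give an idea of the mechanics, capturing the current page with html2canvas and attaching it to a report looks roughly like this (recent versions return a Promise, older ones used an onrendered callback; sendFeedback below is a placeholder for however you post the report):

// Snapshot the visible page into a PNG data URL for a support report
html2canvas(document.body).then((canvas) => {
  const screenshot = canvas.toDataURL('image/png');
  // sendFeedback is a hypothetical helper that posts the report to your server
  sendFeedback({
    screenshot: screenshot,
    userAgent: navigator.userAgent,
    url: location.href,
  });
});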
