Deploy to firebase hosting from a firebase function

Is it possible to deploy static assets from a Firebase Function to Firebase Hosting?
Use case: a blog with static HTML files. Blog content and meta info would be stored in the database (content as markdown). On publish or update, a Firebase Function is triggered which parses the markdown, generates a static HTML file for the blog post, and deploys it to Firebase Hosting. After deployment, the function would store the live URL in the database.
Would this workflow be possible? In the current documentation, I cannot find anything about deploying from functions.
As a workaround, I could imagine a setup with Travis CI: the function triggers a rebuild on Travis, and Travis builds the static assets and deploys them to Firebase Hosting, but this seems like a huge overhead.
I could also pull the markdown content from the db and build on the client, but I would really like to try the static file approach for initial loading time reasons.

I have been wanting to do this for a long time, and it seems that with the newly unveiled Firebase Functions Hosting Integration... well, we still can't do exactly what we want. But we can get close!
If you read the post above, you can see how we can now edit firebase.json to redirect URLs to a Firebase Function, which can build the page from markdown stored in Firebase and serve it to the client.
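For reference, the rewrite in firebase.json might look something like this (the function name here is just an example):

{
  "hosting": {
    "rewrites": [
      { "source": "/blog/**", "function": "renderBlogPost" }
    ]
  }
}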
The thing is, this happens on every GET request for each page. Which is dumb (for a largely static page like a typical blog). We want static pages that are instantly available, without needing to wait for functions to generate anything (even though that happens really fast). We can mitigate that by setting the Cache-Control header to an arbitrarily large number on the response object, as in
res.set('Cache-Control', 'public, max-age=600, s-maxage=31536000');
Which will tell the browser to cache the result for 10 minutes, but the CDN to cache it for a year. This almost solves the problem of wanting pre-rendered, instantly available pages for all but the first hit, which will incur the render cost. Plus, the CDN can evict your cached content if it determines that there is not enough traffic to warrant storing it.
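To make that concrete, here is a rough sketch of such a function; renderBlogPost matches the example rewrite above, renderMarkdown is a placeholder for whatever markdown library you use, and I'm assuming the Realtime Database:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// HTTPS function matched by the /blog/** rewrite
exports.renderBlogPost = functions.https.onRequest(async (req, res) => {
  const slug = req.path.split('/').pop();
  const snapshot = await admin.database().ref(`posts/${slug}`).once('value');
  const html = renderMarkdown(snapshot.val().content); // placeholder renderer
  // browser caches for 10 minutes, the CDN for a year
  res.set('Cache-Control', 'public, max-age=600, s-maxage=31536000');
  res.status(200).send(html);
});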
Getting closer.
But we aren't quite where we need to be. Say you publish your post and a few days later, notice a typo? Well, I think you are pretty much hosed. Your cached content will continue to be served for the rest of the year unless you do something like:
Change the URL of the Post - This is probably a bad idea, as it will tank any SEO and break links to the page that are already in the wild.
There may be a way to force the CDN to update, perhaps by augmenting your 'publish blog post' process to include a JavaScript GET request with something odd in the request header, or maybe there is a way to do it with a Firebase Function any time the post gets updated. This is where I get stuck.
Firebase uses Google Cloud Platform's CDN, which includes a mechanism for cache invalidation, but I don't know that this is readily available from functions -- and even if it is, it still doesn't solve getting evicted from the cache.
Personally, I will probably use the setup I described with a CDN cache age limit of intermediate length. This beats my current approach of sending markdown to the client and rendering locally using (the excellent) showdown.js, which is still really fast, but does require client side javascript and a few cpu cycles.
Hopefully someone will have a solve for this (or someone at firebase can slip pushing to hosting from functions into the next release :) ). I'll update my answer if I get it nailed down.

I haven't tried this yet, but I hope your Cloud Function could deploy new static files to Firebase Hosting with the Hosting REST API.
I'll update this answer with function code and tutorial after some tests.
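In the meantime, the flow described in the v1beta1 Hosting REST API docs looks roughly like this; treat the endpoint names as assumptions to verify against the docs:

1. POST sites/SITE_ID/versions - create a new version
2. POST sites/SITE_ID/versions/VERSION_ID:populateFiles - send the SHA-256 hashes of your gzipped files
3. Upload the gzipped file contents to the upload URL returned in step 2
4. PATCH sites/SITE_ID/versions/VERSION_ID?update_mask=status - set the status to FINALIZED
5. POST sites/SITE_ID/releases?versionName=... - release the finalized version to the live site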

I haven’t fully investigated this yet but I wonder if this is what you’re looking for:
https://gist.github.com/puf/e00c34dd82b35c56e91adbc3a9b1c412
git clone https://gist.github.com/e00c34dd82b35c56e91adbc3a9b1c412.git firebase-hosting-deploy-file
cd firebase-hosting-deploy-file
npm install

# perform a dry run, make sure you're not doing something you'll regret
node deployFile.js contentsite /index.html

# do the deletion for real
node deployFile.js contentsite /index.html commit

Related

How can I make the Next.js "dev" server safe for production?

My team's primary goal is to be able to "snapshot" a CMS-driven site to static HTML. This is straightforward using getStaticProps and next export.
But we also need to host an intranet version always fetching latest content from the CMS. Using getStaticProps this is not really possible because its output is cached, and if you use the older getInitialProps you can't "freeze" the server version of its output during export.
next dev makes this easy; it has a service that offers up fresh versions of the JSON files that will be made static during export.
On a long-running site, are there important configuration changes that would make next dev safe/safer to use?
In 9.5 Next added Incremental Static Regeneration, a fancy way of saying getStaticProps will regularly invalidate its output some time after the next time it's requested.
This is still not ideal because someone who makes a CMS edit will want to see the change reflected on the very next request, but instead they'll see the old content, wait a few seconds, and reload the page.
On static export, nothing changes: getStaticProps turns into a static JSON file.
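For reference, a minimal ISR sketch; fetchPostsFromCms is a placeholder for your CMS call:

// pages/index.js
export async function getStaticProps() {
  const posts = await fetchPostsFromCms(); // hypothetical CMS fetch
  return {
    props: { posts },
    // regenerate in the background, at most once every 10 seconds
    revalidate: 10,
  };
}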

I am using Scripts.Render() in ASP.NET MVC - what could the reason be that, 6 hours after an update, users still get the old (bundled) script file?

I've updated some scripts within our TypeScript files, but users still seem to get the old versions of the bundled script. How do I know? Because I'm seeing (remote) JavaScript errors in our logs that refer to a class that 100% exists in the latest version.
It has been more than 6 hours and I can verify that the ?v={hash} has changed.
I have the feeling the suspects are, individually or in combination, any of these:
Static content (IIS)
Scripts.Render("")
TypeScript compilation
Azure
I have the feeling it's some kind of cache, or that it has to do with static content serving. Or with Scripts.Render().
What can the reason be? It drives me crazy, because I know the site is broken for some users, but I can't get a fix out.
I've included the headers below.
This is the code for the bundle. Note that core.js is being generated by TypeScript.
public static void RegisterBundles(BundleCollection bundles)
{
    var bundle = new Bundle("~/scripts/main-bundle");
    bundle.IncludeDirectory("~/scripts/fork", "*.js", searchSubdirectories: true);
    bundle.IncludeDirectory("~/scripts/app", "*.js", searchSubdirectories: true);
    // core.js is generated by the TypeScript compiler
    bundle.Include("~/scripts/bundles/core.js");
    bundles.Add(bundle);
}
Update
If they are getting the old HTML because it is cached (so the hashbomb doesn't change), shouldn't they still have a consistent pair of JS and HTML?
https://poules.com/us doesn't have a cache-control: max-age specified, so you probably have some users on browser-cached HTML, which would use the old scripts.
There is an s-maxage set, but that is only an override of max-age for shared (public) caches; max-age isn't set and this page's cache is set to private, so I don't think it is doing anything.
Also, you can check Azure to see what is deployed if you have access to the Azure portal.
The best way I've found around any web interface that requires forcing clients to update is to use a manifest file, and have it specify the scripts. Your build then needs to update the manifest with the correct hash; I normally just use a grunt task for this and placeholders within the manifest.
You then manage the manifest in code, with listeners for events like updateready so you know when to refresh the browser.
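A minimal sketch of that listener, using the legacy AppCache API:

window.applicationCache.addEventListener('updateready', function () {
  if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
    // a new cache has been downloaded; swap it in and reload to pick it up
    window.applicationCache.swapCache();
    window.location.reload();
  }
});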
On another note, Application Cache is for older browsers; for newer browsers you have to use a service worker to provide a facility to update your app.
With both applied you will have a release schedule you can manage, and you can mitigate issues like the one you have been getting at the moment.
Another method, if you have an API, is to allow the API to serve up JavaScript based on a version number for the user's current web front end. This way you can inform users that a more recent version is available and offer a reload button.
However, Application Cache through a manifest, or service workers, is more accessible to other teams if you have a split front-end and back-end setup.
+++++++++++++++++++++
Another reason could be that your web font is blocked by ad blockers like Ghostery and AdGuard. This in turn creates an unhandled error:
auth-dialog-window?openerOrigin=https%3a%2f%2fpoules.com&color=FF533C&pool=&openerType=iframe:82 Uncaught ReferenceError: WebFont is not defined
at auth-dialog-window?openerOrigin=https%3a%2f%2fpoules.com&color=FF533C&pool=&openerType=iframe:82
This could potentially stop other things from working and loading in the right way. You need to make sure that you are picking up and catching all errors and events for all of your users, no matter what plugins they use.
To be fair, on looking, it could very well be the ad blockers that are your main issue here.
Anyway, hope this all helps.
Even if the hashbomb stays the same, the asset/bundle files do expire at their own pace, but only after a year; at the moment of writing I see the following dates:
Date: Fri, 25 May 2018 17:11:21 GMT
Expires: Sat, 25 May 2019 17:11:22 GMT
If you update your main-bundle at Fri 18:00:00, it might not be fetched again until after Sat, 25 May 2019, 17:11:22.
I also notice you are mixing public and private cache locations; by caching the bundles publicly, any proxy server (if one is involved) maintains its own cache from which the browser might get served, which might also result in a delay.
Is it an option to not cache your webpages so that the hashbomb is always up to date?
I also usually serve my assets without caching but with the Last-Modified header;
doing so ensures that the browser at least makes a request for each asset, providing the If-Modified-Since header, upon which IIS responds with the 304 status code if there's no change (so that no bytes go over the wire).
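An illustrative exchange (headers abbreviated):

GET /scripts/main-bundle HTTP/1.1
If-Modified-Since: Fri, 25 May 2018 17:11:21 GMT

HTTP/1.1 304 Not Modified

The 304 response has no body, so the browser reuses its cached copy and only the headers go over the wire.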

Firebase Storage Indexing

Images loaded from my Firebase Storage keep getting blocked by https://firebasestorage.googleapis.com/robots.txt when they are being indexed. There is nothing private in these images, so is there a way to unblock them? I've tried uploading my own robots.txt to the bucket root, but that doesn't seem to work either.
I assume you're trying to use something like the Twitterbot? Would be interested to hear more about the use case.
The good news is that we just removed our robots.txt file and will deploy this change in the next backend release, so bots will be allowed to crawl your bucket soon. Happy to update this thread once things are in production :)
that's great news! In my case it's Twitterbot that's been unable to follow Firebase Storage image links, therefore unable to display my CMS' preview images in shared Twitter Cards. I think you answered a question about that on the Twitter forums (I'm commenting here because that thread's been closed). Thanks also for saying you'll report back here when the change has been rolled out; any chance you can give a rough estimate, though? Like, is it likely to be months rather than weeks (or days!)?
Cheers.

Is it possible to keep data cached between user visits

I have a collection of 15k+ objects (database) that I want to send to the client (an application). This can take up to 30 seconds to sync.
I would like a way to keep a cache between user visits so I only need to sync the difference since the last visit.
It would also be nice to be able to share that local cache between browser tabs.
In theory I don't see why it would be hard to do, but I am uncertain how to do it.
*As pointed out by @zeroasterisk, it is a database cache I am looking for, not simply static files.
Have you looked at the smart package "appcache" ?
code: https://github.com/awwx/meteor-appcache
info: http://docs.meteor.com/#appcache
more: https://github.com/meteor/meteor/wiki/AppCache
The appcache package stores the static parts of a Meteor application (the client side Javascript, HTML, CSS, and images) in the browser's application cache. To enable caching simply add the appcache package to your project.
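For reference, enabling it is a single command:

meteor add appcache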
It doesn't currently support data in collections, which is what I think you were asking about, but it might be something you could extend. If I misunderstood the question and you just need to store static objects (JS files, etc.), this will work great.
more on this here: Can Meteor's Appcache also store database data?
note: it is disabled by default in FF because of user prompts...

Meteor.js - Template Permissions

This has been asked in similar forms here and here but it seems pretty important, and the framework is under rapid development, so I'm going to raise it again:
Assuming your login page needs to face the public internet, how do you prevent Meteor from sending all of the authenticated user templates to a non-authenticated client?
Example use case: You have some really unique analytics / performance indicators that you want to keep secret. You've built templates to visualize each one. Simply by visiting the login page, Meteor will send any rando the templates which, even unpopulated, disclose a ton of proprietary information.
I've seen two suggestions:
Break admin into a separate app. This doesn't address the issue assuming admin login faces the public internet, unless I'm missing something.
Put the templates in the public folder or equivalent and load them dynamically. This doesn't help either, since the file names will be visible from other templates which will be sent to the client.
The only thing I can think of is to store the template strings in the server folder and have the client call a Meteor.method after login to retrieve and render them. And if you want them to behave like normal client templates, you'd have to muck around with the internal API (e.g., Meteor._def_template).
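A minimal sketch of that approach; the method name and asset file are hypothetical:

// server/templates.js
Meteor.methods({
  getAdminTemplates: function () {
    if (!this.userId) {
      throw new Meteor.Error(403, 'Not authorized');
    }
    // admin-templates.html lives in /private, so it is never shipped to unauthenticated clients
    return Assets.getText('admin-templates.html');
  }
});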
Is there any more elegant way to do this?
I asked a similar question here:
Segmented Meteor App(s) - loading only half the client or two apps sharing a database
Seems to be a common concern, and I certainly think it's something that should be addressed sometime.
Until then, I'm planning on making a smaller "public" app and sharing the DB with an admin app (possibly in Meteor, possibly in something else, depending on size/data for my admin)
These 2 packages try to address this issue:
https://atmospherejs.com/numtel/publicsources
https://atmospherejs.com/numtel/privatesources
They use an iron-router plug-in to load your specific files on every route.
The main drawback I see here is that you must change your app structure, as the protected files need to be stored in the /public or /private folder.
Also you are supposed to use iron-router.
