Is there a way to get a comprehensive list of the _next/static files on server side? - next.js

I'm writing a custom service worker and would like to get a list of these endpoints to cache.
It seems like just iterating the filesystem is a good starting point, but some of the endpoints don't seem to map to files.
next-pwa appears to get this info client side using this:
const data = window.performance
  .getEntriesByType('resource')
  .map(e => e.name)
  .filter(n => n.startsWith(`${window.location.origin}/_next/data/`) && n.endsWith('.json'))
The reason I'm not using next-pwa is that I could not get it to work, and I'm seeing far more predictable and transparent behavior using the ServiceWorker API directly.
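For what it's worth, here is a rough sketch of the filesystem approach mentioned above. It assumes the default distDir of .next and that every file under .next/static is served at the matching /_next/static/... path; the function name and structure are just illustrative.
const fs = require('fs');
const path = require('path');

// Walk .next/static at server start and map each file on disk to the
// /_next/static/... URL it is served under, to feed a service worker precache list.
function listNextStaticUrls(distDir = '.next') {
  const staticDir = path.join(distDir, 'static');
  const urls = [];

  const walk = (dir) => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) {
        walk(full);
      } else {
        // e.g. .next/static/chunks/main-abc123.js -> /_next/static/chunks/main-abc123.js
        const rel = path.relative(staticDir, full).split(path.sep).join('/');
        urls.push(`/_next/static/${rel}`);
      }
    }
  };

  walk(staticDir);
  return urls;
}
Note this only covers assets that exist on disk; the /_next/data/.../*.json endpoints that the next-pwa snippet above collects are generated per page and route, which is presumably why some endpoints don't map to files.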

Related

Going offline with app and database syncing

I have a NativeScript (Angular) app that makes API calls to a server to get data. I want to implement bi-directional synchronization once a device comes back online, but using the current API, not a BaaS.
I could do a sort of caching: once in a while the app invalidates the info in the database and fetches it again. I don't like this approach because there are big lists that may change, and they are fetched in batches, i.e. by page. One of them is a list of files downloaded to and stored on the device, so I have to keep the ones that are still in the list and delete the ones that are not. It sounds like a nightmare.
How would you solve such a problem?
I use the nativescript-couchbase plugin to store the data. We have the following services:
Connectivity
Data
API Service
Based on whether connectivity is online or offline, we either fetch data from the remote API or from the Couchbase DB. Please note that the API service always returns data from Couchbase only.
So in online mode:
API call -> Write to DB -> Return latest data from Couchbase
In offline mode:
Read DB -> Return latest data from Couchbase
Along with this, we maintain all API calls in a queue, so whenever connectivity returns, the API calls are processed in sequence (a rough sketch of that queue follows below). Another challenge you may face when coming back online from offline mode is token expiry; this can be solved by showing a small popup to the user after you come back online.
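As a rough illustration of that queue idea (apiCall, pendingQueue and enqueueOrSend are placeholders, not part of the plugin; persisting the queue across app restarts is left out):
const connectivityModule = require("tns-core-modules/connectivity");

const pendingQueue = [];

// Either perform the call now (online) or remember it for later (offline).
function enqueueOrSend(apiCall) {
  if (connectivityModule.getConnectionType() === connectivityModule.connectionType.none) {
    pendingQueue.push(apiCall); // offline: queue the call
  } else {
    return apiCall();           // online: send immediately
  }
}

// When connectivity returns, replay the queued calls in sequence.
connectivityModule.startMonitoring(async (newConnectionType) => {
  if (newConnectionType !== connectivityModule.connectionType.none) {
    while (pendingQueue.length > 0) {
      const call = pendingQueue.shift();
      await call(); // handle errors / token expiry here as needed
    }
  }
});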
I do this by serializing my data as a JSON string and saving it to the device's file system.
When the app loads/reloads I read it back from the file, i.e.:
// Modules for app settings and file-system access.
const appSettings = require("tns-core-modules/application-settings");
const fileSystemModule = require("tns-core-modules/file-system");

// "viewName", "json_string" and "retFun" come from the surrounding code.
var siteid = appSettings.getNumber("siteid");
var fileName = viewName + ".json";

// Build <documents>/site/<siteid>/<viewName>.json
const documents = fileSystemModule.knownFolders.documents();
const site_folder = documents.getFolder("site");
const siteid_folder = site_folder.getFolder(siteid.toString());
const filePath = fileSystemModule.path.join(siteid_folder.path, fileName);
const file = fileSystemModule.File.fromPath(filePath);

// Write the JSON string, then read it back and hand it to the callback.
file.writeText(json_string)
    .then((result) => {
        file.readText().then((res) => {
            retFun(res);
        });
    }).catch((err) => {
        console.log(err.stack);
    });

Meteor signaling without db write

I've been looking for a good way to do this, but haven't found anything that doesn't seem hacky. I want to signal the client without going through the database and a subscription. For example, in a game I want to send a message to the client to display "Player 1 almost scores!". I don't care about this information in the long run, so I don't want to push it to the DB. I guess I could just set up another socket.io connection, but I'd rather not manage a second connection if there is a good way to do it within Meteor. Thanks! (BTW, I have looked at Meteor Streams, but it appears to have gone inactive.)
You know that Meteor provides real-time communication from the server to clients through its Publish and Subscribe mechanism, which is typically used to send your MongoDB data and later modifications.
You would like a similar push system, but without having to record data in your MongoDB.
It is totally possible to re-use the Meteor Pub/Sub system without the database part: while with Meteor.publish you typically return a Collection Cursor, hence data from your DB, you can also use its low-level API to send arbitrary real-time information:
Alternatively, a publish function can directly control its published record set by calling the functions added (to add a new document to the published record set), changed (to change or clear some fields on a document already in the published record set), and removed (to remove documents from the published record set). […]
Simply do not return anything, use the above-mentioned methods, and do not forget to call this.ready() at the end of your publish function.
See also the Guide about Custom publications
// SERVER
const customCollectionName = 'collection-name';
let sender; // <== we will keep a reference to the publisher

Meteor.publish('custom-publication', function () {
  sender = this;
  this.ready();
  this.onStop(() => {
    // Called when a Client stops its Subscription
  });
});

// Later on…
// ==> Send a "new document" as a new signal message
sender.added(customCollectionName, 'someId', {
  // "new document"
  field: 'values2'
});

// CLIENT
const signalsCollectionName = 'collection-name'; // Must match what is used in Server
const Signals = new Mongo.Collection(signalsCollectionName);

Meteor.subscribe('custom-publication'); // As usual, must match what is used in Server

// Then use the Collection low-level API
// to listen to changes and act accordingly
// https://docs.meteor.com/api/collections.html#Mongo-Cursor-observe
const allSignalsCursor = Signals.find();

allSignalsCursor.observe({
  added: (newDocument) => {
    // Do your stuff with the received document.
  }
});
Then how and when you use sender.added() is totally up to you.
Note: keep in mind that it will send data individually to each Client (each Client has their own Server session).
If you want to broadcast messages to several Clients simultaneously, the easiest way is to use your MongoDB as the glue between your Server sessions. If you do not care about actual persistence, then simply re-use the same document over and over and listen for changes instead of additions in your Client Collection Cursor observer.
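A minimal sketch of that broadcast variant, assuming a small throwaway collection named 'broadcasts' (the collection, publication and field names are made up for illustration):
// Shared between client and server: a tiny collection used only as glue.
const Broadcasts = new Mongo.Collection('broadcasts');

// SERVER
Meteor.publish('broadcasts', function () {
  return Broadcasts.find();
});

// SERVER, whenever you want to signal every connected Client:
Broadcasts.upsert('latest', {
  $set: { message: 'Player 1 almost scores!', at: new Date() }
});

// CLIENT
Meteor.subscribe('broadcasts');
Broadcasts.find('latest').observe({
  added: (doc) => { /* first signal received after subscribing */ },
  changed: (doc) => { /* every subsequent signal */ }
});
Only one document is ever written, so nothing accumulates in MongoDB even though the database is still involved.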
It's completely fine to use the database for such a task.
Maybe create a collection of "Streams" where you store the intended receiver and the message; the client subscribes to its stream and watches for any changes on it.
You can then delete the stream document from the database after the client is done with it.
This is a lot easier than reinventing the wheel and writing everything from scratch.
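A hedged sketch of that approach (the 'streams' collection, 'myStream' publication, 'streams.markDone' method and someUserId are placeholder names):
// Shared: a collection of transient per-user messages.
const Streams = new Mongo.Collection('streams');

// SERVER: each client only sees the messages addressed to them.
Meteor.publish('myStream', function () {
  return Streams.find({ receiverId: this.userId });
});

// SERVER: clean-up method the client calls once a message has been handled.
Meteor.methods({
  'streams.markDone': function (streamId) {
    Streams.remove({ _id: streamId, receiverId: this.userId });
  }
});

// SERVER: somewhere in your game logic.
Streams.insert({ receiverId: someUserId, message: 'Player 1 almost scores!' });

// CLIENT
Meteor.subscribe('myStream');
Streams.find().observe({
  added: (doc) => {
    // show doc.message, then tell the server we are done with it
    Meteor.call('streams.markDone', doc._id);
  }
});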

How to generate DownloadUrl from Google-Cloud storage (I came from firebase)

Just trying to figure out how to do something in google-cloud that seemed trivial in firebase.
It seems as though if you're making a Node.js app (I'm actually talking to it through Unity, but it's a desktop application) you can't use firebase-storage for some odd reason; you have to use google-cloud, and even the firebase-admin tools use Cloud Storage for storage from there.
Nevertheless, I got it working and I am uploading the files to Firebase Storage. However, the problem is that in Firebase you could specify a specific file and then do storage().ref().child(filelocation).GetDownloadURL(); this would generate a unique URL, valid for some set time, that can be used publicly without having to give read access to all anonymous users.
I did some research and it seems I need to use something called gsutil in order to generate my own special URLs, but it's so complicated (I'm a newbie to this whole server stuff) that I don't even know where to start getting this working in my Node server.
Any pointers? I'm really stuck here.
------- If anyone's interested, this is what I'm trying to do at a high level -------
I'm sending 3D model data to the Node app from Unity
The Node app publishes this model on Sketchfab
Then it puts the model data onto my own storage, along with some additional data made specially for my app
After it gets saved to storage, it gets saved to my Firebase DB, in my global model database
to be accessed later by users, to get the download URL of this storage file and send it back to the Unity user(s)
I would just download the files into my Node app, but I want to reduce server load; it's supposed to be just a middleman between Unity and Firebase
(I would've done it straight from Unity, but apparently Firebase isn't available for desktop Windows apps).
Figured it out:
var firebase_admin = require("firebase-admin");

var storage = firebase_admin.storage();
var bucket = storage.bucket();

// Generate a time-limited signed URL for the file referenced in the DB.
// (This runs inside a Promise, hence resolve/reject; childSnapshot, expDate
// and finalData come from the surrounding code.)
bucket.file(childSnapshot.val().modelLink).getSignedUrl({
    action: 'read',
    expires: expDate
}, function (err, url) {
    if (err) {
        reject(err);
    } else {
        finalData.ModelDownloadLink = url;
        console.log("Download model DL url: " + url);
        resolve();
    }
});

Can Firebase RemoteConfig be accessed from cloud functions

I'm using Firebase as a simple game server and have some settings that are relevant for both the client and the backend. I would like to keep them in Remote Config for consistency, but I'm not sure if I can access it from my Cloud Functions in a simple way (I don't consider going through the REST interface a "simple" way).
As far as I can tell there is no mention of it in the docs, so I guess it's not possible, but does anyone know for sure?
firebaser here
There is a public REST API that allows you to read and set Firebase Remote Config conditions. This API requires that you have full administrative access to the Firebase project, so it must only be used in a trusted environment (such as your development machine, a server you control, or Cloud Functions).
There is no public API to get Firebase Remote Config settings from a client environment at the moment. Sorry I don't have better news.
This is probably only available in newer versions of firebase (version 8 or 9 and above, if I'm not mistaken).
// We first need to import the remoteConfig function.
import { remoteConfig } from 'firebase-admin';

// Then, in your cloud function, we use it to fetch our Remote Config template.
const remoteConfigTemplate = await remoteConfig().getTemplate().catch(e => {
  // Your error handling if fetching fails...
});

// Next it is just a matter of extracting the values, which is kinda convoluted.
// Let's say you want to extract the `game_version` field from Remote Config:
const gameVersion = remoteConfigTemplate.parameters.game_version.defaultValue.value;
So `parameters` is always followed by the name of the field you defined in the Firebase console's Remote Config, in this example game_version.
It's a mouthful (or typeful) but that's how you get it.
Also note that if the value is stored as a JSON string, you will need to parse it before usage, commonly with JSON.parse(gameVersion).
A similar process is outlined in the Firebase docs.

How do you secure the client side MongoDB API?

I just don't want all of my users to be able to insert/destroy data.
While there is no documented way to do this yet, here's some code that should do what you want:
Foo = new Meteor.Collection("foo");
...
if (Meteor.is_server) {
  Meteor.startup(function () {
    Meteor.default_server.method_handlers['/foo/insert'] = function () {};
    Meteor.default_server.method_handlers['/foo/update'] = function () {};
    Meteor.default_server.method_handlers['/foo/remove'] = function () {};
  });
}
This will disable the default insert/update/remove methods. Clients can try to insert into the database, but the server will do nothing, and the client will notice and remove the locally created item when the server responds.
insert/update/remove will still work on the server. You'll need to make methods with Meteor.methods that run on the server to accomplish any database writes.
All of this will change when the authentication branch lands. Once that happens, you'll be able to provide validators to inspect and authorize database writes on the server. Here's a little more detail: http://news.ycombinator.com/item?id=3825063
[UPDATE] There is now an official and documented Auth Package which provides different solutions to secure a collection.
On a CRUD level:
[Server] collection.allow(options) and collection.deny(options) restrict the default write methods on this collection (a short example follows below). Once either of these is called on a collection, all write methods on that collection are restricted regardless of the insecure package.
And there is also the insecure package, which you can remove to take away full write access from the client.
Source: Getting Started with Auth (thanks to @dan-dascalescu)
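A minimal sketch of such allow rules (the Posts collection and owner field are made up for illustration):
// Shared
const Posts = new Mongo.Collection('posts');

// SERVER: only logged-in users may insert, and only the owner may update or remove.
Posts.allow({
  insert: function (userId, doc) {
    return !!userId && doc.owner === userId;
  },
  update: function (userId, doc, fieldNames, modifier) {
    return doc.owner === userId;
  },
  remove: function (userId, doc) {
    return doc.owner === userId;
  }
});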
[OLD ANSWER]
Apparently they are working on an Auth Package(?) that should prevent any user from taking full control of the DB as they can now. There is also someone suggesting an existing workaround: define your own mutations (methods) and make them fail if they attempt to perform an unauthorized action. I didn't dig into it much further, but I think this will often be necessary, since I doubt the Auth Package will let you implement the usual auth logic at the row level, but probably only on the CRUD methods. We'll have to see what the devs have to say.
[EDIT]
Found something that seems to confirm my thoughts:
Currently the client is given full write access to the collection. They can execute arbitrary Mongo update commands. Once we build authentication, you will be able to limit the client's direct access to insert, update, and remove. We are also considering validators and other ORM-like functionality.
Sources of this answer:
Accessing to DB at client side as in server side with meteor
https://stackoverflow.com/questions/10100813/data-validation-and-security-in-meteor/10101516#10101516
A more succinct way:
_.each(['collection1', 'collection2'], function (collection) {
  _.each(['insert', 'update', 'remove'], function (method) {
    Meteor.default_server.method_handlers['/' + collection + '/' + method] = function () {};
  });
});
Or, to make it more idiomatic, extend Meteor:
_.extend(Meteor.Collection.prototype, {
  remove_client_access: function (methods) {
    var self = this;
    if (!methods) methods = ['insert', 'update', 'remove'];
    if (typeof methods === 'string') methods = [methods]; // note: typeof yields lowercase 'string'
    _.each(methods, function (method) {
      Meteor.default_server.method_handlers[self._prefix + method] = function () {};
    });
  }
});
Calls are simpler:
List.remove_client_access()                     // restrict all
List.remove_client_access('remove')             // restrict one
List.remove_client_access(['remove', 'update']) // restrict more than one
I am new to Meteor, but these are the two points I have come across so far:
You can limit what a client can access in the database by adding a selector to the find call inside the server-side publish function. Then, when the client calls Collection.find({}), the results returned correspond to what, on the server side, would be e.g. Collection.find({user: this.userId}) (see also Publish certain information for Meteor.users and more information for Meteor.user, and http://docs.meteor.com/#meteor_publish).
One thing that is built in (I have meteor 0.5.9) is that the client can only update items by id, not using selectors. An error is logged to console on the client if there is an attempt that doesn't comply. 403: "Not permitted. Untrusted code may only update documents by ID." (see Understanding "Not permitted. Untrusted code may only update documents by ID." Meteor error).
In view of number 2, you need to use Meteor.methods on the server side to make remote procedure calls available to the client with Meteor.call.
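For reference, a hedged sketch of both ideas together (the Tasks collection, 'myTasks' publication and 'tasks.insert' method are made-up names):
// Shared
const Tasks = new Mongo.Collection('tasks');

// SERVER: publish only the current user's documents (point 1 above).
Meteor.publish('myTasks', function () {
  return Tasks.find({ user: this.userId });
});

// SERVER: route all writes through a method that checks permissions.
Meteor.methods({
  'tasks.insert': function (text) {
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    return Tasks.insert({ text: text, user: this.userId, createdAt: new Date() });
  }
});

// CLIENT
Meteor.subscribe('myTasks');          // only ever sees this user's documents
Meteor.call('tasks.insert', 'hello'); // writes go through the server-side method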
