I am confused about why we have to use the angularFireCollection built-in methods such as add(), update(), remove(), etc. instead of using the Firebase JS API directly.
Since we just want to bind the Firebase node to AngularJS, we can use
angularFireCollection(nodeRef), and then if we want to do something with the bound node
we can go back to the Firebase JS API and work with it there. Example:
nodeRef.push(), .update(), .remove().
To me, this is better than using angularFireCollection().methods(), because:
angularFireCollection is not complete compared to the Firebase JS API
I don't have to learn AGAIN how angularFire works, since I already learned the Firebase API
the different method names confuse me: angularFireCollection().add() vs new Firebase().push()
The answer is that you do not have to use angularFireCollection at all. You can utilize Firebase methods directly. You can also use angularFire or angularFireCollection to sync data down to your client and then push data using Firebase methods directly.
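For instance, a rough sketch of that mixed approach might look like this (the URL, node name, and controller are placeholders, not anything from your code):
function MyCtrl($scope, angularFireCollection) {
  var ref = new Firebase('https://example.firebaseio.com/users');
  // sync reads down through the angularFire wrapper...
  $scope.users = angularFireCollection(ref);
  // ...but push writes through the Firebase JS API directly
  $scope.addUser = function(name) {
    ref.push({ name: name });
  };
}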
angularFire and angularFireCollection are not a replacement for Firebase, but rather are wrappers intended to abstract away some of the complexities of syncing data to Angular bindings. To get an understanding of what those complexities are, you can simply look at the angularFireCollection code, and see the number of cases that have to be handled.
Consider the following example:
<h1>Hello {{user.name}}</h1>
<input ng-model="user.name" ng-change="change()" />
Without angularFire:
function($scope) {
  $scope.user = {};
  $scope.change = function() {
    new Firebase('URL/users/kato/name').set($scope.user.name);
  };
  new Firebase('URL/users/kato').on('value', function(snap) {
    $scope.user = snap.val() || {};
  });
}
With angularFire:
function($scope, angularFire) {
  angularFire(new Firebase('URL/users/kato'), $scope, 'user');
}
Or, something more sophisticated:
<ul>
  <li ng-repeat="user in users">
    {{user.name}}
    <input ng-model="user.name" ng-change="change(user)" />
  </li>
</ul>
Without angularFireCollection, this would be 100+ lines of code (i.e. about the length of the angularFireCollection object), as it needs to handle the child_added, child_removed, child_changed, and child_moved events, adding them into an ordered array for Angular's use, looking up keys on each change and modifying or splicing, and maybe even moving records in the array. It also needs to take changes made on the client and sync them back to Firebase by finding the correct key for each record.
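To give a flavor of that boilerplate, here is a partial sketch of just the child_added/child_removed handling (this is not the actual angularFireCollection source, and it assumes $timeout is injected to trigger Angular's digest):
var ref = new Firebase('URL/users/kato');
$scope.users = [];
var indexes = {};                      // Firebase key -> position in $scope.users

ref.on('child_added', function(snap, prevKey) {
  $timeout(function() {
    var user = snap.val();
    user.$id = snap.name();            // snapshot key (name() in the old JS SDK)
    var pos = prevKey === null ? 0 : indexes[prevKey] + 1;
    $scope.users.splice(pos, 0, user);
    // ...rebuild the indexes map here...
  });
});

ref.on('child_removed', function(snap) {
  $timeout(function() {
    $scope.users.splice(indexes[snap.name()], 1);
    // ...rebuild the indexes map here...
  });
});
// ...plus child_changed, child_moved, and syncing local edits back to Firebase...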
With angularFireCollection, it can be done in a few lines, like so:
function($scope, angularFireCollection) {
  $scope.users = angularFireCollection(new Firebase('URL/users/kato'));
  $scope.change = function(user) {
    $scope.users.update(user);
  };
}
Of course, these abstractions don't cover every use case and there are certainly plenty of places that using Firebase directly is both practical and appropriate.
Hope that helps.
(Brand new learning Electron here, so I'm sure this is a basic question and I am missing something fundamental...)
How does one interact with a local database (I am using Sqlite) from an Electron application front end? I have a very basic database manager class and have no problem using it from the index.js file in my Electron application. But from the front-end (I am using Svelte, but I can probably translate solutions from other front-end frameworks), how does one interact with the database? This seems fundamental but I'm striking out trying to find a basic example.
Since everything is local it would seem it shouldn't be necessary to set up a whole API just to marshal data back and forth, but maybe it is? But if so, how does one go about telling the Electron "backend" (if that's the right term) to do something and return the results to the front end? I'm seeing something about IPC but that's not making a lot of sense right now and seems like overkill.
Here's my simple database manager class:
const sqlite3 = require("sqlite3").verbose();

class DbManager {
  #db;

  open() {
    this.#db = new sqlite3.Database("testing.db", sqlite3.OPEN_READWRITE);
  }

  close() {
    this.#db.close();
  }

  run(sql, param) {
    this.#db.run(sql, param);
    return this;
  }
}

const manager = new DbManager();
module.exports = manager;
And I can call this and do whatever no problem from the Electron entry point index.js:
const { app, BrowserWindow, screen } = require("electron");
require("electron-reload")(__dirname);
const db = require("./src/repository/db");

const createWindow = () => {
  ...
};

let window = null;

app.whenReady().then(() => {
  db.open();
  createWindow();
});

app.on("window-all-closed", () => {
  db.close();
  app.quit();
});
But what to do from my component?
<script>
  // this won't work, and I wouldn't expect it to, but not sure what the alternative is
  const db = require("./repository/db");

  let accountName;

  function addAccount() {
    db.run("INSERT INTO accounts (name) VALUES ($name);", { $name: accountName });
  }
</script>

<main>
  <form>
    <label for="account_name">Account name</label>
    <input id="account_name" bind:value={accountName} />
    <button on:click={addAccount}>Add account</button>
  </form>
</main>
If anyone is aware of a boilerplate implementation that does something similar, that would be super helpful. Obviously this is like application 101 here; I'm just not sure how to go about this yet in Electron and would appreciate someone pointing me in the right direction.
If you're absolutely 100% sure that your app won't be accessing any remote resources, then you could just expose require and whatever else you may need through the preload script: just write const nodeRequire = require; window.require = nodeRequire;.
This is a fairly broad topic and requires some reading. I'll try to give you a primer and link some resources.
Electron runs on two (or more if you open multiple windows) processes - the main process and the renderer process. The main process handles things like opening new windows, starting and closing the entire app, tray icons, window visibility etc., while the renderer process is basically like your JS code in a browser. More on Electron processes.
By default, the renderer process does not have access to a Node runtime, but it is possible to give it access. You can do that in two ways, each with caveats.
One way is by setting webPreferences.nodeIntegration = true when creating the BrowserWindow (note: nodeIntegration is deprecated and weird). This allows you to use all Node APIs from your frontend code, and your snippet would work. But you probably shouldn't do that, because a BrowserWindow is capable of loading external URLs, and any code included on those pages would be able to execute arbitrary code on your or your users' machines.
Another way is using the preload script. The preload script runs in the renderer process but has access to a Node runtime as well as the browser's window object (the Node globals get removed from the scope before the actual front end code runs unless nodeIntegration is true). You can simply set window.require = require and essentially work with Node code in your frontend files. But you probably shouldn't do that either, even if you're careful about what you're exposing, because it's very easy to still leave a hole and allow a potential attacker to leverage some exposed API into full access, as demonstrated here. More on Electron security.
So how to do this securely? Set webPreferences.contextIsolation to true. This definitively separates the preload script context from the renderer context, as opposed to the unreliable stripping of Node APIs that nodeIntegration: false causes, so you can be almost sure that no malicious code has full access to Node.
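A minimal sketch of that window setup (the preload filename is just an example):
const { BrowserWindow } = require("electron");
const path = require("path");

const win = new BrowserWindow({
  webPreferences: {
    contextIsolation: true,                        // keep preload and page contexts separate
    nodeIntegration: false,                        // no Node APIs in the renderer itself
    preload: path.join(__dirname, "preload.js"),   // where the bridge code lives
  },
});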
You can then expose specific functions to the frontend from the preload, through contextBridge.exposeInMainWorld. For example:
contextBridge.exposeInMainWorld('Accounts', {
  addAccount: async (accountData) => {
    // validate & sanitize...
    const success = await db.run('...');
    return success;
  }
});
This securely exposes an Accounts object with the specified methods in window on the frontend. So in your component you can write:
const accountAdded = await Accounts.addAccount({id: 123, username: 'foo'});
Note that you could still expose a method like runDbCommand(command) { db.run(command) } or even evalInNode(code) { eval(code) }, which is why I said almost sure.
You don't really need to use IPC for things like working with files or databases because those APIs are available in the preload. IPC is only required if you want to manipulate windows or trigger anything else on the main process from the renderer process.
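That said, if you'd rather keep the database on the main process, a minimal sketch of the invoke/handle pattern could look like this (the channel name is made up; db is the DbManager from the question):
// main process (index.js)
const { ipcMain } = require("electron");
ipcMain.handle("accounts:add", (event, account) => {
  // validate & sanitize here...
  db.run("INSERT INTO accounts (name) VALUES ($name);", { $name: account.name });
  return true;
});

// preload script
const { contextBridge, ipcRenderer } = require("electron");
contextBridge.exposeInMainWorld("Accounts", {
  addAccount: (account) => ipcRenderer.invoke("accounts:add", account),
});

// component
// const ok = await Accounts.addAccount({ name: accountName });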
I have created a simple Pre Trigger in my CosmosDB collection.
function testTrigger() {
  var context = getContext();
  var request = context.getRequest();
  var documentToCreate = request.getBody();
  documentToCreate["test"] = "Added by trigger";
  request.setBody(documentToCreate);
}
It seems, though, that this trigger isn't fired at all.
What's also irritating is that the getContext() call is marked with a green squiggly line in the script explorer.
The reason the trigger doesn't seem to be... well... triggered is that you aren't specifying it on the triggering operation. Triggers are not automatic. They must be specified for each database operation where you want them executed.
Personally, I think that was an unfortunate design choice and I've decided not to use triggers at all in my own code. Rather, I embed all the business rule constraints in stored procedures. I find this to be a bit cleaner and it removes the worry that I'll forget to specify a trigger or (more likely) that someone else writing code against the database won't even know that they should. I realize that someone can bypass my stored procedures and violate the business rules by going directly to the documents, but I don't know of any way in DocumentDB to protect against that. If there were a way to make triggers automatic, that would have protected against that.
When using the REST API, you specify triggers in a header with x-ms-documentdb-pre-trigger-include or x-ms-documentdb-post-trigger-include. In the Node.js SDK, you use the RequestOptions preTriggerInclude or postTriggerInclude property. For .NET, you specify it like this:
Document createdItem = await client.CreateDocumentAsync(collection.SelfLink, new Document { Id = "documentdb" },
    new RequestOptions
    {
        PreTriggerInclude = new List<string> { "CapitalizeName" },
    });
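And roughly the same thing with the Node.js SDK (a sketch, assuming a documentdb DocumentClient instance, the collection's link, and the testTrigger from the question):
client.createDocument(
  collectionLink,
  { id: "documentdb" },
  { preTriggerInclude: ["testTrigger"] },
  function (err, createdDocument) {
    if (err) throw err;
    console.log(createdDocument.test); // "Added by trigger"
  }
);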
See comments for discussion on the green squiggly.
I am new to Firebase and trying to use $asObject as in the AngularFire docs. Basically, I have a profile as shown below. I use $asObject to update the email. However, when I use $save(), it replaces the entire profile with only the email, rather than pushing it onto the end of the list, i.e. it works like set() rather than push(). Is this how it is meant to work? How do I only push?
Object
{
  profiles: {
    peterpan: {
      name: "Peter Trudy",
      dob: "7th March"
    }
  }
}
My click function:
$scope.angularObject = function(){
  var syncProfile = $firebase(ref.child("profiles").child("peterpan"));
  var profileObject = syncProfile.$asObject();
  profileObject.email = "peter@peterpan.com";
  profileObject.$save();
};
You're looking for $update:
syncProfile.$update({ "email": "peter@peterpan.com" });
Note that $update is only available on $firebase and not on the FirebaseObject that you get back from $asObject. The reason for this is that $asObject is really meant as an object that is bound directly to an angular scope. You're not expected to perform updates to it in your own code.
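For completeness, a rough sketch of the pattern $asObject is designed for, binding the object to the scope with $bindTo so edits made through the view sync back automatically:
var syncProfile = $firebase(ref.child("profiles").child("peterpan"));
var profileObject = syncProfile.$asObject();
// three-way binding: ng-model edits to $scope.profile are pushed back to Firebase
profileObject.$bindTo($scope, "profile");
// in the template:  <input ng-model="profile.email" />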
By the way: if the rest of your code is using AngularFire in a similar way, you might consider dropping AngularFire and using Firebase's JavaScript SDK directly. It is much simpler to use, since it doesn't need to mediate between Firebase and Angular's way of working.
I have an AngularJS service that returns a firebase ref.
.factory('sessionManager', ['$firebase', function($firebase){
  var ref = new Firebase('https://telechat.firebaseio.com/sessions');
  return $firebase(ref);
}])
In the controller, I have added the dependency and called $bind.
$scope.items = sessionManager;
$scope.items.$bind($scope, 'sessions').then(function(unbind){
  unbind();
});
But when I print it to the console, the returned data has a collection of functions like $add, $set, etc. in addition to the array of data.
Why is this occurring? Am I doing it the wrong way?
If I'm reading the question correctly, you may be under the impression that $bind() is a transformation of the $firebase object into a raw data array or object. In essence, the only difference $bind makes to a $firebase instance is that local changes are automagically pushed back to Firebase from Angular, whereas when you use $firebase without $bind, you need to call $save to push local changes.
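To make that concrete, a rough sketch of both styles with the ref from your service (the key name is just an example):
// Without $bind: edit the synced data locally, then push it explicitly
$scope.items = sessionManager;
$scope.items.someKey = "new value";
$scope.items.$save("someKey");

// With $bind: local changes to $scope.sessions are pushed back automatically
sessionManager.$bind($scope, "sessions");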
Keeping in mind that $firebase is a wrapper on the Firebase API and not a simple data array, you can still treat it like raw data in most cases.
To iterate data in a Firebase object, you can use ngRepeat:
<li ng-repeat="(key, item) in $sessions">{{item|json}}</li>
Or if you want to apply filters that depend on arrays:
<li ng-repeat="(key, item) in $sessions | orderByPriority | filter:searchText">
Or in a controller using $getIndex:
angular.forEach($scope.sessions.$getIndex(), function(key) {
  console.log(key, $scope.sessions[key]);
});
The $add/$update/etc. methods mentioned are part of the $firebase object's API. The documentation and the tutorial should be great primers for understanding this process.
This is also a part of the API that is continuing to evolve to better match the Angular way of doing things and feedback from users.
Is there a way to subscribe to the Meteor Session object so that a reactive template view automatically renders when data is .set on the Session object? Specifically the key/name and value data?
I have a similar question related to rendering Meteor Session object data when iterated over. This question is intentionally different: I want to get an answer about an alternate, and possibly better, way to do the same thing.
I do not want to have to call Session.get('name'), because I don't know the names in the Session object.
So, I would like to be able to have something in Handlebars that lets me do the following pseudo code:
{{#each Session}}
{{this.key}} {{this.value}}
{{/each}}
Unsure about subscribing to the session, but for the second part of your question, you can use this:
Template.hello.session = function () {
  var map = [];
  for (var prop in Session.keys) {
    map.push({key: prop, value: Session.get(prop)});
  }
  return map;
};
then in your template:
{{#each session}}
{{key}} {{value}}
{{/each}}
Not sure if there's a more elegant way but it works.
I don't think so.
Have a look at https://github.com/meteor/meteor/blob/master/packages/session/session.js. It's actually reasonably simple. The invalidation happens in lines 45-47. You'll see calling Session.set invalidates anyone listening to that key specifically (via Session.get) or to the new or old value (via Session.equals). Nothing in there about invalidating the session as whole.
On the other hand, considering how simple it is, it wouldn't be super hard to write your own data structure that does what you want. I'm not sure what your use case is, but it might make a lot of sense to separate it from the session.
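For example, a rough sketch of a store that invalidates as a whole, using the Deps API of the time (the names here are made up for illustration):
var ReactiveStore = function () {
  this._data = {};
  this._dep = new Deps.Dependency();
};

ReactiveStore.prototype.set = function (key, value) {
  this._data[key] = value;
  this._dep.changed();     // invalidate every computation that read the store
};

ReactiveStore.prototype.all = function () {
  this._dep.depend();      // register the calling computation
  return this._data;
};

// In a helper:
// Template.hello.session = function () {
//   var data = store.all();
//   return Object.keys(data).map(function (k) { return { key: k, value: data[k] }; });
// };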
Yes you can subscribe to Session values.
Take a look at the docs for autosubscribe:
http://docs.meteor.com/#meteor_autosubscribe
// Subscribe to the chat messages in the current room. Automatically
// update the subscription whenever the current room changes.
Meteor.autosubscribe(function () {
  Meteor.subscribe("chat", {room: Session.get("current-room")});
});
This code block shows one way to use this. Basically any time the value of "current-room" is changed Meteor will update the views.
EDIT
I misunderstood the initial question and decided I need to redeem myself somewhat. I just did some tests and as far as I can tell you can currently only subscribe to collection.find() and session.get() calls. So you can't subscribe to the whole session object or even to the keys object.
However, you can set a Session value to an object. This may not be the most elegant solution, but it worked for me for keeping track of the keys object, with some hackery to keep from getting a circular object error.
var mySession = {
  set: function(key, val){
    var keys = Session.keys,
        map = {};
    Session.set(key, val);
    for (var prop in keys) {
      if(!(prop === "keys")){
        map[prop] = keys[prop];
      }
    }
    Session.set('keys', map);
  }
};
This gives you something that looks a lot like the original functionality and can help you keep track of keys and update templates as you add or change Session values.
Here's my template helper (borrowed from previous answer):
Template.hello.keys = function() {
  var map = [];
  var keys = Session.get('keys');
  for (var prop in keys) {
    map.push({key: prop, value: keys[prop]});
  }
  return map;
};
And here's the actual template:
<template name="hello">
  <div class="hello">
    {{#each keys}}
      {{key}} {{value}} <br />
    {{/each}}
  </div>
</template>
This way you can just call mySession.set("thing", "another thing") and it will update on screen. I am new to Javascript so someone please let me know if I'm missing something obvious here, or let me know if there is a more elegant way of doing this.
While accessing the session object in this manner is doable, you're not drinking the Kool-Aid.
In other words, Meteor excels because of publish/subscribe. It's not clear (to me) how much data you'd have to put into the Session object before a browser crashes; I would rather not find out.
You would merely put all your session-dependent code in a Deps.autorun() function to plug into subscriptions, as mentioned in an earlier answer. Any time a Session value changes, it'll modify the subscription; you can attach a ready callback, or use subscription.ready() checks to initiate specific actions, but ideally you'd structure your templates in a way that lets you use rendered/destroyed functions, taking care to use {{#isolate}}/{{#constant}} when possible.
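For example, the subscription from the earlier answer written with Deps.autorun:
Deps.autorun(function () {
  // re-runs whenever Session.get("current-room") changes
  Meteor.subscribe("chat", { room: Session.get("current-room") });
});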
What you're getting at is kind of a difficult way to do something simple, and it may not hold up to changes in Meteor in the future; we can always be assured that publish/subscribe will function as expected. Given that Session is such a simple object, what's to say someone doesn't design something better shortly and it gets merged into Meteor, leaving you with a weird workflow based on custom objects that may not break the Session, but may impact other parts of your interface.