How do I make a database call from an Electron front end? - sqlite

(Brand new learning Electron here, so I'm sure this is a basic question and I am missing something fundamental...)
How does one interact with a local database (I am using Sqlite) from an Electron application front end? I have a very basic database manager class and have no problem using it from the index.js file in my Electron application. But from the front-end (I am using Svelte, but I can probably translate solutions from other front-end frameworks), how does one interact with the database? This seems fundamental but I'm striking out trying to find a basic example.
Since everything is local it would seem it shouldn't be necessary to set up a whole API just to marshal data back and forth, but maybe it is? But if so, how does one go about telling the Electron "backend" (if that's the right term) to do something and return the results to the front end? I'm seeing something about IPC but that's not making a lot of sense right now and seems like overkill.
Here's my simple database manager class:
const sqlite3 = require("sqlite3").verbose();

class DbManager {
  #db;

  open() {
    this.#db = new sqlite3.Database("testing.db", sqlite3.OPEN_READWRITE);
  }

  close() {
    this.#db.close();
  }

  run(sql, param) {
    this.#db.run(sql, param);
    return this;
  }
}

const manager = new DbManager();
module.exports = manager;
And I can call this and do whatever no problem from the Electron entry point index.js:
const { app, BrowserWindow, screen } = require("electron");
require("electron-reload")(__dirname);
const db = require("./src/repository/db");

const createWindow = () => {
  ...
};

let window = null;

app.whenReady().then(() => {
  db.open();
  createWindow();
});

app.on("window-all-closed", () => {
  db.close();
  app.quit();
});
But what to do from my component?
<script>
  // this won't work, and I wouldn't expect it to, but not sure what the alternative is
  const db = require("./repository/db");

  let accountName;

  function addAccount() {
    db.run("INSERT INTO accounts (name) VALUES ($name);", { $name: accountName });
  }
</script>

<main>
  <form>
    <label for="account_name">Account name</label>
    <input id="account_name" bind:value={accountName} />
    <button on:click={addAccount}>Add account</button>
  </form>
</main>
If anyone is aware of a boilerplate implementation that does something similar, that would be super helpful. Obviously this is like application 101 here; I'm just not sure how to go about this yet in Electron and would appreciate someone pointing me in the right direction.

If you're absolutely 100% sure that your app won't be accessing any remote resources, then you could just expose require and whatever else you may need through the preload script; just write const nodeRequire = require; window.require = nodeRequire;.
This is a fairly broad topic and requires some reading. I'll try to give you a primer and link some resources.
Electron runs on two (or more if you open multiple windows) processes - the main process and the renderer process. The main process handles things like opening new windows, starting and closing the entire app, tray icons, window visibility etc., while the renderer process is basically like your JS code in a browser. More on Electron processes.
By default, the renderer process does not have access to a Node runtime, but it is possible to let it. You can do that in two ways, with many caveats.
One way is by setting webPreferences.nodeIntegration = true when creating the BrowserWindow (note: nodeIntegration is deprecated and weird). This allows you to use all Node APIs from your frontend code, and your snippet would work. But you probably shouldn't do that, because a BrowserWindow is capable of loading external URLs, and any code included on those pages would be able to execute arbitrary code on your or your users' machines.
Another way is using the preload script. The preload script runs in the renderer process but has access to a Node runtime as well as the browser's window object (the Node globals get removed from the scope before the actual front end code runs unless nodeIntegration is true). You can simply set window.require = require and essentially work with Node code in your frontend files. But you probably shouldn't do that either, even if you're careful about what you're exposing, because it's very easy to still leave a hole and allow a potential attacker to leverage some exposed API into full access, as demonstrated here. More on Electron security.
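For illustration only, a minimal sketch of that kind of preload script (the db path is an assumption based on your project layout, and this is exactly the approach being warned against here):

// preload.js -- discouraged: hands Node's require to the page
// note: this only works while contextIsolation is false (the old default)
window.require = require;
// or expose just the one module you think you need:
window.db = require("./src/repository/db");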
So how to do this securely? Set webPreferences.contextIsolation to true. This definitively separates the preload script context from the renderer context, as opposed to the unreliable stripping of Node APIs that nodeIntegration: false causes, so you can be almost sure that no malicious code has full access to Node.
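For reference, a minimal sketch of that BrowserWindow setup might look like this (the file names are assumptions; adjust them to your project):

// index.js (main process)
const { BrowserWindow } = require("electron");
const path = require("path");

const createWindow = () => {
  const win = new BrowserWindow({
    webPreferences: {
      contextIsolation: true,
      nodeIntegration: false,
      preload: path.join(__dirname, "preload.js"),
    },
  });
  win.loadFile("index.html");
};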
You can then expose specific functions to the frontend from the preload through contextBridge.exposeInMainWorld. For example:
const { contextBridge } = require('electron');
const db = require('./src/repository/db'); // the DbManager instance from the question

contextBridge.exposeInMainWorld('Accounts', {
  addAccount: async (accountData) => {
    // validate & sanitize...
    const success = await db.run('...');
    return success;
  }
});
This securely exposes an Accounts object with the specified methods in window on the frontend. So in your component you can write:
const accountAdded = await Accounts.addAccount({id: 123, username: 'foo'});
Note that you could still expose a method like runDbCommand(command) { db.run(command) } or even evalInNode(code) { eval(code) }, which is why I said almost sure.
You don't really need to use IPC for things like working with files or databases because those APIs are available in the preload. IPC is only required if you want to manipulate windows or trigger anything else on the main process from the renderer process.
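For completeness, if you ever do decide to keep the database in the main process instead of the preload, a rough ipcMain.handle / ipcRenderer.invoke sketch would look something like this (the "accounts:add" channel name is made up):

// main process (index.js)
const { ipcMain } = require("electron");
const db = require("./src/repository/db");

ipcMain.handle("accounts:add", (event, accountData) => {
  // validate & sanitize...
  db.run("INSERT INTO accounts (name) VALUES ($name);", { $name: accountData.name });
  return true; // keep the return value serializable
});

// preload.js
const { contextBridge, ipcRenderer } = require("electron");

contextBridge.exposeInMainWorld("Accounts", {
  addAccount: (accountData) => ipcRenderer.invoke("accounts:add", accountData),
});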

Related

Any way to programmatically open a collapsed console group?

I use console.groupCollapsed() to hide functions I don't generally need to review, but may occasionally want to dig into. One downside of this is that if I use console.warn or console.error inside that collapsed group, I may not notice it or it may be very hard to find. So when I encounter an error, I would like to force the collapsed group open to make it easy to spot the warning/error.
Is there any way to use JS to force the current console group (or just all blindly) to open?
Some way to jump directly to warnings/errors in Chrome debugger? Filtering just to warnings/errors does not work, as they remain hidden inside collapsed groups.
Or perhaps some way to force Chrome debugger to open all groups at once? <alt/option>-clicking an object shows all levels inside it, but there does not appear to be a similar command to open all groups in the console. This would be a simple and probably ideal solution.
There is no way to do this currently, nor am I aware of any plans to introduce such functionality, mainly because I don't think enough developers use the feature heavily enough to create demand for it.
You can achieve what you're trying to do, but you need to write your own logging library. The first thing you'll need to do is override the console API. Here is an example of what I do:
const consoleInterceptorKeysStack: string[][] = [];

export function getCurrentlyInterceptedConsoleKeys () { return lastElement(consoleInterceptorKeysStack); }

export function interceptConsole (keys: string[] = ['trace', 'debug', 'log', 'info', 'warn', 'error']) {
  consoleInterceptorKeysStack.push(keys);
  const backup: any = {};
  for (let i = 0; i < keys.length; ++i) {
    const key = keys[i];
    const _log = console[key];
    backup[key] = _log;
    console[key] = (...args: any[]) => {
      const frame = getCurrentLogFrame();
      if (isUndefined(frame)) return _log(...args);
      frame.children.push({ type: 'console', key, args });
      frame.hasLogs = true;
      frame.expand = true;
      _log(...args);
    };
  }
  return function restoreConsole () {
    consoleInterceptorKeysStack.pop();
    for (const key in backup) {
      console[key] = backup[key];
    }
  };
}
You'll notice a reference to a function getCurrentLogFrame(). Your logging framework will require the use of a global array that represents an execution stack. When you make a call, push details of the call onto the stack. When you leave the call, pop the stack. As you can see, when logging to the console, I'm not immediately writing the logs to the console. Instead, I'm storing them in the stack I'm maintaining. Elsewhere in the framework, when I enter and leave calls, I'm augmenting the existing stack frames with references to stack frames for child calls that were made before I pop the child frame from the stack.
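To make the idea concrete, here is a rough JavaScript sketch of that frame stack (enterFrame, leaveFrame and printFrame are names I made up; the intercepted console methods would push their { type: 'console', key, args } entries into the current frame as in the snippet above):

const frameStack = [];

function getCurrentLogFrame() {
  return frameStack[frameStack.length - 1];
}

function enterFrame(name) {
  const frame = { name, children: [], hasLogs: false, expand: false };
  const parent = getCurrentLogFrame();
  if (parent) parent.children.push(frame);
  frameStack.push(frame);
  return frame;
}

function leaveFrame() {
  const frame = frameStack.pop();
  const parent = getCurrentLogFrame();
  // bubble the "should start open" flag up so ancestor groups open too
  if (parent && frame.expand) parent.expand = true;
  // once the root frame is done, print the whole tree in one go
  if (!parent) printFrame(frame);
}

function printFrame(frame) {
  // with hindsight, decide whether this group starts open or collapsed
  (frame.expand ? console.group : console.groupCollapsed).call(console, frame.name);
  for (const child of frame.children) {
    if (child.type === 'console') console[child.key](...child.args);
    else printFrame(child);
  }
  console.groupEnd();
}

A real implementation would also record return values and might only set expand for warn/error, but the shape is the same.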
By the time the entire execution stack finishes, I've captured a complete log of everything that was called, who called it, what the return value was (if any), and so on. And at that time, I can then pass the root stack frame to a function that prints the entire stack out to the console, now with the full benefit of hindsight on every call that was made, allowing me to decide what the logs should actually look like. If deeper in the stack there was (for example) a console.debug statement or an error thrown, I can choose to use console.group instead of console.groupCollapsed. If there was a return value, I could print that as a tail argument of the console.group statement. The possibilities are fairly extensive.
Note that you will have to architect your application in a way that allows for logging to be deeply integrated into your code, otherwise your code will get very messy. I use a visitor pattern for this.
I have a suite of standard interface types that do almost everything of significance in my system's architecture. Each interface method includes a visitor object, which has properties and methods for every interface type in use in my system. Rather than calling interface methods directly, I use the visitor to do it. I have a standard visitor implementation that simply forwards calls to interface methods directly (i.e. the visitor doesn't do anything much on its own), but I then have a subclassed visitor type that references my logging framework internally. For every call, it tells the logging framework that we're entering a new execution frame. It then calls the default visitor internally to make the actual call, and when the call returns, the visitor tells the logging framework to exit the current call (i.e. to pop the stack and finalize any references to child calls, etc.).
By having different visitor types, it means you can use your slow, expensive, logging visitor in development, and your fast, forwarding-only, default visitor in production.
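As a loose illustration of that split (the interface and method names here are invented, and it reuses the enterFrame/leaveFrame helpers from the sketch above):

// fast, forwarding-only visitor for production
class DefaultVisitor {
  visit(target, method, args) {
    return target[method](...args);
  }
}

// development visitor: wraps the default one and reports every call
class LoggingVisitor {
  constructor(inner) {
    this.inner = inner;
  }
  visit(target, method, args) {
    const frame = enterFrame(`${target.constructor.name}.${method}`);
    try {
      const result = this.inner.visit(target, method, args);
      frame.returnValue = result; // could be printed as a tail argument of the group
      return result;
    } finally {
      leaveFrame(); // pop the stack even if the call threw
    }
  }
}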

How to trigger a method in another component in Vue3 (Composition API) if a global event bus is discouraged?

I have a component which lets me upload a file, and another component that shows the files that have been uploaded:
<FileUpload />
<FileList />
FileList has a refresh method that will get the latest files.
FileUpload has an emit "uploadFinished".
I would like to trigger a refresh of my FileList after an upload in FileUpload. What is the "Vue way" of communicating between those components?
I find the official explanation rather disappointing. Bearing in mind that my components do not necessarily have to be siblings (or have any close relation, for that matter), I am not sure how I can achieve this exclusively with Vuex (which I am using).
I can trigger the loading of the files with an action, but inside my FileList I am doing other internal things as well, like setting a loading state and resetting the GUI. Those things cannot be controlled by data in my Vuex state alone.
const refresh = (pageNum = 0) => {
  collapseAll();
  isLoading.value = true;
  store.dispatch(LOAD_FILES_REQUEST, setParams(pageNum)).finally(() => {
    isLoading.value = false;
  });
};
To me, an event bus seems to be the logical (and easiest) solution. Why is it frowned upon?

Server Side Rendering with viperHTML

I'm working on a Symfony application and just got SSR for JS working using https://github.com/spatie/server-side-rendering. So far I have only worked with "readymade" SSR solutions for React, but currently I'm trying to use hyperHTML/viperHTML and am facing a few issues that so far I haven't been able to solve by looking at the available docs/examples.
My current test snippet is this:
const viperHTML = require('viperhtml');

class Component extends viperHTML.Component {
  constructor(props) {
    super();
    this.props = props;
  }
  render() {
    return this.html`
      <h1>Hello, ${this.props.name}</h1>`;
  }
}

console.log(new Component({ name: 'Joe' }).render().toString());
The thing here is that without explicitly calling render() I get no output. Looking at some of the official examples this shouldn't be necessary, at least not with Component. I already tried using setState() in the constructor, for example, but it made no difference.
Also, without using both console.log() and toString(), I get no output either, which is unexpected. I get that toString() might be necessary here (without it a <buffer /> is being rendered), but the console.log() seems odd. This might not be related to viperHTML at all, of course. But instantiating the component is the only thing I expected to be necessary.
It's also not clear to me yet how I can write an isomorphic/universal component, i.e. one file which has the markup, event handlers etc., gets rendered on the server and then hydrated on the client. When I add an inline event handler as per the docs (https://viperhtml.js.org/hyperhtml/documentation/#essentials-6) it actually gets inlined into the rendered markup, which is not what I want.
I checked hypermorphic and the viperNews app, but that didn't really help me so far.
In case it helps, you can read viperHTML tests to see how components can be used.
The thing here is that without explicitly calling render() I get no output.
Components are meant to be used to render layout, either on the server or on the client side. This means if you pass a component instance to a hyper/viperHTML view, you don't have to worry about calling anything, it's done for you.
const {bind, Component} = require('viperhtml');

class Hello extends Component {
  constructor(props) {
    super().props = props;
  }
  render() {
    return this.html`<h1>Hello, ${this.props.name}</h1>`;
  }
}

console.log(
  // you need a hyper/viperHTML literal to render components
  bind({any: 'ref'})`${Hello.for({ name: 'Joe' })}`
    // by default you have a buffer to stream in NodeJS
    // if you want a string you need to use toString()
    .toString()
);
Since NodeJS by default streams buffers, any layout produced by viperHTML will be buffers and, as such, can be streamed while it's composed (i.e. with Promises as interpolation values).
It's also not clear to me yet how I can write an isomorphic/universal component, i.e. one file which has the markup, event handlers etc., gets rendered on the server and then hydrated on the client.
The original version of hyperHTML had a method called adopt() whose purpose was to hydrate live nodes through the same template literals.
While viperHTML has a viperhtml.adoptable = true switch to render adoptable content, hyperHTML's adopt feature is still not quite there yet. For the time being you can easily share views between SSR and the front end, but you need to either take over on the client once the SSR page has landed, or render differently for that very first pass and take over on the client from there.
This is not optimal, but I'm afraid the hydration bit, done right, is time consuming, and I haven't found the time to finalize and ship it.
That might be hyperHTML v3 at this point.
I hope this answer helps you understand how viperHTML works and what the current status is.

Purely functional feedback suppression?

I have a problem that I can solve reasonably easily with classic imperative programming using state: I'm writing a co-browsing app that shares URLs between several nodes. The program has a module for communication that I call link and one for browser handling that I call browser. Now, when a URL arrives in link, I use the browser module to tell the actual web browser to start loading the URL.
The actual browser will trigger the navigation detection that the incoming URL has started to load, and hence it will immediately be presented as a candidate for sending to the other side. That must be avoided, since it would create an infinite loop of link-following to the same URL, along the lines of the following (very conceptualized) pseudo-code (it's Javascript, but please consider that a somewhat irrelevant implementation detail):
actualWebBrowser.urlListen.gotURL(function(url) {
  // Browser delivered an URL
  browser.process(url);
});

link.receivedAnURL(function(url) {
  actualWebBrowser.loadURL(url); // will eventually trigger above listener
});
What I did first was to store every incoming URL in browser and simply eat the URL immediately when it arrives, then remove it from a 'received' list in browser, along the lines of this:
browser.recents = {}; // <--- mutable state
browser.recentsExpiry = 40000;

browser.doSend = function(url) {
  var now = (new Date).getTime();
  link.sendURL(url); // <-- URL goes out on the network
  // Side-effect, mutating module state, clumsy clean up mechanism :(
  browser.recents[url] = now;
  setTimeout(function() { delete browser.recents[url]; }, browser.recentsExpiry);
  return true;
};

browser.process = function(url) {
  if (true /* sanity checks on `url` */) {
    var now = (new Date).getTime();
    var duplicate = browser.recents[url];
    if (!duplicate) return browser.doSend(url);
    if ((now - duplicate) > browser.recentsExpiry) {
      return browser.doSend(url);
    }
    return false;
  }
};
It works but I'm a bit disappointed by my solution because of my habitual use of mutable state in browser. Is there a "Better Way (tm)" using immutable data structures/functional programming or the like for a situation like this?
A more functional approach to handling long-lived state is to use it as a parameter to a recursive function, and have one execution of the function responsible for handling a single "action" of some kind, then calling itself again with the new state.
F#'s MailboxProcessor is one example of this kind of approach. However it does depend on having the processing happen on an independent thread which isn't the same as the event-driven style of your code.
As you identify, the setTimeout in your code complicates the state management. One way you could simplify this is to instead have browser.process filter out any timed-out URLs before it does anything else. That would also eliminate the need for the extra timeout check on the specific URL it is processing.
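A rough sketch of that idea, treating the recents map as a value that is passed in and returned rather than mutated module state (link.sendURL is the existing function from your code; everything else is made up for illustration):

function pruneRecents(recents, now, expiry) {
  // drop every entry that has already timed out
  const kept = {};
  for (const url in recents) {
    if (now - recents[url] <= expiry) kept[url] = recents[url];
  }
  return kept;
}

function process(url, recents, now, expiry) {
  const pruned = pruneRecents(recents, now, expiry);
  if (url in pruned) {
    return { sent: false, recents: pruned }; // recently sent: suppress the echo
  }
  link.sendURL(url); // the one remaining side effect
  return { sent: true, recents: { ...pruned, [url]: now } };
}

The caller keeps the returned recents and threads it into the next call, so no setTimeout or module-level mutation is needed.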
Even if you can't eliminate mutable state from your code entirely, you should think carefully about the scope and lifetime of that state.
For example might you want multiple independent browsers? If so you should think about how the recents set can be encapsulated to just belong to a single browser, so that you don't get collisions. Even if you don't need multiple ones for your actual application, this might help testability.
There are various ways you might keep the state private to a specific browser, depending in part on what features the language has available. For example in a language with objects a natural way would be to make it a private member of a browser object.
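For example, in modern JavaScript a private field keeps the recents map scoped to a single browser instance (a sketch only, not your actual module layout):

class Browser {
  #recents = new Map();
  #expiry = 40000;
  #link;

  constructor(link) {
    this.#link = link;
  }

  process(url) {
    const now = Date.now();
    const seenAt = this.#recents.get(url);
    // suppress URLs we sent recently; they are just echoes from the navigation listener
    if (seenAt !== undefined && now - seenAt <= this.#expiry) return false;
    this.#recents.set(url, now);
    this.#link.sendURL(url);
    return true;
  }
}

Two Browser instances then have completely independent recents sets, which also makes the suppression logic straightforward to test.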

Confused about Meteor: how to send data to all clients without writing to the database?

I've actually been toying with Meteor for a little bit now, but I realized that I still lack some (or a lot!) comprehension on the topic.
For example, here is a tutorial that uses node.js/express/socket.io to make a simple real-time chat: http://net.tutsplus.com/tutorials/javascript-ajax/real-time-chat-with-nodejs-socket-io-and-expressjs/
In that above example, through socket.io, the webserver receives some data and passes it onto all of the connected clients -- all without any database accesses.
With Meteor, in all the examples that I've seen, clients are updated by writing to the mongodb, which then updates all the clients. But what if I don't need to write data to the database? It seems like an expensive step to pass data to all clients.
I am sure I am missing something here. What would be the Meteor way of updating all the clients (say, like with a simple chat app), but without needing the expense of writing to a database first?
Thank you!
At the moment there isn't an official way to send data to clients without writing it to a collection. It's a little trickier in Meteor because the problem with sending data to multiple clients when there isn't a place to write to shows up when multiple Meteor servers are used together, i.e. items sent from one Meteor instance won't reach clients subscribed on the other.
There is a temporary solution using Meteor Streams (http://meteorhacks.com/introducing-meteor-streams.html) that lets you do what you want without writing to the database in the meantime.
There is also a pretty extensive discussion about this on meteor-talk (https://groups.google.com/forum/#!topic/meteor-talk/Ze9U9lEozzE) if you want to understand some of the technical details. This will actually become possible for a single server when the linker branch is merged into master.
Here's a way to have a 'virtual collection'; it's not perfect, but it can do until Meteor has a more polished way of doing this.
Meteor.publish("virtual_collection", function() {
this.added("virtual_coll", "some_id_of_doc", {key: "value"});
//When done
this.ready()
});
Then subscribe to this on the client:
var Virt_Collection = new Meteor.Collection("virtual_coll");
Meteor.subscribe("virtual_collection");
Then you could run this when the subscription is complete:
Virt_Collection.findOne();
=> { _id: "some_id_of_doc", key: "value"}
This is a bit messy, but you could also hook into it to update or remove documents, as sketched below. At least this way you won't be using any plugins or packages.
See : https://www.eventedmind.com/posts/meteor-how-to-publish-to-a-client-only-collection for more details and a video example.
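For example, a rough sketch of what hooking into updates and removals might look like (the timers here are just stand-ins for whatever actually triggers the change):

Meteor.publish("virtual_collection", function () {
  var self = this;
  self.added("virtual_coll", "some_id_of_doc", { key: "value" });
  self.ready();

  var updateTimer = Meteor.setTimeout(function () {
    self.changed("virtual_coll", "some_id_of_doc", { key: "new value" });
  }, 5000);
  var removeTimer = Meteor.setTimeout(function () {
    self.removed("virtual_coll", "some_id_of_doc");
  }, 10000);

  // clean up when the client unsubscribes or disconnects
  self.onStop(function () {
    Meteor.clearTimeout(updateTimer);
    Meteor.clearTimeout(removeTimer);
  });
});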
The publish function on the server sends data to clients. It has some convenient shortcuts to publish query results from the database but you do not have to use these. The publish function has this.added(), this.removed(), and this.changed() that allow you to publish anything you choose. The client then subscribes and receives the published data.
For example:
if (Meteor.isClient) {
  var someMessages = new Meteor.Collection("messages"); // messages is name of collection on client side
  Meteor.subscribe("messagesSub"); // messagesSub tells the server which publish function to request data from
  Deps.autorun(function () {
    var message = someMessages.findOne({});
    if (message) console.log(message.m); // prints This is not from a database
  });
}

if (Meteor.isServer) {
  Meteor.publish("messagesSub", function () {
    var self = this;
    self.added("messages", "madeUpId1", { m: "This is not from a database" }); // messages is the collection that will be published to
    self.ready();
  });
}
There is an example in meteor docs explained here and another example here. I also have an example that shares data between clients without ever using a database just to teach myself how the publish and subscribe works. Nothing used but basic meteor.
It's possible to use Meteor's livedata package (their DDP implementation) without the need of a database on the server. This was demoed by Avital Oliver and below I'll point out the pertinent part.
The magic happens here:
if (Meteor.isServer) {
  TransientNotes = new Meteor.Collection("transientNotes", { connection: null });
  Meteor.publish("transientNotes", function () {
    return TransientNotes.find();
  });
}

if (Meteor.isClient) {
  TransientNotes = new Meteor.Collection("transientNotes");
  Meteor.subscribe("transientNotes");
}
Setting connection: null specifies no connection (see Meteor docs).
Akshat suggested using streams. I'm unable to reply to his comment due to lack of reputation, so I will put this here. The package he links to is no longer actively maintained (see the author's tweet). I recommend using the yuukan:streamy package (look it up on Atmosphere) instead, or using the underlying SockJS lib used in Meteor itself; you can learn how to do this by going through the Meteor code and looking at how Meteor.server.Stream_server and Meteor.connection._stream are used, which is what the Streamy package does.
I tested an implementation of the Streamy chat example and found performance to be negligibly different, but this was only in rudimentary tests. Using the first approach you get the benefits of the minimongo implementation (e.g. finding) and Meteor reactivity. Reactivity is possible with Streamy too, though it does so through things like using ReactiveVars.
There is a way! At least theoretically. The protocol used by Meteor to sync between client and server is called DDP. The spec is here.
And although there are some examples here and here of people implementing their own DDP clients, I'm afraid I haven't seen examples of DDP server implementations. But I think the protocol is straightforward, and I would guess it wouldn't be so hard to implement.
