What design pattern would be suitable for async image processing?

I have an app where a user can select a bunch of images and apply some asynchronous backend processing to them. A typical payload from the frontend app looks like this:
{
  files: ['tmp.jpg', 'tmp2.jpg'],
  config: {'applyX': true, 'applyY': true},
}
Here applyX and applyY are namespaces for some processing functions. I want a pattern that makes it easy to add new functions and to unit test them properly.

If you're asking about architectures for asynchronous processing, then a pipes-and-filters architecture is appropriate.
If you're asking about how to add new functions, then you're probably looking for a plug-in architecture.
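To make that concrete, here's a rough sketch of how the two could combine for a payload like yours. The names (registerFilter, loadImage, processPayload) are made up for illustration, not an existing API:

// Plug-in side: each processing function registers itself under the config
// key that enables it (applyX, applyY, ...), so adding a new one is just
// another registerFilter call.
const filters = {};

function registerFilter(name, fn) {
  filters[name] = fn;
}

registerFilter("applyX", async (image) => {
  // ...transform and return the image...
  return image;
});

registerFilter("applyY", async (image) => {
  return image;
});

// Pipes-and-filters side: build the pipeline from the request config and run
// every enabled filter over each file in sequence.
async function processPayload({ files, config }) {
  const pipeline = Object.keys(config)
    .filter((key) => config[key] && filters[key])
    .map((key) => filters[key]);

  return Promise.all(
    files.map(async (file) => {
      let image = await loadImage(file); // loadImage is assumed to exist
      for (const step of pipeline) {
        image = await step(image);
      }
      return image;
    })
  );
}

Each filter is a plain async function, so it can be unit tested in isolation with a fake image, independently of the pipeline.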

Related

How do I make a database call from an Electron front end?

(Brand new learning Electron here, so I'm sure this is a basic question and I am missing something fundamental...)
How does one interact with a local database (I am using Sqlite) from an Electron application front end? I have a very basic database manager class and have no problem using it from the index.js file in my Electron application. But from the front-end (I am using Svelte, but I can probably translate solutions from other front-end frameworks), how does one interact with the database? This seems fundamental but I'm striking out trying to find a basic example.
Since everything is local it would seem it shouldn't be necessary to set up a whole API just to marshal data back and forth, but maybe it is? But if so, how does one go about telling the Electron "backend" (if that's the right term) to do something and return the results to the front end? I'm seeing something about IPC but that's not making a lot of sense right now and seems like overkill.
Here's my simple database manager class:
const sqlite3 = require("sqlite3").verbose();

class DbManager {
  #db;

  open() {
    this.#db = new sqlite3.Database("testing.db", sqlite3.OPEN_READWRITE);
  }

  close() {
    this.#db.close();
  }

  run(sql, param) {
    this.#db.run(sql, param);
    return this;
  }
}

const manager = new DbManager();
module.exports = manager;
And I can call this and do whatever no problem from the Electron entry point index.js:
const { app, BrowserWindow, screen } = require("electron");
require("electron-reload")(__dirname);
const db = require("./src/repository/db");

const createWindow = () => {
  ...
};

let window = null;

app.whenReady().then(() => {
  db.open();
  createWindow();
});

app.on("window-all-closed", () => {
  db.close();
  app.quit();
});
But what to do from my component?
<script>
  // this won't work, and I wouldn't expect it to, but not sure what the alternative is
  const db = require("./repository/db");

  let accountName;

  function addAccount() {
    db.run("INSERT INTO accounts (name) VALUES ($name);", { $name: accountName });
  }
</script>

<main>
  <form>
    <label for="account_name">Account name</label>
    <input id="account_name" bind:value={accountName} />
    <button on:click={addAccount}>Add account</button>
  </form>
</main>
If anyone is aware of a boilerplate implementation that does something similar, that would be super helpful. Obviously this is like application 101 here; I'm just not sure how to go about this yet in Electron and would appreciate someone pointing me in the right direction.
If you're absolutely 100% sure that your app won't be accessing any remote resources, then you could just expose require and whatever else you may need through the preload script; just write const nodeRequire = require; window.require = nodeRequire;.
This is a fairly broad topic and requires some reading. I'll try to give you a primer and link some resources.
Electron runs on two (or more if you open multiple windows) processes - the main process and the renderer process. The main process handles things like opening new windows, starting and closing the entire app, tray icons, window visibility etc., while the renderer process is basically like your JS code in a browser. More on Electron processes.
By default, the renderer process does not have access to a Node runtime, but it is possible to grant it access. You can do that in two ways, with many caveats.
One way is by setting webPreferences.nodeIntegration = true when creating the BrowserWindow (note: nodeIntegration is deprecated and weird). This allows you to use all Node APIs from your frontend code, and your snippet would work. But you probably shouldn't do that, because a BrowserWindow is capable of loading external URLs, and any code included on those pages would be able to execute arbitrary code on your or your users' machines.
Another way is using the preload script. The preload script runs in the renderer process but has access to a Node runtime as well as the browser's window object (the Node globals get removed from the scope before the actual front end code runs unless nodeIntegration is true). You can simply set window.require = require and essentially work with Node code in your frontend files. But you probably shouldn't do that either, even if you're careful about what you're exposing, because it's very easy to still leave a hole and allow a potential attacker to leverage some exposed API into full access, as demonstrated here. More on Electron security.
So how to do this securely? Set webPreferences.contextIsolation to true. This definitively separates the preload script context from the renderer context, as opposed to the unreliable stripping of Node APIs that nodeIntegration: false causes, so you can be almost sure that no malicious code has full access to Node.
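For reference, a minimal sketch of what that window setup might look like (the preload filename is just an assumption):

const { app, BrowserWindow } = require("electron");
const path = require("path");

app.whenReady().then(() => {
  const window = new BrowserWindow({
    webPreferences: {
      contextIsolation: true,  // keep the preload and page contexts separate
      nodeIntegration: false,  // no Node runtime in the renderer itself
      preload: path.join(__dirname, "preload.js"), // assumed preload file
    },
  });
  window.loadFile("index.html");
});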
You can then expose specific functions to the frontend from the preload script, through contextBridge.exposeInMainWorld. For example:
// in the preload script; db is assumed to be your database manager module
const { contextBridge } = require("electron");

contextBridge.exposeInMainWorld("Accounts", {
  addAccount: async (accountData) => {
    // validate & sanitize...
    const success = await db.run("...");
    return success;
  }
});
This securely exposes an Accounts object with the specified methods in window on the frontend. So in your component you can write:
const accountAdded = await Accounts.addAccount({id: 123, username: 'foo'});
Note that you could still expose a method like runDbCommand(command) { db.run(command) } or even evalInNode(code) { eval(code) }, which is why I said almost sure.
You don't really need to use IPC for things like working with files or databases because those APIs are available in the preload. IPC is only required if you want to manipulate windows or trigger anything else on the main process from the renderer process.
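For completeness: if you did need to reach the main process, say to minimize the window from the renderer, the usual route is ipcMain.handle plus ipcRenderer.invoke exposed through the contextBridge. A rough sketch, with a made-up channel name:

// main process
const { ipcMain, BrowserWindow } = require("electron");
ipcMain.handle("window:minimize", (event) => {
  BrowserWindow.fromWebContents(event.sender).minimize();
});

// preload script
const { contextBridge, ipcRenderer } = require("electron");
contextBridge.exposeInMainWorld("AppWindow", {
  minimize: () => ipcRenderer.invoke("window:minimize"),
});

// renderer / component code:
// await AppWindow.minimize();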

Chaining Handlers with MediatR

We are using MediatR to implement a "Pipeline" for our dotnet core WebAPI backend, trying to follow the CQRS principle.
I can't decide if I should try to implement an IPipelineBehavior chain, or if it is better to construct a new Request and call MediatR.Send from within my Handler method (for the request).
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is a code smell to call MediatR.Send(TRequest2) within a Handler for a TRequest1.
If those are the options you're set on going with, I say Option 2. Sending requests from inside existing MediatR handlers can be seen as a code smell. You're hiding side effects and breaking the Single Responsibility Principle. You're also coupling your requests together, and you should try to avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand, you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you can have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing MediatR patterns.
Is there any need to break the tasks into multiple handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }

    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}

Register callback in Autofac and build container again in the callback

I have a dotnet core application.
My Startup.cs registers types/implementations in Autofac.
One of my registrations needs access to a service that was registered earlier.
var containerBuilder = new ContainerBuilder();
containerBuilder.RegisterSettingsReaders(); // this makes available an ISettingsReader<string> that I can use to read my appsettings.json
containerBuilder.RegisterMyInfrastructureService(options =>
{
    options.Username = "foo"; // this should come from appsettings
});
containerBuilder.Populate(services);
var applicationContainer = containerBuilder.Build();
The dilemma is that by the time I call .RegisterMyInfrastructureService, I need the ISettingsReader<string> that was registered just before it to be available, but the Autofac container hasn't been built yet.
I was reading about registering a build callback to execute something after the Autofac container has been built. So I could do something like this:
builder.RegisterBuildCallback(c =>
{
    var stringReader = c.Resolve<ISettingsReader<string>>();
    var usernameValue = stringReader.GetValue("Username");

    // now I have my username "foo", but I want to continue registering things! Like the following:
    containerBuilder.RegisterMyInfrastructureService(options =>
    {
        options.Username = usernameValue;
    });

    // now what? again build?
});
But the problem is that I don't want to use the resolved service to start something; I want to continue registering things that require the settings I am now able to provide.
Can I simply call builder.Build() again at the end of my callback so that the container is rebuilt without any issue? This seems a bit strange, because the builder was already built (that's why the callback was executed).
What's the best way to deal with this dilemma in Autofac?
UPDATE 1: I read that things like builder.Update() are now obsolete because containers should be immutable. Which confirms my suspicion that building a container, adding more registrations and building again is not a good practice.
In other words, I can understand that a register build callback should not be used to register additional things. But then the question remains: how to deal with these issues?
This discussion issue explains a lot, including ways to work around having to update the container. I'll summarize here, but there is a lot of information in that issue that it doesn't make sense to replicate in full.
Be familiar with all the ways you can register components and pass parameters. Don't forget about things like resolved parameters, modules that can dynamically put parameters in place, and so on.
Lambda registrations solve almost every one of these issues we've seen. If you need to register something that provides configuration and then, later, use that configuration as part of a different registration - lambdas will be huge.
Consider intermediate interfaces like creating an IUsernameProvider that is backed by ISettingsReader<string>. The IUsernameProvider could be the lambda (resolve some settings, read a particular one, etc.) and then the downstream components could take an IUsernameProvider directly.
These sorts of questions are hard to answer because there are a lot of ways to work around having to build/rebuild/re-rebuild the container if you take advantage of things like lambdas and parameters - there's no "best practice" because it always depends on your app and your needs.
Me, personally, I will usually start with the lambda approach.

Confused about Meteor: how to send data to all clients without writing to the database?

I've actually been toying with Meteor for a little bit now, but I realized that I still lack some (or a lot!) comprehension on the topic.
For example, here is a tutorial that uses node.js/express/socket.io to make a simple real-time chat: http://net.tutsplus.com/tutorials/javascript-ajax/real-time-chat-with-nodejs-socket-io-and-expressjs/
In that above example, through socket.io, the webserver receives some data and passes it onto all of the connected clients -- all without any database accesses.
With Meteor, in all the examples that I've seen, clients are updated by writing to the mongodb, which then updates all the clients. But what if I don't need to write data to the database? It seems like an expensive step to pass data to all clients.
I am sure I am missing something here. What would be the Meteor way of updating all the clients (say, like with a simple chat app), but without needing the expense of writing to a database first?
Thank you!
At the moment there isn't an official way to send data to clients without writing it to a collection. It's a little trickier in Meteor because, when there isn't a place to write to, there is no way to coordinate data across multiple Meteor servers used together; i.e., items sent from one Meteor won't reach clients subscribed on the other.
There is a temporary solution using Meteor Streams (http://meteorhacks.com/introducing-meteor-streams.html) that can let you do what you want without writing to the database in the meanwhile.
There is also a pretty extensive discussion about this on meteor-talk (https://groups.google.com/forum/#!topic/meteor-talk/Ze9U9lEozzE) if you want to understand some of the technical details. This will actually become possible for a single server when the linker branch is merged into master.
Here's one way to have a 'virtual' collection. It's not perfect, but it can do until Meteor has a more polished way of doing this.
Meteor.publish("virtual_collection", function() {
this.added("virtual_coll", "some_id_of_doc", {key: "value"});
//When done
this.ready()
});
Then subscribe to this on the client:
var Virt_Collection = new Meteor.Collection("virtual_coll");
Meteor.subscribe("virtual_collection");
Then you could run this when the subscription is complete:
Virt_Collection.findOne();
=> { _id: "some_id_of_doc", key: "value"}
This is a bit messy, but you could also hook into it to update or remove documents. At least this way you won't be using any plugins or packages.
See : https://www.eventedmind.com/posts/meteor-how-to-publish-to-a-client-only-collection for more details and a video example.
The publish function on the server sends data to clients. It has some convenient shortcuts to publish query results from the database but you do not have to use these. The publish function has this.added(), this.removed(), and this.changed() that allow you to publish anything you choose. The client then subscribes and receives the published data.
For example:
if (Meteor.isClient) {
  var someMessages = new Meteor.Collection("messages"); // messages is the name of the collection on the client side
  Meteor.subscribe("messagesSub"); // messagesSub tells the server which publish function to request data from

  Deps.autorun(function() {
    var message = someMessages.findOne({});
    if (message) console.log(message.m); // prints This is not from a database
  });
}

if (Meteor.isServer) {
  Meteor.publish("messagesSub", function() {
    var self = this;
    self.added("messages", "madeUpId1", { m: "This is not from a database" }); // messages is the collection that will be published to
    self.ready();
  });
}
There is an example in the Meteor docs explained here and another example here. I also have an example that shares data between clients without ever using a database, written just to teach myself how publish and subscribe work. Nothing is used but basic Meteor.
It's possible to use Meteor's livedata package (their DDP implementation) without the need for a database on the server. This was demoed by Avital Oliver, and below I'll point out the pertinent part.
The magic happens here:
if (Meteor.isServer) {
  TransientNotes = new Meteor.Collection("transientNotes", { connection: null });

  Meteor.publish("transientNotes", function () {
    return TransientNotes.find();
  });
}

if (Meteor.isClient) {
  TransientNotes = new Meteor.Collection("transientNotes");
  Meteor.subscribe("transientNotes");
}
Setting connection: null specifies no connection (see Meteor docs).
Akshat suggested using streams. I'm unable to reply to his comment due to lack of reputation, so I will put this here. The package he links to is no longer actively maintained (see the author's tweet). I recommend using the yuukan:streamy package instead (look it up on Atmosphere), or use the underlying SockJS lib used in Meteor itself; you can learn how to do this by going through the Meteor code and looking at how Meteor.server.Stream_server and Meteor.connection._stream are used, which is what the Streamy package does.
I tested an implementation of the Streamy chat example and found performance to be negligibly different, but this was only in rudimentary tests. Using the first approach you get the benefits of the minimongo implementation (e.g. finding) and Meteor reactivity. Reactivity is possible with Streamy too, though it does so through things like ReactiveVars.
There is a way! At least theoretically. The protocol used by Meteor to sync between client and server is called DDP. The spec is here.
And although there are some examples here and here of people implementing their own DDP clients, I'm afraid I haven't seen examples of DDP server implementations. But I think the protocol is straightforward and I would guess it wouldn't be too hard to implement.
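For what it's worth, Meteor itself already ships a DDP client, so consuming a publication from another DDP server looks roughly like this (the URL and publication name below are placeholders):

// Connect to another DDP server and mirror one of its publications locally.
var remote = DDP.connect("http://localhost:3000"); // placeholder URL
var RemoteNotes = new Meteor.Collection("transientNotes", { connection: remote });
remote.subscribe("transientNotes");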

When I want one object out of a FirebaseListObservable using RxJS, should I still use subscribe?

I am kind of confused about which methods belong where and when to use them.
Right now, I am using subscribe for basically everything and it is not working for me when I want a quick static value out of Firebase. Can you explain when I should use subscribe vs other methods other than for a strict observable?
When working with async values you have a few options: promises, RxJS, callbacks. Every option has its own drawbacks.
When you want to retrieve a single async value, it is tempting to use promises for their simplicity (.then(myVal => {})). But this does not give you access to things like timeouts, throttling, retry behaviour, etc. Rx streams, even for single values, do give you these options.
So my recommendation would be, even if you want to have a single value, to use Observables. There is no async option for 'a quick static value out of a remote database'.
If you do not want to use the .subscribe() method, there are other options that activate your subscription, like .toPromise(), which might be easier for retrieving a single value using Rx.
const getMyObjPromise = $firebase.get(myObjId)
  .timeout(5000)
  .retry(3)
  .toPromise();

getMyObjPromise.then(obj => console.log('got my object'));
My guess is that you have a subscribe method that contains a bunch of logic, as if it were a '.then', and that you save the result to some local variable.
First: try to avoid any logic inside the subscribe method; use stream operators before it and subscribe just to retrieve the data.
You are much more flexible that way, and it is much easier to unit test those single parts of your stream than to test a whole component in itself.
Second: try to avoid manual subscriptions at all; in Angular controllers they are prone to cause memory leaks if not unsubscribed.
Use the async pipe in your template instead and let Angular manage the subscription itself.
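A small sketch of both points, assuming firebaseObject$ is the observable you already have; the names are made up, and the chaining style follows the earlier example (so it assumes the corresponding operator patches are imported):

// Do the transformation with operators, not inside subscribe().
const accountName$ = firebaseObject$
  .map(obj => obj.name)       // pick out the field you need
  .filter(name => !!name);    // ignore empty values

// In the template, let the async pipe manage the subscription for you:
// <span>{{ accountName$ | async }}</span>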
