Loading a Meteor client app with fake fire-and-forget data

I'm trying to figure out a good way to create tutorials for using Meteor apps. Visually, I've figured out a good approach, and packed this into a smart package:
https://github.com/mizzao/meteor-tutorials.
However, there is a second piece that turns out to be rather hard to figure out.
In many cases, a tutorial app needs to be loaded with fake data, to demonstrate the interface to the user without requiring it to be populated with real data that may be hard to generate. (For example, see https://www.planapple.com/trip/demo/349/ which is a demo for PlanApple). In Meteor, since the content of an app is basically defined by the contents of some collections, I see two ways to do this:
Maintain two sets of collections, one for the tutorial and one for the actual app. Use the first set for the tutorial and the second when the user is actually using the app.
Use one set of collections, and fill it with fake data during the tutorial via one subscription, and with real data during actual use via a different subscription.
The first approach is clearly bad; it means the app code cannot be agnostic to whether it's being used as a tutorial or not, and there is a lot of messy, unnecessary if/else reactive logic in presenting the app. Moreover, this will be very hard to maintain if the app has more than a few collections.
The second approach seems to be the more Meteor-esque way to do things. What we basically want is for a server publication to fill all the client collections with some fake data, and then allow the data to be manipulated in whatever way on the client side without the changes propagating to the server; the client gets a copy of the server's tutorial data and makes only local changes to it, which are then discarded. This boils down to two things:
Sending fake data down from the server to client via a custom subscription into the same named collections as the regular app. This is definitely possible, as I've written in https://stackoverflow.com/a/18880927/586086.
Ignoring any inserts, updates, and deletes from the client (on the server) after the initial load of data, while allowing them to happen locally. This is also possible if one creates null (unnamed) collections, as in http://docs.meteor.com/#meteor_collection. (A sketch covering both pieces follows this list.)
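For concreteness, here is a minimal sketch of the first piece, assuming a hypothetical Tasks collection; the publication name, collection name, and document fields are all made up for illustration:

```js
// A minimal sketch (names are illustrative). On the server, a publish
// function can fabricate documents for any named collection via this.added();
// no server-side collection or database backing is required.
if (Meteor.isServer) {
  Meteor.publish("tutorialTasks", function () {
    var self = this;
    for (var i = 0; i < 3; i++) {
      // "tasks" must match the name the client instantiates below;
      // Random.id() comes from Meteor's core `random` package
      self.added("tasks", Random.id(), { title: "Demo task " + i, done: false });
    }
    self.ready(); // tell the client the initial fake data set is complete
  });
}

if (Meteor.isClient) {
  Tasks = new Meteor.Collection("tasks");
  Meteor.subscribe("tutorialTasks"); // subscribe to this only during the tutorial
}
```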
The problem is that although it's possible to do each of the two steps above separately, I want to do both of them. I want the data to be loaded into the same named collections that the client would use with real data, to avoid the complicated control logic of having two sets of collections; but I also want changes to stay local and not be propagated back over the subscription during the tutorial.
Anyone have ideas about how to do this?
A related question about whether the second part is possible: How does a Meteor database mutator know if it's being called from a Meteor.method vs. normal code?
EDIT: It seems that what we'd basically want to do in the tutorial is insert directly against the local Meteor collection, as in https://stackoverflow.com/a/19523301/586086. However, is there a way to turn on this behavior generally during the tutorial for all relevant mutators, instead of having to specify it explicitly each time?
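Concretely, the trick from that linked answer is to write straight into the client-side minimongo cache via the undocumented _collection property (reusing the hypothetical Tasks collection from above); the Session-guarded patch below is only an unverified idea for turning this on globally:

```js
// Insert straight into minimongo: no method stub runs and nothing is sent
// to the server, so the change stays client-local.
Tasks._collection.insert({ title: "Local-only task", done: false });

// Unverified idea for enabling this for all mutators during the tutorial:
// swap the wrapper's insert for the local one while the tutorial is active.
if (Session.get("inTutorial")) {
  Tasks.insert = function (doc, callback) {
    return Tasks._collection.insert(doc, callback);
  };
}
```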

I ended up implementing this myself with the partitioner package, which allows connected clients to be divided into different slices, each containing different data.
Basically, the idea is to put users into one partition while they are in the tutorial, and then put them into another partition when they are using the app for real. It works great with the tutorials package as well. This gives up the ability to keep changes client-local, but storing the tutorial data doesn't have much overhead and turned out to be useful in my case anyway.
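A rough sketch of how this looks on the server (the method names are from memory of the partitioner README, so double-check them there):

```js
// Scope a collection by partition, then move users between partitions
// as they enter and leave the tutorial.
Partitioner.partitionCollection(Tasks); // queries now see only the user's group

Meteor.methods({
  startTutorial: function () {
    // a throwaway per-user partition holds that user's tutorial data
    Partitioner.setUserGroup(this.userId, "tutorial-" + this.userId);
  },
  finishTutorial: function () {
    Partitioner.clearUserGroup(this.userId);
    Partitioner.setUserGroup(this.userId, "main"); // the "real" partition
  }
});
```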
An example of an app that does this is https://github.com/mizzao/CrowdMapper.

Related

Use Julia to perform computations on a webpage

I was wondering if it is possible to use Julia to perform computations on a webpage in an automated way.
For example, suppose we have a 3x3 HTML form in which we input some numbers. These form a square matrix A, whose eigenvalues we can find in Julia quite straightforwardly. I would like to use Julia to perform the computation and then return the results.
In my understanding (which is limited in this area), I guess the process should be something like:
collect the data entered in the form
send the data to a machine which has Julia installed
run the Julia code with the given data and store the result
send the result back to the webpage and show it.
Do you think something like this is possible? (I've seen some stuff using HttpServer, which allows computation with the browser, but I'm not sure it is the right thing to use.) If yes, what are the things I need to look into? Do you have any examples of such implementations of web calculations?
If you are using or can use Node.js, you can use node-julia. It has some limitations, but should work fine for this.
Coincidentally, I was already mostly done with putting together an example that does this. A rough mockup is available here, which uses express to serve the pages and plotly to display results (among other node modules).
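The core of it looks roughly like the sketch below; the route name and matrix handling are my own, and the node-julia calls are from memory of its README, so verify the exact forms:

```js
// Express receives the matrix as JSON and hands it to an embedded Julia
// runtime via node-julia, returning the eigenvalues.
var express = require('express');
var bodyParser = require('body-parser');
var julia = require('node-julia');

var app = express();
app.use(bodyParser.json());

app.post('/eigenvalues', function (req, res) {
  var A = req.body.matrix; // e.g. [[1,2,3],[4,5,6],[7,8,10]]
  try {
    // exec invokes a named Julia function with the given arguments;
    // depending on the node-julia version you may need to reshape the
    // nested JS array into a proper Julia matrix first.
    var values = julia.exec('eigvals', A);
    res.json({ eigenvalues: values });
  } catch (err) {
    res.status(500).send(String(err));
  }
});

app.listen(3000);
```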
Another option would be to write the server itself in Julia using Mux.jl and skip server-side javascript entirely.
Yes, it can be done with HttpServer.jl
It's pretty simple - you make a small script that starts your HttpServer, which then listens on the designated port. Part of configuring the web server is defining handlers (functions) that are invoked when certain events take place in your app's life cycle (new request, error, etc.).
Here's a very simple official example:
https://github.com/JuliaWeb/HttpServer.jl/blob/master/examples/fibonacci.jl
However, things can get complex fast:
you already need to handle two actions:
a. render your HTML page where you take the user input (by default)
b. render the response page as a consequence of receiving a POST request
you'll need to extract the data payload coming through the form. Data sent via GET is easy to reach; data sent via POST, not so much.
if you expose this to users, you need to set up some failsafe measures to respawn your server script - otherwise it might just crash and exit.
if you open your script to the world, you must make sure that it's not vulnerable to attacks - you don't want to let a hacker execute arbitrary Julia code on your server or access your DB.
So for basic usage on a small case, yes, HttpServer.jl should be enough.
If however you expect a bigger project, you can give Genie a try (https://github.com/essenciary/Genie.jl). It's still a work in progress, but it handles most of the low-level work, allowing developers to focus on the specific app logic rather than on the transport layer (Genie's author here, btw).
If you get stuck, there are GitHub issues and a Gitter channel.
Try Escher.jl.
This enables you to build up the web page in Julia.

Best practice for managing / controlling object state with 2 way databinding using Polymer

Let's try this explanation again...
I'm new to Polymer (and getting back into web dev after a relatively long absence), and I'm wondering what the recommended approach might be to more closely manage object state while employing two-way data binding. I am currently consuming REST API (JSON) objects. My question is whether Polymer keeps a copy of the original object before initiating updates to the bound object's properties/attributes, so that one could easily undo the changes? While letting two-way data binding work its magic is often desired, there are cases where I'd like to prevent or delay changes to the object/DOM until the user approves them (say, via the paper-dialog component). I suppose one could make a temporary copy of the object and bind fields to that version, and then only persist the changes back to the source object upon user approval. In any case, I'd be interested to hear thoughts and see an example or two of recommended approaches (especially if I am off-track with my ideas!)
I suppose one could make a temporary copy of the object and bind fields to that version, and then only persist the changes back to the source object upon user approval
This.
Consider that view-models are essentially different from pure data-models (sometimes called business-data). Frequently, the differences are irrelevant and one can use them interchangeably. However, be aware of scenarios where the view-model is distinct (uncommitted user edits are a good example).
The notion of a field editor that requires approval from the user is purely UI/View oriented. Whatever data is managed in that modality is purely in the domain of the view, and fetches/commits to the business-data should be discrete.
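As a rough illustration of that separation, here is a sketch of the copy-edit-commit pattern in Polymer 1.x style; the element name, person/draft properties, and the dialog id are all hypothetical, and the template markup is elided:

```js
// The dialog's inputs two-way bind to `draft`, a deep copy of `person`,
// so the real model stays untouched until the user approves.
Polymer({
  is: 'person-editor',
  properties: {
    person: Object, // the business-data object
    draft: Object   // the view-model holding uncommitted edits
  },
  openEditor: function () {
    this.draft = JSON.parse(JSON.stringify(this.person)); // deep copy
    this.$.dialog.open(); // a paper-dialog with id="dialog" in the template
  },
  commit: function () {
    this.set('person', this.draft); // persist approved edits in one step
  },
  cancel: function () {
    this.draft = null; // discard; `person` was never modified
  }
});
```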

What is the best way to represent related models in Qt?

I have a Qt application built on top of a library that manages an SQL database. I have several models (QAbstractTableModel and QAbstractListModel subclasses) that represent different tables in the database, and I have a few associated views.
The models typically have a cache of data from the database and a pointer to a database object that they can use to refresh their caches as needed. I have implemented insertRows, removeRows, and setData so that they attempt the database operation in a transaction, and then if it succeeds, they update the cache inside beginInsertRows()/endInsertRows() (or remove as the case may be) so that the cache doesn't change outside of the begin/end pairs and everything stays in sync.
So far, so good, and everything is working fine. If all changes to the database came from the GUI, I'd think this was a nice solution.
The problem is that some database operations involve changing tables that are represented by more than one model. To the application, it looks like table updates that originate in the database layer. I could have the database layer call up into the model layer whenever it wants to make a change (which is what I'm doing now), but I don't like the way it exposes the lower-level database layer to Qt models. It also has the problem that the models need to make their changes inside a transaction so they know when the operations succeed or fail, and going through the model layer would preclude the database layer from enclosing multiple updates in a transaction.
Alternatively, I could make the caches in the models be the "real" data. I could then change the underlying database and inform the models after the fact, at which point they would update their caches. I think this works fine for small tables, but since it would mean caching the entire table, I don't think it works for large tables.
My thought is that this is a common enough problem that there may be some kind of known best practice for it. Is that the case?
Thanks.

Which method will yield better performance?

Consider an example data structure with nested items, i.e. schools.classes.students.
The UI is used to add/edit/delete the students only.
Do I connect the Firebase ref at the root (schools) or at classes and get the UI to iterate over and manage the data, or does each student have its own ref, since that is where the data will be changed?
If each student has its own Firebase ref (let's say 100 students), does this add any performance/latency issues?
I can explain further if needed but wanted to keep post short.
Firebase is intended to be used with large numbers of references and callbacks. Creating a reference ("new Firebase(...)") is an extremely lightweight operation, so you should feel comfortable doing this very often.
In your app, rendering and network activity are usually the major bottlenecks, so loading and re-rendering large chunks of data is likely to be slow. I'd recommend accessing Firebase in a very granular way and tying it tightly to your GUI, so that only the minimum set of GUI elements needs to be re-rendered when things change.
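To illustrate with the classic (pre-3.x) JavaScript API - the schools/classes/students path and the *StudentRow helpers are hypothetical stand-ins for your own structure and GUI code:

```js
// Listen at the students level so only the affected row re-renders.
var root = new Firebase('https://your-app.firebaseio.com');
var students = root.child('schools/s1/classes/c1/students');

students.on('child_added', function (snap) {
  addStudentRow(snap.key(), snap.val()); // snap.name() in very old SDKs
});
students.on('child_changed', function (snap) {
  updateStudentRow(snap.key(), snap.val());
});
students.on('child_removed', function (snap) {
  removeStudentRow(snap.key());
});

// Per-student refs are cheap to create, so writes can target one record.
students.child('stu42').update({ grade: 'A' });
```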
If this wasn't clear, please provide some example code and I can make some additional recommendations.
[I work at Firebase]

Filtering Data in ASP.NET Web Services

I've been using this site for quite a while, usually being able to sort out my questions by browsing through the questions and following tags. However, I've recently come across a question that is rather hard to look up amongst the great number of questions asked - a question I hope some of you might be able to share your opinion on.
As my problem is a bit hard to fit into a single line for the title, I'll try to give a bit more detail on the problem I've encountered. So, as the title says, I need to filter, or limit, some of the response data my standard ASP.NET SOAP-based web service returns when invoking various web methods. The web service is used to return data consumed by other systems (a data repository, more or less), where the client today is able to specify a few parameters on how the data should be filtered, and in return gets a full set of data back.
Well, easy enough, I thought: just put additional filtering options on the existing web methods that need a bit more filtering applied, make adjustments on the server side, and we are all set to go. Well, unfortunately it turned out to be a bit more tricky than this.
The problem I am facing is that I'm working on a web service running in a production environment, which needs to be extended so that additional filters can be applied to existing web methods being invoked, without affecting the calls already being made by other systems used by the customer through their client stubs. This is where I am a bit troubled, since I can't seem to find the "right solution" for extending the current web service.
Today, the filter is sent as a custom data structure which holds information on which data should be filtered, but I am not sure if I can simply add more information to this data structure without breaking code at the clients. One of my co-workers suggested that I could extend the web.config on the server side to hold a section with details on which data should be excluded (filtered out), but I don't find this to be a viable solution in the long run - and I don't trust customers with such an option, since this is likely to go wrong at some point. So the solution I am looking for is a way to apply a "second filter" to the data the client requests, so that instead of getting a full set of data back, it gets only a fraction; it should be implemented so that the filter can be easily modified, and it must not affect the current client calls.
Any suggestions on how I should approach this problem?
Thanks!
Kind regards,
E.
A pretty common practice is to create another instance of the application, OR use part of the URL to signify the version of the endpoint being connected to - perhaps the virtual directory is the date. That way old calls will go to the old API and new calls will come in on the new API.
http://api.example.com/dostuff
vs
http://api.example.com/6-7-2011/dostuff
