I'm working on rewriting an existing application with Meteor that has two fairly distinct use cases (an administrator account and user account). Both could be considered separate apps in terms of functionality, but share the same back end database.
Is there any way to "namespace" or otherwise define separate clients so that Meteor only packages and sends the assets for the client being accessed? For instance, meteor-router could serve different clients for the /admin* space and the /user* space, so that neither client downloads unnecessary overhead.
I suspect this is beyond what a Meteor smart package like meteor-router can do.
You can always create two applications that connect to the same database. Shared server code may be put in a package and included in both, so there will be no need to repeat it.
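For example, a minimal sketch of that setup (the database name, directories, and ports here are placeholders): both apps can be pointed at the same MongoDB instance with the MONGO_URL environment variable.

    # Start the admin app against the shared database
    cd admin-app && MONGO_URL=mongodb://localhost:27017/shared_db meteor --port 3000

    # Start the user app against the same database
    cd user-app && MONGO_URL=mongodb://localhost:27017/shared_db meteor --port 3100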
I am a newbie in Meteor. I plan to develop a mobile app in Meteor with an existing MongoDB. Is it safe/secure to build a mobile app in Meteor? Where will the database credentials be stored, on the mobile device or on the server?
The database credentials will be stored on the server. The user's login credentials will also be encrypted before being sent to the server.
Anything you want to be server-side only you should put under your /server directory. Everything else is potentially visible client-side.
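For example, a typical Meteor directory layout (a sketch; only client/, server/, and lib/ carry special meaning, the rest is up to you):

    my-app/
      client/    # only loaded in the browser
      server/    # only loaded on the server; keep credentials and secrets here
      lib/       # loaded on both client and server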
To avoid problems, you have to take care of two default settings which are active during development but need to be switched off for production:
By default, Meteor makes all of the data inside our database available to our users. This is convenient during development but a big security hole that needs to be plugged.
This default behaviour comes from the autopublish package. To remove it, run meteor remove autopublish. Doing so breaks the application's data flow, which then needs to be fixed.
The first step in fixing the application is using a Meteor.publish function inside the isServer conditional to decide what data should be available.
Because the Meteor.publish function executes on the server, it continues to have access to all of our data; code on the server is inherently trusted.
The second step in fixing the application is using a Meteor.subscribe function from within the isClient conditional to reference the publish function.
Inside the publish function, we can't use the Meteor.userId() function. We can, however, achieve the same thing with this.userId, as the sketch below shows.
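A minimal sketch of both steps (the Tasks collection and the 'userTasks' publication name are invented for illustration):

    // Shared code: the collection exists on both client and server.
    Tasks = new Meteor.Collection('tasks');

    if (Meteor.isServer) {
      // Step 1: publish only the data this user should see.
      Meteor.publish('userTasks', function () {
        // Meteor.userId() is not available here; this.userId is.
        return Tasks.find({ owner: this.userId });
      });
    }

    if (Meteor.isClient) {
      // Step 2: subscribe to the publication by name.
      Meteor.subscribe('userTasks');
    }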
By default, it’s possible for users to insert, update, and remove data from a collection using the JavaScript Console. This is convenient for development but a big security risk for a live application.
The solution is to move the database-related code to the trusted environment of the server. There, users don’t have any direct control.
First, remove the security risk: run meteor remove insecure to remove the insecure package from the project. The application will become much more secure, but it will also break: none of the database-related features will work.
By using methods, you are able to write code that runs on the server after it’s triggered from the client. This is how to fix the application.
To create methods, use a Meteor.methods block on the server, and then trigger them from the client with the Meteor.call function.
You can pass data from the Meteor.call function into the method, so data from a submitted form can still be used on the server.
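A sketch of that pattern, reusing the hypothetical Tasks collection from above (the method name and fields are assumptions):

    if (Meteor.isServer) {
      Meteor.methods({
        // Runs on the server, so it keeps working after 'insecure' is removed.
        addTask: function (text) {
          // Never trust client-supplied input.
          if (typeof text !== 'string')
            throw new Meteor.Error(400, 'text must be a string');
          Tasks.insert({ text: text, owner: this.userId, createdAt: new Date() });
        }
      });
    }

    if (Meteor.isClient) {
      // Trigger the method and pass along the submitted form data.
      Meteor.call('addTask', 'Buy milk', function (error) {
        if (error) console.log(error.reason);
      });
    }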
(Answer partly copied from "Your First Meteor Application" by David Turnbull.)
Hope that helps to get the concept.
Michael
I'm just getting into using Meteor, and yesterday I managed to get a leaflet map running with custom tiles. My goal is to get player positional data from a game and send it to a Meteor server to distribute to other players viewing the map in real time.
The data is available to a small desktop application on the player's machine and Meteor can easily handle the distribution part, so all I'm missing is getting the desktop application to talk to the Meteor server. What would be the best way to go about this? Is there a way to get Meteor to listen for incoming data from an external source?
You can communicate directly with a Meteor server using its native Distributed Data Protocol (DDP). You can find the specification document here, and an up-to-date Node driver here. Some searching may turn up implementations in other languages.
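For example, with the node DDP driver mentioned above (a sketch; the host, port, and method name are assumptions):

    var DDPClient = require('ddp'); // npm install ddp

    var client = new DDPClient({ host: 'localhost', port: 3000 });

    client.connect(function (error) {
      if (error) throw error;
      // Call a hypothetical Meteor method that records a player's position.
      client.call('updatePosition', [{ player: 'p1', x: 12.3, y: 45.6 }],
        function (err, result) {
          console.log('position sent', err, result);
        });
    });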
Alternatively, you could use server-side routing in Iron Router to let clients POST/PUT their positions over HTTP (sketched below). The drawback of this solution is that you may need some way for clients to uniquely identify themselves (e.g. a unique key) so you don't get bogus data.
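A sketch of such a server-side route (the path, the Positions collection, and the apiKey check are assumptions for illustration):

    // Server-side route: runs in a connect middleware context, never in the browser.
    Router.route('/api/positions', { where: 'server' })
      .post(function () {
        var body = this.request.body; // JSON body; may need body-parser middleware

        // Reject requests without a known key so bogus data is dropped.
        if (!body || body.apiKey !== 'expected-key') {
          this.response.statusCode = 403;
          return this.response.end('forbidden');
        }

        // Upsert the player's latest position into a hypothetical collection.
        Positions.upsert({ player: body.player },
          { $set: { x: body.x, y: body.y, updatedAt: new Date() } });

        this.response.end('ok');
      });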
I am trying to evaluate whether Meteor JS would be suitable for a future project that would incorporate live chat, and may need to be scalable.
It certainly can perform the chat functions, but I don't want to paint myself into a corner if traffic spikes and we need to provision the app with more resources in the form of drones/dynos/instances. I have read that a Meteor app on Heroku won't easily scale (perhaps not at all?). I am not clear on whether this is a Heroku issue, or more to do with the current state of Meteor JS (0.6.2.1 at this time). I've not found much more related to Nodejitsu or AppFog.
Can anyone clarify whether a Meteor JS app can currently be deployed on a PaaS such that resources (drones/dynos/instances) can be easily scaled up to meet demand? If so, which PaaS? If not, what is the explanation (for a 5-year-old), and is there a roadmap?
Personally I've set myself up with an AWS load balancer and EC2 instances, with my DB over at MongoHQ.
The load balancer setup was made that much easier by following these instructions:
http://www.ripariandata.com/blog/creating-an-aws-elastic-load-balancer
I wrote a script to deploy to a single EC2 instance. It wouldn't be much work to add additional remotes in case you have multiple instances:
https://github.com/matb33/meteor-ec2-install
The best option I can recommend is Meteor.com hosting (via meteor deploy).
This is because they incorporate the DDP-proxy solution within their architecture. It's not as simple as just proxying between two Meteor instances and adding a dyno, because each user's session state might live on the other server, and that causes trouble when a request switches over to a different dyno.
For now it's free, and it looks like they scale it fairly well too. I think they're also going to introduce a nicer hosting solution soon, and who better to host Meteor apps than Meteor themselves?
If you want to deploy on your own infrastructure (EC2, for instance), you could scale up vertically for the moment until the DDP proxy is released. (DDP is what Meteor uses to communicate between server and client, and soon between servers too, so that state can be relayed across multiple dynos.)
This answer is Heroku-specific.
As far as I understand, a Meteor application can't be scaled on Heroku to more than one dyno. The reason is that the Meteor server instance holds state for every client; that is how it knows which updates to send to each client. This means a client has to talk to the same server every time. The Heroku proxy layer doesn't provide this kind of sticky routing and can route a client request to a different dyno which doesn't hold that client's state.
That second server then has to fetch all of the client's data from the DB and resend everything to the client. The server gets loaded and the client gets needlessly updated: with two dynos we do twice the work and add lots of noise for the client.
I hope it is clear enough.
My organisation (a small non-profit) currently has an internal production .NET system with SQL Server database. The customers (all local to our area) submit requests manually that our office staff then input into the system.
We are now gearing up towards online public access, so that customers will be able to see the status of their existing requests online and, in future, also create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our servers (with direct access to the existing database) or use an external hosting service provider.
Hosting externally would mean keeping a copy of the Requests database on the hosting provider's server. What would be the recommended way to keep the request data synced in real time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question I have to ask is: if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on-site but you do have some web infrastructure available to you, you may want to consider creating a web service which provides access to your database via a set of well-defined methods. Or, on the flip side, you could make the remotely hosted database behind your website your production database and use a web service to access it from your office system.
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
If a web service is not practical (or you have concerns about availability), you may want to consider a queuing system for synchronization. Any change to the DB (local or hosted) is also added to a message queue. Each side monitors the queue for changes that need to be made and then applies them. This accounts for one of the databases being unavailable at any given time.
That being said, I agree with @LeviBotelho: syncing two DBs is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same: customer-submitted data. Currently it is entered by them through you; eventually it will be entered directly by them. I see no need for two different databases holding the same data. The replication errors alone, when they pop up (and they will), will be a headache for your team for nothing.
I've always personally used dedicated servers and VPSes, so I have full control over my SQL Server (using 2008 R2). Now I'm working on an ASP.NET project that could be deployed in a shared hosting environment, which I have little experience with. My question is: are there limitations on the features of SQL Server I can use in a shared environment?
For example, if I design my database to use views, stored procedures, user defined functions and triggers, will my end user be able to use them in shared hosting? Do hosts typically provide access to these and are they difficult to use?
If so, I assume the host will give a user his own login, and he can use tools like Management Studio to operate within his own DB as if it were his own server? If I provide scripts to install these objects, will they run under the user's credentials within his database?
All database objects are available. That includes tables, views, stored procedures, functions, keys, certificates...
Usually CLR integration and full-text search (FTS) are disabled.
Lastly, you will not be able to access most server-level objects (logins, server triggers, backup devices, linked servers, etc.).
SQL Mail and Reporting Services are often turned off too.
It depends on how the other users are authenticated to the database, if it is one shared database for all users.
If every user on the host receives their own DB:
If your scripts are written in a generic way (not bound to fixed usernames, for example), other users will be able to execute them on their own database and will have the same functionality. (Right-click the DB and choose Tasks -> Back Up, for example.)
You could also provide plain backup dumps of a freshly set-up database so that, for other users, the setup is only one click away. Also, from the beginning, you should think about how to roll out changes that need to affect every user.
One possible approach is to always supply delta scripts, whether you are patching errors away or adding new things.