I read that Meteor doesn't do any server-side processing of view templates. Does this mean that HTML is served as-is to the client, with AJAX requests to populate the dynamic parts of the page?
How does this compare to server-side template processing in terms of serving the same dynamic-content-rich page to many users (hundreds of thousands)?
Your question is kind of broad, and I don't think this is the place for such comparisons, but I'll try to get you started regarding Meteor's approach:
Meteor used to be bundled with a view layer called Blaze. It is still the official view layer, but it appears that the next version will be based on React.js. Anyway, Meteor is more loosely coupled from Blaze now and you can choose any view layer, with Blaze, React and Angular being officially supported.
All of the above are templates/components that are compiled to JavaScript and rendered on the client, based on state/data available locally.
This data is usually obtained via a pub/sub mechanism (using a local cache that mimics the MongoDB interface, called MiniMongo) and mutated via an async RPC mechanism called Meteor Methods.
The Meteor server monitors the database for changes by tailing MongoDB's operations log (the OpLog).
When a client requests data (via a subscription), the server fetches the initial data and then monitors for changes. If an OpLog change matches a subscription's criteria, an update is sent to the client.
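For illustration, a minimal pub/sub sketch (the collection, publication and field names here are hypothetical, not anything Meteor prescribes):

// Shared between client and server (hypothetical names)
Posts = new Mongo.Collection('posts');

if (Meteor.isServer) {
  // Publish only the documents (and fields) each client should see
  Meteor.publish('recentPosts', function () {
    return Posts.find({}, { sort: { createdAt: -1 }, limit: 20 });
  });
}

if (Meteor.isClient) {
  // Fills the client-side MiniMongo cache with the published documents
  Meteor.subscribe('recentPosts');
}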
The notion of Reactive Computations is used throughout the framework: reactive data sources can be invalidated, causing the functions that depend on them to be re-evaluated.
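A small sketch of such a reactive computation using Meteor's Tracker, reusing the hypothetical Posts collection from above:

if (Meteor.isClient) {
  // Re-runs automatically whenever the reactive data it depends on
  // (here, the result of the MiniMongo query) is invalidated
  Tracker.autorun(function () {
    console.log('Posts available locally: ' + Posts.find().count());
  });
}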
Combined with a client-side router you often get what is commonly referred to as a SPA (single page application).
The state of the application (route + data + local state) normally dictates what views are rendered on the screen.
Currently, Meteor bundles the views and other code during the build process and sends the bundle to the client, which then has all of the code that it needs to render all of the views and fetch the required data.
A more modular approach is being investigated by the community (via alternative build methods) and is expected in the upcoming version 1.3 of Meteor.
The data transport mechanism is Meteor's DDP (Distributed Data Protocol), which uses a WebSocket when possible to transfer data back and forth between the client and the server, so there is no need for AJAX/Comet calls for each state mutation.
I think that the spectrum of alternative implementations is too broad to discuss in a SO answer. It really depends on how "reactive" or "real time" you want your app to be.
The server capacity greatly depends on your implementation:
the amount of data each user needs and the frequency with which this data changes
the way you construct your queries (getting the data you need and doing so efficiently)
the way you partition your code
the hardware, of course
It can range from hundreds to tens of thousands of connected users per server. There is no real way of providing a generic answer here.
An interesting guide is being created to demonstrate best practices using Meteor.
So I'm building a web shop using Firestore and Firebase for the first time, and I'm also new to NoSQL. I have the following architectural problem: when a customer places an order, the client sends the ordered products directly to Firestore, which stores the order in a collection raw_orders. My idea then was to have a Cloud Function trigger on the document create, which processes it and builds an invoice and such. However, I read that this function invocation may be delayed by up to 10 seconds, and I would like to have a synchronous solution instead.
Instead I had the idea to create an HTTP Cloud Function that the customer can POST the order to; the HTTP function then processes the order, pushes it to Firestore, and returns the order ID (or similar) to the customer. This approach feels much safer, since the user won't have to talk to the database directly. It also solves the issue that a function triggered by a Firestore create might be delayed.
However I'm new to Firebase and I'm not sure if this is architecturally the preferred way. The method I propose seems to be more in line with regular old REST APIs.
What do you think?
Thanks!
It sounds like you definitely have some server-side code and database operations that you can't trust the clients to do. (Keep in mind that Firestore security rules are your only protection -- anyone can run whatever code they want within those rules, not just the code you provide.)
Cloud functions give you exactly this -- and since you both want the operation to be synchronous (from the view of your client) and presumably have some way for the client to react to errors in the process, a cloud function would make a lot of sense for you to use.
Using cloud functions in this way is very common in Firebase apps, even if it isn't pure REST.
Moreover, if you are using Firebase more generally in your client, it might be more natural to use a callable cloud function rather than an http function, as this will handle the marshaling of the parameters in a much more native way than a raw HTTP request might. However, it isn't clear in your case since it sounds like you're using the REST API today.
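For instance, a minimal callable-function sketch (the function, collection and field names are my own assumptions, not from the question):

// functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.placeOrder = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  // Validate/process the order server-side, then persist it
  const ref = await admin.firestore().collection('orders').add({
    uid: context.auth.uid,
    items: data.items, // hypothetical payload shape
    createdAt: admin.firestore.FieldValue.serverTimestamp(),
  });
  // Returned synchronously (from the client's point of view)
  return { orderId: ref.id };
});

On the client, firebase.functions().httpsCallable('placeOrder') would invoke it and resolve with the returned order ID.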
Keep in mind that there are additional costs (or specific quotas, depending on your pricing plan) for running cloud functions. The pricing is based on invocations, as well as CPU and RAM usage of your functions.
Still quite new to Meteor/coding, and I have a question on how to connect Meteor to a live API that uses WebSocket.
The API is from Bittrex (an exchange for cryptocurrency), and there is a Node.js package that provides a "subscription" to the API in order to get live data:
https://github.com/dparlevliet/node.bittrex.api
I managed to get it running in Node with no problem, but I would ideally like to connect it to Meteor in order to present the data nicely. The props should be updated live with the data received. (NB: there is a lot of data, and it arrives continuously.)
Is there a good way to do this, or is Meteor not suitable, given that the props would change continuously?
Would a Node/React-only solution be better?
This question might get closed because it's a bit opinion based but...
You have a streaming data source providing data over WebSocket. You could:
(a) have all your clients subscribe directly to that source and not involve your server at all. In this case you'd just be using React on the client and basically ignoring Meteor (even though you'd be building the UI in a Meteor app). I don't know how Bittrex charges for access or how it scales across many connections, so that may be an issue if there are many clients.
(b) use your Meteor app to proxy and then fan out the Bittrex data (see the sketch below). In this case you would:
subscribe to the bitrex data source from your server
copy the data into a mongo collection
publish that data using a Meteor publication.
Your clients would subscribe to the Meteor publication and on the front end you would get reactive data updates like any other Meteor app.
The benefits of (b) are that Bittrex only sees one subscriber and your app looks like a pretty vanilla Meteor app. Also, if you have to use any kind of API key or secret to access Bittrex, then that key doesn't need to be shared with the client side.
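A rough server-side sketch of option (b); the exact node.bittrex.api subscribe signature is an assumption based on its README, and the collection and market names are hypothetical:

// server/bittrex.js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
const bittrex = require('node.bittrex.api');

const Ticks = new Mongo.Collection('ticks');

Meteor.startup(function () {
  // Single subscription to Bittrex, shared by all app clients
  bittrex.websockets.subscribe(['BTC-ETH'], Meteor.bindEnvironment(function (data) {
    // Copy each update into Mongo; the publication fans it out reactively
    Ticks.upsert({ market: 'BTC-ETH' }, { $set: { data: data, updatedAt: new Date() } });
  }));
});

Meteor.publish('ticks', function () {
  return Ticks.find();
});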
I am about to start working on the back-end for a mobile app (initially iOS/Android, later also website) and I am thinking whether Realm could fulfill all my needs.
The basic idea is that there are two types of users - customers and service-providers. The customers send requests to the server once in a while and are subscribed (real-time) for any event that might occur in relation to this request in the future. Each service-provider is listening for specific requests from all customers and is the one who is going to trigger various events (send data) for each of those requests.
From the Realm docs, it is obvious that the real-time data sync is not going to be a problem. The thing I am concerned about is how to model the scenario (customer/service-provider) in the Realm 'world'. Based on what I read, it is preferred to have one realm per user. Therefore, I suppose the user will register and will be given a realm; then whenever he makes a request, it is going to be stored in his realm. Now the question is how to model the service-provider. There are going to be various service-providers, each responding to different kinds of requests (triggering various kinds of events up to one hour after the request). (Each user can send any request and can therefore be served by any service-provider.)
I read that Realm supports data sharing among different realms, which could be a partial solution to this problem; however, I was not able to find out whether this 'sharing' can be limited to particular requests. (Meaning each service-provider would get only the requests intended for him.)
My question is whether this scenario is doable using Realm.
This sounds like a perfect fit for Realm's server-side event-handling. Put simply, Realm offers the ability through our Node SDK to listen for changes across Realms on the server.
So in your example, where each mobile user would have their own Realm, the URL for this would be /~/myRealm, in which the tilde represents the Realm user ID. The Node SDK's event-handling API allows you to register a JS function that will run in response to changes in any Realm whose URL matches a regex pattern. In this case you could use ^/([0-9a-f]+)/myRealm so that any time any user's myRealm is updated, the server can perform some logic.
In this manner, the server via the Node SDK is really a "super-user" or service-provider as you describe. When an event fires, the JS function that runs is provided the Realm that was updated and a list of indexes pertaining to the objects in the Realm that were inserted, deleted, or modified. You can then perform any logic in JS, such as using the changed data to call out to another API or opening the Realm in question or any other and writing changes which will get pushed back out to the respective clients.
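A rough sketch of such a listener with the Node SDK (exact signatures varied across Realm Object Server versions, and the Request object type and handleNewRequest helper are hypothetical):

const Realm = require('realm');

async function start() {
  const adminUser = await Realm.Sync.User.login('https://my-server:9443', 'admin', 'password');

  Realm.Sync.addListener(
    'realm://my-server:9080',
    adminUser,
    '^/([0-9a-f]+)/myRealm$', // fires for any user's myRealm
    'change',
    function (changeEvent) {
      // changeEvent.changes maps object types to the indexes of
      // inserted/deleted/modified objects in changeEvent.realm
      const changes = changeEvent.changes.Request;
      if (changes) {
        const requests = changeEvent.realm.objects('Request');
        changes.insertions.forEach(function (i) {
          handleNewRequest(requests[i]); // hypothetical service-provider logic
        });
      }
    }
  );
}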
The full server-side event handling is part of Realm Professional Edition, but we recently released another way to interact with this, called Realm Functions. This provides the ability, through the server's dashboard, to create the same JS functions that will run in response to changes across Realms. The Developer Edition supports 3 functions, so you can try it out immediately!
I'm building my first app with Meteor and I'd like to know the best security measures to take in a couple of situations...
How do I ensure users are not submitting commands like drop table to my forms? Do I have to manually sanitize somehow or is this automatically handled?
Normally I would use a GET on forms if it asks the user for sensitive information; however, I'm confused about how Meteor handles inserting items into the db. Is the info submitted through forms secure, or is it being passed somewhere people can see it?
If I have removed the autopublish and insecure packages, it means users cannot just query for other users' information, is that correct?
Sorry, if these are noob questions. I haven't quite wrapped my head around how the security of an app all fits in but any help would be much appreciated :)
Here is a quick rundown of the basics of securing your meteor app:
Transmit everything over HTTPS.
Separate your client and server code into their own directories. A great way to keep your server-side secrets safe is to never send them to the client in the first place.
Remove the insecure package.
Remove the autopublish package.
Add the audit-argument-checks package and add checks to all methods and publish functions.
Add the browser-policy package (after completing all of the above).
Here are some answers to your questions:
If insecure is removed and you don't have any allow rules, then users can't remove anything. Regardless, documents on the client can only be removed or updated by id, so you can't drop a collection with a single command.
With the proper allow rules specified, the client can execute an insert/update. Otherwise, you can use a Meteor method to have the server perform an insert/update for you, e.g. you can specify a method like postsInsert on the server. Messages will be passed between the client and the server in the clear unless you use SSL.
Correct. With autopublish off, you need to specify which documents are published to the client - otherwise the client will not have read access to any documents.
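For example, a publication that gives each logged-in user read access only to their own documents might look like this (the collection and field names are hypothetical):

Meteor.publish('myDocuments', function () {
  if (!this.userId) {
    return this.ready(); // anonymous users get nothing
  }
  // Only documents owned by the logged-in user are sent down
  return Documents.find({ owner: this.userId });
});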
If you don't want your users to perform certain operations, you don't allow them to do so. The only commands your users are able to perform on the database are those you specify : when you remove the insecure package, typing Collection.remove({}); in the browser console won't drop the entire collection, users will have to Meteor.call certain allowed operations (through Meteor.methods) to perform authenticated-only stuff.
Generally, you want the OWNER of a resource to be able to edit/delete it, but other people shouldn't be able to modify it.
If you have an edit form for one of your collection, you will likely have corresponding Meteor.methods to edit/delete it.
Inside a Meteor.method, you can check the passed arguments received from the client to validate data correctness, you can also verify if the call was issued from an allowed user (typically the owner of the document).
Fortunately, Meteor comes with a Match framework to test parameters sent to Meteor.methods.
// define a test to check if a document is editable by a certain user
EditableDocument = function (userId) {
  return Match.Where(function (documentId) {
    var document = Collection.findOne(documentId);
    if (!document) {
      throw new Meteor.Error(500, "Document doesn't exist!");
    }
    if (userId != document.creator._id) {
      throw new Meteor.Error(500, "Can't update a document you don't own.");
    }
    return true;
  });
};

Meteor.methods({
  updateCollection: function (documentId, fields) {
    check(documentId, EditableDocument(this.userId));
    check(fields, {
      field1: Match.Optional(String),
      field2: Match.OneOf(Number, Boolean),
      field3: Match.Any
    });
    // if the tests pass, do your thing
    ...
  }
});
To communicate form data between the client and the server, Meteor uses DDP (Distributed Data Protocol) with a technique known as Remote Method Invocation.
A client can Meteor.call a Meteor.method declared in the server and get a response asynchronously.
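For example, the updateCollection method above could be invoked from the client like this (the documentId variable and the field values are just placeholders that satisfy the checks):

// Client side: the callback asynchronously receives either a
// Meteor.Error thrown by the checks or the method's result
Meteor.call('updateCollection', documentId, {
  field1: 'new title',
  field2: 42,
  field3: null
}, function (error, result) {
  if (error) {
    console.log('Update rejected: ' + error.reason);
  }
});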
This is different from classic form-submission techniques, which are done over HTTP and can use its secure counterpart, HTTPS, to encrypt data between a client and the server, avoiding data-sniffing issues.
DDP supports data encryption, and all *.meteor.com-hosted sites use the meteor.com certificate to ensure that all traffic is properly encrypted using SSL.
On your own hosting solution, you will have to handle this yourself though.
https://groups.google.com/forum/#!topic/meteor-core/a9dPA1-mgXA
After removing autopublish, server resources are no longer published to all clients automatically, meaning that one can't simply query your collections to access other users' documents.
Be sure to read the docs at least once to fully understand Meteor related aspects of web development.
I have been given access to a real time data feed which provides location information, and I would like to build a website around this, but I am a little unsure on what architecture to use to achieve my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, therefore building a website that talks directly to the feed is out - as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to a website.
From a front end connection perspective, web services sounds like it may work, but would this also create multiple connections to the feed for each user? I would also like the back end connection to be persistent, so that data is retrieved and processed even when the site is not being visited, I believe IIS will recycle web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would be looking to push the data to the website every time it changes or new data is received.
What is the best way of achieving this, and what technologies are there out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data on to a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then to publish messages to registered clients (PubSub - this SO post seems relevant; maybe you can ask for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
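As an illustration, a browser client subscribing to component D via SignalR's JavaScript library might look roughly like this (the hub URL and event name are hypothetical, and this uses the current @microsoft/signalr package rather than the jQuery client of that era):

import * as signalR from '@microsoft/signalr';

// Connect to the (hypothetical) hub exposed by component D
const connection = new signalR.HubConnectionBuilder()
  .withUrl('/locationHub')
  .withAutomaticReconnect()
  .build();

// Component C pushes each processed feed update to subscribers
connection.on('locationUpdate', function (location) {
  console.log('New location:', location);
});

connection.start().catch(function (err) { console.error(err); });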
Summary
Components:
A - a .NET service that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/ . It's not ASP.NET, but it can easily solve the issue of distributing real-time data to other users. Since user connections, subscriptions and broadcasting to channels are built into each, that will make coding the rest super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates, then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system, depending on your requirements and number of users.
There may be solutions for .net that I am not aware of that do the same thing, but those are the 2 I know of.