I'm looking for a way to change how Meteor connects to its Mongo database. Right now, I know I can set an environment variable when I launch Meteor (or export it), but I was hoping there was a way to do this in code, so I could dynamically connect to different instances based on conditions.
An example test case would be for the code to parse the URL 'testxx.site.com', look up a Mongo URL based on the 'testxx' subdomain, and then connect to that particular instance.
I've tried setting the process.env.MONGO_URL in the server code, but when things execute on the client, it's not picking up the new values.
Any help would be greatly appreciated.
Meteor connects to Mongo once, right when the server starts, so any later changes to process.env.MONGO_URL won't affect the database connection.
It sounds like you are trying to run one Meteor server on several domains and have it connect to several databases at the same time depending on the client's request. This might be possible with traditional server-side scripting languages, but it's not possible with Meteor because the server and database are pretty tightly tied together, and the server basically attaches to one main database when it starts up.
The *.meteor.com hosting is doing something similar to this right now, and in the future Meteor's Galaxy commercial product will allow you to do this - all by starting up separate Meteor servers per subdomain.
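In the meantime, if you control your own deployment, you can approximate this with a reverse proxy that routes each subdomain to its own Meteor instance, each started with its own MONGO_URL. A minimal sketch, assuming node-http-proxy and hypothetical ports and subdomains:

var http = require('http');
var httpProxy = require('http-proxy'); // npm install http-proxy

var proxy = httpProxy.createProxyServer({});

// Each subdomain maps to a Meteor instance started with its own
// MONGO_URL, e.g. MONGO_URL=mongodb://localhost/test01 meteor --port 3001
var targets = {
  'test01.site.com': 'http://127.0.0.1:3001',
  'test02.site.com': 'http://127.0.0.1:3002'
};

http.createServer(function (req, res) {
  var target = targets[req.headers.host];
  if (target) {
    proxy.web(req, res, { target: target });
  } else {
    res.writeHead(404);
    res.end('Unknown subdomain');
  }
}).listen(80);

In practice you would also forward websocket upgrades (proxy.ws) so DDP connections keep working through the proxy.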
I've built an application that does a lot of processing of user data (they can load in their data, map the variables, run analyses, review a dashboard, and download the results/report). It's a pretty heavy application, and I'm running into an issue that I'm not sure how best to solve.
The problem is that sometimes the session will unexpectedly disconnect from the psql database. This causes problems because just about every corner of the application depends on retrieving or sending information; the app basically doesn't work at all without the connection. What's even worse is that the UI doesn't really inform the user of the problem, it just kind of sits there.
The application exists on an EC2 instance within a docker container, served through an HTTPS proxy (Caddy) to the public via a registered domain name. Each new session searches for a global pool connection, and if one does not exist, it creates one, then checks out a local connection and passes that into all the downstream modules.
I'm wondering how others have addressed this problem. Should I:
use a global pool, then check out a single connection and test for a severed connection at the start of each function? This is my current (unfinished) approach and seems not great.
search for a pool connection and check out a connection at the start of each function, then return it at the end? This would take a bit of time to implement (and test), but seems like a reasonable solution (see the sketch after this list).
check for a connection every minute and, if one doesn't exist, create one? I'm guessing this would need to happen in each module independently.
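For the second option, here's a minimal sketch in JavaScript, using node-postgres purely as a stand-in for whatever database client your stack actually uses; the environment variable and table are hypothetical. The point is that every operation borrows a connection from a shared pool and returns it immediately, and the pool quietly replaces connections that have died:

const { Pool } = require('pg'); // npm install pg

const pool = new Pool({
  connectionString: process.env.DATABASE_URL, // hypothetical env var
  max: 10,                  // cap concurrent connections
  idleTimeoutMillis: 30000  // recycle idle connections
});

// Option 2 in miniature: every operation borrows a connection and
// gives it back, instead of holding one for the whole session.
async function fetchUserData(userId) {
  // pool.query() checks a connection out, runs the query, and
  // releases it back to the pool, even if the query throws.
  const { rows } = await pool.query(
    'SELECT * FROM user_data WHERE user_id = $1', // hypothetical table
    [userId]
  );
  return rows;
}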
Any direction will be greatly appreciated.
Thanks,
Need some advice before starting to develop something. I have 15 WordPress websites on separate installs, and I have a remote server that collects data from those websites 24/7.
I've reached the point where I want the server to modify the websites based on its calculated data.
The question is this:
Should I allow the server to access the WP database remotely and modify things directly, without WordPress in the loop?
Or should I use the WP REST API and expose some secured routes that provide and accept data and make those changes?
My instinct is to use the WP API, but after all it's PHP (behind nginx/Apache), which has its limits (timeouts, for example), and I find it hard to run heavy, long-running processes on WordPress itself.
I can divide the tasks into different stages, for example:
fetching data (a simple GET)
doing some processing on the remote server
looping and pushing the modifications in small batches to another route
My concern is that this cycle requires a perfect match between the remote server and the WP API, and any change or fix on the WP side means a plugin update across all the websites, which is not much fun.
I'd appreciate any ideas and suggestions on how to move forward.
"use WP REST API and supply some secured routes which provide data and accept data and make those changes", indeed.
I don't see why timeouts or other limits should cause a problem; using the API is the best approach for this kind of case. You can avoid timeout problems with some adjustments on the web server side.
Or you can raise the memory and timeout limits exclusively for requests from your main server. For example:
if ($_SERVER["remote_attr"]=='YOUR_MAIN_SERVER_IP') {
ini_set('max_execution_time',1000);
ini_set('memory_limit','1024M');
}
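To sketch the batching idea from the question: the remote server can push changes to a custom secured route in small chunks so that no single request runs long enough to hit the PHP timeout. This is a hypothetical example; the route would be one your plugin registers with register_rest_route(), and the credentials here use WordPress application passwords over HTTP Basic auth:

const BATCH_SIZE = 25; // small enough that one request stays fast

async function pushChanges(site, changes) {
  for (let i = 0; i < changes.length; i += BATCH_SIZE) {
    const batch = changes.slice(i, i + BATCH_SIZE);
    // 'myplugin/v1/apply-changes' is a hypothetical route your
    // plugin would register on each WordPress install.
    const res = await fetch('https://' + site + '/wp-json/myplugin/v1/apply-changes', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // WordPress application passwords work with HTTP Basic auth.
        'Authorization': 'Basic ' + Buffer.from('apiuser:app-password').toString('base64')
      },
      body: JSON.stringify({ changes: batch })
    });
    if (!res.ok) throw new Error('Batch failed with HTTP ' + res.status);
  }
}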
For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the session_start event, we would determine which database to use for them (by looking at the subdomain they come in on) and set the connection string in a session variable. Then on every page_init, we'd dynamically set any object's connection string. In code behind, we'd do the same thing with the connection string.
Is there a better approach to doing this and will setting the connection string in page_init work? Is using a session variable wise? I've tended not to ever use them except when no other solution was possible.
The problem is that this model is really complex and can leave you with errors, especially when it comes to changes in the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, that means updating 100 different databases. And when you factor in downtime, things get even worse.
I would approach this slightly differently: abstract your database layer by creating one API that calls the database, and from the website you always call the API, passing the domain you want the data to come from.
You might ask what advantage this gives you. The biggest one shows up during upgrades and maintenance: one API per client is a lot easier to reason about than one database per client. And if you really want just one API (I would still recommend one per client, deployed automatically), you can put a switch in the call: based on a parameter you pass to the API (it can be in a header, such as the subdomain), you choose which database to connect to.
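To make that per-call switch concrete, here's a minimal sketch, written in Node/Express for brevity although the same shape works in any stack; the header name, tenant map, environment variables, and table are all illustrative assumptions:

const express = require('express');   // npm install express
const { Pool } = require('pg');       // stand-in for your actual database

// One pool per client database; the keys are hypothetical subdomains.
const pools = {
  clienta: new Pool({ connectionString: process.env.CLIENTA_DB_URL }),
  clientb: new Pool({ connectionString: process.env.CLIENTB_DB_URL })
};

const app = express();

app.get('/api/orders', async (req, res) => {
  // The website forwards the subdomain it was hit on in a header.
  const tenant = req.header('X-Tenant-Subdomain');
  const pool = pools[tenant];
  if (!pool) return res.status(400).json({ error: 'unknown tenant' });

  const { rows } = await pool.query('SELECT * FROM orders'); // hypothetical table
  res.json(rows);
});

app.listen(3000);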
Let me give you a sample scenario and how I would suggest approaching it (this holds for either the database or the API).
Say I want to include a new data field. The first thing is to add this field on the backend (API or database) and deploy it. If it's an API, you can even test it by calling the API and seeing that the new field is now returned; that's not a problem for your UI, because it's just a field the UI doesn't use yet. After that, you change the UI to actually use the field and deploy that to production.
If I want to use Firebase on the server side, in place of REST routes using Express and node.js, how would I go about dealing with scaling and load balancing? For example, if I have an Express app that uses Firebase on the server side, will every single server that spins up contain these listeners and react to them? Is there a scalable solution to using Firebase on the server side with elastic load balancing in mind?
I think your question is too broad in its current form, but I'll give you at least a few (equally broad) options.
There are probably dozens of solutions possible, but most of them will be variations on these two broad scenarios: centralized vs decentralized.
You can use a centralized authority, which assigns each task to one of the worker nodes. This is normally what a load balancer does, so you might want to search for load balancing algorithms.
Alternatively you can have each node simply try to claim the work. The nodes should then use a transaction to update the "work queue", so that only one node ends up doing the work.
Related: https://github.com/FirebaseExtended/firebase-queue
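For the decentralized option, here's a minimal sketch using the Firebase Admin SDK: each server node races to claim a task, and a transaction guarantees that only one claim commits. The database paths, field names, and node id are illustrative assumptions:

const admin = require('firebase-admin'); // npm install firebase-admin
admin.initializeApp(); // assumes credentials and databaseURL come from the environment

const NODE_ID = process.env.NODE_ID || 'node-1'; // hypothetical node id

function tryClaimTask(taskId) {
  const ref = admin.database().ref('tasks/' + taskId + '/owner');
  return ref.transaction(function (currentOwner) {
    if (currentOwner !== null) return; // someone else owns it: abort
    return NODE_ID;                    // otherwise claim it
  }).then(function (result) {
    if (result.committed) {
      // This node won the race, so it is safe to process the task here.
      console.log('claimed ' + taskId);
    }
    return result.committed;
  });
}

The firebase-queue project linked above packages this same claim-by-transaction pattern into a reusable worker.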
I've had a chance to play with this tool for a while now and built a chat application instead of a hello world. My project has two Meteor applications sharing the same Mongo database:
client
operator
When I type a message from the operator console, it sometimes takes as much as 7-8 seconds to appear for the subscribed client. So my question is: how much real-time behavior can I actually expect from Meteor? Right now I see better results with services such as PubNub or Pusher.
Could the delay come from the fact that there are two applications subscribed to the same database?
P.S. I need two applications because the client and operator apps are totally different, mostly in design and media libraries (CSS/jQuery plugins, etc.), and this is the only way I found to make the client app much lighter.
If your two apps share a database without DDP, they are not going to operate in real time. You should either use one complete app or use DDP to relay messages to the other instance (via Meteor.connect).
This is a bit of an issue at the moment if you want to do the subscription on the server, as there isn't really server-to-server DDP support with subscriptions yet. So you need to use the client to make the subscription:
// Connect to the other Meteor instance over DDP and subscribe there.
var connection = Meteor.connect("http://YourOtherMeteorInstanceUrl");
connection.subscribe("messages");
Instead of
Meteor.subscribe("messages");
In your client app, of course using the same subscription names as in the corresponding publish functions on the other Meteor instance.
Akshat's answer is good, but here's a bit more explanation of why:
When Meteor is running, it adds an observer to the collection, so any changes to data in that collection are immediately reactive. But if you have two applications writing to the same database (and this is how you are synchronizing data), each app's observer doesn't immediately see the writes made by the other app. So it's not going to be fully real-time.
However, the server does regularly poll the database for outside changes, hence the 7-8 second delay.
It looks like your applications are designed this way to overcome the limitation Meteor has right now where all client code is delivered to all clients. Fixing this is on the roadmap.
In the meantime, in addition to Akshat's suggestion, I would also recommend using Meteor methods to insert messages. Then from the client application, use Meteor.call('insertMessage', options) to add messages via DDP, which will keep the application real-time.
You would also want to separate the databases.
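A minimal sketch of that methods approach, with illustrative names (the Messages collection and method name are assumptions; the URL placeholder is from Akshat's answer):

// On the server that owns the Messages collection:
Meteor.methods({
  insertMessage: function (options) {
    check(options, Object);
    return Messages.insert({
      text: options.text,
      createdAt: new Date()
    });
  }
});

// In the other app, call the method over the remote DDP connection:
var connection = Meteor.connect('http://YourOtherMeteorInstanceUrl');
connection.call('insertMessage', { text: 'hello' }, function (error, result) {
  if (error) console.log(error);
});

Because the write goes through the owning app's server, its observer fires immediately and subscribed clients update without waiting for the poll.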