How many connections does Meteor create to MongoDB? - meteor

I have a Meteor app with a single Node.js process. It connects to Compose with oplog tailing enabled.
I'm the only user on the server, but the Compose stats show 27 connections.
Looking at the server, "netstat -ap" confirms that node really does hold 27 connections.
I can't find any information about the heuristic Meteor uses to decide how many MongoDB connections to create.


How to set up, in a Dockerfile, an nginx in front of the same container (Google Cloud Run)? I tried making a Python server (Odoo) handle HTTP/2

Thanks in advance. My question is: how do I set up, in a Dockerfile, an nginx in front of a container? I saw other questions [5], and it seems that the only way to allow HTTP/2 for Odoo on Cloud Run is to create an nginx container, as sidecars are not allowed in Cloud Run. But I also read that it can be done with supervisord. Has anyone been able to do that, so as to handle HTTP/2 and get past the Cloud Run max request quota?
I wanted to try this: in the entrypoint.sh, write a command to install nginx, then set its configuration so it is used as a proxy that allows HTTP/2. But I'm asking here because I'm not sure it will work, as I read in [2] that nginx won't work with a Python server.
The whole story: I'm deploying an Odoo CE on Google Cloud Run + Cloud SQL. I configured one Odoo instance as a demo with 5 blog posts, and when I tried to download a backup, it said the request was too large. I figured this was because of Cloud Run's 32 MB request quota [1], as the backup size was 52 MB. Then I saw that the quota for HTTP/2 connections is unlimited, so I enabled the HTTP/2 option in Cloud Run. But when I next accessed the service, a "connection failure" error appeared.
To fix this I thought of two ways. The first was upgrading the Odoo HTTP server to one that can handle HTTP/2, like Quark. This seems impractical to me, because it would probably force me to rewrite many pieces of Odoo. The second option was running nginx in front of the Odoo container (which runs a Python web server on Werkzeug). I read on the web that nginx can upgrade connections to HTTP/2. But I also read that Cloud Run runs its own internal load balancer [2]. Hence my question: would it be possible to run, in the same Odoo container, an nginx that exposes this service on Cloud Run?
References:
[1] https://cloud.google.com/run/quotas
[2] Cloud Run needs NGINX or not?
[3] https://linuxize.com/post/configure-odoo-with-nginx-as-a-reverse-proxy/
[4] https://github.com/odoo/docker/tree/master/14.0
[5] How to setup nginx in front of node in docker for Cloud Run?
Has anyone been able to do that, so as to handle HTTP/2 and get past the Cloud Run max request quota?
Handling HTTP/2 does not help you increase your maximum requests per container limit on Cloud Run.
HTTP/2 only helps you reuse a single TCP connection to send concurrent requests, but Cloud Run does not actually count connections, so you are not on the right track here. HTTP/2 won't help you.
Cloud Run today already supports 1,000 container instances (with 250 concurrent requests per instance in preview), so that's 250,000 simultaneous requests for your service. If you need more, contact support.
But, I ask you here as I'm not sure if it'll work, as I read in [2] that nginx won't work with a python server.
Sounds incorrect.
If you configure a multi-process container you can run Python behind nginx on Cloud Run. But as you said, Cloud Run does not need nginx.
Overall you don't need HTTP/2 in this scenario.
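If you do go the nginx route anyway, a minimal sketch of what the proxy config might look like is below. This is an untested assumption, not a verified Cloud Run recipe: it assumes Cloud Run sends the container cleartext HTTP/2 (h2c) on port 8080 when the HTTP/2 option is enabled, and that Odoo listens on its default port 8069.

```nginx
# Hypothetical sketch: nginx in front of an Odoo/Werkzeug process in the
# same container. Cloud Run terminates TLS, so nginx listens for
# cleartext HTTP/2 (h2c) on the container port and proxies to Odoo
# over plain HTTP/1.1 on localhost.
server {
    listen 8080 http2;

    location / {
        proxy_pass http://127.0.0.1:8069;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Both processes would then need to be started from the entrypoint (e.g. via supervisord, as mentioned in the question), since Cloud Run runs a single container command.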

Listen voice messages on server A from server B with asterisk

I have a RealTime Asterisk setup with 3 servers. The database holds only SIP peers and voicemail boxes. Voicemail messages are stored on the filesystem (FILE_STORAGE).
Servers A and B handle calls and SIP registrations, and Server C runs DUNDi.
Currently everything works fine: I can call from Server A to Server B. The problem is when I leave a message for a number that is busy and registered on Server B. If that number then disconnects and registers on Server A, it can't listen to the message, because it is stored on Server B.
How can I make any user able to listen to his messages no matter which server he is on?
You have a lot of options, most of them in the clustering area.
The simplest options are:
- A GlusterFS setup on both servers, with voicemail kept in a GlusterFS directory. This one handles failover.
- An NFS/Samba share between the two servers.
- MySQL master-master replication with ODBC_STORAGE, putting all voicemails in the database. This is recommended if you also want easy access to your voice files from a web interface, plus simple search/lookup/retrieval of messages. It is highly recommended to use InnoDB tables and an optimized MySQL config.
The easiest way to let users listen to their messages no matter which of the two servers they are registered on is NFS, mounting for example /var/spool/asterisk/. In this case you need to install some additional components.
Here is a great tutorial on how to do this:
How to configure an NFS server and mount NFS shares - Ubuntu
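As a sketch of that setup (the subnet and hostname below are hypothetical placeholders, not values from the question):

```
# /etc/exports on the server that owns the voicemail spool
# (client subnet is a hypothetical example):
/var/spool/asterisk  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/fstab on the other server, mounting the shared spool
# ("nfs-server" is a hypothetical hostname):
nfs-server:/var/spool/asterisk  /var/spool/asterisk  nfs  defaults,_netdev  0  0
```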
Another way, if you can, is to make a master-slave cluster with the two servers and use rsync. Then you can sync the folder to the remote server every X minutes/hours/days to keep the messages available in case of failure.
rsync -a local_dir/ user@remote-host-ip:/path/to/dir
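For the "every X minutes" part, a crontab entry could drive the rsync; the interval, user, and paths below are assumptions for illustration:

```
# Sync the voicemail spool to the standby server every 15 minutes
# (interval, user, and host are hypothetical)
*/15 * * * * rsync -a /var/spool/asterisk/voicemail/ user@remote-host-ip:/var/spool/asterisk/voicemail/
```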

Meteor: Run multiple websites on a single instance

I have several Meteor apps which I currently run as individual Meteor instances. This means several Meteor processes running at the same time, each occupying its own port. I therefore need to reconfigure the server whenever I add a new project (using a reverse proxy to redirect incoming requests on port 80 to the corresponding Meteor ports).
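The per-project reverse-proxy entries described above might look like this in nginx (hostnames and ports are hypothetical; the Upgrade headers are needed for Meteor's websocket traffic):

```nginx
# One server block per Meteor app: each hostname on port 80 proxies
# to that app's private port.
server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
server {
    listen 80;
    server_name app2.example.com;
    location / {
        proxy_pass http://127.0.0.1:3002;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```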
I could only find information about running multiple meteor instances for scaling, but nothing about having multiple apps in one instance.

How to get past the MongoDB port error to launch the examples?

I'm getting started with Meteor, using the examples:
https://www.meteor.com/examples/parties
If I deploy it and load the deployment URL ( http://radically-finished-parties-app.meteor.com/ ), the app runs... nothing magic there; it was an easy example.
My issue occurs when I want to run it locally. I get the following message:
"You are trying to access MongoDB on the native driver port. For http diagnostic access, add 1000 to the port number"
I started Meteor with the terminal command:
meteor --port 3004
Setup:
- Mac OS 10.9
- Chrome 31
This is happening because you are accessing the MongoDB port in your web browser.
When you run a Meteor app, e.g. on port 3004:
- Port 3004 is a web proxy to port 3005.
- Port 3005 is the Meteor app itself, in a 'raw' sort of sense (without the websockets part, I think).
- Port 3006 is MongoDB (which you are accessing).
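The layout above follows a simple offset rule, which can be sketched as (assuming the convention described in this answer):

```shell
# Given `meteor --port N`, the answer above says:
#   N   -> web proxy (open this one in the browser)
#   N+1 -> the app itself
#   N+2 -> MongoDB's native driver port (not meant for browsers)
PORT=3004
echo "browser: http://localhost:${PORT}"
echo "mongodb: mongodb://localhost:$((PORT + 2))/meteor"
```

So with `--port 3004`, the browser belongs on 3004 and the error in the question comes from opening 3006 instead.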
Try using a different port, or simply run meteor and access port 3000 in your web browser.
If you moved the port number up because it said the port was in use, the Meteor app may not have exited properly on your computer. Restart your machine, or look in Activity Monitor and kill the rogue node process.
I think what might have happened is that you ran it on 3000, then moved the ports up, and the previous instance did not exit correctly, so what you're hitting is the MongoDB instance of a previous Meteor run.
This also happens when you ran another Meteor on port 2999, forgot about it, and then try to start a second instance on the usual port.
Try making sure Meteor uses its local embedded MongoDB, which it manages on its own:
export MONGO_URL=''
Something had changed in my bash settings that I hadn't copied over to zsh. I uninstalled zsh, and Meteor can now find and access mongo.

Logstash shipper & server on the same box

I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng rather than third-party shippers. This means my logstash server accepts, via syslog-ng, all the logs from the agents.
I then need a logstash process that reads from /var/log/syslog-clients/* and picks up all the log files sent to the central log server. These logs will then be sent to redis on the same VM.
In theory I also need to configure a second logstash process that reads from redis, indexes the logs, and sends them to elasticsearch.
My question:
Do I have to use two different logstash processes (shipper & server) even if they are on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to elasticsearch?
Diagram of my setup:
[client] --syslog-ng--> [log server: syslog-ng files --> logstash-shipper --> redis --> logstash-server --> elasticsearch] <-- kibana
Do I have to use two different logstash processes (shipper & server) even if I am in the same box (I want one log server instance)?
Yes.
Is there any way to have just one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to elasticsearch?
Not that I have seen yet.
Why would you want this? I have a single-machine config and a remote-machine config, and they work extremely reliably, with a small footprint. Maybe you could explain your reasoning a bit; I know I would be interested to hear about it.
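For reference, the two-process split is short to configure. A sketch, using the old-style logstash config syntax from that era; the paths, redis key, and hosts are assumptions for illustration:

```
# shipper.conf -- reads the syslog-ng files, writes to redis
input  { file  { path => "/var/log/syslog-clients/*" } }
output { redis { host => "127.0.0.1" data_type => "list" key => "logstash" } }

# server.conf -- reads from redis, indexes into elasticsearch
input  { redis { host => "127.0.0.1" data_type => "list" key => "logstash" } }
output { elasticsearch { host => "127.0.0.1" } }
```

Each file is then run by its own logstash process on the same box.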
