Meteor: Run multiple websites on a single instance

I have several Meteor apps which I currently run in individual Meteor instances. This requires several Meteor processes running at the same time, each occupying its own port. I therefore need to configure the server whenever I add a new project (using a reverse proxy to redirect incoming requests on port 80 to the corresponding Meteor ports).
I could only find information about running multiple meteor instances for scaling, but nothing about having multiple apps in one instance.
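For reference, a minimal sketch of the reverse-proxy setup described above, using Node's http-proxy package. The hostnames and ports are hypothetical; Meteor uses websockets (DDP), so upgrade requests are forwarded as well:

// proxy.ts - route incoming hosts on port 80 to per-app Meteor ports (sketch).
import http from "http";
import httpProxy from "http-proxy";

// Map incoming Host headers to the local port of each Meteor app (illustrative).
const apps: Record<string, string> = {
  "app1.example.com": "http://localhost:3001",
  "app2.example.com": "http://localhost:3002",
};

const proxy = httpProxy.createProxyServer({});

const server = http.createServer((req, res) => {
  const target = apps[req.headers.host ?? ""];
  if (!target) {
    res.writeHead(404);
    res.end("Unknown host");
    return;
  }
  proxy.web(req, res, { target });
});

// Forward websocket upgrades so DDP keeps working through the proxy.
server.on("upgrade", (req, socket, head) => {
  const target = apps[req.headers.host ?? ""];
  if (target) proxy.ws(req, socket as any, head, { target });
});

server.listen(80);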

Related

Next.js and caching of pages

We are running a Next.js server as a service on a Kubernetes cluster, with a minimum of two replicas. So in a normal situation, we have these:
our-nextjs-server-prod-bd7c6dc4c-2dlqg 1/1 Running 0 18h
our-nextjs-server-prod-bd7c6dc4c-7dkbp 1/1 Running 0 18h
When the first server is hit for a page it hasn't cached yet, it will work on it, store it in the host node's volume, and serve it from there on subsequent calls. Now, if the second server is hit for the same page but is hosted on a different node, it will, as I understand it, have to regenerate the page, as it doesn't exist on its node's volume.
Is there a way to have multiple Next.js pods from different nodes utilize a common resource to cache pages? A common volume, or an external resource like Redis perhaps? Is there a best practice for that requirement?
For a moment, let's disregard the CDN in front of the Next.js service caching the results for a certain TTL. We need those Next.js pods hit frequently so that they can ping the application server for changed properties that will trigger a rebuild of the page.
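One direction worth noting: newer Next.js versions can delegate the incremental cache to a custom handler (a cacheHandler option in next.config.js on Next 14; an experimental option on 13.x), which would let all pods share a Redis-backed cache. The exact get/set contract depends on the Next.js version, so treat this as a rough sketch; ioredis and a REDIS_URL reachable from every pod are assumptions:

// cache-handler.ts - rough sketch of a shared, Redis-backed incremental cache.
// The get/set signatures below are simplified and must be matched to the
// handler contract of your specific Next.js version.
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

export default class SharedCacheHandler {
  async get(key: string) {
    const raw = await redis.get(key);
    return raw ? JSON.parse(raw) : null;
  }

  async set(key: string, data: unknown) {
    // TTL is illustrative; pick something compatible with your revalidation strategy.
    await redis.set(key, JSON.stringify(data), "EX", 300);
  }
}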

Symfony Messenger different consumers for different app servers

I have a Symfony application that is running on several servers behind a load balancer. So I have separate hosts www1, www2, www3, etc.
At the moment I'm running messenger:consume only on www1, for fear of race conditions and messages potentially being handled twice.
Now I have a scenario where I need to execute a command on each host.
I was thinking of using separate transports for each host and running messenger:consume on each, consuming only messages from its respective queue. However, I want the configuration to be dynamic, i.e. I don't want to do another code release with a different transports configuration whenever a host is added or removed.
Can you suggest a strategy to achieve this?
If you want to use different queues and different consumers... just configure a different DSN for each www host, stored in environment variables (not code). Then you can have a different queue or transport for each server.
The transport configuration can include the desired queue name within the DSN, and best practice is to store that configuration in an environment variable, not in code, so you wouldn't need "another code release with different transports config when a new host is added or removed". Simply add the appropriate environment variables when each instance is deployed, same as you do with the rest of the configuration.
framework:
    messenger:
        transports:
            my_transport:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
On each "www" host you would have a different value for MESSENGER_TRANSPORT_DSN, which would include a different queue name (or a completely different transport).
You would need to run a separate consumer for each instance, with a matching configuration (or run the consumer on each instance itself).
But if all the hosts are actually running the same app, generally you'd use a single consumer, and all the instances should publish to the same queue.
The consumer does not even need to run on the same server as any of the web instances; it simply needs to be configured to consume from the appropriate transport/queue.

HTTP Endpoint routing using Node-RED as a service in Kubernetes

I would like to run Node-RED as a service on Kubernetes to be able to build a custom API using the HTTP IN nodes. The goal is to be able to push any number of different flows to an arbitrary container running Node-RED using the Node-RED API.
I have tried running Node-RED as a service with 5 replicas and built a flow through the UI that has an HTTP In and an HTTP Out node. When I try hitting the service using curl on the minikube IP (e.g. curl http://192.168.64.2:30001/test), it only returns the result if the load balancer happens to land on the container that has the flow. Otherwise, it returns an HTML error page.
Any advice on how I should go about solving this issue? Thanks!
This is working as expected. If you are interacting with the Node-RED editor via the load balancer you are only editing the flow on that instance.
If you have 5 instances of Node-RED and only one of them is running a flow with the HTTP endpoints defined then calls to that endpoint will only succeed 1 time in 5.
You need to make sure that all instances have the same endpoints defined in their flows.
There are several ways you can do this, some examples would be:
Use the Node-RED Admin API to push the flows to each of the Node-RED instances in turn (see the sketch after this list). You will probably need to do this via the private IP address of each instance to prevent the load balancer getting in the way.
Use a custom Storage plugin to store the flow in a database and have all the Node-RED instances load the same flow. You would need to restart the instances to force the flow to be reloaded should you change it.
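For the first option, a minimal sketch using Node 18+'s built-in fetch. The pod IPs and the flow payload are placeholders, and if adminAuth is enabled you would also need an Authorization bearer token:

// push-flows.ts - deploy the same flow to every Node-RED pod via the Admin API.
// Pod IPs are placeholders; in Kubernetes you could resolve them from the
// service's Endpoints instead of hard-coding them.
const pods = ["10.244.0.12", "10.244.1.7"];
const flows = [/* flow JSON exported from the Node-RED editor */];

async function main() {
  for (const ip of pods) {
    const res = await fetch(`http://${ip}:1880/flows`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Replace the full flow configuration on the instance.
        "Node-RED-Deployment-Type": "full",
      },
      body: JSON.stringify(flows),
    });
    console.log(`${ip}: ${res.status}`);
  }
}

main();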

Tools to mock network requests

I hosted my Angular 6 and Laravel application on an AWS EC2 instance. The Angular container is running on (or mapped to) ports 80 and 443, served by Nginx, while the Laravel application is running in another container on (or mapped to) port 8000, also served by Nginx.
I configured the Angular app to run at https://example.com and the Laravel app at https://api.example.com.
To be clear, the containers are tasks in separate services in the same EC2 cluster (set up via CloudFormation), and there is no load balancer.
The setup works perfectly for about 97% of customers, but the remaining customers cannot get content on the site. I worked with one of the customers and realized that the Angular app (at https://example.com) loaded successfully but https://api.example.com:8000 could not be reached.
What on earth can cause this?
Is there a way (maybe a tool) I can use to simulate different kinds of network requests, so that I can reproduce the problematic network conditions of the customers who cannot access the site and trace the issue? Right now I am not having the issue myself, which makes the problem very tricky to debug.

How many connections does Meteor create to MongoDB?

I have a Meteor app with one Node.js process. It connects to Compose with oplog tailing.
There's only me on the server, but I see 27 connections in the Compose stats.
Looking at the server, "netstat -ap" shows that node really does have 27 connections.
I can't find any info about the heuristic Meteor uses to create MongoDB connections.
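For what it's worth, the connection count mostly comes from the Node MongoDB driver's connection pool (plus separate connections for oplog tailing), and the standard MongoDB connection string accepts a maxPoolSize option. One way to experiment, assuming your bundled driver version honors the URI option (values here are illustrative):

MONGO_URL=mongodb://user:pass@host:27017/mydb?maxPoolSize=10
MONGO_OPLOG_URL=mongodb://user:pass@host:27017/local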
