I have a Symfony application that is running on several servers behind a load balancer. So I have separate hosts www1, www2, www3, etc.
At the moment I'm running messenger:consume only on www1, for fear of race conditions and messages potentially being handled twice.
Now I have a scenario where I need to execute a command on each host.
I was thinking of using separate transports for each host and running messenger:consume on each, consuming only messages from its respective queue. However, I want the configuration to be dynamic, i.e. I don't want to do another code release with a different transport configuration whenever a host is added or removed.
Can you suggest a strategy to achieve this?
If you want to use different queues and different consumers, just configure a different DSN for each www host, stored in environment variables (not code). Then you can have different queues or transports for each server.
The transport configuration can include the desired queue name within the DSN, and best practice is to store that configuration in an environment variable, not in code, so you wouldn't need "another code release with different transports config when a new host is added or removed". Simply add the appropriate environment variables when each instance is deployed, the same as you do with the rest of the configuration.
framework:
    messenger:
        transports:
            my_transport:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
On each "www" host you would have a different value for MESSENGER_TRANSPORT_DSN, which would include a different queue name (or a completely different transport).
You would need to run a separate consumer for each instance, with a matching configuration (or run the consumer on each instance itself).
But if all the hosts are actually running the same app, you'd generally use a single consumer, and all the instances should publish to the same queue.
The consumer does not even need to run on the same server as any of the web instances; it just has to be configured to consume from the appropriate transport/queue.
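Each host (or a dedicated worker box) then runs the usual worker command against the transport name from the configuration above; which queue it drains comes entirely from the environment:

    php bin/console messenger:consume my_transport --time-limit=3600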
Related
I have a gRPC client (not dockerised) and a gRPC server application, which I want to dockerise.
What I don't understand is that gRPC first creates a connection with a server, which involves a handshake. So, if I want to deploy the dockerised server on ECS with multiple instances, how will the client switch from one to the other (e.g., if one gRPC server falls over)?
I know AWS load balancers now work with HTTP/2, but I can't find information on how to handle the fact that the server might change after the client has already opened a connection to another one.
What is involved?
You don't necessarily need an in-line load balancer for this. By using a Round Robin client-side load balancing policy along with a DNS record that points to multiple backend instances, you should be able to get some level of redundancy.
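A minimal sketch of that setup in Python (the DNS name is a placeholder; in practice it would be, say, an ECS service-discovery record resolving to every task's IP):

    import grpc

    # "dns:///" selects the DNS resolver; the target name is hypothetical and
    # should resolve to all backend instances.
    # "round_robin" makes the client open a subchannel per resolved address
    # and rotate calls across them; a dead backend drops out of the rotation.
    channel = grpc.insecure_channel(
        "dns:///grpc-backend.internal.example:50051",
        options=[("grpc.lb_policy_name", "round_robin")],
    )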
I'm running Docker Compose (v2) and have a node service (website) and a Python-based API deployed with nginx sitting in front of them.
One thing I would like to do is be able to scale the services by adding more containers. If I know ahead of time how many containers I will have, I can hardcode the nginx upstream config with references to the IPs of the containers which Docker makes available. However, the problem is that I want the upstream nginx config to be dynamic, e.g. if I add another Docker container, it simply appends the container's address to the list of servers in the upstream block.
My idea was to create a script which automatically appends the upstream servers using env variables when the containers change, but I'm unsure where to start and can't find a good example.
There are a couple of ways to achieve this. What you are referring to is usually called service discovery, and it comes in many forms. I'll describe two of them that I have used before.
The first and simplest one (which works fine when you only need to discover containers locally on a single server) is a local proxy which makes use of the Docker socket or API. https://github.com/jwilder/nginx-proxy is one of the popular ones and should work well for prototyping scalable services in Compose.
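A minimal Compose sketch of that approach (image and host names are placeholders): nginx-proxy watches the Docker socket and regenerates its upstream config whenever a container with a VIRTUAL_HOST variable starts or stops.

    # docker-compose.yml (sketch)
    version: "2"
    services:
      proxy:
        image: jwilder/nginx-proxy
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
      website:
        image: my-node-website      # placeholder for your node service
        environment:
          - VIRTUAL_HOST=www.example.local

Scaling then becomes "docker-compose up -d --scale website=3" (or "docker-compose scale website=3" on older versions), with no nginx config edits.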
Another way (which is more multi-host friendly but more complicated) would be registering services in a registry (such as etcd or Consul) and then dynamically writing out the configuration. To do this, you can use a registration system (such as https://github.com/gliderlabs/registrator) to register the containers and their ports. Then your proxy or application can consume a configuration file written out using a template system like https://github.com/kelseyhightower/confd.
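With the second approach the proxy config is rendered from the registry. A confd template for the nginx upstream block might look roughly like this (the key path is hypothetical and depends on how registrator names your services in etcd/Consul); confd rewrites the file and reloads nginx whenever the keys change:

    # /etc/confd/templates/upstream.tmpl (sketch)
    upstream website {
    {{range getvs "/services/website/*"}}
        server {{.}};
    {{end}}
    }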
I have several Meteor apps which I currently run in individual Meteor instances. This requires that I have several Meteor processes running at the same time, each occupying its own port. I therefore need to configure the server whenever I add a new project (using a reverse proxy to redirect incoming requests on port 80 to the corresponding Meteor ports).
I could only find information about running multiple Meteor instances for scaling, but nothing about having multiple apps in one instance.
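For context, the per-app reverse-proxy setup described above usually looks something like this (names and ports are placeholders), and it's a block like this that has to be added for every new project:

    # nginx sketch: one server block per Meteor app
    server {
        listen 80;
        server_name app1.example.com;
        location / {
            proxy_pass http://127.0.0.1:3001;          # this app's Meteor port
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;    # keep websockets/DDP working
            proxy_set_header Connection "upgrade";
        }
    }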
I am trying to get a MessageDriven (EJB 3) bean to subscribe to a JMS Topic on another glassfish instance on another host. Is this possible?
In the Glassfish console you can modify the JMS server and point it to another Glassfish instance or a standalone OpenMQ broker. Although you can configure several JMS hosts, to my knowledge Glassfish will always use the one called default_JMS_host, so that's the one you want to edit.
Just one thing: in such a setup the two server instances will share queues and topics, which may not be what you want if, for example, the two servers run the same application but shouldn't share a particular queue. This can easily be solved via the Destination Resources configuration, by specifying different physical names for that queue.
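For example (the JNDI and physical names here are placeholders), each instance can map the same JNDI name to a different physical destination:

    # on instance A
    asadmin create-jms-resource --restype javax.jms.Queue \
        --property Name=OrdersPhysicalA jms/OrdersQueue

    # on instance B, same JNDI name but a different physical destination
    asadmin create-jms-resource --restype javax.jms.Queue \
        --property Name=OrdersPhysicalB jms/OrdersQueue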
Using BizTalk, I need to read data from one of two databases that are hosted on Unix, accessed via ODBC.
The data is replicated between the databases, and if one of the databases does not respond I need to switch to the other. There is no load balancer or anything similar, so I need to be able to do the switch on the BizTalk server.
I was thinking of creating two receive locations, one for each database server, with only one of them enabled, and then having a Windows service that periodically tries to connect to one of the database servers; if there is an exception, it calls a PowerShell script that disables the receive location for the server that does not respond and enables the other receive location.
Is there a better solution for this?
I would solve this as follows:
In BizTalk, create a single HTTP receive location.
Create a Windows service.
In the Windows service, poll the first database; if it does not respond, poll the second database.
Have the Windows service post the information to the HTTP receive location.
You need to consider what happens if you read the same data twice, once from the main database and once from the backup.
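A rough Python sketch of the service's polling logic (the DSNs, query and URL are all hypothetical, and the real implementation would be a proper Windows service): it tries the primary database, falls back to the replica, and posts each row with its id so the BizTalk side can de-duplicate.

    import pyodbc
    import requests

    PRIMARY_DSN = "DSN=unixdb1"                    # main database
    SECONDARY_DSN = "DSN=unixdb2"                  # replicated fallback
    RECEIVE_URL = "http://biztalk-host/receive"    # BizTalk HTTP receive location

    def fetch_rows():
        """Try the primary database first; on failure, try the replica."""
        for dsn in (PRIMARY_DSN, SECONDARY_DSN):
            try:
                with pyodbc.connect(dsn, timeout=5) as conn:
                    return conn.execute("SELECT id, payload FROM outbox").fetchall()
            except pyodbc.Error:
                continue  # this database is down, try the next one
        return []

    def post_rows():
        for row_id, payload in fetch_rows():
            # Posting the row id lets the BizTalk process detect data that
            # was read twice (once from each replica).
            requests.post(RECEIVE_URL, json={"id": row_id, "data": payload})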