I am using a Lua script to push parameters into Redis from an Nginx HTTP server:
https://github.com/openresty/lua-resty-redis
I do not want to make a new connection to the Redis server every time. Can I persist the Redis connection? Also, is there an option to make this async?
If you use set_keepalive, specifying the connection pool size (2nd parameter), then on the next connect the lua-resty-redis library will automatically try to reuse a previously idled connection, if any.
It also lets you specify a custom name for your pool. This is all described in the 'redis#connect' method documentation:
Before actually resolving the host name and connecting to the remote backend, this method will always look up the connection pool for matched idle connections created by previous calls of this method.
An optional Lua table can be specified as the last argument to this method to specify various connect options:
pool
Specifies a custom name for the connection pool being used. If omitted, then the connection pool name will be generated from the string template <host>:<port> or <unix-socket-path>.
As for the "async" requirement, the library is already 100% nonblocking.
I have a Symfony application that is running on several servers behind a load balancer. So I have separate hosts www1, www2, www3, etc.
At the moment I'm running messenger:consume only on www1, for fear of race conditions and messages potentially being handled twice.
Now I have a scenario where I need to execute a command on each host.
I was thinking of using separate transports for each host and running messenger:consume on each, consuming only messages from its respective queue. However I want the configuration to be dynamic, i.e. I don't want to do another code release with different transports configuration when a new host is added or removed.
Can you suggest a strategy to achieve this?
If you want to use different queues and different consumers... just configure a different DSN for each www host, stored in environment variables (not in code). Then you can have a different queue or transport for each server.
The transport configuration can include the desired queue name within the DSN, and best practice is to store that configuration in an environment variable, not in code, so you wouldn't need "another code release with different transports config when a new host is added or removed". Simply set the appropriate environment variables when each instance is deployed, the same as you do with the rest of the configuration.
framework:
    messenger:
        transports:
            my_transport:
                dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
On each "www" host you would have a different value for MESSENGER_TRANSPORT_DSN, which would include a different queue name (or a completely different transport).
You would need to run a separate consumer for each instance, with a matching configuration (or run each consumer on the instance itself).
But if all the hosts are actually running the same app, you'd generally use a single consumer, and all the instances should publish to the same queue.
The consumer does not even need to run on the same server as any of the web instances; it simply has to be configured to consume from the appropriate transport/queue.
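The worker command itself would then be identical on every host; only the environment differs (assuming the transport name my_transport from the config above):

php bin/console messenger:consume my_transport -vv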
We operate CENM 1.2 (deployed with Helm templates on a Kubernetes cluster) to build our own private network. The CENM network map server had been running for a few weeks when launching new nodes started failing.
With further investigation, it appears that a request timeout for http://nmap:10000/network-map causes the problem.
In the network map server's log we found the following output when accessing the above URL with curl:
[NMServer] - Error while handling socket client message com.r3.enm.servicesapi.networkmap.handlers.LatestUnsignedNetworkParametersRetrievalMessage#760c53ea: HikariPool-1 - Connection is not available, request timed out after 30000ms.
netstat shows there are at least 3 established connections to the database from the container in which the network map server runs, and I can also connect to the database directly using the CLI.
So I don't think it is either database saturation or a network configuration problem.
Does anyone have an idea why this happens? I think a restart would probably solve the problem, but I want to know the root cause...
Please test the following options.
Since it is the HikariCP (connection pool) component that is throwing the error, it would be worth seeing if increasing the pool size in the network map configuration helps (see below).
Corda uses the Hikari pool for creating the connection pool. To configure the connection pool, any custom properties can be set in the dataSourceProperties section.
dataSourceProperties = {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    ...
    maximumPoolSize = 10
    connectionTimeout = 50000
}
Has a health check been conducted to verify there are sufficient resources on that Postgres database, i.e. basic diagnostic checks?
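As a rough sketch of what such a check could look like on Postgres (these are standard pg_stat_activity/SHOW queries, nothing CENM-specific):

-- how many connections are currently open, versus the server limit
SELECT count(*) AS open_connections FROM pg_stat_activity;
SHOW max_connections;

-- sessions that are busy or stuck and may be holding pool slots
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;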
Another option, to get more information logged from the network map service, is to run it with TRACE logging:
From https://docs.corda.net/docs/cenm/1.2/troubleshooting-common-issues.html
Enabling debug/trace logging
Each service can be configured to run with a deeper log level via command line flags passed at startup:
java -DdefaultLogLevel=TRACE -DconsoleLogLevel=TRACE -jar <enm-service-jar>.jar --config-file <config file>
Clients connect to an API gateway server through a WebSocket connection. This server just orchestrates a swarm of cloud functions that handle all of the data requesting and transforming. The server is stateful: it holds essential session data, which defines, for example, which cloud functions a given user is allowed to request.
This server doesn't use the socket to broadcast data, so socket connections do not interact with each other, and they never will. So all it needs to handle is single-client-to-server communication.
What will happen if I create a bunch of replicas and put a load balancer in front of all of them (regular horizontal scaling)? If a user gets connected to a certain server instance, will the connection stick there, or will the load balancer switch it between instances?
There is a load balancer parameter that allows you to do what you are looking for: session affinity.
"Session affinity if set attempts to send all network request from the same client to the same virtual machine instance."
Even though it seems to be a load balancer setting, you actually set it while creating target pools and/or backend services. You should check whether this solution can be applied to your particular configuration.
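As an illustration (the resource names are made up, and whether you use a target pool or a backend service depends on your load balancer type):

# backend service (proxy-based load balancers)
gcloud compute backend-services update my-ws-backend \
    --global \
    --session-affinity=CLIENT_IP

# target pool (network load balancer)
gcloud compute target-pools create my-ws-pool \
    --region=us-central1 \
    --session-affinity=CLIENT_IP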
I am using the Apache KafkaConsumer in my Scala app to talk to a Kafka server, where the Kafka and ZooKeeper services are running in a Docker container on my VM (the Scala app also runs on this VM). I have set the KafkaConsumer's "bootstrap.servers" property to 127.0.0.1:9092.
The KafkaConsumer does log "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code sets the coordinator values based on the response it receives, which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}}; that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve the address e7059f0f6580.
The address e7059f0f6580 is the container ID of that Docker container.
I have verified using telnet that my VM does not resolve this as a hostname.
What setting do I need to change so that the Kafka broker in my Docker container returns localhost/127.0.0.1 as the host in its response? Or is there something else that I am missing / doing incorrectly?
Update
advertised.host.name is deprecated, and --override should be avoided.
Add/edit advertised.listeners to be in the format
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that the same PORT is listed in the listeners property.
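For this particular setup (client on the VM, broker in a container publishing port 9092), a server.properties sketch might look like the following; the listener protocol and port are illustrative:

# server.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092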
After investigating this problem for hours on end, I found that there is a way to set the hostname while starting up the Kafka server, as follows:
kafka-server-start.sh --override advertised.host.name=xxx (in my case: localhost)
How do I initialize and connect to an OpenLDAP server without using the ldap_open() function?
I would like to implement an "LDAP test connection" feature; for this I need to open a connection with a timeout and close it, to check whether the server is reachable or not.
Just create a socket and call connect(), using non-blocking mode and your choice of select()/poll()/epoll() to control the timeout. Close it immediately on success; don't send anything.
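A minimal sketch of that approach in C, using a non-blocking connect() plus select() for the timeout (the address 127.0.0.1:389 and the 5-second timeout are placeholders):

#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if a TCP connection to ip:port succeeds within timeout_sec, else 0. */
static int ldap_port_reachable(const char *ip, int port, int timeout_sec)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    /* non-blocking mode so connect() returns immediately */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int ok = 0;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        ok = 1;                       /* connected immediately */
    } else if (errno == EINPROGRESS) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        struct timeval tv = { timeout_sec, 0 };

        /* the socket becomes writable when the handshake finishes (or fails) */
        if (select(fd + 1, NULL, &wfds, NULL, &tv) == 1) {
            int err = 0;
            socklen_t len = sizeof(err);
            getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
            ok = (err == 0);
        }
    }

    close(fd);                        /* close immediately, send nothing */
    return ok;
}

int main(void)
{
    printf("LDAP server %s\n",
           ldap_port_reachable("127.0.0.1", 389, 5) ? "reachable" : "unreachable");
    return 0;
}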