NATS streaming server auto ack with Publish/Subscribe - nats.io

I accidentally noticed that when I restart my NATS subscriber daemon, all messages are processed again, despite the fact that they had already been processed without errors.
publish.go:
err := s.conn.Publish(subject, data)
subscriber.go:
durable := uuid.NewV4().String()
err = s.conn.QueueSubscribe(
    subject,
    durable,
    handler, // here I just log something
    stan.StartWithLastReceived(),
    stan.DurableName(durable),
)
In fact, the NATS Streaming server delivers every message it has ever received each time I restart the subscriber daemon.

I found out why: it is because of durable := uuid.NewV4().String(). Each daemon restart generates a new unique durable queue name, so the server treats it as a brand-new durable subscription and delivers all messages again.
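For reference, a minimal sketch of the fix, assuming the stan Go client; the cluster, client, subject, queue, and durable names are placeholders. The point is simply to keep the queue and durable names fixed across restarts so the server resumes the subscription instead of replaying everything:

package main

import (
    "log"

    stan "github.com/nats-io/stan.go"
)

func main() {
    sc, err := stan.Connect("test-cluster", "worker-1")
    if err != nil {
        log.Fatal(err)
    }
    defer sc.Close()

    // A stable durable name lets the server remember which messages this
    // subscription has already acknowledged across daemon restarts.
    _, err = sc.QueueSubscribe(
        "my.subject",
        "my-queue",
        func(m *stan.Msg) { log.Printf("got: %s", m.Data) },
        stan.DurableName("my-durable"),
        stan.StartWithLastReceived(), // only applies the first time the durable is created
    )
    if err != nil {
        log.Fatal(err)
    }
    select {} // keep the daemon running
}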

Related

What is the difference between a gRPC stream error and a TCP error?

The "google.golang.org/grpc/codes" package documents that some error codes can be generated by gRPC itself while others cannot, and a gRPC stream corresponds to a stream in HTTP/2. I would like to know whether an error on a gRPC stream means that something is wrong with the underlying TCP connection. Do I need to reconnect the TCP connection, or is there a way to reconnect only the stream (for example by creating a new stream)?
for {
    request, err3 := stream.Recv()
    if err3 == io.EOF {
        return nil
    }
    if err3 != nil {
        return err3 // how can I handle this error (grpc generated)?
    }
    // do something with request
}
gRPC-go handles network level reconnections for you: https://pkg.go.dev/google.golang.org/grpc#ClientConn
A ClientConn encapsulates a range of functionality including name resolution, TCP connection establishment (with retries and backoff) and TLS handshakes. It also handles errors on established connections by re-resolving the name and reconnecting.
If Recv returns an error that is not io.EOF, it means that something went wrong (including network errors) and that you have to request a new stream: https://github.com/grpc/grpc-go/blob/master/stream.go#L126 but you don't have to worry about creating a new TCP connection.
RecvMsg blocks until it receives a message into m or the stream is done. It returns io.EOF when the stream completes successfully. On any other error, the stream is aborted and the error contains the RPC status.
If you can't get a new stream, it means that the client can't reach the server or that something is very wrong with it, but if it is a transient network error, you will eventually be able to get a new stream.
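A minimal sketch of that pattern, assuming a hypothetical generated client (pb.NewDataClient, StreamData, and pb.Request are placeholders, and imports are omitted): keep a single long-lived ClientConn and only re-create the stream when Recv returns a non-io.EOF error.

// receiveForever reuses one long-lived *grpc.ClientConn and re-creates only
// the stream when it breaks. pb.NewDataClient, StreamData, and pb.Request are
// placeholders for your generated client.
func receiveForever(ctx context.Context, conn *grpc.ClientConn) error {
    client := pb.NewDataClient(conn)
    for {
        if ctx.Err() != nil {
            return ctx.Err() // stop retrying once the caller cancels
        }
        stream, err := client.StreamData(ctx, &pb.Request{})
        if err != nil {
            log.Printf("could not open stream: %v; retrying", err)
            time.Sleep(time.Second) // simple backoff before requesting a new stream
            continue
        }
        for {
            msg, err := stream.Recv()
            if err == io.EOF {
                return nil // the server completed the stream cleanly
            }
            if err != nil {
                log.Printf("stream aborted: %v; requesting a new stream", err)
                break // back to the outer loop; the same ClientConn is reused
            }
            log.Printf("received: %v", msg) // do something with the message
        }
    }
}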

How to make my redis connection Singleton in Lua?

I am trying to handle incoming HTTP requests with Nginx and Lua. I need to read a value from Redis in each request, and currently I open a Redis connection in every request with this code:
local redis = require "resty.redis"
local red = redis:new()

local ok, err = red:connect("redis", 6379)
if not ok then
    ngx.say("failed to connect: ", err)
    return
end

local res, err = red:auth("abcd")
if not res then
    ngx.log(ngx.ERR, err)
    return
end
Is there any way to make this connection static or a singleton to improve the performance of my request handler?
It is impossible to share a cosocket object (and, therefore, a redis object, check this answer for details) between different requests:
The cosocket object created by this API function has exactly the same lifetime as the Lua handler creating it. So never pass the cosocket object to any other Lua handler (including ngx.timer callback functions) and never share the cosocket object between different Nginx requests.
However, nginx/ngx_lua uses a connection pool internally:
Before actually resolving the host name and connecting to the remote backend, this method will always look up the connection pool for matched idle connections created by previous calls of this method
That being said, you just need to use sock:setkeepalive() instead of sock:close() for persistent connections. The redis object interface has a corresponding method: red:set_keepalive().
You'll still need to create a redis object on a per-request basis, but this will help you avoid the connection overhead.

Using a KillSwitch in an akka http streaming request

I'm using Akka's HTTP client to make a connection to an infinitely streaming HTTP endpoint. I am having difficulty getting the client to close the upstream to the HTTP server.
Here's my code (StreamRequest().stream returns a Source[T, Any]. It's generated by Http().outgoingConnectionHttps and then a Flow[HttpResponse, T, NotUsed] to convert HttpResponse to a stream of T):
val (killSwitch, tFuture) = StreamRequest()
  .stream
  .takeWithin(timeToStreamFor)
  .take(toPull)
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.seq)(Keep.both)
  .run()
Then I have
tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
}
When I run the code I see the 'Shutting down the connection' log message but the server tells me that I'm still connected. It disconnects only when the JVM exits.
Any ideas what I'm doing wrong or what I should be doing differently here?
Thanks!
I suspect you should invoke Http().shutdownAllConnectionPools() when tFuture completes. The pool does not close connections because they can be reused by different stream materialisations, so the pool is not closed when a stream completes. The connection shutdown you see in the log may be because the idle timeout has triggered for one of the connections.

How to query TCP connection state in go?

On the client side of a TCP connection, I am attempting to reuse established connections as much as possible to avoid the overhead of dialing every time I need a connection. Fundamentally, it's connection pooling, although technically my pool size just happens to be one.
I'm running into a problem in that if a connection sits idle for long enough, the other end disconnects. I've tried using something like the following to keep connections alive:
err = conn.(*net.TCPConn).SetKeepAlive(true)
if err != nil {
    fmt.Println(err)
    return
}

err = conn.(*net.TCPConn).SetKeepAlivePeriod(30 * time.Second)
if err != nil {
    fmt.Println(err)
    return
}
But this isn't helping. In fact, it's causing my connections to close sooner. I'm pretty sure this is because (on a Mac) this setting means the connection health starts being probed after 30 seconds and is then probed 8 more times at 30-second intervals. The server side must not support keepalive, so after 4 minutes and 30 seconds the client disconnects.
There might be nothing I can do to keep an idle connection alive indefinitely, and that would be absolutely fine if there were some way for me to at least detect that a connection has been closed, so that I can seamlessly replace it with a new one. Alas, even after reading all the docs and scouring the blogosphere for help, I can't find any way at all in Go to query the state of a TCP connection.
There must be a way. Does anyone have any insight into how that can be accomplished? Many thanks in advance to anyone who does!
EDIT:
Ideally, I'd like to learn how to handle this at a low level with pure Go, without using third-party libraries. Of course, if there is some library that does this, I don't mind being pointed in its direction so I can see how they do it.
The socket API doesn't give you access to the state of the connection. You can query the current state in various ways from the kernel (/proc/net/tcp[6] on Linux, for example), but that doesn't guarantee that further sends will succeed.
I'm a little confused on one point here. My client is ONLY sending data. Apart from acking the packets, the server sends nothing back. Reading doesn't seem an appropriate way to determine connection status, as there's nothing TO read.
The socket API is defined such that you detect a closed connection by a read returning 0 bytes. That's the way it works. In Go, this is translated to a Read returning io.EOF. This will usually be the fastest way to detect a broken connection.
So am I supposed to just send and act on whatever errors occur? If so, that's a problem, because I'm observing that I typically do not get any errors at all when attempting to send over a broken pipe, which seems totally wrong.
If you look closely at how TCP works, this is the expected behavior. If the connection is closed on the remote side, then your first send will trigger an RST from the server, fully closing the local connection. You either need to read from the connection to detect the close, or if you try to send again you will get an error (assuming you've waited long enough for the packets to make a round trip), like "broken pipe" on Linux.
To clarify... I can dial, unplug an ethernet cable, and STILL send without error. The messages don't get through, obviously, but I receive no error
If the connection is actually broken, or the server is totally unresponsive, then you're sending packets off to nowhere. The TCP stack can't tell the difference between packets that are really slow, packet loss, congestion, or a broken connection. The system needs to wait for the retransmission timeout, and retry the packet a number of times before failing. The standard configuration for retries alone can take between 13 and 30 minutes to trigger an error.
What you can do in your code is:
1. Turn on keepalive. This will notify you of a broken connection more quickly, because the idle connection is always being tested.
2. Read from the socket. Either have a concurrent Read in progress, or check for something to read first with select/poll/epoll (Go usually uses the first).
3. Set timeouts (deadlines in Go) for everything. A small sketch of this appears after the read-loop example below.
If you're not expecting any data from the connection, checking for a closed connection is very easy in Go; dispatch a goroutine to read from the connection until there's an error.
notify := make(chan error)

go func() {
    buf := make([]byte, 1024)
    for {
        n, err := conn.Read(buf)
        if err != nil {
            notify <- err
            return
        }
        if n > 0 {
            fmt.Printf("unexpected data: %s\n", buf[:n])
        }
    }
}()
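For the third point above, a minimal sketch of a per-write deadline, assuming a plain net.Conn and an illustrative 10-second timeout; a dead or unreachable peer then surfaces as a timeout error instead of an indefinite block:

func writeWithDeadline(conn net.Conn, payload []byte) error {
    // Give the write a bounded amount of time; if the peer is gone, the call
    // fails with a timeout instead of hanging. 10 seconds is only an example.
    if err := conn.SetWriteDeadline(time.Now().Add(10 * time.Second)); err != nil {
        return err
    }
    _, err := conn.Write(payload)
    return err
}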
There is no such thing as 'TCP connection state', by design. There is only what happens when you send something. There is no TCP API, at any level down to the silicon, that will tell you the current state of a TCP connection. You have to try to use it.
If you're sending keepalive probes, the server doesn't have any choice but to respond appropriately. The server doesn't even know that they are keepalives. They aren't. They are just duplicate ACKs. Supporting keepalive just means supporting sending keepalives.

Creating an idle timeout in Go?

I use CloudFlare for one of my high volume websites, and it sits in front of my stack.
The thing is CloudFlare leaves idle connections open in addition to creating new ones, and it's not a setting I can change.
When I have Varnish or Nginx sitting in front listening on port 80, they have out-of-the-box configuration to hang up idle connections.
This is fine until I had to add a proxy written in Go to the front of my stack. It uses the net/http standard library.
I'm not a Go wizard, but based on what people are telling me, there are only read and write timeout settings, and no setting for hanging up idle connections.
Now my server will fill up with connections and die unless I set read and write timeouts, but the problem is that my backend sometimes takes a long time, so good requests get cut off when they shouldn't be.
What is the proper way to handle idle connections with Go http?
Edit 1: To be clearer, I'm using httputil.NewSingleHostReverseProxy to construct a proxy, which exposes transport options, but only for the upstream. The problems I am having are downstream; the timeouts need to be set on the http.Server object that uses the ReverseProxy as a handler, and http.Server does not expose a transport.
Edit 2: I would prefer an idle timeout to a read timeout, since the latter would apply to an active uploader.
Thanks
The proper way to hang up idle connections in the Go HTTP server is to set the read timeout.
It is not necessary to set the write timeout to hang up on idle clients. Don't set this value or adjust it up if it's cutting off responses.
If you have long uploads, then use a connection state callback to implement separate idle and read timeouts:
server.ConnState = func(c net.Conn, cs http.ConnState) {
    switch cs {
    case http.StateIdle, http.StateNew:
        c.SetReadDeadline(time.Now().Add(idleTimeout))
    case http.StateActive:
        c.SetReadDeadline(time.Now().Add(activeTimeout))
    }
}
See the net/http.Transport docs. The Transport type has some options for dealing with idle HTTP connections in the keep-alive state. From reading your question, the option that seems most relevant to your problem is the MaxIdleConnsPerHost field:
MaxIdleConnsPerHost, if non-zero, controls the maximum idle (keep-alive) connections to keep per-host. If zero, DefaultMaxIdleConnsPerHost is used.
Reading the code, the default is 2 per host.
The Transport type also has a method to zap all idle connections: CloseIdleConnections.
CloseIdleConnections closes any connections which were previously connected from previous requests but are now sitting idle in a "keep-alive" state. It does not interrupt any connections currently in use.
You can specify a Transport on any http client:
tr := &http.Transport{
    TLSClientConfig:     &tls.Config{RootCAs: pool},
    DisableCompression:  true,
    MaxIdleConnsPerHost: 1,
}
client := &http.Client{Transport: tr}
resp, err := client.Get("https://example.com")
Another thing worth noting: the docs recommend that you keep a single http client object that is re-used across all your requests (i.e. like a global variable).
Clients and Transports are safe for concurrent use by multiple goroutines and for efficiency should only be created once and re-used.
If you are creating many http client objects in your proxy implementation, it might explain unbounded growth of idle connections (just guessing at how you might be implementing this, though).
EDIT: Reading a little bit more, the net/httputil package has some convenience types for reverse proxies. See the ReverseProxy type. That struct also allows you to supply your own Transport object, allowing you to control your proxy's idle client behavior via this helper type.
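Putting the two answers together, a minimal sketch (the backend URL, listen address, and timeout values are only placeholders): the ReverseProxy's Transport governs idle keep-alive connections to the upstream, while a ConnState callback on the http.Server gives idle downstream clients a shorter read deadline than active ones.

package main

import (
    "log"
    "net"
    "net/http"
    "net/http/httputil"
    "net/url"
    "time"
)

func main() {
    backend, err := url.Parse("http://127.0.0.1:8080") // placeholder backend
    if err != nil {
        log.Fatal(err)
    }

    proxy := httputil.NewSingleHostReverseProxy(backend)
    proxy.Transport = &http.Transport{ // upstream (backend) side
        MaxIdleConnsPerHost: 1,
    }

    server := &http.Server{
        Addr:    ":80",
        Handler: proxy,
    }
    // Downstream side: idle and new connections get a short read deadline,
    // active ones a longer one, so slow backend responses are not cut off.
    server.ConnState = func(c net.Conn, cs http.ConnState) {
        switch cs {
        case http.StateIdle, http.StateNew:
            c.SetReadDeadline(time.Now().Add(30 * time.Second))
        case http.StateActive:
            c.SetReadDeadline(time.Now().Add(5 * time.Minute))
        }
    }

    log.Fatal(server.ListenAndServe())
}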
