Getting "127.0.0.1 can't assign requested address" - http.Client - http

What I'm doing is fairly straightforward: I need to create a "proxy" server that is very minimal and fast. Currently I have a baseline server that is proxied to (Node.js) and a proxy service (Go). Please excuse the lack of actual proxying; I'm just testing for now.
Baseline Service
var http = require('http');

http.createServer(function (req, res) {
  // console.log("received request");
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, '127.0.0.1');

console.log('Server running at http://127.0.0.1:8080/');
Proxy Service
package main

import (
    "flag"
    "log"
    "net/http"
    "net/url"
)

var (
    listen = flag.String("listen", "0.0.0.0:9000", "listen on address")
    logp   = flag.Bool("log", false, "enable logging")
)

func main() {
    flag.Parse()
    proxyHandler := http.HandlerFunc(proxyHandlerFunc)
    log.Println("Started router-server on", *listen)
    log.Fatal(http.ListenAndServe(*listen, proxyHandler))
}

func proxyHandlerFunc(w http.ResponseWriter, r *http.Request) {
    // Log if requested
    if *logp {
        log.Println(r.URL)
    }

    /*
     * Tweak the request as appropriate:
     * - RequestURI may not be sent to client
     * - Set new URL
     */
    r.RequestURI = ""
    u, err := url.Parse("http://localhost:8080/")
    if err != nil {
        log.Fatal(err)
    }
    r.URL = u

    // And proxy
    // resp, err := client.Do(r)
    c := make(chan *http.Response)
    go doRequest(c)
    resp := <-c
    if resp != nil {
        err := resp.Write(w)
        if err != nil {
            log.Println("Error writing response")
        } else {
            resp.Body.Close()
        }
    }
}

func doRequest(c chan *http.Response) {
    // new client for every request.
    client := &http.Client{}
    resp, err := client.Get("http://127.0.0.1:8080/test")
    if err != nil {
        log.Println(err)
        c <- nil
    } else {
        c <- resp
    }
}
My issue, as mentioned in the title, is that I am getting errors like "2013/10/28 21:22:30 Get http://127.0.0.1:8080/test: dial tcp 127.0.0.1:8080: can't assign requested address" from the doRequest function, and I have no clue why. Googling this particular error yields seemingly irrelevant results.

There are two major problems with this code.
You are not handling the client stalling or using keep-alives (handled below by getTimeoutServer).
You are not handling the server (what your http.Client is talking to) timing out (handled below by TimeoutConn).
This is probably why you are exhausting your local ports. I know from past experience that Node.js will keep-alive you very aggressively.
There are also lots of little issues: creating objects every time when you don't need to, and creating unneeded goroutines (each incoming request is already in its own goroutine before you handle it).
Here is a quick stab (which I haven't had time to test well). Hopefully it will put you on the right track. You will want to upgrade this to not buffer the responses locally; see the sketch after the code.
package main

import (
    "bytes"
    "errors"
    "flag"
    "fmt"
    "log"
    "net"
    "net/http"
    "net/url"
    "runtime"
    "strconv"
    "time"
)

const DEFAULT_IDLE_TIMEOUT = 5 * time.Second

var (
    listen       string
    logOn        bool
    localhost, _ = url.Parse("http://localhost:8080/")
    client       = &http.Client{
        Transport: &http.Transport{
            Proxy: NoProxyAllowed,
            Dial: func(network, addr string) (net.Conn, error) {
                return NewTimeoutConnDial(network, addr, DEFAULT_IDLE_TIMEOUT)
            },
        },
    }
)

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    flag.StringVar(&listen, "listen", "0.0.0.0:9000", "listen on address")
    flag.BoolVar(&logOn, "log", true, "enable logging")
    flag.Parse()

    server := getTimeoutServer(listen, http.HandlerFunc(proxyHandlerFunc))
    log.Printf("Starting router-server on %s\n", listen)
    log.Fatal(server.ListenAndServe())
}

func proxyHandlerFunc(w http.ResponseWriter, req *http.Request) {
    if logOn {
        log.Printf("%+v\n", req)
    }

    // Setup request URL
    origURL := req.URL
    req.URL = new(url.URL)
    *req.URL = *localhost
    req.URL.Path, req.URL.RawQuery, req.URL.Fragment = origURL.Path, origURL.RawQuery, origURL.Fragment
    req.RequestURI, req.Host = "", req.URL.Host

    // Perform request
    resp, err := client.Do(req)
    if err != nil {
        w.WriteHeader(http.StatusBadGateway)
        w.Write([]byte(fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err)))
        return
    }
    defer resp.Body.Close()

    var respBuffer *bytes.Buffer
    if resp.ContentLength != -1 {
        respBuffer = bytes.NewBuffer(make([]byte, 0, resp.ContentLength))
    } else {
        respBuffer = new(bytes.Buffer)
    }
    if _, err = respBuffer.ReadFrom(resp.Body); err != nil {
        w.WriteHeader(http.StatusBadGateway)
        w.Write([]byte(fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err)))
        return
    }

    // Write result of request
    headers := w.Header()
    var key string
    var val []string
    for key, val = range resp.Header {
        headers[key] = val
    }
    headers.Set("Content-Length", strconv.Itoa(respBuffer.Len()))
    w.WriteHeader(resp.StatusCode)
    w.Write(respBuffer.Bytes())
}
func getTimeoutServer(addr string, handler http.Handler) *http.Server {
    // keeps people who are slow or are sending keep-alives from eating all our sockets
    const (
        HTTP_READ_TO  = DEFAULT_IDLE_TIMEOUT
        HTTP_WRITE_TO = DEFAULT_IDLE_TIMEOUT
    )
    return &http.Server{
        Addr:         addr,
        Handler:      handler,
        ReadTimeout:  HTTP_READ_TO,
        WriteTimeout: HTTP_WRITE_TO,
    }
}

func NoProxyAllowed(request *http.Request) (*url.URL, error) {
    return nil, nil
}

// TimeoutConn -------------------------
// Put me in my own TimeoutConn.go?
type TimeoutConn struct {
    net.Conn
    readTimeout, writeTimeout time.Duration
}

var invalidOperationError = errors.New("TimeoutConn does not support or allow .SetDeadline operations")

func NewTimeoutConn(conn net.Conn, ioTimeout time.Duration) (*TimeoutConn, error) {
    return NewTimeoutConnReadWriteTO(conn, ioTimeout, ioTimeout)
}

func NewTimeoutConnReadWriteTO(conn net.Conn, readTimeout, writeTimeout time.Duration) (*TimeoutConn, error) {
    this := &TimeoutConn{
        Conn:         conn,
        readTimeout:  readTimeout,
        writeTimeout: writeTimeout,
    }
    now := time.Now()
    err := this.Conn.SetReadDeadline(now.Add(this.readTimeout))
    if err != nil {
        return nil, err
    }
    err = this.Conn.SetWriteDeadline(now.Add(this.writeTimeout))
    if err != nil {
        return nil, err
    }
    return this, nil
}

func NewTimeoutConnDial(network, addr string, ioTimeout time.Duration) (net.Conn, error) {
    conn, err := net.DialTimeout(network, addr, ioTimeout)
    if err != nil {
        return nil, err
    }
    if conn, err = NewTimeoutConn(conn, ioTimeout); err != nil {
        return nil, err
    }
    return conn, nil
}

func (this *TimeoutConn) Read(data []byte) (int, error) {
    this.Conn.SetReadDeadline(time.Now().Add(this.readTimeout))
    return this.Conn.Read(data)
}

func (this *TimeoutConn) Write(data []byte) (int, error) {
    this.Conn.SetWriteDeadline(time.Now().Add(this.writeTimeout))
    return this.Conn.Write(data)
}

func (this *TimeoutConn) SetDeadline(time time.Time) error {
    return invalidOperationError
}

func (this *TimeoutConn) SetReadDeadline(time time.Time) error {
    return invalidOperationError
}

func (this *TimeoutConn) SetWriteDeadline(time time.Time) error {
    return invalidOperationError
}
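As a follow-up on the "don't buffer the responses locally" note: here is a minimal sketch of what that upgrade could look like. It reuses the client and localhost package variables from the code above, streams the upstream body straight to the ResponseWriter with io.Copy, and additionally needs the io import; the function name streamingProxyHandlerFunc is just for illustration.

// Stream the upstream response instead of buffering it in memory.
func streamingProxyHandlerFunc(w http.ResponseWriter, req *http.Request) {
    origURL := req.URL
    req.URL = new(url.URL)
    *req.URL = *localhost
    req.URL.Path, req.URL.RawQuery, req.URL.Fragment = origURL.Path, origURL.RawQuery, origURL.Fragment
    req.RequestURI, req.Host = "", req.URL.Host

    resp, err := client.Do(req)
    if err != nil {
        http.Error(w, fmt.Sprintf("%d - StatusBadGateway: %s", http.StatusBadGateway, err), http.StatusBadGateway)
        return
    }
    defer resp.Body.Close()

    // Copy the headers, then stream the body without holding it all in memory;
    // net/http takes care of chunking when the length is unknown.
    for key, val := range resp.Header {
        w.Header()[key] = val
    }
    w.WriteHeader(resp.StatusCode)
    if _, err := io.Copy(w, resp.Body); err != nil {
        log.Printf("error streaming response: %v", err)
    }
}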

We ran into this and after a lot of time trying to debug, I came across this: https://code.google.com/p/go/source/detail?r=d4e1ec84876c
This shifts the burden onto clients to read their whole response
bodies if they want the advantage of reusing TCP connections.
So be sure you read the entire body before closing; there are a couple of ways to do it. This function can stand in for a plain Close and lets you see whether you have this issue by logging any extra bytes that haven't been read, while also draining the stream for you so the connection can be reused:
func closeResponse(response *http.Response) error {
    // ensure we read the entire body
    bs, err2 := ioutil.ReadAll(response.Body)
    if err2 != nil {
        log.Println("Error during ReadAll!!", err2)
    }
    if len(bs) > 0 {
        log.Println("Had to read some bytes, not good!", bs, string(bs))
    }
    return response.Body.Close()
}
Or if you really don't care about the body, you can just discard it with this:
io.Copy(ioutil.Discard, response.Body)
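Putting that together, the usual pattern around a request looks something like the sketch below, using the same kind of client as in the question (io, io/ioutil and log are assumed to be imported):

resp, err := client.Get("http://127.0.0.1:8080/test")
if err != nil {
    log.Println(err)
    return
}
defer func() {
    // Drain anything left in the body before closing so the underlying
    // TCP connection can go back into the keep-alive pool for reuse.
    io.Copy(ioutil.Discard, resp.Body)
    resp.Body.Close()
}()
// ... read/use resp as usual ...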

I have encountered this problem too, and adding the option DisableKeepAlives: true to the http.Transport fixed the issue for me; you can give it a try.
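For reference, a minimal sketch of that option; whether it is the right fix depends on whether you want connection reuse at all, since it simply closes the connection after each request instead of pooling it:

package main

import (
    "io"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    // Disable keep-alives so every request uses (and fully closes) its own
    // connection, trading connection-reuse performance for simplicity.
    client := &http.Client{
        Transport: &http.Transport{
            DisableKeepAlives: true,
        },
    }

    resp, err := client.Get("http://127.0.0.1:8080/")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Still drain the body; it keeps the transport's bookkeeping happy.
    if _, err := io.Copy(ioutil.Discard, resp.Body); err != nil {
        log.Println(err)
    }
}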

I came here when running a massive number of SQL queries per second, over a long period of time, on a system that did not limit its number of idle connections. As pointed out in this issue comment on GitHub, explicitly setting db.SetMaxIdleConns(5) completely solved my problem.
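For context, that looks something like the sketch below with database/sql; the driver import and DSN are placeholders, and the SetMaxOpenConns line is an extra assumption, not part of the linked comment:

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql" // placeholder driver choice
)

func main() {
    db, err := sql.Open("mysql", "user:password@/dbname") // placeholder DSN
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Cap the idle-connection pool so heavy query load doesn't hold
    // (or churn through) an unbounded number of sockets.
    db.SetMaxIdleConns(5)
    db.SetMaxOpenConns(50) // assumed companion limit, tune for your workload
}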

Related

Sending data in Chunks using single HTTP Post connection

I receive the contents of a file from a data source in chunks. As I receive each chunk, I want to send it to a service using an HTTP POST request, and I want to send the remaining chunks of data over the same POST connection that was used for the first chunk.
I came up with the following code snippet to implement something similar.
Server-Side
func handle(w http.ResponseWriter, req *http.Request) {
    buf := make([]byte, 256)
    var n int
    for {
        n, err := req.Body.Read(buf)
        if n == 0 && err == io.EOF {
            break
        }
        fmt.Printf(string(buf[:n]))
    }
    fmt.Printf(string(buf[:n]))
    fmt.Printf("Transfer Complete")
}
Client-Side
type alphaReader struct {
    reader io.Reader
}

func newAlphaReader(reader io.Reader) *alphaReader {
    return &alphaReader{reader: reader}
}

func (a *alphaReader) Read(p []byte) (int, error) {
    n, err := a.reader.Read(p)
    return n, err
}

func (a *alphaReader) Reset(str string) {
    a.reader = strings.NewReader(str)
}

func (a *alphaReader) Close() error {
    return nil
}

func main() {
    tr := http.DefaultTransport
    alphareader := newAlphaReader(strings.NewReader("First Chunk"))
    client := &http.Client{
        Transport: tr,
        Timeout:   0,
    }
    req := &http.Request{
        Method: "POST",
        URL: &url.URL{
            Scheme: "http",
            Host:   "localhost:8080",
            Path:   "/upload",
        },
        ProtoMajor:    1,
        ProtoMinor:    1,
        ContentLength: -1,
        Body:          alphareader,
    }
    fmt.Printf("Doing request\n")
    _, err := client.Do(req)
    alphareader.Reset("Second Chunk")
    fmt.Printf("Done request. Err: %v\n", err)
}
Here I want the string "Second Chunk" to be sent over the POST connection made earlier when I call alphareader.Reset("Second Chunk"), but that is not happening: the connection gets closed after the first chunk of data is sent. I am also not sure how to implement the Close() method properly.
I'm a newbie to Go, and any suggestions would be greatly appreciated.
A *strings.Reader returns io.EOF after the initial string has been read and your wrapper does nothing to change that, so it cannot be reused. You're looking for io.Pipe to turn the request body into an io.Writer.
package main

import (
    "io"
    "net/http"
)

func main() {
    pr, pw := io.Pipe()

    req, err := http.NewRequest("POST", "http://localhost:8080/upload", pr)
    if err != nil {
        // TODO: handle error
    }

    go func() {
        defer pw.Close()

        if _, err := io.WriteString(pw, "first chunk"); err != nil {
            _ = err // TODO: handle error
        }
        if _, err := io.WriteString(pw, "second chunk"); err != nil {
            _ = err // TODO: handle error
        }
    }()

    res, err := http.DefaultClient.Do(req)
    if err != nil {
        // TODO: handle error
    }
    res.Body.Close()
}
Also, don't initialize the request using a struct literal. Use one of the constructors instead. In your code you're not setting the Host and Header fields, for instance.
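For example, a fragment of the same streaming request built with the constructor (log is assumed to be imported; NewRequest fills in Header and the protocol fields for you, and the transport derives the host from the URL):

pr, pw := io.Pipe()

req, err := http.NewRequest("POST", "http://localhost:8080/upload", pr)
if err != nil {
    log.Fatal(err)
}
// Header is already initialized by NewRequest, so headers can be set directly.
req.Header.Set("Content-Type", "text/plain")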

implement tls.Config.GetCertificate with self signed certificates

I'm trying to figure out how I can implement a function to feed to tls.Config.GetCertificate using self-signed certificates.
I used this source as a base: https://golang.org/src/crypto/tls/generate_cert.go
I also read this:
https://ericchiang.github.io/tls/go/https/2015/06/21/go-tls.html
Unfortunately, so far I'm stuck with this error:
2016/11/03 23:18:20 http2: server: error reading preface from client 127.0.0.1:34346: remote error: tls: unknown certificate authority
I think I need to generate a CA cert and then sign the key with it, but I'm not sure how to proceed.
Here is my code; can someone help with that?
package gssc

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
    "github.com/pkg/errors"
    "math/big"
    "net"
    "strings"
    "time"
)

func GetCertificate(arg interface{}) func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
    var opts Certopts
    var err error
    if host, ok := arg.(string); ok {
        opts = Certopts{
            RsaBits:   2048,
            Host:      host,
            ValidFrom: time.Now(),
        }
    } else if o, ok := arg.(Certopts); ok {
        opts = o
    } else {
        err = errors.New("Invalid arg type, must be string(hostname) or Certopt{...}")
    }
    return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
        if err != nil {
            return nil, err
        }
        return generate(opts)
    }
}

type Certopts struct {
    RsaBits   int
    Host      string
    IsCA      bool
    ValidFrom time.Time
    ValidFor  time.Duration
}

func generate(opts Certopts) (*tls.Certificate, error) {
    priv, err := rsa.GenerateKey(rand.Reader, opts.RsaBits)
    if err != nil {
        return nil, errors.Wrap(err, "failed to generate private key")
    }

    notAfter := opts.ValidFrom.Add(opts.ValidFor)

    serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
    serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
    if err != nil {
        return nil, errors.Wrap(err, "Failed to generate serial number\n")
    }

    template := x509.Certificate{
        SerialNumber: serialNumber,
        Subject: pkix.Name{
            Organization: []string{"Acme Co"},
        },
        NotBefore: opts.ValidFrom,
        NotAfter:  notAfter,

        KeyUsage:              x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        BasicConstraintsValid: true,
    }

    hosts := strings.Split(opts.Host, ",")
    for _, h := range hosts {
        if ip := net.ParseIP(h); ip != nil {
            template.IPAddresses = append(template.IPAddresses, ip)
        } else {
            template.DNSNames = append(template.DNSNames, h)
        }
    }

    if opts.IsCA {
        template.IsCA = true
        template.KeyUsage |= x509.KeyUsageCertSign
    }

    derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
    if err != nil {
        return nil, errors.Wrap(err, "Failed to create certificate")
    }

    return &tls.Certificate{
        Certificate: [][]byte{derBytes},
        PrivateKey:  priv,
    }, nil
}
This is the test code I use:
package main

import (
    "crypto/tls"
    "github.com/mh-cbon/gssc"
    "net/http"
)

type ww struct{}

func (s *ww) ServeHTTP(w http.ResponseWriter, req *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte("This is an example server.\n"))
}

func main() {
    s := &http.Server{
        Handler: &ww{},
        Addr:    ":8080",
        TLSConfig: &tls.Config{
            InsecureSkipVerify: true,
            GetCertificate:     gssc.GetCertificate("example.org"),
        },
    }
    s.ListenAndServeTLS("", "")
}
Thanks a lot!
Your implementation of tls.Config.GetCertificate is causing the problem.
You are generating a new certificate each time tls.Config.GetCertificate is called. You need to generate the certificate once and then return it from the anonymous function.
In gssc.GetCertificate:
cert, err := generate(opts)

return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
    if err != nil {
        return nil, err
    }
    return cert, err
}
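Put together, the wrapper might look like the sketch below. Note that ValidFor is given a value here as an assumption (the question's code leaves it zero, which would make notAfter equal to ValidFrom); everything else follows the question's code plus the one-time generate call.

func GetCertificate(arg interface{}) func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
    var opts Certopts
    var err error
    if host, ok := arg.(string); ok {
        opts = Certopts{
            RsaBits:   2048,
            Host:      host,
            ValidFrom: time.Now(),
            ValidFor:  365 * 24 * time.Hour, // assumed lifetime; not in the original
        }
    } else if o, ok := arg.(Certopts); ok {
        opts = o
    } else {
        err = errors.New("Invalid arg type, must be string(hostname) or Certopt{...}")
    }

    // Generate the certificate exactly once, not on every handshake.
    var cert *tls.Certificate
    if err == nil {
        cert, err = generate(opts)
    }

    return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
        return cert, err
    }
}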

Why does func Get of fasthttp in golang have `dst` parameter?

I found the following in the fasthttp godoc:
func Get
func Get(dst []byte, url string) (statusCode int, body []byte, err error)
Get appends url contents to dst and returns it as body.
The function follows redirects. Use Do* for manually handling redirects.
New body buffer is allocated if dst is nil.
But when I run the following code,
package main

import (
    "fmt"
    fh "github.com/valyala/fasthttp"
)

func main() {
    url := "https://www.okcoin.cn/api/v1/ticker.do?symbol=btc_cny"
    dst := []byte("ok100")
    _, body, err := fh.Get(dst, url)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println("body:", string(body))
    fmt.Println("dst:", string(dst))
}
body does not contain "ok100", and dst is still "ok100".
Why?
Looking at the code where it is used in fasthttp's client.go func clientGetURLDeadlineFreeConn (line 672), you can see that if there is a timeout, dst's contents are copied to body at line 712. So based on what I read in the code (and debugged with Delve using your code), I saw that dst doesn't get updated in this usage. It seems that it can be used to provide default content to body in the event of a timeout - probably worth a direct question to fasthttp's author for more detail.
func clientGetURLDeadlineFreeConn(dst []byte, url string, deadline time.Time, c clientDoer) (statusCode int, body []byte, err error) {
    timeout := -time.Since(deadline)
    if timeout <= 0 {
        return 0, dst, ErrTimeout
    }

    var ch chan clientURLResponse
    chv := clientURLResponseChPool.Get()
    if chv == nil {
        chv = make(chan clientURLResponse, 1)
    }
    ch = chv.(chan clientURLResponse)

    req := AcquireRequest()

    // Note that the request continues execution on ErrTimeout until
    // client-specific ReadTimeout exceeds. This helps limiting load
    // on slow hosts by MaxConns* concurrent requests.
    //
    // Without this 'hack' the load on slow host could exceed MaxConns*
    // concurrent requests, since timed out requests on client side
    // usually continue execution on the host.
    go func() {
        statusCodeCopy, bodyCopy, errCopy := doRequestFollowRedirects(req, dst, url, c)
        ch <- clientURLResponse{
            statusCode: statusCodeCopy,
            body:       bodyCopy,
            err:        errCopy,
        }
    }()

    tc := acquireTimer(timeout)
    select {
    case resp := <-ch:
        ReleaseRequest(req)
        clientURLResponseChPool.Put(chv)
        statusCode = resp.statusCode
        body = resp.body
        err = resp.err
    case <-tc.C:
        body = dst
        err = ErrTimeout
    }
    releaseTimer(tc)

    return statusCode, body, err
}
In client.go func doRequestFollowRedirects (line 743) it is used at line 748: bodyBuf.B = dst
func doRequestFollowRedirects(req *Request, dst []byte, url string, c clientDoer) (statusCode int, body []byte, err error) {
    resp := AcquireResponse()
    bodyBuf := resp.bodyBuffer()
    resp.keepBodyBuffer = true
    oldBody := bodyBuf.B
    bodyBuf.B = dst

    redirectsCount := 0
    for {
        req.parsedURI = false
        req.Header.host = req.Header.host[:0]
        req.SetRequestURI(url)

        if err = c.Do(req, resp); err != nil {
            break
        }
        statusCode = resp.Header.StatusCode()
        if statusCode != StatusMovedPermanently && statusCode != StatusFound && statusCode != StatusSeeOther {
            break
        }

        redirectsCount++
        if redirectsCount > maxRedirectsCount {
            err = errTooManyRedirects
            break
        }
        location := resp.Header.peek(strLocation)
        if len(location) == 0 {
            err = errMissingLocation
            break
        }
        url = getRedirectURL(url, location)
    }

    body = bodyBuf.B
    bodyBuf.B = oldBody
    resp.keepBodyBuffer = false
    ReleaseResponse(resp)

    return statusCode, body, err
}
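In other words, dst looks like a reusable append-target rather than an in/out parameter: the body you get back may be built on top of dst's backing array, but the slice header you passed in is never modified, which is why dst still prints as "ok100". A sketch of the buffer-reuse pattern that reading suggests (urls is a hypothetical slice; passing buf[:0] sidesteps the question of whether the current implementation really appends):

// Reuse one buffer across many calls instead of allocating a fresh body each time.
buf := make([]byte, 0, 64*1024)
for _, u := range urls {
    statusCode, body, err := fh.Get(buf[:0], u)
    if err != nil {
        fmt.Println(u, err)
        continue
    }
    fmt.Println(u, statusCode, len(body))
    buf = body // keep the (possibly grown) buffer for the next request
}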

golang http timeout and goroutines accumulation

I use goroutines to implement a timeout for http.Get, but I found that the number of goroutines keeps rising steadily, and when it reaches 1000 or so the program exits.
Code:
package main

import (
    "errors"
    "io/ioutil"
    "log"
    "net"
    "net/http"
    "runtime"
    "time"
)

// timeout dialler
func timeoutDialler(timeout time.Duration) func(network, addr string) (net.Conn, error) {
    return func(network, addr string) (net.Conn, error) {
        return net.DialTimeout(network, addr, timeout)
    }
}

func timeoutHttpGet(url string) ([]byte, error) {
    // change dialler add timeout support && disable keep-alive
    tr := &http.Transport{
        Dial:              timeoutDialler(3 * time.Second),
        DisableKeepAlives: true,
    }
    client := &http.Client{Transport: tr}

    type Response struct {
        resp []byte
        err  error
    }

    ch := make(chan Response, 0)
    defer func() {
        close(ch)
        ch = nil
    }()

    go func() {
        resp, err := client.Get(url)
        if err != nil {
            ch <- Response{[]byte{}, err}
            return
        }
        defer resp.Body.Close()

        body, err := ioutil.ReadAll(resp.Body)
        if err != nil {
            ch <- Response{[]byte{}, err}
            return
        }

        tr.CloseIdleConnections()
        ch <- Response{body, err}
    }()

    select {
    case <-time.After(5 * time.Second):
        return []byte{}, errors.New("timeout")
    case response := <-ch:
        return response.resp, response.err
    }
}

func handler(w http.ResponseWriter, r *http.Request) {
    _, err := timeoutHttpGet("http://google.com")
    if err != nil {
        log.Println(err)
        return
    }
}

func main() {
    go func() {
        for {
            log.Println(runtime.NumGoroutine())
            time.Sleep(500 * time.Millisecond)
        }
    }()

    s := &http.Server{
        Addr:         ":8888",
        ReadTimeout:  15 * time.Second,
        WriteTimeout: 15 * time.Second,
    }
    http.HandleFunc("/", handler)
    log.Fatal(s.ListenAndServe())
}
http://play.golang.org/p/SzGTMMmZkI
Initialize your channel with a buffer of 1 instead of 0:
ch := make(chan Response, 1)
And remove the defer block that closes and nils ch.
See: http://blog.golang.org/go-concurrency-patterns-timing-out-and
Here is what I think is happening:
after the 5s timeout, timeoutHttpGet returns
the defer statement runs, closing ch and then setting it to nil
the goroutine it started to do the actual fetch finishes and attempts to send its result on ch
but ch is nil, so the send blocks forever, which prevents that statement from finishing and thus prevents the goroutine from exiting
I assume you are setting ch = nil because before you had that, you would get run-time panics, since that's what happens when you attempt to write to a closed channel, as described by the spec.
Giving ch a buffer of 1 means that the fetch goroutine can send to it without needing a receiver. If the handler has returned due to timeout, everything will just get garbage collected later on.
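Concretely, the changed part of timeoutHttpGet would look something like this (a sketch; Response, client, url and the imports are as in the question's code):

// Buffered channel: the fetch goroutine's single send always succeeds,
// even if the timeout branch below has already returned.
ch := make(chan Response, 1)

go func() {
    resp, err := client.Get(url)
    if err != nil {
        ch <- Response{[]byte{}, err}
        return
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    ch <- Response{body, err}
}()

select {
case <-time.After(5 * time.Second):
    // The goroutine will still finish and drop its result into the buffer,
    // after which everything gets garbage collected.
    return []byte{}, errors.New("timeout")
case response := <-ch:
    return response.resp, response.err
}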

golang TCPConn.SetWriteDeadline doesn't seem to work as expected

I'm trying to detect sending failures by inspecting the error returned by golang TCPConn.Write, but it's nil. I also tried using TCPConn.SetWriteDeadline without success.
Here is how things happen:
the server starts
a client connects
the server sends a message and the client receives it
the client shuts down
the server sends one more message: no error
the server sends the third message: only now the error appears
Question: why does only the second message to a non-existing client result in an error? How should this case be handled properly?
The code follows:
package main

import (
    "net"
    "os"
    "bufio"
    "fmt"
    "time"
)

func AcceptConnections(listener net.Listener, console <-chan string) {
    msg := ""
    for {
        conn, err := listener.Accept()
        if err != nil {
            panic(err)
        }
        fmt.Printf("client connected\n")
        for {
            if msg == "" {
                msg = <-console
                fmt.Printf("read from console: %s", msg)
            }

            err = conn.SetWriteDeadline(time.Now().Add(time.Second))
            if err != nil {
                fmt.Printf("SetWriteDeadline failed: %v\n", err)
            }

            _, err = conn.Write([]byte(msg))
            if err != nil {
                // expecting an error after sending a message
                // to a non-existing client endpoint
                fmt.Printf("failed sending a message to network: %v\n", err)
                break
            } else {
                fmt.Printf("msg sent: %s", msg)
                msg = ""
            }
        }
    }
}

func ReadConsole(network chan<- string) {
    console := bufio.NewReader(os.Stdin)
    for {
        line, err := console.ReadString('\n')
        if err != nil {
            panic(err)
        } else {
            network <- line
        }
    }
}

func main() {
    listener, err := net.Listen("tcp", "localhost:6666")
    if err != nil {
        panic(err)
    }
    println("listening on " + listener.Addr().String())

    consoleToNetwork := make(chan string)
    go AcceptConnections(listener, consoleToNetwork)
    ReadConsole(consoleToNetwork)
}
The server console looks like this:
listening on 127.0.0.1:6666
client connected
hi there!
read from console: hi there!
msg sent: hi there!
this one should fail
read from console: this one should fail
msg sent: this one should fail
this one actually fails
read from console: this one actually fails
failed sending a message to network: write tcp 127.0.0.1:51194: broken pipe
The client looks like this:
package main

import (
    "net"
    "os"
    "io"
    //"bufio"
    //"fmt"
)

func cp(dst io.Writer, src io.Reader, errc chan<- error) {
    // -reads from src and writes to dst
    // -blocks until EOF
    // -EOF is not an error
    _, err := io.Copy(dst, src)
    // push err to the channel when io.Copy returns
    errc <- err
}

func StartCommunication(conn net.Conn) {
    // create a channel for errors
    errc := make(chan error)
    // read connection and print to console
    go cp(os.Stdout, conn, errc)
    // read user input and write to connection
    go cp(conn, os.Stdin, errc)
    // wait until nil or an error arrives
    err := <-errc
    if err != nil {
        println("cp error: ", err.Error())
    }
}

func main() {
    servAddr := "localhost:6666"
    tcpAddr, err := net.ResolveTCPAddr("tcp", servAddr)
    if err != nil {
        println("ResolveTCPAddr failed:", err.Error())
        os.Exit(1)
    }
    conn, err := net.DialTCP("tcp", nil, tcpAddr)
    if err != nil {
        println("net.DialTCP failed:", err.Error())
        os.Exit(1)
    }
    defer conn.Close()

    StartCommunication(conn)
}
EDIT: Following JimB's suggestion I came up with a working example. Messages don't get lost any more and are re-sent over a new connection. I'm not quite sure, though, how safe it is to use a shared variable (connWrap.IsFaulted) between different goroutines.
package main

import (
    "net"
    "os"
    "bufio"
    "fmt"
)

type Connection struct {
    IsFaulted bool
    Conn      net.Conn
}

func StartWritingToNetwork(connWrap *Connection, errChannel chan<- error, msgStack chan string) {
    for {
        msg := <-msgStack
        if connWrap.IsFaulted {
            // put it back for another connection
            msgStack <- msg
            return
        }
        _, err := connWrap.Conn.Write([]byte(msg))
        if err != nil {
            fmt.Printf("failed sending a message to network: %v\n", err)
            connWrap.IsFaulted = true
            msgStack <- msg
            errChannel <- err
            return
        } else {
            fmt.Printf("msg sent: %s", msg)
        }
    }
}

func StartReadingFromNetwork(connWrap *Connection, errChannel chan<- error) {
    network := bufio.NewReader(connWrap.Conn)
    for !connWrap.IsFaulted {
        line, err := network.ReadString('\n')
        if err != nil {
            fmt.Printf("failed reading from network: %v\n", err)
            connWrap.IsFaulted = true
            errChannel <- err
        } else {
            fmt.Printf("%s", line)
        }
    }
}

func AcceptConnections(listener net.Listener, console chan string) {
    errChannel := make(chan error)
    for {
        conn, err := listener.Accept()
        if err != nil {
            panic(err)
        }
        fmt.Printf("client connected\n")

        connWrap := Connection{false, conn}
        go StartReadingFromNetwork(&connWrap, errChannel)
        go StartWritingToNetwork(&connWrap, errChannel, console)

        // block until an error occurs
        <-errChannel
    }
}

func ReadConsole(network chan<- string) {
    console := bufio.NewReader(os.Stdin)
    for {
        line, err := console.ReadString('\n')
        if err != nil {
            panic(err)
        } else {
            network <- line
        }
    }
}

func main() {
    listener, err := net.Listen("tcp", "localhost:6666")
    if err != nil {
        panic(err)
    }
    println("listening on " + listener.Addr().String())

    consoleToNetwork := make(chan string)
    go AcceptConnections(listener, consoleToNetwork)
    ReadConsole(consoleToNetwork)
}
This isn't Go specific; it's an artifact of the underlying TCP socket showing through.
A decent diagram of the TCP termination steps is at the bottom of this page:
http://www.tcpipguide.com/free/t_TCPConnectionTermination-2.htm
The simple version is that when the client closes its socket, it sends a FIN and receives an ACK from the server. It then waits for the server to do the same. Instead of sending a FIN, though, you're sending more data, which is discarded, and the client socket now assumes that any further data coming from you is invalid, so the next time you send you get an RST, which is what bubbles up into the error you see.
Going back to your program, you need to handle this somehow. Generally you can think of whoever is in charge of initiating a send as also being in charge of initiating termination, so your server should assume that it can continue to send until it closes the connection or encounters an error. If you need to detect the client closing more reliably, you need some sort of client response in the protocol. That way recv can be called on the socket and return 0, which alerts you to the closed connection.
In Go, this shows up as an EOF error from the connection's Read method (or from within the Copy in your case). SetWriteDeadline doesn't work here because a small write will go through and get dropped silently, or the client will eventually respond with an RST, giving you an error.
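If the server needs to notice the client going away promptly, one option is to dedicate a goroutine to reading from the connection and treat a read error as the close signal, which is essentially what the EDIT above does. A minimal sketch of that idea in isolation (notifyOnClose is just an illustrative name):

// notifyOnClose reads from the connection until a read fails and then closes
// the returned channel, signalling that the peer has gone away.
func notifyOnClose(conn net.Conn) <-chan struct{} {
    done := make(chan struct{})
    go func() {
        defer close(done)
        buf := make([]byte, 1)
        for {
            if _, err := conn.Read(buf); err != nil {
                // io.EOF on a clean close (FIN), some other error on RST.
                return
            }
            // This server never expects data from the client, so anything
            // actually read here is simply discarded.
        }
    }()
    return done
}

The writer side can then select on this channel alongside the console channel and stop writing as soon as it is closed.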
