

net/http: cannot assign requested address #16012

Closed
pierrre opened this issue Jun 8, 2016 · 9 comments

Comments

@pierrre

pierrre commented Jun 8, 2016

  1. What version of Go are you using (go version)?
    1.6.2 and tip
  2. What operating system and processor architecture are you using (go env)?
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/pierre/Go"
GORACE=""
GOROOT="/home/pierre/.gimme/versions/go1.6.2.src"
GOTOOLDIR="/home/pierre/.gimme/versions/go1.6.2.src/pkg/tool/linux_amd64"
GO15VENDOREXPERIMENT="1"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0"
CXX="g++"
CGO_ENABLED="1"
  3. What did you do?

Run this benchmark:

package benchhttp

import (
    "io"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

func Benchmark(b *testing.B) {
    data := []byte("Foobar")
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        w.Write(data)
    }))
    defer srv.Close()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            resp, err := http.Get(srv.URL)
            if err != nil {
                b.Fatal(err)
            }
            io.Copy(ioutil.Discard, resp.Body)
            resp.Body.Close()
        }
    })
}

With: go test -bench=. -benchmem -benchtime=10s

  4. What did you expect to see?

It should work.

  5. What did you see instead?

It runs for a long time and then fails:

testing: warning: no tests to run
PASS
Benchmark-8 --- FAIL: Benchmark-8
    benchhttp_test.go:21: Get http://127.0.0.1:45455: dial tcp 127.0.0.1:45455: connect: cannot assign requested address
    benchhttp_test.go:21: Get http://127.0.0.1:45455: dial tcp 127.0.0.1:45455: connect: cannot assign requested address
    benchhttp_test.go:21: Get http://127.0.0.1:45455: dial tcp 127.0.0.1:45455: connect: cannot assign requested address
    benchhttp_test.go:21: Get http://127.0.0.1:45455: dial tcp 127.0.0.1:45455: connect: cannot assign requested address
    benchhttp_test.go:21: Get http://127.0.0.1:45455: dial tcp 127.0.0.1:45455: connect: cannot assign requested address
    benchhttp_test.go:21: Get http://127.0.0.1:45455: dial tcp 127.0.0.1:45455: connect: cannot assign requested address
ok      _test/benchhttp 34.272s

During the benchmark, the value displayed by watch "ss -a | wc -l" increases really quickly (around 30-40k).

@pierrre
Author

pierrre commented Jun 8, 2016

If I write the benchmark without concurrency:

package benchhttp

import (
    "io"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

func Benchmark(b *testing.B) {
    data := []byte("Foobar")
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
        w.Write(data)
    }))
    defer srv.Close()
    for i := 0; i < b.N; i++ {
        resp, err := http.Get(srv.URL)
        if err != nil {
            b.Fatal(err)
        }
        io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
    }
}

It works (no crash).

And the value displayed by watch "ss -a | wc -l" is low and stable (around 700).

@ianlancetaylor
Contributor

I suspect you are exceeding the number of local socket connections permitted by your OS.

@pierrre
Author

pierrre commented Jun 9, 2016

> I suspect you are exceeding the number of local socket connections permitted by your OS.

You are probably right, but I can't explain why.

I've performed the test again with different GOMAXPROCS values, and displayed the number of network connections:

➜  ss -a | wc -l                                         
758
➜  GOMAXPROCS=1 go test -bench=. -benchmem -benchtime=10s
testing: warning: no tests to run
Benchmark     500000         39875 ns/op        3980 B/op         56 allocs/op
PASS
ok      _test/benchhttp 20.326s
➜  ss -a | wc -l                                         
783
➜  GOMAXPROCS=2 go test -bench=. -benchmem -benchtime=10s
testing: warning: no tests to run
Benchmark-2       500000         23642 ns/op        3994 B/op         56 allocs/op
PASS
ok      _test/benchhttp 12.074s
➜  ss -a | wc -l                                         
789
➜  GOMAXPROCS=3 go test -bench=. -benchmem -benchtime=10s
testing: warning: no tests to run
Benchmark-3     --- FAIL: Benchmark-3
    benchhttp_test.go:21: Get http://127.0.0.1:38675: dial tcp 127.0.0.1:38675: connect: cannot assign requested address
FAIL
exit status 1
FAIL    _test/benchhttp 34.238s
➜  ss -a | wc -l                                         
29863

Most connections are in the TIME-WAIT state.

Should I configure something on my system or fix my code?

@ianlancetaylor
Contributor

The point of a parallel benchmark is to run as many iterations of the function, in parallel, as will complete in 1 second. The benchmark will keep ramping up the number of iterations until it finds the answer. On your system, it seems that the answer is: more than the system can handle.

I would suggest that you put a limit in your code on the number of simultaneous open connections.

I'm going to close this issue because at this point I don't see anything to be fixed in Go.

@pierrre
Author

pierrre commented Jun 9, 2016

> I would suggest that you put a limit in your code on the number of simultaneous open connections.

As far as I know, there are only 8 simultaneous HTTP requests in my code.
My GOMAXPROCS value is 8 (default value) and RunParallel() runs GOMAXPROCS concurrent benchmarks by default.

Does net/http.Get() open a new connection for each request?

@ianlancetaylor
Contributor

http.Get keeps a cache of TCP connections, but when all are in use it opens another one. It also sets a limit on the number of idle connections to a given host; the default is 2. If your GOMAXPROCS is more than 2 the benchmark is going to be regularly discarding connections and opening new ones. Each closed connection will be in TIME_WAIT state for two minutes, tying up that connection.

Try adding this line to your benchmark:

    http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = 100

@pierrre
Author

pierrre commented Jun 10, 2016

It works!
Your explanation makes sense.
Thank you very much! 👍

@PaulMatencio

ss -s should show a high number of TCP connections, most of them in TIME-WAIT.
Allow the kernel to recycle and reuse these connections; otherwise your supply of local sockets will be exhausted.

Add these two lines to your sysctl.conf:

net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

@Komzpa

Komzpa commented Jun 1, 2017

The sysctl way causes issues:
dial tcp 127.0.0.1:5001: i/o timeout

The MaxIdleConnsPerHost helped.

kahing added a commit to kahing/goofys that referenced this issue Oct 3, 2017
http connection pooling in golang seems broken according to
golang/go#16012 (comment). This
works around cases when we have many parallel requests
jaffee added a commit to jaffee/pilosa that referenced this issue Oct 31, 2017
The idea here is for Pilosa to behave better under high query load where a node
might be making many connections to the other nodes in the cluster in order to
support lots of concurrent batches of SetBit queries (for example). By allowing
for more idle connections and more idle connections per host, we reduce
connection churn, and allow more connections to be reused rather than creating
new ones and potentially having many stale sockets in the TIME_WAIT state. See
golang/go#16012
@golang golang locked and limited conversation to collaborators Jun 1, 2018