
net/http: data race in concurrent req.Body.{Close,Read} with 100-continue expected #30580

Open
benburkert opened this issue Mar 4, 2019 · 10 comments
Labels
NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one.

Comments

@benburkert
Contributor

benburkert commented Mar 4, 2019

Please answer these questions before submitting your issue. Thanks!

What did you do?

https://gist.github.com/benburkert/76548cfac5aa6761fab2f88783b673aa
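
A self-contained version of the reproduction, reconstructed from the handler quoted later in this thread rather than copied from the gist, looks roughly like the following (run with go run -race). The client half is an assumption added here for completeness, since the server only wraps r.Body in an expectContinueReader when the request carries an Expect: 100-continue header:

// Reconstructed reproduction sketch; not the gist verbatim. The handler spawns
// Read and Close on the request body in two goroutines, and the client sends
// the request with an Expect: 100-continue header.
package main

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"time"
)

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		go r.Body.Read(make([]byte, 1024))
		go r.Body.Close()
	}))
	defer srv.Close()

	req, err := http.NewRequest("POST", srv.URL, strings.NewReader("hello"))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Expect", "100-continue")
	resp, err := http.DefaultClient.Do(req)
	if err == nil {
		resp.Body.Close()
	}
	time.Sleep(100 * time.Millisecond) // give the handler's goroutines time to run under -race
}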

What did you expect to see?

No output.

What did you see instead?

==================
WARNING: DATA RACE
Read at 0x00c00000e3f8 by goroutine 9:
  net/http.(*expectContinueReader).Read()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:885 +0x42

Previous write at 0x00c00000e3f8 by goroutine 8:
  net/http.(*expectContinueReader).Close()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:901 +0x42

Goroutine 9 (running) created at:
  main.main.func1()
      /tmp/http.go:15 +0xef
  net/http.HandlerFunc.ServeHTTP()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:1995 +0x51
  net/http.serverHandler.ServeHTTP()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:2774 +0xc4
  net/http.(*conn).serve()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:1878 +0x807

Goroutine 8 (running) created at:
  main.main.func1()
      /tmp/http.go:14 +0x70
  net/http.HandlerFunc.ServeHTTP()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:1995 +0x51
  net/http.serverHandler.ServeHTTP()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:2774 +0xc4
  net/http.(*conn).serve()
      /usr/local/Cellar/go/1.12/libexec/src/net/http/server.go:1878 +0x807
==================

System details

go version go1.12 darwin/amd64
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/benburkert/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/benburkert"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/Cellar/go/1.12/libexec"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.12/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
GOROOT/bin/go version: go version go1.12 darwin/amd64
GOROOT/bin/go tool compile -V: compile version go1.12
uname -v: Darwin Kernel Version 17.7.0: Thu Dec 20 21:47:19 PST 2018; root:xnu-4570.71.22~1/RELEASE_X86_64
ProductName:	Mac OS X
ProductVersion:	10.13.6
BuildVersion:	17G5019
lldb --version: lldb-1000.11.37.1
  Swift-4.2
@benburkert benburkert changed the title http: data race in concurrent req.Body.{Close,Read} after handler returns with 100-continue expected net/http: data race in concurrent req.Body.{Close,Read} after handler returns with 100-continue expected Mar 4, 2019
@fraenkel
Contributor

fraenkel commented Mar 5, 2019

See the documentation for Handler, https://golang.org/pkg/net/http/#Handler.

it is not valid to use the ResponseWriter or read from the Request.Body after or concurrently with the completion of the ServeHTTP call.

You are violating the concurrency rule.

@benburkert
Contributor Author

Thanks @fraenkel, I investigated a bit further and found that the data race and nil pointer dereference are not the same underlying issue. The data race can be triggered by any handler that concurrently calls Close & Read on the request body of a request that expects a 100-continue. The nil pointer dereference can only be triggered by concurrent calls to Close & Read that happen after the handler has returned, which is not a valid use of the request body.

@benburkert benburkert changed the title net/http: data race in concurrent req.Body.{Close,Read} after handler returns with 100-continue expected net/http: data race in concurrent req.Body.{Close,Read} with 100-continue expected Mar 5, 2019
@bradfitz
Contributor

bradfitz commented Mar 5, 2019

I agree with @fraenkel that this isn't a bug because this code is violating the rules:

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		go r.Body.Read(make([]byte, 1024))
		go r.Body.Close()
	}))

@bradfitz bradfitz closed this as completed Mar 5, 2019
@benburkert
Contributor Author

@bradfitz please have another look at this sample code; I think it triggers the race detector without violating the rules:

	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var wg sync.WaitGroup

		wg.Add(2)
		go func() { r.Body.Read(make([]byte, 1024)); wg.Done() }()
		go func() { r.Body.Close(); wg.Done() }()
		wg.Wait()
	}))

@bradfitz
Contributor

bradfitz commented Mar 5, 2019

Thanks. I'm not sure concurrent Read & Close is legal in this context, but maybe it should be. I'll use this bug to investigate and either fix or document.

@bradfitz bradfitz reopened this Mar 5, 2019
@fraenkel
Contributor

fraenkel commented Mar 5, 2019

It's a bit more than just Read & Close. Just for kicks, I modified the example to do concurrent reads of 1 byte.
It causes a different data race, but a data race nonetheless.
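
The modified handler isn't shown above, but judging from the main.main.func1.1 / func1.2 frames at h.go:20-21 in the trace below, it presumably looks roughly like this (a reconstruction, not the actual h.go):

srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	var wg sync.WaitGroup
	wg.Add(2)
	// two concurrent 1-byte reads; no Close involved this time
	go func() { r.Body.Read(make([]byte, 1)); wg.Done() }()
	go func() { r.Body.Read(make([]byte, 1)); wg.Done() }()
	wg.Wait()
}))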

==================
WARNING: DATA RACE
Read at 0x00c0001b00e8 by goroutine 36:
  bufio.(*Writer).WriteString()
      /snap/go/3318/src/bufio/bufio.go:607 +0xd5
  net/http.(*expectContinueReader).Read()
      /snap/go/3318/src/net/http/server.go:890 +0x27d
  main.main.func1.1()
      /home/fraenkel/go/src/t/h.go:20 +0xa1

Previous write at 0x00c0001b00e8 by goroutine 37:
  bufio.(*Writer).Flush()
      /snap/go/3318/src/bufio/bufio.go:602 +0x3d7
  net/http.(*expectContinueReader).Read()
      /snap/go/3318/src/net/http/server.go:891 +0x2d1
  main.main.func1.2()
      /home/fraenkel/go/src/t/h.go:21 +0xa1

==================
WARNING: DATA RACE
Write at 0x00c0001d4000 by goroutine 36:
  runtime.slicestringcopy()
      /snap/go/3318/src/runtime/slice.go:232 +0x0
  bufio.(*Writer).WriteString()
      /snap/go/3318/src/bufio/bufio.go:698 +0x2bf
  net/http.(*expectContinueReader).Read()
      /snap/go/3318/src/net/http/server.go:890 +0x27d
  main.main.func1.1()
      /home/fraenkel/go/src/t/h.go:20 +0xa1

Previous write at 0x00c0001d4000 by goroutine 37:
  runtime.slicestringcopy()
      /snap/go/3318/src/runtime/slice.go:232 +0x0
  bufio.(*Writer).WriteString()
      /snap/go/3318/src/bufio/bufio.go:698 +0x2bf
  net/http.(*expectContinueReader).Read()
      /snap/go/3318/src/net/http/server.go:890 +0x27d
  main.main.func1.2()
      /home/fraenkel/go/src/t/h.go:21 +0xa1

@cuonglm
Member

cuonglm commented Mar 5, 2019

The problem with Read and Close is that both of them read and write *expectContinueReader.closed; with two or more concurrent Reads, both call ecr.resp.conn.bufw.WriteString.

Adding a simple lock would solve the problem:

diff --git a/src/net/http/server.go b/src/net/http/server.go
index 9ae0bbff14..b3d4a54ac1 100644
--- a/src/net/http/server.go
+++ b/src/net/http/server.go
@@ -877,11 +877,14 @@ func (srv *Server) initialReadLimitSize() int64 {
 type expectContinueReader struct {
        resp       *response
        readCloser io.ReadCloser
+       mu         sync.Mutex
        closed     bool
        sawEOF     bool
 }
 
 func (ecr *expectContinueReader) Read(p []byte) (n int, err error) {
+       ecr.mu.Lock()
+       defer ecr.mu.Unlock()
        if ecr.closed {
                return 0, ErrBodyReadAfterClose
        }
@@ -898,7 +901,9 @@ func (ecr *expectContinueReader) Read(p []byte) (n int, err error) {
 }
 
 func (ecr *expectContinueReader) Close() error {
+       ecr.mu.Lock()
        ecr.closed = true
+       ecr.mu.Unlock()
        return ecr.readCloser.Close()
 }

I'm not sure it's the right way.
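
One behavioral property of the patch as written, worth weighing: because Read holds ecr.mu (via the defer) across the underlying ecr.readCloser.Read, a concurrent Close will block until any in-flight Read returns. The following standalone toy is illustrative only, not net/http code; it just shows what holding a mutex across a blocking read means for Close:

package main

import (
	"fmt"
	"io"
	"sync"
	"time"
)

// blockingReader stands in for a body read that is waiting on the network.
type blockingReader struct{}

func (blockingReader) Read(p []byte) (int, error) { select {} }

// lockedBody mimics the locking shape of the patched expectContinueReader:
// one mutex guards the closed flag and is held across the underlying Read.
type lockedBody struct {
	mu     sync.Mutex
	closed bool
	r      io.Reader
}

func (b *lockedBody) Read(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.closed {
		return 0, io.ErrClosedPipe
	}
	return b.r.Read(p) // mutex stays held while this blocks
}

func (b *lockedBody) Close() error {
	b.mu.Lock() // must wait for any in-flight Read to finish
	b.closed = true
	b.mu.Unlock()
	return nil
}

func main() {
	b := &lockedBody{r: blockingReader{}}
	go b.Read(make([]byte, 1)) // parks forever, holding b.mu

	time.Sleep(100 * time.Millisecond)
	done := make(chan struct{})
	go func() { b.Close(); close(done) }()

	select {
	case <-done:
		fmt.Println("Close returned")
	case <-time.After(time.Second):
		fmt.Println("Close is stuck behind the blocked Read")
	}
}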

cc @bradfitz

@julieqiu julieqiu added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label Mar 11, 2019
@johnrengelman

We're running https://github.com/fabiolb/fabio as a load balancer and we see this panic whenever concurrent requests to the same host and path send the Expect: 100-Continue header.
Fabio uses the standard httputil.ReverseProxy and http.Transport for proxying.
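
Not fabio's actual code, but a stripped-down sketch of the kind of setup described (a net/http server whose handler is a stock httputil.ReverseProxy over a plain http.Transport, with clients sending Expect: 100-Continue); the addresses are placeholders:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder backend address; fabio picks its targets dynamically.
	backend, err := url.Parse("http://127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)
	proxy.Transport = &http.Transport{} // standard transport, as described above

	// Clients send concurrent POSTs with "Expect: 100-Continue" to the same
	// host and path; the proxy handler reads the request body to forward it.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}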

@rosenhouse
Contributor

Is this related to #26253?

@odeke-em
Member

@bradfitz what do you think about @cuonglm's suggestion in #30580 (comment)? @cuonglm would you like to send a CL to get the train rolling? Thank you!
