
net/http: nil pointer panic #6780

Closed
andybalholm opened this issue Nov 17, 2013 · 10 comments

@andybalholm
Contributor

One of my users had a series of crashes from a nil pointer dereference deep in the
net/http package. Since it's in a separate goroutine, it brings down the whole process.

I can't figure out where this nil pointer is coming from. Apparently it is the reader
inside the bufio.Reader.

Is there a good place to put a nil pointer check in the net/http code so that the user
gets an error instead of a panic?

I'm attaching the panic message.

Attachments:

  1. errors.log (803900 bytes)
@davecheney
Contributor

Comment 1:

Which version of Go?

@davecheney
Contributor

Comment 2:

A very quick reading of the stack trace, here is what I think is going on.
goroutine 142954 [select]:
net/http.(*persistConn).roundTrip(0xc211d40280, 0xc2109d1f30, 0xc211d40280, 0x0, 0x0)
    /usr/local/go/src/pkg/net/http/transport.go:879 +0x6d6
net/http.(*Transport).RoundTrip(0xcc3b40, 0xc224023340, 0x7bd760, 0x0, 0x0)
    /usr/local/go/src/pkg/net/http/transport.go:187 +0x391
main.(*retryTransport).RoundTrip(0xcc3b40, 0xc224023340, 0xd, 0x0, 0x0)
    /root/src/code.google.com/p/redwood-filter/proxy.go:294 +0x164
main.proxyHandler.ServeHTTP(0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /root/src/code.google.com/p/redwood-filter/proxy.go:146 +0xd1a
main.(*proxyHandler).ServeHTTP(0xc212ca6240, 0x7fcf759c2470, 0xc21f1fca00, 0xc224023340)
    /root/src/code.google.com/p/redwood-filter/accesslog.go:1 +0xb4
net/http.serverHandler.ServeHTTP(0xc21db535a0, 0x7fcf759c2470, 0xc21f1fca00,
0xc224023340)
    /usr/local/go/src/pkg/net/http/server.go:1597 +0x16e
net/http.(*conn).serve(0xc224029000)
    /usr/local/go/src/pkg/net/http/server.go:1167 +0x7b7
created by net/http.(*Server).Serve
    /usr/local/go/src/pkg/net/http/server.go:1644 +0x28b
One of these goroutines has prepared a request and passed it to proxy.go:146. Round
trips happen on another goroutine, so we can't see a direct stack trace; however:
goroutine 142475 [running]:
runtime.panic(0x7578e0, 0xcc6148)
    /usr/local/go/src/pkg/runtime/panic.c:266 +0xb6
bufio.(*Reader).Read(0xc22103cd20, 0xc21e0a1000, 0x1000, 0x1000, 0x1000, ...)
    /usr/local/go/src/pkg/bufio/bufio.go:152 +0x100
io.(*LimitedReader).Read(0xc221355580, 0xc21e0a1000, 0x1000, 0x1000, 0x1000, ...)
    /usr/local/go/src/pkg/io/io.go:398 +0xbb
net/http.(*body).Read(0xc211559690, 0xc21e0a1000, 0x1000, 0x1000, 0xc21e0a1000, ...)
    /usr/local/go/src/pkg/net/http/transfer.go:534 +0x96
io.(*LimitedReader).Read(0xc221355620, 0xc21e0a1000, 0x1000, 0x1000, 0xdd5, ...)
    /usr/local/go/src/pkg/io/io.go:398 +0xbb
bufio.(*Writer).ReadFrom(0xc22090d700, 0x7fcf759c23c8, 0xc221355620, 0x158dbd, 0x0, ...)
    /usr/local/go/src/pkg/bufio/bufio.go:622 +0x15a
io.Copy(0x7fcf759c24f0, 0xc22090d700, 0x7fcf759c23c8, 0xc221355620, 0x0, ...)
    /usr/local/go/src/pkg/io/io.go:348 +0x124
net/http.(*transferWriter).WriteBody(0xc21fc53e70, 0x7fcf759c24f0, 0xc22090d700, 0x0,
0x0)
    /usr/local/go/src/pkg/net/http/transfer.go:196 +0x57c
net/http.(*Request).write(0xc21f4870d0, 0x7fcf759c24f0, 0xc22090d700, 0x1, 0xc211559a20,
...)
    /usr/local/go/src/pkg/net/http/request.go:400 +0x7e4
net/http.(*persistConn).writeLoop(0xc224915d00)
    /usr/local/go/src/pkg/net/http/transport.go:797 +0x185
created by net/http.(*Transport).dialConn
    /usr/local/go/src/pkg/net/http/transport.go:529 +0x61e
Looks like it is reading the contents of a request body whose reader is a bufio.Reader
with a nil underlying reader; the address of 0x20 is the giveaway there.
Another concerning thing is the retryTransport:
func (t *retryTransport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
        switch req.Method {
        case "GET", "HEAD":
                for i := 0; i < 3; i++ {
                        resp, err = t.Transport.RoundTrip(req)
                        if err == nil {
                                return resp, err
                        }
                }
                return nil, err
        }
        return t.Transport.RoundTrip(req)
}
It assumes that the request can be reread, which is not the case: request.Body is an
io.Reader, which cannot be rewound. This may be related to the problem. I would
recommend removing the retryTransport, or at least making it more robust, and seeing
if that helps.
HTH
Dave

@davecheney
Contributor

Comment 3:

Here is a shorter reproduction which shows the same panic behavior:
package main

import (
        "bufio"
        "io"
        "net/http"
        "net/url"
)

var transport = http.Transport{
        Proxy: http.ProxyFromEnvironment,
}

var rc io.ReadCloser = struct {
        io.Reader
        io.Closer
}{Reader: bufio.NewReader((io.Reader)(nil))}

func main() {
        url, err := url.Parse("http://www.google.com/")
        if err != nil {
                panic(err)
        }
        r := http.Request{
                Method: "GET",
                URL:    url,
                Body:   rc,
                Header: make(http.Header),
        }
        transport.RoundTrip(&r)
}
I am not saying your code is creating this situation, but it explains the failure you
see.

@adg
Contributor

adg commented Nov 18, 2013

Comment 4:

Dave's repro case is obviously broken.
I wonder if this is a bug in the recycling of bufio.Readers. That code is pretty simple,
though.

@bradfitz
Contributor

Comment 5:

Yeah, of course #3 will crash; that's WAI, or crashing as intended.
Waiting to hear which version of Go this is before I even try to line up files and
line numbers. Also, the version of this "redwood-filter" would be nice.

Status changed to WaitingForReply.

@andybalholm
Contributor Author

Comment 6:

# go version
go version devel +1fffe6171725 Tue Oct 15 13:46:57 2013 -0400 linux/amd64
The redwood version is the current tip, fdad816027163
retryTransport is an ugly hack I created as a result of the following thread:
https://groups.google.com/forum/#!searchin/golang-nuts/keepalive/golang-nuts/jB-A9dC-aWI/fg4GjA-HP6UJ
GET and HEAD requests shouldn't have a body, should they? But that doesn't mean they
won't. Maybe these very rare crashes are the result of such odd requests hitting a
closed persistent connection and being retried…?
Is there a better solution for that problem? Should I file an issue for it?

@rsc
Contributor

rsc commented Nov 27, 2013

Comment 7:

Labels changed: added go1.3maybe.

@rsc
Contributor

rsc commented Dec 4, 2013

Comment 8:

Labels changed: added release-none, removed go1.3maybe.

@rsc
Contributor

rsc commented Dec 4, 2013

Comment 9:

Labels changed: added repo-main.

@bradfitz
Contributor

Comment 10:

Closing this due to timeout.
I suspect this was already fixed by one or both of:
changeset:   18841:51a204237ba5
user:        Brad Fitzpatrick <bradfitz@golang.org>
date:        Tue Jan 14 09:46:40 2014 -0800
summary:     net/http: fix another data race when sharing Request.Body
changeset:   18783:00cce3a34d7e
user:        Brad Fitzpatrick <bradfitz@golang.org>
date:        Tue Jan 07 10:40:56 2014 -0800
summary:     net/http: fix data race when sharing request body between client and server

Status changed to TimedOut.

@golang golang locked and limited conversation to collaborators Jun 25, 2016
This issue was closed.