
runtime/pprof: -short tests fail with "cannot map pages in arena address space" on windows/386 #10559

Closed
alexbrainman opened this issue Apr 23, 2015 · 6 comments

@alexbrainman (Member):

C:\go\root\src\runtime\pprof>go version
go version devel +ba8fa0e Thu Apr 23 06:39:56 2015 +0000 windows/386

C:\go\root\src\runtime\pprof>go test -short
fatal error: runtime: cannot map pages in arena address space

runtime stack:
runtime.throw(0x603528, 0x30)
        C:/go/root/src/runtime/panic.go:543 +0x7f
runtime.sysMap(0x0, 0x2e0000, 0x311bfe01, 0x6d92b8)
        C:/go/root/src/runtime/mem_windows.go:107 +0x85
runtime.mHeap_SysAlloc(0x6bf6c0, 0x2e0000, 0x311bfec8)
        C:/go/root/src/runtime/malloc.go:420 +0x124
runtime.mHeap_Grow(0x6bf6c0, 0x170, 0x0)
        C:/go/root/src/runtime/mheap.go:576 +0x55
runtime.mHeap_AllocSpanLocked(0x6bf6c0, 0x16b, 0x1)
        C:/go/root/src/runtime/mheap.go:480 +0x4f0
runtime.mHeap_Alloc_m(0x6bf6c0, 0x16b, 0x0, 0x418cf901, 0x418d23dc)
        C:/go/root/src/runtime/mheap.go:385 +0x122
runtime.mHeap_Alloc.func1()
        C:/go/root/src/runtime/mheap.go:434 +0x4f
runtime.systemstack(0x311bff40)
        C:/go/root/src/runtime/asm_386.s:283 +0x81
runtime.mHeap_Alloc(0x6bf6c0, 0x16b, 0x0, 0x30cd0101, 0x418cf960)
        C:/go/root/src/runtime/mheap.go:435 +0x5d
runtime.largeAlloc(0x2d5ff0, 0x0, 0x2)
        C:/go/root/src/runtime/malloc.go:710 +0x8a
runtime.mallocgc.func3()
        C:/go/root/src/runtime/malloc.go:631 +0x39
runtime.systemstack(0x10cd0a00)
        C:/go/root/src/runtime/asm_386.s:267 +0x57
runtime.mstart()
        C:/go/root/src/runtime/proc1.go:725

goroutine 60 [running]:
runtime.systemstack_switch()
        C:/go/root/src/runtime/asm_386.s:222 fp=0x41699c68 sp=0x41699c64
runtime.mallocgc(0x2d5ff0, 0x59c9c0, 0x0, 0x10)
        C:/go/root/src/runtime/malloc.go:632 +0x6ae fp=0x41699cd0 sp=0x41699c68
runtime.newarray(0x59c9c0, 0x244cc, 0x8)
        C:/go/root/src/runtime/malloc.go:742 +0xb0 fp=0x41699cf0 sp=0x41699cd0
runtime.growslice(0x561820, 0x51518000, 0x1d000, 0x1d000, 0x1, 0x0, 0x0, 0x0)
        C:/go/root/src/runtime/slice.go:88 +0x277 fp=0x41699d28 sp=0x41699cf0
internal/trace.readTrace(0x418f1490, 0x35f3c780, 0x0, 0x0, 0x0, 0x0, 0x0)
        C:/go/root/src/internal/trace/parser.go:145 +0x54f fp=0x41699e58 sp=0x41699d28
internal/trace.Parse(0x418f1490, 0x35f3c780, 0x0, 0x0, 0x0, 0x0, 0x0)
        C:/go/root/src/internal/trace/parser.go:61 +0x48 fp=0x41699e9c sp=0x41699e58
runtime/pprof_test.parseTrace(0x418f1490, 0x35f3c780, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
        c:/go/root/src/runtime/pprof/trace_test.go:82 +0x4e fp=0x41699f04 sp=0x41699e9c
runtime/pprof_test.TestTraceStress(0x35f3c720)
        c:/go/root/src/runtime/pprof/trace_test.go:222 +0x918 fp=0x41699fbc sp=0x41699f04
testing.tRunner(0x35f3c720, 0x6b8464)
        C:/go/root/src/testing/testing.go:452 +0xa6 fp=0x41699fe8 sp=0x41699fbc
runtime.goexit()
        C:/go/root/src/runtime/asm_386.s:1637 +0x1 fp=0x41699fec sp=0x41699fe8
created by testing.RunTests
        C:/go/root/src/testing/testing.go:560 +0x852

goroutine 1 [chan receive]:
testing.RunTests(0x620724, 0x6b83e0, 0xe, 0xe, 0x5c1b01)
        C:/go/root/src/testing/testing.go:561 +0x884
testing.(*M).Run(0x10cc4330, 0x6cf520)
        C:/go/root/src/testing/testing.go:490 +0x6b
main.main()
        runtime/pprof/_test/_testmain.go:82 +0x177

goroutine 41 [select (no cases)]:
runtime/pprof_test.TestTraceSymbolize.func1()
        c:/go/root/src/runtime/pprof/trace_stack_test.go:38 +0x1e
created by runtime/pprof_test.TestTraceSymbolize
        c:/go/root/src/runtime/pprof/trace_stack_test.go:39 +0x178

goroutine 42 [chan send (nil chan)]:
runtime/pprof_test.TestTraceSymbolize.func2()
        c:/go/root/src/runtime/pprof/trace_stack_test.go:42 +0x3e
created by runtime/pprof_test.TestTraceSymbolize
        c:/go/root/src/runtime/pprof/trace_stack_test.go:43 +0x18d

goroutine 43 [chan receive (nil chan)]:
runtime/pprof_test.TestTraceSymbolize.func3()
        c:/go/root/src/runtime/pprof/trace_stack_test.go:46 +0x36
created by runtime/pprof_test.TestTraceSymbolize
        c:/go/root/src/runtime/pprof/trace_stack_test.go:47 +0x1a2

goroutine 64 [select (no cases)]:
runtime/pprof_test.TestTraceStress.func9()
        c:/go/root/src/runtime/pprof/trace_test.go:210 +0x23
created by runtime/pprof_test.TestTraceStress
        c:/go/root/src/runtime/pprof/trace_test.go:211 +0x80d

goroutine 50 [chan receive, locked to thread]:
net.(*ioSrv).ProcessRemoteIO(0x10cdc580)
        C:/go/root/src/net/fd_windows.go:138 +0xdb
created by net.startServer
        C:/go/root/src/net/fd_windows.go:238 +0xc6

goroutine 51 [chan receive, locked to thread]:
net.(*ioSrv).ProcessRemoteIO(0x10cdc588)
        C:/go/root/src/net/fd_windows.go:138 +0xdb
created by net.startServer
        C:/go/root/src/net/fd_windows.go:240 +0x11d
exit status 2
FAIL    runtime/pprof   9.172s

C:\go\root\src\runtime\pprof>

minux added this to the Go1.5 milestone Apr 23, 2015
@minux (Member) commented Apr 23, 2015 via email

@alexbrainman (Member, Author):

git bisect says:

4655aadd00fd0228469ebc3c8b9edead7387f285 is the first bad commit
commit 4655aadd00fd0228469ebc3c8b9edead7387f285
Author: Austin Clements <austin@google.com>
Date:   Tue Apr 21 14:24:25 2015 -0400

    runtime: use reachable heap estimate to set trigger/goal

    Currently, we set the heap goal for the next GC cycle using the size
    of the marked heap at the end of the current cycle. This can lead to a
    bad feedback loop if the mutator is rapidly allocating and releasing
    pointers that can significantly bloat heap size.

    If the GC were STW, the marked heap size would be exactly the
    reachable heap size (call it stwLive). However, in concurrent GC,
    marked=stwLive+floatLive, where floatLive is the amount of "floating
    garbage": objects that were reachable at some point during the cycle
    and were marked, but which are no longer reachable by the end of the
    cycle. If the GC cycle is short, then the mutator doesn't have much
    time to create floating garbage, so marked≈stwLive. However, if the GC
    cycle is long and the mutator is allocating and creating floating
    garbage very rapidly, then it's possible that marked»stwLive. Since
    the runtime currently sets the heap goal based on marked, this will
    cause it to set a high heap goal. This means that 1) the next GC cycle
    will take longer because of the larger heap and 2) the assist ratio
    will be low because of the large distance between the trigger and the
    goal. The combination of these lets the mutator produce even more
    floating garbage in the next cycle, which further exacerbates the
    problem.

    For example, on the garbage benchmark with GOMAXPROCS=1, this causes
    the heap to grow to ~500MB and the garbage collector to retain upwards
    of ~300MB of heap, while the true reachable heap size is ~32MB. This,
    in turn, causes the GC cycle to take upwards of ~3 seconds.

    Fix this bad feedback loop by estimating the true reachable heap size
    (stwLive) and using this rather than the marked heap size
    (stwLive+floatLive) as the basis for the GC trigger and heap goal.
    This breaks the bad feedback loop and causes the mutator to assist
    more, which decreases the rate at which it can create floating
    garbage. On the same garbage benchmark, this reduces the maximum heap
    size to ~73MB, the retained heap to ~40MB, and the duration of the GC
    cycle to ~200ms.

    Change-Id: I7712244c94240743b266f9eb720c03802799cdd1
    Reviewed-on: https://go-review.googlesource.com/9177
    Reviewed-by: Rick Hudson <rlh@golang.org>

:040000 040000 0d316fca9bd1a1b42e1801466f1ff7302113a026 9a190d7e325c1532822cc0f443873e942dca67f4 M      src
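
The pacing problem the commit message describes is easier to see with numbers. Below is a minimal, hypothetical Go sketch of the heap-goal computation; the names (nextGoal, stwLive, floatLive) and sizes are illustrative only and are not the runtime's actual code:

    package main

    import "fmt"

    const gogc = 100 // default GOGC

    // nextGoal computes a heap goal for the next cycle from a base heap size.
    func nextGoal(base uint64) uint64 {
    	return base + base*gogc/100
    }

    func main() {
    	stwLive := uint64(32 << 20)    // truly reachable heap, ~32 MB
    	floatLive := uint64(250 << 20) // floating garbage created during a long cycle

    	// Old behaviour: goal derived from marked = stwLive + floatLive.
    	fmt.Printf("goal from marked heap:        %d MB\n", nextGoal(stwLive+floatLive)>>20)

    	// Fixed behaviour: goal derived from the reachable-heap estimate only.
    	fmt.Printf("goal from reachable estimate: %d MB\n", nextGoal(stwLive)>>20)
    }

With a large amount of floating garbage, the old computation sets a goal in the hundreds of megabytes even though only ~32 MB is reachable, which is the feedback loop the commit breaks.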

@minux (Member) commented Apr 23, 2015 via email

@aclements (Member):

Quite possibly related to #10556, since an overflow in next_gc will effectively disable GC and let the heap grow arbitrarily.
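
As a toy illustration of why such a wraparound effectively disables GC, assuming next_gc is an unsigned byte count (illustrative only, not the runtime's code):

    package main

    import "fmt"

    func main() {
    	var nextGC uint64 = 4 << 20 // trigger at ~4 MB

    	// If the trigger computation subtracts more than the current value,
    	// the unsigned counter wraps around instead of going negative.
    	nextGC -= 8 << 20

    	fmt.Printf("next_gc = %d bytes\n", nextGC)
    	// Prints a value close to 2^64, so the heap never reaches the trigger
    	// and the collector effectively never runs.
    }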

@aclements (Member):

Yes, this and #10555 are both caused by the next_gc underflow observed in #10556. When the underflow happens, runtime/pprof's heap grows to ~1GB on my laptop. With the underflow fixed, it grows to only ~150MB.

@alexbrainman (Member, Author):

Yes, it is fixed. Thank you @aclements.

Alex

golang locked and limited conversation to collaborators Jun 25, 2016