
runtime: GC: heap idle is not released to linux #33376

Closed
Anteoy opened this issue Jul 31, 2019 · 12 comments
Labels
FrozenDueToAge NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. OS-Linux

Comments

@Anteoy

Anteoy commented Jul 31, 2019

What version of Go are you using (go version)?

$ go version
go version go1.12.1 linux/amd64

Does this issue reproduce with the latest release?

Not verified.

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
local env for build:
GOARCH="amd64"
GOBIN="/home/zhoudazhuang/gobin/"
GOCACHE="/home/zhoudazhuang/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/zhoudazhuang/class100/gtools:/home/zhoudazhuang/goproject"
GOPROXY=""
GORACE=""
GOROOT="/home/zhoudazhuang/usr/local/go1.12.1/go"
GOTMPDIR=""
GOTOOLDIR="/home/zhoudazhuang/usr/local/go1.12.1/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build499587841=/tmp/go-build -gno-record-gcc-switches"

online server:
╰─># uname -a
Linux ll-025048236-FWWG.AppPZFW.prod.bj1 2.6.32-642.el6.x86_64 #1 SMP Tue May 10 17:27:01 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
╰─># cat /etc/issue
CentOS release 6.8 (Final)
Kernel \r on an \m

What did you do?

Normal operation; HeapIdle just keeps getting bigger and bigger.

gc 24015 @413998.746s 23%: 16+3647+0.43 ms clock, 325+52790/18232/0+8.6 ms cpu, 6873->6900->5671 MB, 11404 MB goal, 20 P (forced)
scvg-1: 151 MB released
scvg-1: inuse: 9745, idle: 52574, sys: 62319, released: 52574, consumed: 9745 (MB)
gc end: heapSys->65346568192, heapAlloc->6780874680, heapIdle->55123410944, heapReleased->55123247104
gc 24016 @414005.492s 23%: 7.1+3727+0.16 ms clock, 142+52631/18634/0+3.2 ms cpu, 6722->6838->5787 MB, 11342 MB goal, 20 P (forced)
scvg-1: 149 MB released
scvg-1: inuse: 9730, idle: 52594, sys: 62325, released: 52594, consumed: 9730 (MB)
gc end: heapSys->65356005376, heapAlloc->6708748256, heapIdle->55151747072, heapReleased->55149142016
scvg2759: inuse: 9818, idle: 52491, sys: 62309, released: 52491, consumed: 9818 (MB)
gc 24017 @414011.980s 23%: 21+3679+0.22 ms clock, 438+52629/18393/0+4.4 ms cpu, 6651->6707->5694 MB, 11575 MB goal, 20 P (forced)
scvg-1: 142 MB released
scvg-1: inuse: 9685, idle: 52629, sys: 62315, released: 52629, consumed: 9685 (MB)
gc end: heapSys->65344962560, heapAlloc->6548453928, heapIdle->55188455424, heapReleased->55185702912
gc 24018 @414018.410s 23%: 25+3701+0.37 ms clock, 506+52583/18491/0+7.4 ms cpu, 6494->6558->5660 MB, 11389 MB goal, 20 P (forced)
scvg-1: 177 MB released
scvg-1: inuse: 9599, idle: 52687, sys: 62286, released: 52687, consumed: 9599 (MB)
gc end: heapSys->65312587776, heapAlloc->6380088128, heapIdle->55247020032, heapReleased->55246462976

And the MemStats:

fmtdebug Mem stats: {Alloc:7923150672 TotalAlloc:87802159107216 Sys:83425758664 Lookups:0 Mallocs:1317080204050 Frees:1316977196669 HeapAlloc:7923150672 HeapSys:63562973184 HeapIdle:52234674176 HeapInuse:11328299008 HeapReleased:51822125056 HeapObjects:103007381 StackInuse:16269934592 StackSys:16269934592 MSpanInuse:336623760 MSpanSys:420052992 MCacheInuse:34720 MCacheSys:49152 BuckHashSys:2346567 GCSys:2769477632 OtherSys:400924545 NextGC:12386391312 LastGC:1564479987316535864 PauseTotalNs:368820215369 PauseNs:[1287681 4999082 4236046 35185381 28689620 21532347 4224238 11823736 27419782 32950552 25486288 19654996 31435873 4595087 20290070 1704411 14529638 11940630 6121778 25226722 3802688 1190

What did you expect to see?

HeapIdle should be released to the OS.

What did you see instead?

HeapIdle is not released to the OS, even though I periodically call debug.FreeOSMemory().
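
For reference, the periodic call looks roughly like this (a minimal sketch; the one-minute interval is arbitrary and only for illustration):

package main

import (
	"runtime/debug"
	"time"
)

// freeOSMemoryLoop forces a GC and asks the runtime to return as much
// memory to the OS as possible, on a fixed interval.
func freeOSMemoryLoop(interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		debug.FreeOSMemory()
	}
}

func main() {
	go freeOSMemoryLoop(time.Minute)
	select {} // stand-in for the real server workload
}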

@Anteoy
Author

Anteoy commented Jul 31, 2019

#14521 may be relevant. But my Linux kernel is 2.6, and I am not sure whether the madvise system call will use MADV_DONTNEED, so I set the env GODEBUG=madvdontneed=1.

@Anteoy
Author

Anteoy commented Jul 31, 2019

Another thing: proc.go's sysmon() has this comment:

// If a heap span goes unused for 5 minutes after a garbage collection,
// we hand it back to the operating system.

MemStats reports HeapIdle as 52234674176, and it does not decrease even when I periodically call debug.FreeOSMemory(). But how can I tell whether these spans have been idle for less than 5 minutes?
Any help is appreciated, thanks.

@Anteoy
Author

Anteoy commented Jul 31, 2019

if errno := madvise(v, n, int32(advise)); advise == _MADV_FREE && errno != 0 {
		// MADV_FREE was added in Linux 4.5. Fall back to MADV_DONTNEED if it is
		// not supported.
		atomic.Store(&adviseUnused, _MADV_DONTNEED)
		madvise(v, n, _MADV_DONTNEED)
	}

This code shows that on my Linux kernel, only MADV_DONTNEED will be used.

@katiehockman
Contributor

From #14521 (comment), it seems like you should be looking at HeapReleased rather than HeapIdle. HeapReleased is increasing in your example, so memory does appear to be released to Linux periodically, as you would expect.
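
If it helps to watch these numbers directly, here is a minimal sketch that reads the relevant MemStats fields (the 30-second interval and log format are just for illustration):

package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	for {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		// HeapReleased is the subset of HeapIdle that has already been
		// returned to the OS, so HeapIdle-HeapReleased is idle memory the
		// runtime still retains.
		log.Printf("HeapIdle=%d MiB HeapReleased=%d MiB retained-idle=%d MiB",
			m.HeapIdle>>20, m.HeapReleased>>20, (m.HeapIdle-m.HeapReleased)>>20)
		time.Sleep(30 * time.Second)
	}
}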

/cc @aclements @randall77 since they have more context here, and may better understand the issue.

@katiehockman katiehockman added NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. OS-Linux labels Aug 2, 2019
@matthinrichsen-wf

I believe we are seeing something similar to this in Kubernetes after upgrading to Go 1.12.

Our memory usage never decreases, causing our pods to report 80-90% memory utilization after handling a large request, even though internally the Go runtime thinks it is only holding onto 10-20%.

Setting GODEBUG=madvdontneed=1 fixes this, and memory is returned to the pod.

Go version: 1.12.9
GOOS: linux
GOARCH: amd64

@CAFxX
Contributor

CAFxX commented Oct 3, 2019

Setting GODEBUG=madvdontneed=1 fixes this, and memory is returned to the pod.

That's more of a workaround than a fix.

The actual fix (if the goal is to use memory-based autoscaling or memory-based alerting) is to not count the memory that the runtime has marked as free. Keep in mind that the OS can reuse memory that the runtime has marked as free - but RSS does not capture this information.

The fact that the RSS is not going down means that the OS does not actually need that memory right now, so it leaves it in the RSS of the Go process. If the OS needed that memory, the RSS would go down (that's why I said this only matters for autoscaling or alerting; for other purposes that memory has effectively been returned to the OS).

The language-agnostic way to do this (keep in mind that Go is not the only runtime that uses MADV_FREE: Node and Java[1] do as well) is to subtract LazyFree from the RSS.

I agree, though, that it may still make sense to eventually deallocate completely if memory has been idle for an extended period of time, as this may be useful e.g. if you run multiple processes in a memory cgroup.


[1]: Java uses MADV_FREE in some (but not all) configurations
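
As a rough illustration of the LazyFree suggestion, here is a minimal sketch that reads Rss and LazyFree for the current process from /proc/self/smaps_rollup (available on Linux 4.14+; on older kernels you would sum the per-mapping entries in /proc/self/smaps instead; the helper name and output format are my own):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// effectiveRSS returns Rss and LazyFree in KiB for the current process,
// read from /proc/self/smaps_rollup. Rss-LazyFree approximates the memory
// the OS would actually have to reclaim from this process.
func effectiveRSS() (rssKiB, lazyFreeKiB int64, err error) {
	f, err := os.Open("/proc/self/smaps_rollup")
	if err != nil {
		return 0, 0, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text()) // e.g. "Rss: 123456 kB"
		if len(fields) < 2 {
			continue
		}
		v, _ := strconv.ParseInt(fields[1], 10, 64)
		switch fields[0] {
		case "Rss:":
			rssKiB = v
		case "LazyFree:":
			lazyFreeKiB = v
		}
	}
	return rssKiB, lazyFreeKiB, sc.Err()
}

func main() {
	rss, lazy, err := effectiveRSS()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("Rss=%d KiB LazyFree=%d KiB effective=%d KiB\n", rss, lazy, rss-lazy)
}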

@90wukai

90wukai commented Feb 3, 2020


RSS is not going down, but my container is OOMing in Kubernetes. By "OS memory", do you mean the host machine's memory?

@CAFxX
Contributor

CAFxX commented Feb 3, 2020

RSS is not going down, but my container is OOMing in Kubernetes. By "OS memory", do you mean the host machine's memory?

If an OOM happens, it means one of two things:

  1. the Go process is trying to allocate more memory than the cgroup allows (a.k.a. the limits in your k8s manifest) - either you have set a limit that is too low, or you have a leak somewhere
  2. the host memory is overcommitted

Neither case has anything to do with this issue, though, or with Go in general.

@TimmyOVO

TimmyOVO commented Feb 17, 2020

Same here: the application's memory usage never decreases. After testing on three different machines for 20 minutes, only the app running on macOS has its GC working properly.

  • (screenshot attached: 050EB78771CB87374ECC66BF8C21090C.jpg)

@aclements
Member

I think there are too many things going on in this issue.

As to the original issue, @Anteoy , HeapIdle is expected to increase monotonically. It's not measuring OS memory held by Go. The documentation says:

	// Idle spans have no objects in them. These spans could be
	// (and may already have been) returned to the OS, or they can
	// be reused for heap allocations, or they can be reused as
	// stack memory.

Note the "could be returned to the OS". As @katiehockman pointed out, HeapReleased is the subset of HeapIdle that has been released to the OS. (I realize having all of these MemStats that are subsets of other MemStats is pretty annoying. We're working on a new API that will hopefully be much better structured.)

Everyone else is seeing a completely different issue. @CAFxX summarized it very well. Go uses MADV_FREE if the kernel supports it to release memory because it's dramatically more efficient than MADV_DONTNEED. A lot of modern memory managers do this. This has the rather annoying consequence that RSS doesn't go down unless the OS is actually under memory pressure (I'm not sure why Linux chose to do it this way). As @CAFxX mentioned, you can subtract LazyFree from the RSS to account for this, though top/htop/etc don't do this. Container memory limits do account for this, so if your process is OOMing, it's because it's just using too much memory, and not related to MADV_FREE.

@aclements
Member

I think the explanation about HeapIdle resolves the original issue here. See #42330 (a meta-issue I just posted) for reverting to MADV_DONTNEED by default on Linux because of the user experience issues discussed on this issue. I think that takes care of everything, so I'm going to close this, but please reply if that's not the case.

@gopherbot

Change https://golang.org/cl/267100 mentions this issue: runtime: default to MADV_DONTNEED on Linux

gopherbot pushed a commit that referenced this issue Nov 2, 2020
In Go 1.12, we changed the runtime to use MADV_FREE when available on
Linux (falling back to MADV_DONTNEED) in CL 135395 to address issue
 #23687. While MADV_FREE is somewhat faster than MADV_DONTNEED, it
doesn't affect many of the statistics that MADV_DONTNEED does until
the memory is actually reclaimed under OS memory pressure. This
generally leads to poor user experience, like confusing stats in top
and other monitoring tools; and bad integration with management
systems that respond to memory usage.

We've seen numerous issues about this user experience, including
 #41818, #39295, #37585, #33376, and #30904, many questions on Go
mailing lists, and requests for mechanisms to change this behavior at
run-time, such as #40870. There are also issues that may be a result
of this, but root-causing it can be difficult, such as #41444 and
 #39174. And there's some evidence it may even be incompatible with
Android's process management in #37569.

This CL changes the default to prefer MADV_DONTNEED over MADV_FREE, to
favor user-friendliness and minimal surprise over performance. I think
it's become clear that Linux's implementation of MADV_FREE ultimately
doesn't meet our needs. We've also made many improvements to the
scavenger since Go 1.12. In particular, it is now far more prompt and
it is self-paced, so it will simply trickle memory back to the system
a little more slowly with this change. This can still be overridden by
setting GODEBUG=madvdontneed=0.

Fixes #42330 (meta-issue).

Fixes #41818, #39295, #37585, #33376, #30904 (many of which were
already closed as "working as intended").

Change-Id: Ib6aa7f2dc8419b32516cc5a5fc402faf576c92e4
Reviewed-on: https://go-review.googlesource.com/c/go/+/267100
Trust: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
@golang golang locked and limited conversation to collaborators Nov 2, 2021