
runtime: raising the number of GOMAXPROCS cause oom #32950

Open
Anteoy opened this issue Jul 5, 2019 · 7 comments
Labels
compiler/runtime (Issues related to the Go compiler and/or runtime), NeedsInvestigation (Someone must examine and confirm this is a valid issue and not a duplicate of an existing one)
Comments

Anteoy commented Jul 5, 2019

What version of Go are you using (go version)?

$ go version
go version go1.12.1 linux/amd64

Does this issue reproduce with the latest release?

yes

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GOARCH="amd64"
GOBIN="/home/zhoudazhuang/gobin/"
GOCACHE="/home/zhoudazhuang/.cache/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/zhoudazhuang/db11/jm/pro"
GOPROXY=""
GORACE=""
GOROOT="/home/zhoudazhuang/usr/local/go1.12.1/go"
GOTMPDIR=""
GOTOOLDIR="/home/zhoudazhuang/usr/local/go1.12.1/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build631445118=/tmp/go-build -gno-record-gcc-switches"

What did you do?

I raised GOMAXPROCS from 32 to 512. The machine has 32 physical CPU cores.
GC was costing my server a lot of time even though plenty of CPU sat idle, so I wanted to improve GC performance by raising GOMAXPROCS.
But I failed: GC took even more time, the server used more memory, and it eventually OOMed.
There also seems to be a long stop-the-world pause that does not come from GC. Can the runtime scheduler also cause STW pauses?
Why does memory keep rising until the OOM?
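
For reference, a minimal sketch of the change being tested (the hard-coded 512 stands in for however the value was actually set; GOMAXPROCS can equally be set via the environment variable):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Equivalent to starting the process with GOMAXPROCS=512.
	// runtime.GOMAXPROCS returns the previous setting; an argument
	// of 0 queries the current value without changing it.
	prev := runtime.GOMAXPROCS(512)
	fmt.Printf("GOMAXPROCS %d -> %d (logical CPUs: %d)\n",
		prev, runtime.GOMAXPROCS(0), runtime.NumCPU())
}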

What did you expect to see?

no oom

What did you see instead?

oom

Anteoy (Author) commented Jul 5, 2019

the gc log:

gc 1 @0.110s 0%: 0.23+10+0.25 ms clock, 7.5+1.6/145/11+8.2 ms cpu, 22->23->15 MB, 42 MB goal, 512 P
gc 2 @1.204s 0hsd lark read took: 91.369667ms
gc 3 @1.441s 0%: 37+29+0.38 ms clock, 1209+56/884/0+12 ms cpu, 110->120->43 MB, 141 MB goal, 512 P
gc 4 @1.776s 0%: 9.2+29+0.67 ms clock, 296+5.4/1417/0.71+21 ms cpu, 187->192->62 MB, 217 MB goal, 512 P
gc hsd lark read took: 173.688146ms
gc 6 @2.948s 1%: 4.6+130+0.79 ms clock, 150+1675/8660/22+25 ms cpu, 600->615->189 MB, 673 MB goal, 512 P
gc 7 @4.280s 2%: 3.8+204+1.1 ms clock, 123+14776/16636/119+38 ms cpu, 885->904->341 MB, 946 MB goal, 512 P
gc 8 @7.568s 3%: 165+144+0.97 ms clock, 5303+33579/31801/0+31 ms cpu, 1632->1663->599 MB, 1709 MB goal, 512 P
gc 9 @12.102s 4%: 1.7+477+1.3 ms clock, 54+102140/52716/0.55+43 ms cpu, 2878->2889->1149 MB, 2996 MB goal, 512 P
gc 10 @22.494s 4%: 10+874+1.0 ms clock, 344+198656/100740/0+35 ms cpu, 5521->5543->1942 MB, 5746 MB goal, 512 P
gc get connet costv39: 2.391400352
gc 12 @53.060s 6%: 8.2+1615+29 ms clock, 262+462311/180261/0+933 ms cpu, 25371->25401->14289 MB, 25373 MB goal, 512 P
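
These lines are GODEBUG=gctrace=1 output (interleaved with application logs); the runtime documents the format as:

gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P

i.e. stop-the-world sweep termination + concurrent mark + stop-the-world mark termination. In gc 12 above, the STW phases took 8.2 ms and 29 ms, concurrent marking took 1615 ms, and the heap reached 25401 MB with 14289 MB live, on 512 P.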

It also occurs when raising GOMAXPROCS from 32 to 64.

Anteoy (Author) commented Jul 5, 2019

I changed GOMAXPROCS to 16 and it was fine; no more OOM.
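
A defensive cap, as a sketch (not the original program), would avoid oversubscribing in the first place:

package main

import "runtime"

func init() {
	// Cap GOMAXPROCS at the logical CPU count. Each P carries its own
	// scheduler and allocator (mcache) state, so running 512 Ps on a
	// 32-core box multiplies that overhead for no extra parallelism.
	if n := runtime.NumCPU(); runtime.GOMAXPROCS(0) > n {
		runtime.GOMAXPROCS(n)
	}
}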

Anteoy (Author) commented Jul 5, 2019

BTW, I set GOGC=400 because GC was too frequent; otherwise more than 25% of the time went to GC.
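
The same knob is available programmatically; a minimal sketch, assuming nothing else sets GOGC:

package main

import "runtime/debug"

func init() {
	// Equivalent to GOGC=400: the heap may grow to roughly 5x the
	// live set between collections, trading memory for fewer GC cycles.
	debug.SetGCPercent(400)
}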

Anteoy (Author) commented Jul 5, 2019

My GC problem looks like #16432.

@ALTree ALTree changed the title Raising the number of GOMAXPROCS cause oom runtime: raising the number of GOMAXPROCS cause oom Jul 5, 2019
@ALTree ALTree added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label Jul 5, 2019
@ALTree ALTree added this to the Go1.14 milestone Jul 5, 2019
av86743 commented Jul 5, 2019

zhou da zhuang, you are creating too much garbage.

Anteoy (Author) commented Jul 5, 2019

@ALTree I got some info.

gc before: heapSys->49528111104, heapAlloc->40467391680, heapIdle->6813548544, heapReleased->2474319872
scvg-1: 37309 MB released
scvg-1: inuse: 8592, idle: 40080, sys: 48672, released: 40080, consumed: 8592 (MB)
gc 22 @294.589s 7%: 3.0+1608+1.1 ms clock, 97+334086/198263/hsd lark read took: 57.785043ms
gc end: heapSys->51351617536, heapAlloc->5810661296, heapIdle->42233233408, heapReleased->41868288000
gc 23 @301.426s 7%: 458+1938+31 ms clock, 14679+509397/215798/0+1007 ms cpu, 12757->12804->9017 MB, 14855 MB goal, 512 P
gc end: heapSys->51492487168, heapAlloc->13227605976, heapIdle->34756304896, heapReleased->34555789312
gc 24 @309.742s 8%: 1266+1196+42 ms clock, 40515+414066/193581/0+1360 ms cpu, 46295->46307->39138 MB, 46299 MB goal, 512 P
gc before: heapSys->120142692352, heapAlloc->108389470152, heapIdle->8980471808, heapReleased->8978079744
gc 25 @351.929s 7%: 931+2020+21 ms clock, 29798+392743/233296/55155+688 ms cpu, 164927->164984->3416 MB, 195694 MB goal, 512 P (forced)

It looks like the problem is the huge heapIdle.
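
The heapSys/heapAlloc/heapIdle/heapReleased lines were presumably printed by something like this hypothetical helper around runtime.ReadMemStats:

package main

import (
	"log"
	"runtime"
)

// logHeap reconstructs the "gc before:" / "gc end:" logging above;
// the helper and its tag argument are hypothetical.
func logHeap(tag string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m) // briefly stops the world
	log.Printf("%s: heapSys->%d, heapAlloc->%d, heapIdle->%d, heapReleased->%d",
		tag, m.HeapSys, m.HeapAlloc, m.HeapIdle, m.HeapReleased)
}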

Anteoy (Author) commented Jul 5, 2019

gc 17 @82.551s 7%: 28+2506+25 ms clock, 896+495930/282488/5478+816 ms cpu, 75123->77077->22345 MB, 82520 MB goal, 512 P
scvg0: inuse: 67735, idle: 9607, sys: 77342, released: 1898, consumed: 75443 (MB)
gc 18 @205.628s 5%: 15619+6375+get connet costv39: 106.156393782
gc before: heapSys->80458514432, heapAlloc->71374867800, heapIdle->8755847168, heapReleased->1943085056
gc 19 @238.700s 5%: 863+1477+7.5 ms clock, 27619+421165/167229/0+242 ms cpu, 7940->7975->2738 MB, 13844 MB goal, 512 P (forced)
scvg-1: 66642 MB released
scvg-1: inuse: 8584, idle: 68286, sys: 76870, released: 68286, consumed: 8584 (MB)
gc end: heapSys->80615735296, heapAlloc->9059652608, heapIdle->69228838912, heapReleased->69223890944
gc 20 @253.940s 6%: 676+1527+21 ms clock, 21652+431686/181271/0+703 ms cpu, 14857->14870->6684 MB, 14860 MB goal, 512 P
gc 21 @261.686s 6%: 39+2360+4.8 ms clock, 1251+339098/187854/79+156 ms cpu, 28464->32328->25374 MB, 33424 MB goal, 512 P



scvg1: inuse: 83110, idle: 9269, sys: 92380, released: 9269, consumed: 83110 (MB)
gc 22 @388.400s 4%: 1343+2280+11 ms clock, 42990+672990/275837/0+354 ms cpu, 85028->85180->2735 MB, 126873 MB goal, 512 P

... the in-use heap is also large.
I cannot capture a pprof profile while the heap is growing...
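
One way to grab a heap profile from a live, growing process is net/http/pprof on a side port (a sketch; the port is arbitrary):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
)

func main() {
	// While memory is growing, from another shell:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {} // placeholder for the real server
}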

@rsc rsc modified the milestones: Go1.14, Backlog Oct 9, 2019
@gopherbot gopherbot added the compiler/runtime Issues related to the Go compiler and/or runtime. label Jul 7, 2022