runtime: Go RSS memory does not decrease, memory not returned to OS #39779

Closed
jinlianch opened this issue Jun 23, 2020 · 4 comments

Comments


jinlianch commented Jun 23, 2020

What version of Go are you using (go version)?

1.14.1

What operating system and processor architecture are you using (go env)?

Running in docker
Linux 4.14.177-139.254.amzn2.x86_64

What did you do?

We have a gRPC server that handles client requests; it does a lot of memory allocation, fetching data from Redis, parsing it, and sending the result back to the client. When concurrency is high, memory usage grows, but when the requests stop, the resident memory does not decrease.
If we keep sending requests to the server, this can eventually cause an OOM.
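
(For reference, a minimal sketch of the kind of handler described; the function names and the simulated payload are hypothetical, and only snappy.Decode corresponds to the heap profile further down:)

package main

import (
	"fmt"

	"github.com/golang/snappy"
)

// handleRequest sketches the per-request allocation pattern described above:
// a compressed payload (fetched from Redis in the real service) is
// snappy-decoded into a freshly allocated buffer and returned. Under high
// concurrency many such buffers are live at once, which grows the heap.
func handleRequest(compressed []byte) ([]byte, error) {
	// snappy.Decode with a nil dst allocates a new buffer on every call;
	// this is the allocation site that dominates the heap profile below.
	return snappy.Decode(nil, compressed)
}

func main() {
	payload := snappy.Encode(nil, make([]byte, 1<<20)) // stand-in for a blob from Redis
	out, err := handleRequest(payload)
	fmt.Println(len(out), err)
}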

What did you expect to see?

RSS should decrease when requests decrease or stop.

What did you see instead?

[screenshot: graph of process RSS over time]
RSS stays at ~740 MB for a long time, while memstats shows the GC is working well.

Alloc = 11.748207092285156 MiB
TotalAlloc = 26889.293197631836 MiB
Sys = 734.5589141845703 MiB
NumGC = 161
NextGC = 19.124771118164062 MiB
LastGC = 1592905558498324415
GCCPUFraction = 0.0005409870977607217
Mallocs = 25586250
Frees = 25547405
HeapAlloc = 11.748207092285156 MiB
HeapIdle = 685.4375 MiB
HeapInuse = 17 MiB
HeapRelease = 685.2421875 MiB
HeapSys = 702.4375 MiB
HeapObjects = 38845
StackInuse = 1.5625 MiB
StackSys = 1.5625 MiB
MSpanInuse = 0.1999969482421875 MiB
MSpanSys = 2.453125 MiB
MCacheInuse = 0.006622314453125 MiB
MCacheSys = 0.015625 MiB
GCSys = 25.260093688964844 MiB
OtherSys = 0.878931999206543 MiB
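
(For context, output in this form can be produced with runtime.ReadMemStats; a minimal sketch, with the byte-to-MiB conversion assumed to match the original program:)

package main

import (
	"fmt"
	"runtime"
)

// bToMiB converts bytes to MiB, matching the units used in the dump above.
func bToMiB(b uint64) float64 { return float64(b) / (1 << 20) }

func printMemStats() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("Alloc = %v MiB\n", bToMiB(m.Alloc))
	fmt.Printf("Sys = %v MiB\n", bToMiB(m.Sys))
	fmt.Printf("NumGC = %v\n", m.NumGC)
	fmt.Printf("HeapIdle = %v MiB\n", bToMiB(m.HeapIdle))
	fmt.Printf("HeapInuse = %v MiB\n", bToMiB(m.HeapInuse))
	fmt.Printf("HeapReleased = %v MiB\n", bToMiB(m.HeapReleased))
	fmt.Printf("HeapSys = %v MiB\n", bToMiB(m.HeapSys))
}

func main() { printMemStats() }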

Heap profile:

(pprof) top
Showing nodes accounting for 58.84MB, 99.16% of 59.34MB total
Showing top 10 nodes out of 96
      flat  flat%   sum%        cum   cum%
   49.29MB 83.07% 83.07%    49.29MB 83.07%  github.com/golang/snappy.Decode
    2.18MB  3.68% 86.75%     2.18MB  3.68%  bytes.makeSlice
    2.13MB  3.59% 90.34%     2.13MB  3.59%  google.golang.org/grpc/internal/transport.newBufWriter
    1.55MB  2.61% 92.94%     1.55MB  2.61%  bufio.NewReaderSize
    1.16MB  1.95% 94.89%     1.16MB  1.95%  runtime/pprof.StartCPUProfile
    0.52MB  0.87% 95.76%     0.52MB  0.87%  golang.org/x/net/http2.(*Framer).WriteDataPadded
    0.51MB  0.86% 96.63%     0.51MB  0.86%  Rigel/go/pkg/utils/enmime/internal/coding.init
    0.50MB  0.85% 97.47%     0.50MB  0.85%  bufio.NewWriterSize
    0.50MB  0.84% 98.31%     0.50MB  0.84%  google.golang.org/grpc.(*ClientConn).newAddrConn
    0.50MB  0.84% 99.16%     0.50MB  0.84%  google.golang.org/protobuf/internal/impl.(*MessageInfo).makeStructInfo

Goroutine profile:

(pprof) top
Showing nodes accounting for 249, 100% of 249 total
Showing top 10 nodes out of 65
      flat  flat%   sum%        cum   cum%
       247 99.20% 99.20%        247 99.20%  runtime.gopark
         1   0.4% 99.60%          1   0.4%  runtime.notetsleepg
         1   0.4%   100%          1   0.4%  runtime/pprof.writeRuntimeProfile
         0     0%   100%          1   0.4%  Rigel/go/pkg/eprometheus.StartMetricsServer.func1
         0     0%   100%          1   0.4%  Rigel/go/pkg/eserver.Serve
         0     0%   100%          1   0.4%  Rigel/go/pkg/eserver.Serve.func3
         0     0%   100%          1   0.4%  Rigel/go/pkg/eserver/email_storage_service_server.RunServer
         0     0%   100%          1   0.4%  Rigel/go/pkg/esignal.HandleSignal.func1
         0     0%   100%         37 14.86%  bufio.(*Reader).Read
         0     0%   100%          1   0.4%  bufio.(*Reader).ReadLine
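
(Profiles like these are typically collected by importing net/http/pprof and pointing go tool pprof at the debug endpoint; a minimal sketch, with the listen address assumed:)

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers on the default mux
)

func main() {
	// Heap profile:      go tool pprof http://localhost:6060/debug/pprof/heap
	// Goroutine profile: go tool pprof http://localhost:6060/debug/pprof/goroutine
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}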


davecheney commented Jun 23, 2020

Thank you for raising this issue. From the information you provided, it looks like all but 17 MB of the Go heap has been released back to the operating system. However, unless there is memory pressure, the operating system may ignore the request to release memory, because the signal we send to the operating system is advisory. It looks like that is what is happening here.
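
(Working through the stats above: HeapSys 702.44 MiB minus the released 685.24 MiB is about 17.2 MiB, which matches HeapInuse = 17 MiB; only that much of the heap is still in use, and the rest has already been offered back to the OS via madvise.)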

P.S. To improve accessibility, in the future please do not include screenshots of text; just copy and paste the text.


jinlianch commented Jun 23, 2020

I also ran this service on Ubuntu 16.04.6 LTS (built from source), and the RSS dropped from 700 MB to 130 MB when I stopped the test script.
I don't understand why the RSS stays the same here; is it because the runtime uses MADV_FREE to release unused memory?
How do you get that all but 17 MB has been released to the OS? I thought HeapRelease = 685.2421875 MiB would be returned to the OS.
Calling debug.FreeOSMemory() does not work.
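
(For reference, a minimal standalone sketch of that experiment; debug.FreeOSMemory forces a garbage collection and returns free heap spans to the OS, which raises HeapReleased even when the RSS reported by the kernel does not move:)

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// heapReleasedMiB reports runtime.MemStats.HeapReleased in MiB.
func heapReleasedMiB() float64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return float64(m.HeapReleased) / (1 << 20)
}

// churn allocates and then drops roughly 1 GiB so there is free heap to hand back.
func churn() {
	garbage := make([][]byte, 1024)
	for i := range garbage {
		garbage[i] = make([]byte, 1<<20)
	}
}

func main() {
	churn()
	fmt.Printf("HeapReleased before: %.1f MiB\n", heapReleasedMiB())
	debug.FreeOSMemory() // forces a GC and madvises free spans back to the OS
	fmt.Printf("HeapReleased after:  %.1f MiB\n", heapReleasedMiB())
	// HeapReleased grows, but under MADV_FREE the kernel may keep the pages
	// counted in RSS until there is memory pressure.
}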

randall77 (Contributor) commented:

> I don't understand why the RSS stays the same here; is it because the runtime uses MADV_FREE to release unused memory?

Yes. The Go runtime has told the OS that it no longer needs the memory. But unless the OS needs those pages for something else, it does not take them back. So the Go process's RSS does not drop.
Unfortunately, there's no good way on Linux to give pages back in a way that immediately lowers the reported RSS.
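
(To see the number the kernel actually reports, one can read VmRSS from /proc/self/status; a minimal Linux-only sketch. As a side note, on Go 1.14 running the binary with GODEBUG=madvdontneed=1 makes the runtime use MADV_DONTNEED instead of MADV_FREE, in which case released heap memory is reflected in VmRSS right away.)

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// vmRSS returns the VmRSS line from /proc/self/status, i.e. the resident set
// size as the kernel sees it. Under MADV_FREE this stays high after the heap
// has been released, until the kernel decides to reclaim the pages.
func vmRSS() (string, error) {
	f, err := os.Open("/proc/self/status")
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "VmRSS:") {
			return strings.TrimSpace(strings.TrimPrefix(sc.Text(), "VmRSS:")), nil
		}
	}
	return "", sc.Err()
}

func main() {
	rss, err := vmRSS()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("VmRSS:", rss)
}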

cagedmantis changed the title from "Golang RSS memory not decrease, memory not return to OS" to "runtime: Go RSS memory does not decrease, memory not returned to OS" on Jun 23, 2020
cagedmantis added this to the Backlog milestone on Jun 23, 2020
cagedmantis (Contributor) commented:

@jinlianch This issue seems like it has received sufficient responses. I'm going to close the issue. Please feel free to respond in the issue if you feel like it was closed in error.
