runtime: ThreadSanitizer failed to allocate / CHECK failed #37651
It seems that you ran out of memory on the machine. Errno 12 is ENOMEM. Maybe you disabled overcommit/swap, set ulimit -v, or use memcg.
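One quick way to check a couple of these possibilities from inside the test environment is to print the address-space limit and the kernel overcommit policy. This is a minimal, Linux-only sketch and is not part of the original thread:

```go
// Sketch: report the address-space limit (what `ulimit -v` sets) and the
// kernel overcommit policy, two settings that can cause ENOMEM under -race.
package main

import (
	"fmt"
	"os"
	"strings"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	// An unexpectedly small RLIMIT_AS would explain allocation failures in
	// the race runtime, which reserves a very large virtual range.
	if err := syscall.Getrlimit(syscall.RLIMIT_AS, &lim); err == nil {
		fmt.Printf("RLIMIT_AS: cur=%d max=%d\n", lim.Cur, lim.Max)
	}

	// vm.overcommit_memory: 0 = heuristic, 1 = always, 2 = strict accounting.
	// Strict accounting makes large virtual mappings much more likely to fail.
	if b, err := os.ReadFile("/proc/sys/vm/overcommit_memory"); err == nil {
		fmt.Printf("vm.overcommit_memory: %s\n", strings.TrimSpace(string(b)))
	}
}
```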
It turns out I only have this problem on a machine where a 5GB per-process memory limit is enforced.
However, LSF reports that peak memory when using race is less than 2GB (average 980MB), while peak memory without race is reported as 800MB (average 300MB). Is this difference in memory usage expected? Could there be a very brief >5GB heap allocation under race that LSF doesn't catch in its peak memory report?
What exactly is that -m? Are you sure you restrict and measure the same memory? RSS? Virtual? Allocated? Locked? Accounted? There are lots of them :)
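The distinction matters here because the race runtime reserves a large virtual range (shadow memory) that never becomes resident. A minimal, Linux-only sketch (not from the thread) for comparing the two figures in the process being limited:

```go
// Sketch: print the virtual vs. resident sizes of the current process so the
// number being limited can be compared with the number being measured.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/status")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		line := s.Text()
		// VmSize/VmPeak are virtual address space; VmRSS/VmHWM are resident.
		// A limit on virtual size can trip long before RSS gets near it.
		if strings.HasPrefix(line, "VmSize") || strings.HasPrefix(line, "VmPeak") ||
			strings.HasPrefix(line, "VmRSS") || strings.HasPrefix(line, "VmHWM") {
			fmt.Println(line)
		}
	}
}
```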
A memory use increase under the race detector is very much expected; see:
Sure, I expect some memory usage increase; the question is whether this much of an increase points to a bug. Is there a bad interaction between go-deadlock and the Go race detector that uncovers some unexpected runaway memory usage, or is the memory usage legitimate? If there's no easy way to answer this, I guess this issue can be closed.
This has some reference numbers:
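As a rough way to get comparable before/after numbers for this particular test binary, one could print Go runtime memory stats after the tests run. This is a sketch under assumptions (hypothetical file jobqueue/main_test.go) and is not part of the original report; note it cannot see TSan's own shadow memory, which lives outside the Go heap:

```go
// Sketch: print Go runtime memory stats once the tests finish, so the same
// figure can be compared between `go test` and `go test -race`.
package jobqueue

import (
	"fmt"
	"os"
	"runtime"
	"testing"
)

func TestMain(m *testing.M) {
	code := m.Run()

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	// Sys is memory obtained from the OS by the Go runtime; it does not
	// include the race detector's shadow mappings.
	fmt.Printf("HeapSys=%d MiB Sys=%d MiB\n", ms.HeapSys>>20, ms.Sys>>20)

	os.Exit(code)
}
```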
I don't know. You are filing the bug, so I assume you have answers :)
Hello, I'm seeing this issue too on the GitHub Actions runner (https://github.com/benitogf/level/runs/2772001793?check_suite_focus=true), though only on Windows. I was redirected here from #22553 since the error code is different (1455).
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

go env Output

What did you do?

What did you expect to see?

Tests should pass cleanly, the same way they do without race (go test -p 1 -tags netgo --count 1 ./jobqueue -run TestJobqueueRunners -v).

What did you see instead?
Variations on:
This exits the test run. It happens during a seemingly random test on each attempt.
Additional info:
I'm using github.com/sasha-s/go-deadlock. I could switch back to sync, but I want to run go-deadlock in production to catch deadlock bugs that my tests aren't finding. If this is a bad interaction between go-deadlock and the Go race detector, I have no idea how to debug it.