runtime,sync/atomic: deadlocks in sync/atomic_test.TestValueSwapConcurrent #45975
I do not have enough profiling and optimisation experience to just clock allocations. I assume this is the two […] |
forgot about
|
I think preallocating all the […]. I don't think defining the func outside the loop would help much - we need to allocate a goroutine per iteration in any case. |
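
As a rough illustration of where the per-iteration allocation comes from (a sketch, not part of the test, assuming a toolchain with Value.Swap, i.e. Go 1.17+): converting a uint64 to interface{} at the Swap call site boxes it on the heap on every call, regardless of where the receiving variable is declared, and testing.AllocsPerRun makes that visible.

package main

import (
    "fmt"
    "sync/atomic"
    "testing"
)

func main() {
    var v atomic.Value
    u := uint64(1 << 32) // large start value so the runtime's static small-integer table never applies

    allocs := testing.AllocsPerRun(1000, func() {
        u++
        v.Swap(u) // the uint64 -> interface{} conversion here allocates on every call
    })
    fmt.Println("allocations per Swap:", allocs) // roughly 1 on current toolchains
}
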
@dr2chase got a similar looking failure in a benchmark, for which we have a core file. So far, we've found:
I'm still investigating further. |
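
(A core file like that, e.g. one produced with GOTRACEBACK=crash, can typically be inspected with delve's "dlv core <binary> <corefile>" or with the viewcore tool from golang.org/x/debug, both of which can walk the goroutine stacks in the dump.)
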
Change https://golang.org/cl/318529 mentions this issue: |
Change https://golang.org/cl/318569 mentions this issue: |
You thinking of something like this?

diff --git a/src/sync/atomic/value_test.go b/src/sync/atomic/value_test.go
index a5e717d6e0..f474f3f8b0 100644
--- a/src/sync/atomic/value_test.go
+++ b/src/sync/atomic/value_test.go
@@ -188,9 +188,11 @@ func TestValueSwapConcurrent(t *testing.T) {
         i := i
         g.Add(1)
         go func() {
+            var box interface{}
             var c uint64
             for new := i; new < i+n; new++ {
-                if old := v.Swap(new); old != nil {
+                box = new
+                if old := v.Swap(box); old != nil {
                     c += old.(uint64)
                 }
             }

Does not really seem to trace any better:

// Package main attempts to recreate atomic_test.TestValueSwapConcurrent in go1.16.
package main

import (
    "log"
    "sync"
    . "sync/atomic"
)

func main() {
    var x sync.Mutex
    var v Value
    var count uint64
    var g sync.WaitGroup
    var m, n uint64 = 10000, 10000
    m = 1000
    n = 1000
    for i := uint64(0); i < m*n; i += n {
        i := i
        g.Add(1)
        var box interface{}
        go func() {
            var c uint64
            for new := i; new < i+n; new++ {
                box = new
                x.Lock()
                old := v.Load()
                v.Store(box)
                x.Unlock()
                if old != nil {
                    c += old.(uint64)
                }
            }
            AddUint64(&count, c)
            g.Done()
        }()
    }
    g.Wait()
    want, got := (m*n-1)*(m*n)/2, count+v.Load().(uint64)
    log.Printf("sum from 0 to %d was %d, want %v\n", m*n-1, got, want)
}
|
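
For what it's worth, one way to confirm where the allocations in the program above come from, without a full execution trace, is escape analysis: building it with go build -gcflags=-m reports the values that escape to the heap (here, among other things, the boxed uint64 assigned to box and the per-iteration goroutine closures), and running it with GODEBUG=gctrace=1 shows the resulting GC pressure.
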
I was thinking for the […]. I agree for just […]. |
So, every time through the inner for loop we make two more heap values, even if we are just spinning? Thus, this should allocate only what we absolutely need:

diff --git a/src/sync/atomic/value_test.go b/src/sync/atomic/value_test.go
index a5e717d6e0..5f1eea7d1c 100644
--- a/src/sync/atomic/value_test.go
+++ b/src/sync/atomic/value_test.go
@@ -255,12 +257,16 @@ func TestValueCompareAndSwapConcurrent(t *testing.T) {
         m = 100
         n = 100
     }
+    vec := make([]interface{}, m*n)
+    for i := range vec {
+        vec[i] = i
+    }
     for i := 0; i < m; i++ {
         i := i
         w.Add(1)
         go func() {
             for j := i; j < m*n; runtime.Gosched() {
-                if v.CompareAndSwap(j, j+1) {
+                if v.CompareAndSwap(vec[j], vec[j+1]) {
                     j += m
                 }
             }

otherwise I think we have to do some deeper loop in loop nesting to do the "making […] |
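
A self-contained sketch of that preallocation idea (not the actual CL, with small sizes for illustration, and assuming a toolchain that has Value.CompareAndSwap) might look like the following; it sizes the slice at m*n+1, since the last successful CompareAndSwap passes index m*n as the new value:

package main

import (
    "fmt"
    "runtime"
    "sync"
    "sync/atomic"
)

func main() {
    // Small sizes for illustration; the real test uses much larger values.
    m, n := 10, 100

    // Box every value once, up front, so the spin loop below never allocates.
    // j can reach m*n-1, so vec[j+1] needs m*n+1 entries.
    vec := make([]interface{}, m*n+1)
    for i := range vec {
        vec[i] = i
    }

    var v atomic.Value
    v.Store(vec[0])

    var w sync.WaitGroup
    for i := 0; i < m; i++ {
        i := i
        w.Add(1)
        go func() {
            defer w.Done()
            for j := i; j < m*n; runtime.Gosched() {
                // Compare-and-swap on already-boxed values: interfaces are
                // compared by dynamic type and value, so distinct boxes of
                // the same int still match.
                if v.CompareAndSwap(vec[j], vec[j+1]) {
                    j += m
                }
            }
        }()
    }
    w.Wait()
    fmt.Println("final value:", v.Load()) // expect m*n
}
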
@arnottcr Yes, that's exactly the change I was contemplating. |
2021-05-05T15:54:39-69368ce/linux-386-longtest
2021-05-04T22:12:31-6a6aa32/linux-386-longtest
Also seen in a TryBot:
https://storage.googleapis.com/go-build-log/9041f75f/linux-amd64-longtest_50b82ee1.log
I'm not gonna paste the logs here because they're huge.
The deadlocked test was added in CL 241678, but given the gcMarkDone stacks I'm guessing that this is triggering an existing bug in the runtime rather than a bug in sync/atomic or the test itself. Still, it may be worth considering whether the test can be written in a way that is less allocation-intensive. 😅

CC @arnottcr @randall77 @prattmic @mknyszek @aclements