
runtime: execution halts with goroutines stuck in runtime.gopark (protocol error E08 during memory read for packet) #61768


Closed
patrick-ogrady opened this issue Aug 4, 2023 · 27 comments
Labels: compiler/runtime (Issues related to the Go compiler and/or runtime), FrozenDueToAge, NeedsFix (The path to resolution is known, but the work has not been done)
Milestone: Go1.22

Comments

@patrick-ogrady

patrick-ogrady commented Aug 4, 2023

What version of Go are you using (go version)?

go version go1.20.7 darwin/arm64

Does this issue reproduce with the latest release?

Yes.

What operating system and processor architecture are you using (go env)?

GOARCH="arm64"
GOHOSTARCH="arm64"
GOHOSTOS="darwin"

What did you do?

With continuous profiling running on my binary, the entire program halted with all goroutines stuck in runtime.gopark. With profiling disabled, the problem went away.

I posted a similar issue a few months ago ("halt when profiling"), but I don't believe it is related (#58798).
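
For context, "continuous profiling" here means ordinary in-process CPU profiling via runtime/pprof on a fixed schedule. A minimal sketch of that pattern (the 30s-on/30s-off cadence and the temp-file destination are illustrative assumptions, not the actual setup):

package main

import (
	"log"
	"os"
	"runtime/pprof"
	"time"
)

// profileLoop captures a CPU profile for 30 seconds out of every minute,
// roughly mirroring a once-per-minute continuous profiling schedule.
func profileLoop() {
	for {
		f, err := os.CreateTemp("", "cpu-*.pprof") // hypothetical destination
		if err != nil {
			log.Fatal(err)
		}
		if err := pprof.StartCPUProfile(f); err != nil {
			log.Fatal(err)
		}
		time.Sleep(30 * time.Second)
		pprof.StopCPUProfile()
		f.Close()
		time.Sleep(30 * time.Second)
	}
}

func main() {
	go profileLoop()
	select {} // stand-in for the real program's work
}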

  Goroutine 1 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [semacquire 258316338387791]
  Goroutine 2 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4)
  Goroutine 4 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 5 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call 258796780404375]
  Goroutine 6 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call 258856782846166]
  Goroutine 7 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call 258856782846166]
  Goroutine 8 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 9 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 10 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [chan receive]
  Goroutine 18 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [GC sweep wait]
  Goroutine 19 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [GC scavenge wait]
  Goroutine 20 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 21 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 22 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 23 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 26 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [chan receive 258316338387791]
  Goroutine 33 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258856782846166]
  Goroutine 34 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [finalizer wait]
  Goroutine 36 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call 258316186592916]
  Goroutine 37 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [debug call]
  Goroutine 44 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258316338387791]
  Goroutine 52 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [chan receive 258316338387791]
  Goroutine 54 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258316338387791]
  Goroutine 58 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [IO wait 258316338387791]
  Goroutine 59 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [IO wait 258316338387791]
  Goroutine 61 - Runtime: :0 ??? (0x1ab4ccacc) (thread 7681484)
  Goroutine 62 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [chan receive 258316338387791]
  Goroutine 66 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258316338387791]
  Goroutine 67 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 68 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258316338387791]
  Goroutine 69 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258316338387791]
  Goroutine 88 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 340 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [chan receive]
  Goroutine 347 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258362045239291]
  Goroutine 607 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select 258376783287916]
  Goroutine 619 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 620 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 621 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 622 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [semacquire]
  Goroutine 632 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 633 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [IO wait]
  Goroutine 634 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [select]
  Goroutine 635 - Runtime: /usr/local/go/src/runtime/proc.go:382 runtime.gopark (0x102255ab4) [chan receive 258316810183875]

What did you expect to see?

I expected the binary to run without issue while being profiled.

What did you see instead?

Profiling put the binary in a "deadlocked"/"unrecoverable" state.

gopherbot added the compiler/runtime (Issues related to the Go compiler and/or runtime) label Aug 4, 2023
@patrick-ogrady
Author

patrick-ogrady commented Aug 4, 2023

Here are some interesting backtraces I captured:

thread 8303355

 0  0x00000001ab4cebc8 in ???
    at ?:-1
 1  0x00000001028682b8 in runtime.systemstack_switch
    at /usr/local/go/src/runtime/asm_arm64.s:200
 2  0x0000000102853dbc in runtime.libcCall
    at /usr/local/go/src/runtime/sys_libc.go:49
 3  0x0000000102853a80 in runtime.pthread_cond_wait
    at /usr/local/go/src/runtime/sys_darwin.go:507
 4  0x000000010282fc28 in runtime.semasleep
    at /usr/local/go/src/runtime/os_darwin.go:66
 5  0x0000000102806878 in runtime.notetsleep_internal
    at /usr/local/go/src/runtime/lock_sema.go:213
 6  0x0000000102806b80 in runtime.notetsleepg
    at /usr/local/go/src/runtime/lock_sema.go:295
 7  0x00000001028446e0 in runtime.(*profBuf).read
    at /usr/local/go/src/runtime/profbuf.go:500
 8  0x0000000102862818 in runtime/pprof.readProfile
    at /usr/local/go/src/runtime/cpuprof.go:230
 9  0x00000001030fc794 in runtime/pprof.profileWriter
    at /usr/local/go/src/runtime/pprof/pprof.go:810
10  0x00000001030fc6ac in runtime/pprof.StartCPUProfile.func2
    at /usr/local/go/src/runtime/pprof/pprof.go:794
11  0x000000010286a8b4 in runtime.goexit
    at /usr/local/go/src/runtime/asm_arm64.s:1172
thread 8303391

0  0x00000001ab4cf50c in ???
   at ?:-1
1  0x77288001ab3b01e8 in ???
   at ?:-1
2  0x683600010286b9f4 in ???
   at ?:-1
3  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
4  0x000000016de0ae28 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $m683600010286b9fc,8
thread 8303392

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0x1f4100010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0x00000000043a4800 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $m1f4100010286bd80,8
thread 8303393

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0xf06900010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0x000000010282ecb7 in runtime.netpollblock
   at /usr/local/go/src/runtime/netpoll.go:527
4  0x000000016ee1ac50 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $mf06900010286bd80,8
thread 8303394

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0x793480010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0x00000140001856c0 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $m793480010286bd80,8
(truncated)
thread 8303395

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0x2c0000010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
thread 8303398

0  0x00000001ab4cf50c in ???
   at ?:-1
1  0x00000001028682b8 in runtime.systemstack_switch
   at /usr/local/go/src/runtime/asm_arm64.s:200
thread 8303399

0  0x00000001ab4cf50c in ???
   at ?:-1
1  0xa4610001ab3b01e8 in ???
   at ?:-1
2  0x844d80010286b9f4 in ???
   at ?:-1
3  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
4  0x000001400064b610 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $m844d80010286b9fc,8
thread 8303401

0  0x00000001ab4ccacc in ???
   at ?:-1
1  0x00000001028682b8 in runtime.systemstack_switch
   at /usr/local/go/src/runtime/asm_arm64.s:200
2  0x0000000102853dbc in runtime.libcCall
   at /usr/local/go/src/runtime/sys_libc.go:49
3  0x00000001028534d0 in runtime.read
   at /usr/local/go/src/runtime/sys_darwin.go:269
4  0x000000010282fea4 in runtime.sigNoteSleep
   at /usr/local/go/src/runtime/os_darwin.go:124
5  0x00000001028663ec in os/signal.signal_recv
   at /usr/local/go/src/runtime/sigqueue.go:149
6  0x000000010298e43c in os/signal.loop
   at /usr/local/go/src/os/signal/signal_unix.go:23
7  0x000000010286a8b4 in runtime.goexit
   at /usr/local/go/src/runtime/asm_arm64.s:1172
thread 8303404

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0x7d1a00010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0xdf5180010286bd24 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $m7d1a00010286bd80,8
thread 8303407

0  0x00000001ab4cf50c in ???
   at ?:-1
1  0x0000000000000000 in ???
   at :0
   error: NULL address
(truncated)
thread 8303408

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0x607280010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0x0000000173692c40 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $m607280010286bd80,8
(truncated)
thread 8303418

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0xd64780010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0x00000001746b2d00 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $md64780010286bd80,8
(truncated)
thread 8303419

0  0x00000001ab4cf710 in ???
   at ?:-1
1  0xfa4b00010286bd78 in ???
   at ?:-1
2  0x000000010286a728 in runtime.asmcgocall
   at /usr/local/go/src/runtime/asm_arm64.s:1005
3  0x0000014003b4c000 in ???
   at ?:-1
   error: protocol error E08 during memory read for packet $mfa4b00010286bd80,8
(truncated)
thread 8333596

0  0x00000001ab4cf50c in ???
   at ?:-1
1  0x0000000000000000 in ???
   at :0
   error: NULL address
(truncated)

patrick-ogrady changed the title from "runtime: execution halts with goroutines stuck in runtime.gopark" to "runtime: execution halts with goroutines stuck in runtime.gopark (protocol error E08 during memory read for packet)" Aug 4, 2023
@dr2chase
Contributor

dr2chase commented Aug 4, 2023

@mknyszek @cherrymui @bcmills
Unclear if this is a Darwin flake (if so, perhaps a recipe for making it happen more often) or a profiling bug.

dr2chase added the NeedsInvestigation (Someone must examine and confirm this is a valid issue and not a duplicate of an existing one) label Aug 4, 2023
@patrick-ogrady
Author

I can reliably reproduce this within a few minutes on my machine if you want me to grab anything else 👍 .

@dr2chase
Contributor

dr2chase commented Aug 4, 2023

Is this https://github.com/ava-labs/avalanchego ? Can the test run on just a laptop, or does it need additional setup?

@patrick-ogrady
Author

patrick-ogrady commented Aug 4, 2023

Is this https://github.com/ava-labs/avalanchego ? Can the test run on just a laptop, or does it need additional setup?

There is some additional setup. I've been reproducing on my Mac M2 Max with the following, which spawns a local network of p2p processes:

git clone https://github.com/ava-labs/hypersdk.git;
cd hypersdk/examples/morpheusvm;
./scripts/run.sh;

Within a few minutes, several of the spawned processes will start to halt (continuous profiling is enabled by default and runs once per minute).

@dr2chase
Contributor

dr2chase commented Aug 4, 2023

Can confirm that ./scripts/run.sh does run and spew messages. Is there anything else I should be doing after that? I see messages about "disconnecting peer". One avalanchego process is chewing up 300% of CPU, is that expected? (Laptop is also an M2 Max, all the cores, a truckload of memory too.)

I.e., what am I looking for? (And is there a better way than killall to get rid of the processes?)
Specific instructions are better; there's no way of knowing for sure whether "disconnecting peer" is another way of saying "start to halt", though I have my suspicions.

Now there's two avalanchego processes, each consuming well over 300% of CPU.

@patrick-ogrady
Author

patrick-ogrady commented Aug 4, 2023

One avalanchego process is chewing up 300% of CPU, is that expected?

This is what happens when the process halts with this bug. Run dlv attach <pid>, then grs -r, and you should see everything stuck in runtime.gopark. You can see more detail by going through each thread and running bt.
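
Concretely, the inspection session looks roughly like this (the PID is a placeholder):

dlv attach 12345          # attach to a stuck avalanchego process (placeholder PID)
(dlv) grs -r              # list goroutines; they all show runtime.gopark
(dlv) bt                  # backtrace of the current goroutine
(dlv) thread 8303355      # switch to an OS thread of interest, then bt again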

"disconnecting peer"

This means a peer is no longer responsive (because it has halted/is stuck).

Now there's two avalanchego processes, each consuming well over 300% of CPU.

Yeah, they all eventually halt if you wait long enough.

(And is there a better way than killall to get rid of the processes?)

You can run ./scripts/stop.sh and it will shut everything down (eventually force terminating stuck processes).

@dr2chase
Contributor

dr2chase commented Aug 4, 2023

A-ha. Any way to prevent the other ones from getting further stuck and eating all the CPU? Maybe just keep attaching them to shut them down? Or do I debug one, kill the rest?

@patrick-ogrady
Author

patrick-ogrady commented Aug 4, 2023

A-ha. Any way to prevent the other ones from getting further stuck and eating all the CPU? Maybe just keep attaching them to shut them down? Or do I debug one, kill the rest?

I'd suggest killing the rest after attaching to one that is stuck. Otherwise the runaway CPU usage will grow as more of them get stuck (there's no way to prevent that because of the bug this issue is open for ^). Glad you were able to repro so quickly 👍 .

@patrick-ogrady
Author

patrick-ogrady commented Aug 8, 2023

Any update on this or anything else I can collect?

@dr2chase
Contributor

dr2chase commented Aug 8, 2023

Actually got it open just now; I would not want to claim it is anything like "progress".
Currently cursing at lldb, and about to double-check Go's signal trampoline code.

@patrick-ogrady
Author

I'm going to test if this occurs on go1.19 to see if I can bisect the regression.

@dr2chase
Contributor

dr2chase commented Aug 8, 2023

Thank you, and good luck.

@patrick-ogrady
Author

I ran a few tests. I have not been able to reproduce it in v1.19.12 but was able to reproduce it immediately in v1.21.0.

I'm going to keep trying on v1.19.12.

@patrick-ogrady
Author

I just reproduced the issue on v1.19.12 😢 . It took longer but it eventually hit (same exact symptoms as shared above).

@dr2chase
Contributor

dr2chase commented Aug 8, 2023

I know "what" but not "why". The profiling signal lock is stuck on for at least 100,000 consecutive osyield, i.e., 1 second.

@bcmills
Contributor

bcmills commented Aug 8, 2023

Given darwin, I wonder if this is related to #60108 and/or #59995.

Hmm. Given that the Go runtime on darwin uses libc, is it possible that the Go signal handler is ending up in a libc call and deadlocking on acquiring a mutex already held by another libc call on the running thread?

@dr2chase
Contributor

dr2chase commented Aug 8, 2023

I can look for that, but I don't think I've seen it. I'm also seeing behavior that makes me think this might be a symptom rather than a cause: after adding checks, two processes jammed at 99.9% CPU (but not higher, whereas previously they went well above that), were not racking up "Idle Wake Ups" at a wicked rate, and did not crash out on the checks I added.

Edit: it might not be a symptom; it might be a consequence of a throw from within a signal handler.

Will the runtime's test for "does this arm64 support CAS?" work properly within a signal handler?

@dr2chase
Contributor

dr2chase commented Aug 8, 2023

Note: so far, all 3 of the failures I have seen have been at

func (p *cpuProfile) add(tagPtr *unsafe.Pointer, stk []uintptr) {

in runtime/cpuprof.go; in particular, the failure was detected there, and the stuck lock was also created there.
That's what I've got so far; I'm going to collect a few more.

@dr2chase
Contributor

I definitely have a fix; maybe it's not the right fix.

@gopherbot
Contributor

Change https://go.dev/cl/518836 mentions this issue: runtime: profiling on Darwin cannot use blocking reads

@dr2chase
Contributor

@gopherbot please open the backport tracking issues. This is an annoying bug, possible cause of flakes, with a targeted fix.

@gopherbot
Contributor

Backport issue(s) opened: #62018 (for 1.20), #62019 (for 1.21).

Remember to create the cherry-pick CL(s) as soon as the patch is submitted to master, according to https://go.dev/wiki/MinorReleases.

@gopherbot
Contributor

Change https://go.dev/cl/519275 mentions this issue: runtime: guard against runtime/sema* ops on Darwin signal stack.

@patrick-ogrady
Author

Thanks @dr2chase!!

@gopherbot
Contributor

Change https://go.dev/cl/519375 mentions this issue: runtime: profiling on Darwin cannot use blocking reads

@gopherbot
Contributor

Change https://go.dev/cl/518677 mentions this issue: runtime: profiling on Darwin cannot use blocking reads

dmitshur modified the milestones: Backlog, Go1.22 Aug 15, 2023
dmitshur added the NeedsFix (The path to resolution is known, but the work has not been done) label and removed the NeedsInvestigation (Someone must examine and confirm this is a valid issue and not a duplicate of an existing one) label Aug 15, 2023
gopherbot pushed a commit that referenced this issue Aug 16, 2023
These operations misbehave and cause hangs and flakes.
Fail hard if they are attempted.

Tested by backing out the Darwin-profiling-hang fix
CL 518836 and running run.bash, the guard panicked in
runtime/pprof tests, as expected/hoped.

Updates #61768

Change-Id: I89b6f85745fbaa2245141ea98f584afc5d6b133e
Reviewed-on: https://go-review.googlesource.com/c/go/+/519275
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
gopherbot pushed a commit that referenced this issue Aug 17, 2023
runtime: profiling on Darwin cannot use blocking reads

On Darwin (and assume also on iOS but not sure), notetsleepg
cannot be called in a signal-handling context.  Avoid this
by disabling block reads on Darwin.

An alternate approach was to add "sigNote" with a pipe-based
implementation on Darwin, but that ultimately would have required
at least one more linkname between runtime and syscall to avoid
racing with fork and opening the pipe, so, not.

Fixes #62018.
Updates #61768.

Change-Id: I0e8dd4abf9a606a3ff73fc37c3bd75f55924e07e
Reviewed-on: https://go-review.googlesource.com/c/go/+/518836
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
(cherry picked from commit c6ee8e3)
Reviewed-on: https://go-review.googlesource.com/c/go/+/518677
Auto-Submit: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Austin Clements <austin@google.com>
gopherbot pushed a commit that referenced this issue Aug 17, 2023
runtime: profiling on Darwin cannot use blocking reads

On Darwin (and assume also on iOS but not sure), notetsleepg
cannot be called in a signal-handling context.  Avoid this
by disabling block reads on Darwin.

An alternate approach was to add "sigNote" with a pipe-based
implementation on Darwin, but that ultimately would have required
at least one more linkname between runtime and syscall to avoid
racing with fork and opening the pipe, so, not.

Fixes #62019.
Updates #61768.

Change-Id: I0e8dd4abf9a606a3ff73fc37c3bd75f55924e07e
Reviewed-on: https://go-review.googlesource.com/c/go/+/518836
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
(cherry picked from commit c6ee8e3)
Reviewed-on: https://go-review.googlesource.com/c/go/+/519375
Reviewed-by: Austin Clements <austin@google.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@google.com>
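
The idea in the fix above ("cannot use blocking reads") can be illustrated with a standalone analogue rather than the runtime's actual code: a reader that normally blocks on a condition variable but, on Darwin, polls with a short sleep instead, because the writer side may run in a context (a signal handler) where waking a blocked waiter is unsafe. The type, the field names, and the 100ms poll interval below are all illustrative assumptions:

package sketch

import (
	"runtime"
	"sync"
	"time"
)

// sampleBuf is a toy stand-in for the runtime's profile buffer.
type sampleBuf struct {
	mu   sync.Mutex
	cond *sync.Cond
	data []uint64
}

func newSampleBuf() *sampleBuf {
	b := &sampleBuf{}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// write appends samples and wakes a blocked reader. In the real bug the
// analogous wakeup happens from a SIGPROF handler, where touching pthread
// condition variables on Darwin is not safe.
func (b *sampleBuf) write(samples ...uint64) {
	b.mu.Lock()
	b.data = append(b.data, samples...)
	b.mu.Unlock()
	b.cond.Signal()
}

// read returns buffered samples. On most platforms it blocks on the
// condition variable; on Darwin/iOS it polls instead, mirroring the
// "disabling block reads on Darwin" approach described above.
func (b *sampleBuf) read() []uint64 {
	poll := runtime.GOOS == "darwin" || runtime.GOOS == "ios"
	b.mu.Lock()
	for len(b.data) == 0 {
		if poll {
			b.mu.Unlock()
			time.Sleep(100 * time.Millisecond) // arbitrary poll interval
			b.mu.Lock()
			continue
		}
		b.cond.Wait()
	}
	out := b.data
	b.data = nil
	b.mu.Unlock()
	return out
}
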
@golang golang locked and limited conversation to collaborators Aug 14, 2024