sync: add example for Cond #20491

Open
joneskoo opened this issue May 25, 2017 · 27 comments
Labels
compiler/runtime (Issues related to the Go compiler and/or runtime), Documentation
Milestone

Comments

@joneskoo
Contributor

The sync.Cond type should have an example demonstrating a somewhat realistic use.

What would be a good example? It's hardly used at all in the standard library. What are typical uses?

Continuation from #20471.

@bradfitz bradfitz added this to the Unplanned milestone May 25, 2017
@kevinburke
Contributor

This might be a helpful example: https://github.com/golang/build/blob/master/livelog/livelog.go

It's a bit too long for an inline doc, but maybe it can be condensed.

@bcmills
Contributor

bcmills commented Jun 8, 2017

I would expect that the documentation and usage are pretty sparse in part because channels provide similar operations (and, unlike sync.Cond, can be used in select statements; see #16620).

You're of course correct that sync.Cond should have proper documentation, but perhaps we should also gently discourage its use.

@awoodbeck
Contributor

I have an example to propose. In reign, we use sync.Cond as part of an Address/Mailbox pair to signal when new mail arrives in the mailbox. It would be easy to simplify this example for the docs.

@awoodbeck
Contributor

awoodbeck commented Jan 3, 2018

This is what I'm thinking: https://play.golang.org/p/FkxA466uukw

It uses both Signal and Broadcast.
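
Roughly, the shape is something like this (a condensed sketch rather than the exact playground code; Mailbox, Deliver, and Receive are illustrative names): Signal wakes one receiver when mail arrives, and Broadcast wakes every receiver when the mailbox is closed.

package main

import (
	"fmt"
	"sync"
)

// Mailbox is a sketch of a Cond-guarded queue: Deliver wakes one waiter
// with Signal, and Close wakes all waiters with Broadcast.
type Mailbox struct {
	mu     sync.Mutex
	cond   *sync.Cond
	mail   []string
	closed bool
}

func NewMailbox() *Mailbox {
	m := &Mailbox{}
	m.cond = sync.NewCond(&m.mu)
	return m
}

// Deliver appends a message and wakes one waiting receiver.
func (m *Mailbox) Deliver(msg string) {
	m.mu.Lock()
	m.mail = append(m.mail, msg)
	m.mu.Unlock()
	m.cond.Signal()
}

// Close marks the mailbox closed and wakes every waiting receiver.
func (m *Mailbox) Close() {
	m.mu.Lock()
	m.closed = true
	m.mu.Unlock()
	m.cond.Broadcast()
}

// Receive blocks until a message is available or the mailbox is closed.
// The condition is re-checked in a loop because another receiver may have
// taken the message between the Signal and the wakeup.
func (m *Mailbox) Receive() (string, bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for len(m.mail) == 0 && !m.closed {
		m.cond.Wait()
	}
	if len(m.mail) == 0 {
		return "", false
	}
	msg := m.mail[0]
	m.mail = m.mail[1:]
	return msg, true
}

func main() {
	mb := NewMailbox()
	done := make(chan struct{})
	go func() {
		defer close(done)
		for {
			msg, ok := mb.Receive()
			if !ok {
				return
			}
			fmt.Println("got:", msg)
		}
	}()
	mb.Deliver("hello")
	mb.Deliver("world")
	mb.Close()
	<-done
}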

@bcmills
Contributor

bcmills commented Jan 3, 2018

@awoodbeck The equivalent to that using channels is only a bit longer, and interoperates better with the context package: https://play.golang.org/p/icxM8MCgO7P
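
Roughly the shape such a channel-based version might take (a sketch in the same spirit, not the linked playground code; the Mailbox type and ErrClosed are illustrative): the buffered channel is itself the queue, so both sides can select on it alongside a Context.

package mailbox

import (
	"context"
	"errors"
)

// ErrClosed is returned by Receive once the mailbox has been closed and drained.
var ErrClosed = errors.New("mailbox: closed")

// Mailbox is a channel-based sketch: the buffered channel carries the mail,
// so no separate condition variable is needed.
type Mailbox struct {
	mail chan string
}

func New(capacity int) *Mailbox {
	return &Mailbox{mail: make(chan string, capacity)}
}

// Deliver blocks while the mailbox is full, unless ctx is cancelled first.
func (m *Mailbox) Deliver(ctx context.Context, msg string) error {
	select {
	case m.mail <- msg:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// Receive blocks until a message arrives, the mailbox is closed and empty,
// or ctx is cancelled.
func (m *Mailbox) Receive(ctx context.Context) (string, error) {
	select {
	case msg, ok := <-m.mail:
		if !ok {
			return "", ErrClosed
		}
		return msg, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

// Close stops further deliveries; callers must not Deliver after Close.
func (m *Mailbox) Close() { close(m.mail) }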

So that's arguably still not a great use of sync.Cond.

@awoodbeck
Contributor

This issue requested an example of using sync.Cond, not reasons why sync.Cond shouldn't be used. Arguing for channels over sync.Cond is irrelevant in the context of improving the sync.Cond docs.

@bcmills
Contributor

bcmills commented Jan 3, 2018

Arguing for channels over sync.Cond is irrelevant in the context of improving the sync.Cond docs.

I disagree. The OP requested “a good example” in “a somewhat realistic use”. A good example for an API should be a use-case for which it is actually the most appropriate tool, not just a technically-correct tool.

It's relatively easy to come up with correct examples of how to use sync.Cond. The problem is in finding cases where it is appropriate. If every use-case we can think of for sync.Cond is better addressed with channels instead, then adding Cond-based examples for those use-cases will only make it easier for folks to choose the wrong tool for the job, and the time would be better invested in writing channel-based examples for those use-cases.

(This is in contrast to, say, sync.Mutex: there are plenty of cases where programs have short critical sections and invariants that span multiple variables. You can often address those cases with a channel too, but the code ends up more complicated — and if the critical section is short enough, there is no advantage to the channel-based solution. In contrast, the use-cases for sync.Cond always involve relatively long waits for the condition, which make responsiveness to cancellation much more important.)
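
For instance, a minimal sketch of that kind of short critical section (the Account type is hypothetical, purely for illustration), where the invariant balance == deposits - withdrawals spans multiple variables and each operation holds the lock only briefly:

package account

import "sync"

// Account guards an invariant that spans multiple variables:
// balance must always equal deposits minus withdrawals.
type Account struct {
	mu          sync.Mutex
	deposits    int64
	withdrawals int64
	balance     int64
}

// Deposit updates both the counter and the balance atomically.
func (a *Account) Deposit(amount int64) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.deposits += amount
	a.balance += amount
}

// Withdraw refuses to overdraw; the whole check-and-update is one short
// critical section, so no other goroutine can observe a partial update.
func (a *Account) Withdraw(amount int64) bool {
	a.mu.Lock()
	defer a.mu.Unlock()
	if a.balance < amount {
		return false
	}
	a.withdrawals += amount
	a.balance -= amount
	return true
}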

@awoodbeck
Contributor

@bcmills I understand your point. Thank you for the clarification.

@joneskoo
Contributor Author

joneskoo commented Jan 3, 2018 via email

@cstockton

The only use case I come across for sync.Cond is when I want short-lived periods of mutual exclusion over a named resource; a minimal example would be this. I think in many cases doing this with a goroutine / channel would be preferred for cancellation, but maybe this would serve as a fair enough example.
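
For readers without the link handy, the idea is roughly the following (a sketch, not the linked code; PartitionLocker and the string ids are illustrative):

package partition

import "sync"

// PartitionLocker provides per-ID mutual exclusion built on sync.Cond:
// Lock(id) blocks only while that particular id is held.
type PartitionLocker struct {
	c      *sync.Cond
	locked map[string]bool
}

func NewPartitionLocker() *PartitionLocker {
	return &PartitionLocker{
		c:      sync.NewCond(new(sync.Mutex)),
		locked: make(map[string]bool),
	}
}

// Lock blocks until no other goroutine holds id, then marks it held.
func (l *PartitionLocker) Lock(id string) {
	l.c.L.Lock()
	defer l.c.L.Unlock()
	// Wait must be wrapped in a loop that re-checks the condition:
	// Broadcast wakes every waiter, including waiters for other ids.
	for l.locked[id] {
		l.c.Wait()
	}
	l.locked[id] = true
}

// Unlock releases id and wakes all waiters so that any goroutine waiting
// on this particular id can proceed.
func (l *PartitionLocker) Unlock(id string) {
	l.c.L.Lock()
	delete(l.locked, id)
	l.c.L.Unlock()
	l.c.Broadcast()
}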

@bcmills
Contributor

bcmills commented Feb 12, 2018

@cstockton I'm looking at your example, but there's a detail that I don't understand: why is mu passed in to NewPartitionLocker explicitly? (Is that important to your use case? If so, why?)

@bcmills
Contributor

bcmills commented Feb 12, 2018

Actually, there's another detail I'm missing too: in Unlock, why call Broadcast instead of Signal? (Presumably that's because you have to ensure that the waiter for the unlocked ID wakes up, rather than a waiter for some other ID? But then why not use a per-id sync.Cond?)

At any rate, if the number of IDs is small and fixed it seems much clearer to use a long-lived channel per ID:
https://play.golang.org/p/A5lFoZjIn7K

If the number of IDs is large, a short-lived channel per ID would likely still lead to less overall contention on the mutex and more targeted wakeups, without significantly increasing steady-state memory usage:
https://play.golang.org/p/yX5lsiX4-r5
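
For readers without the links, the long-lived-channel-per-ID version looks roughly like this (a sketch, not the playground code; ChanLocker is a made-up name): each ID owns a 1-buffered channel whose single token is the lock, so Unlock wakes exactly one waiter for that ID.

package partition

import "sync"

// ChanLocker provides per-ID mutual exclusion using one long-lived,
// 1-buffered channel per ID as the lock token.
type ChanLocker struct {
	mu    sync.Mutex
	locks map[string]chan struct{}
}

func NewChanLocker() *ChanLocker {
	return &ChanLocker{locks: make(map[string]chan struct{})}
}

// chanFor returns the channel for id, creating it (already containing its
// token) on first use. Channels are never deleted, which is the retention
// trade-off discussed below.
func (l *ChanLocker) chanFor(id string) chan struct{} {
	l.mu.Lock()
	defer l.mu.Unlock()
	ch, ok := l.locks[id]
	if !ok {
		ch = make(chan struct{}, 1)
		ch <- struct{}{} // the lock token starts out available
		l.locks[id] = ch
	}
	return ch
}

// Lock takes id's token; it blocks only on holders of the same id, and a
// caller could just as easily select on this receive and a ctx.Done().
func (l *ChanLocker) Lock(id string) { <-l.chanFor(id) }

// Unlock returns id's token, waking exactly one waiter for that id.
func (l *ChanLocker) Unlock(id string) { l.chanFor(id) <- struct{}{} }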

@cstockton

@bcmills No strong design considerations here; it was posted as an example ~6 months ago in response to a Reddit question. I use that and the bound WaitGroup as something to refer people to for how sync.Cond works.

Since the purpose of this issue is to find a problem that serves as a good example for sync.Cond, providing solutions to the proposed problems using unrelated primitives would probably add more value to your discussion about removing sync.Cond. While this primitive is a member of the standard library, I see no harm in providing a clear example, so what do you feel is a better example of sync.Cond usage?

That said, both your suggestions add additional constraints, such as removing the guarantee that the map value will be initialized only once. The second solution never deletes old IDs from the map, requiring the IDs to be a fixed set that fits in memory, with an access frequency high enough to justify holding them forever. At that point you would just use a regular map[string]sync.Mutex and fill it at initialization. You made me curious, though, so I benchmarked your solutions against the sync.Cond; considering the more drastic performance variation, the additional constraints, and the memory characteristics, I don't think they are better. The Cond performs well with less deviation in a variety of use cases while being simple to understand and build upon.

BenchmarkCond/Serial/Cond1-24   	 5000000	       205 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Serial/Map1-24    	  500000	      2033 ns/op	     424 B/op	       8 allocs/op
BenchmarkCond/Serial/Map2-24    	 5000000	       306 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Parallel_3/Cond1-24         	 1000000	      1026 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Parallel_3/Map1-24          	 5000000	       343 ns/op	     147 B/op	       4 allocs/op
BenchmarkCond/Parallel_3/Map2-24          	 3000000	       429 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Parallel_512/Cond1-24       	 1000000	      1341 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Parallel_512/Map1-24        	 1000000	      1767 ns/op	     198 B/op	       4 allocs/op
BenchmarkCond/Parallel_512/Map2-24        	10000000	       147 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Parallel_8192/Cond1-24      	 1000000	      1289 ns/op	       0 B/op	       0 allocs/op
BenchmarkCond/Parallel_8192/Map1-24       	 1000000	      1817 ns/op	     199 B/op	       4 allocs/op
BenchmarkCond/Parallel_8192/Map2-24       	10000000	       106 ns/op	       0 B/op	       0 allocs/op

@bcmills
Contributor

bcmills commented Feb 13, 2018

While this primitive is a member of the standard library, I see no harm in providing a clear example, so what do you feel is a better example of sync.Cond usage?

I don't think there is such a thing as a good example of sync.Cond usage. That's kind of my point.
(That is: I think we should deprecate it instead of suggesting places to use it.)

That said, both your suggestions add additional constraints […].
The second solution never deletes old IDs from the map […].

I think you've swapped “first” and “second”? At any rate, that's part of my point: sync.Cond encourages a one-size-fits-all approach to synchronization that channels do not.

It's true that my first suggestion retains keys that may no longer be interesting. In exchange, it avoids spurious wakeups: that is, it implements an O(N) solution for N calls to Lock, instead of O(N²). (And that comes through in your benchmarks: Map2, the benchmark for long-lived channels, spends less time per operation as parallelism increases, while Cond1 and Map1 both spend substantially more.)

It's also true that my second suggestion allocates more than the sync.Cond version, and still suffers from spurious wakeups (and the corresponding N² behavior), although it produces spurious wakeups proportional to the number of waiters for each key rather than for all keys. A more careful implementation would avoid those problems, at some cost in complexity.

In exchange for those changes in constraints, we get a few more nice properties: responsiveness to cancellation, usable zero-values, and reduction or elimination of spurious wakeups.

The examples I gave are solutions in the design space that happen to fall at about the same line count as the sync.Cond version (27 lines of implementation). There are many more points in that space that trade off various properties, including lock contention, memory footprint, and code complexity.
In contrast, there are approximately two solutions using sync.Cond: one with lots of spurious wakeups, and the other with long-lived Conds per key (and generally the same memory-allocation tradeoffs as the channel versions); neither is responsive to cancellation.

So I don't think this example is a good one for sync.Cond: its use limits the exploration of alternatives and encourages solutions that neither scale out (to more parallelism) nor up (to use-cases that require responsiveness to cancellation).

@cstockton

I don't think there is such a thing as a good example of sync.Cond usage. That's kind of my point.
(That is: I think we should deprecate it instead of suggesting places to use it.)

I think you've swapped “first” and “second”? At any rate, that's part of my point: sync.Cond encourages a one-size-fits-all approach to synchronization that channels do not.

An example's purpose is not to promote use; it's to illustrate how to use it. I do not understand what the harm is in having a clear example of how a sync.Cond works. If readers understand the way a Cond is used in Go, they may apply patterns they are familiar with from other languages to write easy-to-understand programs.

In exchange for those changes in constraints, we get a few more nice properties: responsiveness to cancellation, usable zero-values, and reduction or elimination of spurious wakeups.

Spurious wakeups are something to avoid, sure, but like the rest of your theory here they did not emerge as a significant detail in the benchmarks I took the time to run to prove my position. I also don't see how using 10 lines of software, written in under a minute, in a pattern I have long understood and can prove is correct while also getting good general performance, somehow limits future exploration of alternative solutions.

#21165
The existence of sync.Cond suggests that people might want to use it, but it is currently under-documented (#20491), incompatible with other Go synchronization patterns (e.g. select statements; see #16620), and unlike most other types in the sync package, does not have a valid zero-value (its Wait method requires a non-nil L field). It has an additional "no copy" invariant enforced through both a run-time dynamic check and special-case code in the vet tool.

You are pushing an agenda to remove sync.Cond under the following premises:

  1. it is currently under-documented (sync: add example for Cond #20491) - While actively blocking constructive attempts to improve documentation.
  2. incompatible with other Go synchronization patterns (e.g. select statements; see proposal: sync: mechanism to select on condition variables #16620) - Just like every single member of the sync and atomic packages such as WaitGroup and Mutex.
  3. does not have a valid zero-value (its Wait method requires a non-nil L field) - Just like sync.Pool.
  4. It has an additional "no copy" invariant enforced through both a run-time dynamic check and special-case code in the vet tool. - Do you have something related to this I can measure?

The only thing you've proven here is a demand for documentation. It's unfortunate you won't apply the expertise you've demonstrated to contribute a meaningful example you feel would best highlight correct usage. Showing someone how to do something properly doesn't mean you are telling them when to do it.

@bcmills
Contributor

bcmills commented Feb 14, 2018

incompatible with other Go synchronization patterns […] Just like every single member of the sync and atomic packages such as WaitGroup and Mutex.

I already addressed that above:
“In contrast [to sync.Mutex], the use-cases for sync.Cond always involve relatively long waits for the condition, which make responsiveness to cancellation much more important.”

does not have a valid zero-value (its Wait method requires a non-nil L field) - Just like sync.Pool.

A zero sync.Pool is valid: its New field is entirely optional. It is also difficult to use appropriately, and “we would like to get to [the point where we don't recommend it]” (#23199 (comment)).

It's unfortunate you won't apply the expertise you've demonstrated to contribute a meaningful example you feel would best highlight correct usage. Showing someone how to do something properly doesn't mean you are telling them when to do it.

I think you have mistaken the cause and effect of my reasoning.

Because I cannot find a sync.Cond example that I believe demonstrates appropriate use, I have proposed that we remove it. The lack of appropriate examples is the cause, not the effect.

@cstockton

A zero sync.Pool is valid: its New field is entirely optional. It is also difficult to use appropriately, and “we would like to get to [the point where we don't recommend it]” (#23199 (comment)).

While you are free to call the usage of the New func optional, the simple fact is that every single usage of sync.Pool I have ever seen initializes this value, and that is the only documented and consistent way to use it. So I don't find requiring initialization a sufficient reason for removing a sync primitive from the standard library.

"We would certainly like to get to this point, and the GC has improved a lot, but for high-churn allocations with obvious lifetimes and no need for zeroing, sync.Pool can still be a significant optimization. "

In the context of that post, this reads to me as saying that the GC should get to the point where it's sophisticated enough that sync.Pool is no longer needed. But that is just my interpretation; I'll defend my positions with the points I used to form them instead of inserting my views halfway through someone else's sentence.

I'll admit I didn't think I would go three comments deep into an issue for my favorite language debating the merits of improving documentation, of all things. But it's clear you're not budging here, so I'll agree to disagree. Thanks.

@as
Contributor

as commented Feb 14, 2018

@cstockton Here's an example of sync.Cond used to implement a deque.

https://github.com/golang/exp/blob/master/shiny/driver/internal/event/event.go

Disclaimer: I don't like sync.Cond

@ianlancetaylor
Contributor

I don't think anybody likes sync.Cond. That said, the cases where it is useful are:

  1. Large and varying amounts of data, so a fixed channel buffer size is wasteful or insufficient, and
  2. Large number of readers, so an additional goroutine per reader is wasteful.

On the other hand, sync.Cond cannot be used in a select statement and has no timeout, which limits its usability: in realistic situations you have to create that additional goroutine per reader anyhow.

Anyhow, I think the deque is not a bad example if we stress the ability to support a varying amount of data.
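
Condensed down, that kind of Cond-based deque might look roughly like this (a sketch, not the shiny code; Queue, Send, and NextEvent are illustrative names): the slice grows as needed, so there is no fixed buffer size to pick.

package event

import "sync"

// Queue is a Cond-based event queue: Send never blocks (the buffer grows
// as needed), and NextEvent blocks until something is available.
type Queue struct {
	mu     sync.Mutex
	cond   *sync.Cond // cond.L is &mu
	events []any
}

func NewQueue() *Queue {
	q := &Queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

// Send appends an event and wakes one waiting receiver.
func (q *Queue) Send(ev any) {
	q.mu.Lock()
	q.events = append(q.events, ev)
	q.mu.Unlock()
	q.cond.Signal()
}

// NextEvent blocks until at least one event is queued, then returns the
// oldest one. The condition is re-checked in a loop because another
// receiver may have taken the event between the Signal and the wakeup.
func (q *Queue) NextEvent() any {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.events) == 0 {
		q.cond.Wait()
	}
	ev := q.events[0]
	q.events = q.events[1:]
	return ev
}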

@bcmills
Contributor

bcmills commented Feb 14, 2018

The event.Deque implementation in shiny is interesting for a couple of reasons.

It has a couple of issues unrelated to sync.Cond, such as over-retaining memory references and creating unnecessary rolling garbage. Those can be addressed independently of Cond and channels.

It does have one (minor) issue related to sync.Cond, too: it may impose higher-than-necessary overhead on the scheduler due to the possibility of spurious wakeups. However, that's unlikely to matter in practice because the callers of (*Deque).NextEvent appear to be mostly single-threaded.

Given that the entire Shiny framework is not plumbed for contexts, sync.Cond is reasonably appropriate there. On the other hand, there are a number of related TODOs in the shiny codebase that might benefit from more responsive APIs, including context plumbing, throughout.

Finally, I'll note that the alternative using channels (https://golang.org/cl/94138) is not particularly worse, and has the advantages of both avoiding spurious wakeups and allowing for cancellation support more easily down the road.

@bcmills
Contributor

bcmills commented Feb 14, 2018

the cases where [sync.Cond] is useful are:

  1. Large and varying amounts of data, so a fixed channel buffer size is wasteful or insufficient, and
  2. Large number of readers, so an additional goroutine per reader is wasteful.

I do want to note that my alternative for the Deque example involves a 1-element buffer (arguably not “wasteful”, but you can decide whether it's “insufficient”), and does not induce any extra goroutines.

I would argue that the real cases where sync.Cond would be useful are those that mix Signal and Broadcast in interesting ways, but those are precisely the same cases where spurious wakeups and/or dropped signals are likely to cause subtle bugs.

@as
Contributor

as commented Feb 15, 2018

It is a lot more pleasant to deal with the events in Go if each has a distinct channel instead of being stored in one deque. The latter is more familiar from event-driven UI development, but the former seems more appropriate, given what channels did in earlier languages that had them.

@bcmills
Contributor

bcmills commented Dec 20, 2018

I now have lots of examples of what to do instead of sync.Cond (see Rethinking Classical Concurrency Patterns starting at slide 37).

To revisit @ianlancetaylor's specific points, based on what I learned preparing for that talk:

  1. Large and varying amounts of data, so a fixed channel buffer size is wasteful or insufficient, and
  2. Large number of readers, so an additional goroutine per reader is wasteful.

Both of those are pretty solidly addressed by using a 1-buffered channel. The items stored in the channel can refer to arbitrarily large backing structures, and readers can use a select to probe the channel without spawning an extra goroutine.

The channel does add a layer of indirection, but then so does sync.Cond itself (via the sync.Locker interface).
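
For illustration, a 1-buffered-channel version of the same unbounded queue might look roughly like this (a sketch in the spirit of the talk, not CL 94138 itself): the state lives in whichever of the two 1-buffered channels is currently full, so a receiver blocks only when the queue is genuinely empty and there are no spurious wakeups.

package event

import "context"

// Queue is an unbounded queue built from two 1-buffered channels: items
// holds the backing slice while the queue is non-empty, and empty holds a
// token while it is empty. Exactly one of the two is full at rest.
type Queue struct {
	items chan []any
	empty chan bool
}

func NewQueue() *Queue {
	q := &Queue{
		items: make(chan []any, 1),
		empty: make(chan bool, 1),
	}
	q.empty <- true
	return q
}

// Send takes whichever state token is available, appends, and stores the
// (now non-empty) slice back in items.
func (q *Queue) Send(ev any) {
	var events []any
	select {
	case events = <-q.items:
	case <-q.empty:
	}
	q.items <- append(events, ev)
}

// NextEvent blocks until the queue is non-empty or ctx is done, without
// waking receivers that cannot make progress and without extra goroutines.
func (q *Queue) NextEvent(ctx context.Context) (any, error) {
	var events []any
	select {
	case events = <-q.items:
	case <-ctx.Done():
		return nil, ctx.Err()
	}
	ev := events[0]
	if len(events) == 1 {
		q.empty <- true
	} else {
		q.items <- events[1:]
	}
	return ev, nil
}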

@abuchanan-nr

How about just a simple example, guys?

This one from SO taught me infinitely more about how sync.Cond works than the Go docs: https://stackoverflow.com/questions/36857167/how-to-correctly-use-sync-cond#

The debate about proper, real world use (if that even exists) can live in another issue, like this one: #21165

We just need something that walks people through the basic flow of using the methods. That's the foundation people can use to determine whether it makes sense to use it.
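
For instance, something as small as this would cover the basic flow (a minimal sketch, not the Stack Overflow code): one goroutine waits for a condition in a loop around Wait, and another goroutine makes the condition true under the lock and then signals.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex
	cond := sync.NewCond(&mu)
	ready := false
	done := make(chan struct{})

	go func() {
		defer close(done)
		mu.Lock()
		// Wait atomically unlocks mu and suspends this goroutine; it
		// re-locks mu before returning. The loop is required because a
		// wakeup does not by itself guarantee the condition is true.
		for !ready {
			cond.Wait()
		}
		mu.Unlock()
		fmt.Println("condition observed, proceeding")
	}()

	// Make the condition true while holding the lock, then wake a waiter.
	mu.Lock()
	ready = true
	mu.Unlock()
	cond.Signal()

	<-done
}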

@gopherbot

Change https://go.dev/cl/412237 mentions this issue: sync: add more notes about Cond behavior

gopherbot pushed a commit that referenced this issue Jun 17, 2022
Cond is difficult to use correctly (I was just bitten by it in
a production app that I inherited). While several proposals have come
up to improve or remove sync.Cond, no action has so far been taken.

Update the documentation to discourage use of sync.Cond, and point
people in the direction of preferred alternatives. I believe this will
help encourage behavior we want (less use of sync.Cond and more use of
channels), while also paving the way for, potentially, removing Cond
in a future version of the language.

Thanks very much to Bryan Mills and Sean Liao for discussion and
recommendations.

Updates #20491.
Updates #21165.

Change-Id: Ib4d0631c79d4c4d0a30027255cd43bc47cddebd3
Reviewed-on: https://go-review.googlesource.com/c/go/+/412237
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
@gopherbot gopherbot added the compiler/runtime (Issues related to the Go compiler and/or runtime) label Jul 7, 2022
@gopherbot

Change https://go.dev/cl/521596 mentions this issue: context: fix synchronization in ExampleAfterFunc_cond

gopherbot pushed a commit that referenced this issue Aug 21, 2023
Condition variables are subtle and error-prone, and this example
demonstrates exactly the sorts of problems that they introduce.
Unfortunately, we're stuck with them for the foreseeable future.

As previously implemented, this example was racy: since the callback
passed to context.AfterFunc did not lock the mutex before calling
Broadcast, it was possible for the Broadcast to occur before the
goroutine was parked in the call to Wait, causing a missed wakeup and
resulting in deadlock.

The example also had a more insidious problem: it was not safe for
multiple goroutines to call waitOnCond concurrently, but the whole
point of using a sync.Cond is generally to synchronize concurrent
goroutines. waitOnCond must use Broadcast to ensure that it wakes up
the target goroutine, but the use of Broadcast in this way would
produce spurious wakeups for all of the other goroutines waiting on
the same condition variable. Since waitOnCond did not recheck the
condition in a loop, those spurious wakeups would cause waitOnCond
to spuriously return even if its own ctx was not yet done.

Fixing the aforementioned bugs exposes a final problem, inherent to
the use of condition variables in this way. This one is a performance
problem: for N concurrent calls to waitOnCond, the resulting CPU cost
is at least O(N²). This problem cannot be addressed without either
reintroducing one of the above bugs or abandoning sync.Cond in the
example entirely. Given that this example was already published in Go
1.21, I worry that Go users may think that it is appropriate to use a
sync.Cond in conjunction with context.AfterFunc, so I have chosen to
retain the Cond-based example and document its pitfalls instead of
removing or replacing it entirely.

I described this class of bugs and performance issues — and suggested
some channel-based alternatives — in my GopherCon 2018 talk,
“Rethinking Classical Concurrency Patterns”. The section on condition
variables starts on slide 37. (https://youtu.be/5zXAHh5tJqQ?t=679)

Fixes #62180.
For #20491.

Change-Id: If987cd9d112997c56171a7ef4fccadb360bb79bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/521596
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Auto-Submit: Bryan Mills <bcmills@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Bryan Mills <bcmills@google.com>
@gopherbot

Change https://go.dev/cl/521598 mentions this issue: [release-branch.go1.21] context: fix synchronization in ExampleAfterFunc_cond

cellularmitosis pushed a commit to cellularmitosis/go that referenced this issue Aug 24, 2023
context: fix synchronization in ExampleAfterFunc_cond
gopherbot pushed a commit that referenced this issue Aug 24, 2023
[release-branch.go1.21] context: fix synchronization in ExampleAfterFunc_cond

Fixes #62189.
Updates #62180.
For #20491.

Change-Id: If987cd9d112997c56171a7ef4fccadb360bb79bc
(cherry picked from commit 1081f8c)
Reviewed-on: https://go-review.googlesource.com/c/go/+/521598