runtime: make goid and goidgen 64 bits #4275
Can we remove the goid entirely? For the race detector I want to add an additional pointer to the race-detector context to the G struct. Using goids is inconvenient and slow. Currently there is a 10^6 limit on the total number of goroutines under the race detector; it is needed for a fast direct mapping between goids and internal contexts, but I've seen a test that creates more goroutines than that. If we add an explicit race-context pointer to G, the race detector no longer needs goids at all.

For the scheduler patch (https://golang.org/cl/6441097) I had to effectively remove goids. A single atomic increment on a shared variable can severely impact scalability: it limits the rate at which goroutines can be created (otherwise they can be allocated from a local cache, scheduled on a local queue, and then deallocated back to the local cache). I started out using the G pointer as the identifier in crash dumps, and then switched to "persistent" goids. That is, a unique goid is assigned to the G struct when it is first allocated, and it is not updated when the G is reused.

So I would prefer to:
1. Add an explicit race context to G.
2. Switch to "persistent" goids. They should not overflow.
What do you mean by "incrementing never-used-before goids"? Do you mean the same way they work now? If so, that looks like a rather high price for printing stack traces. Once we have Processors, we can get unique ids by combining the Processor id (16 bits) with a monotonically increasing per-Processor counter (48 bits).
This issue was closed by revision 320df44. Status changed to Fixed. |