net/http: ServeMux benchmark regression #25383

Closed
quasilyte opened this issue May 14, 2018 · 14 comments
@quasilyte
Contributor

quasilyte commented May 14, 2018

There is a notable difference between Go 1.10.2 and tip in the BenchmarkServeMux results:

name        old time/op    new time/op    delta
ServeMux-4     127µs ± 1%     156µs ± 1%  +23.04%  (p=0.008 n=5+5)

On another machine:
ServeMux-4       143µs ±11%      165µs ± 7%    +15.44%  (p=0.008 n=5+5)

And on another machine:
ServeMux-8    85.1µs ± 3%    95.0µs ± 6%  +11.61%  (p=0.008 n=5+5)

The most significant impact comes from the additional synchronization added in 2fd1b52. The commit message lacks benchmark info, so I think it's worthwhile to report the regression anyway and decide whether it can be addressed (we can't simply remove the locking if it's needed there, but there may be other options).

There are other causes of slowdown as well: the commit immediately preceding 2fd1b52 is still slower than Go 1.10.2 (though the delta is more tolerable):

name        old time/op    new time/op    delta
ServeMux-4     127µs ± 1%     135µs ± 1%  +6.43%  (p=0.008 n=5+5)
@mvdan
Member

mvdan commented May 14, 2018

It seems like the commit you're pointing to was a race bugfix, so it's not the type of performance regression that can be easily fixed or reverted. Do you have any ideas on how to get the numbers back to 1.10 levels?

@quasilyte
Contributor Author

quasilyte commented May 14, 2018

@mvdan, I'm not familiar with net/http at all, so I can't really propose a solution or evaluate a submitted one. This is why I added this to the description:

The commit message lacks benchmarks info, so I think it's worthwhile to report regression anyway and decide whether it can be addressed or not (we can't just remove locking if it's needed there, but there can be other options).

I've reported it for completeness, as it showed a pretty high delta.
It's better to know the impact than not.
Usually it's better when CLs that affect performance in this way include benchstat info, but this one didn't, so this report supplies the missing numbers.

@OneOfOne
Contributor

OneOfOne commented May 14, 2018

@mvdan it's the defer; some goto hackery should bring it pretty close to the old numbers.

If that's acceptable, I can push a PR sometime this week (or even better, if you want to do it ;)).
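For context, here is a minimal sketch of the kind of change being discussed, with hypothetical names rather than the actual net/http code: replacing a deferred unlock with explicit unlocks on each return path, since defer carried measurable overhead on hot paths at the time.

```go
package muxsketch

import "sync"

// registry is a hypothetical stand-in for a read-mostly, mutex-protected map,
// loosely shaped like the code under discussion.
type registry struct {
	mu sync.RWMutex
	m  map[string]bool
}

// lookupDeferred takes the read lock and releases it with defer.
// The defer is convenient but added noticeable overhead on a hot path
// with the Go toolchain of that era.
func (r *registry) lookupDeferred(key string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.m[key]
}

// lookupExplicit releases the lock explicitly before returning, avoiding
// defer at the cost of having to unlock on every return path.
func (r *registry) lookupExplicit(key string) bool {
	r.mu.RLock()
	ok := r.m[key]
	r.mu.RUnlock()
	return ok
}
```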

@gopherbot

Change https://golang.org/cl/113996 mentions this issue: net/http: avoid defer unlock in ServeMux.shouldRedirect

@odeke-em odeke-em self-assigned this May 21, 2018
@odeke-em odeke-em added the Soon This needs to be done soon. (regressions, serious bugs, outages) label May 21, 2018
@odeke-em odeke-em added this to the Go1.11 milestone May 21, 2018
@bradfitz bradfitz added Performance and removed Soon This needs to be done soon. (regressions, serious bugs, outages) labels Jun 5, 2018
@gopherbot

Change https://golang.org/cl/116378 mentions this issue: net/http: remove an allocation in ServeMux

@bradfitz
Contributor

bradfitz commented Jun 6, 2018

@odeke-em, want to post an update on this now that 116378 is submitted? Or bump to Go 1.12 or Unplanned?

gopherbot pushed a commit that referenced this issue Jun 6, 2018
net/http: remove an allocation in ServeMux

Also, add a benchmark variant ("SkipServe") that only benchmarks the
ServeMux handler selection path.

name                  old time/op    new time/op    delta
ServeMux_SkipServe-4    74.2µs ± 2%    60.6µs ± 1%   -18.31%   (p=0.000 n=10+9)

name                  old alloc/op   new alloc/op   delta
ServeMux_SkipServe-4    2.62kB ± 0%   0.00kB ±NaN%  -100.00%  (p=0.000 n=10+10)

name                  old allocs/op  new allocs/op  delta
ServeMux_SkipServe-4       180 ± 0%        0 ±NaN%  -100.00%  (p=0.000 n=10+10)

Updates #25383

Change-Id: Icfbb3b977e309093d032e922d1b4f254df6f5955
Reviewed-on: https://go-review.googlesource.com/116378
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
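A rough sketch of what a handler-selection-only benchmark can look like, using the exported ServeMux.Handler method; this is an illustration of the idea, not the actual benchmark added in the CL:

```go
package mux_test

import (
	"net/http"
	"testing"
)

// BenchmarkServeMuxSkipServe exercises only the ServeMux pattern-matching
// path via mux.Handler, without invoking the selected handler.
func BenchmarkServeMuxSkipServe(b *testing.B) {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {})
	mux.HandleFunc("/index", func(w http.ResponseWriter, r *http.Request) {})
	mux.HandleFunc("/home/", func(w http.ResponseWriter, r *http.Request) {})

	req, err := http.NewRequest("GET", "http://example.com/home/user", nil)
	if err != nil {
		b.Fatal(err)
	}

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Handler performs pattern matching without calling the handler.
		if h, _ := mux.Handler(req); h == nil {
			b.Fatal("no handler matched")
		}
	}
}
```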
@odeke-em
Member

@bradfitz thanks for the ping.

So after CL 116378, we still have that regression from the newly introduced defer. I think we meant 116378 for #25482 instead :)

CL 113996 removes the defer that is the subject of this issue, but it seems you don't like its approach.

@bradfitz bradfitz modified the milestones: Go1.11, Go1.12 Jun 21, 2018
gopherbot pushed a commit that referenced this issue Jun 21, 2018
net/http: avoid defer unlock in ServeMux.shouldRedirect

CL 96575 introduced concurrency protection for
ServeMux.shouldRedirect with a read lock and deferred unlock.
However, the change produced a noticeable regression.
Instead add the suffix "RLocked" to the function name to
declare that we should hold the read lock as a pre-requisite
before calling it, hence avoiding the defer altogether.

Benchmarks:
name                  old time/op    new time/op    delta
ServeMux-8              63.3µs ± 0%    54.6µs ± 0%  -13.74%  (p=0.000 n=9+9)
ServeMux_SkipServe-8    41.4µs ± 2%    32.7µs ± 1%  -21.05%  (p=0.000 n=10+10)

name                  old alloc/op   new alloc/op   delta
ServeMux-8              17.3kB ± 0%    17.3kB ± 0%     ~     (all equal)
ServeMux_SkipServe-8     0.00B          0.00B          ~     (all equal)

name                  old allocs/op  new allocs/op  delta
ServeMux-8                 360 ± 0%       360 ± 0%     ~     (all equal)
ServeMux_SkipServe-8      0.00           0.00          ~     (all equal)

Updates #25383
Updates #25482

Change-Id: I2ffa4eafe165faa961ce23bd29b5653a89facbc2
Reviewed-on: https://go-review.googlesource.com/113996
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
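The naming convention described in that commit is a common Go pattern: a function whose name ends in "Locked" or "RLocked" documents that the caller must already hold the lock. Here is a minimal sketch with hypothetical types, not the actual net/http implementation:

```go
package muxsketch

import "sync"

type serveMux struct {
	mu sync.RWMutex
	m  map[string]string // pattern -> handler name, a stand-in for http.Handler
}

// shouldRedirectRLocked reports whether path should get a trailing-slash
// redirect. The RLocked suffix declares that mux.mu must already be
// read-locked by the caller; the function itself takes no locks and
// therefore needs no defer.
func (mux *serveMux) shouldRedirectRLocked(path string) bool {
	if _, exact := mux.m[path]; exact {
		return false
	}
	_, withSlash := mux.m[path+"/"]
	return withSlash
}

// handler is the call site that takes the read lock once, calls the
// RLocked helper, and unlocks explicitly on each return path.
func (mux *serveMux) handler(path string) (string, bool) {
	mux.mu.RLock()
	if mux.shouldRedirectRLocked(path) {
		mux.mu.RUnlock()
		return "", false
	}
	h, ok := mux.m[path]
	mux.mu.RUnlock()
	return h, ok
}
```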
@agnivade
Contributor

agnivade commented Nov 6, 2018

@odeke-em - Just wanted to check on this. I just re-ran the benchmarks on current tip (510eea2) and 1.10.2:

name        old time/op    new time/op    delta
ServeMux-4    85.5µs ± 1%    78.4µs ± 1%   -8.26%  (p=0.008 n=5+5)

name        old alloc/op   new alloc/op   delta
ServeMux-4    19.9kB ± 0%    17.3kB ± 0%  -13.18%  (p=0.008 n=5+5)

name        old allocs/op  new allocs/op  delta
ServeMux-4       540 ± 0%       360 ± 0%  -33.33%  (p=0.008 n=5+5)

It seems we have a measurable improvement. Is there still anything else you would like to do on this?

@odeke-em
Member

odeke-em commented Nov 10, 2018

Thank you for the reminder @agnivade!

@carl-mastrangelo's nice CL https://go-review.googlesource.com/c/144937 (commit 1645dfa) massively levelled the ground, so we are good now. I had an almost similar change doing longest-prefix matching by sorting on length, but it didn't get the wins his CL did. Cool stuff, @carl-mastrangelo!

With that, I am going to close this issue; @bradfitz or anyone else, please feel free to reopen or bump to Unplanned.
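For reference, a sketch of the sort-by-length idea mentioned in the comment above: keep patterns sorted from longest to shortest so that the first prefix match found is automatically the longest one. This is a generic illustration, not the code from CL 144937.

```go
package muxsketch

import (
	"sort"
	"strings"
)

// entry pairs a registered pattern with its handler name (a stand-in
// for an http.Handler in this sketch).
type entry struct {
	pattern string
	handler string
}

// sortByLength orders entries from longest to shortest pattern so that a
// linear scan returns the longest matching prefix first.
func sortByLength(es []entry) {
	sort.Slice(es, func(i, j int) bool {
		return len(es[i].pattern) > len(es[j].pattern)
	})
}

// match returns the handler of the longest pattern that is a prefix of
// path, assuming es has already been sorted by sortByLength.
func match(es []entry, path string) (string, bool) {
	for _, e := range es {
		if strings.HasPrefix(path, e.pattern) {
			return e.handler, true
		}
	}
	return "", false
}
```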

@carl-mastrangelo
Contributor

@odeke-em I appreciate the call out, but I don't believe my change fixed this. I sent out https://go-review.googlesource.com/c/go/+/149377 which brings the latency down to the pre-lock levels.
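As an aside, one common way to take locks off a read-mostly hot path is copy-on-write: readers load an immutable snapshot through atomic.Value while writers rebuild and republish it under a mutex. The sketch below is a generic illustration of that pattern and makes no claim about how CL 149377 actually works.

```go
package muxsketch

import (
	"sync"
	"sync/atomic"
)

// cowMux serves lookups without any read-side locking by publishing an
// immutable map snapshot through an atomic.Value.
type cowMux struct {
	mu sync.Mutex   // serializes writers only
	v  atomic.Value // always holds a map[string]string
}

func newCOWMux() *cowMux {
	m := &cowMux{}
	m.v.Store(map[string]string{})
	return m
}

// lookup reads the current snapshot without taking any lock.
func (m *cowMux) lookup(pattern string) (string, bool) {
	h, ok := m.v.Load().(map[string]string)[pattern]
	return h, ok
}

// register copies the current snapshot, adds the entry, and atomically
// publishes the new map. Registration is rare, so the copy is cheap
// relative to the lock-free lookups it enables.
func (m *cowMux) register(pattern, handler string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	old := m.v.Load().(map[string]string)
	next := make(map[string]string, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[pattern] = handler
	m.v.Store(next)
}
```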

@bradfitz bradfitz reopened this Nov 13, 2018
@bradfitz bradfitz modified the milestones: Go1.12, Unplanned Nov 13, 2018
@odeke-em
Member

I appreciate the call out, but I don't believe my change fixed this. I sent out https://go-review.googlesource.com/c/go/+/149377 which brings the latency down to the pre-lock levels.

@carl-mastrangelo, thanks for the humility! For clarity, I should have spelled out that this issue was addressed by a collection of fixes:

- @quasilyte: https://go-review.googlesource.com/c/go/+/113235
- @bradfitz: https://go-review.googlesource.com/c/go/+/116378/
- @carl-mastrangelo: https://go-review.googlesource.com/c/go/+/149377
- Myself: https://go-review.googlesource.com/c/go/+/113996/

and others that I might not have seen, which brought the numbers down to the levels @agnivade reported in #25383 (comment).

@agnivade
Contributor

The issue originally was a performance regression, which is now fixed (and even improved upon). Changes were made in other areas as well, but they ultimately improved performance too. I think this issue can be closed, because the regression no longer exists.

Otherwise, we should re-title it to "net/http: remove mutex locks in ServeMux" because that is what this should be about.

@odeke-em
Member

I replied before I noticed that @bradfitz reopened it. @carl-mastrangelo was just stating that he didn't think his CL fixed this issue, but I explained why I mentioned his CL and how it contributed to bringing the regression down in #25383 (comment).

@bradfitz, could you please reconsider closing this issue? It was about a regression that has since been erased, as mentioned in #25383 (comment) and also just now by @agnivade in #25383 (comment).

@bradfitz
Contributor

Sure. Carl, file a new bug if you want to track lock removal.

@golang golang locked and limited conversation to collaborators Nov 13, 2019