database/sql: lock contentions in DB, Stmt #9484
How would that work? The point of the css slice is to know which connections already have the statement prepared, in order to reuse it. If you put it in the driverConn, how do you find out which connection to use?
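(For context, here is a rough sketch of the bookkeeping being discussed, with simplified stand-ins for database/sql's unexported types; the real code has more fields and locking.)

```go
package main

import (
	"database/sql/driver"
	"errors"
)

// Simplified stand-ins; the real types are unexported in database/sql.
type driverConn struct{ ci driver.Conn }

type DB struct{ freeConn []*driverConn }

// conn pops a free connection from the pool (stubbed here).
func (db *DB) conn() (*driverConn, error) {
	if n := len(db.freeConn); n > 0 {
		dc := db.freeConn[n-1]
		db.freeConn = db.freeConn[:n-1]
		return dc, nil
	}
	return nil, errors.New("sketch: would open a new connection")
}

// connStmt pairs a pooled connection with the statement handle that
// was prepared on that connection.
type connStmt struct {
	dc *driverConn
	si driver.Stmt
}

// A Stmt remembers every connection it has been prepared on (css), so
// a later call can reuse the prepared handle instead of re-preparing.
type Stmt struct {
	db    *DB
	query string
	css   []connStmt // guarded by Stmt.mu in the real code
}

func main() {}
```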
One idea would be to add another mutex to DB which would only guard DB.dep. It would ease the contention a bit in this case.
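A minimal sketch of that idea, assuming a hypothetical depMu field (the real DB.mu currently guards both the pool and the dep map):

```go
package main

import "sync"

type depSet map[interface{}]bool

// Hypothetical split: depMu guards only the dependency map, so that
// addDep/removeDep no longer contend with pool operations on mu.
type DB struct {
	mu    sync.Mutex // pool state: freeConn, numOpen, ...
	depMu sync.Mutex // hypothetical: guards dep only
	dep   map[interface{}]depSet
}

func (db *DB) addDep(x, dep interface{}) {
	db.depMu.Lock()
	defer db.depMu.Unlock()
	if db.dep == nil {
		db.dep = make(map[interface{}]depSet)
	}
	s, ok := db.dep[x]
	if !ok {
		s = make(depSet)
		db.dep[x] = s
	}
	s[dep] = true
}

func main() {}
```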
Yes, I've tried it and it shows some improvement. (I'm sorry, I lost the benchmark results.)
I was talking about changing […], but I couldn't remove […].
https://go-review.googlesource.com/#/c/2206/
So would the idea then be to iterate once over the freeConn list, doing lookups into Stmt.css? I wonder if it wouldn't be simpler to just remove the first loop altogether. In other words: get a free conn from conn(), prepare the statement if it hasn't been prepared yet, or just execute if it already has been. This would likely spread the statements over the connections in the pool a bit more than the current routine does, but it's not clear that that would be a bad idea.
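A rough sketch of that simplified flow, continuing the stand-in types from the earlier sketch (locking and the ErrBadConn retry are omitted):

```go
// Simplified connStmt, per the suggestion above: take whatever free
// connection the pool hands out; reuse the prepared handle if this
// connection already has one, otherwise prepare the query now.
func (s *Stmt) connStmt() (*driverConn, driver.Stmt, error) {
	dc, err := s.db.conn() // any free (or freshly opened) connection
	if err != nil {
		return nil, nil, err
	}
	for _, cs := range s.css {
		if cs.dc == dc {
			return dc, cs.si, nil // already prepared on this connection
		}
	}
	si, err := dc.ci.Prepare(s.query) // first use on this connection
	if err != nil {
		return nil, nil, err
	}
	s.css = append(s.css, connStmt{dc, si}) // remember for reuse
	return dc, si, nil
}
```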
My current idea is this: https://gist.github.com/methane/0699f88fa6bc9f1e56de I'll benchmark it before addressing the review comments and sending it to Gerrit.
Quick benchmark.
Environment: two c3.8xlarge instances in the same placement group (10 Gbit Ethernet). The web app and wrk run on one machine; MySQL runs on the other.
Results: splitting the mutex gives a significant improvement, but this patch makes things slower, since iterating over a map is slow. I'll try another approach. FYI, using multiple DBs to avoid the lock contention reaches 16k req/sec.
Can you try what I suggested, i.e. not calling connIfFreePrepared() at all? If that's fast enough, perhaps we can figure out a way to clean up dead connections from Stmt.css. It would also be interesting to see that approach with Stmt.css being a slice vs. it being a map.
An idea for cleaning up: store an atomic counter in *DB, and increment it in driverConn.Close(). When creating a Stmt, store the current value of the counter inside the Stmt. In Stmt.connStmt(), compare the current value of the counter to the stored value. If the difference is bigger than N, iterate over css, clean up dead connections, and store the new value of the counter in the Stmt. This way, if no connections have recently died, we only need to do one atomic fetch plus a comparison in connStmt(). Additionally, only one execution has to pay the price of cleaning up, since the next one will see the counter having increased.
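A sketch of that scheme (uses sync/atomic), continuing the stand-ins above and extending them with a closed flag on driverConn, a numClosed counter on DB, and mu/lastNumClosed on Stmt; all names and the threshold are illustrative, not a final design:

```go
// Assumed extensions to the stand-in types:
//   DB:         numClosed uint64
//   driverConn: db *DB; closed bool
//   Stmt:       mu sync.Mutex; lastNumClosed uint64

// Close bumps the counter: one atomic add per dead connection.
func (dc *driverConn) Close() error {
	dc.closed = true
	atomic.AddUint64(&dc.db.numClosed, 1)
	return dc.ci.Close()
}

const cleanupThreshold = 16 // illustrative value of N

// removeClosedStmtLocked sweeps dead connections out of css, but only
// when at least cleanupThreshold connections have closed since the
// last sweep. The fast path is one atomic load plus a comparison.
// Caller holds s.mu.
func (s *Stmt) removeClosedStmtLocked() {
	n := atomic.LoadUint64(&s.db.numClosed)
	if n-s.lastNumClosed < cleanupThreshold {
		return
	}
	live := s.css[:0]
	for _, cs := range s.css {
		if !cs.dc.closed {
			live = append(live, cs)
		}
	}
	s.css = live
	s.lastNumClosed = n // updated only when a sweep actually ran
}
```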
@johto That looks like a nice idea!
Benchmark updated: https://gist.github.com/methane/1341a204256e5dab52ac
I've sent a patch: https://go-review.googlesource.com/#/c/2207/
I'd personally have used sync/atomic for it, to avoid having to grab the DB mutex when not doing the cleanup. Also, shouldn't lastNumClosed only be assigned to when the cleanup is actually done? The current coding seems incorrect.
I've used DB.mu since the cleanup uses db.numOpen as its threshold.
You're right. I'll fix it.
The block profile says that the main bottleneck is in connStmt. The code is quite complex, and I can't understand what it is trying to do, or which parts of that complexity are inherent and which are not. Can somebody describe what connStmt/putConn are trying to achieve? I would expect connStmt to be a pop from a stack/queue and putConn to be a push onto the stack/queue, with the rest designed around that idea. E.g. if a connection can't be used, then it must not be in the stack/queue.
@bradfitz can you describe the additional requirements for connStmt/putConn, besides just popping and pushing a connection?
Yeah, that's pretty much it. The code in HEAD does some additional work: […]
That's this loop. Other than that loop, I would say the code is straightforward. The suggested change is to make that cleanup happen less often, and to spend fewer resources in the cleanup loop. That also means it doesn't try to prefer connections which already have the statement prepared, but I don't consider that a problem. Another alternative would be to go through all the statements in driverConn.finalClose() and remove the connection from the css list (sketched below). It's not obvious that that would be better, though.
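A rough sketch of that alternative, again on the stand-in types (openStmt here is a hypothetical per-connection set of statements; it is not a field in the current code):

```go
// finalClose, sketched: before the underlying connection goes away,
// scrub this connection out of every Stmt prepared on it, so css
// never accumulates dead entries.
func (dc *driverConn) finalClose() error {
	for stmt := range dc.openStmt { // hypothetical: map[*Stmt]bool
		stmt.removeConn(dc)
	}
	return dc.ci.Close()
}

// removeConn drops one connection's entry from the statement's css.
func (s *Stmt) removeConn(dc *driverConn) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for i, cs := range s.css {
		if cs.dc == dc {
			s.css = append(s.css[:i], s.css[i+1:]...)
			break
		}
	}
}
```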
Thanks to all reviewers for persevering with the code review.
Lock contention happens when using the same prepared statement concurrently. I confirmed it with 128 concurrent HTTP accesses.
Issue happened here: TechEmpower/FrameworkBenchmarks#1164
And stackdump: https://gist.github.com/methane/dd3c3f8de8128730bd63
There are some issues:
1. db.mu contention
stmt.Query() uses addDep and removeDep; both lock db.mu.
stmt.connStmt() calls db.connIfFree() multiple times, and each call locks and unlocks db.mu.
2. stmt.connStmt() holds stmt.mu for a long time
Stmt.css may be fairly long (e.g. 128 entries).
https://github.com/golang/go/blob/master/src/database/sql/sql.go#L1375-L1388
This loop takes a fairly long time; see the sketch below.
Moving the db.mu locking from DB.connIfFree to just before the loop makes it a bit faster, but I think moving Stmt.css into driverConn would be better still.
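For reference, the loop in question has roughly this shape (paraphrased from the linked sql.go; surrounding code and error handling trimmed):

```go
// First loop of Stmt.connStmt: scan every connection the statement
// was ever prepared on, asking the pool whether that connection is
// free. Each connIfFree call locks and unlocks db.mu, so with ~128
// entries and 128 concurrent callers this is where contention shows.
for i := 0; i < len(s.css); i++ {
	v := s.css[i]
	_, err := s.db.connIfFree(v.dc)
	if err == nil {
		return v.dc, v.si, nil // reuse the already-prepared statement
	}
	if err == driver.ErrBadConn {
		// lazily remove the dead conn from css
		s.css[i] = s.css[len(s.css)-1]
		s.css = s.css[:len(s.css)-1]
		i--
	}
}
```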