sync: add atomic UnlockAndRLock to RWMutex #4026

Closed
dsymonds opened this issue Sep 1, 2012 · 22 comments

@dsymonds (Contributor) commented Sep 1, 2012

A couple of times I've run into a situation where I hold an RLock on an RWMutex, then
need the Lock to modify the thing it is protecting, and then want to go back to an
RLock to keep using it. A typical example: grab the RLock to examine cached data, notice
it is stale, and then need the Lock to update it.

Unfortunately the obvious approach (RUnlock + Lock, then modify, then Unlock + RLock) is
racy because the condition that caused you to promote to a write lock may have changed
between the Unlock and the RLock. This necessitates a loop structure.

If there were Promote and Demote methods then that shuffle could be avoided. The
semantics are:
  - Promote is equivalent to mu.Lock() with a concurrent mu.RUnlock(), where the Lock happens-before the RUnlock.
  - Demote is equivalent to mu.RLock() with a concurrent mu.Unlock(), where the RLock happens-before the Unlock.
@remyoudompheng (Contributor)

Comment 1:

I don't completely understand your semantics: for example, if two concurrent readers
(holding RLock) call Promote, they will both call Lock before RUnlock, isn't that a
deadlock?

@dsymonds (Contributor, Author) commented Sep 1, 2012

Comment 2:

As with a normal lock, two competing goroutines calling Lock will result in
exactly one getting the lock and the other blocking.

@rsc (Contributor) commented Sep 1, 2012

Comment 3:

It might be nice to have a method to downgrade from a wlock to an rlock atomically. I
will defer to Dmitry on that.
As Remy points out, a method to upgrade from an rlock to a wlock atomically is not
possible. Having a wlock means no one else has an rlock, so it is impossible to satisfy
this sequence:
   - goroutine 1 rlocks
   - goroutine 2 rlocks
   - goroutine 1 calls upgrade
   - goroutine 2 calls upgrade
Goroutine 1 cannot be upgraded to a wlock until goroutine 2 releases the rlock, and vice
versa. Deadlock.

Labels changed: added priority-later, removed priority-triage.

@dvyukov (Member) commented Sep 1, 2012

Comment 4:

>Unfortunately the obvious approach (RUnlock + Lock, then modify, then Unlock + RLock)
is racy because the condition that caused you to promote to a write lock may have
changed between the Unlock and the RLock. This necessitates a loop structure.
You usually do:
RLock()
if trysearch() {
  RUnlock()
  return
}
RUnlock()
Lock()
if trysearch() {
  Unlock()
  return
}
mutate()
Unlock()

@dvyukov (Member) commented Sep 1, 2012

Comment 5:

>It might be nice to have a method to downgrade from a wlock to an rlock atomically.
What is the use case? I can't think of any...
I think it's possible to implement it, and it will cut 1 atomic RMW, but I am not sure
whether it's worth doing just because of that.

@rsc (Contributor) commented Sep 1, 2012

Comment 6:

The situation where it came up recently was a list that is sorted only
lazily during a read. We assume adds are infrequent and we want the
reads to be able to run concurrently most of the time:
type T struct {
    mu sync.RWMutex
    list []int
    sorted bool
}
func (t *T) add(x int) {
    t.mu.Lock()
    t.list = append(t.list, x)
    t.sorted = false
    t.mu.Unlock()
}
func (t *T) read(f func(int)) {
    t.mu.RLock()
    for !t.sorted {
        t.mu.RUnlock()
        t.mu.Lock()
        if !t.sorted {
            sort.Ints(t.list)
            t.sorted = true
        }
        t.mu.Unlock()
        t.mu.RLock()
    }
    for _, x := range t.list {
        f(x)
    }
    t.mu.RUnlock()
}
It says 'for !t.sorted' instead of 'if !t.sorted' to avoid a race
where an add comes in between the Unlock and the RLock at the bottom
of the loop body. With an atomic UnlockAndRLock method, the Unlock +
RLock sequence could become a single call and the for !t.sorted would
become if !t.sorted - no need for a retry loop.
I don't know if it's a good idea to provide, but that is how it came up.
Russ

@dvyukov (Member) commented Sep 1, 2012

Comment 7:

I see. I would wait on this until we have more use cases. It looks more like a corner
case, and performance should not be significantly affected (because adds are
relatively infrequent). What do you think?

@dsymonds (Contributor, Author) commented Sep 1, 2012

Comment 8:

I see it as less of a performance issue and more of a correctness
issue. I feel I have a reasonable grasp of using sync.RWMutex
correctly, but I was writing the code that rsc is referring to and I
got it wrong. I'd wager I'm better than the average Go programmer, so
that means the majority of programmers have a good chance of getting
it wrong.
I agree it's not a common use case. I don't know how to evaluate that.
But if you're using a RWMutex, it seems like converting a read lock to
a write lock (or vice versa) is not particularly strange, and so
providing a way to do it correctly seems like a worthwhile extension
to the API.
I don't feel particularly strongly either way, and am happy to defer
to Russ and Dmitry.
(I do think it's still possible to promote a read lock to a write lock
and have sensible semantics: calling Promote would exclude future
(read and write) locks, relinquish their existing read lock, and would
grab a write lock when the remaining read locks are released. Anyone
else doing a concurrent Promote would also relinquish their existing
read lock, and would join a queue to get the write lock after the
first guy releases his write lock. It just requires a bit of extra
accounting.)

@dvyukov (Member) commented Sep 1, 2012

Comment 9:

But for the second guy RUnlockAndLock() won't be atomic, and so it will be
even more confusing and error-prone, because when you do RUnlock(); Lock()
at least you see that non-atomic possibility in the code.

@rsc (Contributor) commented Sep 1, 2012

Comment 10:

The problem with mutexes is that things can seem not particularly strange and yet still
be strange.
I think RUnlockAndLock is off the table: an atomic RUnlockAndLock is impossible, and a
non-atomic one might as well be written as two calls so it's clearly not atomic.
An atomic UnlockAndRLock would provide new functionality and perhaps simplify an
occasional bit of code, but it's a rare case. Will leave this open low-priority in case
it becomes more common.

Labels changed: added priority-someday, removed priority-later.

Status changed to LongTerm.

@cznic (Contributor) commented Sep 4, 2012

Comment 11:

I have run into the promotion-would-be-useful-here situation more than once, though I
admit I hadn't realized the deadlock problem (cf. #3). Thinking about it again now,
what I would actually like to use is a shared-with-intent lock, which I think doesn't
suffer from the deadlock problem. See:
http://en.wikipedia.org/wiki/Multiple_granularity_locking
AFAICS, RWMutex is a subset of a MGL when/where S==R and X==W. I think the MGL is not a
terribly complicated state machine and I guess it's feasible to hack it on top of
existing primitives. Still it's not trivial to code (and test!) properly, so having a
reliable, ready made solution in the stdlib would be IMO pretty reasonable.

@dvyukov (Member) commented Sep 4, 2012

Comment 12:

You still can't upgrade S->X. So how does it solve the problem?

@cznic (Contributor) commented Sep 4, 2012

Comment 13:

That's the point of the MGL, you can't do that. The solution is to use SIX instead of S
when your intents are to possibly promote to X. SIX is not compatible with other SIX,
but is compatible with S. So on SIX->X promotion, the owner of SIX waits only for
other's S to release and then he is known to be alone to go ->X (hence no deadlock
possible here).
It may or may not make anything faster. It may make some code paths much simpler: If you
find under R, that you need W, you have to release R and acquire W. Meanwhile state may
have changed. On the path SIX->X that cannot happen. Sometimes a big win.

@dvyukov (Member) commented Sep 4, 2012

Comment 14:

Hmm... SIX is incompatible with SIX and X, so for Russ's example it won't allow any
concurrency at all, thus making it all senseless. At least we need other use cases
(where S is still frequent but SIX is infrequent), not saying that all looks overly
complicated.
Btw, I am not sure why they say SIX is incompatible with S, it must be compatible
(unless I misunderstood something).

@cznic (Contributor) commented Sep 4, 2012

Comment 15:

MGL will not allow any more concurrency than RWM does now, sometimes even less. It just
enables the promotion in a deadlock-free way *and* preserves the MGL-protected state
while promoting/waiting for promotion. The latter is IMO actually the important
property.
A real life anti-example of where I could use it:
https://github.com/cznic/strutil/blob/master/strutil.go#L261
"Anti" because here it would make code simpler - but slower on average. So MGL *can*
make for poor code. OTOH, the state consistency may be priceless elsewhere or even the
solution enabling condition.
On the locking table: on second look I now don't understand it either. Maybe R is
not S but actually IS, and where I wrote SIX there should be IX instead. Also, then I
would have to unlearn what I thought SIX was for years. Confused now (myself B-)

@dvyukov (Member) commented Sep 4, 2012

Comment 16:

For both your and Russ's examples SIX is inferior to simple Mutex, it allows the same
concurrency level (i.e. none) but slower and more complicated (for end user). I do not
see the point.

@rsc (Contributor) commented Jul 30, 2013

Comment 17:

Status changed to WontFix.

@alberts (Contributor) commented Jul 30, 2013

Comment 18:

@rsc: Could you elaborate on why atomic RUnlockAndLock is impossible (for future
generations)?

@dvyukov (Member) commented Jul 30, 2013

Comment 19:

Just call RUnlock and Lock.

@rsc (Contributor) commented Jul 30, 2013

Comment 20:

@fullung, see comment #3.

@CAFxX (Contributor) commented Aug 11, 2014

Comment 21:

@rsc actually the upgrade from RLock to Lock should be possible if both:
- Mutex supported TryLock (or another way to lock the mutex if not already locked)
- the upgrade operation was allowed to fail in case the upgrade would lead to deadlock
To reuse the example in comment #3 (assume there are no pending writers):
   - goroutine 1 rlocks
   - goroutine 2 rlocks
   - goroutine 1 calls upgrade -> no pending writers, w.TryLock succeeds, wait for other readers to release their RLocks
   - goroutine 2 calls upgrade -> pending writers, w.TryLock fails: reject upgrade, deadlock avoided
Sure, users will still have to code the fallback path - but at least for uncontended
locks it should be faster.

@davecheney (Contributor)

Comment 22:

Cafxx, this issue is closed. Please move the debate to the mailing list if you wish to
continue the discussion.

@golang golang locked and limited conversation to collaborators Jun 24, 2016
This issue was closed.