net/rpc/jsonrpc: Dial: incorrect result returned after multiple dials #2690
I've found that the problem is probably in net.DialTCP: after some number of dials it returns strange connections, which likely causes these problems. I changed the rpc.Dial code to:

```go
tcpAddr, _ := net.ResolveTCPAddr("tcp", "localhost:50001")
conn, err := net.DialTCP("tcp", nil, tcpAddr)
dialsCount++
if err == nil {
	fmt.Printf("Got unexpected result after %d dials: %v %v\n", dialsCount, conn, tcpAddr)
	...
```

and got the same result:

```
imp@imp:~/Projects/temp/go/2012/jsonrpc_bug/rpcbug$ ./rpcbug
Got unexpected result after 150410 dials: &{0xf8400bd140} 127.0.0.1:50001
```
Owner changed to builder@golang.org.
Reproducing under Arch (Linux 3.1.9, at tip), running connect attempts in multiple goroutines (code attached), I observe that the calls to DialTCP that return an unexpected non-error take an unusually long time (often >10 seconds).
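The attachment itself isn't reproduced in this thread; a minimal sketch of that multi-goroutine reproducer might look like the following (the `hammer` helper, worker count, and bounded iteration count are my own choices, not the original attachment's):

```go
package main

import (
	"fmt"
	"net"
	"sync"
	"sync/atomic"
)

// hammer dials addr from several goroutines in parallel and returns the
// number of dials that unexpectedly succeeded. No listener is expected on
// addr, so every attempt should fail with "connection refused"; a success
// indicates the self-connection behavior discussed in this issue.
func hammer(addr string, workers, perWorker int) int64 {
	var unexpected int64
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				c, err := net.Dial("tcp", addr)
				if err != nil {
					continue // expected: connection refused
				}
				atomic.AddInt64(&unexpected, 1)
				fmt.Printf("unexpected success: %v -> %v\n",
					c.LocalAddr(), c.RemoteAddr())
				c.Close()
			}
		}()
	}
	wg.Wait()
	return unexpected
}

func main() {
	// The original report used port 50001 and ran until the bug surfaced;
	// the iteration count is bounded here so the sketch terminates.
	n := hammer("127.0.0.1:50001", 8, 1000)
	fmt.Println("unexpected successes:", n)
}
```

Whether and how quickly a self-connection appears depends on the kernel's ephemeral-port range including the target port, so the sketch may legitimately report zero successes on many systems.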
Aha, I think we can close this issue as WorkingAsIntended: http://stackoverflow.com/questions/4949858/how-can-you-have-a-tcp-connection-back-to-the-same-port
And as to the original topic, I think at least (json)rpc.Dial should somehow check whether the connection is invalid and return nil and an error, not a working client. Currently I have to manually check the returned object against the "&{{0 0} {0 0} { 0 <nil>} 0 ... map[] false false}" pattern to find out that it is actually a 'fake' client that I shouldn't use. But such a check seems inconvenient and incorrect.
#12: I don't think we can determine whether it is expected or not, even if we can list all IP interface addresses on a target node, because that depends on the user, the caller of DialTCP. #11: I'm happy if you can fix the issue for json-rpc. #10: I'm also happy if you can proceed to fix the comment regarding "simultaneous TCP active open causes the problem".
A small update: I've checked the 'normal' connections and they have the same pattern, so checking for patterns is actually not an option.

I followed the link you gave, and I see that this problem occurs when the client gets the same local and foreign addresses. But you can actually check for that equality, and perhaps offer a special option for a client that deliberately wants to "connect to itself"; in all other cases, perform an additional connect attempt when the two addresses turn out equal. Or any other solution, but I really doubt that this is currently working as intended.

It causes very serious effects: for example, I try to connect to a service which is not up yet, and after some connection attempts I get a seemingly good and ready rpc.Client. When I call some methods using that client, I get a panic in another goroutine, which crashes the whole application. This means that if the service is temporarily down, it can crash the front-end, because there is absolutely no way to catch a panic in another goroutine after a Call on a 'fake' client that cannot be identified as fake. So a solution is still needed.
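One possible shape for the suggested "check equality and retry" behavior, sketched as a user-level wrapper rather than the fix that actually landed in the library (the `dialAvoidSelf` helper and its retry policy are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

var errSelfConnect = errors.New("dial: connected to self (TCP simultaneous open)")

// dialAvoidSelf wraps net.Dial and treats a self-connected socket as a
// failure: if the local and remote endpoints coincide, the connection is
// closed and the dial is retried instead of being handed to the caller.
func dialAvoidSelf(network, address string, retries int) (net.Conn, error) {
	var lastErr error
	for i := 0; i <= retries; i++ {
		c, err := net.Dial(network, address)
		if err != nil {
			lastErr = err
			continue
		}
		if c.LocalAddr().String() == c.RemoteAddr().String() {
			c.Close()
			lastErr = errSelfConnect
			continue
		}
		return c, nil
	}
	return nil, lastErr
}

func main() {
	// With no listener on the port, every attempt should fail with
	// "connection refused" (or, rarely, errSelfConnect), so the caller
	// gets a real error instead of a bogus working connection.
	_, err := dialAvoidSelf("tcp", "127.0.0.1:50001", 3)
	fmt.Println("dial failed:", err != nil)
}
```

A caller building an rpc.Client would dial through such a wrapper first and only pass a vetted net.Conn to jsonrpc.NewClient.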
Ouch. Will detect and kill. Owner changed to @rsc. Status changed to Started.
This issue was closed by revision cbe7d8d. Status changed to Fixed.
Original report by Bond.Dmitry.