runtime: should traceTickDiv be different for different architectures? #10554

Open
minux opened this issue Apr 23, 2015 · 11 comments
Labels
compiler/runtime Issues related to the Go compiler and/or runtime.
Comments

@minux
Member

minux commented Apr 23, 2015

It's documented that "Timestamps in trace are cputicks/traceTickDiv" and
that "64 is somewhat arbitrary (one tick is ~20ns on a 3GHz machine)."

That is fine for x86 CPUs, where cputicks increments once per CPU clock cycle,
but on ppc64x cputicks may advance only at the time base (bus) frequency.
For example, on a PowerPC G5 it increments at roughly 300MHz, so if we keep
traceTickDiv=64 there, each trace tick is not ~20ns but ~200ns.

Will that be a concern?

/cc @dvyukov
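
A rough sketch (illustrative Go, not runtime code; the 3GHz and 300MHz figures are just the ones quoted above) of how the trace tick granularity follows from the cputicks rate and traceTickDiv:

```go
package main

import "fmt"

// traceTickDiv is the runtime's current divisor; the frequencies below are
// the rough figures quoted in this issue, not measured values.
const traceTickDiv = 64

func main() {
	clocks := []struct {
		name string
		hz   float64 // rate at which cputicks advances
	}{
		{"x86 TSC @ 3 GHz", 3e9},
		{"PowerPC G5 time base (~300 MHz)", 300e6},
	}
	for _, c := range clocks {
		nsPerTick := float64(traceTickDiv) / c.hz * 1e9
		fmt.Printf("%-32s one trace tick ≈ %.0f ns\n", c.name, nsPerTick)
	}
}
```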

@minux minux added this to the Go1.5 milestone Apr 23, 2015
@dvyukov
Member

dvyukov commented Apr 23, 2015

Yes, that can be a concern, because the ordering imposed by timestamps must be consistent with the causal ordering of events. For example, if a goroutine is started in the same tick in which it was created, the trace can become inconsistent.
Yes, it can make sense to have different traceTickDiv values on different archs.
Somebody needs to investigate this on ppc64. Does reducing the value fix the trace tests? What is the right value?
And yes, the tradeoff is that smaller traceTickDiv values increase trace size. If the time difference between consecutive events is less than 128 ticks, the timestamp occupies 1 byte; otherwise it takes 2+ bytes.
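
A small sketch of why this matters for trace size, assuming a base-128 varint encoding of the per-event time delta (an assumption about the exact on-disk format; the 2µs gap and 3 ticks/ns rate are made up for illustration):

```go
package main

import "fmt"

// deltaBytes returns how many bytes a delta occupies in a base-128 varint
// encoding (7 payload bits per byte); this is an assumption about the format
// the tracer uses for per-event time deltas.
func deltaBytes(delta uint64) int {
	n := 1
	for delta >= 1<<7 {
		delta >>= 7
		n++
	}
	return n
}

func main() {
	// Illustrative only: the same 2µs gap between events costs more bytes
	// when traceTickDiv is smaller (assuming a 3 GHz cputicks source,
	// i.e. about 3 ticks per nanosecond).
	const gapNS = 2000
	for _, div := range []int{64, 16, 1} {
		delta := uint64(gapNS * 3 / div)
		fmt.Printf("traceTickDiv=%-3d delta=%-6d -> %d byte(s)\n",
			div, delta, deltaBytes(delta))
	}
}
```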

@minux
Member Author

minux commented Apr 23, 2015

According to PowerISA v2.07 section 6.2:

The suggested frequency at which the time base increments is 512 MHz,
however, variation from this rate is allowed provided the following
requirements are met.

  1. The contents of the Time Base differ by no more than +/- four counts from
    what they would be if they incremented at the required frequency.
  2. Bit 63 of the Time Base is set to 1 between 30% and 70% of the time over
    any time interval of at least 16 counts.
    ....
    The Time Base update frequency is not required to be constant.

I guess we can first try 64 on x86, 16 on ppc64, and 16 on other platforms
(e.g. arm, arm64) where cputicks is emulated via runtime.nanotime; a sketch
of what that could look like follows.
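
A minimal sketch of per-architecture values (the numbers just mirror the suggestion above; the real runtime would fix the divisor per GOARCH at compile time, not switch at run time like this):

```go
package main

import (
	"fmt"
	"runtime"
)

// tickDivFor mirrors the values suggested above. A runtime switch like this
// is only for illustration.
func tickDivFor(arch string) uint64 {
	switch arch {
	case "386", "amd64":
		return 64 // cputicks is the TSC, incrementing every core cycle
	case "ppc64", "ppc64le":
		return 16 // the time base ticks far slower than the core clock
	default:
		return 16 // arm, arm64, etc. emulate cputicks via nanotime
	}
}

func main() {
	fmt.Printf("traceTickDiv on %s would be %d\n", runtime.GOARCH, tickDivFor(runtime.GOARCH))
}
```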

@minux
Member Author

minux commented Apr 23, 2015 via email

@minux minux self-assigned this Apr 23, 2015
@gopherbot

CL https://golang.org/cl/9247 mentions this issue.

@minux
Member Author

minux commented May 2, 2015

My CL fixed the issue on POWER7/8, but the test still fails on PowerPC970
systems.

For example, on a PowerMac G5 (970FX) the time base frequency is only 100/3
MHz (the 970FX only conforms to v2.03 of the ISA document; the 512MHz
suggestion was probably added in a later revision).

$ cat /proc/cpuinfo | grep timebase
timebase : 33333333

If you have access to POWER7/8 systems, could you please tell me what's the
output of that command? Thanks.
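
For reference, a small Go sketch doing what the grep above does, reading the kernel-reported time base frequency on Linux/ppc64* (the function name and error handling are just for illustration):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// timebaseHz reads the PowerPC time base frequency from /proc/cpuinfo,
// i.e. the "timebase :" line shown above. It only works on Linux/ppc64*
// kernels that report that field.
func timebaseHz() (uint64, error) {
	f, err := os.Open("/proc/cpuinfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		line := s.Text()
		if !strings.HasPrefix(line, "timebase") {
			continue
		}
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		return strconv.ParseUint(strings.TrimSpace(parts[1]), 10, 64)
	}
	return 0, fmt.Errorf("no timebase line in /proc/cpuinfo")
}

func main() {
	hz, err := timebaseHz()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("timebase: %d Hz (one increment ≈ %.1f ns)\n", hz, 1e9/float64(hz))
}
```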

@davecheney
Contributor

davecheney commented via email

winton-01(~) % grep timebase /proc/cpuinfo
timebase : 512000000


@minux
Member Author

minux commented May 2, 2015

Thanks Dave. That explains the problem. Perhaps we do need dynamic
adjustment of traceTickDiv.
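
A hypothetical sketch of such a dynamic adjustment: pick the divisor at startup from the measured cputicks frequency so one trace tick stays near a target resolution (pickTickDiv and the ~20ns target are assumptions, not runtime code):

```go
package main

import "fmt"

// pickTickDiv is a hypothetical helper: choose traceTickDiv at startup from
// the measured cputicks frequency so that one trace tick stays close to a
// target resolution, however fast or slow the time base runs.
func pickTickDiv(cputicksHz, targetNS uint64) uint64 {
	div := cputicksHz * targetNS / 1000000000
	if div < 1 {
		div = 1
	}
	return div
}

func main() {
	// 3 GHz TSC, 512 MHz POWER7/8 time base, 33.3 MHz PowerPC 970 time base.
	for _, hz := range []uint64{3000000000, 512000000, 33333333} {
		fmt.Printf("%11d Hz -> traceTickDiv = %d (targeting ~20ns ticks)\n",
			hz, pickTickDiv(hz, 20))
	}
}
```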

@davecheney
Contributor

davecheney commented via email

Can we get that from auxv?


@minux
Member Author

minux commented May 2, 2015

(Github seems to have ignored my last email reply.)

It seems the glibc function __ppc_get_timebase_freq() parses the timebase
frequency from /proc/cpuinfo.

Here is the gdb dump of auxv (kernel 3.16.0)

(gdb) info auxv
22   AT_IGNOREPPC         Entry should be ignored        22
22   AT_IGNOREPPC         Entry should be ignored        22
19   AT_DCACHEBSIZE       Data cache block size          128
20   AT_ICACHEBSIZE       Instruction cache block size   128
21   AT_UCACHEBSIZE       Unified cache block size       0
33   AT_SYSINFO_EHDR      System-supplied DSO's ELF header 0x100000
16   AT_HWCAP             Machine-dependent CPU capability hints 0xdc080000
6    AT_PAGESZ            System page size               65536
17   AT_CLKTCK            Frequency of times()           100
3    AT_PHDR              Program headers for program    0x10000034
4    AT_PHENT             Size of program header entry   32
5    AT_PHNUM             Number of program headers      5
7    AT_BASE              Base address of interpreter    0x0
8    AT_FLAGS             Flags                          0x0
9    AT_ENTRY             Entry point of program         0x10000d4c
11   AT_UID               Real user ID                   1001
12   AT_EUID              Effective user ID              1001
13   AT_GID               Real group ID                  1001
14   AT_EGID              Effective group ID             1001
23   AT_SECURE            Boolean, was exec setuid-like? 0
25   AT_RANDOM            Address of 16 random bytes     0xfffef8a2
26   ???                                                 0x0
31   AT_EXECFN            File name of executable        0xfffeffdb "/tmp/a.out"
15   AT_PLATFORM          String identifying platform    0xfffef8b9 "ppc970"
24   AT_BASE_PLATFORM     String identifying base platform 0xfffef8b2 "ppc970"
0    AT_NULL              End of vector                  0x0
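
Judging by the dump, there is no time base entry in auxv on this kernel, so /proc/cpuinfo looks like the only source. For completeness, a sketch of reading a process's auxv directly from Go (it assumes 64-bit little-endian entries, so it would need binary.BigEndian on a big-endian ppc64 system like the G5 above):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// Dump the auxiliary vector the kernel passed to this process, the same data
// `info auxv` shows above. On 64-bit Linux each entry is a pair of
// native-endian uint64s (type, value).
func main() {
	buf, err := os.ReadFile("/proc/self/auxv")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for i := 0; i+16 <= len(buf); i += 16 {
		typ := binary.LittleEndian.Uint64(buf[i : i+8])
		val := binary.LittleEndian.Uint64(buf[i+8 : i+16])
		if typ == 0 { // AT_NULL terminates the vector
			break
		}
		fmt.Printf("type %2d = %#x\n", typ, val)
	}
}
```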

@minux
Member Author

minux commented May 3, 2015 via email

@gopherbot

CL https://golang.org/cl/12579 mentions this issue.

rsc added a commit that referenced this issue Jul 29, 2015
Nearly all the flaky failures we've seen in trace tests have been
due to the use of time stamps to determine relative event ordering.
This is tricky for many reasons, including:
 - different cores might not have exactly synchronized clocks
 - VMs are worse than real hardware
 - non-x86 chips have different timer resolution than x86 chips
 - on fast systems two events can end up with the same time stamp

Stop trying to make time reliable. It's clearly not going to be for Go 1.5.
Instead, record an explicit event sequence number for ordering.
Using our own counter solves all of the above problems.

The trace still contains time stamps, of course. The sequence number
is just used for ordering.

Should alleviate #10554 somewhat. Then tickDiv can be chosen to
be a useful time unit instead of having to be exact for ordering.

Separating ordering and time stamps lets the trace parser diagnose
systems where the time stamp order and actual order do not match
for one reason or another. This CL adds that check to the end of
trace.Parse, after all other sequence order-based checking.
If that error is found, we skip the test instead of failing it.
Putting the check in trace.Parse means that cmd/trace will pick
up the same check, refusing to display a trace where the time stamps
do not match actual ordering.

Using net/http's BenchmarkClientServerParallel4 on various CPU counts,
not tracing vs tracing:

name                      old time/op    new time/op    delta
ClientServerParallel4       50.4µs ± 4%    80.2µs ± 4%  +59.06%        (p=0.000 n=10+10)
ClientServerParallel4-2     33.1µs ± 7%    57.8µs ± 5%  +74.53%        (p=0.000 n=10+10)
ClientServerParallel4-4     18.5µs ± 4%    32.6µs ± 3%  +75.77%        (p=0.000 n=10+10)
ClientServerParallel4-6     12.9µs ± 5%    24.4µs ± 2%  +89.33%        (p=0.000 n=10+10)
ClientServerParallel4-8     11.4µs ± 6%    21.0µs ± 3%  +83.40%        (p=0.000 n=10+10)
ClientServerParallel4-12    14.4µs ± 4%    23.8µs ± 4%  +65.67%        (p=0.000 n=10+10)

Fixes #10512.

Change-Id: I173eecf8191e86feefd728a5aad25bf1bc094b12
Reviewed-on: https://go-review.googlesource.com/12579
Reviewed-by: Austin Clements <austin@google.com>
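
To illustrate the approach in this commit, a tiny sketch of ordering events by an explicit sequence number while keeping the timestamp only for display (the Event fields here are illustrative, not the actual runtime/trace types):

```go
package main

import (
	"fmt"
	"sort"
)

// Event carries both a sequence number (authoritative ordering) and a
// timestamp (display only).
type Event struct {
	Seq uint64 // per-trace counter, assigned in emission order
	Ts  int64  // cputicks/traceTickDiv; may tie or even run backwards
}

func main() {
	events := []Event{
		{Seq: 2, Ts: 100}, // same tick as the event below
		{Seq: 1, Ts: 100},
		{Seq: 3, Ts: 99}, // timestamp even goes backwards here
	}

	// Sort by sequence number, not timestamp, so identical or non-monotonic
	// timestamps can no longer scramble the causal order.
	sort.Slice(events, func(i, j int) bool { return events[i].Seq < events[j].Seq })

	for _, e := range events {
		fmt.Printf("seq=%d ts=%d\n", e.Seq, e.Ts)
	}
}
```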
@rsc rsc modified the milestones: Unplanned, Go1.5 Jul 29, 2015
@rsc rsc unassigned minux Jun 23, 2022
@gopherbot gopherbot added the compiler/runtime Issues related to the Go compiler and/or runtime. label Jul 7, 2022