
runtime/debug: GOMEMLIMIT prolonged high GC CPU utilization before container OOM #58106

Open
rabbbit opened this issue Jan 27, 2023 · 34 comments
Labels
compiler/runtime Issues related to the Go compiler and/or runtime. NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one.

@rabbbit commented Jan 27, 2023

What version of Go are you using (go version)?

go1.19.5, linux/amd64

Does this issue reproduce with the latest release?

yes

What did you do?

I ran a slowly leaking memory application with GOMEMLIMIT and GOGC=-1.
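For context, a minimal sketch of the kind of workload this describes (not our actual application, just its shape): steady allocation churn plus a slow leak, started with GOGC=-1 and GOMEMLIMIT set.

// Hypothetical reproducer: constant allocation churn plus a slow leak.
// Run with e.g. GOGC=-1 GOMEMLIMIT=9GiB.
package main

import "time"

var (
	sink   []byte   // keeps the churn allocations from being optimized away
	leaked [][]byte // the slow "leak": memory retained forever
)

func main() {
	for {
		for i := 0; i < 1000; i++ {
			sink = make([]byte, 64<<10) // short-lived garbage the GC must keep collecting
		}
		leaked = append(leaked, make([]byte, 256<<10)) // retain a bit more every round
		time.Sleep(100 * time.Millisecond)
	}
}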

What did you expect to see?

I expected the application to run fine for a while. After a few days, I expected the CPU utilization to increase gradually (slightly, by <25-50%). I then expected the application to be OOMed by the kernel.

What did you see instead?

After the initial, expected GC CPU utilization increase, CPU usage increased dramatically (to ~75% of the CPU time available to the container). The application remained in this degraded state for a long time (5-8+ hours).

Report

This is a continuation of a conversation started in #48409 (comment) - I thought I'd start a new issue.

tldr;

  • GOMEMLIMIT works great for 95% of our applications; we'd like to enable it globally
  • the remaining 5% are applications slowly leaking memory; it's unfeasible to fix them
  • for these applications, we're seeing significant degradation for a couple of hours before the container is OOMed
  • we'd want the applications to die faster (and be restarted as they are now) - the degradation is too "expensive".

Investigation

Hey @mknyszek, I come bringing more data. In summary:

  • testing "slow memory leaks" is slow
  • I'm able to reproduce the degradation on both staging and production consistently
  • I should be able to get gctraces; I'm giving the staging applications more time to crash
  • the CPU utilization degradation is significant (75% of the container)

I was testing two of our applications by dynamically adjusting GOMEMLIMIT.
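For reference, the adjustment can be done at runtime via runtime/debug; a minimal sketch (the value is one of the limits from the test below; our actual control plane is omitted):

package main

import (
	"runtime/debug"
	"time"
)

func main() {
	debug.SetGCPercent(-1) // GOGC=-1: let the memory limit alone drive GC pacing

	prev := debug.SetMemoryLimit(9563676416) // set a new soft limit; returns the previous one
	_ = prev

	time.Sleep(time.Hour)          // let it run
	debug.SetMemoryLimit(10 << 30) // revert to ~10 GiB
}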

1. A Latency-sensitive proxy in production.

Observations:

  • I'm able to consistently trigger the degradation by setting the memory manually
  • surprising: the CPU usage increase is not gradual, but sudden (1.5 core -> 3.8 core)
  • a 200 MB difference in GOMEMLIMIT can trigger the degraded behavior
  • RSS usage increases by ~40 MB/hour, so it would take ~10 hours for the OOM to kick in with GOMEMLIMIT at 95% of the container memory.

The container runs with GOMAXPROCS=4 (a 4-core cgroup limit) and 10 GiB of memory. At the time of the tests, the cgroup-reported RSS was 9.2 GiB.

The test was:

  • I started reducing GOMEMLIMIT down from 10 GiB in 100 MiB steps (the first spike on the graph):
    -- 9563676416 - normal
    -- 9463676416 - CPU increase to 150%
    -- 9363676416 - CPU increase to 300%
  • I reverted GOMEMLIMIT to 9563676416,
  • after 1 hour we start seeing some CPU increase; after 2 hours CPU utilization is at ~1.3-1.4 cores
  • after 4 hours the utilization goes through the roof.

image

Container CPU utilization before the test was fairly stable for ~2 days, so I don't think any external factors affected the test. The previous spikes are me modifying GOMEMLIMIT manually.
image

Benchmarks

I ran three benchmarks during the "normal" and "high CPU" periods: (1) unlimited throughput, (2) 1000 RPS, (3) 100 RPS. Throughput is (surprisingly) only ~10% down; the latency changes are clearly visible, though. This is the same host, with GOMEMLIMIT set at all times, without restarts.


All benchmarks are "degraded" followed by "non-degraded".

Unlimited throughput:

Benchmark parameters:                       
  CPUs:            32
  Connections:     64
  Concurrency:     1
  Max requests:    0
  Max duration:    1m0s
  Max RPS:         0
                                                      
                                                      
Latencies:       
  0.5000: 1.505503ms
  0.9000: 3.116004ms
  0.9500: 3.940367ms   
  0.9900: 7.068518ms
  0.9990: 29.626833ms
  0.9995: 97.738173ms
  1.0000: 901.724891ms
Elapsed time (seconds):         60.05
Total requests:                 1899024
RPS:                            31622.10

Benchmark parameters:
  CPUs:            32
  Connections:     64
  Concurrency:     1
  Max requests:    0
  Max duration:    1m0s
  Max RPS:         0
Latencies:
  0.5000: 1.186803ms
  0.9000: 3.468514ms
  0.9500: 4.749452ms
  0.9900: 9.265477ms
  0.9990: 27.223769ms
  0.9995: 65.71919ms
  1.0000: 538.934235ms
Elapsed time (seconds):         60.01
Total requests:                 2091010
RPS:                            34842.09

1000 RPS:

Benchmark parameters:
  CPUs:            32
  Connections:     64                                 
  Concurrency:     1 
  Max requests:    60000
  Max duration:    1m0s
  Max RPS:         1000
Latencies:              
  0.5000: 649.981µs    
  0.9000: 1.512459ms   
  0.9500: 16.059954ms  
  0.9900: 612.542139ms 
  0.9990: 840.800883ms
  0.9995: 860.559206ms
  1.0000: 930.54864ms 
Elapsed time (seconds):         60.00
Total requests:                 52173
RPS:                            869.55

Benchmark parameters:
  CPUs:            32
  Connections:     64
  Concurrency:     1
  Max requests:    60000
  Max duration:    1m0s
  Max RPS:         1000
Latencies:
  0.5000: 422.508µs
  0.9000: 694.131µs
  0.9500: 1.017692ms
  0.9900: 33.982008ms
  0.9990: 112.48922ms
  0.9995: 120.384829ms
  1.0000: 132.146788ms
Elapsed time (seconds):         60.00
Total requests:                 59770
RPS:                            996.16

100RPS:

Benchmark parameters:
  CPUs:            32
  Connections:     64
  Concurrency:     1
  Max requests:    6000
  Max duration:    1m0s
  Max RPS:         100
Latencies:
  0.5000: 846.241µs
  0.9000: 97.532361ms
  0.9500: 348.847315ms
  0.9900: 643.727969ms
  0.9990: 773.982663ms
  0.9995: 798.205955ms
  1.0000: 827.131386ms
Elapsed time (seconds):         60.00
Total requests:                 5992
RPS:                            99.87

Benchmark parameters:
  CPUs:            32
  Connections:     64
  Concurrency:     1
  Max requests:    6000
  Max duration:    1m0s
  Max RPS:         100

Latencies:
  0.5000: 620.282µs
  0.9000: 940.553µs
  0.9500: 1.44535ms
  0.9900: 70.505245ms
  0.9990: 145.183217ms
  0.9995: 157.922399ms
  1.0000: 187.452379ms
Elapsed time (seconds):         60.00
Total requests:                 6000
RPS:                            100.00  

2. A leader-elected, not-so-important application.

The leader allocates a lot. The backup containers gather some data (memory "leaks" on all of them) but don't allocate as much. Containers run with 3 CPUs and 16 GiB of memory.

The test: I restarted and set GOMEMLIMIT=4GiB on all containers. I let them run.

image

We see:

  • gradual CPU usage increase over time (this is the GC, there's no external work affecting the usage)
  • after 24 hours, my GOMEMLIMIT expires (we have an enforced timeout), and the CPU utilization briefly drops back to "base".
  • I reset the GOMEMLIMIT to 4GiB, CPU usage is now higher
  • the CPU is stable for 2 hours
  • after 2 hours CPU starts to increase to 3 cores (max)
  • the container crashes for an unrelated reason
  • a backup picks the work up and is immediately throttled (it should have as much leaked data as the previous leader)
  • the memory continues to leak at ~100 MB per 5 hours - with GOMEMLIMIT at 95% of the container memory, it would be ~8 hours before the crash.

This case is interesting because it's similar to a "failover" scenario where the application might suddenly get 50% extra traffic.

We measure the cgroup's nr_throttled periods and calculate a "healthiness" ratio from them (a sketch of the idea follows the screenshot below).
image
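For reference, a minimal sketch of that healthiness check (assuming the cgroup v1 cpu controller path; our actual metrics pipeline is more involved):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// cgroup v1 path; cgroup v2 exposes the same counters in /sys/fs/cgroup/cpu.stat.
	data, err := os.ReadFile("/sys/fs/cgroup/cpu/cpu.stat")
	if err != nil {
		panic(err)
	}
	stats := map[string]int64{}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		v, _ := strconv.ParseInt(fields[1], 10, 64)
		stats[fields[0]] = v
	}
	periods, throttled := stats["nr_periods"], stats["nr_throttled"]
	if periods > 0 {
		// "healthiness" = fraction of CFS periods that were not throttled
		fmt.Printf("healthiness: %.1f%%\n", 100*float64(periods-throttled)/float64(periods))
	}
}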

I'll try to get gctraces over the next few days.

@seankhliao changed the title runtime/gomemlimit: Prolonged high GC CPU utilization before container OOM → runtime: GOMEMLIMIT prolonged high GC CPU utilization before container OOM Jan 27, 2023
@gopherbot added the compiler/runtime label Jan 27, 2023
@seankhliao changed the title runtime: GOMEMLIMIT prolonged high GC CPU utilization before container OOM → runtime/debug: GOMEMLIMIT prolonged high GC CPU utilization before container OOM Jan 28, 2023
@seankhliao added the NeedsInvestigation label Jan 28, 2023
@mknyszek (Contributor) commented

Thanks for filing a new issue with all these details and sorry for the delay; I was out sick last week.

Unfortunately, I don't have any solutions off the top of my head for this slow memory leak case for the memory limit. In some sense, making the memory limit softer is only going to be a patch on the issue. Yeah, the performance will be better when you're close to the memory limit, but there's still going to be some kind of degradation. The memory limit seems fundamentally a bad fit because what you want here is to OOM (which the memory limit is actively fighting against), you just also want to take advantage of the fact that you have all this extra memory before then.

I think what might fit your criteria is a memory minimum (or what I've called a memory "target") (#44309). For instance, you set GOGC=10 and a memory target of whatever you set your memory limit to currently. The behavior this configuration would have is to use the memory target amount of memory until you get within 10% of the memory target. From then on, it behaves like GOGC=10. This will still OOM much more readily, and there's some performance degradation but you have some more control over the degree with GOGC (GOGC=10 is also already way, way softer than the memory limit).

CC @dr2chase

@zephyrtronium (Contributor) commented

If OOM killing is acceptable behavior, would it be reasonable to implement that yourself? https://go.dev/play/p/aUYguuR-atF
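(The playground contents aren't quoted above; one possible shape of such a self-imposed OOM - a sketch only - is to watch the runtime's own footprint and exit once a hard cap is exceeded:)

package main

import (
	"fmt"
	"os"
	"runtime/metrics"
	"time"
)

// dieAbove crashes the process once total runtime-managed memory exceeds limit.
func dieAbove(limit uint64) {
	samples := []metrics.Sample{{Name: "/memory/classes/total:bytes"}}
	for range time.Tick(time.Second) {
		metrics.Read(samples)
		if used := samples[0].Value.Uint64(); used > limit {
			fmt.Fprintf(os.Stderr, "memory %d bytes exceeded cap %d, exiting\n", used, limit)
			os.Exit(2)
		}
	}
}

func main() {
	go dieAbove(10 << 30) // hypothetical 10 GiB cap
	select {}             // stand-in for the real application
}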

@rabbbit (Author) commented Jan 30, 2023

I was running some more tests; adding more screenshots - they show the same behavior mostly.

Here we see GC using 2x more CPU than the actual work (66% of the container).
image

Here it looks even more dramatic: the GC appears to be using 10x more CPU than the actual "work":
image

@zephyrtronium - that's "kinda" what we actually do in our pre-GOMEMLIMIT setup: we watch the runtime GC stats and update GOGC to slow things down (and eventually OOM) - see the sketch after the list below. This gives us a bit more time before the OOM, but mostly:

  • in the perfect world, we thought we could kill our custom code
  • the behavior we observe is different from what's described in the proposal (i.e. the GC uses more CPU than expected).
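A rough sketch of that pre-GOMEMLIMIT pattern (our in-house version is more elaborate; the thresholds below are made up):

package main

import (
	"runtime"
	"runtime/debug"
	"time"
)

// tuneGOGC lowers GOGC as the heap approaches a byte budget, so the GC slows
// growth and the process eventually OOMs instead of degrading indefinitely.
func tuneGOGC(budget uint64) {
	var ms runtime.MemStats
	for range time.Tick(10 * time.Second) {
		runtime.ReadMemStats(&ms)
		switch {
		case ms.HeapAlloc > budget*9/10:
			debug.SetGCPercent(10) // near the budget: collect very aggressively
		case ms.HeapAlloc > budget*3/4:
			debug.SetGCPercent(50)
		default:
			debug.SetGCPercent(100) // default behavior while far from the budget
		}
	}
}

func main() {
	go tuneGOGC(8 << 30) // hypothetical 8 GiB budget
	select {}            // stand-in for the real application
}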

@mknyszek: no worries about the delay; thanks for responding. (I'll be offline soon for ~10 days too.)

For instance, you set GOGC=10 and a memory target of whatever you set your memory limit to currently. The behavior this configuration would have is to use the memory target amount of memory until you get within 10% of the memory target. From then on, it behaves like GOGC=10.

Oh, this was not my understanding after reading the docs. Even the sliders in https://go.dev/doc/gc-guide suggest that setting GOGC=10 affects GC frequency long before arriving at GOMEMLIMIT (i.e. it doesn't matter if my GOMEMLIMIT is 1 GB or 20 GB if my "live heap" usage is low - GC will run frequently). Am I misunderstanding?

I'll run the same tests with GOGC=10 to observe the behavior.

The memory limit seems fundamentally a bad fit because what you want here is to OOM

Agreed. However, we cannot know which applications are leaking upfront, and any application might start leaking at any time. So it makes GOMEMLIMIT a "fundamentally bad fit" for any of our applications, which is really sad.

Yeah, the performance will be better when you're close to the memory limit, but there's still going to be some kind of degradation.

Yeah, the degradation is expected. As long as it's bounded, it's totally fine. In the cases we see above, the GC uses up to 80% of the available cores - and 10x more time than the application "itself". This seems divergent from the proposal, right?

Understood that there are no easy/quick fixes here. My highest hope here was that maybe in Go 1.21/1.22 you'd magically come up with a fix that would limit the GC more (any of <25% of GOMAXPROCS, <100% of "real work", <2 cores, $else).

@mknyszek (Contributor) commented Jan 30, 2023

For instance, you set GOGC=10 and a memory target of whatever you set your memory limit to currently. The behavior this configuration would have is to use the memory target amount of memory until you get within 10% of the memory target. From then on, it behaves like GOGC=10.

Oh, this was not my understanding after reading the docs. Even the sliders in https://go.dev/doc/gc-guide suggest that setting GOGC=10 affects GC frequency long before arriving at GOMEMLIMIT (i.e. it doesn't matter if my GOMEMLIMIT is 1 GB or 20 GB if my "live heap" usage is low - GC will run frequently). Am I misunderstanding?

Sorry, I think I was unclear: this functionality doesn't exist. It was proposed and rejected in #44309 due to lack of interest (and just knobs having a very high bar in general, which I don't think this really met for the team). (Implementing it is relatively easy, if you wanted to try it out anyway. It's not exactly a memory target so much as a heap target when done this way, but the basic idea is to set heapMinimum: https://cs.opensource.google/go/go/+/master:src/runtime/mgcpacer.go;l=111?q=heapMinimum&sq=&ss=go%2Fgo.)

@rabbbit (Author) commented Jan 31, 2023

Oh, no, that's on me - I misread the titles and confused myself. #44309 was an alternative to #48409. So we'd effectively have a natively supported ballast. The fast(er) OOM would indeed be nice.

Before we delve into adding more knobs, though - what we're seeing with the GOMEMLIMIT right now is not expected, right? GC using 3-10x more CPU than the application itself is not expected, both per the proposal and the gc-guide?

I'm a fan of the "fewer knobs == better" design, so if you were amenable to making the soft limit slightly softer, that would be my preference. Both the proposal and the gc-guide seem to suggest that the "50% CPU max for GC" was a starting point (?).

@mknyszek (Contributor) commented

Oh, no, that's on me - I misread the titles and confused myself. #44309 was an alternative to #48409. So we'd effectively have a natively supported ballast. The fast(er) OOM would indeed be nice.

Ah, no. It wasn't meant to be an alternative, but rather complementary. Whereas the current memory limit is a maximum, this would be a memory minimum. GOGC effectively controls what's in the middle. But if you don't have a maximum, only a very high minimum, and set GOGC low, then it kind of acts like a very soft memory limit.

To be clear, I'm not proposing another knob right now, just pointing out that this might've helped your situation. I think it would be very unlikely for us to add a new knob any time soon.

Before we delve into adding more knobs, though - what we're seeing with the GOMEMLIMIT right now is not expected, right? GC using 3-10x more CPU than the application itself is not expected, both per the proposal and the gc-guide?

Using 3-10x more CPU than the application is definitely possible. The Go runtime sees its available CPU time as roughly GOMAXPROCS*(wall clock time) for some wall-clock duration. Say for example that GOMAXPROCS=8 but the Go program only ever runs a single goroutine. It's unlikely but possible for that single goroutine to allocate heavily enough that the GC uses up to 50% CPU (actually more, because the GC will also soak up idle CPUs during the mark phase, though it yields to the application very readily).
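(A back-of-the-envelope illustration of that accounting, with made-up numbers:)

package main

import "fmt"

func main() {
	const (
		gomaxprocs = 8.0 // assumed CPUs available to the runtime
		window     = 1.0 // one second of wall-clock time
		appCPU     = 1.0 // CPU-seconds the single busy goroutine actually used
	)
	available := gomaxprocs * window // CPU-seconds the runtime considers available
	gcCap := 0.5 * available         // the soft 50% cap applies to this total
	fmt.Printf("GC may use up to %.1f CPU-s vs %.1f CPU-s of app work (%.0fx), before idle soak-up\n",
		gcCap, appCPU, gcCap/appCPU)
}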

To summarize, the 50% cap is based on total assumed available CPU resources, but it also doesn't prevent the GC from soaking up idle CPU resources. If the GC is triggering often, this idle CPU soak-up can end up using a lot of the available CPU.

That's kind of extreme, but it's more plausible if you consider an application that is doing a lot of blocking on I/O. There could potentially be a lot of spare idle CPU time in GOMAXPROCS*(wall clock time) that the GC is allowed to take advantage of. Honestly, this policy of soaking up idle CPU time has been troublesome for us over the years, but getting rid of it at this point is also difficult. It's about -1% throughput and +1% tail latency to remove it. We've removed it for very idle applications, but it remains to be seen what else we do.

A gctrace would make it clear how much of each of these is happening.

I'm a fan of the "fewer knobs == better" design, so if you were amenable to making the soft limit slightly softer, that would be my preference. Both the proposal and the gc-guide seem to suggest that the "50% CPU max for GC" was a starting point (?).

Yeah, you're right that it is a starting point, but I think we're going to need more evidence. For example, if we switch it to 25%, why 25%? That's a question I can't answer right now, though I'd like to. (50% is also not rigorously determined, but it has more going for it; it was the default for some prior art, plus we already have it now.) Though, I think this bug report is a good first step.

@rabbbit (Author) commented Jan 31, 2023

I enabled gctraces on the staging service (running with 3 cores).

image

Order of events:

  • application starts
  • application gets elected as leader, starts allocating. GOGC=100, no limit
  • I manually switch to GOGC=-1, GOMEMLIMIT=3GiB
  • let it run for 10 minutes
  • I switch back to no limit, GOGC=100
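(For readers of the traces below, which come from GODEBUG=gctrace=1: in the "a+b/c/d+e ms cpu" field, the middle b/c/d group is the mark phase's assist / background / idle GC CPU time. A small helper - a sketch, not part of the original report - to pull those numbers out of a line:)

package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// cpuRe captures the mark-phase CPU breakdown (assist/background/idle, in ms).
var cpuRe = regexp.MustCompile(`ms clock, [0-9.]+\+([0-9.]+)/([0-9.]+)/([0-9.]+)\+[0-9.]+ ms cpu`)

func markCPU(line string) (assist, background, idle float64, ok bool) {
	m := cpuRe.FindStringSubmatch(line)
	if m == nil {
		return 0, 0, 0, false
	}
	assist, _ = strconv.ParseFloat(m[1], 64)
	background, _ = strconv.ParseFloat(m[2], 64)
	idle, _ = strconv.ParseFloat(m[3], 64)
	return assist, background, idle, true
}

func main() {
	line := "gc 17 @45.217s 6%: 0.13+2293+0.016 ms clock, 0.39+984/1749/1064+0.050 ms cpu, 3570->3793->1900 MB, 3806 MB goal, 48 MB stacks, 0 MB globals, 3 P"
	if a, b, i, ok := markCPU(line); ok {
		fmt.Printf("assist=%.0fms background=%.0fms idle=%.0fms\n", a, b, i)
	}
}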
Traces
gc 1 @0.017s 1%: 0.025+0.80+0.18 ms clock, 0.82+0.056/1.2/0.015+5.8 ms cpu, 3->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 32 P
gc 2 @0.025s 1%: 0.027+0.48+0.17 ms clock, 0.87+0.076/1.9/0+5.7 ms cpu, 3->4->1 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 32 P
gc 3 @0.027s 2%: 0.063+1.4+0.028 ms clock, 2.0+0.41/3.8/0.84+0.91 ms cpu, 4->4->4 MB, 4 MB goal, 0 MB stacks, 0 MB globals, 32 P
gc 4 @0.071s 1%: 0.031+1.1+0.030 ms clock, 1.0+0.093/7.0/2.1+0.96 ms cpu, 8->8->7 MB, 9 MB goal, 0 MB stacks, 0 MB globals, 32 P
gc 5 @0.120s 1%: 0.054+1.7+0.025 ms clock, 1.7+0.13/9.1/7.0+0.80 ms cpu, 12->13->9 MB, 14 MB goal, 0 MB stacks, 0 MB globals, 32 P
gc 6 @0.144s 1%: 0.073+2.0+0.050 ms clock, 2.3+0.39/11/10+1.6 ms cpu, 17->18->13 MB, 19 MB goal, 0 MB stacks, 0 MB globals, 32 P
gc 7 @0.171s 1%: 0.055+7.9+0.018 ms clock, 0.16+4.8/4.8/7.4+0.056 ms cpu, 27->28->25 MB, 28 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 8 @0.290s 1%: 0.068+28+0.028 ms clock, 0.20+11/22/16+0.084 ms cpu, 47->52->44 MB, 51 MB goal, 1 MB stacks, 0 MB globals, 3 P
gc 9 @0.350s 3%: 0.088+46+0.028 ms clock, 0.26+46/27/8.3+0.084 ms cpu, 80->102->92 MB, 89 MB goal, 4 MB stacks, 0 MB globals, 3 P
gc 10 @0.482s 4%: 0.064+112+0.018 ms clock, 0.19+16/87/33+0.055 ms cpu, 157->201->164 MB, 185 MB goal, 10 MB stacks, 0 MB globals, 3 P
gc 11 @0.836s 6%: 0.087+200+0.017 ms clock, 0.26+55/160/134+0.053 ms cpu, 281->326->247 MB, 330 MB goal, 16 MB stacks, 0 MB globals, 3 P
gc 12 @1.283s 11%: 0.087+342+0.014 ms clock, 0.26+280/270/8.9+0.042 ms cpu, 423->535->441 MB, 497 MB goal, 27 MB stacks, 0 MB globals, 3 P
gc 13 @2.138s 15%: 0.11+685+0.017 ms clock, 0.33+403/523/0+0.051 ms cpu, 773->929->718 MB, 888 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 14 @4.524s 12%: 0.12+902+0.024 ms clock, 0.37+1.1/703/804+0.074 ms cpu, 1231->1371->1037 MB, 1448 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 15 @7.830s 13%: 0.10+1919+0.022 ms clock, 0.30+296/1456/1267+0.066 ms cpu, 1936->2189->1698 MB, 2087 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 16 @15.314s 11%: 0.14+2564+0.006 ms clock, 0.44+36/1925/1816+0.020 ms cpu, 2901->3101->1897 MB, 3409 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 17 @45.217s 6%: 0.13+2293+0.016 ms clock, 0.39+984/1749/1064+0.050 ms cpu, 3570->3793->1900 MB, 3806 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 18 @67.701s 4%: 0.10+1290+0.005 ms clock, 0.31+1.2/991/2454+0.015 ms cpu, 3246->3265->1711 MB, 3814 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 19 @85.768s 4%: 0.12+2081+0.006 ms clock, 0.38+0.95/1571/2108+0.018 ms cpu, 2922->3006->1852 MB, 3434 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 20 @106.821s 3%: 0.13+1707+0.006 ms clock, 0.41+0.87/1288/2544+0.019 ms cpu, 3163->3207->1761 MB, 3716 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 21 @123.646s 3%: 41+1427+0.005 ms clock, 123+10/1106/2690+0.017 ms cpu, 2735->2757->1743 MB, 2714 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 22 @134.217s 3%: 0.33+1762+0.030 ms clock, 1.0+10/1330/2399+0.091 ms cpu, 2405->2571->1886 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 23 @145.331s 3%: 0.15+2542+0.005 ms clock, 0.47+124/1915/1825+0.017 ms cpu, 2459->2652->1956 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 24 @155.346s 4%: 0.14+2578+0.006 ms clock, 0.43+408/1955/1606+0.020 ms cpu, 2421->2605->1953 MB, 2723 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 25 @165.668s 4%: 0.11+2234+0.006 ms clock, 0.33+26/1690/2075+0.019 ms cpu, 2495->2598->1895 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 26 @175.452s 4%: 0.11+2509+0.006 ms clock, 0.33+20/1890/2188+0.018 ms cpu, 2406->2540->1924 MB, 2715 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 27 @185.675s 4%: 0.10+2252+0.006 ms clock, 0.31+7.1/1699/2160+0.020 ms cpu, 2486->2583->1900 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 28 @195.456s 4%: 0.094+2261+0.005 ms clock, 0.28+20/1712/1885+0.017 ms cpu, 2407->2544->1934 MB, 2714 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 29 @205.722s 4%: 0.12+2218+0.006 ms clock, 0.37+4.7/1664/2170+0.020 ms cpu, 2488->2587->1903 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 30 @215.429s 4%: 0.12+2274+0.006 ms clock, 0.36+15/1723/1986+0.019 ms cpu, 2407->2548->1938 MB, 2714 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 31 @225.708s 4%: 0.11+2244+0.005 ms clock, 0.33+4.8/1704/2241+0.017 ms cpu, 2490->2582->1900 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 32 @236.183s 4%: 0.10+1842+0.005 ms clock, 0.31+76/1402/2117+0.017 ms cpu, 2509->2541->1778 MB, 2715 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 33 @250.158s 4%: 0.13+2252+0.019 ms clock, 0.40+868/1708/1115+0.058 ms cpu, 2454->2701->1990 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 34 @260.231s 4%: 0.13+2457+0.036 ms clock, 0.41+801/1870/1116+0.11 ms cpu, 2429->2666->1999 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 35 @270.406s 4%: 0.10+2383+0.006 ms clock, 0.32+159/1799/1843+0.019 ms cpu, 2489->2646->1958 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 36 @280.370s 5%: 0.17+2542+0.006 ms clock, 0.51+201/1918/1892+0.020 ms cpu, 2432->2601->1967 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 37 @290.440s 5%: 0.11+2271+0.004 ms clock, 0.33+43/1709/1989+0.014 ms cpu, 2482->2620->1953 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 38 @300.364s 5%: 0.10+2336+0.005 ms clock, 0.31+184/1758/1715+0.016 ms cpu, 2429->2601->1981 MB, 2720 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 39 @310.442s 5%: 0.10+2265+0.005 ms clock, 0.31+12/1700/1962+0.015 ms cpu, 2485->2626->1961 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 40 @320.340s 5%: 0.12+2495+0.005 ms clock, 0.37+184/1878/1815+0.016 ms cpu, 2428->2607->1994 MB, 2720 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 41 @330.404s 5%: 0.11+2414+0.009 ms clock, 0.34+151/1830/1874+0.028 ms cpu, 2489->2643->1989 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 42 @340.295s 5%: 0.15+2537+0.020 ms clock, 0.45+654/1924/1346+0.061 ms cpu, 2428->2638->2028 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 43 @350.354s 5%: 0.15+2472+0.006 ms clock, 0.47+299/1867/1674+0.019 ms cpu, 2502->2677->2011 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 44 @360.236s 5%: 0.14+2206+0.018 ms clock, 0.42+1101/1659/992+0.056 ms cpu, 2420->2629->2022 MB, 2701 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 45 @370.315s 5%: 0.13+2549+0.017 ms clock, 0.41+887/1918/1048+0.051 ms cpu, 2505->2697->2031 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 46 @380.209s 5%: 0.16+2354+0.025 ms clock, 0.48+1236/1775/879+0.077 ms cpu, 2416->2634->2027 MB, 2684 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 47 @390.297s 5%: 0.13+2538+0.023 ms clock, 0.39+1140/1909/929+0.071 ms cpu, 2502->2706->2040 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 48 @400.187s 5%: 0.12+1976+0.023 ms clock, 0.37+1239/1493/947+0.069 ms cpu, 2420->2608->1994 MB, 2664 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 49 @410.255s 5%: 0.13+2545+0.021 ms clock, 0.40+891/1924/1101+0.063 ms cpu, 2489->2711->2050 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 50 @420.177s 6%: 0.12+2017+0.025 ms clock, 0.37+1473/1523/741+0.076 ms cpu, 2422->2615->1997 MB, 2658 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 51 @430.241s 6%: 0.14+2349+0.025 ms clock, 0.44+759/1765/1158+0.077 ms cpu, 2488->2718->2054 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 52 @439.704s 6%: 0.10+2142+0.027 ms clock, 0.32+437/1611/1711+0.081 ms cpu, 2419->2635->2019 MB, 2667 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 53 @450.214s 6%: 0.13+2417+0.018 ms clock, 0.39+981/1817/1098+0.055 ms cpu, 2493->2727->2050 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 54 @459.214s 6%: 0.13+1729+0.017 ms clock, 0.39+19/1311/2324+0.053 ms cpu, 2428->2580->1958 MB, 2640 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 55 @470.175s 6%: 0.14+2476+0.016 ms clock, 0.43+1152/1856/934+0.048 ms cpu, 2472->2727->2063 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 56 @479.157s 6%: 0.17+1917+0.021 ms clock, 0.52+131/1460/2266+0.065 ms cpu, 2431->2616->1989 MB, 2654 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 57 @490.183s 6%: 0.12+2045+0.018 ms clock, 0.36+944/1537/1112+0.055 ms cpu, 2483->2701->2030 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 58 @499.864s 6%: 0.095+2125+0.015 ms clock, 0.28+420/1607/1584+0.046 ms cpu, 2436->2664->2035 MB, 2713 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 59 @510.235s 6%: 0.14+2429+0.015 ms clock, 0.44+961/1831/1072+0.045 ms cpu, 2499->2727->2056 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 60 @519.221s 6%: 0.15+1735+0.018 ms clock, 0.45+10/1326/2356+0.056 ms cpu, 2424->2586->1970 MB, 2634 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 61 @530.193s 6%: 0.16+2318+0.027 ms clock, 0.49+925/1747/1095+0.081 ms cpu, 2477->2715->2054 MB, 2724 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 62 @539.221s 6%: 0.12+1745+0.019 ms clock, 0.38+1.3/1323/2358+0.058 ms cpu, 2433->2598->1975 MB, 2649 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 63 @550.172s 6%: 0.13+2169+0.024 ms clock, 0.41+1005/1621/1019+0.074 ms cpu, 2478->2716->2050 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 64 @559.186s 6%: 0.13+1778+0.014 ms clock, 0.41+46/1341/2345+0.043 ms cpu, 2438->2609->1980 MB, 2659 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 65 @570.163s 6%: 0.14+2436+0.021 ms clock, 0.42+1173/1845/979+0.064 ms cpu, 2477->2724->2058 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 66 @579.067s 6%: 0.11+1896+0.023 ms clock, 0.33+3.2/1431/2462+0.069 ms cpu, 2440->2617->1989 MB, 2660 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 67 @590.054s 6%: 0.11+2193+0.026 ms clock, 0.34+1113/1651/1022+0.078 ms cpu, 2479->2712->2046 MB, 2725 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 68 @598.206s 6%: 0.13+1449+0.005 ms clock, 0.41+35/1111/2567+0.016 ms cpu, 2448->2470->1841 MB, 2624 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 69 @606.116s 6%: 0.11+2034+0.018 ms clock, 0.33+67/1543/2290+0.055 ms cpu, 2426->2469->1927 MB, 2684 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 70 @615.376s 6%: 0.10+2426+0.008 ms clock, 0.30+303/1835/1672+0.024 ms cpu, 2415->2581->2039 MB, 2717 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 71 @625.302s 6%: 0.16+2618+0.064 ms clock, 0.49+1061/1977/1029+0.19 ms cpu, 2490->2695->2065 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 72 @635.179s 6%: 0.13+2045+0.016 ms clock, 0.39+1406/1541/869+0.048 ms cpu, 2439->2629->2018 MB, 2665 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 73 @645.187s 6%: 0.15+2226+0.018 ms clock, 0.46+1086/1662/1039+0.056 ms cpu, 2484->2708->2055 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 74 @654.174s 6%: 0.13+1932+0.021 ms clock, 0.41+145/1471/2295+0.065 ms cpu, 2447->2640->2018 MB, 2684 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 75 @663.198s 6%: 0.14+1523+0.005 ms clock, 0.43+25/1163/2639+0.016 ms cpu, 2482->2511->1859 MB, 2683 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 76 @673.349s 6%: 0.11+1412+0.006 ms clock, 0.34+1.1/1076/2684+0.020 ms cpu, 2489->2512->1853 MB, 2692 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 77 @681.022s 6%: 0.11+2139+0.023 ms clock, 0.35+40/1628/2408+0.069 ms cpu, 2417->2470->1946 MB, 2678 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 78 @690.379s 6%: 0.10+2462+0.022 ms clock, 0.32+258/1856/1823+0.066 ms cpu, 2425->2588->2044 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 79 @700.329s 6%: 0.13+2553+0.007 ms clock, 0.40+874/1930/1105+0.021 ms cpu, 2494->2694->2066 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 80 @710.161s 6%: 0.13+2023+0.012 ms clock, 0.39+1434/1516/801+0.037 ms cpu, 2435->2630->2023 MB, 2659 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 81 @720.179s 6%: 0.12+2295+0.015 ms clock, 0.38+1014/1722/1101+0.047 ms cpu, 2484->2724->2072 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 82 @728.262s 6%: 0.11+1467+0.006 ms clock, 0.35+5.9/1113/2715+0.020 ms cpu, 2442->2463->1854 MB, 2605 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 83 @740.303s 6%: 0.14+1804+0.013 ms clock, 0.44+1182/1368/1035+0.041 ms cpu, 2593->2725->2001 MB, 2723 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 84 @749.484s 6%: 0.099+2391+0.032 ms clock, 0.29+2.4/1807/2129+0.097 ms cpu, 2426->2672->2074 MB, 2716 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 85 @760.194s 6%: 0.12+2349+0.018 ms clock, 0.37+1366/1763/805+0.054 ms cpu, 2497->2723->2063 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 86 @769.177s 6%: 0.15+1828+0.021 ms clock, 0.45+48/1381/2337+0.063 ms cpu, 2451->2626->2004 MB, 2678 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 87 @779.214s 6%: 0.15+1864+0.022 ms clock, 0.45+41/1421/2403+0.067 ms cpu, 2483->2658->2006 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 88 @788.194s 6%: 0.13+1446+0.004 ms clock, 0.41+52/1099/2511+0.014 ms cpu, 2458->2481->1858 MB, 2654 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 89 @795.979s 6%: 0.11+2091+0.007 ms clock, 0.34+5.4/1591/2383+0.022 ms cpu, 2419->2472->1950 MB, 2682 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 90 @805.318s 6%: 0.13+2513+0.042 ms clock, 0.40+356/1890/1532+0.12 ms cpu, 2421->2620->2070 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 91 @815.243s 6%: 0.14+2332+0.016 ms clock, 0.43+996/1772/1060+0.049 ms cpu, 2488->2711->2076 MB, 2720 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 92 @824.207s 6%: 0.16+1781+0.029 ms clock, 0.48+35/1357/2363+0.087 ms cpu, 2446->2614->2000 MB, 2653 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 93 @834.164s 6%: 0.15+1874+0.019 ms clock, 0.45+117/1408/2315+0.059 ms cpu, 2474->2659->2015 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 94 @843.217s 6%: 0.13+1491+0.005 ms clock, 0.39+19/1117/2727+0.016 ms cpu, 2470->2493->1861 MB, 2667 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 95 @851.080s 6%: 0.15+2077+0.019 ms clock, 0.45+28/1562/2456+0.058 ms cpu, 2427->2471->1944 MB, 2682 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 96 @860.370s 6%: 0.097+2376+0.005 ms clock, 0.29+67/1810/1902+0.016 ms cpu, 2418->2583->2051 MB, 2716 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 97 @870.326s 6%: 0.12+2550+0.005 ms clock, 0.36+976/1928/1108+0.017 ms cpu, 2496->2694->2069 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 98 @880.163s 6%: 0.15+2225+0.025 ms clock, 0.47+1525/1686/777+0.075 ms cpu, 2437->2645->2043 MB, 2662 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 99 @890.182s 6%: 0.13+2412+0.019 ms clock, 0.40+1097/1811/1047+0.057 ms cpu, 2489->2723->2074 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 100 @898.179s 6%: 0.13+1553+0.005 ms clock, 0.39+96/1187/2572+0.017 ms cpu, 2446->2483->1875 MB, 2613 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 101 @910.269s 6%: 0.13+1838+0.015 ms clock, 0.40+1653/1395/805+0.047 ms cpu, 2602->2724->1988 MB, 2722 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 102 @920.165s 6%: 0.16+2319+0.019 ms clock, 0.49+1154/1733/973+0.057 ms cpu, 2449->2691->2079 MB, 2720 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 103 @928.172s 6%: 0.16+1502+0.006 ms clock, 0.49+127/1156/2439+0.019 ms cpu, 2473->2509->1886 MB, 2648 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 104 @935.442s 6%: 0.10+2411+0.006 ms clock, 0.31+45/1808/2038+0.018 ms cpu, 2380->2534->2059 MB, 2686 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 105 @945.240s 6%: 0.15+2387+0.019 ms clock, 0.45+1149/1791/1010+0.059 ms cpu, 2494->2718->2097 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 106 @952.653s 6%: 0.11+1407+0.005 ms clock, 0.34+2.7/1074/2685+0.016 ms cpu, 2440->2460->1875 MB, 2590 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 107 @956.854s 6%: 0.099+1393+0.007 ms clock, 0.29+2133/1052/871+0.023 ms cpu, 2176->2182->1863 MB, 1875 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 108 @958.364s 6%: 0.066+1672+0.005 ms clock, 0.20+338/1270/2203+0.015 ms cpu, 1863->1918->1910 MB, 1863 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 109 @960.144s 6%: 0.085+2056+0.013 ms clock, 0.25+1270/1561/1013+0.040 ms cpu, 1910->2097->2039 MB, 1910 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 110 @962.302s 7%: 0.11+2516+0.006 ms clock, 0.34+835/1899/1401+0.020 ms cpu, 2039->2185->2059 MB, 2039 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 111 @964.931s 7%: 0.074+2083+0.022 ms clock, 0.22+1012/1570/1313+0.067 ms cpu, 2059->2252->2044 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 112 @967.117s 7%: 0.062+2370+0.004 ms clock, 0.18+450/1790/1669+0.014 ms cpu, 2044->2172->2042 MB, 2044 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 113 @969.614s 7%: 0.054+1946+0.016 ms clock, 0.16+396/1469/1874+0.049 ms cpu, 2042->2239->2048 MB, 2042 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 114 @971.661s 7%: 0.057+2325+0.004 ms clock, 0.17+692/1759/1564+0.014 ms cpu, 2048->2155->2023 MB, 2048 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 115 @974.105s 7%: 0.071+1979+0.027 ms clock, 0.21+211/1503/2306+0.081 ms cpu, 2023->2208->2036 MB, 2023 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 116 @976.311s 7%: 2.6+2562+0.006 ms clock, 7.8+778/1928/1374+0.019 ms cpu, 2036->2174->2053 MB, 2036 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 117 @979.000s 7%: 0.10+1852+0.020 ms clock, 0.31+31/1404/2380+0.060 ms cpu, 2053->2199->1999 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 118 @980.960s 7%: 0.082+2654+0.005 ms clock, 0.24+924/2007/1145+0.017 ms cpu, 1999->2174->2073 MB, 1999 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 119 @983.721s 7%: 0.066+1425+0.017 ms clock, 0.20+26/1085/2687+0.052 ms cpu, 2074->2095->1879 MB, 2073 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 120 @985.252s 7%: 8.9+2101+0.020 ms clock, 26+1764/1586/630+0.060 ms cpu, 1879->2063->2038 MB, 1879 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 121 @987.471s 7%: 0.13+2350+0.005 ms clock, 0.39+764/1771/1354+0.015 ms cpu, 2038->2179->2055 MB, 2038 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 122 @989.934s 7%: 0.067+2072+0.014 ms clock, 0.20+1196/1559/1132+0.042 ms cpu, 2055->2248->2046 MB, 2055 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 123 @992.111s 7%: 0.11+2442+0.007 ms clock, 0.33+780/1840/1379+0.023 ms cpu, 2046->2186->2057 MB, 2046 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 124 @994.684s 7%: 0.063+2045+0.038 ms clock, 0.19+644/1553/1641+0.11 ms cpu, 2057->2254->2049 MB, 2057 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 125 @996.829s 7%: 0.084+2220+0.005 ms clock, 0.25+318/1682/1903+0.015 ms cpu, 2049->2162->2030 MB, 2049 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 126 @999.176s 8%: 12+1809+0.015 ms clock, 36+113/1385/2116+0.045 ms cpu, 2030->2216->2040 MB, 2030 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 127 @1001.097s 8%: 0.062+2555+0.005 ms clock, 0.18+789/1933/1334+0.016 ms cpu, 2040->2178->2056 MB, 2040 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 128 @1003.757s 8%: 0.054+1497+0.019 ms clock, 0.16+30/1113/2625+0.059 ms cpu, 2056->2096->1897 MB, 2056 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 129 @1005.355s 8%: 0.12+2163+0.025 ms clock, 0.37+1545/1646/814+0.075 ms cpu, 1897->2084->2046 MB, 1897 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 130 @1007.637s 8%: 0.068+2291+0.005 ms clock, 0.20+328/1737/1969+0.016 ms cpu, 2046->2162->2034 MB, 2046 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 131 @1010.061s 8%: 0.075+2127+0.024 ms clock, 0.22+1461/1598/896+0.073 ms cpu, 2034->2227->2047 MB, 2034 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 132 @1012.302s 8%: 3.0+2463+0.006 ms clock, 9.2+716/1854/1515+0.018 ms cpu, 2047->2180->2049 MB, 2047 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 133 @1014.884s 8%: 0.076+2156+0.014 ms clock, 0.23+1056/1637/1340+0.042 ms cpu, 2049->2243->2047 MB, 2049 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 134 @1017.156s 8%: 0.11+2381+0.005 ms clock, 0.34+519/1801/1622+0.017 ms cpu, 2047->2179->2050 MB, 2047 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 135 @1019.669s 8%: 0.072+2198+0.015 ms clock, 0.21+792/1670/1745+0.047 ms cpu, 2050->2245->2049 MB, 2050 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 136 @1022.045s 8%: 0.10+2511+0.005 ms clock, 0.32+727/1901/1508+0.016 ms cpu, 2049->2185->2053 MB, 2049 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 137 @1024.660s 8%: 0.080+2229+0.019 ms clock, 0.24+930/1680/1613+0.059 ms cpu, 2053->2249->2050 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 138 @1027.002s 8%: 0.11+2575+0.007 ms clock, 0.33+950/1947/1310+0.022 ms cpu, 2050->2201->2069 MB, 2050 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 139 @1029.706s 8%: 0.070+2190+0.025 ms clock, 0.21+904/1656/1577+0.077 ms cpu, 2069->2266->2053 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 140 @1032.017s 8%: 0.073+2529+0.007 ms clock, 0.22+618/1908/1673+0.022 ms cpu, 2053->2185->2052 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 141 @1034.678s 8%: 0.057+2225+0.019 ms clock, 0.17+770/1676/1680+0.057 ms cpu, 2052->2249->2052 MB, 2052 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 142 @1037.020s 9%: 0.075+2290+0.018 ms clock, 0.22+466/1727/1828+0.055 ms cpu, 2052->2178->2046 MB, 2052 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 143 @1039.437s 9%: 0.099+1951+0.005 ms clock, 0.29+82/1468/2216+0.017 ms cpu, 2046->2250->2061 MB, 2046 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 144 @1041.491s 9%: 4.2+2278+0.005 ms clock, 12+613/1731/1549+0.015 ms cpu, 2061->2162->2023 MB, 2061 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 145 @1044.040s 9%: 0.087+1896+0.015 ms clock, 0.26+46/1432/2408+0.046 ms cpu, 2023->2186->2021 MB, 2023 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 146 @1046.047s 9%: 0.10+2674+0.005 ms clock, 0.31+1085/2020/1117+0.017 ms cpu, 2021->2180->2070 MB, 2021 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 147 @1048.942s 9%: 0.10+1767+0.026 ms clock, 0.30+22/1340/2393+0.079 ms cpu, 2070->2197->1985 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 148 @1050.849s 9%: 0.070+2471+0.019 ms clock, 0.21+858/1869/1133+0.059 ms cpu, 1985->2176->2084 MB, 1985 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 149 @1053.416s 9%: 0.061+1489+0.005 ms clock, 0.18+21/1121/2549+0.017 ms cpu, 2084->2106->1883 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 150 @1055.059s 9%: 0.087+2029+0.015 ms clock, 0.26+1434/1538/902+0.046 ms cpu, 1883->2066->2040 MB, 1883 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 151 @1057.194s 9%: 0.081+2508+0.005 ms clock, 0.24+788/1906/1443+0.017 ms cpu, 2041->2177->2057 MB, 2040 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 152 @1059.869s 9%: 0.079+2136+0.017 ms clock, 0.23+777/1618/1627+0.052 ms cpu, 2057->2252->2051 MB, 2057 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 153 @1062.116s 9%: 0.089+2493+0.005 ms clock, 0.26+753/1890/1481+0.015 ms cpu, 2051->2181->2051 MB, 2051 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 154 @1064.736s 9%: 0.054+2067+0.020 ms clock, 0.16+732/1566/1540+0.061 ms cpu, 2051->2247->2053 MB, 2051 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 155 @1066.907s 9%: 0.10+2165+0.005 ms clock, 0.30+273/1634/1975+0.017 ms cpu, 2053->2161->2030 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 156 @1069.174s 9%: 12+1892+0.014 ms clock, 38+127/1439/2078+0.044 ms cpu, 2030->2215->2042 MB, 2030 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 157 @1071.180s 9%: 3.7+2470+0.005 ms clock, 11+937/1851/1161+0.017 ms cpu, 2042->2180->2059 MB, 2042 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 158 @1073.754s 9%: 0.052+1477+0.028 ms clock, 0.15+28/1108/2645+0.085 ms cpu, 2059->2096->1897 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 159 @1075.331s 9%: 0.096+2234+0.045 ms clock, 0.29+1782/1695/637+0.13 ms cpu, 1897->2084->2056 MB, 1897 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 160 @1077.673s 9%: 0.11+2345+0.005 ms clock, 0.33+389/1791/1920+0.017 ms cpu, 2056->2180->2046 MB, 2056 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 161 @1080.126s 9%: 0.10+2090+0.024 ms clock, 0.31+1469/1571/791+0.072 ms cpu, 2046->2240->2052 MB, 2046 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 162 @1082.319s 10%: 0.15+2509+0.006 ms clock, 0.47+730/1877/1642+0.019 ms cpu, 2052->2181->2049 MB, 2052 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 163 @1084.962s 10%: 0.082+2140+0.014 ms clock, 0.24+1203/1619/1166+0.042 ms cpu, 2049->2242->2050 MB, 2049 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 164 @1087.254s 10%: 0.10+2345+0.005 ms clock, 0.30+460/1785/1640+0.016 ms cpu, 2050->2177->2048 MB, 2050 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 165 @1089.727s 10%: 0.067+1987+0.016 ms clock, 0.20+722/1497/1507+0.049 ms cpu, 2048->2244->2053 MB, 2048 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 166 @1091.813s 10%: 0.067+2260+0.006 ms clock, 0.20+336/1710/2008+0.018 ms cpu, 2053->2163->2032 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 167 @1094.228s 10%: 0.12+2098+0.014 ms clock, 0.38+519/1583/1978+0.043 ms cpu, 2032->2227->2053 MB, 2032 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 168 @1096.445s 10%: 7.5+2467+0.014 ms clock, 22+703/1872/1425+0.042 ms cpu, 2053->2187->2053 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 169 @1099.023s 10%: 0.075+2012+0.013 ms clock, 0.22+30/1521/2505+0.041 ms cpu, 2053->2239->2043 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 170 @1101.202s 10%: 0.089+2587+0.006 ms clock, 0.26+954/1945/1191+0.019 ms cpu, 2043->2185->2063 MB, 2043 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 171 @1104.039s 10%: 0.10+2014+0.016 ms clock, 0.30+65/1511/2451+0.050 ms cpu, 2063->2249->2043 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 172 @1106.169s 10%: 0.11+2596+0.005 ms clock, 0.34+943/1974/1161+0.017 ms cpu, 2044->2181->2059 MB, 2043 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 173 @1108.921s 10%: 0.10+1933+0.017 ms clock, 0.30+25/1448/2512+0.053 ms cpu, 2059->2207->2006 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 174 @1110.948s 10%: 0.069+2577+0.043 ms clock, 0.20+997/1948/1018+0.13 ms cpu, 2006->2179->2078 MB, 2006 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 175 @1113.624s 10%: 0.076+1528+0.31 ms clock, 0.22+19/1156/2914+0.93 ms cpu, 2078->2100->1884 MB, 2078 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 176 @1115.332s 10%: 14+2143+0.022 ms clock, 42+1755/1641/560+0.067 ms cpu, 1884->2069->2043 MB, 1884 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 177 @1117.593s 10%: 0.080+2281+0.005 ms clock, 0.24+644/1739/1492+0.017 ms cpu, 2043->2177->2053 MB, 2043 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 178 @1119.983s 10%: 0.078+2044+0.015 ms clock, 0.23+1059/1545/1187+0.045 ms cpu, 2053->2246->2050 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 179 @1122.131s 10%: 0.083+2408+0.006 ms clock, 0.25+765/1815/1461+0.018 ms cpu, 2050->2180->2051 MB, 2050 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 180 @1124.685s 10%: 0.080+2190+0.028 ms clock, 0.24+851/1642/1629+0.085 ms cpu, 2051->2247->2053 MB, 2051 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 181 @1126.987s 10%: 0.10+2488+0.006 ms clock, 0.31+468/1882/1875+0.019 ms cpu, 2053->2179->2048 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 182 @1129.580s 10%: 0.068+1989+0.023 ms clock, 0.20+456/1502/1877+0.069 ms cpu, 2048->2246->2056 MB, 2048 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 183 @1131.785s 11%: 0.16+2278+0.005 ms clock, 0.48+654/1725/1516+0.017 ms cpu, 2056->2162->2028 MB, 2056 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 184 @1134.169s 11%: 11+1867+0.014 ms clock, 33+138/1420/2111+0.042 ms cpu, 2028->2216->2046 MB, 2028 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 185 @1136.160s 11%: 0.11+2487+0.018 ms clock, 0.35+738/1878/1242+0.054 ms cpu, 2047->2187->2062 MB, 2046 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 186 @1138.752s 11%: 0.052+1362+0.004 ms clock, 0.15+2.6/1036/2601+0.014 ms cpu, 2062->2081->1882 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 187 @1140.202s 11%: 8.5+1974+0.017 ms clock, 25+1561/1498/615+0.051 ms cpu, 1882->2066->2043 MB, 1882 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 188 @1142.268s 11%: 0.078+2356+0.005 ms clock, 0.23+466/1781/1671+0.016 ms cpu, 2043->2182->2059 MB, 2043 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 189 @1144.735s 11%: 0.063+2148+0.020 ms clock, 0.18+610/1621/1806+0.062 ms cpu, 2059->2255->2054 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 190 @1146.997s 11%: 0.061+2355+0.015 ms clock, 0.18+445/1776/1816+0.045 ms cpu, 2054->2177->2046 MB, 2054 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 191 @1149.491s 11%: 0.082+2083+0.013 ms clock, 0.24+446/1574/2067+0.040 ms cpu, 2046->2243->2056 MB, 2046 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 192 @1151.684s 11%: 0.10+2422+0.005 ms clock, 0.32+797/1830/1527+0.017 ms cpu, 2056->2174->2041 MB, 2056 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 193 @1154.226s 11%: 0.10+2062+0.025 ms clock, 0.30+350/1550/1998+0.075 ms cpu, 2041->2242->2060 MB, 2041 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 194 @1156.393s 11%: 0.11+2569+0.005 ms clock, 0.35+1195/1945/1026+0.015 ms cpu, 2060->2198->2060 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 195 @1159.097s 11%: 0.078+1979+0.019 ms clock, 0.23+230/1491/2109+0.058 ms cpu, 2060->2254->2053 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 196 @1161.186s 11%: 13+2538+0.005 ms clock, 41+774/1928/1329+0.017 ms cpu, 2053->2195->2064 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 197 @1163.906s 11%: 0.077+2063+0.030 ms clock, 0.23+27/1564/2684+0.092 ms cpu, 2064->2241->2035 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 198 @1166.088s 11%: 0.074+2716+0.005 ms clock, 0.22+900/2048/1328+0.015 ms cpu, 2035->2184->2068 MB, 2035 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 199 @1168.947s 11%: 0.069+1782+0.015 ms clock, 0.20+20/1345/2380+0.045 ms cpu, 2068->2201->1993 MB, 2068 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 200 @1170.835s 11%: 0.058+2653+0.014 ms clock, 0.17+1028/2001/1066+0.042 ms cpu, 1993->2183->2088 MB, 1993 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 201 @1173.596s 11%: 0.099+1599+0.028 ms clock, 0.29+11/1198/2835+0.086 ms cpu, 2088->2122->1897 MB, 2088 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 202 @1175.303s 11%: 0.18+2126+0.031 ms clock, 0.55+1590/1605/738+0.095 ms cpu, 1897->2084->2048 MB, 1897 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 203 @1177.523s 11%: 0.091+2416+0.006 ms clock, 0.27+295/1832/2055+0.020 ms cpu, 2048->2169->2045 MB, 2048 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 204 @1180.052s 11%: 0.065+2214+0.022 ms clock, 0.19+1540/1672/908+0.068 ms cpu, 2045->2238->2053 MB, 2045 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 205 @1182.373s 11%: 0.12+2488+0.006 ms clock, 0.37+714/1863/1646+0.018 ms cpu, 2053->2185->2054 MB, 2053 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 206 @1185.064s 12%: 0.083+2041+0.014 ms clock, 0.25+1380/1557/861+0.042 ms cpu, 2054->2247->2052 MB, 2054 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 207 @1187.204s 12%: 7.8+2403+0.005 ms clock, 23+544/1816/1671+0.016 ms cpu, 2052->2184->2055 MB, 2052 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 208 @1189.727s 12%: 0.055+2044+0.014 ms clock, 0.16+714/1542/1544+0.042 ms cpu, 2055->2250->2054 MB, 2055 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 209 @1191.872s 12%: 0.069+2218+0.006 ms clock, 0.20+266/1680/1979+0.020 ms cpu, 2054->2170->2039 MB, 2054 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 210 @1194.209s 12%: 13+2050+0.014 ms clock, 39+290/1573/2092+0.044 ms cpu, 2039->2237->2058 MB, 2039 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 211 @1196.406s 12%: 0.13+2419+0.005 ms clock, 0.40+690/1850/1402+0.015 ms cpu, 2058->2186->2050 MB, 2058 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 212 @1198.962s 12%: 0.052+1834+0.005 ms clock, 0.15+121/1389/2243+0.015 ms cpu, 2050->2255->2065 MB, 2050 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 213 @1200.898s 12%: 0.10+2489+0.007 ms clock, 0.31+1383/1888/870+0.022 ms cpu, 2065->2274->2153 MB, 2065 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 214 @1203.588s 12%: 0.096+2072+0.023 ms clock, 0.29+34/1567/2299+0.069 ms cpu, 2153->2288->2063 MB, 2153 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 215 @1205.877s 12%: 0.069+2688+0.022 ms clock, 0.20+1124/2024/1073+0.066 ms cpu, 2063->2257->2095 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 216 @1208.672s 12%: 0.072+1930+0.019 ms clock, 0.21+32/1473/2556+0.057 ms cpu, 2095->2209->1981 MB, 2095 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 217 @1210.824s 12%: 0.086+2388+0.023 ms clock, 0.26+1065/1798/1318+0.070 ms cpu, 1981->2158->2081 MB, 1981 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 218 @1213.321s 12%: 0.075+2353+0.018 ms clock, 0.22+295/1777/2112+0.056 ms cpu, 2081->2239->2086 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 219 @1215.793s 12%: 0.065+2552+0.020 ms clock, 0.19+941/1916/1109+0.062 ms cpu, 2086->2282->2097 MB, 2086 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 220 @1218.445s 12%: 0.065+1484+0.012 ms clock, 0.19+19/1124/2592+0.037 ms cpu, 2097->2116->1889 MB, 2097 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 221 @1220.033s 12%: 0.064+2074+0.021 ms clock, 0.19+1435/1573/940+0.064 ms cpu, 1889->2074->2051 MB, 1889 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 222 @1222.211s 12%: 0.11+2476+0.007 ms clock, 0.34+761/1880/1436+0.023 ms cpu, 2052->2191->2070 MB, 2051 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 223 @1224.797s 12%: 0.11+2143+0.025 ms clock, 0.35+961/1618/1453+0.076 ms cpu, 2070->2266->2062 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 224 @1227.045s 12%: 0.11+2528+0.004 ms clock, 0.35+480/1903/1830+0.014 ms cpu, 2062->2192->2060 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 225 @1229.695s 12%: 0.070+2164+0.024 ms clock, 0.21+747/1624/1631+0.074 ms cpu, 2060->2257->2063 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 226 @1231.962s 12%: 0.065+2546+0.005 ms clock, 0.19+496/1913/1897+0.017 ms cpu, 2063->2195->2062 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 227 @1234.613s 12%: 0.066+2165+0.020 ms clock, 0.19+615/1631/1798+0.061 ms cpu, 2062->2260->2064 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 228 @1236.886s 12%: 0.078+2420+0.020 ms clock, 0.23+510/1830/1816+0.061 ms cpu, 2064->2190->2057 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 229 @1239.406s 12%: 0.10+2126+0.013 ms clock, 0.31+189/1609/2178+0.041 ms cpu, 2057->2260->2070 MB, 2057 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 230 @1241.637s 12%: 0.063+2431+0.021 ms clock, 0.19+630/1848/1783+0.065 ms cpu, 2070->2174->2035 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 231 @1244.168s 12%: 22+1925+0.014 ms clock, 68+163/1469/2170+0.044 ms cpu, 2035->2220->2052 MB, 2035 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 232 @1246.215s 13%: 2.2+2611+0.005 ms clock, 6.7+871/1981/1302+0.016 ms cpu, 2052->2191->2069 MB, 2052 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 233 @1248.936s 13%: 0.098+1950+0.020 ms clock, 0.29+24/1465/2520+0.061 ms cpu, 2069->2235->2033 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 234 @1250.989s 13%: 0.097+2673+0.004 ms clock, 0.29+674/2016/1546+0.014 ms cpu, 2033->2190->2081 MB, 2033 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 235 @1253.772s 13%: 0.071+1622+0.018 ms clock, 0.21+30/1220/2542+0.056 ms cpu, 2081->2161->1950 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 236 @1255.493s 13%: 0.078+2352+0.014 ms clock, 0.23+1174/1791/1163+0.042 ms cpu, 1950->2138->2077 MB, 1950 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 237 @1257.956s 13%: 0.065+1978+0.006 ms clock, 0.19+390/1503/2001+0.020 ms cpu, 2077->2155->2010 MB, 2077 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 238 @1260.044s 13%: 0.067+2049+0.014 ms clock, 0.20+1267/1540/1021+0.043 ms cpu, 2010->2199->2056 MB, 2010 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 239 @1262.190s 13%: 1.7+2471+0.004 ms clock, 5.2+645/1887/1543+0.014 ms cpu, 2056->2191->2066 MB, 2056 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 240 @1264.824s 13%: 0.078+2069+0.018 ms clock, 0.23+825/1563/1445+0.054 ms cpu, 2066->2262->2063 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 241 @1267.005s 13%: 0.10+2454+0.004 ms clock, 0.32+727/1851/1570+0.014 ms cpu, 2063->2187->2055 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 242 @1269.552s 13%: 0.055+2059+0.020 ms clock, 0.16+492/1562/1890+0.062 ms cpu, 2055->2254->2066 MB, 2055 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 243 @1271.715s 13%: 0.086+2289+0.005 ms clock, 0.26+625/1726/1620+0.016 ms cpu, 2066->2172->2037 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 244 @1274.104s 13%: 0.059+1914+0.023 ms clock, 0.17+204/1450/2118+0.070 ms cpu, 2037->2224->2054 MB, 2037 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 245 @1276.119s 13%: 0.064+2568+0.005 ms clock, 0.19+752/1938/1303+0.016 ms cpu, 2054->2196->2074 MB, 2054 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 246 @1278.796s 13%: 0.070+1695+0.027 ms clock, 0.21+29/1280/2534+0.081 ms cpu, 2074->2169->1965 MB, 2074 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 247 @1280.588s 13%: 0.092+2445+0.008 ms clock, 0.27+1177/1850/992+0.025 ms cpu, 1965->2160->2093 MB, 1965 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 248 @1283.147s 13%: 0.18+1847+0.006 ms clock, 0.56+370/1405/2144+0.020 ms cpu, 2093->2152->1992 MB, 2093 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 249 @1285.095s 13%: 0.071+2188+0.019 ms clock, 0.21+1642/1651/761+0.057 ms cpu, 1993->2181->2057 MB, 1992 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 250 @1287.393s 13%: 0.073+2445+0.006 ms clock, 0.22+648/1850/1604+0.018 ms cpu, 2057->2187->2060 MB, 2057 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 251 @1290.014s 13%: 0.083+2231+0.019 ms clock, 0.25+1552/1690/877+0.059 ms cpu, 2060->2256->2063 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 252 @1292.356s 13%: 0.10+2547+0.036 ms clock, 0.30+754/1936/1606+0.10 ms cpu, 2064->2191->2057 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 253 @1295.009s 13%: 0.076+2101+0.014 ms clock, 0.22+1306/1582/1038+0.043 ms cpu, 2057->2249->2059 MB, 2057 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 254 @1297.209s 13%: 3.4+2526+0.005 ms clock, 10+547/1907/1765+0.016 ms cpu, 2059->2190->2062 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 255 @1299.913s 13%: 0.096+2166+0.019 ms clock, 0.29+1130/1635/1280+0.059 ms cpu, 2062->2257->2062 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 256 @1302.190s 13%: 8.2+2543+0.005 ms clock, 24+686/1923/1650+0.016 ms cpu, 2062->2193->2062 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 257 @1304.855s 13%: 0.092+2179+0.018 ms clock, 0.27+687/1660/1763+0.054 ms cpu, 2062->2250->2055 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 258 @1307.137s 14%: 0.11+2522+0.004 ms clock, 0.33+616/1902/1671+0.014 ms cpu, 2055->2189->2065 MB, 2055 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 259 @1309.775s 14%: 0.073+2187+0.021 ms clock, 0.22+928/1663/1502+0.065 ms cpu, 2065->2262->2064 MB, 2065 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 260 @1312.186s 14%: 0.092+2504+0.004 ms clock, 0.27+690/1876/1623+0.014 ms cpu, 2064->2193->2061 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 261 @1314.799s 14%: 0.085+2167+0.017 ms clock, 0.25+929/1634/1490+0.051 ms cpu, 2061->2255->2062 MB, 2061 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 262 @1317.072s 14%: 0.096+2528+0.033 ms clock, 0.29+535/1917/1742+0.10 ms cpu, 2062->2193->2063 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 263 @1319.703s 14%: 0.095+2195+0.015 ms clock, 0.28+710/1648/1760+0.046 ms cpu, 2063->2260->2065 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 264 @1322.008s 14%: 0.070+2546+0.005 ms clock, 0.21+665/1916/1720+0.017 ms cpu, 2065->2193->2060 MB, 2065 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 265 @1324.655s 14%: 0.062+2224+0.022 ms clock, 0.18+677/1678/1829+0.067 ms cpu, 2060->2255->2063 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 266 @1326.986s 14%: 0.065+2471+0.007 ms clock, 0.19+499/1870/1803+0.023 ms cpu, 2064->2190->2059 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 267 @1329.556s 14%: 0.070+2138+0.028 ms clock, 0.21+576/1618/1882+0.086 ms cpu, 2059->2258->2067 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 268 @1331.795s 14%: 0.097+2464+0.030 ms clock, 0.29+709/1862/1616+0.091 ms cpu, 2067->2191->2057 MB, 2067 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 269 @1334.362s 14%: 0.092+2204+0.016 ms clock, 0.27+377/1654/2056+0.050 ms cpu, 2057->2258->2071 MB, 2057 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 270 @1336.676s 14%: 0.063+2401+0.021 ms clock, 0.19+643/1825/1697+0.064 ms cpu, 2071->2178->2042 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 271 @1339.210s 14%: 0.12+2009+0.016 ms clock, 0.37+114/1519/2239+0.049 ms cpu, 2042->2239->2067 MB, 2042 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 272 @1341.344s 14%: 0.10+2445+0.016 ms clock, 0.30+833/1840/1330+0.050 ms cpu, 2067->2190->2055 MB, 2067 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 273 @1344.001s 14%: 0.10+1899+0.016 ms clock, 0.32+2.2/1442/2388+0.050 ms cpu, 2055->2217->2032 MB, 2055 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 274 @1346.084s 14%: 0.11+2709+0.006 ms clock, 0.33+1026/2050/1121+0.020 ms cpu, 2032->2198->2088 MB, 2032 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 275 @1348.909s 14%: 0.098+1958+0.013 ms clock, 0.29+23/1486/2565+0.041 ms cpu, 2088->2244->2027 MB, 2088 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 276 @1350.989s 14%: 0.098+2742+0.005 ms clock, 0.29+1058/2059/1193+0.017 ms cpu, 2027->2196->2089 MB, 2027 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 277 @1353.847s 14%: 0.085+1843+0.020 ms clock, 0.25+18/1402/2493+0.061 ms cpu, 2089->2217->1999 MB, 2089 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 278 @1355.906s 14%: 0.074+2665+0.017 ms clock, 0.22+1426/2008/810+0.052 ms cpu, 1999->2198->2103 MB, 1999 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 279 @1358.700s 14%: 0.092+2049+0.023 ms clock, 0.27+31/1547/2442+0.071 ms cpu, 2103->2241->2070 MB, 2103 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 280 @1360.852s 14%: 0.063+2580+0.016 ms clock, 0.19+1035/1958/962+0.048 ms cpu, 2070->2262->2101 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 281 @1363.533s 14%: 0.12+1568+0.006 ms clock, 0.36+20/1180/2839+0.018 ms cpu, 2101->2124->1897 MB, 2101 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 282 @1365.202s 14%: 17+2162+0.019 ms clock, 51+1772/1640/549+0.057 ms cpu, 1897->2083->2056 MB, 1897 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 283 @1367.487s 14%: 0.093+2519+0.005 ms clock, 0.27+853/1895/1509+0.015 ms cpu, 2056->2191->2069 MB, 2056 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 284 @1370.116s 14%: 0.096+2246+0.019 ms clock, 0.29+1798/1702/645+0.057 ms cpu, 2069->2266->2067 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 285 @1372.469s 14%: 0.12+2459+0.005 ms clock, 0.37+462/1877/1789+0.017 ms cpu, 2067->2196->2063 MB, 2067 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 286 @1375.038s 14%: 0.093+2198+0.024 ms clock, 0.27+1368/1663/983+0.072 ms cpu, 2063->2259->2066 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 287 @1377.339s 15%: 0.060+2442+0.007 ms clock, 0.18+489/1835/1805+0.021 ms cpu, 2067->2192->2059 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 288 @1379.889s 15%: 0.092+2135+0.032 ms clock, 0.27+957/1610/1433+0.098 ms cpu, 2059->2255->2066 MB, 2059 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 289 @1382.129s 15%: 0.086+2458+0.019 ms clock, 0.26+712/1871/1546+0.057 ms cpu, 2066->2193->2062 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 290 @1384.696s 15%: 0.059+2209+0.026 ms clock, 0.17+705/1659/1763+0.080 ms cpu, 2062->2258->2067 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 291 @1387.019s 15%: 0.068+2497+0.004 ms clock, 0.20+698/1886/1611+0.014 ms cpu, 2067->2192->2060 MB, 2067 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 292 @1389.634s 15%: 0.065+2145+0.033 ms clock, 0.19+549/1620/1848+0.10 ms cpu, 2060->2256->2067 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 293 @1391.899s 15%: 0.090+2365+0.021 ms clock, 0.27+377/1791/1920+0.063 ms cpu, 2067->2195->2063 MB, 2067 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 294 @1394.371s 15%: 0.094+2253+0.023 ms clock, 0.28+294/1692/2058+0.069 ms cpu, 2063->2266->2075 MB, 2063 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 295 @1396.729s 15%: 0.099+2305+0.006 ms clock, 0.29+437/1760/2003+0.018 ms cpu, 2075->2178->2038 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 296 @1399.214s 15%: 13+2107+0.023 ms clock, 40+283/1603/2105+0.070 ms cpu, 2038->2237->2070 MB, 2038 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 297 @1401.456s 15%: 0.13+2551+0.005 ms clock, 0.41+854/1927/1423+0.015 ms cpu, 2070->2194->2058 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 298 @1404.115s 15%: 0.062+2147+0.024 ms clock, 0.18+272/1631/2226+0.074 ms cpu, 2058->2263->2076 MB, 2058 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 299 @1406.445s 15%: 0.13+2656+0.004 ms clock, 0.39+1129/1992/1241+0.014 ms cpu, 2076->2199->2058 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 300 @1409.204s 15%: 12+2073+0.016 ms clock, 36+154/1583/2147+0.048 ms cpu, 2058->2263->2077 MB, 2058 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 301 @1411.398s 15%: 0.10+2507+0.014 ms clock, 0.32+661/1894/1583+0.043 ms cpu, 2077->2194->2052 MB, 2077 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 302 @1414.010s 15%: 0.062+1936+0.020 ms clock, 0.18+24/1465/2462+0.060 ms cpu, 2052->2223->2043 MB, 2052 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 303 @1416.048s 15%: 0.064+2752+0.006 ms clock, 0.19+930/2083/1307+0.019 ms cpu, 2043->2200->2085 MB, 2043 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 304 @1418.972s 15%: 0.066+1855+0.024 ms clock, 0.20+31/1408/2422+0.072 ms cpu, 2085->2231->2019 MB, 2085 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 305 @1420.932s 15%: 0.068+2725+0.006 ms clock, 0.20+1154/2057/1019+0.020 ms cpu, 2019->2199->2098 MB, 2019 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 306 @1423.777s 15%: 0.10+1890+0.026 ms clock, 0.31+17/1419/2619+0.078 ms cpu, 2098->2232->2008 MB, 2098 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 307 @1425.779s 15%: 0.073+2759+0.023 ms clock, 0.21+1225/2086/928+0.070 ms cpu, 2008->2202->2104 MB, 2008 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 308 @1428.637s 15%: 0.10+1771+0.025 ms clock, 0.31+11/1331/2693+0.075 ms cpu, 2104->2181->1953 MB, 2104 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 309 @1430.522s 15%: 0.11+2337+0.027 ms clock, 0.34+1540/1761/987+0.081 ms cpu, 1953->2131->2076 MB, 1953 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 310 @1432.980s 15%: 0.060+2123+0.006 ms clock, 0.18+403/1602/1962+0.019 ms cpu, 2076->2165->2027 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 311 @1435.209s 15%: 12+2209+0.014 ms clock, 37+1731/1678/564+0.044 ms cpu, 2027->2218->2064 MB, 2027 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 312 @1437.537s 15%: 0.086+2444+0.005 ms clock, 0.26+340/1837/1999+0.016 ms cpu, 2064->2194->2066 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 313 @1440.091s 15%: 0.17+2257+0.017 ms clock, 0.51+1651/1708/794+0.052 ms cpu, 2066->2262->2070 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 314 @1442.444s 15%: 0.068+2502+0.006 ms clock, 0.20+441/1883/1987+0.018 ms cpu, 2070->2199->2065 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 315 @1445.065s 15%: 0.083+2279+0.12 ms clock, 0.25+1760/1717/695+0.38 ms cpu, 2065->2261->2070 MB, 2065 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 316 @1447.559s 15%: 0.10+2370+0.006 ms clock, 0.32+601/1796/1650+0.020 ms cpu, 2070->2198->2064 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 317 @1450.039s 15%: 0.10+2305+0.024 ms clock, 0.30+1586/1741/911+0.072 ms cpu, 2064->2260->2069 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 318 @1452.460s 15%: 6.8+2396+0.005 ms clock, 20+754/1808/1461+0.017 ms cpu, 2069->2205->2072 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 319 @1454.976s 16%: 0.091+2184+0.022 ms clock, 0.27+1275/1646/1184+0.067 ms cpu, 2072->2268->2069 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 320 @1457.278s 16%: 4.5+2450+0.005 ms clock, 13+839/1866/1275+0.017 ms cpu, 2069->2217->2084 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 321 @1459.837s 16%: 0.064+2115+0.020 ms clock, 0.19+971/1598/1413+0.061 ms cpu, 2084->2283->2072 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 322 @1462.059s 16%: 0.068+2655+0.005 ms clock, 0.20+817/2000/1475+0.015 ms cpu, 2072->2218->2084 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 323 @1464.818s 16%: 0.12+2233+0.016 ms clock, 0.36+1052/1699/1449+0.049 ms cpu, 2084->2281->2072 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 324 @1467.273s 16%: 5.1+2495+0.005 ms clock, 15+689/1886/1551+0.016 ms cpu, 2072->2199->2066 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 325 @1469.877s 16%: 0.085+2150+0.013 ms clock, 0.25+1020/1613/1408+0.039 ms cpu, 2066->2260->2069 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 326 @1472.137s 16%: 0.082+2567+0.005 ms clock, 0.24+763/1929/1564+0.017 ms cpu, 2069->2202->2071 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 327 @1474.816s 16%: 0.078+2203+0.18 ms clock, 0.23+866/1662/1601+0.54 ms cpu, 2071->2267->2070 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 328 @1477.131s 16%: 0.099+2554+0.006 ms clock, 0.29+806/1933/1537+0.019 ms cpu, 2070->2198->2066 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 329 @1479.791s 16%: 0.067+2254+0.025 ms clock, 0.20+1045/1694/1470+0.075 ms cpu, 2066->2263->2071 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 330 @1482.155s 16%: 7.0+2584+0.005 ms clock, 21+618/1950/1723+0.017 ms cpu, 2071->2207->2074 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 331 @1484.865s 16%: 0.10+2285+0.016 ms clock, 0.30+1191/1718/1395+0.048 ms cpu, 2074->2269->2070 MB, 2074 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 332 @1487.272s 16%: 0.14+2490+0.015 ms clock, 0.44+685/1888/1626+0.045 ms cpu, 2071->2198->2067 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 333 @1489.867s 16%: 0.069+2085+0.013 ms clock, 0.20+839/1569/1502+0.040 ms cpu, 2067->2262->2070 MB, 2067 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 334 @1492.057s 16%: 0.059+2468+0.005 ms clock, 0.17+654/1860/1602+0.017 ms cpu, 2070->2203->2072 MB, 2070 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 335 @1494.626s 16%: 0.063+2214+0.016 ms clock, 0.19+714/1675/1725+0.049 ms cpu, 2072->2269->2072 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 336 @1496.949s 16%: 0.096+2528+0.034 ms clock, 0.29+617/1899/1706+0.10 ms cpu, 2072->2205->2072 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 337 @1499.588s 16%: 0.098+2230+0.032 ms clock, 0.29+740/1689/1775+0.098 ms cpu, 2072->2270->2074 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 338 @1501.928s 16%: 0.10+2530+0.015 ms clock, 0.31+646/1910/1753+0.047 ms cpu, 2074->2205->2071 MB, 2074 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 339 @1504.569s 16%: 0.067+2163+0.016 ms clock, 0.20+513/1638/1880+0.049 ms cpu, 2071->2269->2075 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 340 @1506.836s 16%: 0.095+2401+0.026 ms clock, 0.28+483/1807/1879+0.078 ms cpu, 2075->2195->2060 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 341 @1509.342s 16%: 0.092+2096+0.005 ms clock, 0.27+321/1589/2115+0.015 ms cpu, 2060->2264->2081 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 342 @1511.553s 16%: 0.081+2522+0.005 ms clock, 0.24+652/1906/1732+0.017 ms cpu, 2081->2194->2055 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 343 @1514.176s 16%: 12+2065+0.023 ms clock, 36+212/1579/2129+0.071 ms cpu, 2055->2259->2081 MB, 2055 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 344 @1516.354s 16%: 3.5+2517+0.006 ms clock, 10+777/1902/1453+0.020 ms cpu, 2081->2203->2062 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 345 @1519.007s 16%: 0.081+1862+0.019 ms clock, 0.24+23/1405/2371+0.058 ms cpu, 2062->2222->2038 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 346 @1520.964s 16%: 0.054+2742+0.016 ms clock, 0.16+905/2063/1258+0.049 ms cpu, 2038->2201->2094 MB, 2038 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 347 @1523.816s 16%: 0.097+1812+0.022 ms clock, 0.29+22/1382/2504+0.067 ms cpu, 2094->2208->1993 MB, 2094 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 348 @1525.731s 16%: 0.075+2484+0.18 ms clock, 0.22+944/1893/1236+0.55 ms cpu, 1993->2193->2104 MB, 1993 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 349 @1528.326s 16%: 0.073+1764+0.006 ms clock, 0.21+125/1336/2669+0.019 ms cpu, 2104->2136->1914 MB, 2104 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 350 @1530.280s 16%: 16+2228+0.026 ms clock, 48+1839/1687/500+0.079 ms cpu, 1914->2101->2066 MB, 1914 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 351 @1532.627s 16%: 0.067+2329+0.005 ms clock, 0.20+741/1764/1423+0.017 ms cpu, 2066->2207->2081 MB, 2066 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 352 @1535.174s 16%: 0.17+2194+0.022 ms clock, 0.53+1631/1651/689+0.068 ms cpu, 2082->2279->2077 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 353 @1537.548s 17%: 0.10+2412+0.004 ms clock, 0.30+646/1832/1676+0.014 ms cpu, 2077->2199->2064 MB, 2077 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 354 @1540.119s 17%: 0.077+2247+0.019 ms clock, 0.23+1738/1693/660+0.059 ms cpu, 2064->2261->2074 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 355 @1542.469s 17%: 6.8+2481+0.005 ms clock, 20+686/1864/1643+0.016 ms cpu, 2074->2204->2071 MB, 2074 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 356 @1545.059s 17%: 0.083+2185+0.018 ms clock, 0.25+1436/1639/944+0.056 ms cpu, 2071->2268->2076 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 357 @1547.345s 17%: 0.071+2439+0.006 ms clock, 0.21+612/1838/1639+0.018 ms cpu, 2076->2205->2071 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 358 @1549.891s 17%: 0.11+2195+0.023 ms clock, 0.35+1208/1657/1235+0.070 ms cpu, 2071->2266->2074 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 359 @1552.188s 17%: 0.091+2579+0.013 ms clock, 0.27+608/1937/1728+0.040 ms cpu, 2074->2210->2078 MB, 2074 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 360 @1554.871s 17%: 0.077+2219+0.023 ms clock, 0.23+1173/1673/1329+0.071 ms cpu, 2078->2274->2074 MB, 2078 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 361 @1557.197s 17%: 0.097+2554+0.005 ms clock, 0.29+614/1945/1591+0.016 ms cpu, 2075->2217->2085 MB, 2074 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 362 @1559.856s 17%: 0.093+2199+0.014 ms clock, 0.28+1074/1650/1373+0.044 ms cpu, 2085->2283->2077 MB, 2085 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 363 @1562.168s 17%: 0.14+2550+0.036 ms clock, 0.42+815/1924/1444+0.10 ms cpu, 2077->2216->2082 MB, 2077 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 364 @1564.828s 17%: 0.068+2164+0.014 ms clock, 0.20+925/1637/1521+0.042 ms cpu, 2082->2275->2072 MB, 2082 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 365 @1567.107s 17%: 0.094+2558+0.006 ms clock, 0.28+558/1928/1792+0.019 ms cpu, 2072->2201->2071 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 366 @1569.770s 17%: 0.079+2211+0.026 ms clock, 0.23+877/1687/1628+0.080 ms cpu, 2071->2268->2075 MB, 2071 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 367 @1572.096s 17%: 0.075+2616+0.006 ms clock, 0.22+841/1969/1529+0.018 ms cpu, 2075->2205->2072 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 368 @1574.820s 17%: 0.081+2282+0.023 ms clock, 0.24+869/1732/1741+0.070 ms cpu, 2072->2270->2077 MB, 2072 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 369 @1577.219s 17%: 7.7+2512+0.005 ms clock, 23+769/1898/1555+0.017 ms cpu, 2077->2203->2069 MB, 2077 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 370 @1579.858s 17%: 0.11+2203+0.023 ms clock, 0.33+1087/1666/1355+0.069 ms cpu, 2069->2265->2075 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 371 @1582.217s 17%: 7.1+2511+0.005 ms clock, 21+822/1914/1386+0.016 ms cpu, 2075->2211->2079 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 372 @1584.844s 17%: 0.065+2237+0.019 ms clock, 0.19+1037/1687/1437+0.058 ms cpu, 2079->2276->2076 MB, 2079 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 373 @1587.283s 17%: 0.11+2418+0.014 ms clock, 0.35+735/1819/1413+0.044 ms cpu, 2077->2208->2075 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 374 @1589.808s 17%: 0.075+2075+0.043 ms clock, 0.22+803/1561/1514+0.12 ms cpu, 2075->2271->2076 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 375 @1591.998s 17%: 0.079+2419+0.015 ms clock, 0.23+620/1830/1565+0.046 ms cpu, 2076->2213->2081 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 376 @1594.516s 17%: 0.093+2111+0.017 ms clock, 0.27+559/1598/1945+0.051 ms cpu, 2081->2281->2081 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 377 @1596.732s 17%: 0.083+2437+0.025 ms clock, 0.24+450/1828/1937+0.077 ms cpu, 2081->2194->2058 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 378 @1599.297s 17%: 0.11+2007+0.004 ms clock, 0.35+293/1523/2058+0.013 ms cpu, 2058->2260->2081 MB, 2058 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 379 @1601.408s 17%: 0.059+2379+0.005 ms clock, 0.17+821/1792/1347+0.015 ms cpu, 2081->2197->2060 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 380 @1603.893s 17%: 0.054+1804+0.027 ms clock, 0.16+22/1362/2461+0.082 ms cpu, 2060->2190->2013 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 381 @1605.790s 17%: 0.10+2625+0.016 ms clock, 0.30+963/1975/1267+0.049 ms cpu, 2013->2214->2117 MB, 2013 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 382 @1608.523s 17%: 0.073+1836+0.021 ms clock, 0.22+26/1393/2621+0.065 ms cpu, 2117->2187->1958 MB, 2117 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 383 @1610.461s 17%: 0.090+2328+0.014 ms clock, 0.27+1513/1765/903+0.044 ms cpu, 1958->2147->2088 MB, 1958 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 384 @1612.904s 17%: 0.10+2430+0.020 ms clock, 0.32+601/1832/1767+0.061 ms cpu, 2088->2235->2092 MB, 2088 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 385 @1615.438s 17%: 0.11+2404+0.014 ms clock, 0.35+1221/1823/1024+0.042 ms cpu, 2092->2292->2101 MB, 2092 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 386 @1617.952s 17%: 0.067+2151+0.005 ms clock, 0.20+284/1610/2293+0.017 ms cpu, 2101->2181->2026 MB, 2101 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 387 @1620.215s 17%: 13+2220+0.024 ms clock, 40+1669/1702/667+0.072 ms cpu, 2026->2219->2075 MB, 2026 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 388 @1622.551s 17%: 0.11+2468+0.005 ms clock, 0.35+1016/1848/1314+0.017 ms cpu, 2075->2218->2089 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 389 @1625.128s 18%: 0.12+2341+0.042 ms clock, 0.37+1832/1771/629+0.12 ms cpu, 2089->2287->2080 MB, 2089 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 390 @1627.580s 18%: 0.065+2350+0.005 ms clock, 0.19+210/1775/2094+0.016 ms cpu, 2080->2203->2069 MB, 2080 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 391 @1630.064s 18%: 0.085+2172+0.030 ms clock, 0.25+1532/1646/817+0.090 ms cpu, 2069->2265->2079 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 392 @1632.341s 18%: 0.093+2492+0.005 ms clock, 0.27+740/1898/1554+0.017 ms cpu, 2079->2211->2076 MB, 2079 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 393 @1634.947s 18%: 0.066+2184+0.020 ms clock, 0.20+1194/1657/1219+0.061 ms cpu, 2076->2272->2078 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 394 @1637.245s 18%: 0.090+2531+0.006 ms clock, 0.27+820/1904/1442+0.020 ms cpu, 2078->2215->2083 MB, 2078 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 395 @1639.908s 18%: 0.13+2192+0.020 ms clock, 0.40+1187/1646/1318+0.062 ms cpu, 2083->2279->2079 MB, 2083 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 396 @1642.202s 18%: 8.7+2599+0.005 ms clock, 26+727/1977/1509+0.016 ms cpu, 2079->2224->2092 MB, 2079 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 397 @1644.911s 18%: 0.081+2221+0.019 ms clock, 0.24+1160/1674/1329+0.057 ms cpu, 2092->2289->2081 MB, 2092 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 398 @1647.242s 18%: 4.5+2483+0.006 ms clock, 13+545/1885/1731+0.018 ms cpu, 2081->2208->2075 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 399 @1649.872s 18%: 0.066+2185+0.016 ms clock, 0.20+1078/1640/1391+0.048 ms cpu, 2075->2271->2081 MB, 2075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 400 @1652.164s 18%: 7.1+2505+0.005 ms clock, 21+672/1892/1581+0.016 ms cpu, 2081->2214->2081 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 401 @1654.780s 18%: 0.065+2119+0.015 ms clock, 0.19+691/1599/1600+0.046 ms cpu, 2081->2280->2083 MB, 2081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 402 @1657.005s 18%: 0.070+2489+0.006 ms clock, 0.21+573/1864/1809+0.019 ms cpu, 2083->2208->2073 MB, 2083 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 403 @1659.616s 18%: 0.064+2056+0.016 ms clock, 0.19+704/1553/1714+0.048 ms cpu, 2073->2271->2083 MB, 2073 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 404 @1661.773s 18%: 0.084+2447+0.017 ms clock, 0.25+457/1843/1901+0.052 ms cpu, 2083->2202->2068 MB, 2083 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 405 @1664.326s 18%: 0.13+2147+0.020 ms clock, 0.40+338/1612/2162+0.060 ms cpu, 2068->2273->2089 MB, 2068 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 406 @1666.590s 18%: 0.10+2482+0.005 ms clock, 0.32+549/1871/1854+0.017 ms cpu, 2089->2198->2058 MB, 2089 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 407 @1669.192s 18%: 12+2033+0.005 ms clock, 38+118/1546/2214+0.016 ms cpu, 2058->2264->2090 MB, 2058 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 408 @1671.339s 18%: 0.071+2451+0.006 ms clock, 0.21+709/1841/1519+0.020 ms cpu, 2090->2206->2064 MB, 2090 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 409 @1673.914s 18%: 0.096+1833+0.026 ms clock, 0.29+23/1395/2403+0.079 ms cpu, 2064->2212->2033 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 410 @1675.855s 18%: 0.067+2749+0.006 ms clock, 0.20+1092/2075/1002+0.018 ms cpu, 2033->2214->2111 MB, 2033 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 411 @1678.759s 18%: 0.10+1928+0.028 ms clock, 0.30+31/1448/2648+0.086 ms cpu, 2111->2246->2021 MB, 2111 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 412 @1680.795s 18%: 0.099+2622+0.024 ms clock, 0.29+905/1970/1229+0.074 ms cpu, 2021->2212->2113 MB, 2021 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 413 @1683.516s 18%: 0.060+1500+0.005 ms clock, 0.18+20/1143/2638+0.015 ms cpu, 2113->2135->1911 MB, 2113 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 414 @1685.127s 18%: 0.084+2137+0.020 ms clock, 0.25+1658/1612/721+0.061 ms cpu, 1911->2099->2073 MB, 1911 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 415 @1687.484s 18%: 0.090+2501+0.006 ms clock, 0.27+985/1886/1347+0.019 ms cpu, 2073->2213->2088 MB, 2073 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 416 @1690.132s 18%: 0.097+2229+0.017 ms clock, 0.29+1607/1683/735+0.053 ms cpu, 2088->2285->2084 MB, 2088 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 417 @1692.463s 18%: 0.10+2353+0.016 ms clock, 0.32+519/1791/1675+0.049 ms cpu, 2084->2211->2076 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 418 @1694.921s 18%: 0.065+2162+0.015 ms clock, 0.19+1098/1646/1292+0.045 ms cpu, 2076->2273->2083 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 419 @1697.205s 18%: 0.079+2447+0.014 ms clock, 0.23+669/1849/1528+0.042 ms cpu, 2083->2213->2080 MB, 2083 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 420 @1699.800s 18%: 0.10+2169+0.015 ms clock, 0.32+966/1637/1498+0.047 ms cpu, 2080->2277->2083 MB, 2080 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 421 @1702.078s 18%: 0.079+2618+0.007 ms clock, 0.23+821/1966/1609+0.021 ms cpu, 2083->2216->2082 MB, 2083 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 422 @1704.859s 18%: 0.085+2226+0.038 ms clock, 0.25+919/1698/1585+0.11 ms cpu, 2082->2278->2082 MB, 2082 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 423 @1707.205s 18%: 9.3+2526+0.007 ms clock, 28+554/1912/1724+0.021 ms cpu, 2082->2218->2086 MB, 2082 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 424 @1709.903s 18%: 0.093+2140+0.022 ms clock, 0.27+1149/1602/1254+0.066 ms cpu, 2086->2282->2084 MB, 2086 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 425 @1712.166s 18%: 0.12+2601+0.006 ms clock, 0.38+791/1969/1545+0.018 ms cpu, 2084->2222->2089 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 426 @1714.930s 18%: 0.082+2182+0.021 ms clock, 0.24+1111/1648/1336+0.063 ms cpu, 2089->2287->2085 MB, 2089 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 427 @1717.236s 18%: 0.11+2522+0.007 ms clock, 0.34+565/1903/1775+0.022 ms cpu, 2085->2209->2076 MB, 2085 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 428 @1719.914s 18%: 0.082+2196+0.014 ms clock, 0.24+1099/1654/1388+0.042 ms cpu, 2076->2273->2084 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 429 @1722.230s 18%: 0.098+2386+0.007 ms clock, 0.29+514/1811/1649+0.021 ms cpu, 2085->2213->2080 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 430 @1724.778s 18%: 0.079+2166+0.033 ms clock, 0.23+860/1642/1606+0.10 ms cpu, 2080->2276->2084 MB, 2080 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 431 @1727.058s 18%: 0.079+2585+0.006 ms clock, 0.23+571/1957/1802+0.018 ms cpu, 2084->2210->2078 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 432 @1729.741s 19%: 0.069+2181+0.026 ms clock, 0.20+776/1640/1665+0.080 ms cpu, 2078->2276->2085 MB, 2078 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 433 @1732.019s 19%: 0.089+2589+0.006 ms clock, 0.26+783/1961/1567+0.019 ms cpu, 2085->2217->2083 MB, 2085 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 434 @1734.793s 19%: 0.099+2179+0.015 ms clock, 0.29+1065/1644/1396+0.046 ms cpu, 2083->2278->2082 MB, 2083 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 435 @1737.075s 19%: 0.066+2554+0.006 ms clock, 0.19+824/1930/1492+0.018 ms cpu, 2082->2212->2082 MB, 2082 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 436 @1739.763s 19%: 0.086+2305+0.015 ms clock, 0.25+1095/1758/1535+0.045 ms cpu, 2082->2281->2087 MB, 2082 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 437 @1742.175s 19%: 3.3+2505+0.004 ms clock, 9.9+744/1896/1526+0.013 ms cpu, 2087->2218->2084 MB, 2087 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 438 @1744.794s 19%: 0.058+2109+0.021 ms clock, 0.17+849/1587/1442+0.064 ms cpu, 2084->2282->2087 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 439 @1747.005s 19%: 0.11+2490+0.008 ms clock, 0.34+489/1876/1878+0.024 ms cpu, 2087->2213->2079 MB, 2087 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 440 @1749.671s 19%: 0.075+2180+0.022 ms clock, 0.22+663/1650/1771+0.068 ms cpu, 2079->2274->2084 MB, 2079 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 441 @1751.981s 19%: 0.075+2516+0.006 ms clock, 0.22+752/1896/1604+0.020 ms cpu, 2084->2217->2086 MB, 2084 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 442 @1754.695s 19%: 0.074+2152+0.014 ms clock, 0.22+620/1631/1693+0.044 ms cpu, 2086->2285->2088 MB, 2086 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 443 @1756.966s 19%: 0.098+2522+0.005 ms clock, 0.29+532/1903/1950+0.017 ms cpu, 2088->2211->2076 MB, 2088 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 444 @1759.662s 19%: 0.066+2233+0.014 ms clock, 0.20+647/1676/1795+0.044 ms cpu, 2076->2273->2087 MB, 2076 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 445 @1762.003s 19%: 0.075+2503+0.005 ms clock, 0.22+647/1890/1662+0.015 ms cpu, 2087->2210->2077 MB, 2087 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 446 @1764.658s 19%: 0.055+2090+0.021 ms clock, 0.16+646/1577/1683+0.064 ms cpu, 2077->2276->2088 MB, 2077 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 447 @1766.849s 19%: 0.088+2295+0.027 ms clock, 0.26+407/1731/1924+0.081 ms cpu, 2088->2194->2060 MB, 2088 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 448 @1769.239s 19%: 10+2018+0.004 ms clock, 32+155/1522/2167+0.014 ms cpu, 2060->2264->2094 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 449 @1771.368s 19%: 0.067+2502+0.007 ms clock, 0.20+804/1895/1521+0.022 ms cpu, 2094->2210->2069 MB, 2094 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 450 @1774.050s 19%: 0.11+1876+0.018 ms clock, 0.35+60/1411/2352+0.056 ms cpu, 2069->2239->2060 MB, 2069 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 451 @1776.030s 19%: 0.11+2727+0.006 ms clock, 0.34+910/2060/1317+0.018 ms cpu, 2060->2215->2102 MB, 2060 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 452 @1778.878s 19%: 0.084+1915+0.034 ms clock, 0.25+30/1450/2560+0.10 ms cpu, 2102->2245->2034 MB, 2102 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 453 @1780.903s 19%: 0.10+2821+0.006 ms clock, 0.31+1173/2135/1023+0.019 ms cpu, 2034->2215->2115 MB, 2034 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 454 @1783.887s 19%: 0.11+2111+0.017 ms clock, 0.33+16/1595/2706+0.053 ms cpu, 2115->2287->2062 MB, 2115 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 455 @1786.119s 19%: 0.096+2793+0.006 ms clock, 0.28+1191/2101/1163+0.018 ms cpu, 2062->2219->2103 MB, 2062 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 456 @1789.105s 19%: 0.079+1978+0.021 ms clock, 0.23+147/1492/2252+0.063 ms cpu, 2103->2292->2079 MB, 2103 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 457 @1791.205s 19%: 6.7+2635+0.005 ms clock, 20+919/1989/1356+0.017 ms cpu, 2079->2216->2091 MB, 2079 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 458 @1793.968s 19%: 0.066+2000+0.021 ms clock, 0.19+29/1500/2509+0.063 ms cpu, 2091->2265->2064 MB, 2091 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 459 @1796.095s 19%: 0.14+2771+0.006 ms clock, 0.43+1105/2099/1152+0.019 ms cpu, 2064->2229->2110 MB, 2064 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 460 @1798.993s 19%: 0.074+2128+0.015 ms clock, 0.22+42/1615/2385+0.047 ms cpu, 2111->2311->2091 MB, 2110 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 461 @1801.334s 19%: 10+2475+0.005 ms clock, 30+678/1871/1280+0.015 ms cpu, 2091->2233->1959 MB, 2091 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 462 @1803.933s 19%: 0.069+1793+0.014 ms clock, 0.20+56/1354/2299+0.043 ms cpu, 1959->2094->1890 MB, 1959 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 463 @1805.816s 19%: 0.082+2554+0.024 ms clock, 0.24+1006/1929/1065+0.072 ms cpu, 1891->2077->1979 MB, 1893 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 464 @1808.507s 19%: 0.11+1540+0.005 ms clock, 0.34+9.8/1156/2731+0.015 ms cpu, 1979->1997->1776 MB, 1979 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 465 @1810.252s 19%: 0.12+2185+0.018 ms clock, 0.37+1481/1645/851+0.056 ms cpu, 1789->1984->1953 MB, 1820 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 466 @1812.635s 19%: 0.11+2195+0.014 ms clock, 0.34+258/1666/1848+0.043 ms cpu, 1953->2069->1935 MB, 1953 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 467 @1814.933s 19%: 0.083+1978+0.014 ms clock, 0.24+1015/1491/1212+0.043 ms cpu, 1935->2120->1939 MB, 1935 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 468 @1817.018s 19%: 0.064+2379+0.019 ms clock, 0.19+664/1786/1451+0.057 ms cpu, 1939->2073->1952 MB, 1939 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 469 @1819.505s 19%: 0.077+2008+0.018 ms clock, 0.23+400/1519/2002+0.054 ms cpu, 1952->2142->1945 MB, 1952 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 470 @1821.612s 19%: 0.11+2396+0.016 ms clock, 0.35+745/1801/1493+0.050 ms cpu, 1945->2061->1935 MB, 1945 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 471 @1824.114s 19%: 0.10+1944+0.022 ms clock, 0.30+228/1462/2225+0.068 ms cpu, 1935->2114->1933 MB, 1935 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 472 @1826.170s 19%: 0.19+2680+0.008 ms clock, 0.57+1068/2020/1030+0.024 ms cpu, 1934->2077->1961 MB, 1933 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 473 @1828.957s 19%: 0.096+1831+0.032 ms clock, 0.28+26/1383/2365+0.097 ms cpu, 1961->2106->1900 MB, 1961 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 474 @1830.881s 19%: 0.061+2622+0.022 ms clock, 0.18+809/1992/1177+0.068 ms cpu, 1902->2075->1974 MB, 1900 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 475 @1833.598s 19%: 0.068+1450+0.006 ms clock, 0.20+17/1100/2726+0.018 ms cpu, 1974->1995->1780 MB, 1974 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 476 @1835.182s 19%: 7.2+2319+0.024 ms clock, 21+1503/1748/622+0.073 ms cpu, 1780->2000->1974 MB, 1837 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 477 @1837.710s 19%: 0.10+2187+0.023 ms clock, 0.31+193/1648/2072+0.070 ms cpu, 1974->2073->1918 MB, 1974 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 478 @1840.012s 19%: 0.092+2020+0.018 ms clock, 0.27+1225/1533/1071+0.055 ms cpu, 1918->2100->1937 MB, 1918 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 479 @1842.171s 19%: 2.7+2472+0.006 ms clock, 8.1+690/1862/1426+0.019 ms cpu, 1937->2078->1959 MB, 1937 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 480 @1844.758s 19%: 0.10+2090+0.020 ms clock, 0.31+881/1579/1455+0.062 ms cpu, 1959->2146->1942 MB, 1959 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 481 @1846.951s 19%: 0.069+2368+0.029 ms clock, 0.20+569/1804/1569+0.087 ms cpu, 1942->2074->1950 MB, 1942 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 482 @1849.472s 19%: 0.083+1911+0.014 ms clock, 0.24+181/1453/2030+0.042 ms cpu, 1950->2144->1949 MB, 1950 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 483 @1851.490s 19%: 0.094+2397+0.008 ms clock, 0.28+663/1805/1449+0.024 ms cpu, 1949->2059->1929 MB, 1949 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 484 @1853.988s 19%: 0.062+1911+0.020 ms clock, 0.18+25/1450/2438+0.060 ms cpu, 1929->2095->1920 MB, 1929 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 485 @1856.007s 19%: 0.061+2661+0.005 ms clock, 0.18+1085/2027/967+0.016 ms cpu, 1920->2083->1971 MB, 1920 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 486 @1858.865s 19%: 0.13+1831+0.020 ms clock, 0.39+24/1385/2458+0.061 ms cpu, 1971->2095->1880 MB, 1971 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 487 @1860.846s 19%: 0.098+2258+0.014 ms clock, 0.29+978/1700/1045+0.042 ms cpu, 1890->2059->1964 MB, 1880 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 488 @1863.213s 19%: 0.086+1606+0.006 ms clock, 0.26+127/1236/2259+0.018 ms cpu, 1965->2003->1798 MB, 1964 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 489 @1864.969s 19%: 0.069+2103+0.015 ms clock, 0.20+1120/1593/1210+0.045 ms cpu, 1798->1987->1945 MB, 1823 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 490 @1867.172s 20%: 0.10+2459+0.004 ms clock, 0.32+788/1843/1344+0.013 ms cpu, 1945->2074->1948 MB, 1945 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 491 @1869.730s 20%: 0.064+1958+0.017 ms clock, 0.19+656/1488/1578+0.053 ms cpu, 1948->2135->1942 MB, 1948 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 492 @1871.835s 20%: 0.14+2269+0.005 ms clock, 0.42+682/1713/1509+0.016 ms cpu, 1942->2059->1936 MB, 1942 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 493 @1874.217s 20%: 11+1989+0.023 ms clock, 34+192/1515/2158+0.071 ms cpu, 1936->2128->1947 MB, 1936 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 494 @1876.385s 20%: 0.086+2512+0.006 ms clock, 0.26+908/1895/1234+0.019 ms cpu, 1947->2078->1947 MB, 1947 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 495 @1879.004s 20%: 0.11+1672+0.027 ms clock, 0.35+27/1251/2256+0.081 ms cpu, 1947->2069->1879 MB, 1947 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 496 @1880.774s 20%: 0.068+2273+0.006 ms clock, 0.20+846/1706/1140+0.018 ms cpu, 1887->2058->1965 MB, 1880 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 497 @1883.146s 20%: 0.067+1720+0.014 ms clock, 0.20+180/1302/2425+0.043 ms cpu, 1965->2004->1798 MB, 1965 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 498 @1885.014s 20%: 0.065+2072+0.019 ms clock, 0.19+1274/1564/1069+0.059 ms cpu, 1798->1986->1943 MB, 1822 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 499 @1887.257s 20%: 0.11+2426+0.005 ms clock, 0.35+739/1829/1389+0.015 ms cpu, 1943->2074->1950 MB, 1943 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 500 @1889.800s 20%: 0.068+2028+0.031 ms clock, 0.20+796/1536/1467+0.093 ms cpu, 1950->2137->1942 MB, 1950 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 501 @1891.926s 20%: 0.083+2467+0.014 ms clock, 0.25+565/1857/1585+0.044 ms cpu, 1942->2080->1957 MB, 1942 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 502 @1894.497s 20%: 0.060+2014+0.014 ms clock, 0.18+389/1519/2027+0.042 ms cpu, 1957->2150->1948 MB, 1957 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 503 @1896.606s 20%: 0.091+2411+0.015 ms clock, 0.27+666/1810/1627+0.046 ms cpu, 1948->2059->1931 MB, 1948 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 504 @1899.123s 20%: 0.081+1909+0.015 ms clock, 0.24+140/1452/2186+0.046 ms cpu, 1931->2113->1938 MB, 1931 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 505 @1901.130s 20%: 0.10+2589+0.014 ms clock, 0.32+958/1946/1144+0.042 ms cpu, 1938->2078->1959 MB, 1938 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 506 @1903.830s 20%: 0.11+1766+0.018 ms clock, 0.35+0.063/1337/2490+0.055 ms cpu, 1959->2072->1871 MB, 1959 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 507 @1905.699s 20%: 0.080+2367+0.017 ms clock, 0.24+837/1785/1247+0.052 ms cpu, 1883->2060->1968 MB, 1891 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 508 @1908.166s 20%: 0.080+1746+0.005 ms clock, 0.24+174/1341/2439+0.017 ms cpu, 1968->2002->1856 MB, 1968 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 509 @1910.032s 20%: 0.065+2007+0.017 ms clock, 0.19+1269/1518/1059+0.053 ms cpu, 1856->2033->1933 MB, 1856 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 510 @1912.154s 20%: 0.11+2496+0.005 ms clock, 0.35+647/1886/1449+0.016 ms cpu, 1934->2080->1966 MB, 1933 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 511 @1914.840s 20%: 0.082+1979+0.020 ms clock, 0.24+929/1491/1270+0.062 ms cpu, 1966->2153->1943 MB, 1966 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 512 @1916.936s 20%: 0.097+2492+0.017 ms clock, 0.29+733/1882/1462+0.053 ms cpu, 1943->2076->1953 MB, 1943 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 513 @1919.535s 20%: 0.074+1955+0.015 ms clock, 0.22+367/1480/1959+0.046 ms cpu, 1953->2143->1947 MB, 1953 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 514 @1921.593s 20%: 0.069+2393+0.004 ms clock, 0.20+588/1815/1637+0.014 ms cpu, 1947->2060->1932 MB, 1947 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 515 @1924.110s 20%: 0.065+1990+0.022 ms clock, 0.19+188/1489/2272+0.067 ms cpu, 1932->2119->1942 MB, 1932 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 516 @1926.216s 20%: 5.1+2642+0.007 ms clock, 15+849/1991/1266+0.021 ms cpu, 1942->2079->1956 MB, 1942 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 517 @1928.956s 20%: 0.081+1884+0.035 ms clock, 0.24+25/1433/2510+0.10 ms cpu, 1956->2098->1899 MB, 1956 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 518 @1930.960s 20%: 0.064+2503+0.023 ms clock, 0.19+858/1888/1044+0.069 ms cpu, 1901->2077->1977 MB, 1899 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 519 @1933.557s 20%: 0.11+1507+0.005 ms clock, 0.34+16/1153/2786+0.017 ms cpu, 1977->1998->1781 MB, 1977 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 520 @1935.270s 20%: 0.085+2067+0.014 ms clock, 0.25+1345/1549/936+0.043 ms cpu, 1819->1993->1945 MB, 1832 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 521 @1937.434s 20%: 0.11+2284+0.018 ms clock, 0.35+455/1729/1754+0.056 ms cpu, 1945->2050->1926 MB, 1945 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 522 @1939.820s 20%: 0.10+2085+0.019 ms clock, 0.31+967/1578/1419+0.058 ms cpu, 1926->2110->1940 MB, 1926 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 523 @1942.096s 20%: 0.082+2494+0.006 ms clock, 0.24+680/1882/1374+0.020 ms cpu, 1940->2087->1966 MB, 1940 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 524 @1944.719s 20%: 0.076+2121+0.005 ms clock, 0.23+859/1598/1536+0.015 ms cpu, 1966->2154->1944 MB, 1966 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 525 @1946.947s 20%: 0.081+2519+0.005 ms clock, 0.24+604/1905/1516+0.015 ms cpu, 1944->2083->1959 MB, 1944 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 526 @1949.565s 20%: 0.098+1888+0.016 ms clock, 0.29+322/1428/1963+0.049 ms cpu, 1959->2150->1948 MB, 1959 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 527 @1951.557s 20%: 2.9+2297+0.005 ms clock, 8.9+591/1738/1457+0.016 ms cpu, 1948->2063->1936 MB, 1948 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 528 @1953.965s 20%: 0.070+1742+0.015 ms clock, 0.21+20/1315/2283+0.045 ms cpu, 1936->2069->1891 MB, 1936 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 529 @1955.800s 20%: 0.061+2560+0.021 ms clock, 0.18+980/1927/982+0.064 ms cpu, 1892->2081->1984 MB, 1899 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 530 @1958.562s 20%: 0.085+1457+0.014 ms clock, 0.25+28/1106/2569+0.043 ms cpu, 1984->2003->1780 MB, 1984 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 531 @1960.144s 20%: 11+2225+0.076 ms clock, 35+1535/1687/726+0.23 ms cpu, 1780->1989->1966 MB, 1821 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 532 @1962.490s 20%: 0.081+2236+0.004 ms clock, 0.24+368/1693/1779+0.014 ms cpu, 1966->2080->1936 MB, 1966 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 533 @1964.834s 20%: 0.11+2061+0.014 ms clock, 0.35+945/1554/1365+0.044 ms cpu, 1936->2121->1942 MB, 1936 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 534 @1966.994s 20%: 0.067+2510+0.005 ms clock, 0.20+701/1894/1410+0.015 ms cpu, 1942->2081->1960 MB, 1942 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 535 @1981.167s 20%: 0.10+2179+0.027 ms clock, 0.32+73/1652/2285+0.083 ms cpu, 2850->2905->1877 MB, 3225 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 536 @1998.153s 20%: 0.11+1467+0.019 ms clock, 0.35+12/1108/2453+0.057 ms cpu, 2828->2859->1791 MB, 3228 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 537 @2020.353s 20%: 0.14+1594+0.020 ms clock, 0.42+1729/1216/830+0.061 ms cpu, 3180->3228->1853 MB, 3227 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 538 @2035.467s 20%: 0.10+2377+0.007 ms clock, 0.32+5.1/1807/2116+0.021 ms cpu, 2817->2943->1948 MB, 3224 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 539 @2055.429s 19%: 0.15+2196+0.020 ms clock, 0.45+1331/1655/943+0.061 ms cpu, 3093->3226->1951 MB, 3226 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 540 @2070.872s 19%: 0.10+2188+0.005 ms clock, 0.31+2.5/1663/2290+0.016 ms cpu, 2845->2918->1899 MB, 3224 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 541 @2094.198s 19%: 0.16+1462+0.008 ms clock, 0.49+1994/1104/609+0.024 ms cpu, 3217->3228->1776 MB, 3229 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 542 @2110.437s 19%: 0.13+2397+0.006 ms clock, 0.41+10/1809/2134+0.018 ms cpu, 2870->3016->1960 MB, 3224 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 543 @2130.174s 19%: 0.18+2549+0.021 ms clock, 0.54+928/1920/1116+0.065 ms cpu, 2961->3220->2022 MB, 3225 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 544 @2145.484s 19%: 0.11+2415+0.006 ms clock, 0.35+10/1835/2158+0.019 ms cpu, 2866->2994->1951 MB, 3224 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 545 @2165.965s 19%: 0.11+1883+0.018 ms clock, 0.35+1732/1422/614+0.054 ms cpu, 3171->3223->1879 MB, 3225 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 546 @2182.157s 19%: 0.18+1619+0.007 ms clock, 0.56+10/1230/2777+0.023 ms cpu, 2824->2853->1793 MB, 3223 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 547 @2205.407s 18%: 0.14+1430+0.019 ms clock, 0.42+1946/1082/655+0.059 ms cpu, 3211->3228->1835 MB, 3228 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 548 @2220.729s 18%: 0.10+2229+0.005 ms clock, 0.31+1.9/1679/2194+0.016 ms cpu, 2841->2930->1916 MB, 3222 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 549 @2240.699s 18%: 0.13+2295+0.007 ms clock, 0.39+647/1729/1680+0.023 ms cpu, 3093->3185->1919 MB, 3226 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 550 @2257.153s 18%: 0.14+1541+0.006 ms clock, 0.42+12/1181/2609+0.019 ms cpu, 2871->2900->1794 MB, 3225 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 551 @2280.318s 18%: 0.16+1672+0.021 ms clock, 0.49+2085/1259/565+0.064 ms cpu, 3181->3229->1855 MB, 3228 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 552 @2295.435s 18%: 0.096+2311+0.005 ms clock, 0.29+8.0/1743/2016+0.016 ms cpu, 2819->2957->1958 MB, 3226 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 553 @2315.415s 18%: 0.098+2087+0.016 ms clock, 0.29+1410/1571/949+0.049 ms cpu, 3106->3227->1936 MB, 3227 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 554 @2330.796s 18%: 0.13+2198+0.007 ms clock, 0.41+2.0/1678/2292+0.021 ms cpu, 2840->2918->1906 MB, 3223 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 555 @2355.157s 17%: 0.14+1563+0.005 ms clock, 0.43+2563/1176/166+0.017 ms cpu, 3220->3251->1797 MB, 3228 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 556 @2370.290s 17%: 0.14+2537+0.005 ms clock, 0.42+95/1927/1793+0.017 ms cpu, 2800->3004->2004 MB, 3225 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 557 @2390.279s 17%: 0.13+2100+0.022 ms clock, 0.41+1285/1581/1110+0.067 ms cpu, 3062->3228->1961 MB, 3228 MB goal, 30 MB stacks, 0 MB globals, 3 P
gc 558 @2400.967s 17%: 0.11+2607+0.017 ms clock, 0.33+1097/1961/1220+0.052 ms cpu, 2836->3035->2149 MB, 3230 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 559 @2415.413s 17%: 0.14+2536+0.018 ms clock, 0.43+24/1915/2279+0.054 ms cpu, 2966->3114->2165 MB, 3223 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 560 @2430.173s 17%: 0.17+3155+0.025 ms clock, 0.52+742/2378/1340+0.075 ms cpu, 2872->3163->2242 MB, 3215 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 561 @2445.183s 17%: 0.14+2823+0.006 ms clock, 0.44+778/2123/1262+0.018 ms cpu, 2926->3196->2224 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 562 @2459.157s 17%: 0.13+2369+0.020 ms clock, 0.39+54/1784/2393+0.060 ms cpu, 2893->3126->2183 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 563 @2473.203s 17%: 0.17+1644+0.006 ms clock, 0.53+19/1242/2902+0.020 ms cpu, 2909->2935->1982 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 564 @2490.750s 17%: 0.11+2056+0.019 ms clock, 0.34+2265/1547/472+0.059 ms cpu, 3153->3225->2090 MB, 3223 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 565 @2505.255s 17%: 0.14+2835+0.008 ms clock, 0.43+371/2121/1797+0.026 ms cpu, 2863->3090->2205 MB, 3220 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 566 @2520.230s 17%: 0.14+2782+0.008 ms clock, 0.43+745/2101/1347+0.025 ms cpu, 2927->3168->2210 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 567 @2534.223s 17%: 0.13+2403+0.019 ms clock, 0.41+12/1819/2416+0.058 ms cpu, 2888->3117->2183 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 568 @2548.205s 17%: 0.11+1645+0.006 ms clock, 0.35+47/1246/2899+0.019 ms cpu, 2909->2934->1983 MB, 3217 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 569 @2565.310s 16%: 0.13+2504+0.022 ms clock, 0.41+1520/1882/848+0.066 ms cpu, 3051->3227->2173 MB, 3223 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 570 @2579.674s 16%: 0.11+3047+0.006 ms clock, 0.34+0.96/2298/1951+0.019 ms cpu, 2886->3179->2249 MB, 3218 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 571 @2593.018s 16%: 0.11+1692+0.005 ms clock, 0.34+2.4/1288/2799+0.015 ms cpu, 2912->2954->2004 MB, 3203 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 572 @2606.316s 16%: 0.12+2140+0.005 ms clock, 0.37+5.1/1608/2616+0.017 ms cpu, 2964->3010->2008 MB, 3222 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 573 @2622.167s 16%: 0.12+1606+0.006 ms clock, 0.38+81/1218/2689+0.018 ms cpu, 2963->2994->1991 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 574 @2640.219s 16%: 0.13+2459+0.029 ms clock, 0.41+1427/1855/873+0.089 ms cpu, 3014->3225->2184 MB, 3222 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 575 @2654.175s 16%: 0.14+2123+0.016 ms clock, 0.44+60/1598/2386+0.050 ms cpu, 2892->3107->2176 MB, 3215 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 576 @2670.154s 16%: 0.12+2786+0.070 ms clock, 0.38+1229/2105/830+0.21 ms cpu, 2942->3222->2239 MB, 3222 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 577 @2682.223s 16%: 0.16+1611+0.005 ms clock, 0.49+17/1243/2884+0.016 ms cpu, 2887->2910->1990 MB, 3168 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 578 @2695.705s 16%: 0.095+2302+0.005 ms clock, 0.28+1.6/1729/2416+0.016 ms cpu, 2852->2944->2120 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 579 @2710.312s 16%: 0.13+2748+0.006 ms clock, 0.41+277/2073/1942+0.019 ms cpu, 2892->3090->2201 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 580 @2725.390s 16%: 0.21+2632+0.006 ms clock, 0.63+852/1994/1457+0.018 ms cpu, 3004->3171->2182 MB, 3223 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 581 @2740.211s 16%: 0.16+2904+0.006 ms clock, 0.48+659/2195/1409+0.018 ms cpu, 2897->3161->2240 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 582 @2751.194s 16%: 0.099+2016+0.017 ms clock, 0.29+32/1535/2400+0.051 ms cpu, 2890->2929->2009 MB, 3183 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 583 @2770.327s 16%: 0.15+1929+0.016 ms clock, 0.47+1631/1459/918+0.049 ms cpu, 3110->3225->2127 MB, 3222 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 584 @2782.219s 15%: 0.14+1549+0.005 ms clock, 0.43+7.8/1174/2792+0.017 ms cpu, 2873->2894->1993 MB, 3193 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 585 @2795.717s 15%: 0.12+2387+0.005 ms clock, 0.36+2.8/1804/2504+0.016 ms cpu, 2852->2942->2123 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 586 @2810.305s 15%: 0.14+2684+0.007 ms clock, 0.43+149/2015/2031+0.022 ms cpu, 2891->3092->2207 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 587 @2825.346s 15%: 0.16+2720+0.006 ms clock, 0.49+1091/2060/1263+0.019 ms cpu, 3002->3185->2198 MB, 3223 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 588 @2840.194s 15%: 0.12+2906+0.005 ms clock, 0.38+804/2185/1290+0.017 ms cpu, 2898->3162->2240 MB, 3220 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 589 @2854.158s 15%: 0.19+1975+0.025 ms clock, 0.59+93/1495/2339+0.077 ms cpu, 2910->3109->2166 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 590 @2866.388s 15%: 0.12+2072+0.007 ms clock, 0.36+0.86/1581/2672+0.022 ms cpu, 2902->2942->2014 MB, 3219 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 591 @2885.474s 15%: 0.090+1969+0.017 ms clock, 0.27+1948/1490/813+0.051 ms cpu, 3159->3227->2098 MB, 3224 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 592 @2900.187s 15%: 0.13+2924+0.005 ms clock, 0.41+654/2221/1421+0.016 ms cpu, 2868->3134->2242 MB, 3215 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 593 @2913.208s 15%: 0.13+1615+0.006 ms clock, 0.41+18/1228/2920+0.018 ms cpu, 2899->2924->2000 MB, 3184 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 594 @2930.364s 15%: 0.15+2235+0.027 ms clock, 0.47+1902/1687/682+0.082 ms cpu, 3110->3228->2142 MB, 3224 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 595 @2943.178s 15%: 0.12+1616+0.005 ms clock, 0.38+20/1227/2831+0.017 ms cpu, 2876->2907->2005 MB, 3195 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 596 @2955.694s 15%: 0.11+2406+0.005 ms clock, 0.34+0.69/1814/2479+0.017 ms cpu, 2853->2949->2133 MB, 3218 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 597 @2970.283s 15%: 0.16+2796+0.006 ms clock, 0.49+584/2122/1687+0.020 ms cpu, 2892->3105->2220 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 598 @2984.210s 15%: 0.13+2379+0.017 ms clock, 0.40+18/1792/2469+0.052 ms cpu, 2898->3120->2196 MB, 3220 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 599 @2996.058s 15%: 0.11+2474+0.007 ms clock, 0.33+141/1856/2444+0.021 ms cpu, 2897->2980->2123 MB, 3213 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 600 @3015.168s 15%: 0.12+1731+0.056 ms clock, 0.38+2622/1299/211+0.16 ms cpu, 3164->3226->2038 MB, 3228 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 601 @3025.781s 15%: 0.12+2351+0.028 ms clock, 0.38+3.3/1778/2474+0.085 ms cpu, 2849->2943->2132 MB, 3205 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 602 @3040.235s 15%: 0.14+2818+0.025 ms clock, 0.42+449/2122/1661+0.077 ms cpu, 2893->3126->2228 MB, 3221 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 603 @3053.658s 15%: 0.12+1631+0.018 ms clock, 0.37+0.99/1230/2840+0.054 ms cpu, 2889->2937->2024 MB, 3182 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 604 @3075.566s 14%: 0.11+2524+0.007 ms clock, 0.34+2.3/1913/2475+0.021 ms cpu, 3456->3576->2158 MB, 4061 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 605 @3100.737s 14%: 0.12+2453+0.022 ms clock, 0.36+0.81/1857/2464+0.068 ms cpu, 3684->3782->2138 MB, 4329 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 606 @3125.722s 14%: 0.12+2336+0.006 ms clock, 0.36+0.83/1776/2452+0.019 ms cpu, 3650->3740->2130 MB, 4288 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 607 @3153.221s 14%: 0.12+1636+0.006 ms clock, 0.38+0.26/1252/3006+0.018 ms cpu, 3766->3787->2000 MB, 4273 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 608 @3175.403s 14%: 0.11+2564+0.005 ms clock, 0.35+17/1929/2252+0.016 ms cpu, 3416->3567->2187 MB, 4014 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 609 @3202.451s 14%: 0.10+1479+0.004 ms clock, 0.31+0.79/1115/2867+0.013 ms cpu, 3802->3819->1997 MB, 4386 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 610 @3225.308s 14%: 0.17+2742+0.005 ms clock, 0.53+87/2069/2172+0.017 ms cpu, 3410->3608->2214 MB, 4007 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 611 @3250.971s 14%: 0.11+2284+0.025 ms clock, 0.34+0.87/1727/2557+0.076 ms cpu, 3780->3854->2116 MB, 4441 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 612 @3275.462s 14%: 0.10+2466+0.006 ms clock, 0.32+8.2/1867/2229+0.019 ms cpu, 3613->3755->2179 MB, 4245 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 613 @3300.745s 14%: 0.10+2287+0.005 ms clock, 0.32+0.84/1713/2402+0.016 ms cpu, 3741->3831->2132 MB, 4371 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 614 @3325.673s 14%: 0.11+2431+0.005 ms clock, 0.33+9.2/1828/2434+0.016 ms cpu, 3641->3753->2154 MB, 4278 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 615 @3354.174s 13%: 0.16+2389+0.016 ms clock, 0.49+19/1797/2489+0.050 ms cpu, 3809->4036->2206 MB, 4321 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 616 @3385.552s 13%: 0.093+2514+0.006 ms clock, 0.27+3.6/1886/2504+0.018 ms cpu, 4089->4205->2160 MB, 4425 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 617 @3418.160s 13%: 0.13+1644+0.006 ms clock, 0.40+48/1257/2835+0.018 ms cpu, 4089->4119->2011 MB, 4332 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 618 @3446.189s 13%: 0.10+2260+0.014 ms clock, 0.32+135/1719/2384+0.043 ms cpu, 3871->3919->2031 MB, 4035 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 619 @3475.586s 13%: 0.12+2424+0.006 ms clock, 0.36+13/1838/2397+0.020 ms cpu, 3857->3969->2157 MB, 4075 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 620 @3505.873s 13%: 0.11+2439+0.013 ms clock, 0.35+6.0/1838/2503+0.041 ms cpu, 4041->4129->2135 MB, 4326 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 621 @3538.210s 13%: 0.16+1632+0.006 ms clock, 0.49+34/1244/2920+0.019 ms cpu, 4091->4114->2009 MB, 4284 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 622 @3566.165s 13%: 0.13+2254+0.017 ms clock, 0.40+239/1698/2263+0.053 ms cpu, 3890->3945->2040 MB, 4031 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 623 @3595.814s 13%: 0.12+2291+0.005 ms clock, 0.37+93/1727/2452+0.017 ms cpu, 3878->3954->2125 MB, 4093 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 624 @3625.819s 13%: 0.11+2471+0.034 ms clock, 0.33+309/1869/2235+0.10 ms cpu, 4070->4164->2167 MB, 4263 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 625 @3658.197s 12%: 0.15+1635+0.005 ms clock, 0.47+22/1231/2918+0.016 ms cpu, 4094->4117->2034 MB, 4348 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 626 @3689.144s 12%: 0.15+1804+0.009 ms clock, 0.46+1381/1363/1576+0.028 ms cpu, 4013->4081->2077 MB, 4081 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 627 @3715.231s 12%: 0.15+3034+0.016 ms clock, 0.47+203/2302/1950+0.048 ms cpu, 3755->4003->2276 MB, 4168 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 628 @3750.151s 12%: 0.16+3012+0.019 ms clock, 0.48+845/2264/1261+0.059 ms cpu, 4252->4540->2295 MB, 4564 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 629 @3780.378s 12%: 0.10+2578+0.005 ms clock, 0.32+32/1933/2237+0.015 ms cpu, 4054->4226->2233 MB, 4602 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 630 @3815.250s 12%: 0.13+2363+0.16 ms clock, 0.41+1726/1776/873+0.50 ms cpu, 4309->4476->2208 MB, 4478 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 631 @3840.614s 12%: 0.13+2349+0.006 ms clock, 0.39+1.3/1776/2382+0.020 ms cpu, 3769->3876->2187 MB, 4429 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 632 @3866.005s 12%: 0.13+2322+0.024 ms clock, 0.39+0.85/1749/2590+0.073 ms cpu, 3733->3806->2155 MB, 4386 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 633 @3895.287s 12%: 0.14+2808+0.006 ms clock, 0.44+136/2127/2133+0.020 ms cpu, 3852->4061->2261 MB, 4322 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 634 @3927.862s 12%: 0.10+1671+0.005 ms clock, 0.32+3.4/1245/2888+0.017 ms cpu, 4217->4250->2058 MB, 4535 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 635 @3959.219s 12%: 0.13+1900+0.004 ms clock, 0.41+240/1435/2337+0.014 ms cpu, 3986->4129->2168 MB, 4129 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 636 @3985.644s 12%: 0.12+2557+0.023 ms clock, 0.38+0.86/1937/2512+0.070 ms cpu, 3812->3923->2199 MB, 4348 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 637 @4020.524s 11%: 0.11+2019+0.022 ms clock, 0.34+1822/1521/783+0.066 ms cpu, 4323->4412->2177 MB, 4412 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 638 @4046.655s 11%: 0.13+1878+0.005 ms clock, 0.40+1.2/1418/2723+0.015 ms cpu, 3797->3831->2062 MB, 4367 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 639 @4070.454s 11%: 0.10+2494+0.006 ms clock, 0.32+4.5/1870/2388+0.019 ms cpu, 3521->3656->2220 MB, 4137 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 640 @4100.181s 11%: 0.15+3041+0.018 ms clock, 0.47+197/2300/1886+0.055 ms cpu, 3867->4144->2307 MB, 4453 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 641 @4135.123s 11%: 0.16+3112+0.018 ms clock, 0.50+949/2337/1129+0.056 ms cpu, 4320->4622->2328 MB, 4627 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 642 @4165.157s 11%: 0.13+3050+0.020 ms clock, 0.39+208/2296/1820+0.060 ms cpu, 4029->4326->2322 MB, 4670 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 643 @4200.185s 11%: 0.14+3149+0.016 ms clock, 0.44+901/2372/1137+0.050 ms cpu, 4311->4603->2327 MB, 4658 MB goal, 30 MB stacks, 0 MB globals, 3 P

Separately, @mknyszek - is a runtime switch for gctrace at all feasible? I'd then be able to collect traces from an actual production service easily.
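
In the meantime, a rough approximation that doesn't require restarting the process with GODEBUG=gctrace=1 is to sample runtime/metrics from inside it. A minimal sketch (metric names assume Go 1.20+, and this obviously lacks the per-cycle detail of gctrace):

package main

import (
	"fmt"
	"runtime/metrics"
	"time"
)

// Periodically sample a few GC-related runtime/metrics as a coarse,
// always-available stand-in for gctrace output.
func main() {
	samples := []metrics.Sample{
		{Name: "/gc/cycles/total:gc-cycles"},
		{Name: "/gc/heap/goal:bytes"},
		{Name: "/memory/classes/total:bytes"},
		{Name: "/cpu/classes/gc/total:cpu-seconds"}, // added in Go 1.20
	}
	for range time.Tick(10 * time.Second) {
		metrics.Read(samples)
		fmt.Printf("cycles=%d goal=%dMiB mapped=%dMiB gc-cpu=%.1fs\n",
			samples[0].Value.Uint64(),
			samples[1].Value.Uint64()>>20,
			samples[2].Value.Uint64()>>20,
			samples[3].Value.Float64())
	}
}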

@rabbbit
Author

rabbbit commented Feb 1, 2023

More samples with the gctraces running overnight - https://gist.github.com/rabbbit/b6419fd71e42e95fffe35476c1f8cf85 - after 24 hours my override expired and we reset to GOGC=100.
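
For context, applying and expiring an override like this at runtime goes through the runtime/debug setters. A minimal sketch of such a temporary override (the limit value and the restore policy are illustrative, not our actual tooling):

package override

import (
	"runtime/debug"
	"time"
)

// Apply sets a soft memory limit with GOGC disabled and restores the previous
// settings after ttl - roughly what "override expired, reset to GOGC=100"
// means above, assuming the defaults were in effect beforehand.
func Apply(limitBytes int64, ttl time.Duration) {
	prevLimit := debug.SetMemoryLimit(limitBytes) // e.g. ~95% of the container's memory
	prevGOGC := debug.SetGCPercent(-1)            // GOGC=off, rely on the limit alone
	time.AfterFunc(ttl, func() {
		debug.SetGCPercent(prevGOGC)    // typically 100, the default
		debug.SetMemoryLimit(prevLimit) // math.MaxInt64 unless a limit was already set
	})
}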

Traces cover the whole period below:

[monitoring graphs for the period covered by the traces]

The gist includes the spike I caused manually (as in, it includes the traces I linked earlier yesterday)

@rabbbit
Copy link
Author

rabbbit commented Feb 2, 2023

https://gist.github.com/rabbbit/db4d79875b6b1707b3caad3e19cf2d7e might be more interesting since we actually got to the degraded state for hours.

Looks like the memory grew by ~150 MB in the two hours we were degraded, so around 4% of the limit. Thus setting MEMLIMIT at 95% of the container would mean a 2-hour degradation before OOM.

image

image

The 10-minute CPU utilization reduction every 40 minutes looks interesting, but I cannot explain it right now. A mirror system with the same code and inputs does not show the same behavior (at lower CPU utilization), but then I also don't expect GC to have any 40-minute cycles, so it must be something in our system that I don't understand.

@thepudds
Copy link
Contributor

thepudds commented Feb 2, 2023

In the earlier issue, @cdvr1993 wrote in #48409 (comment):

would you suggest it is better to wait for #56857 to have a way to keep an eye on the size of live bytes, so that when it gets close to the soft limit make a decision of either eat the cost of GC or just OOM?

And @zephyrtronium also later suggested a similar strategy above.

It does seem that it is a reasonable choice for some applications that are using GOMEMLIMIT for the process to monitor its own memory usage and terminate itself if needed, especially when there is a concern about memory leaks or otherwise as a safeguard. (Or alternatively, monitoring memory to then make a decision about releasing some "nice-to-have memory" or otherwise reacting, though a non-terminal reaction might not really be a choice for most apps, as hinted at by some of the older discussion in #29696 and elsewhere).

Question: what metric should be used if an application does want to avoid being stuck in a high-CPU usage degraded state for an extended period when close to GOMEMLIMIT but not yet "rescued" by the kernel's OOM killer?

Some seemingly imperfect candidates:

  • #56857 is adding /gc/heap/live:bytes (or whatever it ends up being named), which is helpful, but it does not represent all memory usage. The GOMEMLIMIT implementation spent some effort to not just respond to heap memory but instead be more comprehensive (which is nice, and beyond what most popular runtimes do in other languages), and it would be nice if an application could get a similar level of visibility beyond just heap usage.

  • There are existing runtime/metrics that capture the additional memory, but as I understand it, those are not time-aligned with the WIP /gc/heap/live:bytes from #56857, which means it might not be meaningful to combine them with /gc/heap/live:bytes. You can try to do a forEachGC-style notification using finalizers, but finalizers also don't run immediately after a GC, so grabbing additional metrics based on finalizer execution also has some time skew, I think?

  • Go 1.20 introduces some additional GC CPU-related metrics, but those alone don't seem sufficient either (though maybe some applications might want to use them as an "AND" condition with a more direct measure of memory usage).

  • RSS or other OS-specific metrics can be tricky to interpret (e.g., is RSS being reduced due to memory pressure?), and of course are not consistent across OSes.

Would it make sense to have an additional metric that represents the rest of memory that is time-aligned with the value in the WIP /gc/heap/live:bytes, or maybe reporting how much GOMEMLIMIT headroom is left as of last GC, or some type of "effective GOGC" metric, or _____?

In other words, maybe the general solution to this issue could be more observability, which might reduce or eliminate the desire for another GC knob.

That said, maybe the metrics question is off base or already has a simple answer?

@rabbbit
Copy link
Author

rabbbit commented Feb 2, 2023

Skipping the metric question (for now), leaving another batch of gctraces. On the bright side, while it looks like GC is using 100% of the CPU, the cgroup nr_throttled stats stopped degrading further.

image

image

The gc-traces do show a continued increase of CPU usage (if I read this correctly), but it's increasing at a slower rate: https://gist.github.com/rabbbit/eebd814e926345298e008289f2e92675.

I would have attempted to run the same service with more cores to see if GC usage eventually tops out, but I'll be away for a week. I can run this then.

@mknyszek mknyszek added this to the Backlog milestone Feb 8, 2023
@rabbbit
Copy link
Author

rabbbit commented Mar 8, 2023

Hey @mknyszek,

Is there anything you need from us? I posted a few gc-traces in the above comments - do they help diagnose what's happening exactly?

@thepudds
Copy link
Contributor

thepudds commented Mar 13, 2023

Hi @rabbbit, I looked over some of the example graphs and traces you posted, but I was worried I might have misunderstood some of the time correlation between some of the descriptive text vs. the data, so I thought it might be easier to look at a simplified example.

I have a heap benchmark at thepudds/heapbench. I configured it with a slow leak and GOMEMLIMIT using Go 1.20.1 on Linux, with results below. Of course, this won't exactly reproduce what you reported, but maybe a simple example could be useful for discussing with @mknyszek or whoever else.

For initial parameters, I looked at your example in #58106 (comment) in a time range around roughly ~2750 seconds (when things seem "happy"), and I imprecisely estimated some key parameters from your trace/graph as:

 GC cycle duration:  ~15 sec (avg)
         Live heap:  ~2000 MiB (avg)
          CPU util:  ~1.1 cores (avg)
        GOMAXPROCS:  3

I configured the benchmark to roughly match those parameters, plus a 1 MiB/sec leak with a 3 GiB GOMEMLIMIT. (That's an intentionally "fast" leak so that results come back sooner, but it would be easy to slow down the leak).

To start the benchmark:

$ go install github.com/thepudds/heapbench@8aef70786
$ GOMAXPROCS=3 GOGC=-1 GOMEMLIMIT=3GiB GODEBUG=gctrace=1 \
  heapbench -baseheap=2000 -garbagerate=42 -leakrate=1 -jobrate=100 -worktime=9ms

The memory-related arguments are in MiB or MiB/sec. (The last two arguments configure the benchmark to use ~90% of a core for pure CPU work: 100 jobs per sec X 9ms CPU per job on average).

The summary is you can see it run initially with ~110% CPU utilization (~90% from pure CPU work, the rest mostly from allocation & GC). It then climbs towards 300% CPU utilization as the leak pushes the live heap up towards the GOMEMLIMIT, which I think reproduces your general observation that GC used more CPU than you expected (though seems aligned with what @mknyszek expected as far as I understand):

heapbench: @63.497s rss: 3013.7 MiB, cpu: 109.0%   
heapbench: @123.497s rss: 3016.4 MiB, cpu: 118.6%                                               
heapbench: @183.498s rss: 3017.5 MiB, cpu: 119.8%                                               
heapbench: @243.497s rss: 3017.5 MiB, cpu: 126.8%                                               
heapbench: @303.497s rss: 3019.5 MiB, cpu: 128.2%                                               
heapbench: @363.498s rss: 3028.5 MiB, cpu: 134.4%                                               
heapbench: @423.497s rss: 3032.7 MiB, cpu: 134.5%                                               
heapbench: @483.498s rss: 3059.2 MiB, cpu: 149.6%                                               
heapbench: @543.497s rss: 3059.0 MiB, cpu: 153.5%                                               
heapbench: @603.507s rss: 3059.6 MiB, cpu: 158.7%                                               
heapbench: @663.516s rss: 3075.6 MiB, cpu: 183.9%                                               
heapbench: @723.498s rss: 3051.3 MiB, cpu: 197.4%                                               
heapbench: @783.500s rss: 3083.2 MiB, cpu: 259.5%                                               
heapbench: @843.551s rss: 3091.0 MiB, cpu: 299.1%  
heapbench: @903.505s rss: 3198.4 MiB, cpu: 300.6%   # leak pushed us past GOMEMLIMIT
heapbench: @963.504s rss: 3304.1 MiB, cpu: 300.8%
heapbench: @1023.498s rss: 3324.7 MiB, cpu: 300.5%

Here are some sample gctraces from that same run. After allocating the base heap and beginning the actual benchmark, it starts with a ~15 sec GC cycle and a bit more than 2000 MiB live heap and the process is using ~1.1 cores (which were the desired starting parameters from above):

gc 1 @20.012s 3%: 0.057+1129+0.022 ms clock, 0.17+1432/858/566+0.066 ms cpu, 2918->2945->2073 MB, 2916 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 2 @36.415s 3%: 0.086+1394+0.021 ms clock, 0.25+83/1059/1734+0.064 ms cpu, 2812->2881->2131 MB, 2910 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 3 @50.445s 2%: 0.053+1387+0.030 ms clock, 0.15+1.7/1047/1832+0.092 ms cpu, 2709->2774->2140 MB, 2907 MB goal, 0 MB stacks, 0 MB globals, 3 P

~500 seconds later, it's a ~5-6 sec GC cycle, the live heap is ~2650 MB, and the process is using ~1.5 cores:

gc 57 @535.692s 4%: 0.13+1611+0.005 ms clock, 0.39+355/1238/1968+0.017 ms cpu, 2835->2905->2639 MB, 2916 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 58 @541.389s 4%: 0.050+1653+0.018 ms clock, 0.15+36/1248/2259+0.055 ms cpu, 2836->2915->2653 MB, 2916 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 59 @546.834s 4%: 0.051+1624+0.021 ms clock, 0.15+1561/1222/879+0.065 ms cpu, 2840->2923->2663 MB, 2916 MB goal, 0 MB stacks, 0 MB globals, 3 P

At ~1000 seconds, it's a ~2 sec GC cycle, live heap is ~3150 MB, and the process is continually using 3 cores (all of GOMAXPROCS):

gc 244 @1032.089s 15%: 0.055+2058+0.030 ms clock, 0.16+24/1562/2821+0.091 ms cpu, 3147->3241->3144 MB, 3147 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 245 @1034.262s 15%: 0.053+2278+0.020 ms clock, 0.16+61/1720/2646+0.062 ms cpu, 3144->3257->3164 MB, 3144 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 246 @1036.639s 15%: 0.063+2035+0.022 ms clock, 0.18+25/1550/2898+0.066 ms cpu, 3164->3253->3144 MB, 3164 MB goal, 0 MB stacks, 0 MB globals, 3 P

In any event, your examples were more complex than that, but I'd be curious if (a) does that behavior shown in the benchmark align with your latest understanding of the current implementation, and (b) is there maybe one of your complex examples that you'd want to call out that doesn't seem to be explainable under that behavior? (E.g., for the one in #58106 (comment), it wasn't immediately obvious to me why it was using all 3 cores if the mem limit was 3GiB, but I also wasn't sure if I was tracking what changed when in that example).

@thepudds
Copy link
Contributor

For contrast, here's re-running that same benchmark except with GOGC=100 and no GOMEMLIMIT.

$ GOMAXPROCS=3 GOGC=100 GODEBUG=gctrace=1 \
  heapbench -baseheap=2000 -garbagerate=42 -leakrate=1 -jobrate=100 -worktime=9ms

The summary is you can see it run initially with ~100% CPU utilization (a little lower than the first benchmark above), and then as the leak progresses, the memory usage climbs but the CPU usage stays steady and the GC cycles take longer and longer, which I think is expected:

heapbench: @65.145s rss: 3760.1 MiB, cpu: 102.4%
heapbench: @125.145s rss: 4406.5 MiB, cpu: 105.2%
heapbench: @185.144s rss: 4512.0 MiB, cpu: 98.5%
heapbench: @245.145s rss: 4590.9 MiB, cpu: 98.7%
heapbench: @305.148s rss: 4772.6 MiB, cpu: 104.5%
heapbench: @365.144s rss: 4884.2 MiB, cpu: 104.4%
heapbench: @425.145s rss: 5011.6 MiB, cpu: 101.4%
heapbench: @485.145s rss: 5143.4 MiB, cpu: 98.9%
heapbench: @545.144s rss: 5192.5 MiB, cpu: 100.6%
heapbench: @605.145s rss: 5321.8 MiB, cpu: 98.3%
heapbench: @665.144s rss: 5412.2 MiB, cpu: 100.6%
heapbench: @725.148s rss: 5636.2 MiB, cpu: 103.1%
heapbench: @785.144s rss: 5786.2 MiB, cpu: 105.0%
heapbench: @845.144s rss: 5905.4 MiB, cpu: 101.1%
heapbench: @905.144s rss: 6030.8 MiB, cpu: 101.0%
heapbench: @965.151s rss: 6160.3 MiB, cpu: 108.8%
heapbench: @1025.144s rss: 6189.1 MiB, cpu: 92.0%

Some sample gctraces. After allocating the base heap and beginning the benchmark, it starts with a ~40 sec GC cycle and a bit more than 2000 MiB live heap:

gc 12 @37.584s 2%: 0.11+1338+0.012 ms clock, 0.34+2.2/1002/1904+0.038 ms cpu, 3549->3606->2118 MB, 4167 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 13 @76.706s 1%: 0.092+1308+0.017 ms clock, 0.27+0.59/985/1886+0.053 ms cpu, 3891->3946->2156 MB, 4237 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 14 @119.204s 1%: 0.064+1449+0.004 ms clock, 0.19+0.075/1095/1881+0.012 ms cpu, 4147->4220->2215 MB, 4312 MB goal, 0 MB stacks, 0 MB globals, 3 P

At ~1000 seconds, it's a ~60 sec GC cycle, live heap is ~3150 MB, but in contrast to the benchmark in the comment just above, now the process is using ~6 GiB RSS while still using ~1 core:

gc 30 @962.106s 0%: 0.12+1972+0.004 ms clock, 0.36+0.97/1498/2665+0.013 ms cpu, 5787->5887->3092 MB, 6045 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 31 @1025.203s 0%: 0.091+2107+0.022 ms clock, 0.27+3.6/1584/2752+0.067 ms cpu, 5917->6012->3151 MB, 6185 MB goal, 0 MB stacks, 0 MB globals, 3 P
gc 32 @1085.990s 0%: 0.12+2012+0.017 ms clock, 0.38+4.2/1523/2921+0.052 ms cpu, 6028->6113->3211 MB, 6302 MB goal, 0 MB stacks, 0 MB globals, 3 P

Finally, here's running the benchmark with zeros for the memory allocation parameters, just to see the pure CPU work:

$ GOMAXPROCS=3 GOGC=100 GODEBUG=gctrace=1 \
  heapbench -baseheap=0 -garbagerate=0 -leakrate=0 -jobrate=100 -worktime=9ms 
heapbench: @60.000s rss: 3.2 MiB, cpu: 90.6%                                                    
heapbench: @120.001s rss: 3.2 MiB, cpu: 91.5%                                                   
heapbench: @180.001s rss: 3.7 MiB, cpu: 90.9%
...

@rabbbit
Copy link
Author

rabbbit commented Mar 21, 2023

Hey @thepudds - thank you for your investigation, and sorry for the delay in responding.

Yes, I think your tests reproduce the case. The surprising bits were:

  • we were expecting the GC to take no more than 50% of the CPU time - here it looks like it's taking 2/3 of the time, and potentially more - with a higher GOMAXPROCS it would likely keep climbing?
  • (to a smaller extent, not blocking us) the non-linear nature of the increase. In your first benchmark, the absolute increase between @723.498s and @783.500s is as high as between the start and @603.507.

The problem is that if:

  • we have an application with a large memory limit (10-16 GiB is common),
  • GOMEMLIMIT is lower than the actual physical limit (95% or even 98%), and
  • the leak is slow enough,

then we would end up in the "degraded state" for hours or days before the application gets OOMed.

Our services strongly prefer OOM (owners are used to this) to long performance degradation. Thus we (currently) need a tool to detect excessive GC usage & update either GOGC (currently) or GOMEMLIMIT (in the future) to "relax" the constraints and let the application be killed faster.

@mknyszek
Copy link
Contributor

mknyszek commented Apr 11, 2023

Sorry for the long delayed reply here, I got caught up with a whole bunch of other things and kept putting off catching up on this. I've responded to a few of the discussion points below, but please correct me if I got anything wrong!

@thepudds Thanks for the smaller reproducer! That's super helpful, especially for experimentation.

@rabbbit

Is there anything you need from us? I posted a few gc-traces in the above comments - do they help diagnose what's happening exactly?

All your data is really useful. I do think we understand what's going on here, and from my perspective it's expected. I don't think we plan to do anything here in the near-term. Although it's unfortunate that GOMEMLIMIT isn't as soft as you expect it to be out of the box, the fact that performance doesn't completely collapse is a positive (I don't disagree that it's degraded), and it further suggests to me that the runtime is WAI. I feel like the best I can suggest for this case, for now, is some kind of metrics-based on-line adjustment of GOMEMLIMIT, or an explicit crash when you end up in this degraded state.

Given that this state is persistent but not catastrophic, I think a reasonable workaround for your case would be to monitor a bunch of runtime/metrics metrics at some regular interval (say, on the order of seconds or minutes). I think you should avoid the complexity of setting/destroying finalizers, which I don't think buys you much here.

As for what metrics to watch, I think given that you know the nature of this problem, the best metric to watch would be live heap. Unfortunately, there isn't a super good way to grab that today. #56857 would fix that however, and I think we should land something for that this release cycle. I do think watching the GC CPU metrics is also reasonable, but I would pick a fairly large time window to average them over since they can be noisy and that might lead to unintended takedowns.
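
For the GC CPU side, a minimal sketch of that kind of windowed check using the Go 1.20 /cpu/classes metrics might look like the following; the one-minute window and 50% threshold are placeholders to tune, not recommendations:

package main

import (
	"log"
	"runtime/metrics"
	"time"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/cpu/classes/gc/total:cpu-seconds"},
		{Name: "/cpu/classes/total:cpu-seconds"},
	}
	metrics.Read(samples)
	prevGC, prevTotal := samples[0].Value.Float64(), samples[1].Value.Float64()
	for range time.Tick(time.Minute) { // a large window smooths out the noise
		metrics.Read(samples)
		gc, total := samples[0].Value.Float64(), samples[1].Value.Float64()
		if frac := (gc - prevGC) / (total - prevTotal); frac > 0.5 {
			log.Printf("GC used ~%.0f%% of CPU over the last minute", frac*100)
			// decide here: crash, relax GOMEMLIMIT, alert, etc.
		}
		prevGC, prevTotal = gc, total
	}
}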

I know that this isn't the most straightforward answer and it will unfortunately add complexity to your deployment, but the alternatives are basically (1) research work with an unknown endpoint for a more adaptive GOMEMLIMIT policy or (2) a new GC tuning parameter, for which there is a high bar and for which (I think) we need more experience reports from others in the community.

Let's keep this issue open so that more of the community can come in and comment further. If this issue is more widespread, I think it's reasonable to take more actions. So far I haven't been seeing that feedback through the channels available to me, but I'm keeping this in mind going forward.

we were expecting the GC to take no more than 50% of the CPU time - here it looks like it's taking 2/3 of the time, and potentially more - with a higher GOMAXPROCS it would likely keep climbing?

Note that the third number in e.g. 0.075/1095/1881 in the GC trace is idle time the GC is soaking up. Basically, the GC runs low-priority GC workers that only run when there's absolutely nothing else to run. If what you're referring to here includes that number, then indeed, that number will go up as GOMAXPROCS increases. However, if there were actually more goroutines to run, then that number would be lower and it would not impact program execution. This "soaking up idle time" is an optimization the GC implements which helps to reduce GC cycle times and GC impact on running application goroutines.

The 50% CPU limit does not apply to this "idle time" optimization, because the running application can readily take it back if it needs it.

(to a smaller extent, not blocking us) the non-linear nature of the increase. In your first benchmark, the absolute increase between @723.498s and @783.500s is as high as between the start and @603.507.

I made a note to myself to take a closer look at this.

@mknyszek
Copy link
Contributor

Actually, sorry, I guess I may have spoken too soon. Overall I stand by what I said in my last comment, but I just took a closer look at the GC trace in #58106 (comment) and noticed something that might be a bug?

So, AFAICT, this is the point where the memory limit is set: (right?)

gc 105 @945.240s 6%: 0.15+2387+0.019 ms clock, 0.45+1149/1791/1010+0.059 ms cpu, 2494->2718->2097 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 106 @952.653s 6%: 0.11+1407+0.005 ms clock, 0.34+2.7/1074/2685+0.016 ms cpu, 2440->2460->1875 MB, 2590 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 107 @956.854s 6%: 0.099+1393+0.007 ms clock, 0.29+2133/1052/871+0.023 ms cpu, 2176->2182->1863 MB, 1875 MB goal, 48 MB stacks, 0 MB globals, 3 P

But what's odd is you said you set GOMEMLIMIT=3GiB. This would imply that the runtime thinks there's ~1 GiB of overhead which is quite surprising to me. Is 3 GiB right in this case? Am I missing something?

FTR I don't see the same thing in @thepudds' reproducer; the heap goal does seem to be closer to 3 GiB.

@mknyszek
Copy link
Contributor

Oh, one other useful metric to watch (though I still think live heap is best in this particular case) would be /gc/limiter/last-enabled:gc-cycle. This tells you the last GC cycle that the GC's CPU time was limited. If it's zero, then it wasn't ever limited.

You can sample this at the same time as /gc/cycles/total:gc-cycles and if the two numbers are consistently close (say, for 10 samples in a row), then you can be fairly sure you're in a degraded state.
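
A rough sketch of that check (the polling interval and the 10-sample streak are illustrative, not prescriptive):

package main

import (
	"log"
	"runtime/metrics"
	"time"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/gc/limiter/last-enabled:gc-cycle"},
		{Name: "/gc/cycles/total:gc-cycles"},
	}
	streak := 0
	for range time.Tick(5 * time.Second) {
		metrics.Read(samples)
		lastLimited := samples[0].Value.Uint64()
		totalCycles := samples[1].Value.Uint64()
		// The limiter kicked in within the last couple of GC cycles.
		if lastLimited != 0 && totalCycles-lastLimited <= 1 {
			streak++
		} else {
			streak = 0
		}
		if streak >= 10 {
			log.Print("GC CPU limiter persistently active; likely in the degraded state")
			// decide here: crash, shed load, or relax GOMEMLIMIT
		}
	}
}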

@rabbbit
Copy link
Author

rabbbit commented Apr 12, 2023

Actually, sorry, I guess I may have spoken too soon. Overall I stand by what I said in my last comment, but I just took a closer look at the GC trace in #58106 (comment) and noticed something that might be a bug?

So, AFAICT, this is the point where the memory limit is set: (right?)

gc 105 @945.240s 6%: 0.15+2387+0.019 ms clock, 0.45+1149/1791/1010+0.059 ms cpu, 2494->2718->2097 MB, 2721 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 106 @952.653s 6%: 0.11+1407+0.005 ms clock, 0.34+2.7/1074/2685+0.016 ms cpu, 2440->2460->1875 MB, 2590 MB goal, 48 MB stacks, 0 MB globals, 3 P
gc 107 @956.854s 6%: 0.099+1393+0.007 ms clock, 0.29+2133/1052/871+0.023 ms cpu, 2176->2182->1863 MB, 1875 MB goal, 48 MB stacks, 0 MB globals, 3 P

But what's odd is you said you set GOMEMLIMIT=3GiB. This would imply that the runtime thinks there's ~1 GiB of overhead which is quite surprising to me. Is 3 GiB right in this case? Am I missing something?

FTR I don't see the same thing in @thepudds' reproducer; the heap goal does seem to be closer to 3 GiB.

It's certainly possible I fat-fingered or misreported something. I don't see a similar drop in the other traces, so perhaps it was a typo. What would this mean - that there's some overhead somewhere that GC cannot clear?

The 50% CPU limit does not apply to this "idle time" optimization, because the running application can readily take it back if it needs it.

Hm, this is fair, but also:

  • we see significant container throttling
  • we see a significant latency impact for the service - 20x+ for p99 in one of the benchmarks.
    So for whatever reason, the application is not taking the CPU back, or the GC is affecting the CFS quotas in a significant enough way.

(2) a new GC tuning parameter, for which there is a high bar and for which

Oh, no - agreed that this would be too much. The fewer knobs, the better. I liked how you gave yourself some flexibility in the GOMEMLIMIT design ("it might overshoot"), so at most I hoped to convince you to overshoot faster :)

Let's keep this issue open so that more of the community can come in and comment further. If this issue is more widespread, I think it's reasonable to take more actions. So far I haven't been seeing that feedback through the channels available to me, but I'm keeping this in mind going forward.

Sounds great, thanks!

@mknyszek
Copy link
Contributor

What would this mean - that there's some overhead somewhere that GC cannot clear?

Yeah, basically.

Hm, this is fair, but also:

  • we see significant container throttling
  • we see a significant latency impact for the service - 20x+ for p99 in one of the benchmarks.
    So for whatever reason, the application is not taking the CPU back, or the GC is affecting the CFS quotas in a significant enough way.

Yeah, that's not good. Impact on CFS quotas from idle GC workers is something I could absolutely see exacerbating this issue. What is the quota set to for your applications above? Does increasing the quota (to say, several full cores) show better behavior in the degraded state?

FTR, I'm not a fan of the idle-priority GC workers. See #44163 which originally proposed removing them entirely. I walked that back to just disabling them in some circumstances because they can improve performance and that's hard to take away entirely.

@mknyszek
Copy link
Contributor

Hm. Perhaps the idle-priority GC workers should also be limited, like assists. Originally, I treated the idle-priority GC worker time as idle time, but later added actual idle time tracking and just lumped the idle-priority GC worker time into that. But I/O bound applications might end up using substantially more CPU than 50%. This isn't really a problem in a vacuum, but might be a problem with quotas, autoscaling systems, or in a cotenant situation.

@thepudds
Copy link
Contributor

Hi @mknyszek

(to a smaller extent, not blocking us) the non-linear nature of the increase. In your first benchmark, the absolute increase between @723.498s and @783.500s is as high as between the start and @603.507.

I made a note to myself to take a closer look at this.

Maybe I'm off base here, but FWIW, I think it is expected for the CPU usage to grow non-linearly.

Using a simplified model of GC costs (including ignoring non-heap memory and so on):

  • Total GC cpu cost measured over some interval like a minute is:
    • cpu cost per cycle X cycle frequency, or
    • cpu cost per cycle X 1/(time between cycle starts)
  • The new memory being created here is happening at a steady rate (in two categories: a steady rate of garbage creation of ~42 MiB/sec, which is atop a slower steady rate of increase in live memory of ~1 MiB/sec).
  • Roughly speaking, the GC paces itself so that the new memory allocation consumes all the allowed free space just as it starts a new cycle, so the time between the start of cycles is proportional to:
    • free space / new memory rate, or
    • (mem limit - live heap)/new memory rate
  • This means the per minute GC CPU cost is:
    • cpu cost per cycle X 1/((mem limit - live heap)/new memory rate)
  • The new memory rate is roughly constant and the work per cycle is only growing slowly, which means the overall cpu cost is roughly proportional to 1/(mem limit - live heap), which grows non-linearly, and grows especially quickly as as mem limit - live heap approaches zero.
  • Of course, in the real system, it never reaches infinity because eventually the heap is allowed to grow past its mem limit while the GC consumes all "spare" CPU.

Does that sound plausible?

There might be other things going on too, or a bug or similar, but the main point is I don't think linear GC CPU growth is expected as one approaches the mem limit at a constant rate...
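
To make the non-linearity concrete, here is a toy calculation of that simplified model; the allocation rate and per-cycle cost are made-up constants loosely based on the benchmark above, not measurements:

package main

import "fmt"

func main() {
	const (
		allocRate    = 42.0 // MiB/s of new memory, as in the benchmark
		costPerCycle = 1.5  // CPU-seconds of GC work per cycle (made up)
	)
	// GC CPU ~ costPerCycle / (headroom / allocRate), where headroom = mem limit - live heap.
	for _, headroom := range []float64{1000, 500, 250, 100, 50} {
		cycleEvery := headroom / allocRate
		gcCores := costPerCycle / cycleEvery
		fmt.Printf("headroom %4.0f MiB -> GC cycle every %4.1fs -> ~%.2f cores on GC\n",
			headroom, cycleEvery, gcCores)
	}
}

Each halving of the headroom doubles the GC CPU spend in this model, which is why the cost climbs slowly at first and then very quickly near the limit.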

@mknyszek
Copy link
Contributor

You are correct that constant heap growth will result in a non-linear increase in GC CPU usage, and your reasoning is sound to me.

I was just leaving myself a mental note to look at that GC trace in general since I was still catching up (I hadn't looked closely at every log yet). 😅 What I wanted to double-check is if that was happening before or after the push past the limit, but you already annotated the log with that info so yup! It's pretty clear this is WAI. Thanks.

@thepudds
Copy link
Contributor

thepudds commented May 25, 2023

Hi @mknyszek, I wanted to briefly follow-up on this portion of my comment above from Feb:

It does seem that it is a reasonable choice for some applications that are using GOMEMLIMIT for the process to monitor its own memory usage and terminate itself if needed, especially when there is a concern about memory leaks or otherwise as a safeguard. [...]

Question: what metric should be used if an application does want to avoid being stuck in a high-CPU usage degraded state for an extended period when close to GOMEMLIMIT but not yet "rescued" by the kernel's OOM killer?

Some seemingly imperfect candidates:

  • #56857 is adding /gc/heap/live:bytes (or whatever it ends up being named), which is helpful, but it does not represent all memory usage. The GOMEMLIMIT implementation spent some effort to not just respond to heap memory but instead be more comprehensive [...] and it would be nice if an application could get a similar level of visibility beyond just heap usage.
  • There are existing runtime/metrics that capture the additional memory, but as I understand it, those are not time-aligned with the WIP /gc/heap/live:bytes from #56857, which means it might not be meaningful to combine them with /gc/heap/live:bytes. You can try to do a forEachGC-style notification using finalizers, but finalizers also don't run immediately after a GC, so grabbing additional metrics based on finalizer execution also has some time skew, I think?
  • [...]

With the new runtime metrics that landed this week in #56857 (thanks @felixge!), do you think it is now possible to sum the right metrics and have something that is fairly directly comparable to the GOMEMLIMIT, including the metrics involved in the sum being at least somewhat time-aligned? (It's likely impossible to be perfectly time-aligned, but ideally it'd be better than needing to do X+Y+Z, where X and Y are say from last GC cycle but Z is close to a current measure).

The intent again is to put enough data into the application's hands via observability so that the application can make decisions and take action, which in turn might simplify what's expected of the runtime (without the application needing to do something like pad an extra 10-15% because it can't get a good measure of some of the material overheads).

One other question is I think the intent of the runtime metrics is that they are lightweight enough that polling them on some even reasonably frequent time tick would be a tiny cost?

CC @aclements

@mknyszek
Copy link
Contributor

mknyszek commented May 25, 2023

With the new runtime metrics that landed this week in #56857 (thanks @felixge!), do you think it is now possible to sum the right metrics and have something that is fairly directly comparable to the GOMEMLIMIT, including the metrics involved in the sum being at least somewhat time-aligned?

Just to clarify, comparing against GOMEMLIMIT in this case is straightforward: we already export these metrics and document that it's what the Go runtime is trying to maintain. (See https://pkg.go.dev/runtime/debug#SetMemoryLimit; specifically, /memory/classes/total:bytes - /memory/classes/heap/released:bytes. These metrics are already exported in a real-time fashion; they're not time-aligned to any GC cycle or anything.)

However, that doesn't help you if GOGC=off, because if you have a small live heap, you'll still see /memory/classes/total:bytes - /memory/classes/heap/released:bytes hovering around the memory limit, regardless of how much impact the GC has on CPU overheads. I think this might've been where we misunderstood each other last we discussed it, or maybe you already knew this but you meant what I'm about to say. If that's the case, my apologies! Hopefully the extra context will be useful for other readers. :)

#56857 is adding /gc/heap/live:bytes (or whatever it ends up being named), which is helpful, but it does not represent all memory usage. The GOMEMLIMIT implementation spent some effort to not just respond to heap memory but instead be more comprehensive [...] and it would be nice if an application could get a similar level of visibility beyond just heap usage.

I believe /gc/heap/live:bytes is really the last thing needed.

Here's the recipe:

/memory/classes/total:bytes - /memory/classes/heap/released:bytes - /memory/classes/heap/free:bytes - /memory/classes/heap/objects:bytes + /gc/heap/live:bytes

This expression represents the "permanent" parts of Go runtime memory use that the runtime can't do anything about. This is the same computation the runtime uses. If this is some high fraction of the memory limit, you can be sure you'll be executing GCs often. This is a fairly generic expression; I don't expect it to change too much over time.

The fraction you pick as your threshold for "too much" will need to be dialed in to taste. You also probably want to be able to tolerate brief increases in this value, otherwise you're basically just back to where you started before GOMEMLIMIT. Luckily, in a memory leak case, the bad performance is going to be persistent. You can require that the threshold be exceeded for some reasonably large amount of time (30 seconds? A minute?) before bailing out.

FTR, all the metrics above are as real-time as they can be. /gc/heap/live:bytes updates every GC cycle, but it's not meaningful in today's world to update that faster anyway. I don't think there's really any reason to try and use a finalizer and match the GC cycle. Because a leak is a persistent issue anyway, checking every few hundred milliseconds is probably more than sufficient. If it ever does become meaningful (say, a reference-counted GC implementation) then it's not worthwhile binding yourself to the GC cycle anyway. The number will just update more frequently.
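
For what it's worth, a minimal "health checker" goroutine built on that recipe might look something like this (assumes Go 1.21+ for /gc/heap/live:bytes; the 90% threshold and ~30-second persistence window are arbitrary and should be tuned to taste):

package main

import (
	"log"
	"os"
	"runtime/debug"
	"runtime/metrics"
	"time"
)

func main() {
	limit := debug.SetMemoryLimit(-1) // a negative input just reads the current limit
	samples := []metrics.Sample{
		{Name: "/memory/classes/total:bytes"},
		{Name: "/memory/classes/heap/released:bytes"},
		{Name: "/memory/classes/heap/free:bytes"},
		{Name: "/memory/classes/heap/objects:bytes"},
		{Name: "/gc/heap/live:bytes"},
	}
	over := 0
	for range time.Tick(time.Second) {
		metrics.Read(samples)
		// "Permanent" memory the GC can't do anything about, per the recipe above.
		permanent := samples[0].Value.Uint64() - samples[1].Value.Uint64() -
			samples[2].Value.Uint64() - samples[3].Value.Uint64() +
			samples[4].Value.Uint64()
		if float64(permanent) > 0.9*float64(limit) {
			over++
		} else {
			over = 0
		}
		if over >= 30 { // persistently over the threshold for ~30s
			log.Print("live memory persistently near GOMEMLIMIT; exiting")
			os.Exit(1)
		}
	}
}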

The intent again is to put enough data into the application's hands via observability so that the application can make decisions and take action, which in turn might simplify what's expected of the runtime

Totally agree. Beyond exposing a new knob, I'm not sure there's a lot we can do. A lot of this "let's die in this scenario" is going to be tuned to taste for each application. Having a "health checker" goroutine seems like a reasonable way to tackle issues like this.

(without the application needing to do something like pad an extra 10-15% because it can't get a good measure of some of the material overheads).

In this case (GOGC=off) nothing like that is necessary. It's really just setting a threshold that represents your tolerance for your service being taken down.

One other question is I think the intent of the runtime metrics is that they are lightweight enough that polling them on some even reasonably frequent time tick would be a tiny cost?

Correct. There's no hefty global synchronization operation involved, and the operation itself is pretty fast (order of tens of microseconds at worst), so you can sample it fairly frequently.

@mknyszek
Copy link
Contributor

Just wanted to say: thanks @thepudds for poking this issue again now that the new metrics are in!

@rabbbit if you do end up going with a "health-checker" approach, I think the expression in my last message is what I'd recommend checking against. If you do try it out, let me know how it goes!

@rabbbit
Copy link
Author

rabbbit commented May 25, 2023

Thanks @thepudds and @mknyszek!

@cdvr1993 said he'll take a look. Rather than implementing a suicide-goroutine, he's planning on:

  • setting GOMEMLIMIT to a target (~95%)
  • watching GC CPU usage and gradually increasing GOMEMLIMIT when either the usage is above some threshold (25%?) or based on the metric you recommended above - tbd.

The reasoning is that we want to avoid all replicas of the application dying at the same time, + we want them to "suffer" slightly before they die.

All credit for the above goes to @cdvr1993, I just basically copy-pasted his thoughts.

if you do end up going with a "health-checker" approach, I think the expression in my last message is what I'd recommend checking against.
/memory/classes/total:bytes - /memory/classes/heap/released:bytes - /memory/classes/heap/free:bytes - /memory/classes/heap/objects:bytes + /gc/heap/live:bytes

Huh, a bit extensive, but we'll try. Could we convince you to emit a future-proof /memory/classes/gomemlimit/observed:bytes or similar? We'd then shift the responsibility for typos and mistakes onto the runtime ;)

@mknyszek
Copy link
Contributor

mknyszek commented May 26, 2023

Thanks @thepudds and @mknyszek!

@cdvr1993 said he'll take a look. Rather than implementing a suicide-goroutine, he's planning on:

  • setting GOMEMLIMIT to a target (~95%)
  • watching GC CPU usage and gradually increasing GOMEMLIMIT when either the usage is above some threshold (25%?) or based on the metric you recommended above - tbd.

The reasoning is that we want to avoid all replicas of the application dying at the same time, + we want them to "suffer" slightly before they die.

An alternative to increasing GOMEMLIMIT is to switch to a GOGC-based GC trigger once you get close to the limit. This is effectively just a softer limit, but defined in terms of memory rather than CPU usage (which may be less noisy).

You can calculate the "effective" GOGC value at any given time like so:

(/gc/heap/goal:bytes) / (/gc/heap/live:bytes + /gc/scan/stack:bytes + /gc/scan/globals:bytes) * 100 - 100

EDIT: I previously wrote this incorrectly and @cdvr1993 corrected me below. Updated here in case anyone looks at this comment first.

and compare that against your target minimum GOGC, say for example 10. If that value ever goes below 10, call debug.SetGCPercent(10) and set GOMEMLIMIT back to MaxInt64.
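
A sketch of what that switch-over could look like (assumes the Go 1.21+ metric names; the 10% floor and one-second poll are only examples):

package main

import (
	"math"
	"runtime/debug"
	"runtime/metrics"
	"time"
)

func main() {
	samples := []metrics.Sample{
		{Name: "/gc/heap/goal:bytes"},
		{Name: "/gc/heap/live:bytes"},
		{Name: "/gc/scan/stack:bytes"},
		{Name: "/gc/scan/globals:bytes"},
	}
	const minGOGC = 10.0
	for range time.Tick(time.Second) {
		metrics.Read(samples)
		goal := float64(samples[0].Value.Uint64())
		scannable := float64(samples[1].Value.Uint64() +
			samples[2].Value.Uint64() + samples[3].Value.Uint64())
		effectiveGOGC := goal/scannable*100 - 100
		if effectiveGOGC < minGOGC {
			// Fall back to a plain GOGC trigger and drop the memory limit so
			// the process is killed sooner instead of thrashing on GC.
			debug.SetGCPercent(int(minGOGC))
			debug.SetMemoryLimit(math.MaxInt64)
			return
		}
	}
}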

Though, I'm curious to hear what you learn from the CPU-based thresholds, too. The above is just somewhat tried-and-true (the old SetMaxHeap experiment worked like this, just internally to the runtime).

All credit for the above goes to @cdvr1993, I just basically copy-pasted his thoughts.

if you do end up going with a "health-checker" approach, I think the expression in my last message is what I'd recommend checking against.
/memory/classes/total:bytes - /memory/classes/heap/released:bytes - /memory/classes/heap/free:bytes - /memory/classes/heap/objects:bytes + /gc/heap/live:bytes

Huh, a bit extensive, but we'll try. Could we convince you to emit a future-proof /memory/classes/gomemlimit/observed:bytes or similar? We'd then shift the responsibility for typos and mistakes onto the runtime ;)

It's not the shortest expression, and there are certainly some subtleties. I think you're right that exporting a metric for this might be a good idea. I'll propose something for next release.

@cdvr1993
Copy link

An alternative to increasing GOMEMLIMIT is to switch to a GOGC-based GC trigger once you get close to the limit. This is effectively just a softer limit, but defined in terms of memory rather than CPU usage (which may be less noisy).

You can calculate the "effective" GOGC value at any given time like so:

(/gc/heap/live:bytes + /gc/scan/stack:bytes + /gc/scan/globals:bytes) / (/gc/heap/goal:bytes) * 100

and compare that against your target minimum GOGC, say for example 10. If that value ever goes below 10, set SetGCPercent to 10 and set GOMEMLIMIT back to MaxInt64.

Though, I'm curious to hear what you learn from the CPU-based thresholds, too. The above is just somewhat tried-and-true (the old SetMaxHeap experiment worked like this, just internally to the runtime).

That's an interesting idea. We haven't really tried a max CPU threshold yet. The only threshold we had before was to keep CPU usage below a certain level if we had the memory to do so.

So I'll give your recommendation a try; that way I don't need to keep changing the memlimit and can just let it grow by itself.

Thanks.

@cdvr1993
Copy link

@mknyszek

I think the correct formula is:

100 * (/gc/heap/goal:bytes) / (/gc/heap/live:bytes + /gc/scan/stack:bytes + /gc/scan/globals:bytes)  - 100

or am I missing something?

@mknyszek
Copy link
Contributor

Er, yes. That's right. 😅 Updated my comment above.

@extemporalgenome
Copy link
Contributor

I've also occasionally observed pathological CPU utilization among a few equally-loaded, identically configured pods forming a kafka consumer group. All of them had almost no real work to do. Most of the pods had 3-5% CPU utilization, and memory use less than GOMEMLIMIT (50-90% of GOMEMLIMIT is typically observed). However, for unclear reasons, a couple pods might get into > GOMEMLIMIT usage (sometimes by nearly double) very quickly after creation, with CPU pegged at 96% or more.

A trace revealed that these pods were spending almost all CPU time towards garbage collection. In my case, it does not appear that there's a leak, and I suspect that live heap use isn't actually near or above GOMEMLIMIT, but more investigation would be needed. I suspect there's an anti-sweet-spot that the current GC algorithm can trigger, similar to what was observed with segmented stacks in the past.

If pathological behavior is detectable, it would be preferable for these workloads to have a classic stop-the-world remedial GC cycle available in order to force usage back below GOMEMLIMIT, rather than to have cpu thrash persist indefinitely.

@mknyszek
Copy link
Contributor

mknyszek commented Nov 29, 2023

In my case, it does not appear that there's a leak, and I suspect that live heap use isn't actually near or above GOMEMLIMIT, but more investigation would be needed.

This directly contradicts the high rate of garbage collection, unless memory is being squeezed from some other direction. For example, the sudden creation of a lot of goroutines. (But even that is still unlikely to be the problem, unless the number of goroutines is in the millions or more.)

If you have some time, I'd appreciate if you could confirm this. If the rate of garbage collection is increasing without memory being squeezed in any way, that may just be a bug somewhere in the GC. A good diagnostic for confirming this would be a GC trace (GODEBUG=gctrace=1) if you haven't tried it already. There is also a runtime/metrics metric for live heap size, as of Go 1.21.

If pathological behavior is detectable, it would be preferable for these workloads to have a classic stop-the-world remedial GC cycle available in order to force usage back below GOMEMLIMIT

I'm not sure what you mean by this. A STW GC isn't going to change the rate of garbage collection, which is the core problem in these extreme cases. The concurrent garbage collection cycle already has other mechanisms to ensure the limit isn't exceeded, like GC assists. (You can kind of think of the assist rate as a slider between a fully concurrent collector that doesn't impact mutator execution and a fully STW collector that forces every goroutine to help the garbage collector until all work is done, much like a STW GC.)

Also I'm not sure I follow what you mean by this idea of an extra "remedial" garbage collection. If the live heap is high, or memory is being squeezed from some other direction, then an extra garbage collection isn't going to help alleviate the problem. (In fact, that's kind of already what the GC is trying to do when the rate of garbage collection is increasing.)

FWIW, this "pathological behavior" is already detected and mitigated to some degree. The absolute worst-case GC death spiral is cut off by a GC CPU limiter that allows the program to exceed GOMEMLIMIT if the GC is using too much CPU time (~50% of GOMAXPROCS, but it's not perfect and intentionally allows for some transient spikes). But part of what this GitHub issue is about is this isn't going to be a perfect fit for every application. What's always true about these extreme scenarios is that something has to give, be that respecting the memory limit or spending an arbitrary amount of time on GC CPU to try and stay under.

@mknyszek
Copy link
Contributor

mknyszek commented Dec 7, 2023

@extemporalgenome Just circling back to this to see if you have any more information. If you suspect a bug, please file a new issue! Thanks.
