all: static analysis to proactively free constructed garbage #5448

Closed
bradfitz opened this issue May 13, 2013 · 8 comments

@bradfitz
Contributor

In the following program, the compiler+linker could know that NewIntWriter always
returns unretained garbage and that, on exit from foo, no reference to the
*IntWriter is held, so an explicit free could be inserted.

You see this with things like http://golang.org/pkg/encoding/json/#NewEncoder (see also:
https://golang.org/cl/9365044/)

Instead, it generates garbage:

BenchmarkAllocs  2000000           819 ns/op        1057 B/op          1 allocs/op

package main

import (
        "fmt"
        "io"
        "io/ioutil"
        "testing"
)

type IntWriter struct {
        w     io.Writer
        state [1000]byte
}

func NewIntWriter(w io.Writer) *IntWriter {
        return &IntWriter{w: w}
}

func (iw *IntWriter) WriteInt(n int) {
        fmt.Fprintf(iw.w, "%d", n)
}

func BenchmarkAllocs(b *testing.B) {
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
                foo()
        }
}

func foo() {
        NewIntWriter(ioutil.Discard).WriteInt(123)
}
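
For comparison, a manual workaround sketch (not part of the original report; BenchmarkReuse is a hypothetical name) that could be appended to the file above: constructing the IntWriter once and reusing it avoids the per-call garbage that the proposed analysis would otherwise have to free.

var sharedIntWriter = NewIntWriter(ioutil.Discard)

func BenchmarkReuse(b *testing.B) {
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
                // No IntWriter is constructed per call, so the 1000-byte
                // allocation from the original benchmark disappears.
                sharedIntWriter.WriteInt(123)
        }
}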
@rsc
Contributor

rsc commented May 24, 2013

Comment 1:

I really don't want to start depending on whole-program analyses for freeing garbage.
I also don't want to lock us into only using allocators that support "free".
We already have two classes of storage with different lifetimes: heap and stack.
It's fine to figure out that something can go on the stack and to put it there, but
let's not introduce a third class.
Also, the linker is not involved in this example; let's keep it that way.
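
For context, a minimal sketch (not from the thread) of the two existing storage classes described here: when escape analysis proves a value never outlives its frame, the compiler places it on the stack, so no garbage is constructed and nothing needs to be freed.

package main

import "fmt"

type point struct{ x, y int }

func sum(a, b int) int {
        // &point{...} does not escape sum, so it stays on the stack;
        // `go build -gcflags=-m` reports it as not escaping.
        p := &point{x: a, y: b}
        return p.x + p.y
}

func main() {
        fmt.Println(sum(1, 2))
}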

@rsc
Contributor

rsc commented Nov 27, 2013

Comment 2:

Labels changed: added go1.3maybe.

@rsc
Contributor

rsc commented Dec 4, 2013

Comment 3:

Labels changed: added release-none, removed go1.3maybe.

@rsc
Contributor

rsc commented Dec 4, 2013

Comment 4:

Labels changed: added repo-main.

@dvyukov
Member

dvyukov commented Jan 27, 2015

It is already 2 allocs:

BenchmarkAllocs 2000000 890 ns/op 1040 B/op 2 allocs/op

Looks like a target for escape analysis.

@rsc rsc added this to the Unplanned milestone Apr 10, 2015
@pwaller
Contributor

pwaller commented Feb 5, 2019

I believe that escape analysis already covers this today:

BenchmarkAllocs-16    	20000000	        85.6 ns/op	       8 B/op	       1 allocs/op

(also, wow things got faster in the interim - is that really 10x faster than the above result?)

For the record, here is the salient information:

(pprof) top
Showing nodes accounting for 9699476, 100% of 9699476 total
      flat  flat%   sum%        cum   cum%
   9699476   100%   100%    9699476   100%  github.com/pwaller/escape-analysis.(*IntWriter).WriteInt
         0     0%   100%    9699476   100%  github.com/pwaller/escape-analysis.BenchmarkAllocs
         0     0%   100%    9699476   100%  github.com/pwaller/escape-analysis.foo
         0     0%   100%    9699476   100%  testing.(*B).launch
         0     0%   100%    9699476   100%  testing.(*B).runN
(pprof) list WriteInt
Total: 9699476
ROUTINE ======================== github.com/pwaller/escape-analysis.(*IntWriter).WriteInt in /home/pwaller/.local/src/github.com/pwaller/escape-analysis/main_test.go
   9699476    9699476 (flat, cum)   100% of Total
         .          .     15:func NewIntWriter(w io.Writer) *IntWriter {
         .          .     16:        return &IntWriter{w: w}
         .          .     17:}
         .          .     18:
         .          .     19:func (iw *IntWriter) WriteInt(n int) {
   9699476    9699476     20:        fmt.Fprintf(iw.w, "%d", n)
         .          .     21:}
         .          .     22:
         .          .     23:func BenchmarkAllocs(b *testing.B) {
         .          .     24:        b.ReportAllocs()
         .          .     25:        for i := 0; i < b.N; i++ {

$ go build -gcflags=-m
# github.com/pwaller/escape-analysis
./main.go:15:6: can inline NewIntWriter
./main.go:19:6: can inline (*IntWriter).WriteInt
./main.go:30:6: can inline foo
./main.go:31:21: inlining call to NewIntWriter
./main.go:31:46: inlining call to (*IntWriter).WriteInt
./main.go:24:23: inlining call to testing.(*B).ReportAllocs
./main.go:26:20: inlining call to foo
./main.go:26:20: inlining call to NewIntWriter
./main.go:26:20: inlining call to (*IntWriter).WriteInt
/tmp/go-build401243069/b001/_gomod_.go:5:6: can inline init.0
./main.go:16:27: &IntWriter literal escapes to heap
./main.go:15:19: leaking param: w to result ~r1 level=-1
./main.go:19:7: leaking param content: iw
./main.go:20:27: n escapes to heap
./main.go:20:20: (*IntWriter).WriteInt ... argument does not escape
./main.go:26:20: n escapes to heap
./main.go:23:22: BenchmarkAllocs b does not escape
./main.go:26:20: BenchmarkAllocs &IntWriter literal does not escape
./main.go:26:20: BenchmarkAllocs ... argument does not escape
./main.go:31:46: n escapes to heap
./main.go:31:21: foo &IntWriter literal does not escape
./main.go:31:46: foo ... argument does not escape
# github.com/pwaller/escape-analysis

Seems the remaining allocation is the interface argument to fmt.Fprintf. Slightly surprised that this requires an allocation - I thought values that could fit inside the interface word wouldn't get allocated on the heap. Unless I'm misunderstanding where the allocation is here.

/cc @mvdan for gardening, since I believe the issue as originally written is resolved and I don't have the rights to close it.

@mvdan
Member

mvdan commented Feb 5, 2019

> (also, wow things got faster in the interim - is that really 10x faster than the above result?)

I presume your machine is just more powerful :)

I agree that the issue as written does seem to have been fixed. There are already other open issues for improving the compiler, so we can continue the work elsewhere.

@mvdan mvdan closed this as completed Feb 5, 2019
@pwaller
Contributor

pwaller commented Feb 5, 2019

TIL from a conversation in #performance on the Gophers Slack that interface data always contains a pointer, so there currently has to be an allocation in these circumstances.
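
A minimal benchmark sketch of that point (hypothetical names, not from the thread): passing a non-constant int through fmt.Fprintf's ...interface{} parameter generally boxes it on the heap, while formatting into a reused []byte with strconv avoids the interface conversion entirely.

package main

import (
        "fmt"
        "io/ioutil"
        "strconv"
        "testing"
)

func BenchmarkBoxedInt(b *testing.B) {
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
                // i is converted to interface{}; for most values this allocates.
                fmt.Fprintf(ioutil.Discard, "%d", i)
        }
}

func BenchmarkAppendInt(b *testing.B) {
        b.ReportAllocs()
        buf := make([]byte, 0, 32)
        for i := 0; i < b.N; i++ {
                // Reuses buf and never forms an interface value, so no boxing.
                buf = strconv.AppendInt(buf[:0], int64(i), 10)
                ioutil.Discard.Write(buf)
        }
}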

Doc trail in case anyone finds themselves here:
