Go version
go version devel go1.23-e8b5bc63be linux/amd64

What did you do?
When using binary.Write to encode a slice of structs, I ran into some odd behaviour: memory allocations along one particular path were higher than I expected. I wrote some benchmarks in the standard library's encoding/binary package to demonstrate this.
func BenchmarkWriteSlice1000Structs(b *testing.B) {
	slice := make([]Struct, 1000)
	buf := new(bytes.Buffer)
	var w io.Writer = buf
	b.SetBytes(int64(Size(slice)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		buf.Reset()
		Write(w, BigEndian, slice)
	}
	b.StopTimer()
}

func BenchmarkWriteSlice10Structs(b *testing.B) {
	slice := make([]Struct, 10)
	buf := new(bytes.Buffer)
	var w io.Writer = buf
	b.SetBytes(int64(Size(slice)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		buf.Reset()
		Write(w, BigEndian, slice)
	}
	b.StopTimer()
}
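For context, these benchmarks live in the encoding/binary package itself, which is why Write, Size, and BigEndian appear unqualified; Struct is the fixture type already defined in the package's binary_test.go. It is roughly a struct of fixed-size fields along the following lines (an approximation; the exact field list in the test file may differ):

// Approximation of the Struct fixture in binary_test.go; the real
// definition may differ slightly.
type Struct struct {
	Int8       int8
	Int16      int16
	Int32      int32
	Int64      int64
	Uint8      uint8
	Uint16     uint16
	Uint32     uint32
	Uint64     uint64
	Float32    float32
	Float64    float64
	Complex64  complex64
	Complex128 complex128
	Array      [4]uint8
	Bool       bool
	BoolArray  [4]bool
}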
What did you see happen?
I ran both benchmarks and these are the results:
Encoding a slice with 1000 struct elements
root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go test -run='^$' -memprofile memprofile.out -benchmem -bench BenchmarkWriteSlice1000Structs -count=10
root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go tool pprof memprofile.out
File: binary.test
Type: alloc_space
Time: Mar 9, 2024 at 3:27pm (UTC)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1305.40MB, 99.84% of 1307.48MB total
Dropped 8 nodes (cum <= 6.54MB)
          flat  flat%   sum%        cum   cum%
     1302.13MB 99.59% 99.59%  1304.21MB 99.75%  encoding/binary.Write
        3.27MB  0.25% 99.84%  1307.48MB   100%  encoding/binary.BenchmarkWriteSlice1000Structs
             0     0% 99.84%  1305.31MB 99.83%  testing.(*B).launch
             0     0% 99.84%  1307.48MB   100%  testing.(*B).runN
Encoding a slice with 10 struct elements
root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go test -run='^$' -memprofile memprofile.out -benchmem -bench BenchmarkWriteSlice10Structs -count=10
warning: GOPATH set to GOROOT (/root/go) has no effect
root@ubuntu-s-2vcpu-2gb-fra1-01:~/go/src/encoding/binary# ../../../bin/go tool pprof memprofile.out
warning: GOPATH set to GOROOT (/root/go) has no effect
File: binary.test
Type: alloc_space
Time: Mar 9, 2024 at 4:24pm (UTC)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 905.58MB, 100% of 905.58MB total
          flat  flat%   sum%        cum   cum%
      792.58MB 87.52% 87.52%   905.58MB   100%  encoding/binary.Write
         113MB 12.48%   100%      113MB 12.48%  reflect.(*structType).Field
             0     0%   100%   905.58MB   100%  encoding/binary.BenchmarkWriteSlice10Structs
             0     0%   100%      113MB 12.48%  encoding/binary.dataSize
             0     0%   100%      113MB 12.48%  encoding/binary.sizeof
             0     0%   100%      113MB 12.48%  reflect.(*rtype).Field
             0     0%   100%   905.58MB   100%  testing.(*B).launch
             0     0%   100%   905.58MB   100%  testing.(*B).runN
(pprof)
What did you expect to see?
Per the benchmarks, the total memory allocated at reflect.(*structType).Field rises when encoding a slice of 10 struct elements compared to a slice of 1000 struct elements. I expected the memory incurred to be, at worst, the same, if not less, when encoding a shorter slice of structs. I draw my conclusion from go/src/encoding/binary/binary.go, line 483 at 74726de, since we call sizeof on the same struct type regardless of the length of the slice.
Also, looking at the primary source of the allocations, go/src/reflect/type.go, line 1061 at 74726de: since both benchmarks work with the same struct type and hence the same fields, I expected the number of allocations there to be the same regardless of slice length.
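For what it's worth, here is my reading of the relevant code, as a paraphrased sketch rather than the verbatim source: dataSize's slice path recomputes sizeof for the element type on every Write call, whereas the struct path caches the computed size per type, and sizeof's struct case calls t.Field(i), which is where reflect.(*structType).Field shows up in the profile.

var structSize sync.Map // per-type size cache used by the struct path

// Paraphrase of encoding/binary.dataSize (a sketch, not the exact source).
func dataSize(v reflect.Value) int {
	switch v.Kind() {
	case reflect.Slice:
		// Slice path: sizeof runs on every call; the element size
		// computed here is not cached.
		if s := sizeof(v.Type().Elem()); s >= 0 {
			return s * v.Len()
		}
	case reflect.Struct:
		// Struct path: the size is cached, so sizeof runs only once
		// per struct type.
		t := v.Type()
		if size, ok := structSize.Load(t); ok {
			return size.(int)
		}
		size := sizeof(t)
		structSize.Store(t, size)
		return size
	}
	return -1
}

// Paraphrase of encoding/binary.sizeof: the struct case calls t.Field(i)
// for every field, and reflect.(*structType).Field allocates when it
// builds the returned StructField.
func sizeof(t reflect.Type) int {
	switch t.Kind() {
	case reflect.Struct:
		sum := 0
		for i, n := 0, t.NumField(); i < n; i++ {
			s := sizeof(t.Field(i).Type)
			if s < 0 {
				return -1
			}
			sum += s
		}
		return sum
	default:
		// The real code handles fixed-size kinds (ints, floats,
		// complex, arrays, ...) here and returns -1 for unsupported kinds.
		return int(t.Size())
	}
}

Under that reading, both benchmarks should pay the same per-call reflection cost inside Write, which matches my expectation above. A way to check the per-call allocation count directly is testing.AllocsPerRun; allocsPerWrite below is a hypothetical helper for illustration, assuming the same in-package context as the benchmarks:

// allocsPerWrite reports the average number of allocations for a single
// Write of an n-element []Struct.
func allocsPerWrite(n int) float64 {
	slice := make([]Struct, n)
	buf := new(bytes.Buffer)
	return testing.AllocsPerRun(100, func() {
		buf.Reset()
		Write(buf, BigEndian, slice)
	})
}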