
proposal: compress/zstd: add new package #62513

Open · dsnet opened this issue Sep 7, 2023 · 28 comments

@dsnet (Member) commented Sep 7, 2023

Zstandard (RFC 8878) is well-positioned to replace GZIP (RFC 1952) as the de facto compression format.

Zstandard:

Given the likely ubiquitous place Zstandard is going to have in the future, I propose first-class adoption of Zstandard in the stdlib.

Existing Go implementations:

Some goals for this package:

  • The API should be simple and similar to existing packages like compress/gzip (see the sketch after this list).
  • It should not use unsafe or assembly, matching the existing compress packages.
    • Performance is always nice, but Go community surveys consistently show security as more important than performance.
    • Users who just need a decently fast Zstandard implementation can use compress/zstd, while those who want the best performance and/or advanced features of Zstandard can use @klauspost's package. Any stdlib packages (e.g., net/http) that make use of compress/zstd should make it possible to swap in a different zstd implementation.
    • Note that the same is already true of compress/gzip, which is generally good enough (although there is still room for performance optimizations), but power users can use github.com/klauspost/compress/gzip, which is much faster (as it makes use of assembly).
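For reference, here is the existing compress/gzip round-trip pattern that a compress/zstd package would presumably mirror. This is runnable today against the stdlib; under this proposal only the package name (and level constants) would change:

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"log"
)

func main() {
	var buf bytes.Buffer

	// Compress: NewWriter wraps an io.Writer.
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte("hello, zstd proposal")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}

	// Decompress: NewReader wraps an io.Reader.
	zr, err := gzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	out, err := io.ReadAll(zr)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", out)
}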

We could:

  • adopt @klauspost's pure-Go (i.e., no assembly) implementation with a cleaned-up API
  • extend @ianlancetaylor's work to support compressing
  • do a combination of both

Related issues:

@dsnet (Member, Author) commented Sep 7, 2023

Some comments on other popular compression formats and why Zstandard makes the cut, but not others:

  • Brotli (RFC 7932) has many of the benefits of Zstandard, but has several detriments:
    • It uses a 120KiB static dictionary, which must be linked into every Go binary that uses brotli. This doesn't suit Go well, which always aims for static binaries and avoids shared libraries.
    • The overall design seems geared towards web data, while Zstandard is more general purpose.
    • There is no framing format for Brotli, which makes its use as a standalone file format somewhat limited.
  • XZ (or LZMA) has great compression ratios, but is really slow for compression and decompression. There is no specification.
  • bzip2 has fairly good compression ratios, but is slow for compression and decompression. There is no specification. While there exists a decompression implementation in the stdlib (i.e., compress/bzip2), I don't believe bzip2 should have made the cut for inclusion into the stdlib.
  • snappy is heavily geared towards very high speed compression and decompression, but sacrifices compression ratio. Somewhat of a niche use case.
  • lz4 aims for similar goals to snappy.

Most compression formats (other than bzip2) are a combination of LZ77 with entropy encoding (and maybe other tricks like Markov chains for LZMA). The entropy encoding that GZIP uses is Huffman encoding, which can't compress better than 1 bit per byte. In terms of compression ratio, arithmetic encoding (which LZMA uses), range encoding, and ANS encoding (which Zstandard uses) provide better ratios. ANS has the benefit that it overcomes the limitations of Huffman encoding, is patent-unencumbered (to my knowledge), and is also very fast. It seems unlikely a better entropy encoding than ANS will be discovered in the next decade.
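To make the Huffman limitation concrete, here is a small runnable illustration (plain information theory, not zstd-specific; the 99/1 distribution is just an example): a prefix code like Huffman must spend at least one whole bit per symbol, so for a heavily skewed source it cannot get close to the entropy, while ANS/arithmetic/range coders can:

package main

import (
	"fmt"
	"math"
)

// entropyBitsPerSymbol computes the Shannon entropy H = -sum(p * log2(p)).
func entropyBitsPerSymbol(probs []float64) float64 {
	h := 0.0
	for _, p := range probs {
		if p > 0 {
			h -= p * math.Log2(p)
		}
	}
	return h
}

func main() {
	// A source emitting one symbol 99% of the time and another 1% of the time.
	h := entropyBitsPerSymbol([]float64{0.99, 0.01})
	fmt.Printf("entropy: %.3f bits/symbol\n", h) // ~0.081
	// Huffman's floor: every symbol costs at least one whole bit.
	fmt.Println("Huffman best case: 1.000 bits/symbol")
}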

IMO, the lack of a formal specification should bar inclusion in stdlib as it makes "correctness" somewhat undefined.

@ianlancetaylor (Contributor)

IMO, the lack of a formal specification should bar inclusion in stdlib as it makes "correctness" somewhat undefined.

Makes sense but I have to say that I'm not sure that anybody could write a correct zstd decompressor based solely on reading RFC 8878.

@dsnet (Member, Author) commented Sep 7, 2023

I'm not sure that anybody could write a correct zstd decompressor based solely on reading RFC 8878.

Heh... you're the expert there, having done it. I had written a Brotli decompressor solely from the draft RFC without looking at the implementation. This effort led to a number of updates to the draft so that it would be unambiguous when finalized.

@Jorropo (Member) commented Sep 7, 2023

To add to the list of good points: zstd has an extremely wide range of levels available. Depending on the compressor options you can get lz4- or gzip-like ratios with very low resource usage.
Or, by turning up a few knobs and/or changing a few algorithms in the compressor, you can get ratios slightly below xz while remaining transparently compatible with the decompressor (and while using fewer resources than xz).

@klauspost (Contributor)

@dsnet Just to clarify some mistakes in your initial post.

Neither zstd compression (nor deflate, for that matter) uses any assembly or "unsafe" for compression. The assembly is used for decompression and can be fully disabled with a "noasm" tag. So obviously that is trivial to exclude.

the API is somewhat complex

Could you elaborate what you find particularly complex?

I expect the stdlib would have the multithreading pulled out and only implement a fully synchronous mode, as well as possibly omitting dictionary features. Without that the API isn't really more complex than flate, gzip and similar. Without concurrency you can have the options on the Encoder (or Writer, whatever it is called), since you don't have any sharing to care about.

security as more important

I could interpret this as an underhanded way of implying my library isn't safe to use. Probably not the intention, but just to be clear: there is continuous fuzzing running, and third-party security review is being done.

Considering it doesn't use assembly nor unsafe, I think it is fair to say that battle-tested code is more secure than freshly written code. If you want to write it, I will of course not take that fun from you - but to wrap it in an argument of "security" seems disingenuous, or at the very least not at all researched, to me.

I do think that features like concurrent (de)compression and possibly dictionaries should be left out of the stdlib. I think having EncodeAll([]byte) []byte on an encoder is a good way to control allocations - and without concurrency it will be easier to control anyway.
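For readers unfamiliar with the pattern being discussed, a minimal sketch of EncodeAll/DecodeAll as they exist in github.com/klauspost/compress/zstd today (concurrency pinned to 1 here to approximate the synchronous mode under discussion):

package main

import (
	"fmt"
	"log"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// Passing nil as the writer is allowed when only EncodeAll is used.
	enc, err := zstd.NewWriter(nil, zstd.WithEncoderConcurrency(1))
	if err != nil {
		log.Fatal(err)
	}
	defer enc.Close()

	// EncodeAll appends the compressed frame to the second argument.
	compressed := enc.EncodeAll([]byte("some data to compress"), nil)

	dec, err := zstd.NewReader(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer dec.Close()

	out, err := dec.DecodeAll(compressed, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", out)
}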

@dsnet (Member, Author) commented Sep 8, 2023

@klauspost, thanks for your thoughts.

I expect the stdlib would have the multithreading pulled out and only implement a fully synchronous mode, as well as possibly omitting dictionary features

Correct. There should be no usages of goroutines in the stdlib implementation. The example in klauspost/compress#479 should be perfectly thread safe (as it is with the other stdlib packages).

One "dictionary" feature we probably want to keep is the ability to specify the maximum window size.

I could interpret this as an underhanded way of implying my library isn't safe to use. Probably not the intention,

Not at all the intention; I use your package quite extensively myself.

That said, the stdlib avoids the use of assembly except for a few select places. Assembly can be a challenge to review and maintain. Continuous fuzzing is great, but takes a long time before confidence in the implementation is obtained. Fuzzing doesn't help when trying to review if a change to the assembly implementation is correct.

I should also mention that the asynchronous nature of zstd.NewWriter has actually lent itself to race conditions that could then lead to security problems. To be clear, it's not a race in zstd, but the API design made it easy for users to hold it wrong. In general, library code should be synchronous (from the perspective of the caller) unless the naming or the functionality makes it obvious that it operates in an asynchronous manner.

Considering it doesn't use assembly, nor unsafe, I think it is fair to say that a battletested code is more secure than a freshly written code.

I'm confused. Just above, you mentioned that assembly is used in your package for decompression, but here you're saying it doesn't use assembly? Did you mean if you built it under noasm?

If you want to write it, I will of course not take that fun from you - but to wrap it in an argument of "security" seems disingenuous or the very least not at all researched to me.

I'm not itching to write a zstd implementation from scratch. My end goal is to see support for it in stdlib. In my proposal, I deliberately didn't make a statement about how this would come to be fulfilled. Using the pure Go (i.e., no assembly) implementation of your package would be a reasonable possibility.

I think having the EncodeAll([]byte) []byte on an encoder is a good may to control allocations - and without concurrency it will be easier to control anyway.

I personally find it rather strange that the Encoder is either a streaming encoder or essentially a variable to hold encoder settings and shared resources to use with EncodeAll.

Given the acceptance of #54078, I would rather see something like:

func AppendCompress(dst, src []byte, level int) ([]byte, error)
func AppendDecompress(dst, src []byte, maxSize int) ([]byte, error)
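To be explicit about the intended calling convention (these functions do not exist anywhere yet, so this is a hypothetical sketch; the level 3 and the 1<<20 cap are made-up values):

// Hypothetical usage of the proposed API; nothing here compiles today.
compressed, err := zstd.AppendCompress(nil, src, 3)
if err != nil {
	// a bad compression level, per this sketch
}
// maxSize bounds the decompressed output as a decompression-bomb
// mitigation, echoing the window-size concerns discussed below.
plain, err := zstd.AppendDecompress(buf[:0], compressed, 1<<20)

As with the #54078-style append APIs, passing a recycled buffer as dst lets the caller amortize allocations across calls.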

@klauspost (Contributor)

One "dictionary" feature we probably want to keep is the ability to specify the maximum window size.

Yes. That is an essential DoS mitigation. Though when I refer to dictionaries, I only refer to predefined dictionaries that seed the first blocks with backreferences and predefined entropy coders.

I'm confused. Just above, you mentioned that assembly is used in your package for decompression, but here you saying it doesn't use assembly? Did you mean if you built it under noasm?

I assumed you were only looking for compression given there already is an internal decompressor, which I unfortunately haven't had the time to test. But since everything is marked by tags, taking it out is extremely easy.

I should also mention the asynchronous nature zstd.NewWriter has actually lended itself to race conditions that could then lead to security problems.

I am not sure how you come to that. If you use it for streaming, you can only use it for one stream - that isn't any different from a sync Writer. If you want writes to only happen when you are calling Write, yes, you need to disable concurrency - but that is an assumption you are making that doesn't hold. The documentation should be pretty clear on what you can expect. Either way, it is a non-issue since the concurrency should go.

Both the compressor and the decompressor have an "async" and "sync" code path. Ripping out the async one is a minor task. Of course some cleanup can probably be done, but that is cosmetics.

func AppendCompress(dst, src []byte, level int) ([]byte, error)
func AppendDecompress(dst, src []byte, maxSize int) ([]byte, error)

So you want each of them to allocate, or try to hide the allocs with internal sync.Pools?

EncodeAll/DecodeAll are basically this (they also append), but they keep the allocs within the Reader/Writer, and buffers, etc. can be allocated with settings for the compression level needed.

Being able to use the same Reader/Writer for concurrent Encode/Decode operations would have to go, but it would at least give control over how each is reused. Side note: AppendCompress shouldn't need to return an error.

@dsnet (Member, Author) commented Sep 8, 2023

So you want each of them to allocate, or try to hide the allocs with internal sync.Pools?

Stack allocate if possible, reusing the destination buffer if possible (e.g., we don't need to maintain the LZ77 window for decompression since it can be obtained from dst itself), and otherwise sync.Pool.
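A minimal sketch of that fallback strategy, assuming a hypothetical internal encoderState type (the real state layout and pool granularity would certainly differ):

package zstd

import "sync"

// encoderState holds the expensive-to-allocate match-finder state.
type encoderState struct {
	hashTable []uint32
}

var encPool = sync.Pool{
	New: func() any { return &encoderState{hashTable: make([]uint32, 1<<16)} },
}

func AppendCompress(dst, src []byte, level int) ([]byte, error) {
	st := encPool.Get().(*encoderState)
	defer encPool.Put(st)
	// ... compress src into dst using st (elided) ...
	return dst, nil
}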

@dsnet (Member, Author) commented Sep 8, 2023

To comment more on EncodeAll being hung off as a method, with the pooling occurring within an Encoder: in a sufficiently large program, I'm observing multiple instances of global Encoder variables in various packages that each use more or less the same settings. It is somewhat silly that the resources are not shared across packages, since resource pooling occurs within an Encoder.
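The duplication being described looks roughly like this in practice (illustrative; klauspost/compress/zstd documents passing a nil writer for EncodeAll-only use):

package a // and near-identical copies in packages b, c, ...

import "github.com/klauspost/compress/zstd"

// Each package keeps its own Encoder just to reach EncodeAll's internal
// buffer pooling, so identically-configured pools exist once per package.
var encoder, _ = zstd.NewWriter(nil)

func compress(data []byte) []byte { return encoder.EncodeAll(data, nil) }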

@silverwind commented Sep 8, 2023

While zstd is a noble goal, I would like to highlight that it still has zero support in browsers as of today. So for anyone looking to use a more efficient Content-Encoding for HTTP than gzip, brotli is still the go-to.

@Jorropo (Member) commented Sep 8, 2023

@silverwind I think this is more relevant for #62492. ZSTD has found many uses outside Content-Encoding already.

@cespare (Contributor) commented Sep 9, 2023

To offer a bit of commentary on how it would be used: at my company Zstandard is becoming our default general-purpose compression algorithm because it's strictly better than gzip on all the dimensions we care about. There are cases where we would decide to use gzip (for compatibility) or snappy (for very high throughput) but in the absence of specific requirements, Zstandard is the go-to.

@silverwind

@silverwind I think this is more relevant for #62492. ZSTD has found many uses outside Content-Encoding already.

Yes, just wanted to mention it. I'm in favor of adding any of the "better than gzip" mechanisms to the standard library, both for HTTP transfer and for go:embed compression, which as of today does not even support gzip.

@klauspost (Contributor)

So you want each of them to allocate, or try to hide the allocs with internal sync.Pools?

Stack allocate if possible, reusing the destination buffer if possible (e.g., we don't need to maintain the LZ77 window for decompression since it can be obtained from dst itself), and otherwise sync.Pool.

Correct (and it already uses dst instead of a dedicated history). I can see the single-use decoder working statelessly with hidden, internal buffers (for things like literal decompression and entropy decoders, for example).

For encoding you have rather big hash tables to deal with, which will put quite some pressure on the stack: Level 1 uses 256KB, Level 2 uses 1.2MB, Level 3 uses 4MB. So these would also need some internal pooling, since having those on the stack and forcing zeroing on each call wouldn't be very optimal. Again, entropy/literal coders would need to have internal pools, since they are rather expensive to set up.

Doable, but I am not generally a fan of having a bunch of pools on a package level.

Streams will of course be stateful, so ignoring those for now.

That said, I don't really see any problem in separating single-shot state from the Reader/Writer state and having an internal pool for those.

To comment more on the EncodeAll hung off as methods where the pooling occurs within an Encoder: in a sufficiently large program, I'm observing multiple instances of global Encoder variables in various packages that each use more or less the same settings. It is somewhat silly that across packages, the resources is not shared since resource pooling occurs within an Encoder.

Sure. I can see that. I try to avoid global package state. That is the main reason I haven't added it, since it would be rather annoying if one bad actor could cause problems for others.

A global "pool" is nice for resource control. It is pointless to run more than GOMAXPROCS EncodeAll/DecodeAll concurrently, since each will consume exactly one thread fully until it is done. That has a "play nice" feel for preemption, and limiting this below GOMAXPROCS will of course make sure that compression/decompression will never oversaturate a system.

Without concurrency there isn't the main "annoyance" with the current API - that you have to Close to release resources. It is not that different from many other things, but it is too easy to forget and cause leaks.

So yeah, I can see that as a way of simplifying things. Let me know if I understood your ideas clearly.

@klauspost (Contributor) commented Sep 25, 2023

@dsnet I tested internal/zstd - there are a bunch of problems in there, and some differences in which blocks are accepted compared to zstd. This is mostly the snappy test set (input files, zstd compressed).

BenchmarkDecoder_DecoderSmall/kppkn.gtb.zst/unbuffered-32                    267           4490401 ns/op         328.38 MB/s         174 B/op          1 allocs/op
(Failed - zstd decompression error at 41415: offset past window)

BenchmarkDecoder_DecoderSmall/geo.protodata.zst/unbuffered-32               1107           1064937 ns/op         890.85 MB/s          59 B/op          1 allocs/op
BenchmarkDecoder_Internal/geo.protodata.zst-32                               669           1801050 ns/op         526.75 MB/s      554226 B/op         31 allocs/op

BenchmarkDecoder_DecoderSmall/plrabn12.txt.zst/unbuffered-32                  81          14196832 ns/op         271.53 MB/s         217 B/op          1 allocs/op
(Failed - zstd decompression error at 106901: offset past window)

BenchmarkDecoder_DecoderSmall/lcet10.txt.zst/unbuffered-32                   100          10505848 ns/op         324.96 MB/s         173 B/op          1 allocs/op
(Failed - zstd decompression error at 89343: offset past window)

BenchmarkDecoder_DecoderSmall/asyoulik.txt.zst/unbuffered-32                 327           3645659 ns/op         274.69 MB/s         150 B/op          1 allocs/op
BenchmarkDecoder_Internal/asyoulik.txt.zst-32                                188           6349734 ns/op         157.71 MB/s      615777 B/op         31 allocs/op

BenchmarkDecoder_DecoderSmall/alice29.txt.zst/unbuffered-32                  255           4717756 ns/op         257.90 MB/s          97 B/op          1 allocs/op
(Failed - zstd decompression error at 57404: offset past window)

BenchmarkDecoder_DecoderSmall/html_x_4.zst/unbuffered-32                     550           2159054 ns/op        1517.70 MB/s         109 B/op          1 allocs/op
(Failed - zstd decompression error at 14867: offset past window)

BenchmarkDecoder_DecoderSmall/paper-100k.pdf.zst/unbuffered-32              5551            211883 ns/op        3866.29 MB/s          48 B/op          1 allocs/op
BenchmarkDecoder_Internal/paper-100k.pdf.zst-32                             2862            436787 ns/op        1875.51 MB/s      672010 B/op         30 allocs/op

BenchmarkDecoder_DecoderSmall/fireworks.jpeg.zst/unbuffered-32             10000            101969 ns/op        9657.29 MB/s          48 B/op          1 allocs/op
BenchmarkDecoder_Internal/fireworks.jpeg.zst-32                             9979            149482 ns/op        6587.71 MB/s      131808 B/op          3 allocs/op

BenchmarkDecoder_DecoderSmall/urls.10K.zst/unbuffered-32                      94          12117244 ns/op         463.53 MB/s         405 B/op          1 allocs/op
(Failed - zstd decompression error at 74547: offset past window)

BenchmarkDecoder_DecoderSmall/html.zst/unbuffered-32                        1040           1153069 ns/op         710.45 MB/s          80 B/op          1 allocs/op
BenchmarkDecoder_Internal/html.zst-32                                        592           2033895 ns/op         402.77 MB/s      557693 B/op         33 allocs/op

BenchmarkDecoder_DecoderSmall/comp-data.bin.zst/unbuffered-32              14134             84502 ns/op         385.89 MB/s          48 B/op          1 allocs/op
BenchmarkDecoder_Internal/comp-data.bin.zst-32                              7734            158308 ns/op         205.98 MB/s       32780 B/op         19 allocs/op

Assembly disabled. Using streaming interface. Multithreading disabled. Bytes() []byte interface disabled. I could not use the Reset(...) function, since it doesn't provide correct output when used (see below).

So "not great, not terrible" applies it seems. With assembly the difference grows significantly. Also, I did find some bugs while testing casually.

Bugs found

I ran my fuzz set through the decoder and found some differences/bugs. I don't really know where to report them (should I open an issue?), so I collapsed them here, since they aren't super relevant for this issue.

Here are a bunch of samples: zstd-internal-problems.zip

  • test-diverge-N.zst is the input file.
  • test-diverge-N.txt contains the error message returned by the library.
  • test-diverge-N.got is the output from zstd commandline.

offset past window & invalid offset

There is a logic problem somewhere and this check fails on valid data (as can be seen above).
I suspect in some cases it ends up with wrong offsets (maybe repeats? - I can provide decoded sequence output if you want)

unexpected EOF

It seems there are a bunch of unexpected EOFs, that shouldn't really be unexpected. Not sure if it can happen on actual output of zstd.

invalid checksum: got A want B

Not sure if this is because of the offset problem - seems likely.

dictionaries are not supported

A dictionary with ID 0 is the same as "no dictionary". The decoder should ignore this.

no sequences and no literals

This is allowed - even if it is a bit silly. Check against latest versions - this is something I had clarified while testing.

(*Reader).Reset is not working

I was unable to make (*Reader).Reset work as intended and it kept just spitting out errors and wrong content. I suspect it doesn't reset all state.

@AlexanderYastrebov (Contributor)

All failures for https://github.com/klauspost/compress/blob/master/zstd/testdata/benchdecoder.zip are fixed when window size is configured according to RFC instead of zero.

When Single_Segment_Flag is set, Window_Descriptor is not present. In this case, Window_Size is Frame_Content_Size

I've created #63224 to address this.
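A pseudocode-level paraphrase of the fix (not the literal CL diff), following the RFC text quoted above:

// When Single_Segment_Flag is set, the frame header carries no
// Window_Descriptor and the window is the whole frame content.
if singleSegment {
	windowSize = frameContentSize
} else {
	windowSize = sizeFromWindowDescriptor(descriptorByte) // hypothetical helper
}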

@gopherbot

Change https://go.dev/cl/531075 mentions this issue: internal/zstd: configure window size for single segment frames

@ianlancetaylor (Contributor)

@klauspost Thanks for the bug reports, @AlexanderYastrebov Thanks for the quick fix.

@klauspost Please do feel free to open bug reports against the code on this issue tracker. Or I'll take a look either way.
Thanks.

AlexanderYastrebov added a commit to AlexanderYastrebov/go that referenced this issue Sep 26, 2023

Fix copyFromWindow when match extends past initial buffer size

For golang#62513
@AlexanderYastrebov (Contributor)

So I've tested the combined changes #63224 #63248 #63251 #63252 against both benchdecoder.zip and zstd-internal-problems.zip from #62513 (comment), and the only remaining failures are due to unsupported dictionaries.

=== RUN   TestFileSamples/01a779ca.test-diverge-5.zst
    zstd_test.go:272: zstd decompression error at 4: dictionaries are not supported
--
=== RUN   TestFileSamples/1bd2d273.test-diverge-24.zst
    zstd_test.go:272: zstd decompression error at 113: dictionaries are not supported
--
=== RUN   TestFileSamples/47db12fc.test-diverge-16.zst
    zstd_test.go:272: zstd decompression error at 4: dictionaries are not supported
=== RUN   TestFileSamples/494ab243.test-diverge-13.zst
    zstd_test.go:272: zstd decompression error at 4: dictionaries are not supported
--
=== RUN   TestFileSamples/89e8d8c9.test-diverge-31.zst
    zstd_test.go:272: zstd decompression error at 139: dictionaries are not supported
=== RUN   TestFileSamples/89e8d8c9.test-diverge-32.zst
    zstd_test.go:272: zstd decompression error at 139: dictionaries are not supported
--
=== RUN   TestFileSamples/e3b0c442.test-diverge-23.zst
    zstd_test.go:272: zstd decompression error at 4: dictionaries are not supported
=== RUN   TestFileSamples/e3b0c442.test-diverge-33.zst
    zstd_test.go:272: zstd decompression error at 70: dictionaries are not supported
--
=== RUN   TestFileSamples/e3b0c442.test-diverge-8.zst
    zstd_test.go:272: zstd decompression error at 4: dictionaries are not supported
--
=== RUN   TestFileSamples/e4325775.test-diverge-3.zst
    zstd_test.go:272: zstd decompression error at 14: dictionaries are not supported

Dictionaries with ID 0 is the same as "no dictionary". The decoder should ignore this.

This is what it does, but in the failing samples Dictionary_ID_Flag is not 0.

@ianlancetaylor (Contributor)

@AlexanderYastrebov Thanks very much.

AlexanderYastrebov added a commit to AlexanderYastrebov/go that referenced this issue Sep 27, 2023
gopherbot pushed a commit that referenced this issue Sep 27, 2023

For #62513

Reviewed-on: https://go-review.googlesource.com/c/go/+/531217 (GitHub PR #63252)
@klauspost (Contributor) commented Sep 27, 2023

@AlexanderYastrebov Great stuff!

A dictionary set with ID==0 is the same as "no dictionary" when using the official zstd, so you can safely support that.

(I can't remember if I discussed this with the author, but it wasn't explicitly stated in the spec when I implemented it)

Edit: Discussed here: facebook/zstd#2172

gopherbot pushed a commit that referenced this issue Sep 27, 2023

Set window size to frame content size when single segment flag is set.

For #62513

Reviewed-on: https://go-review.googlesource.com/c/go/+/531075 (GitHub PR #63224)
AlexanderYastrebov added a commit to AlexanderYastrebov/go that referenced this issue Sep 27, 2023

A value of 0 has the same meaning as no Dictionary_ID, in which case the frame may or may not need a dictionary to be decoded, and the ID of such a dictionary is not specified.

See facebook/zstd#2172

For golang#62513
@gopherbot

Change https://go.dev/cl/531515 mentions this issue: internal/zstd: allow zero dictionary id

gopherbot pushed a commit that referenced this issue Sep 28, 2023

For #62513

Reviewed-on: https://go-review.googlesource.com/c/go/+/531295 (GitHub PR #63251)
gopherbot pushed a commit that referenced this issue Sep 28, 2023

For #62513 (allow zero dictionary id; see facebook/zstd#2172)

Reviewed-on: https://go-review.googlesource.com/c/go/+/531515 (GitHub PR #63268)
gopherbot pushed a commit that referenced this issue Sep 28, 2023

For #62513

Reviewed-on: https://go-review.googlesource.com/c/go/+/531255 (GitHub PR #63248)
AlexanderYastrebov added a commit to AlexanderYastrebov/go that referenced this issue Sep 28, 2023
Reset r.buffer on Reset to avoid subsequent Read calls observing
previously decoded data.

For golang#62513
@gopherbot

Change https://go.dev/cl/531735 mentions this issue: internal/zstd: reset reader buffer

gopherbot pushed a commit that referenced this issue Sep 29, 2023

For #62513

Reviewed-on: https://go-review.googlesource.com/c/go/+/531735 (GitHub PR #63288)
@gopherbot

Change https://go.dev/cl/540415 mentions this issue: internal/zstd: avoid panic when windowSize is negative

gopherbot pushed a commit that referenced this issue Jan 30, 2024

This change fixes an edge case in the zstd decompressor where an int conversion could result in a negative window size.

Fixes #63979
For #62513

Reviewed-on: https://go-review.googlesource.com/c/go/+/540415 (GitHub PR #63980)
@rsc rsc changed the title proposal: compress/zstd: implement support for compressing and decompressing proposal: compress/zstd: add new package Apr 24, 2024
@rsc (Contributor) commented Apr 24, 2024

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group
