At line 79 of bzip2.go, you multiply level by 100 * 1024, while the original bzlib.c consistently uses a factor of 1000 (i.e. 100 * 1000). The documentation also specifies this in the section about allocation - see section 3.3.1 of http://www.bzip.org/1.0.5/bzip2-manual-1.0.5.html:
Parameter blockSize100k specifies the block size to be used for compression.
It should be a value between 1 and 9 inclusive, and the actual block size used
is 100000 x this figure. 9 gives the best compression but takes most memory.
So, instead of:
bz2.blockSize = 100 * 1024 * (int(level) - '0')
you should probably do:
bz2.blockSize = 100 * 1000 * (int(level) - '0')
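The size of the discrepancy per level can be checked with a small standalone sketch (the blockSize helper below is illustrative, not the actual compress/bzip2 code):

```go
package main

import "fmt"

// blockSize returns the bzip2 block size implied by a stream header
// level byte ('1'..'9'). Per section 3.3.1 of the bzip2 manual, the
// block size is 100,000 times the level digit, not 100 * 1024 times it.
func blockSize(level byte) int {
	return 100 * 1000 * (int(level) - '0')
}

func main() {
	for _, l := range []byte{'1', '9'} {
		correct := blockSize(l)
		old := 100 * 1024 * (int(l) - '0')
		fmt.Printf("level %c: correct=%d, 100*1024 value=%d, excess=%d\n",
			l, correct, old, old-correct)
	}
}
```

At level 9 the 100 * 1024 factor over-allocates by 21,600 bytes per block, so a decoder using it accepts blocks somewhat larger than the format allows.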
You are correct that the block size used in compress/bzip2 is wrong, but using a larger block size for decoding shouldn't cause any problems. It may allow the Go implementation to decode inputs that the C version would reject, but it should not cause the Go implementation to fail on anything the C version can decode.
Is this causing a problem for you?
The wrong block size is not the only inconsistency between the Go version and the "canonical" C implementation.