
proposal: spec: asymmetry between const and var conversions #6923

Open
robpike opened this issue Dec 10, 2013 · 10 comments
Labels: LanguageChange, v2 (A language change or incompatible library change)
Milestone: Proposal

Comments

@robpike
Contributor

robpike commented Dec 10, 2013

This is not a request for a language change; I am just documenting a weakness of the
current const conversion rules: it confuses people that one can convert -1 to a uint,
but only if the -1 is stored in a variable. That is,

var s = uint(-1)

is illegal: constant -1 overflows uint

The error is clear, but so is what I mean when I write this, and it's a shame that I
can't express that meaning as a constant, especially since

var m = -1
var s = uint(m)

works. There is a clumsy workaround for this case, involving magic bit operations, but
the problem can turn up in other less avoidable ways:

const N int = 1234
const x int = N*1.5

fails yet

const N = 1234
const x int = N*1.5

succeeds. (Note the missing "int" in the declaration of N.) This can be
rewritten as

const x int = N*3/2

but if the floating point constant is itself named (as with the -1 in the uint example),
it becomes impossible to express the constant value in Go even though its value seems
clear.
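
As a minimal runnable sketch of the asymmetry, including the bit-operation workaround mentioned above:

```go
package main

import "fmt"

func main() {
	// var s = uint(-1)    // does not compile: "constant -1 overflows uint"
	var m = -1
	var s = uint(m)     // legal: converting a variable wraps around (two's complement)
	var w = ^uint(0)    // the "magic bit operations" workaround: all bits set
	fmt.Println(s == w) // true
}
```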

Again, not asking for a change, just pointing out a clumsy result of the existing rules.

@rsc
Contributor

rsc commented Mar 4, 2014

Comment 1:

Labels changed: added release-none.

@griesemer
Contributor

We may be able to address this in a fully backward-compatible way:

  1. We absolutely want the compiler to complain when we write something like

const x uint = -1
var x uint = -1

This doesn't work because -1 cannot be (implicitly) converted to a uint.

  2. But we could make a distinction between implicit constant conversions (such as the ones above) and explicit conversions of the form T(x) as in uint(-1). If we had this distinction, we could still disallow currently invalid constant conversions that are implicit, but we could permit explicit constant conversions that are currently not permitted (but would be if the values were variables).

Since such code is not valid at the moment, no existing code should be affected. Code like this

const x uint64 = -1

would still not be permitted. But using an explicit conversion one could write

const x = uint(-1)
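
A short sketch of how that distinction would play out; the currently-invalid forms are left in comments, and the last line shows one way the same value can be spelled today:

```go
package main

import "fmt"

func main() {
	// Implicit constant conversion: illegal today, and would stay illegal under the proposal:
	// const a uint64 = -1
	//
	// Explicit constant conversion: illegal today, but would become legal under the proposal:
	// const b = uint64(-1)
	//
	// Today the same constant value has to be spelled non-negatively, for example:
	const c uint64 = 1<<64 - 1
	fmt.Printf("%#x\n", c) // 0xffffffffffffffff
}
```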

@ianlancetaylor
Contributor

Right now T(c) where T is a type and c is a constant means to treat c as having type T rather than one of the default types. It gives an error if c cannot be represented in T, except that for float and complex constants we quietly round to T as long as the value is not too large (I'm not sure that last bit is in the spec).

I think you are suggesting that T(c) is always permitted, but that implies that we do a type conversion, and a type conversion only makes sense if we know the type we are starting from. What type would that be? In particular, if the int type is 32 bits, what does uint64(-0x100000000) mean? That value cannot be represented in a 32-bit int, and it cannot be represented as a uint64. So what value do we start from when converting to uint64?

My point is of course not that we can not answer that question, but that this is not an area where it is trivial to make everyone happy.
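
To illustrate why the starting type matters, here is a small sketch using a variable, where the declared type (int64, chosen just for this example) fixes the representation:

```go
package main

import "fmt"

func main() {
	var v int64 = -0x100000000     // fits in int64, but not in a 32-bit int
	fmt.Printf("%#x\n", uint64(v)) // 0xffffffff00000000: two's-complement reinterpretation
	// For an untyped constant there is no declared type to start from, so it is
	// not obvious what uint64(-0x100000000) should mean when int is 32 bits wide.
}
```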

@minux
Member

minux commented Feb 12, 2015 via email

@griesemer
Contributor

@ianlancetaylor That's a good point. @minux's suggestion could work (as in uint(int(-1)), or, because -1 would "default" to int, simply uint(-1)), but defaulting to int cuts precision where we may not want it.

But I think it's not that bad, because we can easily give "meaning" to an integer constant by defining its concrete representation as two's complement, like we do for variables - without defining that representation we couldn't explain uint(x) for an int variable x either.

For a start, let's just consider typed and untyped integer constants x: in either case, they would be considered as represented in "infinite precision" two's complement. Then conversions of the form T(x), where T is an integer type, would simply apply any truncation needed and assign a type. E.g.,

int16(0x12345678) = 0x5678 of type int16
byte(-1) = 0xff of type byte

etc.
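
Today's variable conversions already behave exactly this way, as a quick sketch shows (the variables exist only to sidestep the constant rules):

```go
package main

import "fmt"

func main() {
	var a int32 = 0x12345678
	fmt.Printf("%#x\n", int16(a)) // 0x5678: keep the low 16 bits, reinterpret as int16
	var b int = -1
	fmt.Printf("%#x\n", byte(b)) // 0xff: low 8 bits of the two's-complement representation
}
```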

For floating point numbers it's similar. An untyped floating point constant would be arbitrarily precise; converting to float32 or float64 would cut the precision to 24 or 53 bits respectively, and could underflow (the value might become +/-0) or overflow (the value might become +/-Inf - some of this is currently unspecified for variables but could be tied down, incl. the rounding mode).

Along the same lines I think one could make float->int and int->float conversions work.

But we may not need to go as far. The concrete issue is conversions between integer types. We could be pragmatic and simply state that integer constants are considered represented in infinite precision two's complement, and explicit type conversions do the "obvious" truncation/sign extension and type assignment.

@rsc rsc added this to the Unplanned milestone Apr 10, 2015
@rsc rsc changed the title spec: asymmetry between const and var conversions proposal: spec: asymmetry between const and var conversions Jun 20, 2017
@rsc rsc added the v2 (A language change or incompatible library change) label Jun 20, 2017
@ianlancetaylor ianlancetaylor added the NeedsInvestigation label and removed the Thinking label Dec 6, 2017
@SophisticaSean

This recently bit me, and it took me a minute to nail down the exact issue. I'm fairly new to Go, but couldn't we add an optional ok result to these type conversions, similar to interface type assertions:

foo := -1
bar, ok := uint8(foo)
if !ok {
  panic(fmt.Errorf("%v could not be converted to a uint8", foo))
}

Making the ok return value optional would keep this backwards compatible while also providing a way to check for the problem.

Letting these conversions silently over/underflow via truncation is really confusing.
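
Until something like that exists, the check has to be written by hand; here is a sketch using a hypothetical toUint8 helper (the name and signature are made up for illustration):

```go
package main

import (
	"fmt"
	"math"
)

// toUint8 is a hypothetical helper that mimics a comma-ok conversion.
func toUint8(v int) (uint8, bool) {
	if v < 0 || v > math.MaxUint8 {
		return 0, false
	}
	return uint8(v), true
}

func main() {
	foo := -1
	if bar, ok := toUint8(foo); ok {
		fmt.Println(bar)
	} else {
		fmt.Printf("%v could not be converted to a uint8\n", foo)
	}
}
```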

@seebs
Contributor

seebs commented Aug 5, 2019

I was about to file an issue on this, specifically over:

uint64(^0) -> not valid
^uint64(0) -> valid

My proposed change (spec-wise, I don't know anything about the compiler) is basically:

The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for unsigned constants and -1 for signed and untyped constants.

=>

The mask used by the unary bitwise complement operator ^ matches the rule for non-constants: the mask is all 1s for constants interpreted as unsigned values and -1 for constants interpreted as signed values.

In short, if you use ^0 in a context where the compiler expects an unsigned constant, it should do the same thing it would have done if you'd specified ^uintN(0).

This does get more complicated in cases where the value is trickier. We know what we mean by uint(-1). It's less obvious what, if anything, would be meant by uint64(^-4294967296).

Mostly I just want to be able to write ^0 without having to think about the specific uint type I want it to be.
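
For reference, the difference is easy to see in a small sketch; the invalid form is left commented out:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// const bad = uint64(^0)  // invalid: ^0 is the untyped constant -1, which overflows uint64
	const good = ^uint64(0) // valid: complement of a typed uint64 constant, all bits set
	fmt.Println(good == math.MaxUint64) // true
}
```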

@ianlancetaylor ianlancetaylor removed the NeedsInvestigation label Apr 14, 2020
@ianlancetaylor ianlancetaylor modified the milestones: Unplanned, Proposal Apr 14, 2020
@ianlancetaylor
Contributor

This seems worth investigating more thoroughly. It needs a proper design doc before adding, but there don't seem to be any obvious problems with it.

@loralee90

I've recently run into this as well. Even when I declare the constant with a non-integer type, I still get the error. For example:

const x float64 = 66.413256
y := uint64(x)

gives me a "constant 66.4133 truncated to integer" error.
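
Going through a variable sidesteps the constant rule; a minimal sketch (the variable name is only illustrative):

```go
package main

import "fmt"

func main() {
	const x float64 = 66.413256
	// y := uint64(x)  // does not compile: the constant's fractional part cannot be represented in uint64
	xv := x         // copy the constant into a float64 variable first
	y := uint64(xv) // legal: conversion of a variable truncates toward zero
	fmt.Println(y)  // 66
}
```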

