
proposal: crypto: add crypto engines #33281

Closed
derekparker opened this issue Jul 25, 2019 · 43 comments
Labels
FrozenDueToAge Proposal Proposal-Crypto Proposal related to crypto packages or other security issues
Comments

@derekparker
Contributor

Crypto Engines

The Go standard library cryptography implementation is very comprehensive and
widely used. The problem, however, comes when a user wants to swap out the
implementation for another, while using the same interface. This has already
happened upstream with the dev.boringcrypto [0] branch. Use cases for using an
alternate implementation include FIPS compliance (where a user may need to call
into a FIPS validated module), and interfacing with external crypto devices
(e.g. accelerator cards, specialty hardware).

It would be easy enough to maintain a package that does this already. The
problem, however, comes when trying to swap the crypto implementation in a
project you do not own. Building this into the standard library would also
ensure consistency, so that there is a formal way to declare that all or part
of the crypto operations provided by the standard library should be
implemented by some outside package.

This proposal outlines a way for an alternate implementation of the crypto
functionality to be loaded and unloaded via function calls, with the alternate
crypto implementation living in a separate package maintained outside of the
Go source tree. It does all this without the user having to modify import
paths: a user loads an engine once and that change is propagated everywhere
automatically.

Engine implementation

Users should be able to load a single global engine which, if present, will be
used instead of the standard library implementation. The design allows engines
to replace all or part of the standard library implementations. There is no
concept of having multiple engines: loading an engine makes it the default for
any crypto implemented by that engine. This means that a single engine could
replace all or part of the crypto functionality present in the crypto package
in the standard library.

Engine interface

Every engine must conform to a specific interface in order to be registered
internally. The interfaces will be focused on specific areas of cryptography,
following the package boundaries in the standard library.

type Engine interface {
	AES()    AESEngine
	Cipher() CipherEngine
	DES()    DESEngine
	...
	TLS()    TLSEngine

	Setup()   error // Engines would provide their own setup method.
	Cleanup() error // Engines would provide their own cleanup method.
}

type AESEngine interface {
	NewCipher(key []byte) (cipher.Block, error)
}
...

The goal would be that all crypto operations could be handled by an engine
implementation, including TLS specific functionality (PRF, et al).

Loading

The loading of an engine should be simple and straightforward. Only a single
engine can be loaded at any given time. This means one must unload an engine
before attempting to load another one.

The user is presented with 3 options:

  1. Load an engine directly by having the engine source code and directly
    calling LoadEngine with the struct.
  2. Load an engine via Go plugin functionality [1].
  3. Load an engine based on the environment variable
    GO_CRYPTO_ENGINE=<path>. This would cause the program to load the
    engine from <path> at runtime. <path> should point to a valid Go
    plugin.

The basic outline for these functions would look like:

package crypto

// LoadEngine loads the engine specified by `e`.
// This engine will be used for any crypto functionality
// it provides, falling back to the standard library for
// any functionality not provided by the engine.
func LoadEngine(e Engine) error {
	...
}

// LoadEnginePlugin will load the plugin located
// at `path` and subsequently create and register 
// an engine from it.
func LoadEnginePlugin(path string) error {
	...
}

// engineFromPlugin wraps the plugin in an Engine
// implementation suitable for usage.
func engineFromPlugin(p *plugin.Plugin) Engine {
	...
}

It would also be very easy to unload an Engine, if need be, via:

func UnloadEngine() error {
	...
}

It is the responsibility of the engine to maintain correctness. In other words,
the engine must be responsible for maintaining locks, reference counting, or any
other method to ensure that when Cleanup is called the resources used by
that engine will not be left in an invalid state.

Usage

The standard library code will branch based on whether an engine is loaded and
whether it provides the specific implementation (i.e., both are non-nil).

For example:

// NewCipher creates and returns a new cipher.Block.
// The key argument should be the AES key,
// either 16, 24, or 32 bytes to select
// AES-128, AES-192, or AES-256.
func NewCipher(key []byte) (cipher.Block, error) {
	k := len(key)
	switch k {
	default:
		return nil, KeySizeError(k)
	case 16, 24, 32:
		break
	}
	if e := getEngine(); e != nil && e.AES() != nil {
		return e.AES().NewCipher(key)
	}
	return newCipher(key)
}

Real world use cases

The upstream Go repository currently has a branch [0] that is being maintained
separate from the main master branch which adds support for calling into
BoringCrypto [2]. Instead of maintaining an entirely separate branch, the
functionality could be rewritten as an engine and included in any project that
may need it.

Additionally, if a Go user has a requirement that their code meet FIPS
requirements and would not like to use (or cannot use) the dev.boringcrypto
branch, this new functionality would allow the user to pick their crypto library
of choice (OpenSSL, LibreSSL, et al).

Finally, if there are users who wish to take advantage of certain hardware for
crypto, that too could be accomplished via engines.

Implementation details

Originally when drafting this proposal I wanted to get deep into specifics of
how this functionality would be implemented as far as how the engine is
represented internally (after loading / assignment). However after a few drafts
and further thinking I figured I'd leave it a bit more abstract at first lest
this devolve into a code review of sorts. I'd rather the discussion be around
the idea of engines as a feature. If necessary I can draft a further design doc
on some of the proposed specifics. Otherwise, we can leave that to a later date
in a proper code review setting.

Summary

Engines would provide an easy way to swap all or part of the standard library
crypto functionality with one provided by the user. There are already real world
use cases and examples of folks using other crypto libraries either by creating
their own package [3][4][5], or even by upstream maintaining a separate
branch to support BoringSSL.

This proposal suggests a way to remove the maintenance burden and make the Go
standard library crypto implementation more extensible.


@gopherbot gopherbot added this to the Proposal milestone Jul 25, 2019
@elagergren-spideroak

elagergren-spideroak commented Jul 25, 2019

A use case: business reasons (FIPS, etc.) cause us to have to use one of a couple crypto libraries, depending on the situation. We wrote a library that copies the stdlib's crypto API and uses different "backends" based on build tags.

One of the major pain points is instead of import "crypto/aes" we have to write import "some/project.tld/crypto/aes". In order to be sure that we don't accidentally import the wrong crypto/aes, we have to have tests that scan our code base and check for bad imports. This is a fraught endeavor.

Another pain point is when libraries don't implement entirely what we need. In this case, we have to fall back to the Go stdlib which results in sloppy and error-prone backend code.

Additionally, we have no real way to switch backends at runtime depending on, e.g., whether FIPS mode is enabled on the host machine. This also means for each backend we want to test we have to re-compile the binary instead of switching engines.

Similarly, the crypto/tls package uses crypto, meaning we've had to rewrite crypto/tls to use our own backends. It would be lovely to just be able to use net/http without worrying about whether our http.Client uses an http.RoundTripper that uses the proper backend.

@ianlancetaylor
Contributor

CC @FiloSottile @agl

@Freeaqingme

Originally when drafting this proposal I wanted to get deep into specifics of
how this functionality would be implemented as far as how the engine is
represented internally (after loading / assignment). However after a few drafts
and further thinking I figured I'd leave it a bit more abstract at first lest
this devolve into a code review of sorts. I'd rather the discussion be around
the idea of engines as a feature.

I understand this desire and it does make sense; there's no need to get stuck on implementation details while still pondering whether we even need such a thing. Having said that, I would prefer to see a proposal in an early stage for the interfaces where there's an implementation of at least two separate engines.

Once too often I've seen someone gobble up "a single generic interface to rule them all", which turns out to be basically a copy of their own implementation, and not be compatible with any alternatives. If we set out to create a single interface to rule them all (as I see this proposal), we should at least know that that is realistic and that it fulfils all requirements of other (potential) engines as well.

If that turns out to be hard to do (for example because no other implementations exist yet), perhaps a small case study of other similar initiatives (e.g. in the C ecosystem) could be done to show how these evolved and where they fall short, if at all.

Finally, it's not mentioned explicitly in the proposal why this should be part of the standard library, rather than be a third party package that simply encapsulates the existing crypto stuff. Am I right in assuming it's necessary because you also want other things in the standard library to use this engine?

All in all, +1 from me :)

@ericlagergren
Contributor

@Freeaqingme I don't want to speak for @derekparker, but I know that my +1 would change to a -1 if this didn't affect the stdlib's crypto. My previous comment sort of implies why, but since the default crypto for Go is the stdlib's crypto, there'd still be lots of pain points if this was implemented as a 3rd party library.

@derekparker
Contributor Author

derekparker commented Jul 26, 2019

I understand this desire and it does make sense; There's no need to get stuck on implementation
details while still pondering on whether we even need such a thing. Having said that, I would prefer
to see a proposal in an early stage for the interfaces where there's an implementation of at least
two separate engines.

I understand that and can work to provide a working example if necessary once discussion on this proposal has progressed. I just wanted to get the conversation started here.

Finally, it's not mentioned explicitly in the proposal why this should be part of the standard library, rather than be a third party package that simply encapsulates the existing crypto stuff. Am I right in assuming it's necessary because you also want other things in the standard library to use this engine?

Sorry, I thought I had clarified that. There are a few reasons for this being part of the standard library as opposed to simply an external package. One, there are already many external crypto packages for Go and yet this proposal still exists. The issue isn't with the ability to use other crypto with the Go programming language itself; the issue is being able to swap crypto back ends without changing import paths. This is an important detail for a few reasons:

  1. crypto/tls: the TLS stack uses crypto/* (of course) and if you want to use the Go stdlib TLS to use FIPS validated crypto based on a system being booted in FIPS mode, there's really no way to accomplish that as of now, even with changes in the program code using TLS.
  2. crypto/tls: there is crypto functionality implemented in this package that would also need to be swapped out (PRF, etc).
  3. Large projects: the goal would be for projects like Kubernetes to be able to simply load a new crypto engine without changing tons of import paths and worry about diverging from the standard library crypto interfaces.
  4. Also look at the example above by @ericlagergren - program code might need to use alternate crypto based on business needs, and swapping out import paths isn't the best solution.
  5. There's probably more examples, but hopefully this is a good enough start.

The dev.boringcrypto branch (and other forks based on it) was created so that Go programs built with that version of the toolchain simply call into other crypto libs without user code having to do much, if anything. That is the real goal here: to distill and simplify this need of having to use crypto other than the pure-Go standard library crypto, and being able to do it without using an "unofficial" branch or fork of the language.

Hope that all makes sense! I'm excited by the support and discussion around this.

@jech

jech commented Aug 1, 2019

type Engine interface {
    ...
}

What happens when a new algorithm gets added to the stdlib? Do all existing engines become obsolete?

@derekparker
Contributor Author

What happens when a new algorithm gets added to the stdlib? Do all existing engines become obsolete?

Yes, that potential is there. However, looking back at the past few releases:

Go 1.12 - No new crypto functions added
Go 1.11 - Added 1 new function, 1 new method
Go 1.10 - Added 3 new functions
Go 1.9 - No new crypto functions

These changes only come in minor version updates which happen on a 6 month cadence. I feel that if a person needs to go through the trouble of using and maintaining a cryptographic library, the maintenance burden of adding ~4 new functions to said library over the span of ~2 years (from 1.9 to 1.12) doesn't seem insurmountable.

@Freeaqingme

Alternatively, would it be possible to have a single interface that returns which algos are supported, and then have a single interface per algorithm?

@derekparker
Contributor Author

Alternatively, would it be possible to have a single interface that returns which algos are supported, and then have a single interface per algorithm?

Could you sketch out what you mean by that?

@ericlagergren
Contributor

Sorta related to having a “single interface per algorithm,” check out how bazil.org/fuse/fs handles multiple optional methods.

@derekparker
Contributor Author

derekparker commented Aug 2, 2019

Sorta related to having a “single interface per algorithm,” check out how bazil.org/fuse/fs handles multiple optional methods.

ACK. That seems like an interesting addition in the interface approach. That could solve the problem of making older engines obsolete.

EDIT: For those who are following along, this is the series of interfaces I found: https://github.com/bazil/fuse/blob/master/fs/serve.go

@mundaym
Member

mundaym commented Aug 3, 2019

It would definitely be nice to have some way of transparently adding FIPS or distro managed crypto support to Go. I have some concerns with doing this at runtime though:

  1. If the Go crypto library is always used as a fall back then I don't think that there is a way to verify that you are actually using your crypto engine and not ending up in the std implementation. Typos might result in the engine code not being called and yet everything might well still work since the Go crypto library is used instead. This seems like it would be an issue for people working in regulated environments.

  2. It is likely that this proposal would make most of the crypto API opaque to the compiler because of the additional level of indirection. For example, escape analysis will fail for more parameters passed into crypto calls. If you want to use a crypto engine you are probably going to need to pay this price anyway but without significant cross-package optimizations (e.g. to figure out that LoadEngine() isn't called anywhere and therefore the engine won't be present) all crypto library users will also end up paying for it, even if they aren't using crypto engines.

Personally I think it would be nicer to have a clean way to swap out standard library packages at compile time using modules or a compiler option. That way you could statically check that the binary doesn't contain any Go crypto code (unless the replacement you are using links to it as a fallback - not sure what sort of mechanism that would use). Also this approach would have no impact on people who just want to use the standard Go crypto library as-is.

@Freeaqingme

@mundaym I'm not sure I agree that it should be required at compile time. But I think the underlying concern that it should be verifiable which engine is being used is a valid one. I suppose that's something that can also be done at runtime. E.g. a call to crypto.ActiveEngine() that returns a struct with a field to identify the engine in use. That could be printed while starting the application, shown through a http handler, etc.

@mundaym
Member

mundaym commented Aug 3, 2019

@Freeaqingme I guess it depends on your level of paranoia (or the size of the fines...) :). A call to crypto.ActiveEngine() would certainly suffice for some use cases but I don't think it can guarantee that std crypto isn't using its own implementation due to a bug.

Does anyone know what sort of guarantees are required for FIPS compliant programs? Do they just need to avoid calling non-FIPS code or should they also avoid linking against it?

@Freeaqingme

Well, if you don't use the stdlib crypto anywhere, then those symbols would be unused? They could then be stripped from the binary if so desired, right? However, were the application compiled to remove all debug symbols, it may be hard to analyze if std crypto is still used, so therefore I think it's at least required to be able to determine it at runtime. Removing them at compile time would be a nice bonus, but I think they could be stripped after compiling as well.

@magical
Contributor

magical commented Aug 4, 2019

There has been talk of potentially splitting parts of the standard library into modules. If the crypto/ packages were part of a separate module, then you could use the replace directive to replace the crypto implementation. That seems like a simpler mechanism than the one proposed here.

Additionally, if your primary goal is to use a different TLS stack, it might be easier to make just crypto/tls pluggable, as @FiloSottile has suggested in the past: #21753, #32466 (comment), #30241 (comment)

@derekparker
Contributor Author

If the Go crypto library is always used as a fall back then I don't think that there is a way to verify that you are actually using your crypto engine and not ending up in the std implementation.

This is a great point, and one that is addressed in forks such as the boringcrypto branch by panicking if a non-boring code path is executed while boring mode is activated. I agree this proposal is a bit lax on this, but to be honest I mostly wanted to get the conversation started. I think we should have a way of explicitly failing instead of falling back to the stdlib.

Does anyone know what sort of guarantees are required for FIPS compliant programs? Do they just need to avoid calling non-FIPS code or should they also avoid linking against it?

They must fail when calling non-FIPS code.

There has been talk of potentially splitting parts of the standard library into modules. If the crypto/ packages were part of a separate module, then you could use the replace directive to replace the crypto implementation. That seems like a simpler mechanism than the one proposed here.

That would be interesting, but would have to wait until Go 2.

@rsc
Contributor

rsc commented Aug 6, 2019

Maybe this is a distraction, but as a thought experiment, what if this proposal were instead "garbage collection engines" or "channel engines" or "reflect engines" or "encoding/json engines?" Would any of those make sense to try to support? Why or why not?

Obviously some other implementations of Go do substitute different garbage collection and channel and reflect implementations, though not as far as I know encoding/json. The existence of those alternate Go implementations that carry pieces more appropriate to their use case has not caused us to try to make the garbage collector or runtime or reflection systems "pluggable". I wonder if crypto is like those or more like encoding/json or somewhere in the middle.

For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why?

Is crypto different solely because of FIPS? Or something more general?

@derekparker
Contributor Author

derekparker commented Aug 6, 2019

For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why?

I appreciate the thought experiment and the discussion, however I would argue yes. My main reasoning is that nobody will be prevented from running Go binaries, or charged large fines for doing so, because of the garbage collector, runtime, channel, reflect, or encoding implementations. However, certain companies simply cannot run Go binaries in situations requiring certain standards of cryptography due to, as you mentioned, government mandates such as FIPS compliance. That's the fundamental issue here: by not allowing a standard upstream way for users to switch crypto implementations based on actual business needs and requirements, we limit what can be used in the Go ecosystem, or at least make it much more difficult.

The argument against that would be "just use an external package". However, I feel that if it really were that easy, the dev.boringcrypto branch would never have been created, and internally within Google there would be a bunch of rewrite rules or similar to swap projects using crypto/* with some other package. Also, as mentioned elsewhere in this thread, simply swapping out the crypto packages with a user-defined one does nothing for other standard library packages such as net, which will always use the stdlib crypto anyway. So now not only does a user need to use an external crypto library, they also need to use their own net/http stack which uses their own TLS implementation. One of the beautiful aspects of the Go language is the fact that you need not look much further than the standard library in order to build pretty much anything you'd like. Having to replace large chunks of that standard library to meet these business needs takes away from that, I think.

Now, along with that I think there are things that should be updated in this proposal to reflect the discussion. Namely, I think there should be some way to prevent falling back to stdlib crypto if an engine is loaded to prevent situations where non-FIPS code gets called in situations where it would be inappropriate. However I think those details can be hashed out later in this thread, certainly before any kind of implementation. I just want to address the basic need for something like this and stress that cryptography is fundamentally different than a runtime or garbage collector.

@Freeaqingme

Freeaqingme commented Aug 6, 2019

For dev.boringcrypto I of course had to answer that question, and I answered it by saying that crypto was like other key pieces like GC and runtime and that if you changed it you simply had a different Go. Is that the wrong answer? If so, why?

That's a legit question I suppose. I'm not 100% informed when it's about the alternate GC or runtime implementations, so I might make some (incorrect) assumptions here...

I assume the alternate GC or runtime stuff is implemented because of technical considerations. It's done by people who have a different vision on certain technical aspects of these components, and therefore want to do it themselves. Simply because they can do it "better" (mind the quotes! :)).

However, this proposal is not about companies that want to swap the crypto implementation because of technical reasons. If they feel they can do better, they may temporarily fork it, but only to contribute those changes back to upstream later (like CloudFlare did with the TLS asm implementation).

The companies that are looking to use a FIPS implementation (and rule out usage of stdlib crypto), do so because of business considerations, rather than technical considerations. They're not looking to maintain a fork because they can do it better, they simply have a business requirement to substitute a single component (crypto) with another. That's why I think that allowing for alternate crypto implementations is different from allowing alternate GC or channel implementations.

Of course we can say that those concerns are not the concerns of the Golang community and therefore a fork is the way to go. Had this been proposed 10 years ago, it might have been very well because the language was less mature and resources were better spent on other things. I think that now that Go is gaining more and more traction, it could be a good moment to consider it.

In the early days of Android, vendors also wanted functionality that was not present in the stock Linux kernel. This in turn led to many forks, often barely updated and hardly ever kept in sync with upstream. This reflected badly on the Linux community and led to a worse user experience for the end user. In the end, much effort (by Greg Kroah-Hartman among others) was invested into getting it all back into sync. This in turn has led to an increased amount of available developer hours on a single 'implementation' of Linux, rather than having them split between the two.

I realize that Go != Linux and that a programming language != kernel, but it could be an argument of why having forks could be a bad thing in the long term.

@andybons andybons added the Proposal-Crypto Proposal related to crypto packages or other security issues label Aug 6, 2019
@FiloSottile
Contributor

Let me start by saying that as the current maintainer of dev.boringcrypto, I am sympathetic to the pain of maintaining a fork, and I understand where this is coming from.

I think you make a clear argument that there is a business case for swapping crypto backends. I'm not convinced it's qualitatively different from the business case for swapping other parts of the language though: for example, the GC and runtime make Go incompatible with embedded devices, which also "limit[s] what can be used in the Go ecosystem".

There are however ways in which swapping cryptography backends is peculiar technically, and they all suggest leaning against this proposal.

  • It was tried before, not with great success. I regularly hear complaints about the disparate implementations of the Java Cryptography Extension, and the OpenSSL engines add a lot of complexity to their codebase.
  • The cryptography standard library is not a monolithic and stable feature set, like garbage collection or channels might be. New algorithms are added regularly, and having to account for the possibility that they might not be implemented by the engine would add a layer of complexity in some of the most gnarly places, like TLS certificate selection and algorithm negotiation.
  • Cryptography is complex by nature, so a lot of its complexity budget is spent on the algorithms themselves, leaving less space for this kind of flexibility. This is acknowledged by golang.org/design/cryptography-principles in two places: the first instrument of Secure being "reduced complexity", and Practical specifying that we "focus[] on common use cases to stay minimal".

Overall, it's a tradeoff. Not making cryptography engines pluggable puts the cost of maintaining a FIPS fork on the organizations that need it (Google included), instead of adding complexity to the standard library used by most. Also considering that it's mostly well-staffed organizations that need FIPS compliance, I think that's the right call, and I don't think we should implement this.

Maintaining one of the simplest cryptography libraries in the whole ecosystem is an exercise in resisting complexity, and it sometimes comes at the cost of this kind of flexibility.

(This focuses on the FIPS case of switching the whole engine, because for providing hardware implementations of specific values we have interfaces like crypto.Signer, and if we need to add more like it you should feel free to open specific issues, as I'd be happy to consider them. They already proved valuable and powerful.)

@derekparker
Contributor Author

derekparker commented Aug 7, 2019

Thanks for your thoughtful reply @FiloSottile.

Let me start by saying that as the current maintainer of dev.boringcrypto, I am sympathetic to the pain of maintaining a fork, and I understand where this is coming from.

I was definitely awaiting your reply and thoughts on this as we have talked before about the dev.boringcrypto branch in private mail and I knew we were likely experiencing the same pain points.

I think you make a clear argument that there is a business case for swapping crypto backends. I'm not convinced it's qualitatively different from the business case for swapping other parts of the language though: for example, the GC and runtime make Go incompatible with embedded devices, which also "limit[s] what can be used in the Go ecosystem".

Thank you. One counter I would provide here is that Go was never intended as a language suitable for embedded devices. For a language geared towards writing performant server software, I think that differentiates the crypto packages: they are a necessity these days, since running a server without TLS and proper security in any kind of production environment is irresponsible at this point.

There are however ways in which swapping cryptography backends is peculiar technically, and they all suggest leaning against this proposal.

  • It was tried before, not with great success. I regularly hear complaints about the disparate implementations of the Java Cryptography Extension, and the OpenSSL engines add a lot of complexity to their codebase.
  • The cryptography standard library is not a monolithic and stable feature set, like garbage collection or channels might be. New algorithms are added regularly, and having to account for the possibility that they might not be implemented by the engine would add a layer of complexity in some of the most gnarly places, like TLS certificate selection and algorithm negotiation.
  • Cryptography is complex by nature, so a lot of its complexity budget is spent on the algorithms themselves, leaving less space for this kind of flexibility. This is acknowledged by golang.org/design/cryptography-principles in two places: the first instrument of Secure being "reduced complexity", and Practical specifying that we "focus[] on common use cases to stay minimal".

I appreciate this feedback and the time you took to provide specific examples of where this might be a burden in the future if implemented as described.

Overall, it's a tradeoff. Not making cryptography engines pluggable puts the cost of maintaining a FIPS fork on the organizations that need it (Google included), instead of adding complexity to the standard library used by most. Also considering that it's mostly well-staffed organizations that need FIPS compliance, I think that's the right call, and I don't think we should implement this.

Maintaining one of the simplest cryptography libraries in the whole ecosystem is an exercise in resisting complexity, and it sometimes comes at the cost of this kind of flexibility.

I completely understand and agree that I wouldn't want to do anything to unnecessarily complicate the language, as the simplicity of it is one of the main factors that drove me towards being a Go programmer many years back now.

I guess my last question would be: are we doomed to maintain forks for the rest of time, or is there still room for brainstorming a solution which has an acceptably small complexity vs flexibility tradeoff? As I mentioned before, if this proposal is not "the one" I'm not offended, my goal was to provide a jumping off point and open the conversation.

@elagergren-spideroak
Copy link

With respect to comparing alternate garbage collectors and alternate cryptography implementations, there's one pertinent difference: the business reasons for cryptography backends usually necessitate multiple backends. I'm unaware of any business reasons that would require, say, three different garbage collectors or runtimes.

On Windows, the correct option for a FIPS-compliant Go package is to use the CNG API (in particular, BCrypt.dll or NCrypt.dll). On Linux, one might choose OpenSSL. In addition to OS-specific packages, however, those who need FIPS compliance often have to deal with client-specific libraries or hardware.

This greatly complicates forks, because now instead of just one BoringSSL fork you're required to maintain N forks. Or, try to combine the forks into one. But that's not entirely possible because you can't always include client A's cryptography library (or your own code that accesses client A's hardware) in your fork if that fork is provided to client B.

And often you have to provide your fork to client B because the type of clients that require FIPS compliance quite often need to build the software themselves, or at least have a trusted intermediary build it for them.

For something like FIPS-compliant cloud storage this probably isn't a big deal. You can pick and choose your desired VM and only maintain a fork that runs on that VM. But the lack of pluggability makes writing software that the clients run on their own machines incredibly difficult.

Also considering that it's mostly well-staffed organizations that need FIPS compliance...

I very strongly disagree. Large companies like Google, Amazon, Cloudflare, etc. are not the only companies that require FIPS compliance.

@derekparker
Copy link
Contributor Author

With respect to comparing alternate garbage collectors and alternate cryptography implementations, there's one pertinent difference: the business reasons for cryptography backends usually necessitate multiple backends. I'm unaware of any business reasons that would require, say, three different garbage collectors or runtimes.

On Windows, the correct option for a FIPS-compliant Go package is to use the CNG API (in particular, BCrypt.dll or NCrypt.dll). On Linux, one might choose OpenSSL. In addition to OS-specific packages, however, those who need FIPS compliance often have to deal with client-specific libraries or hardware.

This is a great point as well. I'm coming from the point of view of maintaining a fork calling into OpenSSL (as opposed to BoringSSL) for RHEL, however the idea of potentially needing multiple forks within the same organization is a major pain point for consumers.

@derekparker
Copy link
Contributor Author

I just wanted to ping this thread as it seems the discussion may have stalled. As previously mentioned, this proposal may not be the silver bullet, however I would like to continue moving things forward. Ideally, users will not have to maintain forks of the stdlib in order to use a different crypto library.

If this isn't the solution, what can we do or propose going forward to alleviate the pain of those of us who are trying to use Go in a FIPS related setting?

cc @FiloSottile @rsc @agl

@upsampled
Copy link

upsampled commented Oct 28, 2020

Here is my attempt to greatly summarize the main points and propose a compromise, as FIPS 140-3 is coming very quickly and this would be the time to address it:

@FiloSottile brought up the key problems with Crypto Interface Engines (summarized greatly):

  1. Behavioral differences in engine implementations cause issues
  2. The interface will be prone to change due to new algorithms
  3. It can be difficult to package complex behavior settings for an implementation into a universal interface

@rsc pointed out that this is mainly being driven by FIPS. @Freeaqingme brought up that performance on specific machines could be another driver for crypto engines, but that seems to be a secondary concern: without FIPS, code cannot run in certain environments at all, whereas better performance only makes it run faster. This also tracks with the fact that @derekparker and @FiloSottile are both maintaining FIPS branches using different (but related) backends.

Right now the suggested paths forward have been:

  • Nothing changes: BoringCrypto and OpenSSL are kept in two different branches, but share much of the same code
  • The interface is adopted; a large effort is spent to prevent @FiloSottile's three pitfalls, and while this opens up many different engine implementations (personal note: hw accelerators are cool), it will mainly be used to just drop in FIPS.

Here is my suggested compromise: implement a crypto engine interface solely in the FIPS branches.

type FIPSEngine interface {
}

Addressing the 3 pitfalls in reverse order:

  1. It can be difficult to package complex behavior settings for an implementation into a universal interface
    These behavior settings are already being set in the mentioned FIPS branches. I would throw them all into a tls.Config-like struct and worry about cleaning them up later.

  2. The interface will be prone to change due to new algs
    FIPS locks the algorithms used so the interface should change much less frequently.

  3. Behavioral differences in engine implementations cause issues
    The behavior is defined by FIPS and by the implementation in the main FIPS branch (most likely BoringCrypto); it is up to everyone else to figure out how to implement it. This should allow the OpenSSL and Boring branches to coordinate more. It also allows for alternatives to be considered, but puts the pressure on those alternatives to match the behavior defined by FIPS.

Is this perfect? No, because I am sure someone who wants to encrypt tons of data on disk may not care about some of the strict FIPS requirements. This should, however, help everyone coordinate on their main requirement: FIPS. It also opens the door to alternative backends (hardware and software). Who knows, it could even allow more Go code to be used and drop the cgo overhead.

@rsc
Copy link
Contributor

rsc commented Aug 12, 2022

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

@rsc
Copy link
Contributor

rsc commented Aug 17, 2022

Added to active to try to move discussion toward a decision. I asked in #33281 (comment) about the motivation, and it sounds like the answer is "for FIPS" not "because crypto should be pluggable generally".

We still don't support uses of BoringCrypto at all, but it's worth noting that dev.boringcrypto has been merged into the main tree. So replacing crypto should be a matter of replacing just the one crypto/internal/boring package. Does that make it easier for people to maintain forks? Or does that provide a useful chokepoint for other replacements?

@upsampled
Copy link

upsampled commented Aug 17, 2022

Ignoring FIPS compliance, there is still a good argument for "pluggable crypto" specifically to take advantage of crypto hardware. I currently cannot take advantage of a crypto accelerator in one of my embedded targets. While crypto accelerators will likely remain niche and covered by a fork, I think things like YubiKey and TPM integration are becoming more normal. I have not developed for the YubiKey yet, but I believe it is capable of performing all the needed PKI actions for TLS 1.3 internally, so the private key would not have to leave its boundary. The only way I have seen Go leverage a YubiKey so far is to extract the key from it.

@FiloSottile
Copy link
Contributor

Hardware modules are supported through the crypto.Signer interface and friends, and there are definitely packages that do that, such as piv-go. If there are missing interfaces, we could consider adding those. In my experience, interfaces have been a cleaner way to support hardware modules than the engines approach of OpenSSL or the PKCS#11 approach of NSS, both of which caused complexity explosions.

@upsampled
Copy link

upsampled commented Aug 17, 2022

@FiloSottile I don't see any examples of it, but yes, crypto.Signer should allow the PKI operations to be offloaded to the YubiKey. Also, tls.Config.Rand is an io.Reader that allows offloading randomness generation. I guess the question is: why not provide the same offloading capability for other cryptographic operations (symmetric key signing and ciphers, hashing, etc.)?

As for the engine vs. interface debate, I think interfaces like crypto.Signer could at least greatly reduce the effort of making sure an application performs cryptographic operations only in the way the app developer intended.

@derekparker
Copy link
Contributor Author

Added to active to try to move discussion toward a decision. I asked in #33281 (comment) about the motivation, and it sounds like the answer is "for FIPS" not "because crypto should be pluggable generally".

I think you’re correct in that being the main motivation, however I also feel that FIPS in and of itself is enough motivation to provide support for this feature. FIPS is a real-world requirement with regard to cryptography that companies must deal with. Go, having a cryptography package, should also have facilities to enable this sort of compliance out of the box. Go is very much a “batteries included” language with a robust and comprehensive standard library, so forcing users who must care about FIPS due to regulations into maintaining or using a fork of the standard library seems like the wrong approach. I would very much be interested in ways to make Go cryptography FIPS certifiable by somehow being able to have a Go crypto plugin (read: shared library of sorts) that can be certified, alleviating the need to use another crypto implementation, but that also implies the standard library being able to load a cryptographic implementation and use it, which is essentially what we’re already talking about here.

We still don't support uses of BoringCrypto at all, but it's worth noting that dev.boringcrypto has been merged into the main tree. So replacing crypto should be a matter of replacing just the one crypto/internal/boring package. Does that make it easier for people to maintain forks? Or does that provide a useful chokepoint for other replacements?

This change does indeed make things somewhat simpler from a fork maintainer's perspective, however I think it's still not ideal. Carrying patches to weld things together can be brittle and time-consuming, as rebasing can get messy over time, and in the end we're still not shipping or using the "real" upstream Go toolchain, which carries with it its own set of risks and complications. A simple solution could be to allow Go module replace directives to replace stdlib packages so that one could simply swap that package out with another in a more supported way, however that proposal has already been declined previously[1].

Hardware modules are supported through the crypto.Signer interface and friends, and there are definitely packages that do that, such as piv-go. If there are missing interfaces, we could consider adding those. In my experience, interfaces have been a cleaner way to support hardware modules than the engines approach of OpenSSL or the PKCS#11 approach of NSS, both of which caused complexity explosions.

It’s true that interfaces allow for swapping out certain implementations, but that only affects code the user is writing directly. Other packages, and even other stdlib packages such as crypto/tls, may still be using the regular stdlib implementation. What this proposal allows for that interfaces don’t is the ability to swap out the entire crypto package implementation for all packages, even ones that are only included as a dependency that the developer did not write themselves.

@qmuntal
Copy link
Contributor

qmuntal commented Aug 18, 2022

We still don't support uses of BoringCrypto at all, but it's worth noting that dev.boringcrypto has been merged into the main tree. So, replacing crypto should be a matter of replacing just the one crypto/internal/boring package. Does that make it easier for people to maintain forks? Or does that provide a useful chokepoint for other replacements?

Speaking as a maintainer of the Microsoft Go fork, which implements CNG and OpenSSL crypto backends: yes, merging dev.boringcrypto back into the main tree has made our lives much easier. This is a small experience report:

  • The release infrastructure only needs to track one release branch, no more parsing of misc/boring/RELEASES.
  • Boring releases shipped with a couple of days of delay, which was problematic for security releases.
  • Boring is now a first-class citizen in the crypto packages, even if it is internal, which leads to a better boring integration. This translates into less patching and hacks on our side. For example, GOEXPERIMENT=boringcrypto is a pattern that we have replicated with GOEXPERIMENT=opensslcrypto and GOEXPERIMENT=cngcrypto, and having crypto/internal/boring/bcache as a separate package allows us to use it without depending on the whole boring package.

Going back to the Crypto Engines proposal, my gut feeling is that it would make our work as engine (aka backend) maintainer more difficult:

  • Reliably implementing UnloadEngine is not trivial, nor is it clear to me what the expectations are if there are engine-allocated live objects when unloading.
  • This proposal does not mention how to deal with engines that can't implement the full engine contract. For example, our CNG backend does not implement SHA224, AES CTR mode, ECDSA P-224, and rsa.PSSSaltLengthAuto because the underlying library does not have these capabilities.

Summarizing, I agree with @FiloSottile when he says that crypto.Signer and friends are the right level of abstraction for swapping crypto implementations, unless you need FIPS or some other niche requirement that is all-or-nothing.

On the other hand, @derekparker made a good point with this proposal: rolling your own crypto should be easier. Merging dev.boringcrypto is a step in the right direction, and I think that this path still has potential to simplify our work. I would explore it before turning to crypto engines.

@derekparker
Copy link
Contributor Author

derekparker commented Aug 18, 2022

Thanks for your detailed input @qmuntal!

Going back to the Crypto Engines proposal, my gut feeling is that it would make our work as engine (aka backend) maintainer more difficult:

  • Reliably implementing UnloadEngine is not trivial, nor is it clear to me what the expectations are if there are engine-allocated live objects when unloading.

I think that can be part of the overall discussion. If unloading an engine proves problematic maybe it's not something we should allow. E.g. loading an engine would be a one-time, irreversible step.

  • This proposal does not mention how to deal with engines that can't implement the full engine contract. For example, our CNG backend does not implement SHA224, AES CTR mode, ECDSA P-224, and rsa.PSSSaltLengthAuto because the underlying library does not have these capabilities.

The original proposal allows for this in a way. The engine itself is represented by an interface, with different algorithm types as methods. Taking a look back at the code example:

...
	if e := getEngine(); e != nil && e.AES() != nil {
		return e.AES().NewCipher(key)
	}
...

You could imagine a scenario where an engine implementation only implements part of the interface, returning nil for the parts not implemented and falling back to the stdlib implementation.

Summarizing, I agree with @FiloSottile when he says that crypto.Signer and friends are the right level of abstraction for swapping crypto implementations, unless you need FIPS or some other niche requirement that is all-or-nothing.

What about the case for wanting to ensure that every package in your program is using the crypto implementation you choose? Even with this interface approach the code you write can use whatever crypto you want, but if you import a package that imports crypto/aes directly there's nothing you can do about it at this point. I think you should be able to change things consistently across your entire binary.

I think that if these interfaces were sufficient for accomplishing what we all would like to accomplish, then the dev.boringcrypto branch would never have been created in the first place, same with this proposal.

On the other hand, @derekparker made a good point with this proposal: rolling your own crypto should be easier. Merging dev.boringcrypto is a step in the right direction, and I think that this path still has potential to simplify our work. I would explore it before turning to crypto engines.

I agree that we should exhaustively explore all options, crypto engines being just one of them and a way to get this conversation started with at least one concrete direction. Allowing replace directives to work against stdlib packages, or making the crypto package loadable via plugins (which could then mean the stdlib crypto library could be FIPS certified), or allowing for an engine approach similar to what's described in this proposal are all ideas.

To be clear I'm not staunchly in favor of the approach I proposed, I just wanted to get the conversation started and provide at least one path forward. The ultimate goal is to be able to supply user-provided crypto without the use of patch files, forks, etc...

@mwhudson
Copy link
Contributor

I have started to write this comment half a dozen times and it's probably still a bit incoherent. Sorry about that.

One of the factors here I am not sure of is how much of a goal it is that a FIPS-certified system contains no non-certified crypto code at all (certainly it is easier to prove that code that is not present is not being executed). If that is a goal, then a compile-time approach (like GOEXPERIMENT=boringcrypto) certainly makes it easier.

It would be convenient for us (Ubuntu/Canonical) if we didn't have to recompile and separately distribute each Go binary that might be used on a certified system, but that seems incompatible with the above unless we build all Go applications to use openssl for crypto (which is probably OK for us as a distro maintainer, but it's quite a divergence from upstream), or this proposal is adopted to such an extent that all Go programs use some plugin-based interface to find their crypto code from a system path (which just doesn't seem like how Go works). I'm curious how other distros are handling this issue?

@rsc
Copy link
Contributor

rsc commented Oct 5, 2022

Our experience with BoringCrypto has been that each backend ends up imposing new constraints on the standard library. To the extent that the existence of BoringCrypto in the tree makes other backends easier to drop in, that's great. But I don't think we want to take on the extra work of supporting a general backend interface, for the reasons given earlier in the discussion.

@dagood
Copy link
Contributor

dagood commented Oct 5, 2022

It would be convenient for us (Ubuntu/Canonical) if we didn't have to recompile and separately distribute each Go binary that might be used on a certified system, but that seems incompatible with the above unless we build all Go applications to use openssl for crypto [...]. I'm curious how other distros are handling this issue?

I'm also a maintainer of the Microsoft Go fork, and our fork is starting to be used by the CBL-Mariner Linux distro for FIPS compliance. In Go 1.19+, our GOEXPERIMENT=opensslcrypto go build ... produces an app that always[1] uses OpenSSL for crypto.

Yep: we expect this decision means CBL-Mariner will need to build all Go apps that are part of the distro to always use OpenSSL, so the distro's FIPS mode works with the Go apps in the distro. However, we haven't explicitly confirmed this plan with the CBL-Mariner team. (Our FIPS focus is currently on our non-Linux-distro binary archive builds.)

Red Hat has historically been strict about FIPS requirements at runtime (no fallbacks, extra FIPS-based limitations on which OpenSSL APIs you can call) but plans to move to a GOEXPERIMENT compile-time decision as well, as of microsoft/go#641 (comment). Using OpenSSL for non-FIPS scenarios will need some refactoring to loosen those runtime safeguards.

I'm not a FIPS expert, but my understanding is that this doesn't have any major effect on FIPS compliance or certification. Limiting the available APIs might make some initial FIPS compliance exploration easier (run tests under FIPS mode -> fix errors) but it's always possible to use permitted APIs in a FIPS-noncompliant way, so it seems to me the code still needs review regardless.

Footnotes

  1. Some operations aren't supported by OpenSSL (or CNG, on Windows), so we have some places in our fork that fall back to Go crypto to avoid breaking compatibility with existing Go apps in non-FIPS scenarios. This is also for compatibility--we need our fork to work as a drop-in replacement.

@amnonbc
Copy link

amnonbc commented Oct 7, 2022

The Zen of Python says:
There should be one– and preferably only one –obvious way to do it.

This is a good principle for Go too.

So should Go allow us to choose between different crypto engines, to choose
different flavours of security? Or should it just provide one high-quality engine which does the right thing?
The Go way is to be opinionated and do the latter.

If the Go stdlib is updated to allow me to choose which crypto engine to use,
then how do I choose which engine to use? And how will this make my code more secure?

Some people have argued that making Crypto plug-able is necessary in order to comply with government rules.
If the US government can demonstrate that the Go standard Library suffers from vulnerabilities which are fixed by FIPS compliance, then I am sure that these vulnerabilities will be fixed by the Go team.
But government rules can change over time and in arbitrary ways. In the 1990s, the US government ruled that
all software written for the Department of Defense should be written in Ada. Who knows what governments will mandate in the future. Should we make all aspects of the language pluggable so that we can comply with current and future government rules? That way lies madness...

@rsc
Copy link
Contributor

rsc commented Oct 12, 2022

Based on the discussion above, this proposal seems like a likely decline.
— rsc for the proposal review group

@derekparker
Copy link
Contributor Author

I completely understand the arguments towards keeping the language simple and continuing to provide the "batteries included" approach to the standard library, and especially the mindset of having one and simply one correct way to do things. These are all aspects that drew me to this language and community in the first place, and I wouldn't want to sacrifice them.

That being said, I believe we have already added complexity with the merging of the boringcrypto codebase into the standard library. Would this proposal have any merit / chance of acceptance if we continue in the direction of hiding the complexity of its implementation behind a GOEXPERIMENT flag? That way anybody who does not need the complexity of swapping crypto backends and dealing with FIPS doesn't have to worry about it, and we can isolate these changes a bit more.

(cc @rsc)

@rsc
Copy link
Contributor

rsc commented Oct 20, 2022

No change in consensus, so declined.
— rsc for the proposal review group

@derekparker
Copy link
Contributor Author

No change in consensus, so declined. — rsc for the proposal review group

Any thoughts on my last comment? #33281 (comment)

@golang golang locked and limited conversation to collaborators Oct 20, 2023
Sign up for free to subscribe to this conversation on GitHub. Already have an account? Sign in.
Labels
FrozenDueToAge Proposal Proposal-Crypto Proposal related to crypto packages or other security issues
Projects
No open projects
Development

No branches or pull requests