
cmd/go: [modules + integration] go mod split, propose loop-breaking module splits #31361

Open
nim-nim opened this issue Apr 9, 2019 · 7 comments


nim-nim commented Apr 9, 2019

This report is part of a series, filed at the request of @mdempsky, focused on making Go modules integrator-friendly.

Please do not close this issue or mark it as a duplicate before making sure you have read and understood the general context. A lot of work went into identifying problem points precisely.

Needed feature

Go needs a go mod split command that suggests a module split sufficient to remove a module dependency cycle.

Constraints

  • the input should be a module participating in one or several dependency cycles:
    • a module path, or
    • a module path + module version, or
    • the filesystem path of a specific module descriptor:
      • either the go.mod file at the root of an unpacked module tree, or
      • a mod file inside a goproxy hierarchy
  • the output should be a list of Go packages that need to be moved to separate modules, in one or more of the modules participating in the cycle(s), to break the dependency loop
  • the suggested split should ideally be the best one (the most minimal), but any suggestion would be better than the no-tooling situation that exists today; a hypothetical invocation is sketched below
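To make the expected behaviour concrete, here is a purely hypothetical invocation. Neither the command nor its output format exists today; the module paths and the suggested packages are invented for illustration only:

```
$ go mod split github.com/example/alpha@v1.2.0
cycle: github.com/example/alpha -> github.com/example/beta -> github.com/example/alpha
suggested split:
	move github.com/example/alpha/metrics
	into a new nested module github.com/example/alpha/metrics
	(replaces the edge beta -> alpha with beta -> alpha/metrics)
```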

Motivation

While Go does not allow package import cycles, it does allow module dependency cycles. This is a huge problem for integrators, because a module cycle effectively means it is not possible to compute a step-by-step CI/CD integration plan for a set of third-party modules.

Therefore, we would like a command that suggests to module authors how to break module dependency cycles when they occur.

Because Go forbids cycles at the package import level, breaking module cycles should be no more complex than putting sets of packages in separate nested modules.


rsc commented Apr 11, 2019

If we wanted to disallow module dependency cycles, we would have done that. We explicitly allow module dependency cycles, for good reasons: they are useful for splitting modules and for making sure that you don't end up with a new half-module and an old full-module in combination. We are not going to add a command that makes it seem like they are a problem. They are not a problem. They are a feature. Sorry.


nim-nim commented Apr 11, 2019

@rsc repeating that they are not a problem won't make them any less of a problem.

Cycles are a huge problem on the integration side. They effectively remove the modular part of modules.

They kill any prospect of splitting integration work into manageable, step-by-step pieces. They drastically increase the time and manpower needed to integrate new versions and updates, because they make it impossible to move one link in the cycle without the others. Even projects that do not do fine checking of their dependencies will delay vendor refreshes for months in the presence of a cycle, because cycles make any update a whole-cycle replacement, too big and scary to do regularly.

Cycles lower integration quality, because the integrator's energy is consumed dealing with the cycle instead of making sure the code works. If he deals with the cycle by forcibly breaking it up downstream, he ends up in conflict with upstream, and the level of QA sharing drops, since everyone ends up breaking the cycle in different ways. If he tries to integrate the whole cycle in one operation, he will usually be worn out by too much code at once and do a bad job. If he tries to pretend the cycle does not exist once all the components have been imported one way or another, and QAs each of them separately, he will usually test the wrong things, because changing one link in the cycle propagates back along the cycle.

Cycles cause people to burn out and reorient themselves to less exhausting languages. They will probably cost us months of delay in switching our Go stack fully to modules, and that is while taking liberties with the QA level and breaking the modules artisanally to limit the delay. Go modules invalidate the decycling already done in the past in GOPATH mode.

I sure hope they have some benefits, because the cost is high, so high that the cost/benefit analysis will usually end up negative.

There's no way anyone, no matter how smart and conscientious, can deal with cycles like google cloud requiring opencensus requiring prometheus, requiring half the Go codebase in the middle. Or the utter deadlock of the moby* codebase, where every individual project cycles back on the others, making it impossible to define a manageable upgrade plan (so each of the components in the cycle ends up mass-vendoring obsolete versions of the other cycle elements).

So de-cycling will happen no matter how much you like the feature. The only question is whether decycling can be done in a controlled, coordinated, collaborative, efficient, tooled way, or whether it will be done manually, in hit-and-miss mode, behind upstreams' backs.

And nothing here is specific to Go; other languages have allowed and still allow component cycles, so the effects of cycles are well understood by now.


jeanbza commented Apr 11, 2019

This is a huge problem for integrators, because a module cycle effectively means it is not possible to compute a step-by-step CI/CD integration plan for a set of third-party modules.

For my edification, could you expand on or reword this? I am having a very hard time understanding what this functionally means.

It’s being filed at the request of @mdempsky in golang-dev

Could you link to the contextual history for this issue? Your link leads to the top of a very large thread that doesn't appear to have anything to do with cycles, but I'm sure I missed it.

There's no way anyone, no matter how smart and conscientious, can deal with cycles like google cloud requiring opencensus requiring prometheus, requiring half the Go codebase in the middle.

Hi, I maintain google-cloud-go, which I assume is what you're referring to here. Is there a problem we can help with? Please feel free to file an issue at github.com/googleapis/google-cloud-go/issues. I'm sure the opencensus team is happy to help, too. We (cloud+opencensus) just dealt with the github.com/golang/lint problem which involved some cycle shenanigans, but it was relatively straightforward to deal with. Is that what you ran into? Happy to help with any other issues, though.

Also, FWIW, I think the line about "no way anyone [...] can deal with cycles like google cloud requiring opencensus [...]" is maybe a bit hyperbolic... :)

So de-cycling will happen no matter how much you like the feature. The only question is whether decycling can be done in a controlled, coordinated, collaborative, efficient, tooled way, or whether it will be done manually, in hit-and-miss mode, behind upstreams' backs.

Are you looking for tooling to identify cycles? I've been working on https://godoc.org/golang.org/x/exp/cmd/modgraphviz, and one of the things I hope to put in soon is some cycle visualization stuff. Perhaps this or some other community tool might be what you're reaching for?


nim-nim commented Apr 11, 2019

@jadekler thanks for taking a look at things.

This was not meant to be google-cloud-go specific, or I would have used the google-cloud-go issue tracker. I used google-cloud-go as an example because it sits deep in the middle of the dependency graph of numerous Go projects, so its cycles (and the curse they represent for our integration workflows) are well known. If the generic tooling part were done, we could then work with willing projects like google-cloud-go to make things more manageable in all workflows.

This is a huge problem for integrators, because a module cycle effectively means it is not possible to compute a step-by-step CI/CD integration plan for a set of third-party modules.

For my edification, could you expand on or reword this? I am having a very hard time understanding what this functionally means.

Basically, our tooling enforces that component B cannot use component A before component A passes CI/CD checks. Any component cycle means we have a chicken-and-egg problem.

This is a deliberate, decades-old core design decision, to force component owners to check changes in their components in isolation from changes in other components. Being able to manage changes in isolation means importing an urgent security fix requires re-qualifying the affected component only, not the whole dependency graph. Different people can specialize on different codebases. Upgrade paths work (components cannot assume the rest of the world changes in lockstep with them). There are probably other good properties I'm so used to that I'm forgetting them now.

Therefore, we like a nice directed acyclic dependency graph. It makes it easy to compute a step-by-step integration plan, assign people to look at each step's result, bring up new hardware architectures from zero, etc. That's no different from the way the compiler computes a cycle-free build plan.

Ideally there is a strong correspondence between the order of the compiler build plan, our integration build plan, and the way upstreams split their projects. This way the consequences of software defects and vulnerabilities are as clear to humans as the build plan is to the compiler.

We recognize that organisational constraints may make it too burdensome for upstream software organisations to release their code as strictly hierarchical projects. In that case we expect, like the compiler, software releases to be split into sets of components that can be rearranged into a cycle-free graph.

In the case of the prometheus -> google-cloud -> opencensus -> prometheus cycle, that would mean splitting the part of google-cloud that needs prometheus via opencensus into a separate nested Go module (so just one new go.mod in google-cloud releases). This way a new architecture bring-up, or a strict QA check, can run google-cloud through CI/CD without the prometheus-using submodule, then progress to prometheus step by step, then return to google-cloud and re-do it with the submodule enabled.
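As a rough sketch of what that nested module's descriptor could look like, assuming a hypothetical monitoring/metrics subdirectory (the module path, versions, and requirements below are invented for illustration; this is not google-cloud's actual layout):

```
// monitoring/metrics/go.mod -- hypothetical nested module that carries the
// cycle-forming dependency chain, so the root go.mod no longer requires it.
module cloud.google.com/go/monitoring/metrics

go 1.12

require (
	cloud.google.com/go v0.37.0 // the root module, now outside the cycle
	go.opencensus.io v0.20.2 // pulls in the prometheus client
)
```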

(google-cloud participates in other cycles too, IIRC at least oauth and gax; opencensus is just the latest one that crept in.)

Also, FWIW, I think the line about "no way anyone [...] can deal with cycles like google cloud requiring opencensus [...]" is maybe a bit hyperbolic... :)

Well, I'm pretty sure you deal with them by not looking too closely at the state of the third-party software you import in your builds. We require an unbroken QA chain, which tends to expose chaining problems.

So de-cycling will happen no matter how much you like the feature. The only question is whether decycling can be done in a controlled, coordinated, collaborative, efficient tooled way, or whether it will be done manually in hit and miss mode behind upstream's backs

Are you looking for tooling to identify cycles? I've been working on https://godoc.org/golang.org/x/exp/cmd/modgraphviz, and one of the things I hope to put in soon is some cycle visualization stuff. Perhaps this or some other community tool might be what you're reaching for?

Thanks for the link, I didn't know about that one. We don't really need to identify cycles; they are such a huge source of CI/CD breakage that they identify themselves pretty quickly (typically, the CI/CD system will refuse to run a job because it cannot compute the corresponding execution plan). We need to identify the best points where a cycle could be cut by moving some packages to a separate Go module (a sub/nested go.mod, not a separate software project).


jeanbza commented Apr 11, 2019

Thanks for the write-up!

Basically, our tooling enforces that component B cannot use component A before component A passes CI/CD checks. Any component cycle means we have a chicken-and-egg problem.

I don't think you have a chicken-and-egg problem. There are three ways you can look at module dependencies:

  1. The mod graph, including versions. You will see cycles if two modules depend on each other at exactly the same version, e.g. A@5 -> B@19 -> A@5, but it's usually more like A@5 -> B@19 -> A@4 -> B@18 -> A@3 -> etc.
  2. The mod graph, excluding versions, e.g. A -> B -> A. I think this is what you're talking about. However, I also think this is the least interesting / useful view in this discussion.
  3. The modules, and the version of each, that get chosen by MVS for the actual build. There will always be exactly one version chosen of each module (per major version). More reading at https://research.swtch.com/vgo-mvs

The first is used by MVS and the mod tools. The second is useful only as a high-level discussion piece. The third is what actually gets into your build. I suspect you need the third, which has no cycles, and which therefore should not require you to go asking every library author with cycles in your transitive dependency list to take on the tremendous work of breaking cycles (and maybe also maintaining multi-module repos).
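For reference, the first and third views can be inspected directly with the stock tooling:

```
# View 1: the full requirement graph, with versions; this is where chains
# like A@5 -> B@19 -> A@4 show up.
go mod graph

# View 3: the single version of each module that MVS selects for the
# actual build; this list is flat and has no cycles.
go list -m all
```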

Well, I'm pretty sure you deal with them by not looking too closely at the state of the third-party software you import in your builds.

This is a false assumption.


Speaking just for myself (and maybe unconstructively?): I empathize with your argument, but I suspect that your desire for your own CI/CD system to work in a specific way is not a compelling reason for a library author to split a cycle. There are significant trade-offs to consider when choosing to break apart cycles, and even more if it involves turning repositories into multi-module repositories.

Anecdotally, we (google-cloud-go) plan to go the route of multi-module repositories, and we might end up in a cycle-free state, but it's not our goal, and I would understand any set of library authors that choose not to invest the considerable effort into breaking cycles and whatever extra maintenance is required after the fact.


nim-nim commented Apr 12, 2019

Thanks for the write-up!

@jadekler: You're welcome; the more people understand the issues and complexities involved, the better our chances of fixing them properly.

Basically, our tooling enforces that component B cannot use component A before component A passes CI/CD checks. Any component cycle means we have a chicken-and-egg problem.

I don't think you have a chicken-and-egg problem. There are three ways you can look at module dependencies:

1. The mod graph, including versions. You will see cycles if two modules depend on each other at exactly the same version, e.g. A@5 -> B@19 -> A@5, but it's usually more like A@5 -> B@19 -> A@4 -> B@18 -> A@3 -> etc.

That does not help, because ultimately, bringing up a new architecture needs a directed acyclic graph, where step A is done before step B or step B before step A. Unravelling the version cycle the way you suggest would require importing each intermediate version into the CI/CD system, until you get back to the point where the cycle did not exist. That's prohibitive in manpower and build-farm power.

When I wrote enforcing, that's actual enforcing: the go tools won't see any module/version pair before it passes CI/CD QA, and shortcuts like putting components in a vendor dir, so you can pretend they exist before the CI/CD QA checks are done, are guarded against.

Because Go module tools do not look in GOPATH space, switching to Go modules means we will have to bring up every architecture from the ground up at module-switch time.

2. The mod graph, excluding versions, e.g. A -> B -> A. I think this is what you're talking about.

I did mean a graph with versions. However, being able to manage versions does not help much, because, as you wrote yourself, a dependency cycle ultimately implies rewinding versions to the point in history where the cycle didn't exist. And that is prohibitively expensive. Not to mention that it won't work with modules, because module info is added to existing code, and won't exist for the remote point in a project's past history where some cycles were created. Besides, some project histories have been lost during rehostings and forkings.

3. The modules, and the version of each, that get chosen by MVS for the actual build. _There will always be exactly one version chosen of each module (per major version)_. More reading at https://research.swtch.com/vgo-mvs

That's irrelevant, because MVS won't apply before the go tools see a set of modules, and they don't get to see any set of modules before the CI/CD checks for that set of modules are finished.

Well, I'm pretty sure you deal with them by not looking too closely at the state of the third-party software you import in your builds.

This is a false assumption.

Sincere apologies for that, I shouldn't have generalized. That's why I'd rather keep this issue project- and person-agnostic. Ultimately, who does a good job in which project is not interesting. What's interesting is to help everyone do a good job with minimal effort in every Go project.

The vast majority of software projects do not look carefully at the state of the dependencies they import. They can't afford to. Only big first-class projects, or small projects that depend on little, can afford careful checks of the code they depend on.

Even big first-class projects, which are not starved of money and manpower, often forget to monitor the state of the dependencies they imported after they imported them. They assume that if it was good at import time, it will be good forever. That is not true. Most security problems are identified post-release (unless the release is deliberately malicious). Being able to go get the exact same code version (or stashing it in vendoring) does not guarantee that, because this version was fine in the past, it is still fine today.

And sometimes vulnerabilities are baked into an API design, so fixing is not just a version bump; it requires time to change the API calls. (That happened recently to a big Go project: they missed a security note in the later release notes of some code they were vendoring, and they had made no organisational provision for security updates that required them to change the way they called this third-party code.)

That's why our CI/CD system does not make any assumptions about the quality of upstream dependency checks. It forces everything to re-pass our own CI/CD checks, and masks any module that hasn't passed them yet. Even modules that passed CI/CD checks in the past can be removed from the available versions once a security issue is detected.

Speaking just for myself (and maybe unconstructively?): I empathize with your argument, but I suspect that your desire for your own CI/CD system to work in a specific way is not a compelling reason for a library author to split a cycle.

Sorry, I was not clear: I didn't write about one CI/CD system but about a whole class of CI/CD systems. This design is not limited to one CI/CD system implementation, or to the implementations of a single organisation (I know at least a handful of them myself). It is in wide use because the "check things before they are made available to the compiler" rule has proven itself in the past, both in detecting problems and in streamlining problem response times.

There are significant trade-offs to consider when choosing to break apart cycles, and even more if it involves turning repositories into multi-module repositories.

Given that we have to break cycles downstream when it's not done upstream, at least at bring-up time, and present a sane break-up to our CI/CD system, I understand the complexity involved. And I won't pretend we break cycles cleanly; it's pretty much a desperation move for us to do it without upstream cooperation, and it's often done in a quick and dirty way by people fed up with battling the cycle's side effects.

If I may, it seems to me the complexity grows with time: early cycle detection and remediation is a lot easier than remediation once the cycle has entrenched itself deeply. As is the case for lots of things in the technical-debt category.

Anecdotally, we (google-cloud-go) plan to go the route of multi-module repositories, and we might end up in a cycle-free state, but it's not our goal, and I would understand any set of library authors that choose not to invest the considerable effort into breaking cycles and whatever extra maintenance is required after the fact.

The aim of this issue is to give Go library authors the tools necessary to make breaking cycles a less considerable effort.

@nim-nim changed the title from cmd/go: [modules + integration] go mod untangle, propose loop-breaking module splits to cmd/go: [modules + integration] go mod split, propose loop-breaking module splits Apr 12, 2019

nim-nim commented Apr 12, 2019

Anecdotally, we (google-cloud-go) plan to go the route of multi-module repositories

That would be nice; a lot of the Go module potential is untapped right now, and achieving full potential requires more reflection than just dropping go.mod descriptor files at project roots. If you're looking at going multi-module, perhaps some of the tools proposed here would be of use to you? (I'm thinking of pack, buildrequires and discover.)

and we might end up in a cycle-free state,

That would be even better.

@julieqiu added the NeedsInvestigation label Apr 22, 2019
@julieqiu added this to the Go1.13 milestone Apr 22, 2019
@andybons modified the milestones: Go1.13, Go1.14 Jul 8, 2019
@rsc modified the milestones: Go1.14, Backlog Oct 9, 2019