
x/tools/gopls: fails on standard library outside of $GOROOT #32173

Closed
stamblerre opened this issue May 21, 2019 · 24 comments

@stamblerre

stamblerre commented May 21, 2019

Forked from microsoft/vscode-go#2511.

This only happens if you clone the Go project into a second place outside of your $GOROOT.

@gopherbot gopherbot added this to the Unreleased milestone May 21, 2019
@gopherbot gopherbot added the gopls Issues related to the Go language server, gopls. label May 21, 2019
@bcmills bcmills added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label May 21, 2019
@arthurkiller

arthurkiller commented Jun 26, 2019

I have the same problem with vim+ale+gopls on macOS.

After jumping from my project into a standard library file, go-to-reference and completion do not work within the standard library.
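For reference, gopls is enabled in ale with something like the following (a sketch; the exact config may differ):

let g:ale_linters = {'go': ['gopls']}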

@stamblerre

What is the output of go env?

@arthurkiller

@stamblerre FYI

GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/arthur/Library/Caches/go-build"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/arthur/golang"
GOPROXY=""
GORACE=""
GOROOT="/usr/local/go"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/nh/71f5qs9n3dz3v02z77nc86640000gn/T/go-build129917868=/tmp/go-build -gno-record-gcc-switches -fno-common"

@stamblerre

Do you have the standard library checked out anywhere else or are you jumping to the definitions under /usr/local/go?

@arthurkiller

I'm not sure which copy I end up in. Here is what I did:

  • opened my project in my GOPATH
  • put the cursor on `os.Open()`
  • went to definition
  • put the cursor on an internal function
  • went to definition, which failed (see the sketch below)
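A minimal sketch of the kind of file I start from (contents illustrative, not my actual project):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Step 1: go-to-definition on os.Open jumps into $GOROOT/src/os/file.go.
	// Step 2: inside that file, go-to-definition on an internal call
	// (e.g. one under internal/poll) is where it fails for me.
	f, err := os.Open("data.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
}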

@stamblerre

Hm, then your issue is different from this bug. Are you able to share any gopls logs? You can get them by passing in a -logfile flag to gopls.

@arthurkiller

[screenshot]

They are in the same package, but gopls only works on the single file.

@arthurkiller

arthurkiller commented Jul 2, 2019

I can run go build in my project, but gopls still reports errors.

@arthurkiller

Hm, then your issue is different from this bug. Are you able to share any gopls logs? You can get them by passing in a -logfile flag to gopls.

I have set -logfile /Users/arthur/gopls.log, but nothing gets printed:

let g:ale_go_gopls_options='-logfile=/Users/arthur/gopls.log'

@arthurkiller

I'm on the latest master branch.

@arthurkiller

arthurkiller commented Jul 2, 2019

I ran

gopls -logfile=/Users/arthur/gopls.log check main.go meta.go data.go

in the terminal and got this output:

2019/07/02 16:36:47 Error:unable to check package for file:///Users/arthur/golang/src/xxxxxxxxxxxxxxxxxxxxxxxxxtool/meta.go: no packages found for file:///Users/arthur/golang/src/xxxxxxxxxxxxxxxxxxxxxxxxxtool/meta.go
2019/07/02 16:36:47 Error:unable to check package for file:///Users/arthur/golang/xxxxxxxxxxxxxxxxxxxxxxxxxxxxtool/meta.go: no packages found for file:///Users/arthur/golang/xxxxxxxxxxxxxxxxxxxxxxxxxxxxtool/meta.go

But these files are in the same package.

@arthurkiller

I have also cleaned the go build cache, but nothing changed.

@arthurkiller

arthurkiller commented Jul 2, 2019

The same happens for an internal package:

arthur@ArthursMacBookPro:~/g/s/g/x/tools:master$ gopls -logfile=/Users/arthur/gopls.log check internal/lsp/cmd/check.go
/Users/arthur/golang/src/golang.org/x/tools/internal/lsp/cmd/check.go:18:7-18: undeclared name: Application
/Users/arthur/golang/src/golang.org/x/tools/internal/lsp/cmd/check.go:42:28-37: undeclared name: cmdFile
arthur@ArthursMacBookPro:~/g/s/g/x/tools:master$ pwd
/Users/arthur/golang/src/golang.org/x/tools

@stamblerre

I'm on the latest master branch.

Can you confirm this by sharing the output of gopls version?

I have set -logfile /Users/arthur/gopls.log but nothing printed.

Can you also try adding the -rpc.trace flag?
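For example, assuming ale passes these options straight through to gopls:

let g:ale_go_gopls_options='-logfile=/Users/arthur/gopls.log -rpc.trace'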

The errors you are seeing are very strange. Have you set $GO111MODULE? It seems like the other files in the package are not getting type-checked. If you're willing to debug a bit more, maybe you could add

v.session.log.Debugf(ctx, "pkg %s, files: %v, errors: %v", pkg.PkgPath, pkg.CompiledGoFiles, pkg.Errors)

to line 103 of internal/lsp/cache/load.go and rebuild gopls?

@stamblerre stamblerre changed the title x/tools/cmd/gopls: fails on standard library outside of $GOROOT x/tools/gopls: fails on standard library outside of $GOROOT Jul 2, 2019
@arthurkiller

arthurkiller commented Jul 3, 2019

commit 38ae2c8f64122bd595b7f93f968a6686cd27bb5a (HEAD -> master, origin/master, origin/HEAD)

golang.org/x/tools/gopls v0.1.1
    golang.org/x/tools/gopls@(devel)

@arthurkiller

arthurkiller commented Jul 3, 2019

I rebuilt gopls with

env GO111MODULE=on go install

and got this log output:

[Trace - 1:17:02 PM] Sending request 'initialize - (1)'.
Params: {"initializationOptions":{},"rootUri":"file:///Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool","capabilities":{"workspace":{"workspaceFolders":false,"configuration":false,"symbol":{"dynamicRegistration":false},"applyEdit":false,"didChangeConfiguration":{"dynamicRegistration":false}},"textDocument":{"documentSymbol":{"dynamicRegistration":false,"hierarchicalDocumentSymbolSupport":false},"references":{"dynamicRegistration":false},"publishDiagnostics":{"relatedInformation":true},"rename":{"dynamicRegistration":false},"completion":{"completionItem":{"snippetSupport":false,"commitCharactersSupport":false,"preselectSupport":false,"deprecatedSupport":false,"documentationFormat":["plaintext"]},"contextSupport":false,"dynamicRegistration":false},"synchronization":{"didSave":true,"willSaveWaitUntil":false,"willSave":false,"dynamicRegistration":false},"codeAction":{"dynamicRegistration":false},"typeDefinition":{"dynamicRegistration":false},"hover":{"dynamicRegistration":false,"contentFormat":["plaintext"]},"definition":{"dynamicRegistration":false,"linkSupport":false}}},"rootPath":"/Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool","processId":96201}


[Trace - 1:17:02 PM] Received response 'initialize - (1)' in 33ms.
Params: {"capabilities":{"textDocumentSync":{"openClose":true,"change":2,"save":{}},"hoverProvider":true,"completionProvider":{"triggerCharacters":["."]},"signatureHelpProvider":{"triggerCharacters":["(",","]},"definitionProvider":true,"referencesProvider":true,"documentHighlightProvider":true,"documentSymbolProvider":true,"codeActionProvider":true,"documentFormattingProvider":true,"renameProvider":true,"documentLinkProvider":{},"typeDefinitionProvider":true,"workspace":{"workspaceFolders":{"supported":true,"changeNotifications":"workspace/didChangeWorkspaceFolders"}}},"custom":null}


[Trace - 1:17:02 PM] Sending notification 'initialized'.
Params: {}


[Trace - 1:17:03 PM] Received notification 'window/logMessage'.
Params: {"type":3,"message":"Build info\n----------\ngolang.org/x/tools/gopls v0.1.1\n    golang.org/x/tools/gopls@(devel)\n    golang.org/x/sync@v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=\n    golang.org/x/tools@v0.0.0-20190628153133-6cdbf07be9d0 =\u003e ../\n\nGo info\n-------\ngo version go1.12.6 darwin/amd64\n\nGOARCH=\"amd64\"\nGOBIN=\"\"\nGOCACHE=\"/Users/arthur/Library/Caches/go-build\"\nGOEXE=\"\"\nGOFLAGS=\"\"\nGOHOSTARCH=\"amd64\"\nGOHOSTOS=\"darwin\"\nGOOS=\"darwin\"\nGOPATH=\"/Users/arthur/golang\"\nGOPROXY=\"\"\nGORACE=\"\"\nGOROOT=\"/usr/local/go\"\nGOTMPDIR=\"\"\nGOTOOLDIR=\"/usr/local/go/pkg/tool/darwin_amd64\"\nGCCGO=\"gccgo\"\nCC=\"clang\"\nCXX=\"clang++\"\nCGO_ENABLED=\"1\"\nGOMOD=\"\"\nCGO_CFLAGS=\"-g -O2\"\nCGO_CPPFLAGS=\"\"\nCGO_CXXFLAGS=\"-g -O2\"\nCGO_FFLAGS=\"-g -O2\"\nCGO_LDFLAGS=\"-g -O2\"\nPKG_CONFIG=\"pkg-config\"\nGOGCCFLAGS=\"-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/nh/71f5qs9n3dz3v02z77nc86640000gn/T/go-build394267723=/tmp/go-build -gno-record-gcc-switches -fno-common\"\n"}


[Trace - 1:17:03 PM] Sending notification 'textDocument/didOpen'.
Params: {"textDocument":{"uri":"file:///Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool/main.go","version":1,"languageId":"go","text":"package main\n\nimport (\n\t\"encoding/json\"\n\t\"flag\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"math/rand\"\n\t\"net\"\n\t\"net/http\"\n\t\"os\"\n\t\"os/signal\"\n\t\"strconv\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"go.uber.org/zap\"\n\t\"go.uber.org/zap/zapcore\"\n)\n\nvar (\n\tLogger         *zap.Logger\n\twg             sync.WaitGroup\n\tqueryinfo      = \"http://bdrp.baidu.com/api/shard/redis/getClusterInfo?app_id=%s\"\n\tquerytsk       = \"http://bdrp.baidu.com/api/shard/redis/checkTask?taskId=%d\"\n\tFixCode        = 1\n\tTotal          = 10\n\tInterval       = 400\n\tfailedMaster   = make([]string, 0, 500)\n\tfailedSlave    = make([]string, 0, 500)\n\tfailedProxy    = make([]string, 0, 500)\n\ttaskQ          = make(chan int, 65535)\n\tslavetaskQ     = make(chan int, 65535)\n\tproxytaskQ     = make(chan int, 65535)\n\ttokenChan      = make(chan int, 200)\n\tcanceller      = make(chan int)\n\twhitelist      = make(map[string]int)\n\ttaskidmap      = make(map[int]string)\n\ttaskidproxymap = make(map[int]string)\n\ttmlock         = sync.RWMutex{}\n\ttplock         = sync.RWMutex{}\n\tretrytimes     = 0\n\tDatafile       string\n\tmworkers       int32\n\tsworkers       int32\n\tpworkers       int32\n\tMaster         *bool\n\tSlave          *bool\n\tProxy          *bool\n\tMulti          *bool\n\tFix            *bool\n\tCheck          *bool\n\tDryRun         *bool\n\tStrict         *bool\n)\n\ntype DeployInfo struct {\n\tAppid        string `json:\"appid\"`\n\tMasterRegion string `json:\"master_regions\"`\n\tSlaveRegion  string `json:\"slave_regions\"`\n\tMasterIDC    string `json:\"master_idc\"`\n\tSlaveIDC     string `json:\"slave_idc\"`\n\tMasterPool   string `json:\"master_pool\"`\n\tSlavePool    string `json:\"slave_pool\"`\n\tProxy        string `json:\"proxy\"`\n\tUserID       string `json:\"user_id\"`\n}\n\nfunc main() {\n\t// parse flags\n\tMaster = flag.Bool(\"m\", false, \"deploy master\")\n\tSlave = flag.Bool(\"s\", false, \"deploy slave\")\n\tProxy = flag.Bool(\"p\", false, \"deploy proxy\")\n\tMulti = flag.Bool(\"multi\", false, \"multi(3) slaves mode\")\n\tCheck = flag.Bool(\"check\", false, \"check deploy but not fix\")\n\tStrict = flag.Bool(\"strict\", false, \"check deploy in strict mode will connect to instance\")\n\tDryRun = flag.Bool(\"dry\", false, \"dry run only print but not request\")\n\tFix = flag.Bool(\"fix\", false, \"check and fix with data.json\")\n\tflag.IntVar(&FixCode, \"fixcode\", -1, \"fix code used to choose resource pool\")\n\tflag.IntVar(&Total, \"n\", 1, \"requests pre 10 second\")\n\tflag.StringVar(&Datafile, \"f\", \"data.json\", \"deploy info file\")\n\tflag.Parse()\n\tif !flag.Parsed() {\n\t\tflag.Usage()\n\t\tos.Exit(0)\n\t}\n\trand.Seed(time.Now().Unix())\n\n\t// init token bucket\n\tfor i := 0; i < Total; i++ {\n\t\ttokenChan <- 1\n\t}\n\tgo func() {\n\t\ttimer := time.Tick(10 * time.Second)\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-timer:\n\t\t\t\tfor i := 0; i < Total; i++ {\n\t\t\t\t\tselect {\n\t\t\t\t\tcase tokenChan <- 1:\n\t\t\t\t\tdefault:\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\tdefault:\n\t\t\t}\n\t\t}\n\t}()\n\n\t// init Logger\n\tencoder := zapcore.NewConsoleEncoder(zapcore.EncoderConfig{\n\t\tNameKey:       \"Name\",\n\t\tStacktraceKey: \"Stack\",\n\t\tMessageKey:    \"Message\",\n\t\tLevelKey:      \"Level\",\n\t\tTimeKey:       
\"TimeStamp\",\n\t\tEncodeTime: func(t time.Time, enc zapcore.PrimitiveArrayEncoder) {\n\t\t\tenc.AppendString(t.Local().Format(\"15:04:05\"))\n\t\t},\n\t\tCallerKey:      \"Caller\",\n\t\tEncodeLevel:    zapcore.CapitalColorLevelEncoder,\n\t\tEncodeDuration: zapcore.StringDurationEncoder,\n\t\tEncodeCaller:   zapcore.ShortCallerEncoder,\n\t})\n\tif *DryRun {\n\t\tLogger = zap.New(zapcore.NewCore(encoder, zapcore.AddSync(os.Stdout), zap.NewAtomicLevelAt(zap.DebugLevel)), zap.AddCaller())\n\t} else {\n\t\tLogger = zap.New(zapcore.NewCore(encoder, zapcore.AddSync(os.Stdout), zap.NewAtomicLevelAt(zap.InfoLevel)), zap.AddCaller())\n\t}\n\n\t// caught signal and do dump\n\tc := make(chan os.Signal, 1)\n\tsignal.Notify(c, syscall.SIGTERM, syscall.SIGQUIT, syscall.SIGHUP, syscall.SIGUSR1, syscall.SIGUSR2)\n\tgo func() {\n\t\tfor {\n\t\t\tsig := <-c\n\t\t\tswitch sig {\n\t\t\tcase syscall.SIGHUP: // change speed\n\t\t\t\ttkf, err := os.Open(\"./tokens\")\n\t\t\t\tif err != nil {\n\t\t\t\t\tLogger.Error(\"error in reload tokens file\", zap.Error(err))\n\t\t\t\t\ttkf.Close()\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tvar n, m int\n\t\t\t\tfmt.Fscanf(tkf, \"%d,%d\", &n, &m)\n\t\t\t\tLogger.Info(\"receive signal hup\")\n\t\t\t\tif n >= 0 {\n\t\t\t\t\tLogger.Info(\"update total\", zap.Int(\"old_token\", Total), zap.Int(\"new_token\", n), zap.Int(\"old_interval\", Interval), zap.Int(\"new_interval\", m))\n\t\t\t\t\tTotal = n\n\t\t\t\t\tInterval = m\n\t\t\t\t}\n\t\t\t\ttkf.Close()\n\t\t\tcase syscall.SIGUSR1:\n\t\t\t\tLogger.Info(\"receive signal usr1, dump failed and quit\")\n\t\t\t\twg.Add(1)\n\t\t\t\tclose(canceller)\n\t\t\t\ttime.Sleep(time.Second)\n\t\t\t\tfor tskid := range taskQ {\n\t\t\t\t\ttmlock.RLock()\n\t\t\t\t\tfailedMaster = append(failedMaster, taskidmap[tskid])\n\t\t\t\t\ttmlock.RUnlock()\n\t\t\t\t}\n\t\t\t\tfor tskid := range slavetaskQ {\n\t\t\t\t\ttmlock.RLock()\n\t\t\t\t\tfailedSlave = append(failedSlave, taskidmap[tskid])\n\t\t\t\t\ttmlock.RUnlock()\n\t\t\t\t}\n\t\t\t\tfor tskid := range taskQ {\n\t\t\t\t\ttplock.RLock()\n\t\t\t\t\tfailedProxy = append(failedProxy, taskidproxymap[tskid])\n\t\t\t\t\ttplock.RUnlock()\n\t\t\t\t}\n\n\t\t\t\tif len(failedMaster) > 0 {\n\t\t\t\t\tbts, _ := json.Marshal(failedMaster)\n\t\t\t\t\tf, _ := os.Create(\"./failedM.json\")\n\t\t\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\t\t\tLogger.Info(\"failed to deploy master, dumped\", zap.Int(\"n\", len(failedMaster)))\n\t\t\t\t\tf.Close()\n\t\t\t\t}\n\t\t\t\tif len(failedSlave) > 0 {\n\t\t\t\t\tbts, _ := json.Marshal(failedSlave)\n\t\t\t\t\tf, _ := os.Create(\"./failedS.json\")\n\t\t\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\t\t\tLogger.Info(\"failed to deploy slave, dumped\", zap.Int(\"n\", len(failedSlave)))\n\t\t\t\t\tf.Close()\n\t\t\t\t}\n\t\t\t\tif len(failedProxy) > 0 {\n\t\t\t\t\tbts, _ := json.Marshal(failedProxy)\n\t\t\t\t\tf, _ := os.Create(\"./failedP.json\")\n\t\t\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\t\t\tLogger.Info(\"failed to deploy proxy, dumped\", zap.Int(\"n\", len(failedProxy)))\n\t\t\t\t\tf.Close()\n\t\t\t\t}\n\t\t\t\tos.Exit(0)\n\t\t\tcase syscall.SIGUSR2:\n\t\t\t\tLogger.Info(\"receive signal usr2, dump taskid map\")\n\t\t\t\tbts, _ := json.Marshal(taskidmap)\n\t\t\t\tf, _ := os.Create(\"./taskidmap.json\")\n\t\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\t\tLogger.Info(\"dumped, to file\", zap.Int(\"n\", 
len(failedSlave)))\n\t\t\t\tf.Close()\n\t\t\tdefault:\n\t\t\t\tLogger.Info(\"receive signal\")\n\t\t\t\tos.Exit(0)\n\t\t\t}\n\t\t}\n\t}()\n\n\tdinfos, err := prepareDeploy(Datafile)\n\tif err != nil {\n\t\tLogger.Error(\"error in load resource file\", zap.Error(err))\n\t}\n\n\tif *Fix || *Check {\n\t\tInitDatabases()\n\t}\n\n\t// on fix mode, try load white list first\n\tif *Fix {\n\t\tb, _ := os.Open(\"./white.json\")\n\t\tbts, _ := ioutil.ReadAll(b)\n\t\twl := make([]string, 0, 300)\n\t\tjson.Unmarshal(bts, &wl)\n\t\tfor _, v := range wl {\n\t\t\twhitelist[v] = 1\n\t\t}\n\t\tids := []string{}\n\t\tfor _, v := range dinfos {\n\t\t\tids = append(ids, v.Appid)\n\t\t}\n\t\tns, _, _ := checkCluster(ids)\n\t\tfor k, _ := range ns {\n\t\t\twhitelist[k] = 1\n\t\t}\n\t}\n\n\t// check cluster or proxy and exit,\n\tif *Check {\n\t\tc := 0\n\t\tb := 0\n\t\tif *Proxy {\n\t\t\tfor _, v := range dinfos {\n\t\t\t\tpxyn, _ := strconv.Atoi(v.Proxy)\n\t\t\t\t_, n := checkClusterProxy(v.Appid, pxyn)\n\t\t\t\tc += n\n\t\t\t}\n\t\t} else {\n\t\t\tids := make([]string, 0, 100)\n\t\t\tfor _, v := range dinfos {\n\t\t\t\tids = append(ids, v.Appid)\n\t\t\t}\n\t\t\tsinfo, n, m := checkCluster(ids)\n\t\t\tc += n\n\t\t\tb += m\n\n\t\t\tK := 2\n\t\t\tfor _, shard := range sinfo {\n\t\t\t\tfor ip, count := range shard.Ipcounter {\n\t\t\t\t\tif count > K {\n\t\t\t\t\t\tLogger.Error(\"error detected ip count invalid\", zap.String(\"appid\", shard.Appid),\n\t\t\t\t\t\t\tzap.String(\"shard\", shard.Shardid), zap.String(\"ip\", ip), zap.Int(\"count\", count))\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tLogger.Info(\"total lack instances\", zap.Int(\"lacks\", c), zap.Int(\"broken\", b))\n\t\tos.Exit(0)\n\t}\n\n\t// retry failed\n\tif *Fix {\n\t\tfor _, fname := range []string{\"./failedM.json\", \"./failedS.json\", \"./failedP.json\"} {\n\t\t\tfd, _ := os.Open(fname)\n\t\t\tdefer fd.Close()\n\t\t\tb, _ := ioutil.ReadAll(fd)\n\t\t\tfailed := make([]string, 0, 100)\n\t\t\tjson.Unmarshal(b, &failed)\n\n\t\t\tdone := func() {}\n\t\t\ttp := -1\n\t\t\tswitch fname {\n\t\t\tcase \"./failedM.json\":\n\t\t\t\tatomic.AddInt32(&mworkers, 1)\n\t\t\t\tdone = func() {\n\t\t\t\t\tatomic.AddInt32(&mworkers, -1)\n\t\t\t\t}\n\t\t\t\ttp = 0\n\t\t\tcase \"./failedS.json\":\n\t\t\t\tatomic.AddInt32(&sworkers, 1)\n\t\t\t\tdone = func() {\n\t\t\t\t\tatomic.AddInt32(&sworkers, -1)\n\t\t\t\t}\n\t\t\t\ttp = 1\n\t\t\tcase \"./failedP.json\":\n\t\t\t\tatomic.AddInt32(&pworkers, 1)\n\t\t\t\tdone = func() {\n\t\t\t\t\tatomic.AddInt32(&pworkers, -1)\n\t\t\t\t}\n\t\t\t\ttp = 2\n\t\t\t}\n\n\t\t\twg.Add(1)\n\t\t\t// retry failed\n\t\t\tgo func(flist []string, done func(), t int) {\n\t\t\t\tdefer wg.Done()\n\t\t\t\tdefer done()\n\t\t\t\tfor i, rq := range flist {\n\t\t\t\t\tretry := 0\n\t\t\t\tr:\n\t\t\t\t\tselect {\n\t\t\t\t\tcase _ = <-tokenChan:\n\t\t\t\t\tcase <-canceller:\n\t\t\t\t\t\tLogger.Info(\"cancelling retry\")\n\t\t\t\t\t\treturn\n\t\t\t\t\t}\n\n\t\t\t\t\tstart := time.Now()\n\t\t\t\t\tif *DryRun {\n\t\t\t\t\t\tLogger.Info(\"Dry run retry failed\", zap.String(\"request\", rq))\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tresp, err := http.Get(rq)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tLogger.Error(\"error in request retry failed\", zap.String(\"request\", rq), zap.Error(err))\n\t\t\t\t\t\tif retry < retrytimes {\n\t\t\t\t\t\t\ttime.Sleep(time.Second * 3)\n\t\t\t\t\t\t\tretry++\n\t\t\t\t\t\t\tgoto r\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tLogger.Info(\"retry task\", zap.String(\"request\", rq), zap.Int(\"remains\", len(failed)-i), 
zap.Duration(\"cost\", time.Since(start)))\n\t\t\t\t\tresult := &deployResp{}\n\t\t\t\t\tif err := json.NewDecoder(resp.Body).Decode(result); err != nil {\n\t\t\t\t\t\tLogger.Error(\"error in get result unmarshal\", zap.Error(err))\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tswitch t {\n\t\t\t\t\tcase 0:\n\t\t\t\t\t\ttmlock.Lock()\n\t\t\t\t\t\ttaskidmap[result.Message] = rq\n\t\t\t\t\t\ttmlock.Unlock()\n\t\t\t\t\t\ttaskQ <- result.Message\n\t\t\t\t\tcase 1:\n\t\t\t\t\t\ttmlock.Lock()\n\t\t\t\t\t\ttaskidmap[result.Message] = rq\n\t\t\t\t\t\ttmlock.Unlock()\n\t\t\t\t\t\tslavetaskQ <- result.Message\n\t\t\t\t\tcase 2:\n\t\t\t\t\t\ttplock.Lock()\n\t\t\t\t\t\ttaskidproxymap[result.Message] = rq\n\t\t\t\t\t\ttplock.Unlock()\n\t\t\t\t\t\tproxytaskQ <- result.Message\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}(failed, done, tp)\n\t\t}\n\t}\n\n\t// do deploy\n\tfor _, dinfo := range dinfos {\n\t\tif *Master {\n\t\t\tatomic.AddInt32(&mworkers, 1)\n\t\t} else if *Slave {\n\t\t\tatomic.AddInt32(&sworkers, 1)\n\t\t} else if *Proxy {\n\t\t\tatomic.AddInt32(&pworkers, 1)\n\t\t}\n\t\twg.Add(1)\n\t\tgo func(d DeployInfo) {\n\t\t\tdefer wg.Done()\n\t\t\tcinfo, err := queryClusterInfo(d.Appid)\n\t\t\tif err != nil {\n\t\t\t\tLogger.Error(\"error in query cluster info for appid\", zap.String(\"appid\", d.Appid), zap.Error(err))\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\t// check: unique and randomlize shards\n\t\t\tset := make(map[string]int)\n\t\t\tfor _, shid := range cinfo.Message.Shards {\n\t\t\t\tset[shid] = 1\n\t\t\t}\n\t\t\tvar slist []string\n\t\t\tfor k, _ := range set {\n\t\t\t\tslist = append(slist, k)\n\t\t\t}\n\t\t\tcinfo.Message.Shards = slist\n\n\t\t\tif *Master {\n\t\t\t\tDeployInstance(taskQ, cinfo, &d, &mworkers)\n\t\t\t}\n\t\t\tif *Slave {\n\t\t\t\tDeployInstance(slavetaskQ, cinfo, &d, &sworkers)\n\t\t\t}\n\t\t\tif *Proxy {\n\t\t\t\tDeployProxy(cinfo, &d)\n\t\t\t}\n\t\t}(dinfo)\n\t}\n\n\t// query task queue\n\twg.Add(1)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tqueryTaskQ(taskQ, 0, &mworkers)\n\t\tbts, _ := json.Marshal(failedMaster)\n\t\tif len(failedMaster) > 0 {\n\t\t\tf, _ := os.Create(\"./failedM.json\")\n\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\tLogger.Error(\"failed to deploy master\")\n\t\t}\n\t}()\n\twg.Add(1)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tqueryTaskQ(slavetaskQ, 1, &sworkers)\n\t\tbts, _ := json.Marshal(failedSlave)\n\t\tif len(failedSlave) > 0 {\n\t\t\tf, _ := os.Create(\"./failedS.json\")\n\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\tLogger.Error(\"failed to deploy slave\")\n\t\t}\n\t}()\n\twg.Add(1)\n\tgo func() {\n\t\tdefer wg.Done()\n\t\tqueryTaskQ(proxytaskQ, 2, &pworkers)\n\t\tbts, _ := json.Marshal(failedProxy)\n\t\tif len(failedProxy) > 0 {\n\t\t\tf, _ := os.Create(\"./failedP.json\")\n\t\t\tfmt.Fprintln(f, strings.Replace(string(bts), \"\\\\u0026\", \"&\", -1))\n\t\t\tLogger.Error(\"failed to deploy proxy\")\n\t\t}\n\t}()\n\n\twg.Wait()\n}\n\ntype clusterInfo struct {\n\tSuccess bool `json:\"success\"`\n\tMessage struct {\n\t\tAppid   string   `json:\"app_id\"`\n\t\tCluster string   `json:\"cluster_id\"`\n\t\tProxy   string   `json:\"proxy_id\"`\n\t\tIdcs    string   `json:\"idcs\"`\n\t\tMainIdc string   `json:\"main_idc\"`\n\t\tShards  []string `json:\"shard\"`\n\t} `json:\"message\"`\n}\n\nfunc queryClusterInfo(appid string) (*clusterInfo, error) {\n\tresp, err := http.Get(fmt.Sprintf(queryinfo, appid))\n\tif err != nil {\n\t\tLogger.Error(\"error in query cluster info\", zap.String(\"appid\", 
appid), zap.Error(err))\n\t\treturn nil, err\n\t}\n\tinfo := &clusterInfo{}\n\tif err = json.NewDecoder(resp.Body).Decode(info); err != nil {\n\t\tLogger.Error(\"error in decode cluster info\", zap.String(\"appid\", appid), zap.Error(err))\n\t\treturn nil, err\n\t}\n\treturn info, nil\n}\n\ntype request struct {\n\tShard        string `json:\"shard_id\"`\n\tCluster      string `json:\"cluster_id\"`\n\tApp          string `json:\"app_id\"`\n\tResourcePool string `json:\"resource_pool\"`\n\tIdc          string `json:\"idc\"`\n\tRegion       string `json:\"region\"`\n\tMaster       bool   `json:\"master\"`\n\tNumber       int    `json:\"num\"`\n\tUserID       string `json:\"user_id\"`\n}\n\nfunc (r *request) String() string {\n\treturn fmt.Sprintf(\"http://bdrp.baidu.com/api/shard/redis/addRedisInstance?shard_id=%s&resource_pool=%s&idc=%s&app_id=%s&cluster_id=%s&region=%s&master=%t&user_id=%s\", r.Shard, r.ResourcePool, r.Idc, r.App, r.Cluster, strings.ToLower(r.Region), r.Master, r.UserID)\n}\n\ntype deployResp struct {\n\tSuccess bool `json:\"success\"`\n\tMessage int  `json:\"message\"` // taskID int\n}\n\nfunc prepareDeploy(path string) ([]DeployInfo, error) {\n\tf, err := os.Open(path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer f.Close()\n\n\tb, err := ioutil.ReadAll(f)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tdinfo := []DeployInfo{}\n\tif err = json.Unmarshal(b, &dinfo); err != nil {\n\t\treturn nil, err\n\t}\n\treturn dinfo, nil\n}\n\nfunc DeployInstance(q chan int, cinfo *clusterInfo, dinfo *DeployInfo, worker *int32) {\n\tdefer atomic.AddInt32(worker, -1)\n\t// return shid ---> []instances\n\tfor i, shid := range cinfo.Message.Shards {\n\t\t// when fix is true, white list mode will be enable\n\t\t// only shards in white list will be handled\n\t\tif *Fix {\n\t\t\tif _, ok := whitelist[shid]; !ok {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tvar reqs []*request\n\t\tif *Master {\n\t\t\t// Mregion Mpool\n\t\t\treqs = append(reqs, &request{\n\t\t\t\tShard:        shid,\n\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\tResourcePool: dinfo.MasterPool,\n\t\t\t\tIdc:          dinfo.MasterIDC,\n\t\t\t\tRegion:       dinfo.MasterRegion,\n\t\t\t\tMaster:       true,\n\t\t\t\tNumber:       1,\n\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t})\n\t\t} else if *Slave {\n\t\t\tif *Fix {\n\t\t\t\t// get fix infos\n\t\t\t\tfixinfomap, _, _ := checkCluster([]string{cinfo.Message.Appid})\n\n\t\t\t\t// with white list? 
or check the pool tag\n\t\t\t\tif *Multi {\n\t\t\t\t\t// region ---> lack n\n\t\t\t\t\tfor r, n := range fixinfomap[shid].Inneed {\n\t\t\t\t\t\t// multi mode, each region should not get more than 2 instances\n\t\t\t\t\t\tfor i := 0; i < n; i++ {\n\t\t\t\t\t\t\tswitch r {\n\t\t\t\t\t\t\tcase dinfo.MasterRegion:\n\t\t\t\t\t\t\t\treqs = append(reqs, &request{\n\t\t\t\t\t\t\t\t\tShard:        shid,\n\t\t\t\t\t\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\t\t\t\tResourcePool: dinfo.SlavePool,\n\t\t\t\t\t\t\t\t\tIdc:          dinfo.MasterIDC,\n\t\t\t\t\t\t\t\t\tRegion:       dinfo.MasterRegion,\n\t\t\t\t\t\t\t\t\tMaster:       false,\n\t\t\t\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t\t\t\t})\n\t\t\t\t\t\t\tcase dinfo.SlaveRegion:\n\t\t\t\t\t\t\t\treq := &request{\n\t\t\t\t\t\t\t\t\tShard:        shid,\n\t\t\t\t\t\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\t\t\t\tResourcePool: dinfo.SlavePool,\n\t\t\t\t\t\t\t\t\tIdc:          dinfo.SlaveIDC,\n\t\t\t\t\t\t\t\t\tRegion:       dinfo.SlaveRegion,\n\t\t\t\t\t\t\t\t\tMaster:       false,\n\t\t\t\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tswitch FixCode {\n\t\t\t\t\t\t\t\tcase -1:\n\t\t\t\t\t\t\t\t\tif rand.Int()%2 == 0 {\n\t\t\t\t\t\t\t\t\t\treq.ResourcePool = dinfo.SlavePool\n\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\treq.ResourcePool = dinfo.MasterPool\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tcase 1:\n\t\t\t\t\t\t\t\t\treq.ResourcePool = dinfo.MasterPool\n\t\t\t\t\t\t\t\tcase 2:\n\t\t\t\t\t\t\t\t\treq.ResourcePool = dinfo.SlavePool\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\treqs = append(reqs, req)\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else { //signal region fix mode\n\t\t\t\t\treqs = append(reqs, &request{\n\t\t\t\t\t\tShard:        shid,\n\t\t\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\tResourcePool: dinfo.SlavePool,\n\t\t\t\t\t\tIdc:          dinfo.SlaveIDC,\n\t\t\t\t\t\tRegion:       dinfo.SlaveRegion,\n\t\t\t\t\t\tMaster:       false,\n\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t})\n\t\t\t\t} // end-fix\n\t\t\t} else { // normal deploy slave\n\t\t\t\tif *Multi { // multi slave mode and deploy slave\n\t\t\t\t\t// 1: M_Region S_Pool\n\t\t\t\t\treqs = append(reqs, &request{\n\t\t\t\t\t\tShard:        shid,\n\t\t\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\tResourcePool: dinfo.SlavePool,\n\t\t\t\t\t\tIdc:          dinfo.MasterIDC,\n\t\t\t\t\t\tRegion:       dinfo.MasterRegion,\n\t\t\t\t\t\tMaster:       false,\n\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t})\n\t\t\t\t\t// 2: S_Region M_Pool\n\t\t\t\t\treqs = append(reqs, &request{\n\t\t\t\t\t\tShard:        shid,\n\t\t\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\tResourcePool: dinfo.MasterPool,\n\t\t\t\t\t\tIdc:          dinfo.SlaveIDC,\n\t\t\t\t\t\tRegion:       dinfo.SlaveRegion,\n\t\t\t\t\t\tMaster:       false,\n\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t})\n\t\t\t\t}\n\t\t\t\t// 3: S_Region S_Pool\n\t\t\t\treqs = append(reqs, &request{\n\t\t\t\t\tShard:        shid,\n\t\t\t\t\tCluster:      cinfo.Message.Cluster,\n\t\t\t\t\tApp:          
cinfo.Message.Appid,\n\t\t\t\t\tResourcePool: dinfo.SlavePool,\n\t\t\t\t\tIdc:          dinfo.SlaveIDC,\n\t\t\t\t\tRegion:       dinfo.SlaveRegion,\n\t\t\t\t\tMaster:       false,\n\t\t\t\t\tNumber:       1,\n\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t})\n\t\t\t}\n\t\t}\n\n\t\tfor _, req := range reqs {\n\t\t\tselect {\n\t\t\tcase <-tokenChan:\n\t\t\tcase <-canceller:\n\t\t\t\tLogger.Info(\"cancelling exit deploy instance\")\n\t\t\t\treturn\n\t\t\t}\n\t\t\tLogger.Info(\"call deploy instance\", zap.Stringer(\"request\", req), zap.Int(\"remain\", len(cinfo.Message.Shards)-i))\n\t\t\tretry := 0\n\t\tr:\n\t\t\tstart := time.Now()\n\t\t\tif *DryRun {\n\t\t\t\tLogger.Info(\"Dry run instance\", zap.Stringer(\"request\", req))\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tresp, err := http.Get(req.String())\n\t\t\tif err != nil {\n\t\t\t\tLogger.Error(\"error in request deploy\", zap.Stringer(\"req\", req), zap.Error(err))\n\t\t\t\tif retry < retrytimes {\n\t\t\t\t\ttime.Sleep(time.Second * 3)\n\t\t\t\t\tretry++\n\t\t\t\t\tgoto r\n\t\t\t\t} else {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tresult := &deployResp{}\n\t\t\tif err = json.NewDecoder(resp.Body).Decode(result); err != nil {\n\t\t\t\tLogger.Error(\"error in get result unmarshal\", zap.Stringer(\"req\", req), zap.Error(err), zap.Int(\"taskid\", result.Message))\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\ttmlock.Lock()\n\t\t\ttaskidmap[result.Message] = req.String()\n\t\t\ttmlock.Unlock()\n\t\t\tq <- result.Message\n\t\t\tLogger.Debug(\"task recorded\", zap.Int(\"taskid\", result.Message), zap.Duration(\"cost\", time.Since(start)))\n\t\t}\n\t}\n}\n\ntype proxyrequest struct {\n\tProxy        string `json:\"proxy_id\"`\n\tApp          string `json:\"app_id\"`\n\tResourcePool string `json:\"resource_pool\"`\n\tIdc          string `json:\"idc\"`\n\tRegion       string `json:\"region\"`\n\tNumber       int    `json:\"num\"`\n\tUserID       string `json:\"user_id\"`\n}\n\nfunc (r *proxyrequest) String() string {\n\treturn fmt.Sprintf(\"http://bdrp.baidu.com/api/shard/redis/addProxyInstance?proxy_id=%s&resource_pool=%s&idc=%s&app_id=%s&region=%s&user_id=%s&number=1\", r.Proxy, r.ResourcePool, r.Idc, r.App, strings.ToLower(r.Region), r.UserID)\n}\n\nfunc DeployProxy(cinfo *clusterInfo, dinfo *DeployInfo) {\n\tdefer atomic.AddInt32(&pworkers, -1)\n\tpoolIds := []string{dinfo.MasterPool, dinfo.SlavePool}\n\tpxyn, err := strconv.Atoi(dinfo.Proxy)\n\tif err != nil {\n\t\tLogger.Fatal(\"proxy num improperly\", zap.Error(err))\n\t\treturn\n\t}\n\trqs := make([]string, 0, 500)\n\n\tif *Fix {\n\t\t// init lackinfomap to avoid all region is null and nothing we get\n\t\tcurrent, _ := checkClusterProxy(cinfo.Message.Appid, pxyn)\n\t\tlackinfomap := make(map[string]int) // region --> lacks\n\t\tlackinfomap[strings.ToLower(dinfo.MasterRegion)] = 0\n\t\tlackinfomap[strings.ToLower(dinfo.SlaveRegion)] = 0\n\t\tfor r, n := range current {\n\t\t\tif _, ok := lackinfomap[r]; !ok {\n\t\t\t} else {\n\t\t\t\tlackinfomap[r] = pxyn - n\n\t\t\t}\n\t\t}\n\n\t\tif *Multi { //fix multi\n\t\t\tfor r, n := range lackinfomap {\n\t\t\t\tif n > 0 {\n\t\t\t\t\tswitch r {\n\t\t\t\t\tcase strings.ToLower(dinfo.MasterRegion):\n\t\t\t\t\t\tfor i := 0; i < n; i++ {\n\t\t\t\t\t\t\treq := &proxyrequest{\n\t\t\t\t\t\t\t\tProxy:        cinfo.Message.Proxy,\n\t\t\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\t\t\tResourcePool: poolIds[i%2],\n\t\t\t\t\t\t\t\tIdc:          dinfo.MasterIDC,\n\t\t\t\t\t\t\t\tRegion:       dinfo.MasterRegion,\n\t\t\t\t\t\t\t\tNumber:       
1,\n\t\t\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\trqs = append(rqs, req.String())\n\t\t\t\t\t\t}\n\t\t\t\t\tcase strings.ToLower(dinfo.SlaveRegion):\n\t\t\t\t\t\tfor i := 0; i < n; i++ {\n\t\t\t\t\t\t\treq := &proxyrequest{\n\t\t\t\t\t\t\t\tProxy:        cinfo.Message.Proxy,\n\t\t\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\t\t\tResourcePool: poolIds[(i+1)%2],\n\t\t\t\t\t\t\t\tIdc:          dinfo.SlaveIDC,\n\t\t\t\t\t\t\t\tRegion:       dinfo.SlaveRegion,\n\t\t\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\trqs = append(rqs, req.String())\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else { //fix single\n\t\t\tfor _, n := range current {\n\t\t\t\tfor i := 0; i < pxyn-n; i++ {\n\t\t\t\t\treq := &proxyrequest{\n\t\t\t\t\t\tProxy:        cinfo.Message.Proxy,\n\t\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\t\tResourcePool: poolIds[i%2],\n\t\t\t\t\t\tIdc:          dinfo.MasterIDC,\n\t\t\t\t\t\tRegion:       dinfo.MasterRegion,\n\t\t\t\t\t\tNumber:       1,\n\t\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t\t}\n\t\t\t\t\trqs = append(rqs, req.String())\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else { // normal\n\t\tfor i := 0; i < pxyn; i++ {\n\t\t\treq := &proxyrequest{\n\t\t\t\tProxy:        cinfo.Message.Proxy,\n\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\tResourcePool: poolIds[i%2],\n\t\t\t\tIdc:          dinfo.MasterIDC,\n\t\t\t\tRegion:       dinfo.MasterRegion,\n\t\t\t\tNumber:       1,\n\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t}\n\t\t\trqs = append(rqs, req.String())\n\t\t\tif *Multi {\n\t\t\t\treq := &proxyrequest{\n\t\t\t\t\tProxy:        cinfo.Message.Proxy,\n\t\t\t\t\tApp:          cinfo.Message.Appid,\n\t\t\t\t\tResourcePool: poolIds[(i+1)%2],\n\t\t\t\t\tIdc:          dinfo.SlaveIDC,\n\t\t\t\t\tRegion:       dinfo.SlaveRegion,\n\t\t\t\t\tNumber:       1,\n\t\t\t\t\tUserID:       dinfo.UserID,\n\t\t\t\t}\n\t\t\t\trqs = append(rqs, req.String())\n\t\t\t}\n\t\t}\n\t}\n\n\tfor i, req := range rqs {\n\t\t_ = <-tokenChan\n\t\tLogger.Info(\"deploy proxy\", zap.String(\"request\", req), zap.Int(\"remain\", len(rqs)-i))\n\t\tretry := 0\n\tr:\n\t\tstart := time.Now()\n\t\tif *DryRun {\n\t\t\tLogger.Info(\"Dry run proxy\", zap.String(\"request\", req))\n\t\t\tcontinue\n\t\t}\n\t\tresp, err := http.Get(req)\n\t\tif err != nil {\n\t\t\tLogger.Error(\"error in post deploy proxy\", zap.String(\"url\", req), zap.Error(err))\n\t\t\tif retry < retrytimes {\n\t\t\t\ttime.Sleep(time.Second * 3)\n\t\t\t\tretry++\n\t\t\t\tgoto r\n\t\t\t} else {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tresult := &deployResp{}\n\t\tif err = json.NewDecoder(resp.Body).Decode(result); err != nil {\n\t\t\tLogger.Error(\"error in decode json\", zap.Error(err))\n\t\t\tcontinue\n\t\t}\n\t\ttplock.Lock()\n\t\ttaskidproxymap[result.Message] = req\n\t\ttplock.Unlock()\n\t\tproxytaskQ <- result.Message\n\t\tLogger.Debug(\"task recorded\", zap.Int(\"taskid\", result.Message), zap.Duration(\"cost\", time.Since(start)))\n\t}\n}\n\ntype queryStatus struct {\n\tSuccess bool `json:\"success\"`\n\tMessage struct {\n\t\tStatus  string   `json:\"status\"`\n\t\tServers []string `json:\"servers\"`\n\t}\n}\n\n//deploy slaves: master region and slave regions\n//0 1 2 : m s p\nfunc queryTaskQ(tskq chan int, t int, workers *int32) {\n\tfor {\n\t\ttime.Sleep(time.Duration(Interval) * time.Millisecond)\n\t\tLogger.Info(\"taskQlen\", zap.Int(\"len\", len(tskq)), zap.Int32(\"workers\", atomic.LoadInt32(workers)))\n\t\tselect 
{\n\t\tcase <-canceller:\n\t\t\tLogger.Info(\"cancelling query\")\n\t\t\treturn\n\t\tcase taskid := <-tskq:\n\t\t\t// handle taskid\n\t\t\tresp, err := http.Get(fmt.Sprintf(querytsk, taskid))\n\t\t\tif err != nil {\n\t\t\t\tLogger.Error(\"error in query task status\", zap.Int(\"taskid\", taskid), zap.Error(err))\n\t\t\t\ttskq <- taskid\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// error message is nil but success message is full\n\t\t\tquery := &queryStatus{}\n\t\t\tif err = json.NewDecoder(resp.Body).Decode(query); err != nil {\n\t\t\t\tLogger.Error(\"error in decode json query task info\", zap.Int(\"taskid\", taskid), zap.Error(err))\n\t\t\t\ttskq <- taskid\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tswitch query.Message.Status {\n\t\t\tcase \"4\":\n\t\t\t\ttmlock.Lock()\n\t\t\t\tdelete(taskidmap, taskid)\n\t\t\t\ttmlock.Unlock()\n\t\t\t\tLogger.Info(\"Success\", zap.Int(\"taskid\", taskid))\n\t\t\tcase \"2\", \"6\":\n\t\t\t\tLogger.Debug(\"Working in progress\", zap.Int(\"taskid\", taskid))\n\t\t\t\ttskq <- taskid\n\t\t\t\tcontinue\n\t\t\tcase \"1\":\n\t\t\t\tLogger.Debug(\"Queuing\", zap.Int(\"taskid\", taskid))\n\t\t\t\ttskq <- taskid\n\t\t\t\tcontinue\n\t\t\tcase \"5\":\n\t\t\t\tswitch t {\n\t\t\t\tcase 0:\n\t\t\t\t\ttmlock.RLock()\n\t\t\t\t\tLogger.Error(\"Failed deploy\", zap.Int(\"taskid\", taskid), zap.String(\"req\", taskidmap[taskid]))\n\t\t\t\t\tfailedMaster = append(failedMaster, taskidmap[taskid])\n\t\t\t\t\ttmlock.RUnlock()\n\t\t\t\tcase 1:\n\t\t\t\t\ttmlock.RLock()\n\t\t\t\t\tLogger.Error(\"Failed deploy\", zap.Int(\"taskid\", taskid), zap.String(\"req\", taskidmap[taskid]))\n\t\t\t\t\tfailedSlave = append(failedSlave, taskidmap[taskid])\n\t\t\t\t\ttmlock.RUnlock()\n\t\t\t\tcase 2:\n\t\t\t\t\ttplock.RLock()\n\t\t\t\t\tLogger.Error(\"Failed deploy proxy\", zap.Int(\"taskid\", taskid), zap.String(\"req\", taskidproxymap[taskid]))\n\t\t\t\t\tfailedProxy = append(failedProxy, taskidproxymap[taskid])\n\t\t\t\t\ttplock.RUnlock()\n\t\t\t\t}\n\t\t\t\tcontinue\n\t\t\tcase \"3\":\n\t\t\t\tLogger.Error(\"Error in process\", zap.Int(\"taskid\", taskid))\n\t\t\t\ttskq <- taskid\n\t\t\t\tcontinue\n\t\t\tdefault:\n\t\t\t\tLogger.Error(\"Error in get status\", zap.String(\"status\", query.Message.Status))\n\t\t\t\ttskq <- taskid\n\t\t\t\tcontinue\n\t\t\t}\n\t\tdefault:\n\t\t\tif atomic.LoadInt32(workers) == 0 {\n\t\t\t\tLogger.Info(\"taskQ is empty and workers all done, exiting\")\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}\n}\n\ntype InstanceInfo struct {\n\tID     string `json:\"id\"`\n\tRegion string `json:\"region\"`\n\tIP     string `json:\"ip\"`\n\tStatus string `json:\"status\"`\n}\n\ntype ShardInfo struct {\n\tAppid     string\n\tShardid   string\n\tInneed    map[string]int\n\tIpcounter map[string]int\n\tMaster    *InstanceInfo\n\tSlaves    []*InstanceInfo\n}\n\n// check redis instance and put shid into white list\n// clusterids\n// return shardid ---> shardinfo, total lack , total brokens\nfunc checkCluster(ids []string) (map[string]*ShardInfo, int, int) {\n\tmsgmap := make(map[string]*ShardInfo)\n\tmlock := sync.Mutex{}\n\tvar total int64\n\tvar totalfailed int64\n\tvar sum int64\n\tvar sumbroken int64\n\tiwg := sync.WaitGroup{}\n\tfor _, i := range ids {\n\t\tiwg.Add(1)\n\t\tgo func(id string) {\n\t\t\tdefer iwg.Done()\n\t\t\tclusterinfo, err := queryClusterInfo(id)\n\t\t\tif err != nil {\n\t\t\t\tLogger.Error(\"error in get cluster info\", zap.String(\"cid\", id), zap.Error(err))\n\t\t\t\treturn\n\t\t\t}\n\n\t\t\tfailed := make([]string, 0, 500)\n\t\t\tfor _, shid := range 
clusterinfo.Message.Shards {\n\t\t\t\t// get shardinfo\n\n\t\t\t\tname, err := GetInfo(id2name, id)\n\t\t\t\tif err != nil {\n\t\t\t\t\tLogger.Error(\"error in get info\", zap.Error(err))\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tshinfo, err := getShardInfo(name, shid)\n\t\t\t\tif err != nil {\n\t\t\t\t\tLogger.Error(\"error in get shard info from metaserver\", zap.String(\"name\", name), zap.String(\"shid\", shid), zap.Error(err))\n\t\t\t\t\t// TODO failed = append(failed, shid)\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\n\t\t\t\tinfo := &ShardInfo{\n\t\t\t\t\tAppid:     id,\n\t\t\t\t\tShardid:   shid,\n\t\t\t\t\tInneed:    make(map[string]int),\n\t\t\t\t\tIpcounter: make(map[string]int),\n\t\t\t\t\tMaster: &InstanceInfo{\n\t\t\t\t\t\tID:     shinfo.Master,\n\t\t\t\t\t\tRegion: shinfo.MasterRegion,\n\t\t\t\t\t\tIP:     net.JoinHostPort(shinfo.MasterIP, shinfo.MasterPort),\n\t\t\t\t\t\tStatus: shinfo.MasterStatus,\n\t\t\t\t\t},\n\t\t\t\t}\n\t\t\t\tfor i, _ := range shinfo.Slaves {\n\t\t\t\t\tinfo.Slaves = append(info.Slaves, &InstanceInfo{\n\t\t\t\t\t\tID:     shinfo.Slaves[i],\n\t\t\t\t\t\tRegion: shinfo.SlaveRegions[i],\n\t\t\t\t\t\tIP:     net.JoinHostPort(shinfo.SlaveIPs[i], shinfo.SlavePorts[i]),\n\t\t\t\t\t\tStatus: shinfo.SlaveStatus[i],\n\t\t\t\t\t})\n\t\t\t\t}\n\n\t\t\t\t// check ip vaild\n\t\t\t\tinfo.Ipcounter[net.ParseIP(shinfo.MasterIP).Mask(net.IPv4Mask(255, 255, 255, 0)).String()] = 1\n\t\t\t\tfor _, sip := range shinfo.SlaveIPs {\n\t\t\t\t\tinfo.Ipcounter[net.ParseIP(sip).Mask(net.IPv4Mask(255, 255, 255, 0)).String()]++\n\t\t\t\t}\n\n\t\t\t\tlack := 0\n\t\t\t\tbroken := 0\n\t\t\t\tif *Master {\n\t\t\t\t\tatomic.AddInt64(&total, 1)\n\t\t\t\t\t// lack and broken will only one will take effect\n\t\t\t\t\tif info.Master.ID == \"\" || info.Master.IP == \"\" {\n\t\t\t\t\t\tlack++\n\t\t\t\t\t} else if info.Master.Status != \"1\" || !PingInstance(info.Master.IP) {\n\t\t\t\t\t\tif info.Master.Status != \"1\" {\n\t\t\t\t\t\t\tLogger.Error(\"check instance status invalid\", zap.String(\"shardid\", info.Shardid),\n\t\t\t\t\t\t\t\tzap.String(\"stat\", info.Master.Status), zap.String(\"region\", info.Master.Region))\n\t\t\t\t\t\t}\n\t\t\t\t\t\tbroken++\n\t\t\t\t\t}\n\t\t\t\t\tif lack > 0 || broken > 0 {\n\t\t\t\t\t\tinfo.Inneed[info.Master.Region]++\n\t\t\t\t\t}\n\t\t\t\t} else { //slave mode\n\t\t\t\t\tatomic.AddInt64(&total, int64(len(info.Slaves)))\n\t\t\t\t\tif *Multi { //multi mode\n\t\t\t\t\t\t// prepare region map\n\t\t\t\t\t\trmap := make(map[string]int)\n\t\t\t\t\t\tfor _, r := range strings.Split(strings.ToLower(clusterinfo.Message.Idcs), \"|\") {\n\t\t\t\t\t\t\trmap[r] = 2\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// set for mainidc only need 1 slave\n\t\t\t\t\t\trmap[strings.ToLower(clusterinfo.Message.MainIdc)] = 1\n\n\t\t\t\t\t\t// put real region instance count into region map\n\t\t\t\t\t\tfor _, s := range info.Slaves {\n\t\t\t\t\t\t\tif _, ok := rmap[s.Region]; !ok {\n\t\t\t\t\t\t\t\tLogger.Error(\"error region not found\", zap.String(\"region\", s.Region), zap.String(\"idcs\", clusterinfo.Message.Idcs))\n\t\t\t\t\t\t\t\tcontinue\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\trmap[s.Region]--\n\n\t\t\t\t\t\t\tif s.Status != \"1\" || !PingInstance(s.IP) {\n\t\t\t\t\t\t\t\tbroken++\n\t\t\t\t\t\t\t\tinfo.Inneed[s.Region]++\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// check every region\n\t\t\t\t\t\tfor r, n := range rmap {\n\t\t\t\t\t\t\tif n > 0 {\n\t\t\t\t\t\t\t\tif r == clusterinfo.Message.MainIdc {\n\t\t\t\t\t\t\t\t\tlack++\n\t\t\t\t\t\t\t\t\tinfo.Inneed[r]++\n\t\t\t\t\t\t\t\t} else 
{\n\t\t\t\t\t\t\t\t\tlack += n\n\t\t\t\t\t\t\t\t\tinfo.Inneed[r] += n\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t// nothing lack, counting rich\n\t\t\t\t\t\tif lack == 0 {\n\t\t\t\t\t\t\tfor _, n := range rmap {\n\t\t\t\t\t\t\t\tlack += n\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else { //single mode\n\t\t\t\t\t\tlack = 1 - len(info.Slaves)\n\t\t\t\t\t\tfor _, s := range info.Slaves {\n\t\t\t\t\t\t\tif s.Status != \"1\" || !PingInstance(s.IP) {\n\t\t\t\t\t\t\t\tbroken++\n\t\t\t\t\t\t\t\tinfo.Inneed[s.Region]++\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif lack > 0 {\n\t\t\t\t\t\t\t// only main idc, so just add\n\t\t\t\t\t\t\tinfo.Inneed[clusterinfo.Message.MainIdc]++\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif lack > 0 || broken > 0 {\n\t\t\t\t\tfailed = append(failed, shid)\n\t\t\t\t}\n\n\t\t\t\tmlock.Lock()\n\t\t\t\tmsgmap[shid] = info\n\t\t\t\tmlock.Unlock()\n\t\t\t\tif broken > 0 {\n\t\t\t\t\tLogger.Info(\"broken instance\", zap.String(\"appid\", id), zap.String(\"shid\", shid), zap.Int(\"broken\", broken))\n\t\t\t\t\tatomic.AddInt64(&sumbroken, int64(broken))\n\t\t\t\t}\n\t\t\t\tif lack > 0 {\n\t\t\t\t\tLogger.Info(\"lack instance\", zap.String(\"appid\", id), zap.String(\"shid\", shid), zap.Int(\"lack\", lack))\n\t\t\t\t\tatomic.AddInt64(&sum, int64(lack))\n\t\t\t\t} else if lack < 0 {\n\t\t\t\t\tLogger.Info(\"rich instance\", zap.String(\"appid\", id), zap.String(\"shid\", shid), zap.Int(\"rich\", -lack))\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif *Fix {\n\t\t\t\tfor _, v := range failed {\n\t\t\t\t\twhitelist[v] = 1\n\t\t\t\t}\n\t\t\t}\n\t\t\tif len(failed) == 0 {\n\t\t\t\tLogger.Info(\"Passed, Well Done!\", zap.String(\"appid\", id))\n\t\t\t} else {\n\t\t\t\tatomic.AddInt64(&totalfailed, int64(len(failed)))\n\t\t\t\tLogger.Warn(\"Failed check appid\", zap.String(\"appid\", id), zap.Strings(\"shardid\", failed), zap.Int(\"failed\", len(failed)))\n\t\t\t}\n\t\t}(i)\n\t}\n\tiwg.Wait()\n\tLogger.Info(\"check cluster done, total failed\", zap.Int64(\"total\", total), zap.Int64(\"lacks\", sum),\n\t\tzap.Int64(\"brokens\", sumbroken), zap.Float64(\"broken_rate\", float64(sumbroken)/float64(total)))\n\treturn msgmap, int(sum), int(sumbroken)\n}\n\nfunc PingInstance(ip string) bool {\n\tif !*Strict {\n\t\treturn true\n\t}\n\tif ip == \"\" {\n\t\tLogger.Error(\"error in ping,  nil ip\")\n\t\treturn false\n\t}\n\tnc, err := net.Dial(\"tcp\", ip)\n\tif err != nil {\n\t\tLogger.Error(\"error in ping\", zap.Error(err))\n\t\treturn false\n\t}\n\tdefer nc.Close()\n\t// do get a\n\tn, err := nc.Write([]byte(\"*2\\r\\n$3\\r\\nget\\r\\n$1\\r\\na\\r\\n\"))\n\tif n != 20 || err != nil {\n\t\tLogger.Error(\"error in get\", zap.Int(\"n\", n), zap.Error(err))\n\t\treturn false\n\t}\n\tbts := make([]byte, 1)\n\tn, err = nc.Read(bts)\n\tif err != nil || n != 1 {\n\t\tLogger.Error(\"error in resp\", zap.Int(\"n\", n), zap.Error(err))\n\t\treturn false\n\t}\n\treturn true\n}\n\ntype CheckClusterProxy struct {\n\tSuccess bool `json:\"success\"`\n\tMessage []struct {\n\t\tIP  string `json:\"ip\"`\n\t\tCDN string `json:\"cdn\"`\n\t}\n}\n\n// return region -> numbers\nfunc checkClusterProxy(id string, n int) (map[string]int, int) {\n\tr := make(map[string]int)\n\turl := \"http://bdrp.baidu.com/api/shard/redis/getProxyByAppId?app_id=%s\"\n\treq := fmt.Sprintf(url, id)\n\tresp, err := http.Get(req)\n\tif err != nil {\n\t\tLogger.Error(\"error in get cluster info\", zap.Error(err))\n\t\treturn r, 0\n\t}\n\n\tresult := &CheckClusterProxy{}\n\tbts, _ := ioutil.ReadAll(resp.Body)\n\tif err = 
json.Unmarshal(bts, result); err != nil {\n\t\tLogger.Error(\"error in get cluster info\", zap.Error(err))\n\t\treturn r, 0\n\t}\n\n\tfor _, v := range result.Message {\n\t\tif m, ok := r[v.CDN]; !ok {\n\t\t\tr[v.CDN] = 1\n\t\t} else {\n\t\t\tr[v.CDN] = m + 1\n\t\t}\n\t}\n\n\tlacks := 0\n\tcounter := 0\n\n\tfor k, v := range r {\n\t\tcounter += v\n\t\tif l := n - v; l > 0 {\n\t\t\tlacks += l\n\t\t}\n\t\tLogger.Info(\"proxy checking\", zap.String(\"appid\", id), zap.String(\"region\", k), zap.Int(\"n\", v), zap.Int(\"lack\", n-v))\n\t}\n\tLogger.Info(\"proxy checked\", zap.String(\"appid\", id), zap.Int(\"total\", counter), zap.Int(\"lacks\", lacks))\n\treturn r, lacks\n}\n"}}


[Trace - 1:17:04 PM] Received notification 'window/logMessage'.
Params: {"type":4,"message":"pkg command-line-arguments, files: [/Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool/main.go], errors: []"}


[Trace - 1:17:04 PM] Received notification 'textDocument/publishDiagnostics'.
Params: {"uri":"file:///Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool/main.go","diagnostics":[{"range":{"start":{"line":957,"character":17},"end":{"line":957,"character":24}},"severity":1,"source":"LSP","message":"undeclared name: GetInfo"},{"range":{"start":{"line":957,"character":25},"end":{"line":957,"character":32}},"severity":1,"source":"LSP","message":"undeclared name: id2name"},{"range":{"start":{"line":962,"character":19},"end":{"line":962,"character":31}},"severity":1,"source":"LSP","message":"undeclared name: getShardInfo"},{"range":{"start":{"line":218,"character":2},"end":{"line":218,"character":15}},"severity":1,"source":"LSP","message":"undeclared name: InitDatabases"}]}

Then I tried go-to-definition:


[Trace - 1:17:07 PM] Sending request 'textDocument/definition - (2)'.
Params: {"textDocument":{"uri":"file:///Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool/main.go"},"position":{"character":7,"line":218}}


[Error - 1:17:07 PM] send textDocument/definition#2 no object for ident InitDatabases


[Trace - 1:17:09 PM] Sending request 'textDocument/definition - (3)'.
Params: {"textDocument":{"uri":"file:///Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool/main.go"},"position":{"character":7,"line":218}}


[Error - 1:17:09 PM] send textDocument/definition#3 no object for ident InitDatabases


[Trace - 1:17:13 PM] Sending request 'textDocument/definition - (4)'.
Params: {"textDocument":{"uri":"file:///Users/arthur/golang/src/icode.baidu.com/baidu/personal-code/bdrp-deploy-tool/main.go"},"position":{"character":7,"line":218}}


[Error - 1:17:13 PM] send textDocument/definition#4 no object for ident InitDatabases



@arthurkiller

@stamblerre ping

@stamblerre

@arthurkiller: Your issue seems completely different from the original one filed above. Do you mind filing a separate issue with your log output?

@arthurkiller

arthurkiller commented Jul 10, 2019

NVM, but I still haven't figured out what is going wrong here.

@stamblerre

@arthurkiller: I just took another look at your log output. It seems that you are using modules, but you are in a directory outside of your $GOPATH without a go.mod file. gopls will not work correctly with non-standard library imports in such a state. You will need to add a go.mod file to your project.
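For example, running something like this from the project root creates one (module path is illustrative):

go mod init example.com/myproject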

@stamblerre

Ah, I see you are actually in your $GOPATH. Let's continue the discussion in the separate issue. This one relates to a specific scenario in which you have multiple copies of the Go source code on your machine, whereas yours seems to be a more general problem with gopls.

@arthurkiller

OK, I will open another issue. Thanks a lot!

@stamblerre

Investigated this further and filed #33548 as a follow-up to this issue. Closing.

@arthurkiller

Awesome, now I can use gopls again! Really fast. Happy hacking again. lol

@golang golang locked and limited conversation to collaborators Aug 12, 2020