runtime/cgo: pthread_create failed: Resource temporarily unavailable #24484

Closed
fiber opened this issue Mar 22, 2018 · 29 comments
Labels:
  compiler/runtime: Issues related to the Go compiler and/or runtime.
  NeedsInvestigation: Someone must examine and confirm this is a valid issue and not a duplicate of an existing one.
  WaitingForInfo: Issue is not actionable because of missing required information, which needs to be provided.
Milestone: Go1.12
Comments

@fiber commented Mar 22, 2018

Please answer these questions before submitting your issue. Thanks!

What version of Go are you using (go version)?

go version go1.10 linux/amd64

Does this issue reproduce with the latest release?

go1.10 is the latest release.

What operating system and processor architecture are you using (go env)?

GOARCH="amd64"
GOOS="linux"

What did you do?

If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.

My process starts around 500 child processes. The number of OS-level threads creeps up slowly until it reaches around 10k, at which point child processes start to die with the message below.

Process limits seem to be set sufficiently high:

Limit          Soft Limit  Hard Limit  Units
Max processes  257093      257093      processes

$ cat /proc/sys/kernel/threads-max
514187

What did you expect to see?

no crash ;)

What did you see instead?

runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7f24685ab428 m=44 sigcode=18446744073709551610
goroutine 0 [idle]:
runtime: unknown pc 0x7f24685ab428
stack: frame={sp:0x7f2407ffea08, fp:0x0} stack=[0x7f24077ff2f0,0x7f2407ffeef0)
00007f2407ffe908: 00007f2468d84168 00007f2407ffea68
00007f2407ffe918: 00007f2468b67b1f 0000000000000002
00007f2407ffe928: 00007f2468d79a80 0000000000000005
00007f2407ffe938: 0000000000f021e0 00007f23d80008c0
00007f2407ffe948: 00000000000000f1 0000000000000011
00007f2407ffe958: 0000000000000000 0000000000c2597a
00007f2407ffe968: 00007f2468b6cac6 0000000000000005
00007f2407ffe978: 0000000000000000 0000000100000000
00007f2407ffe988: 00007f246857cde0 00007f2407ffeb20
00007f2407ffe998: 00007f2468b74923 000000ffffffffff
00007f2407ffe9a8: 0000000000000000 0000000000000000
00007f2407ffe9b8: 0000000000000000 2525252525252525
00007f2407ffe9c8: 2525252525252525 0000000000000000
00007f2407ffe9d8: 00007f246893b700 0000000000c2597a
00007f2407ffe9e8: 00007f23d80008c0 00000000000000f1
00007f2407ffe9f8: 0000000000000011 0000000000000000
00007f2407ffea08: <00007f24685ad02a 0000000000000020
00007f2407ffea18: 0000000000000000 0000000000000000
00007f2407ffea28: 0000000000000000 0000000000000000
00007f2407ffea38: 0000000000000000 0000000000000000
00007f2407ffea48: 0000000000000000 0000000000000000
00007f2407ffea58: 0000000000000000 0000000000000000
00007f2407ffea68: 0000000000000000 0000000000000000
00007f2407ffea78: 0000000000000000 0000000000000000
00007f2407ffea88: 0000000000000000 0000000000000000
00007f2407ffea98: 0000000000000000 0000000000000000
00007f2407ffeaa8: 00007f24685eebff 00007f246893b540
00007f2407ffeab8: 0000000000000001 00007f246893b5c3
00007f2407ffeac8: 00000000000000f1 0000000000000011
00007f2407ffead8: 00007f24685f0409 000000000000000a
00007f2407ffeae8: 00007f246866d2dd 000000000000000a
00007f2407ffeaf8: 00007f246893c770 0000000000000000
runtime: unknown pc 0x7f24685ab428
stack: frame={sp:0x7f2407ffea08, fp:0x0} stack=[0x7f24077ff2f0,0x7f2407ffeef0)
00007f2407ffe908: 00007f2468d84168 00007f2407ffea68
00007f2407ffe918: 00007f2468b67b1f 0000000000000002
00007f2407ffe928: 00007f2468d79a80 0000000000000005
00007f2407ffe938: 0000000000f021e0 00007f23d80008c0
00007f2407ffe948: 00000000000000f1 0000000000000011
00007f2407ffe958: 0000000000000000 0000000000c2597a
00007f2407ffe968: 00007f2468b6cac6 0000000000000005
00007f2407ffe978: 0000000000000000 0000000100000000
00007f2407ffe988: 00007f246857cde0 00007f2407ffeb20
00007f2407ffe998: 00007f2468b74923 000000ffffffffff
00007f2407ffe9a8: 0000000000000000 0000000000000000
00007f2407ffe9b8: 0000000000000000 2525252525252525
00007f2407ffe9c8: 2525252525252525 0000000000000000
00007f2407ffe9d8: 00007f246893b700 0000000000c2597a
00007f2407ffe9e8: 00007f23d80008c0 00000000000000f1
00007f2407ffe9f8: 0000000000000011 0000000000000000
00007f2407ffea08: <00007f24685ad02a 0000000000000020
00007f2407ffea18: 0000000000000000 0000000000000000
00007f2407ffea28: 0000000000000000 0000000000000000
00007f2407ffea38: 0000000000000000 0000000000000000
00007f2407ffea48: 0000000000000000 0000000000000000
00007f2407ffea58: 0000000000000000 0000000000000000
00007f2407ffea68: 0000000000000000 0000000000000000
00007f2407ffea78: 0000000000000000 0000000000000000
00007f2407ffea88: 0000000000000000 0000000000000000
00007f2407ffea98: 0000000000000000 0000000000000000
00007f2407ffeaa8: 00007f24685eebff 00007f246893b540
00007f2407ffeab8: 0000000000000001 00007f246893b5c3
00007f2407ffeac8: 00000000000000f1 0000000000000011
00007f2407ffead8: 00007f24685f0409 000000000000000a
00007f2407ffeae8: 00007f246866d2dd 000000000000000a
00007f2407ffeaf8: 00007f246893c770 0000000000000000
goroutine 632 [running]:
runtime.systemstack_switch()
/opt/go/1.10.0/go/src/runtime/asm_amd64.s:363 fp=0xc4204f6d50 sp=0xc4204f6d48 pc=0x457270
runtime.gcMarkTermination(0x3ff75e93c8506a48)
/opt/go/1.10.0/go/src/runtime/mgc.go:1647 +0x407 fp=0xc4204f6f20 sp=0xc4204f6d50 pc=0x41a907
runtime.gcMarkDone()
/opt/go/1.10.0/go/src/runtime/mgc.go:1513 +0x22c fp=0xc4204f6f48 sp=0xc4204f6f20 pc=0x41a49c
runtime.gcBgMarkWorker(0xc420048500)
/opt/go/1.10.0/go/src/runtime/mgc.go:1912 +0x2e7 fp=0xc4204f6fd8 sp=0xc4204f6f48 pc=0x41b417
runtime.goexit()
/opt/go/1.10.0/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc4204f6fe0 sp=0xc4204f6fd8 pc=0x459de1
created by runtime.gcBgMarkStartWorkers
/opt/go/1.10.0/go/src/runtime/mgc.go:1723 +0x79

@ianlancetaylor (Contributor)

There are many reasons why a program might leak threads. We need to know something about your programs. Ideally, you would give us code that we can use to recreate the problem. Thanks.
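
A quick way to confirm a thread leak, as opposed to a goroutine leak, is to poll the runtime's threadcreate profile alongside the goroutine count. A minimal sketch (the 10-second interval is an arbitrary choice):

package main

import (
	"fmt"
	"runtime"
	"runtime/pprof"
	"time"
)

func main() {
	threads := pprof.Lookup("threadcreate")
	for range time.Tick(10 * time.Second) {
		// The threadcreate profile counts OS threads the runtime has
		// created; the count never shrinks, so steady growth here with
		// a flat goroutine count points at a thread leak.
		fmt.Printf("threads created: %d, goroutines: %d\n",
			threads.Count(), runtime.NumGoroutine())
	}
}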

@ianlancetaylor ianlancetaylor added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label Mar 22, 2018
@ianlancetaylor ianlancetaylor added this to the Go1.11 milestone Mar 22, 2018
@fiber (Author) commented Mar 22, 2018

I'm afraid I cannot publicly share much detail. The code and setup are fairly complex, so they are difficult to share either way. The instance processes run in separate network namespaces and exchange UDP and ICMP with approximately 100k peers in total. After startup there is almost no spawning of new child processes. I don't see any goroutines leaking. I have 50+ CPUs in the server, and I believe I can reduce the thread bleeding substantially by setting a lower GOMAXPROCS for the instances. If you can share some likely causes or areas to check, I may be able to rule some of them out.
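
For reference, GOMAXPROCS can be lowered either through the environment variable or at runtime; a minimal sketch (the value 4 is an arbitrary example):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Equivalent to starting the process with GOMAXPROCS=4. This bounds
	// only the threads executing Go code; threads blocked in system
	// calls or cgo are not limited by it.
	old := runtime.GOMAXPROCS(4)
	fmt.Printf("GOMAXPROCS lowered from %d to 4\n", old)
}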

@ianlancetaylor (Contributor)

The most common reason for a new thread to be created is that all the existing threads are blocked in system calls or in calls to C code via cgo (the error message shows that your application uses cgo). cgo calls would be the first place to look. See if any of those calls do not return.
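
That behavior is easy to demonstrate: each goroutine blocked in a raw system call pins its OS thread, and the runtime starts another thread to keep running Go code. A Linux-only sketch (syscall.Nanosleep is used precisely because, unlike time.Sleep, it blocks the thread; expect roughly one extra thread per blocked goroutine):

package main

import (
	"fmt"
	"runtime/pprof"
	"sync"
	"syscall"
)

func main() {
	before := pprof.Lookup("threadcreate").Count()

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Nanosleep blocks the OS thread for 2 seconds; time.Sleep
			// would merely park the goroutine and release the thread.
			ts := syscall.Timespec{Sec: 2}
			syscall.Nanosleep(&ts, nil)
		}()
	}
	wg.Wait()

	after := pprof.Lookup("threadcreate").Count()
	fmt.Printf("threads before: %d, after: %d\n", before, after)
}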

@fiber (Author) commented Mar 22, 2018

I believe there is no cgo being used outside the standard library. I have compiled with CGO_ENABLED=0 now and will monitor the situation.

@bcmills bcmills added the WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided. label Mar 22, 2018
@fiber (Author) commented Apr 11, 2018

Compiling with CGO_ENABLED=0 and setting a low GOMAXPROCS reduced the overall number of threads to about a third. Instead of peaking at 12k threads, I'm now down to 4.5k, but still (very) slowly creeping up. Still investigating.

@ianlancetaylor ianlancetaylor modified the milestones: Go1.11, Go1.12 Jun 28, 2018
@kolyshkin (Contributor)

@fiber you are probably hitting the kernel.pid_max sysctl limit. But raising it is not a solution, as it might lead to the overall system getting stuck (unavailable via ssh, etc.).

The real problem, though, is why the Go runtime chooses to die upon receiving EAGAIN from pthread_create(). I have only started to look at it, but it appears that a trivial fork bomb run on the system can cause a Go app running on the same system to crash, even if there is no goroutine leak.

@ianlancetaylor (Contributor) commented Oct 18, 2018

In a Go program that uses cgo, new threads are created using pthread_create. If pthread_create fails, the Go runtime will retry up to 20 times. The relevant code is at https://golang.org/src/runtime/cgo/gcc_libinit.c#L91.

@kolyshkin (Contributor)

If pthread_create fails, the Go runtime will retry up to 20 times.

...on EAGAIN only (which I guess is right), and if it still fails, it calls abort(). It is unfortunate that there is no way to handle this more gracefully. It does not matter how many threads the running program has, or whether there is a goroutine leak: the program aborts if it can't create a new OS thread.

In my case it is the Docker daemon that gets aborted once the kernel.pid_max limit is hit (which is easy to reach by running many containers with many threads), and I don't see any practical way to avoid that.
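
For what it's worth, the runtime has its own thread ceiling, runtime/debug.SetMaxThreads, which defaults to 10000. Exceeding it also crashes the program, so it is not a graceful fallback either, but the bound is at least chosen by the program rather than inherited from the machine-wide pid_max. A minimal sketch (2000 is an arbitrary example value):

package main

import "runtime/debug"

func main() {
	// Crash at 2000 threads instead of at whatever kernel.pid_max
	// happens to be; the failure mode is the same, but it is local to
	// this process and reproducible.
	debug.SetMaxThreads(2000)
	// ... rest of the program ...
}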

@ianlancetaylor (Contributor)

Goroutines are not threads. There are normally many many more goroutines than threads. A goroutine leak won't in itself lead to this problem. A thread leak will.

We can't fix this problem until we understand where the thread leak is coming from in the original program.

@xianglinghui

@ianlancetaylor Our service crashed with the same error:

runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x3927232625 m=39 sigcode=18446744073709551610

Our Go version is 1.11.1.

We don't use cgo in our code, and the libraries we rely on don't seem to either. Does Go itself use cgo in some scenarios?

@SjonHortensius
@xianglinghui - I have the same issue but as @ianlancetaylor explained

In a Go program that uses cgo, new threads are created using pthread_create

E.g., if you disable cgo, new threads are created by a different mechanism rather than pthread_create.

For my application (which runs a very small number of concurrent goroutines and no explicit cgo, but makes a lot of os/exec calls), disabling cgo fixed a lot of the Resource temporarily unavailable crashes as well. I'm not sure what causes this or how expected it is, but I'm just disabling cgo from now on.

@fiber (Author) commented Oct 19, 2018 via email

@ianlancetaylor (Contributor)

@xianglinghui I believe it's platform dependent. If you are running on Darwin, then the standard library will use cgo by default, for DNS requests, unless you build with CGO_ENABLED=0.

This bug is still waiting for a reproduction case. If you have a case where a program crashes by running out of threads for no clear reason, please share the code if you can so that we can try to reproduce it ourselves. Don't forget to provide all the relevant system details.

@fiber Large numbers of concurrent DNS requests did previously cause large numbers of threads to be created, but we fixed that, at least partially, in #25694. Though we could perhaps extend that fix to also check RLIMIT_NPROC.
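
For the DNS case specifically, the cgo resolver can be sidestepped without rebuilding by forcing the pure-Go resolver; a minimal sketch:

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// PreferGo routes lookups through the pure-Go resolver instead of
	// the cgo path that hands each request to a C library thread. The
	// same effect is available process-wide via GODEBUG=netdns=go.
	r := &net.Resolver{PreferGo: true}
	addrs, err := r.LookupHost(context.Background(), "golang.org")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}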

@tmm1 (Contributor) commented Nov 26, 2019

I'm seeing similar issues on some 32-bit linux/android systems:

runtime/cgo: pthread_create failed: Try again
SIGABRT: abort
PC=0x223ea61c64 m=12 sigcode=0

goroutine 0 [idle]:
runtime: unknown pc 0x223ea61c64
stack: frame={sp:0x22428d0f30, fp:0x0} stack=[0x22427d57d0,0x22428d13d0)

Further investigation shows that the limited address space is being exhausted by other parts of my program which use mmap. The linux kernel can be compiled with VMSPLIT_1G or VMSPLIT_2G which limits the address space available to user space, and processes can also be run with RLIMIT_AS set.

One thing that would help diagnose these issues is if /proc/self/smaps were dumped along with the stack when the pthread_create failure occurs. Is there a good place in the runtime that could be patched to do so?

Or, is it possible for a go program to detect this failure and run some more code before the process dies?
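
The abort() itself cannot be intercepted from Go, but a watchdog goroutine can dump /proc/self/smaps while the thread count is still climbing, before the hard failure. A hypothetical Linux-only sketch (watchSmaps and the threshold of 1000 are illustrative, not part of any runtime API):

package main

import (
	"fmt"
	"io"
	"os"
	"runtime/pprof"
	"time"
)

// watchSmaps dumps /proc/self/smaps once the number of threads the
// runtime has created crosses threshold.
func watchSmaps(threshold int) {
	for range time.Tick(5 * time.Second) {
		if pprof.Lookup("threadcreate").Count() >= threshold {
			f, err := os.Open("/proc/self/smaps")
			if err != nil {
				fmt.Fprintln(os.Stderr, "open smaps:", err)
				return
			}
			defer f.Close()
			io.Copy(os.Stderr, f)
			return
		}
	}
}

func main() {
	go watchSmaps(1000)
	// ... rest of the program ...
	select {}
}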

@ianlancetaylor (Contributor)

@tmm1 The code in question is in src/runtime/cgo/gcc_libinit.c.

@tmm1 (Contributor) commented Nov 26, 2019

@tmm1 The code in question is in src/runtime/cgo/gcc_libinit.c.

Thanks! Looks like here's where the extra info could be dumped:

int err = _cgo_try_pthread_create(&p, NULL, func, arg);
if (err != 0) {
	fprintf(stderr, "pthread_create failed: %s", strerror(err));
	abort();
}

@zainabb12345 commented Nov 6, 2020

I was trying to solve a HackerRank practice problem.
What I expected: a failed test (or something similar).

What I Got:
runtime/cgo: runtime/cgo: pthread_create failed: Resource temporarily unavailable
pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7febdf25d7bb m=12 sigcode=18446744073709551610

goroutine 0 [idle]:
runtime: unknown pc 0x7febdf25d7bb
stack: frame={sp:0x7feb9bffc850, fp:0x0} stack=[0x7feb99ffd288,0x7feb9bffce88)
00007feb9bffc750: 0000000000000005 0000ffff00001fa0
00007feb9bffc760: 00007feb9bffcc60 00007febd8d2f010
00007feb9bffc770: 0000000000000000 0000000000b16de0
00007feb9bffc780: 0000000000000000 00007febdf41cb00
00007feb9bffc790: 0000000000000005 0000000000000000
00007feb9bffc7a0: 00007feb9bffcb40 00007febdf236f98
00007feb9bffc7b0: 00007feb9bffcb70 00007febdf4234ba
00007feb9bffc7c0: 0000000000000000 0000000000000000
00007feb9bffc7d0: 0000000000000000 00007feb9bffcc60
00007feb9bffc7e0: 2525252525252525 2525252525252525
00007feb9bffc7f0: 000000ffffffffff 0000000000000000
00007feb9bffc800: 000000ffffffffff 0000000000000000
00007feb9bffc810: 4d6c6f72746e6f43 6f6e3d7265747361
00007feb9bffc820: 7273752f3a6e6962 732f6c61636f6c2f
00007feb9bffc830: 7273752f3a6e6962 622f6c61636f6c2f
00007feb9bffc840: 2f7273752f3a6e69 73752f3a6e696273
00007feb9bffc850: <0000000000000000 6e69622f3a6e6962
00007feb9bffc860: 6e75720000000000 6f67632f656d6974
00007feb9bffc870: 0000000000000000 0000000000000000
00007feb9bffc880: 000000c0000289c0 000000000000000d
00007feb9bffc890: 000000c000026500 0000000000000015
00007feb9bffc8a0: 000000c0000a01e0 0000000000000027
00007feb9bffc8b0: 0405060700010203 0c0d0e0f08090a0b
00007feb9bffc8c0: 000000c000112940 000000c000112e00
00007feb9bffc8d0: fffffffe7fffffff ffffffffffffffff
00007feb9bffc8e0: ffffffffffffffff ffffffffffffffff
00007feb9bffc8f0: ffffffffffffffff ffffffffffffffff
00007feb9bffc900: ffffffffffffffff ffffffffffffffff
00007feb9bffc910: ffffffffffffffff ffffffffffffffff
00007feb9bffc920: ffffffffffffffff ffffffffffffffff
00007feb9bffc930: ffffffffffffffff ffffffffffffffff
00007feb9bffc940: ffffffffffffffff ffffffffffffffff
runtime: unknown pc 0x7febdf25d7bb
stack: frame={sp:0x7feb9bffc850, fp:0x0} stack=[0x7feb99ffd288,0x7feb9bffce88)
00007feb9bffc750: 0000000000000005 0000ffff00001fa0
00007feb9bffc760: 00007feb9bffcc60 00007febd8d2f010
00007feb9bffc770: 0000000000000000 0000000000b16de0
00007feb9bffc780: 0000000000000000 00007febdf41cb00
00007feb9bffc790: 0000000000000005 0000000000000000
00007feb9bffc7a0: 00007feb9bffcb40 00007febdf236f98
00007feb9bffc7b0: 00007feb9bffcb70 00007febdf4234ba
00007feb9bffc7c0: 0000000000000000 0000000000000000
00007feb9bffc7d0: 0000000000000000 00007feb9bffcc60
00007feb9bffc7e0: 2525252525252525 2525252525252525
00007feb9bffc7f0: 000000ffffffffff 0000000000000000
00007feb9bffc800: 000000ffffffffff 0000000000000000
00007feb9bffc810: 4d6c6f72746e6f43 6f6e3d7265747361
00007feb9bffc820: 7273752f3a6e6962 732f6c61636f6c2f
00007feb9bffc830: 7273752f3a6e6962 622f6c61636f6c2f
00007feb9bffc840: 2f7273752f3a6e69 73752f3a6e696273
00007feb9bffc850: <0000000000000000 6e69622f3a6e6962
00007feb9bffc860: 6e75720000000000 6f67632f656d6974
00007feb9bffc870: 0000000000000000 0000000000000000
00007feb9bffc880: 000000c0000289c0 000000000000000d
00007feb9bffc890: 000000c000026500 0000000000000015
00007feb9bffc8a0: 000000c0000a01e0 0000000000000027
00007feb9bffc8b0: 0405060700010203 0c0d0e0f08090a0b
00007feb9bffc8c0: 000000c000112940 000000c000112e00
00007feb9bffc8d0: fffffffe7fffffff ffffffffffffffff
00007feb9bffc8e0: ffffffffffffffff ffffffffffffffff
00007feb9bffc8f0: ffffffffffffffff ffffffffffffffff
00007feb9bffc900: ffffffffffffffff ffffffffffffffff
00007feb9bffc910: ffffffffffffffff ffffffffffffffff
00007feb9bffc920: ffffffffffffffff ffffffffffffffff
00007feb9bffc930: ffffffffffffffff ffffffffffffffff
00007feb9bffc940: ffffffffffffffff ffffffffffffffff

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc0002f5438)
/usr/local/go/src/runtime/sema.go:56 +0x42
sync.(*WaitGroup).Wait(0xc0002f5430)
/usr/local/go/src/sync/waitgroup.go:130 +0x64
cmd/go/internal/work.(*Builder).Do(0xc0000bd860, 0xc0000c7040)
/usr/local/go/src/cmd/go/internal/work/exec.go:186 +0x3c5
cmd/go/internal/work.runBuild(0xea3280, 0xc0000201a0, 0x1, 0x1)
/usr/local/go/src/cmd/go/internal/work/build.go:387 +0x6e2
main.main()
/usr/local/go/src/cmd/go/main.go:189 +0x57f

goroutine 6 [syscall]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:29 +0x41

goroutine 21 [semacquire]:
sync.runtime_SemacquireMutex(0xecabb4, 0xc0004de000, 0x1)
/usr/local/go/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xecabb0)
/usr/local/go/src/sync/mutex.go:138 +0xfc
sync.(*Mutex).Lock(...)
/usr/local/go/src/sync/mutex.go:81
sync.(*RWMutex).Lock(0xecabb0)
/usr/local/go/src/sync/rwmutex.go:98 +0x97
syscall.forkExec(0xc0004ac000, 0x2a, 0xc0004ae020, 0x2, 0x2, 0xc0003fad68, 0x17, 0x19221bee00010200, 0xc000408380)
/usr/local/go/src/syscall/exec_unix.go:193 +0x200
syscall.StartProcess(...)
/usr/local/go/src/syscall/exec_unix.go:248
os.startProcess(0xc0004ac000, 0x2a, 0xc0004ae020, 0x2, 0x2, 0xc0003faf00, 0x0, 0x0, 0x0)
/usr/local/go/src/os/exec_posix.go:51 +0x2b0
os.StartProcess(0xc0004ac000, 0x2a, 0xc0004ae020, 0x2, 0x2, 0xc0003faf00, 0x17, 0x0, 0x0)
/usr/local/go/src/os/exec.go:102 +0x7c
os/exec.(*Cmd).Start(0xc0004b2000, 0xc0004b6001, 0xc0004a0060)
/usr/local/go/src/os/exec/exec.go:416 +0x50c
os/exec.(*Cmd).Run(0xc0004b2000, 0xc0004a0060, 0x16)
/usr/local/go/src/os/exec/exec.go:338 +0x2b
cmd/go/internal/work.(*Builder).toolID(0xc0000bd860, 0xa3c6df, 0x7, 0xb, 0xc0003fb3c8)
/usr/local/go/src/cmd/go/internal/work/buildid.go:193 +0x44d
cmd/go/internal/work.(*Builder).buildActionID(0xc0000bd860, 0xc0002f3400, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/cmd/go/internal/work/exec.go:242 +0x1215
cmd/go/internal/work.(*Builder).build(0xc0000bd860, 0xc0002f3400, 0x0, 0x0)
/usr/local/go/src/cmd/go/internal/work/exec.go:397 +0x52a6
cmd/go/internal/work.(*Builder).Do.func2(0xc0002f3400)
/usr/local/go/src/cmd/go/internal/work/exec.go:117 +0x36d
cmd/go/internal/work.(*Builder).Do.func3(0xc0002f5430, 0xc0000bd860, 0xc00036c940)
/usr/local/go/src/cmd/go/internal/work/exec.go:177 +0x79
created by cmd/go/internal/work.(*Builder).Do
/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 22 [select]:
cmd/go/internal/work.(*Builder).Do.func3(0xc0002f5430, 0xc0000bd860, 0xc00036c940)
/usr/local/go/src/cmd/go/internal/work/exec.go:167 +0xf6
created by cmd/go/internal/work.(*Builder).Do
/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 23 [semacquire]:
syscall.forkAndExecInChild1(0xc000434060, 0xc0004380e0, 0x3, 0x3, 0xc00046c000, 0x18, 0x18, 0x0, 0x0, 0xc0003fed68, ...)
/usr/local/go/src/syscall/exec_linux.go:180 +0x1f4
syscall.forkAndExecInChild(0xc000434060, 0xc0004380e0, 0x3, 0x3, 0xc00046c000, 0x18, 0x18, 0x0, 0x0, 0xc0003fed68, ...)
/usr/local/go/src/syscall/exec_linux.go:72 +0xcf
syscall.forkExec(0xc000434000, 0x2a, 0xc000438020, 0x2, 0x2, 0xc0003fed68, 0x17, 0x3a7c51d000010200, 0xc000456000)
/usr/local/go/src/syscall/exec_unix.go:201 +0x35b
syscall.StartProcess(...)
/usr/local/go/src/syscall/exec_unix.go:248
os.startProcess(0xc000434000, 0x2a, 0xc000438020, 0x2, 0x2, 0xc0003fef00, 0x0, 0x0, 0x0)
/usr/local/go/src/os/exec_posix.go:51 +0x2b0
os.StartProcess(0xc000434000, 0x2a, 0xc000438020, 0x2, 0x2, 0xc0003fef00, 0x17, 0x0, 0x0)
/usr/local/go/src/os/exec.go:102 +0x7c
os/exec.(*Cmd).Start(0xc000444000, 0xc000446001, 0xc000406060)
/usr/local/go/src/os/exec/exec.go:416 +0x50c
os/exec.(*Cmd).Run(0xc000444000, 0xc000406060, 0x16)
/usr/local/go/src/os/exec/exec.go:338 +0x2b
cmd/go/internal/work.(*Builder).toolID(0xc0000bd860, 0xa3c6df, 0x7, 0xb, 0xc0003ff3c8)
/usr/local/go/src/cmd/go/internal/work/buildid.go:193 +0x44d
cmd/go/internal/work.(*Builder).buildActionID(0xc0000bd860, 0xc0002f2b40, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/cmd/go/internal/work/exec.go:242 +0x1215
cmd/go/internal/work.(*Builder).build(0xc0000bd860, 0xc0002f2b40, 0x0, 0x0)
/usr/local/go/src/cmd/go/internal/work/exec.go:397 +0x52a6
cmd/go/internal/work.(*Builder).Do.func2(0xc0002f2b40)
/usr/local/go/src/cmd/go/internal/work/exec.go:117 +0x36d
cmd/go/internal/work.(*Builder).Do.func3(0xc0002f5430, 0xc0000bd860, 0xc00036c940)
/usr/local/go/src/cmd/go/internal/work/exec.go:177 +0x79
created by cmd/go/internal/work.(*Builder).Do
/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 24 [semacquire]:
os.(*Process).wait(0xc000434090, 0xa84348, 0xa84350, 0xa84340)
/usr/local/go/src/os/exec_unix.go:37 +0x75
os.(*Process).Wait(...)
/usr/local/go/src/os/exec.go:125
os/exec.(*Cmd).Wait(0xc0000b2160, 0x0, 0x0)
/usr/local/go/src/os/exec/exec.go:501 +0x60
os/exec.(*Cmd).Run(0xc0000b2160, 0xc000394060, 0x16)
/usr/local/go/src/os/exec/exec.go:341 +0x5c
cmd/go/internal/work.(*Builder).toolID(0xc0000bd860, 0xa3c6df, 0x7, 0xb, 0xc0003a33c8)
/usr/local/go/src/cmd/go/internal/work/buildid.go:193 +0x44d
cmd/go/internal/work.(*Builder).buildActionID(0xc0000bd860, 0xc0002f2140, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/cmd/go/internal/work/exec.go:242 +0x1215
cmd/go/internal/work.(*Builder).build(0xc0000bd860, 0xc0002f2140, 0x0, 0x0)
/usr/local/go/src/cmd/go/internal/work/exec.go:397 +0x52a6
cmd/go/internal/work.(*Builder).Do.func2(0xc0002f2140)
/usr/local/go/src/cmd/go/internal/work/exec.go:117 +0x36d
cmd/go/internal/work.(*Builder).Do.func3(0xc0002f5430, 0xc0000bd860, 0xc00036c940)
/usr/local/go/src/cmd/go/internal/work/exec.go:177 +0x79
created by cmd/go/internal/work.(*Builder).Do
/usr/local/go/src/cmd/go/internal/work/exec.go:164 +0x3a1

goroutine 25 [semacquire]:
os.(*Process).wait(0xc0003aa120, 0xa84348, 0xa84350, 0xa84340)
/usr/local/go/src/os/exec_unix.go:37 +0x75
os.(*Process).Wait(...)
/usr/local/go/src/os/exec.go:125
os/exec.(*Cmd).Wait(0xc0001b5e40, 0x0, 0x0)
/usr/local/go/src/os/exec/exec.go:501 +0x60
os/exec.(*Cmd).Run(0xc0001b5e40, 0xc00023cd50, 0x16)
/usr/loc

Here's the code:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strconv"
	"strings"
)

// Complete the migratoryBirds function below.
func migratoryBirds(arr []int32) int32 {
	var min int32 = 0
	return min
}

func main() {
	reader := bufio.NewReaderSize(os.Stdin, 16*1024*1024)

	stdout, err := os.Create(os.Getenv("OUTPUT_PATH"))
	checkError(err)

	defer stdout.Close()

	writer := bufio.NewWriterSize(stdout, 16*1024*1024)

	arrCount, err := strconv.ParseInt(strings.TrimSpace(readLine(reader)), 10, 64)
	checkError(err)

	arrTemp := strings.Split(strings.TrimSpace(readLine(reader)), " ")

	var arr []int32

	for i := 0; i < int(arrCount); i++ {
		arrItemTemp, err := strconv.ParseInt(arrTemp[i], 10, 64)
		checkError(err)
		arrItem := int32(arrItemTemp)
		arr = append(arr, arrItem)
	}

	result := migratoryBirds(arr)

	fmt.Fprintf(writer, "%d\n", result)

	writer.Flush()
}

func readLine(reader *bufio.Reader) string {
	str, _, err := reader.ReadLine()
	if err == io.EOF {
		return ""
	}

	return strings.TrimRight(string(str), "\r\n")
}

func checkError(err error) {
	if err != nil {
		panic(err)
	}
}

@ianlancetaylor (Contributor)

This can happen if your system is overloaded. Is the problem repeatable?

@zainabb12345

It's repeatable; it happens every time. I tried running different problems (like a different question), but it still keeps happening.

@pankaj-nayak

Hi, I am facing exactly the same issue as zainabb12345 while solving a problem on HackerRank.

@barath1997

Hi, I am facing exactly the same issue as zainabb12345 while solving a problem on HackerRank.

I am facing the same issue too. I was not around last week; some bug in HackerRank, I guess.

@davecheney (Contributor)

Thank you for commenting. I'm sorry you are also experiencing issues, but saying "me too" is not as helpful as giving complete information on what you tried to do: the program you wrote, what happened when you ran it, and the details of the machine you ran it on, what operating system, what version of Go, etc.

With this information it should be possible to locate the cause of the problem. Please consider updating your responses.

@ianlancetaylor (Contributor)

@zainabb12345 Thanks for providing a test case. However, the stack trace that you provided is from the go tool. It is not from your test case. I can build your test using go build foo.go without running out of threads.

Can you give us precise instructions for how we can reproduce the problem ourselves? Thanks.

@zainabb12345

@ianlancetaylor Thanks for your response. It had something to do with HackerRank and not with Go. It's working for me now too. Thanks a lot.

@rfratto commented Jul 30, 2021

I'm able to reproduce this if I use chpst with the following program:

package main

import "C"

func main() {
}

Running with chpst -m 1000000000 <program path> will crash with the following stacktrace:

runtime/cgo: pthread_create failed: Resource temporarily unavailable

goroutine 1 [running]:
runtime.systemstack_switch()
        /home/linuxbrew/.linuxbrew/Cellar/go/1.16.3_1/libexec/src/runtime/asm_amd64.s:339 fp=0xc000050788 sp=0xc000050780 pc=0x45b9c0
runtime.main()
        /home/linuxbrew/.linuxbrew/Cellar/go/1.16.3_1/libexec/src/runtime/proc.go:144 +0x89 fp=0xc0000507e0 sp=0xc000050788 pc=0x431ce9
runtime.goexit()
        /home/linuxbrew/.linuxbrew/Cellar/go/1.16.3_1/libexec/src/runtime/asm_amd64.s:1371 +0x1 fp=0xc0000507e8 sp=0xc0000507e0 pc=0x45d7a1

strace seems to indicate that one of the calls to mmap is miscalculating available memory. When using chpst, there's a failed mmap call of mmap(NULL, 1000005632, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0). If you don't use chpst, that call is instead mmap(NULL, 8392704, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0).

It's not clear to me if this is the same root cause as what was originally reported, but gut feeling says no. I can open a separate issue if desired.
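
A plausible reading of the 1000005632-byte mmap: glibc's pthread_create sizes C thread stacks from the soft RLIMIT_STACK when no explicit stack size is set, and chpst -m raises that limit along with the data and address-space limits, so every new cgo thread tries to reserve roughly the -m value for its stack. A sketch for inspecting the limit from Go (assuming Linux):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// If the soft stack limit is huge, each pthread_create from cgo
	// will attempt an mmap of about this size for the thread stack.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_STACK, &rl); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("RLIMIT_STACK: soft=%d hard=%d\n", rl.Cur, rl.Max)
}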

@Bjohnson131
I'm getting the same error in a RHEL environment.

@gopherbot gopherbot added the compiler/runtime Issues related to the Go compiler and/or runtime. label Jul 7, 2022
@seankhliao seankhliao added WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided. and removed WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided. labels Dec 26, 2022
@gopherbot

Timed out in state WaitingForInfo. Closing.

(I am just a bot, though. Please speak up if this is a mistake or you have the requested information.)

@pPanda-beta
As of today this still reproduces on 4.19.0-6-cloud-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux (I know, it's a pretty old kernel for the end of 2023).

One alternative to chpst -m is ulimit -s, since under the hood chpst -m makes the same setrlimit(RLIMIT_STACK, ...) call.

e.g. ulimit -s 100000000000 && <program path>

@rfrancois
I can reproduce the bug. I will try the solution proposed by @fiber (building with CGO_ENABLED=0), because I make parallel requests.
