Package runtime

Overview

Package runtime contains operations that interact with Go's runtime system, such as functions to control goroutines. It also includes the low-level type information used by the reflect package; see reflect's documentation for the programmable interface to the run-time type system.
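
For illustration, a minimal sketch using two of the goroutine-related functions in this package, NumGoroutine and Gosched; the channel is only there to make the small example deterministic:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		done := make(chan struct{})
		go func() {
			fmt.Println("goroutines:", runtime.NumGoroutine())
			close(done)
		}()
		runtime.Gosched() // yield the processor so the new goroutine can run
		<-done
	}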

Environment Variables

The following environment variables ($name or %name%, depending on the host operating system) control the run-time behavior of Go programs. Their meaning and use may change from release to release.

The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely. The runtime/debug package's SetGCPercent function allows changing this percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
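
As an illustration, a minimal sketch of changing the collection target from within a program, which has the same effect as setting GOGC before startup; the value 200 is an arbitrary example:

	package main

	import (
		"fmt"
		"runtime/debug"
	)

	func main() {
		// Raise the target to 200%; the previous setting is returned
		// (100 unless GOGC was set in the environment).
		old := debug.SetGCPercent(200)
		fmt.Println("previous GC percent:", old)

		// debug.SetGCPercent(-1) would disable the collector, like GOGC=off.
	}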

The GODEBUG variable controls debugging variables within the runtime. It is a comma-separated list of name=val pairs setting these named variables (a short usage sketch appears after this list):

allocfreetrace: setting allocfreetrace=1 causes every allocation to be
profiled and a stack trace printed on each object's allocation and free.

clobberfree: setting clobberfree=1 causes the garbage collector to
clobber the memory content of an object with bad content when it frees
the object.

cgocheck: setting cgocheck=0 disables all checks for packages
using cgo to incorrectly pass Go pointers to non-Go code.
Setting cgocheck=1 (the default) enables relatively cheap
checks that may miss some errors.  Setting cgocheck=2 enables
expensive checks that should not miss any errors, but will
cause your program to run slower.

efence: setting efence=1 causes the allocator to run in a mode
where each object is allocated on a unique page and addresses are
never recycled.

gccheckmark: setting gccheckmark=1 enables verification of the
garbage collector's concurrent mark phase by performing a
second mark pass while the world is stopped.  If the second
pass finds a reachable object that was not found by concurrent
mark, the garbage collector will panic.

gcpacertrace: setting gcpacertrace=1 causes the garbage collector to
print information about the internal state of the concurrent pacer.

gcshrinkstackoff: setting gcshrinkstackoff=1 disables moving goroutines
onto smaller stacks. In this mode, a goroutine's stack can only grow.

gcstoptheworld: setting gcstoptheworld=1 disables concurrent garbage collection,
making every garbage collection a stop-the-world event. Setting gcstoptheworld=2
also disables concurrent sweeping after the garbage collection finishes.

gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
error at each collection, summarizing the amount of memory collected and the
length of the pause. The format of this line is subject to change.
Currently, it is:
	gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
	gc #        the GC number, incremented at each GC
	@#s         time in seconds since program start
	#%          percentage of time spent in GC since program start
	#+...+#     wall-clock/CPU times for the phases of the GC
	#->#-># MB  heap size at GC start, at GC end, and live heap
	# MB goal   goal heap size
	# P         number of processors used
The phases are stop-the-world (STW) sweep termination, concurrent
mark and scan, and STW mark termination. The CPU times
for mark/scan are broken down into assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call.

Setting gctrace to any value > 0 also causes the garbage collector
to emit a summary when memory is released back to the system.
This process of returning memory to the system is called scavenging.
The format of this summary is subject to change.
Currently it is:
	scvg#: # MB released  printed only if non-zero
	scvg#: inuse: # idle: # sys: # released: # consumed: # (MB)
where the fields are as follows:
	scvg#        the scavenge cycle number, incremented at each scavenge
	inuse: #     MB used or partially used spans
	idle: #      MB spans pending scavenging
	sys: #       MB mapped from the system
	released: #  MB released to the system
	consumed: #  MB allocated from the system

madvdontneed: setting madvdontneed=1 will use MADV_DONTNEED
instead of MADV_FREE on Linux when returning memory to the
kernel. This is less efficient, but causes RSS numbers to drop
more quickly.

memprofilerate: setting memprofilerate=X will update the value of runtime.MemProfileRate.
When set to 0 memory profiling is disabled.  Refer to the description of
MemProfileRate for the default value.

invalidptr: defaults to invalidptr=1, causing the garbage collector and stack
copier to crash the program if an invalid pointer value (for example, 1)
is found in a pointer-typed location. Setting invalidptr=0 disables this check.
This should only be used as a temporary workaround to diagnose buggy code.
The real fix is to not store integers in pointer-typed locations.

sbrk: setting sbrk=1 replaces the memory allocator and garbage collector
with a trivial allocator that obtains memory from the operating system and
never reclaims any memory.

scavenge: setting scavenge=1 enables the debugging mode of the heap scavenger.

scheddetail: setting schedtrace=X and scheddetail=1 causes the scheduler to emit
detailed multiline info every X milliseconds, describing the state of the scheduler,
processors, threads and goroutines.

schedtrace: setting schedtrace=X causes the scheduler to emit a single line to standard
error every X milliseconds, summarizing the scheduler state.

tracebackancestors: setting tracebackancestors=N extends tracebacks with the stacks at
which goroutines were created, where N limits the number of ancestor goroutines to
report. This also extends the information returned by runtime.Stack. Ancestors' goroutine
IDs will refer to the ID of the goroutine at the time of creation; it's possible for this
ID to be reused for another goroutine. Setting N to 0 will report no ancestry information.

The net, net/http, and crypto/tls packages also refer to debugging variables in GODEBUG. See the documentation for those packages for details.
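
As a usage sketch for the variables above, the following program allocates enough memory to trigger collections; running it with, for example, GODEBUG=gctrace=1,schedtrace=1000 writes the gc and sched lines described above to standard error. The allocation count and block size are arbitrary example values:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		var kept [][]byte
		for i := 0; i < 64; i++ {
			kept = append(kept, make([]byte, 1<<20)) // hold 1 MB blocks live
		}
		runtime.GC() // force a collection so at least one gc line is printed
		fmt.Println("retained", len(kept), "blocks")
	}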

The GOMAXPROCS variable limits the number of operating system threads that can execute user-level Go code simultaneously. There is no limit to the number of threads that can be blocked in system calls on behalf of Go code; those do not count against the GOMAXPROCS limit. This package's GOMAXPROCS function queries and changes the limit.
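
For illustration, a minimal sketch of querying and changing the limit through this package's GOMAXPROCS function; the limit of 2 is an arbitrary example value:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		fmt.Println("NumCPU:", runtime.NumCPU())
		fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0)) // 0 queries without changing the limit

		prev := runtime.GOMAXPROCS(2) // limit user-level Go code to 2 OS threads
		fmt.Println("previous limit:", prev)
	}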

The GOTRACEBACK variable controls the amount of output generated when a Go program fails due to an unrecovered panic or an unexpected runtime condition. By default, a failure prints a stack trace for the current goroutine, eliding functions internal to the run-time system, and then exits with exit code 2. The failure prints stack traces for all goroutines if there is no current goroutine or the failure is internal to the run-time. GOTRACEBACK=none omits the goroutine stack traces entirely. GOTRACEBACK=single (the default) behaves as described above. GOTRACEBACK=all adds stack traces for all user-created goroutines. GOTRACEBACK=system is like “all” but adds stack frames for run-time functions and shows goroutines created internally by the run-time. GOTRACEBACK=crash is like “system” but crashes in an operating system-specific manner instead of exiting. For example, on Unix systems, the crash raises SIGABRT to trigger a core dump. For historical reasons, the GOTRACEBACK settings 0, 1, and 2 are synonyms for none, all, and system, respectively. The runtime/debug package's SetTraceback function allows increasing the amount of output at run time, but it cannot reduce the amount below that specified by the environment variable. See https://golang.org/pkg/runtime/debug/#SetTraceback.
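
For illustration, a minimal sketch that raises the traceback level from within the program and then fails with an unrecovered panic; running the same program with GOTRACEBACK=all or GOTRACEBACK=crash set in the environment has a similar or stronger effect. The level "all" is an example value:

	package main

	import "runtime/debug"

	func main() {
		debug.SetTraceback("all") // cannot lower the level below the environment setting

		go func() { select {} }() // a second goroutine so "all" shows more than one stack

		panic("boom") // unrecovered: prints stack traces, then exits with code 2
	}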

The GOARCH, GOOS, GOPATH, and GOROOT environment variables complete the set of Go environment variables. They influence the building of Go programs (see https://golang.org/cmd/go and https://golang.org/pkg/go/build). GOARCH, GOOS, and GOROOT are recorded at compile time and made available by constants or functions in this package, but they do not influence the execution of the run-time system.
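
For illustration, a minimal sketch printing the compile-time values exposed by this package:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		fmt.Println("GOOS:   ", runtime.GOOS)      // constant recorded at compile time
		fmt.Println("GOARCH: ", runtime.GOARCH)    // constant recorded at compile time
		fmt.Println("GOROOT: ", runtime.GOROOT())  // environment value, else the compile-time default
		fmt.Println("Version:", runtime.Version()) // Go tree version used to build the binary
	}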

Index

Constants
Variables
func BlockProfile(p []BlockProfileRecord) (n int, ok bool)
func Breakpoint()
func CPUProfile() []byte
func Caller(skip int) (pc uintptr, file string, line int, ok bool)
func Callers(skip int, pc []uintptr) int
func GC()
func GOMAXPROCS(n int) int
func GOROOT() string
func Goexit()
func GoroutineProfile(p []StackRecord) (n int, ok bool)
func Gosched()
func KeepAlive(x interface{})
func LockOSThread()
func MemProfile(p []MemProfileRecord, inuseZero bool) (n int, ok bool)
func MutexProfile(p []BlockProfileRecord) (n int, ok bool)
func NumCPU() int
func NumCgoCall() int64
func NumGoroutine() int
func ReadMemStats(m *MemStats)
func ReadTrace() []byte
func SetBlockProfileRate(rate int)
func SetCPUProfileRate(hz int)
func SetCgoTraceback(version int, traceback, context, symbolizer unsafe.Pointer)
func SetFinalizer(obj interface{}, finalizer interface{})
func SetMutexProfileFraction(rate int) int
func Stack(buf []byte, all bool) int
func StartTrace() error
func StopTrace()
func ThreadCreateProfile(p []StackRecord) (n int, ok bool)
func UnlockOSThread()
func Version() string
func _ELF_ST_BIND(val byte) byte
func _ELF_ST_TYPE(val byte) byte
func _ExternalCode()
func _GC()
func _LostExternalCode()
func _LostSIGPROFDuringAtomic64()
func _System()
func _VDSO()
func _cgo_panic_internal(p *byte)
func abort()
func abs(x float64) float64
func acquirep(_p_ *p)
func add(p unsafe.Pointer, x uintptr) unsafe.Pointer
func add1(p *byte) *byte
func addb(p *byte, n uintptr) *byte
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool
func addspecial(p unsafe.Pointer, s *special) bool
func addtimer(t *timer)
func adjustctxt(gp *g, adjinfo *adjustinfo)
func adjustdefers(gp *g, adjinfo *adjustinfo)
func adjustframe(frame *stkframe, arg unsafe.Pointer) bool
func adjustpanics(gp *g, adjinfo *adjustinfo)
func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer)
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func adjustsudogs(gp *g, adjinfo *adjustinfo)
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)
func aeshash(p unsafe.Pointer, h, s uintptr) uintptr
func aeshash32(p unsafe.Pointer, h uintptr) uintptr
func aeshash64(p unsafe.Pointer, h uintptr) uintptr
func aeshashstr(p unsafe.Pointer, h uintptr) uintptr
func afterfork()
func alginit()
func allgadd(gp *g)
func archauxv(tag, val uintptr)
func arenaBase(i arenaIdx) uintptr
func args(c int32, v **byte)
func argv_index(argv **byte, i int32) *byte
func asmcgocall(fn, arg unsafe.Pointer) int32
func asminit()
func assertE2I2(inter *interfacetype, e eface) (r iface, b bool)
func assertI2I2(inter *interfacetype, i iface) (r iface, b bool)
func atoi(s string) (int, bool)
func atoi32(s string) (int32, bool)
func atomicstorep(ptr unsafe.Pointer, new unsafe.Pointer)
func atomicwb(ptr *unsafe.Pointer, new unsafe.Pointer)
func badTimer()
func badcgocallback()
func badctxt()
func badmcall(fn func(*g))
func badmcall2(fn func(*g))
func badmorestackg0()
func badmorestackgsignal()
func badreflectcall()
func badsignal(sig uintptr, c *sigctxt)
func badsystemstack()
func badunlockosthread()
func beforeIdle() bool
func beforefork()
func bgsweep(c chan int)
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
func block()
func blockableSig(sig uint32) bool
func blockevent(cycles int64, skip int)
func blocksampled(cycles int64) bool
func bool2int(x bool) int
func breakpoint()
func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool
func bucketMask(b uint8) uintptr
func bucketShift(b uint8) uintptr
func bulkBarrierBitmap(dst, src, size, maskOffset uintptr, bits *uint8)
func bulkBarrierPreWrite(dst, src, size uintptr)
func bulkBarrierPreWriteSrcOnly(dst, src, size uintptr)
func bytes(s string) (ret []byte)
func bytesHash(b []byte, seed uintptr) uintptr
func c128equal(p, q unsafe.Pointer) bool
func c128hash(p unsafe.Pointer, h uintptr) uintptr
func c64equal(p, q unsafe.Pointer) bool
func c64hash(p unsafe.Pointer, h uintptr) uintptr
func cachestats()
func call1024(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call1048576(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call1073741824(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call128(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call131072(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call134217728(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call16384(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call16777216(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call2048(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call2097152(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call256(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call262144(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call268435456(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call32(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call32768(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call33554432(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call4096(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call4194304(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call512(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call524288(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call536870912(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call64(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call65536(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call67108864(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call8192(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func call8388608(typ, fn, arg unsafe.Pointer, n, retoffset uint32)
func callCgoMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) uintptr
func callCgoMunmap(addr unsafe.Pointer, n uintptr)
func callCgoSigaction(sig uintptr, new, old *sigactiont) int32
func callCgoSymbolizer(arg *cgoSymbolizerArg)
func callers(skip int, pcbuf []uintptr) int
func canpanic(gp *g) bool
func cansemacquire(addr *uint32) bool
func casfrom_Gscanstatus(gp *g, oldval, newval uint32)
func casgcopystack(gp *g) uint32
func casgstatus(gp *g, oldval, newval uint32)
func castogscanstatus(gp *g, oldval, newval uint32) bool
func cfuncname(f funcInfo) *byte
func cgoCheckArg(t *_type, p unsafe.Pointer, indir, top bool, msg string)
func cgoCheckBits(src unsafe.Pointer, gcbits *byte, off, size uintptr)
func cgoCheckMemmove(typ *_type, dst, src unsafe.Pointer, off, size uintptr)
func cgoCheckPointer(ptr interface{}, args ...interface{})
func cgoCheckResult(val interface{})
func cgoCheckSliceCopy(typ *_type, dst, src slice, n int)
func cgoCheckTypedBlock(typ *_type, src unsafe.Pointer, off, size uintptr)
func cgoCheckUnknownPointer(p unsafe.Pointer, msg string) (base, i uintptr)
func cgoCheckUsingType(typ *_type, src unsafe.Pointer, off, size uintptr)
func cgoCheckWriteBarrier(dst *uintptr, src uintptr)
func cgoContextPCs(ctxt uintptr, buf []uintptr)
func cgoInRange(p unsafe.Pointer, start, end uintptr) bool
func cgoIsGoPointer(p unsafe.Pointer) bool
func cgoSigtramp()
func cgoUse(interface{})
func cgocall(fn, arg unsafe.Pointer) int32
func cgocallback(fn, frame unsafe.Pointer, framesize, ctxt uintptr)
func cgocallback_gofunc(fv, frame, framesize, ctxt uintptr)
func cgocallbackg(ctxt uintptr)
func cgocallbackg1(ctxt uintptr)
func cgounimpl()
func chanbuf(c *hchan, i uint) unsafe.Pointer
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)
func chanrecv1(c *hchan, elem unsafe.Pointer)
func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool)
func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool
func chansend1(c *hchan, elem unsafe.Pointer)
func check()
func checkASM() bool
func checkTimeouts()
func checkTreapNode(t *treapNode)
func checkdead()
func checkmcount()
func clearCheckmarks()
func clearSignalHandlers()
func clearpools()
func clobberfree(x unsafe.Pointer, size uintptr)
func clone(flags int32, stk, mp, gp, fn unsafe.Pointer) int32
func closechan(c *hchan)
func closefd(fd int32) int32
func closeonexec(fd int32)
func complex128div(n complex128, m complex128) complex128
func concatstring2(buf *tmpBuf, a [2]string) string
func concatstring3(buf *tmpBuf, a [3]string) string
func concatstring4(buf *tmpBuf, a [4]string) string
func concatstring5(buf *tmpBuf, a [5]string) string
func concatstrings(buf *tmpBuf, a []string) string
func contains(s, t string) bool
func convT16(val uint16) (x unsafe.Pointer)
func convT32(val uint32) (x unsafe.Pointer)
func convT64(val uint64) (x unsafe.Pointer)
func convTslice(val []byte) (x unsafe.Pointer)
func convTstring(val string) (x unsafe.Pointer)
func copysign(x, y float64) float64
func copystack(gp *g, newsize uintptr, sync bool)
func countSub(x, y uint32) int
func countrunes(s string) int
func cpuinit()
func cputicks() int64
func crash()
func createfing()
func cstring(s string) unsafe.Pointer
func debugCallCheck(pc uintptr) string
func debugCallPanicked(val interface{})
func debugCallV1()
func debugCallWrap(dispatch uintptr)
func decoderune(s string, k int) (r rune, pos int)
func deductSweepCredit(spanBytes uintptr, callerSweepPages uintptr)
func deferArgs(d *_defer) unsafe.Pointer
func deferclass(siz uintptr) uintptr
func deferproc(siz int32, fn *funcval)
func deferreturn(arg0 uintptr)
func deltimer(t *timer) bool
func dematerializeGCProg(s *mspan)
func dieFromSignal(sig uint32)
func divlu(u1, u0, v uint64) (q, r uint64)
func dolockOSThread()
func dopanic_m(gp *g, pc, sp uintptr) bool
func dounlockOSThread()
func dropg()
func dropm()
func dumpGCProg(p *byte)
func dumpbool(b bool)
func dumpbv(cbv *bitvector, offset uintptr)
func dumpfields(bv bitvector)
func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)
func dumpframe(s *stkframe, arg unsafe.Pointer) bool
func dumpgoroutine(gp *g)
func dumpgs()
func dumpgstatus(gp *g)
func dumpint(v uint64)
func dumpitabs()
func dumpmemprof()
func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr)
func dumpmemrange(data unsafe.Pointer, len uintptr)
func dumpmemstats()
func dumpms()
func dumpobj(obj unsafe.Pointer, size uintptr, bv bitvector)
func dumpobjs()
func dumpotherroot(description string, to unsafe.Pointer)
func dumpparams()
func dumpregs(c *sigctxt)
func dumproots()
func dumpslice(b []byte)
func dumpstr(s string)
func dumptype(t *_type)
func dwrite(data unsafe.Pointer, len uintptr)
func dwritebyte(b byte)
func efaceHash(i interface{}, seed uintptr) uintptr
func efaceeq(t *_type, x, y unsafe.Pointer) bool
func elideWrapperCalling(id funcID) bool
func encoderune(p []byte, r rune) int
func ensureSigM()
func entersyscall()
func entersyscall_gcwait()
func entersyscall_sysmon()
func entersyscallblock()
func entersyscallblock_handoff()
func envKeyEqual(a, b string) bool
func environ() []string
func epollcreate(size int32) int32
func epollcreate1(flags int32) int32
func epollctl(epfd, op, fd int32, ev *epollevent) int32
func epollwait(epfd int32, ev *epollevent, nev, timeout int32) int32
func eqslice(x, y []uintptr) bool
func evacuate(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)
func evacuated(b *bmap) bool
func execute(gp *g, inheritTime bool)
func exit(code int32)
func exitThread(wait *uint32)
func exitsyscall()
func exitsyscall0(gp *g)
func exitsyscallfast(oldp *p) bool
func exitsyscallfast_pidle() bool
func exitsyscallfast_reacquired()
func extendRandom(r []byte, n int)
func f32equal(p, q unsafe.Pointer) bool
func f32hash(p unsafe.Pointer, h uintptr) uintptr
func f32to64(f uint32) uint64
func f32toint32(x uint32) int32
func f32toint64(x uint32) int64
func f32touint64(x float32) uint64
func f64equal(p, q unsafe.Pointer) bool
func f64hash(p unsafe.Pointer, h uintptr) uintptr
func f64to32(f uint64) uint32
func f64toint(f uint64) (val int64, ok bool)
func f64toint32(x uint64) int32
func f64toint64(x uint64) int64
func f64touint64(x float64) uint64
func fadd32(x, y uint32) uint32
func fadd64(f, g uint64) uint64
func fastexprand(mean int) int32
func fastlog2(x float64) float64
func fastrand() uint32
func fastrandn(n uint32) uint32
func fatalpanic(msgs *_panic)
func fatalthrow()
func fcmp64(f, g uint64) (cmp int32, isnan bool)
func fdiv32(x, y uint32) uint32
func fdiv64(f, g uint64) uint64
func feq32(x, y uint32) bool
func feq64(x, y uint64) bool
func fge32(x, y uint32) bool
func fge64(x, y uint64) bool
func fgt32(x, y uint32) bool
func fgt64(x, y uint64) bool
func fillstack(stk stack, b byte)
func findObject(p, refBase, refOff uintptr) (base uintptr, s *mspan, objIndex uintptr)
func findnull(s *byte) int
func findnullw(s *uint16) int
func findrunnable() (gp *g, inheritTime bool)
func findsghi(gp *g, stk stack) uintptr
func finishsweep_m()
func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)
func fint32to32(x int32) uint32
func fint32to64(x int32) uint64
func fint64to32(x int64) uint32
func fint64to64(x int64) uint64
func fintto64(val int64) (f uint64)
func float64bits(f float64) uint64
func float64frombits(b uint64) float64
func flush()
func flushallmcaches()
func flushmcache(i int)
func fmtNSAsMS(buf []byte, ns uint64) []byte
func fmul32(x, y uint32) uint32
func fmul64(f, g uint64) uint64
func fneg64(f uint64) uint64
func forEachP(fn func(*p))
func forcegchelper()
func fpack32(sign, mant uint32, exp int, trunc uint32) uint32
func fpack64(sign, mant uint64, exp int, trunc uint64) uint64
func freeSomeWbufs(preemptible bool) bool
func freeStackSpans()
func freedefer(d *_defer)
func freedeferfn()
func freedeferpanic()
func freemcache(c *mcache)
func freespecial(s *special, p unsafe.Pointer, size uintptr)
func freezetheworld()
func fsub64(f, g uint64) uint64
func fuint64to32(x uint64) float32
func fuint64to64(x uint64) float64
func funcPC(f interface{}) uintptr
func funcdata(f funcInfo, i uint8) unsafe.Pointer
func funcfile(f funcInfo, fileno int32) string
func funcline(f funcInfo, targetpc uintptr) (file string, line int32)
func funcline1(f funcInfo, targetpc uintptr, strict bool) (file string, line int32)
func funcname(f funcInfo) string
func funcnameFromNameoff(f funcInfo, nameoff int32) string
func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
func funpack32(f uint32) (sign, mant uint32, exp int, inf, nan bool)
func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool)
func futex(addr unsafe.Pointer, op int32, val uint32, ts, addr2 unsafe.Pointer, val3 uint32) int32
func futexsleep(addr *uint32, val uint32, ns int64)
func futexwakeup(addr *uint32, cnt uint32)
func gcAssistAlloc(gp *g)
func gcAssistAlloc1(gp *g, scanWork int64)
func gcBgMarkPrepare()
func gcBgMarkStartWorkers()
func gcBgMarkWorker(_p_ *p)
func gcDrain(gcw *gcWork, flags gcDrainFlags)
func gcDrainN(gcw *gcWork, scanWork int64) int64
func gcDumpObject(label string, obj, off uintptr)
func gcFlushBgCredit(scanWork int64)
func gcMark(start_time int64)
func gcMarkDone()
func gcMarkRootCheck()
func gcMarkRootPrepare()
func gcMarkTermination(nextTriggerRatio float64)
func gcMarkTinyAllocs()
func gcMarkWorkAvailable(p *p) bool
func gcParkAssist() bool
func gcResetMarkState()
func gcSetTriggerRatio(triggerRatio float64)
func gcStart(trigger gcTrigger)
func gcSweep(mode gcMode)
func gcWaitOnMark(n uint32)
func gcWakeAllAssists()
func gcallers(gp *g, skip int, pcbuf []uintptr) int
func gcd(a, b uint32) uint32
func gcenable()
func gcinit()
func gcmarknewobject(obj, size, scanSize uintptr)
func gcount() int32
func gcstopm()
func gentraceback(pc0, sp0, lr0 uintptr, gp *g, skip int, pcbuf *uintptr, max int, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer, flags uint) int
func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector)
func getArgInfoFast(f funcInfo, needArgMap bool) (arglen uintptr, argmap *bitvector, ok bool)
func getRandomData(r []byte)
func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord)
func getargp(x int) uintptr
func getcallerpc() uintptr
func getcallersp() uintptr
func getclosureptr() uintptr
func getgcmask(ep interface{}) (mask []byte)
func getgcmaskcb(frame *stkframe, ctxt unsafe.Pointer) bool
func getm() uintptr
func getproccount() int32
func getsig(i uint32) uintptr
func gettid() uint32
func gfpurge(_p_ *p)
func gfput(_p_ *p, gp *g)
func globrunqput(gp *g)
func globrunqputbatch(batch *gQueue, n int32)
func globrunqputhead(gp *g)
func goargs()
func gobytes(p *byte, n int) (b []byte)
func goenvs()
func goenvs_unix()
func goexit(neverCallThisFunction)
func goexit0(gp *g)
func goexit1()
func gogetenv(key string) string
func gogo(buf *gobuf)
func gopanic(e interface{})
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int)
func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int)
func gopreempt_m(gp *g)
func goready(gp *g, traceskip int)
func gorecover(argp uintptr) interface{}
func goroutineReady(arg interface{}, seq uintptr)
func goroutineheader(gp *g)
func gosave(buf *gobuf)
func goschedImpl(gp *g)
func gosched_m(gp *g)
func goschedguarded()
func goschedguarded_m(gp *g)
func gostartcall(buf *gobuf, fn, ctxt unsafe.Pointer)
func gostartcallfn(gobuf *gobuf, fv *funcval)
func gostring(p *byte) string
func gostringn(p *byte, l int) string
func gostringnocopy(str *byte) string
func gostringw(strw *uint16) string
func gotraceback() (level int32, all, crash bool)
func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
func growWork(t *maptype, h *hmap, bucket uintptr)
func growWork_fast32(t *maptype, h *hmap, bucket uintptr)
func growWork_fast64(t *maptype, h *hmap, bucket uintptr)
func growWork_faststr(t *maptype, h *hmap, bucket uintptr)
func gwrite(b []byte)
func handoffp(_p_ *p)
func hasPrefix(s, prefix string) bool
func hashGrow(t *maptype, h *hmap)
func haveexperiment(name string) bool
func heapBitsSetType(x, size, dataSize uintptr, typ *_type)
func heapBitsSetTypeGCProg(h heapBits, progSize, elemSize, dataSize, allocSize uintptr, prog *byte)
func hexdumpWords(p, end uintptr, mark func(uintptr) byte)
func ifaceHash(i interface {F()}, seed uintptr) uintptr
func ifaceeq(tab *itab, x, y unsafe.Pointer) bool
func inHeapOrStack(b uintptr) bool
func inPersistentAlloc(p uintptr) bool
func inRange(r0, r1, v0, v1 uintptr) bool
func inVDSOPage(pc uintptr) bool
func incidlelocked(v int32)
func index(s, t string) int
func inf2one(f float64) float64
func inheap(b uintptr) bool
func init()
func initAlgAES()
func initCheckmarks()
func initsig(preinit bool)
func injectglist(glist *gList)
func int32Hash(i uint32, seed uintptr) uintptr
func int64Hash(i uint64, seed uintptr) uintptr
func interequal(p, q unsafe.Pointer) bool
func interhash(p unsafe.Pointer, h uintptr) uintptr
func intstring(buf *[4]byte, v int64) (s string)
func isAbortPC(pc uintptr) bool
func isDirectIface(t *_type) bool
func isEmpty(x uint8) bool
func isExportedRuntime(name string) bool
func isFinite(f float64) bool
func isInf(f float64) bool
func isNaN(f float64) (is bool)
func isPowerOfTwo(x uintptr) bool
func isSweepDone() bool
func isSystemGoroutine(gp *g, fixed bool) bool
func ismapkey(t *_type) bool
func isscanstatus(status uint32) bool
func itabAdd(m *itab)
func itabHashFunc(inter *interfacetype, typ *_type) uintptr
func itab_callback(tab *itab)
func itabsinit()
func iterate_finq(callback func(*funcval, unsafe.Pointer, uintptr, *_type, *ptrtype))
func iterate_itabs(fn func(*itab))
func iterate_memprof(fn func(*bucket, uintptr, *uintptr, uintptr, uintptr, uintptr))
func itoaDiv(buf []byte, val uint64, dec int) []byte
func jmpdefer(fv *funcval, argp uintptr)
func key32(p *uintptr) *uint32
func less(a, b uint32) bool
func lfnodeValidate(node *lfnode)
func lfstackPack(node *lfnode, cnt uintptr) uint64
func libpreinit()
func lock(l *mutex)
func lockOSThread()
func lockedOSThread() bool
func lowerASCII(c byte) byte
func mProf_Flush()
func mProf_FlushLocked()
func mProf_Free(b *bucket, size uintptr)
func mProf_Malloc(p unsafe.Pointer, size uintptr)
func mProf_NextCycle()
func mProf_PostSweep()
func mSysStatDec(sysStat *uint64, n uintptr)
func mSysStatInc(sysStat *uint64, n uintptr)
func madvise(addr unsafe.Pointer, n uintptr, flags int32) int32
func main()
func main_init()
func main_main()
func makeslice(et *_type, len, cap int) unsafe.Pointer
func makeslice64(et *_type, len64, cap64 int64) unsafe.Pointer
func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer
func mallocinit()
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapaccess1_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapaccess1_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapaccess1_faststr(t *maptype, h *hmap, ky string) unsafe.Pointer
func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer
func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccess2_fast32(t *maptype, h *hmap, key uint32) (unsafe.Pointer, bool)
func mapaccess2_fast64(t *maptype, h *hmap, key uint64) (unsafe.Pointer, bool)
func mapaccess2_faststr(t *maptype, h *hmap, ky string) (unsafe.Pointer, bool)
func mapaccess2_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer)
func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapassign_fast32ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapassign_fast64ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_faststr(t *maptype, h *hmap, s string) unsafe.Pointer
func mapclear(t *maptype, h *hmap)
func mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func mapdelete_fast32(t *maptype, h *hmap, key uint32)
func mapdelete_fast64(t *maptype, h *hmap, key uint64)
func mapdelete_faststr(t *maptype, h *hmap, ky string)
func mapiterinit(t *maptype, h *hmap, it *hiter)
func mapiternext(it *hiter)
func markroot(gcw *gcWork, i uint32)
func markrootBlock(b0, n0 uintptr, ptrmask0 *uint8, gcw *gcWork, shard int)
func markrootFreeGStacks()
func markrootSpans(gcw *gcWork, shard int)
func mcall(fn func(*g))
func mcommoninit(mp *m)
func mcount() int32
func mdump()
func memclrHasPointers(ptr unsafe.Pointer, n uintptr)
func memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr)
func memequal(a, b unsafe.Pointer, size uintptr) bool
func memequal0(p, q unsafe.Pointer) bool
func memequal128(p, q unsafe.Pointer) bool
func memequal16(p, q unsafe.Pointer) bool
func memequal32(p, q unsafe.Pointer) bool
func memequal64(p, q unsafe.Pointer) bool
func memequal8(p, q unsafe.Pointer) bool
func memequal_varlen(a, b unsafe.Pointer) bool
func memhash(p unsafe.Pointer, seed, s uintptr) uintptr
func memhash0(p unsafe.Pointer, h uintptr) uintptr
func memhash128(p unsafe.Pointer, h uintptr) uintptr
func memhash16(p unsafe.Pointer, h uintptr) uintptr
func memhash32(p unsafe.Pointer, seed uintptr) uintptr
func memhash64(p unsafe.Pointer, seed uintptr) uintptr
func memhash8(p unsafe.Pointer, h uintptr) uintptr
func memhash_varlen(p unsafe.Pointer, h uintptr) uintptr
func memmove(to, from unsafe.Pointer, n uintptr)
func mexit(osStack bool)
func mincore(addr unsafe.Pointer, n uintptr, dst *byte) int32
func minit()
func minitSignalMask()
func minitSignalStack()
func minitSignals()
func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (unsafe.Pointer, int)
func modtimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr)
func moduledataverify()
func moduledataverify1(datap *moduledata)
func modulesinit()
func morestack()
func morestack_noctxt()
func morestackc()
func mpreinit(mp *m)
func mput(mp *m)
func msanfree(addr unsafe.Pointer, sz uintptr)
func msanmalloc(addr unsafe.Pointer, sz uintptr)
func msanread(addr unsafe.Pointer, sz uintptr)
func msanwrite(addr unsafe.Pointer, sz uintptr)
func msigrestore(sigmask sigset)
func msigsave(mp *m)
func mspinning()
func mstart()
func mstart1()
func mstartm0()
func mullu(u, v uint64) (lo, hi uint64)
func munmap(addr unsafe.Pointer, n uintptr)
func mutexevent(cycles int64, skip int)
func nanotime() int64
func needm(x byte)
func netpollDeadline(arg interface{}, seq uintptr)
func netpollReadDeadline(arg interface{}, seq uintptr)
func netpollWriteDeadline(arg interface{}, seq uintptr)
func netpollarm(pd *pollDesc, mode int)
func netpollblock(pd *pollDesc, mode int32, waitio bool) bool
func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool
func netpollcheckerr(pd *pollDesc, mode int32) int
func netpollclose(fd uintptr) int32
func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)
func netpolldescriptor() uintptr
func netpollgoready(gp *g, traceskip int)
func netpollinit()
func netpollinited() bool
func netpollopen(fd uintptr, pd *pollDesc) int32
func netpollready(toRun *gList, pd *pollDesc, mode int32)
func newarray(typ *_type, n int) unsafe.Pointer
func newextram()
func newm(fn func(), _p_ *p)
func newm1(mp *m)
func newobject(typ *_type) unsafe.Pointer
func newosproc(mp *m)
func newosproc0(stacksize uintptr, fn unsafe.Pointer)
func newproc(siz int32, fn *funcval)
func newproc1(fn *funcval, argp *uint8, narg int32, callergp *g, callerpc uintptr)
func newstack()
func nextMarkBitArenaEpoch()
func nextSample() int32
func nextSampleNoFP() int32
func nilfunc()
func nilinterequal(p, q unsafe.Pointer) bool
func nilinterhash(p unsafe.Pointer, h uintptr) uintptr
func noSignalStack(sig uint32)
func noescape(p unsafe.Pointer) unsafe.Pointer
func noteclear(n *note)
func notesleep(n *note)
func notetsleep(n *note, ns int64) bool
func notetsleep_internal(n *note, ns int64) bool
func notetsleepg(n *note, ns int64) bool
func notewakeup(n *note)
func notifyListAdd(l *notifyList) uint32
func notifyListCheck(sz uintptr)
func notifyListNotifyAll(l *notifyList)
func notifyListNotifyOne(l *notifyList)
func notifyListWait(l *notifyList, t uint32)
func oneNewExtraM()
func open(name *byte, mode, perm int32) int32
func osRelax(relax bool)
func osStackAlloc(s *mspan)
func osStackFree(s *mspan)
func os_beforeExit()
func os_runtime_args() []string
func os_sigpipe()
func osinit()
func osyield()
func overLoadFactor(count int, B uint8) bool
func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8)
func panicCheckMalloc(err error)
func panicdivide()
func panicdottypeE(have, want, iface *_type)
func panicdottypeI(have *itab, want, iface *_type)
func panicfloat()
func panicindex()
func panicmakeslicecap()
func panicmakeslicelen()
func panicmem()
func panicnildottype(want *_type)
func panicoverflow()
func panicslice()
func panicwrap()
func park_m(gp *g)
func parkunlock_c(gp *g, lock unsafe.Pointer) bool
func parsedebugvars()
func pcdatastart(f funcInfo, table int32) int32
func pcdatavalue(f funcInfo, table int32, targetpc uintptr, cache *pcvalueCache) int32
func pcdatavalue1(f funcInfo, table int32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
func pcvalue(f funcInfo, off int32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
func pcvalueCacheKey(targetpc uintptr) uintptr
func persistentalloc(size, align uintptr, sysStat *uint64) unsafe.Pointer
func pidleput(_p_ *p)
func plugin_lastmoduleinit() (path string, syms map[string]interface{}, errstr string)
func pluginftabverify(md *moduledata)
func pollFractionalWorkerExit() bool
func pollWork() bool
func poll_runtime_Semacquire(addr *uint32)
func poll_runtime_Semrelease(addr *uint32)
func poll_runtime_isPollServerDescriptor(fd uintptr) bool
func poll_runtime_pollClose(pd *pollDesc)
func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)
func poll_runtime_pollReset(pd *pollDesc, mode int) int
func poll_runtime_pollServerInit()
func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)
func poll_runtime_pollUnblock(pd *pollDesc)
func poll_runtime_pollWait(pd *pollDesc, mode int) int
func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int)
func preemptall() bool
func preemptone(_p_ *p) bool
func prepGoExitFrame(sp uintptr)
func prepareFreeWorkbufs()
func preprintpanics(p *_panic)
func printAncestorTraceback(ancestor ancestorInfo)
func printAncestorTracebackFuncInfo(f funcInfo, pc uintptr)
func printCgoTraceback(callers *cgoCallers)
func printOneCgoTraceback(pc uintptr, max int, arg *cgoSymbolizerArg) int
func printany(i interface{})
func printbool(v bool)
func printcomplex(c complex128)
func printcreatedby(gp *g)
func printcreatedby1(f funcInfo, pc uintptr)
func printeface(e eface)
func printfloat(v float64)
func printhex(v uint64)
func printiface(i iface)
func printint(v int64)
func printlock()
func printnl()
func printpanics(p *_panic)
func printpointer(p unsafe.Pointer)
func printslice(s []byte)
func printsp()
func printstring(s string)
func printuint(v uint64)
func printunlock()
func procPin() int
func procUnpin()
func procyield(cycles uint32)
func profilealloc(mp *m, x unsafe.Pointer, size uintptr)
func publicationBarrier()
func purgecachedstats(c *mcache)
func putempty(b *workbuf)
func putfull(b *workbuf)
func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)
func raceReadObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)
func raceWriteObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)
func raceacquire(addr unsafe.Pointer)
func raceacquireg(gp *g, addr unsafe.Pointer)
func racefingo()
func racefini()
func racefree(p unsafe.Pointer, sz uintptr)
func racegoend()
func racegostart(pc uintptr) uintptr
func raceinit() (uintptr, uintptr)
func racemalloc(p unsafe.Pointer, sz uintptr)
func racemapshadow(addr unsafe.Pointer, size uintptr)
func raceproccreate() uintptr
func raceprocdestroy(ctx uintptr)
func racereadpc(addr unsafe.Pointer, callerpc, pc uintptr)
func racereadrangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
func racerelease(addr unsafe.Pointer)
func racereleaseg(gp *g, addr unsafe.Pointer)
func racereleasemerge(addr unsafe.Pointer)
func racereleasemergeg(gp *g, addr unsafe.Pointer)
func racesync(c *hchan, sg *sudog)
func racewritepc(addr unsafe.Pointer, callerpc, pc uintptr)
func racewriterangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
func raise(sig uint32)
func raisebadsignal(sig uint32, c *sigctxt)
func raiseproc(sig uint32)
func rawbyteslice(size int) (b []byte)
func rawruneslice(size int) (b []rune)
func rawstring(size int) (s string, b []byte)
func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte)
func read(fd int32, p unsafe.Pointer, n int32) int32
func readGCStats(pauses *[]uint64)
func readGCStats_m(pauses *[]uint64)
func readUnaligned32(p unsafe.Pointer) uint32
func readUnaligned64(p unsafe.Pointer) uint64
func readgogc() int32
func readgstatus(gp *g) uint32
func readmemstats_m(stats *MemStats)
func readvarint(p []byte) (read uint32, val uint32)
func ready(gp *g, traceskip int, next bool)
func readyWithTime(s *sudog, traceskip int)
func record(r *MemProfileRecord, b *bucket)
func recordForPanic(b []byte)
func recordspan(vh unsafe.Pointer, p unsafe.Pointer)
func recovery(gp *g)
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)
func reentersyscall(pc, sp uintptr)
func reflectOffsLock()
func reflectOffsUnlock()
func reflect_addReflectOff(ptr unsafe.Pointer) int32
func reflect_chancap(c *hchan) int
func reflect_chanclose(c *hchan)
func reflect_chanlen(c *hchan) int
func reflect_chanrecv(c *hchan, nb bool, elem unsafe.Pointer) (selected bool, received bool)
func reflect_chansend(c *hchan, elem unsafe.Pointer, nb bool) (selected bool)
func reflect_gcbits(x interface{}) []byte
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflect_ismapkey(t *_type) bool
func reflect_mapaccess(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func reflect_mapassign(t *maptype, h *hmap, key unsafe.Pointer, val unsafe.Pointer)
func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func reflect_mapiterkey(it *hiter) unsafe.Pointer
func reflect_mapiternext(it *hiter)
func reflect_mapitervalue(it *hiter) unsafe.Pointer
func reflect_maplen(h *hmap) int
func reflect_memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr)
func reflect_memmove(to, from unsafe.Pointer, n uintptr)
func reflect_resolveNameOff(ptrInModule unsafe.Pointer, off int32) unsafe.Pointer
func reflect_resolveTextOff(rtype unsafe.Pointer, off int32) unsafe.Pointer
func reflect_resolveTypeOff(rtype unsafe.Pointer, off int32) unsafe.Pointer
func reflect_rselect(cases []runtimeSelect) (int, bool)
func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer)
func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr)
func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer)
func reflect_typedmemmovepartial(typ *_type, dst, src unsafe.Pointer, off, size uintptr)
func reflect_typedslicecopy(elemType *_type, dst, src slice) int
func reflect_typelinks() ([]unsafe.Pointer, [][]int32)
func reflect_unsafe_New(typ *_type) unsafe.Pointer
func reflect_unsafe_NewArray(typ *_type, n int) unsafe.Pointer
func reflectcall(argtype *_type, fn, arg unsafe.Pointer, argsize uint32, retoffset uint32)
func reflectcallmove(typ *_type, dst, src unsafe.Pointer, size uintptr)
func releaseSudog(s *sudog)
func releasem(mp *m)
func removefinalizer(p unsafe.Pointer)
func resetspinning()
func restartg(gp *g)
func restoreGsignalStack(st *gsignalStack)
func retake(now int64) uint32
func return0()
func rotl_31(x uint64) uint64
func round(n, a uintptr) uintptr
func round2(x int32) int32
func roundupsize(size uintptr) uintptr
func rt0_go()
func rt_sigaction(sig uintptr, new, old *sigactiont, size uintptr) int32
func rtsigprocmask(how int32, new, old *sigset, size int32)
func runGCProg(prog, trailer, dst *byte, size int) uintptr
func runSafePointFn()
func runfinq()
func runqempty(_p_ *p) bool
func runqget(_p_ *p) (gp *g, inheritTime bool)
func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
func runqput(_p_ *p, gp *g, next bool)
func runqputslow(_p_ *p, gp *g, h, t uint32) bool
func runtime_debug_WriteHeapDump(fd uintptr)
func runtime_debug_freeOSMemory()
func runtime_getProfLabel() unsafe.Pointer
func runtime_init()
func runtime_pprof_readProfile() ([]uint64, []unsafe.Pointer, bool)
func runtime_pprof_runtime_cyclesPerSecond() int64
func runtime_setProfLabel(labels unsafe.Pointer)
func save(pc, sp uintptr)
func saveAncestors(callergp *g) *[]ancestorInfo
func saveblockevent(cycles int64, skip int, which bucketType)
func saveg(pc, sp uintptr, gp *g, r *StackRecord)
func sbrk0() uintptr
func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
func scang(gp *g, gcw *gcWork)
func scanobject(b uintptr, gcw *gcWork)
func scanstack(gp *g, gcw *gcWork)
func schedEnableUser(enable bool)
func schedEnabled(gp *g) bool
func sched_getaffinity(pid, len uintptr, buf *byte) int32
func schedinit()
func schedtrace(detailed bool)
func schedule()
func selectgo(cas0 *scase, order0 *uint16, ncases int) (int, bool)
func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected bool)
func selectnbrecv2(elem unsafe.Pointer, received *bool, c *hchan) (selected bool)
func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool)
func selectsetpc(cas *scase)
func sellock(scases []scase, lockorder []uint16)
func selparkcommit(gp *g, _ unsafe.Pointer) bool
func selunlock(scases []scase, lockorder []uint16)
func semacquire(addr *uint32)
func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags)
func semrelease(addr *uint32)
func semrelease1(addr *uint32, handoff bool)
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)
func setGCPercent(in int32) (out int32)
func setGCPhase(x uint32)
func setGNoWB(gp **g, new *g)
func setGsignalStack(st *stackt, old *gsignalStack)
func setMNoWB(mp **m, new *m)
func setMaxStack(in int) (out int)
func setMaxThreads(in int) (out int)
func setPanicOnFault(new bool) (old bool)
func setProcessCPUProfiler(hz int32)
func setSignalstackSP(s *stackt, sp uintptr)
func setThreadCPUProfiler(hz int32)
func setTraceback(level string)
func setcpuprofilerate(hz int32)
func setg(gg *g)
func setitimer(mode int32, new, old *itimerval)
func setprofilebucket(p unsafe.Pointer, b *bucket)
func setsSP(pc uintptr) bool
func setsig(i uint32, fn uintptr)
func setsigsegv(pc uintptr)
func setsigstack(i uint32)
func shade(b uintptr)
func shouldPushSigpanic(gp *g, pc, lr uintptr) bool
func showframe(f funcInfo, gp *g, firstFrame bool, funcID, childID funcID) bool
func showfuncinfo(f funcInfo, firstFrame bool, funcID, childID funcID) bool
func shrinkstack(gp *g)
func siftdownTimer(t []*timer, i int) bool
func siftupTimer(t []*timer, i int) bool
func sigInitIgnored(s uint32)
func sigInstallGoHandler(sig uint32) bool
func sigNotOnStack(sig uint32)
func sigaction(sig uint32, new, old *sigactiont)
func sigaddset(mask *sigset, i int)
func sigaltstack(new, old *stackt)
func sigblock()
func sigdelset(mask *sigset, i int)
func sigdisable(sig uint32)
func sigenable(sig uint32)
func sigfillset(mask *uint64)
func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer)
func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool
func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
func sigignore(sig uint32)
func signalDuringFork(sig uint32)
func signalWaitUntilIdle()
func signal_disable(s uint32)
func signal_enable(s uint32)
func signal_ignore(s uint32)
func signal_ignored(s uint32) bool
func signal_recv() uint32
func signalstack(s *stack)
func signame(sig uint32) string
func sigpanic()
func sigpipe()
func sigprocmask(how int32, new, old *sigset)
func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
func sigprofNonGo()
func sigprofNonGoPC(pc uintptr)
func sigreturn()
func sigsend(s uint32) bool
func sigtramp(sig uint32, info *siginfo, ctx unsafe.Pointer)
func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer)
func skipPleaseUseCallersFrames()
func slicebytetostring(buf *tmpBuf, b []byte) (str string)
func slicebytetostringtmp(b []byte) string
func slicecopy(to, fm slice, width uintptr) int
func slicerunetostring(buf *tmpBuf, a []rune) string
func slicestringcopy(to []byte, fm string) int
func stackcache_clear(c *mcache)
func stackcacherefill(c *mcache, order uint8)
func stackcacherelease(c *mcache, order uint8)
func stackcheck()
func stackfree(stk stack)
func stackinit()
func stacklog2(n uintptr) int
func stackpoolfree(x gclinkptr, order uint8)
func startTemplateThread()
func startTheWorld()
func startTheWorldWithSema(emitTraceEvent bool) int64
func startTimer(t *timer)
func startlockedm(gp *g)
func startm(_p_ *p, spinning bool)
func startpanic_m() bool
func step(p []byte, pc *uintptr, val *int32, first bool) (newp []byte, ok bool)
func stopTheWorld(reason string)
func stopTheWorldWithSema()
func stopTimer(t *timer) bool
func stoplockedm()
func stopm()
func strequal(p, q unsafe.Pointer) bool
func strhash(a unsafe.Pointer, h uintptr) uintptr
func stringDataOnStack(s string) bool
func stringHash(s string, seed uintptr) uintptr
func stringtoslicebyte(buf *tmpBuf, s string) []byte
func stringtoslicerune(buf *[tmpStringBufSize]rune, s string) []rune
func subtract1(p *byte) *byte
func subtractb(p *byte, n uintptr) *byte
func sweepone() uintptr
func sync_atomic_CompareAndSwapPointer(ptr *unsafe.Pointer, old, new unsafe.Pointer) bool
func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool
func sync_atomic_StorePointer(ptr *unsafe.Pointer, new unsafe.Pointer)
func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr)
func sync_atomic_SwapPointer(ptr *unsafe.Pointer, new unsafe.Pointer) unsafe.Pointer
func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr
func sync_atomic_runtime_procPin() int
func sync_atomic_runtime_procUnpin()
func sync_fastrand() uint32
func sync_nanotime() int64
func sync_runtime_Semacquire(addr *uint32)
func sync_runtime_SemacquireMutex(addr *uint32, lifo bool)
func sync_runtime_Semrelease(addr *uint32, handoff bool)
func sync_runtime_canSpin(i int) bool
func sync_runtime_doSpin()
func sync_runtime_procPin() int
func sync_runtime_procUnpin()
func sync_runtime_registerPoolCleanup(f func())
func sync_throw(s string)
func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
func sysAlloc(n uintptr, sysStat *uint64) unsafe.Pointer
func sysFault(v unsafe.Pointer, n uintptr)
func sysFree(v unsafe.Pointer, n uintptr, sysStat *uint64)
func sysMap(v unsafe.Pointer, n uintptr, sysStat *uint64)
func sysMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (p unsafe.Pointer, err int)
func sysMunmap(addr unsafe.Pointer, n uintptr)
func sysReserve(v unsafe.Pointer, n uintptr) unsafe.Pointer
func sysReserveAligned(v unsafe.Pointer, size, align uintptr) (unsafe.Pointer, uintptr)
func sysSigaction(sig uint32, new, old *sigactiont)
func sysUnused(v unsafe.Pointer, n uintptr)
func sysUsed(v unsafe.Pointer, n uintptr)
func sysargs(argc int32, argv **byte)
func sysauxv(auxv []uintptr) int
func syscall_Exit(code int)
func syscall_Getpagesize() int
func syscall_runtime_AfterExec()
func syscall_runtime_AfterFork()
func syscall_runtime_AfterForkInChild()
func syscall_runtime_BeforeExec()
func syscall_runtime_BeforeFork()
func syscall_runtime_envs() []string
func syscall_setenv_c(k string, v string)
func syscall_unsetenv_c(k string)
func sysmon()
func systemstack(fn func())
func systemstack_switch()
func templateThread()
func testAtomic64()
func testdefersizes()
func throw(s string)
func throwinit()
func tickspersecond() int64
func timeSleep(ns int64)
func timeSleepUntil() int64
func time_now() (sec int64, nsec int32, mono int64)
func timediv(v int64, div int32, rem *int32) int32
func timerproc(tb *timersBucket)
func tooManyOverflowBuckets(noverflow uint16, B uint8) bool
func tophash(hash uintptr) uint8
func topofstack(f funcInfo, g0 bool) bool
func totaldefersize(siz uintptr) uintptr
func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr)
func traceAppend(buf []byte, v uint64) []byte
func traceEvent(ev byte, skip int, args ...uint64)
func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, skip int, args ...uint64)
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
func traceFullQueue(buf traceBufPtr)
func traceGCDone()
func traceGCMarkAssistDone()
func traceGCMarkAssistStart()
func traceGCSTWDone()
func traceGCSTWStart(kind int)
func traceGCStart()
func traceGCSweepDone()
func traceGCSweepSpan(bytesSwept uintptr)
func traceGCSweepStart()
func traceGoCreate(newg *g, pc uintptr)
func traceGoEnd()
func traceGoPark(traceEv byte, skip int)
func traceGoPreempt()
func traceGoSched()
func traceGoStart()
func traceGoSysBlock(pp *p)
func traceGoSysCall()
func traceGoSysExit(ts int64)
func traceGoUnpark(gp *g, skip int)
func traceGomaxprocs(procs int32)
func traceHeapAlloc()
func traceNextGC()
func traceProcFree(pp *p)
func traceProcStart()
func traceProcStop(pp *p)
func traceReleaseBuffer(pid int32)
func traceStackID(mp *m, buf []uintptr, skip int) uint64
func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr)
func trace_userLog(id uint64, category, message string)
func trace_userRegion(id, mode uint64, name string)
func trace_userTaskCreate(id, parentID uint64, taskType string)
func trace_userTaskEnd(id uint64)
func tracealloc(p unsafe.Pointer, size uintptr, typ *_type)
func traceback(pc, sp, lr uintptr, gp *g)
func traceback1(pc, sp, lr uintptr, gp *g, flags uint)
func tracebackCgoContext(pcbuf *uintptr, printing bool, ctxt uintptr, n, max int) int
func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
func tracebackdefers(gp *g, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer)
func tracebackinit()
func tracebackothers(me *g)
func tracebacktrap(pc, sp, lr uintptr, gp *g)
func tracefree(p unsafe.Pointer, size uintptr)
func tracegc()
func typeBitsBulkBarrier(typ *_type, dst, src, size uintptr)
func typedmemclr(typ *_type, ptr unsafe.Pointer)
func typedmemmove(typ *_type, dst, src unsafe.Pointer)
func typedslicecopy(typ *_type, dst, src slice) int
func typelinksinit()
func typesEqual(t, v *_type, seen map[_typePair]struct{}) bool
func typestring(x interface{}) string
func unblocksig(sig uint32)
func unlock(l *mutex)
func unlockOSThread()
func unlockextra(mp *m)
func unminit()
func unminitSignals()
func unwindm(restore *bool)
func updatememstats()
func usleep(usec uint32)
func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
func vdsoParseSymbols(info *vdsoInfo, version int32)
func vdsoauxv(tag, val uintptr)
func wakep()
func walltime() (sec int64, nsec int32)
func wbBufFlush(dst *uintptr, src uintptr)
func wbBufFlush1(_p_ *p)
func wbBufFlush1Debug(old, buf1, buf2 uintptr, start *uintptr, next uintptr)
func wirep(_p_ *p)
func write(fd uintptr, p unsafe.Pointer, n int32) int32
func writeErr(b []byte)
func writeheapdump_m(fd uintptr)
type BlockProfileRecord
type Error
type Frame
    func allFrames(pcs []uintptr) []Frame
    func expandCgoFrames(pc uintptr) []Frame
type Frames
    func CallersFrames(callers []uintptr) *Frames
    func (ci *Frames) Next() (frame Frame, more bool)
type Func
    func FuncForPC(pc uintptr) *Func
    func (f *Func) Entry() uintptr
    func (f *Func) FileLine(pc uintptr) (file string, line int)
    func (f *Func) Name() string
    func (f *Func) funcInfo() funcInfo
    func (f *Func) raw() *_func
type MemProfileRecord
    func (r *MemProfileRecord) InUseBytes() int64
    func (r *MemProfileRecord) InUseObjects() int64
    func (r *MemProfileRecord) Stack() []uintptr
type MemStats
type StackRecord
    func (r *StackRecord) Stack() []uintptr
type TypeAssertionError
    func (e *TypeAssertionError) Error() string
    func (*TypeAssertionError) RuntimeError()
type _defer
    func newdefer(siz int32) *_defer
type _func
type _panic
type _type
    func resolveTypeOff(ptrInModule unsafe.Pointer, off typeOff) *_type
    func (t *_type) name() string
    func (t *_type) nameOff(off nameOff) name
    func (t *_type) pkgpath() string
    func (t *_type) string() string
    func (t *_type) textOff(off textOff) unsafe.Pointer
    func (t *_type) typeOff(off typeOff) *_type
    func (t *_type) uncommon() *uncommontype
type _typePair
type adjustinfo
type ancestorInfo
type arenaHint
type arenaIdx
    func arenaIndex(p uintptr) arenaIdx
    func (i arenaIdx) l1() uint
    func (i arenaIdx) l2() uint
type arraytype
type bitvector
    func makeheapobjbv(p uintptr, size uintptr) bitvector
    func progToPointerMask(prog *byte, size uintptr) bitvector
    func stackmapdata(stkmap *stackmap, n int32) bitvector
    func (bv *bitvector) ptrbit(i uintptr) uint8
type blockRecord
type bmap
    func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap)
    func (b *bmap) keys() unsafe.Pointer
    func (b *bmap) overflow(t *maptype) *bmap
    func (b *bmap) setoverflow(t *maptype, ovf *bmap)
type bucket
    func newBucket(typ bucketType, nstk int) *bucket
    func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
    func (b *bucket) bp() *blockRecord
    func (b *bucket) mp() *memRecord
    func (b *bucket) stk() []uintptr
type bucketType
type cgoCallers
type cgoContextArg
type cgoSymbolizerArg
type cgoTracebackArg
type cgothreadstart
type chantype
type childInfo
type cpuProfile
    func (p *cpuProfile) add(gp *g, stk []uintptr)
    func (p *cpuProfile) addExtra()
    func (p *cpuProfile) addLostAtomic64(count uint64)
    func (p *cpuProfile) addNonGo(stk []uintptr)
type dbgVar
type divMagic
type eface
    func convT2E(t *_type, elem unsafe.Pointer) (e eface)
    func convT2Enoptr(t *_type, elem unsafe.Pointer) (e eface)
    func efaceOf(ep *interface{}) *eface
type elfDyn
type elfEhdr
type elfPhdr
type elfShdr
type elfSym
type elfVerdaux
type elfVerdef
type epollevent
type errorString
    func (e errorString) Error() string
    func (e errorString) RuntimeError()
type evacDst
type finalizer
type finblock
type findfuncbucket
type fixalloc
    func (f *fixalloc) alloc() unsafe.Pointer
    func (f *fixalloc) free(p unsafe.Pointer)
    func (f *fixalloc) init(size uintptr, first func(arg, p unsafe.Pointer), arg unsafe.Pointer, stat *uint64)
type forcegcstate
type fpreg1
type fpstate
type fpstate1
type fpxreg
type fpxreg1
type funcID
type funcInfo
    func findfunc(pc uintptr) funcInfo
    func (f funcInfo) _Func() *Func
    func (f funcInfo) valid() bool
type funcinl
type functab
type functype
    func (t *functype) dotdotdot() bool
    func (t *functype) in() []*_type
    func (t *functype) out() []*_type
type funcval
type g
    func getg() *g
    func gfget(_p_ *p) *g
    func globrunqget(_p_ *p, max int32) *g
    func malg(stacksize int32) *g
    func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g
    func runqsteal(_p_, p2 *p, stealRunNextG bool) *g
    func timejump() *g
    func timejumpLocked() *g
    func traceReader() *g
    func wakefing() *g
type gList
    func netpoll(block bool) gList
    func (l *gList) empty() bool
    func (l *gList) pop() *g
    func (l *gList) push(gp *g)
    func (l *gList) pushAll(q gQueue)
type gQueue
    func (q *gQueue) empty() bool
    func (q *gQueue) pop() *g
    func (q *gQueue) popList() gList
    func (q *gQueue) push(gp *g)
    func (q *gQueue) pushBack(gp *g)
    func (q *gQueue) pushBackAll(q2 gQueue)
type gcBits
    func newAllocBits(nelems uintptr) *gcBits
    func newMarkBits(nelems uintptr) *gcBits
    func (b *gcBits) bitp(n uintptr) (bytep *uint8, mask uint8)
    func (b *gcBits) bytep(n uintptr) *uint8
type gcBitsArena
    func newArenaMayUnlock() *gcBitsArena
    func (b *gcBitsArena) tryAlloc(bytes uintptr) *gcBits
type gcBitsHeader
type gcControllerState
    func (c *gcControllerState) endCycle() float64
    func (c *gcControllerState) enlistWorker()
    func (c *gcControllerState) findRunnableGCWorker(_p_ *p) *g
    func (c *gcControllerState) revise()
    func (c *gcControllerState) startCycle()
type gcDrainFlags
type gcMarkWorkerMode
type gcMode
type gcSweepBlock
type gcSweepBuf
    func (b *gcSweepBuf) block(i int) []*mspan
    func (b *gcSweepBuf) numBlocks() int
    func (b *gcSweepBuf) pop() *mspan
    func (b *gcSweepBuf) push(s *mspan)
type gcTrigger
    func (t gcTrigger) test() bool
type gcTriggerKind
type gcWork
    func (w *gcWork) balance()
    func (w *gcWork) checkPut(ptr uintptr, ptrs []uintptr)
    func (w *gcWork) dispose()
    func (w *gcWork) empty() bool
    func (w *gcWork) init()
    func (w *gcWork) put(obj uintptr)
    func (w *gcWork) putBatch(obj []uintptr)
    func (w *gcWork) putFast(obj uintptr) bool
    func (w *gcWork) tryGet() uintptr
    func (w *gcWork) tryGetFast() uintptr
type gclink
type gclinkptr
    func nextFreeFast(s *mspan) gclinkptr
    func stackpoolalloc(order uint8) gclinkptr
    func (p gclinkptr) ptr() *gclink
type gobuf
type gsignalStack
type guintptr
    func (gp *guintptr) cas(old, new guintptr) bool
    func (gp guintptr) ptr() *g
    func (gp *guintptr) set(g *g)
type hchan
    func makechan(t *chantype, size int) *hchan
    func makechan64(t *chantype, size int64) *hchan
    func reflect_makechan(t *chantype, size int) *hchan
    func (c *hchan) raceaddr() unsafe.Pointer
    func (c *hchan) sortkey() uintptr
type heapArena
type heapBits
    func heapBitsForAddr(addr uintptr) (h heapBits)
    func (h heapBits) bits() uint32
    func (h heapBits) clearCheckmarkSpan(size, n, total uintptr)
    func (h heapBits) forward(n uintptr) heapBits
    func (h heapBits) forwardOrBoundary(n uintptr) (heapBits, uintptr)
    func (h heapBits) initCheckmarkSpan(size, n, total uintptr)
    func (h heapBits) initSpan(s *mspan)
    func (h heapBits) isCheckmarked(size uintptr) bool
    func (h heapBits) isPointer() bool
    func (h heapBits) morePointers() bool
    func (h heapBits) next() heapBits
    func (h heapBits) nextArena() heapBits
    func (h heapBits) setCheckmarked(size uintptr)
type hex
type hiter
    func reflect_mapiterinit(t *maptype, h *hmap) *hiter
type hmap
    func makemap(t *maptype, hint int, h *hmap) *hmap
    func makemap64(t *maptype, hint int64, h *hmap) *hmap
    func makemap_small() *hmap
    func reflect_makemap(t *maptype, cap int) *hmap
    func (h *hmap) createOverflow()
    func (h *hmap) growing() bool
    func (h *hmap) incrnoverflow()
    func (h *hmap) newoverflow(t *maptype, b *bmap) *bmap
    func (h *hmap) noldbuckets() uintptr
    func (h *hmap) oldbucketmask() uintptr
    func (h *hmap) sameSizeGrow() bool
type iface
    func assertE2I(inter *interfacetype, e eface) (r iface)
    func assertI2I(inter *interfacetype, i iface) (r iface)
    func convI2I(inter *interfacetype, i iface) (r iface)
    func convT2I(tab *itab, elem unsafe.Pointer) (i iface)
    func convT2Inoptr(tab *itab, elem unsafe.Pointer) (i iface)
type imethod
type inlinedCall
type interfacetype
type itab
    func getitab(inter *interfacetype, typ *_type, canfail bool) *itab
    func (m *itab) init() string
type itabTableType
    func (t *itabTableType) add(m *itab)
    func (t *itabTableType) find(inter *interfacetype, typ *_type) *itab
type itimerval
type lfnode
    func lfstackUnpack(val uint64) *lfnode
type lfstack
    func (head *lfstack) empty() bool
    func (head *lfstack) pop() unsafe.Pointer
    func (head *lfstack) push(node *lfnode)
type libcall
type linearAlloc
    func (l *linearAlloc) alloc(size, align uintptr, sysStat *uint64) unsafe.Pointer
    func (l *linearAlloc) init(base, size uintptr)
type m
    func acquirem() *m
    func allocm(_p_ *p, fn func()) *m
    func lockextra(nilokay bool) *m
    func mget() *m
type mOS
type mSpanList
    func (list *mSpanList) init()
    func (list *mSpanList) insert(span *mspan)
    func (list *mSpanList) insertBack(span *mspan)
    func (list *mSpanList) isEmpty() bool
    func (list *mSpanList) remove(span *mspan)
    func (list *mSpanList) takeAll(other *mSpanList)
type mSpanState
type mTreap
    func (root *mTreap) end() treapIter
    func (root *mTreap) erase(i treapIter)
    func (root *mTreap) find(npages uintptr) *treapNode
    func (root *mTreap) insert(span *mspan)
    func (root *mTreap) removeNode(t *treapNode)
    func (root *mTreap) removeSpan(span *mspan)
    func (root *mTreap) rotateLeft(x *treapNode)
    func (root *mTreap) rotateRight(y *treapNode)
    func (root *mTreap) start() treapIter
type mapextra
type maptype
    func (mt *maptype) hashMightPanic() bool
    func (mt *maptype) indirectkey() bool
    func (mt *maptype) indirectvalue() bool
    func (mt *maptype) needkeyupdate() bool
    func (mt *maptype) reflexivekey() bool
type markBits
    func markBitsForAddr(p uintptr) markBits
    func markBitsForSpan(base uintptr) (mbits markBits)
    func (m *markBits) advance()
    func (m markBits) clearMarked()
    func (m markBits) isMarked() bool
    func (m markBits) setMarked()
    func (m markBits) setMarkedNonAtomic()
type mcache
    func allocmcache() *mcache
    func gomcache() *mcache
    func (c *mcache) nextFree(spc spanClass) (v gclinkptr, s *mspan, shouldhelpgc bool)
    func (c *mcache) prepareForSweep()
    func (c *mcache) refill(spc spanClass)
    func (c *mcache) releaseAll()
type mcentral
    func (c *mcentral) cacheSpan() *mspan
    func (c *mcentral) freeSpan(s *mspan, preserve bool, wasempty bool) bool
    func (c *mcentral) grow() *mspan
    func (c *mcentral) init(spc spanClass)
    func (c *mcentral) uncacheSpan(s *mspan)
type mcontext
type memRecord
type memRecordCycle
    func (a *memRecordCycle) add(b *memRecordCycle)
type method
type mheap
    func (h *mheap) alloc(npage uintptr, spanclass spanClass, large bool, needzero bool) *mspan
    func (h *mheap) allocManual(npage uintptr, stat *uint64) *mspan
    func (h *mheap) allocSpanLocked(npage uintptr, stat *uint64) *mspan
    func (h *mheap) alloc_m(npage uintptr, spanclass spanClass, large bool) *mspan
    func (h *mheap) coalesce(s *mspan)
    func (h *mheap) freeManual(s *mspan, stat *uint64)
    func (h *mheap) freeSpan(s *mspan, large bool)
    func (h *mheap) freeSpanLocked(s *mspan, acctinuse, acctidle bool, unusedsince int64)
    func (h *mheap) grow(npage uintptr) bool
    func (h *mheap) init()
    func (h *mheap) pickFreeSpan(npage uintptr) *mspan
    func (h *mheap) reclaim(npage uintptr)
    func (h *mheap) reclaimChunk(arenas []arenaIdx, pageIdx, n uintptr) uintptr
    func (h *mheap) scavenge(k int32, now, limit uint64)
    func (h *mheap) scavengeAll(now, limit uint64) uintptr
    func (h *mheap) scavengeLargest(nbytes uintptr)
    func (h *mheap) setSpan(base uintptr, s *mspan)
    func (h *mheap) setSpans(base, npage uintptr, s *mspan)
    func (h *mheap) sysAlloc(n uintptr) (v unsafe.Pointer, size uintptr)
type mlink
type moduledata
    func activeModules() []*moduledata
    func findmoduledatap(pc uintptr) *moduledata
type modulehash
type mspan
    func largeAlloc(size uintptr, needzero bool, noscan bool) *mspan
    func materializeGCProg(ptrdata uintptr, prog *byte) *mspan
    func spanOf(p uintptr) *mspan
    func spanOfHeap(p uintptr) *mspan
    func spanOfUnchecked(p uintptr) *mspan
    func (s *mspan) allocBitsForIndex(allocBitIndex uintptr) markBits
    func (s *mspan) base() uintptr
    func (s *mspan) countAlloc() int
    func (s *mspan) ensureSwept()
    func (span *mspan) inList() bool
    func (span *mspan) init(base uintptr, npages uintptr)
    func (s *mspan) isFree(index uintptr) bool
    func (s *mspan) layout() (size, n, total uintptr)
    func (s *mspan) markBitsForBase() markBits
    func (s *mspan) markBitsForIndex(objIndex uintptr) markBits
    func (s *mspan) nextFreeIndex() uintptr
    func (s *mspan) objIndex(p uintptr) uintptr
    func (s *mspan) physPageBounds() (uintptr, uintptr)
    func (s *mspan) refillAllocCache(whichByte uintptr)
    func (s *mspan) released() uintptr
    func (s *mspan) scavenge() uintptr
    func (s *mspan) sweep(preserve bool) bool
type mstats
type muintptr
    func (mp muintptr) ptr() *m
    func (mp *muintptr) set(m *m)
type mutex
type name
    func resolveNameOff(ptrInModule unsafe.Pointer, off nameOff) name
    func (n name) data(off int) *byte
    func (n name) isExported() bool
    func (n name) name() (s string)
    func (n name) nameLen() int
    func (n name) pkgPath() string
    func (n name) tag() (s string)
    func (n name) tagLen() int
type nameOff
type neverCallThisFunction
type notInHeap
    func persistentalloc1(size, align uintptr, sysStat *uint64) *notInHeap
    func (p *notInHeap) add(bytes uintptr) *notInHeap
type notInHeapSlice
type note
type notifyList
type p
    func pidleget() *p
    func procresize(nprocs int32) *p
    func releasep() *p
type pcvalueCache
type pcvalueCacheEnt
type persistentAlloc
type plainError
    func (e plainError) Error() string
    func (e plainError) RuntimeError()
type pollCache
    func (c *pollCache) alloc() *pollDesc
    func (c *pollCache) free(pd *pollDesc)
type pollDesc
type profAtomic
    func (x *profAtomic) cas(old, new profIndex) bool
    func (x *profAtomic) load() profIndex
    func (x *profAtomic) store(new profIndex)
type profBuf
    func newProfBuf(hdrsize, bufwords, tags int) *profBuf
    func (b *profBuf) canWriteRecord(nstk int) bool
    func (b *profBuf) canWriteTwoRecords(nstk1, nstk2 int) bool
    func (b *profBuf) close()
    func (b *profBuf) hasOverflow() bool
    func (b *profBuf) incrementOverflow(now int64)
    func (b *profBuf) read(mode profBufReadMode) (data []uint64, tags []unsafe.Pointer, eof bool)
    func (b *profBuf) takeOverflow() (count uint32, time uint64)
    func (b *profBuf) wakeupExtra()
    func (b *profBuf) write(tagPtr *unsafe.Pointer, now int64, hdr []uint64, stk []uintptr)
type profBufReadMode
type profIndex
    func (x profIndex) addCountsAndClearFlags(data, tag int) profIndex
    func (x profIndex) dataCount() uint32
    func (x profIndex) tagCount() uint32
type ptabEntry
type ptrtype
type puintptr
    func (pp puintptr) ptr() *p
    func (pp *puintptr) set(p *p)
type randomEnum
    func (enum *randomEnum) done() bool
    func (enum *randomEnum) next()
    func (enum *randomEnum) position() uint32
type randomOrder
    func (ord *randomOrder) reset(count uint32)
    func (ord *randomOrder) start(i uint32) randomEnum
type reflectMethodValue
type runtimeSelect
type rwmutex
    func (rw *rwmutex) lock()
    func (rw *rwmutex) rlock()
    func (rw *rwmutex) runlock()
    func (rw *rwmutex) unlock()
type scase
type schedt
type selectDir
type semaProfileFlags
type semaRoot
    func semroot(addr *uint32) *semaRoot
    func (root *semaRoot) dequeue(addr *uint32) (found *sudog, now int64)
    func (root *semaRoot) queue(addr *uint32, s *sudog, lifo bool)
    func (root *semaRoot) rotateLeft(x *sudog)
    func (root *semaRoot) rotateRight(y *sudog)
type sigTabT
type sigactiont
type sigcontext
type sigctxt
    func (c *sigctxt) cs() uint64
    func (c *sigctxt) fault() uintptr
    func (c *sigctxt) fixsigcode(sig uint32)
    func (c *sigctxt) fs() uint64
    func (c *sigctxt) gs() uint64
    func (c *sigctxt) preparePanic(sig uint32, gp *g)
    func (c *sigctxt) r10() uint64
    func (c *sigctxt) r11() uint64
    func (c *sigctxt) r12() uint64
    func (c *sigctxt) r13() uint64
    func (c *sigctxt) r14() uint64
    func (c *sigctxt) r15() uint64
    func (c *sigctxt) r8() uint64
    func (c *sigctxt) r9() uint64
    func (c *sigctxt) rax() uint64
    func (c *sigctxt) rbp() uint64
    func (c *sigctxt) rbx() uint64
    func (c *sigctxt) rcx() uint64
    func (c *sigctxt) rdi() uint64
    func (c *sigctxt) rdx() uint64
    func (c *sigctxt) regs() *sigcontext
    func (c *sigctxt) rflags() uint64
    func (c *sigctxt) rip() uint64
    func (c *sigctxt) rsi() uint64
    func (c *sigctxt) rsp() uint64
    func (c *sigctxt) set_rip(x uint64)
    func (c *sigctxt) set_rsp(x uint64)
    func (c *sigctxt) set_sigaddr(x uint64)
    func (c *sigctxt) set_sigcode(x uint64)
    func (c *sigctxt) sigaddr() uint64
    func (c *sigctxt) sigcode() uint64
    func (c *sigctxt) siglr() uintptr
    func (c *sigctxt) sigpc() uintptr
    func (c *sigctxt) sigsp() uintptr
type siginfo
type sigset
type slice
    func growslice(et *_type, old slice, cap int) slice
type sliceInterfacePtr
type slicetype
type sockaddr_un
type spanClass
    func makeSpanClass(sizeclass uint8, noscan bool) spanClass
    func (sc spanClass) noscan() bool
    func (sc spanClass) sizeclass() int8
type special
    func removespecial(p unsafe.Pointer, kind uint8) *special
type specialfinalizer
type specialprofile
type stack
    func stackalloc(n uint32) stack
type stackObject
    func (obj *stackObject) setType(typ *_type)
type stackObjectBuf
type stackObjectBufHdr
type stackObjectRecord
type stackScanState
    func (s *stackScanState) addObject(addr uintptr, typ *_type)
    func (s *stackScanState) buildIndex()
    func (s *stackScanState) findObject(a uintptr) *stackObject
    func (s *stackScanState) getPtr() uintptr
    func (s *stackScanState) putPtr(p uintptr)
type stackWorkBuf
type stackWorkBufHdr
type stackfreelist
type stackmap
type stackt
type stkframe
type stringInterfacePtr
type stringStruct
    func stringStructOf(sp *string) *stringStruct
type stringStructDWARF
type stringer
type structfield
    func (f *structfield) offset() uintptr
type structtype
type sudog
    func acquireSudog() *sudog
type sweepdata
type sysmontick
type textOff
type textsect
type tflag
type timer
    func (t *timer) assignBucket() *timersBucket
type timersBucket
    func (tb *timersBucket) addtimerLocked(t *timer) bool
    func (tb *timersBucket) deltimerLocked(t *timer) (removed, ok bool)
type timespec
    func (ts *timespec) set_nsec(x int32)
    func (ts *timespec) set_sec(x int64)
type timeval
    func (tv *timeval) set_usec(x int32)
type tmpBuf
type traceAlloc
    func (a *traceAlloc) alloc(n uintptr) unsafe.Pointer
    func (a *traceAlloc) drop()
type traceAllocBlock
type traceAllocBlockPtr
    func (p traceAllocBlockPtr) ptr() *traceAllocBlock
    func (p *traceAllocBlockPtr) set(x *traceAllocBlock)
type traceBuf
    func (buf *traceBuf) byte(v byte)
    func (buf *traceBuf) varint(v uint64)
type traceBufHeader
type traceBufPtr
    func traceBufPtrOf(b *traceBuf) traceBufPtr
    func traceFlush(buf traceBufPtr, pid int32) traceBufPtr
    func traceFullDequeue() traceBufPtr
    func (tp traceBufPtr) ptr() *traceBuf
    func (tp *traceBufPtr) set(b *traceBuf)
type traceFrame
type traceStack
    func (ts *traceStack) stack() []uintptr
type traceStackPtr
    func (tp traceStackPtr) ptr() *traceStack
type traceStackTable
    func (tab *traceStackTable) dump()
    func (tab *traceStackTable) find(pcs []uintptr, hash uintptr) uint32
    func (tab *traceStackTable) newStack(n int) *traceStack
    func (tab *traceStackTable) put(pcs []uintptr) uint32
type treapIter
    func (i treapIter) next() treapIter
    func (i treapIter) prev() treapIter
    func (i *treapIter) span() *mspan
    func (i *treapIter) valid() bool
type treapNode
    func (t *treapNode) isSpanInTreap(s *mspan) bool
    func (t *treapNode) pred() *treapNode
    func (t *treapNode) succ() *treapNode
    func (t *treapNode) walkTreap(fn func(tn *treapNode))
type typeAlg
type typeCacheBucket
type typeOff
type ucontext
type uint16InterfacePtr
type uint32InterfacePtr
type uint64InterfacePtr
type uncommontype
type usigset
type vdsoInfo
type vdsoSymbolKey
type vdsoVersionKey
type waitReason
    func (w waitReason) String() string
type waitq
    func (q *waitq) dequeue() *sudog
    func (q *waitq) dequeueSudoG(sgp *sudog)
    func (q *waitq) enqueue(sgp *sudog)
type wbBuf
    func (b *wbBuf) discard()
    func (b *wbBuf) empty() bool
    func (b *wbBuf) putFast(old, new uintptr) bool
    func (b *wbBuf) reset()
type wincallbackcontext
type workbuf
    func getempty() *workbuf
    func handoff(b *workbuf) *workbuf
    func trygetfull() *workbuf
    func (b *workbuf) checkempty()
    func (b *workbuf) checknonempty()
type workbufhdr
type xmmreg
type xmmreg1

Examples

Frames

Package files

alg.go atomic_pointer.go cgo.go cgo_mmap.go cgo_sigaction.go cgocall.go cgocallback.go cgocheck.go chan.go compiler.go complex.go cpuflags.go cpuflags_amd64.go cpuprof.go cputicks.go debug.go debugcall.go defs_linux_amd64.go env_posix.go error.go extern.go fastlog2.go fastlog2table.go float.go hash64.go heapdump.go iface.go lfstack.go lfstack_64bit.go lock_futex.go malloc.go map.go map_fast32.go map_fast64.go map_faststr.go mbarrier.go mbitmap.go mcache.go mcentral.go mem_linux.go mfinal.go mfixalloc.go mgc.go mgclarge.go mgcmark.go mgcstack.go mgcsweep.go mgcsweepbuf.go mgcwork.go mheap.go mprof.go msan0.go msize.go mstats.go mwbbuf.go netpoll.go netpoll_epoll.go os_linux.go os_linux_generic.go os_linux_noauxv.go os_nonopenbsd.go panic.go plugin.go print.go proc.go profbuf.go proflabel.go race0.go rdebug.go relax_stub.go runtime.go runtime1.go runtime2.go rwmutex.go select.go sema.go signal_amd64x.go signal_linux_amd64.go signal_sighandler.go signal_unix.go sigqueue.go sigtab_linux_generic.go sizeclasses.go slice.go softfloat64.go stack.go string.go stubs.go stubs2.go stubs3.go stubs_linux.go stubs_x86.go symtab.go sys_nonppc64x.go sys_x86.go time.go timestub.go timestub2.go trace.go traceback.go type.go typekind.go unaligned1.go utf8.go vdso_elf64.go vdso_linux.go vdso_linux_amd64.go write_err.go

Constants

const (
        c0 = uintptr((8-sys.PtrSize)/4*2860486313 + (sys.PtrSize-4)/4*33054211828000289)
        c1 = uintptr((8-sys.PtrSize)/4*3267000013 + (sys.PtrSize-4)/4*23344194077549503)
)

type algorithms - known to compiler

const (
        alg_NOEQ = iota
        alg_MEM0
        alg_MEM8
        alg_MEM16
        alg_MEM32
        alg_MEM64
        alg_MEM128
        alg_STRING
        alg_INTER
        alg_NILINTER
        alg_FLOAT32
        alg_FLOAT64
        alg_CPLX64
        alg_CPLX128
        alg_max
)
const (
        maxAlign  = 8
        hchanSize = unsafe.Sizeof(hchan{}) + uintptr(-int(unsafe.Sizeof(hchan{}))&(maxAlign-1))
        debugChan = false
)
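
The hchanSize expression above rounds unsafe.Sizeof(hchan{}) up to a multiple of maxAlign using the negative-mask idiom. A minimal standalone sketch of that rounding (the sizes fed to it here are arbitrary, not the real hchan size):

package main

import "fmt"

const maxAlign = 8

// roundUp rounds n up to the next multiple of align (a power of two), using
// the same n + (-n & (align-1)) idiom as the hchanSize constant above.
func roundUp(n, align uintptr) uintptr {
	return n + uintptr(-int(n))&(align-1)
}

func main() {
	for _, n := range []uintptr{1, 7, 8, 9, 95} {
		fmt.Printf("%3d -> %3d\n", n, roundUp(n, maxAlign))
	}
}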

Offsets into internal/cpu records for use in assembly.

const (
        offsetX86HasAVX2 = unsafe.Offsetof(cpu.X86.HasAVX2)
        offsetX86HasERMS = unsafe.Offsetof(cpu.X86.HasERMS)
        offsetX86HasSSE2 = unsafe.Offsetof(cpu.X86.HasSSE2)

        offsetARMHasIDIVA = unsafe.Offsetof(cpu.ARM.HasIDIVA)
)
const (
        debugCallSystemStack = "executing on Go runtime stack"
        debugCallUnknownFunc = "call from unknown function"
        debugCallRuntime     = "call from within the Go runtime"
        debugCallUnsafePoint = "call not at safe point"
)
const (
        _EINTR  = 0x4
        _EAGAIN = 0xb
        _ENOMEM = 0xc

        _PROT_NONE  = 0x0
        _PROT_READ  = 0x1
        _PROT_WRITE = 0x2
        _PROT_EXEC  = 0x4

        _MAP_ANON    = 0x20
        _MAP_PRIVATE = 0x2
        _MAP_FIXED   = 0x10

        _MADV_DONTNEED   = 0x4
        _MADV_FREE       = 0x8
        _MADV_HUGEPAGE   = 0xe
        _MADV_NOHUGEPAGE = 0xf

        _SA_RESTART  = 0x10000000
        _SA_ONSTACK  = 0x8000000
        _SA_RESTORER = 0x4000000
        _SA_SIGINFO  = 0x4

        _SIGHUP    = 0x1
        _SIGINT    = 0x2
        _SIGQUIT   = 0x3
        _SIGILL    = 0x4
        _SIGTRAP   = 0x5
        _SIGABRT   = 0x6
        _SIGBUS    = 0x7
        _SIGFPE    = 0x8
        _SIGKILL   = 0x9
        _SIGUSR1   = 0xa
        _SIGSEGV   = 0xb
        _SIGUSR2   = 0xc
        _SIGPIPE   = 0xd
        _SIGALRM   = 0xe
        _SIGSTKFLT = 0x10
        _SIGCHLD   = 0x11
        _SIGCONT   = 0x12
        _SIGSTOP   = 0x13
        _SIGTSTP   = 0x14
        _SIGTTIN   = 0x15
        _SIGTTOU   = 0x16
        _SIGURG    = 0x17
        _SIGXCPU   = 0x18
        _SIGXFSZ   = 0x19
        _SIGVTALRM = 0x1a
        _SIGPROF   = 0x1b
        _SIGWINCH  = 0x1c
        _SIGIO     = 0x1d
        _SIGPWR    = 0x1e
        _SIGSYS    = 0x1f

        _FPE_INTDIV = 0x1
        _FPE_INTOVF = 0x2
        _FPE_FLTDIV = 0x3
        _FPE_FLTOVF = 0x4
        _FPE_FLTUND = 0x5
        _FPE_FLTRES = 0x6
        _FPE_FLTINV = 0x7
        _FPE_FLTSUB = 0x8

        _BUS_ADRALN = 0x1
        _BUS_ADRERR = 0x2
        _BUS_OBJERR = 0x3

        _SEGV_MAPERR = 0x1
        _SEGV_ACCERR = 0x2

        _ITIMER_REAL    = 0x0
        _ITIMER_VIRTUAL = 0x1
        _ITIMER_PROF    = 0x2

        _EPOLLIN       = 0x1
        _EPOLLOUT      = 0x4
        _EPOLLERR      = 0x8
        _EPOLLHUP      = 0x10
        _EPOLLRDHUP    = 0x2000
        _EPOLLET       = 0x80000000
        _EPOLL_CLOEXEC = 0x80000
        _EPOLL_CTL_ADD = 0x1
        _EPOLL_CTL_DEL = 0x2
        _EPOLL_CTL_MOD = 0x3

        _AF_UNIX    = 0x1
        _F_SETFL    = 0x4
        _SOCK_DGRAM = 0x2
)
const (
        _O_RDONLY  = 0x0
        _O_CLOEXEC = 0x80000
)
const (
        // Constants for multiplication: four random odd 64-bit numbers.
        m1 = 16877499708836156737
        m2 = 2820277070424839065
        m3 = 9497967016996688599
        m4 = 15839092249703872147
)
const (
        fieldKindEol       = 0
        fieldKindPtr       = 1
        fieldKindIface     = 2
        fieldKindEface     = 3
        tagEOF             = 0
        tagObject          = 1
        tagOtherRoot       = 2
        tagType            = 3
        tagGoroutine       = 4
        tagStackFrame      = 5
        tagParams          = 6
        tagFinalizer       = 7
        tagItab            = 8
        tagOSThread        = 9
        tagMemStats        = 10
        tagQueuedFinalizer = 11
        tagData            = 12
        tagBSS             = 13
        tagDefer           = 14
        tagPanic           = 15
        tagMemProf         = 16
        tagAllocSample     = 17
)

Cache of types that have been serialized already. We use a type's hash field to pick a bucket. Inside a bucket, we keep a list of types that have been serialized so far, most recently used first. Note: when a bucket overflows we may end up serializing a type more than once. That's ok.

const (
        typeCacheBuckets = 256
        typeCacheAssoc   = 4
)
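
The cache described above is small and set-associative: the hash picks one of typeCacheBuckets buckets, and each bucket keeps its typeCacheAssoc entries in most-recently-used order. A rough standalone model of that lookup (using strings and a caller-supplied hash instead of the runtime's *_type and its hash field):

package main

import "fmt"

const (
	typeCacheBuckets = 256 // a power of two, so hash&(buckets-1) picks a bucket
	typeCacheAssoc   = 4
)

// cache[bucket] holds up to typeCacheAssoc recently seen items, MRU first.
var cache [typeCacheBuckets][typeCacheAssoc]string

// seen reports whether item was already cached; either way it becomes the MRU
// entry of its bucket, possibly evicting the oldest entry (which is why an
// item can end up being serialized more than once after an eviction).
func seen(item string, hash uint32) bool {
	b := &cache[hash&(typeCacheBuckets-1)]
	for i, v := range b {
		if v == item {
			copy(b[1:i+1], b[:i]) // shift the newer entries down
			b[0] = item           // move to front
			return true
		}
	}
	copy(b[1:], b[:typeCacheAssoc-1]) // evict the LRU entry
	b[0] = item
	return false
}

func main() {
	fmt.Println(seen("chan int", 7)) // false: first time
	fmt.Println(seen("chan int", 7)) // true: cached
}
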
const (
        // addrBits is the number of bits needed to represent a virtual address.
        //
        // See heapAddrBits for a table of address space sizes on
        // various architectures. 48 bits is enough for all
        // architectures except s390x.
        //
        // On AMD64, virtual addresses are 48-bit (or 57-bit) numbers sign extended to 64.
        // We shift the address left 16 to eliminate the sign extended part and make
        // room in the bottom for the count.
        //
        // On s390x, virtual addresses are 64-bit. There's not much we
        // can do about this, so we just hope that the kernel doesn't
        // get to really high addresses and panic if it does.
        addrBits = 48

        // In addition to the 16 bits taken from the top, we can take 3 from the
        // bottom, because node must be pointer-aligned, giving a total of 19 bits
        // of count.
        cntBits = 64 - addrBits + 3

        // On AIX, 64-bit addresses are split into 36-bit segment number and 28-bit
        // offset in segment.  Segment numbers in the range 0x0A0000000-0x0AFFFFFFF(LSA)
        // are available for mmap.
        // We assume all lfnode addresses are from memory allocated with mmap.
        // We use one bit to distinguish between the two ranges.
        aixAddrBits = 57
        aixCntBits  = 64 - aixAddrBits + 3
)
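
These constants describe how a 64-bit lock-free stack head packs a node address and an ABA counter into a single uint64: the pointer-aligned address occupies the top bits and the counter the low cntBits. A sketch of that packing with plain integer arithmetic (the amd64 sign-extension and AIX variants noted above are omitted):

package main

import "fmt"

const (
	addrBits = 48
	cntBits  = 64 - addrBits + 3 // 19: 16 top bits plus 3 alignment bits
)

// pack stores a pointer-aligned 48-bit address in the top bits of a uint64
// and a cntBits-wide counter in the low bits.
func pack(addr, cnt uintptr) uint64 {
	return uint64(addr)<<(64-addrBits) | uint64(cnt&(1<<cntBits-1))
}

// unpack recovers the address: drop the counter, then restore the three
// alignment bits removed by the 16-bit left shift and cntBits right shift.
func unpack(val uint64) uintptr {
	return uintptr(val >> cntBits << 3)
}

func main() {
	const addr = 0x00007f1234567890 // hypothetical 8-byte-aligned address
	v := pack(addr, 5)
	fmt.Printf("addr back: %#x  count: %d\n", unpack(v), v&(1<<cntBits-1))
}
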
const (
        mutex_unlocked = 0
        mutex_locked   = 1
        mutex_sleeping = 2

        active_spin     = 4
        active_spin_cnt = 30
        passive_spin    = 1
)
const (
        debugMalloc = false

        maxTinySize   = _TinySize
        tinySizeClass = _TinySizeClass
        maxSmallSize  = _MaxSmallSize

        pageShift = _PageShift
        pageSize  = _PageSize
        pageMask  = _PageMask
        // By construction, single page spans of the smallest object class
        // have the most objects per span.
        maxObjsPerSpan = pageSize / 8

        concurrentSweep = _ConcurrentSweep

        _PageSize = 1 << _PageShift
        _PageMask = _PageSize - 1

        // _64bit = 1 on 64-bit systems, 0 on 32-bit systems
        _64bit = 1 << (^uintptr(0) >> 63) / 2

        // Tiny allocator parameters, see "Tiny allocator" comment in malloc.go.
        _TinySize      = 16
        _TinySizeClass = int8(2)

        _FixAllocChunk = 16 << 10 // Chunk size for FixAlloc

        // Per-P, per order stack segment cache size.
        _StackCacheSize = 32 * 1024

        // Number of orders that get caching. Order 0 is FixedStack
        // and each successive order is twice as large.
        // We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
        // will be allocated directly.
        // Since FixedStack is different on different systems, we
        // must vary NumStackOrders to keep the same maximum cached size.
        //   OS               | FixedStack | NumStackOrders
        //   -----------------+------------+---------------
        //   linux/darwin/bsd | 2KB        | 4
        //   windows/32       | 4KB        | 3
        //   windows/64       | 8KB        | 2
        //   plan9            | 4KB        | 3
        _NumStackOrders = 4 - sys.PtrSize/4*sys.GoosWindows - 1*sys.GoosPlan9

        // heapAddrBits is the number of bits in a heap address. On
        // amd64, addresses are sign-extended beyond heapAddrBits. On
        // other arches, they are zero-extended.
        //
        // On most 64-bit platforms, we limit this to 48 bits based on a
        // combination of hardware and OS limitations.
        //
        // amd64 hardware limits addresses to 48 bits, sign-extended
        // to 64 bits. Addresses where the top 16 bits are not either
        // all 0 or all 1 are "non-canonical" and invalid. Because of
        // these "negative" addresses, we offset addresses by 1<<47
        // (arenaBaseOffset) on amd64 before computing indexes into
        // the heap arenas index. In 2017, amd64 hardware added
        // support for 57 bit addresses; however, currently only Linux
        // supports this extension and the kernel will never choose an
        // address above 1<<47 unless mmap is called with a hint
        // address above 1<<47 (which we never do).
        //
        // arm64 hardware (as of ARMv8) limits user addresses to 48
        // bits, in the range [0, 1<<48).
        //
        // ppc64, mips64, and s390x support arbitrary 64 bit addresses
        // in hardware. On Linux, Go leans on stricter OS limits. Based
        // on Linux's processor.h, the user address space is limited as
        // follows on 64-bit architectures:
        //
        // Architecture  Name              Maximum Value (exclusive)
        // ---------------------------------------------------------------------
        // amd64         TASK_SIZE_MAX     0x007ffffffff000 (47 bit addresses)
        // arm64         TASK_SIZE_64      0x01000000000000 (48 bit addresses)
        // ppc64{,le}    TASK_SIZE_USER64  0x00400000000000 (46 bit addresses)
        // mips64{,le}   TASK_SIZE64       0x00010000000000 (40 bit addresses)
        // s390x         TASK_SIZE         1<<64 (64 bit addresses)
        //
        // These limits may increase over time, but are currently at
        // most 48 bits except on s390x. On all architectures, Linux
        // starts placing mmap'd regions at addresses that are
        // significantly below 48 bits, so even if it's possible to
        // exceed Go's 48 bit limit, it's extremely unlikely in
        // practice.
        //
        // On aix/ppc64, the limit is increased to 1<<60 to accept addresses
        // returned by the mmap syscall. These are in the range:
        //  0x0a00000000000000 - 0x0afffffffffffff
        //
        // On 32-bit platforms, we accept the full 32-bit address
        // space because doing so is cheap.
        // mips32 only has access to the low 2GB of virtual memory, so
        // we further limit it to 31 bits.
        //
        // WebAssembly currently has a limit of 4GB linear memory.
        heapAddrBits = (_64bit*(1-sys.GoarchWasm)*(1-sys.GoosAix))*48 + (1-_64bit+sys.GoarchWasm)*(32-(sys.GoarchMips+sys.GoarchMipsle)) + 60*sys.GoosAix

        // maxAlloc is the maximum size of an allocation. On 64-bit,
        // it's theoretically possible to allocate 1<<heapAddrBits bytes. On
        // 32-bit, however, this is one less than 1<<32 because the
        // number of bytes in the address space doesn't actually fit
        // in a uintptr.
        maxAlloc = (1 << heapAddrBits) - (1-_64bit)*1

        // heapArenaBytes is the size of a heap arena. The heap
        // consists of mappings of size heapArenaBytes, aligned to
        // heapArenaBytes. The initial heap mapping is one arena.
        //
        // This is currently 64MB on 64-bit non-Windows and 4MB on
        // 32-bit and on Windows. We use smaller arenas on Windows
        // because all committed memory is charged to the process,
        // even if it's not touched. Hence, for processes with small
        // heaps, the mapped arena space needs to be commensurate.
        // This is particularly important with the race detector,
        // since it significantly amplifies the cost of committed
        // memory.
        heapArenaBytes = 1 << logHeapArenaBytes

        // logHeapArenaBytes is log_2 of heapArenaBytes. For clarity,
        // prefer using heapArenaBytes where possible (we need the
        // constant to compute some other constants).
        logHeapArenaBytes = (6+20)*(_64bit*(1-sys.GoosWindows)*(1-sys.GoosAix)) + (2+20)*(_64bit*sys.GoosWindows) + (2+20)*(1-_64bit) + (8+20)*sys.GoosAix

        // heapArenaBitmapBytes is the size of each heap arena's bitmap.
        heapArenaBitmapBytes = heapArenaBytes / (sys.PtrSize * 8 / 2)

        pagesPerArena = heapArenaBytes / pageSize

        // arenaL1Bits is the number of bits of the arena number
        // covered by the first level arena map.
        //
        // This number should be small, since the first level arena
        // map requires PtrSize*(1<<arenaL1Bits) of space in the
        // binary's BSS. It can be zero, in which case the first level
        // index is effectively unused. There is a performance benefit
        // to this, since the generated code can be more efficient,
        // but comes at the cost of having a large L2 mapping.
        //
        // We use the L1 map on 64-bit Windows because the arena size
        // is small, but the address space is still 48 bits, and
        // there's a high cost to having a large L2.
        //
        // We use the L1 map on aix/ppc64 to keep the same L2 value
        // as on Linux.
        arenaL1Bits = 6*(_64bit*sys.GoosWindows) + 12*sys.GoosAix

        // arenaL2Bits is the number of bits of the arena number
        // covered by the second level arena index.
        //
        // The size of each arena map allocation is proportional to
        // 1<<arenaL2Bits, so it's important that this not be too
        // large. 48 bits leads to 32MB arena index allocations, which
        // is about the practical threshold.
        arenaL2Bits = heapAddrBits - logHeapArenaBytes - arenaL1Bits

        // arenaL1Shift is the number of bits to shift an arena frame
        // number by to compute an index into the first level arena map.
        arenaL1Shift = arenaL2Bits

        // arenaBits is the total bits in a combined arena map index.
        // This is split between the index into the L1 arena map and
        // the L2 arena map.
        arenaBits = arenaL1Bits + arenaL2Bits

        // arenaBaseOffset is the pointer value that corresponds to
        // index 0 in the heap arena map.
        //
        // On amd64, the address space is 48 bits, sign extended to 64
        // bits. This offset lets us handle "negative" addresses (or
        // high addresses if viewed as unsigned).
        //
        // On other platforms, the user address space is contiguous
        // and starts at 0, so no offset is necessary.
        arenaBaseOffset uintptr = sys.GoarchAmd64 * (1 << 47)

        // Max number of threads to run garbage collection.
        // 2, 3, and 4 are all plausible maximums depending
        // on the hardware details of the machine. The garbage
        // collector scales well to 32 cpus.
        _MaxGcproc = 32

        // minLegalPointer is the smallest possible legal pointer.
        // This is the smallest possible architectural page size,
        // since we assume that the first page is never mapped.
        //
        // This should agree with minZeroPage in the compiler.
        minLegalPointer uintptr = 4096
)
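
The arena constants determine how a heap address is mapped to a slot in the (possibly two-level) arena index. A hedged sketch of that arithmetic using the values these formulas produce on linux/amd64 (an assumption of this example), without the runtime's real types:

package main

import "fmt"

// Values as the formulas above work out on linux/amd64 (assumed here).
const (
	heapAddrBits      = 48
	logHeapArenaBytes = 26 // 64 MB arenas
	heapArenaBytes    = 1 << logHeapArenaBytes
	arenaBaseOffset   = uintptr(1) << 47 // amd64 sign-extension offset
	arenaL1Bits       = 0
	arenaL2Bits       = heapAddrBits - logHeapArenaBytes - arenaL1Bits
	arenaL1Shift      = arenaL2Bits
)

// arenaIndex maps a heap address to its (l1, l2) position in the arena map.
func arenaIndex(p uintptr) (l1, l2 uint) {
	i := (p + arenaBaseOffset) / heapArenaBytes
	if arenaL1Bits == 0 {
		return 0, uint(i) // single-level map on this configuration
	}
	return uint(i) >> arenaL1Shift, uint(i) & (1<<arenaL2Bits - 1)
}

func main() {
	l1, l2 := arenaIndex(0xc000000000) // a typical amd64 Go heap address
	fmt.Println(l1, l2)
}
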
const (
        // Maximum number of key/value pairs a bucket can hold.
        bucketCntBits = 3
        bucketCnt     = 1 << bucketCntBits

        // Maximum average load of a bucket that triggers growth is 6.5.
        // Represented as loadFactorNum/loadFactorDen, to allow integer math.
        loadFactorNum = 13
        loadFactorDen = 2

        // Maximum key or value size to keep inline (instead of mallocing per element).
        // Must fit in a uint8.
        // Fast versions cannot handle big values - the cutoff size for
        // fast versions in cmd/compile/internal/gc/walk.go must be at most this value.
        maxKeySize   = 128
        maxValueSize = 128

        // data offset should be the size of the bmap struct, but needs to be
        // aligned correctly. For amd64p32 this means 64-bit alignment
        // even though pointers are 32 bit.
        dataOffset = unsafe.Offsetof(struct {
                b bmap
                v int64
        }{}.v)

        // Possible tophash values. We reserve a few possibilities for special marks.
        // Each bucket (including its overflow buckets, if any) will have either all or none of its
        // entries in the evacuated* states (except during the evacuate() method, which only happens
        // during map writes and thus no one else can observe the map during that time).
        emptyRest      = 0 // this cell is empty, and there are no more non-empty cells at higher indexes or overflows.
        emptyOne       = 1 // this cell is empty
        evacuatedX     = 2 // key/value is valid.  Entry has been evacuated to first half of larger table.
        evacuatedY     = 3 // same as above, but evacuated to second half of larger table.
        evacuatedEmpty = 4 // cell is empty, bucket is evacuated.
        minTopHash     = 5 // minimum tophash for a normal filled cell.

        // flags
        iterator     = 1 // there may be an iterator using buckets
        oldIterator  = 2 // there may be an iterator using oldbuckets
        hashWriting  = 4 // a goroutine is writing to the map
        sameSizeGrow = 8 // the current map growth is to a new map of the same size

        // sentinel bucket ID for iterator checks
        noCheck = 1<<(8*sys.PtrSize) - 1
)
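
Two small calculations fall out of these constants: whether a map with 1<<B buckets has exceeded the 6.5 average load factor, and which byte of the hash is stored in a bucket's tophash array (bumped above minTopHash so it cannot collide with the reserved marks). A rough standalone sketch of both, assuming a 64-bit word size:

package main

import "fmt"

const (
	bucketCntBits = 3
	bucketCnt     = 1 << bucketCntBits
	loadFactorNum = 13
	loadFactorDen = 2
	minTopHash    = 5
	ptrBits       = 64 // assumption: 64-bit platform
)

// overLoadFactor reports whether count items in 1<<B buckets exceed the
// 6.5 (= loadFactorNum/loadFactorDen) average-load threshold.
func overLoadFactor(count int, B uint8) bool {
	return count > bucketCnt && uintptr(count) > loadFactorNum*((uintptr(1)<<B)/loadFactorDen)
}

// tophash keeps the top 8 bits of the hash, bumping small values above
// minTopHash so they cannot be mistaken for the reserved cell marks.
func tophash(hash uintptr) uint8 {
	top := uint8(hash >> (ptrBits - 8))
	if top < minTopHash {
		top += minTopHash
	}
	return top
}

func main() {
	fmt.Println(overLoadFactor(52, 3)) // 52 items in 8 buckets: exactly 6.5, not over
	fmt.Println(overLoadFactor(53, 3)) // 53 items: over, the map would grow
	fmt.Printf("%#x\n", tophash(0xdeadbeefcafef00d))
}
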
const (
        bitPointer = 1 << 0
        bitScan    = 1 << 4

        heapBitsShift      = 1     // shift offset between successive bitPointer or bitScan entries
        wordsPerBitmapByte = 8 / 2 // heap words described by one bitmap byte

        // all scan/pointer bits in a byte
        bitScanAll    = bitScan | bitScan<<heapBitsShift | bitScan<<(2*heapBitsShift) | bitScan<<(3*heapBitsShift)
        bitPointerAll = bitPointer | bitPointer<<heapBitsShift | bitPointer<<(2*heapBitsShift) | bitPointer<<(3*heapBitsShift)
)
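
Each heap bitmap byte covers four heap words: bit n is the pointer bit for word n, and bit n+4 is its scan bit. A small sketch decoding one bitmap byte under that layout (the byte value below is made up for illustration):

package main

import "fmt"

const (
	bitPointer = 1 << 0
	bitScan    = 1 << 4

	heapBitsShift      = 1
	wordsPerBitmapByte = 8 / 2 // four heap words described by one byte
)

// decode reports, for each of the four heap words covered by one bitmap byte,
// whether the word holds a pointer and whether its scan bit is set.
func decode(b uint8) {
	for n := uint(0); n < wordsPerBitmapByte; n++ {
		bits := b >> (n * heapBitsShift)
		fmt.Printf("word %d: pointer=%v scan=%v\n",
			n, bits&bitPointer != 0, bits&bitScan != 0)
	}
}

func main() {
	// Made-up byte: words 0 and 2 hold pointers; the scan bit is clear for word 3.
	decode(bitPointer | bitPointer<<2 | bitScan | bitScan<<1 | bitScan<<2)
}
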
const (
        _EACCES = 13
        _EINVAL = 22
)
const (
        _DebugGC         = 0
        _ConcurrentSweep = true
        _FinBlockSize    = 4 * 1024

        // sweepMinHeapDistance is a lower bound on the heap distance
        // (in bytes) reserved for concurrent sweeping between GC
        // cycles. This will be scaled by gcpercent/100.
        sweepMinHeapDistance = 1024 * 1024
)
const (
        _GCoff             = iota // GC not running; sweeping in background, write barrier disabled
        _GCmark                   // GC marking roots and workbufs: allocate black, write barrier ENABLED
        _GCmarktermination        // GC mark termination: allocate black, P's help GC, write barrier ENABLED
)
const (
        fixedRootFinalizers = iota
        fixedRootFreeGStacks
        fixedRootCount

        // rootBlockBytes is the number of bytes to scan per data or
        // BSS root.
        rootBlockBytes = 256 << 10

        // rootBlockSpans is the number of spans to scan per span
        // root.
        rootBlockSpans = 8 * 1024 // 64MB worth of spans

        // maxObletBytes is the maximum bytes of an object to scan at
        // once. Larger objects will be split up into "oblets" of at
        // most this size. Since we can scan 1–2 MB/ms, 128 KB bounds
        // scan preemption at ~100 µs.
        //
        // This must be > _MaxSmallSize so that the object base is the
        // span base.
        maxObletBytes = 128 << 10

        // drainCheckThreshold specifies how many units of work to do
        // between self-preemption checks in gcDrain. Assuming a scan
        // rate of 1 MB/ms, this is ~100 µs. Lower values have higher
        // overhead in the scan loop (the scheduler check may perform
        // a syscall, so its overhead is nontrivial). Higher values
        // make the system less responsive to incoming work.
        drainCheckThreshold = 100000
)
const (
        gcSweepBlockEntries    = 512 // 4KB on 64-bit
        gcSweepBufInitSpineCap = 256 // Enough for 1GB heap on 64-bit
)
const (
        _WorkbufSize = 2048 // in bytes; larger values result in less contention

        // workbufAlloc is the number of bytes to allocate at a time
        // for new workbufs. This must be a multiple of pageSize and
        // should be a multiple of _WorkbufSize.
        //
        // Larger values reduce workbuf allocation overhead. Smaller
        // values reduce heap fragmentation.
        workbufAlloc = 32 << 10
)
const (
        numSpanClasses = _NumSizeClasses << 1
        tinySpanClass  = spanClass(tinySizeClass<<1 | 1)
)
const (
        _KindSpecialFinalizer = 1
        _KindSpecialProfile   = 2
)
const (
        // wbBufEntries is the number of write barriers between
        // flushes of the write barrier buffer.
        //
        // This trades latency for throughput amortization. Higher
        // values amortize flushing overhead more, but increase the
        // latency of flushing. Higher values also increase the cache
        // footprint of the buffer.
        //
        // TODO: What is the latency cost of this? Tune this value.
        wbBufEntries = 256

        // wbBufEntryPointers is the number of pointers added to the
        // buffer by each write barrier.
        wbBufEntryPointers = 2
)

pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer goroutines respectively. The semaphore can be in the following states:

	pdReady   - io readiness notification is pending;
	            a goroutine consumes the notification by changing the state to nil.
	pdWait    - a goroutine prepares to park on the semaphore, but is not yet parked;
	            the goroutine commits to park by changing the state to the G pointer,
	            or, alternatively, concurrent io notification changes the state to pdReady,
	            or, alternatively, a concurrent timeout/close changes the state to nil.
	G pointer - the goroutine is blocked on the semaphore;
	            io notification or timeout/close changes the state to pdReady or nil
	            respectively and unparks the goroutine.
	nil       - none of the above.

const (
        pdReady uintptr = 1
        pdWait  uintptr = 2
)
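
The states above form a small CAS-driven protocol. The following is a deliberately simplified toy model of the reader-side handshake, not the runtime's netpoll code: it uses sync/atomic and an arbitrary nonzero token in place of a real G pointer and gopark/goready:

package main

import (
	"fmt"
	"sync/atomic"
)

const (
	pdNil   uintptr = 0
	pdReady uintptr = 1
	pdWait  uintptr = 2
)

// tryConsume models the reader's fast path: a pending readiness notification
// is consumed by swinging the state from pdReady back to nil.
func tryConsume(state *uintptr) bool {
	return atomic.CompareAndSwapUintptr(state, pdReady, pdNil)
}

// notify models the io side: the state becomes pdReady; if a waiter was
// parked there (any token above pdWait in this toy model), it is returned so
// the caller can "unpark" it.
func notify(state *uintptr) (waiter uintptr) {
	for {
		old := atomic.LoadUintptr(state)
		if old == pdReady {
			return 0 // already notified
		}
		if atomic.CompareAndSwapUintptr(state, old, pdReady) {
			if old > pdWait {
				return old
			}
			return 0
		}
	}
}

func main() {
	var rg uintptr = 42         // pretend a goroutine with token 42 is parked on rg
	fmt.Println(notify(&rg))     // 42: that waiter would be woken
	fmt.Println(tryConsume(&rg)) // true: the pending notification is consumed
}
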
const (
        _FUTEX_PRIVATE_FLAG = 128
        _FUTEX_WAIT_PRIVATE = 0 | _FUTEX_PRIVATE_FLAG
        _FUTEX_WAKE_PRIVATE = 1 | _FUTEX_PRIVATE_FLAG
)

Clone, the Linux rfork.

const (
        _CLONE_VM             = 0x100
        _CLONE_FS             = 0x200
        _CLONE_FILES          = 0x400
        _CLONE_SIGHAND        = 0x800
        _CLONE_PTRACE         = 0x2000
        _CLONE_VFORK          = 0x4000
        _CLONE_PARENT         = 0x8000
        _CLONE_THREAD         = 0x10000
        _CLONE_NEWNS          = 0x20000
        _CLONE_SYSVSEM        = 0x40000
        _CLONE_SETTLS         = 0x80000
        _CLONE_PARENT_SETTID  = 0x100000
        _CLONE_CHILD_CLEARTID = 0x200000
        _CLONE_UNTRACED       = 0x800000
        _CLONE_CHILD_SETTID   = 0x1000000
        _CLONE_STOPPED        = 0x2000000
        _CLONE_NEWUTS         = 0x4000000
        _CLONE_NEWIPC         = 0x8000000

        cloneFlags = _CLONE_VM |
                _CLONE_FS |
                _CLONE_FILES |
                _CLONE_SIGHAND |
                _CLONE_SYSVSEM |
                _CLONE_THREAD /* revisit - okay for now */
)
const (
        _AT_NULL   = 0  // End of vector
        _AT_PAGESZ = 6  // System physical page size
        _AT_HWCAP  = 16 // hardware capability bit vector
        _AT_RANDOM = 25 // introduced in 2.6.29
        _AT_HWCAP2 = 26 // hardware capability bit vector 2
)
const (
        _SS_DISABLE  = 2
        _NSIG        = 65
        _SI_USER     = 0
        _SIG_BLOCK   = 0
        _SIG_UNBLOCK = 1
        _SIG_SETMASK = 2
)
const (
        deferHeaderSize = unsafe.Sizeof(_defer{})
        minDeferAlloc   = (deferHeaderSize + 15) &^ 15
        minDeferArgs    = minDeferAlloc - deferHeaderSize
)

Keep a cached value to make gotraceback fast, since we call it on every call to gentraceback. The cached value is a uint32 in which the low bits are the "crash" and "all" settings and the remaining bits are the traceback value (0 off, 1 on, 2 include system).

const (
        tracebackCrash = 1 << iota
        tracebackAll
        tracebackShift = iota
)
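
Decoding such a cached word is a matter of masking off the two flag bits and shifting out the level. A minimal sketch (the sample value is hypothetical):

package main

import "fmt"

const (
	tracebackCrash = 1 << iota // mirrors the constants above
	tracebackAll
	tracebackShift = iota // 2: the level lives above the two flag bits
)

// decode splits a cached gotraceback word into its traceback level and flags.
func decode(cached uint32) (level int32, all, crash bool) {
	return int32(cached >> tracebackShift), cached&tracebackAll != 0, cached&tracebackCrash != 0
}

func main() {
	fmt.Println(decode(2<<tracebackShift | tracebackAll | tracebackCrash)) // 2 true true
}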

Goroutine (G) status values.

const (

        // _Gidle means this goroutine was just allocated and has not
        // yet been initialized.
        _Gidle = iota // 0

        // _Grunnable means this goroutine is on a run queue. It is
        // not currently executing user code. The stack is not owned.
        _Grunnable // 1

        // _Grunning means this goroutine may execute user code. The
        // stack is owned by this goroutine. It is not on a run queue.
        // It is assigned an M and a P.
        _Grunning // 2

        // _Gsyscall means this goroutine is executing a system call.
        // It is not executing user code. The stack is owned by this
        // goroutine. It is not on a run queue. It is assigned an M.
        _Gsyscall // 3

        // _Gwaiting means this goroutine is blocked in the runtime.
        // It is not executing user code. It is not on a run queue,
        // but should be recorded somewhere (e.g., a channel wait
        // queue) so it can be ready()d when necessary. The stack is
        // not owned *except* that a channel operation may read or
        // write parts of the stack under the appropriate channel
        // lock. Otherwise, it is not safe to access the stack after a
        // goroutine enters _Gwaiting (e.g., it may get moved).
        _Gwaiting // 4

        // _Gmoribund_unused is currently unused, but hardcoded in gdb
        // scripts.
        _Gmoribund_unused // 5

        // _Gdead means this goroutine is currently unused. It may be
        // just exited, on a free list, or just being initialized. It
        // is not executing user code. It may or may not have a stack
        // allocated. The G and its stack (if any) are owned by the M
        // that is exiting the G or that obtained the G from the free
        // list.
        _Gdead // 6

        // _Genqueue_unused is currently unused.
        _Genqueue_unused // 7

        // _Gcopystack means this goroutine's stack is being moved. It
        // is not executing user code and is not on a run queue. The
        // stack is owned by the goroutine that put it in _Gcopystack.
        _Gcopystack // 8

        // _Gscan combined with one of the above states other than
        // _Grunning indicates that GC is scanning the stack. The
        // goroutine is not executing user code and the stack is owned
        // by the goroutine that set the _Gscan bit.
        //
        // _Gscanrunning is different: it is used to briefly block
        // state transitions while GC signals the G to scan its own
        // stack. This is otherwise like _Grunning.
        //
        // atomicstatus&^_Gscan gives the state the goroutine will
        // return to when the scan completes.
        _Gscan         = 0x1000
        _Gscanrunnable = _Gscan + _Grunnable // 0x1001
        _Gscanrunning  = _Gscan + _Grunning  // 0x1002
        _Gscansyscall  = _Gscan + _Gsyscall  // 0x1003
        _Gscanwaiting  = _Gscan + _Gwaiting  // 0x1004
)
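
Because _Gscan is a single high bit OR'd onto one of the base states, the base state can always be recovered by masking it off, as this tiny sketch shows:

package main

import "fmt"

const (
	_Gwaiting     = 4      // one of the base states above
	_Gscan        = 0x1000 // the scan bit
	_Gscanwaiting = _Gscan + _Gwaiting
)

func main() {
	status := uint32(_Gscanwaiting)
	fmt.Printf("scanning=%v base state=%#x\n", status&_Gscan != 0, status&^_Gscan)
}
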
const (
        // P status
        _Pidle    = iota
        _Prunning // Only this P is allowed to change from _Prunning.
        _Psyscall
        _Pgcstop
        _Pdead
)

Values for the flags field of a sigTabT.

const (
        _SigNotify   = 1 << iota // let signal.Notify have signal, even if from kernel
        _SigKill                 // if signal.Notify doesn't take it, exit quietly
        _SigThrow                // if signal.Notify doesn't take it, exit loudly
        _SigPanic                // if the signal is from the kernel, panic
        _SigDefault              // if the signal isn't explicitly requested, don't monitor it
        _SigGoExit               // cause all runtime procs to exit (only used on Plan 9).
        _SigSetStack             // add SA_ONSTACK to libc handler
        _SigUnblock              // always unblock; see blockableSig
        _SigIgn                  // _SIG_DFL action is to ignore the signal
)
const (
        _TraceRuntimeFrames = 1 << iota // include frames for internal runtime functions.
        _TraceTrap                      // the initial PC, SP are from a trap, not a return PC from a call
        _TraceJumpStack                 // if traceback is on a systemstack, resume trace at g that called into it
)

scase.kind values. Known to compiler. Changes here must also be made in src/cmd/compile/internal/gc/select.go's walkselect.

const (
        caseNil = iota
        caseRecv
        caseSend
        caseDefault
)
const (
        _SIG_DFL uintptr = 0
        _SIG_IGN uintptr = 1
)
const (
        sigIdle = iota
        sigReceiving
        sigSending
)
const (
        _MaxSmallSize   = 32768
        smallSizeDiv    = 8
        smallSizeMax    = 1024
        largeSizeDiv    = 128
        _NumSizeClasses = 67
        _PageShift      = 13
)
const (
        mantbits64 uint = 52
        expbits64  uint = 11
        bias64          = -1<<(expbits64-1) + 1

        nan64 uint64 = (1<<expbits64-1)<<mantbits64 + 1
        inf64 uint64 = (1<<expbits64 - 1) << mantbits64
        neg64 uint64 = 1 << (expbits64 + mantbits64)

        mantbits32 uint = 23
        expbits32  uint = 8
        bias32          = -1<<(expbits32-1) + 1

        nan32 uint32 = (1<<expbits32-1)<<mantbits32 + 1
        inf32 uint32 = (1<<expbits32 - 1) << mantbits32
        neg32 uint32 = 1 << (expbits32 + mantbits32)
)
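
These are simply the IEEE-754 double-precision bit layouts spelled out as constants; for instance, inf64 is exactly the bit pattern of +Inf, and nan64 (all-ones exponent with a nonzero mantissa) is a NaN, though not necessarily the same bit pattern math.NaN returns. A quick check against the math package:

package main

import (
	"fmt"
	"math"
)

const (
	mantbits64 uint = 52
	expbits64  uint = 11

	nan64 uint64 = (1<<expbits64-1)<<mantbits64 + 1
	inf64 uint64 = (1<<expbits64 - 1) << mantbits64
)

func main() {
	// inf64 is exactly the IEEE-754 bit pattern of +Inf.
	fmt.Println(inf64 == math.Float64bits(math.Inf(1))) // true
	// nan64 has an all-ones exponent and a nonzero mantissa, so it is a NaN
	// (not necessarily the bit pattern math.NaN happens to return).
	fmt.Println(math.IsNaN(math.Float64frombits(nan64))) // true
}
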
const (
        // StackSystem is a number of additional bytes to add
        // to each stack below the usual guard area for OS-specific
        // purposes like signal handling. Used on Windows, Plan 9,
        // and iOS because they do not use a separate stack.
        _StackSystem = sys.GoosWindows*512*sys.PtrSize + sys.GoosPlan9*512 + sys.GoosDarwin*sys.GoarchArm*1024 + sys.GoosDarwin*sys.GoarchArm64*1024

        // The minimum size of stack used by Go code
        _StackMin = 2048

        // The minimum stack size to allocate.
        // The hackery here rounds FixedStack0 up to a power of 2.
        _FixedStack0 = _StackMin + _StackSystem
        _FixedStack1 = _FixedStack0 - 1
        _FixedStack2 = _FixedStack1 | (_FixedStack1 >> 1)
        _FixedStack3 = _FixedStack2 | (_FixedStack2 >> 2)
        _FixedStack4 = _FixedStack3 | (_FixedStack3 >> 4)
        _FixedStack5 = _FixedStack4 | (_FixedStack4 >> 8)
        _FixedStack6 = _FixedStack5 | (_FixedStack5 >> 16)
        _FixedStack  = _FixedStack6 + 1

        // Functions that need frames bigger than this use an extra
        // instruction to do the stack split check, to avoid overflow
        // in case SP - framesize wraps below zero.
        // This value can be no bigger than the size of the unmapped
        // space at zero.
        _StackBig = 4096

        // The stack guard is a pointer this many bytes above the
        // bottom of the stack.
        _StackGuard = 880*sys.StackGuardMultiplier + _StackSystem

        // After a stack split check the SP is allowed to be this
        // many bytes below the stack guard. This saves an instruction
        // in the checking sequence for tiny frames.
        _StackSmall = 128

        // The maximum number of bytes that a chain of NOSPLIT
        // functions can use.
        _StackLimit = _StackGuard - _StackSystem - _StackSmall
)
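
The _FixedStack0.._FixedStack chain is the usual round-up-to-the-next-power-of-two bit trick unrolled into constants. The same computation written as a function, for illustration only:

package main

import "fmt"

// roundUpPow2 rounds v up to the next power of two using the same
// shift-and-or cascade as the _FixedStack constants above.
func roundUpPow2(v uint32) uint32 {
	v--
	v |= v >> 1
	v |= v >> 2
	v |= v >> 4
	v |= v >> 8
	v |= v >> 16
	return v + 1
}

func main() {
	fmt.Println(roundUpPow2(2048))        // 2048: already a power of two
	fmt.Println(roundUpPow2(2048 + 1024)) // 4096: e.g. _StackMin plus a nonzero _StackSystem
}
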
const (
        // stackDebug == 0: no logging
        //            == 1: logging of per-stack operations
        //            == 2: logging of per-frame operations
        //            == 3: logging of per-word updates
        //            == 4: logging of per-word reads
        stackDebug       = 0
        stackFromSystem  = 0 // allocate stacks from system memory instead of the heap
        stackFaultOnFree = 0 // old stacks are mapped noaccess to detect use after free
        stackPoisonCopy  = 0 // fill stack that should not be accessed with garbage, to detect bad dereferences during copy
        stackNoCache     = 0 // disable per-P small stack caches

        // check the BP links during traceback.
        debugCheckBP = false
)
const (
        uintptrMask = 1<<(8*sys.PtrSize) - 1

        // Goroutine preemption request.
        // Stored into g->stackguard0 to cause split stack check failure.
        // Must be greater than any real sp.
        // 0xfffffade in hex.
        stackPreempt = uintptrMask & -1314

        // Thread is forking.
        // Stored into g->stackguard0 to cause split stack check failure.
        // Must be greater than any real sp.
        stackFork = uintptrMask & -1234
)
const (
        maxUint = ^uint(0)
        maxInt  = int(maxUint >> 1)
)

PCDATA and FUNCDATA table indexes.

See funcdata.h and ../cmd/internal/objabi/funcdata.go.

const (
        _PCDATA_StackMapIndex       = 0
        _PCDATA_InlTreeIndex        = 1
        _PCDATA_RegMapIndex         = 2
        _FUNCDATA_ArgsPointerMaps   = 0
        _FUNCDATA_LocalsPointerMaps = 1
        _FUNCDATA_InlTree           = 2
        _FUNCDATA_RegPointerMaps    = 3
        _FUNCDATA_StackObjects      = 4
        _ArgsSizeUnknown            = -0x80000000
)

Event types in the trace, args are given in square brackets.

const (
        traceEvNone              = 0  // unused
        traceEvBatch             = 1  // start of per-P batch of events [pid, timestamp]
        traceEvFrequency         = 2  // contains tracer timer frequency [frequency (ticks per second)]
        traceEvStack             = 3  // stack [stack id, number of PCs, array of {PC, func string ID, file string ID, line}]
        traceEvGomaxprocs        = 4  // current value of GOMAXPROCS [timestamp, GOMAXPROCS, stack id]
        traceEvProcStart         = 5  // start of P [timestamp, thread id]
        traceEvProcStop          = 6  // stop of P [timestamp]
        traceEvGCStart           = 7  // GC start [timestamp, seq, stack id]
        traceEvGCDone            = 8  // GC done [timestamp]
        traceEvGCSTWStart        = 9  // GC STW start [timestamp, kind]
        traceEvGCSTWDone         = 10 // GC STW done [timestamp]
        traceEvGCSweepStart      = 11 // GC sweep start [timestamp, stack id]
        traceEvGCSweepDone       = 12 // GC sweep done [timestamp, swept, reclaimed]
        traceEvGoCreate          = 13 // goroutine creation [timestamp, new goroutine id, new stack id, stack id]
        traceEvGoStart           = 14 // goroutine starts running [timestamp, goroutine id, seq]
        traceEvGoEnd             = 15 // goroutine ends [timestamp]
        traceEvGoStop            = 16 // goroutine stops (like in select{}) [timestamp, stack]
        traceEvGoSched           = 17 // goroutine calls Gosched [timestamp, stack]
        traceEvGoPreempt         = 18 // goroutine is preempted [timestamp, stack]
        traceEvGoSleep           = 19 // goroutine calls Sleep [timestamp, stack]
        traceEvGoBlock           = 20 // goroutine blocks [timestamp, stack]
        traceEvGoUnblock         = 21 // goroutine is unblocked [timestamp, goroutine id, seq, stack]
        traceEvGoBlockSend       = 22 // goroutine blocks on chan send [timestamp, stack]
        traceEvGoBlockRecv       = 23 // goroutine blocks on chan recv [timestamp, stack]
        traceEvGoBlockSelect     = 24 // goroutine blocks on select [timestamp, stack]
        traceEvGoBlockSync       = 25 // goroutine blocks on Mutex/RWMutex [timestamp, stack]
        traceEvGoBlockCond       = 26 // goroutine blocks on Cond [timestamp, stack]
        traceEvGoBlockNet        = 27 // goroutine blocks on network [timestamp, stack]
        traceEvGoSysCall         = 28 // syscall enter [timestamp, stack]
        traceEvGoSysExit         = 29 // syscall exit [timestamp, goroutine id, seq, real timestamp]
        traceEvGoSysBlock        = 30 // syscall blocks [timestamp]
        traceEvGoWaiting         = 31 // denotes that goroutine is blocked when tracing starts [timestamp, goroutine id]
        traceEvGoInSyscall       = 32 // denotes that goroutine is in syscall when tracing starts [timestamp, goroutine id]
        traceEvHeapAlloc         = 33 // memstats.heap_live change [timestamp, heap_alloc]
        traceEvNextGC            = 34 // memstats.next_gc change [timestamp, next_gc]
        traceEvTimerGoroutine    = 35 // denotes timer goroutine [timer goroutine id]
        traceEvFutileWakeup      = 36 // denotes that the previous wakeup of this goroutine was futile [timestamp]
        traceEvString            = 37 // string dictionary entry [ID, length, string]
        traceEvGoStartLocal      = 38 // goroutine starts running on the same P as the last event [timestamp, goroutine id]
        traceEvGoUnblockLocal    = 39 // goroutine is unblocked on the same P as the last event [timestamp, goroutine id, stack]
        traceEvGoSysExitLocal    = 40 // syscall exit on the same P as the last event [timestamp, goroutine id, real timestamp]
        traceEvGoStartLabel      = 41 // goroutine starts running with label [timestamp, goroutine id, seq, label string id]
        traceEvGoBlockGC         = 42 // goroutine blocks on GC assist [timestamp, stack]
        traceEvGCMarkAssistStart = 43 // GC mark assist start [timestamp, stack]
        traceEvGCMarkAssistDone  = 44 // GC mark assist done [timestamp]
        traceEvUserTaskCreate    = 45 // trace.NewContext [timestamp, internal task id, internal parent task id, stack, name string]
        traceEvUserTaskEnd       = 46 // end of a task [timestamp, internal task id, stack]
        traceEvUserRegion        = 47 // trace.WithRegion [timestamp, internal task id, mode(0:start, 1:end), stack, name string]
        traceEvUserLog           = 48 // trace.Log [timestamp, internal task id, key string id, stack, value string]
        traceEvCount             = 49
)
const (
        // Timestamps in trace are cputicks/traceTickDiv.
        // This makes absolute values of timestamp diffs smaller,
        // and so they are encoded in fewer bytes.
        // 64 on x86 is somewhat arbitrary (one tick is ~20ns on a 3GHz machine).
        // The suggested increment frequency for PowerPC's time base register is
        // 512 MHz according to Power ISA v2.07 section 6.2, so we use 16 on ppc64
        // and ppc64le.
        // Tracing won't work reliably for architectures where cputicks is emulated
        // by nanotime, so the value doesn't matter for those architectures.
        traceTickDiv = 16 + 48*(sys.Goarch386|sys.GoarchAmd64|sys.GoarchAmd64p32)
        // Maximum number of PCs in a single stack trace.
        // Since events contain only stack id rather than whole stack trace,
        // we can allow quite large values here.
        traceStackSize = 128
        // Identifier of a fake P that is used when we trace without a real P.
        traceGlobProc = -1
        // Maximum number of bytes to encode uint64 in base-128.
        traceBytesPerNumber = 10
        // Shift of the number of arguments in the first event byte.
        traceArgCountShift = 6
        // Flag passed to traceGoPark to denote that the previous wakeup of this
        // goroutine was futile. For example, a goroutine was unblocked on a mutex,
        // but another goroutine got ahead and acquired the mutex before the first
        // goroutine is scheduled, so the first goroutine has to block again.
        // Such wakeups happen on buffered channels and sync.Mutex,
        // but are generally not interesting for end user.
        traceFutileWakeup byte = 128
)
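
traceBytesPerNumber above reflects the worst case of this base-128 encoding: each byte carries 7 bits of payload plus a continuation bit, so a 64-bit value needs at most 10 bytes. A minimal sketch of such an encoder (illustrative only, not the runtime's own code):

// Illustrative base-128 (varint) encoder: 7 payload bits per byte,
// high bit set on every byte except the last.
func appendVarint(buf []byte, v uint64) []byte {
        for ; v >= 0x80; v >>= 7 {
                buf = append(buf, byte(v)|0x80)
        }
        return append(buf, byte(v))
}
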
const (
        kindBool = 1 + iota
        kindInt
        kindInt8
        kindInt16
        kindInt32
        kindInt64
        kindUint
        kindUint8
        kindUint16
        kindUint32
        kindUint64
        kindUintptr
        kindFloat32
        kindFloat64
        kindComplex64
        kindComplex128
        kindArray
        kindChan
        kindFunc
        kindInterface
        kindMap
        kindPtr
        kindSlice
        kindString
        kindStruct
        kindUnsafePointer

        kindDirectIface = 1 << 5
        kindGCProg      = 1 << 6
        kindNoPointers  = 1 << 7
        kindMask        = (1 << 5) - 1
)
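
The low five bits of a type's kind byte select one of the kind values above; the remaining bits are flags. A hedged sketch of separating them, written in terms of these internal constants (illustrative only):

// Illustrative decoding of a kind byte into its base kind and flags.
func kindBits(kind uint8) (base uint8, direct, gcProg, noPointers bool) {
        base = kind & kindMask                // one of the kind* constants above
        direct = kind&kindDirectIface != 0    // value stored directly in the interface word
        gcProg = kind&kindGCProg != 0         // type described by a GC program
        noPointers = kind&kindNoPointers != 0 // type contains no pointers
        return
}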

Numbers fundamental to the encoding.

const (
        runeError = '\uFFFD'     // the "error" Rune or "Unicode replacement character"
        runeSelf  = 0x80         // characters below runeSelf are represented as themselves in a single byte.
        maxRune   = '\U0010FFFF' // Maximum valid Unicode code point.
)

Code points in the surrogate range are not valid for UTF-8.

const (
        surrogateMin = 0xD800
        surrogateMax = 0xDFFF
)
const (
        t1 = 0x00 // 0000 0000
        tx = 0x80 // 1000 0000
        t2 = 0xC0 // 1100 0000
        t3 = 0xE0 // 1110 0000
        t4 = 0xF0 // 1111 0000
        t5 = 0xF8 // 1111 1000

        maskx = 0x3F // 0011 1111
        mask2 = 0x1F // 0001 1111
        mask3 = 0x0F // 0000 1111
        mask4 = 0x07 // 0000 0111

        rune1Max = 1<<7 - 1
        rune2Max = 1<<11 - 1
        rune3Max = 1<<16 - 1

        // The default lowest and highest continuation byte.
        locb = 0x80 // 1000 0000
        hicb = 0xBF // 1011 1111
)
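
These leading-byte prefixes and masks implement standard UTF-8 framing. A minimal sketch of a two-byte decode using them (illustrative; the real decoder also handles longer sequences and rejects surrogates):

// Illustrative decode of a two-byte UTF-8 sequence.
func decodeRune2(b0, b1 byte) (rune, bool) {
        if b0&t3 != t2 { // leading byte must look like 110xxxxx
                return runeError, false
        }
        if b1 < locb || hicb < b1 { // continuation byte must look like 10xxxxxx
                return runeError, false
        }
        r := rune(b0&mask2)<<6 | rune(b1&maskx)
        if r <= rune1Max { // overlong encoding of a one-byte rune
                return runeError, false
        }
        return r, true
}
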
const (
        _AT_SYSINFO_EHDR = 33

        _PT_LOAD    = 1 /* Loadable program segment */
        _PT_DYNAMIC = 2 /* Dynamic linking information */

        _DT_NULL     = 0          /* Marks end of dynamic section */
        _DT_HASH     = 4          /* Dynamic symbol hash table */
        _DT_STRTAB   = 5          /* Address of string table */
        _DT_SYMTAB   = 6          /* Address of symbol table */
        _DT_GNU_HASH = 0x6ffffef5 /* GNU-style dynamic symbol hash table */
        _DT_VERSYM   = 0x6ffffff0
        _DT_VERDEF   = 0x6ffffffc

        _VER_FLG_BASE = 0x1 /* Version definition of file itself */

        _SHN_UNDEF = 0 /* Undefined section */

        _SHT_DYNSYM = 11 /* Dynamic linker symbol table */

        _STT_FUNC = 2 /* Symbol is a code object */

        _STT_NOTYPE = 0 /* Symbol type is not specified */

        _STB_GLOBAL = 1 /* Global symbol */
        _STB_WEAK   = 2 /* Weak symbol */

        _EI_NIDENT = 16

        // Maximum indices for the array types used when traversing the vDSO ELF structures.
        // Computed from architecture-specific max provided by vdso_linux_*.go
        vdsoSymTabSize     = vdsoArrayMax / unsafe.Sizeof(elfSym{})
        vdsoDynSize        = vdsoArrayMax / unsafe.Sizeof(elfDyn{})
        vdsoSymStringsSize = vdsoArrayMax     // byte
        vdsoVerSymSize     = vdsoArrayMax / 2 // uint16
        vdsoHashSize       = vdsoArrayMax / 4 // uint32

        // vdsoBloomSizeScale is a scaling factor for gnuhash tables which are uint32 indexed,
        // but contain uintptrs
        vdsoBloomSizeScale = unsafe.Sizeof(uintptr(0)) / 4 // uint32
)

Compiler is the name of the compiler toolchain that built the running binary. Known toolchains are:

gc      Also known as cmd/compile.
gccgo   The gccgo front end, part of the GCC compiler suite.
const Compiler = "gc"

GOARCH is the running program's architecture target: one of 386, amd64, arm, s390x, and so on.

const GOARCH string = sys.GOARCH

GOOS is the running program's operating system target: one of darwin, freebsd, linux, and so on. To view possible combinations of GOOS and GOARCH, run "go tool dist list".

const GOOS string = sys.GOOS
const (
        // Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
        // 16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
        _GoidCacheBatch = 16
)

argp used in Defer structs when there is no argp.

const _NoArgs = ^uintptr(0)

The maximum number of frames we print for a traceback

const _TracebackMaxFrames = 100

buffer of pending write data

const (
        bufSize = 4096
)
const cgoCheckPointerFail = "cgo argument has Go pointer to Go pointer"
const cgoResultFail = "cgo result has Go pointer"
const cgoWriteBarrierFail = "Go pointer stored into non-Go memory"

debugCachedWork enables extra checks for debugging premature mark termination.

For debugging issue #27993.

const debugCachedWork = false
const debugPcln = false
const debugSelect = false

defaultHeapMinimum is the value of heapminimum for GOGC==100.

const defaultHeapMinimum = 4 << 20
const fastlogNumBits = 5

forcePreemptNS is the time slice given to a G before it is preempted.

const forcePreemptNS = 10 * 1000 * 1000 // 10ms

freezeStopWait is a large value that freezetheworld sets sched.stopwait to in order to request that all Gs permanently stop.

const freezeStopWait = 0x7fffffff

gcAssistTimeSlack is the nanoseconds of mutator assist time that can accumulate on a P before updating gcController.assistTime.

const gcAssistTimeSlack = 5000

gcBackgroundUtilization is the fixed CPU utilization for background marking. It must be <= gcGoalUtilization. The difference between gcGoalUtilization and gcBackgroundUtilization will be made up by mark assists. The scheduler will aim to use within 50% of this goal.

Setting this to < gcGoalUtilization avoids saturating the trigger feedback controller when there are no assists, which allows it to better control CPU and heap growth. However, the larger the gap, the more mutator assists are expected to happen, which impact mutator latency.

const gcBackgroundUtilization = 0.25
const gcBitsChunkBytes = uintptr(64 << 10)
const gcBitsHeaderBytes = unsafe.Sizeof(gcBitsHeader{})

gcCreditSlack is the amount of scan work credit that can accumulate locally before updating gcController.scanWork and, optionally, gcController.bgScanCredit. Lower values give a more accurate assist ratio and make it more likely that assists will successfully steal background credit. Higher values reduce memory contention.

const gcCreditSlack = 2000

gcGoalUtilization is the goal CPU utilization for marking as a fraction of GOMAXPROCS.

const gcGoalUtilization = 0.30

gcOverAssistWork determines how many extra units of scan work a GC assist does when an assist happens. This amortizes the cost of an assist by pre-paying for this many bytes of future allocations.

const gcOverAssistWork = 64 << 10
const hashRandomBytes = sys.PtrSize / 4 * 64
const itabInitSize = 512
const mProfCycleWrap = uint32(len(memRecord{}.future)) * (2 << 24)
const maxCPUProfStack = 64
const maxZero = 1024 // must match value in cmd/compile/internal/gc/walk.go

minPhysPageSize is a lower-bound on the physical page size. The true physical page size may be larger than this. In contrast, sys.PhysPageSize is an upper-bound on the physical page size.

const minPhysPageSize = 4096
const minfunc = 16 // minimum function size
const msanenabled = false

osRelaxMinNS is the number of nanoseconds of idleness to tolerate without performing an osRelax. Since osRelax may reduce the precision of timers, this should be sufficiently larger than the relaxed timer precision to keep the timer error acceptable.

const osRelaxMinNS = 0
const pcbucketsize = 256 * minfunc // size of bucket in the pc->func lookup table

persistentChunkSize is the number of bytes we allocate when we grow a persistentAlloc.

const persistentChunkSize = 256 << 10
const pollBlockSize = 4 * 1024
const raceenabled = false

To shake out latent assumptions about scheduling order, we introduce some randomness into scheduling decisions when running with the race detector. The need for this was made obvious by changing the (deterministic) scheduling order in Go 1.5 and breaking many poorly-written tests. With the randomness here, as long as the tests pass consistently with -race, they shouldn't have latent scheduling assumptions.

const randomizeScheduler = raceenabled
const rwmutexMaxReaders = 1 << 30

Prime to not correlate with any user patterns.

const semTabSize = 251
const sizeofSkipFunction = 256
const stackTraceDebug = false

testSmallBuf forces a small write barrier buffer to stress write barrier flushing.

const testSmallBuf = false

timersLen is the length of the timers array.

Ideally, this would be set to GOMAXPROCS, but that would require dynamic reallocation.

The current value is a compromise between memory usage and performance that should cover the majority of GOMAXPROCS values used in the wild.

const timersLen = 64

The constant is known to the compiler. There is no fundamental theory behind this number.

const tmpStringBufSize = 32
const usesLR = sys.MinFrameSize > 0
const (
        // vdsoArrayMax is the byte-size of a maximally sized array on this architecture.
        // See cmd/compile/internal/amd64/galign.go arch.MAXWIDTH initialization.
        vdsoArrayMax = 1<<50 - 1
)

Variables

var (
        _cgo_init                     unsafe.Pointer
        _cgo_thread_start             unsafe.Pointer
        _cgo_sys_thread_create        unsafe.Pointer
        _cgo_notify_runtime_init_done unsafe.Pointer
        _cgo_callers                  unsafe.Pointer
        _cgo_set_context_function     unsafe.Pointer
        _cgo_yield                    unsafe.Pointer
)
var (
        // Set in runtime.cpuinit.
        // TODO: deprecate these; use internal/cpu directly.
        x86HasPOPCNT bool
        x86HasSSE41  bool

        arm64HasATOMICS bool
)
var (
        itabLock      mutex                               // lock for accessing itab table
        itabTable     = &itabTableInit                    // pointer to current table
        itabTableInit = itabTableType{size: itabInitSize} // starter table
)
var (
        uint16Eface interface{} = uint16InterfacePtr(0)
        uint32Eface interface{} = uint32InterfacePtr(0)
        uint64Eface interface{} = uint64InterfacePtr(0)
        stringEface interface{} = stringInterfacePtr("")
        sliceEface  interface{} = sliceInterfacePtr(nil)

        uint16Type *_type = (*eface)(unsafe.Pointer(&uint16Eface))._type
        uint32Type *_type = (*eface)(unsafe.Pointer(&uint32Eface))._type
        uint64Type *_type = (*eface)(unsafe.Pointer(&uint64Eface))._type
        stringType *_type = (*eface)(unsafe.Pointer(&stringEface))._type
        sliceType  *_type = (*eface)(unsafe.Pointer(&sliceEface))._type
)
var (
        fingCreate  uint32
        fingRunning bool
)
var (
        mbuckets  *bucket // memory profile buckets
        bbuckets  *bucket // blocking profile buckets
        xbuckets  *bucket // mutex profile buckets
        buckhash  *[179999]*bucket
        bucketmem uintptr

        mProf struct {

                // cycle is the global heap profile cycle. This wraps
                // at mProfCycleWrap.
                cycle uint32
                // flushed indicates that future[cycle] in all buckets
                // has been flushed to the active profile.
                flushed bool
        }
)
var (
        netpollInited  uint32
        pollcache      pollCache
        netpollWaiters uint32
)
var (
        // printBacklog is a circular buffer of messages written with the builtin
        // print* functions, for use in postmortem analysis of core dumps.
        printBacklog      [512]byte
        printBacklogIndex int
)
var (
        m0           m
        g0           g
        raceprocctx0 uintptr
)
var (
        argc int32
        argv **byte
)
var (
        allglen    uintptr
        allm       *m
        allp       []*p  // len(allp) == gomaxprocs; may change at safe points, otherwise immutable
        allpLock   mutex // Protects P-less reads of allp and all writes
        gomaxprocs int32
        ncpu       int32
        forcegc    forcegcstate
        sched      schedt
        newprocs   int32

        // Information about what cpu features are available.
        // Packages outside the runtime should not use these
        // as they are not an external api.
        // Set on startup in asm_{386,amd64,amd64p32}.s
        processorVersionInfo uint32
        isIntel              bool
        lfenceBeforeRdtsc    bool

        goarm                uint8 // set by cmd/link on arm systems
        framepointer_enabled bool  // set by cmd/link
)

Set by the linker so the runtime can determine the buildmode.

var (
        islibrary bool // -buildmode=c-shared
        isarchive bool // -buildmode=c-archive
)
var (
        chansendpc = funcPC(chansend)
        chanrecvpc = funcPC(chanrecv)
)

channels for synchronizing signal mask updates with the signal mask thread

var (
        disableSigChan  chan uint32
        enableSigChan   chan uint32
        maskUpdatedChan chan struct{}
)

initialize with vsyscall fallbacks

var (
        vdsoGettimeofdaySym uintptr = 0xffffffffff600000
        vdsoClockgettimeSym uintptr = 0
)

MemProfileRate controls the fraction of memory allocations that are recorded and reported in the memory profile. The profiler aims to sample an average of one allocation per MemProfileRate bytes allocated.

To include every allocated block in the profile, set MemProfileRate to 1. To turn off profiling entirely, set MemProfileRate to 0.

The tools that process the memory profiles assume that the profile rate is constant across the lifetime of the program and equal to the current value. Programs that change the memory profiling rate should do so just once, as early as possible in the execution of the program (for example, at the beginning of main).

var MemProfileRate int = 512 * 1024
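
For example, a program that wants every allocation recorded could set the rate once at the top of main and write the profile with runtime/pprof before exiting (a sketch; the output file name is arbitrary, and the os, log, runtime, and runtime/pprof imports are assumed):

func main() {
        // Set once, as early as possible, so the constant-rate assumption holds.
        runtime.MemProfileRate = 1

        // ... the rest of the program ...

        f, err := os.Create("mem.pprof")
        if err != nil {
                log.Fatal(err)
        }
        defer f.Close()
        runtime.GC() // flush outstanding allocations into the profile
        if err := pprof.WriteHeapProfile(f); err != nil {
                log.Fatal(err)
        }
}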

Make the compiler check that heapBits.arena is large enough to hold the maximum arena frame number.

var _ = heapBits{arena: (1<<heapAddrBits)/heapArenaBytes - 1}

_cgo_mmap is filled in by runtime/cgo when it is linked into the program, so it is only non-nil when using cgo. go:linkname _cgo_mmap _cgo_mmap

var _cgo_mmap unsafe.Pointer

_cgo_munmap is filled in by runtime/cgo when it is linked into the program, so it is only non-nil when using cgo. go:linkname _cgo_munmap _cgo_munmap

var _cgo_munmap unsafe.Pointer
var _cgo_setenv unsafe.Pointer // pointer to C function

_cgo_sigaction is filled in by runtime/cgo when it is linked into the program, so it is only non-nil when using cgo. go:linkname _cgo_sigaction _cgo_sigaction

var _cgo_sigaction unsafe.Pointer
var _cgo_unsetenv unsafe.Pointer // pointer to C function
var addrspace_vec [1]byte
var adviseUnused = uint32(_MADV_FREE)

used in asm_{386,amd64,arm64}.s to seed the hash function

var aeskeysched [hashRandomBytes]byte
var algarray = [alg_max]typeAlg{
        alg_NOEQ:     {nil, nil},
        alg_MEM0:     {memhash0, memequal0},
        alg_MEM8:     {memhash8, memequal8},
        alg_MEM16:    {memhash16, memequal16},
        alg_MEM32:    {memhash32, memequal32},
        alg_MEM64:    {memhash64, memequal64},
        alg_MEM128:   {memhash128, memequal128},
        alg_STRING:   {strhash, strequal},
        alg_INTER:    {interhash, interequal},
        alg_NILINTER: {nilinterhash, nilinterequal},
        alg_FLOAT32:  {f32hash, f32equal},
        alg_FLOAT64:  {f64hash, f64equal},
        alg_CPLX64:   {c64hash, c64equal},
        alg_CPLX128:  {c128hash, c128equal},
}
var argslice []string
var badmorestackg0Msg = "fatal: morestack on g0\n"
var badmorestackgsignalMsg = "fatal: morestack on gsignal\n"
var badsystemstackMsg = "fatal: systemstack called from unexpected goroutine"
var blockprofilerate uint64 // in CPU ticks
var buf [bufSize]byte
var buildVersion = sys.TheVersion

cgoAlwaysFalse is a boolean value that is always false. The cgo-generated code says if cgoAlwaysFalse { cgoUse(p) }. The compiler cannot see that cgoAlwaysFalse is always false, so it emits the test and keeps the call, giving the desired escape analysis result. The test is cheaper than the call.

var cgoAlwaysFalse bool
var cgoContext unsafe.Pointer

cgoHasExtraM is set on startup when an extra M is created for cgo. The extra M must be created before any C/C++ code calls cgocallback.

var cgoHasExtraM bool
var cgoSymbolizer unsafe.Pointer

When running with cgo, we call _cgo_thread_start to start threads for us so that we can play nicely with foreign code.

var cgoThreadStart unsafe.Pointer
var cgoTraceback unsafe.Pointer
var cgo_yield = &_cgo_yield
var class_to_allocnpages = [_NumSizeClasses]uint8{0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1, 2, 1, 3, 2, 3, 1, 3, 2, 3, 4, 5, 6, 1, 7, 6, 5, 4, 3, 5, 7, 2, 9, 7, 5, 8, 3, 10, 7, 4}
var class_to_divmagic = [_NumSizeClasses]divMagic{{0, 0, 0, 0}, {3, 0, 1, 65528}, {4, 0, 1, 65520}, {5, 0, 1, 65504}, {4, 9, 171, 0}, {6, 0, 1, 65472}, {4, 10, 205, 0}, {5, 9, 171, 0}, {4, 11, 293, 0}, {7, 0, 1, 65408}, {4, 9, 57, 0}, {5, 10, 205, 0}, {4, 12, 373, 0}, {6, 7, 43, 0}, {4, 13, 631, 0}, {5, 11, 293, 0}, {4, 13, 547, 0}, {8, 0, 1, 65280}, {5, 9, 57, 0}, {6, 9, 103, 0}, {5, 12, 373, 0}, {7, 7, 43, 0}, {5, 10, 79, 0}, {6, 10, 147, 0}, {5, 11, 137, 0}, {9, 0, 1, 65024}, {6, 9, 57, 0}, {7, 6, 13, 0}, {6, 11, 187, 0}, {8, 5, 11, 0}, {7, 8, 37, 0}, {10, 0, 1, 64512}, {7, 9, 57, 0}, {8, 6, 13, 0}, {7, 11, 187, 0}, {9, 5, 11, 0}, {8, 8, 37, 0}, {11, 0, 1, 63488}, {8, 9, 57, 0}, {7, 10, 49, 0}, {10, 5, 11, 0}, {7, 10, 41, 0}, {7, 9, 19, 0}, {12, 0, 1, 61440}, {8, 9, 27, 0}, {8, 10, 49, 0}, {11, 5, 11, 0}, {7, 13, 161, 0}, {7, 13, 155, 0}, {8, 9, 19, 0}, {13, 0, 1, 57344}, {8, 12, 111, 0}, {9, 9, 27, 0}, {11, 6, 13, 0}, {7, 14, 193, 0}, {12, 3, 3, 0}, {8, 13, 155, 0}, {11, 8, 37, 0}, {14, 0, 1, 49152}, {11, 8, 29, 0}, {7, 13, 55, 0}, {12, 5, 7, 0}, {8, 14, 193, 0}, {13, 3, 3, 0}, {7, 14, 77, 0}, {12, 7, 19, 0}, {15, 0, 1, 32768}}
var class_to_size = [_NumSizeClasses]uint16{0, 8, 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224, 240, 256, 288, 320, 352, 384, 416, 448, 480, 512, 576, 640, 704, 768, 896, 1024, 1152, 1280, 1408, 1536, 1792, 2048, 2304, 2688, 3072, 3200, 3456, 4096, 4864, 5376, 6144, 6528, 6784, 6912, 8192, 9472, 9728, 10240, 10880, 12288, 13568, 14336, 16384, 18432, 19072, 20480, 21760, 24576, 27264, 28672, 32768}

crashing is the number of m's we have waited for when implementing GOTRACEBACK=crash when a signal is received.

var crashing int32
var dbgvars = []dbgVar{
        {"allocfreetrace", &debug.allocfreetrace},
        {"clobberfree", &debug.clobberfree},
        {"cgocheck", &debug.cgocheck},
        {"efence", &debug.efence},
        {"gccheckmark", &debug.gccheckmark},
        {"gcpacertrace", &debug.gcpacertrace},
        {"gcshrinkstackoff", &debug.gcshrinkstackoff},
        {"gcstoptheworld", &debug.gcstoptheworld},
        {"gctrace", &debug.gctrace},
        {"invalidptr", &debug.invalidptr},
        {"madvdontneed", &debug.madvdontneed},
        {"sbrk", &debug.sbrk},
        {"scavenge", &debug.scavenge},
        {"scheddetail", &debug.scheddetail},
        {"schedtrace", &debug.schedtrace},
        {"tracebackancestors", &debug.tracebackancestors},
}

Holds variables parsed from GODEBUG env var, except for "memprofilerate" since there is an existing int var for that value, which may already have an initial value.

var debug struct {
        allocfreetrace     int32
        cgocheck           int32
        clobberfree        int32
        efence             int32
        gccheckmark        int32
        gcpacertrace       int32
        gcshrinkstackoff   int32
        gcstoptheworld     int32
        gctrace            int32
        invalidptr         int32
        madvdontneed       int32 // for Linux; issue 28466
        sbrk               int32
        scavenge           int32
        scheddetail        int32
        schedtrace         int32
        tracebackancestors int32
}
var debugPtrmask struct {
        lock mutex
        data *byte
}
var didothers bool
var divideError = error(errorString("integer divide by zero"))
var dumpfd uintptr // fd to write the dump to.
var dumphdr = []byte("go1.7 heap dump\n")
var earlycgocallback = []byte("fatal error: cgo callback before cgo call\n")
var envs []string
var (
        epfd int32 = -1 // epoll descriptor
)
var extraMCount uint32 // Protected by lockextra
var extraMWaiters uint32
var extram uintptr
var failallocatestack = []byte("runtime: failed to allocate stack for the new OS thread\n")
var failthreadcreate = []byte("runtime: failed to create new OS thread\n")

nacl fake time support - time in nanoseconds since 1970

var faketime int64
var fastlog2Table = [1<<fastlogNumBits + 1]float64{
        0,
        0.0443941193584535,
        0.08746284125033943,
        0.12928301694496647,
        0.16992500144231248,
        0.2094533656289499,
        0.24792751344358555,
        0.28540221886224837,
        0.3219280948873623,
        0.3575520046180837,
        0.39231742277876036,
        0.4262647547020979,
        0.4594316186372973,
        0.4918530963296748,
        0.5235619560570128,
        0.5545888516776374,
        0.5849625007211563,
        0.6147098441152082,
        0.6438561897747247,
        0.6724253419714956,
        0.7004397181410922,
        0.7279204545631992,
        0.7548875021634686,
        0.7813597135246596,
        0.8073549220576042,
        0.8328900141647417,
        0.8579809951275721,
        0.8826430493618412,
        0.9068905956085185,
        0.9307373375628862,
        0.9541963103868752,
        0.9772799234999164,
        1,
}
var finalizer1 = [...]byte{

        1<<0 | 1<<1 | 0<<2 | 1<<3 | 1<<4 | 1<<5 | 1<<6 | 0<<7,
        1<<0 | 1<<1 | 1<<2 | 1<<3 | 0<<4 | 1<<5 | 1<<6 | 1<<7,
        1<<0 | 0<<1 | 1<<2 | 1<<3 | 1<<4 | 1<<5 | 0<<6 | 1<<7,
        1<<0 | 1<<1 | 1<<2 | 0<<3 | 1<<4 | 1<<5 | 1<<6 | 1<<7,
        0<<0 | 1<<1 | 1<<2 | 1<<3 | 1<<4 | 0<<5 | 1<<6 | 1<<7,
}
var fingwait bool
var fingwake bool
var finptrmask [_FinBlockSize / sys.PtrSize / 8]byte
var floatError = error(errorString("floating point error"))

forcegcperiod is the maximum time in nanoseconds between garbage collections. If we go this long without a garbage collection, one is forced to run.

This is a variable for testing purposes. It normally doesn't change.

var forcegcperiod int64 = 2 * 60 * 1e9

Bit vector of free marks. Needs to be as big as the largest number of objects per span.

var freemark [_PageSize / 8]bool

freezing is set to non-zero if the runtime is trying to freeze the world.

var freezing uint32

Stores the signal handlers registered before Go installed its own. These signal handlers will be invoked in cases where Go doesn't want to handle a particular signal (e.g., signal occurred on a non-Go thread). See sigfwdgo for more information on when the signals are forwarded.

This is read by the signal handler; accesses should use atomic.Loaduintptr and atomic.Storeuintptr.

var fwdSig [_NSIG]uintptr
var gStatusStrings = [...]string{
        _Gidle:      "idle",
        _Grunnable:  "runnable",
        _Grunning:   "running",
        _Gsyscall:   "syscall",
        _Gwaiting:   "waiting",
        _Gdead:      "dead",
        _Gcopystack: "copystack",
}
var gcBitsArenas struct {
        lock     mutex
        free     *gcBitsArena
        next     *gcBitsArena // Read atomically. Write atomically under lock.
        current  *gcBitsArena
        previous *gcBitsArena
}

gcBlackenEnabled is 1 if mutator assists and background mark workers are allowed to blacken objects. This must only be set when gcphase == _GCmark.

var gcBlackenEnabled uint32

gcMarkDoneFlushed counts the number of P's with flushed work.

Ideally this would be a captured local in gcMarkDone, but forEachP escapes its callback closure, so it can't capture anything.

This is protected by markDoneSema.

var gcMarkDoneFlushed uint32

gcMarkWorkerModeStrings are the string labels of gcMarkWorkerModes used in execution traces.

var gcMarkWorkerModeStrings = [...]string{
        "GC (dedicated)",
        "GC (fractional)",
        "GC (idle)",
}

gcWorkPauseGen is for debugging the mark completion algorithm. gcWork put operations spin while gcWork.pauseGen == gcWorkPauseGen. Only used if debugCachedWork is true.

For debugging issue #27993.

var gcWorkPauseGen uint32 = 1

Initialized from $GOGC. GOGC=off means no GC.

var gcpercent int32

Garbage collector phase. Indicates to write barrier and synchronization task to perform.

var gcphase uint32
var globalAlloc struct {
        mutex
        persistentAlloc
}

handlingSig is indexed by signal number and is non-zero if we are currently handling the signal. Or, to put it another way, whether the signal handler is currently set to the Go signal handler or not. This is uint32 rather than bool so that we can use atomic instructions.

var handlingSig [_NSIG]uint32

exported value for testing

var hashLoad = float32(loadFactorNum) / float32(loadFactorDen)

used in hash{32,64}.go to seed the hash function

var hashkey [4]uintptr

heapminimum is the minimum heap size at which to trigger GC. For small heaps, this overrides the usual GOGC*live set rule.

When there is a very small live set but a lot of allocation, simply collecting when the heap reaches GOGC*live results in many GC cycles and high total per-GC overhead. This minimum amortizes this per-GC overhead while keeping the heap reasonably small.

During initialization this is set to 4MB*GOGC/100. In the case of GOGC==0, this will set heapminimum to 0, resulting in constant collection even when the heap size is small, which is useful for debugging.

var heapminimum uint64 = defaultHeapMinimum

inForkedChild is true while manipulating signals in the child process. This is used to avoid calling libc functions in case we are using vfork.

var inForkedChild bool
var indexError = error(errorString("index out of range"))
var inf = float64frombits(0x7FF0000000000000)

iscgo is set to true by the runtime/cgo package

var iscgo bool
var labelSync uintptr

Counts SIGPROFs received while in atomic64 critical section, on mips{,le}

var lostAtomic64Count uint64

mSpanStateNames are the names of the span states, indexed by mSpanState.

var mSpanStateNames = []string{
        "mSpanDead",
        "mSpanInUse",
        "mSpanManual",
        "mSpanFree",
}

mainStarted indicates that the main M has started.

var mainStarted bool

main_init_done is a signal used by cgocallbackg that initialization has been completed. It is made before _cgo_notify_runtime_init_done, so all cgo calls can rely on it existing. When main_init is complete, it is closed, meaning cgocallbackg can reliably receive from it.

var main_init_done chan bool
var maxstacksize uintptr = 1 << 20 // enough until runtime.main sets it for real
var memoryError = error(errorString("invalid memory address or nil pointer dereference"))
var modulesSlice *[]*moduledata // see activeModules
var mutexprofilerate uint64 // fraction sampled
var nbuf uintptr

newmHandoff contains a list of m structures that need new OS threads. This is used by newm in situations where newm itself can't safely start an OS thread.

var newmHandoff struct {
        lock mutex

        // newm points to a list of M structures that need new OS
        // threads. The list is linked through m.schedlink.
        newm muintptr

        // waiting indicates that wake needs to be notified when an m
        // is put on the list.
        waiting bool
        wake    note

        // haveTemplateThread indicates that the templateThread has
        // been started. This is not protected by lock. Use cas to set
        // to 1.
        haveTemplateThread uint32
}

oneBitCount is indexed by byte and gives the number of 1 bits in that byte. For example, 128 has 1 bit set, so oneBitCount[128] holds 1 (see the sketch after the table below).

var oneBitCount = [256]uint8{
        0, 1, 1, 2, 1, 2, 2, 3,
        1, 2, 2, 3, 2, 3, 3, 4,
        1, 2, 2, 3, 2, 3, 3, 4,
        2, 3, 3, 4, 3, 4, 4, 5,
        1, 2, 2, 3, 2, 3, 3, 4,
        2, 3, 3, 4, 3, 4, 4, 5,
        2, 3, 3, 4, 3, 4, 4, 5,
        3, 4, 4, 5, 4, 5, 5, 6,
        1, 2, 2, 3, 2, 3, 3, 4,
        2, 3, 3, 4, 3, 4, 4, 5,
        2, 3, 3, 4, 3, 4, 4, 5,
        3, 4, 4, 5, 4, 5, 5, 6,
        2, 3, 3, 4, 3, 4, 4, 5,
        3, 4, 4, 5, 4, 5, 5, 6,
        3, 4, 4, 5, 4, 5, 5, 6,
        4, 5, 5, 6, 5, 6, 6, 7,
        1, 2, 2, 3, 2, 3, 3, 4,
        2, 3, 3, 4, 3, 4, 4, 5,
        2, 3, 3, 4, 3, 4, 4, 5,
        3, 4, 4, 5, 4, 5, 5, 6,
        2, 3, 3, 4, 3, 4, 4, 5,
        3, 4, 4, 5, 4, 5, 5, 6,
        3, 4, 4, 5, 4, 5, 5, 6,
        4, 5, 5, 6, 5, 6, 6, 7,
        2, 3, 3, 4, 3, 4, 4, 5,
        3, 4, 4, 5, 4, 5, 5, 6,
        3, 4, 4, 5, 4, 5, 5, 6,
        4, 5, 5, 6, 5, 6, 6, 7,
        3, 4, 4, 5, 4, 5, 5, 6,
        4, 5, 5, 6, 5, 6, 6, 7,
        4, 5, 5, 6, 5, 6, 6, 7,
        5, 6, 6, 7, 6, 7, 7, 8}
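
A hedged sketch of how a byte-indexed table like this counts the set bits of a wider word (the runtime's actual callers differ, but the idea is the same):

// Illustrative population count over a uint64 using the byte table above.
func popcount64(x uint64) int {
        n := 0
        for i := uint(0); i < 64; i += 8 {
                n += int(oneBitCount[byte(x>>i)])
        }
        return n
}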

ptrmask for an allocation containing a single pointer.

var oneptrmask = [...]uint8{1}
var overflowError = error(errorString("integer overflow"))
var overflowTag [1]unsafe.Pointer // always nil

panicking is non-zero when crashing the program for an unrecovered panic. panicking is incremented and decremented atomically.

var panicking uint32

physPageSize is the size in bytes of the OS's physical pages. Mapping and unmapping operations must be done at multiples of physPageSize.

This must be set by the OS init code (typically in osinit) before mallocinit.

var physPageSize uintptr

pinnedTypemaps are the map[typeOff]*_type from the moduledata objects.

These typemap objects are allocated at run time on the heap, but the only direct reference to them is in the moduledata, created by the linker and marked SNOPTRDATA so it is ignored by the GC.

To make sure the map isn't collected, we keep a second reference here.

var pinnedTypemaps []map[typeOff]*_type
var poolcleanup func()
var procAuxv = []byte("/proc/self/auxv\x00")
var prof struct {
        signalLock uint32
        hz         int32
}
var ptrnames = []string{
        0: "scalar",
        1: "ptr",
}
var racecgosync uint64 // represents possible synchronization in C code

reflectOffs holds type offsets defined at run time by the reflect package.

When a type is defined at run time, its *rtype data lives on the heap. There is a wide range of possible addresses the heap may use, which may not be representable as a 32-bit offset. Moreover, the GC may one day start moving heap memory, in which case there is no stable offset that can be defined.

To provide stable offsets, we pin *rtype objects in a global map and treat the offset as an identifier. We use negative offsets that do not overlap with any compile-time module offsets.

Entries are created by reflect.addReflectOff.

var reflectOffs struct {
        lock mutex
        next int32
        m    map[int32]unsafe.Pointer
        minv map[unsafe.Pointer]int32
}

runningPanicDefers is non-zero while running deferred functions for panic. runningPanicDefers is incremented and decremented atomically. This is used to try hard to get a panic stack trace out when exiting.

var runningPanicDefers uint32

runtimeInitTime is the nanotime() at which the runtime started.

var runtimeInitTime int64
var semtable [semTabSize]struct {
        root semaRoot
        pad  [cpu.CacheLinePadSize - unsafe.Sizeof(semaRoot{})]byte
}

sig handles communication between the signal handler and os/signal. Other than the inuse and recv fields, the fields are accessed atomically.

The wanted and ignored fields are only written by one goroutine at a time; access is controlled by the handlers Mutex in os/signal. The fields are only read by that one goroutine and by the signal handler. We access them atomically to minimize the race between setting them in the goroutine calling os/signal and the signal handler, which may be running in a different thread. That race is unavoidable, as there is no connection between handling a signal and receiving one, but atomic instructions should minimize it.

var sig struct {
        note       note
        mask       [(_NSIG + 31) / 32]uint32
        wanted     [(_NSIG + 31) / 32]uint32
        ignored    [(_NSIG + 31) / 32]uint32
        recv       [(_NSIG + 31) / 32]uint32
        state      uint32
        delivering uint32
        inuse      bool
}
var signalsOK bool
var sigprofCallersUse uint32
var sigset_all = sigset{^uint32(0), ^uint32(0)}
var sigtable = [...]sigTabT{
        {0, "SIGNONE: no trap"},
        {_SigNotify + _SigKill, "SIGHUP: terminal line hangup"},
        {_SigNotify + _SigKill, "SIGINT: interrupt"},
        {_SigNotify + _SigThrow, "SIGQUIT: quit"},
        {_SigThrow + _SigUnblock, "SIGILL: illegal instruction"},
        {_SigThrow + _SigUnblock, "SIGTRAP: trace trap"},
        {_SigNotify + _SigThrow, "SIGABRT: abort"},
        {_SigPanic + _SigUnblock, "SIGBUS: bus error"},
        {_SigPanic + _SigUnblock, "SIGFPE: floating-point exception"},
        {0, "SIGKILL: kill"},
        {_SigNotify, "SIGUSR1: user-defined signal 1"},
        {_SigPanic + _SigUnblock, "SIGSEGV: segmentation violation"},
        {_SigNotify, "SIGUSR2: user-defined signal 2"},
        {_SigNotify, "SIGPIPE: write to broken pipe"},
        {_SigNotify, "SIGALRM: alarm clock"},
        {_SigNotify + _SigKill, "SIGTERM: termination"},
        {_SigThrow + _SigUnblock, "SIGSTKFLT: stack fault"},
        {_SigNotify + _SigUnblock + _SigIgn, "SIGCHLD: child status has changed"},
        {_SigNotify + _SigDefault + _SigIgn, "SIGCONT: continue"},
        {0, "SIGSTOP: stop, unblockable"},
        {_SigNotify + _SigDefault + _SigIgn, "SIGTSTP: keyboard stop"},
        {_SigNotify + _SigDefault + _SigIgn, "SIGTTIN: background read from tty"},
        {_SigNotify + _SigDefault + _SigIgn, "SIGTTOU: background write to tty"},
        {_SigNotify + _SigIgn, "SIGURG: urgent condition on socket"},
        {_SigNotify, "SIGXCPU: cpu limit exceeded"},
        {_SigNotify, "SIGXFSZ: file size limit exceeded"},
        {_SigNotify, "SIGVTALRM: virtual alarm clock"},
        {_SigNotify + _SigUnblock, "SIGPROF: profiling alarm clock"},
        {_SigNotify + _SigIgn, "SIGWINCH: window size change"},
        {_SigNotify, "SIGIO: i/o now possible"},
        {_SigNotify, "SIGPWR: power failure restart"},
        {_SigThrow, "SIGSYS: bad system call"},
        {_SigSetStack + _SigUnblock, "signal 32"},
        {_SigSetStack + _SigUnblock, "signal 33"},
        {_SigNotify, "signal 34"},
        {_SigNotify, "signal 35"},
        {_SigNotify, "signal 36"},
        {_SigNotify, "signal 37"},
        {_SigNotify, "signal 38"},
        {_SigNotify, "signal 39"},
        {_SigNotify, "signal 40"},
        {_SigNotify, "signal 41"},
        {_SigNotify, "signal 42"},
        {_SigNotify, "signal 43"},
        {_SigNotify, "signal 44"},
        {_SigNotify, "signal 45"},
        {_SigNotify, "signal 46"},
        {_SigNotify, "signal 47"},
        {_SigNotify, "signal 48"},
        {_SigNotify, "signal 49"},
        {_SigNotify, "signal 50"},
        {_SigNotify, "signal 51"},
        {_SigNotify, "signal 52"},
        {_SigNotify, "signal 53"},
        {_SigNotify, "signal 54"},
        {_SigNotify, "signal 55"},
        {_SigNotify, "signal 56"},
        {_SigNotify, "signal 57"},
        {_SigNotify, "signal 58"},
        {_SigNotify, "signal 59"},
        {_SigNotify, "signal 60"},
        {_SigNotify, "signal 61"},
        {_SigNotify, "signal 62"},
        {_SigNotify, "signal 63"},
        {_SigNotify, "signal 64"},
}
var size_to_class128 = [(_MaxSmallSize-smallSizeMax)/largeSizeDiv + 1]uint8{31, 32, 33, 34, 35, 36, 36, 37, 37, 38, 38, 39, 39, 39, 40, 40, 40, 41, 42, 42, 43, 43, 43, 43, 43, 44, 44, 44, 44, 44, 44, 45, 45, 45, 45, 46, 46, 46, 46, 46, 46, 47, 47, 47, 48, 48, 49, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 51, 51, 51, 51, 51, 51, 51, 51, 51, 51, 52, 52, 53, 53, 53, 53, 54, 54, 54, 54, 54, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 55, 56, 56, 56, 56, 56, 56, 56, 56, 56, 56, 57, 57, 57, 57, 57, 57, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 58, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 59, 60, 60, 60, 60, 60, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 61, 62, 62, 62, 62, 62, 62, 62, 62, 62, 62, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 65, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66, 66}
var size_to_class8 = [smallSizeMax/smallSizeDiv + 1]uint8{0, 1, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 18, 18, 19, 19, 19, 19, 20, 20, 20, 20, 21, 21, 21, 21, 22, 22, 22, 22, 23, 23, 23, 23, 24, 24, 24, 24, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 29, 29, 29, 29, 29, 29, 29, 29, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 31}

Size of the trailing by_size array differs between mstats and MemStats, and all data after by_size is local to runtime, not exported. NumSizeClasses was changed, but we cannot change MemStats because of backward compatibility. sizeof_C_MStats is the size of the prefix of mstats that corresponds to MemStats. It should match Sizeof(MemStats{}).

var sizeof_C_MStats = unsafe.Offsetof(memstats.by_size) + 61*unsafe.Sizeof(memstats.by_size[0])
var skipPC uintptr
var sliceError = error(errorString("slice bounds out of range"))

Global pool of large stack spans.

var stackLarge struct {
        lock mutex
        free [heapAddrBits - pageShift]mSpanList // free lists by log_2(s.npages)
}

Global pool of spans that have free stacks. Stacks are assigned an order according to size.

order = log_2(size/FixedStack)

There is a free list for each order. TODO: one lock per order?

var stackpool [_NumStackOrders]mSpanList
var starttime int64

startupRandomData holds random bytes initialized at startup. These come from the ELF AT_RANDOM auxiliary vector (vdso_linux_amd64.go or os_linux_386.go).

var startupRandomData []byte

staticbytes is used to avoid convT2E for byte-sized values.

var staticbytes = [...]byte{
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
        0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
        0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
        0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
        0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
        0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
        0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
        0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
        0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
        0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
        0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f,
        0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
        0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
        0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
        0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
        0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
        0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
        0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97,
        0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
        0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7,
        0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf,
        0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7,
        0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf,
        0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7,
        0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf,
        0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7,
        0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf,
        0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7,
        0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef,
        0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7,
        0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff,
}

testSigtrap is used by the runtime tests. If non-nil, it is called on SIGTRAP. If it returns true, the normal behavior on SIGTRAP is suppressed.

var testSigtrap func(info *siginfo, ctxt *sigctxt, gp *g) bool

TODO: These should be locals in testAtomic64, but we don't 8-byte align stack variables on 386.

var test_z64, test_x64 uint64

throwOnGCWork causes any operations that add pointers to a gcWork buffer to throw.

TODO(austin): This is a temporary debugging measure for issue #27993. To be removed before release.

var throwOnGCWork bool
var ticks struct {
        lock mutex
        pad  uint32 // ensure 8-byte alignment of val on 386
        val  uint64
}

timers contains "per-P" timer heaps.

Timers are queued into timersBucket associated with the current P, so each P may work with its own timers independently of other P instances.

Each timersBucket may be associated with multiple Ps if GOMAXPROCS > timersLen.

var timers [timersLen]struct {
        timersBucket

        // The padding should eliminate false sharing
        // between timersBucket values.
        pad [cpu.CacheLinePadSize - unsafe.Sizeof(timersBucket{})%cpu.CacheLinePadSize]byte
}
var tmpbuf []byte

trace is global tracing context.

var trace struct {
        lock          mutex       // protects the following members
        lockOwner     *g          // to avoid deadlocks during recursive lock acquisition
        enabled       bool        // when set runtime traces events
        shutdown      bool        // set when we are waiting for trace reader to finish after setting enabled to false
        headerWritten bool        // whether ReadTrace has emitted trace header
        footerWritten bool        // whether ReadTrace has emitted trace footer
        shutdownSema  uint32      // used to wait for ReadTrace completion
        seqStart      uint64      // sequence number when tracing was started
        ticksStart    int64       // cputicks when tracing was started
        ticksEnd      int64       // cputicks when tracing was stopped
        timeStart     int64       // nanotime when tracing was started
        timeEnd       int64       // nanotime when tracing was stopped
        seqGC         uint64      // GC start/done sequencer
        reading       traceBufPtr // buffer currently handed off to user
        empty         traceBufPtr // stack of empty buffers
        fullHead      traceBufPtr // queue of full buffers
        fullTail      traceBufPtr
        reader        guintptr        // goroutine that called ReadTrace, or nil
        stackTab      traceStackTable // maps stack traces to unique ids

        // Dictionary for traceEvString.
        //
        // TODO: central lock to access the map is not ideal.
        //   option: pre-assign ids to all user annotation region names and tags
        //   option: per-P cache
        //   option: sync.Map like data structure
        stringsLock mutex
        strings     map[string]uint64
        stringSeq   uint64

        // markWorkerLabels maps gcMarkWorkerMode to string ID.
        markWorkerLabels [len(gcMarkWorkerModeStrings)]uint64

        bufLock mutex       // protects buf
        buf     traceBufPtr // global trace buffer, used when running without a p
}
var traceback_cache uint32 = 2 << tracebackShift
var traceback_env uint32
var typecache [typeCacheBuckets]typeCacheBucket
var urandom_dev = []byte("/dev/urandom\x00")
var useAVXmemmove bool
var useAeshash bool

If useCheckmark is true, marking of an object uses the checkmark bits (encoding above) instead of the standard mark bits.

var useCheckmark = false
var vdsoLinuxVersion = vdsoVersionKey{"LINUX_2.6", 0x3ae75f6}
var vdsoSymbolKeys = []vdsoSymbolKey{
        {"__vdso_gettimeofday", 0x315ca59, 0xb01bca00, &vdsoGettimeofdaySym},
        {"__vdso_clock_gettime", 0xd35ec75, 0x6e43a318, &vdsoClockgettimeSym},
}
var waitReasonStrings = [...]string{
        waitReasonZero:                  "",
        waitReasonGCAssistMarking:       "GC assist marking",
        waitReasonIOWait:                "IO wait",
        waitReasonChanReceiveNilChan:    "chan receive (nil chan)",
        waitReasonChanSendNilChan:       "chan send (nil chan)",
        waitReasonDumpingHeap:           "dumping heap",
        waitReasonGarbageCollection:     "garbage collection",
        waitReasonGarbageCollectionScan: "garbage collection scan",
        waitReasonPanicWait:             "panicwait",
        waitReasonSelect:                "select",
        waitReasonSelectNoCases:         "select (no cases)",
        waitReasonGCAssistWait:          "GC assist wait",
        waitReasonGCSweepWait:           "GC sweep wait",
        waitReasonChanReceive:           "chan receive",
        waitReasonChanSend:              "chan send",
        waitReasonFinalizerWait:         "finalizer wait",
        waitReasonForceGGIdle:           "force gc (idle)",
        waitReasonSemacquire:            "semacquire",
        waitReasonSleep:                 "sleep",
        waitReasonSyncCondWait:          "sync.Cond.Wait",
        waitReasonTimerGoroutineIdle:    "timer goroutine (idle)",
        waitReasonTraceReaderBlocked:    "trace reader (blocked)",
        waitReasonWaitForGCCycle:        "wait for GC cycle",
        waitReasonGCWorkerIdle:          "GC worker (idle)",
}
var work struct {
        full  lfstack          // lock-free list of full blocks workbuf
        empty lfstack          // lock-free list of empty blocks workbuf
        pad0  cpu.CacheLinePad // prevents false-sharing between full/empty and nproc/nwait

        wbufSpans struct {
                lock mutex
                // free is a list of spans dedicated to workbufs, but
                // that don't currently contain any workbufs.
                free mSpanList
                // busy is a list of all spans containing workbufs on
                // one of the workbuf lists.
                busy mSpanList
        }

        // Restore 64-bit alignment on 32-bit.
        _ uint32

        // bytesMarked is the number of bytes marked this cycle. This
        // includes bytes blackened in scanned objects, noscan objects
        // that go straight to black, and permagrey objects scanned by
        // markroot during the concurrent scan phase. This is updated
        // atomically during the cycle. Updates may be batched
        // arbitrarily, since the value is only read at the end of the
        // cycle.
        //
        // Because of benign races during marking, this number may not
        // be the exact number of marked bytes, but it should be very
        // close.
        //
        // Put this field here because it needs 64-bit atomic access
        // (and thus 8-byte alignment even on 32-bit architectures).
        bytesMarked uint64

        markrootNext uint32 // next markroot job
        markrootJobs uint32 // number of markroot jobs

        nproc  uint32
        tstart int64
        nwait  uint32
        ndone  uint32

        // Number of roots of various root types. Set by gcMarkRootPrepare.
        nFlushCacheRoots                               int
        nDataRoots, nBSSRoots, nSpanRoots, nStackRoots int

        // Each type of GC state transition is protected by a lock.
        // Since multiple threads can simultaneously detect the state
        // transition condition, any thread that detects a transition
        // condition must acquire the appropriate transition lock,
        // re-check the transition condition and return if it no
        // longer holds or perform the transition if it does.
        // Likewise, any transition must invalidate the transition
        // condition before releasing the lock. This ensures that each
        // transition is performed by exactly one thread and threads
        // that need the transition to happen block until it has
        // happened.
        //
        // startSema protects the transition from "off" to mark or
        // mark termination.
        startSema uint32
        // markDoneSema protects transitions from mark to mark termination.
        markDoneSema uint32

        bgMarkReady note   // signal background mark worker has started
        bgMarkDone  uint32 // cas to 1 when at a background mark completion point

        // mode is the concurrency mode of the current GC cycle.
        mode gcMode

        // userForced indicates the current GC cycle was forced by an
        // explicit user call.
        userForced bool

        // totaltime is the CPU nanoseconds spent in GC since the
        // program started if debug.gctrace > 0.
        totaltime int64

        // initialHeapLive is the value of memstats.heap_live at the
        // beginning of this GC cycle.
        initialHeapLive uint64

        // assistQueue is a queue of assists that are blocked because
        // there was neither enough credit to steal nor enough work to
        // do.
        assistQueue struct {
                lock mutex
                q    gQueue
        }

        // sweepWaiters is a list of blocked goroutines to wake when
        // we transition from mark termination to sweep.
        sweepWaiters struct {
                lock mutex
                list gList
        }

        // cycles is the number of completed GC cycles, where a GC
        // cycle is sweep termination, mark, mark termination, and
        // sweep. This differs from memstats.numgc, which is
        // incremented at mark termination.
        cycles uint32

        // Timing/utilization stats for this cycle.
        stwprocs, maxprocs                 int32
        tSweepTerm, tMark, tMarkTerm, tEnd int64 // nanotime() of phase start

        pauseNS    int64 // total STW time this cycle
        pauseStart int64 // nanotime() of last STW

        // debug.gctrace heap sizes for this cycle.
        heap0, heap1, heap2, heapGoal uint64
}

Holding worldsema grants an M the right to try to stop the world and prevents gomaxprocs from changing concurrently.

var worldsema uint32 = 1

The compiler knows about this variable. If you change it, you must change builtin/runtime.go, too. If you change the first four bytes, you must also change the write barrier insertion code.

var writeBarrier struct {
        enabled bool    // compiler emits a check of this before calling write barrier
        pad     [3]byte // compiler uses 32-bit load for "enabled" field
        needed  bool    // whether we need a write barrier for current GC phase
        cgo     bool    // whether we need a write barrier for a cgo check
        alignme uint64  // guarantee alignment so that compiler can use a 32 or 64-bit load
}
var zeroVal [maxZero]byte

base address for all 0-byte allocations

var zerobase uintptr

func BlockProfile 1.1

func BlockProfile(p []BlockProfileRecord) (n int, ok bool)

BlockProfile returns n, the number of records in the current blocking profile. If len(p) >= n, BlockProfile copies the profile into p and returns n, true. If len(p) < n, BlockProfile does not change p and returns n, false.

Most clients should use the runtime/pprof package or the testing package's -test.blockprofile flag instead of calling BlockProfile directly.
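
If BlockProfile is called directly, the usual pattern is to ask for the record count first and retry until the snapshot fits; block profiling must also have been enabled, for example with SetBlockProfileRate. A sketch:

runtime.SetBlockProfileRate(1) // record every blocking event (illustrative rate)
// ... run the workload of interest ...
n, _ := runtime.BlockProfile(nil)
for {
        p := make([]runtime.BlockProfileRecord, n+10) // headroom for records added since the count
        var ok bool
        n, ok = runtime.BlockProfile(p)
        if ok {
                p = p[:n]
                // ... inspect p ...
                break
        }
}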

func Breakpoint

func Breakpoint()

Breakpoint executes a breakpoint trap.

func CPUProfile

func CPUProfile() []byte

CPUProfile panics. It formerly provided raw access to chunks of a pprof-format profile generated by the runtime. The details of generating that format have changed, so this functionality has been removed.

Deprecated: use the runtime/pprof package, or the handlers in the net/http/pprof package, or the testing package's -test.cpuprofile flag instead.

func Caller

func Caller(skip int) (pc uintptr, file string, line int, ok bool)

Caller reports file and line number information about function invocations on the calling goroutine's stack. The argument skip is the number of stack frames to ascend, with 0 identifying the caller of Caller. (For historical reasons the meaning of skip differs between Caller and Callers.) The return values report the program counter, file name, and line number within the file of the corresponding call. The boolean ok is false if it was not possible to recover the information.
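
A minimal sketch of Caller used for lightweight log annotation (assuming the fmt import):

func logHere(msg string) {
        _, file, line, ok := runtime.Caller(1) // skip=1: report the caller of logHere
        if !ok {
                file, line = "???", 0
        }
        fmt.Printf("%s:%d: %s\n", file, line, msg)
}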

func Callers

func Callers(skip int, pc []uintptr) int

Callers fills the slice pc with the return program counters of function invocations on the calling goroutine's stack. The argument skip is the number of stack frames to skip before recording in pc, with 0 identifying the frame for Callers itself and 1 identifying the caller of Callers. It returns the number of entries written to pc.

To translate these PCs into symbolic information such as function names and line numbers, use CallersFrames. CallersFrames accounts for inlined functions and adjusts the return program counters into call program counters. Iterating over the returned slice of PCs directly is discouraged, as is using FuncForPC on any of the returned PCs, since these cannot account for inlining or return program counter adjustment. go:noinline
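
A sketch of the recommended pairing of Callers with CallersFrames (assuming the fmt import):

pc := make([]uintptr, 32)
n := runtime.Callers(1, pc) // skip=1: start at the caller of Callers, i.e. this function
frames := runtime.CallersFrames(pc[:n])
for {
        frame, more := frames.Next()
        fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
        if !more {
                break
        }
}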

func GC

func GC()

GC runs a garbage collection and blocks the caller until the garbage collection is complete. It may also block the entire program.

func GOMAXPROCS

func GOMAXPROCS(n int) int

GOMAXPROCS sets the maximum number of CPUs that can be executing simultaneously and returns the previous setting. If n < 1, it does not change the current setting. The number of logical CPUs on the local machine can be queried with NumCPU. This call will go away when the scheduler improves.
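
A sketch of querying the setting without modifying it, using the n < 1 rule:

cur := runtime.GOMAXPROCS(0) // n < 1 only reports the current setting
fmt.Println("GOMAXPROCS:", cur, "NumCPU:", runtime.NumCPU())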

func GOROOT

func GOROOT() string

GOROOT returns the root of the Go tree. It uses the GOROOT environment variable, if set at process start, or else the root used during the Go build.

func Goexit

func Goexit()

Goexit terminates the goroutine that calls it. No other goroutine is affected. Goexit runs all deferred calls before terminating the goroutine. Because Goexit is not a panic, any recover calls in those deferred functions will return nil.

Calling Goexit from the main goroutine terminates that goroutine without func main returning. Since func main has not returned, the program continues execution of other goroutines. If all other goroutines exit, the program crashes.
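
A small sketch showing that deferred calls still run and the goroutine simply ends:

go func() {
        defer fmt.Println("deferred call runs before the goroutine exits")
        runtime.Goexit()
        fmt.Println("never reached")
}()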

func GoroutineProfile

func GoroutineProfile(p []StackRecord) (n int, ok bool)

GoroutineProfile returns n, the number of records in the active goroutine stack profile. If len(p) >= n, GoroutineProfile copies the profile into p and returns n, true. If len(p) < n, GoroutineProfile does not change p and returns n, false.

Most clients should use the runtime/pprof package instead of calling GoroutineProfile directly.

func Gosched

func Gosched()

Gosched yields the processor, allowing other goroutines to run. It does not suspend the current goroutine, so execution resumes automatically.

func KeepAlive 1.7

func KeepAlive(x interface{})

KeepAlive marks its argument as currently reachable. This ensures that the object is not freed, and its finalizer is not run, before the point in the program where KeepAlive is called.

A very simplified example showing where KeepAlive is required:

	type File struct { d int }
	d, err := syscall.Open("/file/path", syscall.O_RDONLY, 0)
	// ... do something if err != nil ...
	p := &File{d}
	runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
	var buf [10]byte
	n, err := syscall.Read(p.d, buf[:])
	// Ensure p is not finalized until Read returns.
	runtime.KeepAlive(p)
	// No more uses of p after this point.

Without the KeepAlive call, the finalizer could run at the start of syscall.Read, closing the file descriptor before syscall.Read makes the actual system call.

func LockOSThread

func LockOSThread()

LockOSThread wires the calling goroutine to its current operating system thread. The calling goroutine will always execute in that thread, and no other goroutine will execute in it, until the calling goroutine has made as many calls to UnlockOSThread as to LockOSThread. If the calling goroutine exits without unlocking the thread, the thread will be terminated.

All init functions are run on the startup thread. Calling LockOSThread from an init function will cause the main function to be invoked on that thread.

A goroutine should call LockOSThread before calling OS services or non-Go library functions that depend on per-thread state.
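
A common shape for this, sketched here as a suggestion rather than quoted from the documentation (the function name is hypothetical), is to pair the lock with a deferred unlock around the thread-sensitive work:

	func callThreadSensitiveAPI() {
		runtime.LockOSThread()
		defer runtime.UnlockOSThread()

		// Calls that rely on per-thread state (thread-local storage,
		// thread-affine OS services, certain C libraries) go here.
	}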

func MemProfile

func MemProfile(p []MemProfileRecord, inuseZero bool) (n int, ok bool)

MemProfile returns a profile of memory allocated and freed per allocation site.

MemProfile returns n, the number of records in the current memory profile. If len(p) >= n, MemProfile copies the profile into p and returns n, true. If len(p) < n, MemProfile does not change p and returns n, false.

If inuseZero is true, the profile includes allocation records where r.AllocBytes > 0 but r.AllocBytes == r.FreeBytes. These are sites where memory was allocated, but it has all been released back to the runtime.

The returned profile may be up to two garbage collection cycles old. This is to avoid skewing the profile toward allocations; because allocations happen in real time but frees are delayed until the garbage collector performs sweeping, the profile only accounts for allocations that have had a chance to be freed by the garbage collector.

Most clients should use the runtime/pprof package or the testing package's -test.memprofile flag instead of calling MemProfile directly.

func MutexProfile 1.8

func MutexProfile(p []BlockProfileRecord) (n int, ok bool)

MutexProfile returns n, the number of records in the current mutex profile. If len(p) >= n, MutexProfile copies the profile into p and returns n, true. Otherwise, MutexProfile does not change p, and returns n, false.

Most clients should use the runtime/pprof package instead of calling MutexProfile directly.

func NumCPU

func NumCPU() int

NumCPU returns the number of logical CPUs usable by the current process.

The set of available CPUs is checked by querying the operating system at process startup. Changes to operating system CPU allocation after process startup are not reflected.

func NumCgoCall

func NumCgoCall() int64

NumCgoCall returns the number of cgo calls made by the current process.

func NumGoroutine

func NumGoroutine() int

NumGoroutine returns the number of goroutines that currently exist.

func ReadMemStats

func ReadMemStats(m *MemStats)

ReadMemStats populates m with memory allocator statistics.

The returned memory allocator statistics are up to date as of the call to ReadMemStats. This is in contrast with a heap profile, which is a snapshot as of the most recently completed garbage collection cycle.
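
A minimal usage sketch:

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap alloc: %d bytes, completed GC cycles: %d\n", m.HeapAlloc, m.NumGC)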

func ReadTrace 1.5

func ReadTrace() []byte

ReadTrace returns the next chunk of binary tracing data, blocking until data is available. If tracing is turned off and all the data accumulated while it was on has been returned, ReadTrace returns nil. The caller must copy the returned data before calling ReadTrace again. ReadTrace must be called from one goroutine at a time.

func SetBlockProfileRate 1.1

func SetBlockProfileRate(rate int)

SetBlockProfileRate controls the fraction of goroutine blocking events that are reported in the blocking profile. The profiler aims to sample an average of one blocking event per rate nanoseconds spent blocked.

To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0.

func SetCPUProfileRate

func SetCPUProfileRate(hz int)

SetCPUProfileRate sets the CPU profiling rate to hz samples per second. If hz <= 0, SetCPUProfileRate turns off profiling. If the profiler is on, the rate cannot be changed without first turning it off.

Most clients should use the runtime/pprof package or the testing package's -test.cpuprofile flag instead of calling SetCPUProfileRate directly.

func SetCgoTraceback 1.7

func SetCgoTraceback(version int, traceback, context, symbolizer unsafe.Pointer)

SetCgoTraceback records three C functions to use to gather traceback information from C code and to convert that traceback information into symbolic information. These are used when printing stack traces for a program that uses cgo.

The traceback and context functions may be called from a signal handler, and must therefore use only async-signal safe functions. The symbolizer function may be called while the program is crashing, and so must be cautious about using memory. None of the functions may call back into Go.

The context function will be called with a single argument, a pointer to a struct:

struct {
	Context uintptr
}

In C syntax, this struct will be

struct {
	uintptr_t Context;
};

If the Context field is 0, the context function is being called to record the current traceback context. It should record in the Context field whatever information is needed about the current point of execution to later produce a stack trace, probably the stack pointer and PC. In this case the context function will be called from C code.

If the Context field is not 0, then it is a value returned by a previous call to the context function. This case is called when the context is no longer needed; that is, when the Go code is returning to its C code caller. This permits the context function to release any associated resources.

While it would be correct for the context function to record a complete stack trace whenever it is called, and simply copy that out in the traceback function, in a typical program the context function will be called many times without ever recording a traceback for that context. Recording a complete stack trace in a call to the context function is likely to be inefficient.

The traceback function will be called with a single argument, a pointer to a struct:

struct {
	Context    uintptr
	SigContext uintptr
	Buf        *uintptr
	Max        uintptr
}

In C syntax, this struct will be

struct {
	uintptr_t  Context;
	uintptr_t  SigContext;
	uintptr_t* Buf;
	uintptr_t  Max;
};

The Context field will be zero to gather a traceback from the current program execution point. In this case, the traceback function will be called from C code.

Otherwise Context will be a value previously returned by a call to the context function. The traceback function should gather a stack trace from that saved point in the program execution. The traceback function may be called from an execution thread other than the one that recorded the context, but only when the context is known to be valid and unchanging. The traceback function may also be called deeper in the call stack on the same thread that recorded the context. The traceback function may be called multiple times with the same Context value; it will usually be appropriate to cache the result, if possible, the first time this is called for a specific context value.

If the traceback function is called from a signal handler on a Unix system, SigContext will be the signal context argument passed to the signal handler (a C ucontext_t* cast to uintptr_t). This may be used to start tracing at the point where the signal occurred. If the traceback function is not called from a signal handler, SigContext will be zero.

Buf is where the traceback information should be stored. It should be PC values, such that Buf[0] is the PC of the caller, Buf[1] is the PC of that function's caller, and so on. Max is the maximum number of entries to store. The function should store a zero to indicate the top of the stack, or that the caller is on a different stack, presumably a Go stack.

Unlike runtime.Callers, the PC values returned should, when passed to the symbolizer function, return the file/line of the call instruction. No additional subtraction is required or appropriate.

On all platforms, the traceback function is invoked when a call from Go to C to Go requests a stack trace. On linux/amd64, linux/ppc64le, and freebsd/amd64, the traceback function is also invoked when a signal is received by a thread that is executing a cgo call. The traceback function should not make assumptions about when it is called, as future versions of Go may make additional calls.

The symbolizer function will be called with a single argument, a pointer to a struct:

struct {
	PC      uintptr // program counter to fetch information for
	File    *byte   // file name (NUL terminated)
	Lineno  uintptr // line number
	Func    *byte   // function name (NUL terminated)
	Entry   uintptr // function entry point
	More    uintptr // set non-zero if more info for this PC
	Data    uintptr // unused by runtime, available for function
}

In C syntax, this struct will be

struct {
	uintptr_t PC;
	char*     File;
	uintptr_t Lineno;
	char*     Func;
	uintptr_t Entry;
	uintptr_t More;
	uintptr_t Data;
};

The PC field will be a value returned by a call to the traceback function.

The first time the function is called for a particular traceback, all the fields except PC will be 0. The function should fill in the other fields if possible, setting them to 0/nil if the information is not available. The Data field may be used to store any useful information across calls. The More field should be set to non-zero if there is more information for this PC, zero otherwise. If More is set non-zero, the function will be called again with the same PC, and may return different information (this is intended for use with inlined functions). If More is zero, the function will be called with the next PC value in the traceback. When the traceback is complete, the function will be called once more with PC set to zero; this may be used to free any information. Each call will leave the fields of the struct set to the same values they had upon return, except for the PC field when the More field is zero. The function must not keep a copy of the struct pointer between calls.

When calling SetCgoTraceback, the version argument is the version number of the structs that the functions expect to receive. Currently this must be zero.

The symbolizer function may be nil, in which case the results of the traceback function will be displayed as numbers. If the traceback function is nil, the symbolizer function will never be called. The context function may be nil, in which case the traceback function will only be called with the context field set to zero. If the context function is nil, then calls from Go to C to Go will not show a traceback for the C portion of the call stack.

SetCgoTraceback should be called only once, ideally from an init function.

func SetFinalizer

func SetFinalizer(obj interface{}, finalizer interface{})

SetFinalizer sets the finalizer associated with obj to the provided finalizer function. When the garbage collector finds an unreachable block with an associated finalizer, it clears the association and runs finalizer(obj) in a separate goroutine. This makes obj reachable again, but now without an associated finalizer. Assuming that SetFinalizer is not called again, the next time the garbage collector sees that obj is unreachable, it will free obj.

SetFinalizer(obj, nil) clears any finalizer associated with obj.

The argument obj must be a pointer to an object allocated by calling new, by taking the address of a composite literal, or by taking the address of a local variable. The argument finalizer must be a function that takes a single argument to which obj's type can be assigned, and can have arbitrary ignored return values. If either of these is not true, SetFinalizer may abort the program.

Finalizers are run in dependency order: if A points at B, both have finalizers, and they are otherwise unreachable, only the finalizer for A runs; once A is freed, the finalizer for B can run. If a cyclic structure includes a block with a finalizer, that cycle is not guaranteed to be garbage collected and the finalizer is not guaranteed to run, because there is no ordering that respects the dependencies.

The finalizer is scheduled to run at some arbitrary time after the program can no longer reach the object to which obj points. There is no guarantee that finalizers will run before a program exits, so typically they are useful only for releasing non-memory resources associated with an object during a long-running program. For example, an os.File object could use a finalizer to close the associated operating system file descriptor when a program discards an os.File without calling Close, but it would be a mistake to depend on a finalizer to flush an in-memory I/O buffer such as a bufio.Writer, because the buffer would not be flushed at program exit.

It is not guaranteed that a finalizer will run if the size of *obj is zero bytes.

It is not guaranteed that a finalizer will run for objects allocated in initializers for package-level variables. Such objects may be linker-allocated, not heap-allocated.

A finalizer may run as soon as an object becomes unreachable. In order to use finalizers correctly, the program must ensure that the object is reachable until it is no longer required. Objects stored in global variables, or that can be found by tracing pointers from a global variable, are reachable. For other objects, pass the object to a call of the KeepAlive function to mark the last point in the function where the object must be reachable.

For example, if p points to a struct that contains a file descriptor d, and p has a finalizer that closes that file descriptor, and if the last use of p in a function is a call to syscall.Write(p.d, buf, size), then p may be unreachable as soon as the program enters syscall.Write. The finalizer may run at that moment, closing p.d, causing syscall.Write to fail because it is writing to a closed file descriptor (or, worse, to an entirely different file descriptor opened by a different goroutine). To avoid this problem, call runtime.KeepAlive(p) after the call to syscall.Write.

A single goroutine runs all finalizers for a program, sequentially. If a finalizer must run for a long time, it should do so by starting a new goroutine.
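
A hedged sketch of the pattern just described, reusing the hypothetical File type and descriptor d from the KeepAlive example above and pairing SetFinalizer with KeepAlive so the finalizer cannot close the descriptor while syscall.Write is still using it:

	p := &File{d}
	runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
	var buf [64]byte
	if _, err := syscall.Write(p.d, buf[:]); err != nil {
		// ... handle the write error ...
	}
	runtime.KeepAlive(p) // keep p (and p.d) alive until Write has returned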

func SetMutexProfileFraction 1.8

func SetMutexProfileFraction(rate int) int

SetMutexProfileFraction controls the fraction of mutex contention events that are reported in the mutex profile. On average 1/rate events are reported. The previous rate is returned.

To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For rate > 1 the details of sampling may change.)

func Stack

func Stack(buf []byte, all bool) int

Stack formats a stack trace of the calling goroutine into buf and returns the number of bytes written to buf. If all is true, Stack formats stack traces of all other goroutines into buf after the trace for the current goroutine.
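
A small usage sketch:

	buf := make([]byte, 4096)
	n := runtime.Stack(buf, false) // false: only the calling goroutine
	fmt.Printf("%s", buf[:n])      // the trace is truncated if buf is too small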

func StartTrace 1.5

func StartTrace() error

StartTrace enables tracing for the current process. While tracing, the data will be buffered and available via ReadTrace. StartTrace returns an error if tracing is already enabled. Most clients should use the runtime/trace package or the testing package's -test.trace flag instead of calling StartTrace directly.

func StopTrace 1.5

func StopTrace()

StopTrace stops tracing, if it was previously enabled. StopTrace only returns after all the reads for the trace have completed.
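
A hedged sketch tying StartTrace, ReadTrace, and StopTrace together (most programs should use the runtime/trace package instead, as noted above):

	if err := runtime.StartTrace(); err != nil {
		log.Fatal(err)
	}
	traceData := make(chan []byte)
	go func() {
		var all []byte
		for {
			chunk := runtime.ReadTrace()
			if chunk == nil {
				break // tracing stopped and all buffered data has been returned
			}
			all = append(all, chunk...) // copy before calling ReadTrace again
		}
		traceData <- all
	}()
	// ... run the code to be traced ...
	runtime.StopTrace()
	data := <-traceData
	_ = data // e.g. write to a file and inspect with `go tool trace`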

func ThreadCreateProfile

func ThreadCreateProfile(p []StackRecord) (n int, ok bool)

ThreadCreateProfile returns n, the number of records in the thread creation profile. If len(p) >= n, ThreadCreateProfile copies the profile into p and returns n, true. If len(p) < n, ThreadCreateProfile does not change p and returns n, false.

Most clients should use the runtime/pprof package instead of calling ThreadCreateProfile directly.

func UnlockOSThread

func UnlockOSThread()

UnlockOSThread undoes an earlier call to LockOSThread. If this drops the number of active LockOSThread calls on the calling goroutine to zero, it unwires the calling goroutine from its fixed operating system thread. If there are no active LockOSThread calls, this is a no-op.

Before calling UnlockOSThread, the caller must ensure that the OS thread is suitable for running other goroutines. If the caller made any permanent changes to the state of the thread that would affect other goroutines, it should not call this function and thus leave the goroutine locked to the OS thread until the goroutine (and hence the thread) exits.

func Version

func Version() string

Version returns the Go tree's version string. It is either the commit hash and date at the time of the build or, when possible, a release tag like "go1.3".

func _ELF_ST_BIND

func _ELF_ST_BIND(val byte) byte

Helpers for extracting the binding and type information held in the st_info field.

func _ELF_ST_TYPE

func _ELF_ST_TYPE(val byte) byte

func _ExternalCode

func _ExternalCode()

func _GC

func _GC()

func _LostExternalCode

func _LostExternalCode()

func _LostSIGPROFDuringAtomic64

func _LostSIGPROFDuringAtomic64()

func _System

func _System()

func _VDSO

func _VDSO()

func _cgo_panic_internal

func _cgo_panic_internal(p *byte)

func abort

func abort()

abort crashes the runtime in situations where even throw might not work. In general it should do something a debugger will recognize (e.g., an INT3 on x86). A crash in abort is recognized by the signal handler, which will attempt to tear down the runtime immediately.

func abs

func abs(x float64) float64

Abs returns the absolute value of x.

Special cases are:

Abs(±Inf) = +Inf
Abs(NaN) = NaN

func acquirep

func acquirep(_p_ *p)

Associate p and the current m.

This function is allowed to have write barriers even if the caller isn't because it immediately acquires _p_.

go:yeswritebarrierrec

func add

func add(p unsafe.Pointer, x uintptr) unsafe.Pointer

Should be a built-in for unsafe.Pointer? go:nosplit

func add1

func add1(p *byte) *byte

add1 returns the byte pointer p+1. go:nowritebarrier go:nosplit

func addb

func addb(p *byte, n uintptr) *byte

addb returns the byte pointer p+n. go:nowritebarrier go:nosplit

func addfinalizer

func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool

Adds a finalizer to the object p. Returns true if it succeeded.

func addspecial

func addspecial(p unsafe.Pointer, s *special) bool

Adds the special record s to the list of special records for the object p. All fields of s should be filled in except for offset & next, which this routine will fill in. Returns true if the special was successfully added, false otherwise. (The add will fail only if a record with the same p and s->kind already exists.)

func addtimer

func addtimer(t *timer)

func adjustctxt

func adjustctxt(gp *g, adjinfo *adjustinfo)

func adjustdefers

func adjustdefers(gp *g, adjinfo *adjustinfo)

func adjustframe

func adjustframe(frame *stkframe, arg unsafe.Pointer) bool

Note: the argument/return area is adjusted by the callee.

func adjustpanics

func adjustpanics(gp *g, adjinfo *adjustinfo)

func adjustpointer

func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer)

Adjustpointer checks whether *vpp is in the old stack described by adjinfo. If so, it rewrites *vpp to point into the new stack.

func adjustpointers

func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)

bv describes the memory starting at address scanp. Adjust any pointers contained therein.

func adjustsudogs

func adjustsudogs(gp *g, adjinfo *adjustinfo)

func advanceEvacuationMark

func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)

func aeshash

func aeshash(p unsafe.Pointer, h, s uintptr) uintptr

in asm_*.s

func aeshash32

func aeshash32(p unsafe.Pointer, h uintptr) uintptr

func aeshash64

func aeshash64(p unsafe.Pointer, h uintptr) uintptr

func aeshashstr

func aeshashstr(p unsafe.Pointer, h uintptr) uintptr

func afterfork

func afterfork()

func alginit

func alginit()

func allgadd

func allgadd(gp *g)

func archauxv

func archauxv(tag, val uintptr)

func arenaBase

func arenaBase(i arenaIdx) uintptr

arenaBase returns the low address of the region covered by heap arena i.

func args

func args(c int32, v **byte)

func argv_index

func argv_index(argv **byte, i int32) *byte

nosplit for use in linux startup sysargs go:nosplit

func asmcgocall

func asmcgocall(fn, arg unsafe.Pointer) int32

go:noescape

func asminit

func asminit()

func assertE2I2

func assertE2I2(inter *interfacetype, e eface) (r iface, b bool)

func assertI2I2

func assertI2I2(inter *interfacetype, i iface) (r iface, b bool)

func atoi

func atoi(s string) (int, bool)

atoi parses an int from a string s. The bool result reports whether s is a number representable by a value of type int.

func atoi32

func atoi32(s string) (int32, bool)

atoi32 is like atoi but for integers that fit into an int32.

func atomicstorep

func atomicstorep(ptr unsafe.Pointer, new unsafe.Pointer)

atomicstorep performs *ptr = new atomically and invokes a write barrier.

go:nosplit

func atomicwb

func atomicwb(ptr *unsafe.Pointer, new unsafe.Pointer)

atomicwb performs a write barrier before an atomic pointer write. The caller should guard the call with "if writeBarrier.enabled".

go:nosplit

func badTimer

func badTimer()

badTimer is called if the timer data structures have been corrupted, presumably due to racy use by the program. We panic here rather than panicking due to invalid slice access while holding locks. See issue #25686.

func badcgocallback

func badcgocallback()

called from assembly

func badctxt

func badctxt()

go:nosplit

func badmcall

func badmcall(fn func(*g))

called from assembly

func badmcall2

func badmcall2(fn func(*g))

func badmorestackg0

func badmorestackg0()

go:nosplit go:nowritebarrierrec

func badmorestackgsignal

func badmorestackgsignal()

go:nosplit go:nowritebarrierrec

func badreflectcall

func badreflectcall()

func badsignal

func badsignal(sig uintptr, c *sigctxt)

This runs on a foreign stack, without an m or a g. No stack split. go:nosplit go:norace go:nowritebarrierrec

func badsystemstack

func badsystemstack()

go:nosplit go:nowritebarrierrec

func badunlockosthread

func badunlockosthread()

func beforeIdle

func beforeIdle() bool

func beforefork

func beforefork()

func bgsweep

func bgsweep(c chan int)

func binarySearchTree

func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)

Build a binary search tree with the n objects in the list x.obj[idx], x.obj[idx+1], ..., x.next.obj[0], ... Returns the root of that tree, and the buf+idx of the nth object after x.obj[idx]. (The first object that was not included in the binary search tree.) If n == 0, returns nil, x.

func block

func block()

func blockableSig

func blockableSig(sig uint32) bool

blockableSig reports whether sig may be blocked by the signal mask. We never want to block the signals marked _SigUnblock; these are the synchronous signals that turn into a Go panic. In a Go program--not a c-archive/c-shared--we never want to block the signals marked _SigKill or _SigThrow, as otherwise it's possible for all running threads to block them and delay their delivery until we start a new thread. When linked into a C program we let the C code decide on the disposition of those signals.

func blockevent

func blockevent(cycles int64, skip int)

func blocksampled

func blocksampled(cycles int64) bool

func bool2int

func bool2int(x bool) int

bool2int returns 0 if x is false or 1 if x is true.

func breakpoint

func breakpoint()

func bucketEvacuated

func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool

func bucketMask

func bucketMask(b uint8) uintptr

bucketMask returns 1<<b - 1, optimized for code generation.

func bucketShift

func bucketShift(b uint8) uintptr

bucketShift returns 1<<b, optimized for code generation.

func bulkBarrierBitmap

func bulkBarrierBitmap(dst, src, size, maskOffset uintptr, bits *uint8)

bulkBarrierBitmap executes write barriers for copying from [src, src+size) to [dst, dst+size) using a 1-bit pointer bitmap. src is assumed to start maskOffset bytes into the data covered by the bitmap in bits (which may not be a multiple of 8).

This is used by bulkBarrierPreWrite for writes to data and BSS.

go:nosplit

func bulkBarrierPreWrite

func bulkBarrierPreWrite(dst, src, size uintptr)

bulkBarrierPreWrite executes a write barrier for every pointer slot in the memory range [src, src+size), using pointer/scalar information from [dst, dst+size). This executes the write barriers necessary before a memmove. src, dst, and size must be pointer-aligned. The range [dst, dst+size) must lie within a single object. It does not perform the actual writes.

As a special case, src == 0 indicates that this is being used for a memclr. bulkBarrierPreWrite will pass 0 for the src of each write barrier.

Callers should call bulkBarrierPreWrite immediately before calling memmove(dst, src, size). This function is marked nosplit to avoid being preempted; the GC must not stop the goroutine between the memmove and the execution of the barriers. The caller is also responsible for cgo pointer checks if this may be writing Go pointers into non-Go memory.

The pointer bitmap is not maintained for allocations containing no pointers at all; any caller of bulkBarrierPreWrite must first make sure the underlying allocation contains pointers, usually by checking typ.kind&kindNoPointers.

Callers must perform cgo checks if writeBarrier.cgo.

go:nosplit

func bulkBarrierPreWriteSrcOnly

func bulkBarrierPreWriteSrcOnly(dst, src, size uintptr)

bulkBarrierPreWriteSrcOnly is like bulkBarrierPreWrite but does not execute write barriers for [dst, dst+size).

In addition to the requirements of bulkBarrierPreWrite callers need to ensure [dst, dst+size) is zeroed.

This is used for special cases where e.g. dst was just created and zeroed with malloc. go:nosplit

func bytes

func bytes(s string) (ret []byte)

func bytesHash

func bytesHash(b []byte, seed uintptr) uintptr

func c128equal

func c128equal(p, q unsafe.Pointer) bool

func c128hash

func c128hash(p unsafe.Pointer, h uintptr) uintptr

func c64equal

func c64equal(p, q unsafe.Pointer) bool

func c64hash

func c64hash(p unsafe.Pointer, h uintptr) uintptr

func cachestats

func cachestats()

cachestats flushes all mcache stats.

The world must be stopped.

go:nowritebarrier

func call1024

func call1024(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call1048576

func call1048576(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call1073741824

func call1073741824(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call128

func call128(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call131072

func call131072(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call134217728

func call134217728(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call16384

func call16384(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call16777216

func call16777216(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call2048

func call2048(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call2097152

func call2097152(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call256

func call256(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call262144

func call262144(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call268435456

func call268435456(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call32

func call32(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

In asm_*.s; not called directly. The definitions here supply type information for traceback.

func call32768

func call32768(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call33554432

func call33554432(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call4096

func call4096(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call4194304

func call4194304(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call512

func call512(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call524288

func call524288(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call536870912

func call536870912(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call64

func call64(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call65536

func call65536(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call67108864

func call67108864(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call8192

func call8192(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func call8388608

func call8388608(typ, fn, arg unsafe.Pointer, n, retoffset uint32)

func callCgoMmap

func callCgoMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) uintptr

callCgoMmap calls the mmap function in the runtime/cgo package using the GCC calling convention. It is implemented in assembly.

func callCgoMunmap

func callCgoMunmap(addr unsafe.Pointer, n uintptr)

callCgoMunmap calls the munmap function in the runtime/cgo package using the GCC calling convention. It is implemented in assembly.

func callCgoSigaction

func callCgoSigaction(sig uintptr, new, old *sigactiont) int32

callCgoSigaction calls the sigaction function in the runtime/cgo package using the GCC calling convention. It is implemented in assembly. go:noescape

func callCgoSymbolizer

func callCgoSymbolizer(arg *cgoSymbolizerArg)

callCgoSymbolizer calls the cgoSymbolizer function.

func callers

func callers(skip int, pcbuf []uintptr) int

func canpanic

func canpanic(gp *g) bool

canpanic returns false if a signal should throw instead of panicking.

go:nosplit

func cansemacquire

func cansemacquire(addr *uint32) bool

func casfrom_Gscanstatus

func casfrom_Gscanstatus(gp *g, oldval, newval uint32)

The Gscanstatuses are acting like locks and this releases them. If it proves to be a performance hit we should be able to make these simple atomic stores but for now we are going to throw if we see an inconsistent state.

func casgcopystack

func casgcopystack(gp *g) uint32

casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable. Returns old status. Cannot call casgstatus directly, because we are racing with an async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus, it might have become Grunnable by the time we get to the cas. If we called casgstatus, it would loop waiting for the status to go back to Gwaiting, which it never will. go:nosplit

func casgstatus

func casgstatus(gp *g, oldval, newval uint32)

If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus and casfrom_Gscanstatus instead. casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that put it in the Gscan state is finished. go:nosplit

func castogscanstatus

func castogscanstatus(gp *g, oldval, newval uint32) bool

This will return false if the gp is not in the expected status and the cas fails. This acts like a lock acquire while the casfromgstatus acts like a lock release.

func cfuncname

func cfuncname(f funcInfo) *byte

func cgoCheckArg

func cgoCheckArg(t *_type, p unsafe.Pointer, indir, top bool, msg string)

cgoCheckArg is the real work of cgoCheckPointer. The argument p is either a pointer to the value (of type t), or the value itself, depending on indir. The top parameter is whether we are at the top level, where Go pointers are allowed.

func cgoCheckBits

func cgoCheckBits(src unsafe.Pointer, gcbits *byte, off, size uintptr)

cgoCheckBits checks the block of memory at src, for up to size bytes, and throws if it finds a Go pointer. The gcbits mark each pointer value. The src pointer is off bytes into the gcbits. go:nosplit go:nowritebarrier

func cgoCheckMemmove

func cgoCheckMemmove(typ *_type, dst, src unsafe.Pointer, off, size uintptr)

cgoCheckMemmove is called when moving a block of memory. dst and src point off bytes into the value to copy. size is the number of bytes to copy. It throws if the program is copying a block that contains a Go pointer into non-Go memory. go:nosplit go:nowritebarrier

func cgoCheckPointer

func cgoCheckPointer(ptr interface{}, args ...interface{})

cgoCheckPointer checks if the argument contains a Go pointer that points to a Go pointer, and panics if it does.

func cgoCheckResult

func cgoCheckResult(val interface{})

cgoCheckResult is called to check the result parameter of an exported Go function. It panics if the result is or contains a Go pointer.

func cgoCheckSliceCopy

func cgoCheckSliceCopy(typ *_type, dst, src slice, n int)

cgoCheckSliceCopy is called when copying n elements of a slice from src to dst. typ is the element type of the slice. It throws if the program is copying slice elements that contain Go pointers into non-Go memory. go:nosplit go:nowritebarrier

func cgoCheckTypedBlock

func cgoCheckTypedBlock(typ *_type, src unsafe.Pointer, off, size uintptr)

cgoCheckTypedBlock checks the block of memory at src, for up to size bytes, and throws if it finds a Go pointer. The type of the memory is typ, and src is off bytes into that type. go:nosplit go:nowritebarrier

func cgoCheckUnknownPointer

func cgoCheckUnknownPointer(p unsafe.Pointer, msg string) (base, i uintptr)

cgoCheckUnknownPointer is called for an arbitrary pointer into Go memory. It checks whether that Go memory contains any other pointer into Go memory. If it does, we panic. The return values are unused but useful to see in panic tracebacks.

func cgoCheckUsingType

func cgoCheckUsingType(typ *_type, src unsafe.Pointer, off, size uintptr)

cgoCheckUsingType is like cgoCheckTypedBlock, but is a last ditch fall back to look for pointers in src using the type information. We only use this when looking at a value on the stack when the type uses a GC program, because otherwise it's more efficient to use the GC bits. This is called on the system stack. go:nowritebarrier go:systemstack

func cgoCheckWriteBarrier

func cgoCheckWriteBarrier(dst *uintptr, src uintptr)

cgoCheckWriteBarrier is called whenever a pointer is stored into memory. It throws if the program is storing a Go pointer into non-Go memory.

This is called from the write barrier, so its entire call tree must be nosplit.

go:nosplit go:nowritebarrier

func cgoContextPCs

func cgoContextPCs(ctxt uintptr, buf []uintptr)

cgoContextPCs gets the PC values from a cgo traceback.

func cgoInRange

func cgoInRange(p unsafe.Pointer, start, end uintptr) bool

cgoInRange reports whether p is between start and end. go:nosplit go:nowritebarrierrec

func cgoIsGoPointer

func cgoIsGoPointer(p unsafe.Pointer) bool

cgoIsGoPointer reports whether the pointer is a Go pointer--a pointer to Go memory. We only care about Go memory that might contain pointers. go:nosplit go:nowritebarrierrec

func cgoSigtramp

func cgoSigtramp()

func cgoUse

func cgoUse(interface{})

cgoUse is called by cgo-generated code (using go:linkname to get at an unexported name). The calls serve two purposes: 1) they are opaque to escape analysis, so the argument is considered to escape to the heap. 2) they keep the argument alive until the call site; the call is emitted after the end of the (presumed) use of the argument by C. cgoUse should not actually be called (see cgoAlwaysFalse).

func cgocall

func cgocall(fn, arg unsafe.Pointer) int32

Call from Go to C. go:nosplit

func cgocallback

func cgocallback(fn, frame unsafe.Pointer, framesize, ctxt uintptr)

func cgocallback_gofunc

func cgocallback_gofunc(fv, frame, framesize, ctxt uintptr)

Not all cgocallback_gofunc frames are actually cgocallback_gofunc, so not all have these arguments. Mark them uintptr so that the GC does not misinterpret memory when the arguments are not present. cgocallback_gofunc is not called from go, only from cgocallback, so the arguments will be found via cgocallback's pointer-declared arguments. See the assembly implementations for more details.

func cgocallbackg

func cgocallbackg(ctxt uintptr)

Call from C back to Go. go:nosplit

func cgocallbackg1

func cgocallbackg1(ctxt uintptr)

func cgounimpl

func cgounimpl()

called from (incomplete) assembly

func chanbuf

func chanbuf(c *hchan, i uint) unsafe.Pointer

chanbuf(c, i) is pointer to the i'th slot in the buffer.

func chanrecv

func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)

chanrecv receives on channel c and writes the received data to ep. ep may be nil, in which case received data is ignored. If block == false and no elements are available, returns (false, false). Otherwise, if c is closed, zeros *ep and returns (true, false). Otherwise, fills in *ep with an element and returns (true, true). A non-nil ep must point to the heap or the caller's stack.

func chanrecv1

func chanrecv1(c *hchan, elem unsafe.Pointer)

entry points for <- c from compiled code go:nosplit

func chanrecv2

func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool)

go:nosplit

func chansend

func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool

Generic single channel send/recv. If block is not nil, then the protocol will not sleep but return if it could not complete. Sleep can wake up with g.param == nil when a channel involved in the sleep has been closed. It is easiest to loop and re-run the operation; we'll see that it's now closed.

func chansend1

func chansend1(c *hchan, elem unsafe.Pointer)

entry point for c <- x from compiled code go:nosplit

func check

func check()

func checkASM

func checkASM() bool

checkASM reports whether assembly runtime checks have passed.

func checkTimeouts

func checkTimeouts()

func checkTreapNode

func checkTreapNode(t *treapNode)

checkTreapNode when used in conjunction with walkTreap can usually detect a poorly formed treap.

func checkdead

func checkdead()

Check for a deadlock situation. The check is based on the number of running M's; if it is 0, the program is deadlocked. sched.lock must be held.

func checkmcount

func checkmcount()

func clearCheckmarks

func clearCheckmarks()

func clearSignalHandlers

func clearSignalHandlers()

clearSignalHandlers clears all signal handlers that are not ignored back to the default. This is called by the child after a fork, so that we can enable the signal mask for the exec without worrying about running a signal handler in the child. go:nosplit go:nowritebarrierrec

func clearpools

func clearpools()

func clobberfree

func clobberfree(x unsafe.Pointer, size uintptr)

clobberfree sets the memory content at x to bad content, for debugging purposes.

func clone

func clone(flags int32, stk, mp, gp, fn unsafe.Pointer) int32

go:noescape

func closechan

func closechan(c *hchan)

func closefd

func closefd(fd int32) int32

func closeonexec

func closeonexec(fd int32)

func complex128div

func complex128div(n complex128, m complex128) complex128

func concatstring2

func concatstring2(buf *tmpBuf, a [2]string) string

func concatstring3

func concatstring3(buf *tmpBuf, a [3]string) string

func concatstring4

func concatstring4(buf *tmpBuf, a [4]string) string

func concatstring5

func concatstring5(buf *tmpBuf, a [5]string) string

func concatstrings

func concatstrings(buf *tmpBuf, a []string) string

concatstrings implements a Go string concatenation x+y+z+... The operands are passed in the slice a. If buf != nil, the compiler has determined that the result does not escape the calling function, so the string data can be stored in buf if small enough.

func contains

func contains(s, t string) bool

func convT16

func convT16(val uint16) (x unsafe.Pointer)

func convT32

func convT32(val uint32) (x unsafe.Pointer)

func convT64

func convT64(val uint64) (x unsafe.Pointer)

func convTslice

func convTslice(val []byte) (x unsafe.Pointer)

func convTstring

func convTstring(val string) (x unsafe.Pointer)

func copysign

func copysign(x, y float64) float64

copysign returns a value with the magnitude of x and the sign of y.

func copystack

func copystack(gp *g, newsize uintptr, sync bool)

Copies gp's stack to a new stack of a different size. Caller must have changed gp status to Gcopystack.

If sync is true, this is a self-triggered stack growth and, in particular, no other G may be writing to gp's stack (e.g., via a channel operation). If sync is false, copystack protects against concurrent channel operations.

func countSub

func countSub(x, y uint32) int

countSub subtracts two counts obtained from profIndex.dataCount or profIndex.tagCount, assuming that they are no more than 2^29 apart (guaranteed since they are never more than len(data) or len(tags) apart, respectively). tagCount wraps at 2^30, while dataCount wraps at 2^32. This function works for both.

func countrunes

func countrunes(s string) int

countrunes returns the number of runes in s.

func cpuinit

func cpuinit()

cpuinit extracts the environment variable GODEBUG from the environment on Unix-like operating systems and calls internal/cpu.Initialize.

func cputicks

func cputicks() int64

careful: cputicks is not guaranteed to be monotonic! In particular, we have noticed drift between cpus on certain os/arch combinations. See issue 8976.

func crash

func crash()

go:nosplit

func createfing

func createfing()

func cstring

func cstring(s string) unsafe.Pointer

func debugCallCheck

func debugCallCheck(pc uintptr) string

debugCallCheck checks whether it is safe to inject a debugger function call with return PC pc. If not, it returns a string explaining why.

go:nosplit

func debugCallPanicked

func debugCallPanicked(val interface{})

func debugCallV1

func debugCallV1()

func debugCallWrap

func debugCallWrap(dispatch uintptr)

debugCallWrap pushes a defer to recover from panics in debug calls and then calls the dispatching function at PC dispatch.

func decoderune

func decoderune(s string, k int) (r rune, pos int)

decoderune returns the non-ASCII rune at the start of s[k:] and the index after the rune in s.

decoderune assumes that caller has checked that the to be decoded rune is a non-ASCII rune.

If the string appears to be incomplete or decoding problems are encountered, (runeerror, k + 1) is returned to ensure progress when decoderune is used to iterate over a string.

func deductSweepCredit

func deductSweepCredit(spanBytes uintptr, callerSweepPages uintptr)

deductSweepCredit deducts sweep credit for allocating a span of size spanBytes. This must be performed *before* the span is allocated to ensure the system has enough credit. If necessary, it performs sweeping to prevent going in to debt. If the caller will also sweep pages (e.g., for a large allocation), it can pass a non-zero callerSweepPages to leave that many pages unswept.

deductSweepCredit makes a worst-case assumption that all spanBytes bytes of the ultimately allocated span will be available for object allocation.

deductSweepCredit is the core of the "proportional sweep" system. It uses statistics gathered by the garbage collector to perform enough sweeping so that all pages are swept during the concurrent sweep phase between GC cycles.

mheap_ must NOT be locked.

func deferArgs

func deferArgs(d *_defer) unsafe.Pointer

The arguments associated with a deferred call are stored immediately after the _defer header in memory. go:nosplit

func deferclass

func deferclass(siz uintptr) uintptr

defer size class for arg size sz go:nosplit

func deferproc

func deferproc(siz int32, fn *funcval)

Create a new deferred function fn with siz bytes of arguments. The compiler turns a defer statement into a call to this. go:nosplit

func deferreturn

func deferreturn(arg0 uintptr)

The single argument isn't actually used - it just has its address taken so it can be matched against pending defers. go:nosplit

func deltimer

func deltimer(t *timer) bool

Delete timer t from the heap. Do not need to update the timerproc: if it wakes up early, no big deal.

func dematerializeGCProg

func dematerializeGCProg(s *mspan)

func dieFromSignal

func dieFromSignal(sig uint32)

dieFromSignal kills the program with a signal. This provides the expected exit status for the shell. This is only called with fatal signals expected to kill the process. go:nosplit go:nowritebarrierrec

func divlu

func divlu(u1, u0, v uint64) (q, r uint64)

128/64 -> 64 quotient, 64 remainder. Adapted from Hacker's Delight.

func dolockOSThread

func dolockOSThread()

dolockOSThread is called by LockOSThread and lockOSThread below after they modify m.locked. Do not allow preemption during this call, or else the m might be different in this function than in the caller. go:nosplit

func dopanic_m

func dopanic_m(gp *g, pc, sp uintptr) bool

func dounlockOSThread

func dounlockOSThread()

dounlockOSThread is called by UnlockOSThread and unlockOSThread below after they update m->locked. Do not allow preemption during this call, or else the m might be different in this function than in the caller. go:nosplit

func dropg

func dropg()

dropg removes the association between m and the current goroutine m->curg (gp for short). Typically a caller sets gp's status away from Grunning and then immediately calls dropg to finish the job. The caller is also responsible for arranging that gp will be restarted using ready at an appropriate time. After calling dropg and arranging for gp to be readied later, the caller can do other work but eventually should call schedule to restart the scheduling of goroutines on this m.

func dropm

func dropm()

dropm is called when a cgo callback has called needm but is now done with the callback and returning back into the non-Go thread. It puts the current m back onto the extra list.

The main expense here is the call to signalstack to release the m's signal stack, and then the call to needm on the next callback from this thread. It is tempting to try to save the m for next time, which would eliminate both these costs, but there might not be a next time: the current thread (which Go does not control) might exit. If we saved the m for that thread, there would be an m leak each time such a thread exited. Instead, we acquire and release an m on each call. These should typically not be scheduling operations, just a few atomics, so the cost should be small.

TODO(rsc): An alternative would be to allocate a dummy pthread per-thread variable using pthread_key_create. Unlike the pthread keys we already use on OS X, this dummy key would never be read by Go code. It would exist only so that we could register at thread-exit-time destructor. That destructor would put the m back onto the extra list. This is purely a performance optimization. The current version, in which dropm happens on each cgo call, is still correct too. We may have to keep the current version on systems with cgo but without pthreads, like Windows.

func dumpGCProg

func dumpGCProg(p *byte)

func dumpbool

func dumpbool(b bool)

func dumpbv

func dumpbv(cbv *bitvector, offset uintptr)

dump kinds & offsets of interesting fields in bv

func dumpfields

func dumpfields(bv bitvector)

dumpint() the kind & offset of each field in an object.

func dumpfinalizer

func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)

func dumpframe

func dumpframe(s *stkframe, arg unsafe.Pointer) bool

func dumpgoroutine

func dumpgoroutine(gp *g)

func dumpgs

func dumpgs()

func dumpgstatus

func dumpgstatus(gp *g)

func dumpint

func dumpint(v uint64)

dump a uint64 in a varint format parseable by encoding/binary

func dumpitabs

func dumpitabs()

func dumpmemprof

func dumpmemprof()

func dumpmemprof_callback

func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr)

func dumpmemrange

func dumpmemrange(data unsafe.Pointer, len uintptr)

dump varint uint64 length followed by memory contents

func dumpmemstats

func dumpmemstats()

func dumpms

func dumpms()

func dumpobj

func dumpobj(obj unsafe.Pointer, size uintptr, bv bitvector)

dump an object

func dumpobjs

func dumpobjs()

func dumpotherroot

func dumpotherroot(description string, to unsafe.Pointer)

func dumpparams

func dumpparams()

func dumpregs

func dumpregs(c *sigctxt)

func dumproots

func dumproots()

func dumpslice

func dumpslice(b []byte)

func dumpstr

func dumpstr(s string)

func dumptype

func dumptype(t *_type)

dump information for a type

func dwrite

func dwrite(data unsafe.Pointer, len uintptr)

func dwritebyte

func dwritebyte(b byte)

func efaceHash

func efaceHash(i interface{}, seed uintptr) uintptr

func efaceeq

func efaceeq(t *_type, x, y unsafe.Pointer) bool

func elideWrapperCalling

func elideWrapperCalling(id funcID) bool

elideWrapperCalling reports whether a wrapper function that called function id should be elided from stack traces.

func encoderune

func encoderune(p []byte, r rune) int

encoderune writes into p (which must be large enough) the UTF-8 encoding of the rune. It returns the number of bytes written.

func ensureSigM

func ensureSigM()

ensureSigM starts one global, sleeping thread to make sure at least one thread is available to catch signals enabled for os/signal.

func entersyscall

func entersyscall()

Standard syscall entry used by the go syscall library and normal cgo calls. go:nosplit

func entersyscall_gcwait

func entersyscall_gcwait()

func entersyscall_sysmon

func entersyscall_sysmon()

func entersyscallblock

func entersyscallblock()

The same as entersyscall(), but with a hint that the syscall is blocking. go:nosplit

func entersyscallblock_handoff

func entersyscallblock_handoff()

func envKeyEqual

func envKeyEqual(a, b string) bool

envKeyEqual reports whether a == b, with ASCII-only case insensitivity on Windows. The two strings must have the same length.

func environ

func environ() []string

func epollcreate

func epollcreate(size int32) int32

func epollcreate1

func epollcreate1(flags int32) int32

func epollctl

func epollctl(epfd, op, fd int32, ev *epollevent) int32

go:noescape

func epollwait

func epollwait(epfd int32, ev *epollevent, nev, timeout int32) int32

go:noescape

func eqslice

func eqslice(x, y []uintptr) bool

func evacuate

func evacuate(t *maptype, h *hmap, oldbucket uintptr)

func evacuate_fast32

func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)

func evacuate_fast64

func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)

func evacuate_faststr

func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)

func evacuated

func evacuated(b *bmap) bool

func execute

func execute(gp *g, inheritTime bool)

Schedules gp to run on the current M. If inheritTime is true, gp inherits the remaining time in the current time slice. Otherwise, it starts a new time slice. Never returns.

Write barriers are allowed because this is called immediately after acquiring a P in several places.

go:yeswritebarrierrec

func exit

func exit(code int32)

func exitThread

func exitThread(wait *uint32)

exitThread terminates the current thread, writing *wait = 0 when the stack is safe to reclaim.

go:noescape

func exitsyscall

func exitsyscall()

The goroutine g exited its system call. Arrange for it to run on a cpu again. This is called only from the go syscall library, not from the low-level system calls used by the runtime.

Write barriers are not allowed because our P may have been stolen.

go:nosplit go:nowritebarrierrec

func exitsyscall0

func exitsyscall0(gp *g)

exitsyscall slow path on g0. Failed to acquire P, enqueue gp as runnable.

go:nowritebarrierrec

func exitsyscallfast

func exitsyscallfast(oldp *p) bool

go:nosplit

func exitsyscallfast_pidle

func exitsyscallfast_pidle() bool

func exitsyscallfast_reacquired

func exitsyscallfast_reacquired()

exitsyscallfast_reacquired is the exitsyscall path on which this G has successfully reacquired the P it was running on before the syscall.

go:nosplit

func extendRandom

func extendRandom(r []byte, n int)

extendRandom extends the random numbers in r[:n] to the whole slice r. Treats n<0 as n==0.

func f32equal

func f32equal(p, q unsafe.Pointer) bool

func f32hash

func f32hash(p unsafe.Pointer, h uintptr) uintptr

func f32to64

func f32to64(f uint32) uint64

func f32toint32

func f32toint32(x uint32) int32

func f32toint64

func f32toint64(x uint32) int64

func f32touint64

func f32touint64(x float32) uint64

func f64equal

func f64equal(p, q unsafe.Pointer) bool

func f64hash

func f64hash(p unsafe.Pointer, h uintptr) uintptr

func f64to32

func f64to32(f uint64) uint32

func f64toint

func f64toint(f uint64) (val int64, ok bool)

func f64toint32

func f64toint32(x uint64) int32

func f64toint64

func f64toint64(x uint64) int64

func f64touint64

func f64touint64(x float64) uint64

func fadd32

func fadd32(x, y uint32) uint32

func fadd64

func fadd64(f, g uint64) uint64

func fastexprand

func fastexprand(mean int) int32

fastexprand returns a random number from an exponential distribution with the specified mean.

func fastlog2

func fastlog2(x float64) float64

fastlog2 implements a fast approximation to the base 2 log of a float64. This is used to compute a geometric distribution for heap sampling, without introducing dependencies into package math. This uses a very rough approximation using the float64 exponent and the first 25 bits of the mantissa. The top 5 bits of the mantissa are used to load limits from a table of constants and the rest are used to scale linearly between them.

func fastrand

func fastrand() uint32

go:nosplit

func fastrandn

func fastrandn(n uint32) uint32

go:nosplit

func fatalpanic

func fatalpanic(msgs *_panic)

fatalpanic implements an unrecoverable panic. It is like fatalthrow, except that if msgs != nil, fatalpanic also prints panic messages and decrements runningPanicDefers once main is blocked from exiting.

go:nosplit

func fatalthrow

func fatalthrow()

fatalthrow implements an unrecoverable runtime throw. It freezes the system, prints stack traces starting from its caller, and terminates the process.

go:nosplit

func fcmp64

func fcmp64(f, g uint64) (cmp int32, isnan bool)

func fdiv32

func fdiv32(x, y uint32) uint32

func fdiv64

func fdiv64(f, g uint64) uint64

func feq32

func feq32(x, y uint32) bool

func feq64

func feq64(x, y uint64) bool

func fge32

func fge32(x, y uint32) bool

func fge64

func fge64(x, y uint64) bool

func fgt32

func fgt32(x, y uint32) bool

func fgt64

func fgt64(x, y uint64) bool

func fillstack

func fillstack(stk stack, b byte)

func findObject

func findObject(p, refBase, refOff uintptr) (base uintptr, s *mspan, objIndex uintptr)

findObject returns the base address for the heap object containing the address p, the object's span, and the index of the object in s. If p does not point into a heap object, it returns base == 0.

If p is an invalid heap pointer and debug.invalidptr != 0, findObject panics.

refBase and refOff optionally give the base address of the object in which the pointer p was found and the byte offset at which it was found. These are used for error reporting.

func findnull

func findnull(s *byte) int

go:nosplit

func findnullw

func findnullw(s *uint16) int

func findrunnable

func findrunnable() (gp *g, inheritTime bool)

Finds a runnable goroutine to execute. Tries to steal from other P's, get g from global queue, poll network.

func findsghi

func findsghi(gp *g, stk stack) uintptr

func finishsweep_m

func finishsweep_m()

finishsweep_m ensures that all spans are swept.

The world must be stopped. This ensures there are no sweeps in progress.

go:nowritebarrier

func finq_callback

func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)

func fint32to32

func fint32to32(x int32) uint32

func fint32to64

func fint32to64(x int32) uint64

func fint64to32

func fint64to32(x int64) uint32

func fint64to64

func fint64to64(x int64) uint64

func fintto64

func fintto64(val int64) (f uint64)

func float64bits

func float64bits(f float64) uint64

Float64bits returns the IEEE 754 binary representation of f.

func float64frombits

func float64frombits(b uint64) float64

Float64frombits returns the floating point number corresponding the IEEE 754 binary representation b.
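
These mirror math.Float64bits and math.Float64frombits; a tiny round trip through the exported equivalents, for orientation:

	package main

	import (
		"fmt"
		"math"
	)

	func main() {
		b := math.Float64bits(1.5)           // sign, exponent, mantissa packed into a uint64
		fmt.Printf("%#x\n", b)               // 0x3ff8000000000000
		fmt.Println(math.Float64frombits(b)) // 1.5
	}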

func flush

func flush()

func flushallmcaches

func flushallmcaches()

flushallmcaches flushes the mcaches of all Ps.

The world must be stopped.

go:nowritebarrier

func flushmcache

func flushmcache(i int)

flushmcache flushes the mcache of allp[i].

The world must be stopped.

go:nowritebarrier

func fmtNSAsMS

func fmtNSAsMS(buf []byte, ns uint64) []byte

fmtNSAsMS nicely formats ns nanoseconds as milliseconds.

func fmul32

func fmul32(x, y uint32) uint32

func fmul64

func fmul64(f, g uint64) uint64

func fneg64

func fneg64(f uint64) uint64

func forEachP

func forEachP(fn func(*p))

forEachP calls fn(p) for every P p when p reaches a GC safe point. If a P is currently executing code, this will bring the P to a GC safe point and execute fn on that P. If the P is not executing code (it is idle or in a syscall), this will call fn(p) directly while preventing the P from exiting its state. This does not ensure that fn will run on every CPU executing Go code, but it acts as a global memory barrier. GC uses this as a "ragged barrier."

The caller must hold worldsema.

go:systemstack

func forcegchelper

func forcegchelper()

func fpack32

func fpack32(sign, mant uint32, exp int, trunc uint32) uint32

func fpack64

func fpack64(sign, mant uint64, exp int, trunc uint64) uint64

func freeSomeWbufs

func freeSomeWbufs(preemptible bool) bool

freeSomeWbufs frees some workbufs back to the heap and returns true if it should be called again to free more.

func freeStackSpans

func freeStackSpans()

freeStackSpans frees unused stack spans at the end of GC.

func freedefer

func freedefer(d *_defer)

Free the given defer. The defer cannot be used after this call.

This must not grow the stack because there may be a frame without a stack map when this is called.

go:nosplit

func freedeferfn

func freedeferfn()

func freedeferpanic

func freedeferpanic()

Separate function so that it can split stack. Windows otherwise runs out of stack space.

func freemcache

func freemcache(c *mcache)

func freespecial

func freespecial(s *special, p unsafe.Pointer, size uintptr)

Do whatever cleanup needs to be done to deallocate s. It has already been unlinked from the mspan specials list.

func freezetheworld

func freezetheworld()

Similar to stopTheWorld but best-effort and can be called several times. There is no reverse operation; it is used during crashing. This function must not lock any mutexes.

func fsub64

func fsub64(f, g uint64) uint64

func fuint64to32

func fuint64to32(x uint64) float32

func fuint64to64

func fuint64to64(x uint64) float64

func funcPC

func funcPC(f interface{}) uintptr

funcPC returns the entry PC of the function f. It assumes that f is a func value. Otherwise the behavior is undefined. CAREFUL: In programs with plugins, funcPC can return different values for the same function (because there are actually multiple copies of the same function in the address space). To be safe, don't use the results of this function in any == expression. It is only safe to use the result as an address at which to start executing code. go:nosplit
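
funcPC itself is internal, but for orientation the sketch below obtains a function's entry PC through the public reflect and runtime APIs; the same plugin caveat applies, so the value is used only as a lookup key for FuncForPC, never in an == comparison.

	package main

	import (
		"fmt"
		"reflect"
		"runtime"
	)

	func hello() {}

	func main() {
		pc := reflect.ValueOf(hello).Pointer()    // underlying code pointer of the func value
		fmt.Println(runtime.FuncForPC(pc).Name()) // main.hello
	}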

func funcdata

func funcdata(f funcInfo, i uint8) unsafe.Pointer

func funcfile

func funcfile(f funcInfo, fileno int32) string

func funcline

func funcline(f funcInfo, targetpc uintptr) (file string, line int32)

func funcline1

func funcline1(f funcInfo, targetpc uintptr, strict bool) (file string, line int32)

func funcname

func funcname(f funcInfo) string

func funcnameFromNameoff

func funcnameFromNameoff(f funcInfo, nameoff int32) string

func funcspdelta

func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32

func funpack32

func funpack32(f uint32) (sign, mant uint32, exp int, inf, nan bool)

func funpack64

func funpack64(f uint64) (sign, mant uint64, exp int, inf, nan bool)

func futex

func futex(addr unsafe.Pointer, op int32, val uint32, ts, addr2 unsafe.Pointer, val3 uint32) int32

go:noescape

func futexsleep

func futexsleep(addr *uint32, val uint32, ns int64)

Atomically,

if(*addr == val) sleep

Might be woken up spuriously; that's allowed. Don't sleep longer than ns; ns < 0 means forever. go:nosplit

func futexwakeup

func futexwakeup(addr *uint32, cnt uint32)

If any procs are sleeping on addr, wake up at most cnt. go:nosplit

func gcAssistAlloc

func gcAssistAlloc(gp *g)

gcAssistAlloc performs GC work to make gp's assist debt positive. gp must be the calling user goroutine.

This must be called with preemption enabled.

func gcAssistAlloc1

func gcAssistAlloc1(gp *g, scanWork int64)

gcAssistAlloc1 is the part of gcAssistAlloc that runs on the system stack. This is a separate function to make it easier to see that we're not capturing anything from the user stack, since the user stack may move while we're in this function.

gcAssistAlloc1 indicates whether this assist completed the mark phase by setting gp.param to non-nil. This can't be communicated on the stack since it may move.

go:systemstack

func gcBgMarkPrepare

func gcBgMarkPrepare()

gcBgMarkPrepare sets up state for background marking. Mutator assists must not yet be enabled.

func gcBgMarkStartWorkers

func gcBgMarkStartWorkers()

gcBgMarkStartWorkers prepares background mark worker goroutines. These goroutines will not run until the mark phase, but they must be started while the work is not stopped and from a regular G stack. The caller must hold worldsema.

func gcBgMarkWorker

func gcBgMarkWorker(_p_ *p)

func gcDrain

func gcDrain(gcw *gcWork, flags gcDrainFlags)

gcDrain scans roots and objects in work buffers, blackening grey objects until it is unable to get more work. It may return before GC is done; it's the caller's responsibility to balance work from other Ps.

If flags&gcDrainUntilPreempt != 0, gcDrain returns when g.preempt is set.

If flags&gcDrainIdle != 0, gcDrain returns when there is other work to do.

If flags&gcDrainFractional != 0, gcDrain self-preempts when pollFractionalWorkerExit() returns true. This implies gcDrainNoBlock.

If flags&gcDrainFlushBgCredit != 0, gcDrain flushes scan work credit to gcController.bgScanCredit every gcCreditSlack units of scan work.

go:nowritebarrier

func gcDrainN

func gcDrainN(gcw *gcWork, scanWork int64) int64

gcDrainN blackens grey objects until it has performed roughly scanWork units of scan work or the G is preempted. This is best-effort, so it may perform less work if it fails to get a work buffer. Otherwise, it will perform at least n units of work, but may perform more because scanning is always done in whole object increments. It returns the amount of scan work performed.

The caller goroutine must be in a preemptible state (e.g., _Gwaiting) to prevent deadlocks during stack scanning. As a consequence, this must be called on the system stack.

go:nowritebarrier go:systemstack

func gcDumpObject

func gcDumpObject(label string, obj, off uintptr)

gcDumpObject dumps the contents of obj for debugging and marks the field at byte offset off in obj.

func gcFlushBgCredit

func gcFlushBgCredit(scanWork int64)

gcFlushBgCredit flushes scanWork units of background scan work credit. This first satisfies blocked assists on the work.assistQueue and then flushes any remaining credit to gcController.bgScanCredit.

Write barriers are disallowed because this is used by gcDrain after it has ensured that all work is drained and this must preserve that condition.

go:nowritebarrierrec

func gcMark

func gcMark(start_time int64)

gcMark runs the mark (or, for concurrent GC, mark termination). All gcWork caches must be empty. STW is in effect at this point. TODO go:nowritebarrier

func gcMarkDone

func gcMarkDone()

gcMarkDone transitions the GC from mark to mark termination if all reachable objects have been marked (that is, there are no grey objects and can be no more in the future). Otherwise, it flushes all local work to the global queues where it can be discovered by other workers.

This should be called when all local mark work has been drained and there are no remaining workers. Specifically, when

work.nwait == work.nproc && !gcMarkWorkAvailable(p)

The calling context must be preemptible.

Flushing local work is important because idle Ps may have local work queued. This is the only way to make that work visible and drive GC to completion.

It is explicitly okay to have write barriers in this function. If it does transition to mark termination, then all reachable objects have been marked, so the write barrier cannot shade any more objects.

func gcMarkRootCheck

func gcMarkRootCheck()

gcMarkRootCheck checks that all roots have been scanned. It is purely for debugging.

func gcMarkRootPrepare

func gcMarkRootPrepare()

gcMarkRootPrepare queues root scanning jobs (stacks, globals, and some miscellany) and initializes scanning-related state.

The caller must have called gcCopySpans().

The world must be stopped.

go:nowritebarrier

func gcMarkTermination

func gcMarkTermination(nextTriggerRatio float64)

func gcMarkTinyAllocs

func gcMarkTinyAllocs()

gcMarkTinyAllocs greys all active tiny alloc blocks.

The world must be stopped.

func gcMarkWorkAvailable

func gcMarkWorkAvailable(p *p) bool

gcMarkWorkAvailable reports whether executing a mark worker on p is potentially useful. p may be nil, in which case it only checks the global sources of work.

func gcParkAssist

func gcParkAssist() bool

gcParkAssist puts the current goroutine on the assist queue and parks.

gcParkAssist reports whether the assist is now satisfied. If it returns false, the caller must retry the assist.

go:nowritebarrier

func gcResetMarkState

func gcResetMarkState()

gcResetMarkState resets global state prior to marking (concurrent or STW) and resets the stack scan state of all Gs.

This is safe to do without the world stopped because any Gs created during or after this will start out in the reset state.

func gcSetTriggerRatio

func gcSetTriggerRatio(triggerRatio float64)

gcSetTriggerRatio sets the trigger ratio and updates everything derived from it: the absolute trigger, the heap goal, mark pacing, and sweep pacing.

This can be called at any time. If GC is in the middle of a concurrent phase, it will adjust the pacing of that phase.

This depends on gcpercent, memstats.heap_marked, and memstats.heap_live. These must be up to date.

mheap_.lock must be held or the world must be stopped.

func gcStart

func gcStart(trigger gcTrigger)

gcStart starts the GC. It transitions from _GCoff to _GCmark (if debug.gcstoptheworld == 0) or performs all of GC (if debug.gcstoptheworld != 0).

This may return without performing this transition in some cases, such as when called on a system stack or with locks held.

func gcSweep

func gcSweep(mode gcMode)

func gcWaitOnMark

func gcWaitOnMark(n uint32)

gcWaitOnMark blocks until GC finishes the Nth mark phase. If GC has already completed this mark phase, it returns immediately.

func gcWakeAllAssists

func gcWakeAllAssists()

gcWakeAllAssists wakes all currently blocked assists. This is used at the end of a GC cycle. gcBlackenEnabled must be false to prevent new assists from going to sleep after this point.

func gcallers

func gcallers(gp *g, skip int, pcbuf []uintptr) int

func gcd

func gcd(a, b uint32) uint32

func gcenable

func gcenable()

gcenable is called after the bulk of the runtime initialization, just before we're about to start letting user code run. It kicks off the background sweeper goroutine and enables GC.

func gcinit

func gcinit()

func gcmarknewobject

func gcmarknewobject(obj, size, scanSize uintptr)

gcmarknewobject marks a newly allocated object black. obj must not contain any non-nil pointers.

This is nosplit so it can manipulate a gcWork without preemption.

go:nowritebarrier go:nosplit

func gcount

func gcount() int32

func gcstopm

func gcstopm()

Stops the current m for stopTheWorld. Returns when the world is restarted.

func gentraceback

func gentraceback(pc0, sp0, lr0 uintptr, gp *g, skip int, pcbuf *uintptr, max int, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer, flags uint) int

Generic traceback. Handles runtime stack prints (pcbuf == nil), the runtime.Callers function (pcbuf != nil), as well as the garbage collector (callback != nil). A little clunky to merge these, but avoids duplicating the code and all its subtlety.

The skip argument is only valid with pcbuf != nil and counts the number of logical frames to skip rather than physical frames (with inlining, a PC in pcbuf can represent multiple calls). If a PC is partially skipped and max > 1, pcbuf[1] will be runtime.skipPleaseUseCallersFrames+N where N indicates the number of logical frames to skip in pcbuf[0].
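
User code reaches the pcbuf != nil path through runtime.Callers; a short sketch of that public surface, with skip counting logical frames as described above:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		pcs := make([]uintptr, 16)
		n := runtime.Callers(1, pcs) // skip=1 drops the runtime.Callers frame itself
		frames := runtime.CallersFrames(pcs[:n])
		for {
			frame, more := frames.Next()
			fmt.Println(frame.Function)
			if !more {
				break
			}
		}
	}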

func getArgInfo

func getArgInfo(frame *stkframe, f funcInfo, needArgMap bool, ctxt *funcval) (arglen uintptr, argmap *bitvector)

getArgInfo returns the argument frame information for a call to f with call frame frame.

This is used for both actual calls with active stack frames and for deferred calls that are not yet executing. If this is an actual call, ctxt must be nil (getArgInfo will retrieve what it needs from the active stack frame). If this is a deferred call, ctxt must be the function object that was deferred.

func getArgInfoFast

func getArgInfoFast(f funcInfo, needArgMap bool) (arglen uintptr, argmap *bitvector, ok bool)

getArgInfoFast returns the argument frame information for a call to f. It is short and inlineable. However, it does not handle all functions. If ok is false, you must call getArgInfo instead. TODO(josharian): once we do mid-stack inlining, call getArgInfo directly from getArgInfoFast and stop returning an ok bool.

func getRandomData

func getRandomData(r []byte)

func getStackMap

func getStackMap(frame *stkframe, cache *pcvalueCache, debug bool) (locals, args bitvector, objs []stackObjectRecord)

getStackMap returns the locals and arguments live pointer maps, and stack object list for frame.

func getargp

func getargp(x int) uintptr

getargp returns the location where the caller writes outgoing function call arguments. go:nosplit go:noinline

func getcallerpc

func getcallerpc() uintptr

go:noescape

func getcallersp

func getcallersp() uintptr

go:noescape

func getclosureptr

func getclosureptr() uintptr

getclosureptr returns the pointer to the current closure. getclosureptr can only be used in an assignment statement at the entry of a function. Moreover, the go:nosplit directive must be specified at the declaration of the caller function, so that the function prologue does not clobber the closure register. For example:

//go:nosplit
func f(arg1, arg2, arg3 int) {
	dx := getclosureptr()
}

The compiler rewrites calls to this function into instructions that fetch the pointer from a well-known register (DX on x86 architecture, etc.) directly.

func getgcmask

func getgcmask(ep interface{}) (mask []byte)

Returns GC type info for the pointer stored in ep for testing. If ep points to the stack, only static live information will be returned (i.e. not for objects which are only dynamically live stack objects).

func getgcmaskcb

func getgcmaskcb(frame *stkframe, ctxt unsafe.Pointer) bool

func getm

func getm() uintptr

A helper function for EnsureDropM.

func getproccount

func getproccount() int32

func getsig

func getsig(i uint32) uintptr

go:nosplit go:nowritebarrierrec

func gettid

func gettid() uint32

func gfpurge

func gfpurge(_p_ *p)

Purge all cached G's from gfree list to the global list.

func gfput

func gfput(_p_ *p, gp *g)

Put on gfree list. If local list is too long, transfer a batch to the global list.

func globrunqput

func globrunqput(gp *g)

Put gp on the global runnable queue. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec

func globrunqputbatch

func globrunqputbatch(batch *gQueue, n int32)

Put a batch of runnable goroutines on the global runnable queue. This clears *batch. Sched must be locked.

func globrunqputhead

func globrunqputhead(gp *g)

Put gp at the head of the global runnable queue. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec

func goargs

func goargs()

func gobytes

func gobytes(p *byte, n int) (b []byte)

used by cmd/cgo

func goenvs

func goenvs()

func goenvs_unix

func goenvs_unix()

func goexit

func goexit(neverCallThisFunction)

goexit is the return stub at the top of every goroutine call stack. Each goroutine stack is constructed as if goexit called the goroutine's entry point function, so that when the entry point function returns, it will return to goexit, which will call goexit1 to perform the actual exit.

This function must never be called directly. Call goexit1 instead. gentraceback assumes that goexit terminates the stack. A direct call on the stack will cause gentraceback to stop walking the stack prematurely and if there is leftover state it may panic.

func goexit0

func goexit0(gp *g)

goexit continuation on g0.

func goexit1

func goexit1()

Finishes execution of the current goroutine.

func gogetenv

func gogetenv(key string) string

func gogo

func gogo(buf *gobuf)

func gopanic

func gopanic(e interface{})

The implementation of the predeclared function panic.

func gopark

func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceEv byte, traceskip int)

Puts the current goroutine into a waiting state and calls unlockf. If unlockf returns false, the goroutine is resumed. unlockf must not access this G's stack, as it may be moved between the call to gopark and the call to unlockf. Reason explains why the goroutine has been parked. It is displayed in stack traces and heap dumps. Reasons should be unique and descriptive. Do not re-use reasons, add new ones.

func goparkunlock

func goparkunlock(lock *mutex, reason waitReason, traceEv byte, traceskip int)

Puts the current goroutine into a waiting state and unlocks the lock. The goroutine can be made runnable again by calling goready(gp).

func gopreempt_m

func gopreempt_m(gp *g)

func goready

func goready(gp *g, traceskip int)

func gorecover

func gorecover(argp uintptr) interface{}

The implementation of the predeclared function recover. Cannot split the stack because it needs to reliably find the stack segment of its caller.

TODO(rsc): Once we commit to CopyStackAlways, this doesn't need to be nosplit. go:nosplit

func goroutineReady

func goroutineReady(arg interface{}, seq uintptr)

Ready the goroutine arg.

func goroutineheader

func goroutineheader(gp *g)

func gosave

func gosave(buf *gobuf)

func goschedImpl

func goschedImpl(gp *g)

func gosched_m

func gosched_m(gp *g)

Gosched continuation on g0.

func goschedguarded

func goschedguarded()

goschedguarded yields the processor like gosched, but also checks for forbidden states and opts out of the yield in those cases. go:nosplit

func goschedguarded_m

func goschedguarded_m(gp *g)

goschedguarded_m is a forbidden-states-avoided version of gosched_m.

func gostartcall

func gostartcall(buf *gobuf, fn, ctxt unsafe.Pointer)

adjust Gobuf as if it executed a call to fn with context ctxt and then did an immediate gosave.

func gostartcallfn

func gostartcallfn(gobuf *gobuf, fv *funcval)

adjust Gobuf as if it executed a call to fn and then did an immediate gosave.

func gostring

func gostring(p *byte) string

func gostringn

func gostringn(p *byte, l int) string

func gostringnocopy

func gostringnocopy(str *byte) string

go:nosplit

func gostringw

func gostringw(strw *uint16) string

func gotraceback

func gotraceback() (level int32, all, crash bool)

gotraceback returns the current traceback settings.

If level is 0, suppress all tracebacks. If level is 1, show tracebacks, but exclude runtime frames. If level is 2, show tracebacks including runtime frames. If all is set, print all goroutine stacks. Otherwise, print just the current goroutine. If crash is set, crash (core dump, etc) after tracebacking.

go:nosplit

func greyobject

func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)

obj is the start of an object with mark mbits. If it isn't already marked, mark it and enqueue into gcw. base and off are for debugging only and could be removed.

See also wbBufFlush1, which partially duplicates this logic.

go:nowritebarrierrec

func growWork

func growWork(t *maptype, h *hmap, bucket uintptr)

func growWork_fast32

func growWork_fast32(t *maptype, h *hmap, bucket uintptr)

func growWork_fast64

func growWork_fast64(t *maptype, h *hmap, bucket uintptr)

func growWork_faststr

func growWork_faststr(t *maptype, h *hmap, bucket uintptr)

func gwrite

func gwrite(b []byte)

write to goroutine-local buffer if diverting output, or else standard error.

func handoffp

func handoffp(_p_ *p)

Hands off P from syscall or locked M. Always runs without a P, so write barriers are not allowed. go:nowritebarrierrec

func hasPrefix

func hasPrefix(s, prefix string) bool

func hashGrow

func hashGrow(t *maptype, h *hmap)

func haveexperiment

func haveexperiment(name string) bool

func heapBitsSetType

func heapBitsSetType(x, size, dataSize uintptr, typ *_type)

heapBitsSetType records that the new allocation [x, x+size) holds in [x, x+dataSize) one or more values of type typ. (The number of values is given by dataSize / typ.size.) If dataSize < size, the fragment [x+dataSize, x+size) is recorded as non-pointer data. It is known that the type has pointers somewhere; malloc does not call heapBitsSetType when there are no pointers, because all free objects are marked as noscan during heapBitsSweepSpan.

There can only be one allocation from a given span active at a time, and the bitmap for a span always falls on byte boundaries, so there are no write-write races for access to the heap bitmap. Hence, heapBitsSetType can access the bitmap without atomics.

There can be read-write races between heapBitsSetType and things that read the heap bitmap like scanobject. However, since heapBitsSetType is only used for objects that have not yet been made reachable, readers will ignore bits being modified by this function. This does mean this function cannot transiently modify bits that belong to neighboring objects. Also, on weakly-ordered machines, callers must execute a store/store (publication) barrier between calling this function and making the object reachable.

func heapBitsSetTypeGCProg

func heapBitsSetTypeGCProg(h heapBits, progSize, elemSize, dataSize, allocSize uintptr, prog *byte)

heapBitsSetTypeGCProg implements heapBitsSetType using a GC program. progSize is the size of the memory described by the program. elemSize is the size of the element that the GC program describes (a prefix of). dataSize is the total size of the intended data, a multiple of elemSize. allocSize is the total size of the allocated memory.

GC programs are only used for large allocations. heapBitsSetType requires that allocSize is a multiple of 4 words, so that the relevant bitmap bytes are not shared with surrounding objects.

func hexdumpWords

func hexdumpWords(p, end uintptr, mark func(uintptr) byte)

hexdumpWords prints a word-oriented hex dump of [p, end).

If mark != nil, it will be called with each printed word's address and should return a character mark to appear just before that word's value. It can return 0 to indicate no mark.

func ifaceHash

func ifaceHash(i interface {
        F()
}, seed uintptr) uintptr

func ifaceeq

func ifaceeq(tab *itab, x, y unsafe.Pointer) bool

func inHeapOrStack

func inHeapOrStack(b uintptr) bool

inHeapOrStack is a variant of inheap that returns true for pointers into any allocated heap span.

go:nowritebarrier go:nosplit

func inPersistentAlloc

func inPersistentAlloc(p uintptr) bool

inPersistentAlloc reports whether p points to memory allocated by persistentalloc. This must be nosplit because it is called by the cgo checker code, which is called by the write barrier code. go:nosplit

func inRange

func inRange(r0, r1, v0, v1 uintptr) bool

inRange reports whether v0 or v1 are in the range [r0, r1].

func inVDSOPage

func inVDSOPage(pc uintptr) bool

inVDSOPage reports whether pc is on the VDSO page.

func incidlelocked

func incidlelocked(v int32)

func index

func index(s, t string) int

func inf2one

func inf2one(f float64) float64

inf2one returns a signed 1 if f is an infinity and a signed 0 otherwise. The sign of the result is the sign of f.
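
A minimal sketch of that contract using package math (the runtime's version works on the raw bits instead):

	package main

	import (
		"fmt"
		"math"
	)

	func inf2oneSketch(f float64) float64 {
		if math.IsInf(f, 0) {
			return math.Copysign(1, f) // ±Inf -> ±1
		}
		return math.Copysign(0, f) // anything else -> ±0, keeping f's sign
	}

	func main() {
		fmt.Println(inf2oneSketch(math.Inf(-1))) // -1
		fmt.Println(inf2oneSketch(3.7))          // 0
	}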

func inheap

func inheap(b uintptr) bool

inheap reports whether b is a pointer into a (potentially dead) heap object. It returns false for pointers into mSpanManual spans. Non-preemptible because it is used by write barriers. go:nowritebarrier go:nosplit

func init

func init()

start forcegc helper goroutine

func initAlgAES

func initAlgAES()

func initCheckmarks

func initCheckmarks()

go:nowritebarrier

func initsig

func initsig(preinit bool)

Initialize signals. Called by libpreinit so runtime may not be initialized. go:nosplit go:nowritebarrierrec

func injectglist

func injectglist(glist *gList)

Injects the list of runnable G's into the scheduler and clears glist. Can run concurrently with GC.

func int32Hash

func int32Hash(i uint32, seed uintptr) uintptr

func int64Hash

func int64Hash(i uint64, seed uintptr) uintptr

func interequal

func interequal(p, q unsafe.Pointer) bool

func interhash

func interhash(p unsafe.Pointer, h uintptr) uintptr

func intstring

func intstring(buf *[4]byte, v int64) (s string)

func isAbortPC

func isAbortPC(pc uintptr) bool

isAbortPC reports whether pc is the program counter at which runtime.abort raises a signal.

It is nosplit because it's part of the isgoexception implementation.

go:nosplit

func isDirectIface

func isDirectIface(t *_type) bool

isDirectIface reports whether t is stored directly in an interface value.

func isEmpty

func isEmpty(x uint8) bool

isEmpty reports whether the given tophash array entry represents an empty bucket entry.

func isExportedRuntime

func isExportedRuntime(name string) bool

isExportedRuntime reports whether name is an exported runtime function. It is only for runtime functions, so ASCII A-Z is fine.

func isFinite

func isFinite(f float64) bool

isFinite reports whether f is neither NaN nor an infinity.

func isInf

func isInf(f float64) bool

isInf reports whether f is an infinity.

func isNaN

func isNaN(f float64) (is bool)

isNaN reports whether f is an IEEE 754 “not-a-number” value.

func isPowerOfTwo

func isPowerOfTwo(x uintptr) bool

func isSweepDone

func isSweepDone() bool

isSweepDone reports whether all spans are swept or currently being swept.

Note that this condition may transition from false to true at any time as the sweeper runs. It may transition from true to false if a GC runs; to prevent that the caller must be non-preemptible or must somehow block GC progress.

func isSystemGoroutine

func isSystemGoroutine(gp *g, fixed bool) bool

isSystemGoroutine reports whether the goroutine g must be omitted from stack dumps and the deadlock detector. This is any goroutine that starts at a runtime.* entry point, except for runtime.main and sometimes runtime.runfinq.

If fixed is true, any goroutine that can vary between user and system (that is, the finalizer goroutine) is considered a user goroutine.

func ismapkey

func ismapkey(t *_type) bool

func isscanstatus

func isscanstatus(status uint32) bool

func itabAdd

func itabAdd(m *itab)

itabAdd adds the given itab to the itab hash table. itabLock must be held.

func itabHashFunc

func itabHashFunc(inter *interfacetype, typ *_type) uintptr

func itab_callback

func itab_callback(tab *itab)

func itabsinit

func itabsinit()

func iterate_finq

func iterate_finq(callback func(*funcval, unsafe.Pointer, uintptr, *_type, *ptrtype))

go:nowritebarrier

func iterate_itabs

func iterate_itabs(fn func(*itab))

func iterate_memprof

func iterate_memprof(fn func(*bucket, uintptr, *uintptr, uintptr, uintptr, uintptr))

func itoaDiv

func itoaDiv(buf []byte, val uint64, dec int) []byte

itoaDiv formats val/(10**dec) into buf.
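
For example, val=12345 with dec=3 renders as "12.345". A hedged sketch of the same formatting via the standard library (the runtime writes the digits into buf directly):

	package main

	import "fmt"

	// itoaDivSketch assumes dec > 0.
	func itoaDivSketch(val uint64, dec int) string {
		s := fmt.Sprintf("%0*d", dec+1, val) // pad so there is at least one integer digit
		return s[:len(s)-dec] + "." + s[len(s)-dec:]
	}

	func main() {
		fmt.Println(itoaDivSketch(12345, 3)) // 12.345
		fmt.Println(itoaDivSketch(5, 3))     // 0.005
	}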

func jmpdefer

func jmpdefer(fv *funcval, argp uintptr)

go:noescape

func key32

func key32(p *uintptr) *uint32

We use the uintptr mutex.key and note.key as a uint32. go:nosplit

func less

func less(a, b uint32) bool

less reports whether a < b, treating a and b as running counts that may wrap around the 32-bit range, and assuming that their "unwrapped" difference is always less than 2^31.
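
One way to implement that comparison, shown here only as a sketch: subtract and look at the sign of the 32-bit difference, which is correct as long as the true distance stays below 2^31.

	package main

	import "fmt"

	func lessWrapped(a, b uint32) bool {
		return int32(a-b) < 0 // wrapping subtraction, then interpret the sign bit
	}

	func main() {
		fmt.Println(lessWrapped(1, 2))          // true
		fmt.Println(lessWrapped(4294967295, 3)) // true: the counter wrapped just before 3
		fmt.Println(lessWrapped(3, 4294967295)) // false
	}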

func lfnodeValidate

func lfnodeValidate(node *lfnode)

lfnodeValidate panics if node is not a valid address for use with lfstack.push. This only needs to be called when node is allocated.

func lfstackPack

func lfstackPack(node *lfnode, cnt uintptr) uint64

func libpreinit

func libpreinit()

Called to do synchronous initialization of Go code built with -buildmode=c-archive or -buildmode=c-shared. None of the Go runtime is initialized. go:nosplit go:nowritebarrierrec

func lock

func lock(l *mutex)

func lockOSThread

func lockOSThread()

go:nosplit

func lockedOSThread

func lockedOSThread() bool

func lowerASCII

func lowerASCII(c byte) byte

func mProf_Flush

func mProf_Flush()

mProf_Flush flushes the events from the current heap profiling cycle into the active profile. After this it is safe to start a new heap profiling cycle with mProf_NextCycle.

This is called by GC after mark termination starts the world. In contrast with mProf_NextCycle, this is somewhat expensive, but safe to do concurrently.

func mProf_FlushLocked

func mProf_FlushLocked()

func mProf_Free

func mProf_Free(b *bucket, size uintptr)

Called when freeing a profiled block.

func mProf_Malloc

func mProf_Malloc(p unsafe.Pointer, size uintptr)

Called by malloc to record a profiled block.

func mProf_NextCycle

func mProf_NextCycle()

mProf_NextCycle publishes the next heap profile cycle and creates a fresh heap profile cycle. This operation is fast and can be done during STW. The caller must call mProf_Flush before calling mProf_NextCycle again.

This is called by mark termination during STW so allocations and frees after the world is started again count towards a new heap profiling cycle.

func mProf_PostSweep

func mProf_PostSweep()

mProf_PostSweep records that all sweep frees for this GC cycle have completed. This has the effect of publishing the heap profile snapshot as of the last mark termination without advancing the heap profile cycle.

func mSysStatDec

func mSysStatDec(sysStat *uint64, n uintptr)

Atomically decreases a given *system* memory stat. Same comments as mSysStatInc apply. go:nosplit

func mSysStatInc

func mSysStatInc(sysStat *uint64, n uintptr)

Atomically increases a given *system* memory stat. We are counting on this stat never overflowing a uintptr, so this function must only be used for system memory stats.

The current implementation for little endian architectures is based on xadduintptr(), which is less than ideal: xadd64() should really be used. Using xadduintptr() is a stop-gap solution until arm supports xadd64() that doesn't use locks. (Locks are a problem as they require a valid G, which restricts their usability.)

A side-effect of using xadduintptr() is that we need to check for overflow errors. go:nosplit

func madvise

func madvise(addr unsafe.Pointer, n uintptr, flags int32) int32

The return value is only set on Linux, to be used in osinit().

func main

func main()

The main goroutine.

func main_init

func main_init()

go:linkname main_init main.init

func main_main

func main_main()

go:linkname main_main main.main

func makeslice

func makeslice(et *_type, len, cap int) unsafe.Pointer

func makeslice64

func makeslice64(et *_type, len64, cap64 int64) unsafe.Pointer

func mallocgc

func mallocgc(size uintptr, typ *_type, needzero bool) unsafe.Pointer

Allocate an object of size bytes. Small objects are allocated from the per-P cache's free lists. Large objects (> 32 kB) are allocated straight from the heap.

func mallocinit

func mallocinit()

func mapaccess1

func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer

mapaccess1 returns a pointer to h[key]. Never returns nil, instead it will return a reference to the zero object for the value type if the key is not in the map. NOTE: The returned pointer may keep the whole map live, so don't hold onto it for very long.
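
For orientation, these helpers are what ordinary map reads compile down to: the single-result form corresponds to the mapaccess1 family and the comma-ok form to mapaccess2.

	package main

	import "fmt"

	func main() {
		m := map[string]int{"a": 1}

		v := m["missing"] // single-result read: zero value when absent, never nil
		fmt.Println(v)    // 0

		v, ok := m["a"]    // comma-ok read
		fmt.Println(v, ok) // 1 true
	}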

func mapaccess1_fast32

func mapaccess1_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer

func mapaccess1_fast64

func mapaccess1_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer

func mapaccess1_faststr

func mapaccess1_faststr(t *maptype, h *hmap, ky string) unsafe.Pointer

func mapaccess1_fat

func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer

func mapaccess2

func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool)

func mapaccess2_fast32

func mapaccess2_fast32(t *maptype, h *hmap, key uint32) (unsafe.Pointer, bool)

func mapaccess2_fast64

func mapaccess2_fast64(t *maptype, h *hmap, key uint64) (unsafe.Pointer, bool)

func mapaccess2_faststr

func mapaccess2_faststr(t *maptype, h *hmap, ky string) (unsafe.Pointer, bool)

func mapaccess2_fat

func mapaccess2_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) (unsafe.Pointer, bool)

func mapaccessK

func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer)

Returns both key and value. Used by the map iterator.

func mapassign

func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer

Like mapaccess, but allocates a slot for the key if it is not present in the map.

func mapassign_fast32

func mapassign_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer

func mapassign_fast32ptr

func mapassign_fast32ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer

func mapassign_fast64

func mapassign_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer

func mapassign_fast64ptr

func mapassign_fast64ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer

func mapassign_faststr

func mapassign_faststr(t *maptype, h *hmap, s string) unsafe.Pointer

func mapclear

func mapclear(t *maptype, h *hmap)

mapclear deletes all keys from a map.

func mapdelete

func mapdelete(t *maptype, h *hmap, key unsafe.Pointer)

func mapdelete_fast32

func mapdelete_fast32(t *maptype, h *hmap, key uint32)

func mapdelete_fast64

func mapdelete_fast64(t *maptype, h *hmap, key uint64)

func mapdelete_faststr

func mapdelete_faststr(t *maptype, h *hmap, ky string)

func mapiterinit

func mapiterinit(t *maptype, h *hmap, it *hiter)

mapiterinit initializes the hiter struct used for ranging over maps. The hiter struct pointed to by 'it' is allocated on the stack by the compiler's order pass or on the heap by reflect_mapiterinit. Both need a zeroed hiter since the struct contains pointers.

func mapiternext

func mapiternext(it *hiter)

func markroot

func markroot(gcw *gcWork, i uint32)

markroot scans the i'th root.

Preemption must be disabled (because this uses a gcWork).

nowritebarrier is only advisory here.

go:nowritebarrier

func markrootBlock

func markrootBlock(b0, n0 uintptr, ptrmask0 *uint8, gcw *gcWork, shard int)

markrootBlock scans the shard'th shard of the block of memory [b0, b0+n0), with the given pointer mask.

go:nowritebarrier

func markrootFreeGStacks

func markrootFreeGStacks()

markrootFreeGStacks frees stacks of dead Gs.

This does not free stacks of dead Gs cached on Ps, but having a few cached stacks around isn't a problem.

TODO go:nowritebarrier

func markrootSpans

func markrootSpans(gcw *gcWork, shard int)

markrootSpans marks roots for one shard of work.spans.

go:nowritebarrier

func mcall

func mcall(fn func(*g))

mcall switches from the g to the g0 stack and invokes fn(g), where g is the goroutine that made the call. mcall saves g's current PC/SP in g->sched so that it can be restored later. It is up to fn to arrange for that later execution, typically by recording g in a data structure, causing something to call ready(g) later. mcall returns to the original goroutine g later, when g has been rescheduled. fn must not return at all; typically it ends by calling schedule, to let the m run other goroutines.

mcall can only be called from g stacks (not g0, not gsignal).

This must NOT be go:noescape: if fn is a stack-allocated closure, fn puts g on a run queue, and g executes before fn returns, the closure will be invalidated while it is still executing.

func mcommoninit

func mcommoninit(mp *m)

func mcount

func mcount() int32

func mdump

func mdump()

func memclrHasPointers

func memclrHasPointers(ptr unsafe.Pointer, n uintptr)

memclrHasPointers clears n bytes of typed memory starting at ptr. The caller must ensure that the type of the object at ptr has pointers, usually by checking typ.kind&kindNoPointers. However, ptr does not have to point to the start of the allocation.

go:nosplit

func memclrNoHeapPointers

func memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr)

memclrNoHeapPointers clears n bytes starting at ptr.

Usually you should use typedmemclr. memclrNoHeapPointers should be used only when the caller knows that *ptr contains no heap pointers because either:

*ptr is initialized memory and its type is pointer-free, or

*ptr is uninitialized memory (e.g., memory that's being reused for a new allocation) and hence contains only "junk".

The (CPU-specific) implementations of this function are in memclr_*.s. go:noescape

func memequal

func memequal(a, b unsafe.Pointer, size uintptr) bool

in asm_*.s go:noescape

func memequal0

func memequal0(p, q unsafe.Pointer) bool

func memequal128

func memequal128(p, q unsafe.Pointer) bool

func memequal16

func memequal16(p, q unsafe.Pointer) bool

func memequal32

func memequal32(p, q unsafe.Pointer) bool

func memequal64

func memequal64(p, q unsafe.Pointer) bool

func memequal8

func memequal8(p, q unsafe.Pointer) bool

func memequal_varlen

func memequal_varlen(a, b unsafe.Pointer) bool

func memhash

func memhash(p unsafe.Pointer, seed, s uintptr) uintptr

func memhash0

func memhash0(p unsafe.Pointer, h uintptr) uintptr

func memhash128

func memhash128(p unsafe.Pointer, h uintptr) uintptr

func memhash16

func memhash16(p unsafe.Pointer, h uintptr) uintptr

func memhash32

func memhash32(p unsafe.Pointer, seed uintptr) uintptr

func memhash64

func memhash64(p unsafe.Pointer, seed uintptr) uintptr

func memhash8

func memhash8(p unsafe.Pointer, h uintptr) uintptr

func memhash_varlen

func memhash_varlen(p unsafe.Pointer, h uintptr) uintptr

go:nosplit

func memmove

func memmove(to, from unsafe.Pointer, n uintptr)

memmove copies n bytes from "from" to "to". Implemented in memmove_*.s. go:noescape

func mexit

func mexit(osStack bool)

mexit tears down and exits the current thread.

Don't call this directly to exit the thread, since it must run at the top of the thread stack. Instead, use gogo(&_g_.m.g0.sched) to unwind the stack to the point that exits the thread.

It is entered with m.p != nil, so write barriers are allowed. It will release the P before exiting.

go:yeswritebarrierrec

func mincore

func mincore(addr unsafe.Pointer, n uintptr, dst *byte) int32

func minit

func minit()

Called to initialize a new m (including the bootstrap m). Called on the new thread, cannot allocate memory.

func minitSignalMask

func minitSignalMask()

minitSignalMask is called when initializing a new m to set the thread's signal mask. When this is called all signals have been blocked for the thread. This starts with m.sigmask, which was set either from initSigmask for a newly created thread or by calling msigsave if this is a non-Go thread calling a Go function. It removes all essential signals from the mask, thus causing those signals to not be blocked. Then it sets the thread's signal mask. After this is called the thread can receive signals.

func minitSignalStack

func minitSignalStack()

minitSignalStack is called when initializing a new m to set the alternate signal stack. If the alternate signal stack is not set for the thread (the normal case) then set the alternate signal stack to the gsignal stack. If the alternate signal stack is set for the thread (the case when a non-Go thread sets the alternate signal stack and then calls a Go function) then set the gsignal stack to the alternate signal stack. Record which choice was made in newSigstack, so that it can be undone in unminit.

func minitSignals

func minitSignals()

minitSignals is called when initializing a new m to set the thread's alternate signal stack and signal mask.

func mmap

func mmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (unsafe.Pointer, int)

func modtimer

func modtimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr)

func moduledataverify

func moduledataverify()

func moduledataverify1

func moduledataverify1(datap *moduledata)

func modulesinit

func modulesinit()

modulesinit creates the active modules slice out of all loaded modules.

When a module is first loaded by the dynamic linker, an .init_array function (written by cmd/link) is invoked to call addmoduledata, appending the module to the linked list that starts with firstmoduledata.

There are two times this can happen in the lifecycle of a Go program. First, if compiled with -linkshared, a number of modules built with -buildmode=shared can be loaded at program initialization. Second, a Go program can load a module while running that was built with -buildmode=plugin.

After loading, this function is called which initializes the moduledata so it is usable by the GC and creates a new activeModules list.

Only one goroutine may call modulesinit at a time.

func morestack

func morestack()

func morestack_noctxt

func morestack_noctxt()

func morestackc

func morestackc()

go:nosplit

func mpreinit

func mpreinit(mp *m)

Called to initialize a new m (including the bootstrap m). Called on the parent thread (main thread in case of bootstrap), can allocate memory.

func mput

func mput(mp *m)

Put mp on midle list. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec

func msanfree

func msanfree(addr unsafe.Pointer, sz uintptr)

func msanmalloc

func msanmalloc(addr unsafe.Pointer, sz uintptr)

func msanread

func msanread(addr unsafe.Pointer, sz uintptr)

func msanwrite

func msanwrite(addr unsafe.Pointer, sz uintptr)

func msigrestore

func msigrestore(sigmask sigset)

msigrestore sets the current thread's signal mask to sigmask. This is used to restore the non-Go signal mask when a non-Go thread calls a Go function. This is nosplit and nowritebarrierrec because it is called by dropm after g has been cleared. go:nosplit go:nowritebarrierrec

func msigsave

func msigsave(mp *m)

msigsave saves the current thread's signal mask into mp.sigmask. This is used to preserve the non-Go signal mask when a non-Go thread calls a Go function. This is nosplit and nowritebarrierrec because it is called by needm which may be called on a non-Go thread with no g available. go:nosplit go:nowritebarrierrec

func mspinning

func mspinning()

func mstart

func mstart()

Called to start an M.

This must not split the stack because we may not even have stack bounds set up yet.

May run during STW (because it doesn't have a P yet), so write barriers are not allowed.

go:nosplit go:nowritebarrierrec

func mstart1

func mstart1()

func mstartm0

func mstartm0()

mstartm0 implements part of mstart1 that only runs on the m0.

Write barriers are allowed here because we know the GC can't be running yet, so they'll be no-ops.

go:yeswritebarrierrec

func mullu

func mullu(u, v uint64) (lo, hi uint64)

64x64 -> 128 multiply. Adapted from Hacker's Delight.
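
The same operation is available to user code as math/bits.Mul64 (which returns hi, lo rather than lo, hi); for orientation:

	package main

	import (
		"fmt"
		"math/bits"
	)

	func main() {
		hi, lo := bits.Mul64(1<<40, 1<<40) // 2^80 does not fit in 64 bits
		fmt.Println(hi, lo)                // 65536 0, i.e. hi = 2^16
	}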

func munmap

func munmap(addr unsafe.Pointer, n uintptr)

func mutexevent

func mutexevent(cycles int64, skip int)

go:linkname mutexevent sync.event

func nanotime

func nanotime() int64

func needm

func needm(x byte)

needm is called when a cgo callback happens on a thread without an m (a thread not created by Go). In this case, needm is expected to find an m to use and return with m, g initialized correctly. Since m and g are not set now (likely nil, but see below) needm is limited in what routines it can call. In particular it can only call nosplit functions (textflag 7) and cannot do any scheduling that requires an m.

In order to avoid needing heavy lifting here, we adopt the following strategy: there is a stack of available m's that can be stolen. Using compare-and-swap to pop from the stack has ABA races, so we simulate a lock by doing an exchange (via Casuintptr) to steal the stack head and replace the top pointer with MLOCKED (1). This serves as a simple spin lock that we can use even without an m. The thread that locks the stack in this way unlocks the stack by storing a valid stack head pointer.

In order to make sure that there is always an m structure available to be stolen, we maintain the invariant that there is always one more than needed. At the beginning of the program (if cgo is in use) the list is seeded with a single m. If needm finds that it has taken the last m off the list, its job is - once it has installed its own m so that it can do things like allocate memory - to create a spare m and put it on the list.

Each of these extra m's also has a g0 and a curg that are pressed into service as the scheduling stack and current goroutine for the duration of the cgo callback.

When the callback is done with the m, it calls dropm to put the m back on the list. go:nosplit

func netpollDeadline

func netpollDeadline(arg interface{}, seq uintptr)

func netpollReadDeadline

func netpollReadDeadline(arg interface{}, seq uintptr)

func netpollWriteDeadline

func netpollWriteDeadline(arg interface{}, seq uintptr)

func netpollarm

func netpollarm(pd *pollDesc, mode int)

func netpollblock

func netpollblock(pd *pollDesc, mode int32, waitio bool) bool

Returns true if IO is ready, or false if it timed out or was closed. waitio - wait only for completed IO, ignore errors

func netpollblockcommit

func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool

func netpollcheckerr

func netpollcheckerr(pd *pollDesc, mode int32) int

func netpollclose

func netpollclose(fd uintptr) int32

func netpolldeadlineimpl

func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)

func netpolldescriptor

func netpolldescriptor() uintptr

func netpollgoready

func netpollgoready(gp *g, traceskip int)

func netpollinit

func netpollinit()

func netpollinited

func netpollinited() bool

func netpollopen

func netpollopen(fd uintptr, pd *pollDesc) int32

func netpollready

func netpollready(toRun *gList, pd *pollDesc, mode int32)

Make pd ready; newly runnable goroutines (if any) are added to toRun. May run during STW, so write barriers are not allowed. go:nowritebarrier

func newarray

func newarray(typ *_type, n int) unsafe.Pointer

newarray allocates an array of n elements of type typ.

func newextram

func newextram()

newextram allocates m's and puts them on the extra list. It is called with a working local m, so that it can do things like call schedlock and allocate.

func newm

func newm(fn func(), _p_ *p)

Create a new m. It will start off with a call to fn, or else the scheduler. fn needs to be static and not a heap allocated closure. May run with m.p==nil, so write barriers are not allowed. go:nowritebarrierrec

func newm1

func newm1(mp *m)

func newobject

func newobject(typ *_type) unsafe.Pointer

Implementation of the new builtin. The compiler (both frontend and SSA backend) knows the signature of this function.

func newosproc

func newosproc(mp *m)

May run with m.p==nil, so write barriers are not allowed. go:nowritebarrier

func newosproc0

func newosproc0(stacksize uintptr, fn unsafe.Pointer)

Version of newosproc that doesn't require a valid G. go:nosplit

func newproc

func newproc(siz int32, fn *funcval)

Create a new g running fn with siz bytes of arguments. Put it on the queue of g's waiting to run. The compiler turns a go statement into a call to this. Cannot split the stack because it assumes that the arguments are available sequentially after &fn; they would not be copied if a stack split occurred. go:nosplit

func newproc1

func newproc1(fn *funcval, argp *uint8, narg int32, callergp *g, callerpc uintptr)

Create a new g running fn with narg bytes of arguments starting at argp. callerpc is the address of the go statement that created this. The new g is put on the queue of g's waiting to run.

func newstack

func newstack()

Called from runtime·morestack when more stack is needed. Allocate larger stack and relocate to new stack. Stack growth is multiplicative, for constant amortized cost.

g->atomicstatus will be Grunning or Gscanrunning upon entry. If the GC is trying to stop this g then it will set preemptscan to true.

This must be nowritebarrierrec because it can be called as part of stack growth from other nowritebarrierrec functions, but the compiler doesn't check this.

go:nowritebarrierrec

func nextMarkBitArenaEpoch

func nextMarkBitArenaEpoch()

nextMarkBitArenaEpoch establishes a new epoch for the arenas holding the mark bits. The arenas are named relative to the current GC cycle which is demarcated by the call to finishsweep_m.

All current spans have been swept. During that sweep each span allocated room for its gcmarkBits in gcBitsArenas.next block. gcBitsArenas.next becomes the gcBitsArenas.current where the GC will mark objects and after each span is swept these bits will be used to allocate objects. gcBitsArenas.current becomes gcBitsArenas.previous where the span's gcAllocBits live until all the spans have been swept during this GC cycle. The span's sweep extinguishes all the references to gcBitsArenas.previous by pointing gcAllocBits into the gcBitsArenas.current. The gcBitsArenas.previous is released to the gcBitsArenas.free list.

func nextSample

func nextSample() int32

nextSample returns the next sampling point for heap profiling. The goal is to sample allocations on average every MemProfileRate bytes, but with a completely random distribution over the allocation timeline; this corresponds to a Poisson process with parameter MemProfileRate. In Poisson processes, the distance between two samples follows the exponential distribution (exp(MemProfileRate)), so the best return value is a random number taken from an exponential distribution whose mean is MemProfileRate.
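
A hedged sketch of that inverse-CDF sampling using the standard library (the runtime avoids the math package and floating point where it can):

	package main

	import (
		"fmt"
		"math"
		"math/rand"
	)

	// expSample draws one sampling distance from an exponential
	// distribution with the given mean.
	func expSample(mean float64) float64 {
		u := 1 - rand.Float64() // uniform in (0,1], avoids log(0)
		return -mean * math.Log(u)
	}

	func main() {
		const mean = 512 * 1024 // stand-in for a MemProfileRate-like value
		var sum float64
		for i := 0; i < 100000; i++ {
			sum += expSample(mean)
		}
		fmt.Printf("observed mean ≈ %.0f (expected %d)\n", sum/100000, mean)
	}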

func nextSampleNoFP

func nextSampleNoFP() int32

nextSampleNoFP is similar to nextSample, but uses older, simpler code to avoid floating point.

func nilfunc

func nilfunc()

go:nosplit

func nilinterequal

func nilinterequal(p, q unsafe.Pointer) bool

func nilinterhash

func nilinterhash(p unsafe.Pointer, h uintptr) uintptr

func noSignalStack

func noSignalStack(sig uint32)

This is called when we receive a signal when there is no signal stack. This can only happen if non-Go code calls sigaltstack to disable the signal stack.

func noescape

func noescape(p unsafe.Pointer) unsafe.Pointer

noescape hides a pointer from escape analysis. noescape is the identity function but escape analysis doesn't think the output depends on the input. noescape is inlined and currently compiles down to zero instructions. USE CAREFULLY! go:nosplit
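
A sketch of the trick the description refers to, shown only to make the mechanism concrete: an identity function whose uintptr round trip severs the pointer flow that escape analysis tracks. This pattern is only legal inside the runtime; ordinary code must not imitate it.

	package main

	import "unsafe"

	// noescapeSketch returns p unchanged at run time, but the conversion
	// through uintptr (the xor with 0 keeps the compiler from treating it
	// as a straight copy) hides the pointer from escape analysis.
	func noescapeSketch(p unsafe.Pointer) unsafe.Pointer {
		x := uintptr(p)
		return unsafe.Pointer(x ^ 0)
	}

	func main() {
		v := 42
		_ = noescapeSketch(unsafe.Pointer(&v))
	}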

func noteclear

func noteclear(n *note)

One-time notifications.

func notesleep

func notesleep(n *note)

func notetsleep

func notetsleep(n *note, ns int64) bool

func notetsleep_internal

func notetsleep_internal(n *note, ns int64) bool

May run with m.p==nil if called from notetsleep, so write barriers are not allowed.

go:nosplit go:nowritebarrier

func notetsleepg

func notetsleepg(n *note, ns int64) bool

Same as runtime·notetsleep, but called on user g (not g0). Calls only nosplit functions between entersyscallblock/exitsyscall.

func notewakeup

func notewakeup(n *note)

func notifyListAdd

func notifyListAdd(l *notifyList) uint32

notifyListAdd adds the caller to a notify list such that it can receive notifications. The caller must eventually call notifyListWait to wait for such a notification, passing the returned ticket number. go:linkname notifyListAdd sync.runtime_notifyListAdd

func notifyListCheck

func notifyListCheck(sz uintptr)

go:linkname notifyListCheck sync.runtime_notifyListCheck

func notifyListNotifyAll

func notifyListNotifyAll(l *notifyList)

notifyListNotifyAll notifies all entries in the list. go:linkname notifyListNotifyAll sync.runtime_notifyListNotifyAll

func notifyListNotifyOne

func notifyListNotifyOne(l *notifyList)

notifyListNotifyOne notifies one entry in the list. go:linkname notifyListNotifyOne sync.runtime_notifyListNotifyOne

func notifyListWait

func notifyListWait(l *notifyList, t uint32)

notifyListWait waits for a notification. If one has been sent since notifyListAdd was called, it returns immediately. Otherwise, it blocks. go:linkname notifyListWait sync.runtime_notifyListWait

func oneNewExtraM

func oneNewExtraM()

oneNewExtraM allocates an m and puts it on the extra list.

func open

func open(name *byte, mode, perm int32) int32

go:noescape

func osRelax

func osRelax(relax bool)

osRelax is called by the scheduler when transitioning to and from all Ps being idle.

func osStackAlloc

func osStackAlloc(s *mspan)

osStackAlloc performs OS-specific initialization before s is used as stack memory.

func osStackFree

func osStackFree(s *mspan)

osStackFree undoes the effect of osStackAlloc before s is returned to the heap.

func os_beforeExit

func os_beforeExit()

os_beforeExit is called from os.Exit(0). go:linkname os_beforeExit os.runtime_beforeExit

func os_runtime_args

func os_runtime_args() []string

go:linkname os_runtime_args os.runtime_args

func os_sigpipe

func os_sigpipe()

go:linkname os_sigpipe os.sigpipe

func osinit

func osinit()

func osyield

func osyield()

func overLoadFactor

func overLoadFactor(count int, B uint8) bool

overLoadFactor reports whether count items placed in 1<<B buckets is over loadFactor.
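
A sketch of that check, assuming the usual 8-slot buckets and a 6.5 load factor written as 13/2; these constants are an assumption for illustration, not taken from this document.

	package main

	import "fmt"

	func overLoadFactorSketch(count int, B uint8) bool {
		const bucketCnt = 8
		return count > bucketCnt && uint64(count) > 13*(uint64(1)<<B)/2
	}

	func main() {
		fmt.Println(overLoadFactorSketch(13, 1)) // false: 13 items in 2 buckets is exactly at the limit
		fmt.Println(overLoadFactorSketch(14, 1)) // true
	}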

func pageIndexOf

func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8)

pageIndexOf returns the arena, page index, and page mask for pointer p. The caller must ensure p is in the heap.

func panicCheckMalloc

func panicCheckMalloc(err error)

Calling panic with one of the errors below will call errorString.Error which will call mallocgc to concatenate strings. That will fail if malloc is locked, causing a confusing error message. Throw a better error message instead.

func panicdivide

func panicdivide()

func panicdottypeE

func panicdottypeE(have, want, iface *_type)

panicdottypeE is called when doing an e.(T) conversion and the conversion fails. have = the dynamic type we have. want = the static type we're trying to convert to. iface = the static type we're converting from.

func panicdottypeI

func panicdottypeI(have *itab, want, iface *_type)

panicdottypeI is called when doing an i.(T) conversion and the conversion fails. Same args as panicdottypeE, but "have" is the dynamic itab we have.

func panicfloat

func panicfloat()

func panicindex

func panicindex()

func panicmakeslicecap

func panicmakeslicecap()

func panicmakeslicelen

func panicmakeslicelen()

func panicmem

func panicmem()

func panicnildottype

func panicnildottype(want *_type)

panicnildottype is called when doing an i.(T) conversion and the interface i is nil. want = the static type we're trying to convert to.

func panicoverflow

func panicoverflow()

func panicslice

func panicslice()

func panicwrap

func panicwrap()

panicwrap generates a panic for a call to a wrapped value method with a nil pointer receiver.

It is called from the generated wrapper code.

func park_m

func park_m(gp *g)

park continuation on g0.

func parkunlock_c

func parkunlock_c(gp *g, lock unsafe.Pointer) bool

func parsedebugvars

func parsedebugvars()

func pcdatastart

func pcdatastart(f funcInfo, table int32) int32

func pcdatavalue

func pcdatavalue(f funcInfo, table int32, targetpc uintptr, cache *pcvalueCache) int32

func pcdatavalue1

func pcdatavalue1(f funcInfo, table int32, targetpc uintptr, cache *pcvalueCache, strict bool) int32

func pcvalue

func pcvalue(f funcInfo, off int32, targetpc uintptr, cache *pcvalueCache, strict bool) int32

func pcvalueCacheKey

func pcvalueCacheKey(targetpc uintptr) uintptr

pcvalueCacheKey returns the outermost index in a pcvalueCache to use for targetpc. It must be very cheap to calculate. For now, align to sys.PtrSize and reduce mod the number of entries. In practice, this appears to be fairly randomly and evenly distributed.

func persistentalloc

func persistentalloc(size, align uintptr, sysStat *uint64) unsafe.Pointer

Wrapper around sysAlloc that can allocate small chunks. There is no associated free operation. Intended for things like function/type/debug-related persistent data. If align is 0, uses default align (currently 8). The returned memory will be zeroed.

Consider marking persistentalloc'd types go:notinheap.

func pidleput

func pidleput(_p_ *p)

Puts p on the _Pidle list. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec

func plugin_lastmoduleinit

func plugin_lastmoduleinit() (path string, syms map[string]interface{}, errstr string)

go:linkname plugin_lastmoduleinit plugin.lastmoduleinit

func pluginftabverify

func pluginftabverify(md *moduledata)

func pollFractionalWorkerExit

func pollFractionalWorkerExit() bool

pollFractionalWorkerExit reports whether a fractional mark worker should self-preempt. It assumes it is called from the fractional worker.

func pollWork

func pollWork() bool

pollWork reports whether there is non-background work this P could be doing. This is a fairly lightweight check to be used for background work loops, like idle GC. It checks a subset of the conditions checked by the actual scheduler.

func poll_runtime_Semacquire

func poll_runtime_Semacquire(addr *uint32)

go:linkname poll_runtime_Semacquire internal/poll.runtime_Semacquire

func poll_runtime_Semrelease

func poll_runtime_Semrelease(addr *uint32)

go:linkname poll_runtime_Semrelease internal/poll.runtime_Semrelease

func poll_runtime_isPollServerDescriptor

func poll_runtime_isPollServerDescriptor(fd uintptr) bool

poll_runtime_isPollServerDescriptor reports whether fd is a descriptor being used by netpoll.

func poll_runtime_pollClose

func poll_runtime_pollClose(pd *pollDesc)

go:linkname poll_runtime_pollClose internal/poll.runtime_pollClose

func poll_runtime_pollOpen

func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)

go:linkname poll_runtime_pollOpen internal/poll.runtime_pollOpen

func poll_runtime_pollReset

func poll_runtime_pollReset(pd *pollDesc, mode int) int

go:linkname poll_runtime_pollReset internal/poll.runtime_pollReset

func poll_runtime_pollServerInit

func poll_runtime_pollServerInit()

go:linkname poll_runtime_pollServerInit internal/poll.runtime_pollServerInit

func poll_runtime_pollSetDeadline

func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)

go:linkname poll_runtime_pollSetDeadline internal/poll.runtime_pollSetDeadline

func poll_runtime_pollUnblock

func poll_runtime_pollUnblock(pd *pollDesc)

go:linkname poll_runtime_pollUnblock internal/poll.runtime_pollUnblock

func poll_runtime_pollWait

func poll_runtime_pollWait(pd *pollDesc, mode int) int

go:linkname poll_runtime_pollWait internal/poll.runtime_pollWait

func poll_runtime_pollWaitCanceled

func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int)

go:linkname poll_runtime_pollWaitCanceled internal/poll.runtime_pollWaitCanceled

func preemptall

func preemptall() bool

Tell all goroutines that they have been preempted and they should stop. This function is purely best-effort. It can fail to inform a goroutine if a processor just started running it. No locks need to be held. Returns true if preemption request was issued to at least one goroutine.

func preemptone

func preemptone(_p_ *p) bool

Tell the goroutine running on processor P to stop. This function is purely best-effort. It can incorrectly fail to inform the goroutine. It can inform the wrong goroutine. Even if it informs the correct goroutine, that goroutine might ignore the request if it is simultaneously executing newstack. No lock needs to be held. Returns true if a preemption request was issued. The actual preemption will happen at some point in the future and will be indicated by gp->status no longer being Grunning.

func prepGoExitFrame

func prepGoExitFrame(sp uintptr)

func prepareFreeWorkbufs

func prepareFreeWorkbufs()

prepareFreeWorkbufs moves busy workbuf spans to free list so they can be freed to the heap. This must only be called when all workbufs are on the empty list.

func preprintpanics

func preprintpanics(p *_panic)

Call all Error and String methods before freezing the world. Used when crashing while panicking.

func printAncestorTraceback

func printAncestorTraceback(ancestor ancestorInfo)

printAncestorTraceback prints the traceback of the given ancestor. TODO: Unify this with gentraceback and CallersFrames.

func printAncestorTracebackFuncInfo

func printAncestorTracebackFuncInfo(f funcInfo, pc uintptr)

printAncestorTracebackFuncInfo prints the given function info at a given pc within an ancestor traceback. The precision of this info is reduced due to only having access to the pcs at the time of the caller goroutine being created.

func printCgoTraceback

func printCgoTraceback(callers *cgoCallers)

printCgoTraceback prints a traceback of callers.

func printOneCgoTraceback

func printOneCgoTraceback(pc uintptr, max int, arg *cgoSymbolizerArg) int

printOneCgoTraceback prints the traceback of a single cgo caller. This can print more than one line because of inlining. Returns the number of frames printed.

func printany

func printany(i interface{})

printany prints an argument passed to panic. If panic is called with a value that has a String or Error method, it has already been converted into a string by preprintpanics.

func printbool

func printbool(v bool)

func printcomplex

func printcomplex(c complex128)

func printcreatedby

func printcreatedby(gp *g)

func printcreatedby1

func printcreatedby1(f funcInfo, pc uintptr)

func printeface

func printeface(e eface)

func printfloat

func printfloat(v float64)

func printhex

func printhex(v uint64)

func printiface

func printiface(i iface)

func printint

func printint(v int64)

func printlock

func printlock()

func printnl

func printnl()

func printpanics

func printpanics(p *_panic)

Print all currently active panics. Used when crashing. Should only be called after preprintpanics.

func printpointer

func printpointer(p unsafe.Pointer)

func printslice

func printslice(s []byte)

func printsp

func printsp()

func printstring

func printstring(s string)

func printuint

func printuint(v uint64)

func printunlock

func printunlock()

func procPin

func procPin() int

go:nosplit

func procUnpin

func procUnpin()

go:nosplit

func procyield

func procyield(cycles uint32)

func profilealloc

func profilealloc(mp *m, x unsafe.Pointer, size uintptr)

func publicationBarrier

func publicationBarrier()

publicationBarrier performs a store/store barrier (a "publication" or "export" barrier). Some form of synchronization is required between initializing an object and making that object accessible to another processor. Without synchronization, the initialization writes and the "publication" write may be reordered, allowing the other processor to follow the pointer and observe an uninitialized object. In general, higher-level synchronization should be used, such as locking or an atomic pointer write. publicationBarrier is for when those aren't an option, such as in the implementation of the memory manager.

There's no corresponding barrier for the read side because the read side naturally has a data dependency order. All architectures that Go supports or seems likely to ever support automatically enforce data dependency ordering.
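
The higher-level alternative mentioned above (an atomic pointer write) can be sketched at user level, assuming Go 1.19's sync/atomic.Pointer; this is an illustrative analogue of the initialize-then-publish pattern, not the runtime's internal barrier:

package main

import (
	"fmt"
	"sync/atomic"
)

type config struct {
	name string
	max  int
}

// published holds a *config. Writers fully initialize the object before
// storing the pointer; readers only dereference what they load, so they
// never observe a partially initialized object.
var published atomic.Pointer[config]

func publish() {
	c := &config{name: "example", max: 42} // initialization writes...
	published.Store(c)                     // ...then the publication write
}

func main() {
	publish()
	if c := published.Load(); c != nil {
		fmt.Println(c.name, c.max)
	}
}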

func purgecachedstats

func purgecachedstats(c *mcache)

go:nosplit

func putempty

func putempty(b *workbuf)

putempty puts a workbuf onto the work.empty list. Upon entry this goroutine owns b. The lfstack.push relinquishes ownership. go:nowritebarrier

func putfull

func putfull(b *workbuf)

putfull puts the workbuf on the work.full list for the GC. putfull accepts partially full buffers so the GC can avoid competing with the mutators for ownership of partially full buffers. go:nowritebarrier

func queuefinalizer

func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)

func raceReadObjectPC

func raceReadObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)

func raceWriteObjectPC

func raceWriteObjectPC(t *_type, addr unsafe.Pointer, callerpc, pc uintptr)

func raceacquire

func raceacquire(addr unsafe.Pointer)

func raceacquireg

func raceacquireg(gp *g, addr unsafe.Pointer)

func racefingo

func racefingo()

func racefini

func racefini()

func racefree

func racefree(p unsafe.Pointer, sz uintptr)

func racegoend

func racegoend()

func racegostart

func racegostart(pc uintptr) uintptr

func raceinit

func raceinit() (uintptr, uintptr)

func racemalloc

func racemalloc(p unsafe.Pointer, sz uintptr)

func racemapshadow

func racemapshadow(addr unsafe.Pointer, size uintptr)

func raceproccreate

func raceproccreate() uintptr

func raceprocdestroy

func raceprocdestroy(ctx uintptr)

func racereadpc

func racereadpc(addr unsafe.Pointer, callerpc, pc uintptr)

func racereadrangepc

func racereadrangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)

func racerelease

func racerelease(addr unsafe.Pointer)

func racereleaseg

func racereleaseg(gp *g, addr unsafe.Pointer)

func racereleasemerge

func racereleasemerge(addr unsafe.Pointer)

func racereleasemergeg

func racereleasemergeg(gp *g, addr unsafe.Pointer)

func racesync

func racesync(c *hchan, sg *sudog)

func racewritepc

func racewritepc(addr unsafe.Pointer, callerpc, pc uintptr)

func racewriterangepc

func racewriterangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)

func raise

func raise(sig uint32)

func raisebadsignal

func raisebadsignal(sig uint32, c *sigctxt)

raisebadsignal is called when a signal is received on a non-Go thread, and the Go program does not want to handle it (that is, the program has not called os/signal.Notify for the signal).

func raiseproc

func raiseproc(sig uint32)

func rawbyteslice

func rawbyteslice(size int) (b []byte)

rawbyteslice allocates a new byte slice. The byte slice is not zeroed.

func rawruneslice

func rawruneslice(size int) (b []rune)

rawruneslice allocates a new rune slice. The rune slice is not zeroed.

func rawstring

func rawstring(size int) (s string, b []byte)

rawstring allocates storage for a new string. The returned string and byte slice both refer to the same storage. The storage is not zeroed. Callers should use b to set the string contents and then drop b.

func rawstringtmp

func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte)

func read

func read(fd int32, p unsafe.Pointer, n int32) int32

func readGCStats

func readGCStats(pauses *[]uint64)

go:linkname readGCStats runtime/debug.readGCStats

func readGCStats_m

func readGCStats_m(pauses *[]uint64)

func readUnaligned32

func readUnaligned32(p unsafe.Pointer) uint32

func readUnaligned64

func readUnaligned64(p unsafe.Pointer) uint64

func readgogc

func readgogc() int32

func readgstatus

func readgstatus(gp *g) uint32

All reads and writes of g's status go through readgstatus, casgstatus, castogscanstatus, casfrom_Gscanstatus. go:nosplit

func readmemstats_m

func readmemstats_m(stats *MemStats)

func readvarint

func readvarint(p []byte) (read uint32, val uint32)

readvarint reads a varint from p.

func ready

func ready(gp *g, traceskip int, next bool)

Mark gp ready to run.

func readyWithTime

func readyWithTime(s *sudog, traceskip int)

func record

func record(r *MemProfileRecord, b *bucket)

Write b's data to r.

func recordForPanic

func recordForPanic(b []byte)

recordForPanic maintains a circular buffer of messages written by the runtime leading up to a process crash, allowing the messages to be extracted from a core dump.

The text written during a process crash (following "panic" or "fatal error") is not saved, since the goroutine stacks will generally be readable from the runtime data structures in the core file.

func recordspan

func recordspan(vh unsafe.Pointer, p unsafe.Pointer)

recordspan adds a newly allocated span to h.allspans.

This only happens the first time a span is allocated from mheap.spanalloc (it is not called when a span is reused).

Write barriers are disallowed here because it can be called from gcWork when allocating new workbufs. However, because it's an indirect call from the fixalloc initializer, the compiler can't see this.

go:nowritebarrierrec

func recovery

func recovery(gp *g)

Unwind the stack after a deferred function calls recover after a panic. Then arrange to continue running as though the caller of the deferred function returned normally.

func recv

func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)

recv processes a receive operation on a full channel c. There are 2 parts:

1) The value sent by the sender sg is put into the channel and the sender is woken up to go on its merry way.

2) The value received by the receiver (the current G) is written to ep.

For synchronous channels, both values are the same. For asynchronous channels, the receiver gets its data from the channel buffer and the sender's data is put in the channel buffer. Channel c must be full and locked. recv unlocks c with unlockf. sg must already be dequeued from c. A non-nil ep must point to the heap or the caller's stack.
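
The two parts are observable from ordinary code on a full buffered channel: the receiver gets the oldest buffered value, and a blocked sender's value moves into the freed buffer slot. An illustrative example:

package main

import "fmt"

func main() {
	c := make(chan int, 1)
	c <- 1 // fill the buffer; the channel is now full

	done := make(chan struct{})
	go func() {
		c <- 2 // waits for buffer space if the receive below hasn't run yet
		close(done)
	}()

	fmt.Println(<-c) // 1: the oldest buffered value
	<-done
	fmt.Println(<-c) // 2: the second sender's value, via the buffer
}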

func recvDirect

func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)

func reentersyscall

func reentersyscall(pc, sp uintptr)

The goroutine g is about to enter a system call. Record that it's not using the cpu anymore. This is called only from the go syscall library and cgocall, not from the low-level system calls used by the runtime.

Entersyscall cannot split the stack: the gosave must make g->sched refer to the caller's stack segment, because entersyscall is going to return immediately after.

Nothing entersyscall calls can split the stack either. We cannot safely move the stack during an active call to syscall, because we do not know which of the uintptr arguments are really pointers (back into the stack). In practice, this means that we make the fast path run through entersyscall doing no-split things, and the slow path has to use systemstack to run bigger things on the system stack.

reentersyscall is the entry point used by cgo callbacks, where explicitly saved SP and PC are restored. This is needed when exitsyscall will be called from a function further up in the call stack than the parent, as g->syscallsp must always point to a valid stack frame. entersyscall below is the normal entry point for syscalls, which obtains the SP and PC from the caller.

Syscall tracing: At the start of a syscall we emit traceGoSysCall to capture the stack trace. If the syscall does not block, that is it, we do not emit any other events. If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock; when the syscall returns we emit traceGoSysExit and when the goroutine starts running (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart. To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock, we remember the current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick); whoever emits traceGoSysBlock increments p.syscalltick afterwards, and we wait for the increment before emitting traceGoSysExit. Note that the increment is done even if tracing is not enabled, because tracing can be enabled in the middle of a syscall. We don't want the wait to hang.

go:nosplit

func reflectOffsLock

func reflectOffsLock()

func reflectOffsUnlock

func reflectOffsUnlock()

func reflect_addReflectOff

func reflect_addReflectOff(ptr unsafe.Pointer) int32

reflect_addReflectOff adds a pointer to the reflection offset lookup map. go:linkname reflect_addReflectOff reflect.addReflectOff

func reflect_chancap

func reflect_chancap(c *hchan) int

go:linkname reflect_chancap reflect.chancap

func reflect_chanclose

func reflect_chanclose(c *hchan)

go:linkname reflect_chanclose reflect.chanclose

func reflect_chanlen

func reflect_chanlen(c *hchan) int

go:linkname reflect_chanlen reflect.chanlen

func reflect_chanrecv

func reflect_chanrecv(c *hchan, nb bool, elem unsafe.Pointer) (selected bool, received bool)

go:linkname reflect_chanrecv reflect.chanrecv

func reflect_chansend

func reflect_chansend(c *hchan, elem unsafe.Pointer, nb bool) (selected bool)

go:linkname reflect_chansend reflect.chansend

func reflect_gcbits

func reflect_gcbits(x interface{}) []byte

gcbits returns the GC type info for x, for testing. The result is the bitmap entries (0 or 1), one entry per byte. go:linkname reflect_gcbits reflect.gcbits

func reflect_ifaceE2I

func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)

go:linkname reflect_ifaceE2I reflect.ifaceE2I

func reflect_ismapkey

func reflect_ismapkey(t *_type) bool

go:linkname reflect_ismapkey reflect.ismapkey

func reflect_mapaccess

func reflect_mapaccess(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer

go:linkname reflect_mapaccess reflect.mapaccess

func reflect_mapassign

func reflect_mapassign(t *maptype, h *hmap, key unsafe.Pointer, val unsafe.Pointer)

go:linkname reflect_mapassign reflect.mapassign

func reflect_mapdelete

func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer)

go:linkname reflect_mapdelete reflect.mapdelete

func reflect_mapiterkey

func reflect_mapiterkey(it *hiter) unsafe.Pointer

go:linkname reflect_mapiterkey reflect.mapiterkey

func reflect_mapiternext

func reflect_mapiternext(it *hiter)

go:linkname reflect_mapiternext reflect.mapiternext

func reflect_mapitervalue

func reflect_mapitervalue(it *hiter) unsafe.Pointer

go:linkname reflect_mapitervalue reflect.mapitervalue

func reflect_maplen

func reflect_maplen(h *hmap) int

go:linkname reflect_maplen reflect.maplen

func reflect_memclrNoHeapPointers

func reflect_memclrNoHeapPointers(ptr unsafe.Pointer, n uintptr)

go:linkname reflect_memclrNoHeapPointers reflect.memclrNoHeapPointers

func reflect_memmove

func reflect_memmove(to, from unsafe.Pointer, n uintptr)

go:linkname reflect_memmove reflect.memmove

func reflect_resolveNameOff

func reflect_resolveNameOff(ptrInModule unsafe.Pointer, off int32) unsafe.Pointer

reflect_resolveNameOff resolves a name offset from a base pointer. go:linkname reflect_resolveNameOff reflect.resolveNameOff

func reflect_resolveTextOff

func reflect_resolveTextOff(rtype unsafe.Pointer, off int32) unsafe.Pointer

reflect_resolveTextOff resolves a function pointer offset from a base type. go:linkname reflect_resolveTextOff reflect.resolveTextOff

func reflect_resolveTypeOff

func reflect_resolveTypeOff(rtype unsafe.Pointer, off int32) unsafe.Pointer

reflect_resolveTypeOff resolves an *rtype offset from a base type. go:linkname reflect_resolveTypeOff reflect.resolveTypeOff

func reflect_rselect

func reflect_rselect(cases []runtimeSelect) (int, bool)

go:linkname reflect_rselect reflect.rselect

func reflect_typedmemclr

func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer)

go:linkname reflect_typedmemclr reflect.typedmemclr

func reflect_typedmemclrpartial

func reflect_typedmemclrpartial(typ *_type, ptr unsafe.Pointer, off, size uintptr)

go:linkname reflect_typedmemclrpartial reflect.typedmemclrpartial

func reflect_typedmemmove

func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer)

go:linkname reflect_typedmemmove reflect.typedmemmove

func reflect_typedmemmovepartial

func reflect_typedmemmovepartial(typ *_type, dst, src unsafe.Pointer, off, size uintptr)

typedmemmovepartial is like typedmemmove but assumes that dst and src point off bytes into the value and only copies size bytes. go:linkname reflect_typedmemmovepartial reflect.typedmemmovepartial

func reflect_typedslicecopy

func reflect_typedslicecopy(elemType *_type, dst, src slice) int

go:linkname reflect_typedslicecopy reflect.typedslicecopy

func reflect_typelinks

func reflect_typelinks() ([]unsafe.Pointer, [][]int32)

go:linkname reflect_typelinks reflect.typelinks

func reflect_unsafe_New

func reflect_unsafe_New(typ *_type) unsafe.Pointer

go:linkname reflect_unsafe_New reflect.unsafe_New

func reflect_unsafe_NewArray

func reflect_unsafe_NewArray(typ *_type, n int) unsafe.Pointer

go:linkname reflect_unsafe_NewArray reflect.unsafe_NewArray

func reflectcall

func reflectcall(argtype *_type, fn, arg unsafe.Pointer, argsize uint32, retoffset uint32)

reflectcall calls fn with a copy of the n argument bytes pointed at by arg. After fn returns, reflectcall copies n-retoffset result bytes back into arg+retoffset before returning. If copying result bytes back, the caller should pass the argument frame type as argtype, so that call can execute appropriate write barriers during the copy. Package reflect passes a frame type. In package runtime, there is only one call that copies results back, in cgocallbackg1, and it does NOT pass a frame type, meaning there are no write barriers invoked. See that call site for justification.

Package reflect accesses this symbol through a linkname.
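
From user code this path is reached through the reflect package; for example, reflect.Value.Call builds the argument frame that reflectcall copies in and out:

package main

import (
	"fmt"
	"reflect"
)

func add(a, b int) int { return a + b }

func main() {
	fn := reflect.ValueOf(add)
	args := []reflect.Value{reflect.ValueOf(3), reflect.ValueOf(4)}
	// Call packs the arguments into a frame, invokes add through the
	// runtime's reflectcall, and copies the results back out.
	out := fn.Call(args)
	fmt.Println(out[0].Int()) // 7
}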

func reflectcallmove

func reflectcallmove(typ *_type, dst, src unsafe.Pointer, size uintptr)

reflectcallmove is invoked by reflectcall to copy the return values out of the stack and into the heap, invoking the necessary write barriers. dst, src, and size describe the return value area to copy. typ describes the entire frame (not just the return values). typ may be nil, which indicates write barriers are not needed.

It must be nosplit and must only call nosplit functions because the stack map of reflectcall is wrong.

go:nosplit

func releaseSudog

func releaseSudog(s *sudog)

go:nosplit

func releasem

func releasem(mp *m)

go:nosplit

func removefinalizer

func removefinalizer(p unsafe.Pointer)

Removes the finalizer (if any) from the object p.

func resetspinning

func resetspinning()

func restartg

func restartg(gp *g)

The GC requests that this goroutine be moved from a _Gscan status back to the corresponding non-scan status.

func restoreGsignalStack

func restoreGsignalStack(st *gsignalStack)

restoreGsignalStack restores the gsignal stack to the value it had before entering the signal handler. go:nosplit go:nowritebarrierrec

func retake

func retake(now int64) uint32

func return0

func return0()

return0 is a stub used to return 0 from deferproc. It is called at the very end of deferproc to signal the calling Go function that it should not jump to deferreturn. It is defined in asm_*.s.

func rotl_31

func rotl_31(x uint64) uint64

Note: in order to get the compiler to issue rotl instructions, we need to constant fold the shift amount by hand. TODO: convince the compiler to issue rotl instructions after inlining.

func round

func round(n, a uintptr) uintptr

round n up to a multiple of a. a must be a power of 2.
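
A minimal sketch of that rounding, using the usual power-of-two trick of adding a-1 and masking off the low bits:

package main

import "fmt"

// roundUp rounds n up to a multiple of a; a must be a power of 2.
func roundUp(n, a uintptr) uintptr {
	return (n + a - 1) &^ (a - 1)
}

func main() {
	fmt.Println(roundUp(13, 8))   // 16
	fmt.Println(roundUp(16, 8))   // 16
	fmt.Println(roundUp(1, 4096)) // 4096
}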

func round2

func round2(x int32) int32

round x up to a power of 2.

func roundupsize

func roundupsize(size uintptr) uintptr

Returns size of the memory block that mallocgc will allocate if you ask for the size.

func rt0_go

func rt0_go()

func rt_sigaction

func rt_sigaction(sig uintptr, new, old *sigactiont, size uintptr) int32

rt_sigaction is implemented in assembly. go:noescape

func rtsigprocmask

func rtsigprocmask(how int32, new, old *sigset, size int32)

go:noescape

func runGCProg

func runGCProg(prog, trailer, dst *byte, size int) uintptr

runGCProg executes the GC program prog, and then trailer if non-nil, writing to dst with entries of the given size. If size == 1, dst is a 1-bit pointer mask laid out moving forward from dst. If size == 2, dst is the 2-bit heap bitmap, and writes move backward starting at dst (because the heap bitmap does). In this case, the caller guarantees that only whole bytes in dst need to be written.

runGCProg returns the number of 1- or 2-bit entries written to memory.

func runSafePointFn

func runSafePointFn()

runSafePointFn runs the safe point function, if any, for this P. This should be called like

if getg().m.p.runSafePointFn != 0 {
    runSafePointFn()
}

runSafePointFn must be checked on any transition into _Pidle or _Psyscall to avoid a race where forEachP sees that the P is running just before the P goes into _Pidle/_Psyscall and neither forEachP nor the P runs the safe-point function.

func runfinq

func runfinq()

This is the goroutine that runs all of the finalizers.

func runqempty

func runqempty(_p_ *p) bool

runqempty reports whether _p_ has no Gs on its local run queue. It never returns true spuriously.

func runqget

func runqget(_p_ *p) (gp *g, inheritTime bool)

Get g from local runnable queue. If inheritTime is true, gp should inherit the remaining time in the current time slice. Otherwise, it should start a new time slice. Executed only by the owner P.

func runqgrab

func runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32

Grabs a batch of goroutines from _p_'s runnable queue into batch. Batch is a ring buffer starting at batchHead. Returns number of grabbed goroutines. Can be executed by any P.

func runqput

func runqput(_p_ *p, gp *g, next bool)

runqput tries to put g on the local runnable queue. If next is false, runqput adds g to the tail of the runnable queue. If next is true, runqput puts g in the _p_.runnext slot. If the run queue is full, runqput puts g on the global queue. Executed only by the owner P.

func runqputslow

func runqputslow(_p_ *p, gp *g, h, t uint32) bool

Put g and a batch of work from local runnable queue on global queue. Executed only by the owner P.

func runtime_debug_WriteHeapDump

func runtime_debug_WriteHeapDump(fd uintptr)

go:linkname runtime_debug_WriteHeapDump runtime/debug.WriteHeapDump

func runtime_debug_freeOSMemory

func runtime_debug_freeOSMemory()

go:linkname runtime_debug_freeOSMemory runtime/debug.freeOSMemory

func runtime_getProfLabel

func runtime_getProfLabel() unsafe.Pointer

go:linkname runtime_getProfLabel runtime/pprof.runtime_getProfLabel

func runtime_init

func runtime_init()

go:linkname runtime_init runtime.init

func runtime_pprof_readProfile

func runtime_pprof_readProfile() ([]uint64, []unsafe.Pointer, bool)

readProfile, provided to runtime/pprof, returns the next chunk of binary CPU profiling stack trace data, blocking until data is available. If profiling is turned off and all the profile data accumulated while it was on has been returned, readProfile returns eof=true. The caller must save the returned data and tags before calling readProfile again.

go:linkname runtime_pprof_readProfile runtime/pprof.readProfile
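
The user-visible consumer of this stream is runtime/pprof's CPU profiler; a minimal usage example:

package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// StartCPUProfile turns profiling on; runtime/pprof then repeatedly
	// reads the binary stack trace data (via readProfile, as described
	// above) and writes it to f until StopCPUProfile is called.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	work()
}

func work() {
	sum := 0
	for i := 0; i < 10_000_000; i++ {
		sum += i
	}
	_ = sum
}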

func runtime_pprof_runtime_cyclesPerSecond

func runtime_pprof_runtime_cyclesPerSecond() int64

go:linkname runtime_pprof_runtime_cyclesPerSecond runtime/pprof.runtime_cyclesPerSecond

func runtime_setProfLabel

func runtime_setProfLabel(labels unsafe.Pointer)

go:linkname runtime_setProfLabel runtime/pprof.runtime_setProfLabel

func save

func save(pc, sp uintptr)

save updates getg().sched to refer to pc and sp so that a following gogo will restore pc and sp.

save must not have write barriers because invoking a write barrier can clobber getg().sched.

go:nosplit go:nowritebarrierrec

func saveAncestors

func saveAncestors(callergp *g) *[]ancestorInfo

saveAncestors copies previous ancestors of the given caller g and includes info for the current caller into a new set of tracebacks for a g being created.

func saveblockevent

func saveblockevent(cycles int64, skip int, which bucketType)

func saveg

func saveg(pc, sp uintptr, gp *g, r *StackRecord)

func sbrk0

func sbrk0() uintptr

func scanblock

func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)

scanblock scans b as scanobject would, but using an explicit pointer bitmap instead of the heap bitmap.

This is used to scan non-heap roots, so it does not update gcw.bytesMarked or gcw.scanWork.

If stk != nil, possible stack pointers are also reported to stk.putPtr. go:nowritebarrier

func scanframeworker

func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)

Scan a stack frame: local variables and function arguments/results. go:nowritebarrier

func scang

func scang(gp *g, gcw *gcWork)

scang blocks until gp's stack has been scanned. It might be scanned by scang or it might be scanned by the goroutine itself. Either way, the stack scan has completed when scang returns.

func scanobject

func scanobject(b uintptr, gcw *gcWork)

scanobject scans the object starting at b, adding pointers to gcw. b must point to the beginning of a heap object or an oblet. scanobject consults the GC bitmap for the pointer mask and the spans for the size of the object.

go:nowritebarrier

func scanstack

func scanstack(gp *g, gcw *gcWork)

scanstack scans gp's stack, greying all pointers found on the stack.

scanstack is marked go:systemstack because it must not be preempted while using a workbuf.

go:nowritebarrier go:systemstack

func schedEnableUser

func schedEnableUser(enable bool)

schedEnableUser enables or disables the scheduling of user goroutines.

This does not stop already running user goroutines, so the caller should first stop the world when disabling user goroutines.

func schedEnabled

func schedEnabled(gp *g) bool

schedEnabled reports whether gp should be scheduled. It returns false if scheduling of gp is disabled.

func sched_getaffinity

func sched_getaffinity(pid, len uintptr, buf *byte) int32

go:noescape

func schedinit

func schedinit()

The bootstrap sequence is:

call osinit
call schedinit
make & queue new G
call runtime·mstart

The new G calls runtime·main.

func schedtrace

func schedtrace(detailed bool)

func schedule

func schedule()

One round of scheduler: find a runnable goroutine and execute it. Never returns.

func selectgo

func selectgo(cas0 *scase, order0 *uint16, ncases int) (int, bool)

selectgo implements the select statement.

cas0 points to an array of type [ncases]scase, and order0 points to an array of type [2*ncases]uint16. Both reside on the goroutine's stack (regardless of any escaping in selectgo).

selectgo returns the index of the chosen scase, which matches the ordinal position of its respective select{recv,send,default} call. Also, if the chosen scase was a receive operation, it reports whether a value was received.
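
A select with multiple communication cases is typically compiled into a selectgo call (the single-case-plus-default forms shown below are special-cased); for example:

package main

import "fmt"

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "from a"

	// The compiler lowers this statement into a selectgo call with one
	// scase per case; the returned index selects which branch runs.
	select {
	case msg := <-a:
		fmt.Println(msg)
	case msg := <-b:
		fmt.Println(msg)
	}
}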

func selectnbrecv

func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected bool)

compiler implements

select {
case v = <-c:
	... foo
default:
	... bar
}

as

if selectnbrecv(&v, c) {
	... foo
} else {
	... bar
}

func selectnbrecv2

func selectnbrecv2(elem unsafe.Pointer, received *bool, c *hchan) (selected bool)

compiler implements

select {
case v, ok = <-c:
	... foo
default:
	... bar
}

as

if c != nil && selectnbrecv2(&v, &ok, c) {
	... foo
} else {
	... bar
}

func selectnbsend

func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool)

compiler implements

select {
case c <- v:
	... foo
default:
	... bar
}

as

if selectnbsend(c, v) {
	... foo
} else {
	... bar
}

func selectsetpc

func selectsetpc(cas *scase)

func sellock

func sellock(scases []scase, lockorder []uint16)

func selparkcommit

func selparkcommit(gp *g, _ unsafe.Pointer) bool

func selunlock

func selunlock(scases []scase, lockorder []uint16)

func semacquire

func semacquire(addr *uint32)

Called from runtime.

func semacquire1

func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags)

func semrelease

func semrelease(addr *uint32)

func semrelease1

func semrelease1(addr *uint32, handoff bool)

func send

func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)

send processes a send operation on an empty channel c. The value ep sent by the sender is copied to the receiver sg. The receiver is then woken up to go on its merry way. Channel c must be empty and locked. send unlocks c with unlockf. sg must already be dequeued from c. ep must be non-nil and point to the heap or the caller's stack.
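
With an unbuffered channel the same direct copy is the ordinary handoff: the sender's value goes straight to the receiver (ep above) and the receiver is woken. An illustrative example:

package main

import (
	"fmt"
	"time"
)

func main() {
	c := make(chan int) // unbuffered: a send must pair with a receive

	go func() {
		// By the time this runs, main is very likely already blocked in
		// the receive below, so the value is copied directly to it.
		time.Sleep(10 * time.Millisecond)
		c <- 42
	}()

	fmt.Println(<-c) // 42
}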

func sendDirect

func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)

func setGCPercent

func setGCPercent(in int32) (out int32)

go:linkname setGCPercent runtime/debug.setGCPercent

func setGCPhase

func setGCPhase(x uint32)

go:nosplit

func setGNoWB

func setGNoWB(gp **g, new *g)

setGNoWB performs *gp = new without a write barrier. For times when it's impractical to use a guintptr. go:nosplit go:nowritebarrier

func setGsignalStack

func setGsignalStack(st *stackt, old *gsignalStack)

setGsignalStack sets the gsignal stack of the current m to an alternate signal stack returned from the sigaltstack system call. It saves the old values in *old for use by restoreGsignalStack. This is used when handling a signal if non-Go code has set the alternate signal stack. go:nosplit go:nowritebarrierrec

func setMNoWB

func setMNoWB(mp **m, new *m)

setMNoWB performs *mp = new without a write barrier. For times when it's impractical to use an muintptr. go:nosplit go:nowritebarrier

func setMaxStack

func setMaxStack(in int) (out int)

go:linkname setMaxStack runtime/debug.setMaxStack

func setMaxThreads

func setMaxThreads(in int) (out int)

go:linkname setMaxThreads runtime/debug.setMaxThreads

func setPanicOnFault

func setPanicOnFault(new bool) (old bool)

go:linkname setPanicOnFault runtime/debug.setPanicOnFault

func setProcessCPUProfiler

func setProcessCPUProfiler(hz int32)

setProcessCPUProfiler is called when the profiling timer changes. It is called with prof.lock held. hz is the new timer rate, and is 0 if profiling is being disabled. Enable or disable the signal as required for -buildmode=c-archive.

func setSignalstackSP

func setSignalstackSP(s *stackt, sp uintptr)

setSignalstackSP sets the ss_sp field of a stackt. go:nosplit

func setThreadCPUProfiler

func setThreadCPUProfiler(hz int32)

setThreadCPUProfiler makes any thread-specific changes required to implement profiling at a rate of hz.

func setTraceback

func setTraceback(level string)

go:linkname setTraceback runtime/debug.SetTraceback

func setcpuprofilerate

func setcpuprofilerate(hz int32)

setcpuprofilerate sets the CPU profiling rate to hz times per second. If hz <= 0, setcpuprofilerate turns off CPU profiling.

func setg

func setg(gg *g)

func setitimer

func setitimer(mode int32, new, old *itimerval)

go:noescape

func setprofilebucket

func setprofilebucket(p unsafe.Pointer, b *bucket)

Set the heap profile bucket associated with addr to b.

func setsSP

func setsSP(pc uintptr) bool

setsSP reports whether a function will set the SP to an absolute value. It is important that we don't traceback when such a function is at the bottom of the stack, since we can't be sure that we will find the caller.

If the function is not at the bottom of the stack, we assume that it has set things up so that the traceback will be consistent, either by being a traceback-terminating function or by putting one on the stack at the right offset.

func setsig

func setsig(i uint32, fn uintptr)

go:nosplit go:nowritebarrierrec

func setsigsegv

func setsigsegv(pc uintptr)

setsigsegv is used on darwin/arm{,64} to fake a segmentation fault. go:nosplit

func setsigstack

func setsigstack(i uint32)

go:nosplit go:nowritebarrierrec

func shade

func shade(b uintptr)

Shade the object if it isn't already. The object is not nil and known to be in the heap. Preemption must be disabled. go:nowritebarrier

func shouldPushSigpanic

func shouldPushSigpanic(gp *g, pc, lr uintptr) bool

shouldPushSigpanic reports whether pc should be used as sigpanic's return PC (pushing a frame for the call). Otherwise, it should be left alone so that LR is used as sigpanic's return PC, effectively replacing the top-most frame with sigpanic. This is used by preparePanic.

func showframe

func showframe(f funcInfo, gp *g, firstFrame bool, funcID, childID funcID) bool

showframe reports whether the frame with the given characteristics should be printed during a traceback.

func showfuncinfo

func showfuncinfo(f funcInfo, firstFrame bool, funcID, childID funcID) bool

showfuncinfo reports whether a function with the given characteristics should be printed during a traceback.

func shrinkstack

func shrinkstack(gp *g)

Maybe shrink the stack being used by gp. Called at garbage collection time. gp must be stopped, but the world need not be.

func siftdownTimer

func siftdownTimer(t []*timer, i int) bool

func siftupTimer

func siftupTimer(t []*timer, i int) bool

func sigInitIgnored

func sigInitIgnored(s uint32)

sigInitIgnored marks the signal as already ignored. This is called at program start by initsig. In a shared library initsig is called by libpreinit, so the runtime may not be initialized yet. go:nosplit

func sigInstallGoHandler

func sigInstallGoHandler(sig uint32) bool

go:nosplit go:nowritebarrierrec

func sigNotOnStack

func sigNotOnStack(sig uint32)

This is called if we receive a signal when there is a signal stack but we are not on it. This can only happen if non-Go code called sigaction without setting the SS_ONSTACK flag.

func sigaction

func sigaction(sig uint32, new, old *sigactiont)

go:nosplit go:nowritebarrierrec

func sigaddset

func sigaddset(mask *sigset, i int)

go:nosplit go:nowritebarrierrec

func sigaltstack

func sigaltstack(new, old *stackt)

go:noescape

func sigblock

func sigblock()

sigblock blocks all signals in the current thread's signal mask. This is used to block signals while setting up and tearing down g when a non-Go thread calls a Go function. The OS-specific code is expected to define sigset_all. This is nosplit and nowritebarrierrec because it is called by needm which may be called on a non-Go thread with no g available. go:nosplit go:nowritebarrierrec

func sigdelset

func sigdelset(mask *sigset, i int)

func sigdisable

func sigdisable(sig uint32)

sigdisable disables the Go signal handler for the signal sig. It is only called while holding the os/signal.handlers lock, via os/signal.disableSignal and signal_disable.

func sigenable

func sigenable(sig uint32)

sigenable enables the Go signal handler to catch the signal sig. It is only called while holding the os/signal.handlers lock, via os/signal.enableSignal and signal_enable.

func sigfillset

func sigfillset(mask *uint64)

func sigfwd

func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer)

go:noescape

func sigfwdgo

func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool

Determines if the signal should be handled by Go and if not, forwards the signal to the handler that was installed before Go's. Returns whether the signal was forwarded. This is called by the signal handler, and the world may be stopped. go:nosplit go:nowritebarrierrec

func sighandler

func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)

sighandler is invoked when a signal occurs. The global g will be set to a gsignal goroutine and we will be running on the alternate signal stack. The parameter g will be the value of the global g when the signal occurred. The sig, info, and ctxt parameters are from the system signal handler: they are the parameters passed when the SA is passed to the sigaction system call.

The garbage collector may have stopped the world, so write barriers are not allowed.

go:nowritebarrierrec

func sigignore

func sigignore(sig uint32)

sigignore ignores the signal sig. It is only called while holding the os/signal.handlers lock, via os/signal.ignoreSignal and signal_ignore.

func signalDuringFork

func signalDuringFork(sig uint32)

signalDuringFork is called if we receive a signal while doing a fork. We do not want signals at that time, as a signal sent to the process group may be delivered to the child process, causing confusion. This should never be called, because we block signals across the fork; this function is just a safety check. See issue 18600 for background.

func signalWaitUntilIdle

func signalWaitUntilIdle()

signalWaitUntilIdle waits until the signal delivery mechanism is idle. This is used to ensure that we do not drop a signal notification due to a race between disabling a signal and receiving a signal. This assumes that signal delivery has already been disabled for the signal(s) in question, and here we are just waiting to make sure that all the signals have been delivered to the user channels by the os/signal package. go:linkname signalWaitUntilIdle os/signal.signalWaitUntilIdle

func signal_disable

func signal_disable(s uint32)

Must only be called from a single goroutine at a time. go:linkname signal_disable os/signal.signal_disable

func signal_enable

func signal_enable(s uint32)

Must only be called from a single goroutine at a time. go:linkname signal_enable os/signal.signal_enable

func signal_ignore

func signal_ignore(s uint32)

Must only be called from a single goroutine at a time. go:linkname signal_ignore os/signal.signal_ignore

func signal_ignored

func signal_ignored(s uint32) bool

Checked by signal handlers. go:linkname signal_ignored os/signal.signal_ignored

func signal_recv

func signal_recv() uint32

Called to receive the next queued signal. Must only be called from a single goroutine at a time. go:linkname signal_recv os/signal.signal_recv

func signalstack

func signalstack(s *stack)

signalstack sets the current thread's alternate signal stack to s. go:nosplit

func signame

func signame(sig uint32) string

func sigpanic

func sigpanic()

sigpanic turns a synchronous signal into a run-time panic. If the signal handler sees a synchronous panic, it arranges the stack to look like the function where the signal occurred called sigpanic, sets the signal's PC value to sigpanic, and returns from the signal handler. The effect is that the program will act as though the function that got the signal simply called sigpanic instead.

This must NOT be nosplit because the linker doesn't know where sigpanic calls can be injected.

The signal handler must not inject a call to sigpanic if getg().throwsplit, since sigpanic may need to grow the stack.
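
The user-visible effect is an ordinary, recoverable panic. For example, a nil pointer dereference is typically delivered as a hardware fault, converted by sigpanic into a run-time panic, and can then be caught with recover like any other panic:

package main

import "fmt"

func main() {
	defer func() {
		if r := recover(); r != nil {
			// Prints something like:
			// recovered: runtime error: invalid memory address or nil pointer dereference
			fmt.Println("recovered:", r)
		}
	}()

	var p *int
	fmt.Println(*p) // faults; the handler arranges a call to sigpanic
}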

func sigpipe

func sigpipe()

func sigprocmask

func sigprocmask(how int32, new, old *sigset)

go:nosplit go:nowritebarrierrec

func sigprof

func sigprof(pc, sp, lr uintptr, gp *g, mp *m)

Called if we receive a SIGPROF signal. Called by the signal handler, may run during STW. go:nowritebarrierrec

func sigprofNonGo

func sigprofNonGo()

sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread, and the signal handler collected a stack trace in sigprofCallers. When this is called, sigprofCallersUse will be non-zero. g is nil, and what we can do is very limited. go:nosplit go:nowritebarrierrec

func sigprofNonGoPC

func sigprofNonGoPC(pc uintptr)

sigprofNonGoPC is called when a profiling signal arrived on a non-Go thread and we have a single PC value, not a stack trace. g is nil, and what we can do is very limited. go:nosplit go:nowritebarrierrec

func sigreturn

func sigreturn()

func sigsend

func sigsend(s uint32) bool

sigsend delivers a signal from sighandler to the internal signal delivery queue. It reports whether the signal was sent. If not, the caller typically crashes the program. It runs from the signal handler, so it's limited in what it can do.

func sigtramp

func sigtramp(sig uint32, info *siginfo, ctx unsafe.Pointer)

func sigtrampgo

func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer)

sigtrampgo is called from the signal handler function, sigtramp, written in assembly code. This is called by the signal handler, and the world may be stopped.

It must be nosplit because getg() is still the G that was running (if any) when the signal was delivered, but it's (usually) called on the gsignal stack. Until this switches the G to gsignal, the stack bounds check won't work.

go:nosplit go:nowritebarrierrec

func skipPleaseUseCallersFrames

func skipPleaseUseCallersFrames()

This function is defined in asm.s to be sizeofSkipFunction bytes long.

func slicebytetostring

func slicebytetostring(buf *tmpBuf, b []byte) (str string)

buf is a fixed-size buffer for the result; it is non-nil if the result does not escape.

func slicebytetostringtmp

func slicebytetostringtmp(b []byte) string

slicebytetostringtmp returns a "string" referring to the actual []byte bytes.

Callers need to ensure that the returned string will not be used after the calling goroutine modifies the original slice or synchronizes with another goroutine.

The function is only called when instrumenting and otherwise intrinsified by the compiler.

Some internal compiler optimizations use this function:

- Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)] where k is []byte and T1 to Tn is a nesting of struct and array literals.
- Used for "<"+string(b)+">" concatenation where b is []byte.
- Used for string(b)=="foo" comparison where b is []byte.
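
Those forms correspond to ordinary code like the following (the optimization itself is not observable from the language; this only illustrates the shapes listed above):

package main

import "fmt"

func main() {
	m := map[string]int{"foo": 1}
	k := []byte("foo")

	fmt.Println(m[string(k)])          // map index with a converted []byte key
	fmt.Println(string(k) == "foo")    // comparison with a converted operand
	fmt.Println("<" + string(k) + ">") // concatenation with a converted operand
}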

func slicecopy

func slicecopy(to, fm slice, width uintptr) int

func slicerunetostring

func slicerunetostring(buf *tmpBuf, a []rune) string

func slicestringcopy

func slicestringcopy(to []byte, fm string) int

func stackcache_clear

func stackcache_clear(c *mcache)

go:systemstack

func stackcacherefill

func stackcacherefill(c *mcache, order uint8)

stackcacherefill/stackcacherelease implement a global pool of stack segments. The pool is required to prevent unlimited growth of per-thread caches.

go:systemstack

func stackcacherelease

func stackcacherelease(c *mcache, order uint8)

go:systemstack

func stackcheck

func stackcheck()

stackcheck checks that SP is in range [g->stack.lo, g->stack.hi).

func stackfree

func stackfree(stk stack)

stackfree frees an n byte stack allocation at stk.

stackfree must run on the system stack because it uses per-P resources and must not split the stack.

go:systemstack

func stackinit

func stackinit()

func stacklog2

func stacklog2(n uintptr) int

stacklog2 returns ⌊log_2(n)⌋.

func stackpoolfree

func stackpoolfree(x gclinkptr, order uint8)

Adds stack x to the free pool. Must be called with stackpoolmu held.

func startTemplateThread

func startTemplateThread()

startTemplateThread starts the template thread if it is not already running.

The calling thread must itself be in a known-good state.

func startTheWorld

func startTheWorld()

startTheWorld undoes the effects of stopTheWorld.

func startTheWorldWithSema

func startTheWorldWithSema(emitTraceEvent bool) int64

func startTimer

func startTimer(t *timer)

startTimer adds t to the timer heap. go:linkname startTimer time.startTimer

func startlockedm

func startlockedm(gp *g)

Schedules the locked m to run the locked gp. May run during STW, so write barriers are not allowed. go:nowritebarrierrec

func startm

func startm(_p_ *p, spinning bool)

Schedules some M to run the p (creates an M if necessary). If p==nil, tries to get an idle P, if no idle P's does nothing. May run with m.p==nil, so write barriers are not allowed. If spinning is set, the caller has incremented nmspinning and startm will either decrement nmspinning or set m.spinning in the newly started M. go:nowritebarrierrec

func startpanic_m

func startpanic_m() bool

startpanic_m prepares for an unrecoverable panic.

It returns true if panic messages should be printed, or false if the runtime is in bad shape and should just print stacks.

It must not have write barriers even though the write barrier explicitly ignores writes once dying > 0. Write barriers still assume that g.m.p != nil, and this function may not have P in some contexts (e.g. a panic in a signal handler for a signal sent to an M with no P).

go:nowritebarrierrec

func step

func step(p []byte, pc *uintptr, val *int32, first bool) (newp []byte, ok bool)

step advances to the next pc, value pair in the encoded table.

func stopTheWorld

func stopTheWorld(reason string)

stopTheWorld stops all P's from executing goroutines, interrupting all goroutines at GC safe points, and records reason as the reason for the stop. On return, only the current goroutine's P is running. stopTheWorld must not be called from a system stack and the caller must not hold worldsema. The caller must call startTheWorld when other P's should resume execution.

stopTheWorld is safe for multiple goroutines to call at the same time. Each will execute its own stop, and the stops will be serialized.

This is also used by routines that do stack dumps. If the system is in panic or being exited, this may not reliably stop all goroutines.

func stopTheWorldWithSema

func stopTheWorldWithSema()

stopTheWorldWithSema is the core implementation of stopTheWorld. The caller is responsible for acquiring worldsema and disabling preemption first and then should call stopTheWorldWithSema on the system stack:

semacquire(&worldsema, 0)
m.preemptoff = "reason"
systemstack(stopTheWorldWithSema)

When finished, the caller must either call startTheWorld or undo these three operations separately:

m.preemptoff = ""
systemstack(startTheWorldWithSema)
semrelease(&worldsema)

It is allowed to acquire worldsema once and then execute multiple startTheWorldWithSema/stopTheWorldWithSema pairs. Other P's are able to execute between successive calls to startTheWorldWithSema and stopTheWorldWithSema. Holding worldsema causes any other goroutines invoking stopTheWorld to block.

func stopTimer

func stopTimer(t *timer) bool

stopTimer removes t from the timer heap if it is there. It returns true if t was removed, false if t wasn't even there. go:linkname stopTimer time.stopTimer
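
startTimer and stopTimer are the runtime half of package time's timers; from user code they are driven by, for example, time.NewTimer and Timer.Stop:

package main

import (
	"fmt"
	"time"
)

func main() {
	// NewTimer adds a runtime timer to the heap (via startTimer).
	t := time.NewTimer(50 * time.Millisecond)

	// Stop removes it again (via stopTimer) and reports whether the timer
	// was still pending, matching the description above.
	if t.Stop() {
		fmt.Println("timer stopped before it fired")
	} else {
		fmt.Println("timer had already fired or been stopped")
	}
}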

func stoplockedm

func stoplockedm()

Stops execution of the current m that is locked to a g until the g is runnable again. Returns with acquired P.

func stopm

func stopm()

Stops execution of the current m until new work is available. Returns with acquired P.

func strequal

func strequal(p, q unsafe.Pointer) bool

func strhash

func strhash(a unsafe.Pointer, h uintptr) uintptr

func stringDataOnStack

func stringDataOnStack(s string) bool

stringDataOnStack reports whether the string's data is stored on the current goroutine's stack.

func stringHash

func stringHash(s string, seed uintptr) uintptr

Testing adapters for hash quality tests (see hash_test.go)

func stringtoslicebyte

func stringtoslicebyte(buf *tmpBuf, s string) []byte

func stringtoslicerune

func stringtoslicerune(buf *[tmpStringBufSize]rune, s string) []rune

func subtract1

func subtract1(p *byte) *byte

subtract1 returns the byte pointer p-1. go:nowritebarrier

nosplit because it is used during write barriers and must not be preempted. go:nosplit

func subtractb

func subtractb(p *byte, n uintptr) *byte

subtractb returns the byte pointer p-n. go:nowritebarrier go:nosplit

func sweepone

func sweepone() uintptr

sweepone sweeps some unswept heap span and returns the number of pages returned to the heap, or ^uintptr(0) if there was nothing to sweep.

func sync_atomic_CompareAndSwapPointer

func sync_atomic_CompareAndSwapPointer(ptr *unsafe.Pointer, old, new unsafe.Pointer) bool

go:linkname sync_atomic_CompareAndSwapPointer sync/atomic.CompareAndSwapPointer go:nosplit

func sync_atomic_CompareAndSwapUintptr

func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool

go:linkname sync_atomic_CompareAndSwapUintptr sync/atomic.CompareAndSwapUintptr

func sync_atomic_StorePointer

func sync_atomic_StorePointer(ptr *unsafe.Pointer, new unsafe.Pointer)

go:linkname sync_atomic_StorePointer sync/atomic.StorePointer go:nosplit

func sync_atomic_StoreUintptr

func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr)

go:linkname sync_atomic_StoreUintptr sync/atomic.StoreUintptr

func sync_atomic_SwapPointer

func sync_atomic_SwapPointer(ptr *unsafe.Pointer, new unsafe.Pointer) unsafe.Pointer

go:linkname sync_atomic_SwapPointer sync/atomic.SwapPointer go:nosplit

func sync_atomic_SwapUintptr

func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr

go:linkname sync_atomic_SwapUintptr sync/atomic.SwapUintptr

func sync_atomic_runtime_procPin

func sync_atomic_runtime_procPin() int

go:linkname sync_atomic_runtime_procPin sync/atomic.runtime_procPin go:nosplit

func sync_atomic_runtime_procUnpin

func sync_atomic_runtime_procUnpin()

go:linkname sync_atomic_runtime_procUnpin sync/atomic.runtime_procUnpin go:nosplit

func sync_fastrand

func sync_fastrand() uint32

go:linkname sync_fastrand sync.fastrand

func sync_nanotime

func sync_nanotime() int64

go:linkname sync_nanotime sync.runtime_nanotime

func sync_runtime_Semacquire

func sync_runtime_Semacquire(addr *uint32)

go:linkname sync_runtime_Semacquire sync.runtime_Semacquire

func sync_runtime_SemacquireMutex

func sync_runtime_SemacquireMutex(addr *uint32, lifo bool)

go:linkname sync_runtime_SemacquireMutex sync.runtime_SemacquireMutex

func sync_runtime_Semrelease

func sync_runtime_Semrelease(addr *uint32, handoff bool)

go:linkname sync_runtime_Semrelease sync.runtime_Semrelease

func sync_runtime_canSpin

func sync_runtime_canSpin(i int) bool

Active spinning for sync.Mutex. go:linkname sync_runtime_canSpin sync.runtime_canSpin go:nosplit

func sync_runtime_doSpin

func sync_runtime_doSpin()

go:linkname sync_runtime_doSpin sync.runtime_doSpin go:nosplit

func sync_runtime_procPin

func sync_runtime_procPin() int

go:linkname sync_runtime_procPin sync.runtime_procPin go:nosplit

func sync_runtime_procUnpin

func sync_runtime_procUnpin()

go:linkname sync_runtime_procUnpin sync.runtime_procUnpin go:nosplit

func sync_runtime_registerPoolCleanup

func sync_runtime_registerPoolCleanup(f func())

go:linkname sync_runtime_registerPoolCleanup sync.runtime_registerPoolCleanup

func sync_throw

func sync_throw(s string)

go:linkname sync_throw sync.throw

func syncadjustsudogs

func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr

syncadjustsudogs adjusts gp's sudogs and copies the part of gp's stack they refer to while synchronizing with concurrent channel operations. It returns the number of bytes of stack copied.

func sysAlloc

func sysAlloc(n uintptr, sysStat *uint64) unsafe.Pointer

Don't split the stack as this method may be invoked without a valid G, which prevents us from allocating more stack. go:nosplit

func sysFault

func sysFault(v unsafe.Pointer, n uintptr)

func sysFree

func sysFree(v unsafe.Pointer, n uintptr, sysStat *uint64)

Don't split the stack as this function may be invoked without a valid G, which prevents us from allocating more stack. go:nosplit

func sysMap

func sysMap(v unsafe.Pointer, n uintptr, sysStat *uint64)

func sysMmap

func sysMmap(addr unsafe.Pointer, n uintptr, prot, flags, fd int32, off uint32) (p unsafe.Pointer, err int)

sysMmap calls the mmap system call. It is implemented in assembly.

func sysMunmap

func sysMunmap(addr unsafe.Pointer, n uintptr)

sysMunmap calls the munmap system call. It is implemented in assembly.

func sysReserve

func sysReserve(v unsafe.Pointer, n uintptr) unsafe.Pointer

func sysReserveAligned

func sysReserveAligned(v unsafe.Pointer, size, align uintptr) (unsafe.Pointer, uintptr)

sysReserveAligned is like sysReserve, but the returned pointer is aligned to align bytes. It may reserve either n or n+align bytes, so it returns the size that was reserved.

func sysSigaction

func sysSigaction(sig uint32, new, old *sigactiont)

sysSigaction calls the rt_sigaction system call. go:nosplit

func sysUnused

func sysUnused(v unsafe.Pointer, n uintptr)

func sysUsed

func sysUsed(v unsafe.Pointer, n uintptr)

func sysargs

func sysargs(argc int32, argv **byte)

func sysauxv

func sysauxv(auxv []uintptr) int

func syscall_Exit

func syscall_Exit(code int)

go:linkname syscall_Exit syscall.Exit go:nosplit

func syscall_Getpagesize

func syscall_Getpagesize() int

go:linkname syscall_Getpagesize syscall.Getpagesize

func syscall_runtime_AfterExec

func syscall_runtime_AfterExec()

Called from syscall package after Exec. go:linkname syscall_runtime_AfterExec syscall.runtime_AfterExec

func syscall_runtime_AfterFork

func syscall_runtime_AfterFork()

Called from syscall package after fork in parent. go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork go:nosplit

func syscall_runtime_AfterForkInChild

func syscall_runtime_AfterForkInChild()

Called from syscall package after fork in child. It resets non-sigignored signals to the default handler, and restores the signal mask in preparation for the exec.

Because this might be called during a vfork, and therefore may be temporarily sharing address space with the parent process, this must not change any global variables or call into C code that may do so.

go:linkname syscall_runtime_AfterForkInChild syscall.runtime_AfterForkInChild go:nosplit go:nowritebarrierrec

func syscall_runtime_BeforeExec

func syscall_runtime_BeforeExec()

Called from syscall package before Exec. go:linkname syscall_runtime_BeforeExec syscall.runtime_BeforeExec

func syscall_runtime_BeforeFork

func syscall_runtime_BeforeFork()

Called from syscall package before fork. go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork go:nosplit

func syscall_runtime_envs

func syscall_runtime_envs() []string

go:linkname syscall_runtime_envs syscall.runtime_envs

func syscall_setenv_c

func syscall_setenv_c(k string, v string)

Update the C environment if cgo is loaded. Called from syscall.Setenv. go:linkname syscall_setenv_c syscall.setenv_c

func syscall_unsetenv_c

func syscall_unsetenv_c(k string)

Update the C environment if cgo is loaded. Called from syscall.unsetenv. go:linkname syscall_unsetenv_c syscall.unsetenv_c

func sysmon

func sysmon()

Always runs without a P, so write barriers are not allowed.

go:nowritebarrierrec

func systemstack

func systemstack(fn func())

systemstack runs fn on a system stack. If systemstack is called from the per-OS-thread (g0) stack, or if systemstack is called from the signal handling (gsignal) stack, systemstack calls fn directly and returns. Otherwise, systemstack is being called from the limited stack of an ordinary goroutine. In this case, systemstack switches to the per-OS-thread stack, calls fn, and switches back. It is common to use a func literal as the argument, in order to share inputs and outputs with the code around the call to systemstack:

... set up y ...
systemstack(func() {
	x = bigcall(y)
})
... use x ...

go:noescape

func systemstack_switch

func systemstack_switch()

func templateThread

func templateThread()

templateThread is a thread in a known-good state that exists solely to start new threads in known-good states when the calling thread may not be in a good state.

Many programs never need this, so templateThread is started lazily when we first enter a state that might lead to running on a thread in an unknown state.

templateThread runs on an M without a P, so it must not have write barriers.

go:nowritebarrierrec

func testAtomic64

func testAtomic64()

func testdefersizes

func testdefersizes()

Ensure that defer arg sizes that map to the same defer size class also map to the same malloc size class.

func throw

func throw(s string)

go:nosplit

func throwinit

func throwinit()

func tickspersecond

func tickspersecond() int64

Note: Called by runtime/pprof in addition to runtime code.

func timeSleep

func timeSleep(ns int64)

timeSleep puts the current goroutine to sleep for at least ns nanoseconds. go:linkname timeSleep time.Sleep

func timeSleepUntil

func timeSleepUntil() int64

func time_now

func time_now() (sec int64, nsec int32, mono int64)

go:linkname time_now time.now

func timediv

func timediv(v int64, div int32, rem *int32) int32

Poor man's 64-bit division. This is a very special function; do not use it if you are not sure what you are doing. int64 division is lowered into a _divv() call on 386, which does not fit into nosplit functions. Handles overflow in a time-specific manner. go:nosplit

func timerproc

func timerproc(tb *timersBucket)

Timerproc runs the time-driven events. It sleeps until the next event in the tb heap. If addtimer inserts a new earlier event, it wakes timerproc early.

func tooManyOverflowBuckets

func tooManyOverflowBuckets(noverflow uint16, B uint8) bool

tooManyOverflowBuckets reports whether noverflow buckets is too many for a map with 1<<B buckets. Note that most of these overflow buckets must be in sparse use; if use was dense, then we'd have already triggered regular map growth.

func tophash

func tophash(hash uintptr) uint8

tophash calculates the tophash value for hash.

func topofstack

func topofstack(f funcInfo, g0 bool) bool

Does f mark the top of a goroutine stack?

func totaldefersize

func totaldefersize(siz uintptr) uintptr

total size of the memory block for a defer with arg size siz

func traceAcquireBuffer

func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr)

traceAcquireBuffer returns the trace buffer to use and, if necessary, locks it.

func traceAppend

func traceAppend(buf []byte, v uint64) []byte

traceAppend appends v to buf in little-endian-base-128 encoding.
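For reference, little-endian base-128 is the usual unsigned varint scheme: seven payload bits per byte, least-significant group first, with the high bit set on every byte except the last. A minimal sketch of that encoding (illustrative only, not the runtime's own code):

// appendVarint appends v to buf in unsigned little-endian base-128
// form: seven bits per byte, low-order group first, with the high bit
// marking "more bytes follow".
func appendVarint(buf []byte, v uint64) []byte {
	for ; v >= 0x80; v >>= 7 {
		buf = append(buf, 0x80|byte(v))
	}
	return append(buf, byte(v))
}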

func traceEvent

func traceEvent(ev byte, skip int, args ...uint64)

traceEvent writes a single event to the trace buffer, flushing the buffer if necessary. ev is the event type. If skip > 0, the current stack id is written as the last argument (skipping skip top frames). If skip = 0, this event type should contain a stack, but we don't want to collect and remember it for this particular call.

func traceEventLocked

func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, skip int, args ...uint64)

func traceFrameForPC

func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)

traceFrameForPC records the frame information. It may allocate memory.

func traceFullQueue

func traceFullQueue(buf traceBufPtr)

traceFullQueue queues buf into queue of full buffers.

func traceGCDone

func traceGCDone()

func traceGCMarkAssistDone

func traceGCMarkAssistDone()

func traceGCMarkAssistStart

func traceGCMarkAssistStart()

func traceGCSTWDone

func traceGCSTWDone()

func traceGCSTWStart

func traceGCSTWStart(kind int)

func traceGCStart

func traceGCStart()

func traceGCSweepDone

func traceGCSweepDone()

func traceGCSweepSpan

func traceGCSweepSpan(bytesSwept uintptr)

traceGCSweepSpan traces the sweep of a single page.

This may be called outside a traceGCSweepStart/traceGCSweepDone pair; however, it will not emit any trace events in this case.

func traceGCSweepStart

func traceGCSweepStart()

traceGCSweepStart prepares to trace a sweep loop. This does not emit any events until traceGCSweepSpan is called.

traceGCSweepStart must be paired with traceGCSweepDone and there must be no preemption points between these two calls.

func traceGoCreate

func traceGoCreate(newg *g, pc uintptr)

func traceGoEnd

func traceGoEnd()

func traceGoPark

func traceGoPark(traceEv byte, skip int)

func traceGoPreempt

func traceGoPreempt()

func traceGoSched

func traceGoSched()

func traceGoStart

func traceGoStart()

func traceGoSysBlock

func traceGoSysBlock(pp *p)

func traceGoSysCall

func traceGoSysCall()

func traceGoSysExit

func traceGoSysExit(ts int64)

func traceGoUnpark

func traceGoUnpark(gp *g, skip int)

func traceGomaxprocs

func traceGomaxprocs(procs int32)

func traceHeapAlloc

func traceHeapAlloc()

func traceNextGC

func traceNextGC()

func traceProcFree

func traceProcFree(pp *p)

traceProcFree frees trace buffer associated with pp.

func traceProcStart

func traceProcStart()

func traceProcStop

func traceProcStop(pp *p)

func traceReleaseBuffer

func traceReleaseBuffer(pid int32)

traceReleaseBuffer releases a buffer previously acquired with traceAcquireBuffer.

func traceStackID

func traceStackID(mp *m, buf []uintptr, skip int) uint64

func traceString

func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr)

traceString adds a string to the trace.strings and returns the id.

func trace_userLog

func trace_userLog(id uint64, category, message string)

go:linkname trace_userLog runtime/trace.userLog

func trace_userRegion

func trace_userRegion(id, mode uint64, name string)

go:linkname trace_userRegion runtime/trace.userRegion

func trace_userTaskCreate

func trace_userTaskCreate(id, parentID uint64, taskType string)

go:linkname trace_userTaskCreate runtime/trace.userTaskCreate

func trace_userTaskEnd

func trace_userTaskEnd(id uint64)

go:linkname trace_userTaskEnd runtime/trace.userTaskEnd

func tracealloc

func tracealloc(p unsafe.Pointer, size uintptr, typ *_type)

func traceback

func traceback(pc, sp, lr uintptr, gp *g)

func traceback1

func traceback1(pc, sp, lr uintptr, gp *g, flags uint)

func tracebackCgoContext

func tracebackCgoContext(pcbuf *uintptr, printing bool, ctxt uintptr, n, max int) int

tracebackCgoContext handles tracing back a cgo context value, from the context argument to setCgoTraceback, for the gentraceback function. It returns the new value of n.

func tracebackHexdump

func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)

tracebackHexdump hexdumps part of stk around frame.sp and frame.fp for debugging purposes. If the address bad is included in the hexdumped range, it will mark it as well.

func tracebackdefers

func tracebackdefers(gp *g, callback func(*stkframe, unsafe.Pointer) bool, v unsafe.Pointer)

Traceback over the deferred function calls. Report them like calls that have been invoked but not started executing yet.

func tracebackinit

func tracebackinit()

func tracebackothers

func tracebackothers(me *g)

func tracebacktrap

func tracebacktrap(pc, sp, lr uintptr, gp *g)

tracebacktrap is like traceback but expects that the PC and SP were obtained from a trap, not from gp->sched or gp->syscallpc/gp->syscallsp or getcallerpc/getcallersp. Because they are from a trap instead of from a saved pair, the initial PC must not be rewound to the previous instruction. (All the saved pairs record a PC that is a return address, so we rewind it into the CALL instruction.) If gp.m.libcall{g,pc,sp} information is available, it uses that information in preference to the pc/sp/lr passed in.

func tracefree

func tracefree(p unsafe.Pointer, size uintptr)

func tracegc

func tracegc()

func typeBitsBulkBarrier

func typeBitsBulkBarrier(typ *_type, dst, src, size uintptr)

typeBitsBulkBarrier executes a write barrier for every pointer that would be copied from [src, src+size) to [dst, dst+size) by a memmove using the type bitmap to locate those pointer slots.

The type typ must correspond exactly to [src, src+size) and [dst, dst+size). dst, src, and size must be pointer-aligned. The type typ must have a plain bitmap, not a GC program. The only use of this function is in channel sends, and the 64 kB channel element limit takes care of this for us.

Must not be preempted because it typically runs right before memmove, and the GC must observe them as an atomic action.

Callers must perform cgo checks if writeBarrier.cgo.

go:nosplit

func typedmemclr

func typedmemclr(typ *_type, ptr unsafe.Pointer)

typedmemclr clears the typed memory at ptr with type typ. The memory at ptr must already be initialized (and hence in type-safe state). If the memory is being initialized for the first time, see memclrNoHeapPointers.

If the caller knows that typ has pointers, it can alternatively call memclrHasPointers.

go:nosplit

func typedmemmove

func typedmemmove(typ *_type, dst, src unsafe.Pointer)

typedmemmove copies a value of type t to dst from src. Must be nosplit, see #16026.

TODO: Perfect for go:nosplitrec since we can't have a safe point anywhere in the bulk barrier or memmove.

go:nosplit

func typedslicecopy

func typedslicecopy(typ *_type, dst, src slice) int

go:nosplit

func typelinksinit

func typelinksinit()

typelinksinit scans the types from extra modules and builds the moduledata typemap used to de-duplicate type pointers.

func typesEqual

func typesEqual(t, v *_type, seen map[_typePair]struct{}) bool

typesEqual reports whether two types are equal.

Everywhere in the runtime and reflect packages, it is assumed that there is exactly one *_type per Go type, so that pointer equality can be used to test if types are equal. There is one place that breaks this assumption: buildmode=shared. In this case a type can appear as two different pieces of memory. This is hidden from the runtime and reflect package by the per-module typemap built in typelinksinit. It uses typesEqual to map types from later modules back into earlier ones.

Only typelinksinit needs this function.

func typestring

func typestring(x interface{}) string

func unblocksig

func unblocksig(sig uint32)

unblocksig removes sig from the current thread's signal mask. This is nosplit and nowritebarrierrec because it is called from dieFromSignal, which can be called by sigfwdgo while running in the signal handler, on the signal stack, with no g available. go:nosplit go:nowritebarrierrec

func unlock

func unlock(l *mutex)

func unlockOSThread

func unlockOSThread()

go:nosplit

func unlockextra

func unlockextra(mp *m)

go:nosplit

func unminit

func unminit()

Called from dropm to undo the effect of an minit. go:nosplit

func unminitSignals

func unminitSignals()

unminitSignals is called from dropm, via unminit, to undo the effect of calling minit on a non-Go thread. go:nosplit

func unwindm

func unwindm(restore *bool)

func updatememstats

func updatememstats()

go:nowritebarrier

func usleep

func usleep(usec uint32)

func vdsoFindVersion

func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32

func vdsoInitFromSysinfoEhdr

func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)

func vdsoParseSymbols

func vdsoParseSymbols(info *vdsoInfo, version int32)

func vdsoauxv

func vdsoauxv(tag, val uintptr)

func wakep

func wakep()

Tries to add one more P to execute G's. Called when a G is made runnable (newproc, ready).

func walltime

func walltime() (sec int64, nsec int32)

func wbBufFlush

func wbBufFlush(dst *uintptr, src uintptr)

wbBufFlush flushes the current P's write barrier buffer to the GC workbufs. It is passed the slot and value of the write barrier that caused the flush so that it can implement cgocheck.

This must not have write barriers because it is part of the write barrier implementation.

This and everything it calls must be nosplit because 1) the stack contains untyped slots from gcWriteBarrier and 2) there must not be a GC safe point between the write barrier test in the caller and flushing the buffer.

TODO: A "go:nosplitrec" annotation would be perfect for this.

go:nowritebarrierrec go:nosplit

func wbBufFlush1

func wbBufFlush1(_p_ *p)

wbBufFlush1 flushes p's write barrier buffer to the GC work queue.

This must not have write barriers because it is part of the write barrier implementation, so this may lead to infinite loops or buffer corruption.

This must be non-preemptible because it uses the P's workbuf.

go:nowritebarrierrec go:systemstack

func wbBufFlush1Debug

func wbBufFlush1Debug(old, buf1, buf2 uintptr, start *uintptr, next uintptr)

wbBufFlush1Debug is a temporary function for debugging issue #27993. It exists solely to add some context to the traceback.

go:nowritebarrierrec go:systemstack go:noinline

func wirep

func wirep(_p_ *p)

wirep is the first step of acquirep, which actually associates the current M to _p_. This is broken out so we can disallow write barriers for this part, since we don't yet have a P.

go:nowritebarrierrec go:nosplit

func write

func write(fd uintptr, p unsafe.Pointer, n int32) int32

go:noescape

func writeErr

func writeErr(b []byte)

func writeheapdump_m

func writeheapdump_m(fd uintptr)

type BlockProfileRecord 1.1

BlockProfileRecord describes blocking events originated at a particular call sequence (stack trace).

type BlockProfileRecord struct {
        Count  int64
        Cycles int64
        StackRecord
}

type Error

The Error interface identifies a run time error.

type Error interface {
        error

        // RuntimeError is a no-op function but
        // serves to distinguish types that are run time
        // errors from ordinary errors: a type is a
        // run time error if it has a RuntimeError method.
        RuntimeError()
}
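For example, a recovered panic value can be tested against this interface to separate run-time errors from ordinary errors; a minimal sketch:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	defer func() {
		// A failed type assertion panics with a value that
		// implements runtime.Error.
		if r := recover(); r != nil {
			if _, ok := r.(runtime.Error); ok {
				fmt.Println("run-time error:", r)
			} else {
				fmt.Println("other panic:", r)
			}
		}
	}()
	var i interface{} = 42
	_ = i.(string) // panics with a *runtime.TypeAssertionError
}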

type Frame 1.7

Frame is the information returned by Frames for each call frame.

type Frame struct {
        // PC is the program counter for the location in this frame.
        // For a frame that calls another frame, this will be the
        // program counter of a call instruction. Because of inlining,
        // multiple frames may have the same PC value, but different
        // symbolic information.
        PC uintptr

        // Func is the Func value of this call frame. This may be nil
        // for non-Go code or fully inlined functions.
        Func *Func

        // Function is the package path-qualified function name of
        // this call frame. If non-empty, this string uniquely
        // identifies a single function in the program.
        // This may be the empty string if not known.
        // If Func is not nil then Function == Func.Name().
        Function string

        // File and Line are the file name and line number of the
        // location in this frame. For non-leaf frames, this will be
        // the location of a call. These may be the empty string and
        // zero, respectively, if not known.
        File string
        Line int

        // Entry point program counter for the function; may be zero
        // if not known. If Func is not nil then Entry ==
        // Func.Entry().
        Entry uintptr
}

func allFrames

func allFrames(pcs []uintptr) []Frame

allFrames returns all of the Frames corresponding to pcs.

func expandCgoFrames

func expandCgoFrames(pc uintptr) []Frame

expandCgoFrames expands frame information for pc, known to be a non-Go function, using the cgoSymbolizer hook. expandCgoFrames returns nil if pc could not be expanded.

type Frames 1.7

Frames may be used to get function/file/line information for a slice of PC values returned by Callers.

type Frames struct {
        // callers is a slice of PCs that have not yet been expanded to frames.
        callers []uintptr

        // frames is a slice of Frames that have yet to be returned.
        frames     []Frame
        frameStore [2]Frame
}

Example

- more:true | runtime.Callers
- more:true | runtime_test.ExampleFrames.func1
- more:true | runtime_test.ExampleFrames.func2
- more:true | runtime_test.ExampleFrames.func3
- more:true | runtime_test.ExampleFrames
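The lines above are the expected output of the package's Frames example, which walks the calling stack with Callers and CallersFrames. A simplified sketch of that loop (it omits the file-name filtering the real example uses to keep its output stable, so its output will list different frames):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	pcs := make([]uintptr, 10)
	n := runtime.Callers(0, pcs) // skip=0 starts at runtime.Callers itself
	frames := runtime.CallersFrames(pcs[:n])
	for {
		frame, more := frames.Next()
		fmt.Printf("- more:%v | %s\n", more, frame.Function)
		if !more {
			break
		}
	}
}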

func CallersFrames 1.7

func CallersFrames(callers []uintptr) *Frames

CallersFrames takes a slice of PC values returned by Callers and prepares to return function/file/line information. Do not change the slice until you are done with the Frames.

func (*Frames) Next 1.7

func (ci *Frames) Next() (frame Frame, more bool)

Next returns frame information for the next caller. If more is false, there are no more callers (the Frame value is valid).

type Func

A Func represents a Go function in the running binary.

type Func struct {
        opaque struct{} // unexported field to disallow conversions
}

func FuncForPC

func FuncForPC(pc uintptr) *Func

FuncForPC returns a *Func describing the function that contains the given program counter address, or else nil.

If pc represents multiple functions because of inlining, it returns a *Func describing the innermost function, but with an entry of the outermost function.
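A small usage sketch, resolving the current function's pc (obtained with Caller) to symbolic information:

package main

import (
	"fmt"
	"runtime"
)

func describe() {
	// Caller(0) reports the pc of the call site in describe itself;
	// FuncForPC resolves it to a *Func.
	pc, _, _, ok := runtime.Caller(0)
	if !ok {
		return
	}
	f := runtime.FuncForPC(pc)
	if f == nil {
		return
	}
	file, line := f.FileLine(pc)
	fmt.Printf("%s at %s:%d (entry %#x)\n", f.Name(), file, line, f.Entry())
}

func main() { describe() }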

func (*Func) Entry

func (f *Func) Entry() uintptr

Entry returns the entry address of the function.

func (*Func) FileLine

func (f *Func) FileLine(pc uintptr) (file string, line int)

FileLine returns the file name and line number of the source code corresponding to the program counter pc. The result will not be accurate if pc is not a program counter within f.

func (*Func) Name

func (f *Func) Name() string

Name returns the name of the function.

func (*Func) funcInfo

func (f *Func) funcInfo() funcInfo

func (*Func) raw

func (f *Func) raw() *_func

type MemProfileRecord

A MemProfileRecord describes the live objects allocated by a particular call sequence (stack trace).

type MemProfileRecord struct {
        AllocBytes, FreeBytes     int64       // number of bytes allocated, freed
        AllocObjects, FreeObjects int64       // number of objects allocated, freed
        Stack0                    [32]uintptr // stack trace for this record; ends at first 0 entry
}

func (*MemProfileRecord) InUseBytes

func (r *MemProfileRecord) InUseBytes() int64

InUseBytes returns the number of bytes in use (AllocBytes - FreeBytes).

func (*MemProfileRecord) InUseObjects

func (r *MemProfileRecord) InUseObjects() int64

InUseObjects returns the number of objects in use (AllocObjects - FreeObjects).

func (*MemProfileRecord) Stack

func (r *MemProfileRecord) Stack() []uintptr

Stack returns the stack trace associated with the record, a prefix of r.Stack0.
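These records are returned by the MemProfile function (most programs use the higher-level runtime/pprof package instead). A minimal sketch of the grow-and-retry calling pattern, using only the accessors above:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Ask for the required length first, then retry with a slightly
	// larger slice until the whole profile fits.
	var records []runtime.MemProfileRecord
	n, ok := runtime.MemProfile(nil, false)
	for !ok {
		records = make([]runtime.MemProfileRecord, n+50)
		n, ok = runtime.MemProfile(records, false)
	}
	records = records[:n]
	for _, r := range records {
		fmt.Printf("in use: %d bytes in %d objects (stack depth %d)\n",
			r.InUseBytes(), r.InUseObjects(), len(r.Stack()))
	}
}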

type MemStats

A MemStats records statistics about the memory allocator.

type MemStats struct {

        // Alloc is bytes of allocated heap objects.
        //
        // This is the same as HeapAlloc (see below).
        Alloc uint64

        // TotalAlloc is cumulative bytes allocated for heap objects.
        //
        // TotalAlloc increases as heap objects are allocated, but
        // unlike Alloc and HeapAlloc, it does not decrease when
        // objects are freed.
        TotalAlloc uint64

        // Sys is the total bytes of memory obtained from the OS.
        //
        // Sys is the sum of the XSys fields below. Sys measures the
        // virtual address space reserved by the Go runtime for the
        // heap, stacks, and other internal data structures. It's
        // likely that not all of the virtual address space is backed
        // by physical memory at any given moment, though in general
        // it all was at some point.
        Sys uint64

        // Lookups is the number of pointer lookups performed by the
        // runtime.
        //
        // This is primarily useful for debugging runtime internals.
        Lookups uint64

        // Mallocs is the cumulative count of heap objects allocated.
        // The number of live objects is Mallocs - Frees.
        Mallocs uint64

        // Frees is the cumulative count of heap objects freed.
        Frees uint64

        // HeapAlloc is bytes of allocated heap objects.
        //
        // "Allocated" heap objects include all reachable objects, as
        // well as unreachable objects that the garbage collector has
        // not yet freed. Specifically, HeapAlloc increases as heap
        // objects are allocated and decreases as the heap is swept
        // and unreachable objects are freed. Sweeping occurs
        // incrementally between GC cycles, so these two processes
        // occur simultaneously, and as a result HeapAlloc tends to
        // change smoothly (in contrast with the sawtooth that is
        // typical of stop-the-world garbage collectors).
        HeapAlloc uint64

        // HeapSys is bytes of heap memory obtained from the OS.
        //
        // HeapSys measures the amount of virtual address space
        // reserved for the heap. This includes virtual address space
        // that has been reserved but not yet used, which consumes no
        // physical memory, but tends to be small, as well as virtual
        // address space for which the physical memory has been
        // returned to the OS after it became unused (see HeapReleased
        // for a measure of the latter).
        //
        // HeapSys estimates the largest size the heap has had.
        HeapSys uint64

        // HeapIdle is bytes in idle (unused) spans.
        //
        // Idle spans have no objects in them. These spans could be
        // (and may already have been) returned to the OS, or they can
        // be reused for heap allocations, or they can be reused as
        // stack memory.
        //
        // HeapIdle minus HeapReleased estimates the amount of memory
        // that could be returned to the OS, but is being retained by
        // the runtime so it can grow the heap without requesting more
        // memory from the OS. If this difference is significantly
        // larger than the heap size, it indicates there was a recent
        // transient spike in live heap size.
        HeapIdle uint64

        // HeapInuse is bytes in in-use spans.
        //
        // In-use spans have at least one object in them. These spans
        // can only be used for other objects of roughly the same
        // size.
        //
        // HeapInuse minus HeapAlloc estimates the amount of memory
        // that has been dedicated to particular size classes, but is
        // not currently being used. This is an upper bound on
        // fragmentation, but in general this memory can be reused
        // efficiently.
        HeapInuse uint64

        // HeapReleased is bytes of physical memory returned to the OS.
        //
        // This counts heap memory from idle spans that was returned
        // to the OS and has not yet been reacquired for the heap.
        HeapReleased uint64

        // HeapObjects is the number of allocated heap objects.
        //
        // Like HeapAlloc, this increases as objects are allocated and
        // decreases as the heap is swept and unreachable objects are
        // freed.
        HeapObjects uint64

        // StackInuse is bytes in stack spans.
        //
        // In-use stack spans have at least one stack in them. These
        // spans can only be used for other stacks of the same size.
        //
        // There is no StackIdle because unused stack spans are
        // returned to the heap (and hence counted toward HeapIdle).
        StackInuse uint64

        // StackSys is bytes of stack memory obtained from the OS.
        //
        // StackSys is StackInuse, plus any memory obtained directly
        // from the OS for OS thread stacks (which should be minimal).
        StackSys uint64

        // MSpanInuse is bytes of allocated mspan structures.
        MSpanInuse uint64

        // MSpanSys is bytes of memory obtained from the OS for mspan
        // structures.
        MSpanSys uint64

        // MCacheInuse is bytes of allocated mcache structures.
        MCacheInuse uint64

        // MCacheSys is bytes of memory obtained from the OS for
        // mcache structures.
        MCacheSys uint64

        // BuckHashSys is bytes of memory in profiling bucket hash tables.
        BuckHashSys uint64

        // GCSys is bytes of memory in garbage collection metadata.
        GCSys uint64 // Go 1.2

        // OtherSys is bytes of memory in miscellaneous off-heap
        // runtime allocations.
        OtherSys uint64 // Go 1.2

        // NextGC is the target heap size of the next GC cycle.
        //
        // The garbage collector's goal is to keep HeapAlloc ≤ NextGC.
        // At the end of each GC cycle, the target for the next cycle
        // is computed based on the amount of reachable data and the
        // value of GOGC.
        NextGC uint64

        // LastGC is the time the last garbage collection finished, as
        // nanoseconds since 1970 (the UNIX epoch).
        LastGC uint64

        // PauseTotalNs is the cumulative nanoseconds in GC
        // stop-the-world pauses since the program started.
        //
        // During a stop-the-world pause, all goroutines are paused
        // and only the garbage collector can run.
        PauseTotalNs uint64

        // PauseNs is a circular buffer of recent GC stop-the-world
        // pause times in nanoseconds.
        //
        // The most recent pause is at PauseNs[(NumGC+255)%256]. In
        // general, PauseNs[N%256] records the time paused in the most
        // recent N%256th GC cycle. There may be multiple pauses per
        // GC cycle; this is the sum of all pauses during a cycle.
        PauseNs [256]uint64

        // PauseEnd is a circular buffer of recent GC pause end times,
        // as nanoseconds since 1970 (the UNIX epoch).
        //
        // This buffer is filled the same way as PauseNs. There may be
        // multiple pauses per GC cycle; this records the end of the
        // last pause in a cycle.
        PauseEnd [256]uint64 // Go 1.4

        // NumGC is the number of completed GC cycles.
        NumGC uint32

        // NumForcedGC is the number of GC cycles that were forced by
        // the application calling the GC function.
        NumForcedGC uint32 // Go 1.8

        // GCCPUFraction is the fraction of this program's available
        // CPU time used by the GC since the program started.
        //
        // GCCPUFraction is expressed as a number between 0 and 1,
        // where 0 means GC has consumed none of this program's CPU. A
        // program's available CPU time is defined as the integral of
        // GOMAXPROCS since the program started. That is, if
        // GOMAXPROCS is 2 and a program has been running for 10
        // seconds, its "available CPU" is 20 seconds. GCCPUFraction
        // does not include CPU time used for write barrier activity.
        //
        // This is the same as the fraction of CPU reported by
        // GODEBUG=gctrace=1.
        GCCPUFraction float64 // Go 1.5

        // EnableGC indicates that GC is enabled. It is always true,
        // even if GOGC=off.
        EnableGC bool

        // DebugGC is currently unused.
        DebugGC bool

        // BySize reports per-size class allocation statistics.
        //
        // BySize[N] gives statistics for allocations of size S where
        // BySize[N-1].Size < S ≤ BySize[N].Size.
        //
        // This does not report allocations larger than BySize[60].Size.
        BySize [61]struct {
                // Size is the maximum byte size of an object in this
                // size class.
                Size uint32

                // Mallocs is the cumulative count of heap objects
                // allocated in this size class. The cumulative bytes
                // of allocation is Size*Mallocs. The number of live
                // objects in this size class is Mallocs - Frees.
                Mallocs uint64

                // Frees is the cumulative count of heap objects freed
                // in this size class.
                Frees uint64
        }
}
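A short sketch of reading these statistics with ReadMemStats, including the circular-buffer indexing for the most recent pause described in the PauseNs comment:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Printf("heap alloc: %d B, sys: %d B, completed GC cycles: %d\n",
		m.HeapAlloc, m.Sys, m.NumGC)

	// Most recent stop-the-world pause, per the PauseNs documentation.
	if m.NumGC > 0 {
		fmt.Printf("last GC pause: %d ns\n", m.PauseNs[(m.NumGC+255)%256])
	}
}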

type StackRecord

A StackRecord describes a single execution stack.

type StackRecord struct {
        Stack0 [32]uintptr // stack trace for this record; ends at first 0 entry
}

func (*StackRecord) Stack

func (r *StackRecord) Stack() []uintptr

Stack returns the stack trace associated with the record, a prefix of r.Stack0.

type TypeAssertionError

A TypeAssertionError explains a failed type assertion.

type TypeAssertionError struct {
        _interface    *_type
        concrete      *_type
        asserted      *_type
        missingMethod string // one method needed by Interface, missing from Concrete
}

func (*TypeAssertionError) Error

func (e *TypeAssertionError) Error() string

func (*TypeAssertionError) RuntimeError

func (*TypeAssertionError) RuntimeError()

type _defer

A _defer holds an entry on the list of deferred calls. If you add a field here, add code to clear it in freedefer.

type _defer struct {
        siz     int32
        started bool
        sp      uintptr // sp at time of defer
        pc      uintptr
        fn      *funcval
        _panic  *_panic // panic that is running defer
        link    *_defer
}

func newdefer

func newdefer(siz int32) *_defer

Allocate a Defer, usually using the per-P pool. Each defer must be released with freedefer.

This must not grow the stack because there may be a frame without stack map information when this is called.

go:nosplit

type _func

Layout of in-memory per-function information prepared by linker See https://golang.org/s/go12symtab. Keep in sync with linker (../cmd/link/internal/ld/pcln.go:/pclntab) and with package debug/gosym and with symtab.go in package runtime.

type _func struct {
        entry   uintptr // start pc
        nameoff int32   // function name

        args        int32  // in/out args size
        deferreturn uint32 // offset of a deferreturn block from entry, if any.

        pcsp      int32
        pcfile    int32
        pcln      int32
        npcdata   int32
        funcID    funcID  // set for certain special runtime functions
        _         [2]int8 // unused
        nfuncdata uint8   // must be last
}

type _panic

A _panic holds information about an active panic.

This is marked go:notinheap because _panic values must only ever live on the stack.

The argp and link fields are stack pointers, but don't need special handling during stack growth: because they are pointer-typed and _panic values only live on the stack, regular stack pointer adjustment takes care of them.

go:notinheap

type _panic struct {
        argp      unsafe.Pointer // pointer to arguments of deferred call run during panic; cannot move - known to liblink
        arg       interface{}    // argument to panic
        link      *_panic        // link to earlier panic
        recovered bool           // whether this panic is over
        aborted   bool           // the panic was aborted
}

type _type

Needs to be in sync with ../cmd/link/internal/ld/decodesym.go:/^func.commonsize, ../cmd/compile/internal/gc/reflect.go:/^func.dcommontype and ../reflect/type.go:/^type.rtype.

type _type struct {
        size       uintptr
        ptrdata    uintptr // size of memory prefix holding all pointers
        hash       uint32
        tflag      tflag
        align      uint8
        fieldalign uint8
        kind       uint8
        alg        *typeAlg
        // gcdata stores the GC type data for the garbage collector.
        // If the KindGCProg bit is set in kind, gcdata is a GC program.
        // Otherwise it is a ptrmask bitmap. See mbitmap.go for details.
        gcdata    *byte
        str       nameOff
        ptrToThis typeOff
}
var deferType *_type // type of _defer struct

func resolveTypeOff

func resolveTypeOff(ptrInModule unsafe.Pointer, off typeOff) *_type

func (*_type) name

func (t *_type) name() string

func (*_type) nameOff

func (t *_type) nameOff(off nameOff) name

func (*_type) pkgpath

func (t *_type) pkgpath() string

pkgpath returns the path of the package where t was defined, if available. This is not the same as the reflect package's PkgPath method, in that it returns the package path for struct and interface types, not just named types.

func (*_type) string

func (t *_type) string() string

func (*_type) textOff

func (t *_type) textOff(off textOff) unsafe.Pointer

func (*_type) typeOff

func (t *_type) typeOff(off typeOff) *_type

func (*_type) uncommon

func (t *_type) uncommon() *uncommontype

type _typePair

type _typePair struct {
        t1 *_type
        t2 *_type
}

type adjustinfo

type adjustinfo struct {
        old   stack
        delta uintptr // ptr distance from old to new stack (newbase - oldbase)
        cache pcvalueCache

        // sghi is the highest sudog.elem on the stack.
        sghi uintptr
}

type ancestorInfo

ancestorInfo records details of where a goroutine was started.

type ancestorInfo struct {
        pcs  []uintptr // pcs from the stack of this goroutine
        goid int64     // goroutine id of this goroutine; original goroutine possibly dead
        gopc uintptr   // pc of go statement that created this goroutine
}

type arenaHint

arenaHint is a hint for where to grow the heap arenas. See mheap_.arenaHints.

go:notinheap

type arenaHint struct {
        addr uintptr
        down bool
        next *arenaHint
}

type arenaIdx

type arenaIdx uint

func arenaIndex

func arenaIndex(p uintptr) arenaIdx

arenaIndex returns the index into mheap_.arenas of the arena containing metadata for p. This index combines an index into the L1 map and an index into the L2 map and should be used as mheap_.arenas[ai.l1()][ai.l2()].

If p is outside the range of valid heap addresses, either l1() or l2() will be out of bounds.

It is nosplit because it's called by spanOf and several other nosplit functions.

go:nosplit

func (arenaIdx) l1

func (i arenaIdx) l1() uint

func (arenaIdx) l2

func (i arenaIdx) l2() uint

type arraytype

type arraytype struct {
        typ   _type
        elem  *_type
        slice *_type
        len   uintptr
}

type bitvector

Information from the compiler about the layout of stack frames.

type bitvector struct {
        n        int32 // # of bits
        bytedata *uint8
}

func makeheapobjbv

func makeheapobjbv(p uintptr, size uintptr) bitvector

func progToPointerMask

func progToPointerMask(prog *byte, size uintptr) bitvector

progToPointerMask returns the 1-bit pointer mask output by the GC program prog. size is the size of the region described by prog, in bytes. The resulting bitvector will have no more than size/sys.PtrSize bits.

func stackmapdata

func stackmapdata(stkmap *stackmap, n int32) bitvector

go:nowritebarrier

func (*bitvector) ptrbit

func (bv *bitvector) ptrbit(i uintptr) uint8

ptrbit returns the i'th bit in bv. ptrbit is less efficient than iterating directly over bitvector bits, and should only be used in non-performance-critical code. See adjustpointers for an example of a high-efficiency walk of a bitvector.
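As a rough illustration of the indexing this implies (byte i/8, bit i%8), here is a sketch over an ordinary byte slice; the runtime's ptrbit does the equivalent through bv.bytedata:

// bit returns the i'th bit of a bitmap stored least-significant-bit
// first within each byte. Illustrative sketch only.
func bit(bytedata []byte, i uintptr) uint8 {
	return (bytedata[i/8] >> (i % 8)) & 1
}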

type blockRecord

A blockRecord is the bucket data for a bucket of type blockProfile, which is used in blocking and mutex profiles.

type blockRecord struct {
        count  int64
        cycles int64
}

type bmap

A bucket for a Go map.

type bmap struct {
        // tophash generally contains the top byte of the hash value
        // for each key in this bucket. If tophash[0] < minTopHash,
        // tophash[0] is a bucket evacuation state instead.
        tophash [bucketCnt]uint8
}

func makeBucketArray

func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap)

makeBucketArray initializes a backing array for map buckets. 1<<b is the minimum number of buckets to allocate. dirtyalloc should either be nil or a bucket array previously allocated by makeBucketArray with the same t and b parameters. If dirtyalloc is nil, a new backing array will be allocated; otherwise dirtyalloc will be cleared and reused as the backing array.

func (*bmap) keys

func (b *bmap) keys() unsafe.Pointer

func (*bmap) overflow

func (b *bmap) overflow(t *maptype) *bmap

func (*bmap) setoverflow

func (b *bmap) setoverflow(t *maptype, ovf *bmap)

type bucket

A bucket holds per-call-stack profiling information. The representation is a bit sleazy, inherited from C. This struct defines the bucket header. It is followed in memory by the stack words and then the actual record data, either a memRecord or a blockRecord.

Per-call-stack profiling information. Lookup by hashing call stack into a linked-list hash table.

No heap pointers.

go:notinheap

type bucket struct {
        next    *bucket
        allnext *bucket
        typ     bucketType // memBucket or blockBucket (includes mutexProfile)
        hash    uintptr
        size    uintptr
        nstk    uintptr
}

func newBucket

func newBucket(typ bucketType, nstk int) *bucket

newBucket allocates a bucket with the given type and number of stack entries.

func stkbucket

func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket

Return the bucket for stk[0:nstk], allocating a new bucket if needed.

func (*bucket) bp

func (b *bucket) bp() *blockRecord

bp returns the blockRecord associated with the blockProfile bucket b.

func (*bucket) mp

func (b *bucket) mp() *memRecord

mp returns the memRecord associated with the memProfile bucket b.

func (*bucket) stk

func (b *bucket) stk() []uintptr

stk returns the slice in b holding the stack.

type bucketType

type bucketType int
const (
        // profile types
        memProfile bucketType = 1 + iota
        blockProfile
        mutexProfile

        // size of bucket hash table
        buckHashSize = 179999

        // max depth of stack to record in bucket
        maxStack = 32
)

type cgoCallers

Addresses collected in a cgo backtrace when crashing. Length must match arg.Max in x_cgo_callers in runtime/cgo/gcc_traceback.c.

type cgoCallers [32]uintptr

If the signal handler receives a SIGPROF signal on a non-Go thread, it tries to collect a traceback into sigprofCallers. sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback.

var sigprofCallers cgoCallers

type cgoContextArg

cgoContextArg is the type passed to the context function.

type cgoContextArg struct {
        context uintptr
}

type cgoSymbolizerArg

cgoSymbolizerArg is the type passed to cgoSymbolizer.

type cgoSymbolizerArg struct {
        pc       uintptr
        file     *byte
        lineno   uintptr
        funcName *byte
        entry    uintptr
        more     uintptr
        data     uintptr
}

type cgoTracebackArg

cgoTracebackArg is the type passed to cgoTraceback.

type cgoTracebackArg struct {
        context    uintptr
        sigContext uintptr
        buf        *uintptr
        max        uintptr
}

type cgothreadstart

type cgothreadstart struct {
        g   guintptr
        tls *uint64
        fn  unsafe.Pointer
}

type chantype

type chantype struct {
        typ  _type
        elem *_type
        dir  uintptr
}

type childInfo

type childInfo struct {
        // Information passed up from the callee frame about
        // the layout of the outargs region.
        argoff uintptr   // where the arguments start in the frame
        arglen uintptr   // size of args region
        args   bitvector // if args.n >= 0, pointer map of args region
        sp     *uint8    // callee sp
        depth  uintptr   // depth in call stack (0 == most recent)
}

type cpuProfile

type cpuProfile struct {
        lock mutex
        on   bool     // profiling is on
        log  *profBuf // profile events written here

        // extra holds extra stacks accumulated in addNonGo
        // corresponding to profiling signals arriving on
        // non-Go-created threads. Those stacks are written
        // to log the next time a normal Go thread gets the
        // signal handler.
        // Assuming the stacks are 2 words each (we don't get
        // a full traceback from those threads), plus one word
        // size for framing, 100 Hz profiling would generate
        // 300 words per second.
        // Hopefully a normal Go thread will get the profiling
        // signal at least once every few seconds.
        extra     [1000]uintptr
        numExtra  int
        lostExtra uint64 // count of frames lost because extra is full
}
var cpuprof cpuProfile

func (*cpuProfile) add

func (p *cpuProfile) add(gp *g, stk []uintptr)

add adds the stack trace to the profile. It is called from signal handlers and other limited environments and cannot allocate memory or acquire locks that might be held at the time of the signal, nor can it use substantial amounts of stack. go:nowritebarrierrec

func (*cpuProfile) addExtra

func (p *cpuProfile) addExtra()

addExtra adds the "extra" profiling events, queued by addNonGo, to the profile log. addExtra is called either from a signal handler on a Go thread or from an ordinary goroutine; either way it can use stack and has a g. The world may be stopped, though.

func (*cpuProfile) addLostAtomic64

func (p *cpuProfile) addLostAtomic64(count uint64)

func (*cpuProfile) addNonGo

func (p *cpuProfile) addNonGo(stk []uintptr)

addNonGo adds the non-Go stack trace to the profile. It is called from a non-Go thread, so we cannot use much stack at all, nor do anything that needs a g or an m. In particular, we can't call cpuprof.log.write. Instead, we copy the stack into cpuprof.extra, which will be drained the next time a Go thread gets the signal handling event. go:nosplit go:nowritebarrierrec

type dbgVar

type dbgVar struct {
        name  string
        value *int32
}

type divMagic

type divMagic struct {
        shift    uint8
        shift2   uint8
        mul      uint16
        baseMask uint16
}

type eface

type eface struct {
        _type *_type
        data  unsafe.Pointer
}

func convT2E

func convT2E(t *_type, elem unsafe.Pointer) (e eface)

func convT2Enoptr

func convT2Enoptr(t *_type, elem unsafe.Pointer) (e eface)

func efaceOf

func efaceOf(ep *interface{}) *eface

type elfDyn

type elfDyn struct {
        d_tag int64  /* Dynamic entry type */
        d_val uint64 /* Integer value */
}

type elfEhdr

type elfEhdr struct {
        e_ident     [_EI_NIDENT]byte /* Magic number and other info */
        e_type      uint16           /* Object file type */
        e_machine   uint16           /* Architecture */
        e_version   uint32           /* Object file version */
        e_entry     uint64           /* Entry point virtual address */
        e_phoff     uint64           /* Program header table file offset */
        e_shoff     uint64           /* Section header table file offset */
        e_flags     uint32           /* Processor-specific flags */
        e_ehsize    uint16           /* ELF header size in bytes */
        e_phentsize uint16           /* Program header table entry size */
        e_phnum     uint16           /* Program header table entry count */
        e_shentsize uint16           /* Section header table entry size */
        e_shnum     uint16           /* Section header table entry count */
        e_shstrndx  uint16           /* Section header string table index */
}

type elfPhdr

type elfPhdr struct {
        p_type   uint32 /* Segment type */
        p_flags  uint32 /* Segment flags */
        p_offset uint64 /* Segment file offset */
        p_vaddr  uint64 /* Segment virtual address */
        p_paddr  uint64 /* Segment physical address */
        p_filesz uint64 /* Segment size in file */
        p_memsz  uint64 /* Segment size in memory */
        p_align  uint64 /* Segment alignment */
}

type elfShdr

type elfShdr struct {
        sh_name      uint32 /* Section name (string tbl index) */
        sh_type      uint32 /* Section type */
        sh_flags     uint64 /* Section flags */
        sh_addr      uint64 /* Section virtual addr at execution */
        sh_offset    uint64 /* Section file offset */
        sh_size      uint64 /* Section size in bytes */
        sh_link      uint32 /* Link to another section */
        sh_info      uint32 /* Additional section information */
        sh_addralign uint64 /* Section alignment */
        sh_entsize   uint64 /* Entry size if section holds table */
}

type elfSym

type elfSym struct {
        st_name  uint32
        st_info  byte
        st_other byte
        st_shndx uint16
        st_value uint64
        st_size  uint64
}

type elfVerdaux

type elfVerdaux struct {
        vda_name uint32 /* Version or dependency names */
        vda_next uint32 /* Offset in bytes to next verdaux entry */
}

type elfVerdef

type elfVerdef struct {
        vd_version uint16 /* Version revision */
        vd_flags   uint16 /* Version information */
        vd_ndx     uint16 /* Version Index */
        vd_cnt     uint16 /* Number of associated aux entries */
        vd_hash    uint32 /* Version name hash value */
        vd_aux     uint32 /* Offset in bytes to verdaux array */
        vd_next    uint32 /* Offset in bytes to next verdef entry */
}

type epollevent

type epollevent struct {
        events uint32
        data   [8]byte // unaligned uintptr
}

type errorString

An errorString represents a runtime error described by a single string.

type errorString string

func (errorString) Error

func (e errorString) Error() string

func (errorString) RuntimeError

func (e errorString) RuntimeError()

type evacDst

evacDst is an evacuation destination.

type evacDst struct {
        b *bmap          // current destination bucket
        i int            // key/val index into b
        k unsafe.Pointer // pointer to current key storage
        v unsafe.Pointer // pointer to current value storage
}

type finalizer

NOTE: Layout known to queuefinalizer.

type finalizer struct {
        fn   *funcval       // function to call (may be a heap pointer)
        arg  unsafe.Pointer // ptr to object (may be a heap pointer)
        nret uintptr        // bytes of return values from fn
        fint *_type         // type of first argument of fn
        ot   *ptrtype       // type of ptr to object (may be a heap pointer)
}

type finblock

finblock is an array of finalizers to be executed. finblocks are arranged in a linked list for the finalizer queue.

finblock is allocated from non-GC'd memory, so any heap pointers must be specially handled. GC currently assumes that the finalizer queue does not grow during marking (but it can shrink).

go:notinheap

type finblock struct {
        alllink *finblock
        next    *finblock
        cnt     uint32
        _       int32
        fin     [(_FinBlockSize - 2*sys.PtrSize - 2*4) / unsafe.Sizeof(finalizer{})]finalizer
}
var allfin *finblock // list of all blocks
var finc *finblock // cache of free blocks
var finq *finblock // list of finalizers that are to be executed

type findfuncbucket

findfunctab is an array of these structures. Each bucket represents 4096 bytes of the text segment. Each subbucket represents 256 bytes of the text segment. To find a function given a pc, locate the bucket and subbucket for that pc. Add together the idx and subbucket value to obtain a function index. Then scan the functab array starting at that index to find the target function. This table uses 20 bytes for every 4096 bytes of code, or ~0.5% overhead.

type findfuncbucket struct {
        idx        uint32
        subbuckets [16]byte
}
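A hedged sketch of the lookup arithmetic described above, using a stand-in struct and an assumed textStart value (the module's minimum pc); the real findfunc also validates the pc and handles multiple modules:

// pcBucket mirrors findfuncbucket for illustration only.
type pcBucket struct {
	idx        uint32
	subbuckets [16]byte
}

// funcIndex computes the functab index for pc: 4096-byte buckets,
// 256-byte subbuckets, index = bucket idx + subbucket byte.
func funcIndex(table []pcBucket, textStart, pc uintptr) uint32 {
	off := pc - textStart
	b := off / 4096           // which bucket
	sub := (off % 4096) / 256 // which subbucket within it
	return table[b].idx + uint32(table[b].subbuckets[sub])
}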

type fixalloc

FixAlloc is a simple free-list allocator for fixed size objects. Malloc uses a FixAlloc wrapped around sysAlloc to manage its mcache and mspan objects.

Memory returned by fixalloc.alloc is zeroed by default, but the caller may take responsibility for zeroing allocations by setting the zero flag to false. This is only safe if the memory never contains heap pointers.

The caller is responsible for locking around FixAlloc calls. Callers can keep state in the object but the first word is smashed by freeing and reallocating.

Consider marking fixalloc'd types go:notinheap.

type fixalloc struct {
        size   uintptr
        first  func(arg, p unsafe.Pointer) // called first time p is returned
        arg    unsafe.Pointer
        list   *mlink
        chunk  uintptr // use uintptr instead of unsafe.Pointer to avoid write barriers
        nchunk uint32
        inuse  uintptr // in-use bytes now
        stat   *uint64
        zero   bool // zero allocations
}

func (*fixalloc) alloc

func (f *fixalloc) alloc() unsafe.Pointer

func (*fixalloc) free

func (f *fixalloc) free(p unsafe.Pointer)

func (*fixalloc) init

func (f *fixalloc) init(size uintptr, first func(arg, p unsafe.Pointer), arg unsafe.Pointer, stat *uint64)

Initialize f to allocate objects of the given size, using the allocator to obtain chunks of memory.

type forcegcstate

type forcegcstate struct {
        lock mutex
        g    *g
        idle uint32
}

type fpreg1

type fpreg1 struct {
        significand [4]uint16
        exponent    uint16
}

type fpstate

type fpstate struct {
        cwd       uint16
        swd       uint16
        ftw       uint16
        fop       uint16
        rip       uint64
        rdp       uint64
        mxcsr     uint32
        mxcr_mask uint32
        _st       [8]fpxreg
        _xmm      [16]xmmreg
        padding   [24]uint32
}

type fpstate1

type fpstate1 struct {
        cwd       uint16
        swd       uint16
        ftw       uint16
        fop       uint16
        rip       uint64
        rdp       uint64
        mxcsr     uint32
        mxcr_mask uint32
        _st       [8]fpxreg1
        _xmm      [16]xmmreg1
        padding   [24]uint32
}

type fpxreg

type fpxreg struct {
        significand [4]uint16
        exponent    uint16
        padding     [3]uint16
}

type fpxreg1

type fpxreg1 struct {
        significand [4]uint16
        exponent    uint16
        padding     [3]uint16
}

type funcID

A FuncID identifies particular functions that need to be treated specially by the runtime. Note that in some situations involving plugins, there may be multiple copies of a particular special runtime function. Note: this list must match the list in cmd/internal/objabi/funcid.go.

type funcID uint8
const (
        funcID_normal funcID = iota // not a special function
        funcID_runtime_main
        funcID_goexit
        funcID_jmpdefer
        funcID_mcall
        funcID_morestack
        funcID_mstart
        funcID_rt0_go
        funcID_asmcgocall
        funcID_sigpanic
        funcID_runfinq
        funcID_gcBgMarkWorker
        funcID_systemstack_switch
        funcID_systemstack
        funcID_cgocallback_gofunc
        funcID_gogo
        funcID_externalthreadhandler
        funcID_debugCallV1
        funcID_gopanic
        funcID_panicwrap
        funcID_wrapper // any autogenerated code (hash/eq algorithms, method wrappers, etc.)
)

type funcInfo

type funcInfo struct {
        *_func
        datap *moduledata
}

func findfunc

func findfunc(pc uintptr) funcInfo

func (funcInfo) _Func

func (f funcInfo) _Func() *Func

func (funcInfo) valid

func (f funcInfo) valid() bool

type funcinl

Pseudo-Func that is returned for PCs that occur in inlined code. A *Func can be either a *_func or a *funcinl, and they are distinguished by the first uintptr.

type funcinl struct {
        zero  uintptr // set to 0 to distinguish from _func
        entry uintptr // entry of the real (the "outermost") frame.
        name  string
        file  string
        line  int
}

type functab

type functab struct {
        entry   uintptr
        funcoff uintptr
}

type functype

type functype struct {
        typ      _type
        inCount  uint16
        outCount uint16
}

func (*functype) dotdotdot

func (t *functype) dotdotdot() bool

func (*functype) in

func (t *functype) in() []*_type

func (*functype) out

func (t *functype) out() []*_type

type funcval

type funcval struct {
        fn uintptr
}

type g

type g struct {
        // Stack parameters.
        // stack describes the actual stack memory: [stack.lo, stack.hi).
        // stackguard0 is the stack pointer compared in the Go stack growth prologue.
        // It is stack.lo+StackGuard normally, but can be StackPreempt to trigger a preemption.
        // stackguard1 is the stack pointer compared in the C stack growth prologue.
        // It is stack.lo+StackGuard on g0 and gsignal stacks.
        // It is ~0 on other goroutine stacks, to trigger a call to morestackc (and crash).
        stack       stack   // offset known to runtime/cgo
        stackguard0 uintptr // offset known to liblink
        stackguard1 uintptr // offset known to liblink

        _panic         *_panic // innermost panic - offset known to liblink
        _defer         *_defer // innermost defer
        m              *m      // current m; offset known to arm liblink
        sched          gobuf
        syscallsp      uintptr        // if status==Gsyscall, syscallsp = sched.sp to use during gc
        syscallpc      uintptr        // if status==Gsyscall, syscallpc = sched.pc to use during gc
        stktopsp       uintptr        // expected sp at top of stack, to check in traceback
        param          unsafe.Pointer // passed parameter on wakeup
        atomicstatus   uint32
        stackLock      uint32 // sigprof/scang lock; TODO: fold in to atomicstatus
        goid           int64
        schedlink      guintptr
        waitsince      int64      // approx time when the g became blocked
        waitreason     waitReason // if status==Gwaiting
        preempt        bool       // preemption signal, duplicates stackguard0 = stackpreempt
        paniconfault   bool       // panic (instead of crash) on unexpected fault address
        preemptscan    bool       // preempted g does scan for gc
        gcscandone     bool       // g has scanned stack; protected by _Gscan bit in status
        gcscanvalid    bool       // false at start of gc cycle, true if G has not run since last scan; TODO: remove?
        throwsplit     bool       // must not split stack
        raceignore     int8       // ignore race detection events
        sysblocktraced bool       // StartTrace has emitted EvGoInSyscall about this goroutine
        sysexitticks   int64      // cputicks when syscall has returned (for tracing)
        traceseq       uint64     // trace event sequencer
        tracelastp     puintptr   // last P that emitted an event for this goroutine
        lockedm        muintptr
        sig            uint32
        writebuf       []byte
        sigcode0       uintptr
        sigcode1       uintptr
        sigpc          uintptr
        gopc           uintptr         // pc of go statement that created this goroutine
        ancestors      *[]ancestorInfo // ancestor information about the goroutine(s) that created this goroutine (only used if debug.tracebackancestors)
        startpc        uintptr         // pc of goroutine function
        racectx        uintptr
        waiting        *sudog         // sudog structures this g is waiting on (that have a valid elem ptr); in lock order
        cgoCtxt        []uintptr      // cgo traceback context
        labels         unsafe.Pointer // profiler labels
        timer          *timer         // cached timer for time.Sleep
        selectDone     uint32         // are we participating in a select and did someone win the race?

        // gcAssistBytes is this G's GC assist credit in terms of
        // bytes allocated. If this is positive, then the G has credit
        // to allocate gcAssistBytes bytes without assisting. If this
        // is negative, then the G must correct this by performing
        // scan work. We track this in bytes to make it fast to update
        // and check for debt in the malloc hot path. The assist ratio
        // determines how this corresponds to scan work debt.
        gcAssistBytes int64
}

var fing *g // goroutine that runs finalizers

func getg

func getg() *g

getg returns the pointer to the current g. The compiler rewrites calls to this function into instructions that fetch the g directly (from TLS or from the dedicated register).

func gfget

func gfget(_p_ *p) *g

Get from gfree list. If local list is empty, grab a batch from global list.

func globrunqget

func globrunqget(_p_ *p, max int32) *g

Try to get a batch of G's from the global runnable queue. Sched must be locked.

func malg

func malg(stacksize int32) *g

Allocate a new g, with a stack big enough for stacksize bytes.

func netpollunblock

func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g

func runqsteal

func runqsteal(_p_, p2 *p, stealRunNextG bool) *g

Steal half of the elements from the local runnable queue of p2 and put them onto the local runnable queue of p. Returns one of the stolen elements (or nil if it failed).

func timejump

func timejump() *g

func timejumpLocked

func timejumpLocked() *g

func traceReader

func traceReader() *g

traceReader returns the trace reader that should be woken up, if any.

func wakefing

func wakefing() *g

type gList

A gList is a list of Gs linked through g.schedlink. A G can only be on one gQueue or gList at a time.

type gList struct {
        head guintptr
}

func netpoll

func netpoll(block bool) gList

Polls for ready network connections and returns the list of goroutines that become runnable.

func (*gList) empty

func (l *gList) empty() bool

empty reports whether l is empty.

func (*gList) pop

func (l *gList) pop() *g

pop removes and returns the head of l. If l is empty, it returns nil.

func (*gList) push

func (l *gList) push(gp *g)

push adds gp to the head of l.

func (*gList) pushAll

func (l *gList) pushAll(q gQueue)

pushAll prepends all Gs in q to l.

type gQueue

A gQueue is a deque (double-ended queue) of Gs linked through g.schedlink. A G can only be on one gQueue or gList at a time.

type gQueue struct {
        head guintptr
        tail guintptr
}

func (*gQueue) empty

func (q *gQueue) empty() bool

empty reports whether q is empty.

func (*gQueue) pop

func (q *gQueue) pop() *g

pop removes and returns the head of queue q. It returns nil if q is empty.

func (*gQueue) popList

func (q *gQueue) popList() gList

popList takes all Gs in q and returns them as a gList.

func (*gQueue) push

func (q *gQueue) push(gp *g)

push adds gp to the head of q.

func (*gQueue) pushBack

func (q *gQueue) pushBack(gp *g)

pushBack adds gp to the tail of q.

func (*gQueue) pushBackAll

func (q *gQueue) pushBackAll(q2 gQueue)

pushBackAll adds all Gs in q2 to the tail of q. After this call, q2 must not be used.
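
Since gQueue and gList are internal to the runtime, the following self-contained toy illustrates the same intrusive-list idea with a hypothetical node type: each element carries its own link field (analogous to g.schedlink), so a node can be on at most one queue or list at a time and no per-element allocation is needed. This is a sketch of the pattern only, not the runtime's implementation.

	package main

	import "fmt"

	// node plays the role of g; link plays the role of g.schedlink.
	type node struct {
		id   int
		link *node
	}

	// queue is a FIFO of nodes linked through node.link, like gQueue.
	type queue struct {
		head, tail *node
	}

	func (q *queue) empty() bool { return q.head == nil }

	// pushBack adds n to the tail of q.
	func (q *queue) pushBack(n *node) {
		n.link = nil
		if q.tail != nil {
			q.tail.link = n
		} else {
			q.head = n
		}
		q.tail = n
	}

	// pop removes and returns the head of q, or nil if q is empty.
	func (q *queue) pop() *node {
		n := q.head
		if n != nil {
			q.head = n.link
			if q.head == nil {
				q.tail = nil
			}
		}
		return n
	}

	func main() {
		var q queue
		for i := 1; i <= 3; i++ {
			q.pushBack(&node{id: i})
		}
		for n := q.pop(); n != nil; n = q.pop() {
			fmt.Println(n.id) // 1, 2, 3
		}
	}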

type gcBits

gcBits is an alloc/mark bitmap. This is always used as *gcBits.

go:notinheap

type gcBits uint8

func newAllocBits

func newAllocBits(nelems uintptr) *gcBits

newAllocBits returns a pointer to 8 byte aligned bytes to be used for this span's alloc bits. newAllocBits is used to provide newly initialized spans with allocation bits. For spans that are not being initialized, the mark bits are repurposed as allocation bits when the span is swept.

func newMarkBits

func newMarkBits(nelems uintptr) *gcBits

newMarkBits returns a pointer to 8 byte aligned bytes to be used for a span's mark bits.

func (*gcBits) bitp

func (b *gcBits) bitp(n uintptr) (bytep *uint8, mask uint8)

bitp returns a pointer to the byte containing bit n and a mask for selecting that bit from *bytep.
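
The bit-to-byte mapping behind bitp is ordinary bit arithmetic. A minimal sketch using a plain byte slice rather than the runtime's gcBits type:

	package main

	import "fmt"

	// bitp returns the index of the byte containing bit n and a mask for
	// selecting that bit within that byte (analogous to gcBits.bitp).
	func bitp(n uintptr) (byteIndex uintptr, mask uint8) {
		return n / 8, 1 << (n % 8)
	}

	func main() {
		bits := make([]uint8, 4) // room for 32 bits

		// Set bit 11, then test it.
		i, mask := bitp(11)
		bits[i] |= mask
		fmt.Println(bits[i]&mask != 0) // true
	}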

func (*gcBits) bytep

func (b *gcBits) bytep(n uintptr) *uint8

bytep returns a pointer to the n'th byte of b.

type gcBitsArena

go:notinheap

type gcBitsArena struct {
        // gcBitsHeader // side step recursive type bug (issue 14620) by including fields by hand.
        free uintptr // free is the index into bits of the next free byte; read/write atomically
        next *gcBitsArena
        bits [gcBitsChunkBytes - gcBitsHeaderBytes]gcBits
}

func newArenaMayUnlock

func newArenaMayUnlock() *gcBitsArena

newArenaMayUnlock allocates and zeroes a gcBits arena. The caller must hold gcBitsArena.lock. This may temporarily release it.

func (*gcBitsArena) tryAlloc

func (b *gcBitsArena) tryAlloc(bytes uintptr) *gcBits

tryAlloc allocates from b or returns nil if b does not have enough room. This is safe to call concurrently.

type gcBitsHeader

type gcBitsHeader struct {
        free uintptr // free is the index into bits of the next free byte.
        next uintptr // *gcBits triggers recursive type bug. (issue 14620)
}

type gcControllerState

type gcControllerState struct {
        // scanWork is the total scan work performed this cycle. This
        // is updated atomically during the cycle. Updates occur in
        // bounded batches, since it is both written and read
        // throughout the cycle. At the end of the cycle, this is how
        // much of the retained heap is scannable.
        //
        // Currently this is the bytes of heap scanned. For most uses,
        // this is an opaque unit of work, but for estimation the
        // definition is important.
        scanWork int64

        // bgScanCredit is the scan work credit accumulated by the
        // concurrent background scan. This credit is accumulated by
        // the background scan and stolen by mutator assists. This is
        // updated atomically. Updates occur in bounded batches, since
        // it is both written and read throughout the cycle.
        bgScanCredit int64

        // assistTime is the nanoseconds spent in mutator assists
        // during this cycle. This is updated atomically. Updates
        // occur in bounded batches, since it is both written and read
        // throughout the cycle.
        assistTime int64

        // dedicatedMarkTime is the nanoseconds spent in dedicated
        // mark workers during this cycle. This is updated atomically
        // at the end of the concurrent mark phase.
        dedicatedMarkTime int64

        // fractionalMarkTime is the nanoseconds spent in the
        // fractional mark worker during this cycle. This is updated
        // atomically throughout the cycle and will be up-to-date if
        // the fractional mark worker is not currently running.
        fractionalMarkTime int64

        // idleMarkTime is the nanoseconds spent in idle marking
        // during this cycle. This is updated atomically throughout
        // the cycle.
        idleMarkTime int64

        // markStartTime is the absolute start time in nanoseconds
        // that assists and background mark workers started.
        markStartTime int64

        // dedicatedMarkWorkersNeeded is the number of dedicated mark
        // workers that need to be started. This is computed at the
        // beginning of each cycle and decremented atomically as
        // dedicated mark workers get started.
        dedicatedMarkWorkersNeeded int64

        // assistWorkPerByte is the ratio of scan work to allocated
        // bytes that should be performed by mutator assists. This is
        // computed at the beginning of each cycle and updated every
        // time heap_scan is updated.
        assistWorkPerByte float64

        // assistBytesPerWork is 1/assistWorkPerByte.
        assistBytesPerWork float64

        // fractionalUtilizationGoal is the fraction of wall clock
        // time that should be spent in the fractional mark worker on
        // each P that isn't running a dedicated worker.
        //
        // For example, if the utilization goal is 25% and there are
        // no dedicated workers, this will be 0.25. If the goal is
        // 25%, there is one dedicated worker, and GOMAXPROCS is 5,
        // this will be 0.05 to make up the missing 5%.
        //
        // If this is zero, no fractional workers are needed.
        fractionalUtilizationGoal float64

        _ cpu.CacheLinePad
}

gcController implements the GC pacing controller that determines when to trigger concurrent garbage collection and how much marking work to do in mutator assists and background marking.

It uses a feedback control algorithm to adjust the memstats.gc_trigger trigger based on the heap growth and GC CPU utilization each cycle. This algorithm optimizes for heap growth to match GOGC and for CPU utilization between assist and background marking to be 25% of GOMAXPROCS. The high-level design of this algorithm is documented at https://golang.org/s/go15gcpacing.

All fields of gcController are used only during a single mark cycle.

var gcController gcControllerState
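
As a worked example of how assistWorkPerByte and assistBytesPerWork convert between allocation bytes and scan work: a goroutine whose gcAssistBytes has gone negative owes scan work proportional to its byte debt. The numbers below are made up for illustration; the runtime recomputes the ratio each cycle as described above.

	package main

	import "fmt"

	func main() {
		// Hypothetical values for illustration only.
		assistWorkPerByte := 0.5 // scan work owed per byte allocated
		assistBytesPerWork := 1 / assistWorkPerByte

		gcAssistBytes := int64(-4096) // this G has over-allocated by 4 KiB

		// Scan-work debt the G must perform (or steal from bgScanCredit)
		// before it may continue allocating without assisting.
		debt := int64(float64(-gcAssistBytes) * assistWorkPerByte)
		fmt.Println(debt) // 2048

		// Conversely, performing scanWork units of scanning earns back
		// scanWork * assistBytesPerWork bytes of allocation credit.
		scanWork := int64(1024)
		credit := int64(float64(scanWork) * assistBytesPerWork)
		fmt.Println(credit) // 2048
	}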

func (*gcControllerState) endCycle

func (c *gcControllerState) endCycle() float64

endCycle computes the trigger ratio for the next cycle.

func (*gcControllerState) enlistWorker

func (c *gcControllerState) enlistWorker()

enlistWorker encourages another dedicated mark worker to start on another P if there are spare worker slots. It is used by putfull when more work is made available.

go:nowritebarrier

func (*gcControllerState) findRunnableGCWorker

func (c *gcControllerState) findRunnableGCWorker(_p_ *p) *g

findRunnableGCWorker returns the background mark worker for _p_ if it should be run. This must only be called when gcBlackenEnabled != 0.

func (*gcControllerState) revise

func (c *gcControllerState) revise()

revise updates the assist ratio during the GC cycle to account for improved estimates. This should be called either under STW or whenever memstats.heap_scan, memstats.heap_live, or memstats.next_gc is updated (with mheap_.lock held).

It should only be called when gcBlackenEnabled != 0 (because this is when assists are enabled and the necessary statistics are available).

func (*gcControllerState) startCycle

func (c *gcControllerState) startCycle()

startCycle resets the GC controller's state and computes estimates for a new GC cycle. The caller must hold worldsema.

type gcDrainFlags

type gcDrainFlags int
const (
        gcDrainUntilPreempt gcDrainFlags = 1 << iota
        gcDrainFlushBgCredit
        gcDrainIdle
        gcDrainFractional
)

type gcMarkWorkerMode

gcMarkWorkerMode represents the mode that a concurrent mark worker should operate in.

Concurrent marking happens through four different mechanisms. One is mutator assists, which happen in response to allocations and are not scheduled. The other three are variations in the per-P mark workers and are distinguished by gcMarkWorkerMode.

type gcMarkWorkerMode int
const (
        // gcMarkWorkerDedicatedMode indicates that the P of a mark
        // worker is dedicated to running that mark worker. The mark
        // worker should run without preemption.
        gcMarkWorkerDedicatedMode gcMarkWorkerMode = iota

        // gcMarkWorkerFractionalMode indicates that a P is currently
        // running the "fractional" mark worker. The fractional worker
        // is necessary when GOMAXPROCS*gcBackgroundUtilization is not
        // an integer. The fractional worker should run until it is
        // preempted and will be scheduled to pick up the fractional
        // part of GOMAXPROCS*gcBackgroundUtilization.
        gcMarkWorkerFractionalMode

        // gcMarkWorkerIdleMode indicates that a P is running the mark
        // worker because it has nothing else to do. The idle worker
        // should run until it is preempted and account its time
        // against gcController.idleMarkTime.
        gcMarkWorkerIdleMode
)

type gcMode

gcMode indicates how concurrent a GC cycle should be.

type gcMode int
const (
        gcBackgroundMode gcMode = iota // concurrent GC and sweep
        gcForceMode                    // stop-the-world GC now, concurrent sweep
        gcForceBlockMode               // stop-the-world GC now and STW sweep (forced by user)
)

type gcSweepBlock

type gcSweepBlock struct {
        spans [gcSweepBlockEntries]*mspan
}

type gcSweepBuf

A gcSweepBuf is a set of *mspans.

gcSweepBuf is safe for concurrent push operations *or* concurrent pop operations, but not both simultaneously.

type gcSweepBuf struct {
        spineLock mutex
        spine     unsafe.Pointer // *[N]*gcSweepBlock, accessed atomically
        spineLen  uintptr        // Spine array length, accessed atomically
        spineCap  uintptr        // Spine array cap, accessed under lock

        // index is the first unused slot in the logical concatenation
        // of all blocks. It is accessed atomically.
        index uint32
}

func (*gcSweepBuf) block

func (b *gcSweepBuf) block(i int) []*mspan

block returns the spans in the i'th block of buffer b. block is safe to call concurrently with push.

func (*gcSweepBuf) numBlocks

func (b *gcSweepBuf) numBlocks() int

numBlocks returns the number of blocks in buffer b. numBlocks is safe to call concurrently with any other operation. Spans that have been pushed prior to the call to numBlocks are guaranteed to appear in some block in the range [0, numBlocks()), assuming there are no intervening pops. Spans that are pushed after the call may also appear in these blocks.

func (*gcSweepBuf) pop

func (b *gcSweepBuf) pop() *mspan

pop removes and returns a span from buffer b, or nil if b is empty. pop is safe to call concurrently with other pop operations, but NOT to call concurrently with push.

func (*gcSweepBuf) push

func (b *gcSweepBuf) push(s *mspan)

push adds span s to buffer b. push is safe to call concurrently with other push operations, but NOT to call concurrently with pop.
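
The spine/block indexing is simple arithmetic over the logical concatenation of blocks. A sketch, with blockEntries standing in for the internal gcSweepBlockEntries constant (whose value is not part of this listing):

	package main

	import "fmt"

	// blockEntries stands in for gcSweepBlockEntries; the real value is an
	// internal constant not shown here.
	const blockEntries = 512

	// locate maps a logical slot index (into the concatenation of all
	// blocks) to a (block, slot-within-block) pair, the way push and block
	// address entries through the spine.
	func locate(index uint32) (block, slot uint32) {
		return index / blockEntries, index % blockEntries
	}

	func main() {
		fmt.Println(locate(0))    // 0 0
		fmt.Println(locate(511))  // 0 511
		fmt.Println(locate(1300)) // 2 276
	}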

type gcTrigger

A gcTrigger is a predicate for starting a GC cycle. Specifically, it is an exit condition for the _GCoff phase.

type gcTrigger struct {
        kind gcTriggerKind
        now  int64  // gcTriggerTime: current time
        n    uint32 // gcTriggerCycle: cycle number to start
}

func (gcTrigger) test

func (t gcTrigger) test() bool

test reports whether the trigger condition is satisfied, meaning that the exit condition for the _GCoff phase has been met. The exit condition should be tested when allocating.

type gcTriggerKind

type gcTriggerKind int
const (
        // gcTriggerAlways indicates that a cycle should be started
        // unconditionally, even if GOGC is off or we're in a cycle
        // right now. This cannot be consolidated with other cycles.
        gcTriggerAlways gcTriggerKind = iota

        // gcTriggerHeap indicates that a cycle should be started when
        // the heap size reaches the trigger heap size computed by the
        // controller.
        gcTriggerHeap

        // gcTriggerTime indicates that a cycle should be started when
        // it's been more than forcegcperiod nanoseconds since the
        // previous GC cycle.
        gcTriggerTime

        // gcTriggerCycle indicates that a cycle should be started if
        // we have not yet started cycle number gcTrigger.n (relative
        // to work.cycles).
        gcTriggerCycle
)
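
A minimal sketch of how such a predicate can be structured. The gcState fields and thresholds below are hypothetical stand-ins for the runtime state (memstats, forcegcperiod, work.cycles) that the real test consults:

	package main

	import (
		"fmt"
		"time"
	)

	type triggerKind int

	const (
		triggerAlways triggerKind = iota
		triggerHeap
		triggerTime
		triggerCycle
	)

	// trigger mirrors the shape of gcTrigger: a kind plus the inputs that
	// kind needs.
	type trigger struct {
		kind triggerKind
		now  int64  // for triggerTime: current time in nanoseconds
		n    uint32 // for triggerCycle: cycle number to start
	}

	// gcState is a made-up bundle of the inputs the predicate reads.
	type gcState struct {
		heapLive    uint64 // bytes of live-ish heap (grows with allocation)
		gcTrigger   uint64 // heap size at which a cycle should start
		lastGC      int64  // end time of the previous cycle, ns
		forcePeriod int64  // maximum time between cycles, ns
		cycles      uint32 // number of cycles started so far
	}

	func (t trigger) test(s *gcState) bool {
		switch t.kind {
		case triggerAlways:
			return true
		case triggerHeap:
			return s.heapLive >= s.gcTrigger
		case triggerTime:
			return s.lastGC != 0 && t.now-s.lastGC > s.forcePeriod
		case triggerCycle:
			// Start if cycle t.n has not been started yet.
			return int32(t.n-s.cycles) > 0
		}
		return false
	}

	func main() {
		s := &gcState{heapLive: 9 << 20, gcTrigger: 8 << 20, lastGC: 1, forcePeriod: int64(2 * time.Minute), cycles: 3}
		fmt.Println(trigger{kind: triggerHeap}.test(s))                                    // true: heap passed the trigger
		fmt.Println(trigger{kind: triggerCycle, n: 3}.test(s))                             // false: cycle 3 already started
		fmt.Println(trigger{kind: triggerTime, now: s.lastGC + s.forcePeriod + 1}.test(s)) // true: too long since last GC
	}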

type gcWork

A gcWork provides the interface to produce and consume work for the garbage collector.

A gcWork can be used on the stack as follows:

	(preemption must be disabled)
	gcw := &getg().m.p.ptr().gcw
	.. call gcw.put() to produce and gcw.tryGet() to consume ..

It's important that any use of gcWork during the mark phase prevent the garbage collector from transitioning to mark termination since gcWork may locally hold GC work buffers. This can be done by disabling preemption (systemstack or acquirem).

type gcWork struct {
        // wbuf1 and wbuf2 are the primary and secondary work buffers.
        //
        // This can be thought of as a stack of both work buffers'
        // pointers concatenated. When we pop the last pointer, we
        // shift the stack up by one work buffer by bringing in a new
        // full buffer and discarding an empty one. When we fill both
        // buffers, we shift the stack down by one work buffer by
        // bringing in a new empty buffer and discarding a full one.
        // This way we have one buffer's worth of hysteresis, which
        // amortizes the cost of getting or putting a work buffer over
        // at least one buffer of work and reduces contention on the
        // global work lists.
        //
        // wbuf1 is always the buffer we're currently pushing to and
        // popping from and wbuf2 is the buffer that will be discarded
        // next.
        //
        // Invariant: Both wbuf1 and wbuf2 are nil or neither are.
        wbuf1, wbuf2 *workbuf

        // Bytes marked (blackened) on this gcWork. This is aggregated
        // into work.bytesMarked by dispose.
        bytesMarked uint64

        // Scan work performed on this gcWork. This is aggregated into
        // gcController by dispose and may also be flushed by callers.
        scanWork int64

        // flushedWork indicates that a non-empty work buffer was
        // flushed to the global work list since the last gcMarkDone
        // termination check. Specifically, this indicates that this
        // gcWork may have communicated work to another gcWork.
        flushedWork bool

        // pauseGen causes put operations to spin while pauseGen ==
        // gcWorkPauseGen if debugCachedWork is true.
        pauseGen uint32

        // putGen is the pauseGen of the last put operation.
        putGen uint32

        // pauseStack is the stack at which this P was paused if
        // debugCachedWork is true.
        pauseStack [16]uintptr
}
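
Since gcWork itself is internal to the runtime, the following self-contained toy mirrors only the calling pattern described above (put to produce, tryGet to consume, 0 meaning no work left). The toyWork name and its slice-backed storage are purely illustrative, not how gcWork buffers work.

	package main

	import "fmt"

	// toyWork mimics the gcWork calling convention: put records object
	// addresses, tryGet removes and returns one, and returns 0 when no
	// work remains.
	type toyWork struct {
		objs []uintptr
	}

	func (w *toyWork) put(obj uintptr) { w.objs = append(w.objs, obj) }

	func (w *toyWork) tryGet() uintptr {
		if len(w.objs) == 0 {
			return 0
		}
		obj := w.objs[len(w.objs)-1]
		w.objs = w.objs[:len(w.objs)-1]
		return obj
	}

	func main() {
		var gcw toyWork
		gcw.put(0x1000)
		gcw.put(0x2000)

		// The drain loop mirrors how a mark worker consumes work.
		for obj := gcw.tryGet(); obj != 0; obj = gcw.tryGet() {
			fmt.Printf("scan object at %#x\n", obj)
		}
	}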

func (*gcWork) balance

func (w *gcWork) balance()

balance moves some work that's cached in this gcWork back on the global queue. go:nowritebarrierrec

func (*gcWork) checkPut

func (w *gcWork) checkPut(ptr uintptr, ptrs []uintptr)

func (*gcWork) dispose

func (w *gcWork) dispose()

dispose returns any cached pointers to the global queue. The buffers are being put on the full queue so that the write barriers will not simply reacquire them before the GC can inspect them. This helps reduce the mutator's ability to hide pointers during the concurrent mark phase.

go:nowritebarrierrec

func (*gcWork) empty

func (w *gcWork) empty() bool

empty reports whether w has no mark work available. go:nowritebarrierrec

func (*gcWork) init

func (w *gcWork) init()

func (*gcWork) put

func (w *gcWork) put(obj uintptr)

put enqueues a pointer for the garbage collector to trace. obj must point to the beginning of a heap object or an oblet. go:nowritebarrierrec

func (*gcWork) putBatch

func (w *gcWork) putBatch(obj []uintptr)

putBatch performs a put on every pointer in obj. See put for constraints on these pointers.

go:nowritebarrierrec

func (*gcWork) putFast

func (w *gcWork) putFast(obj uintptr) bool

putFast does a put and reports whether it could be done quickly; otherwise it returns false and the caller needs to call put. go:nowritebarrierrec

func (*gcWork) tryGet

func (w *gcWork) tryGet() uintptr

tryGet dequeues a pointer for the garbage collector to trace.

If there are no pointers remaining in this gcWork or in the global queue, tryGet returns 0. Note that there may still be pointers in other gcWork instances or other caches. go:nowritebarrierrec

func (*gcWork) tryGetFast

func (w *gcWork) tryGetFast() uintptr

tryGetFast dequeues a pointer for the garbage collector to trace if one is readily available. Otherwise it returns 0 and the caller is expected to call tryGet(). go:nowritebarrierrec

type gclink

A gclink is a node in a linked list of blocks, like mlink, but it is opaque to the garbage collector. The GC does not trace the pointers during collection, and the compiler does not emit write barriers for assignments of gclinkptr values. Code should store references to gclinks as gclinkptr, not as *gclink.

type gclink struct {
        next gclinkptr
}

type gclinkptr

A gclinkptr is a pointer to a gclink, but it is opaque to the garbage collector.

type gclinkptr uintptr

func nextFreeFast

func nextFreeFast(s *mspan) gclinkptr

nextFreeFast returns the next free object if one is quickly available. Otherwise it returns 0.

func stackpoolalloc

func stackpoolalloc(order uint8) gclinkptr

Allocates a stack from the free pool. Must be called with stackpoolmu held.

func (gclinkptr) ptr

func (p gclinkptr) ptr() *gclink

ptr returns the *gclink form of p. The result should be used for accessing fields, not stored in other data structures.

type gobuf

type gobuf struct {
        // The offsets of sp, pc, and g are known to (hard-coded in) libmach.
        //
        // ctxt is unusual with respect to GC: it may be a
        // heap-allocated funcval, so GC needs to track it, but it
        // needs to be set and cleared from assembly, where it's
        // difficult to have write barriers. However, ctxt is really a
        // saved, live register, and we only ever exchange it between
        // the real register and the gobuf. Hence, we treat it as a
        // root during stack scanning, which means assembly that saves
        // and restores it doesn't need write barriers. It's still
        // typed as a pointer so that any other writes from Go get
        // write barriers.
        sp   uintptr
        pc   uintptr
        g    guintptr
        ctxt unsafe.Pointer
        ret  sys.Uintreg
        lr   uintptr
        bp   uintptr // for GOEXPERIMENT=framepointer
}

type gsignalStack

gsignalStack saves the fields of the gsignal stack changed by setGsignalStack.

type gsignalStack struct {
        stack       stack
        stackguard0 uintptr
        stackguard1 uintptr
        stktopsp    uintptr
}

type guintptr

A guintptr holds a goroutine pointer, but typed as a uintptr to bypass write barriers. It is used in the Gobuf goroutine state and in scheduling lists that are manipulated without a P.

The Gobuf.g goroutine pointer is almost always updated by assembly code. In one of the few places it is updated by Go code - func save - it must be treated as a uintptr to avoid a write barrier being emitted at a bad time. Instead of figuring out how to emit the write barriers missing in the assembly manipulation, we change the type of the field to uintptr, so that it does not require write barriers at all.

Goroutine structs are published in the allg list and never freed. That will keep the goroutine structs from being collected. There is never a time that Gobuf.g's contain the only references to a goroutine: the publishing of the goroutine in allg comes first. Goroutine pointers are also kept in non-GC-visible places like TLS, so I can't see them ever moving. If we did want to start moving data in the GC, we'd need to allocate the goroutine structs from an alternate arena. Using guintptr doesn't make that problem any worse.

type guintptr uintptr

func (*guintptr) cas

func (gp *guintptr) cas(old, new guintptr) bool

go:nosplit

func (guintptr) ptr

func (gp guintptr) ptr() *g

go:nosplit

func (*guintptr) set

func (gp *guintptr) set(g *g)

go:nosplit
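
A minimal sketch of the ptr/set round trip with a hypothetical node type. This is illustrative only: ordinary Go code must not stash GC-managed pointers in uintptrs, which the runtime gets away with here only because goroutine structs are kept alive by allg and never move, as noted above.

	package main

	import (
		"fmt"
		"unsafe"
	)

	type node struct{ v int }

	// nodeptr holds a *node as a uintptr, analogous to guintptr holding a *g.
	type nodeptr uintptr

	func (p nodeptr) ptr() *node   { return (*node)(unsafe.Pointer(p)) }
	func (p *nodeptr) set(n *node) { *p = nodeptr(unsafe.Pointer(n)) }

	// keepAlive stands in for allg: it keeps the node reachable so the
	// uintptr-typed reference cannot dangle.
	var keepAlive = &node{v: 42}

	func main() {
		var p nodeptr
		p.set(keepAlive)
		fmt.Println(p.ptr().v) // 42
	}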

type hchan

type hchan struct {
        qcount   uint           // total data in the queue
        dataqsiz uint           // size of the circular queue
        buf      unsafe.Pointer // points to an array of dataqsiz elements
        elemsize uint16
        closed   uint32
        elemtype *_type // element type
        sendx    uint   // send index
        recvx    uint   // receive index
        recvq    waitq  // list of recv waiters
        sendq    waitq  // list of send waiters

        // lock protects all fields in hchan, as well as several
        // fields in sudogs blocked on this channel.
        //
        // Do not change another G's status while holding this lock
        // (in particular, do not ready a G), as this can deadlock
        // with stack shrinking.
        lock mutex
}
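
The buffered part of a channel is a plain circular queue described by qcount, dataqsiz, buf, sendx, and recvx. A self-contained sketch of just that index arithmetic (no locking, blocking, or waiter queues):

	package main

	import "fmt"

	// ring is a bounded FIFO using the same index scheme as hchan's buffer:
	// sendx is where the next value is written, recvx where the next value
	// is read, and both wrap around when they reach the buffer size.
	type ring struct {
		buf          []int
		qcount       int // values currently buffered
		sendx, recvx int
	}

	func (r *ring) send(v int) bool {
		if r.qcount == len(r.buf) {
			return false // full; a real channel send would block here
		}
		r.buf[r.sendx] = v
		r.sendx++
		if r.sendx == len(r.buf) {
			r.sendx = 0
		}
		r.qcount++
		return true
	}

	func (r *ring) recv() (int, bool) {
		if r.qcount == 0 {
			return 0, false // empty; a real channel receive would block here
		}
		v := r.buf[r.recvx]
		r.recvx++
		if r.recvx == len(r.buf) {
			r.recvx = 0
		}
		r.qcount--
		return v, true
	}

	func main() {
		r := &ring{buf: make([]int, 2)}
		fmt.Println(r.send(1), r.send(2), r.send(3)) // true true false
		v, ok := r.recv()
		fmt.Println(v, ok) // 1 true
	}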

func makechan

func makechan(t *chantype, size int) *hchan

func makechan64

func makechan64(t *chantype, size int64) *hchan

func reflect_makechan

func reflect_makechan(t *chantype, size int) *hchan

go:linkname reflect_makechan reflect.makechan

func (*hchan) raceaddr

func (c *hchan) raceaddr() unsafe.Pointer

func (*hchan) sortkey

func (c *hchan) sortkey() uintptr

type heapArena

A heapArena stores metadata for a heap arena. heapArenas are stored outside of the Go heap and accessed via the mheap_.arenas index.

This gets allocated directly from the OS, so ideally it should be a multiple of the system page size. For example, avoid adding small fields.

go:notinheap

type heapArena struct {
        // bitmap stores the pointer/scalar bitmap for the words in
        // this arena. See mbitmap.go for a description. Use the
        // heapBits type to access this.
        bitmap [heapArenaBitmapBytes]byte

        // spans maps from virtual address page ID within this arena to *mspan.
        // For allocated spans, their pages map to the span itself.
        // For free spans, only the lowest and highest pages map to the span itself.
        // Internal pages map to an arbitrary span.
        // For pages that have never been allocated, spans entries are nil.
        //
        // Modifications are protected by mheap.lock. Reads can be
        // performed without locking, but ONLY from indexes that are
        // known to contain in-use or stack spans. This means there
        // must not be a safe-point between establishing that an
        // address is live and looking it up in the spans array.
        spans [pagesPerArena]*mspan

        // pageInUse is a bitmap that indicates which spans are in
        // state mSpanInUse. This bitmap is indexed by page number,
        // but only the bit corresponding to the first page in each
        // span is used.
        //
        // Writes are protected by mheap_.lock.
        pageInUse [pagesPerArena / 8]uint8

        // pageMarks is a bitmap that indicates which spans have any
        // marked objects on them. Like pageInUse, only the bit
        // corresponding to the first page in each span is used.
        //
        // Writes are done atomically during marking. Reads are
        // non-atomic and lock-free since they only occur during
        // sweeping (and hence never race with writes).
        //
        // This is used to quickly find whole spans that can be freed.
        //
        // TODO(austin): It would be nice if this was uint64 for
        // faster scanning, but we don't have 64-bit atomic bit
        // operations.
        pageMarks [pagesPerArena / 8]uint8
}

type heapBits

heapBits provides access to the bitmap bits for a single heap word. The methods on heapBits take value receivers so that the compiler can more easily inline calls to those methods and registerize the struct fields independently.

type heapBits struct {
        bitp  *uint8
        shift uint32
        arena uint32 // Index of heap arena containing bitp
        last  *uint8 // Last byte of the arena's bitmap
}

func heapBitsForAddr

func heapBitsForAddr(addr uintptr) (h heapBits)

heapBitsForAddr returns the heapBits for the address addr. The caller must ensure addr is in an allocated span. In particular, be careful not to point past the end of an object.

nosplit because it is used during write barriers and must not be preempted. go:nosplit

func (heapBits) bits

func (h heapBits) bits() uint32

The caller can test morePointers and isPointer by &-ing with bitScan and bitPointer. The result includes in its higher bits the bits for subsequent words described by the same bitmap byte.

nosplit because it is used during write barriers and must not be preempted. go:nosplit

func (heapBits) clearCheckmarkSpan

func (h heapBits) clearCheckmarkSpan(size, n, total uintptr)

clearCheckmarkSpan undoes all the checkmarking in a span. The actual checkmark bits are ignored, so the only work to do is to fix the pointer bits. (Pointer bits are ignored by scanobject but consulted by typedmemmove.)

func (heapBits) forward

func (h heapBits) forward(n uintptr) heapBits

forward returns the heapBits describing n pointer-sized words ahead of h in memory. That is, if h describes address p, h.forward(n) describes p+n*ptrSize. h.forward(1) is equivalent to h.next(), just slower. Note that forward does not modify h. The caller must record the result. bits returns the heap bits for the current word. go:nosplit

func (heapBits) forwardOrBoundary

func (h heapBits) forwardOrBoundary(n uintptr) (heapBits, uintptr)

forwardOrBoundary is like forward, but stops at boundaries between contiguous sections of the bitmap. It returns the number of words advanced over, which will be <= n.

func (heapBits) initCheckmarkSpan

func (h heapBits) initCheckmarkSpan(size, n, total uintptr)

initCheckmarkSpan initializes a span for being checkmarked. It clears the checkmark bits, which are set to 1 in normal operation.

func (heapBits) initSpan

func (h heapBits) initSpan(s *mspan)

initSpan initializes the heap bitmap for a span. It clears all checkmark bits. If this is a span of pointer-sized objects, it initializes all words to pointer/scan. Otherwise, it initializes all words to scalar/dead.

func (heapBits) isCheckmarked

func (h heapBits) isCheckmarked(size uintptr) bool

isCheckmarked reports whether the heap bits have the checkmarked bit set. It must be told how large the object at h is, because the encoding of the checkmark bit varies by size. h must describe the initial word of the object.

func (heapBits) isPointer

func (h heapBits) isPointer() bool

isPointer reports whether the heap bits describe a pointer word.

nosplit because it is used during write barriers and must not be preempted. go:nosplit

func (heapBits) morePointers

func (h heapBits) morePointers() bool

morePointers reports whether this word or any remaining word in this object may be a pointer; that is, not all of the remaining words are scalars. h must not describe the second word of the object.

func (heapBits) next

func (h heapBits) next() heapBits

next returns the heapBits describing the next pointer-sized word in memory. That is, if h describes address p, h.next() describes p+ptrSize. Note that next does not modify h. The caller must record the result.

nosplit because it is used during write barriers and must not be preempted. go:nosplit

func (heapBits) nextArena

func (h heapBits) nextArena() heapBits

nextArena advances h to the beginning of the next heap arena.

This is a slow-path helper to next. gc's inliner knows that heapBits.next can be inlined even though it calls this. This is marked noinline so it doesn't get inlined into next and cause next to be too big to inline.

go:nosplit go:noinline

func (heapBits) setCheckmarked

func (h heapBits) setCheckmarked(size uintptr)

setCheckmarked sets the checkmarked bit. It must be told how large the object at h is, because the encoding of the checkmark bit varies by size. h must describe the initial word of the object.

type hex

The compiler knows that a print of a value of this type should use printhex instead of printuint (decimal).

type hex uint64

type hiter

A hash iteration structure. If you modify hiter, also change cmd/compile/internal/gc/reflect.go to indicate the layout of this structure.

type hiter struct {
        key         unsafe.Pointer // Must be in first position.  Write nil to indicate iteration end (see cmd/internal/gc/range.go).
        value       unsafe.Pointer // Must be in second position (see cmd/internal/gc/range.go).
        t           *maptype
        h           *hmap
        buckets     unsafe.Pointer // bucket ptr at hash_iter initialization time
        bptr        *bmap          // current bucket
        overflow    *[]*bmap       // keeps overflow buckets of hmap.buckets alive
        oldoverflow *[]*bmap       // keeps overflow buckets of hmap.oldbuckets alive
        startBucket uintptr        // bucket iteration started at
        offset      uint8          // intra-bucket offset to start from during iteration (should be big enough to hold bucketCnt-1)
        wrapped     bool           // already wrapped around from end of bucket array to beginning
        B           uint8
        i           uint8
        bucket      uintptr
        checkBucket uintptr
}

func reflect_mapiterinit

func reflect_mapiterinit(t *maptype, h *hmap) *hiter

go:linkname reflect_mapiterinit reflect.mapiterinit

type hmap

A header for a Go map.

type hmap struct {
        // Note: the format of the hmap is also encoded in cmd/compile/internal/gc/reflect.go.
        // Make sure this stays in sync with the compiler's definition.
        count     int // # live cells == size of map.  Must be first (used by len() builtin)
        flags     uint8
        B         uint8  // log_2 of # of buckets (can hold up to loadFactor * 2^B items)
        noverflow uint16 // approximate number of overflow buckets; see incrnoverflow for details
        hash0     uint32 // hash seed

        buckets    unsafe.Pointer // array of 2^B Buckets. may be nil if count==0.
        oldbuckets unsafe.Pointer // previous bucket array of half the size, non-nil only when growing
        nevacuate  uintptr        // progress counter for evacuation (buckets less than this have been evacuated)

        extra *mapextra // optional fields
}
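
As a quick worked example of what B encodes: the map has 2^B buckets and grows once it holds more than loadFactor * 2^B items. The 6.5 load factor below is the value the runtime has historically used; treat it as an assumption here, since the constant itself is not part of this listing.

	package main

	import "fmt"

	func main() {
		// hmap.B is log2 of the number of buckets.
		B := uint8(5)
		buckets := 1 << B // 32 buckets

		// The map can hold up to loadFactor * 2^B items before growing.
		const loadFactor = 6.5 // illustrative value
		maxItems := loadFactor * float64(buckets)

		fmt.Println(buckets, maxItems) // 32 208
	}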

func makemap

func makemap(t *maptype, hint int, h *hmap) *hmap

makemap implements Go map creation for make(map[k]v, hint). If the compiler has determined that the map or the first bucket can be created on the stack, h and/or bucket may be non-nil. If h != nil, the map can be created directly in h. If h.buckets != nil, bucket pointed to can be used as the first bucket.

func makemap64

func makemap64(t *maptype, hint int64, h *hmap) *hmap

func makemap_small

func makemap_small() *hmap

makemap_small implements Go map creation for make(map[k]v) and make(map[k]v, hint) when hint is known to be at most bucketCnt at compile time and the map needs to be allocated on the heap.

func reflect_makemap

func reflect_makemap(t *maptype, cap int) *hmap

go:linkname reflect_makemap reflect.makemap

func (*hmap) createOverflow

func (h *hmap) createOverflow()

func (*hmap) growing

func (h *hmap) growing() bool

growing reports whether h is growing. The growth may be to the same size or bigger.

func (*hmap) incrnoverflow

func (h *hmap) incrnoverflow()

incrnoverflow increments h.noverflow. noverflow counts the number of overflow buckets. This is used to trigger same-size map growth. See also tooManyOverflowBuckets. To keep hmap small, noverflow is a uint16. When there are few buckets, noverflow is an exact count. When there are many buckets, noverflow is an approximate count.

func (*hmap) newoverflow

func (h *hmap) newoverflow(t *maptype, b *bmap) *bmap

func (*hmap) noldbuckets

func (h *hmap) noldbuckets() uintptr

noldbuckets calculates the number of buckets prior to the current map growth.

func (*hmap) oldbucketmask

func (h *hmap) oldbucketmask() uintptr

oldbucketmask provides a mask that can be applied to calculate n % noldbuckets().

func (*hmap) sameSizeGrow

func (h *hmap) sameSizeGrow() bool

sameSizeGrow reports whether the current growth is to a map of the same size.

type iface

type iface struct {
        tab  *itab
        data unsafe.Pointer
}

func assertE2I

func assertE2I(inter *interfacetype, e eface) (r iface)

func assertI2I

func assertI2I(inter *interfacetype, i iface) (r iface)

func convI2I

func convI2I(inter *interfacetype, i iface) (r iface)

func convT2I

func convT2I(tab *itab, elem unsafe.Pointer) (i iface)

func convT2Inoptr

func convT2Inoptr(tab *itab, elem unsafe.Pointer) (i iface)

type imethod

type imethod struct {
        name nameOff
        ityp typeOff
}

type inlinedCall

inlinedCall is the encoding of entries in the FUNCDATA_InlTree table.

type inlinedCall struct {
        parent   int16  // index of parent in the inltree, or < 0
        funcID   funcID // type of the called function
        _        byte
        file     int32 // fileno index into filetab
        line     int32 // line number of the call site
        func_    int32 // offset into pclntab for name of called function
        parentPc int32 // position of an instruction whose source position is the call site (offset from entry)
}

type interfacetype

type interfacetype struct {
        typ     _type
        pkgpath name
        mhdr    []imethod
}

type itab

Layout of itab is known to compilers and is allocated in non-garbage-collected memory. Needs to be in sync with ../cmd/compile/internal/gc/reflect.go:/^func.dumptypestructs.

type itab struct {
        inter *interfacetype
        _type *_type
        hash  uint32 // copy of _type.hash. Used for type switches.
        _     [4]byte
        fun   [1]uintptr // variable sized. fun[0]==0 means _type does not implement inter.
}

func getitab

func getitab(inter *interfacetype, typ *_type, canfail bool) *itab

func (*itab) init

func (m *itab) init() string

init fills in the m.fun array with all the code pointers for the m.inter/m._type pair. If the type does not implement the interface, it sets m.fun[0] to 0 and returns the name of an interface function that is missing. It is ok to call this multiple times on the same m, even concurrently.

type itabTableType

Note: change the formula in the mallocgc call in itabAdd if you change these fields.

type itabTableType struct {
        size    uintptr             // length of entries array. Always a power of 2.
        count   uintptr             // current number of filled entries.
        entries [itabInitSize]*itab // really [size] large
}

func (*itabTableType) add

func (t *itabTableType) add(m *itab)

add adds the given itab to itab table t. itabLock must be held.

func (*itabTableType) find

func (t *itabTableType) find(inter *interfacetype, typ *_type) *itab

find finds the given interface/type pair in t. Returns nil if the given interface/type pair isn't present.

type itimerval

type itimerval struct {
        it_interval timeval
        it_value    timeval
}

type lfnode

Lock-free stack node. Also known to export_test.go.

type lfnode struct {
        next    uint64
        pushcnt uintptr
}

func lfstackUnpack

func lfstackUnpack(val uint64) *lfnode

type lfstack

lfstack is the head of a lock-free stack.

The zero value of lfstack is an empty list.

This stack is intrusive. Nodes must embed lfnode as the first field.

The stack does not keep GC-visible pointers to nodes, so the caller is responsible for ensuring the nodes are not garbage collected (typically by allocating them from manually-managed memory).

type lfstack uint64
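
A minimal intrusive Treiber-stack sketch showing the "nodes embed the link as their first field" idea. Unlike the runtime's lfstack, this version stores a bare pointer head and so omits the push count that lfstack packs into its uint64 to defend against ABA; it is illustration only, not a drop-in lock-free stack.

	package main

	import (
		"fmt"
		"sync/atomic"
		"unsafe"
	)

	// lnode is the intrusive link; user types embed it as their first field.
	type lnode struct {
		next unsafe.Pointer // *lnode
	}

	type work struct {
		lnode // must be first, so *work and *lnode share an address
		id    int
	}

	// stack is a Treiber stack: head is CASed to push and pop.
	type stack struct {
		head unsafe.Pointer // *lnode
	}

	func (s *stack) push(n *lnode) {
		for {
			old := atomic.LoadPointer(&s.head)
			n.next = old
			if atomic.CompareAndSwapPointer(&s.head, old, unsafe.Pointer(n)) {
				return
			}
		}
	}

	func (s *stack) pop() *lnode {
		for {
			old := atomic.LoadPointer(&s.head)
			if old == nil {
				return nil
			}
			next := (*lnode)(old).next
			if atomic.CompareAndSwapPointer(&s.head, old, next) {
				return (*lnode)(old)
			}
		}
	}

	func main() {
		var s stack
		s.push(&(&work{id: 1}).lnode)
		s.push(&(&work{id: 2}).lnode)
		n := s.pop()
		fmt.Println((*work)(unsafe.Pointer(n)).id) // 2: last pushed, first popped
	}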

func (*lfstack) empty

func (head *lfstack) empty() bool

func (*lfstack) pop

func (head *lfstack) pop() unsafe.Pointer

func (*lfstack) push

func (head *lfstack) push(node *lfnode)

type libcall

type libcall struct {
        fn   uintptr
        n    uintptr // number of parameters
        args uintptr // parameters
        r1   uintptr // return values
        r2   uintptr
        err  uintptr // error number
}

type linearAlloc

linearAlloc is a simple linear allocator that pre-reserves a region of memory and then maps that region as needed. The caller is responsible for locking.

type linearAlloc struct {
        next   uintptr // next free byte
        mapped uintptr // one byte past end of mapped space
        end    uintptr // end of reserved space
}

func (*linearAlloc) alloc

func (l *linearAlloc) alloc(size, align uintptr, sysStat *uint64) unsafe.Pointer

func (*linearAlloc) init

func (l *linearAlloc) init(base, size uintptr)
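
The idea is a plain bump allocator over a pre-reserved range: alloc rounds the next-free position up to the requested alignment and advances it. A toy sketch over a byte slice (the real allocator works on raw reserved address space and maps pages on demand):

	package main

	import "fmt"

	// bump hands out consecutive, aligned chunks of one backing slice,
	// mirroring linearAlloc's next/end bookkeeping.
	type bump struct {
		buf  []byte
		next int // index of next free byte
	}

	// alloc returns an aligned chunk of size bytes, or nil if the region
	// is exhausted.
	func (b *bump) alloc(size, align int) []byte {
		p := (b.next + align - 1) &^ (align - 1) // round next up to alignment
		if p+size > len(b.buf) {
			return nil
		}
		b.next = p + size
		return b.buf[p : p+size : p+size]
	}

	func main() {
		b := &bump{buf: make([]byte, 64)}
		fmt.Println(len(b.alloc(10, 8)), b.next) // 10 10
		fmt.Println(len(b.alloc(4, 16)), b.next) // 4 20 (second chunk starts at offset 16)
	}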

type m

type m struct {
        g0      *g     // goroutine with scheduling stack
        morebuf gobuf  // gobuf arg to morestack
        divmod  uint32 // div/mod denominator for arm - known to liblink

        // Fields not known to debuggers.
        procid        uint64       // for debuggers, but offset not hard-coded
        gsignal       *g           // signal-handling g
        goSigStack    gsignalStack // Go-allocated signal handling stack
        sigmask       sigset       // storage for saved signal mask
        tls           [6]uintptr   // thread-local storage (for x86 extern register)
        mstartfn      func()
        curg          *g       // current running goroutine
        caughtsig     guintptr // goroutine running during fatal signal
        p             puintptr // attached p for executing go code (nil if not executing go code)
        nextp         puintptr
        oldp          puintptr // the p that was attached before executing a syscall
        id            int64
        mallocing     int32
        throwing      int32
        preemptoff    string // if != "", keep curg running on this m
        locks         int32
        dying         int32
        profilehz     int32
        spinning      bool // m is out of work and is actively looking for work
        blocked       bool // m is blocked on a note
        inwb          bool // m is executing a write barrier
        newSigstack   bool // minit on C thread called sigaltstack
        printlock     int8
        incgo         bool   // m is executing a cgo call
        freeWait      uint32 // if == 0, safe to free g0 and delete m (atomic)
        fastrand      [2]uint32
        needextram    bool
        traceback     uint8
        ncgocall      uint64      // number of cgo calls in total
        ncgo          int32       // number of cgo calls currently in progress
        cgoCallersUse uint32      // if non-zero, cgoCallers in use temporarily
        cgoCallers    *cgoCallers // cgo traceback if crashing in cgo call
        park          note
        alllink       *m // on allm
        schedlink     muintptr
        mcache        *mcache
        lockedg       guintptr
        createstack   [32]uintptr    // stack that created this thread.
        lockedExt     uint32         // tracking for external LockOSThread
        lockedInt     uint32         // tracking for internal lockOSThread
        nextwaitm     muintptr       // next m waiting for lock
        waitunlockf   unsafe.Pointer // todo go func(*g, unsafe.pointer) bool
        waitlock      unsafe.Pointer
        waittraceev   byte
        waittraceskip int
        startingtrace bool
        syscalltick   uint32
        thread        uintptr // thread handle
        freelink      *m      // on sched.freem

        // these are here because they are too large to be on the stack
        // of low-level NOSPLIT functions.
        libcall   libcall
        libcallpc uintptr // for cpu profiler
        libcallsp uintptr
        libcallg  guintptr
        syscall   libcall // stores syscall parameters on windows

        vdsoSP uintptr // SP for traceback while in VDSO call (0 if not in call)
        vdsoPC uintptr // PC for traceback while in VDSO call

        mOS
}

func acquirem

func acquirem() *m

go:nosplit

func allocm

func allocm(_p_ *p, fn func()) *m

Allocate a new m unassociated with any thread. Can use p for allocation context if needed. fn is recorded as the new m's m.mstartfn.

This function is allowed to have write barriers even if the caller isn't because it borrows _p_.

go:yeswritebarrierrec

func lockextra

func lockextra(nilokay bool) *m

lockextra locks the extra list and returns the list head. The caller must unlock the list by storing a new list head to extram. If nilokay is true, then lockextra will return a nil list head if that's what it finds. If nilokay is false, lockextra will keep waiting until the list head is no longer nil. go:nosplit

func mget

func mget() *m

Try to get an m from midle list. Sched must be locked. May run during STW, so write barriers are not allowed. go:nowritebarrierrec

type mOS

type mOS struct{}

type mSpanList

mSpanList heads a linked list of spans.

go:notinheap

type mSpanList struct {
        first *mspan // first span in list, or nil if none
        last  *mspan // last span in list, or nil if none
}

func (*mSpanList) init

func (list *mSpanList) init()

Initialize an empty doubly-linked list.

func (*mSpanList) insert

func (list *mSpanList) insert(span *mspan)

func (*mSpanList) insertBack

func (list *mSpanList) insertBack(span *mspan)

func (*mSpanList) isEmpty

func (list *mSpanList) isEmpty() bool

func (*mSpanList) remove

func (list *mSpanList) remove(span *mspan)

func (*mSpanList) takeAll

func (list *mSpanList) takeAll(other *mSpanList)

takeAll removes all spans from other and inserts them at the front of list.

type mSpanState

An mspan representing actual memory has state mSpanInUse, mSpanManual, or mSpanFree. Transitions between these states are constrained as follows:

* A span may transition from free to in-use or manual during any GC phase.

* During sweeping (gcphase == _GCoff), a span may transition from in-use to free (as a result of sweeping) or manual to free (as a result of stacks being freed).

* During GC (gcphase != _GCoff), a span *must not* transition from manual or in-use to free. Because concurrent GC may read a pointer and then look up its span, the span state must be monotonic.

type mSpanState uint8
const (
        mSpanDead   mSpanState = iota
        mSpanInUse             // allocated for garbage collected heap
        mSpanManual            // allocated for manual management (e.g., stack allocator)
        mSpanFree
)

type mTreap

go:notinheap

type mTreap struct {
        treap *treapNode
}

func (*mTreap) end

func (root *mTreap) end() treapIter

end returns an iterator which points to the end of the treap (the right-most node in the treap).

func (*mTreap) erase

func (root *mTreap) erase(i treapIter)

erase removes the element referred to by the current position of the iterator. This operation consumes the given iterator, so it should no longer be used. It is up to the caller to get the next or previous iterator before calling erase, if need be.

func (*mTreap) find

func (root *mTreap) find(npages uintptr) *treapNode

find searches for, finds, and returns the treap node containing the smallest span that can hold npages. If no span has at least npages it returns nil. This is a simple binary tree search that tracks the best-fit node found so far. The best-fit node is guaranteed to be on the path to a (maybe non-existent) lowest-base exact match.

func (*mTreap) insert

func (root *mTreap) insert(span *mspan)

insert adds span to the large span treap.

func (*mTreap) removeNode

func (root *mTreap) removeNode(t *treapNode)

func (*mTreap) removeSpan

func (root *mTreap) removeSpan(span *mspan)

removeSpan searches for, finds, and deletes the span along with the associated treap node. If the span is not in the treap, then t will eventually be set to nil and the access of t.spanKey will throw.

func (*mTreap) rotateLeft

func (root *mTreap) rotateLeft(x *treapNode)

rotateLeft rotates the tree rooted at node x, turning (x a (y b c)) into (y (x a b) c).

func (*mTreap) rotateRight

func (root *mTreap) rotateRight(y *treapNode)

rotateRight rotates the tree rooted at node y, turning (y (x a b) c) into (x a (y b c)).
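
The rotation descriptions above read naturally as s-expressions. A sketch on a plain binary-tree node, ignoring the priority and parent bookkeeping a real treap needs:

	package main

	import "fmt"

	type tnode struct {
		key         string
		left, right *tnode
	}

	// rotateLeft turns (x a (y b c)) into (y (x a b) c) and returns the new root.
	func rotateLeft(x *tnode) *tnode {
		y := x.right
		x.right = y.left // b moves under x
		y.left = x       // x becomes y's left child
		return y
	}

	// rotateRight turns (y (x a b) c) into (x a (y b c)) and returns the new root.
	func rotateRight(y *tnode) *tnode {
		x := y.left
		y.left = x.right // b moves under y
		x.right = y      // y becomes x's right child
		return x
	}

	func main() {
		// Build (x a (y b c)).
		root := &tnode{key: "x",
			left:  &tnode{key: "a"},
			right: &tnode{key: "y", left: &tnode{key: "b"}, right: &tnode{key: "c"}},
		}
		root = rotateLeft(root)
		fmt.Println(root.key, root.left.key, root.right.key) // y x c
		root = rotateRight(root)
		fmt.Println(root.key, root.left.key, root.right.key) // x a y
	}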

func (*mTreap) start

func (root *mTreap) start() treapIter

start returns an iterator which points to the start of the treap (the left-most node in the treap).

type mapextra

mapextra holds fields that are not present on all maps.

type mapextra struct {
        // If both key and value do not contain pointers and are inline, then we mark bucket
        // type as containing no pointers. This avoids scanning such maps.
        // However, bmap.overflow is a pointer. In order to keep overflow buckets
        // alive, we store pointers to all overflow buckets in hmap.extra.overflow and hmap.extra.oldoverflow.
        // overflow and oldoverflow are only used if key and value do not contain pointers.
        // overflow contains overflow buckets for hmap.buckets.
        // oldoverflow contains overflow buckets for hmap.oldbuckets.
        // The indirection allows to store a pointer to the slice in hiter.
        overflow    *[]*bmap
        oldoverflow *[]*bmap

        // nextOverflow holds a pointer to a free overflow bucket.
        nextOverflow *bmap
}

type maptype

type maptype struct {
        typ        _type
        key        *_type
        elem       *_type
        bucket     *_type // internal type representing a hash bucket
        keysize    uint8  // size of key slot
        valuesize  uint8  // size of value slot
        bucketsize uint16 // size of bucket
        flags      uint32
}

func (*maptype) hashMightPanic

func (mt *maptype) hashMightPanic() bool

func (*maptype) indirectkey

func (mt *maptype) indirectkey() bool

Note: flag values must match those used in the TMAP case in ../cmd/compile/internal/gc/reflect.go:dtypesym.

func (*maptype) indirectvalue

func (mt *maptype) indirectvalue() bool

func (*maptype) needkeyupdate

func (mt *maptype) needkeyupdate() bool

func (*maptype) reflexivekey

func (mt *maptype) reflexivekey() bool

type markBits

markBits provides access to the mark bit for an object in the heap. bytep points to the byte holding the mark bit. mask is a byte with a single bit set that can be &ed with *bytep to see if the bit has been set. *m.bytep&m.mask != 0 indicates the mark bit is set. index can be used along with span information to generate the address of the object in the heap. We maintain one set of mark bits for allocation and one for marking purposes.

type markBits struct {
        bytep *uint8
        mask  uint8
        index uintptr
}

func markBitsForAddr

func markBitsForAddr(p uintptr) markBits

func markBitsForSpan

func markBitsForSpan(base uintptr) (mbits markBits)

markBitsForSpan returns the markBits for the span base address base.

func (*markBits) advance

func (m *markBits) advance()

advance advances the markBits to the next object in the span.

func (markBits) clearMarked

func (m markBits) clearMarked()

clearMarked clears the marked bit in the markbits, atomically.

func (markBits) isMarked

func (m markBits) isMarked() bool

isMarked reports whether mark bit m is set.

func (markBits) setMarked

func (m markBits) setMarked()

setMarked sets the marked bit in the markbits, atomically.

func (markBits) setMarkedNonAtomic

func (m markBits) setMarkedNonAtomic()

setMarkedNonAtomic sets the marked bit in the markbits, non-atomically.

type mcache

Per-thread (in Go, per-P) cache for small objects. No locking needed because it is per-thread (per-P).

mcaches are allocated from non-GC'd memory, so any heap pointers must be specially handled.

go:notinheap

type mcache struct {
        // The following members are accessed on every malloc,
        // so they are grouped here for better caching.
        next_sample int32   // trigger heap sample after allocating this many bytes
        local_scan  uintptr // bytes of scannable heap allocated

        // tiny points to the beginning of the current tiny block, or
        // nil if there is no current tiny block.
        //
        // tiny is a heap pointer. Since mcache is in non-GC'd memory,
        // we handle it by clearing it in releaseAll during mark
        // termination.
        tiny             uintptr
        tinyoffset       uintptr
        local_tinyallocs uintptr // number of tiny allocs not counted in other stats

        alloc [numSpanClasses]*mspan // spans to allocate from, indexed by spanClass

        stackcache [_NumStackOrders]stackfreelist

        // Local allocator stats, flushed during GC.
        local_largefree  uintptr                  // bytes freed for large objects (>maxsmallsize)
        local_nlargefree uintptr                  // number of frees for large objects (>maxsmallsize)
        local_nsmallfree [_NumSizeClasses]uintptr // number of frees for small objects (<=maxsmallsize)

        // flushGen indicates the sweepgen during which this mcache
        // was last flushed. If flushGen != mheap_.sweepgen, the spans
        // in this mcache are stale and need to be flushed so they
        // can be swept. This is done in acquirep.
        flushGen uint32
}

func allocmcache

func allocmcache() *mcache

func gomcache

func gomcache() *mcache

go:nosplit

func (*mcache) nextFree

func (c *mcache) nextFree(spc spanClass) (v gclinkptr, s *mspan, shouldhelpgc bool)

nextFree returns the next free object from the cached span if one is available. Otherwise it refills the cache with a span with an available object and returns that object along with a flag indicating that this was a heavy weight allocation. If it is a heavy weight allocation the caller must determine whether a new GC cycle needs to be started or if the GC is active whether this goroutine needs to assist the GC.

Must run in a non-preemptible context since otherwise the owner of c could change.

func (*mcache) prepareForSweep

func (c *mcache) prepareForSweep()

prepareForSweep flushes c if the system has entered a new sweep phase since c was populated. This must happen between the sweep phase starting and the first allocation from c.

func (*mcache) refill

func (c *mcache) refill(spc spanClass)

refill acquires a new span of span class spc for c. This span will have at least one free object. The current span in c must be full.

Must run in a non-preemptible context since otherwise the owner of c could change.

func (*mcache) releaseAll

func (c *mcache) releaseAll()

type mcentral

Central list of free objects of a given size.

go:notinheap

type mcentral struct {
        lock      mutex
        spanclass spanClass
        nonempty  mSpanList // list of spans with a free object, ie a nonempty free list
        empty     mSpanList // list of spans with no free objects (or cached in an mcache)

        // nmalloc is the cumulative count of objects allocated from
        // this mcentral, assuming all spans in mcaches are
        // fully-allocated. Written atomically, read under STW.
        nmalloc uint64
}

func (*mcentral) cacheSpan

func (c *mcentral) cacheSpan() *mspan

Allocate a span to use in an mcache.

func (*mcentral) freeSpan

func (c *mcentral) freeSpan(s *mspan, preserve bool, wasempty bool) bool

freeSpan updates c and s after sweeping s. It sets s's sweepgen to the latest generation, and, based on the number of free objects in s, moves s to the appropriate list of c or returns it to the heap. freeSpan reports whether s was returned to the heap. If preserve=true, it does not move s (the caller must take care of it).

func (*mcentral) grow

func (c *mcentral) grow() *mspan

grow allocates a new empty span from the heap and initializes it for c's size class.

func (*mcentral) init

func (c *mcentral) init(spc spanClass)

Initialize a single central free list.

func (*mcentral) uncacheSpan

func (c *mcentral) uncacheSpan(s *mspan)

Return span from an mcache.

type mcontext

type mcontext struct {
        gregs       [23]uint64
        fpregs      *fpstate
        __reserved1 [8]uint64
}

type memRecord

A memRecord is the bucket data for a bucket of type memProfile, part of the memory profile.

type memRecord struct {

        // active is the currently published profile. A profiling
        // cycle can be accumulated into active once its complete.
        active memRecordCycle

        // future records the profile events we're counting for cycles
        // that have not yet been published. This is ring buffer
        // indexed by the global heap profile cycle C and stores
        // cycles C, C+1, and C+2. Unlike active, these counts are
        // only for a single cycle; they are not cumulative across
        // cycles.
        //
        // We store cycle C here because there's a window between when
        // C becomes the active cycle and when we've flushed it to
        // active.
        future [3]memRecordCycle
}
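
Because future is a ring buffer indexed by the heap-profile cycle number, the slot for cycle C is simply C modulo the buffer length. A sketch with a hypothetical cycle counter:

	package main

	import "fmt"

	type cycleCounts struct{ allocs, frees uintptr }

	// future holds counts for cycles C, C+1, and C+2, indexed by cycle
	// number modulo the buffer length (the scheme described for
	// memRecord.future).
	var future [3]cycleCounts

	func recordAlloc(cycle uint32) { future[cycle%3].allocs++ }

	func main() {
		recordAlloc(7)
		recordAlloc(8)
		recordAlloc(10) // wraps into the same slot as cycle 7
		fmt.Println(future[7%3].allocs, future[8%3].allocs) // 2 1
	}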

type memRecordCycle

memRecordCycle

type memRecordCycle struct {
        allocs, frees           uintptr
        alloc_bytes, free_bytes uintptr
}

func (*memRecordCycle) add

func (a *memRecordCycle) add(b *memRecordCycle)

add accumulates b into a. It does not zero b.

type method

type method struct {
        name nameOff
        mtyp typeOff
        ifn  textOff
        tfn  textOff
}

type mheap

Main malloc heap. The heap itself is the "free" and "scav" treaps, but all the other global data is here too.

mheap must not be heap-allocated because it contains mSpanLists, which must not be heap-allocated.

go:notinheap

type mheap struct {
        lock      mutex
        free      mTreap // free and non-scavenged spans
        scav      mTreap // free and scavenged spans
        sweepgen  uint32 // sweep generation, see comment in mspan
        sweepdone uint32 // all spans are swept
        sweepers  uint32 // number of active sweepone calls

        // allspans is a slice of all mspans ever created. Each mspan
        // appears exactly once.
        //
        // The memory for allspans is manually managed and can be
        // reallocated and moved as the heap grows.
        //
        // In general, allspans is protected by mheap_.lock, which
        // prevents concurrent access as well as freeing the backing
        // store. Accesses during STW might not hold the lock, but
        // must ensure that allocation cannot happen around the
        // access (since that may free the backing store).
        allspans []*mspan // all spans out there

        // sweepSpans contains two mspan stacks: one of swept in-use
        // spans, and one of unswept in-use spans. These two trade
        // roles on each GC cycle. Since the sweepgen increases by 2
        // on each cycle, this means the swept spans are in
        // sweepSpans[sweepgen/2%2] and the unswept spans are in
        // sweepSpans[1-sweepgen/2%2]. Sweeping pops spans from the
        // unswept stack and pushes spans that are still in-use on the
        // swept stack. Likewise, allocating an in-use span pushes it
        // on the swept stack.
        sweepSpans [2]gcSweepBuf

        _ uint32 // align uint64 fields on 32-bit for atomics

        // Proportional sweep
        //
        // These parameters represent a linear function from heap_live
        // to page sweep count. The proportional sweep system works to
        // stay in the black by keeping the current page sweep count
        // above this line at the current heap_live.
        //
        // The line has slope sweepPagesPerByte and passes through a
        // basis point at (sweepHeapLiveBasis, pagesSweptBasis). At
        // any given time, the system is at (memstats.heap_live,
        // pagesSwept) in this space.
        //
        // It's important that the line pass through a point we
        // control rather than simply starting at a (0,0) origin
        // because that lets us adjust sweep pacing at any time while
        // accounting for current progress. If we could only adjust
        // the slope, it would create a discontinuity in debt if any
        // progress has already been made.
        pagesInUse         uint64  // pages of spans in stats mSpanInUse; R/W with mheap.lock
        pagesSwept         uint64  // pages swept this cycle; updated atomically
        pagesSweptBasis    uint64  // pagesSwept to use as the origin of the sweep ratio; updated atomically
        sweepHeapLiveBasis uint64  // value of heap_live to use as the origin of sweep ratio; written with lock, read without
        sweepPagesPerByte  float64 // proportional sweep ratio; written with lock, read without

        // reclaimIndex is the page index in allArenas of the next page to
        // reclaim. Specifically, it refers to page (i %
        // pagesPerArena) of arena allArenas[i / pagesPerArena].
        //
        // If this is >= 1<<63, the page reclaimer is done scanning
        // the page marks.
        //
        // This is accessed atomically.
        reclaimIndex uint64
        // reclaimCredit is spare credit for extra pages swept. Since
        // the page reclaimer works in large chunks, it may reclaim
        // more than requested. Any spare pages released go to this
        // credit pool.
        //
        // This is accessed atomically.
        reclaimCredit uintptr

        // scavengeCredit is spare credit for extra bytes scavenged.
        // Since the scavenging mechanisms operate on spans, it may
        // scavenge more than requested. Any spare pages released
        // go to this credit pool.
        //
        // This is protected by the mheap lock.
        scavengeCredit uintptr

        // Malloc stats.
        largealloc  uint64                  // bytes allocated for large objects
        nlargealloc uint64                  // number of large object allocations
        largefree   uint64                  // bytes freed for large objects (>maxsmallsize)
        nlargefree  uint64                  // number of frees for large objects (>maxsmallsize)
        nsmallfree  [_NumSizeClasses]uint64 // number of frees for small objects (<=maxsmallsize)

        // arenas is the heap arena map. It points to the metadata for
        // the heap for every arena frame of the entire usable virtual
        // address space.
        //
        // Use arenaIndex to compute indexes into this array.
        //
        // For regions of the address space that are not backed by the
        // Go heap, the arena map contains nil.
        //
        // Modifications are protected by mheap_.lock. Reads can be
        // performed without locking; however, a given entry can
        // transition from nil to non-nil at any time when the lock
        // isn't held. (Entries never transition back to nil.)
        //
        // In general, this is a two-level mapping consisting of an L1
        // map and possibly many L2 maps. This saves space when there
        // are a huge number of arena frames. However, on many
        // platforms (even 64-bit), arenaL1Bits is 0, making this
        // effectively a single-level map. In this case, arenas[0]
        // will never be nil.
        arenas [1 << arenaL1Bits]*[1 << arenaL2Bits]*heapArena

        // heapArenaAlloc is pre-reserved space for allocating heapArena
        // objects. This is only used on 32-bit, where we pre-reserve
        // this space to avoid interleaving it with the heap itself.
        heapArenaAlloc linearAlloc

        // arenaHints is a list of addresses at which to attempt to
        // add more heap arenas. This is initially populated with a
        // set of general hint addresses, and grown with the bounds of
        // actual heap arena ranges.
        arenaHints *arenaHint

        // arena is a pre-reserved space for allocating heap arenas
        // (the actual arenas). This is only used on 32-bit.
        arena linearAlloc

        // allArenas is the arenaIndex of every mapped arena. This can
        // be used to iterate through the address space.
        //
        // Access is protected by mheap_.lock. However, since this is
        // append-only and old backing arrays are never freed, it is
        // safe to acquire mheap_.lock, copy the slice header, and
        // then release mheap_.lock.
        allArenas []arenaIdx

        // sweepArenas is a snapshot of allArenas taken at the
        // beginning of the sweep cycle. This can be read safely by
        // simply blocking GC (by disabling preemption).
        sweepArenas []arenaIdx

        // central free lists for small size classes.
        // the padding makes sure that the mcentrals are
        // spaced CacheLinePadSize bytes apart, so that each mcentral.lock
        // gets its own cache line.
        // central is indexed by spanClass.
        central [numSpanClasses]struct {
                mcentral mcentral
                pad      [cpu.CacheLinePadSize - unsafe.Sizeof(mcentral{})%cpu.CacheLinePadSize]byte
        }

        spanalloc             fixalloc // allocator for span*
        cachealloc            fixalloc // allocator for mcache*
        treapalloc            fixalloc // allocator for treapNodes*
        specialfinalizeralloc fixalloc // allocator for specialfinalizer*
        specialprofilealloc   fixalloc // allocator for specialprofile*
        speciallock           mutex    // lock for special record allocators.
        arenaHintAlloc        fixalloc // allocator for arenaHints

        unused *specialfinalizer // never set, just here to force the specialfinalizer type into DWARF
}

var mheap_ mheap
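
Two of the struct comments above describe small computations that are easy to restate in code: the sweepSpans index derived from sweepgen, and the proportional sweep line mapping heap_live to a page sweep target. The sketch below is illustrative only; the function names and the plain integer arguments are assumptions, not the runtime's API:

	// sweptIndex and unsweptIndex restate the sweepSpans indexing:
	// sweepgen increases by 2 per GC cycle, so the two stacks swap
	// roles every cycle.
	func sweptIndex(sweepgen uint32) uint32   { return sweepgen / 2 % 2 }
	func unsweptIndex(sweepgen uint32) uint32 { return 1 - sweepgen/2%2 }

	// sweepPageTarget evaluates the proportional sweep line: starting at
	// the basis point (sweepHeapLiveBasis, pagesSweptBasis) with slope
	// sweepPagesPerByte, it returns how many pages should have been swept
	// by the time the live heap reaches heapLive. Keeping pagesSwept above
	// this value keeps the sweeper "in the black".
	func sweepPageTarget(heapLive, sweepHeapLiveBasis, pagesSweptBasis uint64, sweepPagesPerByte float64) uint64 {
		if heapLive <= sweepHeapLiveBasis {
			return pagesSweptBasis
		}
		return pagesSweptBasis + uint64(sweepPagesPerByte*float64(heapLive-sweepHeapLiveBasis))
	}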

func (*mheap) alloc

func (h *mheap) alloc(npage uintptr, spanclass spanClass, large bool, needzero bool) *mspan

alloc allocates a new span of npage pages from the GC'd heap.

Either large must be true or spanclass must indicate the span's size class and scannability.

If needzero is true, the memory for the returned span will be zeroed.
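
For illustration, one way a single spanclass value can carry both a size class and scannability is to pack a size class into the upper bits and a noscan (pointer-free) flag into the low bit. The type and helper names below are invented for the example and are not presented as the runtime's definition:

	// spanClassSketch packs a size class and a noscan flag into one byte.
	type spanClassSketch uint8

	func makeSpanClassSketch(sizeclass uint8, noscan bool) spanClassSketch {
		sc := spanClassSketch(sizeclass << 1)
		if noscan {
			sc |= 1 // low bit set: the span holds no pointers
		}
		return sc
	}

	func (sc spanClassSketch) sizeclass() uint8 { return uint8(sc >> 1) }
	func (sc spanClassSketch) noscan() bool     { return sc&1 != 0 }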

func (*mheap) allocManual

func (h *mheap) allocManual(npage uintptr, stat *uint64) *mspan

allocManual allocates a manually-managed span of npage pages. allocManual returns nil if allocation fails.

allocManual adds the bytes used to *stat, which should be a memstats in-use field. Unlike allocations in the GC'd heap, the allocation does *not* count toward heap_inuse or heap_sys.

The memory backing the returned span may not be zeroed if span.needzero is set.

allocManual must be called on the system stack to prevent stack growth. Since this is used by the stack allocator, stack growth during allocManual would self-deadlock.

go:systemstack

func (*mheap) allocSpanLocked

func (h *mheap) allocSpanLocked(npage uintptr, stat *uint64) *mspan

Allocates a span of the given size. h must be locked. The returned span has been removed from the free structures, but its state is still mSpanFree.

func (*mheap) alloc_m

func (h *mheap) alloc_m(npage uintptr, spanclass spanClass, large bool) *mspan

alloc_m is the internal implementation of mheap.alloc.

alloc_m must run on the system stack because it locks the heap, so any stack growth during alloc_m would self-deadlock.

go:systemstack

func (*mheap) coalesce

func (h *mheap) coalesce(s *mspan)

func (*mheap) freeManual

func (h *mheap) freeManual(s *mspan, stat *uint64)

freeManual frees a manually-managed span returned by allocManual. stat must be the same as the stat passed to the allocManual that allocated s.

This must only be called when gcphase == _GCoff. See mSpanState for an explanation.

freeManual must be called on the system stack to prevent stack growth, just like allocManual.

go:systemstack

func (*mheap) freeSpan

func (h *mheap) freeSpan(s *mspan, large bool)

Free the span back into the heap.

large must match the value of large passed to mheap.alloc. This is used for accounting.

func (*mheap) freeSpanLocked

func (h *mheap) freeSpanLocked(s *mspan, acctinuse, acctidle bool, unusedsince int64)

s must be on the busy list or unlinked.

func (*mheap) grow

func (h *mheap) grow(npage uintptr) bool

Try to add at least npage pages of memory to the heap, returning whether it worked.

h must be locked.

func (*mheap) init

func (h *mheap) init()

Initialize the heap.

func (*mheap) pickFreeSpan

func (h *mheap) pickFreeSpan(npage uintptr) *mspan

pickFreeSpan acquires a free span from internal free list structures if one is available. Otherwise returns nil. h must be locked.

func (*mheap) reclaim

func (h *mheap) reclaim(npage uintptr)

reclaim sweeps and reclaims at least npage pages into the heap.