sync: Range reports concurrent map iterator and write #60216

Closed
hardik-cisco opened this issue May 16, 2023 · 13 comments
Labels
compiler/runtime Issues related to the Go compiler and/or runtime. WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided.

Comments

@hardik-cisco

hardik-cisco commented May 16, 2023

What version of Go are you using (go version)?

$ go version
 go1.20

Does this issue reproduce with the latest release?

Yes.

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GO111MODULE="on"
GOARCH="arm64"
GOHOSTARCH="arm64"
GOHOSTOS="darwin"
GOVERSION="go1.19"
GCCGO="gccgo"

What did you do?

I have a sync.Map that is read and written by multiple goroutines running concurrently. I increased the goroutines to about 1000 per minute (though it varies at times).

What did you expect to see?

Since sync.Map is documented as safe for concurrent use, I expected it to work correctly even without taking any explicit lock.

What did you see instead?

I am getting this error with the sync.Map:

fatal error: concurrent map iteration and map write

goroutine 259 [running]:
sync.(*Map).Range(0xc146047320?, 0xc122dabed8)
/usr/local/go/src/sync/map.go:349 +0x258
/modules/metadatacollectormodule.(*MetadataCollector).getRegisteredContainerMap(0xc000e42630)

@randall77
Contributor

@bcmills

Can you post code that reproduces this error?

Can you try with 1.20?

@randall77 randall77 changed the title affected/package: sync sync: Range reports concurrent map iterator and write May 16, 2023
@gopherbot gopherbot added the compiler/runtime Issues related to the Go compiler and/or runtime. label May 16, 2023
@bcmills
Contributor

bcmills commented May 16, 2023

What happens if you run the program under the race detector?

@hardik-cisco
Author

hardik-cisco commented May 16, 2023

@bcmills

Can you post code that reproduces this error?

Can you try with 1.20?

Regarding the code: it's part of a private codebase, so it's difficult to share the exact code, and I haven't had luck reproducing it with standalone sample code yet.

Our project is currently on 1.20, which is where this is occurring. I've updated that in the bug info as well.

@hardik-cisco
Author

What happens if you run the program under the race detector?

I will give it a try and update the results here

@hardik-cisco
Author

hardik-cisco commented May 16, 2023

What happens if you run the program under the race detector?

I will give it a try and update the results here

Sharing one stack trace from running it with the race detector as suggested; it includes sync/map.go:476. @bcmills

WARNING: DATA RACE
Read at 0x00c0000e01e0 by goroutine 218:
cluster-agent/agent/modules/metriccollectormodule/metrics.CollectClusterUtilizationMetrics.func1()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/metrics/clusterutilizationmetric.go:35 +0x134
sync.(*Map).Range()
/opt/homebrew/Cellar/go/1.20.2/libexec/src/sync/map.go:476 +0x198
cluster-agent/agent/modules/metriccollectormodule/metrics.CollectClusterUtilizationMetrics()

@bcmills
Contributor

bcmills commented May 16, 2023

What's the full output of the race detector? There are always at least two goroutines in a report.

@hardik-cisco
Author

hardik-cisco commented May 16, 2023

What's the full output of the race detector? There are always at least two goroutines in a report.

It was a long output, but the most relevant part was this:

WARNING: DATA RACE
Read at 0x00c0003b4a50 by goroutine 208:
runtime.mapdelete()
/Users/hardikja/go/go1.19/src/runtime/map.go:695 +0x47c
sync.(*Map).Range()
/Users/hardikja/go/go1.19/src/sync/map.go:349 +0x164
cluster-agent/agent/modules/utils.GetSyncMapLen()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/utils/utils.go:113 +0xb8
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).collectAndReportContainerMetrics()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:400 +0x274
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).scheduleMetricCollector.func1()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:151 +0x6c

Previous write at 0x00c0003b4a50 by goroutine 212:
runtime.mapaccessK()
/Users/hardikja/go/go1.19/src/runtime/map.go:518 +0x1ec
sync.(*Map).Store()
/Users/hardikja/go/go1.19/src/sync/map.go:168 +0x33c
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).populateRegisteredContainersInfo()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:228 +0x7e4
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).registerContainersBatch()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:336 +0x76c
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).registerContainersHelper()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:295 +0x2a8
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).registerContainers()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:274 +0xd4
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).Start.func1()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:141 +0x34

Goroutine 208 (running) created at:
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).scheduleMetricCollector()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:146 +0xa8
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).StartMetricCollector()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:125 +0x6e0
main.main()
/Users/hardikja/Desktop/code/cluster-agent/agent/main.go:110 +0x6dc

Goroutine 212 (running) created at:
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).Start()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:141 +0xd0
main.main()
/Users/hardikja/Desktop/code/cluster-agent/agent/main.go:122 +0x77c

==================

@bcmills
Contributor

bcmills commented May 16, 2023

@hardik-cisco, does go vet report any issues for your code?

The two lines reported in the race are:

  • The write is to the dirty map, guarded by m.mu.
  • The read is from the readOnly map.

The dirty map can be promoted to the readOnly map in these places:

  • Range (with m.mu locked).
  • missLocked (with m.mu presumed to be locked by the caller).

And all of the calls to missLocked do seem to be guarded by the lock as required.

That leads me to the hypothesis that m.mu is being copied. The copylocks check in go vet diagnoses erroneous copies, but it is not enabled by default in go test.

To confirm or reject that hypothesis, please run go vet on your program and let us know whether the copylocks check detects any issues.
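
For illustration, a minimal sketch of the kind of copy the copylocks check catches; the names below are hypothetical, not from the reported code:

package main

import (
	"fmt"
	"sync"
)

// getLen receives the sync.Map by value. The copy gets its own Mutex but
// shares the underlying map data with the original, so the original's lock
// no longer guards the copy's accesses.
// go vet (copylocks): "getLen passes lock by value: sync.Map contains sync.Mutex"
func getLen(m sync.Map) int {
	n := 0
	m.Range(func(_, _ interface{}) bool { n++; return true })
	return n
}

func main() {
	var m sync.Map
	m.Store("k", "v")
	// go vet (copylocks): "call of getLen copies lock value: sync.Map contains sync.Mutex"
	fmt.Println(getLen(m))
}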

@bcmills bcmills added the WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided. label May 16, 2023
@hardik-cisco
Author

@hardik-cisco, does go vet report any issues for your code?

The two lines reported in the race are:

The write is to the dirty map, guarded by m.mu. The read is from the readOnly map.

The dirty map can be promoted to the readOnly map in these places:

  • Range (with m.mu locked).
  • missLocked (with m.mu presumed to be locked by the caller).

And all of the calls to missLocked do seem to be guarded by the lock as required.

That leads me to the hypothesis that m.mu is being copied. The copylocks check in go vet diagnoses erroneous copies, but it is not enabled by default in go test.

To confirm or reject that hypothesis, please run go vet on your program and let us know whether the copylocks check detects any issues.

@bcmills please find the output of go vet on my program below:

# cluster-agent/agent/metriclibrary/publish/upload
agent/metriclibrary/publish/upload/clusteragentmetricupload.go:67:92: call of camu.doUploadRequest copies lock value: cluster-agent/agent/genproto/main/go.MetricDataRequest contains google.golang.org/protobuf/internal/impl.MessageState contains sync.Mutex
agent/metriclibrary/publish/upload/clusteragentmetricupload.go:77:93: call of camu.doUploadRequest copies lock value: cluster-agent/agent/genproto/main/go.MetricDataRequest contains google.golang.org/protobuf/internal/impl.MessageState contains sync.Mutex
agent/metriclibrary/publish/upload/clusteragentmetricupload.go:96:10: doUploadRequest passes lock by value: cluster-agent/agent/genproto/main/go.MetricDataRequest contains google.golang.org/protobuf/internal/impl.MessageState contains sync.Mutex

# cluster-agent/agent/testutils

agent/testutils/testutils.go:500:14: k8s.io/api/core/v1.Node struct literal uses unkeyed fields

# cluster-agent/agent/events/handlers

agent/events/handlers/eventhandler.go:186:25: cluster-agent/agent/modules/clusterhealthmodule.HealthMonitoringEventData struct literal uses unkeyed fields

# cluster-agent/agent/modules/clusterhealthmodule

agent/modules/clusterhealthmodule/clusterhealth_test.go:153:22: cluster-agent/agent/modules/utils.PodInfoLite struct literal uses unkeyed fields
agent/modules/clusterhealthmodule/clusterhealth_test.go:154:22: cluster-agent/agent/modules/utils.PodInfoLite struct literal uses unkeyed fields

# cluster-agent/agent/modules/containermonitoring

agent/modules/containermonitoring/containermonitoringmodule_test.go:245:21: cluster-agent/agent/swagger/containermonitoringapis.SimMachineMinimalDto struct literal uses unkeyed fields
agent/modules/containermonitoring/containermonitoringmodule_test.go:254:21: cluster-agent/agent/swagger/containermonitoringapis.SimMachineMinimalDto struct literal uses unkeyed fields
agent/modules/containermonitoring/containermonitoringmodule_test.go:377:24: cluster-agent/agent/swagger/containermonitoringapis.SimMachineBatchResponse struct literal uses unkeyed fields
agent/modules/containermonitoring/containermonitoringmodule_test.go:407:24: cluster-agent/agent/swagger/containermonitoringapis.SimMachineBatchResponse struct literal uses unkeyed fields
agent/modules/containermonitoring/containermonitoringmodule_test.go:452:24: cluster-agent/agent/swagger/containermonitoringapis.SimMachineBatchResponse struct literal uses unkeyed fields
agent/modules/containermonitoring/containermonitoringmodule.go:145:9: return copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule.go:301:69: call of utils.GetSyncMapLen copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule.go:627:9: return copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:177:45: call of mockContainerMonitor copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:352:62: call of mockContainerMonitor copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:387:45: call of mockContainerMonitor copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:416:45: call of mockContainerMonitor copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:419:41: call of utils.GetSyncMapLen copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:495:74: call of mockContainerMonitor copies lock value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:531:69: mockContainerMonitor passes lock by value: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:556:3: literal copies lock value from registeredContainerInfo: sync.Map contains sync.Mutex
agent/modules/containermonitoring/containermonitoringmodule_test.go:557:3: literal copies lock value from registeredContainersMap: sync.Map contains sync.Mutex

# cluster-agent/agent/testutils/mocks/containermonitoringmodule

agent/testutils/mocks/containermonitoringmodule/IContainerMonitoringService.go:64:9: assignment copies lock value to r0: sync.Map contains sync.Mutex
agent/testutils/mocks/containermonitoringmodule/IContainerMonitoringService.go:68:9: return copies lock value: sync.Map contains sync.Mutex

# cluster-agent/agent/modules/utils

agent/modules/utils/utils.go:111:24: GetSyncMapLen passes lock by value: sync.Map contains sync.Mutex
agent/modules/utils/utils_test.go:146:35: call of GetSyncMapLen copies lock value: sync.Map contains sync.Mutex
agent/modules/utils/utils_test.go:152:38: call of GetSyncMapLen copies lock value: sync.Map contains sync.Mutex
agent/modules/utils/utils_test.go:158:37: call of GetSyncMapLen copies lock value: sync.Map contains sync.Mutex

# cluster-agent/agent/modules/metriccollectormodule

agent/modules/metriccollectormodule/clustermetriccollector.go:283:143: call of collector.ParseIndividualPodLevelMetric copies lock value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/clustermetriccollector.go:288:8: call of collector.ParseScaledCpuUsedPctMetric copies lock value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/clustermetriccollector.go:360:31: call of moduleutils.GetSyncMapLen copies lock value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/clustermetriccollector.go:373:89: call of collector.GetMetricsForContainer copies lock value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/clustermetriccollector.go:374:87: call of collector.CollectClusterUtilizationMetrics copies lock value: sync.Map contains sync.Mutex

# cluster-agent/agent/modules/metadatacollectormodule

agent/modules/metadatacollectormodule/metadatacollector_test.go:77:50: call of mockCM.On("GetRegisteredContainersInfo").Return copies lock value: sync.Map contains sync.Mutex
agent/modules/metadatacollectormodule/metadatacollector_test.go:97:50: call of mockCM.On("GetRegisteredContainersInfo").Return copies lock value: sync.Map contains sync.Mutex
agent/modules/metadatacollectormodule/metadatacollector_test.go:159:50: call of mockCM.On("GetRegisteredContainersInfo").Return copies lock value: sync.Map contains sync.Mutex

# cluster-agent/agent/modules/podmonitoringmodule

agent/modules/podmonitoringmodule/podmonitoring_test.go:1027:38: assignment copies lock value to podHostIdsFromLastRegistrationMap: sync.Map contains sync.Mutex
agent/modules/podmonitoringmodule/podmonitoring_test.go:1258:9: return copies lock value: sync.Map contains sync.Mutex

# cluster-agent/agent/modules/metriccollectormodule/metrics

agent/modules/metriccollectormodule/metrics/nodemetric_test.go:22:19: cluster-agent/agent/modules/metadatacollectormodule/metadatatypes.NodeConditions struct literal uses unkeyed fields
agent/modules/metriccollectormodule/metrics/podmetric_test.go:274:20: k8s.io/apimachinery/pkg/util/intstr.IntOrString struct literal uses unkeyed fields
agent/modules/metriccollectormodule/metrics/podmetric_test.go:277:20: k8s.io/apimachinery/pkg/util/intstr.IntOrString struct literal uses unkeyed fields
agent/modules/metriccollectormodule/metrics/clusterutilizationmetric.go:22:13: CollectClusterUtilizationMetrics passes lock by value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/containermetric.go:15:13: GetMetricsForContainer passes lock by value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/podmetric.go:93:13: ParseIndividualPodLevelMetric passes lock by value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/podmetric.go:100:4: call of parseContainersMetadataPodWise copies lock value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/podmetric.go:121:13: parseContainersMetadataPodWise passes lock by value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/podmetric.go:149:13: ParseScaledCpuUsedPctMetric passes lock by value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/podmetric.go:153:3: call of parseContainersMetadataPodWise copies lock value: sync.Map contains sync.Mutex
agent/modules/metriccollectormodule/metrics/containermetric_test.go:126:9: return copies lock value: sync.Map contains sync.Mutex

@hardik-cisco
Author

Also adding one more trace, with Go 1.20, where the race detector found a race condition:

==================

WARNING: DATA RACE
Read at 0x00c000a302a0 by goroutine 260:
runtime.mapdelete()
/opt/homebrew/Cellar/go/1.20.2/libexec/src/runtime/map.go:695 +0x49c
sync.(*Map).Range()
/opt/homebrew/Cellar/go/1.20.2/libexec/src/sync/map.go:471 +0x210
cluster-agent/agent/modules/utils.GetSyncMapLen()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/utils/utils.go:113 +0x124
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).collectAndReportContainerMetrics()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:360 +0x228
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).scheduleMetricCollector.func1()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:114 +0xb0

Previous write at 0x00c000a302a0 by goroutine 263:
runtime.mapaccessK()
/opt/homebrew/Cellar/go/1.20.2/libexec/src/runtime/map.go:518 +0x25c
sync.(*Map).Swap()
/opt/homebrew/Cellar/go/1.20.2/libexec/src/sync/map.go:365 +0x4d8
sync.(*Map).Store()
/opt/homebrew/Cellar/go/1.20.2/libexec/src/sync/map.go:155 +0x74
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).populateRegisteredContainersInfo()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:228 +0xb18
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).registerContainersBatch()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:336 +0x1464
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).registerContainersHelper()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:295 +0x30c
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).registerContainers()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:274 +0x188
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).Start.func1()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:141 +0x3c

Goroutine 260 (running) created at:
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).scheduleMetricCollector()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:109 +0xf4
cluster-agent/agent/modules/metriccollectormodule.(*ClusterMetricCollector).StartMetricCollector()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/metriccollectormodule/clustermetriccollector.go:88 +0x30
main.main()
/Users/hardikja/Desktop/code/cluster-agent/agent/main.go:110 +0xa54

Goroutine 263 (running) created at:
cluster-agent/agent/modules/containermonitoring.(*ContainerMonitor).Start()
/Users/hardikja/Desktop/code/cluster-agent/agent/modules/containermonitoring/containermonitoringmodule.go:141 +0xb8
main.main()
/Users/hardikja/Desktop/code/cluster-agent/agent/main.go:122 +0xb7c

==================

@kuldeepsolanki04

kuldeepsolanki04 commented May 18, 2023

@bcmills The sample code below can reproduce the error:

package main

import (
	"fmt"
	"sync"
)

var (
	m    sync.Map
)

func writeToMap(key, value string) {

	m.Store(key, value)

}

func deleteFromMap(key string) {

	m.Delete(key)

}

func readFromMap(key string) {
	val, ok := m.Load(key)
	if ok {
		fmt.Printf("Value for key '%s': %s\n", key, val)
	} else {
		fmt.Printf("Key '%s' not found\n", key)
	}
}

func main() {
	go func() {
		for i := 0; ; i++ {
			key := fmt.Sprintf("key%d", i)
			value := fmt.Sprintf("value%d", i)
			writeToMap(key, value)
			//getMapLength(m)
			fmt.Printf("Added key-value pair: %s -> %s\n", key, value)
		}
	}()

	go func() {
		for i := 0; ; i++ {
			key := fmt.Sprintf("key%d", i)
			deleteFromMap(key)
			fmt.Println("length is %d", getMapLength(m)) **// if you remove this line sync map works fine**
			fmt.Printf("Deleted key: %s\n", key)
		}
	}()

	go func() {
		for i := 0; ; i++ {
			key := fmt.Sprintf("key%d", i)
			readFromMap(key)
		}
	}()

	// Wait for a key press to exit the program
	fmt.Scanln()
}
func getMapLength(Map sync.Map) int {
	length := 0
	Map.Range(func(key interface{}, value interface{}) bool {
		length++
		return true
	})
	return length
}



If we remove the getMapLength call, the sync map works fine, but if we want to use getMapLength in parallel from any goroutine, the concurrent read/write error is thrown. To work around this we have to take explicit locks and use wait groups; attached is that dirty workaround, which is something that should be fixed.
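
A minimal sketch of that kind of explicit-locking workaround, assuming a plain map guarded by a sync.RWMutex (the attached test.txt may differ in detail):

package main

import (
	"fmt"
	"sync"
)

// lockedMap is a hypothetical wrapper: a plain map guarded by an explicit
// RWMutex, so length/iteration and writes are serialized by the same lock.
type lockedMap struct {
	mu sync.RWMutex
	m  map[string]string
}

func (l *lockedMap) store(key, value string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.m[key] = value
}

func (l *lockedMap) length() int {
	l.mu.RLock()
	defer l.mu.RUnlock()
	return len(l.m)
}

func main() {
	lm := &lockedMap{m: map[string]string{}}
	lm.store("key0", "value0")
	fmt.Println("length is", lm.length())
}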

go vet output

go vet test.go    
# command-line-arguments
./test.go:50:46: call of getMapLength copies lock value: sync.Map contains sync.Mutex
./test.go:65:24: getMapLength passes lock by value: sync.Map contains sync.Mutex
./test.go:50:4: fmt.Println call has possible formatting directive %d

-race output

go run  test.go -race 
.
.
.
.
Deleted key: key157
Added key-value pair: key485 -> value485
fatal error: Added key-value pair: key486 -> value486
concurrent map iteration and map writeAdded key-value pair: key487 -> value487


goroutine 7 [running]:
sync.(*Map).Range(0x140000100a0, 0x14000104f08)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/sync/map.go:471 +0x250
main.getMapLength({{0x0, 0x0}, {{}, {}, 0x140001a45d0}, 0x14000196060, 0x9e})
	Documents/test.go:67 +0x9c
main.main.func2()
	Documents/test.go:50 +0xc4
created by main.main
	Documents/test.go:46 +0x30

goroutine 1 [syscall]:
syscall.syscall(0x1400006cb68?, 0x10473a6c0?, 0x800000?, 0x7ffff800000?)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/runtime/sys_darwin.go:23 +0x58
syscall.read(0x1400005c060?, {0x140000701a0?, 0x1400006cc48?, 0x100000000?})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/syscall/zsyscall_darwin_arm64.go:1209 +0x48
syscall.Read(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/syscall/syscall_unix.go:178
internal/poll.ignoringEINTRIO(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/internal/poll/fd_unix.go:794
internal/poll.(*FD).Read(0x1400005c060?, {0x140000701a0?, 0x1?, 0x4?})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/internal/poll/fd_unix.go:163 +0x224
os.(*File).read(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/os/file_posix.go:31
os.(*File).Read(0x1400000e010, {0x140000701a0?, 0x1400007e000?, 0x500?})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/os/file.go:118 +0x5c
io.ReadAtLeast({0x10478b5d8, 0x1400000e010}, {0x140000701a0, 0x1, 0x4}, 0x1)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/io/io.go:332 +0xa0
io.ReadFull(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/io/io.go:351
fmt.(*readRune).readByte(0x14000070180)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:321 +0x48
fmt.(*readRune).ReadRune(0x14000070180)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:337 +0xb4
fmt.(*ss).ReadRune(0x1400005c1e0)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:189 +0x6c
fmt.(*ss).getRune(0x1?)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:211 +0x1c
fmt.(*ss).doScan(0x1400005c1e0, {0x0?, 0x0, 0x104900130?})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:1079 +0xe4
fmt.Fscanln({0x10478b5d8?, 0x1400000e010?}, {0x0, 0x0, 0x0})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:132 +0x78
fmt.Scanln(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/scan.go:70
main.main()
	Documents/test.go:63 +0x64

goroutine 6 [runnable]:
fmt.(*pp).free(0x14000090000?)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/print.go:161 +0x108
fmt.Fprintf({0x10478b5f8, 0x1400000e018}, {0x10474fe80, 0x1f}, {0x14000068fa8, 0x2, 0x2})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/print.go:226 +0x98
fmt.Printf(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/print.go:233
main.main.func1()
	Documents/test.go:42 +0x154
created by main.main
	Documents/test.go:36 +0x24

goroutine 8 [runnable]:
internal/poll.runtime_Semacquire(0x1400006dd68?)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/runtime/sema.go:67 +0x2c
internal/poll.(*fdMutex).rwlock(0x1400005c0c0, 0x14?)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/internal/poll/fd_mutex.go:154 +0xe0
internal/poll.(*FD).writeLock(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/internal/poll/fd_mutex.go:239
internal/poll.(*FD).Write(0x1400005c0c0, {0x14000192000, 0x17, 0x20})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/internal/poll/fd_unix.go:370 +0x48
os.(*File).write(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/os/file_posix.go:48
os.(*File).Write(0x1400000e018, {0x14000192000?, 0x17, 0x1400006df38?})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/os/file.go:175 +0x60
fmt.Fprintf({0x10478b5f8, 0x1400000e018}, {0x10474d571, 0x13}, {0x1400006df38, 0x1, 0x1})
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/print.go:225 +0x84
fmt.Printf(...)
	/opt/homebrew/Cellar/go/1.20.3/libexec/src/fmt/print.go:233
main.readFromMap({0x1400010dac0, 0x6})
	Documents/test.go:31 +0x108
main.main.func3()
	Documents/test.go:58 +0x58
created by main.main
	Documents/test.go:55 +0x3c
exit status 2

test.txt

@bcmills
Contributor

bcmills commented May 18, 2023

Those vet warnings are correct. You can't pass a sync.Map by value; pass it by pointer instead.

Go values are not like Java references. If you pass a Go struct type by value, each field is copied. In the case of a sync.Map, the copy has its own Mutex but shares a pointer to the underlying map.
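
For illustration, a minimal sketch of that fix applied to the getMapLength helper from the sample above (a drop-in replacement for that function, not the exact project code):

// getMapLength takes *sync.Map, so no lock value is copied: Range and Store
// now operate on the same Map value, the copylocks warnings go away, and so
// does the "concurrent map iteration and map write" crash.
func getMapLength(m *sync.Map) int {
	length := 0
	m.Range(func(key interface{}, value interface{}) bool {
		length++
		return true
	})
	return length
}

Call sites then pass a pointer instead, e.g. getMapLength(&m).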

@seankhliao
Member

closing as not a Go bug.

@seankhliao seankhliao closed this as not planned May 19, 2023