net/http: HTTP Client slows down over time until stop #66643
Comments
@Xpl0itU not commenting on the issue itself, but there's a bug in your code that may contribute to the effects you observe: notice that defer calls inside a loop only run when the enclosing function returns, not at the end of each iteration, so deferred cleanup piles up for the lifetime of the function.
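For reference, a minimal sketch of the defer-in-a-loop behavior being described (the `deferOrder` helper is illustrative, not from the project): deferred calls queue up until the enclosing function returns, then run in LIFO order.

```go
package main

import "fmt"

// deferOrder shows that defers queued in a loop all run only when the
// enclosing function returns, in LIFO order -- none run per iteration.
func deferOrder(n int) []int {
	var ran []int
	func() {
		for i := 0; i < n; i++ {
			i := i // capture the loop variable (pre-Go 1.22 semantics)
			defer func() { ran = append(ran, i) }()
		}
		// nothing has been appended yet at this point
	}()
	return ran
}

func main() {
	fmt.Println(deferOrder(3)) // [2 1 0]
}
```

So a `defer resp.Body.Close()` inside a retry loop keeps every body open until the whole function exits.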
To add to what @artyom said, each iteration of the outer loop is also leaking both a ticker and the associated goroutine. Also, there are two places in the inner loop where it seems like your intent was to do another iteration of the outer loop.
CC @neild.
Updated the code and there are still issues:

```go
func downloadFile(ctx context.Context, progressReporter ProgressReporter, client *http.Client, downloadURL, dstPath string, doRetries bool, buffer []byte) error {
	filePath := filepath.Base(dstPath)
	startTime := time.Now()
	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()
	isError := false
	updateProgress := func(downloaded *int64) {
		for range ticker.C {
			if progressReporter.Cancelled() {
				break
			}
			progressReporter.UpdateDownloadProgress(*downloaded, calculateDownloadSpeed(*downloaded, startTime, time.Now()), filePath)
		}
	}
	for attempt := 1; attempt <= maxRetries; attempt++ {
		isError = false
		req, err := http.NewRequestWithContext(ctx, "GET", downloadURL, nil)
		if err != nil {
			return err
		}
		req.Header.Set("User-Agent", "WiiUDownloader")
		req.Header.Set("Connection", "Keep-Alive")
		req.Header.Set("Accept-Encoding", "")
		resp, err := client.Do(req)
		if err != nil {
			return err
		}
		if resp.StatusCode != http.StatusOK {
			if doRetries && attempt < maxRetries {
				resp.Body.Close() // close the body before retrying so the connection is not leaked
				time.Sleep(retryDelay)
				continue
			}
			resp.Body.Close()
			return fmt.Errorf("download error after %d attempts, status code: %d", attempt, resp.StatusCode)
		}
		file, err := os.Create(dstPath)
		if err != nil {
			resp.Body.Close()
			return err
		}
		var downloaded int64
		// Note: this starts a new goroutine on every attempt, and each one
		// blocks on ticker.C forever once the ticker stops (Stop does not
		// close the channel), so retries leak goroutines.
		go updateProgress(&downloaded)
	Loop:
		for {
			select {
			case <-ctx.Done():
				resp.Body.Close()
				file.Close()
				return ctx.Err()
			default:
				n, err := resp.Body.Read(buffer)
				if err != nil && err != io.EOF {
					resp.Body.Close()
					file.Close()
					if doRetries && attempt < maxRetries {
						time.Sleep(retryDelay)
						isError = true
						break Loop
					}
					return fmt.Errorf("download error after %d attempts: %+v", attempt, err)
				}
				if n == 0 {
					resp.Body.Close()
					file.Close()
					break Loop
				}
				_, err = file.Write(buffer[:n])
				if err != nil {
					resp.Body.Close()
					file.Close()
					if doRetries && attempt < maxRetries {
						time.Sleep(retryDelay)
						isError = true
						break Loop
					}
					return fmt.Errorf("write error after %d attempts: %+v", attempt, err)
				}
				downloaded += int64(n)
			}
		}
		if !isError {
			break
		}
	}
	return nil
}
```
Are you sure this isn't the server throttling the response?
Yes, other programs that access the same server work just fine |
Can we get a complete/standalone reproducer for the issue?
I can't reproduce the issue locally right now, but some of my users can reproduce it with the software. Downloading a large file using the attached function should be enough.
Still happening with Go 1.22.2, at least on all major amd64 platforms (linux, windows, and darwin).
Without a standalone reproducer, I don't think we can do anything with this issue.
Fixed by tweaking my http.Client, here are the working settings:

```go
client := &http.Client{
	Transport: &http.Transport{
		Dial: (&net.Dialer{
			Timeout:   30 * time.Second,
			KeepAlive: 30 * time.Second,
		}).Dial,
		MaxIdleConns:          100,
		MaxIdleConnsPerHost:   100,
		MaxConnsPerHost:       100,
		IdleConnTimeout:       90 * time.Second,
		TLSHandshakeTimeout:   10 * time.Second,
		ResponseHeaderTimeout: 10 * time.Second,
		ExpectContinueTimeout: 1 * time.Second,
	},
}
```
Go version
go version go1.22.1 darwin/arm64
Output of `go env` in your module/workspace:
What did you do?
Here's the client:
Here are the functions:
What did you see happen?
For some users, file downloading starts at full speed but then slows over time until it stops entirely; this did not happen on Go 1.21.6.
What did you expect to see?
Speeds should be stable over time.