
runtime: slice capacity do not reduce #16995

Closed

lvhuat opened this issue Sep 5, 2016 · 1 comment

lvhuat commented Sep 5, 2016

What version of Go are you using (go version)?

go1.7

What operating system and processor architecture are you using (go env)?

amd64

What did you do?

I don't know if this is a problem. When I use the amqp library, I found that the slice used to buffer deliveries read from I/O keeps growing, and its capacity never comes back down.

package main

import (
    "fmt"
    "log"
    "sync"
    "time"

    "github.com/streadway/amqp"
)

var queue []*amqp.Delivery
var waitGroup sync.WaitGroup

func bufferDeliveries(in chan *amqp.Delivery, out chan amqp.Delivery) {
    defer waitGroup.Done()
    var queueIn = in

    for delivery := range in {
        select {
        case out <- *delivery:
            // delivered immediately while the consumer chan can receive
        default:
            queue = append(queue, delivery)
        }

        for len(queue) > 0 {
            select {
            case out <- *queue[0]:
                queue = queue[1:]
            case delivery, open := <-queueIn:
                if open {
                    queue = append(queue, delivery)
                } else {
                    // stop receiving to drain the queue
                    queueIn = nil
                }
            }
        }
    }

    close(out)
}

func main() {
    in := make(chan *amqp.Delivery)
    out := make(chan amqp.Delivery)

    waitGroup.Add(3)
    go bufferDeliveries(in, out)

    go func() {
        defer waitGroup.Done()
        counter := 0
        for {
            counter++
            //log.Println("send", counter)
            in <- &amqp.Delivery{
                Body: []byte(fmt.Sprintf("%d", counter)),
            }
            if counter == 1000000 {
                close(in)
                return
            }
        }
    }()

    go func() {
        defer waitGroup.Done()
        pre := 0
        var counter int
        for d := range out {
            fmt.Sscanf(string(d.Body), "%d", &counter)
            if pre >= counter {
                log.Println("bad order", pre, counter, string(d.Body))
                panic("bad order")
            }
            pre = counter
            <-time.After(time.Microsecond)
        }
    }()
    waitGroup.Wait()
    log.Println("the last:", cap(queue), len(queue))
    <-time.After(time.Hour)
}

What did you expect to see?

I expected the slice capacity to come back down (or at least that there would be some way to free the memory) once the buffered deliveries were drained.

What did you see instead?

After running for a while, the slice capacity never reduces.
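A minimal sketch of the same effect without the amqp dependency (the delivery type and the sizes here are illustrative, not taken from the original program): pointers appended to a slice and then dropped by reslicing from the front stay reachable through the backing array, so the heap does not shrink even after a GC.

package main

import (
    "fmt"
    "runtime"
)

type delivery struct {
    body [1024]byte // stand-in for a buffered message
}

// heapMB forces a GC and reports the live heap in MiB.
func heapMB() uint64 {
    runtime.GC()
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    return m.HeapAlloc >> 20
}

func main() {
    var queue []*delivery
    for i := 0; i < 100000; i++ {
        queue = append(queue, &delivery{})
    }
    fmt.Printf("filled:  ~%d MiB live, len=%d cap=%d\n", heapMB(), len(queue), cap(queue))

    // Drain by reslicing only. The backing array still holds every pointer
    // that was ever appended, so none of the deliveries can be collected.
    for len(queue) > 1 {
        queue = queue[1:]
    }
    fmt.Printf("drained: ~%d MiB live, len=%d cap=%d\n", heapMB(), len(queue), cap(queue))
}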

randall77 changed the title from "Slice capacity do not reduce" to "runtime: slice capacity do not reduce" on Sep 5, 2016
@randall77 (Contributor)

It is true that s = s[1:] does not deallocate the s[0] slot.
It may help to do

queue[0] = nil
queue = queue[1:]

That won't deallocate the queue[0] entry itself, but it will allow what it pointed to to be garbage collected.

This is a fundamental property of slices. There's no way to deallocate the entry itself. You can copy to another slice if you care about the space.
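For illustration, a sketch of both suggestions (popFront and compact are hypothetical helpers written for this example, not part of any library): clear the slot before reslicing so the element can be collected, and occasionally copy the live elements into a fresh slice so the whole old backing array, consumed front included, can be freed.

package main

import "fmt"

type delivery struct {
    body []byte
}

// popFront removes the first element. Setting queue[0] = nil lets the
// *delivery it pointed to be garbage collected even though the array
// slot itself stays allocated.
func popFront(queue []*delivery) ([]*delivery, *delivery) {
    d := queue[0]
    queue[0] = nil
    return queue[1:], d
}

// compact copies the live elements into a fresh, right-sized slice so the
// old backing array (including the already-consumed front) becomes garbage.
// In a real buffer you would call this periodically, e.g. after every N pops.
func compact(queue []*delivery) []*delivery {
    fresh := make([]*delivery, len(queue))
    copy(fresh, queue)
    return fresh
}

func main() {
    var queue []*delivery
    for i := 0; i < 100000; i++ {
        queue = append(queue, &delivery{body: []byte{byte(i)}})
    }
    for len(queue) > 10 {
        var d *delivery
        queue, d = popFront(queue)
        _ = d // consume the delivery
    }
    queue = compact(queue)
    fmt.Println("len:", len(queue), "cap:", cap(queue)) // cap is small again
}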

Closing this as not actionable.

golang locked and limited conversation to collaborators Sep 5, 2017