
slices: have Delete and others clear the tail #63393

Closed
Deleplace opened this issue Oct 5, 2023 · 41 comments

Comments

@Deleplace (Contributor)

Deleplace commented Oct 5, 2023

Proposal

a = slices.Delete(a, i, j)  should zero the (j - i) elements after the length of the returned slice

TL;DR zeroing the elements is error-prone, it should be taken care of by the stdlib function.

Chronology

2012  Go 1.0 released. It doesn’t have generics.

2022  Go 1.18 released. It has generics.

2022  The golang.org/x/exp/slices package has Delete.

2023  Delete is promoted to the new stdlib package slices.

Current implementation

slices.Delete does its job in place by

  • shifting the M = len(a) - j tail elements to the left, with copy
  • returning a reslice of the original slice, of smaller or equal length

See its source.
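
For reference, the pre-Go 1.22 implementation amounts to the following (a close paraphrase of the x/exp/slices code, not a verbatim copy):

// Delete as implemented before this proposal: shift the tail left and
// reslice, leaving the vacated elements untouched.
func Delete[S ~[]E, E any](s S, i, j int) S {
    _ = s[i:j] // bounds check

    return append(s[:i], s[j:]...)
}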

Problem: memory

If the elements have a reference type (pointer, map, slice, chan, string), or contain fields of a reference type, then the (j-i) elements at the end of the original slice are left untouched and keep referencing objects. Those objects will not be garbage collected, and they may enjoy a lifetime far exceeding the intent of the developer.

This is a memory leak.
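
A minimal demonstration of the retained reference (assuming the pre-proposal, Go 1.21 behavior of slices.Delete; the variable names are made up for the example):

package main

import "slices"

func main() {
    big := make([]byte, 1<<20) // a 1 MiB buffer we intend to discard
    s := [][]byte{big, nil, nil}
    s = slices.Delete(s, 0, 1)
    // s is now [nil nil], but the backing array is [nil nil big]:
    // the 1 MiB buffer stays reachable through the array and cannot
    // be garbage collected for as long as the array is alive.
    _ = s
}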

Impact

Slices containing only value types are not impacted. Their memory footprint is the same, regardless of any zeroing.

Some programs will have a memory leak without noticing and without serious consequences, when the objects are few and small.

Some programs will have serious memory leaks, and find it difficult to properly diagnose. The consequences are crashes (OOM), increased GC pressure, overall bad performance, higher costs associated with more memory utilization and more server instances, and higher costs associated with debugging and fixing the leak.

Indirect impact

Some developers do want to address this and write zero values themselves (see documentation recommendation below). But they risk writing incorrect code, turning a mild performance concern into a serious correctness problem.

Partial solution: documentation

The documentation of slices.Delete explicitly states:

Delete might not modify the elements s[len(s)-(j-i):len(s)]. If those elements contain pointers you might consider zeroing those elements so that objects they reference can be garbage collected.

Problem: zeroing is difficult

In order to properly prevent a memory leak, the burden is on the developer to:

  • figure out that there is a risk (memory leak) that requires action on their part,
  • determine if they should do some zeroing before calling Delete, or after calling Delete,
  • properly compute the indices of the elements to be zeroed,
  • figure out how to properly access the elements beyond the length of the returned slice.

This is error-prone.

When dealing with pointer zeroing, the Delete API is both hard to use and easy to misuse.
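
For illustration, here is the manual zeroing done wrong and done right (a sketch; the incorrect variant is a plausible mistake, not code from any particular program):

// Incorrect: clears the entire spare capacity, which may legitimately
// be owned and used by other slices sharing the backing array.
s = slices.Delete(s, i, j)
clear(s[len(s):cap(s)])

// Correct: remember the old length, then clear only the vacated elements.
oldLen := len(s)
s = slices.Delete(s, i, j)
clear(s[len(s):oldLen])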

Cognitive burden

Incorrect code may be written in good faith in a context where several non-trivial concepts must be properly understood by the developers.

  • correct utilization: the developers need to understand what a slice is and how slice headers work, in order to understand the API design “pass a slice as argument, get a slice as return value”

    • this is similar to the API of append
  • developers are implicitly expecting the stdlib to “do the right thing” (principle of least astonishment)

    • the perceived “right thing” is not universal: some want performance by default, some want correctness and safety by default

      • objects not being GCed (memory leak) feels like a correctness/safety issue, but the consequences are mostly a performance issue.
  • even for skilled developers, the conditions (listed in the previous section) for properly addressing the problem will too often not all be met

  • the need for such manual workarounds partially defeats the purpose of having a stdlib function.

Suggested implementation

// Delete removes the elements s[i:j] from s, returning the modified slice.
// Delete panics if s[i:j] is not a valid slice of s.
// Delete is O(len(s)-i), so if many items must be deleted, it is better to
// make a single call deleting them all together than to delete one at a time.
func Delete[S ~[]E, E any](s S, i, j int) S {
    _ = s[i:j] // bounds check

    s = append(s[:i], s[j:]...)
    clear(s[len(s):len(s)+j-i]) // zero/nil out the obsolete elements, for GC
    return s
}
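
For example, with the suggested implementation (p0 through p4 are placeholder *int values):

s := []*int{p0, p1, p2, p3, p4}
s = Delete(s, 1, 3) // removes p1 and p2
// s is [p0, p3, p4]; the backing array is [p0, p3, p4, nil, nil],
// so nothing keeps p1 and p2 alive through this slice anymore.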

Impacted API

The same change applies to these 5 functions:

  • slices.Delete
  • slices.DeleteFunc
  • slices.Compact
  • slices.CompactFunc
  • (edit) slices.Replace

Backward compatibility

Changing the behavior of slices.Delete is backward compatible. Its current documentation does not promise to zero or not zero the obsolete elements.

Cost

Runtime overhead of having slices.Delete zero the obsolete elements, per this benchmark:

  • When deleting 1 element (a common use case): ~1%
  • When deleting a large range of elements: ~30%

The GC work of "collecting" the obsolete objects once they become unreachable is expected to be very small, and is not specifically attributable to slices.Delete (it is just normal GC work).
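
The linked benchmark is not reproduced here; as a rough sketch of how such a comparison can be measured (names, sizes, and indices are arbitrary):

package slicesbench

import "testing"

func deleteNoClear(s []*int, i, j int) []*int {
    return append(s[:i], s[j:]...)
}

func deleteClear(s []*int, i, j int) []*int {
    oldLen := len(s)
    s = append(s[:i], s[j:]...)
    clear(s[len(s):oldLen])
    return s
}

func benchDelete(b *testing.B, del func([]*int, int, int) []*int, i, j int) {
    base := make([]*int, 1000)
    buf := make([]*int, 1000)
    b.ResetTimer()
    for n := 0; n < b.N; n++ {
        copy(buf, base) // restore the slice contents before each delete
        _ = del(buf, i, j)
    }
}

func BenchmarkDeleteOneNoClear(b *testing.B)  { benchDelete(b, deleteNoClear, 500, 501) }
func BenchmarkDeleteOneClear(b *testing.B)    { benchDelete(b, deleteClear, 500, 501) }
func BenchmarkDeleteManyNoClear(b *testing.B) { benchDelete(b, deleteNoClear, 100, 900) }
func BenchmarkDeleteManyClear(b *testing.B)   { benchDelete(b, deleteClear, 100, 900) }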

Pointers vs. Values

The proposal is mostly useful for slices whose elements are pointers, or contain pointers.

Should we try to keep the existing behaviour for slices of value elements (e.g. int, or a struct of values)?

Probably not, for 2 reasons:

  • Principle of least astonishment: if we zero the pointers, we should zero the values as well.
  • It is not possible to express in Go generic code “do this operation, but only if the type parameter E is a pointer or contains pointers”.

List ownership

We could argue that any use of a subslice s[i:j] always leaves elements not set to zero, before i and after j.

If slicing doesn’t do any zeroing, should Delete have to do it?

One major difference is that it makes sense to have various parts of a program use various slices (“views”, “windows”) on the same backing array. The parts need to be careful, as changes made by one may affect the others. Using append in such a context is subtle, with reallocation happening transparently. The ownership and responsibility are shared. Any number of agents that only read and reslice, without mutating the backing array contents, is always fine.

Delete is different because it is destructive by nature. No part of the program is supposed to make sense of “a slice where some elements were shifted, and some obsolete elements were left in place” by some other code. Only the slice returned by Delete makes sense, and any other slice of the same backing array can be regarded as obsolete. The caller of Delete de facto acquires ownership of the backing array.

What other languages do

Java

  • ArrayList.remove() nulls out the tail element: src
  • ArrayList.removeRange() nulls out the tail elements: src

C#

  • List<T>.RemoveAt() nulls out the tail element src
  • List<T>.RemoveRange() nulls out the tail elements: src

C++

  • Vector.erase() does not null out the tail element: demo

Edit: clarification about a[len(a):cap(a)]

It is out of scope of the current proposal to write anything to the original tail (the elements v17, v18, v19 in the diagram).

We must not assume ownership of the elements beyond the original length, and we must not zero them out. In theory, they might be used for legitimate purposes by other parts of the program.

@gopherbot gopherbot added this to the Proposal milestone Oct 5, 2023
@Deleplace Deleplace changed the title proposal: slices: clear the tail proposal: slices: have Delete clear the tail Oct 5, 2023
@thediveo

thediveo commented Oct 5, 2023

from my engine room: the lack of zeroing behavior is exactly why I still keep my own code in many places where I would really like to migrate to the stdlib.

@earthboundkid (Contributor)

I think this would be kind of weird because Delete returns a slice instead of taking a pointer, so doing extra work out of band feels off. Why not just add a new slices.ClearTail?

func ClearTail[E any, S ~[]E](s S) {
    clear(s[len(s):cap(s)])
}

@Deleplace (Contributor, Author)

Not really out of band, as Delete is expected to be destructive in-place. IMO the important consideration is that the tail is now "garbage for everyone", other slice headers pointing to the same memory should be regarded as invalidated.

ClearTail is nice to have but has 2 drawbacks in the context of Delete:

  • the burden of explicitly calling ClearTail falls on the user
  • ClearTail clears up to cap(s) instead of the old len(s), which is too much. Most often this is a mild performance penalty. In some rare cases (where the elements beyond len(s) are actually used elsewhere), it is a serious bug.

@earthboundkid (Contributor)

That's persuasive. It's probably fine to change the behavior here, since anything relying on the old behavior was already broken.

@thediveo

thediveo commented Oct 5, 2023

could this proposal please also be applied to the exp/slices version, so that code following the N and N-1 support strategy benefits as early as possible?

@AndrewHarrisSPU

Dropping an element is equivalent to nil-ing a pointer only when the memory (de-)allocation performed by the runtime and GC is precisely the behavior needed to correctly drop the element and its associated resources. This is the straightforward thing and often the case, but in every situation where it isn't true, would the proposed behavior of slices.Delete be destructively incorrect?

With the current behavior (and suggested by the current documentation, really) problems can be resolved by dropping elements following slices.Delete, with a local understanding of the correct semantics of dropping.

@merykitty

merykitty commented Oct 6, 2023

For clarification, std::vector<T, Alloc>::erase does not zero the content but it does destroy the objects, leaving the trailing elements as uninitialised memory.

This means that, while a std::vector<char>::erase does not zero the elements, std::vector<std::unique_ptr<T>>::erase will release the memory associated with those pointers.

@go101

go101 commented Oct 6, 2023

Same problem for slices.Replace.

@Deleplace (Contributor, Author)

@go101 thanks, I'll add Replace to the section "impacted API"

@Jorropo (Member)

Jorropo commented Oct 6, 2023

I think this only really makes sense if you are not expecting to reuse the space cleared out.
If I delete an element but then append, the zeroing slows down my program for no reason.

I think that in the cases where trailing values are a valid concern, something like #60136 or #63393 (comment) is a better solution.
The expected pattern would be:

s = slices.DeleteFunc(s, filter)
s = append(s, stuff...)
slices.ClearTail(s)

Or

slices.ClearTail(slices.DeleteFunc(s, filter))

Assuming Go 2, IMO a better solution would be for reslicing a slice beyond its length to zero the "new" elements; then we would not need to zero things when slicing down, and the GC would instead inspect the len field of the slice header to determine how many elements it should scan (or single-shot elements for pointers to elements).
The point of having the zeroing on extension is that if you are extending in an append, or extending quickly followed by a write, it would be easy for a compiler to figure out the zeroing is not necessary.
This would be at least two big breaking changes though (how reslicing works and how the GC views the lifetime of variably sized objects). 😞

@earthboundkid (Contributor)

earthboundkid commented Oct 6, 2023

slices.DeleteFunc(s, filter)
s = append(s, stuff...)
slices.ClearTail(s)

That should be s = slices.DeleteFunc(s, filter) etc.

Following what @Deleplace said above, if you wanted to do the clear yourself, you would need to write,

oldLen := len(s)
s = slices.DeleteFunc(s, filter)
clear(s[len(s):oldLen])

This way, you don't accidentally clear out some end capacity that doesn't belong to you.


A question I have is whether there is a cheap way to test whether a slice element is a "reference type" (pointer, interface, map, channel, slice). Clearing the tail only makes sense as an optimization to tell the garbage collector that certain items are free, and that only makes sense when the items are associated with memory not in the slice (i.e., it's a reference type). Using reflect for this test would probably (but maybe not?) be too slow, but maybe there's some runtime/reflectlite system that could be used to make the test cheap enough.
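
On the question of a cheap test: the public reflect API can express it, though not from within generic code without boxing a value. A recursive sketch (containsPointers is a hypothetical helper; a real version would cache the result per type):

import "reflect"

// containsPointers reports whether values of type t can hold references
// to memory outside the slice's backing array.
func containsPointers(t reflect.Type) bool {
    switch t.Kind() {
    case reflect.Pointer, reflect.UnsafePointer, reflect.Interface,
        reflect.Map, reflect.Chan, reflect.Slice, reflect.String, reflect.Func:
        return true
    case reflect.Array:
        return containsPointers(t.Elem())
    case reflect.Struct:
        for i := 0; i < t.NumField(); i++ {
            if containsPointers(t.Field(i).Type) {
                return true
            }
        }
    }
    return false
}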

@earthboundkid (Contributor)

earthboundkid commented Oct 6, 2023

One tricky thing is that a struct in a slice only needs to be cleared if it contains reference types itself. So struct { int ; int } is fine, but struct { *int ; *int } needs to be cleared.

@Deleplace (Contributor, Author)

I had the same thought about "no need to clear value elements"; however, introducing any difference in behaviour for

  • values
  • pointers
  • structs containing a mix of values and pointers

would violate the principle of least astonishment. Unexpected zeroing/not-zeroing behaviours lead to surprises and headaches. Also, half-zeroing a struct (just the pointer fields, not the number fields) would lead to corrupted state, which is dangerous.

@apparentlymart

How expensive is a clear in practice? My intuition about "zero values" in Go is that they are often (always?) physically a block of zeroed bytes of the type's size, in which case I would expect clear to be similar to a C memset, suggesting the possibility of optimized implementations using SSE, etc. Even if that isn't how it's implemented today, perhaps it could be in future.

If that were true, I would assert that it seems fine for a relatively "high-level" function like those described here to pay that light cost, and that those who have more demanding performance needs can write their own non-clearing Delete tailored to their specific situation.

The potential for a hidden memory leak seems more concerning to me than the cost of zeroing some memory.

@AndrewHarrisSPU

If that were true, I would assert that it seems fine for a relatively "high-level" function like those described here to pay that light cost, and that those who have more demanding performance needs can write their own non-clearing Delete tailored to their specific situation.

The potential for a hidden memory leak seems more concerning to me than the cost of zeroing some memory.

The non-clearing behavior of slices.Delete can be about more than recycling bits; it's also about cases where slices.Delete can't know how to properly release resources (maybe loosely similar to "the destructor isn't trivial" in ctor/dtor languages; plenty of surprising things here...). If I have a slice of io.WriteCloser, for example, currently slices.Delete allows properly Close-ing (with error handling) the elements of the "deleted" partition, later on and in the same scope as the slices.Delete call. The proposed behavior seems to put this out of scope?

@apparentlymart

That is a fair point, @AndrewHarrisSPU. Do you think it's common for folks to intentionally retain the "hidden" tail in a separate variable so they can perform cleanup afterwards?

My intuition would be that if you're using slices.Delete then your intention is to immediately discard (without running any cleanup functions) the values in the elements i:j but that you intend to keep the ones from j:len(s) that have now been copied over some or all of the i:j elements, and so it would seem strange to then want to run cleanup functions on the leftover j:len(s) copies which are now effectively aliased (assuming the current implementation, not this proposal). Calling cleanup functions on the ones in j:len(s) would presumably, in some cases at least, make the new copies also invalid.


The potential need to actually do something with the discarded portion makes me think about a new function:

// (This name is terrible and just a placeholder)
func DeleteWithDiscard[S ~[]E, E any](s S, i, j int) (new S, discarded S)

In this case I imagine that the first return value new matches the return value from Delete, while discarded is the portion that would be cleared if the original proposal were to be implemented.

Or to say it another way, Delete as proposed could be implemented in terms of DeleteWithDiscard like this:

func Delete[S ~[]E, E any](s S, i, j int) S {
    ret, toClear := DeleteWithDiscard(s, i, j)
    clear(toClear)
    return ret
}

...but those with more "interesting" cleanup needs could use DeleteWithDiscard directly to conveniently access the part that needs cleaning up and do whatever is needed to properly free those elements.
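
A sketch of how DeleteWithDiscard could be implemented under that contract (hypothetical, following the signature above; discarded is the vacated tail that Delete would clear):

func DeleteWithDiscard[S ~[]E, E any](s S, i, j int) (S, S) {
    _ = s[i:j] // bounds check
    oldLen := len(s)
    s = append(s[:i], s[j:]...)
    return s, s[len(s):oldLen]
}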

@earthboundkid (Contributor)

earthboundkid commented Oct 7, 2023

If you are using slices.Delete on a slice of io.Closers that aren't closed, your code is broken, and you can't fix it by fishing around in the tail of a deleted slice for things to close. What remains in the tail will depend on exactly what in the slice gets deleted. In most cases, the tail will contain a duplicate value from the non-deleted portion of the slice. It may also contain a subset of the deleted values. It won't be consistent and shouldn't be relied on for correctness.
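
Concretely (a sketch with placeholder values, assuming the pre-proposal behavior): after a delete, the tail holds a stale duplicate of a surviving element, so "cleaning up" the tail would corrupt live state.

s := []string{"a", "b", "c", "d"}
oldLen := len(s)
s = slices.Delete(s, 1, 2) // deletes "b"; s is ["a", "c", "d"]
tail := s[len(s):oldLen]
// Pre-proposal, tail is ["d"]: a duplicate of a kept element, not the
// deleted "b". Closing/releasing tail[0] would hit the live "d".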

@gazerro (Contributor)

gazerro commented Oct 19, 2023

I don't think that the use of slices.Delete is the cause of the broken code. The code is broken if the io.Closers are not properly closed. In fact, the use of slices.Delete makes it more evident during a review that the code is broken.

@adonovan (Member)

adonovan commented Nov 2, 2023

I like this proposal. Failing to clear pointers can increase the asymptotic cost of an operation (in live heap), whereas clearing them increases the CPU cost by only a small constant factor over the existing work Delete must do. Users can easily write their own delete logic in the rare cases when the clearing loop is the hot spot, or when they don't want the clearing semantics. (But if you care so much about performance at the micro-level, why are you shuffling slices around with Delete in the first place?)

@rsc (Contributor)

rsc commented Nov 2, 2023

Based on the discussion above, this proposal seems like a likely accept.
— rsc for the proposal review group

In package slices, Delete changes to clear (zero) the elements s[len(s):oldlen], to prevent confusion and especially accidentally retained memory.

@gazerro (Contributor)

gazerro commented Nov 2, 2023

In package slices, Delete changes to clear (zero) the elements s[len(s):oldlen], to prevent confusion and especially accidentally retained memory.

I would like to point out, to ensure it doesn't go unnoticed, that the proposal intends to have the functions slices.DeleteFunc, slices.Compact, slices.CompactFunc, and slices.Replace changed as well.

@Deleplace (Contributor, Author)

Indeed, 5 functions need to be modified.

Shall I make a CL?

@rsc (Contributor)

rsc commented Nov 10, 2023

No change in consensus, so accepted. 🎉
This issue now tracks the work of implementing the proposal.
— rsc for the proposal review group

In package slices, Delete, DeleteFunc, Compact, CompactFunc, and Replace change to clear (zero) the elements s[len(s):oldlen], to prevent confusion and especially accidentally retained memory.

@rsc rsc changed the title proposal: slices: have Delete clear the tail slices: have Delete clear the tail Nov 10, 2023
@rsc rsc modified the milestones: Proposal, Backlog Nov 10, 2023
@rsc rsc changed the title slices: have Delete clear the tail slices: have Delete and others clear the tail Nov 10, 2023
@rsc (Contributor)

rsc commented Nov 10, 2023

@Deleplace please go ahead

@gopherbot

Change https://go.dev/cl/541477 mentions this issue: slices: zero the slice elements discarded by Delete, DeleteFunc, Compact, CompactFunc, Replace.

@Deleplace (Contributor, Author)

The functions' documentation is now explicit about zeroing the elements between the new length and the original length.

Once the main CL is merged, I will make another CL to apply the same change to golang.org/x/exp/slices.

@gopherbot

Change https://go.dev/cl/543335 mentions this issue: slices: zero the slice elements discarded by Delete, DeleteFunc, Compact, CompactFunc, Replace.

gopherbot pushed a commit to golang/exp that referenced this issue Jan 19, 2024
…act, CompactFunc, Replace.

Backport from stdlib: to avoid memory leaks in slices that contain pointers, clear the elements between the new length and the original length.

Fixes golang/go#63393

Change-Id: I38bf64b27619d8067f2e95ce3c7952ec95ca55b8
Reviewed-on: https://go-review.googlesource.com/c/exp/+/543335
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Eli Bendersky <eliben@google.com>
@Deleplace (Contributor, Author)

Backport to golang.org/x/exp/slices: done

@dmitshur dmitshur modified the milestones: Backlog, Go1.22 Jan 20, 2024
@changkun (Member)

A bit late to the party, but we actually observed that this change breaks some production code. Here is an example:

package main

import (
	"fmt"
	"slices"
)

func main() {
	x := []int{0, 1, 2, 3, 4, 5}
	y := x
	x = slices.Delete(x, 4, 5)

	fmt.Println(x) // Go 1.21: [0 1 2 3 5];   Go 1.22: [0 1 2 3 5]
	fmt.Println(y) // Go 1.21: [0 1 2 3 5 5]; Go 1.22: [0 1 2 3 5 0]
}

It seems that some Go code may hold a reference y to the old slice x and assume the deleted value remains reachable through it. This introduces a behavior change for y[5]: Go 1.21 yields 5, but Go 1.22 yields 0.

It can get worse. In Go 1.21, the following code is fine:

	type foo struct{ x int }
	x := []*foo{&foo{}, &foo{}, &foo{}, &foo{}, &foo{}, &foo{}}
	y := x
	x = slices.Delete(x, 4, 5)
	fmt.Println(y[5].x) // 0

But in Go 1.22, it panics:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x480559]

See https://go.dev/play/p/tHsniEOWY2e

@randall77 (Contributor)

@changkun: All the changes you mention here are intended, or at least expected.
Could you be more specific about your production code? The examples you cite here don't look like production code.

@changkun (Member)

@randall77 It would be hard to paste the relevant production code here, but the idea is well demonstrated by the example code above. We have seen more than one case where production services crashed due to this behavior change. The fact that len(x) shrinks to 5 while len(y) remains 6 seems to be the root cause.

@randall77 (Contributor)

@changkun I think we need to see some realistic code to decide whether this is a serious issue or not. Generally it is not a great idea to have two slices to the same backing store and modify through one without any expectations of modification by the other.

For instance, in the first case you cite, instead of y ending in 5 5 it now ends in 5 0. It seems to me that it is just as likely to be buggy to have a repeated value as to have a zeroed value. What does your code look like, for which the former is OK but the latter is bad?

The fact that len(x) shrinks to 5 while len(y) remains 6 seems to be the root cause.

That's just how slices work. There's no way len(y) is going to change by doing any conceivable operation on x.

@Deleplace (Contributor, Author)

@changkun indeed, some tests that were passing in Go 1.21 will now fail in Go 1.22. This is expected and intended. We observed this internally at Google as well. A full blog post about this will be published this week on the Go blog.

While it is certainly no fun to break user code, I do believe that only incorrect code will break and need fixing. I tried to articulate this in the "List ownership" section above: continuing to use old slice values is mostly a mistake in the first place.

@changkun (Member)

changkun commented Feb 20, 2024

To clarify, nothing was stated to object to this change; the previous examples just put its consequences on the table.

Whether the code an engineer produced is buggy or not is a separate discussion and judgment; at the very least, nothing warned them not to write it. Users should continuously improve their testing infrastructure to catch such outages at an early stage, but this also requires the Go release process to spend additional effort communicating the potential consequences to users.

I believe I read about this proposal months back and failed to anticipate how it would impact user code as well.

@adonovan (Member)

To clarify, nothing was stated to object to this change; the previous examples just put its consequences on the table.

Whether the code an engineer produced is buggy or not is a separate discussion and judgment; at the very least, nothing warned them not to write it. Users should continuously improve their testing infrastructure to catch such outages at an early stage, but this also requires the Go release process to spend additional effort communicating the potential consequences to users.

I believe I read about this proposal months back and failed to anticipate how it would impact user code as well.

You are right that this is a user-visible change, and strictly speaking a backward-incompatible one. But as @randall77 pointed out, it's difficult to imagine an algorithm that would intentionally rely on the tail of the array being unmodified after a call to Delete, since this implies a very intimate and fragile coupling between the caller of Delete and the logic assuming the tail is unchanged. If I were to see such code during review, I would almost certainly call it out as a likely mistake, or at least an opportunity for clarification.

Could you post a reduced example of code that actually does this?

@DeedleFake

The most likely circumstance I can think of is code that works with subslices in a transparent manner, such that it doesn't know they're subslices of a larger slice. This is still inherently dangerous, though, as an incorrect append() could also affect other code unexpectedly. The correct thing to do in those cases would be to 3-slice the subslices to remove the excess capacity before passing them down, as sketched below. That prevents both types of potential problems.

I wonder if it would be possible to rig up a vet check to catch some cases of that.
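
A sketch of the three-index ("full") slice expression mentioned above (the names are made up):

parent := make([]int, 10, 16)
sub := parent[2:6:6] // len(sub) == 4 and cap(sub) == 4
// With the capacity capped, an append to sub must reallocate, so code
// holding sub can neither observe nor clobber parent's elements
// beyond index 6.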

@Deleplace (Contributor, Author)

@DeedleFake could you make a small playground snippet about this?

This may not 100% address your suggestion, but 2 positive notes:

  • Delete(s, i, j) does write zeros before len(s), but leaves s[len(s):cap(s)] intact. "Clearing the tail" does not extend up to the whole capacity. It is possible, and maybe legit, to have other slice variables own and use the portion beyond len(s).
  • There will be a vet check in Go 1.23 (cmd/vet: warn when ignoring the return value of slices.Delete and friends #62729) that fires when the return value of Delete is ignored

@DeedleFake

DeedleFake commented Feb 21, 2024

  • Delete(s, i, j) does write zeros before len(s), but leaves s[len(s):cap(s)] intact. "Clearing the tail" does not extend up to the whole capacity. It is possible, and maybe legit, to have other slice variables own and use the portion beyond len(s).

Oh, so it does. For some reason, I was under the impression that it cleared the remaining capacity, though now that I think about it that really doesn't make sense. Never mind, then, and yeah, anything adversely affected by this behavior is almost definitely a bug anyways.

@changkun (Member)

Could you post a reduced example of code that actually does this?

Without digging into the exact business/product goal the relevant cases want to achieve, I don't directly see them as buggy: they met expectations for a long period, even if the code itself might be an unexpected use.

The relevant code holds part of a slice, say B, and iterates over it without knowing that the original slice A is being modified. The elements are structs that contain an ID and other fields used to identify desired elements. When iterating over B, the code finds target elements based on the information inside each struct, and does not necessarily find an element on its first attempt. Still, the overall process retries until all target elements are eventually found and processed, because a final iteration may have only one element left to iterate and delete.

@thediveo

Meeting the expectation doesn't exclude being buggy. Your vague description looks to me like a disaster waiting to happen. Getting away with shoplifting also meets the business expectation, but it is still unlawful.

@earthboundkid (Contributor)

The code before must not have cared about the slice being sorted or even containing particular elements. Maybe it's just doing best-effort background processing of some kind. In that case, wouldn't a simple if s[i] == nil { return } be a sufficient fix?
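
That check might look like this in context (a sketch; items and process are placeholder names, and skipping rather than returning is one possible variant):

for _, it := range items { // items: a possibly stale view of the slice
    if it == nil {
        continue // cleared by slices.Delete in Go 1.22+; skip it
    }
    process(it)
}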
