Published on: March 25, 2024
A while ago I published a post that explains how you can use AsyncStream to build your own asynchronous sequences in Swift Concurrency. Since writing that post, a new way to create AsyncStream objects has been introduced that allows for more convenient stream building.

In this post, I'll expand on what we've already covered in the previous post so that we don't have to go over everything from scratch.

By the end of this post you'll understand the new and more convenient makeStream method that was added to AsyncStream. You'll learn how and when it makes sense to build your own async streams, and I'll reiterate some of their gotchas to help you avoid mistakes that I've made in the past.
Reviewing the old situation
While I won't explain the old approach in detail, I think it makes sense to go over it briefly to refresh your memory. And if you weren't familiar with the old approach, it will help put the improvements in Swift 5.9 into perspective a bit more.

Pre-Swift 5.9, we could create our AsyncStream objects as follows:
```swift
let stream = AsyncStream(unfolding: {
    return Int.random(in: 0..<Int.max)
})
```
The approach shown here is the simplest way to build an async stream, but it's also the least flexible. In short, the closure that we pass to unfolding is called every time we're expected to asynchronously produce a new value for our stream. Once the value is produced, you return it so that the for loop iterating over this sequence can use it. To terminate your async stream, you return nil from your closure to indicate that there are no further values to be produced.
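For example, here's a minimal, self-contained sketch (the counter and stream names are my own, not from the earlier post) of an unfolding-based stream that produces five values and then terminates by returning nil:

```swift
// A finite unfolding-based stream: the closure runs once per requested
// value, and returning nil terminates the stream.
var current = 0

let countStream = AsyncStream(unfolding: {
    guard current < 5 else { return nil }
    defer { current += 1 }
    return current
})

var received: [Int] = []
for await number in countStream {
    received.append(number)
}

print(received) // [0, 1, 2, 3, 4]
```

Once the closure returns nil, the for loop ends and the stream can't produce any further values.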
This approach lacks flexibility and isn't a great fit for bridging things like delegate-based code into Swift Concurrency.

A more useful and flexible way to build an AsyncStream that can bridge a callback-based API like CLLocationManagerDelegate looks as follows:
```swift
import CoreLocation

class AsyncLocationStream: NSObject, CLLocationManagerDelegate {
    lazy var stream: AsyncStream<CLLocation> = {
        AsyncStream { (continuation: AsyncStream<CLLocation>.Continuation) -> Void in
            self.continuation = continuation
        }
    }()

    var continuation: AsyncStream<CLLocation>.Continuation?

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            continuation?.yield(location)
        }
    }
}
```
This code does a little more than just build an async stream, so let's go over it in some more detail.

First, there's a lazy var that's used to create an instance of AsyncStream. When we create the async stream, we pass the AsyncStream initializer a closure. This closure receives a continuation object that we can use to push values onto our AsyncStream. Because we're bridging a callback-based API, we need access to the continuation from outside of the initial closure, so we assign the continuation to a var on the AsyncLocationStream object.

Next, we have the didUpdateLocations delegate method. From that method, we call yield on the continuation to push every received location onto our AsyncStream, which allows anybody that's writing a for loop over the stream property to receive locations. Here's what that could look like in a simplified example:
```swift
let locationStream = AsyncLocationStream()

for await value in locationStream.stream {
    print("location received", value)
}
```
While this all works perfectly fine, there's that optional continuation that we're dealing with. Luckily, the new makeStream method takes care of this.
Creating a stream with makeStream
In essence, a makeStream-based AsyncStream works identically to the one you saw earlier.

We still work with a continuation that's used to yield values to whoever is iterating our stream. To end the stream, we call finish on the continuation, and to handle somebody cancelling their Task or breaking out of the for loop, you can still use onTermination on the continuation to perform cleanup. We'll take a look at onTermination in the next section.
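As a quick standalone illustration (using Int values for brevity, not from the original post), makeStream hands you the stream and its continuation as a tuple, and the stream's default unbounded buffering policy means values yielded before anyone iterates are held onto rather than dropped:

```swift
// makeStream returns the stream and its continuation as a tuple.
let (stream, continuation) = AsyncStream.makeStream(of: Int.self)

// The default buffering policy is .unbounded, so these values are
// buffered until somebody starts iterating the stream.
continuation.yield(1)
continuation.yield(2)
continuation.finish() // without this, the loop below would never end

var sum = 0
for await number in stream {
    sum += number
}

print(sum) // 3
```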
For now, let's focus on seeing how makeStream allows us to rewrite the example you just saw to be a bit cleaner.
```swift
import CoreLocation

class AsyncLocationStream: NSObject, CLLocationManagerDelegate {
    let stream: AsyncStream<CLLocation>
    private let continuation: AsyncStream<CLLocation>.Continuation

    override init() {
        let (stream, continuation) = AsyncStream.makeStream(of: CLLocation.self)
        self.stream = stream
        self.continuation = continuation
        super.init()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations {
            continuation.yield(location)
        }
    }
}
```
We've written a little more code than we had before, but the code we have now is cleaner and more readable.

Instead of a lazy var, we can now define two let properties, which fits much better with what we're trying to do. Additionally, we create our AsyncStream and its continuation on a single line of code instead of needing a closure to lift the continuation from our closure onto our class.

Everything else stays pretty much the same. We still call yield to push values onto our stream, and we still use finish to end our continuation (we're not calling that in the snippet above).

While this is all very convenient, AsyncStream.makeStream comes with the same memory and lifecycle related issues as its older counterparts. Let's take a brief look at these issues and how to fix them in the next section.
Avoiding memory leaks and infinite loops
When we're iterating an async sequence from within a task, it's reasonable to expect that at some point the object we're iterating goes out of scope and that our iteration stops.

For example, if we're leveraging the AsyncLocationStream you saw before from within a ViewModel, we'd want the location updates to stop automatically whenever the screen, its ViewModel, and the AsyncLocationStream go out of scope.

In reality, these objects will go out of scope, but any task that's iterating the AsyncLocationStream's stream won't end until the stream's continuation is explicitly ended. I've explored this phenomenon more in depth in this post where I dig into lifecycle management for async sequences.

Let's look at an example that demonstrates this effect. We'll look at a dummy LocationProvider first.
```swift
import Combine
import Foundation

class LocationProvider {
    let locations: AsyncStream<UUID>
    private let continuation: AsyncStream<UUID>.Continuation
    private var cancellable: AnyCancellable?

    init() {
        let stream = AsyncStream.makeStream(of: UUID.self)
        locations = stream.stream
        continuation = stream.continuation
    }

    deinit {
        print("location provider is gone")
    }

    func startUpdates() {
        cancellable = Timer.publish(every: 1.0, on: .main, in: .common)
            .autoconnect()
            .sink(receiveValue: { [weak self] _ in
                print("will send")
                self?.continuation.yield(UUID())
            })
    }
}
```
The object above creates an AsyncStream just like you saw before. When we call startUpdates, we start simulating receiving location updates. Every second, we send a new unique UUID onto our stream.

To make the test realistic, I've added a MyViewModel object that would normally serve as the interface between the location provider and the view:
```swift
class MyViewModel {
    let locationProvider = LocationProvider()

    var locations: AsyncStream<UUID> {
        locationProvider.locations
    }

    deinit {
        print("view model is gone")
    }

    init() {
        locationProvider.startUpdates()
    }
}
```
We're not doing anything special in this code, so let's move on to creating the test scenario itself:
```swift
var viewModel: MyViewModel? = MyViewModel()

let sampleTask = Task {
    guard let locations = viewModel?.locations else { return }

    print("before for loop")
    for await location in locations {
        print(location)
    }
    print("after for loop")
}

Task {
    try await Task.sleep(for: .seconds(2))
    viewModel = nil
}
```
In our test, we set up two tasks. One that we'll use to iterate over our AsyncStream, printing some strings before and after the loop.

We have a second task that runs in parallel. This task waits for two seconds and then sets the viewModel property to nil. This simulates a screen going away and the view model being deallocated because of it.

Let's look at the printed output for this code:
```
before for loop
will send
B9BED2DE-B929-47A6-B47D-C28AD723FCB1
will send
FCE7DAD1-D47C-4D03-81FD-42B0BA38F976
view model is gone
location provider is gone
```
Notice how we're not seeing after for loop printed here.

This means that while the view model and location provider both get deallocated as expected, we're not seeing the for loop end like we'd want it to.

To fix this, we need to make sure that we finish our continuation when the location provider is deallocated:
```swift
class LocationProvider {
    // ...

    deinit {
        print("location provider is gone")
        continuation.finish()
    }

    // ...
}
```
In the deinit for LocationProvider, we can call continuation.finish(), which fixes the leak that we just saw. If we run the code again, we'll see the following output:
```
before for loop
will send
B3DE2994-E0E1-4397-B04E-448047315133
will send
D790D3FA-FE40-4182-9F58-1FEC93335F18
view model is gone
location provider is gone
after for loop
```
That fixed our for loop sitting and waiting for a value that would never come (and our Task being stuck forever as a result). However, we're not out of the woods yet. Let's change the test setup a little. Instead of deallocating the view model, let's try cancelling the Task that we created to iterate the AsyncStream.
```swift
var viewModel: MyViewModel? = MyViewModel()

let sampleTask = Task {
    guard let locations = viewModel?.locations else { return }

    print("before for loop")
    for await location in locations {
        print(location)
    }
    print("after for loop")
}

Task {
    try await Task.sleep(for: .seconds(2))
    sampleTask.cancel()
}
```
Running the code now results in the following output:
```
before for loop
will send
0B6E962F-F2ED-4C33-8155-140DB94F3AE0
will send
1E195613-2CE1-4763-80C4-590083E4353E
after for loop
will send
will send
will send
will send
```
So while our loop ended, the location updates don't stop. We can add an onTermination closure to our continuation to be notified of an ended for loop (which happens when you cancel a Task that's iterating an async sequence):
```swift
class LocationProvider {
    // ...

    func startUpdates() {
        cancellable = Timer.publish(every: 1.0, on: .main, in: .common)
            .autoconnect()
            .sink(receiveValue: { [weak self] _ in
                print("will send")
                self?.continuation.yield(UUID())
            })

        continuation.onTermination = { [weak self] _ in
            self?.cancellable = nil
        }
    }
}
```
With this code in place, we can now handle both a task getting cancelled as well as our LocationProvider being deallocated.

Whenever you're writing your own async streams, it's important that you test what happens when the owner of your continuation is deallocated (you'll usually want to finish your continuation) and what happens when the for loop that iterates your stream ends (you'll want to perform cleanup as needed). Making mistakes here is quite easy, so be sure to keep an eye out!
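To see both halves of the pattern in one place, here's a condensed, Combine-free sketch. The Ticker type and its Task-based timer are my own illustration, not code from the post, but it applies the same two fixes: finish the continuation in deinit, and cancel the producing work in onTermination:

```swift
final class Ticker {
    let stream: AsyncStream<Int>
    private let continuation: AsyncStream<Int>.Continuation
    private var tickerTask: Task<Void, Never>?

    init() {
        let stream = AsyncStream.makeStream(of: Int.self)
        self.stream = stream.stream
        self.continuation = stream.continuation
    }

    deinit {
        // Ends any for loop that's still iterating when the Ticker goes away.
        continuation.finish()
    }

    func start() {
        tickerTask = Task { [continuation] in
            var tick = 0
            while !Task.isCancelled {
                continuation.yield(tick)
                tick += 1
                try? await Task.sleep(for: .milliseconds(100))
            }
        }

        continuation.onTermination = { [weak self] _ in
            // Runs when the stream is finished or when the consuming task
            // cancels / breaks out of its for loop; stop producing ticks.
            self?.tickerTask?.cancel()
        }
    }
}
```

A consumer that breaks out of its for loop after a few values triggers onTermination, which stops the producing task instead of letting it tick forever.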
In Summary
In this post, you saw the new and more convenient AsyncStream.makeStream method in action. You learned that this method replaces a less convenient AsyncStream initializer that forced us to manually store a continuation outside of the closure, which would usually lead to having a lazy var for the stream and an optional for the continuation.

After showing you how you can use AsyncStream.makeStream, you learned about some of the gotchas that come with async streams in general. I showed you how you can test for these gotchas, and how you can fix them to make sure that your streams end and clean up as and when you expect.