- Threads allow execution of code at the same time
- CPU cores can each execute a single thread at any given time
- Maintaining code invariants is more difficult with concurrency
If you use a dispatch queue, you can enqueue Swift closures onto it. The dispatch queue brings up a thread to carry out that work. An ordinary thread has its own run loop, and the main thread additionally has the main run loop and the main queue.
If several asynchronous work items are enqueued on a dispatch queue, the queue brings up a worker thread and processes them one at a time.
To process a synchronous work item, execution crosses over from the worker thread to the thread that submitted the item and is waiting on it; once it finishes, asynchronous items are processed on the dispatch queue's worker thread again. (A small sketch of the difference follows below.)
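A minimal sketch of that difference, with a hypothetical queue label: the async closure runs later on a worker thread, while the sync closure runs before control returns to the caller.

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.demo") // hypothetical label

queue.async {
    // Runs on a worker thread brought up by the queue, some time after submission.
    print("async item")
}

queue.sync {
    // The caller waits here; this closure finishes before sync returns.
    print("sync item")
}
```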
- Create a Dispatch Queue to which you submit work
- Dispatch Queues execute work items in FIFO order
- Use `.async` to execute your work on the queue
let queue = DispatchQueue(label: "com.example.imagetransform")
queue.async {
    let smallImage = image.resize(to: rect)
}
- Dispatch main queue executes all items on the main thread
- Simple to chain work between queues
let queue = DispatchQueue(label: "com.example.imagetransform")
queue.async {
    let smallImage = image.resize(to: rect)
    DispatchQueue.main.async {
        imageView.image = smallImage
    }
}
- Thread pool will limit concurrency
- Worker threads that block can cause more to spawn
- Choosing the right number of queues to use is important
- Identify areas of data flow in your application
- Split into distinct subsystems
- Queues at subsystem granularity
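A minimal sketch of queues at subsystem granularity, with hypothetical subsystem names: each subsystem funnels its work through one serial queue instead of creating a queue per task.

```swift
import Dispatch

// Hypothetical subsystems; the labels are illustrative only.
enum Subsystem {
    static let networking = DispatchQueue(label: "com.example.subsystem.networking")
    static let database   = DispatchQueue(label: "com.example.subsystem.database")
}

// All networking work goes through the networking queue, and all database
// work through the database queue, which keeps the number of queues
// (and therefore threads) bounded.
Subsystem.networking.async {
    // fetch data ...
    Subsystem.database.async {
        // persist it ...
    }
}
```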
You can create a DispatchGroup very easily.
let group = DispatchGroup()
queue.async(group: group) { ... }
queue2.async(group: group) { ... }
queue3.async(group: group) { ... }
group.notify(queue: DispatchQueue.main) { ... }
- Can use subsystem serial queues for mutual exclusion
- Use `.sync` to safely access properties from subsystems
- Be aware of "lock ordering" introduced between subsystems
- Acquiring them in a cycle such as 1 -> 2 -> 3 -> 1 must be avoided: it deadlocks (see the sketch after the code below)
var count: Int {
    return queue.sync { self.connections.count }
}
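A minimal sketch of the lock-ordering hazard mentioned above, with hypothetical subsystem queues: if one code path nests sync calls as A then B while another nests them as B then A, each path can end up holding one queue while waiting for the other.

```swift
import Dispatch

let subsystemA = DispatchQueue(label: "com.example.subsystemA") // hypothetical
let subsystemB = DispatchQueue(label: "com.example.subsystemB") // hypothetical

// Path 1 acquires A, then B.
func pathOne() {
    subsystemA.sync {
        subsystemB.sync { /* touch both subsystems */ }
    }
}

// Path 2 acquires B, then A. If pathOne() and pathTwo() run concurrently,
// each can block inside its outer sync waiting for the other queue: deadlock.
// Picking one global order (always A before B) avoids the cycle.
func pathTwo() {
    subsystemB.sync {
        subsystemA.sync { /* touch both subsystems */ }
    }
}
```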
- QoS provides explicit classification of work
- Indicates developer intent
- Affects execution properties of your work
- User Interactive
- User Initiated
- Utility
- Background
- Use `.async` to submit work with a specific QoS class
- Dispatch helps resolve priority inversions
- Create single-purpose queues with a specific QoS class
queue.async(qos: .background) {
    print("Maintenance work")
}
queue.async(qos: .userInitiated) {
    print("Button tapped")
}
- By default `.async` captures execution context at time of submission
- Create `DispatchWorkItem` from closures to control execution properties
- Use `.assignCurrentContext` to capture current QoS at time of creation
let item = DispatchWorkItem(flags: .assignCurrentContext) {
    print("Hello WWDC 2016!")
}
queue.async(execute: item)
- Use `.wait` on work items to signal that this item needs to execute
- Dispatch elevates priority of queued work ahead
- Waiting with a `DispatchWorkItem` gives ownership information (see the sketch below)
- Semaphores and Groups do not admit a concept of ownership
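A minimal sketch of waiting on a work item, with a hypothetical queue and payload: because the work item records who is waiting, Dispatch can raise the priority of the items queued ahead of it, which a semaphore cannot do.

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.render") // hypothetical label
let item = DispatchWorkItem {
    // produce something the caller needs right now
}
queue.async(execute: item)

// Blocking wait on the work item itself. Unlike waiting on a semaphore,
// this gives Dispatch ownership information it can use to resolve the
// priority inversion.
item.wait()
```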
(Synchronization is not part of the language in Swift 3)
- Global variables are initialized atomically
- Class properties are not atomic
- Lazy properties are not initialized atomically
That is why we need to know how to synchronize. If the synchronization points are chosen badly, the app can crash or data can end up corrupted. (A small sketch of one such hazard follows below.)
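A minimal sketch of the lazy-property hazard, using a hypothetical type: the first access triggers initialization, so two threads reading the property concurrently is a data race unless access is synchronized (for example through a serial queue, as shown later).

```swift
final class ImageCache { // hypothetical type for illustration
    // Lazy initialization is not atomic: if two threads read lookupTable
    // before it has been initialized, the initializer may run twice or the
    // property may be observed in an inconsistent state.
    lazy var lookupTable: [String: Int] = self.buildTable()

    private func buildTable() -> [String: Int] {
        return [:] // stands in for expensive setup
    }
}
```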
Recommended session: Thread Sanitizer and Static Analysis
- The Darwin module exposes traditional C lock types
- Correct use of C struct based locks such as `pthread_mutex_t` is incredibly hard
- `Foundation.Lock` can be used safely because it is a class
- Derive an Objective-C base class with struct based locks as ivars
@implementation LockableObject {
    os_unfair_lock _lock;
}
- (instancetype)init ...;
- (void)lock   { os_unfair_lock_lock(&_lock); }
- (void)unlock { os_unfair_lock_unlock(&_lock); }
@end
An even better approach than using locks is the one below!
- Use `DispatchQueue.sync(execute:)`
- harder to misuse than traditional locks, more robust
- better instrumentation (Xcode, assertions, ...)
class MyObject {
    private var internalState: Int
    private let internalQueue: DispatchQueue
    var state: Int {
        get {
            return internalQueue.sync { internalState }
        }
        set (newState) {
            internalQueue.sync { internalState = newState }
        }
    }
}
This pattern is simple, but it can be extended to cover many different situations; one possible extension is sketched below.
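One possible extension of the pattern, sketched with a hypothetical counter: read-modify-write operations run entirely inside one sync block on the same serial queue, so they stay atomic instead of racing between the getter and the setter.

```swift
import Dispatch

class Counter { // hypothetical example extending the pattern above
    private var value = 0
    private let queue = DispatchQueue(label: "com.example.counter")

    var current: Int {
        return queue.sync { value }
    }

    // The whole read-modify-write happens inside a single sync block,
    // so no other access through the queue can interleave with it.
    func increment(by amount: Int = 1) -> Int {
        return queue.sync { () -> Int in
            self.value += amount
            return self.value
        }
    }
}
```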
- GCD lets you express several preconditions
- Code is running on a given queue
- Code is not running on a given queue
dispatchPrecondition(.onQueue(expectedQueue))
dispatchPrecondition(.notOnQueue(unexpectedQueue))
- Single threaded setup: create the object and set its properties
- `activate` the concurrent state machine
- `invalidate` the concurrent state machine
- Single threaded deallocation
class BusyController: SubsystemObserving {
    init(...) { ... }
}
class BusyController: SubsystemObserving {
    init(...) { ... }
    func activate() {
        DataTransform.sharedInstance.register(observer: self, queue: DispatchQueue.main)
    }
}
class BusyController: SubsystemObserving {
    func systemStarted(...) { ... }
    func systemDone(...) { ... }
}
class BusyController: SubsystemObserving {
    deinit {
        DataTransform.sharedInstance.unregister(observer: self)
    }
}
class BusyController: SubsystemObserving {
    private var invalidated: Bool = false
    func invalidate() {
        dispatchPrecondition(.onQueue(DispatchQueue.main))
        invalidated = true
        DataTransform.sharedInstance.unregister(observer: self)
    }
    func systemStarted(...) {
        if invalidated { return }
    }
    deinit {
        precondition(invalidated)
    }
}
- Attributes and target queue
- Source handlers
let q = DispatchQueue(label: "com.example.queue", attributes: [.autoreleaseWorkItem])
let source = DispatchSource.read(fileDescriptor: fd, queue: q)
source.setEventHandler { /* handle your event here */ }
source.setCancelHandler { close(fd) }
- Properties of dispatch objects must not be mutated after activation
- Queues can also be created inactive
extension DispatchObject {
    func activate()
}
let queue = DispatchQueue(label: "com.example.queue", attributes: [.initiallyInactive])
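A small sketch of how an inactive queue might be used (assumed usage based on the `activate()` extension above): work submitted before activation does not run until the queue is activated.

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.queue", attributes: [.initiallyInactive])

// Safe to submit now; nothing executes while the queue is still inactive.
queue.async { print("runs only after activate()") }

// Finish configuring the queue while it is inactive
// (for example with setTarget(queue:)), then bring it to life.
queue.activate()
```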
- Sources require explicit cancellation
- Event monitoring is stopped
- Cancellation handler runs
- All handlers are deallocated
extension DispatchSource {
    func cancel()
}
let source = DispatchSource.read(fileDescriptor: fd, queue: q)
source.setCancelHandler { close(fd) }
- GCD Objects expect to be in a defined state at deallocation
- Activated
- Not suspended
- Organize your application around data flows into independent subsystems
- Synchronize state with Dispatch Queues
- Use the active/invalidate pattern