
Handling Race Conditions in iOS with DispatchSemaphore

The Key Point 🌟

DispatchSemaphore is basically a lock system. It’s like having a limited number of keys to a room, where only a few people can access the room at the same time. This mechanism is useful in scenarios where multiple threads access a shared variable or data structure. It can be used to prevent overload, data corruption, and many of the typical issues caused by race conditions.

At the end of this article, you will:

  • Understand what semaphores are
  • Know what problems they help us solve
  • Understand how DispatchSemaphore works

Utility And Value of Resources

Time and resources are the most valuable commodities in computer science.
The main factor that determines the utility of any computer program is its ability to efficiently allocate resources to logical processes on time. And a resource is really just anything that is useful and adds value to a process or system. Going by that, the need to process things asynchronously and manage resources effectively is of absolute importance.

When building iOS applications, you will run into scenarios where your code needs to execute tasks sequentially (i.e. certain tasks must finish before the next task begins). This is a traditional computer science problem that often leads to race conditions, especially when those tasks are asynchronous in nature.

Race Conditions

A race condition occurs when two or more threads can access shared data and they try to change it at the same time.

In the course of writing iOS apps, we inevitably run into tasks that require asynchronous operations. This is usually not a problem, until those async tasks try to concurrently access a shared resource. We prevent race conditions in native iOS through a number of mechanisms, one being NSLock, another being DispatchSemaphore together with dispatch queues.

Defining the Problem

Let’s say you have to make a couple of API calls to your remote server to fetch some token data before you can begin fetching and processing other data. You would probably want to perform these tasks asynchronously yet sequentially. That means waiting for each API call to finish before firing off the next one, since they share dependencies and resources.

This act of sequencing asynchronous tasks and preventing race conditions is exactly what DispatchSemaphore helps us solve.
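As a sketch of that pattern, the snippet below uses a semaphore created with a value of 0 to block until a token request finishes before firing the dependent request. Note that `fetchToken` and `fetchProfile` are hypothetical stand-ins for real network calls, simulated here with delayed dispatches:

```swift
import Foundation

// Hypothetical stand-in for an async token API call (not a real API).
func fetchToken(completion: @escaping (String) -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
        completion("token-123")
    }
}

// Hypothetical stand-in for a call that depends on the token.
func fetchProfile(token: String, completion: @escaping (String) -> Void) {
    DispatchQueue.global().asyncAfter(deadline: .now() + 0.1) {
        completion("profile-for-\(token)")
    }
}

let semaphore = DispatchSemaphore(value: 0)
var token = ""
var result = ""

fetchToken { t in
    token = t
    semaphore.signal() // token is ready; release the waiting thread
}
semaphore.wait() // block here until fetchToken signals

fetchProfile(token: token) { profile in
    result = profile
    semaphore.signal()
}
semaphore.wait() // block again until the dependent call finishes

print(result)
```

Starting the semaphore at 0 means the very first `wait()` blocks until a `signal()` arrives, which is what turns two independent async calls into a strict sequence.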

What’s a Semaphore?

In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive.

That long blob is what you will get from Wikipedia if you google semaphores. But what does it mean in plain English, without the comp-sci mumbo jumbo? It’s basically a flag system for signaling when a resource may be accessed or is restricted. By that definition, it’s a great way to prevent race conditions, since we can use it to restrict access to a limited resource that is not yet available for use.

DispatchSemaphore

What DispatchSemaphore really enables us to do is perform asynchronous tasks in a synchronous order. It lets us perform one async task and wait for its completion before executing the next async task.

DispatchSemaphore is like a traffic cop in our app, directing when any method (car) can access a specific resource (road). In doing so, we prevent the collisions that would occur from multiple processes racing to access a single resource.

How DispatchSemaphore Works: Waiting and Signaling 🚦

DispatchSemaphore uses a counter value to determine whether a thread can access a shared resource. The counter value changes whenever signal() or wait() is called.

Think of the counter value as the maximum amount of cars allowed to access a specific road at the same time. The wait() method is how we check to see if there’s a red light or green light, and the signal() method is how we change the light to green and allow the next car in line to access the road.

wait() method

To use the semaphore effectively, we should call wait() before accessing the shared resource. This checks if the resource is available and, if not, the thread will wait. After using the resource, we call signal() to notify the semaphore that we are done.

When calling wait(), the semaphore counter is decremented by 1. If the resulting value is less than zero, the thread is put on hold. If the resulting value is greater than or equal to zero, the code will execute without waiting.
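A minimal sketch of those counter semantics. It uses wait(timeout:), the non-blocking variant, so we can observe a blocked wait without hanging the program:

```swift
import Foundation

// Initial value 1: one "key" is available.
let semaphore = DispatchSemaphore(value: 1)

// Counter goes 1 -> 0, which is not less than zero, so this succeeds immediately.
let first = semaphore.wait(timeout: .now())

// The counter would drop below zero, so this wait blocks and then times out.
let second = semaphore.wait(timeout: .now() + 0.1)

print(first == .success)   // true
print(second == .timedOut) // true

semaphore.signal() // restore the count before the semaphore is deallocated
```

wait(timeout:) returns a DispatchTimeoutResult, which makes the "would this thread have been put on hold?" question directly observable.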

signal() method

When we are done accessing a shared resource it’s customary to call the signal() method, which increments the semaphore’s counter by 1. If the previous value was less than zero, the function will wake the oldest waiting thread. If the previous value is greater than or equal to zero, there are no waiting threads in the queue.
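The waking behavior can be sketched like this: a background thread blocks on wait(), and a later signal() from the main flow wakes it. The second semaphore, `done`, is only there so the script can wait for the background thread to finish:

```swift
import Foundation

let semaphore = DispatchSemaphore(value: 0)
let done = DispatchSemaphore(value: 0)
var woken = false

DispatchQueue.global().async {
    semaphore.wait() // counter is 0, so this thread is put on hold
    woken = true
    done.signal()
}

Thread.sleep(forTimeInterval: 0.1)
semaphore.signal() // the previous value was below zero, so this wakes the waiting thread
done.wait()

print(woken) // true
```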

Real World Use Case

Let’s say you want to download a bunch of movies from a remote server concurrently, but you want to limit how many download requests you send to your server so you don’t overload it. You can use DispatchSemaphore to manage and cap these concurrent operations.

class DownloadManager {
    
    let urls: [URL]
    let semaphore: DispatchSemaphore
    // Serial queue so the completion handlers don't race on the array itself.
    private let dataQueue = DispatchQueue(label: "DownloadManager.dataQueue")
    var downloadedData: [Data?] = []
    
    init(urls: [URL], maxConcurrentDownloads: Int) {
        self.urls = urls
        self.semaphore = DispatchSemaphore(value: maxConcurrentDownloads)
    }
    
    func startDownloads() {
        DispatchQueue.global(qos: .background).async {
            for url in self.urls {
                self.semaphore.wait() // blocks once maxConcurrentDownloads downloads are in flight
                self.downloadData(from: url) { data in
                    self.dataQueue.async {
                        self.downloadedData.append(data)
                    }
                    self.semaphore.signal() // frees a slot for the next download
                }
            }
        }
    }

    func downloadData(from url: URL, completion: @escaping (Data?) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, error in
            guard let data = data, error == nil else {
                completion(nil)
                return
            }
            completion(data)
        }.resume()
    }
}

OK, What’s Going On Here?

The DownloadManager class manages downloading data from multiple URLs. In the init method, we initialize the list of URLs and a semaphore with an initial value of maxConcurrentDownloads. This value represents the maximum number of concurrent downloads we want to allow. In the startDownloads method, we loop over the URLs and call the downloadData method for each URL. Before each call, we call wait on the semaphore to limit the number of concurrent downloads. Once a download completes, we append the data to an array and signal the semaphore to allow another download.

By using DispatchSemaphore, we ensure that only a limited number of movie downloads occur at the same time. This sort of batched operation prevents us from overloading the server or the user’s cellular data, which is especially important when we need to download a large number of files or when the downloads are resource-intensive.

Let’s consider a simpler example. Suppose we have a shared resource, a variable that represents the number of times a button has been tapped. If two or more threads access this variable simultaneously, they can overwrite each other’s changes and cause data corruption. To prevent this, we can use DispatchSemaphore to ensure that only one thread can access the variable at a time.

Here’s how we can implement this solution:

class ViewController: UIViewController {
    
    var tapCount: Int = 0
    let semaphore = DispatchSemaphore(value: 1)
    
    @IBAction func tappedButton(_ sender: UIButton) {
        DispatchQueue.global(qos: .background).async {
            self.semaphore.wait() // decrements the count to 0; other threads now block
            self.tapCount += 1
            let count = self.tapCount // read the count while we still hold the semaphore
            self.semaphore.signal() // increments the count back to 1
            print("Button tapped \(count) times.")
        }
    }
    
}

In this code, we define a semaphore with an initial value of 1, indicating that only one thread can access the resource at a time. In the tappedButton method, we execute the code that increments the tap count asynchronously on a background thread. Before accessing the shared variable, we call the wait method on the semaphore to decrease its value by one. This method blocks the thread if the semaphore’s value is zero, waiting for another thread to signal the semaphore. Once the semaphore is available, the thread increments the tap count, signals the semaphore with the signal method, and prints the current count.

By using DispatchSemaphore, we ensure that each thread accesses the shared variable in a mutually exclusive way, avoiding race conditions. The semaphore acts as a gatekeeper, allowing only one thread at a time to access the resource. If a thread tries to access the resource while another thread is using it, it will wait until the semaphore signals it is available.

TLDR

DispatchSemaphore is a synchronization mechanism in iOS that limits access to a shared resource. It works like a lock, where only a limited number of threads can access the shared resource at the same time. This helps to prevent race conditions and ensures that the resource is used efficiently.

It’s like having a limited number of keys to a room, where only a few people can access it at the same time. This mechanism can be useful in scenarios where multiple threads access a shared variable or data structure, or when managing concurrent operations like network requests and file downloads. By using DispatchSemaphore, we can build more robust and efficient apps.

This post is licensed under CC BY 4.0 by the author.