Kotlin Coroutines by Tutorials

Second Edition · Android 10 · Kotlin 1.3 · Android Studio 3.5

Section I: Introduction to Coroutines

4. Suspending Functions
Written by Filip Babić

So far, you’ve learned a lot about coroutines. You’ve seen how to launch coroutines and perform asynchronous work without the overhead of thread allocation or the risk of memory leaks. However, the foundation of coroutines is the ability to suspend code, control its flow at will, and return values from synchronous and asynchronous operations with the same syntax and sequential code structure.

In this chapter, you’ll learn more about how suspendable functions work internally. You’ll see how to convert existing code, which relies on callbacks, into suspendable functions, which are called in the same way as regular, blocking functions. Throughout it all, you’ll learn what the most important piece of the coroutines puzzle is.

Suspending vs. non-suspending

Up until now, you’ve learned that coroutines rely on the concept of suspending code and suspending functions. Suspended code is based on the same concepts as regular code, except the system has the ability to pause its execution and continue it later on. But when you’re using two functions, a suspendable and a regular one, the calls seem pretty much the same.

If you go a step further and duplicate a function you use, but add the suspend modifier keyword at the start, you can call both functions with the same parameters. You’d have to wrap the suspendable function in a launch block, because that’s how the Kotlin Coroutines API is built, but the actual function call doesn’t change.
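
For example, here’s a minimal sketch of that idea. Both printHello() functions are hypothetical and only serve to show that the call syntax is the same; the suspendable one simply has to run inside a coroutine builder such as launch:

import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch

// A regular, blocking function.
fun printHello(name: String) = println("Hello, $name!")

// The same function, with the suspend modifier added.
suspend fun printHelloSuspend(name: String) = println("Hello, $name!")

fun main() {
  printHello("Filip") // called directly

  GlobalScope.launch {
    printHelloSuspend("Filip") // same call syntax, but inside a coroutine
  }

  Thread.sleep(100) // keep the process alive long enough for the coroutine
}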

The system differentiates these two functions by the suspend modifier at compile time, but where and how do these functions behave differently, and how does each of them work with respect to the suspension mechanism in Kotlin coroutines? The answer can be found by analyzing the bytecode each of the functions generates, and by explaining how the call stack works in both cases. You’ll start by analyzing the regular, non-suspendable variant first.

Analyzing a regular function

To follow the code in this chapter, import this chapter’s starter project using IntelliJ: select Import Project, navigate to the suspending-functions/projects/starter folder and select the suspending_functions project.

If you open up Main.kt, in the starter project, you’ll notice a small main function. It’s calling a simple, regular, non-suspendable function, which doesn’t rely on callbacks or coroutines. There will be four different variants of this function. This variant is the most rudimentary, so let’s analyze it first:

fun getUserStandard(userId: String): User {
  Thread.sleep(1000)

  return User(userId, "Filip")
}

The function takes in one parameter: the userId. It puts the current thread to sleep for a second, to mimic a long-running operation. After that, it returns a User. In reality, the function is simple, and there are no hidden mechanisms at work here. Analyze the bytecode it generates by selecting Tools ▶︎ Kotlin ▶︎ Show Kotlin Bytecode. The Kotlin Bytecode window opens, and by pressing the Decompile button, you can see the generated code, which should look something like this:

@NotNull
public static final User getUserStandard(@NotNull String userId) {
  Intrinsics.checkParameterIsNotNull(userId, "userId");
  Thread.sleep(1000L);
  return new User(userId, "Filip");
}

After inspecting it, you can see that it doesn’t differ much from the original code. It’s completely straightforward and does what the code says it does.

The only additions to the code are the null-checks and annotations the compiler adds to enforce the non-null type system. Once the program starts this function, it checks that the parameters are not null, and returns a user after one second.

This function is clean and simple, but the problem lies in the Thread.sleep(1000) call. If you call the function on the main thread, you’ll effectively freeze your UI for a second. It’s much better to implement this using a callback, and to create a new thread for the long-running operation. That’s exactly the second example: see how you’d implement this using a callback.

Implementing the function with callbacks

A better solution to this problem is a function which takes in a callback as a parameter. This callback serves as a means of notifying the program that the user value is ready for use. Furthermore, the function creates a separate thread of execution, to offload the main thread.

To do this, replace getUserStandard() with the following code:

fun getUserFromNetworkCallback(
    userId: String,
    onUserReady: (User) -> Unit) {
  thread {
    Thread.sleep(1000)

    val user = User(userId, "Filip")
    onUserReady(user)
  }
  println("end")
}

Update the main function to the following:

fun main() {
  getUserFromNetworkCallback("101") { user ->
    println(user)
  }
  println("main end")
}

Run the bytecode analyzer again, and you should see the following output:

public static final void getUserFromNetworkCallback(
@NotNull final String userId,
@NotNull final Function1 onUserReady) {
  Intrinsics.checkParameterIsNotNull(userId, "userId");
  Intrinsics.checkParameterIsNotNull(onUserReady, "onUserReady");
  ThreadsKt.thread$default(
  false,
  false,
  (ClassLoader)null,
  (String)null,
  0,
  (Function0)(new Function0 () {
    // $FF: synthetic method
    // $FF: bridge method
    public Object invoke() {
      this.invoke();
      return Unit.INSTANCE;
    }

    public final void invoke () {
      Thread.sleep(1000L);
      User user = new User(userId, "Filip");
      onUserReady.invoke(user);
    }
  }), 31, (Object)null);
  

  String var2 = "end";
  System.out.println(var2);
}

It’s quite a big change compared to the previously generated piece of code. Again, the system does a series of null-checks to enforce the type system. After that, it creates a new thread, and within the thread’s public final void invoke(), it calls the wrapped code. The code itself doesn’t change much from the last example, but now it relies on a thread and a callback.

Once the system runs getUserFromNetworkCallback(), it creates a thread. Once the thread is fully set up, it runs the block of code, and propagates the result back using the callback. If you run the code above, you’ll get the following result:

end
main end
User(id=101, name=Filip)

This means the main function can indeed finish before getUserFromNetworkCallback() does. The thread it starts lives on after the main function completes, and so does the code it runs. This version is a bit better than the last example, since it offloads the work from the main thread, using the callback to finally consume the data. But the problem is that the code you use to build up a value can throw an exception, which means you’d have to wrap it in a try/catch block. Ideally, the try/catch block would sit at the actual place the value is created. However, if you catch an exception there, how do you propagate it back to the calling code?

This is usually done by giving the callback passed to the function a slightly different signature, allowing it to pass either a value or an exception. See how to handle both of the paths in which the function can end.

Handling happy and unhappy paths

When programming, you usually have something called a happy path. It’s the course of action your program takes when everything goes smoothly. The opposite is the unhappy path, which is when things go wrong. In the example above, if things went wrong, you wouldn’t have any way of handling that case from within the callback. You’d either have to wrap the entire function call in a try/catch block, or catch exceptions from within the thread function. The former is a bit ugly, as you’d really want to handle all possible paths in the same place. The latter isn’t much better, as all you can pass to the callback is a value, so you’d have to pass either a nullable value or an empty object, and go from there.

To make this functionality available and a bit cleaner, programmers define the callback as a two-parameter lambda, with the first parameter being the value, if there is one, and the second being the error, if one occurred. The signature of the function and its callback looks like this, so replace the code in Main.kt:

fun getUserFromNetworkCallback(
    userId: String,
    onUserResponse: (User?, Throwable?) -> Unit) {
  thread {

    try {
      Thread.sleep(1000)
      val user = User(userId, "Filip")

      onUserResponse(user, null)
    } catch (error: Throwable) {
      onUserResponse(null, error)
    }
  }
}

The callback can now accept either a value or an error. Whichever path is taken, its parameter will be valid and non-null, while the other parameter will be null, telling you that the path it represents didn’t happen. When you look at the bytecode, by pressing the Decompile button in the Kotlin Bytecode window, you should see the following:

public static final void getUserFromNetworkCallback(
@NotNull final String userId,
@NotNull final Function2 onUserResponse) {
  Intrinsics.checkParameterIsNotNull(userId, "userId");
  Intrinsics.checkParameterIsNotNull(onUserResponse, "onUserResponse");
  ThreadsKt.thread$default(
  false,
  false,
  (ClassLoader)null,
  (String)null,
  0,
  (Function0)(new Function0 () {
    // $FF: synthetic method
    // $FF: bridge method
    public Object invoke() {
      this.invoke();
      return Unit.INSTANCE;
    }

    public final void invoke () {
      try {
        Thread.sleep(1000L);
        User user = new User(userId, "Filip");
        onUserResponse.invoke(user, (Object)null);
      } catch (Throwable var2) {
        onUserResponse.invoke((Object)null, var2);
      }

    }
  }), 31, (Object)null);
}

The code hasn’t changed that much; it just wraps everything in a try/catch and passes either the pair of (value, null) or (null, error) back to the caller. Head back to main(), and change the code to the following:

fun main() {
  getUserFromNetworkCallback("101") { user, error ->
    user?.run(::println)

    error?.printStackTrace()
  }
}

If there is a non-null user value, you can print it out or do something else with it.

On the other hand, if there is an error, you can print its stack trace, check the error type and so on. This approach is much better than the previous ones, but there’s still one problem with it: it relies on callbacks, so if you needed three or four different requests and values, you’d have to build that dreaded “Callback Hell” staircase. Additionally, there’s the overhead of allocating a new Thread every time you call a function like this.
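
To get a feel for that staircase, here’s a small, self-contained sketch with hypothetical fetchUser(), fetchRepos() and fetchCommits() stubs. Every additional value needs another level of nesting, and in real code each level would also need its own error handling:

import kotlin.concurrent.thread

// Hypothetical callback-based calls, stubbed out so the sketch compiles.
fun fetchUser(id: String, callback: (String) -> Unit) =
    thread { callback("user-$id") }

fun fetchRepos(user: String, callback: (List<String>) -> Unit) =
    thread { callback(listOf("$user-repo")) }

fun fetchCommits(repo: String, callback: (List<String>) -> Unit) =
    thread { callback(listOf("$repo-commit")) }

fun main() {
  // The "Callback Hell" staircase: each value nests in the previous callback.
  fetchUser("101") { user ->
    fetchRepos(user) { repos ->
      fetchCommits(repos.first()) { commits ->
        println(commits)
      }
    }
  }

  Thread.sleep(500) // keep the process alive for the background threads
}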

Analyzing a suspendable function

The caveats found in the examples with callbacks can be remedied with the use of coroutines. Review the changes you need to make to the example above, to improve it even further:

  • Remove the callback and implement the example with coroutines.
  • Provide efficient error handling.
  • Remove the new Thread allocation overhead.

To overcome all these obstacles, you’ll learn another function from the Coroutines API — suspendCoroutine(). This function allows you to manually create a coroutine and handle its control state and flow, unlike the launch block, which just defines the way a coroutine is built and takes care of everything behind the scenes.

But, before venturing into suspendCoroutine(), analyze what happens when you just add the suspend modifier to an existing function. Add another function to the Main.kt file, with the following signature:

suspend fun getUserSuspend(userId: String): User {
  delay(1000)

  return User(userId, "Filip")
}

This function is very similar to the first example, except you added the suspend modifier, and instead of putting the thread to sleep you call delay() - a suspendable function which suspends the coroutine for a given amount of time. Given these changes, you’re probably thinking the difference in bytecode can’t be that big, right?

Well, the bytecode, which you can get by pressing the Decompile button in the Kotlin Bytecode window, is the following:

@Nullable
public static final Object getUserSuspend(
@NotNull String userId,
@NotNull Continuation var1) {
  Object $continuation;
  label28: {
    if (var1 instanceof <undefinedtype>) {
      $continuation = (<undefinedtype>)var1;
      if ((((<undefinedtype>)$continuation).label & Integer.MIN_VALUE) != 0) {
        ((<undefinedtype>)$continuation).label -= Integer.MIN_VALUE;
        break label28;
      }
    }

    $continuation = new ContinuationImpl(var1) {
      // $FF: synthetic field
      Object result;
      int label;
      Object L$0;

      @Nullable
      public final Object invokeSuspend(@NotNull Object result) {
        this.result = result;
        this.label |= Integer.MIN_VALUE;
        return MainKt.getUserSuspend((String)null, this);
      }
    };
  }

  Object var2 = ((<undefinedtype>)$continuation).result;
  Object var4 = IntrinsicsKt.getCOROUTINE_SUSPENDED();
  switch(((<undefinedtype>)$continuation).label) {
    case 0:
      if (var2 instanceof Failure) {
        throw ((Failure)var2).exception;
      }

      ((<undefinedtype>)$continuation).L$0 = userId;
      ((<undefinedtype>)$continuation).label = 1;
      if (DelayKt.delay(1000L, (Continuation)$continuation) == var4) {
        return var4;
      }
      break;
    case 1:
      userId = (String)((<undefinedtype>)$continuation).L$0;
      if (var2 instanceof Failure) {
        throw ((Failure)var2).exception;
      }
      break;
    default:
      throw new IllegalStateException("call to 'resume' before 'invoke' with coroutine");
  }

  return new User(userId, "Filip");
}

This massive block of code is hugely different from the previous examples, and it’s a behemoth compared to the very first example you saw. Go over it one step at a time, to get a sense of what’s happening here:

  • One of the first things you’ll notice is the extra parameter to the function — the Continuation. It forms the entire foundation of coroutines, and it’s the most important thing that separates suspendable functions from regular ones. Continuations allow functions to work in the suspended mode. They allow the system to go back to the originating call site of a function after it has suspended it. You could say that Continuations are just callbacks for the system or the program currently running, and that by using continuations, the system knows how to navigate the execution of functions and the call stack.
  • That being said, all functions actually have a hidden, internal Continuation they’re tied to. The system uses it to navigate around the call stack and the code in general. However, suspendable functions have an additional instance, which they use so that they can be suspended and the program can continue executing, finally using that second Continuation to navigate back to the suspendable function’s call site or to receive its result.
  • The rest of the code first checks which state the continuation is in, since each suspendable function can create multiple Continuation states. Each state describes one flow the function can take. For example, if you call delay(1000) in a suspendable function, you’re actually creating another point of execution, which finishes in one second and returns to the originating point — the line at which delay() was called.
  • The code then wraps the incoming continuation and inspects its label. If the label is zero, the function hasn’t suspended yet: it stores the local state, the userId, bumps the label to one, to mark that it’s about to pass delay(), and calls delay(). If delay() suspends, the whole function returns for now, and will be resumed later to continue with the rest of the code.
  • Finally, if the label is one — the largest index in the continuation-stack, so to speak — it means the function has resumed after delay(), and that it’s ready to serve you the value — the User. If anything went wrong up to that point, the system throws an exception.

There’s another, default case, which just throws an exception if the system tries to resume() a continuation or execution flow whose function hasn’t actually been invoked. This can sometimes happen when a child Job finishes after its parent. It’s a default, fallback mechanism for cases which are extremely rare. If you use your coroutines carefully, the way they’re supposed to be used, parent Jobs will always wait for their children, and this shouldn’t happen.

Briefly, the system uses continuations as small state machines and internal callbacks, so that it knows how to navigate through the code, which execution flows exist, and at which points it should suspend and later resume. The state is described using the label, and there can be as many states as there are suspension points in the function.
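
To make the label idea a bit more concrete, here’s a small, hypothetical suspendable function with two suspension points, annotated with the states the generated state machine would roughly go through. It reuses the project’s User class, and the comments are only a conceptual guide, not real generated code:

import kotlinx.coroutines.delay

suspend fun getUserWithPosts(userId: String): String {
  // label 0: function entry, runs until the first suspension point
  delay(1000)                      // suspension point #1, resumes with label 1
  val user = User(userId, "Filip")

  // label 1: resumed after the first delay, runs until the next suspension point
  delay(1000)                      // suspension point #2, resumes with label 2
  val posts = listOf("post-1", "post-2")

  // label 2: resumed after the second delay, produces the final value
  return "${user.name} has ${posts.size} posts"
}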

To call the newly created function, you can use the next snippet of code:

fun main() {
  GlobalScope.launch {
    val user = getUserSuspend("101")

    println(user)
  }
  
  Thread.sleep(1500)
}

The function call looks just like the first example. The difference is that it’s suspendable, so you can run it in a coroutine, offloading the main thread. You also rely on the internal threads from the Coroutines API, so there’s no additional overhead. The code is sequential, even though it could be asynchronous. And you can use try/catch blocks at the call site, even though the value could be produced asynchronously. All the points from the previous example have been addressed!
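
For example, you could change main() to wrap the suspendable call in a try/catch block at the call site, just like you would with blocking code. A minimal sketch, using the functions from this chapter:

import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch

fun main() {
  GlobalScope.launch {
    // The try/catch sits at the call site, even though getUserSuspend() could
    // produce its value asynchronously behind the scenes.
    try {
      val user = getUserSuspend("101")
      println(user)
    } catch (error: Throwable) {
      error.printStackTrace()
    }
  }

  Thread.sleep(1500)
}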

Changing code to suspendable

Another question is when you should migrate existing code to suspendable functions and coroutines. This is a fairly subjective question, but there are still some objective guidelines you can follow to determine whether you’re better off with coroutines or with standard mechanisms.

Generally speaking, if your code is filled with complex threading and often allocates new threads to do the work you need, without the ability to use a fixed pool of threads instead of creating new threads as you go, you should migrate to coroutines. The performance benefits are visible immediately, as the Coroutines API already has predefined threading mechanisms which make it easy to switch between threads and distribute multiple pieces of work across them.

This often coincides with the first reason to switch: if you’re creating new threads for asynchronous or long-running operations, you’re probably also abusing callbacks heavily, because the easiest way to communicate between threads is through callbacks. And if you’re using callbacks, you likely have problems with code styling, readability and the cognitive load needed to understand the business logic behind the functions. In that case, you should try to migrate your code to coroutines as well.

The problem comes when some API isn’t yours to change, so you can’t modify the source code. Let’s say you have the following code, but it’s coming from an external library:

fun readFile(path: String, onReady: (File) -> Unit) {
  Thread.sleep(1000)
  // some heavy operation

  onReady(File(path))
}

This function forces you to use a callback, even though you might have a better way to handle the long-running or asynchronous operation. But you could easily wrap this function to work with suspendCoroutine():

suspend fun readFileSuspend(path: String): File =
    suspendCoroutine {
      readFile(path) { file ->
        it.resume(file)
      }
    }

This code is perfectly fine: if it manages to read a file, it passes the result to the coroutine. If something goes wrong, it throws an exception, which you can catch at the call site. Having the ability to completely wrap possibly asynchronous operations with coroutines is extremely powerful. But if your functions rely on callbacks to constantly produce values, like subscribing to sockets, then coroutines like these don’t really make sense. You’re better off implementing such mechanisms with the Channel or Flow APIs, which you’ll learn more about in “Chapter 11: Channels” and “Chapter 14: Beginning with Coroutines Flow”.

Elaborating continuations

Having first-class continuations is the key concept which differentiates a standard function from a suspendable one. But what is a continuation, after all? Every time a program calls a function, it is added to the program’s call-stack. This is a stack of all the functions, in the order they were called, which are currently held in memory and haven’t finished yet. Continuations manipulate this execution flow, and in turn help handle the call-stack.

You’ve already learned that a Continuation is, in fact, a callback, but implemented at a very low system level. A more precise explanation would be that it’s an abstract wrapper around the program’s control state. It holds the means to control how and when the program will execute further, and what its result will be — an exception or a value.

Once a function finishes, the program takes it off the stack and proceeds with the next function. The trick is how the system knows where to return after each function is executed. This information is held within the aforementioned Continuation. Each continuation holds a little information about the context in which the function was called: the local variables, the parameters the function was passed, the thread it was called on, and so on. Using that information, the system can simply rely on the continuation to tell it where it needs to be when a function ends.

Take a look at the lifecycle of functions, and of a Continuation, from the function call to its finish.

Living in the stack

When a program first starts, its call-stack has only one entry — the initial function, usually called main(). This is because within it, no other functions have been called yet. The initial function is important, because when the program reaches its end, it calls back to the continuation of main(), which completes the program, and notifies the system to release it from memory.

As the program lives, it calls other functions, adding them to the stack.

Call stack with Continuation

So, if you had the code fun main() {}, the lifecycle of the program-level continuation would be contained within the braces of the main function. But when another function is called, the first thing the system does is create a new Continuation for the new function. It adds some information to the new continuation, such as which function is its parent and what its Continuation object is — in this case, main(). It additionally passes information about which line of code the function was called at, which arguments it was called with, and what its return type should be.

Examine what happens with the following code snippet:

fun main() {
  val numbers = listOf(1, 2, 5)
}

  • The system creates a continuation, which will live within listOf().

  • Initially, it knows that it’s been called at the first line of main(), so it can return to the appropriate position in code when finished.

  • Next, it knows that its parent is main(). This gives listOf() a way to finish the entire program, propagating calls all the way up to the initial Continuation. This can happen, for example, when an exception occurs.

  • Finally, it knows that the parameter passed to listOf() is a variable-argument array with the values 1, 2 and 5, and that at the end of the function, you should receive back a List<Int>.

  • With all of this information, it navigates the function execution and lifecycle, from the calling point, to the return statement.

Looking at it at a deeper level, it’s just like declaring a local variable, calling an initializer function with a pointer to that variable so you can set the value elsewhere, in listOf(), and then using a goto statement to return to the line after the initializer call, with the variable prepared for use.

Another analogy which can help explain continuations is video games. Most video games have things called checkpoints. When you go on an adventure, pursuing a quest or some other task at hand (which, in computing, would be like calling a function), you have to cover some distance and complete a smaller set of tasks. When you’re done, you can go back to your checkpoint and finish your quest. On the other hand, if something bad happens and you fail the mission in the game (similar to throwing an exception), you always have the ability to reload the game and restart from the checkpoint. You can achieve similar behavior by wrapping a function in a try/catch block, as you can effectively return to the checkpoint and start over.

Handling the continuation

In readFileSuspend(), you used suspendCoroutine() from the Coroutines API. It’s a top-level function which allows you to create coroutines, just like launch(), but specifically for returning values rather than launching work. Another distinct thing about suspendCoroutine() is that it takes in a lambda argument of the type block: (Continuation<T>) -> Unit. This means you can handle a Continuation as a first-class citizen, calling functions on the object as you please. This allows for manual control-state and control-flow manipulation.

The functions available on a Continuation are resume(), resumeWith() and resumeWithException(). You also have access to the CoroutineContext, via continuation.context, but you’ll learn about contexts later on, in “Chapter 6: Coroutine Context.”

Analyzing the Continuation further, resume() passes down a successful value of type T, whichever type you’re trying to return from the coroutine. You use it when you deem the conditions in the coroutine valid and want to go back to the rest of the code. resumeWithException() takes in a Throwable, in case something goes awry. This allows you to finish the coroutine with an error, which you can later catch and handle.
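
For instance, if you had a hypothetical callback-based API with separate success and error callbacks, say a requestUser() function, you could wrap it with suspendCoroutine(), resuming with a value on the happy path and with an exception on the unhappy one:

import kotlin.concurrent.thread
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

// Hypothetical callback-based API with separate success and error callbacks.
fun requestUser(
    userId: String,
    onSuccess: (User) -> Unit,
    onError: (Throwable) -> Unit
) {
  thread {
    try {
      Thread.sleep(1000)
      onSuccess(User(userId, "Filip"))
    } catch (error: Throwable) {
      onError(error)
    }
  }
}

// The wrapper: resume() covers the happy path, resumeWithException() the unhappy one.
suspend fun requestUserSuspend(userId: String): User =
    suspendCoroutine { continuation ->
      requestUser(
          userId,
          onSuccess = { user -> continuation.resume(user) },
          onError = { error -> continuation.resumeWithException(error) }
      )
    }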

Having this gives you an amazing ability: returning values from functions which might be asynchronous, without knowing what’s behind them. Just like an API should be. You’re probably thinking: But what if the function doesn’t end?

In that case, once again, you’ll be waiting for a value which isn’t coming, resulting in yet another halting problem, where your code is suspended infinitely.

To remedy this, it’s best to be aggressive with continuations. No matter what, always try to produce a result, even if it’s only an exception. At least that way your function will end, and you’ll have something to handle. Conveniently enough, the Continuation has a function to do just that. It’s called resumeWith(), and it takes in the aforementioned Result monad. The Result can only be in one of two states at any given time: either a Success, holding the value you need, or a Failure, holding the exception.

It also comes with some utility functions, like runCatching(), which receives a lambda it tries to run, producing the Success case with some value. If something goes wrong, with the help of a try/catch block, it catches the exception and returns a Failure result instead. Once the continuation receives the Result, it unwraps it, and you get the value or the exception, so that you can handle it yourself.

Whenever you’re using suspendCoroutine(), or any other way of resuming values with continuations, it’s strongly recommended to enforce this approach, so you don’t end up with coroutines that never finish.

Creating your own suspendable API

One of the things we mentioned JetBrains had in mind for the Coroutines API was extensibility. You’ve seen how to turn your own functions into suspendable ones, but another thing you can do is create an API-like set of utilities which hide the thread and context-switching ceremony.

We’ve prepared some examples for you in Api.kt. Open it up, and you should see a few functions ready, but let’s go over them one by one.

The first one is a convenience function, which uses suspendCoroutine() and Result’s runCatching() to try to produce a value for you:

suspend fun <T : Any> getValue(provider: () -> T): T =
    suspendCoroutine { continuation ->
      continuation.resumeWith(Result.runCatching { provider() })
    }

If you were to call this function somewhere in your code, it would look something like this:

GlobalScope.launch {
  val user = getValue { getUserFromNetwork("101") }
    
  println(user)
}

This allows you to abstract away all the functions which fetch some data, whether through the network, file reading or database lookups, and push them to a background thread, leaving the main thread to worry only about rendering the data, and the rest of the code about fetching it.

The next two examples are extremely simple, and are useful for thread-switching:

fun executeBackground(action: suspend () -> Unit) {
  GlobalScope.launch { action() }
}

fun executeMain(action: suspend () -> Unit) {
  GlobalScope.launch(context = Dispatchers.Main) { action() }
}

The first one takes in an action lambda block, and runs it in the background, using the default launch context. The second one also takes in the action block, but runs it using the Dispatchers.Main context, so you can easily switch to the main thread, without knowing the details of the implementation.

Using them, you’d have code similar to this:

executeBackground {
  val user = getValue { getUserFromNetwork("101") }

  executeMain { println(user) }
}

The naming could be a bit better, but you get the idea. Now you have the same behavior as with GlobalScope.launch blocks, but you don’t rely on knowing which scope and which functions are used behind the scenes.

This is great when you’re building the base business-logic layer, as you can provide both the main and background contexts, and the scopes in which you run the functions. In the concrete implementations, or subclasses of the base presenter, view model or controller, you’d simply call these functions and let the core part of the layer worry about threading.
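
As a rough sketch of that idea, with hypothetical names and the design choice of injecting the scope and dispatchers rather than hardcoding them, a base presenter could look something like this. It reuses the getValue() helper from Api.kt:

import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Hypothetical base class: the core layer owns the scope and the dispatchers,
// while concrete presenters only call runInBackground() and runOnMain().
abstract class BasePresenter(
    private val scope: CoroutineScope,
    private val backgroundDispatcher: CoroutineDispatcher = Dispatchers.Default,
    private val mainDispatcher: CoroutineDispatcher = Dispatchers.Main
) {

  protected fun runInBackground(action: suspend () -> Unit) {
    scope.launch(backgroundDispatcher) { action() }
  }

  protected suspend fun <T> runOnMain(action: suspend () -> T): T =
      withContext(mainDispatcher) { action() }
}

// A concrete presenter doesn't know, or care, which threads are used.
class UserPresenter(scope: CoroutineScope) : BasePresenter(scope) {

  fun loadUser(userId: String) = runInBackground {
    val user = getValue { User(userId, "Filip") } // some long-running fetch
    runOnMain { println(user) }                   // hand the result to the UI
  }
}

Injecting the scope and dispatchers keeps the threading decisions in one place, and also makes presenters easier to test, since a test can pass in a single-threaded dispatcher.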

Play around with these more, and build even more utility functions on top of them, according to your needs.

If you want to check out these examples, import this chapter’s final project using IntelliJ: select Import Project, navigate to the suspending-functions/projects/final folder and select the suspending_functions project.

Key points

  • Having callbacks as a means of notifying result values can be pretty ugly and cognitive-heavy.
  • Coroutines and suspendable functions remove the need for callbacks and excessive thread allocation.
  • What separates a regular function from a suspendable one is the first-class continuation support, which the Coroutine API uses internally.
  • Continuations are already present in the system, and are used to handle function lifecycle — returning the values, jumping to statements in code, and updating the call-stack.
  • You can think of continuations as low-level callbacks, which the system calls when it needs to navigate through the call-stack.
  • Continuations always persist a batch of information about the context in which the function is called — the parameters passed, call site and the return type.
  • There are three main ways in which a continuation can resolve: the happy path, returning a value the function is expected to return; throwing an exception in case something goes wrong; or never resuming at all, blocking infinitely because of flawed business logic.
  • Utilizing the suspend modifier, and functions like launch() and suspendCoroutine(), you can create your own API, which abstracts away the threading used for executing code.

Where to go from here?

In this chapter, you’ve learned a lot about the foundation of coroutines. Through an extensive overview of the differences between suspendable and non-suspendable functions, you’ve seen how suspendable functions utilize Continuations to navigate around and return values as results.

The next chapter, Chapter 5, “Async/Await,” relies heavily on functions which leverage continuations and suspendable functions to return values from code which may or may not be asynchronous and long-running. So read on to learn more about how you can process values from functions which used to require a ton of callbacks!
