“Asynchrony” means many things. It means concurrency (e.g., thread pools) and parallelism (e.g., GPUs). It means parameterizing a computation on where it should execute. It means the ability to chain work, propagating values and errors to user-specified continuations. It means forking and joining async control flow without blocking. It requires guarantees that async work can make forward progress. It means a standard way to request that a computation stop early, and a way to propagate the “I have now stopped” notification back to the caller. And — since this is C++ — it means doing all that with bare-metal performance and seamless integration with C++20 coroutines.
The async programming model espoused by the recent standard Executors papers satisfies these requirements while also enabling a full suite of generic asynchronous algorithms. C++23 will likely come with algorithms such as `then` for chaining work, `timeout` and `stop_when` for cancelling work after certain criteria are met, `when_all` for launching concurrent work and joining it without blocking, and `sync_wait` for blocking until work has finished.
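To get a feel for how these algorithms might compose, here is a minimal sketch. It assumes proposal-style spellings in a `std::execution` namespace, a `<execution>` header, and a `static_thread_pool` scheduler source; none of those names are settled C++23 API, so treat this as illustrative only:

```cpp
// Sketch only: the header, static_thread_pool, and the exact algorithm
// spellings are assumptions based on the Executors papers, not final API.
#include <execution>  // assumed home of the Sender/Receiver facilities
#include <utility>

namespace ex = std::execution;

int main() {
    ex::static_thread_pool pool{4};   // hypothetical scheduler source
    auto sched = pool.get_scheduler();

    // Describe two independent pieces of work, each chained with then().
    auto a = ex::schedule(sched) | ex::then([] { return 20; });
    auto b = ex::schedule(sched) | ex::then([] { return 22; });

    // Fork/join without blocking: when_all completes when both inputs do,
    // and passes their results on to the next continuation.
    auto sum = ex::when_all(std::move(a), std::move(b))
             | ex::then([](int x, int y) { return x + y; });

    // Block only at the boundary with synchronous code.
    auto [result] = ex::sync_wait(std::move(sum)).value();  // result == 42
}
```

Note that nothing runs until `sync_wait` connects and starts the whole pipeline; everything before that point merely builds a description of the work.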
The core abstraction of the Executors design, known as “Sender/Receiver”, is to asynchronous algorithms what the STL’s “Iterator” abstraction is to sequence algorithms: an enabling abstraction that makes reusable algorithms possible. With the containers and algorithms of the STL, the C++ Standard Library took a giant step forward in power and reusability. With C++23 Executors, the Standard Library takes the next step.
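To make the Receiver half of that contract concrete, the papers describe three completion channels: one for values, one for errors, and one for the "I have now stopped" notification. The exact customization-point mechanics vary between paper revisions, so in this illustrative (not normative) sketch plain member functions stand in for them:

```cpp
// Sketch only: a receiver is the "continuation" a sender completes into.
// The member-function spellings here are assumptions for illustration.
#include <cstdio>
#include <exception>

struct print_receiver {
    // Success channel: the sender delivers zero or more values.
    void set_value(int result) noexcept {
        std::printf("value: %d\n", result);
    }
    // Error channel: the sender delivers an error instead of values.
    void set_error(std::exception_ptr) noexcept {
        std::puts("error");
    }
    // Cancellation channel: the sender confirms it has stopped early.
    void set_stopped() noexcept {
        std::puts("stopped");
    }
};
```

Algorithms like `then` and `when_all` are written entirely against this small contract, which is what lets one implementation of each algorithm work across thread pools, GPUs, coroutines, and whatever execution contexts come later.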