core.async in Browsers

July 29, 2014

Summary: Javascript's concurrency model forces your code to give up control over when a callback will be called. core.async gives you back that control and hence lets you code in a more natural style.

Well, there comes a time in every programmer's life when they take a look at the ThoughtWorks Technology Radar and they realize that core.async is in the Trial circle, meaning you should see if you might want to use it.

And if you're there, right there in that phase of your programming trajectory, eyeballing core.async for your next (or current) project, Welcome. This post is for you. Here it goes.

Why core.async? Well, the short answer is that it makes concurrency much, much, much, very much easier. I mean, let's face it: concurrency is so hard by itself, it has plenty of muches to spare. Now, I haven't used core.async a lot on the JVM. I wrote some, but the project wasn't really the right fit for it. I plan on writing more later; I just haven't had the right project for it.

But I have used it a lot in ClojureScript in browsers.[1] And it is nice. It lets you do things that you could write yourself, given enough time. But you're more likely to solve the 16-ring Tower of Hanoi before you get all the kinks out. It's much better to let a machine do the hard work. That's what the 20th Century was all about: machines instead of muscle. And the 21st Century will be about computers instead of brains. Best get ahead of the curve.

I say you should let the machine do the work, but maybe that's too vague. Let's look at a concrete example. First, how do you do an AJAX request and then do something with the value?[2] Easy:

(ajax "http://example.com/json-api"
      #(js/console.log %))
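
(Footnote 2 asks us to imagine that ajax exists and works. If you want something concrete to play with, here is one way such a helper might look, built on goog.net.XhrIo. The JSON handling is an assumption on my part, not part of the post.)

;; A minimal sketch of the assumed ajax helper.
;; Needs (:import [goog.net XhrIo]) in the ns form.
(defn ajax
  "GET url and call cb with the response parsed from JSON
  into a Clojure map with keyword keys."
  [url cb]
  (.send XhrIo url
         (fn [e]
           (cb (js->clj (.getResponseJson (.-target e))
                        :keywordize-keys true)))))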

We're in Javascript, so we have to pass a callback which will get the result. That was easy. A little harder is making two API calls and doing something with both results.

(ajax "http://example.com/random-number"
      (fn [r1]
        (ajax "http://example.com/non-random-number"
              (fn [r2]
                (js/console.log (/ (:n r1) (:n r2)))))))

Alright, that wasn't too bad. A little indentation never hurt anyone. But, wait a second! We don't even start the second request until the first one has finished. I've got a browser the size of a minivan and a 20 Megabit internet connection, and I'm doing one request at a time? That sucks!

We could start them both at the same time. But what order will they come back in? Welcome to the world of concurrency!!!! Things happening (maybe) at the same time, or at least you don't know what order they will happen in!

Well, let's try something. What if the first one to finish wrote its result down, then the second one to finish would know that it was second and it could do the final calculation? What would that look like?

(def r1 (atom nil))
(def r2 (atom nil))

(defn final-calculation []
  (js/console.log (/ (:n @r1) (:n @r2))))

(defn try-final-calculation []
  (when (and @r1 @r2)
    (final-calculation)))

(ajax "http://example.com/random-number"
  #(do
    (reset! r1 %)
    (try-final-calculation)))

(ajax "http://example.com/non-random-number"
  #(do
    (reset! r2 %)
    (try-final-calculation)))

Ok, well, that should work. What happens if you have to do 3 AJAX requests? Not so bad, either. What about 17? Oh, man, that sucks. We could do something like make a super-promise, where you can promise many values and only call a function at the end when they're all there. Yes, you can do that. It really wouldn't be hard, even.

(defn super-promise
  "Create a promise for many values. Use `deliver`
  to add values.

  keys: all of these keys must be present before calling f
  f: the function to call. Will be passed a map."
  [keys f]
  (let [r (atom {})]
    (add-watch r :promise
               (fn [_ _ _ s]
                 (when (every? #(contains? s %) keys)
                   (f s))))
    r))

(defn deliver [promise key value]
  (swap! promise assoc key value))

(def rs (super-promise [:r1 :r2]
                       (fn [{:keys [r1 r2]}]
                         (js/console.log (/ (:n r1) (:n r2))))))

(ajax "http://example.com/random-number"
  #(deliver rs :r1 %))

(ajax "http://example.com/non-random-number"
  #(deliver rs :r2 %))

Phew! That's done. It works. It scales to many simultaneous AJAX calls. It's generic. Well, generic for this particular pattern. If we had a different pattern, we'd have to come up with a different solution.

We're looking through a small porthole into callback hell. The identifying characteristic of callback hell is that you give up control: your code, which was all nice and procedural and easy to follow, hands control over to whatever demon is going to call that callback. You sell your virtual soul for a bit of asynchrony. But you can't cheat the Devil. When all is said and done, all of your work gets done, but you need some savior angel to help you coordinate all of the pieces back together again. In this case, it's the super-promise, which works in the first circle of hell, but even Dante can't help you if you go further.[3]

Now that we've established a decent pre-core.async solution to this particular problem, let's look at what it would look like using core.async. We'll assume that our ajax-channel function returns a core.async channel.

(let [r1-channel (ajax-channel "http://example.com/random-number")
      r2-channel (ajax-channel "http://example.com/non-random-number")]
  (go
    (js/console.log (/ (:n (<! r1-channel)) (:n (<! r2-channel))))))
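
(A practical note: the post treats ajax-channel as a given. Here is one minimal sketch of how you might write it over goog.net.XhrIo, along with the requires that go and <! needed in 2014-era ClojureScript, plus a couple of extra refers used by the sketches below. The namespace name and the JSON handling are my assumptions, not part of the original.)

(ns example.async-ajax
  ;; go and go-loop lived in their own macro namespace at the time
  (:require-macros [cljs.core.async.macros :refer [go go-loop]])
  (:require [cljs.core.async :refer [chan put! close! take! <! alts!]])
  (:import [goog.net XhrIo]))

(defn ajax-channel
  "Start a GET request and return a channel that will receive the
  response (parsed from JSON into a Clojure map) and then close."
  [url]
  (let [ch (chan 1)]
    (.send XhrIo url
           (fn [e]
             (put! ch (js->clj (.getResponseJson (.-target e))
                               :keywordize-keys true))
             (close! ch)))
    ch))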

Let me just get it out of the way and never mention it again: it's shorter. It's shorter even than the naive solution using two atoms. And it's shorter than the super-promise solution even if you don't include the super-promise code. I'm done talking about the size, because it's only a little important.

Now that that's out there, on to the more significant stuff. First and foremost is that you never lose control. The code even reads procedurally. Start two AJAX requests and remember the channels. Start a go block (which means run the code asynchronously) and log the result of dividing the first result by the second result.

Does it scale? You betcha! Imagine we need to make 192 imaginary AJAX calls before the Devil takes his due. The only way to do that is to do them all as fast as the browser fairies let you.

(let [numbers (range 192)
      urls (map #(str "http://example.com/choir?angelid=" %) numbers)
      ;; mapv is eager, so all of the requests start right away
      channels (mapv ajax-channel urls)]
  (go
    (doseq [c channels]
      (js/console.log "Got: " (<! c)))))

The AJAX requests come back as fast as they can (meaning arbitrary order), and the results are logged in their original (numeric) order. You could do them in any order you want. That's because you're not giving up control.
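
(For example, if you would rather log each result as soon as it arrives, rather than in numeric order, you could keep the channels around and use alts!. This is just a sketch, reusing the channels vector bound in the let above and the requires from the earlier ns sketch.)

;; Take from whichever channel is ready first, log it, and drop
;; that channel from the pending set until none are left.
(go-loop [pending (set channels)]
  (when (seq pending)
    (let [[result ch] (alts! (vec pending))]
      (js/console.log "Got: " result)
      (recur (disj pending ch)))))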

How does this work? How can you have asynchrony and not give up control?

I alluded to it before: you're making the machine do the work. That go block up there is actually a powerful macro that transforms your procedural code into a mess of callbacks (like in our super-promise example) that you would never want to write yourself. I mean, maybe you want to, but maybe you're nuts. And you'll get it wrong.

The transformation in the go block is pretty easy, as these things go. It's mechanical. It's easy the way lifting a car with your hands is easy: apply enough leverage (by using a jack) and you can do it. The jack converts an easy motion (pushing down on the lever or turning the screw) into a powerful force. The go macro converts your easy code into a bunch of callbacks and coordinates them with a powerful state machine which will angelically reassemble them without ever losing control.
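
(To get a feel for the flavor of that transformation: a go block that does a single take behaves roughly like a take! with a callback. This is a rough behavioral equivalence for illustration, not the macro's actual expansion, which is a full state machine.)

;; for some channel c, this go block...
(go (js/console.log (<! c)))

;; ...behaves roughly like registering a callback:
(take! c (fn [v] (js/console.log v)))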

It's all good-ol' callbacks and mutable state underground. But above ground, you've got code that's easy to reason about. No Devil's bargain. You've got an angel negotiating for you. That's the key thing! Channels are amazingly easy to reason about because each channel is so simple. But that's a story for another day!

I should just mention that, yes, core.async is about procedural programming. Channels are mutable state. core.async is made for the small part of your code that is procedural and side-effecting. Every program has such a part. If you're doing concurrent things (and in Javascript, you always are), core.async can help by providing a first-class mechanism for communication and coordination.
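
(As a small illustration of that first-class mechanism, here is a sketch that turns DOM events into a channel using goog.events. The element id and the listen helper are made up for the example.)

;; Needs [goog.events :as events] plus chan, put!, <!, and go-loop.
(defn listen
  "Return a channel of events of type event-type on element el."
  [el event-type]
  (let [ch (chan)]
    (events/listen el event-type #(put! ch %))
    ch))

(let [clicks (listen (.getElementById js/document "save-button") "click")]
  (go-loop []
    (js/console.log "clicked!" (<! clicks))
    (recur)))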

That's what you might call the "core" of core.async in ClojureScript. It's about regaining control of your asynchronous calls and not smearing your logic across your code in little bits contained in callbacks. You keep your code's semantic integrity and you keep your sanity.

If staying out of callback hell is to your liking, you just might like the divine help of a LispCast video course dedicated to teaching core.async in a gentle, graceful way. Presented in a unique visual format and at just the right pace, LispCast Clojure core.async will guide you to a deep understanding of the fundamentals of core.async so you can clean up your code, get more concurrency, and get back control.

  1. Don't say "the browser" because there are many and they are different.

  2. Let's imagine these functions exist and work as expected.

  3. And thank Clojure for the atom, which is like a cross or holy water when you find yourself down there.

React: Another Level of Indirection

December 31, 2013

Any problem in computer science can be solved with another level of indirection. -- David Wheeler

The React library from Facebook makes DOM programming functional by using a Virtual DOM. The Virtual DOM is an indirection mechanism that solves the difficult problem of DOM programming: how to deal with incremental changes to a stateful tree structure. By abstracting away the statefulness, the Virtual DOM turns the real DOM into an immediate mode GUI, which is perfect for functional programming. Further, the Virtual DOM provides the last piece of the Web Frontend Puzzle for ClojureScript.

David Nolen has written up a short explanation of how the Virtual DOM works, as well as some amazing performance benchmarking of React in a ClojureScript context. His work is important, so you should read it. I'd like to focus a bit more on the expressivity and why I would use React even if it were not so fast.

One can view MVC frameworks as an attempt to impose some structure on the code that has to interface with the DOM. The bargain they propose is this: wrap the DOM in View objects, so that subtrees of DOM nodes are managed by a View object. Wrap your state in Model objects, which will notify the View objects of changes. You thereby keep a layer of indirection between Models and Views, which inherently have different structures and need to change independently.

The layer of indirection (usually an event or observer system) solves a coupling problem. It decouples your state from the DOM, while leaving you to deal with all of the difficulties of the DOM. You are essentially making a new type of DOM that is better suited to your GUI domain than plain HTML, but is still a PITA. It is little more than a coat of paint. The DOM is still stateful.

React takes a different approach. It provides a level of indirection which solves the actual problem with the DOM--statefulness. The DOM has become a smart canvas. Paint the whole picture again, but only the different parts get wet.

Instead of wrapping the DOM in View objects, you create a Virtual DOM (completely managed by React) which mirrors the real DOM. When the model changes, you generate a new Virtual DOM. The differences are calculated and converted into batch operations to the real DOM. In essence, React is a function which takes two DOMs and generates a list of DOM operations, i.e., it's referentially transparent.
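
(Schematically, the division of labor looks something like this. This is a conceptual sketch, not React's actual API.)

;; render : app state                 -> new virtual DOM
;; diff   : old virtual DOM, new one  -> list of DOM operations
;; patch  : real DOM, DOM operations  -> side effects on the page
;;
;; render and diff are pure functions; all of the statefulness
;; is quarantined inside patch.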

It's easy to imagine how this changes the game. You no longer need an initializer to set up the DOM and observers to modify it. The first Virtual DOM rendering is like the second one, in fact like any other! Your "View", if you want to call it that, is simply a function from state to Virtual DOM nodes. If the state and DOM nodes are immutable, all the better: there's less work to do to know whether anything has changed.

This also means that your Views can be composed functionally. Define a component (as a function) and your subcomponents, and build them up functionally. All of the functional abstraction and refactoring that you're used to is available. If you're doing it right, your code should get shorter, easier to read, and more fun to maintain.
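
(For instance, in hiccup-style data of the kind the author's macros, or libraries built on React, can turn into virtual DOM, a component and its subcomponent are just functions that call each other. The names and state shape here are illustrative.)

(defn todo-item-view [item]
  [:li {:class (when (:done item) "done")}
   (:text item)])

(defn todo-list-view [state]
  [:ul (map todo-item-view (:todos state))])

;; On every state change, call the top-level view with the new state
;; and let React diff the new virtual DOM against the previous one.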

My (short) experience rewriting a program to use React converted me. It was the only library I have used that actually made DOM programming fun and functional--and dare I say my code now works!

Which brings me to my last point: React is the final puzzle piece for ClojureScript web frontend development.

Any other problems left? I can't think of any. That's something to discuss on Twitter.

I suggest you try React. Om by David Nolen is the most mature React ClojureScript library I know of. It does a bit more than I've described here (Om manages your state tree for you, among other things) and is evolving quickly. I have some code that generates React Virtual DOM using hiccup style with macros (so the work is done at compile time), but other than that, it contains only half-baked implementations of what's in Om. If you're interested in the hiccup macros, let me know what you'd use them for and I'll put them on github.

For more inspiration, history, interviews, and trends of interest to Clojure programmers, get the free Clojure Gazette.


Clojure pulls in ideas from many different languages and paradigms, and also from the broader world, including music and philosophy. The Clojure Gazette shares that vision and weaves a rich tapestry of ideas from the daily flow of library releases to the deep historical roots of computer science.
