
By theodolite in "No, I Will Not Debate You" on MeFi

Dear Sir Oswald,

Thank you for your letter and for your enclosures. I have given some thought to our recent correspondence. It is always difficult to decide on how to respond to people whose ethos is so alien and, in fact, repellent to one's own. It is not that I take exception to the general points made by you but that every ounce of my energy has been devoted to an active opposition to cruel bigotry, compulsive violence, and the sadistic persecution which has characterised the philosophy and practice of fascism.

I feel obliged to say that the emotional universes we inhabit are so distinct, and in deepest ways opposed, that nothing fruitful or sincere could ever emerge from association between us.

I should like you to understand the intensity of this conviction on my part. It is not out of any attempt to be rude that I say this but because of all that I value in human experience and human achievement.

Yours sincerely,

Bertrand Russell

The Split-brain Universe


An extended Nowa Fantaskyka remix.

The year is 1982. I read Isaac Asimov’s newly-published Foundation’s Edge with a sinking heart. Here is one of Hard-SF’s Holy Trinity writing— with a straight face, as far as I can tell— about the “consciousness” of rocks and trees and doors, for Chrissakes. Isaac, what happened? I wonder. Conscious rocks? Are you going senile?

No, as it turned out. Asimov had simply discovered physical panpsychism: a school of thought that holds that everything— rocks, trees, electrons, even Donald Trump— is conscious to some degree. The panpsychics regard consciousness as an intrinsic property of matter, like mass and charge and spin. It’s an ancient belief— its roots go all the way back to ancient Greece—but it has recently found new life among consciousness researchers. Asimov was simply ahead of his time.

I’ve always regarded panpsychism as an audacious cop-out. Hanging a sign that says “intrinsic” on one of Nature’s biggest mysteries doesn’t solve anything; it merely sweeps it under the rug. Turns out, though, that I’d never really met audacious before. Not until I read “The Universe in Consciousness” by Bernardo Kastrup, in the Journal of Consciousness Studies.

Kastrup goes panpsychism one better. He’s not saying that all matter is conscious. He’s saying that all matter is consciousness— that consciousness is all there is, and matter is just one of its manifestations. “Nothing exists outside or independent of cosmic consciousness,” he writes. “The perceivable cosmos is in consciousness, as opposed to being conscious.” Oh, and he also says the whole universe suffers from Multiple Personality Disorder.

It reads like some kind of flaky New Age metaphor. He means it literally, though.

He calls it science.


Just as plausible, apparently.

Even on a purely local level, there are reasons to be skeptical of MPD (or DID, as it’s known today: Dissociative Identity Disorder). DID diagnoses tend to spike in the wake of new movies or books about multiple personalities, for example. Many cases don’t show themselves until after the subject has spent time in therapy— generally for some other issue entirely— only to have the alters emerge following nudges and leading questions from therapists whose critical and methodological credentials might not be so rigorous as one would like. And there is the— shall we say questionable nature of certain alternate personalities themselves. One case in the literature reported an alter that identified as a German Shepherd. Another identified— don’t ask me how— as a lobster. (I know what you’re thinking, but this was years before the ascension of Jordan Peterson in the public consciousness.)

When you put this all together with the fact that even normal conscious processes seem to act like a kind of noisy parliament— that we all, to some extent, “talk to ourselves”, all have different facets to our personalities— it’s not unreasonable to wonder if the whole thing didn’t boil down to a bunch of overactive imaginations, being coached by people who really should have known better. Psychic CosPlaying, if you will. This interpretation is popular enough to have its own formal title: the Sociocognitive Model.

There could be a sort of psychiatric Sturgeon’s Law at play here, though; the fact that 90% of such studies are crap doesn’t necessarily mean that all of them are. Brain scans of “possessed” DID bodies show distinctly different profiles than those of professional actors trained to merely behave as though they were: the parts of the brain that lit up in actors are associated with imagination and empathy, while those lighting up in DID patients are involved with stress and fear responses. I’m not entirely convinced— can actors, knowingly faking a condition, really stand in for delusional people who sincerely believe in their affliction? Still, the stats are strong; and it’s hard to argue with a different study in which the visual centers of a sighted person’s brain apparently shut down when a “blind” alter took the controls.

Also let’s not forget the whole split-brain phenomenon. We know that different selves can exist simultaneously within a single brain, at least if it’s been partitioned in some way.

This is the premise upon which Kastrup bases his model of Reality Itself.


You’ve probably heard of quantum entanglement. Kastrup argues that entangled systems form a single, integrated, and above all irreducible system. Also that, since everything is ultimately entangled to something else, the entire inanimate universe is “one indivisible whole”, as irreducible as a quark. He argues— let me quote him here directly, so you won’t think I’m making this up—

“that the sole ontological primitive there is is cosmic phenomenal consciousness … Nothing exists outside or independent of cosmic consciousness. Under this interpretation one should say that the cosmos is constituted by phenomenality, as opposed to bearing phenomenality. In other words, here the perceivable cosmos is in consciousness, as opposed to being conscious.”

Why would he invoke such an apparently loopy argument? How are we any further ahead in understanding our consciousness by positing that the universe itself is built from the stuff? Kastrup is trying to reconcile the “combination problem” of bottom-up panpsychism: even if you accept that every particle contains a primitive conscious “essence”, you’re still stuck with explaining how those rudiments combine to form the self-reflective sapience of complex objects like ourselves. Kastrup’s answer is to start at the other end. Instead of positing that consciousness emerges from the very small and working up to sentient beings, why not posit that it’s a property of the universe as a whole and work down?

Well, for one thing, because now you’ve got the opposite problem: rather than having to explain how little particles of proto-consciousness combine to form true sapience, now you have to explain how some universal ubermind splits into separate entities (i.e., if we’re all part of the same cosmic consciousness, why can’t I read your mind? Why do you and I even exist as distinct beings?)

This is where DID comes in. Kastrup claims that the same processes that give rise to multiple personalities in humans also occur at the level of the whole Universe, that all of inanimate “reality” consists of Thought, and its animate components— cats, earthworms, anything existing within a bounded metabolism— are encysted bits of consciousness isolated from the Cosmic Self:

“We, as well as all other living organisms, are but dissociated alters of cosmic consciousness, surrounded by its thoughts. The inanimate world we see around us is the revealed appearance of these thoughts. The living organisms we share the world with are the revealed appearances of other dissociated alters.”

And what about Reality before the emergence of living organisms?

“I submit that, before its first alter [i.e., separate conscious entity] ever formed, the only phenomenal contents of cosmic consciousness were thoughts.”

In case you’re wondering (and you damn well should be): yes, the Journal of Consciousness Studies is peer-reviewed. Respectable, even. Heavy hitters like David Chalmers and Daniel Dennett appear in its pages. And Kastrup doesn’t just pull claims out of his ass; he cites authorities from Augusto to von Neumann to back up his quantum/cosmic entanglement riff, for example. Personally, I’m not convinced— I think I see inconsistencies in his reasoning— but not being a physicist, what would I know? I haven’t read the authorities he cites, and wouldn’t understand them if I did. This Universal Split-Brain thing reads like Philip K. Dick on a bad day; then again, couldn’t you say the same about Schrödinger’s Cat, or the Many Worlds hypothesis?

Still, reading Kastrup’s paper, I have to keep reminding myself: Peer-reviewed. Respectable. Daniel Dennett.

Of course, repeat that too often and it starts to sound like a religious incantation.


To an SF writer, this is obviously a gold mine.

Kastrup’s model is epic creation myth: a formless thinking void, creating sentient beings In Its Image. The idea that Thou Art God (Stranger in a Strange Land, anyone?), that God is everywhere— that part of the paradigm reads like it was lifted beat-for-beat out of the Abrahamic religions. The idea that “The world is imagined” seems lifted from the Dharmic ones.

The roads we might travel from this starting point! Here’s just one: at our local Earthbound scale of reality DID is classed as a pathology, something to be cured. The patient is healthy only when their alters have been reintegrated. Does this scale up? Is the entire universe, as it currently exists, somehow “sick”? Is the reintegration of fragmented alters the only way to cure it? Can the Universe be restored to health only by resorbing all sentient beings back into some primordial pool of Being? Are we the disease, and our eradication the cure?

You may remember that I’m planning to write a concluding volume to the trilogy begun with Blindsight and continued in Echopraxia. I had my own thoughts as to how that story would conclude— but I have to say, Kastrup’s paper has opened doors I never considered before.

It just seems so off-the-wall that— peer-reviewed or not— I don’t know if I could ever sell it in a Hard-SF novel.


Book Review: The Black Swan



Writing a review of The Black Swan is a nerve-wracking experience.

First, because it forces me to reveal I am about ten years behind the times in my reading habits.

But second, because its author Nassim Nicholas Taleb is infamous for angry Twitter rants against people who misunderstand his work. Much better men than I have read and reviewed Black Swan, messed it up, and ended up the victim of Taleb’s acerbic tongue.

One might ask: what’s the worst that could happen? A famous intellectual yells at me on Twitter for a few minutes? Isn’t that normal these days? Sure, occasionally Taleb will go further and write an entire enraged Medium article about some particularly egregious flub, but only occasionally. And even that isn’t so bad, is it?

But such an argument betrays the following underlying view:

It assumes that events can always be mapped onto a bell curve, with a peak at the average and dropping off quickly as one moves towards extremes. Most reviews of Black Swan will get an angry Twitter rant. A few will get only a snarky Facebook post or an entire enraged Medium article. By the time we get to real extremes in either direction – a mere passive-aggressive Reddit comment, or a dramatic violent assault – the probabilities are so low that they can safely be ignored.

Some distributions really do follow a bell curve. The classic example is height. The average person is about 5’7. The likelihood of anyone being a different height drops off dramatically with distance from the mean. Only about one in a million people should be taller than 7 feet; only one in a billion should be as tall as 7’5. Nobody is an order of magnitude taller than anyone else. Taleb calls the world of bell curves and minor differences Mediocristan. If Taleb’s reaction to bad reviews dwells alongside height in Mediocristan, I am safe; nothing an order of magnitude worse than an angry Twitter rant is likely to happen in an entire lifetime of misinterpreting his work.
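Those tail figures are easy to check. A minimal sketch of the normal-tail arithmetic, assuming the 5’7 (67-inch) mean above and a standard deviation of about 3.6 inches — the SD is my own illustrative choice, picked to roughly reproduce the one-in-a-million figure, not a number from the book:

```python
import math

def normal_tail(x, mean, sd):
    """P(X > x) under a normal distribution, via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

mean, sd = 67.0, 3.6  # inches; the SD is an assumed illustrative value

print(normal_tail(84, mean, sd))  # 7 feet: roughly one in a million
print(normal_tail(89, mean, sd))  # 7'5": well under one in a billion
```

The point of Mediocristan is visible in how fast those numbers shrink: two extra inches of threshold buys several orders of magnitude of rarity.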

But other distributions are nothing like a bell curve. Taleb cites power-law distributions as an example, and calls their world Extremistan. Wealth inequality lives in Extremistan. If wealth followed a bell curve around the median household income of $57,000, and a standard deviation scaled the same way as height, then a rich person earning $70,000 would be as remarkable as a tall person hitting 7 feet. Someone who earned $76,000 would be the same kind of prodigy of nature as the 7’6 Yao Ming. Instead, people earning $70,000 are dirt-common, some people earn millions, and the occasional tycoon can make hundreds of millions of dollars per year. In Mediocristan, the extremes don’t matter; in Extremistan, sometimes only the extremes matter. If you have a room full of 99 average-height people plus Yao Ming, Yao only has 1.3% of the total height in the room. If you have a room full of 99 average-income people plus Jeff Bezos, Bezos has 99.99% of the total wealth.
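The room comparison is plain arithmetic. A quick sketch — heights in inches, and the Bezos figure a rough round estimate, mixing a stock (wealth) against flows (incomes) the same way the example in the text does:

```python
# Mediocristan: one outlier barely moves the total.
avg_height, yao_height = 67, 90  # inches; Yao Ming is 7'6"
heights = [avg_height] * 99 + [yao_height]
yao_share = yao_height / sum(heights)
print(f"Yao's share of the room's height: {yao_share:.1%}")  # about 1.3%

# Extremistan: one outlier IS the total.
avg_income, bezos_wealth = 57_000, 150e9  # dollars; Bezos figure is a rough assumption
money = [avg_income] * 99 + [bezos_wealth]
bezos_share = bezos_wealth / sum(money)
print(f"Bezos's share of the room's money: {bezos_share:.4%}")  # over 99.99%
```

Same room, same 99-to-1 setup; the only difference is which world the quantity lives in.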

Here are Taleb’s potential reactions graphed onto a power-law distribution. Although the likelihood of any given reaction continues to decline the further it is away from average, it declines much less quickly than on the bell curve. Violent assault is no longer such a remote possibility; maybe my considerations should even be dominated by it.

So: are book reviews in Mediocristan or Extremistan?

I notice this BBC article about an author who hunted down a bad reviewer of his book and knocked her unconscious with a wine bottle. And Lord Byron wrote such a scathing meta-review of book reviewers that multiple reviewers challenged him to a duel, but the duel seems to have never taken place, plus I’m not sure Lord Byron is a good person to generalize from.

19th century intellectuals believed a bad review gave John Keats tuberculosis; they were so upset about this that they used his gravestone to complain about it.

Keats’ friend Shelley wrote the poem Adonais to memorialize the event, in which he said of the reviewer:

Our Adonais has drunk poison—oh!
What deaf and viperous murderer could crown
Life’s early cup with such a draught of woe?
The nameless worm would now itself disown:
It felt, yet could escape, the magic tone
Whose prelude held all envy, hate and wrong,
But what was howling in one breast alone,
Silent with expectation of the song,
Whose master’s hand is cold, whose silver lyre unstrung.

So are book reviews in Mediocristan or Extremistan? Well, every so often your review causes one of history’s greatest poets to die of tuberculosis, plus another great poet writes a five-hundred-line poem condemning you and calling you a “nameless worm”, and it becomes a classic that gets read by millions of schoolchildren each year for centuries after your death. And that’s just the worst thing that’s happened because of a book review so far. The next one could be even worse!


This sounds like maybe an argument for inaction, but Taleb is more optimistic. He points out that black swans are often good. For example, pharma companies usually just sit around churning out new antidepressants that totally aren’t just SSRI clones they swear. If you invest in one of these companies, you may win a bit if their SSRI clone succeeds, and lose a bit if it fails. But drug sales fall on a power law; every so often companies get a blockbuster that lets them double, triple, or dectuple their money. Tomorrow a pharma company might discover the cure for cancer, or the cure for aging, and get to sell it to everyone forever. So when you invest in a pharma company, you have randomness on your side: the worst that can happen is you lose your money, but the best that can happen is multiple-order-of-magnitude profits.

Taleb proposes a “barbell” strategy of combining some low-risk investments with some that expose you to positive black swans:

If you know that you are vulnerable to prediction errors, and if you accept that most “risk measures” are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills—as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios.* That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor,” the nest egg that you have in maximally safe investments. Or, equivalently, you can have a speculative portfolio and insure it (if possible) against losses of more than, say, 15 percent. You are “clipping” your incomputable risk, the one that is harmful to you. Instead of having medium risk, you have high risk on one side and no risk on the other. The average will be medium risk but constitutes a positive exposure to the Black Swan […]

The “barbell” strategy [is] taking maximum exposure to the positive Black Swans while remaining paranoid about the negative ones. For your exposure to the positive Black Swan, you do not need to have any precise understanding of the structure of uncertainty. I find it hard to explain that when you have a very limited loss you need to get as aggressive, as speculative, and some­times as “unreasonable” as you can be.
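The arithmetic behind the barbell’s “floor” is simple: whatever the speculative sleeve does, you cannot lose more than its share of the portfolio. A sketch with invented numbers — a 90/10 split and a hypothetical 2% T-bill yield:

```python
def barbell_outcome(safe_frac, safe_return, spec_return):
    """Portfolio growth factor for one period: a safe sleeve plus a speculative sleeve.
    spec_return = -1.0 means the speculative bets go to zero."""
    spec_frac = 1 - safe_frac
    return safe_frac * (1 + safe_return) + spec_frac * (1 + spec_return)

safe_frac, tbill = 0.90, 0.02  # assumed split and T-bill yield (illustrative)

print(barbell_outcome(safe_frac, tbill, -1.0))  # total wipeout of the 10%: floor of ~0.918
print(barbell_outcome(safe_frac, tbill, 20.0))  # a positive black swan: ~3.018
```

The downside is capped at the floor no matter how bad the black swan, while the upside is unbounded — exactly the asymmetry the quoted passage is after.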

So: how good can a book review get?

Here’s a graph of all the book reviews I’ve ever done by hit count (in thousands). I’m not going to calculate it out, but it looks like a power-law distribution! Some of my book reviews have been pretty successful – my review of Twelve Rules got mentioned in The Atlantic. Can things get even better than that? I met my first serious girlfriend through a blog post. Can things get even better than that? I had someone tell me a blog post on effective altruism convinced them to pledge to donate 10% of their salary to efficient charities forever; given some conservative assumptions, that probably saves twenty or thirty lives. So a book review has a small chance of giving a great poet tuberculosis, but also a small chance of saving dozens of lives. Overall it probably seems worth it.


The Black Swan uses discussions of power laws and risk as a jumping-off point to explore a wider variety of topics about human fallibility. This places it in the context of similar books about rationality and bias that came out around the same time. I’m especially thinking of Philip Tetlock’s Superforecasting, Nate Silver’s The Signal And The Noise, Daniel Kahneman’s Thinking Fast And Slow, and of course The Sequences. The Black Swan shares much of its material with these – in fact, it often cites Kahneman and Tetlock approvingly. But aside from the more in-depth discussion of risk, I notice two important points of this book Taleb keeps coming back to again and again, which as far as I know are unique to him.

The first is “the ludic fallacy”, the false belief that life works like a game or a probability textbook thought experiment. Taleb cautions against the (to me tempting) mistake of comparing black swans to lottery tickets – ie “investing in pharma companies is like having a lottery ticket to win big if they invent a blockbuster”. The lottery is a game where you know the rules and probabilities beforehand. The chance of winning is whatever it is. The prize is whatever it is. You know both beforehand; all you have to do is crunch the numbers to see if it’s a good deal.

Pharma – and most other real-life things – are totally different. Nobody hands you the chance of a pharma company inventing a blockbuster drug, and nobody hands you the amount of money you’ll win if it does. There is Knightian uncertainty – uncertainty about how much uncertainty there is, uncertainty that doesn’t come pre-quantified.

Taleb gives cautionary examples of what happens if you ignore this. You make some kind of beautiful model that tells you there’s only a 0.01% chance of the stock market doing some particular bad thing. Then you invest based on that data, and the stock market does that bad thing, and you lose all your money. You were taking account of the quantified risk in your model, but not of the unquantifiable risk that your model was incorrect.

In retrospect, this is an obvious point. But it’s also obvious in retrospect that everything classes teach about probability falls victim to it, to the point where it’s hard to even think about probability in non-ludic terms. I keep having to catch myself writing some kind of “Okay, assume the risk of a Black Swan is 10%…” example in this review, because then I know Taleb will hunt me down and violently assault me. But it’s hard to resist.

I would like to excuse myself by saying it’s impossible to discuss probability without these terms, or at least that you have to start by teaching these terms and then branch into the real-world unquantifiable stuff, except that Taleb managed to write his book without doing either of those things. Granted, the book is a little bit weird. You could go through several chapters on the Lebanese Civil War or whether the French Third Republic had the best intellectuals, without noticing it was a book on probability. Nevertheless, it sets itself the task of discussing risk without starting with the ludic fallacy, and it succeeds.

I don’t know to what degree the project of “becoming well-calibrated with probabilities” is a solution to the ludic fallacy, or a case of stupidly falling victim to the ludic fallacy.

The second key concept of this book – obviously not completely original to Taleb, but I think Taleb gives it a new meaning and emphasis – is “Platonicity”, the anti-empirical desire to cram the messy real world into elegant theoretical buckets. Taleb treats the bell curve as one of the clearest examples; it’s a mathematically beautiful example of what certain risks should look like, so incompetent statisticians and economists assume that risks in a certain domain do fit the model.

He ties this into Tetlock’s “fox vs. hedgehog” dichotomy. The prognosticators who tried to fit everything to their theory usually did badly; the ones who accepted the complexity of reality and maintained a toolbox of possibilities usually did better.

He also mentions – and somehow I didn’t know this already – that modern empiricism descends from Sextus Empiricus, a classical doctor who popularized skeptical and empirical ideas as the proper way to do medicine. Sextus seems like a pretty fun guy; his surviving works include Against The Grammarians, Against The Rhetoricians, Against The Geometers, Against The Arithmeticians, Against The Astrologers, Against The Musicians, Against The Logicians, Against The Physicists, and Against The Ethicists. Medicine is certainly a great example of empiricism vs. Platonicity, with Hippocrates and his followers cramming everything into their preconceived Four Humors model – to the detriment of medical science – for thousands of years.

But Empiricus’ solution – to not hold any beliefs, and to act entirely out of habit – falls short. And I am not sure I understood what Taleb is arguing for here. There’s certainly a true platitude in this area (wait, does “platitude” share a root with “Platonic”? It looks like both go back to a Greek word meaning “broad”, but it’s not on purpose. Whatever.) of “try to go where the evidence guides you instead of having prejudices”. But there’s also a point on the other side – unless you have some paradigm to guide you, you exist in a world of chaotic noise. I am less sanguine than Taleb that “be empiricist, not theoretical” is sufficient advice, as opposed to “find the Golden Mean between empiricism and theory” – which is of course a much harder and more annoying adage, since finding a Golden Mean isn’t trivial.

That is – what would it mean for a doctor to try to do medicine without the “theory” that the heart pumped the blood? You’d find a patient with all the signs of cardiogenic shock, and say “Eh, I dunno. Maybe I should x-ray his feet, or something?” What if she had no preconceived ideas at all? Would she start reciting Sanskrit poetry, on the grounds that there’s no reason to think that would help more or less than anything else? Whereas a doctor who had read a lot of medical school textbooks – Taleb hates textbooks! – would immediately recognize the signs of cardiogenic shock, do the tests that the textbooks recommend, and give the appropriate treatment.

Yes, eventually an empiricist doctor would notice empirical facts that made her believe the heart pumped blood (and all the other true things). But then she would…write it down in a textbook. That’s what theories are – crystallized, compressed empiricism.

I think maybe Empiricus and Taleb would retort that some people form theories with only a smidgeon of evidence – I don’t know what evidence Hippocrates had for the Four Humors, but it clearly wasn’t enough. And then they stick to them dogmatically even when the evidence contradicts them. I agree with both criticisms. But then it seems like the problem is bad theories, rather than ever having theories at all. Four Humors Theory and Germ Theory are both theories – it’s just that one is wrong and the other is right. If nobody had ever been willing to accept the germ theory of disease, we’d be in a much worse place. And you can’t just say “Well, you could atheoretically notice antibiotics work and use them empirically” – much of the research into antibiotics, and the ways we use antibiotics, are in place because we more or less understand what they’re doing.

I would argue that Empiricus and Taleb are arguing not for experience over theory, but for the adjustment of certain parameters of inference – how much fudge factor we accept in compressing our data, how much we weight prior probabilities versus new evidence, how surprised to be at evidence that doesn’t fit our theories. I expect Empiricus, Taleb, and I are in agreement about which direction we want those parameters shifted. I know this sounds like a boring intellectual semantic point, but I think it’s important and occasionally saves your life if you’re practicing some craft like medicine that has a corpus of theory built up around it which you ignore at your peril.

(I also think they fail to understand the degree to which common sense is just under-the-hood inference in the same way that abstract theorizing is above-the-hood inference, and so doesn’t rescue us from these concerns).

Charitably, The Black Swan isn’t making the silly error of denying a Golden Mean of parameter position. It’s just arguing that most people today are on the too-Platonic side of things, and so society as a whole needs to shift the parameters toward the more-empirical side. Certainly this is true of most people in the world Nassim Nicholas Taleb inhabits. In Taleb’s world, famous people walk around all day asserting “Everything is on a bell curve! Anyone who thinks risk is unpredictable is a dangerous heretic!” Then Taleb breaks in past their security cordon and shouts “But what if things aren’t on a bell curve? What if there are black swans?!” Then the famous person has a rage-induced seizure, as their bodyguards try to drag Taleb away. Honestly it sounds like an exciting life.

Lest you think I am exaggerating:

The psychologist Philip Tetlock (the expert buster in Chapter 10), after listening to one of my talks, reported that he was struck by the presence of an acute state of cognitive dissonance in the audience. But how people resolve this cognitive tension, as it strikes at the core of everything they have been taught and at the methods they practice, and realize that they will continue to practice, can vary a lot. It was symptomatic that almost all people who attacked my thinking attacked a deformed version of it, like “it is all random and unpredictable” rather than “it is largely random,” or got mixed up by showing me how the bell curve works in some physical domains. Some even had to change my biography. At a panel in Lugano, Myron Scholes once got into a state of rage, and went after a transformed version of my ideas. I could see pain in his face. Once, in Paris, a prominent member of the mathematical establishment, who invested part of his life on some minute sub-sub-property of the Gaussian, blew a fuse—right when I showed empirical evidence of the role of Black Swans in markets. He turned red with anger, had difficulty breathing, and started hurling insults at me for having desecrated the institution, lacking pudeur (modesty); he shouted “I am a member of the Academy of Science!” to give more strength to his insults.

One hazard of reviewing books long after they come out is that, if the book was truly great, it starts sounding banal. If its points were so devastating and irrefutable that they became universally accepted, then it sounds like the author is just spouting cliches. I think The Black Swan might have reached that level of influence. I haven’t even bothered explaining the term “black swan” because I assume every educated reader now knows what it means. So it seems very possible that pre-book society was so egregiously biased toward the Platonic theoretical side that it needed someone to tell it to shift in the direction of empiricism, Taleb did that, and now he sounds silly because everyone knows that you can’t just declare everything a bell curve and call it a day. Maybe this book should be read backwards. But the nature of all mental processes as a necessary balance between theory and evidence is my personal hobby-horse, just as evidence being good and theory being bad is Taleb’s personal hobby-horse, so I can’t let this pass without at least one hobby-cavalry-duel.

I have a more specific worry about skeptical empiricism, which is that it seems like an especially dangerous way to handle Extremistan and black swans.

Taleb memorably compares much of the financial world to “picking up pennies in front of a steamroller” – ie, it is very easy to get small positive returns most of the time as long as you expose yourself to horrendous risk.

EG: imagine living in sunny California and making a bet with your friend about the weather. Each day it doesn’t rain, he gives you $1. Each day it rains, you give him $1000. Your friend will certainly take this bet, since long-term it pays off in his favor. But for the first few months, you will look pretty smart as you pump him out of a constant stream of free dollars. Your stupidity will only become apparent way down the line, when one of the state’s rare rainstorms arrives and you’re on the hook for much more than you won.

Here the theorist will calculate the probability of rain, calculate everybody’s expected utility, and predict that your friend will eventually come out ahead.

But the good empiricist will just watch you getting a steady stream of free dollars, and your friend losing money every day, and say that you did the right thing and your friend is the moron!
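The asymmetry is easy to put in numbers. A sketch assuming a 0.5% daily chance of rain — a made-up figure for a dry stretch of California, chosen only for illustration:

```python
P_RAIN = 0.005        # assumed daily rain probability (illustrative)
WIN, LOSS = 1, -1000  # your payoff on a dry day vs. a rainy day

# Expected value per day of your side of the bet:
ev = (1 - P_RAIN) * WIN + P_RAIN * LOSS
print(f"EV per day: ${ev:.2f}")  # about -$4 per day: a losing bet

# Yet a naive empiricist watching the first quarter will probably see only profit:
p_clean_quarter = (1 - P_RAIN) ** 90
print(f"Chance of a rain-free first 90 days: {p_clean_quarter:.0%}")  # roughly 64%
```

So roughly two times out of three, three months of honest data show you up $90 on a bet that loses about $4 a day in expectation — the steamroller simply hasn’t appeared in the dataset yet.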

More generally, as long as Black Swans are rare enough not to show up in your dataset, empiricists are likely to fall for picking-pennies-from-in-front-of-steamroller bets, whereas (sufficiently smart) theorists will reject them.

For example, Banker 1 follows a strategy that exposes herself terribly to black swan risk, and ensures she will go bankrupt as soon as the market goes down, but which makes her 10% per year while the market is going up. Banker 2 follows a strategy that protects herself against black swan risk, but only makes 8% per year while the market is going up. A naive empiricist will judge them by their results, see that Banker 1 has done better each of the past five years, and give all his money to Banker 1, with disastrous results. Somebody who has a deep theoretical understanding of the underlying territory might be able to avoid that mistake.
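A toy simulation makes the trap visible. The 10% and 8% figures are the essay's; the ten-year horizon and the crash arriving in year six are assumptions added for illustration:

```rust
/// Wealth multiples for the exposed and hedged bankers after `years`,
/// with a single market crash in `crash_year` (1-based).
fn simulate(years: u32, crash_year: u32) -> (f64, f64) {
    let (mut exposed, mut hedged) = (1.0_f64, 1.0_f64);
    for year in 1..=years {
        if year == crash_year {
            exposed = 0.0; // fully exposed: the black swan wipes her out
                           // (the hedged banker sits the crash out, flat)
        } else {
            exposed *= 1.10; // looks better in every normal year
            hedged *= 1.08;
        }
    }
    (exposed, hedged)
}

fn main() {
    let (e5, h5) = simulate(5, 6);    // judged before the crash
    let (e10, h10) = simulate(10, 6); // judged after it
    println!("year 5:  exposed {:.2}x, hedged {:.2}x", e5, h5);
    println!("year 10: exposed {:.2}x, hedged {:.2}x", e10, h10);
}
```

Judged empirically at year five, the exposed banker is clearly ahead; judged at year ten, she has nothing.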

This problem also comes up in medicine. Imagine two different drugs. Both cure the same disease and do it equally well. Drug 1 has a side effect of mild headache in 50% of patients. Drug 2 has a side effect of death in 0.01% of patients. I think a lot of doctors test both drugs, find that Drug 2 always results in less hassle and happier patients, and stick with it. But this is plausibly the wrong move and a good understanding of the theory would make them much more cautious.
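The doctors' empiricism fails here for a quantifiable reason: at a 0.01% death rate, even fairly large samples will usually contain zero deaths. The death rate is the essay's; the sample sizes below are assumed for illustration:

```rust
/// Probability that `n` independent patients show zero deaths,
/// given a per-patient death rate `p`.
fn p_no_deaths(n: u32, p: f64) -> f64 {
    (1.0 - p).powi(n as i32)
}

fn main() {
    // The essay's Drug 2: death in 0.01% of patients.
    for n in [100u32, 1_000, 10_000] {
        println!("{:>6} patients: P(no deaths observed) = {:.3}", n, p_no_deaths(n, 0.0001));
    }
}
```

A doctor could follow a thousand patients on Drug 2 and still have roughly a 90% chance of never seeing the fatal side effect at all.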

(yes, both of these examples are also examples of the ludic fallacy. I fail, sorry.)

Overall this seems like a form of Goodhart’s Law, where any attempt to measure something empirically risks having people optimize for your measurement in a way that makes all the unmeasurable things worse. Black swan risks are one example of an unmeasurable thing; you can’t really measure how common or how bad they are until they happen. If you focus entirely on empirical measurement, you’ll incentivize people to take any trade that improves ordinary results at the cost of increasing black swan risk later. If you want to prevent that, you need a model that includes the possibility of black swan risk – which is going to involve some theory.

Nassim Taleb has been thinking about this kind of thing his whole life and I’m sure he hasn’t missed this point. Probably we are just using terms differently. But I do think the way he uses terms minimizes concern about this type of error, and I do worry the damage can sometimes be pretty large.


I previously mentioned that The Black Swan seems to stand in the tradition of other rationality books like Thinking Fast And Slow and The Signal And The Noise. Is this a fair analysis? If so, what do we make of this tradition?

While Taleb has nothing but praise for eg Kahneman, his book also takes a very different tone. For one thing, it’s part-autobiography / diary / vague-thought-log of Taleb, who is a very interesting person. I read some reviews saying he “needed an editor”, and I understand the sentiment, but – does he? Yes, his book is weird and disconnected. It’s also really fun to read, and sold three million copies. If people who “need an editor” often sell more copies than people who don’t, and are more enjoyable, are we sure we’re not just arbitrarily demanding people conform to a certain standard of book-writing that isn’t really better than alternative standards? Are we sure it’s really true that you can’t just stick several chapters about the biography of a fake Russian author into the middle of your book for no reason, without admitting that it’s fake? Are you sure you can’t insert a thinly-disguised version of yourself into the story about the Russian author, have yourself be such a suave and attractive individual that she falls for you and you start a torrid love affair, and then make fun of her cuckolded husband, who is suspiciously similar to the academics you despise? Are you sure this is an inappropriate thing to do in the middle of a book on probability? Maybe Nate Silver would have done it too if he had thought of it first.

Also sort of surprising: Taleb hates nerds. He explains:

To set the terminology straight, what I call “a nerd” here doesn’t have to look sloppy, unaesthetic, and sallow, and wear glasses and a portable computer on his belt as if it were an ostensible weapon. A nerd is simply someone who thinks exceedingly inside the box.

Have you ever wondered why so many of these straight-A students end up going nowhere in life while someone who lagged behind is now getting the shekels, buying the diamonds, and getting his phone calls returned? Or even getting the Nobel Prize in a real discipline (say, medicine)? Some of this may have something to do with luck in outcomes, but there is this sterile and obscurantist quality that is often associated with classroom knowledge that may get in the way of understanding what’s going on in real life. In an IQ test, as well as in any academic setting (including sports), Dr. John would vastly outperform Fat Tony. But Fat Tony would outperform Dr. John in any other possible ecological, real-life situation. In fact, Tony, in spite of his lack of culture, has an enormous curiosity about the texture of reality, and his own erudition—to me, he is more scientific in the literal, though not in the social, sense than Dr. John.

Going after nerds in your book contrasting Gaussian to power law distributions, with references to the works of Poincaré and Popper, is a bold choice. It also separates Taleb from the rest of the rationality tradition. I interpret eg The Signal And The Noise as pro-nerd. Its overall thesis is “Ordinary people are going around being woefully biased about all sorts of things. Good thing that bright people like Nate Silver can use the latest advances in statistics to figure out where they are going wrong, do the hard work of processing the statistical signal correctly, and create a brighter future for all of us.” Taleb turns that on its head. For him, ordinary people – taxi drivers, barbers, vibrant salt-of-the-earth heavily-accented New Yorkers – are the heroes, who know what’s up and are too sensible to go around saying that everything must be a bell curve, or that they have a clever theory which proves the market can never crash. It’s only the egghead intellectuals who could make such an error.

I am not sure this is true – my last New York taxi driver spent the ride explaining to me that he was the Messiah, which seems like an error on some important axis of reasoning that most intellectuals get right. But I understand that some of Taleb’s later works – Antifragile and Skin In The Game – may address more of what he means by this. It looks like Kahneman, Silver, et al are basically trying to figure out what doing things optimally would look like – which is a very nerdy project. Taleb is trying to figure out how to run systems without an assumption that you will necessarily be right very often.

I am reminded of the example of doctors being asked probability questions, about whether a certain finding on a mammogram implies X probability of breast cancer. The doctors all get this horribly wrong, because none of them ever learned anything about probability. But after getting every question on the test wrong, they will go and perform actions which are basically optimized for correctly diagnosing and treating breast cancer, even though their probability-related answers imply they should do totally different things.
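The mammogram question the doctors get wrong is a one-line application of Bayes' rule. The specific numbers below (1% prevalence, 80% sensitivity, 9.6% false-positive rate) are not from the essay; they are the classic figures used in studies of this question, included here as an assumed illustration:

```rust
/// P(cancer | positive test), by Bayes' rule.
fn posterior(prevalence: f64, sensitivity: f64, false_positive_rate: f64) -> f64 {
    let true_positives = prevalence * sensitivity;
    let false_positives = (1.0 - prevalence) * false_positive_rate;
    true_positives / (true_positives + false_positives)
}

fn main() {
    let p = posterior(0.01, 0.80, 0.096);
    println!("P(cancer | positive mammogram) = {:.1}%", p * 100.0);
}
```

With these inputs the answer comes out under 8%, while doctors in the classic studies typically guessed far higher.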

I see Kahneman, Tetlock, Silver, and Yudkowsky as all being in the tradition of finding optimal laws of probability that point out why the doctors are wrong, and figuring out how to train doctors to answer probability questions right. I see Taleb as being on the side of the doctors – trying to figure out a system where the right decisions get made whether anyone has a deep mathematical understanding of the situation or not. Taleb appreciates the others’ work – you have to know something about probability before you can discuss why some systems tend towards getting it right vs. getting it wrong – but overall he agrees that “rationality is about winning” – the doctor who eventually gives the right treatment is better than a statistician who answers all relevant math questions correctly but has no idea what to do.

Relatedly, I think Taleb’s critique of nerds works because he’s trying to resurrect a Greco-Roman concept of the intellectual – arete and mens sana in corpore sano and all that – and clearly uses “nerd” to mean everything about modern faux-intellectuals that falls short of his vision. Thales cornering the market on olive presses is his kind of guy, and he doesn’t think that all of the people who have rage-induced seizures when he whispers the phrase “power law distribution” in their ears really cut it. His book is both a discussion of his own area of study (risk), and a celebration of and guide to what he thinks intellectualism should be. I might have missed the section of Marcus Aurelius where he talks about how angry Twitter rants are a good use of your time, but aside from that I think the autobiographical parts of the book make a convincing aesthetic argument that Taleb is living the dream and we should try to live it too.

Perhaps relating to this, of Taleb, Silver, Tetlock, Yudkowsky, and Kahneman, Taleb seems to have stuck around longest. All of them continue to do great object-level work in their respective fields, but it seems like the “moment” for books about rationality came and passed around 2010. Maybe it’s because the relevant science has slowed down – who is doing Kahneman-level work anymore? Maybe it’s because people spent about eight years seeing if knowing about cognitive biases made them more successful at anything, noticed it didn’t, and stopped caring. But reading The Black Swan really does feel like looking back to another era when the public briefly became enraptured by human rationality, and then, after learning a few cool principles, said “whatever” and moved on.

Except for Taleb. I’m excited to see he’s still working in this field and writing more books expanding on these principles. I look forward to reading the other books in this series.


GnuPG can now be used to perform notarial acts in the State of Washington


Washington State Electronic Notary Public endorsements

C.J. Collier cjac at colliertech.org
Mon Sep 17 20:53:02 CEST 2018

Hello folks!

I thought I'd write you to let you know about my recent conversation with
the Washington State Department of Licensing (DoL).  In July, I submitted
my application for renewal of my Notary commission.  This year, the WA
Notary laws changed to allow notaries to request an endorsement with
renewal or initial application <https://www.dol.wa.gov/forms/659007.pdf>.
Since I have a bit of experience
<http://www.colliertech.org/state/19.34_RCW/> with non-repudiation online,
I checked the box indicating a request for Electronic Notary Public
endorsement, listed "Collier Technologies LLC / GnuPG" as my software of
choice, and sent in the extra $15.  My application was processed and my
third commission as notary was approved on August 17th.

I received in the mail a letter dated September 6th 2018, however, which
informed me that the software I indicated "does not meet the requirements
<http://apps.leg.wa.gov/wac/default.aspx?cite=308-30-130> of the enclosed
laws and rules."

The requirements indicated were:

Requirements for technologies and technology providers.
A tamper-evident technology shall comply with these rules:
(1) A technology provider shall enroll only notaries public who have been
issued an electronic records notary public endorsement pursuant to WAC
308-30-030 <http://apps.leg.wa.gov/wac/default.aspx?cite=308-30-030>.
(2) A technology provider shall take reasonable steps to ensure that a
notary public who has enrolled to use the technology has the knowledge to
use it to perform electronic notarial acts in compliance with these rules.
(3) A tamper-evident technology shall require access to the system by a
password or other secure means of authentication.
(4) A tamper-evident technology shall enable a notary public to affix the
notary's electronic signature and seal or stamp in a manner that attributes
such signature and seal or stamp to the notary.
(5) A technology provider shall provide prorated fees to align the usage
and cost of the tamper-evident technology with the term limit of the notary
public electronic records notary public endorsement.
(6) A technology provider shall suspend the use of any tamper-evident
technology for any notary public whose endorsement has been revoked,
suspended, or canceled by the state of Washington or the notary public.

This all seemed to me to be something that GnuPG is designed to do and does
quite well.  So I sent an email on Friday night to the sender of the letter
requesting specific issues that my provider did not comply with.  This
morning I received a call from the DoL, and was able to successfully argue
for GnuPG's qualification as an electronic records notary public technology
provider for the State of Washington.

In short, GnuPG can now be used to perform notarial acts
<http://app.leg.wa.gov/RCW/default.aspx?cite=42.45.140> in the State of Washington.





Falling in love with Rust


Let me preface this with an apology: this is a technology love story, and as such, it’s long, rambling, sentimental and personal. Also befitting a love story, it has a When Harry Met Sally feel to it, in that its origins are inauspicious…

First encounters

Over a decade ago, I worked on a technology to which a competitor paid the highest possible compliment: they tried to implement their own knockoff. Because this was done in the open (and because it is uniquely mesmerizing to watch one’s own work mimicked), I spent way too much time following their mailing list and tracking their progress (and yes, taking an especially shameful delight in their occasional feuds). On their team, there was one technologist who was clearly exceptionally capable — and I confess to being relieved when he chose to leave the team relatively early in the project’s life. This was all in 2005; for years for me, Rust was “that thing that Graydon disappeared to go work on.” From the description as I read it at the time, Graydon’s new project seemed outrageously ambitious — and I assumed that little would ever come of it, though certainly not for lack of ability or effort…

Fast forward eight years to 2013 or so. Impressively, Graydon’s Rust was not only still alive, but it had gathered a community and was getting quite a bit of attention — enough to merit a serious look. There seemed to be some very intriguing ideas, but any budding interest that I might have had frankly withered when I learned that Rust had adopted the M:N threading model — including its more baroque consequences like segmented stacks. In my experience, every system that has adopted the M:N model has lived to regret it — and it was unfortunate to have a promising new system appear to be ignorant of the scarred shoulders that it could otherwise stand upon. For me, the implications were larger than this single decision: I was concerned that it may be indicative of a deeper malaise that would make Rust a poor fit for the infrastructure software that I like to write. So while impressed that Rust’s ambitious vision was coming to any sort of fruition at all, I decided that Rust wasn’t for me personally — and I didn’t think much more about it…

Some time later, a truly amazing thing happened: Rust ripped it out. Rust’s reasoning for removing segmented stacks is a concise but thorough damnation; their rationale for removing M:N is clear-eyed, thoughtful and reflective — but also unequivocal in its resolve. Suddenly, Rust became very interesting: all systems make mistakes, but few muster the courage to rectify them; on that basis alone, Rust became a project worthy of close attention.

So several years later, in 2015, it was with great interest that I learned that Adam started experimenting with Rust. On first read of Adam’s blog entry, I assumed he would end what appeared to be excruciating pain by deleting the Rust compiler from his computer (if not by moving to a commune in Vermont) — but Adam surprised me when he ended up being very positive about Rust, despite his rough experiences. In particular, Adam hailed the important new ideas like the ownership model — and explicitly hoped that his experience would serve as a warning to others to approach the language in a different way.

In the years since, Rust continued to mature and my curiosity (and I daresay, that of many software engineers) has steadily intensified: the more I have discovered, the more intrigued I have become. This interest has coincided with my personal quest to find a programming language for the back half of my career: as I mentioned in my Node Summit 2017 talk on platform as a reflection of values, I have been searching for a language that reflects my personal engineering values around robustness and performance. These values reflect a deeper sense within me: that software can be permanent — that software’s unique duality as both information and machine affords a timeless perfection and utility that stand apart from other human endeavor. In this regard, I have believed (and continue to believe) that we are living in a Golden Age of software, one that will produce artifacts that will endure for generations. Of course, it can be hard to hold such heady thoughts when we seem to be up to our armpits in vendored flotsam, flooded by sloppy abstractions hastily implemented. Among current languages, only Rust seems to share this aspiration for permanence, with a perspective that is decidedly larger than itself.

Taking the plunge

So I have been actively looking for an opportunity to dive into Rust in earnest, and earlier this year, one presented itself: for a while, I have been working on a new mechanism for system visualization that I dubbed statemaps. The software for rendering statemaps needs to inhale a data stream, coalesce it down to a reasonable size, and render it as a dynamic image that can be manipulated by the user. This originally started off as being written in node.js, but performance became a problem (especially for larger data sets) and I did what we at Joyent have done in such situations: I rewrote the hot loop in C, and then dropped that into a node.js add-on (allowing the SVG-rendering code to remain in JavaScript). This was fine, but painful: the C was straightforward, but the glue code to bridge into node.js was every bit as capricious, tedious, and error-prone as it has always been. Given the performance constraint, the desire for the power of a higher level language, and the experimental nature of the software, statemaps made for an excellent candidate to reimplement in Rust; my intensifying curiosity could finally be sated!

As I set out, I had the advantage of having watched (if from afar) many others have their first encounters with Rust. And if those years of being a Rust looky-loo taught me anything, it’s that the early days can be like the first days of snowboarding or windsurfing: lots of painful falling down! So I took a deliberate approach with Rust: rather than do what one is wont to do when learning a new language and tinker a program into existence, I really sat down to learn Rust. This is frankly my bias anyway (I always look for the first principles of a creation, as explained by its creators), but with Rust, I went further: not only did I buy the canonical reference (The Rust Programming Language by Steve Klabnik, Carol Nichols and community contributors), I also bought an O’Reilly book with a bit more narrative (Programming Rust by Jim Blandy and Jason Orendorff). And with this latter book, I did something that I haven’t done since cribbing BASIC programs from Enter magazine back in the day: I typed in the example program in the introductory chapters. I found this to be very valuable: it got the fingers and the brain warmed up while still absorbing Rust’s new ideas — and debugging my inevitable transcription errors allowed me to get some understanding of what it was that I was typing. At the end was something that actually did something, and (importantly), by working with a program that was already correct, I was able to painlessly feel some of the tremendous promise of Rust.

Encouraged by these early (if gentle) experiences, I dove into my statemap rewrite. It took a little while (and yes, I had some altercations with the borrow checker!), but I’m almost shocked about how happy I am with the rewrite of statemaps in Rust. Because I know that many are in the shoes I occupied just a short while ago (namely, intensely wondering about Rust, but also wary of its learning curve — and concerned about the investment of time and energy that climbing it will necessitate), I would like to expand on some of the things that I love about Rust other than the ownership model. This isn’t because I don’t love the ownership model (I absolutely do) or that the ownership model isn’t core to Rust (it is rightfully thought of as Rust’s epicenter), but because I think its sheer magnitude sometimes dwarfs other attributes of Rust — attributes that I find very compelling! In a way, I am writing this for my past self — because if I have one regret about Rust, it’s that I didn’t see beyond the ownership model to learn it earlier.

I will discuss these attributes in roughly the order I discovered them with the (obvious?) caveat that this shouldn’t be considered authoritative; I’m still very much new to Rust, and my apologies in advance for any technical details that I get wrong!

1. Rust’s error handling is beautiful

The first thing that really struck me about Rust was its beautiful error handling — but to appreciate why it so resonated with me requires some additional context. Despite its obvious importance, error handling is something we haven’t really gotten right in systems software. For example, as Dave Pacheco observed with respect to node.js, we often conflate different kinds of errors — namely, programmatic errors (i.e., my program is broken because of a logic error) with operational errors (i.e., an error condition external to my program has occurred and it affects my operation). In C, this conflation is unusual, but you see it with the infamous SIGSEGV signal handler that has been known to sneak into more than one undergraduate project moments before a deadline to deal with an otherwise undebuggable condition. In the Java world, this is slightly more common with the (frowned upon) behavior of catching java.lang.NullPointerException or otherwise trying to drive on in light of clearly broken logic. And in the JavaScript world, this conflation is commonplace — and underlies one of the most serious objections to promises.

Beyond the ontological confusion, error handling suffers from an infamous mechanical problem: for a function that may return a value but may also fail, how is the caller to delineate the two conditions? (This is known as the semipredicate problem after a Lisp construct that suffers from it.) C handles this as it handles so many things: by leaving it to the programmer to figure out their own (bad) convention. Some use sentinel values (e.g., Linux system calls cleave the return space in two and use negative values to denote the error condition); some return defined values on success and failure and then set an orthogonal error code; and of course, some just silently eat errors entirely (or even worse).

C++ and Java (and many other languages before them) tried to solve this with the notion of exceptions. I do not like exceptions: for reasons not dissimilar to Dijkstra’s in his famous admonition against “goto”, I consider exceptions harmful. While they are perhaps convenient from a function signature perspective, exceptions allow errors to wait in ambush, deep in the tall grass of implicit dependencies. When the error strikes, higher-level software may well not know what hit it, let alone from whom — and suddenly an operational error has become a programmatic one. (Java tries to mitigate this sneak attack with checked exceptions, but while well-intentioned, they have serious flaws in practice.) In this regard, exceptions are a concrete example of trading the speed of developing software with its long-term operability. One of our deepest, most fundamental problems as a craft is that we have enshrined “velocity” above all else, willfully blinding ourselves to the long-term consequences of gimcrack software. Exceptions optimize for the developer by allowing them to pretend that errors are someone else’s problem — or perhaps that they just won’t happen at all.

Fortunately, exceptions aren’t the only way to solve this, and other languages take other approaches. Closure-heavy languages like JavaScript afford environments like node.js the luxury of passing an error as an argument — but this argument can be ignored or otherwise abused (and it’s untyped regardless), making this solution far from perfect. And Go uses its support for multiple return values to (by convention) return both a result and an error value. While this approach is certainly an improvement over C, it is also noisy, repetitive and error-prone.

By contrast, Rust takes an approach that is unique among systems-oriented languages: leveraging first algebraic data types — whereby a thing can be exactly one of an enumerated list of types and the programmer is required to be explicit about its type to manipulate it — and then combining it with its support for parameterized types. Together, this allows functions to return one thing that’s one of two types: one type that denotes success and one that denotes failure. The caller can then pattern match on the type of what has been returned: if it’s of the success type, it can get at the underlying thing (by unwrapping it), and if it’s of the error type, it can get at the underlying error and either handle it, propagate it, or improve upon it (by adding additional context) and propagating it. What it cannot do (or at least, cannot do implicitly) is simply ignore it: it has to deal with it explicitly, one way or the other. (For all of the details, see Recoverable Errors with Result.)

To make this concrete, in Rust you end up with code that looks like this:

use std::fs;
use std::fs::File;
use std::io;

fn do_it(filename: &str) -> Result<(), io::Error> {
    let stat = match fs::metadata(filename) {
        Ok(result) => { result },
        Err(err) => { return Err(err); }
    };

    let file = match File::open(filename) {
        Ok(result) => { result },
        Err(err) => { return Err(err); }
    };

    /* ... */
    Ok(())
}

Already, this is pretty good: it’s cleaner and more robust than multiple return values, return sentinels and exceptions — in part because the type system helps you get this correct. But it’s also verbose, so Rust takes it one step further by introducing the propagation operator: if your function returns a Result, when you call a function that itself returns a Result, you can append a question mark on the call to the function denoting that upon Ok, the result should be unwrapped and the expression becomes the unwrapped thing — and upon Err the error should be returned (and therefore propagated). This is easier seen than explained! Using the propagation operator turns our above example into this:

fn do_it_better(filename: &str) -> Result<(), io::Error> {
    let stat = fs::metadata(filename)?;
    let file = File::open(filename)?;

    /* ... */
    Ok(())
}


This, to me, is beautiful: it is robust; it is readable; it is not magic. And it is safe in that the compiler helps us arrive at this and then prevents us from straying from it.

Platforms reflect their values, and I daresay the propagation operator is an embodiment of Rust’s: balancing elegance and expressiveness with robustness and performance. This balance is reflected in a mantra that one hears frequently in the Rust community: “we can have nice things.” Which is to say: while historically some of these values were in tension (i.e., making software more expressive might implicitly be making it less robust or more poorly performing), through innovation Rust is finding solutions that don’t compromise one of these values for the sake of the other.

2. The macros are incredible

When I was first learning C, I was (rightly) warned against using the C preprocessor. But like many of the things that we are cautioned about in our youth, this warning was one that the wise give to the enthusiastic to prevent injury; the truth is far more subtle. And indeed, as I came of age as a C programmer, I not only came to use the preprocessor, but to rely upon it. Yes, it needed to be used carefully — but in the right hands it could generate cleaner, better code. (Indeed, the preprocessor is very core to the way we implement DTrace’s statically defined tracing.) So if anything, my problems with the preprocessor were not its dangers so much as its many limitations: because it is, in fact, a preprocessor and not built into the language, there were all sorts of things that it would never be able to do — like access the abstract syntax tree.

With Rust, I have been delighted by its support for hygienic macros. This not only solves the many safety problems with preprocessor-based macros, it allows them to be outrageously powerful: with access to the AST, macros are afforded an almost limitless expansion of the syntax — but invoked with an indicator (a trailing bang) that makes it clear to the programmer when they are using a macro. For example, one of the fully worked examples in Programming Rust is a json! macro that allows for JSON to be easily declared in Rust. This gets to the ergonomics of Rust, and there are many macros (e.g., format!, vec!, etc.) that make Rust more pleasant to use.

Another advantage of macros: they are so flexible and powerful that they allow for effective experimentation. For example, the propagation operator that I love so much actually started life as a try! macro; that this macro was being used ubiquitously (and successfully) allowed a language-based solution to be considered. Languages can be (and have been!) ruined by too much experimentation happening in the language rather than in how it’s used; through its rich macros, it seems that Rust can enable the core of the language to remain smaller — and to make sure that when it expands, it is for the right reasons and in the right way.
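For readers who haven't seen Rust's declarative macros, here is a minimal sketch of what one looks like; `avg!` is an invented example, not from the post, but it shows the trailing bang at the call site that the post describes:

```rust
// A tiny hygienic macro: averages any number of numeric arguments.
macro_rules! avg {
    ($($x:expr),+ $(,)?) => {{
        let vals = [$($x as f64),+];
        vals.iter().sum::<f64>() / vals.len() as f64
    }};
}

fn main() {
    println!("{}", avg!(1, 2, 3, 4));
    println!("{}", avg!(2.5, 3.5));
}
```

The macro expands at compile time into an ordinary array-and-sum expression, so there is no runtime dispatch and the arguments remain type-checked.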

3. format! is a pleasure

Okay, this is a small one but it’s (another) one of those little pleasantries that has made Rust really enjoyable. Many (most? all?) languages have an approximation or equivalent of the venerable sprintf, whereby variable input is formatted according to a format string. Rust’s variant of this is the format! macro (which is in turn invoked by println!, panic!, etc.), and (in keeping with one of the broader themes of Rust) it feels like it has learned from much that came before it. It is type-safe (of course) but it is also clean in that the {} format specifier can be used on any type that implements the Display trait. I also love that the {:?} format specifier denotes that the argument’s Debug trait implementation should be invoked to print debug output. More generally, all of the format specifiers map to particular traits, allowing for an elegant approach to a historically grotty problem. There are a bunch of other niceties, and it’s all a concrete example of how Rust uses macros to deliver nice things without sullying syntax or otherwise special-casing. None of the formatting capabilities are unique to Rust, but that’s the point: in this (small) domain (as in many) Rust feels like a distillation of the best work that came before it. As anyone who has had to endure one of my talks can attest, I believe that appreciating history is essential both to understand our present and to map our future. Rust seems to have that perspective in the best ways: it is reverential of the past without being incarcerated by it.
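To make the trait-driven format specifiers concrete, here is a small invented example: `{}` dispatches to a hand-written `Display` implementation, while `{:?}` uses a derived `Debug`:

```rust
use std::fmt;

#[derive(Debug)] // gives us the {:?} output for free
struct Fraction {
    num: i64,
    den: i64,
}

// Implementing Display is what lets {} accept a Fraction.
impl fmt::Display for Fraction {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}/{}", self.num, self.den)
    }
}

fn main() {
    let half = Fraction { num: 1, den: 2 };
    println!("{}", half);   // Display: 1/2
    println!("{:?}", half); // Debug: Fraction { num: 1, den: 2 }
}
```

The same pattern extends to the other specifiers (e.g., `{:x}` maps to the LowerHex trait), which is what makes the mechanism uniform rather than special-cased.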

4. include_str! is a godsend

One of the filthy aspects of the statemap code is that it effectively encapsulates another program — a JavaScript program that lives in the SVG to allow for the interactivity of the statemap. This code lives in its own file, which the statemap code should pass through to the generated SVG. In the node.js/C hybrid, I am forced to locate the file in the filesystem — which is annoying because it has to be delivered along with the binary and located, etc. Now Rust — like many languages (including ES6) — has support for raw-string literals. As an aside, it’s interesting to see the discussion leading up to its addition, and in particular, how a group of people really looked at every language that does this to see what should be mimicked versus what could be improved upon. I really like the syntax that Rust converged on: r followed by one or more octothorpes followed by a quote to begin a raw string literal, and a quote followed by a matching number of octothorpes to end it, e.g.:

    let str = r##""What a curious feeling!" said Alice"##;

This alone would have allowed me to do what I want, but it would still be a tad gross to have a bunch of JavaScript living inside a raw literal in a .rs file. Enter include_str!, which allows me to tell the compiler to find the specified file in the filesystem during compilation, and statically drop it into a string variable that I can manipulate:

        /*
         * Now drop in our in-SVG code.
         */
        let lib = include_str!("statemap-svg.js");

So nice! Over the years I have wanted this many times over for my C, and it’s another one of those little (but significant!) things that make Rust so refreshing.

5. Serde is stunningly good

Serde is a Rust crate that allows for serialization and deserialization, and it’s just exceptionally good. It uses macros (and, in particular, Rust’s procedural macros) to generate structure-specific routines for serialization and deserialization. As a result, Serde requires remarkably little programmer lift to use and performs eye-wateringly well — a concrete embodiment of Rust’s repeated defiance of the conventional wisdom that programmers must choose between abstractions and performance!

For example, in the statemap implementation, the input is concatenated JSON that begins with a metadata payload. To read this payload in Rust, I define the structure, and denote that I wish to derive the Deserialize trait as implemented by Serde:

#[derive(Deserialize, Debug)]
struct StatemapInputMetadata {
    start: Vec<u64>,
    title: String,
    host: Option<String>,
    entityKind: Option<String>,
    states: HashMap<String, StatemapInputState>,
}

Then, to actually parse it:

     let metadata: StatemapInputMetadata = serde_json::from_str(payload)?;

That’s… it. Thanks to the magic of the propagation operator, the errors are properly handled and propagated — and it has handled tedious, error-prone things for me like the optionality of certain members (itself beautifully expressed via Rust’s ubiquitous Option type). With this one line of code, I now (robustly) have a StatemapInputMetadata instance that I can use and operate upon — and this performs incredibly well on top of it all. In this regard, Serde represents the best of software: it is a sophisticated, intricate implementation making available elegant, robust, high-performing abstractions; as legendary White Sox play-by-play announcer Hawk Harrelson might say, MERCY!

6. I love tuples

In my C, I have been known to declare anonymous structures in functions. More generally, in any strongly typed language, there are plenty of times when you don’t want to have to fill out paperwork to be able to structure your data: you just want a tad more structure for a small job. For this, Rust borrows an age-old construct from ML in tuples. Tuples are expressed as a parenthetical list, and they basically work as you expect them to work in that they are static in size and type, and you can index into any member. For example, in some test code that needs to make sure that names for colors are correctly interpreted, I have this:

        let colors = vec![
            ("aliceblue", (240, 248, 255)),
            ("antiquewhite", (250, 235, 215)),
            ("aqua", (0, 255, 255)),
            ("aquamarine", (127, 255, 212)),
            ("azure", (240, 255, 255)),
            /* ... */
        ];

Then colors[2].0 (say) will be the string "aqua", and (colors[1].1).2 will be the integer 215. Don’t let the absence of a type declaration in the above deceive you: tuples are strongly typed; it’s just that Rust is inferring the type for me. So if I accidentally try to (say) add an element to the above vector that contains a tuple of mismatched signature (e.g., the tuple ((188, 143, 143), ("rosybrown")), which has the order reversed), Rust will give me a compile-time error.

The full integration of tuples makes them a joy to use. For example, if a function returns a tuple, you can easily assign its constituent parts to disjoint variables, e.g.:

fn get_coord() -> (u32, u32) {
   (1, 2)
}

fn do_some_work() {
    let (x, y) = get_coord();
    /* x has the value 1, y has the value 2 */
}
Great stuff!

7. The integrated testing is terrific

One of my regrets on DTrace is that we didn’t start on the DTrace test suite at the same time we started the project. And even after we started building it (too late, but blessedly before we shipped it), it still lived away from the source for several years. And even now, it’s a bit of a pain to run — you really need to know it’s there.

This represents everything that’s wrong with testing in C: because it requires bespoke machinery, too many people don’t bother — even when they know better! Viz.: in the original statemap implementation, there is zero testing code — and not because I don’t believe in it, but just because it was too much work for something relatively small. Yes, there are plenty of testing frameworks for C and C++, but in my experience, the integrated frameworks are too constrictive — and again, not worth it for a smaller project.

With the rise of test-driven development, many languages have taken a more integrated approach to testing. For example, Go has a rightfully lauded testing framework, Python has unittest, etc. Rust takes a highly integrated approach that combines the best of all worlds: test code lives alongside the code that it’s testing — but without having to make the code bend to a heavyweight framework. The workhorses here are conditional compilation and Cargo, which together make it so easy to write tests and run them that I found myself doing true test-driven development with statemaps — namely writing the tests as I develop the code.
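A minimal sketch of what this looks like in practice — the test module lives in the same file as the code under test (the function here is hypothetical), and conditional compilation means it is only built when running `cargo test`:

```rust
// A function under test.
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

// The tests module is compiled only for `cargo test`, thanks to
// the #[cfg(test)] conditional-compilation attribute.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn adds_small_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}
```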

8. The community is amazing

In my experience, the best communities are ones that are inclusive in their membership but resolute in their shared values. When communities aren’t inclusive, they stagnate, or rot (or worse); when communities don’t share values, they feud and fracture. This can be a very tricky balance, especially when so many open source projects start out as the work of a single individual: it’s very hard for a community not to reflect the idiosyncrasies of its founder. This is important because in the open source era, community is critical: one is selecting a community as much as one is selecting a technology, as each informs the future of the other. One factor that I value a bit less is strictly size: some of my favorite communities are small ones — and some of my least favorite are huge.

As a community, Rust has the luxury of clearly articulated, broadly shared values that are featured prominently and reiterated frequently. If you head to the Rust website, this is the first sentence you’ll read:

Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

That gets right to it: it says that as a community, we value performance and robustness — and we believe that we shouldn’t have to choose between these two. (And we have seen that this isn’t mere rhetoric, as so many Rust decisions show that these values are truly the lodestar of the project.)

And with respect to inclusiveness, it is revealing that you will likely read that statement of values in your native tongue, as the Rust web page has been translated into thirteen languages. Just the fact that it has been translated into so many languages makes Rust nearly unique among its peers. But perhaps more interesting is where this globally inclusive view likely finds its roots: among the sites of its peers, only Ruby is similarly localized. Given that several prominent Rustaceans like Steve Klabnik and Carol Nichols came from the Ruby community, it would not be unreasonable to guess that they brought this globally inclusive view with them. This kind of inclusion is one that one sees again and again in the Rust community: different perspectives from different languages and different backgrounds. Those who come to Rust bring with them their experiences — good and bad — from the old country, and the result is a melting pot of ideas. This is an inclusiveness that runs deep: by welcoming such disparate perspectives into a community and then uniting them with shared values and a common purpose, Rust achieves a rich and productive heterogeneity of thought. That is, because the community agrees about the big things (namely, its fundamental values), it has room to constructively disagree (that is, achieve consensus) on the smaller ones.

Which isn’t to say this is easy! Check out Ashley Williams in the opening keynote from RustConf 2018 for how exhausting it can be to hash through these smaller differences in practice. Rust has taken a harder path than the “traditional” BDFL model, but it’s a qualitatively better one — and I believe that many of the things that I love about Rust are a reflection of (and a tribute to) its robust community.

9. The performance rips

Finally, we come to the last thing I discovered in my Rust odyssey — but in many ways, the most important one. As I described in an internal presentation, I had experienced some frustrations trying to implement in Rust the same structure I had had in C. So I mentally gave up on performance, resolving to just get something working first, and then optimize it later.

I did get it working, and was able to benchmark it, but to give some context for the numbers, here is the time to generate a statemap in the old (slow) pure node.js implementation for a modest trace (229M, ~3.9M state transitions) on my 2.9 GHz Core i7 laptop:

% time ./statemap-js/bin/statemap ./pg-zfs.out > js.svg

real	1m23.092s
user	1m21.106s
sys	0m1.871s

This is bad — and larger input will cause it to just run out of memory. And here’s the version as reimplemented as a C/node.js hybrid:

% time ./statemap-c/bin/statemap ./pg-zfs.out > c.svg

real	0m11.800s
user	0m11.414s
sys	0m0.330s

This was (as designed) a 10X improvement in performance, and represents speed-of-light numbers in that this seems to be an optimal implementation. Because I had written my Rust naively (and my C carefully), my hope was that the Rust would be no more than 20% slower — but I was braced for pretty much anything. Or at least, I thought I was; I was actually genuinely taken aback by the results:

$ time ./statemap.rs/target/release/statemap ./pg-zfs.out > rs.svg
3943472 records processed, 24999 rectangles

real	0m8.072s
user	0m7.828s
sys	0m0.186s

Yes, you read that correctly: my naive Rust was ~32% faster than my carefully implemented C. This blew me away, and in the time since, I have spent some time on a real lab machine running SmartOS (where I have reproduced these results and been able to study them a bit). My findings are going to have to wait for another blog entry, but suffice it to say that despite executing a shockingly similar number of instructions, the Rust implementation has a different load/store mix (it is much more store-heavy than C) — and is much better behaved with respect to the cache. Given the degree that Rust passes by value, this makes some sense, but much more study is merited.

It’s also worth mentioning that there are some easy wins that will make the Rust implementation even faster: after I had publicized the fact that I had a Rust implementation of statemaps working, I was delighted when David Tolnay, the author of Serde, took the time to make some excellent suggestions for improvement. For a newcomer like me, it’s a great feeling to have someone with such deep expertise as David’s take an interest in helping me make my software perform even better — and it is revealing as to the core values of the community.

Rust’s shockingly good performance — and the community’s desire to make it even better — fundamentally changed my disposition towards it: instead of seeing Rust as a language to augment C and replace dynamic languages, I’m looking at it as a language to replace both C and dynamic languages in all but the very lowest layers of the stack. C — like assembly — will continue to have a very important place for me, but it’s hard to not see that place as getting much smaller relative to the barnstorming performance of Rust!

Beyond the first impressions

I wouldn’t want to imply that this is an exhaustive list of everything that I have fallen in love with about Rust. That list is much longer, and would include at least the ownership model; the trait system; Cargo; the type inference system. And I feel like I have just scratched the surface; I haven’t waded into known strengths of Rust like the FFI and the concurrency model! (Despite having written plenty of multithreaded code in my life, I haven’t so much as created a thread in Rust!)

Building a future

I can say with confidence that my future is in Rust. As I have spent my career doing OS kernel development, a natural question would be: do I intend to rewrite the OS kernel in Rust? In a word, no. To understand my reluctance, take some of my most recent experience: this blog entry was delayed because I needed to debug (and fix) a nasty problem with our implementation of the Linux ABI. As it turns out, Linux and SmartOS make slightly different guarantees with respect to the interaction of vfork and signals, and our code was fatally failing on a condition that should be impossible. Any old Unix hand (or quick study!) will tell you that vfork and signal disposition are each semantic superfund sites in their own right — and that their horrific (and ill-defined) confluence can only be unimaginably toxic. But the real problem is that actual software implicitly depends on these semantics — and any operating system that is going to want to run existing software will itself have to mimic them. You don’t want to write this code, because no one wants to write this code.

Now, one option (which I honor!) is to rewrite the OS from scratch, as if legacy applications essentially didn’t exist. While there is a tremendous amount of good that can come out of this (and it can find many use cases), it’s not a fit for me personally.

So while I may not want to rewrite the OS kernel in Rust, I do think that Rust is an excellent fit for much of the broader system. For example, at the recent OpenZFS Developers Summit, Matt Ahrens and I were noodling the notion of user-level components for ZFS in Rust. Specifically: zdb is badly in need of a rewrite — and Rust would make an excellent candidate for it. There are many such examples spread throughout ZFS and the broader system, including a few in the kernel. Might we want to have a device driver model that allows for Rust drivers? Maybe! (And certainly, it’s technically possible.) In any case, you can count on a lot more Rust from me and into the indefinite future — whether in the OS, near the OS, or above the OS.

Taking your own plunge

I wrote all of this up in part to not only explain why I took the plunge, but to encourage others to take their own. If you were as I was and are contemplating diving into Rust, a couple of pieces of advice, for whatever they’re worth:

  • I would recommend getting both The Rust Programming Language and Programming Rust. They are each excellent in their own right, and different enough to merit owning both. I also found it very valuable to have two different sources on subjects that were particularly thorny.
  • Understand ownership before you start to write code. The more you understand ownership in the abstract, the less you’ll have to learn at the merciless hands of compiler error messages.
  • Get in the habit of running rustc on short programs. Cargo is terrific, but I personally have found it very valuable to write short Rust programs to understand a particular idea — especially when you want to understand optional or new features of the compiler. (Roll on, non-lexical lifetimes!)
  • Be careful about porting something to Rust as a first project — or otherwise implementing something you’ve implemented before. Now, obviously, this is exactly what I did, and it can certainly be incredibly valuable to be able to compare an implementation in Rust to an implementation in another language — but it can also cut against you: the fact that I had implemented statemaps in C sent me down some paths that were right for C but wrong for Rust; I made much better progress when I rethought the implementation of my problem the way Rust wanted me to think about it.
  • Check out the New Rustacean podcast by Chris Krycho. I have really enjoyed Chris’s podcasts, and have been working my way through them when commuting or doing household chores. I particularly enjoyed his interview with Sean Griffin and his interview with Carol Nichols.
  • Check out rustlings. I learned about this a little too late for me; I wish I had known about it earlier! I did work through the Rust koans, which I enjoyed and would recommend for the first few hours with Rust.
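On the ownership advice above, a minimal sketch (with hypothetical function names) of the move-versus-borrow distinction worth internalizing before the compiler teaches it to you the hard way:

```rust
// Passing by value moves ownership into the function; the
// caller cannot use the value afterward.
fn consume(s: String) {
    println!("consumed {}", s);
}

// Borrowing lets the caller retain ownership.
fn measure(s: &str) -> usize {
    s.len()
}

fn main() {
    let s = String::from("statemap");
    consume(s);
    // println!("{}", s); // compile-time error: value moved into consume()

    let t = String::from("rust");
    let len = measure(&t); // &String coerces to &str
    println!("{} has length {}", t, len); // t is still usable here
}
```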

I’m sure that there’s a bunch of stuff that I missed; if there’s a particular resource that you found useful when learning Rust, message me or leave a comment here and I’ll add it.

Let me close by offering a sincere thanks to those in the Rust community who have been working so long to develop such a terrific piece of software — and especially those who have worked so patiently to explain their work to us newcomers. You should be proud of what you’ve accomplished, both in terms of a revolutionary technology and a welcoming community — thank you for inspiring so many of us about what infrastructure software can become, and I look forward to many years of implementing in Rust!

Read the whole story
2 days ago
Share this story

VA-11 HALL-A sequel N1RV ANN-A announced


Sukeban Games are returning to the cyberpunk world of bartending visual novel RPG doodad VA-11 HALL-A [god, please call it Valhalla -common sense ed.] with a sequel named N1RV ANN-A [oh for… I’ve had enough -former ed.] due in 2020. We’ll play a new bartender behind a new bar in a new city, listening to new people’s woes while mixing them drinks that will surely bring only comfort and no further woes at all nuh uh. Meet some of the weirdos we’ll be serving in the announcement trailer below.

