
Don’t be surprised that a fashion exhibit just became the Met’s most popular show ever

The back of a jacket on display at the Metropolitan Museum of Art’s “Heavenly Bodies” exhibit, which closed October 8.

The Metropolitan Museum of Art’s Catholic fashion exhibit, “Heavenly Bodies,” has broken every attendance record.

At New York’s Metropolitan Museum of Art, fashion reigns supreme. In early May, all eyes were on the opening of a new exhibit from the Met’s Costume Institute, “Heavenly Bodies: Fashion and the Catholic Imagination,” which showed potential to draw record crowds to the museum and provide it with a major source of income at a moment of financial uncertainty.

That’s exactly what it did. The show closed out its five-month run as the museum’s most popular show of all time, beating out 1978’s “Treasures of Tutankhamun” for the top spot. All told, 1,659,647 people turned out for the Costume Institute’s dramatic depiction of Catholic fashion, according to a release from the museum. This final figure cements fashion’s dominance at the nation’s best-known art institution.

It wasn’t always this way. The First Monday in May, a documentary about the making of the Met Gala, the Costume Institute’s annual fundraiser, begins with Costume Institute curator Andrew Bolton, Met trustee (and Vogue editor) Anna Wintour, then-Met director Thomas Campbell, and former Costume Institute curator Harold Koda all explaining why fashion has historically been treated as a second-class discipline within the museum: because it’s known as a decorative art and not a “real” art (like painting, sculpture, or architecture), because it’s still considered women’s domain and therefore frivolous, because the department is literally located in a basement.

On top of that, the overtly commercial nature of fashion — versus the less acknowledged but very real commercialism of art — leads some people to dismiss fashion as an art form.

But with the runaway success of “Alexander McQueen: Savage Beauty” in 2011, fashion exhibits started becoming reliable traffic drivers for the Met. “Savage Beauty,” a theatrical, unearthly retrospective mounted in the wake of McQueen’s death by suicide in 2010, drew 661,509 visitors in a little over three months, making it one of the 10 most popular Met exhibits ever at the time. It has since been displaced by the Costume Institute’s “China: Through the Looking Glass” (2015) and “Manus x Machina: Fashion in an Age of Technology” (2016), now the museum’s sixth- and eighth-most-visited shows.

Two visitors look at a series of McQueen dresses behind glass. Andrew H. Walker/Getty Images
Visitors to the Met’s “Alexander McQueen: Savage Beauty” exhibit in 2011.

Though museum collections have long housed clothing and accessories, the appeal of specialized fashion exhibits has sharpened thanks to digital media’s democratization of the fashion industry. Magazines and newspapers used to relate what happened at fashion shows to consumers; now anyone can watch them by live stream, catch the photos on Instagram minutes later, and tell the designer exactly what they think of his or her work in the comments section.

“We get to have input and feelings and reactions to what’s on the runway, so we’re involved in fashion in a greater way,” says Caroline Bellios, an adjunct professor of fashion at the School of the Art Institute of Chicago. “We take a greater ownership of it.”

Bellios also points out that as the public conversation about identity grows louder, so does our fluency in fashion as a form of self-presentation, furthering the accessibility of fashion in a museum context. Besides, looking at art can feel intimidating — What does it mean? What am I supposed to be getting out of this? — but with clothing, you can always defer to the classic question: Would I wear this?

A blue gown is mounted on a gold mannequin. George Pimentel/Getty Images
A dress on display at the Met in “China: Through the Looking Glass.”

“Heavenly Bodies” didn’t only stand to win because it’s a fashion exhibit. It was also about Catholicism, a proven hook for Met visitors.

Indeed, “Heavenly Bodies” has proven even more popular than “The Vatican Collections,” an exhibit from the spring of 1983 that is now the museum’s fourth most popular. A touring show that originated in New York and later made stops in Chicago and San Francisco, “The Vatican Collections” featured more than 200 works of art borrowed from the Vatican Museum in Rome. The Met gave the Vatican Museum $580,000 to restore some of the works included in the exhibit, and at the end of the show’s stay in New York, the Met told the New York Times that 855,939 people had attended, grossing $2.38 million for the museum.

The profit from “The Vatican Collections,” a Met spokesperson told the Times, would “mean a good deal for the financial health of the museum.”

“Heavenly Bodies” examined the influence of Catholicism and its aesthetics on fashion designers and showcased more than 40 vestments from the church itself. Many years and one long courtship of the Vatican in the making, it was the biggest show the museum has ever held. The combination of fashion and Vatican treasures made it kind of like The Avengers of the art world: a winning crossover event if there ever was one.

You would be right to wonder whether it, too, will mean a good deal for the financial health of the Met, which has faced steep money challenges in recent years. In March, the museum did away with its pay-what-you-wish policy for non-New Yorkers and started charging them a mandatory $25 entrance fee; in April, it appointed a new director, Max Hollein, who is known as an “aggressive” fundraiser.

Unlike every other department at the Met, the Costume Institute finances itself through the Met Gala. The event supports the department’s exhibits, publications, acquisitions, and capital improvements, a rep for the museum said in an email.

Perhaps out of necessity, the Met Gala has escalated into a dazzling spectacle of fashion and celebrity under Wintour’s leadership, drawing out everyone from Beyoncé to the Kardashians. Last year, the evening raised $12 million for the Costume Institute. As the Met Gala has become a major pop culture moment (it’s the setting for Ocean’s 8), it’s raised the Costume Institute’s profile and budget.

A mannequin wears an elaborate headdress. Amelia Krales for The Goods
The detail on a piece on view in the Met’s “Heavenly Bodies” exhibit.

Since “Savage Beauty” in 2011, some onlookers have worried about the pressure the Costume Institute faces to match the success of its past exhibits. Bolton, who curated the McQueen exhibit before becoming the head of the department in 2015, says as much in The First Monday in May: “The McQueen show has become a little bit of an albatross in a way, a bit of a millstone around my neck. It’s the show that every show I’ve done subsequently has been measured against.”

“Savage Beauty” was a turning point for the Costume Institute, and for fashion curators in the museum world generally, earning greater visibility and respect for their work. (When the exhibit traveled to London’s Victoria and Albert Museum, it beat attendance records there too.) Last year, the Museum of Modern Art held its first fashion exhibit in decades, and the Brooklyn Museum focused an entire show on Georgia O’Keeffe’s personal style (plus some of her paintings). Between 2011 and 2016, “The Fashion World of Jean Paul Gaultier From the Sidewalk to the Catwalk” traveled around the world, originating in Montreal before stopping off in 12 cities including Dallas, Madrid, Stockholm, Melbourne, and Seoul; according to the Montreal Museum of Fine Arts, its worldwide attendance totaled more than 2 million.

“One potential drawback is that because of Savage Beauty’s resounding success, there is now pressure in some institutions for every fashion exhibition to be a ‘blockbuster,’” says Bellios. “If every topic for a fashion exhibition has to inspire mass appeal, it could begin to limit what kinds of clothing and dress are shown and what kinds of stories are told.”

With “Heavenly Bodies,” the Met pulled out all the stops. In addition to being the largest show the museum has ever held, it had an unusually long run (May through October) and extended to the Cloisters, the museum’s medieval branch in far northern Manhattan, drawing visitors to that distant outpost. Conditions were right for a blockbuster, and that’s precisely what it was.


"Heavenly Bodies" was really good! I know nothing about fashion and expected it to be gimmicky but it was surprisingly engaging.

bpftrace (DTrace 2.0) for Linux 2018

The private bpftrace repository has just been made public, which is big news for DTrace fans. Created by Alastair Robertson, bpftrace is an open source high-level tracing front end that lets you analyze systems in custom ways. It's shaping up to be a DTrace version 2.0: more capable, and built from the ground up for the modern era of the eBPF virtual machine. eBPF (extended Berkeley Packet Filter) is in the Linux kernel and is the new hotness in systems engineering. It is being developed for BSD, too, where BPF originated. Screenshot: tracing read latency for PID 30153:
# bpftrace -e 'kprobe:vfs_read /pid == 30153/ { @start[tid] = nsecs; }
kretprobe:vfs_read /@start[tid]/ { @ns = hist(nsecs - @start[tid]); delete(@start[tid]); }'
Attaching 2 probes...

[256, 512)         10900 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                      |
[512, 1k)          18291 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[1k, 2k)            4998 |@@@@@@@@@@@@@@                                      |
[2k, 4k)              57 |                                                    |
[4k, 8k)             117 |                                                    |
[8k, 16k)             48 |                                                    |
[16k, 32k)           109 |                                                    |
[32k, 64k)             3 |                                                    |
Some one-liners:
# New processes with arguments
bpftrace -e 'tracepoint:syscalls:sys_enter_execve { join(args->argv); }'

# Files opened by process
bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }'

# Syscall count by program
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'

# Syscall count by syscall
bpftrace -e 'tracepoint:syscalls:sys_enter_* { @[name] = count(); }'

# Syscall count by process
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[pid, comm] = count(); }'

# Read bytes by process:
bpftrace -e 'tracepoint:syscalls:sys_exit_read /args->ret/ { @[comm] = sum(args->ret); }'

# Read size distribution by process:
bpftrace -e 'tracepoint:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

# Disk size by process
bpftrace -e 'tracepoint:block:block_rq_issue { printf("%d %s %d\n", pid, comm, args->bytes); }'

# Pages paged in by process
bpftrace -e 'software:major-faults:1 { @[comm] = count(); }'

# Page faults by process
bpftrace -e 'software:faults:1 { @[comm] = count(); }'

# Profile user-level stacks at 99 Hertz, for PID 189:
bpftrace -e 'profile:hz:99 /pid == 189/ { @[ustack] = count(); }'
If they look familiar, it's because they are from my old [DTrace one-liners], which I ported to bpftrace. It's been a milestone to see these all work in bpftrace, which they now do! bpftrace has most of DTrace's capabilities and more.

## bpftrace

The prior work that led to [bpftrace] (aka BPFtrace) is discussed in my earlier [DTrace for Linux 2016] post, which explains how the Linux kernel gained [eBPF]: the virtual machine that we're using to run tracing programs. That post covered the [bcc] front end (BPF Compiler Collection), which has let me rewrite my DTraceToolkit tools. bcc is powerful but laborious to program. bpftrace is a complementary addition that provides a high-level language for one-liners and short scripts. Alastair recently developed struct support and applied it to tracepoints (which the above one-liners use), and applied it to kprobes yesterday. That was the last major missing piece! Here's an example:
# cat path.bt
#include <linux/path.h>
#include <linux/dcache.h>

kprobe:vfs_open
{
	printf("open path: %s\n", str(((path *)arg0)->dentry->d_name.name));
}

# bpftrace path.bt
Attaching 1 probe...
open path: dev
open path: if_inet6
open path: retrans_time_ms
Willian Gaspar, Matheus Marchini, and I have been contributing other capabilities and bug fixes. For a list of the minor work left to do, see the [issue list], and for what is done, see the [reference guide] and [prior commits]. bpftrace uses existing Linux kernel facilities (eBPF, kprobes, uprobes, tracepoints, perf_events), as well as bcc libraries. Internally, bpftrace uses a lex/yacc parser to convert programs to an AST, then LLVM IR actions, then BPF.

To learn bpftrace, I've created a couple of references:

- [one-liners tutorial]
- [reference guide]

The one-liners tutorial is based on my [FreeBSD DTrace Tutorial], and I think it is a time-efficient way to learn bpftrace (as well as DTrace on FreeBSD). Since I helped develop bpftrace, I'm aware of how fresh my own code is and how likely it is that I introduced bugs. Might bpftrace SIGSEGV and coredump? Probably. Let us know; we've already fixed plenty of these. Might bpftrace spew incomprehensible LLVM or BPF verifier errors (with -v)? Probably. Let us know. Might bpftrace panic the kernel? That's much less likely, since that component is eBPF, which is now years old. What I'd expect is some user-level bugs where the bpftrace process crashes or gets stuck and needs to be killed. Let us know on GitHub, and if you can, please help fix them and the other issues. bpftrace has a test suite with over 200 tests.

## bpftrace/eBPF vs DTrace equivalency

I'll show examples of bpftrace/eBPF vs DTrace in the sections that follow, but it's important to note that the eBPF engine was not created as a Linux version of DTrace. eBPF does more. eBPF was created by Alexei Starovoitov while at PLUMgrid (he's now at Facebook) as a generic in-kernel virtual machine, with software-defined networks as the primary use case. Observability is one of many other use cases, as are [eXpress Data Path], [container security and networking], [iptables], [infrared decoding], [intrusion detection], [hyperupcalls], and [FUSE performance]. eBPF is hard to use directly, so bcc was created as a Python front end. bpftrace was created as an even *higher*-level front end for custom ad hoc tracing, and can serve a similar role as DTrace. We've been adding bpftrace features as we need them, not just because DTrace had them. I can think of over a dozen things that DTrace can do that bpftrace currently cannot, including custom aggregation printing, shell arguments, translators, sizeof(), speculative tracing, and forced panics.
Some we may add as the need arises, but so far I haven't really needed them. In some cases it's best left to bcc, rather than cluttering up bpftrace with niche functionality. DTrace, too, lacked functionality, like argument processing, that I implemented elsewhere in shell wrappers. Instead of augmenting bpftrace with shell wrappers, we can just switch to bcc, which can make use of any Python library (including argparse). This makes bpftrace and bcc complementary (there's a [picture of this]).
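To illustrate the argparse point: a bcc-style Python front end can do its own argument processing before generating a trace program. This is only a sketch of the wrapper pattern; the `gen_program` helper and the probe text it emits are hypothetical illustrations, not part of bcc's API:

```python
import argparse

def gen_program(pid: int) -> str:
    # Hypothetical helper: build a per-PID probe as program text.
    # A real bcc tool would hand generated C to the BPF() class; here we
    # only show the argument handling that bpftrace itself lacks.
    return 'kprobe:vfs_read /pid == %d/ { @start[tid] = nsecs; }' % pid

parser = argparse.ArgumentParser(description='trace reads for one PID')
parser.add_argument('pid', type=int, help='process ID to trace')
args = parser.parse_args(['1234'])  # example argv; normally taken from the shell
print(gen_program(args.pid))
```

With argparse doing validation and help text, the tracing logic stays free of shell-wrapper plumbing.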
There's also a number of things that eBPF can do that DTrace can't, but one I want to mention is the ability to **save and retrieve stack traces** as variables. I've already used it with bpftrace to debug file descriptor leaks. Here's a tool that does this (fdleak.bt):
kretprobe:__alloc_fd
{
	$fd = retval;
	@alloc_stack[comm, pid, $fd] = ustack;
}

kprobe:__close_fd
{
	$fd = arg1;
	delete(@alloc_stack[comm, pid, $fd]);
}
The output shows user-level stacks that were allocated but not freed. It works by saving the stack on allocation, then deleting it on free. When bpftrace exits, it prints the leftovers in @alloc_stack: the stacks that weren't freed. I wrote a longer version that includes timestamps, so I can find the longer-term survivors. You can apply this technique to any object allocation, and whip up your own leak detection tools on the fly. With DTrace, this required dumping all stacks to a file and post-processing, which involved much higher overhead. eBPF can do it all in-kernel, and filter out potentially millions of objects that were allocated and freed quickly. There's an even more important use case, which I showed in my [off-wake time] post, prior to bpftrace: saving waker stacks and associating them with sleeper stacks. It's hard to overstate how important this is. It makes off-CPU analysis effective, which is the counterpart to CPU analysis. The combination of CPU and off-CPU analysis means we have a way to attack any performance issue: a performance engineer's dream.

## DTrace to bpftrace Cheatsheet

If you already know DTrace, it should take you ten minutes to learn the equivalents in bpftrace. Here are the key differences as of August 2018:
    DTrace                                    bpftrace
    @ = quantize(value)                       @ = hist(value)
    @ = lquantize(value, min, max, step)      @ = lhist(value, min, max, step)
    self->name = 0  (clear a variable)        delete(@name[tid])
    probeprov, probemod, probename            name
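As a worked example of the cheatsheet, here is a read-latency histogram in both languages: hist() replaces quantize(), and delete(@start[tid]) replaces clearing a self-> variable. The bpftrace line is the one from earlier in this post; the DTrace line is my sketch of the Solaris equivalent, not taken from a shipped script:

```
# DTrace (Solaris): read() syscall latency
dtrace -n 'syscall::read:entry { self->ts = timestamp; }
    syscall::read:return /self->ts/ { @ns = quantize(timestamp - self->ts); self->ts = 0; }'

# bpftrace (Linux): equivalent using kprobes on vfs_read
bpftrace -e 'kprobe:vfs_read { @start[tid] = nsecs; }
    kretprobe:vfs_read /@start[tid]/ { @ns = hist(nsecs - @start[tid]); delete(@start[tid]); }'
```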
This list doesn't cover the extra things that bpftrace can do, like access aggregation elements, do user-level tracing system wide, save stacks as variables, etc. See the [reference guide] for more details. ## Script Comparison I could pick scripts that minimize or maximize the differences, but either would be misleading. Here's a comparison for an average-sized DTraceToolkit script (seeksize.d) that uses common functionality.

#pragma D option quiet

/*
 * Print header
 */
dtrace:::BEGIN
{
	printf("Tracing... Hit Ctrl-C to end.\n");
}

self int last[dev_t];

/*
 * Process io start
 */
io:::start
/self->last[args[0]->b_edev] != 0/
{
	/* calculate seek distance */
	this->last = self->last[args[0]->b_edev];
	this->dist = (int)(args[0]->b_blkno - this->last) > 0 ?
	    args[0]->b_blkno - this->last : this->last - args[0]->b_blkno;

	/* store details */
	@Size[pid, curpsinfo->pr_psargs] = quantize(this->dist);
}

io:::start
{
	/* save last position of disk head */
	self->last[args[0]->b_edev] = args[0]->b_blkno +
	    args[0]->b_bcount / 512;
}

/*
 * Print final report
 */
dtrace:::END
{
	printf("\n%8s  %s\n", "PID", "CMD");
	printa("%8d  %S\n%@d\n", @Size);
}

And the bpftrace version (seeksize.bt):

/*
 * Print header
 */
BEGIN
{
    printf("Tracing... Hit Ctrl-C to end.\n");
}

/*
 * Process io start
 */
tracepoint:block:block_rq_insert
{
    // calculate seek distance
    $last = @last[args->dev];
    $dist = (args->sector - $last) > 0 ?
        args->sector - $last : $last - args->sector;

    // store details
    @size[pid, comm] = hist($dist);

    // save last position of disk head
    @last[args->dev] = args->sector + args->nr_sector;
}

/*
 * Print final report
 */
END
{
    printf("\n@[PID, COMM]:\n");
}
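When porting a script like this, the equivalent Linux tracepoints and their arguments can be discovered with bpftrace's -l option and the kernel's tracepoint format files (output omitted here, since the probe list varies by kernel):

```
# bpftrace -l 'tracepoint:block:*'
# cat /sys/kernel/debug/tracing/events/block/block_rq_insert/format
```

The format file lists each tracepoint argument (such as sector and nr_sector), which map to args->sector and args->nr_sector in bpftrace.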
This example shows that the overall script is similar. The major work in porting these scripts to Linux is not the syntax, but the research and testing needed to find equivalent Linux tracepoints and their arguments. And that major work is irrespective of the tracing language. Now that I've ported it, I find it a little odd. I would have guessed that it answers the question: are the disks seeking? But it's really answering a trickier question: is an application causing the disks to seek? I wrote seeksize.d in 2004, so I have to think back to that time to understand it. Back then I could already tell if disks were seeking by interpreting iostat(1) output: seeing high disk latency but small I/O. What iostat(1) couldn't tell me is whether it was due to competing applications using the disks, or an application itself applying a random workload. That's what I wrote seeksize.d to answer, an answer that directs further tuning: whether you need to separate applications onto different disks, or tune one application's disk workload.

## Why did this take so long?

I've spoken to many engineers and companies about finishing a high-level tracer for Linux. It's been an interesting case study of Linux vs commercial development. In summary, the reasons this took so long were:

### 1. Linux isn't a company

Scott McNealy, CEO of Sun Microsystems when DTrace was developed, was fond of the saying "all the wood behind one arrow". So much so that staff once installed a [giant arrow] through his office window as an April Fools' joke. In Linux, all effort hasn't been behind one tracing arrow, but split among 14 (SystemTap, LTTng, ftrace, perf_events, dtrace4linux, OEL DTrace, ktap, sysdig, Intel PIN, bcc, shark, ply, and bpftrace). If Linux were a company, management could consolidate overlapping projects and focus on one or two tracers. But it isn't.

### 2. Linux won

Linux dropped the ball on its own dynamic tracing implementation (DProbes in 2000), creating an opportunity for Sun to develop its own as a competitive feature. Sun had billions in revenue, and countless staff worked on making DTrace successful (including sales, marketing, educational services, etc.). That circumstance doesn't exist for Linux. Linux has already won, and there's no company offering any real resources to build a competitive tracer, with one exception: Red Hat.

### 3. Most funding went into SystemTap

Ask a RHEL customer about DTrace, and they may say they had it years ago: SystemTap. However, SystemTap never fully merged into the Linux kernel (parts did, like uprobes), so as an out-of-tree project it required maintenance to work at all, and Red Hat only did this for RHEL. Try complaining to Red Hat about this as an Ubuntu user: they'll often say "switch to RHEL". Can you blame them? The problem is, this was the only real money on the table to build a Linux DTrace (especially since Red Hat picked up the big ex-Sun accounts, who wanted DTrace), and it went into SystemTap.

### 4. Good-enough solutions

Another impediment has been good-enough solutions. Some companies have figured out how to get SystemTap (or LTTng or ftrace or perf) to work well enough, and are happy. Would they like eBPF/bcc/bpftrace finished? Sure. But they aren't going to help develop them, as they already have a good-enough solution. Some companies are happy enough using my ftrace perf-tools, or my bcc tools, so my own prior engineering work gives them a reason not to help build something better.

### 5. CTF/BTF

DTrace was built on Solaris, which already had Common Type Format to provide the struct information it needed. Linux had DWARF and debuginfo, which, unlike CTF, are not commonly installed. That has hindered developing a real DTrace-like tracer. Only recently, in the Linux 4.18 release, do we now have a CTF-like technology in Linux: BPF Type Format (BTF).

## Default Install

It's worth mentioning that DTrace was a default install on Solaris. That really aided adoption, as customers didn't have a choice! Now imagine what it would take to make bpftrace a default install on *all* Linux distros. I think it's a long shot, which means that Linux may never have the same experience as DTrace on Solaris. But this is an area where you can help: ask your distro maintainer to include bpftrace (plus bcc and sysstat) by default. These are **crisis tools**, often used to analyze a system suffering an urgent performance issue, and the time to install them ("apt-get update; apt-get install ...") can mean the issue disappears before you've had a chance to see it.

## What about DTrace?

Just because Linux got eBPF doesn't make DTrace a terrible tool overnight. It will still be useful on OSes that have it, and any OS that does is well positioned for a future upgrade to eBPF, which can reuse internals like provider instrumentation. Work has already begun to bring eBPF to BSD, where BPF originated. It may be worth reassuring people who have invested time into learning DTrace that I don't think that time is wasted. The hardest part of using DTrace or bpftrace is knowing what to do with it: following a methodology to approach problems. Once you get into the guts of some complex software for a pressing production issue, whether you type quantize() or hist() is really the least of your problems. Any time you've spent debugging problems with DTrace is going to help with bpftrace as well.

## What about ply?

[ply], by Tobias Waldekranz, is another front end to BPF. I like it. It isn't complete yet, but it has some interesting differences from bpftrace: ply emits instructions directly, whereas bpftrace uses LLVM's IR API (which can be fairly difficult to code in). ply is also C, whereas bpftrace is C++.
# ply -A -c 'kprobe:SyS_read { @start[tid()] = nsecs(); }
    kretprobe:SyS_read /@start[tid()]/ { @ns.quantize(nsecs() - @start[tid()]);
        @start[tid()] = nil; }'
2 probes active
^Cde-activating probes
	[ 512,   1k)	       3 |########                        |
	[  1k,   2k)	       7 |###################             |
	[  2k,   4k)	      12 |################################|
	[  4k,   8k)	       3 |########                        |
	[  8k,  16k)	       2 |#####                           |
	[ 16k,  32k)	       0 |                                |
	[ 32k,  64k)	       0 |                                |
	[ 64k, 128k)	       3 |########                        |
	[128k, 256k)	       1 |###                             |
	[256k, 512k)	       1 |###                             |
	[512k,   1M)	       2 |#####                           |
It's possible that ply will find its users on embedded Linux and minimal environments, whereas bpftrace will be popular for large server Linux. Note that Tobias has been developing many new ply features (a ply 2.0) in his own ply repo, which isn't reflected in the iovisor repo yet.

## Acknowledgements

bpftrace was created by Alastair Robertson, and builds upon eBPF and bcc, which have had many contributors. See the Acknowledgements section in my [DTrace for Linux 2016] post, which lists DTrace and earlier tracing work. In particular, if you're using bpftrace, you're also using a lot of code developed by Alexei Starovoitov (eBPF), Brendan Blanco (libbpf), myself, Sasha Goldshtein (USDT), and others. See the [bcc] repo for the library components we're using. Facebook also has many engineers working on BPF and bcc, who have made it the mature technology it is today. bpftrace itself has a language similar to DTrace, as well as awk and C. This wasn't the first attempt at a high-level language on eBPF: the first was Shark, by Jovi Zangwei, then ply by Tobias Waldekranz, then bpftrace by Alastair Robertson. At some point Richard Henderson was developing eBPF as a backend for SystemTap. Thanks to everyone who worked on these, and on all the other technologies that we use. Finally, thanks to Netflix, which provides a great supportive environment for me to work on and contribute to different technologies, including BPF.

## Conclusion

eBPF changed the landscape of Linux tracing since it began merging into the kernel four years ago. We now have libraries, tools, and a high-level front end built from scratch for eBPF that is nearing completion: bpftrace. bpftrace gives you the power to analyze the system in custom detail, and find performance wins that other tools cannot. It is complementary to bcc: bcc is great for complex tools, and bpftrace is great for ad hoc one-liners. In this post I described bpftrace and how it compares to DTrace.
In my next post, I'll focus more on bpftrace. More reading:

- [bpftrace] on github
- [one-liners tutorial]
- [reference guide]
- [bcc] on github

Alastair's first talk on bpftrace is at the [Tracing Summit] in Edinburgh, Oct 25th. To get started with bpftrace, you need a newer Linux kernel version (4.9 or newer is better); follow the INSTALL.md in the repo, e.g., the [Ubuntu bpftrace install instructions], which will improve once we have packages. (If you work at Netflix, there's a nflx-bpftrace package for the Bionic BaseAMI, which will become a default install.) What cool thing will you build with bpftrace?

[Ubuntu bpftrace install instructions]: https://github.com/iovisor/bpftrace/blob/master/INSTALL.md#ubuntu
[Tracing Summit]: https://tracingsummit.org/wiki/TracingSummit2018#Schedule
[picture of this]: https://www.slideshare.net/brendangregg/from-dtrace-to-linux/34
[giant arrow]: https://www.slideshare.net/brendangregg/from-dtrace-to-linux/34
[reference guide]: https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md
[one-liners tutorial]: https://github.com/iovisor/bpftrace/blob/master/docs/tutorial_one_liners.md
[issue list]: https://github.com/iovisor/bpftrace/issues
[tools]: https://github.com/iovisor/bcc#tools
[FreeBSD DTrace Tutorial]: https://wiki.freebsd.org/DTrace/Tutorial
[DTrace for Linux 2016]: /blog/2016-10-27/dtrace-for-linux-2016.html
[bcc]: https://github.com/iovisor/bcc
[off-wake time]: /blog/2016-02-01/linux-wakeup-offwake-profiling.html
[DTrace one-liners]: http://www.brendangregg.com/dtrace.html#OneLiners
[bpftrace]: https://github.com/iovisor/bpftrace
[FUSE performance]: https://ossna18.sched.com/event/FAPa/when-ebpf-meets-fuse-improving-performance-of-user-file-systems-ashish-bijlani-georgia-tech
[infrared decoding]: https://lwn.net/Articles/759188/
[microkernel]: https://lwn.net/Articles/755919/
[iptables]: https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables/
[eXpress Data Path]: https://lwn.net/Kernel/Index/#Networking-eXpress_Data_Path_XDP
[intrusion detection]: https://www.slideshare.net/AlexMaestretti/security-monitoring-with-ebpf
[hyperupcalls]: https://www.usenix.org/conference/atc18/presentation/amit
[container security and networking]: https://github.com/cilium/cilium
[ply]: https://github.com/iovisor/ply
[eBPF]: /ebpf.html
[prior commits]: https://github.com/iovisor/bpftrace/commits/master

Poll: 48% of white evangelicals would support Kavanaugh even if the allegations against him were true


A Marist poll illustrates how white evangelical culture minimizes the seriousness of assault.

Forty-eight percent of white evangelicals say that embattled Supreme Court nominee Brett Kavanaugh should be confirmed even if the allegations of sexual assault against him are true.

Marist asked 1,000 respondents from September 22 to 24 whether they would support Kavanaugh if the allegations by Christine Blasey Ford, who says Kavanaugh assaulted her at a high school party decades ago, were found to be true.

Support for Kavanaugh’s confirmation falls along partisan lines, with 12 percent of Democrats and 54 percent of Republicans saying yes. Kavanaugh has consistently denied Ford’s allegations, as well as those of at least three other accusers, whose more recent allegations came out too late to be included in the poll.

Aside from the 48 percent who said they would support Kavanaugh’s appointment to the court, 36 percent of white evangelicals say they would not support it, and 16 percent did not have an answer. Mirroring the poll results, many prominent white evangelicals have spoken out in Kavanaugh’s defense, characterizing the allegations against him as part of a liberal plot to waylay his nomination. Jerry Falwell Jr., president of the evangelical Liberty University, sent 300 women students from Liberty to Washington, DC, to support Kavanaugh during this week’s Senate confirmation hearings.

Likewise, prominent evangelical Franklin Graham has expressed support for Kavanaugh, minimizing the nature of the alleged assault and characterizing it as merely a clumsy come-on, telling the Christian Broadcasting Network, “She said no and he respected it and walked away.”

Graham’s version directly contradicts Ford’s account of the incident. In any case, Graham told CBN, “It’s just a shame that a person like Judge Kavanaugh, who has a stellar record — that somebody can bring something up that he did as a teenager close to 40 years ago.”

Most of the public evangelical response to the allegations against Kavanaugh rests on the idea that Kavanaugh is innocent, and that Ford’s allegations are either a fabrication, an exaggeration, or a case of mistaken identity.

Yet the Marist poll results suggest that, for many white evangelicals, another factor is at play in their support for Kavanaugh: They don’t think what he did is bad enough to be disqualifying.

Or, as one unidentified woman, a Trump and Kavanaugh supporter from Montana, put it, according to MSNBC, “It’s no big deal.” While the woman did not reveal her own religious affiliation, her words are nevertheless indicative of many conservatives’, particularly white evangelicals’, stance on the allegations against Kavanaugh.

Such a perspective fits neatly within the context of evangelical sexual culture, which in recent months has been characterized by a wider suspicion of the #MeToo movement. Within evangelical culture, as I’ve written previously, the ideas that women are “supposed” to be the gatekeepers of male sexuality, that male sexual urges are inherently uncontrollable, and that forgiveness is automatically “owed” to any alleged abuser converge to create a climate in which allegations of sexual harassment and abuse tend to be seen as minor or, at least, forgivable.

Certainly, the evangelical community is already redeeming its own people accused of sexual misconduct during the #MeToo movement. Earlier this month, former Southern Baptist Convention president Paige Patterson — who left his position as president of the Southwestern Baptist Seminary in disgrace after accusations of sexism — returned to public ministry with a pair of sermons that denigrated the #MeToo movement and focused on the problem of false rape allegations.

Today, as Christine Blasey Ford testifies in front of the Senate, GOP senators will be trying to argue that her assault never happened. But what might be even scarier is the idea that, if it did, plenty of Americans wouldn’t even care.

17 days ago
GOP Delenda Est.
17 days ago
I'm not a fan of Kavanaugh in general (he's no Gorsuch, that's for sure), but I do follow blogs from both sides of this debate, so I suppose I can put some of this in context. It comes down to a combination of "It was a different time" and "Kids do stupid things and we need to make allowances for them to live their lives once they wise up." The latter is actually codified into our legal system with the expunging of minors' records when they turn 18. People's brains don't even really settle into a regular pattern until they're about 25 years old. As for the different-time thing: the early '80s were the tail end of the sexual revolution, which greatly lowered people's prohibitions against sexual behavior in general due to birth control being widely available. This was before the AIDS/HIV scare greatly attenuated the sexual revolution, and WAY before our current prudish neo-Victorian mores took hold. The combination of him being between 17 and 19 during the time of all of these allegations, and under 18 for the most serious one, allows for a great deal of forgiveness in the minds of people who like him for his other stances. As for me, I really don't care one bit. He was at the bottom of the list as far as possible choices. He's probably the one that is the most in line with Democrat legal positions out of all of the possible choices on the list.
12 days ago
I find the justifications you describe morally repugnant and evidence of the complete ethical bankruptcy of the GOP.
12 days ago
I agree, and extend that to the Democrat party as well for their covering up of Bill Clinton's multiple rapes. (I don't include Lewinski in that.)
11 days ago
And it turns out that's my line. Have a good life.

Informal Terms

1 Comment
17 days ago
I *absolutely* answer to "hey fucker".

Roughtime: Securing Time with Digital Signatures

1 Comment

When you visit a secure website, it offers you a TLS certificate that asserts its identity. Every certificate has an expiration date, and when it’s passed due, it is no longer valid. The idea is almost as old as the web itself: limiting the lifetime of certificates is meant to reduce the risk in case a TLS server’s secret key is compromised.

Certificates aren’t the only cryptographic artifacts that expire. When you visit a site protected by Cloudflare, we also tell you whether its certificate has been revoked (see our blog post on OCSP stapling) — for example, due to the secret key being compromised — and this value (a so-called OCSP staple) has an expiration date, too.

Thus, to determine if a certificate is valid and hasn’t been revoked, your system needs to know the current time. Indeed, time is crucial for the security of TLS and myriad other protocols. To help keep clocks in sync, we are announcing a free, high-availability, and low-latency authenticated time service called Roughtime, available at roughtime.cloudflare.com on port 2002.

Time is tricky

It may surprise you to learn that, in practice, clients’ clocks are heavily skewed. A recent study of Chrome users showed that a significant fraction of reported TLS-certificate errors are caused by client-clock skew. During the period in which error reports were collected, 6.7% of client-reported times were behind by more than 24 hours. (0.05% were ahead by more than 24 hours.) This skew was a causal factor for at least 33.5% of the sampled reports from Windows users, 8.71% from Mac OS, 8.46% from Android, and 1.72% from Chrome OS. These errors are usually presented to users as warnings that the user can click through to get to where they’re going. However, showing too many warnings makes users grow accustomed to clicking through them; this is risky, since these warnings are meant to keep users away from malicious websites.

Clock skew also holds us back from improving the security of certificates themselves. We’d like to issue certificates with shorter lifetimes because the less time the certificate is valid, the lower the risk of the secret key being exposed. (This is why Let’s Encrypt issues certificates valid for just 90 days by default.) But the long tail of skewed clocks limits the effective lifetime of certificates; shortening the lifetime too much would only lead to more warnings.

Endpoints on the Internet often synchronize their clocks using a protocol like the Network Time Protocol (NTP). NTP aims for precise synchronization, and even accounts for network latency. However, it is usually deployed without security features, as the added overhead on high-load servers degrades precision significantly. As a result, a man-in-the-middle attacker between the client and server can easily influence the client’s clock. By moving the client back in time, the attacker can force it to accept expired (and possibly compromised) certificates; by moving forward in time, it can force the client to accept a certificate that is not yet valid.

Fortunately, for settings in which both security and precision are paramount, workable solutions are on the horizon. But for many applications, precise network time isn’t essential; it suffices to be accurate, say, within 10 seconds of real time. This observation is the primary motivation of Google’s Roughtime protocol, a simple protocol by which clients can synchronize their clocks with one or more authenticated servers. Roughtime lacks the precision of NTP, but aims to be accurate enough for cryptographic applications, and since the responses are authenticated, man-in-the-middle attacks aren’t possible.

The protocol is designed to be simple and flexible. A client can get Roughtime from just one server it trusts, or it may contact many servers to make its calculation more robust. But its most distinctive feature is that it adds accountability to time servers. If a server misbehaves by providing the wrong time, then the protocol allows clients to produce publicly verifiable, cryptographic proof of this misbehavior. Making servers auditable in this manner makes them accountable to provide accurate time.

We are deploying a Roughtime service for two reasons.

First, the clock we use for this service is the same as the clock we use to determine whether our customers’ certificates are valid and haven’t been revoked; as a result, exposing this service makes us accountable for the validity of TLS artifacts we serve to clients on behalf of our customers.

Second, Roughtime is a great idea whose time has come. But it is only useful if several independent organizations participate; the more Roughtime servers there are, the more robust the ecosystem becomes. Our hope is that putting our weight behind it will help the Roughtime ecosystem grow.

The Roughtime protocol

At its most basic level, Roughtime is a one-round protocol in which the client requests the current time and the server sends a signed response. The response consists of a timestamp (the number of microseconds since the Unix epoch) and a radius (in microseconds) used to indicate the server’s certainty about the reported time. For example, a radius of 1,000,000μs means the server is reasonably sure that the true time is within one second of the reported time.

The server proves freshness of its response as follows. The request consists of a short, random string commonly called a nonce (pronounced /nän(t)s/, or sometimes /ˈen wən(t)s/). The server incorporates the nonce into its signed response so that it’s needed to verify the signature. If the nonce is sufficiently long (say, 16 bytes), then the number of possible nonces is so large that it’s extremely unlikely the server has encountered (or will ever encounter) a request with the same nonce. Thus, a valid signature serves as cryptographic proof that the response is fresh.
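In code, the freshness argument looks roughly like this. This is a toy sketch: an HMAC stands in for the server's Ed25519 signature, and the message layout is a simplification of the real tagged wire format.

```python
import hashlib
import hmac
import os
import time

# Toy stand-in for the server's signing key (real Roughtime uses Ed25519).
SERVER_KEY = os.urandom(32)

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha512).digest()

def server_respond(nonce: bytes):
    """Sign the reported time together with the client's nonce."""
    midpoint = int(time.time() * 1_000_000)  # microseconds since the epoch
    radius = 1_000_000                       # +/- 1 second of uncertainty
    msg = nonce + midpoint.to_bytes(8, "little") + radius.to_bytes(4, "little")
    return midpoint, radius, sign(SERVER_KEY, msg)

def client_verify(nonce, midpoint, radius, sig) -> bool:
    """A valid signature over *our* nonce proves the response is fresh."""
    msg = nonce + midpoint.to_bytes(8, "little") + radius.to_bytes(4, "little")
    return hmac.compare_digest(sig, sign(SERVER_KEY, msg))

nonce = os.urandom(16)  # fresh 16-byte nonce; a repeat is vanishingly unlikely
midpoint, radius, sig = server_respond(nonce)
assert client_verify(nonce, midpoint, radius, sig)
```

A replayed response fails verification because its signature covers somebody else's nonce, not ours.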

The client uses the server’s root public key to verify the signature. (The key is obtained out-of-band; you can get our key here.) When the server starts, it generates an online public/secret key pair; the root secret key is used to create a delegation for the online public key, and the online secret key is used to sign the response. The delegation serves the same function as a traditional X.509 certificate on the web: as illustrated in the figure below, the client first uses the root public key to verify the delegation, then uses the online public key to verify the response. This allows for operational separation of the delegator and the server and limits exposure of the root secret key.

Figure: Simplified Roughtime (without delegation)

Figure: Roughtime with delegation
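The two-step verification can be sketched as follows, again with a toy HMAC in place of Ed25519 (which also means the sketch glosses over the public/secret key split that makes the real scheme safe to deploy):

```python
import hashlib
import hmac
import os

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha512).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, sign(key, msg))

root_key = os.urandom(32)   # clients obtain the (public) root key out-of-band

# On startup: generate an online key and certify it with the root key.
online_key = os.urandom(32)
delegation = sign(root_key, b"DELE" + online_key)

# Per request: sign the response with the online key only, so the root
# secret key never needs to live on the serving machines.
response = b"midpoint-radius-and-nonce"
response_sig = sign(online_key, b"SIG" + response)

# Client side: root key -> delegation -> online key -> response.
assert verify(root_key, b"DELE" + online_key, delegation)
assert verify(online_key, b"SIG" + response, response_sig)
```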

Roughtime offers two features designed to make it scalable. First, when the volume of requests is high, the server may batch-sign a number of clients’ requests by constructing a Merkle tree from the nonces. The server signs the root of the tree and sends in its response the information needed to prove to the client that its request is in the tree. (The data structure is a binary tree, so the amount of information is proportional to the base-2 logarithm of the number of requests in the batch; see the figure below.) Second, the protocol is executed over UDP. To prevent the Roughtime server from becoming an amplifier for DDoS attacks, the request is padded to 1KB; if the UDP packet is too short, it’s dropped without further processing. Check out this blog post for a more in-depth discussion.

Figure: Roughtime with batching
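A sketch of the batching scheme: the tree-building details below (SHA-512 with domain-separated leaf and node hashes, duplication-padding for odd levels) are illustrative rather than an exact transcription of the wire encoding.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha512(data).digest()

def leaf(nonce: bytes) -> bytes:
    return h(b"\x00" + nonce)            # domain-separated leaf hash

def node(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)     # domain-separated interior hash

def build_tree(nonces):
    """All levels of the tree, leaves first; odd levels pad by duplication."""
    levels = [[leaf(n) for n in nonces]]
    while len(levels[-1]) > 1:
        cur = list(levels[-1])
        if len(cur) % 2:
            cur.append(cur[-1])
        levels.append([node(cur[i], cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels

def audit_path(levels, index):
    """Sibling hashes sent to one client: log2(batch size) of them."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        path.append(level[sib] if sib < len(level) else level[index])
        index //= 2
    return path

def check(nonce, index, path, root):
    """Client recomputes the signed root from its own nonce and the path."""
    cur = leaf(nonce)
    for sib in path:
        cur = node(cur, sib) if index % 2 == 0 else node(sib, cur)
        index //= 2
    return cur == root

nonces = [bytes([i]) * 16 for i in range(5)]   # five batched requests
levels = build_tree(nonces)
root = levels[-1][0]                           # only this root gets signed
assert all(check(n, i, audit_path(levels, i), root)
           for i, n in enumerate(nonces))
```

One signature over the root covers the whole batch, while each client receives only the handful of sibling hashes it needs.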

Using Roughtime

The protocol is flexible enough to support a variety of use cases. A web browser could use a Roughtime server to proactively synchronize its clock when validating TLS certificates. It could also be used retroactively to avoid showing the user too many warnings: when a certificate validation error occurs — in particular, when the browser believes it’s expired or not yet valid — Roughtime could be used to determine if the clock skew was the root cause. Instead of telling the user the certificate is invalid, it could tell the user that their clock is incorrect.

Using just one server is sufficient if that server is trustworthy, but a security-conscious user could make requests to many servers; the delta might be computed by eliminating outliers and averaging the responses, or by some more sophisticated method. This makes the calculation robust to one or more of the servers misbehaving.

Making servers accountable

The real power of Roughtime is that it’s auditable. Consider the following mode of operation. The client has a list of servers it will query in a particular order. The client generates a random string — called a blind in the parlance of Roughtime — hashes it, and uses the output as the nonce for its request to the server. For subsequent requests, it computes the nonce as follows: generate a blind, compute the hash of this string and the response from the previous server (including the timestamp and signature), and use this hash as the nonce for the next request.

Roughtime: Securing Time with Digital Signatures
Chaining multiple Roughtime servers
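The chaining logic can be sketched like this. The `query` function below is a stub standing in for the real UDP exchange (a real response would carry a timestamp and a signature covering the nonce), and the audit step checks the stub rather than verifying real signatures:

```python
import hashlib
import os

def chained_nonce(blind: bytes, prev_response: bytes = b"") -> bytes:
    """First nonce hashes only the blind; each later nonce also binds in
    the previous server's full signed response."""
    return hashlib.sha512(blind + prev_response).digest()

def query(server: str, nonce: bytes) -> bytes:
    # Stub for the network round trip; it just echoes the nonce so the
    # audit below has something to check.
    return b"response-from-" + server.encode() + b":" + nonce

transcript = []                 # (blind, response) pairs: the audit proof
prev_response = b""
for server in ["ServerA", "ServerB", "Cloudflare", "ServerC"]:
    blind = os.urandom(32)
    nonce = chained_nonce(blind, prev_response)
    response = query(server, nonce)
    transcript.append((blind, response))
    prev_response = response

def audit(transcript) -> bool:
    """Anyone holding the blinds and responses can recheck the chain."""
    prev = b""
    for blind, response in transcript:
        if chained_nonce(blind, prev) not in response:   # stub check; a real
            return False        # audit verifies each server's signature too
        prev = response
    return True

assert audit(transcript)
```

Because each nonce commits to the previous response, nobody can reorder or retroactively alter the transcript without breaking the chain.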

Creating a chain of timestamps in this way binds each response to the one that precedes it. Thus, the sequence of blinds and signatures constitutes a publicly verifiable, cryptographic proof that the timestamps were requested in order (a “clockchain” if you will 😉). If the servers are roughly synchronized, then we expect the sequence to increase monotonically, at least roughly. If one of the servers were consistently behind or ahead of the others, this would be evident in the sequence. Suppose you get the following sequence of timestamps, each from a different server:

Server                  Timestamp                        Delta
ServerA-Roughtime       2018-08-29 14:51:50 -0700 PDT
ServerB-Roughtime       2018-08-29 14:51:51 -0700 PDT    +0:00:01
Cloudflare-Roughtime    2018-08-29 12:51:52 -0700 PDT    -1:59:59
ServerC-Roughtime       2018-08-29 14:51:53 -0700 PDT    +2:00:01

Servers B and C corroborate the time given by server A, but — oh no! Cloudflare is two hours behind! Unless servers A, B, and C are in cahoots, it’s likely that the time offered by Cloudflare is incorrect. Moreover, you have verifiable, cryptographic proof. In this way, the Roughtime protocol makes our server (and all Roughtime servers) accountable to provide accurate time, or, at least, to be in sync with the others.
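A client or auditor holding such a set of responses might flag the outlier with something as simple as the following. The epoch-second timestamps are illustrative values matching the table above, and the 10-second tolerance reflects the "rough" accuracy budget mentioned earlier:

```python
from statistics import median

# Unix timestamps (seconds) corresponding to the table above.
reported = {
    "ServerA-Roughtime":    1535579510,
    "ServerB-Roughtime":    1535579511,
    "Cloudflare-Roughtime": 1535572312,   # two hours behind the others
    "ServerC-Roughtime":    1535579513,
}

TOLERANCE = 10  # seconds: rough time need only be accurate, not precise

mid = median(reported.values())
outliers = [name for name, t in reported.items() if abs(t - mid) > TOLERANCE]
# outliers == ["Cloudflare-Roughtime"]
```

Using the median rather than the mean keeps one badly skewed server from dragging the reference point with it.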

The Roughtime ecosystem

The infrastructure for monitoring and auditing the Roughtime ecosystem hasn’t been built yet. Right now there’s only a handful of servers: in addition to Cloudflare’s and Google’s, there’s also a really nice Rust implementation. The more diversity there is, the healthier the ecosystem becomes. We hope to see more organizations adopt this protocol.

Cloudflare’s Roughtime service

For the initial deployment of this service, our primary goals are to ensure high availability and minimal maintenance overhead. Each machine at each Cloudflare location executes an instance of the service and responds to queries using its system clock. The server signs each request individually rather than batch-signing them as described above; we rely on our load balancer to ensure no machine is overwhelmed. There are three ways in which we envision this service could be used:

  1. TLS authentication. When a TLS application (a web browser for example) starts, it could make a request to roughtime.cloudflare.com and compute the difference between the reported time and its system time. Whenever it authenticates a TLS server, it would add this difference to the system time to get the current time.
  2. Roughtime daemon. One could implement an OS daemon that periodically requests the current time. If the reported time differs from the system time by more than a second, it might issue an alert.
  3. Server auditing. As the Roughtime ecosystem grows, it will be important to ensure that all of the servers are in sync. Individuals or organizations may take it upon themselves to monitor the ecosystem and ensure that the servers are in sync with one another.
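The first two of these reduce to a small offset computation, sketched below; production code would also want to account for the server's reported radius and the network round trip:

```python
import time

def measure_offset(roughtime_midpoint_us: int) -> float:
    """Seconds to add to the system clock to match Roughtime's clock."""
    return roughtime_midpoint_us / 1_000_000 - time.time()

def corrected_now(offset: float) -> float:
    """Use case 1: apply the offset whenever validating TLS certificates."""
    return time.time() + offset

def clock_ok(offset: float, threshold: float = 1.0) -> bool:
    """Use case 2: a daemon alerts when drift exceeds the threshold."""
    return abs(offset) <= threshold
```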

The service is reachable wherever you are via our anycast network. This is important for a service like Roughtime, because minimizing network latency helps improve accuracy. For information about how to configure a client to use Cloudflare-Roughtime, check out the developer documentation. Note that our initial release is somewhat experimental. As such, our root public key may change in the future. See the developer docs for information on obtaining the current public key.



19 days ago
"clockchain". Groan.

Do my Homework

1 Comment and 2 Shares

So, anent nothing in particular, I was contemplating another of James Nicoll's essays on Tor.com the other day—this one concerning utopias in SF—and found myself trying to stare into my own cognitive blind spot.

Like all fiction genres, SF is prone to fashion trends. For example, since the late 1970s, psi powers as a trope have gone into steep decline (I'd attribute this to the death and subsequent waning influence of editor John W. Campbell, who in addition to being a bigoted right-winger was into any number of bizarre fringe beliefs). "Population time bomb"/overpopulation stories have also gone into decline, perhaps due to the gradual realization that thanks to the green revolution and demographic transition we aren't doomed as a direct consequence of overpopulation—climate change and collapsing agriculture are another matter, but we're already far past the point at which a collapse into cannibalism and barbarism was so gloatingly depicted in much 1960s and 1970s SF. And so are stories about our totalitarian Stalinist/Soviet overlords and their final triumph over the decadent free western world. These are all, if you like, examples of formerly-popular tropes which succumbed to, respectively, critiques of their scientific plausibility (psi powers), the intersection of unforeseen scientific breakthroughs with the reversal of an existing trend to mitigate a damaging outcome (food production revolution/population growth tapering off), and the inexorable historical dialectic (snark intentional).

Oddly enough, tales of what the world will be like in the tantalizingly close future year 2000 AD are also thin on the ground these days. As are tales of the first man on the moon (it's always a man in those stories, although nobody in the 1950s thought to call the hero of a two-fisted space engineering story "Armstrong"), the big East/West Third World War (but hold the front page!), and a bunch of other obsolescent futures that were contingent on milestones we've already driven past.

Some other technological marvels predicted in earlier SF have dropped out of fiction except as background scenery, for they're now the stuff of corporate press releases and funding rounds. Reusable space launchers? Check. (Elon Musk really, really wants to be the Man who Sold the Moon.) Space elevators/tether systems? Nobody would bother writing a novel like "The Fountains of Paradise" these days, they're too plonkingly obvious. It'd be like writing a novel about ITER, as opposed to a novel where ITER is the setting. Pocket supercomputer/videophone gadgets in every teenager's pocket? No, that's just too whacky: nobody would believe it! And so on. (Add sarcasm tags to taste.)

We are living through the golden age of grimdark dystopian futures, especially in Young Adult literature (and lest we forget, there's much truth to the old saying that "the golden age of SF is 12", even for those of us who write and read more adult themes). There's also a burgeoning wave of CliFi, fiction set in the aftermath of global climate change. We're now seeing Afrofuturism and other cultures taken into the mainstream of commercial SF, rather than being marginalized and systematically excluded: diversity is on the rise (and the grumpy white men don't like it).

Which leads me to my question: what are the blind spots in current SF? The topics that nobody is writing about but that folks should be writing about? (Keep reading below the cut before you think about replying!)

I can immediately think of four blind spots, right now (and this is without engaging my brain and trying to work out what topics I have, as a pale-skinned male of privilege, been trained to studiously ignore):

  1. In the 1950-1999 period, tales of the 21st century were everywhere. Where are the equivalent stories of the 22nd century, that should be being told today? (There are a few, but they are if anything prominent because of their scarcity.)

  2. The social systems based on late-stage currently-existing capitalism are hideously broken, but almost all the SF I see takes some variation on the current system as a given: in the future, apparently people will have these things called "jobs" whereby an "employer" (typically a Very Slow AI controlled by a privileged caste of "executives") acquires an exclusive right to their labour in return for vouchers which may be exchanged for food, clothing, and shinies (these vouchers are apparently called "money"). Seriously folks, can't we imagine something better?

  3. What does a world look like in which the (very approximately) 2,500-10,000 year old reign of the patriarchy has been broken for good? The commodification of women and children that followed the development of settled agricultural societies with ruling/warrior castes to police and enforce laws casts a very long shadow, even in societies that notionally endorse gender equality in law. (Consider, for example, that a restricted diet stunts growth, and that average adult stature tracks food availability by a generation or three, and ask why men are, on average, taller than women; or why rape culture exists and where it came from: or where the impetus for #MeToo is coming from ...) Even if the arc of history indeed does bend towards justice, we're still a long way from finding it (whether it be for racism, sexism, or any other entrenched, long-standing historic injustice). Which in turn leads me to ...

  4. Blind justice: "the law in its majesty forbids the millionaire and the pauper alike from sleeping under bridges". Stable societies need norms of behaviour and some way of ensuring that most people comply with them, but our current approach to legal codes is broken. One size does not fit all (if the pauper and the millionaire both face a $50 fine for the same offense, then the law is a hideously onerous burden on one of them and trivially ignored by the other—yes, I know there are jurisdictions where fines are proportional to income, but they're the exception rather than the rule and they rely on the concept of a fine as punishment). Nor is it clear that punishment by incarceration or state violence achieves anything productive, or that our judicial systems produce anything that can reasonably be termed justice (in strict Rawlsian terms). What does a future social contract look like? Hell, what does a future legal system look like? Malka Older ("Infomocracy") and Ada Palmer ("Too Like the Lightning") have been ploughing that field, with a side-order of trying to conceptualize what a new age of enlightenment might look like, but again: being able to name them just highlights how few authors are exploring these vital issues in SF. Indeed, law enforcement is a huge blind spot for many Americans, as witness this think-piece in The Atlantic (How Mars Will be Policed) which seems to assume that the current American quasi-military police caste is a universal constant.

So: four themes (the world as it might be an entire human lifetime hence; what could replace the ideology of industrial-era capitalism; how a world without entrenched hierarchies of race, privilege, and gender might look; and what the future of law, justice, and society might be) are going under-represented in SF.

And here is my subsequent question: what big themes am I (and everyone else) ignoring?

Do my homework, please. Comment thread provided below for your mutual entertainment.

19 days ago
Good questions.