artsyspacehoe:

sombra-avocados-es-dulces:

tiredandkindagay:

This is every single wlw on this fucking website, dont even deny it

I feel so attacked right now

I love this because they’re wives with children irl

thebibliosphere:

jambonsama:

writing-prompt-s:

Werewolves and vampires are still around, long after most humans have been enslaved or eliminated by the AI uprising. Now it’s time to give those robots something they weren’t calculating for.

@thebibliosphere 

HA HA HA HA HA

Time for the nerdy math vampires to shine.

Vampire: It took me 100 years but I managed to figure out how their coding kept changing. Once you upload this into the mainframe it should reduce their functional capacity and enable you to free the humans from the compound.

Werewolf: Reduce them to what?

Vampire: A toaster on legs.

Werewolf: Nice. What about the humans, how do we help them after this?

Vampire: Pft, please give them a few decades to reproduce and they’ll be fine. Bubonic plague didn’t wipe those motherfuckers out. Humans are the evolutionary equivalent of toddlers: they bounce.



How We Turn Authorization Logic into SQL

Authorization questions can take many different forms in your application. The most obvious one, given a user and a resource, is to ask something like "Can this user access this resource?" A slightly more sophisticated but still common case is to ask "What are all the resources this user can access?"

Letting users answer the second type of question is the goal of a project we call data filtering. Briefly, data filtering turns authorization logic from our declarative policy language, Polar, into SQL to enable efficient and secure data access. In this post I'll describe how we built data filtering for the first, second, and third times, and talk about the functionality we hope to make available to all of our users over the coming releases.

Oso and Polar

To understand exactly how we're approaching the problem, it helps to know a little bit about how Oso works under the hood. Oso is a cross-platform authorization framework based on a declarative policy language that we built, called Polar. More specifically, Polar is an interpreted logic programming language, and it runs inside a library loaded by your application. To make a specific authorization decision, the Polar interpreter evaluates your authorization policy (a Polar program) for a given "subject verb object" triple (we usually give them the names actor, action, and resource).

Polar is a logic programming language based on Prolog. Logic programming is an underutilized programming style where programs are expressed as logical formulas and executed by trying to verify them for a given set of values for their variables. A feature of logic programming that sets it apart from other styles is that in many cases, not every variable needs to have an associated value in order for a program that refers to it to continue executing.

This means that an expression like user(Max) has two possible interpretations: either we know who Max is, so we should be able to say whether or not they're a user; or we don't, in which case we treat this as an assertion: Max is a user until proven otherwise.

🐻‍❄️ vs 🐍

The same idea extends to rules with more arguments: as an example, let’s compare the Polar rule

add(a, b, c) if a + b = c;

with the Python function

def add(a, b):
    return a + b

The most obvious change between the two versions is their arity: the Python function takes two arguments while the Polar rule takes three. The second most obvious difference is the absence of anything like a return statement in the Polar rule. This would be the norm in an expression-oriented language like Ruby or Rust, where the return would be implicit, but Polar rules don't return values at all; they only make assertions about their arguments. Nevertheless, both definitions can be used to find the sum of two numbers. The Python function returns the sum as a value, while the Polar rule assigns the sum to its third argument:

# Python
sum = add(2, 2)
# Polar
add(2, 2, sum)

But the power of logic programming makes the Polar rule more versatile. We can use it to obtain a representation of the difference between two numbers instead, by calling it like this:

add(10, difference, 20)

In some languages, this would assign 10 to difference. In Polar, it instead assigns a placeholder value called a "partial" that we'll talk about in more detail below. But both representations, in theory, carry the same information.

To summarize, we might try to describe the difference between Polar and Python by saying that the Polar rule defines a relationship between values while the Python function computes a value from values.
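
As a purely illustrative analogy (this is ordinary Python, not Oso code), here is what treating add as a relationship rather than a function might look like: any one argument can be left unknown, and the "rule" solves for it.

# Illustrative analogy only: add(a, b, c) as the relationship "a + b = c".
# Leave any one argument as None and the relationship determines it.
def add_rel(a=None, b=None, c=None):
    if c is None:
        return (a, b, a + b)        # like the Python add(): compute the sum
    if b is None:
        return (a, c - a, c)        # add(10, difference, 20) -> difference is 10
    if a is None:
        return (c - b, b, c)
    assert a + b == c               # everything known: just check the assertion
    return (a, b, c)

print(add_rel(2, 2))          # (2, 2, 4)
print(add_rel(10, None, 20))  # (10, 10, 20)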

What does this have to do with data filtering?

The add example above isn’t super interesting, because we already know how to do subtraction. But the same principle works outside of just arithmetic. To see how it could apply in the context of authorization, consider this simple allow rule:

allow(actor, "edit", resource) if actor = resource.owner;

The most basic application for this rule is to check whether a given actor can edit a specific resource. But if, as above, we were to call the rule without a known resource as its third argument, we might expect to receive back the set of possible values for resource that satisfy the rule. In other words, fully supporting the logic programming abstraction here would let one rule answer both of the questions we referred to in the intro: Can this actor edit this resource? And, what resources can this actor edit? Letting developers use their existing Oso policies to answer the second type of question is what we call "data filtering."
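
To make the two questions concrete, here is a minimal sketch using the oso Python library from around the v0.20 releases (exact behavior can differ between versions). User, Post, and the objects below are stand-ins for an application's own types, and the commented-out authorized_resources call is the data filtering entry point discussed later in the post, which needs extra configuration before it works.

from dataclasses import dataclass
from oso import Oso

@dataclass
class User:
    name: str

@dataclass
class Post:
    owner: User

oso = Oso()
oso.register_class(User)
oso.register_class(Post)
oso.load_str('allow(actor, "edit", resource) if actor = resource.owner;')

gwen = User("gwen")
post = Post(owner=gwen)

# Question 1: can this actor edit this specific resource?
print(oso.is_allowed(gwen, "edit", post))  # True

# Question 2: which resources can this actor edit? (data filtering;
# requires the data filtering configuration described later in the post)
# editable = oso.authorized_resources(gwen, "edit", Post)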

Partial success

Before going into detail about how we’ve been trying to make this possible, let’s take a look at what happens right now, if you try to call the allow rule given above in a Polar REPL using the Python library. (Anyone following along at home is encouraged to use the Python REPL, as there's some disparity between host libraries when it comes to handling unbound variables.)

When you enter something like

query> actor = { name: "gwen" } and
       resource = { created_by: actor } and
       allow(actor, "edit", resource)

the Polar shell simply displays the variable/value associations

actor = { name: 'gwen' }
resource = { created_by: { name: 'gwen' } }

These are just the values we passed in. Neither one has been "computed" by the rule here: the rule has simply verified that all the required conditions hold over them. However if we leave resource undefined, the output becomes more complicated:

query> actor = { name: "gwen" } and
       allow(actor, "edit", resource)

actor = {'name': 'gwen'}
resource = Expression(And, [Expression(Unify, [{'name': 'gwen'}, Expression(Dot, [Variable('_this'), 'created_by'])])])

The binding for resource above is no longer a concrete or “ground” value. Instead, the Expression type represents a predicate that must hold on resource for the rule to be satisfied. We call these provisional assignments “partials”, which as far as I know is a term we made up. But, it makes sense if you consider the variables they refer to as being “partially” bound to a value: whatever it is, whoever it was created_by, their name is "gwen".

Partials are computed as a side effect of regular policy evaluation, so we already have something to start with. The next step is to transform the set of partials into the set of values they describe. SQL is a widely-used tool for describing data sets filtered by conditions, so targeting it is useful for both practical and explanatory purposes: in the case above, our goal would be roughly to turn the condition expressed by the partial _this.created_by = { name: "gwen" } into a query such as SELECT posts.* FROM posts WHERE posts.created_by_id = <gwen's id>.
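
As a toy sketch of that last step (not Oso's actual translation code, and ignoring the SQL injection concerns that real code would handle with bound parameters), the single condition above could be flattened into a query like this:

# Toy translation of the partial condition
#   _this.created_by = { name: "gwen" }
# into SQL, assuming created_by is stored as a posts.created_by_id foreign key.
def partial_to_sql(relation: str, attr: str, value: str) -> str:
    return (
        "SELECT posts.* FROM posts "
        f"WHERE posts.{relation}_id = "
        f"(SELECT users.id FROM users WHERE users.{attr} = '{value}')"
    )

print(partial_to_sql("created_by", "name", "gwen"))
# SELECT posts.* FROM posts WHERE posts.created_by_id =
#   (SELECT users.id FROM users WHERE users.name = 'gwen')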

v0: authorized_session

Our first attempt at this resulted in two Python integration libraries that we still maintain today. Oso’s Django and SQLAlchemy integrations turn partials from Polar into database queries, and use this capability to automatically apply authorization to every model we pull out of the database. This makes for a “hands-free” experience where you can often avoid writing any explicit authorization logic at all in your application, not even calls to Oso API functions; everything is automatically handled by middleware.
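
As a hedged sketch of what the SQLAlchemy integration looks like in practice: authorized_sessionmaker and register_models come from the sqlalchemy-oso package, but the exact keyword arguments have shifted between releases, and Base, Post, and current_user stand in for an application's own declarative base, model, and request helper.

from sqlalchemy import create_engine
from sqlalchemy_oso import authorized_sessionmaker, register_models
from oso import Oso

engine = create_engine("sqlite://")
oso = Oso()
register_models(oso, Base)                 # Base: your declarative base (assumed)
oso.load_files(["authorization.polar"])    # your policy file (assumed)

AuthorizedSession = authorized_sessionmaker(
    bind=engine,
    get_oso=lambda: oso,
    get_user=lambda: current_user,                    # assumed request helper
    get_checked_permissions=lambda: {Post: "read"},   # keyword name varies by version
)

# Every query made through this session has the policy applied automatically.
session = AuthorizedSession()
posts = session.query(Post).all()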

Unfortunately, these libraries have some limitations. First and most obviously, they’re Python-specific, and framework-specific on top of that, which is not our vibe here at Oso. They’re also limited in what kinds of policies they can translate into queries, and tightly coupled to the details of how we represent partials, which makes them difficult to extend. The SQL they produce relies heavily on nested subqueries: for example, here's a selection from our tests:

SELECT posts.id AS posts_id, posts.contents AS posts_contents, posts.title AS posts_title, posts.access_level AS
       posts_access_level, posts.created_by_id AS posts_created_by_id, posts.needs_moderation AS
       posts_needs_moderation
       FROM posts
       WHERE (EXISTS (SELECT 1
       FROM users
       WHERE users.id = posts.created_by_id AND (EXISTS (SELECT 1        FROM posts
       WHERE users.id = posts.created_by_id AND posts.id > ? AND (EXISTS (SELECT 1
       FROM users
       WHERE users.id = posts.created_by_id AND (EXISTS (SELECT 1
       FROM posts
       WHERE users.id = posts.created_by_id AND posts.id > ? AND (EXISTS (SELECT 1
       FROM users
       WHERE users.id = posts.created_by_id AND (EXISTS (SELECT 1
       FROM posts
       WHERE users.id = posts.created_by_id AND posts.id > ?)))))))))))) AND (EXISTS (SELECT 1
       FROM users
       WHERE users.id = posts.created_by_id AND (EXISTS (SELECT 1
       FROM posts
       WHERE users.id = posts.created_by_id)))) AND posts.id > ? AND posts.id =

Finally, while the authorized_session API they offer is very cool and has some compelling use cases, it’s not as general as we would like. Popular ORMs already offer abstractions around database queries that we’d like to plug into. Our ideal, general-purpose API would return a query with authorization already applied, that the user can then refine in whatever way they need to, for example by applying additional conditions, or paginating the results.

v1: authorized_query

Our next attempt would be more ambitious. Oso v0.20.1 saw the initial release of our framework-agnostic data filtering API for the Python, Ruby and Node libraries. This time we'd offer a more flexible, slightly lower-level API that returns an ORM query or a collection of objects to the user. We added a component to the Rust library that translates partial results into what we called a "filter plan", a structure that captures information about dependencies between different data sources and describes a sequence of filters that, when composed, yield the desired query or set of objects. To use data filtering, the user would supply Oso with callbacks to operate on "queries", in their application-specific form. In a Rails app, for example, this would be through ActiveRecord::Relation objects.
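
Assuming data filtering has been configured for a Post model in a Python/SQLAlchemy app, usage of the v1 API looks roughly like this sketch (current_user, Post.created_at, and cutoff are application-side assumptions); because the result is an ordinary SQLAlchemy query, it can be refined and paginated before execution:

# Get a query with authorization already applied, then refine it.
query = oso.authorized_query(current_user, "read", Post)

recent = (
    query.filter(Post.created_at > cutoff)   # extra application conditions
         .order_by(Post.created_at.desc())
         .limit(20)                           # pagination
         .all()
)

# Or skip the query object and fetch the authorized objects directly:
readable = oso.authorized_resources(current_user, "read", Post)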

While this design works in many cases, it also has some drawbacks. For example, it makes no attempt to consolidate partial results into a single database query, instead making separate queries over each table. It also assembles some intermediate results in memory in order to generate subsequent queries. That means that although you can now call authorized_query and get a query object back, there's actually no guarantee Oso is accessing your data store in an efficient way. And not only would Oso issue intermediate queries in order to construct the final query, but the SQL it ended up generating wasn't always ideal either. Take a look at the sequence of database queries issued to handle an API call in our GitClub example application:

[Image: the sequence of database queries issued by v1 to handle the GitClub API call]

Finally, v1 was originally based on a prototype design that didn't quite capture some fairly common usage patterns. To support these cases, we had to cram a lot more information into our filter descriptions, which made data filtering significantly more complicated for users to configure.

v2: don't know much about algebra

Our next iteration tried to solve these problems by choosing a new representation for filters, one that we believed would be more able to express the full range of queries we'd like to support. Fortunately, SQL is already based on a convenient formalism called relational algebra, which can express complicated selections of data by composing a few general primitives. Although we aim to support more than just relational databases, we felt that this way of describing data sets was general enough to work for most of our users, and that we'd be able to extend it fairly easily to work with non-relational data if the need arose.

So we went ahead with a proof-of-concept for the new filter design. The v1 approach required users to analyze a complicated data structure with non-obvious semantics in order to get a useful query out of data filtering. The new approach would essentially send back an abstract syntax tree describing an idealized SQL query, which would turn itself into an ORM query object by recursively calling a user-defined method on each node. User configuration would now consist of providing definitions for these to_query methods for each relational primitive: select, join, etc.
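
The node types themselves lived inside Oso, but the shape of the idea can be sketched with a couple of hypothetical classes: each node of the relational-algebra AST renders itself by recursively calling to_query, and user configuration amounts to supplying a to_query for each primitive. None of these names are Oso's actual types.

from dataclasses import dataclass
from typing import Union

@dataclass
class Condition:
    lhs: str
    op: str
    rhs: str
    def to_query(self) -> str:
        return f"{self.lhs} {self.op} {self.rhs}"

@dataclass
class Join:
    left: str
    right: str
    on: Condition
    def to_query(self) -> str:
        return f"{self.left} JOIN {self.right} ON {self.on.to_query()}"

@dataclass
class Select:
    source: Union[str, Join]
    where: Condition
    def to_query(self) -> str:
        src = self.source if isinstance(self.source, str) else self.source.to_query()
        return f"SELECT * FROM {src} WHERE {self.where.to_query()}"

ast = Select(
    Join("posts", "users", Condition("users.id", "=", "posts.created_by_id")),
    Condition("users.name", "=", "'gwen'"),
)
print(ast.to_query())
# SELECT * FROM posts JOIN users ON users.id = posts.created_by_id WHERE users.name = 'gwen'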

This approach quickly proved its ability to render relatively complicated policies as a single database query:

[Image: a relatively complicated policy rendered as a single database query by the v2 prototype]

Nevertheless, we still had some misgivings. Expressing queries as an AST made them easy to work with programmatically, but hard to inspect and verify. It also became clear that we didn't really need the full power of the "algebraic" representation, and that a simpler, more regular format would have the same degree of expressiveness while also being easier for us to understand. Finally, the plan for how users would configure data filtering worked fine in Ruby, which we used to develop the new version, but would translate awkwardly into languages where concepts like "subclass" and "method" have no clear interpretation.

v3: the present moment

Rather than expressing a query as an arbitrarily nested algebraic expression, we decided to flatten the structure by using a disjunctive normal form representation, and to separate logical conditions (users.id = 99) from relational information (Which tables? How are they joined?) at the top level. The result is a new format that's comprehensible to humans while still being easy for a machine to digest:

[Image: the new filter format, with conditions separated from relational information]

We've also pushed the details of how we translate relations into joins out into the host library, which keeps the core simpler and more platform-independent. The user configuration question is still somewhat open, but it's no longer tied to language-specific concepts like subclasses, and the data it has to handle are much simpler and more regular. We expect the picture to become clearer as we begin porting the new version to more languages.
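
The post doesn't spell out the exact wire format, so the following is a hypothetical sketch of the idea: conditions in disjunctive normal form (an OR of ANDs) kept separate from the relational information, with the host library responsible for turning the relations into joins and the whole thing into one query.

# Hypothetical v3-style filter; field names are illustrative, not Oso's format.
example_filter = {
    "root": "posts",
    "relations": [
        # (from_table, to_table, join condition)
        ("posts", "users", "posts.created_by_id = users.id"),
    ],
    "conditions": [
        # outer list is OR, inner lists are AND
        [("users.name", "=", "'gwen'")],
        [("posts.access_level", "=", "'public'")],
    ],
}

def to_sql(f: dict) -> str:
    joins = " ".join(f"JOIN {to} ON {on}" for _, to, on in f["relations"])
    disjuncts = [
        " AND ".join(f"{lhs} {op} {rhs}" for lhs, op, rhs in conj)
        for conj in f["conditions"]
    ]
    where = " OR ".join(f"({d})" for d in disjuncts)
    return f"SELECT {f['root']}.* FROM {f['root']} {joins} WHERE {where}"

print(to_sql(example_filter))
# SELECT posts.* FROM posts JOIN users ON posts.created_by_id = users.id
#   WHERE (users.name = 'gwen') OR (posts.access_level = 'public')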

v4???

While we're excited about our improved data filtering code and expect it eventually to supplant the original version, we plan to support both APIs initially. The new system has the potential to be much more performant than v1 and to support a wider range of policies, but there are still some limitations on the queries it's able to generate that mean it's not quite a drop-in replacement -- although based on the policies we've seen in the wild, we don't expect most users to encounter any problems.

Assuming the new design proves to be a good foundation for future work, one of our next steps will be to port it to the remaining host libraries. Big nostra culpa here to any Rust/Java/Go users in need of data filtering: sorry for the wait! We want to deliver something solid, and dynamic languages are good for prototyping, so we hope you understand. Thank you ... for bearing with us 🐻

We use GitHub issues to gauge community interest in different platforms and prioritize work accordingly, so in the event you'd really like to use data filtering with Rust, Go, Java, .NET, Haskell, Elixir, Lua, Lisp, Julia, OCaml, APL, or whatever else your favorite platform happens to be — we'd appreciate hearing from you there!

thx for reading 🤓

If you'd like to learn more about Oso, please come and join our community Slack channel and let's chat. And since you made it all the way down to the bottom, we might as well mention that we're hiring.

Good luck out there bear cubs, and happy authorizing!



Say Goodbye to the Hotel Pennsylvania — Demolition Prep Is Underway

Built in 1919 by McKim, Mead & White, the Hotel Pennsylvania in New York City is slated for demolition and mattresses are being removed.


Individuals matter


One of the most common mistakes I see people make when looking at data is incorrectly using an overly simplified model. A specific variant of this that has derailed the majority of work roadmaps I've looked at is treating people as interchangeable, as if it doesn't matter who is doing what, as if individuals don't matter.

Individuals matter.

A pattern I've repeatedly seen during the roadmap creation and review process is that people will plan out the next few quarters of work and then assign some number of people to it, one person for one quarter to a project, two people for three quarters to another, etc. Nominally, this process enables teams to understand what other teams are doing and plan appropriately. I've never worked in an organization where this actually worked, where this actually enabled teams to effectively execute with dependencies on other teams.

What I've seen happen instead is, when work starts on the projects, people will ask who's working on the project and then will make a guess at whether or not the project will be completed on time or in an effective way or even be completed at all based on who ends up working on the project. "Oh, Joe is taking feature X? He never ships anything reasonable. Looks like we can't depend on it because that's never going to work. Let's do Y instead of Z since that won't require X to actually work". The roadmap creation and review process maintains the polite fiction that people are interchangeable, but everyone knows this isn't true and teams that are effective and want to ship on time can't play along when the rubber hits the road, even if they nominally play along with the managers, directors, and VPs who create roadmaps as if people can be generically abstracted over.

Another place the non-fungibility of people causes predictable problems is with how managers operate teams. Managers who want to create effective teams[1] end up fighting the system in order to do so. Non-engineering orgs mostly treat people as fungible, and the finance org at a number of companies I've worked for forces the engineering org to treat people as fungible by requiring the org to budget in terms of headcount. The company, of course, spends money and not "heads", but internal bookkeeping is done in terms of "heads", so $X of budget will be, for some team, translated into something like "three staff-level heads". There's no way to convert that into "two more effective and better-paid staff-level heads"[2]. If you hire two staff engineers and not a third, the "head" and the associated budget will eventually get moved somewhere else.

One thing I've repeatedly seen is that a hiring manager will want to hire someone who they think will be highly effective or even just someone who has specialized skills and then not be able to hire because the company has translated budget into "heads" at a rate that doesn't allow for hiring some kinds of heads. There will be a "comp team" or other group in HR that will object because the comp team has no concept of "an effective engineer" or "a specialty that's hard to hire for"; to the comp team, a person is defined by their role, level, and location, and someone who's paid too much for their role and level is therefore a bad hire. If anyone reasonable had power over the process that they were willing to use, this wouldn't happen but, by design, the bureaucracy is set up so that few people have power[3].

A similar thing happens with retention. A great engineer I know who was regularly creating $x0M/yr[4] of additional profit for the company wanted to move to Portugal, so the company cut the person's cash comp by a factor of four, causing them to leave for a company that doesn't have location-based pay. This was escalated up to the director level, but that wasn't sufficient to override HR, so they left. HR didn't care that the person made the company more money than HR saves by doing location adjustments for all international employees combined because HR at the company had no notion of the value of an employee, only the cost, title, level, and location[5].

Relatedly, a "move" I've seen twice, once from a distance and once from up close, is when HR decides attrition is too low. In one case, the head of HR thought that the company's ~5% attrition was "unhealthy" because it was too low and in another, HR thought that the company's attrition sitting at a bit under 10% was too low. In both cases, the company made some moves that resulted in attrition moving up to what HR thought was a "healthy" level. In the case I saw from a distance, folks I know at the company agree that the majority of the company's best engineers left over the next year, many after only a few months. In the case I saw up close, I made a list of the most effective engineers I was aware of (like the person mentioned above who increased the company's revenue by 0.7% on his paternity leave) and, when the company successfully pushed attrition to over 10% overall, the most effective engineers left at over double that rate (which understates the impact of this because they tended to be long-tenured and senior engineers, where the normal expected attrition would be less than half the average company attrition).

Some people seem to view companies like a game of SimCity, where if you want more money, you can turn a knob, increase taxes, and get more money, uniformly impacting the city. But companies are not a game of SimCity. If you want more attrition and turn a knob that cranks that up, you don't get additional attrition that's sampled uniformly at random. People, as a whole, cannot be treated as an abstraction where the actions company leadership takes impacts everyone in the same way. The people who are most effective will be disproportionately likely to leave if you turn a knob that leads to increased attrition.

So far, we've talked about how treating individual people as fungible doesn't work for corporations but, of course, it also doesn't work in general. For example, a complaint from a friend of mine who's done a fair amount of "on the ground" development work in Africa is that a lot of people who are looking to donate want clear, simple criteria to guide their donations (e.g., an RCT showed that the intervention was highly effective). But many effective interventions cannot have their impact demonstrated ex ante in any simple way because, among other reasons, the composition of the team implementing the intervention is important, so a randomized trial or other experiment isn't applicable to any team other than the teams from the trial, operating in the context they were operating in during the trial.

An example of this would be an intervention they worked on that, among other things, helped wipe out guinea worm in a country. Ex post, we can say that was a highly effective intervention since it was a team of three people operating on a budget of $12/(person-day)[6] for a relatively short time period, making it a high ROI intervention, but there was no way to make a quantitative case for the intervention ex ante, nor does it seem plausible that there could've been a set of randomized trials or experiments that would've justified the intervention.

Their intervention wasn't wiping out guinea worm, that was just a side effect. The intervention was, basically, travelling around the country and embedding in regional government offices in order to understand their problems and then advise/facilitate better decision making. In the course of talking to people and suggesting improvements/changes, they realized that guinea worm could be wiped out with better distribution of clean water (guinea worm can come from drinking unfiltered water; giving people clean water can solve that problem) and that aid money flowing into the country specifically for water-related projects, like building wells, was already sufficient if it was distributed to places in the country that had high rates of guinea worm due to contaminated water instead of to the places aid money was flowing to (which were locations that had a lot of aid money flowing to them for a variety of reasons, such as being near a local "office" that was doing a lot of charity work). The specific thing this team did to help wipe out guinea worm was to give powerpoint presentations to government officials on how the government could advise organizations receiving aid money on how those organizations could more efficiently place wells. At the margin, wiping out guinea worm in a country would probably be sufficient for the intervention to be high ROI, but that's a very small fraction of the "return" from this three person team. I only mention it because it's a self-contained, easily quantifiable change. Most of the value of "leveling up" decision making in regional government offices is very difficult to quantify (and, to the extent that it can be quantified, will still have very large error bars).

Many interventions that seem the same ex ante, probably even most, produce little to no impact. My friend has a lot of comments on organizations that send a lot of people around to do similar sounding work but that produce little value, such as the Peace Corps.

A major difference between my friend's team and most teams is that my friend's team was composed of people who had a track record of being highly effective across a variety of contexts. In an earlier job, my friend joined a large-ish ($5B/yr revenue) government-run utility company and was immediately assigned a problem that, unbeknownst to her, had been open for years and was considered to be unsolvable. No one was willing to touch the problem, so they hired her because they wanted a scapegoat to blame and fire when the problem blew up. Instead, she solved the problem she was assigned as well as a number of other problems that were considered unsolvable. A team of three such people will be able to get a lot of mileage out of potentially high ROI interventions that most teams would not succeed at, such as going to a foreign country and improving governmental decision making in regional offices across the country enough that the government is able to solve serious open problems that had been plaguing the country for decades.

Many of the highest ROI interventions are similarly skill intensive and not amenable to simple back-of-the-envelope calculations, but most discussions I see on the topic, both in person and online, rely heavily on simplistic but irrelevant back-of-the-envelope calculations. This is not a problem limited to cocktail-party conversations. My friend's intervention was almost killed by the organization she worked for because the organization was infested with what she thinks of as "overly simplistic EA thinking", which caused leadership in the organization to try to redirect resources to projects where the computation of expected return was simpler because those projects were thought to be higher impact even though they were, ex post, lower impact. This issue of more legible projects getting more funding is an issue across organizations as well as within them. For example, my friend says that, back when GiveWell was mainly or only recommending charities that had simply quantifiable returns, she basically couldn't get her friends who worked in other fields to put resources towards efforts that weren't endorsed by GiveWell. People who didn't know about her aid background would say things like "haven't you heard of GiveWell?" when she suggested putting resources towards any particular cause, project, or organization.

I talked to a friend of mine who worked at GiveWell during that time period about this and, according to him, the reason GiveWell initially focused on charities that had easily quantifiable value wasn't that they thought those were the highest impact charities. Instead, it was because, as a young organization, they needed to be credible and it's easier to make a credible case for charities whose value is easily quantifiable. He would not, and he thinks GiveWell would not, endorse donors funnelling all resources into charities endorsed by GiveWell and neglecting other ways to improve the world. But many people want the world to be simple and apply the algorithm "charity on GiveWell list = good; not on GiveWell list = bad" because it makes the world simple for them.

Unfortunately for those people, as well as for the world, the world is not simple.

Coming back to the tech company examples, Laurence Tratt notes something that I've also observed:

One thing I've found very interesting in large organisations is when they realise that they need to do something different (i.e. they're slowly failing and want to turn the ship around). The obvious thing is to let a small team take risks on the basis that they might win big. Instead they tend to form endless committees which just perpetuate the drift that caused the committees to be formed in the first place! I think this is because they really struggle to see people as anything other than fungible, even if they really want to: it's almost beyond their ability to break out of their organisational mould, even when it spells long-term doom.

One lens we can use to look at what's going on is legibility. When you have a complex system, whether that's a company with thousands of engineers or a world with many billions of dollars going to aid work, the system is too complex for any decision maker to really understand, whether that's an exec at a company or a potential donor trying to understand where their money should go. One way to address this problem is by reducing the perceived complexity of the problem via imagining that individuals are fungible, making the system more legible. That produces relatively inefficient outcomes but, unlike trying to understand the issues at hand, it's highly scalable, and if there's one thing that tech companies like, it's doing things that scale, and treating a complex system like it's SimCity or Civilization is highly scalable. When returns are relatively evenly distributed, losing out on potential outlier returns in the name of legibility is a good trade-off. But when ROI is a heavy-tailed distribution, when the right person can, on their paternity leave, increase company revenue of a giant tech company by 0.7% and then much more when they work on that full-time, then severely tamping down on the right side of the curve to improve legibility is very costly and can cost you the majority of your potential returns.

Thanks to Laurence Tratt, Pam Wolf, Ben Kuhn, Peter Bhat Harkins, John Hergenroeder, Andrey Mishchenko, and Sophia Wisdom for comments/corrections/discussion.

Appendix: re-orgs

A friend of mine recently told me a story about a trendy tech company where they tried to move six people to another project, one that the people didn't want to work on and that they thought didn't really make sense. The result was that two senior devs quit, the EM retired, one PM was fired (long story), and three people left the team. The team for both the old project and the new project had to be re-created from scratch.

It could be much worse. In that case, at least there were some people who didn't leave the company. I once asked someone why feature X, which had been publicly promised, hadn't been implemented yet and why the entire sub-product was broken. The answer was that, after about a year of work, when shipping the feature was thought to be weeks away, leadership decided that the feature, which was previously considered a top priority, was no longer a priority and should be abandoned. The team argued that the feature was very close to being done and they just wanted enough runway to finish the feature. When that was denied, the entire team quit and the sub-product has slowly decayed since then. After many years, there was one attempted reboot of the team but, for reasons beyond the scope of this story, it was done with a new manager managing new grads and didn't really re-create what the old team was capable of.

As we've previously seen, an effective team is difficult to create, due to the institutional knowledge that exists on a team, as well as the team's culture, but destroying a team is very easy.

I find it interesting that so many people in senior management roles persist in thinking that they can re-direct people as easily as opening up the city view in Civilization and assigning workers to switch from one task to another when the senior ICs I talk to have high accuracy in predicting when these kinds of moves won't work out.


  1. On the flip side, there are managers who want to maximize the return to their career. At every company I've worked at that wasn't a startup, doing that involves moving up the ladder, which is easiest to do by collecting as many people as possible. At one company I've worked for, the explicitly stated promo criteria are basically "how many people report up to this person".

    Tying promotions and compensation to the number of people managed could make sense if you think of people as mostly fungible, but is otherwise an obviously silly idea.

  2. It isn't quite this simple when you take into account retention budgets (money set aside from a pool that doesn't come out of the org's normal budget, often used to match offers from people who are leaving), etc., but adding this nuance doesn't really change the fundamental point.
  3. There are advantages to a system where people don't have power, such as mitigating abuses of power, various biases, nepotism, etc. One can argue that reducing variance in outcomes by making people powerless is the preferred result, but in winner-take-most markets, which many tech markets are, forcing everyone to lowest-common-denominator effectiveness is a recipe for being an also-ran.

    A specific, small-scale example of this is the massive advantage that companies without a bureaucratic comms/PR approval process for technical blog posts have. The theory behind having the onerous process that most companies have is that the company is protected from the downside risk of a bad blog post, but examples of bad engineering blog posts that would've been mitigated by having an onerous process are few and far between, whereas the companies that have good processes for writing publicly get a lot of value that's easy to see.

    A larger-scale example of this is that the large, now >= $500B, companies all made aggressive moves that wouldn't have been possible at their bureaucracy-laden competitors, which allowed them to wipe the floor with their competitors. Of course, many other companies that made serious bets instead of playing it safe failed more quickly than companies trying to play it safe, but those companies at least had a chance, unlike the companies that played it safe.

  4. I'm generally skeptical of claims like this. At multiple companies that I've worked for, if you tally up the claimed revenue or user growth wins and compare them to actual revenue or user growth, you can see that there's some funny business going on since the total claimed wins are much larger than the observed total.

    Just because I'm generally curious about measurements, I sometimes did my own analysis of people's claimed wins and I almost always came up with an estimate that was much lower than the original estimate. Of course, I generally didn't publish these results internally since that would, in general, be a good way to make a lot of enemies without causing any change. In one extreme case, I found that the experimental methodology one entire org used was broken, causing them to get spurious wins in their A/B tests. I quietly informed them and they did nothing about it, which was the only reasonable move for them since having experiments that systematically showed improvement when none existed was a cheap and effective way for the org to gain more power by having its people get promoted and having more headcount allocated to it. And if anyone with power over the bureaucracy cared about accuracy of results, such a large discrepancy between claimed wins and actual results couldn't exist in the first place.

    Anyway, despite my general skepticism of claimed wins, I found this person's claimed wins highly credible after checking them myself. A project of theirs, done on their paternity leave (done while on leave because their manager and, really, the organization as well as the company, didn't support the kind of work they were doing), increased the company's revenue by 0.7%, a gain that was robust and actually increased in value through a long-term holdback, and they were able to produce wins of that magnitude after leadership was embarrassed into allowing them to do valuable work.

    P.S. If you'd like to play along at home, another fun game you can play is figuring out which teams and orgs hit their roadmap goals. For bonus points, plot the percentage of roadmap goals a team hits vs. their headcount growth, as well as how predictive hitting last quarter's goals is for hitting next quarter's goals across teams.

  5. I've seen quite a few people leave their employers due to location adjustments during the pandemic. In one case, HR insisted the person was actually very well compensated because, even though it might appear as if the person isn't highly paid because they were paid significantly less than many people who were one level below them, according to HR's formula, which included a location-based pay adjustment, the person was one of the highest paid people for their level at the entire company in terms of normalized pay. Putting aside abstract considerations about fairness, for an employee, HR telling them that they're highly paid given their location is like HR having a formula that pays based on height telling an employee that they're well paid for their height. That may be true according to whatever formula HR has but, practically speaking, that means nothing to the employee, who can go work somewhere that has a smaller height-based pay adjustment.

    Companies were able to get away with severe location-based pay adjustments with no cost to themselves before the pandemic. But, since the pandemic, a lot of companies have ramped up remote hiring and some of those companies have relatively small location-based pay adjustments, which has allowed them to disproportionately hire away who they choose from companies that still maintain severe location-based pay adjustments.

  6. Technically, their budget ended up being higher than this because one team member contracted typhoid and paid for some medical expenses from their personal budget and not from the organization's budget, but $12/(person-day), the organizational funding, is a pretty good approximation.