> Though the most successful founders are usually good people, they tend to have a piratical gleam in their eye. They're not Goody Two-Shoes type good. Morally, they care about getting the big questions right, but not about observing proprieties. That's why I'd use the word naughty rather than evil. They delight in breaking rules, but not rules that matter. This quality may be redundant though; it may be implied by imagination.
> Sam Altman of Loopt is one of the most successful alumni, so we asked him what question we could put on the Y Combinator application that would help us discover more people like him. He said to ask about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into computers. It has become one of the questions we pay most attention to when judging applications.
"What We Look for in Founders", PG
https://paulgraham.com/founders.html
I think the more powerful you become, the less endearing this trait is.
(Which wasn't even the taxi drivers, although they were plenty bad enough on their own.)
To them*
Which is the whole problem. These narcissistic egotists think they, alone, individually, are capable of deciding what's best not just for their companies but for humanity writ large.
Taking taxis 15 years ago was an absolutely scammy, shitty experience, and it's only marginally better now thanks to an actual competitive marketplace.
> They delight in breaking rules, but not rules that matter.
The question becomes "what rules matter?". And the answer inevitably becomes "only the ones that work in my favor and/or that I agree with".
I think someone trying to defend this would go "oh come on, does it really matter if a rich actress gets slightly richer?" And no, honestly, it doesn't matter that much. Not to me, anyway. But it matters that it establishes (or rather, confirms and reinforces) a culture of disregard: it makes everything about what you think matters, not about what someone else might think matters about the things in their own life. Their life belongs to them, a fact that utopians have forgotten again and again, everywhere and everywhen. And once all judgment is up to you, if you're a sufficiently ambitious and motivated reasoner (and the kind of person we're talking about here is), you can justify pretty much whatever you want without that pesky real-world check of a person going "um, actually, no, I don't want you to do that".
Sometimes I think anti-tech takes get this wrong. They see the problem as breaking the rules at all, as disrupting the status quo at all, as taking any action that might reasonably be foreseen to cause harm. But sometimes you really do have to do that if you want to make something good. You can't foresee every consequence of your actions - I doubt, for example, that Airbnb's founders were thinking about issues with housing policy when they started their company. But what differentiates behavior like this from risk-taking is that the harm here is deliberate and considered. Mistakes happen, but this was not a mistake. It was a choice to say "this is mine now".
That isn't a high bar to clear. And I think we can demand that tech leaders clear it without stifling the innovation that is tech at its best.
I mention this specifically because I remember Marc Andreessen saying something similar on Lex Fridman's podcast, something along the lines of getting "those creative people" together to build on AI.
Usually those people are considered sociopaths.
Maybe it's time to ask the employees of OpenAI who fought to get Altman back how this behavior is compatible with their moral standards, or whether money is the most important thing.