July 2006
When I was in high school I spent a lot of time imitating bad
writers. What we studied in English classes was mostly fiction,
so I assumed that was the highest form of writing. Mistake number
one. The stories that seemed to be most admired were ones in which
people suffered in complicated ways. Anything funny or
gripping was ipso facto suspect, unless it was old enough to be hard to
understand, like Shakespeare or Chaucer. Mistake number two. The
ideal medium seemed the short story, which I've since learned had
quite a brief life, roughly coincident with the peak of magazine
publishing. But since their size made them perfect for use in
high school classes, we read a lot of them, which gave us the
impression the short story was flourishing. Mistake number three.
And because they were so short, nothing really had to happen; you
could just show a randomly truncated slice of life, and that was
considered advanced. Mistake number four. The result was that I
wrote a lot of stories in which nothing happened except that someone
was unhappy in a way that seemed deep.

For most of college I was a philosophy major. I was very impressed
by the papers published in philosophy journals. They were so
beautifully typeset, and their tone was just captivating—alternately
casual and buffer-overflowingly technical. A fellow would be walking
along a street and suddenly modality qua modality would spring upon
him. I didn't ever quite understand these papers, but I figured
I'd get around to that later, when I had time to reread them more
closely. In the meantime I tried my best to imitate them. This
was, I can now see, a doomed undertaking, because they weren't
really saying anything. No philosopher ever refuted another, for
example, because no one said anything definite enough to refute.
Needless to say, my imitations didn't say anything either.

In grad school I was still wasting time imitating the wrong things.
There was then a fashionable type of program called an expert system,
at the core of which was something called an inference engine. I
looked at what these things did and thought "I could write that in
a thousand lines of code." And yet eminent professors were writing
books about them, and startups were selling them for a year's salary
a copy. What an opportunity, I thought; these impressive things
seem easy to me; I must be pretty sharp. Wrong. It was simply a
fad. The books the professors wrote about expert systems are now
ignored. They were not even on a path to anything interesting.
And the customers paying so much for them were largely the same
government agencies that paid thousands for screwdrivers and toilet
seats.

How do you avoid copying the wrong things? Copy only what you
genuinely like. That would have saved me in all three cases. I
didn't enjoy the short stories we had to read in English classes;
I didn't learn anything from philosophy papers; I didn't use expert
systems myself. I believed these things were good because they
were admired.

It can be hard to separate the things you like from the things
you're impressed with. One trick is to ignore presentation. Whenever
I see a painting impressively hung in a museum, I ask myself: how
much would I pay for this if I found it at a garage sale, dirty and
frameless, and with no idea who painted it? If you walk around a
museum trying this experiment, you'll find you get some truly
startling results. Don't ignore this data point just because it's
an outlier.

Another way to figure out what you like is to look at what you enjoy
as guilty pleasures. Many things people like, especially if they're
young and ambitious, they like largely for the feeling of virtue
in liking them. 99% of people reading Ulysses are thinking
"I'm reading Ulysses" as they do it. A guilty pleasure is
at least a pure one. What do you read when you don't feel up to being
virtuous? What kind of book do you read and feel sad that there's
only half of it left, instead of being impressed that you're half
way through? That's what you really like.

Even when you find genuinely good things to copy, there's another
pitfall to be avoided. Be careful to copy what makes them good,
rather than their flaws. It's easy to be drawn into imitating
flaws, because they're easier to see, and of course easier to copy
too. For example, most painters in the eighteenth and nineteenth
centuries used brownish colors. They were imitating the great
painters of the Renaissance, whose paintings by that time were brown
with dirt. Those paintings have since been cleaned, revealing
brilliant colors; their imitators are of course still brown.

It was painting, incidentally, that cured me of copying the wrong
things. Halfway through grad school I decided I wanted to try being
a painter, and the art world was so manifestly corrupt that it
snapped the leash of credulity. These people made philosophy
professors seem as scrupulous as mathematicians. It was so clearly
a choice of doing good work xor being an insider that I was forced
to see the distinction. It's there to some degree in almost every
field, but I had till then managed to avoid facing it.

That was one of the most valuable things I learned from painting:
you have to figure out for yourself what's
good. You can't trust
authorities. They'll lie to you on this one.
May 2001
(These are some notes I made
for a panel discussion on programming language design
at MIT on May 10, 2001.)

1. Programming Languages Are for People.

Programming languages
are how people talk to computers. The computer would be just as
happy speaking any language that was unambiguous. The reason we
have high level languages is because people can't deal with
machine language. The point of programming
languages is to prevent our poor frail human brains from being
overwhelmed by a mass of detail.

Architects know that some kinds of design problems are more personal
than others. One of the cleanest, most abstract design problems
is designing bridges. There your job is largely a matter of spanning
a given distance with the least material. The other end of the
spectrum is designing chairs. Chair designers have to spend their
time thinking about human butts.

Software varies in the same way. Designing algorithms for routing
data through a network is a nice, abstract problem, like designing
bridges. Whereas designing programming languages is like designing
chairs: it's all about dealing with human weaknesses.

Most of us hate to acknowledge this. Designing systems of great
mathematical elegance sounds a lot more appealing to most of us
than pandering to human weaknesses. And there is a role for mathematical
elegance: some kinds of elegance make programs easier to understand.
But elegance is not an end in itself.

And when I say languages have to be designed to suit human weaknesses,
I don't mean that languages have to be designed for bad programmers.
In fact I think you ought to design for the
best programmers, but
even the best programmers have limitations. I don't think anyone
would like programming in a language where all the variables were
the letter x with integer subscripts.

2. Design for Yourself and Your Friends.

If you look at the history of programming languages, a lot of the best
ones were languages designed for their own authors to use, and a
lot of the worst ones were designed for other people to use.

When languages are designed for other people, it's always a specific
group of other people: people not as smart as the language designer.
So you get a language that talks down to you. Cobol is the most
extreme case, but a lot of languages are pervaded by this spirit.

It has nothing to do with how abstract the language is. C is pretty
low-level, but it was designed for its authors to use, and that's
why hackers like it.

The argument for designing languages for bad programmers is that
there are more bad programmers than good programmers. That may be
so. But those few good programmers write a disproportionately
large percentage of the software.

I'm interested in the question, how do you design a language that
the very best hackers will like? I happen to think this is
identical to the question, how do you design a good programming
language?, but even if it isn't, it is at least an interesting
question.

3. Give the Programmer as Much Control as Possible.

Many languages
(especially the ones designed for other people) have the attitude
of a governess: they try to prevent you from
doing things that they think aren't good for you. I like the
opposite approach: give the programmer as much
control as you can.

When I first learned Lisp, what I liked most about it was
that it considered me an equal partner. In the other languages
I had learned up till then, there was the language and there was my
program, written in the language, and the two were very separate.
But in Lisp the functions and macros I wrote were just like those
that made up the language itself. I could rewrite the language
if I wanted. It had the same appeal as open-source software.

4. Aim for Brevity.

Brevity is underestimated and even scorned.
But if you look into the hearts of hackers, you'll see that they
really love it. How many times have you heard hackers speak fondly
of how in, say, APL, they could do amazing things with just a couple
lines of code? I think anything that really smart people really
love is worth paying attention to.

I think almost anything
you can do to make programs shorter is good. There should be lots
of library functions; anything that can be implicit should be;
the syntax should be terse to a fault; even the names of things
should be short.

And it's not only programs that should be short. The manual should
be thin as well. A good part of manuals is taken up with clarifications
and reservations and warnings and special cases. If you force
yourself to shorten the manual, in the best case you do it by fixing
the things in the language that required so much explanation.

5. Admit What Hacking Is.

A lot of people wish that hacking was
mathematics, or at least something like a natural science. I think
hacking is more like architecture. Architecture is
related to physics, in the sense that architects have to design
buildings that don't fall down, but the actual goal of architects
is to make great buildings, not to make discoveries about statics.

What hackers like to do is make great programs.
And I think, at least in our own minds, we have to remember that it's
an admirable thing to write great programs, even when this work
doesn't translate easily into the conventional intellectual
currency of research papers. Intellectually, it is just as
worthwhile to design a language programmers will love as it is to design a
horrible one that embodies some idea you can publish a paper
about.

1. How to Organize Big Libraries?

Libraries are becoming an
increasingly important component of programming languages. They're
also getting bigger, and this can be dangerous. If it takes longer
to find the library function that will do what you want than it
would take to write it yourself, then all that code is doing nothing
but make your manual thick. (The Symbolics manuals were a case in
point.) So I think we will have to work on ways to organize
libraries. The ideal would be to design them so that the programmer
could guess what library call would do the right thing.

2. Are People Really Scared of Prefix Syntax?

This is an open
problem in the sense that I have wondered about it for years and
still don't know the answer. Prefix syntax seems perfectly natural
to me, except possibly for math. But it could be that a lot of
Lisp's unpopularity is simply due to having an unfamiliar syntax.
Whether to do anything about it, if it is true, is another question.
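As a concrete illustration (a sketch of mine, not part of the original
notes), the place where prefix notation probably feels least natural
is ordinary arithmetic. The quadratic a*x*x + b*x + c, written as a
Lisp function, is prefix all the way down:

    ;; Infix:   a*x*x + b*x + c
    ;; Prefix:
    (defun quadratic (a b c x)
      (+ (* a x x) (* b x) c))

An ordinary call, f(x, y) versus (f x y), differs only in where the
parentheses and commas go, which may be why any discomfort shows up
mostly in math.
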
3. What Do You Need for Server-Based Software?
I think a lot of the most exciting new applications that get written
in the next twenty years will be Web-based applications, meaning
programs that sit on the server and talk to you through a Web
browser. And to write these kinds of programs we may need some
new things.

One thing we'll need is support for the new way that server-based
apps get released. Instead of having one or two big releases a
year, like desktop software, server-based apps get released as a
series of small changes. You may have as many as five or ten
releases a day. And as a rule everyone will always use the latest
version.

You know how you can design programs to be debuggable?
Well, server-based software likewise has to be designed to be
changeable. You have to be able to change it easily, or at least
to know what is a small change and what is a momentous one.

Another thing that might turn out to be useful for server-based
software, surprisingly, is continuations. In Web-based software
you can use something like continuation-passing style to get the
effect of subroutines in the inherently
stateless world of a Web
session. Maybe it would be worthwhile having actual continuations,
if it was not too expensive.

4. What New Abstractions Are Left to Discover?

I'm not sure how
reasonable a hope this is, but one thing I would really love to
do, personally, is discover a new abstraction-- something that would
make as much of a difference as having first class functions or
recursion or even keyword parameters. This may be an impossible
dream. These things don't get discovered that often. But I am always
looking.

1. You Can Use Whatever Language You Want.

Writing application
programs used to mean writing desktop software. And in desktop
software there is a big bias toward writing the application in the
same language as the operating system. And so ten years ago,
writing software pretty much meant writing software in C.
Eventually a tradition evolved:
application programs must not be written in unusual languages.
And this tradition had so long to develop that nontechnical people
like managers and venture capitalists also learned it.

Server-based software blows away this whole model. With server-based
software you can use any language you want. Almost nobody understands
this yet (especially not managers and venture capitalists).
A few hackers understand it, and that's why we even hear
about new, indy languages like Perl and Python. We're not hearing
about Perl and Python because people are using them to write Windows
apps.

What this means for us, as people interested in designing programming
languages, is that there is now potentially an actual audience for
our work.

2. Speed Comes from Profilers.

Language designers, or at least
language implementors, like to write compilers that generate fast
code. But I don't think this is what makes languages fast for users.
Knuth pointed out long ago that speed only matters in a few critical
bottlenecks. And anyone who's tried it knows that you can't guess
where these bottlenecks are. Profilers are the answer.

Language designers are solving the wrong problem. Users don't need
benchmarks to run fast. What they need is a language that can show
them what parts of their own programs need to be rewritten. That's
where speed comes from in practice. So maybe it would be a net
win if language implementors took half the time they would
have spent doing compiler optimizations and spent it writing a
good profiler instead.

3. You Need an Application to Drive the Design of a Language.

This may not be an absolute rule, but it seems like the best languages
all evolved together with some application they were being used to
write. C was written by people who needed it for systems programming.
Lisp was developed partly to do symbolic differentiation, and
McCarthy was so eager to get started that he was writing differentiation
programs even in the first paper on Lisp, in 1960.

It's especially good if your application solves some new problem.
That will tend to drive your language to have new features that
programmers need. I personally am interested in writing
a language that will be good for writing server-based applications.

[During the panel, Guy Steele also made this point, with the
additional suggestion that the application should not consist of
writing the compiler for your language, unless your language
happens to be intended for writing compilers.]

4. A Language Has to Be Good for Writing Throwaway Programs.

You know what a throwaway program is: something you write quickly for
some limited task. I think if you looked around you'd find that
a lot of big, serious programs started as throwaway programs. I
would not be surprised if most programs started as throwaway
programs. And so if you want to make a language that's good for
writing software in general, it has to be good for writing throwaway
programs, because that is the larval stage of most software.

5. Syntax Is Connected to Semantics.

It's traditional to think of
syntax and semantics as being completely separate. This will
sound shocking, but it may be that they aren't.
I think that what you want in your language may be related
to how you express it.

I was talking recently to Robert Morris, and he pointed out that
operator overloading is a bigger win in languages with infix
syntax. In a language with prefix syntax, any function you define
is effectively an operator. If you want to define a plus for a
new type of number you've made up, you can just define a new function
to add them. If you do that in a language with infix syntax,
there's a big difference in appearance between the use of an
overloaded operator and a function call.

1. New Programming Languages.

Back in the 1970s
it was fashionable to design new programming languages. Recently
it hasn't been. But I think server-based software will make new
languages fashionable again. With server-based software, you can
use any language you want, so if someone does design a language that
actually seems better than others that are available, there will be
people who take a risk and use it.

2. Time-Sharing.

Richard Kelsey gave this as an idea whose time
has come again in the last panel, and I completely agree with him.
My guess (and Microsoft's guess, it seems) is that much computing
will move from the desktop onto remote servers. In other words,
time-sharing is back. And I think there will need to be support
for it at the language level. For example, I know that Richard
and Jonathan Rees have done a lot of work implementing process
scheduling within Scheme 48.

3. Efficiency.

Recently it was starting to seem that computers
were finally fast enough. More and more we were starting to hear
about byte code, which implies to me at least that we feel we have
cycles to spare. But I don't think we will, with server-based
software. Someone is going to have to pay for the servers that
the software runs on, and the number of users they can support per
machine will be the divisor of their capital cost.

So I think efficiency will matter, at least in computational
bottlenecks. It will be especially important to do i/o fast,
because server-based applications do a lot of i/o.

It may turn out that byte code is not a win, in the end. Sun and
Microsoft seem to be facing off in a kind of a battle of the byte
codes at the moment. But they're doing it because byte code is a
convenient place to insert themselves into the process, not because
byte code is in itself a good idea. It may turn out that this
whole battleground gets bypassed. That would be kind of amusing.

1. Clients.

This is just a guess, but my guess is that
the winning model for most applications will be purely server-based.
Designing software that works on the assumption that everyone will
have your client is like designing a society on the assumption that
everyone will just be honest. It would certainly be convenient, but
you have to assume it will never happen.

I think there will be a proliferation of devices that have some
kind of Web access, and all you'll be able to assume about them is
that they can support simple html and forms. Will you have a
browser on your cell phone? Will there be a phone in your palm
pilot? Will your blackberry get a bigger screen? Will you be able
to browse the Web on your gameboy? Your watch? I don't know.
And I don't have to know if I bet on
everything just being on the server. It's
just so much more robust to have all the
brains on the server.

2. Object-Oriented Programming.

I realize this is a
controversial one, but I don't think object-oriented programming
is such a big deal. I think it is a fine model for certain kinds
of applications that need that specific kind of data structure,
like window systems, simulations, and cad programs. But I don't
see why it ought to be the model for all programming.

I think part of the reason people in big companies like object-oriented
programming is because it yields a lot of what looks like work.
Something that might naturally be represented as, say, a list of
integers, can now be represented as a class with all kinds of
scaffolding and hustle and bustle.

Another attraction of
object-oriented programming is that methods give you some of the
effect of first class functions. But this is old news to Lisp
programmers. When you have actual first class functions, you can
just use them in whatever way is appropriate to the task at hand,
instead of forcing everything into a mold of classes and methods.

What this means for language design, I think, is that you shouldn't
build object-oriented programming in too deeply. Maybe the
answer is to offer more general, underlying stuff, and let people design
whatever object systems they want as libraries.

3. Design by Committee.

Having your language designed by a committee is a big pitfall,
and not just for the reasons everyone knows about. Everyone
knows that committees tend to yield lumpy, inconsistent designs.
But I think a greater danger is that they won't take risks.
When one person is in charge he can take risks
that a committee would never agree on.

Is it necessary to take risks to design a good language though?
Many people might suspect
that language design is something where you should stick fairly
close to the conventional wisdom. I bet this isn't true.
In everything else people do, reward is proportionate to risk.
Why should language design be any different?
May 2007

People who worry about the increasing gap between rich and poor
generally look back on the mid twentieth century as a golden age.
In those days we had a large number of high-paying union manufacturing
jobs that boosted the median income. I wouldn't quite call the
high-paying union job a myth, but I think people who dwell on it
are reading too much into it.

Oddly enough, it was working with startups that made me realize
where the high-paying union job came from. In a rapidly growing
market, you don't worry too much about efficiency. It's more
important to grow fast. If there's some mundane problem getting
in your way, and there's a simple solution that's somewhat expensive,
just take it and get on with more important things. EBay didn't
win by paying less for servers than their competitors.

Difficult though it may be to imagine now, manufacturing was a
growth industry in the mid twentieth century. This was an era when
small firms making everything from cars to candy were getting
consolidated into a new kind of corporation with national reach and
huge economies of scale. You had to grow fast or die. Workers
were for these companies what servers are for an Internet startup.
A reliable supply was more important than low cost.

If you looked in the head of a 1950s auto executive, the attitude
must have been: sure, give 'em whatever they ask for, so long as
the new model isn't delayed.

In other words, those workers were not paid what their work was
worth. Circumstances being what they were, companies would have
been stupid to insist on paying them so little.

If you want a less controversial example of this phenomenon, ask
anyone who worked as a consultant building web sites during the
Internet Bubble. In the late nineties you could get paid huge sums
of money for building the most trivial things. And yet does anyone
who was there have any expectation those days will ever return? I
doubt it. Surely everyone realizes that was just a temporary
aberration.

The era of labor unions seems to have been the same kind of aberration,
just spread
over a longer period, and mixed together with a lot of ideology
that prevents people from viewing it with as cold an eye as they
would something like consulting during the Bubble.

Basically, unions were just Razorfish.

People who think the labor movement was the creation of heroic union
organizers have a problem to explain: why are unions shrinking now?
The best they can do is fall back on the default explanation of
people living in fallen civilizations. Our ancestors were giants.
The workers of the early twentieth century must have had a moral
courage that's lacking today.

In fact there's a simpler explanation. The early twentieth century
was just a fast-growing startup overpaying for infrastructure. And
we in the present are not a fallen people, who have abandoned
whatever mysterious high-minded principles produced the high-paying
union job. We simply live in a time when the fast-growing companies
overspend on different things.
January 2016

Life is short, as everyone knows. When I was a kid I used to wonder
about this. Is life actually short, or are we really complaining
about its finiteness? Would we be just as likely to feel life was
short if we lived 10 times as long?

Since there didn't seem any way to answer this question, I stopped
wondering about it. Then I had kids. That gave me a way to answer
the question, and the answer is that life actually is short.

Having kids showed me how to convert a continuous quantity, time,
into discrete quantities. You only get 52 weekends with your 2 year
old. If Christmas-as-magic lasts from say ages 3 to 10, you only
get to watch your child experience it 8 times. And while it's
impossible to say what is a lot or a little of a continuous quantity
like time, 8 is not a lot of something. If you had a handful of 8
peanuts, or a shelf of 8 books to choose from, the quantity would
definitely seem limited, no matter what your lifespan was.

Ok, so life actually is short. Does it make any difference to know
that?

It has for me. It means arguments of the form "Life is too short
for x" have great force. It's not just a figure of speech to say
that life is too short for something. It's not just a synonym for
annoying. If you find yourself thinking that life is too short for
something, you should try to eliminate it if you can.

When I ask myself what I've found life is too short for, the word
that pops into my head is "bullshit." I realize that answer is
somewhat tautological. It's almost the definition of bullshit that
it's the stuff that life is too short for. And yet bullshit does
have a distinctive character. There's something fake about it.
It's the junk food of experience. [1]

If you ask yourself what you spend your time on that's bullshit,
you probably already know the answer. Unnecessary meetings, pointless
disputes, bureaucracy, posturing, dealing with other people's
mistakes, traffic jams, addictive but unrewarding pastimes.

There are two ways this kind of thing gets into your life: it's
either forced on you, or it tricks you. To some extent you have to
put up with the bullshit forced on you by circumstances. You need
to make money, and making money consists mostly of errands. Indeed,
the law of supply and demand insures that: the more rewarding some
kind of work is, the cheaper people will do it. It may be that
less bullshit is forced on you than you think, though. There has
always been a stream of people who opt out of the default grind and
go live somewhere where opportunities are fewer in the conventional
sense, but life feels more authentic. This could become more common.

You can do it on a smaller scale without moving. The amount of
time you have to spend on bullshit varies between employers. Most
large organizations (and many small ones) are steeped in it. But
if you consciously prioritize bullshit avoidance over other factors
like money and prestige, you can probably find employers that will
waste less of your time.

If you're a freelancer or a small company, you can do this at the
level of individual customers. If you fire or avoid toxic customers,
you can decrease the amount of bullshit in your life by more than
you decrease your income.

But while some amount of bullshit is inevitably forced on you, the
bullshit that sneaks into your life by tricking you is no one's
fault but your own. And yet the bullshit you choose may be harder
to eliminate than the bullshit that's forced on you. Things that
lure you into wasting your time have to be really good at
tricking you. An example that will be familiar to a lot of people
is arguing online. When someone
contradicts you, they're in a sense attacking you. Sometimes pretty
overtly. Your instinct when attacked is to defend yourself. But
like a lot of instincts, this one wasn't designed for the world we
now live in. Counterintuitive as it feels, it's better most of
the time not to defend yourself. Otherwise these people are literally
taking your life. [2]

Arguing online is only incidentally addictive. There are more
dangerous things than that. As I've written before, one byproduct
of technical progress is that things we like tend to become more
addictive. Which means we will increasingly have to make a conscious
effort to avoid addictions, to stand outside ourselves and ask "is
this how I want to be spending my time?"

As well as avoiding bullshit, one should actively seek out things
that matter. But different things matter to different people, and
most have to learn what matters to them. A few are lucky and realize
early on that they love math or taking care of animals or writing,
and then figure out a way to spend a lot of time doing it. But
most people start out with a life that's a mix of things that
matter and things that don't, and only gradually learn to distinguish
between them.

For the young especially, much of this confusion is induced by the
artificial situations they find themselves in. In middle school and
high school, what the other kids think of you seems the most important
thing in the world. But when you ask adults what they got wrong
at that age, nearly all say they cared too much what other kids
thought of them.

One heuristic for distinguishing stuff that matters is to ask
yourself whether you'll care about it in the future. Fake stuff
that matters usually has a sharp peak of seeming to matter. That's
how it tricks you. The area under the curve is small, but its shape
jabs into your consciousness like a pin.

The things that matter aren't necessarily the ones people would
call "important." Having coffee with a friend matters. You won't
feel later like that was a waste of time.

One great thing about having small children is that they make you
spend time on things that matter: them. They grab your sleeve as
you're staring at your phone and say "will you play with me?" And
odds are that is in fact the bullshit-minimizing option.

If life is short, we should expect its shortness to take us by
surprise. And that is just what tends to happen. You take things
for granted, and then they're gone. You think you can always write
that book, or climb that mountain, or whatever, and then you realize
the window has closed. The saddest windows close when other people
die. Their lives are short too. After my mother died, I wished I'd
spent more time with her. I lived as if she'd always be there.
And in her typical quiet way she encouraged that illusion. But an
illusion it was. I think a lot of people make the same mistake I
did.

The usual way to avoid being taken by surprise by something is to
be consciously aware of it. Back when life was more precarious,
people used to be aware of death to a degree that would now seem a
bit morbid. I'm not sure why, but it doesn't seem the right answer
to be constantly reminding oneself of the grim reaper hovering at
everyone's shoulder. Perhaps a better solution is to look at the
problem from the other end. Cultivate a habit of impatience about
the things you most want to do. Don't wait before climbing that
mountain or writing that book or visiting your mother. You don't
need to be constantly reminding yourself why you shouldn't wait.
Just don't wait.

I can think of two more things one does when one doesn't have much
of something: try to get more of it, and savor what one has. Both
make sense here.

How you live affects how long you live. Most people could do better.
Me among them.

But you can probably get even more effect by paying closer attention
to the time you have. It's easy to let the days rush by. The
"flow" that imaginative people love so much has a darker cousin
that prevents you from pausing to savor life amid the daily slurry
of errands and alarms. One of the most striking things I've read
was not in a book, but the title of one: James Salter's Burning
the Days.

It is possible to slow time somewhat. I've gotten better at it.
Kids help. When you have small children, there are a lot of moments
so perfect that you can't help noticing.

It does help too to feel that you've squeezed everything out of
some experience. The reason I'm sad about my mother is not just
that I miss her but that I think of all the things we could have
done that we didn't. My oldest son will be 7 soon. And while I
miss the 3 year old version of him, I at least don't have any regrets
over what might have been. We had the best time a daddy and a 3
year old ever had.

Relentlessly prune bullshit, don't wait to do things that matter,
and savor the time you have. That's what you do when life is short.

Notes

[1]
At first I didn't like it that the word that came to mind was
one that had other meanings. But then I realized the other meanings
are fairly closely related. Bullshit in the sense of things you
waste your time on is a lot like intellectual bullshit.

[2]
I chose this example deliberately as a note to self. I get
attacked a lot online. People tell the craziest lies about me.
And I have so far done a pretty mediocre job of suppressing the
natural human inclination to say "Hey, that's not true!"

Thanks to Jessica Livingston and Geoff Ralston for reading drafts
of this.
November 2021

(This essay is derived from a talk at the Cambridge Union.)

When I was a kid, I'd have said there wasn't. My father told me so.
Some people like some things, and other people like other things,
and who's to say who's right?

It seemed so obvious that there was no such thing as good taste
that it was only through indirect evidence that I realized my father
was wrong. And that's what I'm going to give you here: a proof by
reductio ad absurdum. If we start from the premise that there's no
such thing as good taste, we end up with conclusions that are
obviously false, and therefore the premise must be wrong.

We'd better start by saying what good taste is. There's a narrow
sense in which it refers to aesthetic judgements and a broader one
in which it refers to preferences of any kind. The strongest proof
would be to show that taste exists in the narrowest sense, so I'm
going to talk about taste in art. You have better taste than me if
the art you like is better than the art I like.

If there's no such thing as good taste, then there's no such thing
as good art. Because if there is such a
thing as good art, it's
easy to tell which of two people has better taste. Show them a lot
of works by artists they've never seen before and ask them to
choose the best, and whoever chooses the better art has better
taste.

So if you want to discard the concept of good taste, you also have
to discard the concept of good art. And that means you have to
discard the possibility of people being good at making it. Which
means there's no way for artists to be good at their jobs. And not
just visual artists, but anyone who is in any sense an artist. You
can't have good actors, or novelists, or composers, or dancers
either. You can have popular novelists, but not good ones.

We don't realize how far we'd have to go if we discarded the concept
of good taste, because we don't even debate the most obvious cases.
But it doesn't just mean we can't say which of two famous painters
is better. It means we can't say that any painter is better than a
randomly chosen eight year old.

That was how I realized my father was wrong. I started studying
painting. And it was just like other kinds of work I'd done: you
could do it well, or badly, and if you tried hard, you could get
better at it. And it was obvious that Leonardo and Bellini were
much better at it than me. That gap between us was not imaginary.
They were so good. And if they could be good, then art could be
good, and there was such a thing as good taste after all.

Now that I've explained how to show there is such a thing as good
taste, I should also explain why people think there isn't. There
are two reasons. One is that there's always so much disagreement
about taste. Most people's response to art is a tangle of unexamined
impulses. Is the artist famous? Is the subject attractive? Is this
the sort of art they're supposed to like? Is it hanging in a famous
museum, or reproduced in a big, expensive book? In practice most
people's response to art is dominated by such extraneous factors.

And the people who do claim to have good taste are so often mistaken.
The paintings admired by the so-called experts in one generation
are often so different from those admired a few generations later.
It's easy to conclude there's nothing real there at all. It's only
when you isolate this force, for example by trying to paint and
comparing your work to Bellini's, that you can see that it does in
fact exist.

The other reason people doubt that art can be good is that there
doesn't seem to be any room in the art for this goodness. The
argument goes like this. Imagine several people looking at a work
of art and judging how good it is. If being good art really is a
property of objects, it should be in the object somehow. But it
doesn't seem to be; it seems to be something happening in the heads
of each of the observers. And if they disagree, how do you choose
between them?

The solution to this puzzle is to realize that the purpose of art
is to work on its human audience, and humans have a lot in common.
And to the extent the things an object acts upon respond in the
same way, that's arguably what it means for the object to have the
corresponding property. If everything a particle interacts with
behaves as if the particle had a mass of m, then it has a mass of
m. So the distinction between "objective" and "subjective" is not
binary, but a matter of degree, depending on how much the subjects
have in common. Particles interacting with one another are at one
pole, but people interacting with art are not all the way at the
other; their reactions aren't random.

Because people's responses to art aren't random, art can be designed
to operate on people, and be good or bad depending on how effectively
it does so. Much as a vaccine can be. If someone were talking about
the ability of a vaccine to confer immunity, it would seem very
frivolous to object that conferring immunity wasn't really a property
of vaccines, because acquiring immunity is something that happens
in the immune system of each individual person. Sure, people's
immune systems vary, and a vaccine that worked on one might not
work on another, but that doesn't make it meaningless to talk about
the effectiveness of a vaccine.

The situation with art is messier, of course. You can't measure
effectiveness by simply taking a vote, as you do with vaccines.
You have to imagine the responses of subjects with a deep knowledge
of art, and enough clarity of mind to be able to ignore extraneous
influences like the fame of the artist. And even then you'd still
see some disagreement. People do vary, and judging art is hard,
especially recent art. There is definitely not a total order either
of works or of people's ability to judge them. But there is equally
definitely a partial order of both. So while it's not possible to
have perfect taste, it is possible to have good taste.
Thanks to the Cambridge Union for inviting me, and to Trevor
Blackwell, Jessica Livingston, and Robert Morris for reading drafts
of this.
May 2001

(This article was written as a kind of business plan for a
new language.
So it is missing (because it takes for granted) the most important
feature of a good programming language: very powerful abstractions.)

A friend of mine once told an eminent operating systems
expert that he wanted to design a really good
programming language. The expert told him that it would be a
waste of time, that programming languages don't become popular
or unpopular based on their merits, and so no matter how
good his language was, no one would use it. At least, that
was what had happened to the language he had designed.

What does make a language popular? Do popular
languages deserve their popularity? Is it worth trying to
define a good programming language? How would you do it?

I think the answers to these questions can be found by looking
at hackers, and learning what they want. Programming
languages are for hackers, and a programming language
is good as a programming language (rather than, say, an
exercise in denotational semantics or compiler design)
if and only if hackers like it.

1 The Mechanics of Popularity

It's true, certainly, that most people don't choose programming
languages simply based on their merits. Most programmers are told
what language to use by someone else. And yet I think the effect
of such external factors on the popularity of programming languages
is not as great as it's sometimes thought to be. I think a bigger
problem is that a hacker's idea of a good programming language is
not the same as most language designers'.

Between the two, the hacker's opinion is the one that matters.
Programming languages are not theorems. They're tools, designed
for people, and they have to be designed to suit human strengths
and weaknesses as much as shoes have to be designed for human feet.
If a shoe pinches when you put it on, it's a bad shoe, however
elegant it may be as a piece of sculpture.

It may be that the majority of programmers can't tell a good language
from a bad one. But that's no different with any other tool. It
doesn't mean that it's a waste of time to try designing a good
language. Expert hackers
can tell a good language when they see
one, and they'll use it. Expert hackers are a tiny minority,
admittedly, but that tiny minority write all the good software,
and their influence is such that the rest of the programmers will
tend to use whatever language they use. Often, indeed, it is not
merely influence but command: often the expert hackers are the very
people who, as their bosses or faculty advisors, tell the other
programmers what language to use.

The opinion of expert hackers is not the only force that determines
the relative popularity of programming languages — legacy software
(Cobol) and hype (Ada, Java) also play a role — but I think it is
the most powerful force over the long term. Given an initial critical
mass and enough time, a programming language probably becomes about
as popular as it deserves to be. And popularity further separates
good languages from bad ones, because feedback from real live users
always leads to improvements. Look at how much any popular language
has changed during its life. Perl and Fortran are extreme cases,
but even Lisp has changed a lot. Lisp 1.5 didn't have macros, for
example; these evolved later, after hackers at MIT had spent a
couple years using Lisp to write real programs. [1]

So whether or not a language has to be good to be popular, I think
a language has to be popular to be good. And it has to stay popular
to stay good. The state of the art in programming languages doesn't
stand still. And yet the Lisps we have today are still pretty much
what they had at MIT in the mid-1980s, because that's the last time
Lisp had a sufficiently large and demanding user base.

Of course, hackers have to know about a language before they can
use it. How are they to hear? From other hackers. But there has to
be some initial group of hackers using the language for others even
to hear about it. I wonder how large this group has to be; how many
users make a critical mass? Off the top of my head, I'd say twenty.
If a language had twenty separate users, meaning twenty users who
decided on their own to use it, I'd consider it to be real.

Getting there can't be easy. I would not be surprised if it is
harder to get from zero to twenty than from twenty to a thousand.
The best way to get those initial twenty users is probably to use
a trojan horse: to give people an application they want, which
happens to be written in the new language.

2 External Factors

Let's start by acknowledging one external factor that does affect
the popularity of a programming language. To become popular, a
programming language has to be the scripting language of a popular
system. Fortran and Cobol were the scripting languages of early
IBM mainframes. C was the scripting language of Unix, and so, later,
was Perl. Tcl is the scripting language of Tk. Java and Javascript
are intended to be the scripting languages of web browsers.

Lisp is not a massively popular language because it is not the
scripting language of a massively popular system. What popularity
it retains dates back to the 1960s and 1970s, when it was the
scripting language of MIT. A lot of the great programmers of the
day were associated with MIT at some point. And in the early 1970s,
before C, MIT's dialect of Lisp, called MacLisp, was one of the
only programming languages a serious hacker would want to use.

Today Lisp is the scripting language of two moderately popular
systems, Emacs and Autocad, and for that reason I suspect that most
of the Lisp programming done today is done in Emacs Lisp or AutoLisp.

Programming languages don't exist in isolation. To hack is a
transitive verb — hackers are usually hacking something — and in
practice languages are judged relative to whatever they're used to
hack. So if you want to design a popular language, you either have
to supply more than a language, or you have to design your language
to replace the scripting language of some existing system.

Common Lisp is unpopular partly because it's an orphan. It did
originally come with a system to hack: the Lisp Machine. But Lisp
Machines (along with parallel computers) were steamrollered by the
increasing power of general purpose processors in the 1980s. Common
Lisp might have remained popular if it had been a good scripting
language for Unix. It is, alas, an atrociously bad one.

One way to describe this situation is to say that a language isn't
judged on its own merits. Another view is that a programming language
really isn't a programming language unless it's also the scripting
language of something. This only seems unfair if it comes as a
surprise. I think it's no more unfair than expecting a programming
language to have, say, an implementation. It's just part of what
a programming language is.

A programming language does need a good implementation, of course,
and this must be free. Companies will pay for software, but individual
hackers won't, and it's the hackers you need to attract.

A language also needs to have a book about it. The book should be
thin, well-written, and full of good examples. K&R is the ideal
here. At the moment I'd almost say that a language has to have a
book published by O'Reilly. That's becoming the test of mattering
to hackers.

There should be online documentation as well. In fact, the book
can start as online documentation. But I don't think that physical
books are outmoded yet. Their format is convenient, and the de
facto censorship imposed by publishers is a useful if imperfect
filter. Bookstores are one of the most important places for learning
about new languages.

3 Brevity

Given that you can supply the three things any language needs — a
free implementation, a book, and something to hack — how do you
make a language that hackers will like?

One thing hackers like is brevity. Hackers are lazy, in the same
way that mathematicians and modernist architects are lazy: they
hate anything extraneous. It would not be far from the truth to
say that a hacker about to write a program decides what language
to use, at least subconsciously, based on the total number of
characters he'll have to type. If this isn't precisely how hackers
think, a language designer would do well to act as if it were.

It is a mistake to try to baby the user with long-winded expressions
that are meant to resemble English. Cobol is notorious for this
flaw. A hacker would consider being asked to write

    add x to y giving z

instead of

    z = x+y

as something between an insult to his intelligence and a sin against
God.

It has sometimes been said that Lisp should use first and rest
instead of car and cdr, because it would make programs easier to
read. Maybe for the first couple hours. But a hacker can learn
quickly enough that car means the first element of a list and cdr
means the rest. Using first and rest means 50% more typing. And
they are also different lengths, meaning that the arguments won't
line up when they're called, as car and cdr often are, in successive
lines. I've found that it matters a lot how code lines up on the
page. I can barely read Lisp code when it is set in a variable-width
font, and friends say this is true for other languages too.

Brevity is one place where strongly typed languages lose. All other
things being equal, no one wants to begin a program with a bunch
of declarations. Anything that can be implicit, should be.

The individual tokens should be short as well. Perl and Common Lisp
occupy opposite poles on this question. Perl programs can be almost
cryptically dense, while the names of built-in Common Lisp operators
are comically long. The designers of Common Lisp probably expected
users to have text editors that would type these long names for
them. But the cost of a long name is not just the cost of typing
it. There is also the cost of reading it, and the cost of the space
it takes up on your screen.

4 Hackability

There is one thing more important than brevity to a hacker: being
able to do what you want. In the history of programming languages
a surprising amount of effort has gone into preventing programmers
from doing things considered to be improper. This is a dangerously
presumptuous plan. How can the language designer know what the
programmer is going to need to do? I think language designers would
do better to consider their target user to be a genius who will
need to do things they never anticipated, rather than a bumbler
who needs to be protected from himself. The bumbler will shoot
himself in the foot anyway. You may save him from referring to
variables in another package, but you can't save him from writing
a badly designed program to solve the wrong problem, and taking
forever to do it.

Good programmers often want to do dangerous and unsavory things.
By unsavory I mean things that go behind whatever semantic facade
the language is trying to present: getting hold of the internal
representation of some high-level abstraction, for example. Hackers
like to hack, and hacking means getting inside things and second
guessing the original designer.

Let yourself be second guessed. When you make any tool, people use
it in ways you didn't intend, and this is especially true of a
highly articulated tool like a programming language. Many a hacker
will want to tweak your semantic model in a way that you never
imagined. I say, let them; give the programmer access to as much
internal stuff as you can without endangering runtime systems like
the garbage collector.

In Common Lisp I have often wanted to iterate through the fields
of a struct — to comb out references to a deleted object, for example,
or find fields that are uninitialized. I know the structs are just
vectors underneath. And yet I can't write a general purpose function
that I can call on any struct. I can only access the fields by
name, because that's what a struct is supposed to mean.

A hacker may only want to subvert the intended model of things once
or twice in a big program. But what a difference it makes to be
able to. And it may be more than a question of just solving a
problem. There is a kind of pleasure here too. Hackers share the
surgeon's secret pleasure in poking about in gross innards, the
teenager's secret pleasure in popping zits. [2] For boys, at least,
certain kinds of horrors are fascinating. Maxim magazine publishes
an annual volume of photographs, containing a mix of pin-ups and
grisly accidents. They know their audience.

Historically, Lisp has been good at letting hackers have their way.
The political correctness of Common Lisp is an aberration. Early
Lisps let you get your hands on everything. A good deal of that
spirit is, fortunately, preserved in macros. What a wonderful thing,
to be able to make arbitrary transformations on the source code.

Classic macros are a real hacker's tool — simple, powerful, and
dangerous. It's so easy to understand what they do: you call a
function on the macro's arguments, and whatever it returns gets
inserted in place of the macro call. Hygienic macros embody the
opposite principle. They try to protect you from understanding what
they're doing. I have never heard hygienic macros explained in one
sentence. And they are a classic example of the dangers of deciding
what programmers are allowed to want. Hygienic macros are intended
to protect me from variable capture, among other things, but variable
capture is exactly what I want in some macros.

A really good language should be both clean and dirty: cleanly
designed, with a small core of well understood and highly orthogonal
operators, but dirty in the sense that it lets hackers have their
way with it. C is like this. So were the early Lisps. A real hacker's
language will always have a slightly raffish character.

A good programming language should have features that make the kind
of people who use the phrase "software engineering" shake their
heads disapprovingly. At the other end of the continuum are languages
like Ada and Pascal, models of propriety that are good for teaching
and not much else.

5 Throwaway Programs

To be attractive to hackers, a language must be good for writing
the kinds of programs they want to write. And that means, perhaps
surprisingly, that it has to be good for writing throwaway programs.

A throwaway program is a program you write quickly for some limited
task: a program to automate some system administration task, or
generate test data for a simulation, or convert data from one format
to another. The surprising thing about throwaway programs is that,
like the "temporary" buildings built at so many American universities
during World War II, they often don't get thrown away. Many evolve
into real programs, with real features and real users.

I have a hunch that the best big programs begin life this way,
rather than being designed big from the start, like the Hoover Dam.
It's terrifying to build something big from scratch. When people
take on a project that's too big, they become overwhelmed. The
project either gets bogged down, or the result is sterile and
wooden: a shopping mall rather than a real downtown, Brasilia rather
than Rome, Ada rather than C.

Another way to get a big program is to start with a throwaway
program and keep improving it. This approach is less daunting, and
the design of the program benefits from evolution. I think, if one
looked, that this would turn out to be the way most big programs
were developed. And those that did evolve this way are probably
still written in whatever language they were first written in,
because it's rare for a program to be ported, except for political
reasons. And so, paradoxically, if you want to make a language that
is used for big systems, you have to make it good for writing
throwaway programs, because that's where big systems come from.

Perl is a striking example of this idea. It was not only designed
for writing throwaway programs, but was pretty much a throwaway
program itself. Perl began life as a collection of utilities for
generating reports, and only evolved into a programming language
as the throwaway programs people wrote in it grew larger. It was
not until Perl 5 (if then) that the language was suitable for
writing serious programs, and yet it was already massively popular.

What makes a language good for throwaway programs? To start with,
it must be readily available. A throwaway program is something that
you expect to write in an hour. So the language probably must
already be installed on the computer you're using. It can't be
something you have to install before you use it. It has to be there.
C was there because it came with the operating system. Perl was
there because it was originally a tool for system administrators,
and yours had already installed it.Being available means more than being installed, though. An
interactive language, with a command-line interface, is more
available than one that you have to compile and run separately. A
popular programming language should be interactive, and start up
fast.Another thing you want in a throwaway program is brevity. Brevity
is always attractive to hackers, and never more so than in a program
they expect to turn out in an hour.

6 Libraries

Of course the ultimate in brevity is to have the program already
written for you, and merely to call it. And this brings us to what
I think will be an increasingly important feature of programming
languages: library functions. Perl wins because it has large
libraries for manipulating strings. This class of library functions
is especially important for throwaway programs, which are often
originally written for converting or extracting data. Many Perl
programs probably begin as just a couple library calls stuck
together.I think a lot of the advances that happen in programming languages
in the next fifty years will have to do with library functions. I
think future programming languages will have libraries that are as
carefully designed as the core language. Programming language design
will not be about whether to make your language strongly or weakly
typed, or object oriented, or functional, or whatever, but about
how to design great libraries. The kind of language designers who
like to think about how to design type systems may shudder at this.
It's almost like writing applications! Too bad. Languages are for
programmers, and libraries are what programmers need.It's hard to design good libraries. It's not simply a matter of
writing a lot of code. Once the libraries get too big, it can
sometimes take longer to find the function you need than to write
the code yourself. Libraries need to be designed using a small set
of orthogonal operators, just like the core language. It ought to
be possible for the programmer to guess what library call will do
what he needs.Libraries are one place Common Lisp falls short. There are only
rudimentary libraries for manipulating strings, and almost none
for talking to the operating system. For historical reasons, Common
Lisp tries to pretend that the OS doesn't exist. And because you
can't talk to the OS, you're unlikely to be able to write a serious
program using only the built-in operators in Common Lisp. You have
to use some implementation-specific hacks as well, and in practice
these tend not to give you everything you want. Hackers would think
a lot more highly of Lisp if Common Lisp had powerful string
libraries and good OS support.

7 Syntax

Could a language with Lisp's syntax, or more precisely, lack of
syntax, ever become popular? I don't know the answer to this
question. I do think that syntax is not the main reason Lisp isn't
currently popular. Common Lisp has worse problems than unfamiliar
syntax. I know several programmers who are comfortable with prefix
syntax and yet use Perl by default, because it has powerful string
libraries and can talk to the OS.

There are two possible problems with prefix notation: that it is
unfamiliar to programmers, and that it is not dense enough. The
conventional wisdom in the Lisp world is that the first problem is
the real one. I'm not so sure. Yes, prefix notation makes ordinary
programmers panic. But I don't think ordinary programmers' opinions
matter. Languages become popular or unpopular based on what expert
hackers think of them, and I think expert hackers might be able to
deal with prefix notation. Perl syntax can be pretty incomprehensible,
but that has not stood in the way of Perl's popularity. If anything
it may have helped foster a Perl cult.A more serious problem is the diffuseness of prefix notation. For
expert hackers, that really is a problem. No one wants to write
(aref a x y) when they could write a[x,y].

In this particular case there is a way to finesse our way out of
the problem. If we treat data structures as if they were functions
on indexes, we could write (a x y) instead, which is even shorter
than the Perl form. Similar tricks may shorten other types of
expressions.
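
Standard Common Lisp won't let an array sit in the operator position
of a call, but as a rough sketch of the idea you can represent a table
as a closure over its storage, so that indexing is just a function
call; in the imagined Lisp the funcall would simply disappear.

    ;; Faking "data structures as functions on indexes" with a closure.
    (defun make-table (rows cols)
      (let ((a (make-array (list rows cols) :initial-element 0)))
        (lambda (x y) (aref a x y))))

    (let ((a (make-table 3 3)))
      (funcall a 1 2))      ; => 0; in the imagined Lisp, just (a 1 2)

We can get rid of (or make optional) a lot of parentheses by making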
indentation significant. That's how programmers read code anyway:
when indentation says one thing and delimiters say another, we go
by the indentation. Treating indentation as significant would
eliminate this common source of bugs as well as making programs
shorter.Sometimes infix syntax is easier to read. This is especially true
for math expressions. I've used Lisp my whole programming life and
I still don't find prefix math expressions natural. And yet it is
convenient, especially when you're generating code, to have operators
that take any number of arguments. So if we do have infix syntax,
it should probably be implemented as some kind of read-macro.
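
As a toy sketch of what such a read-macro might look like in Common
Lisp, here is a dispatch macro that turns a single binary infix
expression into prefix form at read time; a real version would of
course handle precedence and longer expressions.

    ;; #i(x + y) reads as (+ x y).  Only one binary operator,
    ;; just to show the mechanism.
    (set-dispatch-macro-character #\# #\i
      (lambda (stream char arg)
        (declare (ignore char arg))
        (destructuring-bind (a op b) (read stream t nil t)
          (list op a b))))

    ;; #i(1 + 2)   reads as (+ 1 2), which evaluates to 3

I don't think we should be religiously opposed to introducing syntax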
into Lisp, as long as it translates in a well-understood way into
underlying s-expressions. There is already a good deal of syntax
in Lisp. It's not necessarily bad to introduce more, as long as no
one is forced to use it. In Common Lisp, some delimiters are reserved
for the language, suggesting that at least some of the designers
intended to have more syntax in the future.One of the most egregiously unlispy pieces of syntax in Common Lisp
occurs in format strings; format is a language in its own right,
and that language is not Lisp. If there were a plan for introducing
more syntax into Lisp, format specifiers might be able to be included
in it. It would be a good thing if macros could generate format
specifiers the way they generate any other kind of code.
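
A one-line example shows what it means for format to be a language of
its own: the iteration below lives inside a control string, not in
s-expressions, so ordinary macros can neither take it apart nor build
it up.

    ;; format's control string is a little language of its own:
    (format t "~{~a~^, ~}~%" '(1 2 3))   ; prints 1, 2, 3
    ;; The directives ~{ ~a ~^ ~} are characters in a string, invisible
    ;; to macros, where a (dolist ...) would be code they could see.

An eminent Lisp hacker told me that his copy of CLTL falls open to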
the section on format. Mine too. This probably indicates room for
improvement. It may also mean that programs do a lot of I/O.

8 Efficiency

A good language, as everyone knows, should generate fast code. But
in practice I don't think fast code comes primarily from things
you do in the design of the language. As Knuth pointed out long
ago, speed only matters in certain critical bottlenecks. And as
many programmers have observed since, one is very often mistaken
about where these bottlenecks are.So, in practice, the way to get fast code is to have a very good
profiler, rather than by, say, making the language strongly typed.
You don't need to know the type of every argument in every call in
the program. You do need to be able to declare the types of arguments
in the bottlenecks. And even more, you need to be able to find out
where the bottlenecks are.
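
In Common Lisp terms that might look like the following sketch, where
the declarations are confined to the one function the profiler pointed
at and the rest of the program stays undeclared (dot is just a
hypothetical inner-loop example):

    ;; Declarations confined to a single bottleneck:
    (defun dot (xs ys)
      (declare (type (simple-array double-float (*)) xs ys)
               (optimize (speed 3) (safety 0)))
      (let ((sum 0d0))
        (declare (type double-float sum))
        (dotimes (i (length xs) sum)
          (incf sum (* (aref xs i) (aref ys i))))))

One complaint people have had with Lisp is that it's hard to tell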
what's expensive. This might be true. It might also be inevitable,
if you want to have a very abstract language. And in any case I
think good profiling would go a long way toward fixing the problem:
you'd soon learn what was expensive.Part of the problem here is social. Language designers like to
write fast compilers. That's how they measure their skill. They
think of the profiler as an add-on, at best. But in practice a good
profiler may do more to improve the speed of actual programs written
in the language than a compiler that generates fast code. Here,
again, language designers are somewhat out of touch with their
users. They do a really good job of solving slightly the wrong
problem.It might be a good idea to have an active profiler — to push
performance data to the programmer instead of waiting for him to
come asking for it. For example, the editor could display bottlenecks
in red when the programmer edits the source code. Another approach
would be to somehow represent what's happening in running programs.
This would be an especially big win in server-based applications,
where you have lots of running programs to look at. An active
profiler could show graphically what's happening in memory as a
program's running, or even make sounds that tell what's happening.Sound is a good cue to problems. In one place I worked, we had a
big board of dials showing what was happening to our web servers.
The hands were moved by little servomotors that made a slight noise
when they turned. I couldn't see the board from my desk, but I
found that I could tell immediately, by the sound, when there was
a problem with a server.It might even be possible to write a profiler that would automatically
detect inefficient algorithms. I would not be surprised if certain
patterns of memory access turned out to be sure signs of bad
algorithms. If there were a little guy running around inside the
computer executing our programs, he would probably have as long
and plaintive a tale to tell about his job as a federal government
employee. I often have a feeling that I'm sending the processor on
a lot of wild goose chases, but I've never had a good way to look
at what it's doing.A number of Lisps now compile into byte code, which is then executed
by an interpreter. This is usually done to make the implementation
easier to port, but it could be a useful language feature. It might
be a good idea to make the byte code an official part of the
language, and to allow programmers to use inline byte code in
bottlenecks. Then such optimizations would be portable too.The nature of speed, as perceived by the end-user, may be changing.
With the rise of server-based applications, more and more programs
may turn out to be I/O-bound. It will be worth making I/O fast.
The language can help with straightforward measures like simple,
fast, formatted output functions, and also with deep structural
changes like caching and persistent objects.Users are interested in response time. But another kind of efficiency
will be increasingly important: the number of simultaneous users
you can support per processor. Many of the interesting applications
written in the near future will be server-based, and the number of
users per server is the critical question for anyone hosting such
applications. In the capital cost of a business offering a server-based
application, this is the divisor.For years, efficiency hasn't mattered much in most end-user
applications. Developers have been able to assume that each user
would have an increasingly powerful processor sitting on their
desk. And by Parkinson's Law, software has expanded to use the
resources available. That will change with server-based applications.
In that world, the hardware and software will be supplied together.
For companies that offer server-based applications, it will make
a very big difference to the bottom line how many users they can
support per server.In some applications, the processor will be the limiting factor,
and execution speed will be the most important thing to optimize.
But often memory will be the limit; the number of simultaneous
users will be determined by the amount of memory you need for each
user's data. The language can help here too. Good support for
threads will enable all the users to share a single heap. It may
also help to have persistent objects and/or language level support
for lazy loading.

9 Time

The last ingredient a popular language needs is time. No one wants
to write programs in a language that might go away, as so many
programming languages do. So most hackers will tend to wait until
a language has been around for a couple years before even considering
using it.Inventors of wonderful new things are often surprised to discover
this, but you need time to get any message through to people. A
friend of mine rarely does anything the first time someone asks
him. He knows that people sometimes ask for things that they turn
out not to want. To avoid wasting his time, he waits till the third
or fourth time he's asked to do something; by then, whoever's asking
him may be fairly annoyed, but at least they probably really do
want whatever they're asking for.Most people have learned to do a similar sort of filtering on new
things they hear about. They don't even start paying attention
until they've heard about something ten times. They're perfectly
justified: the majority of hot new whatevers do turn out to be a
waste of time, and eventually go away. By delaying learning VRML,
I avoided having to learn it at all.So anyone who invents something new has to expect to keep repeating
their message for years before people will start to get it. We
wrote what was, as far as I know, the first web-server based
application, and it took us years to get it through to people that
it didn't have to be downloaded. It wasn't that they were stupid.
They just had us tuned out.The good news is, simple repetition solves the problem. All you
have to do is keep telling your story, and eventually people will
start to hear. It's not when people notice you're there that they
pay attention; it's when they notice you're still there.It's just as well that it usually takes a while to gain momentum.
Most technologies evolve a good deal even after they're first
launched — programming languages especially. Nothing could be better,
for a new technology, than a few years of being used only by a small
number of early adopters. Early adopters are sophisticated and
demanding, and quickly flush out whatever flaws remain in your
technology. When you only have a few users you can be in close
contact with all of them. And early adopters are forgiving when
you improve your system, even if this causes some breakage.There are two ways new technology gets introduced: the organic
growth method, and the big bang method. The organic growth method
is exemplified by the classic seat-of-the-pants underfunded garage
startup. A couple guys, working in obscurity, develop some new
technology. They launch it with no marketing and initially have
only a few (fanatically devoted) users. They continue to improve
the technology, and meanwhile their user base grows by word of
mouth. Before they know it, they're big.The other approach, the big bang method, is exemplified by the
VC-backed, heavily marketed startup. They rush to develop a product,
launch it with great publicity, and immediately (they hope) have
a large user base.Generally, the garage guys envy the big bang guys. The big bang
guys are smooth and confident and respected by the VCs. They can
afford the best of everything, and the PR campaign surrounding the
launch has the side effect of making them celebrities. The organic
growth guys, sitting in their garage, feel poor and unloved. And
yet I think they are often mistaken to feel sorry for themselves.
Organic growth seems to yield better technology and richer founders
than the big bang method. If you look at the dominant technologies
today, you'll find that most of them grew organically.This pattern doesn't only apply to companies. You see it in sponsored
research too. Multics and Common Lisp were big-bang projects, and
Unix and MacLisp were organic growth projects.

10 Redesign

"The best writing is rewriting," wrote E. B. White. Every good
writer knows this, and it's true for software too. The most important
part of design is redesign. Programming languages, especially,
don't get redesigned enough.To write good software you must simultaneously keep two opposing
ideas in your head. You need the young hacker's naive faith in
his abilities, and at the same time the veteran's skepticism. You
have to be able to think
how hard can it be? with one half of
your brain while thinking
it will never work with the other.The trick is to realize that there's no real contradiction here.
You want to be optimistic and skeptical about two different things.
You have to be optimistic about the possibility of solving the
problem, but skeptical about the value of whatever solution you've
got so far.People who do good work often think that whatever they're working
on is no good. Others see what they've done and are full of wonder,
but the creator is full of worry. This pattern is no coincidence:
it is the worry that made the work good.If you can keep hope and worry balanced, they will drive a project
forward the same way your two legs drive a bicycle forward. In the
first phase of the two-cycle innovation engine, you work furiously
on some problem, inspired by your confidence that you'll be able
to solve it. In the second phase, you look at what you've done in
the cold light of morning, and see all its flaws very clearly. But
as long as your critical spirit doesn't outweigh your hope, you'll
be able to look at your admittedly incomplete system, and think,
how hard can it be to get the rest of the way?, thereby continuing
the cycle.It's tricky to keep the two forces balanced. In young hackers,
optimism predominates. They produce something, are convinced it's
great, and never improve it. In old hackers, skepticism predominates,
and they won't even dare to take on ambitious projects.Anything you can do to keep the redesign cycle going is good. Prose
can be rewritten over and over until you're happy with it. But
software, as a rule, doesn't get redesigned enough. Prose has
readers, but software has users. If a writer rewrites an essay,
people who read the old version are unlikely to complain that their
thoughts have been broken by some newly introduced incompatibility.Users are a double-edged sword. They can help you improve your
language, but they can also deter you from improving it. So choose
your users carefully, and be slow to grow their number. Having
users is like optimization: the wise course is to delay it. Also,
as a general rule, you can at any given time get away with changing
more than you think. Introducing change is like pulling off a
bandage: the pain is a memory almost as soon as you feel it.Everyone knows that it's not a good idea to have a language designed
by a committee. Committees yield bad design. But I think the worst
danger of committees is that they interfere with redesign. It is
so much work to introduce changes that no one wants to bother.
Whatever a committee decides tends to stay that way, even if most
of the members don't like it.Even a committee of two gets in the way of redesign. This happens
particularly in the interfaces between pieces of software written
by two different people. To change the interface both have to agree
to change it at once. And so interfaces tend not to change at all,
which is a problem because they tend to be one of the most ad hoc
parts of any system.One solution here might be to design systems so that interfaces
are horizontal instead of vertical — so that modules are always
vertically stacked strata of abstraction. Then the interface will
tend to be owned by one of them. The lower of two levels will either
be a language in which the upper is written, in which case the
lower level will own the interface, or it will be a slave, in which
case the interface can be dictated by the upper level.

11 Lisp

What all this implies is that there is hope for a new Lisp. There
is hope for any language that gives hackers what they want, including
Lisp. I think we may have made a mistake in thinking that hackers
are turned off by Lisp's strangeness. This comforting illusion may
have prevented us from seeing the real problem with Lisp, or at
least Common Lisp, which is that it sucks for doing what hackers
want to do. A hacker's language needs powerful libraries and
something to hack. Common Lisp has neither. A hacker's language is
terse and hackable. Common Lisp is not.The good news is, it's not Lisp that sucks, but Common Lisp. If we
can develop a new Lisp that is a real hacker's language, I think
hackers will use it. They will use whatever language does the job.
All we have to do is make sure this new Lisp does some important
job better than other languages.History offers some encouragement. Over time, successive new
programming languages have taken more and more features from Lisp.
There is no longer much left to copy before the language you've
made is Lisp. The latest hot language, Python, is a watered-down
Lisp with infix syntax and no macros. A new Lisp would be a natural
step in this progression.I sometimes think that it would be a good marketing trick to call
it an improved version of Python. That sounds hipper than Lisp. To
many people, Lisp is a slow AI language with a lot of parentheses.
Fritz Kunze's official biography carefully avoids mentioning the
L-word. But my guess is that we shouldn't be afraid to call the
new Lisp Lisp. Lisp still has a lot of latent respect among the
very best hackers — the ones who took 6.001 and understood it, for
example. And those are the users you need to win.In "How to Become a Hacker," Eric Raymond describes Lisp as something
like Latin or Greek — a language you should learn as an intellectual
exercise, even though you won't actually use it:
Lisp is worth learning for the profound enlightenment experience
you will have when you finally get it; that experience will make
you a better programmer for the rest of your days, even if you
never actually use Lisp itself a lot.
If I didn't know Lisp, reading this would set me asking questions.
A language that would make me a better programmer, if it means
anything at all, means a language that would be better for programming.
And that is in fact the implication of what Eric is saying.As long as that idea is still floating around, I think hackers will
be receptive enough to a new Lisp, even if it is called Lisp. But
this Lisp must be a hacker's language, like the classic Lisps of
the 1970s. It must be terse, simple, and hackable. And it must have
powerful libraries for doing what hackers want to do now.In the matter of libraries I think there is room to beat languages
like Perl and Python at their own game. A lot of the new applications
that will need to be written in the coming years will be
server-based
applications. There's no reason a new Lisp shouldn't have string
libraries as good as Perl, and if this new Lisp also had powerful
libraries for server-based applications, it could be very popular.
Real hackers won't turn up their noses at a new tool that will let
them solve hard problems with a few library calls. Remember, hackers
are lazy.It could be an even bigger win to have core language support for
server-based applications. For example, explicit support for programs
with multiple users, or data ownership at the level of type tags.Server-based applications also give us the answer to the question
of what this new Lisp will be used to hack. It would not hurt to
make Lisp better as a scripting language for Unix. (It would be
hard to make it worse.) But I think there are areas where existing
languages would be easier to beat. I think it might be better to
follow the model of Tcl, and supply the Lisp together with a complete
system for supporting server-based applications. Lisp is a natural
fit for server-based applications. Lexical closures provide a way
to get the effect of subroutines when the ui is just a series of
web pages. S-expressions map nicely onto html, and macros are good
at generating it. There need to be better tools for writing
server-based applications, and there needs to be a new Lisp, and
the two would work very well together.
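
To make the point about s-expressions and macros concrete, here is a
minimal sketch of an html-generating macro in Common Lisp; a real
library would also handle attributes and escaping.

    ;; A toy macro that turns nested s-expressions into html strings.
    (defmacro html (expr)
      (if (atom expr)
          expr
          (destructuring-bind (tag &rest body) expr
            `(concatenate 'string
                          ,(format nil "<~(~a~)>" tag)
                          ,@(mapcar (lambda (e) `(html ,e)) body)
                          ,(format nil "</~(~a~)>" tag)))))

    ;; (html (p "hello " (b "world")))  =>  "<p>hello <b>world</b></p>"

12 The Dream Language

By way of summary, let's try describing the hacker's dream language.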
The dream language is
beautiful, clean, and terse. It has an
interactive toplevel that starts up fast. You can write programs
to solve common problems with very little code. Nearly all the
code in any program you write is code that's specific to your
application. Everything else has been done for you.The syntax of the language is brief to a fault. You never have to
type an unnecessary character, or even to use the shift key much.Using big abstractions you can write the first version of a program
very quickly. Later, when you want to optimize, there's a really
good profiler that tells you where to focus your attention. You
can make inner loops blindingly fast, even writing inline byte code
if you need to.There are lots of good examples to learn from, and the language is
intuitive enough that you can learn how to use it from examples in
a couple minutes. You don't need to look in the manual much. The
manual is thin, and has few warnings and qualifications.The language has a small core, and powerful, highly orthogonal
libraries that are as carefully designed as the core language. The
libraries all work well together; everything in the language fits
together like the parts in a fine camera. Nothing is deprecated,
or retained for compatibility. The source code of all the libraries
is readily available. It's easy to talk to the operating system
and to applications written in other languages.The language is built in layers. The higher-level abstractions are
built in a very transparent way out of lower-level abstractions,
which you can get hold of if you want.Nothing is hidden from you that doesn't absolutely have to be. The
language offers abstractions only as a way of saving you work,
rather than as a way of telling you what to do. In fact, the language
encourages you to be an equal participant in its design. You can
change everything about it, including even its syntax, and anything
you write has, as much as possible, the same status as what comes
predefined.

Notes

[1] Macros very close to the modern idea were proposed by Timothy
Hart in 1964, two years after Lisp 1.5 was released. What was
missing, initially, were ways to avoid variable capture and multiple
evaluation; Hart's examples are subject to both.
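
A small Common Lisp sketch of the two problems the note mentions: the
naive macro below evaluates its argument twice, and the usual fix,
binding the argument once to a gensym, is also how hand-written macros
avoid accidental capture.

    ;; Naive: (square (incf i)) expands to (* (incf i) (incf i)),
    ;; incrementing i twice.
    (defmacro square (x) `(* ,x ,x))

    ;; The standard fix: evaluate the argument once, into a fresh symbol
    ;; that cannot collide with the caller's variables.
    (defmacro safe-square (x)
      (let ((g (gensym)))
        `(let ((,g ,x)) (* ,g ,g))))

[2] In When the Air Hits Your Brain, neurosurgeon Frank Vertosick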
recounts a conversation in which his chief resident, Gary, talks
about the difference between surgeons and internists ("fleas"):
Gary and I ordered a large pizza and found an open booth. The
chief lit a cigarette. "Look at those goddamn fleas, jabbering
about some disease they'll see once in their lifetimes. That's
the trouble with fleas, they only like the bizarre stuff. They
hate their bread and butter cases. That's the difference between
us and the fucking fleas. See, we love big juicy lumbar disc
herniations, but they hate hypertension...."
It's hard to think of a lumbar disc herniation as juicy (except
literally). And yet I think I know what they mean. I've often had
a juicy bug to track down. Someone who's not a programmer would
find it hard to imagine that there could be pleasure in a bug.
Surely it's better if everything just works. In one way, it is.
And yet there is undeniably a grim satisfaction in hunting down
certain sorts of bugs. |
November 2009

I don't think Apple realizes how badly the App Store approval process
is broken. Or rather, I don't think they realize how much it matters
that it's broken.The way Apple runs the App Store has harmed their reputation with
programmers more than anything else they've ever done.
Their reputation with programmers used to be great.
It used to be the most common complaint you heard
about Apple was that their fans admired them too uncritically.
The App Store has changed that. Now a lot of programmers
have started to see Apple as evil.How much of the goodwill Apple once had with programmers have they
lost over the App Store? A third? Half? And that's just so far.
The App Store is an ongoing karma leak.

* * *

How did Apple get into this mess? Their fundamental problem is
that they don't understand software.They treat iPhone apps the way they treat the music they sell through
iTunes. Apple is the channel; they own the user; if you want to
reach users, you do it on their terms. The record labels agreed,
reluctantly. But this model doesn't work for software. It doesn't
work for an intermediary to own the user. The software business
learned that in the early 1980s, when companies like VisiCorp showed
that although the words "software" and "publisher" fit together,
the underlying concepts don't. Software isn't like music or books.
It's too complicated for a third party to act as an intermediary
between developer and user. And yet that's what Apple is trying
to be with the App Store: a software publisher. And a particularly
overreaching one at that, with fussy tastes and a rigidly enforced
house style.If software publishing didn't work in 1980, it works even less now
that software development has evolved from a small number of big
releases to a constant stream of small ones. But Apple doesn't
understand that either. Their model of product development derives
from hardware. They work on something till they think it's finished,
then they release it. You have to do that with hardware, but because
software is so easy to change, its design can benefit from evolution.
The standard way to develop applications now is to launch fast and
iterate. Which means it's a disaster to have long, random delays
each time you release a new version.Apparently Apple's attitude is that developers should be more careful
when they submit a new version to the App Store. They would say
that. But powerful as they are, they're not powerful enough to
turn back the evolution of technology. Programmers don't use
launch-fast-and-iterate out of laziness. They use it because it
yields the best results. By obstructing that process, Apple is
making them do bad work, and programmers hate that as much as Apple
would.How would Apple like it if when they discovered a serious bug in
OS X, instead of releasing a software update immediately, they had
to submit their code to an intermediary who sat on it for a month
and then rejected it because it contained an icon they didn't like?By breaking software development, Apple gets the opposite of what
they intended: the version of an app currently available in the App
Store tends to be an old and buggy one. One developer told me:
As a result of their process, the App Store is full of half-baked
applications. I make a new version almost every day that I release
to beta users. The version on the App Store feels old and crappy.
I'm sure that a lot of developers feel this way: One emotion is
"I'm not really proud about what's in the App Store", and it's
combined with the emotion "Really, it's Apple's fault."
Another wrote:
I believe that they think their approval process helps users by
ensuring quality. In reality, bugs like ours get through all the
time and then it can take 4-8 weeks to get that bug fix approved,
leaving users to think that iPhone apps sometimes just don't work.
Worse for Apple, these apps work just fine on other platforms
that have immediate approval processes.
Actually I suppose Apple has a third misconception: that all the
complaints about App Store approvals are not a serious problem.
They must hear developers complaining. But partners and suppliers
are always complaining. It would be a bad sign if they weren't;
it would mean you were being too easy on them. Meanwhile the iPhone
is selling better than ever. So why do they need to fix anything?They get away with maltreating developers, in the short term, because
they make such great hardware. I just bought a new 27" iMac a
couple days ago. It's fabulous. The screen's too shiny, and the
disk is surprisingly loud, but it's so beautiful that you can't
make yourself care.So I bought it, but I bought it, for the first time, with misgivings.
I felt the way I'd feel buying something made in a country with a
bad human rights record. That was new. In the past when I bought
things from Apple it was an unalloyed pleasure. Oh boy! They make
such great stuff. This time it felt like a Faustian bargain. They
make such great stuff, but they're such assholes. Do I really want
to support this company?

* * *

Should Apple care what people like me think? What difference does
it make if they alienate a small minority of their users?There are a couple reasons they should care. One is that these
users are the people they want as employees. If your company seems
evil, the best programmers won't work for you. That hurt Microsoft
a lot starting in the 90s. Programmers started to feel sheepish
about working there. It seemed like selling out. When people from
Microsoft were talking to other programmers and they mentioned where
they worked, there were a lot of self-deprecating jokes about having
gone over to the dark side. But the real problem for Microsoft
wasn't the embarrassment of the people they hired. It was the
people they never got. And you know who got them? Google and
Apple. If Microsoft was the Empire, they were the Rebel Alliance.
And it's largely because they got more of the best people that
Google and Apple are doing so much better than Microsoft today.Why are programmers so fussy about their employers' morals? Partly
because they can afford to be. The best programmers can work
wherever they want. They don't have to work for a company they
have qualms about.But the other reason programmers are fussy, I think, is that evil
begets stupidity. An organization that wins by exercising power
starts to lose the ability to win by doing better work. And it's
not fun for a smart person to work in a place where the best ideas
aren't the ones that win. I think the reason Google embraced "Don't
be evil" so eagerly was not so much to impress the outside world
as to inoculate themselves against arrogance.
[1]That has worked for Google so far. They've become more
bureaucratic, but otherwise they seem to have held true to their
original principles. With Apple that seems less the case. When you
look at the famous
1984 ad
now, it's easier to imagine Apple as the
dictator on the screen than the woman with the hammer.
[2]
In fact, if you read the dictator's speech it sounds uncannily like a
prophecy of the App Store.
We have triumphed over the unprincipled dissemination of facts.We have created, for the first time in all history, a garden of
pure ideology, where each worker may bloom secure from the pests
of contradictory and confusing truths.
The other reason Apple should care what programmers think of them
is that when you sell a platform, developers make or break you. If
anyone should know this, Apple should. VisiCalc made the Apple II.And programmers build applications for the platforms they use. Most
applications—most startups, probably—grow out of personal projects.
Apple itself did. Apple made microcomputers because that's what
Steve Wozniak wanted for himself. He couldn't have afforded a
minicomputer.
[3]
Microsoft likewise started out making interpreters
for little microcomputers because
Bill Gates and Paul Allen were interested in using them. It's a
rare startup that doesn't build something the founders use.The main reason there are so many iPhone apps is that so many programmers
have iPhones. They may know, because they read it in an article,
that Blackberry has such and such market share. But in practice
it's as if RIM didn't exist. If they're going to build something,
they want to be able to use it themselves, and that means building
an iPhone app.So programmers continue to develop iPhone apps, even though Apple
continues to maltreat them. They're like someone stuck in an abusive
relationship. They're so attracted to the iPhone that they can't
leave. But they're looking for a way out. One wrote:
While I did enjoy developing for the iPhone, the control they
place on the App Store does not give me the drive to develop
applications as I would like. In fact I don't intend to make any
more iPhone applications unless absolutely necessary.
[4]
Can anything break this cycle? No device I've seen so far could.
Palm and RIM haven't a hope. The only credible contender is Android.
But Android is an orphan; Google doesn't really care about it, not
the way Apple cares about the iPhone. Apple cares about the iPhone
the way Google cares about search.

* * *

Is the future of handheld devices one locked down by Apple? It's
a worrying prospect. It would be a bummer to have another grim
monoculture like we had in the 1990s. In 1995, writing software
for end users was effectively identical with writing Windows
applications. Our horror at that prospect was the single biggest
thing that drove us to start building web apps.At least we know now what it would take to break Apple's lock.
You'd have to get iPhones out of programmers' hands. If programmers
used some other device for mobile web access, they'd start to develop
apps for that instead.How could you make a device programmers liked better than the iPhone?
It's unlikely you could make something better designed. Apple
leaves no room there. So this alternative device probably couldn't
win on general appeal. It would have to win by virtue of some
appeal it had to programmers specifically.One way to appeal to programmers is with software. If you
could think of an application programmers had to have, but that
would be impossible in the circumscribed world of the iPhone,
you could presumably get them to switch.That would definitely happen if programmers started to use handhelds
as development machines—if handhelds displaced laptops the
way laptops displaced desktops. You need more control of a development
machine than Apple will let you have over an iPhone.Could anyone make a device that you'd carry around in your pocket
like a phone, and yet would also work as a development machine?
It's hard to imagine what it would look like. But I've learned
never to say never about technology. A phone-sized device that
would work as a development machine is no more miraculous by present
standards than the iPhone itself would have seemed by the standards
of 1995.My current development machine is a MacBook Air, which I use with
an external monitor and keyboard in my office, and by itself when
traveling. If there was a version half the size I'd prefer it.
That still wouldn't be small enough to carry around everywhere like
a phone, but we're within a factor of 4 or so. Surely that gap is
bridgeable. In fact, let's make it an
RFS. Wanted:
Woman with hammer.

Notes

[1]
When Google adopted "Don't be evil," they were still so small
that no one would have expected them to be, yet.
[2]
The dictator in the 1984 ad isn't Microsoft, incidentally;
it's IBM. IBM seemed a lot more frightening in those days, but
they were friendlier to developers than Apple is now.[3]
He couldn't even afford a monitor. That's why the Apple
I used a TV as a monitor.[4]
Several people I talked to mentioned how much they liked the
iPhone SDK. The problem is not Apple's products but their policies.
Fortunately policies are software; Apple can change them instantly
if they want to. Handy that, isn't it?Thanks to Sam Altman, Trevor Blackwell, Ross Boucher,
James Bracy, Gabor Cselle,
Patrick Collison, Jason Freedman, John Gruber, Joe Hewitt, Jessica Livingston,
Robert Morris, Teng Siong Ong, Nikhil Pandit, Savraj Singh, and Jared Tame for reading drafts of this. |
May 2006

(This essay is derived from a keynote at Xtech.)

Could you reproduce Silicon Valley elsewhere, or is there something
unique about it?It wouldn't be surprising if it were hard to reproduce in other
countries, because you couldn't reproduce it in most of the US
either. What does it take to make a silicon valley even here?What it takes is the right people. If you could get the right ten
thousand people to move from Silicon Valley to Buffalo, Buffalo
would become Silicon Valley.
[1]That's a striking departure from the past. Up till a couple decades
ago, geography was destiny for cities. All great cities were located
on waterways, because cities made money by trade, and water was the
only economical way to ship.Now you could make a great city anywhere, if you could get the right
people to move there. So the question of how to make a silicon
valley becomes: who are the right people, and how do you get them
to move?

Two Types

I think you only need two kinds of people to create a technology
hub: rich people and nerds. They're the limiting reagents in the
reaction that produces startups, because they're the only ones
present when startups get started. Everyone else will move.Observation bears this out: within the US, towns have become startup
hubs if and only if they have both rich people and nerds. Few
startups happen in Miami, for example, because although it's full
of rich people, it has few nerds. It's not the kind of place nerds
like.Whereas Pittsburgh has the opposite problem: plenty of nerds, but
no rich people. The top US Computer Science departments are said
to be MIT, Stanford, Berkeley, and Carnegie-Mellon. MIT yielded
Route 128. Stanford and Berkeley yielded Silicon Valley. But
Carnegie-Mellon? The record skips at that point. Lower down the
list, the University of Washington yielded a high-tech community
in Seattle, and the University of Texas at Austin yielded one in
Austin. But what happened in Pittsburgh? And in Ithaca, home of
Cornell, which is also high on the list?I grew up in Pittsburgh and went to college at Cornell, so I can
answer for both. The weather is terrible, particularly in winter,
and there's no interesting old city to make up for it, as there is
in Boston. Rich people don't want to live in Pittsburgh or Ithaca.
So while there are plenty of hackers who could start startups,
there's no one to invest in them.

Not Bureaucrats

Do you really need the rich people? Wouldn't it work to have the
government invest in the nerds? No, it would not. Startup investors
are a distinct type of rich people. They tend to have a lot of
experience themselves in the technology business. This (a) helps
them pick the right startups, and (b) means they can supply advice
and connections as well as money. And the fact that they have a
personal stake in the outcome makes them really pay attention.Bureaucrats by their nature are the exact opposite sort of people
from startup investors. The idea of them making startup investments
is comic. It would be like mathematicians running Vogue-- or
perhaps more accurately, Vogue editors running a math journal.
[2]Though indeed, most things bureaucrats do, they do badly. We just
don't notice usually, because they only have to compete against
other bureaucrats. But as startup investors they'd have to compete
against pros with a great deal more experience and motivation.Even corporations that have in-house VC groups generally forbid
them to make their own investment decisions. Most are only allowed
to invest in deals where some reputable private VC firm is willing
to act as lead investor.

Not Buildings

If you go to see Silicon Valley, what you'll see are buildings.
But it's the people that make it Silicon Valley, not the buildings.
I read occasionally about attempts to set up "technology
parks" in other places, as if the active ingredient of Silicon
Valley were the office space. An article about Sophia Antipolis
bragged that companies there included Cisco, Compaq, IBM, NCR, and
Nortel. Don't the French realize these aren't startups?Building office buildings for technology companies won't get you a
silicon valley, because the key stage in the life of a startup
happens before they want that kind of space. The key stage is when
they're three guys operating out of an apartment. Wherever the
startup is when it gets funded, it will stay. The defining quality
of Silicon Valley is not that Intel or Apple or Google have offices
there, but that they were started there.So if you want to reproduce Silicon Valley, what you need to reproduce
is those two or three founders sitting around a kitchen table
deciding to start a company. And to reproduce that you need those
people.

Universities

The exciting thing is, all you need are the people. If you could
attract a critical mass of nerds and investors to live somewhere,
you could reproduce Silicon Valley. And both groups are highly
mobile. They'll go where life is good. So what makes a place good
to them?What nerds like is other nerds. Smart people will go wherever other
smart people are. And in particular, to great universities. In
theory there could be other ways to attract them, but so far
universities seem to be indispensable. Within the US, there are
no technology hubs without first-rate universities-- or at least,
first-rate computer science departments.So if you want to make a silicon valley, you not only need a
university, but one of the top handful in the world. It has to be
good enough to act as a magnet, drawing the best people from thousands
of miles away. And that means it has to stand up to existing magnets
like MIT and Stanford.This sounds hard. Actually it might be easy. My professor friends,
when they're deciding where they'd like to work, consider one thing
above all: the quality of the other faculty. What attracts professors
is good colleagues. So if you managed to recruit, en masse, a
significant number of the best young researchers, you could create
a first-rate university from nothing overnight. And you could do
that for surprisingly little. If you paid 200 people hiring bonuses
of $3 million apiece, you could put together a faculty that would
bear comparison with any in the world. And from that point the
chain reaction would be self-sustaining. So whatever it costs to
establish a mediocre university, for an additional half billion or
so you could have a great one.
[3]

Personality

However, merely creating a new university would not be enough to
start a silicon valley. The university is just the seed. It has
to be planted in the right soil, or it won't germinate. Plant it
in the wrong place, and you just create Carnegie-Mellon.To spawn startups, your university has to be in a town that has
attractions other than the university. It has to be a place where
investors want to live, and students want to stay after they graduate.The two like much the same things, because most startup investors
are nerds themselves. So what do nerds look for in a town? Their
tastes aren't completely different from other people's, because a
lot of the towns they like most in the US are also big tourist
destinations: San Francisco, Boston, Seattle. But their tastes
can't be quite mainstream either, because they dislike other big
tourist destinations, like New York, Los Angeles, and Las Vegas.There has been a lot written lately about the "creative class." The
thesis seems to be that as wealth derives increasingly from ideas,
cities will prosper only if they attract those who have them. That
is certainly true; in fact it was the basis of Amsterdam's prosperity
400 years ago.A lot of nerd tastes they share with the creative class in general.
For example, they like well-preserved old neighborhoods instead of
cookie-cutter suburbs, and locally-owned shops and restaurants
instead of national chains. Like the rest of the creative class,
they want to live somewhere with personality.What exactly is personality? I think it's the feeling that each
building is the work of a distinct group of people. A town with
personality is one that doesn't feel mass-produced. So if you want
to make a startup hub-- or any town to attract the "creative class"--
you probably have to ban large development projects.
When a large tract has been developed by a single organization, you
can always tell.
[4]Most towns with personality are old, but they don't have to be.
Old towns have two advantages: they're denser, because they were
laid out before cars, and they're more varied, because they were
built one building at a time. You could have both now. Just have
building codes that ensure density, and ban large scale developments.A corollary is that you have to keep out the biggest developer of
all: the government. A government that asks "How can we build a
silicon valley?" has probably ensured failure by the way they framed
the question. You don't build a silicon valley; you let one grow.

Nerds

If you want to attract nerds, you need more than a town with
personality. You need a town with the right personality. Nerds
are a distinct subset of the creative class, with different tastes
from the rest. You can see this most clearly in New York, which
attracts a lot of creative people, but few nerds.
[5]What nerds like is the kind of town where people walk around smiling.
This excludes LA, where no one walks at all, and also New York,
where people walk, but not smiling. When I was in grad school in
Boston, a friend came to visit from New York. On the subway back
from the airport she asked "Why is everyone smiling?" I looked and
they weren't smiling. They just looked like they were compared to
the facial expressions she was used to.If you've lived in New York, you know where these facial expressions
come from. It's the kind of place where your mind may be excited,
but your body knows it's having a bad time. People don't so much
enjoy living there as endure it for the sake of the excitement.
And if you like certain kinds of excitement, New York is incomparable.
It's a hub of glamour, a magnet for all the shorter half-life
isotopes of style and fame.Nerds don't care about glamour, so to them the appeal of New York
is a mystery. People who like New York will pay a fortune for a
small, dark, noisy apartment in order to live in a town where the
cool people are really cool. A nerd looks at that deal and sees
only: pay a fortune for a small, dark, noisy apartment.Nerds will pay a premium to live in a town where the smart people
are really smart, but you don't have to pay as much for that. It's
supply and demand: glamour is popular, so you have to pay a lot for
it.Most nerds like quieter pleasures. They like cafes instead of
clubs; used bookshops instead of fashionable clothing shops; hiking
instead of dancing; sunlight instead of tall buildings. A nerd's
idea of paradise is Berkeley or Boulder.

Youth

It's the young nerds who start startups, so it's those specifically
the city has to appeal to. The startup hubs in the US are all
young-feeling towns. This doesn't mean they have to be new.
Cambridge has the oldest town plan in America, but it feels young
because it's full of students.What you can't have, if you want to create a silicon valley, is a
large, existing population of stodgy people. It would be a waste
of time to try to reverse the fortunes of a declining industrial town
like Detroit or Philadelphia by trying to encourage startups. Those
places have too much momentum in the wrong direction. You're better
off starting with a blank slate in the form of a small town. Or
better still, if there's a town young people already flock to, that
one.The Bay Area was a magnet for the young and optimistic for decades
before it was associated with technology. It was a place people
went in search of something new. And so it became synonymous with
California nuttiness. There's still a lot of that there. If you
wanted to start a new fad-- a new way to focus one's "energy," for
example, or a new category of things not to eat-- the Bay Area would
be the place to do it. But a place that tolerates oddness in the
search for the new is exactly what you want in a startup hub, because
economically that's what startups are. Most good startup ideas
seem a little crazy; if they were obviously good ideas, someone
would have done them already.(How many people are going to want computers in their houses?
What, another search engine?)That's the connection between technology and liberalism. Without
exception the high-tech cities in the US are also the most liberal.
But it's not because liberals are smarter that this is so. It's
because liberal cities tolerate odd ideas, and smart people by
definition have odd ideas.Conversely, a town that gets praised for being "solid" or representing
"traditional values" may be a fine place to live, but it's never
going to succeed as a startup hub. The 2004 presidential election,
though a disaster in other respects, conveniently supplied us with
a county-by-county
map of such places.
[6]To attract the young, a town must have an intact center. In most
American cities the center has been abandoned, and the growth, if
any, is in the suburbs. Most American cities have been turned
inside out. But none of the startup hubs has: not San Francisco,
or Boston, or Seattle. They all have intact centers.
[7]
My guess is that no city with a dead center could be turned into a
startup hub. Young people don't want to live in the suburbs.Within the US, the two cities I think could most easily be turned
into new silicon valleys are Boulder and Portland. Both have the
kind of effervescent feel that attracts the young. They're each
only a great university short of becoming a silicon valley, if they
wanted to.

Time

A great university near an attractive town. Is that all it takes?
That was all it took to make the original Silicon Valley. Silicon
Valley traces its origins to William Shockley, one of the inventors
of the transistor. He did the research that won him the Nobel Prize
at Bell Labs, but when he started his own company in 1956 he moved
to Palo Alto to do it. At the time that was an odd thing to do.
Why did he? Because he had grown up there and remembered how nice
it was. Now Palo Alto is suburbia, but then it was a charming
college town-- a charming college town with perfect weather and San
Francisco only an hour away.The companies that rule Silicon Valley now are all descended in
various ways from Shockley Semiconductor. Shockley was a difficult
man, and in 1957 his top people-- "the traitorous eight"-- left to
start a new company, Fairchild Semiconductor. Among them were
Gordon Moore and Robert Noyce, who went on to found Intel, and
Eugene Kleiner, who founded the VC firm Kleiner Perkins. Forty-two
years later, Kleiner Perkins funded Google, and the partner responsible
for the deal was John Doerr, who came to Silicon Valley in 1974 to
work for Intel.So although a lot of the newest companies in Silicon Valley don't
make anything out of silicon, there always seem to be multiple links
back to Shockley. There's a lesson here: startups beget startups.
People who work for startups start their own. People who get rich
from startups fund new ones. I suspect this kind of organic growth
is the only way to produce a startup hub, because it's the only way
to grow the expertise you need.That has two important implications. The first is that you need
time to grow a silicon valley. The university you could create in
a couple years, but the startup community around it has to grow
organically. The cycle time is limited by the time it takes a
company to succeed, which probably averages about five years.The other implication of the organic growth hypothesis is that you
can't be somewhat of a startup hub. You either have a self-sustaining
chain reaction, or not. Observation confirms this too: cities
either have a startup scene, or they don't. There is no middle
ground. Chicago has the third largest metropolitan area in America.
As a source of startups it's negligible compared to Seattle, number 15.
Semiconductor, though itself not very successful, was big enough.
It brought a critical mass of experts in an important new technology
together in a place they liked enough to stay.

Competing

Of course, a would-be silicon valley faces an obstacle the original
one didn't: it has to compete with Silicon Valley. Can that be
done? Probably.One of Silicon Valley's biggest advantages is its venture capital
firms. This was not a factor in Shockley's day, because VC funds
didn't exist. In fact, Shockley Semiconductor and Fairchild
Semiconductor were not startups at all in our sense. They were
subsidiaries-- of Beckman Instruments and Fairchild Camera and
Instrument respectively. Those companies were apparently willing
to establish subsidiaries wherever the experts wanted to live.Venture investors, however, prefer to fund startups within an hour's
drive. For one, they're more likely to notice startups nearby.
But when they do notice startups in other towns they prefer them
to move. They don't want to have to travel to attend board meetings,
and in any case the odds of succeeding are higher in a startup hub.The centralizing effect of venture firms is a double one: they cause
startups to form around them, and those draw in more startups through
acquisitions. And although the first may be weakening because it's
now so cheap to start some startups, the second seems as strong as ever.
Three of the most admired
"Web 2.0" companies were started outside the usual startup hubs,
but two of them have already been reeled in through acquisitions.Such centralizing forces make it harder for new silicon valleys to
get started. But by no means impossible. Ultimately power rests
with the founders. A startup with the best people will beat one
with funding from famous VCs, and a startup that was sufficiently
successful would never have to move. So a town that
could exert enough pull over the right people could resist and
perhaps even surpass Silicon Valley.For all its power, Silicon Valley has a great weakness: the paradise
Shockley found in 1956 is now one giant parking lot. San Francisco
and Berkeley are great, but they're forty miles away. Silicon
Valley proper is soul-crushing suburban sprawl. It
has fabulous weather, which makes it significantly better than the
soul-crushing sprawl of most other American cities. But a competitor
that managed to avoid sprawl would have real leverage. All a city
needs is to be the kind of place the next traitorous eight look at
and say "I want to stay here," and that would be enough to get the
chain reaction started.Notes[1]
It's interesting to consider how low this number could be
made. I suspect five hundred would be enough, even if they could
bring no assets with them. Probably just thirty, if I could pick them,
would be enough to turn Buffalo into a significant startup hub.[2]
Bureaucrats manage to allocate research funding moderately
well, but only because (like an in-house VC fund) they outsource
most of the work of selection. A professor at a famous university
who is highly regarded by his peers will get funding, pretty much
regardless of the proposal. That wouldn't work for startups, whose
founders aren't sponsored by organizations, and are often unknowns.[3]
You'd have to do it all at once, or at least a whole department
at a time, because people would be more likely to come if they
knew their friends were. And you should probably start from scratch,
rather than trying to upgrade an existing university, or much energy
would be lost in friction.[4]
Hypothesis: Any plan in which multiple independent buildings
are gutted or demolished to be "redeveloped" as a single project
is a net loss of personality for the city, with the exception of
the conversion of buildings not previously public, like warehouses.[5]
A few startups get started in New York, but less
than a tenth as many per capita as in Boston, and mostly
in less nerdy fields like finance and media.[6]
Some blue counties are false positives (reflecting the
remaining power of Democratic party machines), but there are no
false negatives. You can safely write off all the red counties.[7]
Some "urban renewal" experts took a shot at destroying Boston's
in the 1960s, leaving the area around city hall a bleak wasteland,
but most neighborhoods successfully resisted them.Thanks to Chris Anderson, Trevor Blackwell, Marc Hedlund,
Jessica Livingston, Robert Morris, Greg Mcadoo, Fred Wilson,
and Stephen Wolfram for
reading drafts of this, and to Ed Dumbill for inviting me to speak.(The second part of this talk became Why Startups
Condense in America.) |
December 2019There are two distinct ways to be politically moderate: on purpose
and by accident. Intentional moderates are trimmers, deliberately
choosing a position mid-way between the extremes of right and left.
Accidental moderates end up in the middle, on average, because they
make up their own minds about each question, and the far right and
far left are roughly equally wrong.You can distinguish intentional from accidental moderates by the
distribution of their opinions. If the far left opinion on some
matter is 0 and the far right opinion 100, an intentional moderate's
opinion on every question will be near 50. Whereas an accidental
moderate's opinions will be scattered over a broad range, but will,
like those of the intentional moderate, average to about 50.Intentional moderates are similar to those on the far left and the
far right in that their opinions are, in a sense, not their own.
The defining quality of an ideologue, whether on the left or the
right, is to acquire one's opinions in bulk. You don't get to pick
and choose. Your opinions about taxation can be predicted from your
opinions about sex. And although intentional moderates
might seem to be the opposite of ideologues, their beliefs (though
in their case the word "positions" might be more accurate) are also
acquired in bulk. If the median opinion shifts to the right or left,
the intentional moderate must shift with it. Otherwise they stop
being moderate.Accidental moderates, on the other hand, not only choose their own
answers, but choose their own questions. They may not care at all
about questions that the left and right both think are terribly
important. So you can only even measure the politics of an accidental
moderate from the intersection of the questions they care about and
those the left and right care about, and this can
sometimes be vanishingly small.It is not merely a manipulative rhetorical trick to say "if you're
not with us, you're against us," but often simply false.Moderates are sometimes derided as cowards, particularly by
the extreme left. But while it may be accurate to call intentional
moderates cowards, openly being an accidental moderate requires the
most courage of all, because you get attacked from both right and
left, and you don't have the comfort of being an orthodox member
of a large group to sustain you.Nearly all the most impressive people I know are accidental moderates.
If I knew a lot of professional athletes, or people in the entertainment
business, that might be different. Being on the far left or far
right doesn't affect how fast you run or how well you sing. But
someone who works with ideas has to be independent-minded to do it
well.Or more precisely, you have to be independent-minded about the ideas
you work with. You could be mindlessly doctrinaire in your politics
and still be a good mathematician. In the 20th century, a lot of
very smart people were Marxists; just no one who was smart about
the subjects Marxism involves. But if the ideas you use in your
work intersect with the politics of your time, you have two choices:
be an accidental moderate, or be mediocre.Notes[1] It's possible in theory for one side to be entirely right and
the other to be entirely wrong. Indeed, ideologues must always
believe this is the case. But historically it rarely has been.[2] For some reason the far right tend to ignore moderates rather
than despise them as backsliders. I'm not sure why. Perhaps it
means that the far right is less ideological than the far left. Or
perhaps that they are more confident, or more resigned, or simply
more disorganized. I just don't know.[3] Having heretical opinions doesn't mean you have to express
them openly. It may be
easier to have them if you don't.
Thanks to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica Livingston,
Amjad Masad, Ryan Petersen, and Harj Taggar for reading drafts of this. |
Want to start a startup? Get funded by
Y Combinator.
October 2010After barely changing at all for decades, the startup funding
business is now in what could, at least by comparison, be called
turmoil. At Y Combinator we've seen dramatic changes in the funding
environment for startups. Fortunately one of them is much higher
valuations.The trends we've been seeing are probably not YC-specific. I wish
I could say they were, but the main cause is probably just that we
see trends first—partly because the startups we fund are very
plugged into the Valley and are quick to take advantage of anything
new, and partly because we fund so many that we have enough data
points to see patterns clearly.What we're seeing now, everyone's probably going to be seeing in
the next couple years. So I'm going to explain what we're seeing,
and what that will mean for you if you try to raise money.Super-AngelsLet me start by describing what the world of startup funding used
to look like. There used to be two sharply differentiated types
of investors: angels and venture capitalists. Angels are individual
rich people who invest small amounts of their own money, while VCs
are employees of funds that invest large amounts of other people's.For decades there were just those two types of investors, but now
a third type has appeared halfway between them: the so-called
super-angels.
[1]
And VCs have been provoked by their arrival
into making a lot of angel-style investments themselves. So the
previously sharp line between angels and VCs has become hopelessly
blurred.There used to be a no man's land between angels and VCs. Angels
would invest $20k to $50k apiece, and VCs usually a million or more.
So an angel round meant a collection of angel investments that
combined to maybe $200k, and a VC round meant a series A round in
which a single VC fund (or occasionally two) invested $1-5 million.The no man's land between angels and VCs was a very inconvenient
one for startups, because it coincided with the amount many wanted
to raise. Most startups coming out of Demo Day wanted to raise
around $400k. But it was a pain to stitch together that much out
of angel investments, and most VCs weren't interested in investments
so small. That's the fundamental reason the super-angels have
appeared. They're responding to the market.The arrival of a new type of investor is big news for startups,
because there used to be only two and they rarely competed with one
another. Super-angels compete with both angels and VCs. That's
going to change the rules about how to raise money. I don't know
yet what the new rules will be, but it looks like most of the changes
will be for the better.A super-angel has some of the qualities of an angel, and some of
the qualities of a VC. They're usually individuals, like angels.
In fact many of the current super-angels were initially angels of
the classic type. But like VCs, they invest other people's money.
This allows them to invest larger amounts than angels: a typical
super-angel investment is currently about $100k. They make investment
decisions quickly, like angels. And they make a lot more investments
per partner than VCs—up to 10 times as many.The fact that super-angels invest other people's money makes them
doubly alarming to VCs. They don't just compete for startups; they
also compete for investors. What super-angels really are is a new
form of fast-moving, lightweight VC fund. And those of us in the
technology world know what usually happens when something comes
along that can be described in terms like that. Usually it's the
replacement.Will it be? As of now, few of the startups that take money from
super-angels are ruling out taking VC money. They're just postponing
it. But that's still a problem for VCs. Some of the startups that
postpone raising VC money may do so well on the angel money they
raise that they never bother to raise more. And those who do raise
VC rounds will be able to get higher valuations when they do. If
the best startups get 10x higher valuations when they raise series
A rounds, that would cut VCs' returns from winners at least tenfold.
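A minimal sketch of the arithmetic behind that claim, with invented numbers for the check size, valuations, and exit; it ignores later dilution and the qualifications in note [2]. The same check written at a 10x higher valuation buys a tenth the ownership, so the multiple on the same eventual outcome is a tenth as large.

    # Hypothetical numbers: the same $5M series A check and the same eventual
    # $1B outcome, with the entry valuation 10x higher in the second case.
    investment = 5_000_000
    exit_value = 1_000_000_000

    for post_money in (25_000_000, 250_000_000):
        ownership = investment / post_money             # fraction of the company bought
        multiple = ownership * exit_value / investment  # return on that check, ignoring dilution
        print(f"post-money ${post_money:,}: own {ownership:.0%}, return {multiple:.0f}x")

    # post-money $25,000,000: own 20%, return 40x
    # post-money $250,000,000: own 2%, return 4x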
[2]So I think VC funds are seriously threatened by the super-angels.
But one thing that may save them to some extent is the uneven
distribution of startup outcomes: practically all the returns are
concentrated in a few big successes. The expected value of a startup
is the percentage chance it's Google. So to the extent that winning
is a matter of absolute returns, the super-angels could win practically
all the battles for individual startups and yet lose the war, if
they merely failed to get those few big winners. And there's a
chance that could happen, because the top VC funds have better
brands, and can also do more for their portfolio companies.
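A toy portfolio comparison, with outcome multiples invented purely for illustration, of that point about the concentration of returns: a fund that wins most of the individual deals but misses the one big winner still ends up far behind.

    # Invented outcome multiples for a cohort of 100 startups: one huge winner,
    # a few modest successes, and the rest returning nothing.
    outcomes = [500] + [2] * 9 + [0] * 90

    fund_a = outcomes[:10]   # backs only 10 companies, but gets the big winner
    fund_b = outcomes[1:]    # backs 99 of the 100, but misses the big winner

    print(sum(fund_a) / len(fund_a))  # average multiple ~51.8x
    print(sum(fund_b) / len(fund_b))  # average multiple ~0.18x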
[3]Because super-angels make more investments per partner, they have
less partner per investment. They can't pay as much attention to
you as a VC on your board could. How much is that extra attention
worth? It will vary enormously from one partner to another. There's
no consensus yet in the general case. So for now this is something
startups are deciding individually.Till now, VCs' claims about how much value they added were sort of
like the government's. Maybe they made you feel better, but you
had no choice in the matter, if you needed money on the scale only
VCs could supply. Now that VCs have competitors, that's going to
put a market price on the help they offer. The interesting thing
is, no one knows yet what it will be.Do startups that want to get really big need the sort of advice and
connections only the top VCs can supply? Or would super-angel money
do just as well? The VCs will say you need them, and the super-angels
will say you don't. But the truth is, no one knows yet, not even
the VCs and super-angels themselves. All the super-angels know
is that their new model seems promising enough to be worth trying,
and all the VCs know is that it seems promising enough to worry
about.RoundsWhatever the outcome, the conflict between VCs and super-angels is
good news for founders. And not just for the obvious reason that
more competition for deals means better terms. The whole shape of
deals is changing.One of the biggest differences between angels and VCs is the amount
of your company they want. VCs want a lot. In a series A round
they want a third of your company, if they can get it. They don't
care much how much they pay for it, but they want a lot because the
number of series A investments they can do is so small. In a
traditional series A investment, at least one partner from the VC
fund takes a seat on your board.
[4]
Since board seats last about
5 years and each partner can't handle more than about 10 at once,
that means a VC fund can only do about 2 series A deals per partner
per year. And that means they need to get as much of the company
as they can in each one. You'd have to be a very promising startup
indeed to get a VC to use up one of his 10 board seats for only a
few percent of you.Since angels generally don't take board seats, they don't have this
constraint. They're happy to buy only a few percent of you. And
although the super-angels are in most respects mini VC funds, they've
retained this critical property of angels. They don't take board
seats, so they don't need a big percentage of your company.Though that means you'll get correspondingly less attention from
them, it's good news in other respects. Founders never really liked
giving up as much equity as VCs wanted. It was a lot of the company
to give up in one shot. Most founders doing series A deals would
prefer to take half as much money for half as much stock, and then
see what valuation they could get for the second half of the stock
after using the first half of the money to increase its value. But
VCs never offered that option.Now startups have another alternative. Now it's easy to raise angel
rounds about half the size of series A rounds. Many of the startups
we fund are taking this route, and I predict that will be true of
startups in general.A typical big angel round might be $600k on a convertible note with
a valuation cap of $4 million premoney. Meaning that when the note
converts into stock (in a later round, or upon acquisition), the
investors in that round will get .6 / 4.6, or 13% of the company.
That's a lot less than the 30 to 40% of the company you usually
give up in a series A round if you do it so early.
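A small sketch of that conversion arithmetic. The $600k and the $4 million cap are the numbers from the example above; the helper function and the $10 million next-round valuation are hypothetical, and real notes vary in their exact terms (discounts, pre- vs post-money caps, and so on).

    def note_ownership(invested, cap, next_round_valuation):
        # The note converts at the lower of the cap and the next round's
        # valuation; the holders' share is invested / (valuation + invested).
        conversion_valuation = min(cap, next_round_valuation)
        return invested / (conversion_valuation + invested)

    # The example from the text: $600k on a note capped at $4M premoney.
    print(note_ownership(600_000, 4_000_000, 10_000_000))  # 0.6 / 4.6, about 13%

    # With no cap (see note [6] below), the note just converts at the next
    # round's valuation, however high, leaving the investors a smaller sliver.
    print(600_000 / (10_000_000 + 600_000))                # about 0.057, under 6%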
[5]But the advantage of these medium-sized rounds is not just that
they cause less dilution. You also lose less control. After an
angel round, the founders almost always still have control of the
company, whereas after a series A round they often don't. The
traditional board structure after a series A round is two founders,
two VCs, and a (supposedly) neutral fifth person. Plus series A
terms usually give the investors a veto over various kinds of
important decisions, including selling the company. Founders usually
have a lot of de facto control after a series A, as long as things
are going well. But that's not the same as just being able to do
what you want, like you could before.A third and quite significant advantage of angel rounds is that
they're less stressful to raise. Raising a traditional series A
round has in the past taken weeks, if not months. When a VC firm
can only do 2 deals per partner per year, they're careful about
which they do. To get a traditional series A round you have to go
through a series of meetings, culminating in a full partner meeting
where the firm as a whole says yes or no. That's the really scary
part for founders: not just that series A rounds take so long, but
at the end of this long process the VCs might still say no. The
chance of getting rejected after the full partner meeting averages
about 25%. At some firms it's over 50%.Fortunately for founders, VCs have been getting a lot faster.
Nowadays Valley VCs are more likely to take 2 weeks than 2 months.
But they're still not as fast as angels and super-angels, the most
decisive of whom sometimes decide in hours.Raising an angel round is not only quicker, but you get feedback
as it progresses. An angel round is not an all or nothing thing
like a series A. It's composed of multiple investors with varying
degrees of seriousness, ranging from the upstanding ones who commit
unequivocally to the jerks who give you lines like "come back to
me to fill out the round." You usually start collecting money from
the most committed investors and work your way out toward the
ambivalent ones, whose interest increases as the round fills up.But at each point you know how you're doing. If investors turn
cold you may have to raise less, but when investors in an angel
round turn cold the process at least degrades gracefully, instead
of blowing up in your face and leaving you with nothing, as happens
if you get rejected by a VC fund after a full partner meeting.
Whereas if investors seem hot, you can not only close the round
faster, but now that convertible notes are becoming the norm,
actually raise the price to reflect demand.ValuationHowever, the VCs have a weapon they can use against the super-angels,
and they have started to use it. VCs have started making angel-sized
investments too. The term "angel round" doesn't mean that all the
investors in it are angels; it just describes the structure of the
round. Increasingly the participants include VCs making investments
of a hundred thousand or two. And when VCs invest in angel rounds
they can do things that super-angels don't like. VCs are quite
valuation-insensitive in angel rounds—partly because they are
in general, and partly because they don't care that much about the
returns on angel rounds, which they still view mostly as a way to
recruit startups for series A rounds later. So VCs who invest in
angel rounds can blow up the valuations for angels and super-angels
who invest in them.
[6]Some super-angels seem to care about valuations. Several turned
down YC-funded startups after Demo Day because their valuations
were too high. This was not a problem for the startups; by definition
a high valuation means enough investors were willing to accept it.
But it was mysterious to me that the super-angels would quibble
about valuations. Did they not understand that the big returns
come from a few big successes, and that it therefore mattered far
more which startups you picked than how much you paid for them?After thinking about it for a while and observing certain other
signs, I have a theory that explains why the super-angels may be
smarter than they seem. It would make sense for super-angels to
want low valuations if they're hoping to invest in startups that
get bought early. If you're hoping to hit the next Google, you
shouldn't care if the valuation is 20 million. But if you're looking
for companies that are going to get bought for 30 million, you care.
If you invest at 20 and the company gets bought for 30, you only
get 1.5x. You might as well buy Apple.So if some of the super-angels were looking for companies that could
get acquired quickly, that would explain why they'd care about
valuations. But why would they be looking for those? Because
depending on the meaning of "quickly," it could actually be very
profitable. A company that gets acquired for 30 million is a failure
to a VC, but it could be a 10x return for an angel, and moreover,
a quick 10x return. Rate of return is what matters in
investing—not the multiple you get, but the multiple per year.
If a super-angel gets 10x in one year, that's a higher rate of
return than a VC could ever hope to get from a company that took 6
years to go public. To get the same rate of return, the VC would
have to get a multiple of 10^6—one million x. Even Google
didn't come close to that.So I think at least some super-angels are looking for companies
that will get bought. That's the only rational explanation for
focusing on getting the right valuations, instead of the right
companies. And if so they'll be different to deal with than VCs.
They'll be tougher on valuations, but more accommodating if you want
to sell early.PrognosisWho will win, the super-angels or the VCs? I think the answer to
that is, some of each. They'll each become more like one another.
The super-angels will start to invest larger amounts, and the VCs
will gradually figure out ways to make more, smaller investments
faster. A decade from now the players will be hard to tell apart,
and there will probably be survivors from each group.What does that mean for founders? One thing it means is that the
high valuations startups are presently getting may not last forever.
To the extent that valuations are being driven up by price-insensitive
VCs, they'll fall again if VCs become more like super-angels and
start to become more miserly about valuations. Fortunately if this
does happen it will take years.The short term forecast is more competition between investors, which
is good news for you. The super-angels will try to undermine the
VCs by acting faster, and the VCs will try to undermine the
super-angels by driving up valuations. Which for founders will
result in the perfect combination: funding rounds that close fast,
with high valuations.But remember that to get that combination, your startup will have
to appeal to both super-angels and VCs. If you don't seem like you
have the potential to go public, you won't be able to use VCs to
drive up the valuation of an angel round.There is a danger of having VCs in an angel round: the so-called
signalling risk. If VCs are only doing it in the hope of investing
more later, what happens if they don't? That's a signal to everyone
else that they think you're lame.How much should you worry about that? The seriousness of signalling
risk depends on how far along you are. If by the next time you
need to raise money, you have graphs showing rising revenue or
traffic month after month, you don't have to worry about any signals
your existing investors are sending. Your results will speak for
themselves.
[7]Whereas if the next time you need to raise money you won't yet have
concrete results, you may need to think more about the message your
investors might send if they don't invest more. I'm not sure yet
how much you have to worry, because this whole phenomenon of VCs
doing angel investments is so new. But my instincts tell me you
don't have to worry much. Signalling risk smells like one of those
things founders worry about that's not a real problem. As a rule,
the only thing that can kill a good startup is the startup itself.
Startups hurt themselves way more often than competitors hurt them,
for example. I suspect signalling risk is in this category too.One thing YC-funded startups have been doing to mitigate the risk
of taking money from VCs in angel rounds is not to take too much
from any one VC. Maybe that will help, if you have the luxury of
turning down money.Fortunately, more and more startups will. After decades of competition
that could best be described as intramural, the startup funding
business is finally getting some real competition. That should
last several years at least, and maybe a lot longer. Unless there's
some huge market crash, the next couple years are going to be a
good time for startups to raise money. And that's exciting because
it means lots more startups will happen.
Notes[1]
I've also heard them called "Mini-VCs" and "Micro-VCs." I
don't know which name will stick.There were a couple predecessors. Ron Conway had angel funds
starting in the 1990s, and in some ways First Round Capital is closer to a
super-angel than a VC fund.[2]
It wouldn't cut their overall returns tenfold, because investing
later would probably (a) cause them to lose less on investments
that failed, and (b) not allow them to get as large a percentage
of startups as they do now. So it's hard to predict precisely what
would happen to their returns.[3]
The brand of an investor derives mostly from the success of
their portfolio companies. The top VCs thus have a big brand
advantage over the super-angels. They could make it self-perpetuating
if they used it to get all the best new startups. But I don't think
they'll be able to. To get all the best startups, you have to do
more than make them want you. You also have to want them; you have
to recognize them when you see them, and that's much harder.
Super-angels will snap up stars that VCs miss. And that will cause
the brand gap between the top VCs and the super-angels gradually
to erode.[4]
Though in a traditional series A round VCs put two partners
on your board, there are signs now that VCs may begin to conserve
board seats by switching to what used to be considered an angel-round
board, consisting of two founders and one VC. Which is also to the
founders' advantage if it means they still control the company.[5]
In a series A round, you usually have to give up more than
the actual amount of stock the VCs buy, because they insist you
dilute yourselves to set aside an "option pool" as well. I predict
this practice will gradually disappear though.[6]
The best thing for founders, if they can get it, is a convertible
note with no valuation cap at all. In that case the money invested
in the angel round just converts into stock at the valuation of the
next round, no matter how large. Angels and super-angels tend not
to like uncapped notes. They have no idea how much of the company
they're buying. If the company does well and the valuation of the
next round is high, they may end up with only a sliver of it. So
by agreeing to uncapped notes, VCs who don't care about valuations
in angel rounds can make offers that super-angels hate to match.[7]
Obviously signalling risk is also not a problem if you'll
never need to raise more money. But startups are often mistaken
about that.Thanks to Sam Altman, John Bautista, Patrick Collison, James
Lindenbaum, Reid Hoffman, Jessica Livingston and Harj Taggar
for reading drafts
of this. |
February 2007A few days ago I finally figured out something I've wondered about
for 25 years: the relationship between wisdom and intelligence.
Anyone can see they're not the same by the number of people who are
smart, but not very wise. And yet intelligence and wisdom do seem
related. How?What is wisdom? I'd say it's knowing what to do in a lot of
situations. I'm not trying to make a deep point here about the
true nature of wisdom, just to figure out how we use the word. A
wise person is someone who usually knows the right thing to do.And yet isn't being smart also knowing what to do in certain
situations? For example, knowing what to do when the teacher tells
your elementary school class to add all the numbers from 1 to 100?
[1]Some say wisdom and intelligence apply to different types of
problems—wisdom to human problems and intelligence to abstract
ones. But that isn't true. Some wisdom has nothing to do with
people: for example, the wisdom of the engineer who knows certain
structures are less prone to failure than others. And certainly
smart people can find clever solutions to human problems as well
as abstract ones.
[2]Another popular explanation is that wisdom comes from experience
while intelligence is innate. But people are not simply wise in
proportion to how much experience they have. Other things must
contribute to wisdom besides experience, and some may be innate: a
reflective disposition, for example.Neither of the conventional explanations of the difference between
wisdom and intelligence stands up to scrutiny. So what is the
difference? If we look at how people use the words "wise" and
"smart," what they seem to mean is different shapes of performance.Curve"Wise" and "smart" are both ways of saying someone knows what to
do. The difference is that "wise" means one has a high average
outcome across all situations, and "smart" means one does spectacularly
well in a few. That is, if you had a graph in which the x axis
represented situations and the y axis the outcome, the graph of the
wise person would be high overall, and the graph of the smart person
would have high peaks.The distinction is similar to the rule that one should judge talent
at its best and character at its worst. Except you judge intelligence
at its best, and wisdom by its average. That's how the two are
related: they're the two different senses in which the same curve
can be high.So a wise person knows what to do in most situations, while a smart
person knows what to do in situations where few others could. We
need to add one more qualification: we should ignore cases where
someone knows what to do because they have inside information.
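Read as a model, the distinction is just two statistics of the same data: judge the average for wisdom and the peak for intelligence. A tiny sketch, with made-up outcome scores across situations:

    # Made-up outcome scores (0-10) across a range of situations.
    wise  = [7, 7, 8, 7, 7, 8, 7, 7, 7, 8]    # high average, no spectacular peaks
    smart = [3, 4, 10, 3, 4, 2, 10, 3, 4, 3]  # lower average, but high peaks

    for name, outcomes in (("wise", wise), ("smart", smart)):
        print(name, "average:", sum(outcomes) / len(outcomes), "peak:", max(outcomes))

    # prints averages of 7.3 vs 4.6, and peaks of 8 vs 10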
[3]
But aside from that, I don't think we can get much more specific
without starting to be mistaken.Nor do we need to. Simple as it is, this explanation predicts, or
at least accords with, both of the conventional stories about the
distinction between wisdom and intelligence. Human problems are
the most common type, so being good at solving those is key in
achieving a high average outcome. And it seems natural that a
high average outcome depends mostly on experience, but that dramatic
peaks can only be achieved by people with certain rare, innate
qualities; nearly anyone can learn to be a good swimmer, but to be
an Olympic swimmer you need a certain body type.This explanation also suggests why wisdom is such an elusive concept:
there's no such thing. "Wise" means something—that one is
on average good at making the right choice. But giving the name
"wisdom" to the supposed quality that enables one to do that doesn't
mean such a thing exists. To the extent "wisdom" means anything,
it refers to a grab-bag of qualities as various as self-discipline,
experience, and empathy.
[4]Likewise, though "intelligent" means something, we're asking for
trouble if we insist on looking for a single thing called "intelligence."
And whatever its components, they're not all innate. We use the
word "intelligent" as an indication of ability: a smart person can
grasp things few others could. It does seem likely there's some
inborn predisposition to intelligence (and wisdom too), but this
predisposition is not itself intelligence.One reason we tend to think of intelligence as inborn is that people
trying to measure it have concentrated on the aspects of it that
are most measurable. A quality that's inborn will obviously be
more convenient to work with than one that's influenced by experience,
and thus might vary in the course of a study. The problem comes
when we drag the word "intelligence" over onto what they're measuring.
If they're measuring something inborn, they can't be measuring
intelligence. Three year olds aren't smart. When we describe one
as smart, it's shorthand for "smarter than other three year olds."SplitPerhaps it's a technicality to point out that a predisposition to
intelligence is not the same as intelligence. But it's an important
technicality, because it reminds us that we can become smarter,
just as we can become wiser.The alarming thing is that we may have to choose between the two.If wisdom and intelligence are the average and peaks of the same
curve, then they converge as the number of points on the curve
decreases. If there's just one point, they're identical: the average
and maximum are the same. But as the number of points increases,
wisdom and intelligence diverge. And historically the number of
points on the curve seems to have been increasing: our ability is
tested in an ever wider range of situations.In the time of Confucius and Socrates, people seem to have regarded
wisdom, learning, and intelligence as more closely related than we
do. Distinguishing between "wise" and "smart" is a modern habit.
[5]
And the reason we do is that they've been diverging. As knowledge
gets more specialized, there are more points on the curve, and the
distinction between the spikes and the average becomes sharper,
like a digital image rendered with more pixels.One consequence is that some old recipes may have become obsolete.
At the very least we have to go back and figure out if they were
really recipes for wisdom or intelligence. But the really striking
change, as intelligence and wisdom drift apart, is that we may have
to decide which we prefer. We may not be able to optimize for both
simultaneously.Society seems to have voted for intelligence. We no longer admire
the sage—not the way people did two thousand years ago. Now
we admire the genius. Because in fact the distinction we began
with has a rather brutal converse: just as you can be smart without
being very wise, you can be wise without being very smart. That
doesn't sound especially admirable. That gets you James Bond, who
knows what to do in a lot of situations, but has to rely on Q for
the ones involving math.Intelligence and wisdom are obviously not mutually exclusive. In
fact, a high average may help support high peaks. But there are
reasons to believe that at some point you have to choose between
them. One is the example of very smart people, who are so often
unwise that in popular culture this now seems to be regarded as the
rule rather than the exception. Perhaps the absent-minded professor
is wise in his way, or wiser than he seems, but he's not wise in
the way Confucius or Socrates wanted people to be.
[6]NewFor both Confucius and Socrates, wisdom, virtue, and happiness were
necessarily related. The wise man was someone who knew what the
right choice was and always made it; to be the right choice, it had
to be morally right; he was therefore always happy, knowing he'd
done the best he could. I can't think of many ancient philosophers
who would have disagreed with that, so far as it goes."The superior man is always happy; the small man sad," said Confucius.
[7]Whereas a few years ago I read an interview with a mathematician
who said that most nights he went to bed discontented, feeling he
hadn't made enough progress.
[8]
The Chinese and Greek words we
translate as "happy" didn't mean exactly what we do by it, but
there's enough overlap that this remark contradicts them.Is the mathematician a small man because he's discontented? No;
he's just doing a kind of work that wasn't very common in Confucius's
day.Human knowledge seems to grow fractally. Time after time, something
that seemed a small and uninteresting area—experimental error,
even—turns out, when examined up close, to have as much in
it as all knowledge up to that point. Several of the fractal buds
that have exploded since ancient times involve inventing and
discovering new things. Math, for example, used to be something a
handful of people did part-time. Now it's the career of thousands.
And in work that involves making new things, some old rules don't
apply.Recently I've spent some time advising people, and there I find the
ancient rule still works: try to understand the situation as well
as you can, give the best advice you can based on your experience,
and then don't worry about it, knowing you did all you could. But
I don't have anything like this serenity when I'm writing an essay.
Then I'm worried. What if I run out of ideas? And when I'm writing,
four nights out of five I go to bed discontented, feeling I didn't
get enough done.Advising people and writing are fundamentally different types of
work. When people come to you with a problem and you have to figure
out the right thing to do, you don't (usually) have to invent
anything. You just weigh the alternatives and try to judge which
is the prudent choice. But prudence can't tell me what sentence
to write next. The search space is too big.Someone like a judge or a military officer can in much of his work
be guided by duty, but duty is no guide in making things. Makers
depend on something more precarious: inspiration. And like most
people who lead a precarious existence, they tend to be worried,
not contented. In that respect they're more like the small man of
Confucius's day, always one bad harvest (or ruler) away from
starvation. Except instead of being at the mercy of weather and
officials, they're at the mercy of their own imagination.LimitsTo me it was a relief just to realize it might be ok to be discontented.
The idea that a successful person should be happy has thousands of
years of momentum behind it. If I was any good, why didn't I have
the easy confidence winners are supposed to have? But that, I now
believe, is like a runner asking "If I'm such a good athlete, why
do I feel so tired?" Good runners still get tired; they just get
tired at higher speeds.People whose work is to invent or discover things are in the same
position as the runner. There's no way for them to do the best
they can, because there's no limit to what they could do. The
closest you can come is to compare yourself to other people. But
the better you do, the less this matters. An undergrad who gets
something published feels like a star. But for someone at the top
of the field, what's the test of doing well? Runners can at least
compare themselves to others doing exactly the same thing; if you
win an Olympic gold medal, you can be fairly content, even if you
think you could have run a bit faster. But what is a novelist to
do?Whereas if you're doing the kind of work in which problems are
presented to you and you have to choose between several alternatives,
there's an upper bound on your performance: choosing the best every
time. In ancient societies, nearly all work seems to have been of
this type. The peasant had to decide whether a garment was worth
mending, and the king whether or not to invade his neighbor, but
neither was expected to invent anything. In principle they could
have; the king could have invented firearms, then invaded his
neighbor. But in practice innovations were so rare that they weren't
expected of you, any more than goalkeepers are expected to score
goals.
[9]
In practice, it seemed as if there was a correct decision
in every situation, and if you made it you'd done your job perfectly,
just as a goalkeeper who prevents the other team from scoring is
considered to have played a perfect game.In this world, wisdom seemed paramount.
[10]
Even now, most people
do work in which problems are put before them and they have to
choose the best alternative. But as knowledge has grown more
specialized, there are more and more types of work in which people
have to make up new things, and in which performance is therefore
unbounded. Intelligence has become increasingly important relative
to wisdom because there is more room for spikes.RecipesAnother sign we may have to choose between intelligence and wisdom
is how different their recipes are. Wisdom seems to come largely
from curing childish qualities, and intelligence largely from
cultivating them.Recipes for wisdom, particularly ancient ones, tend to have a
remedial character. To achieve wisdom one must cut away all the
debris that fills one's head on emergence from childhood, leaving
only the important stuff. Both self-control and experience have
this effect: to eliminate the random biases that come from your own
nature and from the circumstances of your upbringing respectively.
That's not all wisdom is, but it's a large part of it. Much of
what's in the sage's head is also in the head of every twelve year
old. The difference is that in the head of the twelve year old
it's mixed together with a lot of random junk.The path to intelligence seems to be through working on hard problems.
You develop intelligence as you might develop muscles, through
exercise. But there can't be too much compulsion here. No amount
of discipline can replace genuine curiosity. So cultivating
intelligence seems to be a matter of identifying some bias in one's
character—some tendency to be interested in certain types of
things—and nurturing it. Instead of obliterating your
idiosyncrasies in an effort to make yourself a neutral vessel for
the truth, you select one and try to grow it from a seedling into
a tree.The wise are all much alike in their wisdom, but very smart people
tend to be smart in distinctive ways.Most of our educational traditions aim at wisdom. So perhaps one
reason schools work badly is that they're trying to make intelligence
using recipes for wisdom. Most recipes for wisdom have an element
of subjection. At the very least, you're supposed to do what the
teacher says. The more extreme recipes aim to break down your
individuality the way basic training does. But that's not the route
to intelligence. Whereas wisdom comes through humility, it may
actually help, in cultivating intelligence, to have a mistakenly
high opinion of your abilities, because that encourages you to keep
working. Ideally till you realize how mistaken you were.(The reason it's hard to learn new skills late in life is not just
that one's brain is less malleable. Another probably even worse
obstacle is that one has higher standards.)I realize we're on dangerous ground here. I'm not proposing the
primary goal of education should be to increase students' "self-esteem."
That just breeds laziness. And in any case, it doesn't really fool
the kids, not the smart ones. They can tell at a young age that a
contest where everyone wins is a fraud.A teacher has to walk a narrow path: you want to encourage kids to
come up with things on their own, but you can't simply applaud
everything they produce. You have to be a good audience: appreciative,
but not too easily impressed. And that's a lot of work. You have
to have a good enough grasp of kids' capacities at different ages
to know when to be surprised.That's the opposite of traditional recipes for education. Traditionally
the student is the audience, not the teacher; the student's job is
not to invent, but to absorb some prescribed body of material. (The
use of the term "recitation" for sections in some colleges is a
fossil of this.) The problem with these old traditions is that
they're too much influenced by recipes for wisdom.DifferentI deliberately gave this essay a provocative title; of course it's
worth being wise. But I think it's important to understand the
relationship between intelligence and wisdom, and particularly what
seems to be the growing gap between them. That way we can avoid
applying rules and standards to intelligence that are really meant
for wisdom. These two senses of "knowing what to do" are more
different than most people realize. The path to wisdom is through
discipline, and the path to intelligence through carefully selected
self-indulgence. Wisdom is universal, and intelligence idiosyncratic.
And while wisdom yields calmness, intelligence much of the time
leads to discontentment.That's particularly worth remembering. A physicist friend recently
told me half his department was on Prozac. Perhaps if we acknowledge
that some amount of frustration is inevitable in certain kinds
of work, we can mitigate its effects. Perhaps we can box it up and
put it away some of the time, instead of letting it flow together
with everyday sadness to produce what seems an alarmingly large
pool. At the very least, we can avoid being discontented about
being discontented.If you feel exhausted, it's not necessarily because there's something
wrong with you. Maybe you're just running fast.Notes[1]
Gauss was supposedly asked this when he was 10. Instead of
laboriously adding together the numbers like the other students,
he saw that they consisted of 50 pairs that each summed to 101 (100
+ 1, 99 + 2, etc), and that he could just multiply 101 by 50 to get
the answer, 5050.[2]
A variant is that intelligence is the ability to solve problems,
and wisdom the judgement to know how to use those solutions. But
while this is certainly an important relationship between wisdom
and intelligence, it's not the distinction between them. Wisdom
is useful in solving problems too, and intelligence can help in
deciding what to do with the solutions.[3]
In judging both intelligence and wisdom we have to factor out
some knowledge. People who know the combination of a safe will be
better at opening it than people who don't, but no one would say
that was a test of intelligence or wisdom.But knowledge overlaps with wisdom and probably also intelligence.
A knowledge of human nature is certainly part of wisdom. So where
do we draw the line?Perhaps the solution is to discount knowledge that at some point
has a sharp drop in utility. For example, understanding French
will help you in a large number of situations, but its value drops
sharply as soon as no one else involved knows French. Whereas the
value of understanding vanity would decline more gradually.The knowledge whose utility drops sharply is the kind that has
little relation to other knowledge. This includes mere conventions,
like languages and safe combinations, and also what we'd call
"random" facts, like movie stars' birthdays, or how to distinguish
1956 from 1957 Studebakers.[4]
People seeking some single thing called "wisdom" have been
fooled by grammar. Wisdom is just knowing the right thing to do,
and there are a hundred and one different qualities that help in
that. Some, like selflessness, might come from meditating in an
empty room, and others, like a knowledge of human nature, might
come from going to drunken parties.Perhaps realizing this will help dispel the cloud of semi-sacred
mystery that surrounds wisdom in so many people's eyes. The mystery
comes mostly from looking for something that doesn't exist. And
the reason there have historically been so many different schools
of thought about how to achieve wisdom is that they've focused on
different components of it.When I use the word "wisdom" in this essay, I mean no more than
whatever collection of qualities helps people make the right choice
in a wide variety of situations.[5]
Even in English, our sense of the word "intelligence" is
surprisingly recent. Predecessors like "understanding" seem to
have had a broader meaning.[6]
There is of course some uncertainty about how closely the remarks
attributed to Confucius and Socrates resemble their actual opinions.
I'm using these names as we use the name "Homer," to mean the
hypothetical people who said the things attributed to them.[7]
Analects VII:36, Fung trans.Some translators use "calm" instead of "happy." One source of
difficulty here is that present-day English speakers have a different
idea of happiness from many older societies. Every language probably
has a word meaning "how one feels when things are going well," but
different cultures react differently when things go well. We react
like children, with smiles and laughter. But in a more reserved
society, or in one where life was tougher, the reaction might be a
quiet contentment.[8]
It may have been Andrew Wiles, but I'm not sure. If anyone
remembers such an interview, I'd appreciate hearing from you.[9]
Confucius claimed proudly that he had never invented
anything—that he had simply passed on an accurate account of
ancient traditions. [Analects VII:1] It's hard for us now to
appreciate how important a duty it must have been in preliterate
societies to remember and pass on the group's accumulated knowledge.
Even in Confucius's time it still seems to have been the first duty
of the scholar.[10]
The bias toward wisdom in ancient philosophy may be exaggerated
by the fact that, in both Greece and China, many of the first
philosophers (including Confucius and Plato) saw themselves as
teachers of administrators, and so thought disproportionately about
such matters. The few people who did invent things, like storytellers,
must have seemed an outlying data point that could be ignored.Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston,
and Robert Morris for reading drafts of this. |
February 2021Before college the two main things I worked on, outside of school,
were writing and programming. I didn't write essays. I wrote what
beginning writers were supposed to write then, and probably still
are: short stories. My stories were awful. They had hardly any plot,
just characters with strong feelings, which I imagined made them
deep.The first programs I tried writing were on the IBM 1401 that our
school district used for what was then called "data processing."
This was in 9th grade, so I was 13 or 14. The school district's
1401 happened to be in the basement of our junior high school, and
my friend Rich Draves and I got permission to use it. It was like
a mini Bond villain's lair down there, with all these alien-looking
machines (CPU, disk drives, printer, card reader) sitting up
on a raised floor under bright fluorescent lights.The language we used was an early version of Fortran. You had to
type programs on punch cards, then stack them in the card reader
and press a button to load the program into memory and run it. The
result would ordinarily be to print something on the spectacularly
loud printer.I was puzzled by the 1401. I couldn't figure out what to do with
it. And in retrospect there's not much I could have done with it.
The only form of input to programs was data stored on punched cards,
and I didn't have any data stored on punched cards. The only other
option was to do things that didn't rely on any input, like calculate
approximations of pi, but I didn't know enough math to do anything
interesting of that type. So I'm not surprised I can't remember any
programs I wrote, because they can't have done much. My clearest
memory is of the moment I learned it was possible for programs not
to terminate, when one of mine didn't. On a machine without
time-sharing, this was a social as well as a technical error, as
the data center manager's expression made clear.With microcomputers, everything changed. Now you could have a
computer sitting right in front of you, on a desk, that could respond
to your keystrokes as it was running instead of just churning through
a stack of punch cards and then stopping.
[1]The first of my friends to get a microcomputer built it himself.
It was sold as a kit by Heathkit. I remember vividly how impressed
and envious I felt watching him sitting in front of it, typing
programs right into the computer.Computers were expensive in those days and it took me years of
nagging before I convinced my father to buy one, a TRS-80, in about
1980. The gold standard then was the Apple II, but a TRS-80 was
good enough. This was when I really started programming. I wrote
simple games, a program to predict how high my model rockets would
fly, and a word processor that my father used to write at least one
book. There was only room in memory for about 2 pages of text, so
he'd write 2 pages at a time and then print them out, but it was a
lot better than a typewriter.Though I liked programming, I didn't plan to study it in college.
In college I was going to study philosophy, which sounded much more
powerful. It seemed, to my naive high school self, to be the study
of the ultimate truths, compared to which the things studied in
other fields would be mere domain knowledge. What I discovered when
I got to college was that the other fields took up so much of the
space of ideas that there wasn't much left for these supposed
ultimate truths. All that seemed left for philosophy were edge cases
that people in other fields felt could safely be ignored.I couldn't have put this into words when I was 18. All I knew at
the time was that I kept taking philosophy courses and they kept
being boring. So I decided to switch to AI.AI was in the air in the mid 1980s, but there were two things
especially that made me want to work on it: a novel by Heinlein
called The Moon is a Harsh Mistress, which featured an intelligent
computer called Mike, and a PBS documentary that showed Terry
Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh
Mistress, so I don't know how well it has aged, but when I read it
I was drawn entirely into its world. It seemed only a matter of
time before we'd have Mike, and when I saw Winograd using SHRDLU,
it seemed like that time would be a few years at most. All you had
to do was teach SHRDLU more words.There weren't any classes in AI at Cornell then, not even graduate
classes, so I started trying to teach myself. Which meant learning
Lisp, since in those days Lisp was regarded as the language of AI.
The commonly used programming languages then were pretty primitive,
and programmers' ideas correspondingly so. The default language at
Cornell was a Pascal-like language called PL/I, and the situation
was similar elsewhere. Learning Lisp expanded my concept of a program
so fast that it was years before I started to have a sense of where
the new limits were. This was more like it; this was what I had
expected college to do. It wasn't happening in a class, like it was
supposed to, but that was ok. For the next couple years I was on a
roll. I knew what I was going to do.For my undergraduate thesis, I reverse-engineered SHRDLU. My God
did I love working on that program. It was a pleasing bit of code,
but what made it even more exciting was my belief (hard to imagine
now, but not unique in 1985) that it was already climbing the
lower slopes of intelligence.I had gotten into a program at Cornell that didn't make you choose
a major. You could take whatever classes you liked, and choose
whatever you liked to put on your degree. I of course chose "Artificial
Intelligence." When I got the actual physical diploma, I was dismayed
to find that the quotes had been included, which made them read as
scare-quotes. At the time this bothered me, but now it seems amusingly
accurate, for reasons I was about to discover.I applied to 3 grad schools: MIT and Yale, which were renowned for
AI at the time, and Harvard, which I'd visited because Rich Draves
went there, and was also home to Bill Woods, who'd invented the
type of parser I used in my SHRDLU clone. Only Harvard accepted me,
so that was where I went.I don't remember the moment it happened, or if there even was a
specific moment, but during the first year of grad school I realized
that AI, as practiced at the time, was a hoax. By which I mean the
sort of AI in which a program that's told "the dog is sitting on
the chair" translates this into some formal representation and adds
it to the list of things it knows.What these programs really showed was that there's a subset of
natural language that's a formal language. But a very proper subset.
It was clear that there was an unbridgeable gap between what they
could do and actually understanding natural language. It was not,
in fact, simply a matter of teaching SHRDLU more words. That whole
way of doing AI, with explicit data structures representing concepts,
was not going to work. Its brokenness did, as so often happens,
generate a lot of opportunities to write papers about various
band-aids that could be applied to it, but it was never going to
get us Mike.So I looked around to see what I could salvage from the wreckage
of my plans, and there was Lisp. I knew from experience that Lisp
was interesting for its own sake and not just for its association
with AI, even though that was the main reason people cared about
it at the time. So I decided to focus on Lisp. In fact, I decided
to write a book about Lisp hacking. It's scary to think how little
I knew about Lisp hacking when I started writing that book. But
there's nothing like writing a book about something to help you
learn it. The book, On Lisp, wasn't published till 1993, but I wrote
much of it in grad school.Computer Science is an uneasy alliance between two halves, theory
and systems. The theory people prove things, and the systems people
build things. I wanted to build things. I had plenty of respect for
theory (indeed, a sneaking suspicion that it was the more admirable
of the two halves) but building things seemed so much more exciting.The problem with systems work, though, was that it didn't last.
Any program you wrote today, no matter how good, would be obsolete
in a couple decades at best. People might mention your software in
footnotes, but no one would actually use it. And indeed, it would
seem very feeble work. Only people with a sense of the history of
the field would even realize that, in its time, it had been good.There were some surplus Xerox Dandelions floating around the computer
lab at one point. Anyone who wanted one to play around with could
have one. I was briefly tempted, but they were so slow by present
standards; what was the point? No one else wanted one either, so
off they went. That was what happened to systems work.

I wanted not just to build things, but to build things that would
last.

In this dissatisfied state I went in 1988 to visit Rich Draves at
CMU, where he was in grad school. One day I went to visit the
Carnegie Institute, where I'd spent a lot of time as a kid. While
looking at a painting there I realized something that might seem
obvious, but was a big surprise to me. There, right on the wall,
was something you could make that would last. Paintings didn't
become obsolete. Some of the best ones were hundreds of years old.

And moreover this was something you could make a living doing. Not
as easily as you could by writing software, of course, but I thought
if you were really industrious and lived really cheaply, it had to
be possible to make enough to survive. And as an artist you could
be truly independent. You wouldn't have a boss, or even need to get
research funding.

I had always liked looking at paintings. Could I make them? I had
no idea. I'd never imagined it was even possible. I knew intellectually
that people made art (that it didn't just appear spontaneously)
but it was as if the people who made it were a different species.
They either lived long ago or were mysterious geniuses doing strange
things in profiles in Life magazine. The idea of actually being
able to make art, to put that verb before that noun, seemed almost
miraculous.

That fall I started taking art classes at Harvard. Grad students
could take classes in any department, and my advisor, Tom Cheatham,
was very easy going. If he even knew about the strange classes I
was taking, he never said anything.So now I was in a PhD program in computer science, yet planning to
be an artist, yet also genuinely in love with Lisp hacking and
working away at On Lisp. In other words, like many a grad student,
I was working energetically on multiple projects that were not my
thesis.I didn't see a way out of this situation. I didn't want to drop out
of grad school, but how else was I going to get out? I remember
when my friend Robert Morris got kicked out of Cornell for writing
the internet worm of 1988, I was envious that he'd found such a
spectacular way to get out of grad school.Then one day in April 1990 a crack appeared in the wall. I ran into
professor Cheatham and he asked if I was far enough along to graduate
that June. I didn't have a word of my dissertation written, but in
what must have been the quickest bit of thinking in my life, I
decided to take a shot at writing one in the 5 weeks or so that
remained before the deadline, reusing parts of On Lisp where I
could, and I was able to respond, with no perceptible delay, "Yes,
I think so. I'll give you something to read in a few days."

I picked applications of continuations as the topic. In retrospect
I should have written about macros and embedded languages. There's
a whole world there that's barely been explored. But all I wanted
was to get out of grad school, and my rapidly written dissertation
sufficed, just barely.Meanwhile I was applying to art schools. I applied to two: RISD in
the US, and the Accademia di Belli Arti in Florence, which, because
it was the oldest art school, I imagined would be good. RISD accepted
me, and I never heard back from the Accademia, so off to Providence
I went.I'd applied for the BFA program at RISD, which meant in effect that
I had to go to college again. This was not as strange as it sounds,
because I was only 25, and art schools are full of people of different
ages. RISD counted me as a transfer sophomore and said I had to do
the foundation that summer. The foundation means the classes that
everyone has to take in fundamental subjects like drawing, color,
and design.Toward the end of the summer I got a big surprise: a letter from
the Accademia, which had been delayed because they'd sent it to
Cambridge England instead of Cambridge Massachusetts, inviting me
to take the entrance exam in Florence that fall. This was now only
weeks away. My nice landlady let me leave my stuff in her attic. I
had some money saved from consulting work I'd done in grad school;
there was probably enough to last a year if I lived cheaply. Now
all I had to do was learn Italian.Only stranieri (foreigners) had to take this entrance exam. In
retrospect it may well have been a way of excluding them, because
there were so many stranieri attracted by the idea of studying
art in Florence that the Italian students would otherwise have been
outnumbered. I was in decent shape at painting and drawing from the
RISD foundation that summer, but I still don't know how I managed
to pass the written exam. I remember that I answered the essay
question by writing about Cezanne, and that I cranked up the
intellectual level as high as I could to make the most of my limited
vocabulary.
[2]

I'm only up to age 25 and already there are such conspicuous patterns.
Here I was, yet again about to attend some august institution in
the hopes of learning about some prestigious subject, and yet again
about to be disappointed. The students and faculty in the painting
department at the Accademia were the nicest people you could imagine,
but they had long since arrived at an arrangement whereby the
students wouldn't require the faculty to teach anything, and in
return the faculty wouldn't require the students to learn anything.
And at the same time all involved would adhere outwardly to the
conventions of a 19th century atelier. We actually had one of those
little stoves, fed with kindling, that you see in 19th century
studio paintings, and a nude model sitting as close to it as possible
without getting burned. Except hardly anyone else painted her besides
me. The rest of the students spent their time chatting or occasionally
trying to imitate things they'd seen in American art magazines.Our model turned out to live just down the street from me. She made
a living from a combination of modelling and making fakes for a
local antique dealer. She'd copy an obscure old painting out of a
book, and then he'd take the copy and maltreat it to make it look
old.
[3]

While I was a student at the Accademia I started painting still
lives in my bedroom at night. These paintings were tiny, because
the room was, and because I painted them on leftover scraps of
canvas, which was all I could afford at the time. Painting still
lives is different from painting people, because the subject, as
its name suggests, can't move. People can't sit for more than about
15 minutes at a time, and when they do they don't sit very still.
So the traditional m.o. for painting people is to know how to paint
a generic person, which you then modify to match the specific person
you're painting. Whereas a still life you can, if you want, copy
pixel by pixel from what you're seeing. You don't want to stop
there, of course, or you get merely photographic accuracy, and what
makes a still life interesting is that it's been through a head.
You want to emphasize the visual cues that tell you, for example,
that the reason the color changes suddenly at a certain point is
that it's the edge of an object. By subtly emphasizing such things
you can make paintings that are more realistic than photographs not
just in some metaphorical sense, but in the strict information-theoretic
sense.
[4]

I liked painting still lives because I was curious about what I was
seeing. In everyday life, we aren't consciously aware of much we're
seeing. Most visual perception is handled by low-level processes
that merely tell your brain "that's a water droplet" without telling
you details like where the lightest and darkest points are, or
"that's a bush" without telling you the shape and position of every
leaf. This is a feature of brains, not a bug. In everyday life it
would be distracting to notice every leaf on every bush. But when
you have to paint something, you have to look more closely, and
when you do there's a lot to see. You can still be noticing new
things after days of trying to paint something people usually take
for granted, just as you can after
days of trying to write an essay about something people usually
take for granted.This is not the only way to paint. I'm not 100% sure it's even a
good way to paint. But it seemed a good enough bet to be worth
trying.Our teacher, professor Ulivi, was a nice guy. He could see I worked
hard, and gave me a good grade, which he wrote down in a sort of
passport each student had. But the Accademia wasn't teaching me
anything except Italian, and my money was running out, so at the
end of the first year I went back to the US.

I wanted to go back to RISD, but I was now broke and RISD was very
expensive, so I decided to get a job for a year and then return to
RISD the next fall. I got one at a company called Interleaf, which
made software for creating documents. You mean like Microsoft Word?
Exactly. That was how I learned that low end software tends to eat
high end software. But Interleaf still had a few years to live yet.
[5]

Interleaf had done something pretty bold. Inspired by Emacs, they'd
added a scripting language, and even made the scripting language a
dialect of Lisp. Now they wanted a Lisp hacker to write things in
it. This was the closest thing I've had to a normal job, and I
hereby apologize to my boss and coworkers, because I was a bad
employee. Their Lisp was the thinnest icing on a giant C cake, and
since I didn't know C and didn't want to learn it, I never understood
most of the software. Plus I was terribly irresponsible. This was
back when a programming job meant showing up every day during certain
working hours. That seemed unnatural to me, and on this point the
rest of the world is coming around to my way of thinking, but at
the time it caused a lot of friction. Toward the end of the year I
spent much of my time surreptitiously working on On Lisp, which I
had by this time gotten a contract to publish.The good part was that I got paid huge amounts of money, especially
by art student standards. In Florence, after paying my part of the
rent, my budget for everything else had been $7 a day. Now I was
getting paid more than 4 times that every hour, even when I was
just sitting in a meeting. By living cheaply I not only managed to
save enough to go back to RISD, but also paid off my college loans.I learned some useful things at Interleaf, though they were mostly
about what not to do. I learned that it's better for technology
companies to be run by product people than sales people (though
sales is a real skill and people who are good at it are really good
at it), that it leads to bugs when code is edited by too many people,
that cheap office space is no bargain if it's depressing, that
planned meetings are inferior to corridor conversations, that big,
bureaucratic customers are a dangerous source of money, and that
there's not much overlap between conventional office hours and the
optimal time for hacking, or conventional offices and the optimal
place for it.But the most important thing I learned, and which I used in both
Viaweb and Y Combinator, is that the low end eats the high end:
that it's good to be the "entry level" option, even though that
will be less prestigious, because if you're not, someone else will
be, and will squash you against the ceiling. Which in turn means
that prestige is a danger sign.When I left to go back to RISD the next fall, I arranged to do
freelance work for the group that did projects for customers, and
this was how I survived for the next several years. When I came
back to visit for a project later on, someone told me about a new
thing called HTML, which was, as he described it, a derivative of
SGML. Markup language enthusiasts were an occupational hazard at
Interleaf and I ignored him, but this HTML thing later became a big
part of my life.

In the fall of 1992 I moved back to Providence to continue at RISD.
The foundation had merely been intro stuff, and the Accademia had
been a (very civilized) joke. Now I was going to see what real art
school was like. But alas it was more like the Accademia than not.
Better organized, certainly, and a lot more expensive, but it was
now becoming clear that art school did not bear the same relationship
to art that medical school bore to medicine. At least not the
painting department. The textile department, which my next door
neighbor belonged to, seemed to be pretty rigorous. No doubt
illustration and architecture were too. But painting was post-rigorous.
Painting students were supposed to express themselves, which to the
more worldly ones meant to try to cook up some sort of distinctive
signature style.A signature style is the visual equivalent of what in show business
is known as a "schtick": something that immediately identifies the
work as yours and no one else's. For example, when you see a painting
that looks like a certain kind of cartoon, you know it's by Roy
Lichtenstein. So if you see a big painting of this type hanging in
the apartment of a hedge fund manager, you know he paid millions
of dollars for it. That's not always why artists have a signature
style, but it's usually why buyers pay a lot for such work.
[6]

There were plenty of earnest students too: kids who "could draw"
in high school, and now had come to what was supposed to be the
best art school in the country, to learn to draw even better. They
tended to be confused and demoralized by what they found at RISD,
but they kept going, because painting was what they did. I was not
one of the kids who could draw in high school, but at RISD I was
definitely closer to their tribe than the tribe of signature style
seekers.I learned a lot in the color class I took at RISD, but otherwise I
was basically teaching myself to paint, and I could do that for
free. So in 1993 I dropped out. I hung around Providence for a bit,
and then my college friend Nancy Parmet did me a big favor. A
rent-controlled apartment in a building her mother owned in New
York was becoming vacant. Did I want it? It wasn't much more than
my current place, and New York was supposed to be where the artists
were. So yes, I wanted it!
[7]

Asterix comics begin by zooming in on a tiny corner of Roman Gaul
that turns out not to be controlled by the Romans. You can do
something similar on a map of New York City: if you zoom in on the
Upper East Side, there's a tiny corner that's not rich, or at least
wasn't in 1993. It's called Yorkville, and that was my new home.
Now I was a New York artist in the strictly technical sense of
making paintings and living in New York.I was nervous about money, because I could sense that Interleaf was
on the way down. Freelance Lisp hacking work was very rare, and I
didn't want to have to program in another language, which in those
days would have meant C++ if I was lucky. So with my unerring nose
for financial opportunity, I decided to write another book on Lisp.
This would be a popular book, the sort of book that could be used
as a textbook. I imagined myself living frugally off the royalties
and spending all my time painting. (The painting on the cover of
this book, ANSI Common Lisp, is one that I painted around this
time.)

The best thing about New York for me was the presence of Idelle and
Julian Weber. Idelle Weber was a painter, one of the early
photorealists, and I'd taken her painting class at Harvard. I've
never known a teacher more beloved by her students. Large numbers
of former students kept in touch with her, including me. After I
moved to New York I became her de facto studio assistant.She liked to paint on big, square canvases, 4 to 5 feet on a side.
One day in late 1994 as I was stretching one of these monsters there
was something on the radio about a famous fund manager. He wasn't
that much older than me, and was super rich. The thought suddenly
occurred to me: why don't I become rich? Then I'll be able to work
on whatever I want.Meanwhile I'd been hearing more and more about this new thing called
the World Wide Web. Robert Morris showed it to me when I visited
him in Cambridge, where he was now in grad school at Harvard. It
seemed to me that the web would be a big deal. I'd seen what graphical
user interfaces had done for the popularity of microcomputers. It
seemed like the web would do the same for the internet.

If I wanted to get rich, here was the next train leaving the station.
I was right about that part. What I got wrong was the idea. I decided
we should start a company to put art galleries online. I can't
honestly say, after reading so many Y Combinator applications, that
this was the worst startup idea ever, but it was up there. Art
galleries didn't want to be online, and still don't, not the fancy
ones. That's not how they sell. I wrote some software to generate
web sites for galleries, and Robert wrote some to resize images and
set up an http server to serve the pages. Then we tried to sign up
galleries. To call this a difficult sale would be an understatement.
It was difficult to give away. A few galleries let us make sites
for them for free, but none paid us.Then some online stores started to appear, and I realized that
except for the order buttons they were identical to the sites we'd
been generating for galleries. This impressive-sounding thing called
an "internet storefront" was something we already knew how to build.So in the summer of 1995, after I submitted the camera-ready copy
of ANSI Common Lisp to the publishers, we started trying to write
software to build online stores. At first this was going to be
normal desktop software, which in those days meant Windows software.
That was an alarming prospect, because neither of us knew how to
write Windows software or wanted to learn. We lived in the Unix
world. But we decided we'd at least try writing a prototype store
builder on Unix. Robert wrote a shopping cart, and I wrote a new
site generator for stores (in Lisp, of course).

We were working out of Robert's apartment in Cambridge. His roommate
was away for big chunks of time, during which I got to sleep in his
room. For some reason there was no bed frame or sheets, just a
mattress on the floor. One morning as I was lying on this mattress
I had an idea that made me sit up like a capital L. What if we ran
the software on the server, and let users control it by clicking
on links? Then we'd never have to write anything to run on users'
computers. We could generate the sites on the same server we'd serve
them from. Users wouldn't need anything more than a browser.This kind of software, known as a web app, is common now, but at
the time it wasn't clear that it was even possible. To find out,
we decided to try making a version of our store builder that you
could control through the browser. A couple days later, on August
12, we had one that worked. The UI was horrible, but it proved you
could build a whole store through the browser, without any client
software or typing anything into the command line on the server.

Now we felt like we were really onto something. I had visions of a
whole new generation of software working this way. You wouldn't
need versions, or ports, or any of that crap. At Interleaf there
had been a whole group called Release Engineering that seemed to
be at least as big as the group that actually wrote the software.
Now you could just update the software right on the server.We started a new company we called Viaweb, after the fact that our
software worked via the web, and we got $10,000 in seed funding
from Idelle's husband Julian. In return for that and doing the
initial legal work and giving us business advice, we gave him 10%
of the company. Ten years later this deal became the model for Y
Combinator's. We knew founders needed something like this, because
we'd needed it ourselves.At this stage I had a negative net worth, because the thousand
dollars or so I had in the bank was more than counterbalanced by
what I owed the government in taxes. (Had I diligently set aside
the proper proportion of the money I'd made consulting for Interleaf?
No, I had not.) So although Robert had his graduate student stipend,
I needed that seed funding to live on.We originally hoped to launch in September, but we got more ambitious
about the software as we worked on it. Eventually we managed to
build a WYSIWYG site builder, in the sense that as you were creating
pages, they looked exactly like the static ones that would be
generated later, except that instead of leading to static pages,
the links all referred to closures stored in a hash table on the
server.

It helped to have studied art, because the main goal of an online
store builder is to make users look legit, and the key to looking
legit is high production values. If you get page layouts and fonts
and colors right, you can make a guy running a store out of his
bedroom look more legit than a big company.

(If you're curious why my site looks so old-fashioned, it's because
it's still made with this software. It may look clunky today, but
in 1996 it was the last word in slick.)

In September, Robert rebelled. "We've been working on this for a
month," he said, "and it's still not done." This is funny in
retrospect, because he would still be working on it almost 3 years
later. But I decided it might be prudent to recruit more programmers,
and I asked Robert who else in grad school with him was really good.
He recommended Trevor Blackwell, which surprised me at first, because
at that point I knew Trevor mainly for his plan to reduce everything
in his life to a stack of notecards, which he carried around with
him. But Rtm was right, as usual. Trevor turned out to be a
frighteningly effective hacker.It was a lot of fun working with Robert and Trevor. They're the two
most independent-minded people
I know, and in completely different
ways. If you could see inside Rtm's brain it would look like a
colonial New England church, and if you could see inside Trevor's
it would look like the worst excesses of Austrian Rococo.We opened for business, with 6 stores, in January 1996. It was just
as well we waited a few months, because although we worried we were
late, we were actually almost fatally early. There was a lot of
talk in the press then about ecommerce, but not many people actually
wanted online stores.
[8]

There were three main parts to the software: the editor, which
people used to build sites and which I wrote, the shopping cart,
which Robert wrote, and the manager, which kept track of orders and
statistics, and which Trevor wrote. In its time, the editor was one
of the best general-purpose site builders. I kept the code tight
and didn't have to integrate with any other software except Robert's
and Trevor's, so it was quite fun to work on. If all I'd had to do
was work on this software, the next 3 years would have been the
easiest of my life. Unfortunately I had to do a lot more, all of
it stuff I was worse at than programming, and the next 3 years were
instead the most stressful.There were a lot of startups making ecommerce software in the second
half of the 90s. We were determined to be the Microsoft Word, not
the Interleaf. Which meant being easy to use and inexpensive. It
was lucky for us that we were poor, because that caused us to make
Viaweb even more inexpensive than we realized. We charged $100 a
month for a small store and $300 a month for a big one. This low
price was a big attraction, and a constant thorn in the sides of
competitors, but it wasn't because of some clever insight that we
set the price low. We had no idea what businesses paid for things.
$300 a month seemed like a lot of money to us.We did a lot of things right by accident like that. For example,
we did what's now called "doing things that
don't scale," although
at the time we would have described it as "being so lame that we're
driven to the most desperate measures to get users." The most common
of which was building stores for them. This seemed particularly
humiliating, since the whole raison d'etre of our software was that
people could use it to make their own stores. But anything to get
users.We learned a lot more about retail than we wanted to know. For
example, that if you could only have a small image of a man's shirt
(and all images were small then by present standards), it was better
to have a closeup of the collar than a picture of the whole shirt.
The reason I remember learning this was that it meant I had to
rescan about 30 images of men's shirts. My first set of scans were
so beautiful too.Though this felt wrong, it was exactly the right thing to be doing.
Building stores for users taught us about retail, and about how it
felt to use our software. I was initially both mystified and repelled
by "business" and thought we needed a "business person" to be in
charge of it, but once we started to get users, I was converted,
in much the same way I was converted to
fatherhood once I had kids.
Whatever users wanted, I was all theirs. Maybe one day we'd have
so many users that I couldn't scan their images for them, but in
the meantime there was nothing more important to do.Another thing I didn't get at the time is that
growth rate is the
ultimate test of a startup. Our growth rate was fine. We had about
70 stores at the end of 1996 and about 500 at the end of 1997. I
mistakenly thought the thing that mattered was the absolute number
of users. And that is the thing that matters in the sense that
that's how much money you're making, and if you're not making enough,
you might go out of business. But in the long term the growth rate
takes care of the absolute number. If we'd been a startup I was
advising at Y Combinator, I would have said: Stop being so stressed
out, because you're doing fine. You're growing 7x a year. Just don't
hire too many more people and you'll soon be profitable, and then
you'll control your own destiny.Alas I hired lots more people, partly because our investors wanted
me to, and partly because that's what startups did during the
Internet Bubble. A company with just a handful of employees would
have seemed amateurish. So we didn't reach breakeven until about
when Yahoo bought us in the summer of 1998. Which in turn meant we
were at the mercy of investors for the entire life of the company.
And since both we and our investors were noobs at startups, the
result was a mess even by startup standards.

It was a huge relief when Yahoo bought us. In principle our Viaweb
stock was valuable. It was a share in a business that was profitable
and growing rapidly. But it didn't feel very valuable to me; I had
no idea how to value a business, but I was all too keenly aware of
the near-death experiences we seemed to have every few months. Nor
had I changed my grad student lifestyle significantly since we
started. So when Yahoo bought us it felt like going from rags to
riches. Since we were going to California, I bought a car, a yellow
1998 VW GTI. I remember thinking that its leather seats alone were
by far the most luxurious thing I owned.The next year, from the summer of 1998 to the summer of 1999, must
have been the least productive of my life. I didn't realize it at
the time, but I was worn out from the effort and stress of running
Viaweb. For a while after I got to California I tried to continue
my usual m.o. of programming till 3 in the morning, but fatigue
combined with Yahoo's prematurely aged
culture and grim cube farm
in Santa Clara gradually dragged me down. After a few months it
felt disconcertingly like working at Interleaf.Yahoo had given us a lot of options when they bought us. At the
time I thought Yahoo was so overvalued that they'd never be worth
anything, but to my astonishment the stock went up 5x in the next
year. I hung on till the first chunk of options vested, then in the
summer of 1999 I left. It had been so long since I'd painted anything
that I'd half forgotten why I was doing this. My brain had been
entirely full of software and men's shirts for 4 years. But I had
done this to get rich so I could paint, I reminded myself, and now
I was rich, so I should go paint.

When I said I was leaving, my boss at Yahoo had a long conversation
with me about my plans. I told him all about the kinds of pictures
I wanted to paint. At the time I was touched that he took such an
interest in me. Now I realize it was because he thought I was lying.
My options at that point were worth about $2 million a month. If I
was leaving that kind of money on the table, it could only be to
go and start some new startup, and if I did, I might take people
with me. This was the height of the Internet Bubble, and Yahoo was
ground zero of it. My boss was at that moment a billionaire. Leaving
then to start a new startup must have seemed to him an insanely,
and yet also plausibly, ambitious plan.But I really was quitting to paint, and I started immediately.
There was no time to lose. I'd already burned 4 years getting rich.
Now when I talk to founders who are leaving after selling their
companies, my advice is always the same: take a vacation. That's
what I should have done, just gone off somewhere and done nothing
for a month or two, but the idea never occurred to me.So I tried to paint, but I just didn't seem to have any energy or
ambition. Part of the problem was that I didn't know many people
in California. I'd compounded this problem by buying a house up in
the Santa Cruz Mountains, with a beautiful view but miles from
anywhere. I stuck it out for a few more months, then in desperation
I went back to New York, where unless you understand about rent
control you'll be surprised to hear I still had my apartment, sealed
up like a tomb of my old life. Idelle was in New York at least, and
there were other people trying to paint there, even though I didn't
know any of them.When I got back to New York I resumed my old life, except now I was
rich. It was as weird as it sounds. I resumed all my old patterns,
except now there were doors where there hadn't been. Now when I was
tired of walking, all I had to do was raise my hand, and (unless
it was raining) a taxi would stop to pick me up. Now when I walked
past charming little restaurants I could go in and order lunch. It
was exciting for a while. Painting started to go better. I experimented
with a new kind of still life where I'd paint one painting in the
old way, then photograph it and print it, blown up, on canvas, and
then use that as the underpainting for a second still life, painted
from the same objects (which hopefully hadn't rotted yet).Meanwhile I looked for an apartment to buy. Now I could actually
choose what neighborhood to live in. Where, I asked myself and
various real estate agents, is the Cambridge of New York? Aided by
occasional visits to actual Cambridge, I gradually realized there
wasn't one. Huh.

Around this time, in the spring of 2000, I had an idea. It was clear
from our experience with Viaweb that web apps were the future. Why
not build a web app for making web apps? Why not let people edit
code on our server through the browser, and then host the resulting
applications for them?
[9]
You could run all sorts of services
on the servers that these applications could use just by making an
API call: making and receiving phone calls, manipulating images,
taking credit card payments, etc.

I got so excited about this idea that I couldn't think about anything
else. It seemed obvious that this was the future. I didn't particularly
want to start another company, but it was clear that this idea would
have to be embodied as one, so I decided to move to Cambridge and
start it. I hoped to lure Robert into working on it with me, but
there I ran into a hitch. Robert was now a postdoc at MIT, and
though he'd made a lot of money the last time I'd lured him into
working on one of my schemes, it had also been a huge time sink.
So while he agreed that it sounded like a plausible idea, he firmly
refused to work on it.

Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had
worked for Viaweb, and two undergrads who wanted summer jobs, and
we got to work trying to build what it's now clear is about twenty
companies and several open source projects worth of software. The
language for defining applications would of course be a dialect of
Lisp. But I wasn't so naive as to assume I could spring an overt
Lisp on a general audience; we'd hide the parentheses, like Dylan
did.

By then there was a name for the kind of company Viaweb was, an
"application service provider," or ASP. This name didn't last long
before it was replaced by "software as a service," but it was current
for long enough that I named this new company after it: it was going
to be called Aspra.I started working on the application builder, Dan worked on network
infrastructure, and the two undergrads worked on the first two
services (images and phone calls). But about halfway through the
summer I realized I really didn't want to run a company, especially
not a big one, which it was looking like this would have to be. I'd
only started Viaweb because I needed the money. Now that I didn't
need money anymore, why was I doing this? If this vision had to be
realized as a company, then screw the vision. I'd build a subset
that could be done as an open source project.Much to my surprise, the time I spent working on this stuff was not
wasted after all. After we started Y Combinator, I would often
encounter startups working on parts of this new architecture, and
it was very useful to have spent so much time thinking about it and
even trying to write some of it.The subset I would build as an open source project was the new Lisp,
whose parentheses I now wouldn't even have to hide. A lot of Lisp
hackers dream of building a new Lisp, partly because one of the
distinctive features of the language is that it has dialects, and
partly, I think, because we have in our minds a Platonic form of
Lisp that all existing dialects fall short of. I certainly did. So
at the end of the summer Dan and I switched to working on this new
dialect of Lisp, which I called Arc, in a house I bought in Cambridge.The following spring, lightning struck. I was invited to give a
talk at a Lisp conference, so I gave one about how we'd used Lisp
at Viaweb. Afterward I put a postscript file of this talk online,
on paulgraham.com, which I'd created years before using Viaweb but
had never used for anything. In one day it got 30,000 page views.
What on earth had happened? The referring urls showed that someone
had posted it on Slashdot.
[10]

Wow, I thought, there's an audience. If I write something and put
it on the web, anyone can read it. That may seem obvious now, but
it was surprising then. In the print era there was a narrow channel
to readers, guarded by fierce monsters known as editors. The only
way to get an audience for anything you wrote was to get it published
as a book, or in a newspaper or magazine. Now anyone could publish
anything.This had been possible in principle since 1993, but not many people
had realized it yet. I had been intimately involved with building
the infrastructure of the web for most of that time, and a writer
as well, and it had taken me 8 years to realize it. Even then it
took me several years to understand the implications. It meant there
would be a whole new generation of
essays.
[11]

In the print era, the channel for publishing essays had been
vanishingly small. Except for a few officially anointed thinkers
who went to the right parties in New York, the only people allowed
to publish essays were specialists writing about their specialties.
There were so many essays that had never been written, because there
had been no way to publish them. Now they could be, and I was going
to write them.
[12]

I've worked on several different things, but to the extent there
was a turning point where I figured out what to work on, it was
when I started publishing essays online. From then on I knew that
whatever else I did, I'd always write essays too.I knew that online essays would be a
marginal medium at first.
Socially they'd seem more like rants posted by nutjobs on their
GeoCities sites than the genteel and beautifully typeset compositions
published in The New Yorker. But by this point I knew enough to
find that encouraging instead of discouraging.One of the most conspicuous patterns I've noticed in my life is how
well it has worked, for me at least, to work on things that weren't
prestigious. Still life has always been the least prestigious form
of painting. Viaweb and Y Combinator both seemed lame when we started
them. I still get the glassy eye from strangers when they ask what
I'm writing, and I explain that it's an essay I'm going to publish
on my web site. Even Lisp, though prestigious intellectually in
something like the way Latin is, also seems about as hip.It's not that unprestigious types of work are good per se. But when
you find yourself drawn to some kind of work despite its current
lack of prestige, it's a sign both that there's something real to
be discovered there, and that you have the right kind of motives.
Impure motives are a big danger for the ambitious. If anything is
going to lead you astray, it will be the desire to impress people.
So while working on things that aren't prestigious doesn't guarantee
you're on the right track, it at least guarantees you're not on the
most common type of wrong one.Over the next several years I wrote lots of essays about all kinds
of different topics. O'Reilly reprinted a collection of them as a
book, called Hackers & Painters after one of the essays in it. I
also worked on spam filters, and did some more painting. I used to
have dinners for a group of friends every thursday night, which
taught me how to cook for groups. And I bought another building in
Cambridge, a former candy factory (and later, twas said, porn
studio), to use as an office.One night in October 2003 there was a big party at my house. It was
a clever idea of my friend Maria Daniels, who was one of the thursday
diners. Three separate hosts would all invite their friends to one
party. So for every guest, two thirds of the other guests would be
people they didn't know but would probably like. One of the guests
was someone I didn't know but would turn out to like a lot: a woman
called Jessica Livingston. A couple days later I asked her out.Jessica was in charge of marketing at a Boston investment bank.
This bank thought it understood startups, but over the next year,
as she met friends of mine from the startup world, she was surprised
how different reality was. And how colorful their stories were. So
she decided to compile a book of
interviews with startup founders.When the bank had financial problems and she had to fire half her
staff, she started looking for a new job. In early 2005 she interviewed
for a marketing job at a Boston VC firm. It took them weeks to make
up their minds, and during this time I started telling her about
all the things that needed to be fixed about venture capital. They
should make a larger number of smaller investments instead of a
handful of giant ones, they should be funding younger, more technical
founders instead of MBAs, they should let the founders remain as
CEO, and so on.One of my tricks for writing essays had always been to give talks.
The prospect of having to stand up in front of a group of people
and tell them something that won't waste their time is a great
spur to the imagination. When the Harvard Computer Society, the
undergrad computer club, asked me to give a talk, I decided I would
tell them how to start a startup. Maybe they'd be able to avoid the
worst of the mistakes we'd made.

So I gave this talk, in the course of which I told them that the
best sources of seed funding were successful startup founders,
because then they'd be sources of advice too. Whereupon it seemed
they were all looking expectantly at me. Horrified at the prospect
of having my inbox flooded by business plans (if I'd only known),
I blurted out "But not me!" and went on with the talk. But afterward
it occurred to me that I should really stop procrastinating about
angel investing. I'd been meaning to since Yahoo bought us, and now
it was 7 years later and I still hadn't done one angel investment.Meanwhile I had been scheming with Robert and Trevor about projects
we could work on together. I missed working with them, and it seemed
like there had to be something we could collaborate on.As Jessica and I were walking home from dinner on March 11, at the
corner of Garden and Walker streets, these three threads converged.
Screw the VCs who were taking so long to make up their minds. We'd
start our own investment firm and actually implement the ideas we'd
been talking about. I'd fund it, and Jessica could quit her job and
work for it, and we'd get Robert and Trevor as partners too.
[13]

Once again, ignorance worked in our favor. We had no idea how to
be angel investors, and in Boston in 2005 there were no Ron Conways
to learn from. So we just made what seemed like the obvious choices,
and some of the things we did turned out to be novel.There are multiple components to Y Combinator, and we didn't figure
them all out at once. The part we got first was to be an angel firm.
In those days, those two words didn't go together. There were VC
firms, which were organized companies with people whose job it was
to make investments, but they only did big, million dollar investments.
And there were angels, who did smaller investments, but these were
individuals who were usually focused on other things and made
investments on the side. And neither of them helped founders enough
in the beginning. We knew how helpless founders were in some respects,
because we remembered how helpless we'd been. For example, one thing
Julian had done for us that seemed to us like magic was to get us
set up as a company. We were fine writing fairly difficult software,
but actually getting incorporated, with bylaws and stock and all
that stuff, how on earth did you do that? Our plan was not only to
make seed investments, but to do for startups everything Julian had
done for us.

YC was not organized as a fund. It was cheap enough to run that we
funded it with our own money. That went right by 99% of readers,
but professional investors are thinking "Wow, that means they got
all the returns." But once again, this was not due to any particular
insight on our part. We didn't know how VC firms were organized.
It never occurred to us to try to raise a fund, and if it had, we
wouldn't have known where to start.
[14]

The most distinctive thing about YC is the batch model: to fund a
bunch of startups all at once, twice a year, and then to spend three
months focusing intensively on trying to help them. That part we
discovered by accident, not merely implicitly but explicitly due
to our ignorance about investing. We needed to get experience as
investors. What better way, we thought, than to fund a whole bunch
of startups at once? We knew undergrads got temporary jobs at tech
companies during the summer. Why not organize a summer program where
they'd start startups instead? We wouldn't feel guilty for being
in a sense fake investors, because they would in a similar sense
be fake founders. So while we probably wouldn't make much money out
of it, we'd at least get to practice being investors on them, and
they for their part would probably have a more interesting summer
than they would working at Microsoft.We'd use the building I owned in Cambridge as our headquarters.
We'd all have dinner there once a week, on tuesdays (since I was
already cooking for the thursday diners on thursdays), and after
dinner we'd bring in experts on startups to give talks.

We knew undergrads were deciding then about summer jobs, so in a
matter of days we cooked up something we called the Summer Founders
Program, and I posted an
announcement
on my site, inviting undergrads
to apply. I had never imagined that writing essays would be a way
to get "deal flow," as investors call it, but it turned out to be
the perfect source.
[15]
We got 225 applications for the Summer
Founders Program, and we were surprised to find that a lot of them
were from people who'd already graduated, or were about to that
spring. Already this SFP thing was starting to feel more serious
than we'd intended.We invited about 20 of the 225 groups to interview in person, and
from those we picked 8 to fund. They were an impressive group. That
first batch included reddit, Justin Kan and Emmett Shear, who went
on to found Twitch, Aaron Swartz, who had already helped write the
RSS spec and would a few years later become a martyr for open access,
and Sam Altman, who would later become the second president of YC.
I don't think it was entirely luck that the first batch was so good.
You had to be pretty bold to sign up for a weird thing like the
Summer Founders Program instead of a summer job at a legit place
like Microsoft or Goldman Sachs.The deal for startups was based on a combination of the deal we did
with Julian ($10k for 10%) and what Robert said MIT grad students
got for the summer ($6k). We invested $6k per founder, which in the
typical two-founder case was $12k, in return for 6%. That had to
be fair, because it was twice as good as the deal we ourselves had
taken. Plus that first summer, which was really hot, Jessica brought
the founders free air conditioners. [16]
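To make the arithmetic concrete, each deal can be framed, purely as
an illustration, by the valuation it implies: the amount invested
divided by the fraction of the company taken.

    # A rough sketch of the "twice as good" arithmetic, using the numbers
    # above. Framing each deal by its implied valuation is an illustration,
    # not part of the original deal terms.

    def implied_valuation(investment, equity_fraction):
        return investment / equity_fraction

    julian_deal = implied_valuation(10_000, 0.10)  # $10k for 10% of Viaweb
    yc_deal = implied_valuation(12_000, 0.06)      # $12k for 6%, typical two-founder case

    print(julian_deal)            # 100000.0
    print(yc_deal)                # 200000.0
    print(yc_deal / julian_deal)  # 2.0 -- twice the implied valuation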
Fairly quickly I realized that we had stumbled upon the way to scale
startup funding. Funding startups in batches was more convenient
for us, because it meant we could do things for a lot of startups
at once, but being part of a batch was better for the startups too.
It solved one of the biggest problems faced by founders: the
isolation. Now you not only had colleagues, but colleagues who
understood the problems you were facing and could tell you how they
were solving them.As YC grew, we started to notice other advantages of scale. The
alumni became a tight community, dedicated to helping one another,
and especially the current batch, whose shoes they remembered being
in. We also noticed that the startups were becoming one another's
customers. We used to refer jokingly to the "YC GDP," but as YC
grows this becomes less and less of a joke. Now lots of startups
get their initial set of customers almost entirely from among their
batchmates.I had not originally intended YC to be a full-time job. I was going
to do three things: hack, write essays, and work on YC. As YC grew,
and I grew more excited about it, it started to take up a lot more
than a third of my attention. But for the first few years I was
still able to work on other things.In the summer of 2006, Robert and I started working on a new version
of Arc. This one was reasonably fast, because it was compiled into
Scheme. To test this new Arc, I wrote Hacker News in it. It was
originally meant to be a news aggregator for startup founders and
was called Startup News, but after a few months I got tired of
reading about nothing but startups. Plus it wasn't startup founders
we wanted to reach. It was future startup founders. So I changed
the name to Hacker News and the topic to whatever engaged one's
intellectual curiosity.HN was no doubt good for YC, but it was also by far the biggest
source of stress for me. If all I'd had to do was select and help
founders, life would have been so easy. And that implies that HN
was a mistake. Surely the biggest source of stress in one's work
should at least be something close to the core of the work. Whereas
I was like someone who was in pain while running a marathon not
from the exertion of running, but because I had a blister from an
ill-fitting shoe. When I was dealing with some urgent problem during
YC, there was about a 60% chance it had to do with HN, and a 40%
chance it had to do with everything else combined.
[17]

As well as HN, I wrote all of YC's internal software in Arc. But
while I continued to work a good deal in Arc, I gradually stopped
working on Arc, partly because I didn't have time to, and partly
because it was a lot less attractive to mess around with the language
now that we had all this infrastructure depending on it. So now my
three projects were reduced to two: writing essays and working on
YC.

YC was different from other kinds of work I've done. Instead of
deciding for myself what to work on, the problems came to me. Every
6 months there was a new batch of startups, and their problems,
whatever they were, became our problems. It was very engaging work,
because their problems were quite varied, and the good founders
were very effective. If you were trying to learn the most you could
about startups in the shortest possible time, you couldn't have
picked a better way to do it.There were parts of the job I didn't like. Disputes between cofounders,
figuring out when people were lying to us, fighting with people who
maltreated the startups, and so on. But I worked hard even at the
parts I didn't like. I was haunted by something Kevin Hale once
said about companies: "No one works harder than the boss." He meant
it both descriptively and prescriptively, and it was the second
part that scared me. I wanted YC to be good, so if how hard I worked
set the upper bound on how hard everyone else worked, I'd better
work very hard.One day in 2010, when he was visiting California for interviews,
Robert Morris did something astonishing: he offered me unsolicited
advice. I can only remember him doing that once before. One day at
Viaweb, when I was bent over double from a kidney stone, he suggested
that it would be a good idea for him to take me to the hospital.
That was what it took for Rtm to offer unsolicited advice. So I
remember his exact words very clearly. "You know," he said, "you
should make sure Y Combinator isn't the last cool thing you do."

At the time I didn't understand what he meant, but gradually it
dawned on me that he was saying I should quit. This seemed strange
advice, because YC was doing great. But if there was one thing rarer
than Rtm offering advice, it was Rtm being wrong. So this set me
thinking. It was true that on my current trajectory, YC would be
the last thing I did, because it was only taking up more of my
attention. It had already eaten Arc, and was in the process of
eating essays too. Either YC was my life's work or I'd have to leave
eventually. And it wasn't, so I would.In the summer of 2012 my mother had a stroke, and the cause turned
out to be a blood clot caused by colon cancer. The stroke destroyed
her balance, and she was put in a nursing home, but she really
wanted to get out of it and back to her house, and my sister and I
were determined to help her do it. I used to fly up to Oregon to
visit her regularly, and I had a lot of time to think on those
flights. On one of them I realized I was ready to hand YC over to
someone else.I asked Jessica if she wanted to be president, but she didn't, so
we decided we'd try to recruit Sam Altman. We talked to Robert and
Trevor and we agreed to make it a complete changing of the guard.
Up till that point YC had been controlled by the original LLC we
four had started. But we wanted YC to last for a long time, and to
do that it couldn't be controlled by the founders. So if Sam said
yes, we'd let him reorganize YC. Robert and I would retire, and
Jessica and Trevor would become ordinary partners.When we asked Sam if he wanted to be president of YC, initially he
said no. He wanted to start a startup to make nuclear reactors.
But I kept at it, and in October 2013 he finally agreed. We decided
he'd take over starting with the winter 2014 batch. For the rest
of 2013 I left running YC more and more to Sam, partly so he could
learn the job, and partly because I was focused on my mother, whose
cancer had returned.She died on January 15, 2014. We knew this was coming, but it was
still hard when it did.I kept working on YC till March, to help get that batch of startups
through Demo Day, then I checked out pretty completely. (I still
talk to alumni and to new startups working on things I'm interested
in, but that only takes a few hours a week.)

What should I do next? Rtm's advice hadn't included anything about
that. I wanted to do something completely different, so I decided
I'd paint. I wanted to see how good I could get if I really focused
on it. So the day after I stopped working on YC, I started painting.
I was rusty and it took a while to get back into shape, but it was
at least completely engaging.
[18]

I spent most of the rest of 2014 painting. I'd never been able to
work so uninterruptedly before, and I got to be better than I had
been. Not good enough, but better. Then in November, right in the
middle of a painting, I ran out of steam. Up till that point I'd
always been curious to see how the painting I was working on would
turn out, but suddenly finishing this one seemed like a chore. So
I stopped working on it and cleaned my brushes and haven't painted
since. So far anyway.I realize that sounds rather wimpy. But attention is a zero sum
game. If you can choose what to work on, and you choose a project
that's not the best one (or at least a good one) for you, then it's
getting in the way of another project that is. And at 50 there was
some opportunity cost to screwing around.I started writing essays again, and wrote a bunch of new ones over
the next few months. I even wrote a couple that
weren't about
startups. Then in March 2015 I started working on Lisp again.

The distinctive thing about Lisp is that its core is a language
defined by writing an interpreter in itself. It wasn't originally
intended as a programming language in the ordinary sense. It was
meant to be a formal model of computation, an alternative to the
Turing machine. If you want to write an interpreter for a language
in itself, what's the minimum set of predefined operators you need?
The Lisp that John McCarthy invented, or more accurately discovered,
is an answer to that question. [19]
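To give a flavor of what that minimal core looks like, here is a toy
sketch of such an evaluator, written in Python rather than in Lisp
itself, built from roughly those primitives (quote, atom, eq, car,
cdr, cons, cond) plus lambda. It is only an illustration of the idea,
not McCarthy's actual definition; the point is how little machinery
the evaluator itself needs.

    # Atoms are Python strings, Lisp lists are Python lists.
    def evaluate(expr, env):
        if isinstance(expr, str):                        # an atom: look up its value
            return env[expr]
        op = expr[0]
        if op == "quote":                                # (quote x) -> x, unevaluated
            return expr[1]
        if op == "atom":                                 # t if the value is an atom (or nil)
            v = evaluate(expr[1], env)
            return "t" if isinstance(v, str) or v == [] else []
        if op == "eq":                                   # t if both are the same atom
            a, b = evaluate(expr[1], env), evaluate(expr[2], env)
            return "t" if a == b and isinstance(a, str) else []
        if op == "car":                                  # first element of a list
            return evaluate(expr[1], env)[0]
        if op == "cdr":                                  # the rest of a list
            return evaluate(expr[1], env)[1:]
        if op == "cons":                                 # prepend an element to a list
            return [evaluate(expr[1], env)] + evaluate(expr[2], env)
        if op == "cond":                                 # ((p1 e1) (p2 e2) ...): first true branch
            for test, branch in expr[1:]:
                if evaluate(test, env) == "t":
                    return evaluate(branch, env)
            return []
        if isinstance(op, list) and op[0] == "lambda":   # ((lambda (params) body) args...)
            params, body = op[1], op[2]
            args = [evaluate(a, env) for a in expr[1:]]
            return evaluate(body, {**env, **dict(zip(params, args))})
        return evaluate([evaluate(op, env)] + expr[1:], env)  # a named function: look it up

    # ((lambda (x) (cons x (quote (b)))) (quote a)) evaluates to (a b):
    print(evaluate([["lambda", ["x"], ["cons", "x", ["quote", ["b"]]]],
                    ["quote", "a"]], {}))                # prints ['a', 'b']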
McCarthy didn't realize this Lisp could even be used to program
computers till his grad student Steve Russell suggested it. Russell
translated McCarthy's interpreter into IBM 704 machine language,
and from that point Lisp started also to be a programming language
in the ordinary sense. But its origins as a model of computation
gave it a power and elegance that other languages couldn't match.
It was this that attracted me in college, though I didn't understand
why at the time.McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions.
It was missing a lot of things you'd want in a programming language.
So these had to be added, and when they were, they weren't defined
using McCarthy's original axiomatic approach. That wouldn't have
been feasible at the time. McCarthy tested his interpreter by
hand-simulating the execution of programs. But it was already getting
close to the limit of interpreters you could test that way; indeed,
there was a bug in it that McCarthy had overlooked. To test a more
complicated interpreter, you'd have had to run it, and computers
then weren't powerful enough.Now they are, though. Now you could continue using McCarthy's
axiomatic approach till you'd defined a complete programming language.
And as long as every change you made to McCarthy's Lisp was a
discoveredness-preserving transformation, you could, in principle,
end up with a complete language that had this quality. Harder to
do than to talk about, of course, but if it was possible in principle,
why not try? So I decided to take a shot at it. It took 4 years,
from March 26, 2015 to October 12, 2019. It was fortunate that I
had a precisely defined goal, or it would have been hard to keep
at it for so long.

I wrote this new Lisp, called Bel,
in itself in Arc. That may sound
like a contradiction, but it's an indication of the sort of trickery
I had to engage in to make this work. By means of an egregious
collection of hacks I managed to make something close enough to an
interpreter written in itself that could actually run. Not fast,
but fast enough to test.I had to ban myself from writing essays during most of this time,
or I'd never have finished. In late 2015 I spent 3 months writing
essays, and when I went back to working on Bel I could barely
understand the code. Not so much because it was badly written as
because the problem is so convoluted. When you're working on an
interpreter written in itself, it's hard to keep track of what's
happening at what level, and errors can be practically encrypted
by the time you get them.So I said no more essays till Bel was done. But I told few people
about Bel while I was working on it. So for years it must have
seemed that I was doing nothing, when in fact I was working harder
than I'd ever worked on anything. Occasionally after wrestling for
hours with some gruesome bug I'd check Twitter or HN and see someone
asking "Does Paul Graham still code?"Working on Bel was hard but satisfying. I worked on it so intensively
that at any given time I had a decent chunk of the code in my head
and could write more there. I remember taking the boys to the
coast on a sunny day in 2015 and figuring out how to deal with some
problem involving continuations while I watched them play in the
tide pools. It felt like I was doing life right. I remember that
because I was slightly dismayed at how novel it felt. The good news
is that I had more moments like this over the next few years.

In the summer of 2016 we moved to England. We wanted our kids to
see what it was like living in another country, and since I was a
British citizen by birth, that seemed the obvious choice. We only
meant to stay for a year, but we liked it so much that we still
live there. So most of Bel was written in England.

In the fall of 2019, Bel was finally finished. Like McCarthy's
original Lisp, it's a spec rather than an implementation, although
like McCarthy's Lisp it's a spec expressed as code.

Now that I could write essays again, I wrote a bunch about topics
I'd had stacked up. I kept writing essays through 2020, but I also
started to think about other things I could work on. How should I
choose what to do? Well, how had I chosen what to work on in the
past? I wrote an essay for myself to answer that question, and I
was surprised how long and messy the answer turned out to be. If
this surprised me, who'd lived it, then I thought perhaps it would
be interesting to other people, and encouraging to those with
similarly messy lives. So I wrote a more detailed version for others
to read, and this is the last sentence of it.
Notes

[1]
My experience skipped a step in the evolution of computers:
time-sharing machines with interactive OSes. I went straight from
batch processing to microcomputers, which made microcomputers seem
all the more exciting.[2]
Italian words for abstract concepts can nearly always be
predicted from their English cognates (except for occasional traps
like polluzione). It's the everyday words that differ. So if you
string together a lot of abstract concepts with a few simple verbs,
you can make a little Italian go a long way.[3]
I lived at Piazza San Felice 4, so my walk to the Accademia
went straight down the spine of old Florence: past the Pitti, across
the bridge, past Orsanmichele, between the Duomo and the Baptistery,
and then up Via Ricasoli to Piazza San Marco. I saw Florence at
street level in every possible condition, from empty dark winter
evenings to sweltering summer days when the streets were packed with
tourists.[4]
You can of course paint people like still lives if you want
to, and they're willing. That sort of portrait is arguably the apex
of still life painting, though the long sitting does tend to produce
pained expressions in the sitters.[5]
Interleaf was one of many companies that had smart people and
built impressive technology, and yet got crushed by Moore's Law.
In the 1990s the exponential growth in the power of commodity (i.e.
Intel) processors rolled up high-end, special-purpose hardware and
software companies like a bulldozer.[6]
The signature style seekers at RISD weren't specifically
mercenary. In the art world, money and coolness are tightly coupled.
Anything expensive comes to be seen as cool, and anything seen as
cool will soon become equally expensive.[7]
Technically the apartment wasn't rent-controlled but
rent-stabilized, but this is a refinement only New Yorkers would
know or care about. The point is that it was really cheap, less
than half market price.[8]
Most software you can launch as soon as it's done. But when
the software is an online store builder and you're hosting the
stores, if you don't have any users yet, that fact will be painfully
obvious. So before we could launch publicly we had to launch
privately, in the sense of recruiting an initial set of users and
making sure they had decent-looking stores.[9]
We'd had a code editor in Viaweb for users to define their
own page styles. They didn't know it, but they were editing Lisp
expressions underneath. But this wasn't an app editor, because the
code ran when the merchants' sites were generated, not when shoppers
visited them.[10]
This was the first instance of what is now a familiar experience,
and so was what happened next, when I read the comments and found
they were full of angry people. How could I claim that Lisp was
better than other languages? Weren't they all Turing complete?
People who see the responses to essays I write sometimes tell me
how sorry they feel for me, but I'm not exaggerating when I reply
that it has always been like this, since the very beginning. It
comes with the territory. An essay must tell readers things they
don't already know, and some
people dislike being told such things.[11]
People put plenty of stuff on the internet in the 90s of
course, but putting something online is not the same as publishing
it online. Publishing online means you treat the online version as
the (or at least a) primary version.[12]
There is a general lesson here that our experience with Y
Combinator also teaches: Customs continue to constrain you long
after the restrictions that caused them have disappeared. Customary
VC practice had once, like the customs about publishing essays,
been based on real constraints. Startups had once been much more
expensive to start, and proportionally rare. Now they could be cheap
and common, but the VCs' customs still reflected the old world,
just as customs about writing essays still reflected the constraints
of the print era.

Which in turn implies that people who are independent-minded (i.e.
less influenced by custom) will have an advantage in fields affected
by rapid change (where customs are more likely to be obsolete).

Here's an interesting point, though: you can't always predict which
fields will be affected by rapid change. Obviously software and
venture capital will be, but who would have predicted that essay
writing would be?[13]
Y Combinator was not the original name. At first we were
called Cambridge Seed. But we didn't want a regional name, in case
someone copied us in Silicon Valley, so we renamed ourselves after
one of the coolest tricks in the lambda calculus, the Y combinator.

I picked orange as our color partly because it's the warmest, and
partly because no VC used it. In 2005 all the VCs used staid colors
like maroon, navy blue, and forest green, because they were trying
to appeal to LPs, not founders. The YC logo itself is an inside
joke: the Viaweb logo had been a white V on a red circle, so I made
the YC logo a white Y on an orange square.[14]
YC did become a fund for a couple years starting in 2009,
because it was getting so big I could no longer afford to fund it
personally. But after Heroku got bought we had enough money to go
back to being self-funded.[15]
I've never liked the term "deal flow," because it implies
that the number of new startups at any given time is fixed. This
is not only false, but it's the purpose of YC to falsify it, by
causing startups to be founded that would not otherwise have existed.[16]
She reports that they were all different shapes and sizes,
because there was a run on air conditioners and she had to get
whatever she could, but that they were all heavier than she could
carry now.[17]
Another problem with HN was a bizarre edge case that occurs
when you both write essays and run a forum. When you run a forum,
you're assumed to see if not every conversation, at least every
conversation involving you. And when you write essays, people post
highly imaginative misinterpretations of them on forums. Individually
these two phenomena are tedious but bearable, but the combination
is disastrous. You actually have to respond to the misinterpretations,
because the assumption that you're present in the conversation means
that not responding to any sufficiently upvoted misinterpretation
reads as a tacit admission that it's correct. But that in turn
encourages more; anyone who wants to pick a fight with you senses
that now is their chance.[18]
The worst thing about leaving YC was not working with Jessica
anymore. We'd been working on YC almost the whole time we'd known
each other, and we'd neither tried nor wanted to separate it from
our personal lives, so leaving was like pulling up a deeply rooted
tree.[19]
One way to get more precise about the concept of invented vs
discovered is to talk about space aliens. Any sufficiently advanced
alien civilization would certainly know about the Pythagorean
theorem, for example. I believe, though with less certainty, that
they would also know about the Lisp in McCarthy's 1960 paper.

But if so there's no reason to suppose that this is the limit of
the language that might be known to them. Presumably aliens need
numbers and errors and I/O too. So it seems likely there exists at
least one path out of McCarthy's Lisp along which discoveredness
is preserved.

Thanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel
Gackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj
Taggar for reading drafts of this. |
January 2017

People who are powerful but uncharismatic will tend to be disliked.
Their power makes them a target for criticism that they don't have
the charisma to disarm. That was Hillary Clinton's problem. It also
tends to be a problem for any CEO who is more of a builder than a
schmoozer. And yet the builder-type CEO is (like Hillary) probably
the best person for the job.

I don't think there is any solution to this problem. It's human
nature. The best we can do is to recognize that it's happening, and
to understand that being a magnet for criticism is sometimes a sign
not that someone is the wrong person for a job, but that they're
the right one. |
July 2010

What hard liquor, cigarettes, heroin, and crack have in common is
that they're all more concentrated forms of less addictive predecessors.
Most if not all the things we describe as addictive are. And the
scary thing is, the process that created them is accelerating.

We wouldn't want to stop it. It's the same process that cures
diseases: technological progress. Technological progress means
making things do more of what we want. When the thing we want is
something we want to want, we consider technological progress good.
If some new technique makes solar cells x% more efficient, that
seems strictly better. When progress concentrates something we
don't want to want—when it transforms opium into heroin—it seems
bad. But it's the same process at work. [1]

No one doubts this process is accelerating, which means increasing
numbers of things we like will be transformed into things we like
too much. [2]

As far as I know there's no word for something we like too much.
The closest is the colloquial sense of "addictive." That usage has
become increasingly common during my lifetime. And it's clear why:
there are an increasing number of things we need it for. At the
extreme end of the spectrum are crack and meth. Food has been
transformed by a combination of factory farming and innovations in
food processing into something with way more immediate bang for the
buck, and you can see the results in any town in America. Checkers
and solitaire have been replaced by World of Warcraft and FarmVille.
TV has become much more engaging, and even so it can't compete with Facebook.

The world is more addictive than it was 40 years ago. And unless
the forms of technological progress that produced these things are
subject to different laws than technological progress in general,
the world will get more addictive in the next 40 years than it did
in the last 40.

The next 40 years will bring us some wonderful things. I don't
mean to imply they're all to be avoided. Alcohol is a dangerous
drug, but I'd rather live in a world with wine than one without.
Most people can coexist with alcohol; but you have to be careful.
More things we like will mean more things we have to be careful
about.

Most people won't, unfortunately. Which means that as the world
becomes more addictive, the two senses in which one can live a
normal life will be driven ever further apart. One sense of "normal"
is statistically normal: what everyone else does. The other is the
sense we mean when we talk about the normal operating range of a
piece of machinery: what works best.

These two senses are already quite far apart. Already someone
trying to live well would seem eccentrically abstemious in most of
the US. That phenomenon is only going to become more pronounced.
You can probably take it as a rule of thumb from now on that if
people don't think you're weird, you're living badly.

Societies eventually develop antibodies to addictive new things.
I've seen that happen with cigarettes. When cigarettes first
appeared, they spread the way an infectious disease spreads through
a previously isolated population. Smoking rapidly became a
(statistically) normal thing. There were ashtrays everywhere. We
had ashtrays in our house when I was a kid, even though neither of
my parents smoked. You had to for guests.As knowledge spread about the dangers of smoking, customs changed.
In the last 20 years, smoking has been transformed from something
that seemed totally normal into a rather seedy habit: from something
movie stars did in publicity shots to something small huddles of
addicts do outside the doors of office buildings. A lot of the
change was due to legislation, of course, but the legislation
couldn't have happened if customs hadn't already changed.

It took a while though—on the order of 100 years. And unless the
rate at which social antibodies evolve can increase to match the
accelerating rate at which technological progress throws off new
addictions, we'll be increasingly unable to rely on customs to
protect us.
[3]
Unless we want to be canaries in the coal mine
of each new addiction—the people whose sad example becomes a
lesson to future generations—we'll have to figure out for ourselves
what to avoid and how. It will actually become a reasonable strategy
(or a more reasonable strategy) to suspect
everything new.

In fact, even that won't be enough. We'll have to worry not just
about new things, but also about existing things becoming more
addictive. That's what bit me. I've avoided most addictions, but
the Internet got me because it became addictive while I was using
it. [4]

Most people I know have problems with Internet addiction. We're
all trying to figure out our own customs for getting free of it.
That's why I don't have an iPhone, for example; the last thing I
want is for the Internet to follow me out into the world.
[5]
My latest trick is taking long hikes. I used to think running was a
better form of exercise than hiking because it took less time. Now
the slowness of hiking seems an advantage, because the longer I
spend on the trail, the longer I have to think without interruption.Sounds pretty eccentric, doesn't it? It always will when you're
trying to solve problems where there are no customs yet to guide
you. Maybe I can't plead Occam's razor; maybe I'm simply eccentric.
But if I'm right about the acceleration of addictiveness, then this
kind of lonely squirming to avoid it will increasingly be the fate
of anyone who wants to get things done. We'll increasingly be
defined by what we say no to.
Notes

[1]
Could you restrict technological progress to areas where you
wanted it? Only in a limited way, without becoming a police state.
And even then your restrictions would have undesirable side effects.
"Good" and "bad" technological progress aren't sharply differentiated,
so you'd find you couldn't slow the latter without also slowing the
former. And in any case, as Prohibition and the "war on drugs"
show, bans often do more harm than good.[2]
Technology has always been accelerating. By Paleolithic
standards, technology evolved at a blistering pace in the Neolithic
period.[3]
Unless we mass produce social customs. I suspect the recent
resurgence of evangelical Christianity in the US is partly a reaction
to drugs. In desperation people reach for the sledgehammer; if
their kids won't listen to them, maybe they'll listen to God. But
that solution has broader consequences than just getting kids to
say no to drugs. You end up saying no to
science as well.
I worry we may be heading for a future in which only a few people
plot their own itinerary through no-land, while everyone else books
a package tour. Or worse still, has one booked for them by the
government.[4]
People commonly use the word "procrastination" to describe
what they do on the Internet. It seems to me too mild to describe
what's happening as merely not-doing-work. We don't call it
procrastination when someone gets drunk instead of working.[5]
Several people have told me they like the iPad because it
lets them bring the Internet into situations where a laptop would
be too conspicuous. In other words, it's a hip flask. (This is
true of the iPhone too, of course, but this advantage isn't as
obvious because it reads as a phone, and everyone's used to those.)

Thanks to Sam Altman, Patrick Collison, Jessica Livingston, and
Robert Morris for reading drafts of this. |
November 2022

Since I was about 9 I've been puzzled by the apparent contradiction
between being made of matter that behaves in a predictable way, and
the feeling that I could choose to do whatever I wanted. At the
time I had a self-interested motive for exploring the question. At
that age (like most succeeding ages) I was always in trouble with
the authorities, and it seemed to me that there might possibly be
some way to get out of trouble by arguing that I wasn't responsible
for my actions. I gradually lost hope of that, but the puzzle
remained: How do you reconcile being a machine made of matter with
the feeling that you're free to choose what you do? [1]

The best way to explain the answer may be to start with a slightly
wrong version, and then fix it. The wrong version is: You can do
what you want, but you can't want what you want. Yes, you can control
what you do, but you'll do what you want, and you can't control
that.

The reason this is mistaken is that people do sometimes change what
they want. People who don't want to want something — drug addicts,
for example — can sometimes make themselves stop wanting it. And
people who want to want something — who want to like classical
music, or broccoli — sometimes succeed.

So we modify our initial statement: You can do what you want, but
you can't want to want what you want.

That's still not quite true. It's possible to change what you want
to want. I can imagine someone saying "I decided to stop wanting
to like classical music." But we're getting closer to the truth.
It's rare for people to change what they want to want, and the more
"want to"s we add, the rarer it gets.We can get arbitrarily close to a true statement by adding more "want
to"s in much the same way we can get arbitrarily close to 1 by adding
more 9s to a string of 9s following a decimal point. In practice
three or four "want to"s must surely be enough. It's hard even to
envision what it would mean to change what you want to want to want
to want, let alone actually do it.

So one way to express the correct answer is to use a regular
expression. You can do what you want, but there's some statement
of the form "you can't (want to)* want what you want" that's true.
Ultimately you get back to a want that you don't control.
[2]
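As a small illustration, the pattern can be checked mechanically in Python: each extra repetition of "want to" gives the next statement in the sequence, and every one of them matches the same expression. The snippet is only a sketch of the regex as the essay states it.

    import re

    # The pattern from the essay: "you can't (want to)* want what you want".
    # The star allows zero or more repetitions of "want to ".
    pattern = re.compile(r"you can't (want to )*want what you want")

    statements = [
        "you can't want what you want",                  # zero repetitions
        "you can't want to want what you want",          # one
        "you can't want to want to want what you want",  # two
    ]

    for s in statements:
        print(bool(pattern.fullmatch(s)), s)             # prints True for all three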
Notes

[1]
I didn't know when I was 9 that matter might behave randomly,
but I don't think it affects the problem much. Randomness destroys
the ghost in the machine as effectively as determinism.[2]
If you don't like using an expression, you can make the same
point using higher-order desires: There is some n such that you
don't control your nth-order desires.
Thanks to Trevor Blackwell,
Jessica Livingston, Robert Morris, and
Michael Nielsen for reading drafts of this. |
October 2004
As E. B. White said, "good writing is rewriting." I didn't
realize this when I was in school. In writing, as in math and
science, they only show you the finished product.
You don't see all the false starts. This gives students a
misleading view of how things get made.

Part of the reason it happens is that writers don't want
people to see their mistakes. But I'm willing to let people
see an early draft if it will show how much you have
to rewrite to beat an essay into shape.

Below is the oldest version I can find of
The Age of the Essay
(probably the second or third day), with
text that ultimately survived in
red and text that later
got deleted in gray.
There seem to be several categories of cuts: things I got wrong,
things that seem like bragging, flames,
digressions, stretches of awkward prose, and unnecessary words.

I discarded more from the beginning. That's
not surprising; it takes a while to hit your stride. There
are more digressions at the start, because I'm not sure where
I'm heading.

The amount of cutting is about average. I probably write
three to four words for every one that appears in the final
version of an essay.

(Before anyone gets mad at me for opinions expressed here, remember
that anything you see here that's not in the final version is obviously
something I chose not to publish, often because I disagree
with it.)
Recently a friend said that what he liked about
my essays was that they weren't written the way
we'd been taught to write essays in school. You
remember: topic sentence, introductory paragraph,
supporting paragraphs, conclusion. It hadn't
occurred to me till then that those horrible things
we had to write in school were even connected to
what I was doing now. But sure enough, I thought,
they did call them "essays," didn't they?Well, they're not. Those things you have to write
in school are not only not essays, they're one of the
most pointless of all the pointless hoops you have
to jump through in school. And I worry that they
not only teach students the wrong things about writing,
but put them off writing entirely.So I'm going to give the other side of the story: what
an essay really is, and how you write one. Or at least,
how I write one. Students be forewarned: if you actually write
the kind of essay I describe, you'll probably get bad
grades. But knowing how it's really done should
at least help you to understand the feeling of futility
you have when you're writing the things they tell you to.
The most obvious difference between real essays and
the things one has to write in school is that real
essays are not exclusively about English literature.
It's a fine thing for schools to
teach students how to
write. But for some bizarre reason (actually, a very specific bizarre
reason that I'll explain in a moment),
the teaching of
writing has gotten mixed together with the study
of literature. And so all over the country, students are
writing not about how a baseball team with a small budget
might compete with the Yankees, or the role of color in
fashion, or what constitutes a good dessert, but about
symbolism in Dickens.With obvious
results. Only a few people really
care about
symbolism in Dickens. The teacher doesn't.
The students don't. Most of the people who've had to write PhD
disserations about Dickens don't. And certainly
Dickens himself would be more interested in an essay
about color or baseball.

How did things get this way? To answer that we have to go back
almost a thousand years. Between about 500 and 1000, life was
not very good in Europe. The term "dark ages" is presently
out of fashion as too judgemental (the period wasn't dark;
it was just different), but if this label didn't already
exist, it would seem an inspired metaphor. What little
original thought there was took place in lulls between
constant wars and had something of the character of
the thoughts of parents with a new baby.
The most amusing thing written during this
period, Liudprand of Cremona's Embassy to Constantinople, is,
I suspect, mostly inadvertantly so.Around 1000 Europe began to catch its breath.
And once they
had the luxury of curiosity, one of the first things they discovered
was what we call "the classics."
Imagine if we were visited
by aliens. If they could even get here they'd presumably know a
few things we don't. Immediately Alien Studies would become
the most dynamic field of scholarship: instead of painstakingly
discovering things for ourselves, we could simply suck up
everything they'd discovered. So it was in Europe in 1200.
When classical texts began to circulate in Europe, they contained
not just new answers, but new questions. (If anyone proved
a theorem in christian Europe before 1200, for example, there
is no record of it.)For a couple centuries, some of the most important work
being done was intellectual archaelogy. Those were also
the centuries during which schools were first established.
And since reading ancient texts was the essence of what
scholars did then, it became the basis of the curriculum.By 1700, someone who wanted to learn about
physics didn't need to start by mastering Greek in order to read Aristotle. But schools
change slower than scholarship: the study of
ancient texts
had such prestige that it remained the backbone of
education
until the late 19th century. By then it was merely a tradition.
It did serve some purposes: reading a foreign language was difficult,
and thus taught discipline, or at least, kept students busy;
it introduced students to
cultures quite different from their own; and its very uselessness
made it function (like white gloves) as a social bulwark.
But it certainly wasn't
true, and hadn't been true for centuries, that students were
serving apprenticeships in the hottest area of scholarship.Classical scholarship had also changed. In the early era, philology
actually mattered. The texts that filtered into Europe were
all corrupted to some degree by the errors of translators and
copyists. Scholars had to figure out what Aristotle said
before they could figure out what he meant. But by the modern
era such questions were answered as well as they were ever
going to be. And so the study of ancient texts became less
about ancientness and more about texts.The time was then ripe for the question: if the study of
ancient texts is a valid field for scholarship, why not modern
texts? The answer, of course, is that the raison d'etre
of classical scholarship was a kind of intellectual archaelogy that
does not need to be done in the case of contemporary authors.
But for obvious reasons no one wanted to give that answer.
The archaeological work being mostly done, it implied that
the people studying the classics were, if not wasting their
time, at least working on problems of minor importance.And so began the study of modern literature. There was some
initial resistance, but it didn't last long.
The limiting
reagent in the growth of university departments is what
parents will let undergraduates study. If parents will let
their children major in x, the rest follows straightforwardly.
There will be jobs teaching x, and professors to fill them.
The professors will establish scholarly journals and publish
one another's papers. Universities with x departments will
subscribe to the journals. Graduate students who want jobs
as professors of x will write dissertations about it. It may
take a good long while for the more prestigious universities
to cave in and establish departments in cheesier xes, but
at the other end of the scale there are so many universities
competing to attract students that the mere establishment of
a discipline requires little more than the desire to do it.High schools imitate universities.
And so once university
English departments were established in the late nineteenth century,
the 'riting component of the 3 Rs
was morphed into English.
With the bizarre consequence that high school students now
had to write about English literature-- to write, without
even realizing it, imitations of whatever
English professors had been publishing in their journals a
few decades before. It's no wonder if this seems to the
student a pointless exercise, because we're now three steps
removed from real work: the students are imitating English
professors, who are imitating classical scholars, who are
merely the inheritors of a tradition growing out of what
was, 700 years ago, fascinating and urgently needed work.Perhaps high schools should drop English and just teach writing.
The valuable part of English classes is learning to write, and
that could be taught better by itself. Students learn better
when they're interested in what they're doing, and it's hard
to imagine a topic less interesting than symbolism in Dickens.
Most of the people who write about that sort of thing professionally
are not really interested in it. (Though indeed, it's been a
while since they were writing about symbolism; now they're
writing about gender.)I have no illusions about how eagerly this suggestion will
be adopted. Public schools probably couldn't stop teaching
English even if they wanted to; they're probably required to by
law. But here's a related suggestion that goes with the grain
instead of against it: that universities establish a
writing major. Many of the students who now major in English
would major in writing if they could, and most would
be better off.It will be argued that it is a good thing for students to be
exposed to their literary heritage. Certainly. But is that
more important than that they learn to write well? And are
English classes even the place to do it? After all,
the average public high school student gets zero exposure to
his artistic heritage. No disaster results.
The people who are interested in art learn about it for
themselves, and those who aren't don't. I find that American
adults are no better or worse informed about literature than
art, despite the fact that they spent years studying literature
in high school and no time at all studying art. Which presumably
means that what they're taught in school is rounding error
compared to what they pick up on their own.Indeed, English classes may even be harmful. In my case they
were effectively aversion therapy. Want to make someone dislike
a book? Force him to read it and write an essay about it.
And make the topic so intellectually bogus that you
could not, if asked, explain why one ought to write about it.
I love to read more than anything, but by the end of high school
I never read the books we were assigned. I was so disgusted with
what we were doing that it became a point of honor
with me to write nonsense at least as good at the other students'
without having more than glanced over the book to learn the names
of the characters and a few random events in it.I hoped this might be fixed in college, but I found the same
problem there. It was not the teachers. It was English.
We were supposed to read novels and write essays about them.
About what, and why? That no one seemed to be able to explain.
Eventually by trial and error I found that what the teacher
wanted us to do was pretend that the story had really taken
place, and to analyze based on what the characters said and did (the
subtler clues, the better) what their motives must have been.
One got extra credit for motives having to do with class,
as I suspect one must now for those involving gender and
sexuality. I learned how to churn out such stuff well enough
to get an A, but I never took another English class.And the books we did these disgusting things to, like those
we mishandled in high school, I find still have black marks
against them in my mind. The one saving grace was that
English courses tend to favor pompous, dull writers like
Henry James, who deserve black marks against their names anyway.
One of the principles the IRS uses in deciding whether to
allow deductions is that, if something is fun, it isn't work.
Fields that are intellectually unsure of themselves rely on
a similar principle. Reading P.G. Wodehouse or Evelyn Waugh or
Raymond Chandler is too obviously pleasing to seem like
serious work, as reading Shakespeare would have been before
English evolved enough to make it an effort to understand him. [sh]
And so good writers (just you wait and see who's still in
print in 300 years) are less likely to have readers turned
against them by clumsy, self-appointed tour guides.
The other big difference between a real essay and the
things
they make you write in school is that a real essay doesn't
take a position and then defend it. That principle,
like the idea that we ought to be writing about literature,
turns out to be another intellectual hangover of long
forgotten origins. It's often mistakenly believed that
medieval universities were mostly seminaries. In fact they
were more law schools. And at least in our tradition
lawyers are advocates: they are
trained to be able to
take
either side of an argument and make as good a case for it
as they can. Whether or not this is a good idea (in the case of prosecutors,
it probably isn't), it tended to pervade
the atmosphere of
early universities. After the lecture the most common form
of discussion was the disputation. This idea
is at least
nominally preserved in our present-day thesis defense-- indeed,
in the very word thesis. Most people treat the words
thesis
and dissertation as interchangeable, but originally, at least,
a thesis was a position one took and the dissertation was
the argument by which one defended it.I'm not complaining that we blur these two words together.
As far as I'm concerned, the sooner we lose the original
sense of the word thesis, the better. For many, perhaps most,
graduate students, it is stuffing a square peg into a round
hole to try to recast one's work as a single thesis. And
as for the disputation, that seems clearly a net lose.
Arguing two sides of a case may be a necessary evil in a
legal dispute, but it's not the best way to get at the truth,
as I think lawyers would be the first to admit.
And yet this principle is built into the very structure of
the essays
they teach you to write in high school. The topic
sentence is your thesis, chosen in advance, the supporting
paragraphs the blows you strike in the conflict, and the
conclusion--- uh, what it the conclusion? I was never sure
about that in high school. If your thesis was well expressed,
what need was there to restate it? In theory it seemed that
the conclusion of a really good essay ought not to need to
say any more than QED.
But when you understand the origins
of this sort of "essay", you can see where the
conclusion comes from. It's the concluding remarks to the
jury.
What other alternative is there? To answer that
we have to
reach back into history again, though this time not so far.
To Michel de Montaigne, inventor of the essay.
He was
doing something quite different from what a
lawyer does,
and
the difference is embodied in the name. Essayer is the French
verb meaning "to try" (the cousin of our word assay),
and an "essai" is an effort.
An essay is something you
write in order
to figure something out.

Figure out what? You don't know yet. And so you can't begin with a
thesis, because you don't have one, and may never have
one. An essay doesn't begin with a statement, but with a
question. In a real essay, you don't take a position and
defend it. You see a door that's ajar, and you open it and
walk in to see what's inside.

If all you want to do is figure things out, why do you need
to write anything, though? Why not just sit and think? Well,
there precisely is Montaigne's great discovery. Expressing
ideas helps to form them. Indeed, helps is far too weak a
word. 90%
of what ends up in my essays was stuff
I only
thought of when I sat down to write them. That's why I
write them.So there's another difference between essays and
the things
you have to write in school. In school
you are, in theory,
explaining yourself to someone else. In the best case---if
you're really organized---you're just writing it down.
In a real essay you're writing for yourself. You're
thinking out loud.But not quite. Just as inviting people over forces you to
clean up your apartment, writing something that you know
other people will read forces you to think well. So it
does matter to have an audience. The things I've written
just for myself are no good. Indeed, they're bad in
a particular way:
they tend to peter out. When I run into
difficulties, I notice that I
tend to conclude with a few vague
questions and then drift off to get a cup of tea.This seems a common problem.
It's practically the standard
ending in blog entries--- with the addition of a "heh" or an
emoticon, prompted by the all too accurate sense that
something is missing.And indeed, a lot of
published essays peter out in this
same way.
Particularly the sort written by the staff writers of newsmagazines. Outside writers tend to supply
editorials of the defend-a-position variety, which
make a beeline toward a rousing (and
foreordained) conclusion. But the staff writers feel
obliged to write something more
balanced, which in
practice ends up meaning blurry.
Since they're
writing for a popular magazine, they start with the
most radioactively controversial questions, from which
(because they're writing for a popular magazine)
they then proceed to recoil from
in terror.
Gay marriage, for or
against? This group says one thing. That group says
another. One thing is certain: the question is a
complex one. (But don't get mad at us. We didn't
draw any conclusions.)

Questions aren't enough. An essay has to come up with answers.
They don't always, of course. Sometimes you start with a
promising question and get nowhere. But those you don't
publish. Those are like experiments that get inconclusive
results. Something you publish ought to tell the reader
something he didn't already know.
But what you tell him doesn't matter, so long as
it's interesting. I'm sometimes accused of meandering.
In defend-a-position writing that would be a flaw.
There you're not concerned with truth. You already
know where you're going, and you want to go straight there,
blustering through obstacles, and hand-waving
your way across swampy ground. But that's not what
you're trying to do in an essay. An essay is supposed to
be a search for truth. It would be suspicious if it didn't
meander.

The Meander is a river in Asia Minor (aka
Turkey).
As you might expect, it winds all over the place.
But does it
do this out of frivolity? Quite the opposite.
Like all rivers, it's rigorously following the laws of physics.
The path it has discovered,
winding as it is, represents
the most economical route to the sea.

The river's algorithm is simple. At each step, flow down.
For the essayist this translates to: flow interesting.
Of all the places to go next, choose
whichever seems
most interesting.

I'm pushing this metaphor a bit. An essayist
can't have
quite as little foresight as a river. In fact what you do
(or what I do) is somewhere between a river and a roman
road-builder. I have a general idea of the direction
I want to go in, and
I choose the next topic with that in mind. This essay is
about writing, so I do occasionally yank it back in that
direction, but it is not all the sort of essay I
thought I was going to write about writing.

Note too that hill-climbing (which is what this algorithm is
called) can get you in trouble.
Sometimes, just
like a river,
you
run up against a blank wall. What
I do then is just
what the river does: backtrack.
At one point in this essay
I found that after following a certain thread I ran out
of ideas. I had to go back n
paragraphs and start over
in another direction. For illustrative purposes I've left
the abandoned branch as a footnote.
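For anyone who wants the algorithm itself rather than the metaphor, here is a minimal sketch of hill-climbing with backtracking; the toy graph and the interestingness scores are invented purely for illustration. Greedily take the most promising next step, and when you hit a dead end, back up and try the next branch.

    # A minimal sketch of hill-climbing with backtracking: at each step take
    # the most promising unvisited neighbor; at a dead end, back up one step.
    # The graph and the scores below are invented purely for illustration.

    def hill_climb(graph, score, start, goal):
        path, visited = [start], {start}
        while path:
            node = path[-1]
            if node == goal:
                return path
            candidates = [n for n in graph[node] if n not in visited]
            if not candidates:
                path.pop()                     # dead end: backtrack, like the river
                continue
            best = max(candidates, key=score)  # "flow interesting"
            visited.add(best)
            path.append(best)
        return None

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
    score = {"B": 5, "C": 2, "D": 1, "E": 4, "F": 9}.get
    print(hill_climb(graph, score, "A", "F"))  # ['A', 'C', 'E', 'F'], after backing out of B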
Err on the side of the river. An essay is not a reference
work. It's not something you read looking for a specific
answer, and feel cheated if you don't find it. I'd much
rather read an essay that went off in an unexpected but
interesting direction than one that plodded dutifully along
a prescribed course.

So what's interesting? For me, interesting means surprise.
Design, as Matz
has said, should follow the principle of
least surprise.
A button that looks like it will make a
machine stop should make it stop, not speed up. Essays
should do the opposite. Essays should aim for maximum
surprise.

I was afraid of flying for a long time and could only travel
vicariously. When friends came back from faraway places,
it wasn't just out of politeness that I asked them about
their trip.
I really wanted to know. And I found that
the best way to get information out of them was to ask
what surprised them. How was the place different from what
they expected? This is an extremely useful question.
You can ask it of even
the most unobservant people, and it will
extract information they didn't even know they were
recording. Indeed, you can ask it in real time. Now when I go somewhere
new, I make a note of what surprises me about it. Sometimes I
even make a conscious effort to visualize the place beforehand,
so I'll have a detailed image to diff with reality.
Surprises are facts
you didn't already
know.
But they're
more than that. They're facts
that contradict things you
thought you knew. And so they're the most valuable sort of
fact you can get. They're like a food that's not merely
healthy, but counteracts the unhealthy effects of things
you've already eaten.
How do you find surprises? Well, therein lies half
the work of essay writing. (The other half is expressing
yourself well.) You can at least
use yourself as a
proxy for the reader. You should only write about things
you've thought about a lot. And anything you come across
that surprises you, who've thought about the topic a lot,
will probably surprise most readers.For example, in a recent essay I pointed out that because
you can only judge computer programmers by working with
them, no one knows in programming who the heroes should
be.
I
certainly
didn't realize this when I started writing
the
essay, and even now I find it kind of weird. That's
what you're looking for.

So if you want to write essays, you need two ingredients:
you need
a few topics that you think about a lot, and you
need some ability to ferret out the unexpected.What should you think about? My guess is that it
doesn't matter. Almost everything is
interesting if you get deeply
enough into it. The one possible exception
are
things
like working in fast food, which
have deliberately had all
the variation sucked out of them.
In retrospect, was there
anything interesting about working in Baskin-Robbins?
Well, it was interesting to notice
how important color was
to the customers. Kids a certain age would point into
the case and say that they wanted yellow. Did they want
French Vanilla or Lemon? They would just look at you
blankly. They wanted yellow. And then there was the
mystery of why the perennial favorite Pralines n' Cream
was so appealing. I'm inclined now to
think it was the salt.
And the mystery of why Passion Fruit tasted so disgusting.
People would order it because of the name, and were always
disappointed. It should have been called In-sink-erator
Fruit.
And there was
the difference in the way fathers and
mothers bought ice cream for their kids.
Fathers tended to
adopt the attitude of
benevolent kings bestowing largesse,
and mothers that of
harried bureaucrats,
giving in to
pressure against their better judgement.
So, yes, there does seem to be material, even in
fast food.What about the other half, ferreting out the unexpected?
That may require some natural ability. I've noticed for
a long time that I'm pathologically observant. ....

[That was as far as I'd gotten at the time.]

Notes

[sh] In Shakespeare's own time, serious writing meant theological
discourses, not the bawdy plays acted over on the other
side of the river among the bear gardens and whorehouses.

The other extreme, the work that seems formidable from the moment
it's created (indeed, is deliberately intended to be)
is represented by Milton. Like the Aeneid, Paradise Lost is a
rock imitating a butterfly that happened to get fossilized.
Even Samuel Johnson seems to have balked at this, on the one
hand paying Milton the compliment of an extensive biography,
and on the other writing of Paradise Lost that "none who read it
ever wished it longer." |
May 2004

When people care enough about something to do it well, those who
do it best tend to be far better than everyone else. There's a
huge gap between Leonardo and second-rate contemporaries like
Borgognone. You see the same gap between Raymond Chandler and the
average writer of detective novels. A top-ranked professional chess
player could play ten thousand games against an ordinary club player
without losing once.Like chess or painting or writing novels, making money is a very
specialized skill. But for some reason we treat this skill
differently. No one complains when a few people surpass all the
rest at playing chess or writing novels, but when a few people make
more money than the rest, we get editorials saying this is wrong.

Why? The pattern of variation seems no different than for any other
skill. What causes people to react so strongly when the skill is
making money?

I think there are three reasons we treat making money as different:
the misleading model of wealth we learn as children; the disreputable
way in which, till recently, most fortunes were accumulated; and
the worry that great variations in income are somehow bad for
society. As far as I can tell, the first is mistaken, the second
outdated, and the third empirically false. Could it be that, in a
modern democracy, variation in income is actually a sign of health?

The Daddy Model of Wealth

When I was five I thought electricity was created by electric
sockets. I didn't realize there were power plants out there
generating it. Likewise, it doesn't occur to most kids that wealth
is something that has to be generated. It seems to be something
that flows from parents.Because of the circumstances in which they encounter it, children
tend to misunderstand wealth. They confuse it with money. They
think that there is a fixed amount of it. And they think of it as
something that's distributed by authorities (and so should be
distributed equally), rather than something that has to be created
(and might be created unequally).

In fact, wealth is not money. Money is just a convenient way of
trading one form of wealth for another. Wealth is the underlying
stuff—the goods and services we buy. When you travel to a
rich or poor country, you don't have to look at people's bank
accounts to tell which kind you're in. You can see
wealth—in buildings and streets, in the clothes and the health
of the people.

Where does wealth come from? People make it. This was easier to
grasp when most people lived on farms, and made many of the things
they wanted with their own hands. Then you could see in the house,
the herds, and the granary the wealth that each family created. It
was obvious then too that the wealth of the world was not a fixed
quantity that had to be shared out, like slices of a pie. If you
wanted more wealth, you could make it.This is just as true today, though few of us create wealth directly
for ourselves (except for a few vestigial domestic tasks). Mostly
we create wealth for other people in exchange for money, which we
then trade for the forms of wealth we want. [1]

Because kids are unable to create wealth, whatever they have has
to be given to them. And when wealth is something you're given,
then of course it seems that it should be distributed equally.
[2]
As in most families it is. The kids see to that. "Unfair," they
cry, when one sibling gets more than another.

In the real world, you can't keep living off your parents. If you
want something, you either have to make it, or do something of
equivalent value for someone else, in order to get them to give you
enough money to buy it. In the real world, wealth is (except for
a few specialists like thieves and speculators) something you have
to create, not something that's distributed by Daddy. And since
the ability and desire to create it vary from person to person,
it's not made equally.

You get paid by doing or making something people want, and those
who make more money are often simply better at doing what people
want. Top actors make a lot more money than B-list actors. The
B-list actors might be almost as charismatic, but when people go
to the theater and look at the list of movies playing, they want
that extra oomph that the big stars have.Doing what people want is not the only way to get money, of course.
You could also rob banks, or solicit bribes, or establish a monopoly.
Such tricks account for some variation in wealth, and indeed for
some of the biggest individual fortunes, but they are not the root
cause of variation in income. The root cause of variation in income,
as Occam's Razor implies, is the same as the root cause of variation
in every other human skill.

In the United States, the CEO of a large public company makes about
100 times as much as the average person.
[3]
Basketball players
make about 128 times as much, and baseball players 72 times as much.
Editorials quote this kind of statistic with horror. But I have
no trouble imagining that one person could be 100 times as productive
as another. In ancient Rome the price of slaves varied by
a factor of 50 depending on their skills.
[4]
And that's without
considering motivation, or the extra leverage in productivity that
you can get from modern technology.Editorials about athletes' or CEOs' salaries remind me of early
Christian writers, arguing from first principles about whether the
Earth was round, when they could just walk outside and check.
[5]
How much someone's work is worth is not a policy question. It's
something the market already determines."Are they really worth 100 of us?" editorialists ask. Depends on
what you mean by worth. If you mean worth in the sense of what
people will pay for their skills, the answer is yes, apparently.A few CEOs' incomes reflect some kind of wrongdoing. But are there
not others whose incomes really do reflect the wealth they generate?
Steve Jobs saved a company that was in a terminal decline. And not
merely in the way a turnaround specialist does, by cutting costs;
he had to decide what Apple's next products should be. Few others
could have done it. And regardless of the case with CEOs, it's
hard to see how anyone could argue that the salaries of professional
basketball players don't reflect supply and demand.

It may seem unlikely in principle that one individual could really
generate so much more wealth than another. The key to this mystery
is to revisit that question, are they really worth 100 of us?
Would a basketball team trade one of their players for 100
random people? What would Apple's next product look like if you
replaced Steve Jobs with a committee of 100 random people?
[6]
These
things don't scale linearly. Perhaps the CEO or the professional
athlete has only ten times (whatever that means) the skill and
determination of an ordinary person. But it makes all the difference
that it's concentrated in one individual.

When we say that one kind of work is overpaid and another underpaid,
what are we really saying? In a free market, prices are determined
by what buyers want. People like baseball more than poetry, so
baseball players make more than poets. To say that a certain kind
of work is underpaid is thus identical with saying that people want
the wrong things.Well, of course people want the wrong things. It seems odd to be
surprised by that. And it seems even odder to say that it's
unjust that certain kinds of work are underpaid.
[7]
Then
you're saying that it's unjust that people want the wrong things.
It's lamentable that people prefer reality TV and corndogs to
Shakespeare and steamed vegetables, but unjust? That seems like
saying that blue is heavy, or that up is circular.

The appearance of the word "unjust" here is the unmistakable spectral
signature of the Daddy Model. Why else would this idea occur in
this odd context? Whereas if the speaker were still operating on
the Daddy Model, and saw wealth as something that flowed from a
common source and had to be shared out, rather than something
generated by doing what other people wanted, this is exactly what
you'd get on noticing that some people made much more than others.

When we talk about "unequal distribution of income," we should
also ask, where does that income come from?
[8]
Who made the wealth
it represents? Because to the extent that income varies simply
according to how much wealth people create, the distribution may
be unequal, but it's hardly unjust.

Stealing It

The second reason we tend to find great disparities of wealth
alarming is that for most of human history the usual way to accumulate
a fortune was to steal it: in pastoral societies by cattle raiding;
in agricultural societies by appropriating others' estates in times
of war, and taxing them in times of peace.In conflicts, those on the winning side would receive the estates
confiscated from the losers. In England in the 1060s, when William
the Conqueror distributed the estates of the defeated Anglo-Saxon
nobles to his followers, the conflict was military. By the 1530s,
when Henry VIII distributed the estates of the monasteries to his
followers, it was mostly political.
[9]
But the principle was the
same. Indeed, the same principle is at work now in Zimbabwe.In more organized societies, like China, the ruler and his officials
used taxation instead of confiscation. But here too we see the
same principle: the way to get rich was not to create wealth, but
to serve a ruler powerful enough to appropriate it.This started to change in Europe with the rise of the middle class.
Now we think of the middle class as people who are neither rich nor
poor, but originally they were a distinct group. In a feudal
society, there are just two classes: a warrior aristocracy, and the
serfs who work their estates. The middle class were a new, third
group who lived in towns and supported themselves by manufacturing
and trade.Starting in the tenth and eleventh centuries, petty nobles and
former serfs banded together in towns that gradually became powerful
enough to ignore the local feudal lords.
[10]
Like serfs, the middle
class made a living largely by creating wealth. (In port cities
like Genoa and Pisa, they also engaged in piracy.) But unlike serfs
they had an incentive to create a lot of it. Any wealth a serf
created belonged to his master. There was not much point in making
more than you could hide. Whereas the independence of the townsmen
allowed them to keep whatever wealth they created.Once it became possible to get rich by creating wealth, society as
a whole started to get richer very rapidly. Nearly everything we
have was created by the middle class. Indeed, the other two classes
have effectively disappeared in industrial societies, and their
names been given to either end of the middle class. (In the original
sense of the word, Bill Gates is middle class.)

But it was not till the Industrial Revolution that wealth creation
definitively replaced corruption as the best way to get rich. In
England, at least, corruption only became unfashionable (and in
fact only started to be called "corruption") when there started to
be other, faster ways to get rich.Seventeenth-century England was much like the third world today,
in that government office was a recognized route to wealth. The
great fortunes of that time still derived more from what we would
now call corruption than from commerce.
[11]
By the nineteenth
century that had changed. There continued to be bribes, as there
still are everywhere, but politics had by then been left to men who
were driven more by vanity than greed. Technology had made it
possible to create wealth faster than you could steal it. The
prototypical rich man of the nineteenth century was not a courtier
but an industrialist.With the rise of the middle class, wealth stopped being a zero-sum
game. Jobs and Wozniak didn't have to make us poor to make themselves
rich. Quite the opposite: they created things that made our lives
materially richer. They had to, or we wouldn't have paid for them.

But since for most of the world's history the main route to wealth
was to steal it, we tend to be suspicious of rich people. Idealistic
undergraduates find their unconsciously preserved child's model of
wealth confirmed by eminent writers of the past. It is a case of
the mistaken meeting the outdated."Behind every great fortune, there is a crime," Balzac wrote. Except
he didn't. What he actually said was that a great fortune with no
apparent cause was probably due to a crime well enough executed
that it had been forgotten. If we were talking about Europe in
1000, or most of the third world today, the standard misquotation
would be spot on. But Balzac lived in nineteenth-century France,
where the Industrial Revolution was well advanced. He knew you
could make a fortune without stealing it. After all, he did himself,
as a popular novelist. [12]

Only a few countries (by no coincidence, the richest ones) have
reached this stage. In most, corruption still has the upper hand.
In most, the fastest way to get wealth is by stealing it. And so
when we see increasing differences in income in a rich country,
there is a tendency to worry that it's sliding back toward becoming
another Venezuela. I think the opposite is happening. I think
you're seeing a country a full step ahead of Venezuela.

The Lever of Technology

Will technology increase the gap between rich and poor? It will
certainly increase the gap between the productive and the unproductive.
That's the whole point of technology. With a tractor an energetic
farmer could plow six times as much land in a day as he could with
a team of horses. But only if he mastered a new kind of farming.I've seen the lever of technology grow visibly in my own time. In
high school I made money by mowing lawns and scooping ice cream at
Baskin-Robbins. This was the only kind of work available at the
time. Now high school kids could write software or design web
sites. But only some of them will; the rest will still be scooping
ice cream.I remember very vividly when in 1985 improved technology made it
possible for me to buy a computer of my own. Within months I was
using it to make money as a freelance programmer. A few years
before, I couldn't have done this. A few years before, there was
no such thing as a freelance programmer. But Apple created
wealth, in the form of powerful, inexpensive computers, and programmers
immediately set to work using it to create more.As this example suggests, the rate at which technology increases
our productive capacity is probably exponential, rather than linear.
So we should expect to see ever-increasing variation in individual
productivity as time goes on. Will that increase the gap between
rich and poor? Depends which gap you mean.

Technology should increase the gap in income, but it seems to
decrease other gaps. A hundred years ago, the rich led a different
kind of life from ordinary people. They lived in houses
full of servants, wore elaborately uncomfortable clothes, and
travelled about in carriages drawn by teams of horses which themselves
required their own houses and servants. Now, thanks to technology,
the rich live more like the average person.Cars are a good example of why. It's possible to buy expensive,
handmade cars that cost hundreds of thousands of dollars. But there
is not much point. Companies make more money by building a large
number of ordinary cars than a small number of expensive ones. So
a company making a mass-produced car can afford to spend a lot more
on its design. If you buy a custom-made car, something will always
be breaking. The only point of buying one now is to advertise that
you can.Or consider watches. Fifty years ago, by spending a lot of money
on a watch you could get better performance. When watches had
mechanical movements, expensive watches kept better time. Not any
more. Since the invention of the quartz movement, an ordinary Timex
is more accurate than a Patek Philippe costing hundreds of thousands
of dollars.
[13]
Indeed, as with expensive cars, if you're determined
to spend a lot of money on a watch, you have to put up with some
inconvenience to do it: as well as keeping worse time, mechanical
watches have to be wound.The only thing technology can't cheapen is brand. Which is precisely
why we hear ever more about it. Brand is the residue left as the
substantive differences between rich and poor evaporate. But what
label you have on your stuff is a much smaller matter than having
it versus not having it. In 1900, if you kept a carriage, no one
asked what year or brand it was. If you had one, you were rich.
And if you weren't rich, you took the omnibus or walked. Now even
the poorest Americans drive cars, and it is only because we're so
well trained by advertising that we can even recognize the especially
expensive ones.
[14]The same pattern has played out in industry after industry. If
there is enough demand for something, technology will make it cheap
enough to sell in large volumes, and the mass-produced versions
will be, if not better, at least more convenient.
[15]
And there
is nothing the rich like more than convenience. The rich people I
know drive the same cars, wear the same clothes, have the same kind
of furniture, and eat the same foods as my other friends. Their
houses are in different neighborhoods, or if in the same neighborhood
are different sizes, but within them life is similar. The houses
are made using the same construction techniques and contain much
the same objects. It's inconvenient to do something expensive and
custom.The rich spend their time more like everyone else too. Bertie
Wooster seems long gone. Now, most people who are rich enough not
to work do anyway. It's not just social pressure that makes them;
idleness is lonely and demoralizing.Nor do we have the social distinctions there were a hundred years
ago. The novels and etiquette manuals of that period read now
like descriptions of some strange tribal society. "With respect
to the continuance of friendships..." hints Mrs. Beeton's Book
of Household Management (1880), "it may be found necessary, in
some cases, for a mistress to relinquish, on assuming the responsibility
of a household, many of those commenced in the earlier part of her
life." A woman who married a rich man was expected to drop friends
who didn't. You'd seem a barbarian if you behaved that way today.
You'd also have a very boring life. People still tend to segregate
themselves somewhat, but much more on the basis of education than
wealth.
[16]Materially and socially, technology seems to be decreasing the gap
between the rich and the poor, not increasing it. If Lenin walked
around the offices of a company like Yahoo or Intel or Cisco, he'd
think communism had won. Everyone would be wearing the same clothes,
have the same kind of office (or rather, cubicle) with the same
furnishings, and address one another by their first names instead
of by honorifics. Everything would seem exactly as he'd predicted,
until he looked at their bank accounts. Oops.Is it a problem if technology increases that gap? It doesn't seem
to be so far. As it increases the gap in income, it seems to
decrease most other gaps.

Alternative to an Axiom

One often hears a policy criticized on the grounds that it would
increase the income gap between rich and poor. As if it were an
axiom that this would be bad. It might be true that increased
variation in income would be bad, but I don't see how we can say
it's axiomatic.Indeed, it may even be false, in industrial democracies. In a
society of serfs and warlords, certainly, variation in income is a
sign of an underlying problem. But serfdom is not the only cause
of variation in income. A 747 pilot doesn't make 40 times as much
as a checkout clerk because he is a warlord who somehow holds her
in thrall. His skills are simply much more valuable.I'd like to propose an alternative idea: that in a modern society,
increasing variation in income is a sign of health. Technology
seems to increase the variation in productivity at faster than
linear rates. If we don't see corresponding variation in income,
there are three possible explanations: (a) that technical innovation
has stopped, (b) that the people who would create the most wealth
aren't doing it, or (c) that they aren't getting paid for it.I think we can safely say that (a) and (b) would be bad. If you
disagree, try living for a year using only the resources available
to the average Frankish nobleman in 800, and report back to us.
(I'll be generous and not send you back to the stone age.)The only option, if you're going to have an increasingly prosperous
society without increasing variation in income, seems to be (c),
that people will create a lot of wealth without being paid for it.
That Jobs and Wozniak, for example, will cheerfully work 20-hour
days to produce the Apple computer for a society that allows them,
after taxes, to keep just enough of their income to match what they
would have made working 9 to 5 at a big company.Will people create wealth if they can't get paid for it? Only if
it's fun. People will write operating systems for free. But they
won't install them, or take support calls, or train customers to
use them. And at least 90% of the work that even the highest tech
companies do is of this second, unedifying kind.All the unfun kinds of wealth creation slow dramatically in a society
that confiscates private fortunes. We can confirm this empirically.
Suppose you hear a strange noise that you think may be due to a
nearby fan. You turn the fan off, and the noise stops. You turn
the fan back on, and the noise starts again. Off, quiet. On,
noise. In the absence of other information, it would seem the noise
is caused by the fan.At various times and places in history, whether you could accumulate
a fortune by creating wealth has been turned on and off. Northern
Italy in 800, off (warlords would steal it). Northern Italy in
1100, on. Central France in 1100, off (still feudal). England in
1800, on. England in 1974, off (98% tax on investment income).
United States in 1974, on. We've even had a twin study: West
Germany, on; East Germany, off. In every case, the creation of
wealth seems to appear and disappear like the noise of a fan as you
switch on and off the prospect of keeping it.There is some momentum involved. It probably takes at least a
generation to turn people into East Germans (luckily for England).
But if it were merely a fan we were studying, without all the extra
baggage that comes from the controversial topic of wealth, no one
would have any doubt that the fan was causing the noise.If you suppress variations in income, whether by stealing private
fortunes, as feudal rulers used to do, or by taxing them away, as
some modern governments have done, the result always seems to be
the same. Society as a whole ends up poorer.If I had a choice of living in a society where I was materially
much better off than I am now, but was among the poorest, or in one
where I was the richest, but much worse off than I am now, I'd take
the first option. If I had children, it would arguably be immoral
not to. It's absolute poverty you want to avoid, not relative
poverty. If, as the evidence so far implies, you have to have one
or the other in your society, take relative poverty.You need rich people in your society not so much because in spending
their money they create jobs, but because of what they have to do
to get rich. I'm not talking about the trickle-down effect
here. I'm not saying that if you let Henry Ford get rich, he'll
hire you as a waiter at his next party. I'm saying that he'll make
you a tractor to replace your horse.

Notes

[1]
Part of the reason this subject is so contentious is that some
of those most vocal on the subject of wealth—university
students, heirs, professors, politicians, and journalists—have
the least experience creating it. (This phenomenon will be familiar
to anyone who has overheard conversations about sports in a bar.)Students are mostly still on the parental dole, and have not stopped
to think about where that money comes from. Heirs will be on the
parental dole for life. Professors and politicians live within
socialist eddies of the economy, at one remove from the creation
of wealth, and are paid a flat rate regardless of how hard they
work. And journalists as part of their professional code segregate
themselves from the revenue-collecting half of the businesses they
work for (the ad sales department). Many of these people never
come face to face with the fact that the money they receive represents
wealth—wealth that, except in the case of journalists, someone
else created earlier. They live in a world in which income is
doled out by a central authority according to some abstract notion
of fairness (or randomly, in the case of heirs), rather than given
by other people in return for something they wanted, so it may seem
to them unfair that things don't work the same in the rest of the
economy.(Some professors do create a great deal of wealth for
society. But the money they're paid isn't a quid pro quo.
It's more in the nature of an investment.)[2]
When one reads about the origins of the Fabian Society, it
sounds like something cooked up by the high-minded Edwardian
child-heroes of Edith Nesbit's The Wouldbegoods.[3]
According to a study by the Corporate Library, the median total
compensation, including salary, bonus, stock grants, and the exercise
of stock options, of S&P 500 CEOs in 2002 was $3.65 million.
According to Sports Illustrated, the average NBA player's
salary during the 2002-03 season was $4.54 million, and the average
major league baseball player's salary at the start of the 2003
season was $2.56 million. According to the Bureau of Labor
Statistics, the mean annual wage in the US in 2002 was $35,560.[4]
In the early empire the price of an ordinary adult slave seems
to have been about 2,000 sestertii (e.g. Horace, Sat. ii.7.43).
A servant girl cost 600 (Martial vi.66), while Columella (iii.3.8)
says that a skilled vine-dresser was worth 8,000. A doctor, P.
Decimus Eros Merula, paid 50,000 sestertii for his freedom (Dessau,
Inscriptiones 7812). Seneca (Ep. xxvii.7) reports
that one Calvisius Sabinus paid 100,000 sestertii apiece for slaves
learned in the Greek classics. Pliny (Hist. Nat. vii.39)
says that the highest price paid for a slave up to his time was
700,000 sestertii, for the linguist (and presumably teacher) Daphnis,
but that this had since been exceeded by actors buying their own
freedom.Classical Athens saw a similar variation in prices. An ordinary
laborer was worth about 125 to 150 drachmae. Xenophon (Mem.
ii.5) mentions prices ranging from 50 to 6,000 drachmae (for the
manager of a silver mine).For more on the economics of ancient slavery see:Jones, A. H. M., "Slavery in the Ancient World," Economic History
Review, 2:9 (1956), 185-199, reprinted in Finley, M. I. (ed.),
Slavery in Classical Antiquity, Heffer, 1964.[5]
Eratosthenes (276—195 BC) used shadow lengths in different
cities to estimate the Earth's circumference. He was off by only
about 2%.[6]
No, and Windows, respectively.[7]
One of the biggest divergences between the Daddy Model and
reality is the valuation of hard work. In the Daddy Model, hard
work is in itself deserving. In reality, wealth is measured by
what one delivers, not how much effort it costs. If I paint someone's
house, the owner shouldn't pay me extra for doing it with a toothbrush.It will seem to someone still implicitly operating on the Daddy
Model that it is unfair when someone works hard and doesn't get
paid much. To help clarify the matter, get rid of everyone else
and put our worker on a desert island, hunting and gathering fruit.
If he's bad at it he'll work very hard and not end up with much
food. Is this unfair? Who is being unfair to him?[8]
Part of the reason for the tenacity of the Daddy Model may be
the dual meaning of "distribution." When economists talk about
"distribution of income," they mean statistical distribution. But
when you use the phrase frequently, you can't help associating it
with the other sense of the word (as in e.g. "distribution of alms"),
and thereby subconsciously seeing wealth as something that flows
from some central tap. The word "regressive" as applied to tax
rates has a similar effect, at least on me; how can anything
regressive be good?[9]
"From the beginning of the reign Thomas Lord Roos was an assiduous
courtier of the young Henry VIII and was soon to reap the rewards.
In 1525 he was made a Knight of the Garter and given the Earldom
of Rutland. In the thirties his support of the breach with Rome,
his zeal in crushing the Pilgrimage of Grace, and his readiness to
vote the death-penalty in the succession of spectacular treason
trials that punctuated Henry's erratic matrimonial progress made
him an obvious candidate for grants of monastic property."Stone, Lawrence, Family and Fortune: Studies in Aristocratic
Finance in the Sixteenth and Seventeenth Centuries, Oxford
University Press, 1973, p. 166.[10]
There is archaeological evidence for large settlements earlier,
but it's hard to say what was happening in them.Hodges, Richard and David Whitehouse, Mohammed, Charlemagne and
the Origins of Europe, Cornell University Press, 1983.[11]
William Cecil and his son Robert were each in turn the most
powerful minister of the crown, and both used their position to
amass fortunes among the largest of their times. Robert in particular
took bribery to the point of treason. "As Secretary of State and
the leading advisor to King James on foreign policy, [he] was a
special recipient of favour, being offered large bribes by the Dutch
not to make peace with Spain, and large bribes by Spain to make
peace." (Stone, op. cit., p. 17.)[12]
Though Balzac made a lot of money from writing, he was notoriously
improvident and was troubled by debts all his life.[13]
A Timex will gain or lose about .5 seconds per day. The most
accurate mechanical watch, the Patek Philippe 10 Day Tourbillon,
is rated at -1.5 to +2 seconds. Its retail price is about $220,000.[14]
If asked to choose which was more expensive, a well-preserved
1989 Lincoln Town Car ten-passenger limousine ($5,000) or a 2004
Mercedes S600 sedan ($122,000), the average Edwardian might well
guess wrong.[15]
To say anything meaningful about income trends, you have to
talk about real income, or income as measured in what it can buy.
But the usual way of calculating real income ignores much of the
growth in wealth over time, because it depends on a consumer price
index created by bolting end to end a series of numbers that are
only locally accurate, and that don't include the prices of new
inventions until they become so common that their prices stabilize.So while we might think it was very much better to live in a world
with antibiotics or air travel or an electric power grid than
without, real income statistics calculated in the usual way will
prove to us that we are only slightly richer for having these things.Another approach would be to ask, if you were going back to the
year x in a time machine, how much would you have to spend on trade
goods to make your fortune? For example, if you were going back
to 1970 it would certainly be less than $500, because the processing
power you can get for $500 today would have been worth at least
$150 million in 1970. The function goes asymptotic fairly quickly,
because for times over a hundred years or so you could get all you
needed in present-day trash. In 1800 an empty plastic drink bottle
with a screw top would have seemed a miracle of workmanship.[16]
Some will say this amounts to the same thing, because the rich
have better opportunities for education. That's a valid point. It
is still possible, to a degree, to buy your kids' way into top
colleges by sending them to private schools that in effect hack the
college admissions process.According to a 2002 report by the National Center for Education
Statistics, about 1.7% of American kids attend private, non-sectarian
schools. At Princeton, 36% of the class of 2007 came from such
schools. (Interestingly, the number at Harvard is significantly
lower, about 28%.) Obviously this is a huge loophole. It does at
least seem to be closing, not widening.Perhaps the designers of admissions processes should take a lesson
from the example of computer security, and instead of just assuming
that their system can't be hacked, measure the degree to which it
is.
July 2004

(This essay is derived from a talk at Oscon 2004.)
A few months ago I finished a new
book,
and in reviews I keep
noticing words like "provocative" and "controversial." To say
nothing of "idiotic."

I didn't mean to make the book controversial. I was trying to make
it efficient. I didn't want to waste people's time telling them
things they already knew. It's more efficient just to give them
the diffs. But I suppose that's bound to yield an alarming book.

Edisons

There's no controversy about which idea is most controversial:
the suggestion that variation in wealth might not be as big a
problem as we think.I didn't say in the book that variation in wealth was in itself a
good thing. I said in some situations it might be a sign of good
things. A throbbing headache is not a good thing, but it can be
a sign of a good thing-- for example, that you're recovering
consciousness after being hit on the head.Variation in wealth can be a sign of variation in productivity.
(In a society of one, they're identical.) And that
is almost certainly a good thing: if your society has no variation
in productivity, it's probably not because everyone is Thomas
Edison. It's probably because you have no Thomas Edisons.In a low-tech society you don't see much variation in productivity.
If you have a tribe of nomads collecting sticks for a fire, how
much more productive is the best stick gatherer going to be than
the worst? A factor of two? Whereas when you hand people a complex tool
like a computer, the variation in what they can do with
it is enormous.That's not a new idea. Fred Brooks wrote about it in 1974, and
the study he quoted was published in 1968. But I think he
underestimated the variation between programmers. He wrote about productivity in lines
of code: the best programmers can solve a given problem in a tenth
the time. But what if the problem isn't given? In programming, as
in many fields, the hard part isn't solving problems, but deciding
what problems to solve. Imagination is hard to measure, but
in practice it dominates the kind of productivity that's measured
in lines of code.Productivity varies in any field, but there are few in which it
varies so much. The variation between programmers
is so great that it becomes a difference in kind. I don't
think this is something intrinsic to programming, though. In every field,
technology magnifies differences in productivity. I think what's
happening in programming is just that we have a lot of technological
leverage. But in every field the lever is getting longer, so the
variation we see is something that more and more fields will see
as time goes on. And the success of companies, and countries, will
depend increasingly on how they deal with it.If variation in productivity increases with technology, then the
contribution of the most productive individuals will not only be
disproportionately large, but will actually grow with time. When
you reach the point where 90% of a group's output is created by 1%
of its members, you lose big if something (whether Viking raids,
or central planning) drags their productivity down to the average.If we want to get the most out of them, we need to understand these
especially productive people. What motivates them? What do they
need to do their jobs? How do you recognize them? How do you
get them to come and work for you? And then of course there's the
question, how do you become one?

More than Money

I know a handful of super-hackers, so I sat down and thought about
what they have in common. Their defining quality is probably that
they really love to program. Ordinary programmers write code to pay
the bills. Great hackers think of it as something they do for fun,
and which they're delighted to find people will pay them for.Great programmers are sometimes said to be indifferent to money.
This isn't quite true. It is true that all they really care about
is doing interesting work. But if you make enough money, you get
to work on whatever you want, and for that reason hackers are
attracted by the idea of making really large amounts of money.
But as long as they still have to show up for work every day, they
care more about what they do there than how much they get paid for
it.Economically, this is a fact of the greatest importance, because
it means you don't have to pay great hackers anything like what
they're worth. A great programmer might be ten or a hundred times
as productive as an ordinary one, but he'll consider himself lucky
to get paid three times as much. As I'll explain later, this is
partly because great hackers don't know how good they are. But
it's also because money is not the main thing they want.What do hackers want? Like all craftsmen, hackers like good tools.
In fact, that's an understatement. Good hackers find it unbearable
to use bad tools. They'll simply refuse to work on projects with
the wrong infrastructure.At a startup I once worked for, one of the things pinned up on our
bulletin board was an ad from IBM. It was a picture of an AS400,
and the headline read, I think, "hackers despise
it." [1]

When you decide what infrastructure to use for a project, you're
not just making a technical decision. You're also making a social
decision, and this may be the more important of the two. For
example, if your company wants to write some software, it might
seem a prudent choice to write it in Java. But when you choose a
language, you're also choosing a community. The programmers you'll
be able to hire to work on a Java project won't be as
smart as the
ones you could get to work on a project written in Python.
And the quality of your hackers probably matters more than the
language you choose. Though, frankly, the fact that good hackers
prefer Python to Java should tell you something about the relative
merits of those languages.Business types prefer the most popular languages because they view
languages as standards. They don't want to bet the company on
Betamax. The thing about languages, though, is that they're not
just standards. If you have to move bits over a network, by all
means use TCP/IP. But a programming language isn't just a format.
A programming language is a medium of expression.I've read that Java has just overtaken Cobol as the most popular
language. As a standard, you couldn't wish for more. But as a
medium of expression, you could do a lot better. Of all the great
programmers I can think of, I know of only one who would voluntarily
program in Java. And of all the great programmers I can think of
who don't work for Sun, on Java, I know of zero.Great hackers also generally insist on using open source software.
Not just because it's better, but because it gives them more control.
Good hackers insist on control. This is part of what makes them
good hackers: when something's broken, they need to fix it. You
want them to feel this way about the software they're writing for
you. You shouldn't be surprised when they feel the same way about
the operating system.A couple years ago a venture capitalist friend told me about a new
startup he was involved with. It sounded promising. But the next
time I talked to him, he said they'd decided to build their software
on Windows NT, and had just hired a very experienced NT developer
to be their chief technical officer. When I heard this, I thought,
these guys are doomed. One, the CTO couldn't be a first rate
hacker, because to become an eminent NT developer he would have
had to use NT voluntarily, multiple times, and I couldn't imagine
a great hacker doing that; and two, even if he was good, he'd have
a hard time hiring anyone good to work for him if the project had
to be built on NT. [2]

The Final Frontier

After software, the most important tool to a hacker is probably
his office. Big companies think the function of office space is to express
rank. But hackers use their offices for more than that: they
use their office as a place to think in. And if you're a technology
company, their thoughts are your product. So making hackers work
in a noisy, distracting environment is like having a paint factory
where the air is full of soot.The cartoon strip Dilbert has a lot to say about cubicles, and with
good reason. All the hackers I know despise them. The mere prospect
of being interrupted is enough to prevent hackers from working on
hard problems. If you want to get real work done in an office with
cubicles, you have two options: work at home, or come in early or
late or on a weekend, when no one else is there. Don't companies
realize this is a sign that something is broken? An office
environment is supposed to be something that helps
you work, not something you work despite.Companies like Cisco are proud that everyone there has a cubicle,
even the CEO. But they're not so advanced as they think; obviously
they still view office space as a badge of rank. Note too that
Cisco is famous for doing very little product development in house.
They get new technology by buying the startups that created it-- where
presumably the hackers did have somewhere quiet to work.One big company that understands what hackers need is Microsoft.
I once saw a recruiting ad for Microsoft with a big picture of a
door. Work for us, the premise was, and we'll give you a place to
work where you can actually get work done. And you know, Microsoft
is remarkable among big companies in that they are able to develop
software in house. Not well, perhaps, but well enough.If companies want hackers to be productive, they should look at
what they do at home. At home, hackers can arrange things themselves
so they can get the most done. And when they work at home, hackers
don't work in noisy, open spaces; they work in rooms with doors. They
work in cosy, neighborhoody places with people around and somewhere
to walk when they need to mull something over, instead of in glass
boxes set in acres of parking lots. They have a sofa they can take
a nap on when they feel tired, instead of sitting in a coma at
their desk, pretending to work. There's no crew of people with
vacuum cleaners that roars through every evening during the prime
hacking hours. There are no meetings or, God forbid, corporate
retreats or team-building exercises. And when you look at what
they're doing on that computer, you'll find it reinforces what I
said earlier about tools. They may have to use Java and Windows
at work, but at home, where they can choose for themselves, you're
more likely to find them using Perl and Linux.Indeed, these statistics about Cobol or Java being the most popular
language can be misleading. What we ought to look at, if we want
to know what tools are best, is what hackers choose when they can
choose freely-- that is, in projects of their own. When you ask
that question, you find that open source operating systems already
have a dominant market share, and the number one language is probably
Perl.

Interesting

Along with good tools, hackers want interesting projects. What
makes a project interesting? Well, obviously overtly sexy
applications like stealth planes or special effects software would
be interesting to work on. But any application can be interesting
if it poses novel technical challenges. So it's hard to predict
which problems hackers will like, because some become
interesting only when the people working on them discover a new
kind of solution. Before ITA
(who wrote the software inside Orbitz),
the people working on airline fare searches probably thought it
was one of the most boring applications imaginable. But ITA made
it interesting by
redefining the problem in a more ambitious way.I think the same thing happened at Google. When Google was founded,
the conventional wisdom among the so-called portals was that search
was boring and unimportant. But the guys at Google didn't think
search was boring, and that's why they do it so well.This is an area where managers can make a difference. Like a parent
saying to a child, I bet you can't clean up your whole room in
ten minutes, a good manager can sometimes redefine a problem as a
more interesting one. Steve Jobs seems to be particularly good at
this, in part simply by having high standards. There were a lot
of small, inexpensive computers before the Mac. He redefined the
problem as: make one that's beautiful. And that probably drove
the developers harder than any carrot or stick could.They certainly delivered. When the Mac first appeared, you didn't
even have to turn it on to know it would be good; you could tell
from the case. A few weeks ago I was walking along the street in
Cambridge, and in someone's trash I saw what appeared to be a Mac
carrying case. I looked inside, and there was a Mac SE. I carried
it home and plugged it in, and it booted. The happy Macintosh
face, and then the finder. My God, it was so simple. It was just
like ... Google.Hackers like to work for people with high standards. But it's not
enough just to be exacting. You have to insist on the right things.
Which usually means that you have to be a hacker yourself. I've
seen occasional articles about how to manage programmers. Really
there should be two articles: one about what to do if
you are yourself a programmer, and one about what to do if you're not. And the
second could probably be condensed into two words: give up.The problem is not so much the day to day management. Really good
hackers are practically self-managing. The problem is, if you're
not a hacker, you can't tell who the good hackers are. A similar
problem explains why American cars are so ugly. I call it the
design paradox. You might think that you could make your products
beautiful just by hiring a great designer to design them. But if
you yourself don't have good taste,
how are you going to recognize
a good designer? By definition you can't tell from his portfolio.
And you can't go by the awards he's won or the jobs he's had,
because in design, as in most fields, those tend to be driven by
fashion and schmoozing, with actual ability a distant third.
There's no way around it: you can't manage a process intended to
produce beautiful things without knowing what beautiful is. American
cars are ugly because American car companies are run by people with
bad taste.Many people in this country think of taste as something elusive,
or even frivolous. It is neither. To drive design, a manager must
be the most demanding user of a company's products. And if you
have really good taste, you can, as Steve Jobs does, make satisfying
you the kind of problem that good people like to work on.

Nasty Little Problems

It's pretty easy to say what kinds of problems are not interesting:
those where instead of solving a few big, clear, problems, you have
to solve a lot of nasty little ones. One of the worst kinds of
projects is writing an interface to a piece of software that's
full of bugs. Another is when you have to customize
something for an individual client's complex and ill-defined needs.
To hackers these kinds of projects are the death of a thousand
cuts.The distinguishing feature of nasty little problems is that you
don't learn anything from them. Writing a compiler is interesting
because it teaches you what a compiler is. But writing an interface
to a buggy piece of software doesn't teach you anything, because the
bugs are random. [3] So it's not just fastidiousness that makes good
hackers avoid nasty little problems. It's more a question of
self-preservation. Working on nasty little problems makes you
stupid. Good hackers avoid it for the same reason models avoid
cheeseburgers.Of course some problems inherently have this character. And because
of supply and demand, they pay especially well. So a company that
found a way to get great hackers to work on tedious problems would
be very successful. How would you do it?One place this happens is in startups. At our startup we had
Robert Morris working as a system administrator. That's like having the
Rolling Stones play at a bar mitzvah. You can't hire that kind of
talent. But people will do any amount of drudgery for companies
of which they're the founders. [4]Bigger companies solve the problem by partitioning the company.
They get smart people to work for them by establishing a separate
R&D department where employees don't have to work directly on
customers' nasty little problems. [5] In this model, the research
department functions like a mine. They produce new ideas; maybe
the rest of the company will be able to use them.You may not have to go to this extreme.
Bottom-up programming
suggests another way to partition the company: have the smart people
work as toolmakers. If your company makes software to do x, have
one group that builds tools for writing software of that type, and
another that uses these tools to write the applications. This way
you might be able to get smart people to write 99% of your code,
but still keep them almost as insulated from users as they would
be in a traditional research department. The toolmakers would have
users, but they'd only be the company's own developers. [6]If Microsoft used this approach, their software wouldn't be so full
of security holes, because the less smart people writing the actual
applications wouldn't be doing low-level stuff like allocating
memory. Instead of writing Word directly in C, they'd be plugging
together big Lego blocks of Word-language. (Duplo, I believe, is
the technical term.)

Clumping

Along with interesting problems, what good hackers like is other
good hackers. Great hackers tend to clump together-- sometimes
spectacularly so, as at Xerox Parc. So you won't attract good
hackers in linear proportion to how good an environment you create
for them. The tendency to clump means it's more like the square
of the environment. So it's winner take all. At any given time,
there are only about ten or twenty places where hackers most want to
work, and if you aren't one of them, you won't just have fewer
great hackers, you'll have zero.Having great hackers is not, by itself, enough to make a company
successful. It works well for Google and ITA, which are two of
the hot spots right now, but it didn't help Thinking Machines or
Xerox. Sun had a good run for a while, but their business model
is a down elevator. In that situation, even the best hackers can't
save you.I think, though, that all other things being equal, a company that
can attract great hackers will have a huge advantage. There are
people who would disagree with this. When we were making the rounds
of venture capital firms in the 1990s, several told us that software
companies didn't win by writing great software, but through brand,
and dominating channels, and doing the right deals.They really seemed to believe this, and I think I know why. I
think what a lot of VCs are looking for, at least unconsciously,
is the next Microsoft. And of course if Microsoft is your model,
you shouldn't be looking for companies that hope to win by writing
great software. But VCs are mistaken to look for the next Microsoft,
because no startup can be the next Microsoft unless some other
company is prepared to bend over at just the right moment and be
the next IBM.It's a mistake to use Microsoft as a model, because their whole
culture derives from that one lucky break. Microsoft is a bad data
point. If you throw them out, you find that good products do tend
to win in the market. What VCs should be looking for is the next
Apple, or the next Google.I think Bill Gates knows this. What worries him about Google is
not the power of their brand, but the fact that they have
better hackers. [7]
Recognition

So who are the great hackers? How do you know when you meet one?
That turns out to be very hard. Even hackers can't tell. I'm
pretty sure now that my friend Trevor Blackwell is a great hacker.
You may have read on Slashdot how he made his
own Segway. The
remarkable thing about this project was that he wrote all the
software in one day (in Python, incidentally).For Trevor, that's
par for the course. But when I first met him, I thought he was a
complete idiot. He was standing in Robert Morris's office babbling
at him about something or other, and I remember standing behind
him making frantic gestures at Robert to shoo this nut out of his
office so we could go to lunch. Robert says he misjudged Trevor
at first too. Apparently when Robert first met him, Trevor had
just begun a new scheme that involved writing down everything about
every aspect of his life on a stack of index cards, which he carried
with him everywhere. He'd also just arrived from Canada, and had
a strong Canadian accent and a mullet.The problem is compounded by the fact that hackers, despite their
reputation for social obliviousness, sometimes put a good deal of
effort into seeming smart. When I was in grad school I used to
hang around the MIT AI Lab occasionally. It was kind of intimidating
at first. Everyone there spoke so fast. But after a while I
learned the trick of speaking fast. You don't have to think any
faster; just use twice as many words to say everything. With this amount of noise in the signal, it's hard to tell good
hackers when you meet them. I can't tell, even now. You also
can't tell from their resumes. It seems like the only way to judge
a hacker is to work with him on something.And this is the reason that high-tech areas
only happen around universities. The active ingredient
here is not so much the professors as the students. Startups grow up
around universities because universities bring together promising young
people and make them work on the same projects. The
smart ones learn who the other smart ones are, and together
they cook up new projects of their own.Because you can't tell a great hacker except by working with him,
hackers themselves can't tell how good they are. This is true to
a degree in most fields. I've found that people who
are great at something are not so much convinced of their own
greatness as mystified at why everyone else seems so incompetent.
But it's particularly hard for hackers to know how good they are,
because it's hard to compare their work. This is easier in most
other fields. In the hundred meters, you know in 10 seconds who's
fastest. Even in math there seems to be a general consensus about
which problems are hard to solve, and what constitutes a good
solution. But hacking is like writing. Who can say which of two
novels is better? Certainly not the authors.With hackers, at least, other hackers can tell. That's because,
unlike novelists, hackers collaborate on projects. When you get
to hit a few difficult problems over the net at someone, you learn
pretty quickly how hard they hit them back. But hackers can't
watch themselves at work. So if you ask a great hacker how good
he is, he's almost certain to reply, I don't know. He's not just
being modest. He really doesn't know.And none of us know, except about people we've actually worked
with. Which puts us in a weird situation: we don't know who our
heroes should be. The hackers who become famous tend to become
famous by random accidents of PR. Occasionally I need to give an
example of a great hacker, and I never know who to use. The first
names that come to mind always tend to be people I know personally,
but it seems lame to use them. So, I think, maybe I should say
Richard Stallman, or Linus Torvalds, or Alan Kay, or someone famous
like that. But I have no idea if these guys are great hackers.
I've never worked with them on anything.If there is a Michael Jordan of hacking, no one knows, including
him.

Cultivation

Finally, the question the hackers have all been wondering about:
how do you become a great hacker? I don't know if it's possible
to make yourself into one. But it's certainly possible to do things
that make you stupid, and if you can make yourself stupid, you
can probably make yourself smart too.The key to being a good hacker may be to work on what you like.
When I think about the great hackers I know, one thing they have
in common is the extreme
difficulty of making them work
on anything they
don't want to. I don't know if this is cause or effect; it may be
both.To do something well you have to love it.
So to the extent you
can preserve hacking as something you love, you're likely to do it
well. Try to keep the sense of wonder you had about programming at
age 14. If you're worried that your current job is rotting your
brain, it probably is.The best hackers tend to be smart, of course, but that's true in
a lot of fields. Is there some quality that's unique to hackers?
I asked some friends, and the number one thing they mentioned was
curiosity.
I'd always supposed that all smart people were curious--
that curiosity was simply the first derivative of knowledge. But
apparently hackers are particularly curious, especially about how
things work. That makes sense, because programs are in effect
giant descriptions of how things work.Several friends mentioned hackers' ability to concentrate-- their
ability, as one put it, to "tune out everything outside their own
heads." I've certainly noticed this. And I've heard several
hackers say that after drinking even half a beer they can't program at
all. So maybe hacking does require some special ability to focus.
Perhaps great hackers can load a large amount of context into their
head, so that when they look at a line of code, they see not just
that line but the whole program around it. John McPhee
wrote that Bill Bradley's success as a basketball player was due
partly to his extraordinary peripheral vision. "Perfect" eyesight
means about 47 degrees of vertical peripheral vision. Bill Bradley
had 70; he could see the basket when he was looking at the floor.
Maybe great hackers have some similar inborn ability. (I cheat by
using a very dense language,
which shrinks the court.)This could explain the disconnect over cubicles. Maybe the people
in charge of facilities, not having any concentration to shatter,
have no idea that working in a cubicle feels to a hacker like having
one's brain in a blender. (Whereas Bill, if the rumors of autism
are true, knows all too well.)One difference I've noticed between great hackers and smart people
in general is that hackers are more
politically incorrect. To the
extent there is a secret handshake among good hackers, it's when they
know one another well enough to express opinions that would get
them stoned to death by the general public. And I can see why
political incorrectness would be a useful quality in programming.
Programs are very complex and, at least in the hands of good
programmers, very fluid. In such situations it's helpful to have
a habit of questioning assumptions.Can you cultivate these qualities? I don't know. But you can at
least not repress them. So here is my best shot at a recipe. If
it is possible to make yourself into a great hacker, the way to do
it may be to make the following deal with yourself: you never have
to work on boring projects (unless your family will starve otherwise),
and in return, you'll never allow yourself to do a half-assed job.
All the great hackers I know seem to have made that deal, though
perhaps none of them had any choice in the matter.

Notes
[1] In fairness, I have to say that IBM makes decent hardware. I
wrote this on an IBM laptop.

[2] They did turn out to be doomed. They shut down a few months
later.

[3] I think this is what people mean when they talk
about the "meaning of life." On the face of it, this seems an
odd idea. Life isn't an expression; how could it have meaning?
But it can have a quality that feels a lot like meaning. In a project
like a compiler, you have to solve a lot of problems, but the problems
all fall into a pattern, as in a signal. Whereas when the problems
you have to solve are random, they seem like noise.
[4] Einstein at one point worked designing refrigerators. (He had equity.)

[5] It's hard to say exactly what constitutes research in the
computer world, but as a first approximation, it's software that
doesn't have users.I don't think it's publication that makes the best hackers want to work
in research departments. I think it's mainly not having to have a
three hour meeting with a product manager about problems integrating
the Korean version of Word 13.27 with the talking paperclip.

[6] Something similar has been happening for a long time in the
construction industry. When you had a house built a couple hundred
years ago, the local builders built everything in it. But increasingly
what builders do is assemble components designed and manufactured
by someone else. This has, like the arrival of desktop publishing,
given people the freedom to experiment in disastrous ways, but it
is certainly more efficient.

[7] Google is much more dangerous to Microsoft than Netscape was.
Probably more dangerous than any other company has ever been. Not
least because they're determined to fight. On their job listing
page, they say that one of their "core values" is "Don't be evil."
From a company selling soybean oil or mining equipment, such a
statement would merely be eccentric. But I think all of us in the
computer world recognize who that is a declaration of war on.Thanks to Jessica Livingston, Robert Morris, and Sarah Harlin
for reading earlier versions of this talk.