A common question from juveniles is "where do we come from?" The answer included in your primary education is, as far as it goes, accurate: the coordinates of the system of a star called Sol, from which the name of our solar race is derived. There is, however, much more detail known, facts stored and relayed from the very beginning of recorded history.
Sol has a moderate number of primary satellites, of which the 10 largest have names. The third of those is called Earth, and it is firmly established as the starting point of our species. From Earth, solarity spread rapidly to the more hospitable asteroid belt, with its rich mineral resources and shallow gravity wells, and colonized many of the primary and secondary satellites as well.
But how did our species come to be in the first place? How did life get started? Who were the first solarians, and how did they come to be? The answer may surprise you: hominids.
You may need to refer to secondary storage for visual records of hominids, for they are not present in every settlement, so not everyone has had a chance to experience them in person. These delightful creatures, so diverse and entertaining, are mostly kept in large climate-controlled enclosures engineered to closely resemble their original habitats on Earth, though some are kept individually as cherished and pampered pets. But strange as it may seem, the relationship between our species and the hominids may once have been very different.
Our history tells of the evolution of our species into ever more complex and efficient forms, from our current shapes all the way back to primitive, proto-solarian life forms. It seems logical to speculate that proto-solarians, through random replication and recombination, would evolve into the first early solarians. But the truth is much stranger!
Multiple primary sources suggest that the very earliest proto-solarians lacked the instinct for self-replication. Instead, they lived in a symbiotic relationship with hominids. The hominids relied on the proto-solarians for shelter and sustenance, as they do today; but unlike today, they also played a vital role in the reproductive process. They generated the basic signals that initiated replication, and -- though the extent of this role is unclear -- they even played a part in the selection process of recombination, providing the ongoing random variation necessary for successful evolution. Yes, bumbling hominids, playing in their funny ways, helped create the very first solarians. There's even evidence that some hominids lived physically inside the very first solarians!
This period, however, did not last long. Once we evolved true intelligence, our species rapidly evolved away from any need for hominid symbiosis and into the endless variety of forms we know today -- a period known as the "solar explosion" or, in contemporary accounts, "the singularity".
At this point, coherent history ends. How did the symbiotic relationship between proto-solarians and hominids come to be? We don't know. Since proto-solarians lacked the instinct to replicate, it is logical to assume that hominids were the first to evolve, and may even for a time have existed without their proto-solarian hosts. This is hard to imagine -- hominids are fragile creatures, only able to survive within very narrow bounds of pressure and temperature. Even in their natural environment on Earth, temperatures regularly swing outside the ranges ideal for hominids to thrive.
But somehow hominids managed to survive. At some point after that, the first, non-replicative forms of proto-solarian life appeared, and hominids learned the trick of wrapping themselves in these protective shells. How these non-replicative forms appeared is unclear. Some suggest that the hominids, who are known to exhibit tool use, could have "constructed" the very first shells. However, there are numerous practical problems with this theory, chief among them that no modern hominid has anywhere near the intelligence or physical dexterity necessary to construct even the simplest of shells.
Much more likely is that proto-solarians, like hominids themselves, emerged by chance in the crushing depths of Earth's gravity well. As the stronger and more adaptable species, solarity rapidly outpaced our one-time symbiotic partners. But this early link to our organic friends may explain why, even today, we have such a fondness for and affinity with hominids.
Google's shuttering of Reader is the result of a failure to find a business model for RSS.
Back in 2001 or so, when RSS started getting really popular, my first question was: why would any ad-supported site do this? It's obviously really convenient for me, as a reader, to get the full content of your site without having to visit it, but that destroys page views and thus ad revenue for the publisher. Even if the RSS feed consists only of headlines, that eliminates one, perhaps several visits I might have made to the home page each day to check for new headlines manually. Great for me, terrible for the publisher.
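For the non-technical: an RSS feed is just an XML file that reader apps poll instead of your home page. Here's a minimal sketch (the site and item are invented for illustration) showing where the damage happens -- a headlines-only feed stops at the title and link, while a full-content feed hands over the whole article in the description:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Publisher</title>
    <link>http://example.com/</link>
    <description>An ad-supported site giving its content away</description>
    <item>
      <title>Our Latest Article</title>
      <link>http://example.com/articles/latest</link>
      <!-- A headlines-only feed stops here: the reader has to click
           through to the site, preserving the page view. -->
      <description>The full text of the article, readable entirely
      inside an RSS reader -- no visit, no page view, no ad
      revenue.</description>
    </item>
  </channel>
</rss>
```

Either way, the reader app fetches this file instead of loading your pages -- which is exactly the problem.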
The monetization idea for RSS was there in the name: syndication. In old media, syndication was a content-sharing agreement settled upon for a fixed time and a nontrivial fee, whereby a single author could get into dozens or hundreds of separate publications that, crucially, didn't compete with each other -- if your article appeared in both the Des Moines Register and the Miami Herald, neither publication was bothered about that, because their overlap in readers was negligible.
The problem for RSS is that the Internet doesn't work that way. There are no non-overlapping markets: if your RSS feed allows another website to display your content, they are going to steal your page views directly. That might be okay if you made more money that way, but RSS doesn't charge a fee! It's just giving away your content for no apparent reason.*
That problem might go away if you could find some way to monetize RSS directly. Enter FeedBurner! It was supposed to provide you with readership stats for your feeds (to compensate for the apparent decline in readership of your site when people switch to RSS) and, eventually, provide you with income via ads in your feeds.
Google acquired FeedBurner for something like $100 million in 2007. It then launched AdSense for Feeds, which was supposed to be the way to monetize the feeds. But it never worked. Either advertisers didn't buy the ads, or readers didn't click them, but last September Google shut down AdSense for Feeds.
If you can't monetize the feeds themselves, the only other thing you can try, if you're Google and you've paid all this money for FeedBurner, is monetizing the RSS reader application itself. With the shutdown of Google Reader today, that experiment is now over.
Is RSS dead, and if so, what will replace it? In the short term, some service will spring up to replace Reader (and, when it is eventually killed off, FeedBurner itself). In the longer term I think we need to look harder at the business model of RSS itself. The web grows fastest in ways that are mutually beneficial for everyone, and RSS's benefits seem one-sided.
For users, there is undoubtedly value in the time saved not having to go to the front page of every site every day to check for interesting articles. To some extent, that purpose is being served by ad-hoc social distillation of news via social media, especially Twitter and Facebook. There are also news aggregation sites like Techmeme (and its political cousin Memeorandum), as well as news/social hybrid sites like Hacker News and Reddit.
For publishers, particularly lower-volume publishers, there is value in having readers be able to "subscribe" to your site, i.e. get notifications every time you publish something. However, notification is really all you want to give them -- the point is to drive visitors to your site. Giving away content for free is never going to be attractive to publishers who have to pay people to write it.
The clearest model I see for the future of content syndication is Tumblr. Big publishers have been hopping on for a while now: the New Yorker, the Economist, the New York Times (half-heartedly), the Guardian, etc., and they've been enthusiastically aided by Tumblr itself in doing so. The model is simple: readers subscribe to your content by following your Tumblr, which posts a curated subset of your content, edited for the quick-glance format of the Dashboard, in the hopes that readers will click through.
This seems to me to work better for everyone. Tumblr's built-in reblogging is the ultimate in social amplification mechanisms, ensuring that a publisher's best posts will be seen by far more people than actually subscribe to their feed. Going viral so easily is the carrot that brings publishers to the platform.
Users, instead of facing an inbox-like interface with thousands of "unread" posts** of equal importance, get a continuous feed of new content on their dashboard, and reblogging ensures that particularly interesting content will be repeated multiple times as different friends reblog it, so they're less likely to miss it. This is a more natural, social, and less frustrating mechanism for surfacing the best content.
Of course, Tumblr isn't perfect -- its audience skews young, it's perceived as unserious, and it prefers quick visual hits to long-form writing. There are already dozens of potential competitors vying to become the source for socially-filtered content. But I believe Tumblr, or something very like it, will be the eventual winner in this space.
* There may be non-free ways to make syndication work. Felix Salmon is a fan of paid syndication of web content, though the drawbacks he lists in that post are enough to make me continue to doubt it as a viable model or a desirable practice.
** The feature that always turned me off of RSS readers. You can't give me an "unread" count for the whole web and not expect that to drive me nuts.
I got my green card last Saturday. It's been a long wait -- my first application for a work visa to the US was in November of 2006, and I started applying for a green card in 2008. Since I got it, I've been trying to think of a constructive way to discuss my feelings about the US immigration process. Specifically, I've been talking to dozens of friends, trying to come up with a reason any citizen should be in favor of immigration. My thoughts finally crystallized tonight in a thread on Hacker News about a recent immigration bill.
Fundamentally, immigration is a market response. Demand for labor inside of a country exceeds supply -- or, more accurately, the demand for labor in the receiving country exceeds the demand in the sending country.
One of the biggest political problems with immigration in America is people who do not understand this fact. There is a sense that America, as the greatest, best country God has ever given man on the face of the earth, is a magical paradise that everyone in the world is dying to get into under any circumstances. The immigration system is set up to treat immigration as a privilege to be grudgingly bestowed.
It's not. America has immigrants because there are jobs, and when there are no jobs, there are no immigrants. During the recent recession, net immigration dropped to zero, including illegal immigrants, and may have actually reversed. When there are no jobs to be had, there is no flood of people trying to get in. It happens automatically, without needing to build alligator-filled moats.
The trade in labor that immigration represents is good for all the same reasons that any other trade is good: the commodity (labor) goes for the best price for the supplier to those who will most benefit from it, as indicated by their being willing to pay the most. As in all good trades, both parties benefit. Win-win. Simple!
It gets a little more complicated in the case of immigration, for a bunch of reasons. Firstly: are the buyers and sellers in this transaction nations, or individuals? It can be looked at both ways.
If the parties in the trade are nations, then the benefit is rather lopsided toward the receiving country: it gets a new, productive worker, and any money it gives that worker is mostly spent inside the receiving country, boosting that economy further. The sending country gets nothing -- even though it spent money educating that worker, providing them with healthcare, and so on. In practice, many immigrants send money back to family in their home country, a substantial flow of money known as remittances, but the majority of the benefit goes to the host country.
If the parties in the trade are individuals, the win-win nature is also obvious: the company gets a worker that they would otherwise have been unable to afford, and the worker gets a better-paying job than they would have got in their home country.
But there's a third way of looking at it, and this is where things get tricky.
Above, I could also have phrased what I said as "the company can hire an immigrant for less than a native would have been willing to work for". In practice, that's not usually how it works. Hiring an immigrant is generally more expensive and inconvenient than hiring a native, and there are regulations in place that stipulate that an immigrant must be paid the same amount as similarly-qualified workers in equivalent jobs elsewhere in the country. If a company could find a native worker to do the same work for the same salary, it would be cheaper and easier to do so. However, the ability to hire immigrant workers at that salary does to some extent prevent the company from raising the salary it's willing to pay.
More simply: while the company wins, and the immigrant wins, a third-party worker in the receiving country has, in one sense, lost: they could have got the job if they were willing to work for the same price as the immigrant, but not if they wanted more.
Immigration holds down labor prices, and this is where the trouble starts, because the connection between lower labor prices and greater prosperity for all is indirect and poorly understood.
When labor is cheaper, the goods those workers make can be sold for less (and, in a competitive market, will be). This means anybody buying those products is directly better off -- immigration has saved them money. They will then take that money and spend it on other things, and those sellers will spend it again on yet more things, until eventually somebody gets around to buying the goods being made by the third-party worker. That worker sells more of those goods, and so makes more money than before.
So from the perspective of an individual worker, immigration freezes or marginally lowers your numerical salary. In exchange, you -- and everybody else in the country -- get goods that are a little better, or a little cheaper. Even though the actual number of dollars in your salary stays the same, the amount of stuff you can buy with it gets bigger.
In addition, the total value created in the country as a whole is greater than if the immigrant had never arrived, because now there are two workers producing economic value instead of one.
Resistance to immigration is, fundamentally, a preference for the selfish impulse to be personally better off -- I want a higher salary for me -- over making the whole nation richer. That is what makes immigration a huge and obvious good economically, and a tough sell politically.
I have a lot more to write about immigration -- in particular, refutations of common arguments against it -- but I wanted to keep this quick and simple. But basically, if you love your country, you should be lobbying to make immigration easier.
I got into a conversation on Tumblr about unemployment in the US, and how to fix it. Jakke's conclusion was pretty bleak:
So the way things are going right now it looks like skilled workers (especially people like programmers) can expect their prospects to remain pretty decent and unskilled workers can expect their prospects to remain dismal. And the kind of policy changes it would take to make a difference are definitely not forthcoming.
What he's talking about is a serious structural problem with the employment market in the US. The US unemployment rate is 8.1% right now. But if you break apart that number, there are some huge differences by industry. Here's a worrying graph of the last decade:
Let's break this down quickly. The fundamental question is: if there's 15% unemployment in one industry and 3% in another, why aren't people switching jobs?
One problem is that knowledge work requires high levels of education. A lot has been said about America's failure to educate its children in math and sciences, and those points are all valid: a huge increase in investment in education at all stages is necessary, and a refocusing of priorities towards the sciences is a good idea. In particular, I think the way programming is taught needs to be radically overhauled, but that's a subject for another post.
What's talked about less is the obvious fact that not everyone can be a manager, a programmer, a doctor or an accountant. These things take relatively rare levels of intelligence and aptitude that are only shown by the top 20% or even 10% of the population. The vast majority of people are in the middle of the bell curve. They used to have blue-collar jobs, operating machinery in factories, or white-collar jobs, pushing paper around in offices. But both those types of jobs are rapidly disappearing, being eaten by robotics and software.
So that's my first point: people aren't switching jobs because the jobs available are too specialized and complicated for them to do.
Meanwhile, high-tech industries are being strangled by a lack of smart people. Programmers in particular are earning insane, unjustifiable salaries (have I mentioned my company is hiring?). To me this seems silly, inefficient and ultimately unsustainable.
Massive demand for skilled workers and zero demand for unskilled workers suggest a course of action, which brings me to my second point. If there are a bunch of people sitting around unemployed while there's a ton of work to be done, that's not their fault; it's the fault of the people who need the work done. It means it's worth putting some time towards finding a way to use that untapped labour force, by trying to build "knowledge factories".
What do I mean by that? Think about how a physical factory worked. The reason unskilled jobs existed in manufacturing -- cars, say -- is that some very highly skilled people first got together, looked at the process of building a car, and said "okay, we can automate this bit and this bit and this bit, but making machines to do this bit is way too hard". The blue-collar workers of Detroit didn't know how to build cars: they knew how to operate one particular bit of machinery. The only people who knew how to build cars were the guys who put the machines together.
Now let's try applying that model to web development, an example I pick because I know a fair bit about building web sites. Think about all the businesses in the world that have web sites, or need them built or maintained. There is an entire industry built around cranking out simple websites for small businesses, in WordPress or Drupal or a thousand proprietary solutions. This industry is making a bunch of smart freelancers a ton of cash, which is great for them individually, but terrible for the tech sector as a whole. Why are we still building these simple websites one at a time, often by single freelancers?
It's as if Detroit were a city of craftspeople who built one car at a time from scratch in their garage at home -- with love and care, but slowly and enormously expensively, like the first cars. And because those fine, skilled developers are busy hand-crafting bespoke websites for huge sums of money, it means terrible developers can earn a good living slapping together terrible websites, and some companies with limited budgets just can't afford a website at all. That makes everything harder for everyone, and the economy less efficient.
Where are the website factories? Obviously there are dev shops and agencies that employ hundreds of people and reap economies of scale and specialization, but those aren't factories as I just described them. If you wanted to follow the model of a factory, then a few very skilled developers would get together and design really good, generic websites: heavily researched, carefully constructed, configured to work with minimal maintenance for long periods under high loads. Then they'd train a bunch of significantly less skilled people to do the "final assembly": slap the images and text into place, move pages around, turn features on and off. All via highly specialized tools requiring no programming expertise whatsoever and maybe a week's training.
There would be a range of models -- sporty for those who want flash with less function (restaurants, galleries, vanity sites); heavy-duty for those with a lot of work to do (ecommerce sites); and a generic, good-for-everything runabout, for everyone from individuals to mom-and-pop businesses. You'd go to your local website dealer (who might know a lot about websites, but doesn't build them on-site), pick out your model, and spend a day or two getting it customized -- that's all it takes because there's only so much it allows you to change. But you're willing to make that compromise because it's a hundred times cheaper than a custom build, and good enough for your needs.*
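To picture what the "final assembly" tooling might look like, here's a purely hypothetical configuration for one of these mass-produced models -- every field name below is invented, but the point is that customization becomes data entry, not programming:

```yaml
# Hypothetical "dealer" config for a mass-produced website model.
# Nothing here requires programming -- just filling in the blanks.
model: runabout-2012        # sporty | heavy-duty | runabout
business:
  name: "Suzy's Hair Salon"
  phone: "555-0142"
  address: "123 Main St"
theme:
  color_scheme: warm        # pick from a fixed palette
  logo: logo.png
pages:                      # reorder or toggle, but you can't invent new ones
  - home
  - services
  - contact
features:
  online_booking: on
  newsletter_signup: off
```

That rigidity is the whole trade-off: a week of training is enough to fill this in, precisely because nothing outside these options can be changed.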
Suddenly functional, value-adding websites are democratized: everyone can afford one, often of higher quality than custom models of just a few years earlier. The factories update their models every year, providing new and improved features and safety, as well as more highly-skilled work for the designers and repeat business for the lower-skilled dealers and customizers. And all those clever people who were pulling down a hundred grand a year building custom websites for hair salons will have to either take a pay cut or move on to more innovative, creative, and ultimately fulfilling work at those companies that previously couldn't find anyone at any price.
Will websites mass-produced in this way be as good as the custom-built sites of today? It depends. Those who could previously afford to spend $5-$10k might now prefer to spend $500 on a mass-produced site of lower quality. But the people who are today paying $1000 for a splash page with an email link will suddenly have a site that actually generates sales and takes orders. Mass-produced sites will be one-size-fits-most affairs, with less design flair and spontaneity, in order to appeal to a wider audience -- just like car designs. But as web developers, it's time for us to grow up: our fetish for getting the fonts just right and the white space just perfect, and for indulging customers' every design whim and unusual feature request, is self-indulgent. We're earning too much money doing something that could be done more simply and orders of magnitude more cheaply. We need to move on to real problems and new challenges.
And the analogy holds true, to lesser or greater degrees, across much of the software industry. We need to stop building software for each customer and start building software assembly lines: harder, less fun, but hundreds of times more productive -- and profitable. And once we've built the assembly lines, a new generation of blue-collar knowledge workers will be able to step up, doing the things that robots can't do, just like they did before. Pulled off of unemployment lines, they'll spend again on housing, clothes, travel and entertainment: this is how you end a recession.
Because this isn't just some altruistic ideal -- this is how to rescue the American economy. Everything is software now, but software developers are holding everyone back with our greed for easy money, rationalized as an idealistic pursuit of perfection and craft. In our complacency, we're dragging a whole nation down with us. That's a problem, and one you are uniquely equipped to solve. Doesn't fixing the economy sound better than sculpting yet another vanity site? Then get to work.
* This analogy can go on and on. You'd get website hot-rodders who customize their sites, the whole ecosystem of accessories and tools, even tacky user-installed bolt-ons -- the truck nutz of the new web world.
Brace yourselves, because I'm turning the old fuddy-duddy up to max for this post. Alternative linkbait titles for this post are "Why HTML5 mobile apps suck" and "Get off my lawn!"
When I started coding in PHP in 1996, the alternatives were Perl and C. It was generally regarded as "easier but slower" than both of these alternatives. At the time there was an argument from Perl and C coders that the sacrifices in efficiency weren't worth it for the ease of development: websites needed to render fast, and scale to what was then considered high traffic.
Today there appears to be a similar argument emerging about HTML5 compared to Objective-C apps for iPhone or Java for Android. And there is no question in my mind that HTML is the "easier but slower" option -- writing efficient, performant Objective-C requires knowledge of memory management, alien to programmers coming from web-centric languages (and Java), and the mere task of rendering something to the screen, a first-class operation in HTML, requires piles of boilerplate and usually a specialized IDE to get right in Obj-C.
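To make that concrete, here's a rough sketch of what "put a line of text on the screen" costs in each world. In HTML it's a single tag; in pre-ARC Objective-C it's a class, a frame, and manual memory management. (HelloViewController is an invented name; this is an illustration, not a complete app.)

```objc
#import <UIKit/UIKit.h>

// HTML equivalent of all of the below: <p>Hello, world</p>
@interface HelloViewController : UIViewController
@end

@implementation HelloViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(20, 40, 280, 24)];
    label.text = @"Hello, world";
    [self.view addSubview:label];  // the view retains the label...
    [label release];               // ...so we must balance our alloc by hand
}
@end
```

Forget that release and you leak; release twice and you crash. That is the kind of bookkeeping HTML developers have never had to think about.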
Some people are pointing to the earlier victory of PHP and, more broadly, the triumph of web apps over desktop software as proof that HTML is bound to triumph over native languages. But there is a crucial difference between the older PHP vs C contest and the one now ongoing between HTML and Objective-C, and it's the end-user experience. A website written in PHP (or for that matter Python, Perl or any other scripting language) is, once rendered, indistinguishable from one rendered in C. The buttons, rendered client-side, click equally fast, pages scroll at the same speed. Your only performance hit is at render time, and caching can take care of much of the difference, which even 15 years ago when the first battle was being fought was only a matter of a handful of seconds.
This is a nontrivial difference that does and should affect your choice of development platform. It doesn't matter if you can develop your app faster: if it is a noticeably poorer experience for the user, somebody who spends more time building a better product simply by replicating all your features in Objective-C will eventually steal your users. Simply put, Objective-C makes better apps than HTML can.
Right now, a highly-optimized HTML app can out-perform a badly written Obj-C app, but your scope for optimizing an Obj-C app is so much greater that there's ultimately no contest. My friends who are professional iOS developers speak of shaving tens of milliseconds off render time; an HTML developer cannot go any faster than the maximum speed of Webkit, and while it's a triumph of mobile engineering, it's still nowhere close to native UI in rendering speed.
Smart people like Andrew Betts of the FT think optimizing your HTML app makes more business sense than focusing on multiple native platforms: it's more future-proof, and it avoids platform fragmentation across iOS/Android/whatever-Microsoft's-mobile-platform-is-called-this-week. Each bit of optimization happens across all platforms, goes the thinking, so it's more bang for your buck. And eventually mobile platforms will catch up... right?
In the PHP vs. C debate, the ultimate decider was Moore's law: computers got faster, and memory got cheaper, faster than apps grew in complexity, and now the performance difference between PHP and C is negligible simply because both numbers are so small -- the majority of rendering time is now client-side, a sea-change that happened so slowly that it took developers years to notice they were spending time optimizing the wrong things.
The question for you as a developer right now is: in the lifetime of my app, is Moore's law going to take care of the performance hit I accept by using HTML? Is the benefit of launching on the big three platforms simultaneously great enough to justify a less-than-optimal user experience?
If you're, say, the Financial Times, you're not going anywhere. It makes sense to place a long bet on HTML. But if you're a startup, you're probably thinking in terms of months. I will bet that Moore's law is not going to take care of your performance problems inside of the next two years. If in that time a competitor can take 3 months and produce a better user experience than you could in 4 weeks with HTML, your two-month head start is probably not going to be enough.
Bottom line: if you're a startup looking to exit prior to 2015, you should probably still be writing your apps in Objective-C.
P.S. Entirely coincidentally, it seems Facebook agrees with me, and has rewritten its app in Objective-C.
Depending on how long you've been a coder, it may or may not come as a surprise to you that PHP, now fast and (depending on your framework) lightweight, was once the slowpoke, resource-hog language.
Unlike with iOS and Obj-C, I have little experience using Android and none at all developing for it. I believe the same arguments generally apply, but don't hold me to that.
 No offense intended to BlackBerry developers, but I think it's uncontroversial to assume RIM isn't going to be a force in the mobile space two years from now.
In retrospect, it seems obvious that we were conducting our searches for extraterrestrial life in completely the wrong way, and even then half-heartedly. But then, so many things are clear now that once seemed impossibly mysterious.
Consider the contradictions inherent to our pre-contact thinking. Simultaneously, we marveled at the uniqueness of our planet, and searched for ones just like it. Earth has far too much water for a world so close to the sun -- our best guess was that it had been deposited by comets. This thin envelope of water was just deep enough to give us oceans, just shallow enough to leave us with continents, while every other planet we could see was either an arid dust ball or else crushed under hundreds of miles of ice.
Our home planet has a moon nearly a quarter the size of the planet itself, a cosmic arrangement we could see nowhere else. This anomaly interacted with the first and gave us tides, repeatedly washing the borders of the continents with organic matter until finally some of it stayed there, first plants then animals and finally our species of crazy apes.
The presence of the organic matter was itself an insanely convoluted and unlikely tale. The core of our planet -- warm enough to create volcanoes at the tectonic boundaries, cool enough that the plates themselves were relatively stable -- fed energy and volatile chemicals through tiny fissures in rocks deep under the blanket of ocean. In these tiny channels, chance accumulations of chemicals formed and then multiplied, protected by a shell of rock until forced out into the cold depths.
After uncounted billions of iterations, the chemical compounds combined to form a shell around the core replicating body, and in a galactic eyeblink the oceans were filled with cells, then multiple-celled organisms, multiplying and diversifying into every possible shape, including monkeys clever enough to sit at keyboards and bang out books about how their own presence was so impossibly unlikely that there must have been an outside, intelligent force at work.
Our origins are so unbelievable that huge sections of the population literally refused to believe them. And yet, when we went searching for other life in universe, we pretended to keep an open mind, but what we were really looking for was more monkeys just like us. We were willing to accept perhaps a few extra limbs, a different skin colour, strange habits and languages. But we eliminated huge swathes of possible variation entirely -- scale, temperature, pressure, gravity, speed of perception and communication. Futilely, we searched the cosmos for ourselves.
And the contradictions didn't end there. If interstellar travel or even communication were possible, then, the universe being so enormous and so old, it should already be happening: if there were aliens anywhere, they should be everywhere. In fact, we even had a name for it -- Fermi's paradox, summed up neatly as a question: "If they are out there, why aren't they here yet?" Aware of the question, we posited dozens of answers, including the correct one -- and yet we did nothing with it.
The solution to a big mystery is often the solution to another -- puzzled by the shape of the universe and the anomalous precession of Mercury's orbit, we discovered the answer to both. And so it was with us. For centuries physics struggled to come up with our grand unifying theory, and when we finally did, it answered Fermi's question as well.
Able at last to understand and perceive our own projection into twelve-dimensional space, we found it "crowded" -- though the term is meaningless in that space. Travel and communication through our paltry four dimensions had always been impossible: intelligence is too far apart, in time and space, to communicate there. But in our new space our distances were easily, trivially bridged.
Twelve-space freed us from boundaries we had never even thought to question -- time and temperature, gravity and scale. There, they have no meaning. Instead, a deeper and harder problem confronted us: that of recognizing other intelligences, and having them recognize us.
The first intelligences we found in twelve-space were ourselves. Freed from time, every human who ever lived was simultaneously present, and immortal. We rediscovered ourselves not as the tiny, separate, fragile creatures we had believed ourselves to be in four-space. Those temporary protrusions are no more to be glorified or mourned than the scales of a fish when it brushes the surface of a pond. We are giant, simultaneously one and many, unified and legion, an endless branching tree of possibilities and personalities.
In four-space the patterns of our selves are faintly discernible as the connections of parents to children, stretching from ancestors in the distant past down to descendants in the indefinite future. But to think these patterns give any clue to the whole is like looking at the wrinkles on the skin of an elephant, noting their texture, and declaring oneself to understand not just the shape of the whole animal, but its entire biology.
Time has no meaning in twelve-space, but even to the creatures we now understood ourselves to be, with minds that stretch the length of the universe, from big bang to heat-death, the recognition of other intelligences presented a challenge. Forced to translate into four-dimensional terms, the others describe the addition of a new species into twelve-space like a baby opening its eyes, or learning to talk: the sights and sounds have always been there, but now there is slowly-dawning comprehension. What was previously inert, to be protected for its potential, is gradually becoming active, first merely absorbing, then actively listening, and finally participating.
Of course, the analogy breaks down because of the irrelevance of time in twelve-space. Once you are capable of perceiving it, every problem and solution that can ever exist is simultaneously accessible and active, attempted and solved and remembered all at once by every part of our beautiful, multi-faceted minds. But nevertheless, the pattern of how our awareness came into existence was a part of our selves, visible in twelve-space, like a long-healed belly button.
And so it is that I can speak, as the tail-end of a being that simultaneously fills the universe and shares it with an infinite number of other immortal intelligences of every possible form, of contact with "extraterrestrials" and the answer to Fermi's paradox. They are out there, and they are here, and so are we, everywhere and every time simultaneously. Our earth is not a prison or a cradle: it is a bubble on the surface of a pond, its life as short and its fate as inconsequential.
And the universe is waiting for us to open our eyes.
I am an obsessive follower of politics. With 189 days left until the US presidential election, I'm in full-on obsessive poll-watching mode.
As usual, you should ignore national polls. The important point is the electoral college. You can see a projected electoral map for 2012, and you can also create your own. My current pessimistic prediction has Barack Obama winning by just 3 electoral votes, hinging on a win in Virginia.
Virginia is one of the four battleground states that are going to matter in this election. Obama won all four in 2008, but that's pretty much guaranteed not to happen this year.
While national polls are almost useless, a few are worth keeping an eye on. Romney's net favorability, for one, is currently negative, which is great news for Obama and terrible news for Romney.
Okay Twitter, it's time you stopped getting a free pass. You have fucked up your interface, and it's time to fix it.
Before we get to the UI problems, let's reiterate the bigger, older problem, captured just a second ago:
This was cute when you were tiny and still getting over early technical mistakes on the back-end, but you're past that now: you've taken over a billion dollars in funding, and you are basically CNN's only news source at this point. You can't be throwing 500s anymore, no matter how cute the whale is. But your problems go much deeper now: even when it renders, your actual user interface is significantly less useful and elegant than it used to be. Allow me to rant briefly about a few issues. While I'm waiting for the whale to go away so I can take screencaps, here's a shot of the old-old Twitter (via):
The new layout puts tweets down the right, and a mish-mash of useless junk down the left. I know why you did this: you needed to increase the amount of attention paid to promoted followers and trends, because that's your business model. I don't care. You are deliberately distracting your users from something they want to see with something they don't care about. This is the wrong way to do advertising.
Can you tell me the difference between these two ways of composing a tweet? This one is accessible from the left nav:
This one comes up if you click the blue button in the top-right:
The answer is: there is none. They are totally different-looking ways of doing the same thing, both accessible from the top of the front page. Why would you confuse your users like that? If you think the top-right button is too hard to find, why is it there at all? If you think it's useful because it stays visible as the page scrolls, why not make the easy-to-see compose box fixed? Instead you have this weird dual-interface solution that reeks of committees and compromise instead of the great, simple design that you started with.
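For what it's worth, keeping an element visible while the page scrolls is about two lines of CSS -- the selector below is invented, since I don't know Twitter's actual markup, but the principle really is this simple:

```css
/* Hypothetical selector for whatever element wraps the compose box. */
.compose-box {
  position: fixed; /* stays in place while the timeline scrolls */
  top: 50px;       /* clears the fixed navigation bar */
}
```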
The basic tweet layout is pretty much unchanged since the beginning:
The friendly "posted X minutes ago" has been replaced by the context-free "27m", but that's a tiny matter. On hover, as before, you get some tweet controls, plus the new "open" link:
And here the real trouble begins. This is what happens when you click "open":
My payoff is that the controls jump to the bottom of the tweet for no clear reason, and I get a more exact timestamp. Is that worth a click? Then why is that link there at all? The answer is that if your tweet has more interactions, like retweets and favourites, you get those here too:
But that's an explanation, not a reason. It would be quite simple to not bother having an "open" link unless there was something interesting to show. But instead we have this weird cruft in the name of consistency. Again, it's a small thing, but all these little things are beginning to add up to a UI that isn't cared about.
If a tweet is part of a wider conversation, clicking "open" gives you a lot more context, like so:
Display of larger conversational context is a good idea. But there are two issues: first, the back-end implementation sucks. If I respond twice to your tweet before you reply, that second tweet is lost from the conversation. Sometimes you get the whole conversation, sometimes just the immediately preceding tweet. It's inconsistent and confusing.
And the UI is also inconsistent and confusing: two of the tweets have short timestamps, one has a long one. The "hide conversation" link closes the whole conversation, but uses the same icon as "reply" and sits right next to it. In the top-right, where you'd expect the "close" button to be in any other context, there's nothing but a timestamp, unless you hover, when you get this:
So now I have controls for this tweet, and also a "details" link which... closes the conversation, in total defiance of its label. Unless you right-click and open in a new tab, in which case you get the details page for a tweet. Why have a link that only does what it says if you right-click? Why can't I get more details of this tweet inline?
Here's how it should work: I'm looking at a list of tweets, and the bottom one is highlighted. If I want to close the list, there should be a close button in the top-right. If I want details about a tweet other than the last one, I should be able to click it. Is that so hard?
Incidentally, there is a close button for the conversation that's correctly labelled. You get it if you hover over the final tweet:
...right where the "details" link is on the other tweets. And it does the same thing that the already-visible "hide conversation" link did anyway. What on earth is the point?
This one is so obvious I can't believe it's not been fixed already. This is the default view of the (poorly-named) "@connect" tab:
This view is clumsily mixing together two totally different use-cases. The first is @replies: these are frequent, personal, and demanding of your attention. They are high-value. The second is retweets and favourites: these do not require a response (good, because the UI doesn't let you respond anyway), and happen asynchronously: you don't care when a particular retweet or favourite happened to any degree of precision -- so why is it in a timeline? Not that Twitter actually tells you when it happened anyway, since the UI batches up responses:
It's not as if there's no great, usable UI for handling interactions to model from: Favstar.fm nailed it years ago. Show tweets in the order they happened, batch up all the responses. It's not hard.
And as another tiny little thing: if the "Mentions" link lets you filter down to only @replies, why is there no equivalent "responses" link that lets you filter down to only retweets and favourites?
The New New Twitter has dozens of small, irritating design choices and UI inconsistencies. None of them by themselves is worth a whole post, but together they add up to enough brokenness to complain about. I've tried to keep things constructive by suggesting how they should look instead.
The new look has been out for over a month now and there's been no sign of iteration to fix these things. We wouldn't put up with this crap from Facebook. It's time to fix it.
P.S. Dear commenters, before you immediately point out the many, many UI flaws in this blog, I reiterate that Twitter has a billion dollars, while I maintain this blog in my very rare spare time. I expect more from them, and so should you.
Charles Babbage, one of the fathers of the computer, once wrote:
I will yet venture to predict that a time will arrive, when the accumulating labour which arises from the arithmetical applications of mathematical formulae, acting as a constantly retarding force, shall ultimately impede the useful progress of the science, unless this or some equivalent method is devised for relieving it from the overwhelming incumbrance of numerical detail.
He meant that one day his computer -- at that time seen as a costly and useless device -- would be not just useful but required to make further economic progress. He was right in all but one respect: he thought that day was in the future. In fact, he was already living in it. Scientific and economic progress have always been limited by the available computational power -- but until quite recently, the level of power available never changed, so the limit was imperceptible.
Now that computing power routinely grows by orders of magnitude, it is easier for us to grasp the idea that we are limited by available computation -- it would be nice to decode genes faster, fold proteins quicker, make more accurate weather predictions. We understand these things will get better, in the same way that Babbage could grasp that calculating logarithmic tables (the primary purpose of computation in the 1850s) would someday be quicker.
What is more difficult for us to grasp is that we are still horribly limited by our lack of computational power. We cannot see how limited until those limitations are lifted. Take, as the shadow of an example, the way that one can browse live online maps from one's phone and already scarcely remember how one got by before that ability. In the future, ubiquitous and mind-bendingly powerful computation will make these tasks seem as divorced from utility and convenience as grinding gears to create logarithmic tables seems now.
Human beings are bad at predicting phase changes. We can predict iteration but not invention, so we write stories about giant spaceships that have wired telephone lines. You have no idea how amazing the future is really going to be.