Bad UI, Not Buggy Apps, Killed the BlackBerry

CNN Money reports What Killed BlackBerry: Terrible Apps, but I disagree: I left because I couldn’t reliably install or update apps. I LOVED the unified message inbox that connected Facebook, email, and SMS. I HATED that LinkedIn never worked, on the BUSINESS phone of all things. BlackBerry App World was a buggy, painful experience next to the App Store.

I still miss my BlackBerry keyboard, but I don’t miss my crash-prone, buggy phone… I loved my Palm Treo 650, the best interface ever, but the damned thing was buggy: the phone application would hang or crash, they refused to support syncing to my Mac, and they kept breaking the third-party application I bought, trying to force me into the buggy Palm Desktop instead of supporting Apple iSync.

I think that bad syncing with the Mac did more damage to BlackBerry than people let on. So many web developers are Mac people, not just the graphic artists. Internet executives, road warriors, and old Unix die-hards all toted Apple laptops around. People obsessed over the fact that 90%+ of machines ran Windows, but once you take out the embedded machines, public kiosks, and other devices that people didn’t control, Apple’s market share was WAY higher.

Apple may have forced you to use iTunes, but people LIKE iTunes.  Over time, they made it so you can use the iPhone totally untethered from a computer of any kind: sync with your computer, sync everything via the cloud, and so on.  I absolutely LOVE that when I update a contact on my machine OR my phone, my other computers all sync as well.  With my BlackBerry, everything was unsynced until I plugged into a machine, and even then it was haphazard whether it would sync.  MobileMe’s death was the final death knell for me (Apple pushed me off BlackBerry), because my phone became totally unusable.  Had BlackBerry embraced iSync instead of forcing me into third-party software (which they broke, pushing me onto their horrible alternative), I might still be with them.

But that’s how market leaders are destroyed.  Lose 2% of the market here, 2% there, and pretty soon you’re no longer the dominant player, and that lets your competitors eat your lunch.

Performance Tuning Websites

Download speed used to be one of the ways you could tell a real web pro from a graphic designer who knew how to make things pretty.  One of our exercises used to be building a page in a single table (this predated CSS), carefully spanning rows and columns to position content: a pain to develop, but fast to render, since Netscape’s browser was terrible at “embedded tables.”  Obviously this is archaic (along with worrying about 28.8k modem download speeds), but the concept of a page that is fast to download and fast to render never went away.  When Google officially announced that it would take page load time into account, people finally started to pay attention.  One of the best starting points is Yahoo’s Performance Guide.

However, the process of making a website fast is pretty straightforward:

  1. Home page and other entry points: VERY fast and simple
  2. Limit third party items that might cause delays via DNS or download
  3. Prevent things that can get VERY slow from being on these pages

Items one and two get the most attention, but #3 potentially has the biggest impact and is the most ignored.  For example, adding gzip to your server cuts file transmission size, which saves time and is nice, and can take 80 ms loads down to 20 ms, but #3 is where page loads can balloon from 100 ms to 10–15 seconds.  If you query the database to build your navigation, the navigation is easier to manage, but one hiccup at the database level (a lock on that table) and your site hangs while loading.  A solution like memcached moves your “read only” data out of the database and into RAM.  You can still manage it in the database, and the site will update relatively quickly, but there is no reason to consult the database on every page view for information that changes infrequently.

Third-party servers often get ignored, but you have no control over when they have problems.  Serving JavaScript from a third party has the advantage that you don’t have to maintain it, but it puts you at risk of the user’s experience massively degrading.  Consider removing as many third-party elements as possible.  A solution like Google’s JavaScript-based tracking for Google Analytics has near-zero impact on page load (beyond transmitting the snippet and downloading the JavaScript library); unlike third-party images, it tends not to cause performance problems, and the page will still load even if the browser has trouble fetching the JavaScript library at the bottom of your page.

Getting the 50%–200% improvements is great, but a real focus on the few things that can explode out of control will serve you better in the long run.

Copyright in the Digital Age

In Code and Other Laws of Cyberspace, Lawrence Lessig was over 10 years ahead of his time, pointing out that code, as in software, is as important to the realities of the online legal regime as the laws passed by governing bodies.  There seems to be an increasing understanding that copyright, as we know it, is becoming obsolete.

Our notion of copyright, the exclusive right of an author/creator to control distribution, makes less and less sense as the technology evolves.  Copyright, at its core, protected the author from exploitation by the owners of the printing press.  Without copyright, the owner of a printing press could create multiple copies of a book, article, etc., without compensating the original author.

Consider, as a thought exercise, a novelist who brings a sample to a press owner, who agrees to share the revenues with the author.  Without copyright, that author could collect from that press owner, but would have no protection from dozens of other press owners taking the work and making copies without compensation.  Copyright protected the author.  Our founding fathers established limited protection: 14 years for registered copyrights, with another 14-year renewal available, which kept the protection to a limited time.  With extensions and treaty obligations, Congress has stretched the protection to around 100 years, give or take, depending on publication status (70 years after the death of the author, or for corporate works, 95 years from publication or 120 years from creation, whichever expires first).  This has ensured that all works are protected seemingly forever.

However, in a digital realm, we are no longer worried about press owners, but about everyone.  Everyone with a computer is capable of duplicating any work, so copyright attempts to regulate everyone.  In addition, the terms have been extended beyond anything reasonable, making the “public domain” trade-off merely theoretical.  A television show released in 2010 will enter the public domain in 2105, when nobody will have the ability to duplicate the product.  As culture speeds up, the lifespan of these works is measured in months or years, yet the copyright will last nearly 100 years.

In the computer space, we see the blatantly illegal abandonware scene, where enthusiasts keep archives of no-longer-available products for download and possible emulation.  While one might question the literary importance of early computer games, they certainly played a role in American and global culture, and the copyright regime makes it likely that these works will never again be available.  The publishers of the 1980s and 1990s are long gone, the copyright holders defunct or swallowed into larger companies, none with any interest in preserving the works of that period.  For every game like Civilization with endless sequels (and presumably originals maintained and later republished as Flash games or the equivalent), there are plenty of games that were exciting but whose companies went defunct, and changing architectures make them impossible to maintain.

If I want to show my son the games of my youth, the laws of copyright may not apply (the disks and cartridges may be in a box at my parents’ house), but with no way to play them, the laws of code render them gone.  The copyright system simply has no way of preserving our digital past.  Websites go up and down, articles disappear or are archived, and the only record may be a printout that someone grabbed at the time, threw in a box, and has no legal right to republish.

The intersection of law and code is interesting, because the code permits saving the file and ANYONE republishing it, while the law prohibits anyone from doing so.  Alternatively, in the case of abandonware, the law permits me to own and play my purchased copy, but offers no reasonable way of actually doing so without the work of those flouting the laws.

Napster may be long gone, but for over 10 years now, nobody has assumed an obligation to pay for anything, merely choosing to for convenience.  Copyright is increasingly a blunt instrument, simply at odds with how people publish and consume content.  YouTube lets anyone with an interest parody something, but leaves the enforcement of fair use to increasingly lawsuit-nervous companies, who simply take down anything that uses a few seconds of clips.  The meaning of copyright needs to be reconsidered when everyone can duplicate, creating content may be increasingly expensive, and our culture may simply be at the mercy of technology.

Decades of movies that will never be released in a digital format may exist in people’s VHS collections, but without a way to play them, they’ll simply be lost.  Culture is important, and who knows what future historians will care about when researching the culture of the 20th and 21st centuries.  Some of our earliest writing samples are of mundane things, preserved simply because they survived, and it would be tragic to litigate our creative history out of existence.  Current copyright is obsolete, and a new line needs to be drawn to preserve both our culture and our rights.

Disney may not be interested in re-releasing Song of the South, but should they be allowed to keep it out of the nation’s cultural archive?

Beyond Relational Databases: Is Your Data Relational?

One of the strangest things about technology is how it moves in circles.  The relational database isn’t new technology, and while the storage model and the performance of these systems have changed enormously, the underlying concept is the same.  The leading databases, except for Oracle, all bear SQL in the name, giving the impression that SQL is critical to the concept of the relational database, rather than merely a front-end language for describing access to relational data.

Websites fit nicely into a relational model.  They have categories, articles, products, etc.: sets of data.  The idea of applying set theory to data is at the core of the relational database.  I can quickly and easily get all Articles in the Category of SEO, because those fields are tagged, and I simply pull the appropriate subset.  You can always get intersections (with JOINs), unions, set differences (EXCEPT), and other set operations… if you are using sets of data.
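Those set operations are easy to see in miniature with SQLite; the table names and sample rows below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, category_id INTEGER);
    INSERT INTO categories VALUES (1, 'SEO'), (2, 'Databases');
    INSERT INTO articles VALUES
        (1, 'Link Building', 1),
        (2, 'Keyword Research', 1),
        (3, 'Normalization', 2);
""")

# Pulling a subset via a JOIN: all articles in the SEO category.
seo_titles = [row[0] for row in conn.execute("""
    SELECT a.title FROM articles a
    JOIN categories c ON a.category_id = c.id
    WHERE c.name = 'SEO'
""")]

# Set difference via EXCEPT: every article title NOT in the SEO category.
other_titles = [row[0] for row in conn.execute("""
    SELECT title FROM articles
    EXCEPT
    SELECT a.title FROM articles a
    JOIN categories c ON a.category_id = c.id
    WHERE c.name = 'SEO'
""")]
```

The point is that the query engine does the set math for you; if your data naturally forms sets like these, you are throwing that away by storing it anywhere else.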

Martin Kleppmann asks, on Carsonified, Should you go Beyond Relational Databases? That’s the wrong question.  The right question is, “Is your data relational?”  If you have groupings of like data, then you need a relational database.  If you are building an application around non-relational data, then storing it in a database just to get a quick id lookup is foolish, and you should be looking for persistent storage optimized for that sort of data.

For temporary storage, a system like memcached is perfect: it gives you lightning-fast references to data that may only exist temporarily.  For long-term storage, maybe a database is your answer, or maybe you need something more closely tied to your data structure.  We wouldn’t suggest Microsoft switch from its DOC format (and the DOCX XML version) to a relational database, but I wouldn’t put relational data into something more object-oriented either.  You might use objects to represent the data in memory for easier programming, but if the data is essentially relational, keep it in relations.

Data structures are at the core of computer science.  With all the free information out there, there is no excuse for building a large-scale system without knowing the basics.  The fact that Twitter built their operation without knowing what they were doing doesn’t mean that everyone can… Bill Gates dropped out of Harvard and made a fortune; not every Harvard dropout is so successful.

Twitter Users and Businesses are in a Bubble

TinyURL.com has been around forever (I didn’t realize it only dates to 2002 until they got publicity); people used it when ugly, nasty query strings broke in emails.  It predated Twitter, predated social media, and predated link popularity as a mainstream thing to worry about.

Tr.im didn’t seem to have a model other than “I’ll knock off TinyURL” with six fewer characters, using an international TLD that seems Web 2.0 and clever.  They seem extremely upset that Bit.ly, with one extra character, ripped off their idea.

The fact is, there is NO reason for Twitter to count URL characters in tweets.  The idea of a microblog was the SMS integration, hence the 140-character restriction: the 160 of SMS less 20 to handle control characters.  Twitter could easily treat your URL as a symbol, relaying it as a normal URL on the website and as a Bit.ly link over their SMS connections, with the appropriate character count.
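A rough sketch of what “treat the URL as a symbol” could look like; the 20-character short-link cost and the URL regex here are assumptions for illustration, not anything Twitter actually does:

```python
import re

# Numbers from the post: SMS allows 160 characters, 20 are reserved for
# control characters, leaving 140.  Assume any URL relayed over SMS is
# replaced by a short link costing a fixed 20 characters (an assumed length).
SMS_LIMIT = 160
RESERVED = 20
SHORT_URL_COST = 20

URL_RE = re.compile(r"https?://\S+")

def effective_length(tweet: str) -> int:
    # Count every URL as a fixed-cost symbol instead of its full length.
    stripped = URL_RE.sub("", tweet)
    urls = URL_RE.findall(tweet)
    return len(stripped) + SHORT_URL_COST * len(urls)

def fits(tweet: str) -> bool:
    # The tweet fits if its symbolized length fits in the SMS budget.
    return effective_length(tweet) <= SMS_LIMIT - RESERVED
```

Under a scheme like this, a 60-character URL costs the same as a 20-character one, and the full URL can still be shown untouched on the website.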

Twitter is a small percentage of the Internet.  It’s a fascinating exploration of how a simple technology can capture the imagination: most of what defines Twitter (@usernames, #hashtags, and shortened URLs) was pushed by the user base, not by Twitter, with its simple Follow/Following/Feed approach to its data.

However, TinyURL got 70% of its traffic from outside Twitter, while Tr.im seemed to have no push other than Twitter, which left the company in the lurch.  I find the whole thing kind of silly: Twitter ought to roll URL shortening into their package, stop molesting links where it’s not needed (on the website), and shorten only where the 140-character limit matters.  But Twitter isn’t a technology company; they are a social fad with an online presence.

95% of Americans don’t tweet, so while Twitter is exciting to a slice of the technology elite, it’s not where most Americans go to find things, and overstating its importance is a bit silly.  Email and web search still dominate Internet usage; social media may get there, but Twitter is not the be-all and end-all of the universe.

This makes Tr.im’s whining about Bit.ly not only counterproductive and pointless, but also wrong.