Automated Speech must be Free Speech

As aggregation and news feeds become the primary way we digest information, the definitions of electronic speech will be a critical test of whether we maintain the free flow of information we have grown accustomed to, or whether the free press simply withers on the vine.

The New York Times is attempting to muddy the issue by making fun of "free speech for computers," but it is not the computer that has free speech rights; it is the company or owner of the machine whose freedom of expression needs protecting.  The 2003 case that Mr. Wu mocks was Search King's lawsuit, in which a search-engine-spamming company that had been manipulating Google's algorithm had its rankings dropped.  Search King argued that Google was responsible for its business loss; the court vindicated Google, holding that its "search ranking results" constitute the company's free speech and cannot be the basis of a suit.  Had the case gone the other way, organic search as we know it might be dead, for any time a site dropped in the rankings, it could sue.

Nobody would argue that my blog post isn’t free speech, or that a large company doesn’t have some basic free speech rights for a corporate blog (commercial speech, not political speech, but still speech).  So what is “Computer Speech?”

Any site that automatically aggregates information, crawls the Internet and organizes what it finds, or runs a search engine is being treated as "computer speech," despite the fact that the computer is executing clearly designed behavior to express what its author intended it to express.  Take away that free speech, and AltaVista, Google, and every other automated search engine never come into existence, the early news aggregators never appear, and sites like Reddit may never exist either.

“Computer speech” may be commercial speech, or it may be someone’s rambling opinion, and whether I express my views algorithmically as a programmer or verbally as a writer, I should be entitled to the same protections.

Media Submarkets on the Web

I love reading what marketing-focused online marketers have to say, because coming from a technology background, I like understanding what my colleagues without a background in tech think are the market-moving forces.  I've been quoting a bunch of articles from MediaPost, because the daily emails often prompt a good opportunity to think.  Mr. Allen inadvertently suggests that the social media of today traces its roots to the early days of the web, and while he is correct that the desire for interactivity was visible in the early web, it took a shift in the underlying technology to support it.

Though today's websites share no common code with the BBS world of the 1970s-1990s, they share a cultural desire to share information, files, and resources online.  Some of the early Unix BBS systems were designed to support information sharing like a dial-up BBS between local users at a university, albeit over the TCP/IP network and Telnet instead of a modem and a terminal emulation/dialer program.

However, the "Social Media" world of today required a certain technological shift.  The "Web 2.0" shift, and the AJAX acronym, didn't require new technology, but did require a changing software landscape.  In 2001, when I started in this business, trackable links that didn't break search engines required custom coding and careful management of the HTTP protocol responses.  In 2009, off-the-shelf services do it for you.  In 2001, building a website meant building an article repository to manage content; in 2009, many CMS systems are available off the shelf.  In 2001, SEO was emerging from the hacker days of AltaVista, and riding the PageRank mathematics of Google's rise and Yahoo's use of Google PageRank for sorting.
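The "trackable link that doesn't break search engines" trick above boils down to logging the click and then answering with a permanent redirect, so engines follow the link and credit the real destination.  A minimal sketch of the idea (the function name, campaign id, and log structure are all hypothetical, not from any particular product of the era):

```python
# Hypothetical sketch of a search-engine-friendly tracking link.
# The click is recorded, then a 301 (permanent) redirect is returned;
# crawlers follow 301s and attribute the link to the destination page,
# so the tracking layer stays invisible to rankings.
def tracking_redirect(campaign_id, destination, click_log):
    """Record the click, then return the headers for a 301 redirect."""
    click_log.append(campaign_id)           # the "trackable" half
    return {                                # the "don't break SEO" half
        "status": "301 Moved Permanently",
        "Location": destination,
    }
```

In 2001 you managed those status lines and `Location` headers by hand; a 302 or a JavaScript hop in the wrong place could strip the link of its search value.
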

Why does this matter?  In 2001, building a website required technology skills.  In 2009, a hosted platform has you up and running in 15 minutes, and you can start working on your site.  The early promise of the Web was two-way communication.  Netscape shipped with an HTML editor, because the whole concept of hypertext was easily shareable and editable documents.  The HTTP spec had concepts of file movement that were never adopted until the DAV group realized that you could do collaborative editing with them.  HTML editing turned out to be too complicated, but Web 2.0 featured the concept of mini code.  IFrames let websites include content from elsewhere, but you were at their mercy for how it displayed.  Instead, we now have interactive forms that pull information from anywhere.
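The "file movement" the HTTP spec envisioned was as simple as a PUT request pushing an edited document back to the server, which is the mechanism DAV later built collaborative editing on.  A sketch of what that raw request looks like on the wire (host, path, and body here are invented for illustration):

```python
# Sketch of the raw HTTP/1.1 PUT request the original spec envisioned
# for writing a document back to the server -- the primitive that the
# DAV working group later wrapped with locking for collaborative editing.
def build_put_request(host, path, body):
    """Compose the bytes of a bare HTTP/1.1 PUT request."""
    payload = body.encode()
    head = (
        f"PUT {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: text/html\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "\r\n"                      # blank line separates headers from body
    )
    return head.encode() + payload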

The social media of today traces its social roots to the first acoustic modem on a computer, but the technology is new.  When AJAX came out as a popular acronym, it became socially acceptable to put critical content behind a JavaScript layer, previously a no-no of web design; JavaScript for convenience had been accepted, but never required.  The underlying technology was there, but easy libraries brought it to the junior programmers.

Designing a high-end website still requires technology and database skills.  But prototype-grade environments like Ruby on Rails and CakePHP brought RAD concepts from the desktop Visual Basic world to web programming for everyone.  And while that certainly produced many applications that don't scale, it made these rapid-fire AJAX/JSON mini web services easy to write, and that made the social media world possible.
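An "AJAX/JSON mini web service" of the kind described above is little more than an HTTP endpoint that answers with a JSON blob for a page script to render.  A self-contained sketch using only the Python standard library (the handler, endpoint, and payload are illustrative assumptions, not any specific framework's API):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Tiny JSON endpoint: the kind of mini service a page's
    JavaScript would poll to update part of the screen in place."""

    def do_GET(self):
        body = json.dumps({"status": "ok", "friends": 3}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

def serve_in_background():
    """Start the service on an ephemeral port; returns the server object."""
    server = HTTPServer(("127.0.0.1", 0), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Frameworks like Rails and CakePHP reduced even this boilerplate to a few lines of routing and a render call, which is exactly why junior programmers could suddenly ship them by the dozen.
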

So while marketers may see this evolution of mini-markets, they miss the underlying technology shift.  Once a medium is cheap to create, advertising on it becomes affordable.  The wire service made real reporting cost-effective, the web made mail order effective, and the underlying language libraries let companies without a technology team build interactive websites, creating these markets.

Social Media – New Walled Gardens

In the early days of the Internet, the term "walled garden" referred to private content areas, and to the debate over which content should be private and which public.  In those early days, walled gardens were seen as a differentiator… one might pay more for AOL than for a local ISP if the unique content was of value, which led to the acquisitions of CompuServe and Time Warner.  The search-driven Internet pushed walled gardens out and free content in, and that has dominated for years…

But now Facebook establishes a walled garden, with huge amounts of content available only to members, and interestingly, it's all user generated.  Who would have thought that users would support hidden content?  Yet that is Facebook's appeal.  I see what my friends are up to; they see what I am up to.  The fact that the content is only available inside Facebook seems incidental, since my family photos are now private.

I see news articles posted by friends, and we openly comment on them.  In a way, it's like a miniature blog, only the content isn't available to the outside web.  You need a Facebook account, and you need to be my friend, to see it.

In technology, everything moves in trends.  The centralized mainframe with its 3270 terminal (a client-server model, with a smart viewer drawing the screen) gave way to the minicomputer and the dumb terminal; we then moved to PCs with the power back at the desk, then to a network-centric model (both the WinTerm push in corporate environments and the browser-as-computer in the public one), and now to smart applications that talk via APIs back to web-based systems.  Is there really that much of a difference between an iPhone application that connects back to a web service and a 3270 data-entry screen that draws the interface locally and sends the data up the serial line?