
Archive for the ‘Commentary’ Category

On the Impact of Damage Non-Locality in Incentive Economies around Data Sharing

June 17th, 2010

For centuries, it was common for scientists to exchange ideas through epistolary discussions. These days, remotely located scientists collaborate via email, or exchange digital documents when they don’t meet face to face. These are far faster and easier to exchange than hand-written letters sent via postal services. Unfortunately, they still retain that ‘after the fact’ property: they are often revealed only when some scholar later decides they were important enough to dig out and organize.

With that in mind, I find myself excited every time I get the chance to participate in ‘blog rebuttals’ like the ones that David Karger and I have been having lately about requirements, motives and incentives for people to share structured data on the web. Both of us care a great deal about this problem, and we still cross paths and cross-pollinate ideas even after I left MIT. We also have very different backgrounds, but they overlap enough that we can understand each other’s language even when we try to explain our own (sometimes still foggy) thinking.

It is a rare situation when people from different backgrounds cross paths and earn each other’s respect. It is even rarer when their discussions are aired publicly as they are happening; this creates a very healthy and stimulating environment not only for the participants but also for later readers.

In any case, the point of contention in the current discussion is why people would want to share structured data and what can facilitate it.

It seems to me that the basic (and implicit) assumption of David’s thinking is that because a web of hyperlinked web pages came to exist, it would be enough to understand why it did, replicate the technological substrate (and its social lubrication properties), and the same growth dynamics would apply to different kinds of content.

I question that assumption, and I’m frankly surprised that questioning whether the nature of the content can influence the growth dynamics of a sharing ecosystem leads him to dismiss the concern as particular to one class of people (programmers) or one class of business models (my employer’s).

It might well be that David is right and the same exact principles apply… but it seems a rather risky thing to take for granted. People post pictures on public sites, write public tweets, contribute to Wikipedia, write public blogs, or create personal web sites; all this is shared and all this is public. These are facts. They don’t publish nearly as much structured data, and this is another fact. But believing that people would do the same with structured data if only there were technology that made it easier or made it transparent is an assumption, not a fact. It implicitly assumes that the nature of the content being contributed has no impact on the incentive economies around it.

And it seems to me a rather strong assumption considering, for example, that it doesn’t hold true for open sharing of software code.

Is it because software programmers are more capricious about sharing? Is it because what’s being shared is considered more valuable? Or is it because the incentive economies around sharing change dramatically when collaboration becomes a necessary condition to sustainability?

Could it be that sharing for independent and dispersed consumption (say, a picture, a tweet, a blog post) is governed by incentive economies different from those governing sharing for collaborative and reciprocal consumption (say, software source code, Wikipedia, designs for Lego Mindstorms robots or electronic circuitry)?

I am the first to admit that it is reasonable to dismiss my questioning as philosophical or academic, or too ephemeral to provide valuable practical benefits, but recent insights that crystallized collectively inside Metaweb (my employer) make me think otherwise. The trivial, yet far-reaching insight is this:

the impact of mistakes in hypertext is localized,
while the impact of mistakes in structured data or software is not

If somebody writes something false, misleading or spammy on a web page, that action impacts the perceived value of that page but it doesn’t impact any other. Pages have different relevance depending on their location or rank, so the negative impact of that action varies with the page’s importance. But the ‘locality of negative impact’ property remains the same: no other page is directly influenced by that action.

This is not true for data or software: a change in one line of code, or one structured assertion, could potentially trigger a cascading effect of damage.
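
To make the asymmetry concrete, here is a minimal sketch of the two failure modes, in Python, with made-up numbers (this is my illustration of the idea, not real data):

# Hypertext: pages are consumed independently, so a bad page
# degrades only itself.
pages = {
    "a.html": "useful content",
    "b.html": "spammy nonsense",  # the mistake
    "c.html": "useful content",
}
# Rendering a.html or c.html is completely unaffected by b.html.

# Structured data: values are derived from other values, so one wrong
# assertion silently corrupts everything computed from it.
# (The figures below are illustrative, not real data.)
area_km2 = {"monaco": 2.02, "liechtenstein": 160.0}
population = {"monaco": 36_000, "liechtenstein": 36_000}

population["monaco"] = 36  # a single bad edit: three zeros dropped

density = {c: population[c] / area_km2[c] for c in population}
densest = max(density, key=density.get)  # now gives the wrong answer

The spammy page hurt only its own readers; the single bad assertion corrupted every value derived from it, without anybody touching those derived values.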

This explains very clearly, for example, why there are no successful software projects that use a Wikipedia model for collaboration and allow anybody who shows up to modify the central code repository.

Is that prospect equally unstable for collaborative development over structured data? Or is there something in between, some hybrid collaboration model that combines the best practices of the wiki models (which shine at lowering the barrier to entry) and the open software development models (which manage to distill quality in an organic way)?

I understand these questions don’t necessarily apply to the economy of incentives of individuals wanting to publish their structured datasets without the need for collaboration, but I present them here as a cautionary tale about taking the applicability of models for granted.

More than programmers vs. professors, I think the tension between David and me is about the nature of our work: he’s focusing on facilitating the sharing of results from individual entities (including groups), while I’m focusing on fostering collaboration and catalyzing network effects between such entities.

Still, I believe that understanding the motives and the incentive economies around sharing, even for purely individualistic reasons, is the only way to provide solutions that meet people’s real needs. Taking them for granted is a very risky thing to do.


Drivers vs. Enablers

June 5th, 2010

I’ve heard people say many times that the web exists because of “view source”.

“view source”, if you don’t know what I mean, is the ability that web browsers have to show you the source HTML content of the web page you are currently browsing. If you ask around, pretty much everybody who worked on the web early on will tell you that they learned HTML by example, by viewing the source of other people’s pages. Tricks and techniques were found by somebody, applied, and spread quickly.

There is wide and general consensus that ‘view source’ was instrumental in propagating knowledge and simplifying the adoption of the web as a platform, yet its role is often misunderstood.

“view source” was an enabler, a catalyst: something that makes it easier for a reaction or a process to take place, and thus increases the rate, effectiveness, adoption, or whatever metric you want to use.

But it is misleading to mistake “view source” for a driver: something that makes it beneficial and sustainable for the process to take place. The principal driver for the web was the ability for people to publish something to the entire world with dramatically reduced startup costs and virtually zero marginal costs. “view source” made things easier and reduced those startup costs, but it had nothing to do with lowering marginal costs and certainly had very little to do with the intrinsic world-wide publishing features of the web.

You might think that the current HTML5 vs. Flash debate is what’s sparking these considerations, but it’s not: it’s something that Prof. David Karger wrote about my previous post (we deeply enjoy these blog-based conversations). He suggests that while my approach of looking for sustainable models for open data contributions is good and worthwhile, a more effective strategy might be to convince the tool builders to basically add a “view source” for data; once that is in place, we wouldn’t have to care, as the data would be revealed simply by people using the tools.

It’s easy to see the appeal of such a strategy: the coordination costs are greatly reduced, as you have to persuade a much smaller population, composed entirely of people who already care about surfacing data and see potential benefits in further adoption of their toolsets.

On the other hand, it feels to me that this is mistaking enablers for drivers.

The order in which I pose questions in my mind when engineering adoption strategies is normally “why” then “how”: taking for granted that because you have a driver, everybody else must share it or have a similar one can easily lead you astray. The question of motive, of “what’s in it for me?”, might feel materialistic, un-intellectual and limiting, but an understandable and predictable reward is the basis for behavioral sustainability.

David is basing his thoughts around Exhibit, and I assume he considers the driver to be the tool itself and its usefulness: it takes your data and presents it neatly and interactively without you having to do much work or bother your IT administrators to set up and maintain server-side software. That’s appealing, that’s valuable and that’s easy to explain.

The enabler for the network effect is that “cut/paste data” icon that people can click to obtain the underlying data representation of the model… and do whatever they want with it.
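
For concreteness, the payload behind that icon is the complete underlying model, something along these lines (a sketch written as a Python literal; the “items” layout mirrors Exhibit’s JSON convention, but the records themselves are invented for illustration):

# A sketch of the kind of data the "cut/paste data" affordance exposes.
# The "items" structure follows Exhibit's JSON convention; the records
# are hypothetical.
exhibit_data = {
    "items": [
        {"type": "Publication", "label": "Example paper A", "year": "2009"},
        {"type": "Publication", "label": "Example paper B", "year": "2010"},
    ],
}
# This is the whole model, not a rendered surface: anyone who copies it
# can rebuild, restyle or fork the entire exhibit elsewhere.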

But here is where things start to get interesting when you consider drivers and enablers separately: ‘view source’ was a great enabler for the web because it was useful for other people’s adoption but didn’t impact your own adoption drivers. The fact that others had access to the HTML code of your pages didn’t hurt you in any way… mostly because the complexity of the system was locked away on your end, in your servers, and your domain name was something you controlled and they couldn’t replicate. What anyone had access to was a thin surface of a much more complicated system running on somebody else’s servers. It was convenient for you and your developers to have that view-source, and the fact that others benefited from it posed no threat to you.

This is dramatically different in the Exhibit situation (or in many other open data scenarios): not only can you take the data with you, you can take the entire exhibit. Some people are not bothered by this fact, but you can assume that normal people get a weird feeling when they think that others can just take their entire work and run with it.

This need to prevent people from benefiting from your work without you benefiting from theirs is precisely the leverage used by reciprocal copyright licenses (the GPL first, CC Share-Alike later) to promote themselves, but there is nothing in the Exhibit adoption model that addresses this issue explicitly.

If your business is to tell or synthesize stories that emerge from piles of data (journalists, historians, researchers, politicians, teachers, curators, analysts, etc.), we need to think about a contribution ecosystem where sharing your data benefits you in a way that is obvious for you to understand (and to explain to your boss!). Or, as David suggests, a ‘view source’-style model where the individualistic driver is clear and obvious and the collaborative enabler is transparent, meaning that it doesn’t require you to do extra work and is not perceived as a threat to that individualistic driver.

The thing is: with Exhibit, or with any other system that makes the entire dataset available (this includes Freebase), the immediate perception that people have is that making their entire dataset available to others clearly benefits others but doesn’t seem to offer clear benefits for them (which was the central issue of my previous post).

Sure, you can try to guilt-trip them into releasing their data (cultural pressure) or use reciprocal licensing models (legal pressure), but really, the driver that works best is when people want to collaborate with one another (or are not bothered by others building on their work) because they immediately perceive value in doing so.

Both Exhibit and Gridworks were designed with the explicit goal of being, at first, drivers for individual adoption (so that you have a social platform to work with) and potential enablers for collaborative action later (so that you can experiment with trying to build these network effects); but a critical condition for the collaborative enabler is that it must not reduce the benefit of individual adoption, or it will undermine its own ability to drive network effects.

Think for a second about a web where a ‘view source’ command in a browser pulled the entire codebase out of the website you’re visiting: do you really think the web would have survived this long? Remember how heated the debate was when GPLv3 was proposed to contain reciprocal constraints even for software that was merely executed and not redistributed (which would have impacted all the web sites and web services that are now exempt)?

It is incredibly valuable to be inspired by systems and strategies that worked in the past and by the dynamics that made them sustainable… but we must appreciate both the similarities and the differences if we want to be successful in replicating their impact.

Counterintuitively, what might be required to bootstrap a more sustainable open data ecosystem is not being more open but less: building tools that focus first on protecting individual investments, and only then on fostering selective disclosure and collaboration over the disclosed parts.

We sure can (and did) engineer systems that act as Trojan horses for openness (Exhibit is one obvious example), but they have failed so far to create sustainable network effects because, I think, we have not yet identified the dynamics that foster stable and sustainable collaborative models around data sharing.


The sad story of Xerox

February 25th, 2010

I stumbled upon the news today that Xerox is suing Google and Yahoo! for patent infringement on search technology.

Did they just find out about that patent? I mean, was it hidden in a drawer somewhere for all these years? Weren’t they supposed to be the ‘document company’?

That got me to dig a little deeper: from a very superficial look at their market valuation over time, the company hasn’t been valued this low since the early ’80s. Yeah, you read that right: the ’80s. Not even the various bubbles and recessions in between did as much damage.

A little more digging shows that S&P recently downgraded Xerox’s ratings to “brink of junk” territory (and I’m quoting the Wall Street Journal here, not my words) early this month (Feb 2010), following their acquisition of Affiliated Computer Services Inc. the week before (for $5.6B… while Xerox’s market cap today is $8B… yeah, fishy).

So, let me get this straight: Xerox has one of the most advanced and prolific research labs in the history of innovation (PARC), where things like window-based graphical user interfaces, the mouse, Ethernet and the laser printer were invented.

Yet they failed to capitalize on *any* of those inventions (not even laser printing, which really feels like a no-brainer to me).

And now they sue Google and Yahoo (but hey, not Microsoft! Go figure) for patent infringement? On search? 10 years later?

I have a generally moderate position on software patents (I think there are a few genius ones that do deserve their temporary monopoly), but I feel the problem is not in the concept of rewarding innovation (which I strongly support) but in the way the system has been turned around and is now used to abuse and harass far more than to protect investment.

Xerox is nothing but the poster child of failure to capitalize on its own innovation and, frankly, resorting to the judicial system to compensate for it shouts managerial incapacity to my ears.

Not only that, but it gives off a sense of utter desperation: it’s one thing for two directly competing businesses to sue each other trying to gain any minimal advantage. It’s not pretty, but it’s understandable (it’s a prisoner’s dilemma scenario where cease-fire and moral high ground are inherently unstable).

It’s a completely different case when a company facing difficulties tries to compensate by milking somebody else’s cash cow for no other reason than that they thought about it too but could not (or did not want to) profit from it when they did. This is no prisoner’s dilemma, this is no unavoidable escalation: this feels no different from any other patent troll feeding parasitically off the fact that having “filed” an idea first can grant a temporary monopoly on it, no matter how obvious it is for others to come up with the same solution when faced with the same problem.

Shame on you, Xerox: you are a company that I have admired and respected greatly over the years. It’s so sad now to think of you as a desperate patent troll.

UPDATE: apparently, this is not the first time they’ve woken up late in the game and used lawsuits to compensate for their managerial inability to capitalize on their own inventions. Still, pretty sad overall.
