Institutions are stupid – they can’t remember a thing. Most organizations, from Hewlett-Packard to your local TV newsroom, have no formalized method of collecting and collating important knowledge in a way that makes it accessible and relevant for future use.
Of course every company has staffers who have been around forever and have accrued a large amount of knowledge, but that’s not very efficient: they don’t know who needs to know something, or when, unless they’re asked (and if you don’t know that they know… you get the idea), and ultimately they can just quit, taking all that info with them.
Basically, humans are piss-poor methods of storage. We’re temperamental, buggy, and prone to crashing and losing the records forever.
I’ve been really thinking about this topic – using social media to store institutional memory – since I read this article, which referenced this 2005 story from CIO Magazine. The latter dealt mostly with securities compliance, but made one claim that absolutely gobsmacked me:
The average number of emails sent each day worldwide will hit 36.2 billion in 2006…The Enterprise Strategy Group reports that as much as 75 percent of most companies’ intellectual property is contained in the messages and attachments they send through their e-mail systems. [emphasis mine]
And guess what, folks? For most organizations, recalling that information is virtually impossible unless you choose to sift through the tens of thousands of emails sitting (hopefully) on your company server.
So that’s scary.
Institutional memory also came up, in a slightly different mode, in this post from Mathew Ingram’s work blog, in which he discussed a fake Digg post about Sony recalling 650,000 PlayStations. Once the ruse was discovered, Digg apparently removed the story from their site altogether – thus nullifying the promise of using social media to retain their (our) institutional memory.
Because there’s no record, they (and we) can’t learn from our collective mistake. I’m sure Digg will tweak their algorithms to try to ensure something like this doesn’t happen again – but you can’t make a fake story unhappen. How about at least allowing it to exist (corrected, reviled and underscored) as a case study, thereby helping to ensure similar mistakes are not repeated ad infinitum? Is that not part of the promise of social media?