The Economist has an article on a paper concerning data transparency.
The paper uses a natural experiment involving meetings of the Federal Open Market Committee (FOMC), the panel that sets interest rates. In 1993 the Fed, pressed by Congress to make its proceedings less opaque, not only promised to publish transcripts of the FOMC’s future meetings, but also opened up its archive. Transcripts of earlier meetings had never been intended for public release; members of the FOMC were not even aware that such records had been retained.
The transcripts from before 1993 show an FOMC unburdened by the knowledge that its deliberations would one day be on the record. Conversely, from 1993 onwards all participants knew that their input would eventually be made public. The new study compares the two periods to see what impact the new transparency had.
Using a linguistic algorithm, the authors identify discussions of economic policy (as opposed to administrative matters, say) and analyse how they changed after 1993. Newer members in particular did behave differently, talking more about economic issues and citing more data when doing so—suggesting that the increased transparency had induced them to mug up on their briefing notes. But the publication of transcripts also seemed to inhibit policy discussions. Less experienced members asked fewer questions, made fewer statements and were more likely to follow the chairman’s lead. That, the authors assume, is because they were unsure of themselves, and did not want to advocate policies that might later backfire. A committee can be too open, it seems.
This is very interesting from both a technical and a meta point of view. On the technical side, the paper is 52 pages long, and I haven’t had the chance to read it in full, but it looks like a nifty application of some standard text mining. Cool stuff!
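To give a flavour of what such an analysis might involve: here is a toy sketch, not the paper’s actual method, of the kind of text mining described above: labelling transcript passages as policy discussion versus administrative chatter with simple keyword matching, and counting quantitative references as a crude proxy for “citing data”. The keyword list and regex are my own illustrative assumptions.

```python
import re

# Illustrative keyword list (an assumption, not the paper's classifier).
POLICY_TERMS = {"inflation", "unemployment", "rates", "gdp", "growth"}

# Crude proxy for a "data citation": a number followed by a unit.
DATA_PATTERN = re.compile(r"\b\d+(?:\.\d+)?\s*(?:percent|basis points)\b", re.I)

def classify(passage: str) -> str:
    """Label a passage 'policy' if it contains any policy keyword."""
    words = set(re.findall(r"[a-z]+", passage.lower()))
    return "policy" if words & POLICY_TERMS else "administrative"

def count_data_citations(passage: str) -> int:
    """Count quantitative references (e.g. '2.5 percent') in a passage."""
    return len(DATA_PATTERN.findall(passage))

transcript = [
    "Inflation rose 2.5 percent last quarter; unemployment is at 6 percent.",
    "Let's schedule the next meeting for Tuesday.",
]
print([classify(p) for p in transcript])          # ['policy', 'administrative']
print([count_data_citations(p) for p in transcript])  # [2, 0]
```

With per-speaker passages, comparing these counts before and after 1993 would be a (much simplified) version of the comparison the authors make.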
As to the meta side: I have no formal training in psychology, but I’m pretty sure this is a well-understood phenomenon. The Economist draws the unfortunate conclusion that “[a] committee can be too open.” I can think of nothing* that a government does that shouldn’t be free and open to public scrutiny, and that’s for exactly the reason that people (in this case, governmental employees) change their behaviour when they are being watched. Call it the Heisenberg Uncertainty Principle of Openness,** if you will. The act of opening data changes the state of the observed parties generating that data.
But it also has a dark side: now that everybody knows that everything they do on-line is being kept by the NSA, and that even those who try to keep prying eyes out of their digital lives are considered suspect by the NSA, self-censorship, even unconscious self-censorship, is taking place. How does this self-censorship of one’s behaviour on-line bleed into real life? Do we check what we say in a public space for fear that some “If you see something, say something” cowboy is sitting at the next table? How does the knowledge that everything is being monitored keep the populace docile? Evidence of its happening has just been found at the FOMC; how does it look for the part of the population that tries to keep the growth of governmental power in check?
Glenn Greenwald shows how it’s affecting the ability of the Fourth Estate to do its duty.
*With the possible exception of some weapons systems.
**Yes, I am aware that, strictly speaking, this is the Observer Effect and not the Uncertainty Principle, but I’m going with the way the term is more commonly used now.