One of the major concerns with shared knowledge is privacy. Usually you want privacy to protect yourself from the rest of the world: to hide your weaknesses and prevent other people from using them against you.
I'm personally on the "I have nothing to hide" side. On the other hand, I accept that some people won't understand some of my decisions, actions or points of view, because they don't see the world the way I do. So I only share things when I'm asked (but I won't hide anything in that case).
But I realise the vast majority of people are not like me. People want to hide parts of their private lives and thoughts. IMO it's usually for psychological reasons: you're afraid of how you will be seen in the eyes of others (our personal information is never neutral to us). And I don't think that is going to change anytime soon (as I said, some people just don't want to face the truth about things, especially about themselves), so we have to take that into account. It's also interesting to note that a country faces exactly the same problem: it must keep its weaknesses private so that no one can use them against it. If mankind were trustworthy there wouldn't be any such problem. But we are not.
That's also why I think the world (Earth) would be better off if humans were not in charge of it. I'm thinking about autonomous/neutral robots here: machines for which information is neutral (neither good nor bad), systems that would not be inherently inclined to use the bad parts against another system (as humans do). But we are far from this possibility. And if we design robot "brains" on the human model of evolution, we will probably end up with the same problem... That's why I think robots should not be modeled on us. (But I'll talk later about the future of robot "brains".)