What constitutes an optimal communication architecture for finding out the 'truth'? Consider, for instance, people trying to estimate the reliability of a product or the probability that some event will occur. In such situations, nobody knows the truth for certain, but everyone holds some individual information. In modelling terms, each person starts with an initial belief, given by some random draw, that is more or less close to the truth. Over time, this belief is updated through communication in the social network. Two fundamental questions arise: (1) What kinds of social networks generate consensus, i.e. all people ending up with the same belief? (2) If there is consensus, how close is it to the truth?

The "Wisdom of Crowds" study by Golub and Jackson (2010) concludes that, even under a naïve form of individual learning, consensus is almost always reached. Moreover, the consensus equals the truth whenever the network is "democratic", in the sense that no individual holds a dominant position of influence in the network.

In our paper, we argue that this appealing wisdom-of-crowds result rests on the unfortunately unrealistic assumption that agents always update their beliefs in exactly the same way. This assumption stems from the usual way social network data are collected, where the strength or weight of a connection between two individuals combines frequency of interaction and influence. In reality, however, communication is random, so agents do not update their beliefs identically at every point in time. We show that even if the social network privileges no agent in terms of influence, a large society almost always fails to converge to the truth. We conclude that the wisdom of crowds is an elusive concept that reveals the danger of mistaking consensus for truth. Moreover, classic network measures fail to acknowledge that the consensus level is highly sensitive to early communication.
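The contrast can be sketched in a few lines of code. The sketch below assumes the simplest possible "democratic" network, a complete network with equal weights, and is an illustration rather than the model of either paper: `degroot_step` is a synchronous DeGroot-style update in which every agent averages over all current beliefs, while `random_listening_step` is a hypothetical asynchronous variant in which a single randomly chosen agent updates per period. The synchronous rule reaches a consensus equal to the initial average (close to the truth in a large society), whereas the asynchronous rule no longer preserves the average, so the consensus level becomes path-dependent.

```python
import random


def degroot_step(beliefs):
    """One synchronous DeGroot update on a complete, equal-weight
    network: every agent adopts the average of all current beliefs.
    On this network, consensus (the initial average) is reached in
    a single step."""
    avg = sum(beliefs) / len(beliefs)
    return [avg] * len(beliefs)


def random_listening_step(beliefs, rng):
    """Asynchronous variant (an assumption for illustration): one
    randomly chosen agent adopts the current average while everyone
    else keeps their belief. The average of beliefs is no longer
    preserved, so the eventual consensus depends on the realised
    order of communication, in particular on early updates."""
    new = list(beliefs)
    i = rng.randrange(len(beliefs))
    new[i] = sum(beliefs) / len(beliefs)
    return new


if __name__ == "__main__":
    truth = 0.0
    init_rng = random.Random(0)
    # Initial beliefs: the truth plus idiosyncratic noise.
    beliefs = [truth + init_rng.gauss(0, 1) for _ in range(100)]

    # Synchronous updating: consensus equals the initial average,
    # which is close to the truth for a large society.
    print("synchronous consensus:", degroot_step(beliefs)[0])

    # Asynchronous updating: different realised communication
    # orders (seeds) lead to different consensus levels.
    for seed in (1, 2):
        b = list(beliefs)
        rng = random.Random(seed)
        for _ in range(5000):
            b = random_listening_step(b, rng)
        print(f"asynchronous consensus (seed {seed}):", b[0])
```

Because each asynchronous update overwrites one belief with the current average, beliefs still contract to a consensus, but its level varies with the random communication order, which is the sense in which consensus can drift away from the truth even in a perfectly symmetric network.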