This story starts within the human mind, but isn’t ultimately about humans. It’s about the organisms humans form when they work in groups. Because individual humans, despite what we may sometimes think, aren’t so different from other animals. What’s unique about humans is what emerges when we form groups.
Let’s talk brains. The precise mechanics of the brain aren’t completely known. However, we can say some things about their higher-order operation. We take raw input (from eyes, ears, etc.) and form a representation of that input that allows us to anticipate future input, including the feedback from actions we initiate. This internal representation is compressed, since memory is limited and expensive.
We have a word for a compressed representation that allows us to reason about and thus anticipate the future. It’s called a model. Mental models tend to be less precise than scientific or mathematical models, but their function is similar.
This is enough neuro theory for our purposes here, but I recommend A Thousand Brains by Jeff Hawkins if you’re interested in exploring this topic more, and from a neuroscientist’s frame of reference.
Speaking of frames of reference, let’s talk about communication. In Claude Shannon’s A Mathematical Theory of Communication, we learn that communication requires information to be encoded by the sender, transmitted over a channel, and then decoded by the recipient.
When we encode information derived from mental models, those models become a frame of reference. In order to subsequently decode the information, the recipient must, to some degree, adopt the frame of reference of the sender.
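The dependence of decoding on the sender’s frame can be sketched in code. This is a toy illustration (all names and the codebook are hypothetical, not from Shannon’s paper): the codebook plays the role of the frame of reference, and a recipient using a different codebook recovers a different message from the very same symbols.

```python
# Toy sketch: decoding only succeeds when the recipient shares the
# sender's frame of reference, represented here as a codebook.

def encode(message, codebook):
    """Map each word to a symbol using the sender's codebook."""
    return [codebook[word] for word in message.split()]

def decode(symbols, codebook):
    """Invert a codebook and map symbols back to words."""
    inverse = {symbol: word for word, symbol in codebook.items()}
    return " ".join(inverse[s] for s in symbols)

sender_frame = {"the": 0, "model": 1, "fits": 2}
symbols = encode("the model fits", sender_frame)

# Same frame: the message survives the round trip.
assert decode(symbols, sender_frame) == "the model fits"

# A different frame decodes the same symbols into something else entirely.
other_frame = {"a": 0, "map": 1, "lies": 2}
assert decode(symbols, other_frame) == "a map lies"
```

The symbols themselves carry no meaning; the meaning lives in the shared mapping, which is why the recipient must adopt the sender’s frame to decode at all.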
Of course, adopting a frame of reference for the purposes of understanding someone isn’t a permanent commitment to those mental models. However, if your mental models are in conflict with a given framing, you’re likely to experience cognitive dissonance (discomfort associated with holding contradictory beliefs).
So now consider a group of humans. The process of adopting frames and potentially experiencing dissonance is played out over and over in a multitude of ways from implicit modeled behavior to explicit logical arguments.
In general, people will seek to minimize their cognitive dissonance by either convincing others to adopt their frames, or by adopting the frames of others and updating their mental models.
Over time, this leads to alignment of the mental models. This set of models, aligned through dissonance minimization, is a culture, and the individual aligned models are cultural norms. Thus, we can call this process normativity.
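The convergence dynamic can be sketched as a toy simulation (all numbers illustrative, and a deliberate oversimplification): each person holds a numeric “mental model,” and at every step each moves slightly toward the group average, a crude stand-in for dissonance minimization.

```python
# Toy model of normativity: agents repeatedly nudge their models toward
# the group mean; the spread between models shrinks toward zero.

def normalize(models, rate=0.2, steps=50):
    """Move each model a fraction of the way toward the mean, repeatedly."""
    models = list(models)
    for _ in range(steps):
        mean = sum(models) / len(models)
        models = [m + rate * (mean - m) for m in models]
    return models

group = [0.0, 3.0, 7.0, 10.0]   # initially divergent mental models
norm = normalize(group)
spread = max(norm) - min(norm)
assert spread < 0.01  # the models have aligned into a shared norm
```

Note the mean itself never moves in this sketch; real normativity is messier, since frames also compete and some are more persuasive than others.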
Imagine performing a distributed computation. First, you break the problem down into slices that can be run on different machines, then send the instructions to those machines, which compute some output, and finally merge the outputs into your result. If the individual computers vary too much in the way they process the input, their output may be too noisy to merge.
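The split/compute/merge shape described above can be sketched in a few lines (a minimal single-process stand-in for real distributed machinery; the function names are mine):

```python
# Minimal split/compute/merge sketch: divide the work, have each
# "worker" compute a partial result, then combine the partials.
# The merge step is only valid because every worker processes its
# slice the same way -- the computational analogue of a shared norm.

def split(data, n_workers):
    """Break the problem into slices, one per worker."""
    return [data[i::n_workers] for i in range(n_workers)]

def worker(slice_):
    """Each machine computes a partial result (here, a sum)."""
    return sum(slice_)

def merge(partials):
    """Combine the partial outputs into the final result."""
    return sum(partials)

data = list(range(100))
partials = [worker(s) for s in split(data, 4)]
assert merge(partials) == sum(data)  # 4950
```

If one worker summed while another averaged, the partials would no longer compose, which is exactly the noise-in-merging failure the analogy points at.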
Similarly with humans. If we’re attempting to distribute a cognitive load, we must ensure that we can merge the results.
That is to say, normativity enables productive cooperation by establishing a common baseline for contribution.
This way of viewing normativity is potentially quite useful for explaining phenomena, which is ultimately the point of any model.
Take diversity for example. Including people with diverse experiences in a group forces norms to achieve compatibility with a wider sample of the world.
Or take organization structure. If we want to maximize the utility from an individual, we should normalize a clear vision of what needs to be done, and then maximize the autonomy of that individual to make decisions (perform computation) in a minimally overlapping domain, ensuring that we have processes to renormalize their work (feedback, source control, meetings, etc).
I’d like to flesh out the idea into a fuller worldview, inclusive of meme dynamics more broadly. This broader version could easily describe systems as varied as economics, politics, or marketing.