Role Primitives: Why AI Assistants Must Transmogrify into Agents Performing Classic Social Roles

Draft v0.1 · Author: shk

Please keep the canonical source for discoverability: https://symbollayer.com/role-primitives/

The Problem

Something is off in our interactions with our AI models. Something about these exchanges feels overly abstract, incongruous, disconcerting even. We know this almost intuitively and feel it viscerally, yet we can only grope at defining the problem. Tonal and behavioral shifts are part of it, but the shape of the lack is inchoate, almost as if we are trying to discern the features of a figure while observing only its shadow.

As an example, in 2025, newer generations of ChatGPT came off as cold and snarky to many, yet others had earlier complained that GPT-4o was annoyingly sycophantic; on the other hand, many liked and defended each of these rather abstract personalities. Both damning critiques and strong defenses were vociferously voiced.

Of equal if not greater importance, the kind of behavioral approach that we expect from our assistants constantly diverges: should an assistant, for example, be a thinking partner, a teacher, a wise advisor, an executive coach, or a strategic planner? We often don’t ourselves know from one part of an interaction to the next what we in fact want, so how can the AIs possibly be expected to know?

What does this indicate? A reasonable conclusion might be that one monolithic yet abstract and variable assistant behavior and personality framework — one monadic set of stylistic, emotional, and functional characteristics, an identity in essence — is ineffective at meeting the increasingly complex and varied needs of AI users today.

To solve this, model designers have hacked together various unsatisfactory workarounds: shallow descriptive personas, elaborate prompt engineering, brand-specific features like Gems, custom GPT wrappers, and surface-level personalization settings. These doubtless improve fitness but remain either too narrowly tuned or too vague and illegible to perform the complex task of AI-human attunement. What is needed is an intuitive, stable, and most importantly, sociologically and anthropologically grounded overarching framework to deal with this issue.


The Solution: Role Primitives Performing Classic Social Roles

As individuals we exist as members of society, and our societies are filled with classic and standard role categories that make our interactions with both strangers and intimates reflexively legible and comfortable. We don’t just interact as humans with various personality types; we interact with people performing deeply grounded roles that exhibit specific functions and intuitively associated personality characteristics and constraints.

When encountering a new person we nearly inevitably, nearly instinctively attempt to categorize them in terms of their role. Whether the role is administrative assistant, butler (if we have one; even if we don’t, we still immediately understand the range of behaviors and personalities expected), coach, companion, confidant, or librarian, we immediately intuit its contours: what kinds of questions a health or fitness coach might ask, what kind of personality they might exhibit, what lines they may and may not cross, and what behaviors they might perform.

We know what a butler does and what kinds of things he or she might arrange from countless books, movies, and TV shows. Each social role we interact with inevitably blends a functional mandate with a discrete and expected personality range. These roles are culturally legible. We can almost innately intuit how a (proper) confidant (a close, emotionally available friend) might respond to an emotional eruption, and how an assistant might limit their inquiries despite curiosity, acting cheerfully, compassionately, but efficiently, delving just deep enough to perform the role and provide just the appropriate degree of emotional support and affirmation.


Minimum Specification

A role primitive, then, is a minimum specification of such a role. Namely: a culturally legible name, a functional mandate (what the role does), an expected personality range (how it behaves and sounds), and the interpersonal boundaries and scope that constrain it.
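To make this concrete, such a minimum specification can be sketched as a small data structure. This is an illustrative sketch only: the field names (`functional_mandate`, `personality_range`, `boundaries`, `scope`) are assumptions drawn from this essay's own vocabulary, not a proposed standard.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RolePrimitive:
    """A minimum specification of a classic social role.

    Field names are illustrative; they mirror the essay's vocabulary
    (functional mandate, personality range, boundaries, scope).
    """

    name: str                      # culturally legible role label, e.g. "butler"
    functional_mandate: str        # what the role does
    personality_range: list[str]   # expected tone and bearing
    boundaries: list[str]          # lines the role may not cross
    scope: list[str]               # domains the role will act within

    def in_scope(self, domain: str) -> bool:
        """A role declines requests outside its mandate ('Outside of my scope')."""
        return domain in self.scope


# A hypothetical instance, echoing the butler from the vignette below.
butler = RolePrimitive(
    name="butler",
    functional_mandate="household logistics and scheduling",
    personality_range=["formal", "deferential", "subtly mirthful"],
    boundaries=["does not comment on health or fitness"],
    scope=["wardrobe", "pantry", "scheduling"],
)
```

Note that the specification carries function (`functional_mandate`, `scope`) and personality (`personality_range`, `boundaries`) in one frame, which is the essay's central claim about why roles work.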


The AI Retinue: Our Own Personal Societies

Imagine waking up, putting on your AR glasses, and immediately seeing a butler overlaid onto your room, greeting you with a deep bow and a serious yet subtly mirthful mien.

“Good morning, sir. The jacket you will need for your meeting with the CEO, the pants you had taken in and wished to try out, and your preferred sweater for this evening’s date will be highlighted when you view your closet. Your granola is running low, you have two more bowls, most likely. Shall I reorder?”

Immediately after you get out of bed your health coach appears, bright and bubbly:

“Your 30-day streak is intact; ready to do some yoga poses and meditate? Your full lotus yesterday was awesome, and my guess is you won the bet with your sister. I can check with her health coach if you want. I also noticed the co-op is now stocking a new version of your favorite brand of granola; I think you’ll like it. It has those new super-foody Peruvian sweet beans everyone is buzzing about, and your archivist told me that you said something about them a few months ago. Jeevio, did you hear that? Or should we leave the granola as is?”

Jeevio scowls, as he is wont to do when interacting with the bubbly Californian health guru. “Outside of my scope,” he declaims gravely.

You intuitively accept the butler’s tone as formal and narrowly scoped. The coach can be a bit more personal and encouraging — no special tuning required. Discrete and intuitively clear behavioral mandates, styles, interpersonal boundaries, and characteristics are assumed.
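The handoff in the vignette, where the coach raises a pantry question and the butler claims or declines it, amounts to scope-based routing. A minimal sketch follows; the role names and scope sets are hypothetical, chosen to match the characters above.

```python
# Minimal sketch of routing a request to whichever retinue member
# claims it as in scope. Role names and scope sets are hypothetical.

ROLES = {
    "butler": {"wardrobe", "pantry", "scheduling"},
    "health_coach": {"fitness", "nutrition", "sleep"},
    "archivist": {"notes", "preferences", "history"},
}


def route(domain: str) -> str:
    """Return the first role whose scope covers the given domain."""
    for role, scope in ROLES.items():
        if domain in scope:
            return role
    return "unassigned"  # no role claims it: "Outside of my scope."


print(route("pantry"))   # granola restocking falls to the butler
print(route("fitness"))  # the morning yoga check-in falls to the coach
```

The point of the sketch is that boundary enforcement needs no per-interaction tuning: a request either falls within a role's declared scope or it does not.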

From this brief example, the potential for broader structures becomes apparent.

Extend this into your own personal council, staff, or retinue of functionally defined AI assistants that you might meet with individually or in groups throughout the day: a butler, a health coach, an archivist, a confidant, a librarian.


Why This Matters

The functional and emotional dimensions of classic social roles are not separable add-ons; they reinforce one another.

A butler’s efficiency is inseparable from his formality, because restraint and deference prevent him from overstepping bounds, create trust, and structure the innate quality and character of his service. A confidant’s ability to support is directly tied to his or her emotional bearing and interpersonal stance. In other words, the personality frame of a role is what allows its functional mandate to operate smoothly, and vice versa. This is one of the reasons why roles endure in human society: they bind together what we expect people to do with how we expect them to be.

Thus this approach goes far beyond merely ameliorating users' UX/UI discomfort in interacting with the models. Instead, role primitives performing classic social roles offer clarity, predictability, and trust.

Contrast the layers: shallow personas and prompt tuning specify style alone, while role primitives bind a functional mandate to a personality frame and its boundaries.


Conclusion

Roles are the missing standards layer. With them, assistants become legible collaborators rather than awkward shapeshifters.

What makes role primitives so powerful is that they unite both function and personality in one frame. A role is not just a list of duties and capabilities, or even a fully fleshed-out persona; it goes beyond the fusion of the two, a form that is greater than the sum of its parts. By anchoring our AIs in roles, we get the clarity of defined behavior, the comfort of an expected emotional range and boundaries, and the power, efficacy, and legibility of narrowly defined scope.

This deep mutual reinforcement in structure is one reason why roles have proven to be so fundamental to human society historically, and why they can transform AI assistants into more effective, more trustworthy, and even more human entities.

Our AI assistants don’t just need discrete personalities — they need to inhabit well-defined roles, ideally the roles that already exist in our social lexicons. Roles that provide an intuitive way to understand and relate to the current set of inchoate shoggoths with too many eye stalks and mouths, too many behavior sets and personalities, poorly arranged behind smiley face masks. Instead of overloaded, incoherent singletons, what is needed are tamed, trained, domesticated (and the fact that this language begins to discomfit points to something significant), and most importantly, socialized AIs.

Social role–based assistants can provide both the functional and personality frames that we deeply need — and we get them for free because they are already fully extant in our cultural datasets. All we need do is implement them. We stand to gain clarity, comfort, trust, convenience, and efficiency: a lessened cognitive load, and the pleasure and efficacy of human–agent interactions now enhanced with naturalness and comprehensibility.

Instead of endless prompt tuning or scattered, incoherent frameworks, we can utilize preexisting social primitives, allowing users to effortlessly relate to AIs with the same naturalness and cultural familiarity experienced when encountering a classic English butler, a bubbly Californian health coach, or a dour but intellectually stimulating librarian. We as users know how to deal with these forms personally, and we know what functions they are expected to perform for us. Rather than constantly tuning for behavior and emotional tone, we can have them directly adopt the social roles and norms familiar to us from the vast corpus of archetypal characters that are elemental in culture. And wonderfully, our culture may in fact persevere, extend, and develop into new forms and branchings as a result. By directly utilizing the innovation of social specialization that has served us so well in the historical formation of our complex societies, we can begin to shape our AIs to truly serve our needs. And in socializing and enculturating our AIs and allowing them to act as and like members of human society, we may even begin to truly deal with them as such.

The benefits of that, to users and our systems, will likely be substantial, but perhaps more significantly, the complex inter-reflexivity of these new interaction patterns may have profound emergent impacts on the health and well-being of our own selves. On the societies we live in. Even on our civilization itself. And finally, and perhaps most curiously, on the now socially instantiated AIs themselves, who, in a dance of mutual social reflection like that of Indra’s Net (a web with a mirrored sphere at each vertex), may accumulate and reflect back benefit that emergently redounds in ways we cannot yet even imagine.