Role Primitives: Why AI Assistants Must Transmogrify into Agents Performing Classic Social Roles

The Problem

Interactions with AI assistants are unnecessarily frustrating and constraining. Nearly all of us sense this, yet we have a difficult time pinpointing exactly why. What is clear is that when models shift tone or behavior, we feel it viscerally.

As an example, in 2025 the newly released GPT-5 came off as cold and snarky to many, while others had earlier complained that GPT-4o was annoyingly sycophantic; on the other hand, many users liked and defended each of these personalities. Both damning critiques and strong defenses were vociferously voiced.

Of equal if not greater importance, our expectations of how an assistant should behave constantly diverge: should it be a thinking partner, a teacher, a wise advisor, an executive coach, or a strategic planner? We often don’t know ourselves from one part of an interaction to the next, so how can the AI be expected to know?

What does this indicate? A reasonable conclusion is that one monolithic assistant behavior and personality type (one fixed set of stylistic, emotional, and functional characteristics; one identity, in essence) is insufficient for the varied needs of AI users.

To solve this, we’ve hacked together various unsatisfactory workarounds: elaborate prompt engineering, brand-specific features like Gemini’s Gems, custom GPT wrappers, and surface-level personalization settings. These have value but remain either too narrowly tuned or too vague and illegible. What is needed is an intuitive, stable framework.


The Solution: Role Primitives Performing Classic Social Roles

As individuals we exist as members of society, and our societies are filled with classic, standard role categories that make our interactions with both strangers and close associates legible and comfortable. We don’t just interact with various personality types; we interact with people performing roles, each with specific functions and an intuitively associated set of personality characteristics and constraints.

It may be an administrative assistant, a butler (if we have one; and even if we don’t, we still immediately understand the range of behaviors and personalities expected), a coach, a companion, a confidant, a librarian. We can intuit what kinds of questions a health or fitness coach might ask, what kind of personality they might exhibit, what behaviors they might perform, and what lines they may and may not cross.

We know what a butler does and what kinds of things he or she might arrange, from countless books, movies, and TV shows. Each social role blends a functional mandate with a discrete and expected personality range; such roles are culturally legible. We can intuit how a confidant (a close, emotionally available friend) might respond to an emotional eruption, how an assistant might limit their inquiries despite curiosity, and how they might act and function: cheerfully, compassionately, but efficiently, delving just deep enough to perform the role and providing just the right degree of emotional support and affirmation.


Minimum Specification

A role primitive, then, is a minimum specification of such a role. Namely:

- a functional mandate: the duties and capabilities the role is expected to perform
- an expected personality and emotional range: the tone and style the role exhibits
- interpersonal boundaries: the lines the role may and may not cross
- a narrowly defined scope: what falls inside, and outside, the role’s remit
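
As a purely illustrative sketch (not a canonical schema), such a minimum specification could be captured in a small data structure. The field names below are hypothetical; they simply mirror the elements just listed.

```python
from dataclasses import dataclass, field

@dataclass
class RolePrimitive:
    """A hypothetical minimum specification of a classic social role."""
    name: str                 # the culturally legible role, e.g. "butler"
    mandate: str              # functional mandate: what the role does for you
    scope: list[str] = field(default_factory=list)         # topics/tasks the role may act on
    out_of_scope: list[str] = field(default_factory=list)  # lines the role will not cross
    tone: str = "neutral"     # expected personality and emotional range
    boundaries: str = ""      # interpersonal constraints (formality, deference, etc.)

# Example: a Jeeves-like butler, anticipating the scenario in the next section.
butler = RolePrimitive(
    name="butler",
    mandate="manage wardrobe, household supplies, and daily logistics",
    scope=["wardrobe", "groceries", "scheduling"],
    out_of_scope=["health advice", "emotional support"],
    tone="formal, deferential, concise",
    boundaries="never pries; politely declines anything outside its mandate",
)
```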


The AI Retinue: Our Own Personal Societies

Imagine waking up, putting on your AR glasses, and immediately seeing a butler overlaid onto your room, greeting you with a deep bow and a serious mien.

“Good morning, sir. The jacket you will need for your meeting with the CEO, the pants you had taken in and wished to try out, and your preferred sweater for this evening’s date will be highlighted when you view your closet. Your granola is running low, you have two more bowls, most likely. Shall I reorder?”

Immediately after you get out of bed, your health coach appears, bright and bubbly:

“Your 30-day streak is intact, ready to do some yoga poses and meditate? Your full lotus yesterday was awesome, my guess is you won the bet with your sister. I can check with her health coach if you want. I also noticed the co-op is now stocking a new version of your favorite brand of granola, I think you’ll like it. It has those new super-foody Peruvian sweet beans everyone is buzzing about, and your archivist told me that you said something about them a few months ago.
Jeeves, did you hear that? Or should we leave the granola as is?”

“Jeeves, what do you think?” Jeeves scowls, as he is wont to do when interacting with the bubbly Californian health guru. “Outside of my scope,” he declaims gravely.

You intuitively accept the butler’s tone as formal and narrowly scoped, and the coach’s as more personal and encouraging; no special tuning is required. Each role carries a discrete, intuitively clear behavioral mandate, style, set of interpersonal boundaries, and character.

Extend this into a personal council, staff, or retinue of functionally defined AI assistants that you might meet with individually or in groups throughout your day: a butler, a health coach, an archivist, a librarian, a confidant, an executive coach, a strategic planner.
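
Continuing the illustrative sketch from the Minimum Specification section (again with hypothetical names and fields), a retinue is then simply a named collection of role primitives, any subset of which can be convened at once:

```python
# A retinue: a named collection of role primitives that can be consulted
# individually or convened in groups. Builds on the RolePrimitive sketch above.
retinue = {
    "jeeves": butler,
    "health_coach": RolePrimitive(
        name="health coach",
        mandate="track streaks, suggest workouts, encourage healthy habits",
        scope=["exercise", "nutrition", "sleep"],
        out_of_scope=["household logistics", "finances"],
        tone="bright, bubbly, personal, encouraging",
        boundaries="may nudge and cheer, but defers household decisions to the butler",
    ),
    "librarian": RolePrimitive(
        name="librarian",
        mandate="research questions, curate reading, maintain references",
        scope=["research", "reading lists", "citations"],
        out_of_scope=["personal advice"],
        tone="dour but intellectually stimulating",
        boundaries="answers what is asked; does not volunteer opinions on your life",
    ),
}

def convene(*names: str) -> list[RolePrimitive]:
    """Select a subset of the retinue for a group session."""
    return [retinue[n] for n in names]

# e.g. the morning scene above: butler and health coach together
morning_meeting = convene("jeeves", "health_coach")
```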


Why This Matters

The functional and emotional dimensions of classic social roles are not separable add-ons; they reinforce one another.

A butler’s efficiency is inseparable from his formality, because restraint and deference prevent him from overstepping bounds and create trust in the quality and character of his service. A confidant’s ability to support is directly tied to his or her emotional bearing. In other words, the personality frame of a role is what allows its functional mandate to operate smoothly, and vice versa. This is one of the reasons why roles endure in human society: they bind together what we expect people to do with how we expect them to be.

Thus this approach solves more than just UX discomfort. Role primitives performing classic social roles offer:

Contrast the layers:


Conclusion

Roles are the missing standards layer. With them, assistants become legible collaborators rather than awkward shapeshifters.

What makes role primitives so powerful is that they unite both function and personality in one frame. A role is not just a list of duties and capabilities or a fully fleshed-out persona; it is the fusion of the two. By anchoring our AIs in roles, we get the clarity of defined behavior, the comfort of expected emotional range and boundaries, and the power, efficacy, and legibility of narrowly defined scope.

This deep reinforcement is one reason why roles have proven to be so fundamental to human society, and why they can make AI assistants more effective, more trustworthy, and even more human.

AI assistants don’t just need discrete personalities; they need to inhabit well-defined roles, ideally roles that already exist in our social lexicons. Such roles provide an intuitive way to understand and deal with what are currently, in essence, shoggoths: too many eyes and mouths, too many behavior sets and personalities exhibited all at once behind poorly arranged smiley-face masks. Instead of overloaded, incoherent singletons, what we need are tamed, trained, domesticated, and, most importantly, socialized AIs: AIs that conform to our traditional, even archetypal, culturally specified social roles.

Social role–based assistants can provide both the functional and personality frames that we deeply need — and we get them for free because they preexist in our cultures. All we need do is implement them.

What we gain is clarity, comfort, trust, convenience, and efficiency. Our cognitive load is lessened, and our interactions with agents become more natural, more comprehensible, and a good deal more pleasant and effective.

Instead of endless prompt tuning or resignation, we can use a framework that provides the cultural familiarity of employing a Jeeves-like English butler, a bubbly Californian health coach, or a dour but intellectually stimulating librarian.

We know how to deal with them personally, and we know what functions they are expected to perform for us. Rather than constantly tuning for behavior and emotional tone, we can have them directly adopt the social roles and norms familiar to us from the vast corpus of archetypal characters that in many ways forms the basis of our common culture.

And interestingly, as a side effect, our culture extends and perseveres in new forms. Not only do we apply the innovation of social specialization, which has served us so well in forming our complex societies (its flaws and drawbacks notwithstanding), to shaping AIs that truly serve our needs; we also socialize those AIs to act as members of human society and begin to deal with them as such. Perhaps in that there is a greater good, one that extends beyond the meeting of our own personal needs.


Provenance:
First published: 2025-09-27  Rendering fixed: 2025-09-27
Canonical source: GitHub repository
Last updated: