Polarist Craft

Design notes

Notes on how Polarist is built and why. Design decisions, the constraints behind them, and the reasoning we'd rather not hide.

01

One message, 20,000 characters.

That's about 3,600 words — roughly a longform magazine feature, seven single-spaced pages, or the transcript of a 25-minute talk. Long enough to develop a real thought; short enough that the conversation stays conversation-shaped rather than sliding into document-dump territory.

Polarist is built on a simple premise: you don't have to say everything at once. Splitting a long thought across several messages isn't a workaround — it's how dialogue works. Polarist remembers what you've said, so the next message can just keep going.

Messages you split across the same topic are threaded together automatically. You don't have to mark them, tag them, or remind the AI that this is still the same thread. It just works.

Why is the cap locale-aware?

Japanese and Korean messages are capped at 10,000 characters instead of 20,000. The reason is simple: one Japanese or Korean character carries roughly twice the information of one Latin letter, so a fair ceiling has to reflect the script's density. X/Twitter made this exact observation in 2017 — they doubled Latin-script tweets to 280 characters while keeping CJK tweets at 140. Polarist follows the same 2:1 ratio.

The detection runs on the actual content — not on cookies, not on your UI language, not on any header you could spoof. Paste a Japanese article into the English interface and you'll get the CJK cap; write English with the Japanese interface and you'll get the Latin cap. The boundary is the text itself.

Fine print on the detection

This is a coarse Unicode-range heuristic, not a real language identifier. Polarist counts Hiragana, Katakana, common Han ideographs, and Hangul syllables; if they make up 30% or more of the message, you get the CJK cap. Otherwise, you get the Latin cap. The 30% threshold handles mixed-script messages sensibly: English prose with a stray Chinese character stays well under it and keeps the Latin cap, while Japanese writing sprinkled with English loanwords stays well over it and gets the CJK cap.

Vietnamese, Arabic, Hebrew, Thai, and Devanagari all fall into the Latin bucket because they sit outside the ranges we count — the 20,000-character cap is still generous for these scripts in practice, but the classification is a deliberate simplification rather than true linguistic analysis.
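For the curious, the heuristic described above fits in a few lines. This is a sketch, not Polarist's actual code: the exact Unicode ranges and the function names are our assumptions here; only the counted scripts, the 30% threshold, and the two caps come from the notes above.

```python
# Sketch of a locale-aware cap heuristic (assumed ranges, not Polarist's real code).
CJK_RANGES = [
    (0x3040, 0x309F),  # Hiragana
    (0x30A0, 0x30FF),  # Katakana
    (0x4E00, 0x9FFF),  # CJK Unified Ideographs (common Han)
    (0xAC00, 0xD7A3),  # Hangul syllables
]

CJK_CAP = 10_000
LATIN_CAP = 20_000
CJK_THRESHOLD = 0.30  # 30% of characters must fall in the ranges above


def is_cjk_char(ch: str) -> bool:
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in CJK_RANGES)


def message_cap(text: str) -> int:
    """Pick the character cap from the message content alone."""
    if not text:
        return LATIN_CAP
    cjk = sum(1 for ch in text if is_cjk_char(ch))
    return CJK_CAP if cjk / len(text) >= CJK_THRESHOLD else LATIN_CAP
```

Note that the detection runs on content only, so `message_cap("こんにちは、世界")` returns the CJK cap while `message_cap("Hello, world")` returns the Latin one — and a script like Thai or Arabic, sitting outside every range in the list, falls through to the Latin bucket exactly as described.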

02

A conversation without sessions.

Most AI chat services start you from scratch every time you open a new conversation. There's a session — a container — and memory lives inside its walls. Step outside and the AI forgets you.

Polarist has no walls.

You keep talking. When the topic shifts, Polarist notices and keeps that thread separate. When you come back to a topic — tomorrow, next week, six months from now — Polarist pulls the relevant past back into view without being asked. You don't start a conversation; you just continue one.

This is the single most important thing to understand about Polarist: there is no "new chat" button, because there is no chat to be new. The conversation is one long thing, and it's been going since the day you signed up.

The memory system that makes this work is Polarist's own technology, and we're deliberately not explaining how it does the thing. Use it for a few days and the difference from a session-scoped chat will speak for itself. The "how" we'll write about another time.

03

One conversation, many models.

Free users talk to one model. Pro users talk to a smarter one. Both talk to Polarist — which is to say, both get the same memory, the same topic threading, the same everything that makes the conversation feel continuous. The model swap happens underneath, and it doesn't change who you're talking to.

This is a deliberate inversion of how most services position their tiers. "Pro = smarter AI" is the industry-standard framing, and Polarist does offer that — but the part that matters for the "remembers you" promise isn't the model, it's the memory layer that surrounds it. And the memory layer is identical on Free and Pro.

Which model powers your tier today is an implementation detail. Good models come and go; what Polarist commits to is "the best available for each tier, transparently swapped as the landscape changes." The model you're talking to tomorrow might not be the model you were talking to yesterday, and that's fine — because Polarist isn't any single model. It's the layer on top.

← Back to home