
Anthropic publishes the ‘system prompts’ that make Claude tick

Generative AI models aren’t actually humanlike. They have no intelligence or personality; they’re simply statistical systems predicting the likeliest next words in a sentence. But like interns at a tyrannical workplace, they do follow instructions without complaint, including the initial “system prompts” that prime the models with their basic qualities and with what they should and shouldn’t do.

Every generative AI vendor, from OpenAI to Anthropic, uses system prompts to prevent (or at least try to prevent) models from behaving badly, and to steer the general tone and sentiment of the models’ replies. For instance, a prompt might tell a model it should be polite but never apologetic, or to be honest about the fact that it can’t know everything. (A short code sketch at the end of this article shows how such a prompt is supplied through a model’s API.)

But vendors usually keep system prompts close to the chest, presumably for competitive reasons, but perhaps also because knowing the system prompt may suggest ways to circumvent it. The only way to expose GPT-4o’s system prompt, for example, is through a prompt injection attack, and even then the system’s output can’t be trusted completely.

However, Anthropic, in its continued effort to paint itself as a more ethical, transparent AI vendor, has published the system prompts for its latest models (Claude 3.5 Sonnet, Claude 3 Opus and Claude 3 Haiku) in the Claude iOS and Android apps and on the web.

Alex Albert, head of Anthropic’s developer relations, said in a post on X that Anthropic plans to make this sort of disclosure a regular thing as it updates and fine-tunes its system prompts.

The latest prompts, dated July 12, outline very clearly what the Claude models can’t do, e.g. “Claude cannot open URLs, links, or videos.” Facial recognition is a big no-no; the system prompt for Claude 3 Opus tells the model to “always respond as if it is completely face blind” and to “avoid identifying or naming any humans in [images].”

But the prompts also describe certain personality traits and characteristics that Anthropic would have the Claude models exemplify.

The prompt for Opus, for instance, says that Claude is to appear as if it “[is] very smart and intellectually curious” and “enjoys hearing what humans think on an issue and engaging in discussion on a wide variety of topics.” It also instructs Claude to treat controversial topics with impartiality and objectivity, providing “careful thoughts” and “clear information,” and never to begin responses with the words “certainly” or “absolutely.”

To this human, these system prompts read a little strangely; they’re written the way an actor in a stage play might write a character analysis sheet. The prompt for Opus ends with “Claude is now being connected with a human,” which gives the impression that Claude is some sort of consciousness on the other end of the screen whose only purpose is to fulfill the whims of its human conversation partners.

But of course that’s an illusion. If the prompts for Claude tell us anything, it’s that without human guidance and hand-holding, these models are frighteningly blank slates.

With these new system prompt changelogs, the first of their kind from a major AI vendor, Anthropic is exerting pressure on competitors to publish the same. We’ll have to see if the gambit works.
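For readers curious what this looks like in practice, here is a minimal sketch of how a system prompt is supplied to a model, using Anthropic’s official Python SDK. The prompt text and the user message below are invented for illustration; they are not Anthropic’s published wording.

# A minimal sketch, assuming the official `anthropic` Python SDK
# (pip install anthropic) and an ANTHROPIC_API_KEY environment variable.
# The system prompt here is invented for illustration, not Anthropic's
# actual published prompt.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=256,
    # The system parameter primes the model before any user turn,
    # much like the published prompts described above.
    system=(
        "You are polite but never apologetic, and you are honest "
        "about the limits of your knowledge."
    ),
    messages=[
        {"role": "user", "content": "Can you open this URL for me?"},
    ],
)

print(response.content[0].text)

Because the system prompt sits outside the user’s messages, a vendor (or an app developer) can steer tone and set guardrails without the end user ever seeing, or easily overriding, that text.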


Adnen Hamouda

Software and web developer, network engineer, and tech blogger passionate about exploring the latest technologies and sharing insights with the community.
