
# Text Generation

Generate text continuations using NovelAI's language models.

## Basic Usage

```typescript
import { NovelAI } from 'novelai-sdk-unofficial';

const client = new NovelAI({ apiKey: 'your-api-key' });

const text = await client.text.generate({
  input: 'Once upon a time, in a kingdom far away,',
  model: 'llama-3-erato-v1',
  maxLength: 80,
});

console.log(text);
```

## Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `input` | `string` | required | Input text to continue from |
| `model` | `TextModel` | `'llama-3-erato-v1'` | Model to use |
| `maxLength` | `number` | `40` | Maximum tokens to generate (1-2048) |
| `minLength` | `number` | `1` | Minimum tokens to generate (1-2048) |
| `temperature` | `number` | `1.0` | Randomness (0.1-100) |

See Parameters for the complete list.
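Since `maxLength`, `minLength`, and `temperature` each have documented valid ranges, it can help to clamp values before calling `generate`. A minimal sketch — the `clampParams` helper and the `GenParams` interface below are illustrative, not part of the SDK:

```typescript
// Illustrative helper (not part of the SDK): clamp generation
// parameters into the ranges documented in the table above.
interface GenParams {
  maxLength?: number;
  minLength?: number;
  temperature?: number;
}

function clampParams(params: GenParams): GenParams {
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(Math.max(v, lo), hi);
  return {
    ...params,
    maxLength:
      params.maxLength !== undefined ? clamp(params.maxLength, 1, 2048) : undefined,
    minLength:
      params.minLength !== undefined ? clamp(params.minLength, 1, 2048) : undefined,
    temperature:
      params.temperature !== undefined ? clamp(params.temperature, 0.1, 100) : undefined,
  };
}

console.log(clampParams({ maxLength: 5000, temperature: 0.05 }));
// maxLength is clamped down to 2048, temperature up to 0.1
```

The spread into the returned object lets extra options (for example `input` or `model`) pass through untouched.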

## Models

| Model | Constant | Status | Description |
| --- | --- | --- | --- |
| `llama-3-erato-v1` | `ERATO` | ✅ Recommended | Latest model, based on Llama 3 (default) |
| `kayra-v1` | `KAYRA` | ✅ Available | Kayra model |
| `clio-v1` | `CLIO` | ⚠️ Legacy | Clio model (planned for deprecation) |

Note: According to the official API documentation, llama-3-erato-v1 and kayra-v1 are recommended. clio-v1 is a legacy model planned for deprecation. Trial users can only use kayra-v1.

```typescript
import { NovelAI, ERATO, KAYRA, CLIO } from 'novelai-sdk-unofficial';

const client = new NovelAI({ apiKey: 'your-api-key' });

// Using a string literal
const fromString = await client.text.generate({
  input: 'Hello',
  model: 'llama-3-erato-v1',
});

// Using the exported constant
const fromConstant = await client.text.generate({
  input: 'Hello',
  model: ERATO,
});
```

## Temperature

Controls randomness in generation:

- 0.1-1.0: More focused, deterministic
- 1.0-2.0: Balanced
- 2.0+: More creative, varied

```typescript
// More predictable
const focused = await client.text.generate({
  input: 'The answer is',
  temperature: 0.5,
});

// More creative
const creative = await client.text.generate({
  input: 'The answer is',
  temperature: 1.5,
});
```

## Generation Length

```typescript
const text = await client.text.generate({
  input: 'Once upon a time',
  minLength: 20,   // At least 20 tokens
  maxLength: 100,  // At most 100 tokens
});
```

## Stop Sequences

Stop generation when specific text is encountered:

```typescript
const text = await client.text.generate({
  input: 'Character: Hello!\nNarrator:',
  stopSequences: ['\nCharacter:', '\n\n'],
});
```

Note: stopSequences is tokenized via the token-count endpoint before the request.
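To illustrate the semantics, the sketch below truncates text at the first occurrence of any stop sequence. This runs locally and is not how the SDK implements stopping — the API halts generation server-side — but the resulting output is equivalent:

```typescript
// Illustrative only: cut text at the earliest occurrence of any
// stop sequence, mirroring the server-side behavior.
function applyStopSequences(text: string, stops: string[]): string {
  let cut = text.length;
  for (const stop of stops) {
    const idx = text.indexOf(stop);
    if (idx !== -1 && idx < cut) cut = idx;
  }
  return text.slice(0, cut);
}

console.log(applyStopSequences(' Hi there!\nCharacter: Bye', ['\nCharacter:', '\n\n']));
// → " Hi there!"
```

Note that the stop sequence itself is not included in the returned text.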

## Ban Sequences

Prevent specific text from appearing:

```typescript
const text = await client.text.generate({
  input: 'Write a story',
  banSequences: ['violence', 'explicit'],
});
```

Note: banSequences also triggers token-count tokenization.

## Example: Story Continuation

```typescript
const story = `The old lighthouse keeper climbed the spiral stairs, 
his lantern casting dancing shadows on the stone walls.`;

const continuation = await client.text.generate({
  input: story,
  model: 'llama-3-erato-v1',
  temperature: 1.0,
  maxLength: 100,
});

console.log(story + continuation);
```

## Example: Dialogue

```typescript
const dialogue = `Alice: What do you think about the weather?
Bob:`;

const response = await client.text.generate({
  input: dialogue,
  stopSequences: ['\nAlice:', '\n\n'],
  maxLength: 50,
});

console.log(dialogue + response);
```

Released under the MIT License.