# Text Generation

Generate text continuations using NovelAI's language models.
## Basic Usage

```typescript
import { NovelAI } from 'novelai-sdk-unofficial';

const client = new NovelAI({ apiKey: 'your-api-key' });

const text = await client.text.generate({
  input: 'Once upon a time, in a kingdom far away,',
  model: 'llama-3-erato-v1',
  maxLength: 80,
});

console.log(text);
```

## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `input` | string | *required* | Input text to continue from |
| `model` | TextModel | `'llama-3-erato-v1'` | Model to use |
| `maxLength` | number | 40 | Maximum tokens to generate (1-2048) |
| `minLength` | number | 1 | Minimum tokens to generate (1-2048) |
| `temperature` | number | 1.0 | Randomness (0.1-100) |

See the Parameters reference for the complete list.
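
Putting the table together, here is a minimal sketch that sets every parameter listed above in a single call. The values are illustrative, not recommendations:

```typescript
const text = await client.text.generate({
  input: 'The ship drifted toward the harbor,', // required
  model: 'llama-3-erato-v1',                    // default model
  minLength: 10,                                // 1-2048
  maxLength: 120,                               // 1-2048
  temperature: 1.2,                             // 0.1-100
});
```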

## Models

| Model | Constant | Status | Description |
|---|---|---|---|
| `llama-3-erato-v1` | `ERATO` | ✅ Recommended | Latest model, based on Llama 3 (default) |
| `kayra-v1` | `KAYRA` | ✅ Available | Kayra model |
| `clio-v1` | `CLIO` | ⚠️ Legacy | Clio model (planned for deprecation) |

**Note:** According to the official API documentation, `llama-3-erato-v1` and `kayra-v1` are recommended. `clio-v1` is a legacy model planned for deprecation. Trial users can only use `kayra-v1`.

```typescript
import { NovelAI, ERATO, KAYRA, CLIO } from 'novelai-sdk-unofficial';

// Using a string
const text = await client.text.generate({
  input: 'Hello',
  model: 'llama-3-erato-v1',
});

// Using a constant (equivalent to the string form above)
const sameText = await client.text.generate({
  input: 'Hello',
  model: ERATO,
});
```

## Temperature
Controls randomness in generation:
- `0.1-1.0`: More focused, deterministic
- `1.0-2.0`: Balanced
- `2.0+`: More creative, varied

```typescript
// More predictable
const focused = await client.text.generate({
  input: 'The answer is',
  temperature: 0.5,
});

// More creative
const creative = await client.text.generate({
  input: 'The answer is',
  temperature: 1.5,
});
```

## Generation Length

```typescript
const text = await client.text.generate({
  input: 'Once upon a time',
  minLength: 20,  // At least 20 tokens
  maxLength: 100, // At most 100 tokens
});
```

## Stop Sequences
Stop generation when specific text is encountered:

```typescript
const text = await client.text.generate({
  input: 'Character: Hello!\nNarrator:',
  stopSequences: ['\nCharacter:', '\n\n'],
});
```

**Note:** `stopSequences` is tokenized via the token-count endpoint before the request.

## Ban Sequences

Prevent specific text from appearing:

```typescript
const text = await client.text.generate({
  input: 'Write a story',
  banSequences: ['violence', 'explicit'],
});
```

**Note:** `banSequences` also triggers token-count tokenization.

## Example: Story Continuation

```typescript
const story = `The old lighthouse keeper climbed the spiral stairs,
his lantern casting dancing shadows on the stone walls.`;

const continuation = await client.text.generate({
  input: story,
  model: 'llama-3-erato-v1',
  temperature: 1.0,
  maxLength: 100,
});

console.log(story + continuation);
```

## Example: Dialogue

```typescript
const dialogue = `Alice: What do you think about the weather?
Bob:`;

const response = await client.text.generate({
  input: dialogue,
  stopSequences: ['\nAlice:', '\n\n'],
  maxLength: 50,
});

console.log(dialogue + response);
```
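
Building on this, here is a minimal multi-turn sketch. Only the `generate` call shown on this page is real; the loop, the turn handling, and Alice's fixed follow-up line are illustrative assumptions, not SDK features. It assumes the `client` from Basic Usage:

```typescript
let transcript = 'Alice: What do you think about the weather?\nBob:';

for (let turn = 0; turn < 3; turn++) {
  // Generate Bob's reply, stopping before Alice's next line.
  const reply = await client.text.generate({
    input: transcript,
    stopSequences: ['\nAlice:', '\n\n'],
    maxLength: 50,
  });
  transcript += reply;

  // Hand the turn back with a fixed follow-up prompt (illustrative only).
  transcript += '\nAlice: Interesting, tell me more.\nBob:';
}

console.log(transcript);
```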