novelai-sdk-unofficial / TextGeneration
Class: TextGeneration
Defined in: src/client.ts:310
High-level text generation interface
Provides user-friendly methods for text generation with automatic parameter validation and processing.
Constructors
Constructor
new TextGeneration(client): TextGeneration
Defined in: src/client.ts:313
Parameters
client
Returns
TextGeneration
Methods
generate()
generate(params): Promise<string>
Defined in: src/client.ts:389
Generate text using the NovelAI API
This method provides a user-friendly interface with:
- Automatic parameter validation via Zod schemas
- Sensible default values for optional parameters
- camelCase to snake_case parameter conversion
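The camelCase to snake_case conversion mentioned above can be sketched as follows. This is a minimal illustration of the idea, not the SDK's actual implementation; `toSnakeCase` and `convertParams` are hypothetical names:

```typescript
// Convert a camelCase key to snake_case, e.g. "maxLength" -> "max_length".
function toSnakeCase(key: string): string {
  return key.replace(/([A-Z])/g, (m) => `_${m.toLowerCase()}`);
}

// Apply the conversion to every top-level key of a parameter object,
// preserving the values as-is.
function convertParams(params: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(params).map(([k, v]) => [toSnakeCase(k), v]),
  );
}
```

With this, a user-facing object like `{ maxLength: 80, topK: 40 }` becomes `{ max_length: 80, top_k: 40 }` before it is sent to the API.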
Parameters
params
User-friendly generation parameters
banSequences?
string[] = ...
Sequences to prevent in output
bracketBan?
boolean = ...
Ban bracket tokens
cfgScale?
number = ...
CFG scale. 0 = disabled
cfgUc?
string = ...
CFG negative prompt
generateUntilSentence?
boolean = ...
Generate until sentence end
input
string = ...
Input text context - the prompt to continue from
logitBias?
object[] = ...
Adjusts token probabilities
maxLength?
number = ...
Maximum tokens to generate (1-2048)
minLength?
number = ...
Minimum tokens to generate (1-2048)
minP?
number = ...
Min-P sampling threshold (0.0-1.0). 0 = disabled
mirostatLr?
number = ...
Mirostat learning rate (0.0-1.0)
mirostatTau?
number = ...
Mirostat tau parameter. 0 = disabled
model?
"llama-3-erato-v1" | "kayra-v1" | "clio-v1" = ...
Text generation model to use
numLogprobs?
number = ...
Number of logprobs to return (0-30)
order?
number[] = ...
Sampler order
phraseRepPen?
"off" | "very_light" | "light" | "medium" | "aggressive" | "very_aggressive" = ...
Phrase repetition penalty preset
prefix?
string = ...
Prefix/module to use
repetitionPenalty?
number = ...
Penalizes repeated tokens. 1.0 = disabled
repetitionPenaltyFrequency?
number = ...
Frequency-based repetition penalty (-16 to 16). 0 = disabled
repetitionPenaltyPresence?
number = ...
Presence-based repetition penalty (-16 to 16). 0 = disabled
repetitionPenaltyRange?
number = ...
Range for repetition penalty calculation (0-8192). 0 = full context
repetitionPenaltySlope?
number = ...
Repetition penalty slope (0-10). 0 = disabled
stopSequences?
string[] = ...
Sequences that stop generation when encountered
tailFreeSampling?
number = ...
Tail-free sampling parameter (0.0-1.0). 1.0 = disabled
temperature?
number = ...
Controls randomness (0.1-100). Higher = more random
topA?
number = ...
Top-a sampling threshold. 0 = disabled
topG?
number = ...
Top-G sampling (0-65536). 0 = disabled
topK?
number = ...
Limits vocabulary to top K tokens. 0 = disabled
topP?
number = ...
Nucleus sampling threshold (0.0-1.0). 1.0 = disabled
typicalP?
number = ...
Typical sampling threshold. 1.0 = disabled
unifiedConf?
number = ...
Unified Conf (entropy scale)
unifiedLinear?
number = ...
Unified Linear
unifiedQuad?
number = ...
Unified Quad
Returns
Promise<string>
Generated text string
Throws
ZodError if parameters fail validation
Example
const client = new NovelAI({ apiKey: 'your-api-key' });

// Simple generation
const text = await client.text.generate({
  input: 'Once upon a time',
});

// With options
const text2 = await client.text.generate({
  input: 'Once upon a time',
  model: 'llama-3-erato-v1',
  temperature: 1.2,
  maxLength: 80,
});

generateStream()
generateStream(params, signal?): AsyncGenerator<string, void, unknown>
Defined in: src/client.ts:436
Generate text with streaming output
Returns an async generator that yields text chunks as they are generated. Useful for displaying text progressively to users.
Parameters
params
User-friendly generation parameters
banSequences?
string[] = ...
Sequences to prevent in output
bracketBan?
boolean = ...
Ban bracket tokens
cfgScale?
number = ...
CFG scale. 0 = disabled
cfgUc?
string = ...
CFG negative prompt
generateUntilSentence?
boolean = ...
Generate until sentence end
input
string = ...
Input text context - the prompt to continue from
logitBias?
object[] = ...
Adjusts token probabilities
maxLength?
number = ...
Maximum tokens to generate (1-2048)
minLength?
number = ...
Minimum tokens to generate (1-2048)
minP?
number = ...
Min-P sampling threshold (0.0-1.0). 0 = disabled
mirostatLr?
number = ...
Mirostat learning rate (0.0-1.0)
mirostatTau?
number = ...
Mirostat tau parameter. 0 = disabled
model?
"llama-3-erato-v1" | "kayra-v1" | "clio-v1" = ...
Text generation model to use
numLogprobs?
number = ...
Number of logprobs to return (0-30)
order?
number[] = ...
Sampler order
phraseRepPen?
"off" | "very_light" | "light" | "medium" | "aggressive" | "very_aggressive" = ...
Phrase repetition penalty preset
prefix?
string = ...
Prefix/module to use
repetitionPenalty?
number = ...
Penalizes repeated tokens. 1.0 = disabled
repetitionPenaltyFrequency?
number = ...
Frequency-based repetition penalty (-16 to 16). 0 = disabled
repetitionPenaltyPresence?
number = ...
Presence-based repetition penalty (-16 to 16). 0 = disabled
repetitionPenaltyRange?
number = ...
Range for repetition penalty calculation (0-8192). 0 = full context
repetitionPenaltySlope?
number = ...
Repetition penalty slope (0-10). 0 = disabled
stopSequences?
string[] = ...
Sequences that stop generation when encountered
tailFreeSampling?
number = ...
Tail-free sampling parameter (0.0-1.0). 1.0 = disabled
temperature?
number = ...
Controls randomness (0.1-100). Higher = more random
topA?
number = ...
Top-a sampling threshold. 0 = disabled
topG?
number = ...
Top-G sampling (0-65536). 0 = disabled
topK?
number = ...
Limits vocabulary to top K tokens. 0 = disabled
topP?
number = ...
Nucleus sampling threshold (0.0-1.0). 1.0 = disabled
typicalP?
number = ...
Typical sampling threshold. 1.0 = disabled
unifiedConf?
number = ...
Unified Conf (entropy scale)
unifiedLinear?
number = ...
Unified Linear
unifiedQuad?
number = ...
Unified Quad
signal?
AbortSignal
Optional AbortSignal for cancellation
Returns
AsyncGenerator<string, void, unknown>
Yields
Text chunks as strings
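When you want the full text as well as progressive output, the yielded chunks can be accumulated while iterating. The sketch below uses a stand-in generator (`fakeStream`) rather than a real call, since calling the API requires a key; `collect` is a hypothetical helper name:

```typescript
// Stand-in for client.text.generateStream(); a real call needs an API key.
async function* fakeStream(): AsyncGenerator<string, void, unknown> {
  yield 'Once upon ';
  yield 'a time';
}

// Consume the generator, optionally displaying each chunk as it arrives,
// and return the concatenated result.
async function collect(
  stream: AsyncGenerator<string, void, unknown>,
): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    text += chunk; // e.g. process.stdout.write(chunk) for live display
  }
  return text;
}
```

The same pattern works unchanged with the real `client.text.generateStream(...)`, since both are async generators of strings.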
Example
const client = new NovelAI({ apiKey: 'your-api-key' });
const controller = new AbortController();

const stream = client.text.generateStream({
  input: 'Once upon a time',
  temperature: 1.0,
  maxLength: 100,
}, controller.signal);

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);

try {
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
} catch (error) {
  if (error.name === 'AbortError') {
    console.log('Generation cancelled');
  }
}