
@promptbook/utils

webgptorg · 3.5m · CC-BY-4.0 · 0.103.0-2

Promptbook: Turn your company's scattered knowledge into AI ready books

ai, ai-agents, ai-application-framework, ai-assistant


✨ Promptbook: AI Agents

Turn your company's scattered knowledge into AI ready Books


🌟 New Features

  • 🚀 GPT-5 Support - Now includes OpenAI's most advanced language model with unprecedented reasoning capabilities and 200K context window
  • 💡 VS Code support for .book files with syntax highlighting and IntelliSense
  • 🐳 Official Docker image (hejny/promptbook) for seamless containerized usage
  • 🔥 Native support for OpenAI o3-mini, GPT-4 and other leading LLMs
  • 🔍 DeepSeek integration for advanced knowledge search
⚠ Warning: This is a pre-release version of the library. It is not yet ready for production use. Please look at latest stable release.

📦 Package @promptbook/utils

To install this package, run:

# Install entire promptbook ecosystem
npm i ptbk

# Install just this package to save space
npm install @promptbook/utils

Comprehensive utility functions for text processing, validation, normalization, and LLM input/output handling in the Promptbook ecosystem.

🎯 Purpose and Motivation

The utils package provides a rich collection of utility functions that are essential for working with LLM inputs and outputs. It handles common tasks like text normalization, parameter templating, validation, and postprocessing, eliminating the need to implement these utilities from scratch in every promptbook application.

🔧 High-Level Functionality

This package offers utilities across multiple domains:

  • Text Processing: Counting, splitting, and analyzing text content
  • Template System: Secure parameter substitution and prompt formatting
  • Normalization: Converting text to various naming conventions and formats
  • Validation: Comprehensive validation for URLs, emails, file paths, and more
  • Serialization: JSON handling, deep cloning, and object manipulation
  • Environment Detection: Runtime environment identification utilities
  • Format Parsing: Support for CSV, JSON, XML validation and parsing

✨ Key Features

  • 🔒 Secure Templating - Prompt injection protection with template functions
  • 📊 Text Analysis - Count words, sentences, paragraphs, pages, and characters
  • 🔄 Case Conversion - Support for kebab-case, camelCase, PascalCase, SCREAMING_CASE
  • Comprehensive Validation - Email, URL, file path, UUID, and format validators
  • 🧹 Text Cleaning - Remove emojis, quotes, diacritics, and normalize whitespace
  • 📦 Serialization Tools - Deep cloning, JSON export, and serialization checking
  • 🌐 Environment Aware - Detect browser, Node.js, Jest, and Web Worker environments
  • 🎯 LLM Optimized - Functions specifically designed for LLM input/output processing

Simple templating

The prompt template tag function helps format prompt strings for LLM interactions. It handles string interpolation, maintains consistent formatting for multiline strings and lists, and also provides security measures against prompt injection.

import { prompt } from '@promptbook/utils';

const promptString = prompt`
    Correct the following sentence:

    > ${unsecureUserInput}
`;

The name prompt could be overloaded by multiple things in your code. If that happens, use promptTemplate, which is an alias for prompt:

import { promptTemplate } from '@promptbook/utils';

const promptString = promptTemplate`
    Correct the following sentence:

    > ${unsecureUserInput}
`;
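Under the hood, prompt and promptTemplate are standard JavaScript tagged template functions. The following is a minimal illustrative sketch of how such a tag can join chunks and trim common indentation; it is not the actual implementation (which also applies prompt-injection protection), and promptSketch is a hypothetical name:

```typescript
// Minimal sketch of a prompt-formatting template tag.
// NOT the real `prompt` implementation – the real one also guards
// interpolated values against prompt injection.
function promptSketch(strings: TemplateStringsArray, ...values: unknown[]): string {
    // Join the literal chunks with the interpolated values
    let joined = strings[0];
    values.forEach((value, i) => {
        joined += String(value) + strings[i + 1];
    });

    // Drop leading and trailing blank lines
    const lines = joined.split('\n');
    while (lines.length && lines[0].trim() === '') lines.shift();
    while (lines.length && lines[lines.length - 1].trim() === '') lines.pop();

    // Remove the common indentation
    const indents = lines.filter((l) => l.trim() !== '').map((l) => l.length - l.trimStart().length);
    const indent = indents.length ? Math.min(...indents) : 0;
    return lines.map((l) => l.slice(indent)).join('\n');
}

const user = 'world';
promptSketch`
    Hello, ${user}!
`; // 'Hello, world!'
```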

Advanced templating

The templateParameters function replaces the parameters in a given template and is optimized for LLM prompt templates.

import { templateParameters } from '@promptbook/utils';

templateParameters('Hello, {name}!', { name: 'world' }); // 'Hello, world!'

It also works with multiline templates and blockquotes:

import { templateParameters, spaceTrim } from '@promptbook/utils';

templateParameters(
    spaceTrim(`
        Hello, {name}!

        > {answer}
    `),
    {
        name: 'world',
        answer: spaceTrim(`
            I'm fine,
            thank you!

            And you?
        `),
    },
);

// Hello, world!
//
// > I'm fine,
// > thank you!
// >
// > And you?
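A minimal sketch of how such {parameter} replacement can work. The real templateParameters additionally handles the blockquote re-indentation shown above and richer value types; templateParametersSketch is a hypothetical name:

```typescript
// Illustrative sketch only – the real templateParameters also re-indents
// multiline values inside blockquotes and supports non-string values.
function templateParametersSketch(template: string, parameters: Record<string, string>): string {
    return template.replace(/\{(\w+)\}/g, (match: string, name: string) => {
        if (!(name in parameters)) {
            throw new Error(`Parameter {${name}} is not defined`);
        }
        return parameters[name];
    });
}

templateParametersSketch('Hello, {name}!', { name: 'world' }); // 'Hello, world!'
```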

Counting

These functions count stats about the input/output in human-like terms (not tokens and bytes). You can use countCharacters, countLines, countPages, countParagraphs, countSentences and countWords:

import { countWords } from '@promptbook/utils';

console.log(countWords('Hello, world!')); // 2

Splitting

Splitting functions are similar to the counting functions, but they return the split parts of the input/output. You can use splitIntoCharacters, splitIntoLines, splitIntoPages, splitIntoParagraphs, splitIntoSentences and splitIntoWords:

import { splitIntoWords } from '@promptbook/utils';

console.log(splitIntoWords('Hello, world!')); // ['Hello', 'world']
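As a rough sketch, word splitting can be done with a Unicode-aware regular expression; the exported splitIntoWords is more robust, and splitIntoWordsSketch below is a hypothetical name:

```typescript
// Naive sketch – splits on anything that is not a letter or digit.
function splitIntoWordsSketch(text: string): string[] {
    return text.split(/[^\p{L}\p{N}]+/u).filter((word) => word !== '');
}

splitIntoWordsSketch('Hello, world!'); // ['Hello', 'world']
```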

Normalization

Normalization functions put a string into a normalized form. Supported naming conventions include kebab-case, camelCase, PascalCase, SCREAMING_CASE and snake_case:

import { normalizeTo } from '@promptbook/utils';

console.log(normalizeTo['kebab-case']('Hello, world!')); // 'hello-world'
  • There are more normalization functions like capitalize, decapitalize, removeDiacritics,...
  • These can be also used as postprocessing functions in the POSTPROCESS command in promptbook

Postprocessing

Sometimes you need to postprocess the output of the LLM model. Every postprocessing function that is available through the POSTPROCESS command in promptbook is exported from @promptbook/utils.

Very often you will use unwrapResult, which extracts the result you need from output that contains additional surrounding text:

import { unwrapResult } from '@promptbook/utils';

unwrapResult('Best greeting for the user is "Hi Pavol!"'); // 'Hi Pavol!'
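Conceptually, unwrapResult looks for the quoted payload inside the model's answer. A simplified sketch (the real function handles more quote styles and edge cases; unwrapResultSketch is a hypothetical name):

```typescript
// Simplified sketch – returns the first double- or single-quoted substring,
// or the trimmed input when there is nothing to unwrap.
function unwrapResultSketch(answer: string): string {
    const match = answer.match(/"([^"]*)"|'([^']*)'/);
    return match?.[1] ?? match?.[2] ?? answer.trim();
}

unwrapResultSketch('Best greeting for the user is "Hi Pavol!"'); // 'Hi Pavol!'
```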

📦 Exported Entities

Version Information

  • BOOK_LANGUAGE_VERSION - Current book language version
  • PROMPTBOOK_ENGINE_VERSION - Current engine version

Configuration Constants

  • VALUE_STRINGS - Standard value strings
  • SMALL_NUMBER - Small number constant

Visualization

  • renderPromptbookMermaid - Render promptbook as Mermaid diagram

Error Handling

  • deserializeError - Deserialize error objects
  • serializeError - Serialize error objects

Async Utilities

  • forEachAsync - Async forEach implementation

Format Validation

  • isValidCsvString - Validate CSV string format
  • isValidJsonString - Validate JSON string format
  • jsonParse - Safe JSON parsing
  • isValidXmlString - Validate XML string format
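A minimal sketch of how such JSON helpers typically behave, assuming isValidJsonString never throws and jsonParse fails with a readable error; check the package source for the exact semantics:

```typescript
// Sketch with assumed semantics – the exported functions may differ in detail.
function isValidJsonStringSketch(value: string): boolean {
    try {
        JSON.parse(value);
        return true;
    } catch {
        return false;
    }
}

function jsonParseSketch<T = unknown>(value: string): T {
    try {
        return JSON.parse(value) as T;
    } catch (error) {
        // Include the offending string so the failure is easy to debug
        throw new Error(`Can not parse JSON: ${(error as Error).message}\n\n${value}`);
    }
}
```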

Template Functions

  • prompt - Template tag for secure prompt formatting
  • promptTemplate - Alias for prompt template tag

Environment Detection

  • $getCurrentDate - Get current date (side effect)
  • $isRunningInBrowser - Check if running in browser
  • $isRunningInJest - Check if running in Jest
  • $isRunningInNode - Check if running in Node.js
  • $isRunningInWebWorker - Check if running in Web Worker
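Checks like these usually boil down to feature tests on globals. An illustrative sketch (the exported $ functions may use additional signals; the $ prefix marks functions that read global state):

```typescript
// Illustrative feature tests – not the package's actual implementation.
function $isRunningInNodeSketch(): boolean {
    const proc = (globalThis as any).process;
    return typeof proc !== 'undefined' && !!proc?.versions?.node;
}

function $isRunningInBrowserSketch(): boolean {
    const g = globalThis as any;
    return typeof g.window !== 'undefined' && typeof g.document !== 'undefined';
}
```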

Text Counting and Analysis

  • CHARACTERS_PER_STANDARD_LINE - Characters per standard line constant
  • LINES_PER_STANDARD_PAGE - Lines per standard page constant
  • countCharacters - Count characters in text
  • countLines - Count lines in text
  • countPages - Count pages in text
  • countParagraphs - Count paragraphs in text
  • splitIntoSentences - Split text into sentences
  • countSentences - Count sentences in text
  • countWords - Count words in text
  • CountUtils - Utility object with all counting functions
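The two constants suggest that countPages derives a page count from character counts rather than real layout. A plausible sketch, where the constant values are assumptions (the exported constants may differ):

```typescript
// Assumed values – check the exported CHARACTERS_PER_STANDARD_LINE and
// LINES_PER_STANDARD_PAGE for the real numbers.
const CHARACTERS_PER_STANDARD_LINE = 60;
const LINES_PER_STANDARD_PAGE = 30;

function countPagesSketch(text: string): number {
    const charactersPerPage = CHARACTERS_PER_STANDARD_LINE * LINES_PER_STANDARD_PAGE;
    // At least one page, even for empty text
    return Math.max(1, Math.ceil(text.length / charactersPerPage));
}
```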

Text Normalization

  • capitalize - Capitalize first letter
  • decapitalize - Decapitalize first letter
  • DIACRITIC_VARIANTS_LETTERS - Diacritic variants mapping
  • string_keyword - Keyword string type (type)
  • Keywords - Keywords type (type)
  • isValidKeyword - Validate keyword format
  • nameToUriPart - Convert name to URI part
  • nameToUriParts - Convert name to URI parts
  • string_kebab_case - Kebab case string type (type)
  • normalizeToKebabCase - Convert to kebab-case
  • string_camelCase - Camel case string type (type)
  • normalizeTo_camelCase - Convert to camelCase
  • string_PascalCase - Pascal case string type (type)
  • normalizeTo_PascalCase - Convert to PascalCase
  • string_SCREAMING_CASE - Screaming case string type (type)
  • normalizeTo_SCREAMING_CASE - Convert to SCREAMING_CASE
  • normalizeTo_snake_case - Convert to snake_case
  • normalizeWhitespaces - Normalize whitespace characters
  • orderJson - Order JSON object properties
  • parseKeywords - Parse keywords from input
  • parseKeywordsFromString - Parse keywords from string
  • removeDiacritics - Remove diacritic marks
  • searchKeywords - Search within keywords
  • suffixUrl - Add suffix to URL
  • titleToName - Convert title to name format
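The case converters share one idea: split the input into lowercase words, then re-join them with the convention's separator. An illustrative sketch (function names here are hypothetical, and the real normalizers also handle diacritics and more edge cases):

```typescript
// Illustrative sketch – splits text into lowercase words.
function toWords(text: string): string[] {
    return text
        .replace(/([a-z0-9])([A-Z])/g, '$1 $2') // break camelCase boundaries
        .split(/[^a-zA-Z0-9]+/)
        .filter((word) => word !== '')
        .map((word) => word.toLowerCase());
}

const toKebabCase = (text: string) => toWords(text).join('-');
const toSnakeCase = (text: string) => toWords(text).join('_');
const toScreamingCase = (text: string) => toWords(text).join('_').toUpperCase();
const toCamelCase = (text: string) =>
    toWords(text)
        .map((word, i) => (i === 0 ? word : word[0].toUpperCase() + word.slice(1)))
        .join('');

toKebabCase('Hello, world!'); // 'hello-world'
toCamelCase('Hello, world!'); // 'helloWorld'
```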

Text Organization

  • spaceTrim - Trim spaces while preserving structure

Parameter Processing

  • extractParameterNames - Extract parameter names from template
  • numberToString - Convert number to string
  • templateParameters - Replace template parameters
  • valueToString - Convert value to string
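extractParameterNames can be sketched as scanning the template for {parameter} placeholders (the actual return type and edge-case handling may differ from the Set used here):

```typescript
// Sketch – collects the unique parameter names found in a template.
function extractParameterNamesSketch(template: string): Set<string> {
    const names = new Set<string>();
    const pattern = /\{(\w+)\}/g;
    let match: RegExpExecArray | null;
    while ((match = pattern.exec(template)) !== null) {
        names.add(match[1]);
    }
    return names;
}

extractParameterNamesSketch('Hello, {name}! How is {mood}, {name}?');
// Set { 'name', 'mood' }
```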

Parsing Utilities

  • parseNumber - Parse number from string

Text Processing

  • removeEmojis - Remove emoji characters
  • removeQuotes - Remove quote characters

Serialization

  • $deepFreeze - Deep freeze object (side effect)
  • checkSerializableAsJson - Check if serializable as JSON
  • clonePipeline - Clone pipeline object
  • deepClone - Deep clone object
  • exportJson - Export object as JSON
  • isSerializableAsJson - Check if object is JSON serializable
  • jsonStringsToJsons - Convert JSON strings to objects

Set Operations

  • difference - Set difference operation
  • intersection - Set intersection operation
  • union - Set union operation
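The set helpers mirror standard set algebra. A sketch under the assumption that they operate on Sets (the actual signatures may differ):

```typescript
// Sketch of the three classic set operations.
const unionSketch = <T>(a: Set<T>, b: Set<T>): Set<T> => new Set([...a, ...b]);
const intersectionSketch = <T>(a: Set<T>, b: Set<T>): Set<T> =>
    new Set([...a].filter((item) => b.has(item)));
const differenceSketch = <T>(a: Set<T>, b: Set<T>): Set<T> =>
    new Set([...a].filter((item) => !b.has(item))); // items in a but not in b
```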

Code Processing

  • trimCodeBlock - Trim code block formatting
  • trimEndOfCodeBlock - Trim end of code block
  • unwrapResult - Extract result from wrapped output
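trimCodeBlock strips the Markdown code fences that LLMs often wrap around generated code. A simplified sketch (the real function covers more fence variants):

```typescript
// Sketch – removes a surrounding ``` fence (with optional language tag).
function trimCodeBlockSketch(text: string): string {
    const match = text.trim().match(/^```[a-z]*\n([\s\S]*?)\n?```$/i);
    return match ? match[1] : text.trim();
}

trimCodeBlockSketch('```json\n{ "a": 1 }\n```'); // '{ "a": 1 }'
```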

Validation

  • isValidEmail - Validate email address format
  • isRootPath - Check if path is root path
  • isValidFilePath - Validate file path format
  • isValidJavascriptName - Validate JavaScript identifier
  • isValidPromptbookVersion - Validate promptbook version
  • isValidSemanticVersion - Validate semantic version
  • isHostnameOnPrivateNetwork - Check if hostname is on private network
  • isUrlOnPrivateNetwork - Check if URL is on private network
  • isValidPipelineUrl - Validate pipeline URL format
  • isValidUrl - Validate URL format
  • isValidUuid - Validate UUID format
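Validators like these are typically thin wrappers over regular expressions or the platform URL parser. An illustrative sketch with simplified patterns (the exported validators are stricter):

```typescript
// Simplified sketches – not the package's actual validation rules.
const isValidUuidSketch = (value: string): boolean =>
    /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(value);

const isValidUrlSketch = (value: string): boolean => {
    try {
        new URL(value); // throws on malformed URLs
        return true;
    } catch {
        return false;
    }
};
```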

💡 This package provides utility functions for promptbook applications. For the core functionality, see @promptbook/core or install all packages with npm i ptbk


Rest of the documentation is common for entire promptbook ecosystem:

📖 The Book Whitepaper

For most business applications nowadays, the biggest challenge isn't about the raw capabilities of AI models. Large language models like GPT-5 or Claude-4.1 are extremely capable.

The main challenge is to narrow it down, constrain it, set the proper context, rules, knowledge, and personality. There are a lot of tools which can do exactly this. On one side, there are no-code platforms which can launch your agent in seconds. On the other side, there are heavy frameworks like Langchain or Semantic Kernel, which can give you deep control.

Promptbook takes the best from both worlds. You define your AI's behavior in simple books, which are very explicit. They are automatically enforced, yet very easy to understand, very easy to write, and very reliable and portable.

Paul Smith & Associés Book

Aspects of great AI agent

We have created a language called Book, which allows you to write AI agents in their native language and create your own AI persona. Book provides a guide to define all the traits and commitments.

You can look at it as prompting (or writing a system message), but decorated by commitments.

Persona commitment

Personas define the character of your AI persona, its role, and how it should interact with users. It sets the tone and style of communication.


Knowledge commitment

Knowledge Commitment allows you to provide specific information, facts, or context that the AI should be aware of when responding.

This can include domain-specific knowledge, company policies, or any other relevant information.

Promptbook Engine will automatically enforce this knowledge during interactions. When the knowledge is short enough, it will be included directly in the prompt. When it is too long, it will be stored in a vector database and retrieved via RAG when needed. Either way, you don't need to care about it.
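The routing decision described above can be sketched as follows; the threshold, names and shapes here are purely illustrative and are not the engine's actual API:

```typescript
// Conceptual sketch of knowledge routing – illustrative only.
type KnowledgeSource = { content: string };

function routeKnowledge(sources: KnowledgeSource[], promptBudgetInCharacters: number) {
    const totalLength = sources.reduce((sum, { content }) => sum + content.length, 0);
    if (totalLength <= promptBudgetInCharacters) {
        // Short enough – inline directly into the prompt
        return { strategy: 'INLINE_IN_PROMPT' as const, sources };
    }
    // Too long – store in a vector database and retrieve on demand (RAG)
    return { strategy: 'RAG_RETRIEVAL' as const, sources };
}
```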


Rule commitment

Rules will enforce specific behaviors or constraints on the AI's responses. This can include ethical guidelines, communication styles, or any other rules you want the AI to follow.

Depending on rule strictness, Promptbook will either propagate it to the prompt or use other techniques, like adversary agent, to enforce it.


Action commitment

Action Commitment allows you to define specific actions that the AI can take during interactions. This can include things like posting on a social media platform, sending emails, creating calendar events, or interacting with your internal systems.


Read more about the language

Where to use your AI agent in book

Books can be useful in various applications and scenarios. Here are some examples:

Chat apps:

Create your own chat shopping assistant and place it in your eShop. You will be able to answer customer questions, help them find products, and provide personalized recommendations. Everything is tightly controlled by the book you have written.

Reply Agent:

Create your own AI agent, which will look at your emails and reply to them. It can even create drafts for you to review before sending.

Coding Agent:

Do you love Vibecoding, but the AI code is not always aligned with your coding style and architecture, rules, security, etc.? Create your own coding agent to help enforce your specific coding standards and practices.

This can be integrated into almost any Vibecoding platform, like GitHub Copilot, Amazon CodeWhisperer, Cursor, Cline, Kilocode, Roocode,...

They will work the same as you are used to, but with your specific rules written in book.

Internal Expertise

Do you have an app written in TypeScript, Python, C#, Java, or any other language, into which you are integrating AI?

You can avoid struggling with choosing the best model and its settings (temperature, max tokens, etc.) by writing a book agent and using it as your AI expertise.

It doesn't matter whether you do automations, data analysis, customer support, sentiment analysis, classification, or any other task. Your AI agent will be tailored to your specific needs and requirements.

Even works in no-code platforms!

How to create your AI agent in book

Now you want to use it. There are several ways how to write your first book:

From scratch with help from Paul

We have written an AI assistant in Book who can help you with writing your first book.

Your AI twin

Copy your own behavior, personality, and knowledge into book and create your AI twin. It can help you with your work, personal life, or any other task.

AI persona workpool

Or you can pick from our library of pre-written books for various roles and tasks. You can find books for customer support, coding, marketing, sales, HR, legal, and many other roles.

🚀 Get started

Take a look at the simple starter kit with books integrated into the Hello World sample applications:

💜 The Promptbook Project

The Promptbook project is an ecosystem of multiple projects and tools. The following is a list of the most important pieces of the project:

Project | About
Book language | Book is a human-understandable markup language for writing AI applications such as chatbots, knowledge bases, agents, avatars, translators, automations and more. There is also a plugin for VS Code that supports the .book file extension.
Promptbook Engine | The Promptbook engine runs applications written in the Book language. It is released as multiple NPM packages and a Docker Hub image.
Promptbook Studio | Promptbook.studio is a web-based editor and runner for book applications. It is still in the experimental MVP stage.

Hello world examples:

🌐 Community & Social Media

Join our growing community of developers and users:

Platform Description
💬 Discord Join our active developer community for discussions and support
🗣️ GitHub Discussions Technical discussions, feature requests, and community Q&A
👔 LinkedIn Professional updates and industry insights
📱 Facebook General announcements and community engagement
🔗 ptbk.io Official landing page with project information

🖼️ Product & Brand Channels

Promptbook.studio

📸 Instagram @promptbook.studio Visual updates, UI showcases, and design inspiration

📘 Book Language Blueprint

⚠ This file is a work in progress and may be incomplete or inaccurate.

Book is a simple format to define AI apps and agents. It is the source code, the soul, of AI apps and agents. Its purpose is to avoid both ambiguous UIs with multiple fields and low-level approaches like programming in Langchain.

Book is defined in a file with the .book extension.

Examples

Write an article about {topic}


Make a post on LinkedIn based on @Input.


Reply to an email


Analyze {Case}



Basic Commitments:

Book is composed of commitments, which are the building blocks of the book. Each commitment defines a specific task or action to be performed by the AI agent. The commitments are defined in a structured format, allowing for easy parsing and execution.

PERSONA

defines basic contour of

PERSONA @Joe Average man with

also the PERSONA is

Describes

RULE or RULES

defines

STYLE

xxx

SAMPLE

xxx

KNOWLEDGE

xxx

EXPECT

xxx

FORMAT

xxx

JOKER

xxx

MODEL

xxx

ACTION

xxx

META

Names

each commitment is

PERSONA

Variable names

Types

Miscellaneous aspects of Book language

Named vs Anonymous commitments

Single line vs multiline

Bookish vs Non-bookish definitions


____

Great context and a great prompt can make or break your AI app. In the last few years we have come a long way from simple one-shot prompts. When you wanted to add complexity, you fine-tuned the model or added better orchestration. But with really large language models, context seems to be king.

The Book is the language to describe and define your AI app. It's like a shem for a Golem: the book is the shem and the model is the golem.

Franz Kafka Book

Who, what and how?

To write a good prompt and a good book, you will be answering 3 main questions:

  • Who is working on the task? Is it a team or an individual? What is the role of the person in the team? What is their background? What is their motivation to work on this task? You rather want Paul, a TypeScript developer who prefers SOLID code, than gemini-2
  • What
  • How

Each commitment (described below) is connected with one of these 3 questions.

Commitments

A commitment is one piece of a book; you can imagine it as one paragraph of the book.

Each commitment starts on a new line with the commitment name, usually in UPPERCASE, followed by the contents of that commitment. The contents of the commitment are written in natural language.

Commitments are chained one after another. In general, commitments written later are more important and redefine things defined earlier.

Each commitment falls into one or more of the categories who, what or how.

Here are some basic commitments:

  • PERSONA tells who is working on the task
  • KNOWLEDGE describes what knowledge the person has
  • GOAL describes what is the goal of the task
  • ACTION describes what actions can be done
  • RULE describes what rules should be followed
  • STYLE describes how the output should be presented

Variables and references

For a prompt (and a book) to be useful, it should have a fixed static part and a variable dynamic part.


Imports

Layering

Book defined in book


Book vs:

  • Why not just pick the right model?
  • Orchestration frameworks - Langchain, Google Agent ..., Semantic Kernel,...
  • Fine-tuning
  • Temperature, top_p, top_k,... etc.
  • System message
  • MCP server
  • Function calling

📚 Documentation

See detailed guides and API reference in the docs or online.

🔒 Security

For information on reporting security vulnerabilities, see our Security Policy.

📦 Packages (for developers)

This library is divided into several packages, all published from a single monorepo. You can install all of them at once:

npm i ptbk

Or you can install them separately:

⭐ Marked packages are worth trying first

📚 Dictionary

The following glossary is used to clarify certain concepts:

General LLM / AI terms

  • Prompt drift is a phenomenon where the AI model starts to generate outputs that are not aligned with the original prompt. This can happen due to the model's training data, the prompt's wording, or the model's architecture.
  • Pipeline, workflow scenario or chain is a sequence of tasks that are executed in a specific order. In the context of AI, a pipeline can refer to a sequence of AI models that are used to process data.
  • Fine-tuning is a process where a pre-trained AI model is further trained on a specific dataset to improve its performance on a specific task.
  • Zero-shot learning is a machine learning paradigm where a model is trained to perform a task without any labeled examples. Instead, the model is provided with a description of the task and is expected to generate the correct output.
  • Few-shot learning is a machine learning paradigm where a model is trained to perform a task with only a few labeled examples. This is in contrast to traditional machine learning, where models are trained on large datasets.
  • Meta-learning is a machine learning paradigm where a model is trained on a variety of tasks and is able to learn new tasks with minimal additional training. This is achieved by learning a set of meta-parameters that can be quickly adapted to new tasks.
  • Retrieval-augmented generation is a machine learning paradigm where a model generates text by retrieving relevant information from a large database of text. This approach combines the benefits of generative models and retrieval models.
  • Longtail refers to non-common or rare events, items, or entities that are not well-represented in the training data of machine learning models. Longtail items are often challenging for models to predict accurately.

Note: This section is not a complete dictionary; it is rather a list of general AI / LLM terms that have a connection with Promptbook.

💯 Core concepts

Advanced concepts

Data & Knowledge Management Pipeline Control
Language & Output Control Advanced Generation

🔍 View more concepts

🚂 Promptbook Engine

Schema of Promptbook Engine

➕➖ When to use Promptbook?

➕ When to use

  • When you are writing an app that generates complex things via LLM - like websites, articles, presentations, code, stories, songs,...
  • When you want to separate code from text prompts
  • When you want to describe complex prompt pipelines and don't want to do it in the code
  • When you want to orchestrate multiple prompts together
  • When you want to reuse parts of prompts in multiple places
  • When you want to version your prompts and test multiple versions
  • When you want to log the execution of prompts and backtrace the issues

See more

➖ When not to use

  • When you have already implemented a single simple prompt and it works fine for your job
  • When OpenAI Assistant (GPTs) is enough for you
  • When you need streaming (this may be implemented in the future, see discussion).
  • When you need to use something other than JavaScript or TypeScript (other languages are on the way, see the discussion)
  • When your main focus is on something other than text - like images, audio, video, spreadsheets (other media types may be added in the future, see discussion)
  • When you need to use recursion (see the discussion)

See more

🐜 Known issues

🧼 Intentionally not implemented features

❔ FAQ

If you have a question start a discussion, open an issue or write me an email.

📅 Changelog

See CHANGELOG.md

📜 License

This project is licensed under BUSL 1.1.

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

You can also ⭐ star the project, follow us on GitHub or various other social networks. We are open to pull requests, feedback, and suggestions.

🆘 Support & Community

Need help with Book language? We're here for you!

We welcome contributions and feedback to make Book language better for everyone!

changelog

📅 Changelog

  • Allow passing a chat thread into OpenAiAssistantExecutionTools via the prompt.thread property.
    This enables multi-message conversations and aligns thread handling with OpenAiExecutionTools.
    [2025-10-14]

[Unreleased]

  • Allow passing a chat thread into createExecutionToolsFromVercelProvider via the prompt.thread property.
    This enables multi-message conversations and aligns thread handling with OpenAiAssistantExecutionTools.
    [2025-10-14]
  • Allow passing a chat thread into AnthropicClaudeExecutionTools via the prompt.thread property.
    This enables multi-message conversations and aligns thread handling with OpenAiExecutionTools.
    [2025-10-14]
  • <Chat /> now renders math expressions (inline $...$ and block $$...$$) in messages using KaTeX for proper display.
  • Enhanced renderMarkdown utility to support math rendering.
  • Added dependencies: katex, @types/katex.

Released versions

0.102.0 (2025-10-14)

  • <Chat /> input now matches isMe bubble color with automatic text contrast and unified color logic; action buttons and placeholder adapt accordingly.
  • Code blocks and blockquotes restyled for consistency and readability; tables improved with higher contrast and DRY styling.
  • <Chat> shows feedback button only if onFeedback prop is provided; new “Chat with feedback” preview.
  • Added Save icon to Chat’s “Save” button via new <SaveIcon> component.
  • Added “Rich Formatting Showcase” chat scenario demonstrating all markdown and HTML features.
  • <Chat> now supports children prop; new preview added.
  • BookEditorPreview loads samples dynamically from backend endpoints /books and /books/examples.
  • Added copy button support (isCopyButtonEnabled) to chat messages with plain/markdown copy options.
  • <Chat> now renders markdown/HTML tables as styled, responsive tables with safe HTML handling.
  • OpenAiCompatibleExecutionTools now strips unsupported parameters on all calls; thread support added across prompt and tool layers.
  • LlmChat passes full thread to LLM tools for multi-turn context.
  • <BookEditor> optimized for large books with debounced, virtualized rendering.
  • Added Chat export formats: PDF, HTML, and Markdown with consistent DRY formatting and Promptbook footer.
  • ChatMessage.isComplete defaults to true; improved error reporting and auto-retry on unsupported parameters.
  • Added file upload support to <Chat> (drag & drop, preview, icons, immediate input insert).
  • Implemented DELETE commitment invalidation logic.
  • Promptbook server UI rebuilt with React + Tailwind (/ route), toggled by isRichUi option or CLI flag.
  • Preserved text selection in chat components during message updates.
  • MockedChat now includes predefined delay configs and UI selector.
  • <Chat> gains multi-format “Download” button via extensible save plugin system.

0.101.0 (2025-10-03)

Agent tools, Book 2.0 enhancements, component improvements

  • Add AgentLlmExecutionTools with predefined agent "soul"
  • Add createAgentLlmExecutionTools factory function
  • Agent tools automatically pick best model from available models
  • Parse metadata commitments (META IMAGE, META LINK, etc.) in parseAgentSource
  • All commitment definitions support singular and plural forms
  • Add new commitment types: GOAL, MEMORY, MESSAGE, SCENARIO, DELETE with aliases
  • Enhanced MODEL commitment with multi-line named parameter format
  • Add COMMENT and NONCE aliases for NOTE commitment
  • Syntax highlighting for NOTE commitments (comment-like appearance)
  • Chat component accepts extraActions prop for custom action buttons
  • Add pausing capability to MockedChat with isPausable prop
  • Add isResettable prop to MockedChat (replaces isResetShown)
  • Add useSendMessageToLlmChat hook for programmatic message sending
  • Add initialMessages prop to LlmChat for seeding chat history
  • Add predefined message buttons to Chat component
  • Add isFooterShown prop to BookEditor component
  • Unified parameter syntax highlighting for @Parameter and {parameterName}
  • OpenAiCompatibleExecutionTools handles "Unsupported value" parameter errors automatically
  • Refactor createAgentModelRequirements to use preparePersona directly
  • Remove centralized LLM_PROVIDER_PROFILES registry and colocate profiles with providers
  • Remove cache from createAgentModelRequirements function
  • Fix BookEditor syntax highlighting false positives
  • Fix Chat component loading issue with avatar images
  • Fix Next.js bundling crash with prettier
  • Export markdown utilities: removeMarkdownLinks, humanizeAiText, promptbookifyAiText
  • Add <Chat isAiTextCleaned> and isBorderRadiusDisabled props
  • Convert all interface declarations to type for consistency
  • <Chat/> can be read-only
  • Remove unused draft expectation utilities

0.100.0 (2025-08-)

Adding Book 2.0 features

  • Adding features for Agent definition for Book 2.0
  • 🚀 GPT-5 Support - Added OpenAI's most advanced language model with unprecedented reasoning capabilities and 200K context window as the new default chat model
  • Make package @promptbook/components with first component <BookEditor/>
  • Convert BookEditor component to use CSS modules instead of inline styles for better maintainability and package distribution
  • Enhance reporting of failed tasks
  • Remove max tokens default cap
  • Remove AnthropicClaudeExecutionTools.callCompletionModel (to avoid unnecessary maintenance)
  • Task contains tldr for displaying in UI
  • Improve tldr progress estimation based on pipeline structure instead of fake simulation
  • Create @promptbook/color package
  • New: Created AvatarProfile and AvatarProfileFromSource components
  • New: Added profile property to LlmExecutionTools type for chat interface integration
  • New: Created shared LLM provider profiles utility with predefined visual identities for all major providers
  • New: Updated LlmChat component to use provider profiles for consistent branding and visual representation
  • Fixed: Intermittent ECONNRESET build failures in tests by implementing retry logic with exponential backoff for network errors in LLM API calls
  • Refactored BookEditor: split into outer and inner components, with the inner rendered inside the shadow DOM.
  • BookEditor now highlights the first line in the editor.
  • Removed nonce workaround from BookEditor; rendering is now stable without nonce.
  • Added AvatarChip component preview and registration in ComponentPreview.tsx.
  • Enhanced chat interfaces with provider-specific visual identities including colors, names, and avatars
  • Added comprehensive test suite for LLM provider profiles
  • Enhance the build and deploy process for new versions of Promptbook

  • 🕕 Updated all LLM models and pricing - comprehensive update of all model providers with the latest models and current pricing:

    • OpenAI: Added GPT-5 family (GPT-5, GPT-5 mini, GPT-5 nano), GPT-4.1 family (GPT-4.1, GPT-4.1 mini, GPT-4.1 nano), O3 family (o3, o3-pro, o4-mini), and deep research models (o3-deep-research, o4-mini-deep-research). Updated pricing for all models to reflect current rates.
    • Anthropic: Added Claude 4 family (Claude Opus 4.1, Claude Opus 4, Claude Sonnet 4) and Claude 3.7 models (Claude Sonnet 3.7, Claude Haiku 3.5, Claude 3.7 Haiku). Updated pricing to current rates.
    • Google: Added Gemini 2.5 family (Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.5 Flash Lite) and Gemini 2.0 family (Gemini 2.0 Flash, Gemini 2.0 Flash Lite). Updated pricing and model descriptions.
    • DeepSeek: Updated to latest DeepSeek V3, DeepSeek R1 (reasoning model), and DeepSeek Coder V2 with current pricing reflecting significant cost reductions.
    • Ollama: Added latest Llama 3.3, Llama 3.2, and Llama 3.1 models with enhanced capabilities and larger context windows.
    • All model descriptions updated with accurate context window sizes, capabilities, and performance characteristics
    • Deprecated models marked appropriately while maintaining backward compatibility
    • Pricing updated to reflect current market rates as of August 2025

0.98.0 (2025-06-)

Promptbook server has (experimental) compatibility with OpenAI API

  • You can call book personas as any other OpenAI model
  • Alongside OpenAiExecutionTools and OpenAiAssistantExecutionTools, add OpenAiCompatibleExecutionTools with registration of its configuration and constructor
  • Log all failed results, not just the last result
  • Do not cache failed results and bring DEFAULT_MAX_EXECUTION_ATTEMPTS down to 7
  • Make gpt-4-turbo the default "vanilla" chat model of OpenAiExecutionTools

0.95.0 (2025-05-21)

Spell checking and grammar

  • Rename @promptbook/wizzard -> @promptbook/wizard
  • Add npm run spellcheck command to publishing pipeline

0.94.0 (2025-05-21)

Integration of local models

  • OpenAI compatibility layer
  • Make @promptbook/ollama package
  • AvailableModel has pricing information
  • Better reporting of progress

0.93.0 (2025-05-14)

Enhance the presentation of the Promptbook

✨ First release mainly managed by AI

0.92.0 (2025-05-13)

Models and Migrations and processing big tables

  • Models are picked by description
  • During pipeline preparation, not a single model is picked; instead, all models relevant to the task are sorted by relevance
  • Make real RAG over knowledge
  • Remove "(boilerplate)" from model names
  • Sort model providers by relevance
  • Export utility function filterModels from @promptbook/core
  • All OpenAI models contain description
  • All Anthropic models contain description
  • All DeepSeek models contain description
  • All Google models contain description
  • Fix remote server POST /login
  • Update and fix all status codes and responses in openapi
  • Migrate JSON.parse -> jsonParse (preparation for formats)
  • Migrate papaparse.parse -> csvParse (preparation for formats)
  • Rename FormatDefinition -> FormatParser
  • Limit rate of requests to models
  • Autoheal \r in CsvFormatParser (formerly CsvFormatDefinition)
  • Add getIndexedDbStorage
  • Pipeline migrations
  • Add formfactor COMPLETION which emulates Completion variant of the model
  • Add JSDoc annotations to all entities which are exported from any package
  • When processing more than 50 values, if many items pass but some fail, use "~" for each failed value and just log the error to the console
  • Fix OpenAI pricing
  • Fix LLM cache
  • Add title and promptbookVersion to ExecutionTask
  • Cache getLocalStorage, getSessionStorage and getIndexedDbStorage
  • Pass databaseName and storeName into getIndexedDbStorage
  • Fix AzureOpenAiExecutionTools
  • Add maxRequestsPerMinute to LLM provider boilerplate configurations
  • ✨Auto-enhance model providers, try autonomous agent to work on Promptbook
  • ✨Auto-fix grammar and typos

0.90.0 and 0.91.0 were skipped

0.89.0 (2025-04-15)

User system and spending of credits

  • Update typescript to 5.2.2
  • Remote server requires the root URL /; to run multiple services on the same server, use a 3rd- or 4th-level subdomain
  • [🌬] Make websocket transport work
  • Allow to pass custom execution tools to promptbook server
  • CLI can be connected to Promptbook remote server
    • Allow to specify BRING_YOUR_OWN_KEYS / REMOTE_SERVER in cli commands ptbk run, ptbk make, ptbk list-models and ptbk start-server
  • CLI can login to Promptbook remote server via username + password and store the token
  • Add login to application mode on remote server
  • Add User token to application mode on remote server
  • Rename countTotalUsage -> countUsage and add spending()
  • Rename PromptResultUsage -> Usage
  • Delete OpenAiExecutionTools.createAssistantSubtools
  • RemoteServer exposes httpServer, expressApp and socketIoServer - you can add custom routes and middlewares
  • Adding OpenAPI specification and Swagger to remote server
  • @types/* imports are moved to devDependencies
  • Rename remoteUrl -> remoteServerUrl
  • Rename DEFAULT_REMOTE_URL -> DEFAULT_REMOTE_SERVER_URL
  • Remove DEFAULT_REMOTE_URL_PATH (it will be always socket.io)
  • rootPath is not required anymore
  • Rename types PromptbookServer_Identification -> Identification
  • Change scraperFetch -> promptbookFetch and add PromptbookFetchError
  • Better error handling in entire Promptbook engine
  • Catch non-error throws and wrap + rethrow them as WrappedError
  • Creating a default community health file
  • Functions isValidCsvString and isValidXmlString
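
The "catch non-error throws and wrap + rethrow them as WrappedError" change can be pictured with the sketch below; WrappedError's actual fields and message in Promptbook are assumptions here, and rethrowAsError is a hypothetical helper name:

```typescript
// Sketch only - the real WrappedError in Promptbook may differ.
class WrappedError extends Error {
    public constructor(public readonly whatWasThrown: unknown) {
        super(`Non-error value was thrown: ${String(whatWasThrown)}`);
        this.name = 'WrappedError';
    }
}

// Hypothetical helper: rethrow real errors as-is, wrap everything else
function rethrowAsError(whatWasThrown: unknown): never {
    if (whatWasThrown instanceof Error) {
        throw whatWasThrown;
    }
    throw new WrappedError(whatWasThrown);
}
```

Wrapping keeps `instanceof Error` checks and stack traces working even when code throws strings or plain objects.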

0.88.0 (2025-03-19)

Scripting and execution

  • Rename @promptbook/execute-javascript -> @promptbook/javascript
  • Move extractVariablesFromScript to @promptbook/javascript (no longer exported from @promptbook/utils)
  • Add route executions/last to remote server
  • Add $provideScriptingForNode
  • Add jsonStringsToJsons to @promptbook/utils, which converts JSON strings to JSON objects
  • Increase DEFAULT_MAX_EXECUTION_ATTEMPTS from 3 -> 10
  • Add a unique ID to errors so they can be serialized and deserialized
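
A minimal sketch of what a utility like jsonStringsToJsons may do - recursively replacing JSON-looking strings with parsed objects; the real implementation in @promptbook/utils may differ in edge cases:

```typescript
// Illustrative sketch, not the library's actual code.
function jsonStringsToJsons(value: unknown): unknown {
    if (typeof value === 'string') {
        const trimmed = value.trim();
        if (trimmed.startsWith('{') || trimmed.startsWith('[')) {
            try {
                return JSON.parse(trimmed);
            } catch {
                return value; // not valid JSON, keep the original string
            }
        }
        return value;
    }
    if (Array.isArray(value)) {
        return value.map(jsonStringsToJsons);
    }
    if (value !== null && typeof value === 'object') {
        return Object.fromEntries(
            Object.entries(value).map(([key, item]) => [key, jsonStringsToJsons(item)]),
        );
    }
    return value;
}
```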

0.86.0 (2025-02-18)

Use .book as default extension for books

0.85.0 (2025-02-17)

[🐚] Server queue and tasks

  • Publishing Promptbook into Docker Hub
  • Remote server run in both REST and Socket.io mode
  • Remote server can run entire books, not just single prompt tasks (for now just in REST mode)
  • In the future, the remote server will support callbacks / pingbacks
  • Remote server has internal task queue
  • Remote server can be started via ptbk start-server
  • Hide $randomSeed
  • Remove TaskProgress
  • Remove assertsExecutionSuccessful
  • PipelineExecutor: Change onProgress -> ExecutionTask
  • Remote server allows to set rootPath
  • Remote server can run in Docker
  • In the future, the remote server will persist its queue in SQLite / .promptbook / Neo4j
  • Do not generate stats for pre-releases to speed up the build process
  • Allow pipeline URLs on private and unsecured networks

0.83.0 and 0.84.0 (2025-02-04)

@promptbook/editable and integration of markitdown

  • Integrate markitdown and export through @promptbook/markitdown
  • Export parsing internals to @promptbook/editable
  • Rename sourceContent -> knowledgeSourceContent
  • Multiple functions to manipulate with PipelineString
  • book notation supports value interpolation
  • Add prompt, exported through @promptbook/utils, as an equivalent of the book notation
  • Flat books do not expect a return parameter
  • Wizard always returns a simple result: a string result key in the output
  • Using BUSL-1.1 license (only for @promptbook/utils keep using CC-BY-4.0)
  • Support of DeepSeek models
  • Support of o3-mini model by OpenAI
  • Change admin email to pavol@ptbk.io

0.82.0 (2025-01-16)

Compile via remote server

  • Add compilePipelineOnRemoteServer to package @promptbook/remote-client
  • Add preparePipelineOnRemoteServer to package @promptbook/remote-client
  • Changes in remote server that are not backward compatible
  • Add DEFAULT_TASK_TITLE
  • Enforce LF (\n) lines

0.81.0 (2025-01-12)

Editing, templates and flat pipelines

  • Backup original book as sources in PipelineJson
  • fetch is passed through ExecutionTools to allow proxying in browser
  • Make new package @promptbook/editable and move misc editing tools there
  • Make new package @promptbook/templates and add function getBookTemplate
  • Rename replaceParameters -> templateParameters
  • Add valueToString and numberToString utility function
  • Allow boolean, number, null, undefined and full json parameters in templateParameters (alongside with string)
  • Change --output to --output in CLI ptbk make
  • Re-introduction of package @promptbook/wizard
  • Allow flat pipelines
  • Root URL for flat pipelines
  • Change $provideLlmToolsForCli -> $provideLlmToolsForWizardOrCli
  • Do not require .book.md in pipeline url
  • More file paths are considered as valid
  • Walk to the root of the project and find the nearest .env file
  • $provideLlmToolsConfigurationFromEnv, $provideLlmToolsFromEnv, $provideLlmToolsForWizardOrCli, $provideLlmToolsForTestingAndScriptsAndPlayground are async
  • GENERATOR and IMAGE_GENERATOR formfactors
  • Rename removeContentComments -> removeMarkdownComments
  • Rename DEFAULT_TITLE -> DEFAULT_BOOK_TITLE
  • Rename precompilePipeline -> parsePipeline
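
The renamed templateParameters together with valueToString can be sketched as below; the {parameterName} placeholder syntax comes from this changelog, but the exact regex, escaping, and error handling of the real @promptbook/utils implementations are assumptions:

```typescript
// Hypothetical sketch - not the actual @promptbook/utils implementation.
type ParameterValue = string | number | boolean | null | undefined | object;

// Mirrors the valueToString utility mentioned above (behavior assumed)
function valueToString(value: ParameterValue): string {
    if (value === undefined) return '';
    if (value === null) return 'null';
    if (typeof value === 'string') return value;
    if (typeof value === 'object') return JSON.stringify(value);
    return String(value); // numbers and booleans
}

// Substitute {parameterName} placeholders, leaving unknown ones untouched
function templateParameters(template: string, parameters: Record<string, ParameterValue>): string {
    return template.replace(/\{(\w+)\}/g, (match, name) =>
        name in parameters ? valueToString(parameters[name]) : match,
    );
}

console.log(templateParameters('Hello {name}, you have {count} items', { name: 'Alice', count: 3 }));
// Hello Alice, you have 3 items
```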

0.80.0 (2025-01-01)

Simple chat notation

  • High-level chat notation
  • High-level abstractions
  • Introduction of compilePipeline
  • Add utility orderJson exported from @promptbook/utils
  • Add utility exportJson exported from @promptbook/utils (in previous versions this util was private and known as $asDeeplyFrozenSerializableJson)
  • Circular objects with same family references are considered NOT serializable
  • Interactive mode for FORMFACTOR CHATBOT in CLI
  • Deprecate pipelineJsonToString
  • Deprecate unpreparePipeline
  • Rename pipelineStringToJson -> compilePipeline
  • Rename pipelineStringToJsonSync -> precompilePipeline

0.79.0 (2024-12-27)

Implicit formfactors

  • You don't need to specify the formfactor or input+output params explicitly. Implementing the formfactor interface is sufficient.
  • Fix in deep cloning of arrays

0.78.0 (2024-12-14)

Utility functions

  • Add removePipelineCommand
  • Rename util renameParameter -> renamePipelineParameter
  • Rename util extractVariables -> extractVariablesFromScript
  • [👖] Utilities extractParameterNamesFromTask and renamePipelineParameter are not exported from @promptbook/utils but @promptbook/core because they are tightly interconnected with the Promptbook and cannot be used as universal utility

0.77.0 (2024-12-10)

Support for more models, add @promptbook/vercel and @promptbook/google packages.

  • @promptbook/vercel - Adapter for Vercel functionalities
  • @promptbook/google - Integration with Google's Gemini API
  • Option userId can be passed into all tools and instead of null, it can be undefined
  • Rename $currentDate -> $getCurrentDate

0.76.0 (2024-12-07)

Skipped because of a versioning mistake (it should have been a pre-release)

0.75.0 (2024-11-)

Formfactors, Rebranding

  • Add FormfactorCommand
  • Add Pipeline interfaces
  • Split ParameterJson into InputParameterJson, OutputParameterJson and IntermediateParameterJson
  • Reorganize /src folder
  • Rename Template -> Task
  • Rename TemplateCommand -> SectionCommand
  • Add TaskType alongside SectionType
  • 🤍 Change Whitepaper to Abstract
  • Rename default folder for your books from promptbook-collection -> books
  • Change claim of the project to "It's time for a paradigm shift! The future of software is in plain English, French or Latin."

0.74.0 (2024-11-11)

  • Proposal for version 1.0.0 both in Promptbook and Book language
  • Allow to run books directly in cli via ptbk run ./path/to/book.ptbk.md
  • Fix security warnings in dependencies
  • Enhance countLines and countPages utility function
  • No need to explicitly define the input and output parameters
  • Allow empty pipelines
  • Add BlackholeStorage
  • Rename .ptbk.* -> .book.*
  • Split PROMPTBOOK_VERSION -> BOOK_LANGUAGE_VERSION + PROMPTBOOK_ENGINE_VERSION
  • Finish split between Promptbook framework and Book language

0.73.0 (2024-11-08)

0.72.0 (2024-11-07)

Support for Assistants API (GPTs) from OpenAI

  • Add OpenAiAssistantExecutionTools
  • Add OpenAiExecutionTools.createAssistantSubtools
  • Add UNCERTAIN_USAGE
  • LLM tools' getClient methods are public
  • LLM tools options are not private anymore but protected
  • In remote server allow to pass not only userId but also appId and customOptions
  • In remote server, userId can no longer be undefined, but it can be null
  • OpenAiExecutionTools receives userId (not user)
  • Change Collection mode -> Application mode

0.71.0 (2024-11-07)

Knowledge scrapers [🐝]

  • Make new package @promptbook/pdf
  • Make new package @promptbook/documents
  • Make new package @promptbook/legacy-documents
  • Make new package @promptbook/website-crawler
  • Remove llm tools from PrepareAndScrapeOptions and add a second argument to misc preparation functions
  • Allow to import markdown files with knowledge
  • Allow to import .docx files with knowledge .docx -(Pandoc)-> .md
  • Allow to import .doc files with knowledge .doc -(LibreOffice)-> .docx -(Pandoc)-> .md
  • Allow to import .rtf files with knowledge .rtf -(LibreOffice)-> .docx -(Pandoc)-> .md
  • Allow to import websites with knowledge
  • Add new error KnowledgeScrapeError
  • Filesystem is passed as dependency
  • External programs are passed as dependency
  • Remove PipelineStringToJsonOptions in favour of PrepareAndScrapeOptions
  • Add MissingToolsError
  • Change FileStorage -> FileCacheStorage
  • Changed behavior of titleToName when passing URLs or file paths
  • Fix normalize functions when normalizing strings containing the slash characters "/" and "\"
  • Pass fs through ExecutionTools
  • Pass executables through ExecutionTools
  • Pass scrapers through ExecutionTools
  • Add utilities $provideExecutionToolsForBrowser and $provideExecutionToolsForNode and use them in samples
  • Add utilities $provideScrapersForBrowser and $provideScrapersForNode
  • Rename createLlmToolsFromConfigurationFromEnv -> $provideLlmToolsConfigurationFromEnv and createLlmToolsFromEnv -> $provideLlmToolsFromEnv
  • Rename getLlmToolsForTestingAndScriptsAndPlayground -> $provideLlmToolsForTestingAndScriptsAndPlayground
  • Rename getLlmToolsForCli -> $provideLlmToolsForCli
  • Change most Array -> ReadonlyArray
  • Unite CreatePipelineExecutorOptions and CreatePipelineExecutorSettings
  • Change --reload-cache to --reload in CLI
  • Prefix default values with DEFAULT_

0.70.0 ()

Support for local models - integrate Ollama

  • Make new package @promptbook/ollama
  • Add OllamaExecutionTools exported from @promptbook/ollama

0.69.0 (2024-09-)

Command FOREACH

  • Allow iterations with FOREACH command
  • Parameter names are case-insensitive and normalized
  • Big refactoring of createPipelineExecutor
  • Enhance and implement formats FormatDefinition
  • Allow to parse CSVs via CsvFormatDefinition
  • Change ListFormatDefinition -> TextFormatDefinition

0.68.0 (2024-09-08)

[🍧] Commands and command parser

  • There are 2 different commands, EXPECT and FORMAT
  • Rename BLOCK command -> TEMPLATE
  • EXPECT JSON changed to FORMAT JSON
  • Change usagePlaces -> isUsedInPipelineHead + isUsedInPipelineTemplate
  • All parsers have functions $applyToPipelineJson, $applyToTemplateJson, stringify, takeFromPipelineJson and takeFromTemplateJson
  • PipelineJson has defaultModelRequirements
  • PipelineJson has the Chat model variant as default, without the need to specify it explicitly
  • [🥜] Rename "Prompt template" -> "Template"
  • Rename PromptTemplateJson -> TemplateJson
  • Rename extractParameterNamesFromPromptTemplate -> extractParameterNamesFromTemplate
  • Rename PromptTemplateJsonCommon -> TemplateJsonCommon
  • Rename PromptTemplateParameterJson -> ParameterJson
  • Rename PipelineJson.promptTemplates -> PipelineJson.templates
  • Rename PromptDialogJson -> DialogTemplateJson
  • Rename PROMPT_DIALOG -> DIALOG_TEMPLATE
  • Rename ScriptJson -> ScriptTemplateJson
  • Rename SCRIPT -> SCRIPT_TEMPLATE
  • Rename LlmTemplateJson -> PromptTemplateJson
  • Rename ParsingError -> ParseError

0.67.0 (2024-08-21)

[🚉] Types and interfaces, JSON serialization

  • Enhance 🤍 The Promptbook Whitepaper
  • Enhance the README.md
  • ExecutionReportJson is fully serializable as JSON
  • [🛫] Prompt is fully serializable as JSON
  • Add type string_postprocessing_function_name
  • Add isSerializableAsJson utility function, use it to protect inputs and check outputs and export from @promptbook/utils
  • Add serializeError and deserializeError utility functions and export from @promptbook/utils
  • Rename ReferenceError to PipelineUrlError
  • Make index of all errors and export from @promptbook/core
  • Mark all entities that are fully serializable as JSON by [🚉]
  • When running in browser, auto add dangerouslyAllowBrowser from createOpenAiExecutionTools
  • RemoteLlmExecutionTools automatically retries on error
  • Rename client_id -> string_user_id and clientId -> userId
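
A hedged sketch of the serializeError / deserializeError pair mentioned above; the real utilities likely carry extra fields (such as the unique error ID added later in 0.88.0) and map names back to custom error classes:

```typescript
// Sketch of the (de)serialization pair; real utilities may carry more fields.
type SerializedError = { name: string; message: string; stack?: string };

function serializeError(error: Error): SerializedError {
    return { name: error.name, message: error.message, stack: error.stack };
}

function deserializeError(serialized: SerializedError): Error {
    const error = new Error(serialized.message);
    error.name = serialized.name;
    error.stack = serialized.stack;
    return error;
}
```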

0.66.0 (2024-08-19)

[🎰] Model updates and registers

  • Prefix all non-pure by $
  • Add model claude-3-5-sonnet-20240620 to AnthropicClaudeExecutionTools
  • [🐞] Fix usage counting in AnthropicClaudeExecutionTools
  • Update @anthropic-ai/sdk from 0.21.1 to 0.26.1
  • Update @azure/openai from 1.0.0-beta.12 to 2.0.0-beta.1
  • Update openai from 4.46.1 to 4.55.9
  • Add LlmExecutionToolsConstructor
  • Add $llmToolsConfigurationBoilerplatesRegister
  • Add $llmToolsRegister
  • Rename Openai ->OpenAi

0.65.0 (2024-08-15-)

[🍜] Anonymous server

  • Anonymous server
  • LlmConfiguration and createLlmToolsFromConfiguration
  • Better names for knowledge sources
  • Rename keys inside prepared knowledge
  • Use MultipleLlmExecutionTools more
  • LLM tools providers have constructor functions, for example OpenAiExecutionTools -> createOpenAiExecutionTools
  • remoteServerUrl is string_base_url

0.64.0 was skipped

0.63.0 (2024-08-11)

Better system for imports, exports and dependencies

  • Manage package exports automatically
  • Automatically export all types from @promptbook/types
  • Protect runtime-specific code - for example, protect browser-specific code so it never reaches @promptbook/node
  • Concise README - move things to discussions
  • Make model requirements Partial<ModelRequirements> and optional

0.62.0 (2024-07-8)

[🎐] Better work with usage

  • Add usage to preparations and reports
  • Export function usageToHuman from @promptbook/core
  • Rename TotalCost to TotalUsage
  • Allow to reload cache
  • Fix error in uncertainNumber which always returned "uncertain 0"
  • [🐞] Fix usage counting in OpenAiExecutionTools

0.61.0 (2024-07-8)

Big syntax additions: working external knowledge, personas, and preparation for instruments and actions

  • Add reserved parameter names
  • Add SAMPLE command with notation for parameter samples to .ptbk.md files
  • Add KNOWLEDGE command to .ptbk.md files
  • Change EXECUTE command to BLOCK command
  • Change executionType -> templateType
  • Rename SyntaxError to ParsingError
  • Rename extractParameters to extractParameterNames
  • Rename ExecutionError to PipelineExecutionError
  • Remove TemplateError and replace with ExecutionError
  • Allow deep structure (h3, h4,...) in .ptbk.md files
  • Add callEmbeddingModel to LlmExecutionTools
  • callChatModel and callCompletionModel are not required to be implemented in LlmExecutionTools anymore
  • Remove MultipleLlmExecutionTools and make joinLlmExecutionTools function
  • You can pass simple array of LlmExecutionTools into ExecutionTools and it will be joined automatically via joinLlmExecutionTools
  • Remove the MarkdownStructure and replace by simpler solution flattenMarkdown + splitMarkdownIntoSections + parseMarkdownSection which works just with markdown strings and export from @promptbook/utils <- [🕞]
  • Markdown utils are exported through @promptbook/markdown-utils (and removed from @promptbook/utils)
  • String normalizers goes alongside with types; for example normalizeTo_SCREAMING_CASE -> string_SCREAMING_CASE
  • Export isValidUrl, isValidPipelineUrl, isValidFilePath, isValidJavascriptName, isValidSemanticVersion, isHostnameOnPrivateNetwork, isUrlOnPrivateNetwork and isValidUuid from @promptbook/utils
  • Add systemMessage, temperature and seed to ModelRequirements
  • Code blocks can be notated both by ``` and >
  • Add caching and storage
  • Export utility stringifyPipelineJson from @promptbook/core to stringify PipelineJson with pretty formatting of long knowledge indexes
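
The joinLlmExecutionTools behavior - trying each tool in order and falling back on failure - can be illustrated with this greatly simplified sketch; the real Promptbook interfaces (callChatModel, callCompletionModel, callEmbeddingModel) take richer prompt and result types:

```typescript
// Simplified stand-in for the real LlmExecutionTools interface.
type ChatTool = {
    title: string;
    callChatModel?: (prompt: string) => Promise<string>;
};

function joinLlmExecutionTools(...tools: ChatTool[]): ChatTool {
    return {
        title: 'Joined tools',
        async callChatModel(prompt: string): Promise<string> {
            const failures: Error[] = [];
            for (const tool of tools) {
                if (!tool.callChatModel) continue; // tool does not support chat
                try {
                    return await tool.callChatModel(prompt);
                } catch (error) {
                    failures.push(error as Error); // fall through to the next tool
                }
            }
            throw new Error(`No tool succeeded (${failures.length} failures)`);
        },
    };
}
```

This is why a plain array of LlmExecutionTools can be passed into ExecutionTools: joining them just means delegating each call to the first tool that supports it and succeeds.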

0.60.0 (2024-07-15)

Renaming and making names more consistent and less ambiguous

  • Rename word "promptbook"
    • Keep name "Promptbook" as name for this project.
    • Rename promptbook as pipeline of templates defined in .ptbk.md to "pipeline"
  • Rename word "library"
    • For library used as a collection of templates use name "collection"
    • For library used as this project and package use word "package"
  • Rename methods in LlmExecutionTools
    • gptChat -> callChatModel
    • gptComplete -> callCompletionModel
  • Rename custom errors
  • Rename folder promptbook-collection -> promptbook-collection
  • In CLI you can use both promptbook and ptbk

0.59.0 (2024-06-30)

Preparation for system for management of external knowledge (RAG), vector embeddings and proper building of pipeline collection.

  • Add MaterialKnowledgePieceJson
  • Add KnowledgeJson
  • Add prepareKnowledgeFromMarkdown exported from @promptbook/core
  • Change promptbookStringToJson to async function (and add promptbookStringToJsonSync for promptbooks without external knowledge)
  • Change createPromptbookLibraryFromSources to createPromptbookLibraryFromJson and allow only compiled jsons as input + it is not async anymore
  • Allow only jsons as input in createLibraryFromPromise
  • Class SimplePromptbookLibrary not exposed at all, only type PromptbookLibrary and constructors
  • Rename all createPromptbookLibraryFromXyz to createLibraryFromXyz
  • Misc tool classes no longer require options (like CallbackInterfaceTools, OpenAiExecutionTools, AnthropicClaudeExecutionTools, etc.)
  • Add util libraryToJson exported from @promptbook/core
  • CLI util ptbk make ... can convert promptbooks to JSON
  • promptbookStringToJson automatically looks for promptbook-collection.json in root of given directory
  • Rename validatePromptbookJson to validatePromptbook
  • Create embed method on LLM tools, PromptEmbeddingResult, EmbeddingVector and embeddingVectorToString
  • createLibraryFromDirectory still DOES NOT use a prebuilt library (it just detects it)

0.58.0 (2024-06-26)

  • Internal reorganization of folders and files
  • Export types as type export

0.57.0 (2024-06-15)

Better JSON Mode

  • OpenAiExecutionTools will use JSON mode natively
  • OpenAiExecutionTools does not fail on empty (but valid string) responses

0.56.0 (2024-06-16)

Rename and reorganize libraries

  • Take createPromptbookLibraryFromDirectory from @promptbook/core -> @promptbook/node (to avoid dependency risk errors)
  • Rename @promptbook/fake-llmed -> @promptbook/fake-llm
  • Export PROMPTBOOK_ENGINE_VERSION from each package
  • Use export type in @promptbook/types

0.55.0 (2024-06-15)

Better usage computation and shape

  • Change shape of PromptResult.usage
  • Remove types number_positive_or_zero and number_negative_or_zero
  • Export type PromptResultUsage, PromptResultUsageCounts and UncertainNumber from @promptbook/types
  • Export util addUsage from @promptbook/core
  • Put usage directly in result of each execution
  • Export function usageToWorktime from @promptbook/core

0.54.0 (2024-06-08)

  • Custom errors ExpectError,NotFoundError,PromptbookExecutionError,PromptbookLogicError,PromptbookLibraryError,PromptbookSyntaxError exported from @promptbook/core

0.53.0 (2024-06-08)

Repair and organize imports

0.52.0 (2024-06-06)

Add support for Claude \ Anthropic models via package @promptbook/anthropic-claude and add Azure OpenAI models via package @promptbook/azure-openai

  • Export MultipleLlmExecutionTools from @promptbook/core
  • Always use "modelName" not just "model"
  • Standardization of model providers
  • Delete @promptbook/wizard
  • Move assertsExecutionSuccessful,checkExpectations,executionReportJsonToString,ExecutionReportStringOptions,ExecutionReportStringOptionsDefaults,isPassingExpectations,prettifyPromptbookString from @promptbook/utils to @promptbook/core
  • Make and use JavascriptExecutionTools as placeholder for better implementation with proper sandboxing
  • Implement createPromptbookLibraryFromDirectory export from @promptbook/core
  • Make PromptbookLibraryError
  • Check Promptbook URL uniqueness in SimplePromptbookLibrary (see [🦄])
  • Util createPromptbookLibraryFromPromise is not public anymore
  • Util forEachAsync export from @promptbook/utils

0.51.0 (2024-05-24)

Add new OpenAI models gpt-4o and gpt-4o-2024-05-13

  • Add model gpt-4o
  • Add model gpt-4o-2024-05-13
  • Classes that implement LlmExecutionTools must expose compatible models
  • List OpenAI models dynamically
  • All GPT models have pricing information
  • Export OPENAI_MODELS from @promptbook/openai
  • Export types LlmTemplateJson, SimpleTemplateJson, ScriptJson, PromptDialogJson, Expectations from @promptbook/types
  • ModelRequirements.modelName is not required anymore
  • PromptbookExecutor does not require onProgress anymore
  • ExecutionTools does not require userInterface anymore, when not set, the user interface is disabled and promptbook which requires user interaction will fail
  • Export extractParameters, extractVariables and extractParametersFromPromptTemplate from @promptbook/utils
  • Add and export set operations difference, intersection and union from @promptbook/utils
  • Export POSTPROCESSING_FUNCTIONS from @promptbook/execute-javascript
  • No need to specify MODEL VARIANT and MODEL NAME in .ptbk.md explicitly, CHAT VARIANT will be used as default
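
The set operations exported from @promptbook/utils presumably follow the classic definitions; this sketch shows the expected semantics, though the actual signatures in the library may differ:

```typescript
// Classic set semantics; signatures here are assumptions, not the library's API.
function union<T>(...sets: ReadonlyArray<ReadonlySet<T>>): Set<T> {
    const result = new Set<T>();
    for (const set of sets) {
        for (const item of Array.from(set)) result.add(item);
    }
    return result;
}

function intersection<T>(a: ReadonlySet<T>, b: ReadonlySet<T>): Set<T> {
    return new Set(Array.from(a).filter((item) => b.has(item)));
}

function difference<T>(a: ReadonlySet<T>, b: ReadonlySet<T>): Set<T> {
    return new Set(Array.from(a).filter((item) => !b.has(item)));
}
```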

0.50.0 (2024-05-17)

Was accidentally released earlier; re-released fully completed as 0.51.0

0.48.0 and 0.49.0 (2024-05-08)

Better utilities (for Promptbase app)

  • Add the reverse utility promptbookJsonToString
  • Allow to put link callback into renderPromptbookMermaid
  • Better prompt template identification
  • Add function titleToName exported from @promptbook/utils
  • Add function renameParameter exported from @promptbook/utils
  • Rename "Script Template" to just "Script"
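
titleToName presumably performs slug-style normalization along these lines; this is an illustrative sketch, not the library's actual implementation (which, per 0.71.0, also has special handling for URLs and file paths):

```typescript
// Illustrative slugifier; the real titleToName may differ in details.
function titleToName(title: string): string {
    return title
        .normalize('NFD') // split accented letters into base + diacritic
        .replace(/[\u0300-\u036f]/g, '') // drop the diacritics
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, '-') // non-alphanumeric runs become dashes
        .replace(/^-+|-+$/g, ''); // trim leading/trailing dashes
}

console.log(titleToName('✨ My Awesome Promptbook!'));
// my-awesome-promptbook
```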

0.47.0 (2024-05-02)

Tools refactoring

  • Rename "natural" -> "llm"
  • Allow to pass multiple llm into ExecutionTools container
  • Export renderPromptbookMermaid through @promptbook/utils

0.46.0 (2024-04-28)

Reorganize packages

💡 Now you can just install promptbook or ptbk as alias for everything

  • New package promptbook as a link to all other packages
  • New package ptbk as an alias to promptbook
  • New package @promptbook/fake-llm
    • Move there MockedEchoLlmExecutionTools and MockedFackedLlmExecutionTools from @promptbook/core
  • New package @promptbook/langtail to prepare for Langtail integration

0.45.0 (2024-04-27)

More direct usage of OpenAI API, Refactoring

  • Pass OpenAI options directly to OpenAiExecutionTools
    • Change openAiApiKey -> apiKey when creating new OpenAiExecutionTools
  • Change all import statements to import type when importing just types

0.44.0 (2024-04-26)

  • Lower bundle size
  • Normalization library n12 is no longer used; all its functions were brought into @promptbook/utils
  • Better error names
  • Better error usage
  • Make ExpectError private
  • @promptbook/core is no longer a peer dependency of @promptbook/utils
  • Rename expectAmount in json to expectations
  • Expectations are passed into prompt object and used in natural tools
  • Add MockedFackedLlmExecutionTools
  • Add utils checkExpectations and isPassingExpectations
  • Better error messages from JavascriptEvalExecutionTools
  • Each exported NPM package has full README
  • spaceTrim is re-exported from @promptbook/utils

0.43.0 (2024-03-26)

CLI utils exported from @promptbook/cli

After installing, you can use the promptbook or ptbk command in the terminal:

npm i @promptbook/utils
npx ptbk prettify 'promptbook/**/*.ptbk.md'

0.42.0 (2024-03-24)

Better logo and branding of Promptbook.

0.41.0 (2024-03-23)

More options to create PromptbookLibrary

  • Utility createPromptbookLibraryFromDirectory
  • Utility createPromptbookLibraryFromUrl
  • Add extractBlock to build-in functions
  • Remove problematic usage of chalk and use colors instead
  • Export replaceParameters from @promptbook/utils

0.40.0 (2024-03-10)

Multiple factories for PromptbookLibrary, Custom errors, enhance templating

  • Throwing NotFoundError
  • Throwing PromptbookSyntaxError
  • Throwing PromptbookLogicError
  • Throwing PromptbookExecutionError
  • Throwing PromptbookReferenceError
  • Throwing UnexpectedError
  • Preserve col-chars in multi-line templates; see the replaceParameters unit test for more
  • Change static methods of PromptbookLibrary to standalone functions
  • Static method createPromptbookLibraryFromSources receives spreaded arguments Array instead of Record
  • Add factory function createPromptbookLibraryFromPromise

0.39.0 (2024-03-09)

Working on Promptbook Library. Identify promptbooks by URL.

  • Change PromptbookLibrary class to interface
  • Add SimplePromptbookLibrary class which implements PromptbookLibrary
  • Rename PromptbookLibrary.promptbookNames to PromptbookLibrary.pipelineUrls
  • Remove PromptbookLibrary.createExecutor to separate responsibility
  • Make more renamings and reorganizations in PromptbookLibrary
  • Make PromptbookLibrary.listPipelines async method
  • Make PromptbookLibrary.getPipelineByUrl async method

0.38.0 (2024-03-09)

Remove "I" prefix from interfaces and change interfaces to types.

  • Rename IAutomaticTranslator -> AutomaticTranslator
  • Rename ITranslatorOptions -> TranslatorOptions
  • Rename IGoogleAutomaticTranslatorOptions -> GoogleAutomaticTranslatorOptions
  • Rename ILindatAutomaticTranslatorOptions -> LindatAutomaticTranslatorOptions
  • Remove unused IPersonProfile
  • Remove unused ILicense
  • Remove unused IRepository

Note: Keeping the "I" prefix in internal tooling like IEntity and IExecCommandOptions
Note: Also keeping types imported from external libraries, like IDestroyable

0.37.0 (2024-03-08)

Explicit output parameters

  • Every promptbook must have an OUTPUT PARAMETER property in its header

0.36.0 (2024-03-06)

Cleanup and renaming

  • Cleanup the project
  • Do not export unused types from @promptbook/types
  • Rename "Prompt template pipelines" to more meaningful "Promptbooks"
  • Remove DEFAULT_MODEL_REQUIREMENTS - You need to explicitly specify the requirements
  • Rename PromptTemplatePipelineLibrary -> PromptbookLibrary
  • Rename RemoteServerOptions.ptbkLibrary -> library
  • Add RemoteServerOptions.ptbkNames
  • Rename RemoteServerOptions.getPtp -> getPtbkByName
  • Do not use shortcut "Ptbk" but full "Promptbook" name in the code, classes, methods, etc.
  • Change command PTBK_URL to URL (but keep backward compatibility and preserve alias PTBK)
  • Change command PTBK_NAME to PROMPTBOOK_NAME (but keep backward compatibility and preserve alias PTBK)
  • Rename runRemoteServer -> startRemoteServer and return Destroyable object

0.35.1 (2024-03-06)

  • Add Mermaid graph to sample promptbooks
  • Fix spelling errors in OpenAI error messages

0.35.0 (2024-03-01)

  • You can use prettifyMarkdown for postprocessing

0.34.0 (2024-02-19)

  • Do not remove emojis or formatting from task title in progress

0.33.0 (Skipped)

Iterating over parameters

  • Parameters can be both string and Array<string>
    • Array<string> will iterate over all values
    • You can use postprocessing functions or EXECUTE SCRIPT to split a string into an array and vice versa
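
The iteration described above can be sketched like this. runForEach is a hypothetical helper used only for illustration (in the real pipeline, each value would typically flow through an async execution step):

```typescript
// Hypothetical helper: a plain string runs the step once,
// an Array<string> runs it once per value.
function runForEach(
    parameter: string | Array<string>,
    step: (value: string) => string,
): Array<string> {
    const values = Array.isArray(parameter) ? parameter : [parameter];
    return values.map(step);
}

console.log(runForEach(['apple', 'pear'], (v) => v.toUpperCase()));
// → [ 'APPLE', 'PEAR' ]
```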

0.32.0 (2024-02-12)

Export less functions from @promptbook/utils

0.31.0 (2024-02-12)

Better execution reports

  • Filter out voids in executionReportJsonToString
  • Add timing information to ExecutionReportJson (In both text and chart format)
  • Add money cost information to ExecutionReportJson (In both text and chart format)
  • Escape code blocks in markdown
  • Do not export replaceParameters utility function

0.30.0 (2024-02-09)

  • Remove Promptbook (use just the JSON PromptbookJson format)
    • CreatePtbkExecutorOptions has PromptbookJson
  • Promptbooks are executed in parallel
    • PromptTemplateJson contains dependentParameterNames
    • validatePromptbookJson is checking for circular dependencies
    • Test that joker is one of the dependent parameters
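
The circular-dependency check can be illustrated as follows. This is a sketch of the idea, not the real validatePromptbookJson: each template lists the parameter names it depends on, and a cycle exists when no remaining template can ever have all of its dependencies resolved.

```typescript
// Illustrative cycle detection over dependentParameterNames.
type TemplateSketch = {
    resultingParameterName: string;
    dependentParameterNames: Array<string>;
};

function hasCircularDependency(templates: Array<TemplateSketch>): boolean {
    const resolved = new Set<string>();
    let remaining = [...templates];
    while (remaining.length > 0) {
        const runnable = remaining.filter((t) =>
            t.dependentParameterNames.every((name) => resolved.has(name)),
        );
        // Nothing can run → a cycle (or an undeclared input parameter).
        if (runnable.length === 0) return true;
        for (const t of runnable) resolved.add(t.resultingParameterName);
        remaining = remaining.filter((t) => !runnable.includes(t));
    }
    return false;
}
```

The same "which templates are runnable now" grouping also explains how templates can execute in parallel: every template in one runnable batch is independent of the others.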

0.29.0 (2024-02-06)

  • Allow to use custom postprocessing functions
  • Allow async postprocessing functions
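
Conceptually, a chain of postprocessing functions, each of which may be sync or async, is applied in order to the model result. A minimal sketch of that behavior (assumed semantics, not the exact library internals):

```typescript
// Apply each postprocessor in order, awaiting async ones.
type Postprocessor = (value: string) => string | Promise<string>;

async function applyPostprocessing(
    value: string,
    fns: Array<Postprocessor>,
): Promise<string> {
    let result = value;
    for (const fn of fns) {
        result = await fn(result);
    }
    return result;
}
```

For example, `applyPostprocessing('  hello  ', [(s) => s.trim(), async (s) => s.toUpperCase()])` resolves to `'HELLO'`.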

0.28.0 (2024-02-05)

Better execution report in markdown format

  • Add JOKER {foo} as a way to skip part of the promptbook
  • Split UserInterfaceToolsPromptDialogOptions.prompt into promptTitle and promptMessage
  • Add UserInterfaceToolsPromptDialogOptions.priority
  • Add timing information to report
  • The maximum must be higher than the minimum in an EXPECT statement
  • A maximum of 0 is not valid; it must be at least 1 in an EXPECT statement
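
The EXPECT bounds rules above can be sketched as a small validator. This is a hypothetical helper for illustration, not library code, and it assumes the "higher than minimum" rule allows equality:

```typescript
// Validate EXPECT amount bounds: maximum >= 1 and maximum not below minimum.
function validateExpectAmount(
    minimum: number | undefined,
    maximum: number | undefined,
): void {
    if (maximum !== undefined && maximum < 1) {
        throw new Error('Maximum must be at least 1');
    }
    if (minimum !== undefined && maximum !== undefined && maximum < minimum) {
        throw new Error('Maximum must not be lower than minimum');
    }
}
```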

0.27.0 (2024-02-03)

Moving logic from promptbookStringToJson to createPtbkExecutor

  • Allow postprocessing and expectations in all execution types
  • Postprocessing happens before checking expectations
  • In PromptbookJson, postprocessing is represented internally within each PromptTemplateJson, not as a separate PromptTemplateJson
  • Introduce ExpectError
  • Rename maxNaturalExecutionAttempts to maxExecutionAttempts (because now it is not just for natural execution)
  • If the title in a promptbook contains emojis, pass it into the report
  • Fix description in report
  • Ask the user for input repeatedly until the input matches the expectations
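
The order of operations described in this release can be sketched as: run the step, postprocess the result, then check expectations, retrying up to maxExecutionAttempts times on an expectation failure. All names below are illustrative, not the real executor API:

```typescript
// Hedged sketch of run → postprocess → check-expectations with retries.
async function executeWithRetries(
    run: () => Promise<string>,
    postprocess: (value: string) => string,
    checkExpectations: (value: string) => void, // throws (e.g. ExpectError) on failure
    maxExecutionAttempts: number,
): Promise<string> {
    let lastError: unknown;
    for (let attempt = 0; attempt < maxExecutionAttempts; attempt++) {
        const raw = await run();
        const processed = postprocess(raw); // postprocessing happens first…
        try {
            checkExpectations(processed); // …then expectations are checked
            return processed;
        } catch (error) {
            lastError = error;
        }
    }
    throw lastError;
}
```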

0.26.0 (2024-02-03)

  • Add EXPECT JSON command to promptbooks
  • Split internal representation EXPECT into EXPECT_AMOUNT and EXPECT_FORMAT

0.25.0 (2024-02-03)

  • CreatePtbkExecutorSettings is not mandatory anymore

0.24.0 (2024-01-25)

  • Add postprocessing function trimCodeBlock
  • Add EXPECT command to promptbooks
  • Add ExecutionReport
  • Add parseNumber utility function
  • PtbkExecutor returns a richer result and does not throw; on failure it just returns isSuccessful=false. You can use the assertsExecutionSuccessful utility function to check whether the execution was successful
  • Add assertsExecutionSuccessful utility function
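
The two utilities above can be sketched as follows; these are illustrative re-implementations, and the real versions in @promptbook/utils may handle more edge cases:

```typescript
// Extract the first number found in free-form model output.
function parseNumberSketch(text: string): number {
    const match = text.replace(/,/g, '.').match(/-?\d+(\.\d+)?/);
    if (match === null) throw new Error(`Cannot parse number from "${text}"`);
    return Number(match[0]);
}

// Throw when an execution result reports isSuccessful=false.
function assertsExecutionSuccessfulSketch(result: {
    isSuccessful: boolean;
    errors?: Array<Error>;
}): void {
    if (!result.isSuccessful) {
        throw new Error(
            result.errors?.map((e) => e.message).join('\n') ?? 'Execution failed',
        );
    }
}

console.log(parseNumberSketch('The answer is 42.'));
// → 42
```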

0.23.0 (2024-01-25)

  • You can send markdown code blocks in prompts (without traces of escaping)
  • The postprocessing function trimEndOfCodeBlock works only with markdown code blocks, NOT with escaped code blocks
  • Rename extractBlocksFromMarkdown to extractAllBlocksFromMarkdown

0.20.2 (2024-01-16)

  • replaceParameters works with inlined JSONs

0.20.1 (2024-01-15)

  • Add postprocessing function trimEndOfCodeBlock
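
A hedged sketch of what trimEndOfCodeBlock does: strip a trailing triple-backtick fence that models often append when completing inside a code block. This is assumed behavior for illustration, not the exact library implementation (the fence string is built indirectly only so this example itself stays a valid markdown code block):

```typescript
const FENCE = '`'.repeat(3); // the markdown code-fence "```"

function trimEndOfCodeBlockSketch(text: string): string {
    const trimmed = text.trimEnd();
    return trimmed.endsWith(FENCE)
        ? trimmed.slice(0, -FENCE.length).trimEnd()
        : trimmed;
}
```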

0.20.0 (2023-12-29)

  • Change keyword USE to MODEL VARIANT
  • Allow specifying the exact model, e.g. MODEL NAME gpt-4-1106-preview