LLM Chat WebJar

Last modified by Michael Hamann on 2024/05/24 14:20

Resources for the chat UI of the LLM Application.
Developed by Matéo Munoz, Ludovic Dubost, Michael Hamann, Paul Pantiru

License: GNU Lesser General Public License 2.1

Compatible XWiki versions: 16.2.0 and above


The LLM Application also provides a webjar containing a JavaScript library (used by the Chat UI, but also usable independently) and a Chat UI based on the Assister chat that can be embedded in any external application. This makes it possible to connect and talk to the data indexed in the LLM Application from any web application authorized from within the XWiki instance.



It also provides settings for selecting between the available models, setting the temperature, and switching between streaming and non-streaming responses.


Embedding the Chat UI into a webpage

Insert the following into your webpage header and replace [your xwiki domain] and [LLM Application version] with your actual values.
Note: This will be simplified in future releases.

<script src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/marked.min.js"></script>
<script src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/purify.min.js"></script>
<script src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/aillm.js"></script>
<script id="chat-loader" data-base-url="[your xwiki domain]/xwiki" src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/chatUILoader.js"></script>
<script type="module" src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/dist/chat/chat.esm.js"></script>
<link rel="stylesheet" href="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/chatUI.css">
<link rel="stylesheet" href="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/dist/chat/chat.css">

The javascript library

The webjar also provides the aillm.js library, a JavaScript module that offers an interface to the XWiki AI Chat API. It allows you to communicate with an AI language model and retrieve chat completions based on the provided messages and settings.


The XWikiAiAPI is a singleton object that encapsulates the methods and properties required to interact with the XWiki AI Chat API.

Properties:

- baseURL: The base URL of the XWiki AI Chat API. Defaults to "http://localhost:8080/xwiki".
- wikiName: The name of the wiki. Default is 'xwiki'.
- apiKey: The API key for authentication. Default is an empty string.
- temperature: The temperature value for generating chat completions. Default is 1.
- stream: A boolean indicating whether to use streaming mode. Default is false.
- chatUISettings: An array of available chat UI settings. Available values are: "server-address", "temperature", "model" and "stream".

Methods:

- getBaseURL(): Returns the current base URL.
- setBaseURL(url): Sets the base URL for the API requests.
- setApiKey(key): Sets the API key for authentication.
- setWikiName(name): Sets the name of the wiki.
- getModels(): Fetches the list of available models from the API.
- getPrompts(): Fetches the list of available prompts from the API.
- getCompletions(request, onMessageChunk): Sends a ChatCompletionRequest to retrieve chat completions. If stream is set to true, it handles the streamed response and calls the onMessageChunk callback with each received message chunk. If stream is false, it returns the complete response as a Promise.
- setChatUISettings(settings): Sets the available chat UI settings.
- getChatUISettings(): Returns the available chat UI settings.


The ChatCompletionRequest class represents a request for chat completions from the AI model.


constructor(model, temperature, messages, stream)

- model: The model to use for generating chat completions.
- temperature: The randomness of the generated completions, usually between 0 and 2.
- messages: An array of message objects, each containing a role and content.
- stream: A boolean indicating whether to use streaming mode.

Methods:

- setModel(model): Sets the model name after validation.
- setTemperature(temperature): Sets the temperature value after validation.
- setMessages(messages): Sets the messages array after validation.
- setStream(stream): Sets the streaming mode after validation.
- addMessage(role, content): Adds a message to the messages array after validation.
- validate(): Validates the current state of the instance.
- toJSON(): Returns a JSON representation of the instance.


To use the aillm.js library, you need to include it in your application and create an instance of the ChatCompletionRequest class. Then, you can use the XWikiAiAPI methods to interact with the API and retrieve chat completions.


// Create a ChatCompletionRequest instance
const completionRequest = new ChatCompletionRequest(
  "AI.Models.mixtral", // model
  0.5, // temperature
  [], // messages
  true // streaming
);

// Add messages to the request
completionRequest.addMessage("user", "Hello!");
completionRequest.addMessage("assistant", "Hi there!");

// Retrieve chat completions
XWikiAiAPI.getCompletions(completionRequest, (messageChunk) => {
  console.log("Received message chunk:", messageChunk);
})
  .then(() => {
    console.log("Chat completions retrieved successfully.");
  })
  .catch((error) => {
    console.error("Error retrieving chat completions:", error);
  });
That's a basic overview of the aillm.js library and its usage. It provides a convenient way to interact with the XWiki AI Chat API and retrieve chat completions from an AI language model.


Dependencies for this extension (org.xwiki.contrib.llm:application-ai-llm-chat-webjar 0.3.1):
