LLM Chat WebJar
Resources for the chat UI of the LLM Application.
Type | webjar
Category | WebJar
Developed by | Matéo Munoz, Ludovic Dubost, Michael Hamann, Paul Pantiru
License | GNU Lesser General Public License 2.1
Compatibility | 16.2.0 and above |
Description
The LLM Application also provides a webjar containing a JavaScript library (used by the Chat UI, but also usable independently) and a Chat UI based on the Assister chat that can be embedded in any external application. This gives any web application authorized from within the XWiki instance a way to connect and talk to the data indexed in the LLM Application.
Settings
The Chat UI also provides settings for selecting between the available models, setting the temperature, and switching between streaming and non-streaming responses.
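The set of settings exposed in the widget can also be narrowed programmatically through the XWikiAiAPI.setChatUISettings method documented below. A minimal sketch, assuming the setting names listed under the chatUISettings property:

// Expose only the model picker and the streaming toggle in the Chat UI;
// valid names are "server-address", "temperature", "model" and "stream".
XWikiAiAPI.setChatUISettings(["model", "stream"]);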
Embedding the Chat UI into a webpage
Insert the following into your webpage header and replace [your xwiki domain] and [LLM Application version] with your actual values.
Note: This will be simplified in future releases.
<script src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/purify.min.js"></script>
<script src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/aillm.js"></script>
<script id="chat-widget" data-base-url="[your xwiki domain]/xwiki" src="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/chatWidget.js"></script>
<link rel="stylesheet" href="[your xwiki domain]/xwiki/webjars/wiki%3Axwiki/application-ai-llm-chat-webjar/[LLM Application version]/chatWidget.css">
The JavaScript library
XWikiAiAPI
The XWikiAiAPI is a singleton object that encapsulates the methods and properties required to interact with the XWiki AI Chat API; a short configuration sketch follows the method list below.
Properties
- baseURL: The base URL of the XWiki AI Chat API. Defaults to "http://localhost:8080/xwiki".
- wikiName: The name of the wiki. Default is 'xwiki'.
- apiKey: The API key for authentication. Default is an empty string.
- temperature: The temperature value for generating chat completions. Default is 1.
- stream: A boolean indicating whether to use streaming mode. Default is false.
- chatUISettings: An array of available chat UI settings. Available values are: "server-address", "temperature", "model" and "stream".
Methods
- getBaseURL(): Returns the current base URL.
- setBaseURL(url): Sets the base URL for the API requests.
- setApiKey(key): Sets the API key for authentication.
- setWikiName(name): Sets the name of the wiki.
- getModels(): Fetches the list of available models from the API.
- getPrompts(): Fetches the list of available prompts from the API.
- getCompletions(request, onMessageChunk): Sends a ChatCompletionRequest to retrieve chat completions. If stream is true, it handles the streamed response and calls the onMessageChunk callback with each received message chunk; if stream is false, it returns the complete response as a Promise (see the examples under Usage).
- setChatUISettings(settings): Sets the available chat UI settings.
- getChatUISettings(): Returns the available chat UI settings.
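Putting these together, a typical configuration step before issuing requests might look like the following sketch (the base URL and API key are placeholders, and the shape of the value resolved by getModels() is not specified here):

// Point the client at the target wiki (placeholder values)
XWikiAiAPI.setBaseURL("https://wiki.example.com/xwiki");
XWikiAiAPI.setWikiName("xwiki");
XWikiAiAPI.setApiKey("YOUR_API_KEY");

// List the models exposed by the LLM Application; the exact shape of
// the resolved value depends on the API.
XWikiAiAPI.getModels()
    .then((models) => console.log("Available models:", models))
    .catch((error) => console.error("Could not fetch models:", error));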
ChatCompletionRequest
The ChatCompletionRequest class represents a request for chat completions from the AI model; a short sketch of validating and serializing a request follows the method list below.
Constructor
- model: The model to use for generating chat completions.
- temperature: Controls the randomness of the generated completions; usually between 0 and 2.
- messages: An array of message objects, each containing a role and content.
- stream: A boolean indicating whether to use streaming mode.
Methods
- setModel(model): Sets the model name after validation.
- setTemperature(temperature): Sets the temperature value after validation.
- setMessages(messages): Sets the messages array after validation.
- setStream(stream): Sets the streaming mode after validation.
- addMessage(role, content): Adds a message to the messages array after validation.
- validate(): Validates the current state of the instance.
- toJSON(): Returns a JSON representation of the instance.
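As a quick illustration of the last two methods, a request can be checked and its payload inspected before it is sent. A minimal sketch; the exact failure behavior of validate() and the serialized shape returned by toJSON() are assumptions here:

const request = new ChatCompletionRequest("AI.Models.mixtral", 0.7, [], false);
request.addMessage("user", "Summarize this page.");

// validate() checks the current state (assumed to throw on invalid values);
// toJSON() returns the plain object that will be serialized for the API call.
request.validate();
console.log(JSON.stringify(request.toJSON(), null, 2));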
Usage
To use the aillm.js library, include it in your application and create an instance of the ChatCompletionRequest class. Then use the XWikiAiAPI methods to interact with the API and retrieve chat completions. A streaming example follows; a non-streaming sketch comes after it.
Example:
const completionRequest = new ChatCompletionRequest(
    "AI.Models.mixtral", // model
    0.5,                 // temperature
    [],                  // messages
    true                 // streaming
);

// Add messages to the request
completionRequest.addMessage("user", "Hello!");
completionRequest.addMessage("assistant", "Hi there!");

// Retrieve chat completions
XWikiAiAPI.getCompletions(completionRequest, (messageChunk) => {
    console.log("Received message chunk:", messageChunk);
})
    .then(() => {
        console.log("Chat completions retrieved successfully.");
    })
    .catch((error) => {
        console.error("Error retrieving chat completions:", error);
    });
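When stream is false, getCompletions resolves with the complete response instead of invoking the chunk callback. A minimal non-streaming sketch; the exact shape of the resolved response is an assumption here:

const syncRequest = new ChatCompletionRequest(
    "AI.Models.mixtral", // model
    0.5,                 // temperature
    [],                  // messages
    false                // streaming disabled
);
syncRequest.addMessage("user", "Hello!");

// The chunk callback is only used in streaming mode, so it is omitted here.
XWikiAiAPI.getCompletions(syncRequest)
    .then((response) => {
        // Complete response; its exact shape is not specified by this document.
        console.log("Full response:", response);
    })
    .catch((error) => {
        console.error("Error retrieving chat completions:", error);
    });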
That's a basic overview of the aillm.js library and its usage. It provides a convenient way to interact with the XWiki AI Chat API and retrieve chat completions from an AI language model.
Dependencies
Dependencies for this extension (org.xwiki.contrib.llm:application-ai-llm-chat-webjar 0.6.2):
- org.webjars:requirejs 2.3.6
- org.xwiki.platform:xwiki-platform-webjars-api 16.2.0
- org.xwiki.platform:xwiki-platform-job-webjar 16.2.0