Tutorial Part 2 - Progressive rendering

Let's look at a slightly more complex example, which uses the <Inline> component to progressively render the result of the LLM invocation as it is generated. This allows us to take the current result generated by the LLM and pass it into the next invocation of the LLM.

```tsx
import * as AI from 'ai-jsx';
import { Inline } from 'ai-jsx/core/inline';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
import { showInspector } from 'ai-jsx/core/inspector';

const app = (
  <Inline>
    <ChatCompletion>
      <UserMessage>Come up with the name of a mythical forest animal.</UserMessage>
    </ChatCompletion>
    {'\n\n'}
    {(conversation) => (
      <ChatCompletion>
        <UserMessage>
          Imagine a mythical forest animal called a "{conversation}". Tell me more about it.
        </UserMessage>
      </ChatCompletion>
    )}
    {'\n\n'}
    {(conversation) => (
      <ChatCompletion>
        <UserMessage>Now write a poem about this animal: {conversation}</UserMessage>
      </ChatCompletion>
    )}
  </Inline>
);

showInspector(app);
```

As before, we use <ChatCompletion> to invoke the LLM, but this time we call it three times: first with a fixed prompt, then with a prompt that includes the result of the first invocation, and finally with a prompt that includes the results of the first two invocations.

The <Inline> component renders each of its children in sequence:

1. The first child is the initial <ChatCompletion> with the static prompt.
2. The second child is a string containing two newline characters.
3. The third child is a function that takes a single argument, conversation, and returns a <ChatCompletion> component that invokes the LLM again. The conversation argument contains the rendered output of the <Inline> component up to this point.
4. The fourth child is another string containing two newline characters.
5. The fifth child is another function that takes the current state of the rendered output and returns a <ChatCompletion> component that invokes the LLM a third time.

In this way, you can build up a response consisting of multiple LLM invocations.
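To make the threading behavior concrete, here is a conceptual sketch in plain TypeScript (this is not AI.JSX's actual implementation): a hypothetical `inline` helper that appends string children as-is and calls function children with everything rendered so far, the same way <Inline> passes `conversation` forward. The `fakeCompletion` function is a stand-in for a real LLM call.

```typescript
// A child is either a literal string or a function of the output so far.
type InlineChild = string | ((conversation: string) => Promise<string>);

// Render children in order, threading the accumulated output into each
// function child -- a sketch of the <Inline> pattern.
async function inline(children: InlineChild[]): Promise<string> {
  let conversation = '';
  for (const child of children) {
    const chunk = typeof child === 'string' ? child : await child(conversation);
    conversation += chunk;
  }
  return conversation;
}

// Hypothetical stand-in for a ChatCompletion invocation.
async function fakeCompletion(prompt: string): Promise<string> {
  return `[completion for: ${prompt}]`;
}

async function main() {
  const result = await inline([
    () => fakeCompletion('Come up with the name of a mythical forest animal.'),
    '\n\n',
    (conversation) => fakeCompletion(`Tell me more about "${conversation}".`),
  ]);
  console.log(result);
}
main();
```

The key design point is that each function child sees the concatenation of all previously rendered children, which is what lets a later prompt quote the animal name produced by an earlier completion.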

The Inspector

The other change in this example is the use of showInspector to render the application, rather than simply printing a string with console.log. showInspector starts a live text display in the terminal window that shows the current state of the application as it is rendered. This can help when debugging AI.JSX applications, since you are not relying only on the final result of the rendering. While the inspector is running, you can use the left and right arrow keys to step back and forth through the rendering history and see the render tree at each step.