Dynamic Workflow Creation With Inngest and Sanity

While building an Inngest function, a prompt chain was constructed in stages, with the system prompt growing at each step before the result was formatted as JSON. Because the chain was hard-coded as strings in the source, every change required a code check-in.
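As a rough sketch, a hard-coded chain like this might look as follows, assuming Inngest v3 and the OpenAI Node SDK; the event name, prompts, and step names are illustrative, not the author's actual code:

```ts
// A minimal sketch of the hard-coded approach, assuming Inngest v3 and the
// OpenAI Node SDK. Event name, prompts, and step names are illustrative.
import { Inngest } from "inngest";
import OpenAI from "openai";

const inngest = new Inngest({ id: "writer-app" });
const openai = new OpenAI();

export const writer = inngest.createFunction(
  { id: "writer" },
  { event: "video/transcript.ready" },
  async ({ event, step }) => {
    // The system prompt grows at each stage of the chain.
    let systemPrompt = "You are a technical writer.";

    const draft = await step.run("draft", async () => {
      const res = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          { role: "system", content: systemPrompt },
          { role: "user", content: event.data.transcript },
        ],
      });
      return res.choices[0].message.content ?? "";
    });

    systemPrompt += " Now edit the draft for clarity and concision.";
    // ...more hard-coded steps follow, and the result is formatted as JSON.
    return { draft };
  }
);
```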

To make the process more dynamic, the chain was moved into Sanity, a content collaboration platform. In Sanity, a workflow was created along with a series of actions. Each action is of type 'prompt', which defines a title, the model to use, the role of the prompt (system, user, or assistant), and an optional name.
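A Sanity schema for such a workflow could be sketched like this; the field names are assumptions based on the description above, not the author's exact schema:

```ts
// Hypothetical Sanity v3 schema for a workflow and its prompt actions.
import { defineType, defineField, defineArrayMember } from "sanity";

export const promptAction = defineType({
  name: "prompt",
  title: "Prompt",
  type: "object",
  fields: [
    defineField({ name: "title", title: "Title", type: "string" }),
    defineField({ name: "model", title: "Model", type: "string" }),
    defineField({
      name: "role",
      title: "Role",
      type: "string",
      options: { list: ["system", "user", "assistant"] },
    }),
    // The name is optional and not set on many of the steps.
    defineField({ name: "name", title: "Name", type: "string" }),
    defineField({ name: "content", title: "Content", type: "text" }),
  ],
});

export const workflow = defineType({
  name: "workflow",
  title: "Workflow",
  type: "document",
  fields: [
    defineField({ name: "title", title: "Title", type: "string" }),
    defineField({
      name: "actions",
      title: "Actions",
      type: "array",
      of: [defineArrayMember({ type: "prompt" })],
    }),
  ],
});
```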

Once these actions are set up in Sanity, they can be exercised from the Inngest development server for testing and review: the function loads the workflow from Sanity and then grabs the actions from that workflow.
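Loading the workflow and its actions might be done with a GROQ query along these lines (the `trigger` filter and field names are hypothetical):

```ts
// Hypothetical loading step: fetch the workflow document and its actions
// from Sanity with GROQ. The "trigger" field and query shape are assumptions.
import { createClient } from "@sanity/client";

const sanity = createClient({
  projectId: process.env.SANITY_PROJECT_ID!,
  dataset: "production",
  apiVersion: "2024-01-01",
  useCdn: false,
});

const workflow = await sanity.fetch(
  `*[_type == "workflow" && trigger == $trigger][0]{ title, actions[] }`,
  { trigger: "writer" }
);

// Guard against a missing workflow or an empty action list.
const actions = workflow?.actions ?? [];
```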

The actions are processed in a loop; a switch statement dispatches on the action type, and currently only 'prompt' actions are handled. The growing array of prompts is sent to OpenAI as a single conversation, giving the chain a built-in memory. Non-system prompts receive input: either the content of the previous response, so the steps build on each other, or the incoming input itself.
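The loop can be reconstructed roughly as follows: a switch dispatches on the action type, only 'prompt' is handled, and each reply is pushed back onto the message array so later prompts see the whole conversation. This is a sketch, not the author's source:

```ts
// A rough reconstruction of the action loop. Only "prompt" actions are
// handled; every reply is appended so later prompts build on earlier ones.
import OpenAI from "openai";

type PromptAction = {
  _type: "prompt";
  model: string;
  role: "system" | "user" | "assistant";
  content: string;
  name?: string; // optional, per the Sanity action
};

export async function runActions(actions: PromptAction[]) {
  const openai = new OpenAI();
  const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [];

  for (const action of actions) {
    switch (action._type) {
      case "prompt": {
        messages.push({ role: action.role, content: action.content });
        const res = await openai.chat.completions.create({
          model: action.model,
          messages, // the whole conversation so far: built-in memory
        });
        // Feed the reply back in so the next step builds on it.
        messages.push({
          role: "assistant",
          content: res.choices[0].message.content ?? "",
        });
        break;
      }
      // Other action types could be switched on here later.
    }
  }
  return messages;
}
```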

In this case the incoming input is the transcript of a video. Liquid, the templating language, parses and renders a small template that injects the transcript into the prompt. Each call returns a chat message, known as a 'chat completion request message'.
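With the liquidjs package, injecting the transcript into a prompt template takes only a couple of calls; the template text here is made up for illustration:

```ts
// A minimal liquidjs sketch: the video transcript is exposed to the prompt
// template as `input`. The template text is invented for illustration.
import { Liquid } from "liquidjs";

const engine = new Liquid();

const template = "Summarize the following transcript:\n\n{{ input }}";
const transcript = "…the raw transcript text from the video…";

const rendered = await engine.parseAndRender(template, { input: transcript });
console.log(rendered); // the prompt with the transcript substituted in
```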

This chat completion message is simply an object with a role (here, system) and the content that was sent back. The responses accumulate across steps and are bundled into the final output.
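The returned object and the bundled final output can be described with a couple of types; `ChatCompletionRequestMessage` matches the name used above, while `WriterResult` is purely an illustrative name:

```ts
// The shape of what each step returns and of the bundled final output.
type ChatCompletionRequestMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

type WriterResult = {
  messages: ChatCompletionRequestMessage[]; // the built-up conversation
  title: string;                            // e.g. the generated post title
  content: string;                          // the final edited body
};
```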

What is eventually produced is a piece of content with a title. Because the chain lives in Sanity, it can be edited dynamically: new steps can be added, system prompts changed, or entirely new chains created for different workflows. This makes customizing workflows more straightforward and flexible.

Transcript

[00:00] So this is an Inngest function, and it gets invoked. Previously I was working with these prompts, right? I have the strings here in the code, and you can kind of go through and follow the chain and come down. At each step I'm adding to that, and I'm increasing the system prompt as I go along. Finally I format it, and it's JSON, and then I'm able to return that and the event's completed. And what I was thinking was, wow, it sure would be cool if all this wasn't hard-coded, if all this didn't require a code check-in to complete. So to accomplish that, what I did was move this over into Sanity, and I'm able to create a workflow, and then I can add a series of actions to that workflow, and then each of these actions is of type prompt.

[01:00] So this is a prompt action. In there I can say what's the title of that, what model I want to use, what's the role of that prompt. So we have our system prompt, or a user, or the assistant. I can give a name; this is optional, and I'm not doing that on many of the steps. But then what happens is I come back over here, or let's go into Inngest.

[01:28] This is the development server, and what we can do inside of the development server is actually come in here. I've been testing it, so I can hit replay, and this will kick it off over here so I can start to see my steps as they happen. And what we get here is: the function started, right? Then it loads the workflow from Sanity, so that's this step right here, load the workflow. And then what it does is actually grab the actions from that workflow, if they exist, if there are any actions, and start looping through. So it has a switch statement here, and currently I only have prompts activated, so that's the only type that's available. And then basically it just goes through and builds the prompt, and the prompt is an array of prompts, right? So those get passed in to OpenAI as a conversation, so it kind of has this built-in memory. And then it skips to non-system messages.

[02:27] I get some input, which is either the content of the last message, because I want these to build on each other, or it uses the input, so in this case the input that's coming in is a transcript from a video. I'm using Liquid, so Liquid parses and renders that, and you can see inside of Sanity here where I'm coming down, I'm just using a little Liquid template here. So it'll put that input into the prompt, pass those prompts in, and then return. What actually gets returned is a chat message.

[03:05] So a chat completion request message, which is just simply an object with a role, in this case it'll be system, and then the content, which is the content that it sent back. And as we watch this get created over here, you can see it goes through the writer, the function is completed, and we get our final output. And what I can see is: here's the message where it's the content in the system, and then the response, and the response again. So it builds that up, and then the final, oops, the final form here is actually the content and title that's produced.

[03:49] So "dive into AWS's command line tool using MB," and then it gives you a title. And these are usually all right. A lot of times, you know, they'll require human editing. But you can go in and change the chain. So if I wanted to add another editor step, that's not a problem.

[04:06] Or another writing step, so I could chain it out as far as I want, or dial it back, or create entirely new chains for different workflows that I wanted to have the writer workflow work on for me. Or if I wanted to change system prompts or whatever, all that becomes relatively simple to do.